Errors in Polynomial fitting problem on CUDA

https://www.devze.com 2023-02-19 21:59 (source: network)
I tried to use CUDA to run some simple loops on the device, but it seems that CUDA is hard to understand: I get 0 from every function call when I mix a CUDA kernel with normal C code. The original code:

double evaluate(int D, double tmp[], long *nfeval)
{
/* polynomial fitting problem */
   int   i, j;
   int const M=60;
   double px, x=-1, dx=(double)M, result=0;

   (*nfeval)++;
   dx = 2/dx;
   for (i=0;i<=M;i++)
   {
      px = tmp[0];
      for (j=1;j<D;j++)
      {
         px = x*px + tmp[j];
      }
      if (px<-1 || px>1) result+=(1-px)*(1-px);
      x+=dx;
   }
   px = tmp[0];
   for (j=1;j<D;j++) px=1.2*px+tmp[j];
   px = px-72.661;
   if (px<0) result+=px*px;
   px = tmp[0];
   for (j=1;j<D;j++) px=-1.2*px+tmp[j];
   px =px-72.661;
   if (px<0) result+=px*px;
   return result;
}

I wanted to do first for loop on CUDA:

double evaluate_gpu(int D, double tmp[], long *nfeval)
{
/* polynomial fitting problem */
   int    j;
   int const M=60;
   double px, dx=(double)M, result=0;
   (*nfeval)++;
   dx = 2/dx;
   int N = M;
   double *device_tmp = NULL; 
   size_t size_tmp = sizeof tmp;    
   cudaMalloc((double **) &device_tmp, size_tmp);   
   cudaMemcpy(device_tmp, tmp, size_tmp, cudaMemcpyHostToDevice);
   int block_size = 4;
   int n_blocks = N/block_size + (N%block_size == 0 ? 0:1);
   cEvaluate <<< n_blocks, block_size >>> (device_tmp, result, D);
 // cudaMemcpy(result, result, size_result, cudaMemcpyDeviceToHost);
   px = tmp[0];
   for (j=1;j<D;j++) px=1.2*px+tmp[j];
   px = px-72.661;
   if (px<0) result+=px*px;
   px = tmp[0];
   for (j=1;j<D;j++) px=-1.2*px+tmp[j];
   px =px-72.661;
   if (px<0) result+=px*px;
   return result;

}

Where the device function looks like:

__global__ void cEvaluate_temp(double* tmp, double result, int D)
{  
  int M =60;
  double px; 
  double x=-1;
  double dx=(double)M ;
  int j;
  dx = 2/dx;
  int idx = blockIdx.x * blockDim.x + threadIdx.x;  
  if (idx < 60)   //<==>if (idx < M)
  {  
      px = tmp[0];
      for (j=1;j<D;j++)
      {
       px = x*px + tmp[j];
      }

      if (px<-1 || px>1) 
      { __syncthreads();
        result+=(1-px)*(1-px); //+=
      }  
       x+=dx;  
  }  
}

I know that I have not pinned down a single problem, but it seems that I have more than one.

I do not know when a variable has to be copied to the device explicitly and when it is copied 'automatically'. I am using CUDA 3.2 and there is a problem with emulation (I would like to use printf): when I build with make emu=1 there is no error from printf, but I also do not get any output.

Below is the simplest version of the device function I tested. Can anybody explain what happens to the result value when it is incremented in parallel? I think I should use device shared memory and synchronization to do something like "+=".

__global__ void cEvaluate(double* tmp, double result, int D)
{  
  int idx = blockIdx.x * blockDim.x + threadIdx.x;  
  if (idx < 60)   //<==>if (idx < M)
  {  
        result+=1;
        printf("res = %f ",result); //-deviceemu, make emu=1

  }  
} 


No, the variable result is not shared across multiple threads. (It is also passed to the kernel by value, so the host copy is never updated.)

What I would suggest is to keep an array of partial results in shared memory, one entry per thread, compute every value, and then reduce them to a single result.

#define blocksize 4   // must equal the block_size used at the launch site

__global__ void cEvaluate_temp(double* tmp, double* global_result, int D)
{
  const int M = 60;
  double px;
  double dx = 2.0 / (double)M;
  int j;
  int idx = blockIdx.x * blockDim.x + threadIdx.x;

  // one partial result per thread
  __shared__ double shared_result[blocksize];
  shared_result[threadIdx.x] = 0.0;

  if (idx <= M)
  {
    double x = -1.0 + idx * dx;   // each thread handles one sample point
    px = tmp[0];
    for (j = 1; j < D; j++)
      px = x * px + tmp[j];

    if (px < -1 || px > 1)
      shared_result[threadIdx.x] = (1 - px) * (1 - px);
  }

  __syncthreads();   // every thread must reach this, so no early return above

  // thread 0 of each block reduces its block's partials;
  // the host sums the per-block values after copying them back
  if (threadIdx.x == 0)
  {
    double total_result = 0.0;
    for (j = 0; j < blockDim.x; j++)
      total_result += shared_result[j];
    global_result[blockIdx.x] = total_result;
  }
}

Also, you need a cudaMemcpy after the kernel invocation to bring the result back to the host: kernel launches are asynchronous, and cudaMemcpy doubles as a synchronization point.

Also, check the return value of every CUDA API call.
