I've written a very simple kernel recently:
__device__ uchar elem(const Matrix m, int row, int col) {
    // Clamp the coordinates to the image borders
    if (row < 0) {
        row = 0;
    } else if (row > m.rows - 1) {
        row = m.rows - 1;
    }
    if (col < 0) {
        col = 0;
    } else if (col > m.cols - 1) {
        col = m.cols - 1;
    }
    return *((uchar*)(m.data + row * m.step + col));
}
/**
 * Each thread calculates the value of one pixel of the image 'res'.
 */
__global__ void resizeKernel(const Matrix img, Matrix res) {
    int row = threadIdx.y + blockIdx.y * blockDim.y;
    int col = threadIdx.x + blockIdx.x * blockDim.x;
    if (row < res.rows && col < res.cols) {
        uchar* e = res.data + row * res.step + col;
        // Weighted 3x3 average of the source neighbourhood:
        // centre 1/4, edge neighbours 1/8 each, corners 1/16 each
        *e = (elem(img, 2*row, 2*col) >> 2) +
             ((elem(img, 2*row, 2*col-1) + elem(img, 2*row, 2*col+1)
               + elem(img, 2*row-1, 2*col) + elem(img, 2*row+1, 2*col)) >> 3) +
             ((elem(img, 2*row-1, 2*col-1) + elem(img, 2*row+1, 2*col+1)
               + elem(img, 2*row+1, 2*col-1) + elem(img, 2*row-1, 2*col+1)) >> 4);
    }
}
Basically, it computes each pixel of a reduced-size image as a weighted average of pixels from a bigger image; the actual work happens inside the 'if' in resizeKernel.
My first tests were not working properly, so in order to find out what was going on I started commenting out some lines of this sum. Once I had reduced the number of operations, it started working.
My theory was that it might have something to do with the memory available to store the intermediate results of the expression. Sure enough, after reducing the number of threads per block, it started working perfectly, with no need to reduce the number of operations.
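For reference, here is a minimal host-side launch sketch. The Matrix struct shown is hypothetical: its fields (rows, cols, step as the row pitch in bytes, data as a device pointer) are inferred from how the kernel accesses them.

#include <cuda_runtime.h>

typedef unsigned char uchar;

// Hypothetical layout, inferred from the kernel's use of m.rows, m.cols,
// m.step (row pitch in bytes) and m.data.
struct Matrix {
    int rows;
    int cols;
    size_t step;   // bytes per row
    uchar* data;   // device pointer
};

// Cover every pixel of 'res' with a 2D grid of 2D blocks.
void launchResize(const Matrix& img, Matrix& res) {
    dim3 block(16, 16);  // 256 threads per block; shrinking this reduces per-block resource demand
    dim3 grid((res.cols + block.x - 1) / block.x,
              (res.rows + block.y - 1) / block.y);
    resizeKernel<<<grid, block>>>(img, res);
}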
Based on this experience, I'd like to know how I can better estimate the number of threads per block so as to avoid memory requirements beyond what is available. How could I know how much memory the operations above would need? (And while we're at it, what kind of memory is it: cache, shared memory, etc.?)
Thanks!
It is most probably registers. You can find the per-thread register consumption by adding the -Xptxas="-v" option to the nvcc call that compiles the kernel. The assembler will report the number of registers per thread, along with the static shared memory, local memory, and constant memory used by the compiled code.
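For example, a compile line might look like this (the file name is assumed, and the figures in the output below are purely illustrative of a typical ptxas report, not measured from this kernel):

nvcc -arch=sm_20 -Xptxas="-v" -c resize.cu

ptxas info    : Compiling entry function '_Z12resizeKernel6MatrixS_' for 'sm_20'
ptxas info    : Used 10 registers, 80 bytes cmem[0]

Multiply the register count by the number of threads per block; if the product exceeds the registers available on a multiprocessor, the kernel fails to launch with "too many resources requested for launch", which matches the behaviour described in the question.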
NVIDIA makes an occupancy calculator spreadsheet (available here) into which you can plug the assembler's output to see the feasible range of block sizes and their effect on GPU occupancy. Chapter 3 of the CUDA programming guide also contains a detailed discussion of the concept of occupancy and how block size and kernel resource requirements interact.
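As a rough programmatic check (a sketch only; the per-thread register count here is an illustrative stand-in for whatever ptxas actually reports), you can compare a candidate block size against the device's register budget:

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    int regsPerThread = 10;          // illustrative: take this from ptxas -v
    int threadsPerBlock = 16 * 16;   // candidate block size
    printf("registers per block: %d available, %d required\n",
           prop.regsPerBlock, regsPerThread * threadsPerBlock);
    return 0;
}

If the required figure exceeds the available one, shrink the block size until it fits.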