CUDA reduction of many small, unequally sized arrays

I am wondering if anyone could suggest the best approach to computing the mean / standard deviation of a large number of relatively small but differently sized arrays in CUDA?

The parallel reduction example in the SDK works on a single very large array and it seems the size is conveniently a multiple of the number of threads per block, but my case is rather different:

Conceptually, however, I have a large number of objects, each of which contains two components, upper and lower, and each of these components has an x and a y coordinate, i.e.

upper.x, lower.x, upper.y, lower.y

Each of these arrays is approximately 800 elements long, but the length varies between objects (not within an object), e.g.

Object1.lower.x = 1.1, 2.2, 3.3
Object1.lower.y = 4.4, 5.5, 6.6
Object1.upper.x = 7.7, 8.8, 9.9
Object1.upper.y = 1.1, 2.2, 3.3

Object2.lower.x = 1.0,  2.0,  3.0,  4.0, 5.0 
Object2.lower.y = 6.0,  7.0,  8.0,  9.0, 10.0
Object2.upper.x = 11.0, 12.0, 13.0, 14.0, 15.0 
Object2.upper.y = 16.0, 17.0, 18.0, 19.0, 20.0

Please note the above is just my way of representing the arrays; my data is not stored in C structs or anything like that, and it can be organised in whatever way I need. The point is that, for each array, the mean, standard deviation and eventually a histogram need to be computed, and within one particular object, ratios and differences between arrays need to be computed.

How should I go about sending this data to the GPU device and organising my thread-block hierarchy? One idea I had was to zero-pad all my arrays so they are the same length and have a group of blocks work on each object, but it seems there are all sorts of problems with that method, if it would work at all.

Thanks in advance


If I understood correctly, you want to reduce Object1.lower.x to one result, Object1.lower.y to another result and so on. For any given object there are four arrays to be reduced, all of equal length (for the object).

There are many possible approaches to this, one influencing factor would be the total number of objects in your system. I'll assume that number is large.

For best performance you want an optimal memory access pattern and you want to avoid divergence. Since the number of congruent arrays is four, if you take the naive approach of one array per thread, shown below, not only will you suffer from poor memory access, but the hardware will also need to check on each iteration which threads in the warp need to execute the loop; those that don't will be disabled, which can be inefficient (especially if one array is much longer than the others, for example).

for (int i = 0 ; i < myarraylength ; i++)
    sum += myarray[i];

Instead, if you get each warp to sum one array, then not only will it be more efficient, but your memory access pattern will also be much better since adjacent threads read adjacent elements [1].

for (int i = tidwithinwarp ; i < warparraylength ; i += warpsize)
{
    mysum += warparray[i];
}
mysum = warpreduce(mysum);
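
Here warpreduce() is just a placeholder for a warp-level reduction. As a minimal sketch, one way it could be implemented on more recent hardware (compute capability 3.0+ with CUDA 9+, using the __shfl_down_sync() intrinsic; the function name and signature are mine, not from any library):

// Possible warp-level reduction using shuffle intrinsics: each step halves the
// number of participating lanes, so after log2(32) = 5 steps lane 0 holds the
// sum of all 32 lanes.
__device__ float warpreduce(float val)
{
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;   // only lane 0 ends up with the full warp sum
}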

You should also take the alignment of the arrays into consideration, preferably aligning on a 64-byte boundary, although if you are developing for compute capability 1.2 or higher this is not quite as important as on older GPUs.

In this example you would launch four warps per block, i.e. 128 threads, and as many blocks as you have objects.
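
To make that concrete, below is a rough sketch of such a kernel, under the assumption that the four arrays of every object are packed back-to-back into one flat device buffer, with d_offsets giving the starting index of each array and d_lengths the per-object length (these names, and the kernel itself, are illustrative assumptions, not part of the original answer):

// Sketch: one block per object, four warps (128 threads) per block, one warp per
// array (lower.x, lower.y, upper.x, upper.y). The four arrays of object b are
// assumed to start at d_offsets[4 * b + w] in d_data and to share length
// d_lengths[b]. warpreduce() is the warp-level reduction sketched above.
__global__ void reduce_objects(const float * __restrict__ d_data,
                               const int   * __restrict__ d_offsets,
                               const int   * __restrict__ d_lengths,
                               float       * __restrict__ d_sums)
{
    const int warp = threadIdx.x / 32;   // which of the object's four arrays
    const int lane = threadIdx.x % 32;   // lane index within the warp
    const int obj  = blockIdx.x;         // one block per object

    const float *arr = d_data + d_offsets[4 * obj + warp];
    const int    len = d_lengths[obj];

    float sum = 0.0f;
    for (int i = lane; i < len; i += 32) // adjacent lanes read adjacent elements
        sum += arr[i];

    sum = warpreduce(sum);
    if (lane == 0) d_sums[4 * obj + warp] = sum;
}

This would be launched as reduce_objects<<<numObjects, 128>>>(d_data, d_offsets, d_lengths, d_sums); dividing each entry of d_sums by the corresponding length then gives the mean.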

[1] You do say that you can choose whatever memory arrangement you like; it can often be useful to interleave arrays so that array[0][0] is next to array[1][0], since this means adjacent threads can operate on adjacent arrays and get coalesced accesses. However, since the length of the arrays is not constant, this is probably complex and would require padding the shorter arrays.


As a follow-up to Tom's answer, I would like to mention that warp reduction can easily be implemented with CUB.

Here is a worked example:

#include <cub/cub.cuh>
#include <cuda.h>

#include "Utilities.cuh"

#include <iostream>

#define WARPSIZE    32
#define BLOCKSIZE   256

const int N = 1024;

/*************************/
/* WARP REDUCTION KERNEL */
/*************************/
__global__ void sum(const float * __restrict__ indata, float * __restrict__ outdata) {

    unsigned int tid = blockIdx.x * blockDim.x + threadIdx.x;

    unsigned int warp_id = threadIdx.x / WARPSIZE;

    // --- Specialize WarpReduce for type float. 
    typedef cub::WarpReduce<float, WARPSIZE> WarpReduce;

    // --- Allocate WarpReduce shared memory: one instance per warp in the block (BLOCKSIZE / WARPSIZE warps)
    __shared__ typename WarpReduce::TempStorage temp_storage[BLOCKSIZE / WARPSIZE];

    float result;
    if(tid < N) result = WarpReduce(temp_storage[warp_id]).Sum(indata[tid]);

    if(tid % WARPSIZE == 0) outdata[tid / WARPSIZE] = result;
}

/********/
/* MAIN */
/********/
int main() {

    // --- Allocate host-side space for the input data and the per-warp results
    float *h_data       = (float *)malloc(N * sizeof(float));
    float *h_result     = (float *)malloc((N / WARPSIZE) * sizeof(float));

    float *d_data;      gpuErrchk(cudaMalloc(&d_data, N * sizeof(float)));
    float *d_result;    gpuErrchk(cudaMalloc(&d_result, (N / WARPSIZE) * sizeof(float)));

    for (int i = 0; i < N; i++) h_data[i] = (float)i;

    gpuErrchk(cudaMemcpy(d_data, h_data, N * sizeof(float), cudaMemcpyHostToDevice));

    sum<<<iDivUp(N, BLOCKSIZE), BLOCKSIZE>>>(d_data, d_result);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());

    gpuErrchk(cudaMemcpy(h_result, d_result, (N / WARPSIZE) * sizeof(float), cudaMemcpyDeviceToHost));

    std::cout << "output: ";
    for(int i = 0; i < (N / WARPSIZE); i++) std::cout << h_result[i] << " ";
    std::cout << std::endl;

    gpuErrchk(cudaFree(d_data));
    gpuErrchk(cudaFree(d_result));

    return 0;
}

In this example, an array of length N is created and each element of the result is the sum of 32 consecutive input elements, so

result[0] = data[0] + ... + data[31];
result[1] = data[32] + ... + data[63];
....
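
To adapt this to the per-array mean and standard deviation asked about in the question, one option (my own sketch, not part of the CUB example) is to reduce the sum of squares alongside the sum and use var = E[x^2] - E[x]^2, e.g. with the shuffle-based warpreduce() sketched earlier:

// Sketch: per-warp mean and standard deviation for one array of length len,
// processed by a single warp as in the warp-per-array scheme above. Uses the
// identity var = E[x^2] - E[x]^2; a Welford-style update would be more robust
// numerically if the values are large.
__device__ void warp_mean_std(const float *arr, int len, int lane,
                              float *mean_out, float *std_out)
{
    float sum = 0.0f, sumsq = 0.0f;
    for (int i = lane; i < len; i += 32) {
        float x = arr[i];
        sum   += x;
        sumsq += x * x;
    }
    sum   = warpreduce(sum);
    sumsq = warpreduce(sumsq);
    if (lane == 0) {
        float mean = sum / len;
        *mean_out = mean;
        *std_out  = sqrtf(sumsq / len - mean * mean);
    }
}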
