
BLAS and CUBLAS


I'm wondering about NVIDIA's cuBLAS library. Does anybody have experience with it? For example, if I write a C program using BLAS, will I be able to replace the calls to BLAS with calls to cuBLAS? Or, even better, implement a mechanism which lets the user choose at runtime?

What if I use the BLAS library provided by Boost (uBLAS) with C++?


The answer by janneb is incorrect: cuBLAS is not a drop-in replacement for a CPU BLAS. It assumes data is already on the device, and the function signatures have an extra parameter to keep track of a cuBLAS context.
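For illustration, a minimal sketch of what that looks like with the cuBLAS v2 API; gpu_dgemm is a hypothetical wrapper, and dA, dB, dC are assumed to already be device pointers:

#include <cublas_v2.h>

/* Sketch only: the extra "handle" argument, the device pointers, and the
   scalars passed by pointer are what keep cuBLAS from being a drop-in
   replacement for a CPU BLAS dgemm. */
void gpu_dgemm(cublasHandle_t handle, int m, int n, int k,
               const double *dA, const double *dB, double *dC)
{
    const double alpha = 1.0, beta = 0.0;
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);
}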

However, coming in CUDA 6.0 is a new library called NVBLAS which provides exactly this "drop-in" functionality. It intercepts Level-3 BLAS calls (GEMM, TRSM, etc.) and automatically sends them to the GPU, effectively tiling the PCIe transfer with on-GPU computation.
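To sketch what "drop-in" means here: ordinary CPU BLAS code like the following needs no source changes. Assuming NVBLAS is interposed ahead of the CPU BLAS (e.g. by preloading libnvblas.so, with a config file naming the fallback CPU BLAS), the Level-3 call below would be intercepted and run on the GPU:

#include <cblas.h>

/* Unmodified CPU BLAS code. With NVBLAS preloaded (an assumption about the
   runtime setup, not a code change), this Level-3 call is intercepted and
   executed on the GPU; Level-1/2 calls fall through to the CPU BLAS. */
void square_multiply(int n, const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasColMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, A, n, B, n, 0.0, C, n);
}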

There is some information here: https://developer.nvidia.com/cublasxt, and CUDA 6.0 is available to CUDA registered developers today.

Full docs will be online once CUDA 6.0 is released to the general public.


CUBLAS does not wrap around BLAS. CUBLAS also accesses matrices in column-major ordering, as Fortran code and BLAS do.

I am more used to writing code in C, even for CUDA. Code written with CBLAS (which is a C wrapper for BLAS) can easily be changed into CUDA code. Be aware that Fortran codes that use BLAS are quite different from C/C++ codes that use CBLAS: Fortran and BLAS normally store matrices (and 2-D arrays in general) in column-major ordering, whereas C/C++ normally use row-major ordering. I normally handle this problem by saving the matrices in 1-D arrays and using #define to write macros that access element (i,j) of a matrix as:

/* macro to access A(i,j) in the row-wise array A[M*N] */
#define indrow(ii,jj,N) (((ii)-1)*(N)+(jj)-1) /* does not depend on the number of rows M */
/* macro to access A(i,j) in the col-wise array A[M*N] */
#define indcol(ii,jj,M) (((jj)-1)*(M)+(ii)-1) /* does not depend on the number of columns N */
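A small usage sketch, assuming the two macros above are in scope (the 1-based ii,jj convention mirrors the Fortran habit the macros encode):

#include <stdio.h>

int main(void)
{
    enum { M = 2, N = 3 };
    double A[M * N];

    /* fill A(i,j) = 10*i + j in the row-wise 1-D array */
    for (int i = 1; i <= M; ++i)
        for (int j = 1; j <= N; ++j)
            A[indrow(i, j, N)] = 10 * i + j;

    printf("%g\n", A[indrow(2, 3, N)]); /* prints 23 */
    return 0;
}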

The CBLAS library has well-organized parameters and conventions (enum constants) to tell each function the ordering of the matrix. Beware that the storage of matrices also varies: a row-wise banded matrix is not stored the same way as a column-wise banded matrix.
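As a sketch of those conventions: the ordering is declared with the first argument, and only the leading dimensions encode the layout, so no transposing or copying is needed:

#include <cblas.h>

/* C = A*B on row-major C arrays. With CblasRowMajor the leading dimension
   is the row length (number of columns); with CblasColMajor it would be
   the column length instead. */
void matmul_rowmajor(int m, int n, int k,
                     const double *A, const double *B, double *C)
{
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, A, k, B, n, 0.0, C, n);
}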

I don't think there is a mechanism to let the user choose between BLAS and CUBLAS without writing the code twice. CUBLAS also has, on most function calls, a "handle" variable that does not appear in BLAS. I thought of using #define to change the name at each function call, but this might not work.


I've been porting BLAS code to CUBLAS. The BLAS library I use is ATLAS, so some of what I say may be correct only for that choice of BLAS library.

ATLAS BLAS requires you to specify whether you are using column-major or row-major ordering; I chose column-major ordering since I was using CLAPACK, which uses column-major ordering. LAPACKE, on the other hand, would use row-major ordering. CUBLAS is column-major. You may need to adjust accordingly.

Even if ordering is not an issue, porting to CUBLAS was by no means a drop-in replacement. The largest issue is that you must move the data onto and off of the GPU's memory space. That memory is set up using cudaMalloc() and released with cudaFree(), which act as one might expect. You move data into GPU memory using cudaMemcpy(). The time taken to do this will be a large determining factor in whether it's worthwhile to move from the CPU to the GPU.
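A sketch of that round trip (error checking omitted for brevity; the two copies across the PCIe bus are the cost that decides whether the GPU pays off):

#include <cuda_runtime.h>
#include <cublas_v2.h>

/* Move n-by-n host matrices to the device, run one GEMM, copy the result
   back, and release the device memory. */
void dgemm_on_gpu(int n, const double *hA, const double *hB, double *hC)
{
    size_t bytes = (size_t)n * n * sizeof(double);
    double *dA, *dB, *dC;
    const double alpha = 1.0, beta = 0.0;
    cublasHandle_t handle;

    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    cublasCreate(&handle);
    cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);
    cublasDestroy(handle);

    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dA);
    cudaFree(dB);
    cudaFree(dC);
}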

Once that's done, however, the calls are fairly similar: CblasNoTrans becomes CUBLAS_OP_N and CblasTrans becomes CUBLAS_OP_T. If your BLAS library allows you to pass scalars by value (as ATLAS does), you will have to convert that to pass-by-reference (as is normal for FORTRAN).
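A hypothetical helper for that mapping (not part of either library) keeps ported call sites readable:

#include <cblas.h>
#include <cublas_v2.h>

/* Hypothetical translation of the CBLAS transpose flag into the
   corresponding cuBLAS operation. */
static cublasOperation_t to_cublas_op(enum CBLAS_TRANSPOSE t)
{
    switch (t) {
    case CblasNoTrans:   return CUBLAS_OP_N;
    case CblasTrans:     return CUBLAS_OP_T;
    case CblasConjTrans: return CUBLAS_OP_C;
    default:             return CUBLAS_OP_N;
    }
}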

Given this, any switch that allows a choice of CPU/GPU would most easily sit at a higher level than within the function using BLAS. In my case, I have CPU and GPU variants of the algorithm and choose between them at a higher level depending on the size of the problem.
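A sketch of what that higher-level switch might look like; the names and the crossover threshold are hypothetical and would need tuning per machine:

/* Pick the CPU (BLAS) or GPU (CUBLAS) variant of the algorithm at runtime,
   so call sites never touch cuBLAS directly. Both variants are assumed to
   be implemented elsewhere. */
typedef void (*solver_fn)(int n, const double *A, const double *B, double *C);

void solve_cpu(int n, const double *A, const double *B, double *C); /* BLAS */
void solve_gpu(int n, const double *A, const double *B, double *C); /* cuBLAS */

solver_fn choose_solver(int n)
{
    enum { GPU_THRESHOLD = 512 }; /* assumed crossover point */
    return (n >= GPU_THRESHOLD) ? solve_gpu : solve_cpu;
}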
