I'm currently implementing an algorithm that does a lot of linear algebra on small matrices and vectors. The code is fast, but I'm wondering if it would make sense to implement it on a GPGPU instead of the CPU.
I'm able to store most of the matrices and vectors in GPU memory as a preprocessing step, and I have profiled the multiplication algorithms; they are, of course, way faster on the GPU.
But now for my real question: how do I determine the overhead of making calls to the GPU from the CPU? How many cycles am I losing waiting for my code to be executed, and so on?
I hope someone has some input.
It is hard to determine the exact "overhead" of calling OpenCL, because operations on the GPU can run in parallel with whatever else is running on the CPU. Depending on your application, you can, for example, transfer a chunk of data to the GPU while doing some preprocessing on the CPU for the following chunk. Similarly, while code is executing on the GPU, you can be doing prep work on the CPU on data needed in the future.
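As a rough illustration of that overlap (a sketch only, not drop-in code): the function below assumes a command queue and a device buffer were created elsewhere, the names upload_and_overlap and d_chunk are made up, and error checking is omitted.

#include <CL/cl.h>

// Sketch: overlap an upload with CPU work by using a non-blocking write.
// The call returns immediately and the DMA transfer proceeds while the
// CPU keeps working on the next chunk.
void upload_and_overlap(cl_command_queue queue, cl_mem d_chunk,
                        const void *host_chunk, size_t nbytes)
{
    clEnqueueWriteBuffer(queue, d_chunk, CL_FALSE, 0, nbytes,
                         host_chunk, 0, NULL, NULL);

    // ... prepare the next chunk on the CPU here ...

    clFinish(queue);  // ensure the transfer completed before host_chunk is reused
}

Note that with a non-blocking write the host buffer must stay valid and unmodified until the transfer has completed, which the clFinish at the end guarantees here.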
Transfers to the GPU are done via DMA, which is generally very fast. In my experience, I was able to transfer around 4 MB of data to the GPU in about 4 milliseconds (modern GPU, modern motherboard) while processing the data that had been sent previously. From that, it seems safe to say you can upload and download on the order of 1 GB of data per second to the GPU and do some processing on it.
In your case, either the GPU or the CPU side will be the bottleneck: the CPU side, for instance, if it cannot feed, say, 1 GB of prepared data to the GPU per second. This may very well be limited by your disk I/O.
To test your GPU path, set up a bunch of buffers of data ready to process. Keep re-sending that data to the GPU, processing it, and downloading the results (which you will discard). Measure the throughput and compare it to the throughput of the CPU version of your application; a rough measurement loop is sketched below.
Don't measure just the GPU processing part, because transfers and processing on the GPU compete for GPU memory controller time and affect each other's pace.
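Something along these lines (a sketch under assumptions: the OpenCL context, command queue, kernel with its arguments, and the device buffers d_in/d_out are hypothetical names created elsewhere; error checking is omitted):

#include <CL/cl.h>
#include <chrono>
#include <iostream>
#include <vector>

// Measure end-to-end throughput: upload, process, download in a loop,
// discarding the results. Transfers and kernel execution are timed
// together on purpose, since they compete for the memory controller.
double measure_throughput(cl_command_queue queue, cl_kernel kernel,
                          cl_mem d_in, cl_mem d_out,
                          size_t nbytes, size_t work_items, int iterations)
{
    std::vector<char> host_in(nbytes), host_out(nbytes);
    clFinish(queue);  // make sure no earlier work is counted

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) {
        clEnqueueWriteBuffer(queue, d_in, CL_FALSE, 0, nbytes,
                             host_in.data(), 0, NULL, NULL);
        size_t global = work_items;
        clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL,
                               0, NULL, NULL);
        clEnqueueReadBuffer(queue, d_out, CL_TRUE, 0, nbytes,  // blocking read
                            host_out.data(), 0, NULL, NULL);
    }
    auto t1 = std::chrono::steady_clock::now();

    double seconds = std::chrono::duration<double>(t1 - t0).count();
    double gb_moved = 2.0 * nbytes * iterations / (1024.0 * 1024.0 * 1024.0);  // up + down
    std::cout << "throughput: " << gb_moved / seconds << " GB/s\n";
    return gb_moved / seconds;
}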
Also, if what you need is very good response time on small pieces of data rather than high throughput, you probably won't benefit from going through the GPU, because it adds a bit of latency to your processing.
The important thing to consider here is the time it takes to copy the data to the GPU and back. Even if the GPU implementation is much faster, the time spent doing transfers may wipe out any advantage.
Furthermore, if you are serious about the accuracy of your algebra, consider that the operations you need may not be available natively on the GPU in double precision.
Given that you say your matrices and vectors are small, I suggest checking out SIMD optimisations that may improve the performance of your algorithm on the CPU; a minimal example is sketched below.
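For instance, a 4x4 single-precision matrix-vector product can be written with SSE intrinsics roughly like this (a sketch only; mat4_mul_vec4 is a made-up name and a column-major layout is assumed):

#include <xmmintrin.h>  // SSE intrinsics

// Multiply a 4x4 column-major matrix by a 4-component vector using SSE.
// Unaligned loads/stores are used so no particular alignment is required.
void mat4_mul_vec4(const float *m /* 16 floats, column-major */,
                   const float *v /* 4 floats */, float *out /* 4 floats */)
{
    __m128 col0 = _mm_loadu_ps(m + 0);
    __m128 col1 = _mm_loadu_ps(m + 4);
    __m128 col2 = _mm_loadu_ps(m + 8);
    __m128 col3 = _mm_loadu_ps(m + 12);

    // result = col0*v[0] + col1*v[1] + col2*v[2] + col3*v[3]
    __m128 r = _mm_mul_ps(col0, _mm_set1_ps(v[0]));
    r = _mm_add_ps(r, _mm_mul_ps(col1, _mm_set1_ps(v[1])));
    r = _mm_add_ps(r, _mm_mul_ps(col2, _mm_set1_ps(v[2])));
    r = _mm_add_ps(r, _mm_mul_ps(col3, _mm_set1_ps(v[3])));

    _mm_storeu_ps(out, r);
}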
You can use clEvent objects to track the time the actual computations take (latency); see the sketch below. If you really do mean CPU cycles, use RDTSC (or its intrinsic, __rdtsc in MSVC) to do nanosecond-precision timing of the actual API calls. The RDTSC instruction (read time stamp counter) returns the number of clock cycles the CPU has completed since power-up.
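A minimal sketch of event-based timing (assuming the command queue was created with CL_QUEUE_PROFILING_ENABLE and the kernel arguments are already set; kernel_time_ms is a hypothetical helper name):

#include <CL/cl.h>

// Time a single kernel launch on the device using OpenCL profiling events.
double kernel_time_ms(cl_command_queue queue, cl_kernel kernel, size_t global)
{
    cl_event evt;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, &evt);
    clWaitForEvents(1, &evt);

    cl_ulong start = 0, end = 0;
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START,
                            sizeof(start), &start, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,
                            sizeof(end), &end, NULL);
    clReleaseEvent(evt);

    return (end - start) * 1e-6;  // profiling values are in nanoseconds
}

Note that this measures only execution on the device; host-side queueing and driver overhead would still have to be measured with a host-side timer such as RDTSC.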
If it really is that easy to upload, then you can batch up calls and perhaps add a dimension to your NDRange to do multiple computations in one call. Of course, the details depend on your kernel implementation.
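On the host side, batching might look roughly like this (a sketch; launch_batched is a made-up name, and how the kernel interprets the extra dimension is up to your implementation):

#include <CL/cl.h>

// One 2D NDRange launch instead of many 1D launches: dimension 0 covers
// the work-items of a single small problem, dimension 1 indexes which of
// the 'batch' problems a work-item belongs to (error checking omitted).
void launch_batched(cl_command_queue queue, cl_kernel kernel,
                    size_t work_per_problem, size_t batch)
{
    size_t global[2] = { work_per_problem, batch };
    clEnqueueNDRangeKernel(queue, kernel, 2, NULL, global, NULL, 0, NULL, NULL);
}

Inside the kernel, get_global_id(1) would then tell each work-item which problem instance it should operate on.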
I suggest using the following to measure CPU time (note that clock() returns processor time in implementation-defined clock ticks, scaled by CLOCKS_PER_SEC, not raw cycles):
#include <ctime>     // clock(), CLOCKS_PER_SEC
#include <iostream>

// ...
clock_t start, end;
start = clock();
// do stuff...
end = clock();
std::cout << "CPU time used: "
          << double(end - start) / CLOCKS_PER_SEC << " s\n";