Is there a way to determine the maximum size of thrust::device_vector<T>
that you can safely allocate?
There isn't a straightforward way that I am aware of. My usual approach has been to do something like this:
const size_t MB = 1 << 20;

size_t reserved, total;
cudaMemGetInfo( &reserved, &total );   // "reserved" starts out as the free memory

void *pool = 0;
char fail = 0;
while( cudaMalloc( (void**)&pool, reserved ) != cudaSuccess )
{
    reserved -= MB;                    // back off one page at a time
    if( reserved < MB )
    {
        fail = 1;                      // even a single page could not be allocated
        break;
    }
}
which starts with the total free memory reported by cudaMemGetInfo, then decrements it by a "reasonable" size (as best as I could tell in the GT200 era, the GPU MMU has a couple of different page sizes, with 1MB being the largest). The loop continues until you either get an allocation, or memory is so fragmented or exhausted that even a single page fails. Not very pretty, but it seems to work 99.999% of the time.
Use cudaMemGetInfo; it is documented in the CUDA Runtime API reference.
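As a rough illustration of how cudaMemGetInfo can be used to bound the element count, here is a small sketch; note that the real maximum is usually lower than free_bytes / sizeof(T) because of fragmentation and allocations made by the CUDA context itself.

#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo( &free_bytes, &total_bytes );

    // Upper bound on elements for a thrust::device_vector<float>;
    // treat it as optimistic rather than guaranteed.
    size_t max_floats = free_bytes / sizeof(float);
    std::printf( "free: %zu MB, total: %zu MB, ~max floats: %zu\n",
                 free_bytes >> 20, total_bytes >> 20, max_floats );
    return 0;
}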