The new MacBook Pros come with two graphics adapters, the Intel HD Graphics and the NVIDIA GeForce GT 330M. OS X switches back and forth between them depending on the workload, detection of an external monitor, or activation of Rosetta.
I want to get my feet wet with CUDA programming, and unfortunately the CUDA SDK doesn't seem to handle this back-and-forth switching. When the Intel adapter is active, no CUDA device is detected; when the NVIDIA card is active, it is. My current workaround is to use the little tool gfxCardStatus (http://codykrieger.com/gfxCardStatus/) to force the card on or off as I need it, but that's not satisfactory.
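For concreteness, this is roughly what I mean by "detection"; a minimal sketch using the standard CUDA runtime calls (cudaGetDeviceCount / cudaGetDeviceProperties), which reports no device whenever the Intel adapter is active:

    /* Probe for a CUDA device with the CUDA runtime API.
       With the Intel GPU active this is expected to fail
       (error, or a device count of 0). Build with nvcc. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        cudaError_t err = cudaGetDeviceCount(&count);

        if (err != cudaSuccess || count == 0) {
            printf("No CUDA device detected: %s\n", cudaGetErrorString(err));
            return 1;
        }

        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);
        printf("Found %d CUDA device(s); device 0 is %s\n", count, prop.name);
        return 0;
    }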
Does anybody here know the Apple-blessed, Apple-recommended way to (1) detect the presence of a CUDA card and (2) activate that card when present?
Well, supposedly Mac OS X should switch back and forth as needed, and apparently it doesn't consider CUDA when doing so.
In Snow Leopard, Apple introduced OpenCL, which is meant to be the way any application programs the GPU; that, rather than CUDA, is probably Apple's recommended route.
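For what it's worth, GPU detection through OpenCL is straightforward on OS X; a minimal sketch using the standard clGetDeviceIDs / clGetDeviceInfo calls (link with -framework OpenCL):

    /* Look for an OpenCL-capable GPU on Mac OS X. On Apple's
       implementation the platform argument may be NULL. */
    #include <stdio.h>
    #include <OpenCL/opencl.h>

    int main(void)
    {
        cl_device_id gpu;
        cl_uint num = 0;
        cl_int err = clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 1, &gpu, &num);

        if (err != CL_SUCCESS || num == 0) {
            printf("No OpenCL-capable GPU found (error %d)\n", (int)err);
            return 1;
        }

        char name[256];
        clGetDeviceInfo(gpu, CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("OpenCL GPU: %s\n", name);
        return 0;
    }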
I am testing CUDA and OpenCL on the NVIDIA platform. All my applications (I had to write them with both the CUDA and the OpenCL framework) achieve the same performance (measured in MFLOPS). BUT: if you use local-memory optimizations tuned for NVIDIA, the application runs into problems on an ATI GPU. So it is not really cross-platform :(
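One way to soften this is to query the device limits at runtime instead of hard-coding local-memory sizes tuned for NVIDIA. A rough sketch, where pick_local_bytes is a hypothetical helper name and dev is a cl_device_id obtained as in the earlier snippet:

    /* Size local-memory buffers from the device's reported limits
       rather than assuming NVIDIA's figures, so the same host code
       can also size work-groups for ATI GPUs. */
    #include <stdio.h>
    #include <OpenCL/opencl.h>

    static size_t pick_local_bytes(cl_device_id dev)
    {
        cl_ulong local_mem = 0;
        size_t max_wg = 0;

        clGetDeviceInfo(dev, CL_DEVICE_LOCAL_MEM_SIZE,
                        sizeof(local_mem), &local_mem, NULL);
        clGetDeviceInfo(dev, CL_DEVICE_MAX_WORK_GROUP_SIZE,
                        sizeof(max_wg), &max_wg, NULL);

        printf("local mem: %llu bytes, max work-group size: %zu\n",
               (unsigned long long)local_mem, max_wg);

        /* Leave headroom: use at most half the reported local memory. */
        return (size_t)(local_mem / 2);
    }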