I wrote a CUDA program and I am testing it on Ubuntu running as a virtual machine. The reason for this is that I have Windows 7, I don't want to install Ubuntu as a secondary operating system, and I need a Linux operating system for testing. My question is: will the virtual machine limit the GPU resources? In other words, will my CUDA code run faster under my primary operating system than it does in a virtual machine?
I faced a similar task once. What I ended up doing was installing Ubuntu on an 8GB thumb drive with persistent mode enabled.
That gave me 4GB to install CUDA and everything else I needed.
Having a bootable USB stick around can be very useful. I recommend reading this.
Also, this link has some very interesting material if you're looking for other distros.
Unfortunately, the virtual machine emulates a graphics device, so you won't have access to the real GPU. This is because of the way virtualisation handles multiple VMs accessing the same device: it inserts a layer in between to share the real device.
It is possible to get true access to the hardware, but only with the right combination of software and hardware; see the SLI Multi-OS site for details.
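If you want to confirm what the guest OS actually sees, a minimal device query makes it obvious: inside a typical VM with no GPU passthrough it will report no CUDA-capable device. This is just a sketch using the standard CUDA runtime API; it assumes the CUDA toolkit is installed in whichever environment you run it.

```
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        // Typical result inside a VM without passthrough:
        // no CUDA-capable device is visible to the guest.
        std::printf("No CUDA device visible: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, %d multiprocessors\n",
                    i, prop.name, prop.multiProcessorCount);
    }
    return 0;
}
```

Build it with `nvcc devicequery.cu -o devicequery` (filename is arbitrary) and run it both on the Windows host and inside the guest to see the difference.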
So you're probably out of luck with the virtualisation route - if you really can't run your app in Windows then you're limited to the following:
- Unrealistic: Install Linux instead of Windows
- Unrealistic: Install Linux alongside Windows (you've already ruled out dual boot)
- Boot into a live CD; you could prepare a disk image with CUDA installed and mount that image each time
- Set up (or beg/borrow) a separate Linux box and access it remotely
I just heard a talk at NVIDIA's GPU Technology Conference by a researcher named Xiaohui Cui (Oak Ridge National Laboratory). Among other things, he described accessing GPUs from virtual machines using something called gVirtuS. He did not create gVirtuS, but described it as an open-source "virtual CUDA" driver. See the following link: http://osl.uniparthenope.it/projects/gvirtus/
I have not tried gVirtuS, but it sounds like it might do what you want.
As of CUDA 3.1, its virtualization capabilities are essentially nonexistent, so the only practical approach is to run CUDA programs directly on the target hardware and software.
Use rCUDA to add a virtual GPU to your VM.
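rCUDA works by interposing its own CUDA runtime library, which forwards API calls over the network to a machine that has the physical GPU, so the application source stays ordinary CUDA. The sketch below is just a plain kernel launch to illustrate that nothing rCUDA-specific appears in the code; how the remote server is selected (typically via environment variables) depends on the rCUDA version, so consult its documentation rather than relying on anything implied here.

```
#include <cstdio>
#include <cuda_runtime.h>

// Ordinary CUDA kernel: nothing here knows whether the GPU is local
// or provided remotely by an rCUDA server.
__global__ void scale(float *x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));   // under rCUDA this allocation happens on the remote GPU
    cudaMemset(d, 0, n * sizeof(float));
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaError_t err = cudaDeviceSynchronize();
    std::printf("kernel status: %s\n", cudaGetErrorString(err));
    cudaFree(d);
    return 0;
}
```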