
How to use GPU for mathematics [closed]

Developer https://www.devze.com 2023-03-03 04:37 Source: web
Closed. This question needs to be more focused. It is not currently accepting answers.

Want to improve this question? Update the question so it focuses on one problem only by editing this post.

Closed 5 years ago.


I am looking at utilising the GPU for crunching some equations but cannot figure out how I can access it from C#. I know that the XNA and DirectX frameworks allow you to use shaders in order to access the GPU, but how would I go about accessing it without these frameworks?


I haven't done it from C#, but basically you use the CUDA SDK and toolkit (assuming you're using an nVidia card here, of course) to pull it off.

nVidia has ported (or written?) a BLAS implementation for use on CUDA-capable devices. They've provided plenty of examples for how to do number crunching, although you'll have to figure out how you're going to pull it off from C#. My bet is, you're going to have to write some stuff in un-managed C or C++ and link with it.

If you're not hung-up on using C#, take a look at Theano. It might be a bit overkill for your needs, since they're building a framework for doing machine learning on GPUs from Python, but ... it works, and works very well.


If your GPU is NVidia, you can use CUDA.

There is an example here that explains the whole chain, including some C/C++ code: CUDA integration with C#

And there is a library called CUDA.NET available here: CUDA.NET

If your GPU is ATI, then there is ATI Stream. Its .NET support is less clear to me. Maybe the Open Toolkit Library has it, through OpenCL support.

And finally, there is a Microsoft Research project called "Accelerator" which has a managed wrapper that should work on any hardware (provided it supports DirectX 9).


How about Brahma (LINQ to GPU)?

Gotta love LINQ!


I'm afraid that my knowledge of using the GPU is rather theoretical beyond writing shaders for DirectX / XNA and dabbling a little bit with CUDA (NVidia specific). However, I have heard quite a lot about OpenCL (Open Computing Language) which allows you to run algorithms which OpenCL will intelligently push out to your graphics cards, or run on the CPU if you don't have a compatible GPU.

The code you run on the GPU will have to be written specifically in OpenCL's subset of C99 (apologies if this does not meet your requirements, as you've asked how to use it from C#), but beyond your number crunching algorithms, you can write the rest of your application in C# and have it all work together nicely by using The Open Toolkit:

http://www.opentk.com/


There are two options if you don't want to mess with P/Invoke stuff and unmanaged code:

  1. Use the CUDA.NET library mentioned above. It works very well, but it's targeting CUDA, so only nVidia cards. If you'd like to solve more complex problems you'd have to learn CUDA, write your own kernel (in C...), compile it with nvcc and execute it from C# via this library.
  2. Use Microsoft Research Accelerator. It's a nice library built by MS Research that runs your code on anything that has lots of cores (many-core nVidia/ATI GPUs and multi-core processors). It's completely platform independent. I've used it and I'm pretty impressed with the results. There is also a very good tutorial on using Accelerator in C#.

The second option is the one I'd recommend, but if you have no problem with sticking to nVidia GPUs only, the first would probably be faster.


I have done it in C# by leveraging NVIDIA's CUDA libraries and .NET's P/Invoke. This requires some careful memory management and a good, detailed understanding of the CUDA libraries. This technique can be used in conjunction with any custom GPU/CUDA kernels you would like to create in C, so it's a very powerful, flexible approach.

If you would like to save yourself a lot of effort, you could buy NMath Premium from CenterSpace Software (who I work for) and you can be running large problems on your NVIDIA GPU in minutes from C#. NMath Premium is a large C#/.NET math library that can run much of LAPACK and FFTs on the GPU, but falls back to the CPU if the hardware isn't available or the problem size doesn't justify a round trip to the GPU.

