How to directly access a GPU?

As most of you know, CPUs are not well suited to floating-point computation in contrast to GPUs. I am wondering how to use the GPU's power without any abstraction layer or driver. Can I program a GPU using assembly, C, or C++ (and if so, how)? Assembly seems like it would let me access the GPU directly, whereas C/C++ would probably need an intermediate library (e.g. OpenCL) to reach it.

Let me ask you another question: How much of a modern GPU's capability will be exposed to a programmer without any third-party driver?


The interfaces aren't documented, so something like OpenCL is the only practical way to program the GPU directly.

Without a driver you would be stuck trying to reverse engineer the complete functionality of the GPU on your own.
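
For reference, here is a minimal sketch in C of that practical path: the OpenCL 1.2 host API running a small floating-point kernel on the first GPU the runtime reports. Error checking is trimmed for brevity, and the buffer size and scale factor are just illustrative; link against the OpenCL loader (e.g. -lOpenCL):

/* Minimal OpenCL host sketch (C): scale an array of floats on the
 * first available GPU. Error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void scale(__global float *v, float k) { "
    "    size_t i = get_global_id(0); "
    "    v[i] *= k; "
    "}";

int main(void) {
    float data[1024];
    for (int i = 0; i < 1024; i++) data[i] = (float)i;

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    /* OpenCL 1.2-style queue creation. */
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Copy the host array into a device buffer. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                sizeof(data), data, NULL);

    /* Build the kernel from source at runtime. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "scale", NULL);

    float factor = 2.0f;
    clSetKernelArg(k, 0, sizeof(buf), &buf);
    clSetKernelArg(k, 1, sizeof(factor), &factor);

    /* Launch 1024 work-items and read the result back. */
    size_t n = 1024;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);

    printf("data[10] = %f\n", data[10]); /* expect 20.0 */
    return 0;
}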


Well, essentially, you would have to write a driver, on either Windows or Linux. The interfaces may actually be documented depending on which chipset you are trying to use: Intel, for instance, has loads of PDF documentation on its website. However, this is a non-trivial exercise at best, and your code would only ever run on that one family of hardware. Merely reading and understanding the documentation will take a while in most cases, partly because of the inevitable "oops, that's not how it really works" moments, and partly because the docs describe the hardware and its registers, not how to do this or that. If you REALLY want to do this, your best bet would be to start with the open-source Linux drivers for a particular chipset and tweak them to your own twisted purpose. All in all, other than for the learning aspect, it's probably a BAD idea.
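
To make "write a driver" concrete, here is a hedged sketch of the lowest-level access Linux gives you without one: mapping a GPU's PCI BAR through sysfs and reading a register directly. The PCI address 0000:00:02.0 and the 0x2030 offset are placeholders, not real values; actual offsets would have to come from vendor documentation such as Intel's programmer's reference manuals. This needs root, and poking an active GPU this way can hang the machine:

/* Sketch: mmap a GPU's PCI BAR via sysfs and read one register.
 * The device path and register offset below are hypothetical. */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void) {
    /* BAR0 of a hypothetical GPU at PCI address 0000:00:02.0. */
    int fd = open("/sys/bus/pci/devices/0000:00:02.0/resource0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* Map one page of register space. */
    size_t len = 4096;
    volatile uint32_t *regs =
        mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* Read a (hypothetical) 32-bit register at offset 0x2030. */
    uint32_t val = regs[0x2030 / 4];
    printf("reg 0x2030 = 0x%08x\n", (unsigned)val);

    munmap((void *)regs, len);
    close(fd);
    return 0;
}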


GPU manufacturers like NVIDIA and ATI are closed-source companies that have chosen not to disclose their GPU architectures and inner workings to the general public. This is why we cannot program a GPU directly the way we can most CPUs. The only way we can harness the power of the GPU for computation is through a vendor-provided library, such as CUDA in NVIDIA's case. There is, in principle, a way to program a GPU directly for calculations, but it requires reverse engineering and documenting the GPU, its registers, and its command interface, and that is not feasible with the limited resources and limited time available to us.
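
As a point of comparison, here is a small sketch of that sanctioned route: plain C against NVIDIA's CUDA driver API (cuda.h), which is about as close to the hardware as the vendor lets an ordinary programmer get. Assuming the CUDA toolkit is installed, compile with something like cc query.c -lcuda:

/* Enumerate CUDA devices through the driver API from plain C. */
#include <stdio.h>
#include <cuda.h>

int main(void) {
    /* cuInit fails if no NVIDIA driver is present. */
    if (cuInit(0) != CUDA_SUCCESS) {
        fprintf(stderr, "no usable NVIDIA driver\n");
        return 1;
    }

    int count = 0;
    cuDeviceGetCount(&count);

    for (int i = 0; i < count; i++) {
        CUdevice dev;
        char name[128];
        int major, minor;
        cuDeviceGet(&dev, i);
        cuDeviceGetName(name, sizeof(name), dev);
        cuDeviceGetAttribute(&major,
            CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, dev);
        cuDeviceGetAttribute(&minor,
            CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, dev);
        printf("GPU %d: %s (compute capability %d.%d)\n",
               i, name, major, minor);
    }
    return 0;
}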

PS: The only other way in is to join a GPU vendor as a core developer and sign an NDA (Non-Disclosure Agreement) with them, which is not likely to happen for beginners and individuals like us.
