float vs double on graphics hardware

I've been trying to find info on performance of using float vs double on graphics hardware. I've found plenty of info on float vs double on CPUs, but such info is more scarce for GPUs.

I code with OpenGL, so if there's any info specific to that API that you feel should be known, let's have at it.

I understand that if the program is moving a lot of data to/from the graphics hardware, then it would probably be better to use floats, as doubles would require twice the bandwidth. My question is more about how the graphics hardware does its processing. As I understand it, modern Intel CPUs convert float/double to an 80-bit real for calculations (SSE instructions excluded), so both types end up about equally fast. Do modern graphics cards do any such thing? Is float and double performance about equal now? Are there any strong reasons to use one over the other?
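To make the bandwidth point concrete, here's a rough sketch (the vertex count, attribute index, and function names are made up for illustration, and GL function loading via GLEW or similar is omitted) of uploading the same position data as floats and as doubles; the double buffer is simply twice as large:

    /* Uploading 3-component positions as float vs. double. */
    #include <GL/gl.h>

    #define VERTEX_COUNT 1000000

    void upload_float(GLuint vbo, const float *positions)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* 3 * 4 bytes per vertex: ~12 MB for a million vertices */
        glBufferData(GL_ARRAY_BUFFER, VERTEX_COUNT * 3 * sizeof(float),
                     positions, GL_STATIC_DRAW);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    }

    void upload_double(GLuint vbo, const double *positions)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        /* 3 * 8 bytes per vertex: ~24 MB for the same data */
        glBufferData(GL_ARRAY_BUFFER, VERTEX_COUNT * 3 * sizeof(double),
                     positions, GL_STATIC_DRAW);
        /* GL_DOUBLE source data is converted by the driver for a float attribute. */
        glVertexAttribPointer(0, 3, GL_DOUBLE, GL_FALSE, 0, (void *)0);
    }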


In terms of speed, GPUs are optimized for floats. I'm much more familiar with Nvidia hardware, but in the current generation there is one DP FPU for every eight SP FPUs. In the next generation, the ratio is expected to be closer to 1 to 2.

My recommendation would be to see if your algorithm needs double precision. Many algorithms don't really need the extra bits. Run some tests to determine the average error that you get by going to single precision and figure out if it's significant. If not, just use single.
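A minimal sketch of that kind of test (the harmonic-sum loop is just a stand-in for whatever your real algorithm computes):

    /* Run the same computation in double and in float and compare. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double sum_d = 0.0;
        float  sum_f = 0.0f;

        /* Accumulate a million small terms; single precision loses bits here. */
        for (int i = 1; i <= 1000000; ++i) {
            sum_d += 1.0 / i;
            sum_f += 1.0f / (float)i;
        }

        double rel_err = fabs((double)sum_f - sum_d) / fabs(sum_d);
        printf("double: %.15f\nfloat:  %.7f\nrelative error: %e\n",
               sum_d, (double)sum_f, rel_err);
        return 0;
    }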

If your algorithm is purely for graphics, you probably don't need double precision. If you are doing general purpose computation, consider using OpenCL or CUDA.
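If you do go the OpenCL route, you can also ask the device directly whether it supports doubles at all; they're exposed through the cl_khr_fp64 extension. A minimal sketch (error handling omitted, first GPU device only):

    #include <CL/cl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        char extensions[4096] = {0};

        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
        clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS,
                        sizeof(extensions), extensions, NULL);

        if (strstr(extensions, "cl_khr_fp64"))
            printf("Device supports double precision.\n");
        else
            printf("No double precision; stick with float.\n");
        return 0;
    }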


Modern graphics cards perform many optimizations; for example, they may even operate on 24-bit floats. As far as I know, graphics cards don't use doubles internally, since they're built for speed, not necessarily precision.

From the GPGPU entry on Wikipedia:

The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors. This has implications for correctness which are considered important to some scientific applications. While 64-bit floating point values (double precision float) are commonly available on CPUs, these are not universally supported on GPUs; some GPU architectures sacrifice IEEE-compliance while others lack double-precision altogether. There have been efforts to emulate double precision floating point values on GPUs; however, the speed tradeoff negates any benefit to offloading the computation onto the GPU in the first place.


Most GPUs don't support double-precision floats at all. Support has been added only very recently (this generation), and not everywhere:

  • ATI:
    • HD5870 and HD5850 have it at decent speed (not as fast as single though)
    • HD5770 does not have it, despite being in the same generation as the HD5870.
  • Nvidia:
    • GT200-based cards have double support, but at a very low double-to-single ratio (8:1?).
    • Fermi is supposed to have it at half the speed of single... whenever that ships.

For everything else, you just don't have double support.

So... You should definitely not use double if you don't need it.


Doubles are not supported for rendering until DX11 (i.e. Shader Model 5):

http://msdn.microsoft.com/en-us/library/ee418354(VS.85).aspx

I suspect OpenGL will be the same.
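For OpenGL, doubles in GLSL are tied to the GL_ARB_gpu_shader_fp64 extension (made core in OpenGL 4.0), so a rough runtime check (assuming a context where glGetString(GL_EXTENSIONS) is still available; core profiles require the glGetStringi path instead) might look like:

    #include <GL/gl.h>
    #include <string.h>
    #include <stdio.h>

    /* Returns non-zero if the driver exposes double-precision shader support. */
    int has_shader_doubles(void)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts && strstr(exts, "GL_ARB_gpu_shader_fp64") != NULL;
    }

    /* Usage, after a context has been created:
     *   if (!has_shader_doubles())
     *       printf("No double-precision shader support; fall back to float.\n");
     */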
