
Difference between double precision and full precision floating point


I am researching a possible GPU-based teraflop computing machine; the benchmark to be used will be LINPACK. Now here's the problem: going through the LINPACK documentation, it says that it calculates in full precision and not in double precision, and for some machines full precision can be single precision. Can someone please shed some light on the difference, as this will dictate whether I should go for the GTX 590s or the Tesla 2070s.


I think the term "full precision" was chosen to cover both IEEE-754 double precision (this is what is used on the GPUs mentioned) and the "single precision" format of old Cray vector computers, which sported 1 sign bit, 15 exponent bits, and 48 mantissa bits, providing a larger range but slightly less precision than IEEE-754 double precision. Here is documentation for the floating-point format used on the Cray-1:

http://ed-thelen.org/comp-hist/CRAY-1-HardRefMan/CRAY-1-HRM.html#p3-20
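The difference is easy to quantify: relative precision is set by the number of significand bits, and range by the exponent width. Here is a minimal sketch in C (the Cray bit counts are taken from the manual linked above; the epsilon and range figures are back-of-the-envelope approximations, not exact hardware behavior):

    #include <stdio.h>
    #include <float.h>
    #include <math.h>

    int main(void)
    {
        /* IEEE-754 double: 1 sign, 11 exponent, 52 stored mantissa bits
           plus a hidden bit -> 53 significant bits, eps = 2^-52 */
        printf("IEEE double eps   : %g\n", DBL_EPSILON);

        /* Cray-1 single: 1 sign, 15 exponent, 48 mantissa bits with no
           hidden bit -> ~48 significant bits, eps ~ 2^-47 (approx.) */
        printf("Cray single eps   : %g\n", pow(2.0, -47));

        /* Range: 11 exponent bits cover roughly 1e-308..1e+308,
           15 exponent bits roughly 1e-2466..1e+2466 */
        printf("IEEE double range : ~1e+/-308\n");
        printf("Cray single range : ~1e+/-2466 (approx.)\n");
        return 0;
    }

So the Cray format trades roughly five bits of precision for a much wider exponent range, which is presumably why the LINPACK documentation says "full precision" rather than naming one specific 64-bit format.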


Concerning NVIDIA's official HPL version 0.8 (that's what we use to benchmark our hybrid machines):

It will run only on Teslas: it requires a GPU with more than 2 GiB of memory, which, as far as I know, is true only for Tesla cards.

It uses double precision, which is another point in favor of Teslas, since double-precision arithmetic performance is limited on mainstream GeForce GPUs (a device-query sketch covering both points follows below).
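As a minimal illustration of checking both requirements up front (this is my own sketch, not part of the HPL distribution; the 2 GiB threshold comes from the point above, and compute capability 1.3 or higher is what CUDA requires for native double precision):

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int count = 0;
        if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
            fprintf(stderr, "no CUDA devices found\n");
            return 1;
        }
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);

            /* > 2 GiB of device memory (the HPL 0.8 requirement above)
               and compute capability >= 1.3 for native double precision */
            int enough_mem = prop.totalGlobalMem > 2ULL * 1024 * 1024 * 1024;
            int has_double = prop.major > 1 ||
                             (prop.major == 1 && prop.minor >= 3);

            printf("device %d: %s, %.1f GiB, CC %d.%d -> %s\n",
                   dev, prop.name,
                   prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                   prop.major, prop.minor,
                   (enough_mem && has_double) ? "looks usable" : "not usable");
        }
        return 0;
    }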

BTW: achieving at least 50% efficiency on a 6-node machine (2 GPUs per node) is considered barely possible; see the back-of-the-envelope calculation below.
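To put that 50% figure in perspective: HPL efficiency is measured Rmax divided by theoretical Rpeak. A back-of-the-envelope sketch, assuming the commonly published ~515 GFLOP/s double-precision peak of a Tesla C2070 per GPU and ignoring the CPU contribution (both are my own example inputs, not HPL figures):

    #include <stdio.h>

    int main(void)
    {
        /* Assumed example input: ~515 GFLOP/s double-precision peak per
           Tesla C2070; CPU contribution to Rpeak ignored for simplicity. */
        const double gflops_per_gpu = 515.0;
        const int nodes = 6, gpus_per_node = 2;

        double rpeak = nodes * gpus_per_node * gflops_per_gpu;  /* GFLOP/s */
        double rmax_at_50 = 0.5 * rpeak;                        /* 50% eff. */

        printf("Rpeak           : %.0f GFLOP/s (%.2f TFLOP/s)\n",
               rpeak, rpeak / 1000.0);
        printf("Rmax @ 50%% eff : %.0f GFLOP/s (%.2f TFLOP/s)\n",
               rmax_at_50, rmax_at_50 / 1000.0);
        return 0;
    }

Even at 50% efficiency, twelve such GPUs come out around 3 TFLOP/s, comfortably above the teraflop target in the question.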
