
How to use floating-point extended precision on a macOS or Windows system

This little piece of code is making me crazy:

#include <stdio.h>

int main()
{
    double x;
    const double d = 0.1;
    x = d;
    for (int i = 0; i < 30; i++)
    {
        printf("Cycle %d  Value :%.20e \n", i, x);
        x = x*(double)11. - (double)10*d; // in real arithmetic: 11*0.1 - 10*0.1 = 1.1 - 1 = 0.1
    }
    return 0;
}

In fact I was trying to demonstrate a pathological case due to the internal representation of floating-point numbers in the IEEE 754 standard: the recurrence x = 11*x - 10*d has x = d as a fixed point, but it multiplies any rounding error by a factor of 11 on every cycle, so a tiny initial error soon explodes. On a macOS or Windows machine the final output line will read:

Cycle 29 Value :1.28084153156127500000e+13

But on Linux (Scientific Linux 5.4) the code runs with no problem. Reading around, I found the following:

On BSD systems such as FreeBSD, NetBSD and OpenBSD, the hardware double-precision rounding mode is the default, giving the greatest compatibility with native double precision platforms. On x86 GNU/Linux systems the default mode is extended precision (with the aim of providing increased accuracy).

The same page (GCC INTRO) explains how to enable double-precision rounding on a Linux system, but not how to use extended precision on other systems. Is that possible on macOS or Windows, and how?
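
A quick way to see which evaluation mode a given compiler and platform use is the C99 FLT_EVAL_METHOD macro from <float.h>; a minimal check (a sketch, assuming a C99 compiler) might look like this:

#include <stdio.h>
#include <float.h>

int main(void)
{
    // 0: float/double expressions are evaluated in their own type (typical with SSE).
    // 2: all expressions are evaluated as long double (the typical x87 Linux default).
    printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
    return 0;
}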


Using extended precision on OS X is easy:

x = 11.L*x - 10.L*d;

The L suffix causes the two literals to be long doubles instead of doubles, which forces the entire expression to be evaluated in 80-bit extended per C's expression evaluation rules.
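
Applied to the original loop, the change might look like this (a sketch; x itself can stay a double, since it is the intermediate products 11*x and 10*d that need the extra width):

#include <stdio.h>

int main(void)
{
    double x;
    const double d = 0.1;
    x = d;
    for (int i = 0; i < 30; i++)
    {
        printf("Cycle %d  Value :%.20e \n", i, x);
        // 11.L and 10.L are long double literals, so the whole right-hand
        // side is evaluated in extended precision and only rounded back to
        // double on assignment to x.
        x = 11.L*x - 10.L*d;
    }
    return 0;
}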

That aside, there seems to be some confusion in your question; you say that on Linux "the code runs with no problem." A couple of points:

  • Both the OS X result and the Linux result conform to IEEE-754 and to the C standard. There is no "problem" with either one of them.
  • The OS X result is reproducible on hardware that does not support the (non-standard) 80-bit floating point type. The Linux result is not.
  • Computations that depend on intermediate results being kept in 80-bit extended are fragile; changing compiler options, optimization settings, or even program flow may cause the result to change, as the sketch below illustrates. The OS X result will be stable across such changes.
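
To illustrate that fragility, here is a sketch, assuming an x86 GCC target where doubles are evaluated in x87 extended (FLT_EVAL_METHOD == 2): the two functions are mathematically identical, but the volatile store forces one intermediate out of the 80-bit register into a 64-bit double, which can be enough to change the final result.

#include <stdio.h>

// May be evaluated entirely in 80-bit x87 registers.
static double fused(double x, double d)
{
    return x*11. - 10.*d;
}

// The volatile store forces x*11. to be rounded to a 64-bit double first.
static double stored(double x, double d)
{
    volatile double t = x*11.;
    return t - 10.*d;
}

int main(void)
{
    const double d = 0.1;
    double x = d, y = d;
    for (int i = 0; i < 30; i++)
    {
        x = fused(x, d);
        y = stored(y, d);
    }
    printf("fused : %.20e\nstored: %.20e\n", x, y);
    return 0;
}

On a double-evaluation platform such as OS X, the two print the same value; that is the reproducibility being argued for here.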

Ultimately, you must keep in mind that floating-point arithmetic is not real arithmetic. The fact that the result obtained on Linux is closer to the result obtained when evaluating the expression with real numbers does not make that approach better (or worse).

For every case where automatic use of extended precision saved a naive user of floating-point, I can show you a case where the unpredictability of that evaluation mode introduces a subtle and hard-to-diagnose bug. These are commonly called "excess-precision" bugs; one of the most famous recent examples was a bug that allowed users to put 2.2250738585072011e-308 into a web form and crash the server. The ultimate cause was precisely the compiler going behind the programmer's back and maintaining more precision than it was instructed to. OS X was not affected by this bug because double-precision expressions are evaluated in double precision, not extended.

People can be educated about the gotchas of floating-point arithmetic, so long as the system is both reproducible and portable. Evaluating double-precision expressions in double and single-precision in single provides those attributes. Using extended-precision evaluation undermines them. You cannot do serious engineering in an environment where your tools are unpredictable.

