Below, result1 and result2 end up with different values depending on whether the code is compiled with -g or with -O, on both GCC 4.2.1 and GCC 3.2.0 (I have not tried more recent GCC versions):
#include <cmath>
#include <iostream>

// Identity function: simply returns its argument.
double double_identity(double in_double)
{
    return in_double;
}
...
double result1 = ceil(log(32.0) / log(2.0));
std::cout << __FILE__ << ":" << __LINE__ << ":" << "result1==" << result1 << std::endl;
double result2 = ceil(double_identity(log(32.0) / log(2.0)));
std::cout << __FILE__ << ":" << __LINE__ << ":" << "result2==" << result2 << std::endl;
With -g, both result1 and result2 come out as 5; with -O, however, I get result1 == 6 and result2 == 5.
This seems to be a difference in how the compiler optimizes the code, or something to do with the internal IEEE floating-point representation, but I am curious exactly how this difference arises. I'm hoping to avoid looking at the assembler if at all possible.
The above was compiled as C++, but I presume the same would hold if it were converted to ANSI C code using printf.
The above discrepancy occurs on 32-bit Linux, but not on 64-bit Linux.
Thanks bg
On x86, with optimizations on, the results of subexpressions are not necessarily stored into a 64-bit memory location before being used as part of a larger expression.
Because x86's legacy floating-point registers (the x87 register stack) are 80 bits wide, extra precision is available in such cases. Feed that extra-precise value into further arithmetic, or into a function like ceil() that is sensitive to tiny differences, and the effect becomes visible to the naked eye: computed at 80-bit precision, log(32.0)/log(2.0) evidently comes out a hair above 5.0 here, so ceil() rounds it up to 6, while passing the quotient through double_identity() stores it as a 64-bit double argument, which rounds it back to exactly 5.0.
On Intel's 64-bit processors, compilers do floating-point math in SSE registers by default, and those registers don't have the extra precision.
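To make the mechanism concrete, here is a minimal sketch (not from the original post; the volatile variable is my own addition) that forces the quotient through a 64-bit memory slot, much as double_identity() does:

#include <cmath>
#include <iostream>

int main()
{
    // Under -O on 32-bit x86 the quotient can stay in an 80-bit x87
    // register and come out just above 5.0, so ceil() may yield 6.
    double kept_in_register = std::ceil(std::log(32.0) / std::log(2.0));

    // The volatile forces a store to a 64-bit double, rounding the
    // quotient back to exactly 5.0 before ceil() sees it.
    volatile double stored = std::log(32.0) / std::log(2.0);
    double forced_to_memory = std::ceil(stored);

    std::cout << kept_in_register << " " << forced_to_memory << std::endl;
    return 0;
}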
You can play around with g++ flags to fix this if you really care.
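For example (the exact behavior depends on your GCC version and target, and test.cpp is just a placeholder name), -ffloat-store makes GCC store floating-point values to memory rather than keeping them in registers, and -msse2 -mfpmath=sse makes it do double arithmetic in 64-bit SSE registers even on 32-bit x86:

g++ -O2 -ffloat-store test.cpp
g++ -O2 -msse2 -mfpmath=sse test.cpp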