
Numerical precision differences between VS6 and VS2008 using C++?


I've been working on porting a legacy project from Visual Studio 6 to 2008. After jumping a few hurdles I now have the new project building and executing. However, I've noticed that the output of the two versions of the program is very slightly different, as though the floating-point calculations are not equivalent, even though the code is the same.

These differences usually start quite small (<1.0E-6) but accumulate over many calculations to the point where they start to have a material impact on the output. As one example, I looked at the exact double-precision storage in memory of a key variable after one of the first steps of the calculation and saw:

Visual Studio 6 representation: 0x4197D6CC85AC68D9

Decimal equivalent: 99988257.4183687120676040649414

Visual Studio 2008 representation: 0x4197D6CC85AC68EB

Decimal equivalent: 99988257.4183689802885055541992
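
For reference, here's a minimal sketch (assuming a compiler with a 64-bit unsigned long long) that reinterprets the two bit patterns above and prints their decimal equivalents:

    #include <cstdio>
    #include <cstring>

    int main() {
        // Bit patterns observed in the two builds (see above).
        unsigned long long vs6bits    = 0x4197D6CC85AC68D9ULL;
        unsigned long long vs2008bits = 0x4197D6CC85AC68EBULL;
        double a, b;
        std::memcpy(&a, &vs6bits, sizeof a);     // reinterpret as IEEE-754 doubles
        std::memcpy(&b, &vs2008bits, sizeof b);
        std::printf("%.22f\n%.22f\n", a, b);
        std::printf("difference: %g (0xEB - 0xD9 = 18 ulps)\n", b - a);
        return 0;
    }

The two values differ by 18 units in the last place, which matches the ~2.7E-7 decimal difference above.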

I've tried to debug this to track down where the differences start, but the output comes from an iterative numerical solver, so tracing through it at this level of precision will be a time-consuming process.

Is anyone aware of any expected differences between the double-precision arithmetic of the two compiler versions? (Or any other ideas about what might be causing this?)

For now my next step will probably be to try to create a simple demo app that shows the issue and can be more easily examined.

Thanks!


This is just a guess, but most modern Intel/AMD CPUs have two separate FPU models: the old-style x87 FPU and the newer SSE/SSE2-based model. The latter has a more flexible programming model and is usually preferred.

You should check whether VS6 and VS2008 generate code for the same model, because the old-school x87 FPU keeps 80-bit intermediate results, which can mean less rounding and potentially more accurate results. The actual results depend on what the optimizer does, though, which is something the scientific-computing people really hate, by the way. For example, if operands are spilled to memory, they're truncated to 64 bits and the extra precision is lost.
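
To see the spilling effect in isolation, here's a small sketch (names and values are just for illustration). The volatile assignment forces the intermediate through a 64-bit memory slot, the way a register spill would; whether the two results differ depends on the target (x87 vs. SSE2) and the precision settings:

    #include <cstdio>

    int main() {
        double a = 1.0e16, b = 1.0, c = -1.0e16;

        // May be evaluated entirely in 80-bit x87 registers:
        // 1.0e16 + 1.0 is exactly representable there, so the sum can come out as 1.
        double inRegister = (a + b) + c;

        // volatile forces the intermediate result out to a 64-bit double,
        // where 1.0e16 + 1.0 rounds back to 1.0e16, so the sum becomes 0.
        volatile double spilled = a + b;
        double truncated = spilled + c;

        std::printf("in-register: %g\nspilled:     %g\n", inRegister, truncated);
        return 0;
    }

On an x87 build that keeps 64-bit-mantissa intermediates this can print 1 and 0; otherwise both lines print 0.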

IIRC, VS6 could not generate SSE/SSE2 code, but it had the /Op option ("Improve Float Consistency") to round intermediate results to their declared size. The VS2008 equivalent is the /fp:precise flag, which I believe is the default there. So I'd suggest turning the option on for both compilers and comparing the results again.
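
On the command line that would look something like this (single-file build, file name made up):

    rem VS6: round intermediates to declared precision ("Improve Float Consistency")
    cl /Op solver.cpp

    rem VS2008: consistent rounding at assignments and calls (already the default)
    cl /fp:precise solver.cpp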


As you've noticed, floating-point results leave lots of room for inconsistency. Are you certain that the newer version is less correct? Do you have any sanity checks you can perform on the results?
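
If exact bit-for-bit equality isn't a realistic goal, a relative-tolerance check is usually the right kind of sanity test. A sketch (the tolerance value is a placeholder; derive yours from the solver's convergence criterion):

    #include <cmath>
    #include <algorithm>

    // True when a and b agree to within relTol of the larger magnitude.
    bool approxEqual(double a, double b, double relTol = 1e-9) {
        double scale = std::max(std::fabs(a), std::fabs(b));
        return std::fabs(a - b) <= relTol * scale;
    }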

Firstly, it appears that your algorithm is somewhat sensitive to slight changes in input. Have you examined your code (especially the additions and subtractions) to make sure there aren't opportunities for error to be introduced?
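
If long running sums are part of the solver, compensated (Kahan) summation is one standard way to make them much less sensitive to evaluation order. A sketch, not taken from your code:

    #include <vector>
    #include <cstddef>

    // Kahan summation: carry the low-order bits lost by each addition
    // in a correction term and feed them back into the next step.
    double kahanSum(const std::vector<double>& xs) {
        double sum = 0.0, comp = 0.0;
        for (std::size_t i = 0; i < xs.size(); ++i) {
            double y = xs[i] - comp;   // apply the stored correction
            double t = sum + y;        // low-order bits of y may be lost here
            comp = (t - sum) - y;      // recover exactly what was lost
            sum = t;
        }
        return sum;
    }

Note that /fp:fast-style optimization can legally rewrite the correction away, so this only helps under precise floating-point settings.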

On x86 at least, most FP operations are done in 80-bit internal registers, but a double only gets 64 bits in memory. If intermediate results were copied to memory (and truncated) at different points by the two compilers, that could definitely produce different answers. The compiler optimization logic could certainly cause the two to behave differently, especially if the newer compiler took advantage of registers that the older one did not.

At one point I know there was a "use consistent floating point" option, but I can't recall which version it was in. It caused values to be truncated to 64 bits after each operation, which ensured that the results were consistent across multiple runs.
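
A related runtime knob on x86 MSVC (distinct from the compiler switch, and no help for SSE2 code paths) is the x87 precision-control word, which can be set so every operation rounds to a 53-bit mantissa:

    #include <float.h>
    #include <cstdio>

    int main() {
        // Round every x87 result to a 53-bit mantissa (plain double) instead
        // of the 64-bit extended default. MSVC-specific; the MS CRT may
        // already start in this mode, and _PC_53 is not supported on x64.
        unsigned int cw = _controlfp(_PC_53, _MCW_PC);
        std::printf("control word: 0x%08X\n", cw);

        double a = 1.0e16, b = 1.0, c = -1.0e16;
        std::printf("(a + b) + c = %g\n", (a + b) + c);
        return 0;
    }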
