I have a Theora video decoder library and application compiled with VS-2008 on Windows (Intel x86 architecture). I use this setup to decode Theora bitstreams (*.ogg files). The source code for this decoder library comes from the FFmpeg v0.5 source package, with some modifications to make it compile on the Windows/VS-2008 combination.
Now, when I decode the same Theora bitstream using the FFmpeg (v0.5) application on Linux (Intel x86 architecture) that I built with gcc, the decoded output YUV file has 1-bit differences from the output obtained with the Windows/VS-2008 setup, and only in a few bytes of the file, not all of them. I expected the two outputs to be bit-exact.
I suspect the following factors:
a.) Some data type mismatch between the two compilers, gcc and MS-VS2008? (A quick sanity check for this is sketched after this list.)
b.) I have verified that the code does not use any run-time math library functions like log, pow, exp, cos, etc., but it still has operations like (a+b+c)/3. Could this be an issue? The implementation of this "divide by three" (or by any other number) could differ between the two setups.
c.) Some kind of rounding/truncation effects happening differently?
d.) Could I be missing a macro that is defined on Linux as a makefile/configure option but is not present in the Windows setup?
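For doubt (a), one cheap test is to compile and run the same tiny program with both toolchains and diff the output; any mismatch in type sizes would explain arithmetic and structure-layout differences. This is only a sketch (the type list is illustrative, not specific to the Theora code):

    #include <stdio.h>

    /* Print the sizes of the basic C types so the gcc and VS-2008
       builds can be compared directly. */
    int main(void)
    {
        printf("char      : %u\n", (unsigned)sizeof(char));
        printf("short     : %u\n", (unsigned)sizeof(short));
        printf("int       : %u\n", (unsigned)sizeof(int));
        printf("long      : %u\n", (unsigned)sizeof(long));
        printf("long long : %u\n", (unsigned)sizeof(long long));
        printf("float     : %u\n", (unsigned)sizeof(float));
        printf("double    : %u\n", (unsigned)sizeof(double));
        printf("void *    : %u\n", (unsigned)sizeof(void *));
        return 0;
    }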
So far, however, I have not been able to narrow down the problem or find a fix for it.
1.) Are my doubts above valid, or could there be other issues causing these 1-bit differences in the outputs produced by the two setups?
2.) How do I debug and fix this?
I guess this scenario of differing outputs between a Linux/gcc setup and the Windows/MS compilers could be true for any generic code, not just my particular case of a video decoder application.
Any pointers regarding this would be helpful.
thanks,
-AD
I think such behavior may come from x87/SSE2 math. What version of gcc do you use? Do you use float (32-bit) or double (64-bit)? Math on x87 has more precision bits internally (the 80-bit extended format) than can be stored in memory.
Try these flags for gcc: -ffloat-store, or -msse2 -mfpmath=sse.
Flags for MSVC: /fp:fast /arch:SSE2.
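A minimal sketch of the excess-precision effect (assuming a 32-bit x86 build; the result depends on the flags above, which is the point):

    #include <stdio.h>

    /* 1e8f + 1.0f is not representable in a 24-bit float mantissa.
       Evaluated in x87 80-bit registers, the intermediate sum keeps
       the +1 and d comes out as 1.0; evaluated in 32-bit SSE
       registers, the +1 is rounded away and d comes out as 0.0.
       Try: gcc -m32 -mfpmath=387  versus  gcc -m32 -msse2 -mfpmath=sse */
    volatile float big = 1e8f;  /* volatile: prevent compile-time folding */

    int main(void)
    {
        float d = big + 1.0f - big;
        printf("d = %f\n", d);
        return 0;
    }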
Regarding b), integer and floating-point division are completely specified in C99. C99 specifies round-towards-zero for integer division (earlier standards left the rounding direction implementation-defined) and IEEE 754 semantics for floating-point.
Given that VS2008 does not claim to implement C99, this does not really help. Implementation-defined at least means that you can write a few test cases and find out which decision your compiler made (see below).
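For example, a couple of test cases along these lines, run under both compilers, will show which rounding direction each one picked:

    #include <stdio.h>

    /* C99 requires truncation toward zero: (-7)/2 must be -3 and
       (-7)%2 must be -1.  C89 also allowed rounding toward negative
       infinity (-4 and +1), so an older compiler may differ here. */
    int main(void)
    {
        printf("(-7)/2  = %d, (-7)%%2 = %d\n", (-7) / 2, (-7) % 2);
        printf("(-7)>>1 = %d\n", (-7) >> 1);  /* right shift of a negative value is implementation-defined too */
        return 0;
    }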
If you really care about this, how about instrumenting the code to write verbose traces to a separate file and examining the traces for the first difference? Hey, perhaps the tracing is even already there for debugging purposes!
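A minimal tracing sketch, assuming you can modify the decoder source (the macro name, file name, and dump point are made up for illustration):

    #include <stdio.h>

    /* Hypothetical trace helper: dump a named intermediate value to
       a text file.  Run the Linux and Windows builds on the same
       .ogg input, then diff the two trace files; the first
       mismatching line points at the stage where the two pipelines
       diverge. */
    static FILE *trace_file = NULL;

    #define TRACE(tag, val)                                        \
        do {                                                       \
            if (!trace_file)                                       \
                trace_file = fopen("decoder_trace.txt", "w");      \
            if (trace_file)                                        \
                fprintf(trace_file, "%s %d\n", (tag), (int)(val)); \
        } while (0)

    int main(void)
    {
        /* In the real decoder these calls would sit after each
           stage, e.g. after dequantization and after the inverse
           DCT of each block. */
        int sample = 42;
        TRACE("idct_out", sample);
        if (trace_file)
            fclose(trace_file);
        return 0;
    }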
1.) Probably a different optimization of some floating-point library.
2.) Is it actually a problem?
edit:
Take a look at the "/fp:precise" option on VS (http://msdn.microsoft.com/en-us/library/e7s85ffb.aspx) or the nearest gcc equivalents, "-ffloat-store" and "-fno-fast-math".