I compiled the following programs with gcc 4.4.1 and I get unexpected output (well, unexpected for me).
#include <stdio.h>

int main()
{
    float x = 0.3, y = 0.7;

    if (x == 0.3)
    {
        if (y == 0.7)
            printf("Y\n\n");
        else
            printf("X\n\n");
    }
    else
        printf("NONE\n\n");
}
Output: NONE
#include <stdio.h>

int main()
{
    float x = 0.3, y = 0.7;

    if (x < 0.3)
    {
        if (y == 0.7)
            printf("Y\n\n");
        else
            printf("X\n\n");
    }
    else
        printf("NONE\n\n");
}
Output: NONE
#include <stdio.h>

int main()
{
    float x = 0.3, y = 0.7;

    if (x > 0.3)
    {
        if (y > 0.7)
            printf("Y\n\n");
        else
            printf("X\n\n");
    }
    else
        printf("NONE\n\n");
}
Output: X
So it appears that the value stored in x is greater than 0.3, and the value stored in y is less than 0.7.
Why is this happening? Is this a property of the float datatype, or do the if-else statements interpret float in a different way?
Thanks.
Edit: Alright, I pondered it over and I'm getting a little confused now. Kindly tell me whether my understanding of this problem is correct.

float x = 0.3;

stores x = 0.30000001192092895508 in memory. Clearly, this is greater than 0.3. (Is this correct?)

Now, double x = 0.3; results in x = 0.29999999999999998890, and this is smaller than 0.3. (Is this correct too?)

Main question: if I store 0.3 in float x, then in the statement if (x > 0.3) the stored value x = 0.30000001192092895508 is implicitly cast to a double, and the literal 0.3 is also a double instead of a float. Hence 0.3 = 0.29999999999999998890, and the internal operation is if ((double) 0.30000001192092895508 > (double) 0.29999999999999998890). Is this correct?
You're using float for storage, but your comparisons are being performed against the literals, which are of type double.

The values of x and y aren't exactly 0.3 and 0.7, as those numbers aren't representable in binary floating point. It happens that the closest float to 0.3 is greater than the closest double to 0.3, and the closest float to 0.7 is less than the closest double to 0.7... hence your comparison results.
Assuming the representations are the same as in C# (where I happen to have some tools to help) the values involved are:
0.3 as float = 0.300000011920928955078125
0.3 as double = 0.299999999999999988897769753748434595763683319091796875
0.7 as float = 0.699999988079071044921875
0.7 as double = 0.6999999999999999555910790149937383830547332763671875
So that explains why it's happening... but it doesn't explain how to work around the issue for whatever your code is actually trying to do, of course. If you can give more context to the bigger problem, we may be able to help more.
Computers can't store floating point numbers exactly. Just like 1/7 can't be represented in a finite number of decimal digits, lots of numbers can't be represented exactly in binary. 3/10 is such a number. When you write 0.3, your program actually stores 0.30000001192092895508, as that's the best it can do with the 32 bits available to it in a float variable.

And it so happens that this value also differs from the double value of 0.3, since the computer can store more digits in a 64-bit double. When you write if (x == 0.3), your value is actually promoted to a double, since floating-point constants are doubles unless explicitly specified otherwise. It's equivalent to writing if ((double) x == 0.3).
jkugelman$ cat float.c
#include <stdio.h>
int main() {
printf("%.20f\n", (float) 0.3); // Can also be written "0.3f".
printf("%.20f\n", (double) 0.3); // Cast is redundant, actually.
return 0;
}
jkugelman$ gcc -Wall -o float float.c
jkugelman$ ./float
0.30000001192092895508
0.29999999999999998890
Notice how the 0.2999... value has more 9's in it than the 0.3000... one. The double-precision value is closer to 0.3 thanks to the extra bits.