A similar question was asked during a programming test for a Software Developer position. After two days I still don't have an answer.
Suppose we have a function that takes two floats as input, has no side effects, and executes deterministically (assume a single-threaded environment), e.g. bool test(float arg1, float arg2);
To test it, let's say we use an arbitrarily large number of random inputs. The function fails very rarely, but it does fail. Let's say that we use this piece of code to test it:
// Set a, b
if (test(a, b)) {
    printf("Test passed\n");
} else {
    printf("%f %f\n", a, b);
}
So, after capturing the inputs that were printed, we plug them back in like this:
a = /* value copied from the printf output */;
b = /* value copied from the printf output */;
bool result = test(a, b);
After checking result, the test is valid/passed. What explanation do you have? I know that using printf for debugging can be tricky, but that was the question I was asked.
printf with the %f specifier prints only six digits after the decimal point by default, which is fewer than the roughly nine significant decimal digits needed to identify an arbitrary float exactly.

My guess would be that one of the inputs is such that when it's printed out via printf, and then read back in by the compiler, the result is slightly different.

The values in a and b may have more significant digits than what is printed by printf, thus making this a precision issue.
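Below is a minimal sketch of that round trip, assuming a C11-capable toolchain (for FLT_DECIMAL_DIG); the value 0.1000000475f is an arbitrary illustrative pick, not one of the inputs from the original question.

#include <float.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Arbitrary illustrative value; many floats behave the same way. */
    float original = 0.1000000475f;

    /* Print it the way the test harness does: %f, six digits after the point. */
    char printed[64];
    snprintf(printed, sizeof printed, "%f", original);

    /* Parse the printed text back, as the compiler would parse the copied literal. */
    float reparsed = strtof(printed, NULL);

    printf("printed text : %s\n", printed);
    printf("original     : %.*g (%a)\n", FLT_DECIMAL_DIG, original, original);
    printf("reparsed     : %.*g (%a)\n", FLT_DECIMAL_DIG, reparsed, reparsed);
    printf("round trip %s the value\n",
           original == reparsed ? "preserved" : "changed");
    return 0;
}

If the harness instead logged the failing inputs with "%.9g" (nine significant digits, FLT_DECIMAL_DIG for IEEE single precision) or with the hexadecimal "%a" format, the copied-back literals would reconstruct the exact same floats and the failure should reproduce.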