I'm wondering what the best way to fix precision errors is in Java. As you can see in the following example, there are precision errors:
class FloatTest
{
    public static void main(String[] args)
    {
        Float number1 = 1.89f;
        for(int i = 11; i < 800; i*=2)
        {
            System.out.println("loop value: " + i);
            System.out.println(i*number1);
            System.out.println("");
        }
    }
}
The result displayed is:
loop value: 11
20.789999
loop value: 22
41.579998
loop value: 44
83.159996
loop value: 88
166.31999
loop value: 176
332.63998
loop value: 352
665.27997
loop value: 704
1330.5599
Also, can someone explain why it only does it starting at 11 and doubling the value every time? I think all other values (or at least many of them) displayed the correct result.
Problems like this have caused me headaches in the past, and I usually work around them with number formatters or by putting the values into a String.
Edit: As people have mentioned, I could use a double, but having tried it, it seems that 1.89 as a double times 792 still gives an imprecise result (the output is 1496.8799999999999).
I guess I'll try the other solutions, such as BigDecimal.
If you really care about precision, you should use BigDecimal:
https://docs.oracle.com/javase/8/docs/api/java/math/BigDecimal.html
https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/math/BigDecimal.html
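For example, here is a minimal sketch of the questioner's 1.89 × 792 case; constructing the BigDecimal from a String avoids inheriting the double approximation (the class name is illustrative):
import java.math.BigDecimal;

public class BigDecimalDemo
{
    public static void main(String[] args)
    {
        // The String constructor makes the value exactly 1.89,
        // not the nearest binary approximation to 1.89.
        BigDecimal number1 = new BigDecimal("1.89");
        System.out.println(number1.multiply(BigDecimal.valueOf(792))); // 1496.88
    }
}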
The problem is not with Java but with the good old IEEE floating-point standard (http://en.wikipedia.org/wiki/IEEE_floating-point_standard).
You can either:
use double and get a bit more precision (not perfect either, of course, since it also has limited precision); see the sketch after this list
use an arbitrary-precision library
use numerically stable algorithms and truncate/round the digits you are not sure are correct (you can calculate the numeric precision of operations)
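For instance, a minimal sketch contrasting the first option with the questioner's numbers (the expected outputs in the comments are taken from the question and its edit):
public class FloatVsDouble
{
    public static void main(String[] args)
    {
        float f = 1.89f;
        double d = 1.89;
        // float carries roughly 7 significant decimal digits, double roughly 15-16,
        // so double delays the visible error but does not eliminate it.
        System.out.println(11 * f);  // 20.789999 (from the question's output)
        System.out.println(792 * d); // 1496.8799999999999 (from the question's edit)
    }
}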
When you print the result of a double operation, you need to use appropriate rounding.
System.out.printf("%.2f%n", 1.89 * 792);
prints
1496.88
If you want to round the result to a given precision, you can apply the rounding yourself.
double d = 1.89 * 792;
d = Math.round(d * 100) / 100.0;
System.out.println(d);
prints
1496.88
However, as you can see, this prints as expected, as there is a small amount of implied rounding when a double is printed. It is worth noting that (double) 1.89 is not exactly 1.89; it is a close approximation.
new BigDecimal(double) converts the exact value of the double without any implied rounding. This can be useful for finding the exact value of a double.
System.out.println(new BigDecimal(1.89));
System.out.println(new BigDecimal(1496.88));
prints
1.8899999999999999023003738329862244427204132080078125
1496.8800000000001091393642127513885498046875
Most of your question has been pretty well covered, though you might still benefit from reading the [floating-point]
tag wiki to understand why the other answers work.
However, nobody has addressed "why it only does it starting at 11 and doubling the value every time," so here's the answer to that:
for(int i = 11; i < 800; i*=2)
    ╚═══╤════╝           ╚╤═╝
        │                 └───── "double the value every time"
        │
        └───── "start at 11"
You could use doubles instead of floats.
If you really need arbitrary precision, use BigDecimal.
First of all, Float is the wrapper class for the primitive float, and doubles have more precision.
But if you only want to calculate down to the second digit (for monetary purposes, for example), use an integer (as if you were using cents as the unit) and add some scaling logic when you multiply or divide; see the sketch below.
Or, if you need arbitrary precision, use BigDecimal.
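Here is a minimal sketch of the cents-as-integer idea from the second point above, assuming non-negative amounts; the variable names and the printf formatting are illustrative, not from the original answer:
public class CentsDemo
{
    public static void main(String[] args)
    {
        // Represent 1.89 (dollars, say) as 189 cents; long arithmetic is exact.
        long priceInCents = 189;
        long totalInCents = priceInCents * 792; // exactly 149688, no rounding error
        // Scale back down only when formatting for display.
        System.out.printf("%d.%02d%n", totalInCents / 100, totalInCents % 100); // 1496.88
    }
}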
If precision is vital, you should use BigDecimal to make sure that the required precision is maintained. When you set up the calculation, remember to instantiate the values from Strings instead of doubles.
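For instance, a short sketch of the difference between the two constructors (the second value is the long approximation shown earlier in this thread):
import java.math.BigDecimal;

public class ConstructorDemo
{
    public static void main(String[] args)
    {
        // The String constructor preserves exactly what you wrote.
        System.out.println(new BigDecimal("1.89")); // 1.89
        // The double constructor inherits the binary approximation.
        System.out.println(new BigDecimal(1.89));   // 1.8899999999999999023...
    }
}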
I never had a problem with simple arithmetic precision in Basic, Visual Basic, FORTRAN, ALGOL, or other "primitive" languages. It is beyond comprehension that Java can't do simple arithmetic without introducing errors. I need just two digits to the right of the decimal point for doing some accounting. Using Float, subtracting 1000 from 1355.65 gives me 355.650002! To get around this ridiculous error I have implemented a simple solution: I process my input by separating the values on each side of the decimal point as characters, converting each to an integer, scaling everything to a common integer unit, and adding the two parts back together as integers. Ridiculous, but no errors are introduced by the poor Java algorithms.
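A minimal sketch of the splitting approach described above, assuming non-negative inputs with at most three decimal places; toThousandths is an illustrative helper name, not part of the original post:
public class FixedPointDemo
{
    // Parse a decimal string such as "1355.65" into thousandths (scale 1000).
    // Assumes a non-negative value with no sign character.
    static long toThousandths(String value)
    {
        String[] parts = value.split("\\.");
        long whole = Long.parseLong(parts[0]) * 1000;
        // Pad or truncate the fractional part to exactly three digits.
        String frac = parts.length > 1 ? (parts[1] + "000").substring(0, 3) : "000";
        return whole + Long.parseLong(frac);
    }

    public static void main(String[] args)
    {
        long result = toThousandths("1355.65") - toThousandths("1000");
        System.out.printf("%d.%03d%n", result / 1000, result % 1000); // 355.650
    }
}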