I was shocked to see such a basic subtraction bug in JS. I am sure most of you here have experienced this issue; please help me with a workaround. All I am doing is subtracting a number from 100, which gives unexpected results. An example is stated below:
100 - 99.1
// returns 0.9000000000000057
Am I doing something wrong here? :S I am confused.
Floating-point values are never as accurate as you might expect. You can use Number's toFixed method to round to the answer you need, but note that toFixed returns a string, and the subtraction operator coerces the strings straight back to binary floats, so round the result rather than the operands:
(100 - 99.1).toFixed(2) // "0.90"
You are working with floating point numbers, not integers. This is expected.
The reason is that you can't accurately represent numbers like 0.1 and 0.3 in binary, just as you can't represent 1/3 accurately in decimal form.
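The classic demonstration of this (not from the question, but the same root cause):

```javascript
// Neither 0.1 nor 0.2 has an exact binary representation,
// so their sum is slightly off from 0.3.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
```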
In JavaScript, all numeric values are of the Number type, which is represented as a double-precision floating-point number. On another note, you can retrieve a string representation of such a number to a specific decimal point by using Number's toFixed method, which accepts the number of decimal places the returned string should show.
An example:
var five = 5.00001;
console.log(five.toFixed(1)); //5.0
This is how floating-point numbers work; they are not exact. http://en.wikipedia.org/wiki/Floating_point might be useful reading.
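One common workaround is to compare floats against a tolerance rather than for exact equality. A minimal sketch (nearlyEqual is just an illustrative name, not a built-in):

```javascript
// Treat two floats as equal if they differ by less than epsilon.
function nearlyEqual(a, b, epsilon) {
  return Math.abs(a - b) < epsilon;
}

console.log(100 - 99.1);                         // 0.9000000000000057
console.log(nearlyEqual(100 - 99.1, 0.9, 1e-9)); // true
```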
Rather than displaying the raw floating-point expression, assign the result to a variable and round it with toFixed. Here is an example:
var c = 80.35 - 20.35;
alert(c.toFixed(2));
This alerts 60.00, whereas other ways of doing it will return 59.99999999….
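If the values represent money, another frequently suggested approach (an assumption about the use case, not part of the answer above) is to do the arithmetic in integer cents, where it is exact, and only scale back for display:

```javascript
// Scale to integer cents, subtract exactly, then scale back.
// Math.round repairs the tiny error the multiplication itself can introduce.
function subtractMoney(a, b) {
  return (Math.round(a * 100) - Math.round(b * 100)) / 100;
}

console.log(subtractMoney(100, 99.1));    // 0.9
console.log(subtractMoney(80.35, 20.35)); // 60
```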
This is not a perfect answer, but it works.
function format_float_bug(num) {
  return parseFloat(num.toFixed(15));
}
You can use it as follows:
format_float_bug(4990 % 10);
because for a number like 49.89999999999999857891452848, the digits after the first 15 decimal places are where the 9999999… noise lives, so rounding there collapses it back to the intended value.
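For instance, applying the helper above to the classic 0.1 + 0.2 case strips the noise:

```javascript
// Round to 15 decimal places, then parse back to a number.
function format_float_bug(num) {
  return parseFloat(num.toFixed(15));
}

console.log(0.1 + 0.2);                   // 0.30000000000000004
console.log(format_float_bug(0.1 + 0.2)); // 0.3
```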