.NET trouble when using doubles

开发者 https://www.devze.com 2023-02-15 16:00 Source: web
In .NET, when I subtract 1.35 from 1.35072 it shows .000719999999. How could I get .00072 when using a double?

TOTKILO.Text = KILO.Text * TOUCH.Text * 0.01;    //here 1.35072
TextBox10.Text = TextBox9.Text * TextBox8.Text * 0.01; //here 1.35
K = Val(TOTKILO.Text) - Val(TextBox10.Text);  // here it shows 0.000719999999
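The behavior in the question is not specific to .NET: any IEEE 754 double stores 1.35072 and 1.35 as binary approximations, so their difference is not exactly 0.00072. A minimal sketch of the same effect (in Python here only because its floats are the same 64-bit doubles):

```python
# Both literals are rounded to the nearest 64-bit binary double,
# so the subtraction carries a tiny representation error.
a = 1.35072
b = 1.35
diff = a - b

print(diff)             # slightly below 0.00072, not exactly it
print(diff == 0.00072)  # False: the stored bit patterns differ
```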


Depends on what you're really asking.

If you want to round to five decimal places, you can just do:

double x = 1.35072;
double y = 1.35;
double z = Math.Round(x - y, 5); // 0.00072

If, on the other hand, your goal is to always get precise results from adding and subtracting decimal numbers, use the decimal type instead of double. It is inherently a base-10 type (rather than base-2), so it can represent numbers written in decimal form exactly.

decimal x = 1.35072M;
decimal y = 1.35M;
decimal z = x - y; // 0.00072M


From what I've seen, you almost never actually need a double. Nearly all the time a decimal (System.Decimal) works for what you need, and because it stores base-10 digits it represents values like 1.35 exactly, so you won't see these rounding artifacts in simple addition and subtraction.
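The contrast between base-2 and base-10 arithmetic can be demonstrated directly. Python's `decimal.Decimal` is used below only as a stand-in for C#'s `System.Decimal`, since both store base-10 digits and make `1.35072 - 1.35` exact:

```python
from decimal import Decimal

# Base-2 doubles: the difference picks up representation error.
binary = 1.35072 - 1.35

# Base-10 decimals: both operands are stored exactly, so the
# subtraction is exact, mirroring C#'s 1.35072M - 1.35M.
exact = Decimal("1.35072") - Decimal("1.35")

print(binary == 0.00072)            # False
print(exact == Decimal("0.00072"))  # True
```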


Doubles and floats are binary floating-point types, so they hold many decimal values only to a limited precision rather than exactly as we visualize them. Round the results to a few decimal places before displaying or comparing them.

Additional details are in How To Work Around Floating-Point Accuracy/Comparison Problems.
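A common way to act on that advice is to round before displaying, or to compare against a tolerance instead of testing exact equality. A small sketch, assuming five decimal places is enough precision for the values in the question (Python used as a stand-in; its floats are the same 64-bit doubles):

```python
import math

diff = 1.35072 - 1.35

# Round to a fixed number of decimals before displaying.
display = round(diff, 5)

# Or compare with an absolute tolerance instead of ==.
close = math.isclose(diff, 0.00072, abs_tol=1e-9)

print(display)  # 0.00072
print(close)    # True
```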

