Can anyone explain why I am getting the following results?
Dim badDecimal As Decimal = 54.50327999999999
Dim expectedDecimal As Decimal = CDec("54.50327999999999")
badDecimal = 54.50328D, while expectedDecimal = 54.50327999999999D. My understanding is that badDecimal should contain the value of expectedDecimal (the fact that expectedDecimal can hold the correct value suggests that the Decimal type has the precision to hold the value).
Thanks in advance for any help given.
According to this page: Decimal Data Type (Visual Basic), you need to suffix decimal literal values with the uppercase character D; otherwise the compiler will compile the literal as an appropriate, but different, numeric type such as Integer, Long, or Double, depending on the value of the constant.
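To see this for yourself, you can ask the compiler what type it assigns each literal; a quick sketch (the GetType calls and printed type names are my illustration, not from the linked page):
Console.WriteLine((54.50327999999999).GetType())    ' System.Double - no suffix, so the literal is a Double
Console.WriteLine((54.50327999999999D).GetType())   ' System.Decimal - the D suffix makes it a Decimal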
In your case, the code basically looks like this to the compiler:
Dim badDecimal As Decimal = (constant of type System.Double)
So the constant value has already lost its precision at compile time, before it is ever converted to Decimal.
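In other words, the value takes a round trip through Double before it ever becomes a Decimal. A minimal sketch of what effectively happens (the variable name asDouble is mine; the printed value matches what you observed):
Dim asDouble As Double = 54.50327999999999     ' the unsuffixed literal is typed as Double
Dim badDecimal As Decimal = CDec(asDouble)     ' the conversion starts from the already-rounded Double
Console.WriteLine(badDecimal)                  ' 54.50328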
Simply change the code to this:
Dim badDecimal As Decimal = 54.50327999999999D   ' <-- add the "D" suffix
and it should work as expected.
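Putting it together, a small self-contained sketch (module and variable names are just for illustration):
Module DecimalLiteralDemo
    Sub Main()
        ' The D suffix makes the literal a Decimal from the start, so there is no Double round trip
        Dim goodDecimal As Decimal = 54.50327999999999D
        Dim expectedDecimal As Decimal = CDec("54.50327999999999")

        Console.WriteLine(goodDecimal)       ' 54.50327999999999
        Console.WriteLine(expectedDecimal)   ' 54.50327999999999
    End Sub
End Module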
However, I would be wary of expecting the two variables to compare as identical. There are enough questions on Stack Overflow about "problems" with floating-point types that I should at least warn you: you might have a minor difference in the Nth decimal, tiny enough not to show up in the display or the debugger, but enough to make the two variables compare as different.
So keep that in mind if you intend to compare them. The typical way is to subtract one value from the other, take the absolute value, and compare that against some minuscule tolerance, effectively saying "I accept a difference this big, but no bigger".
i.e. like this:
If Math.Abs(badDecimal - expectedDecimal) < 0.000001D Then
instead of this:
If badDecimal = expectedDecimal Then
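For example, wrapped in a small helper (NearlyEqual and the default tolerance are just a hypothetical sketch; pick a tolerance that suits your data):
' Hypothetical helper: True when the two values differ by less than the tolerance
Function NearlyEqual(a As Decimal, b As Decimal, Optional tolerance As Decimal = 0.000001D) As Boolean
    Return Math.Abs(a - b) < tolerance
End Function

If NearlyEqual(badDecimal, expectedDecimal) Then
    ' treat the values as equal for practical purposes
End If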