
Why do I see -0.000000000000001 in an Access query?


I have this SQL:

SELECT Sum(Field1), Sum(Field2), Sum(Field1)+Sum(Field2)
FROM Table
GROUP BY DateField
HAVING Sum(Field1)+Sum(Field2)<>0;

The problem is that sometimes the sums of Field1 and Field2 come out to values like 9.5 and -10.3, and the result is -0.800000000000001. Could anybody explain why this happens and how to solve it?

Why this happens

The float and double types store numbers in base 2, not in base 10. Sometimes a number can be represented exactly in a finite number of bits:

9.5 → 1001.1

And sometimes it can't.

10.3 → 1010.0 1001 1001 1001 1001 1001 1001 1001 1001...

In the latter case, the number will get rounded to the closest value that can be represented as a double:

1010.0100110011001100110011001100110011001100110011010 base 2
= 10.300000000000000710542735760100185871124267578125 base 10
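You can see this rounded value directly from Python: constructing a decimal.Decimal from a float exposes the exact binary value stored in the double (this uses Python's standard decimal module):

>>> from decimal import Decimal
>>> Decimal(10.3)   # the exact value of the double closest to 10.3
Decimal('10.300000000000000710542735760100185871124267578125')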

When the subtraction is done in binary, you get:

-0.11001100110011001100110011001100110011001100110100000
= -0.800000000000000710542735760100185871124267578125

Output routines will usually hide most of the "noise" digits, as the examples below show:

  • Python 3.1 rounds it to -0.8000000000000007.
  • SQLite 3.6 rounds it to -0.800000000000001.
  • printf %g rounds it to -0.8.
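For example, in any recent Python (3.1 or later, where repr picks the shortest string that round-trips):

>>> 9.5 - 10.3
-0.8000000000000007
>>> '%g' % (9.5 - 10.3)   # C-style %g formatting, as in printf
'-0.8'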

Note that, even on systems that display the value as -0.8, it's not the same as the best double approximation of -0.8, which is:

-0.11001100110011001100110011001100110011001100110011010
= -0.8000000000000000444089209850062616169452667236328125

So, in any programming language using double, the expression 9.5 - 10.3 == -0.8 will be false.
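In Python, for instance, the comparison fails, and the usual workaround is to compare within a tolerance; math.isclose is one option:

>>> 9.5 - 10.3 == -0.8
False
>>> import math
>>> math.isclose(9.5 - 10.3, -0.8)   # equal within a relative tolerance of 1e-9
True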

The decimal non-solution

With questions like these, the most common answer is "use decimal arithmetic". This does indeed give better output in this particular example. Using Python's decimal.Decimal class:

>>> from decimal import Decimal
>>> Decimal('9.5') - Decimal('10.3')
Decimal('-0.8')

However, you'll still have to deal with

>>> Decimal(1) / 3 * 3
Decimal('0.9999999999999999999999999999')
>>> Decimal(2).sqrt() ** 2
Decimal('1.999999999999999999999999999')

These may be more familiar rounding errors than the ones binary numbers have, but that doesn't make them less important.

In fact, binary fractions are more accurate than decimal fractions with the same number of bits, because of a combination of:

  • The hidden bit unique to base 2 (a normalized binary significand always starts with 1, so that bit doesn't need to be stored), and
  • The suboptimal radix economy of decimal.

Binary floating point is also much faster (on PCs) because it has dedicated hardware.

There is nothing special about base ten. It's just an arbitrary choice based on the number of fingers we have.

It would be just as accurate to say that a newborn baby weighs 0x7.5 lb (in more familiar terms, 7 lb 5 oz) as to say that it weighs 7.3 lb. (Yes, there's a 0.2 oz difference between the two, but it's within tolerance.) In general, decimal provides no advantage in representing physical measurements.
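To check that arithmetic in Python (float.fromhex parses hexadecimal fractions): 0x7.5 is 7 + 5/16 = 7.3125 lb, which is exactly 7 lb 5 oz, while 7.3 lb is 7 lb 4.8 oz, hence the 0.2 oz difference.

>>> float.fromhex('0x7.5p0')   # hexadecimal 7.5 = 7 + 5/16
7.3125
>>> 7 + 5/16                   # 7 lb 5 oz, expressed in pounds
7.3125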

Money is different

Unlike physical quantities which are measured to a certain level of precision, money is counted and thus an exact quantity. The quirk is that it's counted in multiples of 0.01 instead of multiples of 1 like most other discrete quantities.

If your "10.3" really means $10.30, then you should use a decimal number type to represent the value exactly.

(Unless you're working with historical stock prices from the days when they were in 1/16ths of a dollar, in which case binary is adequate anyway ;-) )
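A quick sketch of the difference when counting money in Python (the ten-dimes example is mine, not from the question):

>>> from decimal import Decimal
>>> sum([0.10] * 10) == 1.0                          # ten dimes as binary floats
False
>>> sum([Decimal('0.10')] * 10) == Decimal('1.00')   # ten dimes counted exactly
True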

Otherwise, it's just a display issue.

You got an answer correct to 15 significant digits. That's correct for all practical purposes. If you just want to hide the "noise", use the SQL ROUND function.
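The same idea in Python, rounding only for display (4 decimal places here is an arbitrary choice):

>>> round(9.5 - 10.3, 4)
-0.8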


I'm certain it is because the float data type (aka Double or Single in MS Access) is inexact. It is not like decimal, which is a simple value scaled by a power of 10. Float values are scaled by powers of 2 instead, which means that many base-10 fractions (such as 10.3) cannot be represented exactly.

The cure is to change Field1 and Field2 from float/single/double to decimal or currency. If you give examples of the smallest and largest values you need to store, including the smallest and largest fractions needed such as 0.0001 or 0.9999, we can possibly advise you better.

Be aware that versions of Access before 2007 can have problems with ORDER BY on decimal values. Please read the comments on this post for some more perspective on this. In many cases, this would not be an issue for people, but in other cases it might be.

In general, float should be used for values that can end up being extremely small or large (smaller or larger than a decimal can hold). You need to understand that float trades precision for range: a decimal will overflow or underflow where a float can just keep on going, but a float has only a limited number of significant digits, whereas all of a decimal's digits are significant.
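A quick illustration of float's limited significant digits in Python (note that Python's decimal module is arbitrary-precision, unlike a fixed-scale SQL decimal, so this shows only the float side of the trade-off):

>>> 1e16 + 1 == 1e16       # past ~15-17 significant digits, doubles can't tell these apart
True
>>> from decimal import Decimal
>>> Decimal(10**16) + 1 == Decimal(10**16)
False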

If you can't change the column types, then in the meantime you can work around the problem by rounding your final calculation. Don't round until the very last possible moment.
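A small sketch in Python of why rounding should wait until the end (the 0.144 values are made up for illustration):

>>> values = [0.144] * 10                        # true total: 1.44
>>> round(sum(values), 2)                        # round once, at the very last moment
1.44
>>> round(sum(round(v, 2) for v in values), 2)   # rounding each value first drops 0.004 per item
1.4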

Update

A criticism has been leveled at my recommendation to use decimal: not the point about unexpected ORDER BY results, but the claim that float is overall more accurate with the same number of bits.

No contest to this fact. However, I think it is more common for people to be working with values that are in fact counted or are expected to be expressed in base ten. I see questions over and over in forums about what's wrong with their floating-point data types, and I don't see these same questions about decimal. That means to me that people should start off with decimal, and when they're ready for the leap to how and when to use float they can study up on it and start using it when they're competent.

In the meantime, while it may be a tad frustrating to have people always recommending decimal when you know it's not as accurate, don't let yourself get divorced from the real world, where having more familiar rounding errors at the expense of very slightly reduced accuracy is of real value.

Let me point out to my detractors that the example

Decimal(1) / 3 * 3 yielding 0.9999999999999999999999999999

is, in what should be familiar words, "correct to 28 significant digits", which is "correct for all practical purposes."

So if we have two ways of doing what is, practically speaking, the same thing, and both of them can represent numbers very precisely out to a ludicrous number of significant digits, and both require rounding, but one of them has markedly more familiar rounding errors than the other, I can't accept that recommending the more familiar one is in any way bad.

What is a beginner to make of a system that can perform a - a and not get 0 as an answer? He's going to be confused, and stopped in his work while he tries to fathom it. Then he'll go ask for help on a message board and get told the pat answer "use decimal". Then he'll be just fine for five more years, until he has grown enough to get curious one day, finally studies what float is really doing, and becomes able to use it properly.

That said, in the final analysis I have to say that slamming me for recommending decimal seems just a little bit off in outer space.

Last, I would like to point out that the following statement is not strictly true, since it overgeneralizes:

The float and double types store numbers in base 2, not in base 10.

To be accurate, most modern systems store floating-point data types with a base of 2. But not all! Some use, or have used, base 10. For all I know, there are systems that use base 3, which is closer to e and thus has a more optimal radix economy than base 2 (as if that really mattered to 99.999% of all computer users). Additionally, saying "float and double types" could be a little misleading, since a double IS a float, but a float isn't necessarily a double. "Float" is short for floating point, while Single and Double are floating-point subtypes that connote the total precision available. There are also Single-Extended and Double-Extended floating-point data types.


It is probably an effect of how floating-point numbers are implemented. Sometimes numbers cannot be represented exactly, and sometimes the result of an operation is slightly off from what we might expect, for the same reason.

The fix would be to use a rounding function on the values to cut off the extraneous digits. Like this (I've simply rounded to 4 places after the decimal point, but of course you should use whatever precision is appropriate for your data):

SELECT Sum(Field1), Sum(Field2), Round(Sum(Field1)+Sum(Field2), 4)
FROM Table
GROUP BY DateField
HAVING Round(Sum(Field1)+Sum(Field2), 4)<>0;
