I am running this SQL code in SQL Server 2005
declare @a as float(24)
set @a=0.85
select cast ( @a as float(53))
and the result is
0.850000023841858
does anyone know why?
Thanks.
Seven significant digits is roughly the precision you get when the N in float(N) is 24 or less (SQL Server stores those as single-precision, 4-byte values). Digits beyond that are not meaningful, so when the value is cast to a float of greater precision they can show up as pretty much anything. That's the 'floating point'.
http://msdn.microsoft.com/en-us/library/ms173773.aspx
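Here is a minimal sketch (T-SQL, illustrative variable names) that puts the two precisions side by side: the value widened from float(24) carries the single-precision rounding error, while the same literal stored directly as float(53) displays as 0.85.

declare @single float(24), @double float(53)
set @single = 0.85   -- rounded to the nearest 24-bit (single precision) value
set @double = 0.85   -- rounded to the nearest 53-bit (double precision) value
select cast(@single as float(53)) as widened_single,  -- 0.850000023841858
       @double as stored_as_double                    -- 0.85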
The number you see is as close as the computer can get within however many binary digits it has available to use.
If you try to write 1/3 in decimal but only have space for 8 digits, the closest you can get is 0.33333333. That's still off by quite a bit, but with more decimal places you could get more accurate. This is exactly the same problem the computer faces, except that where each of your successive digits represents 1/10ths, 1/100ths, 1/1000ths, the computer works in 1/2, 1/4, 1/8, 1/16.
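To make that concrete, here is a small sketch (T-SQL, hypothetical variable names) that extracts the binary expansion of 0.85 by repeated doubling, one bit per 1/2, 1/4, 1/8, ... place. The pattern repeats forever, so a 24-bit float has to cut it off and round, which is where 0.850000023841858 comes from.

declare @x decimal(20, 18), @bits varchar(40), @i int
set @x = 0.85        -- decimal stores 0.85 exactly, so the doubling below is exact
set @bits = '0.'
set @i = 1
while @i <= 24       -- 24 bits, matching the mantissa size of float(24)
begin
    set @x = @x * 2
    if @x >= 1
    begin
        set @bits = @bits + '1'
        set @x = @x - 1
    end
    else
        set @bits = @bits + '0'
    set @i = @i + 1
end
select @bits as binary_expansion  -- 0.110110011001100110011001 (the pattern never terminates)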