difference between float and [numeric](18, 10)

What's the difference between the SQL Server types:

float and [numeric](18, 10)


FLOAT conforms to IEEE 754 and approximates decimal representation.

NUMERIC is exact in decimal representation (up to the declared precision).

SELECT  CAST(PI() AS FLOAT),
        CAST(PI() AS NUMERIC(20, 18)),
        CAST(PI() AS NUMERIC(5, 3))


---------------------- --------------------------------------- ---------------------------------------
3.14159265358979        3.141592653589793100                    3.142


numeric is a decimal (base-10) fixed-point datatype; float is a binary (base-2) floating-point datatype.

A numeric(18, 10) defines a decimal with precision 18 (the maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point) and scale 10 (the maximum number of decimal digits that can be stored to the right of the decimal point). It consumes 9 bytes of storage, compared to a float's default 8 bytes.
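
As a quick sketch of what precision 18 and scale 10 allow (assuming SQL Server 2008+ for inline variable initialization; the example values are arbitrary):

-- Precision 18, scale 10: at most 18 digits total, 10 after the decimal point,
-- which leaves at most 8 digits before it.
DECLARE @n NUMERIC(18, 10) = 12345678.0123456789;   -- fits: 8 integer + 10 fractional digits
SELECT @n AS value, DATALENGTH(@n) AS bytes_used;    -- should report 9 bytes for precision 10-19

-- DECLARE @overflow NUMERIC(18, 10) = 123456789.0;  -- 9 integer digits: arithmetic overflow error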

Here's a starting point for more reading.


float is defined as a binary floating-point number.

These are much more efficient to work with in binary computers than decimal floating-point numbers (in fact, most math operations on floats are implemented in hardware), and can be highly precise. However, since the precision is measured in bits, not decimal places, floats are not ideal for use with algorithms that depend on the decimal representation of a number (e.g. financial applications).
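
As a small sketch of that decimal-representation issue (assuming SQL Server 2012+ for IIF; the variable names are just for illustration): adding 0.1 ten times accumulates rounding error in a float but stays exact in a numeric.

-- Add 0.1 ten times: FLOAT accumulates binary rounding error, NUMERIC stays exact.
DECLARE @f FLOAT = 0, @n NUMERIC(18, 10) = 0, @i INT = 0;
WHILE @i < 10
BEGIN
    SET @f = @f + 0.1;   -- 0.1 has no exact base-2 representation
    SET @n = @n + 0.1;   -- the decimal literal is stored exactly
    SET @i = @i + 1;
END;
SELECT @f AS float_sum,
       @n AS numeric_sum,
       IIF(@f = 1.0, 'equal', 'not equal') AS float_equals_one;   -- typically 'not equal',
                                                                  -- even if float_sum displays as 1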

A couple of good references are Wikipedia's page on IEEE 754 (the floating-point standard), and David Goldberg's ACM article What Every Computer Scientist Should Know About Floating-Point Arithmetic.
