SQL Server BIGINT or DECIMAL(18,0) for primary key

We have a SQL Server 2005 database for which we want to improve performance of bulk delete/insert/selects and I notice it uses decimal(18,0) for its primary keys. I understand this will give us many more values than bigint but was hoping it might be a quick win and should last us for many million years of growth by my calculations.

I see in the .NET docs that decimals take 16 bytes instead of the 8 required by longs, but in SQL Server it looks like bigint takes 8 bytes while decimal(18,0) takes only 5 bytes, as also seen by select DATALENGTH(max(id)) from table. Is this correct?

Is there any other reason bigint might be slower or should I stick to decimal(18,0)?


You get this range with bigint:

-2^63 to 2^63-1 

also known as roughly:

-9.2 x 10^18 to 9.2 x 10^18

You get this range with decimal(18,0):

-10^18 + 1 to 10^18 - 1
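
If you want the exact endpoints rather than the approximations, you can check them directly in T-SQL (a minimal sketch; the literals are the documented limits of each type):

-- Exact endpoints of each type; one past either end overflows.
SELECT CAST(-9223372036854775808 AS bigint)        AS bigint_min,
       CAST( 9223372036854775807 AS bigint)        AS bigint_max,
       CAST(-999999999999999999 AS decimal(18,0))  AS dec18_min,
       CAST( 999999999999999999 AS decimal(18,0))  AS dec18_max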

Decimal: Storage Bytes per Precision

Precision    Storage Bytes
1-9          5
10-19        9
20-28        13
29-38        17

Integer Types and Storage Bytes

integer type    Storage Bytes
bigint          8
int             4
smallint        2
tinyint         1
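
Since decimal is fixed-length, DATALENGTH confirms these sizes directly (a minimal check; the literal 1 is arbitrary, since the storage size does not depend on the value):

-- Storage size per type is fixed, regardless of the value stored.
SELECT DATALENGTH(CAST(1 AS bigint))        AS bigint_bytes, -- 8
       DATALENGTH(CAST(1 AS decimal(9,0)))  AS dec9_bytes,   -- 5
       DATALENGTH(CAST(1 AS decimal(18,0))) AS dec18_bytes,  -- 9
       DATALENGTH(CAST(1 AS decimal(38,0))) AS dec38_bytes   -- 17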

Thoughts

The two types posted in your question yield unique-value counts of the same order of magnitude, although bigint actually covers roughly nine times as many values (about 1.8 x 10^19 versus 2 x 10^18).

Also, you are not going to see a significant performance change no matter your choice, but you will create friction for other programmers on the team if you start using decimals where they expect an integer. This is a minor point.

To address your specific issue: if you want a larger range, use decimal(38,0). This gives you:

-10^38 + 1 to 10^38 - 1
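
For concreteness, here is the top of that range (a quick sketch; REPLICATE simply builds a string of 38 nines to convert):

-- The largest decimal(38,0) value: 38 nines.
SELECT CAST(REPLICATE('9', 38) AS decimal(38,0)) AS dec38_max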

If you are concerned about speed, use the minimum precision that will last the lifetime of your software.

If you're not measuring time in nanoseconds, then choose the option that best fits your programmers' mindset and your desire for a very long-lived range of numbers.

References

  • Decimal and Numeric Types
  • Integer Types


DATALENGTH is casting to varchar before counting bytes, so your max value is < 100000.
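
To illustrate the distinction (a minimal sketch; @id and the value 12345 are arbitrary): measuring a varchar rendering of a value counts its digits, while measuring the decimal itself returns its fixed storage size.

-- One byte per digit for the varchar form; fixed 9 bytes for the decimal.
DECLARE @id decimal(18,0)
SET @id = 12345
SELECT DATALENGTH(CONVERT(varchar(20), @id)) AS varchar_bytes, -- 5
       DATALENGTH(@id)                       AS decimal_bytes  -- 9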

The 9 bytes can be proved with the script below: sys.columns has a max_length column (decimal is fixed-length, so it is always 9 bytes for this precision, before you ask otherwise).

-- Create a throwaway table with a decimal(18,0) column...
CREATE TABLE dbo.foo (bar decimal(18,0))
GO
-- ...and check its metadata: max_length reports 9 bytes.
SELECT * FROM sys.columns WHERE object_id = OBJECT_ID('dbo.foo')
GO
DROP TABLE dbo.foo
GO

For legacy reasons, decimal(18, 0) was often used as a surrogate for a 64-bit integer before bigint was added in SQL Server 2000.

decimal(18, 0) and bigint are roughly the same in range; decimal is one byte larger at 9 bytes, as per the documentation referenced above.

On top of that, a plain integer will be fractionally (maybe not measurably) faster than decimal. That said, if you expect to have more than 4 billion rows in the next 5 years or so, then the performance should matter. If it doesn't, then just use int.
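
If int does fit your growth, you can stretch it further by seeding the identity at the type's minimum (a sketch; dbo.orders and its columns are made up for illustration):

-- Seeding at int's minimum yields ~4.3 billion values instead of ~2.1 billion.
CREATE TABLE dbo.orders
(
    id        int IDENTITY(-2147483648, 1) PRIMARY KEY,
    placed_at datetime NOT NULL
)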
