SQL Server 2000 Table Optimization

I've been working all day on optimizing a SQL Server 2000 database table with ~9 million rows. My only database experience has been with tables of a few hundred rows, so I've never really had to deal with optimization.

I am doing selects and updates based on a 21-digit number.

Using an indexed char(21) type, the queries take more than 2 seconds, and the SQL Server process takes 1.5 GB of RAM and 100% CPU.

With an indexed bigint type, my queries take a few milliseconds and the process takes ~100 MB of RAM.

I just want to understand what is happening here. Is this normal, or is there a specific way a char type can be indexed for better performance?

Here's some of my SQL:

CREATE TABLE data
(
    tableID int NOT NULL IDENTITY PRIMARY KEY,
    tag char(21) NOT NULL UNIQUE,
    dataColumn1 datetime NULL,
    dataColumn2 char(8) NULL,
    lastModified datetime NOT NULL
)

Parameterized query from C#:

SELECT tag FROM data WHERE tag = @tag;

Thanks for any help.


This really isn't unusual; SQL Server handles numbers much better than character data. A bigint field uses 8 bytes, which fits neatly into a memory page. A char(21) field takes 21 bytes, which almost triples the amount of storage needed to hold it in an index.
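Rough arithmetic, assuming ~8,060 usable bytes per 8 KB index page and ignoring per-row overhead: 8-byte bigint keys pack roughly 1,000 entries per page, while 21-byte char keys pack only about 380. The char index therefore needs nearly three times as many pages for the same 9 million rows, which means more I/O and more memory spent caching it.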

Another consideration: is either index clustered? A clustered index will perform much faster than a non-clustered one. There are many additional factors to consider beyond the simple, general statement that numbers perform better and use less space in an index.
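If every lookup is on tag, one option is to move the clustered index from the identity column onto tag. This is a sketch only; the constraint names below are illustrative, since SQL Server auto-generates them unless you named them yourself (check sysobjects for the real names):

-- By default the PRIMARY KEY on tableID takes the clustered index,
-- leaving the UNIQUE constraint on tag as a non-clustered index.
ALTER TABLE data DROP CONSTRAINT UQ_data_tag  -- the UNIQUE constraint on tag
ALTER TABLE data DROP CONSTRAINT PK_data      -- the clustered PK on tableID

-- Recreate the PK as non-clustered, then cluster on the lookup column.
ALTER TABLE data ADD CONSTRAINT PK_data
    PRIMARY KEY NONCLUSTERED (tableID)
CREATE UNIQUE CLUSTERED INDEX IX_data_tag ON data (tag)

Be aware that rebuilding the clustered index rewrites all 9 million rows, so run it in a maintenance window.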


Character comparisons are somewhat slower: the collation sequence has to be considered, not to mention the physical difference in size between a 21-character string and an 8-byte bigint. The index seek just can't be as efficient, because it has to evaluate each byte in your char(21) value, determine that character's sort order under the collation, and then decide how it compares to the matching character in the value you're looking for.
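One related option, if a plain byte-wise sort order is acceptable for your tags (they look like pure digits): a binary collation skips the linguistic sort-order lookup entirely. A sketch, not tested against your schema; you would have to drop the UNIQUE constraint on the column first and recreate it afterwards:

-- Switch the column to a binary collation so comparisons are
-- straight byte-for-byte, with no sort-order lookups.
ALTER TABLE data ALTER COLUMN tag char(21)
    COLLATE Latin1_General_BIN NOT NULL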

A clustered index will perform better for almost any query, because the data (including the index pages, IIRC; I'm not a DBA) are stored in disk-seek order, or at least closer to it.


My money's on the possibility that you're trying to query on some function of the char column, something like:

SELECT Column
FROM Table
WHERE UPPER(CharColumn) = 'ABC'

Obviously I'm just guessing, but this is a common source of performance issues with char/varchar/nvarchar columns. If you're writing queries like this, you have to realize that wrapping the column in a function prevents SQL Server from performing an index seek.
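If that's what's going on, the fix is to keep the column bare on the left-hand side of the comparison. For instance, continuing the UPPER() case above:

-- Non-sargable: the function on the column forces a scan.
SELECT tag FROM data WHERE UPPER(tag) = @tag

-- Sargable: compare the bare column so the index on tag can be
-- seeked. (Under the default case-insensitive collation, UPPER()
-- buys you nothing here anyway.)
SELECT tag FROM data WHERE tag = @tag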

Also take into account the number of rows you're actually returning; 9 million rows isn't necessarily a lot to have in the table, but even 100,000 rows is a huge amount to have in a result set. Sometimes the bottleneck is just streaming all the results.
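If you do need to pull large sets, it's worth capping the result while you investigate, just to see whether streaming rows is the real bottleneck. A sketch (SQL Server 2000 only accepts a literal with TOP, not a variable):

-- Return an arbitrary 1,000 rows to time the query itself,
-- separate from the cost of streaming a huge result set.
SELECT TOP 1000 tag, lastModified
FROM data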

If none of this describes your problem, then I suggest updating your post with some information about the schema, the indexes you have, and the query that's running slow. There are further optimizations you can do, although SQL Server 2000 is getting on in years, so your options are limited.


Based on the schema you posted, and assuming that you have an index on the tag column, that you are only selecting tag (and no other columns that might not be covered), that your WHERE condition is a simple equality on tag, and that the number of results returned is reasonably small, I don't think there are any further optimizations you can make. You've pretty much reduced this to the simplest possible case.
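As a final sanity check, you can have SQL Server 2000 print the query plan and confirm the lookup really is an index seek rather than a scan (the literal tag value here is made up):

SET SHOWPLAN_TEXT ON
GO
SELECT tag FROM data WHERE tag = '000000000000000000001'
GO
SET SHOWPLAN_TEXT OFF
GO

Look for "Index Seek" in the output; "Index Scan" or "Table Scan" means the index isn't being used for the lookup.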
