
SQL Server 2008 Performance Question


I have a table with 30 columns and about 3.4 million records. Is it reasonable for SELECT * FROM [Table]; to take 8 to 12 minutes to return all 3.4 million results?

If not, where's a good place/resource to begin diagnosing my problem?


Yes, it is reasonable. A system that is finely tuned and running optimally can deliver 3.4 million rows in about 12 minutes, so this is within the range of expected results...

Nonetheless, here are some places to look for improving performance (two of the relevant DMV checks are sketched just after this list):

  • Does the table fit in the buffer pool? I.e., do you have enough RAM to hold your entire database in memory? If not, you are going to hit the disk for I/O. The Page life expectancy counter is a good indicator.
  • How fast is your disk I/O subsystem? Are we talking about a second-hand 5,000 RPM IDE drive or a RamSAN-500? What throughput does sqliosim report? How about the perf counters Avg. Disk Queue Length and Avg. Disk sec/Transfer on the physical disks? Are they different for reads vs. writes?
  • How fragmented is the table? Scan performance is affected first and foremost by read-ahead efficiency, and read-ahead size is determined by the HoBT fragment size. Perhaps you need to optimize the ETL of the table; consider following the Fast Track methodology.
  • Is there any contention going on? Have you measured lock wait times? Perhaps snapshot isolation can alleviate the problem.
  • Can the client receive the 3.4 million rows in time? Does the server block waiting for client buffers to become available? Again, the wait stats can indicate this.
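
A minimal sketch of those two DMV checks (both DMVs exist in SQL Server 2005 and later); what counts as a healthy value depends on your own hardware baseline:

    -- Buffer pool pressure: how long, in seconds, a page is expected to stay in
    -- cache. Low values relative to your baseline indicate memory pressure.
    SELECT [object_name], counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = N'Page life expectancy';

    -- Top waits since the last restart. High ASYNC_NETWORK_IO suggests the server
    -- is waiting for the client to consume rows; LCK_* waits indicate lock contention.
    SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;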

Another good place to start is to follow the Waits and Queues methodology.


It's most likely that SQL Server is doing its best to get the data you asked for. It's not unreasonable to assume at least 1 KB per record for 30 columns: 3.4 million rows × 1 KB ≈ 3.4 GB.

Just reading 3.4 GB from disk could take minutes on an average machine (and do not forget that this is not just reading; there is obviously some SQL processing overhead on top).
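
Rather than estimating, you can ask the server for the table's actual footprint; sp_spaceused reports the row count and reserved/data sizes directly (the bracketed name is the placeholder from the question):

    -- Actual row count and on-disk size, instead of estimating rows x row width.
    EXEC sp_spaceused N'[Table]';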

But of course, in a real-world scenario you don't want to retrieve all the data anyway...


The best place to start diagnosing your problem is to determine whether you have a problem at all. Set a specific, measurable, business-oriented performance goal, and define exactly how long you think is reasonable for returning the data.

If your answer is 8-12 minutes, then you don't have a problem, which is always a good thing.

If your answer is less than that, then you now know that you have a problem, and how big the problem is (if you said 5 minutes then it's maybe not such a big problem; if you said 10 seconds then it's a much bigger issue). In this case, you'll probably want to start looking at the database performance counters to see whether it has CPU, I/O, memory, or network bottlenecks, and at the execution plan for the query to see whether it could be improved by indexes (though this is unlikely for a SELECT *).
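
If you do decide to dig in, a cheap first measurement is to run the query with the built-in statistics output enabled; a sketch, reusing the table name from the question:

    -- Report I/O and timing statistics for the query under test.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT * FROM [Table];

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

The IO output separates logical reads (served from cache) from physical reads (disk), which tells you whether the query is memory-bound or I/O-bound.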


There are so many questions that could be asked relating to disk I/O, column sizes, and other setup-related things. Bottom line: unless you are on a really, really slow disk and a slow network, it should not take 12 minutes.

The first place to look is the execution plan. It should give you an idea of how SQL Server is handling things.

A couple of things I would ask to troubleshoot further: Is there a primary key? Is it clustered? Is there an ORDER BY? The first two can be checked in the catalog views, as sketched below.
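
A sketch of that check, again using the placeholder table name from the question:

    -- index_id 0 means the table is a heap; index_id 1 is the clustered index.
    SELECT i.name, i.index_id, i.type_desc, i.is_primary_key
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID(N'[Table]')
      AND i.index_id IN (0, 1);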


It might be more interesting to assess the queries your system is actually running. The Profiler tool that comes with SQL Server can log all the queries your system runs. Let it run over a given period (assuming you have a good amount of spare disk space) and it will record which queries are being run and with what parameters. It will also tell you how long each took to execute.

Looking at this and figuring out which queries are using up your CPU time will help you figure out where to focus performance tuning. For instance, if Query A takes 60 seconds to run but runs only once a day, tuning it might have a big impact on that specific app, but it won't make your SQL Server faster overall. If Query B takes 2 seconds to run but runs 4,000 times per day, tuning it may have a much bigger overall impact.

Often, adding relevant indexes and performance-tuning your "big offender" queries can have a very serious positive impact on performance. What the profiler shows you about which queries those are might surprise you.
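
If running Profiler over a long window is impractical, the plan-cache DMVs give a similar cumulative view of which cached queries cost the most; a sketch (it only covers plans still in cache, and the time columns are in microseconds, converted here to milliseconds):

    -- Top cached statements by cumulative elapsed time (SQL Server 2005 and later).
    SELECT TOP (10)
        qs.execution_count,
        qs.total_elapsed_time / 1000 AS total_elapsed_ms,
        qs.total_worker_time  / 1000 AS total_cpu_ms,
        st.text AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;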


Reasonable compared to what?

  1. How wide are the rows?
  2. How fast is your CPU?
  3. How much RAM do you have?
  4. Is the table already in RAM when you start the query?
  5. Are you delivering the results over a network? If so, how fast is it?
  6. How fast is the client that's retrieving the rows?
  7. How fast are your disks?
  8. How fragmented is the table? (A quick check is sketched after this list.)
  9. Is the DB machine doing anything else at the same time?
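
For question 8, fragmentation can be measured directly; a sketch using the placeholder table name (LIMITED mode is the cheapest scan level):

    -- Average fragmentation per index; high values hurt read-ahead during scans.
    SELECT ips.index_id, ips.avg_fragmentation_in_percent, ips.fragment_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'[Table]'), NULL, NULL, 'LIMITED') AS ips;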


I agree with you; I just brought back 20 million rows of data from a SQL Server 2008 box in less than 3 minutes, and the hardware cost less than the SQL license.

Unless your hardware or network really sucks, there is a performance gain to be made somewhere.

