I am working on a SQL job that processes around 75,000 records.
The job runs fine for the first 10,000-20,000 records at a speed of around 500 records/min. After around 20,000 records, execution effectively dies: it loads about 3,000 records every 30 minutes and stays at that rate.
I asked a similar question yesterday and got a few good suggestions on procedure changes. Here's the link: SQL SERVER Procedure Inconsistent Performance
I am still not sure how to find the real issue here. Here are a few of the questions I have:
- If the problem is tempdb, how can I monitor activity on tempdb?
- How can I check whether the network is the bottleneck?
- Are there any other ways to find out what is different between when the job is running fast and when it slows down?
I have been the administrator for a couple of large data warehouse implementations where this type of issue was common. Although I can't be sure of it, it sounds like your server's performance is being degraded either by growing log files or by memory pressure. A great tool for reviewing these types of issues is Perfmon.
A great article on using this tool can be found here
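If you'd rather check the log-growth theory from inside SQL Server while Perfmon is being set up, here is a quick sanity check using standard T-SQL (nothing specific to your job):

```sql
-- Percentage of each database's transaction log currently in use;
-- run this periodically while the job executes and watch for steady growth.
DBCC SQLPERF (LOGSPACE);

-- File sizes and autogrowth settings for the current database.
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb,   -- size is stored in 8 KB pages
       growth,
       is_percent_growth
FROM sys.database_files;
```

Frequent small autogrow events during the slow phase (especially with percent-based growth) would point straight at the log.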
Unless your server is really chimped down, 75000 records should not be a problem for tempdb, so I really doubt that is your problem.
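That said, if you want to rule tempdb out rather than guess, the space-usage DMVs (SQL Server 2005 and later) will show what it's doing while the job runs. A minimal sketch:

```sql
-- Breakdown of tempdb space: user objects, internal objects
-- (sorts, hash joins, spools), the version store, and free space.
SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
       SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM tempdb.sys.dm_db_file_space_usage;
```

If internal-object usage climbs as the job slows down, spills from sorts or hash operations are worth a look.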
Your prior question indicated SQL Server, so I'd suggest running a trace while the proc is running. You can get statement timings and so on from the trace and use that to determine where or what is slowing things down.
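If setting up a Profiler trace is inconvenient, the plan-cache DMVs give comparable per-statement timings. A sketch; 'YourProcName' is a placeholder for your actual procedure name:

```sql
SELECT qs.execution_count,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_us,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                      WHEN -1 THEN DATALENGTH(st.text)
                      ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%YourProcName%'   -- placeholder: filter to your proc
ORDER BY qs.total_elapsed_time DESC;
```

Run it once while the job is still fast and again after it slows down; the statement whose average cost jumps is your suspect.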
You should be running each customer's processing in a separate transaction, or in small groups of customers. Otherwise, the working set of items that the ultimate transaction has to write keeps getting bigger, and each addition causes a rewrite. You can end up forcing your current data to be paged out, and that really slows things down.
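A minimal sketch of that batching pattern; dbo.CustomerQueue, the Processed flag, and dbo.usp_ProcessCustomer are placeholders for whatever your job actually uses:

```sql
DECLARE @batch TABLE (CustomerID int PRIMARY KEY);
DECLARE @rows int;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM @batch;

    -- Grab the next small group of unprocessed customers.
    INSERT INTO @batch (CustomerID)
    SELECT TOP (100) CustomerID
    FROM dbo.CustomerQueue
    WHERE Processed = 0
    ORDER BY CustomerID;

    SET @rows = @@ROWCOUNT;

    IF @rows > 0
    BEGIN
        BEGIN TRANSACTION;   -- one short transaction per group

        -- Per-customer work goes here, e.g.:
        -- EXEC dbo.usp_ProcessCustomer @CustomerID = ...;

        UPDATE q
        SET Processed = 1
        FROM dbo.CustomerQueue AS q
        JOIN @batch AS b ON b.CustomerID = q.CustomerID;

        COMMIT TRANSACTION;  -- frequent commits keep the log and
                             -- working set from growing without bound
    END
END
```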
Check the memory allocated to SQL Server. If it's too small, SQL Server's own working set ends up being paged to disk. If it's too large, you can leave the OS without enough memory.
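You can inspect and cap that allocation with sp_configure; the 4096 MB below is only an example value, so size it to leave headroom for the OS:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Show the current max server memory setting.
EXEC sp_configure 'max server memory (MB)';

-- Example cap: 4096 MB (adjust for your hardware).
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```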