I'm a novice, both here at Stack Overflow and with SQL Server, so please let me know if this question is inappropriate in any way :)
Well, I'm developing a web application that will be used to analyze large amounts of data stored in a SQL Server 2008 database. The interface will not allow users to update or insert any data, so apart from some updates to user data, mainly SELECT commands will be sent to the database.
Every night, the system will close down and refresh its information from other sources. This refresh also involves large amounts of data, so during this stage the database will mostly perform INSERT and UPDATE commands.
I've created appropriate indexes to achieve good performance on SELECTs, but these indexes cause the nightly refresh to be slow. I want the best of both worlds, so I've googled around and found that a common strategy is to drop/disable all indexes before writing data and re-create them afterwards. I've also heard that a better approach might be to limit the fill factor of the indexes, which would save me from writing the scripts for dropping and re-creating the indexes.
What do you think is the best approach here, given that my main goal is good performance? Should I go with the fill factor approach, or should I get my hands dirty and write scripts for dropping/re-creating the indexes? Any suggestions are welcome!
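To clarify what I mean by the second option, here is a minimal sketch of the kind of script I have in mind (the table and index names are just placeholders for my real schema):

```sql
-- Before the nightly refresh: disable a nonclustered index so the bulk
-- INSERT/UPDATE work doesn't have to maintain it.
-- (Disabling the clustered index would make the table inaccessible, so that one stays.)
ALTER INDEX IX_Sales_CustomerId ON dbo.Sales DISABLE;

-- ... nightly INSERT/UPDATE load runs here ...

-- After the refresh: rebuild the index so daytime SELECTs are fast again.
ALTER INDEX IX_Sales_CustomerId ON dbo.Sales REBUILD;
```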
Dropping/re-creating the indexes nightly will help. Fill factor will only provide a benefit if the volume of inserts/updates is causing fragmentation. You can check this by running DBCC SHOWCONTIG
(be careful on production, and if the DB is large).
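For what it's worth, on SQL Server 2008 the documented successor to DBCC SHOWCONTIG is the sys.dm_db_index_physical_stats DMV. A sketch of checking fragmentation either way (the table name here is a placeholder):

```sql
-- Old-style check (still works in 2008, but deprecated):
DBCC SHOWCONTIG ('dbo.Sales');

-- Preferred approach: per-index fragmentation percentages.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Sales'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```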
Another option which should improve performance is to switch the recovery model of the database from full to simple during your "system refresh" and then switch back.
You should take a full backup after you switch back to full, but it may well be worth it; the best thing is to try it against your workload and see.
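A minimal sketch of that switch (the database name and backup path are placeholders):

```sql
-- Before the nightly refresh: simple recovery keeps the log from growing
-- and allows minimally logged bulk operations.
ALTER DATABASE MyAnalyticsDb SET RECOVERY SIMPLE;

-- ... nightly INSERT/UPDATE load runs here ...

-- After the refresh: switch back, then take a full backup to restart the
-- log backup chain.
ALTER DATABASE MyAnalyticsDb SET RECOVERY FULL;
BACKUP DATABASE MyAnalyticsDb TO DISK = N'D:\Backups\MyAnalyticsDb.bak';
```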