We have a database table with around 200,000 records, including 3 ntext columns that hold string data varying in length from 4,000 to 70,000 characters. A plain SELECT on the table takes more than a minute to return data, and even with a WHERE condition and indexes, selecting 12,000 records for a condition takes 40 seconds.
So we decided to change those data types to nvarchar(max), but we still did not notice a major difference, as the data is stored out of the row since it is too long. Is there any better way I can improve the performance of this table?
What makes you think your problem is the field data type? There are several other points to consider first:
- Do your tables have indexes? Are your queries actually using them?
- Do you have enough network bandwidth available?
- Are your NICs running the most up-to-date drivers?
- Have you analyzed your query execution plan? (see the sketch after this list)
- Is your SQL Server under stress (CPU/memory/disk)? And your web server/desktop?
- Is your data correctly normalized?
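For the execution-plan point, a minimal way to gather timings and I/O counts in SSMS (the table, column, and filter below are placeholder names, not your actual schema):

-- Report elapsed/CPU time and logical reads for the query that follows.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT Id, Title -- a narrow column list, not SELECT *
FROM dbo.YourTable
WHERE SomeIndexedColumn = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
-- Also enable "Include Actual Execution Plan" (Ctrl+M in SSMS)
-- before running, to see whether you get index seeks or scans.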
You should make the columns nvarchar(max) instead of ntext, and you could include them in the index as nonkey columns. But that's a lot of data you're fetching; if you need to run the query so often that the one-minute execution time is a problem, then maybe you should rethink your approach.
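For instance (hypothetical table and column names), the conversion plus an index carrying the text as a nonkey column might look like:

-- Convert an ntext column to nvarchar(max); repeat per column.
ALTER TABLE dbo.YourTable ALTER COLUMN Text1 nvarchar(max);

-- nvarchar(max) can be a nonkey INCLUDE column (ntext cannot);
-- the WHERE-clause column is the index key.
CREATE INDEX IX_YourTable_Filter
ON dbo.YourTable (FilterColumn)
INCLUDE (Text1);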
Could you not move the large text fields to a separate table and link them to the main table with a 1-1 relationship? That might help speed things up.
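A minimal sketch of that split, with hypothetical names:

-- Narrow "hot" table keeps only the columns most queries need.
CREATE TABLE dbo.Records (
    Id   int IDENTITY PRIMARY KEY,
    Name nvarchar(200) NOT NULL
    -- ...other small columns...
);

-- The wide text lives in a 1-1 companion table,
-- fetched only when a caller actually needs it.
CREATE TABLE dbo.RecordTexts (
    RecordId int PRIMARY KEY REFERENCES dbo.Records (Id),
    Text1    nvarchar(max) NULL,
    Text2    nvarchar(max) NULL,
    Text3    nvarchar(max) NULL
);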
I agree with Kevin. Any scan (clustered index or otherwise) is bad, and including the data in the index is not really a practical option.
Move the text into a separate table with its own primary key, and reference these three as foreign keys in your original table.
I do something very similar to this for storing textual data for medical claims, and it works a treat.
As a side note, one other advantage is that odds are you do not need to display all of this text for the entire returned result set at one time, so you only end up fetching the specific text data you need down the line.
This lets you use the same table structure as both a summary view (like displaying a list of questions on Stack Overflow) and a detailed view (where you show all of the textual data for a single header record), as in the queries below.
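With a split like the one sketched earlier (same hypothetical names), the two views could be:

-- Summary view: list many rows without touching the wide text.
SELECT Id, Name
FROM dbo.Records
WHERE Name LIKE N'foo%';

-- Detail view: pull the text for exactly one header record.
SELECT r.Id, r.Name, t.Text1, t.Text2, t.Text3
FROM dbo.Records AS r
JOIN dbo.RecordTexts AS t ON t.RecordId = r.Id
WHERE r.Id = 42;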
If you don't want to move the ntext columns to another table, make sure you're not retrieving those columns until the very last pass. So instead of this:
SELECT * FROM tbl WHERE (/* your condition here */)
try something like this:
SELECT * FROM tbl WHERE id IN (SELECT id FROM tbl WHERE /* your condition here */)
so the inner query can be satisfied from a narrow index before the wide columns are read.
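A variant that makes the two passes explicit, assuming id is an indexed key (SomeColumn stands in for your real filter):

-- Pass 1: evaluate the narrow, index-friendly filter first.
SELECT id
INTO #matches
FROM tbl
WHERE SomeColumn = 42; -- substitute your actual condition

-- Pass 2: fetch the wide text columns only for the matching ids.
SELECT t.*
FROM tbl AS t
JOIN #matches AS m ON m.id = t.id;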
For your second query, it could be transferring over 4.6 gigabytes, so I can see it potentially taking that long.
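The back-of-the-envelope, assuming all three columns approach the 70,000-character maximum:

12,000 rows × 3 columns × 70,000 characters × 2 bytes per character (UTF-16) ≈ 5.0 GB (≈ 4.7 GiB)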
For the single-record query, you could try splitting the text up into fixed-length columns, e.g. part1 nchar(2000), part2 nchar(2000), and so on. Note, though, that nchar tops out at 4,000 characters and a row of fixed-length columns cannot exceed 8,060 bytes, so for values up to 70,000 characters the pieces would have to spill across rows in a child table (see the sketch below).
If all the columns are non-varying, it's easier to compute row boundaries when they are all fixed length.
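A sketch of that spill-across-rows layout under those limits (names are hypothetical):

-- Each long value becomes a run of fixed-length chunks; the row size
-- stays under the 8,060-byte limit (4 + 1 + 4 + 8,000 bytes).
CREATE TABLE dbo.TextChunks (
    RecordId int         NOT NULL,
    ColumnNo tinyint     NOT NULL, -- which of the three text columns
    ChunkNo  int         NOT NULL,
    Chunk    nchar(4000) NOT NULL,
    CONSTRAINT PK_TextChunks PRIMARY KEY (RecordId, ColumnNo, ChunkNo)
);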
if you "Show Execution Plan" in Query Analyze, anything useful show up...?