UPDATE or MERGE of very big tables in SQL Server

I need to perform a daily update of a very large (300M records) and broad TABLE1. The source data for the updates is located in another table, UTABLE, which has 10%-25% of the rows of TABLE1 but is narrow. Both tables have record_id as a primary key.

Presently, I am recreating TABLE1 using the following approach:

    -- 1) Copy the required columns into a new table
    SELECT (required columns) INTO TMP_TABLE1
    FROM TABLE1 T JOIN UTABLE U ON T.record_id = U.record_id;
    -- 2) Drop the original table
    DROP TABLE TABLE1;
    -- 3) Rename the new table into place
    EXEC sp_rename 'TMP_TABLE1', 'TABLE1';

However, this takes nearly 40 minutes on my server (60GB of RAM for SQL Server). I want to achieve a 50% performance gain - what other options can I try?

  1. MERGE and UPDATE - something like the code below works faster only for a very small UTABLE - at full size, everything just hangs:

    MERGE TABLE1 AS target
    USING UTABLE AS source
    ON target.record_id = source.record_id
    WHEN MATCHED THEN
        UPDATE SET target.columns = source.columns;
    
  2. I heard that I can perform a batch MERGE by using ROWCOUNT - but I don't think it can be fast enough for a 300M row table.

  3. Any SQL query hints that can be helpful?


Actually, I've found some general recommendations for such queries: the idea of using SQL MERGE or UPDATE is a very clever one, but it fails when we need to update many records (e.g. 75M) in a big and wide table (e.g. 240M rows).

Looking at the query plan of the query below, we can see that a TABLE SCAN of TABLE1 and the final MERGE take 90% of the time.

    MERGE TABLE1 AS target
    USING UTABLE AS source
    ON target.record_id = source.record_id
    WHEN MATCHED AND (condition) THEN
        UPDATE SET target.columns = source.columns;

So in order to use MERGE we need to:

  1. Reduce the number of rows we need to update and correctly pass this information to SQL Server. This can be done by making UTABLE smaller or by specifying an additional condition that narrows the portion to be merged.
  2. Make sure the portion to be merged fits in memory; otherwise the query runs much slower. Halving TABLE1 reduced my real query time from 11 hours to 40 minutes.

As Mark mentioned, you can use the UPDATE syntax with a WHERE clause to narrow the portion to be updated - this will give the same result. Also, please avoid indexing TABLE1, as this will cause additional work to rebuild the indexes during the MERGE.
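
For illustration, a minimal sketch of the narrowed UPDATE variant; the last_modified column and @cutoff variable are hypothetical stand-ins for whatever predicate identifies the rows that actually changed:

    -- Hypothetical narrowing predicate: only touch rows UTABLE marks as changed.
    -- 'columns' is a placeholder for the real column assignments, as above.
    DECLARE @cutoff DATETIME = DATEADD(DAY, -1, GETDATE());
    UPDATE T
    SET T.columns = U.columns
    FROM TABLE1 T
    INNER JOIN UTABLE U ON T.record_id = U.record_id
    WHERE U.last_modified >= @cutoff;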


First up, I'd find out where your bottleneck is - is your CPU pegged or idle? In other words, is your IO subsystem able to handle the load properly?

Recreating the full table is a lot of IO load, not to mention that it temporarily takes up a lot of space to store the table twice.
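
If you're not sure where the time goes, the cumulative wait statistics give a rough first impression (a sketch; sys.dm_os_wait_stats aggregates since the last restart, so read it with care):

    -- Top waits since the last restart: dominant PAGEIOLATCH_* waits point
    -- at the IO subsystem, SOS_SCHEDULER_YIELD at CPU pressure.
    SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;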

Do you really need to perform a MERGE? From what I can see, a simple UPDATE should suffice. Example:

    UPDATE
        TABLE1
    SET
        ColumnX = UTABLE.ColumnX
        ...
    FROM
        TABLE1
    INNER JOIN
        UTABLE ON TABLE1.record_id = UTABLE.record_id

You could batch up the updates using ROWCOUNT, but that won't speed up the execution; it will only help reduce overall locking.
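
For completeness, a sketch of one way to batch by key range, assuming record_id is numeric; the window size of one million is an arbitrary figure to tune:

    -- Walk the key space in fixed-size windows so each transaction stays small.
    -- 'columns' is a placeholder for the real column assignments.
    DECLARE @from BIGINT = 0, @step BIGINT = 1000000;
    DECLARE @max BIGINT = (SELECT MAX(record_id) FROM UTABLE);
    WHILE @from <= @max
    BEGIN
        UPDATE T
        SET T.columns = U.columns
        FROM TABLE1 T
        INNER JOIN UTABLE U ON T.record_id = U.record_id
        WHERE U.record_id > @from AND U.record_id <= @from + @step;
        SET @from = @from + @step;
    END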

Also, what kind of indexes do you have on the table? It may be faster to disable the nonclustered indexes before the update and then rebuild them from scratch afterwards.
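
For example (a sketch; IX_TABLE1_example stands in for whichever nonclustered indexes you actually have):

    -- Disable a nonclustered index so it isn't maintained during the update.
    ALTER INDEX IX_TABLE1_example ON TABLE1 DISABLE;
    -- ... run the big UPDATE here ...
    -- Rebuild it from scratch afterwards (this also re-enables it).
    ALTER INDEX IX_TABLE1_example ON TABLE1 REBUILD;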
