Update large number of rows - SQL Server 2005

https://www.devze.com 2023-04-03 16:45 Source: web
I have several tab-separated files that I would like to use to update a large number of rows. So far I have naively created a temp table, imported the data into it, and then updated the target table with a join. This generated a large transaction log. I would like to avoid that and do the update in chunks. I have found this:

http://itknowledgeexchange.techtarget.com/itanswers/bulk-update-in-sql-server-2005/

Is this the best way?

Are there even faster methods (SSIS, bcp)?

Any feedback highly appreciated. Thanks.

Christian

PS:

My naive code looks like this. Unfortunately, i is not a primary key (the table has one, as Bla is a child of another table).

drop table #temp
create table #temp
(
    i int,
    xml nvarchar(max)
)
BULK INSERT #temp 
...

update a
set XML = b.xml
from Bla as a
inner join #temp as b on a.i = b.i
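
One small tweak worth trying before anything else (not part of the original post, and the index name is arbitrary): index the temp table on the join column before running the UPDATE, so the join can seek rather than scan the heap.

```sql
-- Assumption: #temp has already been created and bulk-loaded as above.
-- A clustered index on the join column i lets the UPDATE's join use
-- an index seek instead of scanning the unindexed temp table.
CREATE CLUSTERED INDEX IX_temp_i ON #temp (i);
```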


Do you have an indexed identity column in the target table? This is one of the few cases where I actually like to use a WHILE loop. The main problem with the solution in the link you posted is bad index usage.

    DECLARE @START INT, @FINISH INT, @LOOPEND INT
    SELECT @START = 1, @FINISH = 5000, @LOOPEND = MAX(ID)
    FROM Bla

    WHILE @START <= @LOOPEND
    BEGIN
        update a
        set XML = b.xml
        from Bla as a
        inner join #temp as b on a.i = b.i
        WHERE a.ID BETWEEN @START AND @FINISH

        SET @START = @FINISH + 1
        SET @FINISH = @FINISH + 5000
    END

In cases where you have a surrogate key (an identity column as the primary key), which is not uncommon, this causes a simple index seek on the primary key, and the batch size can be tuned simply by changing the increment (5000 in the example).
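
If there is no usable identity range on the target, a common alternative (not from the original answer, and only a sketch) is to consume the temp table in batches: delete a chunk of staging rows, capture them with OUTPUT into a table variable, and apply just that chunk. Since each iteration is its own transaction, the log can be reused between batches under the simple recovery model. This assumes #temp can be emptied as part of the process.

```sql
-- Batch variable holds one chunk of staged rows at a time.
DECLARE @batch TABLE (i int, xml nvarchar(max))

WHILE 1 = 1
BEGIN
    DELETE @batch

    -- Pull the next 5000 rows out of the staging table,
    -- capturing them via the OUTPUT clause (SQL Server 2005+).
    DELETE TOP (5000) FROM #temp
    OUTPUT deleted.i, deleted.xml INTO @batch

    IF @@ROWCOUNT = 0 BREAK   -- staging table is empty; done

    -- Apply only this chunk to the target table.
    UPDATE a
    SET XML = b.xml
    FROM Bla AS a
    INNER JOIN @batch AS b ON a.i = b.i
END
```

Note that each UPDATE commits separately, so if the process is interrupted, already-applied chunks stay applied while the remaining rows are still in #temp.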
