
What are the "best practices" to remove "non-unique" values from HUGE table?

Step 1: Loading data via "bulk insert" from .txt (delimited) file into Table 1 (no indexes etc.)

bulk insert Table_1
from '\\path\to_some_file.txt'
with ( tablock, FORMATFILE ='format_file_path.xml') 

Via the format file I map the output column data types to avoid further conversions (from char to int, for example).

Step 2: Output the result (perhaps NOT all the columns from Table 1) into another table, Table 2, keeping only the DISTINCT values from Table 1.

NB! Table_1 is about 20 Million records (per load).

What we have now (simplified example):

select distinct convert(int, col1), convert(int, col2), col3, ... 
into Table_2
from Table_1

It takes about 3.5 mins to process. Could you advise some best practices that may help reduce the processing time and put only UNIQUE records into Table_2?

Thanks in advance!

UPD 1: Sorry for the misunderstanding - I meant that the SELECT DISTINCT query takes 3.5 mins. The "bulk insert" part is already well optimized: it loads via 8 threads (8 separate .txt files, each loaded with "bulk insert" into one table WITH (TABLOCK)) and imports 20 mln records in about 1 min.

UPD 2: I tested different approaches (I didn't test SSIS - in our application that approach won't work). The best result came from bulk inserting the data in the TABLE_2 format already (the columns and data types match), which eliminates the data type CONVERTs, and then just a "plain" DISTINCT:

select distinct * into Table_2 from Table_1

Gives 70 sec of processing, so I consider it the best result I can get for now. I also tried a couple of other techniques (an additional ORDER BY, a CTE with window grouping, etc.) - they were all worse than the "plain" DISTINCT.
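
For reference, a minimal sketch of what this final setup looks like - the column names and types below are assumptions (the real tables have more columns), but the idea is that Table_1 is created with Table_2's column types up front, so the format file lands typed data and no CONVERT is left in the query:

-- Sketch only: col1/col2/col3 and their types are placeholders, not the real schema.
CREATE TABLE dbo.Table_1
(
    col1 int         NOT NULL,
    col2 int         NOT NULL,
    col3 varchar(50) NULL
);

-- The format file maps the text fields straight into these typed columns.
BULK INSERT dbo.Table_1
FROM '\\path\to_some_file.txt'
WITH (TABLOCK, FORMATFILE = 'format_file_path.xml');

-- Plain DISTINCT, no per-row conversions left to do.
SELECT DISTINCT * INTO dbo.Table_2 FROM dbo.Table_1;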

Thanks all for participation!


You have to know whether it is your SELECT DISTINCT or your INSERT INTO that is causing the issue.

You will have to run the SELECT DISTINCT once with and once without the INSERT INTO, and measure the duration of each to figure out which one you have to tune.
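
For example, one rough way to compare the two (a sketch using the column names from the question; streaming 20 million rows back to the client adds its own overhead, so treat the first measurement as an approximation):

-- SET STATISTICS TIME reports CPU and elapsed time for each statement.
SET STATISTICS TIME ON;

-- DISTINCT only, nothing written to a table.
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
FROM Table_1;

-- DISTINCT plus the write into the new table (drop Table_2 first if it already exists).
SELECT DISTINCT CONVERT(int, col1) AS col1, CONVERT(int, col2) AS col2, col3
INTO Table_2
FROM Table_1;

SET STATISTICS TIME OFF;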

If it is your SELECT DISTINCT, you can try to fine tune that query to be more efficient.

If it is your INSERT INTO, then consider the following:

With a SELECT ... INTO (or an INSERT INTO a freshly created table), a new table is created and all of its pages have to be allocated as the rows are written.

Are you dropping the old table and creating a new one on every load? If so, you should change that to keep the table and just DELETE from it - DELETE, not TRUNCATE - because a TRUNCATE lets go of all the pages acquired by the table and they have to be re-allocated, while a DELETE keeps them. You then reload the existing table with INSERT INTO ... SELECT instead of SELECT ... INTO, as sketched below.
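
A sketch of that pattern, assuming Table_2 already exists with the target column types (col1/col2/col3 are placeholder names):

-- Keeps the pages allocated to Table_2, unlike TRUNCATE TABLE Table_2.
DELETE FROM Table_2;

-- Reload the existing table instead of re-creating it with SELECT ... INTO.
INSERT INTO Table_2 WITH (TABLOCK) (col1, col2, col3)
SELECT DISTINCT CONVERT(int, col1), CONVERT(int, col2), col3
FROM Table_1;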

You can try one or several of the following things to improve efficiency:

  1. Ask your customer for non-duplicate data.
  2. Index all of the duplicate-criteria columns. Scanning an index should be much faster than scanning the table.
  3. Partition your staging table to get better performance.
  4. Create a view that selects the distinct values, and use BCP to fast-load the data (see the sketch after this list).
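
A sketch of the view-plus-BCP idea; the view name, database name, server name and file name below are placeholders, and the bcp commands are run from the command line:

CREATE VIEW dbo.v_Table_1_distinct
AS
SELECT DISTINCT CONVERT(int, col1) AS col1, CONVERT(int, col2) AS col2, col3
FROM dbo.Table_1;

-- From the command line: export the distinct rows, then bulk load the file into Table_2.
--   bcp MyDb.dbo.v_Table_1_distinct out distinct_rows.txt -c -T -S MyServer
--   bcp MyDb.dbo.Table_2 in distinct_rows.txt -c -T -S MyServer -h "TABLOCK"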


Find the rows that are duplicated, delete the extra copies, then copy into Table_2.

Find the duplicates like this:

SELECT col1,
       COUNT(col1) AS NumOccurrences
FROM Table_1
GROUP BY col1
HAVING COUNT(col1) > 1
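
And one common way to delete the extra copies in place while keeping a single row per key (a sketch - it assumes the duplicate criteria are col1, col2 and col3; adjust the PARTITION BY list to whatever defines a duplicate in your data):

WITH numbered AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY col1, col2, col3 ORDER BY (SELECT NULL)) AS rn
    FROM Table_1
)
DELETE FROM numbered
WHERE rn > 1;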


That's what SSIS is all about, man. :)

[Update]

Try this in SSIS to see how fast you can chew through that data:

Flat File Source --> Sort component with duplicate removal --> Flat File Destination (or Ole Db destination with tablock and stuff)


Your CONVERT statements may be causing the delay.


I would suggest creating a cursor that goes through all the rows ordered by all the fields, and using variables to compare the current row with the previous row to detect whether that row has already been seen. You can either delete the duplicate rows during this process, or create a procedure that returns a result set (sorry, I don't know the exact SQL Server terminology) containing only the unique rows - if you detect a duplicate, you simply skip that row, i.e. don't emit it.
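
A sketch of that idea in T-SQL - the column names and types are assumptions, the deduplication key is taken to be all three columns, and the unique rows are written into Table_2:

-- Assumes the key columns contain no NULLs; handle those separately if needed.
DECLARE @col1 int, @col2 int, @col3 varchar(50);
DECLARE @prev1 int, @prev2 int, @prev3 varchar(50);
DECLARE @first bit = 1;

DECLARE dedup_cursor CURSOR FAST_FORWARD FOR
    SELECT CONVERT(int, col1), CONVERT(int, col2), col3
    FROM Table_1
    ORDER BY 1, 2, 3;

OPEN dedup_cursor;
FETCH NEXT FROM dedup_cursor INTO @col1, @col2, @col3;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Emit the row only when it differs from the previous (sorted) row.
    IF @first = 1 OR @col1 <> @prev1 OR @col2 <> @prev2 OR @col3 <> @prev3
        INSERT INTO Table_2 (col1, col2, col3) VALUES (@col1, @col2, @col3);

    SELECT @prev1 = @col1, @prev2 = @col2, @prev3 = @col3, @first = 0;
    FETCH NEXT FROM dedup_cursor INTO @col1, @col2, @col3;
END

CLOSE dedup_cursor;
DEALLOCATE dedup_cursor;

Note that a row-by-row cursor over 20 million records will almost certainly be slower than the set-based DISTINCT already in use, which matches what UPD 2 reports.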
