
Large SQL inserts: TVF vs. BULK INSERT


What is the fastest way to insert a huge array (10M elements) from a C# application?

Until now, I have used bulk insert: the C# app generates a large text file and I load it with the BULK INSERT command. Out of curiosity, I wrote a simple user-defined CLR table-valued function.

    using System;
    using System.Collections;
    using System.Data.SqlTypes;
    using Microsoft.SqlServer.Server;

    [SqlFunction(Name = "getArray", FillRowMethodName = "FillRow")]
    public static IEnumerable getArray(String name)
    {
        return my_arrays[name]; // returns the in-memory array I want to insert into the db
    }

    public static void FillRow(Object o, out SqlDouble sdo)
    {
        sdo = new SqlDouble((double)o);
    }

And this query:

INSERT INTO my_table SELECT data FROM dbo.getArray('x');

This works almost twice as fast as the bulk equivalent. The exact results are:

BULK INSERT - 330 s (write to disk + insert)
TVF - 185 s

Of course, much of the difference is due to the write-to-disk overhead, but I don't know whether BULK INSERT has any in-memory equivalent.

So my question is: is a TVF actually better than BULK INSERT (which was designed for huge inserts), or am I missing something here? Is there a third alternative?


I use SqlBulkCopy when I really need the last drop of performance; that way you can skip the overhead of first writing everything to disk.

SqlBulkCopy accepts an IDataReader that you have to implement, but only a few methods of the interface are ever used. What I always do is create a class MyBulkCopySource : IDataReader, click 'Implement interface', and feed it to the SqlBulkCopy as-is to see which method gets called. Implement that one, try again, and so on. You only need to implement three or four of them; the rest never gets called.
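A minimal sketch of what such a MyBulkCopySource can end up looking like, assuming a single float column; the column name data, the reader class name, and the BulkLoader wrapper are illustrative choices, not from the original post:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    // Wraps an in-memory double[] as a forward-only, single-column IDataReader so that
    // SqlBulkCopy can stream it into the table without a file on disk in between.
    public sealed class DoubleArrayReader : IDataReader
    {
        private readonly double[] _data;
        private int _pos = -1;

        public DoubleArrayReader(double[] data) { _data = data; }

        // The members SqlBulkCopy actually uses for a plain, unmapped copy:
        public int FieldCount => 1;                      // one column (the value itself)
        public bool Read() => ++_pos < _data.Length;     // advance to the next element
        public object GetValue(int i) => _data[_pos];    // value of the current row

        // The rest of the interface is never called in this scenario.
        public void Dispose() { }
        public void Close() { }
        public bool IsClosed => _pos >= _data.Length;
        public int Depth => 0;
        public int RecordsAffected => -1;
        public bool NextResult() => false;
        public DataTable GetSchemaTable() => throw new NotImplementedException();
        public string GetName(int i) => "data";
        public int GetOrdinal(string name) => 0;
        public object this[int i] => GetValue(i);
        public object this[string name] => GetValue(0);
        public bool IsDBNull(int i) => false;
        public Type GetFieldType(int i) => typeof(double);
        public string GetDataTypeName(int i) => "float";
        public double GetDouble(int i) => _data[_pos];
        public int GetValues(object[] values) { values[0] = _data[_pos]; return 1; }
        public bool GetBoolean(int i) => throw new NotImplementedException();
        public byte GetByte(int i) => throw new NotImplementedException();
        public long GetBytes(int i, long fieldOffset, byte[] buffer, int bufferOffset, int length) => throw new NotImplementedException();
        public char GetChar(int i) => throw new NotImplementedException();
        public long GetChars(int i, long fieldOffset, char[] buffer, int bufferOffset, int length) => throw new NotImplementedException();
        public IDataReader GetData(int i) => throw new NotImplementedException();
        public DateTime GetDateTime(int i) => throw new NotImplementedException();
        public decimal GetDecimal(int i) => throw new NotImplementedException();
        public float GetFloat(int i) => throw new NotImplementedException();
        public Guid GetGuid(int i) => throw new NotImplementedException();
        public short GetInt16(int i) => throw new NotImplementedException();
        public int GetInt32(int i) => throw new NotImplementedException();
        public long GetInt64(int i) => throw new NotImplementedException();
        public string GetString(int i) => throw new NotImplementedException();
    }

    // Usage: stream the array straight into my_table (connection string handling is up to the caller).
    public static class BulkLoader
    {
        public static void Load(double[] values, string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "my_table", BulkCopyTimeout = 0 })
            {
                conn.Open();
                bulk.WriteToServer(new DoubleArrayReader(values));
            }
        }
    }

The reader is forward-only, so the 10M doubles are streamed straight from memory into the table without ever being materialized as a file or a DataTable.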

AFAIK this is the fastest way to pump data from a C# program into a SqlDB.

GJ


  • Use SqlBulkCopy
  • From multiple threads, in blocks of about 30,000 rows each.
  • NOT to the final table, but to a temporary table
  • From which you copy over, using a connection setting that does not honor locks.

This keeps locking on the final table to an absolute minimum. A rough sketch of the whole scheme follows.
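The sketch below assumes a staging table named my_table_staging with a single float column, a 30,000-row chunk size, and READ UNCOMMITTED as the "does not honor locks" setting; all of these are illustrative choices, not prescribed by the answer above:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    // Rough sketch: bulk copy 30,000-row chunks into a staging table from several
    // threads, then move everything into the final table in one set-based insert.
    public static class StagedBulkLoader
    {
        private const int ChunkSize = 30000;   // rows per SqlBulkCopy call (assumption)

        public static void Load(double[] values, string connectionString)
        {
            int chunkCount = (values.Length + ChunkSize - 1) / ChunkSize;

            // 1. Parallel bulk copies into the staging table, one chunk per iteration.
            Parallel.For(0, chunkCount, i =>
            {
                int offset = i * ChunkSize;
                int length = Math.Min(ChunkSize, values.Length - offset);

                // A small DataTable per chunk keeps the example self-contained; an
                // IDataReader (as in the other answer) would avoid even this copy.
                var chunk = new DataTable();
                chunk.Columns.Add("data", typeof(double));
                for (int j = 0; j < length; j++)
                    chunk.Rows.Add(values[offset + j]);

                using (var conn = new SqlConnection(connectionString))
                using (var bulk = new SqlBulkCopy(conn)
                {
                    DestinationTableName = "my_table_staging",
                    BatchSize = ChunkSize,
                    BulkCopyTimeout = 0
                })
                {
                    conn.Open();
                    bulk.WriteToServer(chunk);
                }
            });

            // 2. Copy from staging into the final table; READ UNCOMMITTED is one way
            //    to read the staging table without honoring locks.
            const string copyOver =
                "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
                "INSERT INTO my_table SELECT data FROM my_table_staging; " +
                "TRUNCATE TABLE my_table_staging;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(copyOver, conn) { CommandTimeout = 0 })
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

The parallel bulk copies only ever touch the staging table, and the final table is written in a single set-based INSERT ... SELECT, which is what keeps locking on it short.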

