MSSQL/Perl DBI performance woes

I'm writing an application that does a lot of large inserts to a remote MSSQL server with Perl DBI. Not sure yet whether it will be SQL Server 2005 or 2008, but I've timed it on both so far and performance is similar. The large number of rows that need to be inserted is turning out to be the bottleneck by far. I have tried multi-row inserts in 2008 (and the sub-select UNION ALL trick in 2005), changing whether the inserts are fired off during a previous fetch or after, and using execute_array() on single-row inserts, all of these with and without binding params.
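For reference, a minimal sketch of how such a multi-row insert might be assembled in Perl; the table and column names here are made up for illustration:

    # Sketch only: build one parameterized multi-row INSERT.
    # SQL Server 2008 accepts a multi-row VALUES list directly; on 2005
    # the same effect is achieved with SELECT ... UNION ALL SELECT ...
    my @cols  = map { "col$_" } 1 .. 36;                 # 36 fields per row
    my $tuple = '(' . join(',', ('?') x @cols) . ')';    # (?,?,...,?)
    my $rows  = 58;                                      # 58 * 36 = 2088 params
    my $sql   = 'INSERT INTO my_table (' . join(',', @cols) . ') VALUES '
              . join(',', ($tuple) x $rows);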

Pseudocode:

    my $sth = $dbh->prepare($select_query);   # the big data select
    $sth->execute;
    while (my @row = $sth->fetchrow_array) {
        # do lots of calculations
        # construct the insert
        # 1) do the inserts here, inside the fetch loop...
    }
    # 2) ...or collect rows and do the inserts here, after the fetch

The Activity Monitor on the SQL server shows the multi-row inserts averaging about 70 ms apiece. The queries themselves have been limited to roughly 58 rows apiece, because with 36 fields per insert they quickly hit the 2100-parameter limit.

Is there anything obvious I am overlooking? Any other method I could try to improve the times? Ignoring issues like latency or hardware, I feel like there has to be another improvement to my Perl workflow or to the queries themselves. (I was looking into SQL Server bcp, bulk inserts, etc.)

Thanks for any advice


execute_array will not make much difference because it is not implemented in DBD::ODBC (because I've never got around to it). As such, it is really just DBI calling execute repeatedly for you.

I don't understand the 58-ish rows, 36 parameters, and hitting the 2100-parameter limit, although I can see that 58 * 36 is pretty much 2100. If your insert needs 36 parameters, just prepare it once before the select and execute it in the while loop; that should be faster, since the SQL does not need to be sent and parsed for each insert.
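A minimal sketch of that pattern, assuming placeholder table, column, and connection details:

    use DBI;

    # Placeholder connection string; the point is the prepare/execute split.
    my $dbh = DBI->connect('dbi:ODBC:DSN=mydsn', 'user', 'pass',
                           { RaiseError => 1 });

    my @cols = map { "col$_" } 1 .. 36;
    # Prepare the 36-parameter INSERT once, before the select...
    my $ins = $dbh->prepare(
        'INSERT INTO my_table (' . join(',', @cols) . ') VALUES ('
        . join(',', ('?') x @cols) . ')'
    );

    my $sel = $dbh->prepare('SELECT * FROM source_table');
    $sel->execute;
    while (my @row = $sel->fetchrow_array) {
        my @values = @row;        # stand-in for the real calculations
        $ins->execute(@values);   # ...so each row only pays an execute
    }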

Apart from that, if you can disable AutoCommit and commit at the end, or regularly but less often than once per insert, it should be a lot faster. You may also have indexes, triggers, etc. on the table which can slow inserts down, so you could look at disabling them until afterwards.
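A sketch of the commit batching, again with placeholder names; the batch size of 1000 is arbitrary:

    use DBI;

    # AutoCommit off: inserts accumulate in one transaction instead of
    # paying a commit for every row.
    my $dbh = DBI->connect('dbi:ODBC:DSN=mydsn', 'user', 'pass',
                           { RaiseError => 1, AutoCommit => 0 });

    my $ins   = $dbh->prepare('INSERT INTO my_table (col1) VALUES (?)');
    my $count = 0;
    for my $value (1 .. 100_000) {              # stand-in for the fetch loop
        $ins->execute($value);
        $dbh->commit if ++$count % 1000 == 0;   # commit every 1000 rows
    }
    $dbh->commit;      # commit whatever is left
    $dbh->disconnect;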


I guess Perl/DBI is a requirement (because this isn't running on Windows perhaps)?

Any possibility of using BCP (you would write an intermediate text file) or SSIS (you might be able to do everything in SSIS) or even ADO.NET SqlBulkCopy (in some kind of .NET environment)? All of these are specifically designed for large loads.
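If BCP is an option, a rough sketch of the intermediate-file route from Perl; the file name, table, server, and credentials are all placeholders:

    # Write the calculated rows out as a tab-delimited text file...
    my @calculated_rows = ([1, 'foo'], [2, 'bar']);   # stand-in data
    open my $fh, '>', 'rows.txt' or die "open: $!";
    print {$fh} join("\t", @$_), "\n" for @calculated_rows;
    close $fh;

    # ...then hand the whole file to bcp. -c is character mode, which
    # defaults to tab-delimited fields and newline-terminated rows.
    system('bcp', 'mydb.dbo.my_table', 'in', 'rows.txt',
           '-c', '-S', 'myserver', '-U', 'user', '-P', 'pass') == 0
        or die "bcp failed: $?";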
