
How many rows can be executed at one time without getting a timeout?

https://www.devze.com 2023-02-04 00:49 (source: web)
I work on SQL Server 2005 and have almost 350,000 insert scripts. Each insert script has 10 columns to be inserted.


So how many rows should I select to execute at one click of the "Execute" button?

Please tell me an average number for an average system configuration:

Win XP, Core 2 Duo, 3.66 GB RAM


OK, let's get some things straight here:

Win XP, Core 2 Duo, 3.66 GB RAM

Not average but outdated. On top of that, it omits the most important number for a database, which is the speed and number of disks.

I work on SQL Server 2005. I have almost 350,000 insert scripts.

I seriously doubt you have 350,000 insert SCRIPTS. That would be 350,000 FILES containing insert commands, which is a lot of files.

Each insert script has 10 columns to be inserted.

I order a pizza. How much fuel does my car need per km? Same relation. 10 columns is nice, but you don't say how many insert commands your scripts contain.

So, in the end, the only SENSIBLE interpretation is that you have to insert 350,000 rows and are trying to do it from a program (i.e. there are no scripts to start with), but that is pretty much NOT what you actually say.

So how many rows should I select to execute at one click?

How many pizzas should I order with one telephone call? The click here is irrelevant. It would also not get any faster if you used a command-line program to do the inserts.

The question is how to get the inserts into the database fastest.

  • For normal SQL:

    • Batch the inserts, say 50 or 100 per statement (yes, you can put more than one insert into one command).
    • Submit them interleaved and asynchronously, preparing the next statement while the previous one executes.
    • This is very flexible, as you can run real SQL, etc.
  • For real mass inserts:

    • Forget the idea of writing insert statements. Prepare the data properly to match the table structure and use SqlBulkCopy to mass-insert it.
    • Less flexible, but a LOT faster.
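The batching idea above can be sketched in T-SQL. One caveat: the multi-row `VALUES` syntax only arrived in SQL Server 2008, so on 2005 you batch either by sending several INSERT statements in one command, or by combining rows with `SELECT ... UNION ALL`. The table and column names here are hypothetical stand-ins for your 10-column table:

```sql
-- Hypothetical table dbo.Orders (abbreviated to 3 of the 10 columns).
-- Option 1: several INSERT statements sent as one batch/command.
INSERT INTO dbo.Orders (Id, CustomerId, Amount) VALUES (1, 10, 99.50);
INSERT INTO dbo.Orders (Id, CustomerId, Amount) VALUES (2, 11, 12.00);

-- Option 2: multiple rows in ONE statement via UNION ALL
-- (works on SQL Server 2005, which lacks multi-row VALUES).
INSERT INTO dbo.Orders (Id, CustomerId, Amount)
SELECT 3, 12, 45.00
UNION ALL
SELECT 4, 13, 7.25;
```

Either way, the point is to amortize the per-round-trip and per-statement overhead across 50-100 rows instead of paying it for every single row.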

The latter approach on my SMALL (!) database computer would handle this in about 3-5 seconds when the fields are small (a field can be a 2 GB binary blob, you know). I handle about 80,000 row inserts per second without much optimization, though with small and somewhat fewer fields.

That is on 4 processor cores (irrelevant, they never get busy), 8 GB RAM (VERY small for a database server, and also irrelevant in this context), and 6 VelociRaptors for the data in a RAID 10 (again, a small configuration for a database, but very relevant). I see peak inserts in the 150 MB per second range in Activity Monitor. I will do a lot of optimization here, as at the moment I open/close a db connection every 20,000 items... bad batching.
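If the data can be staged as a flat file on the server, T-SQL's `BULK INSERT` is a server-side alternative to SqlBulkCopy that gives similar mass-insert performance. The file path, table name, and delimiters below are assumptions for illustration:

```sql
-- Hypothetical file and table. BULK INSERT reads the flat file directly
-- on the server, avoiding per-statement parsing and round-trip overhead.
BULK INSERT dbo.Orders
FROM 'C:\data\orders.csv'
WITH (
    FIELDTERMINATOR = ',',   -- column delimiter in the file
    ROWTERMINATOR   = '\n',  -- row delimiter
    BATCHSIZE       = 10000, -- commit every 10,000 rows
    TABLOCK                  -- table lock, enables minimally logged inserts
);
```

As with SqlBulkCopy, `TABLOCK` plus a simple or bulk-logged recovery model is what lets the engine minimally log the load instead of writing every row to the transaction log individually.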

But then, you don't seem to have a database system at all, just a database installed on a low-end workstation, and this means your IO is going to be REALLY slow compared to database servers, and insert/update speed is IO bound. Desktop disks are slow, and you have data AND logs on the same disks.

But in the end, you don't really tell us anything about your problem.

And the timeout CAN be set programmatically on the connection object.


I'm pretty sure the timeout can be set by the user by going to Server Properties -> Connections -> Remote query timeout. If you set this sufficiently high (or to 0, which should mean it never times out), then you can run as many scripts as you like.
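The same server setting can be changed from T-SQL with `sp_configure`. Note that this option governs queries the server issues to remote sources; client tools such as Management Studio also have their own command timeout, configured on the client side:

```sql
-- 'remote query timeout' is in seconds; 0 means no timeout.
EXEC sp_configure 'remote query timeout', 0;
RECONFIGURE;
```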

Obviously this is only OK if the database is not yet live and you simply need to populate it. If the data is coming from another MS SQL Server, however, you might just want to take a full backup and restore it; this will be both simpler and quicker.

This may be of help.


The general rule of thumb for excellent performance is not to exceed 0.1 seconds per UI operation. You will need to benchmark to find out how many rows fit within that budget on your system.

