Why does Perl make the system very slow when I open more than 4,000 database connections?

I was writing code to measure the speed of my database using a Perl script.

My intention was to open 4,000 database connections, one after each fork (so the children would act as 4,000 different clients), have each child sleep, and issue the update command when it receives a signal. But the system becomes very slow and almost hangs just making the connections, and I couldn't even send the signal from my terminal.

I am using the DBI module. My machine has 4 GB of RAM, and Postgres 8.3 is running on a different machine.
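Roughly, the pattern described is something like this. A minimal reconstruction, where the DSN, credentials, table, and choice of SIGUSR1 are all assumptions (and you should not actually run it at 4,000 children):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $clients = 4000;

    for (1 .. $clients) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;                 # parent keeps forking

        # Child: one connection per process, held open while sleeping.
        my $dbh = DBI->connect('dbi:Pg:dbname=test;host=dbhost',
                               'user', 'pass', { RaiseError => 1 });

        my $go = 0;
        $SIG{USR1} = sub { $go = 1 };
        sleep 1 until $go;            # signals interrupt sleep

        $dbh->do('UPDATE t SET n = n + 1 WHERE id = 1');  # hypothetical update
        $dbh->disconnect;
        exit 0;
    }

    1 while wait() != -1;             # reap all children

Each child is a full process on the client and a dedicated backend process on the Postgres server, which is why the machine bogs down before the test even begins.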


I'm not entirely clear on whether you're saying you wanted to a) open 4,000 connections, fork, open 4,000 more connections, and so on, or b) fork 4,000 times and open one connection from each process, but either way, 4,000 database connections or 4,000 processes is some pretty serious resource consumption. I'm not at all surprised that it's slowing your system to a crawl - I would expect that to be the end result regardless of the language used.

What are you actually attempting to achieve by creating all of these processes and/or connections? There's probably a better way to do it that won't be quite so resource-intensive.
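For example, if the goal is just to hit the database from many clients, a bounded worker pool keeps concurrency at a level the server can survive while still pushing through all 4,000 jobs. A minimal sketch using the CPAN module Parallel::ForkManager; the cap of 50, the DSN, and the table are assumptions:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Parallel::ForkManager;

    # At most 50 children alive at once; slots are reused for all 4,000 jobs.
    my $pm = Parallel::ForkManager->new(50);

    for my $job (1 .. 4000) {
        $pm->start and next;          # parent continues the loop; child runs below

        my $dbh = DBI->connect('dbi:Pg:dbname=test;host=dbhost',
                               'user', 'pass', { RaiseError => 1 });
        $dbh->do('UPDATE t SET n = n + 1 WHERE id = ?', undef, $job);
        $dbh->disconnect;

        $pm->finish;                  # child exits here
    }
    $pm->wait_all_children;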


I've seen pgpool in use on production systems where the number of Postgres connections could not be limited to something reasonable. You may wish to look into using it yourself to mitigate poor application design by your developers.

Essentially, pgpool acts as a proxy to postgres. It multiplexes queries on lots of connections to a much smaller (and manageable) number to the back-end database.
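On the application side, switching to pgpool is usually just a change of connection target, since pgpool speaks the Postgres wire protocol. A sketch, where host and port are assumptions (9999 is pgpool's conventional default):

    use DBI;

    # Connect to pgpool instead of Postgres; pgpool forwards to the real server.
    my $dbh = DBI->connect('dbi:Pg:dbname=test;host=pgpool-host;port=9999',
                           'user', 'pass', { RaiseError => 1 });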


That is, relatively speaking, a lot of connections to have open at once, but not unheard of by any means. How much memory do you have on the database server? Each connection takes resources; if your database server isn't set up to handle that volume of connections, it will be slow no matter what language you use to connect.
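As a quick sanity check, you can ask the server how many connections it allows versus how many are open right now. A sketch, with placeholder DSN and credentials:

    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=test;host=dbhost',
                           'user', 'pass', { RaiseError => 1 });

    my ($max)  = $dbh->selectrow_array('SHOW max_connections');
    my ($open) = $dbh->selectrow_array('SELECT count(*) FROM pg_stat_activity');
    print "connections: $open open of $max allowed\n";

If max_connections is well below 4,000 (the stock default is far lower), most of your children will simply fail to connect or pile up waiting.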

A simple analogy: if you had a Toyota Prius (in the old days I would have said a Ford Pinto) pulling a semi trailer loaded with 80,000 lbs (the typical legal limit in a lot of states), it would burn that little Prius up in a heartbeat, which is what you are seeing. To do it right, you need to buy yourself a big rig and hook it to that trailer to move that amount of weight.


Setting aside the wisdom of forking 4,000 connections, you should work through your performance issues with something akin to Devel::NYTProf.
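The usual NYTProf workflow is two commands: run the script under the profiler (which writes nytprof.out), then render the collected data as an HTML report:

    perl -d:NYTProf yourscript.pl
    nytprofhtml                       # generates nytprof/index.html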

Alternatively, I would set up persistent workers in gearman and submit requests to them from a gearman client. That gives you persistent connections, with your scheduled work dispatched on demand instead of through thousands of forks.
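A minimal sketch of such a worker, assuming the CPAN Gearman::Worker module and a gearmand on its default port; the function name, DSN, and SQL are illustrative:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;
    use Gearman::Worker;

    # One long-lived database connection per worker, reused for every job.
    my $dbh = DBI->connect('dbi:Pg:dbname=test;host=dbhost',
                           'user', 'pass', { RaiseError => 1 });

    my $worker = Gearman::Worker->new;
    $worker->job_servers('127.0.0.1:4730');
    $worker->register_function(db_update => sub {
        my $job = shift;
        $dbh->do('UPDATE t SET n = n + 1 WHERE id = ?', undef, $job->arg);
        return 1;
    });
    $worker->work while 1;

Run as many workers as the database comfortably supports; clients then queue jobs instead of opening their own connections, so the connection count tracks the worker count, not the client count.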
