I have a Python loader using Andy McCurdy's redis-py library that opens multiple Redis DB connections and sets millions of keys, looping through files whose lines each contain an integer that is the redis-db number for that record. Altogether, only 20 databases are open at present, but eventually there may be as many as 100 or more.
I notice that the Redis log (set to verbose) always tells me there are "4 clients connected (0 slaves)", even though I know that my 20 connections are open and being used.
So I'm guessing this is about the connection pooling support built into the Python library. Am I correct in that guess? If so, the real question is: is there a way to increase the pool size? I have plenty of machine resources, much of them dedicated to Redis. Would increasing the pool size help performance as the number of virtual connections I'm making goes up?
At this point I am actually hitting only ONE connection at a time, though I have many open as I shuffle input records among them. But eventually there will be many scripts (two dozen?) hitting Redis in parallel, mostly reading, and I am wondering what effect increasing the pool size would have.
Thanks, Matthew
So I'm guessing this is about the connection pooling support built into the python library. Am I correct in that guess?
Yes.
If so, the real question is: is there a way to increase the pool size
Not needed: by default, Andy McCurdy's library lets the pool grow up to 2**31 connections, so your extra connections are simply sitting idle.
If you want to increase performance, you will need to change the application using redis.
and I am wondering what effect increasing the pool size would have.
None, at least not in this case.
If Redis becomes the bottleneck at some point and you have a multi-core server, you must run multiple Redis instances to increase performance, as a single instance runs on only one core. When you run multiple instances and are doing mostly reads, the slave (replica) feature can increase performance, since the slaves can serve all the reads.