
Python + MongoDB - Cursor iteration too slow


I'm currently working on a search engine project. We are working with Python + MongoDB.

I have a pymongo cursor after executing a find() command against the MongoDB database. The cursor has around 20k results.

I have noticed that iterating over the pymongo cursor is really slow compared with, for example, a normal iteration over a list of the same size.

I did a little benchmark (a rough sketch of the timing code is shown after the numbers):

  • iteration over a list of 20k strings: 0.001492 seconds
  • iteration over a pymongo cursor with 20k results: 1.445343 seconds
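Roughly, the comparison looks like this (a sketch for illustration only; the connection details and the "search_db"/"docs" database and collection names are placeholders, not from the original setup):

import time
import pymongo

# Hypothetical connection and collection names, just for illustration;
# assumes a local mongod with roughly 20k documents in the collection.
connection = pymongo.Connection("localhost", 27017)
collection = connection["search_db"]["docs"]

# Iterate over a plain list of 20k strings.
items = ["some string"] * 20000
start = time.time()
for item in items:
    pass
print "list iteration:   %f seconds" % (time.time() - start)

# Iterate over a pymongo cursor with roughly the same number of results.
start = time.time()
for doc in collection.find():
    pass
print "cursor iteration: %f seconds" % (time.time() - start)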

The difference is really large. Maybe it is not a problem with this amount of results, but if I have millions of results the time would be unacceptable.

Has anyone got an idea of why pymongo cursors are so slow to iterate? Any idea how I can iterate over the cursor in less time?

Some extra info:

  • Python v2.6
  • PyMongo v1.9
  • MongoDB v1.6 32 bits


Is your pymongo installation using the included C extensions?

>>> import pymongo
>>> pymongo.has_c()
True

I spent most of last week trying to debug a moderate-sized query and the corresponding processing that took 20 seconds to run. Once the C extensions were installed, the same process took roughly a second.

To install the C extensions on Debian, install the Python development headers before running easy_install. In my case, I also had to remove the old version of pymongo first. Note that this compiles a binary from C, so you need all the usual build tools (GCC, etc.).

# on ubuntu with pip
$ sudo pip uninstall pymongo
$ sudo apt-get install python-dev build-essential
$ sudo pip install pymongo


Remember that the pymongo driver is not giving you back all 20k results at once. It makes network calls to the MongoDB backend for more items as you iterate. Of course it won't be as fast as a list of strings. However, I'd suggest trying to adjust the cursor batch_size as outlined in the API docs.
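For example, a minimal sketch of setting a larger batch size (the connection details and the "search_db"/"docs" names below are placeholders, not from the original question):

import pymongo

# Hypothetical connection and collection names, just for illustration.
connection = pymongo.Connection("localhost", 27017)
collection = connection["search_db"]["docs"]

# Ask the server for larger batches (e.g. 1000 documents per network round trip)
# instead of the default, so iteration spends less time waiting on the network.
cursor = collection.find().batch_size(1000)

count = 0
for doc in cursor:
    count += 1  # replace with your own per-document processing
print count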


The default cursor batch size is 4 MB, and the maximum it can go to is 16 MB. You can try increasing the batch size until that limit is reached and see whether you get an improvement, but it also depends on what your network can handle.


You don't provide any information about the overall document sizes. Fetching that many documents requires both network traffic and I/O on the database server.

Is the performance still bad in a "hot" state with warm caches? You can use mongosniff to inspect the wire activity and system tools like iostat to monitor disk activity on the server. In addition, mongostat gives a bunch of valuable information.
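To get a feel for the document sizes mentioned above, here is a small sketch using the collstats command (the connection details and the "search_db"/"docs" names are placeholders, not from the original question):

import pymongo

# Hypothetical database and collection names, just for illustration.
connection = pymongo.Connection("localhost", 27017)
db = connection["search_db"]

# The collstats command reports counts and sizes for a collection, including
# avgObjSize, which helps estimate how much data 20k results move over the wire.
stats = db.command("collstats", "docs")
print "documents:       %d" % stats["count"]
print "avg object size: %d bytes" % stats.get("avgObjSize", 0)
print "total data size: %d bytes" % stats["size"]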
