
Performance issues with MongoDB + PHP: pagination and distinct values

I have a MongoDB collection containing lots of books with many fields. Some key fields relevant to my question are:

{
    book_id : 1,
    book_title : "Hackers & Painters",
    category_id : "12",
    related_topics : [ { topic_id : "8",  topic_name : "Computers" },
                       { topic_id : "11", topic_name : "IT" } ],
    ...
    ... (at least 20 more fields)
    ...
}

We have a form for filtering results (with many inputs/select boxes) on our search page, and of course there is also pagination. Alongside the filtered results, we show all categories on the page, and for each category the number of results found in that category.

We are trying to use MongoDB instead of PostgreSQL, because performance and speed are our main concerns for this process.

Now the question is:

I can easily filter results by passing all the filter parameters to the find() function. That's cool. I can paginate the results with the skip() and limit() functions:

$data = $lib_collection->find($filter_params, array())->skip(20)->limit(20);

But I have to collect the number of results found for each category_id and topic_id before pagination occurs. And I don't want to "foreach" over all the results, collect categories, and manage pagination in PHP, because the filtered data often consists of nearly 200,000 results.

Problem 1: I found the MongoDB::command() function in the PHP manual with a "distinct" example. I think I can get distinct values with this method, but the command function doesn't seem to accept conditional parameters (for filtering). I don't know how to apply the same filter params while asking for distinct values.

Problem 2: Even if there is a way to send filter parameters with MongoDB::command(), that call would be another query in the process and would, I think, take approximately the same time as the previous query (maybe more). That would be another speed penalty.

Problem 3: Getting the distinct topic_ids with their result counts would be yet another query, and yet another speed penalty :(

I am new to working with MongoDB. Maybe I am looking at the problems from the wrong point of view. Can you help me solve them and give your opinion on the fastest way to get:

  • filtered results
  • pagination
  • distinct values with the number of results found

from a large data set.


So the easy way to do filtered results and pagination is as follows:

$cursor = $lib_collection->find($filter_params, array());
$count = $cursor->count();
$data = $cursor->skip(20)->limit(20);

However, this method may be somewhat inefficient. If you query on fields that are not indexed, the only way for the server to count() is to load each document and check it. If you do skip() and limit() with no sort(), then the server just needs to find the first 20 matching documents, which is much less work.
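If the fields in $filter_params are indexed, the server can satisfy count() by walking the index instead of loading every document. A minimal sketch (field names are taken from the sample document above; which fields to index depends on your actual filters):

// Index the fields that appear most often in $filter_params.
$lib_collection->ensureIndex(array('category_id' => 1));
// A multikey index over the embedded topic ids:
$lib_collection->ensureIndex(array('related_topics.topic_id' => 1));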

The number of results per category is going to be more difficult.

If the data does not change often, you may want to precalculate these values using regular map/reduce jobs. Otherwise you have to run a series of distinct() commands or an in-line map/reduce. Neither is generally intended for ad-hoc queries.
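For the per-category counts, the group command can return them in one server-side round trip, and (touching on Problem 1 above) both group and distinct do accept the filter: group via a 'condition' option and the distinct command via its 'query' field. A minimal sketch with the legacy PECL mongo driver, assuming a driver version where group() takes an options array with a 'condition' key (older versions took the condition array directly):

// Per-category counts for the current filter, computed server-side.
$result = $lib_collection->group(
    array('category_id' => 1),                  // group key
    array('count' => 0),                        // initial accumulator per group
    new MongoCode('function (doc, out) { out.count++; }'),  // reduce
    array('condition' => $filter_params)        // same filter as used with find()
);
// $result['retval'] looks like:
// array(array('category_id' => '12', 'count' => 1432), ...)

// Distinct values can be filtered the same way, assuming $db is the
// MongoDB database object and 'books' the collection name:
$topics = $db->command(array(
    'distinct' => 'books',
    'key'      => 'related_topics.topic_id',
    'query'    => $filter_params,
));
// $topics['values'] holds the distinct topic ids matching the filter.

Note that the group command does not work on sharded collections; at that point map/reduce (or precalculation) is the fallback.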

The only other option is basically to load all of the search results and then count on the webserver (instead of the DB). Obviously, this is also inefficient.

Getting all of these features is going to require some planning and tradeoffs.


Pagination

Be careful with pagination on large datasets. Remember that skip() and limit() (whether an index is used or not) have to scan their way to the offset position, so skipping very far is very slow.

Think of it this way: the database has an index (a B-tree) that can compare values to each other: it can quickly tell you whether something is bigger or smaller than a given x. Hence, search times in well-balanced trees are logarithmic. This is not true for positional lookups: a B-tree cannot quickly tell you what the 15,000th element is; it has to walk and count its way through the tree.

From the documentation:

Paging Costs

Unfortunately skip can be (very) costly and requires the server to walk from the beginning of the collection, or index, to get to the offset/skip position before it can start returning the page of data (limit). As the page number increases skip will become slower and more cpu intensive, and possibly IO bound, with larger collections.

Range based paging provides better use of indexes but does not allow you to easily jump to a specific page.
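A minimal sketch of range-based paging under those constraints, assuming a unique, indexed book_id and pages ordered by it (the variable names are illustrative):

// "Next page" = everything after the last book_id the user has seen.
$last_seen_id = 1040;                       // hypothetical: last id on the current page
$range_filter = $filter_params;
$range_filter['book_id'] = array('$gt' => $last_seen_id);

$page = $lib_collection->find($range_filter)
                       ->sort(array('book_id' => 1))
                       ->limit(20);         // no skip(): the index seeks straight to the range

The trade-off, as the quote says, is that you can only step to the next/previous page, not jump to an arbitrary one.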

Make sure you really need this feature: typically, nobody cares about the 42,436th result. Note that most large websites never let you paginate very far, let alone show exact totals. There's a great website about this topic, but I have neither the address at hand nor the name to find it.

Distinct Topic Counts

I believe you might be using a sledgehammer as a flotation device. Take a look at your data: related_topics. I personally hate RDBMSes because of object-relational mapping, but this seems to be the perfect use case for a relational database.

If your documents are very large, performance is a problem, and you hate ORM as much as I do, you might want to consider using both MongoDB and the RDBMS of your choice: let MongoDB fetch the results and the RDBMS aggregate the best matches for a given category. You could even run the queries in parallel! Of course, writes would then need to go to both databases.
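A hypothetical sketch of that split, with the counting done in SQL (table, column, and filter names are all assumptions, not a definitive schema):

// Per-category hit counts from an RDBMS mirror of the book data.
$pdo = new PDO('pgsql:host=localhost;dbname=library', 'user', 'secret');
$stmt = $pdo->prepare(
    'SELECT category_id, COUNT(*) AS hits
       FROM books
      WHERE page_count > :min_pages         -- the same filter, expressed in SQL
   GROUP BY category_id'
);
$stmt->execute(array(':min_pages' => 100));
$category_counts = $stmt->fetchAll(PDO::FETCH_KEY_PAIR);   // category_id => hits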

