Paginating data, has to be a better way

I've read like 10 or so "tutorials", and they all involve the same thing:

  • Pull a count of the data set
  • Pull the relevant data set (LIMIT, OFFSET)

I.e.:

SELECT COUNT(*) 
  FROM table 
 WHERE something = ?

SELECT * 
  FROM table 
 WHERE something = ?
 LIMIT ? OFFSET ?

Two very similar queries, no? There has to be a better way to do this; my dataset is 600,000+ rows and already sluggish (results are determined by over 30 WHERE clauses that vary from user to user, but are properly indexed, of course).


Unfortunately, to get the exact count as it is at the moment of the query, PostgreSQL has to go through all the rows that match the criteria and check whether they are visible to your transaction. But you probably don't need the exact count, because the results are stale anyway as soon as you send them to the user. So there are things you may try:

  1. Cache the count for subsequent queries, so the cost is only paid for the first page (this probably doesn't help much, since most people only look at the first page anyway).
  2. If the queries map well, use a specialized inverted-index search engine for the searches. Lucene/Solr is a good choice.
  3. If sometimes having the counts wildly off isn't a problem, use PostgreSQL's built-in statistics to estimate how many rows might match. You can get at the numbers by running EXPLAIN on the query (see the sketch after this list). Increase the statistics target, at least for the relevant tables, to get more accurate numbers. The numbers might still be significantly off with multiple predicates, because the planner doesn't know the correlation between different predicates and assumes they aren't correlated; something like WHERE sex='male' AND has_breasts=true will be assumed to match 25% of rows, which is probably an order of magnitude off. If you run EXPLAIN ANALYZE, you can check how many rows the planner expected to go through to get the first page of results versus how many it actually went through, and scale the estimate accordingly. This is probably somewhat similar to what Google uses to estimate how many pages match your query. If I remember correctly, Lucene supports similar estimation.
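
A minimal sketch of reading the estimate (items and something = 42 are hypothetical stand-ins for your table and predicate); the rows= figure in the plan's top line is the planner's guess, available without scanning the table:

EXPLAIN SELECT * FROM items WHERE something = 42;

--                         QUERY PLAN
-- ------------------------------------------------------------
--  Seq Scan on items  (cost=0.00..1693.00 rows=58623 width=68)
--    Filter: (something = 42)
--
-- rows=58623 is the estimated match count; use it as the
-- approximate total for the pager instead of running COUNT(*).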


Use the statistics for a count estimate. That will do for pagination and won't add much overhead.

See http://wiki.postgresql.org/wiki/Count_estimate
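
The linked wiki page sketches a PL/pgSQL helper along these lines: it runs EXPLAIN on an arbitrary query and pulls the planner's row estimate out of the plan text (reproduced from memory, so treat the wiki as authoritative):

CREATE OR REPLACE FUNCTION count_estimate(query text) RETURNS integer AS $$
DECLARE
    rec  record;
    rows integer;
BEGIN
    -- Run EXPLAIN and scan the plan text for the first "rows=" figure.
    FOR rec IN EXECUTE 'EXPLAIN ' || query LOOP
        rows := substring(rec."QUERY PLAN" FROM ' rows=([[:digit:]]+)');
        EXIT WHEN rows IS NOT NULL;
    END LOOP;
    RETURN rows;
END;
$$ LANGUAGE plpgsql VOLATILE STRICT;

-- Usage: pass the same WHERE clause as the real query; no table scan needed.
SELECT count_estimate('SELECT * FROM items WHERE something = 42');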


You may want to consider using a cursor. A cursor lets you fetch the result set one page at a time without re-running the query, though it normally has to live inside a single transaction (or be declared WITH HOLD), which can be awkward for stateless web requests.
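
A minimal sketch (items, the predicate, and the page size of 20 are assumptions):

BEGIN;

DECLARE page_cur SCROLL CURSOR FOR
    SELECT * FROM items WHERE something = 42 ORDER BY id;

FETCH 20 FROM page_cur;        -- page 1 (rows 1-20)
MOVE ABSOLUTE 40 IN page_cur;  -- position the cursor on row 40
FETCH 20 FROM page_cur;        -- page 3 (rows 41-60)

CLOSE page_cur;
COMMIT;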


You could use CREATE TABLE AS to place all the results in a new table, then count and page against that. You do have to manage (and eventually drop) the created tables yourself if TEMP tables are not an option.
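
A minimal sketch, assuming a per-user scratch table (results_u123 and the filter are made up names):

-- Materialize the filtered result once...
CREATE TABLE results_u123 AS
    SELECT * FROM items WHERE something = 42;

-- ...then count and page against the (much smaller) copy.
SELECT COUNT(*) FROM results_u123;
SELECT * FROM results_u123 ORDER BY id LIMIT 20 OFFSET 40;

DROP TABLE results_u123;  -- clean up when the user's session ends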
