Here's the problem I'm trying to solve:
I need to be able to display a paged, sorted table of data that is stored across several database shards.
Paging and sorting are well-known problems that most of us can solve in any number of ways when the data comes from a single source. But if you're splitting your data across shards, using a DHT, a distributed document database, or whatever flavor of NoSQL you prefer, things get more complicated.
Here's a simple picture of a really small data set:
Shard | Data
  1   | A
  1   | D
  1   | G
  2   | B
  2   | E
  2   | H
  3   | C
  3   | F
  3   | I

Sorted into pages (Page Size = 3):
Page | Data
  1  | A
  1  | B
  1  | C
  2  | D
  2  | E
  2  | F
  3  | G
  3  | H
  3  | I

And if we wanted to show the user page 2, we'd return:

D
E
F

If the size of the table in question is something like 10 million rows, or 100 million, you can't just pull all the data down onto a web/application server to sort it and return the correct page. And you obviously can't let each individual shard sort and page its own slice of the data, because the shards don't know about each other.
To complicate matters, the data I need to present can't be too far out of date, so pre-calculating a set of useful sorts ahead of time and storing the results for later retrieval isn't practical.
There are several solutions, some of which may not be feasible for you, but maybe one of them will stick:
- Do the sharding by ranges of this value (e.g., shard 1 contains A-C, shard 2 contains D-F, etc.). Alternatively, use another table with foreign keys into this table as an index, and shard that index table by the same ranges. That way you can easily locate and fetch any specified range. This solution is probably the best in terms of performance, if you can do it (it assumes that the number of shards is static and the shards are reliable); see the first sketch after this list.
- Identify the page items by binary search. For example, say you want items 100 to 109 (a page of 10). For each shard, count the number of values lexicographically below a pivot, say "M". If the sum of those counts is above 100, lower the pivot; otherwise raise it, narrowing in by binary search. Once you have identified the 100th item (the first item on your page), take the top 9 (10 - 1) items larger than it from every shard, fetch them, sort the combined list, take its top 9, and prepend the first item: there's your page. This approach is harder to implement and requires O(log(n)) rounds of queries, so it is slower than (1), but it may still be reasonably fast if the load is not very heavy (see the second sketch after this list).
- Store the page number with each value. This gives you blazingly fast reads but horribly slow writes, so it only works when there are very few writes (or only appends in terms of the value you order by).
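To make the first option concrete, here's a minimal sketch in Python. It assumes range-based sharding where each shard owns a contiguous slice of the sort key, and that per-shard row counts are available (say from cheap COUNT queries or maintained statistics); the `shard_ranges` structure and `locate_page` helper are illustrative, not any real API.

```python
# Each tuple is (low key, high key, row count) for one shard; the counts are
# assumed to come from per-shard COUNT queries or maintained statistics.
shard_ranges = [("A", "C", 3), ("D", "F", 3), ("G", "I", 3)]

def locate_page(page_number, page_size):
    """Map a zero-based page to (shard index, offset within that shard) for its first row."""
    first_rank = page_number * page_size
    seen = 0
    for i, (_low, _high, count) in enumerate(shard_ranges):
        if first_rank < seen + count:
            return i, first_rank - seen   # the page starts on this shard
        seen += count
    raise IndexError("page out of range")

# Page 2 of the toy data above (zero-based page 1, page size 3) starts at
# shard index 1, offset 0 -- i.e. a single ORDER BY ... LIMIT/OFFSET query
# against the shard that owns D-F. A page that straddles a range boundary
# would simply continue onto the next shard.
print(locate_page(page_number=1, page_size=3))  # -> (1, 0)
```

The same arithmetic works if the ranges belong to a sharded index table rather than the data itself: resolve the page against the index, then fetch the actual rows by key.

And here's a minimal sketch of the binary-search option, with each shard modelled as a sorted in-memory list of integer sort keys to keep the pivoting simple. `count_below` and `fetch_from` stand in for what would be one query per shard in a real system (a COUNT under a WHERE clause and a bounded range scan), and all names here are assumptions for illustration. It also fetches a full page's worth of rows at or above the boundary key from every shard instead of prepending the first item separately, which comes to the same thing.

```python
import bisect

def count_below(shard, key):
    """Rows on this shard with sort key < key (one COUNT query per shard)."""
    return bisect.bisect_left(shard, key)

def fetch_from(shard, key, limit):
    """Up to `limit` rows on this shard with sort key >= key, in ascending order."""
    start = bisect.bisect_left(shard, key)
    return shard[start:start + limit]

def fetch_page(shards, page_number, page_size, lo, hi):
    """Return one globally sorted page; lo/hi bound the key space, keys assumed unique."""
    first_rank = page_number * page_size   # zero-based rank of the page's first row

    # Binary-search the key space for the largest key whose global rank
    # (total rows below it across all shards) is still <= first_rank;
    # with unique keys that is exactly the key of the page's first row.
    while lo < hi:
        mid = (lo + hi + 1) // 2
        rank = sum(count_below(s, mid) for s in shards)
        if rank <= first_rank:
            lo = mid        # not past the target yet, move the pivot up
        else:
            hi = mid - 1    # overshot, move the pivot down

    # Pull a page's worth of candidates starting at that key from every shard,
    # merge them, and keep the first page_size rows.
    candidates = []
    for s in shards:
        candidates.extend(fetch_from(s, lo, page_size))
    return sorted(candidates)[:page_size]

# The toy data from the question, with A..I encoded as 1..9:
shards = [[1, 4, 7], [2, 5, 8], [3, 6, 9]]   # shard 1: A,D,G  shard 2: B,E,H  shard 3: C,F,I
print(fetch_page(shards, page_number=1, page_size=3, lo=1, hi=9))  # -> [4, 5, 6], i.e. D, E, F
```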
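Each round of the binary search costs one count per shard, so a page costs O(log(n)) rounds plus a single bounded fetch per shard, which is where the estimate above comes from.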