Efficiently estimating the number of unique elements in a large list

This problem is a little similar to the one solved by reservoir sampling, but not the same. I think it's also a rather interesting problem.

I have a large dataset (typically hundreds of millions of elements), and I want to estimate the number of unique elements in it. There may be anywhere from a few to millions of unique elements in a typical dataset.

Of course, the obvious solution is to maintain a running HashSet of the elements you encounter and count them at the end. This would yield an exact result, but would require me to carry a potentially large amount of state as I scan through the dataset (i.e., all unique elements encountered so far).

Unfortunately, in my situation this would require more RAM than is available to me (noting that the dataset may be far larger than available RAM).

I'm wondering if there is a statistical approach that would allow me to do a single pass through the dataset and come up with an estimated unique element count at the end, while maintaining a relatively small amount of state as I scan.

The input to the algorithm would be the dataset (an Iterator in Java parlance), and it would return an estimated unique object count (probably a floating-point number). It is assumed that these objects can be hashed (i.e., you can put them in a HashSet if you want to). Typically they will be strings or numbers.


You could use a Bloom filter to get a reasonable lower bound. Do a single pass over the data, inserting each item and counting only those that were definitely not already in the set.
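
A minimal sketch of that idea in Java (the class name, sizing parameters, and the double-hashing scheme are my own choices, not from the answer). An element is counted only when at least one of its bit positions was still unset, which proves it had never been inserted before; false positives are silently skipped, which is exactly why the result is a lower bound:

```java
import java.util.BitSet;
import java.util.Iterator;

/**
 * Single-pass lower bound on the number of distinct elements using a
 * Bloom filter. An element is counted only when it was definitely absent,
 * i.e. at least one of its k bit positions was still unset.
 */
public class BloomDistinctLowerBound {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;
    private long lowerBound = 0;

    public BloomDistinctLowerBound(int numBits, int numHashes) {
        this.bits = new BitSet(numBits);
        this.numBits = numBits;
        this.numHashes = numHashes;
    }

    public void add(Object element) {
        // Derive k positions from two hashes (Kirsch-Mitzenmacher double hashing).
        int h1 = element.hashCode();
        int h2 = Integer.reverse(h1) ^ 0x9E3779B9;
        boolean definitelyNew = false;
        for (int i = 0; i < numHashes; i++) {
            int pos = Math.floorMod(h1 + i * h2, numBits);
            if (!bits.get(pos)) {
                definitelyNew = true; // an already-seen element would have set this bit
                bits.set(pos);
            }
        }
        if (definitelyNew) {
            lowerBound++; // false positives are never counted, hence a lower bound
        }
    }

    public long lowerBound() {
        return lowerBound;
    }

    public static long count(Iterator<?> data, int numBits, int numHashes) {
        BloomDistinctLowerBound filter = new BloomDistinctLowerBound(numBits, numHashes);
        while (data.hasNext()) {
            filter.add(data.next());
        }
        return filter.lowerBound();
    }
}
```

Size the bitvector generously: the fuller it gets, the more genuinely new elements collide with already-set bits and go uncounted, so the bound drifts further below the true count.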


This problem is well addressed in the literature; a good review of various approaches is http://www.edbt.org/Proceedings/2008-Nantes/papers/p618-Metwally.pdf. The simplest approach (and the most compact for very high accuracy requirements) is called Linear Counting. You hash elements to positions in a bitvector just as you would for a Bloom filter (except that only one hash function is required), but at the end you estimate the number of distinct elements from the formula D = -total_bits * ln(unset_bits / total_bits). Details are in the paper.
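
A sketch of Linear Counting as described above (class and method names are mine; Object.hashCode() stands in for the stronger hash you would want in practice):

```java
import java.util.BitSet;
import java.util.Iterator;

/** Minimal Linear Counting sketch: one hash function, one bitvector. */
public class LinearCounter {
    private final BitSet bits;
    private final int totalBits;

    public LinearCounter(int totalBits) {
        this.bits = new BitSet(totalBits);
        this.totalBits = totalBits;
    }

    public void add(Object element) {
        // One bit position per element; collisions are what the estimator corrects for.
        bits.set(Math.floorMod(element.hashCode(), totalBits));
    }

    public double estimate() {
        int unsetBits = totalBits - bits.cardinality();
        // D = -total_bits * ln(unset_bits / total_bits), per the Linear Counting paper.
        // If every bit is set, the estimate diverges: the bitvector was sized too small.
        if (unsetBits == 0) {
            return Double.POSITIVE_INFINITY;
        }
        return -totalBits * Math.log((double) unsetBits / totalBits);
    }

    public static double estimate(Iterator<?> data, int totalBits) {
        LinearCounter counter = new LinearCounter(totalBits);
        while (data.hasNext()) {
            counter.add(data.next());
        }
        return counter.estimate();
    }
}
```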


If you have a hash function that you trust, then you could maintain a HashSet just like you would for the exact solution, but throw out any item whose hash value is outside of some small range. E.g., use a 32-bit hash, but only keep items where the first two bits of the hash are 0. Then multiply by the appropriate factor (4 in this example) at the end to approximate the total number of unique elements.
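
A small illustration of this hash-sampling scheme (names are hypothetical, and it assumes hashCode() spreads values uniformly over all 32 bits, hence the "hash function that you trust" caveat):

```java
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

/**
 * Distinct-count estimate by hash sampling: keep only elements whose top
 * maskBits hash bits are zero (a 1/2^maskBits slice of hash space), count
 * that slice exactly with a HashSet, then scale the count back up.
 */
public class HashSampleEstimator {
    public static double estimate(Iterator<?> data, int maskBits) {
        if (maskBits < 1 || maskBits > 31) {
            throw new IllegalArgumentException("maskBits must be in 1..31");
        }
        Set<Object> sample = new HashSet<>(); // ~2^maskBits times smaller than the full set
        while (data.hasNext()) {
            Object item = data.next();
            // Keep the item only if the first maskBits bits of its 32-bit hash are 0.
            if (item.hashCode() >>> (32 - maskBits) == 0) {
                sample.add(item);
            }
        }
        // Each sampled unique element stands in for 2^maskBits unique elements.
        return sample.size() * Math.pow(2, maskBits);
    }
}
```

With maskBits = 2 this keeps roughly a quarter of the unique elements and multiplies by 4, matching the example in the answer; raise maskBits if the sample still doesn't fit in RAM.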


Nobody has mentioned the approximate algorithm designed specifically for this problem: HyperLogLog.
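
For reference, a compact HyperLogLog sketch (the constants and corrections follow the Flajolet et al. paper; this toy version adds a mixing step to compensate for weak hashCode() implementations and omits the large-range correction near 2^32):

```java
import java.util.Iterator;

/** Minimal HyperLogLog: 2^b one-byte registers (b in 7..16 here). */
public class HyperLogLog {
    private final int b;           // number of index bits
    private final int m;           // number of registers = 2^b
    private final byte[] registers;

    public HyperLogLog(int b) {
        if (b < 7 || b > 16) {
            throw new IllegalArgumentException("b must be in 7..16");
        }
        this.b = b;
        this.m = 1 << b;
        this.registers = new byte[m];
    }

    public void add(Object element) {
        int h = mix(element.hashCode());
        int bucket = h >>> (32 - b);               // top b bits pick the register
        int rest = h << b;                         // remaining 32-b bits
        int rank = Integer.numberOfLeadingZeros(rest) + 1; // position of leftmost 1-bit
        if (rank > 32 - b + 1) {
            rank = 32 - b + 1;                     // suffix was all zeros
        }
        if (rank > registers[bucket]) {
            registers[bucket] = (byte) rank;       // keep the maximum rank per register
        }
    }

    public double estimate() {
        double sum = 0;
        int zeroRegisters = 0;
        for (byte r : registers) {
            sum += Math.pow(2, -r);                // harmonic-mean accumulator
            if (r == 0) zeroRegisters++;
        }
        double alpha = 0.7213 / (1 + 1.079 / m);   // bias correction, valid for m >= 128
        double e = alpha * m * m / sum;
        // Small-range correction: fall back to Linear Counting while registers are sparse.
        if (e <= 2.5 * m && zeroRegisters > 0) {
            e = m * Math.log((double) m / zeroRegisters);
        }
        return e;
    }

    private static int mix(int h) {
        // MurmurHash3 finalizer, spreads weak hashCode() values across all 32 bits.
        h ^= h >>> 16;
        h *= 0x85ebca6b;
        h ^= h >>> 13;
        h *= 0xc2b2ae35;
        h ^= h >>> 16;
        return h;
    }

    public static double estimate(Iterator<?> data, int b) {
        HyperLogLog hll = new HyperLogLog(b);
        while (data.hasNext()) {
            hll.add(data.next());
        }
        return hll.estimate();
    }
}
```

With b = 14 this uses 16 KB of register state and typically estimates within about 1% (standard error ≈ 1.04/√m), which is why it is the standard answer to exactly this question.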
