Compact data structure for storing a large set of integral values

I'm working on an application that needs to pass around large sets of Int32 values. The sets are expected to contain ~1,000,000-50,000,000 items, where each item is a database key in the range 0-50,000,000. I expect the distribution of ids in any given set to be effectively random over this range. The operations I need on the set are dirt simple:

  • Add a new value
  • Iterate over all of the values.

There is a serious concern about the memory usage of these sets, so I'm looking for a data structure that can store the ids more efficiently than a simple List<int> or HashSet<int>. I've looked at BitArray, but that can be wasteful depending on how sparse the ids are. I've also considered a bitwise trie, but I'm unsure how to calculate the space efficiency of that solution for the expected data. A Bloom filter would be great, if only I could tolerate the false positives.

I would appreciate any suggestions of data structures suitable for this purpose. I'm interested in both out-of-the-box and custom solutions.

EDIT: To answer your questions:

  • No, the items don't need to be sorted
  • By "pass around" I mean both pass between methods and serialize and send over the wire. I clearly should have mentioned this.
  • There could be a decent number of these sets in memory at once (~100).


Use the BitArray. It uses only about 6 MB of memory; the only real problem is that iteration is Theta(N) in the size of the range, i.e. you have to walk all 50,000,000 positions regardless of how many values are set. Locality of reference is good, though, and you can allocate the entire structure in one operation.
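To make that concrete, here is a minimal sketch of wrapping BitArray as an add/iterate-only set; the class name and the hard-coded 50,000,000 bound are assumptions taken from the question, not a prescribed design:

```csharp
using System.Collections;
using System.Collections.Generic;

class BitArraySet
{
    const int MaxId = 50_000_000;                  // key range from the question (assumption)
    readonly BitArray bits = new BitArray(MaxId);  // ~6 MB, allocated in one operation

    public void Add(int id) => bits[id] = true;

    // Iteration walks the whole range, so it costs Theta(range), not Theta(set size).
    public IEnumerable<int> Values()
    {
        for (int i = 0; i < MaxId; i++)
            if (bits[i]) yield return i;
    }
}
```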

As for wasting space: you waste 6MB in the worst case.

EDIT: OK, you've got lots of sets and you're serializing. For serializing to disk, I suggest 6 MB files :)

For sending over the wire, just iterate and consider sending ranges instead of individual elements. That does require a sorting structure.
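Assuming the BitArray representation above, converting the set to (start, length) runs for transmission could look roughly like this sketch (the helper name is illustrative):

```csharp
using System.Collections;
using System.Collections.Generic;

static class RangeEncoder
{
    // Emit the set as (start, length) runs of consecutive ids instead of individual values.
    public static IEnumerable<(int Start, int Length)> ToRanges(BitArray bits)
    {
        int i = 0;
        while (i < bits.Length)
        {
            if (!bits[i]) { i++; continue; }
            int start = i;
            while (i < bits.Length && bits[i]) i++;   // extend the current run of set bits
            yield return (start, i - start);
        }
    }
}
```

Runs only pay off when ids cluster; for a uniformly random set most runs will be length 1, so measure before committing to this wire format.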

You need lots of these sets, so consider whether you have 600 MB to spare. If not, check out:

  • Bytewise tries: O(1) insert, O(n) iteration, much lower constant factors than bitwise tries
  • A custom hash table, perhaps Google sparsehash through C++/CLI
  • BSTs storing ranges/intervals
  • Supernode BSTs


It would depend on the distribution of the sizes of your sets. Unless you expect most of the sets to be (close to) the minimum you've specified, I'd probably use a bitset. To cover a range up to 50,000,000, a bitset ends up ~6 megabytes.

Compared to storing the numbers directly, this is marginally larger for the minimum size set you've specified (~6 megabytes instead of ~4), but considerably smaller for the maximum size set (1/32nd the size).

The second possibility would be to use a delta encoding. For example, instead of storing each number directly, store the difference between that number and the previous number that was included. Given a maximum magnitude of 50,000,000 and a minimum size of 1,000,000 items, the average difference between one number and the next is ~50. This means you can theoretically store the difference in <6 bits on average. I'd probably use the 7 least significant bits directly, and if you need to encode a larger gap, set the msb and (for example) store the size of the gap in the lower 7 bits plus the next three bytes. That can't happen very often, so in most cases you're using only one byte per number, for about 4:1 compression compared to storing numbers directly. In the best case this would use ~1 megabyte for a set, and in the worst about 50 megabytes -- 4:1 compression compared to storing numbers directly.
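A rough sketch of the writer side of that scheme, with illustrative names (a reader would reverse the same steps); it assumes the ids are written in ascending order so every gap is non-negative:

```csharp
using System.Collections.Generic;
using System.IO;

static class DeltaEncoder
{
    // Writes each id as its gap from the previous id. A gap under 128 takes one byte
    // with the msb clear; a larger gap sets the msb and spreads the gap across the low
    // 7 bits of that byte plus the next three bytes (31 bits total).
    public static void WriteDeltas(IEnumerable<int> sortedIds, Stream output)
    {
        int previous = 0;
        foreach (int id in sortedIds)
        {
            int gap = id - previous;
            previous = id;

            if (gap < 0x80)
            {
                output.WriteByte((byte)gap);                    // common case: one byte
            }
            else
            {
                output.WriteByte((byte)(0x80 | (gap & 0x7F)));  // msb flags a long gap
                output.WriteByte((byte)((gap >> 7) & 0xFF));
                output.WriteByte((byte)((gap >> 15) & 0xFF));
                output.WriteByte((byte)((gap >> 23) & 0xFF));
            }
        }
    }
}
```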

If you don't mind a little bit of extra code, you could use an adaptive scheme -- delta encoding for small sets (up to 6,000,000 numbers), and a bitmap for larger sets.


I think the answer depends on what you mean by "passing around" and what you're trying to accomplish. You say you are only adding to the list: how often do you add? How fast will the list grow? What is an acceptable overhead for memory use, versus the time to reallocate memory?

In your worst case, 50,000,000 32-bit numbers = 200 megabytes using the most efficient possible data storage mechanism. Assuming you may end up with this much use in your worst case scenario, is it OK to use this much memory all the time? Is that better than having to reallocate memory frequently? What's the distribution of typical usage patterns? You could always just use an int[] that's pre-allocated to the whole 50 million.

As far as access speed for your operations, nothing is faster than iterating and adding to a pre-allocated chunk of memory.
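For illustration, a pre-allocated append-only buffer is only a few lines; the 50,000,000 capacity and the names here are assumptions taken from the worst case described above:

```csharp
using System;

class PreallocatedSet
{
    readonly int[] items = new int[50_000_000];       // worst-case capacity up front (~200 MB)
    int count;

    public void Add(int id) => items[count++] = id;   // append only: no resizing, no reallocation

    public ArraySegment<int> Values() => new ArraySegment<int>(items, 0, count);
}
```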

From OP edit: There could be a decent number of these sets in memory at once (~100).

Hey now. You need to store 100 sets of 1 to 50 million numbers in memory at once? I think the bitset method is the only possible way this could work.

That would be 600 megabytes. Not insignificant, but unless they are (typically) mostly empty, it seems very unlikely that you would find a more efficient storage mechanism.

Now, if you don't use bitsets but instead use dynamically sized structures that could somehow take up less space to begin with, you're looking at a really ugly memory allocation/deallocation/garbage-collection scenario.

Let's assume you really need to do this, though I can only imagine why. So if your server has a ton of memory, just allocate as many of these 6-megabyte bitsets as you need and recycle them. Allocation and garbage collection are no longer a problem. Yeah, you're using a ton of memory, but that seems inevitable.
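A recycling pool along those lines might look like the following sketch; it is not a library API, and the names are made up:

```csharp
using System.Collections;
using System.Collections.Generic;

class BitSetPool
{
    const int MaxId = 50_000_000;                      // key range from the question (assumption)
    readonly Stack<BitArray> free = new Stack<BitArray>();

    public BitArray Rent()
    {
        if (free.Count == 0) return new BitArray(MaxId);
        BitArray bits = free.Pop();
        bits.SetAll(false);                            // clear a recycled array before reuse
        return bits;
    }

    public void Return(BitArray bits) => free.Push(bits);
}
```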
