
Efficiency: What data structure to use...?

开发者 https://www.devze.com 2022-12-20 17:22 Source: web

I am working with a very large dataset. Essentially I will be working with millions of records and storing a value into a dataset.

Each time I store a value, I must first check that it is not already in the data structure. If it is already present, I must update the record (or remove and re-insert it) to increment its count.


The dataset contains repeats, and I don't want to pick a poor data structure with O(n) lookups; I'd like to be able to run this overnight and come in the next morning to find it done!

Any advice?


As others have said, a hash table is probably the right answer, but hash tables aren't terribly space-efficient, so if you may be exhausting your memory, consider a sorted array of keys and a parallel sorted array of values. Basically, if you can get access to the entire list of keys upfront, build an array of them, sort it, and create a parallel array of values. Every time you need to store something, do a binary search (O(log N)) to find the index in the key array, then update the corresponding index in the value array. This is less speed-efficient than a hash table, but guarantees virtually no space overhead.
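If the full key set really is available upfront, the sorted-parallel-arrays idea above might look like this sketch using Python's `bisect` module (the key list is illustrative):

```python
import bisect

# All keys are known upfront: sort them once.
keys = sorted(["apple", "banana", "cherry", "date"])
values = [0] * len(keys)   # parallel array of counts, index-aligned with keys

def update(key):
    """Binary-search for key (O(log N)) and bump its parallel count."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        values[i] += 1
    else:
        raise KeyError(key)  # key was not in the upfront list

for k in ["banana", "apple", "banana"]:
    update(k)
```

The two arrays cost exactly one slot per key, with none of the empty-bucket overhead a hash table carries to keep its load factor low.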


It sounds like you want a hash table, combined with (possibly) a list or some specific structure. That, to me, sounds like a database.
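As a sketch of the database route, SQLite (bundled with Python's standard library) can do the check-and-update in a single upsert statement; the table and column names here are illustrative, and the `ON CONFLICT` syntax needs SQLite 3.24 or later:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path for an overnight run
conn.execute("CREATE TABLE counts (value TEXT PRIMARY KEY, n INTEGER)")

def record(value):
    # The PRIMARY KEY gives an indexed (B-tree) lookup, so the
    # insert-or-increment is one O(log N) statement per record.
    conn.execute(
        "INSERT INTO counts (value, n) VALUES (?, 1) "
        "ON CONFLICT(value) DO UPDATE SET n = n + 1",
        (value,),
    )

for v in ["x", "y", "x"]:
    record(v)

rows = dict(conn.execute("SELECT value, n FROM counts"))
```

A database also buys durability: if the overnight run dies halfway, the committed counts survive.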


Use a hash table.
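In Python, for example, the standard library already packages this pattern as `collections.Counter` (a minimal sketch):

```python
from collections import Counter

# Counter is a dict subclass: insert-or-increment is a single
# average-O(1) step, with missing keys defaulting to 0.
counts = Counter()
for value in ["a", "b", "a"]:
    counts[value] += 1  # no explicit membership check needed
```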


You could try a binary tree: log_2(1,000,000) is about 20, so lookups stay cheap even at millions of records. This might be better if you don't know all the keys ahead of time.
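A hedged sketch of the binary-tree option, as a plain unbalanced BST storing a count per key (for the guaranteed O(log N) depth the answer assumes, you would want a self-balancing variant such as a red-black or AVL tree; the names here are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key, self.count = key, 1
        self.left = self.right = None

def insert(root, key):
    """Insert key or bump its count; returns the (possibly new) root.
    Depth is O(log N) on average for random keys, O(N) worst case."""
    if root is None:
        return Node(key)
    if key == root.key:
        root.count += 1
    elif key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

root = None
for k in [5, 3, 5, 8, 3, 5]:
    root = insert(root, k)
```

Unlike a hash table, an in-order walk of the tree also yields the keys in sorted order for free.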

