I have a problem where I need a .NET dictionary that supports multiple items per key. In the past I've used the STL multimap in my C++ programs. How does the design of a multimap differ from a dictionary of lists, i.e. in performance, size, etc. (excluding generics vs. templates)?
multimap.count: O(log n + m), where n is the number of keys and m is the number of items associated with a given key.
For a Dictionary<TKey, List<TValue>> the equivalent functionality would be:
int count = dictionary[key].Count;
A safer version is:
int count = 0;
List<TValue> list;
if(dictionary.TryGetValue(key, out list)) {
    count = list.Count;
}
This is an O(1) operation because lookup is O(1)¹ and List<T>.Count is O(1).
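If you want the exact semantics of multimap.count (returning 0 when the key is absent), you can wrap the pattern above in a small helper; the class and method names here are only illustrative:

using System.Collections.Generic;

static class MultiMapExtensions {
    // Returns the number of values stored under key, or 0 when the key
    // is not present -- the same result multimap.count would give.
    public static int CountForKey<TKey, TValue>(
        this IDictionary<TKey, List<TValue>> dictionary, TKey key) {
        List<TValue> list;
        return dictionary.TryGetValue(key, out list) ? list.Count : 0;
    }
}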
multimap.find: O(log n), where n is the number of keys.
For a Dictionary<TKey, List<TValue>> the equivalent functionality would be:
List<TValue> elements = dictionary[key];
Again, the safer version is:
List<TValue> list;
if(dictionary.TryGetValue(key, out list)) {
    // safe to iterate list
}
This is O(1). See the previous remark on lookup by key in a Dictionary<TKey, TValue>.
multimap.insert: O(log n), where n is the number of keys.
For a Dictionary<TKey, List<TValue>> the equivalent functionality would be:
// value is the TValue to insert
List<TValue> list;
if(!dictionary.TryGetValue(key, out list)) {
    list = new List<TValue>();
    dictionary.Add(key, list);
}
list.Add(value);
This is amortized O(1), but it can be O(n) when the capacity of the dictionary (or of the list) must be increased to accommodate the new element.
multimap.remove: There are three overloads of this method; I will only consider the one that accepts a key and removes all occurrences of that key from the multimap. This is an O(log n + m) operation, where there are n keys and m objects associated with a given key.
For a Dictionary<TKey, List<TValue>> the equivalent functionality would be:
dictionary.Remove(key);
From the documentation: "This method approaches an O(1) operation." Same comment applies.
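If you need to remove just one value under a key (similar to erasing a single element from the multimap), a sketch of the equivalent pattern, using the default equality comparer for TValue, is:

// Remove one occurrence of value stored under key; drop the key entirely
// when its list becomes empty so empty lists don't accumulate.
List<TValue> list;
if(dictionary.TryGetValue(key, out list)) {
    list.Remove(value);
    if(list.Count == 0) {
        dictionary.Remove(key);
    }
}

Note that List<T>.Remove is O(m) for a list of m values, because it searches the list and shifts the remaining elements, whereas erasing by iterator in a multimap is amortized constant time.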
1: From the documentation: "Retrieving a value by using its key is very fast, close to O(1)." Why the documentation is vague on this point is confusing to me. Either an operation is O(1) or it isn't. There is no such thing as "close" to O(1).
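Putting the patterns above together, a minimal multimap-style wrapper over Dictionary<TKey, List<TValue>> might look like this (the class and member names are illustrative, not a framework type):

using System.Collections.Generic;

class MultiMap<TKey, TValue> {
    private readonly Dictionary<TKey, List<TValue>> dictionary =
        new Dictionary<TKey, List<TValue>>();

    // multimap.insert: amortized O(1).
    public void Add(TKey key, TValue value) {
        List<TValue> list;
        if(!dictionary.TryGetValue(key, out list)) {
            list = new List<TValue>();
            dictionary.Add(key, list);
        }
        list.Add(value);
    }

    // multimap.count: O(1); returns 0 for a missing key.
    public int Count(TKey key) {
        List<TValue> list;
        return dictionary.TryGetValue(key, out list) ? list.Count : 0;
    }

    // multimap.find: O(1); an empty sequence for a missing key.
    public IEnumerable<TValue> Get(TKey key) {
        List<TValue> list;
        if(dictionary.TryGetValue(key, out list)) {
            return list;
        }
        return new TValue[0];
    }

    // multimap.remove(key): removes every value stored under key.
    public bool RemoveAll(TKey key) {
        return dictionary.Remove(key);
    }
}

Unlike a multimap, this does not keep the keys in sorted order, and enumerating the whole structure yields key/list pairs rather than a flat sequence of key/value pairs.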
multimap: insert, remove, and lookup operations all take logarithmic time, O(log n).
Each element is accessed using a key value. Unlike a map, a multimap's keys need not be unique, and the associated values do not need to be unique either. The multimap orders the elements by their keys using a stored comparison function, key_compare, which by default does a less-than comparison.
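If that ordered-by-key behavior is what you actually need, SortedDictionary<TKey, List<TValue>> is the closer .NET analogue: it is tree-based, so lookup and insertion by key are O(log n) rather than O(1), and enumeration returns keys in sorted order. For example:

// Keys are enumerated in ascending (comparer) order, as with a multimap.
var sorted = new SortedDictionary<string, List<int>>();
// ... populate using the same insert pattern shown above ...
foreach(KeyValuePair<string, List<int>> pair in sorted) {
    // pair.Key comes back in sorted order; pair.Value holds all values for that key
}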
Here's an article on IDictionary performance which doesn't mention any Big-O performance metrics but does give some practical runs using the dictionary.