Cache data structure design

To explain the domain...

I have a bunch (1,000,000) of items, each of a particular type out of 12 possible types (TypeA, TypeB ... TypeK). There are 3 immutable classes, viz. ItemKey (to uniquely identify the item), ItemTypeKey (to uniquely identify the type) and ItemType (containing the type data, including the ItemTypeKey).
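
For concreteness, the three classes are roughly shaped like this, written here as Java records; the field names are illustrative, not the real ones:

// Hypothetical shapes for the three immutable classes; field names are assumed.
record ItemKey(long id) { }                        // uniquely identifies an item
record ItemTypeKey(String name) { }                // uniquely identifies one of the types
record ItemType(ItemTypeKey key, String data) { }  // the type data, including its ItemTypeKey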

I have in front of me a cache that stores this data in two data structures ...

ConcurrentHashMap<ItemKey, ItemTypeKey>
ConcurrentHashMap<ItemTypeKey, ItemType>
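
With this split design, resolving an item's type takes two hops; a rough sketch of how reads chain through the two maps (class and field names are mine):

import java.util.concurrent.ConcurrentHashMap;

// Sketch of the existing split design: item -> type key, then type key -> type.
class SplitCache {
    private final ConcurrentHashMap<ItemKey, ItemTypeKey> itemToTypeKey = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<ItemTypeKey, ItemType> typeKeyToType = new ConcurrentHashMap<>();

    // Two lookups to get from an item to its type data.
    ItemType typeOf(ItemKey itemKey) {
        ItemTypeKey typeKey = itemToTypeKey.get(itemKey);
        return (typeKey == null) ? null : typeKeyToType.get(typeKey);
    }
}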

I would have implemented it simply as a ConcurrentHashMap<ItemKey, ItemType>. The memory footprint would be minimal in this case too, as the cache is only storing references anyway.
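
A sketch of that simpler single-map version (names are illustrative); since every item of the same type points at the same shared ItemType instance, the footprint is essentially one reference per item either way:

import java.util.concurrent.ConcurrentHashMap;

// Single map: each ItemKey maps straight to its (shared) ItemType instance.
class CombinedCache {
    private final ConcurrentHashMap<ItemKey, ItemType> itemToType = new ConcurrentHashMap<>();

    void put(ItemKey itemKey, ItemType type) {
        itemToType.put(itemKey, type);
    }

    // One lookup, and a single ConcurrentHashMap operation is atomic on its own.
    ItemType typeOf(ItemKey itemKey) {
        return itemToType.get(itemKey);
    }
}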

Is there any particular advantage to splitting the cache that I am not seeing? Alternative data-structure designs are also welcome.


Well, do you ever need to look up using ItemTypeKey as the lookup element? That's the only reason you'd do it this way.

The other potential problem with the sample you show is race conditions. The two maps are ConcurrentHashMaps, which implies to me that someone is planning to use this in multithreaded situations. If there is no synchronized lock surrounding the usage of both maps (in which case normal HashMaps would be fine), then there can be brief inconsistencies when items are added or removed, where one map has been updated and the other has not. That might not matter; it depends on the program.
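
For instance, if adding an item means putting into both maps, a reader can observe the moment where one put has happened and the other has not. A minimal sketch of the locked variant, assuming the record shapes above, where one lock guards both maps and plain HashMaps are then sufficient:

import java.util.HashMap;
import java.util.Map;

// One lock guards both maps, so readers never see one map updated without the other.
class LockedSplitCache {
    private final Map<ItemKey, ItemTypeKey> itemToTypeKey = new HashMap<>();
    private final Map<ItemTypeKey, ItemType> typeKeyToType = new HashMap<>();

    // type.key() assumes the hypothetical ItemType record sketched earlier.
    synchronized void put(ItemKey itemKey, ItemType type) {
        itemToTypeKey.put(itemKey, type.key());
        typeKeyToType.put(type.key(), type);
    }

    synchronized ItemType typeOf(ItemKey itemKey) {
        ItemTypeKey typeKey = itemToTypeKey.get(itemKey);
        return (typeKey == null) ? null : typeKeyToType.get(typeKey);
    }
}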

This is a long rambling way of saying "Yeah if ItemKey is immutable, and you don't need ItemTypeKey as a lookup (at least often), your refactoring sounds good to me" :)
