LRU LinkedHashMap that limits size based on available memory

I want to create a LinkedHashMap which will limit its size based on available memory (i.e. when freeMemory + (maxMemory - allocatedMemory) gets below a certain threshold). This will be used as a form of cache, probably using "least recently used" as a caching strategy.
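
To make it concrete, the headroom check I have in mind is roughly this (the class and threshold names here are just illustrative):

    public final class MemoryMonitor {
        // Example threshold; pick whatever headroom makes sense for your app.
        private static final long MIN_FREE_BYTES = 32L * 1024 * 1024;

        /** Bytes the JVM could still use: free heap plus not-yet-allocated headroom. */
        public static long availableBytes() {
            Runtime rt = Runtime.getRuntime();
            long allocated = rt.totalMemory(); // heap currently claimed from the OS
            long free = rt.freeMemory();       // unused part of that allocated heap
            long max = rt.maxMemory();         // the -Xmx ceiling
            return free + (max - allocated);
        }

        public static boolean belowThreshold() {
            return availableBytes() < MIN_FREE_BYTES;
        }
    }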

My concern, though, is that allocatedMemory also includes (I assume) data that hasn't yet been garbage collected, and thus will over-estimate the amount of used memory. I'm concerned about the unintended consequences this might have.

For example, the LinkedHashMap may keep deleting items because it thinks there isn't enough free memory, but the free memory doesn't increase because these deleted items aren't being garbage collected immediately.

Does anyone have any experience with this type of thing? Is my concern warranted? If so, can anyone suggest a good approach?

I should add that I also want to be able to "lock" the cache, basically saying "ok, from now on don't delete anything because of memory usage issues".


I know I'm biased, but I really have to strongly recommend our MapMaker for this. Use the softKeys() or softValues() feature, depending on whether it's GC collection of the key or of the value that more aptly describes when an entry can be cleaned up.
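
A minimal sketch of what that looks like with the Google Collections MapMaker, assuming soft values rather than soft keys (the class name and the loader method are just illustrative):

    import com.google.common.collect.MapMaker;
    import java.util.concurrent.ConcurrentMap;

    public class SoftValueCacheExample {
        // Values are held via SoftReferences, so the GC can discard them under
        // memory pressure and MapMaker cleans up the stale mappings for us.
        private final ConcurrentMap<String, byte[]> cache =
                new MapMaker().softValues().makeMap();

        public byte[] get(String key) {
            byte[] value = cache.get(key);  // may be null if the GC reclaimed it
            if (value == null) {
                value = load(key);          // recompute / reload on a miss
                cache.put(key, value);
            }
            return value;
        }

        private byte[] load(String key) {   // placeholder for the real loading logic
            return new byte[1024];
        }
    }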


Caches tend to be problematic. IIRC, there's a SoftCache in Sun's JRE that has had many problems.

Anyway, the simplest thing is to use SoftReferences in the map. This should work fine so long as the overhead of SoftReference plus Map.Entry is significantly lower than the cached data.
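
A sketch of that approach, wrapping the values in SoftReferences inside an access-ordered LinkedHashMap (the class name and the extra size cap are illustrative):

    import java.lang.ref.SoftReference;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // LRU map whose values the GC may reclaim under memory pressure.
    public class SoftLruCache<K, V> {
        private final LinkedHashMap<K, SoftReference<V>> map;

        public SoftLruCache(final int maxEntries) {
            // accessOrder = true makes iteration order least-recently-used first
            this.map = new LinkedHashMap<K, SoftReference<V>>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, SoftReference<V>> eldest) {
                    return size() > maxEntries;  // hard cap on top of the soft refs
                }
            };
        }

        public synchronized void put(K key, V value) {
            map.put(key, new SoftReference<V>(value));
        }

        public synchronized V get(K key) {
            SoftReference<V> ref = map.get(key);
            if (ref == null) {
                return null;
            }
            V value = ref.get();
            if (value == null) {
                map.remove(key);  // the value was collected; drop the stale entry
            }
            return value;
        }
    }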

Alternatively you can, like WeakHashMap, use a ReferenceQueue and either poll it or have a thread blocking on it (one thread per instance, unfortunately). Be careful with synchronisation issues.
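
A rough sketch of the ReferenceQueue variant (all names are illustrative); here the queue is drained on each access rather than by a dedicated thread:

    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.SoftReference;
    import java.util.HashMap;
    import java.util.Map;

    // Cache whose values are soft-referenced; collected values are purged by
    // polling a ReferenceQueue, similar to what WeakHashMap does internally.
    public class QueueCleaningCache<K, V> {

        // SoftReference subclass that remembers its key so we can purge the mapping.
        private static class ValueRef<K, V> extends SoftReference<V> {
            final K key;
            ValueRef(K key, V value, ReferenceQueue<V> queue) {
                super(value, queue);
                this.key = key;
            }
        }

        private final Map<K, ValueRef<K, V>> map = new HashMap<K, ValueRef<K, V>>();
        private final ReferenceQueue<V> queue = new ReferenceQueue<V>();

        public synchronized void put(K key, V value) {
            drainQueue();
            map.put(key, new ValueRef<K, V>(key, value, queue));
        }

        public synchronized V get(K key) {
            drainQueue();
            ValueRef<K, V> ref = map.get(key);
            return ref == null ? null : ref.get();
        }

        // Remove mappings whose values the GC has already reclaimed.
        @SuppressWarnings("unchecked")
        private void drainQueue() {
            ValueRef<K, V> stale;
            while ((stale = (ValueRef<K, V>) queue.poll()) != null) {
                // only remove if the mapping still points at the collected reference
                if (map.get(stale.key) == stale) {
                    map.remove(stale.key);
                }
            }
        }
    }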

"Locking" the map, you probably want to avoid if necessary. You'd need keep strong references to all the data (and evict if not null). That is going to be ugly.


I would strongly suggest using something like Ehcache instead of re-inventing a caching system. It's super simple to use, very configurable, and works great.
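
For what it's worth, a minimal programmatic setup with the Ehcache 2.x API looks roughly like this (the cache name and numbers are just example values):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class EhcacheExample {
        public static void main(String[] args) {
            CacheManager manager = CacheManager.create();

            // name, max in-memory elements, overflowToDisk, eternal, TTL secs, TTI secs;
            // eviction defaults to LRU once the element limit is reached.
            Cache cache = new Cache("myCache", 10000, false, false, 3600, 1800);
            manager.addCache(cache);

            cache.put(new Element("key", "value"));
            Element hit = cache.get("key");  // null on a miss or after expiry
            System.out.println(hit == null ? null : hit.getObjectValue());

            manager.shutdown();
        }
    }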


As matt b said, something like Ehcache or JbossCache is a good first step.

If you want something light-weight and in-process, look at google collections. For example, you can use MapMaker (http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/collect/BiMap.html) to make a map with Soft/Weak keys and values, so it would cache only those items it has room for (though you wouldn't get LRU).


I had the same need in the past and this is how I implemented my cache:

  • there is a cache memory manager, which has a minimum and a maximum memory limit (the maximum limit is what matters anyway)
  • every registered cache has the following (important) parameters: maximum capacity (most of the time you have an upper limit anyway; you don't want to hold more than X items) and percent memory usage
  • I use a LinkedHashMap and a ReentrantReadWriteLock to guard the cache.
  • every X puts I calculate the mean memory consumption per entry and trigger (asynchronous) eviction if the computed memory usage exceeds the allowed memory limit.
  • of course the memory computation doesn't reflect the real memory consumption exactly, but comparing the computed value with the real one (using a profiler) I found it close enough.

I was also planning to add an additional guard on the cache, to evict synchronously in case puts come in faster than the memory-based evictions can keep up, but so far I haven't found the need for it.
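
A condensed sketch of the design described above; all names and constants are illustrative, and the per-entry size estimate is left as a rough placeholder rather than the recomputation done every X puts:

    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    // Access-ordered LinkedHashMap guarded by a read/write lock; every SAMPLE_EVERY
    // puts we check the memory estimate and, if over budget, evict LRU entries asynchronously.
    public class MemoryBoundedCache<K, V> {
        private static final int SAMPLE_EVERY = 100;       // illustrative constant
        private final long memoryBudgetBytes;               // memory allowed for this cache
        private final int maxEntries;                       // hard item cap as well
        private final LinkedHashMap<K, V> map = new LinkedHashMap<K, V>(16, 0.75f, true);
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final ExecutorService evictor = Executors.newSingleThreadExecutor();
        private long putsSinceSample = 0;
        private long estimatedBytesPerEntry = 1024;          // rough placeholder guess

        public MemoryBoundedCache(long memoryBudgetBytes, int maxEntries) {
            this.memoryBudgetBytes = memoryBudgetBytes;
            this.maxEntries = maxEntries;
        }

        public V get(K key) {
            lock.writeLock().lock();   // write lock: access order mutates the map on get
            try {
                return map.get(key);
            } finally {
                lock.writeLock().unlock();
            }
        }

        public void put(K key, V value) {
            boolean evict;
            lock.writeLock().lock();
            try {
                map.put(key, value);
                evict = (++putsSinceSample % SAMPLE_EVERY == 0) && overBudget();
            } finally {
                lock.writeLock().unlock();
            }
            if (evict) {
                evictor.execute(new Runnable() {
                    public void run() { evictUntilWithinBudget(); }
                });
            }
        }

        // Crude estimate: entry count times a mean per-entry size measured elsewhere.
        private boolean overBudget() {
            return map.size() > maxEntries
                    || (long) map.size() * estimatedBytesPerEntry > memoryBudgetBytes;
        }

        private void evictUntilWithinBudget() {
            lock.writeLock().lock();
            try {
                Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
                while (it.hasNext() && overBudget()) {
                    it.next();    // eldest (least recently used) entry first
                    it.remove();
                }
            } finally {
                lock.writeLock().unlock();
            }
        }
    }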
