
Timeout Mechanism for Hashtable

I have a hashtable that is under heavy traffic. I want to add a timeout mechanism to it that removes records that are too old. My concerns are:

- It should be lightweight.
- The remove operation is not time-critical. For example, with a timeout value of 1 hour, it is fine if an entry is removed after 1 hour, or even after 1 hour and 15 minutes.

My idea is to create a big array (used as a ring buffer) that stores the put time and the hashtable key. When adding to the hashtable, I use an array index to find the next slot in the array. If the slot is empty, I store the insertion time and the hashtable key there; if the slot is not empty, I compare its stored insertion time to check whether a timeout has occurred.

If a timeout has occurred, I remove that entry from the hashtable (if it has not been removed already); if not, I increment the index until I find an empty or timed-out array slot. When removing from the hashtable directly, no operation is done on the big array.

In short, each add operation on the hashtable may remove at most one timed-out element from it, or do nothing.
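In code, the idea would look roughly like this (a rough, single-threaded sketch with illustrative names; a real version under heavy traffic would need synchronization or a ConcurrentHashMap):

import java.util.HashMap;
import java.util.Map;

public class RingBufferTimeoutMap<K, V> {
    private static final class Slot<K> {
        K key;
        long putTime;
    }

    private final Map<K, V> map = new HashMap<>();
    private final Slot<K>[] ring;       // the "big array" of (key, put time)
    private final long timeoutMillis;
    private int index = 0;

    @SuppressWarnings("unchecked")
    public RingBufferTimeoutMap(int ringSize, long timeoutMillis) {
        this.ring = (Slot<K>[]) new Slot[ringSize];
        this.timeoutMillis = timeoutMillis;
    }

    public void put(K key, V value) {
        long now = System.currentTimeMillis();
        // Scan for an empty or timed-out slot, removing at most one
        // timed-out entry from the map per put. The scan is bounded so a
        // ring full of live slots cannot loop forever; size the ring
        // larger than the expected number of live entries.
        for (int i = 0; i < ring.length; i++) {
            index = (index + 1) % ring.length;
            Slot<K> slot = ring[index];
            if (slot == null) {
                slot = new Slot<>();
                ring[index] = slot;
            } else if (now - slot.putTime > timeoutMillis) {
                map.remove(slot.key);   // no-op if it was already removed
            } else {
                continue;               // slot still live, keep scanning
            }
            slot.key = key;
            slot.putTime = now;
            break;
        }
        map.put(key, value);
    }

    public V get(K key) {
        return map.get(key);
    }

    public void remove(K key) {
        map.remove(key);                // no bookkeeping on the ring
    }
}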

What would be a more elegant and more lightweight solution?

Thanks for your help.


My approach would be to use the Guava MapMaker:

import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;

import com.google.common.base.Function;
import com.google.common.collect.MapMaker;

// Note: in more recent Guava releases this API moved to CacheBuilder.
ConcurrentMap<String, MyValue> graphs = new MapMaker()
   .maximumSize(100)
   .expireAfterWrite(1, TimeUnit.HOURS)
   .makeComputingMap(
       new Function<String, MyValue>() {
         public MyValue apply(String string) {
           return calculateMyValue(string);
         }
       });

This might not be exactly what you're describing, but chances are it's close enough. And it's much easier to produce (plus it uses a well-tested code base).

Note that you can tweak the behaviour of the resulting Map by calling different methods before the make*() call.


Consider instead using a LinkedHashMap or maybe a WeakHashMap.

The former has a constructor that sets the iteration order of its elements to the order of last access, which makes it trivial to remove elements that are too old. Its removeEldestEntry method can also be overridden to define your own policy for automatically removing the eldest entry after a new one is inserted; see the sketch below.
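A rough sketch of that approach (my names; it uses the default insertion order rather than access order, which fits expiring entries a fixed time after they were written, and it is not thread-safe):

import java.util.LinkedHashMap;
import java.util.Map;

class Timestamped<V> {
    final V value;
    final long createdAt = System.currentTimeMillis();
    Timestamped(V value) { this.value = value; }
}

// On each insertion, evict the eldest entry if it is older than the
// timeout: at most one removal per put(), matching the lazy cleanup
// the question allows.
public class ExpiringMap<K, V> extends LinkedHashMap<K, Timestamped<V>> {
    private final long timeoutMillis;

    public ExpiringMap(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, Timestamped<V>> eldest) {
        return System.currentTimeMillis() - eldest.getValue().createdAt > timeoutMillis;
    }
}

You would insert with map.put(key, new Timestamped<>(value)); since removeEldestEntry removes at most one entry per insertion, cleanup is amortized over puts, much like in the question's own scheme.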

A WeakHashMap, on the other hand, uses weak references for its keys, so any key that has no other reference to it can be garbage collected automatically.
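A tiny illustration (timing depends entirely on the garbage collector, so this is indicative only):

import java.util.Map;
import java.util.WeakHashMap;

public class WeakMapDemo {
    public static void main(String[] args) {
        Map<Object, String> cache = new WeakHashMap<>();
        Object key = new Object();
        cache.put(key, "value");
        key = null;     // drop the only strong reference to the key
        System.gc();    // just a hint; collection timing is not guaranteed
        System.out.println(cache.size());  // likely 0 once the key is collected
    }
}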


I think a much easier solution is to use LRUMap from Apache Commons Collections. Of course you can write your own data structure if you enjoy it or want to learn, but this problem is so common that numerous ready-made solutions exist. (I'm sure others will point you to other implementations too; after a while your problem will be choosing the right one from them :))
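For example, assuming Commons Collections 4 (the package name differs in the older 3.x releases):

import java.util.Map;
import org.apache.commons.collections4.map.LRUMap;

public class LruMapDemo {
    public static void main(String[] args) {
        // Evicts the least-recently-used entry once 1000 entries are reached.
        // Note: eviction is size-based, not time-based.
        Map<String, String> cache = new LRUMap<>(1000);
        cache.put("key", "value");
        System.out.println(cache.get("key"));
    }
}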


Under the assumption that the currently most heavily accessed items in your cache structure are in the significant minority, you may well get by with randomly selecting items for removal (you have a low probability of removing something very useful). I've used this technique and, in this particular application, it worked very well and took next to no implementation effort.
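A rough sketch of that approach, assuming size-triggered eviction and a uniformly random victim (all names are mine; not thread-safe):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Keep keys in a list alongside the map and evict a randomly chosen
// entry whenever the cache exceeds its capacity.
public class RandomEvictionCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final List<K> keys = new ArrayList<>();
    private final Random random = new Random();
    private final int capacity;

    public RandomEvictionCache(int capacity) {
        this.capacity = capacity;
    }

    public void put(K key, V value) {
        if (map.put(key, value) == null) {
            keys.add(key);      // track only newly inserted keys
        }
        while (keys.size() > capacity) {
            // Swap-remove a random key so eviction stays O(1).
            int i = random.nextInt(keys.size());
            K victim = keys.get(i);
            keys.set(i, keys.get(keys.size() - 1));
            keys.remove(keys.size() - 1);
            map.remove(victim);
        }
    }

    public V get(K key) {
        return map.get(key);
    }
}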
