why does memcached not support "multi set"

Can anyone explain why memcached folks decided to support multi get but not multi set. By multi I mean operation involving more than one key (see protocol at http://code.google.com/p/memcached/wiki/NewCommands).

So you can get multiple keys in one shot (the basic advantage being the usual saving from fewer round trips), but why can't you do bulk sets?

My theory is that the design expects sets to be relatively rare and to happen individually (e.g. after a cache read misses), but I still don't see how a multi-set really conflicts with the general philosophy of memcached.

I looked at the client features at http://code.google.com/p/memcached/wiki/NewCommonFeatures and it seems that some clients do support "multi-set" (why only in the binary protocol?). I am using the Java spymemcached client, by the way.
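For reference, this is roughly what the multi-get side looks like with spymemcached: getBulk() batches the keys into one request. The host, port, and key names below are just placeholders, not anything from my actual setup.

    import net.spy.memcached.MemcachedClient;

    import java.net.InetSocketAddress;
    import java.util.Arrays;
    import java.util.Map;

    public class MultiGetExample {
        public static void main(String[] args) throws Exception {
            // Host/port are placeholders for a local memcached instance.
            MemcachedClient client =
                    new MemcachedClient(new InetSocketAddress("localhost", 11211));

            // getBulk() fetches several keys in a single round trip
            // instead of issuing one get per key.
            Map<String, Object> values =
                    client.getBulk(Arrays.asList("user:1", "user:2", "user:3"));

            for (Map.Entry<String, Object> e : values.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue());
            }

            client.shutdown();
        }
    }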


It's not supported in the text protocol because it'd be very, very complicated to express, no clients would support it, and it would provide very little that you can't already do from the text protocol.

It's supported in the binary protocol because it's a trivial use case of binary operations.

spymemcached supports it implicitly -- just do a bunch of sets and magic happens:

http://dustin.github.com/2009/09/23/spymemcached-optimizations.html
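A rough sketch of what that looks like in practice, assuming a local memcached instance: BinaryConnectionFactory selects the binary protocol, and the keys, values, and expiration are made up for illustration. The point is simply that you issue plain asynchronous set() calls and let the client batch them.

    import net.spy.memcached.AddrUtil;
    import net.spy.memcached.BinaryConnectionFactory;
    import net.spy.memcached.MemcachedClient;

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Future;

    public class MultiSetExample {
        public static void main(String[] args) throws Exception {
            // BinaryConnectionFactory asks for the binary protocol; the address,
            // keys, values, and expirations here are placeholders.
            MemcachedClient client = new MemcachedClient(
                    new BinaryConnectionFactory(),
                    AddrUtil.getAddresses("localhost:11211"));

            // Each set() is asynchronous and returns a Future immediately; the
            // client queues the operations and can pack them together on the
            // wire, which is the implicit "multi-set" described above.
            List<Future<Boolean>> pending = new ArrayList<Future<Boolean>>();
            for (int i = 0; i < 100; i++) {
                pending.add(client.set("key:" + i, 3600, "value-" + i));
            }

            // Only block at the end, and only if the writes need confirming.
            for (Future<Boolean> f : pending) {
                f.get();
            }

            client.shutdown();
        }
    }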


I don't know a lot about memcached internals, but I assume writes have to be blocking, atomic operations. If multiple set operations could be batched, you could block all reads for a long time (or risk a get occurring while only half of a batch had been applied). Forcing writes to be done individually allows them to be interleaved fairly with gets.


I would imagine that the restriction against multi-sets is to avoid collisions when writing cached values to memcached.

For an object cache, I can't think of a case where you would need transactional-style writes. That use case seems less suited to a caching layer and better suited to the underlying database.

If sets for the same key come in interleaved from different clients, it is most likely the case that the last one wins, or is at least close enough, until the cache is invalidated and a newer value is written.

As Gian mentions, there don't seem to be any good reasons to block reads from the cache while several or many writes to the cache happen.
