Memcached vs. HW Load Balancer w/sticky sessions

devze.com (https://www.devze.com) 2023-01-19 22:08 — Source: web
All, I've been doing some research on when (and when not) to use Memcached. I'm familiar with distributed caching in terms of its core objectives. Using Memcached or similar makes sense to me if you've got a few servers, as it gives you a central virtual repository for your cached data.

That said, if you have a hardware load balancer (say, F5's BigIP) that can do sticky sessions, is having a distributed cache still as advantageous? AFAIK, the only thing you'd gain in that case is keeping cache data out of your web servers' RAM. Are there any other benefits to using memcached in an environment where your hardware load balancers are already doing sticky sessions?

As far as I know, having sticky sessions on does not cost much in terms of performance. Obviously, I could be wrong.


From my long experience developing advanced load balancer devices, cookie stickiness effectively kills your load balancing: the balancing decision is made only on the first request of each session, and from then on all of that user's requests go to the same server. This creates an unbalanced environment.

Saving sessions in memcached is the optimal solution for that: each web/app server can access the shared memcached to get the user's state, and the load balancing is done on a per-request basis rather than per session.
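As a rough sketch of that pattern — using a plain dict as a stand-in for a real memcached client (e.g. pymemcache's `Client(("host", 11211))`), with made-up key names and session fields — each server reads and writes session state through the shared store, so the balancer is free to route every request independently:

```python
import json

# Stand-in for the shared memcached cluster. In production this would be
# a memcached client object; every web/app server points at the same cluster.
shared_store = {}

def save_session(session_id, state, store=shared_store):
    """Serialize session state into the shared store (any server can write)."""
    store["session:" + session_id] = json.dumps(state)

def load_session(session_id, store=shared_store):
    """Fetch session state from the shared store (any server can read)."""
    raw = store.get("session:" + session_id)
    return json.loads(raw) if raw is not None else {}

# Server A handles the login request and stores the session...
save_session("abc123", {"user": "alice", "cart_items": 2})

# ...and Server B, receiving the user's next request, sees the same state,
# so no stickiness is needed.
state_on_b = load_session("abc123")
```

With a real memcached client you would also set an expiry on each session key so abandoned sessions age out.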


We have both hardware load balancers and memcached. They serve different purposes and memcached really doesn't have much to do with sticky sessions.

This is very high-level, but you can think about it like this: load balancing lets you spread out requests for CPU and other resources across a bunch of servers. However, memcached makes it so that you don't even need those resources in the first place.

For example, let's say you have a search page and decide to cache search results. Let's say you have 20 servers. When a search request comes in, it will get routed to a server, which we can call Server A. This server will do the search and then cache the results in memcached.

Now, if the same search request comes in from a different session or user, it will likely get routed to a different server, Server B. But Server B will be able to retrieve the cached results from memcached that Server A put there, so it will never have to actually do the search.
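That flow is the classic cache-aside pattern. A minimal sketch — with a dict standing in for the shared memcached instance, and a toy `run_search` function standing in for the expensive backend query:

```python
# Stand-in for the shared memcached instance all 20 servers talk to.
cache = {}
search_calls = 0  # counts how many times the real search actually runs

def run_search(query):
    """Placeholder for the expensive backend search."""
    global search_calls
    search_calls += 1
    return [hit for hit in ["apples", "apricots", "bananas"] if query in hit]

def cached_search(query):
    """Cache-aside: check the shared cache first, fall back to the real search."""
    key = "search:" + query
    if key in cache:          # hit: some server already ran this search
        return cache[key]
    results = run_search(query)
    cache[key] = results      # miss: store so every other server benefits
    return results

# Server A runs the search and populates the cache...
first = cached_search("ap")
# ...Server B gets the same query later and never touches the backend.
second = cached_search("ap")
```

The key point is that the cache key is derived from the query, not from the session, which is exactly why sticky sessions don't help here: the second request can land on any server and still hit.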

Caveat: This would not apply if you're using memcached to cache only things that are session-specific and change from session to session, such as a unique login token. For these things, you might as well only cache on the server with the specific session if you have sticky sessions on. However, most things that are worth caching are more broadly accessed.


Having sticky sessions on, however, can potentially reduce the effectiveness of your load balancing. We saw this on our production site, where one server would (just through chance) have lots of low-activity sessions, while another server would have lots of high-activity sessions. The sessions balance out, but not necessarily the load or the traffic.

This isn't a complete answer, but it's an important factor to consider.
