Staged, Asynchronous Processing in Servlet Containers?

I've been reading a bit about Staged Event-Driven Architecture (SEDA) and the concept of asynchronous I/O. I've come across a few people talking about implementing SEDA to handle HTTP requests. Example:

  • Web server receives client HTTP request.
  • Request is put onto a queue
  • One of a limited number of consumers picks the request off the queue
  • Request is processed, maybe via a typical MVC framework (e.g. Spring MVC)
  • Response is sent back to client

The motivation for this is described as being the ability to control the load coming from clients - i.e. it would scale much better than just handling a request in the same thread that accepted it. Once a request is queued, the accepting thread is immediately free to accept another request.
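
As a very rough sketch of the flow described above (all class and method names here are made up for illustration, not taken from any particular framework), a bounded queue plus a small fixed pool of consumers is enough to show the load-control idea - when the queue is full, the acceptor can reject new work instead of tying up more threads:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical stage: an acceptor offers work onto a bounded queue and a
    // small, fixed pool of consumers drains it.
    public class StagedDispatcher {
        private static final int CONSUMERS = 8;

        // Bounded queue: when it is full the acceptor rejects new work, which
        // is how the stage controls the load coming from clients.
        private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(100);
        private final ExecutorService consumers = Executors.newFixedThreadPool(CONSUMERS);

        public StagedDispatcher() {
            for (int i = 0; i < CONSUMERS; i++) {
                consumers.submit(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            queue.take().run(); // process, e.g. hand off to an MVC framework
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
        }

        /** Called from the accepting thread; returns immediately so it can accept more. */
        public boolean submit(Runnable request) {
            return queue.offer(request); // false means "queue full" - reject or shed load
        }
    }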

Surely this type of model, or something quite like it, is already implemented in servlet containers such as Tomcat or Jetty? The reading I've been doing almost implies that such containers do not implement this sort of approach and thus would have scalability problems in high-traffic environments.

Can anyone clear this matter up for me?


Yes, you are sort of right: most modern Servlet containers will 'queue' requests to be dispatched to a limited number of threads that actually do the processing, and this is somewhat analogous to SEDA.

But of course, a standard HTTP request is not really placed on an explicit queue (as a task/request would be in SEDA). Instead, the user's connection is simply not accepted until the server is ready to process it (or until it is rejected), and this is the key difference: the user has to wait for the connection to be accepted and then processed, and all of that happens synchronously. A key feature of SEDA is that request processing is generally asynchronous, i.e. you dispatch a task onto a queue and then forget about it, possibly to be notified of its completion some time later.
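
A tiny, hypothetical illustration of that "dispatch it and forget about it, get notified later" style, using nothing container-specific, just java.util.concurrent:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class FireAndForgetSketch {
        public static void main(String[] args) {
            ExecutorService stage = Executors.newFixedThreadPool(4);

            CompletableFuture
                .supplyAsync(() -> "processed result", stage)                  // dispatch the task...
                .thenAccept(r -> System.out.println("notified later: " + r));  // ...and get told later

            // The dispatching thread is free here immediately; it does not block
            // waiting for the result the way a synchronous request handler would.
            stage.shutdown();
        }
    }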

Anyway, in Tomcat you can tune acceptCount and maxThreads to control how many requests are 'queued' before the server rejects new incoming connections, and how many are processed concurrently. Newer servers will let you process requests asynchronously, AJAX/Comet style.
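
For the asynchronous case, a minimal sketch against the Servlet 3.0 async API (assuming a container that supports it, e.g. Tomcat 7+ or Jetty 8+; the servlet below is invented for illustration) would look roughly like this - the container thread is released as soon as startAsync() is called, and a separate worker completes the response later:

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Illustrative only - the servlet name, URL pattern and pool size are made up.
    @WebServlet(urlPatterns = "/async", asyncSupported = true)
    public class AsyncSketchServlet extends HttpServlet {
        private final ExecutorService workers = Executors.newFixedThreadPool(8);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            // Hand the request off; the container thread goes back to its pool right away.
            AsyncContext ctx = req.startAsync();
            workers.submit(() -> {
                try {
                    ctx.getResponse().getWriter().write("done");
                } catch (IOException e) {
                    // ignored in this sketch
                } finally {
                    ctx.complete(); // tells the container the response is finished
                }
            });
        }
    }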
