Running a web server on node.js is simple (as its excellent examples and documentation show), but how can you fully use the CPU resources of a dedicated server?
Since node.js is single-threaded, the only way to take advantage of multiple processors is via multiple processes. Of course, only one process can bind to a given port, so it seems there would have to be a master/worker pattern in which the master forks children, binds to the incoming port, and delegates incoming connections (and the actual processing work) to the children. (Perhaps via a hungry-consumer pattern?)
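For reference, Node's core cluster module (added in later releases, so distinct from the libraries mentioned in the answers below) implements essentially this pattern. A minimal sketch, assuming one worker per CPU and an arbitrary port 8000:

```js
// Master/worker sketch using Node's built-in cluster module.
// The master forks one worker per CPU; the workers share the listening socket.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, restarting');
    cluster.fork();
  });
} else {
  http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('Handled by worker ' + process.pid + '\n');
  }).listen(8000);
}
```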
Is this the best way to scale a web server running node.js? If so, are there libraries to simplify the master/worker pattern? If not, what patterns or deployment setups are recommended to best use the entire resources of a dedicated machine?
(Is this a better question for ServerFault?)
Multi-node is a library that provides the master/worker pattern.
If the server processes don't need to talk to each other, and you aren't using Socket.IO, a simple option is to start one process per core, bind each to its own local port, and use something like nginx or HAProxy to load balance between them.
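A minimal sketch of the per-process side of that setup (the file name, the 127.0.0.1 binding, and the 8001/8002... port range are arbitrary choices; nginx or HAProxy would then list those ports in its upstream/backend):

```js
// worker.js -- one plain HTTP server per core, each on its own local port.
// Run e.g. `node worker.js 8001`, `node worker.js 8002`, ... and point the
// load balancer at 127.0.0.1:8001, 127.0.0.1:8002, and so on.
var http = require('http');

var port = parseInt(process.argv[2], 10) || 8001;

http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Handled by process ' + process.pid + ' on port ' + port + '\n');
}).listen(port, '127.0.0.1');

console.log('Worker ' + process.pid + ' listening on 127.0.0.1:' + port);
```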
If you're using Express, I'd use tj's Cluster: http://learnboost.github.com/cluster/
It provides 'transparent' CPU-based load balancing, which is nice because you can keep your existing Express app and scale it across cores relatively painlessly.
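A rough usage sketch based on Cluster's documented style of wrapping the app; treat the exact module path and middleware names as approximate rather than authoritative:

```js
// Hypothetical sketch: wrapping an existing Express app with tj's Cluster.
// The './app' path and the plugins shown are illustrative only.
var cluster = require('cluster');   // learnboost/cluster, not Node core here
var app = require('./app');         // your existing Express application

cluster(app)
  .use(cluster.logger('logs'))      // per-worker log files
  .use(cluster.stats())             // basic worker statistics
  .listen(3000);                    // spawns workers across the available cores
```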