We have a similar use case in that some of our operations (compressing things) are CPU intensive. In order to make sure that we take advantage of the full power of our servers, we have created the following setup (which we will be open sourcing soon).
We have an "admin" node process which takes as input the number of
"feedly" node processes it should launch and monitor. When the admin
starts, it spawns multiple "feedly" node processes, passing as input
the port each feedly node should listen on: 9701 for the first
one, ... 9710 for the 10th one.
On the same server we have a varnish instance which load balances the traffic across the feedly processes.
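The varnish side could look something like this round-robin director (a sketch only; the backend names and the decision to list every backend explicitly are illustrative, not our actual config):

```
# Sketch: round-robin across the ten local feedly processes.
backend feedly1 { .host = "127.0.0.1"; .port = "9701"; }
backend feedly2 { .host = "127.0.0.1"; .port = "9702"; }
# ... feedly3 through feedly10 declared the same way ...

director feedly_pool round-robin {
  { .backend = feedly1; }
  { .backend = feedly2; }
  # ... remaining backends ...
}

sub vcl_recv {
  set req.backend = feedly_pool;
}
```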
Each "feedly" node process has a 127.0.0.1:970x/health endpoint where
it reports stats about its execution. The admin node polls each
feedly node every 2 minutes and makes sure that the feedly node is
healthy and responding. If not, it kills the child and starts a new
instance of the feedly node listening on the same port.
All feedly nodes point to a redis instance for shared memory/session
management. This allows incoming HTTP requests to be load balanced
across any of the feedly nodes transparently.
In dev/staging, we have varnish and all the node processes running on
the same box.
In production, we can scale out by having varnish on one server and
multiple node cells (where a cell is an admin + 10 feedly processes).
This allows us to do rolling upgrades, etc. without any interruption
to the service.
Overall, we are still learning, but we are huge fans of the work that
ryan and the nodejs community are doing.
We will try to document/open source this blueprint as soon as we have
a chance.