The Most Efficient Method of Inter-Process Communication


JustinSD

unread,
Apr 27, 2011, 3:14:04 PM4/27/11
to nodejs
We are doing inter-process communication between two node.js
applications. We initially have it setup via a simple TCP socket, but
wondering if we should switch to a named pipe (FIFO) for performance
reasons. Is this even possible in node? Basically, what is the most
efficient, and fastest way to do IPC between two node.js applications
on the same physical server?

Tim Caswell

unread,
Apr 27, 2011, 3:22:01 PM4/27/11
to nod...@googlegroups.com
It's been my experience that the bottleneck is the serialization and de-serialization of the data, not the actual channel.  I'm pretty sure you can use named pipes, but I'm not sure what the API is.  msgpack seems like a good format for the data interchange.  There are a few libraries out there that implement msgpack or ipc frameworks on top of it.






Jorge

unread,
Apr 27, 2011, 6:45:52 PM4/27/11
to nod...@googlegroups.com

Interesting question. Where's a benchmark?
--
Jorge.

billywhizz

unread,
Apr 27, 2011, 6:56:28 PM4/27/11
to nodejs
you should get better performance/throughput from a unix socket as you
won't be going through the network stack at all. as far as i know
mkfifo doesn't have a binding in node.js, so you'll have to do some c++
hackery to get FIFOs working. stdin/stdout should also be very fast
and will do the job if you are just connecting two applications
together...

if you really want to you could use a memory mapped file
(https://github.com/bnoordhuis/node-mmap), but then you'll have to
worry about multiple readers/writers and will most likely have to do
some kind of locking, which is pretty much against the whole node.js
ethos.

Tim is correct. JSON.parse and JSON.stringify are painfully slow. You'll
have to roll your own serialization if you want real performance.
msgpack is good, but it's a generic solution and is doing a lot of
extra work because it has to handle any type of message. it would be
fastest if the wire format is known in advance and you write a custom
parser. you'll have to do your own testing on whether to do the
parsing in js (working with binary is slow) or c/c++ (much faster, but
you will be penalised every time you make a call into the external
library, so you should probably be buffering data and doing as few
calls as possible).
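to illustrate the kind of custom parser discussed above, here's a minimal sketch of length-prefixed framing in plain js. the 4-byte big-endian header and the helper names are illustrative choices of mine, not anything from an existing library:

```javascript
// Minimal length-prefixed framing: each message is a 4-byte big-endian
// length header followed by a UTF-8 JSON payload.
function encodeFrame(obj) {
  const payload = Buffer.from(JSON.stringify(obj), 'utf8');
  const frame = Buffer.alloc(4 + payload.length);
  frame.writeUInt32BE(payload.length, 0);
  payload.copy(frame, 4);
  return frame;
}

// Stateful decoder: feed it arbitrary chunks, get complete messages back.
function createDecoder(onMessage) {
  let buffered = Buffer.alloc(0);
  return function feed(chunk) {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= 4) {
      const len = buffered.readUInt32BE(0);
      if (buffered.length < 4 + len) break; // wait for the rest of the frame
      const payload = buffered.subarray(4, 4 + len);
      buffered = buffered.subarray(4 + len);
      onMessage(JSON.parse(payload.toString('utf8')));
    }
  };
}

// Usage: frames survive arbitrary chunk boundaries.
const frames = [];
const feed = createDecoder(msg => frames.push(msg));
const wire = Buffer.concat([encodeFrame({ a: 1 }), encodeFrame({ b: 2 })]);
feed(wire.subarray(0, 5)); // partial first frame
feed(wire.subarray(5));    // remainder
console.log(frames);       // → [ { a: 1 }, { b: 2 } ]
```

the decoder is deliberately pure buffer manipulation, so it can sit on any stream ('data' events from a socket, a pipe, stdin) without changes.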

Bert Belder

unread,
Apr 27, 2011, 6:57:18 PM4/27/11
to nodejs
You can use unix domain sockets instead of tcp streams.
Use (net.createServer()).listen("/tmp/my/socket") and
net.createConnection("/tmp/my/socket") to do that.

billywhizz

unread,
Apr 27, 2011, 6:58:16 PM4/27/11
to nodejs
i'll try to put some benchmarks together at the weekend. i have
already benchmarked a lot of this stuff when rolling my own binary
parsers, but i don't have any nice stats to hand at the moment.

kowsik

unread,
Apr 27, 2011, 7:23:37 PM4/27/11
to nod...@googlegroups.com
On Wed, Apr 27, 2011 at 3:56 PM, billywhizz <apjo...@gmail.com> wrote:
> you should get better performance/throughput from a unix socket as you
> won't be going through the network stack at all. as far as i know
> mkfifo doesn't have a binding in node.js so you'll have to do some c++
> hackery to get fifo's working. stdin/stdout should also be very fast
> but will do the job if you are just connecting two applications
> together...

Hmm, not necessarily. My experience with unix sockets is that you
don't get the same level of kernel tuning that you do with TCP (listen
backlog, max send/recv buffer size, etc). See this blog post about
FastCGI, which does use unix sockets. We ran into all sorts of
bottlenecks.

http://labs.mudynamics.com/2011/03/30/blitzio-the-long-night-before-an-iphone-app-launch/

stdin/stdout will be a no-go as well, since you are queueing up all of
the messages into a single fd pair, which will result in a pipeline stall.
This is similar to using a single HTTP node client to talk to CouchDB,
which ends up queueing up all of the requests. Bad from a concurrency
standpoint.

K.
---
http://blitz.io
http://twitter.com/pcapr
http://labs.mudynamics.com

Matt

unread,
Apr 27, 2011, 7:58:51 PM4/27/11
to nod...@googlegroups.com
Not only that, but TCP over localhost is directly optimised (on Linux at least) and is as fast as unix domain sockets (uses the same code IIRC). It hasn't hit the network stack for about 10+ years now.

billywhizz

unread,
Apr 27, 2011, 8:00:35 PM4/27/11
to nodejs
my bad. i was under the impression that stdio was full duplex.

had a look at that article. the problem they were having was because
they were setting the backlog too small. you would be getting dropped
packets and connection-refused errors just the same if you had done
the same thing with TCP. i find it hard to believe you could make TCP
as fast as a unix socket, considering TCP has to go through the network
stack and has all the packet overhead too... W. Richard Stevens
says unix domain sockets are more efficient than network sockets, and
that's usually good enough for me. ;)

Unix domain sockets also allow you to use datagrams rather than
streams, which might be good depending on what your requirements are.
Unlike UDP, though, the datagrams are reliable and in-order over a
unix socket.


Dean Mao

unread,
Apr 27, 2011, 8:16:05 PM4/27/11
to nod...@googlegroups.com
We have a lib called redpack that we're using:

It is based partly on msgpack-rpc and previously used msgpack for serialization, but we switched to BSON because msgpack was not very good for cross-language type conversions. It supports sync & async RPC for node, java, and ruby, but requires you to have redis. It's basically a very simple message-queue RPC.

billywhizz

unread,
Apr 27, 2011, 8:19:14 PM4/27/11
to nodejs
does TCP over loopback on linux really not hit the network stack?
doesn't it do packet framing, checksums, and protocol handshakes etc? i
can't find any reference to this, so if you have one, please let me
know.

i'm going to do a benchmark in node.js anyway to see what the
difference is. it will probably be negligible considering the overhead
in node.js and v8 anyway.

what do you think would be best for a benchmark? two node.js processes
communicating with each other with varying message sizes and numbers
of connections? could also benchmark the different serialization
methods too, which would be useful: JSON v msgpack v custom js parser v
custom c++ parser. i have a lot of code to do this already, so it
wouldn't be a huge amount of work...


kowsik

unread,
Apr 27, 2011, 8:32:08 PM4/27/11
to nod...@googlegroups.com
On Wed, Apr 27, 2011 at 5:00 PM, billywhizz <apjo...@gmail.com> wrote:
> my bad. i was under the impression that stdio was full duplex.

Full duplex is not the same as concurrent. Each TCP connection is
full-duplex, but in order to achieve concurrency, you need multiple
connections.

> had a look at that article. the problem they were having was because
> they were setting the backlog too small. you would be getting dropped
> packets and connection refused errors just the same if you had done
> the same thing with TCP. i find it hard to believe you could make TCP
> as fast as a unix socket considering TCP has to go through the network
> stack and TCP has all the packet overhead too... W Richard Stevens
> says unix domain sockets are more efficient than network sockets and
> that's usually good enough for me. ;)
>
> Unix domain sockets also allow you to use datagrams rather than
> streams, which might be good depending on what your requirements are.
> The datagrams are reliable and in-order over a unix socket though, not
> like in UDP.

There's a possibility that the datagram unix sockets are faster, but
then you would have to account for the loss of messages in case of
buffer shortage. If you explore sysctl, there isn't much to tune for
unix sockets, but there are a bazillion options for TCP.

Matt

unread,
Apr 27, 2011, 8:44:58 PM4/27/11
to nod...@googlegroups.com
On Wed, Apr 27, 2011 at 8:19 PM, billywhizz <apjo...@gmail.com> wrote:
> does TCP over loopback on linux really not hit the network stack?
> doesn't it do packet framing, checksums and protocol handshakes etc? i
> can't find any reference to this, so if you have one, please let me
> know.

AFAIK it's optimised to not go through the network driver layer (so it still goes through parts of the stack, but is optimised so that it doesn't have to talk to hardware). But this is just a memory from when I used to read the Kernel change logs years and years ago.  I can't find any reference to it online now.

Matt.

billywhizz

unread,
Apr 27, 2011, 9:30:52 PM4/27/11
to nodejs
Thanks Matt. had a google but there doesn't seem to be much definitive
info out there. i'd wager that for large message sizes a unix socket
should be quite a bit faster, but if you're sending lots of small
messages you probably won't see a whole lot of difference.

Justin, the best way to answer a question like this is to do the
testing yourself, based on the system/environment you are optimizing
for. There's no way to give a definitive general answer to your
question.

anyway, i'll see what kind of benchmark i can throw together - i'm
more interested in the cost of serialization differences myself.


Aikar

unread,
Apr 27, 2011, 10:42:12 PM4/27/11
to nodejs
Just throwing in my recent findings:
V8 has apparently had a massive performance boost on JSON.

msgpack is now a lot slower than JSON.

see: https://github.com/aikar/wormhole/issues/3

I had a good streaming msgpack parser implemented doing 50k messages/sec,
and switching to JSON tripled the speed.

billywhizz

unread,
Apr 27, 2011, 11:43:24 PM4/27/11
to nodejs
interesting. i am seeing the same speedup. the benchmark on the msgpack
github is way out of date. i tested here using an old 0.1.96 version
of node versus the latest 0.4.7 on 64 bit with v8 3.1.8.10 and saw the
following results for peter's bench.js script (i just ran the JSON
tests, not the msgpack ones):

v0.1.96:
json pack: 6787 ms
json unpack: 11888 ms

v0.4.7:
json pack: 1487 ms
json unpack: 851 ms

That's over 4.5x faster for JSON.stringify and almost 14x faster for
JSON.parse. This is very nice indeed!
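for anyone who wants to reproduce this kind of measurement on their own machine, here's a rough sketch of a pack/unpack timing loop. the payload shape and iteration count are arbitrary choices of mine, not peter's actual bench.js:

```javascript
// Rough JSON serialization micro-benchmark. Numbers vary a lot by
// machine and by node/v8 version, so treat them as relative only.
const payload = {
  id: 12345,
  name: 'test',
  tags: ['a', 'b', 'c'],
  nested: { x: 1.5, y: [1, 2, 3] },
};
const ITERATIONS = 100000;

// Pack: repeated JSON.stringify of the same object.
let json = '';
let t0 = Date.now();
for (let i = 0; i < ITERATIONS; i++) json = JSON.stringify(payload);
const packMs = Date.now() - t0;

// Unpack: repeated JSON.parse of the resulting string.
let obj = null;
t0 = Date.now();
for (let i = 0; i < ITERATIONS; i++) obj = JSON.parse(json);
const unpackMs = Date.now() - t0;

console.log(`json pack:   ${packMs} ms`);
console.log(`json unpack: ${unpackMs} ms`);
```

keeping the same object alive across iterations (rather than building a fresh one each time) isolates the serialization cost from allocation noise, which is closer to what the pack/unpack numbers above are measuring.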

Chris

unread,
Apr 28, 2011, 12:02:14 AM4/28/11
to nodejs
I use a modified version of kriszyp's multi-node that enables 2-way
communication

Source is here:
https://github.com/chriso/node.io/blob/master/lib/node.io/multi_node.js

You set it up like this:
https://github.com/chriso/node.io/blob/master/lib/node.io/processor.js#L208-240

Then just use `master.send(msg)` or `workers[i].send(msg)`

It's been more than fast enough.

billywhizz

unread,
Apr 28, 2011, 12:46:35 AM4/28/11
to nodejs
btw - i should have said i am running node v0.4.6 above...


billywhizz

unread,
Apr 28, 2011, 12:54:05 AM4/28/11
to nodejs
interestingly, node v0.4.6 built with v8 3.3.2, which should have
crankshaft enabled on 64 bit, is slightly quicker for stringify and
quite a bit slower for parse:

node 0.4.6/v8 3.3.2
json pack: 1318 ms
json unpack: 1039 ms

node.0.4.6/v8 3.1.8.10
json pack: 1487 ms
json unpack: 851 ms


Chris

unread,
Apr 28, 2011, 8:32:49 AM4/28/11
to nodejs
How does a straight `eval(json)` compare to `JSON.parse(json)` for
unpacking?
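a quick sketch of how you could measure that yourself. the parentheses around the json are needed so eval treats the outer braces as an object literal rather than a block statement, and eval on untrusted input is unsafe whatever the numbers say:

```javascript
// Compare eval-based unpacking against JSON.parse on the same string.
// Iteration count is arbitrary; results vary by node/v8 version.
const json = JSON.stringify({ a: 1, b: [2, 3], c: 'str' });
const N = 100000;

let viaParse;
let t0 = Date.now();
for (let i = 0; i < N; i++) viaParse = JSON.parse(json);
const parseMs = Date.now() - t0;

// eval needs the wrapping parens; '{...}' alone parses as a block.
// Never do this with input you don't fully control.
let viaEval;
t0 = Date.now();
for (let i = 0; i < N; i++) viaEval = eval('(' + json + ')');
const evalMs = Date.now() - t0;

console.log({ parseMs, evalMs });
```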

Aikar

unread,
Apr 28, 2011, 8:47:04 AM4/28/11
to nodejs
Justin/OP:
You can try my Wormhole lib.
https://github.com/aikar/wormhole

It's designed so that you simply pass it the ends of a stream, and it just works.

As mentioned in the benchmark above, I've got it up to 150k messages processed
per second on a single 3.4GHz core. And that was with just one core
feeding it, so sending would have been the bottleneck; if you had
multiple processes sending it data, I'm sure it would go even faster.

Bgsosh

unread,
Sep 21, 2012, 12:22:05 PM9/21/12
to nod...@googlegroups.com
Just to let you all know that there is a Stack Overflow question on this issue, in case you want to air your views there:

Yi

unread,
Sep 29, 2012, 3:52:13 AM9/29/12
to nod...@googlegroups.com, kow...@gmail.com
We are using UDP datagrams for cross-process communication between node apps on the same machine. The performance is incredible.
Here is the code repo:
https://github.com/SGF-Games/node-udpcomm

regards,

ty


On Thursday, April 28, 2011 at 8:32:08 AM UTC+8, kowsik wrote: