epoll_wait(3, [{EPOLLOUT, {u32=33431864, u64=33431864}}], 16, -1) = 1
epoll_wait(3, [{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=33431864, u64=33431864}}], 16, -1) = 1
This isn't obviously horrible, except that the first epoll_wait seems
pointless (and possibly problematic under some conditions: the server
appears to want to read, not write, so if the write buffer is full,
this could block unnecessarily).
Hi Sandeep,

Some comments:

- An 8-byte byte string as the parameter / result is not really exercising Cap'n Proto's serialization layer much, which is the part that we claim to be much faster than alternatives. For a much larger, structurally complex payload, I'd expect Cap'n Proto to do better.
- If you are testing over local loopback (not over a slow network), then network latency is effectively zero, and the "latency" you are measuring is really CPU time. Since your application is a no-op, you are basically measuring the CPU complexity of the RPC stack. Note that in most real applications, the RPC stack itself -- aside from the serialization -- is not a particularly hot spot, so it probably doesn't impact overall application performance much.
- Cap'n Proto's RPC is much more complicated than Thrift's. Last I knew, Thrift RPC was FIFO, which makes for a pretty trivial protocol and state machine, but tends to become problematic quickly in complicated distributed systems. Cap'n Proto, meanwhile, is not just asynchronous, but is a full capability protocol with promise pipelining (see the sketch after this list). This allows some very powerful designs and avoidance of network latency, but it means that basic operations are going to be slower. It does not surprise me at all that it would use 3x the CPU time -- in fact, I'm surprised it's only 3x.
- We haven't done much serious optimization work on Cap'n Proto's RPC layer.
- Andy notes that a lot of time is spent in malloc (unsurprising, since promises do a lot of heap allocation), so the first thing you might want to try is using a different allocator, like tcmalloc or jemalloc.
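To illustrate the pipelining point, here is a minimal client-side sketch, assuming hypothetical Directory and File interfaces (those names, and the open/read methods, are assumptions for illustration, not from any real schema). The read() call is issued against the File capability before open() has returned, so both calls reach the server in a single round trip:

#include <capnp/ez-rpc.h>
// #include "fs.capnp.h"  // hypothetical generated header defining Directory/File

int main() {
  capnp::EzRpcClient client("localhost:5923");
  Directory::Client dir = client.getMain<Directory>();

  auto openReq = dir.openRequest();
  openReq.setName("foo.txt");
  auto openPromise = openReq.send();

  // Pipelined call: use the not-yet-resolved File capability immediately.
  auto readPromise = openPromise.getFile().readRequest().send();

  auto data = readPromise.wait(client.getWaitScope()).getData();
  return 0;
}

For the allocator experiment, a low-effort first step is to run the unmodified server under LD_PRELOAD pointing at libtcmalloc or libjemalloc (the exact library path varies by distro).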
Hi Sandeep,
If you're OK with synchronous, FIFO behavior, it should be pretty easy to write such a thing on top of Cap'n Proto serialization, skipping the RPC system. The server would, in a loop, use StreamFdMessageReader to read a message, process it, and writeMessage() the result. Instead of declaring an interface with methods, you would probably want to declare a big union of all the request types (a sketch follows below).

-Kenton
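A minimal sketch of that loop, assuming a hypothetical schema defining a Request struct whose body is a union of all request types, plus a Response struct (Request, Response, and the ECHO member are assumptions, not part of any real schema):

#include <capnp/message.h>
#include <capnp/serialize.h>
// #include "wire.capnp.h"  // hypothetical generated header for Request/Response

void serveFifo(int fd) {
  while (true) {
    // Constructing the reader consumes exactly one message from the fd;
    // it throws when the client disconnects, which a real server would catch.
    capnp::StreamFdMessageReader request(fd);
    auto req = request.getRoot<Request>();

    capnp::MallocMessageBuilder response;
    auto res = response.initRoot<Response>();

    switch (req.which()) {  // dispatch on the big request union
      case Request::ECHO:
        res.setPayload(req.getEcho().getPayload());
        break;
      // ... one case per request type ...
    }

    capnp::writeMessage(fd, response);  // write the result back
  }
}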
Related question: right now, all newly accepted sockets continue to be handled by the same thread. If the server is using capnp::TwoPartyServer, is it possible to provide a thread pool to handle the accepted connections? Can I specialize a TwoPartyVatNetwork or ConnectionReceiver and provide it a parameter somehow? Are there any non-thread-safe objects, like Promises, that I need to worry about?
number of clients | requests/sec
------------------+-------------
                1 | 37322
                2 | 58619
                4 | 80756
                8 | 84614
               16 | 83693
Hi Sandeep,

Is it maxing out a core at that point?

In order to take advantage of 20 cores you would of course need to run 20 instances of the server, since Cap'n Proto is currently single-threaded (a per-process sketch follows below).

-Kenton
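One way to get those 20 instances, sketched under stated assumptions: EchoImpl (a stand-in for your main interface implementation) and makeListenFd() (ordinary socket()/bind()/listen()) are hypothetical, and the pattern relies on the kernel distributing accepts across processes that share one listening socket. Each forked worker runs its own single-threaded KJ event loop:

#include <capnp/rpc-twoparty.h>
#include <kj/async-io.h>
#include <sys/wait.h>
#include <unistd.h>

void runWorker(int listenFd) {
  auto io = kj::setupAsyncIo();  // one event loop per process
  auto listener = io.lowLevelProvider->wrapListenSocketFd(listenFd);
  capnp::TwoPartyServer server(kj::heap<EchoImpl>());  // hypothetical impl
  server.listen(*listener).wait(io.waitScope);  // accept loop; never returns
}

int main() {
  int listenFd = makeListenFd();  // hypothetical: shared listening socket
  for (int i = 0; i < 20; i++) {  // one worker per core
    if (fork() == 0) {
      runWorker(listenFd);
      _exit(0);
    }
  }
  while (wait(nullptr) > 0) {}  // parent just reaps children
  return 0;
}

Alternatively, 20 fully independent server processes on separate ports behind a load balancer achieve the same effect without sharing a socket.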