epoll_wait maxevents

Solar Designer

Mar 2, 2016, 5:39:08 AM
to onion-dev
Hi,

I found that libonion's hard-coded "#define MAX_EVENTS 10" in
src/onion/poller.c is often non-optimal. Specifically, when my program
needs a few milliseconds to produce a response, having maxevents > 1
results in the first few threads grabbing extra requests and keeping
other threads idle. When running with 32 threads on a machine with 32
logical CPUs and needing 2ms to 4ms to generate a dynamic response, and
having 32 clients talking to the machine from localhost or the same LAN, the
default of MAX_EVENTS 10 results in only the first 4 to 8 threads
doing any useful work, and the rest never calling my handler. Reducing
MAX_EVENTS to 1 works around this, increasing the speed by a factor of
four or more. Having many more clients talk to the server (hundreds)
would also do the trick, but it's preferable to be able to utilize the
server's full capacity even with fewer concurrent clients.
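
Roughly what I believe is happening (my simplified sketch, not libonion's
actual poller.c): each worker thread waits on the shared epoll fd and then
handles everything epoll_wait gave it before waiting again, so with a large
maxevents a few threads can walk away with all the ready descriptors while
the others stay parked in epoll_wait:

#include <sys/epoll.h>

#define MAX_EVENTS 10  /* the hard-coded value in question */

/* hypothetical per-request handler; ~2ms to 4ms of compute in my case */
extern void handle_request(void *connection);

void worker_loop(int epollfd) {
    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epollfd, events, MAX_EVENTS, -1);
        /* With only 32 clients, a few threads can drain all the ready fds
         * here and then process them serially below, while the remaining
         * threads never get anything to do. */
        for (int i = 0; i < n; i++)
            handle_request(events[i].data.ptr);
    }
}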

I suppose MAX_EVENTS 1 defeats the purpose of using epoll over poll, but
that's sort of OK. A better architecture might be for me to queue up
the requests, handle them with 32 separate "compute threads", and
respond to them from the connection-handling threads asynchronously (that
is, without blocking those threads for the duration of the computation), but
this doesn't appear to fit into libonion's APIs, so it's something I am
considering for a complete rewrite.
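
Roughly what I have in mind (a rough sketch only; submit_request,
compute_response and send_response_async are made-up names, not libonion
API):

#include <pthread.h>
#include <stdlib.h>

/* hypothetical hooks: the slow computation and the asynchronous write-back */
extern void compute_response(void *request);
extern void send_response_async(void *request);

struct job { struct job *next; void *request; };

static struct job *queue_head, *queue_tail;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t queue_cond = PTHREAD_COND_INITIALIZER;

/* Called from a connection-handling thread; returns without blocking on
 * the computation. */
void submit_request(void *request) {
    struct job *j = malloc(sizeof(*j));
    j->next = NULL;
    j->request = request;
    pthread_mutex_lock(&queue_lock);
    if (queue_tail)
        queue_tail->next = j;
    else
        queue_head = j;
    queue_tail = j;
    pthread_cond_signal(&queue_cond);
    pthread_mutex_unlock(&queue_lock);
}

/* One of the 32 compute threads, matching the number of logical CPUs. */
void *compute_thread(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&queue_lock);
        while (!queue_head)
            pthread_cond_wait(&queue_cond, &queue_lock);
        struct job *j = queue_head;
        queue_head = j->next;
        if (!queue_head)
            queue_tail = NULL;
        pthread_mutex_unlock(&queue_lock);

        compute_response(j->request);
        send_response_async(j->request);
        free(j);
    }
    return NULL;
}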

For now, my feature request is to be able to adjust maxevents from the
application.

Thanks,

Alexander

David Moreno Montero

Mar 2, 2016, 6:28:41 AM
to Solar Designer, onion-dev
Hi,

you are right. The value 10 was chosen initially, before the library used threads, and it has not been changed since then.

As far as I can see, the proper number should be 1.

I don't think it defeats the purpose of using epoll over poll, as the kernel interface is different: with poll (as with select) you pass the full list of listening fds on each call. With epoll you build the list in the kernel and reuse it on each call, making modifications as needed. See the Wikipedia page: https://en.wikipedia.org/wiki/Epoll
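
Roughly the difference (a minimal sketch of the two system call interfaces,
not onion code):

#include <poll.h>
#include <sys/epoll.h>

/* poll: the whole fd list crosses into the kernel on every call */
void wait_with_poll(struct pollfd *fds, nfds_t nfds) {
    poll(fds, nfds, -1);  /* kernel rescans all nfds entries each time */
}

/* epoll: register once, then each wait only returns what became ready */
int wait_with_epoll(int fd_to_watch) {
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd_to_watch };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd_to_watch, &ev);

    struct epoll_event ready;
    return epoll_wait(epfd, &ready, 1, -1);  /* even with maxevents = 1 the
                                                in-kernel interest list is
                                                reused */
}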

If nobody disagrees, I will make the change later today.

Regards,
David.



David Moreno Montero

Mar 3, 2016, 12:07:07 PM
to Solar Designer, onion-dev
Hi,

I've been doing some experiments with the MAX_EVENTS value, and with 1 I get worse numbers (about -5%). But that was on the hello test; other workloads may behave completely differently.

So I added the knob, onion_poller_set_queue_size_per_thread. The poller can be obtained from the onion object via onion_get_poller.

The documentation is as follows; if there is any inaccuracy, please tell me:

/**
 * @short Sets the maximum per-thread event queue size.
 *
 * This fine-tuning knob changes the size of the per-thread event queue.
 *
 * Under contention a single call to epoll_wait can return several ready
 * descriptors; this value sets the maximum it may return.
 *
 * If the value is high and the processing of one request is slow, the
 * requests queued after it have to wait until it is processed, which
 * increases latency. On the other hand, if requests are fast, draining
 * the queue is cheaper than another call to epoll_wait.
 *
 * In uncontended cases, where onion is mostly waiting for requests,
 * it makes no difference.
 *
 * Default: 10.
 */
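
From the application it would be used like this (quick sketch, not tested;
I assume the usual onion setup calls around it):

#include <onion/onion.h>
#include <onion/poller.h>
#include <onion/response.h>
#include <onion/url.h>

int main(void) {
    onion *o = onion_new(O_POOL);  /* thread-pool mode, which uses the poller */
    onion_url_add_static(onion_root_url(o), "", "Hello", HTTP_OK);

    /* Take at most one ready descriptor per epoll_wait call, so idle threads
     * pick up new requests instead of them queueing behind a busy thread. */
    onion_poller_set_queue_size_per_thread(onion_get_poller(o), 1);

    onion_listen(o);
    onion_free(o);
    return 0;
}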

The other option, the "compute threads" model, can still be explored, as it would also help a process-based threading model, but somehow I think it will not make a difference over the current epoll_wait model, which uses the kernel to parallelize the requests quite nicely.

Regards,
David.
