On Mon, Jun 30, 2014 at 9:28 AM, Saúl Ibarra Corretgé <sag...@gmail.com> wrote:
> https://gist.github.com/saghul/909502cc7367a25c247a
Moving the goalposts a little: for consistency, similar changes should
be considered for uv_listen() and uv_udp_recv_start().
uv_udp_recv_start() is the easy one; it could simply follow uv_read():
    int uv_udp_recv(uv_udp_recv_t* req,
                    uv_udp_t* handle,
                    uv_alloc_cb alloc_cb,
                    uv_udp_recv_cb recv_cb);
Allowing for multiple queued uv_udp_recv_t requests would let libuv
exploit recvmmsg() on newer Linux systems. I've observed that
recvmmsg() isn't unequivocally faster than plain recvmsg() but it's
good to at least have the option.
uv_listen() is more interesting. It takes a callback that in turn
calls uv_accept() to accept the incoming connection. A problem with
the current implementation is that on UNIX platforms, the connection
has already been accept()'d by the time the callback is called;
uv_accept() just packages the socket file descriptor in a uv_stream_t.
It makes it difficult to implement throttling well because there's
always a connection that ends up in limbo until the application starts
calling uv_accept() again. There have been repeated requests for a
uv_listen_stop() function for exactly that reason.
Folding uv_listen() and uv_accept() into a single API function would
resolve that. I'll dub the new function uv_accept() and it would look
something like this:
    int uv_accept(uv_accept_t* req,
                  uv_stream_t* server_handle,
                  uv_accept_cb accept_cb);
Where uv_accept_cb would look like this:
    typedef void (*uv_accept_cb)(uv_stream_t* client_handle, int status);
As long as there are pending accept requests, the listen socket is
polled. When there are none, the socket is removed from the poll set.
Allowing for multiple pending accept requests lets libuv optimize for
systems having a (so far hypothetical) acceptv() system call.
One drawback of the suggested API is that it requires the client
handle to be allocated and initialized upfront, something that
complicates cleanup for the user on shutdown or error. Another
potential issue is when the user embeds the handle in a larger data
structure that until now had an expectation of always having a fully
initialized handle.
Changing it to defer allocation of the handle until there is a
connection is an option, of course, but it would in turn make other
use cases more complicated: for example, using a stack-allocated
handle would require that the user carry the address of the handle
around until it's needed. Tradeoffs...
Boost.Asio takes the 'commit upfront' approach and I'm leaning towards
that as well, if only because it lowers the cognitive dissonance
between the two projects. Of course, enforcing proper cleanup is
easier in C++ than it is in C.
Last but not least, request-driven accept and receive functionality -
especially when cancellable - should make life a whole lot easier for
the people who implement synchronous green threading on top of
libuv's asynchronous API, like Rust and Julia.