few questions about libuv behavior


CM

Mar 22, 2018, 6:49:41 PM
to libuv
Hi,

1. Does libuv provide any guarantees about callback order when uv_close() cancels all outstanding requests on a given handle? I.e., can I rely on all canceled requests' callbacks having already been invoked by the time close_cb runs?

2. Can the uv_close callback be cancelled or left uncompleted? E.g., if I ask the event loop to terminate, will it wait for all callbacks to complete? What if those callbacks try to queue more requests -- will they be cancelled, or will the requests fail to submit?

2.1. Does uv_run() guarantee that no outstanding requests are alive and no callbacks are pending on return?

3. Can I move uv_tcp_t (or any other handle) instance in memory?

4. If I submit two write requests, is it possible for the first one to fail and the second one to succeed? (uv_tcp_t/uv_file_t/etc, Windows/Linux)

Thank you.

Regards,
Michael.

Ben Noordhuis

Mar 24, 2018, 6:15:37 AM
to li...@googlegroups.com
On Thu, Mar 22, 2018 at 11:49 PM, CM <crusad...@gmail.com> wrote:
> Hi,
>
> 1. does libuv provide any guarantees wrt callback order in case when
> uv_close() cancels all outstanding requests on given handle -- i.e. can I
> rely that in close_cb all canceled requests callbacks have already been
> called?

Yes, request callbacks run before the close callback.

You should probably not rely on the order in which different request
types run, e.g., whether a connect callback comes before or after a
write callback.

> 2. can uv_close callback be cancelled or not completed? E.g. if I ask event
> loop to terminate -- will it wait for all callbacks to complete? what if
> these callbacks try to queue another requests -- will they get cancelled or
> those requests will fail to submit?

If "ask to terminate" means calling:

1. uv_stop() -> uv_stop() just tells uv_run() to return at its
earliest convenience, it doesn't shut down anything.
2. uv_loop_close() -> uv_loop_close() fails with UV_EBUSY if there are
active handles or requests.

A closing handle is considered active, even if it has been unref'd
with uv_unref().

When a callback creates another handle or request, it keeps the event
loop alive.

> 2.1. does ev_run() guarantee that no outstanding requests are alive or
> callbacks are pending on return?

Yes, provided you called it with mode=UV_RUN_DEFAULT and didn't call
uv_stop(). Call uv_loop_alive() if you want to be sure.

> 3. Can I move uv_tcp_t (or any other handle) instance in memory?

No.

> 4. if I submit two write requests -- is it possible for first one to fail
> and for second one to succeed? (uv_tcp_t/uv_file_t/etc, Windows/Linux)

In theory, yes; in practice, no. Libuv stops reading and writing when
an I/O error happens. It is theoretically possible to start reading
and writing again if the error is transient, but in practice you
simply close the handle.

Michael Kilburn

Mar 24, 2018, 6:03:39 PM
to li...@googlegroups.com
On Sat, Mar 24, 2018 at 5:15 AM, Ben Noordhuis <in...@bnoordhuis.nl> wrote:
On Thu, Mar 22, 2018 at 11:49 PM, CM <crusad...@gmail.com> wrote:
> 1. does libuv provide any guarantees wrt callback order in case when
> uv_close() cancels all outstanding requests on given handle -- i.e. can I
> rely that in close_cb all canceled requests callbacks have already been
> called?

Yes, request callbacks run before the close callback.

I assume that (with Windows IOCP) uv_close() on a handle with outstanding requests ends up calling CancelIo(Ex), which (to my limited knowledge in this area) marks the underlying OS request as canceled and returns immediately, before the cancellation notification is delivered to the user. I.e., I suspect the uv_close callback might be called before all read/write callbacks have reported a "canceled" error -- unless uv_close() keeps track of currently outstanding requests and delays the close_cb notification until they are all gone. Can you confirm or deny this?

Consider this pseudocode of a C++ coroutine:

void my_coro()
{
    RingBuffer buf(1024);
    SocketHandle sock(...);   // in dtor it will call uv_close(), suspend and resume in uv_close callback
    ...
    // submit read/write requests that access 'buf' in their respective callbacks
    ...
    // here we hit an error and leave (via return or exception)
}

As you can see, it is imperative to have a mechanism that guarantees "no more callback calls" right before 'buf''s destructor runs. uv_close() may or may not provide this guarantee -- I don't know. uv_shutdown() seems to provide it (even when shutdown() fails?), but apparently only for write requests.

So, this is quite important -- does libuv officially(!!!) provide this guarantee in uv_close() or not? Even if the current version happens to provide it, without an official promise I can't rely on it and have to add extra synchronization to ensure my coroutine stays suspended (right before 'buf''s destructor) until all requests have completed.

 
You should probably not rely on the order in which different request
types run, e.g., whether a connect callback comes before or after a
write callback.

I understand this, but I had somewhat different problem in mind (as shown above).

 
> 2. can uv_close callback be cancelled or not completed? E.g. if I ask event
> loop to terminate -- will it wait for all callbacks to complete? what if
> these callbacks try to queue another requests -- will they get cancelled or
> those requests will fail to submit?

If "ask to terminate" means calling:

1. uv_stop() -> uv_stop() just tells uv_run() to return at its
earliest convenience, it doesn't shut down anything.
2. uv_loop_close() -> uv_loop_close() fails with UV_EBUSY if there are
active handles or requests.

A closing handle is considered active, even if it has been unref'd
with uv_unref().

When a callback creates another handle or request, it keeps the event
loop alive.

Hmm... let me rephrase: how do I cleanly shut down a running event loop? uv_stop() gets me out of the loop, but all active handles are still in memory (attached to the event loop). Their respective handlers (coroutine stack frames, requests, buffers, etc.) are on the heap waiting for notifications (callbacks). What is the proper procedure to unwind all of this and avoid leaks?

What if, during unwinding, I submit another request or try to open a new handle?


> 4. if I submit two write requests -- is it possible for first one to fail
> and for second one to succeed? (uv_tcp_t/uv_file_t/etc, Windows/Linux)

In theory, yes; in practice, no.  Libuv stops reading and writing when
an I/O error happens.  It is theoretically possible to start reading
and writing again if the error is transient, but in practice you
simply close the handle.

I see... I understand that this shouldn't happen with uv_tcp_t -- TCP is a "stream", i.e. it guarantees data transfer order, and I suspect the underlying OS mechanisms ensure that if the first request fails, the second fails too.

But what about uv_udp_t? With two simultaneous write requests, does libuv queue the second one until the first completes (and, if the first fails, fail the second without submitting it to the OS)?

Same for uv_fs_t -- I can submit two write requests that update different locations in the same file. If these writes are immediately "converted" to async/overlapped requests (on Windows), it is possible for the first one to fail and the second one to succeed -- unless libuv serializes them and executes them one by one.


--
Sincerely yours,
Michael.

Jameson Nash

Mar 25, 2018, 3:05:33 PM
to li...@googlegroups.com
> unless uv_close() keeps track of currently outstanding requests and
Libuv keeps track of all outstanding requests (how else would it know whom to notify on completion?)

> how to cleanly shutdown a running event loop?
Libuv is designed for use from Node.js, which runs the event loop until all workers are closed and all requests handled. To shut down cleanly, wait for all outstanding requests to complete and then close all handles.

> What if during unwinding I will submit another request or try to open a new handle?
You will have to decide how your application needs to handle this.
 
> Same for uv_fs_t -- I can submit two write requests that will try to update different locations in the same file. If these writes are immediately "converted" to async/overlapped requests
Yes, there's a long-standing bug / PR that libuv sometimes assumes that fs writes simply can't fail: https://github.com/libuv/libuv/pull/269#discussion_r28131975

Michael Kilburn

Mar 25, 2018, 9:26:42 PM
to li...@googlegroups.com
On Sun, Mar 25, 2018 at 2:05 PM, Jameson Nash <vtj...@gmail.com> wrote:
> unless uv_close() keeps track of currently outstanding requests and
Libuv keeps track of all outstanding requests (how else would it have known who to notify for completion)

So, can I rely on the uv_close callback being invoked only after all outstanding requests (on the handle being closed) have been canceled and their callbacks invoked?

 
> how to cleanly shutdown a running event loop?
Libuv is designed for use from Node.js, which runs the event loop until all workers are closed and all requests handled. To shut down cleanly, wait for all outstanding requests to complete and then close all handles.

Does this mean there is no way to (cleanly) shut down the event loop without adding extra logic to my code? Think of the typical TCP echo server example -- let's say you want it to exit cleanly (without leaks) when a user sends a byte with a certain value (e.g. 255). How would you do that?

 
> What if during unwinding I will submit another request or try to open a new handle?
You will have to decide how your application needs to handle this.
 
> Same for uv_fs_t -- I can submit two write requests that will try to update different locations in the same file. If these writes are immediately "converted" to async/overlapped requests
Yes, there's a long-standing bug / PR that libuv sometimes assumes that fs writes simply can't fail: https://github.com/libuv/libuv/pull/269#discussion_r28131975

Great...


Michael Kilburn

Mar 27, 2018, 10:39:33 PM
to li...@googlegroups.com
On Sun, Mar 25, 2018 at 8:26 PM, Michael Kilburn <crusad...@gmail.com> wrote:
On Sun, Mar 25, 2018 at 2:05 PM, Jameson Nash <vtj...@gmail.com> wrote:
> unless uv_close() keeps track of currently outstanding requests and
Libuv keeps track of all outstanding requests (how else would it have known who to notify for completion)

So, can I rely on uv_close callback to be invoked after all outstanding requests (on handle being closed) are canceled (and their callbacks are invoked)?

For future reference:
I spent considerable time reading the uv_close() code for uv_tcp_t (Windows and Linux) -- it seems that (as of now) close_cb is guaranteed to be called after all outstanding connect_cb/write_cb/read_cb have been invoked with UV_ECANCELED. I'm about 95% sure -- some of the code paths I looked at are mind-bending...

It also seems that considerable effort went into delivering this behavior (especially on Windows) -- i.e., there is a good chance this guarantee is intentional and will stand.

Still not sure about other handle types (and the general intent wrt the order of close_cb invocation and UV_ECANCELED delivery).


Michael Kilburn

Mar 28, 2018, 3:35:54 PM
to li...@googlegroups.com

Also, this guarantee isn't very useful in the context of coroutines. It guarantees that the related coroutine(s) will resume before close_cb is called, but not that they will run to completion (and not suspend again). I.e., if your write/read/connect callback resumes a coroutine, you are probably better off adding explicit synchronization between the coroutine's execution and the destruction of the related data structures.
