Backpressure / Pipelining

Mark Maker

Jan 10, 2024, 11:53:55 AM
to libuv
Hi all,

Thanks for a superb lib!

I'm still trying to wrap my head around event-driven network programming in the realm of HTTP and WebSockets.

All the examples one finds (for various libs on top of libuv or similar) do very simple things, like echoing messages or reporting the time back. In my application I will have costly, CPU-bound, and/or potentially blocking database tasks in the handlers, and/or may have to send large amounts of data back.

First, I do understand that I need to offload tasks into separate threads.

But then, what I don't understand is how to propagate back-pressure:
  1. From the response sender to the task scheduler, i.e. when the network is too slow to deliver (large) responses to the clients, I should suspend running more tasks.
  2. From the tasks to the request/message receiver, i.e. when tasks are backed up, it should suspend receiving new requests/messages from the sockets and propagate back-pressure through socket/TCP flow control to the clients.
  3. From the receiver to the listener, i.e. when requests/messages are backed up, it should suspend accepting new connections, also putting back-pressure into the listen backlog and ultimately onto new clients (the backlog might in turn allow proper kernel load-balancing in a multi-threaded/multi-process design, using SO_REUSEPORT).
Am I right to assume that in an event-driven design, unless these issues are addressed specifically, excessive buffering, memory exhaustion, and failure are inevitable under client-driven overload?

So do I need to call uv_read_stop() and uv_listen_stop() etc. manually, throttling these on and off, with all the complexity that entails in the face of potential errors, timeouts, etc.?
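
To make the question concrete, here is roughly what I imagine that manual throttling would look like for points 1 and 2: pause reads while too much response data is queued, and resume once libuv has flushed it. The names conn_t, write_req_t and WRITE_HIGH_WATER are mine, the echo handler just stands in for real work handed to a worker thread, and uv_stream_get_write_queue_size() needs libuv >= 1.19:

#include <stdlib.h>
#include <uv.h>

#define WRITE_HIGH_WATER (1 << 20)   /* 1 MiB; an arbitrary example limit */

typedef struct {
    uv_tcp_t handle;     /* the client socket; must stay the first member */
    int reads_paused;    /* nonzero while uv_read_stop() is in effect */
} conn_t;

typedef struct {
    uv_write_t req;
    uv_buf_t buf;        /* owns buf.base until the write completes */
} write_req_t;

static void on_alloc(uv_handle_t *handle, size_t suggested, uv_buf_t *buf) {
    buf->base = malloc(suggested);
    buf->len = suggested;
}

static void on_read(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf);

static void on_write_done(uv_write_t *req, int status) {
    write_req_t *wr = (write_req_t *) req;
    conn_t *conn = (conn_t *) req->handle;
    free(wr->buf.base);
    free(wr);
    /* Resume reading once the data queued on this stream has drained. */
    if (status == 0 && conn->reads_paused &&
        uv_stream_get_write_queue_size((uv_stream_t *) &conn->handle) == 0) {
        conn->reads_paused = 0;
        uv_read_start((uv_stream_t *) &conn->handle, on_alloc, on_read);
    }
}

/* Queue a response; pause reads while too many bytes are already queued. */
static void send_response(conn_t *conn, char *data, size_t len) {
    write_req_t *wr = malloc(sizeof(*wr));
    wr->buf = uv_buf_init(data, (unsigned int) len);
    uv_write(&wr->req, (uv_stream_t *) &conn->handle, &wr->buf, 1,
             on_write_done);

    if (!conn->reads_paused &&
        uv_stream_get_write_queue_size((uv_stream_t *) &conn->handle) >
            WRITE_HIGH_WATER) {
        conn->reads_paused = 1;
        uv_read_stop((uv_stream_t *) &conn->handle);
    }
}

static void on_read(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
    conn_t *conn = (conn_t *) stream;
    if (nread <= 0) {
        free(buf->base);
        if (nread < 0)              /* EOF or error: tear down */
            uv_close((uv_handle_t *) stream, NULL);
        return;
    }
    /* Echo for the sake of the example; a real handler would hand the
     * data to a worker thread and call send_response() from the loop
     * thread later (e.g. via a uv_async_t wakeup). */
    send_response(conn, buf->base, (size_t) nread);
}

Multiply this by listener throttling, timeouts, and error paths, and it gets hairy quickly, hence the question.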

Or is there a "uv_callback_pipeline" that I could stick between producer and consumer, that buffers a limited number of callbacks (ring buffer) and stops/starts the producer uv_handle_t automatically, according to type?

Note: all of this is automatic in a synchronous multi-threaded/multi-process design, by virtue of the underlying socket buffers and listen backlogs, and I'm not yet ready to believe that there is no similar automatic mechanism available in libuv or in event-driven programming in general. :-)

_Mark

Ben Noordhuis

Jan 12, 2024, 6:33:20 AM
to li...@googlegroups.com
Yes, you're right that you have to call uv_read_stop() and
uv_listen_stop() when you're not ready to receive more data.

That's the one big design mistake we made back then: reads and accepts
should have been request-based, like writes are, not the firehose
model we have today.

You can reasonably faithfully emulate the request-based model by
calling uv_read_stop() or uv_listen_stop() first thing in your
uv_read_cb or uv_connection_cb callback. That's what I usually do in
my programs.
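
Roughly like this, with error handling trimmed and process_request() as a placeholder for whatever hands the data to your workers; re-arm with uv_read_start() when you are ready for the next chunk, and apply the same idea in the connection callback:

#include <stdlib.h>
#include <uv.h>

/* Hypothetical: hands the data off to a worker pool / database layer. */
void process_request(uv_stream_t *stream, char *data, size_t len);

static void on_alloc(uv_handle_t *handle, size_t suggested, uv_buf_t *buf) {
    buf->base = malloc(suggested);
    buf->len = suggested;
}

static void on_read(uv_stream_t *stream, ssize_t nread, const uv_buf_t *buf) {
    /* One "read request" at a time: no more callbacks until we re-arm. */
    uv_read_stop(stream);

    if (nread <= 0) {
        free(buf->base);
        if (nread < 0)
            uv_close((uv_handle_t *) stream, NULL);
        return;
    }

    /* When the work (and the resulting write) has drained far enough,
     * call uv_read_start(stream, on_alloc, on_read) again to ask for
     * the next chunk. */
    process_request(stream, buf->base, (size_t) nread);
}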