Python: Using a response-streaming API from a done callback


Reino Ruusu

Sep 15, 2021, 3:40:46 AM
to grpc.io
I have a case in which a call is made to a single-request, streaming-response API (through UnaryStreamMultiCallable.__call__()). This API is invoked from a callback registered with set_done_callback() on a future object returned by a call to UnaryUnaryMultiCallable.future(), so that streaming starts asynchronously as soon as the previous call finishes.

This causes the iterator returned for the streaming response to deadlock on the first next() call, irrespective of whether the stream produces messages or raises an exception.

The streaming call works as expected when it is invoked from any context other than the done callback of the previous asynchronous call. This makes me suspect that some resource related to the channel is locked during callback execution, resulting in a deadlock in the call to the stream's iterator.

Is there some way around this?

BR,
-- 
Reino Ruusu

Reino Ruusu

Sep 15, 2021, 3:49:27 AM
to grpc.io
Of course I meant to write add_done_callback() instead of set_done_callback().

To clarify, the code looks like this:

it = stub.singleStreamApi(...)
next(it) # <-- This works as expected

fut = stub.singleSingleApi.future(...)
def callback(fut):
    it = stub.singleStreamApi(...)
    next(it) # <-- This gets stuck in a deadlock
fut.add_done_callback(callback)

Reino Ruusu

Sep 15, 2021, 4:09:22 AM
to grpc.io
A further clarification: the thread is not waiting for the future but returns to the event loop. The callback function is definitely executed, and the deadlock happens in the call to next(). Also, the same callback function can successfully make other single-single API calls synchronously, but the single-streaming call deadlocks.

Richard Belleville

Sep 15, 2021, 2:51:52 PM
to grpc.io
So this is an interesting problem. It certainly is unintuitive behavior. I'm also not sure if we should change it. Let me start by explaining the internals of gRPC Python a little bit.

A server-streaming RPC call requires the cooperation of two threads: the thread provided by the client application calling __next__ repeatedly (thread A) and a thread created by the gRPC library that drives the event loop in the C extension, which ultimately uses a mechanism like epoll (thread B). Under the hood, __next__ (thread A) just checks to see if thread B has received a response from the server and, if so, returns it to the client code. Normally, this works out just fine.
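
To make the two roles concrete, here is a minimal sketch of the usual consumption pattern, reusing the hypothetical singleStreamApi stub method from the earlier messages (request and process are placeholders):

# Thread A is this application thread, which pulls responses by iterating
# (each iteration is a __next__ call). Thread B, created by the gRPC
# library, drives the event loop and receives the responses that
# __next__ hands back.
for response in stub.singleStreamApi(request):
    process(response)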

But thread B has some other responsibilities, including running any RPC callbacks. This means that in the scenario you described above, thread A and thread B are actually the same thread. So when __next__ is called, there is no separate thread to drive the event loop and receive the responses.

So that's the cause for the deadlock you described. Now, you might say that this is an easy problem to solve. Why not just run the callbacks on a new thread? Then there is no deadlock in this scenario. True. But we've found that additional Python threads kill performance because they're all contending for the GIL. Doing this at the library level could slow down many existing workloads. We've actually put quite a bit of effort into reducing the number of threads we use in the library. There are some options we could consider to make this work out of the box without destroying performance, but it's going to take some thought and careful benchmarking.
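
As a rough sketch of what that looks like if an application opts into the extra thread itself, the done callback can hand the streaming work to an executor that the application owns, so that __next__ never runs on the gRPC-owned thread (again, singleSingleApi and singleStreamApi are the hypothetical stub methods from earlier; request and process are placeholders):

import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

def consume_stream():
    # Runs on a pool thread owned by the application, so the gRPC-owned
    # thread stays free to drive the event loop and deliver responses.
    for response in stub.singleStreamApi(request):
        process(response)

def callback(fut):
    # Keep the work done on the gRPC thread minimal: just schedule the
    # streaming call and return.
    executor.submit(consume_stream)

fut = stub.singleSingleApi.future(request)
fut.add_done_callback(callback)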

For the moment, I'd recommend that you not initiate an RPC from the callback handler and instead use the callback just to notify another thread that your application owns, whether that's the thread the unary RPC was initiated from or some other thread you've created yourself.
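
A minimal sketch of that pattern, using the same thread the unary RPC was initiated from (same hypothetical stub names and placeholders as above):

import threading

done = threading.Event()

def callback(fut):
    # Only notify; make no RPC calls on the gRPC-owned thread.
    done.set()

fut = stub.singleSingleApi.future(request)
fut.add_done_callback(callback)

done.wait()  # the thread that started the unary RPC waits for it to finish
# Back on an application-owned thread, the streaming call iterates normally,
# because the gRPC thread is free to drive the event loop.
for response in stub.singleStreamApi(request):
    process(response)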
