Please contribute to the design and implementation of asynchronous gRPC Python


Nathaniel Manista

Sep 12, 2016, 12:47:21 PM
to grp...@googlegroups.com
There's been a great deal of interest in enhancing gRPC Python to work with asynchronous frameworks and language features. While gRPC has been open source for some time we haven't until now done much in the way of community-driven development; asynchronous gRPC Python is a chance to try out a more direct open process. Please participate!

For those of you who have found workarounds and drafted patches: you probably know more about the work ahead than I do, and I'd especially appreciate your involvement.

Process-wise, I have drafted this initial document, and for at least the initial design I'd like to proceed by pull requests fleshing out the document with proposals and plans. Pull requests to gRPC itself will also, of course, be part of the conversation - please remember to include test coverage, not only for code health's sake but also as an illustration of new behavior.

Looking forward,
-Nathaniel

leifu...@gmail.com

Sep 12, 2016, 1:46:02 PM
to grpc.io
Is the idea to provide asynchronous / async-friendly APIs on the stub, the service, or both? Based on my experience, I think designing an asynchronous API for RPC services is probably trickier, particularly for streaming replies. The simple, developer-friendly generator idiom for streaming replies doesn't translate to asyncio coroutines, which can only return once. Coroutine generators are not implemented in Python 3.5; see https://www.python.org/dev/peps/pep-0492/#coroutine-generators
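(Editorial aside, with illustrative names: Python 3.6's PEP 525 later added async generators - `async def` bodies containing `yield` - which restore the generator idiom for streaming replies. Under Python 3.5, the constraint described above stands, and only the push-based style discussed below is available. A minimal sketch of the 3.6+ idiom:)

```python
import asyncio

# An async generator (PEP 525, Python 3.6+): a coroutine-like function
# that can yield a value per streamed reply instead of returning once.
async def method_with_streaming_replies(request):
    for i in range(request):
        yield i  # each reply is produced as it becomes available

async def main():
    # Async comprehensions (PEP 530) drive the generator to exhaustion.
    return [reply async for reply in method_with_streaming_replies(3)]

print(asyncio.run(main()))  # [0, 1, 2]
```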

What I think this means is that an asynchronous RPC method with a streaming reply would have to push elements onto a queue or some other object representing a stream, rather than returning them directly. Something like the following:

async def MethodWithStreamingReplies(request, reply_stream):
    while some_condition:
        item = await some_coroutine()
        reply_stream.push(item)
    reply_stream.close()

Note that the reply stream needs to be closed explicitly; unlike a generator, which raises StopIteration once it is exhausted, the reply stream presumably needs a terminating sentinel to notify its consumer that it has been exhausted.
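A minimal sketch of such a stream, using a private sentinel object as the terminating marker. (`ReplyStream`, `_CLOSED`, and the async-iterator consumption are all illustrative assumptions, not gRPC API.)

```python
import asyncio

_CLOSED = object()  # private sentinel marking exhaustion of the stream

class ReplyStream:
    """Hypothetical reply stream a streaming-reply method pushes into."""

    def __init__(self):
        self._queue = asyncio.Queue()
        self._closed = False

    def push(self, item):
        if self._closed:
            raise RuntimeError('stream already closed')
        self._queue.put_nowait(item)

    def close(self):
        # Enqueue the sentinel so the consumer learns of exhaustion.
        if not self._closed:
            self._closed = True
            self._queue.put_nowait(_CLOSED)

    def closed(self):
        return self._closed

    def __aiter__(self):
        return self

    async def __anext__(self):
        item = await self._queue.get()
        if item is _CLOSED:
            raise StopAsyncIteration
        return item

async def method_with_streaming_replies(request, reply_stream):
    for i in range(request):
        reply_stream.push(i)
    reply_stream.close()

async def main():
    stream = ReplyStream()
    await method_with_streaming_replies(3, stream)
    return [item async for item in stream]

print(asyncio.run(main()))  # [0, 1, 2]
```

In this toy the method runs to completion before the stream is drained, which only works because the queue is unbounded; a real implementation would run producer and consumer concurrently and would want backpressure.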

Nathaniel Manista

Sep 20, 2016, 5:21:14 PM
to leifu...@gmail.com, grpc.io
On Mon, Sep 12, 2016 at 10:46 AM, <leifu...@gmail.com> wrote:
Is the idea to provide asynchronous / async-friendly APIs on the stub, service, or both?

The feature requests we've received cover both, although I think we've gotten more on the stub side. Since at run time the two uses would be separated by many layers of code and perhaps some physical distance over a network, the reasons for developing them together would be... API style consistency? Which is plenty important, but does any other reason come to mind?

Based on my experience, I think designing an asynchronous API for RPC services is probably trickier, particularly for streaming replies. The simple, developer-friendly generator idiom for streaming replies doesn't translate to asyncio coroutines, which can only return once. Coroutine generators are not implemented in Python 3.5; see https://www.python.org/dev/peps/pep-0492/#coroutine-generators

What I think this means is that an asynchronous RPC method with a streaming reply would have to push elements onto a queue or some other object representing a stream, rather than returning them directly. Something like the following:

async def MethodWithStreamingReplies(request, reply_stream):
    while some_condition:
        item = await some_coroutine()
        reply_stream.push(item)
    reply_stream.close()

For my own clarity: is the preceding a sketch of stub-side code within gRPC Python (or generated by gRPC Python)? How do you think the corresponding application-authored code would look?

Thanks for contributing,
-Nathaniel

la...@lyschoening.de

Sep 25, 2016, 11:27:33 AM
to grpc.io, leifu...@gmail.com

Nathaniel: That looks like an example of a method implementation.

The stream does not have to be closed explicitly by the implementer (though it should be possible to do so). The stream should always close when the method returns. The server logic would look something like this:

async def invoke_unary_stream(method, request, reply_stream):
    await method(request, reply_stream)
    if not reply_stream.closed():
        reply_stream.close()

def handle_unary_stream_request(method, request) -> Tuple[Future, ReplyStream]:
    reply_stream = ReplyStream()
    return asyncio.ensure_future(invoke_unary_stream(method, request, reply_stream)), reply_stream


The reply stream then has to be consumed by either a different coroutine or a thread that is responsible for sending response messages.

I have written a simple prototype of an asyncio gRPC implementation in hyper-h2 (http://python-hyper.org/projects/h2/en/stable/). I'll be happy to put that on GitHub if anyone is interested. I don't think it is particularly difficult to get gRPC working with asyncio in principle.

The main issue that needs to be addressed with the official gRPC implementation is that there is a thread for each method call, and messages related to a particular RPC call are sent/received in a blocking manner in the same thread where the method call is happening. It is straightforward to make a coroutine method implementation today, but the exercise is pointless because each RPC call has its own thread.

There should be some dedicated threads for receiving and sending messages and then a dedicated thread for the event loop. Then handling a request is just a matter of invoking the method on the event loop and passing any streamed messages between the threads (using a queue or some other mechanism).
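A rough sketch of that shape: an event loop on a dedicated thread, method invocations scheduled onto it with `asyncio.run_coroutine_threadsafe`, and a thread-safe `queue.Queue` carrying outgoing messages to the sending thread. (All names here are invented for illustration and are not gRPC internals.)

```python
import asyncio
import queue
import threading

def start_event_loop_thread():
    # Dedicated thread that does nothing but run the asyncio event loop.
    loop = asyncio.new_event_loop()
    threading.Thread(target=loop.run_forever, daemon=True).start()
    return loop

async def streaming_method(request, out_queue):
    # Coroutine method implementation. queue.Queue.put is thread-safe and
    # never blocks here because the queue is unbounded; a real transport
    # would want backpressure instead.
    for i in range(request):
        out_queue.put(('message', i))
    out_queue.put(('close', None))  # terminating marker for the sender

def handle_request(loop, request):
    # Stand-in for the sending thread: schedule the method on the event
    # loop, then drain outgoing messages as they arrive.
    out_queue = queue.Queue()
    asyncio.run_coroutine_threadsafe(streaming_method(request, out_queue), loop)
    sent = []
    while True:
        kind, payload = out_queue.get()
        if kind == 'close':
            break
        sent.append(payload)
    return sent

loop = start_event_loop_thread()
print(handle_request(loop, 3))  # [0, 1, 2]
```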

Nathaniel Manista

Oct 6, 2016, 3:31:33 PM
to la...@lyschoening.de, grpc.io, Leif Halldór Ásgeirsson
On Sun, Sep 25, 2016 at 8:27 AM, <la...@lyschoening.de> wrote:
Nathaniel: That looks like an example of a method implementation.

I think I'm so used to seeing the messages be protocol buffer objects that a simple "item" threw me off. :-P
 
The stream does not have to be closed explicitly by the implementer (though it should be possible to do so). The stream should always close when the method returns. The server logic would look something like this:

async def invoke_unary_stream(method, request, reply_stream):
    await method(request, reply_stream)
    if not reply_stream.closed():
        reply_stream.close()

def handle_unary_stream_request(method, request) -> Tuple[Future, ReplyStream]:
    reply_stream = ReplyStream()
    return asyncio.ensure_future(invoke_unary_stream(method, request, reply_stream)), reply_stream


The reply stream then has to be consumed by either a different coroutine or a thread that is responsible for sending response messages.

I have written a simple prototype of an asyncio gRPC implementation in hyper-h2 (http://python-hyper.org/projects/h2/en/stable/). I'll be happy to put that on GitHub if anyone is interested.

Sure!

I don't think it is particularly difficult to get gRPC working with asyncio in principle. The main issue that needs to be addressed with the official gRPC implementation is that there is a thread for each method call

Agreed.

and messages related to a particular RPC call are sent/received in a blocking manner

Agreed.

in the same thread where the method call is happening.


It is straightforward to make a coroutine method implementation today, but the exercise is pointless because each RPC call has its own thread. There should be some dedicated threads for receiving and sending messages


and then a dedicated thread for the event loop. Then handling a request is just a matter of invoking the method on the event loop and passing any streamed messages between the threads (using a queue or some other mechanism).

Sounds reasonable.
-N

claw...@gmail.com

Feb 7, 2017, 7:40:56 PM
to grpc.io
An H2-based gRPC implementation sounds interesting. Did you get around to posting your H2-based experiment to GitHub? If so, could you provide a link?

Lars Schöning

Mar 2, 2017, 6:59:25 AM
to claw...@gmail.com, grpc.io
Sorry to take such a long time to reply. I initially didn’t post the code because it isn’t all that elegant, and because it appears that the underlying issue with asyncio support in the official gRPC implementation is simply one of refactoring the code so that it doesn’t block a thread for each method call.

I’ve now uploaded the hyper-h2 implementation here:



On 08 Feb 2017, at 01:40, claw...@gmail.com wrote:

An H2-based gRPC implementation sounds interesting. Did you get around to posting your H2-based experiment to GitHub? If so, could you provide a link?


da4...@gmail.com

May 30, 2017, 4:06:12 AM
to grpc.io, la...@lyschoening.de, leifu...@gmail.com
Late to the party, but ... I have a need for this, so ...

On Friday, October 7, 2016 at 6:31:33 AM UTC+11, Nathaniel Manista wrote:
On Sun, Sep 25, 2016 at 8:27 AM, <la...@lyschoening.de> wrote:
Nathaniel: That looks like an example of a method implementation.

I think I'm so used to seeing the messages be protocol buffer objects that a simple "item" threw me off. :-P
 
The stream does not have to be closed explicitly by the implementer (though it should be possible to do so). The stream should always close when the method returns. The server logic would look something like this:

async def invoke_unary_stream(method, request, reply_stream):
    await method(request, reply_stream)
    if not reply_stream.closed():
        reply_stream.close()

def handle_unary_stream_request(method, request) -> Tuple[Future, ReplyStream]:
    reply_stream = ReplyStream()
    return asyncio.ensure_future(invoke_unary_stream(method, request, reply_stream)), reply_stream


The reply stream then has to be consumed by either a different coroutine or a thread that is responsible for sending response messages.

I have written a simple prototype of an asyncio gRPC implementation in hyper-h2 (http://python-hyper.org/projects/h2/en/stable/). I'll be happy to put that on GitHub if anyone is interested.

Sure!

I don't think it is particularly difficult to get gRPC working with asyncio in principle. The main issue that needs to be addressed with the official gRPC implementation is that there is a thread for each method call

Agreed.

That's unfortunate.  I mean ... that thread could be dealt with in the background, but it's ugly.
 
and messages related to a particular RPC call are sent/received in a blocking manner

Agreed.

Is the existing futures implementation helpful here?

in the same thread where the method call is happening.


It is straightforward to make a coroutine method implementation today, but the exercise is pointless because each RPC call has its own thread. There should be some dedicated threads for receiving and sending messages


So, ideally, that thread would either go away and the iomgr would be subsumed by the asyncio event loop (big, hard, and likely ugly) or ... we keep that thread, and have it post events to the event loop (for a received message) and pull send events off a queue?
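The second option can be sketched roughly as follows: the existing I/O thread survives and hands each received message to the event loop with `call_soon_threadsafe`. (The names, the `None` end-of-stream marker, and the hard-coded messages are illustrative assumptions, not iomgr internals.)

```python
import asyncio
import threading

def io_thread_main(loop, inbox, wire_messages):
    # Stand-in for the existing receive thread (iomgr): for each message
    # read off the wire, post a callback onto the event loop.
    for msg in wire_messages:
        loop.call_soon_threadsafe(inbox.put_nowait, msg)
    loop.call_soon_threadsafe(inbox.put_nowait, None)  # end-of-stream marker

async def method(inbox):
    # Coroutine method implementation consuming the posted messages.
    received = []
    while True:
        msg = await inbox.get()
        if msg is None:
            return received
        received.append(msg)

async def main():
    inbox = asyncio.Queue()
    loop = asyncio.get_running_loop()
    io_thread = threading.Thread(
        target=io_thread_main, args=(loop, inbox, ['a', 'b']))
    io_thread.start()
    result = await method(inbox)
    io_thread.join()
    return result

print(asyncio.run(main()))  # ['a', 'b']
```

The send direction would mirror this with a thread-safe queue that the I/O thread drains, as suggested above.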

and then a dedicated thread for the event loop. Then handling a request is just a matter of invoking the method on the event loop and passing any streamed messages between the threads (using a queue or some other mechanism).

Sounds reasonable.
-N

Has anything happened here recently?


