gRPC completion queue types


Pau Freixes

Aug 21, 2018, 6:28:24 AM
to grp...@googlegroups.com
Hi,

Reading the current gRPC code, I've realized that there are [1] other
alternatives to completion queue next: the pluck and the callback
variants.

I've been trying to find documentation, or usage examples, for these
alternatives, but found nothing.

I'm wondering why and under what circumstances they are used, what
their main characteristics are compared to completion queue next, and
why they are not really documented.

Could anybody shed some light on this?

[1] https://github.com/grpc/grpc/blob/master/src/core/lib/surface/completion_queue.cc#L347
--
--pau

Christopher Warrington - MSFT

Aug 21, 2018, 5:39:05 PM
to grpc.io
On Tuesday, August 21, 2018 at 3:28:24 AM UTC-7, Pau Freixes wrote:

> I've realized reading the current Grpc code that exists [1] other
> alternatives to the completion queue next, the pluck and the callback
> one.
>
> I've been trying to seek some information, or usage, of these
> alternatives and I found nothing.

There's some documentation in grpc.h [1] for
grpc_completion_queue_pluck. Most of the documentation for the core C
library is in grpc.h. The difference is that grpc_completion_queue_pluck
takes a tag to wait for, while grpc_completion_queue_next does not: it
returns some ready tag, whatever it may be.

grpc_completion_queue_pluck is often used to implement synchronous
processing, while grpc_completion_queue_next is used to implement
asynchronous processing.
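To make the semantic difference concrete, here is a toy model in plain Python (purely illustrative, not the gRPC C-core API): "next" hands back whichever completion is ready first, while "pluck" waits for the completion carrying one specific tag.

```python
import threading


class ToyCompletionQueue:
    """A toy queue of (tag, result) completions, for illustration only."""

    def __init__(self):
        self._cv = threading.Condition()
        self._done = []  # list of (tag, result) completions

    def complete(self, tag, result):
        with self._cv:
            self._done.append((tag, result))
            self._cv.notify_all()

    def next(self):
        # Like grpc_completion_queue_next: return *some* ready completion,
        # whichever happens to be first.
        with self._cv:
            while not self._done:
                self._cv.wait()
            return self._done.pop(0)

    def pluck(self, tag):
        # Like grpc_completion_queue_pluck: block until the completion
        # for this specific tag is available.
        with self._cv:
            while True:
                for i, (t, r) in enumerate(self._done):
                    if t == tag:
                        return self._done.pop(i)
                self._cv.wait()


cq = ToyCompletionQueue()
cq.complete("call-1", "ok")
cq.complete("call-2", "ok")

print(cq.pluck("call-2"))  # waits for exactly this tag
print(cq.next())           # returns whatever is ready next
```

This mirrors why pluck suits synchronous stubs (each caller waits for its own tag) while next suits an async event loop draining all completions.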

The callback variant is very new and is still experimental. That's
likely why it is lacking documentation. From vjpai's pull request that
added the initial implementation [2]:

vjpai > This is not ready for public use at the current time. There are no
vjpai > end2end tests possible until after #16298 lands and is implemented
vjpai > with a real backing poller, but there is now a unit test added.

[1]: https://github.com/grpc/grpc/blob/82bc60c0e13bfb00213b3a94ba72893d044e4c9a/include/grpc/grpc.h#L115-L140
[2]: https://github.com/grpc/grpc/pull/16302

--
Christopher Warrington
Microsoft Corp.

Pau Freixes

Aug 21, 2018, 7:20:48 PM
to chw...@microsoft.com, grp...@googlegroups.com
Thanks for the summary,

I'm starting an initiative to analyze how gRPC could be implemented on
top of asyncio. One of the starting points is the current
implementation of grpc-node, which relies on grpc_completion_queue_next
[1] to achieve the needed cooperation without blocking the loop.

Using queue_next and replicating the pattern implemented by grpc-node
raises several concerns; what worries me most is the chance of making
blocking calls into the grpc_completion_queue_next function [2].
So I was wondering if I could implement the asynchrony using a simple
callback pattern instead, avoiding implicitly blocking calls.
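A hedged sketch of the concern: if a wrapper polls a blocking next()-style call on the asyncio loop thread, the loop stalls. The usual workaround is to run the blocking poll in an executor thread and hand completions back to the loop. All names here are illustrative stand-ins, not the actual gRPC C-core API.

```python
import asyncio
import queue


def blocking_next(q):
    # Stands in for grpc_completion_queue_next: blocks until a tag is ready.
    return q.get()


async def poll_completions(q, n):
    loop = asyncio.get_running_loop()
    results = []
    for _ in range(n):
        # The blocking poll runs in the default thread pool, so the
        # event loop stays free to schedule other coroutines.
        tag = await loop.run_in_executor(None, blocking_next, q)
        results.append(tag)
    return results


q = queue.Queue()
for tag in ("a", "b", "c"):
    q.put(tag)

print(asyncio.run(poll_completions(q, 3)))  # ['a', 'b', 'c']
```

This keeps the loop responsive, but at the cost of a dedicated polling thread and a thread hop per completion, which is part of what makes a native callback API appealing.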

I will contact the author of the PR to get more info about the goal of
the callback variant; at least the name sounds appealing to me :).
I could also consider implementing the asynchronous layer on top of
the grpc_completion_queue_pluck interface, so any advice will be
welcome.

PS: unfortunately, replicating the Node use case to implement asyncio
support may hit some red flags that cannot be circumvented. For
example, gRPC implements an I/O manager for libuv, automatically
achieving cooperation between the gRPC code and the Node code; this is
not portable to the asyncio use case.


[1] https://github.com/grpc/grpc-node/blob/master/packages/grpc-native-core/ext/completion_queue.cc#L43
[2] https://github.com/grpc/grpc/blob/master/src/core/lib/surface/completion_queue.cc#L1030



--
--pau

Vijay Pai

Aug 29, 2018, 2:15:50 PM
to grpc.io
Hi there,

This is a follow-up to our discussion on the PR; I think it is better to move the discussion here.

The intention with the recent and forthcoming PRs (https://github.com/grpc/grpc/pull/16302, https://github.com/grpc/grpc/pull/16414, https://github.com/grpc/grpc/pull/16492) is to support a callback-based C++ async API, which has been a user request since almost day 1 after the gRPC alpha release. Making this fully feasible, however, is an ongoing effort that will substantially change iomgr, codegen, the C++ language binding, and ultimately the code that users write. The core surface for this has the interface of a completion queue (but doesn't actually have any queue) and does not use next. Instead, callbacks get invoked when operation batches are complete, which is typically realized with the aid of iomgr (except for non-polling transports, like inproc). We are moving toward providing an iomgr in OSS that will allow callbacks to be triggered in a separate threadpool, independently of application control.
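A minimal sketch (plain Python, purely illustrative, not the actual API) of the callback shape described above: instead of the application pulling tags off a queue with next(), the library invokes a per-operation callback when the batch completes, from a thread the library controls.

```python
import threading


def start_batch(on_done):
    """Simulate an async operation whose completion fires a callback."""
    def worker():
        result = "batch complete"   # stand-in for the finished op batch
        on_done(result)             # invoked from a library-owned thread
    t = threading.Thread(target=worker)
    t.start()
    return t


done = threading.Event()
outcome = []


def my_callback(result):
    outcome.append(result)
    done.set()


start_batch(my_callback)
done.wait()
print(outcome[0])  # batch complete
```

The inversion of control is the point: no application thread ever blocks in a next()/pluck() call; completion delivery is the library's responsibility.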

It may become possible to consider using this in other language bindings as well, and perhaps it could be suitable for interfacing with another async library; that is an interest of ours but not a priority, so we'd certainly welcome any input or contributions. When this is a little less experimental, I can discuss it in one of our biweekly video meetups and collect feedback there for further use. I will also propose it officially through our gRFC process when we are ready to consider it for stabilization. In the meantime, though, feel free to kick the tires; we'll keep watching this thread and our issues to gather any early input.

Regards,
vjpai

Pau Freixes

Aug 30, 2018, 5:44:25 PM
to vp...@google.com, grp...@googlegroups.com
Thanks for moving this discussion here, Vijay; it's definitely the better place.

A couple of comments about your notes:

> We are moving toward providing an iomgr in OSS that will allow callbacks to be triggered in a separate threadpool independently of application control.

I'm not really aware of the concurrency model used by the C++
implementation; any link worth taking a look at would be welcome. But
regarding your comment about the substantial change needed to allow
executing the callbacks in a thread pool: remember that wrappers such
as the Node.js one, or a future asyncio implementation, won't need
that, since they are based on loop reactors where the callbacks are
executed in the same thread. So I'm wondering whether, even as an
experimental implementation, it is in a certain way already usable in
scenarios where concurrency is achieved in another way.
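The single-threaded reactor point above can be sketched as follows: even if a library fires its completion callback on a foreign thread, an asyncio wrapper can marshal it onto the loop thread with call_soon_threadsafe, so user callbacks always run where the event loop runs. Names are illustrative only.

```python
import asyncio
import threading


async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()

    def library_callback(result):
        # Called from a non-loop thread; hop back onto the loop thread,
        # which is the only thread allowed to touch loop state directly.
        loop.call_soon_threadsafe(fut.set_result, result)

    # Simulate a library-owned thread completing an operation.
    threading.Thread(target=library_callback, args=("done",)).start()

    # The future's result is consumed on the loop thread itself.
    return await fut


print(asyncio.run(main()))  # done
```

With this bridge in place, the wrapper needs no thread pool of its own; one marshaling hop per completion replaces a dedicated polling thread.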


> It may become possible to consider using this in other language bindings as well and perhaps it could be suitable for interfacing with another async library; that is an interest of ours but not a priority so we'd certainly welcome any input or contributions.

Regarding the asyncio implementation, there is a lot of work to be
done and most probably many unknowns that have to be discovered and
circumvented. So I'm not worried about the experimental status of the
callback completion queue; if the asyncio binding eventually becomes
something real, the time needed to get there will most probably be
enough for this experimental change to become stable.


Thanks!