Per connection completion queue

jojy.v...@gmail.com

May 7, 2018, 4:00:50 PM
to grpc.io
Hi all
Is there a way to get a per-connection completion queue for a gRPC async server? What we want to achieve is a `pipeline` (like C++ Wangle) for each gRPC connection.

thanks in advance
Jo

jojy.v...@gmail.com

May 8, 2018, 12:32:25 PM
to grpc.io
Trying again:

We are trying to do the following:
- An async server that supports both streaming and unary RPCs
- For every connection, we should be able to get the events unique to that connection in a completion queue

The examples don't show how to achieve a per-connection completion queue.

thanks in advance,
Jo

Christopher Warrington - MSFT

May 9, 2018, 3:00:41 PM
to grpc.io
On Tuesday, May 8, 2018 at 9:32:25 AM UTC-7, jojy.v...@gmail.com wrote:

> - For every connection, we should be able to get the events unique to that connection in a completion queue

Pretend you had such an API. Can you share how you would make use of it and what higher-level problem it would let you solve?

There may be a different way to solve the same problem that doesn't need a per-connection completion queue...

--
Christopher Warrington
Microsoft Corp.

jojy.v...@gmail.com

May 9, 2018, 5:05:32 PM
to grpc.io
We could then do things like act as a proxy service. Clients trying to connect to an external server would be redirected to this service. The service would then make a connection to the actual server for each connected client and forward the RPC (after some kind of policy-driven processing, of course).

-Jo

Christopher Warrington - MSFT

May 10, 2018, 3:00:55 PM
to grpc.io
On Wednesday, May 9, 2018 at 2:05:32 PM UTC-7, jojy.v...@gmail.com wrote:

>>> For every connection, we should be able to get the events unique to that
>>> connection in a completion queue

>> Pretend you had such an API. Can you share how you would make use of it
>> and what higher-level problem it would let you solve?
>>
>> There may be a different way to solve the same problem that doesn't need
>> a per-connection completion queue...

> We could then do things like act as a proxy service. Clients trying to
> connect to an external server would be redirected to this service. The
> service would then make a connection to the actual server for each
> connected client and forward the RPC (after some kind of policy-driven
> processing, of course).

How do you plan to poll all these completion queues? There's no pollset-like
API for completion queues. I can see two ways to poll a bunch of completion
queues, neither of which looks scalable:

1. Have a dedicated poller thread per completion queue.
    * This will result in lots of threads that are mostly idle. Stack space
      is wasted, there will be scheduling overhead, &c.
2. Have a small number of poller threads and use grpc_completion_queue_next
   with a small but non-zero deadline, cycling to the next completion queue
   whenever GRPC_QUEUE_TIMEOUT is returned (a rough sketch of this approach
   follows the list).
     * Completion queues with work that are "behind" completion queues with
       no work will get starved.
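
For illustration only, here is a minimal sketch of option 2 using the C++
wrapper (`grpc::CompletionQueue::AsyncNext` in place of the raw
`grpc_completion_queue_next`); the queue vector and the 1 ms deadline are
assumptions made up for the example:

```
#include <chrono>
#include <vector>

#include <grpcpp/grpcpp.h>

// Round-robin over several completion queues with a short AsyncNext
// deadline. A queue with pending work that sits "behind" idle queues has
// to wait out their timeouts first -- the starvation problem noted above.
void PollRoundRobin(const std::vector<grpc::CompletionQueue*>& cqs) {
  for (;;) {
    for (grpc::CompletionQueue* cq : cqs) {
      void* tag = nullptr;
      bool ok = false;
      auto deadline =
          std::chrono::system_clock::now() + std::chrono::milliseconds(1);
      switch (cq->AsyncNext(&tag, &ok, deadline)) {
        case grpc::CompletionQueue::GOT_EVENT:
          // HandleEvent(tag, ok);  // hypothetical per-connection dispatch
          break;
        case grpc::CompletionQueue::TIMEOUT:
          break;  // nothing ready on this queue; move on to the next one
        case grpc::CompletionQueue::SHUTDOWN:
          return;
      }
    }
  }
}
```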

Instead, I think you want to explore an architecture where you multiplex all
the client connections over a handful of completion queues. In the data
structures you use to track the outstanding operations (typically what
you pass the address of as the void* tag value), one of the values could be
a client identifier that you could use for whatever client-specific logic
you need in your processing.
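
A tiny sketch of the kind of tag structure I mean (PendingOp, client_id, and
the policy hook are all hypothetical names, not gRPC API; gRPC only ever sees
the void*):

```
#include <string>

// Per-operation bookkeeping. Its address is what you hand to gRPC as the
// void* tag, and what later comes back out of the shared completion queue.
struct PendingOp {
  enum class Kind { kRead, kWrite, kFinish };
  Kind kind;
  std::string client_id;  // your business-logic identifier for the client
  // ... message buffers, the upstream channel/stub for the proxied call ...
};

void HandleCompletion(void* tag, bool ok) {
  auto* op = static_cast<PendingOp*>(tag);
  if (!ok) {
    delete op;
    return;
  }
  // ApplyPolicy(op->client_id);  // hypothetical client-specific processing
  // ... continue or forward the RPC, deleting op once it is finished ...
}
```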

Hope this helps.

jojy.v...@gmail.com

May 14, 2018, 11:21:56 AM
to grpc.io
Thanks for the response. I wish there were a 'polling' interface for gRPC.

jojy.v...@gmail.com

May 14, 2018, 11:51:29 AM
to grpc.io
Had a question: how can we multiplex connections over a handful of CQs? Doesn't each individual client (represented by its unique *tag*) have to poll the CQs in its own thread? That would mean we need N threads to call 'next' on the CQ for N *tags*, right?

Christopher Warrington - MSFT

May 15, 2018, 7:26:07 PM
to grpc.io
On Monday, May 14, 2018 at 8:51:29 AM UTC-7, jojy.v...@gmail.com wrote:
> Had a question: how can we multiplex connections over a handful of CQs?
> Doesn't each individual client (represented by its unique *tag*) have to
> poll the CQs in its own thread? That would mean we need N threads to call
> 'next' on the CQ for N *tags*, right?

Individual clients are not represented by tags. The tag void* value is
opaque to gRPC: your application gives it to gRPC when starting some
asynchronous operation. When gRPC has completed the operation, the tag value
will end up coming out of a completion queue.

In an application implementing an async gRPC server, you will need to call
grpc::ServerBuilder::AddCompletionQueue at least once for each server that
you have. (Remember that a server can host multiple services.) You will
typically have a dedicated poller thread for each of these CQs, but not one
for each client that connects to the server.
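
A rough setup sketch (the port, the queue count, and the HandleEvent dispatch
are illustrative assumptions; the service would be the generated AsyncService
for your proto):

```
#include <memory>
#include <thread>
#include <vector>

#include <grpcpp/grpcpp.h>

void RunServer(grpc::Service* async_service) {
  grpc::ServerBuilder builder;
  builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
  builder.RegisterService(async_service);

  // A handful of CQs for the whole server -- not one per client.
  constexpr int kNumCqs = 4;
  std::vector<std::unique_ptr<grpc::ServerCompletionQueue>> cqs;
  for (int i = 0; i < kNumCqs; ++i) {
    cqs.push_back(builder.AddCompletionQueue());
  }

  std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
  // (You would also seed each CQ with initial Request* calls here, as the
  // example linked below does for its single CQ.)

  // One dedicated poller thread per CQ.
  std::vector<std::thread> pollers;
  for (auto& cq : cqs) {
    pollers.emplace_back([cq = cq.get()] {
      void* tag = nullptr;
      bool ok = false;
      while (cq->Next(&tag, &ok)) {
        // HandleEvent(tag, ok);  // hypothetical: dispatch to the tag's owner
      }
    });
  }
  for (auto& t : pollers) {
    t.join();
  }
}
```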

The tag value is *often* a pointer to a per-request data structure with
application-specific state.

Take a look at the C++ server async example [1], in particular the CallData
class. This implementation of a service uses the address of its CallData
instance as its tag value over multiple asynchronous operations (receiving a
request, sending a response, shutdown).

CallData is implemented as a little state machine for the life-cycle of the
server handling a request. For tracking specific clients, you could add a
data member to CallData that is your business logic's representation of the
client connection (e.g., something derived from
ServerContext::auth_context() that is accessible once the request gets to the
PROCESS stage). Then, in the subsequent handling of the request you could
use this data member in whatever logic you need to be client-specific.

In this scheme the CQ polling threads have zero knowledge of which client is
associated with the request. All the polling threads do is retrieve the next
tag from the completion queue, cast it to CallData (this assumes that you
only ever enqueue CallData instances), and then tell that CallData instance
to do whatever is next based on its knowledge of the current state of the
request. CallData knows about the client identity, not the CQ.
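
In code the dispatch is just something like the sketch below. CallData stands
in for the class from the example; the client_id_ member is the kind of
addition I described and is *not* in the example itself:

```
#include <string>

#include <grpcpp/grpcpp.h>

class CallData {
 public:
  void Proceed();  // the CREATE/PROCESS/FINISH state machine from the example

 private:
  grpc::ServerContext ctx_;
  // Assumed addition: populated from ctx_.auth_context() (or ctx_.peer())
  // once the request reaches PROCESS, then used wherever the handling needs
  // to be client-specific.
  std::string client_id_;
  // ... the remaining members from greeter_async_server.cc ...
};

// The per-CQ poller thread: it never looks at client identity, it only
// hands the tag back to its state machine.
void DrainQueue(grpc::ServerCompletionQueue* cq) {
  void* tag = nullptr;
  bool ok = false;
  while (cq->Next(&tag, &ok)) {
    // Safe only because the application enqueues nothing but CallData*.
    static_cast<CallData*>(tag)->Proceed();
  }
}
```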

Hope this helps.

[1]: https://github.com/grpc/grpc/blob/5fc081acd101d345786ebb072a434f6efacfe0a1/examples/cpp/helloworld/greeter_async_server.cc#L70

jojy.v...@gmail.com

May 17, 2018, 5:31:37 PM
to grpc.io
Thanks for the detailed explanation.

If we have a polling thread that polls a CQ (calls next), would that mean the polling thread is also doing the I/O? I ask because if a client connection "writes" something, that write will be picked up by the cq->next call of the polling thread (which internally looks for any work that needs to be done). Right?

Christopher Warrington - MSFT

May 18, 2018, 5:44:59 PM
to grpc.io
On Thursday, May 17, 2018 at 2:31:37 PM UTC-7, jojy.v...@gmail.com wrote:

> If we have a polling thread that polls a CQ (calls next), would that mean
> the polling thread is also doing the I/O? I ask because if a client
> connection "writes" something, that write will be picked up by the cq->next
> call of the polling thread (which internally looks for any work that needs
> to be done). Right?

Yes, when the polling threads have called `grpc::CompletionQueue::Next` or
`grpc::CompletionQueue::AsyncNext`, they are "borrowed" by gRPC to perform
I/O and computation. Additionally, the borrowed threads can be used to
perform work associated with all the completion queues, not just the one
they're polling. The tag will still only come out of the original queue,
even if the work was done by a thread polling a different completion queue.

You can sort of observe the borrowing behavior if you only ever poll using
`grpc::CompletionQueue::AsyncNext` and have a deadline of 0. gRPC will make
forward progress very slowly, if at all.
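
For example, a busy-poll loop along these lines (illustrative code, not from
the repo):

```
#include <chrono>

#include <grpcpp/grpcpp.h>

// With a deadline of "now", AsyncNext returns almost immediately every
// time, so the thread is barely ever lent to gRPC for its internal work
// and the server makes very little forward progress.
void BusyPoll(grpc::CompletionQueue* cq) {
  void* tag = nullptr;
  bool ok = false;
  for (;;) {
    switch (cq->AsyncNext(&tag, &ok, std::chrono::system_clock::now())) {
      case grpc::CompletionQueue::GOT_EVENT:
        // handle the tag ...
        break;
      case grpc::CompletionQueue::TIMEOUT:
        break;  // nothing ready; spin again (burns CPU)
      case grpc::CompletionQueue::SHUTDOWN:
        return;
    }
  }
}
```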