async model rpc sequence


Arpit Baldeva

Feb 15, 2017, 4:56:51 PM2/15/17
to grpc.io
Hi,

This post is about the rpc sequence issued by a single client.

For the sync model, I understand that the rpc call order guarantee can't be maintained due to a pool of threads executing concurrently. The sync model is not suitable for my use case for other reasons, so I was looking at the async model, which allows for better threading control. On the surface, it seemed like it would allow my application to see the same rpc order as issued by a client if I processed the completion queue on a single thread.

However, I was looking at the C core implementation and this code caught my eye (note the use of a stack structure):

for (size_t i = 0; i < server->cq_count; i++) {
  size_t cq_idx = (chand->cq_idx + i) % server->cq_count;
  int request_id = gpr_stack_lockfree_pop(rm->requests_per_cq[cq_idx]);
  if (request_id == -1) {
    continue;
  } else {
    gpr_mu_lock(&calld->mu_state);
    calld->state = ACTIVATED;
    gpr_mu_unlock(&calld->mu_state);
    publish_call(exec_ctx, server, calld, cq_idx,
                 &server->requested_calls_per_cq[cq_idx][request_id]);
    return; /* early out */
  }
}

So if two rpcs come from the same client in the same "network chunk", would the order of rpcs that the application sees be reversed, or have I misunderstood the code here?

Thanks.

Craig Tiller

Feb 15, 2017, 8:48:01 PM2/15/17
to Arpit Baldeva, grpc.io

You've misunderstood :)

The server application needs to request incoming calls, and we need to match them to those requests as they come in. We arbitrarily reverse the order of matching (because it gives a faster implementation), but preserve the order of the actual requests presented to the application.



Arpit Baldeva

Feb 16, 2017, 12:33:06 PM2/16/17
to grpc.io, abal...@gmail.com
Thanks for the clarification!  

rappor...@gmail.com

Apr 24, 2017, 4:10:20 AM4/24/17
to grpc.io, abal...@gmail.com
Hi,

Does the code snippet indicate that the grpc server processes client requests in reversed order (using a stack)?

"... but preserve the order of actual requests presented to the application."
Does it mean that with the grpc async model, the client gets replies in the same order the requests were sent? I thought that was only possible with the streaming model?

Thanks a lot.

Sree Kuchibhotla

Jul 11, 2017, 7:34:55 PM7/11/17
to grpc.io, abal...@gmail.com, rappor...@gmail.com

I realized there was no response to this for a while. Sorry about that. 

>> Does it mean that with the grpc async model, the client gets replies in the same order the requests were sent? I thought that was only possible with the streaming model?
No, there are no ordering guarantees for a client receiving responses to unary requests. You are right: the ordering guarantee applies only to messages within a stream.

To fully explain the context here, I think I should start by re-answering the original question by Arpit:

>> So if two rpcs come from the same client in the same "network chunk", would the order of rpcs that the application sees be reversed, or have I misunderstood the code here?
The code snippet pasted by Arpit deals with "request matching", i.e. matching an incoming request with an available request id.
More background: as you are probably aware, the grpc async model requires the server to inform the grpc core library that it is 'expecting' to receive a call. For example, see the async code example here: https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_async_server.cc. The server expects to receive calls to the "SayHello()" function, and hence at line 91 (i.e. https://github.com/grpc/grpc/blob/master/examples/cpp/helloworld/greeter_async_server.cc#L91) it calls service_->RequestSayHello() to set that expectation. In grpc jargon, we say the server "requests" the call. Each such request creates a request id (on the server, at the grpc layer). Once the server receives a call to SayHello() from a client, the request is satisfied and matched with an available request id.
(If the server didn't make the ->RequestSayHello() call, the client's SayHello() would reach the server but never actually be delivered to the async server code. This also explains why, in the example, the first thing the async server does after receiving a SayHello() call is to make another call to service_->RequestSayHello(); otherwise, it would not receive the next one!)
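The "request before receive" behavior described above can be sketched as a toy model. This is not grpc code or its API; ToyServer and all of its members are invented for illustration of the idea that a call is only delivered while the application has an outstanding request, and re-requesting after each delivery keeps calls flowing.

```cpp
#include <queue>
#include <string>
#include <vector>

// Toy model (hypothetical, not the grpc API): calls arriving from the
// network are only handed to the application while there is an
// outstanding "request" (the analogue of service_->RequestSayHello()).
struct ToyServer {
  int outstanding_requests = 0;        // how many calls the app asked for
  std::queue<std::string> arrived;     // calls received from the network
  std::vector<std::string> delivered;  // calls the application actually saw

  // Analogue of service_->RequestSayHello(): announce we expect a call.
  void request_call() {
    ++outstanding_requests;
    pump();
  }

  // A call arrives off the wire; deliver it only if one was requested.
  void on_network_call(const std::string& name) {
    arrived.push(name);
    pump();
  }

 private:
  void pump() {
    while (outstanding_requests > 0 && !arrived.empty()) {
      delivered.push_back(arrived.front());
      arrived.pop();
      --outstanding_requests;
    }
  }
};
```

In this sketch, a second incoming call sits undelivered until the application "re-requests", which mirrors why the async example re-issues RequestSayHello() immediately after receiving each call.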

Anyway, the code Arpit posted simply does this request matching, i.e. matches an incoming call with an available request id. The fact that we use a stack for these request ids is just an implementation detail and has no bearing on the order in which requests are delivered to the async server. This is what Craig was trying to clarify.
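As a minimal sketch of why a LIFO stack of request ids cannot reorder call delivery: in this toy model (all names invented; this is not the real grpc internals), free request ids are popped LIFO, but incoming calls are matched in arrival (FIFO) order, so which id a call happens to get has no effect on the order the application sees.

```cpp
#include <queue>
#include <stack>
#include <string>
#include <vector>

// Toy model (hypothetical names, not grpc internals): match each incoming
// call, in arrival order, with a request id popped off a LIFO stack.
// The popped id is just a slot number; delivery order stays FIFO.
std::vector<std::string> match_calls(std::queue<std::string> incoming,
                                     int num_requested) {
  std::stack<int> free_ids;  // ids the application "requested"
  for (int id = 0; id < num_requested; ++id) free_ids.push(id);

  std::vector<std::string> delivered;  // order presented to the application
  while (!incoming.empty() && !free_ids.empty()) {
    free_ids.pop();                         // LIFO pop, like the lockfree stack
    delivered.push_back(incoming.front());  // FIFO: arrival order preserved
    incoming.pop();
  }
  return delivered;
}
```

Even though the ids come off the stack in reverse order of creation, the calls themselves are still delivered in the order they arrived.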

Hope this helps.
-Sree