gRPC C++ async API doc and sample code


Arpit Baldeva

Mar 2, 2017, 2:34:04 PM
to grpc.io

Hi,

 

Recently, I have been looking into the sync and async APIs of gRPC (C++) and how to choose between them for my use case. While gRPC has excellent documentation and examples overall, I found this area a bit lacking, so I had to go through a lot of past forum posts (and post some myself) to gain insight on the subject. I am attaching a doc here that details the differences between the two models. People might find it useful when making their own decision.

 

In addition, I implemented a set of classes that makes working with the async API on the server a bit easier from an application-code point of view. The example/test code I found would often ignore error handling or do streaming calls in a way that hardly resembles how you would do things in a real application. I recognize the value of the existing example code, since it is simple to start with, but I feel a more complex example is also warranted. Using those utility classes, I went ahead and implemented the routeguide server example in a fully async fashion. I am attaching that code as well in the hope that other people can benefit from it. Maybe it could become part of the example code in the gRPC codebase?


The code is commented as much as I deemed necessary. I have also stress-tested it with multiple threads on the client side and with abrupt client process exits. I am attaching the client stress-test code as well, but that part isn't substantially different from the existing client example code (apart from some added threading).

 

Thanks. 

gRPC Threading Model.pdf
servermain.cpp
clientmain.cpp

Craig Tiller

Mar 21, 2017, 6:52:53 PM
to grpc.io
The writeup looks great: it'd be good to get this as a .md file in our doc/c++ tree (would love to see a pull request).

I know there were some folks looking to update the example code also... I'm going to have them jump on this thread for where to go with the code.

Arpit Baldeva

Mar 22, 2017, 6:07:15 PM
to grpc.io
Hi,

Thanks for getting back on this. Sure, I'll submit a PR for the doc (and the code if required). However, I am out for the next 3 weeks on vacation (with limited internet connectivity), so I think the best course of action is to submit the PR after that (just in case it takes a little back and forth). I'll touch base after April 17th.

Thanks. 

Arthur Wang

Sep 8, 2018, 7:57:12 AM
to grpc.io

Hi Arpit :

I can't view or download your example code. Is that because it was posted so long ago? Where else can I view it for now?

Thanks a lot.

Arpit Baldeva

Sep 16, 2018, 1:37:22 PM
to pplo...@gmail.com, grpc.io
Reattached -

clientmain.cpp
servermain.cpp

Arthur Wang

Sep 17, 2018, 1:48:28 AM
to abal...@gmail.com, grp...@googlegroups.com
Got it. Thanks.

Debashish Deka

Oct 18, 2019, 2:52:05 AM
to grpc.io
Thank you for the explanation.
I understood the logic of one thread waiting on the queue and submitting tasks to some container, while another thread processes those tasks one by one.

I want a design where one thread waits on the queue and pushes incoming requests to a thread pool. I have already written the code for the thread pool. Clients can make various types of requests (say, types A, B, and C). For each request type, the server has one handler function. For simplicity, let's assume these handlers are nothing but CPU-intensive for loops.

For example:

void handlerA() {
    for_loop(1000000);  // CPU-intensive stand-in for real work
    // respond to the client with some dummy response
}

I tried to understand the application logic of your server code, but it contains too many handlers and confused me. Can you help me with the above-mentioned design?

Thanks ! 

Debashish Deka

Oct 18, 2019, 2:55:43 AM
to grpc.io
For example, the main thread should only pull completed events from the completion queue and enqueue them onto the thread pool:

// main thread:
cq.Next(&tag, &ok);
pool.enqueue(
    // what should be enqueued here, so that when a worker thread of my
    // thread pool pops one request it can decide which handler to execute?
);

Debashish Deka

Oct 18, 2019, 2:58:09 AM
to grpc.io
Assume the server has only one service, and we want to process requests from multiple clients in parallel on a multi-core system.



Debashish Deka

Oct 23, 2019, 7:42:41 AM
to grpc.io

Saroj Mahapatra

Oct 23, 2019, 11:00:16 AM
to grpc.io
The code seems to be a reasonable way to avoid writing a lot of boilerplate.
At first glance, the 'ExperimentalCallbackService' in helloworld.grpc.pb.h from the async example looks very similar.
Experimental caveat aside, has anyone compared it against Arpit's contributed example?

Thank you.

Saroj Mahapatra

Oct 23, 2019, 11:09:54 AM
to grpc.io
Debashish, you might want to search for 'g-research gRPC async example' for the problems they encountered and their solutions.

Jeannot Langlois

Jul 31, 2024, 1:10:01 PM
to grpc.io
Hello Arpit:

Both servermain.cpp and clientmain.cpp files above reference a header named "helper.h" -- can you attach it as well?

Thanks! :)

Jeannot Langlois

Jul 31, 2024, 5:59:10 PM
to grpc.io
Never mind: I've figured out that helper.h is actually provided in the original gRPC example.