[grpc c++] sending requests immediately - avoid batching

Mass

Mar 13, 2018, 5:33:12 PM
to grpc.io

I have been doing some latency measurements with different modes of gRPC. The application I have is time critical, and I need to ensure that requests complete with sub-millisecond delay. In my test runs, I noticed that the average latency over a number of calls is around 300us, but a number of requests take around 10 ms to complete, which is not acceptable.
I have been trying to find a way to optimize for latency, and it seems to me the source of this jitter is the batching that gRPC does. I found that in streaming mode you can pass WriteOptions().set_write_through() on the write call in order to send the packet immediately, but it didn't really help; I can still see that packets are sent in batches.
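
Roughly, this is how I am passing the option with the sync streaming API (the Echo service, stub method, and message types below are just placeholders standing in for my actual proto):

    #include <grpcpp/grpcpp.h>
    // #include "echo.grpc.pb.h"  // placeholder generated header

    void SendWithWriteThrough(Echo::Stub* stub) {
      grpc::ClientContext ctx;
      EchoResponse response;
      std::unique_ptr<grpc::ClientWriter<EchoRequest>> writer(
          stub->RecordEchoes(&ctx, &response));

      EchoRequest request;
      request.set_payload("ping");

      // Pass set_write_through() on this write; the intent is to have the
      // message go out immediately instead of being batched.
      writer->Write(request, grpc::WriteOptions().set_write_through());

      writer->WritesDone();
      grpc::Status status = writer->Finish();
    }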

Is set_write_through() the right option to use, or is there a better way to achieve this?
 

Mass

May 8, 2018, 3:53:08 AM
to grpc.io
Any update on this? Does anybody know how to optimize for latency?

Sree Kuchibhotla

May 21, 2018, 5:22:19 PM
to grpc.io
The set_write_through() option is a flag that tells gRPC when to acknowledge write completion, i.e. whether to acknowledge it after the bytes are sent on the wire, or to acknowledge it once the write has cleared flow-control logic and the gRPC transport is sure the bytes are going onto the wire.
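
To make that concrete: with the async C++ API, the flag only affects the point at which the Write tag is delivered on the completion queue. A rough sketch, using placeholder Echo names:

    grpc::CompletionQueue cq;
    grpc::ClientContext ctx;
    EchoResponse response;
    void* tag;
    bool ok;

    std::unique_ptr<grpc::ClientAsyncWriter<EchoRequest>> writer(
        stub->AsyncRecordEchoes(&ctx, &response, &cq, (void*)1));
    cq.Next(&tag, &ok);  // call has started (tag 1)

    EchoRequest request;
    writer->Write(request, grpc::WriteOptions().set_write_through(), (void*)2);
    cq.Next(&tag, &ok);  // tag 2 is delivered when gRPC acknowledges the write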

It doesn't make any explicit guarantees about batching versus non-batching.

However, I must add that this part of the gRPC code has changed in the past and I haven't been keeping up with the changes. I will take a closer look at the code and send an update.

I am also very curious about the 300us-to-10ms variance in your latencies. Can you tell us a bit more about your benchmark? What is the setup? How many requests are you sending? I would like to recreate it here if possible.

-Sree