Why is the server streaming responses in batches instead of one by one?

Yan Yan

Aug 3, 2016, 11:18:00 PM
to grpc.io
commit 7f6c7798b08afcbf77e675e0bc4e8ce5b6ba580d
Date:   Thu Jul 14 18:10:19 2016 -0700
route_guide.proto:
    rpc ListFeatures(Rectangle) returns (stream Feature) {}

I am running the above version of gRPC Python. I have tried throttling the server to send 100, 500, or 1000 responses per second, but the client receives the responses in roughly 4 KB batches rather than one by one. How can I make the server send them ASAP? Is this related to TCP_NODELAY? Does gRPC C++ offer a way to send them ASAP? Is there a way to flush earlier, or to adjust the batch or buffer size? Thanks.
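
For context, the throttled handler looks roughly like this (a simplified sketch; the server wiring is elided since it differs between the beta and GA Python APIs, and RESPONSES_PER_SECOND and self.features are my own names):

    import time

    RESPONSES_PER_SECOND = 100

    class RouteGuideServicer(object):
        def __init__(self, features):
            self.features = features  # list of Feature messages

        def ListFeatures(self, request, context):
            for feature in self.features:
                yield feature                           # one streaming response
                time.sleep(1.0 / RESPONSES_PER_SECOND)  # throttle the send rate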

Yan Yan

Aug 3, 2016, 11:26:59 PM
to grpc.io
https://github.com/grpc/grpc-go/issues/524#issuecomment-180536815
Go's http2 server does this aggregation automatically: it has a write scheduler that flushes only when a packet is full or there is nothing else to send.
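
The batching is easy to see by timestamping each response on the client, roughly like this (sketch; stub and rectangle are assumed to be set up as in the route_guide example, with stub creation elided since it differs between the beta and GA Python APIs):

    import time

    def measure(stub, rectangle):
        last = time.time()
        for feature in stub.ListFeatures(rectangle):
            now = time.time()
            # Batched responses show near-zero gaps followed by one long gap.
            print('gap: %.3f s' % (now - last))
            last = now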

Yan Yan

Aug 8, 2016, 9:51:05 PM
to grpc.io
I am now able to stream 1 response per second by NOT reading data from stdin. Python's stdin iterator probably has a 4096-byte read-ahead buffer.
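
If anyone else hits this, a sketch of the workaround: in Python 2, "for line in sys.stdin" reads ahead into an internal buffer before yielding any line, which stalled my send loop. Calling readline() instead returns each line as soon as it is available (process() is a stand-in for whatever you do per line):

    import sys

    # readline() avoids the file iterator's read-ahead buffering.
    for line in iter(sys.stdin.readline, ''):
        process(line)  # hypothetical per-line handler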

