Default Netty buffer size in Java


Rama Rao

Aug 13, 2019, 9:27:45 PM
to grpc.io
For unary gRPC requests, what is the default Netty buffer size? Is the request pushed out in buffers of 1 MB, as mentioned here?

Thanks,
Rama

Eric Anderson

Aug 15, 2019, 2:29:08 PM
to Rama Rao, grpc.io
Can you explain why you are interested or what you are doing? From the perspective of users, unary gRPC requests are simultaneously unbuffered and fully buffered, depending on what the user is truly asking.

NettyWritableBufferAllocator isn't related to network buffers. It produces buffers for serializing messages, but serialization is done all at once. The only thing the 1 MB value does in NettyWritableBufferAllocator is limit the maximum block size in memory, which relates to allocation performance and memory fragmentation.
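To make that concrete, here is a toy sketch of the idea, not the real NettyWritableBufferAllocator: an allocator that clamps each block it hands out to a 1 MB ceiling, so a larger message simply gets serialized into several blocks. The class name and clamping logic are made up for illustration.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;

// Illustrative only; the real allocator is more involved than this.
final class CappedBufferAllocator {
  private static final int MAX_BLOCK_SIZE = 1024 * 1024; // the 1 MB ceiling discussed above

  private final ByteBufAllocator delegate = PooledByteBufAllocator.DEFAULT;

  ByteBuf allocate(int capacityHint) {
    // The cap bounds each block, not the message: a message bigger than the
    // cap is written across multiple blocks in sequence.
    return delegate.buffer(Math.min(capacityHint, MAX_BLOCK_SIZE));
  }
}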


Rama Rao

Aug 15, 2019, 2:55:16 PM
to Eric Anderson, grpc.io
We are proxying requests of about 2 MB through a proxy and seeing that the proxy receives partial requests, so I'm interested in understanding the behaviour a bit more. Can you explain more about "simultaneously unbuffered and fully buffered, depending on what the user is truly asking"? Are you saying there is a way to control this behaviour while making the request? If yes, can you point me to it?

Thanks,
Rama

Eric Anderson

Aug 15, 2019, 5:07:52 PM
to Rama Rao, grpc.io
On Thu, Aug 15, 2019 at 11:55 AM Rama Rao <ramarao...@gmail.com> wrote:
We are proxying requests of about 2 MB through a proxy and seeing that the proxy receives partial requests, so I'm interested in understanding the behaviour a bit more.

Hmmm... Depending on what you are seeing, there can be a lot of explanations.

1. HTTP/2 flow control prevents gRPC from sending. Note there is stream-level and connection-level flow control. By default grpc-java uses 1 MB as the window here, although your proxy is what is providing the window to the client. I only mention the 1 MB because the proxy will need to send data to a gRPC server and will be limited to 1 MB initially. Other implementations use 64 KB and auto-size the window.
  • On the server-side there is a delay before we let more than 1 MB be sent. We wake up the application and route the request, run interceptors, and eventually the application stub will request the request message. At that point the server allows more to be sent. If you suspect this may be related to what you see, you can change the default window (see the sketch after this list) and check whether the behavior changes. Be aware that the proxy will have its own buffering/flow control.
2. More than one request is being sent, so they are interleaved. We currently interleave in 1 KB chunks.
3. The client is slow doing protobuf encoding. We stream out data while doing protobuf encoding, so (with some other conditions I won't get into) it is possible to see the first part of a message before the end of the message has been serialized on the client side. This is purely CPU-limited, but could be noticed during a long GC, for example.
4. Laundry list of other things, like dropped packets.
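If you want to experiment with the window from item 1, the grpc-java Netty transport exposes it as a builder option. A minimal sketch, assuming grpc-netty is on the classpath; the port, the 4 MB value, and the service name are placeholders:

import io.grpc.Server;
import io.grpc.netty.NettyServerBuilder;

public final class FlowControlWindowExample {
  public static void main(String[] args) throws Exception {
    Server server = NettyServerBuilder.forPort(50051)   // placeholder port
        .flowControlWindow(4 * 1024 * 1024)             // bytes; the default is 1 MB
        // .addService(new MyServiceImpl())             // hypothetical service implementation
        .build()
        .start();
    server.awaitTermination();
  }
}

NettyChannelBuilder has the same flowControlWindow option on the client side.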

Can you explain more about "simultaneously unbuffered and fully buffered, depending on what the user is truly asking"? Are you saying there is a way to control this behaviour while making the request? If yes, can you point me to it?

I wasn't referring to something controllable. Basically the client provides an entire request, all at once. gRPC is not "buffering" that message waiting for it to be sent; it sends it now. But it can't necessarily send it right now: it takes time to be sent, and the message must sit in a buffer while we wait on TCP, TCP has its own buffers, there are buffers in the network, and so on. We also have a "network thread" for doing the I/O, and there is a buffer to enqueue work to that other thread. So lots and lots of buffers, but it all depends on your perspective and your definition of "buffer."
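For a unary call, "provides an entire request, all at once" just means the stub takes the whole message in a single method call; everything after that (framing, flow control, the network thread's write queue) is internal. A sketch using the standard health-check stub from grpc-services against a placeholder address:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.health.v1.HealthCheckRequest;
import io.grpc.health.v1.HealthCheckResponse;
import io.grpc.health.v1.HealthGrpc;

public final class UnaryCallSketch {
  public static void main(String[] args) {
    // Placeholder target; any unary stub behaves the same way.
    ManagedChannel channel =
        ManagedChannelBuilder.forAddress("localhost", 50051).usePlaintext().build();

    // The fully built request is handed to gRPC in one call; how long it sits
    // in internal buffers before reaching the wire is not visible here.
    HealthCheckResponse response = HealthGrpc.newBlockingStub(channel)
        .check(HealthCheckRequest.getDefaultInstance());

    System.out.println(response.getStatus());
    channel.shutdownNow();
  }
}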

Rama Rao

Aug 15, 2019, 8:54:53 PM
to Eric Anderson, grpc.io
Thanks for the detailed explanation. This case is a reverse proxy, so the gRPC client is initiating the connection, so it looks like setting the window might change the behaviour? Thanks again for the explanation.

Eric Anderson

Aug 15, 2019, 8:56:37 PM
to Rama Rao, grpc.io
The window is a receive window. So changing the client's window won't change any sending behavior. That's why I was mentioning the server's window, since the server will be receiving. There could also be window size options within the proxy itself.
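In other words, the option below on the client builder only changes the window the client advertises, which governs data flowing toward the client (responses); to let the server accept more before the application asks for the message, set the window on the server builder instead, as in the earlier sketch. Host, port, and size here are placeholders:

import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;

public final class ClientWindowSketch {
  public static void main(String[] args) {
    // Governs what the client can receive, not how fast it may send.
    ManagedChannel channel = NettyChannelBuilder.forAddress("proxy.example.com", 443)
        .flowControlWindow(4 * 1024 * 1024)   // placeholder value
        .build();
    channel.shutdownNow();
  }
}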

Rama Rao

Aug 15, 2019, 8:57:44 PM
to Eric Anderson, grpc.io
Got it. Thank you for the clarification