gRPC Java Manual Flow Control Understanding

Nathaniel Jones

Feb 11, 2023, 5:49:59 PM
to grpc.io
Hello,

I've been experimenting with gRPC Java flow control patterns and have gone through the manual flow control client and server examples, as well as some discussion threads about flow control (e.g. this thread and this thread). However, I still have some questions to check my understanding of gRPC Java's automatic and manual flow control.

First, I wanted to clarify some terminology and confirm some flow control facts. In a bidirectional stream there are, in a sense, four things to consider: the client has an inbound and an outbound direction, and the server has an inbound and an outbound direction. The server's outbound ties to the client's inbound and the client's outbound ties to the server's inbound, but when looking at example code I think it's helpful to break it down this way. Those four directions also correspond to four buffers - my understanding is that the outbound buffers are a fixed size, while the inbound buffers correspond to the HTTP/2 flow control window, which starts at 64 KiB and by default uses the bandwidth-delay product algorithm to grow or shrink. In my code, I've used NettyChannelBuilder.flowControlWindow and NettyServerBuilder.flowControlWindow to force the inbound buffers to stay a fixed size so that it's easier to observe basic flow control behavior. Is my understanding of these basics correct?
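To make that concrete, here is roughly how I'm pinning the windows (a sketch, not my exact code - the address, port, and service are placeholders):

import io.grpc.BindableService;
import io.grpc.ManagedChannel;
import io.grpc.Server;
import io.grpc.netty.NettyChannelBuilder;
import io.grpc.netty.NettyServerBuilder;

public class FixedWindowSetup {
  // 64 KiB, the HTTP/2 default initial window, so WINDOW_UPDATE behavior is easy to watch.
  static final int WINDOW_BYTES = 64 * 1024;

  static ManagedChannel buildChannel() {
    return NettyChannelBuilder.forAddress("localhost", 50051)
        .flowControlWindow(WINDOW_BYTES)  // pin the client's inbound window
        .usePlaintext()
        .build();
  }

  static Server buildServer(BindableService service) {
    return NettyServerBuilder.forPort(50051)
        .flowControlWindow(WINDOW_BYTES)  // pin the server's inbound window
        .addService(service)
        .build();
  }
}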

Next, I wanted to confirm my understanding of the most basic gRPC Java flow control mechanism - the ServerCallStreamObserver.isReady() and ClientCallStreamObserver.isReady() methods. These are only relevant to the outbound direction on either side, and they return true if the outbound buffer has any room in it. My main question for understanding: gRPC is communicating with the Netty/HTTP/2 layer in some fashion... so I'm imagining that this outbound buffer starts to get drained and sent over the network as HTTP/2 detects available window size (i.e. the eventual receiver across the network has sent some WINDOW_UPDATEs, so our local outbound buffer can send over the network). Then, in the gRPC application layer for our outbound stream, isReady() will return true because we have local outbound buffer room now that some draining has happened. Thus, isReady() becoming true is somewhat decoupled from bytes actually going over the network, thanks to the buffer. Is this a correct understanding?
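For reference, this is the kind of outbound pattern I have in mind (a generic sketch I wrote to test my understanding - the message type and the iterator of responses are placeholders):

import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import java.util.Iterator;

class ReadyAwareSender {
  // Sends every element of 'responses', but only while the outbound buffer has room.
  static <T> void sendWhenReady(StreamObserver<T> responseObserver, Iterator<T> responses) {
    ServerCallStreamObserver<T> serverObserver =
        (ServerCallStreamObserver<T>) responseObserver;

    serverObserver.setOnReadyHandler(() -> {
      // isReady() == true means "there is room in the local outbound buffer",
      // not "the last message has reached the peer"; the buffer drains toward
      // the wire as the peer's WINDOW_UPDATEs open up window, and only then
      // does this handler fire again.
      while (serverObserver.isReady() && responses.hasNext()) {
        serverObserver.onNext(responses.next());
      }
      if (!responses.hasNext()) {
        serverObserver.onCompleted();
      }
    });
  }
}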

Finally, I have some questions about the manual flow control example code.
1. It seems to me that both the client and server are each configuring manual flow control for both of their inbound / outbound directions. I.e. it's not the case that the code is only trying to flow control server -> client stream or client -> server stream. Is that correct?
2. This server code here describes how it's setting up flow control for the request stream, and that it's a little counterintuitive since this is the ServerCallStreamObserver. I interpret this to mean "turn on manual flow control for the server inbound direction, which means that data flowing from client to server is impacted." My understanding is that when this is done, the only way that the client -> server direction is going to make any progress is by the server code calling serverCallStreamObserver.request(1), which happens in a couple places. Is that the correct interpretation? (then the client-side version is the opposite)
3. I'm curious about what's really happening with the combination of serverCallStreamObserver's disableAutoRequest() and request(x). My understanding from testing it out: in auto flow control mode, gRPC invokes the inbound onNext(msg) whenever data arrives in the inbound buffer... but in manual mode, you're telling gRPC "don't take anything off the HTTP/2 buffer and invoke onNext until request(x) is called" - thus, the client across the network can use its initial window to fill up the server's local inbound buffer, but it will naturally stop sending more messages once it has run out of window... until the server code does call request(x), which frees up some inbound window. So there's no "magic" in manual flow control - it's really just gRPC waiting to call onNext until request()-ed, so that the window doesn't free up in the meantime. Therefore, a gRPC client in any language will work fine with this Java server using manual flow control. Is that what's happening in manual mode?
4. I understand that the onReady handler is an API on top of isReady() that lets the server code know when there's outbound buffer room (which beats busy-waiting on isReady()). But I'm a little confused by the handler in the manual server example: the if condition checks whether there is outbound buffer space... but if that condition passes, the server calls request(1), which has to do with consuming messages from the inbound buffer. Aren't the inbound and outbound directions independent? Why does that if statement correspond to request()-ing a message? Maybe there's some higher-level coordination going on, like saying "only if I'm able to send a message will I bother to consume one" - perhaps using manual flow control in both client and server helps with higher-level patterns of flow control? (I've sketched my reading of this pattern right after this list.)
5. Even though the example code shows both the client and server turning on manual flow control, it's reasonable to only turn it on for one side, correct? E.g. if the server is concerned with getting too many messages from a client, it can use disableAutoRequest() and request(x) independently - the client doesn't know/care whether this is happening?
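For question 4 in particular, here is how I'm reading the server-side pattern, written out as a generic sketch - the request/response types and the handle function are placeholders, this is my paraphrase rather than the example itself, and I've left out whatever extra bookkeeping the real example does around redundant onReady callbacks:

import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
import java.util.function.Function;

class ManualFlowControlledBidi {
  // Returns the request observer for a bidi call: inbound delivery is manual,
  // and a new inbound message is only request()-ed when there is outbound room.
  static <ReqT, RespT> StreamObserver<ReqT> start(
      StreamObserver<RespT> responseObserver, Function<ReqT, RespT> handle) {
    ServerCallStreamObserver<RespT> serverObserver =
        (ServerCallStreamObserver<RespT>) responseObserver;

    serverObserver.disableAutoRequest();       // inbound: no onNext until request() is called
    serverObserver.setOnReadyHandler(() -> {
      if (serverObserver.isReady()) {          // outbound buffer has room again...
        serverObserver.request(1);             // ...so allow one more inbound message
      }
    });

    return new StreamObserver<ReqT>() {
      @Override public void onNext(ReqT request) {
        serverObserver.onNext(handle.apply(request));  // produce one response per request
        if (serverObserver.isReady()) {
          serverObserver.request(1);           // keep pulling while we can still send
        }                                      // otherwise wait for the onReady handler
      }
      @Override public void onError(Throwable t) { serverObserver.onError(t); }
      @Override public void onCompleted() { serverObserver.onCompleted(); }
    };
  }
}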

Thanks so much for any clarifications and answers!