Java backpressure is ... strange

Ryan Michela

May 24, 2017, 12:22:16 AM
to grpc.io
I'm working with the grpc-java manual flow control API to implement backpressure. It's rather strange and I was hoping to learn more about why it is implemented the way it is.

I have a service where a client sends a stream of numbers to a server, which returns an Empty when the stream ends. On both the client and server, the StreamObserver that handles response messages configures request backpressure.

  • On line 65, the responseObserver disables inbound flow control on the request stream, and then requests one message.
  • On line 25, the responseObserver sets an onReadyHandler that sends another message on the request stream each time the server requests one.
This API seems backwards. Why is the response observer responsible for managing the request observer's flow control?
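For readers following along, this is roughly the shape of the pattern being described (a sketch only; NumberRequest, haveMoreNumbers(), nextNumber(), and the stub call are stand-ins, since the original sample isn't inline):

    import com.google.protobuf.Empty;
    import io.grpc.stub.ClientCallStreamObserver;
    import io.grpc.stub.ClientResponseObserver;

    // Client side: gRPC hands you the request stream's flow-control
    // handle through the *response* observer's beforeStart() callback.
    ClientResponseObserver<NumberRequest, Empty> responseObserver =
        new ClientResponseObserver<NumberRequest, Empty>() {
          @Override
          public void beforeStart(ClientCallStreamObserver<NumberRequest> requestStream) {
            requestStream.disableAutoInboundFlowControl();
            // Fires when the transport can accept more request messages.
            requestStream.setOnReadyHandler(() -> {
              while (requestStream.isReady() && haveMoreNumbers()) {
                requestStream.onNext(nextNumber());
              }
            });
          }

          @Override public void onNext(Empty response) { /* server replied */ }
          @Override public void onError(Throwable t) { /* handle failure */ }
          @Override public void onCompleted() { /* stream closed */ }
        };
    // The async stub call wires it up: stub.sumNumbers(responseObserver);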

Eric Anderson

May 24, 2017, 1:06:17 PM
to Ryan Michela, grpc.io
On Tue, May 23, 2017 at 9:22 PM, Ryan Michela <delt...@gmail.com> wrote:
This API seems backwards. Why is the response observer responsible for managing the request observer's flow control?

Ah, yes. In short: "because that is the only object grpc creates." "History" could also be a partial explanation, but a weaker one. I will note that if you look at ClientCall/ServerCall and their Listeners, the API should look fairly natural. We implement the StreamObserver-based API on top of the lower-level Call API.

The "advanced" flow control API (disableAutoInboundFlowControl+request) was added late to the async API history when we didn't feel comfortable making large breaking changes. There was also some disagreement on the team with how frequently it would be necessary (vs blocking the onNext callback), which impacted how "pretty" the API needed to be.

The API was also constrained by testing. At the time, applications were expected to mock the stub/StreamObserver directly. The InProcess transport was added much later, and it was later still before we agreed that directly mocking the stub/StreamObserver should be avoided or prohibited. During that period the StreamObservers needed to stay simple, since tests would contain many implementations of them.

But really, the API is pretty close to what we want. If I could break the API and all users were magically updated, I think the only real change I would make is removing the need to cast; I'd make the types ClientCallStreamObserver/ServerCallStreamObserver directly. Or maybe use ClientCall/ServerCall directly, although that would introduce its own set of problems.
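Concretely, the cast being referred to is the server-side downcast required today. A sketch (NumberRequest and the method name are stand-ins):

    import com.google.protobuf.Empty;
    import io.grpc.stub.ServerCallStreamObserver;
    import io.grpc.stub.StreamObserver;

    @Override
    public StreamObserver<NumberRequest> sumNumbers(StreamObserver<Empty> responseObserver) {
      // gRPC only creates a plain StreamObserver, so reaching the
      // flow-control methods requires a cast. It is safe: the runtime
      // always passes a ServerCallStreamObserver to a service method.
      ServerCallStreamObserver<Empty> serverObserver =
          (ServerCallStreamObserver<Empty>) responseObserver;
      serverObserver.disableAutoInboundFlowControl();
      serverObserver.request(1); // pull the first inbound message

      return new StreamObserver<NumberRequest>() {
        @Override public void onNext(NumberRequest value) {
          // ... consume value ...
          serverObserver.request(1); // pull the next one
        }
        @Override public void onError(Throwable t) { /* ... */ }
        @Override public void onCompleted() {
          serverObserver.onNext(Empty.getDefaultInstance());
          serverObserver.onCompleted();
        }
      };
    }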

Ryan Michela

May 24, 2017, 2:34:32 PM
to Eric Anderson, grpc.io
I'm noticing some inconsistent behavior when switching between the InProcess and Jetty-based Server/Channel implementations.

With InProcess, the producer's ClientCallStreamObserver onReadyHandler is called every time the consumer calls request(n).

With Jetty, the producer's onReadyHandler is called exactly once when the consumer calls request(n) for the first time.

Additionally, with InProcess, ClientCallStreamObserver.isReady() toggles to false after the producer produces the nth requested message.

With Jetty, ClientCallStreamObserver.isReady() stays true for the remainder of the stream.

Are these differences intentional?



Eric Anderson

May 24, 2017, 5:16:23 PM
to Ryan Michela, grpc.io
Note: it's Netty, not Jetty.

The InProcess transport is stricter in its interpretation because it can do per-message flow control and isn't concerned with network latency. Netty translates the per-message flow control into byte-based flow control, plus some buffering. So Netty, for instance, won't flip isReady() from true to false unless you end up writing at least 64 KB worth of messages.

Note that both implement the API properly, and an application using the API correctly should work without issue on either transport.
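In other words, a portable sender shouldn't count onReadyHandler invocations; it should drain while isReady() holds and let the handler fire again when the transport reopens. A minimal sketch of that pattern (method and variable names are mine, not from grpc-java):

    import io.grpc.stub.CallStreamObserver;
    import java.util.Iterator;
    import java.util.concurrent.atomic.AtomicBoolean;

    // 'out' is the ClientCallStreamObserver or ServerCallStreamObserver
    // for the outbound stream; 'values' supplies the messages to send.
    static <T> void sendWhenReady(CallStreamObserver<T> out, Iterator<T> values) {
      AtomicBoolean completed = new AtomicBoolean();
      out.setOnReadyHandler(() -> {
        // InProcess may fire this once per request(n); Netty may fire it
        // only when its byte-based window reopens. Re-checking isReady()
        // each iteration keeps the loop correct either way.
        while (out.isReady() && values.hasNext()) {
          out.onNext(values.next());
        }
        // Guard: the handler can fire again after the last message.
        if (!values.hasNext() && completed.compareAndSet(false, true)) {
          out.onCompleted();
        }
      });
    }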