gRPC client stream error semantics when server is shut down but TCP connection remains


C. Schneider

Nov 9, 2021, 9:28:41 PM
to grpc.io
Hi,

For a chat service I have the client connect to a gRPC server running in Istio (and using FlatBuffers).

When the server is shut down, the TCP connection remains open (to Istio, it appears), but the client doesn't detect that the server went away. It continues to send keepalives, expecting the server to eventually send data, but the server never will: its RPC call state was lost when it shut down.
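For context, the keepalives I mean are the transport-level ones configured through channel options in grpc-python, roughly like this (a sketch; the values are illustrative, and note that a proxy in the middle can ack these pings itself even after the backend is gone, which is exactly the behavior described above):

```python
# Client-side keepalive channel options for grpc-python (values illustrative).
# These pings run at the HTTP/2 transport level, not the application level.
KEEPALIVE_OPTIONS = [
    ("grpc.keepalive_time_ms", 10_000),         # ping every 10s when idle
    ("grpc.keepalive_timeout_ms", 5_000),       # fail if no ack within 5s
    ("grpc.keepalive_permit_without_calls", 1), # ping even with no active RPC
]

# channel = grpc.insecure_channel("chat.example:50051", options=KEEPALIVE_OPTIONS)
```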

What are the expected RPC stream semantics when the server goes away mid-stream? Should the client be able to detect this and restart the RPC stream?

Thanks!


yas...@google.com

Dec 1, 2021, 1:39:47 PM
to grpc.io
Having an HTTP/2 proxy in between muddies the waters for keepalive. I believe Istio/Envoy have keepalive settings that you might be able to employ here. If that doesn't work for you either, you might want to consider a custom application-level ping.
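A custom application-level ping could look roughly like this sketch. Here `send_app_ping` is a hypothetical stand-in for a unary Ping RPC you'd add to your own service (something the real backend must answer, so a proxy can't mask its absence), and `reconnect` is whatever tears down and re-establishes the stream:

```python
import time

def watch_connection(send_app_ping, reconnect,
                     interval_s=15.0, max_failures=2, rounds=None):
    """Ping the application periodically; reconnect after repeated failures.

    send_app_ping -- hypothetical callable wrapping a unary Ping RPC on your
                     own service (raises on error).
    reconnect     -- callable that tears down and re-opens the stream.
    rounds        -- optional iteration cap (None = run forever).
    """
    failures = 0
    i = 0
    while rounds is None or i < rounds:
        try:
            send_app_ping()
            failures = 0          # healthy: reset the failure counter
        except Exception:
            failures += 1
            if failures >= max_failures:
                reconnect()       # backend is really gone: restart the stream
                failures = 0
        time.sleep(interval_s)
        i += 1
```

Because the ping is an ordinary RPC handled by the server, it only succeeds while the backend actually holds your call state, unlike transport keepalives that a proxy can answer.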

C. Schneider

Dec 1, 2021, 3:46:29 PM
to grpc.io
I ended up restarting the RPC call every 60s, since a dedicated app-ping RPC would continue to work even though the long-running RPC call's state had been silently lost server-side. It seems a bit inefficient.
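The restart-every-60s workaround amounts to bounding the long-running call with a deadline and reopening it when it ends; a sketch, where `open_stream` is a hypothetical stand-in for the server-streaming call (in grpc-python that would be something like `stub.Subscribe(request, timeout=60)`, which fails with DEADLINE_EXCEEDED when the timeout elapses):

```python
def run_stream_forever(open_stream, handle_message, max_restarts=None):
    """Re-open a long-lived server stream whenever it ends or fails.

    open_stream    -- hypothetical callable returning an iterator of messages
                      (e.g. a deadline-bounded server-streaming stub call).
    handle_message -- callback invoked for each received message.
    max_restarts   -- optional cap on restart attempts (None = run forever).
    """
    restarts = 0
    while max_restarts is None or restarts < max_restarts:
        try:
            for msg in open_stream():
                handle_message(msg)
        except Exception:
            # DEADLINE_EXCEEDED, UNAVAILABLE, etc.: fall through and restart.
            pass
        restarts += 1
```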

Semantically speaking, what is the intended behavior when a long-running RPC call can't complete due to a network issue? I couldn't find that case in the spec, and even in the happier error case where the server could send an RST_STREAM, the client didn't bubble up any indication to the app that the stream was lost.