Unary Stream from Python server -> C# Client, StatusCode=Unknown, Detail="Stream removed"

MNS

Jan 25, 2018, 9:47:04 AM1/25/18
to grpc.io
I have a server-streaming service (unary request, streamed response) from a Python server (gRPC 1.8.4) to a C# client (gRPC 1.8.3).

When I signal the Python server to shut down (SIGTERM, i.e. signal 15), the shutdown method in the code listing below is called; the intention is to terminate the in-flight RPCs gracefully and then shut the server down.

This works when I'm running the server on localhost: an RpcException with the expected status is raised on the client, `Status(StatusCode=Cancelled, Detail="Completed")`.

However, when running the server on GKE and terminating the pod, the client receives `Grpc.Core.RpcException: Status(StatusCode=Unknown, Detail="Stream removed")`. The Python server sits behind an HAProxy ingress controller and a Google Cloud Endpoints proxy, but in my understanding neither of these components should affect the connection.

Can anybody think of what could be causing the different statuses in the RpcException on the client?

Thanks, Mark

import logging
import queue
import sys
import threading
from concurrent import futures

import grpc

logger = logging.getLogger(__name__)

# MyHandler is a method of the StreamsService servicer; stop_event and update_q
# are per-stream objects set up elsewhere in the service.

def MyHandler(self, request, context):
    # Stream updates to the client until the stop event is set during shutdown.
    while not stop_event.is_set():
        try:
            yield update_q.get(True, 0.1)
        except queue.Empty:
            continue

    context.set_code(grpc.StatusCode.CANCELLED)
    context.set_details("Completed")


def shutdown(subscriber_service: StreamsService,
             executor: futures.ThreadPoolExecutor,
             server: grpc.Server, exit_code):

    logger.info("Stopping stream handlers")
    for stop_event in subscriber_service.stop_events:
        stop_event.set()

    logger.info("Stopping executor")
    executor.shutdown()

    logger.info("Stopping server")
    ev: threading.Event = server.stop(grace=10)  # allows in-flight RPCs to terminate gracefully

    logger.info("Waiting for server to stop gracefully")
    ev.wait()

    logger.info("Stopping process with exit code {}".format(exit_code))
    sys.exit(exit_code)
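
For reference, a rough sketch of how SIGTERM can be wired up to call shutdown() — serve(), the port, and the servicer registration call here are illustrative, not the exact code:

import signal

def serve():
    executor = futures.ThreadPoolExecutor(max_workers=10)
    server = grpc.server(executor)
    subscriber_service = StreamsService()
    # streams_pb2_grpc.add_StreamsServiceServicer_to_server(subscriber_service, server)  # generated registration; module name illustrative
    server.add_insecure_port("[::]:50051")  # port illustrative
    server.start()

    # Kubernetes sends SIGTERM on pod deletion; run the graceful shutdown above.
    signal.signal(signal.SIGTERM,
                  lambda signum, frame: shutdown(subscriber_service, executor, server, 0))

    # Block the main thread; shutdown() calls sys.exit() once the server has stopped.
    signal.pause()

Sending TERM to the process locally triggers the same path, which is why the Cancelled/"Completed" status shows up in that case.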

MNS

Jan 26, 2018, 11:27:22 AM1/26/18
to grpc.io
Turns out this was a problem with the Google Cloud Endpoints (ESP)/NGINX container being terminated before the gRPC server container in the same pod had a chance to close its connections gracefully.

If anyone is remotely interested in knowing more, there's a conversation I had with myself over here: https://groups.google.com/forum/#!topic/google-cloud-endpoints/FyfdvD6xS1Q