Long Running Bidi Stream - onCompleted


Rama Rao

Aug 29, 2018, 7:17:32 AM
to grpc.io
Hi,
We are using gRPC Java. We have a server built in Java, and the client and server use long-running bidi streams.
At some point, the clients disconnect suddenly and we get the onError callback with the CANCELLED status code.
At this point, on the server side, should we call responseObserver.onCompleted() to properly close the stream? Would it leak any resources if we don't call responseObserver.onCompleted()? During the life of the server, many clients can come and go like this.

So if we do not call responseObserver.onCompleted() on the server when a client goes away, does it cause any memory issues if these connections/disconnections happen all the time?

What is the grpc-java best practice here?

Thanks,
Rama

Eric Anderson

Aug 29, 2018, 12:41:33 PM
to Rama Rao, grpc-io
On Wed, Aug 29, 2018 at 4:17 AM Rama Rao <ramarao...@gmail.com> wrote:
We are using gRPC Java. We have a server built in Java, and the client and server use long-running bidi streams.
At some point, the clients disconnect suddenly and we get the onError callback with the CANCELLED status code.
At this point, on the server side, should we call responseObserver.onCompleted() to properly close the stream?

onCompleted() is for graceful termination. I'd expect onError() to be more appropriate in this case.

Would it leak any resources if we don't call responseObserver.onCompleted()? During the life of the server, many clients can come and go like this.

When cancellation occurs, everything in the library (including interceptors) is cleaned up by the time the application learns about it. So no, it wouldn't leak.

However, since StreamObservers generally expect that either onError() or onCompleted() will always be called, it isn't a bad idea to call onError(). This would be more important if you have a pipeline of many StreamObservers, something more akin to what happens in RxJava. I'm actually a bit surprised to see that StreamObserver doesn't mandate that onError()/onCompleted() will always be called. That sort of seems like an oversight.

On the other hand, calling onError() can be a major PITA in certain cases because you sometimes need to coordinate with another thread. The practical side of me would easily be like, "ain't nobody got time for that!" Granted, most frequently you are coordinating with that other thread anyway to stop its execution, so just having that thread call onError() as it stops is no big deal.

So all that is to say: I'd suggest "always" calling either onError() or onCompleted(), as it makes the code more clear. But if you find a case where that is a pain, it is probably fine to skip calling onError().
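
For concreteness, here is a rough sketch of what "always terminate the response observer" can look like in a bidi handler. The Chat service and ChatMessage type are made up for illustration; substitute your own generated stubs, and treat this as a sketch rather than a drop-in implementation.

import io.grpc.Status;
import io.grpc.stub.StreamObserver;

// Hypothetical bidi-streaming service; ChatGrpc/ChatMessage are illustrative names only.
public class ChatService extends ChatGrpc.ChatImplBase {
  @Override
  public StreamObserver<ChatMessage> chat(final StreamObserver<ChatMessage> responseObserver) {
    return new StreamObserver<ChatMessage>() {
      @Override
      public void onNext(ChatMessage msg) {
        // Placeholder application logic: echo the message back.
        responseObserver.onNext(msg);
      }

      @Override
      public void onError(Throwable t) {
        // Client cancelled or failed. The library has already cleaned up its own
        // resources, but terminating our side keeps the StreamObserver contract
        // (exactly one terminal call) intact for anything downstream of it.
        responseObserver.onError(
            Status.CANCELLED.withDescription("client terminated the stream").asRuntimeException());
      }

      @Override
      public void onCompleted() {
        // Graceful half-close from the client; close our side gracefully too.
        responseObserver.onCompleted();
      }
    };
  }
}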

Rama Rao

Aug 29, 2018, 3:30:20 PM
to Eric Anderson, grpc-io
Got it, thanks. Another related question: in this scenario, do you think giving the server a fixed thread pool would work better than the default cached thread pool? I am seeing that when there is a burst of client connections, the thread count seems to grow and spike. What is the best practice here?

Eric Anderson

Aug 29, 2018, 6:21:44 PM
to Rama Rao, grpc-io
I'd suggest providing an executor that has some limit to the number of threads. We can't do that by default because if the application uses blocking operations then there is potential for deadlock, and we don't know what level of concurrency the application requires. A fixedThreadPool (or ForkJoinPool) could provide that, but other options would also be fine.
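
For example, something along these lines (just a sketch; the port, pool size, and service name are placeholders, not tuned recommendations, and you should size the pool for your workload, especially if your handlers block):

import java.io.IOException;
import java.util.concurrent.Executors;
import io.grpc.Server;
import io.grpc.ServerBuilder;

public class BoundedExecutorServer {
  public static void main(String[] args) throws IOException, InterruptedException {
    // Cap the number of application threads gRPC uses for delivering callbacks.
    Server server = ServerBuilder.forPort(50051)
        .executor(Executors.newFixedThreadPool(16))
        .addService(new ChatService()) // the hypothetical bidi service sketched earlier
        .build()
        .start();
    server.awaitTermination();
  }
}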