Scalability issues with Server-Sent Events in the gRPC C# implementation


Michael Martin

Oct 21, 2018, 7:48:24 AM
to grpc.io
Hello,
I chose gRPC to replace a REST data interface together with server events (SSE / SignalR / WebSockets) with a single "non-proprietary" protocol.

I found a downside of gRPC in terms of Server-Sent Events: there will always be a blocking thread. -> https://github.com/grpc/grpc/issues/8718#issuecomment-354673344
My own implementation: https://pastebin.com/HwPY6nLX

So far I have found no other way to implement Server-Sent Events than to have the requesting thread block for eternity (or at least for the duration of the event subscription) on a responseStream.
This results in an enormous number of threads "hanging in sleep" (number of subscribed clients * number of server-side events).
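
To illustrate, here is a stripped-down sketch of the pattern I mean (hypothetical EventService/Subscription/Event types and EventSource helper; my actual code is in the pastebin above, but it boils down to roughly this):

using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;

// Hypothetical generated service: rpc Subscribe(Subscription) returns (stream Event);
public class EventServiceImpl : EventService.EventServiceBase
{
    public override async Task Subscribe(Subscription request,
        IServerStreamWriter<Event> responseStream, ServerCallContext context)
    {
        var signal = new ManualResetEventSlim(false);
        Event pending = null;

        // Some application event source raises this whenever there is data to push.
        EventSource.OnEvent += e => { pending = e; signal.Set(); };

        while (!context.CancellationToken.IsCancellationRequested)
        {
            signal.Wait();   // <-- blocks this thread for the lifetime of the subscription
            signal.Reset();
            await responseStream.WriteAsync(pending);
        }
    }
}

Every subscription keeps a thread parked in signal.Wait(), which is exactly the scaling problem.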

In terms of scaling, this is the bottleneck I am trying to address here.
Does anyone have better implementations or ideas?

In conclusion, I wonder if there are any plans to make gRPC a better SSE/SignalR/WebSockets alternative in terms of server-side events, because in my humble opinion the only problem is that currently a responseStream cannot survive the end of the initial request's lifetime.

Thanks in advance for your time

Michael

Arpit Baldeva

Oct 26, 2018, 1:04:09 PM
to grpc.io
You should look into running the gRPC server in async mode. That would make sure there is not a thread per server-streaming RPC (you control the threading model).

Jan Tattermusch

Nov 6, 2018, 8:17:38 AM
to grpc.io
It seems you are using the C# server. The entire gRPC C# logic is built around the "async-await" pattern, which means no threads are consumed if you await events the right way (it's a bit more nuanced, but in short: never use a blocking wait, rely on the await keyword instead - I recommend reading up a bit on how async/await works and its best practices).
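
To make the distinction concrete (just an illustration, not gRPC-specific):

using System.Threading.Tasks;

static class WaitVsAwait
{
    // Blocking wait: parks the calling thread until the task completes.
    // This is what makes every waiting subscription cost a thread.
    static string NextBlocking(Task<string> next) => next.Result;

    // await: the method yields its thread and resumes when the task completes,
    // so no thread is held while waiting.
    static async Task<string> NextAsync(Task<string> next) => await next;
}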

On the client side, the API for streaming calls is also fully asynchronous - that means if you are reading from the responseStream (await responseStream.MoveNext()), no thread is consumed (the async method just yields the thread and resumes when there is something to consume). This is true unless you add blocking primitives in your code yourself.
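
A client-side read loop would then look roughly like this (hypothetical EventService client and Subscribe method, matching the server sketch further below):

using System;
using System.Threading;
using System.Threading.Tasks;
using Grpc.Core;

public static class EventListener
{
    // Hypothetical generated client: EventService.EventServiceClient with a
    // server-streaming Subscribe(Subscription) method.
    public static async Task ListenAsync(EventService.EventServiceClient client)
    {
        using (var call = client.Subscribe(new Subscription()))
        {
            // Purely async: no thread is held between events.
            while (await call.ResponseStream.MoveNext(CancellationToken.None))
            {
                Console.WriteLine($"received: {call.ResponseStream.Current}");
            }
        }
    }
}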

So basically, the solution would look something like this (there are many ways, this is just one of them; see the sketch after this list):
- the server keeps a pool of active subscribers.
- whenever an event is to be pushed to clients, send it to all subscribers using WriteAsync(); that can be done in parallel without blocking.
- clients wait for events in an async loop (which calls await responseStream.MoveNext()), but since this is purely async, there are no "blocking" threads.
- if the RPC is interrupted, the client connects to the server again.
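
A rough server-side sketch of that idea (hypothetical Subscription/Event message types and Subscribe method; the subscriber pool is just a concurrent dictionary of stream writers):

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;
using Grpc.Core;

// Hypothetical generated service: rpc Subscribe(Subscription) returns (stream Event);
public class EventServiceImpl : EventService.EventServiceBase
{
    private readonly ConcurrentDictionary<Guid, IServerStreamWriter<Event>> subscribers =
        new ConcurrentDictionary<Guid, IServerStreamWriter<Event>>();

    public override async Task Subscribe(Subscription request,
        IServerStreamWriter<Event> responseStream, ServerCallContext context)
    {
        var id = Guid.NewGuid();
        subscribers[id] = responseStream;
        try
        {
            // Keep the RPC open until the client disconnects or cancels.
            // Awaiting a TaskCompletionSource holds no thread.
            var done = new TaskCompletionSource<object>();
            context.CancellationToken.Register(() => done.TrySetResult(null));
            await done.Task;
        }
        finally
        {
            subscribers.TryRemove(id, out _);
        }
    }

    // Called by the application whenever there is an event to push.
    // Note: only one WriteAsync per stream may be in flight at a time,
    // so await each broadcast before starting the next (or queue per subscriber).
    public Task BroadcastAsync(Event ev)
    {
        return Task.WhenAll(subscribers.Values.Select(s => s.WriteAsync(ev)));
    }
}

The important part is that Subscribe awaits instead of blocking, so an idle subscription costs a dictionary entry, not a thread.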

No changes are needed on the gRPC side.