In Java gRPC, how do I throttle client requests with flow control and still utilize load balancing?


davidr...@gmail.com

Dec 22, 2017, 9:13:02 AM
to grpc.io

I've successfully used the Manual Flow Control example to make streaming async requests from one client to one server. I have also been successful using the Kubernetes load balancer example to let one client make round-robin blocking-stub requests to multiple servers. But I'm not able to combine the two examples to allow one client to make async requests to multiple servers. Is the Manual Flow Control example by its nature a single-server technique, since it uses a ClientResponseObserver relationship?

What would be the proper way for me to throttle my streaming async requests to multiple servers?

Carl Mastrangelo

Jan 4, 2018, 7:42:05 PM
to grpc.io
To clarify: are you asking about doing streaming RPCs to multiple backends? If so, each RPC (which consists of multiple messages) will be sent to a different backend. Once a streaming RPC is started, it is pinned to a particular backend and will not change.

davidr...@gmail.com

Mar 9, 2018, 3:21:46 PM
to grpc.io
Hey Carl, thank you for responding! I'm really liking gRPC!

I want to distribute work from one service to multiple backend services. It's computational geometry queries, so the work can take a little time, and the client would benefit from farming the work out to worker services. I can see now that streaming is not the way to do that, as a stream is pinned to one backend (it has to be, to guarantee ordering).

So to distribute work to multiple backends, can I use an async stub and some form of future? That way I send work out and it's round-robin load-balanced. Is this benchmark code the one I want to emulate? Are there other places of documentation for this kind of work?
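
Something like this sketch is what I have in mind: a future stub over one channel that resolves to all the workers, with the round_robin policy picking a backend per call. (GeometryService, SolveRequest, and SolveReply are placeholders for my actual proto, the DNS target stands in for my headless Kubernetes service, and I'm assuming defaultLoadBalancingPolicy("round_robin") is the right way to ask for round robin.)

import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.MoreExecutors;
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class AsyncRoundRobinClient {
  public static void main(String[] args) {
    // One channel whose resolver returns every worker address; the
    // round_robin policy then spreads each unary call across backends.
    ManagedChannel channel = ManagedChannelBuilder
        .forTarget("dns:///geometry-workers.default.svc.cluster.local:50051")
        .defaultLoadBalancingPolicy("round_robin")
        .usePlaintext()
        .build();

    GeometryServiceGrpc.GeometryServiceFutureStub stub =
        GeometryServiceGrpc.newFutureStub(channel);

    // Each call returns a future, so the client is not blocked while
    // a worker crunches the geometry query.
    ListenableFuture<SolveReply> reply =
        stub.solve(SolveRequest.newBuilder().build());

    Futures.addCallback(reply, new FutureCallback<SolveReply>() {
      @Override public void onSuccess(SolveReply r) { /* collect result */ }
      @Override public void onFailure(Throwable t) { /* retry or log */ }
    }, MoreExecutors.directExecutor());
  }
}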


Thanks for the help!
D

Carl Mastrangelo

Mar 15, 2018, 8:50:33 PM
to grpc.io
I would not emulate the benchmark code, except in that it is async. The benchmark code is pretty heavily tuned to the usage pattern of benchmarking, which may not be ideal for your use case.

From your work description, it sounds like you want to make the workers servers and the master a client. The client can contact a worker with a work unit, and the worker will respond to it; that is the unary usage. If you want each worker to be working on at most n things at a time, you can keep a map of gRPC stubs to work items. When a stub's observer returns false for CallStreamObserver.isReady(), you can register a callback on the observer with setOnReadyHandler() to re-add itself to the map of available workers. That way you limit each worker to at most n work items, and only the workers that are not pushing back via flow control get new work.
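
Here's a rough sketch of what I mean. The service and message names (GeometryService, SolveRequest, SolveReply) are made up, and I'm using a plain Semaphore per worker rather than isReady(), since for unary calls a permit count is the simplest way to enforce the at-most-n limit:

import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Semaphore;

// Master-side dispatcher: fans unary work units out to worker servers,
// permitting at most MAX_IN_FLIGHT outstanding RPCs per worker.
public class WorkDistributor {
  private static final int MAX_IN_FLIGHT = 4;

  private final List<GeometryServiceGrpc.GeometryServiceStub> stubs = new ArrayList<>();
  private final List<Semaphore> permits = new ArrayList<>();
  private int next;

  WorkDistributor(List<String> workerTargets) {
    // One channel and one async stub per worker, each with its own
    // pool of permits.
    for (String target : workerTargets) {
      ManagedChannel channel =
          ManagedChannelBuilder.forTarget(target).usePlaintext().build();
      stubs.add(GeometryServiceGrpc.newStub(channel));
      permits.add(new Semaphore(MAX_IN_FLIGHT));
    }
  }

  void dispatch(SolveRequest work) throws InterruptedException {
    final int i = next++ % stubs.size();
    final Semaphore permit = permits.get(i);
    // This is the throttle: blocks while worker i already has
    // MAX_IN_FLIGHT requests outstanding.
    permit.acquire();
    stubs.get(i).solve(work, new StreamObserver<SolveReply>() {
      @Override public void onNext(SolveReply reply) { /* collect result */ }
      @Override public void onError(Throwable t) { permit.release(); }
      @Override public void onCompleted() { permit.release(); }
    });
  }
}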

I am somewhat guessing at what you want from your description, and I don't know the specifics of your problem, but the async stub usage sounds right for you.