We are actually building this under the banner of 'client-side load balancing'. With this, you create one channel that is connected to a set of backends, and the gRPC runtime chooses which backend to send each request to.
Initial code is in, but I wouldn't expect a full implementation to be ready for at least a few months.
In the meantime, what you're describing would also work.
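For reference, a minimal sketch of what this looks like in grpc-python today: you point one channel at a target that resolves to multiple backends and ask the runtime to round-robin across them via the `grpc.lb_policy_name` channel option. The target address here is a placeholder.

```python
import grpc

# One channel, many backends: the 'dns:///' target may resolve to several
# addresses, and the round_robin policy spreads requests across them.
# 'my-service.example.com:50051' is a placeholder, not a real service.
channel = grpc.insecure_channel(
    'dns:///my-service.example.com:50051',
    options=[('grpc.lb_policy_name', 'round_robin')],
)
```

Note that creating the channel does not connect eagerly; name resolution and connection happen lazily on the first RPC.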
Now I see.
As far as I know, multiplexing is a very useful feature of HTTP/2. If I create a channel and stub every time and then close them, the connection won't be multiplexed. For better performance, I want to reuse the connection rather than creating a new one for every request. Could I use the singleton pattern to make a global channel? Then the whole app (including the web app) would have only one connection to the gRPC server. But if I write it like this, there is an error and I can't close my app.
import grpc

class GrpcClient(object):
    def __init__(self):
        self.channel = grpc.insecure_channel('localhost:50051')  # placeholder target
        self.stub = MyServiceStub(self.channel)  # generated stub for your service

c = GrpcClient()
D0303 01:53:18.038069000 140735146815488 iomgr.c:96] Waiting for 1 iomgr objects to be destroyed
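That shutdown warning usually means the channel was never closed, so gRPC's iomgr objects are still alive at exit. A sketch of one way to keep a single shared channel and still shut down cleanly, assuming grpcio 1.12+ where `Channel.close()` exists; the target is a placeholder and your generated stub would wrap the shared channel:

```python
import atexit
import grpc

class GrpcClient(object):
    """Process-wide client holding one shared channel (singleton)."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(GrpcClient, cls).__new__(cls)
            # Placeholder target; a real app would also create its stubs here.
            cls._instance.channel = grpc.insecure_channel('localhost:50051')
            # Close the channel at interpreter exit so gRPC's iomgr objects
            # are destroyed and the process can terminate cleanly.
            atexit.register(cls._instance.channel.close)
        return cls._instance

c1 = GrpcClient()
c2 = GrpcClient()
# Both names refer to the same instance and the same underlying channel,
# so every RPC in the process multiplexes over one HTTP/2 connection.
```

Registering the close with `atexit` is one option; explicitly calling `c1.channel.close()` at your own shutdown point works just as well.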
Also, is there a queue or something when the gRPC server processes requests? It looks like later requests' response times will be longer, or they may even expire, if there are a lot of requests on only one connection.
Thanks a lot. I really appreciate your reply. That really helps!