Pointless for a server to stay behind a load balancer?


Johan Jian An Sim

Oct 7, 2015, 12:54:48 AM
to grpc.io
Hi,

I am looking at gRPC Go, which logs events/requests to net/trace. Looking at those traces, I get the impression that a client is tied to a single server. I want to check whether I am wrong about this.

So if I have 3 servers ready (behind a Kubernetes service/load balancer) and only 1 client is going to connect, will the client's requests always go to the same server (the one it initially established the connection with)?

Qi Zhao

Oct 7, 2015, 1:37:33 AM
to Johan Jian An Sim, grpc.io
I have no idea how your balancer works, but for now all the RPCs are sent to whatever target you pass to grpc.Dial(...).
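
As a rough sketch of what that means in grpc-go (the pb package and the Greeter service below are hypothetical; the point is just grpc.Dial and the single dialed target), every RPC issued on the returned *grpc.ClientConn goes to the one address you dialed:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "example.com/helloworld" // hypothetical generated stubs
)

func main() {
	// Dial a single target; grpc-go opens one client connection to it.
	conn, err := grpc.Dial("my-service:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewGreeterClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Both calls reuse the same client connection, so they go to whatever
	// the dialed target resolves to for that connection.
	for i := 0; i < 2; i++ {
		if _, err := client.SayHello(ctx, &pb.HelloRequest{Name: "world"}); err != nil {
			log.Fatalf("SayHello: %v", err)
		}
	}
}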




--
Thanks,
-Qi

Eric Anderson

Oct 7, 2015, 11:31:59 AM
to Johan Jian An Sim, grpc.io
Each RPC is a different HTTP request, but generally on the same TCP connection (via HTTP/2 multiplexing). So if you are using an HTTP load balancer, requests would be spread across your various servers. If you are using a TCP load balancer, all requests from a client would go to the same server until that connection is broken.

Note that streaming RPCs are each a single HTTP request; all request messages in the stream go to the same server.
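
To sketch that concretely (the pb.Greeter service and its client-streaming RecordGreetings method below are hypothetical generated stubs), the stream is opened as one HTTP/2 request, so every message sent on it lands on whichever backend that one request was routed to:

// Sketch only: pb and RecordGreetings are hypothetical.
func sendGreetings(ctx context.Context, client pb.GreeterClient) error {
	stream, err := client.RecordGreetings(ctx)
	if err != nil {
		return err
	}
	// All of these messages travel on the single HTTP/2 stream opened above,
	// so they all reach the same backend, even behind an HTTP-aware balancer.
	for _, name := range []string{"alice", "bob", "carol"} {
		if err := stream.Send(&pb.GreetingRequest{Name: name}); err != nil {
			return err
		}
	}
	_, err = stream.CloseAndRecv()
	return err
}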


Johan Sim

Oct 7, 2015, 5:41:27 PM
to Eric Anderson, grpc.io

Thanks, guys. I will look into that.

idan...@gmail.com

Sep 22, 2016, 12:19:56 PM
to grpc.io, ej...@google.com
Hi,
Found this thread while searching for an answer to the same issue...
I have the same setup: a client that is directed at a Kubernetes service (DNS discovery), and I have noticed that all calls go to the same server.
This is a single client with multiple servers behind the "service" identity.
Is there any way to bypass this so that each call actually picks a random server from the service pool?

Kun Zhang

Sep 22, 2016, 7:55:00 PM
to grpc.io, ej...@google.com, idan...@gmail.com
If you are using Java, a proposed new LoadBalancer API should be able to solve your problem:

It will allow a LoadBalancer to create multiple connections for the same address.

Eric Anderson

Oct 3, 2016, 7:34:05 PM
to Kun Zhang, grpc.io, idan...@gmail.com
The tracking issue is https://github.com/grpc/grpc/issues/7957. The first resolution (#1) would be to add a timer to the server to send a GOAWAY after the connection reaches a certain age, allowing the proxy to choose another backend.
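
For what it's worth, this is roughly what option #1 looks like on the server side once such a knob exists. The keepalive.ServerParameters API below is what later grpc-go releases ship, not something available as of this thread:

package main

import (
	"log"
	"net"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}

	srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		// Send GOAWAY and start draining once a connection is this old,
		// so clients reconnect and the balancer can choose another backend.
		MaxConnectionAge:      30 * time.Minute,
		MaxConnectionAgeGrace: 5 * time.Minute, // time for in-flight RPCs to finish
	}))

	// Register your services here before serving.
	if err := srv.Serve(lis); err != nil {
		log.Fatalf("serve: %v", err)
	}
}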

I have an open question in that issue about whether we also need an option to make multiple connections and round-robin over them, hoping they go to different backends (#2). If you need this, I'd suggest commenting on the issue.
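
For anyone landing here with the original question (spreading one client's calls across the pods behind a Kubernetes service), a rough illustration of the client-side direction: later grpc-go releases expose a dns resolver and a round_robin policy which, combined with a headless Service so DNS returns one address per pod, distribute calls across backends without a proxy. The service name below is hypothetical, and these APIs postdate this thread:

package main

import (
	"log"

	"google.golang.org/grpc"
)

func main() {
	// Assumes a headless Kubernetes Service so DNS returns one A record per pod,
	// and a grpc-go version that ships the dns resolver and round_robin policy.
	conn, err := grpc.Dial(
		"dns:///my-service.default.svc.cluster.local:50051",
		grpc.WithInsecure(),
		grpc.WithDefaultServiceConfig(`{"loadBalancingConfig": [{"round_robin":{}}]}`),
	)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// RPCs issued on conn are now spread round-robin across the resolved pods.
}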