Load Balance secure connection

mauricio...@lacity.org

Oct 16, 2018, 12:57:34 PM
to grpc.io
We're setting up a mobile application (Objective-C) that communicates back to the server (Go) using gRPC.  We intend to place those servers behind a NetScaler load balancer.  We now have a requirement to encrypt the messages going through.  How would we configure the client/server/load balancer to accept and forward the messages with TLS back to the individual servers?  We thought about trying the first certificate and, if that fails, trying the subsequent ones, but that seems a very fragile approach.  How does secure load balancing happen in the gRPC world?

Carl Mastrangelo

Oct 16, 2018, 7:47:57 PM
to grpc.io
There are a few options.  The key words to look for are "L7" load balancing and "L4" load balancing.  For L7, your entry point to the load balancer, typically some kind of reverse proxy, terminates the TLS and then forwards the traffic to the correct backend.  Your client sends traffic to the proxy, which then decides which of the available backends is least loaded.  For L4, there is still a reverse proxy, but it does not decrypt the TLS.  Instead, it forwards all the encrypted data to a backend IP address, again deciding where to send it based on load (or even just round robin).  The benefit of L7 load balancing is that it can make smarter decisions about where to send traffic, but the downside is that it's slightly slower.  L4 is nice because the proxy does not need the TLS certs (useful when that hardware isn't trusted), but since it can't see the request contents it can only balance whole connections, not individual RPCs.
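
To make the L4 case concrete, here's a rough sketch in Go of a pass-through TCP proxy.  The backend addresses and port are placeholders and a real deployment would add health checking, but it shows the key property: the proxy never decrypts anything, it just copies encrypted bytes to whichever backend it picked.

package main

import (
    "io"
    "log"
    "net"
    "sync/atomic"
)

// Placeholder backend addresses; TLS terminates at these backends.
var backends = []string{"10.0.0.1:443", "10.0.0.2:443", "10.0.0.3:443"}
var next uint64

func main() {
    ln, err := net.Listen("tcp", ":443")
    if err != nil {
        log.Fatal(err)
    }
    for {
        client, err := ln.Accept()
        if err != nil {
            log.Print(err)
            continue
        }
        go proxy(client)
    }
}

func proxy(client net.Conn) {
    // Round-robin backend selection; a real proxy would also track health.
    be := backends[atomic.AddUint64(&next, 1)%uint64(len(backends))]
    server, err := net.Dial("tcp", be)
    if err != nil {
        client.Close()
        log.Printf("dial %s: %v", be, err)
        return
    }
    // Copy the still-encrypted bytes in both directions.
    go func() { io.Copy(server, client); server.Close() }()
    io.Copy(client, server)
    client.Close()
}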

In both cases, the client always sends traffic to the same place, which is in charge of routing to the next hop.  Also in both cases, the LB proxy needs to know all the backends available to send traffic to, and needs a way of telling whether they are healthy.  Depending on how big your architecture is, even these two approaches may not be enough, but let's not get too complicated too quickly.
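
How the proxy polls for health depends on the proxy, but on the backend side one common option is the standard gRPC health checking service.  A rough sketch of wiring it up in grpc-go (port and names are placeholders):

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/health"
    healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
        log.Fatalf("listen: %v", err)
    }

    s := grpc.NewServer()
    // Register the standard health service so a load balancer (or any
    // client) can ask this backend whether it is serving.
    h := health.NewServer()
    h.SetServingStatus("", healthpb.HealthCheckResponse_SERVING)
    healthpb.RegisterHealthServer(s, h)

    // ... register your own services on s here
    if err := s.Serve(lis); err != nil {
        log.Fatalf("serve: %v", err)
    }
}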

In gRPC LB, the approach is different from the two above.  Instead of a proxy in the data path, the client contacts a dedicated load balancing service (i.e. gRPCLB) at startup and asks it for addresses to connect to.  The gRPCLB service can send back a list of backend IP addresses to use, as well as relative weights for how much traffic each backend should take.  This is probably the most scalable approach, because it avoids the intermediate proxy altogether.  However, there is no premade gRPCLB server available, so you would have to implement the protocol yourself.
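
You don't need the full gRPCLB protocol to get the flavor of it.  Here's a rough sketch of the same idea in its simplest form in grpc-go (exact dial options vary by version): the client learns several backend addresses, here from DNS, and spreads RPCs across them itself, with no proxy in the data path.  The DNS name and the CA cert file are placeholders.

package main

import (
    "log"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
)

func main() {
    // The client trusts backends via the CA cert that signed their certs.
    creds, err := credentials.NewClientTLSFromFile("ca.crt", "")
    if err != nil {
        log.Fatalf("loading CA cert: %v", err)
    }

    // The dns resolver returns every IP behind this name; round_robin
    // then spreads RPCs across connections to all of them.
    conn, err := grpc.Dial(
        "dns:///backends.example.internal:443",
        grpc.WithTransportCredentials(creds),
        grpc.WithDefaultServiceConfig(`{"loadBalancingConfig":[{"round_robin":{}}]}`),
    )
    if err != nil {
        log.Fatalf("dial: %v", err)
    }
    defer conn.Close()
    // ... create your service stubs on conn as usual
}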

HTH,
Carl

Carl Mastrangelo

Oct 17, 2018, 7:08:19 PM
to grp...@googlegroups.com, edward...@lacity.org, mauricio...@lacity.org, michae...@lacity.org
Expanding on this answer further


> The data passes through the said proxy.


OK, so that makes sense.  So basically:

       + --> B1
       |
C ---> P --> B2
       |
       + --> B3

Assuming this diagram doesn't get mangled: the issue you have is that the proxy can't have access to the certs, but each of the backends may have different certs, which may be self-signed.

Also, assuming there isn't SSL today, you can make traffic to the proxy use either port 80 or 443 to distinguish between plaintext and secure.  The client will need to know the CA cert that signed the TLS certs for B1, B2, and B3.  Each backend has its own key and its own cert.  When the client connects, the backend presents its own cert, which the client trusts because it was signed by that CA.
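
In Go terms, that setup looks roughly like the sketch below; the file names are placeholders for whatever each backend actually has on disk.  Run the same thing on B1, B2, and B3 with their own cert/key pairs, and on the client side just load the CA cert the way the earlier sketch does.

package main

import (
    "log"
    "net"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials"
)

func main() {
    // Each backend serves with its own key/cert pair, signed by the shared CA.
    creds, err := credentials.NewServerTLSFromFile("b1.crt", "b1.key")
    if err != nil {
        log.Fatalf("loading cert/key: %v", err)
    }

    lis, err := net.Listen("tcp", ":443")
    if err != nil {
        log.Fatalf("listen: %v", err)
    }

    s := grpc.NewServer(grpc.Creds(creds))
    // ... register your services on s here
    if err := s.Serve(lis); err != nil {
        log.Fatalf("serve: %v", err)
    }
}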

