HAProxy Multi Cluster/Service-mesh Load Balancing Inquiry


s s

Jul 29, 2021, 12:40:05 PM
to haproxy...@googlegroups.com
Hello all,
I have a question regarding the Kubernetes HAProxy Ingress. I want an overall system architecture consisting of a highly available, auto-scaling, active-active HAProxy cluster that load balances multiple identical back-end Kubernetes clusters running the Istio service mesh. How can this best be achieved? Specifically, can it be done using Kubernetes HAProxy Ingress? Can Kubernetes HAProxy Ingress run in "external mode" outside of the Kubernetes cluster, similar to the ingress controller from HAProxy Technologies (as of version 1.5, as described in this article: https://www.haproxy.com/blog/run-the-haproxy-kubernetes-ingress-controller-outside-of-your-kubernetes-cluster/)? Or is it better to run the HAProxy load balancer as a service in its own dedicated Kubernetes cluster, which then connects to the back-end Kubernetes+Istio clusters?

Any advice you can offer on the capabilities and optimal architecture of HAProxy in this regard would be highly appreciated. Please forgive me if this question is overly basic or uninformed; I only recently found out about this project. I look forward to any guidance and further resources you can provide.
 
Thank You and Best Regards,
Sal
 

Joao Morais

Jul 30, 2021, 7:23:05 AM
to haproxy...@googlegroups.com
On Thu, Jul 29, 2021 at 1:40 PM s s <mail...@yandex.com> wrote:
>
> Hello all,
> I have a question regarding the Kubernetes HAProxy Ingress. [...]


Hello, good questions, and thanks for asking them.

The short answer to your main question is unfortunately no, but let's
break this down into smaller pieces.

Regarding external mode, HAProxy Ingress has supported it since the very first version and, in fact, there is nothing special about making it happen. How it works: let's start with the Helm chart way of deploying the controller, which is to run it as a pod, giving you three things for free: 1) access to the cluster, 2) network, and 3) automation. Access (1) you configure with the --kubeconfig command-line option, pointing to a kubectl-like config file. Network (2) you configure by exposing the pod and/or service networks (using service-upstream) in a way that the hosts running the ingress controller can reach them; the blog post describes one way to accomplish this. Automation (3) is the harder part: you'd need another automation tool to manage your controller instances. Note that being external or not doesn't change much here - you can run the controller as a pod, taint a node so it only runs the controller and haproxy, configure it on the host network, and give it a public IP; it effectively becomes external without the added complexity.
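
For illustration only, a rough sketch of both options - the binary path, node name, label and taint keys below are placeholders, not anything the controller requires:

    # External mode: run the controller on a host outside the cluster and
    # point --kubeconfig at a kubectl-like config file (paths are examples).
    haproxy-ingress --kubeconfig=/etc/haproxy-ingress/kubeconfig

    # "Effectively external" while staying in the cluster: dedicate a node,
    # then run the controller there on the host network with a public IP.
    kubectl label node edge-1 role=ingress
    kubectl taint node edge-1 dedicated=ingress:NoSchedule

    # Relevant fragment of the controller pod spec (Deployment/DaemonSet):
    spec:
      hostNetwork: true        # haproxy binds directly on the node's IP
      nodeSelector:
        role: ingress          # pin the controller to the dedicated node(s)
      tolerations:
      - key: dedicated
        value: ingress
        effect: NoSchedule     # tolerate the taint applied above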

Regarding multi-cluster access: this is a feature we don't support yet, which makes your proposal impossible to accomplish as-is. However, the HAProxy Ingress internals already support multiple clients, and it wouldn't be that difficult to implement. If you file a feature request you'll be notified when it's implemented and available for testing. It should work something like this: create a multi-cluster kubeconfig and use it in the --kubeconfig command-line option; a new command-line option would allow you to reference two or more contexts available in the kubeconfig file; the ingress controller nodes would need access to the pod/service/Istio network in each target cluster, and it's up to you to provide this access (e.g. by announcing BGP routes); the networks in the target clusters should have distinct CIDRs so the route table on the ingress controller nodes can be configured properly, so maybe the Kubernetes clusters cannot be completely identical.
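
To make the idea more concrete, here is roughly what such a multi-cluster kubeconfig could look like. The server addresses, file paths and context names are placeholders, and the command-line option that would select more than one context does not exist yet:

    apiVersion: v1
    kind: Config
    clusters:
    - name: cluster-a
      cluster:
        server: https://api.cluster-a.example.com:6443
        certificate-authority: /etc/haproxy-ingress/ca-a.crt
    - name: cluster-b
      cluster:
        server: https://api.cluster-b.example.com:6443
        certificate-authority: /etc/haproxy-ingress/ca-b.crt
    users:
    - name: ingress-controller
      user:
        client-certificate: /etc/haproxy-ingress/client.crt
        client-key: /etc/haproxy-ingress/client.key
    contexts:
    - name: cluster-a
      context:
        cluster: cluster-a
        user: ingress-controller
    - name: cluster-b
      context:
        cluster: cluster-b
        user: ingress-controller
    current-context: cluster-a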

Finally, the high availability part depends on the automation you have around the cluster. How do you resolve the domains - does the DNS service notice when you add a new controller node or when a node dies? Do you use another load balancer in front of them? How do you manage and scale the controllers - will you use a dedicated Kubernetes cluster or another automation tool? I currently cannot see the controller helping here, but please let me know if I'm wrong and let's build such a feature as well.
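
If you go the "another lb in front" route, one common pattern is a small TCP pass-through haproxy (or a pair of them sharing a floating IP) in front of the controller nodes. Purely as an illustration, with made-up addresses:

    # haproxy.cfg fragment - forwards TLS traffic untouched to the controllers
    frontend ingress_fe
        bind :443
        mode tcp
        default_backend ingress_controllers

    backend ingress_controllers
        mode tcp
        balance roundrobin
        server ctrl1 192.0.2.11:443 check
        server ctrl2 192.0.2.12:443 check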

~jm