MetalLB Installation with Helm + Ingress Configuration


Abdou B

Jan 4, 2024, 10:51:13 AM
to metallb-users
Hello,

I installed a Kubernetes cluster composed of 1 master node and 3 workers on bare metal at my home.
* I used Weave Net as the CNI, and my services use the default externalTrafficPolicy: Cluster

I created a namespace for MetalLB:

~/k8s/metalb$ cat namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: metallb
  # https://metallb.universe.tf/installation/#installation-with-helm
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
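
I applied it and checked that the labels are there with the usual commands (a quick sketch):

kubectl apply -f namespace.yaml
kubectl get namespace metallb --show-labels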



I installed MetalLB using the Helm chart into that namespace:
helm install metallb metallb/metallb -f values.yaml -n metallb
I customized some values (collected into a values.yaml sketch after this list):
* loadBalancerClass: "nginx"
* controller.logLevel: debug
* controller.labels:
* speaker
  * speaker.logLevel: info
  * speaker.labels:
      pod-security.kubernetes.io/warn: privileged
* crds.validationFailurePolicy: Ignore
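
Put together, my values.yaml looks roughly like this (a sketch reconstructed from the list above; the empty label entry is simply left blank):

# values.yaml (sketch) -- passed to helm with -f values.yaml
loadBalancerClass: "nginx"
crds:
  validationFailurePolicy: Ignore
controller:
  logLevel: debug
  labels: {}           # nothing set here
speaker:
  logLevel: info
  labels:
    pod-security.kubernetes.io/warn: privileged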

I used kubectl logs and kubectl describe on the various pods.
I do not see any error.
How do you confirm MetalLB is running correctly?
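
For reference, these are the kinds of checks I have been running so far (a sketch; pod names are found with grep as below):

kubectl get pods -n metallb -o wide
kubectl logs -n metallb $( kubectl get pod -n metallb | grep control | awk '{print $1}' )
kubectl logs -n metallb $( kubectl get pod -n metallb | grep speaker | awk '{print $1}' | head -1 )
kubectl get ipaddresspools.metallb.io,l2advertisements.metallb.io -n metallb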


Then I created an Ingress in my Kubernetes cluster.
No public IP is assigned.
I do not see any error.

Any help would be appreciated 

Best Regards 
Abdou



Abdou B

Jan 4, 2024, 11:27:37 AM
to metallb-users

Should the ingress-nginx-controller service have a public IP? It is still pending.

:~/k8s/metalb$ kubectl get svc --selector app.kubernetes.io/instance=ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.106.177.67   <pending>     80:30280/TCP,443:30112/TCP   2d3h
ingress-nginx-controller-admission   ClusterIP      10.98.173.107   <none>        443/TCP                      2d3h
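
Since I set loadBalancerClass: "nginx" in the MetalLB values, one thing I want to double-check is which load balancer class this service actually requests, e.g. with something like:

kubectl get svc ingress-nginx-controller -o jsonpath='{.spec.loadBalancerClass}'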

Best Regards
Abdou

Abdou B

Jan 4, 2024, 5:09:11 PM
to metallb-users
I created a test deployment and a test service (the deployment is sketched after the service manifest below).

apiVersion: v1
kind: Service
metadata:
  name: svc-nginx-test-metallb
spec:
  type: LoadBalancer
  selector:
    app: nginx-test-metallb
  ports:
    - protocol: TCP
      port: 8888
      nodePort: 30888
      targetPort: 80
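
For reference, the test deployment behind this service is essentially the following (a minimal sketch; I am assuming the stock nginx image and three replicas, matching the three endpoints shown further down):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-metallb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-test-metallb
  template:
    metadata:
      labels:
        app: nginx-test-metallb
    spec:
      containers:
        - name: nginx
          image: nginx        # assumed image; the real one may differ
          ports:
            - containerPort: 80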

As my service has no IP assigned to it, I looked at the controller logs; the only logs I see are the following:
kubectl logs -n metallb  $( kubectl get pod -n metallb | grep control | awk '{print $1}' ) --follow 

{"caller":"service_controller.go:60","controller":"ServiceReconciler","level":"info","start reconcile":"default/svc-nginx-test-metallb","ts":"2024-01-04T21:55:02Z"}
{"caller":"service_controller.go:74","controller":"ServiceReconciler","end reconcile":"default/svc-nginx-test-metallb","level":"info","ts":"2024-01-04T21:55:02Z"}
W0104 21:55:52.475627       1 warnings.go:70] metallb.io v1beta1 AddressPool is deprecated, consider using IPAddressPool

Also, the docs state that the service should have events attached to it, and my test service has no events attached to it...
~$ kubectl describe service svc-nginx-test-metallb
Name:                     svc-nginx-test-metallb
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx-test-metallb
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.239.140
IPs:                      10.99.239.140
Port:                     <unset>  8888/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30888/TCP
Endpoints:                10.36.0.1:80,10.44.0.6:80,10.47.0.6:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
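
To make sure describe is not hiding anything, I also queried events for the service directly, with something along these lines:

kubectl get events --field-selector involvedObject.kind=Service,involvedObject.name=svc-nginx-test-metallb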

Best Regards
Abdou 


Abdou B

Jan 4, 2024, 9:05:30 PM
to metallb-users
I have done additional checks:
* I checked whether kube-proxy is using iptables or ipvs; it is using iptables (see the check sketched after the manifests below).
* I checked the L2Advertisement configuration (below).
* The service is accessible from inside the cluster, and on the NodePort from outside the cluster.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: metallb-pool
  namespace: metallb
spec:
  addresses:
  - 192.168.1.x-192.168.1.y

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv-metallb
  namespace: metallb
spec:
  ipAddressPools:
  - metallb-pool
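
For the iptables/ipvs check mentioned above, I looked at the kube-proxy configuration and logs, roughly like this (assuming a kubeadm-style kube-proxy ConfigMap):

kubectl -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i proxier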

I also deactivated IPv6 on all nodes in the cluster.

Best regards
Abdou
