Hyper-V + Ubuntu + kubeadm + Calico + MetalLB = external IP pending


Sébastien Dionne

Jun 6, 2020, 4:59:12 PM
to metallb-users
I have a small setup on Hyper-V (Windows 10): 1 master + 2 workers. I'm not able to obtain an external IP for nginx.

My 3 VMs have different MAC addresses and they are all on the same external virtual switch, "vagrant", with the option that lets the host OS share the network adapter enabled (I had to translate from French, so I'm not sure of the exact English term).


kubectl get nodes -o wide

NAME          STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
k8s-master    Ready    master   4h56m   v1.18.3   192.168.50.8    <none>        Ubuntu 18.04.4 LTS   4.15.0-101-generic   docker://19.3.11
k8s-worker1   Ready    <none>   4h52m   v1.18.3   192.168.50.9    <none>        Ubuntu 18.04.4 LTS   4.15.0-101-generic   docker://19.3.11
k8s-worker2   Ready    <none>   4h49m   v1.18.3   192.168.50.10   <none>        Ubuntu 18.04.4 LTS   4.15.0-101-generic   docker://19.3.11
vagrant@k8s-master:/vagrant/provision$

I installed MetalLB with these commands:

      - kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
      - kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
      - kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
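
As a quick check after these, kubectl get pods -n metallb-system should show the controller and speaker pods Running (the full output appears further down in this thread).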

My configuration is:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.200-192.168.50.250
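
I apply it with kubectl apply -f metallb-config.yaml.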


I installed nginx with this manifest (from https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/tutorial-2.yaml, but I had to change the apiVersion of the Deployment):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1
        ports:
        - name: http
          containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer


vagrant@k8s-master:/vagrant/provision$ kubectl get svc -o wide
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE     SELECTOR
kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP        4h59m   <none>
nginx        LoadBalancer   10.101.204.85   <pending>     80:32134/TCP   34m     app=nginx

Sébastien Dionne

Jun 7, 2020, 7:26:22 AM
to metallb-users
The MetalLB controller can't even reach the API server at its ClusterIP; its logs are full of timeouts to 10.96.0.1:

kubectl logs --namespace metallb-system controller-57f648cb96-2wbgv

Trace[1343973922]: [30.000596592s] [30.000596592s] END
E0607 11:24:11.242032       1 reflector.go:125] pkg/mod/k8s.io/clie...@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0607 11:24:11.243589       1 trace.go:81] Trace[1101847029]: "Reflector pkg/mod/k8s.io/clie...@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch" (started: 2020-06-07 11:23:41.243022676 +0000 UTC m=+3608.443086730) (total time: 30.000545392s):
Trace[1101847029]: [30.000545392s] [30.000545392s] END
E0607 11:24:11.243640       1 reflector.go:125] pkg/mod/k8s.io/clie...@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.ConfigMap: Get https://10.96.0.1:443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
vagrant@k8s-master:~$


vagrant@k8s-master:~$ kubectl get all --all-namespaces -o wide
NAMESPACE            NAME                                           READY   STATUS    RESTARTS   AGE   IP                NODE          NOMINATED NODE   READINESS GATES
default              pod/nginx-584d4f8b45-xb92d                     1/1     Running   2          11h   192.168.194.68    k8s-worker1   <none>           <none>
kube-system          pod/calico-kube-controllers-76d4774d89-jf525   1/1     Running   4          22h   192.168.235.211   k8s-master    <none>           <none>
kube-system          pod/calico-node-6rlnt                          1/1     Running   4          22h   192.168.50.8      k8s-master    <none>           <none>
kube-system          pod/calico-node-8jnrr                          1/1     Running   4          22h   192.168.50.10     k8s-worker2   <none>           <none>
kube-system          pod/calico-node-mdlrr                          1/1     Running   4          22h   192.168.50.9      k8s-worker1   <none>           <none>
kube-system          pod/coredns-66bff467f8-6ljdd                   1/1     Running   4          22h   192.168.235.212   k8s-master    <none>           <none>
kube-system          pod/coredns-66bff467f8-sphrn                   1/1     Running   4          22h   192.168.235.209   k8s-master    <none>           <none>
kube-system          pod/etcd-k8s-master                            1/1     Running   4          22h   192.168.50.8      k8s-master    <none>           <none>
kube-system          pod/kube-apiserver-k8s-master                  1/1     Running   4          22h   192.168.50.8      k8s-master    <none>           <none>
kube-system          pod/kube-controller-manager-k8s-master         1/1     Running   5          22h   192.168.50.8      k8s-master    <none>           <none>
kube-system          pod/kube-proxy-82t4k                           1/1     Running   4          22h   192.168.50.10     k8s-worker2   <none>           <none>
kube-system          pod/kube-proxy-vxvp7                           1/1     Running   4          22h   192.168.50.9      k8s-worker1   <none>           <none>
kube-system          pod/kube-proxy-x6h28                           1/1     Running   4          22h   192.168.50.8      k8s-master    <none>           <none>
kube-system          pod/kube-scheduler-k8s-master                  1/1     Running   5          22h   192.168.50.8      k8s-master    <none>           <none>
local-path-storage   pod/local-path-provisioner-7d9c4586c4-wxp62    1/1     Running   5          22h   192.168.235.210   k8s-master    <none>           <none>
metallb-system       pod/controller-57f648cb96-2wbgv                1/1     Running   4          21h   192.168.126.7     k8s-worker2   <none>           <none>
metallb-system       pod/speaker-dc9bp                              1/1     Running   8          21h   192.168.50.9      k8s-worker1   <none>           <none>
metallb-system       pod/speaker-nsgv9                              1/1     Running   5          21h   192.168.50.8      k8s-master    <none>           <none>
metallb-system       pod/speaker-vrd82                              1/1     Running   8          21h   192.168.50.10     k8s-worker2   <none>           <none>

NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE   SELECTOR
default       service/kubernetes   ClusterIP      10.96.0.1       <none>        443/TCP                  22h   <none>
default       service/nginx        LoadBalancer   10.101.204.85   <pending>     80:32134/TCP             18h   app=nginx
kube-system   service/kube-dns     ClusterIP      10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   22h   k8s-app=kube-dns

NAMESPACE        NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE   CONTAINERS    IMAGES                          SELECTOR
kube-system      daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux        22h   calico-node   calico/node:v3.14.1             k8s-app=calico-node
kube-system      daemonset.apps/kube-proxy    3         3         3       3            3           kubernetes.io/os=linux        22h   kube-proxy    k8s.gcr.io/kube-proxy:v1.18.3   k8s-app=kube-proxy
metallb-system   daemonset.apps/speaker       3         3         3       3            3           beta.kubernetes.io/os=linux   21h   speaker       metallb/speaker:v0.9.3          app=metallb,component=speaker

NAMESPACE            NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                   SELECTOR
default              deployment.apps/nginx                     1/1     1            1           18h   nginx                     nginx:1                                  app=nginx
kube-system          deployment.apps/calico-kube-controllers   1/1     1            1           22h   calico-kube-controllers   calico/kube-controllers:v3.14.1          k8s-app=calico-kube-controllers
kube-system          deployment.apps/coredns                   2/2     2            2           22h   coredns                   k8s.gcr.io/coredns:1.6.7                 k8s-app=kube-dns
local-path-storage   deployment.apps/local-path-provisioner    1/1     1            1           22h   local-path-provisioner    rancher/local-path-provisioner:v0.0.14   app=local-path-provisioner
metallb-system       deployment.apps/controller                1/1     1            1           21h   controller                metallb/controller:v0.9.3                app=metallb,component=controller

NAMESPACE            NAME                                                 DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                   SELECTOR
default              replicaset.apps/nginx-584d4f8b45                     1         1         1       18h   nginx                     nginx:1                                  app=nginx,pod-template-hash=584d4f8b45
kube-system          replicaset.apps/calico-kube-controllers-76d4774d89   1         1         1       22h   calico-kube-controllers   calico/kube-controllers:v3.14.1          k8s-app=calico-kube-controllers,pod-template-hash=76d4774d89
kube-system          replicaset.apps/coredns-66bff467f8                   2         2         2       22h   coredns                   k8s.gcr.io/coredns:1.6.7                 k8s-app=kube-dns,pod-template-hash=66bff467f8
local-path-storage   replicaset.apps/local-path-provisioner-7d9c4586c4    1         1         1       22h   local-path-provisioner    rancher/local-path-provisioner:v0.0.14   app=local-path-provisioner,pod-template-hash=7d9c4586c4
metallb-system       replicaset.apps/controller-57f648cb96                1         1         1       21h   controller                metallb/controller:v0.9.3                app=metallb,component=controller,pod-template-hash=57f648cb96

Sébastien Dionne

Jun 7, 2020, 7:59:29 AM
to metallb-users
Here's the routing table on the master:

vagrant@k8s-master:~/test$ ip route
default via 192.168.50.1 dev eth0 proto dhcp src 192.168.50.8 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.50.0/24 dev eth0 proto kernel scope link src 192.168.50.8
192.168.50.1 via 192.168.50.9 dev tunl0 proto bird onlink
192.168.50.1 dev eth0 proto dhcp scope link src 192.168.50.8 metric 100
192.168.126.0/26 via 192.168.50.10 dev tunl0 proto bird onlink
192.168.194.64/26 via 192.168.50.9 dev tunl0 proto bird onlink
blackhole 192.168.235.192/26 proto bird
192.168.235.209 dev cali275d800bcdb scope link
192.168.235.210 dev cali0a687fc05a9 scope link
192.168.235.211 dev cali46626f893b9 scope link
192.168.235.212 dev cali28c13f8223b scope link

Sébastien Dionne

Jun 7, 2020, 8:06:01 AM
to metallb-users
I created the cluster with this:
kubeadm init --pod-network-cidr=192.168.0.0/16 --node-name k8s-master --control-plane-endpoint=192.168.50.8 --apiserver-advertise-address=192.168.50.8

Todor Petkov

Jun 7, 2020, 8:39:55 AM
to Sébastien Dionne, metallb-users
Hello,

Can you rebuild the cluster with a pod network somewhere in the
10.x.x.x range and try again? Currently your pod network
(192.168.0.0/16) overlaps with the host network (192.168.50.0/24), and
that can lead to issues; your routing table even shows your gateway
192.168.50.1 being routed via the Calico tunnel (tunl0).

Another option is to assign a 'static' IP address to the service: for
example, put "loadBalancerIP: 192.168.50.200" or any other unused
address from your pool in the spec part of the service YAML and see if
it works.
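
A minimal sketch of what that Service spec would look like (the
address is just one from your configured MetalLB pool):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  loadBalancerIP: 192.168.50.200
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer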

Sébastien Dionne

Jun 7, 2020, 8:45:30 AM
to metallb-users
No problem to recreate it. I pinned the MAC addresses of my master and worker nodes in my router, so I always get the same IPs for my tests; without that, I have a script that finds the local IP, and I use it to create the cluster with kubeadm.


Which file should I modify?

metallb-config.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.50.200-192.168.50.250


And here is my Ansible playbook that creates the master:

---
- hosts: all
  become: true
  tasks:
  - name: Get IP
    shell: ip route get 8.8.8.8 | fgrep src |  cut -f7 -d" "
    register: localip
 
  - debug:
      msg: LOCAL IP IS {{localip.stdout}}
 
  - name: Configure node ip
    lineinfile:
      path: /etc/default/kubelet
      line: KUBELET_EXTRA_ARGS=--node-ip={{ localip.stdout }}
      create: yes

  - name: Restart kubelet
    service:
      name: kubelet
      daemon_reload: yes
      state: restarted
     
  - debug:
      msg: kubeadm init --pod-network-cidr=192.168.0.0/16 --node-name k8s-master --control-plane-endpoint={{ localip.stdout }} --apiserver-advertise-address={{ localip.stdout }}

  - name: Initialize the Kubernetes cluster using kubeadm
    command: kubeadm init --pod-network-cidr=192.168.0.0/16 --node-name k8s-master --control-plane-endpoint={{ localip.stdout }} --apiserver-advertise-address={{ localip.stdout }}
 
  - name: Setup kubeconfig for vagrant user
    command: "{{ item }}"
    with_items:
      - mkdir -p /home/vagrant/.kube
      - chown vagrant:vagrant /home/vagrant/.kube
      - cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config
      - chown vagrant:vagrant /home/vagrant/.kube/config

  - name: Install calico pod network
    become: false
    command: kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
    #https://docs.projectcalico.org/manifests/calico.yaml

  - name: Generate join command
    command: kubeadm token create --print-join-command
    register: join_command

  - name: Copy join command to local file
    local_action: copy content="{{ join_command.stdout_lines[0] }}" dest="./join-command"

  - name: Install MetalLB (LoadBalancer)
    become: false
    # shell (not command) so the $(openssl ...) substitution is expanded
    shell: "{{ item }}"
    with_items:
      - kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
      - kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml
      - kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

  - name: Configure MetalLB (LoadBalancer)
    become: false
    command: "{{ item }}"
    with_items:
      - kubectl apply -f metallb-config.yaml

Todor Petkov

Jun 7, 2020, 9:18:20 AM
to Sébastien Dionne, metallb-users
On Sun, Jun 7, 2020 at 3:45 PM Sébastien Dionne
<sebastie...@gmail.com> wrote:
>
> No problem to recreate it. I pinned the MAC addresses of my master and worker nodes in my router, so I always get the same IPs for my tests; without that, I have a script that finds the local IP, and I use it to create the cluster with kubeadm.
>
>
> Which file should I modify?

Try to set "loadBalancerIP: 192.168.50.23" (or another unused address
in the network) in the nginx YAML file, in the "kind: Service" part,
and check whether the IP address shows up in "kubectl get svc -A".

As for initializing the cluster with a new network, replace
192.168.0.0/16 with 10.244.0.0/16 in the "Initialize the Kubernetes
cluster using kubeadm" task.
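
That is, the init command becomes:

kubeadm init --pod-network-cidr=10.244.0.0/16 --node-name k8s-master --control-plane-endpoint={{ localip.stdout }} --apiserver-advertise-address={{ localip.stdout }}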

Hope the first one helps.

Sébastien Dionne

Jun 7, 2020, 11:08:00 AM
to metallb-users
Thanks a lot!

Changing to kubeadm init --pod-network-cidr=10.244.0.0/16 fixed the issue.

I didn't have that issue with VMware, so I had no idea where the problem came from. Now it's working on Hyper-V.

