I used kubeadm to install a Kubernetes cluster, using Vagrant with CentOS 7. The configuration is pretty simple: a single master node and two worker nodes. I have several questions and proposals.
Next, I ran into a problem related to cluster networking. Let me describe everything from scratch.
I decided to use flannel for pod-to-pod communication.
In /opt/vagrant/embedded/gems/gems/vagrant-1.9.1/plugins/guests/redhat/cap/configure_networks.rb, add /sbin/ifup '#{network[:device]}' right after nmcli c reload || true.
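For reference, after the patch the relevant spot in that file contains the two lines in sequence (only these two lines come from the change; the surrounding Ruby is omitted here, and the point is presumably to force the private-network interface up after provisioning):
nmcli c reload || true
/sbin/ifup '#{network[:device]}'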
I disabled SELinux, iptables, and firewalld - see install_kube.sh.
To provision the master host I ran kubeadm init with the following parameters (combined into a single command after the list):
--api-advertise-addresses 192.168.100.10 - needed because Vagrant makes eth0 the default routable interface on the provisioned VMs, so the API server has to advertise the eth1 address instead
--pod-network-cidr 172.18.0.0/20
--service-cidr 192.168.101.0/24
--token <previously_generated_token> - this is to use a predefined token instead of parsing the kubeadm output
--api-external-dns-names=master,master.mykub - this is to include the appropriate domain names in the master certificate and to allow HTTPS access to the API server by name
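Put together, the full invocation looks like this (token placeholder as above):
sudo kubeadm init \
  --api-advertise-addresses 192.168.100.10 \
  --pod-network-cidr 172.18.0.0/20 \
  --service-cidr 192.168.101.0/24 \
  --token <previously_generated_token> \
  --api-external-dns-names=master,master.mykub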
After running kubeadm I also installed the flannel, dashboard, and Weave Scope add-ons (see Vagrantfile):
sudo kubectl create -f /vagrant/kube-flannel.yml
sudo kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
sudo kubectl apply -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml'
To make flannel use the eth1 interface instead of eth0, I changed the flanneld command in kube-flannel.yml:
...
command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface", "eth1" ]
...
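On a node where flanneld does start, a quick way to confirm it picked eth1 is to check its log (the pod name suffix is illustrative, the container name is taken from kube-flannel.yml, and the exact log wording may differ between flannel versions):
kubectl logs kube-flannel-ds-xxxxx -c kube-flannel | grep -i interface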
After that I provisioned the worker hosts, which is straightforward:
kubeadm join --token=<previously_generated_token> 192.168.100.10
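On the master, the workers should appear shortly after the join, and the per-node flannel pods are what fail in the next step (pod names and output are illustrative):
kubectl get nodes
kubectl get pods --all-namespaces -o wide | grep flannel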
After that I found that flanneld didn't start properly on the worker nodes, and I don't understand how it is supposed to work.
flanneld is supposed to run in a pod with the host network (see hostNetwork: true in kube-flannel.yml), and as far as I know the service network is implemented with iptables rules that redirect all requests for service IPs to kube-proxy.
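The host-network part is easy to confirm (assuming the DaemonSet is named kube-flannel-ds in the default namespace, as the failing pod name below suggests):
kubectl get ds kube-flannel-ds -o jsonpath='{.spec.template.spec.hostNetwork}'
This should print true.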
And I can see that the flannel pod fails to start on the worker nodes because of:
E0301 20:51:49.862025 1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'default/kube-flannel-ds-kvk20': Get https://192.168.101.1:443/api/v1/namespaces/default/pods/kube-flannel-ds-kvk20: dial tcp 192.168.101.1:443: i/o timeout
So flannel is trying to go through my service network: it seems to assume the API server is at 192.168.101.1, tries to reach it, and fails.
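That address is not arbitrary: pods get the API endpoint from their service-account environment, and it is the ClusterIP of the default/kubernetes service, i.e. the first usable address of the --service-cidr range. This can be checked with (output illustrative):
kubectl get svc kubernetes
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   192.168.101.1   <none>        443/TCP   1h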
And I can't understand how it is supposed to reach the API server through the Kubernetes service network: the service network itself requires the networking layer (flannel + kube-proxy in my case) to be working, so how can flannel use it? Why is it accessing 192.168.101.1 instead of accessing the master host directly at 192.168.100.10?
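A useful test from the worker node is to compare the service VIP with the real endpoint; the VIP can only work if kube-proxy has programmed the DNAT rules shown further below (KUBE-SVC-NPX46M4PTMTKRN6Y) and the node can reach the master:
curl -k https://192.168.101.1/version
curl -k https://192.168.100.10:6443/version
If the apiserver does not allow unauthenticated access to /version, even a 401/403 response still proves connectivity; a timeout on the first command but not the second points at the service VIP path.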
So on my worker node I have interfaces:
eth0 - 10.0.2.15/24 - the Vagrant-specific default interface
eth1 - 192.168.100.11/24 - network used for communication between hosts
docker0 - 172.17.0.1/16
# ip route
default via 10.0.2.2 dev eth0 proto static metric 100
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100
169.254.0.0/16 dev eth1 scope link metric 1003
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.100.0/24 dev eth1 proto kernel scope link src 192.168.100.11
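Note that the route table has no entry for the service CIDR 192.168.101.0/24; service IPs are purely virtual, so without the KUBE-SERVICES DNAT rules below a packet for 192.168.101.1 simply follows the default route:
ip route get 192.168.101.1
192.168.101.1 via 10.0.2.2 dev eth0 src 10.0.2.15 (illustrative output)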
# iptables-save
# Generated by iptables-save v1.4.21 on Wed Mar 1 21:05:36 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-A3RDGLC7TMWZTFBP - [0:0]
:KUBE-SEP-KLMOTHZKN3LNJ7NB - [0:0]
:KUBE-SEP-S77W6PMQVTFQMRF2 - [0:0]
:KUBE-SEP-V35Q5WM2LXOHVZII - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-IOW2GPERHFNTRP6T - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30113 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30113 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-A3RDGLC7TMWZTFBP -s 192.168.100.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-A3RDGLC7TMWZTFBP -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-A3RDGLC7TMWZTFBP --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.100.10:6443
-A KUBE-SEP-KLMOTHZKN3LNJ7NB -s 172.18.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-KLMOTHZKN3LNJ7NB -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.18.0.2:53
-A KUBE-SEP-S77W6PMQVTFQMRF2 -s 172.18.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S77W6PMQVTFQMRF2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.18.0.2:53
-A KUBE-SEP-V35Q5WM2LXOHVZII -s 172.18.0.3/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-V35Q5WM2LXOHVZII -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 172.18.0.3:9090
-A KUBE-SERVICES -d 192.168.101.182/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -d 192.168.101.22/32 -p tcp -m comment --comment "default/weave-scope-app:app cluster IP" -m tcp --dport 80 -j KUBE-SVC-IOW2GPERHFNTRP6T
-A KUBE-SERVICES -d 192.168.101.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 192.168.101.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 192.168.101.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-S77W6PMQVTFQMRF2
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-A3RDGLC7TMWZTFBP --mask 255.255.255.255 --rsource -j KUBE-SEP-A3RDGLC7TMWZTFBP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-A3RDGLC7TMWZTFBP
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-KLMOTHZKN3LNJ7NB
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-V35Q5WM2LXOHVZII
COMMIT
# Completed on Wed Mar 1 21:05:36 2017
# Generated by iptables-save v1.4.21 on Wed Mar 1 21:05:36 2017
*filter
:INPUT ACCEPT [4:196]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3:884]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 192.168.101.22/32 -p tcp -m comment --comment "default/weave-scope-app:app has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Wed Mar 1 21:05:36 2017
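The DNAT path for the apiserver service is there (KUBE-SERVICES -> KUBE-SVC-NPX46M4PTMTKRN6Y -> 192.168.100.10:6443), so one way to see whether it is actually hit while flannel is timing out is to watch the packet counters on that chain:
iptables -t nat -L KUBE-SVC-NPX46M4PTMTKRN6Y -n -v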