Using kubeadm to install a Kubernetes cluster on Vagrant with CentOS 7

Pavel Moukhataev

Mar 1, 2017, 4:08:45 PM3/1/17
to kubernetes-sig-cluster-lifecycle
Hello
I use kubeadm to install a Kubernetes cluster. I'm using Vagrant with CentOS 7. The configuration is pretty simple - a single master node and two worker nodes. And I have several questions and proposals.
I'm using this installation guide - https://kubernetes.io/docs/getting-started-guides/kubeadm/ - and this one - https://kubernetes.io/docs/admin/kubeadm/ - to define more fine-grained parameters.

First of all, could you describe in more detail the list of services on the master and worker nodes and how those services are started - it seems that systemctl is used for some services while others are started in a Docker environment. It would also be good to mention how those services are to be restarted. For example, if a non-default network is to be used for services (--service-cidr), then the configuration file has to be changed manually (https://kubernetes.io/docs/admin/kubeadm/), but there is no information on how to restart the kubelet.
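I assume that, since the kubelet is a systemd unit on CentOS 7, restarting it after a manual configuration change is just the usual systemd sequence (a sketch, assuming the stock kubeadm RPM layout), but it would be good to have this stated explicitly in the guide:

      # reload unit files after editing the kubelet configuration, then restart the service
      sudo systemctl daemon-reload
      sudo systemctl restart kubelet
      # verify it came back up
      sudo systemctl status kubelet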

Next thing - I ran into a problem related to cluster networking. Let me describe everything from scratch.
I installed Vagrant 1.9.1 and VirtualBox 5.1.6 on Xubuntu 16.04.1 LTS. I used kubeadm from the http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 repository as described here: https://kubernetes.io/docs/getting-started-guides/kubeadm/
I used the following configuration:
192.168.100.10 - master host ip, dns names - master, master.mykub
192.168.100.11 - worker1 host ip, dns names - slave1, slave1.mykub
192.168.100.12 - worker2 host ip, dns names - slave2, slave2.mykub
I used the /etc/hosts file as the domain name provider.
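So the /etc/hosts entries on every VM look roughly like this (reconstructed from the addresses above; the actual hosts file is attached below):

      192.168.100.10  master  master.mykub
      192.168.100.11  slave1  slave1.mykub
      192.168.100.12  slave2  slave2.mykub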

I decided to use flannel for pod communication.
Pods network: 172.18.0.0/20
Service network: 192.168.101.0/24

I created a Vagrantfile to start and provision the hosts. I chose centos/7 as vm.box and used static host IPs. First I ran into a Vagrant and centos/7 related problem - IPs were not assigned at the first host start. That problem is described here - stackoverflow/centos7-with-private-network-lost-fixed-ip. It can be fixed by:

In /opt/vagrant/embedded/gems/gems/vagrant-1.9.1/plugins/guests/redhat/cap/configure_networks.rb, add /sbin/ifup '#{network[:device]}' right after nmcli c reload || true.
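In other words, the shell template in that file ends up looking roughly like this (a sketch; the exact surrounding lines may differ between Vagrant versions):

      nmcli c reload || true
      /sbin/ifup '#{network[:device]}'    # added line: explicitly bring the device up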


I disabled SELinux, iptables and firewalld - see install_kube.sh.
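On CentOS 7 this boils down to something like the following (a sketch of the usual commands, not the exact contents of the attached script):

      # put SELinux into permissive mode now and on the next boot
      sudo setenforce 0
      sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
      # stop and disable firewalld (kube-proxy still manages its own iptables rules)
      sudo systemctl stop firewalld
      sudo systemctl disable firewalld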

To provision the master host I used kubeadm init with the following parameters (the combined command is put together after the list):

--api-advertise-addresses 192.168.100.10 - this is needed because Vagrant defines eth0 as the default routable interface for the provisioned VMs

--pod-network-cidr 172.18.0.0/20

--service-cidr 192.168.101.0/24

--token <previously_generated_token> - this is to use a predefined token instead of parsing the kubeadm output

--api-external-dns-names=master,master.mykub - this is to include the appropriate domain names in the master certificate and to allow HTTPS access to it using its name
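Putting it together, the init invocation looks roughly like this (the token value is a placeholder):

      sudo kubeadm init \
        --api-advertise-addresses 192.168.100.10 \
        --pod-network-cidr 172.18.0.0/20 \
        --service-cidr 192.168.101.0/24 \
        --token <previously_generated_token> \
        --api-external-dns-names=master,master.mykub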


And after running kubeadm I also installed the flannel, dashboard and Weave Scope add-ons (see Vagrantfile):

      sudo kubectl create -f /vagrant/kube-flannel.yml

      sudo kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml

      sudo kubectl apply -f 'https://cloud.weave.works/launch/k8s/weavescope.yaml'


To make flannel use the appropriate eth1 interface instead of eth0, I changed the kube-flannel.yml file:

...

command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface", "eth1" ]

...



After that I provisioned the worker hosts, which is straightforward:

kubeadm join --token=<previously_generated_token> 192.168.100.10




After that I found that flanneld didn't start properly on worker nodes. And I don't understand how it is supposed to work.

flanneld is supposed to be running in a pod with host network - see kube-flannel.yml: hostNetwork: true

and as far as I know, iptables rules are used to implement the service network - they have to route all requests to service-network IPs through the rules that kube-proxy sets up.



And I see the flannel pod is failing to start on the worker nodes because of:

E0301 20:51:49.862025       1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'default/kube-flannel-ds-kvk20': Get https://192.168.101.1:443/api/v1/namespaces/default/pods/kube-flannel-ds-kvk20: dial tcp 192.168.101.1:443: i/o timeout


So flannel is trying to access my service network. It seems it assumes that the API server is at 192.168.101.1, tries to access it and fails.
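As far as I understand, 192.168.101.1 is the cluster IP of the default kubernetes service - the first address of the service CIDR I passed to kubeadm. It can be checked on the master with:

      kubectl get svc kubernetes

which should show that address in the CLUSTER-IP column.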



And I can't understand how it can reach the API server through the Kubernetes service network - the service network itself requires the network layer to be working (flannel + kube-proxy in my case), so how can flannel use it? Why is it accessing 192.168.101.1 instead of accessing the master host directly - 192.168.100.10?


So on my worker node I have the following interfaces:

eth0 - 10.0.2.15/24 - Vagrant-specific NAT interface
eth1 - 192.168.100.11/24 - network used for communication between hosts
docker0 - 172.17.0.1/16



# ip route
default via 10.0.2.2 dev eth0  proto static  metric 100
10.0.2.0/24 dev eth0  proto kernel  scope link  src 10.0.2.15  metric 100
169.254.0.0/16 dev eth1  scope link  metric 1003
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
192.168.100.0/24 dev eth1  proto kernel  scope link  src 192.168.100.11




# iptables-save 
# Generated by iptables-save v1.4.21 on Wed Mar  1 21:05:36 2017
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-A3RDGLC7TMWZTFBP - [0:0]
:KUBE-SEP-KLMOTHZKN3LNJ7NB - [0:0]
:KUBE-SEP-S77W6PMQVTFQMRF2 - [0:0]
:KUBE-SEP-V35Q5WM2LXOHVZII - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-IOW2GPERHFNTRP6T - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:KUBE-SVC-XGLOHA7QRQ3V22RZ - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30113 -j KUBE-MARK-MASQ
-A KUBE-NODEPORTS -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp --dport 30113 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-A3RDGLC7TMWZTFBP -s 192.168.100.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ
-A KUBE-SEP-A3RDGLC7TMWZTFBP -p tcp -m comment --comment "default/kubernetes:https" -m recent --set --name KUBE-SEP-A3RDGLC7TMWZTFBP --mask 255.255.255.255 --rsource -m tcp -j DNAT --to-destination 192.168.100.10:6443
-A KUBE-SEP-KLMOTHZKN3LNJ7NB -s 172.18.0.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-KLMOTHZKN3LNJ7NB -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 172.18.0.2:53
-A KUBE-SEP-S77W6PMQVTFQMRF2 -s 172.18.0.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-S77W6PMQVTFQMRF2 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 172.18.0.2:53
-A KUBE-SEP-V35Q5WM2LXOHVZII -s 172.18.0.3/32 -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-MARK-MASQ
-A KUBE-SEP-V35Q5WM2LXOHVZII -p tcp -m comment --comment "kube-system/kubernetes-dashboard:" -m tcp -j DNAT --to-destination 172.18.0.3:9090
-A KUBE-SERVICES -d 192.168.101.182/32 -p tcp -m comment --comment "kube-system/kubernetes-dashboard: cluster IP" -m tcp --dport 80 -j KUBE-SVC-XGLOHA7QRQ3V22RZ
-A KUBE-SERVICES -d 192.168.101.22/32 -p tcp -m comment --comment "default/weave-scope-app:app cluster IP" -m tcp --dport 80 -j KUBE-SVC-IOW2GPERHFNTRP6T
-A KUBE-SERVICES -d 192.168.101.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -d 192.168.101.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -d 192.168.101.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-S77W6PMQVTFQMRF2
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -m recent --rcheck --seconds 10800 --reap --name KUBE-SEP-A3RDGLC7TMWZTFBP --mask 255.255.255.255 --rsource -j KUBE-SEP-A3RDGLC7TMWZTFBP
-A KUBE-SVC-NPX46M4PTMTKRN6Y -m comment --comment "default/kubernetes:https" -j KUBE-SEP-A3RDGLC7TMWZTFBP
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-KLMOTHZKN3LNJ7NB
-A KUBE-SVC-XGLOHA7QRQ3V22RZ -m comment --comment "kube-system/kubernetes-dashboard:" -j KUBE-SEP-V35Q5WM2LXOHVZII
COMMIT
# Completed on Wed Mar  1 21:05:36 2017
# Generated by iptables-save v1.4.21 on Wed Mar  1 21:05:36 2017
*filter
:INPUT ACCEPT [4:196]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3:884]
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION -j RETURN
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-SERVICES -d 192.168.101.22/32 -p tcp -m comment --comment "default/weave-scope-app:app has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Wed Mar  1 21:05:36 2017

Attachments: hosts, id_rsa.pub, install_kube.sh, install_ssh.sh, kube-flannel.yml, selinux, Vagrantfile

Pavel Moukhataev

Mar 2, 2017, 6:52:18 PM3/2/17
to kubernetes-sig-cluster-lifecycle
It seems that I found the issue.

Flannel is trying to access the kubernetes service at 192.168.101.1, which is part of my service network 192.168.101.0/24. Host iptables rules are used to implement this network, and it seems there is a source IP check in the iptables rules:
-A KUBE-SEP-A3RDGLC7TMWZTFBP -s 192.168.100.10/32 -m comment --comment "default/kubernetes:https" -j KUBE-MARK-MASQ

And in the Vagrant hosts there are two network interfaces: eth0 is used by Vagrant and for internet access, and eth1 is used for communication between the hosts.

So if I use 'curl http://192.168.101.1' it fails. But if I use 'curl --interface eth1 http://192.168.101.1' then it succeeds.

This can be treated as a flannel bug, since flannel uses the default interface to access the API/etcd server even though another interface is specified.

So I need to do something to make it possible to access both the internet and the other hosts using the same interface. I suppose this is a Vagrant-specific issue.
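One possible workaround (just a sketch, I haven't verified it) would be to add a static route for the service CIDR via eth1, so that the kernel picks eth1's source address for connections to service IPs:

      # hypothetical workaround (untested): route the service network via eth1 so that
      # the source address of service traffic is the host's eth1 address
      sudo ip route add 192.168.101.0/24 dev eth1 src 192.168.100.11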


On Thursday, March 2, 2017 at 0:08:45 UTC+3, Pavel Moukhataev wrote:

Vasista T

Apr 21, 2017, 7:13:39 AM4/21/17
to kubernetes-sig-cluster-lifecycle
I'm using a similar installation.

CentOS 7 VM.

My ip route output is:

default via 192.168.170.1 dev ens160  proto static  metric 100
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
192.168.170.0/23 dev ens160  proto kernel  scope link  src 192.168.170.152  metric 100

I can see ens160 instead of eth0 or eth1.

Is this an issue?


- Vasista