Cannot get kube-dns to start on Kubernetes

Paul Braham

Feb 4, 2017, 5:09:40 PM
to kubernet...@googlegroups.com

Hoping someone can help. I have a 3-node CoreOS cluster running Kubernetes. The nodes are as follows:

192.168.1.201 - Controller
192.168.1.202 - Worker Node
192.168.1.203 - Worker Node

The cluster is up and running, and I can run the following commands:

> kubectl get nodes

NAME            STATUS                     AGE
192.168.1.201   Ready,SchedulingDisabled   1d
192.168.1.202   Ready                      21h
192.168.1.203   Ready                      21h

> kubectl get pods --namespace=kube-system

NAME                                    READY     STATUS             RESTARTS   AGE
kube-apiserver-192.168.1.201            1/1       Running            2          1d
kube-controller-manager-192.168.1.201   1/1       Running            4          1d
kube-dns-v20-h4w7m                      2/3       CrashLoopBackOff   15         23m
kube-proxy-192.168.1.201                1/1       Running            2          1d
kube-proxy-192.168.1.202                1/1       Running            1          21h
kube-proxy-192.168.1.203                1/1       Running            1          21h
kube-scheduler-192.168.1.201            1/1       Running            4          1d

As you can see, the kube-dns service is not running correctly. It keeps restarting and I am struggling to understand why. Any help in debugging this would be greatly appreciated (or pointers to where I can read about debugging it). Running kubectl logs does not bring anything back... not sure if the addons function differently to standard pods.
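Since the kube-dns pod runs multiple containers (kubedns, dnsmasq and healthz in the v20 addon, I believe), kubectl logs has to be pointed at one of them with -c, and --previous should show the output of the last crashed instance. Something like the following, with the pod name taken from the output above:

> kubectl logs kube-dns-v20-h4w7m --namespace=kube-system -c kubedns
> kubectl logs kube-dns-v20-h4w7m --namespace=kube-system -c kubedns --previous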

Running kubectl describe pods, I can see that the containers are being killed for being unhealthy:

16m           16m             1       {kubelet 192.168.1.203} spec.containers{kubedns}        Normal          Created         Created container with docker id 189afaa1eb0d; Security:[seccomp=unconfined]
16m           16m             1       {kubelet 192.168.1.203} spec.containers{kubedns}        Normal          Started         Started container with docker id 189afaa1eb0d
14m           14m             1       {kubelet 192.168.1.203} spec.containers{kubedns}        Normal          Killing         Killing container with docker id 189afaa1eb0d: pod "kube-dns-v20-h4w7m_kube-system(3a545c95-ea19-11e6-aa7c-52540021bfab)" container "kubedns" is unhealthy, it will be killed and re-created

Please find a full output of this command as a github gist here: https://gist.github.com/mehstg/0b8016f5398a8781c3ade8cf49c02680
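If it is the liveness probe that is failing, the probe configuration (path, port, timeouts and delays) should be visible in the full pod spec, e.g.:

> kubectl get pod kube-dns-v20-h4w7m --namespace=kube-system -o yaml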

Thanks in advance!

meh...@gmail.com

Feb 4, 2017, 5:16:58 PM
to Kubernetes user discussion and Q&A, meh...@gmail.com
To add something I missed out of my original post: I am also running Flannel on the hosts, hence the 10.10.0.0 addresses.
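For what it is worth, a quick sanity check of pod networking and service DNS (busybox is just a convenient stock image here; kubernetes.default should resolve once kube-dns is healthy):

> kubectl run -i -t busybox --image=busybox --restart=Never -- nslookup kubernetes.default

On each host, cat /run/flannel/subnet.env should show the subnet Flannel has leased to that node, which is a way to confirm the overlay is configured as expected.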