Can't get list of nodes


Pierre Mavro

Nov 30, 2016, 4:57:57 AM
to CoreOS User
Hi,

I'm trying to bootstrap a Kubernetes cluster and I'm having trouble getting the node list: it just returns nothing. However, the cluster looks correctly bootstrapped, judging by what these commands return:

$ kubectl get ns
NAME          STATUS    AGE
default       Active    5m
kube-system   Active    5m

$ kubectl get svc
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   5m

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}


I followed this documentation, and when I look at the kubelet status it seems OK too:

$ curl -s localhost:10255/pods | jq -r '.items[].metadata.name'
kube-apiserver-172.17.8.101
kube-controller-manager-172.17.8.101
kube-proxy-172.17.8.101
kube-scheduler-172.17.8.101



I didn't find anything relevant in the logs and don't understand why I can't get the node list. Does anyone have an idea?

Thanks in advance

Pierre

Rob Szumski

Nov 30, 2016, 12:07:23 PM
to Pierre Mavro, CoreOS User
Are there any hints in `kubectl get nodes --v=6`? How did you set up this cluster?


Pierre Mavro

Dec 1, 2016, 1:43:44 AM
to CoreOS User, deim...@gmail.com
Thanks for the answer, Rob. Here is what I've got:

$ kubectl get nodes --v=6
I1201 07:42:10.816372   22413 loader.go:354] Config loaded from file /home/pmavro/.kube/config
I1201 07:42:10.846837   22413 round_trippers.go:318] GET https://srv1.fqdn.com/api 200 OK in 25 milliseconds
I1201 07:42:10.848332   22413 round_trippers.go:318] GET https://srv1.fqdn.com/apis 200 OK in 1 milliseconds
I1201 07:42:10.851704   22413 round_trippers.go:318] GET https://srv1.fqdn.com/api 200 OK in 0 milliseconds
I1201 07:42:10.852655   22413 round_trippers.go:318] GET https://srv1.fqdn.com/apis 200 OK in 0 milliseconds
I1201 07:42:10.854770   22413 round_trippers.go:318] GET https://srv1.fqdn.com/api/v1/nodes 200 OK in 1 milliseconds

Maurizio Vitale

Dec 1, 2016, 10:01:27 AM
to Pierre Mavro, CoreOS User
It looks like your master is OK, but your worker nodes didn't register with it. Take a look at the kubelet's logs.

Also, it might be that your apiserver only replies to https (and your kubectl uses that), while its --insecure-bind-address=127.0.0.1 allows only local insecure access and the kubelets try the default http://master:8080.

Anyhow, I suspect the kubelet logs will tell you a lot.
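
For example, something like this should surface them on a worker (assuming the kubelet runs as a systemd unit named kubelet.service, which is what the CoreOS kubelet-wrapper setups usually use):

$ journalctl -u kubelet.service --no-pager | tail -n 50
$ journalctl -u kubelet.service -f     # follow new entries live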


Pierre Mavro

Dec 2, 2016, 2:13:17 AM
to CoreOS User, deim...@gmail.com
Thanks for the answer. It does look like a communication problem between the workers and the master on port 8080. However, from a configuration point of view, the documentation seems to consider it normal to have it listening only on localhost. Any suggestions? Here are the logs:

Dec 02 06:54:52 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:52.591160     797 kubelet_node_status.go:73] Attempting to register node 172.17.8.104
Dec 02 06:54:52 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:52.761383     797 eviction_manager.go:162] eviction manager: unexpected err: failed GetNode: node '172.17.8.104' not found
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:55.481971     797 event.go:208] Unable to write event: 'Post http://kub-master-host.myfqdn.com:8080/api/v1/namespaces/default/events: dial tcp 172.17.8.102:808
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:55.482386     797 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get http://kub-master-host.myfqdn.com:8080/api/v1/nodes?fieldSelector
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:55.482652     797 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get http://kub-master-host.myfqdn.com:8080/api/v1/services?resourc
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:55.483130     797 reflector.go:203] pkg/kubelet/config/apiserver.go:43: Failed to list *api.Pod: Get http://kub-master-host.myfqdn.com:8080/api/v1/pods?fieldSe
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:55.483429     797 kubelet_node_status.go:97] Unable to register node "172.17.8.104" with API server: Post http://kub-master-host.myfqdn.com:8080/api/v1/nodes:
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:55.683847     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:54:55 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:55.685623     797 kubelet_node_status.go:73] Attempting to register node 172.17.8.104
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.313872     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: W1202 06:54:57.315793     797 pod_container_deletor.go:77] Container "58195f3f59ed8969f27b41d985fe3e50e62c4068d156bd518da8183d894db258" not found in pod's containers
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: W1202 06:54:57.315810     797 pod_container_deletor.go:77] Container "ba5d53533d603a1908105f9de78c023168ce0bd43bd34d2ad7eb6f7808096459" not found in pod's containers
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.315895     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.372321     797 reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-ssl-certs
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.372753     797 reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-kubeconfi
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.372991     797 reconciler.go:229] VerifyControllerAttachedVolume operation started for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-etc-kube-
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.473965     797 operation_executor.go:900] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-ssl-certs" (spec.Name:
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.474331     797 operation_executor.go:900] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-kubeconfig" (spec.Name
Dec 02 06:54:57 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:57.474561     797 operation_executor.go:900] MountVolume.SetUp succeeded for volume "kubernetes.io/host-path/96d9a4425adf2047d3f3b6bd7833ee5f-etc-kube-ssl" (spec.Na
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.486870     797 kubelet.go:1799] Failed creating a mirror pod for "kube-proxy-172.17.8.104_kube-system(96d9a4425adf2047d3f3b6bd7833ee5f)": Post http://kub-master-
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: W1202 06:54:58.487347     797 status_manager.go:450] Failed to update status for pod "_()": Get http://kub-master-host.myfqdn.com:8080/api/v1/namespaces/kube-system/pods/k
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.487554     797 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get http://kub-master-host.myfqdn.com:8080/api/v1/services?resourc
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.487745     797 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get http://kub-master-host.myfqdn.com:8080/api/v1/nodes?fieldSelector
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.487781     797 reflector.go:203] pkg/kubelet/config/apiserver.go:43: Failed to list *api.Pod: Get http://kub-master-host.myfqdn.com:8080/api/v1/pods?fieldSe
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.487806     797 kubelet_node_status.go:97] Unable to register node "172.17.8.104" with API server: Post http://kub-master-host.myfqdn.com:8080/api/v1/nodes:
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.814018     797 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:58.912156     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:58.915249     797 kubelet_node_status.go:73] Attempting to register node 172.17.8.104
Dec 02 06:54:58 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:54:58.938849     797 docker_manager.go:746] Logging security options: {key:seccomp value:unconfined msg:}
Dec 02 06:54:59 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:54:59.601717     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:55:01.493400     797 reflector.go:203] pkg/kubelet/config/apiserver.go:43: Failed to list *api.Pod: Get http://kub-master-host.myfqdn.com:8080/api/v1/pods?fieldSe
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:55:01.493991     797 reflector.go:203] pkg/kubelet/kubelet.go:403: Failed to list *api.Node: Get http://kub-master-host.myfqdn.com:8080/api/v1/nodes?fieldSelector
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:55:01.494206     797 reflector.go:203] pkg/kubelet/kubelet.go:384: Failed to list *api.Service: Get http://kub-master-host.myfqdn.com:8080/api/v1/services?resourc
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:55:01.494373     797 kubelet_node_status.go:97] Unable to register node "172.17.8.104" with API server: Post http://kub-master-host.myfqdn.com:8080/api/v1/nodes:
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: E1202 06:55:01.495393     797 kubelet.go:1799] Failed creating a mirror pod for "kube-proxy-172.17.8.104_kube-system(96d9a4425adf2047d3f3b6bd7833ee5f)": Post http://kub-master-
Dec 02 06:55:01 core04.myfqdn.com kubelet-wrapper[797]: W1202 06:55:01.496765     797 status_manager.go:450] Failed to update status for pod "_()": Get http://kub-master-host.myfqdn.com:8080/api/v1/namespaces/kube-system/pods/k
Dec 02 06:55:02 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:55:02.294609     797 kubelet_node_status.go:203] Setting node annotation to enable volume controller attach/detach
Dec 02 06:55:02 core04.myfqdn.com kubelet-wrapper[797]: I1202 06:55:02.296420     797 kubelet_node_status.go:73] Attempting to register node 172.17.8.104

Maurizio Vitale

Dec 2, 2016, 10:00:30 AM
to Pierre Mavro, CoreOS User
Difficult to say. I'd start by checking that:

1. From core04 you can reach 172.17.8.102. Check your routes.
2. The apiserver is listening on port 8080 (what are your --insecure-port and --insecure-bind-address?). Do something like curl http://172.17.8.102:8080/api from core04 (I think you said you can do it locally from the master, but verify that too with 127.0.0.1 from the master). On CoreOS you probably don't have curl; wget -q -O - would be a replacement.

With those two things checked, there's no reason the kubelet shouldn't be able to register the node.
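
For example, from core04, something along these lines covers both checks (wget being the fallback if curl isn't there):

$ ping -c 3 172.17.8.102
$ ip route get 172.17.8.102
$ curl -s http://172.17.8.102:8080/api
$ wget -q -O - http://172.17.8.102:8080/api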



Pierre Mavro

Dec 2, 2016, 6:39:25 PM
to CoreOS User, deim...@gmail.com
Thanks for the answer. Here is what I found:

1. Routing is OK, I can ping.
2. The apiserver is listening on 8080, but only on localhost, so curl can't work.
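
One way to confirm the bind on the master, assuming ss from iproute2 is available (it usually is on CoreOS):

$ ss -tlnp | grep 8080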

Here is my apiserver conf:

apiVersion: v1
kind: Pod
metadata:
    name: kube-apiserver
    namespace: kube-system
spec:
    containers:
    -   command:
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --etcd-servers=http://srv01.myfqdn.com:2379,http://srv02.myfqdn.com:2379,http://srv03.myfqdn.com:2379
        - --allow-privileged=true
        - --service-cluster-ip-range=10.3.0.0/24
        - --secure-port=443
        - --advertise-address=172.17.8.101
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --runtime-config=extensions/v1beta1=true,extensions/v1beta1/networkpolicies=true
        image: quay.io/coreos/hyperkube:v1.4.6_coreos.0
        livenessProbe:
            httpGet:
                host: 127.0.0.1
                path: /healthz
                port: 8080
            initialDelaySeconds: 15
            timeoutSeconds: 15
        name: kube-apiserver
        ports:
        -   containerPort: 443
            hostPort: 443
            name: https
        -   containerPort: 8080
            hostPort: 8080
            name: local
        volumeMounts:
        -   mountPath: /etc/kubernetes/ssl
            name: ssl-certs-kubernetes
            readOnly: true
        -   mountPath: /etc/ssl/certs
            name: ssl-certs-host
            readOnly: true
    hostNetwork: true
    volumes:
    -   hostPath:
            path: /etc/kubernetes/ssl
        name: ssl-certs-kubernetes
    -   hostPath:
            path: /usr/share/ca-certificates
        name: ssl-certs-host


On a master, I can do it through localhost:

$ curl http://127.0.0.1:8080/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "172.17.8.101:443"
    }
  ]
}

I've also played with this project (https://github.com/coreos/coreos-kubernetes) and compared the configurations; they look similar. For example, port 8080 listens only on localhost there as well.

Any idea?

Thanks

Rob Szumski

Dec 2, 2016, 6:57:50 PM
to Pierre Mavro, CoreOS User
You should be using the secure port, 443. As you can see in your config, the “hostPort” field is what controls that. All of the CoreOS guides utilize that port with the correct TLS certificates in your kubecfg.

The insecure port has no auth, so it’s rightfully listening on localhost only.
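
For reference, a kubecfg entry that uses the secure port with TLS certs can be set up roughly like this. The cluster/user/context names and cert paths here are placeholders; use whatever your guide generated for the admin credentials:

$ kubectl config set-cluster default-cluster --server=https://srv1.fqdn.com:443 --certificate-authority=/path/to/ca.pem
$ kubectl config set-credentials default-admin --client-certificate=/path/to/admin.pem --client-key=/path/to/admin-key.pem
$ kubectl config set-context default-system --cluster=default-cluster --user=default-admin
$ kubectl config use-context default-system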

Maurizio Vitale

Dec 2, 2016, 7:59:38 PM
to Pierre Mavro, CoreOS User
If you want to access port 8080 from everywhere, you need --insecure-bind-address=0.0.0.0 (that's why I asked about that setting from the beginning).
But I suggest you configure the kubelet to use https instead (and do so using --kubeconfig; the other flags are in the process of being deprecated).

You might still see problems with other pieces, like kube-dns, but try to limit what uses http to the minimum, then work incrementally until there are no non-local uses of http.




Maurizio Vitale

Dec 2, 2016, 8:10:48 PM
to Pierre Mavro, CoreOS User
In coreos-kubernetes they configure the kubelet on the nodes with --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml, and --api-servers would surely be https://MASTER:6443.
This way you are fine with only local access to http port 8080.
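
Concretely, the kubelet invocation on a worker then carries roughly these two flags (the kubelet-wrapper path is the usual one on CoreOS, and your other kubelet flags stay as they are):

$ /usr/lib/coreos/kubelet-wrapper \
    --api-servers=https://kub-master-host.myfqdn.com:6443 \
    --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
    <your other kubelet flags>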

That config file is the magic that gives the kubelet the right certs to talk with the apiserver; it's something of the form:

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/SOMETHING.pem
    client-key: /etc/kubernetes/ssl/SOMETHING-key.pem
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.pem
    server: https://YOUR_MASTER_IP_ADDRESS:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: service-account-context
current-context: service-account-context

The user (e.g. the CN inside the .pem file) doesn't matter much for now, as many things in this area are not implemented yet. All that matters is that the certs are signed by the same CA the apiserver has and trusts.
Again, that GitHub project shows you how to generate the right certificates.
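
Once the kubeconfig and certs are in place, you can sanity-check them from a worker with plain curl (same placeholder paths as in the example above; if curl isn't installed on CoreOS, wget with its TLS options works too):

$ curl --cacert /etc/kubernetes/ssl/ca.pem \
       --cert /etc/kubernetes/ssl/SOMETHING.pem \
       --key /etc/kubernetes/ssl/SOMETHING-key.pem \
       https://YOUR_MASTER_IP_ADDRESS:6443/api/v1/nodes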




Pierre Mavro

Dec 3, 2016, 2:55:34 PM
to CoreOS User, deim...@gmail.com
Thanks a lot, it works better using port 443 :-)