Sometimes when I delete a replication controller and create a new one, everything stops working until I re-create the whole cluster.
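For context, this is roughly what I do when I delete and re-create the controller (the controller name and manifest file here are just examples, not my exact ones):

$ kubectl delete rc mongo-controller
$ kubectl create -f mongo-controller.yaml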
First I tried just deleting the node and letting the cluster create a new one automatically, but that doesn't always help.
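By "deleting the node" I mean deleting the underlying Compute Engine instance so the instance group replaces it, roughly like this (instance name and zone taken from the listing below):

$ gcloud compute instances delete gke-test-cluster-default-pool-0bd8d63e-pmmt --zone europe-west1-c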
Right now a node is running (see the gcloud output below), but Kubernetes doesn't list it, even after several retries.
What could be the cause of this?
$ gcloud compute instances list
NAME                                         ZONE            MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-test-cluster-default-pool-0bd8d63e-pmmt  europe-west1-c  n1-highcpu-2               10.132.0.2   130.211.71.107  RUNNING
$ kubectl get nodes
$
$ kubectl get events -w
FIRSTSEEN                       LASTSEEN                        COUNT  NAME                    KIND  SUBOBJECT  TYPE     REASON            SOURCE                MESSAGE
2016-05-07 20:13:02 +0200 CEST  2016-05-07 20:13:17 +0200 CEST  5      mongo-controller-4oz5x  Pod              Warning  FailedScheduling  {default-scheduler }  no nodes available to schedule pods
Thanks in advance.
- Lukas