Create v0.7 cluster times out


comne...@gmail.com

Dec 15, 2018, 5:10:00 AM
to OpenShift 4 Developer Preview
I just tried to deploy a cluster using the latest v0.7 image [1], and I think I'm hitting symptoms similar to those mentioned in [2] and [3]; however, I'm not sure we are talking about the same root cause.

From the bootstrap node I see:

Dec 15 09:15:57 ip-10-0-2-4 bootkube.sh[4424]: https://dani-cluster1-etcd-2.dani-40.metal:2379 is healthy: successfully committed proposal: took = 2.332534ms
Dec 15 09:17:08 ip-10-0-2-4 bootkube.sh[4424]: https://dani-cluster1-etcd-1.dani-40.metal:2379 is healthy: successfully committed proposal: took = 42.393811ms
Dec 15 09:17:45 ip-10-0-2-4 bootkube.sh[4424]: https://dani-cluster1-etcd-0.dani-40.metal:2379 is healthy: successfully committed proposal: took = 2.689164ms
Dec 15 09:17:46 ip-10-0-2-4 bootkube.sh[4424]: etcd cluster up. Killing etcd certificate signer...
Dec 15 09:17:46 ip-10-0-2-4 bootkube.sh[4424]: c87c63cc1055aac19a460f3488ce39fa11e5e5b3348bd4c00fe8bd2a9e5aac76
Dec 15 09:17:46 ip-10-0-2-4 bootkube.sh[4424]: Starting cluster-bootstrap...
Dec 15 09:17:48 ip-10-0-2-4 bootkube.sh[4424]: Trying to pull registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-021422@sha256:58b79dec7b54b6ade89615e2afc9cfdefb2f03bd612f6f27a4eff2763a342443...Getting image source signatures
Dec 15 09:17:49 ip-10-0-2-4 bootkube.sh[4424]: Copying blob sha256:aeb7866da422acc7e93dcf7323f38d7646f6269af33bcdb6647f2094fc4b3bf7
Dec 15 09:17:54 ip-10-0-2-4 bootkube.sh[4424]: [477B blob data]
Dec 15 09:17:54 ip-10-0-2-4 bootkube.sh[4424]: Copying blob sha256:dd5c67d91ba0d24be994af7183bb91f9b6f80f94380760a4c10c9cd6b717a9f8
Dec 15 09:17:56 ip-10-0-2-4 bootkube.sh[4424]: [97B blob data]
Dec 15 09:17:56 ip-10-0-2-4 bootkube.sh[4424]: Copying blob sha256:426829df3c48de49e93f69934ca3fd722b3685443fd1d65e5fb4a22a9e5954ec
Dec 15 09:17:57 ip-10-0-2-4 bootkube.sh[4424]: [97B blob data]
Dec 15 09:17:57 ip-10-0-2-4 bootkube.sh[4424]: Copying config sha256:6aee7868e4aafdf922271dd96fd8b8d240411a50e1dca6f31c063de1f839d34d
Dec 15 09:17:57 ip-10-0-2-4 bootkube.sh[4424]: [37B blob data]
Dec 15 09:17:57 ip-10-0-2-4 bootkube.sh[4424]: Writing manifest to image destination
Dec 15 09:17:57 ip-10-0-2-4 bootkube.sh[4424]: Storing signatures
Dec 15 09:18:04 ip-10-0-2-4 bootkube.sh[4424]: Starting temporary bootstrap control plane...
Dec 15 09:18:04 ip-10-0-2-4 bootkube.sh[4424]: Waiting for api-server...
Dec 15 09:18:09 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:09.792904       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:14 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:14.789668       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:19 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:19.789700       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:24 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:24.789208       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:29 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:29.789935       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:34 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:34.789497       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 09:18:46 ip-10-0-2-4 bootkube.sh[4424]: W1215 09:18:46.301996       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 15 09:18:49 ip-10-0-2-4 bootkube.sh[4424]: Creating self-hosted assets...
Dec 15 09:18:49 ip-10-0-2-4 bootkube.sh[4424]: Created /assets/manifests/0000_00_cluster-version-operator_00_namespace.yaml Namespace openshift-cluster-version
Dec 15 09:18:49 ip-10-0-2-4 bootkube.sh[4424]: Created /assets/manifests/00_openshift-kube-apiserver-ns.yaml Namespace openshift-kube-apiserver

Question: should we move on if the API server is not ready? Shouldn't we exit?
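
As a rough manual check, something along these lines from the bootstrap node should show the same health state cluster-bootstrap is polling - assuming the bootstrap apiserver is listening locally on 6443, which the "Serving securely on [::]:6443" line later suggests:

curl -k https://localhost:6443/healthz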

and then a few lines below I see:

Dec 15 09:18:50 ip-10-0-2-4 bootkube.sh[4424]: Created /assets/manifests/secret-serving-cert.yaml Secret openshift-kube-apiserver/serving-cert
Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-cluster-version/cluster-version-operator-86998bcb77-w78wt        Pending
Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver        DoesNotExist
Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler        DoesNotExist
Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager        DoesNotExist



At this stage there are a few pods up on the bootstrap node:

sudo crictl pods
POD ID              CREATED             STATE               NAME                                             NAMESPACE                   ATTEMPT
1f664c36a597f       5 minutes ago       Ready               bootstrap-kube-apiserver-ip-10-0-2-4             kube-system                 0
2fb4f0da3a583       5 minutes ago       Ready               bootstrap-kube-scheduler-ip-10-0-2-4             kube-system                 0
ee4d059294d07       5 minutes ago       Ready               bootstrap-cluster-version-operator-ip-10-0-2-4   openshift-cluster-version   0
89f963f0826ae       5 minutes ago       Ready               bootstrap-kube-controller-manager-ip-10-0-2-4    kube-system                 0

After many, many minutes of waiting I get more info in the logs:

Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager        DoesNotExist
Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: Error: error while checking pod status: timed out waiting for the condition
Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: Tearing down temporary bootstrap control plane...
Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: Error: error while checking pod status: timed out waiting for the condition
Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: Error: error while checking pod status: timed out waiting for the condition
Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: unable to find container etcd-signer: no container with name or ID etcd-signer found: no such container
Dec 15 09:38:50 ip-10-0-2-4 systemd[1]: bootkube.service: main process exited, code=exited, status=125/n/a
Dec 15 09:38:51 ip-10-0-2-4 systemd[1]: Unit bootkube.service entered failed state.
Dec 15 09:38:51 ip-10-0-2-4 systemd[1]: bootkube.service failed.
Dec 15 09:38:56 ip-10-0-2-4 systemd[1]: bootkube.service holdoff time over, scheduling restart.
Dec 15 09:38:56 ip-10-0-2-4 systemd[1]: Stopped Bootstrap a Kubernetes cluster.
Dec 15 09:38:56 ip-10-0-2-4 systemd[1]: Started Bootstrap a Kubernetes cluster.
Dec 15 09:39:04 ip-10-0-2-4 bootkube.sh[14188]: Starting etcd certificate signer...
Dec 15 09:39:04 ip-10-0-2-4 bootkube.sh[14188]: f5f4605fb5061d9bc2f8fbb75366302c9bee50c907dea86ad775f7ffe4c49547
Dec 15 09:39:04 ip-10-0-2-4 bootkube.sh[14188]: Waiting for etcd cluster...
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: https://dani-cluster1-etcd-0.dani-40.metal:2379 is healthy: successfully committed proposal: took = 2.794154ms
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: https://dani-cluster1-etcd-2.dani-40.metal:2379 is healthy: successfully committed proposal: took = 2.610652ms
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: https://dani-cluster1-etcd-1.dani-40.metal:2379 is healthy: successfully committed proposal: took = 2.39741ms
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: etcd cluster up. Killing etcd certificate signer...
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: f5f4605fb5061d9bc2f8fbb75366302c9bee50c907dea86ad775f7ffe4c49547
Dec 15 09:39:05 ip-10-0-2-4 bootkube.sh[14188]: Starting cluster-bootstrap...
Dec 15 09:39:06 ip-10-0-2-4 bootkube.sh[14188]: Starting temporary bootstrap control plane...
Dec 15 09:39:06 ip-10-0-2-4 bootkube.sh[14188]: Waiting for api-server...
Dec 15 09:39:18 ip-10-0-2-4 bootkube.sh[14188]: W1215 09:39:18.697278       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 15 09:39:21 ip-10-0-2-4 bootkube.sh[14188]: Creating self-hosted assets...
Dec 15 09:39:21 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/0000_00_cluster-version-operator_00_namespace.yaml Namespace openshift-cluster-version: namespaces "openshift-cluster-version" already exists
Dec 15 09:39:21 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/00_openshift-kube-apiserver-ns.yaml Namespace openshift-kube-apiserver: namespaces "openshift-kube-apiserver" already exists


with crictl output like this:

sudo crictl pods
POD ID              CREATED             STATE               NAME                                             NAMESPACE                   ATTEMPT
147a7e9da154b       6 minutes ago       Ready               bootstrap-kube-apiserver-ip-10-0-2-4             kube-system                 1
684c1a32c9c0c       6 minutes ago       Ready               bootstrap-cluster-version-operator-ip-10-0-2-4   openshift-cluster-version   1
3726571c18779       6 minutes ago       Ready               bootstrap-kube-scheduler-ip-10-0-2-4             kube-system                 1
e709e6cd7c0be       6 minutes ago       Ready               bootstrap-kube-controller-manager-ip-10-0-2-4    kube-system                 1
1f664c36a597f       27 minutes ago      NotReady            bootstrap-kube-apiserver-ip-10-0-2-4             kube-system                 0
2fb4f0da3a583       27 minutes ago      NotReady            bootstrap-kube-scheduler-ip-10-0-2-4             kube-system                 0
ee4d059294d07       27 minutes ago      NotReady            bootstrap-cluster-version-operator-ip-10-0-2-4   openshift-cluster-version   0
89f963f0826ae       27 minutes ago      NotReady            bootstrap-kube-controller-manager-ip-10-0-2-4    kube-system                 0

and it ends with


Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/openshift-service-signer-secret.yaml Secret openshift-service-cert-signer/service-serving-cert-signer-signing-key: secrets "service-serving-cert-signer-signing-key" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/pull.json Secret kube-system/coreos-pull-secret: secrets "coreos-pull-secret" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-aggregator-client.yaml Secret openshift-kube-apiserver/aggregator-client: secrets "aggregator-client" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-cluster-signing-ca.yaml Secret openshift-kube-controller-manager/cluster-signing-ca: secrets "cluster-signing-ca" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-kubeconfig.yaml Secret openshift-kube-controller-manager/controller-manager-kubeconfig: secrets "controller-manager-kubeconfig" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-kubelet-client.yaml Secret openshift-kube-apiserver/kubelet-client: secrets "kubelet-client" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-service-account-private-key.yaml Secret openshift-kube-controller-manager/service-account-private-key: secrets "service-account-private-key" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: Failed creating /assets/manifests/secret-serving-cert.yaml Secret openshift-kube-apiserver/serving-cert: secrets "serving-cert" already exists
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: NOTE: Bootkube failed to create some cluster assets. It is important that manifest errors are resolved and resubmitted to the apiserver.
Dec 15 09:39:22 ip-10-0-2-4 bootkube.sh[14188]: For example, after resolving issues: kubectl create -f <failed-manifest>


Note:
  • the last line says to fix the manifest errors and re-run kubectl; however, a) what is the full path to the manifest on the bootstrap node, and b) you can't run kubectl since nothing has been configured - see [4] (a possible workaround is sketched below)
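
For b), something like this ought to at least point kubectl at a real config, assuming the installer still writes a kubeconfig under the asset directory it was run with - the exact path here is my assumption:

export KUBECONFIG=<install-dir>/auth/kubeconfig    # <install-dir> = whatever was passed to --dir
kubectl config view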

From the master-0 node I see the following.

The pod output looks like this:

sudo crictl pods
POD ID              CREATED             STATE               NAME                                                  NAMESPACE           ATTEMPT
5ab2402807781       43 minutes ago      Ready               etcd-member-ip-10-0-2-46.us-west-2.compute.internal   kube-system         0

and the logs look like this:

Dec 15 09:14:40 ip-10-0-2-46 coreos-growpart[4113]: realtime =none                   extsz=4096   blocks=0, rtextents=0
Dec 15 09:14:40 ip-10-0-2-46 coreos-growpart[4113]: data blocks changed from 768000 to 31380219
Dec 15 09:14:42 ip-10-0-2-46 kernel: TECH PREVIEW: Overlay filesystem may not be fully supported. Please review provided documentation for limitations.
Dec 15 09:14:42 ip-10-0-2-46 crio[4273]: time="2018-12-15 09:14:42.571160212Z" level=error msg="error updating cni config: Missing CNI default network"
Dec 15 09:14:42 ip-10-0-2-46 kernel: nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Dec 15 09:14:42 ip-10-0-2-46 crio[4273]: time="2018-12-15 09:14:42.813390675Z" level=error msg="watcher.Add("/usr/share/containers/oci/hooks.d") failed: no such file or directory"

Question: is this a red herring or a real problem?

Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.103140    4132 kubelet.go:299] Watching apiserver
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: W1215 09:14:49.168654    4132 util_unix.go:75] Using "/var/run/crio/crio.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/crio/crio.sock".
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.186449    4132 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: the server could not find the requested resource (get pods)
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.186526    4132 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: the server could not find the requested resource (get nodes)
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.186617    4132 reflector.go:136] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: the server could not find the requested resource (get services)
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.187834    4132 certificate_manager.go:354] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server rejected our request for an unknown reason (post certificatesigningrequests.certificates.k8s.io)

followed by

Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.257214    4132 server.go:1023] Started kubelet
Dec 15 09:14:49 ip-10-0-2-46 systemd[1]: Reached target Multi-User System.
Dec 15 09:14:49 ip-10-0-2-46 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.264910    4132 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.264942    4132 status_manager.go:152] Starting to sync pod status with apiserver
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.264964    4132 kubelet.go:1741] Starting kubelet main sync loop.
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.265002    4132 kubelet.go:1758] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.267280    4132 volume_manager.go:247] Starting Kubelet Volume Manager
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.267364    4132 desired_state_of_world_populator.go:130] Desired state populator starts to run
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.269482    4132 certificate_manager.go:354] Failed while requesting a signed certificate from the master: cannot create certificate signing request: the server rejected our request for an unknown reason (post certificatesigningrequests.certificates.k8s.io)
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.270428    4132 kubelet.go:2101] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni config uninitialized
Dec 15 09:14:49 ip-10-0-2-46 systemd[1]: Started Update UTMP about System Runlevel Changes.
Dec 15 09:14:49 ip-10-0-2-46 systemd[1]: Startup finished in 2.129s (kernel) + 3min 56.929s (initrd) + 22.696s (userspace) = 4min 21.755s.
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: W1215 09:14:49.339076    4132 container.go:393] Failed to create summary reader for "/system.slice/systemd-update-utmp-runlevel.service": none of the resources are being tracked.
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.366530    4132 kubelet.go:1758] skipping pod synchronization - [container runtime is down]
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.367566    4132 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.368014    4132 kubelet_node_status.go:317] Adding node label from cloud provider: beta.kubernetes.io/instance-type=m4.large
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.368395    4132 kubelet_node_status.go:328] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/zone=us-west-2a
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.368785    4132 kubelet_node_status.go:332] Adding node label from cloud provider: failure-domain.beta.kubernetes.io/region=us-west-2
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: I1215 09:14:49.370144    4132 kubelet_node_status.go:79] Attempting to register node ip-10-0-2-46.us-west-2.compute.internal
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.372243    4132 kubelet_node_status.go:103] Unable to register node "ip-10-0-2-46.us-west-2.compute.internal" with API server: the server could not find the requested resource (post nodes)
Dec 15 09:14:49 ip-10-0-2-46 hyperkube[4132]: E1215 09:14:49.372729    4132 event.go:203] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-10-0-2-46.us-west-2.compute.internal.157076cbaebc114f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-2-46.us-west-2.compute.internal", UID:"ip-10-0-2-46.us-west-2.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node ip-10-0-2-46.us-west-2.compute.internal status is now: NodeHasSufficientDisk", Source:v1.EventSource{Component:"kubelet", Host:"ip-10-0-2-46.us-west-2.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbefd4f62560ef74f, ext:8796121429, loc:(*time.Location)(0x9062c80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbefd4f62560ef74f, ext:8796121429, loc:(*time.Location)(0x9062c80)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'the server could not find the requested resource (post events)' (will not retry!)


At this point nothing else works; things like accepting nodes are compromised:
Dec 15 09:15:01 ip-10-0-2-46 hyperkube[4132]: I1215 09:15:01.985419    4132 kubelet_node_status.go:79] Attempting to register node ip-10-0-2-46.us-west-2.compute.internal
Dec 15 09:15:01 ip-10-0-2-46 hyperkube[4132]: E1215 09:15:01.986486    4132 kubelet_node_status.go:103] Unable to register node "ip-10-0-2-46.us-west-2.compute.internal" with API server: the server could not find the requested resource (post nodes)
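
Normally I'd expect to be able to approve the pending kubelet CSRs with the stock commands below, but with the API rejecting even "post nodes" there is nothing to approve yet:

kubectl get csr
kubectl certificate approve <csr-name>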





Dani

[1]

./openshift-install-linux-amd64 version
./openshift-install-linux-amd64 v0.7.0

registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-021422@sha256:58b79dec7b54b6ade89615e2afc9cfdefb2f03bd612f6f27a4eff2763a342443




[4]

kubectl config view
apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []



wk...@redhat.com

Dec 15, 2018, 12:44:15 PM
to OpenShift 4 Developer Preview
> Dec 15 09:19:30 ip-10-0-2-4 bootkube.sh[4424]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager DoesNotExist
> Dec 15 09:38:50 ip-10-0-2-4 bootkube.sh[4424]: Error: error while checking pod status: timed out waiting for the condition

So my guess is that your master never came up, and we eventually gave up waiting for it. If you see this again, try [1,2].

[1]: https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md#kubernetes-api-is-unavailable
[2]: https://github.com/openshift/installer/blob/master/docs/user/troubleshooting.md#unable-to-ssh-into-master-nodes
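
Concretely, from the bootstrap node something like the following should show how far bootstrapping got (adjust the units as needed):

journalctl -b -u bootkube.service
journalctl -b -u kubelet.service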

Daniel Comnea

Dec 15, 2018, 2:00:49 PM
to wk...@redhat.com, OpenShift 4 Developer Preview
I'll get the logs out using crictl and report back to you.
Regarding the 2nd check, "unable to SSH into master nodes": that is not the case, since I've provided the logs from the bootstrap node as well as the master in my initial email.
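
By "get the logs out using crictl" I mean something along these lines, the container ID being whatever shows up on the node:

sudo crictl ps -a
sudo crictl logs <container-id>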


Dani Comnea

Dec 15, 2018, 4:00:46 PM
to OpenShift 4 Developer Preview
So this is getting interesting (at least to me ;) )

On the bootstrap node I can see:

sudo crictl ps
CONTAINER ID        IMAGE                                                                                                                                           CREATED             STATE               NAME                    ATTEMPT
1bf4870ea6eea       registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-160933@sha256:97eac256dde260e8bee9a5948efce5edb879dc6cb522a0352567010285378a56   2 minutes ago       Running             machine-config-server   0
[core@ip-10-0-11-88 ~]$ sudo crictl logs 1bf4870ea6eea
I1215 20:23:07.088210       1 bootstrap.go:37] Version: 3.11.0-356-gb7ffe0c7-dirty
I1215 20:23:07.088554       1 api.go:54] launching server
I1215 20:23:07.088571       1 api.go:54] launching server
2018/12/15 20:24:17 http: TLS handshake error from 10.0.20.86:28372: EOF
2018/12/15 20:24:18 http: TLS handshake error from 10.0.20.86:38438: EOF
2018/12/15 20:24:18 http: TLS handshake error from 10.0.47.69:26320: EOF
2018/12/15 20:24:18 http: TLS handshake error from 10.0.20.86:24501: EOF
2018/12/15 20:24:19 http: TLS handshake error from 10.0.47.69:45807: EOF
2018/12/15 20:24:19 http: TLS handshake error from 10.0.47.69:19433: EOF
2018/12/15 20:24:27 http: TLS handshake error from 10.0.20.86:52159: EOF
2018/12/15 20:24:28 http: TLS handshake error from 10.0.20.86:9176: EOF



which I think explains this:

Dec 15 20:28:29 ip-10-0-11-88 bootkube.sh[4416]: Starting temporary bootstrap control plane...
Dec 15 20:28:29 ip-10-0-11-88 bootkube.sh[4416]: Waiting for api-server...
Dec 15 20:28:34 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:28:34.959049       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 20:28:39 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:28:39.956926       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 20:28:44 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:28:44.956879       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 20:28:49 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:28:49.956372       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 20:28:54 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:28:54.956863       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 15 20:29:04 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:29:04.448331       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 15 20:29:04 ip-10-0-11-88 bootkube.sh[4416]: W1215 20:29:04.954270       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 15 20:29:09 ip-10-0-11-88 bootkube.sh[4416]: Creating self-hosted assets...
Dec 15 20:29:09 ip-10-0-11-88 bootkube.sh[4416]: Created /assets/manifests/0000_00_cluster-version-operator_00_namespace.yaml Namespace openshift-cluster-version


Then (on the same node) I see the following:

sudo crictl ps
CONTAINER ID        IMAGE                                                                                                                                           CREATED             STATE               NAME                       ATTEMPT
1cab85b8cb910       registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-160933@sha256:3096f424eeee7b40f8a70aaa51427cabba85d65dc71b6e05221de226f78f0364   3 minutes ago       Running             kube-scheduler             0
87156794d92d3       registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-160933@sha256:7d7552257984dff6db7f0bd0fe044535a6a71143713542a3e734567a482280fa   3 minutes ago       Running             kube-apiserver             0
cebdc03d4bd6f       registry.svc.ci.openshift.org/openshift/origin-v4.0-2018-12-15-160933@sha256:3096f424eeee7b40f8a70aaa51427cabba85d65dc71b6e05221de226f78f0364   3 minutes ago       Running             kube-controller-manager    0
027295926f625       registry.svc.ci.openshift.org/openshift/origin-release@sha256:39df0cd1d81b5274149cd7c3a270255b9bab8fc5d09bd950e22aaa96d5f98933                  4 minutes ago       Running             cluster-version-operator   0


And the logs for kube-apiserver (running on the bootstrap node) show:

I1215 20:29:01.718875       1 resolver_conn_wrapper.go:64] dialing to target with scheme: ""
I1215 20:29:01.718886       1 resolver_conn_wrapper.go:70] could not get resolver for scheme: ""
I1215 20:29:01.719009       1 balancer_v1_wrapper.go:91] balancerWrapper: is pickfirst: false
I1215 20:29:01.719085       1 balancer_v1_wrapper.go:116] balancerWrapper: got update addr from Notify: [{dani-cluster1-etcd-0.dani-40.metal:2379 <nil>} {dani-cluster1-etcd-1.dani-40.metal:2379 <nil>} {dani-cluster1-etcd-2.dani-40.metal:2379 <nil>}]
I1215 20:29:01.719172       1 balancer_conn_wrappers.go:168] ccBalancerWrapper: new subconn: [{dani-cluster1-etcd-0.dani-40.metal:2379 0  <nil>}]
W1215 20:29:01.719229       1 clientconn.go:696] Failed to dial dani-cluster1-etcd-1.dani-40.metal:2379: grpc: the connection is closing; please retry.
I1215 20:29:01.719242       1 balancer_conn_wrappers.go:168] ccBalancerWrapper: new subconn: [{dani-cluster1-etcd-1.dani-40.metal:2379 0  <nil>}]
I1215 20:29:01.719334       1 balancer_conn_wrappers.go:168] ccBalancerWrapper: new subconn: [{dani-cluster1-etcd-2.dani-40.metal:2379 0  <nil>}]
I1215 20:29:01.719469       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc429856600, CONNECTING
I1215 20:29:01.719530       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4295a6b40
I1215 20:29:01.719585       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc42a52c1c0, CONNECTING
I1215 20:29:01.719638       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4295a6b40
I1215 20:29:01.719691       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc42a52c220, CONNECTING
I1215 20:29:01.719743       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: CONNECTING, 0xc4295a6b40
W1215 20:29:01.719980       1 clientconn.go:696] Failed to dial dani-cluster1-etcd-2.dani-40.metal:2379: grpc: the connection is closing; please retry.
I1215 20:29:01.737604       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc429856600, READY
I1215 20:29:01.737633       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4295a6b40
I1215 20:29:01.738731       1 balancer_v1_wrapper.go:116] balancerWrapper: got update addr from Notify: [{dani-cluster1-etcd-0.dani-40.metal:2379 <nil>}]
I1215 20:29:01.738807       1 balancer_conn_wrappers.go:181] ccBalancerWrapper: removing subconn
I1215 20:29:01.738866       1 balancer_conn_wrappers.go:181] ccBalancerWrapper: removing subconn
I1215 20:29:01.738925       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc42a52c1c0, SHUTDOWN
I1215 20:29:01.738981       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4295a6b40
I1215 20:29:01.739036       1 balancer_v1_wrapper.go:224] balancerWrapper: handle subconn state change: 0xc42a52c220, SHUTDOWN
I1215 20:29:01.739090       1 balancer_conn_wrappers.go:190] ccBalancerWrapper: updating state and picker called by balancer: READY, 0xc4295a6b40
W1215 20:29:01.739380       1 clientconn.go:696] Failed to dial dani-cluster1-etcd-2.dani-40.metal:2379: grpc: the connection is closing; please retry.
W1215 20:29:01.740010       1 clientconn.go:696] Failed to dial dani-cluster1-etcd-1.dani-40.metal:2379: grpc: the connection is closing; please retry.
I1215 20:29:04.176481       1 serve.go:116] Serving securely on [::]:6443
I1215 20:29:04.177479       1 clusterquotamapping.go:130] Starting ClusterQuotaMappingController controller
I1215 20:29:04.177523       1 autoregister_controller.go:136] Starting autoregister controller
I1215 20:29:04.177533       1 cache.go:32] Waiting for caches to sync for autoregister controller
I1215 20:29:04.177570       1 crd_finalizer.go:242] Starting CRDFinalizer
I1215 20:29:04.185354       1 apiservice_controller.go:90] Starting APIServiceRegistrationController
I1215 20:29:04.185423       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1215 20:29:04.186082       1 available_controller.go:317] Starting AvailableConditionController
I1215 20:29:04.186179       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1215 20:29:04.186283       1 controller.go:84] Starting OpenAPI AggregationController
I1215 20:29:04.187575       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I1215 20:29:04.187650       1 controller_utils.go:1025] Waiting for caches to sync for crd-autoregister controller
I1215 20:29:04.187713       1 customresource_discovery_controller.go:199] Starting DiscoveryController
I1215 20:29:04.187775       1 naming_controller.go:284] Starting NamingConditionController
I1215 20:29:04.187832       1 establishing_controller.go:73] Starting EstablishingController
I1215 20:29:04.259406       1 logs.go:49] http: TLS handshake error from 10.0.29.203:54506: EOF
I1215 20:29:04.315575       1 logs.go:49] http: TLS handshake error from 10.0.41.115:51268: EOF
I1215 20:29:04.315770       1 logs.go:49] http: TLS handshake error from 10.0.29.203:61653: EOF




Later I see:

I1215 20:29:04.438893       1 logs.go:49] http: TLS handshake error from 10.0.47.69:48143: EOF
I1215 20:29:04.438982       1 logs.go:49] http: TLS handshake error from 10.0.9.208:62537: EOF
I1215 20:29:04.439056       1 logs.go:49] http: TLS handshake error from 10.0.47.69:43401: EOF
W1215 20:29:04.442626       1 healthz.go:161] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
[+]poststarthook/start-kube-apiserver-informers ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/oauth.openshift.io-startoauthclientsbootstrapping ok
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
[+]poststarthook/openshift.io-startkubeinformers ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/apiservice-openapi-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[-]autoregister-completion failed: reason withheld
healthz check failed
I1215 20:29:04.447487       1 logs.go:49] http: TLS handshake error from 10.0.10.125:62695: EOF
E1215 20:29:04.459652       1 reflector.go:136] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to list *v1.SecurityContextConstraints: the server could not find the requested resource (get securitycontextconstraints.security.openshift.io)



followed by

I1215 20:29:04.492561       1 cache.go:39] Caches are synced for AvailableConditionController controller
I1215 20:29:04.493564       1 cache.go:39] Caches are synced for autoregister controller
I1215 20:29:04.493906       1 controller_utils.go:1032] Caches are synced for crd-autoregister controller
I1215 20:29:04.495437       1 logs.go:49] http: TLS handshake error from 10.0.20.86:5268: EOF
E1215 20:29:04.499378       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
E1215 20:29:04.501790       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
E1215 20:29:04.512664       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
I1215 20:29:04.553203       1 logs.go:49] http: TLS handshake error from 10.0.41.115:2353: EOF
I1215 20:29:04.566209       1 logs.go:49] http: TLS handshake error from 10.0.41.115:25102: EOF
W1215 20:29:04.571604       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.0.11.88]


What I don't get is why, after a fatal issue like the one above, it still goes on and creates the RBACs:

I1215 20:29:05.125878       1 logs.go:49] http: TLS handshake error from 10.0.29.203:7215: EOF
I1215 20:29:05.166300       1 logs.go:49] http: TLS handshake error from 10.0.10.125:8231: EOF
E1215 20:29:05.188668       1 oauth_apiserver.go:294] the server could not find the requested resource (post oauthclients.oauth.openshift.io)
I1215 20:29:05.194675       1 storage_scheduling.go:91] created PriorityClass system-node-critical with value 2000001000
I1215 20:29:05.202332       1 storage_scheduling.go:91] created PriorityClass system-cluster-critical with value 2000000000
I1215 20:29:05.202402       1 storage_scheduling.go:100] all system priority classes are created successfully or already exist.
I1215 20:29:05.213904       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1215 20:29:05.219899       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1215 20:29:05.225063       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1215 20:29:05.230306       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/admin
I1215 20:29:05.235781       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/edit
I1215 20:29:05.241035       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/view
I1215 20:29:05.247632       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1215 20:29:05.253130       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1215 20:29:05.258633       1 storage_rbac.go:187] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view



Let me know if you need any info; I can reproduce it every time (I am yet to see a fully running cluster... I will get there ;) )







Dani Comnea

Dec 15, 2018, 6:51:10 PM
to OpenShift 4 Developer Preview
Ignore the TLS handshake error lines; that is pure noise caused by the LB health check being set to TCP rather than HTTPS against an API endpoint - opened https://github.com/openshift/installer/issues/923
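
If the health check were pointed at the apiserver's own health endpoint instead of a bare TCP probe, the noise should go away - roughly along these lines, with the target-group ARN as a placeholder and no guarantee the LB accepts the change in place:

aws elbv2 modify-target-group --target-group-arn <api-target-group-arn> --health-check-protocol HTTPS --health-check-port 6443 --health-check-path /healthz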

wk...@redhat.com

Dec 16, 2018, 2:35:43 AM
to OpenShift 4 Developer Preview
On Saturday, December 15, 2018 at 1:00:46 PM UTC-8, Dani Comnea wrote:
> E1215 20:29:04.499378       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
> E1215 20:29:04.501790       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]
> E1215 20:29:04.512664       1 authentication.go:62] Unable to authenticate the request due to an error: [x509: certificate has expired or is not yet valid, x509: certificate has expired or is not yet valid]

This might be your kubelet certs expiring [1].  Are you clearing your state between attempts [2,3]?  At least at one point we had folks forgetting to remove their .openshift_install_state.json and rolling their old (and already expired) kubelet certs forward into new clusters.
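
Between attempts, something like this should give a clean slate (adjust the directory to whatever you passed to --dir):

./openshift-install-linux-amd64 destroy cluster --dir <install-dir>
rm -rf <install-dir>    # or at least remove <install-dir>/.openshift_install_state.json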

Dani Comnea

Dec 16, 2018, 3:56:13 AM
to OpenShift 4 Developer Preview
Okay, that was the issue, because I thought/assumed that a destroy --dir test1 would delete the state too.

What would be the rationale for keeping the state if my cluster was nuked? (But I will come back to this later.)

So now I've made further progress thanks to your hint, and I got to this point:

sudo crictl pods
POD ID              CREATED             STATE               NAME                                                                        NAMESPACE                                                 ATTEMPT
314e8cbfcff64       6 minutes ago       Ready               node-exporter-vr54q                                                         openshift-monitoring                                      0
df6e5ecd4a3aa       10 minutes ago      Ready               controller-manager-s8nxz                                                    openshift-controller-manager                              0
6cf43fc3a2b53       11 minutes ago      Ready               openshift-kube-apiserver-ip-10-0-15-5.us-west-2.compute.internal            openshift-kube-apiserver                                  0
ddbdf6d0b7a47       11 minutes ago      NotReady            installer-2-ip-10-0-15-5.us-west-2.compute.internal                         openshift-kube-apiserver                                  0
8f4f33a8b3ad0       11 minutes ago      Ready               apiserver-2drcz                                                             openshift-apiserver                                       0
8826abc0e751b       11 minutes ago      Ready               openshift-console-57d8f58d69-dvdcl                                          openshift-console                                         0
962c92a116d13       12 minutes ago      Ready               tuned-zzxbn                                                                 openshift-cluster-node-tuning-operator                    0
d196ad9ec89cd       12 minutes ago      Ready               registry-ca-hostmapper-mn9dw                                                openshift-image-registry                                  0
1e4eb9341cd10       14 minutes ago      Ready               openshift-kube-controller-manager-ip-10-0-15-5.us-west-2.compute.internal   openshift-kube-controller-manager                         0
b1cbca8146659       15 minutes ago      NotReady            installer-1-ip-10-0-15-5.us-west-2.compute.internal                         openshift-kube-controller-manager                         0
89e0da0a6ca16       15 minutes ago      Ready               openshift-kube-scheduler-ip-10-0-15-5.us-west-2.compute.internal            openshift-kube-scheduler                                  0
08a8500125831       15 minutes ago      NotReady            installer-1-ip-10-0-15-5.us-west-2.compute.internal                         openshift-kube-scheduler                                  0
21bc88f3dcf91       15 minutes ago      Ready               machine-config-daemon-rljxh                                                 openshift-machine-config-operator                         0
c02b48075f722       16 minutes ago      NotReady            installer-1-ip-10-0-15-5.us-west-2.compute.internal                         openshift-kube-apiserver                                  0
b9f9d8939729d       16 minutes ago      Ready               machine-config-server-8zx8p                                                 openshift-machine-config-operator                         0
a9cce135fe14e       16 minutes ago      Ready               machine-config-controller-54f8965b-phcgt                                    openshift-machine-config-operator                         0
b2b8510091aa1       16 minutes ago      Ready               dns-default-fb2vh                                                           openshift-cluster-dns                                     0
5e5ecc2e68ef1       17 minutes ago      Ready               machine-config-operator-78dff89774-pz8g4                                    openshift-machine-config-operator                         0
c429c1ff63e26       17 minutes ago      Ready               cluster-autoscaler-operator-64877cd456-7sf58                                openshift-cluster-api                                     0
49ec4c18f1f3b       17 minutes ago      Ready               openshift-cluster-openshift-controller-manager-operator-8bhj2c9             openshift-cluster-openshift-controller-manager-operator   0
2f5889878d75a       17 minutes ago      Ready               openshift-cluster-kube-controller-manager-operator-6d54fcbbk2xr             openshift-cluster-kube-controller-manager-operator        0
4e7b92729e41a       17 minutes ago      Ready               machine-approver-78c784c7dc-22pxx                                           openshift-cluster-machine-approver                        0
17e20c131a3ff       17 minutes ago      Ready               openshift-service-cert-signer-operator-55b94d6f56-d86fx                     openshift-core-operators                                  0
7b73bf9b70c78       17 minutes ago      Ready               openshift-cluster-openshift-apiserver-operator-74fbcfcd87-qzq7r             openshift-cluster-openshift-apiserver-operator            0
760ff900076ec       18 minutes ago      Ready               sdn-rq5rq                                                                   openshift-sdn                                             0
ed69ee27318ad       18 minutes ago      Ready               ovs-qh8nv                                                                   openshift-sdn                                             0
3d142384518ce       18 minutes ago      Ready               sdn-controller-whlq4                                                        openshift-sdn                                             0
b6825ff41ebd0       19 minutes ago      Ready               cluster-network-operator-mqqwn                                              openshift-cluster-network-operator                        0
5c6e3123da440       23 minutes ago      Ready               etcd-member-ip-10-0-15-5.us-west-2.compute.internal                         kube-system                                               0


and I think I'm stuck.
Any hints from here?

Dani

P.S. - I'll follow up with another thread on the operational side (maybe I'm missing bits of the puzzle).

Dani Comnea

Dec 16, 2018, 6:46:29 AM
to OpenShift 4 Developer Preview
I just can't let it go ;) so carrying on...

In openshift-kube-scheduler-ip-10-0-15-5.us-west-2.compute.internal I see:

I1216 08:36:04.612873       1 shared_informer.go:123] caches populated
I1216 08:36:04.613007       1 controller_utils.go:1025] Waiting for caches to sync for scheduler controller
I1216 08:36:04.714970       1 shared_informer.go:123] caches populated
I1216 08:36:04.715078       1 controller_utils.go:1032] Caches are synced for scheduler controller
I1216 08:36:04.715165       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-scheduler...
I1216 08:36:04.770677       1 leaderelection.go:253] lock is held by ip-10-0-26-54_75151851-010d-11e9-9ca7-02412914a52c and has not yet expired
I1216 08:36:04.770806       1 leaderelection.go:190] failed to acquire lease kube-system/kube-scheduler
I1216 08:36:07.930911       1 leaderelection.go:253] lock is held by ip-10-0-26-54_75151851-010d-11e9-9ca7-02412914a52c and has not yet expired
I1216 08:36:07.931060       1 leaderelection.go:190] failed to acquire lease kube-system/kube-scheduler
I1216 08:36:10.029789       1 leaderelection.go:253] lock is held by ip-10-0-26-54_75151851-010d-11e9-9ca7-02412914a52c and has not yet expired
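
The holder above looks like another master grabbing the lease, which I guess is normal; assuming the scheduler still records its leader on the kube-system/kube-scheduler endpoints object, this should show who holds it and when it was last renewed:

kubectl -n kube-system get endpoints kube-scheduler -o yaml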


In controller-manager-s8nxz I see:

I1216 08:40:07.845696       1 controller_manager.go:40] Starting controllers on 0.0.0.0:8443 (v4.0.0-alpha.0+793dcb0-773)
I1216 08:40:07.848334       1 controller_manager.go:51] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0@sha256:f2294a3985ca9bcec110a31e82fc1668e37afcf26641804956279ba99261e2fe"
I1216 08:40:07.848508       1 controller_manager.go:57] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0@sha256:5e33a53197e51307a0078604be138837c4991d41002f786194068ab19b3cbd7b"
I1216 08:40:07.848452       1 standalone_apiserver.go:101] Started health checks at 0.0.0.0:8443
I1216 08:40:07.852766       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/openshift-master-controllers...
I1216 08:40:07.949096       1 leaderelection.go:253] lock is held by controller-manager-2gkwm and has not yet expired
I1216 08:40:07.949117       1 leaderelection.go:190] failed to acquire lease kube-system/openshift-master-controllers
I1216 08:40:11.782682       1 leaderelection.go:253] lock is held by controller-manager-2gkwm and has not yet expired


In etcd-member-ip-10-0-15-5.us-west-2.compute.internal I see:

2018-12-16 08:28:06.104964 I | raft: 57eeb8dc86c20684 became follower at term 2
2018-12-16 08:28:06.104980 I | raft: 57eeb8dc86c20684 [logterm: 1, index: 3, vote: 0] cast MsgVote for 51335c425499eba8 [logterm: 1, index: 3] at term 2
2018-12-16 08:28:06.106480 I | raft: raft.node: 57eeb8dc86c20684 elected leader 51335c425499eba8 at term 2
2018-12-16 08:28:06.110574 I | embed: ready to serve client requests
2018-12-16 08:28:06.110862 I | etcdserver: published {Name:etcd-member-ip-10-0-15-5.us-west-2.compute.internal ClientURLs:[https://10.0.15.5:2379]} to cluster 1b3e5fe675c17503
2018-12-16 08:28:06.112654 I | embed: serving client requests on [::]:2379
2018-12-16 08:28:06.121646 I | embed: rejected connection from "127.0.0.1:37368" (error "tls: failed to verify client's certificate: x509: certificate specifies an incompatible key usage", ServerName "")
WARNING: 2018/12/16 08:28:06 Failed to dial 0.0.0.0:2379: connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"; please retry.
2018-12-16 08:28:06.141372 N | etcdserver/membership: set the initial cluster version to 3.3
2018-12-16 08:28:06.141473 I | etcdserver/api: enabled capabilities for version 3.3
proto: no coders for int
proto: no encoder for ValueSize int [GetProperties]
2018-12-16 08:32:00.853547 W | etcdserver: read-only range request "key:\"/openshift.io/cluster.k8s.io/machinesets/\" range_end:\"/openshift.io/cluster.k8s.io/machinesets0\" " with result "range_response_count:0 size:5" took too long (103.732568ms) to execute
2018-12-16 08:32:26.458778 W | wal: sync duration of 2.139911381s, expected less than 1s
2018-12-16 08:32:27.699258 W | wal: sync duration of 1.237582018s, expected less than 1s
2018-12-16 08:32:27.699689 W | etcdserver: read-only range request "key:\"/openshift.io/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:439" took too long (3.073363257s) to execute
2018-12-16 08:32:27.701982 W | etcdserver: read-only range request "key:\"/openshift.io/deployments\" range_end:\"/openshift.io/deploymentt\" count_only:true " with result "range_response_count:0 size:7" took too long (3.023904719s) to execute



Trevor, am I hitting https://github.com/coreos/kubecsr/pull/22, which blocks everything else?
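
For the "incompatible key usage" rejection in the etcd log above, dumping the extensions of whichever client cert is being presented should confirm whether it's missing TLS client auth - the path is a placeholder since I'm not sure where the client certs land on these hosts:

openssl x509 -in <path-to-client-cert> -noout -text | grep -A3 'Key Usage'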


Dani

Dani Comnea

Dec 17, 2018, 5:43:25 AM
to OpenShift 4 Developer Preview
FYI, I've tried 5 more times using different --dir folders and I always end up in the same error state.
If there is anything else needed in terms of logs, let me know; otherwise I'm blocked and can't move further.

Also, if you want me to test/validate a fix, I'm happy to do so - just let me know.

Dani Comnea

Dec 17, 2018, 6:44:51 PM
to OpenShift 4 Developer Preview
It seems I've been chasing a lot of noise; I managed to see this in the logs:

2018-12-17 23:37:22.707582 W | etcdserver: failed to send out heartbeat on time (exceeded the 100ms timeout for 546.368056ms)
2018-12-17 23:37:22.707739 W | etcdserver: server is likely overloaded
2018-12-17 23:37:22.707804 W | etcdserver: failed to send out heartbeat on time (exceeded the 100ms timeout for 546.598181ms)
2018-12-17 23:37:22.707857 W | etcdserver: server is likely overloaded
2018-12-17 23:37:22.742892 W | etcdserver: read-only range request "key:\"/openshift.io/pods\" range_end:\"/openshift.io/podt\" count_only:true " with result "range_response_count:0 size:7" took too long (505.230859ms) to execute
2018-12-17 23:37:23.899950 W | etcdserver: read-only range request "key:\"/openshift.io/clusterroles\" range_end:\"/openshift.io/clusterrolet\" count_only:true " with result "range_response_count:0 size:7" took too long (471.825866ms) to execute
2018-12-17 23:37:23.900310 W | etcdserver: read-only range request "key:\"/openshift.io/configmaps/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:448" took too long (940.785264ms) to execute
2018-12-17 23:37:23.900558 W | etcdserver: read-only range request "key:\"/openshift.io/secrets/kube-system/cronjob-controller-token-z9n7g\" " with result "range_response_count:1 size:2452" took too long (867.858264ms) to execute
2018-12-17 23:37:24.494178 I | raft: 75617f6f50a47e76 [logterm: 2, index: 1346, vote: 75617f6f50a47e76] ignored MsgVote from 23f54800b4e077ea [logterm: 2, index: 1343] at term 2: lease is not expired (remaining ticks: 5)
2018-12-17 23:37:25.659068 W | wal: sync duration of 2.217338278s, expected less than 1s
2018-12-17 23:37:25.659239 W | etcdserver: failed to send out heartbeat on time (exceeded the 100ms timeout for 2.023530232s)
2018-12-17 23:37:25.659298 W | etcdserver: server is likely overloaded
2018-12-17 23:37:25.659351 W | etcdserver: failed to send out heartbeat on time (exceeded the 100ms timeout for 2.023642324s)
2018-12-17 23:37:25.659399 W | etcdserver: server is likely overloaded
2018-12-17 23:37:25.661293 I | raft: 75617f6f50a47e76 [term: 2] received a MsgAppResp message with higher term from 23f54800b4e077ea [term: 3]
2018-12-17 23:37:25.661410 I | raft: 75617f6f50a47e76 became follower at term 3
2018-12-17 23:37:25.661504 I | raft: raft.node: 75617f6f50a47e76 lost leader 75617f6f50a47e76 at term 3
2018-12-17 23:37:25.794004 I | raft: 75617f6f50a47e76 [term: 3] received a MsgVote message with higher term from 23f54800b4e077ea [term: 4]
2018-12-17 23:37:25.794146 I | raft: 75617f6f50a47e76 became follower at term 4
2018-12-17 23:37:25.794219 I | raft: 75617f6f50a47e76 [logterm: 2, index: 1348, vote: 0] rejected MsgVote from 23f54800b4e077ea [logterm: 2, index: 1343] at term 4
2018-12-17 23:37:25.831700 I | raft: 75617f6f50a47e76 [term: 4] ignored a MsgVote message with lower term from 797d94016a43e51 [term: 3]
2018-12-17 23:37:25.832215 I | raft: 75617f6f50a47e76 [term: 4] ignored a MsgAppResp message with lower term from 797d94016a43e51 [term: 3]
2018-12-17 23:37:25.832991 I | raft: found conflict at index 1344 [existing term: 2, conflicting term: 4]
2018-12-17 23:37:25.833080 I | raft: replace the unstable entries from index 1344
2018-12-17 23:37:25.833133 I | raft: raft.node: 75617f6f50a47e76 elected leader 23f54800b4e077ea at term 4
2018-12-17 23:37:30.820800 W | etcdserver: failed to revoke 7e7667be8450d17f ("etcdserver: request timed out")
2018-12-17 23:37:30.888991 W | etcdserver: timed out waiting for read index response (local node might have slow network)


and that is interesting, since I'm running m4.large instances with 120 GiB gp2 volumes (360 IOPS).
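
A quick way to sanity-check whether the disk can keep up with etcd's fsync pattern, assuming fio is available on the node and that /var/lib/etcd is where the member data lives (both assumptions on my side):

sudo fio --rw=write --ioengine=sync --fdatasync=1 --directory=/var/lib/etcd --size=22m --bs=2300 --name=etcd-disk-check

and then check the fsync/fdatasync latency percentiles against the 100ms heartbeat interval mentioned above.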

wk...@redhat.com

Dec 18, 2018, 3:20:08 AM
to OpenShift 4 Developer Preview
On Sunday, December 16, 2018 at 12:56:13 AM UTC-8, Dani Comnea wrote:
> What would be the rationale for keeping the state if my cluster was nuked? (But I will come back to this later.)
More discussion around that in [1].

wk...@redhat.com

Dec 18, 2018, 3:32:20 AM
to OpenShift 4 Developer Preview
On Monday, December 17, 2018 at 2:43:25 AM UTC-8, Dani Comnea wrote:
> FYI, I've tried 5 more times using different --dir folders and I always end up in the same error state.

What's the error state now? With the leftover-content-in-the-asset-directory issue resolved by using a different --dir, I'd have expected your master kubelets to be joining the cluster without trouble.

Also, we expect to see some "the server could not find the requested resource" messages while kubelets attempt to join during etcd bootstrapping (when the etcd-cert-signer [1] is pretending to be a Kubernetes API server).  Once you see bootkube.service on the bootstrap node move into "Starting temporary bootstrap control plane", your kubelets should be able to join.  Are you still seeing "Error: error while checking pod status: timed out waiting for the condition" out of bootkube.service, or was that just because of the leftover kubelet certs?

Dani Comnea

Dec 18, 2018, 7:30:11 AM
to OpenShift 4 Developer Preview
Much appreciated for taking the time to respond.

Regarding "Are you still seeing "Error: error while checking pod status: timed out waiting for the condition" out of bootkube.service" - i can confirm there is no problem with it anymore once i use a cleaned "--dir"

On the bootstrap node everything looks okay:

 sudo crictl pods
POD ID              CREATED             STATE               NAME                                               NAMESPACE                   ATTEMPT
13088a3e78d6f       4 minutes ago       Ready               bootstrap-kube-scheduler-ip-10-0-6-208             kube-system                 0
a1a131705d430       4 minutes ago       Ready               bootstrap-kube-controller-manager-ip-10-0-6-208    kube-system                 0
03ea8b54621a1       4 minutes ago       Ready               bootstrap-kube-apiserver-ip-10-0-6-208             kube-system                 0
d082a50f053b9       4 minutes ago       Ready               bootstrap-cluster-version-operator-ip-10-0-6-208   openshift-cluster-version   0


and the bootkube.service logs

Dec 18 12:02:20 ip-10-0-6-208 bootkube.sh[4419]: Starting temporary bootstrap control plane...
Dec 18 12:02:20 ip-10-0-6-208 bootkube.sh[4419]: Waiting for api-server...
Dec 18 12:02:25 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:25.672064       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:02:30 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:30.668505       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:02:35 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:35.670951       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:02:40 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:40.667663       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:02:45 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:45.667731       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:02:50 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:02:50.667977       1 create.go:58] Unable to determine api-server readiness: API Server http status: 0
Dec 18 12:03:03 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:03:03.810590       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 18 12:03:05 ip-10-0-6-208 bootkube.sh[4419]: W1218 12:03:05.666225       1 create.go:58] Unable to determine api-server readiness: API Server http status: 500
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Creating self-hosted assets...
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/0000_00_cluster-version-operator_00_namespace.yaml Namespace openshift-cluster-version
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/00_openshift-kube-apiserver-ns.yaml Namespace openshift-kube-apiserver
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/00_openshift-kube-controller-manager-ns.yaml Namespace openshift-kube-controller-manager
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/00_openshift-kube-scheduler-ns.yaml Namespace openshift-kube-scheduler
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/04-openshift-machine-config-operator.yaml Namespace openshift-machine-config-operator
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/05-openshift-cluster-api-namespace.yaml Namespace openshift-cluster-api
Dec 18 12:03:10 ip-10-0-6-208 bootkube.sh[4419]: Created /assets/manifests/09-openshift-service-signer-namespace.yaml Namespace openshift-service-cert-signer
....

Dec 18 12:06:41 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager        DoesNotExist
Dec 18 12:06:41 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:06:41 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver        DoesNotExist
Dec 18 12:06:41 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Pending
Dec 18 12:06:56 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        Pending
Dec 18 12:06:56 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Pending
Dec 18 12:06:56 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-20-188.us-west-2.compute.internal        Pending
Dec 18 12:06:56 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:01 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        RunningNotReady
Dec 18 12:07:01 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:01 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-20-188.us-west-2.compute.internal        Pending
Dec 18 12:07:01 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:11 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-20-188.us-west-2.compute.internal        Ready
Dec 18 12:07:11 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:11 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        Pending
Dec 18 12:07:11 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:16 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:16 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        RunningNotReady
Dec 18 12:07:16 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:16 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-20-188.us-west-2.compute.internal        Ready
Dec 18 12:07:21 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        RunningNotReady
Dec 18 12:07:21 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-38-188.us-west-2.compute.internal        Pending
Dec 18 12:07:21 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-20-188.us-west-2.compute.internal        Ready
Dec 18 12:07:21 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:26 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        RunningNotReady
Dec 18 12:07:26 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:26 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-3-245.us-west-2.compute.internal        Pending
Dec 18 12:07:26 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-controller-manager/openshift-kube-controller-manager-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-cluster-version/cluster-version-operator-7776dfdf59-t9tmm        Ready
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-apiserver/openshift-kube-apiserver-ip-10-0-38-188.us-west-2.compute.internal        Ready
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: Pod Status:openshift-kube-scheduler/openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal        Ready
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: All self-hosted control plane components successfully started
Dec 18 12:07:31 ip-10-0-6-208 bootkube.sh[4419]: Tearing down temporary bootstrap control plane...


However, on the master-0 node I see pods coming up, but I also see some in "NotReady" state:

sudo crictl pods
POD ID              CREATED             STATE               NAME                                                                         NAMESPACE                                   ATTEMPT
0e280d9926b95       15 minutes ago      Ready               openshift-kube-apiserver-ip-10-0-3-245.us-west-2.compute.internal            openshift-kube-apiserver                    0
8b4a4a9808e67       15 minutes ago      NotReady            installer-3-ip-10-0-3-245.us-west-2.compute.internal                         openshift-kube-apiserver                    0
e20aab74967bd       16 minutes ago      Ready               apiserver-zk7cx                                                              openshift-apiserver                         0
286a3c9532e7d       16 minutes ago      Ready               controller-manager-lb5m4                                                     openshift-controller-manager                0
054b396be966d       17 minutes ago      Ready               openshift-console-5656889fdc-lb758                                           openshift-console                           0
e38bee4353b8f       17 minutes ago      Ready               tuned-x4c2v                                                                  openshift-cluster-node-tuning-operator      0
149a78f14b942       17 minutes ago      Ready               image-registry-6f7686c978-9gdll                                              openshift-image-registry                    0
0b2d3e116c373       17 minutes ago      Ready               console-operator-699468bd87-x85vw                                            openshift-console                           0
e50f6c1ffcef3       17 minutes ago      Ready               cluster-node-tuning-operator-9d595b45-rxd92                                  openshift-cluster-node-tuning-operator      0
1486a49855782       17 minutes ago      Ready               registry-ca-hostmapper-4c9m5                                                 openshift-image-registry                    0
26143901fe050       19 minutes ago      Ready               openshift-kube-controller-manager-ip-10-0-3-245.us-west-2.compute.internal   openshift-kube-controller-manager           0
c3e435c1387a4       19 minutes ago      NotReady            installer-2-ip-10-0-3-245.us-west-2.compute.internal                         openshift-kube-controller-manager           0
4fbc4d05e2506       19 minutes ago      NotReady            installer-2-ip-10-0-3-245.us-west-2.compute.internal                         openshift-kube-apiserver                    0
ac22679359332       20 minutes ago      Ready               machine-config-daemon-jwhmr                                                  openshift-machine-config-operator           0
b4b0b7aa9efe7       20 minutes ago      NotReady            installer-1-ip-10-0-3-245.us-west-2.compute.internal                         openshift-kube-controller-manager           0
fb7d4af855842       20 minutes ago      Ready               machine-config-server-2hw4d                                                  openshift-machine-config-operator           0
149fb973d190e       20 minutes ago      Ready               olm-operator-9c975c58d-kcjfm                                                 openshift-operator-lifecycle-manager        0
ca2b25b57b787       20 minutes ago      Ready               catalog-operator-6b4796687d-zttkw                                            openshift-operator-lifecycle-manager        0
37c98f97329b7       21 minutes ago      Ready               openshift-kube-scheduler-ip-10-0-3-245.us-west-2.compute.internal            openshift-kube-scheduler                    0
75076a4a7217a       21 minutes ago      NotReady            installer-1-ip-10-0-3-245.us-west-2.compute.internal                         openshift-kube-scheduler                    0
b63c5ba807b47       21 minutes ago      Ready               dns-default-h6h78                                                            openshift-cluster-dns                       0
5516997d7b783       21 minutes ago      Ready               configmap-cabundle-injector-6695c86f9d-72lc5                                 openshift-service-cert-signer               0
32d6c00d47299       21 minutes ago      Ready               machine-approver-78c784c7dc-z2zxj                                            openshift-cluster-machine-approver          0
5a8a6558fcffb       21 minutes ago      Ready               openshift-cluster-kube-scheduler-operator-667c9b4979-rg8vw                   openshift-cluster-kube-scheduler-operator   0
3875d0870ebd7       21 minutes ago      Ready               origin-cluster-osin-operator-78b7dcff6f-5mjwg                                openshift-core-operators                    0
9d2cc2559443d       21 minutes ago      Ready               cluster-dns-operator-69975557d5-282jz                                        openshift-cluster-dns-operator              0
d665492878bff       23 minutes ago      Ready               sdn-qjjrl                                                                    openshift-sdn                               0
1fd83d8d9b31a       23 minutes ago      Ready               ovs-blprr                                                                    openshift-sdn                               0
0e95810de0dda       23 minutes ago      Ready               sdn-controller-gpkp7                                                         openshift-sdn                               0
327a22d6143be       24 minutes ago      Ready               cluster-network-operator-gkj25                                               openshift-cluster-network-operator          0
1bebd19804e84       28 minutes ago      Ready               etcd-member-ip-10-0-3-245.us-west-2.compute.internal                         kube-system                                 0


while "crictl ps" shows

sudo crictl ps
CONTAINER ID        IMAGE                                                                                                            CREATED             STATE               NAME                                     ATTEMPT
6296f4a385c4a       quay.io/openshift-release-dev/ocp-v4.0@sha256:ef1a2bcebe3b05ce04d2d8b308ae515e247d5e6028cf68feb2285ac959b10511   15 minutes ago      Running             console-operator                         1
12a4ca13609f5       quay.io/openshift-release-dev/ocp-v4.0@sha256:c1c0d3f050b4e8ecdb0d441f48be99e224ed74e7350068b1010bf2412984aa8d   16 minutes ago      Running             openshift-kube-apiserver                 0
97fc370df12ee       quay.io/openshift-release-dev/ocp-v4.0@sha256:c1c0d3f050b4e8ecdb0d441f48be99e224ed74e7350068b1010bf2412984aa8d   16 minutes ago      Running             openshift-apiserver                      0
4267e2773b1e3       92d088f637d1fc9497b670552f700033e322fa8f27a215dfc6cfe33b4291b58e                                                 17 minutes ago      Running             controller-manager                       0
7ffbad9e2d0f8       quay.io/openshift-release-dev/ocp-v4.0@sha256:327d7a0491dc9c7e34715ce0a1c16c23c906e0254ee19f37ce43ac52aff19522   17 minutes ago      Running             console                                  0
887a7eac0bc6e       quay.io/openshift-release-dev/ocp-v4.0@sha256:94ae2335d16a1701803e3ef514e68713fe886439191d0553a74dd4af6e7fcc2c   17 minutes ago      Running             tuned                                    0
77c0081a4ccf4       quay.io/openshift-release-dev/ocp-v4.0@sha256:571c1cb1c58d45a5a90e370c45f5f826c0ec5a82adb8031ad229ac2b89a5abb3   17 minutes ago      Running             registry                                 0
cea99a38bd0d5       quay.io/openshift-release-dev/ocp-v4.0@sha256:085098f889c678b82220c4d01d8677faf0ac24c72bf849a8639378e43290c7e9   18 minutes ago      Running             cluster-node-tuning-operator             0
b0b0ed89cf329       quay.io/openshift-release-dev/ocp-v4.0@sha256:70f0ba4c10d4edfc16033e5af8b64b4ae4f23e7c7f60d91f34a0fe86ec3be9fc   18 minutes ago      Running             registry-ca-hostmapper                   0
b7270566034c3       quay.io/openshift-release-dev/ocp-v4.0@sha256:b32df395212ec408ac0727c4720f838923d90cc424d1030b45cf1a6e7965fb00   19 minutes ago      Running             kube-controller-manager                  2
c116bf800eaad       bb84dbdafdfa61c29ca989c652d63debe03a043c2099f6ad7097ac28865bd406                                                 20 minutes ago      Running             cluster-network-operator                 2
b2c23dd3eb888       92d088f637d1fc9497b670552f700033e322fa8f27a215dfc6cfe33b4291b58e                                                 20 minutes ago      Running             sdn-controller                           1
1fbc14270172a       quay.io/openshift-release-dev/ocp-v4.0@sha256:5ce7c68a01f40ed19d732deda861dd8ca84dcd86527b82fae6ccb530a20d1aad   20 minutes ago      Running             machine-config-daemon                    0
9add519721de9       quay.io/openshift-release-dev/ocp-v4.0@sha256:b0f1e2f626af16233efe43a0e1752258fb7b321b10670ae8e58e95d7b1df8302   21 minutes ago      Running             olm-operator                             0
4e43a29a43441       quay.io/openshift-release-dev/ocp-v4.0@sha256:b0f1e2f626af16233efe43a0e1752258fb7b321b10670ae8e58e95d7b1df8302   21 minutes ago      Running             catalog-operator                         0
b43c5f21e2081       quay.io/openshift-release-dev/ocp-v4.0@sha256:d8a9b841e26960a0a7ff7a1b36bfc86abada88af8df6074f857fa0c260001710   21 minutes ago      Running             machine-config-server                    0
7b1323695f7da       quay.io/openshift-release-dev/ocp-v4.0@sha256:b32df395212ec408ac0727c4720f838923d90cc424d1030b45cf1a6e7965fb00   21 minutes ago      Running             scheduler                                0
93be83b925411       quay.io/openshift-release-dev/ocp-v4.0@sha256:9b782ad38d94edcb9999ee96725a2f4f4f0a0d5924f62b58ed54dbde9f04391b   21 minutes ago      Running             dns-node-resolver                        0
33378fbde250a       quay.io/openshift-release-dev/ocp-v4.0@sha256:fbad11d966d4878a88f9a8226ea445c7f929ec4ffb0039e99fd610cf32764657   21 minutes ago      Running             dns                                      0
4a8a96b374c9f       quay.io/openshift-release-dev/ocp-v4.0@sha256:54efddc79f1e6d42d80882b2a1e4b01512e5246f2157a6cb5397c41209ec314f   21 minutes ago      Running             configmap-cabundle-injector-controller   0
ae2ddedaf7a1d       quay.io/openshift-release-dev/ocp-v4.0@sha256:5e702b5443dcd18fb2c5b895c41bfa7f0a7353d123e2ae0dd97c238dc8fb821a   22 minutes ago      Running             operator                                 0
50fb62941b0e4       quay.io/openshift-release-dev/ocp-v4.0@sha256:de325005b6c6e2a61ec7c8e6afa14bab246863bb9c8b8b80350c07cca1d334b9   22 minutes ago      Running             cluster-dns-operator                     0
bae0342011ac9       quay.io/openshift-release-dev/ocp-v4.0@sha256:b25edd29b13e56cefcb825aa0ddf0b476818249595b0bc58dcd68452e7c88cef   22 minutes ago      Running             kube-scheduler-operator-container        0
b9e47f6b1058c       quay.io/openshift-release-dev/ocp-v4.0@sha256:a15e4b7ea71ba4dca3fe183460029c1ad9cad82c52031f709fb593988ba70593   22 minutes ago      Running             machine-approver-controller              0
494419b8abd41       quay.io/openshift-release-dev/ocp-v4.0@sha256:748f5cef383606412dbc3233401885c80423846d7438feee8e5697adb2d49073   22 minutes ago      Running             openvswitch                              0
1b6428a88a1e0       quay.io/openshift-release-dev/ocp-v4.0@sha256:748f5cef383606412dbc3233401885c80423846d7438feee8e5697adb2d49073   22 minutes ago      Running             sdn                                      0
e1d4ca9e8dd68       quay.io/coreos/etcd@sha256:cb9cee3d9d49050e7682fde0a9b26d6948a0117b1b4367b8170fcaa3960a57b8                      27 minutes ago      Running             etcd-member                              0



I could be wrong, but I guess those pods in the NotReady state are the ones causing the problems, and that is where I am at.
If you have any ideas on where to dig in, that would be brilliant.
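
(One way to dig into those NotReady pods with crictl, using the pod IDs from the listing above; the container ID below would need to be filled in from the output:)

# hypothetical example: inspect one of the NotReady installer pods
sudo crictl inspectp 8b4a4a9808e67          # pod sandbox state and metadata
sudo crictl ps -a --pod 8b4a4a9808e67       # containers in that pod, including exited ones
sudo crictl logs <container-id>             # logs from the installer container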

Dani

Dani Comnea

unread,
Dec 18, 2018, 8:35:39 AM12/18/18
to OpenShift 4 Developer Preview
Right, so moving on...

On master0 (.245) I can see that the controller-manager pod logs are flooded with leader-election messages:

I1218 12:11:10.215277       1 controller_manager.go:40] Starting controllers on 0.0.0.0:8443 (v4.0.0-alpha.0+793dcb0-773)
I1218 12:11:10.217726       1 controller_manager.go:51] DeploymentConfig controller using images from "quay.io/openshift-release-dev/ocp-v4.0@sha256:f2294a3985ca9bcec110a31e82fc1668e37afcf26641804956279ba99261e2fe"
I1218 12:11:10.217808       1 controller_manager.go:57] Build controller using images from "quay.io/openshift-release-dev/ocp-v4.0@sha256:5e33a53197e51307a0078604be138837c4991d41002f786194068ab19b3cbd7b"
I1218 12:11:10.218055       1 standalone_apiserver.go:101] Started health checks at 0.0.0.0:8443
I1218 12:11:10.218367       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/openshift-master-controllers...
I1218 12:11:10.234406       1 leaderelection.go:253] lock is held by controller-manager-zf8lv and has not yet expired
I1218 12:11:10.234962       1 leaderelection.go:190] failed to acquire lease kube-system/openshift-master-controllers
I1218 12:11:14.208658       1 leaderelection.go:253] lock is held by controller-manager-zf8lv and has not yet expired
I1218 12:11:14.208687       1 leaderelection.go:190] failed to acquire lease kube-system/openshift-master-controllers

You would think that at some point, when another controller pod wins the election, the logs will show it. I got confused wondering "is it still going to process anything, or is it in a dead state?"
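
(One way to check that directly: the lock named in those messages lives as a leader-election annotation on an object in kube-system, so something like the sketch below shows who currently holds it and when it was last renewed. The object kind is an assumption on my part; it may be an endpoints or a configmap depending on the component, hence trying both.)

# hypothetical check of the openshift-master-controllers leader-election record
oc -n kube-system get endpoints openshift-master-controllers -o yaml 2>/dev/null | grep -i leader
oc -n kube-system get configmap openshift-master-controllers -o yaml 2>/dev/null | grep -i leader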

On master2 (.188) the kube-controller-manager pod won the election and is a happy bunny. However, the same story (logs spammed with the lock being held by master0) goes for the scheduler pod logs on master2:

E1218 12:07:45.393972       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.ReplicaSet: Get https://dani-cluster1-api.dani-devlab.metal:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.0.14.155:6443: connect: connection refused
E1218 12:07:45.399477       1 reflector.go:136] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.StatefulSet: Get https://dani-cluster1-api.dani-devlab.metal:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.0.43.123:6443: connect: connection refused
I1218 12:07:45.399688       1 reflector.go:171] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:130
I1218 12:07:45.406513       1 reflector.go:171] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:130
I1218 12:07:45.420662       1 reflector.go:171] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:130
I1218 12:07:45.421521       1 reflector.go:171] Listing and watching *v1.Pod from k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176
I1218 12:07:45.427345       1 reflector.go:171] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:130
I1218 12:07:45.427856       1 reflector.go:171] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:130
I1218 12:07:45.428063       1 reflector.go:171] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:130
E1218 12:07:45.428413       1 reflector.go:136] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176: Failed to list *v1.Pod: Get https://dani-cluster1-api.dani-devlab.metal:6443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.0.26.32:6443: connect: connection refused
I1218 12:07:45.551480       1 shared_informer.go:123] caches populated
I1218 12:07:45.651760       1 shared_informer.go:123] caches populated
I1218 12:07:46.394335       1 reflector.go:171] Listing and watching *v1beta1.ReplicaSet from k8s.io/client-go/informers/factory.go:130
I1218 12:07:46.394466       1 reflector.go:171] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:130
I1218 12:07:46.404358       1 reflector.go:171] Listing and watching *v1beta1.StatefulSet from k8s.io/client-go/informers/factory.go:130
I1218 12:07:46.433138       1 reflector.go:171] Listing and watching *v1.Pod from k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:176
I1218 12:07:46.454458       1 shared_informer.go:123] caches populated
I1218 12:07:46.554851       1 shared_informer.go:123] caches populated
I1218 12:07:46.655189       1 shared_informer.go:123] caches populated
I1218 12:07:46.757630       1 shared_informer.go:123] caches populated
I1218 12:07:46.859352       1 shared_informer.go:123] caches populated
I1218 12:07:46.959701       1 shared_informer.go:123] caches populated
I1218 12:07:47.061578       1 shared_informer.go:123] caches populated
I1218 12:07:47.061757       1 controller_utils.go:1025] Waiting for caches to sync for scheduler controller
I1218 12:07:47.162325       1 shared_informer.go:123] caches populated
I1218 12:07:47.162439       1 controller_utils.go:1032] Caches are synced for scheduler controller
I1218 12:07:47.162506       1 leaderelection.go:185] attempting to acquire leader lease  kube-system/kube-scheduler...
I1218 12:07:47.173768       1 leaderelection.go:253] lock is held by ip-10-0-6-208_da021962-02bc-11e9-8aea-0661a387f542 and has not yet expired
I1218 12:07:47.173818       1 leaderelection.go:190] failed to acquire lease kube-system/kube-scheduler
I1218 12:07:51.113518       1 leaderelection.go:253] lock is held by ip-10-0-6-208_da021962-02bc-11e9-8aea-0661a387f542 and has not yet expired
I1218 12:07:51.113550       1 leaderelection.go:190] failed to acquire lease kube-system/kube-scheduler
I1218 12:07:53.485640       1 leaderelection.go:253] lock is held by ip-10-0-6-208_da021962-02bc-11e9-8aea-0661a387f542 and has not yet expired
I1218 12:07:53.485667       1 leaderelection.go:190] failed to acquire lease kube-system/kube-scheduler

Now on master0, in the kube-scheduler logs, I see the following, which is great:

I1218 12:08:02.167958       1 leaderelection.go:194] successfully acquired lease kube-system/kube-scheduler
I1218 12:08:02.253452       1 reflector.go:171] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.254052       1 reflector.go:171] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.254288       1 reflector.go:171] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.254457       1 reflector.go:171] Listing and watching *v1beta1.StatefulSet from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.254634       1 reflector.go:171] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.254831       1 reflector.go:171] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:130
I1218 12:08:02.269414       1 shared_informer.go:123] caches populated
I1218 12:08:04.260221       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:06.315509       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:08.347723       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:15.966696       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:18.051563       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:19.562746       1 factory.go:1181] About to try and schedule pod packageserver-557c68644d-d95sg
I1218 12:08:19.562770       1 scheduler.go:447] Attempting to schedule pod: openshift-operator-lifecycle-manager/packageserver-557c68644d-d95sg
I1218 12:08:19.564834       1 scheduler_binder.go:194] AssumePodVolumes for pod "openshift-operator-lifecycle-manager/packageserver-557c68644d-d95sg", node "ip-10-0-38-188.us-west-2.compute.internal"
I1218 12:08:19.564856       1 scheduler_binder.go:197] AssumePodVolumes for pod "openshift-operator-lifecycle-manager/packageserver-557c68644d-d95sg", node "ip-10-0-38-188.us-west-2.compute.internal": all PVCs bound and nothing to do
I1218 12:08:19.564915       1 factory.go:1407] Attempting to bind packageserver-557c68644d-d95sg to ip-10-0-38-188.us-west-2.compute.internal
I1218 12:08:20.291446       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler
I1218 12:08:22.443019       1 leaderelection.go:209] successfully renewed lease kube-system/kube-scheduler



however

I1218 12:10:25.588552       1 scheduler.go:447] Attempting to schedule pod: openshift-ingress/router-default-847cbc9656-rlhs4
I1218 12:10:25.588759       1 scheduler.go:194] Failed to schedule pod: openshift-ingress/router-default-847cbc9656-rlhs4
I1218 12:10:25.588795       1 factory.go:1303] Unable to schedule openshift-ingress router-default-847cbc9656-rlhs4: no fit: 0/3 nodes are available: 3 node(s) didn't match node selector.; waiting
I1218 12:10:25.588875       1 factory.go:1416] Updating pod condition for openshift-ingress/router-default-847cbc9656-rlhs4 to (PodScheduled==False)
I1218 12:10:25.611527       1 generic_scheduler.go:226] Preemption will not help schedule pod router-default-847cbc9656-rlhs4 on any node.
I1218 12:10:25.611753       1 factory.go:1181] About to try and schedule pod router-default-847cbc9656-rlhs4
I1218 12:10:25.611770       1 scheduler.go:447] Attempting to schedule pod: openshift-ingress/router-default-847cbc9656-rlhs4
I1218 12:10:25.611858       1 scheduler.go:194] Failed to schedule pod: openshift-ingress/router-default-847cbc9656-rlhs4
I1218 12:10:25.611906       1 factory.go:1303] Unable to schedule openshift-ingress router-default-847cbc9656-rlhs4: no fit: 0/3 nodes are available: 3 node(s) didn't match node selector.; waiting


So I guess that is my main problem?
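
(A quick way to line up the router's nodeSelector against the labels the registered nodes actually carry, using standard oc commands; the deployment name is taken from the pod name in the log above:)

# what selector does the ingress router deployment ask for?
oc -n openshift-ingress get deployment router-default -o jsonpath='{.spec.template.spec.nodeSelector}'
# and which labels do the registered nodes carry?
oc get nodes --show-labels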


Dani

Dani Comnea

unread,
Dec 18, 2018, 9:24:22 AM12/18/18
to OpenShift 4 Developer Preview
Although what I don't get is why the API is not up and running so the installer can finish its job... I refuse to believe it's caused by not being able to schedule a router pod.

Daniel Comnea

unread,
Dec 18, 2018, 12:58:09 PM12/18/18
to Dani Comnea, OpenShift 4 Developer Preview
All these problems were caused by... my public zone. D'oh!

Trevor - many thanks for helping me out; I very much enjoyed the journey ;)

I'll document the entire journey in [1] and/or [2].
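
(For anyone hitting the same wall, a minimal sanity check of the public zone, using the API hostname that appears in the scheduler logs above; substitute your own base domain:)

# does the public zone actually answer for the cluster API record?
dig +short dani-cluster1-api.dani-devlab.metal
# and is the base domain's NS delegation pointing at the hosted zone that serves it?
dig +short NS dani-devlab.metal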

Dani

