After 4.7 installation errors on RHV

Batur Orkun

Jul 29, 2021, 10:29:55 AM
to Vadim Rutkovsky
Hello everybody,

I installed 4.7 and I can see 6 machines in the RHV panel, but I have some problems.

  1. Worker nodes are missing: only one worker has registered with the cluster.

[root@bastion auth]# oc get nodes -o wide
NAME                       STATUS   ROLES    AGE    VERSION                INTERNAL-IP     EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION           CONTAINER-RUNTIME
okd-q282j-master-0         Ready    master   3h1m   v1.20.0+87cc9a4-1079   192.168.2.150   <none>        Fedora CoreOS 34   5.12.7-300.fc34.x86_64   cri-o://1.20.3
okd-q282j-master-1         Ready    master   3h1m   v1.20.0+87cc9a4-1079   192.168.2.149   <none>        Fedora CoreOS 34   5.12.7-300.fc34.x86_64   cri-o://1.20.3
okd-q282j-master-2         Ready    master   3h2m   v1.20.0+87cc9a4-1079   192.168.2.137   <none>        Fedora CoreOS 34   5.12.7-300.fc34.x86_64   cri-o://1.20.3
okd-q282j-worker-0-pl9wx   Ready    worker   128m   v1.20.0+87cc9a4-1079   192.168.2.153   <none>        Fedora CoreOS 34   5.12.7-300.fc34.x86_64   cri-o://1.20.3
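
A note on the likely cause: on OKD/OpenShift, workers that boot but never show up in oc get nodes are often waiting on unapproved kubelet certificate signing requests. A quick first check from the bastion, using the same admin kubeconfig as above:

[root@bastion auth]# oc get csr

Any request listed as Pending can be approved with oc adm certificate approve <csr-name>.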


  2. Two operators (Ingress and Console) are degraded.

[root@bastion auth]# oc get  co
NAME                                       VERSION                         AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.7.0-0.okd-2021-07-03-190901   True        False         False      80m
baremetal                                  4.7.0-0.okd-2021-07-03-190901   True        False         False      173m
cloud-credential                           4.7.0-0.okd-2021-07-03-190901   True        False         False      3h19m
cluster-autoscaler                         4.7.0-0.okd-2021-07-03-190901   True        False         False      166m
config-operator                            4.7.0-0.okd-2021-07-03-190901   True        False         False      173m
console                                    4.7.0-0.okd-2021-07-03-190901   True        False         True       79m
csi-snapshot-controller                    4.7.0-0.okd-2021-07-03-190901   True        False         False      142m
dns                                        4.7.0-0.okd-2021-07-03-190901   True        False         False      160m
etcd                                       4.7.0-0.okd-2021-07-03-190901   True        False         False      172m
image-registry                             4.7.0-0.okd-2021-07-03-190901   True        False         False      126m
ingress                                    4.7.0-0.okd-2021-07-03-190901   True        False         True       125m
insights                                   4.7.0-0.okd-2021-07-03-190901   True        False         False      166m
kube-apiserver                             4.7.0-0.okd-2021-07-03-190901   True        False         False      160m
kube-controller-manager                    4.7.0-0.okd-2021-07-03-190901   True        False         False      160m
kube-scheduler                             4.7.0-0.okd-2021-07-03-190901   True        False         False      156m
kube-storage-version-migrator              4.7.0-0.okd-2021-07-03-190901   True        False         False      122m
machine-api                                4.7.0-0.okd-2021-07-03-190901   True        False         False      150m
machine-approver                           4.7.0-0.okd-2021-07-03-190901   True        False         False      171m
machine-config                             4.7.0-0.okd-2021-07-03-190901   True        False         False      155m
marketplace                                4.7.0-0.okd-2021-07-03-190901   True        False         False      169m
monitoring                                 4.7.0-0.okd-2021-07-03-190901   True        False         False      121m
network                                    4.7.0-0.okd-2021-07-03-190901   True        False         False      174m
node-tuning                                4.7.0-0.okd-2021-07-03-190901   True        False         False      156m
openshift-apiserver                        4.7.0-0.okd-2021-07-03-190901   True        False         False      124m
openshift-controller-manager               4.7.0-0.okd-2021-07-03-190901   True        False         False      154m
openshift-samples                          4.7.0-0.okd-2021-07-03-190901   True        False         False      156m
operator-lifecycle-manager                 4.7.0-0.okd-2021-07-03-190901   True        False         False      160m
operator-lifecycle-manager-catalog         4.7.0-0.okd-2021-07-03-190901   True        False         False      160m
operator-lifecycle-manager-packageserver   4.7.0-0.okd-2021-07-03-190901   True        False         False      156m
service-ca                                 4.7.0-0.okd-2021-07-03-190901   True        False         False      173m
storage                                    4.7.0-0.okd-2021-07-03-190901   True        False         False      159m
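
To see why the two operators report Degraded, the condition messages can be read straight off the ClusterOperator objects; both commands below are standard oc usage (the jsonpath condition filter is built into oc/kubectl):

[root@bastion auth]# oc describe co ingress
[root@bastion auth]# oc get co console -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'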


[root@bastion auth]# oc get  clusterversion
NAME      VERSION                         AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.okd-2021-07-03-190901   True        False         116m    Error while reconciling 4.7.0-0.okd-2021-07-03-190901: an unknown error has occurred: MultipleErrors
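
The generic MultipleErrors text is just an aggregate; the underlying messages live in the ClusterVersion object's Failing condition, which can be expanded like this (Failing is a standard ClusterVersion condition type):

[root@bastion auth]# oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}'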


I can access the web console UI.

Thanks


Batur Orkun

Jul 29, 2021, 3:22:05 PM
to okd-wg

Now everything is OK, but I have only 2 workers:

[root@bastion ~]# oc get nodes
NAME                       STATUS   ROLES    AGE     VERSION
okd-q282j-master-0         Ready    master   7h58m   v1.20.0+87cc9a4-1079
okd-q282j-master-1         Ready    master   7h59m   v1.20.0+87cc9a4-1079
okd-q282j-master-2         Ready    master   8h      v1.20.0+87cc9a4-1079
okd-q282j-worker-0-25g8k   Ready    worker   4h28m   v1.20.0+87cc9a4-1079
okd-q282j-worker-0-pl9wx   Ready    worker   7h6m    v1.20.0+87cc9a4-1079

There is a third, unused worker VM in RHV. How can I add it to the cluster? I inspected the logs on that worker and saw an error about failing to resolve "api-int.mydomain.com", so I added that record to my DNS, but I do not understand how the other workers managed to join the cluster successfully without it.
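
If that spare VM was created by the installer, it should be backed by a Machine object, and its phase will show where provisioning stalled. A quick way to check, with okd-q282j-worker-0 being the MachineSet name implied by the node names above:

[root@bastion ~]# oc get machinesets -n openshift-machine-api
[root@bastion ~]# oc get machines -n openshift-machine-api -o wide

If the MachineSet replica count is lower than the number of worker VMs you expect, scaling it up is the supported way to bring another worker in:

[root@bastion ~]# oc scale machineset okd-q282j-worker-0 --replicas=3 -n openshift-machine-api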

# journalctl -u kubelet.service -f

Jul 29 19:19:02 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:02.732300  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:02 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:02.803351  202900 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Jul 29 19:19:02 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:02.832537  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:02 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:19:02.889719  202900 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "okd-q282j-worker-0-62zss" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jul 29 19:19:02 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:02.932759  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:03 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:03.032864  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:03 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:03.133021  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:03 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:19:03.886733  202900 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "okd-q282j-worker-0-62zss" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jul 29 19:19:03 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:03.934692  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.034980  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.135255  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.577368  202900 transport.go:110] It has been 5m0s since a valid client cert was found, but the server is not responsive. A restart may be necessary to retrieve new initial credentials.
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.636501  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.736683  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.836953  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:19:04.886400  202900 csi_plugin.go:1016] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "okd-q282j-worker-0-62zss" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Jul 29 19:19:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:19:04.937193  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:20:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:20:04.898162  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:20:04 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:20:04.998363  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:20:05.098531  202900 kubelet.go:2265] node "okd-q282j-worker-0-62zss" not found
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:20:05.114511  202900 kubelet_node_status.go:362] Setting node annotation to enable volume controller attach/detach
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:20:05.124989  202900 kubelet_node_status.go:554] Recording NodeHasSufficientMemory event message for node okd-q282j-worker-0-62zss
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:20:05.125062  202900 kubelet_node_status.go:554] Recording NodeHasNoDiskPressure event message for node okd-q282j-worker-0-62zss
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:20:05.125085  202900 kubelet_node_status.go:554] Recording NodeHasSufficientPID event message for node okd-q282j-worker-0-62zss
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: I0729 19:20:05.125142  202900 kubelet_node_status.go:71] Attempting to register node okd-q282j-worker-0-62zss
Jul 29 19:20:05 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:20:05.130746  202900 kubelet_node_status.go:93] Unable to register node "okd-q282j-worker-0-62zss" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
Jul 29 19:20:40 okd-q282j-worker-0-62zss hyperkube[202900]: E0729 19:20:40.218761  202900 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"okd-q282j-worker-0-62zss.1696586bf243e640", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"okd-q282j-worker-0-62zss", UID:"okd-q282j-worker-0-62zss", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node okd-q282j-worker-0-62zss status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"okd-q282j-worker-0-62zss"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc038dcdffb06b040, ext:11595150617, loc:(*time.Location)(0x732f620)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc038ddc20c6d1a31, ext:915813333267, loc:(*time.Location)(0x732f620)}}, Count:186, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "okd-q282j-worker-0-62zss.1696586bf243e640" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
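
The repeated User "system:anonymous" messages, together with the "It has been 5m0s since a valid client cert was found" line above, suggest the kubelet never exchanged its bootstrap credentials for a signed client certificate, so the node cannot register itself. The usual remedy is to approve the pending CSRs from a host with cluster-admin access; the go-template below is the form the OpenShift documentation uses to approve all pending requests in one pass:

[root@bastion ~]# oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

Two rounds are typically needed: one for the node's client CSR and, shortly afterwards, one for its serving CSR.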