Getting the following error while installing M-CORD in a box


Yogesh Mandge

Sep 18, 2018, 5:11:25 AM
to CORD Developers
TASK [deploy-kubeadm-aio-master : deploying kubernetes on master node] ***********************************************************************

TASK [deploy-kubeadm-aio-common : performing deploy-kube action] *****************************************************************************
fatal: [local]: FAILED! => {"changed": false, "msg": "+ '[' xdeploy-kube == xdeploy-kubelet ']'\n+ '[' xdeploy-kube == xdeploy-kube ']'\n+ '[' x '!=' x ']'\n+ '[' xdocker0 '!=' x ']'\n++ echo '{' '\"my_container_name\":' '\"kubeadm-deploy-kube\",' '\"user\":' '{' '\"uid\":' 1000, '\"gid\":' 1000, '\"home\":' '\"/home/cse5\"' '},' '\"cluster\":' '{' '\"cni\":' '\"calico\"' '},' '\"kubelet\":' '{' '\"container_runtime\":' '\"docker\",' '\"net_support_linuxbridge\":' true, '\"pv_support_nfs\":' true, '\"pv_support_ceph\":' true '},' '\"helm\":' '{' '\"tiller_image\":' '\"gcr.io/kubernetes-helm/tiller:v2.9.1\"' '},' '\"k8s\":' '{' '\"kubernetesVersion\":' '\"v1.10.5\",' '\"imageRepository\":' '\"gcr.io/google_containers\",' '\"certificatesDir\":' '\"/etc/kubernetes/pki\",' '\"selfHosted\":' '\"False\",' '\"keystoneAuth\":' '\"False\",' '\"api\":' '{' '\"bindPort\":' 6443 '},' '\"networking\":' '{' '\"dnsDomain\":' '\"cluster.local\",' '\"podSubnet\":' '\"192.168.0.0/16\",' '\"serviceSubnet\":' '\"10.96.0.0/12\"' '}' '},' '\"gate\":' '{' '\"fqdn_testing\":' '\"True\",' '\"ingress_ip\":' '\"192.168.136.86\",' '\"fqdn_tld\":' '\"openstackhelm.test\"' '}' '}'\n++ jq '.k8s.api += {\"advertiseAddressDevice\": \"docker0\"}'\n+ PLAYBOOK_VARS='{\n  \"my_container_name\": \"kubeadm-deploy-kube\",\n  \"user\": {\n    \"uid\": 1000,\n    \"gid\": 1000,\n    \"home\": \"/home/cse5\"\n  },\n  \"cluster\": {\n    \"cni\": \"calico\"\n  },\n  \"kubelet\": {\n    \"container_runtime\": \"docker\",\n    \"net_support_linuxbridge\": true,\n    \"pv_support_nfs\": true,\n    \"pv_support_ceph\": true\n  },\n  \"helm\": {\n    \"tiller_image\": \"gcr.io/kubernetes-helm/tiller:v2.9.1\"\n  },\n  \"k8s\": {\n    \"kubernetesVersion\": \"v1.10.5\",\n    \"imageRepository\": \"gcr.io/google_containers\",\n    \"certificatesDir\": \"/etc/kubernetes/pki\",\n    \"selfHosted\": \"False\",\n    \"keystoneAuth\": \"False\",\n    \"api\": {\n      \"bindPort\": 6443,\n      \"advertiseAddressDevice\": \"docker0\"\n    },\n    \"networking\": {\n      \"dnsDomain\": \"cluster.local\",\n      \"podSubnet\": \"192.168.0.0/16\",\n      \"serviceSubnet\": \"10.96.0.0/12\"\n    }\n  },\n  \"gate\": {\n    \"fqdn_testing\": \"True\",\n    \"ingress_ip\": \"192.168.136.86\",\n    \"fqdn_tld\": \"openstackhelm.test\"\n  }\n}'\n+ exec ansible-playbook /opt/playbooks/kubeadm-aio-deploy-master.yaml --inventory=/opt/playbooks/inventory.ini --inventory=/opt/playbooks/vars.yaml '--extra-vars={\n  \"my_container_name\": \"kubeadm-deploy-kube\",\n  \"user\": {\n    \"uid\": 1000,\n    \"gid\": 1000,\n    \"home\": \"/home/cse5\"\n  },\n  \"cluster\": {\n    \"cni\": \"calico\"\n  },\n  \"kubelet\": {\n    \"container_runtime\": \"docker\",\n    \"net_support_linuxbridge\": true,\n    \"pv_support_nfs\": true,\n    \"pv_support_ceph\": true\n  },\n  \"helm\": {\n    \"tiller_image\": \"gcr.io/kubernetes-helm/tiller:v2.9.1\"\n  },\n  \"k8s\": {\n    \"kubernetesVersion\": \"v1.10.5\",\n    \"imageRepository\": \"gcr.io/google_containers\",\n    \"certificatesDir\": \"/etc/kubernetes/pki\",\n    \"selfHosted\": \"False\",\n    \"keystoneAuth\": \"False\",\n    \"api\": {\n      \"bindPort\": 6443,\n      \"advertiseAddressDevice\": \"docker0\"\n    },\n    \"networking\": {\n      \"dnsDomain\": \"cluster.local\",\n      \"podSubnet\": \"192.168.0.0/16\",\n      \"serviceSubnet\": \"10.96.0.0/12\"\n    }\n  },\n  \"gate\": {\n    \"fqdn_testing\": \"True\",\n    \"ingress_ip\": \"192.168.136.86\",\n    \"fqdn_tld\": \"openstackhelm.test\"\n  
}\n}'\n\nPLAY [all] *********************************************************************\n\nTASK [Gathering Facts] *********************************************************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : storing node hostname] ***************************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : setup directorys on host] ************************\nok: [/mnt/rootfs] => (item=/etc/kubernetes)\nchanged: [/mnt/rootfs] => (item=/etc/kubernetes/pki)\n\nTASK [deploy-kubeadm-master : generating initial admin token] ******************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : storing initial admin token] *********************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : kubelet | copying config to host] ****************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | etcd-ca] ***************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | etcd-server] ***********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | etcd-peer] *************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | etcd-healthcheck-client] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | ca] ********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | apiserver] *************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | apiserver-etcd-client] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | apiserver-kubelet-client] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | sa] ********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-ca] ********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-client] ****\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | admin] ************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | kubelet] **********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | controller-manager] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | kubeconfig | scheduler] ********\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : generating etcd static manifest] *****************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | apiserver] ******\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | controller-manager] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : master | deploy | controlplane | scheduler] ******\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : wait for kube api] *******************************\nFAILED - RETRYING: wait for kube api (120 retries left).\nFAILED - RETRYING: wait for kube api (119 retries left).\nFAILED - RETRYING: wait for kube api (118 retries left).\nFAILED - RETRYING: wait for kube api (117 retries left).\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : wait for node to come online] 
********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : include_tasks] ***********************************\nincluded: /opt/playbooks/roles/deploy-kubeadm-master/tasks/wait-for-kube-system-namespace.yaml for /mnt/rootfs\n\nTASK [deploy-kubeadm-master : wait for kube pods to all be running in kube-system namespace] ***\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : deploying kube-proxy] ****************************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : include_tasks] ***********************************\nincluded: /opt/playbooks/roles/deploy-kubeadm-master/tasks/helm-cni.yaml for /mnt/rootfs\n\nTASK [deploy-kubeadm-master : pull the helm tiller Image] **********************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : deploying bootstrap tiller] **********************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : wait for tiller to be ready] *********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : getting default route device mtu] ****************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | calico | label node] *************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | calico] **************************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | calico] **************************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | status] **************************\nok: [/mnt/rootfs -> 127.0.0.1] => {\n    \"msg\": [\n        \"LAST DEPLOYED: Tue Sep 18 08:58:58 2018\", \n        \"NAMESPACE: kube-system\", \n        \"STATUS: DEPLOYED\", \n        \"\", \n        \"RESOURCES:\", \n        \"==> v1/Secret\", \n        \"NAME                 TYPE               DATA  AGE\", \n        \"calico-certificates  kubernetes.io/tls  3     26s\", \n        \"\", \n        \"==> v1/ServiceAccount\", \n        \"NAME                            SECRETS  AGE\", \n        \"calico-etcd                     1        26s\", \n        \"calico-calico-cni-plugin        1        26s\", \n        \"calico-calico-kube-controllers  1        26s\", \n        \"calico-settings                 1        26s\", \n        \"\", \n        \"==> v1beta1/ClusterRoleBinding\", \n        \"NAME                            AGE\", \n        \"calico-cni-plugin               26s\", \n        \"calico-calico-kube-controllers  26s\", \n        \"\", \n        \"==> v1beta1/Role\", \n        \"NAME                                               AGE\", \n        \"calico-kube-system-calico-calico-cni-plugin        26s\", \n        \"calico-kube-system-calico-calico-kube-controllers  26s\", \n        \"calico-kube-system-calico-settings                 26s\", \n        \"\", \n        \"==> v1beta1/RoleBinding\", \n        \"NAME                                   AGE\", \n        \"calico-calico-calico-cni-plugin        26s\", \n        \"calico-calico-calico-kube-controllers  26s\", \n        \"calico-calico-settings                 26s\", \n        \"\", \n        \"==> v1/Service\", \n        \"NAME  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE\", \n        \"etcd  ClusterIP  10.96.232.136  <none>       6666/TCP  26s\", \n        \"\", \n        \"==> v1/ConfigMap\", \n        \"NAME        DATA  AGE\", \n        \"calico-bin  3     26s\", \n        \"calico-etc  12    26s\", \n        \"\", \n        \"==> v1beta1/ClusterRole\", \n        
\"NAME                            AGE\", \n        \"calico-calico-cni-plugin        26s\", \n        \"calico-calico-kube-controllers  26s\", \n        \"\", \n        \"==> v1/DaemonSet\", \n        \"NAME         DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR                    AGE\", \n        \"calico-etcd  1        1        1      1           1          node-role.kubernetes.io/master=  26s\", \n        \"calico-node  1        1        1      1           1          <none>                           26s\", \n        \"\", \n        \"==> v1/Deployment\", \n        \"NAME                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE\", \n        \"calico-kube-policy-controllers  1        1        1           0          26s\", \n        \"\", \n        \"==> v1/Job\", \n        \"NAME             DESIRED  SUCCESSFUL  AGE\", \n        \"calico-settings  1        0           26s\", \n        \"\", \n        \"==> v1/Pod(related)\", \n        \"NAME                                             READY  STATUS   RESTARTS  AGE\", \n        \"calico-etcd-b5nvp                                1/1    Running  0         16s\", \n        \"calico-node-sshx2                                2/2    Running  0         16s\", \n        \"calico-kube-policy-controllers-6dcb6488c8-mmsnt  0/1    Pending  0         16s\", \n        \"calico-settings-zntkr                            0/1    Pending  0         16s\"\n    ]\n}\n\nTASK [deploy-kubeadm-master : kubeadm | cni | flannel] *************************\nskipping: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | flannel] *************************\nskipping: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : kubeadm | cni | status] **************************\nskipping: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : removing bootstrap tiller container] *************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : wait for node to be ready] ***********************\nFAILED - RETRYING: wait for node to be ready (120 retries left).\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : include_tasks] ***********************************\nincluded: /opt/playbooks/roles/deploy-kubeadm-master/tasks/wait-for-kube-system-namespace.yaml for /mnt/rootfs\n\nTASK [deploy-kubeadm-master : wait for kube pods to all be running in kube-system namespace] ***\nFAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (120 retries left).\nFAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (119 retries left).\nFAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (118 retries left).\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : include_tasks] ***********************************\nincluded: /opt/playbooks/roles/deploy-kubeadm-master/tasks/helm-dns.yaml for /mnt/rootfs\n\nTASK [deploy-kubeadm-master : pull the helm tiller Image] **********************\nok: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : deploying bootstrap tiller] **********************\nchanged: [/mnt/rootfs]\n\nTASK [deploy-kubeadm-master : wait for tiller to be ready] *********************\nchanged: [/mnt/rootfs -> 127.0.0.1]\n\nTASK [deploy-kubeadm-master : kubeadm | dns] ***********************************\nfatal: [/mnt/rootfs -> 127.0.0.1]: FAILED! 
=> {\"changed\": true, \"cmd\": [\"helm\", \"install\", \"/opt/charts/kube-dns\", \"--name\", \"kube-dns\", \"--namespace\", \"kube-system\", \"--set\", \"networking.dnsDomain=cluster.local\", \"--wait\"], \"delta\": \"0:05:00.473058\", \"end\": \"2018-09-18 09:04:53.221635\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-09-18 08:59:52.748577\", \"stderr\": \"Error: release kube-dns failed: timed out waiting for the condition\", \"stderr_lines\": [\"Error: release kube-dns failed: timed out waiting for the condition\"], \"stdout\": \"\", \"stdout_lines\": []}\n\tto retry, use: --limit @/opt/playbooks/kubeadm-aio-deploy-master.retry\n\nPLAY RECAP *********************************************************************\n/mnt/rootfs                : ok=47   changed=37   unreachable=0    failed=1   \n\n", "status": 2}

TASK [deploy-kubeadm-aio-common : getting logs for deploy-kube action] ***********************************************************************
changed: [local]

TASK [deploy-kubeadm-aio-common : dumping logs for deploy-kube action] ***********************************************************************
ok: [local] => {
    "out.stdout_lines": [
        "",
        "PLAY [all] *********************************************************************",
        "",
        "TASK [Gathering Facts] *********************************************************",
        "ok: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : storing node hostname] ***************************",
        "ok: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : setup directorys on host] ************************",
        "ok: [/mnt/rootfs] => (item=/etc/kubernetes)",
        "changed: [/mnt/rootfs] => (item=/etc/kubernetes/pki)",
        "",
        "TASK [deploy-kubeadm-master : generating initial admin token] ******************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : storing initial admin token] *********************",
        "ok: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : kubelet | copying config to host] ****************",
        "changed: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | etcd-ca] ***************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | etcd-server] ***********",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | etcd-peer] *************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | etcd-healthcheck-client] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | ca] ********************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | apiserver] *************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | apiserver-etcd-client] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | apiserver-kubelet-client] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | sa] ********************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-ca] ********",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | certs | front-proxy-client] ****",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | kubeconfig | admin] ************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | kubeconfig | kubelet] **********",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | kubeconfig | controller-manager] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | kubeconfig | scheduler] ********",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : generating etcd static manifest] *****************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | controlplane | apiserver] ******",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | controlplane | controller-manager] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : master | deploy | controlplane | scheduler] ******",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : wait for kube api] *******************************",
        "FAILED - RETRYING: wait for kube api (120 retries left).",
        "FAILED - RETRYING: wait for kube api (119 retries left).",
        "FAILED - RETRYING: wait for kube api (118 retries left).",
        "FAILED - RETRYING: wait for kube api (117 retries left).",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : wait for node to come online] ********************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : include_tasks] ***********************************",
        "included: /opt/playbooks/roles/deploy-kubeadm-master/tasks/wait-for-kube-system-namespace.yaml for /mnt/rootfs",
        "",
        "TASK [deploy-kubeadm-master : wait for kube pods to all be running in kube-system namespace] ***",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : deploying kube-proxy] ****************************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : include_tasks] ***********************************",
        "included: /opt/playbooks/roles/deploy-kubeadm-master/tasks/helm-cni.yaml for /mnt/rootfs",
        "",
        "TASK [deploy-kubeadm-master : pull the helm tiller Image] **********************",
        "ok: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : deploying bootstrap tiller] **********************",
        "changed: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : wait for tiller to be ready] *********************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : getting default route device mtu] ****************",
        "changed: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | calico | label node] *************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | calico] **************************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | calico] **************************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | status] **************************",
        "ok: [/mnt/rootfs -> 127.0.0.1] => {",
        "    \"msg\": [",
        "        \"LAST DEPLOYED: Tue Sep 18 08:58:58 2018\", ",
        "        \"NAMESPACE: kube-system\", ",
        "        \"STATUS: DEPLOYED\", ",
        "        \"\", ",
        "        \"RESOURCES:\", ",
        "        \"==> v1/Secret\", ",
        "        \"NAME                 TYPE               DATA  AGE\", ",
        "        \"calico-certificates  kubernetes.io/tls  3     26s\", ",
        "        \"\", ",
        "        \"==> v1/ServiceAccount\", ",
        "        \"NAME                            SECRETS  AGE\", ",
        "        \"calico-etcd                     1        26s\", ",
        "        \"calico-calico-cni-plugin        1        26s\", ",
        "        \"calico-calico-kube-controllers  1        26s\", ",
        "        \"calico-settings                 1        26s\", ",
        "        \"\", ",
        "        \"==> v1beta1/ClusterRoleBinding\", ",
        "        \"NAME                            AGE\", ",
        "        \"calico-cni-plugin               26s\", ",
        "        \"calico-calico-kube-controllers  26s\", ",
        "        \"\", ",
        "        \"==> v1beta1/Role\", ",
        "        \"NAME                                               AGE\", ",
        "        \"calico-kube-system-calico-calico-cni-plugin        26s\", ",
        "        \"calico-kube-system-calico-calico-kube-controllers  26s\", ",
        "        \"calico-kube-system-calico-settings                 26s\", ",
        "        \"\", ",
        "        \"==> v1beta1/RoleBinding\", ",
        "        \"NAME                                   AGE\", ",
        "        \"calico-calico-calico-cni-plugin        26s\", ",
        "        \"calico-calico-calico-kube-controllers  26s\", ",
        "        \"calico-calico-settings                 26s\", ",
        "        \"\", ",
        "        \"==> v1/Service\", ",
        "        \"NAME  TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)   AGE\", ",
        "        \"etcd  ClusterIP  10.96.232.136  <none>       6666/TCP  26s\", ",
        "        \"\", ",
        "        \"==> v1/ConfigMap\", ",
        "        \"NAME        DATA  AGE\", ",
        "        \"calico-bin  3     26s\", ",
        "        \"calico-etc  12    26s\", ",
        "        \"\", ",
        "        \"==> v1beta1/ClusterRole\", ",
        "        \"NAME                            AGE\", ",
        "        \"calico-calico-cni-plugin        26s\", ",
        "        \"calico-calico-kube-controllers  26s\", ",
        "        \"\", ",
        "        \"==> v1/DaemonSet\", ",
        "        \"NAME         DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR                    AGE\", ",
        "        \"calico-etcd  1        1        1      1           1          node-role.kubernetes.io/master=  26s\", ",
        "        \"calico-node  1        1        1      1           1          <none>                           26s\", ",
        "        \"\", ",
        "        \"==> v1/Deployment\", ",
        "        \"NAME                            DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE\", ",
        "        \"calico-kube-policy-controllers  1        1        1           0          26s\", ",
        "        \"\", ",
        "        \"==> v1/Job\", ",
        "        \"NAME             DESIRED  SUCCESSFUL  AGE\", ",
        "        \"calico-settings  1        0           26s\", ",
        "        \"\", ",
        "        \"==> v1/Pod(related)\", ",
        "        \"NAME                                             READY  STATUS   RESTARTS  AGE\", ",
        "        \"calico-etcd-b5nvp                                1/1    Running  0         16s\", ",
        "        \"calico-node-sshx2                                2/2    Running  0         16s\", ",
        "        \"calico-kube-policy-controllers-6dcb6488c8-mmsnt  0/1    Pending  0         16s\", ",
        "        \"calico-settings-zntkr                            0/1    Pending  0         16s\"",
        "    ]",
        "}",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | flannel] *************************",
        "skipping: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | flannel] *************************",
        "skipping: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | cni | status] **************************",
        "skipping: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : removing bootstrap tiller container] *************",
        "changed: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : wait for node to be ready] ***********************",
        "FAILED - RETRYING: wait for node to be ready (120 retries left).",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : include_tasks] ***********************************",
        "included: /opt/playbooks/roles/deploy-kubeadm-master/tasks/wait-for-kube-system-namespace.yaml for /mnt/rootfs",
        "",
        "TASK [deploy-kubeadm-master : wait for kube pods to all be running in kube-system namespace] ***",
        "FAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (120 retries left).",
        "FAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (119 retries left).",
        "FAILED - RETRYING: wait for kube pods to all be running in kube-system namespace (118 retries left).",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : include_tasks] ***********************************",
        "included: /opt/playbooks/roles/deploy-kubeadm-master/tasks/helm-dns.yaml for /mnt/rootfs",
        "",
        "TASK [deploy-kubeadm-master : pull the helm tiller Image] **********************",
        "ok: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : deploying bootstrap tiller] **********************",
        "changed: [/mnt/rootfs]",
        "",
        "TASK [deploy-kubeadm-master : wait for tiller to be ready] *********************",
        "changed: [/mnt/rootfs -> 127.0.0.1]",
        "",
        "TASK [deploy-kubeadm-master : kubeadm | dns] ***********************************",
        "fatal: [/mnt/rootfs -> 127.0.0.1]: FAILED! => {\"changed\": true, \"cmd\": [\"helm\", \"install\", \"/opt/charts/kube-dns\", \"--name\", \"kube-dns\", \"--namespace\", \"kube-system\", \"--set\", \"networking.dnsDomain=cluster.local\", \"--wait\"], \"delta\": \"0:05:00.473058\", \"end\": \"2018-09-18 09:04:53.221635\", \"msg\": \"non-zero return code\", \"rc\": 1, \"start\": \"2018-09-18 08:59:52.748577\", \"stderr\": \"Error: release kube-dns failed: timed out waiting for the condition\", \"stderr_lines\": [\"Error: release kube-dns failed: timed out waiting for the condition\"], \"stdout\": \"\", \"stdout_lines\": []}",
        "\tto retry, use: --limit @/opt/playbooks/kubeadm-aio-deploy-master.retry",
        "",
        "PLAY RECAP *********************************************************************",
        "/mnt/rootfs                : ok=47   changed=37   unreachable=0    failed=1   "
    ]
}
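
The step that actually fails is helm (v2, with tiller v2.9.1 per the vars above) installing the kube-dns chart with a five-minute --wait. I assume re-running it by hand without --wait and then watching the pods would show what it is stuck on (helm v2 syntax; the Deployment name is my guess, not confirmed from the chart):

helm delete --purge kube-dns                   # remove the failed release record first (helm v2)
helm install /opt/charts/kube-dns --name kube-dns --namespace kube-system \
  --set networking.dnsDomain=cluster.local     # same install, but returns immediately without --wait
kubectl -n kube-system get pods -w             # watch why the kube-dns pods never go Ready
kubectl -n kube-system describe deployment kube-dns   # assumes the chart creates a Deployment named kube-dns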

TASK [deploy-kubeadm-aio-common : exiting if deploy-kube action failed] **********************************************************************
fatal: [local]: FAILED! => {"changed": false, "cmd": "exit 1", "msg": "[Errno 2] No such file or directory", "rc": 2}

TASK [deploy-kubeadm-aio-common : removing container for deploy-kube action] *****************************************************************
changed: [local]
    to retry, use: --limit @/home/cse5/openstack-helm-infra/playbooks/osh-infra-deploy-k8s.retry

PLAY RECAP ***********************************************************************************************************************************
local                      : ok=21   changed=14   unreachable=0    failed=2  

++ dump_logs 2
++ export LOGS_DIR=/home/cse5/openstack-helm-infra/tools/gate/devel/../../../logs
++ LOGS_DIR=/home/cse5/openstack-helm-infra/tools/gate/devel/../../../logs
++ set +e
++ rm -rf /home/cse5/openstack-helm-infra/tools/gate/devel/../../../logs
++ mkdir -p /home/cse5/openstack-helm-infra/tools/gate/devel/../../../logs/ara
++ ara generate html /home/cse5/openstack-helm-infra/tools/gate/devel/../../../logs/ara
Done.
++ exit 2
Makefile:59: recipe for target 'dev-deploy' failed
make: *** [dev-deploy] Error 2
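
If the kube-dns failure can be fixed, the run should be resumable from the retry file the play left behind, or the whole target can simply be re-run (paths taken from the log above; I have not checked which inventory "make dev-deploy" passes):

cd /home/cse5/openstack-helm-infra
ansible-playbook playbooks/osh-infra-deploy-k8s.yaml \
  --limit @playbooks/osh-infra-deploy-k8s.retry   # resume only the failed hosts
make dev-deploy                                   # or start the deploy target over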


Any help would be appreciated. Thank you!

gmzh...@gmail.com

Mar 21, 2019, 3:15:42 AM
to CORD Developers
Hi Yogesh,
Did you manage to fix this issue?

Thanks,
Guangming

On Tuesday, September 18, 2018 at 5:11:25 PM UTC+8, Yogesh Mandge wrote: