Unable to deploy on vmware vms successfully


Nuz

Sep 24, 2018, 10:40:31 AM9/24/18
to Tungsten Fabric Users
Hi,
I am trying to deploy Tungsten Fabric using the Tungsten Fabric Ansible deployer on vCenter/ESXi. I set up three CentOS VMs, each with 8 GB of memory and a 50 GB disk; they are all connected to each other and can ping each other and the internet.
I am using the following config/instances.yaml:
 
provider_config:
  bms:
   ssh_pwd: xxxxxx
   ssh_user: root
   ssh_public_key: /root/.ssh/id_rsa.pub
   ssh_private_key: /root/.ssh/id_rsa
   domainsuffix: xxxx.local
instances:
  bms1:
   provider: bms
   roles:            # Optional.  If roles is not defined, all below roles will be created
      config_database:         # Optional.
      config:                  # Optional.
      control:                 # Optional.
      analytics_database:      # Optional.
      analytics:               # Optional.
      webui:                   # Optional.
      k8s_master:              # Optional.
      kubemanager:             # Optional.
   ip: 192.168.40.9
  bms2:
   provider: bms
   roles:            # Optional.  If roles is not defined, all below roles will be created
     vrouter:        # Optional.
     k8s_node:       # Optional.
   ip: 192.168.40.10
contrail_configuration:
  CONTAINER_REGISTRY: opencontrailnightly
  CONTRAIL_VERSION: latest
  KUBERNETES_CLUSTER_PROJECT: {}
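For reference, these are roughly the deployer commands I run from the contrail-ansible-deployer checkout after writing the config above (playbook names are from the deployer's README, so adjust if your copy differs):

```shell
# From the root of the contrail-ansible-deployer repository,
# with config/instances.yaml filled in as shown above.

# 1. Prepare the target hosts (ssh access, base packages, etc.)
ansible-playbook -i inventory/ playbooks/configure_instances.yml

# 2. Install Kubernetes on the k8s_master / k8s_node instances
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_k8s.yml

# 3. Deploy the Contrail / Tungsten Fabric containers
ansible-playbook -e orchestrator=kubernetes -i inventory/ playbooks/install_contrail.yml
```

All three playbooks complete without fatal errors on my side.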

After this I am able to go to https://192.168.40.9:8143 and log in to the Contrail portal. I see the green dot, but it says the database node is down.
In addition, after I ssh to the master VM (192.168.40.9) and run kubectl get nodes, I see the following:

[root@master ~]# kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master    NotReady   master    34m       v1.9.2
node1     Ready      <none>    33m       v1.9.2

After I run kubectl describe nodes master I get this:
Name:               master
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=master
                    node-role.kubernetes.io/master=
Annotations:        node.alpha.kubernetes.io/ttl=0
CreationTimestamp:  Mon, 24 Sep 2018 09:00:47 -0500
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Mon, 24 Sep 2018 09:38:20 -0500   Mon, 24 Sep 2018 09:00:37 -0500   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Mon, 24 Sep 2018 09:38:20 -0500   Mon, 24 Sep 2018 09:00:37 -0500   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Mon, 24 Sep 2018 09:38:20 -0500   Mon, 24 Sep 2018 09:00:37 -0500   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            False   Mon, 24 Sep 2018 09:38:20 -0500   Mon, 24 Sep 2018 09:00:37 -0500   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.40.4
  Hostname:    master
Capacity:
 cpu:     4
 memory:  8009792Ki
 pods:    110
Allocatable:
 cpu:     4
 memory:  7907392Ki
 pods:    110
System Info:
 Machine ID:                 86ae74cff1aa417a8e2e499d13c243b8
 System UUID:                9EE51E42-B988-998A-CDF2-080BA43B4026
 Boot ID:                    db60be73-5127-4e8f-9381-8caec780d2d8
 Kernel Version:             3.10.0-862.11.6.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1
 Kubelet Version:            v1.9.2
 Kube-Proxy Version:         v1.9.2
PodCIDR:                     10.32.0.0/24
ExternalID:                  master
Non-terminated Pods:         (5 in total)
  Namespace                  Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                              ------------  ----------  ---------------  -------------
  kube-system                etcd-master                       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-apiserver-master             250m (6%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-master    200m (5%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-w2sqh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-master             100m (2%)     0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  550m (13%)    0 (0%)      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age                From                Message
  ----    ------                   ----               ----                -------
  Normal  Starting                 38m                kubelet, master     Starting kubelet.
  Normal  NodeAllocatableEnforced  38m                kubelet, master     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientDisk    38m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  38m (x8 over 38m)  kubelet, master     Node master status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    38m (x7 over 38m)  kubelet, master     Node master status is now: NodeHasNoDiskPressure
  Normal  Starting                 37m                kube-proxy, master  Starting kube-proxy.

  
 Why am I getting the "runtime network not ready" error? When I do this in AWS using the scripted process online, everything works beautifully and I can also bring up the Kubernetes portal.
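 In case it helps, these are the checks I was planning to run on the master to see why the CNI never initializes (these are the standard kubelet CNI paths, so they may differ on this setup):

```shell
# Is there any CNI config at all? kubelet looks in /etc/cni/net.d by default.
ls -l /etc/cni/net.d/

# Is a CNI binary (e.g. contrail-k8s-cni) installed where kubelet expects it?
ls -l /opt/cni/bin/

# Did the Contrail containers actually start on this node?
docker ps -a | grep -i contrail

# What exactly is kubelet complaining about regarding CNI?
journalctl -u kubelet --no-pager | grep -i cni | tail -20
```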
 
 thanks,
 Nusrat