Failure of Kubernetes dashboard, not a failure of Lucida


Chris Pitchford

Sep 30, 2016, 1:15:05 PM
to Lucida Users
Using the latest tag, I installed Lucida into a VirtualBox VM with 5.88 GB RAM and a 200 GB disk, running Ubuntu 14.04 LTS. It's working!

Installation is complete, but I hit a repeated, consistent error when trying to use the Kubernetes UI. Going to http://127.0.0.1:8080/ui/ redirects me to http://127.0.0.1:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard, which returns the following JSON:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
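(As an aside, the Status object above is machine-readable, so the same check can be done programmatically. A minimal sketch using only the Python standard library; `status_doc` is just the JSON from above pasted in:)

```python
import json

# The Status document returned by the apiserver proxy endpoint.
status_doc = """
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \\"kubernetes-dashboard\\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
"""

status = json.loads(status_doc)

# A Status with status=Failure and reason=ServiceUnavailable means the
# service exists but has no ready endpoints backing it.
is_unavailable = (
    status.get("kind") == "Status"
    and status.get("status") == "Failure"
    and status.get("reason") == "ServiceUnavailable"
)

print(is_unavailable, status.get("code"))
```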

Running `sudo kubectl get pods --namespace=kube-system` results in:

NAME                                READY     STATUS             RESTARTS   AGE
k8s-etcd-127.0.0.1                  1/1       Running            6          2d
k8s-master-127.0.0.1                4/4       Running            24         2d
k8s-proxy-127.0.0.1                 1/1       Running            6          2d
kube-addon-manager-127.0.0.1        2/2       Running            12         2d
kube-dns-v17-x2qmp                  2/3       CrashLoopBackOff   84         2d
kubernetes-dashboard-v1.1.0-4vc9l   0/1       CrashLoopBackOff   69         2d


Here's the output of `sudo kubectl describe pod kubernetes-dashboard-v1.1.0-4vc9l --namespace=kube-system`:

Name: kubernetes-dashboard-v1.1.0-4vc9l
Namespace: kube-system
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 27 Sep 2016 14:26:03 -0600
Labels: k8s-app=kubernetes-dashboard,kubernetes.io/cluster-service=true,version=v1.1.0
Status: Running
IP: 172.17.0.2
Controllers: ReplicationController/kubernetes-dashboard-v1.1.0
Containers:
  kubernetes-dashboard:
    Container ID: docker://34be22ac4cd5727a042b6af3b54ef52f3fa8a98aa269a91d0b417c5b19644371
    Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0
    Image ID: docker://sha256:d023c050c0651bd96508b874ca1cd628fd0077f8327e1aeec92d22070b331c53
    Port: 9090/TCP
    QoS Tier:
      cpu: Guaranteed
      memory: Guaranteed
    Limits:
      memory: 50Mi
      cpu: 100m
    Requests:
      cpu: 100m
      memory: 50Mi
    State: Waiting
      Reason: CrashLoopBackOff
    Last State: Terminated
      Reason: Error
      Exit Code: 1
      Started: Fri, 30 Sep 2016 10:21:21 -0600
      Finished: Fri, 30 Sep 2016 10:21:22 -0600
    Ready: False
    Restart Count: 69
    Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment Variables:
Conditions:
  Type Status
  Initialized True
  Ready False
  PodScheduled True
Volumes:
  default-token-m1jd6:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-m1jd6
Events:
  FirstSeen LastSeen Count From SubobjectPath Type Reason Message
  --------- -------- ----- ---- ------------- -------- ------ -------
  1h 2m 22 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Normal Pulled Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.0" already present on machine
  1h 2m 13 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Normal Created (events with common reason combined)
  1h 2m 13 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Normal Started (events with common reason combined)
  1h 15s 393 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Warning BackOff Back-off restarting failed docker container
  1h 15s 363 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-v1.1.0-4vc9l_kube-system(9c730f9b-84f0-11e6-b8f3-080027ec97bd)"


Running `curl http://localhost:8080/version` returns the following JSON:


{
  "major": "1",
  "minor": "3",
  "gitVersion": "v1.3.0",
  "gitCommit": "283137936a498aed572ee22af6774b6fb6e9fd94",
  "gitTreeState": "clean",
  "buildDate": "2016-07-01T19:19:19Z",
  "goVersion": "go1.6.2",
  "compiler": "gc",
  "platform": "linux/amd64"
}


But the logs from `sudo kubectl logs --namespace=kube-system kubernetes-dashboard-v1.1.0-4vc9l` show the following error:


Starting HTTP server on port 9090
Creating API server client for http://localhost:8080
Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get http://localhost:8080/version: dial tcp [::1]:8080: getsockopt: connection refused
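(The final line is a plain TCP-level "connection refused": nothing accepted the connection on localhost:8080 from wherever the dashboard process was running, even though curl from my shell succeeds. The same error class can be reproduced with a short Python sketch against a local port that has no listener; the port number is picked at runtime, nothing here comes from the cluster:)

```python
import socket

# Grab a port the OS just released, so (almost certainly) nothing is
# listening on it -- mirroring a process whose localhost has no
# apiserver on port 8080.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
free_port = probe.getsockname()[1]
probe.close()

try:
    # Connecting to a closed port fails immediately with ECONNREFUSED,
    # the same condition behind "dial tcp ... connection refused".
    socket.create_connection(("127.0.0.1", free_port), timeout=2)
    result = "connected"
except ConnectionRefusedError:
    result = "connection refused"

print(result)
```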


Any ideas? I don't think it's affecting anything else, but this project is my first experience with Kubernetes.


