Re: [kubernetes/kubernetes] "CreateContainerConfigError: failed to prepare subPath for volumeMount" error with configMap volume (#61076)


Kubernetes Submit Queue

Mar 13, 2018, 2:36:28 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process

@Silvenga @jsafrane @msau42

Note: This issue is marked as priority/critical-urgent, and must be updated every 1 day during code freeze.

Example update:

ACK.  In progress
ETA: DD/MM/YYYY
Risks: Complicated fix required
Issue Labels
  • sig/storage: Issue will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
Help



Andy Zhang

Mar 13, 2018, 4:11:07 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Worth mentioning that this issue also exists on Windows, and the PR covers Windows as well.

Josh Berkus

Mar 13, 2018, 11:37:52 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@liggitt if this is a 1.9.4 issue, why the 1.10 milestone?

Jordan Liggitt

Mar 13, 2018, 11:41:21 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

it's a recently introduced regression that is release blocking and needs cherry picking to 1.7.x, 1.8.x, and 1.9.x

Davanum Srinivas

Mar 13, 2018, 11:41:38 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@jberkus this problem was triggered by the change made for the CVE yesterday (the patch landed in 1.10/master and was backported to the 1.9, 1.8, and 1.7 branches). When we shipped 1.9.4, someone noticed it. So this problem exists in v1.10/master as well; we start here and then do the backports again (I believe @liggitt has filed backports already).

Joel Smith

Mar 13, 2018, 12:35:41 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

It's nice that we're fixing this, but we probably ought to document that configMap and secret volumes don't work especially well with subPaths, since any update to the underlying API object will cause the file to disappear from the container's view. Because the subPath mount makes the container runtime mount only the version of the file or directory that exists when the container starts, any atomic update of the data removes the old version and creates a new version that the subPath can't see.

I think we should:

  1. Update documentation to direct people to (a) mount the entire volume, then (b) symlink their desired subPath to that mounted volume. For example, this bug reporter might mount at /mnt/config and have a symlink from /data/mumble.ini to /mnt/config/mumble.ini (see the sketch after this list).
  2. Consider adding some kind of no-update flag to configMap/secret/downwardAPI/projected volumes for those that don't need the atomic update feature. Any containers mounting subPaths would not have to worry about the file disappearing on update, but would require a pod restart to get a data refresh.
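
For example, a minimal sketch of option 1 for this reporter's case (the image name, start command, and configMap name below are hypothetical, not taken from the issue):

apiVersion: v1
kind: Pod
metadata:
  name: mumble
spec:
  containers:
  - name: mumble
    image: example/mumble-server            # hypothetical image
    # Symlink the desired file into place, then start the server. The link
    # target stays visible across configMap updates because the whole volume
    # is mounted rather than a subPath.
    command: ["sh", "-c", "mkdir -p /data && ln -sf /mnt/config/mumble.ini /data/mumble.ini && exec /run-mumble.sh"]
    volumeMounts:
    - name: config
      mountPath: /mnt/config                # mount the entire volume, no subPath
  volumes:
  - name: config
    configMap:
      name: mumble-config                   # hypothetical configMap name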

Sai Teja Ranuva

Mar 13, 2018, 1:06:39 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Is there a workaround for this issue?

Jordan Liggitt

Mar 13, 2018, 1:14:54 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Is there a workaround for this issue?

There is not.

Kubernetes Submit Queue

Mar 13, 2018, 3:28:03 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Closed #61076 via #61080.

Jordan Liggitt

Mar 13, 2018, 3:44:21 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

keeping this open until release branches are fixed as well

Jordan Liggitt

Mar 13, 2018, 3:44:35 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Reopened #61076.

Jordan Liggitt

Mar 13, 2018, 4:08:51 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@jberkus fix merged into master, moving this back to the v1.9 milestone as this issue is no longer release blocking

Kubernetes Submit Queue

Mar 13, 2018, 4:09:52 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process

@Silvenga @jsafrane @msau42

Issue Labels
  • sig/storage: Issue will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
Help

Joseph Irving

Mar 14, 2018, 8:15:19 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Hi. I've upgraded to the fixed version of 1.9.4, which has solved the configMap subPath problem, but I'm encountering the same issue when using subPaths with an emptyDir.

Warning  Failed                 1m (x2 over 2m)  kubelet, ip-172-23-10-171.eu-west-1.compute.internal  Error: failed to prepare subPath for volumeMount "flannel-net-conf" of container "kube-flannel"

where the volume looks like:

volumes:
- emptyDir: {}
  name: flannel-net-conf

and the volume mount is this:

volumeMounts:
- mountPath: /etc/kube-flannel/net-conf.json
  name: flannel-net-conf
  subPath: net-conf.json

This works fine on 1.9.3. Should I open a new issue about this?

Jan Šafránek

Mar 14, 2018, 10:11:49 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@Joseph-Irving, I can't reproduce the issue. This pod starts and creates an empty directory /etc/kube-flannel/net-conf.json in the container.

apiVersion: v1
kind: Pod
metadata:
  name: volumetest
spec:
  containers:
  - name: container-test
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - mountPath: /etc/kube-flannel/net-conf.json
      name: flannel-net-conf
      subPath: net-conf.json

  volumes:
  - emptyDir: {}
    name: flannel-net-conf

net-conf.json is a directory because the subPath net-conf.json does not exist in the emptyDir, so it is assumed to be a directory. Do you have an init container that fills net-conf.json before the real container starts? Can you please open a new issue and post a full pod spec there that reproduces the problem, so we can track it separately?

Joseph Irving

Mar 14, 2018, 10:24:27 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@jsafrane Yeah, exactly that: we have an init container which creates a file and sticks it there before flannel boots up (roughly the pattern sketched below). Sure, I'll create a new issue.
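
For reference, a minimal sketch of that pattern (the init container's image and the command that writes the file are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: volumetest-init
spec:
  initContainers:
  - name: write-conf                        # hypothetical init container
    image: busybox
    # Create the file in the emptyDir before the main container starts, so
    # the subPath below refers to a file instead of an assumed directory.
    command: ["sh", "-c", "echo '{}' > /conf/net-conf.json"]
    volumeMounts:
    - mountPath: /conf
      name: flannel-net-conf
  containers:
  - name: container-test
    image: busybox
    args: ["sleep", "86400"]
    volumeMounts:
    - mountPath: /etc/kube-flannel/net-conf.json
      name: flannel-net-conf
      subPath: net-conf.json
  volumes:
  - emptyDir: {}
    name: flannel-net-conf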

Joseph Irving

Mar 14, 2018, 11:07:34 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Issue created, @jsafrane: #61178

Kubernetes Submit Queue

Mar 14, 2018, 11:33:06 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process

@Silvenga @jsafrane @liggitt @msau42

Issue Labels
  • sig/storage: Issue will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
Help

guillelb

Mar 15, 2018, 11:55:23 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Same issue upgrading from v1.7.11 to v1.7.14

Mounting a configMap:

failed to prepare subPath for volumeMount

Jordan Liggitt

Mar 15, 2018, 12:00:39 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Yes, this applies to 1.7.14, 1.8.9, and 1.9.4. Point releases to address it are scheduled for 3/19.

Alvaro Aleman

Mar 19, 2018, 6:04:29 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

FYI, in GKE this doesn't seem to interfere with deploying new pods, but it leaves exiting pods stuck in status "Terminating" after they were requested to be deleted:

Mar 19 09:19:53 gke-app-cluster-app-cluster-pool-78a17572-lxk8 kubelet[1331]: E0319 09:19:53.528768    1331 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\" (\"c89a1204-2b4f-11e8-aca8-42010a9c0114\")" failed. No retries permitted until 2018-03-19 09:21:55.528737036 +0000 UTC m=+3052.192633373 (durationBeforeRetry 2m2s). Error: "error cleaning subPath mounts for volume \"<redacted>\" (UniqueName: \"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\") pod \"c89a1204-2b4f-11e8-aca8-42010a9c0114\" (UID: \"c89a1204-2b4f-11e8-aca8-42010a9c0114\") : error checking /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0 for mount: lstat /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0/..: not a directory

Jordan Liggitt

Mar 19, 2018, 10:42:51 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

FYI, in GKE this doesn't seem to interfere with deploying new pods, but it leaves exiting pods stuck in status "Terminating" after they were requested to be deleted:

Mar 19 09:19:53 gke-app-cluster-app-cluster-pool-78a17572-lxk8 kubelet[1331]: E0319 09:19:53.528768    1331 nestedpendingoperations.go:263] Operation for "\"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\" (\"c89a1204-2b4f-11e8-aca8-42010a9c0114\")" failed. No retries permitted until 2018-03-19 09:21:55.528737036 +0000 UTC m=+3052.192633373 (durationBeforeRetry 2m2s). Error: "error cleaning subPath mounts for volume \"<redacted>\" (UniqueName: \"kubernetes.io/secret/c89a1204-2b4f-11e8-aca8-42010a9c0114-<redacted>\") pod \"c89a1204-2b4f-11e8-aca8-42010a9c0114\" (UID: \"c89a1204-2b4f-11e8-aca8-42010a9c0114\") : error checking /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0 for mount: lstat /var/lib/kubelet/pods/c89a1204-2b4f-11e8-aca8-42010a9c0114/volume-subpaths/<redacted>/cipher/0/..: not a directory

The subpath cleanup issue is tracked in #61178 and will also be fixed in the point releases planned for today.

Michelle Au

Mar 19, 2018, 11:32:01 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

GKE already released last week with the configMap patch. I am looking into the cleanup issue.

Michelle Au

Mar 19, 2018, 11:34:27 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Ah sorry I can't read. #61178 should take care of it.

k8s-ci-robot

Mar 19, 2018, 4:48:09 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Closed #61076.

Jordan Liggitt

Mar 19, 2018, 4:48:16 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

/close

DilipJadhav

Mar 21, 2018, 2:19:56 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Can we specify mountPath as a file?

volumeMounts:
- mountPath: /etc/kube-flannel/net-conf.json
  name: flannel-net-conf
  subPath: net-conf.json

I guess the mountPath should only go up to /etc/kube-flannel, and the subPath can be . or kube-flannel.
Just try with the following yaml:

apiVersion: v1
kind: Pod
metadata:
  name: volumetest
spec:
  containers:
  - name: container-test
    image: busybox
    args:
    - sleep
    - "86400"
    volumeMounts:
    - mountPath: /etc/kube-flannel
      name: flannel-net-conf
      subPath: kube-flannel
  volumes:
  - emptyDir: {}
    name: flannel-net-conf

obriensystems

Mar 21, 2018, 3:11:06 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

We are seeing this under Rancher 1.6.13 and 1.6.14 in ONAP, in our master branch, ahead of the ONS conference:
https://jira.onap.org/browse/OOM-813

obriensystems

Mar 21, 2018, 3:24:02 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

I have a question about why we backported the 1.8.9 upgrade into Rancher 1.6.14 and 1.6.13, which were OK running 1.8.5. The workaround in ONAP is to use Rancher 1.6.12 until 1.6.14 is re-fixed (this occurred 5 days ago, during the release of 1.6.15).

Prune Sebastien THOMAS

Mar 26, 2018, 3:03:21 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Had the cleanup issue on GKE for a few days (Google support is looking into it... hope they find this thread), but I'm now facing the 'failed to prepare subPath for volumeMount' error...
This is weird, as some of my pods started fine and some others did not...

I'm on 1.9.4-gke.1... will 1.9.5 be released soon?

Michelle Au

Mar 26, 2018, 4:05:33 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@prune998 this issue can also show up during a container restart. The fix is planned to roll out in GKE this week.

Russell Morrisey

Mar 27, 2018, 4:08:02 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Please make sure the fixed version is available on minikube windows! ❤️

Michelle Au

Mar 27, 2018, 4:17:11 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@rmorrise you should probably notify the minikube maintainers to make sure they update their Kubernetes versions.

Dhawal Patel

Mar 28, 2018, 7:42:19 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@liggitt is this fixed in GKE release 1.8.10-gke.0? I don't see any mention in the release notes.

Michelle Au

Mar 28, 2018, 7:53:03 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@dhawal55 yes, GKE 1.8.10-gke.0 has the fix.

Jose Luis

Apr 25, 2018, 12:39:29 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Hi guys, I think this is still happening. I am using Kubernetes on Google Cloud; after updating to 1.9.6-gke.1 I hit this problem, and the solution was to downgrade to 1.9.3-gke.0 :s

Michelle Au

Apr 25, 2018, 12:57:39 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@lcortess can you paste your Pod spec?

Jose Luis

Apr 25, 2018, 1:53:38 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Hi @msau42, this is my pod spec:

{
	"kind": "Pod",
	"apiVersion": "v1",
	"metadata": {
		"name": "myserver-deployment-xxxxx-xxxxx",
		"generateName": "myserver-deployment-xxxxx-",
		"namespace": "default",
		"labels": {
			"app": "myserver",
			"stage": "production",
			"tier": "backend"
		},
		"ownerReferences": [{
			"apiVersion": "extensions/v1beta1",
			"kind": "ReplicaSet",
			"name": "myserver-deployment-xxxxx",
			"controller": true,
			"blockOwnerDeletion": true
		}]
	},
	"spec": {
		"volumes": [{
			"name": "default-token-xxx",
			"secret": {
				"secretName": "default-token-xxx",
				"defaultMode": 420
			}
		}],
		"containers": [{
			"name": "myserver",
			"image": "myserver:v1.0.0",
			"ports": [{
				"containerPort": 5000,
				"protocol": "TCP"
			}],
			"env": [],
			"volumeMounts": [{
				"name": "default-token-xxx",
				"readOnly": true,
				"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
			}],
			"terminationMessagePath": "/dev/termination-log",
			"terminationMessagePolicy": "File",
			"imagePullPolicy": "IfNotPresent"
		}],
		"restartPolicy": "Always",
		"terminationGracePeriodSeconds": 30,
		"dnsPolicy": "ClusterFirst",
		"serviceAccountName": "default",
		"serviceAccount": "default",
		"securityContext": {},
		"imagePullSecrets": [{
			"name": "docker-secrets"
		}],
		"schedulerName": "default-scheduler"
	}
}

Michelle Au

Apr 25, 2018, 2:00:32 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Ah, @lcortess I think you are hitting this issue with read only volumes: #62752

But actually, after 1.9.4, all secret volumes are mounted read only, so you don't need to explicitly specify the readOnly flag for secret volumes.
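
For example, the volumeMount from the spec above could drop the flag entirely (a sketch; per the note above, the kubelet mounts secret volumes read-only after 1.9.4 regardless):

volumeMounts:
- name: default-token-xxx
  mountPath: /var/run/secrets/kubernetes.io/serviceaccount
  # no explicit readOnly: true needed; secret volumes are mounted read-only anyway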

Tony Fouchard

May 2, 2018, 5:20:54 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Still the same issue in 1.10.2 with a DaemonSet (it has 2 running pods; the mount works for the first one but not the second).

Michelle Au

May 2, 2018, 8:29:30 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@hightoxicity can you post your pod spec?

Ross Edman

May 7, 2018, 6:36:40 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Hitting this on 1.8.9-rancher1.

Michelle Au

May 7, 2018, 6:47:21 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@rossedman there is a subPath cleanup/container restart issue that was fixed in 1.8.10.

Tony Fouchard

May 11, 2018, 4:56:35 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@msau42-tmp Hi, I restarted all the control plane daemons, and it seems to have fixed the issue... Thx.

nazisangg

May 28, 2018, 1:38:37 AM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Hitting this on "OpenShift Master: v3.7.23; Kubernetes Master: v1.7.6+a08f5eeb62"

The spec is:
spec:
  containers:
  - command:
    - /bin/alertmanager
    - '-config.file=/alertmanager.yml'
    - '-storage.path=/alertmanager'
    image: 'functions/alertmanager:latest-k8s'
    imagePullPolicy: Always
    name: alertmanager
    ports:
    - containerPort: 9003
      protocol: TCP
    resources:
      limits:
        memory: 30Mi
      requests:
        memory: 20Mi
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - NET_RAW
        - SETGID
        - SETUID
      privileged: false
      runAsUser: 1000310000
      seLinuxOptions:
        level: 's0:c18,c2'
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /alertmanager.yml
      name: alertmanager-config
      subPath: alertmanager.yml
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-7glkb
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-v2c5d
  nodeName: ip-10-194-27-58.ap-southeast-2.compute.internal
  nodeSelector:
    type: compute
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1000310000
    seLinuxOptions:
      level: 's0:c18,c2'
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - configMap:
      defaultMode: 420
      items:
      - key: alertmanager.yml
        mode: 420
        path: alertmanager.yml
      name: alertmanager-config
    name: alertmanager-config
  - name: default-token-7glkb
    secret:
      defaultMode: 420
      secretName: default-token-7glkb

Brendan Thompson

Jul 24, 2018, 7:58:27 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

I too am facing this problem whilst trying to use subPath with volumeMounts.

Versions:
  • Kubernetes v1.11.1
  • Docker v1.13.1
  • Ubuntu 16.04.4

Michelle Au

Jul 24, 2018, 8:05:30 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

@BrendanThompson can you open a new issue and paste your pod spec into it?

Aisuko

Apr 2, 2019, 10:27:12 PM
to kubernetes/kubernetes, k8s-mirror-storage-bugs, Team mention

Is there any information showing which version actually fixed this issue?

My Kubernetes version:

➜  zookeeper git:(dev) ✗ kubectl version
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.4", GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1", GitTreeState:"clean", BuildDate:"2019-03-01T23:34:27Z", GoVersion:"go1.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

➜  zookeeper git:(dev) ✗ kubectl describe po dev-zookeeper-server-0 --namespace zookeeper
Name:           dev-zookeeper-server-0
Namespace:      zookeeper
Node:           node7/10.116.18.76
Start Time:     Tue, 02 Apr 2019 21:34:32 +0800
Labels:         app=zookeeper
                chart=zookeeper-1.4.2
                controller-revision-hash=dev-zookeeper-server-87495d7f7
                heritage=Tiller
                release=dev
                statefulset.kubernetes.io/pod-name=dev-zookeeper-server-0
Annotations:    <none>
Status:         Pending
IP:
Controlled By:  StatefulSet/dev-zookeeper-server
Containers:
  dev-zookeeper:
    Container ID:
    Image:         docker.io/bitnami/zookeeper:3.4.13
    Image ID:
    Ports:         2181/TCP, 2888/TCP, 3888/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Command:
      bash
      -ec
      # Execute entrypoint as usual after obtaining ZOO_SERVER_ID based on POD hostname
      HOSTNAME=`hostname -s`
      if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
        ORD=${BASH_REMATCH[2]}
        export ZOO_SERVER_ID=$((ORD+1))
      else
        echo "Failed to get index from hostname $HOST"
        exit 1
      fi
      . /opt/bitnami/base/functions
      . /opt/bitnami/base/helpers
      print_welcome_page
      . /init.sh
      nami_initialize zookeeper
      exec tini -- /run.sh
    State:          Waiting
      Reason:       CreateContainerConfigError
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
      memory:   256Mi
    Liveness:   tcp-socket :client delay=30s timeout=5s period=10s #success=1 #failure=6
    Readiness:  tcp-socket :client delay=5s timeout=5s period=10s #success=1 #failure=6
    Environment:
      ZOO_PORT_NUMBER:        2181
      ZOO_TICK_TIME:          2000
      ZOO_INIT_LIMIT:         10
      ZOO_SYNC_LIMIT:         5
      ZOO_MAX_CLIENT_CNXNS:   60
      ZOO_SERVERS:            dev-zookeeper-0.dev-zookeeper-headless.zookeeper.svc.cluster.local:2888:3888
      ZOO_ENABLE_AUTH:        yes
      ZOO_CLIENT_USER:        quantex
      ZOO_CLIENT_PASSWORD:    <set to the key 'client-password' in secret 'dev-zookeeper'>  Optional: false
      ZOO_SERVER_USERS:       quantex
      ZOO_SERVER_PASSWORDS:   <set to the key 'server-password' in secret 'dev-zookeeper'>  Optional: false
      ZOO_HEAP_SIZE:          1024
      ALLOW_ANONYMOUS_LOGIN:  yes
    Mounts:
      /bitnami/zookeeper from data (rw)
      /opt/bitnami/zookeeper/conf/zoo.cfg from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8r6xd (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-dev-zookeeper-server-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      dev-zookeeper
    Optional:  false
  default-token-8r6xd:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-8r6xd
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                 Age                From               Message
  ----     ------                 ----               ----               -------
  Normal   Scheduled              68s                default-scheduler  Successfully assigned dev-zookeeper-server-0 to node7
  Normal   SuccessfulMountVolume  67s                kubelet, node7     MountVolume.SetUp succeeded for volume "config"
  Normal   SuccessfulMountVolume  67s                kubelet, node7     MountVolume.SetUp succeeded for volume "default-token-8r6xd"
  Normal   SuccessfulMountVolume  67s                kubelet, node7     MountVolume.SetUp succeeded for volume "pvc-d221a9ce-5527-11e9-a07c-0050569e1842"
  Warning  Failed                 50s (x7 over 65s)  kubelet, node7     Error: failed to prepare subPath for volumeMount "config" of container "dev-zookeeper"
  Normal   SandboxChanged         49s (x7 over 65s)  kubelet, node7     Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 48s (x8 over 65s)  kubelet, node7     Container image "docker.io/bitnami/zookeeper:3.4.13" already present on machine
