Mounted Persistent Volume appears as tmpfs with strange content in container

Marcus

Feb 19, 2018, 11:49:24 AM
to Kubernetes user discussion and Q&A
Hi,

I've created a PV, a PVC, and a Pod that uses it as a volume. The PV is provided by a flexVolume driver, a script that mounts a CIFS share. The PV is bound, the pod starts, and everything looks fine. On the node, I can see the share mounted as it should be.

But inside the container, the volume is mounted as tmpfs, and its contents are different files than those on the CIFS share.

/ # ls -l /var/lib/mysql/
total 8
prwx------    1 root     root             0 Feb 19 16:34 68e2851c2a83fa34d95ae7b37acdde7a0f8415d8e7eed25a826fd53feb365429-stdin
prwx------    1 root     root             0 Feb 19 16:34 68e2851c2a83fa34d95ae7b37acdde7a0f8415d8e7eed25a826fd53feb365429-stdout
-rw-r--r--    1 root     root          5986 Feb 19 16:33 config.json
prwx------    1 root     root             0 Feb 19 16:33 init-stderr
prwx------    1 root     root             0 Feb 19 16:33 init-stdin
prwx------    1 root     root             0 Feb 19 16:33 init-stdout

/ # mount | grep var/lib/mysql
tmpfs on /var/lib/mysql type tmpfs (rw,seclabel,nosuid,nodev,mode=755)

I am using kubelet 1.9.2 and Docker 1.12.6 on CentOS 7.4. How can I fix this?

Thanks, Marcus

pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  flexVolume:
    driver: "fnordian/cv"
    readOnly: false
    options:
      source: "//192.168.121.82/kubvolumes"
      mountOptions: "dir_mode=0700,file_mode=0600"
      cifsuser: "nobody"
      cifspass: "nobody"

pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    volumeMounts:
    - name: mysql-pv
      mountPath: /var/lib/mysql
  restartPolicy: Always
  volumes:
  - name: mysql-pv
    persistentVolumeClaim:
      claimName: mysql-pv-claim

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
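For context, a flexVolume driver like the one the PV references is just an executable that the kubelet invokes with a small verb-based interface (`init`, `mount <mountdir> <json-options>`, `unmount <mountdir>`) and that reports a JSON status on stdout. The actual `fnordian/cv` script isn't shown in this thread; the following is only a minimal sketch of what such a CIFS driver typically looks like (it assumes `jq` is installed on the node):

```shell
#!/bin/sh
# Hypothetical sketch of a flexVolume CIFS driver (the real "fnordian/cv"
# script is not shown in the thread). kubelet calls the driver as:
#   <driver> init
#   <driver> mount <mountdir> <json-options>
#   <driver> unmount <mountdir>
# and expects a JSON result on stdout.

do_init() {
    # CIFS needs no attach/detach step; tell kubelet to skip it.
    echo '{"status": "Success", "capabilities": {"attach": false}}'
}

do_mount() {
    mountdir="$1"
    options="$2"   # JSON object carrying the PV's flexVolume options
    # jq (assumed to be installed on the node) extracts the option values.
    source=$(echo "$options" | jq -r '.source')
    user=$(echo "$options" | jq -r '.cifsuser')
    pass=$(echo "$options" | jq -r '.cifspass')
    extra=$(echo "$options" | jq -r '.mountOptions')
    mkdir -p "$mountdir"
    if mount -t cifs "$source" "$mountdir" \
            -o "username=$user,password=$pass,$extra"; then
        echo '{"status": "Success"}'
    else
        echo '{"status": "Failure", "message": "cifs mount failed"}'
        exit 1
    fi
}

do_unmount() {
    umount "$1"
    echo '{"status": "Success"}'
}

case "$1" in
    init)    do_init ;;
    mount)   do_mount "$2" "$3" ;;
    unmount) do_unmount "$2" ;;
esac
```

If a driver like this returns `"Success"` but the container still sees tmpfs, the mount itself happened on the node (as Marcus observes) and the problem lies in what gets propagated into the container.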

Michelle Au

Feb 19, 2018, 12:19:36 PM
to Kubernetes user discussion and Q&A
In kubelet logs, do you see the volume getting mounted?
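One way to check on the node (assuming kubelet runs under systemd, as it does on this CentOS 7 setup) is to grep the journal for the volume's mount operation; the exact wording varies by kubelet version, but a successful mount typically logs a "MountVolume.SetUp succeeded" line for the PV:

```shell
# Look for the mount operation for PV "pv0003" in the kubelet journal.
# (Guarded so this is a no-op on machines without journalctl.)
pattern='MountVolume.SetUp succeeded.*pv0003'
if command -v journalctl >/dev/null 2>&1; then
    journalctl -u kubelet --since "-10m" | grep -E "$pattern" || true
fi
```

If that line is absent, or a MountVolume failure is logged instead, the flexVolume driver is the place to look.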

Marcus

Feb 19, 2018, 11:53:59 PM
to Kubernetes user discussion and Q&A
I am not sure. The volume is mentioned there, but I don't know what to look for:

Feb 20 04:47:56 node1 kubelet[6615]: E0220 04:47:56.351626    6615 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Feb 20 04:47:59 node1 dockerd-current[4405]: DEBU: 2018/02/20 04:47:59.685252 EVENT AddPod {"metadata":{"creationTimestamp":"2018-02-20T04:47:59Z","name":"busybox","namespace":"default","resourceVersion":"1945","selfLink":"/api/v1/namespaces/default/pods/busybox","uid":"39e0efdf-15f9-11e8-8091-525400a444d1"},"spec":{"containers":[{"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"Fi
Feb 20 04:47:59 node1 dockerd-current[4405]: DEBU: 2018/02/20 04:47:59.702701 EVENT UpdatePod {"metadata":{"creationTimestamp":"2018-02-20T04:47:59Z","name":"busybox","namespace":"default","resourceVersion":"1945","selfLink":"/api/v1/namespaces/default/pods/busybox","uid":"39e0efdf-15f9-11e8-8091-525400a444d1"},"spec":{"containers":[{"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":
Feb 20 04:47:59 node1 systemd[1]: Created slice libcontainer container kubepods-besteffort-pod39e0efdf_15f9_11e8_8091_525400a444d1.slice.
Feb 20 04:47:59 node1 dockerd-current[4405]: DEBU: 2018/02/20 04:47:59.715877 EVENT UpdatePod {"metadata":{"creationTimestamp":"2018-02-20T04:47:59Z","name":"busybox","namespace":"default","resourceVersion":"1947","selfLink":"/api/v1/namespaces/default/pods/busybox","uid":"39e0efdf-15f9-11e8-8091-525400a444d1"},"spec":{"containers":[{"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":
Feb 20 04:47:59 node1 systemd[1]: Starting libcontainer container kubepods-besteffort-pod39e0efdf_15f9_11e8_8091_525400a444d1.slice.
Feb 20 04:47:59 node1 kubelet[6615]: I0220 04:47:59.821322    6615 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "pv0003" (UniqueName: "flexvolume-fnordian/cv/39e0efdf-15f9-11e8-8091-525400a444d1-pv0003") pod "busybox" (UID: "39e0efdf-15f9-11e8-8091-525400a444d1")
Feb 20 04:47:59 node1 kubelet[6615]: I0220 04:47:59.821451    6615 reconciler.go:217] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-rg4gs" (UniqueName: "kubernetes.io/secret/39e0efdf-15f9-11e8-8091-525400a444d1-default-token-rg4gs") pod "busybox" (UID: "39e0efdf-15f9-11e8-8091-525400a444d1")
Feb 20 04:47:59 node1 systemd[1]: Started Kubernetes transient mount for /var/lib/kubelet/pods/39e0efdf-15f9-11e8-8091-525400a444d1/volumes/kubernetes.io~secret/default-token-rg4gs.
Feb 20 04:47:59 node1 systemd[1]: Starting Kubernetes transient mount for /var/lib/kubelet/pods/39e0efdf-15f9-11e8-8091-525400a444d1/volumes/kubernetes.io~secret/default-token-rg4gs.
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Ending clean mount
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Unmounting Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Ending clean mount
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Unmounting Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-8): Ending clean mount
Feb 20 04:48:00 node1 systemd[1]: Started docker container 0a0d728960e7aa35e07305a628b2101d58e1a4f733fe21337c202bc6abd7cf03.
Feb 20 04:48:00 node1 systemd[1]: Starting docker container 0a0d728960e7aa35e07305a628b2101d58e1a4f733fe21337c202bc6abd7cf03.
Feb 20 04:48:00 node1 kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Feb 20 04:48:00 node1 oci-systemd-hook[9731]: systemdhook <debug>: 0a0d728960e7: Skipping as container command is /pause, not init or systemd
Feb 20 04:48:00 node1 oci-umount[9732]: umounthook <debug>: prestart container_id:0a0d728960e7 rootfs:/var/lib/docker/devicemapper/mnt/fb24bd63ac000f7d56f6f4b8549f09d70b8e1670c3d7d9d5c2c5ff19a0c490fa/rootfs
Feb 20 04:48:00 node1 kernel: weave: port 2(vethwepl0a0d728) entered blocking state
Feb 20 04:48:00 node1 kernel: weave: port 2(vethwepl0a0d728) entered disabled state
Feb 20 04:48:00 node1 kernel: device vethwepl0a0d728 entered promiscuous mode
Feb 20 04:48:00 node1 NetworkManager[2902]: <info>  [1519102080.4792] manager: (vethwepl0a0d728): new Veth device (/org/freedesktop/NetworkManager/Devices/12)
Feb 20 04:48:00 node1 kernel: IPv6: ADDRCONF(NETDEV_UP): vethwepl0a0d728: link is not ready
Feb 20 04:48:00 node1 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethwepl0a0d728: link becomes ready
Feb 20 04:48:00 node1 kernel: weave: port 2(vethwepl0a0d728) entered blocking state
Feb 20 04:48:00 node1 kernel: weave: port 2(vethwepl0a0d728) entered forwarding state
Feb 20 04:48:00 node1 NetworkManager[2902]: <info>  [1519102080.5192] device (vethwepl0a0d728): link connected
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Ending clean mount
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Unmounting Filesystem
Feb 20 04:48:00 node1 kernel: IPv6: eth0: IPv6 duplicate address fe80::c84:1aff:fef1:17c6 detected!
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Ending clean mount
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Unmounting Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Mounting V5 Filesystem
Feb 20 04:48:00 node1 kernel: XFS (dm-9): Ending clean mount
Feb 20 04:48:00 node1 systemd[1]: Started docker container 6913e41bde5576132809089358929755f2d41dfc8f1d2b63cf3b881d7467f404.
Feb 20 04:48:00 node1 systemd[1]: Starting docker container 6913e41bde5576132809089358929755f2d41dfc8f1d2b63cf3b881d7467f404.
Feb 20 04:48:00 node1 kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Feb 20 04:48:00 node1 oci-systemd-hook[9837]: systemdhook <debug>: 6913e41bde55: Skipping as container command is sleep, not init or systemd
Feb 20 04:48:00 node1 oci-umount[9838]: umounthook <debug>: prestart container_id:6913e41bde55 rootfs:/var/lib/docker/devicemapper/mnt/f9c55af22adc1aa54163654ae30d5156124119c8b34ebb77a4236cceaeff4a53/rootfs
Feb 20 04:48:00 node1 dockerd-current[4405]: time="2018-02-20T04:48:00.860981273Z" level=warning msg="Unknown healthcheck type 'NONE' (expected 'CMD')"
Feb 20 04:48:01 node1 dockerd-current[4405]: DEBU: 2018/02/20 04:48:01.128625 EVENT UpdatePod {"metadata":{"creationTimestamp":"2018-02-20T04:47:59Z","name":"busybox","namespace":"default","resourceVersion":"1950","selfLink":"/api/v1/namespaces/default/pods/busybox","uid":"39e0efdf-15f9-11e8-8091-525400a444d1"},"spec":{"containers":[{"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox","terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.128670 adding entry 10.32.0.2 to weave-local-pods of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.128687 added entry 10.32.0.2 to weave-local-pods of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.130885 adding entry 10.32.0.2 to weave-k?Z;25^M}|1s7P3|H9i;*;MhG of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.130915 added entry 10.32.0.2 to weave-k?Z;25^M}|1s7P3|H9i;*;MhG of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.132339 adding entry 10.32.0.2 to weave-E.1.0W^NGSp]0_t5WwH/]gX@L of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:01 node1 dockerd-current[4405]: INFO: 2018/02/20 04:48:01.132367 added entry 10.32.0.2 to weave-E.1.0W^NGSp]0_t5WwH/]gX@L of 39e0efdf-15f9-11e8-8091-525400a444d1
Feb 20 04:48:06 node1 kubelet[6615]: E0220 04:48:06.372263    6615 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"
Feb 20 04:48:06 node1 kubelet[6615]: E0220 04:48:06.373207    6615 summary.go:92] Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"
Feb 20 04:48:09 node1 kubelet[6615]: E0220 04:48:09.979708    6615 fsHandler.go:121] failed to collect filesystem stats - rootDiskErr: <nil>, rootInodeErr: <nil>, extraDiskErr: du command failed on /var/lib/docker/containers/27947eb9e964f761f947f51ba0d9d33bfa9228508652ca54e570e2fdd4e12752 with output stdout: , stderr: du: cannot access ‘/var/lib/docker/containers/27947eb9e964f761f947f51ba0d9d33bfa9228508652ca54e570e2fdd4e12752’: No such file or directory
Feb 20 04:48:09 node1 kubelet[6615]: - exit status 1
Feb 20 04:48:13 node1 dockerd-current[4405]: time="2018-02-20T04:48:13.018171965Z" level=error msg="Handler for POST /v1.24/containers/839ea2a858e1e84de0b78a7cdf050e62967b9fd04ededf0ff1d50933efa4a22b/stop returned error: Container 839ea2a858e1e84de0b78a7cdf050e62967b9fd04ededf0ff1d50933efa4a22b is already stopped"
Feb 20 04:48:13 node1 systemd-udevd[9861]: inotify_add_watch(7, /dev/dm-10, 10) failed: No such file or directory
Feb 20 04:48:13 node1 systemd-udevd[9861]: inotify_add_watch(7, /dev/dm-10, 10) failed: No such file or directory
Feb 20 04:48:16 node1 kubelet[6615]: E0220 04:48:16.392269    6615 summary.go:92] Failed to get system container stats for "/system.slice/kubelet.service": failed to get cgroup stats for "/system.slice/kubelet.service": failed to get container info for "/system.slice/kubelet.service": unknown container "/system.slice/kubelet.service"

Marcus

Feb 20, 2018, 1:51:54 PM
to Kubernetes user discussion and Q&A
I think it has something to do with CIFS not allowing SELinux contexts to be relabeled.
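If SELinux relabeling is indeed the problem, one commonly suggested workaround is to mount the share with a fixed SELinux context so that no relabeling is attempted. A sketch against the pv.yaml above (untested here; the `context=` value is an assumption for Docker on CentOS 7 and may need adjusting to your policy):

```yaml
  flexVolume:
    driver: "fnordian/cv"
    readOnly: false
    options:
      source: "//192.168.121.82/kubvolumes"
      # Pin the SELinux context at mount time so the container can access
      # the files without relabeling (which CIFS does not support).
      mountOptions: "dir_mode=0700,file_mode=0600,context=system_u:object_r:svirt_sandbox_file_t:s0"
      cifsuser: "nobody"
      cifspass: "nobody"
```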