@buddyledungarees: Reiterating the mentions to trigger a notification:
@kubernetes/sig-storage-bugs
I was not able to reproduce this on my cluster. I created a Pod that mounted a ConfigMap and ran it with a non-root uid and fsGroup.
$ ls -l /vol1
-rw-r--r-- 1 root 1000 33 May 22 22:07 /vol1
$ id
uid=1000 gid=0(root) groups=1000
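For reference, the repro pod was roughly along these lines; this is a minimal sketch, with the pod name, image, ConfigMap name, and uid/gid values as placeholders rather than the exact manifest:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-fsgroup-test      # hypothetical name, for illustration only
spec:
  securityContext:
    runAsUser: 1000                 # non-root uid
    fsGroup: 1000
  containers:
  - name: test
    image: busybox
    command: ["sleep", "3600"]      # keep the container running for inspection
    volumeMounts:
    - name: vol1
      mountPath: /vol1
  volumes:
  - name: vol1
    configMap:
      name: some-configmap          # placeholder ConfigMap name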
@buddyledungarees I'd like to get more information about your setup:
No, we do not set fsGroup or any other security context settings in our deployment YAML.
We use Docker version 1.13.1:
Containers: 25
Running: 24
Paused: 0
Stopped: 1
Images: 15
Server Version: 1.13.1
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1
runc version: 9df8b306d01f59d3a8029be411de015b7304dd8f
init version: 949e6fa
Kernel Version: 4.4.115-k8s
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.67 GiB
Name: ip-10-35-85-97
ID: M3WF:X5EY:AJUR:3L3H:VXG5:QKJX:CAWR:U72P:XHLO:CRZU:BFHY:4KUB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
WARNING: No kernel memory limit support
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
kubelet is running directly on the host, per the kops deployment.
Here is the ConfigMap itself:
apiVersion: v1
kind: ConfigMap
metadata:
  name: resolv-conf-retries-configmap
  namespace: default
data:
  resolv.conf: |
    nameserver 100.64.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local ec2.internal
    options ndots:2 attempts:5
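Presumably this ConfigMap is consumed with a subPath mount so that only resolv.conf is overridden; a typical pod spec for that would look roughly like the following (the pod name, container name, image, and mount path are assumptions, not taken from the actual deployment):

apiVersion: v1
kind: Pod
metadata:
  name: resolv-conf-subpath-example   # hypothetical; the real workload is presumably a Deployment
spec:
  containers:
  - name: app                         # placeholder container name
    image: example/app:latest         # placeholder image
    volumeMounts:
    - name: resolv-conf
      mountPath: /etc/resolv.conf     # overlay a single existing file via subPath
      subPath: resolv.conf
  volumes:
  - name: resolv-conf
    configMap:
      name: resolv-conf-retries-configmap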
We suspect a similar issue here when using "subPath" to mount a single file from a ConfigMap.
The pod is stuck in a crash loop with the following error message:
Error: failed to start container "subpath": Error response from daemon: error setting label on mount source '/var/lib/kubelet/pods/f13515f2-5f89-11e8-b6e7-005056994f46/volume-subpaths/config/subpath/0': SELinux relabeling of /var/lib/kubelet/pods/f13515f2-5f89-11e8-b6e7-005056994f46/volume-subpaths/config/subpath/0 is not allowed: "read-only file system"
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-test
data:
  foo: bar
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-test
spec:
  containers:
  - name: subpath
    image: centos:7
    command:
    - cat
    # Uncommenting the following two lines fixes the pod crash
    # securityContext:
    #   privileged: true
    volumeMounts:
    - mountPath: /etc/subpath_test/foo
      name: config
      subPath: foo
  volumes:
  - name: config
    configMap:
      name: subpath-test
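For comparison, a variant that mounts the same ConfigMap as a whole directory instead of a single file via subPath would look like this (a sketch with a placeholder pod name and command, not a manifest from this report):

apiVersion: v1
kind: Pod
metadata:
  name: subpath-test-dirmount        # hypothetical name
spec:
  containers:
  - name: nosubpath
    image: centos:7
    command: ["sleep", "3600"]       # keep the pod running for inspection
    volumeMounts:
    - mountPath: /etc/subpath_test   # directory mount; foo appears as /etc/subpath_test/foo
      name: config
  volumes:
  - name: config
    configMap:
      name: subpath-test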
@msau42 I think that to trigger the error you have to use "subPath" while mounting a ConfigMap. Is there any chance you could try my test above on a 1.10.3 (or 1.8.13) cluster?
Thanks, I was able to reproduce the issue. Before, my volumeMount was a completely new path in the container. It seems like the issue occurs when the file path already exists in the container image.
I have updated my test above. Does that match your assumption?
In my case both test cases (/etc/subpath_test/foo and /subpath_test/foo) failed.
Can confirm this bug is present on 1.10.3... Was going to submit a ticket myself but found this one first...
Same issue here; the behavior on 1.10 is not the same as on 1.9.
This issue exists on 1.10.3, 1.9.8, and 1.8.13. A fix is being worked on.
any workarounds for this on 1.9.8?