Re: [kubernetes/kubernetes] Volumes are created in container with root ownership and strict permissions (#2630)


Brian Grant

unread,
Nov 13, 2017, 6:23:58 PM11/13/17
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

cc @kubernetes/sig-storage-feature-requests @kubernetes/sig-node-feature-requests



so0k

unread,
Dec 26, 2017, 11:29:13 AM12/26/17
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@Guillaume-Mayer - wouldn't a postStart lifecycle hook to chmod the files work (alternatively an init container if execution needs to be completed before the entrypoint)?

patrickf55places

unread,
Dec 27, 2017, 11:14:19 AM12/27/17
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@so0k That won't work if the runAsUser and allowPrivilegeEscalation settings prevent the container from running as root.

Lena Brüder

unread,
Feb 20, 2018, 7:51:27 AM2/20/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I have a very similar use case where I would need to change the owner of a file that is mounted (from a secret in my case).

I have a MongoDB cluster in k8s which uses a special cluster.key file for cluster authorization. That file is stored in a secret; we have a client for whom running images as root is forbidden. Our pod has a securityContext with a runAsUser: 1000 directive. MongoDB itself requires that the file be accessible only by its owner; it refuses to start if the file is readable by group or other.

Since the owner is root and I cannot run chown on that file as a non-root user, I can neither change the permissions nor (since there is no k8s support for it) change the owner of the file.

I am currently working around this by injecting the secret as an environment variable into a busybox init container, which in turn mounts an emptyDir and writes the file there. The secret is then no longer mounted as a file. It's quite ugly, and if there is a chance to get rid of it, I'd be in.

Jordan Wilson

unread,
Feb 21, 2018, 12:34:03 PM2/21/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

The fact that so much of the documentation advises and cautions users against running containers as root, and that this issue is now 3 years old, astounds me. This should at least be explained in much greater detail in the docs.

thockin-cc

unread,
Feb 24, 2018, 12:27:20 AM2/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention
@saad-ali


Jo Torsmyr

unread,
Mar 22, 2018, 8:55:32 PM3/22/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hi!
I ended up with the initContainers config below to give the node-red-docker container, which runs as a non-privileged user, access to an externally created disk. After trying a lot of things, it seems setting "runAsUser" to 0 (root) did the trick.

Cheers
-jo

  initContainers:
    - name: volume-mount-hack
      image: nodered/node-red-docker:slim
      command:
        - sh
        - -c
        - 'chmod -R a+rwx /data'
      volumeMounts:
        - name: picturl-persistent-storage
          mountPath: /data
      securityContext:
        runAsUser: 0

Qian Zhang

unread,
Apr 4, 2018, 9:28:38 AM4/4/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Many older applications which bind to low (privileged) ports start first as root, then immediately drop privileges to some other user. In such a scenario, the container must be configured to start the application as root, and so the original user (root) would have access to the volume. Once the application calls setuid(2)/seteuid(2) though, it won't have access anymore.

@eatnumber1 Can you elaborate a bit more on why we will have this issue with the supplementary group solution mentioned in this thread? IIUC, setuid(2)/seteuid(2) will not change the supplementary groups of the calling process, so as long as the application is in a group which has access to the volume, it should not have problems accessing the volume, right?

Russell Harmon

unread,
Apr 4, 2018, 3:03:37 PM4/4/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

It looks like I was mistaken and calling setgid(2) doesn't change supplementary groups (which I had thought it did).

Looking around, it seems like at least nginx drops supplementary groups explicitly here (otherwise it would be a minor security vulnerability). I'd be surprised if any well-written privilege-dropping application doesn't drop supplementary groups.

Qian Zhang

unread,
Apr 6, 2018, 9:29:01 PM4/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Thanks @eatnumber1! So nginx initially runs as root, and later resets its uid, gid, and supplementary groups to what is configured in nginx.conf. Then I think with the pod security context we can set fsGroup to the group configured in nginx.conf; that way, even after nginx resets its supplementary groups, it can still access the volume. Right?

Russell Harmon

unread,
Apr 6, 2018, 10:40:08 PM4/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention
From a cursory reading about pod security contexts, it seems like it would.
I haven't used them though (note that my original comment on this bug is
multiple years old).

Qian Zhang

unread,
Apr 8, 2018, 9:27:24 PM4/8/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Besides supplementary groups, I think POSIX ACLs could be another solution to this issue: we could add an ACL entry to grant rwx permission to the pod/container user on the volume. But I do not see POSIX ACLs mentioned in this thread; are there any drawbacks?

cc @thockin @saad-ali

Tim Hockin

unread,
Apr 9, 2018, 2:44:03 PM4/9/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention
I don't know that nginx clearing supplementary groups prevents any
vulnerability in this case? It is specifically defeating a well-understood
mechanism. Can we fix nginx?

As for ACL or other mechanisms, I don't object to them, I just have less
context on them.



Qian Zhang

unread,
Apr 9, 2018, 9:00:29 PM4/9/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

It is specifically defeating a well-understood mechanism.

@thockin Can you please elaborate a bit on this? And why do we need to fix Nginx?

Tim Hockin

unread,
Apr 10, 2018, 12:07:12 AM4/10/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention
We explicitly set up supplemental groups so we can do things like volumes
and per-volume accounting. It is 100% intentional, and then nginx drops
supplemental groups in the name of security, breaking valid use cases.



Russell Harmon

unread,
Apr 10, 2018, 12:37:39 AM4/10/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

In a non-containerized world, if nginx didn't drop supplemental groups, a remote code execution vulnerability in nginx could leak undesired privileges to remote attackers via its supplemental groups. I therefore don't think you'll ever get the nginx developers to be willing to stop doing that. Even if you do manage to convince them to, dropping supplemental groups is the standard practice, and you'd have to convince every developer of every privilege dropping application to do the same. Apache does the same exact thing here.

Furthermore, even if you pick another obscure Linux access control mechanism to use instead (for example, fsuid), it is intentional that every possible type of privilege is dropped, so it would be a security vulnerability if applications didn't drop that privilege as well. That is the security model here.

In a non-containerized world, the only way to grant privileges to the application after it drops privileges is to grant privileges to the user/group/etc that the application switches to. Hence my original (3 year old) comment about supporting UID and GID explicitly, which would allow the user to specify the UID or GID that the application is going to switch to.

Looking at the documentation for PodSecurityContext, it says this about fsGroup:

A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:

  1. The owning GID will be the FSGroup
  2. The setgid bit is set (new files created in the volume will be owned by FSGroup)
  3. The permission bits are OR'd with rw-rw----

If unset, the Kubelet will not modify the ownership and permissions of any volume.

As far as I'm aware, these actions should be sufficient to allow the resulting unprivileged user after a privilege drop to access the volume successfully. (caveat, I haven't tested it)

Qian Zhang

unread,
Apr 10, 2018, 3:42:06 AM4/10/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Yes, so I think setting fsGroup to the group configured in nginx.conf will let nginx access the volume even after a privilege drop, and will also make volume accounting work.

Qian Zhang

unread,
Apr 10, 2018, 5:20:20 AM4/10/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

But I have another question: besides the fsGroup in the pod security context, a user can also set fsGroup in the container security context, so if a pod has multiple containers and each container has its own fsGroup, how can we make sure all of these containers can access the volume (since a volume can only be owned by a single group rather than multiple)?

krmayankk

unread,
Apr 11, 2018, 8:07:48 PM4/11/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@qianzhangxa if multiple containers need access to that volume, you will need to make sure all containers request the same fsGroup in the container-level security context, or better, just set it at the pod level.

krmayankk

unread,
Apr 11, 2018, 8:09:12 PM4/11/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@tallclair FYI I believe we can close this issue

krmayankk

unread,
Apr 11, 2018, 8:10:00 PM4/11/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

/sig auth

Nik Karbaum

unread,
Apr 24, 2018, 2:57:29 PM4/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

None of the solutions suggested are working for me.

YML:

apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 0
      initContainers:
      - image: some-sftp-container
        name: sftp-mount-permission-fix
        command: ["sh", "-c", "chown -R <user> /mnt/permission-fix"]
        volumeMounts:
        - name: azure
          mountPath: /mnt/permission-fix
      containers:
      - image: some-sftp-container
        name: sftp-container
        ports:
        - containerPort: 22
          name: port_22
        volumeMounts:
        - name: azure
          mountPath: /home/<user>/data
      volumes:
        - name: azure
          azureFile:
            secretName: azure-secret
            shareName: sftp-share
            readOnly: false

Once the Pod is ready and I exec into the container and check the dirs, nothing has happened:

root@container:/# cd /home/talentry                                                                        
root@container:/home/talentry# ls -als
total 8
4 drwxr-xr-x 3 root root 4096 Apr 24 18:45 .
4 drwxr-xr-x 1 root root 4096 Apr 24 18:45 ..
0 drwxr-xr-x 2 root root    0 Apr 22 21:32 data
root@container:/home/<user># cd data
root@container:/home/<user>/data# ls -als
total 1
1 -rwxr-xr-x 1 root root 898 Apr 24 08:55 fix.sh
0 -rwxr-xr-x 1 root root   0 Apr 22 22:27 test.json
root@container:/home/<user>/data# 

At some point I also had the runAsUser: 0 on the container itself. But that didn't work either. Any help would be much appreciated

Nik Karbaum

unread,
Apr 24, 2018, 3:14:11 PM4/24/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Also running a chown afterwards didn't work

Tim Hockin

unread,
Apr 29, 2018, 11:12:27 AM4/29/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@eatnumber1 if a group is in your supplemental groups, shouldn't you assume that it was intended that you have access to that group's resources? Dropping supplemental groups is saying "I know you told me I need this, but I don't want it" and then later complaining that you don't have it.

Regardless, I am now thoroughly lost as to what this bug means - there are too many followups that don't seem to be quite the same.

Can someone summarize for me? Or better, post a full repro with non-pretend image names?

Qian Zhang

unread,
Apr 30, 2018, 10:44:39 PM4/30/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@thockin IIUC, nginx is not just dropping the supplementary groups, it is actually resetting them to what is configured in nginx.conf by calling initgroups.

qafro1

unread,
May 16, 2018, 9:11:36 AM5/16/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This worked for me
spec:
  containers:
  - name: jenkins
    image: jenkins/jenkins
    ports:
    - containerPort: 50000
    - containerPort: 8080
    volumeMounts:
    - mountPath: /var/jenkins_home
      name: jenkins-home
  securityContext:
    fsGroup: 1000
    runAsUser: 0

qafro1

unread,
May 16, 2018, 9:12:22 AM5/16/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This worked for me.. part of the script.

Noah Huppert

unread,
Jun 6, 2018, 3:11:10 AM6/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Could someone clarify what the general solution to this issue is? I think a nice clean resolution to this very long thread would be extremely helpful.

I found 2 workarounds in the comments with reports of varying degrees of success.

I've tried both of them with no success. But I don't want to declare these methods as fully not working in case I implemented them incorrectly.

Erkin Khaydarov

unread,
Jun 22, 2018, 12:52:55 PM6/22/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

The solutions aren't ideal; now your containers are running as root, which is against the security standards that k8s tries to get its users to impose.

It would be great if persistent volumes could be created with a securityContext in mind, i.e.

kind: PersistentVolume
metadata:
  name: redis-data-pv
  namespace: data
  labels:
    app: twornesol
spec:
  securityContext:
    runAsUser: 65534
    fsGroup: 65534
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: data
    name: redis-data
  hostPath:
    path: "/data"```

Robert Terhaar

unread,
Jun 23, 2018, 2:46:19 PM6/23/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

As a workaround, I use a postStart lifecycle hook to chown the volume data to the correct permissions. This may not work for all applications, because the postStart lifecycle hook may run too late, but it's more secure than running the container as root and then fixing permissions and dropping root in the entrypoint.

Chico Venancio

unread,
Jun 23, 2018, 4:18:20 PM6/23/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@robbyt commented
As a workaround, I use a postStart lifecycle hook to chown the volume data to the correct permissions. This may not work for all applications, because the postStart lifecycle hook may run too late, but it's more secure than running the container as root and then fixing permissions and dropping root (or using gosu) in the entrypoint script.

We use an initContainer; can a lifecycle hook have a different securityContext than the container itself?

Marcus Heese

unread,
Jun 28, 2018, 5:07:56 AM6/28/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

It's sad to see that, after having to do research on this again, @chicocvenancio's option (which I use as well) is still apparently the only way to achieve this.

I understand where the problem is coming from and why we are so reluctant to change this; however, especially for Secret volumes, changing the UID of a volume can be essential.

Here is an example from the PostgreSQL world: mount a TLS client cert for your application with a secret volume. As recommended everywhere, you don't run your container as root. However, the postgres connection library will immediately complain that the key is world-readable. "No problem," you think, and you change the mode / defaultMode to the demanded 0600 (which is a very reasonable demand for a client library to make). However, now this won't work either, because now root is the only user that can read the file.

The point I'm trying to make with this example is: groups don't come to the rescue here.

Now PostgreSQL is definitely a standard database and a product that a lot of people use. And asking to mount client certs with Kubernetes in a way that does not require an initContainer as a workaround is not too much to ask, imho.

So please, let's find some middle ground on this issue, and not just close it. 🙏

Reza

unread,
Jun 28, 2018, 2:48:17 PM6/28/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I'm trying to mount an ssh key into the user's .ssh directory with defaultMode 0400 so the application can ssh without a password. But that doesn't work if the secret is mounted as owned by root. Can you explain again how this can be solved using fsGroup or some other such mechanism?

Tim Hockin

unread,
Jul 2, 2018, 4:02:34 PM7/2/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I am still hopelessly confused about this bug. There seem to be about 6 things being reported that all fail the same way but are different for different reasons.

  • nginx drops supplemental groups
  • ssh/postgres demands a particular mode for keys (and does not accept group-read)
  • something about running as root ?

Can someone explain, top-to-bottom the issue (or issues) in a way that I can follow without having to re-read the whole thread?

Keep in mind that Volumes are defined as a Pod-scope construct, and 2 different containers may run as 2 different UIDs. Using group perms is ideal for this, but if it is really not meeting needs, then let's fix it. But i need to understand it first.

@saad-ali for your radar

Reza

unread,
Jul 6, 2018, 11:01:02 AM7/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@thockin My use-case is very simple. I'm injecting a secret (ssh key) into a container that is not running as root. The ssh key in /home//.ssh must have 400 permissions, which I can set, but it must also be owned by the UID, or it won't work. I don't want to give this pod any root privilege of any sort, so an init container that modifies the UID of the file does not work for me. How do I do it, other than including the ssh key in the image?

Tim Allclair (St. Clair)

unread,
Jul 6, 2018, 4:45:49 PM7/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@vikaschoudhary16 @derekwaynecarr this has some overlap / implications for user-namespace mapping.

Joel Pearson

unread,
Jul 7, 2018, 6:26:11 AM7/7/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@rezroo a workaround could be to simply make a copy of the ssh key in an init container; that way you'll be able to control who owns the file, right? Provided the init container runs as the same user that needs to read the ssh key later. It's a little gross, but it "should" work, I think.

Matthew Lee

unread,
Jul 11, 2018, 6:14:51 AM7/11/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@thockin another use-case: I'm trying to run an ELK StatefulSet. The pod has an Elasticsearch container running as non-root. I'm using a volumeClaimTemplate to hold the Elasticsearch data. The container is unable to write to the volume, though, as it is not running as root. Changing the fsGroup has had no effect. K8s v1.9. Also, the pod has multiple containers and I don't want to use the same fsGroup for all of them.

Marcus Heese

unread,
Jul 19, 2018, 12:58:47 PM7/19/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@pearj that's exactly the workaround that everybody uses ... and as the name says: it's a workaround, and should get addressed :) ... However, there is also a problem with this workaround: updated secrets eventually get refreshed in mounted volumes, which makes it possible to act on a file change in the running pod; you miss out on those updates when you copy the file from an init container.

Reza

unread,
Jul 20, 2018, 9:50:16 PM7/20/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@pearj @mheese This workaround wouldn't work for me anyway, because our PodSecurityPolicy doesn't allow containers to run as root (normal or init containers, it doesn't matter); no one can access a secret owned by root as far as I can tell.

Robert Krawitz

unread,
Aug 7, 2018, 1:50:12 PM8/7/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Yet another use case for this: I'm working on using XFS quotas (obviously, only if XFS is in use) for ephemeral storage. The current enforcement mechanism for ephemeral storage is to run du periodically; in addition to being slow and rather coarse-grained, it can be faked out completely (create a file, keep a file descriptor open on it, and delete it). I intend to use quotas for two purposes:

  1. Hard cap usage across all containers of a pod.

  2. Retrieve the per-volume storage consumption without having to run du (which can bog down).

I can't use one quota for both purposes. The hard cap applies to all emptydir volumes, the writable layer, and logs, but a quota used for that purpose can't be used to retrieve storage used for each volume. So what I'd like to do is use project quotas in a non-enforcing way to retrieve per-volume storage consumption and either user or group quotas to implement the hard cap. To do that requires that each pod have a unique UID or single unique GID (probably a unique UID would be best, since there may be reasons why a pod needs to be in multiple groups).

(As regards group and project IDs being documented as mutually exclusive with XFS, that is in fact no longer the case, as I've verified. I've asked some XFS people about it, and they confirmed that the documentation is out of date and needs to be fixed; this restriction was lifted about 5 years ago.)

Kirill Semaev

unread,
Aug 9, 2018, 3:34:20 PM8/9/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@robbyt please tell us how you managed to chown with postStart? My container runs as a non-root user, so postStart still runs with non-root permissions and can't change ownership:

message: "chown: /home/user/: Permission denied\nchown: /home/user/: Operation not permitted"

Cobra1978

unread,
Aug 30, 2018, 5:59:05 AM8/30/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Same problem here: we have some Dockerized Tomcat instances that run our web application, and we use JMX to monitor them. We want to serve the jmxremote.user and jmxremote.password files as secrets, but Tomcat, which obviously doesn't run as root, wants the JMX files to be readable only by the user that runs Tomcat.

Ludwik

unread,
Sep 5, 2018, 6:21:55 AM9/5/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

the same problem!


Josh Woodcock

unread,
Sep 26, 2018, 4:15:40 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

For now, the hack that works is setting the user to root at the end of your Dockerfile and setting a custom entrypoint script. Chown the volume in your custom entrypoint script, then use gosu to run the default entrypoint script as the default user. The thing I hate about this is that I have to do it for every single image that uses a volume in Kubernetes. Totally lame. Please provide a UID/GID option on the volume mount or volume claim config.

Mark Janssen

unread,
Sep 26, 2018, 4:31:56 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

That hack doesn’t work if you want to run a secure Kubernetes cluster with PodSecurityPolicies applied to enforce pods to run as a non-root user.

Josh Woodcock

unread,
Sep 26, 2018, 4:52:43 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

True. All hacks have their downsides. It's either that or logging in as root after the volume is created and chowning the directory manually. Not sure really which is worse :-D. Can't believe this is even a thing.

Marcus Heese

unread,
Sep 26, 2018, 5:31:49 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@thockin from what I gather following this issue for nearly 4 years now, the solution that everybody wants is to be able to set a uid and gid for a volume - in particular secret volumes (but not only those). On Jul 6 you posted a starting point for a possible solution to this. If this is a supported path from the maintainers' perspective, I'd finally start and try to solve this problem.

Davanum Srinivas

unread,
Sep 26, 2018, 6:03:33 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mheese i'd say go for it.

Josh Woodcock

unread,
Sep 26, 2018, 6:06:44 PM9/26/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mheese I'll collab on a PR if you want?

David Arnold

unread,
Sep 28, 2018, 4:29:31 PM9/28/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mheese It seems the gid is taken from the SecurityContext, so I guess, for fast relief, a uid implementation would be enough, also because the gid got more second-guessing in the discussion.

michalpiasecki1

unread,
Oct 6, 2018, 5:58:13 PM10/6/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

same issue with psql here

David Arnold

unread,
Oct 7, 2018, 11:38:02 AM10/7/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Chakradhar Rao Jonagam

unread,
Oct 9, 2018, 4:43:31 AM10/9/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Running into the same issue. Is there any recommended solution for this?

michalpiasecki1

unread,
Oct 9, 2018, 5:50:38 AM10/9/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@blaggacao: thanks for the hint, however I found another workaround.
@debianmaster: I would recommend securityContext and fsGroup as described in https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

I ended up with certificates owned by root and group postgres, with permissions 440.

Geri Jennings

unread,
Dec 7, 2018, 7:19:31 PM12/7/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@michalpiasecki1 can you give more info about how you resolved this for postgres?

I have the server.crt and server.key files stored in a k8s secret pg-certs-secret and I want to mount them into my container running postgres:9.6. I have this set up with:

      containers:
      - name: pg
        image: postgres:9.6
        ...
        args: ["-c", "ssl=on", "-c", "ssl_cert_file=/etc/certs/pg_server.crt", "-c", "ssl_key_file=/etc/certs/pg_server.key"]
        volumeMounts:
        - name: pg-certs
          mountPath: "/etc/certs/"
          readOnly: true
      volumes:
      - name: pg-certs
        secret:
          secretName: pg-certs-secret
          defaultMode: 384

But deploying this, the container dies with the error FATAL: could not load server certificate file "/etc/certs/pg_server.crt": Permission denied

I assume this is because the certs are loaded so that they are owned by root, when they need to be owned by postgres. It's not clear from the docs what I should do to change ownership, short of creating a custom Docker image, which I'd rather not do. The securityContext and fsGroup you suggested seemed like they could work, but I would appreciate it if you would share more info about how exactly you achieved this.

Also worth noting: I added defaultMode: 384 to ensure the files were added with 0600 file permissions. Before I added that, the container died with the error

FATAL:  private key file "/etc/certs/pg_server.key" has group or world access
DETAIL:  File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.

Geri Jennings

unread,
Dec 7, 2018, 8:48:26 PM12/7/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

For reference, I just figured this out and it worked when I added

     securityContext:
        fsGroup: 999

to the spec.

viveksaiaws

unread,
Dec 16, 2018, 5:43:47 PM12/16/18
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I have the same problem: #72085
Can anyone help me?

realrill

unread,
Feb 5, 2019, 8:45:18 AM2/5/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Is there any chance of this issue being fixed? Are the Kubernetes folks working on a solution?

charles-crain

unread,
Feb 5, 2019, 12:19:10 PM2/5/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This issue has not been a problem for us for a very long time. We set the "fsGroup" in the pod's security context to match the group ID of the user that runs the main Docker entry point, and any volumes in the pod become accessible to that container's main process:

https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

Victor Sollerhed

unread,
Feb 6, 2019, 6:18:36 AM2/6/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@charles-crain: Your suggestion works really well for most cases.

Here's another case that's not covered:

If the container starts as root but uses a tool such as gosu to become another user (for some processes), then locking the container into only one group with fsGroup will prevent cases such as "I want my non-root user to have access to SSH keys mounted into its ~/.ssh directory, while my root user also has access to other mounts".

One example of this: "a DinD container where dockerd must start as root, but subsequent containers are run by a non-root user".

Artem Cherednichenko

unread,
Feb 14, 2019, 9:06:14 AM2/14/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hi there @charles-crain, I am facing a very interesting issue that matches the topic of this thread. It seems fsGroup does not work in all cases.
Here is an example deployment: it is a test nginx deployment where I am trying to mount an NFS volume and, additionally, an emptyDir, just to compare.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  labels:
    app: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-test
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      securityContext:
        fsGroup: 2000
      volumes:
      - name: nfs-volume
        nfs:
          server: # nfs with no_root_squash
          path: /nfs
      - name: test-fs-group
        emptyDir: {}
      containers:
      - image: nginx
        name: nginx-test
        imagePullPolicy: Always
        volumeMounts:
        - name: nfs-volume
          mountPath: /var/local/test5
        - name: test-fs-group
          mountPath: /var/local/test6

When I exec bash into the pod's nginx container, the GID is applied only to the emptyDir, and not to the dir mounted from NFS. NFS is configured with no_root_squash for testing purposes, and the process in my container runs as a non-root user, so that is the problem. It can be solved via chown; however, I am trying to achieve this with a native solution.

Manoj Ramakrishnan

unread,
Feb 17, 2019, 9:07:03 PM2/17/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I also face the exact same issue described above.

Erkin Khaydarov

unread,
Feb 18, 2019, 12:44:17 PM2/18/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This issue has been open for about 5 years. No one from Kubernetes is interested in it, and that may be for a reason, valid or not. There were a number of valid solutions to this simple problem, but none of them were implemented.

Not sure why this issue doesn't just get closed.


Jing Xu

unread,
Feb 18, 2019, 2:37:53 PM2/18/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@mheese As you commented here (#2630 (comment)) about setting a uid and gid for a volume, are you still planning to work on it? Thanks!

Cosmin Ioniță

unread,
Feb 19, 2019, 5:54:58 PM2/19/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I also encountered this issue. Is there any plan to create a viable solution for it? Local persistent volumes can't replace all use cases of hostPath volumes.

Maximilian Mack

unread,
Mar 15, 2019, 5:01:22 AM3/15/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Same here.

Marcus Heese

unread,
Mar 16, 2019, 1:45:30 PM3/16/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@jingxu97 I haven't given it a try yet because I don't really feel that there is a consensus that this is what should be done.

Let me come up with a detailed proposal and post it here when ready.

raotkind

unread,
Mar 20, 2019, 11:16:42 PM3/20/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Ok

Martin Lensment

unread,
Apr 23, 2019, 12:53:03 PM4/23/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

For reference, I just figured this out and it worked when I added

     securityContext:
        fsGroup: 999

to the spec.

For postgres:11.1-alpine use this:

securityContext:
  fsGroup: 70

☁ The Panda Bear

unread,
May 12, 2019, 2:52:58 AM5/12/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I can only hope that the Kubernetes members prioritize this issue. IMO, it's really a blocker, especially from a security point of view, and it is becoming a vulnerability risk :'(

Davanum Srinivas

unread,
May 13, 2019, 8:35:51 AM5/13/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

long-term-issue (note to self)

kfox1111

unread,
May 13, 2019, 4:05:39 PM5/13/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I'm hitting this in the context of cert-manager-managed secrets. I was using an initContainer to copy the certs to the right place and update the permissions. cert-manager needs to update the secrets in place, so that trick won't work. I'll explore the fsGroup workaround.

Jing Xu

unread,
May 22, 2019, 10:10:17 PM5/22/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@incubus8
I am trying to work on this issue. Could you please describe your use case and what kind of behavior you would expect? Thanks!

Arun Raghavan

unread,
May 22, 2019, 10:32:04 PM5/22/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@jingxu97 I can offer two examples. The Prometheus docker image starts the prometheus service as user nobody (uid 65534) and the Grafana docker image starts grafana as uid=472 (https://grafana.com/docs/installation/docker/)

Both of these fail by default to create directories when they first start up because of these permissions.

Jing Xu

unread,
May 23, 2019, 2:46:30 PM5/23/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@ford-prefect if you set fsGroup in PodSecurityContext, and runAsUser, wouldn't those services have the permission to write?

Erkin Khaydarov

unread,
May 23, 2019, 2:50:57 PM5/23/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

No, because the permissions are set for the pod, not for the volume, which was created independently. It would be great if PodSecurityContext could in fact alter the permissions of the volumes, or at least fail to mount and throw an error.

Jing Xu

unread,
May 23, 2019, 2:55:35 PM5/23/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@ekhaydarov, in the current SetVolumeOwnership function, if fsGroup is provided, the volume will get rw-rw---- permissions, so the group has rw permission.

Cobra1978

unread,
May 24, 2019, 2:23:52 AM5/24/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@jingxu97 this is not always a solution. For example: we use secrets for jmxremote.password and jmxremote.user, which are needed for JMX monitoring of Java applications. Java requires that those files belong to the user that runs the application and that they have permissions 400, so for now there is no way to use secrets this way in Rancher 2.x.

leigh capili

unread,
Jun 14, 2019, 1:59:28 PM6/14/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I was perplexed to see that fsGroup was an option and fsUser was not.
Also, the permissions/mode portion of this is confusing. We should make it clearer how volumes like EmptyDir get their default mode or allow the user to set it explicitly, as this is a pretty normal unix admin task.

If root is the only user that can ever own your volume (aside from using an initContainer to chmod it at runtime), the API encourages using root as an application's user, which is a weak security practice.

@jingxu97 What do you think?

Jing Xu

unread,
Jun 14, 2019, 2:26:11 PM6/14/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@stealthybox, thank you for the feedback. I am currently working on a proposal for API on volume ownership and permission and will share with the community soon. Feedback/comments are welcome then.

Cobra1978

unread,
Jul 10, 2019, 11:51:42 AM7/10/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hi.

Is there any news about this issue?

mikekuzak

unread,
Jul 16, 2019, 5:25:42 AM7/16/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Why does pv.beta.kubernetes.io/gid not work for the local host path provisioner?

Richard Matthews

unread,
Aug 11, 2019, 6:11:13 PM8/11/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hey,

I am encountering this as well, I'd appreciate some news :).

hughobrien

unread,
Aug 11, 2019, 9:46:44 PM8/11/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

this has been my workaround so far:
initContainers:
- name: init
  image: busybox:latest
  command: ['/bin/chown', 'nobody:nogroup', '/<my dir>']
  volumeMounts:
  - name: data
    mountPath: /<my dir>

Maxim Neaga

unread,
Aug 21, 2019, 7:14:22 PM8/21/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

this has been my workaround so far:

      - name: init
        image: busybox:latest
        command: ['/bin/chown', 'nobody:nogroup', '/<my dir>']
        volumeMounts:
        - name: data
          mountPath: /<my dir>

The workarounds with chowning do not work for read-only volumes, such as secret mounts, unfortunately.

Jeffrey Descan

unread,
Nov 15, 2019, 3:41:08 AM11/15/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

I would need this as well (pretty urgently), because we have software that won't start due to permissions not being allowed to be anything other than 0600. If we could mount the volume under a specific UID, my (and others') problem would be solved.



Josh Woodcock

unread,
Nov 15, 2019, 7:57:24 AM11/15/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

You can run a job as part of your deployment to update the volume permissions and use a ready state to check for write permission as a workaround. Or you can use fsGroup to specify the group for the volume and add the application owner to the group that owns the volume. Option 2 seems cleaner to me. I used to use option 1 but now I use option 2.

Will May

unread,
Nov 25, 2019, 6:27:14 AM11/25/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Note that if Kubernetes did support an fsUser option, then you'd trip over #57923 where all files within the mounted secret would be given 0440 permission (or 0660 for writeable mounts) and would ignore any other configuration.

Wolfgang Richter

unread,
Dec 4, 2019, 10:23:12 AM12/4/19
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@woodcockjosh fsGroup doesn't cover the use case of security-sensitive software such as Vault trying to run as vault:vault and loading a private key file requiring permissions equal to or less than 0600. @wjam fsUser would be ideal if we could get 0400 permissions set as well (for things like private key files).

We hit this trying to configure Vault to authenticate to a PostgreSQL DB with certificates. The underlying Go library hard fails if the permission bits differ (https://github.com/lib/pq/blob/90697d60dd844d5ef6ff15135d0203f65d2f53b8/ssl_permissions.go#L17).

eichlerla2

unread,
Feb 20, 2020, 4:35:53 PM2/20/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

@jingxu97: Is there any news on that? We still have the PV ownership problem in our clusters with strict security policies.

Kaleab Girma

unread,
Feb 21, 2020, 2:13:30 PM2/21/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

This article looks like it works. I didn't test it, but I'll test it on Monday; if anyone can try it before then, please let us know.
The details are below.
Data persistence is configured using persistent volumes. Because Kubernetes mounts these volumes with the root user as the owner, non-root containers don't have permission to write to the persistent directory.

The following are some things we can do to solve these permission issues:

Use an init-container to change the permissions of the volume before mounting it in the non-root container. Example:

    spec:
       initContainers:
       - name: volume-permissions
         image: busybox
         command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
         volumeMounts:
         - mountPath: /bitnami
           name: nginx-data
       containers:
       - image: bitnami/nginx:latest
         name: nginx
         volumeMounts:
         - mountPath: /bitnami
           name: nginx-data

Use Pod Security Policies to specify the user ID and the FSGroup that will own the pod volumes. (Recommended)

  spec:
      securityContext:
        runAsUser: 1001
        fsGroup: 1001
      containers:
      - image: bitnami/nginx:latest
        name: nginx
        volumeMounts:
        - mountPath: /bitnami
          name: nginx-data

tisc0

unread,
Apr 2, 2020, 7:25:52 PM4/2/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Hi,
I've seen the workaround with that weak initContainer running as root all over the Internet.
I've also been struggling with fsGroup, which applies only at the scope of the pod, not to each container in a pod, which is [also] a shame.
I just built a custom image (nonroot-initContainer) based on alpine, with sudo installed and a custom /etc/sudoers giving my non-root user full power to apply the chmod actions. Unfortunately, I'm hitting another wall:

sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' \
option set or an NFS file system without root privileges?

Since I'm not willing to create a less secure PodSecurityPolicy for that deployment, any news on this issue would be very welcome for people who have to comply with security best practices.

Thanks in advance !

Anton Kuzmin

unread,
Jun 8, 2020, 1:57:47 AM6/8/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Is there an fsGroup setting for Kubernetes Deployment files?

fejta-bot

unread,
Sep 6, 2020, 2:30:13 AM9/6/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Émilien Devos

unread,
Sep 6, 2020, 2:43:58 AM9/6/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

/remove-lifecycle stale

Paco Xu

unread,
Nov 17, 2020, 5:00:13 AM11/17/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

👍

Israel Fonseca

unread,
Nov 17, 2020, 10:39:26 AM11/17/20
to kubernetes/kubernetes, k8s-mirror-storage-feature-requests, Team mention

Is this still an issue? I've done some tests (Minikube 1.14, 1.15, 1.19 and EKS 1.14) and the permissions on the emptyDir volume are 777 as intended:

apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: default
spec:
  containers:
  - image: python:2.7.18-slim
    command: [ "tail", "-f" ]
    imagePullPolicy: Always
    name: debug
    volumeMounts:
    - mountPath: /var/log/test-dir
      name: sample-volume
  volumes:
  - emptyDir:
      sizeLimit: 10M
    name: sample-volume

