cc @kubernetes/sig-storage-feature-requests @kubernetes/sig-node-feature-requests
@Guillaume-Mayer - wouldn't a `postStart` lifecycle hook to chmod the files work (alternatively an init container, if execution needs to complete before the entrypoint)?
@so0k That won't work if `runAsUser` and `allowPrivilegeEscalation` prevent the `root` user from being used.
I have a very similar use case where I would need to change the owner of a file that is mounted (from a secret in my case).
I have a mongodb-cluster in k8s which uses a special `cluster.key` file for cluster authorization. That file is stored in a secret; we have a client where running images as root is forbidden. Our pod has a securityContext set with a `runAsUser: 1000` directive. MongoDB itself forbids the file being accessible by anyone but the owner: it refuses to start if the file is readable by `group` or `other`.
Since the owner is `root`, and I cannot run `chown` on that file as non-root, neither changing the permissions nor (since there is no k8s support) changing the owner of the file works.
I am currently working around this by injecting the secret as an environment variable into a busybox init container, which in turn mounts an `emptyDir` and writes the file there. The secret is then not mounted as a file anymore. It's quite ugly, and if there is a chance to get rid of it, I'd be in.
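A hedged sketch of that workaround (the secret name, key, and UID here are placeholders, not the actual manifest): the secret is exposed to an init container as an env var, and the init container, running as the app's non-root UID, writes it into a shared `emptyDir`:

```yaml
initContainers:
- name: write-key
  image: busybox
  securityContext:
    runAsUser: 1000           # same non-root UID as the main container
  env:
  - name: CLUSTER_KEY
    valueFrom:
      secretKeyRef:
        name: mongodb-keyfile  # placeholder secret name
        key: cluster.key
  # Write the key into the shared emptyDir; since this runs as UID 1000,
  # the resulting file is owned by 1000 and can be locked down to 0600.
  command: ["sh", "-c", "printf '%s' \"$CLUSTER_KEY\" > /keys/cluster.key && chmod 600 /keys/cluster.key"]
  volumeMounts:
  - name: keydir
    mountPath: /keys
volumes:
- name: keydir
  emptyDir: {}
```

The main container then mounts `keydir` instead of the secret itself.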
The fact that so many of the Docs advise and caution the user against running containers as root, and that this issue is now 3 years old astounds me. This should at least be explained in much greater detail in the Docs.
Hi!
I ended up with the initContainers config below for giving the node-red-docker container, which runs as a non-privileged user, access to an externally created disk. After trying a lot of things, it seemed `runAsUser: 0` (root) did the trick.
Cheers
-jo
initContainers:
- name: volume-mount-hack
  image: nodered/node-red-docker:slim
  command:
  - sh
  - -c
  - 'chmod -R a+rwx /data'
  volumeMounts:
  - name: picturl-persistent-storage
    mountPath: /data
  securityContext:
    runAsUser: 0
Many older applications which bind to low (privileged) ports start first as root, then immediately drop privileges to some other user. In such a scenario, the container must be configured to start the application as root, and so the original user (root) would have access to the volume. Once the application calls setuid(2)/seteuid(2) though, it won't have access anymore.
@eatnumber1 Can you elaborate a bit more on why we will have this issue with the supplementary-group solution mentioned in this thread? IIUC, `setuid(2)`/`seteuid(2)` will not change the supplementary groups of the calling process, so as long as the application is in a group which has access to the volume, it should have no problem accessing the volume, right?
It looks like I was mistaken: calling `setgid(2)` doesn't change supplementary groups (which I had thought it did).
Looking around, it seems like at least nginx drops supplementary groups explicitly here (and otherwise would be a minor security vulnerability). I'd be surprised if any well-written privilege-dropping application doesn't drop supplementary groups.
Thanks @eatnumber1! So Nginx initially runs as root, and later resets its uid, gid, and supplementary groups to what is configured in `nginx.conf`. Then I think with the pod security context we can set `fsGroup` to the group configured in `nginx.conf`; this way, even after Nginx resets its supplementary groups, it can still access the volume. Right?
It is specifically defeating a well-understood mechanism.
@thockin Can you please elaborate a bit on this? And why do we need to fix Nginx?
In a non-containerized world, if nginx didn't drop supplemental groups, a remote code execution vulnerability in nginx could leak undesired privileges to remote attackers via its supplemental groups. I therefore don't think you'll ever get the nginx developers to be willing to stop doing that. Even if you do manage to convince them to, dropping supplemental groups is the standard practice, and you'd have to convince every developer of every privilege dropping application to do the same. Apache does the same exact thing here.
Furthermore, even if you pick another obscure Linux access control mechanism to use instead (for example, fsuid), it is intentional that every possible type of privilege is dropped, so it would be a security vulnerability if applications didn't drop that privilege as well. That is the security model here.
In a non-containerized world, the only way to grant privileges to the application after it drops privileges is to grant privileges to the user/group/etc that the application switches to. Hence my original (3 year old) comment about supporting UID and GID explicitly, which would allow the user to specify the UID or GID that the application is going to switch to.
Looking at the documentation for PodSecurityContext, it says this about `fsGroup`:
A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod:
- The owning GID will be the FSGroup
- The setgid bit is set (new files created in the volume will be owned by FSGroup)
- The permission bits are OR'd with rw-rw----. If unset, the Kubelet will not modify the ownership and permissions of any volume.
As far as I'm aware, these actions should be sufficient to allow the resulting unprivileged user after a privilege drop to access the volume successfully. (caveat, I haven't tested it)
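A minimal sketch of that idea (untested, per the caveat above; the pod name is hypothetical and GID 101 is only an assumed example of the group the server drops to):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo          # hypothetical name
spec:
  securityContext:
    fsGroup: 101              # assumption: the group configured in nginx.conf
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}              # kubelet chowns this to GID 101 with rw-rw----
```

If nginx resets its supplementary groups to those of its configured user, that user must be a member of GID 101 for this to keep working after the privilege drop.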
Yes, so I think setting `fsGroup` to the group configured in nginx.conf will let Nginx access the volume even after a privilege drop, and will also make volume accounting work.
But I have another question: besides the `fsGroup` in the pod security context, a user can also set `fsGroup` in the container security context. So if a pod has multiple containers and each container has its own `fsGroup`, how can we make sure all of these containers can access the volume (since a volume can only be owned by a single group rather than multiple)?
@qianzhangxa if multiple containers need access to that volume, you will need to make sure all containers request the same `fsGroup` in the container-level security context, or better, just set it at the pod level.
@tallclair FYI I believe we can close this issue
/sig auth
None of the solutions suggested are working for me.
YML:
apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
  labels:
    tier: frontend
spec:
  selector:
    matchLabels:
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 0
      initContainers:
      - image: some-sftp-container
        name: sftp-mount-permission-fix
        command: ["sh", "-c", "chown -R <user> /mnt/permission-fix"]
        volumeMounts:
        - name: azure
          mountPath: /mnt/permission-fix
      containers:
      - image: some-sftp-container
        name: sftp-container
        ports:
        - containerPort: 22
          name: port_22
        volumeMounts:
        - name: azure
          mountPath: /home/<user>/data
      volumes:
      - name: azure
        azureFile:
          secretName: azure-secret
          shareName: sftp-share
          readOnly: false
Once the Pod is ready and I exec into the container and check the dirs, nothing has happened:
root@container:/# cd /home/talentry
root@container:/home/talentry# ls -als
total 8
4 drwxr-xr-x 3 root root 4096 Apr 24 18:45 .
4 drwxr-xr-x 1 root root 4096 Apr 24 18:45 ..
0 drwxr-xr-x 2 root root 0 Apr 22 21:32 data
root@container:/home/<user># cd data
root@container:/home/<user>/data# ls -als
total 1
1 -rwxr-xr-x 1 root root 898 Apr 24 08:55 fix.sh
0 -rwxr-xr-x 1 root root 0 Apr 22 22:27 test.json
root@container:/home/<user>/data#
At some point I also had `runAsUser: 0` on the container itself, but that didn't work either. Any help would be much appreciated.
Running a `chown` afterwards didn't work either.
@eatnumber1 if a group is in your supplemental groups, shouldn't you assume that it was intended that you have access to that group's resources? Dropping supplemental groups is saying "I know you told me I need this, but I don't want it" and then later complaining that you don't have it.
Regardless, I am now throughly lost as to what this bug means - there are too many followups that don't seem to be quite the same.
Can someone summarize for me? Or better, post a full repro with non-pretend image names?
@thockin IIUC, Nginx is not just dropping the supplementary groups, it is actually resetting them to what is configured in nginx.conf by calling `initgroups`.
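One way to see this behavior from inside a container is to list the supplementary groups of the current process. This small Python snippet (not from the thread, just an illustration) shows what an `fsGroup` adds, and what a privilege-dropping call like `initgroups()` would later discard:

```python
import os

def supplementary_groups():
    """Return the sorted supplementary group IDs of the current process.

    Inside a pod with `fsGroup` set, the fsGroup GID appears in this
    list - until the application drops privileges and resets its
    supplementary groups (as nginx does via initgroups()).
    """
    return sorted(os.getgroups())

if __name__ == "__main__":
    print(supplementary_groups())
```

Running this before and after the application's privilege drop makes it obvious whether the fsGroup GID survived.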
This worked for me
spec:
  containers:
  - name: jenkins
    image: jenkins/jenkins
    ports:
    - containerPort: 50000
    - containerPort: 8080
    volumeMounts:
    - mountPath: /var/jenkins_home
      name: jenkins-home
  securityContext:
    fsGroup: 1000
    runAsUser: 0
This worked for me.. part of the script.
Could someone clarify what the general solution to this issue is? I think a nice clean resolution to this very long thread would be extremely helpful.
I found 2 workarounds in the comments with reports of varying degrees of success.
I've tried both of them with no success. But I don't want to declare these methods as fully not working in case I implemented them incorrectly.
The solutions aren't ideal; now your containers are running as root, which is against the security standards that k8s tries to get its users to impose.
it would be great if persistent volumes could be created with `securityContext` in mind, i.e.
kind: PersistentVolume
metadata:
  name: redis-data-pv
  namespace: data
  labels:
    app: twornesol
spec:
  securityContext:
    runAsUser: 65534
    fsGroup: 65534
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: data
    name: redis-data
  hostPath:
    path: "/data"
As a workaround, I use a `postStart` lifecycle hook to chown the volume data to the correct permissions. This may not work for all applications, because the `postStart` lifecycle hook may run too late, but it's more secure than running the container as root and then fixing permissions and dropping root in the entrypoint.
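A sketch of such a hook (the container name, image, UID, and path are placeholders; note this only works if the container's user is privileged enough to chown, as later comments in this thread point out):

```yaml
containers:
- name: app
  image: example/app:latest   # placeholder image
  volumeMounts:
  - name: data
    mountPath: /data
  lifecycle:
    postStart:
      exec:
        # Runs after the container starts, with the container's own
        # user - it must be able to chown for this to succeed.
        command: ["sh", "-c", "chown -R 1000:1000 /data"]
```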
@robbyt commented
As a workaround, I use a postStart lifecycle hook to chown the volume data to the correct permissions. This may not work for all applications, because the postStart lifecycle hook may run too late, but it's more secure than running the container as root and then fixing permissions and dropping root (or using gosu) in the entrypoint script.
We use an `initContainer`. Can a lifecycle hook have a different securityContext than the container itself?
It's sad to see that, after doing my research again, @chicocvenancio's option (which I use as well) is still apparently the only way to achieve this.
I understand where the problem is coming from and why we are so reluctant to change this, however, especially for Secret volumes changing the UID of volumes can be essential.
Here is an example from the PostgreSQL world: mount a TLS client cert for your application with a secret volume. As recommended everywhere, you don't run your container as root. However, the postgres connection library will instantaneously complain that the key is world readable. "No problem" you think and you change the mode / default mode to match the demanded 0600 (which is very reasonable to demand that as a client library). However, now this won't work either, because now root is the only user which can read this file.
The point I'm trying to make with this example is: groups don't come to the rescue here.
Now PostgreSQL is definitely a standard database and a product that a lot of people use. And asking for mounting client certs in a way with Kubernetes that do not require an initContainer as a workaround is not too much to ask imho.
So please, let's find some middle ground on this issue, and not just close it. 🙏
I'm trying to mount an ssh-key into the user's .ssh directory with defaultMode 0400 so the application can ssh without a password. But that doesn't work if the secret is mounted as owned by root. Can you explain again how this can be solved using fsGroup or some other such mechanism?
I am still hopelessly confused about this bug. There seems to be about 6 things being reported that all fail the same way but are different for different reasons.
Can someone explain, top-to-bottom the issue (or issues) in a way that I can follow without having to re-read the whole thread?
Keep in mind that Volumes are defined as a Pod-scope construct, and 2 different containers may run as 2 different UIDs. Using group perms is ideal for this, but if it is really not meeting needs, then let's fix it. But i need to understand it first.
@saad-ali for your radar
@thockin My use-case is very simple. I'm injecting a secret (ssh key) into a container that is not running as root. The ssh key in /home//.ssh must have 400 permission which I can do, but must also be owned by the UID, or it won't work. I don't want to give this pod any root privilege of any sorts, so an init container that modifies the UID of the file does not work for me. How do I do it, other than including the ssh-key in the image?
@vikaschoudhary16 @derekwaynecarr this has some overlap / implications for user-namespace mapping.
@rezroo a workaround to could be to simply make a copy of the ssh key in an Init container that way you’ll be able to control who owns the file right? Provided the init container runs as the same user that needs to read the ssh key later. It’s a little gross, but “should” work I think.
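A hypothetical sketch of that copy-in-an-init-container idea (secret name, UID, and paths are all placeholders). The default-readable secret is mounted into the init container, which runs as the app's own UID and copies the key into an `emptyDir`, so the copy ends up owned by that UID with mode 0400:

```yaml
initContainers:
- name: copy-ssh-key
  image: busybox
  securityContext:
    runAsUser: 1000            # the same non-root UID the app runs as
  # The copy is created by UID 1000, so it is owned by that user.
  command: ["sh", "-c", "cp /secret/id_rsa /home/app/.ssh/id_rsa && chmod 400 /home/app/.ssh/id_rsa"]
  volumeMounts:
  - name: ssh-secret
    mountPath: /secret
  - name: ssh-dir
    mountPath: /home/app/.ssh
volumes:
- name: ssh-secret
  secret:
    secretName: my-ssh-key     # placeholder secret name
- name: ssh-dir
  emptyDir: {}
```

The main container then mounts `ssh-dir` at `~/.ssh` instead of mounting the secret directly.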
@thockin another use-case: I'm trying to run an ELK statefulset. The pod has an Elasticsearch container running as non-root. I'm using a volumeClaimTemplate to hold the elasticsearch data. The container is unable to write to the volume though, as it is not running as root. Changing the fsGroup has had no effect (K8s v1.9). Also, the pod has multiple containers and I don't want to use the same fsGroup for all of them.
@pearj that's exactly the workaround that everybody uses... and as the name says, it's a workaround and should get addressed :) ... However, there is also a problem with this workaround: updated secrets eventually get updated in mounted volumes, which makes it possible to act on a file change in the running pod; you miss out on this update when you copy the secret in an init container.
Yet another use case for this: I'm working on using XFS quotas (obviously, if XFS is in use) for ephemeral storage. The current enforcement mechanism for ephemeral storage is to run du periodically; in addition to being slow and rather coarse granularity, it can be faked out completely (create a file, keep a file descriptor open on it, and delete it). I intend to use quotas for two purposes:
Hard cap usage across all containers of a pod.
Retrieve the per-volume storage consumption without having to run du (which can bog down).
I can't use one quota for both purposes. The hard cap applies to all emptydir volumes, the writable layer, and logs, but a quota used for that purpose can't be used to retrieve storage used for each volume. So what I'd like to do is use project quotas in a non-enforcing way to retrieve per-volume storage consumption and either user or group quotas to implement the hard cap. To do that requires that each pod have a unique UID or single unique GID (probably a unique UID would be best, since there may be reasons why a pod needs to be in multiple groups).
(As regards group and project IDs being documented as mutually exclusive with XFS, that is in fact no longer the case, as I've verified. I've asked some XFS people about it, and they confirmed that the documentation is out of date and needs to be fixed; this restriction was lifted about 5 years ago.)
@robbyt please tell how you managed to chown with `postStart`? My container runs as a non-root user, so the postStart hook still runs with non-root permissions and can't change ownership:
chown: /home/user/: Operation not permitted
, message: "chown: /home/user/: Permission denied\nchown: /home/user/: Operation not permitted
Same problem here: we have some Dockerized Tomcat instances that run our web application, and we use JMX to monitor them. We want to serve the jmxremote user and jmxremote password files as secrets, but Tomcat, which obviously doesn't run as root, requires that the JMX files are readable only by the user that runs Tomcat.
the same problem!
For now, the hack that works is setting the user to root at the end of your Dockerfile and setting a custom entrypoint script. Chown the volume in your custom entrypoint script, then use gosu to run the default entrypoint script as the default user. The thing I hate about this is that I have to do it for every single image that uses a volume in Kubernetes. Totally lame. Please provide a UID/GID option on the volume mount or volume claim config.
That hack doesn’t work if you want to run a secure Kubernetes cluster with PodSecurityPolicies applied to enforce pods to run as a non-root user.
True. All hacks have their downsides. It's either that or logging in as root after the volume is created and chowning the directory manually. Not sure really which is worse :-D. Can't believe this is even a thing.
@thockin from what I gather following this issue now since nearly 4 years, the solution that everybody wants is to be able to set a uid and gid for a volume - in particular secret volumes (but not only those). On Jul 6 you posted a starting point for a possible solution to this. If this is a supported path from the maintainers, I'd finally start and try to solve this problem.
@mheese i'd say go for it.
@mheese I'll collab on a PR if you want?
@mheese It seems `gid` is taken from the SecurityContext, so I guess, for fast relief, a `uid` implementation would be enough. Also because `gid` had more second guesses in the discussion.
same issue with psql here
@michalpiasecki1 Look how I solved it with a `postStart` hook: https://github.com/xoe-labs/odoo-operator/blob/1be88b67d4ded5c4a0aea6e26b711241f0d09f89/pkg/controller/odoocluster/odoocluster_controller.go#L579-L586
Running into the same issue. Is there any recommended solution for this?
@blaggacao : thanks for the hint, however i found another workaround
@debianmaster : i would recommend securityContext and fsGroup as described in https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
I ended up with certificates owned by root and group postgres, with permissions 440.
@michalpiasecki1 can you give more info about how you resolved this for postgres?
I have the server.crt and server.key files stored in a k8s secret `pg-certs-secret` and I want to mount them into my container running `postgres:9.6`. I have this set up with:
containers:
- name: pg
  image: postgres:9.6
  ...
  volumeMounts:
  - name: pg-certs
    mountPath: "/etc/certs/"
    readOnly: true
  args: ["-c", "ssl=on", "-c", "ssl_cert_file=/etc/certs/pg_server.crt", "-c", "ssl_key_file=/etc/certs/pg_server.key"]
volumes:
- name: pg-certs
  secret:
    secretName: pg-certs-secret
    defaultMode: 384
But deploying this, the container dies with the error FATAL: could not load server certificate file "/etc/certs/pg_server.crt": Permission denied
I assume this is because the certs are mounted owned by `root`, when they need to be owned by `postgres`. It's not clear from the docs what I should do to change ownership short of creating a custom Docker image, which I'd rather not do. The securityContext and fsGroup you suggested seemed like they could work, but I would appreciate it if you would share more info about how exactly you achieved this.
Also worth noting: I added `defaultMode: 384` to ensure the files were added with `0600` file permissions. Before I added that, the container died with the error:
FATAL: private key file "/etc/certs/pg_server.key" has group or world access
DETAIL: File must have permissions u=rw (0600) or less if owned by the database user, or permissions u=rw,g=r (0640) or less if owned by root.
For reference, I just figured this out and it worked when I added
securityContext:
  fsGroup: 999
to the spec.
I have the same problem: #72085
Can anyone help me?
Is there any chance of fixing this issue? Are the Kubernetes folks working on a solution?
This issue has not been a problem for us for a very long time. We set the "fsGroup" in the pod's security context to match the group ID of the user that runs the main Docker entry point, and any volumes in the pod become accessible to that container's main process:
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
@charles-crain: Your suggestion works really well for most cases.
Here's another case that's not covered:
If the container starts as root but uses a tool such as `gosu` to become another user (for some processes), then locking the container into only one group with fsGroup will prevent cases such as "I want my non-root user to have access to SSH keys mounted into its `~/.ssh` directory, while having my root user have access to other mounts too".
One example of this: a DinD container where `dockerd` must start as root, but subsequent containers are run by a non-root user.
Hi there @charles-crain, I am facing a very interesting issue that matches the topic of this thread. It seems fsGroup does not work in all cases.
Here is an example deployment: a test nginx deployment where I am trying to mount NFS and additionally an emptyDir, just to compare.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
  labels:
    app: nginx-test
spec:
  selector:
    matchLabels:
      app: nginx-test
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      securityContext:
        fsGroup: 2000
      volumes:
      - name: nfs-volume
        nfs:
          server: # nfs with no_root_squash
          path: /nfs
      - name: test-fs-group
        emptyDir: {}
      containers:
      - image: nginx
        name: nginx-test
        imagePullPolicy: Always
        volumeMounts:
        - name: nfs-volume
          mountPath: /var/local/test5
        - name: test-fs-group
          mountPath: /var/local/test6
When I exec bash into the pod's nginx container, the GID is applied only to the emptyDir, and not to the dir mounted via NFS. NFS is configured with no_root_squash for testing purposes, and the process in my container runs as a non-root user, so that is the problem. It can be solved via chown, but I am trying to achieve this with a native solution.
I also face the exact same issue described above.
This issue has been open for about 5 years. No one from kubernetes seems interested in it, and that may be for a reason, valid or not. There have been a number of valid solutions to this simple problem, but none of them were implemented.
Not sure why this issue doesn't just get closed.
@mheese As you commented here #2630 (comment) about setting a uid and gid for a volume, are you still working on it? Thanks!
I also encountered this issue. Is there any plan to create a viable solution for it? Local persistent volumes can't replace all use-cases of hostPath volume
Same here.
@jingxu97 I haven't given it a try yet because I don't really feel that there is a consensus that this is what should be done.
Let me come up with a detailed proposal and post it here when ready.
Ok
For reference, I just figured this out and it worked when I added
securityContext:
  fsGroup: 999
to the spec.
For `postgres:11.1-alpine` use this:
securityContext:
  fsGroup: 70
I can only hope that the Kubernetes members prioritize this issue. IMO it's a real blocker, especially from a security point of view, and is becoming a vulnerability risk :'(
long-term-issue (note to self)
I'm hitting this in context of cert-manager managed secrets. I was using an initContainer to copy the certs to the right place and update the permissions. Cert-manager needs to update the secrets in place so that trick won't work. I'll explore the fsGroup workaround.
@incubus8 I am trying to work on this issue. Could you please describe your use case and what kind of behavior you would expect? Thanks!
@jingxu97 I can offer two examples. The Prometheus docker image starts the prometheus service as user `nobody` (uid 65534), and the Grafana docker image starts grafana as uid=472 (https://grafana.com/docs/installation/docker/).
Both of these fail by default to create directories when they first start up because of these permissions.
@ford-prefect if you set fsGroup in PodSecurityContext, and runAsUser, wouldn't those services have the permission to write?
No, because the permissions are set for the pod, not for the volume, which was created independently. It would be great if PodSecurityContext could in fact alter the permissions of the volumes, or at least fail to mount and throw an error.
@ekhaydarov, in the current SetVolumeOwnership function, if fsGroup is provided, the volume will have rw-rw---- permissions, so the group has rw permission.
@jingxu97 this is not always a solution. For example: we use secrets for jmxremote.password and jmxremote.user, which are needed for JMX monitoring of Java applications. Java requires that those files belong to the user that runs the application and have permissions 400, so for now there is no way to use secrets this way in rancher 2.x.
I was perplexed to see that `fsGroup` was an option and `fsUser` was not.
Also, the permissions/mode portion of this is confusing. We should make it clearer how volumes like EmptyDir get their default mode or allow the user to set it explicitly, as this is a pretty normal unix admin task.
If root is the only user that can ever own your volume (aside from using an initContainer to chmod it at runtime), the API encourages usage of root for an application's user which is a weak security practice.
@jingxu97 What do you think?
@stealthybox, thank you for the feedback. I am currently working on a proposal for API on volume ownership and permission and will share with the community soon. Feedback/comments are welcome then.
Hi.
There are some news about this issue?
Why does `pv.beta.kubernetes.io/gid` not work for the local host path provisioner?
Hey,
I am encountering this as well, I'd appreciate some news :).
this has been my workaround so far:
initContainers:
- name: init
  image: busybox:latest
  command: ['/bin/chown', 'nobody:nogroup', '/<my dir>']
  volumeMounts:
  - name: data
    mountPath: /<my dir>
The workarounds with `chown`ing do not work for read-only volumes, such as secret mounts, unfortunately.
I would need this as well (pretty urgently), because we have software not starting due to permissions not being allowed to differ from `0600`. If we could mount the volume under a specific UID, my (and others') problem would be solved.
You can run a Job as part of your deployment to update the volume permissions, using a readiness check for write permission, as a workaround. Or you can use fsGroup to specify the group for the volume and add the application owner to the group that owns the volume. Option 2 seems cleaner to me. I used to use option 1, but now I use option 2.
Note that if Kubernetes did support an `fsUser` option, then you'd trip over #57923, where all files within the mounted secret would be given `0440` permission (or `0660` for writeable mounts), ignoring any other configuration.
@woodcockjosh `fsGroup` doesn't cover the use case of security-sensitive software such as Vault trying to run as `vault:vault` and loading a private key file requiring permissions equal to or less than `0600`. @wjam `fsUser` would be ideal if we could get `0400` permissions set as well (for things like private key files).
We hit this trying to configure Vault to authenticate to a PostgreSQL DB with certificates. The underlying Go library hard fails if the permission bits differ (https://github.com/lib/pq/blob/90697d60dd844d5ef6ff15135d0203f65d2f53b8/ssl_permissions.go#L17).
@jingxu97: Is there any news on this? We still have the PV ownership problem in our clusters with strict security policies.
This article looks like it works. I didn't test it, but I'll test it on Monday; if anyone can do it before then, please let us know.
The details are here:
Data persistence is configured using persistent volumes. Due to the fact that Kubernetes mounts these volumes with the root user as the owner, the non-root containers don't have permissions to write to the persistent directory.
The following are some things we can do to solve these permission issues:
Use an init-container to change the permissions of the volume before mounting it in the non-root container. Example:
spec:
  initContainers:
  - name: volume-permissions
    image: busybox
    command: ['sh', '-c', 'chmod -R g+rwX /bitnami']
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
  containers:
  - image: bitnami/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
Use Pod Security Policies to specify the user ID and the FSGroup that will own the pod volumes. (Recommended)
spec:
  securityContext:
    runAsUser: 1001
    fsGroup: 1001
  containers:
  - image: bitnami/nginx:latest
    name: nginx
    volumeMounts:
    - mountPath: /bitnami
      name: nginx-data
Hi,
I've seen, all around the Internet, the workaround with that weak initContainer running as root.
I've also been struggling with fsGroup, which applies only at the scope of the pod, not per container in a pod, which is [also] a shame.
I just built a custom image (nonroot-initContainer) based on alpine, with sudo installed and a custom /etc/sudoers giving my non-root user full power to apply the chmod actions. Unfortunately, I'm hitting another wall:
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
Since I'm not willing to create a less secure PodSecurityPolicy for that deployment, any news from that issue would be very welcome for people having to be compliant with security best practices.
Thanks in advance !
Is there an `fsGroup` for kubernetes deployment files?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
👍
Is this still an issue? I've done some tests (Minikube 1.14, 1.15, 1.19 and EKS 1.14) and the permissions on the `emptyDir` volume are 777 as intended:
apiVersion: v1
kind: Pod
metadata:
  name: debug
  namespace: default
spec:
  containers:
  - image: python:2.7.18-slim
    command: [ "tail", "-f" ]
    imagePullPolicy: Always
    name: debug
    volumeMounts:
    - mountPath: /var/log/test-dir
      name: sample-volume
  volumes:
  - emptyDir:
      sizeLimit: 10M
    name: sample-volume