Re: [kubernetes/kubernetes] Setting defaultMode is not Fully Respected When Pod.spec.securityContext.runAsUser is Set (#57923)


Jordan Liggitt

unread,
Jun 14, 2018, 4:03:12 PM6/14/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Are other people doing essentially the suggestion above and just skirting around the issue by moving files around in their entrypoint?

I'll have to defer to @kubernetes/sig-storage-misc for that question.



fejta-bot

unread,
Sep 12, 2018, 4:13:06 PM9/12/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

unread,
Oct 12, 2018, 4:37:25 PM10/12/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

unread,
Nov 11, 2018, 4:23:29 PM11/11/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot

unread,
Nov 11, 2018, 4:23:44 PM11/11/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Closed #57923.

k8s-ci-robot

unread,
Nov 11, 2018, 4:23:46 PM11/11/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Stephen D. Spencer

unread,
Nov 15, 2018, 5:30:55 PM11/15/18
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/remove-lifecycle rotten

Andrew Hemming

unread,
Mar 7, 2019, 5:58:49 AM3/7/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I'm also running into an issue with ownership/permissions on a Kubernetes secret.
I have a container (run by a Kubernetes CronJob, but that's beside the point) that runs as uid=999 (pod security: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups).
I want to use a secret containing an SSL client-certificate private key. Using client certificates requires the key to have permissions 0600, which means ownership must also be set to 999 (since uid=999 can only read a file with mode 0600 if it is the owner).
I can't seem to get it to work. For now I have worked around the issue by reading the file and writing its contents to a new file on tmpfs with the proper ownership/permissions.
But copying the contents of secrets to other locations feels contradictory to what we want to achieve (higher security).

Is this the same issue / related? Or should I open a new one?

Hi @sebasmannem, I have run into the exact same issue, also only in a cron job. Same code works for a regular pod. Did you manage to find a more elegant solution?

Jing Xu

unread,
Apr 17, 2019, 7:48:42 PM4/17/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@sebasmannem @jrnt30, sorry for the delay. Could you describe the problem in more detail? When you use defaultMode and/or fsGroup in the Pod spec, what are the resulting file permissions? Thanks!

Benedict Hartley

unread,
Apr 26, 2019, 9:07:19 AM4/26/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I have hit this issue too. The following config excerpt results in files with 0440 instead of 0400 as expected:

kind: Deployment
...
spec:
  template:
    spec:
      securityContext:
        runAsUser: 472
        fsGroup: 472
...
      containers:
        - name: fip
          securityContext:
            runAsUser: 0
          volumeMounts:
          - name: foo
            mountPath: /etc/bip
...
      volumes:
        - name: foo
          secret:
            secretName: bar
            defaultMode: 256 # decimal 256 == octal 0400 (YAML/JSON integers are decimal)

Jan Šafránek

unread,
May 3, 2019, 4:33:54 AM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

IMO, fsGroup and defaultMode conflict in Secrets (and other AtomicWriter volumes). AtomicWriter.Write prepares the volume with the right defaultMode first, i.e. a file is chmod'ed to 0400, and only after that is SetVolumeOwnership called to apply fsGroup:

https://github.com/kubernetes/kubernetes/blob/657a1a1a3457bc599005b1ca30c338c03e9d4aa0/pkg/volume/secret/secret.go#L251-L261

SetVolumeOwnership then chmods the file to 660 (or 440 for read-only files), ignoring any defaultMode:

https://github.com/kubernetes/kubernetes/blob/7a9f21bbb828a0f58e6c51234c1ba0e16efb6727/pkg/volume/volume_linux.go#L76-L86
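
For illustration, here is a minimal, self-contained Go sketch of that ordering (a simplified paraphrase of the linked SetVolumeOwnership logic, not the verbatim source; file and function names are made up):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// applyFSGroup mimics what the linked SetVolumeOwnership does: chown every
// file to the fsGroup and widen its mode with a fixed group mask, ignoring
// whatever mode AtomicWriter already set.
func applyFSGroup(dir string, fsGroup int, readOnly bool) error {
	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		mask := os.FileMode(0660) // read-write volumes
		if readOnly {
			mask = 0440 // read-only volumes such as secrets and configmaps
		}
		if info.IsDir() {
			mask |= 0110 | os.ModeSetgid
		}
		if err := os.Lchown(path, -1, fsGroup); err != nil {
			return err
		}
		// OR-ing the mask into the existing mode is what turns 0400 into 0440.
		return os.Chmod(path, info.Mode()|mask)
	})
}

func main() {
	dir, _ := os.MkdirTemp("", "secret-volume")
	defer os.RemoveAll(dir)
	key := filepath.Join(dir, "tls.key")
	_ = os.WriteFile(key, []byte("dummy"), 0400) // step 1: defaultMode applied
	_ = applyFSGroup(dir, os.Getgid(), true)     // step 2: fsGroup pass
	info, _ := os.Stat(key)
	fmt.Printf("mode after fsGroup pass: %o\n", info.Mode().Perm()) // prints 440
}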

Jan Šafránek

unread,
May 3, 2019, 4:34:02 AM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/reopen

Kubernetes Prow Robot

unread,
May 3, 2019, 4:34:12 AM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@jsafrane: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

unread,
May 3, 2019, 4:34:14 AM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Reopened #57923.

Michelle Au

unread,
May 3, 2019, 11:45:59 AM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@jsafrane what do you think the solution should be? Fail if both fsGroup and defaultMode are set? Or have defaultMode override fsGroup?

Or do we need to come up with a new API to specify volume ownership that is:

  • Consistent across all volume types
  • Consistent across inline volumes and PVCs
  • Handles all uid, gid, and access settings
  • Supports both Linux and Windows semantics

Hemant Kumar

unread,
May 3, 2019, 12:16:32 PM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

FWIW, the API documentation of DefaultMode explicitly says only that files are created with the given DefaultMode, and that the specified DefaultMode may conflict with fsGroup:

	// Mode bits to use on created files by default. Must be a value between
	// 0 and 0777.
	// Directories within the path are not affected by this setting.
	// This might be in conflict with other options that affect the file
	// mode, like fsGroup, and the result can be other mode bits set.

I do not think we should change the meaning of DefaultMode just yet.

Jing Xu

unread,
May 3, 2019, 5:44:41 PM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Discussed with @mesaugat: one proposal to fix this is that, for volume types such as secrets and configMaps, we should set up the ownership together with the mode while writing the files (see https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/atomic_writer.go#L396), so that there is no need to call SetVolumeOwnership after the data is written at all. This avoids SetVolumeOwnership overwriting the defaultMode that was set earlier.
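
A rough sketch of what that could look like, assuming a hypothetical writePayloadFile helper and a simplified FileProjection struct (these names are illustrative, not the actual atomic_writer.go API):

package main

import "os"

// FileProjection mirrors the idea of a per-file payload: data, the requested
// mode, and (new in this proposal) the group to apply while writing.
type FileProjection struct {
	Data    []byte
	Mode    os.FileMode
	FSGroup *int // nil means leave group ownership alone
}

// writePayloadFile writes the file and applies ownership in the same step, so
// no later SetVolumeOwnership pass needs to touch (and widen) the mode.
func writePayloadFile(path string, p FileProjection) error {
	if err := os.WriteFile(path, p.Data, p.Mode); err != nil {
		return err
	}
	if p.FSGroup != nil {
		if err := os.Lchown(path, -1, *p.FSGroup); err != nil {
			return err
		}
	}
	// Re-assert the requested mode after the chown; nothing ORs group bits back in.
	return os.Chmod(path, p.Mode)
}

func main() {
	gid := os.Getgid()
	_ = writePayloadFile("/tmp/tls.key", FileProjection{Data: []byte("dummy"), Mode: 0400, FSGroup: &gid})
}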

Jordan Liggitt

unread,
May 3, 2019, 5:55:07 PM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I explored something similar in https://github.com/kubernetes/kubernetes/pull/57935/files and couldn't get it to work with containers running as different uids with an fsGroup

Jing Xu

unread,
May 3, 2019, 6:22:21 PM5/3/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@liggitt thanks for the information. I think we might still honor the fsGroup and defaultMode settings with the following restrictions:

  1. If a user has different containers that run as different uids, they should not set defaultMode to 0400; maybe this could be validated and disallowed?
  2. Otherwise, the user could set fsGroup and defaultMode freely, although fsGroup does not seem to bring much in this case.

Jordan Liggitt

unread,
May 4, 2019, 9:15:30 AM5/4/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

If a user has different containers that run as different uids, they should not set defaultMode to 0400. This is considered an invalid setting; could it be validated and disallowed?

We can't currently tighten validation in a way that would invalidate data already allowed to be persisted in etcd. xref #64841

Also, we can't detect in the API validation what the natural uid is for containers that didn't specify a uid in the pod spec.

If we change how this is handled, I think it could only be done at runtime.

Jan Šafránek

unread,
May 7, 2019, 9:22:23 AM5/7/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

There should be a way to provide e.g. an ssh key as 0600.

IMO, if a user specifies Mode or DefaultMode, we should honor it and not overwrite it when applying fsGroup. I know the Mode description mentions "This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.", and I know changing this may break existing apps, but why would a user write 0600 and expect 0640 instead? To me it looks like a bug in the Pod object.

We can change AtomicWriter to do chown and chmod using fsGroup if Mode/DefaultMode is not set.

Jing Xu

unread,
May 7, 2019, 12:55:50 PM5/7/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Just noticed another issue #34982, maybe also relevant

Will May

unread,
Nov 26, 2019, 8:14:10 AM11/26/19
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

@jsafrane I've been looking through the code and it appears that all users of AtomicWriter.Write - secrets, configmaps, downward APIs, projections and cephfs - either specify a file mode directly (as cephfs does) or, when the user specifies no mode, fall back to a per-source default (e.g. staging/src/k8s.io/api/core/v1/types/SecretVolumeSourceDefaultMode), which is 0644 everywhere. Also, the behaviour of pkg/volume/SetVolumeOwnership for configmaps, downward APIs and projections is to enforce that all files have at least 0440 permissions.

If we accept that when the user asks for something they should get it, then the behaviour should be as follows (a rough sketch of this resolution follows the list):

  1. If the user has specified a Mode in the KeyToPath struct, use that.
  2. If the user has specified a DefaultMode in the *VolumeSource structs, use that.
  3. If the user hasn't specified a DefaultMode, the value is automatically set to the default for that source, which is currently 0644 everywhere - and 0644 is already wider than what volume.SetVolumeOwnership would have enforced.
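
A minimal Go sketch of that precedence (resolveMode is a hypothetical helper, not an existing kubernetes function; the constant stands in for the per-source API defaults):

package main

import "fmt"

const secretVolumeSourceDefaultMode int32 = 0644 // current API default for secrets

// resolveMode picks the mode a projected file should keep: per-key Mode wins,
// then the volume's DefaultMode, then the API default.
func resolveMode(keyMode, defaultMode *int32) int32 {
	switch {
	case keyMode != nil: // items[].mode in the volume source
		return *keyMode
	case defaultMode != nil: // e.g. secret.defaultMode
		return *defaultMode
	default:
		return secretVolumeSourceDefaultMode
	}
}

func main() {
	perKey := int32(0600)
	fmt.Printf("%o\n", resolveMode(&perKey, nil)) // 600: per-key mode honored
	fmt.Printf("%o\n", resolveMode(nil, nil))     // 644: API default
}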



Zrss

unread,
Feb 24, 2020, 7:13:55 AM2/24/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/cc

Oliver Liu

unread,
Mar 27, 2020, 2:50:26 PM3/27/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

As Istio is moving to SDS by default, which uses the projected volume with the "defaultMode" set, this issue breaks our customers. It would be great to get this resolved.

Jianfei Hu

unread,
Mar 30, 2020, 9:27:15 PM3/30/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/cc @incfly

will

unread,
Jun 7, 2020, 11:48:06 PM6/7/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Is somebody tracking this issue? How can this bug be resolved?

devlinyvonne

unread,
Jul 2, 2020, 8:26:41 AM7/2/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Is there a plan to resolve this ticket? Our organisation is currently using Istio in our Kubernetes cluster, and we have cronjobs that are broken because the permissions on our id_rsa file have been changed and we are no longer able to clone a git repo. Since the cronjobs can't run, code changes are blocked from being promoted to our production clusters, which is in turn impacting our customers. I'm sure our company isn't the only consumer of Istio/Kubernetes that is impacted by this problem.

John Howard

unread,
Aug 28, 2020, 11:31:42 AM8/28/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Any updates on this? This continues to break users that are attempting to use features like ProjectedVolumeMounts with sidecar containers: istio/istio#26882

Phillip Boushy

unread,
Sep 22, 2020, 3:19:19 PM9/22/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I've now run into this issue in two different scenarios, which is frustrating:

  1. Mounting jmx.password into a Tomcat Docker image to provide the usernames/passwords for JMX remote functionality.
  2. Mounting certificate files into jamfllc/jamf-pki-proxy, which requires the key/pem to be 0600.

Is there a timeline on this issue being fixed? It appears it has existed without any resolution since January 2018.

karthick-kk

unread,
Oct 21, 2020, 8:24:19 AM10/21/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Bump. I need to mount SSH key pairs onto a pod running as an elevated-privilege user, to use along with a GlusterFS mount.

Robert Blaine

unread,
Oct 21, 2020, 9:28:18 AM10/21/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

The workaround I used for this was to run an initContainer which mounts the SSH private key in a temp directory, then copies it to an emptyDir and chmods it appropriately.
The container that needs the private key then mounts the same emptyDir volume:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-example-deployment
spec:
  selector:
    matchLabels:
      app: my-example
  template:
    metadata:
      labels:
        app: my-example
    spec:
      initContainers:
        # Copy the key out of the read-only secret mount and fix its permissions.
        - name: prep-id-rsa
          image: busybox:1.30
          command:
            - sh
            - -c
            - |-
              cp /tmp/id_rsa /well/known/dir/id_rsa
              chmod 0600 /well/known/dir/id_rsa
          volumeMounts:
            - name: id-rsa
              mountPath: /tmp/id_rsa
              subPath: id_rsa
            - name: empty-dir
              mountPath: /well/known/dir
      containers:
        # The main container sees the re-permissioned copy via the shared emptyDir.
        - name: my-container
          image: alpine:3
          command:
            - sh
            - -c
            - |-
              ls -la /tmp/id_rsa
              ls -la /root/.ssh
          volumeMounts:
            - name: id-rsa
              mountPath: /tmp/id_rsa
              subPath: id_rsa
            - name: empty-dir
              mountPath: /root/.ssh
          resources:
            requests:
              cpu: 1m
              memory: 1Mi
            limits:
              cpu: 1m
              memory: 1Mi
      volumes:
        - name: id-rsa
          secret:
            secretName: my-ssh-private-key
        - name: empty-dir
          emptyDir:
            medium: Memory

Michelle Au

unread,
Oct 26, 2020, 9:27:46 PM10/26/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I think the next step in investigating a solution for this issue is to see if we can generalize the work done for projected service account tokens to other volume types.

Jiawei Wang

unread,
Oct 27, 2020, 7:27:51 PM10/27/20
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/cc

Abhishek Singh Baghel

unread,
Mar 22, 2021, 3:17:40 PM3/22/21
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

Jiawei Wang

unread,
Apr 20, 2021, 2:55:07 PM4/20/21
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

For anyone who is following this thread: we are planning to resolve this issue in 1.22 release. Here is the enhancement: kubernetes/enhancements#2606. Feel free to take a look if you are interested. Thanks!

andrewchen5678

unread,
Dec 11, 2021, 3:56:17 AM12/11/21
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

I saw that the pull request above was closed rather than merged; does that mean this is not resolved?



Xing Yang

unread,
Jan 11, 2023, 1:45:37 PM1/11/23
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/triage accepted



Rodrigo Campos

unread,
Jan 12, 2023, 11:08:16 AM1/12/23
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

The problem is not that defaultMode is not respected when runAsUser is set. It is not respected when fsGroup is set, which on one hand is documented; on the other hand, it would be nice for the most specific setting to take precedence.

The issue here is that setting fsGroup means all volumes should have this GID and read permissions for the group, while setting defaultMode means that mode should be used. When both are set, it is not obvious which should take precedence.



Kubernetes Triage Robot

unread,
Jan 20, 2024, 1:13:14 AM1/20/24
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted



Michelle Au

unread,
Mar 8, 2024, 6:53:39 PM3/8/24
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

/triage accepted



lin-crl

unread,
Aug 27, 2024, 10:24:17 PM8/27/24
to kubernetes/kubernetes, k8s-mirror-storage-misc, Team mention

The copy-key workaround is fine for the short term. We have a situation where a TLS cert and private key are mounted from Secrets, copied to another directory, and the keys are chmod'ed to 0400. However, it becomes an issue when the cert is rotated: since the Secrets are not consumed directly from the mount, rotation requires additional work to copy them over again. Is anyone else aware of this? I would like to see the root cause addressed.


