Is this a BUG REPORT or FEATURE REQUEST?:
@kubernetes/sig-storage-bugs
What happened:
fsgroup is only intended for RWO volumes because it recursively changes the permissions of all the directories in the volume. This can cause problems for NFS-type volumes because multiple pods could access the same volume and have the permissions changed out from underneath them.
What you expected to happen:
Determine whether or not the plugin supports fsgroup, either by inferring it or via a new CSI capability.
Yes, one possible way is to infer that fsgroup is supported for all RWO volumes, which may not be true; an NFS volume, for example, technically supports RWO but not fsgroup.
So the other option is to make it a new CSI capability. I'm not exactly sure how best to describe it though; maybe something about volume ownership management.
Will it work if, when fsType is not set, we don't change the ownership? I would think that for shared file system types (nfs, glusterfs), fsType won't be set. For block storage too - I think we shouldn't have to do this.
@gnufied what do you mean by "I think we shouldn't have to do this"? Shouldn't have to solve this issue?
I meant "For raw block storage too - I think we shouldn't have to do this", which is obvious. I just meant that the entire mechanism of changing ownership of files should apply only to volume types with block storage filesystems. I am just wondering if we can get away with detecting fsType and not have to introduce a new capability in CSI.
@msau42 @gnufied agreed, PVC.spec.AccessModes do not map well to the current csi.Capabilities enums, which makes it hard to determine RWO when mounting. Perhaps, instead of a new capability, we revisit the existing capabilities to see if there is any way RWO can be reliably inferred from the returned capabilities. Currently, this is what the code is doing, which we knew needed to be revisited:
```go
func asCSIAccessMode(am api.PersistentVolumeAccessMode) csipb.VolumeCapability_AccessMode_Mode {
	switch am {
	case api.ReadWriteOnce:
		return csipb.VolumeCapability_AccessMode_SINGLE_NODE_WRITER
	case api.ReadOnlyMany:
		return csipb.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY
	case api.ReadWriteMany:
		return csipb.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER
	}
	return csipb.VolumeCapability_AccessMode_UNKNOWN
}
```
Users may not fully understand what fsgroup is and set it anyway, or a default PSP with some fsgroup may be defined, in which case it will automatically be set on all pods. The first case we can attribute to user error, but the second case may not have a good workaround and could cause permission issues on RWX volumes. So I think we do need to solve this.
@msau42 Yes, agreed this has to be solved. As you pointed out, user error or an improperly configured PSP can set an fsgroup value that does not match the correct attribute, causing permission issues.
Here is what I think can be done without new CSI capabilities (a rough sketch of this rule follows below):

- If PV.AccessModes == nil, then do not apply fsgroup.
- If contains(PV.AccessModes, RWM) || contains(PV.AccessModes, ROM), then:
  * If mount reference count <= 1, apply fsgroup.
  * If mount reference count > 1, do not apply fsgroup, to avoid the stated issue.
- If contains(PV.AccessModes, RWO), then:
  * If mount reference count == 1, apply fsgroup.

I think that may help deduce when to apply fsgroup.
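For concreteness, a minimal Go sketch of the rule proposed above. It is only illustrative: the helper name shouldApplyFSGroup and the mountRefCount parameter are hypothetical, not existing kubelet APIs, and the question of whether a reliable mount count exists is discussed below.

```go
// Hypothetical sketch only: shouldApplyFSGroup and mountRefCount are
// illustrative names, not existing kubelet APIs.
package fsgroup

import v1 "k8s.io/api/core/v1"

func contains(modes []v1.PersistentVolumeAccessMode, m v1.PersistentVolumeAccessMode) bool {
	for _, am := range modes {
		if am == m {
			return true
		}
	}
	return false
}

// shouldApplyFSGroup decides whether SetVolumeOwnership (the recursive chown)
// should run, given the PV access modes and how many pods currently
// reference the mount on this node.
func shouldApplyFSGroup(modes []v1.PersistentVolumeAccessMode, mountRefCount int) bool {
	if len(modes) == 0 {
		// No access modes declared: skip the recursive chown.
		return false
	}
	if contains(modes, v1.ReadWriteMany) || contains(modes, v1.ReadOnlyMany) {
		// Shared volume: only considered safe while a single pod references the mount.
		return mountRefCount <= 1
	}
	if contains(modes, v1.ReadWriteOnce) {
		return mountRefCount == 1
	}
	return false
}
```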
- If contains(PV.AccessModes, RWM) || contains(PV.AccessModes, ROM), then:
  * If mount reference count <= 1, apply fsgroup
  * If mount reference count > 1, do not apply fsgroup, to avoid the stated issue

A RWM volume (say NFS) can be mounted on multiple nodes. Do you have a reliable count of mounts across all nodes?
In addition, that would be a change in the semantics of how we support RWX volumes today. If we don't have a new CSI capability, then I think the best course of action is to only apply fsgroup for RWO volumes. Some questions remain:
There is not a reliable way to enumerate node mounts (thinking about it, for CSI external drivers, the answer is no). So there is no reliable way to have k8s autocorrect a bad fsgroup/PV.AccessMode combo; we would have to rely on the config provided by the user/admin.
I have come to a similar conclusion as @msau42 -- mainly, apply fsgroup only to volumes with RWO access modes. For the other k8s-specified access modes, Read{Only|Write}Many, it cannot be reliably applied.
Even if we were to introduce additional plugin capabilities that the CO can query (from the driver) to find out what modes can be applied, there would still be the possibility of user/admin misconfiguration.
- Do all RWO volumes support fsgroup?

Probably not. Is this where you think an additional capability would help, @msau42?
- RWX is a superset of RWO. So if a user requests a RWO volume, it could still be satisfied by a RWX volume type. Can we detect this and still not apply the fsgroup even though the volume capability will say RWO?
That is a good point. I can't think of a way to detect/guess RWX when RWO is specified, at least not without the driver providing that info.
Can we adopt the "only RWO gets fsgroup" rule until we decide what capabilities would look like?
I will just try to summarize the conversation from Slack.

We can safely assume that fsgroup only applies to block storage types with a file system on top of them. So all volumes that need to have SetVolumeOwnership called MUST have a file system (fsType) on them. Is that assumption correct?

If it is, can we not use the presence of the fsType field, or query the local volume for its fs type (like how we query selinux relabelling capability), and call SetVolumeOwnership only when the volume is RWM and has a valid block storage file system type? Will this not solve our use case?
We'll need to make it clear that fsType is only for block-based storage systems, and that the field cannot be reused for some multi-protocol file server (i.e., nfsv3, nfsv4, smb, etc.). For that case, the plugin will need to expose its own new StorageClass parameter + volume attribute to specify the protocol.
Can we not use blkid to get the fsType (#59050) and use that information when calling SetVolumeOwnership? It may be problematic to rely entirely on the fsType present inside the CSI source.
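As a rough illustration of that idea (not the approach ultimately taken; detectFSType is a hypothetical helper, and the integration point in kubelet is not shown), probing the filesystem type with blkid could look something like this:

```go
// Rough sketch of probing the filesystem type with blkid rather than trusting
// the fsType recorded in the CSI volume source.
package fsgroup

import (
	"os/exec"
	"strings"
)

// detectFSType returns the filesystem type blkid reports for the given device
// (e.g. "ext4", "xfs"). blkid exits non-zero if it cannot identify the device,
// which surfaces here as an error.
func detectFSType(devicePath string) (string, error) {
	out, err := exec.Command("blkid", "-o", "value", "-s", "TYPE", devicePath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
```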
@msau42 @gnufied @saad-ali Instead of a new capability or trying to deduce how best to apply fsgroup, why not pass the fsgroup to the driver and let it decide how to apply it (either as an attribute for the mount or a top-level request param)? Right now, all mount logic has been delegated to the external CSI driver except for applying permissions.

Thoughts?
@vladimirvivien that is true, but we can still inspect the file system on the mounted path because by the time SetVolumeOwnership gets called the volume is already mounted. This should be no different from how SELinux support is determined. @jsafrane what do you think?
@saad-ali is this approved for the 1.12 milestone? Thank you for working on this, @vladimirvivien!
@guineveresaenger still discussing how best to approach this. It would be for 1.12.
Approved for 1.12
@msau42 @gnufied
Based on the previous posts, this is a summary (a rough sketch of the rule follows below):

- If fsType == "", fsGroup is not applied (because it could be an indication of a non-block fs, or an error).
- If fsType is provided and pv.AccessMode == RWO, then apply fsGroup.
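A minimal Go sketch of the summarized rule, assuming the stricter reading that RWO must be the only declared access mode; this is illustrative only and not the actual change in #67280 (the helper name supportsFSGroup is hypothetical).

```go
// Minimal sketch of the summarized rule: skip the ownership change when no
// fsType is set, and only apply it for single-writer (RWO) volumes.
package fsgroup

import v1 "k8s.io/api/core/v1"

// supportsFSGroup reports whether the recursive ownership change should be
// applied for this volume.
func supportsFSGroup(fsType string, fsGroup *int64, modes []v1.PersistentVolumeAccessMode) bool {
	if fsGroup == nil || fsType == "" {
		// No fsGroup requested, or no filesystem declared (possibly a shared
		// or non-block volume): do not chown.
		return false
	}
	// Apply only when the volume is declared single-node read-write.
	return len(modes) == 1 && modes[0] == v1.ReadWriteOnce
}
```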
@gnufied @vladimirvivien @saad-ali - any updates on #67280?
Waiting for a review and approval. @saad-ali
@vladimirvivien you have an outstanding comment on the PR. Can you please answer it?
How does this approach relate to inline CSI volumes? There's no PersistentVolume involved, and hence no way to specify an access mode. I can specify a filesystem type, but my driver ignores it (always using tmpfs).
I considered having my CSI driver read the mounting pod's spec to see if it has a security context with "fsGroup" set, since that detail is not supplied to the driver, as @vladimirvivien noted.
My goal is to have the volume mounted with the files on it readable by the container's user, but there's no way for my driver to know who that user will be. Am I supposed to just use 0444 for the permission bits, to allow everyone to read the files?
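To make the workaround described in the previous comment concrete, here is a hedged sketch of a driver looking up the mounting pod's fsGroup itself. It assumes the CSIDriver object enables pod info on mount so that NodePublishVolume receives the pod's name and namespace in volume_context under the standard csi.storage.k8s.io/pod.* keys; lookupPodFSGroup is a hypothetical helper, not part of any CSI library.

```go
// Hedged sketch: the driver itself reads spec.securityContext.fsGroup from
// the mounting pod via the Kubernetes API.
package fsgroup

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// lookupPodFSGroup returns the fsGroup from the mounting pod's security
// context, or nil if none is set.
func lookupPodFSGroup(ctx context.Context, cs kubernetes.Interface, volumeContext map[string]string) (*int64, error) {
	// These keys are populated by kubelet only when podInfoOnMount is enabled
	// on the CSIDriver object.
	name := volumeContext["csi.storage.k8s.io/pod.name"]
	namespace := volumeContext["csi.storage.k8s.io/pod.namespace"]

	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if sc := pod.Spec.SecurityContext; sc != nil {
		return sc.FSGroup, nil
	}
	return nil, nil
}
```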