Re: [kubernetes/kubernetes] Multi Tenancy for Persistent Volumes (#47326)


Mike Danese

Jun 13, 2017, 12:54:34 PM

Yes, encryption looks like a reasonable solution as long as we can separate the encryption keys per tenant and only the tenant has access to its own keys. EBS volumes are the only ones that support encryption, so for RBD we are out of luck. Is there a general pattern for doing encryption with external provisioning? What about incorporating tenancy into in-tree plugins?

Any volume plugin that is backed by a block device can probably support LUKS. @kubernetes/sig-storage-feature-requests have we ever discussed LUKS encryption layers for block device volumes?
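For the EBS case quoted above, per-tenant encryption keys can already be expressed by giving each tenant its own StorageClass; a minimal sketch, assuming the kubernetes.io/aws-ebs provisioner and a made-up KMS key ARN:

```yaml
# Hypothetical per-tenant StorageClass: each tenant gets a class pointing at
# its own KMS key, so dynamically provisioned EBS volumes are encrypted with
# a key that only that tenant controls.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-a-encrypted
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  encrypted: "true"
  # placeholder ARN; the real key would be created and owned per tenant
  kmsKeyId: arn:aws:kms:us-east-1:111122223333:key/00000000-0000-0000-0000-000000000000
```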



Huamin Chen

Jun 13, 2017, 1:00:25 PM

Why does AttachDisk on volumes always happen on the node via the kubelet? The reason I ask is that for some networked storage, e.g. EBS and RBD, this can be done from the master nodes. Doing it from the kubelet means exposing more permissions to users than is necessary.

AttachDisk happens on the Kubernetes master for cloud block storage (EBS, PD, Cinder, Azure). The kubelet doesn't need privileged credentials.

krmayankk

Jun 13, 2017, 2:58:51 PM

Interesting, @rootfs @msau42. So for RBD, why do we need the user secret in the user's namespace, or is my understanding wrong? Put another way, RBD needs two secrets, admin and user. My understanding is that the user secret must be in the same namespace as the PVC and is used for AttachDisk. If AttachDisk happens on the master, we should allow the user secret to be in any namespace, or accept a namespace field for it.

Huamin Chen

Jun 13, 2017, 3:13:05 PM

Interesting, @rootfs @msau42. So for RBD, why do we need the user secret in the user's namespace, or is my understanding wrong? Put another way, RBD needs two secrets, admin and user. My understanding is that the user secret must be in the same namespace as the PVC and is used for AttachDisk. If AttachDisk happens on the master, we should allow the user secret to be in any namespace, or accept a namespace field for it.

RBD doesn't support third-party attach, so rbd map has to happen on the kubelet.

The RBD admin and user keyrings serve different purposes: the admin keyring is for RBD image provisioning (admin privilege), while the user keyring is for rbd map (non-admin privilege). Pods that use an RBD image don't have the admin keyring.
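To make that split concrete, here is a hedged sketch of an in-tree RBD StorageClass (the monitor address, pool, and secret names are made up). The admin secret lives in an admin-controlled namespace and is used only for provisioning; the user secret must exist in the PVC's namespace and is what the kubelet uses for rbd map.

```yaml
# Sketch only: parameters follow the kubernetes.io/rbd provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-tenant
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789          # made-up Ceph monitor address
  pool: kube
  adminId: admin
  adminSecretName: ceph-secret-admin    # admin keyring, provisioning only
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-secret-user      # user keyring, must exist in the PVC's namespace
```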

krmayankk

Jun 13, 2017, 5:29:19 PM

@rootfs @msau42 is there a configuration that controls where AttachDisk happens, or is it just check-based, so that at pod creation time, if AttachDisk has not already been called, it gets called? So for EBS it will never be called on the node, since the attach has already happened on the master?

Michelle Au

Jun 13, 2017, 5:38:35 PM

The attach operation is performed only by the attach/detach controller, and by default that controller runs only on the master node. There is a kubelet option to turn on attach/detach on the node, but it's going to be removed.

Jan Šafránek

Jun 14, 2017, 4:17:28 AM

Overall I want a multi-tenant model where:
-- it is not possible for one tenant to accidentally mount a volume created by another tenant

That's already implemented. Once a PV is bound to a PVC, it can never be bound to another PVC, and only pods in the same namespace as the PVC can use it.

When the PVC is deleted, the PV becomes Released. Based on the PV's persistentVolumeReclaimPolicy, the PV is either deleted or recycled (data on the PV is discarded in both cases) or remains Released forever, so nobody can bind to it. Only an admin can manually access the data on the PV or forcefully bind the PV to another PVC.
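To illustrate that scoping: an admin can reserve a PV for one tenant by pre-filling claimRef, and keep the data after release with a Retain policy. A minimal sketch with hypothetical names and an EBS backing volume:

```yaml
# Hypothetical pre-bound PV: only the named PVC in tenant-a can bind it, and
# Retain keeps the data (and the Released PV object) after the PVC is deleted.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tenant-a-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: tenant-a
    name: data-claim
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0   # placeholder volume ID
    fsType: ext4
```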

Deyuan Deng

Jun 14, 2017, 5:14:56 AM

I think one key point in what @krmayankk describes is the identity of a PV (e.g. a PV created by a tenant, a PV of an internal customer). After binding, the PV carries the identity of the PVC, but the binding process itself doesn't seem to take that into consideration. From what I know, it looks at the selector, storage class, access mode, etc. As of now, it seems the best way is to mimic identity information using a selector and storage class.

@krmayankk why do you want to reserve PV for a tenant?

krmayankk

Jun 21, 2017, 3:23:58 AM

@jsafrane @ddysher in the case of dynamic provisioning the reclaim policy is always Delete, but enterprises might still want to keep the PVs and not delete them immediately, for safety reasons. So we can't make use of the Released phase to prevent the binding.

The identity of the PV is important because under no circumstance do we want the binding process to accidentally bind a PV of one customer to a PVC of a different customer, and there is nothing that prevents that. Agreed that once a PVC is bound to a PV it cannot bind to another PV, but if we accidentally end up with some unbound PVs from customer A (due to a bug or whatever), we would want them to never get bound to customer B's PVCs. The only way I can think of to prevent that today is to assign per-customer storage classes, as sketched below.
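A hedged sketch of that workaround, with made-up names: a dedicated StorageClass per customer plus a label selector on the claim, so it can only match PVs explicitly labeled for that tenant (note that a selector also disables dynamic provisioning, so this applies to pre-created PVs).

```yaml
# Per-customer class; provisioner parameters omitted since any backend works
# for this sketch.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: customer-a
provisioner: kubernetes.io/rbd
---
# The claim is restricted to PVs of the customer-a class that also carry the
# tenant label, so an unbound PV from another customer can never satisfy it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: customer-a
spec:
  storageClassName: customer-a
  selector:
    matchLabels:
      tenant: customer-a
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```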

Michelle Au

Jun 21, 2017, 12:52:54 PM

@krmayankk once the PV is deleted, then there is no PV object that exists anymore, so you can't accidentally bind to it. You would have to recreate the PV, either statically or through dynamic provisioning.

Is what you really want the ability to specify a Retain policy for dynamically provisioned PVs, so that you have a chance to clean up the data before it goes back into the provisioning pool?

krmayankk

Jun 22, 2017, 12:33:07 AM

@msau42 Since dynamic provisioning doesn't support a reclaim policy and always deletes, what we are doing is not deleting the PVCs when deleting StatefulSets. That way we explicitly GC the PVCs, and hence the PVs, at a later time. While the PVC (and hence the PV) is awaiting GC, I am worried that the PVC could get unbound due to bugs, and the PV would then become available for binding by other tenants. Do you think this is possible? If the PVC/PV somehow get unbound for a dynamically provisioned PV, will the phase of the PV be Released or Available?
Yes, the ability to specify a Retain policy would really give us a chance to clean up the data, although some tenants may not want that; they would want new PVs, not recycled ones.

Deyuan Deng

Jun 22, 2017, 6:03:07 AM

If a PV is bound to a PVC, then there are two pieces of information about the two-way binding:

  • In PVC, spec.volumeName tells which PV the PVC is bound to
  • In PV, spec.claimRef tells which PVC the PV is bound to

pvc.spec.volumeName cannot be edited once set. pv.spec.claimRef can be removed entirely, and if so, pv.status.phase will become Available. However, since pvc.spec.volumeName is non-empty and points to the PV, pv_controller will try to bind the PV and PVC again.
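For reference, an established binding looks roughly like this (all names are hypothetical and fields are trimmed to the ones relevant here):

```yaml
# PVC side: volumeName is filled in by the binder and is immutable once set.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
  namespace: tenant-a
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-tenant-a-0001
---
# PV side: claimRef points back at the PVC (the controller also records the
# PVC's UID here, which pins the binding to that exact object).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-tenant-a-0001
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  claimRef:
    namespace: tenant-a
    name: data-claim
  rbd:
    monitors:
      - 10.16.153.105:6789   # placeholder monitor
    pool: kube
    image: pv-tenant-a-0001
```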

My suspicion is that in between, if another PVC is also pending, it's possible that the other PVC will bind to the PV, and the original PVC will just stay as is (with pv_controller repeatedly retrying the bind and failing). If my understanding is correct, I think the likelihood of the PV being bound by another tenant is pretty low, if not impossible.

If the PVC/PV somehow get unbound for a dynamically provisioned PV, will the phase of the PV be Released or Available?

Michelle Au

Jun 22, 2017, 1:41:08 PM

Assuming no Recycle policy, once the PV is unbound it either goes to Released (and stays there under a Retain policy) or the PV object is deleted entirely. The same PV object cannot go back to the Available state, so the same PV object is never recycled; it's always a new PV object.

Now, whether or not the data on the underlying backing volume gets cleaned up by the storage provider before being put back into the storage pool (used for dynamic provisioning) is a different story, and it's going to depend on each provider. For example, for GCE PD, when the disk gets deleted, it is guaranteed that the content of the underlying volume is cleaned up before it can be reused for a new disk. For local storage, the provided external provisioner will clean up the data when the PV is released. For other volume plugins that may not be the case, and I believe that is why @krmayankk wants the Retain policy, to be able to manually clean up the data on the volume.

I still think that if we had the ability to set the reclaim policy to Retain on dynamically provisioned volumes, it would address your concern about being able to clean up volumes before they are used again by other tenants.
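For readers landing here later: newer releases (around Kubernetes 1.8) added a reclaimPolicy field to StorageClass, which is essentially this ask. A minimal sketch with a made-up class name:

```yaml
# Dynamically provisioned PVs from this class inherit Retain, so the data
# survives PVC deletion and an admin can scrub it before reuse.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: retained-ssd
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Retain
parameters:
  type: pd-ssd
```

An admin can also edit persistentVolumeReclaimPolicy to Retain on an already-provisioned PV, since that field is mutable.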

Jeff Vance

Jun 22, 2017, 2:47:12 PM

See also issue 38192

krmayankk

Jun 26, 2017, 12:29:41 AM

@msau42

Assuming no Recycle policy, once the PV is unbound it either goes to Released (and stays there under a Retain policy) or the PV object is deleted entirely. The same PV object cannot go back to the Available state, so the same PV object is never recycled; it's always a new PV object.

I just tested by manually editing the claimRef of an already Bound PV. I set the claimRef to an empty string, and soon the PV became Available and remained Available forever. Is this a bug? Again, all my tests are on 1.5.3.

Michelle Au

Jun 26, 2017, 12:20:10 PM

@krmayankk A user is not going to "accidentally" delete the PV's claimRef; an admin would have to do that intentionally. Is that a scenario you really want to protect against? I thought the scenario you were worried about was a user accidentally deleting their PVC.

krmayankk

Jun 26, 2017, 3:13:45 PM

@msau42 I think the probability of a PV getting unbound from its PVC is low (let's say there could be bugs in the PV controller related to that). What would be best is that once a PV is dynamically provisioned, there is no way for it to ever become Available again, at least under certain reclaim policies. Available would be a phase that is not admin-editable: even if an admin intentionally removes the binding, the PV should never go to Available. Maybe allow this through a reclaim policy that supports it: let the PV go to Available only under Recycle, but never under Delete, even if an admin accidentally unbinds it.

fejta-bot

Dec 30, 2017, 12:54:25 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot

Feb 9, 2018, 11:31:10 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot

May 21, 2018, 4:30:53 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Maxim Ivanov

May 21, 2018, 6:45:54 PM

/remove-lifecycle stale

Calvin Hartwell

Jul 4, 2018, 10:12:50 AM

@krmayankk sorry to respond to this necro post, but did you come to any conclusion on this? I assume you can create a LimitRange that is really small for namespaces that should be restricted from using certain storage classes, correct?

Have CSI plugins in k8s 1.10 and the new storage improvements in 1.11 sorted out any issues?

https://kubernetes.io/docs/tasks/administer-cluster/limit-storage-consumption/
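The linked page covers both LimitRange and ResourceQuota; the per-StorageClass quota keys are what actually fence a class off from a namespace. A hedged sketch with made-up class and namespace names:

```yaml
# Namespace tenant-b may claim up to 50Gi overall but nothing at all from
# the "gold" StorageClass.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: tenant-b
spec:
  hard:
    requests.storage: 50Gi
    persistentvolumeclaims: "10"
    gold.storageclass.storage.k8s.io/requests.storage: "0"
    gold.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
```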

Restricting storage access with Ceph is quite easy; I'm not sure about NetApp with Trident or other mechanisms right now, though.

Thanks!

fejta-bot

Oct 2, 2018, 10:35:28 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

krmayankk

Oct 2, 2018, 11:49:08 AM

/remove-lifecycle stale

fejta-bot

Dec 31, 2018, 10:51:33 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Jan 30, 2019, 11:35:58 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Mar 1, 2019, 11:53:47 AM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Mar 1, 2019, 11:53:56 AM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

Mar 1, 2019, 11:53:57 AM

Closed #47326.

krmayankk

Oct 20, 2019, 9:44:55 PM

/remove-lifecycle rotten



krmayankk

Oct 20, 2019, 9:44:56 PM

/reopen

krmayankk

Oct 20, 2019, 9:45:08 PM

/lifecycle frozen

Kubernetes Prow Robot

Oct 20, 2019, 9:45:16 PM

Reopened #47326.

Kubernetes Prow Robot

Oct 20, 2019, 9:45:17 PM

@krmayankk: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
