Re: [kubernetes/kubernetes] PV still contains ClaimRef to a PVC even after it's deleted as part of the namespace (#65581)


k8s-ci-robot

Jun 28, 2018, 9:05:55 AM

@enchantner: Reiterating the mentions to trigger a notification:
@kubernetes/sig-storage-bugs

In response to this:

@kubernetes/sig-storage-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



Michelle Au

Jun 28, 2018, 9:58:45 AM

A reclaim policy of Retain means that the system should not automatically release the PV and make it available. It indicates that user action may be required to copy or purge the data from the disk, so it is up to the user to clear the ClaimRef from the PV when they are ready to make the disk available again.
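For reference, a minimal sketch of that manual step, assuming the PV is named local-pv-1 (the name is a placeholder):

# Remove the stale claimRef so the Released PV becomes Available again.
kubectl patch pv local-pv-1 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'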

If you want the system to automatically clean the local disks and make them available again, you can use the local volume static provisioner with the Delete reclaim policy.

Nikolay Markov

Jun 28, 2018, 5:42:02 PM

@msau42 I think I understand how Retain works, but I don't want to delete the PV itself together with the claim.

The idea is that I create all those PVs, together with the underlying partitions, using an external tool, and they have NodeAffinity (and a local path) tied to a specific node. So, to create a PV again, I'd need to specify the exact node every time I create a Deployment. This is really ugly.

Also, I know there is no important data left on those partitions when I delete the claims. I thought the Recycle policy would do what I want (clean the partitions and make them "Available" again), but it doesn't work. Isn't there a way to do this automatically, without having to manually delete the ClaimRef every time?

Michelle Au

Jun 28, 2018, 5:53:31 PM

If your external tool can mount all these PVs under one directory, then you can run the local static provisioner, which runs as a DaemonSet and automatically creates the PVs with the correct path and NodeAffinity. When a PVC is deleted, it will clean up the data on the volume, delete the PV, and then recreate the PV with the correct information. This is how you can get recycling behavior with local PVs.
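As a rough sketch (the class name fast-disks and the discovery directory /mnt/fast-disks are placeholders, and the exact config keys may vary by provisioner version): each node mounts its disks under a discovery directory, the provisioner's config maps a StorageClass to that directory, and the Delete reclaim policy gives the recycle-like behavior.

# StorageClass consumed by the local static provisioner (sketch)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

# Provisioner config fragment: map the class to the discovery directory (sketch)
storageClassMap: |
  fast-disks:
    hostDir: /mnt/fast-disks
    mountDir: /mnt/fast-disks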

Nikolay Markov

Jun 28, 2018, 6:01:56 PM

This looks like what I want, thanks! Is there any source, like an article, where I can read more about this? An example, maybe?

Michelle Au

Jun 28, 2018, 6:05:22 PM

fejta-bot

Sep 26, 2018, 6:20:49 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Oct 26, 2018, 7:08:11 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Tim Zhang

Nov 7, 2018, 3:57:19 AM

> A reclaim policy of Retain means that the system should not automatically release the PV and make it available. It indicates that user action may be required to copy or purge the data from the disk, so it is up to the user to clear the ClaimRef from the PV when they are ready to make the disk available again.
>
> If you want the system to automatically clean the local disks and make them available again, you can use the local volume static provisioner with the Delete reclaim policy.

Could we add one more reclaim policy, e.g. Reuse? This policy would set the PV's phase to Available and clear its claimRef after the bound PVC is deleted.
As in this case: when we use a StatefulSet with a PVC template, we may scale down or delete the StatefulSet and delete the unmounted PVCs, but the PVs may be reused later, so we want to keep them and just clear their claimRef.
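To illustrate the state we end up with (names below are made up), after the PVC from the volumeClaimTemplate is deleted the PV is left like this, and the proposed Reuse policy would simply clear the claimRef and mark the PV Available again:

# Leftover PV fragment after the StatefulSet's PVC is deleted (illustrative)
spec:
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: data-myapp-0                 # PVC created by the volumeClaimTemplate
    namespace: default
    uid: <uid-of-the-deleted-PVC>
status:
  phase: Released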

Gini

Nov 13, 2018, 11:03:47 PM

Facing the same issue; any update?

oc version
oc v3.6.173.0.129
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://lbint-example.com:443
openshift v3.6.173.0.129
kubernetes v1.6.1+5115d708d7

Kubernetes Prow Robot

Dec 14, 2018, 12:02:49 AM

Closed #65581.

fejta-bot

Dec 14, 2018, 12:02:50 AM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Dec 14, 2018, 12:02:51 AM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Iridescens

Nov 26, 2019, 9:47:05 AM

I second @zhangxiaoyu-zidif's proposal. I also need this, as we often need to recreate whole releases with Helm with different options, without deleting the data.



Iridescens

Nov 26, 2019, 9:48:25 AM

/reopen

Kubernetes Prow Robot

Nov 26, 2019, 9:48:33 AM

@Iridescens: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Jan Šafránek

Nov 28, 2019, 11:22:36 AM

> Could we add one more reclaim policy, e.g. Reuse? This policy would set the PV's phase to Available and clear its claimRef after the bound PVC is deleted.

No, the released PV may contain sensitive data (e.g. credit card numbers), and we don't want a random PVC to bind to such a PV. Also, the semantics are that PVs are bound empty (unless an admin manually creates a pre-populated PV). We tried to wipe volumes with the Recycle policy, but it was slow, clumsy, and not really safe. The local storage provisioner fixed most of these issues by using custom cleaners.
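For reference, a rough sketch of how those cleaners are configured in the local static provisioner (class name, paths, and scripts below are illustrative): filesystem volumes simply have their contents removed, while block volumes can be wiped with a configurable command.

# Provisioner ConfigMap fragment with a custom block cleaner (sketch)
storageClassMap: |
  local-block:
    hostDir: /mnt/local-disks
    mountDir: /mnt/local-disks
    volumeMode: Block
    blockCleanerCommand:
      - "/scripts/shred.sh"
      - "2"                            # number of shred passes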

Jing Xu

Dec 2, 2019, 4:05:46 PM

The "reuse" scenario mentioned by @zhangxiaoyu-zidif seems quite useful in some use cases. Thinking the way of PVC/PV is designed, PV is non-namespaced and could be bound to a PVC if the it is in "available" state (claimRef is nil). So I understand adding this "reuse" policy is considered dangerous if storage has data on it. But it seems also a limitation of this PVC/PV by design to not able to support this use case. If we could add a fields to limit what namespaces are allowed to access this PV (e.g, allowedAccessNamespaces"), will it help to support this "resue" case?

James Forbes

Mar 11, 2021, 7:16:43 AM

I also require the functionality mentioned by @zhangxiaoyu-zidif. It's been over a year; have we found a workaround?
