cc @kubernetes/sig-cli-feature-requests @kubernetes/sig-apps-feature-requests
Also relevant to my interests. Sequentially deleting pods in a StatefulSet respects the PDB; batch-deleting the pods by their label does not. The documentation does not make such a distinction. What is the proper approach to avoid batch-delete voluntary disruptions?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
if we had this feature it would simplify so many of our scripts 😭
BTW the docs on disruptions say that directly deleting a pod is a voluntary disruption, to which PodDisruptionBudgets apply: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
I think this section of the document is worded ambiguously - is it attempting to say that these typical application owner actions are all voluntary disruptions, or just listing actions that are application owner actions?
We call other cases voluntary disruptions. These include both actions initiated by the application owner and those initiated by a Cluster Administrator. Typical application owner actions include:
- deleting the deployment or other controller that manages the pod
- updating a deployment's pod template causing a restart
- directly deleting a pod (e.g. by accident)
I agree. The doc gives the false impression that all voluntary disruptions are constrained by PDBs; this should be made clearer.
Yeah, that doc could do with some rewording to make it clear which actions obey the PDB. It currently gives examples of involuntary disruptions like hardware failure, node deletion, etc., but then seems to imply that something like deleting a deployment is a voluntary disruption and would therefore obey the PDB.
That doesn't make any sense to me; if I've deleted a deployment, I do not expect my pods to stay around.
If people think a kubectl evict command would be useful, it's probably a good idea to create a KEP over at https://github.com/kubernetes/enhancements to help move this forward.
imo kubectl delete --respect-pdb is more intuitive. People know what delete does, and now they want it to respect disruption budgets. evict introduces a new verb which is known to expert API users but probably needs more explaining to kubectl users.
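For what it's worth, the PDB-aware operation already exists in the API as the pods/eviction subresource (it's what drain goes through), so the question here is really about the kubectl UX. A rough sketch of driving it by hand today, assuming a pod named web-0 in the default namespace (names are hypothetical, and older clusters may only serve policy/v1beta1):

# Eviction request body; adjust name/namespace to your pod.
cat <<'EOF' > eviction.json
{
  "apiVersion": "policy/v1",
  "kind": "Eviction",
  "metadata": { "name": "web-0", "namespace": "default" }
}
EOF

# POST it to the pod's eviction subresource. Unlike a plain delete,
# this is refused by the API server if it would violate a PodDisruptionBudget.
kubectl create --raw /api/v1/namespaces/default/pods/web-0/eviction -f eviction.json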
Also, am I reading this correctly that StatefulSets don't respect PDBs?
Does kubectl drain respect PDB?
Yes, drain respects PDBs: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/ @liggitr
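For anyone landing here, a minimal drain sketch (the node name is hypothetical):

# Cordon the node and evict its pods through the eviction API;
# drain waits and retries while a PodDisruptionBudget would be violated.
kubectl drain node-1 --ignore-daemonsets

# Put the node back in service afterwards.
kubectl uncordon node-1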
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I see that the documentation has been fixed to make it clearer that kubectl delete pods won't respect PDBs, but is there more needed?
- should we add something like kubectl delete pod --respect-pdb?
delete is a generic method... I don't think overloading one specific resource to use a different API is a good idea.
- do we need kubectl evict?
Doesn't kubectl drain use the eviction endpoint already? What would the behavior of an evict command be? If eviction was rejected, would the command succeed or fail? (See the sketch below.)
- what about kubectl delete deployments/statefulsets?
Deletion of a pod-creating controller is intended to remove all pods owned by that controller by default. Preventing cleanup of orphaned pods to avoid "disruption" could result in stuck pods.
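On the "succeed or fail" question: the eviction subresource answers with HTTP 429 Too Many Requests when the request would violate a PDB, so an evict command would presumably surface that as an error and exit non-zero. A sketch of what that looks like today, assuming a PDB named web-pdb whose budget is currently exhausted (names are hypothetical; eviction.json is the body from the earlier sketch):

# Check how many voluntary disruptions the budget currently allows.
kubectl get pdb web-pdb
# NAME      MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
# web-pdb   2               N/A               0                     1d

# With zero allowed disruptions, the eviction is refused with HTTP 429
# (an error along the lines of "Cannot evict pod as it would violate the
# pod's disruption budget") and the pod keeps running; a plain
# kubectl delete of the same pod would go through regardless.
kubectl create --raw /api/v1/namespaces/default/pods/web-0/eviction -f eviction.json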
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Closed #39824.
/reopen
Is there any other way I could test that my PDB works, if I can't just delete pods and check that they don't get deleted?
And the people here are right: there is a documentation issue, since it reads as though deleting a pod should take the PDB into account.
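One way to exercise a PDB without deleting pods is to trigger the kind of voluntary disruption it actually governs, e.g. an eviction-based drain, and watch the budget hold things back. A rough sketch, assuming a PDB named web-pdb and a node node-1 hosting one of its pods (hypothetical names):

# In one terminal, watch the budget's allowed disruptions change.
kubectl get pdb web-pdb --watch

# In another, drain the node; drain uses the eviction API, so it blocks
# and retries whenever evicting another pod would violate the budget.
kubectl drain node-1 --ignore-daemonsets

# Put the node back in service when done.
kubectl uncordon node-1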
@shacharSilbert: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Is there any other way I could test that my PDB works, if I can't just delete pods and check that they don't get deleted?
And the people here are right: there is a documentation issue, since it reads as though deleting a pod should take the PDB into account.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
BTW the docs on disruptions say that directly deleting a pod is a voluntary disruption, to which PodDisruptionBudgets apply: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
Documentation now states:
Caution: Not all voluntary disruptions are constrained by Pod Disruption Budgets. For example, deleting deployments or pods bypasses Pod Disruption Budgets.
from https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
The documentation today is still very confusing, especially this passage (https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions):
We call other cases voluntary disruptions. These include both actions initiated by the application owner and those initiated by a Cluster Administrator. Typical application owner actions include:
- deleting the deployment or other controller that manages the pod
- updating a deployment's pod template causing a restart
- directly deleting a pod (e.g. by accident)
And NONE of these will respect the PDB constraint
/remove-lifecycle rotten
Deleting pods should respect the pod disruption budget. Three years on and this is still not solved.
Is there any other way I could test that my PDB works, if I can't just delete pods and check that they don't get deleted?
And the people here are right: there is a documentation issue, since it reads as though deleting a pod should take the PDB into account.
It looks like a plugin has been created for this purpose exactly: https://github.com/ueokande/kubectl-evict