As per #40714 (comment), kubectl delete should ensure that the resource is deleted before returning by default. We can also add a --wait-for-deletion flag that users can set if they don't want to wait.
Work items:
- Update kubectl delete to wait for the resource to be deleted by default, unless the user sets --wait-for-deletion=false.

cc @liggitt @smarterclayton @bgrant0607 @kubernetes/sig-cli-bugs
I'd probably want --wait; not sure what else I would be waiting on here.
StatusDetails should probably have a UID field, but there's a question of how much detail we want to put in there. I don't want to add UID, then come back later and have to add other fields like resourceVersion, deletionTimestamp, etc.
Perhaps --wait could be an optional flag that turns on waiting, but by default it's not enabled? That will preserve the current UX for users who are used to the current state of things.
More of a semantic question: is there a specific reason to send DELETEs with a precondition, instead of repeated GET/HEADs? Not sure if the latter is possible since I haven't worked much with the apiserver, but that's how SDKs usually implement logic like this.
> Perhaps --wait could be an optional flag that turns on waiting, but by default it's not enabled? That will preserve the current UX for users who are used to the current state of things.
The current state is that kubectl delete always waits (for example, it waits for pods to be deleted when running kubectl delete rs or kubectl delete deployment). Namespace deletion and pod graceful deletion are exceptions.
> More of a semantic question: is there a specific reason to send DELETEs with a precondition, instead of repeated GET/HEADs? Not sure if the latter is possible since I haven't worked much with the apiserver, but that's how SDKs usually implement logic like this.
The caller may only have access to run DELETE requests and hence won't be able to run GET.
More details: #40714 (comment)
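For callers that do have GET access, client-side polling is straightforward; a minimal sketch (the deployment name is just an example):

kubectl delete deployment nginx-deployment
# Poll until GET fails with NotFound, i.e. the object is actually gone.
while kubectl get deployment nginx-deployment >/dev/null 2>&1; do
  sleep 1
done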
/subscribe
> StatusDetails should probably have a UID field, but there's a question of how much detail we want to put in there. I don't want to add UID, then come back later and have to add other fields like resourceVersion, deletionTimestamp, etc.
Sent #45600 to add UID. Happy to add other fields if required.
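For illustration, a UID precondition goes in the DeleteOptions body of the DELETE request, so the server refuses to act on a recreated object that reuses the same name; a sketch against a kubectl proxy, with a made-up UID:

kubectl proxy --port=8001 &
# Delete only if the live object still has the expected UID.
curl -X DELETE -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","preconditions":{"uid":"2b1f9c4d-0000-4000-8000-000000000000"}}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/nginx-deployment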
Sent #46471 which updates kubectl to wait for deletion by default and adds a --wait flag that users can set to disable the wait. PTAL
any updates?
@nikhiljindal could this be related?
@Blasterdick No, this is not related. Jianhuiz pointed out the root cause for that in #53566 (comment). It's a bug in the federation deployment controller and doesn't require any change to kubectl.
Unassigning since I am not actively working on this anymore.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
> The current state is that kubectl delete always waits (for example, it waits for pods to be deleted when running kubectl delete rs or kubectl delete deployment). Namespace deletion and pod graceful deletion are exceptions.
@nikhiljindal I disagree with this statement for a couple of reasons:
1. replicasets and deployments are the exceptions, not vice versa, because this behavior is driven by reapers, which are optional (so by default kubectl does not wait).
2. even for deployments, the statement "kubectl delete always waits" is not 100% correct. Example: I added a custom finalizer example.com/preventDeletion to a deployment, and the behavior was the following:
❯ kubectl apply -f deploy.yaml
deployment "nginx-deployment" created
❯ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6c54bd5869-m5tc9 1/1 Running 0 2s
nginx-deployment-6c54bd5869-t2pjl 1/1 Running 0 2s
❯ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 2 2 2 2 11s
deployment "nginx-deployment" deleted, and it will remain there until I manually remove the example.com/preventDeletion finalizer.❯ kubectl delete deploy nginx-deployment
deployment "nginx-deployment" deleted
❯ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-6c54bd5869-m5tc9 0/1 Terminating 0 29s
nginx-deployment-6c54bd5869-t2pjl 0/1 Terminating 0 29s
❯ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 0 0 0 0 36s
For example, 7 minutes later:
❯ kubectl get pod
No resources found.
❯ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx-deployment 0 0 0 0 7m
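(For completeness, a sketch of clearing the finalizer with a JSON patch; this assumes example.com/preventDeletion is the only finalizer on the object:)

kubectl patch deployment nginx-deployment --type=json \
  -p='[{"op":"remove","path":"/metadata/finalizers/0"}]'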
Yes, agreed that we need to fix that. Part of this issue is to add support for finalizers: kubectl should wait for the finalizer to be removed and the resource to be deleted before returning.
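As a sketch of that end state, newer kubectl (v1.11+) can already express the wait explicitly; this issue tracks making delete do it implicitly:

# Block until the object, finalizers included, is actually gone.
kubectl wait --for=delete deployment/nginx-deployment --timeout=120s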
@nikhiljindal your PR #46471 (adding a --wait flag) was automatically closed by the bot a while ago due to inactivity. Do you have any plans to resurrect it?
No. Feel free to take over.
/remove-lifecycle stale
This is driving me up the wall. I may get to this before too long.
is this fixed by #64034?
Closed #42594.
Guys, it's not a good idea to break the current behaviour. By default, the delete command did not wait until everything was complete. I have a lot of scripts that delete a lot of resources, and now they run for a very long time. The wait feature is good, but changing the default behaviour is upsetting.
@nekufa add --wait=false to the kubectl commands in your scripts to get the previous behaviour.
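For example, a script that relied on the old non-blocking behaviour can be updated like this (the deployment name is illustrative):

# Return as soon as the API server accepts the delete, as before v1.11.
kubectl delete deployment nginx-deployment --wait=false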
@roffe Sure, I found out how to achieve this, thank you.
My message was about breaking the default behaviour.
Why was this made the default? I cannot find any rationale for it here. IMO, one should have a very good reason to break current behaviour. Besides, while I agree that this is a good addition, there are still plenty of use cases for not waiting.
Yikes, I was wondering why delete now seems to hang. Now I know! IMO a bad choice for the default behaviour :(
What is actually going on under the covers? It appears that the command immediately prints the "deleted" message and then hangs, but actually it is waiting. The messaging seems incorrect: it should say "deleting", and if --wait is the default it should only print "deleted" when the object is actually deleted.
The way it works right now, it seems like kubectl deletes things and then hangs for no reason, and it doesn't give any indication that a state change has actually happened when the command finally terminates. (However, ctrl-c and inspecting with kubectl get show what is going on.) It's also a little surprising that the whole operation is async and ctrl-c doesn't stop it.
It might be clearer that it's async if it said "queued for deletion" or "scheduled for deletion", followed by "waiting for controller to report deletion" or something similar, and finally "controller reports resource is deleted".
I do think this is good default behavior; it just needs better messaging.