[kubernetes/kubernetes] kubectl delete should wait for resource to be deleted before returning (#42594)

Nikhil Jindal

Mar 6, 2017, 4:15:08 PM

As per #40714 (comment), kubectl delete should ensure that the resource is deleted before returning by default. We can also add a --wait-for-deletion flag that users can set to false if they don't want to wait.

Work items:

  • Update the apiserver to return the UID of the resource being deleted in response to a delete request.
  • Update the kubectl delete code to:
    • First send a DELETE request to the apiserver with the resource name. The server will return the resource UID as part of the response.
    • Then keep sending DELETE requests to the apiserver with a UID precondition until the server returns 404 or 409, or we time out (see the sketch after this list).
    • Skip the wait if the user sets --wait-for-deletion=false.
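
To make the loop concrete, here is a minimal sketch of the proposed DELETE-with-precondition polling, done by hand through kubectl proxy (the pod name, namespace, and UID placeholder are hypothetical; preconditions is the standard DeleteOptions field):

❯ kubectl proxy --port=8001 &

# First DELETE by name; the proposal is for the response's status details to carry the UID:
❯ curl -X DELETE localhost:8001/api/v1/namespaces/default/pods/foo

# Then re-send DELETE with a UID precondition until the server answers
# 404 (object is gone) or 409 (a new object has reused the name):
❯ curl -X DELETE localhost:8001/api/v1/namespaces/default/pods/foo \
    -H 'Content-Type: application/json' \
    -d '{"apiVersion":"v1","kind":"DeleteOptions","preconditions":{"uid":"<uid-from-first-response>"}}'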

cc @liggitt @smarterclayton @bgrant0607 @kubernetes/sig-cli-bugs


Clayton Coleman

Mar 6, 2017, 9:52:20 PM

I'd probably want --wait; not sure what else I would be waiting on here.

StatusDetails should probably have a UID field, but there's a question of how much detail we want to put in there. I don't want to add UID, then come back later and have to add other fields like resourceVersion, deletionTimestamp, etc.

Jamie Hannaford

Mar 9, 2017, 9:28:32 AM

Perhaps --wait could be an optional flag that turns on waiting, but by default it's not enabled? That will preserve the current UX for users who are used to the current state of things.

More of a semantic question: is there a specific reason to send DELETEs with a precondition, instead of repeated GET/HEADs? Not sure if the latter is possible since I haven't worked much with the apiserver, but that's how SDKs usually implement logic like this.
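
For reference, a minimal sketch of the GET-polling loop I mean (the deployment name is hypothetical; kubectl get exits non-zero once the resource is gone):

while kubectl get deployment nginx-deployment >/dev/null 2>&1; do
  sleep 1   # resource still exists; keep polling
done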

Nikhil Jindal

Mar 10, 2017, 5:26:11 PM

> Perhaps --wait could be an optional flag that turns on waiting, but by default it's not enabled? That will preserve the current UX for users who are used to the current state of things.

The current state is that kubectl delete always waits (for example, it waits for pods to be deleted when running kubectl delete rs or kubectl delete deployment). Namespace deletion and pod graceful deletion are exceptions.

> More of a semantic question: is there a specific reason to send DELETEs with a precondition, instead of repeated GET/HEADs? Not sure if the latter is possible since I haven't worked much with the apiserver, but that's how SDKs usually implement logic like this.

The client may only have permission to run DELETE requests and hence won't be able to run GET.
More details: #40714 (comment)
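
To illustrate why: RBAC can grant delete without get. A hypothetical Role like the one below (using the rbac.authorization.k8s.io/v1 API) would allow the DELETE-with-precondition loop but reject any GET poll:

❯ kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: delete-only       # hypothetical name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete"]       # note: no "get" or "list"
EOF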

Shiyang Wang

Mar 12, 2017, 9:38:16 PM

/subscribe

Nikhil Jindal

May 10, 2017, 11:21:43 AM

> StatusDetails should probably have a UID field, but there's a question of how much detail we want to put in there. I don't want to add UID, then come back later and have to add other fields like resourceVersion, deletionTimestamp, etc.

Sent #45600 to add UID. Happy to add other fields if required.
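
For context, with that change the Status returned from a DELETE would carry the UID under details, roughly like this (values are hypothetical; field names follow metav1.Status and metav1.StatusDetails):

❯ curl -X DELETE localhost:8001/api/v1/namespaces/default/pods/foo
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Success",
  "details": {
    "name": "foo",
    "kind": "pods",
    "uid": "9a1c5f0e-0d8a-11e7-0000-000000000000"
  }
}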

Nikhil Jindal

May 26, 2017, 2:18:10 PM

Sent #46471, which updates kubectl to wait for deletion by default and adds a --wait flag that users can set to false to disable the wait. PTAL
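
For clarity, the intended flag semantics (waiting becomes the default; set the flag to false to opt out):

❯ kubectl delete deployment nginx-deployment                # waits until the object is actually gone
❯ kubectl delete deployment nginx-deployment --wait=false   # returns as soon as the delete is accepted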

Luigi Riefolo

Oct 30, 2017, 8:51:48 AM

Any updates?

Dmitry

Nov 30, 2017, 10:19:21 AM

@nikhiljindal Could this be related?

Nikhil Jindal

Dec 1, 2017, 3:06:29 AM

@Blasterdick No, this is not related. Jianhuiz pointed out the root cause for that in #53566 (comment). It's a bug in the federation deployment controller and doesn't require any change to kubectl.

Unassigning since I am not actively working on this anymore.

fejta-bot

Mar 1, 2018, 4:02:48 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Nail Islamov

Mar 13, 2018, 6:32:05 PM

> The current state is that kubectl delete always waits (for example, it waits for pods to be deleted when running kubectl delete rs or kubectl delete deployment). Namespace deletion and pod graceful deletion are exceptions.

@nikhiljindal I disagree with this statement for a couple of reasons:

  1. ReplicaSets and Deployments are exceptions, not vice versa, because this behavior is driven by reapers, which are optional (so by default kubectl does not wait).
  2. Even for Deployments, the statement "kubectl delete always waits" is not 100% correct.

Example: I added a custom finalizer example.com/preventDeletion to a deployment, and the behavior was the following:

  1. Created a deployment - all good:
❯ kubectl apply -f deploy.yaml
deployment "nginx-deployment" created

❯ kubectl get pod
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-6c54bd5869-m5tc9   1/1       Running   0          2s
nginx-deployment-6c54bd5869-t2pjl   1/1       Running   0          2s

❯ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           11s
  2. Deleted the deployment - the pods are terminated, but the deployment is still there despite the message deployment "nginx-deployment" deleted, and it will remain there until I manually remove the example.com/preventDeletion finalizer.
❯ kubectl delete deploy nginx-deployment
deployment "nginx-deployment" deleted

❯ kubectl get pod
NAME                                READY     STATUS        RESTARTS   AGE
nginx-deployment-6c54bd5869-m5tc9   0/1       Terminating   0          29s
nginx-deployment-6c54bd5869-t2pjl   0/1       Terminating   0          29s

❯ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0         0         0            0           36s

E.g., 7 minutes later:

❯ kubectl get pod
No resources found.

❯ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   0         0         0            0           7m
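
For anyone reproducing this, a sketch of how such a finalizer can be added and later removed (standard kubectl patch syntax; the finalizer name is from the example above):

❯ kubectl patch deployment nginx-deployment --type=merge \
    -p '{"metadata":{"finalizers":["example.com/preventDeletion"]}}'

# ...and to unblock the stuck deletion afterwards:
❯ kubectl patch deployment nginx-deployment --type=json \
    -p '[{"op":"remove","path":"/metadata/finalizers/0"}]'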

Nikhil Jindal

Mar 14, 2018, 5:58:54 PM

Yes, agreed that we need to fix that. Part of this issue is to add support for finalizers: kubectl should wait for finalizers to be removed and the resource to be deleted before returning.

Nail Islamov

Mar 31, 2018, 8:11:05 AM

@nikhiljindal Your PR #46471 (adding a --wait flag) was automatically closed by the bot a while ago due to inactivity. Do you have any plans to resurrect it?

Nikhil Jindal

Apr 1, 2018, 1:28:18 AM

No. Feel free to take over.

Janet Kuo

Apr 2, 2018, 5:31:27 PM

/remove-lifecycle stale

Clayton Coleman

Apr 30, 2018, 3:35:57 PM

This is driving me up the wall. I may get to this before too long.

Brian Grant

Jun 2, 2018, 12:13:19 AM

Antoine Pelisse

Jun 2, 2018, 12:21:48 AM

Is this fixed by #64034?

Nail Islamov

Jun 2, 2018, 12:52:59 AM

Yes, it was fixed in #64034, followed by #59851, #63979, and #64375.

Brian Grant

Jun 4, 2018, 11:02:53 PM

Closed #42594.

Dmitry Krokhin

Sep 3, 2018, 5:35:18 AM

Guys, it's not a good idea to break the current behaviour. By default, the delete command did not wait until everything was complete. I have a lot of scripts that drop a lot of resources, and now they run for a very long time. The wait feature is good, but the change in default behaviour is upsetting.

Joakim Karlsson

Sep 13, 2018, 4:30:04 AM

@nekufa Add --wait=false to the kubectl commands in your scripts to get the previous behaviour.

Dmitry Krokhin

Sep 21, 2018, 2:29:07 PM

@roffe Sure, I found out how to achieve this, thank you.
My message was about breaking the default behaviour.

Torsten Bronger

Jan 11, 2019, 12:27:57 AM

Why was this made the default? I cannot find any rationale for it here. IMO, one should have a very good reason to break current behaviour. Besides, while I agree that this is a good addition, there are still plenty of use cases for not waiting.

Dan Clarke

Aug 17, 2019, 5:30:32 AM

Yikes, I was wondering why delete now seems to hang. Now I know! IMO, a bad choice for the default behaviour :(

Luke Schlather

Oct 29, 2021, 6:56:32 PM

What is actually going on under the covers? It appears that the command immediately prints "deleted" and then hangs, but actually it is waiting. It seems like the messaging is incorrect: it should say "deleting", and if --wait is the default, it should only print "deleted" when the object is actually deleted.

The way it works right now, it seems like kubectl deletes things and then hangs for no reason, and it doesn't give any indication that a state change has actually happened when the command finally terminates. (However, ctrl-c and inspecting with kubectl get shows what is going on.) It's also a little surprising that the whole operation is async and that ctrl-c doesn't stop it.


Luke Schlather

Oct 29, 2021, 6:57:54 PM

I wonder if it could be made clearer that it's async by saying "queued for deletion" or "scheduled for deletion", followed by "waiting for controller to report deletion" or something like that, and finally "controller reports resource is deleted."

Luke Schlather

Oct 29, 2021, 6:58:33 PM

I do think this is good default behavior; it just needs better messaging.
