[MILESTONENOTIFIER] Milestone Issue Needs Approval
@aleksandra-malinowska @kubernetes/sig-api-machinery-misc @kubernetes/sig-cli-misc
Action required: This issue must have the status/approved-for-milestone label applied by a SIG maintainer. If the label is not applied within 4 days, the issue will be moved out of the v1.11 milestone.
sig/api-machinery sig/cli: Issue will be escalated to these SIGs if needed.
priority/important-soon: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
kind/bug: Fixes a bug discovered during the current release.
—
You are receiving this because you are on a team that was mentioned.
Reply to this email directly, view it on GitHub, or mute the thread.
/status approved-for-milestone
[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process
sig/api-machinery sig/cli: Issue will be escalated to these SIGs if needed.
priority/important-soon: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
kind/bug: Fixes a bug discovered during the current release.
Closed #65818.
/assign
/reopen
To track cherry picks.
@nikhita: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/assign
/reopen
To track cherry picks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
Reopened #65818.
[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process
@aleksandra-malinowska @hzxuzhonghu @nikhita
sig/api-machinery sig/cli: Issue will be escalated to these SIGs if needed.
priority/important-soon: Escalate to the issue owners and SIG owner; move out of milestone after several unsuccessful escalation attempts.
kind/bug: Fixes a bug discovered during the current release.
[MILESTONENOTIFIER] Milestone Issue: Up-to-date for process
Issue Labels:
sig/api-machinery sig/cli: Issue will be escalated to these SIGs if needed.
priority/critical-urgent: Never automatically move issue out of a release milestone; continually escalate to contributor and SIG through all available channels.
kind/bug: Fixes a bug discovered during the current release.

master and v1.11 pick have merged, can close this tracking issue
/close
Closed #65818.
Hi. I am experiencing similar behaviour when trying to delete an API service. Please advise on whether this requires a new issue altogether or not.
I have the following:
kubectl api-resources | grep manager
challenges                            acme.cert-manager.io      true     Challenge
orders                                acme.cert-manager.io      true     Order
certificaterequests    cr,crs         cert-manager.io           true     CertificateRequest
certificates           cert,certs     cert-manager.io           true     Certificate
clusterissuers                        cert-manager.io           false    ClusterIssuer
issuers                               cert-manager.io           true     Issuer
certificates           cert,certs     certmanager.k8s.io        true     Certificate
clusterissuers                        certmanager.k8s.io        false    ClusterIssuer
issuers                               certmanager.k8s.io        true     Issuer
mutations                             webhook.cert-manager.io   false    AdmissionReview
validations                           webhook.cert-manager.io   false    AdmissionReview
But when I try:
kubectl delete apiservice v1alpha1.certmanager.k8s.io
apiservice.apiregistration.k8s.io "v1alpha1.certmanager.k8s.io" deleted
And when I then issue kubectl api-resources | grep manager again, I get the same output as above. I even tried to curl -X DELETE this and got a Success JSON response back, but nothing changes.
Am I missing something?
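For reference, the direct delete attempt looked roughly like the sketch below. This assumes `kubectl proxy` is running on its default port (8001); the APIService name is the one from the thread.

```shell
# Sketch: delete the APIService directly through the apiregistration API.
# Assumes `kubectl proxy` is already running on localhost:8001.
curl -X DELETE \
  http://localhost:8001/apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.certmanager.k8s.io
```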
Version info:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.10-gke.17", GitCommit:"27b48e2b4c9535d185ec945c6a513537e4d116cf", GitTreeState:"clean", BuildDate:"2019-10-21T20:10:26Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Quick update re: my previous comment.
I had to manually remove the old CRDs with:
kubectl delete -f https://raw.githubusercontent.com/jetstack/cert-manager/cdd513c6c50904e5534aee6225bfe27db9dbc34d/deploy/manifests/00-crds.yaml
To be able to see the old API service go away:
kubectl api-resources | grep manager
challenges                            acme.cert-manager.io      true     Challenge
orders                                acme.cert-manager.io      true     Order
certificaterequests    cr,crs         cert-manager.io           true     CertificateRequest
certificates           cert,certs     cert-manager.io           true     Certificate
clusterissuers                        cert-manager.io           false    ClusterIssuer
issuers                               cert-manager.io           true     Issuer
mutations                             webhook.cert-manager.io   false    AdmissionReview
validations                           webhook.cert-manager.io   false    AdmissionReview
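One way to check for lingering CRDs before deleting anything (a sketch; assumes a live cluster) is to list CRD names still registered under the legacy group:

```shell
# Sketch: CRD names end with their API group, so the legacy
# certmanager.k8s.io CRDs can be found by suffix.
kubectl get crd -o name | grep 'certmanager\.k8s\.io$'
```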
Is this behaviour expected?
I would expect either a failure when running kubectl delete apiservice due to the fact there's still CRDs lingering around, or those CRDs being forcibly removed due to a delete action being performed on their "root kind". 🤔
> I would expect either a failure when running kubectl delete apiservice due to the fact there's still CRDs lingering around or those CRDs being forcibly removed due to a delete action being performed to their "root kind"
The APIService is deleted successfully, but is automatically recreated because a CRD requiring that API group/version still exists (you can observe the creationTimestamp and uid change)
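The recreation can be observed directly (a sketch assuming a live cluster and the names from this thread): the uid is different on either side of the delete.

```shell
# Sketch: the APIService comes back with a new uid after deletion, because
# the CRD controller recreates it for the group/version the CRD still needs.
kubectl get apiservice v1alpha1.certmanager.k8s.io -o jsonpath='{.metadata.uid}{"\n"}'
kubectl delete apiservice v1alpha1.certmanager.k8s.io
sleep 2
kubectl get apiservice v1alpha1.certmanager.k8s.io -o jsonpath='{.metadata.uid}{"\n"}'
```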
I see. And that's what I was missing 😄
Thanks @liggitt for the clarification! 🙏