—
it is applied client-side
So it sounds like my understanding of it as a timing issue is likely correct. Ah well. In the meantime, I added a wrapper around it to catch and retry, so it can handle a few failures. Since I am only starting 3 master nodes, 3 retries with a 10-second backoff should be far more than enough to handle it.
Still, it probably is right to have the upsert logic handled server-side?
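A minimal sketch of that kind of wrapper, assuming a shell script driving kubectl (the function name and manifest path are placeholders, not the actual script):

#!/bin/bash
# Retry kubectl apply a few times with a fixed backoff, to ride out the
# transient AlreadyExists race when several masters come up at once.
apply_with_retry() {
  local manifest="$1"
  local attempts=3
  local backoff=10
  for ((i = 1; i <= attempts; i++)); do
    kubectl apply -f "$manifest" && return 0
    if (( i < attempts )); then
      echo "kubectl apply failed (attempt ${i}/${attempts}); retrying in ${backoff}s" >&2
      sleep "$backoff"
    fi
  done
  return 1
}

apply_with_retry manifest.yaml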
Thanks @liggitt
@liggitt @deitch this is definitely client-side behavior. kubectl apply will first check whether the object exists; if not, it tries the create logic. But it cannot prevent another client from creating the same object in the meantime. For that situation (an AlreadyExists error), kubectl should fall back to the normal patch logic, since there is no guarantee the new object created in the meantime matches the manifest that was requested to be applied. What do you think?
@adohe so the upsert activity really isn't a kube-apiserver capability (well, controller-manager, but whatever), but a kubectl capability that just does "check if exists: if yes, update; if not, create."
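To make the race concrete, a rough sketch with plain kubectl commands rather than the actual apply internals (the service name and manifest file are placeholders):

# Two clients running the same check-then-create at roughly the same time.
# Client A:
kubectl get svc test-database >/dev/null 2>&1 || kubectl create -f svc.yaml
# Client B, racing with A:
kubectl get svc test-database >/dev/null 2>&1 || kubectl create -f svc.yaml
# If both "get" calls return NotFound before either "create" lands, both fall
# through to "create", and the slower one fails with AlreadyExists.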
kubectl should fall back to the normal patch logic, since there is no guarantee the new object created in the meantime matches the manifest that was requested to be applied
Yes.
Ideally, I think you would want that upsert logic in the server, so kubectl (i.e. a REST API call) can just do a simple apply call. In the meantime, though, yes, a kubectl-side: "check if exists: if yes, update; if not, try to create, but trap the AlreadyExists error, and then fall back to update."
Come to think of it, would it not be simpler if every apply just did, "create; if it fails with an AlreadyExists error, then update"?
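A minimal sketch of that create-first idea with plain kubectl commands (this is not the actual kubectl apply code path; manifest.yaml is a placeholder):

# Try the create first; if another client created the object in the meantime,
# fall back to patching the now-existing object instead of failing.
if ! kubectl create -f manifest.yaml 2>/tmp/create.err; then
  if grep -q AlreadyExists /tmp/create.err; then
    kubectl apply -f manifest.yaml
  else
    cat /tmp/create.err >&2
    exit 1
  fi
fi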
@deitch we did plan to move apply to server-side :)
we did plan to move apply to server-side
@adohe Thinner client, simpler logic, complexity on the server instead of client? I cannot imagine why... :-)
Of course, you will still need the same logic, since two server threads could handle it at the same time.
/assign @adohe
will fix this before this weekend.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
@adohe any update?
@adohe did you ever fix this? Still seeing this issue in server 1.7.2 and client 1.8.4
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
I can reproduce this with v1.9.3 when creating 3 masters at the same time, just FYI.
This issue poses challenges to those who run kubectl apply in their deployment pipelines.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #44165.
This is definitely still an issue, /reopen
@Oskoss: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Reopened #44165.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
@epa095: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Reopened #44165.
Is there any way to disable the rotten closure bot on this issue?
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Reopened #44165.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Closed #44165.
/reopen
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
any update on this?
/reopen
@diegom626: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
any update on this?
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Please reopen this issue.
v1.18.0
/reopen
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
@hjkatz: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
Please bot, stop closing this issue if it still exists.
@hjkatz: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Please bot, stop closing this issue if it still exists.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
—
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Closed #44165.
@deitch Would you do the honors? 😆 Every 30 days!
/reopen
/remove-lifecycle rotten
Reopened #44165.
@deitch: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
I am honoured @hjkatz 😄
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
Closed #44165.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
/reopen
—
@hetii: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
—
Same issue on:
$ kubectl get node
NAME STATUS ROLES AGE VERSION
10.xx.xx.37 Ready master,node 35m v1.27.16
10.xx.xx.45 Ready master,node 35m v1.27.16
10.xx.xx.46 Ready master,node 35m v1.27.16
"cmd": ["kubectl", "--kubeconfig=/etc/kubernetes/cluster1/.kube/config", "apply", "-f", "/tmp/mysql-svc.yaml"],
"delta": "0:00:00.649216", "end": "2025-01-07 11:16:24.104712",
"msg": "non-zero return code", "rc": 1, "start": "2025-01-07 11:16:23.455496",
"stderr": "Error from server (AlreadyExists): error when creating \"/tmp/mysql-svc.yaml\": services \" test-database\" already exists\nError from server (AlreadyExists): error when creating \"/tmp/mysql-svc.yaml\": endpoints \" test-database\" already exists",
"stderr_lines": ["Error from server (AlreadyExists): error when creating \"/tmp/mysql-svc.yaml\": services \" test-database\" already exists", "Error from server (AlreadyExists): error when creating \"/tmp/mysql-svc.yaml\": endpoints \" test-database\" already exists"], "stdout": "", "stdout_lines": []}
—
will fix this before this weekend.
Whew, I wish I also had (almost) 7-year-long weekends.
Jokes aside, this is still an issue and can be easily reproduced using the following script:
#!/bin/bash
kubectl delete ns test-ns || true
kubectl create ns test-ns --dry-run=client -o yaml | kubectl apply -f - &
kubectl create ns test-ns --dry-run=client -o yaml | kubectl apply -f - &
kubectl create ns test-ns --dry-run=client -o yaml | kubectl apply -f - &
which results in:
Error from server (NotFound): namespaces "test-ns" not found
namespace/test-ns created
Error from server (AlreadyExists): error when creating "STDIN": namespaces "test-ns" already exists
Error from server (AlreadyExists): error when creating "STDIN": namespaces "test-ns" already exists
My use case is several CI jobs running at the same time, each of which is supposed to deploy a different Helm release, but into the same namespace. (Because the namespace needs to be labeled as well, I need to run kubectl and can't use Helm's --create-namespace.)
If the check happened on the server side, this probably wouldn't be a problem.
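For what it's worth, a possible mitigation for the namespace case, assuming a cluster recent enough to support kubectl apply --server-side (the label key/value here are placeholders): let the API server make the create-or-update decision instead of the client. It may not remove every race, but it sidesteps the client-side existence check described above.

# Create-or-update the namespace via server-side apply, then label it idempotently.
kubectl create ns test-ns --dry-run=client -o yaml \
  | kubectl apply --server-side -f -
kubectl label ns test-ns team=ci --overwrite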
—