@tmszdmsk: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs
@mengqiy likely due to strategic merge patch computation sending a "remove x" patch.
The env var name is supposed to be the unique key of items in the list, yet the apiserver allowed a duplicate to be persisted in the first place. That's likely the cause of the bug.
It sounds like the validation is inconsistent with the schema's merge key. It should either not construct SMP with the env name as a key, or it shouldn't let you specify the same env var twice.
@jennybuckley would you like to look at the validation to see if it is doing the right thing? Is it intentional that people can put the same var in the list multiple times?
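For anyone trying to reproduce the state being discussed: a container spec that lists the same env var name twice is accepted and persisted today, even though name is declared as the merge key. A minimal sketch (the resource and variable names below are made up for illustration):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dup-env-repro          # hypothetical name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dup-env-repro
  template:
    metadata:
      labels:
        app: dup-env-repro
    spec:
      containers:
      - name: app
        image: nginx
        env:
        - name: DUPLICATED_VAR      # first occurrence
          value: "a"
        - name: DUPLICATED_VAR      # duplicate of the merge key "name"; the apiserver accepts this
          value: "b"
```

The first kubectl apply of this succeeds; the problems described above show up on subsequent applies, once strategic merge patch treats name as unique.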
I believe it's related to #59119. They might be the same issue.
Otherwise, to prevent the inconsistency proactively, how about issuing a GET request to decide whether to respect the last-applied-configuration annotation?
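For context, the annotation being referred to is the one kubectl apply stores on the live object to record the last applied manifest; roughly (JSON payload abbreviated here):

```yaml
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"Deployment", ... }
```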
@liggitt The comments on the Container type definition seem to describe the list as allowing duplicates. Not saying that it should be allowed, but people might be relying on that because of the comment:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L2141
@yue9944882
I think that could fix this, but that PR has been unmerged for 9 months now.
Also, it isn't very clear to me why we should be allowing multiple definitions of the same environment variable anyway. I think #59593 could fix this in the short term.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
This is most likely due to strategic merge patch not handling duplicated keys correctly, see #65106
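To make the failure mode concrete, the patch kubectl apply ends up computing for a merge-key list looks roughly like the sketch below; a removal is keyed on name, so with a duplicated name it can drop an entry the user still wants (container and variable names are illustrative):

```yaml
# Fragment of a strategic merge patch against a Deployment.
# In a list with patchStrategy=merge / patchMergeKey=name, a removal is
# expressed by naming the key and adding the $patch: delete directive.
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: DUPLICATED_VAR
          $patch: delete
```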
We are experiencing this in production on the latest "1.12.5-gke.10".
Thanks @stevelacy. Everyone is experiencing this in production.
For those interested, I "fixed" this in one of my files by removing all instances of the duplicated env var, applying that to the cluster, then adding it back in (only once this time!) and applying that. Not the end solution for sure, but it works for those who want to clean their files up.
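A rough sketch of that two-step clean-up, reusing the hypothetical variable name from the example above:

```yaml
# Step 1: apply the manifest with the duplicated variable removed entirely
# (delete both copies from the env list), then:
# Step 2: apply again with the variable present exactly once.
env:
- name: DUPLICATED_VAR   # hypothetical name; now a single occurrence
  value: "a"
```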
The easiest way to fix it is to use kubectl replace (instead of apply).
I ran into the same issue today. The solution was to run kubectl apply once more. Then it all got fixed and the env variable appeared again as defined in the manifest.
Not stale
/remove-lifecycle stale
Change the targetPort: https to http
I have a new finding: when an env var is duplicated, a value update on the second entry does not take effect.
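For anyone trying to confirm that, the situation looks roughly like this (hypothetical names again); with two entries sharing one name, editing the value on the second entry and re-applying leaves the live object unchanged:

```yaml
env:
- name: DUPLICATED_VAR
  value: "a"
- name: DUPLICATED_VAR
  value: "c"    # changing this from "b" to "c" is not reflected after kubectl apply
```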