Re: [kubernetes/kubernetes] Duplicated environment variable in deployment disappears completely when fixed (#58477)

k8s-ci-robot

Jan 18, 2018, 5:03:42 PM

@tmszdmsk: Reiterating the mentions to trigger a notification:
@kubernetes/sig-api-machinery-bugs

In response to this:

@kubernetes/sig-api-machinery-bugs

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



Jordan Liggitt

Jan 18, 2018, 5:24:02 PM

@mengqiy likely due to the strategic merge patch computation sending a "remove x" patch
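
A rough, hypothetical sketch of what such a "remove" directive looks like inside a strategic merge patch (the container and env var names below are made up; the actual patch kubectl computes may differ):

    spec:
      template:
        spec:
          containers:
          - name: app                      # hypothetical container name
            env:
            - $patch: delete               # SMP directive: remove the item whose merge key matches
              name: MY_DUPLICATED_VAR      # hypothetical env var name (the merge key)

Because "name" is the merge key, a delete directive like this could plausibly drop every entry with that name if the live list contains duplicates, which would match the reported symptom.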

Jordan Liggitt

Jan 18, 2018, 5:25:21 PM

The env var name is supposed to be the unique key of items in the list, yet the apiserver allowed a duplicate to be persisted in the first place. That's likely the cause of the bug.
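
For illustration, a minimal, hypothetical container fragment that the apiserver accepts today even though two entries share the merge key (all names here are made up):

    containers:
    - name: app                    # hypothetical container name
      image: example/app:1.0       # hypothetical image
      env:
      - name: SOME_VAR             # same name appears twice; validation does not reject it
        value: "first"
      - name: SOME_VAR
        value: "second"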

Daniel Smith

Jan 22, 2018, 4:30:28 PM

It sounds like the validation is inconsistent with the schema's merge key. Either the strategic merge patch (SMP) should not be constructed with the env name as a key, or the apiserver shouldn't let you specify the same env var twice.

@jennybuckley would you like to look at the validation to see if it is doing the right thing? Is it intentional that people can put the same var in the list multiple times?

Kim Min

Feb 8, 2018, 3:42:33 AM

I believe it's related to #59119. They might be the same issue.

Kim Min

Feb 8, 2018, 3:59:24 AM

Otherwise, to prevent the inconsistency proactively, how about issuing a GET request to decide whether to respect the last-applied-configuration annotation?

Jenny Buckley

Feb 8, 2018, 2:23:46 PM

@liggitt The comments on the type definition for Container seem to describe the env list as allowing duplicates. I'm not saying it should be allowed, but people might be relying on that because of the comment:
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L2141

Jenny Buckley

Feb 8, 2018, 5:11:40 PM

@yue9944882
I think that could fix this, but that PR has been open without merging for 9 months now.
It also isn't very clear to me why we should allow multiple definitions of the same environment variable anyway. I think #59593 could fix this in the short term.

fejta-bot

May 9, 2018, 6:51:19 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Tomasz Adamski

May 10, 2018, 4:22:12 AM

/remove-lifecycle stale

Maxim Ivanov

Jun 15, 2018, 8:46:04 AM

This is most likely due to strategic merge patch not handling duplicated keys correctly, see #65106

fejta-bot

Sep 13, 2018, 9:30:12 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

jethrogb

Sep 13, 2018, 11:32:33 AM

/remove-lifecycle stale

fejta-bot

Dec 12, 2018, 11:26:41 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Maxim Ivanov

Dec 12, 2018, 11:35:49 AM

/remove-lifecycle stale

fejta-bot

Mar 12, 2019, 1:12:00 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

jethrogb

Mar 12, 2019, 1:15:24 PM

/remove-lifecycle stale

Steve Lacy

Mar 27, 2019, 4:17:57 PM

We are experiencing this in production on the latest "1.12.5-gke.10".

jethrogb

Mar 27, 2019, 4:20:07 PM

Thanks @stevelacy. Everyone is experiencing this in production.

Alex Sears

Aug 26, 2019, 4:09:54 PM

For those interested, I "fixed" this in one of my files by removing all instances of the duplicated env var, applying that to the cluster, then adding it back (only once this time!) and applying again. Not a permanent solution for sure, but it works for those who want to clean up their files.
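
A sketch of that workaround, with hypothetical file and resource names:

    # 1. Remove every occurrence of the duplicated variable from the manifest, then:
    kubectl apply -f deployment.yaml
    # 2. Edit deployment.yaml to add the variable back exactly once, then:
    kubectl apply -f deployment.yaml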

jethrogb

Aug 26, 2019, 4:19:29 PM

The easiest way to fix it is to use kubectl replace (instead of apply).
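
For example, with a hypothetical file name (replace sends the full object instead of computing a strategic merge patch, so the patch computation is bypassed):

    kubectl replace -f deployment.yaml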

Michal Stanke

Oct 2, 2019, 12:34:43 PM

I ran into the same issue today. The solution was to run kubectl apply once more. Then everything got fixed and the env variable appeared again, as defined in the manifest.

fejta-bot

Dec 31, 2019, 12:09:01 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale



Jay Gorrell

Dec 31, 2019, 12:37:42 PM

Not stale

Jay Gorrell

Dec 31, 2019, 12:37:54 PM

/remove-lifecycle stale

pbbhopp

Feb 21, 2020, 6:24:31 AM

Change the targetPort: https to http

futangwa

May 4, 2020, 11:20:47 PM

I have a new finding: in the case of a duplicated env var, a value update on the 2nd entry does not take effect.

  1. grep TEST_DUPLICATED_ENV test.yaml -A1

     - name: TEST_DUPLICATED_ENV
       value: "123456789"
     - name: TEST_DUPLICATED_ENV
       value: "987654321"

  2. After kubectl apply, the pod has these values:

     kubectl describe po test-f94649c77-47s4m | grep TEST_DUPLICATED_ENV
     TEST_DUPLICATED_ENV: 123456789
     TEST_DUPLICATED_ENV: dup1

     The 'dup1' is the value from before the apply.