Re: [kubernetes/kubernetes] `kubectl apply` (client-side) removes all entries when attempting to remove a single duplicated entry in a persisted object (#58477)


issssu

unread,
May 6, 2024, 4:30:23 AM5/6/24
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Same issue with 1.25.3.
Why would we need envs with the same key? Can duplicate keys be deleted automatically?
The warning is easy to ignore; then, the next time I try to delete the duplicate key, Kubernetes deletes all envs with that key. This can lead to serious accidents.
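One way to avoid the dangerous apply entirely is to strip duplicates from the manifest before it ever reaches the cluster. Below is a minimal, hypothetical sketch (not part of kubectl) that deduplicates a container's `env` list, keeping the last occurrence of each name; "last wins" is an assumption about the desired semantics, not something the Kubernetes API guarantees.

```python
# Illustrative helper: remove duplicate env-var names from a PodSpec-style
# env list before applying it. Keeps the LAST occurrence of each name
# (an assumption; pick the semantics your app actually needs).
def dedupe_env(env):
    """env: list of {"name": ..., "value": ...} dicts, as in a container spec."""
    seen = {}
    for entry in env:
        seen[entry["name"]] = entry  # later entries overwrite earlier ones
    return list(seen.values())

env = [
    {"name": "ENV_VAR", "value": "old"},
    {"name": "OTHER", "value": "x"},
    {"name": "ENV_VAR", "value": "new"},
]
print(dedupe_env(env))
# → [{'name': 'ENV_VAR', 'value': 'new'}, {'name': 'OTHER', 'value': 'x'}]
```

Running a check like this in CI would surface duplicates before `kubectl apply` can mangle the persisted object.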


Reply to this email directly, view it on GitHub, or unsubscribe.
You are receiving this because you are on a team that was mentioned.Message ID: <kubernetes/kubernetes/issues/58477/2095449140@github.com>

Benjamin Dumke-von der Ehe

unread,
Apr 17, 2025, 4:20:56 PM4/17/25
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Just want to leave another workaround here that worked in our case, maybe it helps someone else.

If you have a duplicate ENV_VAR and want to remove one of them, and (this is important and may or may not be the case for you) your container doesn't care about the casing of the variable name, then you can remove one of the variables and change the casing of the other.

Since Kubernetes considers ENV_VAR and Env_Var to be different things, this will circumvent the problem.
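Sketched as a manifest fragment (container and variable names here are hypothetical):

```yaml
# Before: ENV_VAR appeared twice; client-side apply could not remove just one.
# After: keep a single entry under a re-cased name, since Kubernetes treats
# env names case-sensitively, so Env_Var is a distinct key from ENV_VAR.
containers:
  - name: my-app          # hypothetical container name
    env:
      - name: Env_Var     # formerly one of the duplicate ENV_VAR entries
        value: "kept"
```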




Maciej Szulik

unread,
Aug 26, 2025, 8:28:34 AM8/26/25
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention
soltysh left a comment (kubernetes/kubernetes#58477)

#125932 is currently being tracked as the long-term solution



Daniel Hoherd

unread,
Mar 30, 2026, 10:34:55 AM (10 days ago) Mar 30
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention
danielhoherd left a comment (kubernetes/kubernetes#58477)

Here's a detail I think is important enough to mention here: helm 4 treats duplicate env vars as a blocking condition and fails when it encounters one. (I'm not sure how far this extends beyond env vars, since those are the only duplicate objects I've encountered.) This means that any application waiting on this patch-handling behavior to be addressed in the k8s API will be unable to adopt helm 4. Also notable: the helm 3 docs list the most recent version of helm 3 as no longer maintained (https://helm.sh/docs/v3), which means that folks who have this duplicate-object problem may soon be put in a difficult position.

For example, an application I work with experiences this bug. The workaround is to remove the duplicates in the helm chart that deploys the app, release that version of the chart, and then have every user who upgrades manually delete the affected object (or potentially hand-edit it to delete the correct duplicate entry) before running a helm upgrade. This is a failure-prone upgrade path that does not scale well and can lead to second-order problems.


