@DirectXMan12: GitHub didn't allow me to assign the following users: RobertKrawitz.
Note that only kubernetes members and repo collaborators can be assigned.
In response to this:
/assign @RobertKrawitz
@kubernetes/sig-api-machinery-bugs how do we want to approach fixing this?
@liggitt I seem to recall you having opinions on patch in the past...
I confirm this still exists and the analysis by @RobertKrawitz is excellent and very plausible.
WORSE -- it will consider two ports as identical even if they differ in protocol. Looking at types.go:
// The list of ports that are exposed by this service.
// More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
// +patchMergeKey=port
// +patchStrategy=merge
// +listType=map
// +listMapKey=port
// +listMapKey=protocol
Ports []ServicePort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"port" protobuf:"bytes,1,rep,name=ports"`
So dear api-machinery people -- how should we be approaching this? The correct key is really protocol + port (at minimum), but even then the original problem exists. This should really be an error...
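To make that concrete, here is a minimal sketch of the merge behavior (assuming k8s.io/api and k8s.io/apimachinery are on the module path; the port names are invented for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Existing Service spec: one TCP port.
	original := []byte(`{"spec":{"ports":[{"name":"tcp-6666","port":6666,"protocol":"TCP","targetPort":6666}]}}`)
	// Patch that is *intended* to add a UDP port on the same port number.
	patch := []byte(`{"spec":{"ports":[{"name":"udp-6666","port":6666,"protocol":"UDP","targetPort":6666}]}}`)

	// The merge key comes from the patchMergeKey=port tag shown above, so the
	// UDP entry matches the existing TCP entry and is merged over it instead of
	// being appended as a second, distinct port.
	merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Service{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
	// Expected result under the current key: a single entry, now named
	// "udp-6666" with protocol UDP -- the original TCP port silently disappears.
}
```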
Server: v1.14.0
Client: v1.14.0
Patch URI: https://172.18.141.129:6443/api/v1/namespaces/default/services/example
BODY:
{
  "spec": {
    "ports": [
      {
        "name": "port-0",
        "port": 6666,
        "protocol": "TCP",
        "targetPort": 6666
      }
    ]
  }
}
Response:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service "example" is invalid: spec.ports[1].name: Duplicate value: "port-0"","reason":"Invalid","details":{"name":"example","kind":"Service","causes":[{"reason":"FieldValueDuplicate","message":"Duplicate value: "port-0"","field":"spec.ports[1].name"}]},"code":422}
The Service "example" is invalid: spec.ports[1].name: Duplicate value: "port-0"
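For what it's worth, that 422 falls straight out of the merge-by-port semantics: if the live Service already has a port named port-0 on a different port number (an assumption for illustration; 7777 is used below), the patched entry matches nothing by its merge key, gets appended, and validation then rejects the duplicate name. A rough sketch of that merge step:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

func main() {
	// Assumed live object: "port-0" already exists, but on port 7777.
	live := []byte(`{"spec":{"ports":[{"name":"port-0","port":7777,"protocol":"TCP","targetPort":7777}]}}`)
	// The PATCH body from the report above.
	patch := []byte(`{"spec":{"ports":[{"name":"port-0","port":6666,"protocol":"TCP","targetPort":6666}]}}`)

	// The merge key is "port", not "name": 6666 != 7777, so the patched entry is
	// appended rather than replacing the existing one, leaving two ports named
	// "port-0" in the merged spec.
	merged, err := strategicpatch.StrategicMergePatch(live, patch, corev1.Service{})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(merged))
	// The merged spec then fails validation with
	// spec.ports[1].name: Duplicate value: "port-0" -- the 422 shown above.
}
```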
I believe SSA code should reject this. I am not sure if we're failing requests yet when SSA barfs (@apelisse probably knows). We are slightly worried about making client requests that used to "work" stop working. In this case the result is clearly garbage so that's probably not an issue.
(sorry for extremely delayed response, this is the first I saw the issue in my email)
@apelisse This is popping up again - any thoughts on how to crack this?
I believe SSA code should reject this. I am not sure if we're failing requests yet when SSA barfs (@apelisse probably knows).
We do.
My understanding is that there is a difference between the manually written validation and the openapi extensions (patchMergeKey) semantics, so the patch (built by kubectl using openapi) and the create/update paths don't behave the same way.
Server-side apply will help address the apply case in the future, and probably solves the problem altogether if you use server-side apply for both create and update. Eventually we should fix the validation (probably when we have ratcheting), either by using the openapi for validation on all types of requests, or by validating that there are no duplicate keys.
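For anyone who wants to try the server-side apply route today, a rough sketch with client-go looks like the following (the namespace, field manager name, and the second UDP port are placeholder assumptions). Because ports carry +listType=map with +listMapKey=port and +listMapKey=protocol, SSA merges entries on the (port, protocol) pair and, per the discussion above, is expected to fail requests whose applied configuration repeats a key:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig loading; error handling trimmed for brevity.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Applied configuration: ports form an associative list keyed on
	// (port, protocol), so a TCP 6666 entry and a UDP 6666 entry are distinct,
	// while a config repeating the same (port, protocol) pair should be rejected.
	svc := []byte(`{
	  "apiVersion": "v1",
	  "kind": "Service",
	  "metadata": {"name": "example"},
	  "spec": {
	    "ports": [
	      {"name": "tcp-6666", "port": 6666, "protocol": "TCP", "targetPort": 6666},
	      {"name": "udp-6666", "port": 6666, "protocol": "UDP", "targetPort": 6666}
	    ]
	  }
	}`)

	force := true
	result, err := client.CoreV1().Services("default").Patch(
		context.TODO(), "example", types.ApplyPatchType, svc,
		metav1.PatchOptions{FieldManager: "port-demo", Force: &force})
	if err != nil {
		panic(err)
	}
	fmt.Printf("applied, ports: %v\n", result.Spec.Ports)
}
```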
duplicate report in #97883
I'm facing the same issue on a k3s cluster, v1.21.2+k3s1.
Is this still a problem if SSA is used (kubectl apply --server-side)? If so, what is the failure point now?
See #103544 for status.
Closed #59119.