Re: [kubernetes/kubernetes] apiserver allows duplicate service port (#59119)


k8s-ci-robot

Feb 13, 2018, 4:47:30 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

@DirectXMan12: GitHub didn't allow me to assign the following users: RobertKrawitz.

Note that only kubernetes members and repo collaborators can be assigned.

In response to this:

/assign @RobertKrawitz

@kubernetes/sig-api-machinery-bugs how do we want to approach fixing this?
@liggitt I seem to recall you having opinions on patch in the past...

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



Solly Ross

Feb 13, 2018, 4:47:54 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/assign @RobertKrawitz

@kubernetes/sig-api-machinery-bugs how do we want to approach fixing this?
@liggitt I seem to recall you having opinions on patch in the past...

fejta-bot

May 14, 2018, 5:47:59 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

James Ravn

May 15, 2018, 10:21:52 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

Daniel Smith

May 15, 2018, 12:11:28 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention
Sounds like a bug in the patch application code -- it should only validate fully reified (i.e., post-patch-application) objects.
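A minimal sketch of the merge step Daniel is describing, using the public strategicpatch helper from k8s.io/apimachinery (the port names and numbers are invented for illustration): the patch entry does not match the existing entry on the merge key ("port"), so it is appended, and the fully reified result is what validation then has to judge.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        // Live object: a single port named "port-0" on 7777.
        original := []byte(`{"spec":{"ports":[{"name":"port-0","port":7777,"protocol":"TCP"}]}}`)
        // Patch reuses the name but changes the port number. The merge key
        // for spec.ports is "port", so this matches nothing and is appended.
        patch := []byte(`{"spec":{"ports":[{"name":"port-0","port":6666,"protocol":"TCP"}]}}`)

        merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Service{})
        if err != nil {
            panic(err)
        }
        // The merged ports list has two entries, both named "port-0" -- the
        // post-patch-application object that validation should be run against.
        fmt.Println(string(merged))
    }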



fejta-bot

Aug 13, 2018, 12:22:37 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale



jethrogb

Aug 13, 2018, 12:43:35 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Nov 11, 2018, 12:19:30 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

jethrogb

Nov 11, 2018, 12:32:50 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Feb 9, 2019, 1:04:03 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

James Ravn

Feb 10, 2019, 12:21:25 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

Tim Hockin

May 9, 2019, 2:29:03 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

I confirm this still exists and the analysis by @RobertKrawitz is excellent and very plausible.

WORSE -- it will consider two ports as identical even if they differ in protocol. Looking at types.go:

    // The list of ports that are exposed by this service.
    // More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
    // +patchMergeKey=port
    // +patchStrategy=merge
    // +listType=map
    // +listMapKey=port
    // +listMapKey=protocol
    Ports []ServicePort `json:"ports,omitempty" patchStrategy:"merge" patchMergeKey:"port" protobuf:"bytes,1,rep,name=ports"`

So dear api-machinery people -- how should we be approaching this? The correct key is really the protocol + port (at minimum), but even then the original problem exists. This should really be an error...
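A minimal sketch of that hazard, under the same assumptions as the sketch above (invented names and ports, the public strategicpatch helper rather than the apiserver's internal path): because "port" is the sole patch merge key, a patch adding UDP on an existing port number matches and overwrites the TCP entry instead of creating a second port.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        // Existing service exposes DNS over TCP on port 53.
        original := []byte(`{"spec":{"ports":[{"name":"dns-tcp","port":53,"protocol":"TCP"}]}}`)
        // Patch intends to *add* DNS over UDP on the same port number.
        patch := []byte(`{"spec":{"ports":[{"name":"dns-udp","port":53,"protocol":"UDP"}]}}`)

        merged, err := strategicpatch.StrategicMergePatch(original, patch, corev1.Service{})
        if err != nil {
            panic(err)
        }
        // The UDP entry is keyed to the same "port" value, so it merges into
        // the TCP entry and clobbers it: the result has one port, UDP/53 only.
        fmt.Println(string(merged))
    }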

SaintLiber

Jul 4, 2019, 4:13:02 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Server: v1.14.0
Client: v1.14.0
Patch URI: https://172.18.141.129:6443/api/v1/namespaces/default/services/example
BODY:

{
  "spec": {
    "ports": [
      {
        "name": "port-0",
        "port": 6666,
        "protocol": "TCP",
        "targetPort": 6666
      }
    ]
  }
}

Response:

{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Service \"example\" is invalid: spec.ports[1].name: Duplicate value: \"port-0\"","reason":"Invalid","details":{"name":"example","kind":"Service","causes":[{"reason":"FieldValueDuplicate","message":"Duplicate value: \"port-0\"","field":"spec.ports[1].name"}]},"code":422}

The Service "example" is invalid: spec.ports[1].name: Duplicate value: "port-0"
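For readers reconstructing this repro: the 422 at spec.ports[1].name implies the live Service already had a port named "port-0" on a different port number, so the patch entry (keyed on "port") was appended rather than merged. A plausible pre-existing spec (hypothetical; not part of the original report) would be:

{
  "spec": {
    "ports": [
      {
        "name": "port-0",
        "port": 7777,
        "protocol": "TCP",
        "targetPort": 7777
      }
    ]
  }
}

After the merge, both entries carry the name "port-0", and validation of the merged object reports the duplicate.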

fejta-bot

Oct 2, 2019, 4:54:12 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Mike Miller

Oct 2, 2019, 5:22:53 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Dec 31, 2019, 5:01:59 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale



jethrogb

Dec 31, 2019, 9:16:16 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Mar 30, 2020, 11:14:22 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Apr 29, 2020, 11:57:04 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

jethrogb

Apr 29, 2020, 12:03:05 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle rotten

Daniel Smith

Apr 29, 2020, 1:34:03 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

I believe SSA code should reject this. I am not sure if we're failing requests yet when SSA barfs (@apelisse probably knows). We are slightly worried about making client requests that used to "work" stop working. In this case the result is clearly garbage so that's probably not an issue.

(sorry for extremely delayed response, this is the first I saw the issue in my email)

Tim Hockin

May 28, 2020, 4:07:05 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

@apelisse This is popping up again - any thoughts on how to crack this?

Antoine Pelisse

Jun 1, 2020, 5:25:16 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

> I believe SSA code should reject this. I am not sure if we're failing requests yet when SSA barfs (@apelisse probably knows).

We do.

My understanding is that there is a difference between the manually written validation and the openapi extensions (patchMergeKey) semantics, so the patch (built by kubectl using openapi) and the create/update don't behave the same way.

Server-side apply will help address the apply case in the future, and probably solve the problem altogether if you use server-side apply for both create and update. Eventually we should fix the validation (probably when we have ratcheting) so that we either use the openapi for validation on all types of requests, or validate that there are no duplicate keys.

fejta-bot

Aug 30, 2020, 6:00:48 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

jethrogb

Aug 31, 2020, 7:41:46 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

fejta-bot

Nov 29, 2020, 7:22:32 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Dec 29, 2020, 8:07:19 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

jethrogb

Dec 29, 2020, 4:06:26 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle rotten

Jordan Liggitt

Jan 11, 2021, 8:20:25 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Duplicate report in #97883.

fejta-bot

Apr 11, 2021, 9:21:20 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

James Ravn

Apr 12, 2021, 4:53:49 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

nurhun

Jul 7, 2021, 9:59:00 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

I'm facing the same issue on a k3s cluster, v1.21.2+k3s1.

Daniel Smith

Jul 7, 2021, 12:17:31 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Is this still a problem if SSA is used (kubectl apply --server-side)? If so, what is the failure point now?

Kubernetes Triage Robot

Oct 5, 2021, 12:19:58 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale



jethrogb

Oct 5, 2021, 12:25:47 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

/remove-lifecycle stale

Tim Hockin

Oct 5, 2021, 1:08:57 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

See #103544 for status.

Tim Hockin

Oct 5, 2021, 1:08:58 PM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

Closed #59119.
