Re: [kubernetes/kubernetes] API server accepts invalid api versions for resources (#54697)


Jordan Liggitt

Oct 31, 2017, 1:11:49 AM

To recreate:

  1. kubectl apply -f ... a normal extensions/v1beta1 DaemonSet. The object is created:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{},"name":"prometheus-node-exporter","namespace":"default"},"spec":{"template":{"metadata":{"labels":{"daemon":"prom-node-exp"},"name":"prometheus-node-exporter"},"spec":{"containers":[{"image":"prom/prometheus","name":"c","ports":[{"containerPort":9090,"hostPort":9090,"name":"serverport"}]}]}}}}
  creationTimestamp: 2017-10-31T05:01:33Z
...
  2. kubectl apply -f ... the same file, edited to be a bogus extensions/v1beta2. The object is fetched from the server at
/apis/extensions/v1beta1/namespaces/default/daemonsets/prometheus-node-exporter

This patch is sent by kubectl and accepted by the API server:

{"apiVersion":"extensions/v1beta2","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta2\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"name\":\"prometheus-node-exporter\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"daemon\":\"prom-node-exp\"},\"name\":\"prometheus-node-exporter\"},\"spec\":{\"containers\":[{\"image\":\"prom/prometheus\",\"name\":\"c\",\"ports\":[{\"containerPort\":9090,\"hostPort\":9090,\"name\":\"serverport\"}]}]}}}}\n"}}}

There are at least a couple of bugs here:

  1. kubectl apply fetches a different version of the object to compare with the on-disk version. I'd expect it to fetch the version specified in the file and to fail when the server reports that version is not available - @kubernetes/sig-cli-bugs
  2. I would not expect the API server to accept a patch that changes the kind or apiVersion of an object. The type is already known in the API server patch handler, so the kind and apiVersion in the patch are simply decoded, ignored, and then overwritten when the object is converted back to the serialized form for etcd (see the sketch below).
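To see the server-side half in isolation, here is a minimal sketch of bug 2, assuming a local kubectl proxy on port 8001 and the object created above; the curl invocation is illustrative, not taken from the original report.

kubectl proxy --port=8001 &

# Patch only the apiVersion; before a fix, the API server accepts this even
# though extensions/v1beta2 is not a served version: the field is decoded,
# ignored, and overwritten when the object is re-serialized for etcd.
curl -X PATCH \
  -H 'Content-Type: application/strategic-merge-patch+json' \
  -d '{"apiVersion":"extensions/v1beta2"}' \
  http://127.0.0.1:8001/apis/extensions/v1beta1/namespaces/default/daemonsets/prometheus-node-exporter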



Josh Horwitz

Oct 31, 2017, 1:15:05 AM

Sorry, I ran apply the same way as in @liggitt's example.

kubectl and the API server are 1.8.1, but I've tested this on 1.8.0 as well (which is why I put 1.8.x). I haven't tested this on 1.7.x.

Jordan Liggitt

Oct 31, 2017, 1:44:11 AM

Fix for the API server silently ignoring kind/apiVersion-changing patches is in #54840.
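A quick way to check the new behavior once that change is in (a hedged sketch; the exact error text will depend on the server build):

# With the fix, a patch that tries to change apiVersion or kind should be
# rejected instead of being silently rewritten to the known version.
kubectl patch daemonset prometheus-node-exporter --type merge \
  -p '{"apiVersion":"extensions/v1beta2"}'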

fejta-bot

Apr 15, 2018, 12:07:08 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Nikhita Raghunath

Apr 15, 2018, 1:22:50 AM

/remove-lifecycle stale

fejta-bot

Sep 5, 2018, 5:29:56 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Nikhita Raghunath

Sep 13, 2018, 4:08:41 PM

/remove-lifecycle stale

PR open.

fejta-bot

Dec 12, 2018, 4:31:22 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Jan 11, 2019, 5:15:50 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Feb 10, 2019, 5:32:08 PM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Feb 10, 2019, 5:32:32 PM

Closed #54697.

Kubernetes Prow Robot

Feb 10, 2019, 5:32:44 PM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
