To recreate:
`kubectl apply -f ...` a normal extensions/v1beta1 DaemonSet. The object is created:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"DaemonSet","metadata":{"annotations":{},"name":"prometheus-node-exporter","namespace":"default"},"spec":{"template":{"metadata":{"labels":{"daemon":"prom-node-exp"},"name":"prometheus-node-exporter"},"spec":{"containers":[{"image":"prom/prometheus","name":"c","ports":[{"containerPort":9090,"hostPort":9090,"name":"serverport"}]}]}}}}
  creationTimestamp: 2017-10-31T05:01:33Z
  ...
```
`kubectl apply -f ...` the same file edited to be a bogus extensions/v1beta2 (a reconstructed sketch of the edited file follows below). The object is fetched from the server at `/apis/extensions/v1beta1/namespaces/default/daemonsets/prometheus-node-exporter`.
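For reference, a reconstruction of the edited file, pieced together from the last-applied-configuration above; only the apiVersion differs from the file applied in the first step (the field values come from that annotation, the YAML layout is mine):

```yaml
# Reconstructed repro manifest (sketch); only apiVersion is changed to the bogus group/version.
apiVersion: extensions/v1beta2   # bogus: there is no extensions/v1beta2 on the server
kind: DaemonSet
metadata:
  name: prometheus-node-exporter
  namespace: default
spec:
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        daemon: prom-node-exp
    spec:
      containers:
      - name: c
        image: prom/prometheus
        ports:
        - name: serverport
          containerPort: 9090
          hostPort: 9090
```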
This patch is sent by kubectl and accepted by the API server:
{"apiVersion":"extensions/v1beta2","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta2\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"name\":\"prometheus-node-exporter\",\"namespace\":\"default\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"daemon\":\"prom-node-exp\"},\"name\":\"prometheus-node-exporter\"},\"spec\":{\"containers\":[{\"image\":\"prom/prometheus\",\"name\":\"c\",\"ports\":[{\"containerPort\":9090,\"hostPort\":9090,\"name\":\"serverport\"}]}]}}}}\n"}}}
There are at least a couple of bugs here:
- `kubectl apply` fetches a different version for comparing with the on-disk version. I'd expect it to fetch the version specified in the file and fail when the server reports that version is not available.
- The API server accepts the apiVersion/kind-changing patch above, silently ignoring the change, rather than rejecting it.

@kubernetes/sig-cli-bugs
Sorry, I ran apply the same way as in @liggitt's example.
kubectl and the API server are 1.8.1, but I've tested this on 1.8.0 as well (which is why I put 1.8.x). I haven't tested this on 1.7.x.
Fix for the API server ignoring kind/apiVersion-changing patches is in #54840.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
PR open.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #54697.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.