I haven't thought about this issue for a while, but I'm pretty sure we can solve our use case, recording which git commit or Jenkins run resulted in an apply, with our own custom annotation rather than a kube-standard one.
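For anyone following along, a minimal sketch of that custom-annotation approach, assuming it runs as a CI step; the manifest path, deployment name, and annotation key are placeholders, not anything kube-standard (GIT_COMMIT and BUILD_NUMBER are the usual Jenkins environment variables):

# record which commit/build performed this apply; names below are illustrative
kubectl apply -f manifests/
kubectl annotate deploy/myapp \
  example.com/applied-from="commit ${GIT_COMMIT}, jenkins build ${BUILD_NUMBER}" \
  --overwrite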
This is not ideal, because we already do this sort of thing in kubectl, albeit storing less valuable info, i.e. the kubectl command that was invoked. @kubernetes/sig-cli-feature-requests let's add a new flag in kubectl that users can use, similar to --record, but instead of storing the invoked command, it would store a string passed by the user.
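Purely to illustrate the shape of that proposal (the flag below does not exist in kubectl; the flag name and value are made up):

# hypothetical flag, not implemented today
kubectl apply -f deploy.yaml --change-cause='deployed commit abc1234 from jenkins build 42'

The supplied string, rather than the raw command line, would then show up as the CHANGE-CAUSE in rollout history.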
What alternative do you suggest? Repurpose the current flag? Something else? We need users/automated processes to be able to specify a reason when images (or, less frequently, other parts of the pod spec) are updated, so things like kubectl set image or kubectl apply need to pass that info down from the Deployment to the ReplicaSet.
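As a side note, once the cause is recorded on the Deployment it does get carried down to the ReplicaSet for the corresponding revision, which is roughly where rollout history reads it from; the ReplicaSet name below is just a placeholder:

kubectl get rs nginx-6d4cf56db6 -o yaml | grep change-cause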
I'm leaning towards an automated process; I don't have any details figured out yet, but will keep you posted.
/cc @AdoHe
/cc
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #40422.
/reopen
@liggitt: you can't re-open an issue/PR unless you authored it or you are assigned to it.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Reopened #40422.
/remove-lifecycle rotten
cc @lavalamp for consideration in new apply design
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Is the plan to bake in an audit system that can be used for rollbacks?
Is the plan to bake in an audit system that can be used for rollbacks?
I don't recall anything like that.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Closed #40422.
/reopen
@nphmuller: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Reopened #40422.
Has there been any progress on this issue since 2019?
#102873: this is actually happening now.
+1
Is it possible to append this option in the deployment YAML itself? A lot of us need this feature for checking and recording history.
We need to provide an alternative to rollout history change-cause before deprecating --record.
As of 2022-01-20, when I use --record the command warns me: "Flag --record has been deprecated, --record will be removed in the future". Has it been discarded? Is there an alternative solution now? Thanks.
FYI (2022-02-14): v1.23 is the same as before.
[root@m-k8s 9.4]# k get node
NAME STATUS ROLES AGE VERSION
m-k8s Ready control-plane,master 6d21h v1.23.3
w1-k8s Ready <none> 6d21h v1.23.3
<snipped>
[root@m-k8s 9.4]# k set image deployment deploy-rollout nginx=nginx:1.21.0 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/deploy-rollout image updated
As of 2022-01-20, when I use --record the command warns me: "Flag --record has been deprecated, --record will be removed in the future". Has it been discarded? Is there an alternative solution now? Thanks.
@olwenya just tested like this and it worked:
~$ kubectl create deploy nginx --image=nginx --replicas=2
deployment.apps/nginx created
~$ kubectl set image deploy/nginx nginx=nginx:1.19
deployment.apps/nginx image updated
~$ kubectl annotate deploy/nginx kubernetes.io/change-cause='update image to 1.19'
deployment.apps/nginx annotated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 update image to 1.19
Not perfect, but it is an alternative.
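If doing those two steps by hand gets tedious, one option is a small shell wrapper around them; this is only a sketch, and the function name and arguments are invented for illustration:

# usage: set-image-with-cause <deployment> <container=image> <cause text>
set-image-with-cause() {
  kubectl set image "deploy/$1" "$2" \
    && kubectl annotate "deploy/$1" kubernetes.io/change-cause="$3" --overwrite
}

# e.g. set-image-with-cause nginx nginx=nginx:1.19 'update image to 1.19'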
@vjunior1981's suggestion is looking good, I think. So how about --annotation instead of --record? Like this, for instance:
~$ kubectl set image deploy/nginx nginx=nginx:1.19 --annotation='update image to 1.19'
deployment.apps/nginx image updated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 update image to 1.19
Wow, this looks like a good idea. Checking back on Deployment, they also removed "--record". I will take some time to test both of these.
Or even introduce --annotation and have a default if the user does not specify it, along the lines of:
kubectl set image deploy/nginx nginx=nginx:1.19
...defaults to "set image to nginx:1.19" (perhaps just the tag, if the image name is considered insecure).
kubectl rollout undo deploy/nginx --to-revision=3
...defaults to "rollback to revision 3"
Although the annotate option is there to set CHANGE-CAUSE, it would be better to have the --record option. It may happen that an incorrect message is provided via annotate; recording the actual command that was run while updating the deployment would be more useful.
This does not work for me when working with a DaemonSet in v1.23.1. I must do it the other way around: first annotate, then set image, like so:
kubectl annotate ds myds01 kubernetes.io/change-cause='downgrade to 1.16.1-alpine'
kubectl set image ds myds01 nginx=nginx:1.16.1-alpine
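For what it's worth, the recorded cause can then be checked the same way as for a Deployment; using the DaemonSet name from the example above, the CHANGE-CAUSE column should show the annotated message for the new revision:

kubectl rollout history ds myds01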
Here is my example:
$ k version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.0
$ k create deployment nginx-dep --image=nginx:1.22.0-alpine-perl --replicas 5
$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION CHANGE-CAUSE
1 <none>
$ k set image deployment nginx-dep nginx=nginx:1.23-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.22.0 to 1.23.0" --overwrite=true
$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION CHANGE-CAUSE
1 <none>
2 demo version changed from 1.22.0 to 1.23.0
$ k set image deployment nginx-dep nginx=nginx:1.23.1-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.23.0 to 1.23.1" --overwrite=true
$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION CHANGE-CAUSE
1 <none>
2 demo version changed from 1.22.0 to 1.23.0
3 demo version changed from 1.23.0 to 1.23.1
@soltysh What is the replacement for this feature? Having to manually annotate everything is not a workable solution. I see some mention of HTTP headers getting sent by kubectl, but it is very unclear what is expected to consume these headers, and how I am expected to see them from the various yaml specs. And the kubectl debug logs don't show any additional headers being sent, even when explicitly setting KUBECTL_COMMAND_HEADERS=1.
It works for me:
[root@(?.|default:default) ~]$ kc version --short
Client Version: v1.17.5
Server Version: v1.22.17+k3s1
[root@(?.|default:default) ~]$ kc annotate ds alpine-ds kubernetes.io/change-cause='set AA=123'
daemonset.apps/alpine-ds annotated
[root@(?.|default:default) ~]$ kc set env ds alpine-ds AA=123
daemonset.apps/alpine-ds env updated
[root@(?.|default:default) ~]$ kc rollout history ds alpine-ds
daemonset.apps/alpine-ds
REVISION CHANGE-CAUSE
1 <none>
2 update image to 1.19-03
3 update image to 1.19-03
4 AA to 114
5 update image to 1.19-04
6 set AA=123
Sorry, but isn't this just replacing one ugly solution with another? What was the problem with updating CHANGE-CAUSE automatically and letting all the annotation work happen in the background? For example:
kubectl set image ...
Depending on what type of change was made (image, env, selector, ...), it should automatically be recorded as the CHANGE-CAUSE (via the annotation), e.g. "image xyz:1.2.3 set to xyz:1.2.2", unless an individual message is provided. But making "" the standard message is absolutely not helpful.
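A rough sketch of what that automatic behavior could look like with a wrapper today, handling only the image case; the function name and arguments are invented for illustration:

# derive the change-cause from the arguments instead of asking for a message
# usage: set-image-recorded <deployment> <container> <new-image>
set-image-recorded() {
  kubectl set image "deploy/$1" "$2=$3" \
    && kubectl annotate "deploy/$1" kubernetes.io/change-cause="image $2 set to $3" --overwrite
}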