Re: [kubernetes/kubernetes] Deprecate and remove --record flag from kubectl (#40422)


Michail Kargakis

Jan 27, 2017, 3:19:45 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I haven't thought about this issue for a while, but I'm pretty sure we can solve our use case, recording which git commit or Jenkins run resulted in an apply, with our own custom annotation, and not need a kube-standard one.

This is not ideal, because we already do this sort of thing in kubectl, albeit storing less valuable info, i.e. the kubectl command that was invoked. @kubernetes/sig-cli-feature-requests let's add a new flag in kubectl, similar to --record, that instead of storing the invoked command stores a string passed by the user.
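A minimal sketch of that custom-annotation approach (the annotation key, resource name, and CI step are made up for illustration):

# In CI, after applying, stamp the commit that produced this rollout
kubectl apply -f deployment.yaml
kubectl annotate deployment my-app example.com/applied-commit="$(git rev-parse HEAD)" --overwrite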



Maciej Szulik

Jan 27, 2017, 4:03:56 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention
I don't want a new flag, but rather a more systematic approach to the problem. The current flag only partially solves this, and only when certain kubectl commands are called. I agree with @kargakis that this should be solved first, before deprecating the flag.


Michail Kargakis

Jan 27, 2017, 4:25:42 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

What alternative do you suggest? Repurposing the current flag? Something else? We need users and automated processes to be able to specify a reason when images (or, less frequently, other parts of the pod spec) are updated, so commands like kubectl set image or kubectl apply need to pass that info down from the Deployment to the ReplicaSet.
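For context, the mechanism being discussed is the kubernetes.io/change-cause annotation, which kubectl rollout history prints per revision. A minimal sketch, assuming a Deployment named my-app (names and message are illustrative):

kubectl annotate deployment my-app kubernetes.io/change-cause='deploy Jenkins build 42'
kubectl set image deployment my-app app=example.com/app:42
kubectl rollout history deployment my-app
# REVISION  CHANGE-CAUSE
# 2         deploy Jenkins build 42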

Maciej Szulik

Jan 30, 2017, 6:26:03 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I'm leaning towards an automated process. I don't have any details figured out yet; I'll keep you posted.

TonyAdo

Feb 3, 2017, 12:51:10 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/cc @AdoHe

zhengjiajin

Sep 25, 2017, 8:30:10 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/cc

fejta-bot

Jan 6, 2018, 7:34:19 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot

Feb 9, 2018, 7:27:18 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot

Mar 11, 2018, 9:13:19 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot

Mar 11, 2018, 9:13:34 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Closed #40422.

Jordan Liggitt

Mar 11, 2018, 9:17:20 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/reopen

k8s-ci-robot

Mar 11, 2018, 9:17:26 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@liggitt: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Jordan Liggitt

Mar 11, 2018, 9:17:36 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Reopened #40422.

Jordan Liggitt

Mar 11, 2018, 9:18:15 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/remove-lifecycle rotten
cc @lavalamp for consideration in the new apply design

fejta-bot

Jun 9, 2018, 10:03:40 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Jul 9, 2018, 10:49:17 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale

Janet Kuo

Jul 10, 2018, 8:39:20 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/remove-lifecycle rotten

Wes McNamee

Jul 25, 2018, 11:44:37 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Is the plan to bake in an audit system that can be used for rollbacks?

Maciej Szulik

Aug 24, 2018, 10:36:59 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

> Is the plan to bake in an audit system that can be used for rollbacks?

I don't recall anything like that.

fejta-bot

Nov 22, 2018, 10:36:25 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Dec 22, 2018, 11:19:18 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Jan 21, 2019, 12:05:55 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Guillaume Gelin

Feb 19, 2019, 11:11:03 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/remove-lifecycle rotten

fejta-bot

May 20, 2019, 12:59:16 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Issues go stale after 90d of inactivity.

Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Jun 19, 2019, 1:46:31 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

fejta-bot

Jul 19, 2019, 2:32:53 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.

Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Jul 19, 2019, 2:33:00 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

Jul 19, 2019, 2:33:28 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Closed #40422.

Nick

Jul 19, 2019, 2:37:13 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/reopen

Kubernetes Prow Robot

Jul 19, 2019, 2:37:30 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@nphmuller: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Benjamin Elder

Aug 17, 2019, 1:26:23 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@soltysh @liggitt should this be reopened?

Jordan Liggitt

Aug 17, 2019, 10:50:38 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Reopened #40422.

Joel Hoisko

Feb 18, 2021, 3:58:59 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Has there been any progress on this issue since 2019?



Benjamin Elder

Jun 23, 2021, 5:36:20 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

#102873: this is actually happening now.

crokobit

Jul 9, 2021, 9:54:34 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

+1

Praparn Lungpoonlap

Sep 11, 2021, 7:27:29 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Is it possible to set this option in the deployment YAML itself? A lot of people need this feature for checking and recording history.



Vova

Dec 15, 2021, 9:02:30 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

We need to provide an alternative to rollout history change-cause before deprecating --record.
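For readers landing here later: what --record stored was the exact kubectl invocation, written into the kubernetes.io/change-cause annotation that rollout history prints. Approximately:

kubectl set image deployment nginx nginx=nginx:1.21.0 --record
kubectl rollout history deployment nginx
# REVISION  CHANGE-CAUSE
# 1         <none>
# 2         kubectl set image deployment nginx nginx=nginx:1.21.0 --record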

cuianbing

Jan 20, 2022, 6:23:19 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

2022-01-20: when I use the --record flag now, it prompts "Flag --record has been deprecated, --record will be removed in the future". Has it been removed? Is there an alternative solution now? Thanks.



Hoon Jo

Feb 13, 2022, 11:59:17 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

FYI (2022-02-14): v1.23 behaves the same as before.

[root@m-k8s 9.4]# k get node 
NAME     STATUS   ROLES                  AGE     VERSION
m-k8s    Ready    control-plane,master   6d21h   v1.23.3
w1-k8s   Ready    <none>                 6d21h   v1.23.3
<snipped>
[root@m-k8s 9.4]# k set image deployment deploy-rollout nginx=nginx:1.21.0 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/deploy-rollout image updated



tanvp112

Mar 6, 2022, 12:25:41 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

#102873



Vitor Jr.

May 4, 2022, 8:13:55 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

> 2022-01-20: when I use the --record flag now, it prompts "Flag --record has been deprecated, --record will be removed in the future". Has it been removed? Is there an alternative solution now?

@olwenya I just tested it like this and it worked:

~$ kubectl create deploy nginx --image=nginx --replicas=2
deployment.apps/nginx created
~$ kubectl set image deploy/nginx nginx=nginx:1.19
deployment.apps/nginx image updated
~$ kubectl annotate deploy/nginx kubernetes.io/change-cause='update image to 1.19'
deployment.apps/nginx annotated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         <none>
2         update image to 1.19

Not perfect, but it's an alternative.



Hoon Jo

May 4, 2022, 8:22:57 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@vjunior1981's suggestion looks good, I think. So how about an --annotation flag instead of --record? Something like this (a hypothetical example):

~$ kubectl set image deploy/nginx nginx=nginx:1.19 --annotation='update image to 1.19'
deployment.apps/nginx image updated
~$ kubectl rollout history deploy/nginx
deployment.apps/nginx
REVISION  CHANGE-CAUSE
1         update image to 1.19



Praparn Lungpoonlap

May 4, 2022, 11:36:15 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Wow, this looks like a good idea. Checking back on Deployment, they also removed --record there. It will take some time to test both of these.



Alistair Mackay

May 5, 2022, 12:43:40 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Or even introduce --annotation with a default when the user does not specify one, along the lines of:

kubectl set image deploy/nginx nginx=nginx:1.19

...defaulting to "set image to nginx:1.19" (perhaps just the tag, if the image name is considered sensitive).

kubectl rollout undo deploy/nginx --to-revision=3

...defaulting to "rollback to revision 3".



mohini4prac

Jun 11, 2022, 5:12:10 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Although the annotate option is there to set CHANGE-CAUSE, it would be better to keep the --record option. With annotate, an incorrect message may be provided; recording the actual command that was run to update the deployment is more useful.
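One possible stopgap in that spirit (a shell sketch, not an official replacement: it runs a kubectl command and then records the literal invocation as the change-cause):

krecord() {
  # usage: krecord deployment/nginx set image deployment/nginx nginx=nginx:1.19
  local target="$1"; shift
  kubectl "$@" && kubectl annotate "$target" \
    kubernetes.io/change-cause="kubectl $*" --overwrite
}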



Pontus Fagerström

Jul 7, 2022, 3:48:58 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

This does not work for me with a DaemonSet in v1.23.1. I have to do it the other way around: first annotate, then set image, like so:

kubectl annotate ds myds01 kubernetes.io/change-cause='downgrade to 1.16.1-alpine'
kubectl set image ds myds01 nginx=nginx:1.16.1-alpine
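A plausible explanation, offered as an assumption rather than a confirmed root cause: DaemonSet (and StatefulSet) rollout history is read from ControllerRevisions, which snapshot the object's annotations when the update is created, so the change-cause must be in place before the image change; Deployment history lives on ReplicaSets that the controller keeps re-syncing, which is why annotate-after still works there. One way to inspect the snapshots:

kubectl get controllerrevisions
kubectl get controllerrevisions -o yaml | grep change-cause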



Riccardo

Aug 2, 2022, 4:02:38 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Here is my example:

$ k version --short
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.24.0

$ k create deployment nginx-dep --image=nginx:1.22.0-alpine-perl --replicas 5

$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>

$ k set image deployment nginx-dep nginx=nginx:1.23-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.22.0 to 1.23.0" --overwrite=true
$ k rollout history deployment nginx-dep
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>
2         demo version changed from 1.22.0 to 1.23.0

$ k set image deployment nginx-dep nginx=nginx:1.23.1-alpine-perl
$ k annotate deployment nginx-dep kubernetes.io/change-cause="demo version changed from 1.23.0 to 1.23.1" --overwrite=true
$ k rollout history deployment nginx-dep                                                                             
deployment.apps/nginx-dep
REVISION  CHANGE-CAUSE
1         <none>
2         demo version changed from 1.22.0 to 1.23.0
3         demo version changed from 1.23.0 to 1.23.1



rittneje

Sep 28, 2022, 9:33:00 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@soltysh What is the replacement for this feature? Having to manually annotate everything is not a workable solution. I see some mention of HTTP headers getting sent by kubectl, but it is very unclear what is expected to consume these headers, and how I am expected to see them from the various yaml specs. And the kubectl debug logs don't show any additional headers being sent, even when explicitly setting KUBECTL_COMMAND_HEADERS=1.
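For what it's worth, and worth verifying against KEP 859 (kubectl command headers): my understanding is that those headers (e.g. Kubectl-Command and Kubectl-Session) ride along on kubectl's API requests and are meant to be consumed server-side, for example in API server logging, rather than being stored on objects, which would explain why they never appear in the YAML specs.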



sam

Feb 11, 2023, 10:58:14 PM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

It works for me:

[root@(?.|default:default) ~]$ kc version --short 
Client Version: v1.17.5
Server Version: v1.22.17+k3s1

[root@(?.|default:default) ~]$ kc annotate ds alpine-ds kubernetes.io/change-cause='set AA=123'
daemonset.apps/alpine-ds annotated
[root@(?.|default:default) ~]$ kc set env ds alpine-ds AA=123
daemonset.apps/alpine-ds env updated

[root@(?.|default:default) ~]$ kc rollout history ds alpine-ds
daemonset.apps/alpine-ds 
REVISION  CHANGE-CAUSE
1         <none>
2         update image to 1.19-03
3         update image to 1.19-03
4         AA to 114
5         update image to 1.19-04
6         set AA=123



mhash17

Oct 31, 2023, 10:54:08 AM
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Sorry, but isn't this just swapping one ugly solution for another? What was the problem with updating CHANGE-CAUSE automatically and doing all the annotation work in the background? For example:

kubectl set image ...

Depending on what type of change was made (image, env, selector, ...), it should automatically be recorded in CHANGE-CAUSE (via the annotation), e.g. "image xyz:1.2.3 set to xyz:1.2.2", unless an individual message is provided. But making "" the standard message is absolutely unhelpful.


