[kubernetes/kubernetes] oc rollout history should show me the images that each revision runs (#54932)


Michalis Kargakis

unread,
Nov 1, 2017, 7:04:25 AM11/1/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@kubernetes/sig-cli-feature-requests @kubernetes/sig-apps-feature-requests



Kenneth Owens

unread,
Nov 1, 2017, 2:03:20 PM11/1/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@kargakis Can't they just use the --revision flag to view the details of a particular revision? I'm not opposed to adding the image, but where do we cut off the details? Some users might also want to see resource requests, for instance.
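
For context, the per-revision detail view being referred to looks roughly like this (a minimal sketch; the deployment name, labels, and image are made up):

  $ kubectl rollout history deployment/myapp --revision=2
  deployment.apps/myapp with revision #2
  Pod Template:
    Labels:       app=myapp
                  pod-template-hash=5c7588df
    Containers:
     myapp:
      Image:      nginx:1.16.1
      Port:       80/TCP
      Environment:  <none>
      Mounts:     <none>
    Volumes:      <none>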

Michalis Kargakis

unread,
Nov 2, 2017, 5:53:07 AM11/2/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I don't want to see the details of a particular revision. I want to figure out where in the history I am running a particular image. Resource requests depend on the image that is running, so I would argue they are a lower-priority field to care about. Similarly, most configuration options revolve around the image that is running.

John Kelly

unread,
Nov 7, 2017, 3:51:44 PM11/7/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

This is actually something I would like to see as well.
Currently rollout history does not provide adequate information on its own to give you a sense of what exactly you are rolling back to, and in my experience a common use case for rolling back is to revert to a previous image version.
Yes, --revision will give you that information, but if you don't know where in the revision history to look, you end up running rollout history --revision=x, iterating through each revision until you find what you need.
I would argue that showing the image versions would be more important than the history field, but that's up for debate :)
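
To illustrate the gap described here, the default rollout history output only shows the revision number and change-cause, with no image information (a sketch with made-up values):

  $ kubectl rollout history deployment/myapp
  deployment.apps/myapp
  REVISION  CHANGE-CAUSE
  1         <none>
  2         kubectl set image deployment/myapp myapp=nginx:1.16.1 --record=true
  3         <none>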

Michalis Kargakis

unread,
Nov 8, 2017, 3:34:47 AM11/8/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@jekohk are you interested in working on this? Should be an easy fix in https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/history.go

John Kelly

unread,
Nov 8, 2017, 1:03:38 PM11/8/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@kargakis sure I will take a look.

Gregory Lyons

unread,
Nov 9, 2017, 3:53:22 AM11/9/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

What about Deployments that specify multiple containers/images?

IMO it would be useful to have a more general way of specifying some arbitrary metadata associated with a Deployment rollout to show up in the rollout history, as described in #25554. This could be image tag, git commit, CI build identifier, change author, etc.

This already exists to some extent - we are currently "hacking" the kubernetes.io/change-cause annotation, as described here, to display more useful information in rollout history. However, I don't think this is the intended use of the change-cause annotation and I'm worried that it won't work that way forever.
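
The "hack" described here amounts to overwriting the change-cause annotation on the Deployment so the current revision shows a custom message in rollout history; a rough sketch, with made-up names and values:

  $ kubectl annotate deployment/myapp \
      kubernetes.io/change-cause="release v1.2.3 (commit abc123)" --overwrite
  $ kubectl rollout history deployment/myapp
  deployment.apps/myapp
  REVISION  CHANGE-CAUSE
  6         kubectl set image deployment/myapp myapp=nginx:1.15.0 --record=true
  7         release v1.2.3 (commit abc123)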

John Kelly

unread,
Nov 9, 2017, 11:59:49 AM11/9/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@gregory-lyons thanks for the feedback. I had the same concern about what would happen when a Deployment has multiple containers, and I agree it could get messy printing that out. But printing the full change command is also messy imo.

I think there is a general aversion to the change-cause annotation & --record flag as it is (see #40422), and there are a few options to deal with this:

  1. Keep the --record flag of kubectl, but support safely overriding the change-cause annotation by letting a new value in the Deployment spec always take priority
  2. Deprecate the --record flag entirely, and leave it up to the user how change-cause gets populated (probably as part of their CI/CD pipeline)
  3. Keep the current functionality of change-cause and --record flag, but add a new optional annotation, e.g. change-desc, that is manually set by the user and also displayed by rollout history
  4. Deprecate --record and have change-cause be automatically populated by the server with more useful information (maybe instead of the entire apply/patch command, just short descriptions of what actually changed like "Image: nginx:1.4 -> nginx:1.5")

Personally I am in favor of deprecating --record as it has raised many issues in the past, and just recording the kubectl command does not provide much context.

I am in favor of option 2; however, I can see how option 1 would be safer while still allowing users to override the default behavior and associate with each revision whatever info is relevant to their setup (whether it is container images, a git commit, or a build number).

@kargakis thoughts?

John Kelly

unread,
Nov 9, 2017, 12:03:26 PM11/9/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

/cc @janetkuo

Gregory Lyons

unread,
Nov 9, 2017, 1:31:40 PM11/9/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I just want a customizable annotation that shows up in rollout history and will be stable/supported going forward (I don't think change-cause was designed to be overwritten; the current override behavior feels like a coincidence).

IMO option 1 perpetuates a weird relationship between --record, the annotation, and the template spec; it just makes that relationship more explicit in logic buried somewhere. Documenting this behavior for users might be a challenge.

I think I prefer option 2 or 3, leaning slightly towards 2 because I think it makes sense to deprecate --record (don't feel strongly about it though, and I understand a desire to maintain compatibility).

John Kelly

unread,
Nov 10, 2017, 3:06:09 PM11/10/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

I don't want to get too far off of the original topic, which is showing image versions in the history, but going through the features of kubectl get gave me some interesting ideas to improve rollout history in general.

I'm now leaning towards a slight variation of option 3, but instead of adding yet another annotation, just display the image(s) instead of change-cause in the history list by default.
kubectl get -o wide already shows an IMAGES column, so I'm not too concerned with multiple containers/images (and the commands recorded in change-cause can be very lengthy anyway).

IMO it would be useful to have a more general way of specifying some arbitrary metadata associated with a Deployment rollout to show up in the rollout history

Going through history.go, the ViewHistory methods already access the full template spec of each RS, so why not let the user specify (via a flag or otherwise) an annotation they want displayed in the history list? This would be similar to the way kubectl get -L label behaves, only with annotations.

I'm not a fan of making users keep relevant revision info (whether it's a git commit or CI build) in a preset annotation, so I think leaving it up to the user which annotation name(s) get listed is the most flexible approach, although it will require a decent amount of work to implement.
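
For reference, the two kubectl get behaviors being drawn on here; a sketch with made-up resources, showing the IMAGES column from -o wide and the extra column that -L adds for a label:

  $ kubectl get deployments -o wide
  NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
  myapp   3/3     3            3           5d    myapp        nginx:1.16.1   app=myapp
  $ kubectl get pods -L app
  NAME                   READY   STATUS    RESTARTS   AGE   APP
  myapp-5c7588df-x2lkq   1/1     Running   0          5d    myapp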

John Kelly

unread,
Nov 28, 2017, 8:03:01 PM11/28/17
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Coming back to this: I think for now I will just replace "change-cause" in the output with a list of container images, in the same way kubectl get -o wide shows the images.

In the future I would like to propose adding more command line options to kubectl rollout history but that's a bigger issue ;)

thoughts? @kargakis @gregory-lyons
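
A purely hypothetical mock-up of the output proposed above (this is not current kubectl behavior; names and tags are made up):

  $ kubectl rollout history deployment/myapp
  deployment.apps/myapp
  REVISION  IMAGES
  1         nginx:1.14.2
  2         nginx:1.16.1
  3         nginx:1.16.1,redis:5.0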

fejta-bot

unread,
May 5, 2018, 10:19:51 PM5/5/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

unread,
Jun 4, 2018, 11:07:20 PM6/4/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale

Paul Miller

unread,
Jun 6, 2018, 6:42:59 PM6/6/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

This seems related to kubernetes/kubectl#59. In general, what's the right way to wait for the specific result of an apply? Just getting the latest revision isn't really reliable. @jekohk's suggestion to pass something to change-cause would work, or having apply return some identifier (a revision GUID) to monitor would be fine too. Maybe there's some guidance I am missing.

https://groups.google.com/forum/#!topic/kubernetes-users/JUzLzyUvzps

John Kelly

unread,
Jun 8, 2018, 3:50:24 PM6/8/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

@paulgmiller I'm not sure these are directly related; this issue is mostly concerned with rollback workflows, e.g. you want to roll back a deployment to an earlier revision running a particular image.

For your issue, why not run kubectl rollout status after apply? The command will wait until each child resource is updated and return with a 0 or 1 exit code depending on whether the rollout succeeded.
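
A minimal sketch of that workflow, assuming a made-up deployment name; rollout status blocks until the rollout completes (or the timeout expires) and its exit code reflects the result:

  $ kubectl apply -f deployment.yaml
  deployment.apps/myapp configured
  $ kubectl rollout status deployment/myapp --timeout=120s
  Waiting for deployment "myapp" rollout to finish: 1 out of 3 new replicas have been updated...
  deployment "myapp" successfully rolled out
  $ echo $?
  0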

Paul Miller

unread,
Jun 15, 2018, 6:28:56 PM6/15/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Sorry for my slowness. We do run kubectl rollout status after the apply, but we don't know what --revision to give it (if we give it no revision, we might return success because a previous rollout that was not caused by our apply finished). So we try to use rollout history to figure out what the new revision is, but that is subject to racing with another rollout too.
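
One possible workaround sketch (made-up names; this narrows, but does not eliminate, the race described above): read the deployment.kubernetes.io/revision annotation immediately after the apply and pin rollout status to that revision with --revision:

  $ kubectl apply -f deployment.yaml
  $ REV=$(kubectl get deployment myapp \
      -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}')
  $ kubectl rollout status deployment/myapp --revision="$REV"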

fejta-bot

unread,
Jul 15, 2018, 7:07:13 PM7/15/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.


Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot

unread,
Jul 15, 2018, 7:07:16 PM7/15/18
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Closed #54932.

Sam Koelle

unread,
Jul 12, 2023, 12:04:51 PM7/12/23
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

Is there any update on this? It would still be useful.


