@kubernetes/sig-cli-feature-requests @kubernetes/sig-apps-feature-requests
@kargakis Can't they just use the --revision flag to view the details of a particular revision? I'm not opposed to adding image, but where do we cut off the details? Some users might also want to see resource requests for instance.
I don't want to see the details of a particular revision. I want to figure out where in the history I am running a particular image. Resource requests depend on the image that is running, so I would argue that they are a lower-priority field to care about. Similarly, most configuration options revolve around the image that is running.
This is actually something I would like to see as well.
Currently rollout history does not provide adequate information on its own to give you a sense of what exactly you are rolling back to, and in my experience a common use case for rolling back is to revert to a previous image version.
Yes, --revision will give you that information, but if you don't know where in the revision history to look, you end up running rollout history --revision=x and iterating through each revision until you find what you need.
I would argue that showing the image versions would be more important than the history field, but that's up for debate :)
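To make the pain concrete, here's roughly what that workflow looks like today (deployment name and revision numbers are just placeholders):

```
# The default history output only shows REVISION and CHANGE-CAUSE,
# neither of which tells you which image a given revision was running.
kubectl rollout history deployment/my-app

# To find the revision that ran a particular image, you end up
# inspecting revisions one by one until you spot it.
kubectl rollout history deployment/my-app --revision=1
kubectl rollout history deployment/my-app --revision=2   # ...and so on

# Only then can you roll back to the one you actually wanted.
kubectl rollout undo deployment/my-app --to-revision=2
```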
@jekohk are you interested in working on this? Should be an easy fix in https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/history.go
@kargakis sure I will take a look.
What about Deployments that specify multiple containers/images?
IMO it would be useful to have a more general way of specifying some arbitrary metadata associated with a Deployment rollout to show up in the rollout history, as described in #25554. This could be an image tag, git commit, CI build identifier, change author, etc.
This already exists to some extent - we have currently been "hacking" the kubernetes.io/change-cause annotation as described here to display more useful information in rollout history. However, I don't think this is the intended use of the change-cause annotation and I'm worried that it won't work that way forever.
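For reference, the hack boils down to something like this (deployment name and message are placeholders); whatever value the annotation holds is what rollout history prints for that revision:

```
# Overwrite the change-cause annotation after updating the Deployment,
# so the latest revision's CHANGE-CAUSE shows our own message instead
# of a recorded kubectl command.
kubectl annotate deployment/my-app \
  kubernetes.io/change-cause="CI build 1234: bump api image to 2.3.1" \
  --overwrite
```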
@gregory-lyons thanks for the feedback. I had the same concern, as to what would happen when a Deployment has multiple containers, and I agree it could get messy printing that out. But printing the full change command is also messy imo.
I think there is a general aversion to the change-cause annotation & --record flag as they stand (see #40422), and there are a few options to deal with this:

1. Keep the --record flag of kubectl, but support safely overriding the change-cause annotation by letting a new value in the Deployment spec always take priority.
2. Deprecate the --record flag entirely, and leave it up to the user how change-cause gets populated (probably as part of their CI/CD pipeline).
3. Keep change-cause and the --record flag, but add a new optional annotation, i.e. change-desc, that is manually set by the user and also displayed by rollout history.
4. Keep --record and have change-cause be automatically populated by the server with more useful information (maybe instead of the entire apply/patch command, just short descriptions of what actually changed, like "Image: nginx:1.4 -> nginx:1.5").

Personally I am in favor of deprecating --record, as it has raised many issues in the past, and just recording the kubectl command does not provide much context.
I am in favor of option 2; however, I can see how option 1 would be safer while still allowing users to override the default behavior and associate with each revision whatever info is relevant to their setup (whether it is container images, a git commit, or a build number).
@kargakis thoughts?
/cc @janetkuo
I just want a customizable annotation that shows up in rollout history and will be stable/supported going forward (I don't think change-cause was designed to be overwritten, the current override behavior feels like a coincidence).
IMO option 1 perpetuates a weird relationship between --record, the annotation, and the template spec; it will just make more explicit what is currently logic buried somewhere. Documenting this behavior for users might be a challenge.
I think I prefer option 2 or 3, leaning slightly towards 2 because I think it makes sense to deprecate --record (don't feel strongly about it though, and I understand a desire to maintain compatibility).
I don't want to get too far off of the original topic, which is showing image versions in the history, but going through the features of kubectl get gave me some interesting ideas to improve rollout history in general.
I'm now leaning towards a slight variation of option 3, but instead of adding yet another annotation, just display the image(s) instead of change-cause in the history list by default.
kubectl get -o wide already shows an IMAGES column, so I'm not too concerned with multiple containers/images (and the commands recorded in change-cause can be very lengthy anyway).
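For comparison (deployment name is a placeholder), the wide output already copes with multiple containers by listing them in single comma-separated columns:

```
# -o wide adds CONTAINERS and IMAGES columns to the table output;
# deployments with several containers show comma-separated lists.
kubectl get deployment my-app -o wide
```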
> IMO it would be useful to have a more general way of specifying some arbitrary metadata associated with a Deployment rollout to show up in the rollout history
Going through history.go, the ViewHistory methods already access the full Template Spec of each RS, so why not let the user specify (via a flag or otherwise) an annotation they want displayed in the history list? This would be similar to the way kubectl get -L label behaves, only with annotations.
I'm not a fan of making users keep relevant revision info (whether it's a git commit or CI build) in a preset annotation, so I think leaving the annotation name(s) to be listed up to the user is the most flexible, although it will require a decent amount of work to implement.
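As a sketch of what that could feel like, compare today's label columns on kubectl get with a hypothetical annotation flag on rollout history (the --annotation flag below does not exist; it is only meant to illustrate the idea):

```
# Existing behavior: -L adds one column per listed label key.
kubectl get pods -L app,version

# Hypothetical analogue (NOT a real flag): one column per requested
# annotation key in the revision list.
kubectl rollout history deployment/my-app --annotation=ci.example.com/build-id
```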
Coming back to this - I think for now I will just replace "change-cause" in the output with a list of container images, in the same way kubectl get -o wide shows the images.
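A rough mock-up of what that output could look like (purely illustrative; image names and revisions are made up):

```
# Hypothetical future output if CHANGE-CAUSE were replaced by IMAGES:
#
#   REVISION  IMAGES
#   1         nginx:1.14, log-sidecar:0.9
#   2         nginx:1.15, log-sidecar:0.9
#   3         nginx:1.16, log-sidecar:1.0
kubectl rollout history deployment/my-app
```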
In the future I would like to propose adding more command line options to kubectl rollout history, but that's a bigger issue ;)
Thoughts? @kargakis @gregory-lyons
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
This seems related to kubernetes/kubectl#59. In general, what's the right way to wait for the specific result of an apply? Just getting the latest revision isn't really reliable. @jekohk's suggestion to pass something to change-cause would work, or having apply return some identifier (revision guid) to monitor would be fine too. Maybe there's some guidance I am missing.
https://groups.google.com/forum/#!topic/kubernetes-users/JUzLzyUvzps
@paulgmiller I'm not sure these are directly related - this issue is mostly concerned with rollback workflows, e.g. you want to roll back a deployment to an earlier revision running a particular image.
For your issue, why not run kubectl rollout status after apply? The command will wait until each child resource is updated and return with a 0 or 1 exit code depending on whether the rollout succeeded.
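For example (manifest path, deployment name, and timeout are placeholders):

```
# Apply the change, then block until the rollout finishes;
# rollout status exits non-zero on failure or timeout, so it can
# gate the next step of a pipeline.
kubectl apply -f deployment.yaml
kubectl rollout status deployment/my-app --timeout=5m || {
  echo "rollout failed or timed out" >&2
  exit 1
}
```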
Sorry for my slowness. We do run kubectl rollout status after the apply, but we don't know what --revision to give it (if we give it no revision, we might return success because a previous rollout that was not caused by our apply finished). So we try to use rollout history to figure out what the new revision is, but that is subject to racing with another rollout too.
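One way to narrow that window (just a sketch, not something settled in this thread) is to read the Deployment's deployment.kubernetes.io/revision annotation right after the apply and pin the status check to it; it only helps if nothing else updates the Deployment in between:

```
# Apply the change.
kubectl apply -f deployment.yaml

# Read the revision number the controller assigned to the latest rollout.
# (Dots inside the annotation key must be escaped in the jsonpath.)
rev=$(kubectl get deployment my-app \
  -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}')

# Pin the status check to that revision instead of "whatever is latest".
# Still racy if another rollout sneaks in before the annotation is read,
# but it narrows the window considerably.
kubectl rollout status deployment/my-app --revision="$rev"
```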
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #54932.
Is there any update on this? It would still be useful.