Some configuration management systems distinguish between "no change", "changed", and "failed". It should be possible to use kubectl apply and know when any changes were applied.
Since "no-op" is also success, and we don't expect clients to parse our stdout/stderr, it seems reasonable that we should allow a kubectl apply caller to request that a no-op be given a special exit code that we ensure no other result can return. Since today we return 1 for almost all errors, we have the option to begin defining "special" errors.
Possible options:
kubectl apply ... --fail-when-unchanged=2
returns exit code 2 (allows the user to control the exit code)
kubectl apply ... --fail-when-unchanged
returns exit code 2 always (means we can document the exit code as per UNIX norms)
The latter is probably better. Naming of course is up in the air.
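For illustration, a script consuming the second form might look roughly like the sketch below. Note that the flag and the dedicated exit code 2 are only a proposal at this point, and manifest.yaml is a placeholder:
# Hypothetical usage of the proposed flag; --fail-when-unchanged does not exist in kubectl today.
kubectl apply -f manifest.yaml --fail-when-unchanged
case $? in
  0) echo "changes were applied" ;;
  2) echo "no-op: everything already up to date" ;;
  *) echo "apply failed" >&2; exit 1 ;;
esac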
@kubernetes/sig-cli-feature-requests
I rate this as high importance for integration with config management (like Ansible), which expects to be able to discern this.
Sounds interesting. @fabianofranz @pwittrock I would like to call dibs on it.
Hi @smarterclayton, I also need to confirm with you: this is only for apply, right? We do not currently want other declarative/imperative object configuration commands, or even every command, to have this flag, right?
Otherwise the title would be something like "implement a custom return error code mechanism for kubectl".
/assign
Sounds like a good idea. I remember sometime back there were issues where apply would detect that there were changes when there in fact were none. This may have since been resolved. I think it was due to an interaction with round tripping and defaulting, but I don't quite remember.
This would fit nicely with the other apply renovations we are doing to address long standing issues.
Re: prior art from other unix utils.
diff
exits 0 on no differences, 1 on differences found, and >1 on error
grep
exits 0 on lines found, 1 on no lines found, and >1 on error
If we had a green field, it might be worth trying to do something consistent - perhaps exit 1 if we make changes and 0 if we don't make any changes. That might lend itself to a retry loop, too: fetch recent, apply, retry on non-0 exit (expecting that the next apply will return 0 if no changes, and maybe doing exponential backoff for exit >1).
This of course may impact existing scripts, so doing as you suggested and making it opt-in is the better route; we can then add this to the list of things we would like to change when we do something that allows us to break backward compatibility (e.g. introducing a new "version" of the command or something).
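To make the green-field idea concrete, here is a minimal retry-loop sketch assuming that hypothetical convention (0 = no changes, 1 = changes applied, >1 = error). kubectl apply does not behave this way today, and manifest.yaml, the retry limit, and the backoff values are all illustrative:
# Sketch only: assumes the hypothetical convention 0 = no changes, 1 = changes applied, >1 = error.
failures=0
backoff=1
while true; do
  kubectl apply -f manifest.yaml            # a real loop might re-fetch the latest state first
  rc=$?
  if [ "$rc" -eq 0 ]; then
    break                                   # converged: nothing left to change
  elif [ "$rc" -eq 1 ]; then
    failures=0                              # changes applied; re-apply until it reports no changes
  else
    failures=$((failures + 1))
    [ "$failures" -ge 5 ] && exit "$rc"     # give up after repeated errors
    sleep "$backoff"
    backoff=$((backoff * 2))                # exponential backoff on real errors
  fi
done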
Re naming: maybe something like --exit-failure-unchanged?
This feature will be helpful. And it doesn't require a big change, since apply can already distinguish whether there is a change (but it only prints it out).
I agree with @pwittrock's opinion: make it opt-in for now and change the behavior in a future major version.
it doesn't require a big change, since apply can already distinguish whether there is a change (but it only prints it out).
@shiywang Sorry, I was wrong. It is actually kubectl edit that can distinguish whether there is a change and print "no changes made".
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #52577.
Reopened #52577.
/remove-lifecycle stale
/lifecycle frozen
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Is there any progress on this?
There's a server-side apply working group, which is working on moving the apply command to the server. It'd be good to sync with them for the update.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Closed #52577.
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle stale
/remove-lifecycle rotten
/reopen
@tpoindessous: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/remove-lifecycle stale
/remove-lifecycle rotten
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@soltysh Can we get this re-opened? It looks like it's still a valid concern.
As a workaround, for scripting, this could be used:
if LANG=C kubectl apply -f change.yaml | grep -v unchanged; then
  echo 'Wait for a complicated update, only if change was made'
fi
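One caveat with that pipeline: the if takes its status from grep, so an apply failure that prints nothing ends up in the "no change" path. A slightly more defensive variant of the same workaround (still just scripting, not a built-in kubectl feature) could capture the output first:
# Workaround sketch: exit on apply errors instead of treating them as "no change".
out=$(LANG=C kubectl apply -f change.yaml) || exit $?
if echo "$out" | grep -qv unchanged; then
  echo 'Wait for a complicated update, only if change was made'
fi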