Re: [kubernetes/kubernetes] "kubectl rollout status" returns error message before rollout finish (#40224)


Michail Kargakis

Apr 19, 2017, 10:13:41 AM
to kubernetes/kubernetes, k8s-mirror-api-machinery-bugs, Team mention

@kubernetes/sig-cli-bugs @kubernetes/sig-api-machinery-bugs either fix watch.Until or remove



Sam Powers

Apr 25, 2017, 11:58:43 PM

I'm getting this periodically when running kubectl across a stateful firewall, which may reset the connection if it is idle for some period (I don't know how long). I'm getting ready to write a band-aid shell script around our usage of kubectl rollout status, just in case that motivates anyone to make watch.Until more robust.

I hope this was useful feedback. Maybe forcing a connection reset would help in reproducing the issue?
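A band-aid of the sort mentioned above could be a simple retry wrapper. This is a sketch, not a tested production script; the retry count and the kubectl invocation in the usage comment are illustrative placeholders:

```shell
#!/bin/sh
# Re-run a command until it succeeds, up to a fixed number of attempts.
retry() {
  tries=$1
  shift
  n=1
  while ! "$@"; do
    if [ "$n" -ge "$tries" ]; then
      echo "giving up after $tries attempts" >&2
      return 1
    fi
    n=$((n + 1))
    echo "retrying ($n/$tries)..." >&2
  done
  return 0
}

# Hypothetical usage:
# retry 5 kubectl rollout status deployment/my-app
```

Note that this masks real failures too: a rollout that legitimately cannot complete gets retried just the same, so it is only a workaround for the spurious watch disconnects discussed in this thread.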

Michail Kargakis

Apr 26, 2017, 3:56:32 AM

@sampowers would you be interested in working on fixing rollout status rather than writing that script? :) Most people (myself included) seem to be fairly busy, but we would love to spare review time.

Keyan Pishdadian

Jun 16, 2017, 3:38:59 PM

I was seeing the same error as OP when using kubectl v1.5.6 but upon upgrading to v1.7.0-beta.2 I'm seeing a different error:

Waiting for rollout to finish: 2 out of 4 new replicas have been updated...
error: watch closed before Until timeout

This is thrown by this line. I'm consistently seeing the error after kubectl has executed the status command for exactly 10 minutes.

The change I made in #47617 doesn't seem relevant anymore. Initially I thought I was somehow hitting the seemingly impossible select case from the after channel. But it is clear to me now that Until was failing at the same location in v1.5.6 except the error thrown is different.

The issue of timing out after 10 minutes remains the same.

Tomáš Nožička

Sep 13, 2017, 2:15:06 PM

/assign

Andrew Holway

Oct 20, 2017, 6:12:59 AM

+1

Skyler Layne

Nov 14, 2017, 11:05:34 AM

Christopher Hlubek

Dec 7, 2017, 6:52:47 AM

We are having the same issue when checking for deployments in our GitLab / K8s infrastructure:

Waiting for rollout to finish: 0 of 1 updated replicas are available...
E1207 11:03:06.765491     137 streamwatcher.go:109] Unable to decode an event from the watch stream: stream error: stream ID 5; INTERNAL_ERROR
error: watch closed before Until timeout

Bryan Lee

Jan 18, 2018, 6:49:44 AM

I am having the same issue:
bryanleekw@nb1:/proj/kubernetes$ kubectl rollout status deployment/monitor-scale


Waiting for rollout to finish: 0 of 1 updated replicas are available...
Waiting for rollout to finish: 0 of 1 updated replicas are available...

error: watch closed before Until timeout

my kubectl version is:
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}

Bryan Lee

Jan 18, 2018, 6:58:56 AM

May I know how I can check the rollout status from log files?

gavk34

Mar 7, 2018, 8:58:41 AM

Seeing this also

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T20:00:41Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.8-gke.0", GitCommit:"6e5b33a290a99c067003632e0fd6be0ead48b233", GitTreeState:"clean", BuildDate:"2018-02-16T18:26:58Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

Sample log output

07-Mar-2018 08:04:41	2018/03/07 08:04:41 Waiting for rollout to finish: 0 of 1 updated replicas are available...
07-Mar-2018 08:04:41	Waiting for rollout to finish: 0 of 1 updated replicas are available...
07-Mar-2018 08:04:41	error: watch closed before Until timeout
07-Mar-2018 08:04:41	2018/03/07 08:04:41 Error occurred when running 'apply' command: exit status 1

creinheimer

Mar 29, 2018, 2:10:09 PM

Same problem here.

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.10-rancher1", GitCommit:"66aaf7681d4a74778ffae722d1f0f0f42c80a984", GitTreeState:"clean", BuildDate:"2018-03-20T16:02:56Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Log output:

Waiting for rollout to finish: 1 old replicas are pending termination...
F0329 14:43:18.723237    9889 helpers.go:119] error: watch closed before Until timeout

Tomáš Nožička

Apr 5, 2018, 8:08:56 AM

Fix is here #50102

/sig apps

Jinming Yue

May 17, 2018, 2:59:36 AM

Hit the same problem:

# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T17:09:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.9", GitCommit:"19fe91923d584c30bd6db5c5a21e9f0d5f742de8", GitTreeState:"clean", BuildDate:"2017-10-19T16:55:06Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# kubectl rollout status deploy test
Waiting for rollout to finish: 0 out of 2 new replicas have been updated...
Waiting for rollout to finish: 0 out of 2 new replicas have been updated...
error: watch closed before Until timeout

root cause:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
  namespace: default
spec:
  paused: true    # Should be false.
  ...

Solution:

Set .spec.paused to false (or run kubectl rollout resume deployment/test).

.spec.paused is an optional boolean field for pausing and resuming a Deployment. It defaults to false (a Deployment is not paused).

xref: http://kubernetes.kansea.com/docs/user-guide/deployments/#paused

fejta-bot

Aug 15, 2018, 3:00:38 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Tomáš Nožička

Aug 15, 2018, 6:08:58 AM

/remove-lifecycle stale
/milestone v1.12

brandshaide

Aug 16, 2018, 6:58:12 AM

/remove-lifecycle-stale

guineveresaenger

Aug 27, 2018, 8:32:39 AM

@tnozicka can you update on whether any work is being done on this?
As Code Slush begins this Tuesday, please make sure this issue is labeled as approved for the milestone by then.
https://github.com/kubernetes/sig-release/tree/master/releases/release-1.12#code-slush

Tomáš Nožička

Aug 27, 2018, 8:50:08 AM

@guineveresaenger there is a PR, #67817, linked just above your comment, that fixes the issue; it is in review and still targeting v1.12.

guineveresaenger

Aug 27, 2018, 8:54:33 AM

@tnozicka awesome! can you get milestone approval from someone in sig/apps?

Tomáš Nožička

Aug 27, 2018, 9:06:21 AM

/priority critical-urgent

@guineveresaenger is this for status/approved-for-milestone? I thought that was effectively canceled on lazy consensus last week here https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/kubernetes-dev/MjyJzhBEgkM/64kNFcIACQAJ

guineveresaenger

Aug 27, 2018, 9:19:26 AM

@tnozicka gah, apologies for the noise. You are right. My bad.

Tomáš Nožička

Aug 27, 2018, 9:33:30 AM

Kubernetes Submit Queue

Sep 4, 2018, 7:21:52 AM

Closed #40224 via #67817.

ocofaigh

Dec 12, 2018, 9:52:59 AM

Any plans to backport this fix to v1.11?

Arockiasamy K

Mar 1, 2019, 7:57:36 AM

If you use an AWS load balancer, you may want to increase its idle timeout value.

Morinaga

Jun 6, 2019, 9:44:41 PM

You could add a timeout flag to the kubectl rollout status command.

kubectl rollout status deployment/app --namespace=app --timeout=60s

https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-status-em-

Arockiasamy K

Jun 7, 2019, 3:57:08 AM

@zyfu90 Even if you add this flag, the rollout status will still fail, because it is the load balancer timeout that kills the watch, not Kubernetes. The error message clearly says that the Kubernetes watch failed even before the kubectl timeout.

Jonathan McCall

Jul 30, 2019, 3:14:49 PM

@arocki7 You mean the ELB idle timeout? Which would imply that while running rollout status the connection could be idle for that time? Doesn't the rollout status command poll for the status of the deployment?

Arockiasamy K

Aug 1, 2019, 9:54:04 AM

@Jonnymcc Unfortunately, rollout status does not poll every few seconds; it holds a single long-lived watch, which is why it fails.
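Since the command watches rather than polls, one workaround on connections with idle timeouts is to do the polling yourself. This is a sketch; the jsonpath check, deployment name, and expected replica count in the usage comment are hypothetical placeholders:

```shell
#!/bin/sh
# Poll a check command at a fixed interval instead of holding one
# long-lived watch connection open.
poll_until_ready() {
  attempts=$1
  interval=$2
  shift 2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "timed out after $attempts attempts" >&2
  return 1
}

# Hypothetical usage: succeed once all 3 replicas report ready.
# poll_until_ready 60 10 sh -c \
#   '[ "$(kubectl get deploy/app -o jsonpath={.status.readyReplicas})" = "3" ]'
```

Each short-lived kubectl call opens a fresh connection, so an intermediary that kills idle streams never gets the chance to break the check.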

Hamza Y.

Feb 27, 2020, 11:31:03 AM

Scale up your deployment, apply your deployment and service files, then scale back to your desired state.

