Re: [kubernetes/kubernetes] kubectl 1.10 prints header after a chunk, instead of only once (#65727)


Clayton Coleman

Jul 2, 2018, 6:32:47 PM

@juanvallejo



Clayton Coleman

Jul 2, 2018, 6:32:50 PM

In kubectl 1.10, when I fetch enough pods to trigger chunked paging (>500), the headers get printed twice:

$ kubectl get pods -o wide --all-namespaces
NAMESPACE                           NAME                                              READY     STATUS              RESTARTS   AGE       IP              NODE
acs-engine-build                    acs-engine-web-16-t6prn                           1/1       Running             0          2d        172.16.13.124   origin-ci-ig-n-hs69
...
ci                                  tot-0                                             1/1       Running             0          12h       172.16.13.175   origin-ci-ig-n-hs69
NAMESPACE                           NAME                                              READY     STATUS              RESTARTS   AGE       IP              NODE
ci                                  tracer-14-4jw7f                                   2/2       Running             0          3d        172.16.13.55    origin-ci-ig-n-hs69
...

The headers should only be printed once. This is a bug in chunk printing.

@kubernetes/sig-cli-bugs
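
For background: the chunking here is driven by the List API's limit and continue parameters, which kubectl uses with a default chunk size of 500. Below is a minimal client-go sketch of the same pagination loop, for illustration only; it is not kubectl's actual code path, and the kubeconfig loading and the modern List signature (with a context argument) are assumptions.

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the default kubeconfig (~/.kube/config), as kubectl would.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(config)

    // Ask for pods 500 at a time; each List call returns one chunk
    // plus a continue token for fetching the next one.
    opts := metav1.ListOptions{Limit: 500}
    for {
        pods, err := client.CoreV1().Pods("").List(context.TODO(), opts)
        if err != nil {
            panic(err)
        }
        fmt.Printf("received a chunk of %d pods\n", len(pods.Items))
        // kubectl 1.10 printed each chunk as it arrived, headers
        // included, which is the duplication reported above.
        if pods.Continue == "" {
            break // last chunk
        }
        opts.Continue = pods.Continue
    }
}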

Juan Vallejo

Jul 2, 2018, 6:59:12 PM

/assign juanvallejo

Jordan Liggitt

Jul 2, 2018, 7:49:55 PM

I'm pretty sure this is working as intended. The column spacing is computed from the specific data in each chunk. For a new chunk, the spacing can differ, and we want the headers aligned with the content.
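
For context, the alignment described here comes from Go's text/tabwriter, which kubectl's human-readable printer builds on: column widths are computed only from the rows written before a flush, so two chunks with different data produce different widths. A toy sketch (not kubectl's actual printer code):

package main

import (
    "fmt"
    "os"
    "text/tabwriter"
)

// printChunk writes a header plus rows through a fresh tabwriter.
// Column widths are derived only from what is written before Flush,
// so chunks with different data can align differently.
func printChunk(rows [][2]string) {
    w := tabwriter.NewWriter(os.Stdout, 6, 4, 3, ' ', 0)
    fmt.Fprintln(w, "NAME\tSTATUS")
    for _, r := range rows {
        fmt.Fprintln(w, r[0]+"\t"+r[1])
    }
    w.Flush()
}

func main() {
    // Chunk 1: a short name keeps the NAME column narrow.
    printChunk([][2]string{{"tot-0", "Running"}})
    // Chunk 2: a long name widens the NAME column; reusing the first
    // chunk's header here would leave STATUS visibly misaligned.
    printChunk([][2]string{{"acs-engine-web-16-t6prn", "Running"}})
}

Printing one header over all chunks would require buffering every chunk before sizing the columns, which defeats the point of chunking.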

Maciej Szulik

Jul 6, 2018, 10:11:26 AM

Hmmm... I'm guessing this might be related to the chunk size, which results in each data chunk containing its own headers, which then get printed. We could maybe omit the headers given that we're working with a single resource type here.

Jordan Liggitt

Jul 6, 2018, 10:20:11 AM

We could maybe omit the headers given that we're working with a single resource type here.

That will result in misaligned headers.
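
If the duplicated headers are a problem for a given workflow, one practical option (assuming this kubectl build exposes the --chunk-size flag on get) is to disable chunking, so the full list is fetched, sized, and printed once:

$ kubectl get pods -o wide --all-namespaces --chunk-size=0

--chunk-size defaults to 500; passing 0 returns the whole list in a single request, at the cost of a larger, slower response from the apiserver.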

fejta-bot

Oct 4, 2018, 10:23:42 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Nov 3, 2018, 11:08:31 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Kubernetes Prow Robot

Dec 3, 2018, 10:56:14 AM

Closed #65727.

fejta-bot

Dec 3, 2018, 10:56:15 AM

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Kubernetes Prow Robot

Dec 3, 2018, 10:56:22 AM

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
