In kubectl 1.10, when I fetch enough pods to trigger a paged (chunked) response (>500), the headers get printed twice:
$ kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
acs-engine-build acs-engine-web-16-t6prn 1/1 Running 0 2d 172.16.13.124 origin-ci-ig-n-hs69
...
ci tot-0 1/1 Running 0 12h 172.16.13.175 origin-ci-ig-n-hs69
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
ci tracer-14-4jw7f 2/2 Running 0 3d 172.16.13.55 origin-ci-ig-n-hs69
...
The headers should only be printed once. This is a bug in chunk printing.
/assign juanvallejo
I'm pretty sure this is working as intended. The column spacing is computed from a specific set of data; for a new chunk the column spacing can be different, and we want the headers aligned with the content.
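A minimal Go sketch (not the actual kubectl printer code) of why repeating the header per chunk keeps columns aligned: each chunk is written through its own `text/tabwriter`, so column widths are computed only from that chunk's rows. The data and widths below are made up for illustration.

```go
package main

import (
	"fmt"
	"os"
	"text/tabwriter"
)

// printChunk writes one chunk of rows through its own tabwriter.
// Column widths are fixed at Flush time from only these rows, so the
// header must be repeated for it to line up with this chunk's content.
func printChunk(rows [][]string, withHeader bool) {
	w := tabwriter.NewWriter(os.Stdout, 6, 4, 3, ' ', 0)
	if withHeader {
		fmt.Fprintln(w, "NAMESPACE\tNAME\tSTATUS")
	}
	for _, r := range rows {
		fmt.Fprintln(w, r[0]+"\t"+r[1]+"\t"+r[2])
	}
	w.Flush()
}

func main() {
	chunk1 := [][]string{{"ci", "tot-0", "Running"}}
	chunk2 := [][]string{{"acs-engine-build", "acs-engine-web-16-t6prn", "Running"}}

	// Header repeated per chunk: each chunk is internally aligned.
	printChunk(chunk1, true)
	printChunk(chunk2, true)

	fmt.Println()

	// Header printed only for the first chunk: the second chunk's wider
	// NAMESPACE column no longer lines up with the header above it.
	printChunk(chunk1, true)
	printChunk(chunk2, false)
}
```

Running this shows the trade-off being discussed: either the header repeats with each chunk, or a single header ends up misaligned with later, wider chunks.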
Hmmm... I'm guessing this might be related to the chunk size, which results in each data chunk containing its own headers, which then get printed. We could maybe omit the headers, given that we're working with a single resource type here.
That will result in misaligned headers.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Closed #65727.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.