/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/remove-lifecycle stale
This functionality sounds great, especially getting logs using only the pod prefix.
I see it's also using the deprecated team/ux label.
The way the docker CLI handles this for IDs is great: if more than one resource matches the prefix, the command is not executed.
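The docker-style behaviour described above can be sketched as a small resolver. This is an illustrative sketch of the idea, not kubectl or docker code; the function and pod names are hypothetical:

```python
def resolve_prefix(prefix, names):
    """Docker-style prefix resolution: return the single resource whose
    name starts with the prefix, or refuse when the match is ambiguous."""
    matches = [n for n in names if n.startswith(prefix)]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError(f"no resource matches prefix {prefix!r}")
    raise ValueError(f"ambiguous prefix {prefix!r}: matches {matches}")

pods = ["nginx-6799fc88d8-abcde", "nginx-6799fc88d8-fghij", "redis-master-0"]
print(resolve_prefix("redis", pods))  # → redis-master-0 (unique match)
# resolve_prefix("nginx", pods) would raise: two pods share that prefix,
# so the command would not be executed.
```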
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
This is still a very wanted feature that @shiywang seemed to have a promising grip on. Has it been abandoned?
Looks like it got abandoned. Really hope this feature gets added
Has it been abandoned?
I'm still going to mark it as non-stale given that the feature is wanted. If someone can take this up, they would be most welcome.
/remove-lifecycle stale
I'm going to continue my work here to see what we can do.
+1
As devil's advocate - kubectl logs already supports using label selectors, and exec / attach could be updated to support this as well. Are selectors insufficient, or just not widely publicized?
/assign shiywang
There are some shell completion plugins available to achieve similar functionality, such as the kubectl plugin for zsh provided by oh-my-zsh.
Can we improve the shell completion plugins instead to cover some of these use cases?
On Sun, Jan 6, 2019 at 11:23 PM Di Weng wrote: There are some shell completion plugins available to achieve similar functionality, such as the kubectl plugin for zsh provided by oh-my-zsh (https://github.com/robbyrussell/oh-my-zsh/blob/master/plugins/kubectl/kubectl.plugin.zsh).
Should these two be exclusive?
+1
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@pwittrock selectors work very well for logs, although they require a little more typing. The only real drawback of using selectors is the inability to also use -f or --follow in the same command.
BTW, for those waiting on this to be implemented, kubetail or stern might be worth looking at in the meantime.
@bprashanth @bgrant0607 @gmaghera @nikhita why wouldn't you just use kubectl autocompletion? Surely that should address 95% of the use cases by typing the name of the pod followed by Tab. If you use zsh you can navigate through all the string matches, so that should address the remaining 5% (i.e. https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion should close this issue).
@axsaucedo, because it's extremely slow on big clusters.
@KIVagant fair enough, that does make sense 👍
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Closed #17144.
/reopen
/remove-lifecycle rotten
@olenm: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
/remove-lifecycle rotten
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This doesn't solve the issue, but I created this abbreviation in fish to help with this.
abbr -a kbl "kubectl logs -f (kubectl get pods | tail -n +2 | sed \"s#^\(\S\+\)\s.*\\\$#\1#\" | fzf)"
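The sed pipeline in that abbreviation just drops the header row and keeps the first column. The same extraction can be sketched in Python; the sample output below is illustrative, not taken from a real cluster:

```python
def pod_names(kubectl_get_pods_output):
    """Extract pod names from `kubectl get pods` table output:
    skip the header line, take the first whitespace-separated column."""
    lines = kubectl_get_pods_output.strip().splitlines()
    return [line.split()[0] for line in lines[1:] if line.strip()]

sample = """NAME                     READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-abcde   1/1     Running   0          2d
redis-master-0           1/1     Running   1          7d"""
print(pod_names(sample))  # → ['nginx-6799fc88d8-abcde', 'redis-master-0']
```

In practice, `kubectl get pods -o name` or `kubectl get pods --no-headers` avoids hand-parsing the table at all.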
Does shell completion solve the issue of case 3 for the OP?
I'd like to see this issue re-opened. I do not entirely agree that it's the shell's job to process a list of pod names, parse it with grep/awk/sed, and then feed it to the user when an attempt like kubectl get pod nginx is made with the intent of listing all pods that start with that prefix.
I would like this issue reopened. It would make things so much easier for "simple" data-science users on our JupyterHub who need a minimum of monitoring. They can do kubectl logs and kubectl get pods. It would be so much easier for them if this feature were supported.
@MarcSkovMadsen, there are other tools like kubetail or stern that can help with logs. As for getting Pods, we should all just suffer.
As devil's advocate - kubectl logs already supports using label selectors, and exec / attach could be updated to support this as well. Are selectors insufficient, or just not widely publicized?
If only selecting by labels (-l) were available to kubectl edit.