Seriously, I need it to draw some kind of truth table or something about what's allowed.
Why do we have so many restrictions?
% kc logs -f -nsonobuoy -lsonobuoy-plugin=e2e
error: only one of follow (-f) or selector (-l) is allowed
See 'kubectl logs -h' for help and examples.
% kc logs -nsonobuoy -lsonobuoy-plugin=e2e
Error from server (BadRequest): a container name must be specified for pod sonobuoy-e2e-job-533c18a33be34f79, choose one of: [e2e sonobuoy-worker]
% kc logs -nsonobuoy -lsonobuoy-plugin=e2e -c e2e
error: a container cannot be specified when using a selector (-l)
See 'kubectl logs -h' for help and examples.
# OK, this one is fixed at head
And this works just fine, so it really strikes me as strange that kubectl won't do it for me.
% kc logs -nsonobuoy -f $(kc get pod -oname -nsonobuoy -lsonobuoy-plugin=e2e) -c e2e
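For following more than one matching pod at once, a small shell loop over the selector's output works as a stopgap (a sketch only, reusing the namespace, labels, and the kc alias from the commands above):

for pod in $(kc get pod -oname -nsonobuoy -lsonobuoy-plugin=e2e); do
  # Follow the e2e container of each matching pod in the background.
  kc logs -nsonobuoy -f "$pod" -c e2e &
done
wait  # keep streaming until all follows exit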
/kind feature
@kubernetes/sig-cli-feature-requests
Presumably -l and -c don't work together because -l can match both pods that have a container specified by -c and also ones that don't. What do you do in that case? I've been using https://github.com/wercker/stern to augment kubectl logs; it takes a more liberal approach to matching pods and containers, e.g. it shows all containers in a pod, etc. kubectl logs could learn a thing or two from there.
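For reference, a stern invocation covering the earlier sonobuoy example would look roughly like this (a sketch based on stern's documented pod-query argument and --namespace/--container flags; check stern --help for the exact options in your version):

stern --namespace sonobuoy sonobuoy-e2e-job --container e2e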
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
Me too. If the resulting containers can't be logged, then error at that point; but if all pods selected by labels have the selected container name, I don't see why they can't be -f'ed. If that's a problem for some reason, it seems like we could at least allow the case of one-pod-match.
Would be really great if I could use -f and -l; this feels really limiting.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
I would like to work on this. I'll try to implement the combination of -l and -f. I'll probably have a look at -c separately to keep PRs simple. As a first step, we will need to fix #67314 (I already submitted a PR for this).
I'm new to the project and I'm working on it in my spare time, so it might take a while for me to fix it. Please bear with me :)
+1 for combining -f and -l. For debugging we need to be able to look at logs holistically from all running instances of a pod.
+1 it would be great if -f and -l are combined together.
Me as well. This is definitely needed!
I guess this issue can be closed since #67573 just got merged in. 🎉
#67573 still doesn't allow -c; please don't close this one
@dmick I'm not sure which MR allowed this, but I can definitely use -c with both -f and -l!
@ramnes @dmick it looks like I unintentionally allowed using -c with -f and -l in #67573, but it will only work if all pods matching the selector provided in -l contain a container with the name specified in -c. See the example below; it's basically how it was described in #52218 (comment) and #52218 (comment).
It's probably not a good idea to rely on this behaviour because it's not documented and there are no tests to cover this.
Do people think that this behaviour is what we want? If so, I'll be happy to add some tests and update the help text for kubectl logs with an example.
Example of how kubectl currently (v1.14) works.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logging-test-be
  labels:
    app: logging_test
    kind: be
spec:
  replicas: 2
  selector:
    matchLabels:
      app: logging_test
      kind: be
  template:
    metadata:
      labels:
        app: logging_test
        kind: be
    spec:
      containers:
      - name: logging-container-1
        image: logging_test:latest
      - name: logging-container-2
        image: logging_test:latest
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logging-test-fe
  labels:
    app: logging_test
    kind: fe
spec:
  selector:
    matchLabels:
      app: logging_test
      kind: fe
  template:
    metadata:
      labels:
        app: logging_test
        kind: fe
    spec:
      containers:
      - name: logging-container-1
        image: logging_test:latest
Assuming that logging_test:latest runs something that produces logs, the following will work:
kubectl logs -lapp=logging_test -f -c logging-container-1
This:
kubectl logs -lapp=logging_test -f -c logging-container-2
will not work because the pod logging-test-fe doesn't have a container named logging-container-2. It will produce an error similar to this:
error: container logging-container-2 is not valid for pod logging-test-fe-76c4dd7659-9b9ng
It's great behaviour. I think it would be slightly improved by not erroring if the selector finds pods without a matching container, and just ignoring these pods instead.
@mml @mcfedr I'm not sure how I feel about ignoring pods without the container the user explicitly asked about. Sounds like something that will make the UX ambiguous.
I read kubectl logs -lapp=logging_test -f -c logging-container-2 as
"I want to follow logs from logging-container-2 from all pods matching label app=logging_test"
not as
"I want to follow logs from logging-container-2 from all pods matching label app=logging_test, if logging-container-2 exists in these pods"
Perhaps we should consider keeping the current behaviour (erroring by default) and adding an extra flag to ignore pods that don't have a container specified by -c. So we will run the command like:
# will error if there is no logging-container-2 container in any of the pods
kubectl logs -lapp=logging_test -f -c logging-container-2
# will ignore pods without logging-container-2 container
kubectl logs -lapp=logging_test -f -c logging-container-2 --ignore-missing-containers
That seems like a good backward-compatible approach, I agree.
It seems more intuitive to me that the pods without the container simply get ignored, as @mcfedr proposed in the first place. After all, we don't print an error when we do kubectl get pods -l app=foobar and a pod doesn't match!
It seems to me that the real improvement would be the other way around, i.e. printing logs of all containers of the given pod by default, rather than forcing the user to specify one with -c. But that would belong to another issue, I guess.
-l and -c have established meanings:
- -l filters pods
- -c makes your choice more precise. If a pod has two or more containers you specify which one you want to read. (Alternatively you can explicitly say that you want to read from all of them by using the --all-containers flag instead of -c; see the example below.)
Also I'm not sure that ignoring by default is more intuitive: for many people (including me) explicit is better than implicit. I would prefer to explicitly silence/turn off errors using an extra flag.
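For illustration, reading every container of every pod matched by the selector (instead of narrowing with -c) looks like this, assuming the logging_test deployments from the example above:

kubectl logs -lapp=logging_test --all-containers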
Take a look at my example. I'm concerned about the use case when a user expects all pods matching -lapp=logging_test to have a container -c logging-container-2. If we implicitly ignore pods, kubectl logs -lapp=logging_test -f -c logging-container-2 will show you some logs and will be waiting for more, but will leave you with no clue about the missing containers.
I think it's reasonable to error if the combination of -l and -c results in nonexistent containers. The command is literally saying "I can't fulfill your restrictions; make them less stringent."
@m1kola I've read your comments and I understand your point completely. My point is just that:
"-c filters containers"
would be way simpler than:
"-c makes your choice more precise. If a pod has two or more containers you specify which one you want to read. (Alternatively you can explicitly say that you want to read from all of them by using the --all-containers flag instead of -c.)"
Now it's just a matter of opinion, and I'm very probably missing some context here, so I'm not sure it's worth continuing this discussion; feel free to throw tomatoes at me!
"My point is just that: -c filters containers would be way simpler than:"
I would probably agree with this if we were talking about new functionality, not about changing the UX of existing commands/options.
I'm very interested to hear the opinions of people from sig-cli. They have more experience with kubectl than I do and will probably be able to see more implications... Or just tell us that it's fine to make this tweak (ignore pods by default).
I added this as a topic to the sig-cli meeting agenda doc. Next meeting is scheduled for 24th of April. If anyone is interested to join, Zoom link is in the agenda document.
Kubectl logs is useful for local debugging. It obviously won't work at scale, where you need to search logs from thousands of pods.
The only thing I need is for kubectl logs to be able to work like tail -F: print logs as they appear for all containers that match the current query. This doesn't need to be enabled in production but is INCREDIBLY useful when working with local clusters.
Scenario (sample deployment yaml: https://gist.github.com/karolinepauls/65ef9fbd59e646b9eace4a1366216212): I want to keep kubectl logs -l app=some-app --all-containers -F running (currently -F doesn't exist). Right now I have to do:
$ kubectl logs some-deployment<TAB><TAB>
Error from server (BadRequest): container "some-app" in pod "some-deployment-68975fdcfb-tx8pq" is waiting to start: PodInitializing
$ kubectl logs some-deployment-68975fdcfb-tx8pq setup-app # name of the init container
<actual output from the init container>
make some changes...
$ kubectl apply -f depl.yaml
$ kubectl logs some-deployment<TAB><TAB>
some-deployment-57f48c4bd5-h7zrl some-deployment-68975fdcfb-tx8pq
I shouldn't have to know which is which. Even if I first issue kubectl delete, the pods may still exist.
The below doesn't work with -f and doesn't work when one of the containers hasn't started (which is VERY common: it's enough for an init container to fail and --all-containers will never work). Moreover, a previously failed pod will make this command misbehave until it's deleted.
$ kubectl logs -l app=some-app --all-containers
Error from server (BadRequest): container "setup-app" in pod "some-deployment-78bdd9467f-qxqrv" is waiting to start: PodInitializing
How should kubectl logs -l app=some-app --all-containers -F behave? Exactly like tail -F: re-evaluate the query at regular intervals (or better, listen to the right events if possible), and whenever some container appears that matches, report it and, if possible, start tailing it. Report changes to existing containers' states and if one becomes tailable, start tailing it.
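Until something like -F exists, a crude approximation is to restart the follow in a loop. This is only a sketch, assuming a kubectl version where -l, -f and --all-containers can be combined, plus the --prefix flag mentioned further down in this thread:

# Re-evaluate the selector on every iteration by restarting the follow whenever
# it exits or errors. Unlike a real -F this polls instead of watching events,
# may re-print lines already seen, and still errors (then retries) while
# containers are initializing.
while true; do
  kubectl logs -l app=some-app --all-containers --prefix -f || true
  sleep 5
done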
@karolinepauls IIRC stern does this
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
#76471 was merged, so it's now possible to filter logs by container name on the client using grep:
kubectl logs -l label=match-a-lot-of-things-pods --prefix --follow | grep <container-name>
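With the deployments from earlier in the thread, that might look like the following (a sketch; --prefix prepends the pod and container name to each line, which is what makes the grep on a container name work):

kubectl logs -lapp=logging_test --all-containers --prefix --follow | grep logging-container-2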
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@karolinepauls the -F suggestion is great, any plans to include that at some point?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen