[kubernetes/kubernetes] kubectl logs is kafkaesque (#52218)


Matt Liggett (Sep 8, 2017)

Seriously, I need it to draw some kind of truth table or something about what's allowed.

Why do we have so many restrictions?

% kc logs -f -nsonobuoy -lsonobuoy-plugin=e2e     
error: only one of follow (-f) or selector (-l) is allowed
See 'kubectl logs -h' for help and examples.

% kc logs -nsonobuoy -lsonobuoy-plugin=e2e    
Error from server (BadRequest): a container name must be specified for pod sonobuoy-e2e-job-533c18a33be34f79, choose one of: [e2e sonobuoy-worker]

% kc logs -nsonobuoy -lsonobuoy-plugin=e2e  -c e2e
error: a container cannot be specified when using a selector (-l)
See 'kubectl logs -h' for help and examples.
# OK, this one is fixed at head

And this works just fine, so it really strikes me as strange that kubectl won't do it for me.

% kc logs -nsonobuoy -f $(kc get pod -oname -nsonobuoy -lsonobuoy-plugin=e2e)  -c e2e
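
If the selector matches more than one pod, something like this works as a stopgap in the meantime (a rough sketch; one background follow per pod, with the streams interleaved):

for pod in $(kc get pod -oname -nsonobuoy -lsonobuoy-plugin=e2e); do
  kc logs -f -nsonobuoy -c e2e "$pod" &   # one follow per matching pod
done
wait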

/kind feature
@kubernetes/sig-cli-feature-requests



Andor Uhlár (Sep 11, 2017)

Presumably -l and -c don't work together because -l can match both pods that have the container specified by -c and pods that don't. What do you do in that case? I've been using https://github.com/wercker/stern to augment kubectl logs; it takes a more liberal approach to matching pods and containers (e.g. it shows all containers in a pod). kubectl logs could learn a thing or two from it.
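
For the example above, something like this does it (stern flags from memory, so treat this as a sketch and check stern --help):

# tail the e2e container in every pod matching the label
stern --namespace sonobuoy --selector sonobuoy-plugin=e2e --container e2e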

fejta-bot (Jan 4, 2018)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot (Feb 10, 2018)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Dan Mick (Mar 8, 2018)

/remove-lifecycle rotten

Dan Mick (Mar 8, 2018)

Me too. If the resulting containers can't be logged, then error at that point; but if all pods selected by the labels have the selected container name, I don't see why they can't be -f'ed. If that's a problem for some reason, it seems like we could at least allow the one-pod-match case.

Fred Cox (Apr 2, 2018)

It would be really great if I could use -f and -l together; this feels really limiting.

fejta-bot (Jul 1, 2018)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot (Jul 31, 2018)

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Dan Mick (Jul 31, 2018)

/remove_lifecycle rotten

Dan Mick (Jul 31, 2018)

/remove-lifecycle rotten

Mikalai Radchuk (Aug 12, 2018)

I'd like to work on this. I'll try to implement the combination of -l and -f, and will probably look at -c separately to keep the PRs simple. As a first step, we need to fix #67314 (I've already submitted a PR for that).

I'm new to the project and working on it in my spare time, so it might take a while for me to fix this. Please bear with me :)

Tobias N. Sasse (Oct 15, 2018)

+1 for combining -f and -l. For debugging we need to be able to look at logs holistically from all running instances of a pod.

gauravagrwl (Jan 3, 2019)

+1, it would be great if -f and -l could be combined.

Luca Santarella (Feb 20, 2019)

Me as well. This is definitely needed!

Guillaume Gelin (Feb 26, 2019)

I guess this issue can be closed since #67573 just got merged. 🎉

Dan Mick (Feb 26, 2019)

#67573 still doesn't allow -c; please don't close this one

Guillaume Gelin (Apr 17, 2019)

@dmick I'm not sure which PR allowed this, but I can definitely use -c with both -f and -l!

Mikalai Radchuk (Apr 17, 2019)

@ramnes @dmick it looks like I unintentionally allowed using -c together with -f and -l in #67573, but it only works if every pod matching the selector given in -l contains a container with the name given in -c. See the example below; it's basically the behaviour described in #52218 (comment) and #52218 (comment)

It's probably not a good idea to rely on this behaviour, because it's not documented and there are no tests covering it.

Do people think this behaviour is what we want? If so, I'll be happy to add some tests and update the help text for kubectl logs with an example.


Example of how kubectl currently (v1.14) works.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logging-test-be
  labels:
    app: logging_test
    kind: be
spec:
  replicas: 2
  selector:
    matchLabels:
      app: logging_test
      kind: be
  template:
    metadata:
      labels:
        app: logging_test
        kind: be
    spec:
      containers:
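      # two containers; with -c you must pick one of them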
      - name: logging-container-1
        image: logging_test:latest
      - name: logging-container-2
        image: logging_test:latest

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: logging-test-fe
  labels:
    app: logging_test
    kind: fe
spec:
  selector:
    matchLabels:
      app: logging_test
      kind: fe
  template:
    metadata:
      labels:
        app: logging_test
        kind: fe
    spec:
      containers:
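      # a single container; there is no logging-container-2 in these pods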
      - name: logging-container-1
        image: logging_test:latest

Assuming logging_test:latest runs something that produces logs, the following will work:

kubectl logs -lapp=logging_test -f -c logging-container-1

This:

kubectl logs -lapp=logging_test -f -c logging-container-2

will not work, because the pod logging-test-fe doesn't have a container named logging-container-2. It will produce an error similar to this:

error: container logging-container-2 is not valid for pod logging-test-fe-76c4dd7659-9b9ng
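
To check up front which pods would fail, you can list each pod's containers first (a jsonpath sketch, using the labels from the example above):

kubectl get pods -l app=logging_test \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'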

Fred Cox (Apr 18, 2019)

It's great behaviour. I think it would be slightly improved by not erroring when the selector finds pods without a matching container, and just ignoring those pods instead.

Matt Liggett (Apr 18, 2019)

@m1kola tests and doc updates are very much welcome, thanks! I think the tweak @mcfedr describes provides the most value to users.

Mikalai Radchuk (Apr 18, 2019)

@mml @mcfedr I'm not sure how I feel about ignoring pods that lack the container the user explicitly asked for. It sounds like something that would make the UX ambiguous.

I read kubectl logs -lapp=logging_test -f -c logging-container-2 as

I want to follow logs from logging-container-2 from all pods matching label app=logging_test

not as

I want to follow logs from logging-container-2 from all pods matching label app=logging_test, if logging-container-2 exists in these pods

Perhaps we should consider doing one of the following:

  • Keep the existing behaviour (error if one or more of the matching pods doesn't contain the requested container), but cover it with tests and add help text
  • The same as above, plus an extra flag to ignore pods without the container specified in -c (see the shell sketch after this list for an approximation). So we would run the command like:
     # will error if there is no logging-container-2 container in any of the pods
     kubectl logs -lapp=logging_test -f -c logging-container-2
    
     # will ignore pods without logging-container-2 container
     kubectl logs -lapp=logging_test -f -c logging-container-2 --ignore-missing-containers
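
For what it's worth, the opt-in variant can be approximated today with a shell filter (a sketch, assuming the example Deployments above):

# follow logging-container-2 only in pods that actually have that container
for pod in $(kubectl get pods -l app=logging_test -o name); do
  if kubectl get "$pod" -o jsonpath='{.spec.containers[*].name}' | grep -qw logging-container-2; then
    kubectl logs -f "$pod" -c logging-container-2 &
  fi
done
wait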
    

Matt Liggett (Apr 18, 2019)

That seems like a good backward-compatible approach, I agree.

Guillaume Gelin (Apr 18, 2019)

It seems more intuitive to me that the pods without the container simply get ignored, as @mcfedr proposed in the first place. After all, we don't print an error when we do kubectl get pods -l app=foobar and a pod doesn't match!

It seems to me that the real improvement would be the other way around, i.e. printing logs of all containers of the given pod by default, rather than forcing the user to specify one with -c. But that would belong to another issue, I guess.

Mikalai Radchuk (Apr 18, 2019)

-l and -c have established meanings:

  • -l filters pods
  • -c makes your choice more precise: if a pod has two or more containers, you specify which one you want to read. Alternatively, you can explicitly say that you want to read from all of them by using the --all-containers flag instead of -c

Also, I'm not sure that ignoring by default is more intuitive: for many people (including me), explicit is better than implicit. I would prefer to explicitly silence errors using an extra flag.

Take a look at my example. I'm concerned about the case where a user expects all pods matching -lapp=logging_test to have a container -c logging-container-2. If we implicitly ignore pods, kubectl logs -lapp=logging_test -f -c logging-container-2 will show some logs and keep waiting for more, but will leave you with no clue about the missing containers.

Dan Mick (Apr 18, 2019)

I think it's reasonable to error if the combination of -l and -c results in nonexistent containers. The command is literally saying "I can't fulfill your restrictions; make them less stringent."

Guillaume Gelin (Apr 19, 2019)

@m1kola I've read your comments and I understand your point completely. My point is just that:

-c filters containers

would be way simpler than:

-c makes your choice more precise: if a pod has two or more containers, you specify which one you want to read. Alternatively, you can explicitly say that you want to read from all of them by using the --all-containers flag instead of -c

Now it's just a matter of opinion, and I'm very probably missing some context here, so I'm not sure it's worth continuing this discussion; feel free to throw tomatoes at me!

Mikalai Radchuk (Apr 19, 2019)

My point is just that:

-c filters containers

would be way simpler than:

I would probably agree with this if we were talking about new functionality, not about changing the UX of existing commands/options.

I'm very interested to hear opinions from people in sig-cli. They have more experience with kubectl than I do and will probably see more implications... or just tell us that it's fine to make this tweak (ignore pods by default).

I've added this as a topic to the sig-cli meeting agenda doc. The next meeting is scheduled for April 24. If anyone is interested in joining, the Zoom link is in the agenda document.

Karoline Pauls (Jun 14, 2019)

kubectl logs is useful for local debugging. It obviously won't work at scale, where you need to search logs from thousands of pods.

The only thing I need is for kubectl logs to be able to work like tail -F: print logs as they appear for all containers that match the current query. This doesn't need to be enabled on production, but it is INCREDIBLY useful when working with local clusters.

Scenario (sample deployment yaml: https://gist.github.com/karolinepauls/65ef9fbd59e646b9eace4a1366216212):

  1. There's kubectl logs -l app=some-app --all-containers -F running (currently -F doesn't exist).
  2. I create a deployment YAML, potentially containing init containers.
  3. I apply it.
  4. I want to be able to see logs from it, as they appear, without issuing additional commands.

Right now I have to do:

$ kubectl logs some-deployment<TAB><TAB>
Error from server (BadRequest): container "some-app" in pod "some-deployment-68975fdcfb-tx8pq" is waiting to start: PodInitializing      
$ kubectl logs some-deployment-68975fdcfb-tx8pq setup-app  # name of the init container
<actual output from the init container>

make some changes...

$ kubectl apply -f depl.yaml
$ kubectl logs some-deployment<TAB><TAB>
some-deployment-57f48c4bd5-h7zrl  some-deployment-68975fdcfb-tx8pq

I shouldn't have to know which is which. Even if I first issue kubectl delete, the pods may still exist.

The command below doesn't work with -f, and doesn't work when one of the containers hasn't started (which is VERY common: it's enough for an init container to fail and --all-containers will never work). Moreover, a previously failed pod will make this command misbehave until it's deleted.

$ kubectl logs -l app=some-app --all-containers
Error from server (BadRequest): container "setup-app" in pod "some-deployment-78bdd9467f-qxqrv" is waiting to start: PodInitializing

How should kubectl logs -l app=some-app --all-containers -F behave?

Exactly like tail -F: re-evaluate the query at regular intervals (or better, listen to the right events if possible); whenever a matching container appears, report it and, if possible, start tailing it. Report changes to existing containers' states, and if one becomes tailable, start tailing it.
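
Until something like -F exists, the closest approximation I know is restarting the follow in a loop so that newly created pods are picked up on each pass (a crude sketch; --tail limits how much is replayed per restart, --ignore-errors keeps a still-waiting container from killing the whole stream):

while true; do
  kubectl logs -l app=some-app --all-containers -f --ignore-errors --tail=10
  sleep 2
done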

Guillaume Gelin (Jun 14, 2019)

@karolinepauls IIRC stern does this

fejta-bot (Sep 12, 2019)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Anton Bessonov (Sep 12, 2019)

/remove-lifecycle stale

Mikalai Radchuk (Oct 15, 2019)

#76471 was merged, so it's now possible to filter logs by container name on the client side using grep:

kubectl logs -l label=match-a-lot-of-things-pods --prefix --follow | grep <container-name>
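
One caveat: when following with a selector, kubectl caps the number of concurrent log streams (five by default, if I remember correctly); --max-log-requests raises the cap:

kubectl logs -l label=match-a-lot-of-things-pods --prefix --follow --max-log-requests 20 | grep <container-name>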



fejta-bot (Jan 13, 2020)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Anton Bessonov (Jan 13, 2020)

/remove-lifecycle stale

fejta-bot (Apr 12, 2020)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Anton Bessonov (Apr 12, 2020)

/remove-lifecycle stale

Lukasz Sanek (Jun 5, 2020)

@karolinepauls The -F suggestion is great. Any plans to include that at some point?

fejta-bot (Sep 3, 2020)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Anton Bessonov (Sep 3, 2020)

/remove-lifecycle stale

fejta-bot (Dec 2, 2020)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Anton Bessonov (Dec 2, 2020)

/remove-lifecycle stale

fejta-bot (Mar 2, 2021)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Anton Bessonov (Mar 2, 2021)

/remove-lifecycle stale

fejta-bot (May 31, 2021)

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

Anton Bessonov (Jun 1, 2021)

/remove-lifecycle stale

Anton Bessonov (Jun 1, 2021)

/lifecycle frozen
