Re: [kubernetes/kubernetes] [Feature request] Get resource usage per namespace using kubectl (#55046)

k8s-ci-robot

Nov 3, 2017, 2:51:52 AM

@gheon: Reiterating the mentions to trigger a notification:
@kubernetes/sig-cli-feature-requests

In response to this:

@kubernetes/sig-cli-feature-requests

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.



Dominic Gunn

Feb 21, 2018, 5:47:59 AM

+1 on being able to see resource usage by namespace.

Matt Brown

Apr 5, 2018, 4:26:08 PM

Describing the ResourceQuota resource in the namespace gives you some of this:

$ kubectl describe resourcequota/compute-resources --namespace foobar
Name:            compute-resources
Namespace:       foobar
Resource         Used    Hard
--------         ----    ----
limits.cpu       13      96
limits.memory    25024M  360Gi
requests.cpu     6500m   48
requests.memory  12512M  180Gi

James Wen

Apr 5, 2018, 5:19:30 PM

Doesn't kubectl describe resourcequota for a namespace accomplish this?

$> kubectl --context <cluster_context> describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       my-namespace
Resource         Used    Hard
--------         ----    ----
limits.cpu       13      96
limits.memory    25024M  360Gi
requests.cpu     6500m   48
requests.memory  12512M  180Gi

Derrick Petzold

May 8, 2018, 1:47:39 AM

Hi. I authored https://github.com/dpetzold/kube-resource-explorer/ for this.

It displays historical resource usage from Stackdriver by evaluating the time-series data for each container over the requested duration, showing the latest value (the most recent data point), the minimum, the maximum, and either the average or the mode. The average is shown when CPU is requested; for memory the mode is shown (the mode being the most frequently occurring value in the set).

Below is some sample output:

$ ./resource-explorer -historical -duration 4h -mem -sort Mode -reverse -namespace kube-system                                                                      
Pod/Container                                                     Last    Min     Max     Avg/Mode                                                                  
-------------------------------------------------------------     ------  ------  ------  --------                                                                  
l7-default-backend-1044750973-kqh98/default-http-backend          2Mi     2Mi     2Mi     2Mi                                                                       
kube-dns-323615064-8nxfl/dnsmasq                                  6Mi     6Mi     6Mi     6Mi                                                                       
event-exporter-v0.1.7-5c4d9556cf-kf4tf/prometheus-to-sd-exporter  6Mi     6Mi     6Mi     6Mi                                                                       
heapster-v1.4.3-74b5bd94bb-fz8hd/prom-to-sd                       7Mi     7Mi     7Mi     7Mi                                                                       
fluentd-gcp-v2.0.9-4qkwk/prometheus-to-sd-exporter                8Mi     8Mi     8Mi     8Mi                                                                       
fluentd-gcp-v2.0.9-tw9vk/prometheus-to-sd-exporter                9Mi     9Mi     9Mi     9Mi                                                                       
fluentd-gcp-v2.0.9-jmtpw/prometheus-to-sd-exporter                9Mi     9Mi     9Mi     9Mi                                                                       
kube-dns-323615064-8nxfl/kubedns                                  10Mi    10Mi    10Mi    10Mi                                                                      
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster-nanny                   10Mi    10Mi    10Mi    10Mi                                                                      
kube-dns-autoscaler-244676396-xzgs4/autoscaler                    11Mi    11Mi    11Mi    11Mi                                                                      
kube-dns-323615064-8nxfl/sidecar                                  13Mi    12Mi    13Mi    13Mi                                                                      
kube-proxy-gke-project-default-pool-175a4a05-bv59/kube-proxy      15Mi    15Mi    15Mi    15Mi                                                                      
event-exporter-v0.1.7-5c4d9556cf-kf4tf/event-exporter             15Mi    15Mi    15Mi    15Mi                                                                      
kube-proxy-gke-project-default-pool-175a4a05-ntfw/kube-proxy      18Mi    18Mi    18Mi    18Mi                                                                      
kube-proxy-gke-project-default-pool-175a4a05-mshh/kube-proxy      18Mi    18Mi    19Mi    18Mi                                                                      
kubernetes-dashboard-768854d6dc-jh292/kubernetes-dashboard        31Mi    31Mi    31Mi    31Mi                                                                      
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster                         33Mi    32Mi    39Mi    34Mi                                                                      
fluentd-gcp-v2.0.9-jmtpw/fluentd-gcp                              138Mi   136Mi   139Mi   138Mi                                                                     
fluentd-gcp-v2.0.9-tw9vk/fluentd-gcp                              136Mi   130Mi   162Mi   162Mi                                                                     
fluentd-gcp-v2.0.9-4qkwk/fluentd-gcp                              144Mi   126Mi   181Mi   178Mi                                                                     
                                                                                                                                                                    
Results shown are for a period of 4h0m0s. 2,400 data points were evaluated.
$ ./resource-explorer -historical -duration 4h -cpu -sort Max -reverse -namespace kube-system                                                                       
Pod/Container                                                     Last    Min     Max     Avg/Mode                                                                  
-------------------------------------------------------------     ------  ------  ------  --------                                                                  
heapster-v1.4.3-74b5bd94bb-fz8hd/prom-to-sd                       0m      0m      0m      0m                                                                        
event-exporter-v0.1.7-5c4d9556cf-kf4tf/prometheus-to-sd-exporter  0m      0m      0m      0m                                                                        
fluentd-gcp-v2.0.9-jmtpw/prometheus-to-sd-exporter                0m      0m      0m      0m                                                                        
fluentd-gcp-v2.0.9-4qkwk/prometheus-to-sd-exporter                0m      0m      0m      0m                                                                        
kube-dns-323615064-8nxfl/kubedns                                  0m      0m      0m      0m                                                                        
kube-dns-323615064-8nxfl/dnsmasq                                  0m      0m      0m      0m                                                                        
kubernetes-dashboard-768854d6dc-jh292/kubernetes-dashboard        0m      0m      0m      0m                                                                        
kube-dns-autoscaler-244676396-xzgs4/autoscaler                    0m      0m      0m      0m                                                                        
l7-default-backend-1044750973-kqh98/default-http-backend          0m      0m      0m      0m                                                                        
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster-nanny                   0m      0m      0m      0m                                                                        
fluentd-gcp-v2.0.9-tw9vk/prometheus-to-sd-exporter                0m      0m      0m      0m                                                                        
event-exporter-v0.1.7-5c4d9556cf-kf4tf/event-exporter             0m      0m      0m      0m                                                                        
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster                         1m      1m      1m      1m                                                                        
kube-dns-323615064-8nxfl/sidecar                                  1m      0m      1m      0m                                                                        
kube-proxy-gke-project-default-pool-175a4a05-ntfw/kube-proxy      1m      1m      2m      1m                                                                        
kube-proxy-gke-project-default-pool-175a4a05-bv59/kube-proxy      1m      1m      2m      1m                                                                        
kube-proxy-gke-project-default-pool-175a4a05-mshh/kube-proxy      1m      1m      2m      1m                                                                        
fluentd-gcp-v2.0.9-tw9vk/fluentd-gcp                              6m      5m      7m      5m                                                                        
fluentd-gcp-v2.0.9-4qkwk/fluentd-gcp                              6m      5m      12m     6m                                                                        
fluentd-gcp-v2.0.9-jmtpw/fluentd-gcp                              28m     23m     32m     28m        

Results shown are for a period of 4h0m0s. 2,400 data points were evaluated.

fejta-bot

Aug 6, 2018, 2:29:37 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Nikhita Raghunath

Aug 10, 2018, 10:23:01 AM

/remove-lifecycle stale

Nikhil Sidhaye

Sep 7, 2018, 1:27:20 PM

kubectl describe resourcequota -n <namespace> works only if you have created a ResourceQuota in that namespace.

It would be nice to have a command that shows the total resource utilization for any namespace.

I know we can configure different tools to capture and show this utilization, but a native command-line option would be more useful and lightweight.
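
For anyone taking the ResourceQuota route anyway, the quota can be created imperatively and then described; a minimal sketch, assuming you may create quotas in the namespace (the quota name and limits below are only illustrative):

$ kubectl create quota compute-resources -n my-namespace \
    --hard=requests.cpu=4,requests.memory=8Gi,limits.cpu=8,limits.memory=16Gi
$ kubectl describe resourcequota compute-resources -n my-namespace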

fejta-bot

Dec 6, 2018, 1:06:42 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot

Jan 5, 2019, 1:52:46 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Xiaoyi

Jan 5, 2019, 9:56:07 PM

/remove-lifecycle rotten

David Oppenheimer

Jan 6, 2019, 12:01:13 AM

Is this different from "kubectl top" ?

Clayton Coleman

Jan 6, 2019, 7:51:58 PM
We are considering removing kubectl top completely in favor of just having
server side printing on the metrics API resources. I don't know what the
latest on that is.

On Sun, Jan 6, 2019 at 12:00 AM David Oppenheimer <notifi...@github.com>
wrote:


> Is this different from "kubectl top" ?
>

Geo

Mar 8, 2019, 10:49:04 AM

@davidopp also, kubectl top relies on Heapster... which is... not the future.

Mathieu Filotto

Mar 8, 2019, 2:38:55 PM

I'm pretty sure kubectl top can be used with the metrics server, so it no longer relies on Heapster:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#resource-metrics-pipeline

@smarterclayton please do not remove kubectl top; we use it on a daily basis to check the resource state of nodes and pods. It's so useful that many others probably use it too.
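
One way to check which backend kubectl top is using is to look at the registered APIService; a small sketch, assuming metrics-server (or another adapter for metrics.k8s.io) is installed:

$ kubectl get apiservice v1beta1.metrics.k8s.io   # Available should be True when metrics-server is serving
$ kubectl top pods -n <namespace>                 # served by the resource-metrics API, not Heapster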

Clayton Coleman

Mar 8, 2019, 4:52:37 PM
We are going to replace it with `kubectl get podmetrics`, which should be able to do everything kubectl top does.

On Fri, Mar 8, 2019 at 2:37 PM Mathieu Filotto <notifi...@github.com>
wrote:


> I'm pretty sure kubectl top can be used with the metrics server, so it no
> longer relies on Heapster:
>
> https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#resource-metrics-pipeline
>
> @smarterclayton <https://github.com/smarterclayton> please do not remove
> kubectl top; we use it on a daily basis to check the resource state of
> nodes and pods. It's so useful that many others probably use it too.

fejta-bot

Jun 6, 2019, 6:43:06 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Jul 6, 2019, 7:30:54 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Ted Timmons

Jul 6, 2019, 7:34:38 PM

/remove-lifecycle rotten

todd densmore

Jul 30, 2019, 1:57:47 PM

fejta-bot

Oct 28, 2019, 2:12:59 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale



onlinebizsoft

Nov 7, 2019, 12:12:37 AM

/remove-lifecycle rotten

fejta-bot

Dec 7, 2019, 12:33:41 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Samuel Bishop

Dec 8, 2019, 10:17:37 PM

/remove-lifecycle stale
Still an issue...

Samuel Bishop

Dec 8, 2019, 10:18:10 PM

/remove-lifecycle rotten
Still an issue...

fejta-bot

Mar 7, 2020, 11:28:39 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

Obeyda Djeffal

Apr 3, 2020, 11:05:03 AM

any update on this?

ddl-retornam

Apr 3, 2020, 11:19:52 AM

> any update on this?

A good way to stay updated is to subscribe to notifications on this issue.
Leaving comments like this spams everyone silently following the issue for updates (20 at last count, not including you) and doesn't help in getting the underlying issue resolved.

Hope this helps.

fejta-bot

May 3, 2020, 11:30:54 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

ddl-retornam

May 3, 2020, 11:59:22 AM

/remove-lifecycle rotten

Weldon Sams

Jun 17, 2020, 12:28:04 PM

I've been looking for a feature like this. In the meantime I hacked together this fish function to help display resource usage across all nodes for a given namespace. I haven't taken the time to actually sum the resources, but that shouldn't be hard to implement. This just shows a nicely formatted table of the Non-terminated Pods entries.

function kube-get-resource-usage
  if not count $argv > /dev/null
    echo "Usage: kube-get-resource-usage <namespace>"
    return
  end

  set c 0
  kubectl get nodes | sed '1d' | awk '{print $1}' | while read node
    if [ $c = 0 ]
      kubectl describe node $node | sed -n '/^Non-terminated Pods:/,/Allocated resources:/p' | sed '1d;$d' | sed '2d' | awk '$1 == "'$argv[1]'" || $1 == "Namespace" { print $0 }'
    else
      kubectl describe node $node | sed -n '/^Non-terminated Pods:/,/Allocated resources:/p' | sed '1,3d;$d' | awk '$1 == "'$argv[1]'" { print $0 }'
    end
    set c (math $c + 1)
  end | column -t
end
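
For the summing part, a rough sketch that totals the CPU requests of every container in one namespace using kubectl and jq ($NS is a placeholder; it assumes requests are written either as whole cores or as millicores like "250m"):

$ NS=my-namespace
$ kubectl get pods -n "$NS" -o json \
    | jq -r '[ .items[].spec.containers[].resources.requests.cpu // "0"
               | if endswith("m") then rtrimstr("m") | tonumber else tonumber * 1000 end ]
             | "total CPU requests: \(add)m"'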

fejta-bot

Sep 21, 2020, 6:24:33 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot

Oct 21, 2020, 7:07:16 AM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten

Obeyda Djeffal

Oct 21, 2020, 10:41:48 AM

/remove-lifecycle rotten

fejta-bot

Jan 19, 2021, 10:21:32 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

ddl-retornam

Jan 19, 2021, 12:39:44 PM

/remove-lifecycle rotten

fejta-bot

Feb 18, 2021, 1:08:50 PM

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

ddl-retornam

Feb 18, 2021, 4:30:02 PM

/remove-lifecycle rotten

fejta-bot

May 19, 2021, 6:24:35 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

Sarasa Kisaragi

May 24, 2021, 5:52:33 PM

/remove-lifecycle stale

Rich Adams

Jul 19, 2021, 1:28:11 PM

Ended up writing some jq to get a quick snapshot of namespace resource usage:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/$KUBE_NAMESPACE/pods | jq '[ .items[].containers[].usage 
      | { 
        cpu: .cpu | rtrimstr("n") | tonumber, 
        memory: .memory | rtrimstr("Ki") | tonumber
      }] 
    | reduce .[] as $item ({totalCpu: 0, totalMemory: 0}; .totalCpu += $item.cpu | .totalMemory += $item.memory)'
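
A variant of the same query that normalizes the totals to millicores and MiB, under the same assumption that the metrics API reports CPU in nanocores ("n") and memory in kibibytes ("Ki"):

$ kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/$KUBE_NAMESPACE/pods \
    | jq '[ .items[].containers[].usage ]
          | { totalCpuMilli: ([ .[].cpu | rtrimstr("n") | tonumber ] | add / 1000000 | floor),
              totalMemoryMi: ([ .[].memory | rtrimstr("Ki") | tonumber ] | add / 1024 | floor) }'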

Kubernetes Triage Robot

Oct 17, 2021, 2:25:59 PM

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale


Alexey Panchenko

Nov 15, 2021, 12:42:40 PM

/remove-lifecycle stale

Kubernetes Triage Robot

unread,
Feb 13, 2022, 1:24:36 PM2/13/22
to kubernetes/kubernetes, k8s-mirror-cli-feature-requests, Team mention

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale


Luca Soato

Feb 18, 2022, 5:51:18 AM

/remove-lifecycle stale


Kubernetes Triage Robot

May 19, 2022, 7:03:26 AM

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale


Kubernetes Triage Robot

Jun 18, 2022, 7:55:37 AM

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten


Kubernetes Triage Robot

Jul 18, 2022, 8:17:39 AM

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close


Kubernetes Prow Robot

Jul 18, 2022, 8:17:55 AM

Closed #55046 as completed.


Kubernetes Prow Robot

Jul 18, 2022, 8:17:55 AM

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


Vusal Alishov

Jan 16, 2024, 6:16:26 AM

kubectl top pods -n <namespace> | awk 'BEGIN {mem=0; cpu=0} {mem += int($3); cpu += int($2);} END {print "Memory: " mem "Mi" " " "Cpu: " cpu "m"}'
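
A hedged variation on the same awk idea that skips the header row and prints one total per namespace, which is roughly what this issue asks for:

$ kubectl top pods --all-namespaces --no-headers \
    | awk '{cpu[$1] += int($3); mem[$1] += int($4)} END {for (ns in cpu) printf "%-30s %6dm %8dMi\n", ns, cpu[ns], mem[ns]}'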


desiredState

May 22, 2024, 11:19:16 AM

Remember, it's important to distinguish between requests and actual usage. kubectl top showing tonnes of available resources doesn't necessarily mean they're schedulable.
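
A quick way to see that difference for a node, sketched with standard commands (<node-name> is a placeholder; the second command assumes metrics-server is installed):

$ kubectl describe node <node-name> | sed -n '/Allocated resources:/,/Events:/p'   # requests/limits committed to the scheduler
$ kubectl top node <node-name>                                                     # live usage from metrics-server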


Marcel Zapf

Aug 13, 2024, 2:06:49 PM

kubectl with a thousand awks and args, nice that this is so easy.
Is there really no effort to implement this in kubectl?

