Re: [kubernetes/kubernetes] Need simple kubectl command to see cluster resource usage (#17512)


Michail Kargakis

Jun 10, 2017, 12:10:47 PM
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@kubernetes/sig-cli-misc



Alok Kumar Singh

Jul 5, 2017, 6:02:18 AM

You can use the commands below to find the percentage CPU and memory utilisation of your nodes.

Note: 4000m is the total CPU (in millicores) of one node
alias cpualloc="util | grep % | awk '{print \$1}' | awk '{ sum += \$1 } END { if (NR > 0) printf \"%s%%\n\", (sum*100)/(NR*4000) }'"
$ cpualloc
3.89358%

Note: 1600MB is the total memory of one node
alias memalloc='util | grep % | awk '\''{print $3}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) printf "%s%%\n", (sum*100)/(NR*1600) }'\'''
$ memalloc
3.89358%

Tom Fotherby

Jul 21, 2017, 8:17:13 AM

@alok87 - thanks for your alias. I would like to use it, but what is util?

Alok Kumar Singh

Jul 21, 2017, 2:28:11 PM

@tomfotherby alias util='kubectl get nodes | grep node | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''

Tom Fotherby

Jul 25, 2017, 10:35:20 AM

@alok87 - Thanks for your aliases. This is what worked for me, given that we use bash and m3.large instance types (2 vCPUs, 7.5 GB memory).

alias util='kubectl get nodes --no-headers | awk '\''{print $1}'\'' | xargs -I {} sh -c '\''echo {} ; kubectl describe node {} | grep Allocated -A 5 | grep -ve Event -ve Allocated -ve percent -ve -- ; echo '\'''

# Get CPU request total (we divide by NR*20 because each m3.large has 2 vCPUs = 2000m)
alias cpualloc='util | grep % | awk '\''{print $1}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*20), "%\n" } }'\'''

# Get mem request total (we divide by NR*75 because each m3.large has 7.5G RAM)
alias memalloc='util | grep % | awk '\''{print $5}'\'' | awk '\''{ sum += $1 } END { if (NR > 0) { print sum/(NR*75), "%\n" } }'\'''
$ util
ip-10-56-0-178.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  960m (48%)	2700m (135%)	630Mi (8%)	2034Mi (27%)

ip-10-56-0-22.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  920m (46%)	1400m (70%)	560Mi (7%)	550Mi (7%)

ip-10-56-0-56.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  1160m (57%)	2800m (140%)	972Mi (13%)	3976Mi (53%)

ip-10-56-0-99.ec2.internal
  CPU Requests	CPU Limits	Memory Requests	Memory Limits
  804m (40%)	794m (39%)	824Mi (11%)	1300Mi (17%)

$ cpualloc
48.05 %

$ memalloc 
9.95333 %
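
The alias arithmetic can be sanity-checked offline. Here is a minimal Python sketch using the four nodes from the sample util output above (same m3.large assumptions; like the aliases, it treats Mi and M as interchangeable):

```python
# Reproduce the cpualloc/memalloc math from the sample `util` output above.
# Assumes m3.large nodes: 2000m CPU and 7.5G (~7500M) memory per node.
rows = [  # (cpu_request_millicores, mem_request_Mi) per node
    (960, 630),
    (920, 560),
    (1160, 972),
    (804, 824),
]

n = len(rows)
cpu_pct = sum(cpu for cpu, _ in rows) / (n * 20)  # sum(m) * 100 / (n * 2000m)
mem_pct = sum(mem for _, mem in rows) / (n * 75)  # sum(Mi) * 100 / (n * 7500M)

print(f"cpualloc: {cpu_pct:g} %")  # cpualloc: 48.05 %
print(f"memalloc: {mem_pct:g} %")  # memalloc: 9.95333 %
```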

Nick Irvine

Aug 30, 2017, 3:11:23 PM

#17512 (comment): kubectl top shows usage, not allocation. Allocation is what causes the "insufficient CPU" problem. There's a ton of confusion in this issue about the difference.

AFAICT, there's no easy way to get a report of node CPU allocation by pod, since requests are per container in the spec. And even then it's difficult, since .spec.containers[*].resources may or may not have the requests/limits fields (in my experience)
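
FWIW, the per-node summation is doable offline from `kubectl get pods --all-namespaces -o json`. A minimal Python sketch, assuming pod dicts shaped like the API's Pod objects (all names and values below are hypothetical):

```python
from collections import defaultdict

def parse_cpu(q):
    """Parse a Kubernetes CPU quantity ('100m' or '1') into millicores."""
    if q is None:
        return 0
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def cpu_requests_by_node(pods):
    """Sum container CPU requests per node; tolerates missing requests."""
    totals = defaultdict(int)
    for pod in pods:
        node = pod["spec"].get("nodeName", "<unscheduled>")
        for c in pod["spec"]["containers"]:
            req = c.get("resources", {}).get("requests", {})
            totals[node] += parse_cpu(req.get("cpu"))
    return dict(totals)

# Hypothetical data in the shape of `kubectl get pods -o json`'s .items:
pods = [
    {"spec": {"nodeName": "node-a", "containers": [
        {"resources": {"requests": {"cpu": "100m"}}},
        {"resources": {}},  # container with no requests set
    ]}},
    {"spec": {"nodeName": "node-b", "containers": [
        {"resources": {"requests": {"cpu": "1"}}},
    ]}},
]
print(cpu_requests_by_node(pods))  # {'node-a': 100, 'node-b': 1000}
```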

Jonathan Basseri

Jan 2, 2018, 2:01:01 PM

Nic Cope

Feb 20, 2018, 11:51:48 PM

Getting in on this shell-scripting party. I have an older cluster running the cluster autoscaler with scale-down disabled. I wrote this script to determine roughly how much I can scale down the cluster when it starts to bump up against its AWS route limits:

#!/bin/bash

set -e

KUBECTL="kubectl"
NODES=$($KUBECTL get nodes --no-headers -o custom-columns=NAME:.metadata.name)

function usage() {
	local node_count=0
	local total_percent_cpu=0
	local total_percent_mem=0
	local -r nodes="$@"

	for n in $nodes; do
		local requests=$($KUBECTL describe node $n | grep -A2 -E "^\\s*CPU Requests" | tail -n1)
		local percent_cpu=$(echo $requests | awk -F "[()%]" '{print $2}')
		local percent_mem=$(echo $requests | awk -F "[()%]" '{print $8}')
		echo "$n: ${percent_cpu}% CPU, ${percent_mem}% memory"

		node_count=$((node_count + 1))
		total_percent_cpu=$((total_percent_cpu + percent_cpu))
		total_percent_mem=$((total_percent_mem + percent_mem))
	done

	local -r avg_percent_cpu=$((total_percent_cpu / node_count))
	local -r avg_percent_mem=$((total_percent_mem / node_count))

	echo "Average usage: ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}

usage $NODES

Produces output like:

ip-REDACTED.us-west-2.compute.internal: 38% CPU, 9% memory
...many redacted lines...
ip-REDACTED.us-west-2.compute.internal: 41% CPU, 8% memory
ip-REDACTED.us-west-2.compute.internal: 61% CPU, 7% memory
Average usage: 45% CPU, 15% memory.

Shubham Chaudhary

Feb 21, 2018, 2:02:09 PM

There is also pod option in top command:

kubectl top pod

Nick Irvine

Feb 21, 2018, 3:17:09 PM

Rémi Paulmier

Mar 4, 2018, 5:13:41 PM

My way to obtain the allocation, cluster-wide:

$ kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]}  {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"

It produces something like:

kube-system:heapster-v1.5.0-dc8df7cc9-7fqx6
  heapster:88m
  heapster-nanny:50m
kube-system:kube-dns-6cdf767cb8-cjjdr
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-6cdf767cb8-pnx2g
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-autoscaler-69c5cbdcdd-wwjtg
  autoscaler:20m
kube-system:kube-proxy-gke-cluster1-default-pool-cd7058d6-3tt9
  kube-proxy:100m
kube-system:kube-proxy-gke-cluster1-preempt-pool-57d7ff41-jplf
  kube-proxy:100m
kube-system:kubernetes-dashboard-7b9c4bf75c-f7zrl
  kubernetes-dashboard:50m
kube-system:l7-default-backend-57856c5f55-68s5g
  default-http-backend:10m
kube-system:metrics-server-v0.2.0-86585d9749-kkrzl
  metrics-server:48m
  metrics-server-nanny:5m
kube-system:tiller-deploy-7794bfb756-8kxh5
  tiller:10m
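
In case a cluster-wide total is wanted rather than the listing, the `container:quantity` lines above can be summed with a short script. A hedged sketch (SAMPLE is a hypothetical excerpt of the output; empty requests such as prometheus-to-sd: count as zero):

```python
import re

# Hypothetical excerpt of the jsonpath output above (indented container lines).
SAMPLE = """\
  heapster:88m
  heapster-nanny:50m
  kubedns:100m
  prometheus-to-sd:
"""

def total_millicores(text):
    """Sum the CPU requests from indented 'name:quantity' lines."""
    total = 0
    for line in text.splitlines():
        m = re.match(r"\s+\S+:(\S*)$", line.rstrip())
        if not m or not m.group(1):
            continue  # pod header line, or container with no request
        q = m.group(1)
        total += int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)
    return total

print(total_millicores(SAMPLE))  # 238
```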

Kieren Johnstone

Mar 13, 2018, 4:37:24 AM

This is weird. I want to know when I'm at or nearing allocation capacity. It seems a pretty basic function of a cluster. Whether it's a statistic that shows a high % or textual error... how do other people know this? Just always use autoscaling on a cloud platform?

Derrick Petzold

May 1, 2018, 6:03:10 PM

I authored https://github.com/dpetzold/kube-resource-lister/ to address #3. Here is some sample output:

$ ./resource-lister -namespace kube-system -reverse -field MemReq
Namespace    Name                                               CpuReq  CpuReq%  CpuLimit  CpuLimit%  MemReq    MemReq%  MemLimit  MemLimit%
---------    ----                                               ------  -------  --------  ---------  ------    -------  --------  ---------
kube-system  event-exporter-v0.1.7-5c4d9556cf-kf4tf             0       0%       0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-mshh  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-bv59  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-proxy-gke-project-default-pool-175a4a05-ntfw  100m    10%      0         0%         0         0%       0         0%
kube-system  kube-dns-autoscaler-244676396-xzgs4                20m     2%       0         0%         10Mi      0%       0         0%
kube-system  l7-default-backend-1044750973-kqh98                10m     1%       10m       1%         20Mi      0%       20Mi      0%
kube-system  kubernetes-dashboard-768854d6dc-jh292              100m    10%      100m      10%        100Mi     3%       300Mi     11%
kube-system  kube-dns-323615064-8nxfl                           260m    27%      0         0%         110Mi     4%       170Mi     6%
kube-system  fluentd-gcp-v2.0.9-4qkwk                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  fluentd-gcp-v2.0.9-jmtpw                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  fluentd-gcp-v2.0.9-tw9vk                           100m    10%      0         0%         200Mi     7%       300Mi     11%
kube-system  heapster-v1.4.3-74b5bd94bb-fz8hd                   138m    14%      138m      14%        301856Ki  11%      301856Ki  11%

pao

May 22, 2018, 5:29:00 AM

@shtouff

root@debian9:~# kubectl get po -n chenkunning-84 -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]}  {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"
error: error parsing jsonpath {range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]}  {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}, unrecognized character in action: U+0027 '''
root@debian9:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.7-beta.0+$Format:%h$", GitCommit:"bb053ff0cb25a043e828d62394ed626fda2719a1", GitTreeState:"dirty", BuildDate:"2017-08-26T09:34:19Z", GoVersion:"go1.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.7-beta.0+$Format:84c3ae0384658cd40c1d1e637f5faa98cf6a965c$", GitCommit:"3af2004eebf3cbd8d7f24b0ecd23fe4afb889163", GitTreeState:"clean", BuildDate:"2018-04-04T08:40:48Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"linux/amd64"}

Nick Irvine

May 22, 2018, 4:56:36 PM

@harryge00: U+0027 is an apostrophe character the parser didn't expect; probably a copy-paste problem (e.g. curly quotes pasted from a browser)

pao

May 25, 2018, 11:18:16 AM

@nfirvine Thanks! I have solved the problem by using:

kubectl get pods -n my-ns -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].resources.limits.cpu} {"\n"}{end}' | awk '{sum+=$2 ; print $0} END{print "sum=",sum}'

It works for namespaces whose pods only contain one container each.
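
The one-container restriction can be lifted by summing over .spec.containers[*] in a small script instead of reading containers[0]. A sketch over hypothetical data shaped like kubectl get pods -o json:

```python
def parse_cpu(q):
    """'250m' -> 250 millicores; '2' -> 2000 millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def cpu_limit_per_pod(pod):
    """Sum CPU limits across all containers of one pod (missing -> 0)."""
    return sum(
        parse_cpu(c.get("resources", {}).get("limits", {}).get("cpu", "0m"))
        for c in pod["spec"]["containers"]
    )

# Hypothetical two-container pod:
pod = {"metadata": {"name": "web-1"}, "spec": {"containers": [
    {"resources": {"limits": {"cpu": "250m"}}},  # app container
    {"resources": {"limits": {"cpu": "100m"}}},  # sidecar
]}}
print(pod["metadata"]["name"], cpu_limit_per_pod(pod))  # web-1 350
```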

Abu Shoeb

Jun 5, 2018, 11:43:48 AM

@xmik Hey, I'm using k8s 1.7 and running heapster. When I run $ kubectl top nodes --heapster-namespace=kube-system, it shows "error: metrics not available yet". Any clue for tackling the error?

Ewa Czechowska

Jun 5, 2018, 12:02:21 PM

@abushoeb:

  1. I don't think kubectl top supports the --heapster-namespace flag.
  2. If you see "error: metrics not available yet", you should check the heapster deployment. What do its logs say? Is the heapster service ok, and are its endpoints not <none>? Check the latter with a command like: kubectl -n kube-system describe svc/heapster

Abu Shoeb

Jun 5, 2018, 1:46:22 PM

@xmik you are right, heapster wasn't configured properly. Thanks a lot, it's working now. Do you know if there is a way to get real-time GPU usage information? This top command only gives CPU and memory usage.

Ewa Czechowska

Jun 5, 2018, 2:24:46 PM

I don't know that. :(

avg

Jun 21, 2018, 4:31:03 PM

@abushoeb I am getting the same error, "error: metrics not available yet". How did you fix it?

Abu Shoeb

Jun 22, 2018, 10:50:28 AM

@avgKol check your heapster deployment first. In my case it was not deployed properly. One way to check is to access the metrics via curl, e.g. curl -L http://heapster-pod-ip:heapster-service-port/api/v1/model/metrics/. If it doesn't show metrics, check the heapster pod and its logs. The heapster metrics can be accessed via a web browser too.

Henning Jacobs

Jul 18, 2018, 3:10:27 PM

If anybody is interested, I created a tool to generate static HTML for Kubernetes resource usage (and costs): https://github.com/hjacobs/kube-resource-report

Tony Li

Jul 18, 2018, 3:20:15 PM

@hjacobs I would like to use that tool but not a fan of installing/using python packages. Mind packaging it up as a docker image?

Henning Jacobs

Jul 18, 2018, 3:49:21 PM

@tonglil the project is pretty early, but my plan is to have an out-of-the-box ready Docker image incl. webserver which you can just do kubectl apply -f .. with.

Arun Gupta

Sep 12, 2018, 5:46:59 PM

Here is what worked for me:

kubectl get nodes -o=jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.status.allocatable.memory}{'\t'}{.status.allocatable.cpu}{'\n'}{end}"

It shows output as:

ip-192-168-101-177.us-west-2.compute.internal	251643680Ki	32
ip-192-168-196-254.us-west-2.compute.internal	251643680Ki	32
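
Those Ki quantities convert to human-readable units with a few lines. A sketch that handles only the binary suffixes seen in node status (not a full Kubernetes quantity parser):

```python
SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def to_bytes(q):
    """Convert a quantity like '251643680Ki' to bytes (plain ints pass through)."""
    for suf, mult in SUFFIXES.items():
        if q.endswith(suf):
            return int(q[:-len(suf)]) * mult
    return int(q)

mem = to_bytes("251643680Ki")  # allocatable memory from the output above
print(f"{mem / 1024**3:.1f} GiB")  # 240.0 GiB
```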

Henning Jacobs

Sep 13, 2018, 11:25:26 AM

@tonglil a Docker image is now available: https://github.com/hjacobs/kube-resource-report

fejta-bot

Dec 17, 2018, 8:21:28 AM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Jeff Geerling

Dec 29, 2018, 11:20:42 PM

/remove-lifecycle stale

Every month or so, my Googling leads me back to this issue. There are ways of getting the statistics I need with long jq strings, or with Grafana dashboards with a bunch of calculations... but it would be so nice if there were a command like:

# kubectl utilization cluster
cores: 19.255/24 cores (80%)
memory: 16.4/24 GiB (68%)

# kubectl utilization [node name]
cores: 3.125/4 cores (78%)
memory: 2.1/4 GiB (52%)

(similar to what @chrishiestand mentioned way earlier in the thread).

I am often building and destroying a few dozen test clusters per week, and I'd rather not have to build automation or add in some shell aliases to be able to just see "if I put this many servers out there, and toss these apps on them, what is my overall utilization/pressure".

Especially for smaller / more esoteric clusters, I don't want to set up autoscale-to-the-moon (usually for money reasons), but do need to know if I have enough overhead to handle minor pod autoscaling events.
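
Until something like that exists, the arithmetic behind such a summary is just requested-over-allocatable per resource. A sketch of the formatting, using the hypothetical numbers above:

```python
def utilization_line(name, requested, allocatable, unit):
    """Format one resource line like the hypothetical `kubectl utilization` output."""
    pct = round(100 * requested / allocatable)
    return f"{name}: {requested:g}/{allocatable:g} {unit} ({pct}%)"

# Hypothetical cluster totals (sum of pod requests vs. node allocatable):
print(utilization_line("cores", 19.255, 24, "cores"))  # cores: 19.255/24 cores (80%)
print(utilization_line("memory", 16.4, 24, "GiB"))     # memory: 16.4/24 GiB (68%)
```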

Evan Anderson

Dec 31, 2018, 10:47:10 PM

One additional request -- I'd also like to be able to see summed resource usage by namespace (at a minimum; by Deployment/label would be additionally great), so I can focus my resource-trimming efforts by figuring out which namespaces are worth concentrating on.

Peter Strzyzewski

Jan 13, 2019, 1:40:57 PM

I made a small plugin, kubectl-utilization, that provides the functionality @geerlingguy described. Installation via krew is not available yet as they need to merge a PR, but you can give it a try with the curl method. It is implemented in Bash and needs awk and bc.
With the kubectl plugin framework this could be completely abstracted away from the core tools.

Weeco

Mar 3, 2019, 6:45:23 AM

I am glad others were also facing this challenge. I created Kube Eagle (a Prometheus exporter), which helped me gain a better overview of cluster resources and ultimately better utilize the available hardware:

https://github.com/google-cloud-tools/kube-eagle

Kubernetes Resource monitoring dashboard

amelbakry

Apr 24, 2019, 9:02:46 AM

This is a python script to get the actual node utilization in table format
https://github.com/amelbakry/kube-node-utilization

Kubernetes Node Utilization..........
+------------------------------------------------+--------+--------+
| NodeName                                       | CPU    | Memory |
+------------------------------------------------+--------+--------+
| ip-176-35-32-139.eu-central-1.compute.internal | 13.49% | 60.87% |
| ip-176-35-26-21.eu-central-1.compute.internal  | 5.89%  | 15.10% |
| ip-176-35-28-29.eu-central-1.compute.internal  | 22.79% | 30.34% |
| ip-176-35-4-167.eu-central-1.compute.internal  | 11.63% | 39.49% |
| ip-176-35-17-237.eu-central-1.compute.internal | 8.32%  | 25.69% |
| ip-176-35-8-237.eu-central-1.compute.internal  | 5.15%  | 28.78% |
| ip-176-35-8-237.eu-central-1.compute.internal  | 6.91%  | 46.01% |
| ip-176-35-0-89.eu-central-1.compute.internal   | 3.59%  | 11.49% |
| ip-176-35-10-120.eu-central-1.compute.internal | 21.19% | 44.44% |
| ip-176-35-7-90.eu-central-1.compute.internal   | 5.53%  | 20.84% |
| ip-176-35-6-117.eu-central-1.compute.internal  | 6.21%  | 19.59% |
| ip-176-35-18-150.eu-central-1.compute.internal | 2.68%  | 11.10% |
| ip-176-35-4-128.eu-central-1.compute.internal  | 4.44%  | 17.46% |
| ip-176-35-9-122.eu-central-1.compute.internal  | 8.08%  | 65.51% |
| ip-176-35-22-243.eu-central-1.compute.internal | 6.29%  | 19.28% |
+------------------------------------------------+--------+--------+

Kieren Johnstone

Apr 25, 2019, 3:10:57 AM

For me at least @amelbakry it's cluster-level utilisation that's important: "do I need to add more machines?" / "should I remove some machines?" / "should I expect the cluster to scale up soon?" ..

Eugene Glotov

Apr 25, 2019, 7:58:48 AM

What about ephemeral storage usage? Any ideas how to get it from all pods?

amelbakry

Apr 25, 2019, 10:41:20 AM

@kivagant-ba you can try this snippet to get pod metrics per node; you can get the list of nodes as in
https://github.com/amelbakry/kube-node-utilization

def get_pod_metrics_per_node(node):
    pod_metrics = "/api/v1/pods?fieldSelector=spec.nodeName%3D" + node
    response = api_client.call_api(pod_metrics,
                                   'GET', auth_settings=['BearerToken'],
                                   response_type='json', _preload_content=False)

    response = json.loads(response[0].data.decode('utf-8'))

    return response

amelbakry

Apr 25, 2019, 10:43:20 AM

@kierenj I think the cluster-autoscaler component for whichever cloud your Kubernetes runs on should handle that capacity. Not sure if this is your question.

fejta-bot

Jul 30, 2019, 12:46:27 PM

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Eric Duncan

Aug 28, 2019, 7:28:29 AM

/remove-lifecycle stale

I, like many others, keep coming back here - for years - to get the hack we need to manage our clusters via the CLI (e.g. AWS ASGs).

Dmitri Moore

Aug 28, 2019, 2:07:47 PM

@etopeter Thank you for such a cool CLI plugin. Love the simplicity of it. Any advice on how to better understand the numbers and their exact meaning?

Sean

Sep 1, 2019, 11:23:45 PM

If anyone can find a use for it, here is a script that will dump the current requests and limits of pods.

kubectl get pods --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{'.spec.nodeName -'} {.spec.nodeName}{'\n'}{range .spec.containers[*]}{'requests.cpu -'} {.resources.requests.cpu}{'\n'}{'limits.cpu -'} {.resources.limits.cpu}{'\n'}{'requests.memory -'} {.resources.requests.memory}{'\n'}{'limits.memory -'} {.resources.limits.memory}{'\n'}{'\n'}{end}{'\n'}{end}"

Eric Duncan

Sep 1, 2019, 11:39:38 PM

@Spaceman1861 could you show an example output?

Sean

Sep 2, 2019, 12:04:10 AM

@eduncan911 done

Lennart Jern

Sep 2, 2019, 6:28:23 AM

I find it easier to read the output in table format, like this (this shows requests instead of limits):

kubectl get pods -o custom-columns=NAME:.metadata.name,"CPU(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY(bytes)":.spec.containers[*].resources.requests.memory --all-namespaces

Sample output:

NAME                                CPU(cores)      MEMORY(bytes)
pod1                                100m            128Mi
pod2                                100m            128Mi,128Mi
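
One caveat: multi-container pods produce comma-joined cells like 128Mi,128Mi. Summing such a cell per pod takes a few lines; a sketch handling binary suffixes only (not a full quantity parser):

```python
UNITS = {"Ki": 1, "Mi": 1024, "Gi": 1024**2}  # value of each suffix in KiB

def sum_memory_cell(cell):
    """Sum a custom-columns memory cell like '128Mi,128Mi' into MiB."""
    total_kib = 0
    for part in cell.split(","):
        for suf, kib in UNITS.items():
            if part.endswith(suf):
                total_kib += int(part[:-2]) * kib  # all suffixes are 2 chars
                break
    return total_kib // 1024

print(sum_memory_cell("128Mi"))        # 128
print(sum_memory_cell("128Mi,128Mi"))  # 256
```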

Henning Jacobs

Sep 2, 2019, 6:47:24 AM

Sean

Sep 2, 2019, 6:52:22 AM

Oooo shiny @hjacobs I like that.

amelbakry

Sep 2, 2019, 9:18:18 AM

This is a script (deployment-health.sh) to get the utilization of the pods in deployment based on the usage and configured limits
https://github.com/amelbakry/kubernetes-scripts

Screenshot from 2019-09-02 15-11-42

Alik Khilazhev

Sep 25, 2019, 9:23:38 AM

Inspired by the answers of @lentzi90 and @ylogx, I have created my own big script, which shows actual resource usage (kubectl top pods) alongside resource requests and limits:

join -a1 -a2 -o 0,1.2,1.3,2.2,2.3,2.4,2.5, -e '<none>' <(kubectl top pods) <(kubectl get pods -o custom-columns=NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory) | column -t -s' ' 

output example:

NAME                                                             CPU(cores)  MEMORY(bytes)  CPU_REQ(cores)  MEMORY_REQ(bytes)  CPU_LIM(cores)  MEMORY_LIM(bytes)
xxxxx-847dbbc4c-c6twt                                            20m         110Mi          50m             150Mi              150m            250Mi
xxx-service-7b6b9558fc-9cq5b                                     19m         1304Mi         1               <none>             1               <none>
xxxxxxxxxxxxxxx-hook-5d585b449b-zfxmh                            0m          46Mi           200m            155M               200m            155M

Dmitri Moore

Sep 25, 2019, 1:19:40 PM

This is a script (deployment-health.sh) to get the utilization of the pods in deployment based on the usage and configured limits
https://github.com/amelbakry/kubernetes-scripts

@amelbakry I am getting the following error trying to execute it on a Mac:

Failed to execute process './deployment-health.sh'. Reason:
exec: Exec format error
The file './deployment-health.sh' is marked as an executable but could not be run by the operating system.

Charles Thayer

Sep 25, 2019, 2:03:49 PM
Woops, "#!" needs to be the very first line. Instead, try "bash ./deployment-health.sh" to work around the issue.

/charles
PS. PR opened to fix the issue


Dmitri Moore

Oct 2, 2019, 3:55:37 PM

@cgthayer You might want to apply that PR fix globally. Also, when I ran the scripts on macOS Mojave, a bunch of errors showed up, including EU-specific zone names which I don't use. It looks like these scripts were written for a specific project.

Sam Mingo

Oct 21, 2019, 12:23:05 PM

Here's a modified version of the join example which totals the columns as well.

oc_ns_pod_usage () {
    # show pod usage for cpu/mem
    ns="$1"
    usage_chk3 "$ns" || return 1
    printf "$ns\n"
    separator=$(printf '=%.0s' {1..50})
    printf "$separator\n"
    output=$(join -a1 -a2 -o 0,1.2,1.3,2.2,2.3,2.4,2.5, -e '<none>' \
        <(kubectl top pods -n $ns) \
        <(kubectl get -n $ns pods -o custom-columns=NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory))
    totals=$(printf "%s" "$output" | awk '{s+=$2; t+=$3; u+=$4; v+=$5; w+=$6; x+=$7} END {print s" "t" "u" "v" "w" "x}')
    printf "%s\n%s\nTotals: %s\n" "$output" "$separator" "$totals" | column -t -s' '
    printf "$separator\n"
}

Example

$ oc_ns_pod_usage ls-indexer
ls-indexer
==================================================
NAME                                                CPU(cores)  MEMORY(bytes)  CPU_REQ(cores)  MEMORY_REQ(bytes)  CPU_LIM(cores)  MEMORY_LIM(bytes)
ls-indexer-f5-7cd5859997-qsfrp                      15m         741Mi          1               1000Mi             2               2000Mi
ls-indexer-f5-7cd5859997-sclvg                      15m         735Mi          1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-4b7j2                 92m         1103Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-5xj5l                 88m         1124Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-6vvl2                 92m         1132Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-85f66                 85m         1151Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-924jz                 96m         1124Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-g6gx8                 119m        1119Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-hkhnt                 52m         819Mi          1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-hrsrs                 51m         1122Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-j4qxm                 53m         885Mi          1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-lxlrb                 83m         1215Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-mw6rt                 86m         1131Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-pbdf8                 95m         1115Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-qk9bm                 91m         1141Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-sdv9r                 54m         1194Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-t67v6                 75m         1234Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-tkxs2                 88m         1364Mi         1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-v6jl2                 53m         747Mi          1               1000Mi             2               2000Mi
ls-indexer-filebeat-7858f56c9-wkqr7                 53m         838Mi          1               1000Mi             2               2000Mi
ls-indexer-metricbeat-74d89d7d85-jp8qc              190m        1191Mi         1               1000Mi             2               2000Mi
ls-indexer-metricbeat-74d89d7d85-jv4bv              192m        1162Mi         1               1000Mi             2               2000Mi
ls-indexer-metricbeat-74d89d7d85-k4dcd              194m        1144Mi         1               1000Mi             2               2000Mi
ls-indexer-metricbeat-74d89d7d85-n46tz              192m        1155Mi         1               1000Mi             2               2000Mi
ls-indexer-packetbeat-db98f6fdf-8x446               35m         1198Mi         1               1000Mi             2               2000Mi
ls-indexer-packetbeat-db98f6fdf-gmxxd               22m         1203Mi         1               1000Mi             2               2000Mi
ls-indexer-syslog-5466bc4d4f-gzxw8                  27m         1125Mi         1               1000Mi             2               2000Mi
ls-indexer-syslog-5466bc4d4f-zh7st                  29m         1153Mi         1               1000Mi             2               2000Mi
==================================================
Totals:                                             2317        30365          28              28000              56              56000
==================================================



Cristian Falcas

unread,
Oct 25, 2019, 5:36:59 AM10/25/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

And what is usage_chk3?

David Bernard

unread,
Oct 25, 2019, 10:26:03 AM10/25/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

I would like to also share my tool ;-) kubectl-view-allocations: a kubectl plugin to list allocations (cpu, memory, gpu, ... × requested, limit, allocatable, ...). Requests are welcome.

I made it because I would like to provide my (internal) users a way to see "who uses what". By default every resource is displayed, but in the following sample I only request resources with "gpu" in the name.

> kubectl-view-allocations -r gpu



 Resource                                   Requested  %Requested  Limit  %Limit  Allocatable  Free
  nvidia.com/gpu                                    7         58%      7     58%           12     5
  ├─ node-gpu1                                      1         50%      1     50%            2     1
  │  └─ xxxx-784dd998f4-zt9dh                       1                  1
  ├─ node-gpu2                                      0          0%      0      0%            2     2
  ├─ node-gpu3                                      0          0%      0      0%            2     2
  ├─ node-gpu4                                      1         50%      1     50%            2     1
  │  └─ aaaa-1571819245-5ql82                       1                  1
  ├─ node-gpu5                                      2        100%      2    100%            2     0
  │  ├─ bbbb-1571738839-dfkhn                       1                  1
  │  └─ bbbb-1571738888-52c4w                       1                  1
  └─ sail-gpu6                                      2        100%      2    100%            2     0
     ├─ bbbb-1571738688-vlxng                       1                  1
     └─ cccc-1571745684-7k6bn                       1                  1

coming version(s):

  • will allow hiding the (node, pod) levels or choosing how to group (e.g. to provide an overview with only resources)
  • installation via curl, krew, brew, ... (currently binaries are available under the releases section of GitHub)

Thanks to kubectl-view-utilization for the inspiration, but adding support for other resources was too much copy/paste, or too hard to do generically in bash, for me.

libudas

unread,
Nov 28, 2019, 8:56:28 AM11/28/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

here is my hack kubectl describe nodes | grep -A 2 -e "^\\s*CPU Requests"

This doesn't work anymore :(

Mostafa Gazar

unread,
Nov 28, 2019, 3:04:41 PM11/28/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Give kubectl describe node | grep -A5 "Allocated" a try

Alex Kreidler

unread,
Dec 1, 2019, 12:43:05 PM12/1/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

This is currently the 4th highest requested issue by thumbs up, but is still priority/backlog.

I'd be happy to take a stab at this if someone could point me in the right direction or if we could finalize a proposal. I think the UX of @davidB's tool is awesome, but this really belongs in the core kubectl.

smpar

unread,
Dec 18, 2019, 11:33:50 AM12/18/19
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Using the following commands, kubectl top nodes and kubectl describe node, we do not get consistent results.

For example, with the first one the CPU(cores) is 1064m, but this result cannot be reproduced with the second one (1480m):

kubectl top nodes
NAME                                                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
abcd-p174e23ea5qa4g279446c803f82-abc-node-0         1064m        53%    6783Mi          88%
kubectl describe node abcd-p174e23ea5qa4g279446c803f82-abc-node-0
...
  Resource  Requests          Limits
  --------  --------          ------
  cpu       1480m (74%)       1300m (65%)
  memory    2981486848 (37%)  1588314624 (19%)

Any idea how to get the CPU(cores) without using kubectl top nodes?
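Part of the discrepancy is expected: kubectl top nodes reports live usage measured by metrics-server, while kubectl describe node reports the scheduler's bookkeeping of requests and limits, so the two numbers answer different questions. When combining them, CPU quantities also come in mixed notations; a hedged awk sketch (the printf line is stand-in data, not live kubectl output) that normalizes them to millicores:

```shell
# Normalize Kubernetes CPU quantities to millicores:
# "1480m" stays 1480, plain core counts like "1.5" become 1500.
# The printf line stands in for values scraped from kubectl output.
printf '1480m\n1.5\n2\n' | awk '{
  if ($1 ~ /m$/) { sub(/m$/, "", $1); print $1 + 0 }
  else           { print $1 * 1000 }
}'
```

Once both sources are in millicores, comparing them side by side becomes a plain numeric diff.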

omerfsen

unread,
Jan 12, 2020, 3:19:47 PM1/12/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention


Hello David, it would be nice if you provided compiled binaries for more distributions. On Ubuntu 16.04 we get:

kubectl-view-allocations: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by kubectl-view-allocations)

dpkg -l |grep glib

ii libglib2.0-0:amd64 2.48.2-0ubuntu4.4

David Bernard

unread,
Jan 15, 2020, 5:24:32 PM1/15/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@omerfsen can you try the new version of kubectl-view-allocations and comment on the ticket version `GLIBC_2.25' not found #14.

Abu Belal

unread,
Jan 31, 2020, 10:49:14 AM1/31/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

My way to obtain the allocation, cluster-wide:

$ kubectl get po --all-namespaces -o=jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name}{'\n'}{range .spec.containers[*]}  {.name}:{.resources.requests.cpu}{'\n'}{end}{'\n'}{end}"

It produces something like:

kube-system:heapster-v1.5.0-dc8df7cc9-7fqx6
  heapster:88m
  heapster-nanny:50m
kube-system:kube-dns-6cdf767cb8-cjjdr
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-6cdf767cb8-pnx2g
  kubedns:100m
  dnsmasq:150m
  sidecar:10m
  prometheus-to-sd:
kube-system:kube-dns-autoscaler-69c5cbdcdd-wwjtg
  autoscaler:20m
kube-system:kube-proxy-gke-cluster1-default-pool-cd7058d6-3tt9
  kube-proxy:100m
kube-system:kube-proxy-gke-cluster1-preempt-pool-57d7ff41-jplf
  kube-proxy:100m
kube-system:kubernetes-dashboard-7b9c4bf75c-f7zrl
  kubernetes-dashboard:50m
kube-system:l7-default-backend-57856c5f55-68s5g
  default-http-backend:10m
kube-system:metrics-server-v0.2.0-86585d9749-kkrzl
  metrics-server:48m
  metrics-server-nanny:5m
kube-system:tiller-deploy-7794bfb756-8kxh5
  tiller:10m

by far the best answer here.
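To turn that per-container listing into a single cluster-wide number, the ":<cpu>" fields can be summed; a hedged awk sketch, assuming every request value is in millicores (the printf lines are stand-in data mimicking the jsonpath output above):

```shell
# Sum per-container CPU requests (assumed to be millicore values like
# "100m") from lines shaped like the jsonpath listing above. Container
# lines without a request (e.g. "prometheus-to-sd:") are skipped.
printf '  heapster:88m\n  heapster-nanny:50m\n  kubedns:100m\n  prometheus-to-sd:\n' |
  awk -F: '$2 ~ /m$/ { sub(/m$/, "", $2); sum += $2 } END { print sum "m" }'
```

Note the assumption: requests expressed as whole cores (e.g. "1") would need converting to millicores before this sum is meaningful.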

stefanjacobs

unread,
Feb 11, 2020, 2:34:16 AM2/11/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Inspired by the scripts above, I created the following script to view the usage, requests, and limits:

join -1 2 -2 2 -a 1 -a 2 -o "2.1 0 1.3 2.3 2.5 1.4 2.4 2.6" -e '<wait>' \
  <( kubectl top pods --all-namespaces | sort --key 2 -b ) \
  <( kubectl get pods --all-namespaces -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,"CPU_REQ(cores)":.spec.containers[*].resources.requests.cpu,"MEMORY_REQ(bytes)":.spec.containers[*].resources.requests.memory,"CPU_LIM(cores)":.spec.containers[*].resources.limits.cpu,"MEMORY_LIM(bytes)":.spec.containers[*].resources.limits.memory | sort --key 2 -b ) \
  | column -t -s' '

Because join expects sorted input, the earlier scripts failed for me.

You see as a result the current usage from top and from the deployment the requests and the limits of (here) all namespaces:

NAMESPACE                 NAME                                                        CPU(cores)  CPU_REQ(cores)  CPU_LIM(cores)  MEMORY(bytes)  MEMORY_REQ(bytes)   MEMORY_LIM(bytes)
kube-system               aws-node-2jzxr                                              18m         10m             <none>          41Mi           <none>              <none>
kube-system               aws-node-5zn6w                                              <wait>      10m             <none>          <wait>         <none>              <none>
kube-system               aws-node-h8cc5                                              20m         10m             <none>          42Mi           <none>              <none>
kube-system               aws-node-h9n4f                                              0m          10m             <none>          0Mi            <none>              <none>
kube-system               aws-node-lz5fn                                              17m         10m             <none>          41Mi           <none>              <none>
kube-system               aws-node-tpmxr                                              20m         10m             <none>          39Mi           <none>              <none>
kube-system               aws-node-zbkkh                                              23m         10m             <none>          47Mi           <none>              <none>
cluster-autoscaler        cluster-autoscaler-aws-cluster-autoscaler-5db55fbcf8-mdzkd  1m          100m            500m            9Mi            300Mi               500Mi
cluster-autoscaler        cluster-autoscaler-aws-cluster-autoscaler-5db55fbcf8-q9xs8  39m         100m            500m            75Mi           300Mi               500Mi
kube-system               coredns-56b56b56cd-bb26t                                    6m          100m            <none>          11Mi           70Mi                170Mi
kube-system               coredns-56b56b56cd-nhp58                                    6m          100m            <none>          11Mi           70Mi                170Mi
kube-system               coredns-56b56b56cd-wrmxv                                    7m          100m            <none>          12Mi           70Mi                170Mi
gitlab-runner-l           gitlab-runner-l-gitlab-runner-6b8b85f87f-9knnx              3m          100m            200m            10Mi           128Mi               256Mi
gitlab-runner-m           gitlab-runner-m-gitlab-runner-6bfd5d6c84-t5nrd              7m          100m            200m            13Mi           128Mi               256Mi
gitlab-runner-mda         gitlab-runner-mda-gitlab-runner-59bb66c8dd-bd9xw            4m          100m            200m            17Mi           128Mi               256Mi
gitlab-runner-ops         gitlab-runner-ops-gitlab-runner-7c5b85dc97-zkb4c            3m          100m            200m            12Mi           128Mi               256Mi
gitlab-runner-pst         gitlab-runner-pst-gitlab-runner-6b8f9bf56b-sszlr            6m          100m            200m            20Mi           128Mi               256Mi
gitlab-runner-s           gitlab-runner-s-gitlab-runner-6bbccb9b7b-dmwgl              50m         100m            200m            27Mi           128Mi               512Mi
gitlab-runner-shared      gitlab-runner-shared-gitlab-runner-688d57477f-qgs2z         3m          <none>          <none>          15Mi           <none>              <none>
kube-system               kube-proxy-5b65t                                            15m         100m            <none>          19Mi           <none>              <none>
kube-system               kube-proxy-7qsgh                                            12m         100m            <none>          24Mi           <none>              <none>
kube-system               kube-proxy-gn2qg                                            13m         100m            <none>          23Mi           <none>              <none>
kube-system               kube-proxy-pz7fp                                            15m         100m            <none>          18Mi           <none>              <none>
kube-system               kube-proxy-vdjqt                                            15m         100m            <none>          23Mi           <none>              <none>
kube-system               kube-proxy-x4xtp                                            19m         100m            <none>          15Mi           <none>              <none>
kube-system               kube-proxy-xlpn7                                            0m          100m            <none>          0Mi            <none>              <none>
metrics-server            metrics-server-5875c7d795-bj7cq                             5m          200m            500m            29Mi           200Mi               500Mi
metrics-server            metrics-server-5875c7d795-jpjjn                             7m          200m            500m            29Mi           200Mi               500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-06t94f             <wait>      200m,100m       200m,200m       <wait>         200Mi,128Mi         500Mi,500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-10lpn9j            1m          200m,100m       200m,200m       12Mi           200Mi,128Mi         500Mi,500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-11jrxfh            <wait>      200m,100m       200m,200m       <wait>         200Mi,128Mi         500Mi,500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-129hpvl            1m          200m,100m       200m,200m       12Mi           200Mi,128Mi         500Mi,500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-13kswg8            1m          200m,100m       200m,200m       12Mi           200Mi,128Mi         500Mi,500Mi
gitlab-runner-s           runner-heq8ujaj-project-10386-concurrent-15qhp5w            <wait>      200m,100m       200m,200m       <wait>         200Mi,128Mi         500Mi,500Mi

Noteworthy: You can sort over CPU consumption with e.g.:

| awk 'NR<2{print $0;next}{print $0| "sort --key 3 --numeric -b --reverse"}'

This works on Mac; I am not sure if it works on Linux too (because of join, sort, etc.).

Hopefully, someone can use this till kubectl gets a good view for that.
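The sorted-input caveat above is easy to demonstrate with stand-in data; a minimal sketch of the same join/merge pattern, where usage.txt and requests.txt are hypothetical files mimicking `kubectl top pods` and `kubectl get pods` output keyed by pod name:

```shell
# join needs both inputs sorted on the join key (here: pod name);
# unsorted input silently drops lines.
printf 'pod-a 5m\npod-b 7m\n' | sort -k1,1 > usage.txt
printf 'pod-a 100m\npod-c 200m\n' | sort -k1,1 > requests.txt
# -a 1 -a 2 keeps unpaired lines from both sides; -e fills the gaps.
join -a 1 -a 2 -e '<none>' -o '0 1.2 2.2' usage.txt requests.txt
```

pod-b appears only in the usage file and pod-c only in the requests file, so both show a `<none>` in the missing column rather than disappearing.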

Eyal Levin

unread,
Feb 18, 2020, 7:13:39 AM2/18/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

I have a good experience with kube-capacity.

Example:

kube-capacity --util

NODE              CPU REQUESTS    CPU LIMITS    CPU UTIL    MEMORY REQUESTS    MEMORY LIMITS   MEMORY UTIL
*                 560m (28%)      130m (7%)     40m (2%)    572Mi (9%)         770Mi (13%)     470Mi (8%)
example-node-1    220m (22%)      10m (1%)      10m (1%)    192Mi (6%)         360Mi (12%)     210Mi (7%)
example-node-2    340m (34%)      120m (12%)    30m (3%)    380Mi (13%)        410Mi (14%)     260Mi (9%)

boniek

unread,
Apr 9, 2020, 10:05:09 AM4/9/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

In order for this tool to be truly useful, it should detect all Kubernetes device plugins deployed on the cluster and show usage for all of them. CPU/memory is definitely not enough. There are also GPUs, TPUs (for machine learning), Intel QAT, and probably more I don't know about.

David Bernard

unread,
Apr 9, 2020, 10:36:13 AM4/9/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@boniek83, that's why I created kubectl-view-allocations: because I need to list GPUs, ... Any feedback (on the GitHub project) is welcome. I'm curious to know if it detects TPUs (it should, if they are listed as a node's resources).

boniek

unread,
Apr 9, 2020, 1:22:02 PM4/9/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@boniek83, that's why I created kubectl-view-allocations: because I need to list GPUs, ... Any feedback (on the GitHub project) is welcome. I'm curious to know if it detects TPUs (it should, if they are listed as a node's resources).

I'm aware of your tool and, for my purpose, it is the best that is currently available. Thanks for making it!
I will try to get TPUs tested after Easter. It would be helpful if this data were available in a web app with pretty graphs, so I wouldn't have to give data scientists any access to Kubernetes. They only want to know who is eating away the resources, nothing more :)

Enrico Tröger

unread,
Apr 12, 2020, 6:12:12 PM4/12/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Since none of the tools and scripts above fit my needs (and this issue is still open :( ), I hacked my own variant:
https://github.com/eht16/kube-cargo-load

It provides a quick overview of pods in a cluster and shows their configured memory requests and limits alongside the actual memory usage. The idea is to get a picture of the ratio between configured memory limits and actual usage.

RahulRatan07

unread,
Apr 26, 2020, 12:46:18 AM4/26/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

How can we get memory dump logs of the pods?
Pods often get hung.

hmsvigle

unread,
Apr 27, 2020, 4:40:13 AM4/27/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

  • kubectl describe nodes or kubectl top nodes: which one should be used to calculate cluster resource utilization?
  • Also, why is there a difference between these two results?
    Is there any logical explanation for this yet?

Brian Pursley

unread,
Apr 29, 2020, 9:36:52 AM4/29/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

/kind feature

Prathamesh Dhanawade

unread,
Apr 30, 2020, 3:09:58 PM4/30/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

All the comments and hacks with nodes worked well for me. I also need something for a higher-level view to keep track of, like the sum of resources per node pool!

rajjar123456

unread,
Jul 17, 2020, 7:49:24 PM7/17/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

hi,
I want to log the CPU and memory usage for a pod every 5 minutes over a period of time, then use this data to create a graph in Excel. Any ideas? Thanks
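One low-tech approach is a loop that appends timestamped readings to a CSV, which Excel imports directly. A sketch, not tested against a live cluster: TOP_CMD defaults to stand-in data here and would be pointed at a real kubectl invocation in practice.

```shell
#!/bin/sh
# Append one timestamped CSV row per sample: timestamp,pod,cpu,memory.
# TOP_CMD is stand-in data here; in a real cluster point it at
# something like: kubectl top pod <pod-name> --no-headers
TOP_CMD="${TOP_CMD:-echo mypod 5m 12Mi}"
INTERVAL="${INTERVAL:-300}"   # seconds between samples (5 minutes)
SAMPLES="${SAMPLES:-1}"       # how many samples to take
OUT="${OUT:-pod-usage.csv}"

i=0
while [ "$i" -lt "$SAMPLES" ]; do
  $TOP_CMD | awk -v ts="$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    '{ print ts "," $1 "," $2 "," $3 }' >> "$OUT"
  i=$((i + 1))
  if [ "$i" -lt "$SAMPLES" ]; then sleep "$INTERVAL"; fi
done
```

Raise SAMPLES and keep INTERVAL at 300 for the 5-minute cadence; each run keeps appending to the same CSV.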

Yceos HdA

unread,
Jul 28, 2020, 8:49:34 AM7/28/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Hi,
I'm happy to see that Google pointed all of us to this issue :-) (a bit disappointed that it's still open after almost 5 years). Thanks for all the shell snippets and other tools.

Serhey Dolgushev

unread,
Aug 5, 2020, 9:38:34 AM8/5/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Simple and quick hack:

$ kubectl describe nodes | grep 'Name:\|  cpu\|  memory'
Name:               XXX-2-wke2
  cpu                                               1552m (77%)   2402m (120%)
  memory                                            2185Mi (70%)  3854Mi (123%)
Name:               XXX-2-wkep
  cpu                                               1102m (55%)   1452m (72%)
  memory                                            1601Mi (51%)  2148Mi (69%)
Name:               XXX-2-wkwz
  cpu                                               852m (42%)    1352m (67%)
  memory                                            1125Mi (36%)  3624Mi (116%)

boniek

unread,
Aug 5, 2020, 9:42:11 AM8/5/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Simple and quick hack:

$ kubectl describe nodes | grep 'Name:\|  cpu\|  memory'

Name:               XXX-2-wke2
  cpu                                               1552m (77%)   2402m (120%)
  memory                                            2185Mi (70%)  3854Mi (123%)
Name:               XXX-2-wkep
  cpu                                               1102m (55%)   1452m (72%)
  memory                                            1601Mi (51%)  2148Mi (69%)
Name:               XXX-2-wkwz
  cpu                                               852m (42%)    1352m (67%)
  memory                                            1125Mi (36%)  3624Mi (116%)

Device plugins are not there. They should be. Such devices are resources as well.

Aécio Pires

unread,
Aug 7, 2020, 2:27:31 PM8/7/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Hello!

I created this script and share it with you.

https://github.com/Sensedia/open-tools/blob/master/scripts/listK8sHardwareResources.sh

This script compiles some of the ideas you shared here. It can be extended and can help other people get the metrics more simply.

Thanks for sharing the tips and commands!

Laury Bueno

unread,
Aug 9, 2020, 2:17:43 PM8/9/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

For my use case, I ended up writing a simple kubectl plugin that lists CPU/RAM limits/reservations for nodes in a table. It also checks current pod CPU/RAM consumption (like kubectl top pods), but orders the output by CPU in descending order.

It's more of a convenience thing than anything else, but maybe someone else will find it useful too.

https://github.com/laurybueno/kubectl-hoggers

denissabramovs

unread,
Sep 20, 2020, 7:58:30 AM9/20/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Whoa, what a huge thread, and still no proper solution from the Kubernetes team to calculate the current overall CPU usage of a whole cluster?

cafebabe1991

unread,
Oct 16, 2020, 3:45:42 AM10/16/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

For those looking to run this on minikube, first enable the metrics-server add-on:
minikube addons enable metrics-server
and then run the command
kubectl top nodes

Max Malm

unread,
Nov 14, 2020, 3:31:50 PM11/14/20
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

If you're using Krew:

kubectl krew install resource-capacity
kubectl resource-capacity
NODE                                          CPU REQUESTS   CPU LIMITS     MEMORY REQUESTS   MEMORY LIMITS
*                                             16960m (35%)   18600m (39%)   26366Mi (14%)     3100Mi (1%)
ip-10-0-138-176.eu-north-1.compute.internal   2460m (31%)    4200m (53%)    567Mi (1%)        784Mi (2%)
ip-10-0-155-49.eu-north-1.compute.internal    2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-162-84.eu-north-1.compute.internal    3860m (48%)    3900m (49%)    8399Mi (27%)      414Mi (1%)
ip-10-0-200-101.eu-north-1.compute.internal   2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-231-146.eu-north-1.compute.internal   2160m (27%)    2200m (27%)    4303Mi (14%)      414Mi (1%)
ip-10-0-251-167.eu-north-1.compute.internal   4160m (52%)    3900m (49%)    4491Mi (14%)      660Mi (2%)

fejta-bot

unread,
Feb 12, 2021, 3:50:35 PM2/12/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

Abu Belal

unread,
Feb 12, 2021, 3:56:59 PM2/12/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

5 years and still open. I understand there are loads of tools available to check pod resource usage, but honestly, why not supply a standard one out of the box that's simple to use? Bundling Grafana and Prometheus with all the monitoring you could require would have been a godsend for my team. We wasted months experimenting with different solutions. Please, kube maintainers, give us something out of the box and close this issue!

tculp

unread,
Feb 12, 2021, 4:09:44 PM2/12/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

/remove-lifecycle stale

Rok Carl

unread,
Mar 22, 2021, 10:39:31 AM3/22/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Even with all the tools above (I currently use kubectl-view-utilization), none of them can answer: "Can I run 3 replicas of an application pod that requires 1500 mCPU on my app nodes?" I have to do some number-crunching manually.
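That number-crunching can at least be scripted. Given the free millicores per app node (collected with any of the snippets in this thread), a greedy awk check answers the fit question; a sketch with stand-in node data, assuming each node can host floor(free/need) replicas:

```shell
# Can 3 replicas needing 1500m each be placed, given free millicores
# per node (one value per line, stand-in data)? Each node contributes
# floor(free/need) replica slots.
printf '2000\n3100\n1600\n' | awk -v need=1500 -v replicas=3 '
  { slots += int($1 / need) }
  END { if (slots >= replicas) print "yes"; else print "no" }'
```

This ignores memory, taints, affinity, and per-pod overhead, so treat "yes" as an upper bound on schedulability, not a guarantee.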

Mani Gandham

unread,
Mar 22, 2021, 11:08:44 AM3/22/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Highly recommend another great tool called K9S: https://github.com/derailed/k9s

It's a separate CLI tool but uses the same config context for access and offers a lot of terminal/UI utility for monitoring and managing your cluster.

jeho

unread,
Mar 31, 2021, 12:39:33 PM3/31/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

kubectl describe nodes | grep "Allocated resources" -A 9

Eddie Zaneski

unread,
Mar 31, 2021, 1:03:10 PM3/31/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Judging from the long history of comments here and the many issues and requests reported in this thread, everyone seems to have different expectations. This thread is more of a wiki now.

We'd be happy to see one of these plugins be proposed for upstreaming via a KEP. If someone wants to own this and bias for action with a decision, please open a KEP for discussion.

/close

Kubernetes Prow Robot

unread,
Mar 31, 2021, 1:03:20 PM3/31/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@eddiezane: Closing this issue.

In response to this:

Judging from the long history of comments here and the many issues and requests reported in this thread, everyone seems to have different expectations. This thread is more of a wiki now.

We'd be happy to see one of these plugins be proposed for upstreaming via a KEP. If someone wants to own this and bias for action with a decision, please open a KEP for discussion.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Kubernetes Prow Robot

unread,
Mar 31, 2021, 1:03:22 PM3/31/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Closed #17512.

champak

unread,
Apr 15, 2021, 3:46:06 PM4/15/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

In case folks are still listening in on this issue... has anyone attempted using the standard resource usage APIs, like getrusage(), for software running inside containers/pods? For CPU stats it does not seem that it would be far off from what the node-level cgroup would report.

Memory stats seem more problematic. It is unclear whether, say, /sys/fs/cgroup/memory/<> from inside a container really reflects memory usage correctly. Being able to monitor resource usage from within an app (and then changing behavior in the app, etc.) is a neat capability. It seems unclear when that will be available in k8s, so I am casting around for workarounds.
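For a quick in-process view on Linux, the container's own /proc is readable without any Kubernetes API access; whether those numbers agree with the pod's cgroup accounting is exactly the open question above (and differs between cgroup v1 and v2). A Linux-only sketch:

```shell
# Print this process's resident set size as seen from inside the
# container (Linux only; reads the /proc entry for the awk process).
# Comparing this against the pod's cgroup limit gives a rough sense of
# how trustworthy the in-container view is.
awk '/^VmRSS:/ { print $2, $3 }' /proc/self/status
```

The output is a kB value such as `3456 kB`; the exact number varies per run, so only its presence and positivity are meaningful.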

dguyhasnoname

unread,
Jun 18, 2021, 2:31:01 PM6/18/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

another tool to see resources node-wise, namespace-wise: https://github.com/dguyhasnoname/k8s-day2-ops/tree/master/resource_calcuation/k8s-toppur

Jack Peterson

unread,
Sep 20, 2021, 11:48:26 PM9/20/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

My hack (on k8s 1.18; EKS)

kubectl describe nodes | grep 'Name:\|Allocated' -A 5 | grep 'Name\|memory'



Shawn Cao

unread,
Nov 7, 2021, 5:27:24 PM11/7/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Lots of gems in this thread :) thanks all! (I wish some good writer could summarize it and publish a quick cheat sheet.)

Vladimir

unread,
Nov 10, 2021, 10:41:09 AM11/10/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

@jackdpeterson answer adapted for Powershell :)

kubectl describe nodes | Select-String -Pattern 'Allocated resources:' -Context 0,5

sanderdescamps

unread,
Nov 19, 2021, 4:59:27 AM11/19/21
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

kubectl describe nodes | grep "Allocated resources" -A 9

Without counting the lines

kubectl describe nodes | awk '/Allocated resources/,/Events/' | head -n-1

Jason Dusek

unread,
Jul 6, 2022, 1:21:17 AM7/6/22
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

It's not perfect, but we can get a serviceable summary with sed:

:;  kubectl describe nodes |
    sed -n '/^Allocated /,/^Events:/ { /^  [^(]/ p; } ; /^Name: / p'
Name:               ip100.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         6773m (90%)        14300m (190%)
  memory                      12851005952 (40%)  18577645056 (57%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)
Name:               ip200.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         7082m (94%)        9500m (126%)
  memory                      26405455360 (83%)  24630806144 (77%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)
Name:               ip300.k8s.computer
  Resource                    Requests           Limits
  --------                    --------           ------
  cpu                         7153m (95%)        8800m (117%)
  memory                      27759605888 (86%)  22996783232 (71%)
  ephemeral-storage           0 (0%)             0 (0%)
  hugepages-1Gi               0 (0%)             0 (0%)
  hugepages-2Mi               0 (0%)             0 (0%)



Peter Pan

unread,
Sep 18, 2022, 1:05:45 AM9/18/22
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Assume units of m for CPU and Ki for memory.

allocatable_cpu=$(kubectl describe node |grep Allocatable -A 5|grep cpu | awk '{sum+=$NF;} END{print sum;}')
allocatable_mem=$(kubectl describe node |grep Allocatable -A 5|grep memory| awk '{sum+=$NF;} END{print sum;}')

The allocated resources in the requests field:

allocated_req_cpu=$(kubectl describe node |grep Allocated -A 5|grep cpu | awk '{sum+=$2; } END{print sum;}')
allocated_req_mem=$(kubectl describe node |grep Allocated -A 5|grep memory| awk '{sum+=$2; } END{print sum;}')

Finally, we get the resource space left:

usable_req_cpu=$(( allocatable_cpu - allocated_req_cpu ))
usable_req_mem=$(( allocatable_mem - allocated_req_mem ))

echo Usable CPU request $usable_req_cpu M
echo Usable Memory request $usable_req_mem Ki



Julien Laurenceau

unread,
Nov 23, 2022, 9:17:01 AM11/23/22
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Getting in on this shell scripting party. I have an older cluster running the CA with scale down disabled. I wrote this script to determine roughly how much I can scale down the cluster when it starts to bump up on its AWS route limits:

Updated version of this shell function

function kusage() {
    # Report allocatable resources and request/limit percentages per node,
    # plus the cluster-wide average of CPU and memory requests.
    local node_count=0
    local total_percent_cpu=0
    local total_percent_mem=0

    # echo -e so the \t escapes are interpreted under bash as well as zsh
    echo -e "NODE\t\t CPU_allocatable\t Memory_allocatable\t CPU_requests%\t Memory_requests%\t CPU_limits%\t Memory_limits%\t"
    for n in $(kubectl get nodes --no-headers -o custom-columns=NAME:.metadata.name); do
        # CPU line of the "Allocated resources" table: <requests> (<req%>) <limits> (<lim%>)
        local requests=$(kubectl describe node $n | grep -A2 -E "Resource" | tail -n1 | tr -d '(%)')
        local abs_cpu=$(echo $requests | awk '{print $2}')
        local percent_cpu=$(echo $requests | awk '{print $3}')
        local node_cpu=$(echo $abs_cpu $percent_cpu | tr -d 'mKi' | awk '{print int($1/$2*100)}')
        local allocatable_cpu=$(echo $node_cpu $abs_cpu | tr -d 'mKi' | awk '{print int($1 - $2)}')
        local percent_cpu_lim=$(echo $requests | awk '{print $5}')
        # Memory line of the same table
        requests=$(kubectl describe node $n | grep -A3 -E "Resource" | tail -n1 | tr -d '(%)')
        local abs_mem=$(echo $requests | awk '{print $2}')
        local percent_mem=$(echo $requests | awk '{print $3}')
        local node_mem=$(echo $abs_mem $percent_mem | tr -d 'mKi' | awk '{print int($1/$2*100)}')
        local allocatable_mem=$(echo $node_mem $abs_mem | tr -d 'mKi' | awk '{print int($1 - $2)}')
        local percent_mem_lim=$(echo $requests | awk '{print $5}')
        echo -e "$n\t ${allocatable_cpu}m\t\t\t ${allocatable_mem}Ki\t\t ${percent_cpu}%\t\t ${percent_mem}%\t\t\t ${percent_cpu_lim}%\t\t ${percent_mem_lim}%\t"

        node_count=$((node_count + 1))
        total_percent_cpu=$((total_percent_cpu + percent_cpu))
        total_percent_mem=$((total_percent_mem + percent_mem))
    done

    local avg_percent_cpu=$((total_percent_cpu / node_count))
    local avg_percent_mem=$((total_percent_mem / node_count))

    echo "Average usage (requests) : ${avg_percent_cpu}% CPU, ${avg_percent_mem}% memory."
}
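
Note that `$(( total / count ))` is integer arithmetic, so the averages above are truncated. If fractional averages matter, a small awk helper avoids the truncation (a sketch, with sample percentages standing in for real per-node data):

```shell
# Average a stream of numbers with one decimal place, instead of
# truncating integer division like $(( total / count )).
avg() {
  awk '{ sum += $1; n++ } END { if (n) printf "%.1f\n", sum / n }'
}

printf '%s\n' 94 95 | avg   # 94.5
```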

[screenshot: sample kusage output]




AK Sarav

unread,
Mar 28, 2025, 2:51:57 PMMar 28
to kubernetes/kubernetes, k8s-mirror-cli-misc, Team mention

Try KubeNodeUsage

https://github.com/AKSarav/KubeNodeUsage


