@gheon: Reiterating the mentions to trigger a notification:
@kubernetes/sig-cli-feature-requests
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
+1 on being able to see resource usage by namespace.
Describing the ResourceQuota resource in the namespace gives you some of this:
$ kubectl describe resourcequota/compute-resources --namespace foobar
Name:            compute-resources
Namespace:       foobar
Resource         Used    Hard
--------         ----    ----
limits.cpu       13      96
limits.memory    25024M  360Gi
requests.cpu     6500m   48
requests.memory  12512M  180Gi
Doesn't kubectl describe resourcequota for a namespace accomplish this?
$> kubectl --context <cluster_context> describe resourcequota -n my-namespace
Name:            compute-resources
Namespace:       my-namespace
Resource         Used    Hard
--------         ----    ----
limits.cpu       13      96
limits.memory    25024M  360Gi
requests.cpu     6500m   48
requests.memory  12512M  180Gi
Hi. I authored https://github.com/dpetzold/kube-resource-explorer/ for this.
It displays historical resource usage statistics from StackDriver by evaluating the TimeSeries data per container over the requested duration: the latest value (the most recent data point), the minimum, the maximum, and the average or the mode. The average is shown when CPU is requested; for memory the mode (the most commonly occurring value in the set) is shown.
Below is some sample output:
$ ./resource-explorer -historical -duration 4h -mem -sort Mode -reverse -namespace kube-system
Pod/Container Last Min Max Avg/Mode
------------------------------------------------------------- ------ ------ ------ --------
l7-default-backend-1044750973-kqh98/default-http-backend 2Mi 2Mi 2Mi 2Mi
kube-dns-323615064-8nxfl/dnsmasq 6Mi 6Mi 6Mi 6Mi
event-exporter-v0.1.7-5c4d9556cf-kf4tf/prometheus-to-sd-exporter 6Mi 6Mi 6Mi 6Mi
heapster-v1.4.3-74b5bd94bb-fz8hd/prom-to-sd 7Mi 7Mi 7Mi 7Mi
fluentd-gcp-v2.0.9-4qkwk/prometheus-to-sd-exporter 8Mi 8Mi 8Mi 8Mi
fluentd-gcp-v2.0.9-tw9vk/prometheus-to-sd-exporter 9Mi 9Mi 9Mi 9Mi
fluentd-gcp-v2.0.9-jmtpw/prometheus-to-sd-exporter 9Mi 9Mi 9Mi 9Mi
kube-dns-323615064-8nxfl/kubedns 10Mi 10Mi 10Mi 10Mi
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster-nanny 10Mi 10Mi 10Mi 10Mi
kube-dns-autoscaler-244676396-xzgs4/autoscaler 11Mi 11Mi 11Mi 11Mi
kube-dns-323615064-8nxfl/sidecar 13Mi 12Mi 13Mi 13Mi
kube-proxy-gke-project-default-pool-175a4a05-bv59/kube-proxy 15Mi 15Mi 15Mi 15Mi
event-exporter-v0.1.7-5c4d9556cf-kf4tf/event-exporter 15Mi 15Mi 15Mi 15Mi
kube-proxy-gke-project-default-pool-175a4a05-ntfw/kube-proxy 18Mi 18Mi 18Mi 18Mi
kube-proxy-gke-project-default-pool-175a4a05-mshh/kube-proxy 18Mi 18Mi 19Mi 18Mi
kubernetes-dashboard-768854d6dc-jh292/kubernetes-dashboard 31Mi 31Mi 31Mi 31Mi
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster 33Mi 32Mi 39Mi 34Mi
fluentd-gcp-v2.0.9-jmtpw/fluentd-gcp 138Mi 136Mi 139Mi 138Mi
fluentd-gcp-v2.0.9-tw9vk/fluentd-gcp 136Mi 130Mi 162Mi 162Mi
fluentd-gcp-v2.0.9-4qkwk/fluentd-gcp 144Mi 126Mi 181Mi 178Mi
Results shown are for a period of 4h0m0s. 2,400 data points were evaluated.
$ ./resource-explorer -historical -duration 4h -cpu -sort Max -reverse -namespace kube-system
Pod/Container Last Min Max Avg/Mode
------------------------------------------------------------- ------ ------ ------ --------
heapster-v1.4.3-74b5bd94bb-fz8hd/prom-to-sd 0m 0m 0m 0m
event-exporter-v0.1.7-5c4d9556cf-kf4tf/prometheus-to-sd-exporter 0m 0m 0m 0m
fluentd-gcp-v2.0.9-jmtpw/prometheus-to-sd-exporter 0m 0m 0m 0m
fluentd-gcp-v2.0.9-4qkwk/prometheus-to-sd-exporter 0m 0m 0m 0m
kube-dns-323615064-8nxfl/kubedns 0m 0m 0m 0m
kube-dns-323615064-8nxfl/dnsmasq 0m 0m 0m 0m
kubernetes-dashboard-768854d6dc-jh292/kubernetes-dashboard 0m 0m 0m 0m
kube-dns-autoscaler-244676396-xzgs4/autoscaler 0m 0m 0m 0m
l7-default-backend-1044750973-kqh98/default-http-backend 0m 0m 0m 0m
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster-nanny 0m 0m 0m 0m
fluentd-gcp-v2.0.9-tw9vk/prometheus-to-sd-exporter 0m 0m 0m 0m
event-exporter-v0.1.7-5c4d9556cf-kf4tf/event-exporter 0m 0m 0m 0m
heapster-v1.4.3-74b5bd94bb-fz8hd/heapster 1m 1m 1m 1m
kube-dns-323615064-8nxfl/sidecar 1m 0m 1m 0m
kube-proxy-gke-project-default-pool-175a4a05-ntfw/kube-proxy 1m 1m 2m 1m
kube-proxy-gke-project-default-pool-175a4a05-bv59/kube-proxy 1m 1m 2m 1m
kube-proxy-gke-project-default-pool-175a4a05-mshh/kube-proxy 1m 1m 2m 1m
fluentd-gcp-v2.0.9-tw9vk/fluentd-gcp 6m 5m 7m 5m
fluentd-gcp-v2.0.9-4qkwk/fluentd-gcp 6m 5m 12m 6m
fluentd-gcp-v2.0.9-jmtpw/fluentd-gcp 28m 23m 32m 28m
Results shown are for a period of 4h0m0s. 2,400 data points were evaluated.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
kubectl describe resourcequota -n <namespace> will only work if you have created a ResourceQuota in that namespace.
It would be nice to have a command to see total resource utilization for any namespace.
I know we can configure different tools to capture and show this utilization, but a native command-line tool would be more useful and lightweight.
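For completeness, a minimal sketch of that prerequisite: creating a ResourceQuota so that kubectl describe resourcequota has something to report. The namespace, quota name, and limits below just mirror the example output earlier in this thread; adjust them for your cluster.

# Hypothetical quota; adjust the namespace, name, and limits for your cluster
cat <<EOF | kubectl apply -n my-namespace -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "48"
    requests.memory: 180Gi
    limits.cpu: "96"
    limits.memory: 360Gi
EOF

# Shows Used vs. Hard for the namespace
kubectl describe resourcequota compute-resources -n my-namespace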
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Is this different from "kubectl top" ?
@davidopp also, kubectl top relies on heapster... which is... not the future.
I'm pretty sure kubectl top can be used with metrics-server, so it does not rely on heapster anymore:
https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/#resource-metrics-pipeline
@smarterclayton please do not remove kubectl top, we use it on a daily basis to check the resource state of nodes and pods. It's so useful that many others probably use it too.
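With metrics-server installed, kubectl top reads from the resource metrics API rather than heapster. A quick sketch of checking that and of the day-to-day usage described above (kube-system is just an example namespace):

# Check that the resource metrics API is being served (metrics-server or equivalent)
kubectl get apiservices v1beta1.metrics.k8s.io

# Current usage per node and per pod
kubectl top node
kubectl top pod -n kube-system
kubectl top pod --all-namespaces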
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle rotten
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Still an issue...
/remove-lifecycle rotten
Still an issue...
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
any update on this?
any update on this?
A good way to stay updated is to subscribe to notifications on this issue.
Leaving comments like this spams everyone silently following the issue for updates (20 at last count, not including you) and doesn't help get the underlying issue resolved.
Hope this helps.
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
I've been looking for a feature like this. In the meantime I hacked together this fish function to help display resource usage across all nodes for a given namespace. I haven't taken the time to actually sum the resources, but that shouldn't be hard to implement. This just shows a nicely formatted table of Non-terminated Pods.
function kube-get-resource-usage
    # Print the "Non-terminated Pods" rows for a single namespace across all nodes.
    if not count $argv > /dev/null
        echo "Usage: kube-get-resource-usage <namespace>"
        return
    end
    set c 0
    # Iterate over every node name (skip the header line of `kubectl get nodes`).
    kubectl get nodes | sed '1d' | awk '{print $1}' | while read node
        if [ $c = 0 ]
            # First node: keep the column header row plus this namespace's pods.
            kubectl describe node $node | sed -n '/^Non-terminated Pods:/,/Allocated resources:/p' | sed '1d;$d' | sed '2d' | awk '$1 == "'$argv[1]'" || $1 == "Namespace" { print $0 }'
        else
            # Remaining nodes: header already printed, keep only this namespace's pods.
            kubectl describe node $node | sed -n '/^Non-terminated Pods:/,/Allocated resources:/p' | sed '1,3d;$d' | awk '$1 == "'$argv[1]'" { print $0 }'
        end
        set c (math $c + 1)
    end | column -t
end
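Usage would then look something like this (kube-system is just an example namespace; the columns come straight from kubectl describe node):

$ kube-get-resource-usage kube-system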
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle rotten
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Ended up writing some jq to get a quick snapshot of namespace resource usage:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/$KUBE_NAMESPACE/pods | jq '
  [ .items[].containers[].usage
    | {
        cpu:    .cpu    | rtrimstr("n")  | tonumber,
        memory: .memory | rtrimstr("Ki") | tonumber
      }
  ]
  | reduce .[] as $item ({totalCpu: 0, totalMemory: 0}; .totalCpu += $item.cpu | .totalMemory += $item.memory)'
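The metrics API reports CPU in nanocores and memory in Ki here, so the totals above come out in those units. A small variation of the same jq (assuming the same "n"/"Ki" suffixes as the snippet above; other suffixes would need extra handling) that converts the totals to millicores and Mi:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/$KUBE_NAMESPACE/pods \
  | jq '[.items[].containers[].usage]
        | { totalCpuMillicores: (map(.cpu | rtrimstr("n") | tonumber) | add / 1000000),
            totalMemoryMi: (map(.memory | rtrimstr("Ki") | tonumber) | add / 1024) }'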
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Closed #55046 as completed.
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
k top pods -n <namespace> | awk 'BEGIN {mem=0; cpu=0} {mem += int($3); cpu += int($2);} END {print "Memory: " mem "Mi" " " "Cpu: " cpu "m"}'
Remember, it's important to distinguish between requests and actual usage. Seeing tonnes of available resources in kubectl top doesn't necessarily mean they're schedulable.
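A rough way to look at both sides of that, assuming metrics-server for the usage half (<node-name> and <namespace> are placeholders):

# What the scheduler accounts for: requests/limits allocated on a node
kubectl describe node <node-name> | grep -A 8 'Allocated resources'

# What is actually being consumed right now
kubectl top node
kubectl top pod -n <namespace>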
kubectl with 1000s of awks and args, nice that this is so easy.
Is there really no effort to implement this in kubectl?