node_exporter not collecting node_cpu


asi....@stellarcreativelab.com

May 24, 2018, 8:23:07 PM5/24/18
to Prometheus Users

Hi,


I'm new to Prometheus, so bear with me, please.

Question: Why would node_exporter not collect node_cpu information, even when the --collector.cpu flag is on?

Back story

I've installed node_exporter on a server, hoping to monitor CPU utilisation:
Prometheus query: 
avg(node_load5{instance=~"ls0:9100"}) /  count(count(node_cpu{instance=~"ls0:9100"}) by (cpu)) * 100
 

But the query fails on a few servers, since they don't have the node_cpu datapoint (they do have node_load5, if that makes a difference).

When I look at the node_exporter metrics, it doesn't show node_cpu, but I have no idea why.


Host operating system: output of uname -a

# uname -a
Linux box7 3.10.0-693.21.1.el7.x86_64 #1 SMP Wed Mar 7 19:03:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

node_exporter version: output of node_exporter --version

$ ./node_exporter --version
node_exporter, version 0.16.0 (branch: , revision: )
  build user:       copr
  build date:       20180521-14:45:46
  go version:       go1.9.4


node_exporter command line flags

/usr/sbin/node_exporter --log.level="debug" --collector.cpu --collector.textfile.directory /var/lib/node_exporter/textfile_collector


Are you running node_exporter in Docker?

No, on physical multi-core systems.



--log.level="debug" information

box7 /etc/systemd/system $ /usr/sbin/node_exporter --log.level="debug" --collector.cpu --collector.textfile.directory /var/lib/node_exporter/textfile_collector

INFO[0000] Starting node_exporter (version=0.16.0, branch=, revision=)  source="node_exporter.go:82"

INFO[0000] Build context (go=go1.9.4, user=copr, date=20180521-14:45:46)  source="node_exporter.go:83"

INFO[0000] Enabled collectors:                           source="node_exporter.go:90"

INFO[0000]  - arp                                        source="node_exporter.go:97"

INFO[0000]  - bcache                                     source="node_exporter.go:97"

INFO[0000]  - bonding                                    source="node_exporter.go:97"

INFO[0000]  - conntrack                                  source="node_exporter.go:97"

INFO[0000]  - cpu                                        source="node_exporter.go:97"

INFO[0000]  - diskstats                                  source="node_exporter.go:97"

INFO[0000]  - edac                                       source="node_exporter.go:97"

INFO[0000]  - entropy                                    source="node_exporter.go:97"

INFO[0000]  - filefd                                     source="node_exporter.go:97"

INFO[0000]  - filesystem                                 source="node_exporter.go:97"

INFO[0000]  - hwmon                                      source="node_exporter.go:97"

INFO[0000]  - infiniband                                 source="node_exporter.go:97"

INFO[0000]  - ipvs                                       source="node_exporter.go:97"

INFO[0000]  - loadavg                                    source="node_exporter.go:97"

INFO[0000]  - mdadm                                      source="node_exporter.go:97"

INFO[0000]  - meminfo                                    source="node_exporter.go:97"

INFO[0000]  - netdev                                     source="node_exporter.go:97"

INFO[0000]  - netstat                                    source="node_exporter.go:97"

INFO[0000]  - nfs                                        source="node_exporter.go:97"

INFO[0000]  - nfsd                                       source="node_exporter.go:97"

INFO[0000]  - sockstat                                   source="node_exporter.go:97"

INFO[0000]  - stat                                       source="node_exporter.go:97"

INFO[0000]  - textfile                                   source="node_exporter.go:97"

INFO[0000]  - time                                       source="node_exporter.go:97"

INFO[0000]  - timex                                      source="node_exporter.go:97"

INFO[0000]  - uname                                      source="node_exporter.go:97"

INFO[0000]  - vmstat                                     source="node_exporter.go:97"

INFO[0000]  - wifi                                       source="node_exporter.go:97"

INFO[0000]  - xfs                                        source="node_exporter.go:97"

INFO[0000]  - zfs                                        source="node_exporter.go:97"

INFO[0000] Listening on :9100                            source="node_exporter.go:111"

DEBU[0010] collect query: []                             source="node_exporter.go:36"

DEBU[0010] OK: uname collector succeeded after 0.000074s.  source="collector.go:135"

DEBU[0010] OK: timex collector succeeded after 0.000026s.  source="collector.go:135"

DEBU[0010] ipvs collector metrics are not available for this system  source="ipvs_linux.go:113"

DEBU[0010] OK: ipvs collector succeeded after 0.000386s.  source="collector.go:135"

DEBU[0010] Unable to detect InfiniBand devices           source="infiniband_linux.go:110"

DEBU[0010] OK: infiniband collector succeeded after 0.000313s.  source="collector.go:135"

DEBU[0010] OK: entropy collector succeeded after 0.000082s.  source="collector.go:135"

DEBU[0010] OK: textfile collector succeeded after 0.000341s.  source="collector.go:135"

DEBU[0010] OK: nfs collector succeeded after 0.001060s.  source="collector.go:135"

DEBU[0010] Return time: 1527206877.287007                source="time.go:47"

DEBU[0010] OK: time collector succeeded after 0.000636s.  source="collector.go:135"

DEBU[0010] OK: bcache collector succeeded after 0.000034s.  source="collector.go:135"

DEBU[0010] Not collecting bonding, file does not exist: /sys/class/net  source="bonding_linux.go:60"

DEBU[0010] OK: bonding collector succeeded after 0.000140s.  source="collector.go:135"

DEBU[0010] OK: netdev collector succeeded after 0.002826s.  source="collector.go:135"

DEBU[0010] OK: mdadm collector succeeded after 0.001362s.  source="collector.go:135"

DEBU[0010] OK: vmstat collector succeeded after 0.001246s.  source="collector.go:135"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/xuio_stats" for reading  source="zfs_linux.go:48"

DEBU[0010] return load 0: 0.060000                       source="loadavg.go:51"

DEBU[0010] return load 1: 0.190000                       source="loadavg.go:51"

DEBU[0010] return load 2: 0.230000                       source="loadavg.go:51"

DEBU[0010] OK: loadavg collector succeeded after 0.001993s.  source="collector.go:135"

DEBU[0010] OK: arp collector succeeded after 0.001873s.  source="collector.go:135"

DEBU[0010] OK: sockstat collector succeeded after 0.001859s.  source="collector.go:135"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/zfetchstats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] OK: nfsd collector succeeded after 0.002738s.  source="collector.go:135"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/fm" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/vdev_cache_stats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/vdev_mirror_stats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] OK: meminfo collector succeeded after 0.002626s.  source="collector.go:135"

DEBU[0010] OK: stat collector succeeded after 0.002952s.  source="collector.go:135"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/abdstats" for reading  source="zfs_linux.go:48"

DEBU[0010] OK: wifi collector succeeded after 0.003489s.  source="collector.go:135"

DEBU[0010] OK: conntrack collector succeeded after 0.001985s.  source="collector.go:135"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Ignoring device: sda1                         source="diskstats_linux.go:178"

DEBU[0010] OK: filefd collector succeeded after 0.001918s.  source="collector.go:135"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/arcstats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/dbuf_stats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Ignoring device: sda2                         source="diskstats_linux.go:178"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/dmu_tx" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] OK: edac collector succeeded after 0.002709s.  source="collector.go:135"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/dnodestats" for reading  source="zfs_linux.go:48"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] Ignoring mount point: /sys                    source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /proc                   source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /dev                    source="filesystem_linux.go:42"

DEBU[0010] OK: diskstats collector succeeded after 0.003057s.  source="collector.go:135"

DEBU[0010] Ignoring mount point: /sys/kernel/security    source="filesystem_linux.go:42"

DEBU[0010] Cannot open "/proc/spl/kstat/zfs/zil" for reading  source="zfs_linux.go:48"

DEBU[0010] Ignoring mount point: /dev/shm                source="filesystem_linux.go:42"

DEBU[0010] ZFS / ZFS statistics are not available        source="zfs.go:66"

DEBU[0010] OK: hwmon collector succeeded after 0.005281s.  source="collector.go:135"

DEBU[0010] OK: zfs collector succeeded after 0.003390s.  source="collector.go:135"

DEBU[0010] Ignoring mount point: /dev/pts                source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup          source="filesystem_linux.go:42"

DEBU[0010] OK: xfs collector succeeded after 0.003571s.  source="collector.go:135"

DEBU[0010] CPU /sys/devices/system/cpu/cpu0 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] OK: netstat collector succeeded after 0.005339s.  source="collector.go:135"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/systemd  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/pstore          source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/memory   source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/cpu,cpuacct  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/blkio    source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/devices  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/net_cls,net_prio  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/pids     source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/perf_event  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/freezer  source="filesystem_linux.go:42"

DEBU[0010] CPU /sys/devices/system/cpu/cpu1 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/hugetlb  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/fs/cgroup/cpuset   source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/kernel/config      source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /proc/sys/fs/binfmt_misc  source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /dev/hugepages          source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /sys/kernel/debug       source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /dev/mqueue             source="filesystem_linux.go:42"

DEBU[0010] Ignoring mount point: /proc/fs/nfsd           source="filesystem_linux.go:42"

DEBU[0010] Ignoring fs type: rpc_pipefs                  source="filesystem_linux.go:46"

DEBU[0010] CPU /sys/devices/system/cpu/cpu10 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] Ignoring fs type: autofs                      source="filesystem_linux.go:46"

DEBU[0010] Ignoring fs type: autofs                      source="filesystem_linux.go:46"

DEBU[0010] CPU /sys/devices/system/cpu/cpu11 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] Ignoring mount point: /sys/fs/fuse/connections  source="filesystem_linux.go:42"

DEBU[0010] CPU /sys/devices/system/cpu/cpu2 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] CPU /sys/devices/system/cpu/cpu3 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] Ignoring mount point: /dev/shm/FlexNetFs.82856  source="filesystem_linux.go:42"

DEBU[0010] CPU /sys/devices/system/cpu/cpu4 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] CPU /sys/devices/system/cpu/cpu5 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] OK: filesystem collector succeeded after 0.006831s.  source="collector.go:135"

DEBU[0010] CPU /sys/devices/system/cpu/cpu6 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] CPU /sys/devices/system/cpu/cpu7 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] CPU /sys/devices/system/cpu/cpu8 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] CPU /sys/devices/system/cpu/cpu9 is missing package_throttle_count  source="cpu_linux.go:171"

DEBU[0010] OK: cpu collector succeeded after 0.008841s.  source="collector.go:135"



What did you expect to see?

The node_cpu metrics, so I can use them in Prometheus and Grafana:

# HELP node_cpu Seconds the cpus spent in each mode.
# TYPE node_cpu counter
node_cpu{cpu="cpu0",mode="guest"} 0
node_cpu{cpu="cpu0",mode="guest_nice"} 0
node_cpu{cpu="cpu0",mode="idle"} 6.72018077e+06
node_cpu{cpu="cpu0",mode="iowait"} 73074.45
node_cpu{cpu="cpu0",mode="irq"} 0
node_cpu{cpu="cpu0",mode="nice"} 6.9
node_cpu{cpu="cpu0",mode="softirq"} 1982.36
node_cpu{cpu="cpu0",mode="steal"} 0
node_cpu{cpu="cpu0",mode="system"} 20504.7
node_cpu{cpu="cpu0",mode="user"} 76782.96
node_cpu{cpu="cpu1",mode="guest"} 0
node_cpu{cpu="cpu1",mode="guest_nice"} 0
node_cpu{cpu="cpu1",mode="idle"} 6.74782086e+06
node_cpu{cpu="cpu1",mode="iowait"} 56686.61
node_cpu{cpu="cpu1",mode="irq"} 0
node_cpu{cpu="cpu1",mode="nice"} 3.27
node_cpu{cpu="cpu1",mode="softirq"} 1974.05
node_cpu{cpu="cpu1",mode="steal"} 0
node_cpu{cpu="cpu1",mode="system"} 19283.31
node_cpu{cpu="cpu1",mode="user"} 63245.42
node_cpu{cpu="cpu2",mode="guest"} 0
node_cpu{cpu="cpu2",mode="guest_nice"} 0
node_cpu{cpu="cpu2",mode="idle"} 6.72601826e+06
node_cpu{cpu="cpu2",mode="iowait"} 72417.15
node_cpu{cpu="cpu2",mode="irq"} 0
node_cpu{cpu="cpu2",mode="nice"} 3.14
node_cpu{cpu="cpu2",mode="softirq"} 1579.78
node_cpu{cpu="cpu2",mode="steal"} 0
node_cpu{cpu="cpu2",mode="system"} 18452.72
node_cpu{cpu="cpu2",mode="user"} 72657.68
node_cpu{cpu="cpu3",mode="guest"} 0
node_cpu{cpu="cpu3",mode="guest_nice"} 0
node_cpu{cpu="cpu3",mode="idle"} 6.69916069e+06
node_cpu{cpu="cpu3",mode="iowait"} 61786.24
node_cpu{cpu="cpu3",mode="irq"} 0
node_cpu{cpu="cpu3",mode="nice"} 3.67
node_cpu{cpu="cpu3",mode="softirq"} 1372.32
node_cpu{cpu="cpu3",mode="steal"} 0
node_cpu{cpu="cpu3",mode="system"} 17576.55
node_cpu{cpu="cpu3",mode="user"} 111798.61
node_cpu{cpu="cpu4",mode="guest"} 0
node_cpu{cpu="cpu4",mode="guest_nice"} 0
node_cpu{cpu="cpu4",mode="idle"} 6.7953318e+06
node_cpu{cpu="cpu4",mode="iowait"} 43383.99
node_cpu{cpu="cpu4",mode="irq"} 0
node_cpu{cpu="cpu4",mode="nice"} 7.17
node_cpu{cpu="cpu4",mode="softirq"} 1199.09
node_cpu{cpu="cpu4",mode="steal"} 0
node_cpu{cpu="cpu4",mode="system"} 11804.99
node_cpu{cpu="cpu4",mode="user"} 47344.51
node_cpu{cpu="cpu5",mode="guest"} 0
node_cpu{cpu="cpu5",mode="guest_nice"} 0
node_cpu{cpu="cpu5",mode="idle"} 6.77709214e+06
node_cpu{cpu="cpu5",mode="iowait"} 59651.18
node_cpu{cpu="cpu5",mode="irq"} 0
node_cpu{cpu="cpu5",mode="nice"} 4.87
node_cpu{cpu="cpu5",mode="softirq"} 498.17
node_cpu{cpu="cpu5",mode="steal"} 0
node_cpu{cpu="cpu5",mode="system"} 11363.6
node_cpu{cpu="cpu5",mode="user"} 51623.87
node_cpu{cpu="cpu6",mode="guest"} 0
node_cpu{cpu="cpu6",mode="guest_nice"} 0
node_cpu{cpu="cpu6",mode="idle"} 6.79913583e+06
node_cpu{cpu="cpu6",mode="iowait"} 40808.52
node_cpu{cpu="cpu6",mode="irq"} 0
node_cpu{cpu="cpu6",mode="nice"} 2.95
node_cpu{cpu="cpu6",mode="softirq"} 1355.94
node_cpu{cpu="cpu6",mode="steal"} 0
node_cpu{cpu="cpu6",mode="system"} 11453.48
node_cpu{cpu="cpu6",mode="user"} 45959.11
node_cpu{cpu="cpu7",mode="guest"} 0
node_cpu{cpu="cpu7",mode="guest_nice"} 0
node_cpu{cpu="cpu7",mode="idle"} 6.73615075e+06
node_cpu{cpu="cpu7",mode="iowait"} 48719.26
node_cpu{cpu="cpu7",mode="irq"} 0
node_cpu{cpu="cpu7",mode="nice"} 10.45
node_cpu{cpu="cpu7",mode="softirq"} 1133.79
node_cpu{cpu="cpu7",mode="steal"} 0
node_cpu{cpu="cpu7",mode="system"} 12747.45
node_cpu{cpu="cpu7",mode="user"} 99002.53




Any help would be appreciated, 
Asi

Ben Kochie

May 25, 2018, 4:24:55 AM5/25/18
to asi....@stellarcreativelab.com, Prometheus Users
node_cpu has been renamed to node_cpu_seconds_total.
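
With the 0.16.0 metric names, the original query would become something like this (a sketch, keeping your instance matcher; only the metric name changes):

avg(node_load5{instance=~"ls0:9100"}) / count(count(node_cpu_seconds_total{instance=~"ls0:9100"}) by (cpu)) * 100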

Please read the release notes:



Asi Sudai

May 25, 2018, 11:05:56 AM5/25/18
to Ben Kochie, Prometheus Users
OK. Sorry I missed that, and thank you, Ben!
