Arithmetic operations on metric possible with scrape config?


Ben Cohee

Oct 22, 2021, 7:44:18 PM
to Prometheus Users
This has been bothering me for a while, and hopefully someone has a solution I am simply overlooking.

I use snmp_exporter to pull power metrics from a bunch of different PDU vendors (APC, Raritan, Geist, ServerTech, etc.). I have one Prometheus scrape job & module per vendor to store all the PDU total power data in a metric named pduInputPowerConsumption.

All of these vendors report the power in watts, except for APC, which reports in "hundredths of kilowatts (decawatts)".

So throughout all my Grafana dashboards where I have to aggregate the total power consumption from all the different PDU vendors, I currently just multiply the APC metrics by 10 to convert them to watts like the other PDUs.

sum(pduInputPowerConsumption{job="apc_pdus"}*10 OR on() vector(0))
+
sum(pduInputPowerConsumption{job!="apc_pdus"} OR on() vector(0))


Is there a way I can scale the collected apc_pdus metric by 10 on the Prometheus backend via the prometheus.yml scrape config, along with my relabel_configs or metric_relabel_configs?
Is there something else I am blatantly overlooking?




# Scrape config for snmp_exporter polling APC PDU power metric pduInputPowerConsumption
  - job_name: 'apc_pdus'
    scrape_interval: 3m   # Set the SNMP scrape interval to every 3 mins
    scrape_timeout:  45s  # Set the scrape timeout to 45 seconds (3 retries = 2.25 mins)
    metrics_path: /snmp
    params:
      module: [apc_pdus]
    file_sd_configs:
      - files:
        - /etc/prometheus/snmp-targets/apc_pdus_*.json
        # Attempt to re-read files every five minutes.
        refresh_interval: 5m
    relabel_configs:
      - source_labels: [__meta_filepath]
        regex: '.*_(\w{4})\.json'
        replacement: $1
        target_label: site
      - source_labels: [__address__]
        target_label: ip_address
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: 10.0.XX.XX:9116  # SNMP exporter.
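
For context, the file_sd side is plain JSON target files; a hypothetical example named to match the site-extracting regex above (say, apc_pdus_lax1.json — filename and addresses invented here):

```json
[
  {
    "targets": ["10.0.21.11", "10.0.21.12"]
  }
]
```

With that filename, __meta_filepath carries the full path, so the `(\w{4})` group captures "lax1" into the site label.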

Brian Brazil

Oct 23, 2021, 3:23:13 AM
to Ben Cohee, Prometheus Users
On Sat, 23 Oct 2021 at 00:44, Ben Cohee <bco...@gaikai.com> wrote:
This has been bothering me for a while, and hopefully someone has a solution I am simply overlooking.

I use snmp_exporter to pull power metrics from a bunch of different PDU vendors (APC, Raritan, Geist, ServerTech, etc.). I have one Prometheus scrape job & module per vendor to store all the PDU total power data in a metric named pduInputPowerConsumption.

All of these vendors report the power in watts, except for APC, which reports in "hundredths of kilowatts (decawatts)".

So throughout all my Grafana dashboards where I have to aggregate the total power consumption from all the different PDU vendors, I currently just multiply the APC metrics by 10 to convert them to watts like the other PDUs.

sum(pduInputPowerConsumption{job="apc_pdus"}*10 OR on() vector(0))
+
sum(pduInputPowerConsumption{job!="apc_pdus"} OR on() vector(0))


Is there a way I can scale the collected apc_pdus metric by 10 on the Prometheus backend via the prometheus.yml scrape config, along with my relabel_configs or metric_relabel_configs?

There's not.
 
Is there something else I am blatantly overlooking?

You could do this with regex_extracts on the snmp_exporter end, by adding a 0 to the string.
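
A sketch of what that could look like in the snmp_exporter generator.yml (module and metric names assumed from this thread; note Go regexp substitution syntax needs ${1} so the trailing 0 isn't read as group 10):

```yaml
modules:
  apc_pdus:
    overrides:
      pduInputPowerConsumption:
        regex_extracts:
          '':                  # empty suffix keeps the metric name unchanged
            - regex: '(.+)'
              value: '${1}0'   # append a literal 0, i.e. multiply the integer value by 10
```

Then re-run the generator to produce a new snmp.yml for the exporter.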

Brian
 





Ben Kochie

Oct 23, 2021, 3:42:12 AM
to Ben Cohee, Prometheus Users
The best way to do this is to use a recording rule.

Something like this:

- record: global:pduInputPowerConsumption:sum
  expr: |
    sum(pduInputPowerConsumption{job="apc_pdus"} * 10 or on () vector(0))
    +
    sum(pduInputPowerConsumption{job!="apc_pdus"} or on () vector(0))
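
Dropped into a standard rule group file, that could look like the following (file path and group name here are placeholders; the evaluation interval falls back to the global default if omitted):

```yaml
# rules/pdu_power.yml -- loaded via rule_files in prometheus.yml
groups:
  - name: pdu_power
    rules:
      - record: global:pduInputPowerConsumption:sum
        expr: |
          sum(pduInputPowerConsumption{job="apc_pdus"} * 10 or on () vector(0))
          +
          sum(pduInputPowerConsumption{job!="apc_pdus"} or on () vector(0))
```

And in prometheus.yml: `rule_files: ['rules/*.yml']`.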

This is also nicer for your dashboards, since they only need to pull one metric from the TSDB. This will be much faster for long-term views, like "avg_over_time(global:pduInputPowerConsumption:sum[$__interval])".

I also recommend not trying to get clever with SNMP metric names. Just keep the raw names generated by the snmp_exporter generator. That way you avoid mismatched-metric problems: different OIDs have different meanings from different vendors. Keep them separated and handle things at query time, either in the dashboard or in your rules.


Ben Cohee

Oct 25, 2021, 12:47:37 PM
to Prometheus Users
Thanks Brian. I already use regex_extracts for the SNMP floating-point conversion via https://www.robustperception.io/numbers-from-displaystrings-with-the-snmp_exporter - so yes, this is what I was looking for (and overlooking).

Also, thanks @sup - I had not thought about adding a recording rule, which has a bunch of advantages in my use case. I will look into it further.

Cheers!
