sum(increase(pushes_android{stage="dev"}[10m])) + sum(increase(pushes_ios{stage="dev"}[10m]))
--
Regards,
Gunther
You received this message because you are subscribed to the Google Groups "Prometheus Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to prometheus-use...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/5c1e8ca4-7b0c-43b5-9718-e0550cd2f2a9%40googlegroups.com.
Hmm, that's a good point. The NaN seems to occur only when a counter has not been incremented yet, which in our case happens quite often in the morning on our dev stages (we stop the servers in the evening and start them in the morning) or after new server deployments. The project we are using is Aerogear UPS (mobile pushes), which uses the Prometheus simple client for metric exporting: https://github.com/aerogear/aerogear-unifiedpush-server/blob/master/service/src/main/java/org/jboss/aerogear/unifiedpush/service/metrics/PrometheusExporter.java

After a fresh server restart the push metrics are not exported at all.
When I trigger the first Android pushes it returns:
# curl -s http://localhost:8080/ag-push/rest/prometheus/metrics | grep aerogear
aerogear_ups_push_requests_android 2.0
aerogear_ups_push_requests_total 2.0

which seems correct. However, the other metrics are not initialized at all until they are incremented for the very first time.
I could not reproduce this behaviour locally (perhaps because I have Micrometer on the classpath, which may initialize counter values to a default?). According to https://github.com/prometheus/client_golang/issues/190 and your comment there, it seems that the Prometheus simple client for Java is expected to just behave that way. Any ideas how I can work around this issue from the outside, e.g. with metric relabeling?
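One query-side workaround (a sketch, assuming the problem is that the not-yet-incremented counters are simply absent from the scrape, so the corresponding sum() returns an empty result) is to substitute 0 for each missing operand with `or vector(0)`:

```
# Each sum falls back to a literal 0 whenever its metric is absent,
# so a missing counter no longer poisons the overall total.
  (sum(increase(pushes_android{stage="dev"}[10m])) or vector(0))
+ (sum(increase(pushes_ios{stage="dev"}[10m])) or vector(0))
```

Since `vector(0)` carries no labels and `sum()` without a `by` clause also drops all labels, the two operands match one-to-one and the addition always yields a value.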
--
Gunther
Hi Gunther,

Binary operators (i.e. +, -, /, *, etc.) in Prometheus return NaN if at least one operand is NaN, while `sum()` skips NaNs. Try rewriting your query as:
sum(increase({__name__=~"pushes_(android|ios)", stage="dev"}[10m]))
and it should work as you expect, i.e. ignore NaNs when calculating the aggregate sum.
On Mon, Dec 2, 2019 at 3:43 PM 'Gunther Klein' via Prometheus Users <promethe...@googlegroups.com> wrote:

Hi there,

I have a PromQL query which adds up sums of two different metrics like this:
sum(increase(pushes_android{stage="dev"}[10m])) + sum(increase(pushes_ios{stage="dev"}[10m]))
This works fine if both metrics have values defined. However, if one of the two sums results in NaN, the overall sum is also NaN, rather than a sum where NaN is treated as 0 (e.g. 15 + NaN = 15). Any ideas how I can achieve that?

Regards,
Gunther
--
Best Regards,
Aliaksandr
On Mon, 2 Dec 2019 at 22:57, Aliaksandr Valialkin <val...@gmail.com> wrote:

> Hi Gunther,
> Binary operators (i.e. +, -, /, *, etc.) in Prometheus return NaNs if at least a single operand is NaN,

This is standard floating point behaviour, which we preserve.

> while `sum()` skips NaNs.

sum() does not skip NaNs, see http://demo.robustperception.io:9090/graph?g0.range_input=1h&g0.expr=sum(vector(0%2F0))&g0.tab=1 for example. If you've an example where PromQL is doing otherwise, please let us know.
> sum(increase({__name__=~"pushes_(android|ios)", stage="dev"}[10m]))
> and it should work as you expect, i.e. ignore NaNs during calculating the aggregate sum.

This query will likely error out due to duplicate series in the result of increase. And even if that weren't the case, it'd still produce a NaN here.
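One possible way around that duplicate-labelset problem (a sketch, not from the thread; the `platform` label name is made up for illustration) is to re-attach a distinguishing label to each increase() result and then take the union, so a counter that is absent after a restart simply drops out of the sum:

```
# label_replace() tags each series before the union, so the two
# increase() results no longer share an identical labelset.
sum(
    label_replace(increase(pushes_android{stage="dev"}[10m]), "platform", "android", "", "")
  or
    label_replace(increase(pushes_ios{stage="dev"}[10m]), "platform", "ios", "", "")
)
```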
On Tue, Dec 3, 2019 at 1:21 AM Brian Brazil <brian....@robustperception.io> wrote:

> On Mon, 2 Dec 2019 at 22:57, Aliaksandr Valialkin <val...@gmail.com> wrote:
>> Binary operators (i.e. +, -, /, *, etc.) in Prometheus return NaNs if at least a single operand is NaN,
> This is standard floating point behaviour, which we preserve.
>> while `sum()` skips NaNs.
> sum() does not skip NaNs, see http://demo.robustperception.io:9090/graph?g0.range_input=1h&g0.expr=sum(vector(0%2F0))&g0.tab=1 for example. If you've an example where PromQL is doing otherwise, please let us know.

Compare query_range results for the following queries:

sum(minute(vector(time())) > 30) + sum(label_replace(minute(vector(time())) < 40, "foo", "bar", "", ""))

vs
sum(minute(vector(time())) > 30 or label_replace(minute(vector(time())) < 40, "foo", "bar", "", ""))
> sum(increase({__name__=~"pushes_(android|ios)", stage="dev"}[10m]))
> and it should work as you expect, i.e. ignore NaNs during calculating the aggregate sum.
> This query will likely error out due to duplicate series in the result of increase. And even if that weren't the case, it'd still produce a NaN here.

Oops - the query returns a "vector cannot contain metrics with the same labelset" error, because `increase` removes metric names :(

Best Regards,
Aliaksandr