Prometheus metrics are lost if the Pushgateway on a different instance is down


Rajesh Somasundaram

Feb 17, 2020, 6:10:14 AM2/17/20
to Prometheus Developers
Hi All,

I am trying out some custom metrics with the Prometheus Pushgateway on AWS EC2 instances. Below is my use case.

1) Instance A - runs service xyz; its metrics are pushed to a Pushgateway running on the same instance.
2) Instance B - Prometheus scrapes the metrics from the Pushgateway, and they are visualized in Grafana.
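For reference, the push side on Instance A can be sketched with the official Python client (prometheus_client). The metric name, job name, and localhost:9091 gateway address are illustrative assumptions, not details from this thread; 9091 is the Pushgateway's default port.

```python
# Sketch of the push from Instance A, assuming the official Python
# client (prometheus_client). Metric and job names are hypothetical.
from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
last_success = Gauge(
    "xyz_last_success_unixtime",
    "Last time the xyz job completed successfully",
    registry=registry,
)
last_success.set_to_current_time()

try:
    # Replaces the metrics for job "xyz" on the local Pushgateway.
    push_to_gateway("localhost:9091", job="xyz", registry=registry)
except OSError:
    # No Pushgateway is running in this standalone sketch.
    pass
```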

Issue:

1) When instance A is down, Prometheus loses its previously scraped metrics. Ideally, it should have kept the metrics it scraped from the gateway even while instance A is down.
    But that is not happening at my end. Could anyone please help me with that?

Thanks,
Rajesh

Bjoern Rabenstein

Feb 17, 2020, 10:12:39 AM2/17/20
to Rajesh Somasundaram, Prometheus Developers
On 17.02.20 03:10, Rajesh Somasundaram wrote:
>
> 1) Instance A - runs service xyz; its metrics are pushed to a
> Pushgateway running on the same instance.
> 2) Instance B - Prometheus scrapes the metrics from the Pushgateway,
> and they are visualized in Grafana.
>
> Issue:
>
> 1) When instance A is down, Prometheus loses its previously scraped
> metrics. Ideally, it should have kept the metrics it scraped from the
> gateway even while instance A is down.
>     But that is not happening at my end. Could anyone please help me with that?

Prometheus does save the scraped metrics. It just cannot ingest
up-to-date metrics from the Pushgateway while the Pushgateway is down,
so the time series end at the time of the last scrape. That is
probably what you see in your visualization, and it is the intended
behavior. You either have to look farther back in time, or use some
PromQL tricks with max_over_time or avg_over_time (see
https://www.robustperception.io/alerting-on-gauges-in-prometheus-2-0
for a related topic).
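The max_over_time trick can be sketched in PromQL as follows; the metric name is hypothetical, and the 1h window is an arbitrary choice:

```promql
# Take the most recent sample seen within the last hour, so the value
# survives a short gap in scrapes while the Pushgateway is down.
max_over_time(xyz_last_success_unixtime[1h])
```

The window should be longer than the longest Pushgateway outage you want to paper over.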

On a different note, your question isn't really about the development
of Prometheus itself. It is a better fit for the prometheus-users
mailing list; see https://groups.google.com/forum/#!forum/prometheus-users

--
Björn Rabenstein
[PGP-ID] 0x851C3DA17D748D03
[email] bjo...@rabenste.in