Target Server - Which Prometheus Server Is Scraping


kekr...@gmail.com

Feb 14, 2022, 11:32:25 PM
to Prometheus Users

If you have multiple Prometheus servers using an identical target list, is there a way on the target server to tell which Prometheus server is scraping at the time the scrape occurs?

For example, Prom_server_a and Prom_server_b both scrape target_server_123. Is there anything on target_server_123 that indicates "Prom_server_a is scraping right now" or "Prom_server_b is scraping right now"?

Thanks,
Kevin

Stuart Clark

Feb 15, 2022, 2:15:11 AM
to kekr...@gmail.com, Prometheus Users
The source IP address in any logs?

--
Stuart Clark

Brian Candler

Feb 15, 2022, 2:47:37 AM
to Prometheus Users
You can add whatever query params you like in the scrape job, e.g. you could add ?from=foo or ?from=bar as part of the URL being scraped.

  - job_name: blah
    metrics_path: /metrics
    params:
      from: [ foo ]
    ...

However, I wonder how you intend to use this information.  The data which is scraped *should* be independent of who is scraping it, and it should be possible to do additional scrapes without affecting the data (e.g. hitting the exporter with "curl" to test it shouldn't alter the data for anyone else).

Therefore, I wonder if there's a better way to achieve what you're trying to do.  For example, if you are keeping a counter of "how many widgets processed in the last minute", and resetting it to zero on each scrape, then you should not be doing this; you should be keeping a counter which just keeps incrementing.  It's up to the consumer of the data to look at the data and work out the number of widgets processed per minute, or per hour, or whatever.  Having the data in this format is much more useful anyway.
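
As a sketch of what the consumer side looks like (the metric name here is purely illustrative), a query along these lines gives you widgets per minute:

rate(widgets_processed_total[5m]) * 60

rate() returns the per-second increase averaged over the window, so multiplying by 60 gives a per-minute figure, and the result stays the same no matter how many Prometheus servers (or curl tests) hit the target.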

If you can describe what it is you're doing, and why it matters where the scrape is coming from, we may be able to give some alternative suggestions.

kekr...@gmail.com

Feb 15, 2022, 1:41:07 PM
to Prometheus Users
My end goal is to prove that monitoring is not running every minute on the server.  Simply giving my word that it is not, and that the job is not configured to run every minute, is not good enough.

There is a possibility that the two Prometheus servers are scraping at the same time, but there is no way the scrapes are happening every minute.  The scrape interval is 3m with a scrape timeout of 2m45s.

Stuart Clark

Feb 15, 2022, 4:47:10 PM
to kekr...@gmail.com, Prometheus Users
On 15/02/2022 18:41, kekr...@gmail.com wrote:
> My end goal is to prove that monitoring is not running every minute
> on the server.  Simply giving my word that it is not, and that the job
> is not configured to run every minute, is not good enough.
>
> There is a possibility that the two Prometheus servers are scraping at
> the same time, but there is no way the scrapes are happening every
> minute.  The scrape interval is 3m with a scrape timeout of 2m45s.

So are you seeing more frequent requests than you expect? How are you
determining this? Do you have request logs, and what do they say/record?

--
Stuart Clark

kekr...@gmail.com

Feb 15, 2022, 5:29:44 PM
to Prometheus Users
I am not seeing a frequency more often than I expect.  I am being told that a log file is being created by the scrapes in a temp directory every minute.  I am saying it is not Prometheus, so now I have to prove it is not Prometheus.

As an alternate solution, I am trying to use the Prometheus timestamp function on the metric created by the scrape, in Grafana, to get the time history of the metric as proof.  The thought is that the time difference between samples in the metric history will be 3 minutes.  But I am having trouble getting the value of the timestamp function to act as an epoch date.  If I use the returned value in a web epoch translator, it translates to the correct date.  If I multiply the value by 1000, as you do with every epoch date in Grafana, it actually multiplies the value rather than formatting it as a human-readable date.
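
The expression I am experimenting with is roughly of this shape (the metric name is just a placeholder):

timestamp(some_metric{instance="target_server_123"}) * 1000

since timestamp() returns the sample time in seconds, and my understanding is that Grafana wants milliseconds before it will format the value as a date.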

Kevin

Stuart Clark

Feb 15, 2022, 5:32:02 PM
to kekr...@gmail.com, Prometheus Users
On 15/02/2022 22:29, kekr...@gmail.com wrote:
> I am not seeing a frequency more often than I expect.  I am being told
> that a log file is being created by the scrapes in a temp directory
> every minute.  I am saying it is not Prometheus, so now I have to
> prove it is not Prometheus.
>
> As an alternate solution, I am trying to use the Prometheus timestamp
> function on the metric created by the scrape, in Grafana, to get the
> time history of the metric as proof.  The thought is that the time
> difference between samples in the metric history will be 3 minutes.
> But I am having trouble getting the value of the timestamp function
> to act as an epoch date.  If I use the returned value in a web epoch
> translator, it translates to the correct date.  If I multiply the
> value by 1000, as you do with every epoch date in Grafana, it actually
> multiplies the value rather than formatting it as a human-readable date.
I'm not clear whether you are getting logs from these requests or not. I'd
expect any request logs to include the path being requested, the time, and
the source IP. What do you see?

--
Stuart Clark

kekr...@gmail.com

Feb 15, 2022, 5:41:41 PM
to Prometheus Users
I got the Grafana date problem solved and now have the scrape time history for the metric, which is the info I needed.

Thank you Stuart and Brian for the assistance.

Kevin

kekr...@gmail.com

Feb 15, 2022, 8:11:07 PM
to Prometheus Users
Stuart, I am not sure I understand the log files question.  I am not aware of any log files related to the scrape itself.  We do have log files related to the exporters running on the server, but they do not capture the scrapes.  I am trying to get details of what is going on on the target server itself; I am not so concerned about what the Prometheus server has in its logs.

If you can add some more detail around the question, I will be glad to answer.

Ben Kochie

Feb 16, 2022, 3:19:36 AM
to kekr...@gmail.com, Prometheus Users
If you just need to know the scrape timestamps, you can query each Prometheus instance like this:

up{instance="foo"}[1h]

This will return the timestamps for each scrape over a given time range.
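
If you only need to show how many scrapes happened, counting the samples also works (the instance label here is just an example):

count_over_time(up{instance="foo"}[1h])

With a 3m scrape interval that should come out at around 20 per hour, versus 60 if something really were scraping every minute.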


Stuart Clark

Feb 16, 2022, 4:35:09 AM
to kekr...@gmail.com, Prometheus Users
On 16/02/2022 01:11, kekr...@gmail.com wrote:
> Stuart, I am not sure I understand the log files question.  I am not
> aware of any log files related to the scrape itself.  We do have log
> files related to the exporters running on the server, but they do not
> capture the scrapes.  I am trying to get details of what is going on
> on the target server itself; I am not so concerned about what the
> Prometheus server has in its logs.
>
I was talking about logs from the application being scraped.

--
Stuart Clark

kekr...@gmail.com

Feb 16, 2022, 6:26:33 PM
to Prometheus Users
Thanks Ben.  That is a much easier way to do it than my roundabout way.  I will keep that in mind next time.

Stuart, I see what you are getting at now.  We do not parse through the application log files.  We wrote our own exporters, which take in a list of scripts that generate metrics and translate them into Prometheus format, which then gets written to the DB.