Prometheus targets


adi garg

Feb 27, 2020, 12:53:41 PM2/27/20
to Prometheus Users
Hello experts,

I am running Prometheus on my local system. I am getting my HiveServer2 metrics on local port 12156, and I have configured that in my prometheus.yml file, which is in the same directory as my Prometheus binary (/usr/local/bin). When using PromQL on localhost:9090 I was able to select my HS2 metrics for querying, but I couldn't see them on localhost:9090/metrics. Is there a reason for that?
Also, my metrics on port 12156 are served on the '/' route and not on '/metrics', so that can be handled with metrics_path, right?

Prometheus.yml

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  - 'alerts.yml'
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'hehe'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    metrics_path: '/'
    static_configs:
      - targets: ['localhost:12156']

Brian Candler

Feb 27, 2020, 1:25:58 PM2/27/20
to Prometheus Users
On Thursday, 27 February 2020 17:53:41 UTC, adi garg wrote:
I was able to select my HS2 metrics there for querying, but I couldn't see them on localhost:9090/metrics. Is there a reason for that?

Yes.

/metrics on the prometheus server only exposes metrics about the operation of the prometheus server itself.

If you want to query metrics which are *stored* in the prometheus time series database, then you need to use the prometheus API.

There's a command line interface that wraps this for you, e.g.
promtool query instant http://localhost:9090 up
is like entering "up" as a query in the GUI
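
You can also hit the HTTP API directly. For example (a rough sketch, assuming Prometheus is listening on the default localhost:9090):
curl 'http://localhost:9090/api/v1/query?query=up'
returns a JSON document with the current value of "up" for each scraped target.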
 
Also, my metrics on port 12156 are present on the '/' route and not on '/metrics', so that can be handled with metrics_path right?


Yes.  If you can scrape the metrics from the root path (i.e. http://localhost:12156/), then prometheus needs a metrics_path of "/".
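
A quick way to double-check (a sketch, assuming the hiveserver2 exporter is reachable from the Prometheus host) is to fetch the page yourself:
curl http://localhost:12156/
and confirm the output looks like the Prometheus text exposition format (one metric per line, with optional # HELP / # TYPE comments).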

adi garg

Feb 27, 2020, 1:40:38 PM2/27/20
to Prometheus Users
Thanks a lot, Brian, that clears up a lot of things for me. Just one more doubt: is there a way to look at the currently scraped metrics? I am not sure, but I think Prometheus keeps metrics in 2-hour chunks in memory (RAM) and only afterwards writes them to storage (SSD). How can I confirm that my Prometheus is scraping all those targets?
Also, the querying part can be done either with PromQL locally or with Grafana, right?

Callum Styan

Feb 27, 2020, 3:18:59 PM2/27/20
to Prometheus Users
Within the Prometheus UI, if you navigate to Status -> Targets you can see which targets Prometheus is scraping (or attempting to scrape) and when each was last successfully scraped.

You can also query Prometheus via its HTTP API: https://prometheus.io/docs/prometheus/latest/querying/api/
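
For example (a rough sketch, assuming the default listen address), the same information shown on the Status -> Targets page is available from the targets endpoint:
curl 'http://localhost:9090/api/v1/targets'
which lists each active target along with its health, scrape URL and the time of its last scrape.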

Brian Candler

Feb 27, 2020, 3:46:46 PM2/27/20
to Prometheus Users
On Thursday, 27 February 2020 18:40:38 UTC, adi garg wrote:
Thanks a lot, Brian, that clears up a lot of things for me. Just one more doubt: is there a way to look at the currently scraped metrics? I am not sure, but I think Prometheus keeps metrics in 2-hour chunks in memory (RAM) and only afterwards writes them to storage (SSD).

That's transparent to you.  Queries will merge results from the head or the stored chunks automatically.
 
How can I confirm that my Prometheus is scraping all those targets?


But in general, all you need to do is to perform an instant query for the metric "up".  This is a metric generated by prometheus on every scrape attempt, and has value 1 for successful scrape or 0 for failed scrape.

The instant query "up" will give you the most recent value of that metric.  The query "timestamp(up)" will give you the time it was set, i.e. the time of the last scrape attempt, and "time() - timestamp(up)" will give you the age in seconds since the last scrape.

See the timestamp() function.
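
For instance (a sketch using promtool as above, assuming Prometheus is on localhost:9090):
promtool query instant http://localhost:9090 'time() - timestamp(up)'
prints, for each target, the number of seconds since its last scrape attempt.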
 
Also, the querying part can be done either with PromQL locally or with Grafana, right?


Yes, Grafana just sends PromQL queries to the prometheus API - the same as promtool does, and indeed the same as prometheus' own GUI does.

adi garg

Feb 28, 2020, 12:19:16 AM2/28/20
to Prometheus Users
Thank you all for replying. Crystal clear answers.