# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  - 'alerts.yml'
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'federate'
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prom"}'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'prom'
    metrics_path: '/'
    static_configs:
      - targets: ['localhost:12156']
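To rule out URL-encoding mistakes when I check the endpoint by hand in the browser, I build the query string like this (plain Python just to show the encoding; the port is my local default, and the selector is the same one as in the config above):

```python
from urllib.parse import urlencode

# The federate endpoint takes one or more repeatable 'match[]' parameters,
# each holding an instant-vector selector; both the brackets and the
# selector body must be percent-encoded in the URL.
base = "http://localhost:9090/federate"  # assumes Prometheus on the default port
query = urlencode({"match[]": '{job="prom"}'})
url = f"{base}?{query}"
print(url)
# → http://localhost:9090/federate?match%5B%5D=%7Bjob%3D%22prom%22%7D
```

Opening that URL directly is how I am verifying whether any series come back.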
My expectation: I should see all the metrics carrying the job label 'prom' at localhost:9090/federate, but nothing shows up there. I know I'm doing something wrong; can someone please guide me?
I also have a couple of questions:
1) Can the endpoint path be changed to something other than '/federate', or is it fixed?
2) Does the federation endpoint return only the most recent sample of each matched series, and only from the local Prometheus server itself, is that right?