How to use ec2_sd_configs and heartbeat http_2xx together


alexande...@gmail.com

Oct 27, 2017, 3:26:28 AM10/27/17
to Prometheus Users
Hi.

I have AWS instances running and I use ec2_sd_configs to discover them.

On these instances runs a node exporter that I scrape.

But there is also another endpoint available that I want to query for additional heartbeat information.

I'm not sure what my config should look like. Do I need to run the blackbox exporter alongside Prometheus?

Would that mean that Prometheus queries the instances' /metrics directly, but additionally makes an HTTP call through the blackbox exporter?

I don't know how to put this in a single config. I also don't know how to configure which information is used for the "UP"/"DOWN" state on the targets page of Prometheus. I would like to use the heartbeat endpoint there.


This is my config, which only scrapes /metrics on the node exporter. How do I additionally integrate the http_2xx check here?

# my global config
global:
  external_labels:
      monitor: 'monitor'

rule_files:
  # - "first.rules"
  # - "second.rules"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    ec2_sd_configs:
      - region: eu-central-1
        access_key: 
        secret_key: 
        port: 9100
    relabel_configs:
      - source_labels: [__meta_ec2_instance_state]
        regex: running
        action: keep
      - source_labels: [__meta_ec2_tag_type]
        regex: prometheus
        action: drop
      - source_labels: [__meta_ec2_public_ip]
        regex: (.*)
        replacement: ${1}:9100
        action: replace
        target_label: __address__
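For reference, one common way to add the http_2xx check is a second scrape job that routes each EC2 target through a blackbox exporter using its /probe endpoint. The sketch below is only illustrative: it assumes a blackbox exporter running next to Prometheus on 127.0.0.1:9115 with an http_2xx module configured, and a heartbeat endpoint at /heartbeat on port 8080 (both the port and the path are placeholders; adjust them to the actual service):

  # Hypothetical second job: probe each instance's heartbeat endpoint
  # via a blackbox exporter. Assumes blackbox on 127.0.0.1:9115 and a
  # heartbeat URL of http://<ip>:8080/heartbeat (placeholders).
  - job_name: 'heartbeat'
    metrics_path: /probe
    params:
      module: [http_2xx]
    ec2_sd_configs:
      - region: eu-central-1
        access_key: 
        secret_key: 
    relabel_configs:
      - source_labels: [__meta_ec2_instance_state]
        regex: running
        action: keep
      # Turn the discovered IP into the URL the blackbox exporter should probe.
      - source_labels: [__meta_ec2_public_ip]
        regex: (.*)
        replacement: http://${1}:8080/heartbeat
        target_label: __param_target
      # Keep the probed URL visible as the instance label.
      - source_labels: [__param_target]
        target_label: instance
      # Send the actual scrape to the blackbox exporter itself.
      - target_label: __address__
        replacement: 127.0.0.1:9115

One caveat: for a job like this, the "UP"/"DOWN" state on the targets page only reflects whether the blackbox exporter itself answered; the result of the actual heartbeat probe is reported in the probe_success metric, which you can alert on instead.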


A hint would be great since I am pretty new to Prometheus.

pat...@docnetwork.org

Nov 1, 2017, 2:27:40 PM11/1/17
to Prometheus Users
You would write your collection queries in your rules files. I would verify you have a 'rules' directory in '/opt/prometheus', and then add your rules there. I have 'rule_files:' in my 'prometheus.yml' set to '/opt/prometheus/rules/*' so every rules file is loaded.

Example:
I have a 'prod.rules' file in '/opt/prometheus/rules/' that defines this recording rule:
  job:prod_cpu_usage:avg = 100 - (avg by (instance) (irate(node_cpu{job="prod", mode="idle"}[5m])) * 100)
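(Side note: the rule above is in the Prometheus 1.x text format. If you are on Prometheus 2.x, rules files are YAML instead; keeping the same expression, the equivalent would look roughly like this:)

  groups:
    - name: prod
      rules:
        - record: job:prod_cpu_usage:avg
          expr: 100 - (avg by (instance) (irate(node_cpu{job="prod", mode="idle"}[5m])) * 100)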

Next, I have the 'prod' job defined inside the 'scrape_configs:' section of my 'prometheus.yml' file.