Docker Swarm auto-discovery override port/address from label

Johan

Sep 4, 2021, 9:24:42 PM
to Prometheus Users
Is there some way to configure dockerswarm_sd_config to read the port or address from a label? For example, I'd like to monitor this service, which exposes metrics on a different port than the published ones:

services:
  server:
    image: jnordberg/caddy
    networks:
      - public
      - internal
      - stats
    volumes:
      - data:/data
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
    environment:
      DOCKER_API_VERSION: 1.37
      DOCKER_HOST: tcp://docker:2375
      CADDY_INGRESS_NETWORKS: public
      CADDY_DOCKER_CADDYFILE_PATH: /data/Caddyfile
    deploy:
      mode: global
      update_config:
        parallelism: 10
        delay: 10s
        order: start-first
      restart_policy:
        condition: on-failure
      labels:
        prometheus.job: caddy
        prometheus.addr: caddy:2141  # << is this possible?
      placement:
        constraints: [node.role == manager]

Brian Candler

Sep 5, 2021, 4:49:55 AM
to Prometheus Users
You can use relabelling in the scrape job to set __address__ (which is the addr:port used for actual scraping) to whatever you like, e.g.

    relabel_configs:
      - source_labels: ['prometheus.addr']
        target_label: __address__

Another possibility is to overwrite just the port, keeping the host part of the existing __address__:

    relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):(\d+)'
        target_label: instance
        replacement: '${1}:2141'

Or you could put the required replacement port in another label, e.g.

    relabel_configs:
      - source_labels: [__address__, __port__]
        regex: '(.*):(\d+);(\d+)'
        target_label: instance
        replacement: '${1}:${3}'

Related feature request in issue #9287

Johan

Sep 18, 2021, 8:24:36 PM
to Prometheus Users
Thanks! Using your examples I was able to get this working really well with Docker Swarm. Here's the config I'm using:

    - job_name: "dockerswarm"
      dockerswarm_sd_configs:
        - host: tcp://docker:2375
          role: tasks
      relabel_configs:
        # Only keep containers that should be running.
        - source_labels: [__meta_dockerswarm_task_desired_state]
          regex: running
          action: keep
        # Only keep containers that have a `prometheus.job` service label.
        - source_labels: [__meta_dockerswarm_service_label_prometheus_job]
          regex: .+
          action: keep
        # Use the `prometheus.job` swarm label as the prometheus job label.
        - source_labels: [__meta_dockerswarm_service_label_prometheus_job]
          target_label: job
        # Set the address if the service has a `prometheus.address` label.
        - source_labels: [__meta_dockerswarm_service_label_prometheus_address]
          target_label: __address__
          regex: (.+)
        # Set the metrics path from the `prometheus.path` label if it exists.
        - source_labels: [__meta_dockerswarm_service_label_prometheus_path]
          target_label: __metrics_path__
          regex: (.+)
        # Set the target port from the `prometheus.port` label if it exists.
        - source_labels: [__address__, __meta_dockerswarm_service_label_prometheus_port]
          target_label: __address__
          regex: '(.*):(\d+);(\d+)'
          replacement: "${1}:${3}"
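For anyone reading along, the swarm service labels that this config consumes would go under deploy.labels in the compose file, something like this (the image, port, and path values here are just placeholders):

    services:
      app:
        image: nginx  # stand-in; any service exposing metrics works
        deploy:
          labels:
            prometheus.job: my-app     # required: target is kept and job is set
            prometheus.port: "9113"    # optional: overrides the scrape port
            prometheus.path: /metrics  # optional: overrides __metrics_path__

Note these must be deploy.labels (service labels), not container labels, since the relabel rules match __meta_dockerswarm_service_label_* meta labels.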

Brian Candler

Sep 19, 2021, 4:41:12 AM
to Prometheus Users
I would caution against replacing the "job" label.  This is supposed to reference the prometheus scrape job (i.e. job_name).  Its purpose is:
(1) to allow you to trace back any given metric to the scrape job which generated it; and
(2) to ensure that two different scrape jobs which happen to scrape the same metrics on the same target get unique labels.
If you need to attach some other data then I suggest you give it a different label name, and leave the original "job" unchanged.
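For example, keeping "job" intact and copying the swarm label into a separate label instead would look like this (the label name swarm_service is just a suggestion):

    relabel_configs:
      - source_labels: [__meta_dockerswarm_service_label_prometheus_job]
        regex: (.+)
        target_label: swarm_service

That way you can still group and filter by the swarm label in queries, while "job" keeps identifying the scrape job that produced the metric.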

Otherwise, that looks fine.  (And you fixed my copy-paste errors, where I'd put "target_label: instance" instead of "target_label: __address__")

Ben Kochie

Sep 19, 2021, 7:13:48 AM
to Brian Candler, Prometheus Users
Using `job` from a discovery service is OK if the service has such concepts. For example, in Kubernetes, it's common to use it this way.

To view this discussion on the web visit https://groups.google.com/d/msgid/prometheus-users/74b72391-a999-4158-8fe9-b86aef457808n%40googlegroups.com.