I am running Prometheus as a container inside my K8s cluster. My current working ConfigMap is:
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus
data:
  prometheus.yml: |-
    # This file comes from the kubernetes configmap
    rule_files:
      - '/etc/prometheus-alert-rules/alert.rules'
    global:
      scrape_interval: 5s
    scrape_configs:
      - job_name: 'kubernetes_apiserver'
        tls_config:
          insecure_skip_verify: true
        kubernetes_sd_configs:
          - api_servers:
              - http://172.29.219.65:8080
            role: apiserver
        relabel_configs:
          - source_labels: [__meta_kubernetes_role]
            action: keep
            regex: (?:apiserver)

      ###################### Kubernetes Pods ##########################
      - job_name: 'haproxy'
        static_configs:
          - targets:
              - 172.29.219.110:9101

      - job_name: 'docker_containers'
        metrics_path: '/metrics'
        tls_config:
          insecure_skip_verify: true
        static_configs:
          - targets:
              - 172.29.219.103:4194
              - 172.29.219.104:4194
              - 172.29.219.105:4194

      - job_name: 'kubernetes_pods'
        tls_config:
          insecure_skip_verify: true
        kubernetes_sd_configs:
          - api_servers:
              - http://172.29.219.65:8080
            role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: pod_name
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
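For reference, the ConfigMap is mounted into the Prometheus container roughly like this (a sketch of the pod spec; the image and volume names are illustrative, not copied from my actual manifest):

containers:
  - name: prometheus
    image: prom/prometheus
    args:
      # this is where the flag in the error message below comes from
      - '-config.file=/etc/prometheus/prometheus.yml'
    volumeMounts:
      - name: config-volume        # illustrative name
        mountPath: /etc/prometheus
volumes:
  - name: config-volume
    configMap:
      name: prometheus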
Now I want to monitor the endpoints as well, so I've added the job below to my ConfigMap:
- job_name: 'kubernetes_endpoints'
  tls_config:
    insecure_skip_verify: true
  kubernetes_sd_configs:
    - api_servers:
        - http://172.29.219.65:8080
      role: endpoints
  relabel_configs:
    - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
      action: keep
      regex: true
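The intent of that relabel rule is to keep only endpoints whose backing Service carries the prometheus.io/scrape annotation, e.g. a Service like this (illustrative names, not one of my actual services):

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # illustrative name
  annotations:
    prometheus.io/scrape: 'true'   # the annotation the keep rule matches
spec:
  selector:
    app: my-app
  ports:
    - port: 8080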
But when I boot up my Deployment, the pod errors out with the following:
time="2017-05-26T13:56:46Z" level=info msg="Loading configuration file /etc/prometheus/prometheus.yml" source="main.go:221"
time="2017-05-26T13:56:46Z" level=error msg="Error loading config: couldn't load configuration (-config.file=/etc/prometheus/prometheus.yml): Unknown Kubernetes SD role \"endpoints\"" source="main.go:126"
Why does this error occur? Prometheus does not complain about role: pod or role: apiserver.