Scrape Multiple HAProxy instances using HAProxy exporter in Kubernetes


sena...@gmail.com

Feb 12, 2019, 1:56:51 PM
to Prometheus Users
I have hundreds of HAProxy instances running outside of Kubernetes, and I'm planning to monitor them with the HAProxy Exporter and Prometheus, both deployed in Pivotal Kubernetes.

I have a few questions about how to achieve this.

1) I deployed Prometheus in Kubernetes using prom.yaml.
2) I deployed one HAProxy Exporter per instance. Assume I have 2 HAProxy instances; I'm attaching those 2 YAMLs here. Question: do I need to install hundreds of exporters?
3) How do I make sure that my 2 HAProxy instances (one for the retail channel, one for the IVR channel) write to 2 different jobs in Prometheus? I need help with the ConfigMap; I'm attaching mine here.

Please review and let me know if I'm making any mistakes. Thank you in advance.

Sen.

ConfigMap.yaml
*******************
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-configuration
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev
    name: prometheus-configuration
  namespace: metrics-dev
data:
  prometheus.yml: |-
    global:
      scrape_interval: 10s
    scrape_configs:
    - job_name: 'ivr_hap'
      honor_labels: true
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - metrics-dev
    - job_name: 'rtl_hap'
      honor_labels: true
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - metrics-dev
---

cat rtl_prom_hap_exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rtl-prometheus-hap-exporter
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: rtl-prometheus-hap-exporter
    app.kubernetes.io/part-of: metrics-dev

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: rtl-prometheus-hap-exporter
      app.kubernetes.io/part-of: metrics-dev
  template:
    metadata:
      labels:
        app.kubernetes.io/name: rtl-prometheus-hap-exporter
        app.kubernetes.io/part-of: metrics-dev
    spec:
      serviceAccountName: prometheus-server
      containers:
        - name: rtl-prometheus-exporter
          # image was missing from the original post; exporter image and tag assumed
          image: prom/haproxy-exporter:v0.10.0
          args:
            - "--haproxy.scrape-uri=http://admin:xxxxxx.@<rtlhaphostname>:8181/haproxy?stats;csv"
          ports:
            - name: rtl-prom-metrics
              containerPort: 9101
---

cat ivr_prom_hap_exporter.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ivr-prometheus-hap-exporter
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: ivr-prometheus-hap-exporter
    app.kubernetes.io/part-of: metrics-dev

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ivr-prometheus-hap-exporter
      app.kubernetes.io/part-of: metrics-dev
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ivr-prometheus-hap-exporter
        app.kubernetes.io/part-of: metrics-dev
    spec:
      serviceAccountName: prometheus-server
      containers:
        - name: ivr-prometheus-exporter
          # image was missing from the original post; exporter image and tag assumed
          image: prom/haproxy-exporter:v0.10.0
          args:
            - "--haproxy.scrape-uri=http://admin:xxxxxxx@<ivrhaphostname>:8181/haproxy?stats;csv"
          ports:
            - name: ivr-prom-metrics
              containerPort: 9101
---

cat prom.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: prometheus-server
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev
rules:
  - apiGroups: [""]
    resources:
      - services
      - endpoints
      - pods
    verbs: ["get", "list", "watch"]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-server
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-server
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: prometheus-server
subjects:
  - kind: ServiceAccount
    name: prometheus-server
    namespace: metrics-dev

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev

spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus
      app.kubernetes.io/part-of: metrics-dev
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prometheus
        app.kubernetes.io/part-of: metrics-dev
    spec:
      serviceAccountName: prometheus-server
      containers:
        - name: prometheus
          image: prom/prometheus:v2.3.2
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"
            - "--storage.tsdb.path=/prometheus/"
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: prometheus-config-volume
              mountPath: /etc/prometheus/
            - name: prometheus-storage-volume
              mountPath: /prometheus/
      volumes:
        - name: prometheus-config-volume
          configMap:
            name: prometheus-configuration
        - name: prometheus-storage-volume
          emptyDir: {}

---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-server
  namespace: metrics-dev
  labels:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev

spec:
  selector:
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: metrics-dev
  type: NodePort
  ports:
    - port: 9090
      name: prometheus
      targetPort: 9090
      protocol: TCP
---

Ben Kochie

Feb 12, 2019, 3:26:13 PM
to sena...@gmail.com, Prometheus Users
I would highly recommend doing localhost pairing of the haproxy exporter instances on the same hosts running haproxy itself. This is much easier to deal with, as you can use the same tools that control your haproxy to control the exporter.
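For reference, a minimal sketch of that pattern (hostnames and the `channel` label are illustrative placeholders, not from the original post): run the exporter on each HAProxy host against the local stats URI, then enumerate the hosts statically (or via `file_sd_configs`) in Prometheus:

```yaml
# On each HAProxy host, run the exporter against the local stats endpoint:
#   haproxy_exporter --haproxy.scrape-uri='http://localhost:8181/haproxy?stats;csv'

# prometheus.yml fragment; target hostnames are assumptions for illustration:
scrape_configs:
  - job_name: 'haproxy'
    static_configs:
      - targets: ['rtl-hap-1:9101', 'rtl-hap-2:9101']
        labels:
          channel: retail
      - targets: ['ivr-hap-1:9101']
        labels:
          channel: ivr
```

With one exporter per HAProxy host, the exporter's lifecycle follows the HAProxy it monitors, and Prometheus only needs a target list rather than in-cluster service discovery.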


sena...@gmail.com

Feb 12, 2019, 3:38:25 PM
to Prometheus Users
Is there a way to fix my ConfigMap, though? Something like a separate job for each HAProxy Exporter Service, Pod, or Endpoint?
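One way to make each job pick up only its own exporter pods is a `keep` relabel on the pod labels already set in the Deployments above. A sketch (untested) for the IVR job, with the retail job written the same way against `rtl-prometheus-hap-exporter`:

```yaml
    - job_name: 'ivr_hap'
      honor_labels: true
      kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
          - metrics-dev
      relabel_configs:
      # Keep only pods from the IVR exporter Deployment.
      - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name]
        action: keep
        regex: ivr-prometheus-hap-exporter
      # Scrape only the container port named in the Deployment.
      - source_labels: [__meta_kubernetes_pod_container_port_name]
        action: keep
        regex: ivr-prom-metrics
```

Without a `keep` relabel, both jobs in the posted ConfigMap discover every pod in `metrics-dev` and scrape the same targets twice.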