I've been following the guide on how to set this up using both the Prometheus and RabbitMQ operators for Kubernetes. However, when I deploy the PodMonitor (for the rabbitmq-operator) and the ServiceMonitors (for the RabbitMQ cluster itself) to the cluster, Prometheus doesn't seem to be scraping the metrics as expected.
I've set the metadata.labels.release property in the YAML for these two resources to match what the Prometheus operator is expecting, and I can see them listed in Prometheus' Status -> Service Discovery UI, but the number of active targets is always reported as 0.
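To illustrate the shape of what I'm deploying (the names, release value, and selector labels below are illustrative placeholders, not my exact manifest):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rabbitmq-cluster-metrics        # placeholder name
  labels:
    release: prometheus                 # set to match the Prometheus operator's serviceMonitorSelector
spec:
  endpoints:
    - port: prometheus                  # named port on the RabbitMQ Service to scrape
      interval: 30s
  selector:
    matchLabels:
      app.kubernetes.io/name: my-rabbit # placeholder: label on the RabbitMQ cluster's Service
  namespaceSelector:
    any: true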
My current suspicion is that there are no prometheus or prometheus-tls ports declared on the RabbitMQ Service in the cluster, which are the ports the ServiceMonitor expects to scrape. Presumably declaring these ports on the Service is controlled by the RabbitMQ cluster operator. The documentation doesn't mention any additional steps to set up these ports, so I'm not sure whether I'm understanding the problem correctly.
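To make that suspicion concrete, this is the sort of entry I'd expect to see in the Service's port list if the metrics endpoint were being exposed (port number is the rabbitmq_prometheus plugin default; the rest of the Service spec is omitted):

spec:
  ports:
    - name: prometheus     # the named port the ServiceMonitor endpoint refers to
      port: 15692          # default port of the rabbitmq_prometheus plugin
      protocol: TCP
      targetPort: 15692

If neither a prometheus nor a prometheus-tls entry appears in the Service's spec.ports, the ServiceMonitor's endpoint has nothing to match, which would explain the 0 active targets.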