Yeah, I think the collectd_exporter is a much better fit for how we are using collectd. Rather than figuring out how to configure Prometheus to scrape metrics from every instance running collectd, we forward all the metrics to a central collectd_exporter and scrape that instead.
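For context, the push side of that setup is just collectd's network plugin pointed at wherever collectd_exporter is listening. A sketch (the hostname is a placeholder; 25826 is collectd's default binary-protocol port, which collectd_exporter can be told to listen on with its collectd listen-address flag):

```apacheconf
LoadPlugin network
<Plugin network>
  # Ship metrics to the central collectd_exporter.
  # "collectd-exporter.internal" is a placeholder hostname.
  Server "collectd-exporter.internal" "25826"
</Plugin>
```

Prometheus then scrapes the exporter's own /metrics endpoint (port 9103 by default) rather than each node.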
In practice this has worked well for us, but perhaps there’s a better way to configure Prometheus to scrape arbitrary collections of EC2 instances across multiple accounts in AWS? Or some way for EC2 nodes to register themselves to be scraped by Prometheus?
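On the cross-account question: Prometheus's built-in EC2 service discovery can cover this without a central aggregation point, since each `ec2_sd_configs` entry can assume an IAM role in a different account via `role_arn`. A sketch, with account IDs, role names, tag names, and the region all as illustrative placeholders:

```yaml
scrape_configs:
  - job_name: ec2-nodes
    ec2_sd_configs:
      # One entry per AWS account; Prometheus assumes the role via STS.
      - region: us-east-1
        role_arn: arn:aws:iam::111111111111:role/prometheus-discovery
        port: 9100
      - region: us-east-1
        role_arn: arn:aws:iam::222222222222:role/prometheus-discovery
        port: 9100
    relabel_configs:
      # Only keep instances opted in via an EC2 tag (tag name is an example).
      - source_labels: [__meta_ec2_tag_Monitoring]
        regex: enabled
        action: keep
      # Use the EC2 instance ID as the instance label.
      - source_labels: [__meta_ec2_instance_id]
        target_label: instance
```

The roles only need `ec2:DescribeInstances` permission, and the tag-based `keep` rule gives you "arbitrary collections" of instances without touching Prometheus config when nodes come and go.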