Hi Prometheus Users,
I am working on writing my first custom exporter, which collects metrics from an application's REST API endpoint that I have no control over.
Now that I am halfway through it, I am questioning whether I am taking the right approach to getting the collected metrics published on my /metrics endpoint for Prometheus to scrape, and I could use some guidance.
Let me attempt to explain what I'm doing.
My approach:
Metrics are published on a /metrics endpoint that Prometheus will scrape every 15 minutes. But how do the metrics get to the /metrics endpoint?
My idea was to have my exporter collect data from the external REST API on its own schedule and publish the metrics to the /metrics endpoint, so they would be ready and waiting for Prometheus to scrape.
This way, Prometheus would be scraping /metrics on its own scrape_interval = 15m, and my exporter would be collecting data and publishing to /metrics on its own export_interval = 15m. I might even start each collection a minute or so ahead of the scrape, because each of the ~10 API calls it makes can take up to ~20 seconds to complete.
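To make that concrete, here is roughly what my current draft does. This is just a minimal sketch using the Go client library; the metric name, port, and fetchWidgetCount are placeholders, and the real exporter makes ~10 such calls:

package main

import (
	"log"
	"net/http"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Placeholder metric; the real exporter registers one per API value.
var widgetCount = prometheus.NewGauge(prometheus.GaugeOpts{
	Name: "myapp_widget_count",
	Help: "Number of widgets reported by the external REST API.",
})

// fetchWidgetCount stands in for one of the ~10 slow REST calls.
func fetchWidgetCount() (float64, error) {
	// ... call the external API and parse the response ...
	return 42, nil
}

func main() {
	prometheus.MustRegister(widgetCount)

	// Collect on the exporter's own schedule (export_interval),
	// independent of when Prometheus scrapes /metrics.
	go func() {
		ticker := time.NewTicker(15 * time.Minute)
		defer ticker.Stop()
		for ; ; <-ticker.C { // run once at startup, then every tick
			if v, err := fetchWidgetCount(); err != nil {
				log.Printf("collection failed: %v", err)
			} else {
				widgetCount.Set(v)
			}
		}
	}()

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}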
Is it necessary to have the two different schedules? Or would the Prometheus scrape itself be expected to trigger the API calls in the exporter?
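In other words, I imagine the scrape-triggered alternative would implement the client library's Collector interface, so the API calls happen once per scrape (again just a sketch with placeholder names):

package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// fetchWidgetCount stands in for one of the ~10 REST calls, as above.
func fetchWidgetCount() (float64, error) { return 42, nil }

// apiCollector queries the external API on every scrape instead of on
// a separate timer.
type apiCollector struct {
	widgetDesc *prometheus.Desc
}

func newAPICollector() *apiCollector {
	return &apiCollector{
		widgetDesc: prometheus.NewDesc(
			"myapp_widget_count", // placeholder metric name
			"Number of widgets reported by the external REST API.",
			nil, nil,
		),
	}
}

func (c *apiCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.widgetDesc
}

// Collect runs once per scrape, so the external API is hit exactly as
// often as Prometheus scrapes -- no second schedule needed.
func (c *apiCollector) Collect(ch chan<- prometheus.Metric) {
	v, err := fetchWidgetCount()
	if err != nil {
		log.Printf("collection failed: %v", err)
		return
	}
	ch <- prometheus.MustNewConstMetric(c.widgetDesc, prometheus.GaugeValue, v)
}

func main() {
	prometheus.MustRegister(newAPICollector())
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9100", nil))
}

My worry with this version is that ~10 calls at up to ~20 seconds each could blow past the scrape timeout, which is part of why I split the schedules in the first place.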
Basically, I don't want to bog down the external application's performance, or look like a DoS attack by continuously barraging it with requests, when I only need the data once every 15 minutes or so.
Hopefully I've explained this well enough, but let me know if you need any clarification.
Thanks,
Cory