Hello,
I have a special use case and I want to ask the community's opinion on the implementation.
Use case: I want to expose all metrics from a given cluster (not a k8s cluster) to a Prometheus server through only one endpoint.
multiple components exposing metrics --> single endpoint --> main Prometheus
I have thought of 3 solutions, each with some caveats:
Solution 1: use a tool that merges metrics from multiple endpoints and exposes them all at once, injecting missing labels like instance and job.
Solution 2: set up a local Prometheus server that scrapes all the metrics, and have the main Prometheus server scrape its /federate endpoint as the single endpoint.
Solution 3: use the remote_read config option on the main Prometheus server and point it at the local Prometheus's /api/v1/read endpoint so the main server can read the local metrics.
Caveats
Solution 1 feels like a dirty hack, because metrics with different scrape intervals will be complicated to handle through only one endpoint.
Solution 2: the main Prometheus server will re-timestamp the metrics it pulls via federation (I am not sure whether the /federate endpoint exposes the original metric timestamps; if it does, I may set honor_timestamps; a federation scrape sketch is below).
Solution 3: this looks like the best solution so far, as I have read that remote_read can be more efficient than scraping (I may be mistaken); a remote_read sketch is below.
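To make Solution 2 more concrete, here is a minimal sketch of the scrape config I have in mind on the main Prometheus server; the hostname, port and the match[] selector are placeholders to adapt:

scrape_configs:
  - job_name: 'cluster-federation'
    metrics_path: /federate
    honor_labels: true        # keep the job/instance labels already set by the local Prometheus
    honor_timestamps: true    # keep the sample timestamps exposed by /federate
    params:
      'match[]':
        - '{job!=""}'         # which series to pull; adjust the selector to your needs
    static_configs:
      - targets:
          - 'local-prometheus:9090'   # assumed address of the local Prometheus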
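And for Solution 3, a sketch of the remote_read block on the main Prometheus server (again, the hostname and port are placeholders):

remote_read:
  - url: 'http://local-prometheus:9090/api/v1/read'
    read_recent: true   # also query recent data instead of relying only on local storage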
Which solution do you think is best? Any other suggestions are welcome.
Best regards