I'm setting up a Kubernetes cluster for a customer. They need to run several deployments, each exposing its own metrics for its pods/deployments/services, and each with its own bearer token for writing to its own remote storage. Think of a big company that owns the cluster and has multiple departments, where each department has a separate bearer token and remote storage.
What's the best way to set up Prometheus to scrape these metrics and send each department's metrics to its own remote storage?
I can think of two approaches:
- set up one Prometheus pod per department
  - pro: simple and safe; each department's Prometheus carries only that department's token and remote-storage URL
  - con: possibly bad performance (not quite sure). Pod scraping can be limited to a namespace, but cAdvisor is exposed per node, so every Prometheus would scrape every kubelet and filter afterwards. With many departments, many Prometheus instances would each pull cluster-wide cAdvisor metrics, which could put heavy load on the cluster (see the config sketch after this list)
- set up one HA Prometheus for the whole cluster, plus a small per-department program that pulls that department's metrics from it and forwards them to the department's remote storage
  - pro: seems lighter on the cluster (not quite sure), since only one set of scrapes runs, though the forwarder would be doing something quite similar to what Prometheus itself already does
  - con: needs a fair amount of custom code for the scrape-and-forward part (a possible code-free variant is sketched after the question)
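For the first approach, this is roughly the per-department config I have in mind. It's only a sketch: `team-a`, the remote-storage URL, and the token path are placeholders I made up, and I haven't benchmarked any of it.

```yaml
# prometheus.yml for one department's Prometheus (approach 1).
# "team-a", the remote-storage URL, and the token path are placeholders.
scrape_configs:
  # Pod metrics can be scoped to the department's namespace,
  # so this part does not touch the rest of the cluster.
  - job_name: team-a-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names:
            - team-a
    relabel_configs:
      # Keep only pods that opt in via the usual annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"

  # cAdvisor is per node, so the scrape itself is cluster-wide;
  # filtering down to team-a only happens after the data is pulled.
  # This is the duplicated work I'm worried about.
  - job_name: team-a-cadvisor
    kubernetes_sd_configs:
      - role: node
    scheme: https
    metrics_path: /metrics/cadvisor
    authorization:
      credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    metric_relabel_configs:
      - source_labels: [namespace]
        action: keep
        regex: team-a

remote_write:
  # Each department's Prometheus writes with its own bearer token.
  - url: https://remote-storage.example.com/team-a/api/v1/write
    bearer_token_file: /etc/prometheus/secrets/team-a-token
```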
Does anyone have any suggestions? Or are there other, better solutions?
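For the second approach, I realize Prometheus federation might replace most of the custom code: a tiny per-department Prometheus could federate only that department's series from the central instance via the `/federate` endpoint and `remote_write` them onward. A sketch of what I mean (again, the names, URLs, and token path are placeholders, and I haven't tried this):

```yaml
# Per-department "scrape and forward" without custom code (approach 2):
# a small Prometheus that federates only team-a's series from the
# central Prometheus and forwards them with team-a's token.
scrape_configs:
  - job_name: federate-team-a
    honor_labels: true
    metrics_path: /federate
    params:
      "match[]":
        - '{namespace="team-a"}'
    static_configs:
      # One replica of the central setup; deduplicating across both
      # HA replicas is still an open question in this sketch.
      - targets:
          - prometheus-central.monitoring.svc:9090

remote_write:
  - url: https://remote-storage.example.com/team-a/api/v1/write
    bearer_token_file: /etc/prometheus/secrets/team-a-token
```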