what's the best practice to collect metrics and write to different remote storage with different bearer_token_file

闵骏

Nov 20, 2020, 9:11:14 AM
to Prometheus Users
I'm setting up a Kubernetes cluster for users. They need to deploy different deployments, each with its own metrics for pods/deployments/services and its own bearer_token for writing to remote storage. It's something like a big company owning the cluster, with multiple departments, where each department owns a separate bearer_token and remote storage.

What's the best way to set up a Prometheus cluster to scrape the metrics and send them to remote storage?

I can think of two ways:

  1. Set up a Prometheus pod for each deployment
    1. Pro: easy and safe; different deployments get different tokens and remote storage.
    2. Con: bad performance? Not quite sure. Each Prometheus will collect all metrics from the cAdvisors, so with many deployments there would be many Prometheus instances each scraping the whole cluster's metrics, which may put a lot of pressure on the cluster.
  2. Set up an HA Prometheus for the whole cluster and write our own program to scrape each tenant's metrics and send them to its remote storage
    1. Pro: seems lighter? Not quite sure. It also seems to do something similar to what Prometheus already does.
    2. Con: a lot of code is needed to implement the scraping and metric-sending part.
Does anyone have any suggestions? Or are there any other better solutions?

b.ca...@pobox.com

Nov 21, 2020, 8:16:53 AM
to Prometheus Users
3. Add labels that identify the metrics belonging to each deployment. Have a separate remote_write section for each remote write target, each with its own bearer token. Use write_relabel_configs in each section to filter the writes to each target, so that they only include metrics with the appropriate label.
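For example, something along these lines (just a sketch; the tenant label, endpoint URLs, and token file paths are placeholders, and the tenant label would have to be attached at scrape time via relabeling):

remote_write:
  # Each section targets one tenant's storage with that tenant's token.
  - url: https://storage-team-a.example.com/api/v1/write
    bearer_token_file: /etc/prometheus/tokens/team-a.token
    write_relabel_configs:
      # Only series labelled tenant="team-a" are written to this endpoint.
      - source_labels: [tenant]
        regex: team-a
        action: keep
  - url: https://storage-team-b.example.com/api/v1/write
    bearer_token_file: /etc/prometheus/tokens/team-b.token
    write_relabel_configs:
      - source_labels: [tenant]
        regex: team-b
        action: keep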

jun min

Nov 23, 2020, 12:10:36 AM
to Prometheus Users
Thanks for the reply, I will try it.

jun min

Nov 23, 2020, 3:34:44 AM
to Prometheus Users
Hi, I have another question: is there any maximum limit on remote writes? The configuration might end up huge, containing thousands of remote_write sections. Is that OK?

b.ca...@pobox.com

Nov 23, 2020, 6:28:22 AM
to Prometheus Users
Having one Prometheus server writing to thousands of different remote write endpoints doesn't sound like a sensible way to work.

Maybe you want a proper multi-tenant solution, like Cortex, or the cluster/multi-tenant version of VictoriaMetrics.

A simpler option would be a separate Prometheus instance per tenant doing the scraping. For something even more lightweight, look at the vmagent component of VictoriaMetrics, which can be used for scraping and remote write without a local TSDB.
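Roughly, a per-tenant Prometheus could look like this (just a sketch; the namespace, annotation convention, URL, and token path are placeholders I'm assuming here):

global:
  scrape_interval: 30s
scrape_configs:
  - job_name: team-a-pods
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [team-a]   # only discover this tenant's namespace
    relabel_configs:
      # Keep only pods that opt in via the usual prometheus.io/scrape annotation.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep
remote_write:
  - url: https://storage-team-a.example.com/api/v1/write
    bearer_token_file: /etc/prometheus/tokens/team-a.token

vmagent can read the same scrape_configs format (via -promscrape.config, I believe) and takes its remote write target from -remoteWrite.url, so the scrape part should carry over if you go that route.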

jun min

Nov 23, 2020, 9:33:28 AM
to Prometheus Users
Awesome, I'll give it a shot.

jun min

Nov 24, 2020, 9:56:11 PM
to Prometheus Users
BTW, someone also made another suggestion, which may be useful for people with the same scenario. Instead of having an agent scrape data for every tenant, we can use one Prometheus to scrape the data, and write a remote write adapter that receives the data, splits it, and routes it to different remote storage for different tenants. It also seems very lightweight and simple.
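To sketch what the Prometheus side of that could look like (the tenant label, adapter address, and port are placeholders; the adapter itself is something we would have to write):

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Assumption: each pod's namespace identifies its tenant; copy it into
      # a "tenant" label so the adapter can route on it.
      - source_labels: [__meta_kubernetes_namespace]
        target_label: tenant
remote_write:
  # Single endpoint: the self-written adapter splits the incoming stream by
  # the "tenant" label and forwards each part to that tenant's storage with
  # that tenant's bearer token.
  - url: http://tenant-router-adapter:9201/api/v1/write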