Julio Leal
Jul 11, 2024, 4:13:03 PM
to Prometheus Users
Hi everyone,
I have Prometheus installed as a StatefulSet in a Kubernetes cluster, running in HA mode with two replicas. I currently run it as a StatefulSet because the persistent storage lets remote write retry sending metrics to my long-term store. However, this setup has a drawback: it prevents autoscaling when I scrape a large volume of metrics.
Is there a way to configure Prometheus purely as a collector, one that performs no relabeling or buffering and only scrapes metrics and forwards them to a remote write endpoint, without losing metrics during the scrape? I understand some metric loss may be unavoidable once the buffer is removed.
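(For context, the closest built-in option I am aware of is Prometheus agent mode, which disables local querying, rules, and alerting and keeps only scraping, a write-ahead log, and remote write. A minimal sketch, where the job name, SD role, and endpoint URL are placeholders, not a tested setup:)

```yaml
# Run with: prometheus --enable-feature=agent --config.file=prometheus.yml
# Agent mode (Prometheus v2.32+) keeps only scrape + WAL + remote_write.
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: kubernetes-pods        # illustrative job name
    kubernetes_sd_configs:
      - role: pod

remote_write:
  - url: https://remote-storage.example.com/api/v1/write   # placeholder endpoint
    queue_config:
      min_backoff: 30ms              # retry backoff for failed sends
      max_backoff: 5s
```

(Note the WAL still acts as a short on-disk buffer for retries, so fully removing buffering, as described above, would go beyond what agent mode offers.)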
I considered using a DaemonSet, but that approach is costly and risks losing metrics during a pod rollout on a node. Alternatively, using a Deployment would require handling deduplication of the replicas' samples.
Has anyone implemented a similar configuration?