Prometheus Memory Requirements


tha...@gmail.com

Dec 4, 2017, 11:27:44 PM
to Prometheus Users
Hello all,

So I read https://www.robustperception.io/how-much-ram-does-my-prometheus-need-for-ingestion/, but it seems to be a bit of a chicken-and-egg problem: I want to know how many resources Prometheus needs, yet I need to have Prometheus running for six hours before I can find out how much memory it uses.

I'll note that in my tests, with 5 nodes and a fairly wide variety of exporters, the formula was quite accurate: I was using about 3.5 GB of memory, and the formula from the link above gave a reasonably close number.
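
For reference, here is how I measured those numbers on my test setup. These are Prometheus 2.x metric names (the linked article may be written for 1.x, where the names differ), so treat this as a sketch:

    # Samples ingested per second:
    rate(prometheus_tsdb_head_samples_appended_total[5m])

    # Number of series currently held in memory (the TSDB head):
    prometheus_tsdb_head_series

    # Actual resident memory of the Prometheus process; adjust the
    # job label to however your Prometheus scrapes itself:
    process_resident_memory_bytes{job="prometheus"}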

So, for simplicity's sake, and hopefully to the benefit of everyone who wants to roll this out to a production environment, I would like to ask two unambiguous questions:

1. How much memory is required to scrape 100 node_exporter targets at a 30-second interval?
2. How much memory is required to scrape 1,000 node_exporter targets at a 30-second interval?
Note: using the default metrics that come out of the box.

The aim is not only to have these questions answered explicitly, but to give a good sense of what resources need to be allocated as the system scales.
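
To make the scaling concrete, here is the back-of-envelope arithmetic behind the questions. The per-target series count is a placeholder (a default node_exporter tends to expose somewhere in the high hundreds of series, but it varies by machine), and the instance label is hypothetical:

    # Series exposed by a single node_exporter target:
    count({instance="node1:9100"})

    # Ingestion rate for N targets at a 30-second interval:
    #   samples/s = N * series_per_target / 30
    # With ~700 series per target, purely as an illustration:
    #   100 targets   -> ~2,300 samples/s
    #   1,000 targets -> ~23,000 samples/s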

Thanks for your time,
Marek

Brian Brazil

Dec 5, 2017, 4:08:48 AM
to tha...@gmail.com, Prometheus Users
Unfortunately, those questions are ambiguous. As a starting point, we'd need to know how many CPUs, network devices, filesystems, and disks you have, plus the kernel version, the options it was compiled with, and which modules are loaded. On top of that, we'd need to know how well the resulting data compresses.

In practice this can only really be determined empirically, as it varies organisation by organisation and machine by machine.
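
One way to approach it empirically (assuming Prometheus 2.x metric names, as a sketch rather than a recipe): run a pilot against a representative sample of your machines, derive a bytes-per-series figure from Prometheus's own metrics, and extrapolate as you add targets:

    # Observed bytes per in-memory series; adjust the job label to
    # however your Prometheus scrapes itself:
      process_resident_memory_bytes{job="prometheus"}
    / prometheus_tsdb_head_series{job="prometheus"}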
 
marek.s...@visiercorp.com

Dec 5, 2017, 11:49:18 AM
to Prometheus Users
Thanks for the reply at least. It'll be trial by fire then.



