How much memory does prometheus-k8s-0 pod require?


Arve Knudsen

unread,
May 24, 2017, 11:52:06 AM5/24/17
to CoreOS Dev
How much memory does the prometheus-k8s-0 pod require? I've spun up a Kubernetes cluster of 4 workers with 1 GB of memory on DigitalOcean, and prometheus-k8s-0 won't schedule anywhere on account of its memory requirements not being met.

Thanks,
Arve


Rob Szumski

unread,
May 24, 2017, 1:36:51 PM5/24/17
to coreo...@googlegroups.com
Its default limit is 2GB, which allows it to keep a decent amount of history. 3-4GB should be good. I use a t2.medium on AWS, which has 4GB.
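For reference, you can check what the pod currently requests and why it isn't scheduling with something like the following (the monitoring namespace is an assumption based on the default prometheus-operator manifests):

    # show scheduling events and the configured requests/limits
    kubectl -n monitoring describe pod prometheus-k8s-0

    # just the resources block of each container
    kubectl -n monitoring get pod prometheus-k8s-0 \
      -o jsonpath='{.spec.containers[*].resources}'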

Arve Knudsen

unread,
May 24, 2017, 2:02:01 PM5/24/17
to coreo...@googlegroups.com
Thanks for letting me know, Rob. I'm bringing this cluster up for testing though, so I don't think I'll need much history. How can I configure Prometheus to use a certain amount of memory?

Thanks,
Arve

Rob Szumski

unread,
May 24, 2017, 2:41:26 PM5/24/17
to coreo...@googlegroups.com
This is not currently configurable at install time, but you should be able to modify the object on the cluster that you already have running: change the memory limit from 2GB down to something like 500MB, and then update -storage.local.target-heap-size to be about 50% of that.

Once you save, the pod should get scheduled.
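As a rough sketch of what that edit might look like on the generated statefulset (the monitoring namespace, the prometheus-k8s statefulset name, and the exact container layout are assumptions about the default prometheus-operator output; the heap-size flag takes a byte value, so ~250MB is spelled out below):

    kubectl -n monitoring edit statefulset prometheus-k8s

    # in the prometheus container of the pod template:
    resources:
      limits:
        memory: 500Mi
      requests:
        memory: 500Mi
    args:
      - -storage.local.target-heap-size=262144000   # ~250MB, about 50% of the limit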

For anyone following along, note that these kinds of changes are not generally recommended or supported, and there is no guarantee that they will be preserved across a software update.

 - Rob

Arve Knudsen

unread,
May 24, 2017, 7:56:03 PM5/24/17
to coreo...@googlegroups.com
Aha, thanks for letting me know.

Euan Kemp

unread,
May 25, 2017, 1:23:15 PM5/25/17
to coreo...@googlegroups.com
You should also be able to edit the `prometheus` api object created, for example to have a spec like:

> apiVersion: "monitoring.coreos.com/v1alpha1"
> kind: "Prometheus"
> metadata:
>   name: k8s
>   labels:
>     prometheus: k8s
> spec:
>   retention: "2h"
>   resources:
>     requests:
>       memory: 500Mi

Note that the actual memory usage will vary based on the services monitored, the retention period, the number of node-exporters running, and so on.

Setting the 'resources' field of the 'Prometheus' kind is a good way to
change the resources of the created statefulset.
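If the object already exists on the cluster, you can edit it in place and the operator should reconcile the statefulset; for example (the k8s name and the monitoring namespace are assumptions based on the default manifests):

    kubectl -n monitoring edit prometheus k8s

Alternatively, save the spec above to a file and apply it with kubectl apply -f.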

- Euan
