mongodump inside a limited container


Иван Дербенев

Jul 6, 2017, 6:17:16 PM
to mongodb-user
Hello!

We currently have an installation of MongoDB 3.4 inside OpenShift,
so every mongod instance runs inside a container with limited memory.

When we try to run mongodump against a 3 GB database from a rather small container (for example, 1 GB of RAM), we get an OOM kill.
As far as I can understand, mongodump is not aware of cgroup limits and thinks it has 64 GB of RAM available (the RAM of our Docker host), so it tries to bite off more than it can chew and the container gets killed.
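
For what it's worth, the mismatch is easy to see from inside the container; a quick check, assuming a cgroup v1 memory controller (the default on Docker hosts at the time):

    # The cgroup limit is what the kernel actually enforces for this container
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # e.g. 1073741824 (1 GiB)

    # Tools that read /proc/meminfo instead (as free does) see the host's RAM
    free -g   # reports the host's 64 GB, not the 1 GiB limit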

Are there any workarounds for this issue?

Wan Bachtiar

Jul 17, 2017, 10:19:24 PM
to mongodb-user

When we try to run mongodump against a 3 GB database from a rather small container (for example, 1 GB of RAM), we get an OOM kill.

Hi,


Could you clarify whether it's the mongod or the mongodump process that is being killed due to OOM?
Are you executing both mongod and mongodump in the same 1 GB container instance to dump the 3 GB database?
Have you configured swap for these small containers?

If it's mongod that is being killed with OOM, you could look at --wiredTigerCacheSizeGB, which may alleviate the memory contention. The memory usage of mongodump itself should be reasonably small, depending on your options (e.g. --numParallelCollections).
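
As a rough sketch (the host name and paths below are placeholders), capping the cache on the server side and reducing the dump parallelism would look like:

    # Cap the WiredTiger cache so mongod leaves headroom in a small container
    # (MongoDB 3.4 accepts values as low as 0.25)
    mongod --dbpath /data/db --wiredTigerCacheSizeGB 0.25

    # Dump one collection at a time instead of the default four,
    # lowering mongodump's peak memory use
    mongodump --host mongodb.example.net --numParallelCollections=1 --out /backup/dump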

You could also try increasing the memory limits of the containers in OpenShift. Remember that a portion of the memory capacity is reserved for system daemons (kernel, node, etc.). See also the OpenShift documentation on handling out-of-resource errors.
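
For example, a minimal sketch with the oc CLI (the deployment config name dc/backup-agent is hypothetical):

    # Raise the memory request and limit on the container running the dump
    oc set resources dc/backup-agent --requests=memory=1Gi --limits=memory=2Gi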

Regards,
Wan.

Иван Дербенев

Jul 18, 2017, 6:23:41 AM
to mongodb-user
It's the mongodump container (we run it separately from mongod) that is OOM-ing, so WiredTiger isn't the issue.

We can't configure swap in OpenShift containers; it's disabled (and I don't think it's supported in Kubernetes anyway).

Increasing the memory limit is an option, but it seems weird to give 1-2 GB of RAM to a backup agent.
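
For now, a sketch of keeping mongodump's own footprint down (the host name is a placeholder) would be to reduce parallelism and stream a compressed archive:

    # Stream one gzipped archive to a file instead of per-collection BSON dumps,
    # reading one collection at a time
    mongodump --host mongodb.example.net \
              --numParallelCollections=1 \
              --archive --gzip > /backup/dump.archive.gz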

On Tuesday, July 18, 2017 at 5:19:24 AM UTC+3, Wan Bachtiar wrote: