When we try to run mongodump on a 3 GB database from a rather small container (for example, 1 GB of RAM), we get an OOM kill.
Hi,
Could you clarify whether it's the mongod or the mongodump process that is being killed due to OOM?
Are you executing both mongod and mongodump in the same container instance with 1 GB of memory to dump the 3 GB database?
Have you configured swap for these small containers?
If it's mongod that is being killed with OOM, you could look at the --wiredTigerCacheSizeGB option, which may alleviate the memory contention (see the sketch below). The memory usage of mongodump itself should be reasonably small, depending on your options, e.g. --numParallelCollections.
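For example, a minimal sketch along these lines (the host, database, and path names are placeholders; adjust the values to your deployment):

```sh
# Cap the WiredTiger cache so mongod leaves headroom inside a 1 GB container.
# 0.25 GB is the minimum accepted value; tune it to your workload.
mongod --dbpath /data/db --wiredTigerCacheSizeGB 0.25

# Dump collections one at a time to keep mongodump's memory footprint low
# (the default is 4 parallel collections).
mongodump --host mongodb.example.com --port 27017 --db mydb \
  --numParallelCollections=1 --out /backup/mydb
```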
You could also try increasing the memory limit of the containers in OpenShift. Remember that a portion of the node's memory capacity is reserved for system daemons (kernel, node components, etc.). See also the OpenShift documentation on Handling out of resource errors.
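If you manage the workload with the OpenShift CLI, something like the following could raise the container's memory request and limit (the deployment name and sizes are illustrative only):

```sh
# Give the MongoDB container more headroom for mongod plus mongodump.
oc set resources deployment/mongodb \
  --requests=memory=2Gi --limits=memory=2Gi
```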
Regards,
Wan.