It looks like you are using NFS to connect your storage; it would be worth reading https://groups.google.com/forum/m/#!topic/mongodb-user/Kd85b2HHVn8, which advises against it. Running database systems over NFS is not a reliable setup.
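If you are not sure whether your dbPath sits on NFS, a quick check on the server (assuming a Linux host; the path below is only an example, substitute your actual dbPath) is:

    df -T /var/lib/mongodb    # shows the filesystem type backing the dbPath
    mount | grep -i nfs       # lists any NFS mounts currently attached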
Hi,
Fri Jul 15 15:49:24.293 E STORAGE [conn30] WiredTiger (12) [1468622964:293830][2854:0x7fd90f9f1700], file:ycsbc/collection/3-8181378381775311216.wt, WT_CURSOR.search: memory allocation of 323400 bytes failed: Cannot allocate memory
This error means that MongoDB tried to allocate memory but the allocation failed. The most common reason is that there is not enough free memory available on the system.
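A quick way to confirm that (assuming a Linux host) is to look at free memory and at the kernel log from around the time of the failure, for example:

    free -m                                  # current memory and swap usage
    dmesg | grep -iE "out of memory|oom"     # any OOM killer activity
    grep -i commit /proc/meminfo             # CommitLimit vs Committed_AS (relevant under strict overcommit)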
I have a replica set with 2 data-bearing nodes and 1 arbiter.
I am running the YCSB tool to load 12M records, and both the secondary and primary nodes are crashing with the below error.
Could you specify whether the three mongod processes and the YCSB process all run on a single server? If all four processes run on a single server, then it is possible that your system is experiencing memory contention.
MongoDB is designed to use as much memory as possible to give you the best performance, so it is not recommended to run more than one mongod process on a machine, or to run mongod alongside any other resource-hungry application.
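If you do have to co-locate several memory-hungry processes on one machine, one mitigation is to cap the WiredTiger cache of each mongod explicitly. A minimal sketch of the relevant mongod.conf section (the 8 GB value is only an illustration, size it for your own workload):

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 8

The same limit can be set on the command line with --wiredTigerCacheSizeGB.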
For more information and recommended settings for MongoDB deployment, please see:
Best regards,
Kevin
Hi Tanveer,
I fixed the issue by disabling the setting vm.overcommit_memory=2 on the servers.
But I am confused: the documentation says to disable memory overcommit when running MongoDB, yet when I set this parameter, mongod was crashing.
The overcommit setting is chosen situationally, depending on how you want the server to behave under memory pressure. As such, there is no single best setting; it depends on your system's memory usage, which in turn depends on your workload. You should review and set the overcommit setting and the corresponding ratio according to your specific needs.
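For reference, the current behaviour can be inspected and changed with sysctl; the values below only illustrate the mechanism and are not a recommendation:

    sysctl vm.overcommit_memory vm.overcommit_ratio   # show the current settings
    sudo sysctl -w vm.overcommit_memory=0              # 0 = heuristic overcommit (the kernel default)
    # add the chosen values to /etc/sysctl.conf to persist them across reboots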
- My server has 64 GB of RAM; mongod would grow to about 33 GB (around 60% of RAM, as mentioned in the documentation) and then crash.
- The secondary would then become primary and, similarly, once its usage reached 33 GB, that mongod would crash as well.
Having the database killed by the OOM killer, or seeing "cannot allocate memory" errors, is an indication that the demand for memory is beyond the server's capacity. By default, the WiredTiger cache is set to around 60% of RAM (which matches what you observed). From your description, there may be other memory-intensive processes on your servers that left MongoDB unable to allocate memory.
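You can check the configured cache ceiling and the current cache usage on each node via serverStatus; in the wiredTiger.cache section, look at "maximum bytes configured" and "bytes currently in the cache". For example, from the shell:

    mongo --eval 'printjson(db.serverStatus().wiredTiger.cache)'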
Best regards,
Kevin