[WiredTiger] mongod 3.2 crashes with "Too many open files"


Astro

Dec 22, 2015, 12:18:04 AM
to mongodb-user

We're testing MongoDB 3.2 for a use case that has many collections. We have a script that creates 100K collections in a single database at a time.

Case 1: We tried to create these collections with open files = 64000 (the default set in the mongod init script).

Result 1: mongod stopped with "too many open files" when the number of created collections reached ~30K.


Case 2: We modified the mongod init script to set limit nofile 999999 999999.

Result 2: All 100K collections were created without any error. We also applied indexes to these collections. No issues found.
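For reference, the Case 2 change was the following stanza in the upstart init script (ours lives at /etc/init/mongod.conf; the path and syntax may differ on other distributions or under systemd):

    # raise the per-process open file limit for mongod (soft limit, hard limit)
    limit nofile 999999 999999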


Can we change the nofile value to 999999 permanently? How will it affect a production environment?

What would be good practice to avoid/mitigate open files issues with WiredTiger for a use case like this?

Any help would be appreciated.

Thanks in advance!


Tim Hawkins

Dec 22, 2015, 2:36:16 AM
to mongod...@googlegroups.com

Edit /etc/security/limits.conf and restart mongodb.

New limits will be applied at boot.
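For example, something like the following, assuming mongod runs as the mongod user (adjust the user name to match your installation):

    mongod soft nofile 999999
    mongod hard nofile 999999

Note that limits.conf is applied via PAM at session setup, so depending on how mongod is started you may still need the equivalent limit in the init script itself, as in Case 2 above.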



Astro

Dec 23, 2015, 4:19:19 AM
to mongodb-user


Hi Tim,

We've tried setting the limits in that file. As far as the posted scenario goes, our concern is about the max open files limit.


Thanks, 

Astro

Jan 4, 2016, 4:03:41 AM
to mongodb-user
Any help on this?

Thanks,

Asya Kamsky

Jan 5, 2016, 11:22:25 PM
to mongodb-user
You should raise the limit to avoid the error that you got initially.

How will it affect a production environment?

It will prevent the error from happening again.
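If you want to confirm the limit the running mongod actually got (rather than what your shell reports), one quick check on Linux, assuming a single mongod process, is:

    grep 'open files' /proc/$(pidof mongod)/limits

The "Max open files" line shows the soft and hard limits in effect for that process.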






--
Asya Kamsky
Lead Product Manager
MongoDB
Download MongoDB - mongodb.org/downloads
Free MongoDB Monitoring - cloud.mongodb.com
Free Online Education - university.mongodb.com
Get Involved - mongodb.org/community
We're Hiring! - https://www.mongodb.com/careers

Astro

Jan 6, 2016, 7:28:30 AM
to mongodb-user
Thanks Asya,

I understand that WT needs a larger number of open files. That gives rise to another question for the described use case, where the number of collections is going to be very large (100K).
How should the file descriptor limit be configured for this use case with WT? Is setting this limit to 999999 plausible? Or how can one handle such a large number of collections with WT?

-a


 

Francisco A. Lozano

Jan 9, 2016, 2:17:26 PM
to mongodb-user
I'd love to be wrong, but it's my understanding that WT is not great for huge numbers of collections... I think RocksDB is better for that case.

Again, I'd be very happy to be told otherwise. 

Astro

Jan 9, 2016, 2:34:01 PM
to mongodb-user
We're also concerned about whether changing the open files limit to 999999 will affect performance, given that the recommended limit is 64K.

Asya Kamsky

Jan 10, 2016, 2:04:29 AM
to mongod...@googlegroups.com
You should set ulimits to the level they need to be operationally for what you are doing. I don't really understand your concern: what performance are you worried will be affected, and why?



Astro

Jan 10, 2016, 11:33:45 AM
to mongodb-user
Hi Asya,

Thanks for the reply.

I just wanted to know: will setting this limit to such a high value affect the performance of the database in any way?

 

Kevin Adistambha

Jan 13, 2016, 1:57:13 AM
to mongodb-user

Hello,

Setting the limit beyond the default values seems to be required by your use case, and your testing has apparently determined that the increased limit resolves your issue.

For details on the recommended ulimit setting, please refer to UNIX ulimit settings in the MongoDB docs.

If your hardware can handle the extra load of additional open files, then MongoDB will have no issue. In reality, only your own testing within your specific use case can reveal if there will be any problem with the hardware that you have.
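As a rough way to estimate how many data-file descriptors WiredTiger will need, you can count the .wt files under your dbPath, since each collection and each index is stored in its own file. For example, assuming the default dbPath of /var/lib/mongodb and the default (non-directoryPerDB) layout:

    ls /var/lib/mongodb | grep -c '\.wt$'

Leave additional headroom on top of that number for journal files and client connections, which also consume file descriptors.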

As a side note, it may be worth considering splitting your data across multiple servers as opposed to using a single giant server, for ease of backup, practicality, security, and availability.

Best regards,
Kevin

Samantha Atkins

Jan 17, 2016, 8:14:53 PM
to mongodb-user
I am likely to eventually run into this issue, as each user of my app gets at least 8 collections of their own. Some questions on this and in general about WiredTiger:

1) How does WiredTiger operate if there are many collections in the database but most of them are not being used? Does it open all of them anyway?

2) Does each collection also have at least one index file (_id)? Are these all open as well, as per (1)?

3) If I have multiple connections to the mongod instance, how does that increase the memory load and possibly other resource usage counts?

4) Does every open collection/file get mapped to memory even if not actually queried?

thanks,
   Samantha

Asya Kamsky

Jan 18, 2016, 1:40:30 PM
to mongod...@googlegroups.com
Samantha,

It's best to start a new thread for your questions. This one is about a simple crash when not enough OS resources are configured; yours are more in-depth questions about how WiredTiger uses those resources.

Asya