Starting mongod with a memory limit

cdmn

Feb 9, 2010, 7:13:02 AM
to mongodb-user
Hi all,

Short questions:

How do I start the mongo server and specify that it shouldn't use more
than 2 GB of memory?

Right now, any heavy operation like an export or adding an index on 3
million records just eats all 4 GB of my memory :-)

Cheers,
Mike

Eliot Horowitz

Feb 9, 2010, 7:25:57 AM
to mongod...@googlegroups.com
There isn't a way to limit memory - it's controlled by the OS. There
was an issue with index creation that used more RAM than needed,
though. The fix is going out in 1.3.2.

scott

Feb 9, 2010, 1:31:19 PM
to mongodb-user
Just to add to this, as I have seen this topic come up multiple times
in the groups.
If you are running a dedicated server, all is fine and dandy: the RAM
on that server just fills up, and you don't need to run anything else
on the machine besides the database.

However, if, like me, you do your testing locally (on a laptop or
similar), the db will take up a huge portion of your RAM even if you
have 6-8 GB, causing your dev environment to slow to a crawl. I
basically have to stop the db process and let the RAM clear out;
everything speeds up again, then I restart the db process, and after a
few hours the system slows down again because mongo ever so slowly
eats up the RAM.

On both Windows and Linux, I think it would be great to be able to
pass a param to mongo telling it to limit how much RAM it can take.
I know the common response is that the OS handles it, but mongo also
doesn't free any RAM it has taken, so after a while it becomes a RAM
hog.

I do love mongo, but I hate thinking that I will always have to keep
a separate db server running somewhere because I can't limit the db
instance's RAM on my laptop.

On Feb 9, 4:25 am, Eliot Horowitz <eliothorow...@gmail.com> wrote:
> There isn't a way to limit memory - it's controlled by the OS. There
> was an issue with index creation that used more RAM than needed,
> though. The fix is going out in 1.3.2.
>

Chuck Remes

Feb 9, 2010, 1:41:46 PM
to mongod...@googlegroups.com
This won't be possible. Mongo uses memory-mapped files (Google it or read the article on Wikipedia). As far as I know, there isn't an API for memory-mapping a file while restricting it to a memory limit.

This is a fundamental *design choice* for mongo. I doubt the devs will be able to honor your request without significant design changes.

cr

Roger Binns

Feb 9, 2010, 4:27:00 PM
to mongod...@googlegroups.com

scott wrote:
> However, if, like me, you do your testing locally (on a laptop or
> similar), the db will take up a huge portion of your RAM even if you
> have 6-8 GB, causing your dev environment to slow to a crawl.

First of all, you need to understand that this will happen with any
database where the entire database does not fit in RAM and you are
making random queries. For example, let's say your database is 16GB in
size and you have 4GB of RAM. With any database, a new random query is
unlikely to resemble one of the more recent ones (it is, after all,
random), so the random record has to be read from disk into that 4GB
of RAM, evicting something that was already there to make space. The
net consequence is that your database runs at the speed of the disk,
and that disk bandwidth is no longer available for your other tasks.

> I basically have to stop the db process and let the RAM clear out;
> everything speeds up again, then I restart the db process, and after
> a few hours the system slows down again because mongo ever so slowly
> eats up the RAM.

You need to distinguish between RAM and address space. Mongo uses
memory-mapped files: it can ask your operating system to make the 16GB
file available, and it becomes part of the process's address space.
Behind the scenes the operating system allocates RAM to the portions
of address space actually being used, across all tasks.
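
A minimal sketch of that distinction, in Python on Linux (the data
file path here is hypothetical): mapping a file claims address space
immediately, but RAM only as pages are actually touched.

import mmap

# Map an entire (hypothetical) data file; length 0 means "whole file".
# This succeeds even if the file is larger than physical RAM, because
# only virtual address space is reserved, not memory.
with open("/data/db/big.0", "rb") as f:
    m = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    # Reading a byte faults that one page in from disk; untouched
    # pages occupy address space but no RAM.
    first_byte = m[0]
    m.close()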

Operating systems typically have some tuneables, but in general they
use an LRU scheme: when new RAM is needed, the least recently used
page is taken (unmapped from the virtual address space it was backing
and remapped to the new virtual address space).

> On both Windows and Linux, I think it would be great to be able to
> pass a param to mongo telling it to limit how much RAM it can take.
> I know the common response is that the OS handles it, but mongo also
> doesn't free any RAM it has taken, so after a while it becomes a RAM
> hog.

With memory-mapped files it is up to the operating system to manage
RAM. The process does not have to get involved in shuffling data
between disk and RAM or in tracking which portions of the file are
resident at any time (which is what happens when read() and write()
are used).

It is in theory possible for Mongo to deliberately trade away
performance, but only on Linux, as Windows doesn't have the system
calls. For example, every minute Mongo could call madvise() to tell
the operating system that it doesn't anticipate needing any of the
memory-mapped file in the near future. Linux could then choose those
RAM pages as the first to evict (but is under no obligation to do so).
Consequently your other tasks would end up getting more of the RAM.
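
A rough sketch of that idea, assuming the MADV_DONTNEED reading of the
above (Linux-only, Python 3.8+ for mmap.madvise, and not something
mongod actually does):

import mmap
import time

def periodically_release(m: mmap.mmap, interval: int = 60) -> None:
    # Once a minute, hint that none of the mapping is needed soon.
    # For a read-only file mapping, dropped pages are simply re-read
    # from disk on the next access, so only speed is sacrificed.
    while True:
        time.sleep(interval)
        m.madvise(mmap.MADV_DONTNEED)  # ask Linux to drop these pages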

As a separate issue, Mongo suffers badly when the disk is approaching
saturation, due to concurrency: it allows even more requests to start,
which makes the existing in-flight ones take even longer.

http://jira.mongodb.org/browse/SERVER-574

Roger

Colin M

Feb 27, 2010, 4:47:35 PM
to mongodb-user
So it sounds like if you are running a single-server web app or a dev
environment you need to do one of the following:

1. Restart mongo occasionally, say via cron
2. Run mongo inside its own VM so it can eat up all the memory it
wants
3. Buy another server

Any other options? On one of my dev servers the MySQL server crashed,
and since there was only 10 MB of free memory it couldn't even start
back up without first restarting Mongo.

Before restart:
# free -m
             total       used       free     shared    buffers     cached
Mem:           950        939         10          0          0         41
-/+ buffers/cache:        897         52
Swap:          127        127          0

After restarting Mongo and MySQL:
# free -m
             total       used       free     shared    buffers     cached
Mem:           950        272        677          0          7         90
-/+ buffers/cache:        173        776
Swap:          127         15        112

It even causes swap usage, which can further impact other processes
due to disk activity. It would be really great if there were some CLI
option to make Mongo play nicer, even at reduced performance (only
when enabled), just so that it could be used on low-traffic servers or
in dev environments. Or is there a better solution?

Colin

Eliot Horowitz

Feb 27, 2010, 4:55:54 PM
to mongod...@googlegroups.com
Mongo should never go into swap, which it sounds like it did... Can
you verify? What version are you running? If it did use a decent
amount of non-mapped memory, that's a bug and can be fixed. The
serverStatus command can give you a lot of info. Maybe run it now for
a baseline?
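
For instance, a baseline could be logged from Python with something
like this (a sketch assuming the PyMongo driver and a mongod on the
default port; the "mem" section holds the resident/virtual/mapped
figures relevant here):

from pymongo import MongoClient

client = MongoClient("localhost", 27017)       # assumed connection details
status = client.admin.command("serverStatus")  # full server status document
print(status["mem"])                           # memory figures for a baseline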

Dwight Merriman

Feb 27, 2010, 5:21:25 PM
to mongod...@googlegroups.com
Generally the OS virtual memory managers treat memory-mapped files
like regular files: they will allocate all *free* RAM in the system to
caching them, but relinquish it as other processes need it for their
real in-process working sets.

Problems may occur if:

(1) there is a bug in mongo - for example the index-build memory
inefficiency where it was using too much (non-memory-mapped) RAM
(2) the OS VMM is doing something dumb. This will vary by OS; it is
possible that a particular OS has an issue.

Colin M

Feb 28, 2010, 1:27:01 AM
to mongodb-user
Eliot and Dwight,

Thanks for the responses. I set up cron to log the serverStatus every
hour, so I'll have more info if it happens again. I hope it doesn't,
though, because when MySQL crashed it corrupted all of my InnoDB
tables and I had to restore from backups. I can't say for sure there
wasn't some other cause for the MySQL crash, but it had never happened
before we were using Mongo, and the memory consumption was definitely
high.

I wasn't using Mongo heavily (my data dir totals 209 MB) and mongod
had probably only been running for a week or so, with single-developer
usage on a few days. Version 1.3.2.
The OS is Ubuntu 9.04 32-bit, running in a virtual machine. CPU usage
is typically < 0.1%.
MySQL config is quite lean:
key_buffer_size = 64M
innodb_buffer_pool_size = 64M
etc...

If you have any other tips for gathering useful information, just let
me know!

Thanks again!
Colin
