GridFS w/journaling causing out of memory on Windows


Thomas Hjorslev

Nov 14, 2011, 8:54:19 AM
to mongodb-user
I'm working on a proof of concept for our first MongoDB application (a
fairly simple system: some files and metadata).

I'm working on a scalability test to estimate what hardware we
will need, but I'm currently just running mongod locally while
developing. My test app is quite simple: it inserts a metadata
document and a file (using the C# GridFS API) a large number of times.
I'm currently aiming for 1 million files with an average size of 100KB.
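To put rough numbers on that target workload, here is a back-of-the-envelope calculation. The 256KB chunk size is an assumption (the GridFS default in drivers of this era; newer drivers use 255KB):

```python
# Back-of-the-envelope sizing for the test workload described above.
# Assumption: GridFS default chunk size of 256KB.
CHUNK_SIZE = 256 * 1024

num_files = 1_000_000
avg_file_size = 100 * 1024  # 100KB average

total_bytes = num_files * avg_file_size
chunks_per_file = -(-avg_file_size // CHUNK_SIZE)  # ceiling division
total_chunk_docs = num_files * chunks_per_file

print(f"total data: {total_bytes / 1024**3:.1f} GiB")  # ~95.4 GiB
print(f"chunks per 100KB file: {chunks_per_file}")     # 1
print(f"fs.chunks documents: {total_chunk_docs:,}")
```

So the full run would be roughly 95GiB of file data, which explains why a 20GB partial run was achievable but the complete test would be a serious disk and memory exercise.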

The problem is, when my test app runs, Windows runs out of memory. Or
rather, the first time I ran it, it just became very slow, until I
noticed that the pagefile was 11GB and growing. Now I have capped the
pagefile at 2GB, but immediately (2-4 seconds) after running the test
app, Windows starts throwing up Out of Memory dialog boxes,
"helpfully" suggesting that I close MongoDB, etc. This is how I start
mongod:
mongod --dbpath "C:\Data\MongoDB\data" --directoryperdb --rest
Now, if I add the --nojournal option, everything works as I would
expect, and I have succeeded in creating a 20GB test DB this way.

Now, adding files at this rate is not a realistic scenario, but the
problem is that the memory does not seem to be freed. If I change my
test app to only insert 500MB at a time and run it a few times, each
run the "Private Bytes" figure for mongod will increase by
approximately 500MB, even if I wait some minutes between runs. I
have yet to see the Private Bytes figure go down when running
with journalling on. For now, I can just turn journalling off, but
once we move to production, I guess we'd like to have journalling on.

Any suggestions on how to solve/troubleshoot this issue? I'm running
v2.0.1 on Windows 7 64bit Professional.

Scott Hernandez

Nov 14, 2011, 9:09:11 AM
to mongod...@googlegroups.com

Make the page file fixed, not dynamic; also increase to five gigs

--
You received this message because you are subscribed to the Google Groups "mongodb-user" group.
To post to this group, send email to mongod...@googlegroups.com.
To unsubscribe from this group, send email to mongodb-user...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/mongodb-user?hl=en.

Thomas Hjorslev

Nov 14, 2011, 9:35:06 AM
to mongodb-user
On Nov 14, 3:09 pm, Scott Hernandez <scotthernan...@gmail.com> wrote:
> Make the page file fixed, not dynamic; also increase to five gigs

Why five?

It did take a while longer before it crashed, and I did see Private
Bytes go down on several occasions, hovering around 2GB. In the end,
though, mongod was forcibly closed by Windows.

These are errors from the event log around the time of the crash:

Windows successfully diagnosed a low virtual memory condition. The
following programs consumed the most virtual memory: mongod.exe (1684)
consumed 7040729088 bytes, sqlservr.exe (2940) consumed 135147520
bytes, and MsMpEng.exe (1000) consumed 112685056 bytes.

Application popup: Windows - Out of Virtual Memory : Your system is
low on virtual memory. To ensure that Windows runs properly, increase
the size of your virtual memory paging file. For more information, see
Help.

The Desktop Window Manager has encountered a fatal error (0x80070008)

Scott Hernandez

Nov 14, 2011, 9:57:28 AM
to mongod...@googlegroups.com
You will basically need 3-6GB for the journal files. How much physical
memory do you have?

If you use a dynamic page file, the OS doesn't seem to increase it
fast enough to keep up with the (private) mapped memory allocations
needed for journaling. The next version (still being worked on) will
warn and suggest creating a fixed-size page file under Windows for
this exact reason.
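Putting that rule of thumb into numbers for this particular machine — these figures are assumptions drawn from the thread, not an official sizing formula:

```python
# Illustrative fixed page-file sizing for the machine in this thread.
# All numbers are assumptions from the discussion, not an official formula.
journal_allowance_gb = 6   # upper end of the 3-6GB journal estimate
other_processes_gb = 3     # MSSQL, Visual Studio, etc. on this box

# The journal's private mapped views need commit-charge headroom on top
# of whatever the rest of the system is already using.
page_file_gb = journal_allowance_gb + other_processes_gb
print(f"suggested fixed page file: {page_file_gb} GB")  # 9 GB
```

That is consistent with the "five gigs" suggestion being a minimum rather than a comfortable ceiling.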

Thomas Hjorslev

Nov 14, 2011, 11:09:34 AM
to mongodb-user


On Nov 14, 3:57 pm, Scott Hernandez <scotthernan...@gmail.com> wrote:
> You will basically need 3-6GB for the journal files. How much physical
> memory do you have?

4GB physical memory, 2-3GB typically used by MSSQL, Visual Studio,
etc.

> If you use a dynamic page file the OS doesn't increase it fast enough
> to keep up with the (private) mapped memory calls needed for
> journaling, it seems. In the next version we will (still being worked
> on) warn and suggest creating a fixed size page file under windows for
> this exact reason.

It's now set to 8GB, and I managed to insert 100,000 files (10GB)
before a crash. This time the client app crashed. The mongod process
is still alive, but I cannot insert files - I get this error:
MongoDB.Driver.GridFS.MongoGridFSException: Upload client and server MD5 hashes are not equal.

Here's the tail of the log:

Mon Nov 14 17:02:49 [conn2] DocStore.fs.files caught assertion _indexRecord DocStore.fs.files.$filename_1_uploadDate_1 _id: ObjectId('4ec13ba9f41a7f0364ba0daf')
Mon Nov 14 17:02:49 [conn2] VirtualProtect failed (mcw) C:/Data/MongoDB/data/DocStore/DocStore.22 120512d4000000 4000000 errno:1455 The paging file is too small for this operation to complete.
Mon Nov 14 17:02:49 [conn2] DocStore.fs.chunks Assertion failure false db\mongommf.cpp 72
Mon Nov 14 17:02:49 [conn2] insert DocStore.fs.chunks exception: assertion db\mongommf.cpp:72 15ms
Mon Nov 14 17:03:20 [initandlisten] connection accepted from 192.168.2.67:49589 #4
Mon Nov 14 17:03:21 [conn4] VirtualProtect failed (mcw) C:/Data/MongoDB/data/DocStore/DocStore.23 126313be520000 1ae0000 errno:1455 The paging file is too small for this operation to complete.
Mon Nov 14 17:03:21 [conn4] DocStore.fs.chunks Assertion failure false db\mongommf.cpp 72
Mon Nov 14 17:03:21 [conn4] insert DocStore.fs.chunks exception: assertion db\mongommf.cpp:72 249ms

Does it make sense to keep increasing the pagefile size? (I'm running
out of disk space :-))

Thanks for the help so far!

Robert Stam

Nov 14, 2011, 11:17:16 AM
to mongod...@googlegroups.com
The error message "Upload client and server MD5 hashes are not equal" is generated client side, but it appears to be a side effect of things not being in good shape server side (i.e., your server log also has error messages regarding inserts into the fs.chunks collection, which is what the MD5 computation is done over).
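The key point is that the server's file MD5 is computed over the stored fs.chunks data, so it only matches the client's whole-file MD5 when every chunk insert succeeded. A minimal sketch of that equivalence, using plain `hashlib` (no MongoDB required; the 256KB chunk size is an assumption):

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # assumed GridFS chunk size

def md5_of_chunks(data: bytes, chunk_size: int = CHUNK_SIZE) -> str:
    """Feed the file to MD5 chunk by chunk, the way GridFS stores it."""
    h = hashlib.md5()
    for offset in range(0, len(data), chunk_size):
        h.update(data[offset:offset + chunk_size])
    return h.hexdigest()

data = bytes(range(256)) * 1024  # 256KB of sample data
assert md5_of_chunks(data) == hashlib.md5(data).hexdigest()
# If a chunk insert fails server side, the MD5 computed over the stored
# chunks no longer matches the client's whole-file MD5 -- producing
# exactly the mismatch error seen above.
```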

Scott Hernandez

Nov 14, 2011, 11:53:49 AM
to mongod...@googlegroups.com
You don't have enough (virtual) memory for everything you are trying to do.

Note the error returned from Windows printed in the log file: "The
paging file is too small for this operation to complete." This is
literally what Windows returns when mongod tries to allocate memory.

Thomas Hjorslev

Nov 14, 2011, 1:32:59 PM
to mongodb-user
On Nov 14, 5:17 pm, Robert Stam <rob...@10gen.com> wrote:
> The error message "Upload client and server MD5 hashes are not equal" is
> generated client side but appears to be a side effect of things not being
> in good shape server side (i.e., your server log also has
> error messages in it regarding inserting into the fs.chunks collection
> which is what the md5 computation is done over).

Ok, that's what I suspected. After a mongod restart, it seems to be
cool again.

Thomas Hjorslev

Nov 14, 2011, 1:46:21 PM
to mongodb-user


On Nov 14, 5:53 pm, Scott Hernandez <scotthernan...@gmail.com> wrote:
> You don't have enough (virtual) memory for everything you are trying to do.

I think I understand to some extent :-) I'm still confused about what
I can do about it. Is the amount of data that I can write to the
server limited by the amount of (virtual) memory?

> Note the error returned from windows printed in the log file: "The
> paging file is too small for this operation to complete." This is
> literally what windows returns when mongod tries to allocate memory.

Ok, that's fair. I guess I can turn off journalling when I do the
initial data push (it's pretty cool that it can just be disabled and
enabled later, IMHO). I'm more worried about what happens over time,
under normal load, if it slowly eats up all the space in the page file and
then suddenly fails. I'll try to modify my test app so that it feeds the
files more slowly, maybe pausing every 1,000 or 10,000 files or
something like that.
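The pacing idea above can be sketched as a small, driver-agnostic throttling helper. The helper and its names are hypothetical; `insert_one` stands in for whatever actually performs the GridFS upload:

```python
import time

def insert_throttled(files, insert_one, batch_size=1000, pause_seconds=0.0):
    """Insert files in batches, pausing between batches to give the
    server (and the OS page file) time to catch up.

    insert_one is whatever actually stores one file, e.g. a GridFS upload.
    """
    inserted = 0
    for f in files:
        insert_one(f)
        inserted += 1
        if inserted % batch_size == 0 and pause_seconds:
            time.sleep(pause_seconds)
    return inserted

# Demo with a stand-in for the real GridFS upload:
stored = []
count = insert_throttled(range(2500), stored.append,
                         batch_size=1000, pause_seconds=0)
print(count)  # 2500
```

As the rest of the thread shows, though, pacing alone doesn't help if the commit charge is the real limit.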

Scott Hernandez

Nov 14, 2011, 2:28:27 PM
to mongod...@googlegroups.com
In this case it had an error when allocating your database file, not
the journal.

Mon Nov 14 17:03:21 [conn4] VirtualProtect failed (mcw) C:/Data/MongoDB/data/DocStore/DocStore.23 126313be520000 1ae0000 errno:1455 The paging file is too small for this operation to complete.

In general it seems like Windows needs enough virtual memory (page
file) when mapping new files, such as when it creates new db files. It
seems to depend on the page file settings, some ratio of
free/total memory, and the size of the files mapped.

Thomas Hjorslev

Nov 14, 2011, 3:09:45 PM
to mongodb-user
On Nov 14, 8:28 pm, Scott Hernandez <scotthernan...@gmail.com> wrote:
> In this case it had an error when allocating your database file, not
> the journal.
>
> Mon Nov 14 17:03:21 [conn4] VirtualProtect failed (mcw) C:/Data/MongoDB/data/DocStore/DocStore.23 126313be520000 1ae0000 errno:1455 The paging file is too small for this operation to complete.

I see.


> In general it seems like windows needs enough virtual memory (page
> file) when mapping new files, like when it creates new db files. It
> seems to depend on the page file settings and some ration of
> free/total memory and the size of the files mapped.

I was hoping it was simply caused by the fact that I was pushing
data so fast, so I tried something new: pushing 1000 docs (100MB),
then pausing, then 1000 more, etc. After 100,000 docs (10GB) I let it
"rest" for 20 minutes then started inserting another 100,000. When the
datafiles hit 11.2GB, I once again hit the ceiling ("paging file is
too small..."). The 11.2GB is strangely close to my total virtual
memory of 12GB (4 ph + 8 pf) when accounting for other running
processes.
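That "strangely close" number lines up with how Windows computes its commit limit (roughly physical RAM plus page file). A quick sanity check with the figures from this thread; the 0.8GB estimate for other processes is an assumption based on the earlier event-log numbers:

```python
# Rough commit-limit check matching the numbers in this thread.
# On Windows, the total memory available to commit is roughly
# physical RAM plus the page file.
physical_gb = 4
page_file_gb = 8
other_processes_gb = 0.8  # rough estimate for MSSQL, MsMpEng, etc.

commit_limit_gb = physical_gb + page_file_gb           # 12
mongod_headroom_gb = commit_limit_gb - other_processes_gb
print(f"commit limit: {commit_limit_gb} GB")
print(f"headroom for mongod: {mongod_headroom_gb:.1f} GB")  # ~11.2
```

Hitting the wall at 11.2GB of data files is therefore consistent with mongod exhausting the commit limit rather than any per-run pacing issue.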

I'll try on a server version of Windows as soon as I can. If it's the
same issue, I'll have to evaluate if we can live without journalling
in production. Unfortunately, running it on a UNIX OS is out of the
question :s

Do you think there is any chance that this will (can?) be fixed in a
later version of MongoDB?

Once again, thanks a lot for your answers, it's much appreciated.

Scott Hernandez

Nov 14, 2011, 5:15:05 PM
to mongod...@googlegroups.com

Yes, I think there is a JIRA issue for this. I don't have time to look
now, but it is not a lost cause, and it is something that is being
investigated, albeit slower than some would like.

> Once again, thanks a lot for your answers, it's much appreciated.
>
