MapViewOfFile failed *** Not enough storage is available to process this command.


JR

Apr 13, 2011, 5:37:42 AM
to mongodb-user
Hello Mongods,

I know the limitations of the MongoDB 32-bit builds, but I have
encountered a huge problem.

I receive the following error while trying to read data from the
database:
Wed Apr 13 11:49:49 [conn9] MapViewOfFile failed -db name here-
errno:8 Not enough storage is available to process this command.

Wed Apr 13 11:49:49 [conn9] assertion 10084 can't map file memory -
mongo requires 64 bit build for larger datasets ns:-DB name-
a query:{ $query: {} }

I am using the Python driver with the latest 1.8.1 Windows 32-bit
build of Mongo.

The problem is not the limitation itself, which I am aware of, but
that I want to extract the stored data, place it in a regular file,
and then free the space those entries occupy for other tasks.

Is there any way this can be achieved? Mongo is giving me a hard time,
not even allowing access to the data for further processing; it just
locks me out. Is there any way to read this data and move it out of
the DB?
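
For reference, this is roughly what I am trying to do from the Python
driver (the database, collection and file names below are just
placeholders); it is exactly this find() that dies with the
MapViewOfFile error:

    # Sketch of the export I am attempting; the find() fails because
    # mongod cannot map the data files on the 32-bit build.
    import json
    from pymongo import Connection   # pre-MongoClient API of the 2011-era driver

    db = Connection("localhost", 27017)["mydb"]      # placeholder db name
    with open("export.json", "w") as out:
        for doc in db["mycollection"].find():        # fails server-side here
            doc["_id"] = str(doc["_id"])              # ObjectId is not JSON-serializable
            # (date/binary fields would need similar handling)
            out.write(json.dumps(doc) + "\n")
    # afterwards the idea would be to remove the exported documents to free space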

----

On another note, I have to say a huge thumbs up for the O'Reilly book
you guys made. I use it as a frequent reference when working with
Mongo and it was really worth the money. Nicely written, and it also
gives some information about what happens under the hood. It is well
covered, though perhaps missing some more information about the
particular commands of the individual drivers, which sometimes seem to
vary a lot. Indexing is sadly handled a bit differently in the Python
driver than in the Java version. Perhaps that would be another
question later on.
Otherwise, thanks for bringing us such a DB! :)

Mathias Stearn

Apr 13, 2011, 12:25:09 PM
to mongod...@googlegroups.com
If you have more than one database, you can try restarting mongod and
only accessing one db at a time to dump it, then restarting it again
before dumping the next db. Alternatively, you can move your data
files to a 64-bit machine and avoid the 32-bit limitations altogether.
We use the same file format on all platforms, so you can freely move
the files to, e.g., a 64-bit Linux box without having to do any
conversions.
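
Something along these lines (just a sketch; the database names and
output path are examples, and you would restart mongod between the
dumps so only one db gets mapped at a time):

    # Dump one database at a time with mongodump; --db limits the dump
    # to a single database, writing .bson files under the --out directory.
    import subprocess

    for name in ["db1", "db2"]:                 # your database names here
        subprocess.check_call(["mongodump", "--db", name, "--out", "dump/"])
        # restart mongod here before moving on to the next database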


JR

Apr 13, 2011, 1:52:02 PM
to mongodb-user
Hi Mathias,

your suggestion of accessing one database at a time unfortunately
doesn't work out. The action causing the above error is a global
".find()" on the DB, so perhaps there is a way to avoid Mongo
unpacking all the data at once? Limiting the find with
'find().limit(1)' doesn't help here, though.

I'm a bit confused, though, about why Mongo blocks completely. Perhaps
there should be a way to make it behave like a socket object - reading
(and internally unpacking) only a limited "part" of the data at a time
and handing over the next part as it is consumed, instead of
loading/mapping everything in advance, for the 'find()' queries for
example?
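
To show what I mean, this is roughly what I tried (names are
placeholders); as far as I can tell neither helps, since the failure
happens inside mongod when it maps the data files, not in the driver:

    # Neither limiting nor batching the cursor avoids the error, because
    # mongod still has to map the database files to serve the query at all.
    from pymongo import Connection

    coll = Connection()["mydb"]["mycollection"]   # placeholder names

    list(coll.find().limit(1))                # still fails with the MapViewOfFile assertion
    for doc in coll.find().batch_size(100):   # batch_size (if your pymongo has it)
        pass                                  # only affects wire batches, not the mapping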

Moving to a 64-bit variant could be an option if I weren't just
stress-testing a project for myself; if I were a company user and not
so limited in resources, I would surely give this a shot. ;)
But it's good to know that the data files are portable, though.

Thank you a lot and hoping for other ideas,
Jan

JR

Apr 13, 2011, 1:58:31 PM
to mongodb-user
I have to correct myself: I was actually not trying to dump the data
using the import/export tools, but from the Python driver interface /
an application.

It seems I will have to dump the data to BSON and parse it manually
then; I had just hoped there would be a built-in solution for this
kind of problem:
http://www.mongodb.org/display/DOCS/Import+Export+Tools#ImportExportTools-Example%3ADumpingaSingleCollection

Guess this is what you mean by dumping, right?
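
If I go that route, I assume something like this would let me parse a
dumped .bson file back in Python (the path is just an example):

    # Sketch: read documents back out of a mongodump .bson file.
    # The bson package ships with pymongo; decode_all reads the whole
    # file into memory, so this only works for dumps that fit in RAM.
    import bson

    with open("dump/mydb/mycollection.bson", "rb") as f:
        docs = bson.decode_all(f.read())
    print(len(docs))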

Jan

JR

Apr 16, 2011, 6:47:38 AM
to mongodb-user
I finally tried this, using mongodump over the command line.

But the problem is the same: even if I only access one database at a
time, the data is too big for Mongo to unpack/dump.

Are there any other methods to access the data at all? Right now I can
neither:
1. Access any data from the DB globally, nor
2. Dump out data to free space in the DB.

Switching to a 64-bit system is, at present, out of reach.

I am a bit frustrated with the DB design in this situation, because
the desired behavior would be that, even if no new data can be added
to the database, the existing data stays accessible at all times
instead of locking up completely - perhaps in the sense of a reserved
safety margin to keep the data flow going.

Is there any way to save out a particular collection (dump it, extract
it) without copying all the DB files somewhere, going through a new DB
setup, and later, maybe, merging the data back (which I also don't
know is possible)?
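
From the tool docs it looks like mongodump can at least be pointed at
a single collection, or even a slice of one, though I am not sure it
gets around the mapping problem (the names and the query below are
just examples):

    # Dump only one collection, optionally restricted by a query.
    import subprocess

    subprocess.check_call(["mongodump", "--db", "mydb",
                           "--collection", "queue_items",
                           "--query", '{"processed": true}',
                           "--out", "dump/"])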

The problem I am facing right now is that I have a live setup which
feeds into and works from the database in a loop, using queues.

The database is made up of 7 collections, of which one contains a
queue of items - impossible to export, so I cannot continue my
application from this point - and another is used as a data store
which I also cannot write out of the DB to disk files and drop in
order to free space.

Any further ideas besides switching the environment would be much
appreciated!

Jan

Scott Hernandez

Apr 17, 2011, 12:21:33 PM
to mongod...@googlegroups.com
What are the files and their sizes?

Copying the data to a 64-bit machine is the best option.
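
The move itself is just a file copy of the .ns and data files into the
new machine's dbpath, once mongod has been shut down cleanly. A rough
sketch (the paths and db name are examples):

    # Copy the data files of one database to another dbpath (example paths).
    # Shut mongod down cleanly before copying.
    import glob, shutil

    for path in glob.glob(r"C:\data\db\mydb.*"):   # mydb.0, mydb.1, ..., mydb.ns
        shutil.copy(path, r"D:\transfer\db")       # destination dbpath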

JR

Apr 19, 2011, 2:39:24 AM
to mongodb-user
Hello Scott,

the file table looks as follows:

03.04.2011 22:42 67.108.864 ilsDB.0
07.04.2011 06:36 67.108.864 ilsDB.1
08.04.2011 07:49 67.108.864 ilsDB.2
08.04.2011 08:38 134.217.728 ilsDB.3
08.04.2011 23:58 268.435.456 ilsDB.4
11.04.2011 21:42 536.608.768 ilsDB.5
12.04.2011 07:53 536.608.768 ilsDB.6
13.04.2011 05:59 536.608.768 ilsDB.7
03.04.2011 22:42 16.777.216 ilsDB.ns
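
(Adding those sizes up in Python, since it makes the situation fairly
obvious:)

    # Total size of the data files listed above, in bytes.
    sizes = [67108864, 67108864, 67108864, 134217728, 268435456,
             536608768, 536608768, 536608768, 16777216]
    print(sum(sizes))   # 2230583296 bytes, just over 2 GB (2**31 = 2147483648)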

////

Any ideas besides switching would be more than appreciated. ;)

Otherwise, are there any settings which might prevent the database
from filling up and locking? I don't want to risk running into the
same issue if I switch to a 64-bit machine, or running out of working
memory on that one.

Jan

Gaetan Voyer-Perrault

Apr 20, 2011, 1:23:24 PM
to mongod...@googlegroups.com
> Otherwise, are there any settings which might prevent the database from filling up and locking?

No "switches" or "settings" are available here.

> I don't want to risk running into the same issue if I switch to a 64-bit machine, or running out of working memory on that one.

2^31 ~= 2GB
2^63 ~= 9,223,372,036 GB

You're going to need a very big hard drive for this :)
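
(Those figures are just the size of the address space in bytes; a
quick check in Python:)

    # 32-bit vs 64-bit virtual address space, in bytes / decimal GB.
    print(2 ** 31)             # 2147483648 bytes, i.e. about 2 GB
    print(2 ** 63 // 10 ** 9)  # 9223372036 GB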



JR

Apr 21, 2011, 3:01:36 AM
to mongodb-user
Hello Gaetan,

thanks for the space advice, but I am not sure I can follow what your
calculation is based on...
Maybe I am missing something here.

Could a future implementation be considered which makes Mongo
recognize when the data limit is reached (so as to keep the data
accessible)?

Perhaps it could even split a DB up into an automatically added
collection (with a continuing name) if an existing one runs "full", to
prevent data loss? Not sure if this would make sense in sharded
setups, though, but for single machines it might do the trick.

Otherwise I think the problem can be considered closed; I guess I will
have to switch to 64-bit then.
Thanks for all the advice on this.

Jan