MongoDB Document Size Limit


Shalom Rav

Apr 10, 2011, 11:57:01 AM
to mongodb-user
It is my understanding that MongoDB has a size limit of 16MB per
document. I also read that documents can be 'embedded' in other
documents. Does this provide a way to increase that limit? If it
doesn't, what other ways are suggested to overcome it?

Sergei Tulentsev

Apr 10, 2011, 12:04:41 PM
to mongod...@googlegroups.com
16MB is the limit for the whole document, including all of its embedded children.

If there is a possibility that your data will outgrow this limit, you may consider splitting it into separate documents/collections.

Classic question: comments and posts, embed vs. reference. If you're building a home page for your cat, you'll most likely be fine with embedded comments. But if you're building the next LiveJournal, it's better to store comments in their own collection.
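
Very roughly, the two shapes look like this with today's pymongo API (a sketch only; the database, collection, and field names are made up for illustration):

    from pymongo import MongoClient

    db = MongoClient()["blog"]   # assumes a local mongod; names are illustrative

    # Embedded: comments live inside the post document and count toward its 16MB.
    db.posts.insert_one({
        "title": "my cat",
        "comments": [{"author": "alice", "text": "cute!"}],
    })
    db.posts.update_one(
        {"title": "my cat"},
        {"$push": {"comments": {"author": "bob", "text": "agreed"}}},
    )

    # Referenced: comments get their own collection and point back to the post,
    # so the post document stays small no matter how many comments pile up.
    post_id = db.posts.insert_one({"title": "big post"}).inserted_id
    db.comments.insert_one({"post_id": post_id, "author": "alice", "text": "first"})
    for comment in db.comments.find({"post_id": post_id}):
        print(comment["text"])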


--
Best regards,
Sergei Tulentsev

Robert Stam

Apr 10, 2011, 12:05:36 PM
to mongodb-user
The size of embedded documents does count towards the size of the root
document, so embedded documents aren't a way to exceed the 16MB limit.

One way to store more than 16MB is to factor a few of the largest
components out into separate documents and reference those new
documents from the original collection.

If any of the embedded information is itself larger than 16MB, you can
use GridFS, which lets you store binary blobs of any size.
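
For what it's worth, the GridFS route looks roughly like this with pymongo (a sketch; the names are made up, not an actual schema):

    import gridfs
    from pymongo import MongoClient

    db = MongoClient()["mydb"]          # illustrative database name
    fs = gridfs.GridFS(db)

    # GridFS chunks the blob across many small documents, so it can exceed 16MB.
    blob = b"\x00" * (50 * 1024 * 1024)             # 50MB of dummy data
    file_id = fs.put(blob, filename="big.bin")

    # The main document stores only a reference to the GridFS file.
    db.items.insert_one({"name": "dataset-1", "payload_id": file_id})

    # Later, follow the reference to read the blob back.
    doc = db.items.find_one({"name": "dataset-1"})
    data = fs.get(doc["payload_id"]).read()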

Shalom Rav

Apr 10, 2011, 12:10:43 PM
to mongodb-user
Robert, Sergei, thank you for your help.

Is there a limit to the number of collections?
I read somewhere that document operations are atomic. Is the same
true for operations on collections?

Robert Stam

Apr 10, 2011, 12:24:00 PM
to mongodb-user
There is a limit on the number of collections. See:

http://www.mongodb.org/display/DOCS/Using+a+Large+Number+of+Collections

The limit is about 24000 namespaces. Each collection is a namespace,
and so is each index. Since every collection has at least an index on
_id, the practical limit on the number of collections is about 12000,
and it can be lower if you use additional indexes.

As with any limit, you are better off staying well away from it;
otherwise you might hit it unexpectedly.
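
If you want to see how close a database is to that ceiling, something like this should work against servers of this era (a sketch with pymongo; system.namespaces belongs to the old MMAPv1 storage engine and is gone in later versions):

    from pymongo import MongoClient

    # The .ns file, and therefore the limit, is per database.
    db = MongoClient()["mydb"]          # illustrative database name

    # On MMAPv1-era servers each collection and each index is one entry here.
    used = sum(1 for _ in db["system.namespaces"].find())
    print("namespaces in use:", used)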

Scott Hernandez

Apr 10, 2011, 12:31:34 PM
to mongod...@googlegroups.com
On Sun, Apr 10, 2011 at 9:10 AM, Shalom Rav <csharppl...@gmail.com> wrote:
> Robert, Sergei, thank you for your help.
>
> Is there a limit to the number of collections?

Yes. There is a limit on the total number of collections + indexes
(namespaces), and it is about 12K collections with just the default
index (_id). This is governed by the default nssize and can be
increased with a command-line or config-file option.

http://www.mongodb.org/display/DOCS/Using+a+Large+Number+of+Collections
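
For example, on mongod of this era the option sets the size of the per-database .ns file in megabytes (16 by default, which gives the ~24000 namespaces mentioned above):

    mongod --nssize 32

    # or the equivalent setting in the mongod config file:
    nssize = 32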


> I read somewhere that document operations are atomic. Is the same
> true for operations on collections?

Only operations on a single document are atomic:
http://www.mongodb.org/display/DOCS/Atomic+Operations
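
To make that concrete, the atomic update operators work on one document at a time; a sketch with today's pymongo API (collection and field names are made up):

    from pymongo import MongoClient, ReturnDocument

    counters = MongoClient()["mydb"]["counters"]

    # $inc modifies a single document atomically (no read-modify-write race).
    counters.update_one({"_id": "page_views"}, {"$inc": {"n": 1}}, upsert=True)

    # find_one_and_update atomically reads and updates that same single document.
    doc = counters.find_one_and_update(
        {"_id": "page_views"},
        {"$inc": {"n": 1}},
        return_document=ReturnDocument.AFTER,
        upsert=True,
    )

    # Nothing here spans two documents or two collections in one atomic step;
    # in MongoDB of this era that has to be handled at the application level.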



Shalom Rav

Apr 10, 2011, 1:30:58 PM
to mongodb-user
Thank you gentlemen.

Is there a limit to the number of documents a collection can have?
(Or to the size of one collection?)

Suppose one stores 10000 documents in a collection. Each document has
some text-based keys and ONE binary array (20000 floats stored
serially as a BSON binary value).
Suppose one wants to read ALL 10000 arrays that are in the 10000
documents (i.e., the whole collection).

How fast would such a reading operation be? Is MongoDB suitable for
such large data sets?


Sergei Tulentsev

Apr 10, 2011, 1:33:15 PM
to mongod...@googlegroups.com
MongoDB can hold ginormous amounts of data. In this scenario you will be limited by disk or network I/O rather than MongoDB performance.

Shalom Rav

Apr 10, 2011, 2:08:06 PM
to mongodb-user
Sergei,

Thank you. I know that MongoDB is scalable; the question is whether
such a need would be best served by MongoDB or, alternatively, by a
column-oriented DB like Cassandra.
Performance is critical.


Scott Hernandez

Apr 10, 2011, 2:16:13 PM
to mongod...@googlegroups.com
On Sun, Apr 10, 2011 at 10:30 AM, Shalom Rav
<csharppl...@gmail.com> wrote:
> Thank you gentlemen.
>
> Is there a limit to the number of documents a collection can have?
> (Or to the size of one collection?)

No, there is no limit on either.

> Suppose one stores 10000 documents in a collection. Each document has
> some text-based keys and ONE binary array (20000 floats stored
> serially as a BSON binary value).
> Suppose one wants to read ALL 10000 arrays that are in the 10000
> documents (i.e., the whole collection).
> How fast would such a reading operation be? Is MongoDB suitable for
> such large data sets?

That is fine; your performance will depend on many things: how much
memory you have, the working set size (all the indexes plus those
docs), how fast your disks are (assuming the data doesn't fit in
memory), and your network speed/throughput. You should test it and
see if it meets your requirements.
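
A crude way to run that test with pymongo, following the shapes you described (10000 documents, each with one binary field holding 20000 doubles); everything else is made up, and it writes roughly 1.5GB, so point it at a scratch database:

    import struct
    import time

    from bson.binary import Binary
    from pymongo import MongoClient

    coll = MongoClient()["scratch"]["arrays"]
    coll.drop()

    # 20000 doubles packed into one binary value (about 156KB per document).
    payload = Binary(struct.pack("<20000d", *([0.0] * 20000)))
    coll.insert_many({"name": "doc%d" % i, "data": payload} for i in range(10000))

    # Read every array back and unpack it, timing the full collection scan.
    start = time.time()
    count = 0
    for doc in coll.find({}, {"data": 1}):
        values = struct.unpack("<20000d", doc["data"])
        count += 1
    print("read %d arrays in %.1f seconds" % (count, time.time() - start))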
