Yes. There is a limit on the total number of collections + indexes
(namespaces): it is about 12K collections with just the default _id
index. This is set by the default nssize and can be raised with a
command-line or config-file option.
http://www.mongodb.org/display/DOCS/Using+a+Large+Number+of+Collections
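For example, a minimal sketch (32 is an arbitrary value here; pick a size
based on how many namespaces you actually need):

    # command line: allocate a 32MB namespace file instead of the default 16MB
    mongod --dbpath /data/db --nssize 32

    # or the equivalent setting in the config file
    nssize = 32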
> I read in some place that document operations are atomic. Is it true
> also regarding collections?
Only updates to a single document are atomic; there are no atomic
operations across multiple documents or a whole collection:
http://www.mongodb.org/display/DOCS/Atomic+Operations
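A small pymongo sketch (database, collection, and field names are made up
for illustration): the single-document $inc below is applied atomically,
but there is nothing equivalent that covers a whole collection:

    from pymongo import MongoClient

    client = MongoClient()             # assumes a mongod on localhost:27017
    counters = client.test.counters    # hypothetical database/collection

    # Atomic: the increment is applied server-side to one document.
    counters.update_one({"_id": "pageviews"}, {"$inc": {"count": 1}}, upsert=True)

    # Not atomic as a unit: each matched document is updated atomically,
    # but another client can read while only some documents are updated.
    counters.update_many({}, {"$inc": {"count": 1}})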
>
> On Apr 10, 12:05 pm, Robert Stam <rstam10...@gmail.com> wrote:
>> The size of embedded documents does count towards the size of the root
>> document, so embedded documents aren't a way to exceed the 16MB limit.
>>
>> One way to store more than 16MB is to factor out a few of the largest
>> components into a separate document and reference these new documents
>> from the original collection.
>>
>> If any of the embedded information is itself larger than 16MB you can
>> use GridFS, which lets you store binary blobs of any size.
>>
>> On Apr 10, 11:57 am, Shalom Rav <csharpplusproj...@gmail.com> wrote:
>>
>> > It is my understanding that MongoDB has a size limit of 16MB per
>> > document. I also read that documents can be 'embedded' in other
>> > documents. Does this provide a way to increase this limit? If it
>> > doesn't, what other ways are suggested to overcome this?
>
No, embedding documents does not increase the limit.
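As Robert's reply above suggests, the usual workarounds are to move the
largest pieces into their own collection and reference them by _id, or to
push anything that may itself exceed 16MB into GridFS. A rough pymongo
sketch of both (database, collection, and field names are made up):

    import gridfs
    from pymongo import MongoClient

    client = MongoClient()
    db = client.test

    # 1) Factor a large component out into its own collection, keep a reference.
    big_part = {"floats": [0.0] * 100000}      # stand-in for a large sub-document
    part_id = db.parts.insert_one(big_part).inserted_id
    db.reports.insert_one({"title": "Q1 report", "part_id": part_id})

    # Reading it back takes a second query; there is no automatic join.
    report = db.reports.find_one({"title": "Q1 report"})
    part = db.parts.find_one({"_id": report["part_id"]})

    # 2) GridFS chunks arbitrarily large blobs across many documents.
    fs = gridfs.GridFS(db)
    blob_id = fs.put(b"\x00" * (20 * 1024 * 1024), filename="raw.bin")
    blob = fs.get(blob_id).read()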
> Suppose one stores 10000 documents in a collection. Each document has
> some text-based keys, and ONE binary array (including 20000 floats
> stored serially as a 'BSON' object) stored in it.
> Suppose that one wants to read ALL the 10000 arrays that are in the
> 10000 documents (all in that one collection).
> How fast would such a reading operation be? Is MongoDB suitable for
> such large data sets?
That is fine; your performance will depend on many things: how much
memory you have, the working set size (all the indexes plus those
docs), how fast your disks are (assuming this doesn't fit in memory),
and your network speed/throughput. You should test it and see whether
it meets your requirements.
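If it helps, here is a minimal pymongo sketch of exactly that test (the
collection name, key layout, and 20000-double packing are assumptions
based on your description):

    import struct
    import time

    from bson.binary import Binary
    from pymongo import MongoClient

    client = MongoClient()
    samples = client.test.samples      # hypothetical collection

    # Write 10000 documents, each with a couple of text keys and 20000
    # doubles packed into one binary field (about 156KB per document).
    packed = Binary(struct.pack("<20000d", *([0.0] * 20000)))
    samples.insert_many(
        [{"name": "sample-%d" % i, "tag": "demo", "values": packed}
         for i in range(10000)]
    )

    # Read all 10000 arrays back, unpacking each one, and time the scan.
    start = time.time()
    total = 0
    for doc in samples.find({}, {"values": 1}):
        total += len(struct.unpack("<20000d", doc["values"]))
    print("read %d floats in %.1f s" % (total, time.time() - start))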