While it may not be strictly impossible, it is almost certainly impractical on a single machine (which seems to be what you're aiming for).
I think Linux limits you to 128 TB of virtual address space per process; that means you will need at least 30 mongod processes managing different shards, and twice that many if you use journaling.
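To make that arithmetic concrete, here is a rough sketch in Python. The 100 bytes/record figure is the assumption used below, and the result ignores index overhead, which is what pushes 29 up to "at least 30":

    import math

    # Back-of-envelope: how many mongod processes does the 128 TB
    # per-process virtual address space limit force? (mongod's
    # mmap-based storage maps every data file into the process
    # address space, so each shard is capped at roughly that size.)
    DOCS = 36e12                 # 36 trillion documents
    BYTES_PER_DOC = 100          # assumed average record size
    VA_LIMIT_TB = 128            # Linux x86-64 per-process limit, in TB

    data_tb = DOCS * BYTES_PER_DOC / 1e12       # ~3600 TB of raw data
    min_shards = math.ceil(data_tb / VA_LIMIT_TB)
    print(min_shards)            # 29 -- call it 30+ once indexes are included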
In practice I have not seen people running individual shards that big; think single-digit TB as a practical maximum, so that at least the frequently used parts of the indexes can stay in memory. That probably means hundreds of servers.
You'll also need petabytes of storage: say 10 PB raw with some redundancy (RAID), assuming 100 bytes per record. That is at least 3,000 spindles, which will likely fill 10 racks or so even if you can handle high power density. In practice you might attach, say, 10 drives to each of a few hundred servers, but either way you're talking about a pretty big data center.
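Again as a rough sketch, assuming ~3 TB per spindle (an assumption on my part, not a figure from your setup) and the 10-drives-per-server attach density mentioned above:

    # Back-of-envelope: spindle and server counts for ~10 PB raw.
    RAW_PB = 10                  # data plus RAID/redundancy overhead
    DRIVE_TB = 3                 # assumed per-spindle capacity
    DRIVES_PER_SERVER = 10       # hypothetical attach density

    spindles = RAW_PB * 1000 / DRIVE_TB         # ~3,333 drives
    servers = spindles / DRIVES_PER_SERVER      # ~333 servers
    print(round(spindles), round(servers))      # 3333 333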
I suspect there are much more efficient ways to structure this data than 36 trillion individual documents. Happy to think about suggestions if you supply more details.
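Purely as an illustration of what a more efficient structure could look like (I don't know your data, so the sensor/hour schema here is entirely hypothetical): a common pattern is to bucket many small records into one document, which can cut the document count, and the per-document overhead, by orders of magnitude. A minimal pymongo sketch:

    from pymongo import MongoClient

    # Hypothetical bucketing sketch: instead of one document per
    # reading, store one document per (sensor, hour) bucket holding
    # an array of readings. At ~1,000 readings per bucket, this
    # cuts the document count by roughly 1000x.
    client = MongoClient()
    buckets = client.mydb.readings

    def record_reading(sensor_id, hour, ts, value):
        # Upsert appends the reading to its bucket's array,
        # creating the bucket document on the first write.
        buckets.update_one(
            {"_id": {"sensor": sensor_id, "hour": hour}},
            {"$push": {"readings": {"t": ts, "v": value}}},
            upsert=True,
        )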
-- Max