We are facing similar issues (not in production yet; still evaluating MongoDB as a possible solution).
The approach I took so far is to design in such a way that we can have customer-per-DB, customer-per-collection, or a combination. No conclusive results yet on which approach is better (we share your concern about the number of collections, etc.; we are quite granular, with lots of collections per customer). A sketch of how we keep that choice open follows below.
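Roughly, it comes down to a small naming-strategy abstraction (a minimal sketch, not our actual code; the interface and class names here are hypothetical):

// Resolves where a customer's data lives; lets us switch between
// customer-per-DB, customer-per-collection, or a mix without touching callers.
public interface TenantNamingStrategy {
    String databaseName(String customerId);
    String collectionName(String customerId, String logicalCollection);
}

// customer-per-DB: one database per customer, shared collection names
public class DbPerCustomerStrategy implements TenantNamingStrategy {
    public String databaseName(String customerId) { return "cust_" + customerId; }
    public String collectionName(String customerId, String logicalCollection) { return logicalCollection; }
}

// customer-per-collection: shared database, collection names prefixed per customer
public class CollectionPerCustomerStrategy implements TenantNamingStrategy {
    public String databaseName(String customerId) { return "shared"; }
    public String collectionName(String customerId, String logicalCollection) { return customerId + "_" + logicalCollection; }
}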
As we will probably end up with a multi-tenant system, and potentially under contractual obligations regarding co-mingling of data, I built encryption (and compression) on top of a chunking streaming API (Java driver) into MongoDB; the chunking addresses the user object size limitation, see below. We use AES-128 by default, and the performance impact is not bad at all. If you also compress, do that first and then encrypt, or the encryption will render the compression useless and just burn CPU time for nothing :o). In the extreme case where a customer wants physical data separation, we would probably use separate MongoDB clusters. As far as I know, the Java driver pools the connections per-DB.
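For the ordering point, the streams compose like this (a minimal sketch assuming AES/CBC with a pre-shared 128-bit key and GZIP; the class name and key/IV handling are simplified, not our production code):

import java.io.IOException;
import java.io.OutputStream;
import java.security.GeneralSecurityException;
import java.util.zip.GZIPOutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class CompressThenEncrypt {
    // Wraps the destination so data is compressed FIRST, then encrypted.
    // (Encrypting first yields high-entropy bytes that GZIP cannot shrink.)
    public static OutputStream wrap(OutputStream destination, byte[] aes128Key, byte[] iv)
            throws IOException, GeneralSecurityException {
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aes128Key, "AES"), new IvParameterSpec(iv));
        OutputStream encrypted = new CipherOutputStream(destination, cipher);
        return new GZIPOutputStream(encrypted); // caller writes plaintext; it is gzipped, then encrypted
    }
}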
Cheers,
PS: the comments from my code (I still need to check these against 2.0):
// MongoDB (as of 1.8.3) has a user object size limitation of 16M (builder.h defines it as
// BSONObjMaxUserSize = 16*1024*1024). BUT it seems that size can change, based on who is
// master, replica set dynamics, etc., so we cannot interrogate Mongo for that size and use it
// to trigger chunking.
public static final int DEF_CHUNK_SIZE = 15 * 1024 * 1024; // 15M
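For context, a simplified sketch of how a chunked write can look against the 2.x Java driver (field names and the readFully helper are illustrative, not the actual implementation):

import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import java.io.IOException;
import java.io.InputStream;

public class ChunkedWriter {
    public static final int DEF_CHUNK_SIZE = 15 * 1024 * 1024; // 15M, safely under the 16M BSON cap

    // Splits the (already compressed+encrypted) stream into <= DEF_CHUNK_SIZE pieces,
    // storing each piece as its own document keyed by a shared object id and a sequence number.
    public static void write(DBCollection chunks, String objectId, InputStream in) throws IOException {
        byte[] buffer = new byte[DEF_CHUNK_SIZE];
        int seq = 0;
        int read;
        while ((read = readFully(in, buffer)) > 0) {
            byte[] payload = new byte[read];
            System.arraycopy(buffer, 0, payload, 0, read);
            chunks.insert(new BasicDBObject("objectId", objectId)
                    .append("seq", seq++)
                    .append("data", payload));
        }
    }

    // Fills the buffer as far as the stream allows; returns bytes read (0 at end of stream).
    private static int readFully(InputStream in, byte[] buffer) throws IOException {
        int total = 0;
        while (total < buffer.length) {
            int n = in.read(buffer, total, buffer.length - total);
            if (n < 0) break;
            total += n;
        }
        return total;
    }
}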
--
Octavian Florescu
oflo...@gmail.com