Here is what I know, though I'll admit up front that it is founded on very little experience.
The allocation of memory/disk space (which padding is part of) exists to avoid moving documents on disk, which happens when a document outgrows its allocated space. This kind of document movement is really bad for performance if it happens a lot. I'm not sure exactly what "a lot" is, but document movement seems to be something you really want to avoid if you can; I've been told so several times, and you can read it often in the manual. So that is the why of allocation. You probably already knew this.
But how does powerOf2Sizes allocation work?
Say the first document you store in your collection has only 1 subdocument. The allocation for that record will be the next power-of-2 size in KB at or above the record's size, starting at 32KB (i.e. 32KB, 64KB, 128KB, 256KB, etc., up to 4MB; after that, it can be anywhere between 4MB and 16MB, rounded up to the nearest MB). So if that one document is 24KB in size, it will take up 32KB of space. If you then store a lot of documents with only 1 subdocument each, you'll end up with a lot of 32KB blocks of used space.
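To make the rounding concrete, here is a small sketch of the allocation rule *as I've described it above*. The boundaries (32KB minimum, powers of 2 up to 4MB, then rounding up to the nearest MB, capped at 16MB) come from this answer, not from MongoDB's actual source code, so treat the function as an illustration of the idea rather than the real algorithm:

```python
def power_of_2_alloc_kb(doc_kb):
    """Return the allocated record size in KB for a document of doc_kb KB,
    following the rule described in the text above (an approximation)."""
    if doc_kb > 4 * 1024:
        # Above 4MB: round up to the nearest whole MB, capped at 16MB.
        mb = -(-doc_kb // 1024)  # ceiling division
        return min(mb, 16) * 1024
    alloc = 32
    while alloc < doc_kb:
        alloc *= 2
    return alloc

print(power_of_2_alloc_kb(24))    # -> 32   (a 24KB document gets a 32KB slot)
print(power_of_2_alloc_kb(140))   # -> 256  (128KB is too small)
print(power_of_2_alloc_kb(5000))  # -> 5120 (~4.9MB rounds up to 5MB)
```

So every document lands in one of a small number of standard slot sizes, which matters again later when deleted space gets reused.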
Now say you need to store a new document with 50 subdocuments that is actually 140KB in size. Mongo will then allocate 256KB for that single record (128KB is too small). And every other document you save after that will also get 256KB, even though they are actually smaller. (I am not sure about this particular point; I couldn't find any examples or info.)
Now, what about updates?
For any new documents, they have plenty of room to grow. No problems there.
However, if you have to add the other 49 subdocuments to one or more of the older 32KB documents, that will cause document movement, since 140KB is much larger than the allocated 32KB. Not good, and exactly what you want to avoid.
So, to preallocate your data, you could load an initial dummy document with all 50 subdocuments filled with fake data. If you are absolutely sure the documents will never grow beyond that schema, you could then tighten down the padding a lot. But be aware: changing the schema in any way in the future, such that updated documents grow in size, will cause havoc. Note that the padding can only make records smaller than the powerOf2Sizes allocation, not larger.
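The preallocation idea can be sketched like this: build a dummy document whose 50 subdocuments carry placeholder data sized like the real thing, so the record gets allocated at its eventual size from the start. The field names and the roughly-2KB-per-subdocument filler below are made-up examples for illustration, not anything from MongoDB itself (and JSON size is used here as a rough stand-in for BSON size):

```python
import json

# Placeholder subdocument, padded to roughly the expected real-world size.
FAKE_SUBDOC = {"reading": "x" * 2048, "ts": 0}  # ~2KB of filler

# Dummy document with all 50 subdocuments present up front.
dummy = {
    "_id": "prealloc-template",
    "subdocs": [dict(FAKE_SUBDOC, ts=i) for i in range(50)],
}

# Rough size check before inserting it into the collection.
approx_kb = len(json.dumps(dummy)) / 1024
print(len(dummy["subdocs"]), round(approx_kb))  # 50 subdocuments, ~100KB
```

You would insert a document shaped like this, let the allocation happen, and then overwrite the fake data with real values as they arrive, so later updates don't grow the record.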
So in the end, if you aren't really sure about the size now or in the future, let MongoDB handle the allocation.
If you are absolutely sure about the average size, prefill a dummy document to that average size and let Mongo handle the rest.
If you really know the max size and it will never, ever change, prefill a dummy document to the max size, with very tight padding.
Although, with that last option, I am not really sure it will help performance in a read/update/insert scenario, which is what you are suggesting and expecting. My thinking is that the powerOf2Sizes allocations are organized this way in order to match the chunks of data logically read from disk (which can also only be set in powers of 2).
The other advantage of powerOf2Sizes is the reuse of deleted data segments. Since they are "standardized", it is easy for Mongo to reuse those empty chunks.
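To illustrate why standardized sizes make reuse easy, here is a purely conceptual sketch (not MongoDB's actual storage code): when every record is one of a handful of power-of-2 sizes, freed slots fall into a few size buckets, and a new record can be matched to a free slot of exactly its allocation size. All names here are made up for the illustration:

```python
from collections import defaultdict

# Free slots, bucketed by their (standardized) allocation size in KB.
free_lists = defaultdict(list)

def free_record(slot_id, alloc_kb):
    """A deleted record leaves behind a free slot of its allocation size."""
    free_lists[alloc_kb].append(slot_id)

def allocate_record(alloc_kb):
    """Reuse an exact-size free slot if one exists, else grow the file."""
    if free_lists[alloc_kb]:
        return free_lists[alloc_kb].pop()
    return "new-extent-slot"

free_record("slot-A", 32)
free_record("slot-B", 256)
print(allocate_record(32))   # -> slot-A (exact-size reuse, no fragmentation)
print(allocate_record(64))   # -> new-extent-slot (no 64KB slot was free)
```

With arbitrary record sizes, freed space would rarely match a new record exactly, so reuse would be much harder.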
I can't wait for Asya or William to grade/correct my answer. I hope one of them will. :) They are the real experts.
Scott