OK. The fine-grain capability post was definitely tongue in cheek, but there are some serious points buried in there. How many bits might we actually need for page addresses?
Well, first, it turns out that most of the atoms in the universe are hydrogen atoms. There are a mere 2^72 silicon atoms in the observable universe. In current technology it takes somewhere between 2^9 and 2^30 atoms to store a single bit. IBM has a research result using 12 atoms, but it requires temperatures around -272C. Things improve with time, though, so let's take 2^8 atoms as a working number.
If we used every observable silicon atom in the universe to do it, we could construct a grand total of 2^64 bits (2^61 bytes) of storage, or 2^49 4k pages of storage.
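The arithmetic is easy to sanity-check. A quick sketch, using the 2^72-atom and 2^8-atoms-per-bit figures assumed above:

```python
# Working assumptions from above: 2^72 silicon atoms, 2^8 atoms per bit.
SILICON_ATOMS = 2**72
ATOMS_PER_BIT = 2**8
PAGE_BYTES = 4096

total_bits = SILICON_ATOMS // ATOMS_PER_BIT   # 2^64 bits
total_bytes = total_bits // 8                 # 2^61 bytes
total_pages = total_bytes // PAGE_BYTES       # 2^49 4k pages

assert total_bits == 2**64
assert total_pages == 2**49
```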
I am reasonably confident that we will not attach all of those pages to a single PC. Let's assume that there are two PCs. I will call them yours, and mine. Since I'm a fair-minded sort of person, let's split the total supply of storage equally. Each of us gets 2^48 pages of storage, which is 2^60 bytes, which is 2^20 terabytes, or 1K petabytes, or 1 exabyte.
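For the skeptical, the per-PC numbers check out the same way (same working assumptions as before):

```python
PAGE_BYTES = 4096
per_pc_pages = 2**49 // 2                 # 2^48 pages each
per_pc_bytes = per_pc_pages * PAGE_BYTES  # 2^60 bytes

assert per_pc_bytes == 2**60
assert per_pc_bytes // 2**40 == 2**20     # 2^20 terabytes
assert per_pc_bytes // 2**50 == 2**10     # 1K petabytes
assert per_pc_bytes // 2**60 == 1         # one exabyte
```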
At this point we're already down to 48 bits of OID. Estimates of total Amazon S3 storage notwithstanding, there is no chance that a single machine ever writes that much data. I'd feel fairly comfortable coming down to 40 bits of OID, which would reclaim 24 bits of the current 64-bit OID value, bringing the addressable storage down to 4.5 petabytes per machine.
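In case the 4.5 petabytes isn't obvious: 2^40 pages of 4k each is 2^52 bytes, which is about 4.5 x 10^15 bytes in decimal petabytes:

```python
PAGE_BYTES = 4096
addressable = 2**40 * PAGE_BYTES   # 2^52 bytes per machine
assert addressable == 2**52
# 2^52 bytes is roughly 4.5 decimal petabytes
assert round(addressable / 10**15, 1) == 4.5
```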
Meanwhile, we currently have 20 bits of allocation count. It may sound silly, but I think I'd be pretty comfortable declaring that a page "breaks" after 2^12 uses, after which you have to allocate a different page (because at this scale nobody is going to be doing any disk GC). That reclaims another 8 bits.
So yes, I think there are 32 bits to be gotten back out of the current capability structure. The question is whether we can re-pack the remaining bits in a way that lets us shrink the data structure. There's almost always a way, but I haven't attempted to look at that yet.
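One possible repacking, just to show the bits fit: a 40-bit OID plus a 12-bit allocation count is 52 bits, leaving 12 bits of a 64-bit word for whatever else the capability carries. The field widths are the ones floated above; the layout itself is purely illustrative, not a proposal for the actual structure:

```python
# Illustrative layout only: 40-bit OID in the low bits,
# 12-bit allocation count above it. 52 bits total.
OID_BITS = 40
COUNT_BITS = 12

def pack(oid, count):
    assert 0 <= oid < 2**OID_BITS and 0 <= count < 2**COUNT_BITS
    return (count << OID_BITS) | oid

def unpack(word):
    return word & (2**OID_BITS - 1), word >> OID_BITS

cap = pack(0x123456789A, 0xABC)
assert unpack(cap) == (0x123456789A, 0xABC)
assert cap < 2**(OID_BITS + COUNT_BITS)   # fits in 52 bits
```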
Jonathan