I think you have some sort of misunderstanding about the difference between "querying" over a billion rows, and "iterating" over a billion rows.
If you have a row for every post<->subscriber pair, an index over the subscriber, and you query for the posts a subscriber should see, the query will simply use the index and only go over the posts to which that user is subscribed. In other words, it will NOT go over billions of rows. In fact, if your index uses the post date, descending, as a second key, it will only go over the posts actually returned: instead of scanning millions of posts, you read about 20 or 50 (depending on how many posts you show per page).
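To make that concrete, here is a minimal sketch using pymongo; the `feed` collection and the `subscriber_id`/`post_date` field names are just placeholders for whatever your helper collection uses:

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient()
feed = client.mydb.feed  # one document per post<->subscriber pair

# Compound index: subscriber first, post date descending second,
# so a per-subscriber query walks newest-first and stops early.
feed.create_index([("subscriber_id", ASCENDING), ("post_date", DESCENDING)])

some_user_id = "user123"  # placeholder

# Fetch one page (say 20 items) of this user's wall. The index means
# MongoDB only touches the ~20 documents returned, not millions of posts.
page = list(
    feed.find({"subscriber_id": some_user_id})
        .sort("post_date", DESCENDING)
        .limit(20)
)
```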
Now, if you didn't have that collection and tried to compute things on demand, you would have to go over the latest 20 or 50 posts per subscribable, so if you have 5000 subscriptions you are now iterating over 250,000 posts for every page view!
The same holds if you used MySQL. Doing the query with a JOIN would give you a query that iterates over 250,000 posts (while locking both the subscriptions and posts tables), if not more, and that runs VERY slowly, easily taking anywhere from several seconds to several minutes (!) to complete.
On the topic of locks, you have mentioned them several times, which suggests a misconception about when and for how long collections are locked. The method I described creates practically no mutual exclusion. Querying is very fast, and the read locks held don't even block each other. The write lock used to update the helper collection is only held sporadically, for each row added, so it does not block queries or other operations either.
This is in contrast to MySQL, which would, in fact, lock the whole table for any such query. However, even in MySQL, a lock held on two tables for the duration of a JOIN creates MUCH greater exclusion than locks held sporadically during fast queries and insertions.
The fact is, you don't need to manage any special locking here. The queries are very simple: an index-based query for reads, and simple row-by-row insertion for writes (which doesn't even read any data).
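For the write side, a sketch of what that fan-out insertion could look like, with the same hypothetical collection and field names; each new post just appends one small document per subscriber, without reading anything:

```python
def fan_out_post(feed, post_id, post_date, subscriber_ids):
    """Append one feed document per subscriber for a newly created post.

    Each document is a blind insert; nothing is read back and no
    long-lived lock is taken, so reads keep running in between.
    """
    docs = [
        {"subscriber_id": sid, "post_id": post_id, "post_date": post_date}
        for sid in subscriber_ids
    ]
    if docs:
        feed.insert_many(docs, ordered=False)
```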
The REAL cost, as you have hinted, is that the helper collection would be quite large. This is a space-versus-performance trade-off: you either pay the cost in space, or you have a site that constantly freezes trying to aggregate on demand with no caching.
You can partially alleviate this by having the aggregated collection only hold references to the real posts, rather than actual content. This should reduce individual document sizes.
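As a sketch of what that could look like (names are placeholders again): the feed documents carry only post ids, and rendering a page does a second, id-based lookup against the real posts collection:

```python
def load_wall_page(feed, posts, subscriber_id, page_size=20):
    # Step 1: read the small reference documents from the feed.
    refs = list(
        feed.find({"subscriber_id": subscriber_id}, {"post_id": 1, "_id": 0})
            .sort("post_date", -1)
            .limit(page_size)
    )
    post_ids = [r["post_id"] for r in refs]

    # Step 2: fetch the full post bodies by primary key from the posts collection.
    by_id = {p["_id"]: p for p in posts.find({"_id": {"$in": post_ids}})}

    # Preserve the feed's newest-first ordering.
    return [by_id[pid] for pid in post_ids if pid in by_id]
```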
And don't forget that you can shard the collection by subscriber, and thereby have it stored in several data-centers, reducing the space (and CPU) load on each.
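Roughly, and assuming you already run a sharded cluster (connecting through a mongos router, with the same placeholder names as above), enabling that could look like:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://mongos-host:27017")  # a mongos router, not a plain mongod

# Enable sharding on the database, then shard the feed collection on the
# subscriber key, so each user's feed documents live together on one shard.
client.admin.command("enableSharding", "mydb")
client.admin.command("shardCollection", "mydb.feed", key={"subscriber_id": 1})
```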
As for the number crunching: you seem very worried about write throughput, but have you considered reads? Think about how often a user refreshes his wall. That is a lot more frequent than status posts being made, more so if you count things like AJAX queries to fetch new posts in real time (once a minute vs. 15 a day? No contest). Now imagine that on each of those refreshes, a page's worth of posts (again, about 20-50) has to be read from EACH of the feeds to which the user is subscribed.
Think of that Top Gear page you mentioned: instead of writing 1 post, 15 times a day, to each subscriber's feed, it now has 20-50 posts read from it hundreds, if not thousands, of times by each subscriber per day.
You can argue about the relative efficiency of reads vs. writes, but that difference is marginal next to such a huge difference in quantity. You'd basically be driving your CPU into the ground.
Now, admittedly, you can optimize the reads a bit by iterating through each subscribed feed more carefully, doing a form of sorted array merge. You would still be left with a LOT more rows read per day than the caching method writes.
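For comparison, a sketch of that on-demand merge, assuming posts are stored per feed with a hypothetical `feed_id` field: one indexed query per subscription, merged newest-first. Even in this optimized form it still touches a page of posts from every subscribed feed on every view:

```python
import heapq
import itertools

def build_wall_on_demand(posts, subscription_ids, page_size=20):
    # One indexed, newest-first query per subscribed feed.
    per_feed = [
        posts.find({"feed_id": fid}).sort("post_date", -1).limit(page_size)
        for fid in subscription_ids
    ]

    # Sorted merge across all feeds, newest first, keeping only one page.
    merged = heapq.merge(*per_feed, key=lambda p: p["post_date"], reverse=True)
    return list(itertools.islice(merged, page_size))
```

With 5000 subscriptions, that is still 5000 queries and up to 5000 * page_size documents pulled for a single page view.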
Incidentally: you CAN save some space by capping the maximum length of the news feed per user. That is, you can decide that a user cannot view news past a certain page, or news older than a certain date. Again: this saves space, not performance.
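If you cap by date, a TTL index can do the trimming for you automatically; a sketch, reusing the placeholder `feed` collection and `post_date` field, with an arbitrary 30-day window:

```python
from pymongo import MongoClient

feed = MongoClient().mydb.feed  # same helper collection as above

# Feed documents expire once post_date falls outside the window.
# This bounds the collection's size on disk; reads and writes cost the same.
feed.create_index("post_date", expireAfterSeconds=30 * 24 * 3600)
```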