What I was thinking is that the current system works well for getting
the data in a timely manner; you then process it, ultimately displaying
it through a portal and marking items read in the DB in order to hide
or fade them. If this is done on top of a system that could be as
simple as a list of URLs with a last-pulled time for each item, then
you're not overloading the browser or doing too much processing.
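To make that concrete, here's a minimal sketch of the idea: a plain list of URLs with a last-pulled timestamp, plus a set of read item IDs the portal uses to hide/fade things. All the names here (`dueFeeds`, `markRead`, the example URLs) are hypothetical, not an existing API, and the read set would of course live in the DB rather than in memory.

```javascript
// Hypothetical feed store: just URLs with a last-pulled timestamp.
const feeds = [
  { url: "http://example.com/feed.xml", lastPulled: 0 },          // never pulled
  { url: "http://example.org/rss", lastPulled: Date.now() },      // just pulled
];

// IDs of items the user has read; the portal fades/hides anything in here.
// In the real system this would be persisted in the DB.
const readItems = new Set();

// A feed is due for a pull when enough time has passed since lastPulled.
function dueFeeds(feedList, now, intervalMs) {
  return feedList.filter((f) => now - f.lastPulled >= intervalMs);
}

function markRead(itemId) {
  readItems.add(itemId);
}

const due = dueFeeds(feeds, Date.now(), 15 * 60 * 1000); // 15-minute interval
markRead("item-42");
```

The point is how little state is needed: the pull loop only ever touches the URL list, and the display layer only ever touches the read set.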
In this regard, we're lucky to have a community that will ultimately
build on the system and add complexity. I think it's essential to lay
the foundation and then build on top of it until you reach a point
where you can rebuild completely and make educated choices based on
the experience of the previous system. For example, pulling the feeds
with a secondary process rather than with firefox.exe would avoid
bogging the browser down (especially if you have notifications
enabled).
I feel that if a feed hasn't been updated in more than a year, you'd
handle that easily enough: you'd simply match its ID against an array
and have it skipped nine times out of ten.
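A sketch of that skip logic might look like the following. The names (`isStale`, `shouldCheck`, the one-pass-in-ten rule keyed off a running pass counter) are assumptions about how one could implement the "skipped 9/10 times" behaviour, not a description of any existing code.

```javascript
const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;

// A feed is considered stale if it hasn't published anything in over a year.
function isStale(feed, now) {
  return now - feed.lastUpdated > ONE_YEAR_MS;
}

// Returns true when the feed should be polled on this pass of the checker.
// Fresh feeds are always polled; stale ones only on every tenth pass,
// i.e. they get skipped 9 times out of 10.
function shouldCheck(feed, now, passCount) {
  if (!isStale(feed, now)) return true;
  return passCount % 10 === 0;
}
```

The nice property is that a stale feed that starts publishing again is picked up automatically on its next scheduled tenth pass, and from then on it is polled every pass.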
I agree that it gets complicated once you want to allow people to hook
into the system, and perhaps that's when local storage of the data is
needed. Perhaps the best option then would be to upload the rendered
page to the cloud. Then, depending on the kind of requests there are
for APIs, you could break the page down into smaller chunks so that an
add-on can access whatever it needs.
As you point out, the possibilities opened up by PubSubHubbub are
exciting, and having real-time feeds would be amazing, but in terms of
adoption I imagine it'll be a while before that really kicks off. Are
places like WordPress and Blogspot shipping with it enabled by
default? If so, that may cut adoption time significantly. In that
case, you could have the feed tab listen for updates and add them as
they arrive. At that point it becomes impossible to do it all in
memory and you'd have to use local storage. However, it's easy to stop
your feed checker from polling those sites: they'd simply populate
another array the checker consults, so it would see that it doesn't
have to check feed XXX.
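That "another array to check against" could be as simple as the sketch below. The function names (`onPushSubscribe`, `feedsToPoll`) and example URLs are hypothetical; the real hook would fire when the hub confirms a PubSubHubbub subscription.

```javascript
// URLs of feeds that now arrive by push, so the poller can skip them.
const pushSubscribed = new Set();

// Called once the hub confirms a real-time subscription for a feed.
function onPushSubscribe(feedUrl) {
  pushSubscribed.add(feedUrl);
}

// The poller only fetches feeds that are not receiving pushes.
function feedsToPoll(allFeedUrls) {
  return allFeedUrls.filter((url) => !pushSubscribed.has(url));
}

onPushSubscribe("http://blog.example/feed"); // hub confirmed this one
```

Polling and push then coexist cleanly: the checker's loop stays unchanged, it just sees a shorter list.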
In regards to desktop newsreaders, it's harder to argue for them
against cloud-based ones in this day and age, where everyone is trying
to move everything to the cloud. But people do like to be in control
of their data and interests. Take a look at TweetDeck: there are
alternatives available in the cloud, but people want something
tangible. And at a quick glance, there are over 4,000 weekly downloads
by Firefox users of add-ons that enhance Google Reader, and Feedly is
on its way to a million downloads. There is no question, in my
opinion, that people want more than what Firefox gives them and would
welcome the enhancement to Firefox.
You're onto something with private feeds. I'm not sure of the
implementation, but I can definitely think of ways you could interact
with secure feeds in the browser that you can't, at least not at the
same level, in a cloud-based solution, especially if, for example, a
feed is encrypted.
You've clearly given this some thought and have managed to come quite
far with it. Initially some of your ideas sounded silly, but as I went
through the post and thought about how things would work, I could see
exactly where you're coming from, and I applaud the forethought. I'm
going to mull further on the idea of APIs, as I imagine some users
will want their feeds on things like iGoogle. So you're right, it does
require greater thought from the perspective of extensibility.
However, I do think that although it's not a complete solution,
Firefox Home/Weave/Sync will provide an amazing amount of external
usage.