
Allowing Web Services to Handle Downloaded Content


Ben Goodger

Apr 12, 2006, 4:03:31 PM
Firefox2 Feed Handling functionality raises an interesting question:

Feeds come in various content types: application/atom+xml,
application/rdf+xml, etc.

Sometimes they are served through links with custom protocol schemes,
e.g. feed:

Feed readers are sometimes client apps that are registered to handle
content of specific types or loaded via specific protocols.

But feed readers are also sometimes web apps. There is currently no way
to register a URI as a handler for files. This is the first fairly
significant issue, because the current system is predicated on a file
being transferred to the local disk and loaded with some application.
With a web handler, a direct connection from handler to source is
preferred but might not always be possible (especially if the data is
behind a login). So there is an interesting bunch of questions there
that are worth discussing.

We can solve this pretty easily for feeds by just requiring that the URL
be publicly available, since that's the assumption feed readers operate
under at the moment. But what about more generic file viewing? There are
lots of MS Office files etc. behind login screens.

Next, and this is the more immediately-Firefox2-critical part of my set
of questions: assuming we have a system for dispatching content to web
services, even a pretty hacky one just for RSS content for Firefox2, how
should the list of available choices be populated? We can ship with some
defaults, but we may also want to let new web services register their
service URLs for specific content types.

Would this be a new JS call exposed to web pages, like addEngine()? Or
something else?
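To make the question concrete, here is a rough sketch of what such a
registry could look like. Everything here is hypothetical: the
registerContentHandler name, the %s placeholder convention, and the
example URLs are illustrative assumptions, not a shipped or proposed API.

```javascript
// Hypothetical sketch of an addEngine()-style registration call:
// a page registers a service URL for a MIME type, and the browser
// later dispatches matching content to it by substituting the
// content's URL into a %s placeholder.

const handlers = new Map(); // content type -> { urlTemplate, title }

function registerContentHandler(mimeType, urlTemplate, title) {
  // The template must contain %s, which is replaced with the
  // escaped URL of the content being handled.
  if (!urlTemplate.includes("%s")) {
    throw new Error("handler URL must contain a %s placeholder");
  }
  handlers.set(mimeType, { urlTemplate, title });
}

function dispatch(mimeType, contentUrl) {
  const h = handlers.get(mimeType);
  if (!h) return null; // no web handler: fall back to download-and-open
  return h.urlTemplate.replace("%s", encodeURIComponent(contentUrl));
}

registerContentHandler(
  "application/atom+xml",
  "https://reader.example.com/subscribe?url=%s",
  "Example Reader"
);

// Produces the handler URL with the feed URL escaped into the %s slot.
console.log(dispatch("application/atom+xml", "http://example.org/feed.atom"));
```

Note this only covers the "hand the handler a URL" model; it does nothing
for content behind a login, which is the harder case discussed above.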

-Ben

Robert Sayre

Apr 12, 2006, 5:33:40 PM
to Ben Goodger
Ben Goodger wrote:
>
> We can solve this pretty easily for feeds by just requiring that the URL
> be publicly available, since that's the assumption feed readers operate
> under at the moment.

Most popular feed readers can handle HTTP authentication and TLS these
days. For example, if you clear your Google cookies and visit the URI

<https://mail.google.com/mail/feed/atom/>

you'll see a Basic Authentication popup. This sort of thing is common
enough that you get bug reports filed if you don't implement it in a
desktop aggregator. I just checked Newsgator Online, and it seems they
have a credentials field for feeds there. I guess this means users will
be entering credentials twice...
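For reference, the Basic Authentication handshake Rob describes is just a
header the aggregator attaches to its feed fetches. A minimal sketch (the
credentials and URL are placeholders):

```javascript
// Building the Authorization header a desktop aggregator would send
// when fetching a feed protected by HTTP Basic Authentication:
// "Basic " followed by base64("user:pass").

function basicAuthHeader(user, pass) {
  const token = Buffer.from(`${user}:${pass}`).toString("base64");
  return `Basic ${token}`;
}

// The fetch of a protected feed would then look like:
//   GET /mail/feed/atom/ HTTP/1.1
//   Authorization: Basic <token>
console.log(basicAuthHeader("alice", "secret"));
```

A web-service handler has no way to attach such a header on the user's
behalf unless the user hands it their credentials, which is the
double-entry problem mentioned above.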

For content handling via web services, I have to wonder what sort of
attack vectors might be opened up, allowing Firefox to be exploited as a
trusted intermediary.

Rob

Ben Goodger

Apr 13, 2006, 12:36:23 PM
Ben Goodger wrote:
> Next, and this is the more immediately-Firefox2-critical part of my set
> of questions: assuming we have a system for dispatching content to web
> services, even a pretty hacky one just for RSS content for Firefox2, how
> should the list of available choices be populated? We can ship with some
> defaults, but we may also want to let new web services register their
> service URLs for specific content types.
>
> Would this be a new JS call exposed to web pages, like addEngine()? Or
> something else?

Addendum:

Hixie added support for this to the WHATWG Web Applications 1.0 spec:

http://whatwg.org/specs/web-apps/current-work/#browser

Feedback encouraged!

-Ben

Christian Biesinger

Apr 13, 2006, 5:53:45 PM
Ben Goodger wrote:
> http://whatwg.org/specs/web-apps/current-work/#browser
>
> Feedback encouraged!

I sent feedback to the WhatWG list:
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2006-April/006231.html

I'm not sure what to think of a spec that says "User agents may do
whatever they like when the methods are called"...

Ben Goodger

Apr 21, 2006, 3:43:19 PM
Some more thoughts, summarizing a discussion on #developers:

- There are two basic open modes for such links: view once, and
subscribe. With view once, the content is transferred once. With
subscribe, the content is potentially accessed numerous times at
arbitrary intervals into the future (shaver)

- Relying on a web service to download the file is not always good
enough, since the content may be behind a login, and that login may not
be something as easy to navigate as basic auth.

- One solution is to download the data as today, and once the download
is complete open a new window with the web service handler and upload
the file to it with a POST (biesi)

- One upside is that forcing data to be publicly available means that
you can never accidentally upload confidential material to a web service.
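Biesi's download-then-POST idea amounts to the browser constructing a
multipart/form-data upload from the completed download and submitting it
to the handler page. A rough sketch of the body construction (field and
file names, the boundary string, and the sample feed are all invented for
illustration):

```javascript
// Sketch of the download-then-upload approach: after the download
// completes, the browser opens the web handler in a new window and
// POSTs the file to it as multipart/form-data, rather than passing
// the source URL (which the service might not be able to fetch).

function buildMultipartBody(fieldName, filename, contentType, data, boundary) {
  return [
    `--${boundary}`,
    `Content-Disposition: form-data; name="${fieldName}"; filename="${filename}"`,
    `Content-Type: ${contentType}`,
    "",
    data,
    `--${boundary}--`,
    "", // trailing CRLF after the closing boundary
  ].join("\r\n");
}

const boundary = "----handler-upload-7d9f2c";
const body = buildMultipartBody(
  "content",
  "feed.atom",
  "application/atom+xml",
  "<feed xmlns='http://www.w3.org/2005/Atom'/>",
  boundary
);

// The request itself would then carry:
//   POST <handler URL>
//   Content-Type: multipart/form-data; boundary=----handler-upload-7d9f2c
console.log(body.split("\r\n")[0]); // first line is the opening boundary
```

This handles the login case (the browser already has the session), but it
is a one-shot transfer, so it fits the "view once" mode much better than
"subscribe".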

It is possible that we may be able to develop two APIs for this, or make
the existing one richer. I am focused on 2.0 features right now, and RSS
subscription is one of them, so handling the subscription case is
important to me in the immediate term.

bz also says now is the time to design URILoader changes for 3.0, and I
would like this system to be as generically useful as possible, not just
useful to feeds.

-Ben
