The case for Cross-Origin Webfinger


Michiel de Jong

Jun 22, 2011, 1:02:50 PM
to webf...@googlegroups.com
Hi!

My name is Michiel; I work on the Unhosted project. We use Webfinger
in our protocol stack (http://unhosted.org/spec/dav/0.1). The trouble
is that since our web apps all run in the browser (with HTML5,
JavaScript, and Ajax), they cannot read any of the host-meta and lrdd
files provided by mainstream webfinger providers.
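
To make it concrete, the lookup we would like to do from inside the browser is just this (a Python sketch of the discovery flow; the hostname, the XRD payload, and the acct URI are made-up examples):

```python
from urllib.parse import quote
from xml.etree import ElementTree as ET

XRD_NS = "{http://docs.oasis-open.org/ns/xri/xrd-1.0}"

def host_meta_url(domain):
    # Step 1: host-meta lives at a well-known location on the user's domain.
    return "https://%s/.well-known/host-meta" % domain

def lrdd_template(xrd_xml):
    # Step 2: the host-meta XRD carries an lrdd Link with a URI template.
    root = ET.fromstring(xrd_xml)
    for link in root.findall(XRD_NS + "Link"):
        if link.get("rel") == "lrdd":
            return link.get("template")
    return None

def lrdd_url(template, acct_uri):
    # Step 3: substitute the percent-encoded acct URI into the template.
    return template.replace("{uri}", quote(acct_uri, safe=""))

# Made-up example provider:
xrd = """<?xml version="1.0"?>
<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <Link rel="lrdd" template="https://example.org/webfinger?q={uri}"/>
</XRD>"""
print(host_meta_url("example.org"))
print(lrdd_url(lrdd_template(xrd), "acct:alice@example.org"))
```

Both of these are plain GET requests for public documents; it is only the browser's same-origin policy that keeps client-side JavaScript from making them.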

This is sad for us, and as far as I know, unnecessary.

So my question to this mailing list is: under which conditions would
you agree that there is no security risk in allowing host-meta and
lrdd files to be retrieved from within the browser, by adding an

Access-Control-Allow-Origin: *

HTTP header to them?

Browsers have evolved to require this HTTP response header before they
believe that a cross-origin GET request will (A) only return public
information, and (B) not have any side effects. With this header, the
resource is basically saying: "This content is OK to retrieve; it's
public, and doesn't depend on any browser cookies or other
credentials."
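
To make the server side concrete, here is a minimal sketch of a host-meta endpoint that sends the header (Python WSGI, chosen purely for illustration; the XRD body is a made-up example):

```python
def host_meta_app(environ, start_response):
    # A static host-meta document; a real provider would serve its own
    # lrdd template here.
    body = (b'<?xml version="1.0"?>\n'
            b'<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">\n'
            b'  <Link rel="lrdd" template="https://example.org/wf?q={uri}"/>\n'
            b'</XRD>\n')
    headers = [
        ("Content-Type", "application/xrd+xml"),
        # The one header this thread is about: mark the document as
        # public so browsers will expose it to cross-origin JavaScript.
        ("Access-Control-Allow-Origin", "*"),
    ]
    start_response("200 OK", headers)
    return [body]
```

Any WSGI server (e.g. `wsgiref.simple_server`) could serve this at /.well-known/host-meta; note that the response sets no cookies and requires none.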

Not having this Cross-Origin Resource Sharing (CORS) header on webfinger
resources leads to an absurd situation. Say you develop a mobile app
and have to choose between going web and going native. Webfinger will
sadly make this choice very easy, because webfinger works with
everything except (client-side) HTML5 web apps!

I think this should be fixed. Webfinger is cross-origin in its very
nature, and its information is public, so that's what its HTTP
response headers should reflect, right?

Many thanks for any replies to this thread; they will be a great help
to me, both to correct me on the points where I'm wrong (I'm very new
to all this!) and to help bring webfinger to the (client-side) web.


Cheers!
Michiel

Paul E. Jones

Jun 22, 2011, 11:28:58 PM
to webf...@googlegroups.com, Michiel de Jong
Michiel,

Why not issue the query to your server and have it proxy the requests on
behalf of the client? Not being able to query various sites to get metadata
is not a limitation of Webfinger, after all. A very simple proxy could
address the issue.
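
Such a proxy could be as small as this (a sketch only; the query-parameter name is an assumption, and there is no caching or error handling):

```python
import urllib.request
from urllib.parse import parse_qs

def upstream_url(domain):
    # The proxy only ever fetches the well-known host-meta location,
    # so clients cannot use it to reach arbitrary URLs.
    return "https://%s/.well-known/host-meta" % domain

def proxy_app(environ, start_response):
    # Same-origin WSGI endpoint, e.g. GET /hostmeta?domain=example.org
    # (the 'domain' parameter name is hypothetical, not a standard).
    qs = parse_qs(environ.get("QUERY_STRING", ""))
    domain = qs.get("domain", [""])[0]
    with urllib.request.urlopen(upstream_url(domain)) as resp:
        body = resp.read()
    start_response("200 OK", [("Content-Type", "application/xrd+xml")])
    return [body]
```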

Paul

Michiel de Jong

Jun 23, 2011, 2:52:10 AM
to Paul E. Jones, webf...@googlegroups.com
Hi Paul,

Thanks for your reply!

On Thu, Jun 23, 2011 at 5:28 AM, Paul E. Jones <pau...@packetizer.com> wrote:
> Why not issue the query to your server and have it proxy the requests on
> behalf of the client?

Short answer: that's a work-around, not a solution. And it has drawbacks.

Long answer:
* Using a proxy to route all webfinger traffic through is like
creating a centralized crossroads through which all traffic must pass,
even if it's one proxy per webfinger-consuming web app.
* It introduces an extra hop both on the way out and on the way back,
which will increase RTT and lead to a less snappy user experience in
the app.
* It might become a bottleneck: if the proxy is down, then it
unnecessarily takes down all apps running in clients at that time.
* It's against the decentralized design of the web. We should use only
DNS to guide us to the right hosts when looking for information.
* It's ugly. (This one's my only real argument. ;)
* As far as I understand from http://www.w3.org/TR/cors, it's unnecessary.

We're trying to make web apps that "live" on the client side, while
depending as little as possible on the server that hosts their
application code (HTML, CSS, JS). That's why our project is called
'unhosted'. Looking at, for instance, the HTML5 app cache manifest,
client-side functionality seems to be where the web is going. No more
'every click is a page load, and the client only renders it to capture
the next click'. Interaction between the app and the user is moving
more and more away from the server and towards the client.

I can understand your point of view. If someone said "hey, I want
webfinger to be available inside PDF documents on the JVM", then I
would have the same reaction as you: use the web and proxy it. But
this *is* the web. It's just client-side web technology instead of
server-side web technology.

And webfinger is very much about the web, so why not make the effort
of putting in one extra HTTP response header and make webfinger
compatible with client-side web apps?

Sure, we can use a proxy, and that's probably what we'll do next if
this doesn't get fixed. There will probably even be public,
centralized proxies that will be very handy, because you can ask them
for any webfinger from any host and they'll fetch it for you, with
the CORS headers added on. But that's exactly what we don't want to
happen, because it goes against the architecture of the web.

I hope this explains my point of view a bit better?

Cheers!
Michiel

Paul E. Jones

Jun 23, 2011, 11:58:45 AM
to webf...@googlegroups.com, Michiel de Jong
Michiel,

That certainly makes sense, and I've pondered the same issue before. Part
of the problem is a technology issue: cross-site requests were
considered a no-no for so long that browsers do not all fully
support the capability. Part of the problem, too, is that those who will be
providing a webfinger service might be anybody on the Internet. I have it
enabled on packetizer.com, for example. (Not sure if it's 100% right, as
I've not touched the code in a year... but something is there.) We would
want everybody to implement webfinger on their domains, and no matter what
recommendation we might make, there will be some who will not return headers
that help facilitate cross-domain requests.

That's not to say I disagree with your desire; I agree with it, actually. I'm just
very skeptical that everybody would do it. However, if there is a critical
service that breaks because it needs this, that might compel people to
implement CORS.

So, the recommendation you wish to make is that for the host-meta document
and the lrdd link relation URL, you wish the requests to return this header:

Access-Control-Allow-Origin: *

Correct? Do you want to require that OPTIONS be supported to allow for
preflighted requests?

Paul


Michiel de Jong

Jun 23, 2011, 12:28:40 PM
to Paul E. Jones, webf...@googlegroups.com
On Thu, Jun 23, 2011 at 5:58 PM, Paul E. Jones <pau...@packetizer.com> wrote:
> We would
> want everybody to implement webfinger on their domains and no matter what
> recommendation we might make, there will be some who will not return headers
> that help facilitate cross-domain requests.

I think if big providers set the example, then the small ones will follow suit eventually.
 
> That's not to say I disagree with your desire.  I do, actually.  I'm just
> very skeptical that everybody would do it.  However, if there is a critical
> service that breaks because it needs this, that might compel people to
> implement CORS.


Yeah, we are building these this year: web apps that run on the client side, and which you log in to with your webfinger.
 
> So, the recommendation you wish to make is that for the host-meta document
> and the lrdd link relation URL, you wish the requests to return this header:
>
>   Access-Control-Allow-Origin: *
>
> Correct?

Yes. That's all there is to it.
 
> Do you want to require that OPTIONS be supported to allow for
> preflighted requests?


No, preflights are not necessary here, because webfinger uses only GET requests. So it's really, literally, a one-line change to your webfinger implementation.

The important thing we need first is some sort of consensus or blessing that this is OK and that we're not overlooking something here. Thank you for your help so far!


Cheers,
Michiel.

Eran Hammer-Lahav

Jun 24, 2011, 1:05:17 PM
to webf...@googlegroups.com, Paul E. Jones
There should not be any security issue as long as the endpoint does NOT take cookies into account (which it should not) and is not an intranet server.

However, adding CORS headers as a default is a bad security practice.

EHL

Paul E. Jones

Jun 24, 2011, 2:54:49 PM
to webf...@googlegroups.com

Yeah, but he has a valid point. Either one needs a means of getting around the same-origin policy, or one needs to proxy requests. For specific services like webfinger, this might be an example where it is generally considered acceptable. I'm just not sure what risks this might introduce in this case. What might they be?

 

Paul

Raphael Sofaer

Jun 24, 2011, 11:14:28 PM
to webf...@googlegroups.com
I think there are a couple of places where Diaspora's responsiveness could be enhanced with this, and it seems appropriate for content like host-meta and webfinger anyhow. I'll implement it this weekend unless there are security concerns. The JS context can't access any cookies set on the site that the cross-origin request goes to, right?

Raphael

Eran Hammer-Lahav

Jun 30, 2011, 1:06:39 PM
to webf...@googlegroups.com
At a minimum you want to follow two rules for enabling CORS: don't use cookies when serving host-meta documents, and don't enable CORS on servers inaccessible from the internet.
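
Those two rules can be approximated in code. A sketch (the intranet check below only catches names that resolve to private or loopback addresses; split-horizon DNS can still fool it, so treat it as a heuristic):

```python
import ipaddress
import socket

def is_internet_reachable(hostname):
    # Rule 2 heuristic: refuse CORS if the name resolves to a private
    # or loopback address.
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(hostname))
    except OSError:
        return False
    return not (addr.is_private or addr.is_loopback)

def host_meta_headers(hostname):
    # Rule 1: a host-meta response never sets or reads cookies, so
    # there is deliberately no Set-Cookie here.
    headers = [("Content-Type", "application/xrd+xml")]
    if is_internet_reachable(hostname):
        headers.append(("Access-Control-Allow-Origin", "*"))
    return headers
```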

EHL

Michiel de Jong

Jul 11, 2011, 8:59:12 AM
to webf...@googlegroups.com
Hi!

Quick update: the response so far has been impressive; I had to cry a little bit every time a provider responded to this appeal. :) Thanks so much, all you web openers! We are really building something amazing here, and it is happening in a decentralized way, as it should on the web.

The following important providers have now adapted their Webfinger implementation to help us move towards 'a truly open access across domain-boundaries' as http://enable-cors.org/ puts it:

Packetizer.com
Diaspora*
WordPress (plugin by Pfefferle)
StatusNet (configurable; should be left disabled for intranet instances)

On Thu, Jun 30, 2011 at 7:06 PM, Eran Hammer-Lahav <er...@hueniverse.com> wrote:
> At a minimum you want to follow two rules for enabling CORS: don't use cookies when serving host-meta documents, and don't enable CORS on servers inaccessible from the internet.
>
> EHL

Great! Thanks for this confirmation.

So... who's next? :)


Cheers,
Michiel

Mike Macgirvin

Jul 11, 2011, 8:55:05 PM
to WebFinger
We put this in Friendika a while back. I wasn't aware anybody was making
a list, and I'm a bit curious about where this list actually resides, as I
don't see *any* of those names on http://enable-cors.org.

Kingsley Idehen

Jul 12, 2011, 4:50:46 AM
to webf...@googlegroups.com, Michael Hausenblas
I've cc'd Michael on this mail. He handles the enable-cors.org site.

--

Regards,

Kingsley Idehen
President & CEO
OpenLink Software
Web: http://www.openlinksw.com
Weblog: http://www.openlinksw.com/blog/~kidehen
Twitter/Identi.ca: kidehen

Kingsley Idehen

Jul 12, 2011, 9:07:01 AM
to webf...@googlegroups.com
FYI:

On 7/12/11 9:54 AM, Michael Hausenblas wrote:
>
> Thanks Kingsley!
>
> Mike,
>
> So this is really crowd-sourcing. If you wanna add stuff, I suggest
> you either let me know directly (via or Twitter/Google+) or simply add
> it to http://www.w3.org/wiki/CORS_Enabled where I get notified, OK?
> @Kingsley not sure if I'm subscribed to the ML, can you forward pls in
> case it doesn't go through?
>
> Cheers,
> Michael
> --
> Dr. Michael Hausenblas, Research Fellow
> LiDRC - Linked Data Research Centre
> DERI - Digital Enterprise Research Institute
> NUIG - National University of Ireland, Galway
> Ireland, Europe
> Tel. +353 91 495730
> http://linkeddata.deri.ie/
> http://sw-app.org/about.html
