[chromium-dev] Implementor interest in a W3C WebApps proposal


Adam Barth

Apr 19, 2010, 1:23:13 PM
to Chromium-dev
Recently on the W3C's public-webapps mailing list, Anne van Kesteren
asked about implementor interest in a particular specification. I
haven't replied because I don't want to speak for the project. Who
are the right folks to ask for an opinion?

Thanks,
Adam


Jeremy Orlow

Apr 19, 2010, 2:16:17 PM
to aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Darin, Alex, and Dimitri are the web platform leads and work on prioritization of what Googlers work on in terms of new web platform features.  That said, I'd rather things be brought up on the list so others can weigh in and, assuming it's not shot down, it might get done faster than if it just went into the normal pipeline (i.e., waiting for someone to decide they want to implement it).
 
J

Adam Barth

Apr 19, 2010, 2:25:16 PM
to Jeremy Orlow, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Ok. Here's the email in question:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0171.html

The question appears to be mainly whether we want a new API for accessing a
profile of cross-origin XMLHttpRequest that might have better security
properties. IMHO, it doesn't matter that much because the same
functionality will be there either way.

Adam

Tyler Close

Apr 19, 2010, 5:32:56 PM
to aba...@chromium.org, Jeremy Orlow, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
The Uniform Messaging Policy (UMP) is a proposal by Mark Miller and
me. The latest editor's draft is at:

http://dev.w3.org/2006/waf/UMP/

The elevator pitch is that CORS enables cross-origin messaging by
letting servers poke more holes in the Same Origin Policy, but does
little to help developers avoid the CSRF-like vulnerabilities inherent
in doing so. The UMP provides a security model for doing cross-origin
messaging without CSRF-like vulnerabilities (aka Confused Deputy).
I've been advocating for this functionality for some years now and
CORS has moved in that direction somewhat with its "withCredentials"
flag. This part of CORS is still underspecified though. UMP provides a
clear and succinct definition of the needed functionality.

Even if you hope CORS will adopt more of UMP over time, expressing
support for UMP could encourage that outcome.

I'm glad to answer any questions you may have.

--Tyler

Jeremy Orlow

Apr 19, 2010, 5:42:34 PM
to ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Oops...Yes, Ian is part of the "web platform leads" group as well.  :-)

2010/4/19 Ian Fette (イアンフェッティ) <ife...@google.com>
*cough* you're forgetting someone :)

That said, I would like to take a look at the objections raised by Maciej et al from Apple, as we would likely have to address them if we wanted to implement in Chrome. Does anyone care to summarize?

Tyler Close

Apr 19, 2010, 6:05:44 PM
to jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
My understanding is that Maciej does not claim there are any significant
technical barriers to implementing UMP. Indeed, he says such support
may arise by coincidence. The relevant email is at:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0043.html

I believe his main technical concern is that any UMP implementation in
WebKit should share code with the CORS implementation. I haven't
looked at the CORS implementation in WebKit, but there's nothing in
the spec that should require a wholly independent implementation.

--Tyler

Ojan Vafai

Apr 22, 2010, 6:36:12 PM
to tjc...@google.com, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
For the record, I agree with the general sentiment expressed by Maciej/Anne that we want to implement CORS and, given that, there's no benefit to UMP being a different spec. Keeping UMP as a separate spec carries the possible downside of the APIs diverging over time. The sticking points with folding UMP into CORS seem relatively straightforward to me (as straightforward as any web API discussions are) and are worth working through for the assurance of consistent APIs. There's the argument for defining CORS in terms of UMP, but I don't see the benefit of doing so, especially as it makes implementors' lives more difficult.

Ojan

Tyler Close

Apr 22, 2010, 7:33:22 PM
to Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Hi Ojan,

On Thu, Apr 22, 2010 at 3:36 PM, Ojan Vafai <oj...@google.com> wrote:
> For the record, I agree with the general sentiment expressed by Maciej/Anne
> that we want to implement CORS and, given that, there's no benefit to UMP
> being a different spec.

Since you're so confident that we want to implement CORS I suppose you
must have a strategy for explaining to developers how to avoid
Confused Deputy vulnerabilities when using CORS. As I've explained on
the list, there are several natural ways to use CORS that cause
Confused Deputy vulnerabilities. See:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0258.html

> Having UMP as a different spec leaves the possible
> downside to having the APIs diverge over time. The sticking points with UMP
> folding into CORS seem relatively straightforward to me (as straightforward
> as any web API discussions are) and are worth the assurance of consistent
> APIs. There's the argument of defining CORS in terms of UMP, but I don't see
> the benefit of doing so, especially as it makes implementors lives more
> difficult.

Implementers are a vocal bunch and will keep the APIs from diverging
if that's what they want. I think that's not a major concern.

The lives of implementers are also a much lesser concern than the
lives of Web application developers. We owe application developers an
easily understood spec. Burying UMP inside the substantial complexity
of CORS doesn't help application developers.

--Tyler

Ojan Vafai

Apr 22, 2010, 9:15:29 PM
to Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
I don't have anything to add that hasn't already been said on public-webapps. I find Maciej's description at http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0481.html convincing.

Ojan

Tyler Close

Apr 23, 2010, 12:40:41 AM
to Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 6:15 PM, Ojan Vafai <oj...@google.com> wrote:
> I don't have anything to add that hasn't already been said on
> public-webapps. I find Maciej's description at
> http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0481.html convincing.

In that email, Maciej claims developers can follow a DBAD programming
discipline to avoid Confused Deputy vulnerabilities. He describes that
discipline as:

"""
[1] To recap the DBAD discipline:

Either:
A) Never make a request to a site on behalf of a different site; OR
B) Guarantee that all requests you make on behalf of a third-party
site are syntactically different from any request you make on your own
behalf.

In this discipline, "on behalf of" does not necessarily imply that the
third-party site initiated the deputizing interaction; it may include
requesting information from a third-party site and then constructing a
request to a different site based on it without proper checking. (In
general proper checking may not be possible, but making third-party
requests look different can always be provided for by the protocol.)
"""

A) basically says: "don't do cross-site messaging". Since the request
is going cross-site, at least some of the data, such as the target
URL, is determined by the target host rather than the sending host.
Maciej notes that checking this data is not always possible, let alone
something that developers can easily and reliably do. A) is also
insufficient to guard against all Confused Deputy problems since it
ignores what you do with the response data. I don't understand how B)
works. Perhaps you could explain it since you were convinced by it.

Darin Fisher

Apr 23, 2010, 1:22:41 AM
to Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
Hi Tyler,

A couple of things to note for background's sake:

1) It is our goal that Chrome and Safari should not diverge in web platform behavior.
2) Maciej is a very influential member of the WebKit and web standards communities.

Therefore, I think Maciej would need to be convinced before Chrome would ship UMP.

I confess that I don't have a good enough understanding of UMP vs CORS yet to comment intelligently on the subject.  I need to do some reading and educate myself better.  Having read some of what has been linked from this thread, I still feel that I am missing some background information.

Regards,
-Darin

Dirk Pranke

Apr 23, 2010, 1:26:45 PM
to da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
> Hi Tyler,
> A couple things to note for background sake:
> 1) It is our goal that Chrome and Safari should not diverge in web platform
> behavior.
> 2) Maciej is a very influential member of the WebKit and web standards
> communities.
> Therefore, I think Maciej would need to be convinced before Chrome would
> ship UMP.
> I confess that I don't have a good enough understanding of UMP vs CORS yet
> to comment intelligently on the subject.  I need to do some reading and
> educate myself better.  Having read some of what has been linked from this
> thread, I still feel that I am missing some background information.

A few months ago I was up on the differences between the two and
relatively convinced that UMP was the way to go and CORS should
probably be discouraged. Of course, I've forgotten everything since
then :)

I would attempt to summarize something now, but I fear I would get it
wrong and just confuse things, so I will instead try to refresh my
memory today and see if I can send out a better summary of the two
protocols and the tradeoffs.

-- Dirk

Adam Barth

Apr 23, 2010, 2:12:18 PM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Maciej Stachowiak
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> Hi Tyler,
>> A couple things to note for background sake:
>> 1) It is our goal that Chrome and Safari should not diverge in web platform
>> behavior.
>> 2) Maciej is a very influential member of the WebKit and web standards
>> communities.
>> Therefore, I think Maciej would need to be convinced before Chrome would
>> ship UMP.
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.

I've been avoiding commenting this round because I gave Tyler and co a
lot of feedback on this topic last round. My current read is as
follows:

1) UMP should be / is a subset of CORS.
2) Developers who use UMP might create more secure applications.
3) CORS has already shipped in a number of browsers, so user agent
implementors don't want to remove the feature.
4) Having UMP in a separate document makes it easier to understand
which parts of CORS are in UMP.
5) User agent implementors don't want to have two independent
implementations because the internal mechanisms are largely the same.
6) User agent implementors want to read one document to tell them how
to build their one implementation of CORS+UMP, complete with
instructions on where to put the various if statements.

Putting these together, it looks like we want a separate UMP
specification for web developers and a combined CORS+UMP specification
for user agent implementors. Consequently, I think it makes sense for
the working group to publish UMP separately from CORS but have all the
user agent conformance requirements in the combined CORS+UMP document.

(There's also some debate about what API should trigger the UMP
subset, but that's mostly aesthetics as far as I can tell.)

Adam

Dirk Pranke

Apr 23, 2010, 3:02:07 PM
to Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Maciej Stachowiak
On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
> 3) CORS has already shipped in a number of browsers, so user agent
> implementors don't want to remove the feature.

For completeness, can you say where it has actually shipped already?

-- Dirk

Tyler Close

Apr 23, 2010, 5:54:47 PM
to Darin Fisher, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
> Hi Tyler,
> A couple things to note for background sake:
> 1) It is our goal that Chrome and Safari should not diverge in web platform
> behavior.
> 2) Maciej is a very influential member of the WebKit and web standards
> communities.
> Therefore, I think Maciej would need to be convinced before Chrome would
> ship UMP.

Maciej has clearly and consistently maintained that he is not against
shipping UMP, so long as UMP is a subset of CORS. So I think Chrome
can ship UMP without conflict.

The point that remains in dispute is whether or not CORS should be
removed. So far, Maciej remains committed to supporting full CORS. I
think the DBAD discipline is clearly unworkable, but others don't see
it that way, yet.

Given your stated constraints, and assuming Maciej doesn't change his
mind, following the deployment strategy in Adam Barth's email seems
like a reasonable path forward.

Dirk Pranke

Apr 23, 2010, 6:30:12 PM
to Maciej Stachowiak, Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 3:07 PM, Maciej Stachowiak <m...@apple.com> wrote:
>
> On Apr 23, 2010, at 12:02 PM, Dirk Pranke wrote:
>
> On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
>
> 3) CORS has already shipped in a number of browsers, so user agent
>
> implementors don't want to remove the feature.
>
> For completeness, can you say where has it actually shipped already?
>
> Safari, Chrome, Firefox, IE (limited profile via XDomainRequest). Probably
> any other WebKit-based browser that is remotely up to date - it's been in
> WebKit since mid-2008.

Even ignoring IE, that's fairly sizable. The obvious other question:
do we know if a significant number of sites are actively using CORS
(and using it in a way that could not be trivially migrated to UMP)?

If we don't have this already, it might be useful to put some metrics
into the Chrome dev channel to see if the CORS headers are being
received and used, and if so, how often.

Obviously, if we decide we need to support CORS, we can either support
it for legacy reasons, or support it fully. If we support it for
legacy reasons, we should see if we can come up with some way of
indicating potential security risks (maybe something like we do with
mixed content over SSL).

Dirk Pranke

Apr 23, 2010, 9:26:15 PM
to Maciej Stachowiak, Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 5:51 PM, Maciej Stachowiak <m...@apple.com> wrote:
>
> On Apr 23, 2010, at 3:30 PM, Dirk Pranke wrote:
>
>> On Fri, Apr 23, 2010 at 3:07 PM, Maciej Stachowiak <m...@apple.com> wrote:
>>>
>>> On Apr 23, 2010, at 12:02 PM, Dirk Pranke wrote:
>>>
>>> On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
>>>
>>> 3) CORS has already shipped in a number of browsers, so user agent
>>>
>>> implementors don't want to remove the feature.
>>>
>>> For completeness, can you say where has it actually shipped already?
>>>
>>> Safari, Chrome, Firefox, IE (limited profile via XDomainRequest).
>>> Probably
>>> any other WebKit-based browser that is remotely up to date - it's been in
>>> WebKit since mid-2008.
>>
>> Even ignoring IE, that's fairly sizable. The obvious other question:
>> do we know if a significant number of sites are actively using CORS
>> (and using it in a way that could not be trivially migrated to UMP)?
>>
>> If we don't have this already, it might be useful to put some metrics
>> into the Chrome dev channel to see if the CORS headers are being
>> received and used, and if so, how often.
>
>
> I don't have that data. Gathering it would be useful. I'm not sure that
> end-of-lifing CORS would be a good idea even if current usage is not very
> high.

I agree that end-of-lifing CORS *may* not be a good idea. However, getting this
data would certainly help in making that decision.

> The limited support in IE, the fact that it's somewhat new, and the
> fact that cross-site communication can easily be done cross-browser with
> postMessage are all limiting factors. There are also other popular
> techniques for cross-site communication that are in common use but are
> either very hard to deploy correctly (pure server-to-server communication
> with a pre-arranged shared secret) or just blatantly insecure (typing
> username/password for site A into site B's UI). We really want people to
> migrate off of those bad techniques.

Agreed.

> postMessage is not so bad, but it's not
> always the best choice for a cross-site data API; it's better for visual
> embedding use cases.
>
> I should mention that postMessage has an origin-based security model, like
> CORS, and it is in every major browser and in active use by many popular Web
> sites. So even completely removing CORS would not end the use of
> origin-based security for cross-site communication.

Also agreed. However, that does not mean that it's necessarily a model
that should be encouraged to spread. One could argue that a path to a
more secure web would be to obsolete CORS in favor of UMP and
eventually replace postMessage() with a no-ambient-authority
equivalent of that.

CORS has the admirable goal of making it easier to do certain
activities in a browser without increasing the attack surface of the
web beyond what already exists. I'm sure most of us would like to
figure out how to actually reduce the attack surface, as long as we
can do it in a way that (a) ideally is easy to code and get correct
and (b) provides a migration path off the existing web.

Dirk Pranke

Apr 23, 2010, 9:51:53 PM
to da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.
>

Okay.

Fortunately, someone has already written something of a summary:

http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM

If I had remembered that link earlier, I would have saved Maciej a
reply, since it contains implementor info.

Here's my own summary of the two protocols; I attempted to avoid
restating what was already written in that link. Those of you who are
familiar with them, please correct me if I misstate anything.
Apologies, as this is slightly long for an email.

Both UMP and CORS are an attempt to relax some of the aspects of the
same origin policy normally enforced by XHR, so that you can
programmatically read the response from a cross-origin request (among
other things).

UMP is more-or-less a subset of CORS. Whether or not that is strictly
true is a matter of some technical debate, but it looks likely that
this will eventually be made true.

UMP enables cross-site GET and POST requests. Such requests are
required to contain no ambient authority - no cookies, no HTTP auth
info, no Origin or Referer header. All authority required to perform
the action on the server must be contained in the URL parameters and
optional form body. This is different from what XHR nominally does
today - you can disable the sending of the credentials, but not the
Origin and Referer headers. Accordingly, since UMP does not send the
Origin, service providers can only implement cross-origin sharing of
resources that they either (a) are willing to share with anyone on the
internet or (b) can decide whether to share based solely on the URL
parameters and form body.

UMP also requires the user-agent to ignore any Set-Cookie headers in
the response.
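
To make that concrete, here is a rough sketch of what a UMP-style request
looks like from script today (the URL and its secret parameter are made
up; this is not a proposed UMP API, just the closest approximation with
the current XHR object):

// Cross-origin XHR with no credentials: no cookies or HTTP auth are sent.
// Note the browser still adds the Origin and Referer headers, which UMP
// would also omit; there is currently no hook to suppress them.
const xhr = new XMLHttpRequest();
xhr.open("GET", "https://service.example.org/photos?key=unguessable-token");
xhr.withCredentials = false;  // explicit for clarity; false is the default
xhr.onload = () => {
  // Any authority the server needs must travel in the URL parameters
  // (or the form body for a POST), per the UMP model described above.
  console.log(xhr.responseText);
};
xhr.send();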

CORS extends the functionality of UMP in the following ways (roughly speaking):
* It allows methods other than GET and POST, at the cost of requiring
a "preflight" request to see if such a request is allowed. The
response to the preflight request is cacheable.
* It normally sends along the Origin header (the spec does not require
this, but AFAIK we do not currently expose an API hook to turn it
off).
* It can optionally send along other credentials (cookies, http auth info)

CORS thus allows the service provider to implement simple forms of
access control that UMP can't support without client-side cooperation
(e.g., restricting to "Origin: google.com"). The flip side
of this is that CORS also enables the potential for XSRF / confused
deputy attacks.
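
As a sketch of the sort of check I mean (server-side, TypeScript, names
and origin invented for illustration):

// The "simple form of access control" above: trust the Origin request
// header. It says which site's page sent the request, but not on whose
// behalf the request was composed, which is where the confused deputy
// risk comes from.
function isAllowedOrigin(headers: Record<string, string>): boolean {
  return headers["origin"] === "https://www.google.com";  // example origin
}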

Note that the full CORS spec is significantly more complicated than
UMP; this is presumably less of an issue for us since it's already
implemented, but there are support and QA implications. Adding
whatever it takes to provide a UMP-compliant API on top of this will
be trivial by comparison to implementing full CORS.

Given all this, if you were to ask me what we should do, I would say
something like the following:

- I agree with Tyler and MarkM that full ambient-authority-based
messaging should usually be discouraged. Cookies (and to a lesser
degree HTTP-based ambient auth credentials) make our lives difficult
from a security standpoint. Unfortunately, they are often much easier to
code to.
- I am less convinced that sending the Origin header is also
undesirable. I think doing so can enable a simple class of use cases
trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
in with the sort of attacks they're worried about here, in case I'm
missing something.

Since we already support CORS, I would suggest that we do what we need
to do to provide a clean API for UMP, and that we instrument the code
to see if we can tell who uses what. As I suggested in my other note,
that would help us figure out if we should do more to detect and warn
potentially unsafe behavior, and if we can safely remove CORS support
at some future date if it turns out everyone uses UMP anyway.

While I believe that we should probably not actively promote CORS as a
way to accomplish complicated things, I am reluctant to flat out say
that we should remove CORS without a better sense of who wants to use
it for what, and whether or not we can provide similar functionality
that is as easy to use or easier without requiring any ambient
authority.

I also agree with Adam's analysis of the way the specs should be
written. If nothing else, the UMP spec is far easier to follow than
CORS, because it is so much simpler and more limited.

So, I think my recommendation -- at least for what to do in the near
term -- largely lines up with Adam and Maciej here.

-- Dirk

Darin Fisher

Apr 24, 2010, 1:24:50 AM
to Dirk Pranke, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
Thanks for the summary!

Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?

-Darin

Mark S. Miller

Apr 24, 2010, 9:58:46 AM
to Darin Fisher, Dirk Pranke, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Thanks for the summary!

Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?

 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.

The original web CSRF vulnerabilities were created by browsers presenting cookies for cross origin GETs and POSTs (for links and forms). It was around five years between when this vulnerability was first deployed and the first reported exploitation. At least browser makers had the excuse of ignorance back then[1]. Using your argument, even if they had known about the vulnerability they were creating, they should have deployed it anyway.

Why don't we add unchecked pointer arithmetic to JavaScript?


[1] Assuming that they were ignorant of the relevant security literature, which seems like a safe assumption.




--
    Cheers,
    --MarkM

Mark S. Miller

Apr 24, 2010, 10:25:10 AM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 6:51 PM, Dirk Pranke <dpr...@chromium.org> wrote:
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.
>

Okay.

Fortunately, someone has already written something of a summary:

http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM

If I had remembered that link earlier, I would have saved Maciej a
reply, since it contains implementor info.

Here's my own summary of the two protocols; I attempted to avoid
restating what was already written in that link. Those of you who are
familiar with them, please correct me if I misstate anything.
Apologies, as this is slightly long for an email.

Hi Dirk, thanks for your excellent summary and reasonable analysis. I really appreciate the seriousness with which you are taking these issues.
Because existing browser behavior already presents cookies and other ambient auth info for cross-origin form POSTs, and will continue to do so, servers must make use of so-called CSRF tokens in the message payload anyway, in order to protect themselves from CSRF attacks.
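
For concreteness, the kind of payload check I mean looks roughly like this
(names made up, no particular framework implied):

// The CSRF token travels in the request payload, not in a cookie, so a
// cross-site request made without the page's cooperation cannot supply it.
function csrfTokenValid(sessionToken: string, payloadToken: string): boolean {
  return sessionToken.length > 0 && sessionToken === payloadToken;
}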

Regarding cookies as a security mechanism, even CSRF-aside, at <http://tools.ietf.org/html/draft-ietf-httpstate-cookie-08#page-31> Adam Barth says:

Transport-layer encryption, such as that employed in HTTPS, is
insufficient to prevent a network attacker from obtaining or altering
a victim's cookies because the cookie protocol itself has various
vulnerabilities (see "Weak Confidentiality" and "Weak Integrity",
below). In addition, by default, cookies do not provide
confidentiality or integrity from network attackers, even when used
in conjunction with HTTPS.

If a security mechanism need not be used to provide security, yes, it can indeed be much easier to code to. Likewise, an algorithm that need not be correct is much easier to write.

 
- I am less convinced that sending the Origin header is also
undesirable. I think doing so can enable a simple class of use cases
trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
in with the sort of attacks they're worried about here, in case I'm
missing something.


 

Since we already support CORS, I would suggest that we do what we need
to do to provide a clean API for UMP, and that we instrument the code
to see if we can tell who uses what. As I suggested in my other note,
that would help us figure out if we should do more to detect and warn
potentially unsafe behavior, and if we can safely remove CORS support
at some future date if it turns out everyone uses UMP anyway.

While I believe that we should probably not actively promote CORS as a
way to accomplish complicated things, I am reluctant to flat out say
that we should remove CORS without a better sense of who wants to use
it for what, and whether or not we can provide similar functionality
that is as easy to use or easier without requiring any ambient
authority.

+100. This data will be very useful. An excellent suggestion!

 
I also agree with Adam's analysis of the way the specs should be
written. If nothing else, the UMP spec is far easier to follow than
CORS, because it is so much simpler and more limited.

So, I think my recommendation -- at least for what to do in the near
term -- largely lines up with Adam and Maciej here.

-- Dirk



--
    Cheers,
    --MarkM

Ojan Vafai

Apr 24, 2010, 11:45:38 AM
to Mark S. Miller, Darin Fisher, Dirk Pranke, Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 6:58 AM, Mark S. Miller <eri...@google.com> wrote:
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?
 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.
...
Why don't we add unchecked pointer arithmetic to JavaScript?

Please refrain from being an ass on chromium-dev. This condescending, arrogant tone is unnecessary and counterproductive. If you had excluded the above from your email, you would have gotten the same point across without wasting our time or being rude.

To Darin's question, the absence of known attacks doesn't necessarily prove anything, but the presence does. Also, Flash policy files have been around for years.

Ojan

Ojan Vafai

Apr 24, 2010, 11:48:34 AM
to Tyler Close, Darin Fisher, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 2:54 PM, Tyler Close <tjc...@google.com> wrote:
Given your stated constraints, and assuming Maciej doesn't change his
mind, following the deployment strategy in Adam Barth's email seems
like a reasonable path forward.

I agree. Adam, you want to present that path forward on the public-webapps list? 

Tyler Close

Apr 24, 2010, 1:04:28 PM
to Dirk Pranke, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
On Fri, Apr 23, 2010 at 6:51 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> CORS extends the functionality of UMP in the following ways (roughly speaking):
> * It allows methods other than GET and POST, at the cost of requiring
> a "preflight" request to see if such a request is allowed. The
> response to the preflight request is cachable.

Note that this preflight request is *per URL*. If a PUT requires two
round trips and POST only one, I suspect an API designer will have a
tough time defending the use of PUT, no matter what its philosophical
advantages. Consequently, I think it's fair to wonder if CORS enables
methods other than GET and POST only in theory and not in practice.
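
For example, something like the following (hypothetical URL) costs an
OPTIONS round trip before the PUT is ever sent:

// A cross-origin PUT is not a "simple" request, so the browser first sends
// an OPTIONS preflight (with Access-Control-Request-Method: PUT) to this
// exact URL and only issues the PUT if the response allows it. The
// allowance can be cached, but only for this URL.
const xhr = new XMLHttpRequest();
xhr.open("PUT", "https://printer.example.net/job/42");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.send(JSON.stringify({ copies: 2 }));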

> - I am less convinced that sending the Origin header is also
> undesirable. I think doing so can enable a simple class of use cases
> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
> in with the sort of attacks they're worried about here, in case I'm
> missing something.

I have a few concrete concerns with using the Origin header in UMP:

1. Consider a resource that uses client IP address or a firewall for
access control. The resource author might easily think CSRF-like
vulnerabilities can be prevented by checking the Origin header. The
logic being: "This request came one of our trusted machines behind our
firewall and was sent by one of our web pages. It must be safe to
accept the request." Of course if any page from that origin is doing
cross-site messaging then the target site could run a Confused Deputy
attack against the intranet resource. In particular, consider an
intranet page doing cross-site messaging with an API like the Atom
Publishing Protocol. The Atom server is expected to respond to a
resource creation request with the URL for the created resource. This
URL is expected to identify a resource created by the publishing site;
however, if instead the target server responds with the URL of the
victim intranet resource, then the intranet page will POST to that
resource. The intranet resource receives a request from behind the
firewall with the expected client IP address and the expected Origin
header.

2. Even resources on the public Internet might be doing some form of
special request processing based on the Origin header if it was sent.
For example, many seem tempted to use this header to restrict access
to information to only web pages from a given origin. Now all of the
pages in this origin are vulnerable to a Confused Deputy attack where
they reveal information fetched on behalf of another site. An attack
similar to the previous one can again be used. The client page is
again using an API like the Atom Publishing Protocol. It wants to copy
the content of one resource to another. The client page gets the URL
for the copied resource by listing the contents of an Atom collection.
The attacker's Atom server reports the URL as being the URL of a
resource protected by the Origin header. The client page GETs the
content of this resource and POSTs it to a new resource hosted by the
attacker's Atom server.

--Tyler

Mark S. Miller

Apr 24, 2010, 3:02:25 PM
to Ojan Vafai, Darin Fisher, Dirk Pranke, Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 8:45 AM, Ojan Vafai <oj...@chromium.org> wrote:
On Sat, Apr 24, 2010 at 6:58 AM, Mark S. Miller <eri...@google.com> wrote:
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?
 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.
...
Why don't we add unchecked pointer arithmetic to JavaScript?

Please refrain from being an ass on chromium-dev. This condescending, arrogant tone is unnecessary and counterproductive. If you had excluded the above from your email, you would have gotten the same point across without wasting our time or being rude.

You are correct. I apologize for my tone. It was way inappropriate. Thanks.

 

To Darin's question, the absence of known attacks doesn't necessarily prove anything, but the presence does. Also, Flash policy files have been around for years.

Ojan



--
    Cheers,
    --MarkM

Dirk Pranke

Apr 24, 2010, 9:51:27 PM
to Darin Fisher, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
I am not aware of any existing attacks using CORS. Then again, I am
not aware of anyone using CORS for anything, so my knowledge may not
be exhaustive. If you restrict yourself to using the GET and POST
forms of CORS, then there isn't really anything you can do with CORS
that you can't do with a form in an iframe. The CORS authors have been
clear that although they believe they are not introducing new attack
vectors, they readily acknowledge they are propagating existing attack
vectors.

I had a longish discussion with Maciej on #webkit on Friday afternoon;
part of the discussion centered around the fact that there is really
no way to distinguish a CORS-initiated GET or POST from a
form-initiated one. We spent some time discussing ways to make that
possible, but it was unclear if there would be enough of a security
benefit to justify the work.

-- Dirk

Dirk Pranke

Apr 24, 2010, 10:14:16 PM
to Mark S. Miller, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com> wrote:
> If a security mechanism need not be used to provide security, yes, it can
> indeed be much easier to code to. Likewise, and algorithm that need not be
> correct is much easier to write.

Life is rarely composed of such binary distinctions. You yourself just
finished pointing out that CORS (and existing cross-origin mechanisms)
can be used securely with tokens, at the cost of additional
complexity. Similarly, while it is easy to show that there are ways
that CORS can be used to attack, it is also easy to show that some
usages of CORS may be "secure enough" for their intended purpose, and
it is incumbent on us to provide other mechanisms that are both
equally secure and equally easy to use if we want people to use them.

>> - I am less convinced that sending the Origin header is also
>> undesirable. I think doing so can enable a simple class of use cases
>> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
>> in with the sort of attacks they're worried about here, in case I'm
>> missing something.
>
> http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>

Unless I"m misunderstanding your example, it seems that while the
attack is indeed a confused deputy attack, and the access control is
gated based on the Origin, the vulnerability has less to do with the
fact that the Origin: header is sent, and much more to do with the fact that the
photos were stored at guessable URLs. Am I correct?

[ Hixie's reply to that message seems to say as much as I just did ].

Tyler's response was to work out the equivalent transaction using
web-keys. It was left as an exercise to the reader. I will attempt to
reproduce that exercise in a further email for comparison. It is
always good to have a true apples-to-apples pair of examples.

-- Dirk

Mark S. Miller

Apr 26, 2010, 12:11:52 AM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
[+ianh]

On Sat, Apr 24, 2010 at 7:14 PM, Dirk Pranke <dpr...@chromium.org> wrote:
On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com> wrote:
> If a security mechanism need not be used to provide security, yes, it can
> indeed be much easier to code to. Likewise, and algorithm that need not be
> correct is much easier to write.

Life is rarely composed of such binary distinctions.

Agreed.

 
You yourself just
finished pointing out that CORS (and existing cross-origin mechanism)
can be used securely with tokens, at the cost of additional
complexity.

Not quite. The security comes 1) from the tokens, and 2) despite cookies, not because of them. Had all the relevant information been placed only in the payload, we'd have all the security benefits without the complexity.

 
Similarly, while it is easy to show that there are ways
that CORS can be used to attack, it is also easy to show that some
usages of CORS may be "secure enough" for their intended purpose, and
it is incumbent on us to provide other mechanisms that are both
equally secure and equally easy to use if we want people to use them.

Agreed, but let's go farther. If it's only "equally", why bother. What we need are mechanisms that are more secure and easier to use in general. I added the "in general" because, for almost any bad mechanism, no matter how poor it may be in general, there will be some example that plays to its strengths, for which it seems to beat other mechanisms.
 

>> - I am less convinced that sending the Origin header is also
>> undesirable. I think doing so can enable a simple class of use cases
>> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
>> in with the sort of attacks they're worried about here, in case I'm
>> missing something.
>
> http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>

Unless I"m misunderstanding your example, it seems that while the
attack is indeed a confused deputy attack, and the access control is
gated based on the Origin, the vulnerability has less to do with the
fact the Origin: is sent, and much more to do with the fact that the
photos were stored at guessable URLs. Am I correct?

[ Hixie's reply to that message seem to say as much as I just did ].

That's not my understanding of Hixie's reply. I took Hixie to be saying that photo.example.com should inspect the origin of the URL and reject the request if it has a surprising origin -- effectively sanitizing URL space to the subset it knows about. (I'm cc'ing Hixie in case he'd like to clarify.)

Subsequent discussion revealed differences of opinion of how realistic or desirable such URL sanitizing is. That's why Tyler's recent post at <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159> using an ATOM-based example is perhaps the better example to use.
 

Tyler's response was to work out the equivalent transaction using
web-keys. It was left as an exercise to the reader. I will attempt to
reproduce that exercise in a further email for comparison. It is
always good to have a true apples-to-apples pair of examples.

I look forward to it, thanks.

 

-- Dirk



--
    Cheers,
    --MarkM

Dirk Pranke

Apr 26, 2010, 9:52:19 PM
to Mark S. Miller, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Sun, Apr 25, 2010 at 9:11 PM, Mark S. Miller <eri...@google.com> wrote:
> [+ianh]
> On Sat, Apr 24, 2010 at 7:14 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>>
>> On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com>
>> > http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>> >
>>
>> Unless I"m misunderstanding your example, it seems that while the
>> attack is indeed a confused deputy attack, and the access control is
>> gated based on the Origin, the vulnerability has less to do with the
>> fact the Origin: is sent, and much more to do with the fact that the
>> photos were stored at guessable URLs. Am I correct?
>>
>> [ Hixie's reply to that message seem to say as much as I just did ].
>
> That's not my understanding of Hixie's reply. I took Hixie to be saying that
> photo.example.com should inspect the origin of the URL and reject the
> request if it has a surprising origin -- effectively sanitizing URL space to
> the subset it knows about. (I'm cc'ing Hixie in case he'd like to clarify.)

Hixie's reply talked about the danger of potentially overlapping URL
namespaces; he never mentioned "Origin". So, essentially you would
have to have the application (photo.example.net) know that when a
reply from printer.example.net specified a URL on storage.example.net,
the name had better be in the "/job/*" namespace and not in the
"/photo/*" namespace. So, an "intelligent"
access-control-based implementation would need to do
access control by path, not by origin. Tyler's response to that thread
seems to imply that this cannot be done,
when in fact this can be and is done for specific applications, all
the time. It can't be done if you know nothing
about the namespace of URLs, of course, or their correlation to the
actors involved. But that's almost a tautology.

> Subsequent discussion revealed differences of opinion of how realistic
> or desirable such URL sanitizing is.

I must have missed the subsequent discussion you're thinking of; I
didn't find any follow-ups to that thread
indicating that Hixie's proposal couldn't be implemented.

> That's why Tyler's recent post at
> <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159>
> using an ATOM-based example is perhaps the better example to use.
>

I find myself too stupid to follow Tyler's example, but I'll reply to
that there.

>>
>> Tyler's response was to work out the equivalent transaction using
>> web-keys. It was left as an exercise to the reader. I will attempt to
>> reproduce that exercise in a further email for comparison. It is
>> always good to have a true apples-to-apples pair of examples.
>

Okay, having re-read his paper on web-keys, the key insight appears to
be that URLs (should) uniquely designate resources (combining access
with authority), and in order to avoid leakage you carefully store the
authority-granting part of the URL in the fragment (so it can't be
leaked in Referer headers).
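
Roughly, as I read it (illustrative only; the token is made up):

// The authority-granting secret lives in the fragment, which browsers do
// not send in Referer headers. Script on the page pulls it out and moves
// it into a query parameter when making its own requests.
const webKey = new URL("https://storage.example.org/obj#s=mhbqcmmva5ja3");
const secret = webKey.hash.slice(1);                    // "s=mhbqcmmva5ja3"
const requestUrl = `${webKey.origin}${webKey.pathname}?${secret}`;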

In thinking about how to write this out, I'm not sure that there's any
way to do it in a short enough form to be worth reproducing here.
There are too many caveats and footnotes necessary to explain things,
so I won't bother for now. If there is interest, please speak up, but
I will proceed on with my beliefs, since they're probably the
important part.

So, I think that the problem remains the same: *in this particular
example*, as long as printer.example.com can guess (or otherwise come
to know about) the URLs of the user's photos, you're hosed. The
web-key based approach in fact bases its security completely on the
secrecy of these URLs (as all pure-capability models do). If you can
enforce path-based Access Control (combined with the Origin), then you
can in fact do something stronger. Whether or not this generalizes to
multi-party security is an entirely different matter.

-- Dirk

Tyler Close

Apr 27, 2010, 1:05:36 PM
to Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Your use of the word 'tautology' implies that the argument in the
previous sentence is somehow unfair. It's not; it's basic web
architecture. A server has complete control over how it designs its
URL namespace. Typically, standards don't place any constraints on how
a server designs its URL namespace. For example, AtomPub doesn't
define constraints on the structure of URLs used by an AtomPub server.
In fact, it explicitly says:

"""
While the Atom Protocol specifies the formats of the representations
that are exchanged and the actions that can be performed on the IRIs
embedded in those representations, it does not constrain the form of
the URIs that are used. HTTP [RFC2616] specifies that the URI space
of each server is controlled by that server, and this protocol
imposes no further constraints on that control.
"""

Consequently, an AtomPub client is unable to take the precautions you
describe above. This convention is also an explicit principle of the
W3C's webarch document.

Note that there is also no guidance similar to what you describe in
the CORS specification. The general assumption is that Cookies and
Origin headers are sufficient to enable the server-side target of a
request to perform any needed access control checks. As this example
shows, that assumption is plainly invalid.

In the previous email thread, Hixie seemed to be arguing that CORS
access control worked just fine assuming that clients reliably
performed the kind of checking you describe. This is a perfect example
of a tautology. Assuming clients first verify that all principals are
allowed to use the identifiers they've provided, then access checks
done based on CORS headers will be reliable. Put more simply, CORS
access checks are only reliable if clients first do all the access
checking themselves. In the case of AtomPub, and in general, this
client-side checking is not possible.

>> That's why Tyler's recent post at
>> <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159>
>> using an ATOM-based example is perhaps the better example to use.
>>
>
> I find myself too stupid to follow Tyler's example, but I'll reply to
> that there.

Sigh. I'm sorry the examples aren't clearer. Please point out the
parts that are confusing.

>>> Tyler's response was to work out the equivalent transaction using
>>> web-keys. It was left as an exercise to the reader. I will attempt to
>>> reproduce that exercise in a further email for comparison. It is
>>> always good to have a true apples-to-apples pair of examples.
>>
>
> Okay, having re-read his paper on web-keys, the key insight appears to
> be that URLs (should) uniquely designate resources (combining access
> with authority), and in order to avoid leakage you carefully store the
> authority-granting part of the URL in the fragment (so it can't be
> leaked in Referer headers).
>
> In thinking about how to write this out, I'm not sure that there's any
> way to do it in a short enough form to be worth reproducing here.
> There are too many caveats and footnotes necessary to explain things,
> so I won't bother for now. If there is interest, please speak up, but
> I will proceed on with my beliefs, since they're probably the
> important part.

The solution I was thinking of is dead simple. Every resource hosted
by storage.example.org is at an unguessable URL. So the UMP solution
to the problem uses exactly the same requests as the CORS solution,
but omits all credentials. So:

1. A page from photo.example.com makes request:

POST /newprintjob?s=890890890 HTTP/1.0
Host: printer.example.net

HTTP/1.0 201 Created
Content-Type: application/json

{ "@" : "https://storage.example.org/?s=123412341234" }

2. To respond to the above request, the server side code at
printer.example.net set up a new printer spool file at
storage.example.org and returned the unguessable URL for it.

3. The same page from photo.example.com then makes request:

POST /copydocument HTTP/1.0
Host: storage.example.org
Content-Type: application/json

{
"from" : { "@" : "https://storage.example.org/?s=456745674567" },
"to": { "@" : "https://storage.example.org/?s=123412341234" }
}

HTTP/1.0 204 No Content

Done.
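
For completeness, here is what the client side of those two requests could
look like in script (same hosts and unguessable ?s= tokens as above; the
post() helper is just for brevity, and no credentials of any kind are
attached):

// Hypothetical helper: POST a JSON body with no cookies or HTTP auth info.
function post(url: string, body: string, done: (resp: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", url);
  xhr.setRequestHeader("Content-Type", "application/json");
  xhr.withCredentials = false;
  xhr.onload = () => done(xhr.responseText);
  xhr.send(body);
}

// 1. Ask printer.example.net for a new print job; it replies with the
//    unguessable URL of the spool file it created at storage.example.org.
post("https://printer.example.net/newprintjob?s=890890890", "", (resp) => {
  const spool = JSON.parse(resp)["@"];
  // 3. Ask storage.example.org to copy the photo into that spool file.
  post("https://storage.example.org/copydocument", JSON.stringify({
    from: { "@": "https://storage.example.org/?s=456745674567" },
    to: { "@": spool },
  }), () => { /* copy done */ });
});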

Since printer.example.net doesn't know the URL of any
storage.example.org file it is not allowed to write to, it can't ask
photo.example.com to write to such a file. Since photo.example.com
doesn't add any credentials to its requests, printer.example.net also
cannot cause photo.example.com to overwrite any other resource it may
own at any other site.

> So, I think that the problem remains the same: *in this particular
> example*, as long as printer.example.com can guess (or otherwise come
> to know about) the URLs of the user's photos, you're hosed. The
> web-key based approach in fact  bases its security completely on the
> secrecy of these URLs (as all pure-capability models do). If you can
> enforce path-based Access Control (combined with the Origin), then you
> can in fact do something stronger.

What can be made stronger?

> Whether or not this generalizes to multi-party security is an entirely different matter.

It seems strange to deploy and standardize CORS while this question is
unanswered.

--Tyler

Aaron Boodman

Apr 27, 2010, 1:53:00 PM
to tjc...@google.com, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 10:05 AM, Tyler Close <tjc...@google.com> wrote:
> On Mon, Apr 26, 2010 at 6:52 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> I find myself too stupid to follow Tyler's example, but I'll reply to
>> that there.
>
> Sigh. I'm sorry the examples aren't clearer. Please point out the
> parts that are confusing.

I didn't get it at first either.

I think it works like this. Say you have this URL on an intranet:

https://internal.mycompany.com/update-direct-deposit-details

Say there is also a URL like this:

https://internal.mycompany.com/news/admin

This page is the admin section for the company's internal news page.
Imagine that the news section for this company is actually implemented
using Blogger: the admin section is just a text area and a submit
button that uses XHR+CORS to interact with:

https://www.blogger.com/atom/update

Now imagine Blogger is owned or turns evil. The way atom works is that
when you create a post, the server returns you the URL you should use
to refer to it. A badly implemented client would likely just turn
around and start using this URL, without checking what server it
refers to. If EvilBlogger starts sending URLs that refer to
https://internal.mycompany.com/update-direct-deposit-details, bad
things happen.
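
In code, the badly implemented client looks something like this (same URLs
as above; extractEntryUrl is a hypothetical helper that pulls the new
entry's URL out of the Atom response):

// Hypothetical helper: grab the first href from the returned Atom XML.
const extractEntryUrl = (atomXml: string): string =>
  (atomXml.match(/href="([^"]+)"/) ?? ["", ""])[1];

const create = new XMLHttpRequest();
create.open("POST", "https://www.blogger.com/atom/update");
create.withCredentials = true;              // CORS request, with credentials
create.onload = () => {
  const entryUrl = extractEntryUrl(create.responseText);
  // Nothing here checks that entryUrl still points at blogger.com. If
  // EvilBlogger hands back .../update-direct-deposit-details instead, the
  // next request goes there, from inside the intranet, credentials and all.
  const update = new XMLHttpRequest();
  update.open("PUT", entryUrl);
  update.withCredentials = true;
  update.send("<entry>...</entry>");
};
create.send("<entry>...</entry>");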

===

Perhaps there are simpler examples. In general, the attack that the
UMP advocates are worried about relies on an attacker somehow
convincing a good client to make a bad request to a sensitive service.
With atom this is easy because the protocol provides URLs that the
client is supposed to directly access.

I still need to think about whether I think this is a big deal. My gut
reaction in the example above is that the direct deposit admin screen
shouldn't have been on the same origin as the news server, and
shouldn't accept cross-origin requests. Maybe there are other examples
that aren't so easily solvable?

- a

Tyler Close

Apr 27, 2010, 2:14:20 PM
to Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Thank you for filling in a more concrete example. That is indeed an
example of the kind of thing I am talking about.

I would of course argue that the problem is not a badly implemented
client, but a badly designed access control model in CORS. I think the
client is working just fine and as we want it to. We want that simple
client implementation to be safe.

> Perhaps there are simpler examples. In general, the attack that the
> UMP advocates are worried about relies on an attacker somehow
> convincing a good client to make a bad request to a sensitive service.
> With atom this is easy because the protocol provides URLs that the
> client is supposed to directly access.

The attack is likely also easily demonstrated for any protocol that
passes around identifiers, whether of URL syntax or not.

> I still need to think about whether I think this is a big deal. My gut
> reaction in the example above is that the direct deposit admin screen
> shouldn't have been on the same origin as the news server, and
> shouldn't accept cross-origin requests. Maybe there are other examples
> that aren't so easily solveable?

Isolating every page that uses cross-domain messaging in a unique
domain is probably not feasible and is also insufficient to protect
against Confused Deputy problems. For example, the evil Blogger could
return a URL that refers to the server-side state of the internal news
page. When the internal news page attempts to update a news item, it
is actually overwriting its own server-side state. That people
mistakenly think these problems are easily solvable is one of the
reasons this kind of attack is so dangerous.

Contrast this tarpit with the situation where there is only UMP. The
policy is that any page on the Intranet can do cross-domain messaging,
so long as it only uses UMP. You've got a straightforward policy to
explain to developers, an easy thing to code audit for and a robust
defense against Confused Deputy. A much better world to work in.

--Tyler