[chromium-dev] Implementor interest in a W3C WebApps proposal


Adam Barth

Apr 19, 2010, 1:23:13 PM
to Chromium-dev
Recently on the W3C's public-webapps mailing list, Anne van Kesteren
asked about implementor interest in a particular specification. I
haven't replied because I don't want to speak for the project. Who
are the right folks to ask for an opinion?

Thanks,
Adam

--
Chromium Developers mailing list: chromi...@chromium.org
View archives, change email options, or unsubscribe:
http://groups.google.com/a/chromium.org/group/chromium-dev

Jeremy Orlow

Apr 19, 2010, 2:16:17 PM
to aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Darin, Alex, and Dimitri are the web platform leads and prioritize which new web platform features Googlers work on. That said, I'd rather things be brought up on the list so others can weigh in; assuming it's not shot down, it might also get done faster than if it just went into the normal pipeline (i.e., waiting for someone to decide they want to implement it).
 
J

Adam Barth

Apr 19, 2010, 2:25:16 PM
to Jeremy Orlow, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Ok. Here's the email in question:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0171.html

The question appears to be mainly whether we want a new API for
accessing a profile of cross-origin XMLHttpRequest that might have
better security properties. IMHO, it doesn't matter that much because
the same functionality will be there either way.

Adam

Tyler Close

Apr 19, 2010, 5:32:56 PM
to aba...@chromium.org, Jeremy Orlow, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
The Uniform Messaging Policy (UMP) is a proposal by Mark Miller and
myself. The latest editor's draft is at:

http://dev.w3.org/2006/waf/UMP/

The elevator pitch is that CORS enables cross-origin messaging by
letting servers poke more holes in the Same Origin Policy, but does
little to help developers avoid the CSRF-like vulnerabilities inherent
in doing so. The UMP provides a security model for doing cross-origin
messaging without CSRF-like vulnerabilities (aka Confused Deputy).
I've been advocating for this functionality for some years now and
CORS has moved in that direction somewhat with its "withCredentials"
flag. This part of CORS is still underspecified though. UMP provides a
clear and succinct definition of the needed functionality.
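
To make the "withCredentials" distinction concrete, here is a rough
sketch in terms of the XHR2 credentials flag (the endpoint is made up,
and UMP itself does not mandate any particular API):

var xhr = new XMLHttpRequest();
xhr.open("GET", "https://api.example.org/feed", true);
// With the flag off (the default), no cookies or HTTP auth are attached
// to the cross-origin request, which is close to the UMP model -- though
// today's XHR still sends the Origin and Referer headers.
xhr.withCredentials = false;
xhr.onload = function () { /* read xhr.responseText */ };
xhr.send();
// Opting back in to ambient authority is a one-line change:
// xhr.withCredentials = true;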

Even if you hope CORS will adopt more of UMP over time, expressing
support for UMP could encourage that outcome.

I'm glad to answer any questions you may have.

--Tyler

Jeremy Orlow

Apr 19, 2010, 5:42:34 PM
to ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Oops...Yes, Ian is part of the "web platform leads" group as well.  :-)

2010/4/19 Ian Fette (イアンフェッティ) <ife...@google.com>
*cough* you're forgetting someone :)

That said, I would like to take a look at the objections raised by Maciej et al from Apple, as we would likely have to address them if we wanted to implement in Chrome. Does anyone care to summarize?

Tyler Close

Apr 19, 2010, 6:05:44 PM
to jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
My understanding is that Maciej does not claim there are any
significant technical barriers to implementing UMP. Indeed, he says
such support may arise by coincidence. The relevant email is at:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0043.html

I believe his main technical concern is that any UMP implementation in
WebKit should share code with the CORS implementation. I haven't
looked at the CORS implementation in WebKit, but there's nothing in
the spec that should require a wholly independent implementation.

--Tyler

Ojan Vafai

Apr 22, 2010, 6:36:12 PM
to tjc...@google.com, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
For the record, I agree with the general sentiment expressed by Maciej/Anne that we want to implement CORS and, given that, there's no benefit to UMP being a separate spec. Having UMP as a separate spec carries the downside that the APIs could diverge over time. The sticking points with folding UMP into CORS seem relatively straightforward to me (as straightforward as any web API discussion is) and are worth the assurance of consistent APIs. There's the argument for defining CORS in terms of UMP, but I don't see the benefit of doing so, especially as it makes implementors' lives more difficult.

Ojan

Tyler Close

Apr 22, 2010, 7:33:22 PM
to Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
Hi Ojan,

On Thu, Apr 22, 2010 at 3:36 PM, Ojan Vafai <oj...@google.com> wrote:
> For the record, I agree with the general sentiment expressed by Maciej/Anne
> that we want to implement CORS and, given that, there's no benefit to UMP
> being a different spec.

Since you're so confident that we want to implement CORS, I suppose you
must have a strategy for explaining to developers how to avoid
Confused Deputy vulnerabilities when using CORS. As I've explained on
the list, there are several natural ways to use CORS that cause
Confused Deputy vulnerabilities. See:

http://lists.w3.org/Archives/Public/public-webapps/2010AprJun/0258.html

> Having UMP as a different spec leaves the possible
> downside to having the APIs diverge over time. The sticking points with UMP
> folding into CORS seem relatively straightforward to me (as straightforward
> as any web API discussions are) and are worth the assurance of consistent
> APIs. There's the argument of defining CORS in terms of UMP, but I don't see
> the benefit of doing so, especially as it makes implementors lives more
> difficult.

Implementers are a vocal bunch and will keep the APIs from diverging
if that's what they want. I think that's not a major concern.

The lives of implementers are also a much lesser concern than the
lives of Web application developers. We owe application developers an
easily understood spec. Burying UMP inside the substantial complexity
of CORS doesn't help application developers.

--Tyler

Ojan Vafai

Apr 22, 2010, 9:15:29 PM
to Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
I don't have anything to add that hasn't already been said on public-webapps. I find Maciej's description at http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0481.html convincing.

Ojan

Tyler Close

Apr 23, 2010, 12:40:41 AM
to Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Darin Fisher, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 6:15 PM, Ojan Vafai <oj...@google.com> wrote:
> I don't have anything to add that hasn't already been said on
> public-webapps. I find Maciej's description at
> http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0481.html convincing.

In that email, Maciej claims developers can follow a DBAD programming
discipline to avoid Confused Deputy vulnerabilities. He describes that
policy as:

"""
[1] To recap the DBAD discipline:

Either:
A) Never make a request to a site on behalf of a different site; OR
B) Guarantee that all requests you make on behalf of a third-party
site are syntactically different from any request you make on your own
behalf.

In this discipline, "on behalf of" does not necessarily imply that the
third-party site initiated the deputizing interaction; it may include
requesting information from a third-party site and then constructing a
request to a different site based on it without proper checking. (In
general proper checking may not be possible, but making third-party
requests look different can always be provided for by the protocol.)
"""

A) basically says: "don't do cross-site messaging". Since the request
is going cross-site, at least some of the data, such as the target
URL, is determined by the target host rather than the sending host.
Maciej notes that checking this data is not always possible, let alone
something that developers can easily and reliably do. A) is also
insufficient to guard against all Confused Deputy problems since it
ignores what you do with the response data. I don't understand how B)
works. Perhaps you could explain it since you were convinced by it.

Darin Fisher

Apr 23, 2010, 1:22:41 AM
to Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
Hi Tyler,

A couple things to note for background sake:

1) It is our goal that Chrome and Safari should not diverge in web platform behavior.
2) Maciej is a very influential member of the WebKit and web standards communities.

Therefore, I think Maciej would need to be convinced before Chrome would ship UMP.

I confess that I don't have a good enough understanding of UMP vs CORS yet to comment intelligently on the subject.  I need to do some reading and educate myself better.  Having read some of what has been linked from this thread, I still feel that I am missing some background information.

Regards,
-Darin

Dirk Pranke

Apr 23, 2010, 1:26:45 PM
to da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
> Hi Tyler,
> A couple things to note for background sake:
> 1) It is our goal that Chrome and Safari should not diverge in web platform
> behavior.
> 2) Maciej is a very influential member of the WebKit and web standards
> communities.
> Therefore, I think Maciej would need to be convinced before Chrome would
> ship UMP.
> I confess that I don't have a good enough understanding of UMP vs CORS yet
> to comment intelligently on the subject.  I need to do some reading and
> educate myself better.  Having read some of what has been linked from this
> thread, I still feel that I am missing some background information.

A few months ago I was up on the differences between the two and
relatively convinced that UMP was the way to go and CORS should
probably be discouraged. Of course, I've forgotten everything since
then :)

I would attempt to summarize something now, but I fear I would get it
wrong and just confuse things, so I will instead try to refresh my
memory today and see if I can send out a better summary of the two
protocols and the tradeoffs.

-- Dirk

Adam Barth

Apr 23, 2010, 2:12:18 PM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Maciej Stachowiak
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> Hi Tyler,
>> A couple things to note for background sake:
>> 1) It is our goal that Chrome and Safari should not diverge in web platform
>> behavior.
>> 2) Maciej is a very influential member of the WebKit and web standards
>> communities.
>> Therefore, I think Maciej would need to be convinced before Chrome would
>> ship UMP.
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.

I've been avoiding commenting this round because I gave Tyler and co a
lot of feedback on this topic last round. My current read is as
follows:

1) UMP should be / is a subset of CORS.
2) Developers who use UMP might create more secure applications.
3) CORS has already shipped in a number of browsers, so user agent
implementors don't want to remove the feature.
4) Having UMP in a separate document makes it easier to understand
which parts of CORS are in UMP.
5) User agent implementors don't want to have two independent
implementations because the internal mechanisms are largely the same.
6) User agent implementors want to read one document to tell them how
to build their one implementation of CORS+UMP, complete with
instructions on where to put the various if statements.

Putting these together, it looks like we want a separate UMP
specification for web developers and a combined CORS+UMP specification
for user agent implementors. Consequently, I think it makes sense for
the working group to publish UMP separately from CORS but have all the
user agent conformance requirements in the combined CORS+UMP document.

(There's also some debate about what API should trigger the UMP
subset, but that's mostly aesthetics as far as I can tell.)

Adam

Dirk Pranke

Apr 23, 2010, 3:02:07 PM
to Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Maciej Stachowiak
On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
> 3) CORS has already shipped in a number of browsers, so user agent
> implementors don't want to remove the feature.

For completeness, can you say where has it actually shipped already?

-- Dirk

Tyler Close

Apr 23, 2010, 5:54:47 PM
to Darin Fisher, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
> Hi Tyler,
> A couple things to note for background sake:
> 1) It is our goal that Chrome and Safari should not diverge in web platform
> behavior.
> 2) Maciej is a very influential member of the WebKit and web standards
> communities.
> Therefore, I think Maciej would need to be convinced before Chrome would
> ship UMP.

Maciej has clearly and consistently maintained that he is not against
shipping UMP, so long as UMP is a subset of CORS. So I think Chrome
can ship UMP without conflict.

The point that remains in dispute is whether or not CORS should be
removed. So far, Maciej remains committed to supporting full CORS. I
think the DBAD discipline is clearly unworkable, but others don't see
it that way, yet.

Given your stated constraints, and assuming Maciej doesn't change his
mind, following the deployment strategy in Adam Barth's email seems
like a reasonable path forward.

Dirk Pranke

Apr 23, 2010, 6:30:12 PM
to Maciej Stachowiak, Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 3:07 PM, Maciej Stachowiak <m...@apple.com> wrote:
>
> On Apr 23, 2010, at 12:02 PM, Dirk Pranke wrote:
>
> On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
>
> 3) CORS has already shipped in a number of browsers, so user agent
>
> implementors don't want to remove the feature.
>
> For completeness, can you say where has it actually shipped already?
>
> Safari, Chrome, Firefox, IE (limited profile via XDomainRequest). Probably
> any other WebKit-based browser that is remotely up to date - it's been in
> WebKit since mid-2008.

Even ignoring IE, that's fairly sizable. The obvious other question:
do we know if a significant number of sites are actively using CORS
(and using it in a way that could not be trivially migrated to UMP)?

If we don't have this already, it might be useful to put some metrics
into the Chrome dev channel to see if the CORS headers are being
received and used, and if so, how often.

Obviously, if we decide we need to support CORS, we can either support
it for legacy reasons, or support it fully. If we support it for
legacy reasons, we should see if we can come up with some way of
indicating potential security risks (maybe something like we do with
mixed content over SSL).

Dirk Pranke

Apr 23, 2010, 9:26:15 PM
to Maciej Stachowiak, Adam Barth, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 5:51 PM, Maciej Stachowiak <m...@apple.com> wrote:
>
> On Apr 23, 2010, at 3:30 PM, Dirk Pranke wrote:
>
>> On Fri, Apr 23, 2010 at 3:07 PM, Maciej Stachowiak <m...@apple.com> wrote:
>>>
>>> On Apr 23, 2010, at 12:02 PM, Dirk Pranke wrote:
>>>
>>> On Fri, Apr 23, 2010 at 11:12 AM, Adam Barth <aba...@chromium.org> wrote:
>>>
>>> 3) CORS has already shipped in a number of browsers, so user agent
>>>
>>> implementors don't want to remove the feature.
>>>
>>> For completeness, can you say where has it actually shipped already?
>>>
>>> Safari, Chrome, Firefox, IE (limited profile via XDomainRequest).
>>> Probably
>>> any other WebKit-based browser that is remotely up to date - it's been in
>>> WebKit since mid-2008.
>>
>> Even ignoring IE, that's fairly sizable. The obvious other question:
>> do we know if a significant number of sites are actively using CORS
>> (and using it in a way that could not be trivially migrated to UMP)?
>>
>> If we don't have this already, it might be useful to put some metrics
>> into the Chrome dev channel to see if the CORS headers are being
>> received and used, and if so, how often.
>
>
> I don't have that data. Gathering it would be useful. I'm not sure that
> end-of-lifing CORS would be a good idea even if current usage is not very
> high.

I agree that end-of-lifing CORS *may* not be a good idea. However, getting this
data would certainly help in making that decision.

> The limited support in IE, the fact that it's somewhat new, and the
> fact that cross-site communication can easily be done cross-browser with
> postMessage are all limiting factors. There are also other popular
> techniques for cross-site communication that are in common use but are
> either very hard to deploy correctly (pure server-to-server communication
> with a pre-arranged shared secret) or just blatantly insecure (typing
> username/password for site A into site B's UI). We really want people to
> migrate off of those bad techniques.

Agreed.

> postMessage is not so bad, but it's not
> always the best choice for a cross-site data API; it's better for visual
> embedding use cases.
>
> I should mention that postMessage has an origin-based security model, like
> CORS, and it is in every major browser and in active use by many popular Web
> sites. So even completely removing CORS would not end the use of
> origin-based security for cross-site communication.

Also agreed. However, that does not mean it's necessarily a model
that should be encouraged to spread. One could argue that a path to a
more secure web would be to obsolete CORS in favor of UMP and
eventually replace postMessage() with a no-ambient-authority
equivalent.
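
(For reference, the origin-based checks Maciej mentions look roughly
like this in postMessage -- names made up:)

// Embedding page sends a message to a cross-origin iframe:
var frame = document.getElementsByTagName("iframe")[0];
frame.contentWindow.postMessage("hello", "https://widget.example.org");

// Inside the iframe, the receiver checks who is talking to it:
window.addEventListener("message", function (event) {
  if (event.origin !== "https://app.example.com")
    return; // ignore messages from unexpected origins
  // ... act on event.data ...
}, false);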

CORS has the admirable goal of making it easier to do certain
activities in a browser without increasing the attack surface of the
web beyond what already exists. I'm sure most of us would like to
figure out how to actually reduce the attack surface, as long as we
can do it in a way that (a) ideally is easy to code and get correct
and (b) provides a migration path off the existing web.

Dirk Pranke

Apr 23, 2010, 9:51:53 PM
to da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.
>

Okay.

Fortunately, someone has already written something of a summary:

http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM

If I had remembered that link earlier, I would have saved Maciej a
reply, since it contains implementor info.

Here's my own summary of the two protocols; I attempted to avoid
restating what was already written in that link. Those of you who are
familiar with them, please correct me if I misstate anything.
Apologies, as this is slightly long for an email.

Both UMP and CORS are an attempt to relax some of the aspects of the
same origin policy normally enforced by XHR, so that you can
programmatically read the response from a cross-origin request (among
other things).

UMP is more-or-less a subset of CORS. Whether or not that is strictly
true is a matter of some technical debate, but it looks likely that
this will eventually be made true.

UMP enables cross-site GET and POST requests. Such requests are
required to contain no ambient authority - no cookies, no HTTP auth
info, no Origin or Referer header. All authority required to perform
the action on the server must be contained in the URL parameters and
optional form body. This is different from what is nominally done with
XHR today - you can disable the sending of the credentials, but not
the Origin and Referer headers. Accordingly, since UMP does not send
the Origin header, service providers can only implement cross-origin
sharing of resources that they are either (a) willing to share with
anyone on the internet or (b) willing to share based solely on the URL
parameters and form body.

UMP also requires the user-agent to ignore any Set-Cookie headers in
the response.
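
Roughly, a UMP-style request from script would look like the following
(this is a sketch using today's XHR syntax as an approximation; as
noted above, a real XHR would still send Origin and Referer, and the
hostname and token are made up):

// All of the authority is carried in the request itself (an
// unguessable token in the URL) rather than in ambient credentials
// added by the browser.
var xhr = new XMLHttpRequest();
xhr.open("POST", "https://storage.example.org/upload?s=123412341234", true);
xhr.withCredentials = false; // no cookies or HTTP auth; UMP would also
                             // drop Origin/Referer and ignore Set-Cookie
xhr.setRequestHeader("Content-Type", "text/plain");
xhr.send("file contents");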

CORS extends the functionality of UMP in the following ways (roughly speaking):
* It allows methods other than GET and POST, at the cost of requiring
a "preflight" request to see if such a request is allowed. The
response to the preflight request is cacheable.
* It normally sends along the Origin header (the spec does not require
this, but AFAIK we do not currently expose an API hook to turn it
off).
* It can optionally send along other credentials (cookies, HTTP auth info).

CORS thus allows the service provider to implement simple forms of
access control that UMP can't provide without client-side cooperation
(e.g., restricting access to requests bearing "Origin: google.com").
The flip side is that CORS also enables the potential for XSRF /
confused deputy attacks.
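
A Node-style sketch of the kind of server-side check this enables
(hostnames are made up, and note that a real Origin value includes the
scheme):

var http = require('http');
http.createServer(function (request, response) {
  var origin = request.headers['origin'];
  if (origin === 'https://calendar.example.com') {
    // Echo the origin back so the browser exposes the response to
    // pages from that origin.
    response.writeHead(200, {
      'Access-Control-Allow-Origin': origin,
      'Content-Type': 'application/json'
    });
    response.end('{"events": []}');
  } else {
    response.writeHead(403, { 'Content-Type': 'text/plain' });
    response.end('forbidden');
  }
}).listen(8080);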

Note that the full CORS spec is significantly more complicated than
UMP; this is presumably less of an issue for us since it's already
implemented, but there are support and QA implications. Adding
whatever it takes to provide a UMP-compliant API on top of this will
be trivial by comparison to implementing full CORS.

Given all this, if you were to ask me what we should do, I would say
something like the following:

- I agree with Tyler and MarkM that full ambient-authority-based
messaging should usually be discouraged. Cookies (and to a lesser
degree HTTP-based ambient auth credentials) make our lives difficult
from a security standpoint. Unfortunately, they are often much easier to
code to.
- I am less convinced that sending the Origin header is also
undesirable. I think doing so can enable a simple class of use cases
trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
in with the sort of attacks they're worried about here, in case I'm
missing something.

Since we already support CORS, I would suggest that we do what we need
to do to provide a clean API for UMP, and that we instrument the code
to see if we can tell who uses what. As I suggested in my other note,
that would help us figure out if we should do more to detect and warn
potentially unsafe behavior, and if we can safely remove CORS support
at some future date if it turns out everyone uses UMP anyway.

While I believe that we should probably not actively promote CORS as a
way to accomplish complicated things, I am reluctant to flat out say
that we should remove CORS without a better sense of who wants to use
it for what, and whether or not we can provide similar functionality
that is as easy to use or easier without requiring any ambient
authority.

I also agree with Adam's analysis of the way the specs should be
written. If nothing else, the UMP spec is far easier to follow than
CORS, because it is so much simpler and more limited.

So, I think my recommendation -- at least for what to do in the near
term -- largely lines up with Adam and Maciej here.

-- Dirk

Darin Fisher

Apr 24, 2010, 1:24:50 AM
to Dirk Pranke, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
Thanks for the summary!

Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?

-Darin

Mark S. Miller

Apr 24, 2010, 9:58:46 AM
to Darin Fisher, Dirk Pranke, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Thanks for the summary!

Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?

 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.

The original web CSRF vulnerabilities were created by browsers presenting cookies for cross origin GETs and POSTs (for links and forms). It was around five years between when this vulnerability was first deployed and the first reported exploitation. At least browser makers had the excuse of ignorance back then[1]. Using your argument, even if they had known about the vulnerability they were creating, they should have deployed it anyway.

Why don't we add unchecked pointer arithmetic to JavaScript?


[1] Assuming that they were ignorant of the relevant security literature, which seems like a safe assumption.




--
    Cheers,
    --MarkM

Mark S. Miller

Apr 24, 2010, 10:25:10 AM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 6:51 PM, Dirk Pranke <dpr...@chromium.org> wrote:
On Fri, Apr 23, 2010 at 10:26 AM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Thu, Apr 22, 2010 at 10:22 PM, Darin Fisher <da...@google.com> wrote:
>> I confess that I don't have a good enough understanding of UMP vs CORS yet
>> to comment intelligently on the subject.  I need to do some reading and
>> educate myself better.  Having read some of what has been linked from this
>> thread, I still feel that I am missing some background information.
>
> A few months ago I was up on the differences between the two and
> relatively convinced that UMP was the way to go and CORS should
> probably be discouraged. Of course, I've forgotten everything since
> then :)
>
> I would attempt to summarize something now, but I fear I would get it
> wrong and just confuse things, so I will instead try to refresh my
> memory today and see if I can send out a better summary of the two
> protocols and the tradeoffs.
>

Okay.

Fortunately, someone has already written something of a summary:

http://www.w3.org/Security/wiki/Comparison_of_CORS_and_UM

If I had remembered that link earlier, I would have saved Maciej a
reply, since it contains implementor info.

Here's my own summary of the two protocols; I attempted to avoid
restating what was already written in that link. Those of you who are
familiar with them, please correct me if I misstate anything.
Apologies, as this is slightly long for an email.

Hi Dirk, thanks for your excellent summary and reasonable analysis. I really appreciate the seriousness with which you are taking these issues.
Because existing browser behavior already attaches such ambient auth info (cookies, HTTP auth) to cross-origin form POSTs, and will continue to do so, servers must make use of so-called CSRF tokens in the payload of the message anyway, in order to protect themselves from CSRF attacks.

Regarding cookies as a security mechanism, even CSRF aside, at <http://tools.ietf.org/html/draft-ietf-httpstate-cookie-08#page-31> Adam Barth says:

Transport-layer encryption, such as that employed in HTTPS, is
insufficient to prevent a network attacker from obtaining or altering
a victim's cookies because the cookie protocol itself has various
vulnerabilities (see "Weak Confidentiality" and "Weak Integrity",
below). In addition, by default, cookies do not provide
confidentiality or integrity from network attackers, even when used
in conjunction with HTTPS.

If a security mechanism need not be used to provide security, yes, it can indeed be much easier to code to. Likewise, an algorithm that need not be correct is much easier to write.

 
- I am less convinced that sending the Origin header is also
undesirable. I think doing so can enable a simple class of use cases
trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
in with the sort of attacks they're worried about here, in case I'm
missing something.

http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html

Since we already support CORS, I would suggest that we do what we need
to do to provide a clean API for UMP, and that we instrument the code
to see if we can tell who uses what. As I suggested in my other note,
that would help us figure out if we should do more to detect and warn
potentially unsafe behavior, and if we can safely remove CORS support
at some future date if it turns out everyone uses UMP anyway.

While I believe that we should probably not actively promote CORS as a
way to accomplish complicated things, I am reluctant to flat out say
that we should remove CORS without a better sense of who wants to use
it for what, and whether or not we can provide similar functionality
that is as easy to use or easier without requiring any ambient
authority.

+100. This data will be very useful. An excellent suggestion!

 
I also agree with Adam's analysis of the way the specs should be
written. If nothing else, the UMP spec is far easier to follow than
CORS, because it is so much simpler and more limited.

So, I think my recommendation -- at least for what to do in the near
term -- largely lines up with Adam and Maciej here.

-- Dirk



--
    Cheers,
    --MarkM

Ojan Vafai

Apr 24, 2010, 11:45:38 AM
to Mark S. Miller, Darin Fisher, Dirk Pranke, Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 6:58 AM, Mark S. Miller <eri...@google.com> wrote:
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?
 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.
...
Why don't we add unchecked pointer arithmetic to JavaScript?

Please refrain from being an ass on chromium-dev. This condescending, arrogant tone is unnecessary and counterproductive. If you had excluded the above from your email, you would have gotten the same point across without wasting our time or being rude.

To Darin's question, the absence of known attacks doesn't necessarily prove anything, but the presence does. Also, Flash policy files have been around for years.

Ojan

Ojan Vafai

Apr 24, 2010, 11:48:34 AM
to Tyler Close, Darin Fisher, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Fri, Apr 23, 2010 at 2:54 PM, Tyler Close <tjc...@google.com> wrote:
Given your stated constraints, and assuming Maciej doesn't change his
mind, following the deployment strategy in Adam Barth's email seems
like a reasonable path forward.

I agree. Adam, you want to present that path forward on the public-webapps list? 

Tyler Close

Apr 24, 2010, 1:04:28 PM
to Dirk Pranke, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
On Fri, Apr 23, 2010 at 6:51 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> CORS extends the functionality of UMP in the following ways (roughly speaking):
> * It allows methods other than GET and POST, at the cost of requiring
> a "preflight" request to see if such a request is allowed. The
> response to the preflight request is cachable.

Note that this preflight request is *per URL*. If a PUT requires two
round trips and POST only one, I suspect an API designer will have a
tough time defending the use of PUT, no matter what its philosophical
advantages. Consequently, I think it's fair to wonder whether CORS
enables methods other than GET and POST only in theory and not in
practice.
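
To spell out the cost (the URL is made up):

var xhr = new XMLHttpRequest();
// Because the method is PUT, the browser first sends, on its own:
//
//   OPTIONS /documents/42 HTTP/1.1
//   Origin: https://app.example.com
//   Access-Control-Request-Method: PUT
//
// and only issues the PUT below if the response carries matching
// Access-Control-Allow-Origin / Access-Control-Allow-Methods headers.
// The preflight result may be cached (Access-Control-Max-Age), but the
// cache entry covers only this URL.
xhr.open("PUT", "https://api.example.net/documents/42", true);
xhr.send("new contents");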

> - I am less convinced that sending the Origin header is also
> undesirable. I think doing so can enable a simple class of use cases
> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
> in with the sort of attacks they're worried about here, in case I'm
> missing something.

I have a few concrete concerns with using the Origin header in UMP:

1. Consider a resource that uses client IP address or a firewall for
access control. The resource author might easily think CSRF-like
vulnerabilities can be prevented by checking the Origin header. The
logic being: "This request came one of our trusted machines behind our
firewall and was sent by one of our web pages. It must be safe to
accept the request." Of course if any page from that origin is doing
cross-site messaging then the target site could run a Confused Deputy
attack against the intranet resource. In particular, consider an
intranet page doing cross-site messaging with an API like the Atom
Publishing Protocol. The Atom server is expected to respond to a
resource creation request with the URL for the created resource. This
URL is expected to identify a resource created by the publishing site;
however, if instead the target server responds with the URL of the
victim intranet resource, then the intranet page will POST to that
resource. The intranet resource receives a request from behind the
firewall with the expected client IP address and the expected Origin
header.

2. Even resources on the public Internet might be doing some form of
special request processing based on the Origin header if it was sent.
For example, many seem tempted to use this header to restrict access
to information to only web pages from a given origin. Now all of the
pages in this origin are vulnerable to a Confused Deputy attack where
they reveal information fetched on behalf of another site. An attack
similar to the previous one can again be used. The client page is
again using an API like the Atom Publishing Protocol. It wants to copy
the content of one resource to another. The client page gets the URL
for the copied resource by listing the contents of an Atom collection.
The attacker's Atom server reports the URL as being the URL of a
resource protected by the Origin header. The client page GETs the
content of this resource and POSTs it to a new resource hosted by the
attacker's Atom server.

--Tyler

Mark S. Miller

Apr 24, 2010, 3:02:25 PM
to Ojan Vafai, Darin Fisher, Dirk Pranke, Tyler Close, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 8:45 AM, Ojan Vafai <oj...@chromium.org> wrote:
On Sat, Apr 24, 2010 at 6:58 AM, Mark S. Miller <eri...@google.com> wrote:
On Fri, Apr 23, 2010 at 10:24 PM, Darin Fisher <da...@chromium.org> wrote:
Do we have knowledge of successful XSRF / Confused Deputy attacks against servers using CORS?  To what degree has this been an observed problem?  Has this been a huge problem for Flash policy files?
 
This is brilliant! Let's propose a new security mechanism with known security vulnerabilities. Since the new mechanism is new, it isn't used yet, and so no one is exploiting its vulnerabilities yet. When someone argues that the new mechanism shouldn't be deployed because it invites attack, point out the absence of observed exploitation of its vulnerabilities.
...
Why don't we add unchecked pointer arithmetic to JavaScript?

Please refrain from being an ass on chromium-dev. This condescending, arrogant tone is unnecessary and counterproductive. If you had excluded the above from your email, you would have gotten the same point across without wasting our time or being rude.

You are correct. I apologize for my tone. It was way inappropriate. Thanks.

 




--
    Cheers,
    --MarkM

Dirk Pranke

Apr 24, 2010, 9:51:27 PM
to Darin Fisher, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
I am not aware of any existing attacks using CORS. Then again, I am
not aware of anyone using CORS for anything, so my knowledge may not
be exhaustive. If you restrict yourself to using the GET and POST
forms of CORS, then there isn't really anything you can do with CORS
that you can't do with a form in an iframe. The CORS authors have been
clear that although they believe they are not introducing new attack
vectors, they readily acknowledge they are propagating existing attack
vectors.

I had a longish discussion with Maciej on #webkit on Friday afternoon;
part of the discussion centered around the fact that there is really
no way to distinguish a CORS-initiated GET or POST from a
form-initiated one. We spent some time discussing ways to make that
possible, but it was unclear if there would be enough of a security
benefit to justify the work.

-- Dirk

Dirk Pranke

Apr 24, 2010, 10:14:16 PM
to Mark S. Miller, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov
On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com> wrote:
> If a security mechanism need not be used to provide security, yes, it can
> indeed be much easier to code to. Likewise, and algorithm that need not be
> correct is much easier to write.

Life is rarely composed of such binary distinctions. You yourself just
finished pointing out that CORS (and existing cross-origin mechanism)
can be used securely with tokens, at the cost of additional
complexity. Similarly, while it is easy to show that there are ways
that CORS can be used to attack, it is also easy to show that some
usages of CORS may be "secure enough" for their intended purpose, and
it is incumbent on us to provide other mechanisms that are both
equally secure and equally easy to use if we want people to use them.

>> - I am less convinced that sending the Origin header is also
>> undesirable. I think doing so can enable a simple class of use cases
>> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
>> in with the sort of attacks they're worried about here, in case I'm
>> missing something.
>
> http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>

Unless I'm misunderstanding your example, it seems that while the
attack is indeed a confused deputy attack, and the access control is
gated based on the Origin, the vulnerability has less to do with the
fact the Origin: is sent, and much more to do with the fact that the
photos were stored at guessable URLs. Am I correct?

[ Hixie's reply to that message seems to say as much as I just did ].

Tyler's response was to work out the equivalent transaction using
web-keys. It was left as an exercise to the reader. I will attempt to
reproduce that exercise in a further email for comparison. It is
always good to have a true apples-to-apples pair of examples.

-- Dirk

Mark S. Miller

Apr 26, 2010, 12:11:52 AM
to Dirk Pranke, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
[+ianh]

On Sat, Apr 24, 2010 at 7:14 PM, Dirk Pranke <dpr...@chromium.org> wrote:
On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com> wrote:
> If a security mechanism need not be used to provide security, yes, it can
> indeed be much easier to code to. Likewise, and algorithm that need not be
> correct is much easier to write.

Life is rarely composed of such binary distinctions.

Agreed.

 
You yourself just
finished pointing out that CORS (and existing cross-origin mechanism)
can be used securely with tokens, at the cost of additional
complexity.

Not quite. The security comes 1) from the tokens, and 2) despite cookies, not because of them. Had all the relevant information been placed only in the payload, we'd have all the security benefits without the complexity.

 
Similarly, while it is easy to show that there are ways
that CORS can be used to attack, it is also easy to show that some
usages of CORS may be "secure enough" for their intended purpose, and
it is incumbent on us to provide other mechanisms that are both
equally secure and equally easy to use if we want people to use them.

Agreed, but let's go farther. If it's only "equally", why bother. What we need are mechanisms that are more secure and easier to use in general. I added the "in general" because, for almost any bad mechanism, no matter how poor it may be in general, there will be some example that plays to its strengths, for which it seems to beat other mechanisms.
 

>> - I am less convinced that sending the Origin header is also
>> undesirable. I think doing so can enable a simple class of use cases
>> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
>> in with the sort of attacks they're worried about here, in case I'm
>> missing something.
>
> http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>

Unless I'm misunderstanding your example, it seems that while the
attack is indeed a confused deputy attack, and the access control is
gated based on the Origin, the vulnerability has less to do with the
fact the Origin: is sent, and much more to do with the fact that the
photos were stored at guessable URLs. Am I correct?

[ Hixie's reply to that message seems to say as much as I just did ].

That's not my understanding of Hixie's reply. I took Hixie to be saying that photo.example.com should inspect the origin of the URL and reject the request if it has a surprising origin -- effectively sanitizing URL space to the subset it knows about. (I'm cc'ing Hixie in case he'd like to clarify.)

Subsequent discussion revealed differences of opinion of how realistic or desirable such URL sanitizing is. That's why Tyler's recent post at <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159> using an ATOM-based example is perhaps the better example to use.
 

Tyler's response was to work out the equivalent transaction using
web-keys. It was left as an exercise to the reader. I will attempt to
reproduce that exercise in a further email for comparison. It is
always good to have a true apples-to-apples pair of examples.

I look forward to it, thanks.

 

-- Dirk



--
    Cheers,
    --MarkM

Dirk Pranke

Apr 26, 2010, 9:52:19 PM
to Mark S. Miller, da...@google.com, Tyler Close, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Sun, Apr 25, 2010 at 9:11 PM, Mark S. Miller <eri...@google.com> wrote:
> [+ianh]
> On Sat, Apr 24, 2010 at 7:14 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>>
>> On Sat, Apr 24, 2010 at 7:25 AM, Mark S. Miller <eri...@google.com>
>> > http://lists.w3.org/Archives/Public/public-webapps/2009AprJun/1324.html
>> >
>>
>> Unless I'm misunderstanding your example, it seems that while the
>> attack is indeed a confused deputy attack, and the access control is
>> gated based on the Origin, the vulnerability has less to do with the
>> fact the Origin: is sent, and much more to do with the fact that the
>> photos were stored at guessable URLs. Am I correct?
>>
>> [ Hixie's reply to that message seems to say as much as I just did ].
>
> That's not my understanding of Hixie's reply. I took Hixie to be saying that
> photo.example.com should inspect the origin of the URL and reject the
> request if it has a surprising origin -- effectively sanitizing URL space to
> the subset it knows about. (I'm cc'ing Hixie in case he'd like to clarify.)

Hixie's reply talked about the danger of potentially overlapping URL
namespaces; he never mentioned "Origin". So, essentially, you would
have to have the application (photo.example.net) know that when a
reply from printer.example.net specified a URL on storage.example.net,
the name had better be in the "/job/*" namespace and not in the
"/photo/*" namespace. So an "intelligent" access-control-based
implementation would need to do access control by path, not by origin.
Tyler's response to that thread seems to imply that this cannot be
done, when in fact this can be and is done for specific applications,
all the time. It can't be done if you know nothing about the namespace
of URLs, of course, or their correlation to the actors involved. But
that's almost a tautology.

> Subsequent discussion revealed differences of opinion of how realistic
> or desirable such URL sanitizing is.

I must have missed the subsequent discussion you're thinking of; I
didn't find any follow-ups to that thread
indicating that Hixie's proposal couldn't be implemented.

> That's why Tyler's recent post at
> <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159>
> using an ATOM-based example is perhaps the better example to use.
>

I find myself too stupid to follow Tyler's example, but I'll reply to
that there.

>>
>> Tyler's response was to work out the equivalent transaction using
>> web-keys. It was left as an exercise to the reader. I will attempt to
>> reproduce that exercise in a further email for comparison. It is
>> always good to have a true apples-to-apples pair of examples.
>

Okay, having re-read his paper on web-keys, the key insight appears to
be that URLs (should) uniquely designate resources (combining access
with authority), and in order to avoid leakage you carefully store the
authority-granting part of the URL in the fragment (so it can't be
leaked in Referer headers).
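
In code, the basic pattern (though not the full printer/photos
transaction) is something like this (names and token made up):

// The page itself is loaded as, say,
//   https://storage.example.org/app#s=123412341234
// The unguessable part lives in the fragment, which the browser never
// sends to any server -- not in the request, and not in Referer headers.
var key = window.location.hash.replace(/^#s=/, "");
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://storage.example.org/file?s=" +
    encodeURIComponent(key), true);
xhr.withCredentials = false; // the key is the authority; no ambient credentials
xhr.send();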

In thinking about how to write this out, I'm not sure that there's any
way to do it in a short enough form to be worth reproducing here.
There are too many caveats and footnotes necessary to explain things,
so I won't bother for now. If there is interest, please speak up, but
I will proceed on with my beliefs, since they're probably the
important part.

So, I think that the problem remains the same: *in this particular
example*, as long as printer.example.com can guess (or otherwise come
to know about) the URLs of the user's photos, you're hosed. The
web-key based approach in fact bases its security completely on the
secrecy of these URLs (as all pure-capability models do). If you can
enforce path-based Access Control (combined with the Origin), then you
can in fact do something stronger. Whether or not this generalizes to
multi-party security is an entirely different matter.

-- Dirk

Tyler Close

Apr 27, 2010, 1:05:36 PM
to Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Your use of the word 'tautology' implies that the argument in the
previous sentence is somehow unfair. It's not, it's basic web
architecture. A server has complete control over how it designs its
URL namespace. Typically, standards don't place any constraints on how
a server designs its URL namespace. For example, AtomPub doesn't
define constraints on the structure of URLs used by an AtomPub server.
In fact, it explicitly says:

"""
While the Atom Protocol specifies the formats of the representations
that are exchanged and the actions that can be performed on the IRIs
embedded in those representations, it does not constrain the form of
the URIs that are used. HTTP [RFC2616] specifies that the URI space
of each server is controlled by that server, and this protocol
imposes no further constraints on that control.
"""

Consequently, an AtomPub client is unable to take the precautions you
describe above. This convention is also an explicit principle of the
W3C's webarch document.

Note that there is also no guidance similar to what you describe in
the CORS specification. The general assumption is that Cookies and
Origin headers are sufficient to enable the server-side target of a
request to perform any needed access control checks. As this example
shows, that assumption is plainly invalid.

In the previous email thread, Hixie seemed to be arguing that CORS
access control worked just fine assuming that clients reliably
performed the kind of checking you describe. This is a perfect example
of a tautology. Assuming clients first verify that all principals are
allowed to use the identifiers they've provided, then access checks
done based on CORS headers will be reliable. Put more simply, CORS
access checks are only reliable if clients first do all the access
checking themselves. In the case of AtomPub, and in general, this
client-side checking is not possible.

>> That's why Tyler's recent post at
>> <http://groups.google.com/a/chromium.org/group/chromium-dev/msg/d922610d772aa159>
>> using an ATOM-based example is perhaps the better example to use.
>>
>
> I find myself too stupid to follow Tyler's example, but I'll reply to
> that there.

Sigh. I'm sorry the examples aren't clearer. Please point out the
parts that are confusing.

>>> Tyler's response was to work out the equivalent transaction using
>>> web-keys. It was left as an exercise to the reader. I will attempt to
>>> reproduce that exercise in a further email for comparison. It is
>>> always good to have a true apples-to-apples pair of examples.
>>
>
> Okay, having re-read his paper on web-keys, the key insight appears to
> be that URLs (should) uniquely designate resources (combining access
> with authority), and in order to avoid leakage you carefully store the
> authority-granting part of the URL in the fragment (so it can't be
> leaked in Referer headers).
>
> In thinking about how to write this out, I'm not sure that there's any
> way to do it in a short enough form to be worth reproducing here.
> There are too many caveats and footnotes necessary to explain things,
> so I won't bother for now. If there is interest, please speak up, but
> I will proceed on with my beliefs, since they're probably the
> important part.

The solution I was thinking of is dead simple. Every resource hosted
by storage.example.org is at an unguessable URL. So the UMP solution
to the problem uses exactly the same requests as the CORS solution,
but omits all credentials. So:

1. A page from photo.example.com makes request:

POST /newprintjob?s=890890890 HTTP/1.0
Host: printer.example.net

HTTP/1.0 201 Created
Content-Type: application/json

{ "@" : "https://storage.example.org/?s=123412341234" }

2. To respond to the above request, the server side code at
printer.example.net set up a new printer spool file at
storage.example.org and returned the unguessable URL for it.

3. The same page from photo.example.com then makes request:

POST /copydocument HTTP/1.0
Host: storage.example.org
Content-Type: application/json

{
"from" : { "@" : "https://storage.example.org/?s=456745674567" },
"to": { "@" : "https://storage.example.org/?s=123412341234" }
}

HTTP/1.0 204 Ok

Done.
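
From the photo.example.com page, that is just two credential-free XHRs
(a sketch, reusing the made-up URLs above):

// 1. Ask the printer for a new spool file. No cookies or auth are sent.
var create = new XMLHttpRequest();
create.open("POST", "https://printer.example.net/newprintjob?s=890890890", true);
create.onload = function () {
  var spool = JSON.parse(create.responseText)["@"];
  // 2. Ask the storage service to copy the photo into the spool file.
  var copy = new XMLHttpRequest();
  copy.open("POST", "https://storage.example.org/copydocument", true);
  copy.setRequestHeader("Content-Type", "application/json");
  copy.send(JSON.stringify({
    "from": { "@": "https://storage.example.org/?s=456745674567" },
    "to":   { "@": spool }
  }));
};
create.send();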

Since printer.example.net doesn't know the URL of any
storage.example.org file it is not allowed to write to, it can't ask
photo.example.com to write to such a file. Since photo.example.com
doesn't add any credentials to its requests, printer.example.net also
cannot cause photo.example.com to overwrite any other resource it may
own at any other site.

> So, I think that the problem remains the same: *in this particular
> example*, as long as printer.example.com can guess (or otherwise come
> to know about) the URLs of the user's photos, you're hosed. The
> web-key based approach in fact  bases its security completely on the
> secrecy of these URLs (as all pure-capability models do). If you can
> enforce path-based Access Control (combined with the Origin), then you
> can in fact do something stronger.

What can be made stronger?

> Whether or not this generalizes to multi-party security is an entirely different matter.

It seems strange to deploy and standardize CORS while this question is
unanswered.

--Tyler

Aaron Boodman

Apr 27, 2010, 1:53:00 PM
to tjc...@google.com, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 10:05 AM, Tyler Close <tjc...@google.com> wrote:
> On Mon, Apr 26, 2010 at 6:52 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> I find myself too stupid to follow Tyler's example, but I'll reply to
>> that there.
>
> Sigh. I'm sorry the examples aren't clearer. Please point out the
> parts that are confusing.

I didn't get it at first either.

I think it works like this. Say you have this URL on an intranet:

https://internal.mycompany.com/update-direct-deposit-details

Say there is also a URL like this:

https://internal.mycompany.com/news/admin

This page is the admin section for the company's internal news page.
Imagine that the news section for this company is actually implemented
using Blogger: the admin section is just a text area and a submit
button that uses XHR+CORS to interact with:

https://www.blogger.com/atom/update

Now imagine Blogger is owned or turns evil. The way Atom works is that
when you create a post, the server returns you the URL you should use
to refer to it. A badly implemented client would likely just turn
around and start using this URL, without checking what server it
refers to. If EvilBlogger starts sending URLs that refer to
https://internal.mycompany.com/update-direct-deposit-details, bad
things happen.

===

Perhaps there are simpler examples. In general, the attack that the
UMP advocates are worried about relies on an attacker somehow
convincing a good client to make a bad request to a sensitive service.
With Atom this is easy because the protocol provides URLs that the
client is supposed to directly access.

I still need to think about whether I think this is a big deal. My gut
reaction in the example above is that the direct deposit admin screen
shouldn't have been on the same origin as the news server, and
shouldn't accept cross-origin requests. Maybe there are other examples
that aren't so easily solvable?

- a

Tyler Close

unread,
Apr 27, 2010, 2:14:20 PM4/27/10
to Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Thank you for filling in a more concrete example. That is indeed an
example of the kind of thing I am talking about.

I would of course argue that the problem is not a badly implemented
client, but a badly designed access control model in CORS. I think the
client is working just fine and as we want it to. We want that simple
client implementation to be safe.

> Perhaps there are simpler examples. In general, the attack that the
> UMP advocates are worried about relies on an attacker somehow
> convincing a good client to make a bad request to a sensitive service.
> With atom this is easy because the protocol provides URLs that the
> client is supposed to directly access.

The attack is likely also easily demonstrated for any protocol that
passes around identifiers, whether in URL syntax or not.

> I still need to think about whether I think this is a big deal. My gut
> reaction in the example above is that the direct deposit admin screen
> shouldn't have been on the same origin as the news server, and
> shouldn't accept cross-origin requests. Maybe there are other examples
> that aren't so easily solveable?

Isolating every page that uses cross-domain messaging in a unique
domain is probably not feasible and is also insufficient to protect
against Confused Deputy problems. For example, the evil Blogger could
return a URL that refers to the server-side state of the internal news
page. When the internal news page attempts to update a news item, it
is actually overwriting its own server-side state. That people
mistakenly think these problems are easily solvable is one of the
reasons this kind of attack is so dangerous.

Contrast this tarpit with the situation where there is only UMP. The
policy is that any page on the Intranet can do cross-domain messaging,
so long as it only uses UMP. You've got a straightforward policy to
explain to developers, an easy thing to code audit for and a robust
defense against Confused Deputy. A much better world to work in.

--Tyler

Adam Barth

unread,
Apr 27, 2010, 2:57:16 PM4/27/10
to Tyler Close, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
How does UMP help in this case? In Aaron's example, the security for
https://internal.mycompany.com/update-direct-deposit-details seems to
rely upon the company's firewall, which UMP lets EvilBlogger
circumvent.

Adam

Aaron Boodman

unread,
Apr 27, 2010, 3:31:10 PM4/27/10
to Tyler Close, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
I realized my example was not that great. It doesn't demonstrate a new
issue with CORS because it doesn't actually use CORS for the attack.
CORS is only used in my example to interact with the attacker, not to
interact with the target of the attack
(update-direct-deposit-details).

It seems like this issue already exists today, without adding CORS. If
you can convince a client to issue an exact request, game over. CORS
only exacerbates this problem in that it makes more services available
to clients.

> Isolating every page that uses cross-domain messaging in a unique
> domain is probably not feasible and is also insufficient to protect
> against Confused Deputy problems. For example, the evil Blogger could
> return a URL that refers to the server-side state of the internal news
> page. When the internal news page attempts to update a news item, it
> is actually overwriting it's own server side state. That people
> mistakenly think these problems are easily solvable is one of the
> reasons this kind of attack is so dangerous.
>
> Contrast this tarpit with the situation where there is only UMP. The
> policy is that any page on the Intranet can do cross-domain messaging,
> so long as it only uses UMP. You've got a straightforward policy to
> explain to developers, an easy thing to code audit for and a robust
> defense against Confused Deputy. A much better world to work in.

It seems like the only way the attack can work is if the attacker can
convince the client (I'm really hesitant to use the word 'deputy'
because I keep imagining Barney Fife) to issue the *exact* request
required by the target service. This seems unrealistic. In my example,
presumably the update-direct-deposit-details requires other POST and
GET parameters. Can Blogger convince the news service to issue that
exact request?

It seems like this devolves to XSS. Do not evaluate code you received
from outside your program. Issuing requests that you receive from
others is basically the same as evaluating code.

- a

Dirk Pranke

unread,
Apr 27, 2010, 4:29:10 PM4/27/10
to Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 10:05 AM, Tyler Close <tjc...@google.com> wrote:
I'm sorry, I did not mean to imply unfair, just "obvious."

> A server has complete control over how it designs its
> URL namespace. Typically, standards don't place any constraints on how
> a server designs its URL namespace. For example, AtomPub doesn't
> define constraints on the structure of URLs used by an AtomPub server.

Standards don't. Implementations do, all the time.

> In fact, it explicitly says:
>
> """
>   While the Atom Protocol specifies the formats of the representations
>   that are exchanged and the actions that can be performed on the IRIs
>   embedded in those representations, it does not constrain the form of
>   the URIs that are used.  HTTP [RFC2616] specifies that the URI space
>   of each server is controlled by that server, and this protocol
>   imposes no further constraints on that control.
> """
>
> Consequently, an AtomPub client is unable to take the precautions you
> describe above.

A generic AtomPub client that knew nothing about the server, sure. My
point is that in the real world, we don't build systems this way, at
least not if we know what we are doing.

>  This convention is also an explicit principle of the
> W3C's webarch document.

Which means almost nothing in the real world.

>
> Note that there is also no guidance similar to what you describe in
> the CORS specification. The general assumption is that Cookies and
> Origin headers are sufficient to enable the server-side target of a
> request to perform any needed access control checks. As this example
> shows, that assumption is plainly invalid.

I don't think the CORS advocates would necessarily agree with that
characterization, at least not to the example you provided. They all
agree that the protocol does have the confused deputy vulnerability.

>
>> So, I think that the problem remains the same: *in this particular
>> example*, as long as printer.example.com can guess (or otherwise come
>> to know about) the URLs of the user's photos, you're hosed. The
>> web-key based approach in fact  bases its security completely on the
>> secrecy of these URLs (as all pure-capability models do). If you can
>> enforce path-based Access Control (combined with the Origin), then you
>> can in fact do something stronger.
>
> What can be made stronger?

Combining Origin with unguessable URLs is stronger than unguessable URLs alone.

-- Dirk

Dirk Pranke

unread,
Apr 27, 2010, 4:44:47 PM4/27/10
to Aaron Boodman, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Yes, the CORS people are quite clear that (as long as you're using
GETs and POSTs) any attack you have with CORS exists today with form
submissions (or at least, they -- and I -- believe that to be the
case).

>> Isolating every page that uses cross-domain messaging in a unique
>> domain is probably not feasible and is also insufficient to protect
>> against Confused Deputy problems. For example, the evil Blogger could
>> return a URL that refers to the server-side state of the internal news
>> page. When the internal news page attempts to update a news item, it
>> is actually overwriting it's own server side state. That people
>> mistakenly think these problems are easily solvable is one of the
>> reasons this kind of attack is so dangerous.
>>
>> Contrast this tarpit with the situation where there is only UMP. The
>> policy is that any page on the Intranet can do cross-domain messaging,
>> so long as it only uses UMP. You've got a straightforward policy to
>> explain to developers, an easy thing to code audit for and a robust
>> defense against Confused Deputy. A much better world to work in.
>
> It seems like the only way the attack can work is if the attacker can
> convince the client (I'm really hesitant to use the word 'deputy'
> because I keep imaging Barney Fife) to issue the *exact* request
> required by the target service. This seems unrealistic. In my example,
> presumably the update-direct-deposit-details requires other POST and
> GET parameters. Can Blogger convince the news service to issue that
> exact request?
>

Yes, that is pretty much the point of the confused deputy / XSRF
attack. And yes, it happens all the time.

> It seems like this devolves to XSS. Do not evaluate code you received
> from outside your program. Issuing requests that you receive from
> others is basically the same as evaluating code.
>

Not always XSS, but that can certainly help. It can be difficult to
defend against this in the general case, because it may be difficult
if not impossible to be able to tell which URLs are safe and which
aren't (which is Tyler and Mark's point). They argue that the best
defense against this is to make the URLs unguessable and difficult to
leak.
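
For what it's worth, the mechanical part of that defense is tiny; a
minimal sketch of minting such an unguessable "web-key" URL (the
query-parameter shape just mirrors the storage.example.org example
earlier in the thread):

import { randomBytes } from "crypto";

// Mint a "web-key": a URL whose authority rests on an unguessable token.
function mintWebKey(baseUrl: string): string {
  const token = randomBytes(16).toString("hex"); // 128 bits of randomness
  return `${baseUrl}/?s=${token}`;
}

// mintWebKey("https://storage.example.org")
//   -> e.g. "https://storage.example.org/?s=3f9c2a4d1e8b..."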

-- Dirk

Dirk Pranke

unread,
Apr 27, 2010, 5:05:59 PM4/27/10
to Tyler Close, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, MarkM Miller
On Sat, Apr 24, 2010 at 10:04 AM, Tyler Close <tjc...@google.com> wrote:
> On Fri, Apr 23, 2010 at 6:51 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> CORS extends the functionality of UMP in the following ways (roughly speaking):
>> * It allows methods other than GET and POST, at the cost of requiring
>> a "preflight" request to see if such a request is allowed. The
>> response to the preflight request is cachable.
>
> Note that this preflight request is *per URL*. If a PUT requires two
> round trips and POST only one, I suspect an API designer will have a
> tough time defending the use of PUT, no matter what its philosophical
> advantages. Consequently, I think it's fair to wonder if CORS enables
> methods other than GET and POST only in theory and not in practice.

I am inclined to agree with you here, but I am usually skeptical about
claims that "PUT" is a useful verb in practice.

>
>> - I am less convinced that sending the Origin header is also
>> undesirable. I think doing so can enable a simple class of use cases
>> trivially at a relatively minor risk. Perhaps Tyler or MarkM can chime
>> in with the sort of attacks they're worried about here, in case I'm
>> missing something.
>
> I have a few concrete concerns with using the Origin header in UMP:
>
> 1. Consider a resource that uses client IP address or a firewall for
> access control. The resource author might easily think CSRF-like
> vulnerabilities can be prevented by checking the Origin header. The
> logic being: "This request came one of our trusted machines behind our
> firewall and was sent by one of our web pages. It must be safe to
> accept the request." Of course if any page from that origin is doing
> cross-site messaging then the target site could run a Confused Deputy
> attack against the intranet resource. In particular, consider an
> intranet page doing cross-site messaging with an API like the Atom
> Publishing Protocol. The Atom server is expected to respond to a
> resource creation request with the URL for the created resource. This
> URL is expected to identify a resource created by the publishing site;
> however, if instead the target server responds with the URL of the
> victim intranet resource, then the intranet page will POST to that
> resource. The intranet resource receives a request from behind the
> firewall with the expected client IP address and the expected Origin
> header.

(I wrote this part last night but didn't send it because I wasn't
happy with it yet; Aaron's reply largely makes this part moot ...)

I'm sorry, but I didn't follow this example or what the perceived
threat is at all. I've been feeling rather stupid lately, so this is
probably my fault, but could you be a bit more explicit about who is
doing what to whom here, and what is being compromised? I think
you've got "cms.intranet.example.com" (which is the atom server), and
"client.intranet.example.com". I'm not sure what "target site" is
supposed to refer to, nor "target server", "victim intranet resource",
or "intranet resource".

Presumably the end result is that "cms.intranet" gets a request from
"client.intranet" that it honors, but it shouldn't?

At any rate, let me try and go back to the beginning.

(1) I think at this point, everyone acknowledges that confused deputy
attacks are a real threat with CORS. It is also acknowledged that they
can be defended against at the cost of increased developer complexity.

(2) Most everyone acknowledges that UMP is (or will be) a subset of
CORS, and so UMP will likely be implemented. Since it is clear -- to
me at least -- that a no-ambient-authority version of cross-domain
messaging should be possible, and it's almost no work to implement
this on top of the CORS infrastructure, we should go ahead and do so.

(3) CORS is, at least in WebKit and in most other browsers, already
implemented and deployed, so debating whether or not it should be
implemented and deployed is moot. Rather, the questions become -
should it be documented and/or evangelized at all, and should it be
removed?

Now, when deciding whether or not to use or remove a protocol, we
should ask the following questions:

1) What is it being used for? Are any of those uses secure enough that
we need not be concerned about them? We don't know the answer to these
questions, so some amount of information gathering would be good.
Slightly different: what can it be used for, and are those uses
secure?

If it can be shown that no usage of CORS is secure, that would
obviously be a deal killer. However, that's not going to be possible,
since you clearly can build secure apps on top of it, as we've
discussed above.

2) If CORS is being used securely, can the equivalent functionality be
achieved other ways, just as easily and securely? Answering this
effectively probably also requires the information gathered in (1).
Otherwise, it is too easy to create strawman use cases on both sides.

One could largely object that we should not have a protocol floating
around out there that has known holes. Unfortunately, since CORS
builds on the existing web infrastructure, the cat is long since
escaped from that bag (although we could at least suggest that the
aspects of CORS like non-GET/POST usage should be discouraged, if we
were being paranoid). The CORS advocates suggest, probably correctly,
that CORS has the advantage of making certain coding patterns much
easier than they are otherwise, in which case it becomes easier to
write correct versions of the cross-origin sharing code that does
already exist today. And that's a good thing.

One can also object that CORS may encourage people to adopt and spread
insecure communication patterns (leading to more cross-origin sharing
than currently occurs). I suggested at the beginning that we not
encourage this and hence CORS continues to exist in the same limbo it
currently lives in, at least until we can get better answers to (1)
and (2).

The last problem with adopting a "UMP-only" world view is that we
don't actually have a good sense of what the security issues around
UMP-only messaging will be. I think we would agree that as long as
URLs are correctly made unguessable and aren't leaked, things should
be good. That is, however, a largely academic stance, just as saying
that you can use CORS successfully (or not) is an academic stance. We
need real world usage to get a much better degree of confidence here.

-- Dirk

Aaron Boodman

unread,
Apr 27, 2010, 5:30:13 PM4/27/10
to Dirk Pranke, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 1:44 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Tue, Apr 27, 2010 at 12:31 PM, Aaron Boodman <a...@google.com> wrote:
>> I realized my example was not that great. It doesn't demonstrate a new
>> issue with CORS because it doesn't actually use CORS for the attack.
>> CORS is only used in my example to interact with the attacker, not to
>> interact with the target of the attack
>> (update-direct-deposit-details).
>>
>> It seems like this issue already exists today, without adding CORS. If
>> you can convince a client to issue an exact request, game over. CORS
>> only exacerbates this problem in that it makes more services available
>> to clients.
>>
>
> Yes, the CORS people are quite clear that (as long as you're using
> GETs and POSTs) any attack you have with CORS exists today with form
> submissions (or at least, they -- and I -- believe that to be the
> case).

My understanding is that in a browser that implements CORS, all
cross-origin requests (not just those coming from XHR) will carry the
Origin header. I couldn't find it in the spec, so I can't link to it,
and might be wrong. The rest of this mail is based on that assumption.

===

Any attack you have with CORS you have today with cross-site form
posts, but the reverse is not true. Most of the attacks you have today
with cross-site form posts are thwarted by CORS.

Traditional XSRF, where attackers create a form and submit it using
JavaScript, no longer work with a browser that implements CORS. In
that case, the Origin header will indicate the source of the request,
and a server that checked the Origin header would not be vulnerable.

This simple check could be implemented in low-level framework code and
deployed quite rapidly. That would be a pretty dramatic win.
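
For illustration, the whole framework-level check could be as small as
this (the whitelist and the helper name are made up):

const TRUSTED_ORIGINS = new Set(["https://internal.mycompany.com"]);

function isAllowedOrigin(originHeader: string | null): boolean {
  // Browsers that send Origin include it on cross-site POSTs; a missing
  // header is treated here as same-site or legacy traffic.
  if (originHeader === null) return true;
  return TRUSTED_ORIGINS.has(originHeader);
}

// In the request handler: reject state-changing requests up front, e.g.
//   if (method === "POST" && !isAllowedOrigin(originHeader)) respond 403.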

>>> Isolating every page that uses cross-domain messaging in a unique
>>> domain is probably not feasible and is also insufficient to protect
>>> against Confused Deputy problems. For example, the evil Blogger could
>>> return a URL that refers to the server-side state of the internal news
>>> page. When the internal news page attempts to update a news item, it
>>> is actually overwriting it's own server side state. That people
>>> mistakenly think these problems are easily solvable is one of the
>>> reasons this kind of attack is so dangerous.
>>>
>>> Contrast this tarpit with the situation where there is only UMP. The
>>> policy is that any page on the Intranet can do cross-domain messaging,
>>> so long as it only uses UMP. You've got a straightforward policy to
>>> explain to developers, an easy thing to code audit for and a robust
>>> defense against Confused Deputy. A much better world to work in.
>>
>> It seems like the only way the attack can work is if the attacker can
>> convince the client (I'm really hesitant to use the word 'deputy'
>> because I keep imaging Barney Fife) to issue the *exact* request
>> required by the target service. This seems unrealistic. In my example,
>> presumably the update-direct-deposit-details requires other POST and
>> GET parameters. Can Blogger convince the news service to issue that
>> exact request?
>>
>
> Yes, that is pretty much the point of the confused deputy / XSRF
> attack. And yes, it happens all the time.

There are two different deputies we're talking about. With traditional
XSRF, the browser is the deputy. It is confused into making a
malicious request on behalf of an evil site when the evil site
constructs a cross-domain form and submits it using JavaScript. I'm
not talking about that case, because as I said above, it seems like
CORS addresses it.

The case that is left, and the one I was talking about above, is where
some good site is the deputy.

In my example, the intranet news site is confused into making a bad
request to the direct deposit site. This seems pretty unlikely to me.
It requires an evil site to be able to control the exact request that
a good site will make. In many cases this would even involve
controlling the POST data that would be sent. That seems far-fetched
to me.

- a

Tyler Close

unread,
Apr 27, 2010, 6:22:18 PM4/27/10
to Adam Barth, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
This part of this thread started out with Dirk asking why UMP
prohibits naming the requesting origin in the Origin header. So the
scenario assumes a world in which the Origin header is deployed and
explains why it is dangerous for UMP to provide a non-"null" value for
this header.

In a world where Origin is deployed, the update-direct-deposit-details
resource checks for an Origin header with value
"https://internal.mycompany.com". A request without this header is
rejected. The web developer believes that this check, in combination
with the firewall, protects the resource against Confused Deputy
attacks, such as CSRF. The attack scenario shows how to use CORS to
violate the web developer's expectations and so attack the
update-direct-deposit-details resource. Since UMP can only send an
"Origin: null" header, it cannot be used to attack the
update-direct-deposit-details resource.

--Tyler

Adam Barth

unread,
Apr 27, 2010, 6:30:18 PM4/27/10
to Tyler Close, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Thanks. That wasn't clear from the example.

Adam

Tyler Close

unread,
Apr 27, 2010, 6:38:59 PM4/27/10
to Dirk Pranke, Aaron Boodman, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 1:44 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Tue, Apr 27, 2010 at 12:31 PM, Aaron Boodman <a...@google.com> wrote:
>> It seems like this issue already exists today, without adding CORS. If
>> you can convince a client to issue an exact request, game over. CORS
>> only exacerbates this problem in that it makes more services available
>> to clients.
>>
>
> Yes, the CORS people are quite clear that (as long as you're using
> GETs and POSTs) any attack you have with CORS exists today with form
> submissions (or at least, they -- and I -- believe that to be the
> case).

This is *not* the case. CORS enables attacks that are not possible
today using only form submissions. Today, there is no safe way for a
web page to read data from another origin. Consequently, there is no
safe way for a victim page to receive a URL (or other identifier) from
an attacker's server. Since the victim page cannot acquire this URL,
it cannot be included in a subsequent request by the victim page.
Consequently, the attacker cannot cause an interaction that exploits a
Confused Deputy vulnerability.

Today, we only have full messaging (with requests and responses) in
Same Origin scenarios, so only two parties are involved. With
cross-origin messaging you have at least 3 parties and so the
possibility of Confused Deputy attacks. A Confused Deputy attack
requires at least 3 parties. CORS enables developers to write 3-party
interactions, but provides a security model where these interactions
are widely vulnerable to Confused Deputy attack. It is a new attack
surface.

--Tyler

Aaron Boodman

unread,
Apr 27, 2010, 6:43:16 PM4/27/10
to Tyler Close, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 3:38 PM, Tyler Close <tjc...@google.com> wrote:
> This is *not* the case. CORS enables attacks that are not possible
> today using only form submissions. Today, there is no safe way for a
> web page to read data from another origin. Consequently, there is no
> safe way for a victim page to receive a URL (or other identifier) from
> an attacker's server. Since the victim page cannot acquire this URL,
> it cannot be included in a subsequent request by the victim page.
> Consequently, the attacker cannot cause an interaction that exploits a
> Confused Deputy vulnerability.

Couldn't you get an origin or identifier via window.postMessage()?

- a

Tyler Close

unread,
Apr 27, 2010, 6:48:38 PM4/27/10
to Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 3:43 PM, Aaron Boodman <a...@google.com> wrote:
> On Tue, Apr 27, 2010 at 3:38 PM, Tyler Close <tjc...@google.com> wrote:
>> This is *not* the case. CORS enables attacks that are not possible
>> today using only form submissions. Today, there is no safe way for a
>> web page to read data from another origin. Consequently, there is no
>> safe way for a victim page to receive a URL (or other identifier) from
>> an attacker's server. Since the victim page cannot acquire this URL,
>> it cannot be included in a subsequent request by the victim page.
>> Consequently, the attacker cannot cause an interaction that exploits a
>> Confused Deputy vulnerability.
>
> Couldn't you get an origin or identifier via window.postMessage()?

Yes, you could. The claim I was disputing was:

> Yes, the CORS people are quite clear that (as long as you're using
> GETs and POSTs) any attack you have with CORS exists today with form
> submissions (or at least, they -- and I -- believe that to be the
> case).

So CORS enables new attacks that did not exist with only forms, or
with other HTML4.01 APIs. Other HTML5 APIs may also be introducing new
Confused Deputy attacks.

--Tyler

Adam Barth

unread,
Apr 27, 2010, 6:53:56 PM4/27/10
to Tyler Close, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 3:48 PM, Tyler Close <tjc...@google.com> wrote:
> On Tue, Apr 27, 2010 at 3:43 PM, Aaron Boodman <a...@google.com> wrote:
>> On Tue, Apr 27, 2010 at 3:38 PM, Tyler Close <tjc...@google.com> wrote:
>>> This is *not* the case. CORS enables attacks that are not possible
>>> today using only form submissions. Today, there is no safe way for a
>>> web page to read data from another origin. Consequently, there is no
>>> safe way for a victim page to receive a URL (or other identifier) from
>>> an attacker's server. Since the victim page cannot acquire this URL,
>>> it cannot be included in a subsequent request by the victim page.
>>> Consequently, the attacker cannot cause an interaction that exploits a
>>> Confused Deputy vulnerability.
>>
>> Couldn't you get an origin or identifier via window.postMessage()?
>
> Yes, you could. The claim I was disputing was:
>
>> Yes, the CORS people are quite clear that (as long as you're using
>> GETs and POSTs) any attack you have with CORS exists today with form
>> submissions (or at least, they -- and I -- believe that to be the
>> case).
>
> So CORS enables new attacks that did not exist with only forms, or
> with other HTML4.01 APIs. Other HTML5 APIs may also be introducing new
> Confused Deputy attacks.

That's not really true. Plenty of folks are doing cross-origin
communication in IE6, which certainly doesn't support HTML5 since
HTML5 was invented after it was released.

Adam

Tyler Close

unread,
Apr 27, 2010, 7:00:32 PM4/27/10
to Adam Barth, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 3:53 PM, Adam Barth <aba...@chromium.org> wrote:
> On Tue, Apr 27, 2010 at 3:48 PM, Tyler Close <tjc...@google.com> wrote:
>> On Tue, Apr 27, 2010 at 3:43 PM, Aaron Boodman <a...@google.com> wrote:
>>> On Tue, Apr 27, 2010 at 3:38 PM, Tyler Close <tjc...@google.com> wrote:
>>>> This is *not* the case. CORS enables attacks that are not possible
>>>> today using only form submissions. Today, there is no safe way for a
>>>> web page to read data from another origin. Consequently, there is no
>>>> safe way for a victim page to receive a URL (or other identifier) from
>>>> an attacker's server. Since the victim page cannot acquire this URL,
>>>> it cannot be included in a subsequent request by the victim page.
>>>> Consequently, the attacker cannot cause an interaction that exploits a
>>>> Confused Deputy vulnerability.
>>>
>>> Couldn't you get an origin or identifier via window.postMessage()?
>>
>> Yes, you could. The claim I was disputing was:
>>
>>> Yes, the CORS people are quite clear that (as long as you're using
>>> GETs and POSTs) any attack you have with CORS exists today with form
>>> submissions (or at least, they -- and I -- believe that to be the
>>> case).
>>
>> So CORS enables new attacks that did not exist with only forms, or
>> with other HTML4.01 APIs. Other HTML5 APIs may also be introducing new
>> Confused Deputy attacks.
>
> That's not really true.  Plenty of folks are doing cross-origin
> communication in IE6, which certainly doesn't support HTML5 since
> HTML5 was invented after it was released.

Note my use of the phrase "safe way". So if your cross-origin
communication requires you to be vulnerable to XSS, you don't have the
expectation of safe cross origin communication.

It might be possible to safely use polling of the fragment id on an
iframe, but that technique is sufficiently esoteric that I think my
argument still stands. CORS is introducing a new 3-party interaction
pattern to a wider audience of developers and not providing a safe
security model for those interactions.
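
For anyone who hasn't seen the trick, the receiving side looks roughly
like this (names are illustrative). The sender navigates the receiver's
frame to the same URL with new data after the '#', which is allowed
even across origins, and the receiver polls its own fragment:

function listenOnFragment(onMessage: (msg: string) => void): void {
  let last = window.location.hash;
  setInterval(() => {
    if (window.location.hash !== last) {
      last = window.location.hash;
      onMessage(decodeURIComponent(last.slice(1)));
    }
  }, 100);
}

// Sending side, from a frame holding a window reference:
//   otherWindow.location = "https://receiver.example/#" + encodeURIComponent(payload);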

--Tyler

Aaron Boodman

unread,
Apr 27, 2010, 7:05:14 PM4/27/10
to Tyler Close, Adam Barth, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 4:00 PM, Tyler Close <tjc...@google.com> wrote:
> It might be possible to safely use polling of the fragment id on a
> iframe, but that technique is sufficiently esoteric that I think my
> argument still stands. CORS is introducing an new 3 party interaction
> pattern to a wider audience of developers and not providing a safe
> security model for those interactions.

There is also the possibility of just using server-to-server
communication. Your attack at its simplest is that a web app (either
the client or server) talks to some other code, receives a URL from
that code, and, then the browser requests that URL with the authority
of the current user.

It seems like this attack doesn't require CORS or HTML5. It existed
before either of those. Right?

- a

Adam Barth

unread,
Apr 27, 2010, 7:08:15 PM4/27/10
to Tyler Close, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
It's not esoteric at all. In fact, it's used on many, many web sites
because that's what Facebook Connect uses for cross-origin
communication.

Adam

Alex Russell

unread,
Apr 27, 2010, 7:15:49 PM4/27/10
to Adam Barth, Tyler Close, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Dimitri Glazkov, Ian Hickson
It's also available in toolkits and the like:

http://api.dojotoolkit.org/jsdoc/1.3.2/dojox.io.proxy.xip

Regards

Tyler Close

unread,
Apr 27, 2010, 7:16:21 PM4/27/10
to Adam Barth, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
I don't think the existence of fragment polling makes it fair for CORS
to claim it is not introducing any new attack vectors. Are you
claiming otherwise?

If we're just nitpicking at phrasing, then re-read the claim I am disputing:

> Yes, the CORS people are quite clear that (as long as you're using
> GETs and POSTs) any attack you have with CORS exists today with form
> submissions (or at least, they -- and I -- believe that to be the
> case).

There's nothing in there about fragment polling.

--Tyler

Ojan Vafai

unread,
Apr 27, 2010, 7:24:24 PM4/27/10
to tjc...@google.com, Adam Barth, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
This discussion has gotten unproductive. The following seem clear:
1) No one on the Chrome team who has read this thread approves of dropping CORS support. Also, there is general agreement that supporting UMP is fine as a subset of CORS.
2) We all agree with the path forward that Adam proposed.

Can we move that proposal over to the public-webapps discussion and kill this thread?

Ojan

Tyler Close

unread,
Apr 27, 2010, 7:24:53 PM4/27/10
to Aaron Boodman, Adam Barth, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 4:05 PM, Aaron Boodman <a...@google.com> wrote:
> On Tue, Apr 27, 2010 at 4:00 PM, Tyler Close <tjc...@google.com> wrote:
>> It might be possible to safely use polling of the fragment id on a
>> iframe, but that technique is sufficiently esoteric that I think my
>> argument still stands. CORS is introducing an new 3 party interaction
>> pattern to a wider audience of developers and not providing a safe
>> security model for those interactions.
>
> There is also the possibility of just using server-to-server
> communication. Your attack at its simplest is that a web app (either
> the client or server) talks to some other code, receives a URL from
> that code, and, then the browser requests that URL with the authority
> of the current user.

That last step "with the authority of the current user." is the
crucial bit. That's why CORS is creating a new vulnerability and UMP is
not.

> It seems like this attack doesn't require CORS or HTML5. It existed
> before either of those. Right?

CORS provides a way to easily express the vulnerability in purely
client-side code. Since client-side code has use of the user's cookies
and runs behind firewalls, whereas server-side code does not, this new
way of expressing the vulnerability is especially dangerous.

--Tyler

Aaron Boodman

unread,
Apr 27, 2010, 7:32:06 PM4/27/10
to Ojan Vafai, tjc...@google.com, Adam Barth, Dirk Pranke, Mark S. Miller, da...@google.com, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 4:24 PM, Ojan Vafai <oj...@google.com> wrote:
> This discussion has gotten unproductive. The following seem clear:

I don't feel like the discussion has been unproductive. It has
actually been super helpful for me, anyway, to work through these
cases. We aren't arguing past each other... I think we're asking good
questions that Tyler is answering.

I agree this discussion makes more sense on public-webapps, though.

- a

Ojan Vafai

unread,
Apr 27, 2010, 7:50:51 PM4/27/10
to Aaron Boodman, tjc...@google.com, Adam Barth, Dirk Pranke, Mark S. Miller, da...@google.com, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 4:32 PM, Aaron Boodman <a...@google.com> wrote:
On Tue, Apr 27, 2010 at 4:24 PM, Ojan Vafai <oj...@google.com> wrote:
> This discussion has gotten unproductive. The following seem clear:

I don't feel like the discussion has been unproductive. It has
actually been super helpful for me, anyway, to work through these
cases. We aren't arguing past each other... I think we're asking good
questions that Tyler is answering.

I agree this discussion makes more sense on public-webapps, though.

Yeah, sorry. I take that back.

Ojan 
 

Dirk Pranke

unread,
Apr 27, 2010, 9:50:51 PM4/27/10
to Tyler Close, Aaron Boodman, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 3:38 PM, Tyler Close <tjc...@google.com> wrote:
> On Tue, Apr 27, 2010 at 1:44 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> On Tue, Apr 27, 2010 at 12:31 PM, Aaron Boodman <a...@google.com> wrote:
>>> It seems like this issue already exists today, without adding CORS. If
>>> you can convince a client to issue an exact request, game over. CORS
>>> only exacerbates this problem in that it makes more services available
>>> to clients.
>>>
>>
>> Yes, the CORS people are quite clear that (as long as you're using
>> GETs and POSTs) any attack you have with CORS exists today with form
>> submissions (or at least, they -- and I -- believe that to be the
>> case).
>
> This is *not* the case. CORS enables attacks that are not possible
> today using only form submissions. Today, there is no safe way for a
> web page to read data from another origin. Consequently, there is no
> safe way for a victim page to receive a URL (or other identifier) from
> an attacker's server. Since the victim page cannot acquire this URL,
> it cannot be included in a subsequent request by the victim page.
> Consequently, the attacker cannot cause an interaction that exploits a
> Confused Deputy vulnerability.

To be clear, I believe that the requests that can be done with CORS can
be done today, and you can have the reply sent to a hidden iframe. I
think you're right that the contents of the hidden iframe cannot be read
by the attacker, and so there is a difference (now that I think about
it; for some reason I was thinking the hidden iframe attack did give you
access to the response). Adam, do you agree with this?

Assuming this reasoning is correct, this should be re-posted on
public-webapps, because I believe the general message of the CORS
advocates is "no new attack surface" and so this is an important point
(fragment identifiers aside).

Thank you for clarifying that.

-- Dirk

Aaron Boodman

unread,
Apr 27, 2010, 9:58:05 PM4/27/10
to Tyler Close, Adam Barth, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 4:24 PM, Tyler Close <tjc...@google.com> wrote:
Sorry, I just wanted to finish this thought in case anyone is still
trying to understand the UMP vs CORS debate.

> On Tue, Apr 27, 2010 at 4:05 PM, Aaron Boodman <a...@google.com> wrote:
>> On Tue, Apr 27, 2010 at 4:00 PM, Tyler Close <tjc...@google.com> wrote:
>>> It might be possible to safely use polling of the fragment id on a
>>> iframe, but that technique is sufficiently esoteric that I think my
>>> argument still stands. CORS is introducing an new 3 party interaction
>>> pattern to a wider audience of developers and not providing a safe
>>> security model for those interactions.
>>
>> There is also the possibility of just using server-to-server
>> communication. Your attack at its simplest is that a web app (either
>> the client or server) talks to some other code, receives a URL from
>> that code, and, then the browser requests that URL with the authority
>> of the current user.
>
> That last step "with the authority of the current user." is the
> crucial bit. That's why CORS is creating new vulnerability and UMP is
> not.

I think this vulnerability existed prior to either CORS, UMP, or HTML5.

There were already many ways to make a request from the client to some
origin with the user's authority:

* XHR to the same origin
* Images, css, javascript, and iframes across origins (GET only)
* Forms across origins (GET and POST)

There were already ways to communicate with other servers:

* server-to-server communication
* the iframe fragment trick
* plugins

So it seems like it was already possible to write a system that was
vulnerable to receiving or constructing a URL based on external input,
and requesting it. It was already possible to make your site a deputy.
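
The form case in particular needs no cooperation from anyone; a minimal
sketch (the target URL and fields are of course made up):

function firePost(action: string, fields: Record<string, string>): void {
  const form = document.createElement("form");
  form.method = "POST";
  form.action = action; // cross-origin target; the user's cookies go along for the ride
  for (const [name, value] of Object.entries(fields)) {
    const input = document.createElement("input");
    input.type = "hidden";
    input.name = name;
    input.value = value;
    form.appendChild(input);
  }
  document.body.appendChild(form);
  form.submit(); // no response is readable, but the request is made
}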

I agree it is slightly easier with CORS (and UMP for that matter)
because it can easily be implemented entirely in client code without
library support. But it doesn't seem like this attack actually happens
in practice. If it does happen, it is completely dwarfed by the other
security problems like XSS and XSRF.

Since XSRF is mitigated very nicely by CORS, I think it should be deployed.

I also think it would be cool to deploy UMP (or GuestXHR) or whatever.
It seems very useful.

- a

Dirk Pranke

unread,
Apr 27, 2010, 10:01:12 PM4/27/10
to Aaron Boodman, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 2:30 PM, Aaron Boodman <a...@google.com> wrote:
> On Tue, Apr 27, 2010 at 1:44 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> On Tue, Apr 27, 2010 at 12:31 PM, Aaron Boodman <a...@google.com> wrote:
>>> I realized my example was not that great. It doesn't demonstrate a new
>>> issue with CORS because it doesn't actually use CORS for the attack.
>>> CORS is only used in my example to interact with the attacker, not to
>>> interact with the target of the attack
>>> (update-direct-deposit-details).
>>>
>>> It seems like this issue already exists today, without adding CORS. If
>>> you can convince a client to issue an exact request, game over. CORS
>>> only exacerbates this problem in that it makes more services available
>>> to clients.
>>>
>>
>> Yes, the CORS people are quite clear that (as long as you're using
>> GETs and POSTs) any attack you have with CORS exists today with form
>> submissions (or at least, they -- and I -- believe that to be the
>> case).
>
> My understanding is that in a browser that implements CORS, all
> cross-origin requests (not just those coming from XHR) will carry the
> Origin header. I couldn't find it in the spec, so I can't link to it,
> and might be wrong. The rest of this mail is based on that assumption.
>

I believe the Origin header is now being sent for pretty much any
request, not just XHRs, and so doing Origin-based mitigation of XSRFs
is largely independent of CORS. Adam, is this right?

-- Dirk

Aaron Boodman

unread,
Apr 27, 2010, 10:09:46 PM4/27/10
to Dirk Pranke, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 7:01 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> I believe the Origin header is now being sent for pretty much any
> request, not just XHRs, and so doing Origin-based mitigation of XSRFs
> is largely independent of CORS. Adam, is this right?

Yeah, I was going to bring that up. It seems like always sending the
Origin header is pretty awesome by itself, independent of CORS. The
actual ability to make cross-origin requests with XHR can be debated
separately.

- a

Aaron Boodman

unread,
Apr 27, 2010, 10:27:13 PM4/27/10
to Dirk Pranke, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 6:50 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> To be clear, I believe that the requests that can be done with CORS
> can be done today,
> and you can have the reply sent to a hidden iframe. I think you're
> right that the contents
> of the hidden iframe cannot be read by the attacker, and so there is a
> difference (now
> that I think about it; for some reason I was thinking the hidden
> iframe attack did give
> you access to the response). Adam, do you agree with this?

This is true. You can't read the response with iframes, you can with
CORS. But the server must opt in to the client being able to read the
response with a special header, so there is still no new attack
surface directly.
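
Concretely, the opt-in is the server echoing back an
Access-Control-Allow-Origin header; something like the sketch below
(the helper and the whitelist are illustrative):

function corsHeadersFor(requestOrigin: string | null): Record<string, string> {
  const allowed = new Set(["https://photo.example.com"]);
  if (requestOrigin !== null && allowed.has(requestOrigin)) {
    return {
      "Access-Control-Allow-Origin": requestOrigin,
      "Access-Control-Allow-Credentials": "true", // only if cookies are expected
    };
  }
  return {}; // no header -> the requesting page cannot read the response
}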

I think the best argument against CORS is that it creates more
JavaScript out in the world that is making requests with XHR and
sending the user's credentials. If an attacker can trick one of these
clients into sending a request of the attacker's choosing, the
attacker wins.

My theory is that this doesn't happen very much in practice, though
the atom example does give me pause.

- a

Tyler Close

unread,
Apr 28, 2010, 1:02:31 PM4/28/10
to Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 7:09 PM, Aaron Boodman <a...@google.com> wrote:
> On Tue, Apr 27, 2010 at 7:01 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>> I believe the Origin header is now being sent for pretty much any
>> request, not just XHRs, and so doing Origin-based mitigation of XSRFs
>> is largely independent of CORS. Adam, is this right?
>
> Yeah, I was going to bring that up. It seems like always sending the
> Origin header is a pretty awesome by itself, independent of CORS. The
> actual ability to make cross-origin requests with XHR can be debated
> separately.

The Origin header is one of those things that at first seems like a
good idea, but turns out to actually be a bad idea once you dig into
the details. The crux of the problem is that once you're doing
multi-party messaging, the content of any particular request is
actually determined by an intricate collaboration amongst many
parties. One party caused one identifier to be present and another
party caused another one to be present and yet another one was the
immediate sender of the request. In this situation, there's no useful
definition for the "origin" of the request. By pegging one party as
the originator of the whole request, you misrepresent what's actually
happening and the intent of the parties involved and so make incorrect
access decisions. In a CSRF attack, cookies similarly misrepresent the
intent of a message sender, by treating the whole request as the
intent of the user when the reality is that an attacker specified the
body of the request. The Origin header will be used in similar ways by
attackers.

--Tyler

Tyler Close

unread,
Apr 28, 2010, 12:39:17 PM4/28/10
to Dirk Pranke, Aaron Boodman, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, aba...@chromium.org, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
It's not just that an attacker can't read a response, but that before
CORS, a victim couldn't either. So a victim web page makes a
cross-origin request to an attacker's server. The attacker returns an
unexpected URL to the victim page. If the victim page makes use of
this unexpected URL, there is a Confused Deputy attack. CORS makes it
easy for a victim page to receive a URL (or other identifier) from an
attacker. CORS doesn't make it safe for the victim to use this URL.
UMP also makes it easy for a victim page to receive a URL from an
attacker. The main difference is that UMP makes it safe for the victim
to use this URL.

--Tyler

Adam Barth

unread,
Apr 28, 2010, 3:11:45 PM4/28/10
to Dirk Pranke, Aaron Boodman, Tyler Close, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
On Tue, Apr 27, 2010 at 7:01 PM, Dirk Pranke <dpr...@chromium.org> wrote:
> On Tue, Apr 27, 2010 at 2:30 PM, Aaron Boodman <a...@google.com> wrote:
>> On Tue, Apr 27, 2010 at 1:44 PM, Dirk Pranke <dpr...@chromium.org> wrote:
>>> On Tue, Apr 27, 2010 at 12:31 PM, Aaron Boodman <a...@google.com> wrote:
>>>> I realized my example was not that great. It doesn't demonstrate a new
>>>> issue with CORS because it doesn't actually use CORS for the attack.
>>>> CORS is only used in my example to interact with the attacker, not to
>>>> interact with the target of the attack
>>>> (update-direct-deposit-details).
>>>>
>>>> It seems like this issue already exists today, without adding CORS. If
>>>> you can convince a client to issue an exact request, game over. CORS
>>>> only exacerbates this problem in that it makes more services available
>>>> to clients.
>>>>
>>>
>>> Yes, the CORS people are quite clear that (as long as you're using
>>> GETs and POSTs) any attack you have with CORS exists today with form
>>> submissions (or at least, they -- and I -- believe that to be the
>>> case).
>>
>> My understanding is that in a browser that implements CORS, all
>> cross-origin requests (not just those coming from XHR) will carry the
>> Origin header. I couldn't find it in the spec, so I can't link to it,
>> and might be wrong. The rest of this mail is based on that assumption.
>
> I believe the Origin header is now being sent for pretty much any
> request, not just XHRs, and so doing Origin-based mitigation of XSRFs
> is largely independent of CORS. Adam, is this right?

We're currently sending the Origin header with every POST request.
The uses of the origin header for CORS and CSRF defense are largely
independent.

Adam

Adam Barth

unread,
Apr 28, 2010, 3:14:30 PM4/28/10
to tjc...@google.com, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
We've had this discussion many times and I don't care to rehearse the
arguments here. If folks are interested, many, many words have been
written on this topic on the public-webapps and the ietf-http-wg
mailing lists.

Adam

Tyler Close

unread,
Apr 28, 2010, 3:41:35 PM4/28/10
to Adam Barth, Aaron Boodman, Dirk Pranke, Mark S. Miller, da...@google.com, Ojan Vafai, jor...@google.com, ife...@google.com, Chromium-dev, Alex Russell, Dimitri Glazkov, Ian Hickson
Searching for the right email is tricky. I'm not even sure I could do
it having participated in many of those discussions. For the benefit
of the current discussion, I'll explain my understanding of Adam's
response. Hopefully Adam will correct me if I have misunderstood.

I think Adam advertises the Origin header as an easy-to-adopt solution
to the basic CSRF attack. He doesn't believe Origin offers a general
purpose solution to all Confused Deputy vulnerabilities.

I in turn argue that we already have good defences for CSRF that can
be extended to provide general purpose defence against Confused Deputy
vulnerabilities. The Origin header only addresses one case, is a new
credential that will itself be the subject of Confused Deputy attacks
and shifts focus away from solutions to the fuller problem.

As I noted in the quoted text, in the general case, the Origin header
is unworkable. Code can get into the general case very quickly and
easily. For example, none of the deployed CORS implementations handles
redirects as specified by CORS and they all do slightly different
things when compared to each other. A redirected request is a
relatively simple case where it's hard to pin down what the origin of
a request is.

--Tyler