
Content Security Policy feedback


Bil Corry

Nov 17, 2008, 5:19:53 PM
Giorgio Maone mentioned CSP on the OWASP Intrinsic Security list[1]
and I wanted to provide some feedback.

(1) Something that appears to be missing from the spec is a way for
the browser to advertise to the server that it will support Content
Security Policy, possibly with the CSP version. By having the browser
send an additional header, it allows the server to make decisions
about the browser, such as limiting access to certain resources,
denying access, redirecting to an alternate site that tries to
mitigate using other techniques, etc. Without the browser advertising
if it will follow the CSP directives, one would have to test for
browser compliance, much like how tests are done now for cookie and
JavaScript support (maybe that isn't a bad thing?).

(2) Currently the spec allows/denies based on the host name, it might
be worthwhile to allow limiting it to a specific path as well. For
example, say you use Google's custom search engine, one way to
implement it is to use a script that sits on www.google.com (e.g.
http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
By having an allowed path, you could prevent loading other scripts
from the www.google.com domain.
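The path-based restriction being proposed here could be sketched roughly as follows (illustrative only; the list name and function are hypothetical and appear in no CSP draft — an allowed source becomes a host plus path prefix rather than a bare hostname):

```python
# Hypothetical sketch of path-level allowlisting: an allowed script
# source is a (host, path-prefix) pair instead of just a hostname.
from urllib.parse import urlparse

ALLOWED_SCRIPT_SOURCES = [
    ("www.google.com", "/coop/cse/"),  # only the custom-search script path
]

def script_allowed(url):
    """Return True if `url` matches an allowed (host, path-prefix) pair."""
    parts = urlparse(url)
    return any(
        parts.hostname == host and parts.path.startswith(prefix)
        for host, prefix in ALLOWED_SCRIPT_SOURCES
    )
```

Under such a rule, the custom-search script above would load, while any other script on www.google.com would be refused.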

(3) Currently the spec focuses on the "host items" -- has any thought
been given to allowing CSP to extend to sites being referenced by "host
items"? That is, allowing a site to specify that it can't be embedded
on another site via frame or object, etc? I imagine it would be
similar to the Access Control for XS-XHR[2].


- Bil

[1] https://lists.owasp.org/pipermail/owasp-intrinsic-security/2008-November/000062.html
[2] http://www.w3.org/TR/access-control/

Gervase Markham

Nov 20, 2008, 12:39:45 PM
to Bil Corry, Brandon Sterne
Bil Corry wrote:
> Giorgio Maone mentioned CSP on the OWASP Intrinsic Security list[1]
> and I wanted to provide some feedback.
>
> (1) Something that appears to be missing from the spec is a way for
> the browser to advertise to the server that it will support Content
> Security Policy, possibly with the CSP version.

That's intentional. CSP is a backstop solution, not front-line security.
If you are depending on the presence of CSP, as the lolcats say, U R
Doin It Wrong.

> (2) Currently the spec allows/denies based on the host name, it might
> be worthwhile to allow limiting it to a specific path as well. For
> example, say you use Google's custom search engine, one way to
> implement it is to use a script that sits on www.google.com (e.g.
> http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> By having an allowed path, you could prevent loading other scripts
> from the www.google.com domain.

For this and the next one, I'll wait for bsterne to reply, as he's doing
the implementation and speccing work.

> (3) Currently the spec focuses on the "host items" -- has any thought
> been given to allowing CSP to extend to sites being referenced by "host
> items"? That is, allowing a site to specify that it can't be embedded
> on another site via frame or object, etc? I imagine it would be
> similar to the Access Control for XS-XHR[2].

I would suspect that would be out of scope.

Gerv

bsterne

Nov 20, 2008, 5:37:49 PM
On Nov 17, 2:19 pm, Bil Corry <b...@corry.biz> wrote:
> (1) Something that appears to be missing from the spec is a way for
> the browser to advertise to the server that it will support Content
> Security Policy, possibly with the CSP version.  By having the browser
> send an additional header, it allows the server to make decisions
> about the browser, such as limiting access to certain resources,
> denying access, redirecting to an alternate site that tries to
> mitigate using other techniques, etc.  Without the browser advertising
> if it will follow the CSP directives, one would have to test for
> browser compliance, much like how tests are done now for cookie and
> JavaScript support (maybe that isn't a bad thing?).

This isn't a bad idea, as I have seen this sort of "compatibility
level" used successfully elsewhere. If future changes are made to the
model which would define restrictions for new types of content (e.g.
<video>), or which would affect the default behaviors for how content
is allowed to load, then it will be useful to servers to have their
clients' CSP version information. If we are going to add this to the
model, then we should do so from the beginning to avoid the
potentially messy browser compliance testing that would result after
the first set of changes.

> (2) Currently the spec allows/denies based on the host name, it might
> be worthwhile to allow limiting it to a specific path as well.  For
> example, say you use Google's custom search engine, one way to

> implement it is to use a script that sits on www.google.com (e.g.
> http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> By having an allowed path, you could prevent loading other scripts
> from the www.google.com domain.

I don't have a strong opinion on this one. My initial reaction is
that it adds complexity to the model, but perhaps complexity that's
warranted if people feel it's a useful feature. Do you have some
specific use cases to share which would demonstrate the usefulness of
your suggestion?

> (3) Currently the spec focuses on the "host items" -- has any thought
> been given to allowing CSP to extend to sites being referenced by "host
> items"?  That is, allowing a site to specify that it can't be embedded
> on another site via frame or object, etc?  I imagine it would be
> similar to the Access Control for XS-XHR[2].

I would agree with Gerv, that this feels a bit out of scope for this
particular proposal.

Cheers,
Brandon

Bil Corry

Nov 21, 2008, 11:56:30 AM
On Nov 20, 4:37 pm, bsterne <bste...@mozilla.com> wrote:
> On Nov 17, 2:19 pm, Bil Corry <b...@corry.biz> wrote:
>
> > (1) Something that appears to be missing from the spec is a way for
> > the browser to advertise to the server that it will support Content
> > Security Policy, possibly with the CSP version.  By having the browser
> > send an additional header, it allows the server to make decisions
> > about the browser, such as limiting access to certain resources,
> > denying access, redirecting to an alternate site that tries to
> > mitigate using other techniques, etc.  Without the browser advertising
> > if it will follow the CSP directives, one would have to test for
> > browser compliance, much like how tests are done now for cookie and
> > JavaScript support (maybe that isn't a bad thing?).
>
> This isn't a bad idea, as I have seen this sort of "compatibility
> level" used successfully elsewhere.  If future changes are made to the
> model which would define restrictions for new types of content (e.g.
> <video>), or which would affect the default behaviors for how content
> is allowed to load, then it will be useful to servers to have their
> clients' CSP version information.  If we are going to add this to the
> model, then we should do so from the beginning to avoid the
> potentially messy browser compliance testing that would result after
> the first set of changes.

I personally see value there for the website, but if 99.9% of websites
will never do anything with the header, then it probably isn't
worthwhile (or it may take version 2 before the need is evident). The
big challenge here is making sure the CSP announcement header cannot
be spoofed via XHR, so to that end, I'd recommend prefixing the header
name with "Sec-" such as "Sec-Content-Security-Policy" -- the latest
draft of XHR2 specifies that any header beginning with "Sec-" is not
allowed to be overwritten with setRequestHeader():

http://www.w3.org/TR/XMLHttpRequest2/#setrequestheader

Of course, XHR2 would have to be implemented in the browsers first in
order to take advantage of the requirement.
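The protection afforded by the "Sec-" prefix can be modeled as a toy filter (this is a sketch of the rule in the XHR2 draft, not a real XHR implementation; the function name is mine):

```python
# Sketch of the XHR2 rule: setRequestHeader() must ignore any header
# whose name starts with "Sec-" (or "Proxy-"), so page script can never
# forge the browser-set Sec-Content-Security-Policy announcement.

FORBIDDEN_PREFIXES = ("sec-", "proxy-")

def set_request_header(headers, name, value):
    """Mimic XHR2: silently drop attempts to set protected headers."""
    if name.lower().startswith(FORBIDDEN_PREFIXES):
        return headers  # spoofing attempt ignored
    headers[name] = value
    return headers
```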


> > (2) Currently the spec allows/denies based on the host name, it might
> > be worthwhile to allow limiting it to a specific path as well.  For
> > example, say you use Google's custom search engine, one way to

> > implement it is to use a script that sits on www.google.com (e.g.
> > http://www.google.com/coop/cse/brand?form=cse-search-box&lang=en).
> > By having an allowed path, you could prevent loading other scripts
> > from the www.google.com domain.
>
> I don't have a strong opinion on this one.  My initial reaction is
> that it adds complexity to the model, but perhaps complexity that's
> warranted if people feel it's a useful feature.  Do you have some
> specific use cases to share which would demonstrate the usefulness of
> your suggestion?

I don't have a specific use case, I'm thinking more of the edge cases
where content is allowed from a domain that allows a multitude of
third-party content. Maybe this is something to explore for v2 if
warranted.


> > (3) Currently the spec focuses on the "host items" -- has any thought
> > been given to allowing CSP to extend to sites being referenced by "host
> > items"?  That is, allowing a site to specify that it can't be embedded
> > on another site via frame or object, etc?  I imagine it would be
> > similar to the Access Control for XS-XHR[2].
>
> I would agree with Gerv, that this feels a bit out of scope for this
> particular proposal.

Then maybe something to consider down the road. It would be useful to
prevent hot linking and clickjacking.

- Bil

Lucas Adamski

Nov 21, 2008, 7:50:28 PM
to Bil Corry, dev-se...@lists.mozilla.org
Bil Corry wrote:
>
> I personally see value there for the website, but if 99.9% of websites
> will never do anything with the header, then it probably isn't
> worthwhile (or it may take version 2 before the need is evident). The
> big challenge here is making sure the CSP announcement header cannot
> be spoofed via XHR, so to that end, I'd recommend prefixing the header
> name with "Sec-" such as "Sec-Content-Security-Policy" -- the latest
> draft of XHR2 specifies that any header beginning with "Sec-" is not
> allowed to be overwritten with setRequestHeader():
>
> http://www.w3.org/TR/XMLHttpRequest2/#setrequestheader
>
> Of course, XHR2 would have to be implemented in the browsers first in
> order to take advantage of the requirement.

My 2c is that if we do this we should do versioning from the get go,
otherwise servers will have a hard time telling CSP v1.0 from CSP
unsupported clients in the future. On one hand this may waste some
bandwidth now, but then again if it saves the server from sending CSP
responses to clients that don't support it, it may actually save
bandwidth and simplify server logic (since servers will be able to
determine conclusively that CSP is supported, rather than guessing).

> I don't have a specific use case, I'm thinking more of the edge cases
> where content is allowed from a domain that allows a multitude of
> third-party content. Maybe this is something to explore for v2 if
> warranted.
>

I think part of the challenge is that CSP governs a number of different
operations, some of which may be meaningful to restrict to a specific
path but others may not be (e.g. scripting vs. asset loading). A few
specific examples would help us get our brains around whether or not
enforcing restrictions on a per-path basis would actually be a
contract that is enforceable.

For (a contrived) example, say mashup.com hosts a number of different
widgets, but myapp.com wants to restrict loading of iframes from only
mashup.com/good. If the user happens to have another app from
mashup.com/bad loaded in another window/tab, then in theory content from
mashup.com/bad could script directly into the iframe containing
mashup.com/good within myapp.com, bypassing the loading restriction.

That is probably not the best example, but the root of this problem is
that scripting permissions are really still only enforceable on a per
fully-qualified domain name basis, regardless of any loading restrictions.

>
>>> (3) Currently the spec focuses on the "host items" -- has any thought
>>> been given to allowing CSP to extend to sites being referenced by "host
>>> items"? That is, allowing a site to specify that it can't be embedded
>>> on another site via frame or object, etc? I imagine it would be
>>> similar to the Access Control for XS-XHR[2].
>> I would agree with Gerv, that this feels a bit out of scope for this
>> particular proposal.
>
> Then maybe something to consider down the road. It would be useful to
> prevent hot linking and clickjacking.

I think the primary reason this seems out of scope is that CSP is a
mechanism for servers to govern their own content, rather than
specifying policies for 3rd party content. The latter seems more like
the domain of Access Control. Access Control AFAIK is not intended just
for XHR2, so I could imagine it being extended to govern opt-out of
cross-domain content loading, as well as to opt-in.

Thank you for your feedback btw, it is much appreciated.
Lucas.

>
> - Bil
> _______________________________________________
> dev-security mailing list
> dev-se...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security

Bil Corry

Nov 22, 2008, 11:22:55 AM
to b...@corry.biz
On Nov 21, 6:50 pm, Lucas Adamski <lu...@mozilla.com> wrote:
> >>> (3) Currently the spec focuses on the "host items" -- has any thought
> >>> been given to allowing CSP to extend to sites being referenced by "host
> >>> items"?  That is, allowing a site to specify that it can't be embedded
> >>> on another site via frame or object, etc?  I imagine it would be
> >>> similar to the Access Control for XS-XHR[2].
> >> I would agree with Gerv, that this feels a bit out of scope for this
> >> particular proposal.
>
> > Then maybe something to consider down the road.  It would be useful to
> > prevent hot linking and clickjacking.
>
> I think the primary reason this seems out of scope is that CSP is a
> mechanism for servers to govern their own content, rather than
> specifying policies for 3rd party content.  The latter seems more like
> the domain of Access Control.  Access Control AFAIK is not intended just
> for XHR2, so I could imagine it being extended to govern opt-out of
> cross-domain content loading, as well as to opt-in.

I was thinking Access Control was close, but it currently has this as
its abstract:

-----
This document defines a mechanism to enable client-side cross-site
requests. Specifications that want to enable cross-site requests in an
API they define can use the algorithms defined by this specification.
If such an API is used on http://example.org resources, a resource on
http://hello-world.example can opt in using the mechanism described by
this specification (e.g., specifying Access-Control-Allow-Origin:
http://example.org as response header), which would allow that
resource to be fetched cross-site from http://example.org.
-----

That to me means it's geared strictly for XHR, but maybe "cross-site
requests" is supposed to include any type of cross-site request,
including img, script, object, etc.

I agree though, Access Control seems like a better fit for this type
of functionality. I'll approach Anne and see what he thinks.

Thanks for the reply,


- Bil

Lucas Adamski

Nov 22, 2008, 3:03:29 PM
to Bil Corry, dev-se...@lists.mozilla.org
Yes, my understanding is that Access Control is actually intended as a
generic cross-site server policy mechanism, and XHR is just its first
implementation. Thanks,
Lucas.

Gervase Markham

Nov 25, 2008, 5:43:59 PM
Lucas Adamski wrote:
> My 2c is that if we do this we should do versioning from the get go,
> otherwise servers will have a hard time telling CSP v1.0 from CSP
> unsupported clients in the future. On one hand this may waste some
> bandwidth now, but then again if it saves the server from sending CSP
> responses to clients that don't support it,

What do you mean by "CSP responses to clients that don't support it"?
What is a "CSP response"? CSP is not supposed to make page authors do
anything different, it's supposed to cover their asses when they mess
up. Relying on CSP is using it for something it's not designed for.

bsterne - I'm not talking crack, right?

Gerv

bsterne

Nov 26, 2008, 12:01:40 PM
On Nov 25, 2:43 pm, Gervase Markham <g...@mozilla.org> wrote:
> What do you mean by "CSP responses to clients that don't support it"?
> What is a "CSP response"? CSP is not supposed to make page authors do
> anything different, it's supposed to cover their asses when they mess
> up. Relying on CSP is using it for something it's not designed for.
>
> bsterne - I'm not talking crack, right?

I think what Lucas is saying is that servers won't send policy to
clients who don't announce that they support CSP.

-Brandon

Bil Corry

Dec 1, 2008, 4:00:21 PM
to b...@corry.biz
On Nov 22, 2:03 pm, Lucas Adamski <lu...@mozilla.com> wrote:
> Yes, my understanding is that Access Control is actually intended as a
> generic cross-site server policy mechanism, and XHR is just its first
> implementation.

Anne confirmed that it's not intended to be XHR-only, however it's not
intended for all types of requests either. He specifically said it
would not work for <iframe> due to cross-site scripting issues.


- Bil

Lucas Adamski

Dec 1, 2008, 8:07:38 PM
to Bil Corry, dev-se...@lists.mozilla.org, Jonas Sicking, Arun Ranganathan
I think this is true, but it kind of depends on how you look at it. I
think sometimes different types of cross-domain operations can get
conflated together:

* cross-domain scripting - when code in one domain has the ability to
access another domain's code or DOM
* cross-domain data importing - transferring data from the context of
one domain into another domain (XHR with AC, stylesheets)
* cross-domain content loading - hands-off content loading operations
such as IFRAME and IMG tags that leave content in their respective
security domains--aka embedding

In this (conveniently simplified) model, since iframe is a content
loading operation, it doesn't need Access Control. Nor am I sure what
it would really even mean to apply Access Control to it (would it be
permitting data importing or scripting?)

Probably the biggest fly in my otherwise nicely-simple ointment is
<SCRIPT SRC=>. Is it cross-domain scripting or data importing? It may
seem like scripting at first blush, but you may not have even
instantiated any code from the source domain, and in the end its not
much different than loading data via XHR+AC and then calling eval() on
it. So I would argue that even <SCRIPT SRC=> is a data import
operation, just one that is (alas) permitted by default and
automatically evals everything it loads.

So perhaps we are just agreeing insofar that Access Control should never
govern cross-domain scripting. Whether it could/should be extended to
govern (opt-out of) cross-domain loading/embedding is an interesting
one. Thanks,
Lucas.

Gervase Markham

Dec 3, 2008, 5:56:26 PM
bsterne wrote:
> I think what Lucas is saying is that servers won't send policy to
> clients who don't announce that they support CSP.

To save 60 bytes in a header?

Gerv

Bil Corry

Dec 3, 2008, 6:22:43 PM
to dev-se...@lists.mozilla.org

No, so that in the event CSPv2 is incompatible with CSPv1, it won't require two response headers to be sent to every client. Instead, since the browser tells the server which version of CSP it's accepting, the server can send back the CSP header in the most recent format that both the client and server understand (e.g. server knows CSPv2, client knows CSPv3, server sends CSPv2 header).


- Bil

Gervase Markham

Dec 8, 2008, 1:32:21 PM
Bil Corry wrote:
> No, so that in the event CSPv2 is incompatible with CSPv1, it won't
> require two response headers to be sent to every client. Instead,
> since the browser tells the server which version of CSP it's
> accepting, the server can send back the CSP header in the most recent
> format that both the client and server understand (e.g. server knows
> CSPv2, client knows CSPv3, server sends CSPv2 header).

That makes no sense. You are saying that servers won't send any policy
at all, now, because in the future they might have to send two headers?

Gerv

Bil Corry

Dec 8, 2008, 4:53:37 PM
to dev-se...@lists.mozilla.org

Let's back up. The CSP method you support (correct me if I'm wrong) is for the server to send a CSP header to all clients. And if the client understands the header, it'll kick on some extra protections not currently afforded to the site. And that's great for CSPv1. But let's take it to the extreme, say there are now five different CSP versions, and none of them are compatible with each other. The server would then have to issue five headers for all five CSP versions and hope the client supports one or more of them:

X-Content-Security-Policy: ...
X-Content-Security-Policy2: ...
X-Content-Security-Policy3: ...
X-Content-Security-Policy4: ...
X-Content-Security-Policy5: ...

I'm suggesting instead that the client announce the CSP version it supports; something like:

Sec-Content-Security-Policy: v3

And the server can respond with just that CSP version:

X-Content-Security-Policy: ... v3 format here ...

So the main benefit is unambiguous communication, not saving bytes in a header.
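The negotiation being described could be sketched like this (illustrative only; the function name and integer version tokens are mine, not anything from the spec):

```python
# Sketch of CSP version negotiation: the client announces the versions
# it understands, and the server answers in the newest format both
# sides support (e.g. server knows up to v2, client up to v3 -> v2).

def negotiate_csp_version(client_versions, server_versions):
    """Pick the highest CSP version both sides support, or None."""
    common = set(client_versions) & set(server_versions)
    return max(common) if common else None
```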

Beyond that, it has other benefits, perhaps the biggest one is being able to measure how many clients are using CSP. How will you measure the success of CSP if you have no way of knowing if 1% of browsers are using it, or 99%? And websites may not want to implement it if they can't see the number of clients affected; if there is only 1% of their visitors using it, maybe they don't want to spend the effort to devise and keep a CSP header up-to-date. But if 99% of their visitors use it, it now becomes more worthwhile.

And there's also debugging -- when some site visitors are having trouble using the site, but others are not, how can the website debug the problem when it's a misconfigured CSP? Will the browser pop up an alert each time there's a CSP violation? If not, and without a client sending a CSP header, it'll be hard to debug.

- Bil

Gervase Markham

Dec 12, 2008, 12:22:15 PM
Bil Corry wrote:
> Let's back up. The CSP method you support (correct me if I'm wrong)
> is for the server to send a CSP header to all clients. And if the
> client understands the header, it'll kick on some extra protections
> not currently afforded to the site. And that's great for CSPv1. But
> let's take it to the extreme, say there are now five different CSP
> versions, and none of them are compatible with each other.

Stop right there. How does this potential future problem dissuade people
from deploying CSP now (which is what you were worried about)?

Anyway, CSP is designed to be forwardly-compatible in syntax. HTTP, a
complex protocol, has had one backwardly-compatible revision in 10+
years. I suspect we won't have five, or even two versions of CSP.
Particularly as it's currently an X- header for testing purposes, and
will move to not being an X- header when it hits 1.0, which allows
breaking changes at that point.

> Beyond that, it has other benefits, perhaps the biggest one is being
> able to measure how many clients are using CSP. How will you measure
> the success of CSP if you have no way of knowing if 1% of browsers
> are using it, or 99%?

This is a feature. The only reason you'd want to do this is to see if
you could rely on it.

Anyway, you could get approximate stats by mapping from browser versions.

Gerv

Bil Corry

Dec 12, 2008, 2:08:01 PM
to dev-se...@lists.mozilla.org
Gervase Markham wrote on 12/12/2008 11:22 AM:
> Bil Corry wrote:
>> Let's back up. The CSP method you support (correct me if I'm wrong)
>> is for the server to send a CSP header to all clients. And if the
>> client understands the header, it'll kick on some extra protections
>> not currently afforded to the site. And that's great for CSPv1. But
>> let's take it to the extreme, say there are now five different CSP
>> versions, and none of them are compatible with each other.
>
> Stop right there. How does this potential future problem dissuade people
> from deploying CSP now (which is what you were worried about)?

It doesn't. For this, I am worried about future, non-compatible revisions. But your point is taken that it may be unlikely to happen.


>> Beyond that, it has other benefits, perhaps the biggest one is being
>> able to measure how many clients are using CSP. How will you measure
>> the success of CSP if you have no way of knowing if 1% of browsers
>> are using it, or 99%?
>
> This is a feature. The only reason you'd want to do this is to see if
> you could rely on it.

The reason site owners want to know how many people are using it is to see if it's worth the effort to implement and maintain it. Take Cookie2 for example. Only Opera supports it, so most sites only use Cookie.


> Anyway, you could get approximate stats by mapping from browser versions.

Browsers without built-in CSP support might have a plug-in available, and browsers with built-in CSP support might have it turned off. But yes, once most browsers have built-in support, you could approximate stats based on user-agent.


In the end, sites that want to know which browsers support CSP will simply test for it (just like cookies and JavaScript); I see the client header as a convenience vs. having to test for it and it offers some other limited benefits, but if it violates the CSP paradigm, then certainly skip the suggestion.


- Bil

Lucas Adamski

Dec 16, 2008, 2:51:59 PM
to Bil Corry, dev-se...@lists.mozilla.org
From this discussion I'm still seeing good reasons to have a version
flag; mainly to allow servers to detect whether a given client
supports CSP (and what version of it) in an unequivocal manner.
Browser version sniffing is not a good solution to that problem IMHO.

If a server is to rely on CSP to reliably enforce security constraints
it needs to know what version the client supports, so it can tailor
its content accordingly. Even if the API is future-compatible, it is
very likely that as web technologies and attacks evolve we will need to
revise CSP to take into account new APIs that need to be governed, and/
or new policies that need to be applied to existing APIs. At which
point the server may need to modify its behavior/content based upon
the specific version of CSP provided.
Lucas.

On Dec 12, 2008, at 11:08 AM, Bil Corry wrote:

> Gervase Markham wrote on 12/12/2008 11:22 AM:

>> Bil Corry wrote:
>>> Let's back up. The CSP method you support (correct me if I'm wrong)
>>> is for the server to send a CSP header to all clients. And if the
>>> client understands the header, it'll kick on some extra protections
>>> not currently afforded to the site. And that's great for CSPv1.
>>> But
>>> let's take it to the extreme, say there are now five different CSP
>>> versions, and none of them are compatible with each other.
>>
>> Stop right there. How does this potential future problem dissuade
>> people
>> from deploying CSP now (which is what you were worried about)?
>

> It doesn't. For this, I am worried about future, non-compatible
> revisions. But your point is taken that it may be unlikely to happen.
>
>

>>> Beyond that, it has other benefits, perhaps the biggest one is being
>>> able to measure how many clients are using CSP. How will you
>>> measure
>>> the success of CSP if you have no way of knowing if 1% of browsers
>>> are using it, or 99%?
>>
>> This is a feature. The only reason you'd want to do this is to see if
>> you could rely on it.
>

> The reason site owners want to know how many people are using it is
> to see if it's worth the effort to implement and maintain it. Take
> Cookie2 for example. Only Opera supports it, so most sites only use
> Cookie.
>
>

>> Anyway, you could get approximate stats by mapping from browser
>> versions.
>

> Browsers without built-in CSP support might have a plug-in
> available, and browsers with built-in CSP support might have it
> turned off. But yes, once most browsers have built-in support, you
> could approximate stats based on user-agent.
>
>
> In the end, sites that want to know which browsers support CSP will
> simply test for it (just like cookies and JavaScript); I see the
> client header as a convenience vs. having to test for it and it
> offers some other limited benefits, but if it violates the CSP
> paradigm, then certainly skip the suggestion.
>
>

Sid Stamm

Dec 17, 2008, 1:38:01 PM
It seems to me that version-compatibility announcement is helpful
unless CSP is intended to be a short-lived or rarely-used security
construct (which I gather is not the point). It is very likely that
CSP's governing scope will change in the future, or an additional
policy may be created to govern other pieces. It is fairly likely
that different, incompatible policy versions will need to be
treated differently by the web server.

In fact, supported version announcement is probably necessary, not
just useful, if implementations of CSP are initially rolled out in
browser add-ons; suddenly there's the possibility of multiple browser/
add-on version combinations, especially when people delay updating
browsers or updating plugins when prompted (old-browser + new-addon,
new-browser + old-addon, etc). In these scenarios user-agent
profiling just won't work reliably. Also, it's not clear we should
burden the web site developers to stay up-to-date on which browsers
support which policies; it might be difficult to track which agents
support which engines, especially for unknown or niche browsers.
Instead, it might be ideal to explicitly tell the server what is
supported so regardless of the user-agent, the server can be fairly
confident it serves an appropriate policy.

A reasonable approach to specify the CSP version might be seen in the
Accept-Charset header, or really any of the Accept-* request headers.
It need not be present, but if it is, such an Accept-Security-Policy
request header can contain which versions the user agent supports
(e.g., CSP-1.0), and can be comma-separated in case multiple versions
or multiple policies are supported. Another option would be to shove
the supported CSP versions into the user agent string, but that's a
nasty abuse of the User-Agent header (though arguably the security
policy enforcement is part of the "platform" on which web apps will
run).
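The Accept-* style header Sid proposes could be parsed on the server side roughly like this (the header name and "CSP-1.0" token format are hypothetical, as in the paragraph above):

```python
# Sketch of parsing a hypothetical Accept-Security-Policy request
# header: a comma-separated list of supported policy versions, e.g.
# "CSP-1.0, CSP-2.0". An absent header means no announced support.

def parse_accept_security_policy(header_value):
    """Return the set of policy version tokens the client advertises."""
    if not header_value:
        return set()
    return {token.strip() for token in header_value.split(",") if token.strip()}
```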

-Sid

Gervase Markham

Dec 17, 2008, 3:23:35 PM
Lucas Adamski wrote:
> From this discussion I'm still seeing good reasons to have a version
> flag; mainly to allow servers to detect whether a given client supports
> CSP (and what version of it) in an unequivocal manner.

How do you react to my point that they shouldn't need to know that
because, if they do, it means they are relying on CSP, which they
shouldn't be?

> If a server is to rely on CSP to reliably enforce security constraints

If it's doing that, it's broken. CSP is explicitly not designed for
this. (As I understand it.)

Gerv

Sid Stamm

Dec 17, 2008, 9:31:43 PM
Gervase Markham wrote:
> > If a server is to rely on CSP to reliably enforce security constraints
> If it's doing that, it's broken. CSP is explicitly not designed for
> this. (As I understand it.)

Maybe it's not completely bad for browsers to advertise whether or not
they support CSP (and which versions). There's a benefit for web
developers who can decide to serve more restricted/filtered content to
browsers that won't "catch them when they fall". This benefit is not
there if browsers don't advertise what they will enforce. For
example, a webmaster who is just learning some new technology
X may not be comfortable enough to serve X content without the safety
net that CSP provides, but is being pressured to add features to his
site. If a client doesn't support CSP, his server can simply not
serve any script content that he isn't sure about, but when CSP is
present and can be enforced, he has that to fall back on and can serve
experimental stuff. In an ideal world, all developers would
understand how all the code their site serves will behave in every
situation, but I doubt this is the case in reality, especially for
smaller, feature-driven sites.

I can see both sides of this issue, though. It is not healthy to rely
on CSP for a primary layer of security, especially since it will take
some time for CSP to be adopted widely (and we *really* don't want to
encourage sloppy design).

-Sid

Lucas Adamski

Dec 18, 2008, 1:01:14 AM
to Gervase Markham, dev-se...@lists.mozilla.org
Hi Gerv,

Well, I think any security feature/model has to have some properties
that are reliable. So CSP may not prevent XSS in the blanket sense,
but it still needs to be able to enforce some set of restrictions that
the developer can rely upon.

Certainly the language within http://people.mozilla.org/~bsterne/content-security-policy/details.html
is unambiguous (i.e. "Scripts from non-white-listed hosts will not
be requested or executed", not "Scripts from non-white-listed hosts
may or may not be requested or executed"). Thanks,
Lucas.

Bil Corry

Dec 18, 2008, 10:30:25 AM
to dev-se...@lists.mozilla.org
Gervase Markham wrote on 12/17/2008 2:23 PM:
> Lucas Adamski wrote:
>> From this discussion I'm still seeing good reasons to have a version
>> flag; mainly to allow servers to detect whether a given client supports
>> CSP (and what version of it) in an unequivocal manner.
>
> How do you react to my point that they shouldn't need to know that
> because, if they do, it means they are relying on CSP, which they
> shouldn't be?

Is CSP supposed to be user-centric or site-centric?

By user-centric, I mean is CSP going to be similar to NoScript and AdBlockPlus where it's up to the user to configure its use and behavior, with the site being able to helpfully suggest the appropriate rules for itself? If so, then I agree, sites should not rely on CSP because who knows how the user has configured CSP to behave.

By site-centric, I mean is CSP going to be entirely driven by the site, so the lack of a CSP header from the site means there is no CSP protection in place? If so, then it is counter-intuitive that the entire model is premised on the site implementing the CSP header, but the site is blind to how many visitors use it and must not rely on CSP to actually do anything. What I think will happen instead is sites that implement it will have some expectation that it does something (otherwise, why implement it?), and they will test to see which browsers are supporting it. And if there is more than one version of CSP, they'll create multiple tests.


- Bil

Lucas Adamski

Dec 18, 2008, 4:48:03 PM
to Bil Corry, dev-se...@lists.mozilla.org
It is site-centric. Someone might write an add-in to monitor or
modify content policies but that's not a core use case.
Lucas.

On Dec 18, 2008, at 7:30 AM, Bil Corry wrote:

> Gervase Markham wrote on 12/17/2008 2:23 PM:
>> Lucas Adamski wrote:
>>> From this discussion I'm still seeing good reasons to have a version
>>> flag; mainly to allow servers to detect whether a given client
>>> supports
>>> CSP (and what version of it) in an unequivocal manner.
>>
>> How do you react to my point that they shouldn't need to know that
>> because, if they do, it means they are relying on CSP, which they
>> shouldn't be?
>

> Is CSP supposed to be user-centric or site-centric?
>
> By user-centric, I mean is CSP going to be similar to NoScript and
> AdBlockPlus where it's up to the user to configure its use and
> behavior, with the site being able to helpfully suggest the
> appropriate rules for itself? If so, then I agree, sites should not
> rely on CSP because who knows how the user has configured CSP to
> behave.
>
> By site-centric, I mean is CSP going to be entirely driven by the
> site, so the lack of a CSP header from the site means there is no
> CSP protection in place? If so, then it is counter-intuitive that
> the entire model is premised on the site implementing the CSP
> header, but the site is blind to how many visitors use it and must
> not rely on CSP to actually do anything. What I think will happen
> instead is sites that implement it will have some expectation that
> it does something (otherwise, why implement it?), and they will test
> to see which browsers are supporting it. And if there is more than
> one version of CSP, they'll create multiple tests.
>
>

Gervase Markham

Dec 19, 2008, 12:30:55 PM
Sid Stamm wrote:
> Gervase Markham wrote:
>>> If a server is to rely on CSP to reliably enforce security constraints
>> If it's doing that, it's broken. CSP is explicitly not designed for
>> this. (As I understand it.)
>
> Maybe it's not completely bad for browsers to advertise whether or not
> they support CSP (and which versions). There's a benefit for web
> developers who can decide to serve more restricted/filtered content to
> browsers that won't "catch them when they fall".

If there's additional filtering they know how to do, they should be
doing it for everyone.

> example, consider a webmaster who is just learning some new technology
> X may not be comfortable enough to serve X content without a safety
> net that CSP provides, but is being pressured to add features to his
> site.

Then he shouldn't use X. (Who designed X to be unsafe by default? Go
shoot them. :-)

Gerv

Gervase Markham

Dec 19, 2008, 12:35:54 PM
Bil Corry wrote:
> Is CSP suppose to be user-centric or site-centric?

Using your definitions, it's site-centric.

> By site-centric, I mean is CSP going to be entirely drive by the
> site, so the lack of a CSP header from the site means there is no CSP
> protection in place? If so, then it is counter-intuitive that the
> entire model is premised on the site implementing the CSP header, but
> the site is blind to how many visitors use it and must not rely on
> CSP to actually do anything. What I think will happen instead is
> sites that implement it will have some expectation that it does
> something (otherwise, why implement it?),

Because it might save you when you screw up. That's the entire point of
it. If you never screw up, you don't need to use it (and please come and
work for me).

If you do screw up, people using browser which support CSP will be saved
(and will, perhaps, be able to warn you that you've screwed up) and
people using other browsers won't be saved. Such is life. It was still
worth implementing it, even if you didn't mean to screw up and even if
some people still get attacked.

Gerv

Gervase Markham

Dec 19, 2008, 12:35:57 PM
Lucas Adamski wrote:
> Well, I think any security feature/model has to have some properties
> that are reliable. So CSP may not prevent XSS is the blanket sense, but
> it still needs to be able to enforce some set of restrictions that the
> developer can rely upon.

Your second sentence doesn't follow from your first, in this context.

Yes, if CSP promises it'll prevent exact attack scenario X, it should
prevent X, and if it doesn't prevent X, it's a bug. (But all that's
really saying is that it's deterministic.) No, that doesn't mean that
developers should rely on a particular browser preventing attack X.
There may be a bug, the user may have turned it off, there may be a very
similar attack Y using the same flaw which CSP can't prevent, and so on.

Gerv

Lucas Adamski

Dec 19, 2008, 1:18:07 PM
to Gervase Markham, dev-se...@lists.mozilla.org
Developers rely on the browser security model in countless ways
already. A fundamental attribute of security models is reliability.
I realize that not all browsers will have CSP in the foreseeable
future, but that is orthogonal to being able to detect & rely upon
CSP when it is present. And so no, I don't think there is an
inconsistency in my earlier statements below.

> there may be a bug
- we fix it

> the user may have turned it off

- that's why you need to send a CSP supported header, and not rely on
version sniffing. Furthermore, not sure why the user would turn it
off (does the user turn off same-origin restrictions, or cross-frame
navigation restrictions, or ...)

> there may be a very similar attack Y using the same flaw which CSP
> can't prevent, and so on.

- which is why we aren't preventing attacks; we are enforcing
policies. There is no "PREVENT XSS" switch in CSP for that reason.
If anything, this is a compelling argument for versioning, because we
may have to update CSP in the future to modify existing policies or
add new ones.

Lucas.

Sid Stamm

Dec 19, 2008, 1:53:32 PM
On Dec 19, 12:30 pm, Gervase Markham <g...@mozilla.org> wrote:
> > Maybe it's not completely bad for browsers to advertise whether or not
> > they support CSP (and which versions).  There's a benefit for web
> > developers who can decide to serve more restricted/filtered content to
> > browsers that won't "catch them when they fall".
> If there's additional filtering they know how to do, they should be
> doing it for everyone.

I'm not sure I agree with that... take for instance a browser that
only supports SSL v2 (and not 3): a site concerned with avoiding MITM
attacks might serve different content (or none) to someone whose
browser only supports SSL v2, and serve all the site's content to
someone whose browser supports v3. That doesn't warrant blocking
content to all visitors regardless of what security constructs their
browser supports. If the filtering in question just removes possibly-
evil data, then yeah, it should be done for everyone. However, the
filtering in question might remove site functionality because the
client's browser may not play nice.

> > consider a webmaster who is just learning some new technology
> > X may not be comfortable enough to serve X content without a safety
> > net that CSP provides, but is being pressured to add features to his
> > site.  
> Then he shouldn't use X. (Who designed X to be unsafe by default? Go
> shoot them. :-)

I see your point. One would hope X is not *designed* to be unsafe,
but it might not be rock-solid, with a history of security issues
(like Flash). The webmaster might not feel completely confident in
his mastery of it, so he only feels comfortable providing Flash-based
content to people whose browsers will help protect them. I block
Flash content from most sites (and don't employ it on my own web
sites), but might change my ways if CSP were available to help out
with more CSRF protection.

-Sid

Bil Corry

Dec 20, 2008, 8:40:55 AM
to dev-se...@lists.mozilla.org
Bil Corry wrote on 12/18/2008 9:30 AM:
> By user-centric, I mean is CSP going to be similar to NoScript and
> AdBlockPlus where it's up to the user to configure its use and
> behavior, with the site being able to helpfully suggest the

> appropriate rules for itself? If so, then I agree, sites should not
> rely on CSP because who knows how the user has configured CSP to
> behave.

Here's a good example of "user-centric", Giorgio Maone's ABE:

http://hackademix.net/2008/12/20/introducing-abe/

The details of it are here:

http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf

So while ABE doesn't send a request header advertising itself, due to the user-centric nature of the protection, it doesn't seem necessary to me. I do admit there's a fine line here that's entirely based on how CSP and ABE have been framed for use.


- Bil

Gervase Markham

Dec 23, 2008, 10:33:02 AM
Sid Stamm wrote:
> I'm not sure I agree with that... take for instance a browser that
> only supports SSL v2 (and not 3):

That's a difficult "for instance" to accept, because there aren't any.
At least, not that anyone uses.

> a site concerned with avoiding MITM
> attacks might serve different content (or none) to someone whose
> browser only supports SSL v2, and serve all the site's content to
> someone whose browser supports v3. That doesn't warrant blocking
> content to all visitors regardless of what security constructs their
> browser supports.

Right. In that far-fetched scenario, they might. But the security
provided by SSL (privacy, authentication) is very different to the
security provided by CSP (anti-XSS), so the analogy doesn't hold.
Security is a multi-faceted beast.

> I see your point. One would hope X is not *designed* to be unsafe,
> but it might not be rock-solid, with a history of security issues
> (like Flash). The webmaster might not feel completely comfortable
> with his mastery of it, so only feels comfortable providing Flash-
> based content to people whose browsers will help protect them.

In which case, for the foreseeable future, he won't be providing it to
many people. :-) Again, CSP is here being used as a front line of
defence, and it shouldn't be.

Another feature of CSP is "herd immunity" - it doesn't have to be used
by everyone to be helpful.

Gerv

Gervase Markham

Dec 23, 2008, 10:34:46 AM
Lucas Adamski wrote:
> Developers rely on the browser security model in countless ways
> already. A fundamental attribute of security models is reliability.

I am not arguing we should make CSP work a random 50% of the time. I am
arguing that CSP is not a "security model", it's a "phew, I would have
just got stuffed, but it saved me this time" model. Security models are
things you rely on. CSP is a second line of defence for when your
security model fails, and it doesn't promise to save your ass every time.

Gerv

Sid Stamm

Jan 5, 2009, 2:52:40 PM
Gervase Markham <g...@mozilla.org> wrote:
> Security is a multi-faceted beast.
Point taken, and I agree, it was a crappy analogy.

> Again, CSP is here being used as a front line of
> defence, and it shouldn't be.

I agree with you... optimally, CSP should not be front-line defense.
But for it to be helpful in practice, there must be a motivation for
people to put it on their sites.

What worries me is that with no assurance that they're enforced, CSP
policies won't be provided by web sites since it takes time (granted,
not much of it) to compose them. It's likely that a profit-driven
company might rather have their engineers spend time fuzzing or bug
fixing than designing a good CSP string that may or may not ever be
used.

One point of view is, screw 'em... sites that don't provide CSP will
just be vulnerable to more XSS attacks, and it is only skin off their
own back. On the other hand, the client through his browser is
usually the real victim, not the site, and I think we want to
encourage sites to give as much protection to the client as possible.
This might mean tailoring CSP a bit to give companies motivation to
put CSP into their sites.

Though a good policy can perhaps help them later identify possible
vulnerabilities, it may not be obviously beneficial in the short run
and won't be enough to make up for the fact that the site can't tell
whether or not their CSP is helping out at all (and so they won't
provide it).

> Another feature of CSP is "herd immunity" -
> it doesn't have to be used by everyone to
> be helpful.

Surely using CSP won't *hurt*, but I think that it will only help the
people who use it. Herd immunity applies mainly to viral spreads or
epidemics, and I would argue that most of what CSP prevents are not
viral attacks. A few browsers with CSP can help slow the spread of an
XSS worm to the rest of the "herd", but it won't change the
persistent or reflected XSS attacks to steal contact lists or deface a
site that doesn't use CSP.

These one-shot (non-viral) attacks only become less frequent when it
becomes more futile to try. CSP actually has to be adopted enough by
sites in practice (and not just theorized) to make attacks it prevents
less attractive, and thus reduce the overall number of attempted
attacks. For instance, if only 10% of visitors to an XSS-defaced site
enforce CSP, attackers will probably still deface that site because
90% isn't bad. If we can make it irrational to attack a site (by
having 60% of browsers and sites implement CSP), then we'll see
attackers stop trying. Until then, only those implementing CSP will
get the benefit of extra security.

-Sid

Gervase Markham

Jan 12, 2009, 5:53:16 AM
to Sid Stamm
Sid Stamm wrote:
> Gervase Markham <g...@mozilla.org> wrote:
>> Security is a multi-faceted beast.
> Point taken, and I agree, it was a crappy analogy.
>
>> Again, CSP is here being used as a front line of
>> defence, and it shouldn't be.
> I agree with you... optimally, CSP should not be front-line defense.
> But for it to be helpful in practice, there must be a motivation for
> people to put it on their sites.
>
> What worries me is that with no assurance that they're enforced, CSP
> policies won't be provided by web sites since it takes time (granted,
> not much of it) to compose them. It's likely that a profit-driven
> company might rather have their engineers spend time fuzzing or bug
> fixing than designing a good CSP string that may or may not ever be
> used.

It really doesn't take long - it's not a complicated spec. I'm not sure
we need to make it "more attractive" by promising what we can't deliver.

>> Another feature of CSP is "herd immunity" -
>> it doesn't have to be used by everyone to
>> be helpful.

Sorry, I realise that in hindsight I was ambiguous here. I meant that
not all end-users have to use it for it to be helpful in the case of a
particular site which is using it. I say this because once the site
owner is warned of the problem, he can fix it. If no-one has CSP, it may
take much longer for people to notice the compromise.

Gerv

Mike Ter Louw

Jan 12, 2009, 1:34:10 PM
to dev-se...@lists.mozilla.org
Gervase Markham wrote:

> Sid Stamm wrote:
>> What worries me is that with no assurance that they're enforced, CSP
>> policies won't be provided by web sites since it takes time (granted,
>> not much of it) to compose them. It's likely that a profit-driven
>> company might rather have their engineers spend time fuzzing or bug
>> fixing than designing a good CSP string that may or may not ever be
>> used.
>
> It really doesn't take long - it's not a complicated spec. I'm not sure
> we need to make it "more attractive" by promising what we can't deliver.

One concern is the time and effort required to refactor existing code to
use only external scripts (a non-trivial task). Development of new web
code can take this restriction into account but still requires
deliberate effort throughout the development cycle to maintain support
for CSP.
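The refactoring cost being described can be made concrete. Under CSP's
no-inline-script rule, a page must move event handlers and script
bodies into external, policy-approved files; the file name, element id,
and handler below are illustrative, not from any particular site:

```html
<!-- Before: inline handler, blocked under CSP's no-inline-script rule -->
<button onclick="alert('hi')">Say hi</button>

<!-- After: behavior moved to an external same-origin file
     (the /js/greet.js path and "greet" id are hypothetical) -->
<button id="greet">Say hi</button>
<script src="/js/greet.js"></script>
```

where /js/greet.js would attach the listener itself, e.g.
document.getElementById('greet').addEventListener('click',
function () { alert('hi'); }, false); — a mechanical change, but one
that has to be repeated for every inline handler on the site.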

I think utilizing CSP will be a very conscious decision by web site
operators, weighing the benefits CSP offers, the cost of implementing
and maintaining CSP support, and the risks of not adding CSP to their
web site. While it would be nice to have a low cost, effective, add-on
layer of security, it seems the requirement of no inline script code
adds significantly to the cost of CSP. Therefore site owners should be
able to estimate the benefit CSP will give them by measuring the level
of browser support among the site's visitors, so it can be weighed
against the cost of CSP deployment.

Is it correct that the rule against inline scripts is in effect for all
CSP policies, even when script-src is not used?

Mike

Sid Stamm

Jan 12, 2009, 1:52:11 PM
On Jan 12, 5:53 am, Gervase Markham <g...@mozilla.org> wrote:
> not all end-users have to use it for it to be helpful in the case of a
> particular site which is using it. I say this because once the site
> owner is warned of the problem, he can fix it. If no-one has CSP, it may
> take much longer for people to notice the compromise.

Of course, unless the site breaks in a noticeable way when violations
of CSP occur, there is no additional help for the site developer...
and I don't believe that CSP is intended to have a violation reporting
mechanism. Additionally, it is my impression that a lot of attacks
stopped by CSP would fail unnoticed. For example, a cross-site
exploit that simply embeds a <script> and steals cookies would likely
not modify the page visually, so whether or not it fails, the end-user
wouldn't notice.

Maybe something to add value to CSP support would be a CSP developer
mode or warning logo somewhere in the browser that alerts the end-user
when a policy is violated. That would indeed be an easy add-on, and
perhaps testers could just flip it on for sites they fool with on a
daily basis.

Or do we want phone-home features for CSP so the browser will
automatically tell a site when its policy is violated? This sounds
like it could be abused to help sites identify which browsers support
CSP (essentially providing that 'this-browser-supports-csp' flag
you're arguing against).

-Sid

Bil Corry

Jan 12, 2009, 2:23:12 PM
to dev-se...@lists.mozilla.org
Sid Stamm wrote on 1/12/2009 12:52 PM:
> Or do we want phone-home features for CSP so the browser will
> automatically tell a site when its policy is violated?

It already has this feature, see #6:

http://people.mozilla.org/~bsterne/content-security-policy/details.html


- Bil

Sid Stamm

Jan 12, 2009, 4:19:52 PM
On Jan 12, 2:23 pm, Bil Corry <b...@corry.biz> wrote:
> It already has this feature, see #6:

Ah, sorry for my blindness Bil. It has been a while since I read
that, and simply spaced on that feature.

Gerv: what are your thoughts on (mis)use of the Report-URI to
determine which browsers support CSP? For example, given a policy
"X-Content-Security-Policy: allow self", a Report-URI of
"http://self.com/report", and a tag served
"<script src='http://forbidden.com/js'>", a report
would be generated. Assuming the report URI and the page
containing the violation are in the same domain, cookies could be used
to connect the report to a specific client. It seems to me that
unless client browsers *never* send CSP-related data to the server
then the server can ultimately determine which clients are using CSP.

-Sid

Bil Corry

Jan 12, 2009, 5:40:45 PM
to dev-se...@lists.mozilla.org
Sid Stamm wrote on 1/12/2009 3:19 PM:
> It seems to me that unless client browsers *never* send CSP-related
> data to the server then the server can ultimately determine which
> clients are using CSP.

I agree, without the client advertising CSP support, sites will test for CSP just as they test for JavaScript, cookies, etc. You could probably test for CSP by using policy-uri: if the browser requests it from your server, then it supports CSP. To prevent an attacker from causing a browser to load it ala CSRF, you could even add a nonce to the request:

X-Content-Security-Policy: policy-uri /policy.csp?nonce=ABC123
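Server-side, the detection this enables might look like the following
sketch (the session/nonce bookkeeping, function names, and in-memory
storage are my illustrative assumptions, not part of the CSP draft):

```python
# Sketch of the policy-uri detection idea: if the browser fetches the
# policy file carrying a per-session nonce, that session's browser
# evidently supports CSP. Storage here is in-memory for illustration.
import secrets

fetched_nonces = set()   # nonces seen on requests for /policy.csp
session_nonces = {}      # session id -> nonce we issued to it

def issue_policy_header(session_id):
    """Build the response header, binding a fresh nonce to the session."""
    nonce = secrets.token_hex(8)
    session_nonces[session_id] = nonce
    return f"X-Content-Security-Policy: policy-uri /policy.csp?nonce={nonce}"

def on_policy_fetch(nonce):
    """Called when the server sees GET /policy.csp?nonce=..."""
    fetched_nonces.add(nonce)

def client_supports_csp(session_id):
    return session_nonces.get(session_id) in fetched_nonces

hdr = issue_policy_header("sess-1")
on_policy_fetch(hdr.rsplit("nonce=", 1)[1])  # simulate a CSP-aware browser
print(client_supports_csp("sess-1"))  # True
print(client_supports_csp("sess-2"))  # False
```

The nonce ties the policy fetch to a specific visitor, so a third party
can't forge the "supports CSP" signal for someone else's session.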

- Bil

bsterne

Jan 13, 2009, 8:24:11 PM
Sorry I haven't been more vocal on this thread lately. I think it's
important that we keep our momentum moving forward here if we hope to
get something meaningful implemented any time soon.

I am getting the sense that we aren't in agreement on one or two of
the fundamental goals of this project and I think it potentially
jeopardizes overall progress if we are working with different base
assumptions. My near-term goal is to start driving toward a stable
design (if not specification) for CSP. The design is certainly still
open for comments and feedback, but those discussions will be easier
to resolve after we've settled the issue of project goals. More
below...

On Dec 23 2008, 7:34 am, Gervase Markham <g...@mozilla.org> wrote:
> I am not arguing we should make CSP work a random 50% of the time. I am
> arguing that CSP is not a "security model", it's a "phew, I would have
> just got stuffed, but it saved me this time" model. Security models are
> things you rely on. CSP is a second line of defence for when your
> security model fails, and it doesn't promise to save your ass every time.

I think that CSP should be considered part of the browser security
model. Mike and others have made the excellent point that there are
significant costs to bear for a website that wants to start using this
model: policy development as well as migrating inline scripts to
external script files. Websites will not be willing to pay this cost
if user agents are not strongly committed to enforcing the policies.
We won't be able to make security guarantees like "XSS will never
happen on your site", but we can provide smaller guarantees like
"inline script will not execute in this page if the CSP header is
sent".

I have previously agreed with Gerv's "belt-and-(suspenders|braces)"
logic with regard to CSP as it had twofold appeal to me: 1) it is
consistent with the defense-in-depth approach found elsewhere in
computer security, and 2) it provided an escape hatch from design
flaws, implementation bugs, or other deficiencies later discovered
with the model. It appears now, though, that this issue is impeding
us a bit and I am going to weigh in on the side of stronger commitment
to policy enforcement. Perhaps a stronger design is produced as the
result of a firm commitment to CSP as a part of the browser security
model (or perhaps it is required by such a commitment).

Gervase Markham

Jan 16, 2009, 12:55:41 AM
Sid Stamm wrote:
> Gerv: what are your thoughts on (mis)use of the Report-URI to
> determine which browsers support CSP? For example, given a policy "X-
> Content-Security-Policy: allow self", Report-URI "http://self.com/
> report" and a tag served "<script src='http://forbidden.com/js'>", a
> report would be generated. Assuming the report URI and the page
> containing the violation are in the same domain, cookies could be used
> to connect the report to a specific client. It seems to me that
> unless client browsers *never* send CSP-related data to the server
> then the server can ultimately determine which clients are using CSP.

I have no objection in principle to servers knowing that clients have
CSP capability. What I object to is bulking up every HTTP request with
that information, or making the protocol or system more complicated in
order to allow people to do things they shouldn't be doing (like relying
on it as a first line of defence).

Gerv

Gervase Markham

Jan 16, 2009, 12:58:36 AM
bsterne wrote:
> I think that CSP should be considered part of the browser security
> model. Mike and others have made the excellent point that there are
> significant costs to bear for a website that wants to start using this
> model: policy development as well as migrating inline scripts to
> external script files. Websites will not be willing to pay this cost
> if user agents are not strongly committed to enforcing the policies.
> We won't be able to make security guarantees like "XSS will never
> happen on your site", but we can provide smaller guarantees like
> "inline script will not execute in this page if the CSP header is
> sent".

I completely agree that we should make these guarantees, in the sense
that if that doesn't work, it's a bug :-) That's not the sort of
guarantee I'm objecting to. The sort I'm objecting to is "you don't have
to validate and escape user input properly because even if you let a
<script> tag through accidentally, CSP will catch it and save you".

Some understandings of "CSP being strongly part of the browser security
model" would have us making such guarantees. And I think they would be a
mistake. If "CSP being strongly part of the browser security model" just
means "we guarantee that it does what it says on the tin" then I have no
problem with it :-) My reduced commitment to guarantees was not designed
as an ass-covering measure for shoddy coding ;-)

Gerv
