
Content Security Policy Spec questions and feedback


EricLaw

Jul 5, 2009, 8:28:05 PM
(I'm moving a thread to NNTP at the request of Gerv. Thanks for
reading!)

The following is a weakly-organized list of my questions and thoughts
on the current CSP spec draft. (See http://blogs.msdn.com/ie/archive/2009/06/25/declaring-security.aspx
if you're interested in higher-level feedback)

---------------
Versioning
---------------
Server CSP Versioning
Can the server define which version of CSP policies it wants to use,
allowing the client to ignore? I know that backward compatibility is
the goal, but other successful features (E.g. Cookies) have had tons
of problems here as they try to evolve. The current “Handling parse
errors” section imposes a number of requirements that might be onerous
in the distant future when we’re on version 5 of the CSP feature.

User-Agent header
What’s the use-case for adding a new token to the user-agent header?
It’s already getting pretty bloated (at least in IE) and it’s hard to
imagine what a server would do differently when getting this token.


---------------
Policy Questions
---------------
Style-src
I don’t know what “style attributes of HTML elements” means. Is this
meant to cover cases where a CSS rule specifies url() for a font/
cursor/image? Or are those meant to be controlled by the other
relevant CSP directives?

frame-ancestors
In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
tags that point to HTML pages, correct?

What exactly an “ancestor” is should probably be defined here.

I like this directive, but it’s worth noting that this is the only
directive which constrains how others can host the protected-
document. More on this later (see Scope Creep below).

---------------
CSP Declarations
---------------

HTTP: Header Name
W3C folks have been giving us (IE) a hard time about the number (and
scattered documentation) of X- header names
http://blogs.msdn.com/ieinternals/archive/2009/06/30/Internet-Explorer-Custom-HTTP-Headers.aspx,
and they’ve strongly encouraged us to register our header names (even
provisionally) with IANA http://www.iana.org/assignments/message-headers/message-header-index.html
rather than using the X- prefix. You don’t need a formal RFC to do
this (some just point to the relevant working groups), and you’ll find
some of the header names proposed by Jackson and Barth listed there
already. I think Content-Security-Policy is well-thought-out enough
that it’s going to get implemented by more than one UA, and we might
as well save a few terabytes of traffic over the next several years by
dropping the X-. Mark Nottingham (HTTPBis chair, I think) is probably
a great person to talk to about this if you want more info on best
practices for header definition.

HTTP Header: Final
It seems like it might be useful for a CSP Header to declare that it’s
the “Final” security policy, to prevent meddling by META Header
injection and the like. Of course, HTTP Header injection is a threat
as well, but that seems like a smaller threat, and the “FINAL”
directive doesn’t really significantly increase attack surface here
because sites using the Header are unlikely to also send the META tag.

Meta Tag Placement
I like the restriction that META must appear within the HEAD, although
technically HTML5 has no such restriction.

Are relative URIs valid for the report-URI/policy-URI? (Seems like
this would be a good thing to support). However, if so, is there any
interaction/relationship with the BASE tag, which is supposed to also
appear early in the head?

The spec needs to specifically define what happens if a META tag is
found in violation of the rules (e.g. It MUST be ignored, and a
CONSOLE ERR must be raised)

CSP-Tagging
What happens to CSP if I save a CSP-protected document to my local
disk? I’d assume it would be ignored (because many restrictions could
be broken) but this should be explicit. Also, when saving docs to
disk, HTTP headers are lost, so to preserve it, you’d need to
explicitly serialize to a META tag, which could get complicated if the
document already had a CSP META…

---------------
Policies and Wildcarding
---------------

Wildcarding: Multiple Labels
Allowing a wildcard to represent an unlimited number of DNS labels
could be problematic, because it leaves the wildcard-configured site
at the mercy of the DNS policies of any of its children. I think
there’s a use case that suggests a single-label wildcard would be
useful but a multiple-label wildcard incurs unneeded attack surface
for some.

The “how many labels can * represent” problem has come up in a number
of contexts, including Access-Control and HTTPS certificate
validation. In the latter case, * is defined in RFC2818 as one DNS
label, but Firefox does not currently follow that RFC.

Wildcarding: Zero Labels
In a related vein, wildcards are currently defined as “one or more
labels” by the CSP spec. Real-world sites have the unfortunate habit
of serving content from both “example.com” and “www.example.com” which
is likely to cause site breakage when CSP is in use. Unfortunately,
trivially redefining a wildcard to “zero or more labels” doesn’t quite
work because then there’s a leading dot we’d have to get rid of, but
I’d propose that this is probably the simplest/most intuitive fix.
Therefore, a site could specify “*.example.com” to match
“www.example.com” and “example.com”.

Wildcarding: Port & Scheme
As the intent is to allow wildcarding of the port, and constrained-
wildcarding of the scheme, it might make sense to provide explicit
examples of each in the document.
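For instance, something along these lines (I'm guessing at the exact
grammar here; assume "*" is legal in the port position, and that leaving
the scheme off is what invokes the constrained scheme wildcard):

X-Content-Security-Policy: allow media.example.com:*
X-Content-Security-Policy: allow media.example.com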

• The spec might want to note that using wildcards does not permit
access to banned ports http://www.mozilla.org/projects/netlib/PortBanning.html.
• Scheme wildcarding is somewhat dangerous. Should the spec define
what happens for use of schemes that do not define origin
information? (javascript: and data: are listed, but there are
others).

Sample Policy definitions
The example:
X-Content-Security-Policy: allow https://self

Doesn’t make sense to me, because “self” is defined to include the
scheme. This suggests that we need a "selfhost" directive, which
includes the hostname only.

---------------
Reporting issues
---------------

Violation Report: Headers
I don’t think I understand the use case for sending the request
headers. Is the hope that the server operator will be able to use this to
catch XSS attempts and go plug holes to protect legacy clients? If
so, should the Violation Report Sample example be explicitly
updated to show such an injection attempt being caught?

This seems like a potentially significant source of security risk and
complexity. For instance, the client must ensure that it won’t leak
Proxy-Authorization headers, etc.

Also, the “blocked-headers” field is defined as required, but some
schemes (specifically, FTP and FILE) do not use headers.

Data Leak Vectors
Per other parts of the spec, “all reports must be sent to the same
host” should read “all reports must be sent to the same Origin (scheme/
port/host).” The section “Restrictions on policy-uri and report-uri”
has a similar problem.

Violation Report Syntax: How to send
It probably should be restated in this section that redirects are not
to be followed.

This section says that the report must be “transmitted” but does not
explain how such transmission should occur. I assume the proposal is
to use an HTTP POST. If so, this should be stated.

Problem: Neither FTP, nor File, nor a number of other schemes support
POST.

Violation Report: No redirects
I like the simplicity of forbidding redirects outright, but I think
there’s going to be a complaint that same-origin redirection via 307
ought to be permitted. As a designer, I’d probably ignore that
complaint, but I bet it’ll be made eventually.

Parse Errors: Server detection
Parse errors are defined as only being reported on the client. This
is probably reasonable, but leads to the possibility that some UA will
fail to parse some CSP directive and the server operator will not know
about it.

Parse Errors: User notification
If the “Fail closed” model is used, is there any way for the user to
know why the site is broken? Isn’t this going to create a problem,
where, say, a FF4 user will “downgrade” to a browser that doesn’t
support CSP (say, Opera 9) because the site “works properly there”?
Everyone loses.

---------------
Misc
---------------

User-Scripts
Agreeing with Sacolcor, I think the spec should explicitly note that
CSP isn’t intended to apply to User-Scripts, although I think the
Greasemonkey guys might find it hard to implement their current
feature-set considering where CSP is likely to be implemented in the
browser stacks.

Broken link
The “CSP Syntax” link seems to be broken, and goes to the general
introduction page?

When any known directive contains a value that violates [[CSP syntax]]

---------------
Logical evolutions aka Scope Creep
---------------

Scope Creep: exempt HEAD
We’ve had some folks suggest that CSP-like schemes would be more
easily deployed if they could allow arbitrary script/css to be
embedded inline/referenced in the HEAD tag. Presumably, you still get
some solid security value here because the HEAD is a much smaller part
of the document to protect from injections.

Scope Creep: Prevent Sniffing & Unintended Reuse
It seems natural that a subdownload should be able to say Content-
Security-Policy: strictType which would cause a UA to refuse to sniff
content or feed it to tags of mismatched typing (e.g. text/plain
resource fed to a <SCRIPT> element, etc). This would go beyond IE's X-
Content-Type-Options directive.

In particular, this could be used by non-JS responses to explicitly
prevent them from being used by SCRIPT tags, and to prevent HTML files
from being scraped by liberal CSS parsers. This is an anti-CSRF ASR.

Scope Creep: Same Origin Only
The claim “Content Security Policy enables a site to specify which
sites may embed a resource” is currently over-broad, but it shouldn’t
be. (CSP currently seems to only apply to HTML documents, not
"resources" in general).

It seems natural that a subdownload should be able to say e.g. Content-
Security-Policy: callers <originlist> which would cause the UA network
stack to refuse to process (e.g. Set-Cookie) or return the content (to
a script tag, object tag, image tag, XHR request etc) unless the
Origin of the requestor matches the specified Origin list.
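For example (hypothetical syntax, since no such directive exists today),
a private JSON feed might send:

Content-Security-Policy: callers https://app.example.com https://admin.example.com

...and the UA would drop the response (and ignore any Set-Cookie) for
requests coming from any other Origin.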

(A competing idea is to respect Access-Control-Allow-Origin response
headers on all types of requests (not just XHR) but I don’t think
that’s what’s currently being proposed?)

This allows for a useful CSRF/bandwidth misuse protection at little
cost.

I’m not fully convinced that the “Origin” proposal (or at least the
versions I’ve read closely) will prove generally workable. Among
other problems, every protected resource would need to be served with
a Vary: Origin header, which is problematic for a number of reasons,
including legacy IE bugs (http://blogs.msdn.com/ieinternals/archive/
2009/06/17/9769915.aspx).

---------------
Feedback from others
---------------
ASP.NET Controls
Apparently, ASP.NET controls are tightly bound to use of JavaScript:
protocol URIs, and this isn’t likely to be easily changed. For that
reason, it might be interesting to have a way to allow only those URIs
and not inline script blocks, event handlers, etc?

---------------

That’s all I’ve got for now. Thanks for your great work here!

Eric Lawrence
Program Manager - IE Security

Gervase Markham

Jul 6, 2009, 6:02:00 AM
Hi Eric,

Some really, really great points here. My thoughts on some of them:

On 06/07/09 01:28, EricLaw wrote:
> Server CSP Versioning
> Can the server define which version of CSP policies it wants to use,
> allowing the client to ignore? I know that backward compatibility is
> the goal, but other successful features (E.g. Cookies) have had tons
> of problems here as they try to evolve. The current “Handling parse
> errors” section imposes a number of requirements that might be onerous
> in the distant future when we’re on version 5 of the CSP feature.
>
> User-Agent header
> What’s the use-case for adding a new token to the user-agent header?
> It’s already getting pretty bloated (at least in IE) and it’s hard to
> imagine what a server would do differently when getting this token.

I also haven't quite got this straight in my head.

I think it would be useful for the spec to contain a lot more detail on
what it hopes to achieve with the current versioning system - scenarios
where it would be useful, scenarios where it won't help, and so on - and
also why we decided not to put a version number in the CSP response itself.

> Style-src
> I don’t know what “style attributes of HTML elements” means.

It means <div style="some CSS here"></div>

> frame-ancestors
> In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
> tags that point to HTML pages, correct?

I guess so :-)

> W3C folks have been giving us (IE) a hard time about the number (and
> scattered documentation) of X- header names
> http://blogs.msdn.com/ieinternals/archive/2009/06/30/Internet-Explorer-Custom-HTTP-Headers.aspx,
> and they’ve strongly encouraged us to register our header names (even
> provisionally) with IANA http://www.iana.org/assignments/message-headers/message-header-index.html
> rather than using the X- prefix.

I'm sure we can do that, particularly if we have buy-in or tacit support
from multiple browser vendors. We are early in a Firefox development
cycle, so we do have time.

> HTTP Header: Final
> It seems like it might be useful for a CSP Header to declare that it’s
> the “Final” security policy, to prevent meddling by META Header
> injection and the like.

The very existence of "meta" is now under discussion. But I think that
if we do implement a merging algorithm (which I think we should, albeit
for multiple headers) a "final" directive might be useful.

> Are relative URIs valid for the report-URI/policy-URI? (Seems like
> this would be a good thing to support). However, if so, is there any
> interaction/relationship with the BASE tag, which is supposed to also
> appear early in the head?

Very good question.

> What happens to CSP if I save a CSP-protected document to my local
> disk? I’d assume it would be ignored (because many restrictions could
> be broken) but this should be explicit. Also, when saving docs to
> disk, HTTP headers are lost, so to preserve it, you’d need to
> explicitly serialize to a META tag, which could get complicated if the
> document already had a CSP META…

Another good one. Gut reaction: The things CSP is supposed to help with
are mostly connected with the page being loaded from a particular target
site. If it's no longer being loaded from that site, many of them go
away. So I think the answer is that CSP protection is removed. We
currently restrict HTML loaded from the local disk from accessing other
files on the local disk, but not in other ways.

> Therefore, a site could specify “*.example.com” to match
> “www.example.com” and “example.com”.

Hmm. For people not thinking, this would obey the Rule of Least
Surprise, but for people thinking, it would not obey that rule. <sigh>

> Doesn’t make sense to me, because “self” is defined to include the
> scheme. This suggests that we need a "selfhost" directive, which
> includes the hostname only.

Or we make the same word serve two purposes, doing the "obvious" thing.

> Parse Errors: Server detection
> Parse errors are defined as only being reported on the client. This
> is probably reasonable, but leads to the possibility that some UA will
> fail to parse some CSP directive and the server operator will not know
> about it.

Will this be a problem in practice, given that presumably the server
owner tests their site with a variety of UAs? We don't ping server
owners to say "your HTML is unparseable", after all. That would rather
increase the amount of traffic on the Internet! ;-)

> If the “Fail closed” model is used, is there any way for the user to
> know why the site is broken? Isn’t this going to create a problem,
> where, say, a FF4 user will “downgrade” to a browser that doesn’t
> support CSP (say, Opera 9) because the site “works properly there”?
> Everyone loses.

This is a problem with a "tighten when the header is used, and then use
directives to loosen" approach. Content Restrictions had the opposite
approach - it started with loose (i.e. the situation as it is without CR
support) and tightened using directives. This avoided this problem. Of
course, both directions have pros and cons.

> Agreeing with Sacolcor, I think the spec should explicitly note that
> CSP isn’t intended to apply to User-Scripts, although I think the
> Greasemonkey guys might find it hard to implement their current
> feature-set considering where CSP is likely to be implemented in the
> browser stacks.

We need to avoid breaking Greasemonkey/GreasemonkIE.

> Scope Creep: exempt HEAD
> We’ve had some folks suggest that CSP-like schemes would be more
> easily deployed if they could allow arbitrary script/css to be
> embedded inline/referenced in the HEAD tag.

Yes; CR originally had a way to allow this. I think it would make
converting sites quite a bit easier.

> In particular, this could be used by non-JS responses to explicitly
> prevent them from being used by SCRIPT tags, and to prevent HTML files
> from being scraped by liberal CSS parsers. This is an anti-CSRF ASR.

I think using CSP should also mean that the scripts which are permitted
have to be served with the correct content type. As others have said,
this prevents people using E4X and some user content elsewhere in the
same domain to inject script which is actually an HTML page.

> Scope Creep: Same Origin Only
> The claim “Content Security Policy enables a site to specify which
> sites may embed a resource” is currently over-broad, but it shouldn’t
> be. (CSP currently seems to only apply to HTML documents, not
> "resources" in general).

Yes, we need to think more about how CSP applies to non-HTML and
non-HTTP resources.

> It seems natural that a subdownload should be able to say e.g. Content-
> Security-Policy: callers <originlist> which would cause the UA network
> stack to refuse to process (e.g. Set-Cookie) or return the content (to
> a script tag, object tag, image tag, XHR request etc) unless the
> Origin of the requestor matches the specified Origin list.

People have wanted the web to do this for years to prevent people
leaching e.g. image bandwidth, but I'm not convinced it would be a great
thing for the web. It seems to me that this sort of behaviour is a
regrettable side effect of an open web, but one we should just live with.

> I’m not fully convinced that the “Origin” proposal (or at least the
> versions I’ve read closely) will prove generally workable. Among
> other problems, every protected resource would need to be served with
> a Vary: Origin header, which is problematic for a number of reasons,
> including legacy IE bugs (http://blogs.msdn.com/ieinternals/archive/
> 2009/06/17/9769915.aspx).

Presumably you've sent that feedback in the relevant direction?

> ---------------
> Feedback from others
> ---------------
> ASP.NET Controls
> Apparently, ASP.NET controls are tightly bound to use of JavaScript:
> protocol URIs, and this isn’t likely to be easily changed. For that
> reason, it might be interesting to have a way to allow only those URIs
> and not inline script blocks, event handlers, etc?

I know nothing about ASP.NET controls. Are these pre-built blocks of
HTML that can be included in a page when it's built with ASP?

I guess the question is: have we effectively blown up all the protection
if we allow javascript: URIs? Can every possible exploitation method be
adapted to use them?

Gerv

EricLaw

Jul 6, 2009, 10:46:21 AM
I'm not sure about Usenet etiquette (it's been years) so I'll try
replying inline for now. :-)

On Jul 6, 3:02 am, Gervase Markham <g...@mozilla.org> wrote:
> Hi Eric,
>
> Some really, really great points here. My thoughts on some of them:
>
> On 06/07/09 01:28, EricLaw wrote:
> > Server CSP Versioning
> > Can the server define which version of CSP policies it wants to use,
> > allowing the client to ignore?  I know that backward compatibility is
> > the goal, but other successful features (E.g. Cookies) have had tons
> > of problems here as they try to evolve.  The current “Handling parse
> > errors” section imposes a number of requirements that might be onerous
> > in the distant future when we’re on version 5 of the CSP feature.
>
>  >
>  > User-Agent header
>  > What’s the use-case for adding a new token to the user-agent header?
>  > It’s already getting pretty bloated (at least in IE) and it’s hard to
>  > imagine what a server would do differently when getting this token.
>
> I also haven't quite got this straight in my head.
>
> I think it would be useful for the spec to contain a lot more detail on
> what it hopes to achieve with the current versioning system - scenarios
> where it would be useful, scenarios where it won't help, and so on - and
> also why we decided not to put a version number in the CSP response itself.
>
> > Style-src
> > I don’t know what “style attributes of HTML elements” means.
>
> It means <div style="some CSS here"></div>

That's what I figured, but I'm not sure I understand how CSP applies.
Is it applying to url() statements (if any) that are inlined in the
style attribute? Or is there some other way that the style attribute
allows retrieval of remote content?

> > frame-ancestors
> > In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
> > tags that point to HTML pages, correct?
>
> I guess so :-)
>
> > W3C folks have been giving us (IE) a hard time about the number (and
> > scattered documentation) of X- header names
> > http://blogs.msdn.com/ieinternals/archive/2009/06/30/Internet-Explorer-Custom-HTTP-Headers.aspx,
> > and they’ve strongly encouraged us to register our header names (even
> > provisionally) with IANA http://www.iana.org/assignments/message-headers/message-header-index.html
> > rather than using the X- prefix.
>
> I'm sure we can do that, particularly if we have buy-in or tacit support
> from multiple browser vendors. We are early in a Firefox development
> cycle, so we do have time.
>
> > HTTP Header: Final
> > It seems like it might be useful for a CSP Header to declare that it’s
> > the “Final” security policy, to prevent meddling by META Header
> > injection and the like.
>
> The very existence of "meta" is now under discussion. But I think that
> if we do implement a merging algorithm (which I think we should, albeit
> for multiple headers) a "final" directive might be useful.


Dropping META support has its merits, but that suggests one couldn't
use CSP with any protocol which doesn't allow for headers (FTP/FILE,
etc).


True enough. :-)


> > If the “Fail closed” model is used, is there any way for the user to
> > know why the site is broken?  Isn’t this going to create a problem,
> > where, say, a FF4 user will “downgrade” to a browser that doesn’t
> > support CSP (say, Opera 9) because the site “works properly there”?
> > Everyone loses.
>
> This is a problem with a "tighten when the header is used, and then use
> directives to loosen" approach. Content Restrictions had the opposite
> approach - it started with loose (i.e. the situation as it is without CR
> support) and tightened using directives. This avoided this problem. Of
> course, both directions have pros and cons.


Oh, I think Fail Closed is a fine model, but unless there's some way
for the user to know why the page is completely busted, it seems
likely that they're going to blame the properly-behaving UA rather
than the site. Pretty much the same problem one encounters with
strict XHTML validation failure-- how do you ensure that the user
blames the site, not the UA?


> > Agreeing with Sacolcor, I think the spec should explicitly note that
> > CSP isn’t intended to apply to User-Scripts, although I think the
> > Greasemonkey guys might find it hard to implement their current
> > feature-set considering where CSP is likely to be implemented in the
> > browser stacks.
>
> We need to avoid breaking Greasemonkey/GreasemonkIE.
>
> > Scope Creep: exempt HEAD
> > We’ve had some folks suggest that CSP-like schemes would be more
> > easily deployed if they could allow arbitrary script/css to be
> > embedded inline/referenced in the HEAD tag.
>
> Yes; CR originally had a way to allow this. I think it would make
> converting sites quite a bit easier.
>
> > In particular, this could be used by non-JS responses to explicitly
> > prevent them from being used by SCRIPT tags, and to prevent HTML files
> > from being scraped by liberal CSS parsers.  This is an anti-CSRF ASR.
>
> I think using CSP should also mean that the scripts which are permitted
> have to be served with the correct content type. As others have said,
> this prevents people using E4X and some user content elsewhere in the
> same domain to inject script which is actually an HTML page.


Oh, sure, but the scenario I'm trying to cover is the case where a
cross-domain attacker's site (not using CSP) is trying to steal (via
E4X, Script Inclusion, CSS style enumeration, etc) content from the
victim site (which uses CSP on all of its pages). Unless there's some
way for a resource (e.g. a script, CSS, HTML page) to enforce that its
content can only be accessed by appropriate tags (e.g. refuse to load
a text/html document in response to a <LINK rel=stylesheet> query)
then data theft (leading to CSRF) is possible.

After we shipped it, a major web property requested that the "X-
Content-Type-Options: nosniff" directive work like this to protect
against some threat vectors they suffer.

> > Scope Creep: Same Origin Only
> > The claim “Content Security Policy enables a site to specify which
> > sites may embed a resource” is currently over-broad, but it shouldn’t
> > be.  (CSP currently seems to only apply to HTML documents, not
> > "resources" in general).
>
> Yes, we need to think more about how CSP applies to non-HTML and
> non-HTTP resources.
>
> > It seems natural that a subdownload should be able to say e.g. Content-
> > Security-Policy: callers <originlist> which would cause the UA network
> > stack to refuse to process (e.g. Set-Cookie) or return the content (to
> > a script tag, object tag, image tag, XHR request etc) unless the
> > Origin of the requestor matches the specified Origin list.
>
> People have wanted the web to do this for years to prevent people
> leaching e.g. image bandwidth, but I'm not convinced it would be a great
> thing for the web. It seems to me that this sort of behaviour is a
> regrettable side effect of an open web, but one we should just live with.


I think I understand the concerns (similar to those voiced by folks
who think frame-busters like X-Frame-Options or CSP's "frame-
ancestors" directive are a bad idea, because they break sites that
want to frame content they don't own). But I think there's a very
legitimate case to be made for the potential security value in
preventing unexpected cross-domain data reads.

> > I’m not fully convinced that the “Origin” proposal (or at least the
> > versions I’ve read closely) will prove generally workable.  Among
> > other problems, every protected resource would need to be served with
> > a Vary: Origin header, which is problematic for a number of reasons,
> > including legacy IE bugs (http://blogs.msdn.com/ieinternals/archive/
> > 2009/06/17/9769915.aspx).
>
> Presumably you've sent that feedback in the relevant direction?


I haven't been keeping up on the progress of the Origin proposal, but
I did ask some probing questions in this vein a long time ago. I was
hoping that we'd be able to come up with a different approach which
offers improved security / deployability properties, and I think CSP
might do just that with a few tweaks.


> > ---------------
> > Feedback from others
> > ---------------
> > ASP.NET Controls
> > Apparently, ASP.NET controls are tightly bound to use of JavaScript:
> > protocol URIs, and this isn’t likely to be easily changed.  For that
> > reason, it might be interesting to have a way to allow only those URIs
> > and not inline script blocks, event handlers, etc?
>
> I know nothing about ASP.NET controls. Are these pre-built blocks of
> HTML that can be included in a page when it's built with ASP?


Yeah, that's the basic idea I think (I know very little about ASP.NET
myself). I think the idea is that the dev/designer uses the IDE to
drop a "HTML component" onto the page (e.g. a date-picker), and the
toolkit emits the HTML/script which implements the functionality of
the control.

> I guess the question is: have we effectively blown up all the protection
> if we allow javascript: URIs? Can every possible exploitation method be
> adapted to use them?

Well, I think the obvious threat is that a bad guy who finds an XSS
hole can inject an <A> tag with an onclick method pointing to a
JavaScript URI, but this seems to represent a subset of all possible
attacks, and may be significantly less compelling to the attacker.

Sid Stamm

Jul 6, 2009, 1:14:44 PM
to EricLaw
Hi Eric, Gerv... thanks for getting this thread started. My replies are
inline due to the diverse range of discussions going on in this thread.
:) I also cut some stuff not relevant to my responses out to shrink
the size of this message.

On 7/6/09 7:46 AM, EricLaw wrote:
> On Jul 6, 3:02 am, Gervase Markham <g...@mozilla.org> wrote:
>> On 06/07/09 01:28, EricLaw wrote:
>>> Server CSP Versioning

>>> frame-ancestors
>>> In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
>>> tags that point to HTML pages, correct?
>> I guess so :-)

Right. The goal with frame-ancestors is to be able to specify what
sites can embed a protected page in any sort of framing context, no
matter the tag. If an HTML/XHTML/etc document has a CSP, then any
ancestor frame or embedding entity is verified against the policy. So I
guess if a protected HTML document is embedded using OBJECT, then
whatever content set the OBJECT tag must be checked.

The only time this really gets broken is when plug-ins are involved. For
example, there's nothing (short of a Flash implementation that supports
CSP) to stop a .swf called via <OBJECT> from loading an HTML page with a
CSP set and rendering it as a subframe.

>>> Are relative URIs valid for the report-URI/policy-URI? (Seems like
>>> this would be a good thing to support). However, if so, is there any
>>> interaction/relationship with the BASE tag, which is supposed to also
>>> appear early in the head?
>> Very good question.

I believe this is an implementation issue... a relative URI eventually
gets turned into an absolute URI before being requested. It is at that
point (when it has been 'absolutified' or whatever I should call it)
when CSP checks are done. Whether or not a BASE tag is present, the UA
has to figure out what host to request the content from and over what
scheme and port to request it; at this level, relative and absolute URIs
should appear the same. I'll try to make this more obvious in the Spec.
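Roughly, as a sketch of the idea (not spec text; the URIs here are made
up):

from urllib.parse import urljoin, urlsplit

document_uri = "https://www.example.com/app/index.html"
report_uri = urljoin(document_uri, "/csp-reports")  # relative value from the policy
# -> "https://www.example.com/csp-reports"

# The CSP check only ever sees the absolute form:
same_origin = urlsplit(report_uri)[:2] == urlsplit(document_uri)[:2]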

>>> Doesn’t make sense to me, because “self” is defined to include the
>>> scheme. This suggests that we need a "selfhost" directive, which
>>> includes the hostname only.
>> Or we make the same word serve two purposes, doing the "obvious" thing.

I worry about a changed meaning of the keyword based on context, because
that can lead to bugs. :) Fears aside, using self in the host position
with any additional information (scheme://self or self:port) isn't such
a bad idea, and makes sense to me.

>>> If the “Fail closed” model is used, is there any way for the user to
>>> know why the site is broken? Isn’t this going to create a problem,
>>> where, say, a FF4 user will “downgrade” to a browser that doesn’t
>>> support CSP (say, Opera 9) because the site “works properly there”?
>>> Everyone loses.
>> This is a problem with a "tighten when the header is used, and then use
>> directives to loosen" approach. Content Restrictions had the opposite
>> approach - it started with loose (i.e. the situation as it is without CR
>> support) and tightened using directives. This avoided this problem. Of
>> course, both directions have pros and cons.
>
> Oh, I think Fail Closed is a fine model, but unless there's some way
> for the user to know why the page is completely busted, it seems
> likely that they're going to blame the properly-behaving UA rather
> than the site. Pretty much the same problem one encounters with
> strict XHTML validation failure-- how do you ensure that the user
> blames the site, not the UA?

The spec states that we put errors in the error/debug console (so an
advanced user can see exactly what's going on). Maybe we need to figure
out a "CSP Broken" icon to pop up next to the lock icon in the status
bar? I hate to clutter up the UI unless

>>> Agreeing with Sacolcor, I think the spec should explicitly note that
>>> CSP isn’t intended to apply to User-Scripts, although I think the
>>> Greasemonkey guys might find it hard to implement their current
>>> feature-set considering where CSP is likely to be implemented in the
>>> browser stacks.
>> We need to avoid breaking Greasemonkey/GreasemonkIE.

I wholeheartedly agree.

> I think there's a very
> legitimate case to be made for the potential security value in
> preventing unexpected cross-domain data reads.

I agree with this, but I subscribe to the "UA provided context for HTTP
requests" school of thought. I think the UA should provide some info
about where requests came from to the server (render context like
stylesheet or image, and source origin) -- then the Server can decide
whether or not to serve the content.

Cheers,
Sid

Sid Stamm

Jul 6, 2009, 1:16:17 PM
> I hate to clutter up the UI unless
Oops... I should probably finish my statement. I hate to clutter up the
UI unless it provides a significant benefit to the user.

-Sid

Brandon Sterne

Jul 6, 2009, 1:58:08 PM
to Gervase Markham, dev-se...@lists.mozilla.org
Thanks for the great feedback, Eric. I have some additional comments
that I haven't finished yet, but this was a quick one...

Gervase Markham wrote:
> On 06/07/09 01:28, EricLaw wrote:
>> Style-src
>> I don’t know what “style attributes of HTML elements” means.
>
> It means <div style="some CSS here"></div>

Perhaps the style-src tag does not need to apply to inline style after
all. Originally, we had thought we needed this restriction to prevent
CSS from being used as a vector for script injection via XBL and CSS
expressions. However, there is the other restriction already in place
which requires that XBL bindings come from chrome: or resource: URIs, so
the XSS risk is extremely low. The only other risk of allowing inline
CSS is page defacement, element hiding, etc.

I think we should change the script-src directive to only apply to
external stylesheet loads and let inline styles (<style> elements and
style attributes) behave as they currently do.

-Brandon

Brandon Sterne

Jul 6, 2009, 2:17:06 PM
to Gervase Markham, dev-se...@lists.mozilla.org
On 7/6/09 10:58 AM, Brandon Sterne wrote:
> I think we should change the script-src directive to only apply to
> external stylesheet loads and let inline styles (<style> elements and
> style attributes) behave as they currently do.

That should have been "style-src directive".

-Brandon

Sid Stamm

Jul 6, 2009, 4:36:22 PM
On 7/6/09 10:14 AM, Sid Stamm wrote:
>>>> Are relative URIs valid for the report-URI/policy-URI? (Seems like
>>>> this would be a good thing to support). However, if so, is there any
>>>> interaction/relationship with the BASE tag, which is supposed to also
>>>> appear early in the head?
>>> Very good question.
> Whether or not a BASE tag is present, the UA
> has to figure out what host to request the content from and over what
> scheme and port to request it; at this level, relative and absolute URIs
> should appear the same. I'll try to make this more obvious in the Spec.

Actually, I got a little ahead of myself about the BASE tag. If the CSP
is specified in an HTTP header, then I don't think the BASE HTML tag
should have any effect on the resolution of a relative URI. It is
defined in a different layer, and should really only affect the HTML
content and anything it does (not the protocol-level stuff).

So in brief, I think the BASE tag shouldn't affect any HTTP header-level
URIs at all, but relative URIs might be okay since the policy-uri and
report-uri are required to be same scheme/host/port anyway.

-Sid

Sid Stamm

Jul 7, 2009, 2:18:38 PM
Hi Eric,

I've addressed many of your (excellent) comments in the Spec. Thanks
for the feedback! Status of each point is inline:

On 7/5/09 5:28 PM, EricLaw wrote:
> ---------------
> Versioning
> ---------------
> Server CSP Versioning
> Can the server define which version of CSP policies it wants to use,
> allowing the client to ignore? I know that backward compatibility is
> the goal, but other successful features (E.g. Cookies) have had tons
> of problems here as they try to evolve. The current “Handling parse
> errors” section imposes a number of requirements that might be onerous
> in the distant future when we’re on version 5 of the CSP feature.
>
> User-Agent header
> What’s the use-case for adding a new token to the user-agent header?
> It’s already getting pretty bloated (at least in IE) and it’s hard to
> imagine what a server would do differently when getting this token.

Under Discussion.

> ---------------
> Policy Questions
> ---------------
> Style-src
> I don’t know what “style attributes of HTML elements” means. Is this
> meant to cover cases where a CSS rule specifies url() for a font/
> cursor/image? Or are those meant to be controlled by the other
> relevant CSP directives?

Removed this confusing point from spec. (Refined spec to say: Images
loaded from stylesheets should be subject to "img-src", external
stylesheet loads are subject to style-src, and inline styles are allowed.)

> frame-ancestors
> In addition to IFRAMEs/FRAME tags, this should also restrict OBJECT
> tags that point to HTML pages, correct?

Yes, updated spec.

> What exactly an “ancestor” is should probably be defined here.
Done.

> I like this directive, but it’s worth noting that this is the only
> directive which constrains how others can host the protected-
> document. More on this later (see Scope Creep below).

I believe the original goal for this directive was to help prevent
clickjacking.

> ---------------
> CSP Declarations
> ---------------
>
> HTTP: Header Name
> W3C folks have been giving us (IE) a hard time about the number (and
> scattered documentation) of X- header names
> http://blogs.msdn.com/ieinternals/archive/2009/06/30/Internet-Explorer-Custom-HTTP-Headers.aspx,
> and they’ve strongly encouraged us to register our header names (even
> provisionally) with IANA http://www.iana.org/assignments/message-headers/message-header-index.html
> rather than using the X- prefix.

I believe we have plans to register the header, and when we do, we'll
absolutely drop the X-.

> HTTP Header: Final
> It seems like it might be useful for a CSP Header to declare that it’s
> the “Final” security policy, to prevent meddling by META Header
> injection and the like. Of course, HTTP Header injection is a threat
> as well, but that seems like a smaller threat, and the “FINAL”
> directive doesn’t really significantly increase attack surface here
> because sites using the Header are unlikely to also send the META tag.

Under Discussion.

> Meta Tag Placement
> I like the restriction that META must appear within the HEAD, although
> technically HTML5 has no such restriction.

I personally want to eradicate the META tag
(http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html). This
should be discussed more in depth to decide if we should remove META
support, if we should support multiple HTTP headers, etc.

> Are relative URIs valid for the report-URI/policy-URI? (Seems like
> this would be a good thing to support). However, if so, is there any
> interaction/relationship with the BASE tag, which is supposed to also
> appear early in the head?

Spec updated to support relative URIs. I don't think CSP should
interact with the BASE tag at all.

> The spec needs to specifically define what happens if a META tag is
> found in violation of the rules (e.g. It MUST be ignored, and a
> CONSOLE ERR must be raised)

Done.

> CSP-Tagging
> What happens to CSP if I save a CSP-protected document to my local
> disk? I’d assume it would be ignored (because many restrictions could
> be broken) but this should be explicit. Also, when saving docs to
> disk, HTTP headers are lost, so to preserve it, you’d need to
> explicitly serialize to a META tag, which could get complicated if the
> document already had a CSP META…
Under discussion.

> ---------------
> Policies and Wildcarding
> ---------------
>
> Wildcarding: Multiple Labels
> Allowing a wildcard to represent an unlimited number of DNS labels
> could be problematic, because it leaves the wildcard-configured site
> at the mercy of the DNS policies of any of its children. I think
> there’s a use case that suggests a single-label wildcard would be
> useful but a multiple-label wildcard incurs unneeded attack surface
> for some.
>
> The “how many labels can * represent” problem has come up in a number
> of contexts, including Access-Control and HTTPS certificate
> validation. In the latter case, * is defined in RFC2818 as one DNS
> label, but Firefox does not currently follow that RFC.

Under Discussion.

> Wildcarding: Zero Labels
> In a related vein, wildcards are currently defined as “one or more
> labels” by the CSP spec. Real-world sites have the unfortunate habit
> of serving content from both “example.com” and “www.example.com” which
> is likely to cause site breakage when CSP is in use. Unfortunately,
> trivially redefining a wildcard to “zero or more labels” doesn’t quite
> work because then there’s a leading dot we’d have to get rid of, but
> I’d propose that this is probably the simplest/most intuitive fix.
> Therefore, a site could specify “*.example.com” to match
> “www.example.com” and “example.com”.
Done. Wildcard matches zero or more labels. In a semantic sense, the
wildcard token can be considered "*.", and may be replaced by any number
(>= 0) of DNS labels x[i], each followed by a dot, and concatenated as
"x[0].x[1]. ...x[i]."

> Wildcarding: Port & Scheme
> As the intent is to allow wildcarding of the port, and constrained-
> wildcarding of the scheme, it might make sense to provide explicit
> examples of each in the document.

Done. Also added detailed descriptions of port/scheme wildcarding.

> • The spec might want to note that using wildcards does not permit
> access to banned ports http://www.mozilla.org/projects/netlib/PortBanning.html.
Done.

> • Scheme wildcarding is somewhat dangerous. Should the spec define
> what happens for use of schemes that do not define origin
> information? (javascript: and data: are listed, but there are
> others).

Under Discussion.

> Sample Policy definitions
> The example:
> X-Content-Security-Policy: allow https://self
>
> Doesn’t make sense to me, because “self” is defined to include the
> scheme. This suggests that we need a "selfhost" directive, which
> includes the hostname only.

Updated spec to allow "https://self:443" syntax. "self" is flexible and
may or may not include a scheme and port. When absent from the
expression, the scheme or port is inherited.
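As a rough sketch of that inheritance (illustrative only; assume the
missing pieces come from the protected document's own origin):

from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def expand_self(source_expr, document_uri):
    # Expand "self", "https://self", or "self:443" into a concrete
    # (scheme, host, port), inheriting whatever is missing from the
    # protected document's origin.
    doc = urlsplit(document_uri)
    scheme, rest = doc.scheme, source_expr
    if "://" in source_expr:
        scheme, _, rest = source_expr.partition("://")
    port = doc.port or DEFAULT_PORTS.get(scheme)
    if ":" in rest:
        rest, _, port_str = rest.partition(":")
        port = int(port_str)
    host = doc.hostname if rest == "self" else rest
    return (scheme, host, port)

# e.g. for a document at https://www.example.com/page:
#   expand_self("https://self:443", "https://www.example.com/page")
#   -> ("https", "www.example.com", 443)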

> ---------------
> Reporting issues
> ---------------
>
> Violation Report: Headers
> I don’t think I understand the use case for sending the request
> headers. Is the hope that the server operator will be able to use this to
> catch XSS attempts and go plug holes to protect legacy clients? If
> so, should the Violation Report Sample example be explicitly
> updated to show such an injection attempt being caught?

Yes and Yes. Will add such an example.

> This seems like a potentially significant source of security risk and
> complexity. For instance, the client must ensure that it won’t leak
> Proxy-Authorization headers, etc.
It is. We should discuss this.

> Also, the “blocked-headers” field is defined as required, but some
> schemes (specifically, FTP and FILE) do not use headers.

Removed the requirement to send "request-headers" from the XML schema
(implied optional).

> Data Leak Vectors
> Per other parts of the spec, “all reports must be sent to the same
> host” should read “all reports must be sent to the same Origin (scheme/
> port/host).” The section “Restrictions on policy-uri and report-uri”
> has a similar problem.
Done.

> Violation Report Syntax: How to send
> It probably should be restated in this section that redirects are not
> to be followed.

Done.

> This section says that the report must be “transmitted” but does not
> explain how such transmission should occur. I assume the proposal is
> to use an HTTP POST. If so, this should be stated.

Done.

> Problem: Neither FTP, nor File, nor a number of other schemes support
> POST.

Fixed spec: "HTTP POST is used if available in the employed scheme,
otherwise an appropriate 'submit' method is used."
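So a report submission would look something like this on the wire (the
report-uri path and the Content-Type are made up for illustration; the
body is the XML report):

POST /csp-report HTTP/1.1
Host: example.com
Content-Type: application/xml

<csp-report>...</csp-report>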

> Violation Report: No redirects
> I like the simplicity of forbidding redirects outright, but I think
> there’s going to be a complaint that same-origin redirection via 307
> ought to be permitted. As a designer, I’d probably ignore that
> complaint, but I bet it’ll be made eventually.
We should indeed anticipate this complaint, but I think it should be
tabled for now.

> Parse Errors: Server detection
> Parse errors are defined as only being reported on the client. This
> is probably reasonable, but leads to the possibility that some UA will
> fail to parse some CSP directive and the server operator will not know
> about it.

I don't think we need to address this (as per Gerv's comment on this).

> Parse Errors: User notification
> If the “Fail closed” model is used, is there any way for the user to
> know why the site is broken? Isn’t this going to create a problem,
> where, say, a FF4 user will “downgrade” to a browser that doesn’t
> support CSP (say, Opera 9) because the site “works properly there”?
> Everyone loses.

We should discuss the use of a UI indicator or something similar, though
in the case of policy violations (that get repaired due to reporting),
the failure should be short-lived enough so it won't warrant downgrading.

> ---------------
> Misc
> ---------------
>
> User-Scripts
> Agreeing with Sacolcor, I think the spec should explicitly note that
> CSP isn’t intended to apply to User-Scripts, although I think the
> Greasemonkey guys might find it hard to implement their current
> feature-set considering where CSP is likely to be implemented in the
> browser stacks.

Agreed. Added comment to spec.

> Broken link
> The “CSP Syntax” link seems to be broken, and goes to the general
> introduction page?
>
> When any known directive contains a value that violates [[CSP syntax]]

Fixed.

> ---------------
> Logical evolutions aka Scope Creep
> ---------------
>
> Scope Creep: exempt HEAD
> We’ve had some folks suggest that CSP-like schemes would be more
> easily deployed if they could allow arbitrary script/css to be
> embedded inline/referenced in the HEAD tag. Presumably, you still get
> some solid security value here because the HEAD is a much smaller part
> of the document to protect from injections.

Under Discussion.

> Scope Creep: Prevent Sniffing & Unintended Reuse
> It seems natural that a subdownload should be able to say Content-
> Security-Policy: strictType which would cause a UA to refuse to sniff
> content or feed it to tags of mismatched typing (e.g. text/plain
> resource fed to a <SCRIPT> element, etc). This would go beyond IE's X-
> Content-Type-Options directive.
>
> In particular, this could be used by non-JS responses to explicitly
> prevent them from being used by SCRIPT tags, and to prevent HTML files
> from being scraped by liberal CSS parsers. This is an anti-CSRF ASR.

Under Discussion.

> Scope Creep: Same Origin Only
> The claim “Content Security Policy enables a site to specify which
> sites may embed a resource” is currently over-broad, but it shouldn’t
> be. (CSP currently seems to only apply to HTML documents, not
> "resources" in general).
>
> It seems natural that a subdownload should be able to say e.g. Content-
> Security-Policy: callers <originlist> which would cause the UA network
> stack to refuse to process (e.g. Set-Cookie) or return the content (to
> a script tag, object tag, image tag, XHR request etc) unless the
> Origin of the requestor matches the specified Origin list.
>
> (A competing idea is to respect Access-Control-Allow-Origin response
> headers on all types of requests (not just XHR) but I don’t think
> that’s what’s currently being proposed?)
>
> This allows for a useful CSRF/bandwidth misuse protection at little
> cost.

Under Discussion.

> I’m not fully convinced that the “Origin” proposal (or at least the
> versions I’ve read closely) will prove generally workable. Among
> other problems, every protected resource would need to be served with
> a Vary: Origin header, which is problematic for a number of reasons,
> including legacy IE bugs (http://blogs.msdn.com/ieinternals/archive/
> 2009/06/17/9769915.aspx).

> ---------------
> Feedback from others
> ---------------
> ASP.NET Controls
> Apparently, ASP.NET controls are tightly bound to use of JavaScript:
> protocol URIs, and this isn’t likely to be easily changed. For that
> reason, it might be interesting to have a way to allow only those URIs
> and not inline script blocks, event handlers, etc?

Under Discussion.

> ---------------
>
> That’s all I’ve got for now. Thanks for your great work here!
>
> Eric Lawrence
> Program Manager - IE Security

-Sid

Daniel Veditz

Jul 7, 2009, 7:59:45 PM
to Sid Stamm
Sid Stamm wrote:
>> Also, the “blocked-headers” field is defined as required, but some
>> schemes (specifically, FTP and FILE) do not use headers.
> Removed the requirement to send "request-headers" from the XML schema
> (implied optional).

Just jumping off here on a related topic: What do we send as the
"blocked-uri" when we find inline script? Since this is perhaps the most
common injection type this would be a good one for an example.

I suppose we could leave blocked-uri empty and let people infer that it
was inline script from the violated directive. I'd rather be explicit
about it, but then "blocked-uri" might be the wrong name. Or do we leave
the blocked-uri empty (absent, or present-but-empty?) and use a keyword
like <violated-directive>inline script</violated-directive>

For clarification, if the entire policy was "allow self othersite.com"
and we tried to load an image in violation of that policy, would the
violated-directive be the implied img-src or the allow fall-back that is
actually specified? I imagine it would be the allow directive.

Sid Stamm

Jul 7, 2009, 8:43:38 PM
to Daniel Veditz
Hi Dan,

You raise some excellent questions... you know, I hadn't really thought
about what to do about reporting inline script violations. I think the
intention was to just *not run* the violating script, but reporting the
violation is definitely a good idea since much of XSS happens this way.

Daniel Veditz wrote:
> Just jumping off here on a related topic: What do we send as the
> "blocked-uri" when we find inline script? Since this is perhaps the most
> common injection type this would be a good one for an example.

I think we need to send the URI of the protected document as the
blocked-uri, since the inline scripts live in there.

> I suppose we could leave blocked-uri empty and let people infer that it
> was inline script from the violated directive. I'd rather be explicit
> about it, but then "blocked-uri" might be the wrong name. Or do we leave
> the blocked-uri empty (absent, or present-but-empty?) and use a keyword
> like <violated-directive>inline script</violated-directive>

As far as the report details go.... since inline scripts violate a base
restriction of CSP, maybe we should change up the violation report
format a bit. How about this: the report either contains a
"violated-directive" field or "violated-base-restriction" field. If a
directive is violated due to a resource load (like an image), the
"violated-directive" field is sent. If it is a base restriction, such
as inline script, the "violated-base-restriction" field is sent. We can
formalize the names of each base restriction for inclusion in the report.

Here's an example, as I propose:
<csp-report>
<request>GET /index.html HTTP/1.1</request>
<request-headers><![CDATA[
Host: example.com
User-Agent: Mozilla/5.0 (X11; U; ...
Accept: text/html,application/xhtml ...
]]></request-headers>
<blocked-uri>http://myserver.com/index.html</blocked-uri>
<violated-base-restriction>1.0: NO INLINE SCRIPTS</violated-base-restriction>
</csp-report>

And then the new schema would require one of either the
"violated-directive" or "violated-base-restriction" entities.

> For clarification, if the entire policy was "allow self othersite.com"
> and we tried to load an image in violation of that policy, would the
> violated-directive be the implied img-src or the allow fall-back that is
> actually specified? I imagine it would be the allow directive.

There are arguments for both choices:
1. We could send the "allow" directive for ease in figuring out which
directive was violated; this is the most straightforward report.
2. We could send the "img-src" directive: the recipient of the report
may want to know that the blocked URI was requested for display as an image.

Maybe we can compromise and say something like:
<violated-directive>(allow as img-src) self
othersite.com</violated-directive>

Thoughts?

-Sid

Daniel Veditz

Jul 8, 2009, 4:02:13 AM
to EricLaw
EricLaw wrote:
> ---------------
> Versioning
> ---------------
> User-Agent header
> What’s the use-case for adding a new token to the user-agent header?
> It’s already getting pretty bloated (at least in IE) and it’s hard to
> imagine what a server would do differently when getting this token.

The UA approach may be a botch, but it was an attempt at something like
a less-verbose Accept-type header (six bytes in the UA, many more as a
separate header which would have to be sent with every request, with no
servers today actually understanding anything about CSP). Should the
policy syntax ever change a server could theoretically send different
syntax to a CSP/1 browser and a CSP/2 browser.

The other approach is to version the response, a few extra bytes only
when a server supports CSP. Yay, bandwidth win! But then what do we do?
How does the server know which version to send? Should it send every
version it knows about, and the client process the highest version it
knows how to process? That means if we ever have a CSP-2 either clients
are sending two complete headers (or three, or more) or they're sending
their preferred version and users of clients which only support CSP-1
get zero protection rather than the 99% they actually support.

In the case of brand-new directives older clients can simply ignore
unknowns and that will work OK in many cases. Either loads of that type
aren't supported at all (e.g. downloadable fonts, maybe?) or they can
reasonably fall back to the default allow directive. That might leave
users of older clients vulnerable for that type (or only partially
protected), but no worse than users of browsers that don't support CSP at
all.

What if we change the rules? Suppose we add a "head" keyword to the
script-src directive. Older clients will think that's a host named
"head" and strip all the in-line <head> scripts the site relies on. In
that case a versioned response actually works better for the site. The
older clients get zero protection, much less than they are capable of
providing (but the site still has to work to protect legacy browsers
with no CSP at all), and at least the content isn't broken.

> frame-ancestors
> What exactly an “ancestor” is should probably be defined here.

Would "frame-parents" make any more sense? Ties in to the window.parent
property rather than introducing a new name for the concept.

> The “how many labels can * represent” problem has come up in a number
> of contexts, including Access-Control and HTTPS certificate
> validation. In the latter case, * is defined in RFC2818 as one DNS
> label, but Firefox does not currently follow that RFC.

Firefox 3.5 does, actually. The regexp syntax followed in older versions
of Firefox was inherited from Netscape and predated the RFC by years. A
small but vocal minority took advantage of the feature for internal
servers, but given the lack of support in other browsers it was well
past time to let it go.

> • The spec might want to note that using wildcards does not permit
> access to banned ports http://www.mozilla.org/projects/netlib/PortBanning.html.

Maybe an implementation note saying nothing in CSP prevents a user agent
from blocking loads for other reasons. AdBlock will block additional
loads, NoScript will block scripts, LocalRodeo will block access to
RFC1918 addresses, etc. The Content Security Policy allows a site to
define _additional_ restrictions it would like the client to impose for
that content, but is in no way intended to loosen restrictions already
imposed by the client for its own reasons.

> • Scheme wildcarding is somewhat dangerous. Should the spec define
> what happens for use of schemes that do not define origin
> information? (javascript: and data: are listed, but there are
> others).

I am personally 100% against scheme wildcarding. There are so few
schemes a site could reasonably want to allow that it shouldn't be hard
to list them.

> X-Content-Security-Policy: allow https://self
>
> Doesn’t make sense to me, because “self” is defined to include the
> scheme. This suggests that we need a "selfhost" directive, which
> includes the hostname only.

Doesn't make sense to me either. "self" should be a keyword. If you want
to stick schemes and ports on there then you should have to explicitly
state your FQDN.
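
(Illustration only, hostname invented: instead of decorating "self", a site
that wants to pin the scheme and port would spell it out, e.g.

  X-Content-Security-Policy: img-src https://www.example.com:443

and "self" would stay a plain keyword meaning the document's own scheme,
host, and port.)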


> Violation Report: Headers


> This seems like a potentially significant source of security risk and
> complexity. For instance, the client must ensure that it won’t leak
> Proxy-Authorization headers, etc.

Maybe we should explicitly define which headers we will send. Do the
Accept headers really help, for instance?

We definitely want the method and the path, Host, Referer, Origin (when
we have that), Cookie (and Cookie2 for UAs that support that). Anything
else?
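
Just to make the shape concrete -- element names below are placeholders, not
settled schema -- I'm imagining a report along these lines:

  <csp-report>
    <request>GET /search?q=flowers HTTP/1.1</request>
    <request-headers>
      Host: example.com
      Referer: http://example.com/
      Cookie: SID=opaque-session-token
    </request-headers>
    <blocked-uri>http://evil.example.net/x.js</blocked-uri>
    <violated-directive>script-src self</violated-directive>
  </csp-report>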

The user-agent might be marginally useful for diagnostic purposes should
different clients start reporting different errors, but could probably
be gotten from the POST itself and not need to be repeated in the report
body. I don't care either way, whichever might be more convenient for
site authors.

I suppose there are probably cases where a site serves content that's
different enough in response to Accept headers that it's worth including
them, although the Accept headers aren't under the control of the
XSS-attempting attacker.

> Parse Errors: User notification
> If the “Fail closed” model is used, is there any way for the user to
> know why the site is broken? Isn’t this going to create a problem,
> where, say, a FF4 user will “downgrade” to a browser that doesn’t
> support CSP (say, Opera 9) because the site “works properly there”?
> Everyone loses.

Since "user choice" is a fundamental principle of the Mozilla Foundation,
we will almost certainly have some back-door way for an advanced user to
tell the browser to ignore the site's Content Security Policy, but I
wouldn't want to write that option into the spec. The spec should define
what a conforming implementation should do. If a user wants to
customize their browser into a non-conforming implementation, that is
outside the spec.

While I think browsers should try to tell users why the content looks
bad (just as we try to tell them why we're not accepting certain SSL
certificates), I don't think the CSP spec should be dictating user
presentation. Mozilla will be spitting violations into our Error Console
at the very least, but whether we do something more visible to the user
will probably come out of experimentation.

> Agreeing with Sacolcor, I think the spec should explicitly note that
> CSP isn’t intended to apply to User-Scripts, although I think the
> Greasemonkey guys might find it hard to implement their current
> feature-set considering where CSP is likely to be implemented in the
> browser stacks.

We're going to have trouble keeping 100% of current user scripts
working. May have to add some API so Greasemonkey can actively
participate in the content security policy model, such as by having user
scripts declare which resources they're going to try to load so we can
add them to the whitelist.

> Scope Creep: exempt HEAD
> We’ve had some folks suggest that CSP-like schemes would be more
> easily deployed if they could allow arbitrary script/css to be
> embedded inline/referenced in the HEAD tag.

Gerv's original Content Restrictions allowed this, too. I'm not
convinced the people who suggest that have looked at real-life pages.
Even with <head> scripts allowed CSP will require massive rewrites of
most pages, and XSS injection does happen in the <head>.

Worth keeping in mind after we get some experimentation, but I'd rather
start out with a stricter policy and loosen it later than do the reverse.

> (CSP currently seems to only apply to HTML documents, not
> "resources" in general).

CSP is currently a document-focused policy.

> It seems natural that a subdownload should be able to say e.g. Content-
> Security-Policy: callers <originlist>

That's not too far off from what frame-ancestors does (which was also a
scope-creep). Could they be combined in some way?

I'd like something like that, but won't concerned sites want to enforce
it server-side? A reliable Referer, or the Origin/Sec-From header would
seem more useful there.

-Dan Veditz

Gervase Markham

Jul 8, 2009, 12:21:20 PM
to
On 08/07/09 09:02, Daniel Veditz wrote:
> The UA approach may be a botch, but it was an attempt at something like
> a less-verbose Accept-type header (six bytes in the UA, many more as a
> separate header which would have to be sent with every request, with no
> servers today actually understanding anything about CSP). Should the
> policy syntax ever change a server could theoretically send different
> syntax to a CSP/1 browser and a CSP/2 browser.

So the versioning in the UA is to guard against a policy syntax change.
But the syntax is so simple (a list of key/value pairs) that it's very,
very hard to imagine a requirement which would mean we *had* to break
the syntax. And yet, every request the browser ever sends acquires
another six bytes, until the end of time. (This is not a UA token which
changes over time as browsers change, like the OS token; it's one which
has to be present forever.)

I don't think the risk of needing a breaking syntax change is worth it.
In that very unlikely event, we should instead plan to deploy a new
header, as you say below. There's more downstream bandwidth than
upstream, and there's more every year.

> The other approach is to version the response, a few extra bytes only
> when a server supports CSP. Yay, bandwidth win! But then what do we do?
> How does the server know which version to send? Should it send every
> version it knows about, and the client process the highest version it
> knows how to process? That means if we ever have a CSP-2 either clients
> are sending two complete headers (or three, or more) or they're sending
> their preferred version and users of clients which only support CSP-1
> get zero protection rather than the 99% they actually support.

But this scary scenario fails to take into account the frankly tiny
chance that we'll need to make one breaking syntax change, let alone
two. Even with the spec as it is. Careful design can reduce the chances
even further.

> In the case of brand-new directives older clients can simply ignore
> unknowns and that will work OK in many cases. Either loads of that type
> aren't supported at all (e.g. downloadable fonts, maybe?) or they can
> reasonably fall-back to the default allow directive. That might leave
> users of older clients vulnerable for that type (or only partially
> protected), but no worse than users of browser that don't support CSP at
> all.

Exactly.

> What if we change the rules? Suppose we add a "head" keyword to the
> script-src directive. Older clients will think that's a host named
> "head" and strip all the in-line<head> scripts the site relies on.

So we have an "inline metadata" bug in the spec, in that we are putting
domain names and keywords in the same slot. We could either use case to
delimit keywords (HEAD, SELF) or we could prefix them with a character
not permitted in hostnames (!head, $self).

Even if we'd deployed already, we could fix this without breaking syntax
by having a script-head: yes directive. Ugly, sure; I mention it just to
show that the chances of us _having_ to break compatibility are tiny.

> Would "frame-parents" make any more sense? Ties in to the window.parent
> property rather than introducing a new name for the concept

Good idea.

Gerv

Gervase Markham

Jul 8, 2009, 12:25:26 PM
to
On 07/07/09 19:18, Sid Stamm wrote:
> I personally want to eradicate the META tag
> (http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html). This
> should be discussed more in depth to decide if we should remove META
> support, if we should support multiple HTTP headers, etc.

My comment:

Why not allow multiple headers, and keep the intersection algorithm?

This way, the hosting company has to provide a special interface for
editing the header rather than the customer just being able to type it
into the page, but it still allows the hosting company to make non-negotiable
restrictions. They just serve their restrictions in the first header,
and make sure the customer-provided header comes afterwards.

This means you still need the policy intersection logic, so that part of
the complexity isn't removed, but it still allows, at least in some ways,
the use case that you were worried about. A compromise, in other words.
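
Roughly (hostnames invented for the example): the host serves

  X-Content-Security-Policy: allow self *.example.net

and the customer's page adds

  X-Content-Security-Policy: allow self static.example.net

The client enforces the intersection, so the effective policy is
"allow self static.example.net" -- the customer can tighten the host's
wildcard, but anything the customer lists that the host never allowed
simply drops out.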


However, I would now add that removing <meta> support (i.e. inline
policy) and sticking with headers makes CSP an HTTP-only technology. Is
that something we are happy with?

> Spec updated to support relative URIs. I don't think CSP should interact
> with the BASE tag at all.

I agree. Even with <meta>, it's just saying "hey, here's a header you
didn't get". So none of your <base> are belong to us.

>> What happens to CSP if I save a CSP-protected document to my local
>> disk? I’d assume it would be ignored (because many restrictions could
>> be broken) but this should be explicit. Also, when saving docs to
>> disk, HTTP headers are lost, so to preserve it, you’d need to
>> explicitly serialize to a META tag, which could get complicated if the
>> document already had a CSP META…
> Under discussion.

Let's say content saved to disk should just lose its CSP. What would be
the disadvantages of that policy?

> Updated spec to allow "https://self:443" syntax. Self is flexible and may
> or may not include scheme and port. When absent from the expression,
> scheme or port are inherited.

Good idea.

>> Apparently, ASP.NET controls are tightly bound to use of JavaScript:
>> protocol URIs, and this isn’t likely to be easily changed. For that
>> reason, it might be interesting to have a way to allow only those URIs
>> and not inline script blocks, event handlers, etc?
> Under Discussion.

I think we need to figure out whether permitting this in fact blows all
protection out of the water, or not.

Gerv

Sid Stamm

Jul 8, 2009, 12:53:27 PM
to
On 7/8/09 9:21 AM, Gervase Markham wrote:
>> Would "frame-parents" make any more sense? Ties in to the window.parent
>> property rather than introducing a new name for the concept
>
> Good idea.

"frame-parents" suggests to me one level of ancestry... Your grandmother
is not your parent, right? Are there examples in other technologies or
features that we can follow?

-Sid

Daniel Veditz

Jul 8, 2009, 1:05:11 PM
to Sid Stamm
Sid Stamm wrote:
> You raise some excellent questions... you know, I hadn't really thought
> about what to do about reporting inline script violations. I think the
> intention was to just *not run* the violating script, but reporting the
> violation is definitely a good idea since much of XSS happens this way.

I had always assumed that if we were going to report anything, it'd be
an inline script attempt -- the heart of most XSS attacks.

> How about this: the report either contains a
> "violated-directive" field or "violated-base-restriction" field.

I'm not keen on the either/or; can we pick one that will serve for both?
There are not many policies that are not directives, so we can define in
the spec what we will send for those violations.

e.g.
<restriction>allow none</restriction>
<restriction>img-src *.flickr.com self</restriction>
<restriction>inline script</restriction>

I don't care much what the tag name is (although
"violated-base-restriction" is a little extreme); what I'd like is a
consistent report format. All fields should be present (even if empty),
and the same fields every time.

Suggestions for the tag could be
violated-directive // mostly accurate, reporting the implied
// no-inline-script "directive" is OK
violated-policy
restriction
policy // violation implied, else we wouldn't report

>> For clarification, if the entire policy was "allow self othersite.com"
>> and we tried to load an image in violation of that policy, would the
>> violated-directive be the implied img-src or the allow fall-back that is
>> actually specified? I imagine it would be the allow directive.
> There's arguments for both choices:
> 1. We could send the "allow" directive for ease in figuring out which
> directive was violated; this is the most straightforward report.

I prefer sending the actual policy; I just want the spec to be clear
about what happens.

> Maybe we can compromise and say something like:
> <violated-directive>(allow as img-src) self
> othersite.com</violated-directive>
>
> Thoughts?

I like either of your first two suggestions over a wishy-washy compromise that sends both.

Bil Corry

Jul 8, 2009, 1:22:55 PM
to Gervase Markham, dev-se...@lists.mozilla.org
Gervase Markham wrote on 7/8/2009 11:25 AM:
> On 07/07/09 19:18, Sid Stamm wrote:
>> I personally want to eradicate the META tag
>> (http://blog.sidstamm.com/2009/06/csp-with-or-without-meta.html). This
>> should be discussed more in depth to decide if we should remove META
>> support, if we should support multiple HTTP headers, etc.
>
> My comment:
>
> Why not allow multiple headers, and keep the intersection algorithm?
>
> This way, the hosting company has to provide a special interface for
> editing the header rather than the customer just being able to type it
> into the page, but it still allows it to make non-negotiable
> restrictions. They just serve their restrictions in the first header,
> and make sure the customer-provided header comes afterwards.

If the hosting company is providing an interface to add one or more additional CSP headers, then wouldn't it be just as easy for them to provide an interface that constructs a single header?


- Bil

Gervase Markham

Jul 9, 2009, 7:01:05 AM
to
On 08/07/09 18:22, Bil Corry wrote:
> If the hosting company is providing an interface to add one or more
> additional CSP headers, then wouldn't it be just as easy for them to
> provide an interface that constructs a single header?

The scenario here is that they have a set policy, which an individual
site owner is permitted to tighten but not loosen. To do that by editing
one header would mean that either they'd need to post-check the header
to make sure it was no looser than the original, or they'd need to
implement the header-merging logic which would otherwise be in the
client. Which means N implementations of header merging, some buggy,
rather than one.

Header-merging logic in the client should just be a case of setting bits
to 1 and not letting them get set back to 0 again. That can't be that hard.

Gerv

EricLaw

Jul 9, 2009, 6:05:24 PM
to
Lots of great thoughts in this thread!

I wanted to elaborate a bit here:

> > It seems natural that a subdownload should be able to say e.g. Content-
> > Security-Policy: callers <originlist>
> That's not too far off from what frame-ancestors does (which was also a
> scope-creep). Could they be combined in some way?
>
> I'd like something like that, but won't concerned sites want to enforce
> it server-side? A reliable Referer, or the Origin/Sec-From header would
> seem more useful there.

Some might, but that basically requires the server to send Vary:
Origin or Vary: Sec-From for all resources returned. This seems like
it could impair performance for otherwise cacheable
resources.
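
For instance (hypothetical resource), a widget script that checks Sec-From
server-side and only answers approved partners would have to emit

  Vary: Sec-From

on every response, otherwise a shared cache could serve a copy approved for
one caller to a request from a different caller. A client-enforced
"callers" directive avoids that, because the response is identical for
everyone and the browser does the filtering.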

Daniel Veditz

Jul 10, 2009, 4:49:12 AM
to Gervase Markham
Gervase Markham wrote:
>> What if we change the rules? Suppose we add a "head" keyword to the
>> script-src directive. Older clients will think that's a host named
>> "head" and strip all the in-line<head> scripts the site relies on.
>
> So we have an "inline metadata" bug in the spec, in that we are putting
> domain names and keywords in the same slot. We could either use case to
> delimit keywords (HEAD, SELF) or we could prefix them with a character
> not permitted in hostnames (!head, $self).

That misses my point. Suppose we have such a syntax and add a new $HEAD
keyword. The old client knows it is not a hostname and ignores it as an
unknown keyword. Then it merrily follows old behavior and strips all the
in-line scripts from the <head> section, breaking the site.

> Even if we'd deployed already, we could fix this without breaking syntax
> by having a script-head: yes directive. Ugly, sure; I mention it just to
> show that the chances of us _having_ to break compatibility are tiny.

The problem is not the syntax but the meaning. The site wants to use a
configuration that cannot be expressed in the old syntax. If the client
advertises its support level the site might be able to send different
CSP headers (or none at all) for the older client. Alternatively we put
versioning into the CSP response header, either explicitly in the
proposed syntax or implicitly by later using Content-Security-Policy-2
after significant changes in meaning.

If we're optimistic that we've got the buckets close to right we can
ignore versions for now and later change the header name if we have to.
That seems to be what you're saying; I'm clearly not as optimistic as
you -- occupational hazard. I suppose we're already taking that tack by
using an X- header for now, giving us a one-time pass to change the
syntax/semantics when we drop the X- before ship. Realistically, though,
there won't be enough real-world use by then to have figured out what
might need to be different.

Separate from the header, we may want to plan for versioning in the
policyURI file format, maybe even versioned sections so that even if we
do have a semantic change a site could fall back to sending a common
CSP: policyURI <foo> to everyone.

-Dan

Lucas Adamski

Jul 10, 2009, 3:18:12 PM
to Gervase Markham, dev-se...@lists.mozilla.org
With security, it's safer (and more accurate) to assume compatibility
breakage than not. It's not just syntax that can change but the rules
themselves. For example, if we identify new vectors for code
injection, we might have to block additional APIs, thus breaking sites
that would otherwise effectively support CSP without any change in
syntax.

Even something as relatively simple to reason about as HTTP itself
is versioned; that is why each transaction starts with the HTTP
version for the request. CSP is a complex security model; I would say
backwards breakage in the future is inevitable regardless of how much
we churn on it now, given it will have to evolve hand in hand with our
understanding of the threat models. It cannot anticipate attack
vectors we don't yet know about.
Lucas.

Brandon Sterne

Jul 13, 2009, 5:48:26 PM
to EricLaw, dev-se...@lists.mozilla.org

I don't see why servers need to send Vary: Sec-From for all resources
returned. Can't they just send it for the resources that they don't want
cached?

You mentioned that there are legacy IE bugs that would be problematic
for sending Vary: Sec-From. In the article you posted it says:

> Internet Explorer 6 will treat a response with a Vary header as
> completely uncacheable...

This seems like a problem of underutilizing browser caching but it
doesn't seem to break the Sec-From model where each request is validated
by the server using the context supplied in Sec-From. If an extra
request is generated, it will be validated by the server in the same way
as the original request. Plus, this is assuming that Microsoft even
plans to implement Sec-From in IE 6/7.

Are there other problems that you see in the Sec-From model? It
addresses both CSRF and the bandwidth-stealing issue you raised. I'm
personally a strong supporter.

Cheers,
Brandon
