HTTPS requirements in SRI v1 spec

Joel Weinberger

Dec 11, 2014, 12:45:05 AM
to security-dev, Frederik Braun
Hello security-dev'ers. Over in spec land, we've been pushing towards a version 1 subresource integrity spec. Most of what we've come to hasn't been too controversial (as we've taken out a lot of the controversial aspects, such as caching).

However, one major hurdle exists: should integrity only be enforced on sites that are carried over authenticated origins? For example, can http://foobar.com have a <script> tag with an integrity attribute? (Important note: it is not controversial that the resources that integrity values point to must be over authenticated origins).
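
For concreteness, the markup in question would look roughly like the sketch below; the CDN hostname, the attribute syntax, and the hash value are illustrative placeholders rather than anything taken from the spec or a real deployment:

    <!-- A page served from http://foobar.com embedding an integrity-protected
         script. The resource itself comes from an authenticated origin; the
         open question is whether a UA should honor the integrity attribute at
         all when the embedding document was fetched over unauthenticated HTTP. -->
    <script src="https://cdn.example.com/lib/widget.min.js"
            integrity="sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
            crossorigin="anonymous"></script>
    <!-- crossorigin assumes the CORS opt-in that cross-origin integrity
         checks would need; the digest above is a placeholder value. -->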

Chromium and Mozilla so far have had different approaches to this. Namely, we've publicly argued for a requirement that SRI only work on authenticated origins as we believe integrity is only meaningful if carried over a channel that itself provides integrity. Mozilla, however, feels that it still has value and we shouldn't neuter an API that might provide value. The compromise that Freddy Braun of Mozilla (my co-editor on the spec, along with Mike West of Google and Dev Akhawe of Dropbox) proposed is language like, "a user-agent MAY want to enable SRI support only on authenticated origins" which would allow both of our approaches.

What do you all think? I'm particularly interested in the thoughts of Chromium contributors here, since this relates to policy that we've been trying to encourage for a while (HTTPS for new features). Personally, I think this is a pretty good compromise as it allows us to keep our implementation and approach but allows for Mozilla and other user-agents to be more flexible.

There would still be some questions remaining, such as what to do in Chromium if an integrity value is placed on an unauthenticated origin, which would otherwise be valid with this language in another user agent (should it fail to load, or should it just not enforce the integrity?), but I think we can work that out, too.
--Joel

Ryan Sleevi

Dec 11, 2014, 1:07:30 AM
to Joel Weinberger, Frederik Braun, security-dev


On Dec 10, 2014 9:45 PM, "Joel Weinberger" <j...@chromium.org> wrote:
>
> Hello security-dev'ers. Over in spec land, we've been pushing towards a version 1 subresource integrity spec. Most of what we've come to hasn't been too controversial (as we've taken out a lot of the controversial aspects, such as caching).
>
> However, one major hurdle exists: should integrity only be enforced on sites that are carried over authenticated origins? For example, can http://foobar.com have a <script> tag with an integrity attribute? (Important note: it is not controversial that the resources that integrity values point to must be over authenticated origins).
>
> Chromium and Mozilla so far have had different approaches to this. Namely, we've publicly argued for a requirement that SRI only work on authenticated origins as we believe integrity is only meaningful if carried over a channel that itself provides integrity.

I must admit, I'm personally a little mixed on this, though I won't rehash the debate here. As one of the staunchest supporters of HTTPS-all-the-things, I can't help but feel like I don't (fully) get this argument.

> Mozilla, however, feels that it still has value and we shouldn't neuter an API that might provide value. The compromise that Freddy Braun of Mozilla (my co-editor on the spec, along with Mike West of Google and Dev Akhawe of Dropbox) proposed is language like, "a user-agent MAY want to enable SRI support only on authenticated origins" which would allow both of our approaches.
>
> What do you all think? I'm particularly interested in the thoughts of Chromium contributors here, since this relates to policy that we've been trying to encourage for a while (HTTPS for new features). Personally, I think this is a pretty good compromise as it allows us to keep our implementation and approach but allows for Mozilla and other user-agents to be more flexible.
>

This is effectively the compromise the WebCrypto WG reached, and I personally consider it deeply unsatisfying and misleading for web developers.

I realize the cases are nuanced - the use of keys in WebCrypto is unquestionably unsafe for any purpose other than device tracking, while I imagine the argument for SRI+HTTP is that it protects you against hosting compromise, but not ISP compromise.

I think you need to carefully consider what the effect on developers will be. We already see no shortage of developers encountering Chrome's HTTPS-only restriction after testing in Firefox. While deeply disturbing in its own right (these developers are surprised to find HTTP is not a secure script delivery mechanism, yet are busy implementing crypto systems), the biggest concern is that there's no place they can go and see what the "spec added" requirements are.

So I guess I'm saying there's precedent, but you should be very careful before assuming that precedent is good precedent (especially if from the Web Crypto WG).

I would also encourage you to be explicit in your specification exactly what should happen if a UA opts to make the MAY a MUST. Using a sentence as you described leaves great ambiguity (as you note). Instead, work that decision into every normative algorithm you have and define what must happen.

craig....@gmail.com

Dec 11, 2014, 5:53:51 AM
to securi...@chromium.org, j...@chromium.org, fbr...@mozilla.com, rsl...@chromium.org
On Dec 10, 2014 9:45 PM, "Joel Weinberger" <j...@chromium.org> wrote:

> However, one major hurdle exists: should integrity only be enforced on sites that are carried over authenticated origins? For example, can http://foobar.com have a <script> tag with an integrity attribute? (Important note: it is not controversial that the resources that integrity values point to must be over authenticated origins).


As a website developer, I would really like HTTPS for everything as well... but when my localhost machine has 60 websites checked out (I seem to have collected a few), it would be a bit of a pain to sort out a certificate for each one.

For reference, they all use a "client.project.dev1.example.com" domain format, mapped automatically to a "client.project" folder via an Apache RewriteRule... where a wildcard certificate doesn't work for sub-sub-domains... and it is useful that dev2 can see dev1's work in progress (while on the same LAN).

I'm not saying my situation should be considered, but I know many people do work on multiple websites, and during development HTTPS is too hard to set up (something I hope will change).

Craig

Chris Palmer

Dec 11, 2014, 1:57:05 PM
to craig....@gmail.com, security-dev, Joel Weinberger, Frederik Braun, Ryan Sleevi
On Thu, Dec 11, 2014 at 2:53 AM, <craig....@gmail.com> wrote:

> As a website developer, I would really like HTTPS for everything as well... but when my localhost machine has 60 websites checked out (I seem to have collected a few), it would be a bit of a pain to sort out a certificate for each one.

Localhost, to any port, is considered a secure origin, regardless of scheme:

http://www.chromium.org/Home/chromium-security/prefer-secure-origins-for-powerful-new-features
https://w3c.github.io/webappsec/specs/powerfulfeatures/

So, you could conceivably do client-project-dev0 =
http://localhost:80, client-project-dev1 = http://localhost:81, et c.
But:

> For reference, they all use a "client.project.dev1.example.com" domain format, mapped automatically to a "client.project" folder via an Apache RewriteRule... where a wildcard certificate doesn't work for sub-sub-domains... and it is useful that dev2 can see dev1's work in progress (while on the same LAN).

Why not client-project-dev1.example.com? Then *.example.com would work.

You can also issue any example.com certificates you need from an
issuer you/your company sets up, and name-constrain it to example.com,
and install it as a trust anchor on your company's development
machines. (The name constraint is important for safety: Since that
issuing cert could get stolen if your company gets hacked, you
wouldn't want it to be an all-powerful issuer.)

Another idea is to get a real, publicly-issued and valid cert for
*.dev.yourcompany.com and *.staging.yourcompany.com, and then have as
many client-project-N hostnames as you want, and test by hooking "git
push ..." so that the change is also pushed to the server. Then you
can develop and test with fast iteration, and you can show your
clients the progress (probably want to gate it behind HTTP Basic Auth
or similar so that the random public can't easily get to it).

> I'm not saying my situation should be considered, but I know many people do work on multiple websites, and during development HTTPS is too hard to setup (something I hope will change).

We should consider your situation — we really do need to make it
easier for you and the many people like you! I'd be interested to hear
if any of the above deployment scenarios could work for you.

Chris Palmer

Dec 11, 2014, 2:08:29 PM
to Joel Weinberger, security-dev, Frederik Braun
On Wed, Dec 10, 2014 at 9:45 PM, Joel Weinberger <j...@chromium.org> wrote:

> Chromium and Mozilla so far have had different approaches to this. Namely,
> we've publicly argued for a requirement that SRI only work on authenticated
> origins as we believe integrity is only meaningful if carried over a channel
> that itself provides integrity. Mozilla, however, feels that it still has
> value and we shouldn't neuter an API that might provide value. The
> compromise that Freddy Braun of Mozilla (my co-editor on the spec, along
> with Mike West of Google and Dev Akhawe of Dropbox) proposed is language
> like, "a user-agent MAY want to enable SRI support only on authenticated
> origins" which would allow both of our approaches.

That leaves users and developers in a state of confusion. Let's not
repeat the WebCrypto mishap.

I really don't understand why anyone would want to make HTTP appear,
falsely, to continue to be acceptable. It makes even less sense to
make the HTTPS guarantee ambiguous. The combination of server
authentication, data integrity, and data confidentiality, for all
resources, is the *bare* *minimum*.

We should be well past this, and moving on to discuss such things as
encrypting SNI/other parts of the handshake, making traffic analysis
more expensive, improvements to authentication like getting rid of
bearer tokens, and so on. We should be celebrating with scotch and ice
cream as Certificate Transparency rolls out to 100%...

I can't believe we're still having to fight to maintain the bare
minimum. It's almost 2015.

Marc-Antoine Ruel

Dec 11, 2014, 2:17:34 PM
to Chris Palmer, Joel Weinberger, security-dev, Frederik Braun
Let's do what is best for the users? The choice is obvious. As a random bystander, I totally agree with Chris. It's almost 2015. One would have to be totally disconnected, not having watched any news in the past few years, to not understand the implications.

M-A

Ryan Sleevi

Dec 11, 2014, 2:25:46 PM
to Marc-Antoine Ruel, Chris Palmer, Joel Weinberger, security-dev, Frederik Braun
To make it clearer: I disagree with Marc-Antoine and Chris here.

SRI provides protection against rogue CDNs. That is its single and sole security guarantee. It doesn't provide protection in transit - that's the mixed content discussion we rightfully pushed out of scope for v1.

In that model, if I'm the author of http://foo.com and I want to source a resource from http://bar.com, and I want to ensure that resource has not been modified by the host, then SRI benefits and provides a net-positive security benefit.

It does not and cannot provide protection against ISP tampering. But that's ok, because SRI has never been about protecting the transit/end to end communication, it's about not trusting the peer you're communicating with. HTTPS doesn't help here.

The only argument I can see for requiring HTTPS is:
1) Hostage taking - holding new features hostage until people adopt HTTPS
2) Uneducated developers - preventing people from assuming that SRI provides some form of E2E protection/guarantees, which it doesn't. That's the role of HTTPS.

I understand the "risk" of 2, but I don't think this is any different than the state of the world today. Developers think WebCrypto can provide them security. Developers think eval() is safe. Developers think regex's can parse HTML. Developers will continue to make mistakes, no matter what the platform, no matter what the language. We need to educate, we need to ensure APIs are safe by default, but we need to avoid paternalism where we'll assume you'll misuse every tool we give you, so we won't give you tools.

As for 1, I find that deeply disturbing, and I hope the Chrome/Chromium community never finds that acceptable. The choices so far on HTTPS are motivated very strongly by real privacy and security issues with the alternative, where delivering the feature over HTTP would undermine the security or privacy of users. I don't see that argument here for SRI, not any more than HTTP in general.

Mike West

Dec 11, 2014, 2:27:31 PM
to pal...@chromium.org, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 8:08 PM, 'Chris Palmer' via Security-dev <securi...@chromium.org> wrote:
> I really don't understand why anyone would want to make HTTP appear,
> falsely, to continue to be acceptable. It makes even less sense to
> make the HTTPS guarantee ambiguous. The combination of server
> authentication, data integrity, and data confidentiality, for all
> resources, is the *bare* *minimum*.
>
> We should be well past this, and moving on to discuss such things as
> encrypting SNI/other parts of the handshake, making traffic analysis
> more expensive, improvements to authentication like getting rid of
> bearer tokens, and so on. We should be celebrating with scotch and ice
> cream as Certificate Transparency rolls out to 100%...
>
> I can't believe we're still having to fight to maintain the bare
> minimum. It's almost 2015.

Just so we're clear about what's being proposed, here's my understanding:

Chrome currently hard-fails `<script integrity="..." src="https://...">` if the request is made from a page that isn't sufficiently secure[1], even if the integrity check would otherwise pass. The proposal is to remove that failure condition so that integrity checks inside HTTP-delivered documents could be performed.

This proposal is not about weakening mixed content checks, which I think is what you're referring to with "make the HTTPS guarantee ambiguous". 

Is that proposal what you're arguing against, Chris?


-mike

Chris Palmer

Dec 11, 2014, 2:44:27 PM
to Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 11:25 AM, Ryan Sleevi <rsl...@chromium.org> wrote:

> In that model, if I'm the author of http://foo.com and I want to source a
> resource from http://bar.com, and I want to ensure that resource has not
> been modified by the host, then SRI benefits and provides a net-positive
> security benefit.

So the # of entities that could mangle the resource goes from 2**20 to
2**20 - 1. De minimis non curat lex securitatis.

> The only argument I can see for requiring HTTPS is:
> 1) Hostage taking - holding new features hostage until people adopt HTTPS

Nobody wants to do that.

> 2) Uneducated developers - preventing people from assuming that SRI provides
> some form of E2E protection/guarantees, which it doesn't. That's the role of
> HTTPS.

Of course. But people are *still* thinking that WebCrypto might mean
something over HTTP, which indicates that developers find security
confusing and subtle. Therefore we must not make it any more confusing
and subtle — it's bad enough already.

Mike West

Dec 11, 2014, 2:58:17 PM
to Chris Palmer, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 8:44 PM, 'Chris Palmer' via Security-dev <securi...@chromium.org> wrote:
> On Thu, Dec 11, 2014 at 11:25 AM, Ryan Sleevi <rsl...@chromium.org> wrote:
>
> > In that model, if I'm the author of http://foo.com and I want to source a
> > resource from http://bar.com, and I want to ensure that resource has not
> > been modified by the host, then SRI benefits and provides a net-positive
> > security benefit.
>
> So the # of entities that could mangle the resource goes from 2**20 to
> 2**20 - 1. De minimis non curat lex securitatis.

Abundans cautela non nocet. So there!

-mike

Chris Palmer

Dec 11, 2014, 3:00:23 PM
to Mike West, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 11:27 AM, Mike West <mk...@chromium.org> wrote:

> Just so we're clear about what's being proposed, here's my understanding:
>
> Chrome currently hard-fails `<script integrity="..." src="https://...">` if
> the request is made from a page that isn't sufficiently secure[1], even if
> the integrity check would otherwise pass. The proposal is to remove that
> failure condition so that integrity checks inside HTTP-delivered documents
> could be performed.
>
> This proposal is not about weakening mixed content checks, which I think is
> what you're referring to with "make the HTTPS guarantee ambiguous".

Primarily, I mean "ambiguous in the minds of developers." Developers
have a hard time understanding security engineering topics — it's a
specialty, and it's significantly weirder and more confusing even than
other engineering specialties. No more mental caltrops for people who
are already trying hard.

Thankfully, everyone seems to be against showing any user-visible
guarantee to users when SRI is in effect. But just you wait. Someone
will propose it.

This whole thing is a lot of effort for something that we know isn't
good enough to show users. Someone will say, "We've done all this work
on SRI, and even sleevi thinks that it is a significant security win
even in non-secure contexts. We should show users something about SRI,
so they can know how hard we've worked to help them!"

Chris Palmer

Dec 11, 2014, 3:04:42 PM
to Mike West, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 11:57 AM, Mike West <mk...@chromium.org> wrote:

> Abundans cautela non nocet. So there!

I'm officially changing my terminology from "Better Than Nothingism"
to "Non Nocetism". :)

Mike West

Dec 11, 2014, 3:19:27 PM
to Chris Palmer, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 8:44 PM, 'Chris Palmer' via Security-dev <securi...@chromium.org> wrote:
> So the # of entities that could mangle the resource goes from 2**20 to
> 2**20 - 1. De minimis non curat lex securitatis.

Well, it goes down to the number of entities that can affect both foo.com and bar.com's traffic. So, your ISP certainly, but probably not Bar's ISP. I think that's a good deal better than nothing. It might even be "good".

Relatedly, should we stop offering CSP for HTTP hosts? Should we have never offered it in the first place?

I see SRI and CSP as being uniquely targeted towards least privilege; an attacker bypassing them can't do anything that the origin can't already do. At best, a network attacker gains no advantage, at worst, she is stymied completely.

"No! You can't tell the UA not to let you hit yourself in the face! Muwahaha!" is hardly the inspiring battle cry of a hardened hacker. 

-mike

Michal Zalewski

Dec 11, 2014, 4:06:25 PM
to Mike West, Chris Palmer, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
> Relatedly, should we stop offering CSP for HTTP hosts? Should we have never offered it in the first place?

I realize that this is not a popularity contest, but I would very much subscribe to Ryan's view. 

Yes, there is a good chunk of the Internet that must adopt HTTPS right away (one example rhymes with Bmazon), and we need to work hard to educate them, remove barriers to entry, and keep end users informed about the risks (that probably includes an overhaul of the address bar UI for HTTP sites).

Perhaps there is merit to a more radical view, and perhaps we simply see no room for HTTP sites on the Internet; but if that's the case, we should face this head-on and perhaps simply announce that plain-text HTTP will be put behind an interstitial or dropped in mainstream browsers in a couple of years.

But short of doing that, taking away features that are legitimately useful to HTTP sites and provide measurable security benefits seems like the wrong route to me.

Developer confusion is a valid concern, but if our guiding principle is to minimize it, then we have some soul-searching to do. The kitchen-sink approach to CSP, for example, probably makes it a lot easier to accidentally write bad policies than to create internally consistent and watertight ones.

/mz
 

Chris Palmer

Dec 11, 2014, 4:31:23 PM
to Michal Zalewski, Mike West, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 1:06 PM, Michal Zalewski <lca...@google.com> wrote:

> remove barriers to entry, and keep end users informed about the risks (that
> probably includes an overhaul of the address bar UI for HTTP sites).

Hopefully I will have something to announce about that today or
tomorrow, in fact.

> Perhaps there is merit to a more radical view, and perhaps we simply see no
> room for HTTP sites on the Internet; but if that's the case, we should face
> this head-on and perhaps simply announce that plain-text HTTP will be put
> behind an interstitial or dropped in mainstream browsers in a couple of
> years.

Yes.

> But short of doing that, taking away features that are legitimately useful
> to HTTP sites and provide measurable security benefits seems like the wrong
> route to me.

I would agree, but I'm just not convinced that SRI for HTTP is useful
enough. It just looks reallllly marginal to me.

> Developer confusion is a valid concern, but if our guiding principle is to
> minimize it, then we have some soul-searching to do. The kitchen-sink
> approach to CSP, for example, probably makes it a lot easier to accidentally
> write bad policies than to create internally consistent and watertight ones.

We do, indeed, have a lot of soul-searching to do.

Ryan Sleevi

Dec 11, 2014, 4:42:07 PM
to Chris Palmer, Michal Zalewski, Mike West, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, security-dev, Frederik Braun
On Thu, Dec 11, 2014 at 1:31 PM, Chris Palmer <pal...@google.com> wrote:
> On Thu, Dec 11, 2014 at 1:06 PM, Michal Zalewski <lca...@google.com> wrote:
>
> > remove barriers to entry, and keep end users informed about the risks (that
> > probably includes an overhaul of the address bar UI for HTTP sites).
>
> Hopefully I will have something to announce about that today or
> tomorrow, in fact.
>
> > Perhaps there is merit to a more radical view, and perhaps we simply see no
> > room for HTTP sites on the Internet; but if that's the case, we should face
> > this head-on and perhaps simply announce that plain-text HTTP will be put
> > behind an interstitial or dropped in mainstream browsers in a couple of
> > years.
>
> Yes.
>
> > But short of doing that, taking away features that are legitimately useful
> > to HTTP sites and provide measurable security benefits seems like the wrong
> > route to me.
>
> I would agree, but I'm just not convinced that SRI for HTTP is useful
> enough. It just looks reallllly marginal to me.

OK. So the next question is, if you can't see the utility, can you see the harm? As arbiters of the web platform, we have a balance to strike between minimizing harm and maximizing good.

Obviously, HTTPS is a good thing when compared to HTTP, but we agreed that holding features hostage (e.g. in order to maximize good) is not something we're going to do.

So the next question is, what is the harm?

So far, the only harm elaborated is "Developer confusion". Sure. But should the fact that developer A doesn't know how to use the API correctly hinder developer B from being able to use it and realize tangible security benefits? Can we argue we're putting developers first and respecting the priorities of constituencies if bad developers are given greater weight than good developers?

Recall that I was one of the staunchest "meh"ponents on SRI when the work started, precisely because of the potential of future stupidity, much as you've mentioned here. That is, arguments for "good enough" integrity, or, as you suggest, special UI for SRI. And while those, along with everything else imaginable (since I'm an infinite worlds kinda thinker), are still distinct possibilities in the future, that's not what's being proposed, and we owe it to developers to evaluate the feature on the face of what is proposed, not what stupidity we might imagine is proposed in the future.

And on the merits alone of what's being proposed, what the harm is for misuse, what the potential is for abuse, I think it's perfectly justifiable to allow over HTTP. 

craig....@gmail.com

Dec 12, 2014, 6:12:27 AM
to securi...@chromium.org, craig....@gmail.com, j...@chromium.org, fbr...@mozilla.com, rsl...@chromium.org
On Thursday, 11 December 2014 18:57:05 UTC, Chris Palmer wrote:
> Why not client-project-dev1.example.com? Then *.example.com would work.


You know when you have one of those moments where you wonder why you didn't think of something yourself... well, that should work... I will need to investigate whether it's possible for a DNS server to respond with different IP addresses based on a partial wildcard (e.g. *-dev1.example.com), but the Apache and SSL side of things should work.



> Another idea is to get a real, publicly-issued and valid cert for
> *.dev.yourcompany.com and *.staging.yourcompany.com, and then have as
> many client-project-N hostnames as you want.


Probably the solution I'm going to use... although it might get a bit more expensive with more devs



> and test by hooking "git push ..." so that the change is also pushed
> to the server. Then you can develop and test with fast iteration, and
> you can show your clients the progress (probably want to gate it
> behind HTTP Basic Auth or similar so that the random public can't
> easily get to it).


Already done this bit :-)

craig....@gmail.com

Dec 12, 2014, 7:44:35 AM
to securi...@chromium.org, pal...@google.com, lca...@google.com, mk...@chromium.org, rsl...@chromium.org, mar...@chromium.org, j...@chromium.org, fbr...@mozilla.com
On Thu, Dec 11, 2014 at 1:31 PM, Chris Palmer <pal...@google.com> wrote:
> Hopefully I will have something to announce about that today or
> tomorrow, in fact.



As a website developer, I really want HTTPS everywhere.

And I think the main steps are... HTTP/2 over TLS only (in implementations)... the LetsEncrypt.org setup... both being implemented in web servers (Apache/Nginx/IIS)... with these installed/maintained by the OS (OSX, RedHat, Debian, Windows, etc).

Realistically that will take at least a year or two (annoyingly).



In the mean time though... I want the lock icon to always be present in the browser UI... on HTTP connections it should be red and crossed out (for those who are red/green colour blind)... I'm hoping Chris's announcement is something like that :-)

Now you are showing that the connection is not secure, but not breaking websites that cannot afford TLS at the moment (and it makes websites using a lock for their favicon look really odd).

There should also be an ever-present warning in the dev Console tab, e.g. "Warning: This page was loaded over HTTP".

And taking the idea suggested by Joel and Chris about having a "Security" tab in the dev console... this could detail the security features in place (CSP, SRI, HSTS, Cert info, etc), but each one can also show a warning when loaded over HTTP.

https://groups.google.com/a/chromium.org/forum/#!topic/security-dev/yifaG5bDr8Q



As a developer, I have found implementing CSP over HTTP during development very useful... I know it isn't secure, but I've found it very hard to get fellow developers to try and work with it (they typically switch CSP off to get their copy/pasted inline JavaScript to work)... if HTTPS were added as a requirement as well, I would never be able to get CSP working.

So (unfortunately) I would also need SRI to work over HTTP... but make it damn obvious (with warnings all over the place) that it is only partially working... then, so long as the page loads (SRI checks pass, and resources load), it should work on the live server (which does use HTTPS)... in the same way CSP does at the moment.

Once we have Apache/Nginx/IIS making TLS for all connections easy, then we can find ways to make HTTP difficult to use.



As an aside... if I was to include some JS from a CDN, one thing that is constantly on my mind is that the external resource is not under my control, and that the CDN can change it to anything they like... that is the main purpose I see for SRI (as Ryan pointed out)... modification during transit is for TLS/HTTPS... with warnings in the browser for education :-)

Craig

Justin Schuh

Dec 12, 2014, 12:37:00 PM
to craig....@gmail.com, security-dev, Chris Palmer, Michal Zalewski, Mike West, Ryan Sleevi, Marc-Antoine Ruel, Joel Weinberger, fbr...@mozilla.com
I hate to come across as defeatist, but it seems like the discussion at this point has taken SRI in the direction of minimum possible utility. I'm highly dubious there's much legitimate demand for SRI beyond the desire to preserve the integrity chain while allowing intermediary caching (i.e. by bootstrapping from HTTPS and allowing resource fetches over HTTP). I also strongly doubt that the rogue CDN over HTTPS scenario is something any significant number of sites would even consider (much less implement).

So, here are my suggestions (which go beyond the original question, but that's already veered off the rails anyway):
  • Do not allow SRI to bootstrap from insecure origins (e.g. from HTTP). This one isn't a dealbreaker, but it should be a no-brainer. The whole point of the standard is to provide integrity, and if you don't have integrity when you bootstrap then you can't magically get it at any point later in the chain. If we want to allow bootstrapping over channels lacking integrity, then it should really be under another name.
  • Allow SRI resource requests to insecure origins (e.g. to HTTP). I feel like it should be painfully obvious that this is the primary use case for the feature. Site operators want to be able to support intermediary caching of public, long-lived resources without giving up integrity. Instead, it seems like we're telling them they have to either operate infrastructure at the scale of Google or shell out money to CDNs to get edge caching. Worse, in the developing world intermediary caching is often the only thing available, and the upstream pipe is very small to begin with. So, what we're proposing at the moment is to just leave them out in the cold entirely, which seems like a pretty terrible idea.
  • Use the mixed-display indicator to identify SRI from secure to insecure origins (e.g. from HTTPS to HTTP). This is already one of the attributes that the mixed-display indicator represents anyway. So adding into that bucket actually makes sense. And it's a good transitional move to get us to a world where transport integrity is the baseline, and anything less is labelled as bad.
So, there's my proposal. It still supports the more esoteric use case of a rogue CDN (over HTTP or HTTPS) plus it addresses the bigger problem of improving caching of bulk, public data while still guaranteeing integrity.
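
For concreteness, the second bullet describes a pattern roughly like the sketch below; the hostnames and the digest are illustrative placeholders, and today this kind of load would also run into mixed content blocking:

    <!-- Document served over HTTPS (authenticated bootstrap). -->
    <!-- The long-lived, public subresource is fetched over plain HTTP so that
         intermediary caches can serve it; the pinned digest (placeholder value
         below) is what would guarantee its integrity. -->
    <script src="http://static-cache.example.net/libs/framework-1.2.3.min.js"
            integrity="sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="></script>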


Tom Sepez

Dec 12, 2014, 1:16:33 PM
to securi...@chromium.org, pal...@google.com, lca...@google.com, mk...@chromium.org, rsl...@chromium.org, mar...@chromium.org, j...@chromium.org
I subscribe to Justin's view. A few years ago, when I was working on the server side of things, and contemplating whether we could get something like this implemented by the clients, the https -> http case was the only use case that mattered. Maybe you should talk with large consumers of your feature and see whether what you're proposing provides any benefit to them in the real world.

Ryan Sleevi

Dec 12, 2014, 1:19:56 PM
to Justin Schuh, Joel Weinberger, Michal Zalewski, Mike West, security-dev, craig....@gmail.com, Chris Palmer, fbr...@mozilla.com, Marc-Antoine Ruel


On Dec 12, 2014 9:37 AM, "Justin Schuh" <jsc...@chromium.org> wrote:
>
> I hate to come across as defeatist, but it seems like the discussion at this point has taken SRI in the direction of minimum possible utility. I'm highly dubious there's much legitimate demand for SRI beyond the desire to preserve the integrity chain while allowing intermediary caching (i.e. by bootstrapping from HTTPS and allowing resource fetches over HTTP).

That isn't what the spec says. It sounds like your defeatism is to encourage and prolong mixed content? A key part of the discussions has been to avoid that.

> I also strongly doubt that the rogue CDN over HTTPS scenario is something any significant number of sites would even consider (much less implement).

That is the entire value proposition for SRI at this time.

>
> So, here's my suggestions (which go beyond the original question, but that's already veered off the rails anyway):
> Do not allow SRI to bootstrap from insecure origins (e.g. from HTTP). This one isn't a dealbreaker, but it should be a no-brainer. The whole point of the standard is to provide integrity, and if you don't have integrity when you bootstrap then you can't magically get it at any point later in the chain. If we want to allow bootstrapping over channels lacking integrity, then it should really be under another name.

You're treating the intermediaries as hostile, except the point of this thread has been that SRI does nothing for hostile intermediaries, nor should it.

In a hostile intermediary case, I agree that HTTP->HTTP and HTTP->HTTPS are fundamentally broken. HTTPS->HTTP prolongs the use of optionally blockable content, and only helps the <img> and <video> tags (the two remaining areas).

However, for HTTPS->HTTPS, aka the state of the world we want to bring about, it provides zero value, because HTTPS already dealt with the intermediary.

Whereas if the threat is a hostile CDN, every single direction has value.

That's why I say that if the threat model is a hostile intermediary (which the intent to implement said it wasn't), then it's pointless and we shouldn't do it. But it isn't - it's the CDN.

> Allow SRI resource requests to insecure origins (e.g. to HTTP). I feel like it should be painfully obvious that this is the primary use case for the feature. Site operators want to be able to support intermediary caching of public, long-lived resources without giving up integrity. Instead, it seems like we're telling them they have to either operate infrastructure at the scale of Google or shell out money to CDNs to get edge caching. Worse, in the developing world intermediary caching is often the only thing available, and the upstream pipe is very small to begin with. So, what we're proposing at the moment is to just leave them out in the cold entirely, which seems like a pretty terrible idea.
> Use the mixed-display indicator to identify SRI from secure to insecure origins (e.g. from HTTPS to HTTP). This is already one of the attributes that the mixed-display indicator represents anyway. So adding into that bucket actually makes sense. And it's a good transitional move to get us to a world where transport integrity is the baseline, and anything less is labelled as bad.

This is... redundant? It is already there by definition of HTTPS including HTTP resources. You're not penalizing anything here, nor adding value to the UI.

Michal Zalewski

Dec 12, 2014, 2:50:10 PM
to Ryan Sleevi, Justin Schuh, Joel Weinberger, Mike West, security-dev, craig....@gmail.com, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
Replying to Justin: I think that the "value proposition" of SRI is hard to judge. In some of the arguably most desirable use cases (ads), it probably won't work because nobody will want to lock into a specific version of the ad-serving script and have it break a week later. On top of that, the ad scripts will load other scripts and frames, and I can't imagine advertisers putting a lot of engineering effort into designing some sort of a "trusted bootstrap" system that progressively verifies everything without providing a good mechanism to make updates or revoke bad scripts.

The "authenticate your own scripts hosted at a CDN" use case is somewhat compelling. It would be something that at least high-profile sites like Facebook would probably want to consider, although it adds the need for some non-trivial manual maintenance or templating infrastructure. I think that virtually all of these use cases would be same-protocol. 

The other convincing use case is protecting the integrity of downloads that you recommend but that aren't within your control: for example, it is conceivable that altavista.com would want to refer people to a CuilToolbarRemover.exe at a third-party website; they could get some protection against hijacked sites (although not exposing this signal to the user probably diminishes the value of the scheme). I suspect that many of these would be cross-protocol, and if we take it away, it's a bit of a bummer.

Outside a handful of top destinations like Facebook or Google, I wouldn't expect to see much adoption for the last two use cases. But that's true for any opt-in security mechanism - including CSP, XFO, etc. They are mostly written to be used by sites that can afford to hire a competent security team, and when others try to deploy them, they usually copy-and-paste stuff and do it wrong. I firmly believe that it's a shame, but it's a discussion that completely transcends SRI.

Interestingly, the download use case also starts to show the unexpected complexity of getting the SRI semantics right. A lot of effort went into tweaking the initial proposal to make it kind-of safe in case the destination site serves something without a Content-Disposition header that then meta-refresh-redirects you to an EXE hosted elsewhere, etc. Some of the original goals of SRI - also protecting <iframe> or document navigation - are probably nearly impossible to get right.

...

Replying to the general HTTP / HTTPS debate, I wouldn't conflate different classes of attackers and probabilities of attack. For the vast majority of users, chances are, there is nobody who controls their network trying to inject malicious payloads into executables; the farthest that legitimate businesses are usually willing to go is injecting ads and mining traffic data, which is something we have to worry about. The likelihood of users bumping into compromised ad networks or CDNs is many orders of magnitude higher than being on a government target list. So, I actually see value in HTTP -> HTTP SRI. It doesn't solve all problems, but it makes a quantitative difference - and if anything, breaking the HTTP/HTTPS symmetry makes it more confusing and fragile.

Of course, pervasive passive surveillance is a very real concern, but that's sort of outside the scope of SRI - SRI deals strictly with active attacks and is very hard to confuse with anything else. We should probably also be honest with ourselves and admit that HTTPS will not really make pervasive surveillance go away: I will still be able to know what websites one frequents, what types of porn one watches, or who that person communicates with. I won't be able to see the *contents* of private communications, but much of the pervasive surveillance stuff revealed in the past year doesn't really seem to be centered on that.

/mz

Joel Weinberger

Dec 12, 2014, 8:12:14 PM
to Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, security-dev, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
I'm going to go out on a limb here and try to summarize the various positions that have been listed on this thread, of which I think there are basically 3:
  1. We should support SRI on HTTP sites and pointing towards HTTP resources. SRI should only be used for enforcing the integrity of the end resource; the threat model thus has nothing to do with the transport, so it doesn't matter if the transport of the integrity value itself, or of the resource, is unauthenticated.
  2. We should support SRI on HTTPS sites only, but they may point towards HTTP resources. The utility of SRI is gained not from enforcing the integrity of end resources, but from enforcing that the transport has integrity (albeit without confidentiality); thus, as long as the browser is assured of getting the SRI integrity value over an authenticated channel, it can enforce that the resources are what the site expects, even if they're not over HTTPS. Since not all sites and CDNs will go to HTTPS, this can at least provide integrity (without confidentiality), and something is better than nothing.
  3. We should support SRI only on HTTPS sites and only pointing to HTTPS resources. Similar to (1), SRI should only be used for enforcing the integrity of end resources, since sites that want transport security should simply use HTTPS. However, because we're talking about providing integrity guarantees, the developer can never know if the guarantee they are trying to provide actually makes it to the client, and we shouldn't pretend like it does by allowing SRI over HTTP, because that will just lead to developer confusion.
Notably, Mozilla's position that SRI should be allowed on HTTP sites but only pointing to HTTPS resources is not reflected here, so, Freddy, if you want to jump in and give that argument, go for it :-)

I'm incredibly torn at this point, and I'm on the edge of throwing my hands in the air and giving up. It really seems like we're nowhere near consensus, and I don't know what to do. Personally, I either fall in the (1) or (3) camp, depending on how pessimistic and cynical I'm feeling. I strongly disagree with (2), however, since (a) TLS solves transport security and I don't think we should help sites avoid even an attempt at confidentiality, and (b) I strongly disagree that a site's ability to verify the expected content at jquery.com (...or apis.google.com) isn't valuable (and, yes, we've talked to site operators who've said they want SRI for this purpose; of course, who knows if they'll actually use it). I should also mention that there's no chance that v1 of SRI will cover images, video, or any other kind of media. No one has any good proposal on how to support a syntax for Merkle hashes in a meaningful way, or how to enforce them as content loads.

The only thing on the order of a compromise is "let's do HTTPS -> HTTPS only for now, and we can always reevaluate later." But of course, that's very unsatisfying to lots of folks, so I'm not sure what to do. I'm going to go drink a gin and tonic and think about this over the weekend. Suggestions welcome in the meantime.
--Joel

Joel Weinberger

Dec 13, 2014, 1:40:51 AM
to Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, security-dev, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
As a random anecdote/not-really-data, in the Hacker News comments for palmer's "HTTP is Bad" announcement (hoo-ray!), one of the first comments is a request for SRI for jQuery (well, it's a request for hashes for jQuery content, and Boris Zbarsky quickly responded and pointed to SRI). I know that Tom and Justin believe in the value of SRI for transport security, and that's fine, but I really, really don't understand the belief that it's of "little utility" for the CDN/jQuery case. I suppose maybe no one will use it in practice, but it's something that's requested quite often.
--Joel

Brian Smith

Dec 13, 2014, 1:47:59 AM
to Michal Zalewski, Ryan Sleevi, Justin Schuh, Joel Weinberger, Mike West, security-dev, craig....@gmail.com, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
'Michal Zalewski' via Security-dev <securi...@chromium.org> wrote:
> In some of the arguably most desirable use cases (ads), it probably
> won't work because nobody will want to lock into a specific version of the
> ad-serving script and have it break a week later. On top of that, the ad
> scripts will load other scripts and frames, and I can't imagine advertisers
> putting a lot of engineering effort into designing some sort of a "trusted
> bootstrap" system that progressively verifies everything without providing a
> good mechanism to make updates or revoke bad scripts.

I don't think ads and analytics are reasonable use cases for SRI for
the reasons that Michal cited, even though they are one of the
official use cases listed in the SRI draft. I think the solution to
that is just to delete that use case from the SRI draft.

> The "authenticate your own scripts hosted at a CDN" use case is somewhat
> compelling. It would be something that at least high-profile sites like
> Facebook would probably want to consider, although it adds the need for some
> non-trivial manual maintenance or templating infrastructure. I think that
> virtually all of these use cases would be same-protocol.

Expanding this beyond what's done today:

1. GMail in Syria. Wouldn't it be great if GMail users in Syria could
fetch the initial page from a server outside of Syria, and then
securely (via SRI) fetch all the scripts and CSS and boilerplate
images from within Syria, so that the GMail UI still loads pretty
fast, without having to trust any servers in Syria?

2. Youtube in the US: Wouldn't it be great to have all the subresources
for the Youtube app cached in a CDN with servers that are distributed
across every neighborhood in the US, such that latency is close to
0ms, but there's practically zero physical security for the servers,
yet Youtube still stays 100% secure as long as the initial page load
is from a trusted server?

3. Wouldn't it be great for every site in the world to be able to load
jquery from https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js
without having to trust the ajax.googleapis.com server?

(Interestingly, the initial page load is the most latency-sensitive
part, and it's the thing SRI cannot protect. Future improvements to
browsers can fix that part.)

> Outside a handful of top destinations like Facebook or Google, I wouldn't
> expect to see much adoption for the last two use cases.

I disagree. See the jquery example above.

> But that's true for
> any opt-in security mechanism - including CSP, XFO, etc. They are mostly
> written to be used by sites that can afford to hire a competent security
> team, and when others try to deploy them, they usually copy-and-paste stuff
> and do it wrong. I firmly believe that it's a shame, but it's a discussion
> that completely transcends SRI.

If Google added the SRI attributes to the sample fragments added to
https://developers.google.com/speed/libraries/devguide#jquery then
LOTS of people would start using SRI--even (especially) the
copy-and-pasters.
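
For illustration, such a copy/paste fragment might look roughly like the following; the digest shown is a placeholder, not the real hash of jQuery 2.1.1:

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"
            integrity="sha256-47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="
            crossorigin="anonymous"></script>
    <!-- The integrity value would be the base64-encoded digest of the exact
         file being referenced; the value above is a placeholder. crossorigin
         assumes the cross-origin response must be CORS-readable for the
         integrity check to apply. -->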

> So, I actually see value in
> HTTP -> HTTP SRI. It doesn't solve all problems, but it makes a quantitative
> difference - and if anything, breaking the HTTP/HTTPS symmetry makes it more
> confusing and fragile.

The problem here is middleboxes tampering with the
subresource--attempting to minify CSS, attempting to strip out
malware-ish JS code, transcoding images, etc. Supporting SRI for HTTP
subresources would likely result in a lot of "it works from the
developer's system, but fails mysteriously for 1% of users" scenarios.
Somebody might prove that supposition wrong, but nobody has so far.

The use cases for HTTPS->HTTPS subresources are really compelling and
it seems feasible to get it working well for HTTPS->HTTPS. A
reasonable strategy is to verify that HTTPS->HTTPS works first, and
then eventually try to make HTTP->HTTPS and/or HTTP->HTTP work too.
The prioritization could even rationally be:

1. Make SRI work for HTTPS->HTTPS.
2. Do a bunch of other stuff to make the web awesome.
3. Make SRI work for other cases.

I don't see that as "hostage taking" but simply as recognizing that
not every part of SRI has the same priority.

Cheers,
Brian

Michal Zalewski

Dec 13, 2014, 2:22:52 AM
to Brian Smith, Ryan Sleevi, Justin Schuh, Joel Weinberger, Mike West, security-dev, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
> > So, I actually see value in
> > HTTP -> HTTP SRI. It doesn't solve all problems, but it makes a quantitative
> > difference - and if anything, breaking the HTTP/HTTPS symmetry makes it more
> > confusing and fragile.
>
> The problem here is middleboxes tampering with the
> subresource--attempting to minify CSS, attempting to strip out
> malware-ish JS code, transcoding images, etc.

Such is life. There are also tons of middleware boxes that don't properly honor Cache-Control, causing much worse breakage; and do tons of other stupid things. There's also plenty of browser extensions and AV products that break CSP with content scripts, etc.

My personal take on this is that we don't need to cater to them too much.

I think that a situation where SRI silently fails for HTTP is awful; and where it results in a failed load... well, doesn't provide a whole lot of obvious benefit. It adds complexity by requiring people to have different versions of every SRI-using page if they want to support HTTP and HTTPS at the same time. 

But ultimately, I don't want to rehash the same discussions again. The folks who write the code and maintain the spec probably get to make some judgment calls.

/mz

Brian Smith

Dec 13, 2014, 3:23:46 AM
to Michal Zalewski, Ryan Sleevi, Justin Schuh, Joel Weinberger, Mike West, security-dev, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
Michal Zalewski <lca...@google.com> wrote:
> Brian Smith wrote:
>> The problem here is middleboxes tampering with the
>> subresource--attempting to minify CSS, attempting to strip out
>> malware-ish JS code, transcoding images, etc.
>
> Such is life. There are also tons of middleware boxes that don't properly
> honor Cache-Control, causing much worse breakage; and do tons of other
> stupid things. There's also plenty of browser extensions and AV products
> that break CSP with content scripts, etc.
>
> My personal take on this is that we don't need to cater to them too much.

It isn't an issue of catering to them as much as working the way the
end-user expects or not. Consider two browsers. Browser A enforces SRI
for HTTP subresources and 1% of its users experience sites that stop
working because of this. Browser B does not enforce SRI for HTTP
subresources and 0% of its users experience sites that stop working
because of this. I think most users would prefer Browser B.

Cheers,
Brian

Michal Zalewski

Dec 13, 2014, 3:46:00 AM
to Brian Smith, Ryan Sleevi, Justin Schuh, Joel Weinberger, Mike West, security-dev, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
> It isn't an issue of catering to them as much as working the way the
> end-user expects or not. Consider two browsers. Browser A enforces SRI
> for HTTP subresources and 1% of its users experience sites that stop
> working because of this. Browser B does not enforce SRI for HTTP
> subresources and 0% of its users experience sites that stop working
> because of this. I think most users would prefer Browser B.

Sure, but the same goes for a browser where your favorite add-on no
longer works because of CSP or XFO or stricter mixed content blocking.
In fact, honest mixed content blocking has been held back by similar
concerns for a very long time, and in retrospect, it likely wasn't the
right thing to do.

/mz

Ryan Sleevi

Dec 13, 2014, 4:50:54 AM
to Brian Smith, Joel Weinberger, Michal Zalewski, Mike West, craig....@gmail.com, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun


On Dec 12, 2014 10:47 PM, "Brian Smith" <br...@briansmith.org> wrote:
>
> 'Michal Zalewski' via Security-dev <securi...@chromium.org> wrote:
> > In some of the arguably most desirable use cases (ads), it probably
> > won't work because nobody will want to lock into a specific version of the
> > ad-serving script and have it break a week later. On top of that, the ad
> > scripts will load other scripts and frames, and I can't imagine advertisers
> > putting a lot of engineering effort into designing some sort of a "trusted
> > bootstrap" system that progressively verifies everything without providing a
> > good mechanism to make updates or revoke bad scripts.
>
> I don't think ads and analytics are reasonable use cases for SRI for
> the reasons that Michal cited, even though they are one of the
> official use cases listed in the SRI draft. I think the solution to
> that is just to delete that use case from the SRI draft.
>
> > The "authenticate your own scripts hosted at a CDN" use case is somewhat
> > compelling. It would be something that at least high-profile sites like
> > Facebook would probably want to consider, although it adds the need for some
> > non-trivial manual maintenance or templating infrastructure. I think that
> > virtually all of these use cases would be same-protocol.
>
> Expanding this beyond what's done today:
>
> 1. GMail in Syria. Wouldn't it be great if GMail users in Syria could
> fetch the initial page from a server outside of Syria, and then
> securely (via SRI) fetch all the scripts and CSS and boilerplate
> images from within Syria, so that the GMail UI still loads pretty
> fast, without having to trust any servers in Syria?
>

Not if it means giving up confidentiality. GMail is HTTPS; your case only becomes compelling in a world of mixed content, which is a bad world that should go away.

> 2. Youtube in the US: Wouldn't it be great to have all the subresources
> for the Youtube app cached in a CDN with servers that are distributed
> across every neighborhood in the US, such that latency is close to
> 0ms, but there's practically zero physical security for the servers,
> yet Youtube still stays 100% secure as long as the initial page load
> is from a trusted server?

You're conflating security with integrity, and in the process, intentionally giving up authentication and confidentiality.

This is not a good world to be in. If that is the goal of SRI - which I'm assured to no end that it is not - then we shouldn't be implementing it.

In both cases, these reveal to the network what the user is doing, explicitly. This is hardly a desirable state; we should protect the user's confidentiality, and give them a choice if they want to degrade that. In practice, this can only mean blocking the content and checking for user consent - the same as active mixed content.

The presence and prevalence of passive mixed content provides a huge boon for user tracking by passive intermediaries - an attack that should be defended against, not celebrated.

>
> 3. Wouldn't it be great for every site in the world to be able to load
> jquery from https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js
> without having to trust the ajax.googleapis.com server?

This is the only case where I am 100% on board with you - and it is ALL that I think SRI can supply.

That sounds more like preventing a feature out of fear and superstition than on solid technical ground. Said middleboxes can mess with any of the HTTP-loaded resources, so HTTPS->HTTP does nothing to mitigate this, and thus shouldn't be required for that threat model.

>
> The use cases for HTTPS->HTTPS subresources are really compelling and
> it seems feasible to get it working well for HTTPS->HTTPS. A
> reasonable strategy is to verify that HTTPS->HTTPS works first, and
> then eventually try to make HTTP->HTTPS and/or HTTP->HTTP work too.
> The prioritization could even rationally be:
>
> 1. Make SRI for for HTTPS->HTTPS.
> 2. Do a bunch of other stuff to make the web awesome.
> 3. Make SRI work for other cases.
>
> I don't see that as "hostage taking" but simply as recognizing that
> not every part of SRI has the same priority.
>
> Cheers,
> Brian

It is an artificial limitation based on superstition. No one will be able to prove your superstition right or wrong until they experiment. There will be zero incentive for sites to experiment if browsers don't implement it. So it makes far more sense, under this model of something maybe-breaking in the wild, to actually treat it like any other web feature and release it. Site operators are plenty capable of watching for issues and errors - their incentives are aligned to do just that. There's no new data browsers will gain in the interim until they just implement it; the prioritization is unnecessary and artificial.

Mike West

unread,
Dec 13, 2014, 5:09:02 AM12/13/14
to Ryan Sleevi, Brian Smith, Joel Weinberger, Michal Zalewski, craig....@gmail.com, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
For the moment, I think the clearest use case is verification of assets on CDNs (though I'm not convinced that other use cases aren't worth serving; I'd like to defer those discussions until after we have a v1 shipping). I think we can easily convince folks like Google's API maintainers to add an integrity attribute to the copy/paste code they provide, and I think that would have a substantial impact on adoption.
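
For illustration, here's a rough sketch of the mechanics (Python, purely illustrative; the local file name and the "sha384-" prefixed attribute form are assumptions of this sketch rather than spec text, and real cross-origin deployments would likely also involve CORS, which I'm ignoring here):

# Illustrative only: compute an SRI-style digest over the exact bytes of a
# local copy of a library and print copy/paste markup for it.
import base64
import hashlib

def integrity_value(path, algorithm="sha384"):
    # Hash the exact bytes that will be served; any byte-level change
    # produces a different value, which is the whole point of the attribute.
    with open(path, "rb") as f:
        digest = hashlib.new(algorithm, f.read()).digest()
    return "%s-%s" % (algorithm, base64.b64encode(digest).decode("ascii"))

if __name__ == "__main__":
    src = "https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"
    print('<script src="%s" integrity="%s"></script>'
          % (src, integrity_value("jquery.min.js")))

The digest has to be taken over the same bytes the CDN actually serves; hashing a locally re-minified copy would produce a value that never matches.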

This form of SRI has clear value: we've seen exploits in the recent past that it would have prevented, and it's a least-privilege-enabling feature we should enable without HTTPS restrictions. To be concrete, I'd suggest landing https://codereview.chromium.org/803773002.

I share Brian's concern about allowing SRI of resources over HTTP, but I agree with Michal's claims regarding the impacts: yes, some HTTP resources will be modified. Yes, those resources will be blocked by SRI. Yes, this will annoy some users. Still, I think it's the right thing to enable.

-mike

--
Mike West <mk...@google.com>, @mikewest

Google Germany GmbH, Dienerstrasse 12, 80331 München, Germany, Registergericht und -nummer: Hamburg, HRB 86891, Sitz der Gesellschaft: Hamburg, Geschäftsführer: Graham Law, Christine Elizabeth Flores
(Sorry; I'm legally required to add this exciting detail to emails. Bleh.)

Brian Smith

unread,
Dec 13, 2014, 1:48:10 PM12/13/14
to Ryan Sleevi, Joel Weinberger, Michal Zalewski, Mike West, Craig Francis, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
Ryan Sleevi <rsl...@chromium.org> wrote:
> On Dec 12, 2014 10:47 PM, "Brian Smith" <br...@briansmith.org> wrote:
>> 1. GMail in Syria. Wouldn't it be great if GMail users in Syria could
>> fetch the initial page from a server outside of Syria, and then
>> securely (via SRI) fetch all the scripts and CSS and boilerplate
>> images from within Syria, so that the GMail UI still loads pretty
>> fast, without having to trust any servers in Syria?
>
> Not if it means giving up confidentiality. GMail is HTTPS; your case only
> becomes compelling in a world of mixed content <snip>

Not true. All the loads would happen over HTTPS. The script, CSS, and
(eventually) boilerplate image loads are going to happen from
https://not-very-well-protected.googleapis.com, which resolves to
servers outside of Google's data centers with weaker private key
protection available to them. The actual sensitive data would continue
to be served from different origins hosted on servers in Google's data
centers.

>> 2. Youtube in the US: Wouldn't it be great to have all the subresources
>> for the Youtube app cached in a CDN with servers that are distributed
>> across every neighborhood in the US, such that latency is close to
>> 0ms, but there's practically zero physical security for the servers,
>> yet Youtube still stays 100% secure as long as the initial page load
>> is from a trusted server?
>
> You're conflating security with integrity, and in the process, intentionally
> giving up authentication and confidentiality.

Note that in this case, I'm still talking about HTTPS.

> This is not a good world to be in. If that is the goal of SRI - which I'm
> assured to no end that it is not - then we shouldn't be implementing it.

One of the natural consequences of SRI is to enable CDNs to move
servers into unsafe locations. CDNs will offer different services with
different performance vs. private key protection tradeoffs. Websites
will have to make smart decisions about what subresources require high
levels of confidentiality, and which subresources don't. This is
already the case today with CDNs. SRI allows it to be taken to the
next level.

> In both cases, these reveal to the network what the user is doing,
> explicitly. This is hardly a desirable state; we should protect the user's
> confidentiality, and give them a choice if they want to degrade that. In
> practice, this can only mean blocking the content and checking for user
> consent - the same as active mixed content.
>
> The presence and prevalence of passive mixed content provides a
> huge boon for user tracking by passive intermediaries - an attack
> that should be defended against, not celebrated.

Again, I'm talking about stuff that will be done 100% over HTTPS. (I'm
sad that you'd think I'd be advocating for any kind of mixed content.)

>> The problem here is middleboxes tampering with the
>> subresource--attempting to minify CSS, attempting to strip out
>> malware-ish JS code, transcoding images, etc. Supporting SRI for HTTP
>> subresources would likely result in a lot of "it works from the
>> developer's system, but fails mysteriously for 1% of users" scenarios.
>> Somebody might prove that supposition wrong, but nobody has so far.
>
> That sounds more like preventing a feature out of fear and superstition
> than on solid technical ground. Said middleboxes can mess with any of the
> HTTP-loaded resources, so HTTPS->HTTP does nothing to mitigate this and
> thus shouldn't be required for that threat model.

There are many types of tampering, like minifying CSS or appending
code to a JS script, that will not break a page without SRI but would
break a page with SRI.
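
To make that concrete, a tiny sketch (Python, purely illustrative; the
appended comment below just stands in for whatever a transforming
proxy does, and the script body is made up):

# Sketch of the failure mode: a transformation that is harmless without SRI
# (a proxy appending a trailing comment to a script) still changes the
# digest, so the SRI check fails even though the script itself would run fine.
import base64
import hashlib

def sri_sha384(body):
    return "sha384-" + base64.b64encode(hashlib.sha384(body).digest()).decode("ascii")

original = b"window.greet = function () { console.log('hi'); };\n"
expected = sri_sha384(original)   # what the page author embeds in the tag

tampered = original + b"/* appended by a middlebox */\n"
observed = sri_sha384(tampered)   # what the browser computes for the modified body

print(expected == observed)       # False -> the load fails under SRI

The page works on the developer's clean connection, then fails closed
for the fraction of users sitting behind that proxy.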

> It is an artificial limitation based on superstition. No one will be able to
> prove your superstition right or wrong until they experiment.

I think we should avoid calling people's concerns "superstition," for
courtesy's sake. That said, I agree with you that it would be good to
find out whether the concern is warranted or not by experimenting. In
fact, that's exactly what I asked for in the webappsec thread [1].

Cheers,
Brian

[1] http://lists.w3.org/Archives/Public/public-webappsec/2014Nov/0053.html

Michal Zalewski

unread,
Dec 13, 2014, 2:53:14 PM12/13/14
to Brian Smith, Ryan Sleevi, Joel Weinberger, Mike West, Craig Francis, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
A bit tangential, but...

> One of the natural consequences of SRI is to enable CDNs to move
> servers into unsafe locations. CDNs will offer different services with
> different performance vs. private key protection tradeoffs. Websites
> will have to make smart decisions about what subresources require high
> levels of confidentiality, and which subresources don't. This is
> already the case today with CDNs. SRI allows it to be taken to the
> next level.

Looking at many of the most popular destinations on the Internet and
the stuff they load on their front pages... I think that's an
extremely generous view of the thought process that usually goes into
loading content from CDNs / third-party servers (or into designing
CDNs in the first place) =)

But, even if SRI genuinely helps only a handful of security-conscious
sites, and only in a limited way, I think it's worth doing given its
relative simplicity.

/mz

Devdatta Akhawe

unread,
Dec 13, 2014, 6:32:26 PM12/13/14
to Michal Zalewski, Brian Smith, Ryan Sleevi, Joel Weinberger, Mike West, Craig Francis, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
> Looking at many of the most popular destinations on the Internet and
> the stuff they load on their front pages... I think that's an
> extremely generous view of the thought process that usually goes into
> loading content from CDNs / third-party servers (or into designing
> CDNs in the first place) =)

Well, in their defense, browsers and the web platform currently have
only one answer: "don't do that" - not "here is how you can do it
securely" (without massively affecting performance or features).

-dev

Joel Weinberger

unread,
Dec 16, 2014, 1:29:25 AM12/16/14
to Devdatta Akhawe, Michal Zalewski, Brian Smith, Ryan Sleevi, Mike West, Craig Francis, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
At this point, I'm of the mindset that we should go either full secure origin requirements or no secure origin requirements at all. I think anything in between muddles the message we're sending to developers.

Brian has a good point that I think I would summarize as "SRI is actively harmful without transport security because network 'optimizations' would cause resource load failures." Michal's response is "So what? Developers will deal with it in the same way they deal with CSP breaking stuff." Unfortunately, both are totally reasonable and fair arguments.

I'm going to bed. Hopefully I'll awake with brilliant insight.

Craig Francis

unread,
Dec 16, 2014, 4:37:37 AM12/16/14
to Joel Weinberger, Devdatta Akhawe, Michal Zalewski, Brian Smith, Ryan Sleevi, Mike West, security-dev, Chris Palmer, Justin Schuh, Marc-Antoine Ruel, Frederik Braun
+1 for no secure origin requirements (so available to all).

As mentioned earlier, I develop/test my websites over HTTP (will be trying to sort out some wildcard HTTPS certificates later, but this will take a while), and it would be good to test the code before uploading to a live website.

As to Brian's valid concern over network 'optimizations', this can happen with CSP too (although often not as badly), but making that HTTPS-only would make it even harder for developers to get started (and often I just want them to try, rather than switching these features off completely).

Also, I'm probably only going to use SRI to include things like jQuery from a CDN, and hopefully that (publicly available) resource would be over HTTPS, so shouldn't have any optimization problems :-)

Perhaps we could show a warning in the developer console when the browser notices a resource being included over HTTP with the integrity check in place?

Craig

Tanvi Vyas

unread,
Dec 16, 2014, 6:38:54 PM12/16/14
to securi...@chromium.org, lca...@google.com, rsl...@chromium.org, jsc...@chromium.org, mk...@chromium.org, craig....@gmail.com, pal...@google.com, fbr...@mozilla.com, mar...@chromium.org
On Friday, December 12, 2014 5:12:14 PM UTC-8, Joel Weinberger wrote:
I'm going to go out on a limb here and try to summarize the various positions that have been listed on this thread, of which I think there are basically 3:
  1. We should support SRI on HTTP sites and pointing towards HTTP resources. SRI should only be used for enforcing the integrity of the end resource; thus the threat model has nothing to do with the transport, and it doesn't matter whether the transport of the integrity value itself or of the resource is unauthenticated.
  2. We should support SRI on HTTPS sites only, but they may point towards HTTP resources. The utility of SRI is gained not from enforcing the integrity of end resources, but from enforcing that the transport has integrity (albeit without confidentiality); as long as the browser is assured of getting the SRI integrity value over an authenticated channel, it can enforce that the resources are what the site expects, even if they're not over HTTPS. Since not all sites and CDNs will go to HTTPS, this can at least provide integrity (without confidentiality), and something is better than nothing.
  3. We should support SRI only on HTTPS sites and only pointing to HTTPS resources. Similar to (1), SRI should only be used for enforcing the integrity of end resources, since sites that want transport security should simply use HTTPS. However, because we're talking about providing integrity guarantees, the developer can never know if the guarantee they are trying to provide actually makes it to the client, and we shouldn't pretend that it does by allowing SRI over HTTP, because that will just lead to developer confusion.
Notably, Mozilla's position that SRI should be allowed on HTTP sites but only pointing to HTTPS resources is not reflected here, so, Freddy, if you want to jump in and give that argument, go for it :-)

Mozilla's position is covered in (1).  We plan to support SRI for both HTTP and HTTPS sites and for both HTTP and HTTPS resources.  If an HTTPS site includes an HTTP resource with the integrity attribute, that resource will still be considered mixed content and the UI will not change.

I agree with Ryan that we should be very specific in the spec about what should happen if a user agent decides that SRI must only be used with HTTPS.  For example, what will happen if a webpage has an integrity attribute on an HTTP resource?  In browsers that support SRI over HTTP, the integrity will be checked and if all is well the content will be loaded.  In browsers that do not support SRI over HTTP, the integrity attribute should be ignored and the content can continue to be loaded.  If instead the mere presence of an integrity attribute causes the content to be blocked, developers of HTTP websites will need to deliver different content to different browsers.  Or, more likely, the developers will just give up and abandon their use of SRI completely.

~Tanvi

Joel Weinberger

unread,
Dec 18, 2014, 1:46:05 PM12/18/14
to Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, Craig Francis, Chris Palmer, Frederik Braun, Marc-Antoine Ruel
I have given my lgtm to Mike's CL to remove the HTTPS restrictions (https://codereview.chromium.org/803773002). I am convinced that we will not come to a consensus, and this seems like an unsatisfying change that most closely matches the most opinions. On a personal level, while I disagree with this, Mike has also convinced me that SRI is just another form of content policy, and if we're not going to require CSP over HTTPS-only, it doesn't make sense to do it here, either.
--Joel

Chris Palmer

unread,
Dec 18, 2014, 2:53:52 PM12/18/14
to Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, Craig Francis, Frederik Braun, Marc-Antoine Ruel
On Thu, Dec 18, 2014 at 10:46 AM, Joel Weinberger <j...@chromium.org> wrote:

> I have given my lgtm to Mike's CL to remove the HTTPS restrictions
> (https://codereview.chromium.org/803773002). I am convinced that we will not
> come to a consensus, and this seems like an unsatisfying change that most
> closely matches the most opinions. On a personal level, while I disagree
> with this, Mike has also convinced me that SRI is just another form of
> content policy, and if we're not going to require CSP over HTTPS-only, it
> doesn't make sense to do it here, either.

I don't see why "can't come to a consensus" means we should complicate
an already confusing security guarantee.

Soon we'll have people asking to treat SRI'd mixed content as
non-mixed content, or to treat HTTP pages with SRI'd subresources as
"secure", and so on. And we might have a hard time holding the sanity
line, because SRI in non-secure contexts blurs the line. Rest assured,
I will hold that line. But it's not going to be fun.

The basic problem is that we cannot effectively communicate security
nuance in the UX of a mainstream product whose audience is,
ultimately, every single person on the planet. (E.g. integrity with
auth and confidentiality; vs. integrity and auth without
confidentiality; vs. weakened integrity + auth + confidentiality but
the confidentiality is weakened due to traffic analysis; and so on.)
These are degrees of nuance that even software engineers — and even
some security specialists! — are legitimately confused by. It's a
young field.

Therefore, when we have to "round off" the security message to make it
simpler, we should always round *up* — never down. If people have a
hard time knowing if they need integrity or confidentiality or both,
just give them both all the time.

SRI in use cases other than HTTPS for both the main page and
sub-resources is rounding down.

Don't round down. Rounding down is bad.

Craig Francis

unread,
Dec 18, 2014, 5:11:21 PM12/18/14
to Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, Chris Palmer, Frederik Braun, Marc-Antoine Ruel



On 18 Dec 2014, at 18:46, Joel Weinberger <j...@chromium.org> wrote:

I have given my lgtm to Mike's CL to remove the HTTPS restrictions (https://codereview.chromium.org/803773002). I am convinced that we will not come to a consensus, and this seems like an unsatisfying change that most closely matches the most opinions. On a personal level, while I disagree with this, Mike has also convinced me that SRI is just another form of content policy, and if we're not going to require CSP over HTTPS-only, it doesn't make sense to do it here, either.
--Joel


Thanks Joel... I realise this is frustrating, and certainly hope we can move forward with HTTPS in general... I really like what I'm hearing on the other thread about marking HTTP as non-secure... but this change should make development/testing much easier for me, and hopefully others (where the live site should be HTTPS, with a CSP header, HSTS, secure cookies, etc)... and who knows, maybe we can revisit this later :-)

Craig

Mike West

unread,
Dec 19, 2014, 5:15:07 AM12/19/14
to Chris Palmer, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
On Thu, Dec 18, 2014 at 8:53 PM, Chris Palmer <pal...@google.com> wrote:
On Thu, Dec 18, 2014 at 10:46 AM, Joel Weinberger <j...@chromium.org> wrote:

> I have given my lgtm to Mike's CL to remove the HTTPS restrictions
> (https://codereview.chromium.org/803773002). I am convinced that we will not
> come to a consensus, and this seems like an unsatisfying change that most
> closely matches the most opinions. On a personal level, while I disagree
> with this, Mike has also convinced me that SRI is just another form of
> content policy, and if we're not going to require CSP over HTTPS-only, it
> doesn't make sense to do it here, either.

Hrm. This makes it sound like I somehow forced you, against your will, to approve the change. I certainly argued for it, and it seemed like you agreed with me. If you disagree with the change, I'd suggest not LGTMing it. I'll leave it up for a while in the hopes that someone will be willing to more strongly endorse the change. If no one comments through January when I get back from vacation, I'll land it. :)
 
Soon we'll have people asking to treat SRI'd mixed content as
non-mixed content, or to treat HTTP pages with SRI'd subresources as
"secure", and so on. And we might have a hard time holding the sanity
line, because SRI in non-secure contexts blurs the line. Rest assured,
I will hold that line. But it's not going to be fun.

Yes. We will have people asking for that. We have people asking for that now. But we have people asking for lots of things. I disagree with the assertion that SRI for HTTP pages makes it more or less difficult to debate the merits of loading insecure resources into a secure context.
 
The basic problem is that we cannot effectively communicate security
nuance in the UX of a mainstream product whose audience is,
ultimately, every single person on the planet.

I don't think anyone is being asked to communicate anything here; SRI, as implemented and specced, has no effect on browser UI. Insecure pages don't gain indicators of security, and secure pages don't get extra bonus security indicators. 

Therefore, when we have to "round off" the security message to make it
simpler, we should always round *up* — never down. If people have a
hard time knowing if they need integrity or confidentiality or both,
just give them both all the time.

I agree with your assertion with regard to security UI.

I disagree that it has the impact you're claiming when applied to features whose core functionality is _negative_. That is, both SRI and CSP allow websites to _reduce_ their privilege, to restrict themselves from loading resources with certain properties. It's not clear to me why you think it's a bad thing for us to allow an insecure page to ask the user agent to prevent itself from doing certain things it knows could potentially be harmful.

-mike

Joel Weinberger

unread,
Dec 19, 2014, 11:12:31 AM12/19/14
to Mike West, Chris Palmer, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
On Fri, Dec 19, 2014 at 2:14 AM, Mike West <mk...@chromium.org> wrote:
On Thu, Dec 18, 2014 at 8:53 PM, Chris Palmer <pal...@google.com> wrote:
On Thu, Dec 18, 2014 at 10:46 AM, Joel Weinberger <j...@chromium.org> wrote:

> I have given my lgtm to Mike's CL to remove the HTTPS restrictions
> (https://codereview.chromium.org/803773002). I am convinced that we will not
> come to a consensus, and this seems like an unsatisfying change that most
> closely matches the most opinions. On a personal level, while I disagree
> with this, Mike has also convinced me that SRI is just another form of
> content policy, and if we're not going to require CSP over HTTPS-only, it
> doesn't make sense to do it here, either.

Hrm. This makes it sound like I somehow forced you, against your will, to approve the change. I certainly argued for it, and it seemed like you agreed with me. If you disagree with the change, I'd suggest not LGTMing it. I'll leave it up for a while in the hopes that someone will be willing to more strongly endorse the change. If no one comments through January when I get back from vacation, I'll land it. :)
Sorry, you're right, it does read that way. Let me be clear: Mike has convinced me that this is the correct thing to do. I view this as content policy which we have precedent for allowing over HTTP. 

Chris Palmer

unread,
Dec 19, 2014, 3:22:10 PM12/19/14
to Mike West, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
On Fri, Dec 19, 2014 at 2:14 AM, Mike West <mk...@chromium.org> wrote:

>> The basic problem is that we cannot effectively communicate security
>> nuance in the UX of a mainstream product whose audience is,
>> ultimately, every single person on the planet.
>
> I don't think anyone is being asked to communicate anything here; SRI, as
> implemented and specced, has no effect on browser UI. Insecure pages don't
> gain indicators of security, and secure pages don't get extra bonus security
> indicators.

But I think people are going to start asking for that. And falsely
upgrading mixed content to non-mixed.

> It's not clear to me why you think it's a
> bad thing for us to allow an insecure page to ask the user agent to prevent
> itself from doing certain things it knows could potentially be harmful.

Because, as with WebCrypto, people are going to start seeing it as an
alternative to real HTTPS, and then demanding that it be treated as
one.

Ryan Sleevi

unread,
Dec 19, 2014, 4:24:31 PM12/19/14
to Chris Palmer, Mike West, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
Well, sure. But they're wrong. :) 

Chris Palmer

unread,
Dec 19, 2014, 4:26:08 PM12/19/14
to Ryan Sleevi, Mike West, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
On Fri, Dec 19, 2014 at 1:24 PM, Ryan Sleevi <rsl...@chromium.org> wrote:

>> Because, as with WebCrypto, people are going to start seeing it as an
>> alternative to real HTTPS, and then demanding that it be treated as
>> one.
>
> Well, sure. But they're wrong. :)

Indeed; and obviously so. But large communities of loud wrong people
often get their way. Especially when the wrongness is hard to
distinguish from a similar but crucially different right thing.

Mike West

unread,
Dec 23, 2014, 8:20:10 AM12/23/14
to Chris Palmer, Ryan Sleevi, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
Chris, you "not lgtm"'d the patch last night; I was hoping to see a continuation of the discussion here, as it sounded like you weren't happy with the idea, but also weren't planning to block the CL. What changed?

-mike

Chris Palmer

unread,
Dec 23, 2014, 2:10:26 PM12/23/14
to Mike West, Ryan Sleevi, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Justin Schuh, Craig Francis, Frederik Braun, Marc-Antoine Ruel
I want to be on the record as very strongly against it, not merely
strongly. :) I assume my non-owner status does not block you from
landing it, right?

I have not seen any convincing argument that it's a good idea on its
own, and it's going to further embolden the fools. Who are numerous.
And who have made me sleepy.

Craig Francis

unread,
Dec 24, 2014, 10:46:59 AM12/24/14
to Chris Palmer, Mike West, Ryan Sleevi, Joel Weinberger, Tanvi Vyas, security-dev, Michal Zalewski, Justin Schuh, Frederik Braun, Marc-Antoine Ruel
On 23 Dec 2014, at 19:10, 'Chris Palmer' via Security-dev <securi...@chromium.org> wrote:

> I want to be on the record as very strongly against it, not merely
> strongly. :) I assume my non-owner status does not block you from
> landing it, right?
>
> I have not seen any convincing argument that it's a good idea on its
> own, and it's going to further embolden the fools. Who are numerous.
> And who have made me sleepy.
>



Sorry for the cross posting, but Chris, would this help?

https://craigfrancis.github.io/dev-security/#sri

It would help create awareness that the SRI feature exists, and allow easy verification that it's working (but please keep showing notices in the console).

And if you uncheck the "HTTPS Mode" (assuming JS is enabled in your browser), you get a nice big warning to show how useless it will be when loading over an HTTP connection.

Pull requests welcome on the GitHub project...

https://github.com/craigfrancis/dev-security/

fbr...@mozilla.com

unread,
Dec 28, 2014, 5:39:04 AM12/28/14
to securi...@chromium.org, lca...@google.com, rsl...@chromium.org, jsc...@chromium.org, mk...@chromium.org, craig....@gmail.com, pal...@google.com, fbr...@mozilla.com, mar...@chromium.org
On Wednesday, December 17, 2014 12:38:54 AM UTC+1, Tanvi Vyas wrote:
> On Friday, December 12, 2014 5:12:14 PM UTC-8, Joel Weinberger wrote:
>
> I'm going to go out on a limb here and try to summarize the various positions that have been listed on this thread, of which I think there are basically 3:
>
> We should support SRI on HTTP sites and pointing towards HTTP resources. SRI should only be used for enforcing the integrity of the end resource thus the threat model has nothing to do with the transport, so it doesn't matter if the transport of the integrity itself or the resource is unauthenticated.We should support SRI on HTTPS sites only but they may point towards HTTP resources. The utility of SRI is gained not from enforcing the integrity of end resources, but from enforcing that the transport has integrity (albeit without confidentiality), thus as long as the browser is ensured of getting the SRI integrity value over an authenticated channel, it can enforce that its resources are what they expect, even if they're not over HTTPS. Since not all sites and CDNs will go to HTTPS, this can at least provide integrity (without confidentiality), and something is better than nothing.We should support SRI only on HTTPS sites and only pointing to HTTPS resources. Similar to (1), SRI should only be used for enforcing the integrity of end resources, since we sites that want transport security should simply use HTTPS. However, because we're talking about providing integrity guarantees, the developer can never know if the guarantee they are trying to provide actually makes it to the client, and we shouldn't pretend like it does by allowing SRI over HTTP, because that will just lead to developer confusion.
> Notably, Mozilla's position that SRI should be allowed on HTTP sites but only pointing to HTTPS resources is not reflected here, so, Freddy, if you want to jump in and give that argument, go for it :-)
>

It seems I may have been mistaken or may have explained Mozilla's position incorrectly. Tanvi is phrasing it better than I could, so please refer to her take on this (as posted previously):

Chris Palmer

unread,
Dec 29, 2014, 1:56:40 AM12/29/14
to Frederik Braun, security-dev, Michal Zalewski, Ryan Sleevi, Justin Schuh, Mike West, Craig Francis, Marc-Antoine Ruel
On Sun, Dec 28, 2014 at 2:39 AM, <fbr...@mozilla.com> wrote:

> We should support SRI on HTTP sites and pointing towards HTTP resources. SRI should only be used for enforcing the integrity of the end resource thus the threat model has nothing to do with the transport, so it doesn't matter if the transport of the integrity itself or the resource is unauthenticated.We should support SRI on HTTPS sites only but they may point towards HTTP resources. The utility of SRI is gained not from enforcing the integrity of end resources, but from enforcing that the transport has integrity (albeit without confidentiality), thus as long as the browser is ensured of getting the SRI integrity value over an authenticated channel, it can enforce that its resources are what they expect, even if they're not over HTTPS. Since not all sites and CDNs will go to HTTPS, this can at least provide integrity (without confidentiality), and something is better than nothing.We should support SRI only on HTTPS sites and only pointing to HTTPS resources. Similar to (1), SRI should only be used for enforcing the integrity of end resources, since we sites that want transport security should simply use HTTPS. However, because we're talking about providing integrity guarantees, the developer can never know if the guarantee they are trying to provide actually makes it to the client, and we shouldn't pretend like it does by allowing SRI over HTTP, because that will just lead to developer confusion.

I can't follow this. Was there a text formatting mishap, or...?

Ryan Sleevi

unread,
Dec 29, 2014, 2:09:52 AM12/29/14
to Chris Palmer, Marc-Antoine Ruel, Michal Zalewski, Mike West, Frederik Braun, security-dev, Craig Francis, Justin Schuh

There was, but it was from Joel's original message.

To restate Freddy: To understand Moz's take, go with Tanvi's earlier replies. :)

Alexey Baranov

unread,
Dec 30, 2014, 9:34:28 AM12/30/14
to rsl...@chromium.org, Chris Palmer, Marc-Antoine Ruel, Michal Zalewski, Mike West, Frederik Braun, security-dev, Craig Francis, Justin Schuh
Does anyone, BTW, have data on how often middleboxes do not respect the no-transform header? Otherwise, anything but an HTTPS page + HTTPS SRI may end up with a lot of, say, false positives (not every resource transformation is evil in the end :)), and webapp developers will naturally avoid using SRI anywhere but on fully HTTPS pages, just to be sure that their site will not break for some random reason.
 
29.12.2014, 10:09, "Ryan Sleevi" <rsl...@chromium.org>:

Joel Weinberger

unread,
Dec 30, 2014, 12:37:11 PM12/30/14
to Alexey Baranov, rsl...@chromium.org, Chris Palmer, Marc-Antoine Ruel, Michal Zalewski, Mike West, Frederik Braun, security-dev, Craig Francis, Justin Schuh
I am unaware of such data, and I believe this was part of Brian's point earlier. The natural response to this is: sure, and if that's where the world heads, we can always take away HTTP use of SRI later; but who are we to judge these use cases up front (especially given that we already allow other policy mechanisms, such as CSP, over HTTP, where modifications by middleboxes can also mess things up)?