Proposal: Prefer secure origins for powerful new web platform features


Chris Palmer

Jun 27, 2014, 6:55:35 PM
to public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Hi everyone,

Apologies in advance to those of you who will get this more than once,
due to the cross-posting. I wanted to get this to a wide audience, to
gather feedback and have a discussion involving as many interested
parties as possible (browser vendors, web developers, security
engineers, privacy advocates, et al.).

* Proposal

The Chrome Security team and I propose that, for new and particularly
powerful web platform features, browser vendors tend to prefer to make
the feature available only to secure origins by default.

* Definitions

"Particularly powerful" would mean things like: features that handle
personally-identifiable information, features that handle high-value
information like credentials or payment instruments, features that
provide the origin with control over the UA's trustworthy/native UI,
access to sensors on the user's device, or generally any feature that
we would provide a user-settable permission or privilege to. Please
discuss!

"Particularly powerful" would NOT mean things like: new rendering and
layout features, CSS selectors, innocuous JavaScript APIs like
showModalDialog, or the like. I expect that the majority of new work
in HTML5 fits in this category. Please discuss!

"Secure origins" are origins that match at least one of the following
(scheme, host, port) patterns:

* (https, *, *)
* (wss, *, *)
* (*, localhost, *)
* (*, 127/8, *)
* (*, ::1/128, *)
* (file, *, —)
* (chrome-extension, *, —)

This list may be incomplete, and may need to be changed. Please discuss!
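As a rough illustration, the pattern list above could be checked like this. This is a hedged sketch in Python; the function name and structure are mine, not any browser's actual implementation, which is native code with more nuance:

```python
import ipaddress
from urllib.parse import urlsplit

# Illustrative sketch only of the (scheme, host, port) patterns above.

SECURE_SCHEMES = {"https", "wss"}             # authenticated transports
LOCAL_SCHEMES = {"file", "chrome-extension"}  # content never transits the network

def is_secure_origin(url: str) -> bool:
    parts = urlsplit(url)
    if parts.scheme in SECURE_SCHEMES or parts.scheme in LOCAL_SCHEMES:
        return True
    host = parts.hostname or ""
    if host == "localhost":
        return True
    try:
        # Covers both the 127/8 and ::1/128 entries in the list above.
        return ipaddress.ip_address(host).is_loopback
    except ValueError:
        return False
```

Note that this deliberately treats plain (http, localhost) as secure, which is itself one of the debatable entries on the list.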

A bug to define “secure transport” in Blink/Chromium:
https://code.google.com/p/chromium/issues/detail?id=362214

* For Example

For example, Chrome is going to make Service Workers available only to
secure origins, because it provides the origin with a new, higher
degree of control over a user's interactions with the origin over an
extended period of time, and because it gives the origin some control
over the user's device as a background task.

Consider the damage that could occur if a user downloaded a service
worker script that had been tampered with because they got it over a
MITM'd or spoofed cafe wifi connection. What should have been a nice
offline document editor could be turned into a long-lived spambot, or
maybe even a surveillance bot. If the script can only run when
delivered via authenticated, integrity-protected transport like HTTPS,
that particular risk is significantly mitigated.

* Background

Legacy platforms/operating systems have a 1-part principal: the user.
When a user logs in, they run programs that run with the full
privilege of the user: all of a user’s programs can do anything the
user can do on all their data and with all their resources. This has
become a source of trouble since the rise of mobile code from many
different origins. It has become less and less acceptable for a user’s
(e.g.) word processor to (e.g.) read the user’s private SSH keys.

Modern platforms have a 2-part security principal: the user, and the
origin of the code. Examples of such modern platforms include (to
varying degrees) the web, Android, and iOS. In these systems, code
from one origin has (or, should have) access only to the resources it
creates and which are explicitly given to it.

For example, the Gmail app on Android has access only to the user’s
Gmail and the system capabilities necessary to read and write that
email. Without an explicit grant, it does not have access to resources
that other apps (e.g. Twitter) create. It also does not have access to
system capabilities unrelated to email. Nor does it have access to the
email of another user on the same computer.

In systems with 2-part principals, it is crucial to strongly
authenticate both parts of the principal, not just one part.
(Otherwise, the system essentially degrades into a 1-part principal
system.) This is why, for example, both Android and iOS require that
every vendor (i.e. origin) cryptographically sign its code. That way,
when a user chooses to install Twitter and to give Twitter powerful
permissions (such as access to the device’s camera), they can be sure
that they are granting such capability only to the Twitter code, and
not to just any code.

By contrast, the web has historically made origin authentication
optional. On the web, origins are defined as having 3 parts: (scheme,
host, port), e.g. (HTTP, example.com, 80) or (HTTPS, mail.google.com,
443). Many origins use unauthenticated schemes like HTTP, WS, or even
FTP.

Granting permissions to unauthenticated origins is, in the presence of
a network attacker, equivalent to granting the permissions to any
origin. The state of the internet is such that we must indeed assume
that a network attacker is present.

* Thank You For Reading This Far!

We welcome discussion, critique, and cool new features!

Michal Zalewski

Jun 27, 2014, 7:30:09 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
I think it's a reasonable proposal, provided that we apply the rules
judiciously. This can be a tad tricky: credentials are listed as an
example, but if we ever come up with an authentication API that offers
better security properties than cookies, I do not think it would be
wise to make it available only to HTTPS origins.

Now, in some ways, it feels that the ship has sailed - microphone,
camera, accelerometer, and geolocation APIs aren't HTTPS-only, and
neither are password managers and cache manifests. This makes it a bit
harder to draw the line: should we aim strictly not to make things
worse, or to design a new breed of APIs that will hopefully one day
make the legacy ones obsolete?

A somewhat tangential concern that I think we are neglecting with the
ever-more-powerful privileged APIs is the near-certainty of XSS
vulnerabilities; the platform offers a growing number of opportunities
for origins to be persistently backdoored, and no clean way to recover
from a transient XSS bug. I think that sooner or later, we will need a
mechanism for servers to nuke or disavow any cached or currently
running content associated with their origin, and start anew.

I think the inclusion of file:/// is somewhat problematic, since it is
not implied that the content arrived over a secure channel, and the
mixed-content behavior is also not well specified. The localhost case
may be also problematic because some quasi-popular software is known
to set up webservers bound to 127.0.0.1 (I think I've seen some
anti-virus software, RAID tools, and print daemons do that).

Yan Zhu

Jun 27, 2014, 7:38:32 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
What are your thoughts on private address space IPs?
https://w3c.github.io/webappsec/specs/mixedcontent/#private-address-space

An example of this would be a router admin interface on 192.168.1.1 that
can be accessed either over plain HTTP or HTTPS with a self-signed cert,
which offers little protection from network attackers anyway.
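The address spaces that draft distinguishes can be sketched with a quick classifier. The function name and return labels here are hypothetical, and a real UA would classify named hosts by their resolved addresses, not by hostname:

```python
import ipaddress

# Hypothetical classifier, loosely following the mixed-content draft's
# local / private / public split.

def address_space(host: str) -> str:
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # A named host: a real UA would classify the resolved address.
        return "public"
    if ip.is_loopback:
        return "local"       # 127/8, ::1/128
    if ip.is_private or ip.is_link_local:
        return "private"     # e.g. 10/8, 192.168/16, 169.254/16
    return "public"
```

A router admin page at 192.168.1.1 would land in "private": reachable only from the local network, but still unauthenticated.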
--
Yan Zhu <y...@eff.org>, <y...@torproject.org>
Staff Technologist
Electronic Frontier Foundation https://www.eff.org
815 Eddy Street, San Francisco, CA 94109 +1 415 436 9333 x134

Chris Palmer

Jun 27, 2014, 7:50:28 PM
to Michal Zalewski, Alex Russell, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 4:29 PM, Michal Zalewski <lca...@coredump.cx> wrote:

> I think it's a reasonable proposal, provided that we apply the rules
> judiciously. This can be a tad tricky: credentials are listed as an
> example, but if we ever come up with an authentication API that offers
> better security properties than cookies, I do not think it would be
> wise to make it available only to HTTPS origins.

Agree. Let's imagine that I said "...handles credentials that are as
powerful as current ones like cookies, or more so". If we get a *less*
powerful credential system, e.g. such that theft of it over non-secure
transport is not fatal, that'd be great and it might make perfect
sense to expose it to non-secure origins.

The whole idea is very case-by-case.

> Now, in some ways, it feels that the ship has sailed - microphone,
> camera, accelerometer, and geolocation APIs aren't HTTPS-only, and
> neither are password managers and cache manifests. This makes it a bit
> harder to draw the line: should we aim strictly not to make things
> worse, or to design a new breed of APIs that will hopefully one day
> make the legacy ones obsolete?

First, I dream of strictly not making things worse; and then perhaps
(over the course of eons) deprecating non-secure support for e.g.
cameras and mics. I have no illusions that rolling back existing
functionality will fly in the short term — nobody panic. :) And, yeah,
perhaps awesomer APIs will supersede existing ones. Time will tell.

> A somewhat tangential concern that I think we are neglecting with the
> ever-more-powerful privileged APIs is the near-certainty of XSS
> vulnerabilities; the platform offers a growing number of opportunities
> for origins to be persistently backdoored, and no clean way to recover
> from a transient XSS bug. I think that sooner or later, we will need a
> mechanism for servers to nuke or disavow any cached or currently
> running content associated with their origin, and start anew.

Indeed, there is a notion like that for Service Workers already, for
exactly this type of reason. Clients are to cache SW scripts, but are
required to re-check after a certain max-lifetime interval. Alex
(explicitly CC'd) can provide details.

> I think the inclusion of file:/// is somewhat problematic, since it is
> not implied that the content arrived over a secure channel,

Right. "But it's here now." Perhaps we should take file: off the list,
perhaps we should find some way to tag files as having come from
secure transport, or... People should feel free to comment on the
Chromium bug, too
(https://code.google.com/p/chromium/issues/detail?id=362214).

> and the
> mixed-content behavior is also not well specified.

Right. mkwst, others, and tangentially me are working on tightening it
up for reasons like this.
http://lists.w3.org/Archives/Public/public-webappsec/2014Jun/0214.html

> The localhost case
> may be also problematic because some quasi-popular software is known
> to set up webservers bound to 127.0.0.1 (I think I've seen some
> anti-virus software, RAID tools, and print daemons do that).

Yes; those are similar to the file: case, but such a thing is
definitely already in a user's TCB (whether good or bad). I sort of
think that clarifies the issue somewhat. But also we definitely would
like to help developers running a local test server out, just a bit.

Michal Zalewski

Jun 27, 2014, 7:56:35 PM
to Chris Palmer, Alex Russell, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
>> I think the inclusion of file:/// is somewhat problematic, since it is
>> not implied that the content arrived over a secure channel,
>
> Right. "But it's here now." Perhaps we should take file: off the list,
> perhaps we should find some way to tag files as having come from
> secure transport, or...

A special problem here is also how to scope the permission if ever
granted by the user. A permission granted to
file:///installed_app/bar.html probably shouldn't carry over to
file:///some/random/downloaded/thing.html.

> Right. mkwst, others, and tangentially me are working on tightening it
> up for reasons like this.
> http://lists.w3.org/Archives/Public/public-webappsec/2014Jun/0214.html

Yeah, I was following this pretty closely, but didn't think it's
aiming to restrict the ability for file:/// to, say, load scripts from
http://bad.idea.com/nooo.js?

/mz

Chris Palmer

Jun 27, 2014, 7:58:57 PM
to Yan Zhu, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 4:38 PM, Yan Zhu <y...@eff.org> wrote:

>> * (https, *, *)
>> * (wss, *, *)
>> * (*, localhost, *)
>> * (*, 127/8, *)
>> * (*, ::1/128, *)
>> * (file, *, —)
>> * (chrome-extension, *, —)
>>
>> This list may be incomplete, and may need to be changed. Please discuss!
>
> What are your thoughts on private address space IPs?
> https://w3c.github.io/webappsec/specs/mixedcontent/#private-address-space

Note that UAs should increasingly disprefer IP addresses in X.509
certs as CNs or as SANs, and IIRC the CABF's Baseline Requirements no
longer allow CAs to issue such certs, and if the origin is remote then
it can only be secure with the help of TLS or something like it.

So, sort of: "That shouldn't be happening."

But, also, it seems like a bad idea to let http://192.168.0.106 access
powerful features when you're on the hotel wifi.

> An example of this would be a router admin interface on 192.168.1.1 that
> can be accessed either over plain HTTP or HTTPS with a self-signed cert,
> which offers little protection from network attackers anyway.

Right. But:

* Do such routers need access to fancy things like Service Workers or
Missile Launch Control Panel?
* They are computationally indistinguishable from a dubious person's
web server on the hotel wifi.
* I have another idea for how to handle authentication for the Internet
of Things, but that's another thread entirely and we shan't go there just
yet. :)

Chris Palmer

Jun 27, 2014, 8:02:56 PM
to Michal Zalewski, Alex Russell, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 4:56 PM, Michal Zalewski <lca...@coredump.cx> wrote:

> A special problem here is also how to scope the permission if ever
> granted by the user. A permission granted to
> file:///installed_app/bar.html probably shouldn't carry over to
> file:///some/random/downloaded/thing.html.

Right. Permissions are (I hope always) granted and persisted per
origin; and, in Chrome at least, each file pathname is a distinct
origin.
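That scoping rule can be sketched as a permission store keyed by origin, with file: URLs keyed on the full path. All names here are hypothetical illustrations, not Chrome's actual implementation:

```python
from urllib.parse import urlsplit

# Hypothetical permission store: web origins key on (scheme, host, port),
# while each file: pathname is its own origin, so a grant to one local
# file does not carry over to another.

def permission_key(url):
    parts = urlsplit(url)
    if parts.scheme == "file":
        return ("file", parts.path)
    return (parts.scheme, parts.hostname, parts.port)

grants = {}

def grant(url, perm):
    grants.setdefault(permission_key(url), set()).add(perm)

def allowed(url, perm):
    return perm in grants.get(permission_key(url), set())
```

Under this keying, a camera grant to file:///installed_app/bar.html would not apply to file:///some/random/downloaded/thing.html, while a grant to an https origin applies to every page on that origin.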

>> Right. mkwst, others, and tangentially me are working on tightening it
>> up for reasons like this.
>> http://lists.w3.org/Archives/Public/public-webappsec/2014Jun/0214.html
>
> Yeah, I was following this pretty closely, but didn't think it's
> aiming to restrict the ability for file:/// to, say, load scripts from
> http://bad.idea.com/nooo.js?

I think you are right about that.

Peter Kasting

Jun 27, 2014, 8:03:00 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 3:55 PM, 'Chris Palmer' via blink-dev <blin...@chromium.org> wrote:
> "Particularly powerful" would mean ... generally any feature that
> we would provide a user-settable permission or privilege to.

I don't really understand this last clause.  Users of browsers can set many permissions, e.g. in Chrome the user can grant or deny sites the ability to use plugins, open popup windows, run Javascript, etc. I doubt you intended to suggest that a new feature with a similar scope to those should be restricted.

PK

Peter Kasting

Jun 27, 2014, 8:04:06 PM
to Michal Zalewski, Chris Palmer, Alex Russell, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 4:56 PM, Michal Zalewski <lca...@coredump.cx> wrote:
>> I think the inclusion of file:/// is somewhat problematic, since it is
>> not implied that the content arrived over a secure channel,
>
> Right. "But it's here now." Perhaps we should take file: off the list,
> perhaps we should find some way to tag files as having come from
> secure transport, or...

> A special problem here is also how to scope the permission if ever
> granted by the user. A permission granted to
> file:///installed_app/bar.html probably shouldn't carry over to
> file:///some/random/downloaded/thing.html.

I believe in Chrome, at least for content settings and similar origin-scoped permissions, file: URLs are treated as if the entire file path is the origin, so every file's permissions are unique to it.

I haven't checked this against the code.

PK 

Ryan Sleevi

Jun 27, 2014, 8:35:49 PM
to Peter Kasting, blink-dev, security-dev, dev-se...@lists.mozilla.org, public-w...@w3.org, Chris Palmer

There is, I think, a balance.

The examples you gave are examples where we default positive (allow), but then allow the user to deny. In effect, all origins BUT X have access to a permission.

However, for permissions where the assumption is default-deny (or prompt), those are certainly in scope. That's because if you grant Origin X access, and X is an origin delivered over an insecure transport, you've granted it to all origins, in effect.

Would it make more sense to clarify that its in response to deny-by-default permissions? geolocation, audio, video all come to mind as modern deny features that would, ideally, have been restricted for the reasons listed - though that horse has long since left the barn.

Peter Kasting

Jun 27, 2014, 8:46:46 PM
to Ryan Sleevi, blink-dev, security-dev, dev-se...@lists.mozilla.org, public-w...@w3.org, Chris Palmer
On Fri, Jun 27, 2014 at 5:35 PM, Ryan Sleevi <rsl...@chromium.org> wrote:

On Jun 27, 2014 5:02 PM, "'Peter Kasting' via Security-dev" <securi...@chromium.org> wrote:
> On Fri, Jun 27, 2014 at 3:55 PM, 'Chris Palmer' via blink-dev <blin...@chromium.org> wrote:
>> "Particularly powerful" would mean ... generally any feature that
>>
>> we would provide a user-settable permission or privilege to.
>
> I don't really understand this last clause.  Users of browsers can set many permissions, e.g. in Chrome the user can grant or deny sites the ability to use plugins, open popup windows, run Javascript, etc. I doubt you intended to suggest that a new feature with a similar scope to those should be restricted.

> There is, I think, a balance.
>
> The examples you gave are examples where we default positive (allow), but then allow the user to deny. In effect, all origins BUT X have access to a permission.

I don't know that I'm comfortable with that summary.  We allow users to globally default-deny these permissions.  In the case of plugins in particular, we have a click-to-play setting that many people use that amounts to default-deny.  There are arguments one could make for a browser shipping that setting by default (although I agree with Chrome's decision not to do so in this case), but those arguments don't really have much connection to the security of the feature, they're more about preventing annoyances that are widespread on the web.  If we end up conflating things like "feature is scoped to HTTPS" with "feature is occasionally annoying", I think we've made a mistake. 

> However, for permissions where the assumption is default-deny (or prompt), those are certainly in scope. That's because if you grant Origin X access, and X is an origin delivered over an insecure transport, you've granted it to all origins, in effect.

Again, I think the reason _why_ such a thing would be default-deny is an important part of the answer here.  I can imagine features that I think your argument makes sense for, as well as ones where it doesn't.  Accordingly, I wouldn't use "default-deny" in my list of descriptions of the rules governing this, I'd instead focus on the reasons why a particular capability could e.g. leak private data or something.  Which Chris basically did.  Which is why the trailing "...or anything else that has permissions" clause seemed not only confusing but unnecessary.

> geolocation, audio, video all come to mind as modern deny features that would, ideally, have been restricted for the reasons listed - though that horse has long since left the barn.

Clarity: I assume you mean audio/video recording, not playback (which is an example of a capability we shouldn't restrict).

PK

PhistucK

Jun 28, 2014, 4:20:06 AM
to Ryan Sleevi, Peter Kasting, blink-dev, security-dev, dev-se...@lists.mozilla.org, public-w...@w3.org, Chris Palmer
Regarding audio and video input - I would not dismiss that just yet.
While we are still in the unfortunate situation of having a prefixed implementation (and I think no browser except Presto-based Opera has a non-prefixed implementation), we can take advantage of this and add the secure origin restriction when we remove the prefix.


PhistucK



Harald Alvestrand

Jun 28, 2014, 4:35:11 AM
to PhistucK, Ryan Sleevi, Peter Kasting, blink-dev, security-dev, dev-se...@lists.mozilla.org, public-w...@w3.org, Chris Palmer
Just because this echoes the long discussions in WebRTC:
For cameras and microphones, the rule is:

- Permissions are prompted for when required
- Permissions can be stored for secure origins
- Permissions can NOT be stored for non-secure origins

We treat file: as a non-secure origin, because file: is frequently used for things like HTML from incoming mail messages.
I'd argue that http://localhost should be in the same category as file:, but would have to work through the cases to have a strong opinion here.

I like the idea of having a Web platform-wide definition of "secure" vs "insecure" origins, so that we don't have a per-spec definition that is inconsistent from feature to feature. But there are devils in those details.





Jesper Kristensen

Jun 28, 2014, 4:53:56 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org, mozilla-de...@lists.mozilla.org
On 28-06-2014 00:55, Chris Palmer wrote:
> "Secure origins" are origins that match at least one of the following
> (scheme, host, port) patterns:
>
> * (https, *, *)
> * (wss, *, *)
> * (*, localhost, *)
> * (*, 127/8, *)
> * (*, ::1/128, *)
> * (file, *, —)
> * (chrome-extension, *, —)
>
> This list may be incomplete, and may need to be changed. Please discuss!

I would like (http, localhost) to not be treated as more secure than
(http, example.com) because localhost is often used for development, and
I don't want things to work there and then break when I deploy it.

- Jesper Kristensen

Sigbjørn Vik

Jun 30, 2014, 4:27:18 AM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On 28-Jun-14 00:55, 'Chris Palmer' via Security-dev wrote:

> * Proposal
>
> The Chrome Security team and I propose that, for new and particularly
> powerful web platform features, browser vendors tend to prefer to make
> the feature available only to secure origins by default.

I assume you don't mean that powerful features should be available by
default, but rather that the opt-in UI should only be available on
secure web pages? And that for insecure web pages, the UI might be
hidden/scary/temporary/etc.

I also assume that you are talking about secure web pages, and not
secure origins. Web pages served from secure origins can be insecure by
including insecure inlines, by triggering fraud/spoof checks, by
breaking browser heuristics (e.g. pinning) by changing too much, through
interference from insecure extensions, etc, and we presumably don't want
to give these sites access.

> "Secure origins" are origins that match at least one of the following
> (scheme, host, port) patterns:
>
> * (https, *, *)

That is a required part of the definition, but not sufficient. In
addition to using https, a secure origin should also limit itself to
secure https algorithms, have a matching and validating certificate, and
possibly more. The definition of "more" might also change over time, and
vary between browsers, e.g. requiring CT would make any hard definitions
unworkable. Browsers generally have a fairly good UI indicating secure
vs non-secure web pages, deferring to this might be sufficient.

Overall, making powerful features less accessible to insecure web pages
sounds like a good idea. :)

--
Sigbjørn Vik
Opera Software

mea...@chromium.org

Jun 30, 2014, 1:50:14 PM
to securi...@chromium.org, lca...@coredump.cx, pal...@google.com, sligh...@google.com, public-w...@w3.org, blin...@chromium.org, dev-se...@lists.mozilla.org, pkas...@google.com
Permissions aren't saved for files for most APIs. This is unintended, though: the security origin is passed as empty for those APIs, so while an infobar is shown, the permission is neither granted nor saved.

Chris Palmer

Jul 15, 2014, 1:54:34 PM
to Kevin Chadwick, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org, Mike West
[re-adding all the lists]

On Tue, Jul 15, 2014 at 10:06 AM, 'Kevin Chadwick' via Security-dev
<securi...@chromium.org> wrote:

> What immediately struck me is that this definition seems misleading.
>
>> "Secure origins" are origins that match at least one of the following
>> (scheme, host, port) patterns:
>>
>> * (https, *, *)
>> * (wss, *, *)
>
> The below are very likely secure origins, and if not the user should
> know; but the above are simply secure transports, and the domain needs to
> be checked carefully by the user?

As usual, using the word "secure" is an oversimplification. I was
trying to use it advisedly; but yeah, there is tons of ambiguity.

My goal is for the browser/UA to be able to determine, unassisted and
at run-time, whether or not the code it is running could possibly have
been delivered from an authenticated origin and not tampered with in
transit. Degrees of security higher than this bare minimum are
obviously wonderful, but they do depend on us first nailing down this
bare minimum. They tend to also depend on things outside the browser,
such as user choice or configuration.

There is still the issue of mixed scripting content: Should we give
(https, example.com, 443) access to (say) ServiceWorkers when it loads
script from (http, goats.net, 80)? Surely not; and that is part of the
work Mike West is doing in defining the behavior of mixed content.

> There is certainly already way too much [bs|js] trying to get your
> private data over ssl and without asking.

I'm not sure what you mean. The powerful APIs tend to involve some
kind of user interaction/selection/permission as well, such as
getUserMedia asking which camera to use, or the settable and
un-settable permissions surfaced in the Page Actions in the Chrome
Omnibox and in the Page Info Bubble (which pops up when you click the
Lock icon or the favicon).

If you're worried that (https, example.com, 443) is letting (https,
goats.org, 443) have access to e.g. your camera because example.com
includes script from goats.org, yeah that is tricky. Again, whether or
not that is acceptable to the user is not something the browser can
determine by itself.

>> This list may be incomplete, and may need to be changed. Please discuss!
>
> http over VPN's/IPSEC spring to mind but I don't know what you could do
> about that except ask the user

Right; a VPN is transparent to the browser — the browser could not
make a determination of origin authentication or script integrity
unassisted at run-time. Additionally, VPNs are not end-to-end secure:
the VPN provider can break the "security" "guarantee" in a way that is
not possible for an intermediary when HTTPS is working properly. (Some
VPN designs provide even less security, e.g. those in which the
VPN-using hosts all use the same key material. In that situation, any
user can break the security of the VPN, not just the VPN provider.)

Chris Palmer

Jul 15, 2014, 1:57:54 PM
to Sigbjørn Vik, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Mon, Jun 30, 2014 at 1:27 AM, Sigbjørn Vik <sigb...@opera.com> wrote:

>> The Chrome Security team and I propose that, for new and particularly
>> powerful web platform features, browser vendors tend to prefer to make
>> the feature available only to secure origins by default.
>
> I assume you don't mean that powerful features should be available by
> default, but rather that the opt-in UI should only be available on
> secure web pages? And that for insecure web pages, the UI might be
> hidden/scary/temporary/etc.

Yes, that is how I imagine the policy would be best applied. I don't
imagine giving any HTTPS page full access to all powerful goodies with
no user interaction. :)

> I also assume that you are talking about secure web pages, and not
> secure origins. Web pages served from secure origins can be insecure by
> including insecure inlines, by triggering fraud/spoof checks, by
> breaking browser heuristics (e.g. pinning) by changing too much, through
> interference from insecure extensions, etc, and we presumably don't want
> to give these sites access.

Right.

>> "Secure origins" are origins that match at least one of the following
>> (scheme, host, port) patterns:
>>
>> * (https, *, *)
>
> That is a required part of the definition, but not sufficient. In
> addition to using https, a secure origin should also limit itself to
> secure https algorithms, have a matching and validating certificate, and
> possibly more. The definition of "more" might also change over time, and
> vary between browsers, e.g. requiring CT would make any hard definitions
> unworkable. Browsers generally have a fairly good UI indicating secure
> vs non-secure web pages, deferring to this might be sufficient.

I agree. Ryan Sleevi and I have been gradually deprecating and then
removing support for the obviously-broken cipher suites, weak keys,
and so on. For now, I just want to pin down a bare minimum; ratcheting
up the definition of "secure transport" has been and will be on-going
work.

> Overall, making powerful features less accessible to insecure web pages
> sounds like a good idea. :)

Thanks! And thanks to everyone for your feedback.

Chris Palmer

Jul 15, 2014, 2:11:22 PM
to Peter Kasting, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Jun 27, 2014 at 5:02 PM, Peter Kasting <pkas...@google.com> wrote:

>> "Particularly powerful" would mean ... generally any feature that
>> we would provide a user-settable permission or privilege to.
>
> I don't really understand this last clause. Users of browsers can set many
> permissions, e.g. in Chrome the user can grant or deny sites the ability to
> use plugins, open popup windows, run Javascript, etc. I doubt you intended
> to suggest that a new feature with a similar scope to those should be
> restricted.

"""In systems with 2-part principals, it is crucial to strongly
authenticate both parts of the principal, not just one part.
(Otherwise, the system essentially degrades into a 1-part principal
system.)"""

That is, to grant (say) ServiceWorkers or even just (say) "can load
JavaScript" power to an unauthenticated origin means, effectively,
granting that power to any origin, in the presence of a network
attacker. And we can only assume that a network attacker is
essentially always present.

Now, some of those features (disabling pop-ups, disabling JS, etc.)
are just as much convenience features as they are security features.
To the extent that people want them for convenience, and/or to the
extent that people want to turn them off for all origins, it makes
sense to expose a choice.