
Proposal: Marking HTTP As Non-Secure


Chris Palmer

Dec 12, 2014, 7:46:44 PM
to public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Hi everyone,

Apologies to those of you who are about to get this more than once, due to
the cross-posting. I'd like to get feedback from a wide variety of people:
UA developers, web developers, and users. The canonical location for this
proposal is:
https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.

Proposal

We, the Chrome Security Team, propose that user agents (UAs) gradually
change their UX to display non-secure origins as affirmatively non-secure.
We intend to devise and begin deploying a transition plan for Chrome in
2015.

The goal of this proposal is to more clearly display to users that HTTP
provides no data security.

Request

We’d like to hear everyone’s thoughts on this proposal, and to discuss with
the web community how different transition plans might serve users.

Background

We all need data communication on the web to be secure (private,
authenticated, untampered). When there is no data security, the UA should
explicitly display that, so users can make informed decisions about how to
interact with an origin.

Roughly speaking, there are three basic transport layer security states for
web origins:


- Secure (valid HTTPS, other origins like (*, localhost, *));
- Dubious (valid HTTPS but with mixed passive resources, valid HTTPS with
  minor TLS errors); and
- Non-secure (broken HTTPS, HTTP).


For more precise definitions of secure and non-secure, see Requirements for
Powerful Features <http://www.w3.org/TR/powerful-features/> and Mixed
Content <http://www.w3.org/TR/mixed-content/>.
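As a rough sketch (illustrative Python only, not any browser's actual code; the function and flag names are invented for this example), the three-way classification above might look like:

```python
from enum import Enum

class SecurityState(Enum):
    SECURE = "secure"
    DUBIOUS = "dubious"
    NON_SECURE = "non-secure"

def classify(scheme: str, host: str, valid_cert: bool = False,
             mixed_passive: bool = False,
             minor_tls_errors: bool = False) -> SecurityState:
    """Rough classification following the three states above."""
    if scheme == "http":
        # localhost is conventionally treated as a secure context.
        if host in ("localhost", "127.0.0.1"):
            return SecurityState.SECURE
        return SecurityState.NON_SECURE  # plain HTTP: no transport security
    if scheme == "https":
        if not valid_cert:
            return SecurityState.NON_SECURE  # broken HTTPS
        if mixed_passive or minor_tls_errors:
            return SecurityState.DUBIOUS
        return SecurityState.SECURE
    return SecurityState.NON_SECURE
```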

We know that active tampering and surveillance attacks, as well as passive
surveillance attacks, are not theoretical but are in fact commonplace on
the web.

RFC 7258: Pervasive Monitoring Is an Attack
<https://tools.ietf.org/html/rfc7258>

NSA uses Google cookies to pinpoint targets for hacking
<http://www.washingtonpost.com/blogs/the-switch/wp/2013/12/10/nsa-uses-google-cookies-to-pinpoint-targets-for-hacking/>

Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine
<http://www.wired.com/2014/10/verizons-perma-cookie/>

How bad is it to replace adSense code id to ISP's adSense ID on free
Internet?
<http://stackoverflow.com/questions/25438910/how-bad-is-it-to-replace-adsense-code-id-to-isps-adsense-id-on-free-internet>

Comcast Wi-Fi serving self-promotional ads via JavaScript injection
<http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/>

Erosion of the moral authority of transparent middleboxes
<https://tools.ietf.org/html/draft-hildebrand-middlebox-erosion-01>

Transitioning The Web To HTTPS <https://w3ctag.github.io/web-https/>

We know that people do not generally perceive the absence of a warning sign.
(See e.g. The Emperor's New Security Indicators
<http://commerce.net/wp-content/uploads/2012/04/The%20Emperors_New_Security_Indicators.pdf>.)
Yet the only situation in which web browsers are guaranteed not to warn
users is precisely when there is no chance of security: when the origin is
transported via HTTP. Here are screenshots of the status quo for non-secure
domains in Chrome, Safari, Firefox, and Internet Explorer:

[image: Screen Shot 2014-12-11 at 5.08.48 PM.png]

[image: Screen Shot 2014-12-11 at 5.09.55 PM.png]

[image: Screen Shot 2014-12-11 at 5.11.04 PM.png]

[image: ie-non-secure.png]

Particulars

UA vendors who agree with this proposal should decide how best to phase in
the UX changes given the needs of their users and their product design
constraints. Generally, we suggest a phased approach to marking non-secure
origins as non-secure. For example, a UA vendor might decide that in the
medium term, they will represent non-secure origins in the same way that
they represent Dubious origins. Then, in the long term, the vendor might
decide to represent non-secure origins in the same way that they represent
Bad (i.e., broken HTTPS) origins.

Ultimately, we can even imagine a long term in which secure origins are so
widely deployed that we can leave them unmarked (as HTTP is today), and
mark only the rare non-secure origins.

There are several ways vendors might decide to transition from one phase to
the next. For example, the transition plan could be time-based:


1. T0 (now): Non-secure origins unmarked
2. T1: Non-secure origins marked as Dubious
3. T2: Non-secure origins marked as Non-secure
4. T3: Secure origins unmarked


Or, vendors might set thresholds based on telemetry that measures the
ratios of user interaction with secure origins vs. non-secure. Consider
this strawman proposal:


1. Secure > 65%: Non-secure origins marked as Dubious
2. Secure > 75%: Non-secure origins marked as Non-secure
3. Secure > 85%: Secure origins unmarked


The particular thresholds or transition dates are very much up for
discussion. Additionally, how to define “ratios of user interaction” is
also up for discussion; ideas include the ratio of secure to non-secure
page loads, the ratio of secure to non-secure resource loads, or the ratio
of total time spent interacting with secure vs. non-secure origins.
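The strawman ratchet above can be sketched as a tiny function (the name `ui_phase` and the phase strings are illustrative only; real telemetry would be noisier and the thresholds are explicitly up for discussion):

```python
def ui_phase(secure_ratio: float) -> str:
    """Map the measured share of secure interactions (0.0-1.0) to a
    UI phase, using the strawman thresholds above."""
    if secure_ratio > 0.85:
        return "Secure origins unmarked"
    if secure_ratio > 0.75:
        return "Non-secure origins marked as Non-secure"
    if secure_ratio > 0.65:
        return "Non-secure origins marked as Dubious"
    return "Non-secure origins unmarked"
```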

We’d love to hear what UA vendors, web developers, and users think. Thanks
for reading!

Chris Palmer

Dec 12, 2014, 8:32:33 PM
to Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Fri, Dec 12, 2014 at 5:17 PM, Eduardo Robles Elvira <
edu...@agoravoting.com> wrote:

* The biggest problem I see is that to get an accepted certificate you
> traditionally needed to pay. This was a show-stopper for having TLS
> certs on small websites. Mozilla, EFF, Cisco, and Akamai are trying to fix
> that [1], and StartSSL already gives free certificates, though. Just
> stating the obvious: either you get easy and free "secure" certificates,
> or this proposal is going to make some webmasters angry.
>
Oh yes, absolutely. Obviously, Let's Encrypt is a great help, SSLMate's
ease-of-use and low price are great, and CloudFlare's free SSL helps too.

Hopefully, as operations like those ramp up, it will become increasingly
easy for web developers to switch to HTTPS. We (Chrome) will weigh
changes to the UX very carefully, and with a close eye on HTTPS adoption.

Igor Bukanov

Dec 13, 2014, 11:56:30 AM
to Chris Palmer, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
Free SSL certificates help, but another problem is that activating SSL not
only generates warnings, but can outright break a site due to links to
insecure resources. Just consider the case of old pages with a few YouTube
videos served in http iframes. Accessing those pages over https stops the
videos from working, as browsers block access to active insecure content.
In the case of YouTube one can fix that, but for other resources it may
not be possible.

So what is required is the ability to refer to insecure content from HTTPS
pages without harming the user experience. For example, there should be a
way to insert an http iframe into an https site. Similarly, it would be
nice if a web developer could refer to scripts, images, etc. over http, as
long as the script/image tag is accompanied by a secure hash of the known
content.
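That second idea is roughly what the Subresource Integrity work standardizes: the embedding page commits to a digest in advance, and the fetched bytes are accepted only if they match. A minimal sketch (the function name `verify_integrity` is illustrative, not a browser API):

```python
import hashlib

def verify_integrity(body: bytes, expected_sha256: str) -> bool:
    """Accept a non-securely transported subresource only if its bytes
    match a digest the embedding (HTTPS) page committed to in advance.
    This gives integrity/authenticity of the content, though not
    confidentiality of the transfer."""
    return hashlib.sha256(body).hexdigest() == expected_sha256
```

Any in-transit tampering changes the digest, so an injected script would simply be rejected rather than executed.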

Richard Barnes

Dec 13, 2014, 1:08:18 PM
to Chris Palmer, dev-se...@lists.mozilla.org
[limiting to dev.security to minimize cross-posting]

Hey Chris,

Thanks for putting together a good case for this idea. I think there's a fair degree of interest in this on the Mozilla side. A couple of comments:

The really critical question for me here is the timeline. It's pretty much out of the question to deploy an indicator like this today, because it would appear so often. That's why we're so enthusiastic about things like Let's Encrypt -- to get us to the threshold where there's enough TLS that we can be harder on non-secure sites. So my preference would be for something like the percentage-based ratchet you describe.

I wonder to what degree this is about transport vs. scheme. Of course, in HTTP/1.1, the only way you get TLS is with HTTPS. In HTTP/2, however, the full URL is carried in the request, so HTTP-schemed requests can be sent over TLS connections. Putting aside for the moment the questions of how they would get there and whether unauthenticated TLS is allowed, suppose a page was requested with an HTTP scheme, but the request went over an authenticated TLS connection. So it would get all the same COMSEC benefits as HTTPS, just not things like mixed content blocking and referer stripping. How does that fit into your scheme? It seems like it could be a useful distinction to make to allow sites to upgrade transports without having to change content.

Thanks again,
--Richard

ianG

Dec 13, 2014, 1:16:50 PM
to dev-se...@lists.mozilla.org
On 13/12/2014 00:46 am, Chris Palmer wrote:
> Hi everyone,
>
> Apologies to those of you who are about to get this more than once, due to
> the cross-posting. I'd like to get feedback from a wide variety of people:
> UA developers, web developers, and users. The canonical location for this
> proposal is:
> https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.
>
> Proposal
>
> We, the Chrome Security Team, propose that user agents (UAs) gradually
> change their UX to display non-secure origins as affirmatively non-secure.
> We intend to devise and begin deploying a transition plan for Chrome in
> 2015.
>
> The goal of this proposal is to more clearly display to users that HTTP
> provides no data security.


What is your proposal for HTTPS self-signed and HTTPS unknown-CA? Is
this considered less secure than HTTP or is it considered more secure
than HTTP but less than HTTPS?

Are you also considering ways to get more servers up and running with
certs for zero cost?



iang

Eduardo Robles Elvira

Dec 13, 2014, 8:43:52 PM
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Hello Chris et al:

I'm a web developer. I did some security-related development some time
ago inside Konqueror. Some first thoughts:

* In principle, the proposal makes sense to me. Who doesn't want a more
secure web? Kudos for making it possible with ambitious proposals like this.

* The biggest problem I see is that to get an accepted certificate you
traditionally needed to pay. This was a show-stopper for having TLS
certs on small websites. Mozilla, EFF, Cisco, and Akamai are trying to fix
that [1], and StartSSL already gives free certificates, though. Just
stating the obvious: either you get easy and free "secure" certificates,
or this proposal is going to make some webmasters angry.


Regards,
--
[1]
https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web
--
Eduardo Robles Elvira @edulix skype: edulix2
http://agoravoting.org @agoravoting +34 634 571 634


Mathias Bynens

Dec 13, 2014, 8:43:52 PM
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sat, Dec 13, 2014 at 1:46 AM, 'Chris Palmer' via blink-dev <
blin...@chromium.org> wrote:
>
> We know that people do not generally perceive the absence of a warning
> sign. (See e.g. The Emperor's New Security Indicators
> <http://commerce.net/wp-content/uploads/2012/04/The%20Emperors_New_Security_Indicators.pdf>.)
> Yet the only situation in which web browsers are guaranteed not to warn
> users is precisely when there is no chance of security: when the origin is
> transported via HTTP. Here are screenshots of the status quo for non-secure
> domains in Chrome, Safari, Firefox, and Internet Explorer:
>
> [image: Screen Shot 2014-12-11 at 5.08.48 PM.png]
>
> [image: Screen Shot 2014-12-11 at 5.09.55 PM.png]
>
> [image: Screen Shot 2014-12-11 at 5.11.04 PM.png]
>
> [image: ie-non-secure.png]
>

For completeness’ sake, here’s what a non-secure site looks like in Opera:

Alex Gaynor

Dec 13, 2014, 8:43:52 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Fantastic news, I'm very glad to see the Chrome Security Team taking
initiative on this.

Existing browsers' behavior of defaulting to, and using a "neutral" UI for,
HTTP is fundamentally an assumption about what users want. And it's not an
assumption that is grounded in data.

No ordinary user's mental model of communication on the net includes a lack
of authenticity, integrity, or confidentiality. Plaintext is a blight on the
internet, and this is a fantastic step towards making reality match users'
(TOTALLY REASONABLE!) expectations.

Cheers,
Alex


Christian Heutger

Dec 13, 2014, 8:43:53 PM
to pal...@google.com, edu...@agoravoting.com, dev-se...@lists.mozilla.org, blin...@chromium.org, public-w...@w3.org, securi...@chromium.org
I see a big danger in the current trend. If everyone is expected to have a
free „secure“ certificate and is required to enable HTTPS, nothing is won.
DV certificates (similar to DANE) ultimately say absolutely nothing about
the website operator. They ensure encryption, so I can then be phished and
scammed … encrypted. Big advantage!^^ Pushing real validation (e.g. EV with
a green address bar and details validated by an independent third party,
with no breakable, spoofable automatism) vs. no validation is much more
important and should be the focus. This „change“ could still come with
marking HTTP as Non-Secure, but just declaring HTTPS secure is completely
the wrong signal and will result in more confusion, and in losing any trust
in any kind of browser padlock, than before.

Just a proposal:

Mark HTTP as Non-Secure (similar to self-signed), e.g. with a red padlock or sth. similar.
Mark HTTPS as Secure (secure only in the sense of encrypted), e.g. with a yellow padlock or sth. similar.
Mark HTTPS with Extended Validation (encrypted and validated) as it is today, with a green padlock or sth. similar.

This would be a good road for more security on the web.

Daniel Veditz

Dec 14, 2014, 12:37:59 AM
to ianG, dev-se...@lists.mozilla.org
On 12/13/14 10:15 AM, ianG wrote:
> Are you also considering ways to get more servers up and running with
> certs for zero cost?

You apparently missed this: https://letsencrypt.org/

-Dan Veditz

ianG

Dec 14, 2014, 2:21:41 AM
to dev-se...@lists.mozilla.org
I didn't miss it -- I wondered what Chrome team's plan was...

If that is "the plan" then they should say that! Then we can respond to it.



iang

Michael Ströder

Dec 14, 2014, 11:04:01 AM
to mozilla-de...@lists.mozilla.org
Eduardo Robles Elvira wrote:
> * The biggest problem I see is that to get an accepted certificate you
> traditionally needed to pay. This was a show-stopper for having TLS
> certs on small websites. Mozilla, EFF, Cisco, and Akamai are trying to fix
> that [1], and StartSSL already gives free certificates, though. Just
> stating the obvious: either you get easy and free "secure" certificates,
> or this proposal is going to make some webmasters angry.

It's not only getting certificates. It's also getting the cert installed. Note
that the majority of web content is not hosted on separate servers maintained
by admins. So "Let's Encrypt" won't help either.

That does not mean that I'm against this proposal. It might give a strong push
into the right direction.

Ciao, Michael.

Chris Palmer

Dec 14, 2014, 12:59:29 PM
to Igor Bukanov, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
On Sat, Dec 13, 2014 at 8:56 AM, Igor Bukanov <ig...@mir2.org> wrote:

Free SSL certificates help, but another problem is that activating SSL not
> only generates warnings, but can outright break a site due to links to
> insecure resources. Just consider the case of old pages with a few YouTube
> videos served in http iframes. Accessing those pages over https stops the
> videos from working, as browsers block access to active insecure content.
> In the case of YouTube one can fix that, but for other resources it may
> not be possible.
>

Yes, unfortunately we have a collective action problem. (
http://en.wikipedia.org/wiki/Collective_action#Collective_action_problem)
But just because it's hard doesn't mean we shouldn't try. I'd suggest
that embedders ask embeddees to at least make HTTPS available, even if not
the default.

Also, keep in mind that this proposal is only to mark HTTP as non-secure —
HTTP will still work, and you can still host your site over HTTP.


> So what is required is the ability to refer to insecure content from HTTPS
> pages without harming the user experience.
>

No, because that reduces or eliminates the security guarantee of HTTPS.


> For example, there should be a way to insert an http iframe into an https
> site. Similarly, it would be nice if a web developer could refer to
> scripts, images, etc. over http, as long as the script/image tag is
> accompanied by a secure hash of the known content.
>

Same thing here. The security guarantee of HTTPS is the combination of
server authentication, data integrity, and data confidentiality. It is not
a good user experience to take away confidentiality without telling users.
And, unfortunately, we cannot effectively communicate that nuance. We have
enough trouble effectively communicating the secure/non-secure distinction
as it is.

Alex Gaynor

Dec 14, 2014, 1:01:08 PM
to Chris Palmer, Igor Bukanov, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
Chris,

Is there a plan for HTTP to eventually have an interstitial, the way HTTPS
with a bogus cert does?

Alex


Chris Palmer

Dec 14, 2014, 1:03:50 PM
to Richard Barnes, dev-se...@lists.mozilla.org
On Sat, Dec 13, 2014 at 10:07 AM, Richard Barnes <rba...@mozilla.com>
wrote:

The really critical question for me here is the timeline. It's pretty much
> out of the question to deploy an indicator like this today, because it
> would appear so often. That's why we're so enthusiastic about things like
> Let's Encrypt -- to get us to the threshold where there's enough TLS that
> we can be harder on non-secure sites. So my preference would be for
> something like the percentage-based ratchet you describe.
>

We are thinking the same thing, yeah.


> I wonder to what degree this is about transport vs. scheme. Of course, in
> HTTP/1.1, the only way you get TLS is with HTTPS. In HTTP/2, however, the
> full URL is carried in the request, so HTTP-schemed requests can be sent
> over TLS connections. Putting aside for the moment the questions of how
> they would get there and whether unauthenticated TLS is allowed, suppose a
> page was requested with an HTTP scheme, but the request went over an
> authenticated TLS connection. So it would get all the same COMSEC benefits
> as HTTPS, just not things like mixed content blocking and referer
> stripping. How does that fit into your scheme? It seems like it could be
> a useful distinction to make to allow sites to upgrade transports without
> having to change content.
>

What matters is the security isolation status of the *origin*. If the
origin is (HTTP, example.com, 80), and there is some hypothetical
STARTTLS-like upgrade to TLS, that's nice. But the origin is still (HTTP,
example.com, 80), and non-securely-transported script could still execute
in the same context.
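In other words, same-origin decisions are made on the (scheme, host, port) triple alone; upgrading only the transport leaves the origin unchanged. A minimal Python sketch (the helper `origin` and the `DEFAULT_PORTS` table are illustrative names, not browser internals):

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url: str) -> tuple:
    """A web origin is the (scheme, host, port) triple; two URLs are
    same-origin only if all three components match."""
    parts = urlsplit(url)
    port = parts.port or DEFAULT_PORTS.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)
```

Note that an http:// URL carried over an opportunistically upgraded connection still yields the ("http", host, 80) origin, so non-securely transported script could still run in the same context.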

Chris Palmer

Dec 14, 2014, 1:09:07 PM
to ianG, dev-se...@lists.mozilla.org
On Sat, Dec 13, 2014 at 10:15 AM, ianG <ia...@iang.org> wrote:

What is your proposal for HTTPS self-signed and HTTPS unknown-CA? Is this
> considered less secure than HTTP or is it considered more secure than HTTP
> but less than HTTPS?
>

We are not proposing a change to the status quo for these scenarios.

The only way to make self-signed or unknown issuer certificates even barely
acceptable is to pin their keys on first use. But then you have to explain
to hundreds of millions of people how to recover when the keys change, and
how to distinguish legitimate key change from illegitimate. I don't think
we know how to do that yet.
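A minimal sketch of why that is hard (an illustrative pin-on-first-use store; a real design would pin SPKI hashes and need expiry and backup pins):

```python
# host -> public-key fingerprint remembered on first contact
pins: dict[str, str] = {}

def check_pin(host: str, fingerprint: str) -> bool:
    """Trust-on-first-use: remember the first key seen for a host.
    A later mismatch is ambiguous -- legitimate re-key or attack --
    and that ambiguity is exactly the recovery problem above."""
    if host not in pins:
        pins[host] = fingerprint  # first use: trust and remember
        return True
    return pins[host] == fingerprint
```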

Are you also considering ways to get more servers up and running with certs
> for zero cost?
>

Yes, of course. But we have nothing to announce on that front just yet.

In the meantime, there is:

* CloudFlare
* SSLMate
* Let's Encrypt coming soon
* Non-custom-domain hosting like appspot.com and github.io

Chris Palmer

Dec 14, 2014, 1:17:13 PM
to Christian Heutger, dev-se...@lists.mozilla.org, blin...@chromium.org, public-w...@w3.org, edu...@agoravoting.com, securi...@chromium.org
On Sat, Dec 13, 2014 at 11:05 AM, Christian Heutger <chri...@heutger.net>
wrote:

I see a big danger in the current trend. If everyone is expected to have a
> free „secure“ certificate and is required to enable HTTPS, nothing is won.
> DV certificates (similar to DANE) ultimately say absolutely nothing about
> the website operator.
>

Reducing the number of parties you have to trust from [ the site operator,
the operators of all networks between you and the site operator ] to just [
the site operator ] is a huge win.


> They ensure encryption, so I can then be phished, be scammed, … encrypted.
> Big advantage!^^ Pushing real validation (e.g. EV with green adressbar and
> validated details by an independent third party, no breakable, spoofable
> automatism) vs. no validation is much more important and should be focussed
> on.
>

I think you'll find EV is not as "extended" as you might be hoping.

But more importantly, the only way to get minimal server auth, data
integrity, and data confidentiality on a mass scale is with something at
least as easy to deploy as DV. Indeed, you'll see many of the other
messages in this thread are from people concerned that DV isn't easy enough
yet! So requiring EV is a non-starter.

Additionally, the web origin concept is (scheme, host, port). Crucially,
EV-issued names are not distinct origins from DV-issued names, and
proposals to enforce such a distinction in browsers have not gotten any
traction because they are not super feasible (for a variety of reasons).


> However, this „change“ could still come with marking HTTP as Non-Secure,
> but just declaring HTTPS secure is completely the wrong signal and will
> result in more confusion, and in losing any trust in any kind of browser
> padlock, than before.
>

HTTPS is the bare minimum requirement for secure web application
*transport*. Is secure transport by itself sufficient to achieve total
*application-semantic* security? No. But a browser couldn't determine that
level of security anyway. Our goal is for the browser to tell as much of
the truth as it can programmatically determine at run-time.


Chris Palmer

Dec 14, 2014, 1:25:21 PM
to Michael Ströder, mozilla-de...@lists.mozilla.org
On Sun, Dec 14, 2014 at 8:02 AM, Michael Ströder <mic...@stroeder.com>
wrote:

It's not only getting certificates. It's also getting the cert installed.
> Note
> that the majority of web content is not hosted on separate servers
> maintained
> by admins. So "Let's Encrypt" won't help either.
>

It's true that getting a certificate, and then installing it, are 2
separate problems. But it's not accurate to say that helping solve only 1
of the 2 problems does not help. It does help.

If your hosting provider doesn't have a way for you to install your
certificate for your site, file a bug with them. It's a problem they should
fix (e.g. hosted WordPress). I expect the availability of that feature will
be a market differentiator in the short term...

Igor Bukanov

Dec 14, 2014, 1:34:32 PM
to Chris Palmer, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
On 14 December 2014 at 18:59, Chris Palmer <pal...@google.com> wrote:

>
> Yes, unfortunately we have a collective action problem. (
> http://en.wikipedia.org/wiki/Collective_action#Collective_action_problem)
> But just because it's hard, doesn't mean we don't have try. I'd suggest
> that embedders ask embeddees to at least make HTTPS available, even if not
> the default.
>
> Also, keep in mind that this proposal is only to mark HTTP as non-secure —
> HTTP will still work, and you can still host your site over HTTP.
>

If serving content over HTTPS produces broken pages, the incentive to
enable encryption is very low. As was already mentioned, a solution to
that is to allow serving encrypted pages over http:// so that pages
referring to unencrypted elements would not break but would just produce
warnings. Such encrypted http:// would also allow fewer warnings for a
page where all content is available over a self-signed, key-pinned
certificate, as that is strictly more secure than plain HTTP.

Chris Palmer

Dec 14, 2014, 1:35:00 PM
to Alex Gaynor, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, Igor Bukanov, blink-dev, security-dev
On Sun, Dec 14, 2014 at 10:00 AM, Alex Gaynor <alex....@gmail.com> wrote:

Is there a plan for HTTP to eventually have an interstitial, the way HTTPS
> with a bogus cert does?


We (Chrome) have no current plan to do that. In the Beautiful Future when
some huge percentage of pageviews are shown via secure transport, it might
or might not make sense to interstitial HTTP then. I kind of doubt that it
will be a good idea, but who knows. We'll see.

Michael Ströder

Dec 14, 2014, 1:38:22 PM
to mozilla-de...@lists.mozilla.org
Chris Palmer wrote:
> On Sun, Dec 14, 2014 at 8:02 AM, Michael Ströder <mic...@stroeder.com>
> wrote:
>> It's not only getting certificates. It's also getting the cert
>> installed. Note that the majority of web content is not hosted on
>> separate servers maintained by admins. So "Let's Encrypt" won't help
>> either.
>
> It's true that getting a certificate, and then installing it, are 2
> separate problems. But it's not accurate to say that helping solve only 1
> of the 2 problems does not help.

I did *not* say solving 1 of 2 does not help. I just said there is also a 2.
major problem to be solved.

> If your hosting provider doesn't have a way for you to install your
> certificate for your site, file a bug with them. It's a problem they should
> fix (e.g. hosted WordPress).

I just wanted to point out that a large amount of domains will be affected
(marked as non-secure) where the domain owner cannot quickly do anything about it.

> I expect the availability of that feature will
> be a market differentiator in the short term...

But migrating domains from one hosting provider to another can be complex. Not
something people do within a day as a side job.

Ciao, Michael.

Chris Palmer

Dec 14, 2014, 1:41:03 PM
to Igor Bukanov, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
On Sun, Dec 14, 2014 at 10:34 AM, Igor Bukanov <ig...@mir2.org> wrote:

If serving content over HTTPS produces broken pages, the incentive to
> enable encryption is very low.
>

That's the definition of a collective action problem, yes.

I think that the incentives will change, and are changing, and people are
becoming more aware of the problems of non-secure transport. There is an
on-going culture shift, and more and more publishers are going to offer
HTTPS. For example,
http://open.blogs.nytimes.com/author/eitan-konigsburg/?_r=0.

As was already mentioned, a solution to that is to allow serving
> encrypted pages over http:// so that pages referring to unencrypted
> elements would not break but would just produce warnings. Such encrypted
> http:// would also allow fewer warnings for a page where all content is
> available over a self-signed, key-pinned certificate, as that is strictly
> more secure than plain HTTP.
>

But, again, consider the definition of the origin. If it is possible for
securely-transported code to run in the same context as non-securely
transported code, the securely-transported code is effectively non-secure.

Chris Palmer

Dec 14, 2014, 1:44:57 PM
to Michael Ströder, mozilla-de...@lists.mozilla.org
On Sun, Dec 14, 2014 at 10:37 AM, Michael Ströder <mic...@stroeder.com>
wrote:

I just wanted to point out that a large amount of domains will be affected
> (marked as non-secure) where the domain owner cannot quickly do anything
> about it.
>

That's why we propose a gradual transition, and why we and Richard Barnes
of Mozilla prefer to predicate the transition on increased deployment of
HTTPS.


> But migrating domains from one hosting provider to another can be complex.
> Not
> something people do within a day as a side job.


Yes, I suspect that most people who are about to migrate from HTTP to HTTPS
are the people who have that as their main job (as e.g.
http://open.blogs.nytimes.com/2014/11/13/embracing-https/).

The long tail will come later, and most likely they won't migrate; instead,
their hosting providers will do it for them as a service. That is as it
should be.

Michael Ströder

Dec 14, 2014, 1:48:09 PM
to mozilla-de...@lists.mozilla.org
Chris Palmer wrote:
> Reducing the number of parties you have to trust from [ the site operator,
> the operators of all networks between you and the site operator ] to just [
> the site operator ] is a huge win.

Yes, I agree. But to really guarantee this you would have to block all the
shiny SSL facade reverse proxy services out there. ;-)

I suspect your approach will rather make people rush into such centralized
SSL facade services, with data transmitted in the clear behind the service.
If this happens, those services will be an even more attractive
interception point.

Ciao, Michael.

Igor Bukanov

Dec 14, 2014, 1:48:14 PM
to Chris Palmer, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
On 14 December 2014 at 19:40, Chris Palmer <pal...@google.com> wrote:

>
> But, again, consider the definition of the origin. If it is possible for
> securely-transported code to run in the same context as non-securely
> transported code, the securely-transported code is effectively non-secure.
>

Yes, but the point is that the page will be shown with the same warnings as
a plain http page rather than showing a broken page.

Michael Ströder

Dec 14, 2014, 1:53:10 PM12/14/14
to mozilla-de...@lists.mozilla.org
Chris Palmer wrote:
> HTTPS. For example,
> http://open.blogs.nytimes.com/author/eitan-konigsburg/?_r=0.

It does not make sense to link to NYT articles which are behind a paywall.
Or did you want to show us that the paywall login page is HTTPS? ;-)

Ciao, Michael.

Igor Bukanov

Dec 14, 2014, 1:53:36 PM12/14/14
to Chris Palmer, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
I.e. just consider that currently a hosting provider has no option to
unconditionally encrypt the pages they host for modern browsers, as that may
break users' pages. With encrypted http:// they would get such an option,
delegating the job of fixing warnings about insecure content to the content
producers, as it should be.

Chris Palmer

Dec 14, 2014, 1:58:21 PM12/14/14
to Michael Ströder, mozilla-de...@lists.mozilla.org
On Sun, Dec 14, 2014 at 10:52 AM, Michael Ströder <mic...@stroeder.com>
wrote:

> http://open.blogs.nytimes.com/author/eitan-konigsburg/?_r=0.
>
> It does not make sense linking to NYT articles which are behind a paywall.
> Or did you want to show us that the paywall login page is HTTPS? ;-)
>

I can get to that page without a paywall. Maybe they distinguish based on
client IP (non-US?)? Or maybe you have read your "10 free articles this
month"? :)

But anyway, it's an article about how they challenge news services to go to
HTTPS, presumably including themselves.

Chris Palmer

Dec 14, 2014, 2:04:16 PM12/14/14
to Michael Ströder, mozilla-de...@lists.mozilla.org
On Sun, Dec 14, 2014 at 10:47 AM, Michael Ströder <mic...@stroeder.com>
wrote:

> Chris Palmer wrote:
> > Reducing the number of parties you have to trust from [ the site operator,
> > the operators of all networks between you and the site operator ] to just [
> > the site operator ] is a huge win.
>
> Yes, I agree. But to really guarantee this you would have to block all the
> shiny SSL facade reverse proxy services out there. ;-)
>

That edges up to the remote attestation problem (which is unsolvable). We
just want the browser to tell the truth about what it can determine.

If the site operator chooses to deploy in a not perfectly safe manner, that
is a separate problem that the community will have to solve by other means.


> I suspect your approach will rather make people rush into using such
> central
> SSL fake services and data is transmitted in clear behind the service. If
> this
> happens those services will be an even more attractive interception point.
>

We will have to try to deal with that somehow (most likely at "layer 8").
But, are you saying that, for this reason, the status quo is somehow
acceptable or even preferable?

Chris Palmer

Dec 14, 2014, 2:08:36 PM12/14/14
to Igor Bukanov, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
On Sun, Dec 14, 2014 at 10:53 AM, Igor Bukanov <ig...@mir2.org> wrote:

> I.e. just consider that currently a hosting provider has no option to
> unconditionally encrypt pages they host for modern browsers as that may
> break pages of the users. With encrypted http:// they get such option
> delegating the job of fixing warnings about insecure context to the content
> producers as it should.
>

I'm sorry; I still don't understand what you mean. Do you mean that you
want browsers to treat some hypothetical encrypted HTTP protocol as if it
were a secure origin, but still allow non-secure embedded content in these
origins?

I would argue strongly against that, and so far not even the "opportunistic
encryption" advocates have argued for that.

Igor Bukanov

Dec 14, 2014, 2:26:52 PM12/14/14
to Chris Palmer, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, security-dev
I would like to see some hypothetical encrypted http:// where a browser
presents a page as if it were over https:// if everything is of a secure
origin, and as if it were served over plain http if not. That is, if a future
browser shows warnings for plain http, it will show the same warnings
for encrypted http:// with insecure resources.

The point of such encrypted http:// is to guarantee that *enabling
encryption never degrades user experience* compared with the case of plain
http. This will allow a particular installation to start serving everything
encrypted independently of the job of fixing the content. And as the page is
still served as http://, the user training/expectations about https:// sites
no longer apply.

Michael Ströder

Dec 14, 2014, 2:33:14 PM12/14/14
to mozilla-de...@lists.mozilla.org
Chris Palmer wrote:
> On Sun, Dec 14, 2014 at 10:47 AM, Michael Ströder <mic...@stroeder.com>
> wrote:
>
>> Chris Palmer wrote:
>>> Reducing the number of parties you have to trust from [ the site operator,
>>> the operators of all networks between you and the site operator ] to just [
>>> the site operator ] is a huge win.
>>
>> Yes, I agree. But to really guarantee this you would have to block all the
>> shiny SSL facade reverse proxy services out there. ;-)
>
> That edges up to the remote attestation problem (which is unsolvable). We
> just want the browser to tell the truth about what it can determine.

Telling/attesting something to the browser is meaningless.
You have to tell the user something.
And the user often has a false understanding of technical issues.

> If the site operator chooses to deploy in a not perfectly safe manner, that
> is a separate problem that the community will have to solve by other means.
>
>> I suspect your approach will rather make people rush into using such
>> central
>> SSL fake services and data is transmitted in clear behind the service. If
>> this
>> happens those services will be an even more attractive interception point.
>
> We will have to try to deal with that somehow (most likely at "layer 8").
> But, are you saying that, for this reason, the status quo is somehow
> acceptable or even preferable?

Over the last years I have seen many so-called security mechanisms push
things in the wrong direction.

In this case:
Wiretapping is easier if traffic is going through only a few channels. If you
force everybody to speak HTTPS, you might encourage leveraging central SSL
reverse proxies, leading to even easier traffic interception.
This is probably not what you want.

All in all:
I'm not against this approach in general. But it might push things in the
wrong direction and never achieve its good intentions.

Ciao, Michael.

Igor Bukanov

Dec 14, 2014, 2:46:48 PM12/14/14
to Michael Ströder, mozilla-de...@lists.mozilla.org
On 14 December 2014 at 20:32, Michael Ströder <mic...@stroeder.com> wrote:

> if you
> force everybody to speak HTTPS you might endorse leveraging central SSL
> reverse proxies leading to even easier traffic interception.
>

Indeed. In an https-only world an ISP may just require the user to install
its certificate. Still, that would be better than the current situation, where
it is very cheap to sniff a neighbor's cable-modem link or use a fake GSM
node; it would revert the situation to that of several years ago, when
sniffing was realistically possible only at the ISP level.

Igor Bukanov

Dec 14, 2014, 3:04:12 PM12/14/14
to Michal Zalewski, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, blink-dev, security-dev
On 14 December 2014 at 20:47, Michal Zalewski <lca...@google.com> wrote:

> The main point of having a visible and stable indicator for encrypted
> sites is to communicate to the user that the site offers a good degree
> of resilience against the examination or modification of the exchanged
> data by network attackers.
>

Then the browser should show absolutely no indication of a secure origin for
encrypted http://. The idea is that the encrypted http:// experience would be
equivalent to the current http experience, with no indications of security
and no warnings. However, encrypted http:// with insecure elements will
start to produce warnings in the same way a future browser will show
warnings for plain http.

Without something like this I just do not see how a lot of sites could ever
start enabling encryption unconditionally. I.e. currently enabling https
requires modifying content, often in a significant way. I would like a site
operator to have an option to enable encryption unconditionally without
touching the content.

Christian Heutger

Dec 14, 2014, 4:42:45 PM12/14/14
to Chris Palmer, dev-se...@lists.mozilla.org, blin...@chromium.org, public-w...@w3.org, edu...@agoravoting.com, securi...@chromium.org
> Reducing the number of parties you have to trust from [ the site operator, the operators of all networks between you and the site operator ] to just [ the site operator ] is a huge win.

But how can I trust him, and who is he? No WHOIS records, no imprint, all spoofable, so in what should I trust then? If there is a third party who states to me that the details given are correct, and who offers a warranty against misinformation, that is something I could trust. When shopping online I also look for customer reviews, but I do not consider them fully trustworthy, as they may be spoofed; if the shop has a seal like Trusted Shops with a money-back guarantee, I feel good and shop there.

> I think you'll find EV is not as "extended" as you might be hoping.

I know, but it’s the best we currently have. And DV is much worse, finally losing any trust in HTTPS, scaling it down to encryption and nothing else.

> But more importantly, the only way to get minimal server auth, data integrity, and data confidentiality on a mass scale is with something at least as easy to deploy as DV. Indeed, you'll see many of the other messages in this thread are from people concerned that DV isn’t
> easy enough yet! So requiring EV is a non-starter.

I agree on data confidentiality, and maybe also on integrity, although DV without effort or cost may undermine that as well; but server auth by itself says little, as I simply get whatever server endpoint I called, and nothing more is authenticated. However, I support the idea of mass encryption; but before confusing end users and damaging their understanding of internet security, there needs to be a clear differentiation between mere encryption and encryption with valid authentication.

> HTTPS is the bare minimum requirement for secure web application *transport*. Is secure transport by itself sufficient to achieve total *application-semantic* security? No. But a browser couldn't determine that level of security anyway. Our goal is for the browser to tell
> as much of the truth as it can programatically determine at run-time.

But wasn’t that the idea of certificates? Seals on websites can be spoofed, WHOIS records can be spoofed, imprints can be spoofed, but spoofing EV certificates, e.g. in combination with solutions like pinning, is a hard job. If there were no browser warning for self-signed certificates, I would not see any advantage in mass-deployed DV certificates (which finally require a fully automated process). It’s a bit back to the roots, to times I remember when some website operators offered their self-signed root to be installed in the browser to remove the browser warning.

Peter Bowen

Dec 14, 2014, 5:57:48 PM12/14/14
to Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, Igor Bukanov, blink-dev, security-dev
On Sun, Dec 14, 2014 at 11:08 AM, 'Chris Palmer' via Security-dev
<securi...@chromium.org> wrote:
> On Sun, Dec 14, 2014 at 10:53 AM, Igor Bukanov <ig...@mir2.org> wrote:
>
>> I.e. just consider that currently a hosting provider has no option to
>> unconditionally encrypt pages they host for modern browsers as that may
>> break pages of the users. With encrypted http:// they get such option
>> delegating the job of fixing warnings about insecure context to the content
>> producers as it should.
>
>
> I'm sorry; I still don't understand what you mean. Do you mean that you want
> browsers to treat some hypothetical encrypted HTTP protocol as if it were a
> secure origin, but still allow non-secure embedded content in these origins?

I'm also not clear on what Igor intended, but there is a real issue
with browser presentation of URLs using TLS today. There is no way to
declare "I know that this page will have insecure content, so don't
consider me a secure origin" such that the browser will show a
"neutral" icon rather than a warning icon. I think there is a strong
impression that a closed lock is better than neutral, but a yellow
warning sign over the lock is worse than neutral. This currently prevents
sites from using HTTPS unless they have very high confidence that
all resources on the page will come from secure origins.

Thanks,
Peter

Michal Zalewski

Dec 15, 2014, 1:13:45 AM12/15/14
to Igor Bukanov, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, blink-dev, security-dev
> I would like to see some hypothetical encrypted http:// when a browser
> present a page as if it was over https:// if everything of a secure origin
> and as if it was served over plain http if not. That is, if a future browser
> shows warnings for plain http, so it will show the same warnings for
> encrypted http:// with insecure resources.

Browsers have flirted with something along the lines of your proposal with
non-blocking mixed content icons. Unfortunately, websites are not
static, so the net effect was that if you watched the address bar
constantly, you'd eventually get notified that the previously-entered
data you thought would be visible only to a "secure" origin had
already been leaked to / exposed to network attackers.

The main point of having a visible and stable indicator for encrypted
sites is to communicate to the user that the site offers a good degree
of resilience against the examination or modification of the exchanged
data by network attackers. (It is a complicated property and it is
often misunderstood as providing clear-cut privacy assurances for your
online habits, but that's a separate topic.)

Any changes that make this indicator disappear randomly at unexpected
times, or make the already-complicated assurances more fragile and
even harder to explain, are probably not the right way to go.

/mz

Michal Zalewski

Dec 15, 2014, 1:13:59 AM12/15/14
to Igor Bukanov, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, blink-dev, security-dev
> Then browser should show absolutely no indications of secure origin for
> encrypted http://. The idea is that encrypted http:// experience would be
> equivalent to the current http experience with no indications of security
> and no warnings. However, encrypted http:// with insecure elements will
> start to produce warnings in the same way a future browser will show
> warnings for plain http.

As mentioned in my previous response, this gets *really* hairy because
the "has insecure elements" part is not a static property that can be
determined up front; so, you end up with the problem of sudden and
unexpected downgrades and notifying the user only after the
confidentiality or integrity of the previously-stored data has been
compromised.

/mz

Igor Bukanov

Dec 15, 2014, 4:16:55 AM12/15/14
to Peter Bowen, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, blink-dev, security-dev
On 14 December 2014 at 23:57, Peter Bowen <pzb...@gmail.com> wrote:

> I think there is a strong
> impression that a closed lock is better than neutral, but a yellow
> warning sign over the lock is worse than neutral.
>

The problem is not just a warning sign.

Browsers prevent any active content, including iframes served over http,
from loading. Thus showing a page with YouTube and other videos over https is
not an option unless one fixes the page. Now consider that it is not a
matter of running sed on a set of static files but rather of patching the
content stored in the database or fixing the JS code that inserts the video,
and the task of enabling https becomes non-trivial and very content-dependent.
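To make the point concrete, here is a rough Python sketch of the kind of content rewrite a site operator faces; the whitelist of embed hosts known to serve the same content over HTTPS is hypothetical, and maintaining it is exactly the part that makes the job content-dependent:

```python
import re

# Hosts assumed (for this sketch) to serve identical content over HTTPS;
# anything else must be reviewed by hand, which is why this is not just
# "run sed on the files".
KNOWN_HTTPS_HOSTS = ("www.youtube.com", "player.vimeo.com")

def upgrade_embeds(html):
    """Rewrite http:// URLs to https:// for known-good hosts only."""
    def repl(match):
        host = match.group(1)
        if host in KNOWN_HTTPS_HOSTS:
            return "https://" + host
        return match.group(0)  # leave unknown hosts untouched for review
    return re.sub(r"http://([A-Za-z0-9.-]+)", repl, html)

# A whitelisted embed is upgraded; an unknown host is left alone.
print(upgrade_embeds('<iframe src="http://www.youtube.com/embed/x">'))
# → <iframe src="https://www.youtube.com/embed/x">
```

Static files, database fields, and generated JS would each need a pass like this, and every host left untouched is a page that still cannot go HTTPS without a warning.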

So indeed an option is needed to declare that, despite proper certificates
and encryption, the site should be treated as an insecure origin. This way
the page will be shown as if it were served, as before, over plain http,
with no changes in user experience. But then it cannot be an https site,
since many users still consider https enough to assume a secure site. Hence
the idea of encrypted http://, or something that makes the user experience
with an encrypted page absolutely the same as with plain http://, down to
the browser stripping http:// from the URL.

After considering this, I think it would even be fine for a future browser
to show a warning for such
properly-encrypted-but-explicitly-declared-as-insecure pages, in the same way
a warning will be shown for plain http. And it would be really nice if a site
operator, after activating such user-invisible encryption, could receive
reports from the browser about any violation of the secure-origin policy, in
the same way violations of CSP are reported today. This would give a nice
possibility of activating encryption without breaking anything, collecting
reports of violated secure-origin policy, fixing the content, and finally
declaring the site explicitly https-only.

Michal Zalewski

Dec 15, 2014, 4:30:32 AM12/15/14
to Igor Bukanov, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Peter Bowen, blink-dev, security-dev, Eduardo Robles Elvira
> So indeed an option to declare that despite proper certificates and
> encryption the site should be treated as of insecure origin is needed. This
> way the page will be shown as if it was served as before with plain http
> with no changes in user experience. But then it cannot be a https site
> since many users still consider that https is enough to assume a secure
> site. Hence the idea of encrypted http:// or something that makes user
> experience with an encrypted page absolutely the same as she has with plain
> http:// down to the browser stripping http:// from the URL.

Sounds like you're essentially proposing a flavor of opportunistic
encryption for http://, right?

That seems somewhat tangential to Chris' original proposal, and there
is probably a healthy debate to be had about this; it may also be
worthwhile to look at SPDY and QUIC. In general, if you're comfortable
with not providing users with a visible / verifiable degree of
transport security, I'm not sure how the proposal changes this?

By the way, note that nothing is as simple as it seems; opportunistic
encryption is easy to suggest, but it's pretty damn hard to iron out
all the kinks. If there is genuinely no distinction between plain old
HTTP and opportunistically encrypted HTTP, the scheme can be
immediately rendered useless by any active attacker, and suffers from
many other flaws (for example, how do you link from within that
opportunistic scheme to other URLs within your application without
downgrading to http:// or upgrading to "real" https://?). Establishing
a new scheme solves that, but doesn't really address your other
concern - you still need to clean up all links. On top of that, it
opens a whole new can of worms by messing around with SOP.

/mz

Ryan Sleevi

Dec 15, 2014, 4:38:54 AM12/15/14
to Jeffrey Walton, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, security-dev, Christian Heutger
On Dec 15, 2014 1:29 AM, "Jeffrey Walton" <nolo...@gmail.com> wrote:
>
> On Sat, Dec 13, 2014 at 2:05 PM, Christian Heutger
> <chri...@heutger.net> wrote:
> > I see a big danger in the current trend.
>
> Surely you haven't missed the big danger in plain text traffic. That
> traffic gets usurped and fed into systems like XKeyscore for Tailored
> Access Operations (TAO). In layman's terms, adversaries are using the
> information gathered to gain unauthorized access to systems.
>
> > Expecting everyone having a free
> > „secure“ certificate and being in requirement to enable HTTPS it will result
> > in nothing won. DV certificates (similar to DANE) do finally say absolute
> > nothing about the website operator.
>
> The race to the bottom among CAs is to blame for the quality of
> verification by the CAs.
>
> With companies like StartCom, Cacert and Mozilla offering free
> certificates, there is no barrier to entry.
>
> Plus, I don't think a certificate needs to say anything about the
> operator. They need to ensure the server is authenticated. That is,
> the public key bound to the DNS name is authentic.
>
> > They ensure encryption, so I can then be
> > phished, be scammed, … encrypted. Big advantage!^^
>
> As I understand it, phishers try to avoid TLS because they count on
> the plain text channel to avoid all the browser warnings. Peter
> Gutmann discusses this in his Engineering Security book
> (https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf).
>
> > Pushing real validation
> > (e.g. EV with green adressbar and validated details by an independent third
> > party, no breakable, spoofable automatism) vs. no validation is much more
> > important and should be focussed on.
>
> You should probably read Gutmann's Engineering Security. See his
> discussion of "PKI me harder" in Chapter 1 or 6 (IIRC).
>
> > However, this „change“ could come with
> > marking HTTP as Non-Secure, but just stating HTTPS as secure is the
> > completely wrong sign and will result in more confusion and loosing any
> > trust in any kind of browser padlocks than before.
>
> Security engineering studies seem to indicate most users don't
> understand the icons. It would probably be better if the browsers did
> the right thing, and took the users out of the loop. Gutmann talks
> about it in detail (with lots of citations).
>
> > Just a proposal:
> >
> > Mark HTTP as Non-Secure (similar to self-signed) e.g. with a red
padlock or
> > sth. similar.
>
> +1. In the browser world, plaintext was (still is?) held in higher
> esteem than opportunistic encryption. Why the browsers choose to
> indicate things this way is a mystery.
>
> > Mark HTTPS as Secure (and only secure in favor of encrypted) e.g. with a
> > yellow padlock or sth. similar
> > Mark HTTPS with Extended Validation (encrypted and validated) as it is
with
> > a green padlock or sth. similar
>
> Why green for EV (or why yellow for DV or DANE)? EV does not add any
> technical controls. From a security standpoint, DV and EV are
> equivalent.
>

From an SOP point of view, this is true.
However, it is increasingly less true if you're willing to ignore the (near
cataclysmic) SOP failure, as EV gains technical controls such as
certificate transparency and potentially mandatory stronger security
settings (e.g. secure ciphersuites in modern TLS, OCSP stapling, etc).
Additionally, there are other technical controls (validity periods, key
processing) that do offer distinction.

That is, it is not all procedural changes, and UAs can detect and
differentiate. While the hope is that these will be able to apply to all
sites in the future, any change of this scale takes time.

> If DNS is authentic, then DANE provides stronger assurances than DV or
> EV since the domain operator published the information and the
> veracity does not rely on others like CAs (modulo DBOUND).
>
> Not relying on a CA is a good thing since its usually advantageous to
> minimize trust (for some definition of "trust"). Plus, CAs don't
> really warrant anything, so its not clear what exactly they are
> providing to relying parties (they are providing a a signature for
> money to the applicant).
>
> Open question: do you think the browsers will support a model other
> than the CA Zoo for rooting trust?

Chromium has no plans for this, particularly those based on DNS/DANE, which
are empirically less secure and more operationally fraught with peril. I
would neither take it as foregone that the CA system cannot improve nor am
I confident that any of the known alternatives are either practical or
comparable in security to CAs, let alone superior.

Igor Bukanov

Dec 15, 2014, 5:03:23 AM12/15/14
to Michal Zalewski, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Peter Bowen, blink-dev, security-dev, Eduardo Robles Elvira
On 15 December 2014 at 10:30, Michal Zalewski <lca...@google.com> wrote:

> That seems somewhat tangential to Chris' original proposal, and there
> is probably a healthy debate to be had about this; it may be also
> worthwhile to look at SPDY and QUIC. In general, if you're comfortable
> with not providing users with a visible / verifiable degree of
> transport security, I'm not sure how the proposal changes this?
>


Chris' original proposal is a stick. I also want to give a site operator a
carrot. That can be an option to activate encryption that is not visible to
the user and *receive* from the browser all reports about violations of
secure origin policy. This way the operator will know that they can
activate HTTPS without worsening user experience and have information that
helps to fix the content.

> If there is genuinely no distinction between plain old
> HTTP and opportunistically encrypted HTTP, the scheme can be
> immediately rendered useless by any active attacker
>

I am not proposing that user-invisible encryption should stay forever.
Rather, it should be treated just as a tool to help site operators
transition to proper https, so that at no stage the user experience would be
worse than continuing to serve pages over plain http.

Gervase Markham

Dec 15, 2014, 5:50:12 AM12/15/14
to mozilla-de...@lists.mozilla.org
On 14/12/14 21:41, Christian Heutger wrote:
> But how can I trust him and who is he?

You are trying to solve a different problem than the problem solved by
DV certificates.

Your problem is a reasonable one to want to solve, and EV certificates
solve it to a degree, but it is not a problem we are required to solve
in order to move the world to HTTPS.

> But wasn’t that the idea of certificates?

Maybe originally; but it turns out, there's a lot of value in many
circumstances in domain-name-based endpoint authentication.

Gerv

Daniel Veditz

Dec 15, 2014, 12:55:58 PM12/15/14
to Igor Bukanov, Michal Zalewski, Chris Palmer, public-w...@w3.org, dev-se...@lists.mozilla.org, Peter Bowen, blink-dev, security-dev, Eduardo Robles Elvira
On 12/15/14 2:03 AM, Igor Bukanov wrote:
> Chris' original proposal is a stick. I want to give a site operator also
> a carrot. That can be an option to activate encryption that is not
> visible to the user and *receive* from the browser all reports about
> violations of secure origin policy. This way the operator will know that
> they can activate HTTPS without worsening user experience and have
> information that helps to fix the content.

Serve the HTML page over http: but load all sub-resources over https: as
expected after the transition. Add the following header:

Content-Security-Policy-Report-Only: default-src https:; report-uri <me>

(add "script-src https: 'unsafe-inline' 'unsafe-eval';" if necessary)

This doesn't give you the benefit of encrypting your main HTML content
during the transition as you requested, but it is something that can be
done today. When the reports come back clean enough you can switch the
page content to https too.

-Dan Veditz
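As a small illustration of the reporting side, here is a Python sketch of what a collector behind that report-uri might do with the JSON a browser POSTs back; the field names follow the CSP 1.0 "csp-report" format, and the sample values are invented:

```python
import json

# A CSP violation report as a browser would POST it to the report-uri
# (field names per the CSP 1.0 "csp-report" wrapper; values illustrative).
SAMPLE = """{
  "csp-report": {
    "document-uri": "http://example.com/article",
    "violated-directive": "default-src https:",
    "blocked-uri": "http://cdn.example.com/player.js"
  }
}"""

def insecure_subresources(raw):
    """Extract (page, blocked resource) pairs for http:// sub-resources.

    These are exactly the loads that would become mixed content once
    the page itself moves to https://.
    """
    report = json.loads(raw).get("csp-report", {})
    blocked = report.get("blocked-uri", "")
    if blocked.startswith("http://"):
        return [(report.get("document-uri", ""), blocked)]
    return []

for page, res in insecure_subresources(SAMPLE):
    print(page, "->", res)
# prints: http://example.com/article -> http://cdn.example.com/player.js
```

Once reports like these stop arriving, the operator knows the content is clean and the main page can be flipped to https as Dan describes.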

Alex Gaynor

Dec 15, 2014, 6:53:11 PM12/15/14
to Adrienne Porter Felt, ferdy.c...@gmail.com, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Indeed, the notion that users don't care is based on an ill-founded premise
of informed consent.

Here's a copy-pasted comment I made elsewhere on this topic:

To respect a user's decision, their decision needs to be an informed one,
and it needs to be a choice. I don't think there's a reasonable basis to say
either of those is the case with users using HTTP in favor of HTTPS:

First, is it a choice: given that browsers default to HTTP, when no
protocol is explicitly selected, and that many users will access the site
via external links that they don't control, I don't think it's fair to say
that users choose HTTP, they simply get HTTP.

Second, if we did say they'd made a choice, was it an informed one? We, as an
industry, have done a very poor job of educating users about the security
implications of actions online. I don't believe most non-technical users
understand the implications of the loss of Authentication, Integrity, or
Confidentiality that comes with preferring HTTP to HTTPS.

Given the fact that most users don't proactively consent to having their
content spied upon or mutated in transit, and insofar as they do, it is not
informed consent, I don't believe website authors have any obligation to
provide access to content over dangerous protocols like HTTP.


Alex

On Mon Dec 15 2014 at 3:50:36 PM Adrienne Porter Felt <fe...@chromium.org>
wrote:

> If someone thinks their users are OK with their website not having
> integrity/authentication/privacy, then why is it problematic that Chrome
> will start telling users about it? Presumably these users would still be OK
> with it after Chrome starts making the situation more obvious. (And if the
> users start disliking it, then perhaps they really were never OK with it in
> the first place?)
>
> On Mon, Dec 15, 2014 at 3:28 PM, <ferdy.c...@gmail.com> wrote:
>>
>> I'm a small website owner and I believe this proposal will upset a lot of
>> small hosters and website owners. In particular for simple content
>> websites, https is a burden for them in time and cost, and does not add much
>> value. I don't need to be convinced of the security advantages of this
>> proposal, I'm just looking at the practical aspects of it. Furthermore, as
>> mentioned here there is the issue of mixed content and plugins you don't
>> own or control. I sure hope any such warning message is very subtle,
>> otherwise a lot of traffic will be driven away from websites.
>>
>> On Saturday, December 13, 2014 1:46:39 AM UTC+1, Chris Palmer wrote:
>>
>>> Hi everyone,
>>>
>>> Apologies to those of you who are about to get this more than once, due
>>> to the cross-posting. I'd like to get feedback from a wide variety of
>>> people: UA developers, web developers, and users. The canonical location
>>> for this proposal is:
>>> https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.
>>>
>>> Proposal
>>>
>>> We, the Chrome Security Team, propose that user agents (UAs) gradually
>>> change their UX to display non-secure origins as affirmatively non-secure.
>>> We intend to devise and begin deploying a transition plan for Chrome in
>>> 2015.
>>>
>>> The goal of this proposal is to more clearly display to users that HTTP
>>> provides no data security.
>>>
>>> Request
>>>
>>> We’d like to hear everyone’s thoughts on this proposal, and to discuss
>>> with the web community about how different transition plans might serve
>>> users.
>>>
>>> Background
>>>
>>> We all need data communication on the web to be secure (private,
>>> authenticated, untampered). When there is no data security, the UA should
>>> explicitly display that, so users can make informed decisions about how to
>>> interact with an origin.
>>>
>>> Roughly speaking, there are three basic transport layer security states
>>> for web origins:
>>>
>>>
>>> - Secure (valid HTTPS, other origins like (*, localhost, *));
>>> - Dubious (valid HTTPS but with mixed passive resources, valid HTTPS
>>>   with minor TLS errors); and
>>> - Non-secure (broken HTTPS, HTTP).
>>>
>>>
>>> For more precise definitions of secure and non-secure, see Requirements
>>> for Powerful Features <http://www.w3.org/TR/powerful-features/> and Mixed
>>> Content <http://www.w3.org/TR/mixed-content/>.
>>>
>>> We know that active tampering and surveillance attacks, as well as
>>> passive surveillance attacks, are not theoretical but are in fact
>>> commonplace on the web.
>>>
>>> RFC 7258: Pervasive Monitoring Is an Attack
>>> <https://tools.ietf.org/html/rfc7258>
>>>
>>> NSA uses Google cookies to pinpoint targets for hacking
>>> <http://www.washingtonpost.com/blogs/the-switch/wp/2013/12/10/nsa-uses-google-cookies-to-pinpoint-targets-for-hacking/>
>>>
>>> Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine
>>> <http://www.wired.com/2014/10/verizons-perma-cookie/>
>>>
>>> How bad is it to replace adSense code id to ISP's adSense ID on free
>>> Internet?
>>> <http://stackoverflow.com/questions/25438910/how-bad-is-it-to-replace-adsense-code-id-to-isps-adsense-id-on-free-internet>
>>>
>>> Comcast Wi-Fi serving self-promotional ads via JavaScript injection
>>> <http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/>
>>>
>>> Erosion of the moral authority of transparent middleboxes
>>> <https://tools.ietf.org/html/draft-hildebrand-middlebox-erosion-01>
>>>
>>> Transitioning The Web To HTTPS <https://w3ctag.github.io/web-https/>
>>>
>>> We know that people do not generally perceive the absence of a warning
>>> sign. (See e.g. The Emperor's New Security Indicators
>>> <http://commerce.net/wp-content/uploads/2012/04/The%20Emperors_New_Security_Indicators.pdf>.)
>>> Yet the only situation in which web browsers are guaranteed not to warn
>>> users is precisely when there is no chance of security: when the origin is
>>> transported via HTTP. Here are screenshots of the status quo for non-secure
>>> domains in Chrome, Safari, Firefox, and Internet Explorer:
>>>
>>> [image: Screen Shot 2014-12-11 at 5.08.48 PM.png]
>>>
>>> [image: Screen Shot 2014-12-11 at 5.09.55 PM.png]
>>>
>>> [image: Screen Shot 2014-12-11 at 5.11.04 PM.png]
>>>
>>> [image: ie-non-secure.png]
>>>
>>> Particulars
>>>
>>> UA vendors who agree with this proposal should decide how best to phase
>>> in the UX changes given the needs of their users and their product design
>>> constraints. Generally, we suggest a phased approach to marking non-secure
>>> origins as non-secure. For example, a UA vendor might decide that in the
>>> medium term, they will represent non-secure origins in the same way that
>>> they represent Dubious origins. Then, in the long term, the vendor might
>>> decide to represent non-secure origins in the same way that they represent
>>> Bad origins.
>>>
>>> Ultimately, we can even imagine a long term in which secure origins are
>>> so widely deployed that we can leave them unmarked (as HTTP is today), and
>>> mark only the rare non-secure origins.
>>>
>>> There are several ways vendors might decide to transition from one phase
>>> to the next. For example, the transition plan could be time-based:
>>>
>>>
>>> 1. T0 (now): Non-secure origins unmarked
>>> 2. T1: Non-secure origins marked as Dubious
>>> 3. T2: Non-secure origins marked as Non-secure
>>> 4. T3: Secure origins unmarked
>>>
>>>
>>> Or, vendors might set thresholds based on telemetry that measures the
>>> ratios of user interaction with secure origins vs. non-secure. Consider
>>> this strawman proposal:
>>>
>>>
>>> 1. Secure > 65%: Non-secure origins marked as Dubious
>>> 2. Secure > 75%: Non-secure origins marked as Non-secure
>>> 3. Secure > 85%: Secure origins unmarked
>>>
>>>
>>> The particular thresholds or transition dates are very much up for
>>> discussion. Additionally, how to define “ratios of user interaction” is
>>> also up for discussion; ideas include the ratio of secure to non-secure
>>> page loads, the ratio of secure to non-secure resource loads, or the ratio
>>> of total time spent interacting with secure vs. non-secure origins.
>>>
>>> We’d love to hear what UA vendors, web developers, and users think.
>>> Thanks for reading!
>>>

Ryan Sleevi

Dec 15, 2014, 7:19:06 PM12/15/14
to ferdy.c...@gmail.com, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, Adrienne Porter Felt, security-dev
On Mon, Dec 15, 2014 at 4:10 PM, <ferdy.c...@gmail.com> wrote:
>
> "If someone thinks their users are OK with their website not having
> integrity/authentication/privacy"
>
> That is an assumption that doesn't apply to every website. Many websites
> don't even have authentication.
>

I think there may be some confusion.

"Authentication" here does not refer to "Does the user authenticate
themselves to the site" (e.g. do they log in), but "Is the site you're
talking to the site you expected" (or, put differently, "Does the server
authenticate itself to the user").

Without authentication in this sense (e.g. talking to whom you think you're
talking to), anyone can trivially impersonate a server and alter the
responses. This is not hard to do. Here are a few examples of why
authentication is important, even for sites without logins:

http://newstweek.com/
http://arstechnica.com/tech-policy/2014/09/why-comcasts-javascript-ad-injections-threaten-security-net-neutrality/
http://webpolicy.org/2014/10/24/how-verizons-advertising-header-works/

This is why it's important to know you're talking to the site you're
expecting (Authentication), and that no one has modified that site's
contents (Integrity).

Christian Heutger

Dec 15, 2014, 8:13:00 PM12/15/14
to nolo...@gmail.com, dev-se...@lists.mozilla.org, blin...@chromium.org, public-w...@w3.org, securi...@chromium.org
>Surely you haven't missed the big danger in plain text traffic. That
>traffic gets usurped and fed into systems like XKeyscore for Tailored
>Access Operations (TAO). In layman's terms, adversaries are using the
>information gathered to gain unauthorized access to systems.

With DV (weak validation) it then goes encrypted to them; I don't see the
advantage. Tor, the supposed magic bullet against being monitored, has
also shown that the expected privacy may be broken. It's a good idea, but
stepping back from the value of PKIX because of it is the wrong way, in
my opinion.

>The race to the bottom among CAs is to blame for the quality of
>verification by the CAs.

Right, so DV needs to be deprecated or set to a recognizably lower level,
clearly stating that it's only encryption, nothing else.

>With companies like StartCom, Cacert and Mozilla offering free
>certificates, there is no barrier to entry.

And no barrier left, which breaks the value of certificate authorities
vs. self-signed certificates (CAcert is the only good exception; for a
good reason, their approach is different).

>Plus, I don't think a certificate needs to say anything about the
>operator. They need to ensure the server is authenticated. That is, the
>public key bound to the DNS name is authentic.

If a certificate doesn't tell, what should? How can I be sure I am on
www.onlinebanking.de and not www.onlínebanking.de (note the accent)
and not being spoofed or phished? It's the same for Facebook.com vs.
Facebo0k.com, ...

>As I understand it, phishers try to avoid TLS because they count on the
>plain text channel to avoid all the browser warnings. Peter Gutmann
>discusses this in his Engineering Security book
>(https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf).

If there is a free certificate for everyone and everything is https, which
browser warnings should occur?

>Why green for EV (or why yellow for DV or DANE)? EV does not add any
>technical controls. From a security standpoint, DV and EV are equivalent.

That's what certificates are for. If we only wanted encryption, there
would never be any requirement for certificates. Browsers and servers
handle cipher suites, handshakes, etc.; the certificate is the digital
equivalent of an authorized identity card, and there DV and EV are
certainly different. Security is about confidentiality, integrity and
availability. Confidentiality is the encryption; integrity is the
validation.

>If DNS is authentic, then DANE provides stronger assurances than DV or EV
>since the domain operator published the information and the veracity does
>not rely on others like CAs (modulo DBOUND).

From the purely technical standpoint, yes; from the validation
standpoint, no. DANE has the hassle of compatibility, and it also
struggles with harder mandatory enforcement of restrictions (online or
offline key material, key sizes, algorithms, Debian-bug or Heartbleed
reissues, ...; all the topics which recently arose). For pinning
validated (EV) certificates, it's the best solution vs. pinning or
transparency.

>Not relying on a CA is a good thing since it's usually advantageous to
>minimize trust (for some definition of "trust"). Plus, CAs don't really
>warrant anything, so it's not clear what exactly they are providing to
>relying parties (they are providing a signature for money to the
>applicant).

As there is no internet governance, they are the only available
alternative. Similar to other agencies worldwide, they charge money for
validation services and give warranties against mis-validation. They are
dictated strict rules on how to operate and are audited to prove they
follow these rules. That's how auditing currently works in many places,
and although it's not the optimal system, it's the one currently
available.

>Open question: do you think the browsers will support a model other than
>the CA Zoo for rooting trust?

If a reliable, usable and manageable concept is established, for sure.
But e.g. ISO 27001 establishes the same model: there is a company being
paid to state that what it audited is correct and to issue a seal (being
ISO 27001 certified) which end users should trust.

Ryan Sleevi

Dec 15, 2014, 8:21:03 PM12/15/14
to Christian Heutger, nolo...@gmail.com, blin...@chromium.org, public-w...@w3.org, dev-se...@lists.mozilla.org, securi...@chromium.org
On Mon, Dec 15, 2014 at 5:11 PM, Christian Heutger <chri...@heutger.net>
wrote:
>
There's a lot of information here that isn't quite correct, but I would
hate to rathole on a discussion of CA security vs the alternatives, which
inevitably arises when one discusses HTTPS in public fora.

I think the discussion here is somewhat orthogonal to the proposal at hand,
and thus might be best if kept to a separate thread.

The question of whether HTTP is equivalent to HTTPS-DV in security or
authenticity is simple: they aren't at all equivalent; HTTPS-DV
provides vastly superior value over HTTP. So whether or not UAs embrace
HTTPS-EV is separate. But as a UA vendor, I would say HTTPS-DV has far more
appeal for protecting users and providing security than HTTPS-EV, for many
of the reasons you've heard on this thread from others (e.g. the challenges
regarding mixed HTTPS+HTTP is the same as HTTPS-EV + HTTPS-DV).

So, assuming we have HTTP vs HTTPS-EV/HTTPS-DV, how best should UAs
communicate to the user the lack of security guarantees from HTTP?

Christian Heutger

Dec 15, 2014, 8:51:42 PM12/15/14
to rsl...@chromium.org, nolo...@gmail.com, blin...@chromium.org, public-w...@w3.org, dev-se...@lists.mozilla.org, securi...@chromium.org
> So, assuming we have HTTP vs HTTPS-EV/HTTPS-DV, how best should UAs communicate to the user the lack of security guarantees from HTTP?

I would recommend here as mentioned:

No padlock, red bar or red strike, … => no encryption [and no validation], e.g. similar to SHA-1 deprecation in the worst situation
HTTP vs. HTTPS only: padlock => everything fine and not red, „normal“ address bar behavior
With EV differentiation: padlock, yellow bar, yellow signal, … => only encryption, e.g. similar to current mixed content, …
EV: validation information, padlock, green bar, no extras, … => similar to current EV

Red-yellow-green is recognized all over the world; all traffic signals work like this, and an explanation of what each signal means can be added to the dialog on click. The (red) strike, (yellow) signal and (green) additional validation information also follow the idea of letting people who cannot differentiate colors understand what is happening.

Peter Kasting

Dec 15, 2014, 9:00:26 PM12/15/14
to Christian Heutger, nolo...@gmail.com, rsl...@chromium.org, public-w...@w3.org, dev-se...@lists.mozilla.org, blin...@chromium.org, securi...@chromium.org
On Mon, Dec 15, 2014 at 5:50 PM, Christian Heutger <chri...@heutger.net>
wrote:
>
Please don't try to debate actual presentation ideas on this list. How UAs
present various states is something the individual UA's design teams have
much more context and experience doing, so debating that sort of thing here
just takes everyone's time to no benefit, and is likely to rapidly become a
bikeshed in any case.

As the very first message in the thread states, the precise UX changes here
are up to the UA vendors. What's more useful is to debate the concept of
displaying non-secure origins as non-secure, and how to transition to that
state over time.

PK

Igor Bukanov

Dec 16, 2014, 12:29:33 AM12/16/14
to Daniel Veditz, Chris Palmer, public-w...@w3.org, Michal Zalewski, dev-se...@lists.mozilla.org, Peter Bowen, blink-dev, security-dev, Eduardo Robles Elvira
On 15 December 2014 at 18:54, Daniel Veditz <dve...@mozilla.com> wrote:

> Serve the HTML page over http: but load all sub-resources over https: as
> expected after the transition. Add the following header:
>
> Content-Security-Policy-Report-Only: default-src https:; report-uri <me>
>

This is a nice trick! However, it does not work in general due to the use
of protocol-relative links starting with // . Or should those be
discouraged?
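[Hedged aside: the interaction described above can be sketched in a few lines of Python. The function name and URLs are illustrative; `urljoin` approximates how a browser resolves scheme-relative URLs against the page's own scheme.]

```python
from urllib.parse import urljoin, urlsplit

def violates_https_only(page_url: str, resource_url: str) -> bool:
    """Would loading resource_url from page_url be reported under
    `Content-Security-Policy-Report-Only: default-src https:`?"""
    # Scheme-relative URLs (//host/path) inherit the page's scheme,
    # so on an http:// page they resolve to http:// and get reported.
    resolved = urljoin(page_url, resource_url)
    return urlsplit(resolved).scheme != "https"

# On the pre-migration http:// page, a scheme-relative URL is reported,
# even though the markup needs no change for the https:// deployment:
assert violates_https_only("http://example.com/", "//cdn.example.com/app.js")
# The same markup served over https:// is already compliant:
assert not violates_https_only("https://example.com/", "//cdn.example.com/app.js")
```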

Ryan Sleevi

Dec 16, 2014, 12:35:34 AM12/16/14
to Igor Bukanov, Chris Palmer, public-w...@w3.org, Michal Zalewski, Daniel Veditz, dev-se...@lists.mozilla.org, Peter Bowen, blink-dev, security-dev, Eduardo Robles Elvira
Sounds like a CSP bug to me; scheme-relative URLs are awesome, and we
should encourage them (over explicit http://-schemed URLs).

Igor Bukanov

Dec 16, 2014, 1:17:58 AM12/16/14
to Ryan Sleevi, ferdy.c...@gmail.com, public-w...@w3.org, dev-se...@lists.mozilla.org, Adrienne Porter Felt, blink-dev, security-dev
On 16 December 2014 at 01:18, Ryan Sleevi <rsl...@chromium.org> wrote:

> "Authentication" here does not refer to "Does the user authenticate
> themselves to the site" (e.g. do they log in), but "Is the site you're
> talking to the site you expected" (or, put differently, "Does
> the server authenticate itself to the user").
>

With protocols like SRP or J-PAKE, authentication in the first sense
(logging in) also provides authentication in the second sense (the
protocols ensure mutual authentication between the user and the server
without leaking passwords). I wish there were at least some support in
browsers for these protocols, so one could avoid certificates and
related problems in many useful cases.

Andy Wingo

Dec 16, 2014, 4:15:47 AM12/16/14
to Ryan Sleevi, Chris Palmer, public-w...@w3.org, Michal Zalewski, Daniel Veditz, dev-se...@lists.mozilla.org, Eduardo Robles Elvira, Igor Bukanov, blink-dev, security-dev, Peter Bowen
On Tue 16 Dec 2014 06:35, Ryan Sleevi <rsl...@chromium.org> writes:

> scheme-relative URLs are awesome, and we should encourage them (over
> explicit http://-schemed URLs)

Isn't it an antipattern to make a resource available over HTTP if it is
available over HTTPS? In all cases you could just use HTTPS; no need to
provide an insecure option.

The one case that I know of when scheme-relative URLs are useful is when
HTTPS is not universally accessible, e.g. when the server only supports
TLSv1.2 and so is not reachable from old Android phones, among other
UAs. In that case scheme-relative URLs allow you to serve the same
content over HTTPS to browsers that speak TLSv1.2 but also have it
available insecurely to older browsers.

If there is mention of scheme-relative URLs in a "Marking HTTP as
Non-Secure" set of guidelines for authors and site operators, it should
be to avoid them in favor of explicitly using the HTTPS scheme.
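[A minimal sketch of that guideline as a migration helper, assuming naive string rewriting is acceptable; the function name is illustrative, and a real migration should use an HTML parser rather than regexes.]

```python
import re

def prefer_https(html: str) -> str:
    """Rewrite explicit http:// and scheme-relative // references in
    src/href attributes to https://, per the guideline above."""
    html = re.sub(r'(src|href)="http://', r'\1="https://', html)
    html = re.sub(r'(src|href)="//', r'\1="https://', html)
    return html

assert prefer_https('<img src="http://example.com/a.png">') == \
       '<img src="https://example.com/a.png">'
assert prefer_https('<script src="//cdn.example.com/x.js">') == \
       '<script src="https://cdn.example.com/x.js">'
```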

Andy

Sigbjørn Vik

Dec 16, 2014, 8:59:33 AM12/16/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
I am happy to see this initiative; I consider the current standard
browser UI broken and upside-down. Today, plain http is not trustworthy,
but it still has the "normal" look in browsers. We ought to change this.

A few thoughts:

Users expect that when they come to a site that looks like Facebook, it
is Facebook. They expect any problems to be flagged, and unless there is
a warning, that everything is OK. They do not understand what most
icons/colors and dialogs mean, and are confused by the complexity of
security (and the web in general). A good UI should present the web the
way the user expects it to be presented. Expecting users to spend their
time learning and memorizing various browser UIs (user education) is
arrogant. Starting this discussion from the implementation details is
starting it in the wrong end.

One example of an experimental browser UI is Opera Coast. It goes much
further than cleaning up the security symbols, it removes the entire
address field. It uses a lot of extra background checks, with the aim to
allow users to browse without having to check addresses. If something
seems wrong, it will warn the user ahead of time. This seems to me to be
the ideal, where security is baked into the solution, not tacked on top.
From a user's perspective, it just works. I think revamping address bars
and badges should take the long term goal into consideration as well.
(I'll happily discuss Coast's solutions, but please start a new thread
if so.)

Browsers normally have 3-5 different visual security states in the UI:
normal (no security), DV and EV. Some browsers have special visual
indicators for various types of broken security (dubious, bad, etc). In
addition there are a multitude of corner cases. Although I can see the
use of three states, to support gradual degradation via the middle
state, more than three states is confusing, and the ideal should be
none, as in the above example.

Given three states for now, the question is how we want to display them.
We need one for general unsecured contents. We want one for top
security, i.e. all the latest encryption standards and EV. Then general
encryption would go into the last bucket. Encryption standards will have
to change over time. From a user perspective, a natural way to mark
three states would be as insecure (red/warning), normal (neutral/no
marking) and secure (green/padlock).

There is no need to distinguish unsecured from dubiously secured, they
can just go into the same bucket. There isn't even any need to warn
users about certificate errors, the UI is just downgraded to insecure,
as a self-signed site is no less secure than an http site. There are
technical reasons for the warnings, but those can be bug-fixed. Active
attacks (e.g. certificate replacement to an invalid one, HSTS failure,
revoked certificates, ...) might still be hard-blocked, but note that
this constitutes a fourth state, and the UI is becoming very complicated
already - there are probably better ways to map such cases into the
insecure state, but that is a separate discussion.

One issue is that browser UI is and should be a place for innovation,
not rigid specifications. At the same time, users would clearly benefit
from consistent and good UI. Diverging from the de-facto UI standard
towards a better one comes with a cost for browsers, and they might not
have the incentive to do so. A coordinated move towards a better future
would be good, as long as we avoid the hard limitations. Regardless of
this discussion, we do need better coordination for removing old crypto
standards (SHA-1, SSLv3, RC4, ...) from the "secure" bucket in the UI.
In short, I am all for a coordinated move, but there needs to be space
for browsers to innovate as well.

In terms of the transition plan, I think a date-based plan is the only
thing which will work. This gives all parties time to prepare, they know
when the next phase will start, and nobody will be arguing if we have
reached a milestone or not. It also avoids any deadlocks where the next
phase is needed to push the web to the state where the next phase will
begin. Any ambitious timeline will fail to get all players on board. A
multi-year plan is still better than the resulting user confusion if
browsers move on their own.

BTW, have you explicitly contacted other browser teams?

--
Sigbjørn Vik
Opera Software

ianG

Dec 16, 2014, 12:36:09 PM12/16/14
to dev-se...@lists.mozilla.org
Yup. We are moving from a world where everyone can sniff more or less
everything because there is so much leakage ... to a world where ISPs
and other ne'er-do-wells will have to do an active attack in order to do
their sniffing.

This is definitively a better result because an active attack can be
detected. A passive attack by its nature is undetectable. We can fight
the former, not the latter.



iang

ianG

Dec 16, 2014, 12:48:20 PM12/16/14
to dev-se...@lists.mozilla.org
On 14/12/2014 19:04 pm, Chris Palmer wrote:
> On Sun, Dec 14, 2014 at 10:47 AM, Michael Ströder <mic...@stroeder.com>
> wrote:
>
> Chris Palmer wrote:
>>> Reducing the number of parties you have to trust from [ the site
>> operator,
>>> the operators of all networks between you and the site operator ] to
>> just [
>>> the site operator ] is a huge win.
>>
>> Yes, I agree. But to really guarantee this you would have to block all the
>> shiny SSL facade reverse proxy services out there. ;-)
>>
>
> That edges up to the remote attestation problem (which is unsolvable). We
> just want the browser to tell the truth about what it can determine.
>
> If the site operator chooses to deploy in a not perfectly safe manner, that
> is a separate problem that the community will have to solve by other means.
>
>
>> I suspect your approach will rather make people rush into using such
>> central
>> SSL fake services and data is transmitted in clear behind the service. If
>> this
>> happens those services will be an even more attractive interception point.
>>
>
> We will have to try to deal with that somehow (most likely at "layer 8").
> But, are you saying that, for this reason, the status quo is somehow
> acceptable or even preferable?


Improvement is always useful. However any improvement in one area
raises questions in another area. In general, if you're improving X
while not improving Y, there is always the possibility that you're
getting too far ahead with X and wasting the work.

In more particularity, the issue of pushing people to HTTPS all over
raises questions of javascript & domain rich sites, HTTPS proxy servers,
how to get the certs easily into web servers, goals of encryption v.
authentication, how to display who says what, etc etc.

These might be intractable questions. But, the reason we started the
push to HTTPS everywhere in 2005 or so was that these questions can only
be answered if we push everyone into that situation.

I guess the issue here is that we need to be holistic. Push for
improvements in all areas, and be somewhat patient with those areas that
push too fast.

Pushing for HTTPS everywhere is like harsh medicine. Tastes disgusting
and makes you gag, but totally necessary.



iang

Chris Palmer

Dec 16, 2014, 3:10:47 PM12/16/14
to Sigbjørn Vik, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Tue, Dec 16, 2014 at 5:59 AM, Sigbjørn Vik <sigb...@opera.com> wrote:

> There is no need to distinguish unsecured from dubiously secured, they
> can just go into the same bucket. There isn't even any need to warn
> users about certificate errors, the UI is just downgraded to insecure,
> as a self-signed site is no less secure than an http site. There are
> technical reasons for the warnings, but those can be bug-fixed. Active
> attacks (e.g. certificate replacement to an invalid one, HSTS failure,
> revoked certificates, ...) might still be hard-blocked, but note that
> this constitutes a fourth state, and the UI is becoming very complicated
> already - there are probably better ways to map such cases into the
> insecure state, but that is a separate discussion.

Well, we do have to make sure that the browser does not send cookies
to an impostor origin. That's (1 reason) why Chrome uses interstitial
warnings today.

We could do away with interstitials if the definition of the origin
included some notion of cryptographic identity — e.g. (HTTPS,
facebook.com, 443, [set of pinned keys]) instead of just (HTTPS,
facebook.com, 443) — but there'd still be problems with that, and very
few site operators are currently able to commit to pinning right now.
(And, that might never change.)
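[As a toy illustration of that extended-origin idea: all names and pin values below are hypothetical, not any browser's real data structures.]

```python
from typing import FrozenSet, NamedTuple

class PinnedOrigin(NamedTuple):
    # The classic (scheme, host, port) triple plus pinned key hashes,
    # per the (HTTPS, facebook.com, 443, [set of pinned keys]) idea above.
    scheme: str
    host: str
    port: int
    pins: FrozenSet[str]  # e.g. base64 SHA-256 hashes of acceptable keys

def may_send_cookies(stored: PinnedOrigin, presented: PinnedOrigin) -> bool:
    # Cookies go out only if the classic triple matches AND the presented
    # key set satisfies at least one stored pin; an impostor with a
    # different key then fails even if it holds a valid certificate.
    return (stored.scheme, stored.host, stored.port) == \
           (presented.scheme, presented.host, presented.port) \
           and bool(stored.pins & presented.pins)

legit = PinnedOrigin("https", "facebook.com", 443, frozenset({"pin-A"}))
impostor = PinnedOrigin("https", "facebook.com", 443, frozenset({"pin-X"}))
assert may_send_cookies(legit, legit)
assert not may_send_cookies(legit, impostor)
```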

> BTW, have you explicitly contacted other browser teams?

This mass mailing is that.

Sigbjørn Vik

Dec 17, 2014, 6:53:08 AM12/17/14
to tyl...@google.com, blin...@chromium.org, dev-se...@lists.mozilla.org, public-w...@w3.org, securi...@chromium.org, pal...@google.com
On 17-Dec-14 08:48, tyl...@google.com wrote:

> Given these roughly 3 distinct scenarios with respect to connection status:
>
> A: The connection is successfully secured. (HTTPS)
> B: No security was attempted. (HTTP)
> C: Securing the connection has failed. (Certificate validation failure)
>
> A few people have said that B and C are roughly identical from a
> security perspective and could be represented as the same state -- in
> both cases no security is provided. I would disagree here. In the case
> of the failed certificate verification, the client has attempted to
> secure the connection and that attempt has failed. In the case of HTTP,
> the client made no indication of a preference for security. While
> scenario B represents the *absence* of security, scenario C represents
> the *failure* of security, and is therefore more troublesome. While we
> want to raise the awareness of scenario B, we shouldn't promote it to
> the severity of scenario C. Doing so conflates two very different cases
> and failure modes; while both represent the absence of verifiable
> transport security, the latter indicates that the user's expressed
> expectation of security has not been met, while the former simply
> reflects the absence of any expectation of security.

I respectfully, but strongly, disagree :) If you want to separate the
states, I'd say that C is better than B. C has *some* security, B has
*none*. Consider a self-signed certificate, where the site owner chooses
to provide what little security he can, this is still much better than
plain old http. Or a certificate expired by one day, which is the same
certificate that the browser has seen on that site for the 2 years past,
this is still way better than B.

If a malicious actor can get write access to a page with status C, he
can immediately change the security level to status B anyway. Redirect
the page to http://official-looking.subdomain-facebook.com, and present
B, so displaying B as better doesn't help users much against attacks. If
a malicious actor does not have write access to a page with status C,
then status C is already better than status B. If the browser can detect
an active attack (like the login form having moved to http from https or
replacement of a good certificate by a bad one) then the browser should
of course warn against the attack, but that is a different scenario.

In most cases, users type 'facebook.com', and give no preference for
security. Any such preference is a server preference. The same holds for
clicking links, the user has no expectation of where he will be taken.
For bookmarks, or cases where the user explicitly types 'https://', the
user might have an expectation of security. If he does, and the security
level of the page indicates either B or C, he should immediately be
alerted anyway. If you think this indicates an explicit preference for
security, then the browser could warn similar to an active attack in
these cases.

But my main point against this is still that you need an entire
paragraph to explain the difference, to people who already know the
background. A user wants to know if he is secure or not, not if his
'facebook.com' request was intercepted on the way and replaced by a http
MiTM (status B, really bad), or if 'facebook.com' made a bug leaving you
exposed (status C, pretty bad). Most users wouldn't understand the
difference. I consider it arrogant trying to force users to understand
the difference, most users just want to go to facebook, not get a
lecture on internet safety. I consider it harmful to try to display the
difference, as the more states we have in the UI, the more users have to
learn, which means they will remember less, and the states become less
meaningful. Keep it simple, and keep to user expectations, not
implementation details.

As a consumer buying bread, you want to know if the bread is safe to eat
or not. Whether the farmer tried to control pesticide usage and failed,
or he didn't try to control it, makes little difference. Professional
health and safety inspectors (akin to browser and web developers) are
about the only ones who care.

> I've
> received many reports from operators large and small indicating
> visible
> losses of revenue due to the nearly-hidden warning Chrome currently
> displays for a SHA-1 cert with a long expiration.

Are you able to share more details on this?

Sigbjørn Vik

Dec 17, 2014, 11:37:48 AM12/17/14
to Adrienne Porter Felt, Chris Palmer, tyl...@google.com, public-w...@w3.org, dev-se...@lists.mozilla.org, blink-dev, security-dev
On 17-Dec-14 17:22, 'Adrienne Porter Felt' via Security-dev wrote:
> We plan to continue treating B and C differently. If there is a
> validation failure (C), Chrome will show a full-page interstitial. That
> will not be the case for HTTP (B). They will look the same in the URL
> bar because they are both insecure but the overall experience will be
> quite different.

Looking the same in the URL bar is already a good improvement on today.
However, the interstitial will continue to provide a negative incentive
to webmasters to attempt to apply security, as if they get it wrong,
users get a worse experience. Going for http might just be the safer
choice. The interstitial thus has the opposite effect of what this
proposal aims to achieve.

In an ideal world, where there were no technical reasons for the
interstitial (meaning the browser wouldn't leak cookies or other data
and the user would be at least as secure as when using http), would you
still want to show it to users? And if so why?


> On Wed, Dec 17, 2014 at 3:52 AM, Sigbjørn Vik <sigb...@opera.com> wrote:
> In most cases, users type 'facebook.com', and
> give no preference for
> security. Any such preference is a server preference. The same holds for
> clicking links, the user has no expectation of where he will be taken.
> For bookmarks, or cases where the user explicitly types 'https://', the
> user might have an expectation of security. If he does, and the security
> level of the page indicates either B or C, he should immediately be
> alerted anyway. If you think this indicates an explicit preference for
> security, then the browser could warn similar to an active attack in
> these cases.
>
> But my main point against this is still that you need an entire
> paragraph to explain the difference, to people who already know the
> background. A user wants to know if he is secure or not, not if his
> 'facebook.com' request was intercepted on the way and replaced by a http
> MiTM (status B, really bad), or if 'facebook.com' made a bug leaving you
> exposed (status C, pretty bad). Most users wouldn't understand the
> difference. I consider it arrogant trying to force users to understand
> the difference, most users just want to go to facebook, not get a
> lecture on internet safety. I consider it harmful to try to display the
> difference, as the more states we have in the UI, the more users have to
> learn, which means they will remember less, and the states become less
> meaningful. Keep it simple, and keep to user expectations, not
> implementation details.
>
> As a consumer buying bread, you want to know if the bread is safe to eat
> or not. Whether the farmer tried to control pesticide usage and failed,
> or he didn't try to control it, makes little difference. Professional
> health and safety inspectors (akin to browser and web developers) are
> about the only ones who care.
>
> > I've
> > received many reports from operators large and small indicating
> > visible
> > losses of revenue due to the nearly-hidden warning Chrome currently
> > displays for a SHA-1 cert with a long expiration.
>
> Are you able to share more details on this?
>
> --
> Sigbjørn Vik
> Opera Software
>

Anne van Kesteren

Dec 17, 2014, 12:17:29 PM
to Sigbjørn Vik, Chris Palmer, tyl...@google.com, WebAppSec WG, dev-se...@lists.mozilla.org, blink-dev, securi...@chromium.org
On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
> I respectfully, but strongly, disagree :) If you want to separate the
> states, I'd say that C is better than B. C has *some* security, B has
> *none*.

You would advocate not blocking on certificate failures and just hand
over credentials to network attackers? What would happen exactly when
you visit e.g. google.com from the airport (connected to something
with a shitty captive portal)?


--
https://annevankesteren.nl/

Sigbjørn Vik

Dec 17, 2014, 12:50:36 PM
to Anne van Kesteren, Chris Palmer, tyl...@google.com, WebAppSec WG, dev-se...@lists.mozilla.org, blink-dev, securi...@chromium.org
On 17-Dec-14 18:17, Anne van Kesteren wrote:
> On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
>> I respectfully, but strongly, disagree :) If you want to separate the
>> states, I'd say that C is better than B. C has *some* security, B has
>> *none*.
>
> You would advocate not blocking on certificate failures
> and just hand
> over credentials to network attackers?

My comment above is about the relative security of http versus
non-perfect https. In most cases, non-perfect https is better. In some
cases, they are equally bad.[*]

Another topic is how to deal with broken https. Browsers today present
the user with an interstitial designed to allow him to shoot himself in
the foot, after which they leak any cached secure data to the broken
site. I consider that leakage a bug.

> What would happen exactly when
> you visit e.g. google.com from the airport (connected to something
> with a shitty captive portal)?

Assuming interstitials were replaced with cache separation:

The browser would detect that this isn't the same secure google you
talked to yesterday, and not share any data you got from google
yesterday with the captive portal. Once you reconnect to the authentic
google, the browser would use the first set of data again.
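As a purely illustrative sketch of that cache-separation idea (no browser implements it this way; the class and key names are invented for this example), cached data could be partitioned by the certificate-validation outcome as well as by origin:

```python
# Hypothetical sketch: partition cached data (cookies, storage) by the
# certificate-validation outcome as well as by origin, so a site presenting
# a broken certificate never sees data stored by the authentic site.

class PartitionedStore:
    def __init__(self):
        # (origin, "valid" | "invalid") -> dict of cached name/value pairs
        self._partitions = {}

    def _key(self, origin, cert_ok):
        return (origin, "valid" if cert_ok else "invalid")

    def set(self, origin, cert_ok, name, value):
        self._partitions.setdefault(self._key(origin, cert_ok), {})[name] = value

    def get(self, origin, cert_ok, name):
        return self._partitions.get(self._key(origin, cert_ok), {}).get(name)

store = PartitionedStore()
store.set("google.com", True, "session", "secret-token")  # the authentic site
# At the captive portal, certificate validation fails: nothing leaks.
assert store.get("google.com", False, "session") is None
# Back on a trustworthy network, the original data is usable again.
assert store.get("google.com", True, "session") == "secret-token"
```

The point of the sketch is only that isolation, rather than a click-through interstitial, could prevent the leakage described above.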

ianG

Dec 17, 2014, 2:30:57 PM
to dev-se...@lists.mozilla.org
On 17/12/2014 17:17 pm, Anne van Kesteren wrote:
> On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
>> I respectfully, but strongly, disagree :) If you want to separate the
>> states, I'd say that C is better than B. C has *some* security, B has
>> *none*.
>
> You would advocate not blocking on certificate failures and just hand
> over credentials to network attackers? What would happen exactly when
> you visit e.g. google.com from the airport (connected to something
> with a shitty captive portal)?


There is a big issue with signalling, something that isn't defined in
the 'secure browsing protocol' and is interpreted differently by
different folks.

Take expired certificates. "We all know" there is no difference between
the technical security available for a certificate at E-1 and E+1, where
E is the expiry day. Yet the browser presents the certificate to the user
as insecure. It's not. We know it's not. But the "commercial"
prerogative kicks in and the protocol is instructed to harass the
user(s) until a fee is paid. This is the CA's security, not the users'
security.

So a clear benefit *to security of end-users* would be to accept expired
certificates if we actually know they were good before. But
CAs/browsers will never accept that. Oh well.



Then there is self-signed certs. If you compare to HTTP, when my site
uses a self-signed cert, it is better in all ways, even in the presence
of MITMing. But when you compare a self-signed cert to say
https://google then "we know" it is worse.

Which is it? Better or worse? It depends on your baseline.

To some extent it also depends on what we know. Sigbjørn assumes we
know that google communicates with HTTPS and we should separate the
cached info out. Which means the browser must become HTTPS aware.
Which is good except HTTP was supposed to be stateless once upon a time.

In the alternative, we don't know these things, so we could simply assume
that self-signed certs are signalling a highly secure thing and are bad,
hence the "mozilla N-click interstitial torture screen" that forces the
user to do something really bad, and that sysadmins then teach users to
do. The result of this is of course that users are taught to again ignore
warning signs, because it is mostly when google goes self-signed that we
care. If we had known this was the interpretation from the beginning
(again, signalling), it would have been far better to ban self-signed
certs entirely, because they basically undermine the CA-cert model. But
we didn't know that.

To make some progress, really the browser should become much more aware
so it can for example know that google is not self-signed, and make a
distinction. There should be more of a wizard approach, where it knows
that this is the first time a site has been visited, and it remembers
that security arrangement.

Summary of all this is that it is very messy. There are multiple layers
of meanings here. When the OP says there is A, B, C, D, I'd say sure, do
that, but don't believe there is only A, B, C, D. Or at least surface
your assumptions (my original question), and be prepared to see that
list get complicated.



iang

Chris Palmer

Dec 17, 2014, 3:42:01 PM
to ianG, dev-se...@lists.mozilla.org
On Wed, Dec 17, 2014 at 11:29 AM, ianG <ia...@iang.org> wrote:

> Take expired certificates. "We all know" there is no difference between the
> technical security available for a certificate at E-1 and E+1 where E is
> expiry day. Yet, the browser presents to the user that the certificate is
> insecure. It's not. We know it's not. But the "commercial" prerogative
> kicks in and the protocol is instructed to harass the user(s) and get a fee
> paid. This is the CA's security not the users' security.

If a site operator can't get operational details like certificate
expiration right, should users believe they can get more difficult
things right?

That said, we (Chrome) have experimented a bit with softer warnings in
case of "barely expired" certificates. It might happen (no promises).
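One way such a softer-warning policy could be graded (the 7-day grace window here is invented purely for illustration; it is not Chrome's actual behaviour):

```python
# Illustrative only: grade certificate expiry into three buckets, with a
# short grace period for "barely expired" (the E+1 case discussed above).
from datetime import date, timedelta

def expiry_severity(not_after, today, grace_days=7):
    if today <= not_after:
        return "valid"
    if today - not_after <= timedelta(days=grace_days):
        return "soft-warning"   # barely expired: softer UI treatment
    return "hard-failure"       # long expired: full warning

assert expiry_severity(date(2015, 1, 10), date(2015, 1, 9)) == "valid"
assert expiry_severity(date(2015, 1, 10), date(2015, 1, 12)) == "soft-warning"
assert expiry_severity(date(2015, 1, 10), date(2015, 3, 1)) == "hard-failure"
```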

> So a clear benefit *to security of end-users* would be to accept expired
> certificates if we actually know they were good before. But CAs/browsers
> will never accept that. Oh well.

We would if we could make it work, and maybe we can. Maybe we can't.

> Then there is self-signed certs. If you compare to HTTP, when my site uses
> a self-signed cert, it is better in all ways, even in the presence of
> MITMing. But when you compare a self-signed cert to say https://google then
> "we know" it is worse.
>
> Which is it? Better or worse? It depends on your baseline.

The way to make self-signed certificates "safe" (as in: safe enough to
make a representation to a real-world user trying to achieve a task)
is to pin the keys, and hard-fail on pinning validation failure. Now
you need a ceremony for recovery. Do you have a ceremony design that
would work for real-world users who have no idea what is going on? I
bet you don't.
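A minimal sketch of the pin-and-hard-fail idea (trust-on-first-use; this is not HPKP or Chrome's built-in pin list, and the fingerprint input is simplified — real pinning hashes the certificate's SPKI):

```python
# Toy hard-fail key pinning: remember the first public key seen for a host,
# reject any later connection presenting a different key. The unsolved part,
# as noted above, is the recovery ceremony when a key legitimately changes.
import base64
import hashlib

pins = {}  # host -> set of base64(SHA-256(public-key bytes)) fingerprints

def fingerprint(pubkey_der):
    return base64.b64encode(hashlib.sha256(pubkey_der).digest()).decode()

def check_pin(host, pubkey_der):
    fp = fingerprint(pubkey_der)
    if host not in pins:
        pins[host] = {fp}        # trust on first use: the leap-of-faith step
        return True
    return fp in pins[host]      # hard fail on mismatch, no interstitial

assert check_pin("router.local", b"key-A")       # first visit: key pinned
assert check_pin("router.local", b"key-A")       # same key: accepted
assert not check_pin("router.local", b"key-B")   # changed key: rejected
```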

Because we cannot effectively communicate all this nuance to
real-world users, and because the client has very limited "knowledge"
of what is "expected" at run-time, we (Chrome, and most other
browsers) have consistently chosen to quantize the definition of
security *up*. It does upset some people with certain ideologies, but
it results in a more clear message to most people.

> To make some progress, really the browser should become much more aware so
> it can for example know that google is not self-signed, and make a
> distinction.

Yes, that's key pinning. We are doing it and it works. But it's not a
slam-dunk for all operators.

Ryan Sleevi

Dec 18, 2014, 4:47:13 AM
to cha...@yandex-team.ru, pal...@google.com, tyl...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, sigb...@opera.com, blink-dev, software...@gmail.com, Anne van Kesteren
Inline

On Dec 18, 2014 1:07 AM, <cha...@yandex-team.ru> wrote:
>
>
>
> 18.12.2014, 01:27, "software...@gmail.com" <software...@gmail.com>:
>>
>> On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru
wrote:
>>>
>>> This is a pretty interesting use case. When you connect at the airport,
the typical first thing that happens is you get a warning saying that the
site you want to connect to has the wrong certificate (you went to
pogoda.yandex.ru but the certificate is for airport.logins.aero, or
1.1.1.1).
>>> --
>>> Charles McCathie Nevile - web standards - CTO Office, Yandex
>>
>>
>> 511 Network Authentication Required?
>>
>> There is http://tools.ietf.org/html/rfc6585#section-6 for that. Chromium
bug is https://code.google.com/p/chromium/issues/detail?id=114929 , Firefox
has their own as well. As far as I know this only works for HTTP
connections. There really is no reasonable way how the airport can step
into an HTTPS connection and demand authentication without causing a
certificate error.
>
>
> There is a certificate error. The point is that since it is expected
behaviour, I get trained to say "yeah, whatever" so I can pay for the
connection I need. Despite the fact that it is very difficult to be *sure*
that the error is not actually a real problem.
>
> I'd love to see a better situation relying on a proper standard.
>
> But in general I don't.
>

I'm not sure it's terribly germane to the
HTTP-being-marked-as-non-secure discussion to rathole too much on the
ways that HTTPS can be messed with, but I will simply note that the
Chrome Security Enamel team is working on ways to better detect and
manage this.

While we can wish for better standards, this is an area where
standards-compliant devices take years to become even remotely
ubiquitous. As such, what is needed is a heuristic-based approach
grounded in the way the world actually is, at least with respect to
captive portals that actively try to disrupt and compromise users'
connections.

>>
>> There is the experimental https://tools.ietf.org/rfc/rfc2521.txt which
suggests an ICMP packet "Need Authorization", but as I said, it is
experimental. Am I missing something?
>>
>> This gradual roll out of the UI hints that is being proposed now would
help shift attention to such problems. The problems won't be solved until
we get to a state we (actually, you ;) truly _need_ to be solving them.
>
>
> Sure. But this turns out to be a case where right now there is a problem,
and instead of *solving* it it seems that "the world" (or at least the
parts I see, which is quite a lot by geography) is instead finding a quick
workaround that gets them where they were going - at the cost of learning
to ignore a potentially serious problem.
>
> On the whole I think this discussion is valuable, and the proposal makes
sense. But I have concerns about whether we really understand the things
that are going to change and the implications, so use cases like this are
important to find and make sure we understand.
>
> cheers
>
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
> cha...@yandex-team.ru - - - Find more at http://yandex.com
>

As noted elsewhere, we aren't trying to boil the ocean, and though I
certainly accept the concerns are valid (and, as mentioned above, are
already being independently worked on), I think we should be careful how
much we fixate on these issues versus considering the broader philosophical
issues this proposal is bringing forward.

There are certainly awful things in the world of HTTPS, on a variety of
fronts. And yet, despite those warts, we would be misleading ourselves and
others to think that insecure transports such as HTTP - ones actively
disrupted for commercial gain, "value" adding, or malicious havoc, and
ones that are passively monitored on a widespread, pervasive scale -
represent the desirable state of where we want to be or go.

I think the end goal is more robust - we want a world where users are not
only safe by default, but they expect that, and can understand what makes
them unsafe. Though some of these may be outside our ken as UAs - for
example, we have limited ability to know you're running a three year old
version of phpBB that is owned harder than Sony Pictures and has more
remote exploits than msasn1.dll - there are things we do know and should
communicate. One of them is that the assumption many users have - that
their messages are shared only between them and the server - is not true
unless the server operator is conscientious.

Gervase Markham

Dec 18, 2014, 11:45:55 AM
to ianG
On 17/12/14 19:29, ianG wrote:
> Take expired certificates. "We all know" there is no difference between
> the technical security available for a certificate at E-1 and E+1 where
> E is expiry day. Yet, the browser presents to the user that the
> certificate is insecure. It's not. We know it's not. But the
> "commercial" prerogative kicks in and the protocol is instructed to
> harass the user(s) and get a fee paid.

Except that certs from free CAs also expire. Why do you think that is?

> So a clear benefit *to security of end-users* would be to accept expired
> certificates if we actually know they were good before.

Not so. Once a certificate has expired, there is no requirement to
maintain revocation information for it. So if you see an expired
certificate, it may or may not be also a revoked certificate - you have
no certain way of telling.

Certificate expiry also allows us to advance the technical state of the
art. The current max cert lifetime is (or will soon be) 39 months - I'd
like lower, but it's better than it was. That means that if we mandate a
change, 39 months later, all certs will have it.

Gerv

Gervase Markham

Dec 18, 2014, 12:15:19 PM
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev
On 13/12/14 00:46, Chris Palmer wrote:
> We, the Chrome Security Team, propose that user agents (UAs) gradually
> change their UX to display non-secure origins as affirmatively non-secure.
> We intend to devise and begin deploying a transition plan for Chrome in
> 2015.

I think this is a good idea - in fact, it's essential if we are to make
secure the 'new normal'.

I agree that a phased transition plan based on telemetry thresholds is
the right thing. This is a collective action problem ("Chrome tells me
this site is insecure, but Firefox is fine - so I'll use Firefox") and
so it would be awesome if we could get cross-browser agreement on what
the thresholds were and how they were measured.

I wonder whether we could make a start by marking non-secure origins in
a neutral way, as a step forward from not marking them at all. Straw-man
proposal for Firefox: replace the current greyed-out globe which appears
where the lock otherwise is with a black eye icon. When clicked, instead
of saying:

"This website does not supply identity information.

Your connection to this website is not encrypted."

it has a larger eye icon, and says something like:

"This web page was transferred over a non-secure connection, which means
that the information could have been (was probably?!) intercepted and
read by a third party while in transit."

There are many degrees of this; let's start moving this way.

Gerv

Chris Palmer

Dec 18, 2014, 2:29:32 PM
to Gervase Markham, blink-dev, public-w...@w3.org, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 9:14 AM, Gervase Markham <ge...@mozilla.org> wrote:

> I think this is a good idea - in fact, it's essential if we are to make
> secure the 'new normal'.

Woo hoo! :)

> I agree that a phased transition plan based on telemetry thresholds is
> the right thing. This is a collective action problem ("Chrome tells me
> this site is insecure, but Firefox is fine - so I'll use Firefox") and
> so it would be awesome if we could get cross-browser agreement on what
> the thresholds were and how they were measured.

We don't currently have any hard thresholds, just numbers that I kind
of made up. Any suggestions?

Also, shall we measure resource loads, top-level navigations, minutes
spent looking at the top-level origin, ...? Probably all of those and
more...

> I wonder whether we could make a start by marking non-secure origins in
> a neutral way, as a step forward from not marking them at all. Straw-man
> proposal for Firefox: replace the current greyed-out globe which appears
> where the lock otherwise is with a black eye icon. When clicked, instead
> of saying:
>
> "This website does not supply identity information.
>
> Your connection to this website is not encrypted."
>
> it has a larger eye icon, and says something like:
>
> "This web page was transferred over a non-secure connection, which means
> that the information could have been (was probably?!) intercepted and
> read by a third party while in transit."
>
> There are many degrees of this; let's start moving this way.

Yeah, that sounds good.

Thanks!

Chris Palmer

Dec 18, 2014, 2:33:41 PM
to Jason Striegel, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Thu, Dec 18, 2014 at 9:52 AM, jstriegel via blink-dev
<blin...@chromium.org> wrote:

> I'd like to propose consideration of a fourth category:
> Personal Devices (home routers, printers, IoT, raspberry pis in classrooms, refrigerators):
> - cannot, by nature, participate in DNS and CA systems
> - likely on private network block
> - user is the owner of the service, hence can trust self rather than CA
>
> Suggested use:
> - IoT devices generate unique, self-signed cert
> - Friendlier interstitial (i.e. "Is this a device you recognize?") for self-signed connections on *.local, 192.168.*, 10.*, or on same local network as browser.
> - user approves use on first https connection
> - browser remembers (device is promoted to "secure" status)
>
> A lot of IoT use cases could benefit from direct connection (not requiring a cloud service as secure data proxy), but this currently gives the scariest of Chrome warnings. This is probably why the average home router or firewall is administered over http.

Yes, I agree this is a problem. I am hoping to publish a proposal for
how UAs can authenticate private devices soon (in January probably).

A key goal is not having to ask the user "Is this a device you
recognize?" — I think we can get the UX flow even simpler, and still
be strong. Watch this space...

Monica Chew

Dec 18, 2014, 3:12:32 PM
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Hello Chris,

I support the goal of this project, but I'm not sure how we can get to a
point where showing warning indicators makes sense. It seems that about 67%
of pageviews on the Firefox beta channel are http, not https. How are
Chrome's numbers?

http://telemetry.mozilla.org/#filter=beta%2F34%2FHTTP_PAGELOAD_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph

Security warnings are often overused and therefore ignored [1]; it's even
worse to provide a warning for something that's not actionable. I think
we'd have to see very low plaintext rates (< 1%) in order not to habituate
users into ignoring a plaintext warning indicator.

Lots of site operators don't support HTTPS, in fact some of them (e.g.,
https://nytimes.com and https://monica-at-mozilla.blogspot.com, which is
out of my control) redirect to plaintext in order to avoid mixed content
warnings. I don't think that user agents provided the right incentives in
this case, and showing a warning 100% of the time to a NYTimes user seems
like a losing battle.

Why not shift the onus from the user to the site operators? I would love to
see a "wall of shame" for the Alexa top 1M sites that don't support HTTPS,
redirect HTTPS to HTTP, and don't support HSTS. Perhaps search providers
could use those to penalize rankings, as Google already does for non HTTPS
sites. Efforts to make it cheap and easy to deploy HTTPS also need to
advance.
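A sketch of how such a wall-of-shame crawler might bucket sites (the categories and logic here are invented for illustration; a real tool would also need to fetch pages and follow redirects):

```python
# Illustrative classifier: given what a crawler observed for one site,
# assign it to a "wall of shame" bucket as proposed above.
def classify(https_ok, redirects_to_http, hsts_header):
    if not https_ok:
        return "no-https"
    if redirects_to_http:
        return "downgrades-to-http"   # e.g. https:// redirecting to http://
    if not hsts_header or "max-age" not in hsts_header:
        return "no-hsts"
    return "ok"

assert classify(False, False, None) == "no-https"
assert classify(True, True, None) == "downgrades-to-http"
assert classify(True, False, None) == "no-hsts"
assert classify(True, False, "max-age=31536000; includeSubDomains") == "ok"
```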

Thanks,
Monica

[1] http://lorrie.cranor.org/pubs/sslwarnings.pdf

On Fri, Dec 12, 2014 at 4:46 PM, Chris Palmer <pal...@google.com> wrote:
>
> Hi everyone,
>
> Apologies to those of you who are about to get this more than once, due to
> the cross-posting. I'd like to get feedback from a wide variety of people:
> UA developers, web developers, and users. The canonical location for this
> proposal is:
> https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure
> .
>
> Proposal
>
> We, the Chrome Security Team, propose that user agents (UAs) gradually
> change their UX to display non-secure origins as affirmatively non-secure.
> We intend to devise and begin deploying a transition plan for Chrome in
> 2015.
>

Peter Kasting

Dec 18, 2014, 3:20:15 PM
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, Chris Palmer
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:
>
> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think
> we'd have to see very low plaintext rates (< 1%) in order not to habituate
> users into ignoring a plaintext warning indicator.
>

The context of the paper you cite is for a far more intrusive type of
warning than anyone has proposed here. Interstitials or popups are very
aggressive methods of warning that should only be used when something is
almost certainly wrong, or else they indeed risk the "crying wolf" effect.
Some sort of small passive indicator is a very different thing.

PK

Chris Palmer

Dec 18, 2014, 3:27:51 PM
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:

> I support the goal of this project, but I'm not sure how we can get to a
> point where showing warning indicators makes sense. It seems that about 67%
> of pageviews on the Firefox beta channel are http, not https. How are
> Chrome's numbers?

Currently, roughly 58% of top-level navigations in Chrome are HTTPS.

> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think we'd
> have to see very low plaintext rates (< 1%) in order not to habituate users
> into ignoring a plaintext warning indicator.

(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.

(b) What Peter Kasting said: we propose a passive indicator, not a
pop-up or interstitial.

> Lots of site operators don't support HTTPS, in fact some of them (e.g.,
> https://nytimes.com and https://monica-at-mozilla.blogspot.com, which is out
> of my control) redirect to plaintext in order to avoid mixed content
> warnings. I don't think that user agents provided the right incentives in
> this case, and showing a warning 100% of the time to a NYTimes user seems
> like a losing battle.

Again, it's a passive indicator; and, the proposal is to *fix* what
you seem to agree is the wrong incentive.

The NY Times in particular is committed to change and challenges other
news sites to move to HTTPS:

http://open.blogs.nytimes.com/2014/11/13/embracing-https/

> Why not shift the onus from the user to the site operators?

This isn't about putting an onus on users, it's about allowing users
to at least perceive the reality. And yes, that will put pressure on
some site operators. At the same time, the industry is working to make
HTTPS more usable. These efforts are complementary.

Monica Chew

Dec 18, 2014, 4:19:02 PM
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Thu, Dec 18, 2014 at 12:27 PM, Chris Palmer <pal...@google.com> wrote:

> On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:
>
> > I support the goal of this project, but I'm not sure how we can get to a
> > point where showing warning indicators makes sense. It seems that about
> 67%
> > of pageviews on the Firefox beta channel are http, not https. How are
> > Chrome's numbers?
>
> Currently, roughly 58% of top-level navigations in Chrome are HTTPS.
>

Thanks for the numbers. That's a significant gap (58% vs 33%). Do you have
any idea why this might be the case?


>
> > Security warnings are often overused and therefore ignored [1]; it's even
> > worse to provide a warning for something that's not actionable. I think
> we'd
> > have to see very low plaintext rates (< 1%) in order not to habituate
> users
> > into ignoring a plaintext warning indicator.
>
> (a) Users are currently habituated to treat non-secure transport as
> OK. The status quo is terrible.
>
> (b) What Peter Kasting said: we propose a passive indicator, not a
> pop-up or interstitial.
>

I understand the desire here, but a passive indicator is not going to
change the status quo if it's shown 42% of the time (or 67% of the time, in
Firefox's case). Other passive indicators (e.g., Prop 65 warnings if you
live in California, or compiler warnings that aren't failures) haven't
succeeded in changing the status quo. Again, what's the action that typical
users are going to take when they see a passive indicator?

Thanks,
Monica

Monica Chew

Dec 18, 2014, 4:21:25 PM
to Adrienne Porter Felt, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, Chris Palmer
> I'm curious about the difference between the two browsers. My guess is
> that we're treating same-origin navigations differently, particularly
> fragment changes. Monica, is Firefox collapsing all same-origin navigations
> into a single histogram entry? Given that people spend a lot of time on a
> small number of popular (and HTTPS) sites, it would account for the
> different stats.
>

I don't think so. This histogram is updated at
https://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpChannel.cpp#1385

Thanks,
Monica

Peter Kasting

Dec 18, 2014, 4:34:31 PM
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, Chris Palmer
On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
>
> I understand the desire here, but a passive indicator is not going to
> change the status quo if it's shown 42% of the time (or 67% of the time, in
> Firefox's case).
>

Which is presumably why the key question this thread asked is what metrics
to use to decide it makes sense to start showing these warnings, and what
the thresholds should be.

PK

Monica Chew

Dec 18, 2014, 4:41:43 PM
to Peter Kasting, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, Chris Palmer
OK. I think the thresholds should be < 5%, preferably < 1%. What do you
think they should be?

Also I was wrong about collapsing fragment navigation, and that probably
explains the difference between FF and Chrome.
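To illustrate the measurement effect with made-up numbers: if popular HTTPS sites generate many same-origin (e.g. fragment) navigations, counting every navigation inflates the HTTPS share relative to counting each distinct page once:

```python
# Made-up numbers, purely to show how the counting method moves the ratio.
visits = (
    [("https", "mail.example")] * 50                       # one busy HTTPS app
    + [("http", "site%d.example" % i) for i in range(40)]  # one-off HTTP pages
)

def https_share(events):
    return sum(1 for scheme, _ in events if scheme == "https") / len(events)

per_navigation = https_share(visits)              # every event counts: 50/90
per_unique_page = https_share(list(set(visits)))  # repeats collapsed: 1/41

assert per_navigation > 0.5 > per_unique_page
```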

Thanks,
Monica


Peter Kasting

Dec 18, 2014, 4:42:52 PM
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, Chris Palmer
On Thu, Dec 18, 2014 at 1:41 PM, Monica Chew <m...@mozilla.com> wrote:
>
> On Thu, Dec 18, 2014 at 1:34 PM, Peter Kasting <pkas...@google.com>
> wrote:
>>
>> On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
>>>
>>> I understand the desire here, but a passive indicator is not going to
>>> change the status quo if it's shown 42% of the time (or 67% of the time, in
>>> Firefox's case).
>>>
>>
>> Which is presumably why the key question this thread asked is what
>> metrics to use to decide it makes sense to start showing these warnings,
>> and what the thresholds should be.
>>
>
> OK. I think the thresholds should be < 5%, preferably < 1%. What do you
> think they should be?
>

I have no opinion. I'm simply trying to keep the discussion on track.

PK

Chris Palmer

Dec 18, 2014, 4:46:19 PM
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Sigh. Re-posting this, because the Mozilla list doesn't like large attachments.

To see the equivalent, go to https://www.google.com and click on the
green lock in the Omnibox.

On Thu, Dec 18, 2014 at 1:41 PM, Chris Palmer <pal...@google.com> wrote:
> On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
>
>> Thanks for the numbers. That's a significant gap (58% vs 33%). Do you have
>> any idea why this might be the case?
>
> I don't, unfortunately.
>
> I think we (Chrome) are going to try measuring HTTPS vs. HTTP
> deployment in other ways too, and then we might see discrepancies.
>
>> I understand the desire here, but a passive indicator is not going to change
>> the status quo if it's shown 42% of the time (or 67% of the time, in
>> Firefox's case).
>
> That's part of why we plan to gate the change on increasing HTTPS
> adoption. Gervase liked that idea, too.
>
>> Other passive indicators (e.g., Prop 65 warnings if you
>> live in California, or compiler warnings that aren't failures) haven't
>> succeeded in changing the status quo.
>
> Citation needed...?
>
> (If you're arguing that we should all compile with -Werror, I'll
> surely agree with you. Chrome does. But I assume you did not mean to
> suggest we should do the equivalent for HTTP navigation, at least not
> yet...)
>
>> Again, what's the action that typical
>> users are going to take when they see a passive indicator?
>
> First, keep in mind that you can't argue that showing the passive
> indicator will be both ignored and crying wolf. It's one or the other.
> Which argument are you making?
>
> That said,
>
> * Those few users who do look at it will at least be able to discern
> the truth. Currently, they cannot.
>
> * Site operators are likely to discern the truth, and may be motivated
> to deploy HTTPS, if they feel that their user base might demand it.
> (Complementarily, as site operators seek to use more powerful,
> app-like features like Service Workers, they will increasingly deploy
> HTTPS because they have to.)
>
> * As we make the web more powerful and more app-like, we (Chrome) seek
> to join the "what powerful permissions does this origin have?" views
> and controls with the "by the way, how authentic is this origin?"
> view. (See attached Chrome screenshot, showing that they are already
> significantly merged. In my other window I am working on a patch to
> make this easier to use.) As users become increasingly aware of these
> controls, they may become increasingly aware of the authenticity
> marking. And then they may make decisions about granting permissions
> differently, or at least with more information. Basically, "how real
> and how powerful is this origin" is gradually becoming a first-class
> UX piece.
>
> But, fundamentally, we owe it to users to tell the truth. I don't see
> that the status quo is defensible.

Chris Palmer

unread,
Dec 18, 2014, 5:29:23 PM12/18/14
to nolo...@gmail.com, blink-dev, public-w...@w3.org, mozilla-de...@lists.mozilla.org, security-dev, Daniel Kahn Gillmor
On Thu, Dec 18, 2014 at 2:22 PM, Jeffrey Walton <nolo...@gmail.com> wrote:

>> A) i don't think we should remove "This website does not supply
>> identity information" -- but maybe replace it with "The identity of this
>> site is unconfirmed" or "The true identity of this site is unknown"
>
> None of them are correct when an interception proxy is involved. All
> of them lead to a false sense of security.
>
> Given the degree to which standard bodies accommodate (promote?)
> interception, UA's should probably steer clear of making any
> statements like that if accuracy is a goal.

Are you talking about if an intercepting proxy is intercepting HTTP
traffic, or HTTPS traffic?

A MITM needs a certificate issued for the proxied hostname, that is
signed by an issuer the client trusts. Some attackers can achieve
that, but it's not trivial.
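To make that concrete, here is a deliberately simplified sketch of the two checks involved (toy names, not real X.509 machinery; real validation also covers chains, expiry, wildcards, and revocation):

```python
# Toy model: a client accepts a certificate only if it names the host the
# user asked for AND was issued by a CA in the client's trust store.
# "ExampleTrustedCA" etc. are made-up names for illustration only.
from dataclasses import dataclass

@dataclass
class Certificate:
    hostname: str  # who the certificate claims to identify
    issuer: str    # which CA signed it

# The client's root store (hypothetical).
TRUSTED_CAS = {"ExampleTrustedCA", "AnotherRootCA"}

def client_accepts(cert: Certificate, expected_host: str) -> bool:
    return cert.hostname == expected_host and cert.issuer in TRUSTED_CAS

# A rogue hotspot can mint *a* certificate for the proxied hostname, but
# not one signed by a CA the client trusts, so the handshake fails:
print(client_accepts(Certificate("mail.example.com", "RogueRouterCA"),
                     "mail.example.com"))  # False
```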

Ryan Sleevi

unread,
Dec 18, 2014, 6:04:15 PM12/18/14
to michael....@xenite.org, blin...@chromium.org, public-w...@w3.org, security-dev, mozilla-de...@lists.mozilla.org
On Dec 18, 2014 2:55 PM, "Michael Martinez" <michael....@xenite.org>
wrote:
> No it doesn't need a certificate. A MITM can be executed through a
compromised or rogue router. It's simple enough to set up a public network
in well-known wifi hotspots and attract unwitting users. Then the HTTPS
doesn't protect anyone's transmission from anything as the router forms the
other end of the secure connection and initiates its own secure connection
with the user's intended destination (either the site they are trying to
get to or whatever site the bad guys want them to visit).
>
> Google, Apple, and other large tech companies learned the hard way this
year that their use of HTTPS failed to protect users from MITM attacks.
>
>
>
> --
> Michael Martinez
> http://www.michael-martinez.com/
>
> YOU CAN HELP OUR WOUNDED WARRIORS
> http://www.woundedwarriorproject.org/
>

I'm sorry, this isn't how HTTPS works and isn't accurate, unless you have
explicitly installed the router's cert as a CA cert. If you have, then
you're not really an unwitting user (and it is quite hard to do this).

Of course, everything you said is true for insecure HTTP. Which is why this
proposal is about reflecting that truth to the user.

Chris Palmer

unread,
Dec 18, 2014, 6:50:00 PM12/18/14
to michael....@xenite.org, blink-dev, public-w...@w3.org, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 3:39 PM, Michael Martinez
<michael....@xenite.org> wrote:

> You're assuming people don't connect to open wifi hotspots where rogue
> routers can be set up by anyone. If thieves are willing to build fake ATM
> machines and distribute them to shopping centers across a large geographical
> area then they will certainly go to the same lengths to distribute rogue
> routers.

Indeed, there are many rogue wifi hotspots, and indeed many rogue
routers at ISPs (it's definitely not just "last mile" routing that we
need to be concerned about).

The part you're missing is that the man-in-the-middle attacker needs
to present a certificate for the server, say mail.google.com, that was
issued by a certification authority *that the client trusts*. Not just
any certificate for mail.google.com will do.

Now, this is not an insurmountable obstacle to the attacker. But it is
non-trivial: the CAs that clients trust are trying hard not to
mis-issue certificates. And, we are working to make it even more
difficult for attackers, such as with our Certificate Transparency and
public key pinning efforts.

Before arguing against HTTPS, you should make sure you know how it works.

I would encourage you try to mount the attack you describe (only
against your own computers, of course!). I think you will find that
you won't get very far without a valid certificate issued by a
well-known CA.
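For the curious, the key-pinning check mentioned above can be sketched as follows. This is only an illustration: the pin format follows HPKP (base64 of SHA-256 over the DER-encoded SubjectPublicKeyInfo), and the byte strings are stand-ins, not real keys.

```python
# Sketch of public-key pinning: the client remembers hashes of acceptable
# server public keys (SPKIs) and hard-fails when the presented key's hash
# is not in that set, even if the certificate otherwise chains to a
# trusted CA.
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    # HPKP-style pin: base64(SHA-256(SPKI)).
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

PINNED = {spki_pin(b"stand-in for the site's real public key")}

def connection_allowed(presented_spki: bytes) -> bool:
    # Hard fail: no user-clickable bypass on a pin mismatch.
    return spki_pin(presented_spki) in PINNED

print(connection_allowed(b"stand-in for the site's real public key"))  # True
print(connection_allowed(b"stand-in for a MITM's public key"))         # False
```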

Chris Palmer

unread,
Dec 18, 2014, 7:17:51 PM12/18/14
to michael....@xenite.org, blink-dev, public-w...@w3.org, mozilla-de...@lists.mozilla.org, security-dev, Daniel Kahn Gillmor
On Thu, Dec 18, 2014 at 4:08 PM, Michael Martinez
<michael....@xenite.org> wrote:

> A Study of SSL Proxy Attacks on Android and iOS Mobile Applications
> http://harvey.binghamton.edu/~ychen/CCNC2014_SSL_Attacks.pdf

That paper describes bugs in the certificate validation procedures *of
specific clients*. (Note that the authors call out the fact that the
clients in question are *not* browsers.)

That doesn't mean the protocol is fundamentally flawed; it means those
particular non-browser clients have bugs.

If you can find such a bug in Chrome (or Firefox, or other browser),
you should report the flaw to the vendor. Google offers money in
reward for such findings:

https://www.google.com/about/appsecurity/chrome-rewards/index.html

If you can find one, we would consider such a finding to be a high-priority bug.

Monica Chew

unread,
Dec 18, 2014, 9:33:48 PM12/18/14
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
> > Other passive indicators (e.g., Prop 65 warnings if you
> > live in California, or compiler warnings that aren't failures) haven't
> > succeeded in changing the status quo.
>
> Citation needed...?
>

For Prop 65, last paragraph of
http://en.wikipedia.org/wiki/California_Proposition_65_%281986%29#Warning_label.
For compiler warnings, just my own anecdotal experience that they aren't
attended to unless -Werror is set, even if the person compiling is in a
position to fix the warning.

> > Again, what's the action that typical
> > users are going to take when they see a passive indicator?
>
> First, keep in mind that you can't argue that showing the passive
> indicator will be both ignored and crying wolf. It's one or the other.
> Which argument are you making?
>

I'm making the argument that most people will ignore passive indicators,
that the ones who notice them will be frustrated because they're not
actionable (other than not visiting the site), especially at the
non-HTTPS traffic rates we are seeing, and that there are probably better
ways to put pressure on site operators. Sorry if that wasn't clear.

Thanks,
Monica

Chris Palmer

unread,
Dec 18, 2014, 9:55:50 PM12/18/14
to Monica Chew, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Thu, Dec 18, 2014 at 6:33 PM, Monica Chew <m...@mozilla.com> wrote:

> I'm making the argument that most people will ignore passive indicators, the
> ones who notice it will be frustrated because it's not actionable (other
> than not visiting the site),

Users can take actions like these:

* Use the site, but maybe not grant it Geolocation or Camera permission
* Use the site, but be aware that it's not a great place to talk about
sensitive topics
* Use the site, but be aware that these stock price listings might not
be 100% true
* Use the site, but ask the operator why the browser thinks it's Non-Secure
* Use a competing site that does use secure transport
* Hug their real cat instead of looking at cats on the screen

Rather than thinking of it like California Proposition 65, think of it
like those health inspection scores that restaurants have to post:
http://sfscores.com/. Maybe a score of 63 isn't high enough for you,
or maybe you'll get something packaged or heavily cooked.

The truth is sometimes gross, but there are actions you can take.

> especially at the non-HTTPS traffic rates we
> are seeing, and that there are probably better ways to put pressure on site
> operators. Sorry if that wasn't clear.

This is a proposal to tell users the truth, and to stop lying by omission.

If some users pressure site operators (either with tech support calls
or by exerting market pressure), or if site operators decide
unilaterally that they don't like the truth and then choose to fix it,
that is a second-order effect. A good second-order effect which makes
me happy, but it's not my primary goal with this proposal.

ianG

unread,
Dec 19, 2014, 7:20:42 AM12/19/14
to dev-se...@lists.mozilla.org
On 17/12/2014 20:41 pm, Chris Palmer wrote:
> On Wed, Dec 17, 2014 at 11:29 AM, ianG <ia...@iang.org> wrote:
>
>> Take expired certificates. "We all know" there is no difference between the
>> technical security available for a certificate at E-1 and E+1 where E is
>> expiry day. Yet, the browser presents to the user that the certificate is
>> insecure. It's not. We know it's not. But the "commercial" prerogative
>> kicks in and the protocol is instructed to harass the user(s) and get a fee
>> paid. This is the CA's security not the users' security.
>
> If a site operator can't get operational details like certificate
> expiration right, should users believe they can get more difficult
> things right?


I understand the angst, but take a moment to think it through. If you
really believe that, then we should be doing more active tests on
websites and revoking their certs if they muck up. Right? Pen tests,
sql injection, all these things should ensure a safer web, and
revocation should follow for any demerits.

Doesn't really fly, does it :) Even for EV. The Internet shouldn't be
about adding barriers, just because.


> That said, we (Chrome) have experimented a bit with softer warnings in
> case of "barely expired" certificates. It might happen (no promises).


:)

>> So a clear benefit *to security of end-users* would be to accept expired
>> certificates if we actually know they were good before. But CAs/browsers
>> will never accept that. Oh well.
>
> We would if we could make it work, and maybe we can. Maybe we can't.
>
>> Then there is self-signed certs. If you compare to HTTP, when my site uses
>> a self-signed cert, it is better in all ways, even in the presence of
>> MITMing. But when you compare a self-signed cert to say https://google then
>> "we know" it is worse.
>>
>> Which is it? Better or worse? It depends on your baseline.
>
> The way to make self-signed certificates "safe" (as in: safe enough to
> make a representation to a real-world user trying to achieve a task)
> is to pin the keys,

OK.

> and hard-fail on pinning validation failure. Now
> you need a ceremony for recovery. Do you have a ceremony design that
> would work for real-world users who have no idea what is going on? I
> bet you don't.


You're trying to protect two users who have decided to utilise
self-signed certs in order to improve over HTTP. Whatever the system,
it doesn't matter that much if it fails, because after all, it was still
better than HTTP.

See what I mean about your baseline? Why did you make an assumption
that you have to make a representation to users about self-signed? Why
do you assume the users don't know what's going on here, yet you also
assume you have to make representations to users? What is this
representation anyway, and where is it written so I might rely on it?
Audit it?

Are you even allowed to make a representation? I don't recall seeing a
CA contract that permitted you to make a representation about a CA's
certs [0], so why would you bother to do *more* for a self-signed cert?


> Because we cannot effectively communicate all this nuance to
> real-world users, and because the client has very limited "knowledge"
> of what is "expected" at run-time, we (Chrome, and most other
> browsers) have consistently chosen to quantize the definition of
> security *up*.

! This speaks to my original point, being that a model based on A, B,
C, D is unlikely to be sustainable.

Above, you are assuming that you can "quantise the definition of
security." That's an extraordinary statement. Security is inversely
related to risk, and risk isn't quantizable, only estimable. Risks
interact in complicated ways: even changes that look clearly beneficial
in one direction can produce overall effects in the opposite direction.


> It does upset some people with certain ideologies, but
> it results in a more clear message to most people.

It's totally clear that HTTP is 'white'. If self-signed certs were also
white, what would be unclear?


>> To make some progress, really the browser should become much more aware so
>> it can for example know that google is not self-signed, and make a
>> distinction.
>
> Yes, that's key pinning. We are doing it and it works. But it's not a
> slam-dunk for all operators.


It's not easy, no. But you have to start somewhere and expand on that.
There's no doubt that, as well as weaknesses inside fully
authenticated HTTPS, and outside, there are also weaknesses any time an
approximation enters scope.

You just have to give it your best shot and see what happens. Take some
risks.



iang


[0] For non-Americans, BR 18.2 might be instructive here.

Gervase Markham

unread,
Dec 19, 2014, 9:08:55 AM12/19/14
to Chris Palmer, blink-dev, public-w...@w3.org, security-dev
On 18/12/14 19:29, Chris Palmer wrote:
> We don't currently have any hard thresholds, just numbers that I kind
> of made up. Any suggestions?
>
> Also, shall we measure resource loads, top-level navigations, minutes
> spent looking at the top-level origin, ...? Probably all of those and
> more...

Very good questions. All of those are interesting metrics; it would be
interesting to try and see, on a small scale, if any of the
harder-to-measure ones actually track the easier-to-measure ones! :-)

I'd say top-level is key, as that's what the UI indicators relate to.
Probably site count is a bit more important than minutes spent - and
easier and less privacy-invasive to track, to boot.

Gerv


Ryan Sleevi

unread,
Dec 19, 2014, 12:11:01 PM12/19/14
to Dominick Marciano, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Dec 19, 2014 8:52 AM, "Dominick Marciano" <dominick...@gmail.com>
wrote:
>
> Good Afternoon,
>
> I have read the proposal regarding marking HTTP as non-secure. I feel
that it is a good step to let users know when their information is not
secure; however, I also feel that applying this across the board may be too
broad a stroke.
>
> If users are on sites that are transmitting any personally identifiable
information, such as geo-location information, name, address, telephone,
etc., to a non-secure site that the user should definitely be informed.
However there are also plenty of cases where a site may not employ HTTPS
that the user does not necessarily need to be notified about. Good
examples of this may be news sites, blogs, etc., where a user does not need
to login or provide any other information. In these cases (where no data
is being transmitted) HTTPS is not necessary and if users are being warned
about every site they are going to (that doesn't use HTTPS and doesn't
transmit any data), I believe a lot of users will start ignoring the
warning (or call their IT company) and then the warning will provide no
additional benefit.
>
> If possible, I believe this warning would be better used in cases
where a user is on a site not using HTTPS but is still transmitting
personal information (location, name, telephone, etc.), in addition to login
pages not using HTTPS. I don't know the feasibility of this, but I
strongly feel that the more specific the warning can be (instead of just
every HTTP site), the more useful it will be, and hopefully users will
be more attentive to it when it appears.
>
> Thank You,
>
> Dominick Marciano
> Programmer/IT Consultant

While the transmission of data is indeed an active concern, I believe you
underestimate the risks and attacks posed by the mere passive receipt of
data.

This could be the viewing of a video that is contrarian to local
authorities. It could be reading an article on reproductive health or
religious beliefs that are contrary to local norms. Revealing this
information could get you in "trouble", for some definition of trouble.

You could be reading the news and using the information as part of your
investment and long-term planning. An attacker who can modify this can
convince you that the sky is falling - or that there is a grand new
investment opportunity.

You could be reading a blog whose content an attacker changes to
suggest the blogger is endorsing or holding views contrary to what they
really hold.

You could be just browsing the web, and a pervasive passive monitor could
use this to build a profile of you, to track your related experiences on a
variety of sites, to cross-reference your interactions, and then declare
you a "threat" for otherwise benign interactions.

You could be reading a website supported by advertising, but that
advertising might be rewritten to credit the attacker, rather than the site
you're reading. Over time, the site you're reading may need to shut down,
because all of their revenue has been stolen by attackers.

The reality is that the confidentiality, integrity, and authenticity of the
content matters just as much for the receiver as it does the sender. This
is not being done merely on privacy grounds - but even if it were,
that goal would only be achievable if the response was as protected as the
request.

The other reality is that the mere act of an HTTP Request leaks a lot of
ambient state that is not intuitively associable with a user's action or
intent. Further, when you consider cookies, any proposal to do it
heuristically on the request devolves quickly into doing it for
(effectively) every request. This proposal not only simplifies what is
expected of developers AND vastly simplifies what is messaged to users, but
it can provide real protection.

However, even on top of all that, this proposal is not explicitly geared
towards any of what I said - even though it is all true. It is about
ensuring user agents do not mislead users about the state of the
connection, as they have for so long, implying through omission that their
connections are secure or private. For whatever attacks there may be on
HTTPS, whether implementation to operation, whatever attacks there may be
on the content, such as XSS or CSRF, it is unquestionable that HTTP is an
insecure transport that offers zero confidentiality, zero integrity, and
zero authenticity, and user agents should appropriately reflect that hard
truth.

Chris Palmer

unread,
Dec 19, 2014, 1:53:07 PM12/19/14
to Ryan Sleevi, Dominick Marciano, blink-dev, public-w...@w3.org, dev-se...@lists.mozilla.org, security-dev
I adapted Ryan's reply to be a new FAQ answer in the Chromium wiki:

https://sites.google.com/a/chromium.org/dev/Home/chromium-security/education/tls?pli=1#TOC-The-only-security-guarantee-TLS-provides-is-confidentiality.

Monica Chew

unread,
Dec 19, 2014, 4:34:17 PM12/19/14
to Chris Palmer, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
> Why not shift the onus from the user to the site operators? I would love
> to see a "wall of shame" for the Alexa top 1M sites that don't support
> HTTPS,
>

Just to follow up on this, from section 5.4 of
http://randomwalker.info/publications/cookie-surveillance-v2.pdf, 87% of
the sites from Alexa top 500 serve HTTP by default, and 66% of them don't
serve HTTPS at all, as measured by using HTTPS-Everywhere.
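A rough sketch of the bucketing behind those percentages (the study's actual methodology used HTTPS Everywhere rulesets; the network probing itself is omitted here, and the category names are mine):

```python
# Classify a site by whether it serves HTTPS at all and whether plain
# HTTP is the default (i.e. an HTTP request is not redirected to HTTPS).
from typing import Optional
from urllib.parse import urlsplit

def classify(https_reachable: bool,
             http_redirect_location: Optional[str]) -> str:
    """Bucket a site the way the survey's percentages are reported.

    https_reachable: a TLS connection to the site succeeded.
    http_redirect_location: the Location header returned for a plain
    HTTP request, or None if the site answered directly over HTTP.
    """
    if not https_reachable:
        return "no HTTPS at all"
    redirects_to_https = (
        http_redirect_location is not None
        and urlsplit(http_redirect_location).scheme == "https")
    if redirects_to_https:
        return "HTTPS by default"
    return "HTTPS available, HTTP by default"

# A site that serves HTTPS but never redirects plain HTTP to it:
print(classify(True, None))  # HTTPS available, HTTP by default
```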

Thanks,
Monica

Ryan Sleevi

unread,
Dec 19, 2014, 11:10:43 PM12/19/14
to Michael Martinez, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev, mozilla-de...@lists.mozilla.org
On Fri, Dec 19, 2014 at 7:32 PM, Michael Martinez <
michael....@xenite.org> wrote:

> On 12/19/2014 8:33 PM, Donald Stufft wrote:
>
>> So how is marking some Websites as "non-secure" (they all are) improving
>> the situation? Shaming Website owners for not using encrypted connections,
>> especially when your only concern is that you don't want some random
>> stranger to see that you are reading a blog, is not acceptable.
>> I think you’re fundamentally confused, I do not believe that anyone who
>> is making this proposal is trying to force site operators to use HTTPS.
>>
>
> Then I suggest you go back and reread the other posts from people who have
> said exactly that.
>
> It is precisely this kind of inattention to what is actually being said
> that keeps resetting this conversation.
>
> I am very ill right now and I don't have the energy for further
> discussion. I hope that the people who need to consider these things see
> past the needless nit-pickery and think about the big picture. You won't
> be able to undo the damage this proposal will do, if it is carried out,
> even if that turns out to be less than some of us fear.
>

It is not needlessly nitpicky. You've made several claims that range from
demonstrably inaccurate to factually incorrect. When pressed for details,
either you shift the topic to something unrelated or you claim it's not
your responsibility to provide those details. When presented with evidence
counter to your claim, you ignore it.

You'd be surprised by the number of people making a good faith effort to
give you both the benefit of the doubt and to patiently explain to you why
you're either misunderstanding the issues at play or downright wrong.

You've confused ARP poisoning with certificate compromise. You've proposed
TLS in everything but name, yet argued against TLS. You've conflated "want
the world to move to HTTPS" with "force the world to move to HTTPS", when
multiple people - the original poster, people from the same organization,
and people from different browsers - all pointing out that the two are not
the same.

Quoth the original proposal (
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/qTm0E376lswJ
)
"We, the Chrome Security Team, propose that user agents (UAs) gradually
change their UX to display non-secure origins as affirmatively non-secure."

Echo'd again -
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/cKBImJOUrEcJ

"HTTPS is the bare minimum requirement for secure web application
*transport*. Is secure transport by itself sufficient to achieve total
*application-semantic* security? No. But a browser couldn't determine that
level of security anyway. Our goal is for the browser to tell as much of
the truth as it can programmatically determine at run-time."

And again -
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/vgfv-e6A21MJ
"(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.

(b) What Peter Kasting said: we propose a passive indicator, not a
pop-up or interstitial. "
"Again, it's a passive indicator;"

More importantly, you've continued to make claims without supporting
evidence. This has been explained by
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/zWkvtQ7HpB4J
and
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/FlyAPvETiwMJ
and
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/n3DGBTbdyiMJ
and
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/sHWde0wF7akJ

While all feedback is appreciated to some level, you seem to be frustrated
by the lack of attention being posed to your points. You've gone as far as
to insult the intelligence of those replying in
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/sfs-2A1P7rUJ
. However, I'd like to again point out (as several have already patiently
done so) that you've effectively ignored every single question posed to
you, and have instead been rather dismissive and rude.

I think we're very aware of the big picture here, and have tried patiently
to explain it and to allay your misplaced fears that seem to be based on
honest and genuine misunderstanding. However, in the absence of reasonable
and rational discourse, and the absence of new information, perhaps it's
best to have had your say for now. In the big picture, continuing to use
unauthenticated, non-confidential transports that anyone can modify while
presenting them as acceptable and secure is downright dishonest - something
the original post explained with a number of examples -
https://groups.google.com/a/chromium.org/d/msg/security-dev/DHQLv76QaEM/qTm0E376lswJ
.

Henri Sivonen

unread,
Dec 21, 2014, 4:13:41 AM12/21/14
to Monica Chew, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org, security-dev, Chris Palmer
On Dec 18, 2014 10:12 PM, "Monica Chew" <m...@mozilla.com> wrote:

> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think
> we'd have to see very low plaintext rates (< 1%) in order not to habituate
> users into ignoring a plaintext warning indicator.

If the indicator is initially unobtrusive (e.g. in Firefox changing the
light gray globe to a darker gray eye) and the doorhanger just states the
truth about the lack of confidentiality, integrity and authenticity,
positive effects can be had even if the bulk of users ignore it. As long as
it makes site operators uneasy about users realizing the truth
about http being insecure as opposed to neutral, it may well lead to site
operators choosing to switch to https. That is, this initiative can be a
success even if most users ignore it, because most users don't need to be
the audience for them to benefit. The audience needs to be site operators
and a subset of users that the site operators don't want to alienate.

cha...@yandex-team.ru

unread,
Dec 21, 2014, 6:13:21 PM12/21/14
to Anne van Kesteren, Sigbjørn Vik, Chris Palmer, tyl...@google.com, WebAppSec WG, dev-se...@lists.mozilla.org, blink-dev, securi...@chromium.org


17.12.2014, 20:19, "Anne van Kesteren" <ann...@annevk.nl>:
> On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
>>  I respectfully, but strongly, disagree :) If you want to separate the
>>  states, I'd say that C is better than B. C has *some* security, B has
>>  *none*.
>
> You would advocate not blocking on certificate failures and just hand
> over credentials to network attackers? What would happen exactly when
> you visit e.g. google.com from the airport (connected to something
> with a shitty captive portal)?

This is a pretty interesting use case. When you connect at the airport, typically the first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certificate is for airport.logins.aero, or 1.1.1.1).

If you are me, you wrestle with the interface until you find out how to connect anyway, and hope that it doesn't remember this for other places (and that I do).

So having handed over your credit card details to get 30 minutes of connection time, you're in a hurry (your plane will leave soon, and you still haven't told Mum you're hoping she will collect you when you land).

If you're visiting google.com, it's hard to see what the next interstitial does that is useful. To take the standard Coast example, if you went to myAe.ro every day for the last month, and their certificate expired yesterday but hasn't changed, I think the answer is pretty clear.

If it expired last month, and you've been using it for a year, there may be an issue. If it is brand new and registered to someone else, there might well be an issue even though the certificate itself looks good…

just some thinking out loud…

cheers

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com

Daniel Kahn Gillmor

unread,
Dec 21, 2014, 6:13:22 PM12/21/14
to michael....@xenite.org, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
On 12/18/2014 05:55 PM, Michael Martinez wrote:
> No it doesn't need a certificate. A MITM can be executed through a
> compromised or rogue router. It's simple enough to set up a public
> network in well-known wifi hotspots and attract unwitting users. Then
> the HTTPS doesn't protect anyone's transmission from anything as the
> router forms the other end of the secure connection and initiates its
> own secure connection with the user's intended destination (either the
> site they are trying to get to or whatever site the bad guys want them
> to visit).

It sounds like you're saying that browsers don't verify the X.509
certificate presented by the https origin server, or at least that they
don't verify that the hostname matches.

This is a serious and extraordinary claim. Please provide evidence for it.
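The check in question, in miniature: after validating the chain, the browser compares the hostname the user navigated to against the names in the certificate. This toy matcher handles only exact names and a single left-most wildcard label, a much-simplified version of the RFC 6125 rules; real implementations handle far more edge cases.

```python
# Simplified hostname matching: exact label-by-label comparison, with a
# wildcard allowed only as the entire left-most label.
def hostname_matches(pattern: str, host: str) -> bool:
    p = pattern.lower().split(".")
    h = host.lower().split(".")
    if len(p) != len(h):
        return False
    if p[0] == "*":
        # Require at least two fixed labels, so "*.com" never matches.
        return len(p) >= 3 and p[1:] == h[1:]
    return p == h

print(hostname_matches("*.google.com", "mail.google.com"))    # True
print(hostname_matches("*.google.com", "mail.attacker.com"))  # False
print(hostname_matches("*.com", "google.com"))                # False
```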

--dkg


Michael Martinez

unread,
Dec 21, 2014, 6:13:22 PM12/21/14
to public-w...@w3.org, mozilla-de...@lists.mozilla.org, securi...@chromium.org, blin...@chromium.org
On 12/18/2014 6:04 PM, Ryan Sleevi wrote:
>
>
> On Dec 18, 2014 2:55 PM, "Michael Martinez"
> <michael....@xenite.org <mailto:michael....@xenite.org>> wrote:
> >
> > On 12/18/2014 5:29 PM, Chris Palmer wrote:
> >>
> >> On Thu, Dec 18, 2014 at 2:22 PM, Jeffrey Walton <nolo...@gmail.com
> <mailto:nolo...@gmail.com>> wrote:
> >>
> >>>> A) i don't think we should remove "This website does not supply
> >>>> identity information" -- but maybe replace it with "The identity
> of this
> >>>> site is unconfirmed" or "The true identity of this site is unknown"
> >>>
> >>> None of them are correct when an interception proxy is involved. All
> >>> of them lead to a false sense of security.
> >>>
> >>> Given the degree to which standard bodies accommodate (promote?)
> >>> interception, UA's should probably steer clear of making any
> >>> statements like that if accuracy is a goal.
> >>
> >> Are you talking about if an intercepting proxy is intercepting HTTP
> >> traffic, or HTTPS traffic?
> >>
> >> A MITM needs a certificate issued for the proxied hostname, that is
> >> signed by an issuer the client trusts. Some attackers can achieve
> >> that, but it's not trivial.
> >
> >
> > No it doesn't need a certificate. A MITM can be executed through a
> compromised or rogue router. It's simple enough to set up a public
> network in well-known wifi hotspots and attract unwitting users. Then
> the HTTPS doesn't protect anyone's transmission from anything as the
> router forms the other end of the secure connection and initiates its
> own secure connection with the user's intended destination (either the
> site they are trying to get to or whatever site the bad guys want them
> to visit).
> >
> > Google, Apple, and other large tech companies learned the hard way
> this year that their use of HTTPS failed to protect users from MITM
> attacks.
> >
>
> I'm sorry, this isn't how HTTPS works and isn't accurate, unless you
> have explicitly installed the router's cert as a CA cert. If you have,
> then you're not really an unwitting user (and it is quite hard to do
> this).
>

You're assuming people don't connect to open wifi hotspots where rogue
routers can be set up by anyone. If thieves are willing to build fake
ATM machines and distribute them to shopping centers across a large
geographical area then they will certainly go to the same lengths to
distribute rogue routers. Furthermore, at least two research teams have
shown earlier this year that wireless routers can be compromised with
virus-like software; when these vulnerabilities are exploited it won't
matter if the router has a valid certificate.

On top of that, University of Birmingham researchers also found that
inappropriately built apps for iOS and Android can leave users' security
credentials vulnerable to sniffing.

> Of course, everything you said is true for insecure HTTP. Which is why
> this proposal is about reflecting that truth to the user.
>
You people are putting your faith in a defense that has already been
compromised in many ways. The distributed nature of the network of
access points virtually assures that MITM attacks will continue to
bypass HTTPS security. The manner of warnings you place on Websites
with apparently invalid certificates is rendered moot.

Jeffrey Walton

unread,
Dec 21, 2014, 6:13:23 PM12/21/14
to Daniel Kahn Gillmor, blink-dev, public-w...@w3.org, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 5:10 PM, Daniel Kahn Gillmor
<d...@fifthhorseman.net> wrote:
> ...
> Four proposed fine-tunings:

Jeffrey Walton

unread,
Dec 21, 2014, 6:13:23 PM12/21/14
to Peter Kasting, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Thu, Dec 18, 2014 at 3:20 PM, Peter Kasting <pkas...@google.com> wrote:
> On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:
>>
>> Security warnings are often overused and therefore ignored [1]; it's even
>> worse to provide a warning for something that's not actionable. I think we'd
>> have to see very low plaintext rates (< 1%) in order not to habituate users
>> into ignoring a plaintext warning indicator.
>
> The context of the paper you cite is for a far more intrusive type of
> warning than anyone has proposed here. Interstitials or popups are very
> aggressive methods of warning that should only be used when something is
> almost certainly wrong, or else they indeed risk the "crying wolf" effect.
> Some sort of small passive indicator is a very different thing.

According to Gutmann, they are equally ignored by users. In the first
case, the user will click through the intrusive popup. In the second
case, they won't know what the icon means or they will ignore it.
Refer to Chapter 2 and Chapter 3 of his book.

In both cases, the browser should do the right thing for the user. In
a security context, that's "defend, don't ask". Refer to Chapter 2 of
Gutmann's book.

Michael Martinez

unread,
Dec 21, 2014, 6:13:24 PM12/21/14
to public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
On 12/18/2014 5:29 PM, Chris Palmer wrote:
> On Thu, Dec 18, 2014 at 2:22 PM, Jeffrey Walton <nolo...@gmail.com> wrote:
>
>>> A) i don't think we should remove "This website does not supply
>>> identity information" -- but maybe replace it with "The identity of this
>>> site is unconfirmed" or "The true identity of this site is unknown"
>> None of them are correct when an interception proxy is involved. All
>> of them lead to a false sense of security.
>>
>> Given the degree to which standard bodies accommodate (promote?)
>> interception, UA's should probably steer clear of making any
>> statements like that if accuracy is a goal.
> Are you talking about if an intercepting proxy is intercepting HTTP
> traffic, or HTTPS traffic?
>
> A MITM needs a certificate issued for the proxied hostname, that is
> signed by an issuer the client trusts. Some attackers can achieve
> that, but it's not trivial.

No it doesn't need a certificate. A MITM can be executed through a
compromised or rogue router. It's simple enough to set up a public
network in well-known wifi hotspots and attract unwitting users. Then
the HTTPS doesn't protect anyone's transmission from anything as the
router forms the other end of the secure connection and initiates its
own secure connection with the user's intended destination (either the
site they are trying to get to or whatever site the bad guys want them
to visit).

Google, Apple, and other large tech companies learned the hard way this
year that their use of HTTPS failed to protect users from MITM attacks.



Michael Martinez

unread,
Dec 21, 2014, 6:13:25 PM12/21/14
to Daniel Kahn Gillmor, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
On 12/18/2014 6:07 PM, Daniel Kahn Gillmor wrote:
> On 12/18/2014 05:55 PM, Michael Martinez wrote:
>> No it doesn't need a certificate. A MITM can be executed through a
>> compromised or rogue router. It's simple enough to set up a public
>> network in well-known wifi hotspots and attract unwitting users. Then
>> the HTTPS doesn't protect anyone's transmission from anything as the
>> router forms the other end of the secure connection and initiates its
>> own secure connection with the user's intended destination (either the
>> site they are trying to get to or whatever site the bad guys want them
>> to visit).
> It sounds like you're saying that browsers don't verify the X.509
> certificate presented by the https origin server, or at least that they
> don't verify that the hostname matches.
>
> This is a serious and extraordinary claim. Please provide evidence for it.
>
> --dkg
>

No, what I am saying is that you can bypass the certificate for a MITM
attack via a new technique that was published earlier this year. If you
compromise someone else's router you can control it from your own nearby
router. The compromised router with the valid certificate sends the
user through whatever gateway you specify.

What makes the access points most vulnerable to attack is the human
factor. Someone has to monitor the system for breaches and how often
does that happen? It will vary by company and community, depending on
how well they budget for competent security techs. And how often are
these routers replaced with newer models? Look at what happened with
the ISPs earlier this year who had to replace all their routers because
they ran out of routing-table memory. Even the "big guys" who are supposed to
think about this stuff all the time allow their equipment to depreciate
off the books or grow old until it's obsolete.

Meanwhile, you're trying to plug holes in a sieve with HTTPS and browser
warnings.

Donald Stufft

unread,
Dec 21, 2014, 6:13:25 PM12/21/14
to michael....@xenite.org, blin...@chromium.org, public-w...@w3.org, mozilla-de...@lists.mozilla.org, securi...@chromium.org, Daniel Kahn Gillmor

> On Dec 18, 2014, at 7:08 PM, Michael Martinez <michael....@xenite.org> wrote:
>
> On 12/18/2014 6:57 PM, Daniel Kahn Gillmor wrote:
>> On 12/18/2014 06:46 PM, Michael Martinez wrote:
>>> No, what I am saying is that you can bypass the certificate for a MITM
>>> attack via a new technique that was published earlier this year.
>> Links, please.
> I'm not going to sit here and do the research that you should already be doing for yourself, but here is one link that explains how some smart phone apps were compromised. It's disturbing to see that people working on security protocols are not aware of articles that have appeared on security blogs, in news media, and on university Websites.
>
> A Study of SSL Proxy Attacks on Android and iOS Mobile Applications
> http://harvey.binghamton.edu/~ychen/CCNC2014_SSL_Attacks.pdf
>
> This is only one example.

A skim of this shows that this is about mobile apps not correctly verifying TLS and has nothing to do with whether TLS as a protocol is broken. Probably you should learn how TLS actually works and read the papers you are linking before making extraordinary claims.
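
[Editorial note: the distinction matters, because the paper's attacks only work against clients that opt out of verification. A minimal Python sketch of that difference (illustrative only; the apps studied were mobile apps, not Python clients):]

```python
import ssl

# What browsers and correctly written apps do by default: verify the
# server's certificate chain against trusted CAs AND check that the
# hostname matches the certificate.
good = ssl.create_default_context()
print(good.verify_mode == ssl.CERT_REQUIRED)  # True
print(good.check_hostname)                    # True

# The bug class the paper describes: a client that disables verification
# accepts ANY certificate, including one presented by a rogue access
# point, and is therefore trivially interceptable. TLS itself is intact.
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False   # must be disabled before verify_mode
bad.verify_mode = ssl.CERT_NONE
```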

>>
>>> If you
>>> compromise someone else's router you can control it from your own nearby
>>> router. The compromised router with the valid certificate sends the
>>> user through whatever gateway you specify.
>> You seem to be saying now that the attacker does need a valid
>> certificate; earlier you claimed no certificate was needed.
> If you compromise a legitimate router you just hide behind the legitimate router and send people wherever you want. What's one or two more hops even if you're only passing them through a gateway they know nothing about? One recent study followed traffic on compromised routers that did nothing. The researchers suggested that it may have been an early stage botnet either in testing or buildout.
>> The fact that HTTPS is not 100% perfect does not mean that HTTP is
>> somehow secure.
> HTTP doesn't need to be secure. Explain to me why I should have to connect to an HTTPS server just to read the news or a blog? If I am not passing any information to the Website, why does it have to be hosted on HTTPS? That is not trivial for the average Webmaster.
>
> I'm not just concerned about HTTPS attacks. I'm also concerned about wasted effort being spent on unnecessary security because of Google's fear-mongering. I get why they are doing this. They were hurt in the public image by the Edward Snowden scandal. But they sure don't mind telling everyone they have no reasonable expectation of privacy when their services are involved.
>
> BTW -- as you do your due diligence in these matters, look also for information on how YouTube videos can be used to compromise users' computers. Gosh, that's an HTTPS Website, isn't it? I feel safer already.
>
> Complexity that serves no useful purpose represents no improvement on the status quo. Google and those who stand with it on these "privacy" issues need to make a much better case for coercing millions of Websites into using HTTPS.
>
> Browser developers do not need to participate in this public shaming game.
>
> I will leave off on this discussion at this point as it is clear to me I am better informed about what is happening than some, and I really am not going to be drawn into sharing link after link in order to play a stalling game.
>
> Some of you guys have already made up your minds on this without doing proper research. But you clearly haven't stopped to think about what a burden you will create for millions of Website owners who have no idea of how to support this insane initiative. And that will just make them more dependent on self-serving solution providers like Google.
>
> Don't do this. Please do NOT screw up the web with this nonsense.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
