Proposal: Marking HTTP As Non-Secure


Chris Palmer

unread,
Dec 12, 2014, 7:46:36 PM12/12/14
to public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Hi everyone,

Apologies to those of you who are about to get this more than once, due to the cross-posting. I'd like to get feedback from a wide variety of people: UA developers, web developers, and users. The canonical location for this proposal is: https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.

Proposal


We, the Chrome Security Team, propose that user agents (UAs) gradually change their UX to display non-secure origins as affirmatively non-secure. We intend to devise and begin deploying a transition plan for Chrome in 2015.


The goal of this proposal is to more clearly display to users that HTTP provides no data security.


Request


We’d like to hear everyone’s thoughts on this proposal, and to discuss with the web community how different transition plans might serve users.


Background


We all need data communication on the web to be secure (private, authenticated, untampered). When there is no data security, the UA should explicitly display that, so users can make informed decisions about how to interact with an origin.


Roughly speaking, there are three basic transport layer security states for web origins:


  • Secure (valid HTTPS, other origins like (*, localhost, *));

  • Dubious (valid HTTPS but with mixed passive resources, valid HTTPS with minor TLS errors); and

  • Non-secure (broken HTTPS, HTTP).


For more precise definitions of secure and non-secure, see Requirements for Powerful Features and Mixed Content.


We know that active tampering and surveillance attacks, as well as passive surveillance attacks, are not theoretical but are in fact commonplace on the web.


RFC 7258: Pervasive Monitoring Is an Attack

NSA uses Google cookies to pinpoint targets for hacking

Verizon’s ‘Perma-Cookie’ Is a Privacy-Killing Machine

How bad is it to replace adSense code id to ISP's adSense ID on free Internet?

Comcast Wi-Fi serving self-promotional ads via JavaScript injection

Erosion of the moral authority of transparent middleboxes

Transitioning The Web To HTTPS


We know that people do not generally perceive the absence of a warning sign. (See e.g. The Emperor's New Security Indicators.) Yet the only situation in which web browsers are guaranteed not to warn users is precisely when there is no chance of security: when the origin is transported via HTTP. Here are screenshots of the status quo for non-secure domains in Chrome, Safari, Firefox, and Internet Explorer:


Screen Shot 2014-12-11 at 5.08.48 PM.png


Screen Shot 2014-12-11 at 5.09.55 PM.png


Screen Shot 2014-12-11 at 5.11.04 PM.png


ie-non-secure.png


Particulars


UA vendors who agree with this proposal should decide how best to phase in the UX changes given the needs of their users and their product design constraints. Generally, we suggest a phased approach to marking non-secure origins as non-secure. For example, a UA vendor might decide that in the medium term, they will represent non-secure origins in the same way that they represent Dubious origins. Then, in the long term, the vendor might decide to represent non-secure origins in the same way that they represent Bad origins.


Ultimately, we can even imagine a long term in which secure origins are so widely deployed that we can leave them unmarked (as HTTP is today), and mark only the rare non-secure origins.


There are several ways vendors might decide to transition from one phase to the next. For example, the transition plan could be time-based:


  1. T0 (now): Non-secure origins unmarked

  2. T1: Non-secure origins marked as Dubious

  3. T2: Non-secure origins marked as Non-secure

  4. T3: Secure origins unmarked


Or, vendors might set thresholds based on telemetry that measures the ratios of user interaction with secure origins vs. non-secure. Consider this strawman proposal:


  1. Secure > 65%: Non-secure origins marked as Dubious

  2. Secure > 75%: Non-secure origins marked as Non-secure

  3. Secure > 85%: Secure origins unmarked


The particular thresholds or transition dates are very much up for discussion. Additionally, how to define “ratios of user interaction” is also up for discussion; ideas include the ratio of secure to non-secure page loads, the ratio of secure to non-secure resource loads, or the ratio of total time spent interacting with secure vs. non-secure origins.
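
For illustration only, the strawman above can be read as a tiny policy lookup. A minimal sketch in Python, assuming the thresholds and phase descriptions are exactly the strawman numbers from the list above (placeholders, not a commitment):

def marking_for_nonsecure_origins(secure_interaction_ratio: float) -> str:
    """Map the measured share of 'secure' user interaction (0.0 to 1.0)
    to how non-secure (HTTP) origins would be marked in the UI."""
    if secure_interaction_ratio > 0.85:
        return "Non-secure; secure origins unmarked"
    if secure_interaction_ratio > 0.75:
        return "Non-secure"
    if secure_interaction_ratio > 0.65:
        return "Dubious"
    return "unmarked (status quo)"

# Example: telemetry reports that 70% of page loads used secure transport.
print(marking_for_nonsecure_origins(0.70))  # -> "Dubious"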


We’d love to hear what UA vendors, web developers, and users think. Thanks for reading!

Eduardo Robles Elvira

unread,
Dec 12, 2014, 8:18:12 PM12/12/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Hello Chris et al:

I'm a web developer. I did some vendor-side, security-related development in Konqueror some time ago. Some first thoughts:

* In principle, the proposal makes sense to me. Who doesn't want a more secure web? Kudos for making it possible with ambitious proposals like this.

* The biggest problem I see is that, traditionally, you needed to pay to get an accepted certificate. This was a show-stopper for having TLS certs on small websites. Mozilla, the EFF, Cisco, and Akamai are trying to fix that [1], and StartSSL already gives out free certificates. Just stating the obvious: either you get easy and free "secure" certificates, or this proposal is going to make some webmasters angry.


Regards,
--
[1] https://www.eff.org/deeplinks/2014/11/certificate-authority-encrypt-entire-web
--
Eduardo Robles Elvira     @edulix             skype: edulix2
http://agoravoting.org       @agoravoting     +34 634 571 634

Chris Palmer

unread,
Dec 12, 2014, 8:32:28 PM12/12/14
to Eduardo Robles Elvira, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Fri, Dec 12, 2014 at 5:17 PM, Eduardo Robles Elvira <edu...@agoravoting.com> wrote:

* The biggest problem I see is that, traditionally, you needed to pay to get an accepted certificate. This was a show-stopper for having TLS certs on small websites. Mozilla, the EFF, Cisco, and Akamai are trying to fix that [1], and StartSSL already gives out free certificates. Just stating the obvious: either you get easy and free "secure" certificates, or this proposal is going to make some webmasters angry.


Oh yes, absolutely. Obviously, Let's Encrypt is a great help, SSLMate's ease of use and low price are great, and CloudFlare's free SSL helps too.

Hopefully, as operations like those ramp up, it will get easier and easier for web developers to switch to HTTPS. We (Chrome) will weigh changes to the UX very carefully, and with a close eye on HTTPS adoption.

Alex Gaynor

unread,
Dec 12, 2014, 8:47:00 PM12/12/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Fantastic news, I'm very glad to see the Chrome Security Team taking initiative on this.

Existing browsers' behavior of defaulting to, and using a "neutral" UI for, HTTP is fundamentally an assumption about what users want. And it's not an assumption that is grounded in data.

No ordinary user's mental model of communication on the net includes a lack of authenticity, integrity, or confidentiality. Plaintext is a blight on the internet, and this is a fantastic step towards making reality match users' (TOTALLY REASONABLE!) expectations.

Cheers,
Alex


Hanno Böck

unread,
Dec 12, 2014, 9:17:01 PM12/12/14
to securi...@chromium.org, Chris Palmer
On Fri, 12 Dec 2014 16:46:34 -0800, "'Chris Palmer' via Security-dev" <securi...@chromium.org> wrote:

> Apologies to those of you who are about to get this more than once,
> due to the cross-posting. I'd like to get feedback from a wide
> variety of people: UA developers, web developers, and users. The
> canonical location for this proposal is:

First of all: ++ from me, I like this very much.

But there's one thing I feel increasingly uneasy about and I'd like to
bring that up: I think much more stuff should be moved into the
"dubious" category.

I feel that there is some "de-facto standard" for secure HTTPS configs that has emerged from discussions around TLS in recent years; some large webpages and some security-conscious people follow it, but most ignore it (including e.g. banks and others who should care). E.g. I'd consider HSTS pretty much required for a secure setup. Also, agl wrote a few days ago that "everything less than TLS 1.2 with an AEAD cipher suite is cryptographically broken". Of course that's completely reasonable given the number of CBC/MAC-then-Encrypt-related attacks we've seen in the past. So I'm asking: why do webpages that only support "cryptographically broken" protocols get a green lock? They shouldn't.

I feel this is the part that should be discussed alongside.

--
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42

Ryan Sleevi

unread,
Dec 12, 2014, 9:23:54 PM12/12/14
to Hanno Böck, security-dev, Chris Palmer

Indeed, and expect a separate discussion of that. You can already see some of the discussion on the security-dev@ list regarding requiring OCSP stapling or modern ciphersuites for EV, and one can naturally assume that will migrate to DV.

That is, just as EV moves to DV when deployed dangerously, so too should DV move to dubious.

But, as you note, that's something to be discussed alongside.

Chris Palmer

unread,
Dec 12, 2014, 9:25:45 PM12/12/14
to Ryan Sleevi, Hanno Böck, security-dev
The SHA-1 deprecation plan that Ryan and I put up (http://googleonlinesecurity.blogspot.com/2014/09/gradually-sunsetting-sha-1.html) was a first take at moving old and busted crypto into Dubious. See also https://code.google.com/p/chromium/issues/detail?id=102949 for broken things we simply un-supported, as well.

Mike West

unread,
Dec 13, 2014, 6:53:18 AM12/13/14
to Ryan Sleevi, Hanno Böck, security-dev, Chris Palmer

That is, just as EV moves to DV when deployed dangerously, so too should DV move to dubious.

Ryan, for clarity, I think you mean that DV should move to dubious iff deployed dangerously, right? Not as a blanket statement? :)

-mike

Igor Bukanov

unread,
Dec 13, 2014, 11:56:22 AM12/13/14
to Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Free SSL certificates help, but another problem is that activating SSL not only generates warnings, it can outright break a site due to links to insecure resources. Just consider the case of old pages with a few YouTube videos served in http iframes. Accessing those pages over https stops the videos from working, because browsers block access to active insecure content. In the case of YouTube one can fix that, but for other resources it may not be possible.

So what is required is the ability to refer to insecure content from HTTPS pages without harming the user experience. For example, there should be a way to insert an http iframe into an https site. Similarly, it would be nice if a web developer could refer to scripts, images, etc. over http as long as the script/image tag is accompanied by a secure hash of the known content.
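
A rough sketch of that hash-checking idea, outside any browser: hypothetical Python that fetches a resource over plain HTTP and refuses to use it unless it matches a digest the embedding page committed to in advance (the URL and digest below are made up):

import hashlib
import urllib.request

def fetch_with_integrity(url: str, expected_sha256: str) -> bytes:
    """Fetch a resource over an insecure channel, but only accept it if it
    matches the publisher-supplied SHA-256 digest (integrity, not confidentiality)."""
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("integrity check failed: resource differs from the pinned digest")
    return data

# Hypothetical usage:
# widget = fetch_with_integrity("http://example.com/widget.js",
#                               "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")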


Mathias Bynens

unread,
Dec 13, 2014, 12:33:42 PM12/13/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Sat, Dec 13, 2014 at 1:46 AM, 'Chris Palmer' via blink-dev <blin...@chromium.org> wrote:

We know that people do not generally perceive the absence of a warning sign. (See e.g. The Emperor's New Security Indicators.) Yet the only situation in which web browsers are guaranteed not to warn users is precisely when there is no chance of security: when the origin is transported via HTTP. Here are screenshots of the status quo for non-secure domains in Chrome, Safari, Firefox, and Internet Explorer:




For completeness' sake, here's what a non-secure origin looks like in Opera:


[Opera screenshot]

Christian Heutger

unread,
Dec 13, 2014, 2:06:02 PM12/13/14
to pal...@google.com, edu...@agoravoting.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
I see a big danger in the current trend. If everyone is expected to have a free "secure" certificate and is required to enable HTTPS, nothing is won. DV certificates (similar to DANE) ultimately say absolutely nothing about the website operator. They ensure encryption, so I can then be phished, be scammed, … encrypted. Big advantage!^^ Pushing real validation (e.g. EV with a green address bar and validated details from an independent third party, no breakable, spoofable automatism) vs. no validation is much more important and should be focused on. However, this "change" could come with marking HTTP as Non-Secure, but just stating HTTPS as secure is completely the wrong signal and will result in more confusion than before and in losing any trust in any kind of browser padlock.

Just a proposal:

Mark HTTP as Non-Secure (similar to self-signed) e.g. with a red padlock or sth. similar.
Mark HTTPS as Secure (and only secure in favor of encrypted) e.g. with a yellow padlock or sth. similar
Mark HTTPS with Extended Validation (encrypted and validated) as it is with a green padlock or sth. similar

This would be a good road for more security on the web.

Chris Palmer

unread,
Dec 14, 2014, 12:59:23 PM12/14/14
to Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sat, Dec 13, 2014 at 8:56 AM, Igor Bukanov <ig...@mir2.org> wrote:

Free SSL certificates help, but another problem is that activating SSL not only generates warnings, it can outright break a site due to links to insecure resources. Just consider the case of old pages with a few YouTube videos served in http iframes. Accessing those pages over https stops the videos from working, because browsers block access to active insecure content. In the case of YouTube one can fix that, but for other resources it may not be possible.

Yes, unfortunately we have a collective action problem. (http://en.wikipedia.org/wiki/Collective_action#Collective_action_problem) But just because it's hard doesn't mean we shouldn't try. I'd suggest that embedders ask embeddees to at least make HTTPS available, even if not the default.

Also, keep in mind that this proposal is only to mark HTTP as non-secure — HTTP will still work, and you can still host your site over HTTP.
 
So what is required is the ability to refer to insecure content from HTTPS pages without harming the user experience.

No, because that reduces or eliminates the security guarantee of HTTPS.
 
For example, there should be a way to insert an http iframe into an https site. Similarly, it would be nice if a web developer could refer to scripts, images, etc. over http as long as the script/image tag is accompanied by a secure hash of the known content.

Same thing here. The security guarantee of HTTPS is the combination of server authentication, data integrity, and data confidentiality. It is not a good user experience to take away confidentiality without telling users. And, unfortunately, we cannot effectively communicate that nuance. We have enough trouble effectively communicating the secure/non-secure distinction as it is.

Alex Gaynor

unread,
Dec 14, 2014, 1:01:00 PM12/14/14
to Chris Palmer, Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Chris,

Is there a plan for HTTP to eventually have an interstitial, the way HTTPS with a bogus cert does?

Alex


Chris Palmer

unread,
Dec 14, 2014, 1:17:08 PM12/14/14
to Christian Heutger, edu...@agoravoting.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
On Sat, Dec 13, 2014 at 11:05 AM, Christian Heutger <chri...@heutger.net> wrote:

I see a big danger in the current trend. If everyone is expected to have a free "secure" certificate and is required to enable HTTPS, nothing is won. DV certificates (similar to DANE) ultimately say absolutely nothing about the website operator.

Reducing the number of parties you have to trust from [ the site operator, the operators of all networks between you and the site operator ] to just [ the site operator ] is a huge win.
 
They ensure encryption, so I can then be phished, be scammed, … encrypted. Big advantage!^^ Pushing real validation (e.g. EV with a green address bar and validated details from an independent third party, no breakable, spoofable automatism) vs. no validation is much more important and should be focused on.

I think you'll find EV is not as "extended" as you might be hoping.

But more importantly, the only way to get minimal server auth, data integrity, and data confidentiality on a mass scale is with something at least as easy to deploy as DV. Indeed, you'll see many of the other messages in this thread are from people concerned that DV isn't easy enough yet! So requiring EV is a non-starter.

Additionally, the web origin concept is (scheme, host, port). Crucially, EV-issued names are not distinct origins from DV-issued names, and proposals to enforce such a distinction in browsers have not gotten any traction because they are not super feasible (for a variety of reasons).
 
However, this "change" could come with marking HTTP as Non-Secure, but just stating HTTPS as secure is completely the wrong signal and will result in more confusion than before and in losing any trust in any kind of browser padlock.

HTTPS is the bare minimum requirement for secure web application *transport*. Is secure transport by itself sufficient to achieve total *application-semantic* security? No. But a browser couldn't determine that level of security anyway. Our goal is for the browser to tell as much of the truth as it can programmatically determine at run-time.

Igor Bukanov

unread,
Dec 14, 2014, 1:34:26 PM12/14/14
to Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 14 December 2014 at 18:59, Chris Palmer <pal...@google.com> wrote:

Yes, unfortunately we have a collective action problem. (http://en.wikipedia.org/wiki/Collective_action#Collective_action_problem) But just because it's hard doesn't mean we shouldn't try. I'd suggest that embedders ask embeddees to at least make HTTPS available, even if not the default.

Also, keep in mind that this proposal is only to mark HTTP as non-secure — HTTP will still work, and you can still host your site over HTTP.

If serving content over HTTPS produces broken pages, the incentive to enable encryption is very low. As already mentioned, a solution to that is to allow encrypted pages to be served as http://, so that pages which refer to unencrypted elements are not broken but merely produce warnings. Such encrypted http:// would also allow fewer warnings for a page whose content is all available over a self-signed, key-pinned certificate, since that is strictly more secure than plain HTTP.

Chris Palmer

unread,
Dec 14, 2014, 1:34:56 PM12/14/14
to Alex Gaynor, Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sun, Dec 14, 2014 at 10:00 AM, Alex Gaynor <alex....@gmail.com> wrote:

Is there a plan for HTTP to eventually have an interstitial, the way HTTPS with a bogus cert does?

We (Chrome) have no current plan to do that. In the Beautiful Future when some huge percentage of pageviews are shown via secure transport, it might or might not make sense to show an interstitial for HTTP. I kind of doubt that it will be a good idea, but who knows. We'll see.

Chris Palmer

unread,
Dec 14, 2014, 1:40:58 PM12/14/14
to Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sun, Dec 14, 2014 at 10:34 AM, Igor Bukanov <ig...@mir2.org> wrote:

If serving content over HTTPS produces broken pages, the incentive to enable encryption is very low.

That's the definition of a collective action problem, yes.

I think that the incentives will change, and are changing, and people are becoming more aware of the problems of non-secure transport. There is an on-going culture shift, and more and more publishers are going to offer HTTPS. For example, http://open.blogs.nytimes.com/author/eitan-konigsburg/?_r=0.

As already mentioned, a solution to that is to allow encrypted pages to be served as http://, so that pages which refer to unencrypted elements are not broken but merely produce warnings. Such encrypted http:// would also allow fewer warnings for a page whose content is all available over a self-signed, key-pinned certificate, since that is strictly more secure than plain HTTP.

But, again, consider the definition of the origin. If it is possible for securely-transported code to run in the same context as non-securely transported code, the securely-transported code is effectively non-secure.

Igor Bukanov

unread,
Dec 14, 2014, 1:48:08 PM12/14/14
to Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 14 December 2014 at 19:40, Chris Palmer <pal...@google.com> wrote:

But, again, consider the definition of the origin. If it is possible for securely-transported code to run in the same context as non-securely transported code, the securely-transported code is effectively non-secure.

Yes, but the point is that the page will be shown with the same warnings as a plain http page rather than showing a broken page.

Igor Bukanov

unread,
Dec 14, 2014, 1:53:30 PM12/14/14
to Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
I.e. consider that currently a hosting provider has no option to unconditionally encrypt the pages they host for modern browsers, as that may break their users' pages. With encrypted http:// they would get such an option, delegating the job of fixing warnings about insecure content to the content producers, as it should be.

Chris Palmer

unread,
Dec 14, 2014, 2:08:32 PM12/14/14
to Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sun, Dec 14, 2014 at 10:53 AM, Igor Bukanov <ig...@mir2.org> wrote:

I.e. consider that currently a hosting provider has no option to unconditionally encrypt the pages they host for modern browsers, as that may break their users' pages. With encrypted http:// they would get such an option, delegating the job of fixing warnings about insecure content to the content producers, as it should be.

I'm sorry; I still don't understand what you mean. Do you mean that you want browsers to treat some hypothetical encrypted HTTP protocol as if it were a secure origin, but still allow non-secure embedded content in these origins?

I would argue strongly against that, and so far not even the "opportunistic encryption" advocates have argued for that.

Igor Bukanov

unread,
Dec 14, 2014, 2:26:45 PM12/14/14
to Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
I would like to see some hypothetical encrypted http:// where a browser presents a page as if it were over https:// if everything is of secure origin, and as if it were served over plain http if not. That is, if a future browser shows warnings for plain http, it will show the same warnings for encrypted http:// with insecure resources.

The point of such encrypted http:// is to guarantee that *enabling encryption never degrades the user experience* compared with plain http. This would allow a particular installation to start serving everything encrypted independently of the job of fixing the content. And as the page is still served as http://, the user training/expectations about https:// sites no longer apply.

Michal Zalewski

unread,
Dec 14, 2014, 2:48:08 PM12/14/14
to Igor Bukanov, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
> I would like to see some hypothetical encrypted http:// where a browser
> presents a page as if it were over https:// if everything is of secure origin,
> and as if it were served over plain http if not. That is, if a future browser
> shows warnings for plain http, it will show the same warnings for
> encrypted http:// with insecure resources.

Browsers have flirted with something along the lines of your proposal with
non-blocking mixed-content icons. Unfortunately, websites are not
static - so the net effect was that if you watched the address bar
constantly, you'd eventually get notified that previously-entered
data you thought would be visible only to a "secure" origin had
already been leaked to / exposed to network attackers.

The main point of having a visible and stable indicator for encrypted
sites is to communicate to the user that the site offers a good degree
of resilience against the examination or modification of the exchanged
data by network attackers. (It is a complicated property and it is
often misunderstood as providing clear-cut privacy assurances for your
online habits, but that's a separate topic.)

Any changes that make this indicator disappear randomly at unexpected
times, or make the already-complicated assurances more fragile and
even harder to explain, are probably not the right way to go.

/mz

Igor Bukanov

unread,
Dec 14, 2014, 3:04:05 PM12/14/14
to Michal Zalewski, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 14 December 2014 at 20:47, Michal Zalewski <lca...@google.com> wrote:
The main point of having a visible and stable indicator for encrypted
sites is to communicate to the user that the site offers a good degree
of resilience against the examination or modification of the exchanged
data by network attackers.

Then the browser should show absolutely no indication of a secure origin for encrypted http://. The idea is that the encrypted http:// experience would be equivalent to the current http experience, with no indications of security and no warnings. However, encrypted http:// with insecure elements will start to produce warnings in the same way a future browser will show warnings for plain http.

Without something like this I just do not see how a lot of sites could ever start enabling encryption unconditionally. I.e. currently enabling https requires modifying content, often in a significant way. I would like a site operator to have the option of enabling encryption unconditionally without touching the content.

Michal Zalewski

unread,
Dec 14, 2014, 3:07:46 PM12/14/14
to Igor Bukanov, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
> Then the browser should show absolutely no indication of a secure origin for
> encrypted http://. The idea is that the encrypted http:// experience would be
> equivalent to the current http experience, with no indications of security
> and no warnings. However, encrypted http:// with insecure elements will
> start to produce warnings in the same way a future browser will show
> warnings for plain http.

As mentioned in my previous response, this gets *really* hairy because
the "has insecure elements" part is not a static property that can be
determined up front; so, you end up with the problem of sudden and
unexpected downgrades and notifying the user only after the
confidentiality or integrity of the previously-stored data has been
compromised.

/mz

Christian Heutger

unread,
Dec 14, 2014, 4:41:39 PM12/14/14
to Chris Palmer, edu...@agoravoting.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
> Reducing the number of parties you have to trust from [ the site operator, the operators of all networks between you and the site operator ] to just [ the site operator ] is a huge win.

But how can I trust him, and who is he? No WHOIS records, no imprint, all spoofable, so what should I trust then? If there is a third party who states to me that the details given are correct and gives a warranty against misinformation, that's something I could trust. When shopping online I also look for customer reviews, but I do not regard them as fully trustworthy, as they may be spoofed; if the shop has a seal like Trusted Shops with a money-back guarantee, I feel good and shop there.
 
> I think you'll find EV is not as "extended" as you might be hoping.

I know, but it’s the best we currently have. And DV is much worse, ultimately losing any trust in HTTPS, scaling it down to encryption and nothing else.

> But more importantly, the only way to get minimal server auth, data integrity, and data confidentiality on a mass scale is with something at least as easy to deploy as DV. Indeed, you'll see many of the other messages in this thread are from people concerned that DV isn’t
> easy enough yet! So requiring EV is a non-starter.

I agree on data confidentiality, and maybe also on integrity, although DV without effort or cost may undermine that as well. But server auth by itself says little: whatever server endpoint I called is what I get, and nothing more is authenticated. However, I support the idea of mass encryption; but before confusing and damaging end users' understanding of internet security, there needs to be a clear differentiation between just encryption and encryption with valid authentication.

> HTTPS is the bare minimum requirement for secure web application *transport*. Is secure transport by itself sufficient to achieve total *application-semantic* security? No. But a browser couldn't determine that level of security anyway. Our goal is for the browser to tell
> as much of the truth as it can programmatically determine at run-time.

But wasn’t that the idea of certificates? Seals on websites can be spoofed, WHOIS records can be spoofed, imprints can be spoofed, but spoofing EV certificates, e.g. in combination with solutions like pinning, is a hard job. If there were no browser warning for self-signed certificates, I would not see any advantage in mass-deployed (ultimately requiring a fully automated process) DV certificates. It's a bit back to the roots, to times I remember when some website operators offered their self-signed root to be installed in the browser to remove the browser warning.

Peter Bowen

unread,
Dec 14, 2014, 5:57:44 PM12/14/14
to Chris Palmer, Igor Bukanov, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Sun, Dec 14, 2014 at 11:08 AM, 'Chris Palmer' via Security-dev
<securi...@chromium.org> wrote:
> On Sun, Dec 14, 2014 at 10:53 AM, Igor Bukanov <ig...@mir2.org> wrote:
>
>> I.e. consider that currently a hosting provider has no option to
>> unconditionally encrypt the pages they host for modern browsers, as that may
>> break their users' pages. With encrypted http:// they would get such an option,
>> delegating the job of fixing warnings about insecure content to the content
>> producers, as it should be.
>
>
> I'm sorry; I still don't understand what you mean. Do you mean that you want
> browsers to treat some hypothetical encrypted HTTP protocol as if it were a
> secure origin, but still allow non-secure embedded content in these origins?

I'm also not clear on what Igor intended, but there is a real issue
with browser presentation of URLs using TLS today. There is no way to
declare "I know that this page will have insecure content, so don't
consider me a secure origin" such that the browser will show a
"neutral" icon rather than a warning icon. I think there is a strong
impression that a closed lock is better than neutral, but a yellow
warning sign over the lock is worse than neutral. Today this prevents
sites from using HTTPS unless they have very high confidence that
all resources on the page will come from secure origins.

Thanks,
Peter

cha...@yandex-team.ru

unread,
Dec 15, 2014, 4:04:44 AM12/15/14
to Christian Heutger, pal...@google.com, edu...@agoravoting.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
(Ouch. maintaining the cross-posting ;( ).
 
15.12.2014, 11:57, "Christian Heutger" <chri...@heutger.net>:
However, this "change" could come with marking HTTP as Non-Secure, but just stating HTTPS as secure is completely the wrong signal and will result in more confusion than before and in losing any trust in any kind of browser padlock.
 
The message someone wrote about "I understand I am using insecure mixed content, please don't make me look worse than someone who doesn't understand that" strikes a chord - I think there is value in supporting the use case.
 
Just a proposal:
 
Mark HTTP as Non-Secure (similar to self-signed) e.g. with a red padlock or sth. similar.
Mark HTTPS as Secure (and only secure in favor of encrypted) e.g. with a yellow padlock or sth. similar
Mark HTTPS with Extended Validation (encrypted and validated) as it is with a green padlock or sth. similar
 
Just a "nit pick" - please don't EVER rely only on colour distinction to communicate anything important. There are a lot of colour-blind people, and on top there are various people who rely on high-contrast modes and the like which effectively strip the colour out.
 
cheers
 
Chaals
 
--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com
 

Igor Bukanov

unread,
Dec 15, 2014, 4:16:46 AM12/15/14
to Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 14 December 2014 at 23:57, Peter Bowen <pzb...@gmail.com> wrote:
 I think there is a strong
impression that a closed lock is better than neutral, but a yellow
warning sign over the lock is worse than neutral.

The problem is not just a warning sign.

Browsers prevent any active content, including iframes served over http, from loading. Thus showing a page with YouTube and other videos over https is not an option unless one fixes the page. Now consider that it is not a matter of running sed on a set of static files but rather of patching the stuff stored in the database or fixing the JS code that inserts the video, and the task of enabling https becomes non-trivial and very content-dependent.

So indeed an option is needed to declare that, despite proper certificates and encryption, the site should be treated as an insecure origin. This way the page will be shown as if it were served with plain http as before, with no changes in user experience. But then it cannot be an https site, since many users still consider that https is enough to assume a secure site. Hence the idea of encrypted http://, or something that makes the user experience with an encrypted page absolutely the same as with plain http://, down to the browser stripping http:// from the URL.

After considering this, I think it would even be fine for a future browser to show a warning for such properly-encrypted-but-explicitly-declared-as-insecure pages, in the same way a warning will be shown for plain http. And it would be really nice if a site operator, after activating such user-invisible encryption, could receive reports from the browser about any violations of the secure origin policy, in the same way violations of CSP are reported today. This would give the nice possibility of activating encryption without breaking anything, collecting reports of violated secure-origin policy, fixing content, and finally declaring the site explicitly https-only.

Jeffrey Walton

unread,
Dec 15, 2014, 4:29:54 AM12/15/14
to Christian Heutger, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
On Sat, Dec 13, 2014 at 2:05 PM, Christian Heutger
<chri...@heutger.net> wrote:
> I see a big danger in the current trend.

Surely you haven't missed the big danger in plain text traffic. That
traffic gets captured and fed into systems like XKeyscore for Tailored
Access Operations (TAO). In layman's terms, adversaries are using the
information gathered to gain unauthorized access to systems.

> If everyone is expected to have a free
> "secure" certificate and is required to enable HTTPS, nothing is won.
> DV certificates (similar to DANE) ultimately say absolutely nothing
> about the website operator.

The race to the bottom among CAs is to blame for the quality of
verification by the CAs.

With companies like StartCom, CAcert and Mozilla offering free
certificates, there is no barrier to entry.

Plus, I don't think a certificate needs to say anything about the
operator. They need to ensure the server is authenticated. That is,
the public key bound to the DNS name is authentic.

> They ensure encryption, so I can then be
> phished, be scammed, … encrypted. Big advantage!^^

As I understand it, phishers try to avoid TLS because they count on
the plain text channel to avoid all the browser warnings. Peter
Gutmann discusses this in his Engineering Security book
(https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf).

> Pushing real validation
> (e.g. EV with a green address bar and validated details from an independent third
> party, no breakable, spoofable automatism) vs. no validation is much more
> important and should be focused on.

You should probably read Gutmann's Engineering Security. See his
discussion of "PKI me harder" in Chapter 1 or 6 (IIRC).

> However, this "change" could come with
> marking HTTP as Non-Secure, but just stating HTTPS as secure is
> completely the wrong signal and will result in more confusion than before
> and in losing any trust in any kind of browser padlock.

Security engineering studies seem to indicate most users don't
understand the icons. It would probably be better if the browsers did
the right thing, and took the users out of the loop. Gutmann talks
about it in detail (with lots of citations).

> Just a proposal:
>
> Mark HTTP as Non-Secure (similar to self-signed) e.g. with a red padlock or
> sth. similar.

+1. In the browser world, plaintext was (still is?) held in higher
esteem than opportunistic encryption. Why the browsers choose to
indicate things this way is a mystery.

> Mark HTTPS as Secure (and only secure in favor of encrypted) e.g. with a
> yellow padlock or sth. similar
> Mark HTTPS with Extended Validation (encrypted and validated) as it is with
> a green padlock or sth. similar

Why green for EV (or why yellow for DV or DANE)? EV does not add any
technical controls. From a security standpoint, DV and EV are
equivalent.

If DNS is authentic, then DANE provides stronger assurances than DV or
EV since the domain operator published the information and the
veracity does not rely on others like CAs (modulo DBOUND).

Not relying on a CA is a good thing since it's usually advantageous to
minimize trust (for some definition of "trust"). Plus, CAs don't
really warrant anything, so it's not clear what exactly they are
providing to relying parties (they are providing a signature for
money to the applicant).

Open question: do you think the browsers will support a model other
than the CA Zoo for rooting trust?

Michal Zalewski

unread,
Dec 15, 2014, 4:30:23 AM12/15/14
to Igor Bukanov, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
> So indeed an option to declare that despite proper certificates and
> encryption the site should be treated as of insecure origin is needed. This
> way the page will be shown as if it was served as before with plain http
> with no changes in user experience. But then it cannot be a https site
> since many users still consider that https is enough to assume a secure
> site. Hence the idea of encrypted http:// or something that makes user
> experience with an encrypted page absolutely the same as she has with plain
> http:// down to the browser stripping http:// from the URL.

Sounds like you're essentially proposing a flavor of opportunistic
encryption for http://, right?

That seems somewhat tangential to Chris' original proposal, and there
is probably a healthy debate to be had about this; it may be also
worthwhile to look at SPDY and QUIC. In general, if you're comfortable
with not providing users with a visible / verifiable degree of
transport security, I'm not sure how the proposal changes this?

By the way, note that nothing is as simple as it seems; opportunistic
encryption is easy to suggest, but it's pretty damn hard to iron out
all the kinks. If there is genuinely no distinction between plain old
HTTP and opportunistically encrypted HTTP, the scheme can be
immediately rendered useless by any active attacker, and suffers from
many other flaws (for example, how do you link from within that
opportunistic scheme to other URLs within your application without
downgrading to http:// or upgrading to "real" https://?). Establishing
a new scheme solves that, but doesn't really address your other
concern - you still need to clean up all links. On top of that, it
opens a whole new can of worms by messing around with SOP.

/mz

Ryan Sleevi

unread,
Dec 15, 2014, 4:38:43 AM12/15/14
to Jeffrey Walton, security-dev, blink-dev, dev-se...@lists.mozilla.org, Christian Heutger, public-w...@w3.org

From an SOP point of view, this is true.
However, it is increasingly less true if you're willing to ignore the (near cataclysmic) SOP failure, as EV gains technical controls such as certificate transparency and potentially mandatory stronger security settings (e.g. secure ciphersuites in modern TLS, OCSP stapling, etc). Additionally, there are other technical controls (validity periods, key processing) that do offer distinction.

That is, it is not all procedural changes, and UAs can detect and differentiate. While the hope is that these will be able to apply to all sites in the future, any change of this scale takes time.

> If DNS is authentic, then DANE provides stronger assurances than DV or
> EV since the domain operator published the information and the
> veracity does not rely on others like CAs (modulo DBOUND).
>
> Not relying on a CA is a good thing since it's usually advantageous to
> minimize trust (for some definition of "trust"). Plus, CAs don't
> really warrant anything, so it's not clear what exactly they are
> providing to relying parties (they are providing a signature for
> money to the applicant).
>
> Open question: do you think the browsers will support a model other
> than the CA Zoo for rooting trust?

Chromium has no plans for this, particularly those based on DNS/DANE, which are empirically less secure and more operationally fraught with peril. I would neither take it as a foregone conclusion that the CA system cannot improve, nor am I confident that any of the known alternatives are either practical or comparable in security to CAs, let alone superior.

Igor Bukanov

unread,
Dec 15, 2014, 5:03:15 AM12/15/14
to Michal Zalewski, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 15 December 2014 at 10:30, Michal Zalewski <lca...@google.com> wrote:
That seems somewhat tangential to Chris' original proposal, and there
is probably a healthy debate to be had about this; it may be also
worthwhile to look at SPDY and QUIC. In general, if you're comfortable
with not providing users with a visible / verifiable degree of
transport security, I'm not sure how the proposal changes this?


Chris' original proposal is a stick. I want to also give a site operator a carrot. That could be an option to activate encryption that is not visible to the user and to *receive* from the browser all reports about violations of the secure origin policy. This way the operator will know that they can activate HTTPS without worsening the user experience, and will have information that helps fix the content.

If there is genuinely no distinction between plain old
HTTP and opportunistically encrypted HTTP, the scheme can be
immediately rendered useless by any active attacker

I am not proposing that user-invisible encryption should stay forever. Rather, it should be treated just as a tool to help site operators transition to proper https, so that at no stage is the user experience worse than continuing to serve pages over plain http.

mof...@gmail.com

unread,
Dec 15, 2014, 9:41:39 AM12/15/14
to securi...@chromium.org
On Saturday, 13 December 2014 at 3:46:36 UTC+3, Chris Palmer wrote:
> Hi everyone,
>
>
> Apologies to those of you who are about to get this more than once, due to the cross-posting. I'd like to get feedback from a wide variety of people: UA developers, web developers, and users. The canonical location for this proposal is: https://www.chromium.org/Home/chromium-security/marking-http-as-non-secure.
>
> [snip]

This is the worst idea ever. There is no point in encrypting ANYTHING. Why encrypt cat pictures? There are public websites, static websites - there is no need to encrypt their traffic. They are public, open, and not related to anything that needs to be hidden. HTTPS adds an additional point of failure and an additional thing to monitor for system administrators and web developers.
Also, it's not always possible to use HTTPS. For many websites it's much easier to just stay HTTP-only.

You basically want all websites to work over HTTPS, and that's a stupid idea.

If something like this is implemented in Chromium, I'll cast a vote to prevent it from growing.

Daniel Veditz

unread,
Dec 15, 2014, 12:55:00 PM12/15/14
to Igor Bukanov, Michal Zalewski, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 12/15/14 2:03 AM, Igor Bukanov wrote:
> Chris' original proposal is a stick. I want to also give a site operator
> a carrot. That could be an option to activate encryption that is not
> visible to the user and to *receive* from the browser all reports about
> violations of the secure origin policy. This way the operator will know that
> they can activate HTTPS without worsening the user experience, and will have
> information that helps fix the content.

Serve the HTML page over http: but load all sub-resources over https: as
expected after the transition. Add the following header:

Content-Security-Policy-Report-Only: default-src https:; report-uri <me>

(add "script-src https: 'unsafe-inline' 'unsafe-eval';" if necessary)

This doesn't give you the benefit of encrypting your main HTML content
during the transition as you requested, but it is something that can be
done today. When the reports come back clean enough you can switch the
page content to https too.
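
And if you need somewhere for those reports to go, a minimal collector is enough. A sketch using only Python's standard library (the port is arbitrary, and the field names follow the CSP reporting format; real reports vary slightly across browsers):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class CSPReportHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Browsers POST a small JSON document wrapped in a "csp-report" key.
        length = int(self.headers.get("Content-Length", 0))
        try:
            report = json.loads(self.rfile.read(length)).get("csp-report", {})
        except ValueError:
            report = {}
        # Log which directive was violated, by which resource, on which page.
        print(report.get("violated-directive"), report.get("blocked-uri"),
              "on", report.get("document-uri"))
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), CSPReportHandler).serve_forever()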

-Dan Veditz

simonand...@gmail.com

unread,
Dec 15, 2014, 6:21:18 PM12/15/14
to blin...@chromium.org, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org
I'm a webmaster and I switched a good 2 months ago (my website is really not huge, around 100K unique monthly visitors; it's a one-man website).
I would say switching was "interesting".
It was easy until I realized I had mixed content on some of my pages (a visitor tipped me off; it was embarrassing).
Finding, analyzing, and solving that tiny problem was hard. I spent 99% of the switching time working on that one little problem.

My biggest problem with switching to HTTPS is that it costs time and money, and as an independent, non-venture-backed webmaster my resources are very limited.

I'm not sure there is a positive ROI in switching for a small webmaster like me.
My visitors don't care about and/or understand what HTTPS is and why it is better for them.

However, flagging non-secure websites would definitely generate more awareness.

I definitely - and I think a lot of the small guys who switched already - support this 100%.




ferdy.c...@gmail.com

unread,
Dec 15, 2014, 6:28:38 PM12/15/14
to blin...@chromium.org, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org
I'm a small website owner and I believe this proposal will upset a lot of small hosters and website owners. In particular for simple content websites, https is a burden in time and cost while not adding much value. I don't need to be convinced of the security advantages of this proposal; I'm just looking at the practical aspects of it. Furthermore, as mentioned here, there is the issue of mixed content and plugins you don't own or control. I sure hope any such warning message is very subtle, otherwise a lot of traffic will be driven away from websites.

Adrienne Porter Felt

unread,
Dec 15, 2014, 6:50:36 PM12/15/14
to ferdy.c...@gmail.com, blink-dev, public-w...@w3.org, security-dev, dev-se...@lists.mozilla.org
If someone thinks their users are OK with their website not having integrity/authentication/privacy, then why is it problematic that Chrome will start telling users about it? Presumably these users would still be OK with it after Chrome starts making the situation more obvious. (And if the users start disliking it, then perhaps they really were never OK with it in the first place?)

Alex Gaynor

unread,
Dec 15, 2014, 6:53:02 PM12/15/14
to Adrienne Porter Felt, ferdy.c...@gmail.com, blink-dev, public-w...@w3.org, security-dev, dev-se...@lists.mozilla.org
Indeed, the notion that users don't care is based on an ill-founded premise of informed consent.

Here's a copy-pasted comment I made elsewhere on this topic:

To respect a user's decision, their decision needs to be an informed one, and it needs to be a choice. I don't think there's a reasonable basis to say either of those is the case with users using HTTP in favor of HTTPS:

First, is it a choice? Given that browsers default to HTTP when no protocol is explicitly selected, and that many users will access the site via external links that they don't control, I don't think it's fair to say that users choose HTTP; they simply get HTTP.

Second, if we did say they'd made a choice, was it an informed one? We, as an industry, have done a very poor job of educating users about the security implications of actions online. I don't believe most non-technical users understand the implications of the loss of Authentication, Integrity, or Confidentiality that comes with preferring HTTP to HTTPS.

Given the fact that most users don't proactively consent to having their content spied upon or mutated in transit, and insofar as they do, it is not informed consent, I don't believe website authors have any obligation to provide access to content over dangerous protocols like HTTP.


Alex


ferdy.c...@gmail.com

unread,
Dec 15, 2014, 7:10:21 PM12/15/14
to blin...@chromium.org, ferdy.c...@gmail.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, fe...@chromium.org
"If someone thinks their users are OK with their website not having integrity/authentication/privacy"

That is an assumption that doesn't apply to every website. Many websites don't even have authentication.

"Presumably these users would still be OK with it after Chrome starts making the situation more obvious."

Or perhaps they wouldn't be, and it scares them away. Just like with the cookie bars, where now every user believes all cookies are evil. You assume users are able to make an informed decision based on such warnings, and I doubt that.


"Presumably these users would still be OK with it after Chrome starts making the situation more obvious"

Donald Stufft

unread,
Dec 15, 2014, 7:12:28 PM12/15/14
to ferdy.c...@gmail.com, blin...@chromium.org, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, fe...@chromium.org

On Dec 15, 2014, at 7:10 PM, ferdy.c...@gmail.com wrote:

"If someone thinks their users are OK with their website not having integrity/authentication/privacy"

That is an assumption that doesn't apply to every website. Many websites don't even have authentication.

"Presumably these users would still be OK with it after Chrome starts making the situation more obvious."

Or perhaps it doesn't, and it scares them away. Just like with the cookie bars, where now every user believes all cookies are evil. You assume users are able to make an informed decision based on such warnings, and I doubt that.

"Presumably these users would still be OK with it after Chrome starts making the situation more obvious"

If users are unable to make an informed choice, then I personally believe it's up to the User Agent to try to pick the choice the user most likely wants. I have a hard time imagining that most users, if given the choice between allowing anyone in the same coffee shop to read what they are reading and not allowing it, would willingly choose HTTP over HTTPS.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA

ferdy.c...@gmail.com

unread,
Dec 15, 2014, 7:12:41 PM12/15/14
to blin...@chromium.org, fe...@chromium.org, ferdy.c...@gmail.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, alex....@gmail.com
"To respect a user's decision, their decision needs to be an informed one, and it needs to be choice. I don't think there's a reasonable basis to say either of those are the case with users using HTTP in favor of HTTPS:"

What if a decision doesn't apply? What if the user is unable to be informed and make a proper decision? I don't agree that every action should be decided upon by the end user; I am much more in favor of educating website owners.

ferdy.c...@gmail.com

unread,
Dec 15, 2014, 7:15:14 PM12/15/14
to blin...@chromium.org, ferdy.c...@gmail.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, fe...@chromium.org, donald...@gmail.com
I think a choice between HTTP and HTTPS by the user doesn't make sense. The choice is different: it is the choice between staying on an HTTP-only site or leaving it altogether.

Ryan Sleevi

unread,
Dec 15, 2014, 7:19:00 PM12/15/14
to ferdy.c...@gmail.com, blink-dev, public-w...@w3.org, security-dev, dev-se...@lists.mozilla.org, Adrienne Porter Felt
On Mon, Dec 15, 2014 at 4:10 PM, <ferdy.c...@gmail.com> wrote:
"If someone thinks their users are OK with their website not having integrity/authentication/privacy"

That is an assumption that doesn't apply to every website. Many websites don't even have authentication. 

I think there may be some confusion.

"Authentication" here does not refer to "Does the user authenticate themselves to the site" (e.g. do they log in), but "Is the site you're talking to the site you the site you expected" (or, put differently, "Does the server authenticate itself to the user").

Without authentication in this sense (i.e. talking to whom you think you're talking to), anyone can trivially impersonate a server and alter the responses. This is not hard to do; here are a few examples of why authentication is important, even for sites without logins:


This is why it's important to know you're talking to the site you're expecting (Authentication), and that no one has modified that site's contents (Integrity).
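
To make "the server authenticates itself to the user" concrete, here is a minimal sketch of the TLS-level check a client performs before exchanging any application data (illustrative Python using only the standard library; example.com is just a placeholder host):

    import socket
    import ssl

    HOST = "example.com"  # placeholder origin

    # The default context verifies the certificate chain against the system
    # trust store and checks that the certificate matches the hostname.
    ctx = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as raw_sock:
        # wrap_socket performs the TLS handshake; it raises an SSL error if the
        # server cannot prove it is HOST -- i.e. if server authentication fails.
        with ctx.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            print("Negotiated", tls_sock.version(), "with", HOST)
            print("Server certificate subject:", tls_sock.getpeercert()["subject"])

Plain HTTP has no step corresponding to this handshake, which is exactly the gap the proposal wants to surface in the UI.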

Christian Heutger

unread,
Dec 15, 2014, 8:11:55 PM12/15/14
to nolo...@gmail.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
>Surely you haven't missed the big danger in plain text traffic. That
>traffic gets usurped and fed into systems like XKeyscore for Tailored
>Access Operations (TAO). In layman's terms, adversaries are using the
>information gathered to gain unauthorized access to systems.

With DV (weak validation) it then goes encrypted to them; I don't see the
advantage. The magic bullet Tor, meant to prevent monitoring, also turned
out to show that the expected privacy may be broken. It's a good idea, but
stepping back from the value of PKIX because of it is the wrong way in my
opinion.

>The race to the bottom among CAs is to blame for the quality of
>verification by the CAs.

Right, so DV needs to be deprecated or set to a recognizably lower level,
clearly stating that it's only encryption, nothing else.

>With companies like StartCom, Cacert and Mozilla offering free
>certificates, there is no barrier to entry.

And with no barrier, the value of certificate authorities vs. self-signed
certificates is broken (CAcert is the only good exception; for good reason,
their approach is different).

>Plus, I don't think a certificate needs to say anything about the
>operator. They need to ensure the server is authenticated. That is, the
>public key bound to the DNS name is authentic.

If a certificate doesn't tell you, what should? How should I be sure I am
on www.onlinebanking.de and not www.onlínebanking.de (see the accent) when
being spoofed or phished? It's the same for Facebook.com vs. Facebo0k.com, ...

>As I understand it, phishers try to avoid TLS because they count on the
>plain text channel to avoid all the browser warnings. Peter Gutmann
>discusses this in his Engineering Security book
>(https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf).

If there is a free certificate for everyone and everything is https, which
browser warnings should occur?

>Why green for EV (or why yellow for DV or DANE)? EV does not add any
>technical controls. From a security standpoint, DV and EV are equivalent.

That's what certificates are for. If we only wanted encryption, there would
never be any requirement for certificates. Browsers and servers handle
cipher suites, handshakes etc.; the certificate is the digital equivalent of
an authorized identity card, and there DV and EV are certainly different.
Security is about confidentiality, integrity and availability.
Confidentiality is the encryption; integrity is the validation.

>If DNS is authentic, then DANE provides stronger assurances than DV or EV
>since the domain operator published the information and the veracity does
>not rely on others like CAs (modulo DBOUND).

From the purely technical standpoint, yes; from the validation standpoint,
no. DANE has the hassle of compatibility, but it also struggles with the
harder mandatory enforcement of restrictions (online or offline key
material, key sizes, algorithms, Debian-bug or Heartbleed reissues, … all
the topics which recently arose). For pinning validated (EV) certificates,
it's the best solution vs. pinning or transparency.

>Not relying on a CA is a good thing since it's usually advantageous to
>minimize trust (for some definition of "trust"). Plus, CAs don't really
>warrant anything, so it's not clear what exactly they are providing to
>relying parties (they are providing a signature for money to the
>applicant).

As there is no internet governance, they are the only available
alternative. Like other agencies existing worldwide, they charge money for
validation services and warrant against mis-validation. They are given
strict rules on how to operate and are audited to prove that they follow
these rules. That's how auditing currently works in many places, and
although it's not the optimal system, it's the one currently available.

>Open question: do you think the browsers will support a model other than
>the CA Zoo for rooting trust?

If a reliable, usable and manageable concept is established, for sure. But
e.g. ISO 27001 establishes the same model: there is a company being paid to
state that what it audited is correct and to issue a seal (being ISO 27001
certified) which end users should trust.

Ryan Sleevi

unread,
Dec 15, 2014, 8:20:54 PM12/15/14
to Christian Heutger, nolo...@gmail.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
There's a lot of information here that isn't quite correct, but I would hate to rathole on a discussion of CA security vs the alternatives, which inevitably arises when one discusses HTTPS in public fora.

I think the discussion here is somewhat orthogonal to the proposal at hand, and thus might be best if kept to a separate thread.

The question of whether HTTP is equivalent to HTTPS-DV in security or authenticity is simple - they aren't at all equivalent; HTTPS-DV provides vastly superior value over HTTP. So whether or not UAs embrace HTTPS-EV is a separate matter. But as a UA vendor, I would say HTTPS-DV has far more appeal for protecting users and providing security than HTTPS-EV, for many of the reasons you've heard on this thread from others (e.g. the challenges regarding mixed HTTPS+HTTP are the same as for HTTPS-EV + HTTPS-DV).

So, assuming we have HTTP vs HTTPS-EV/HTTPS-DV, how best should UAs communicate to the user the lack of security guarantees from HTTP?


Christian Heutger

unread,
Dec 15, 2014, 8:50:37 PM12/15/14
to rsl...@chromium.org, nolo...@gmail.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
> So, assuming we have HTTP vs HTTPS-EV/HTTPS-DV, how best should UAs communicate to the user the lack of security guarantees from HTTP.

I would recommend here as mentioned:

No padlock, red bar or red strike, … => no encryption [and no validation], e.g. similar to the SHA-1 deprecation in the worst situation
Only vs. HTTPS: Padlock => everything fine and not red, "normal" address bar behavior
With EV differentiation: Padlock, yellow bar, yellow signal, … => only encryption, e.g. similar to current mixed content, …
EV: Validation information, padlock, green bar, no extras, … => similar to current EV

Red-yellow-green is recognized all over the world; all traffic signals are like this, and an explanation of what each signal means can be added to a dialog on click. The (red) strike, (yellow) signal and (green) additional validation information also follow the idea of letting people who cannot differentiate colors understand what is happening.

Peter Kasting

unread,
Dec 15, 2014, 9:00:22 PM12/15/14
to Christian Heutger, rsl...@chromium.org, nolo...@gmail.com, public-w...@w3.org, blin...@chromium.org, securi...@chromium.org, dev-se...@lists.mozilla.org
Please don't try to debate actual presentation ideas on this list.  How UAs present various states is something the individual UA's design teams have much more context and experience doing, so debating that sort of thing here just takes everyone's time to no benefit, and is likely to rapidly become a bikeshed in any case.

As the very first message in the thread states, the precise UX changes here are up to the UA vendors.  What's more useful is to debate the concept of displaying non-secure origins as non-secure, and how to transition to that state over time.

PK

Igor Bukanov

unread,
Dec 16, 2014, 12:29:27 AM12/16/14
to Daniel Veditz, Michal Zalewski, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On 15 December 2014 at 18:54, Daniel Veditz <dve...@mozilla.com> wrote:
Serve the HTML page over http: but load all sub-resources over https: as
expected after the transition. Add the following header:

Content-Security-Policy-Report-Only: default-src https:; report-uri <me>

This is a nice trick! However, it does not work in general due to the use of protocol-relative links starting with //. Or should those be discouraged?
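
For anyone who wants to experiment with this trick, a minimal sketch of adding the header from the application side (a hypothetical WSGI app in Python; the /csp-reports endpoint is a placeholder you would point at your own collector):

    def app(environ, start_response):
        body = b"<html><body>Hello over HTTP</body></html>"
        headers = [
            ("Content-Type", "text/html; charset=utf-8"),
            # Report, but do not block, any subresource not loaded over https:
            ("Content-Security-Policy-Report-Only",
             "default-src https:; report-uri /csp-reports"),  # placeholder report endpoint
        ]
        start_response("200 OK", headers)
        return [body]

    if __name__ == "__main__":
        # Serve over plain HTTP for the experiment described above.
        from wsgiref.simple_server import make_server
        make_server("", 8000, app).serve_forever()

Note the caveat above: on a page served over http:, scheme-relative subresources resolve to http: and will therefore show up in the reports, even though they would be fine after a migration to https:.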

Ryan Sleevi

unread,
Dec 16, 2014, 12:35:28 AM12/16/14
to Igor Bukanov, Daniel Veditz, Michal Zalewski, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
Sounds like a CSP-bug to me; scheme-relative URLs are awesome, and we should encourage them (over explicit http://-schemed URLs) 
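
For readers unfamiliar with scheme-relative (protocol-relative) URLs, a quick illustration of how they inherit the scheme of the page that references them (Python standard library; the hostnames are placeholders):

    from urllib.parse import urljoin

    resource = "//cdn.example.net/app.js"  # scheme-relative reference

    print(urljoin("http://example.com/page", resource))
    # -> http://cdn.example.net/app.js   (insecure, because the page was insecure)

    print(urljoin("https://example.com/page", resource))
    # -> https://cdn.example.net/app.js  (secure, because the page was secure)

In other words, such a reference is only as secure as the page that contains it.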

Igor Bukanov

unread,
Dec 16, 2014, 1:17:51 AM12/16/14
to Ryan Sleevi, ferdy.c...@gmail.com, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, Adrienne Porter Felt, security-dev
On 16 December 2014 at 01:18, Ryan Sleevi <rsl...@chromium.org> wrote:
"Authentication" here does not refer to "Does the user authenticate
themselves to the site" (e.g. do they log in), but "Is the site you're
talking to the site you the site you expected" (or, put differently, "Does
the server authenticate itself to the user").

With protocols like SRP or J-PAKE, authentication in the first sense (logging in) also provides authentication in the second sense (the protocols ensure mutual authentication between the user and the server without leaking passwords). I wish there were at least some support in browsers for these protocols, so one could avoid certificates and the related problems in many useful cases.

Andy Wingo

unread,
Dec 16, 2014, 4:14:45 AM12/16/14
to Ryan Sleevi, Igor Bukanov, Daniel Veditz, Michal Zalewski, Peter Bowen, Chris Palmer, Eduardo Robles Elvira, dev-se...@lists.mozilla.org, blink-dev, public-w...@w3.org, security-dev
On Tue 16 Dec 2014 06:35, Ryan Sleevi <rsl...@chromium.org> writes:

> scheme-relative URLs are awesome, and we should encourage them (over
> explicit http://-schemed URLs)

Isn't it an antipattern to make a resource available over HTTP if it is
available over HTTPS? In all cases you could just use HTTPS; no need to
provide an insecure option.

The one case that I know of when scheme-relative URLs are useful is when
HTTPS is not universally accessible, e.g. when the server only supports
TLSv1.2 and so is not reachable from old Android phones, among other
UAs. In that case scheme-relative URLs allow you to serve the same
content over HTTPS to browsers that speak TLSv1.2 but also have it
available insecurely to older browsers.

If there is mention of scheme-relative URLs in a "Marking HTTP as
Non-Secure" set of guidelines for authors and site operators, it should
be to avoid them in favor of explicitly using the HTTPS scheme.

Andy

Sigbjørn Vik

unread,
Dec 16, 2014, 8:59:26 AM12/16/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
I am happy to see this initiative; I consider the current standard
browser UI broken and upside-down. Today, plain http is not trustworthy,
but it still has the "normal" look in browsers. We ought to change this.

A few thoughts:

Users expect that when they come to a site that looks like Facebook, it
is Facebook. They expect any problems to be flagged, and unless there is
a warning, that everything is OK. They do not understand what most
icons/colors and dialogs mean, and are confused by the complexity of
security (and the web in general). A good UI should present the web the
way the user expects it to be presented. Expecting users to spend their
time learning and memorizing various browser UIs (user education) is
arrogant. Starting this discussion from the implementation details is
starting it in the wrong end.

One example of an experimental browser UI is Opera Coast. It goes much
further than cleaning up the security symbols, it removes the entire
address field. It uses a lot of extra background checks, with the aim to
allow users to browse without having to check addresses. If something
seems wrong, it will warn the user ahead of time. This seems to me to be
the ideal, where security is baked into the solution, not tacked on top.
From a user's perspective, it just works. I think revamping address bars
and badges should take the long term goal into consideration as well.
(I'll happily discuss Coast's solutions, but please start a new thread
if so.)

Browsers normally have 3-5 different visual security states in the UI;
normal (no security), DV and EV. Some browsers have special visual
indicators for various types of broken security (dubious, bad, etc). In
addition there are a multitude of corner cases. Although I can see the
use of three states, to support gradual degradation via the middle
state, more than three states is confusing, and the ideal should be
none, as in the above example.

Given three states for now, the question is how we want to display them.
We need one for general unsecured contents. We want one for top
security, i.e. all the latest encryption standards and EV. Then general
encryption would go into the last bucket. Encryption standards will have
to change over time. From a user perspective, a natural way to mark
three states would be as insecure (red/warning), normal (neutral/no
marking) and secure (green/padlock).

There is no need to distinguish unsecured from dubiously secured, they
can just go into the same bucket. There isn't even any need to warn
users about certificate errors, the UI is just downgraded to insecure,
as a self-signed site is no less secure than an http site. There are
technical reasons for the warnings, but those can be bug-fixed. Active
attacks (e.g. certificate replacement to an invalid one, HSTS failure,
revoked certificates, ...) might still be hard-blocked, but note that
this constitutes a fourth state, and the UI is becoming very complicated
already - there are probably better ways to map such cases into the
insecure state, but that is a separate discussion.

One issue is that browser UI is and should be a place for innovation,
not rigid specifications. At the same time, users would clearly benefit
from consistent and good UI. Diverging from the de-facto UI standard
towards a better one comes with a cost for browsers, and they might not
have the incentive to do so. A coordinated move towards a better future
would be good, as long as we avoid the hard limitations. Regardless of
this discussion, we do need better coordination for removing old crypto
standards (SHA-1, SSLv3, RC4, ...) from the "secure" bucket in the UI.
In short, I am all for a coordinated move, but there needs to be space
for browsers to innovate as well.

In terms of the transition plan, I think a date-based plan is the only
thing which will work. This gives all parties time to prepare, they know
when the next phase will start, and nobody will be arguing if we have
reached a milestone or not. It also avoids any deadlocks where the next
phase is needed to push the web to the state where the next phase will
begin. Any ambitious timeline will fail to get all players on board. A
multi-year plan is still better than the resulting user confusion if
browsers move on their own.

BTW, have you explicitly contacted other browser teams?

--
Sigbjørn Vik
Opera Software

Hanno Böck

unread,
Dec 16, 2014, 12:12:47 PM12/16/14
to securi...@chromium.org, Chris Palmer
After having read a couple of discussions two issues came up I haven't
seen mentioned here:

First of all this will kill caching proxies that cache HTTP content.
I'm inclined to think that this is just the price to pay for more
security and probably they have limited use anyway. But it's certainly
something to be aware of.

Also I think that there is unexplored territory in better deploying
existing tech to optimize sites. The cost of HTTPS could probably be
over-compensated by better tools to optimize websites. E.g. from my
experience there is no widespread knowledge and use of existing lossless
image optimizations (Mozilla's JPEG optimizations and zopfli).
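
As a concrete sketch of the kind of lossless optimization meant here (this assumes the jpegtran and zopflipng command-line tools are installed; the invocations shown are the common ones, so check your local versions):

    import subprocess

    def optimize_jpeg(src: str, dst: str) -> None:
        # Losslessly rewrite the JPEG with optimized Huffman tables and no
        # metadata; the pixel data is unchanged.
        subprocess.run(["jpegtran", "-optimize", "-copy", "none",
                        "-outfile", dst, src], check=True)

    def optimize_png(src: str, dst: str) -> None:
        # Recompress the PNG with zopfli's stronger (but slower) deflate.
        subprocess.run(["zopflipng", src, dst], check=True)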


Then on Hacker News someone mentioned that his AdSense revenues dropped
significantly because fewer ads are served when switching to https:
https://news.ycombinator.com/item?id=8744183

I haven't checked whether this is true, but this indicates it is:
https://support.google.com/adsense/answer/10528?hl=en

I was actually surprised that Google ads isn't forcing its customers to
ship https. If going https means less Google ad revenue, this is
something Google should fix.
(Also, it's well known that ad networks are a major showstopper in
deploying https - but Google could lead here.)


(and sidenote: today I wrote a lengthy article for the German news page
Golem.de about https myths partly in response to this plans:
http://www.golem.de/news/netzverschluesselung-mythen-ueber-https-1412-111188.html
Google translate isn't perfect, but should give you a glimpse:
http://translate.google.de/translate?sl=de&tl=en&u=http%3A%2F%2Fwww.golem.de%2Fnews%2Fnetzverschluesselung-mythen-ueber-https-1412-111188.html
It was pretty well received)

cu,
--
Hanno Böck
http://hboeck.de/

mail/jabber: ha...@hboeck.de
GPG: BBB51E42

Chris Palmer

unread,
Dec 16, 2014, 3:10:43 PM12/16/14
to Sigbjørn Vik, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Tue, Dec 16, 2014 at 5:59 AM, Sigbjørn Vik <sigb...@opera.com> wrote:

> There is no need to distinguish unsecured from dubiously secured, they
> can just go into the same bucket. There isn't even any need to warn
> users about certificate errors, the UI is just downgraded to insecure,
> as a self-signed site is no less secure than an http site. There are
> technical reasons for the warnings, but those can be bug-fixed. Active
> attacks (e.g. certificate replacement to an invalid one, HSTS failure,
> revoked certificates, ...) might still be hard-blocked, but note that
> this constitutes a fourth state, and the UI is becoming very complicated
> already - there are probably better ways to map such cases into the
> insecure state, but that is a separate discussion.

Well, we do have to make sure that the browser does not send cookies
to an impostor origin. That's (1 reason) why Chrome uses interstitial
warnings today.

We could do away with interstitials if the definition of the origin
included some notion of cryptographic identity — e.g. (HTTPS,
facebook.com, 443, [set of pinned keys]) instead of just (HTTPS,
facebook.com, 443) — but there'd still be problems with that, and very
few site operators are able to commit to pinning right now.
(And, that might never change.)
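
To make the idea concrete, a toy sketch of an origin that includes pinned public-key hashes (the names and hash values are purely illustrative, not Chrome's actual data structures), so that stored state is only released to a server presenting one of the expected keys:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class PinnedOrigin:
        scheme: str
        host: str
        port: int
        pinned_spki_hashes: frozenset  # hashes of acceptable public keys

        def may_send_cookies(self, presented_spki_hash: str) -> bool:
            # Only release state to a server that presents one of the pinned keys.
            return presented_spki_hash in self.pinned_spki_hashes

    origin = PinnedOrigin("https", "facebook.com", 443,
                          frozenset({"sha256/AAAA", "sha256/BBBB"}))
    print(origin.may_send_cookies("sha256/AAAA"))  # True
    print(origin.may_send_cookies("sha256/ZZZZ"))  # False -> treat as a different origin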

> BTW, have you explicitly contacted other browser teams?

This mass mailing is that.

Chris Palmer

unread,
Dec 16, 2014, 3:12:40 PM12/16/14
to Hanno Böck, security-dev
On Tue, Dec 16, 2014 at 9:12 AM, Hanno Böck <ha...@hboeck.de> wrote:

> Then on hackernews someone mentioned that his adsense revenues dropped
> significantly because when switching to https less ads are served:
> https://news.ycombinator.com/item?id=8744183
>
> I haven't checked if this is true but this indicates it is:
> https://support.google.com/adsense/answer/10528?hl=en
>
> I was actually surprised that google ads isn't forcing its customers to
> ship https. If going https means less google ad revenues this is
> something google should fix.
> (also it's well known that ad networks are a major showstopper in
> deploying https - but google could lead here)

We are aware of it, and efforts to improve the situation are indeed underway.

Jeffrey Walton

unread,
Dec 16, 2014, 8:16:25 PM12/16/14
to Donald Stufft, securi...@chromium.org, dev-se...@lists.mozilla.org, fe...@chromium.org
> "If someone thinks their users are OK with their website not having
> integrity/authentication/privacy"
>
> That is an assumption that doesn't apply to every website. Many websites
> don't even have authentication.
>
> "Presumably these users would still be OK with it after Chrome starts making
> the situation more obvious."
>
> Or perhaps it doesn't, and it scares them away. Just like with the cookie
> bars, where now every user believes all cookies are evil. You assume users
> are able to make an informed decision based on such warnings, and I doubt
> that.
>
> "Presumably these users would still be OK with it after Chrome starts making
> the situation more obvious"
>
> If users are unable to make an informed choice than I personally believe
> it’s up to the User Agent to try and pick what choice the user most likely
> wants.
Makes sense to me, but others have complained that it will break the web.

For example, allowing mixed content when a user specifically asked for
HTTPS seems like an [obvious] bad thing.

As another example, location data is classified as high value. The
code or API that operates on the data is also high value. As such, it
needs a secure origin. Allowing mixed content so that sensitive
location data acquired from sensitive API calls can be backhauled
over HTTP seems like an [obvious] bad thing.

Both examples seem bad because they do not meet users' expectations
(especially when a user specifically requested an HTTPS page).

But asking for that enforcement on the user's behalf drew some
negative criticism.

> I have a hard time imagining that most users, if given the choice
> between allowing anyone in the same coffee shop to read what they are
> reading and not allowing, would willingly choose HTTP over HTTPS.
+1.

Peter Gutmann has a lot to say about users and their behaviors in
Engineering Security
(https://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf). Be sure to
check out Chapter 3, Psychology. In particular, see the section "How
Users Make Decisions" on page 125.

Jeff

tyl...@google.com

unread,
Dec 17, 2014, 2:48:41 AM12/17/14
to blin...@chromium.org, pal...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, sigb...@opera.com
First of all, some change along these lines is absolutely necessary, as it closes a huge hole that has been successfully exploited since the Netscape days. That is, while browsers indicate positive security for TLS, they make no indication at all in the case of no security, leaving ample room for website content to fill that void. Call this the "Green padlock favicon" problem if you like.

Some notable points:

Given these roughly 3 distinct scenarios with respect to connection status:

A: The connection is successfully secured. (HTTPS)
B: No security was attempted. (HTTP)
C: Securing the connection has failed. (Certificate validation failure)

A few people have said that B and C are roughly identical from a security perspective and could be represented as the same state -- in both cases no security is provided. I would disagree here. In the case of the failed certificate verification, the client has attempted to secure the connection and that attempt has failed. In the case of HTTP, the client made no indication of a preference for security. While scenario B represents the *absence* of security, scenario C represents the *failure* of security, and is therefore more troublesome. While we want to raise the awareness of scenario B, we shouldn't promote it to the severity of scenario C. Doing so conflates two very different cases and failure modes; while both represent the absence of verifiable transport security, the latter indicates that the user's expressed expectation of security has not been met, while the former simply reflects the absence of any expectation of security.

With respect to EV vs DV/DANE certificates, that discussion should be completely separate from this. No further comment necessary.

Finally, it's worth noting that reports from the field in response to the Chrome SHA-1 sunsetting initiative have shown that even the most minor of warnings has a measurable impact on site operators. I've received many reports from operators large and small indicating visible losses of revenue due to the nearly-hidden warning Chrome currently displays for a SHA-1 cert with a long expiration. This suggests that the UX changes surrounding security needn't initially be intrusive to have a strong impact on site operations. An unobtrusive but centrally-located notice to the effect of "your connection has not been secured" is indisputably accurate, conveys no bias or agenda, and yet can be expected to produce a sea-change of behavior for site operators. 

It's like bike locks: they're functional and highly visible, but also optional. Still, if one day someone started putting up signs saying "this bike has no lock", even though it's telling you nothing you couldn't already see, behavior would immediately change. 

Sigbjørn Vik

unread,
Dec 17, 2014, 6:53:01 AM12/17/14
to tyl...@google.com, blin...@chromium.org, pal...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org
On 17-Dec-14 08:48, tyl...@google.com wrote:

> Given these roughly 3 distinct scenarios with respect to connection status:
>
> A: The connection is successfully secured. (HTTPS)
> B: No security was attempted. (HTTP)
> C: Securing the connection has failed. (Certificate validation failure)
>
> A few people have said that B and C are roughly identical from a
> security perspective and could be represented as the same state -- in
> both cases no security is provided. I would disagree here. In the case
> of the failed certificate verification, the client has attempted to
> secure the connection and that attempt has failed. In the case of HTTP,
> the client made no indication of a preference for security. While
> scenario B represents the *absence* of security, scenario C represents
> the *failure* of security, and is therefore more troublesome. While we
> want to raise the awareness of scenario B, we shouldn't promote it to
> the severity of scenario C. Doing so conflates two very different cases
> and failure modes; while both represent the absence of verifiable
> transport security, the latter indicates that the user's expressed
> expectation of security has not been met, while the former simply
> reflects the absence of any expectation of security.

I respectfully, but strongly, disagree :) If you want to separate the
states, I'd say that C is better than B. C has *some* security, B has
*none*. Consider a self-signed certificate, where the site owner chooses
to provide what little security he can; this is still much better than
plain old http. Or a certificate expired by one day, which is the same
certificate that the browser has seen on that site for the past 2 years;
this is still way better than B.

If a malicious actor can get write access to a page with status C, he
can immediately change the security level to status B anyway. Redirect
the page to http://official-looking.subdomain-facebook.com, and present
B, so displaying B as better doesn't help users much against attacks. If
a malicious actor does not have write access to a page with status C,
then status C is already better than status B. If the browser can detect
an active attack (like the login form having moved to http from https or
replacement of a good certificate by a bad one) then the browser should
of course warn against the attack, but that is a different scenario.

In most cases, users type 'facebook.com', and give no preference for
security. Any such preference is a server preference. The same holds for
clicking links, the user has no expectation of where he will be taken.
For bookmarks, or cases where the user explicitly types 'https://', the
user might have an expectation of security. If he does, and the security
level of the page indicates either B or C, he should immediately be
alerted anyway. If you think this indicates an explicit preference for
security, then the browser could warn similar to an active attack in
these cases.

But my main point against this is still that you need an entire
paragraph to explain the difference, to people who already know the
background. A user wants to know if he is secure or not, not if his
'facebook.com' request was intercepted on the way and replaced by a http
MiTM (status B, really bad), or if 'facebook.com' made a bug leaving you
exposed (status C, pretty bad). Most users wouldn't understand the
difference. I consider it arrogant trying to force users to understand
the difference, most users just want to go to facebook, not get a
lecture on internet safety. I consider it harmful to try to display the
difference, as the more states we have in the UI, the more users have to
learn, which means they will remember less, and the states become less
meaningful. Keep it simple, and keep to user expectations, not
implementation details.

As a consumer buying bread, you want to know if the bread is safe to eat
or not. Whether the farmer tried to control pesticide usage and failed,
or he didn't try to control it, makes little difference. Professional
health and safety inspectors (akin to browser and web developers) are
about the only ones who care.

> I've
> received many reports from operators large and small indicating
> visible
> losses of revenue due to the nearly-hidden warning Chrome currently
> displays for a SHA-1 cert with a long expiration.

Are you able to share more details on this?

Håvard Molland

unread,
Dec 17, 2014, 9:02:37 AM12/17/14
to Chris Palmer, Sigbjørn Vik, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On 16. des. 2014 21:10, 'Chris Palmer' via Security-dev wrote:
> Well, we do have to make sure that the browser does not send cookies
> to an impostor origin. That's (1 reason) why Chrome uses interstitial
> warnings today.
I've been experimenting with a Chromium patch where the url context is
given two cookie stores, "standard" and "insecure https" (this being
adapted to the current scheme where we don't warn about http). After the
connection has been established but before the request is sent, the
appropriate cookies are picked from the stores based on the security
state of the connection. Secure cookies going over a good TLS
connection and all http cookies are picked from the "standard" store,
while secure cookies going over a bad TLS connection are picked from the
"insecure https" store. This stops secure cookies received on a good
connection from being sent to an impostor origin.

However, to remove the interstitial warning, most offline storages would
have to be separated into two caches, to avoid cache poisoning. Examples
are appcache, service workers, fileapi, dom storage, indexed db and the
standard disk cache. This would be a huge undertaking to get right and
maintain.

Separating the cookie store still has some value even without removing
the interstitial warning though.

>> BTW, have you explicitly contacted other browser teams?
> This mass mailing is that.

Hopefully all the relevant Browser UI teams read these lists.

--
---
Opera Software

Adrienne Porter Felt

unread,
Dec 17, 2014, 11:22:37 AM12/17/14
to Sigbjørn Vik, tyl...@google.com, blink-dev, Chris Palmer, public-w...@w3.org, security-dev, dev-se...@lists.mozilla.org
We plan to continue treating B and C differently. If there is a validation failure (C), Chrome will show a full-page interstitial. That will not be the case for HTTP (B). They will look the same in the URL bar because they are both insecure but the overall experience will be quite different.

Sigbjørn Vik

unread,
Dec 17, 2014, 11:37:39 AM12/17/14
to Adrienne Porter Felt, tyl...@google.com, blink-dev, Chris Palmer, public-w...@w3.org, security-dev, dev-se...@lists.mozilla.org
On 17-Dec-14 17:22, 'Adrienne Porter Felt' via Security-dev wrote:
> We plan to continue treating B and C differently. If there is a
> validation failure (C), Chrome will show a full-page interstitial. That
> will not be the case for HTTP (B). They will look the same in the URL
> bar because they are both insecure but the overall experience will be
> quite different.

Looking the same in the URL bar is already a good improvement on today.
However, the interstitial will continue to give webmasters a negative
incentive to attempt to apply security: if they get it wrong, users get a
worse experience. Going for http might just be the safer choice. The
interstitial thus has the opposite effect of what this proposal aims to
achieve.

In an ideal world, where there were no technical reasons for the
interstitial (meaning the browser wouldn't leak cookies or other data
and the user would be at least as secure as when using http), would you
still want to show it to users? And if so why?



Anne van Kesteren

unread,
Dec 17, 2014, 12:17:24 PM12/17/14
to Sigbjørn Vik, tyl...@google.com, blink-dev, Chris Palmer, WebAppSec WG, securi...@chromium.org, dev-se...@lists.mozilla.org
On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
> I respectfully, but strongly, disagree :) If you want to separate the
> states, I'd say that C is better than B. C has *some* security, B has
> *none*.

You would advocate not blocking on certificate failures and just hand
over credentials to network attackers? What would happen exactly when
you visit e.g. google.com from the airport (connected to something
with a shitty captive portal)?


--
https://annevankesteren.nl/

Stephen Gallagher

unread,
Dec 17, 2014, 12:18:35 PM12/17/14
to securi...@chromium.org, public-w...@w3.org, blin...@chromium.org, dev-se...@lists.mozilla.org
As a web developer, I feel that this proposal is missing the point somewhat.

For the typical end-user browsing general purpose sites, the biggest risk is not interception of traffic, but vulnerabilities on the back-end, and HTTPS does absolutely nothing to address this.

Making a more prominent indication of site "security" risks mis-interpretation by end users that their data is in fact secure, which is not the case in reality - especially since most users won't differentiate between transport security and back-end security.

Additionally, many sites simply don't have the type of content or interaction that requires the added complexity of HTTPS, even if it's perceived as an easy upgrade in tech circles. An overly intrusive "non-secure" indicator could lead to end-user confusion in these cases.

After users realise that these sites are not in fact stealing their data, the effectiveness of the indicator as a valid tool is diminished, since it has already flagged a false positive in the eyes of the end user.

I believe these and other nuances should be carefully taken into account when considering this proposition.

gsholli...@gmail.com

unread,
Dec 17, 2014, 12:29:58 PM12/17/14
to securi...@chromium.org, public-w...@w3.org, blin...@chromium.org, dev-se...@lists.mozilla.org
My comment will focus on the Non-secure (broken HTTPS, HTTP). There is a significant and extremely important security difference between broken HTTPS and HTTP. Assuming the webmaster and web developer properly chose HTTP, it is not intended to be secure but intended for everybody to see. Broken HTTPS is intended to be secure but is not, again same assumption of proper choice by webmaster and web developer. The message to the user should not be the same. Broken HTTPS deserves an alarming declaration of being insecure to warn the user. HTTP deserves a more gentle reminder message that it is not intended to be secure. If the page content on a HTTP page includes password fields or other clearly identifiable sensitive content fields, then that deserves the same treatment as broken HTTPS.

Webmasters and web developers should be making the appropriate choices for HTTP and HTTPS. Not all web content needs to be served via HTTPS. Users will suffer from alert overload and begin to ignore the important alerts. The simple fact a site is HTTP does not deserve an alert, it depends on the content and context.

I did notice a few comments identifying possible issues when trying to include HTTP content in an HTTPS page. That is insecure and should not be expected to work. Mixed HTTPS and HTTP has long been identified to users as a security issue and should continue to be.

Sigbjørn Vik

unread,
Dec 17, 2014, 12:50:30 PM12/17/14
to Anne van Kesteren, tyl...@google.com, blink-dev, Chris Palmer, WebAppSec WG, securi...@chromium.org, dev-se...@lists.mozilla.org
On 17-Dec-14 18:17, Anne van Kesteren wrote:
> On Wed, Dec 17, 2014 at 12:52 PM, Sigbjørn Vik <sigb...@opera.com> wrote:
>> I respectfully, but strongly, disagree :) If you want to separate the
>> states, I'd say that C is better than B. C has *some* security, B has
>> *none*.
>
> You would advocate not blocking on certificate failures
> and just hand
> over credentials to network attackers?

My comment above is about the relative security of http versus
non-perfect https. In most cases, non-perfect https is better. In some
cases, they are equally bad.[*]

Another topic is how to deal with broken https. Browsers today present
the user with an interstitial designed to allow him to shoot himself in
the foot, after which they leak any cached secure data to the broken
site. I consider that leakage a bug.

> What would happen exactly when
> you visit e.g. google.com from the airport (connected to something
> with a shitty captive portal)?

Assuming interstitials were replaced with cache separation:

The browser would detect that this isn't the same secure google you
talked to yesterday, and not share any data you got from google
yesterday with the captive portal. Once you reconnect to the authentic
google, the browser would use the first set of data again.

cha...@yandex-team.ru

unread,
Dec 17, 2014, 1:44:53 PM12/17/14
to Anne van Kesteren, Sigbjørn Vik, tyl...@google.com, blink-dev, Chris Palmer, WebAppSec WG, securi...@chromium.org, dev-se...@lists.mozilla.org


17.12.2014, 20:19, "Anne van Kesteren" <ann...@annevk.nl>:
This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certificate is for airport.logins.aero, or 1.1.1.1).

If you are me, you wrestle with the interface until you find out how to connect anyway, and hope that it doesn't remember this for other places (and that I do).

so having handed over your credit card details to get 30 minutes of connection time, you're in a hurry (your plane will leave soon, and you still haven't told Mum you're hoping she will collect you when you land).

If you're visiting google.com, it's hard to see what the next interstitial does that is useful. To take the standard Coast example, if you went to myAe.ro every day for the last month, and their certificate expired yesterday but hasn't changed, I think the answer is pretty clear.

If it expired last month, and you've been using it for a year, there may be an issue. If it is brand new and registered to someone else, there might well be an issue even though the certificate itself looks good…

just some thinking out loud…

cheers

--
Charles McCathie Nevile - web standards - CTO Office, Yandex
cha...@yandex-team.ru - - - Find more at http://yandex.com

software...@gmail.com

unread,
Dec 17, 2014, 5:27:50 PM12/17/14
to blin...@chromium.org, ann...@annevk.nl, sigb...@opera.com, tyl...@google.com, pal...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, cha...@yandex-team.ru
On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru wrote:
This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex..ru but the certtificate is for airport.logins.aero, or 1.1.1.1).
--
Charles McCathie Nevile - web standards - CTO Office, Yandex

511 Network Authentication Required?

There is http://tools.ietf.org/html/rfc6585#section-6 for that. The Chromium bug is https://code.google.com/p/chromium/issues/detail?id=114929 ; Firefox has their own as well. As far as I know this only works for HTTP connections. There really is no reasonable way for the airport to step into an HTTPS connection and demand authentication without causing a certificate error. There is the experimental https://tools.ietf.org/rfc/rfc2521.txt which suggests an ICMP packet "Need Authorization", but as I said, it is experimental. Am I missing something?

This gradual roll-out of the UI hints that is being proposed now would help shift attention to such problems. The problems won't be solved until we get to a state where we (actually, you ;) truly _need_ to solve them.

Matthew Dempsky

unread,
Dec 17, 2014, 6:22:09 PM12/17/14
to software...@gmail.com, blink-dev, ann...@annevk.nl, sigb...@opera.com, tyl...@google.com, Chris Palmer, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org, cha...@yandex-team.ru
On Wed, Dec 17, 2014 at 2:27 PM, <software...@gmail.com> wrote:
Am I missing something?

The way Android and Chrome OS detect captive portals is to issue requests to a known HTTP URL that should only send 204 responses, and anything else indicates some action is required on behalf of the user.  See: http://www.chromium.org/chromium-os/chromiumos-design-docs/network-portal-detection

As a longer term solution, I like the idea of announcing captive portal logins via DHCP (e.g., draft-wkumari-dhc-capport).
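
A minimal sketch of that 204-based probe (the URL is a placeholder, not the actual endpoint those platforms use, and real implementations also handle timeouts, retries and IPv4/IPv6 separately):

    import urllib.request

    PROBE_URL = "http://connectivity-check.example.net/generate_204"  # placeholder

    def behind_captive_portal() -> bool:
        # A transparent network returns the expected 204 with an empty body.
        # A captive portal typically rewrites the response into a redirect or
        # an HTML login page, so anything other than 204 means action is required.
        try:
            with urllib.request.urlopen(PROBE_URL, timeout=5) as resp:
                return resp.status != 204
        except OSError:
            return True  # no connectivity at all also needs the user's attention

    print("captive portal suspected:", behind_captive_portal())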

michael...@gmail.com

unread,
Dec 17, 2014, 8:15:04 PM12/17/14
to securi...@chromium.org, public-w...@w3.org, blin...@chromium.org, dev-se...@lists.mozilla.org
User Interface suggestions:

I think coloring the address bar background is the way to go.

Small icons are too easily missed, and even if every browser agreed to use the same icon, (unlikely) they would likely be placed differently. Also, error messages are likely to be ignored by non-techies who will not understand them.

As for suggested colors, red, yellow and green have a universal meaning.
With that in mind, I suggest starting with

HTTP - yellow background
HTTPS - light green background
HTTPS with EV certificate - darker green background

Then, phase two might be:

HTTP: red background
HTTPS with mixed mode: yellow background
HTTPS: still light green
HTTPS with EV cert: still dark green

To further emphasize the red and/or yellow, the entire browser window might be framed in that color, along the lines of what Sandboxie does.

I would also add an explanation of the colors right in the address bar. The word "color" with a white "i" in a blue circle should be obvious as the explanation. Clicking it (or perhaps just hovering on mouse based OSs) should popup a description of what the colors mean.

cha...@yandex-team.ru

unread,
Dec 18, 2014, 4:07:32 AM12/18/14
to software...@gmail.com, blin...@chromium.org, ann...@annevk.nl, sigb...@opera.com, tyl...@google.com, pal...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org
 
 
On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru wrote:
This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certtificate is for airport.logins.aero, or 1.1.1.1).
--
Charles McCathie Nevile - web standards - CTO Office, Yandex

511 Network Authentication Required?

There is http://tools.ietf.org/html/rfc6585#section-6 for that. Chromium bug is https://code.google.com/p/chromium/issues/detail?id=114929 , Firefox has their own as well. As far as I know this only works for HTTP connections. There really is no reasonable way how the airport can step into an HTTPS connection and demand authentication without causing a certificate error.
 
There is a certificate error. The point is that since it is expected behaviour, I get trained to say "yeah, whatever" so I can pay for the connection I need. Despite the fact that it is very difficult to be *sure* that the error is not actually a real problem.
 
I'd love to see a better situation relying on a proper standard.
 
But in general I don't.
 
There is experimantal https://tools.ietf.org/rfc/rfc2521.txt which suggests an ICMP packet "Need Authorization", but as I said, it is experimantal. Am I missing something?
 
This gradual roll out of the UI hints that is being proposed now would help shift attention to such problems. The problems won't be solved until we get to a state we (actually, you ;) truly _need_ to be solving them.
 
Sure. But this turns out to be a case where right now there is a problem, and instead of *solving* it, it seems that "the world" (or at least the parts I see, which is quite a lot by geography) is finding a quick workaround that gets them where they were going - at the cost of learning to ignore a potentially serious problem.
 
On the whole I think this discussion is valuable, and the proposal makes sense. But I have concerns about whether we really understand the things that are going to change and the implications, so use cases like this are important to find and make sure we understand.
 
cheers
 
--
Charles McCathie Nevile - web standards - CTO Office, Yandex

Ryan Sleevi

unread,
Dec 18, 2014, 4:47:03 AM12/18/14
to cha...@yandex-team.ru, blink-dev, Anne van Kesteren, public-w...@w3.org, sigb...@opera.com, pal...@google.com, software...@gmail.com, securi...@chromium.org, dev-se...@lists.mozilla.org, tyl...@google.com

Inline

On Dec 18, 2014 1:07 AM, <cha...@yandex-team.ru> wrote:
>
>  
>  
> 18.12.2014, 01:27, "software...@gmail.com" <software...@gmail.com>:
>>
>> On Wednesday, December 17, 2014 7:44:59 PM UTC+1, cha...@yandex-team.ru wrote:
>>>
>>> This is a pretty interesting use case. When you connect at the airport, the typical first thing that happens is you get a warning saying that the site you want to connect to has the wrong certificate (you went to pogoda.yandex.ru but the certtificate is for airport.logins.aero, or 1.1.1.1).
>>> --
>>> Charles McCathie Nevile - web standards - CTO Office, Yandex
>>
>>
>> 511 Network Authentication Required?
>>
>> There is http://tools.ietf.org/html/rfc6585#section-6 for that. Chromium bug is https://code.google.com/p/chromium/issues/detail?id=114929 , Firefox has their own as well. As far as I know this only works for HTTP connections. There really is no reasonable way how the airport can step into an HTTPS connection and demand authentication without causing a certificate error.
>
>  
> There is a certificate error. The point is that since it is expected behaviour, I get trained to say "yeah, whatever" so I can pay for the connection I need. Despite the fact that it is very difficult to be *sure* that the error is not actually a real problem.
>  
> I'd love to see a better situation relying on a proper standard.
>  
> But in general I don't.
>  

I'm not sure if it's terribly germane to the HTTP-being-indicated-as-not-secure to rathole too much on the ways that HTTPS can be messed with, but I will simply note that the Chrome Security Enamel team are working on ways to better detect and manage this.

While we can wish for better standards, this is an area where standards-compliant devices take years to become even remotely ubiquitous. As such, a heuristic-based approach grounded in the way the world actually is makes sense, at least with respect to captive portals that actively try to disrupt and compromise users' connections.

>>
>> There is experimantal https://tools.ietf.org/rfc/rfc2521.txt which suggests an ICMP packet "Need Authorization", but as I said, it is experimantal. Am I missing something?
>>  
>> This gradual roll out of the UI hints that is being proposed now would help shift attention to such problems. The problems won't be solved until we get to a state we (actually, you ;) truly _need_ to be solving them.
>
>  
> Sure. But this turns out to be a case where right now there is a problem, and instead of *solving* it it seems that "the world" (or at least the parts I see, which is quite a lot by geography) is instead finding a quick workaround that gets them where they were going - at the cost of learning to ignore a potentially serious problem.
>  
> On the whole I think this discussion is valuable, and the proposal makes sense. But I have concerns about whether we really understand the things that are going to change and the implications, so use cases like this are important to find and make sure we understand.
>  
> cheers
>  
> --
> Charles McCathie Nevile - web standards - CTO Office, Yandex
> cha...@yandex-team.ru - - - Find more at http://yandex.com
>  

As noted elsewhere, we aren't trying to boil the ocean, and though I certainly accept the concerns are valid (and, as mentioned above, are already being independently worked on), I think we should be careful how much we fixate on these issues versus considering the broader philosophical issues this proposal is bringing forward.

There are certainly awful things in the world of HTTPS, on a variety of fronts. And yet, despite those warts, we would be misleading ourselves and others to think that insecure transports such as HTTP - ones actively disrupted for commercial gain, "value" adding, or malicious havoc, and ones that are passively monitored on a widespread, pervasive scale - represent the desirable state of where we want to be or go.

I think the end goal is more robust - we want a world where users are not only safe by default, but they expect that, and can understand what makes them unsafe. Though some of these may be outside our ken as UAs - for example, we have limited ability to know you're running a three year old version of phpBB that is owned harder than Sony Pictures and has more remote exploits than msasn1.dll - there are things we do know and should communicate. One of them is that the assumption many users have - that their messages are shared only between them and the server - is not true unless the server operator is conscientious.

Daniel Kahn Gillmor

unread,
Dec 18, 2014, 11:01:01 AM12/18/14
to cha...@yandex-team.ru, software...@gmail.com, blin...@chromium.org, ann...@annevk.nl, sigb...@opera.com, tyl...@google.com, pal...@google.com, public-w...@w3.org, securi...@chromium.org, dev-se...@lists.mozilla.org
On 12/18/2014 04:07 AM, cha...@yandex-team.ru wrote:
> There is a certificate error. The point is that since it is expected behaviour,
> I get trained to say "yeah, whatever" so I can pay for the connection I need.
> Despite the fact that it is very difficult to be *sure* that the error is not
> actually a real problem.
> I'd love to see a better situation relying on a proper standard.
> But in general I don't.

This is the closest thing to a standard for dealing with this situation
that i know of:

https://tools.ietf.org/html/draft-wkumari-dhc-capport

Until this mechanism is deployed, when you believe that you will be on
such a network, and you are willing to expose yourself to their
middlebox devices, you should *not* accept the bogus cert. Accepting
the bogus cert potentially means sending the middlebox the cookies that
you would have sent to the desired origin, which is a Bad Thing.

Instead, you should open a new browser window and point it at
http://www.example.org/, which does not use https, and so can be
rewritten/hijacked by the captive portal situation however they like.

This is a clunky mess, of course, but that's the nature of captive portals.
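
For anyone who wants to automate the "open a plain-HTTP page first" step described above, here is a minimal captive-portal detection sketch in Python. It is only an illustration: the probe URL and the expected marker text are placeholders you would choose yourself, and, as noted earlier in the thread, only some portals bother to return the RFC 6585 status 511.

import urllib.error
import urllib.request

PROBE_URL = "http://www.example.org/"   # plain HTTP on purpose: a portal can hijack it
EXPECTED_MARKER = "Example Domain"      # text the real page is known to contain

def behind_captive_portal(timeout: float = 5.0) -> bool:
    """Best-effort guess: True if the plain-HTTP probe looks intercepted."""
    try:
        with urllib.request.urlopen(PROBE_URL, timeout=timeout) as resp:
            body = resp.read(4096).decode("utf-8", errors="replace")
            # A portal that rewrites the page won't serve the expected content.
            return EXPECTED_MARKER not in body
    except urllib.error.HTTPError as err:
        # RFC 6585 section 6: 511 Network Authentication Required.
        return err.code == 511
    except OSError:
        # No connectivity at all is a different failure mode than a portal.
        return False

if __name__ == "__main__":
    print("Captive portal suspected:", behind_captive_portal())

This is roughly what several operating systems already do with their own well-known plain-HTTP probe URLs before declaring a network usable.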

--dkg


Gervase Markham

unread,
Dec 18, 2014, 12:14:13 PM12/18/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev
On 13/12/14 00:46, Chris Palmer wrote:
> We, the Chrome Security Team, propose that user agents (UAs) gradually
> change their UX to display non-secure origins as affirmatively non-secure.
> We intend to devise and begin deploying a transition plan for Chrome in
> 2015.

I think this is a good idea - in fact, it's essential if we are to make
secure the 'new normal'.

I agree that a phased transition plan based on telemetry thresholds is
the right thing. This is a collective action problem ("Chrome tells me
this site is insecure, but Firefox is fine - so I'll use Firefox") and
so it would be awesome if we could get cross-browser agreement on what
the thresholds were and how they were measured.

I wonder whether we could make a start by marking non-secure origins in
a neutral way, as a step forward from not marking them at all. Straw-man
proposal for Firefox: replace the current greyed-out globe which appears
where the lock otherwise is with a black eye icon. When clicked, instead
of saying:

"This website does not supply identity information.

Your connection to this website is not encrypted."

it has a larger eye icon, and says something like:

"This web page was transferred over a non-secure connection, which means
that the information could have been (was probably?!) intercepted and
read by a third party while in transit."

There are many degrees of this; let's start moving this way.

Gerv

jstr...@google.com

unread,
Dec 18, 2014, 12:52:56 PM12/18/14
to securi...@chromium.org, public-w...@w3.org, blin...@chromium.org, dev-se...@lists.mozilla.org

> Roughly speaking, there are three basic transport layer security states for web origins:
>
> Secure (valid HTTPS, other origins like (*, localhost, *));
> Dubious (valid HTTPS but with mixed passive resources, valid HTTPS with minor TLS errors); and
> Non-secure (broken HTTPS, HTTP).

I'd like to propose consideration of a fourth category:
Personal Devices (home routers, printers, IoT, raspberry pis in classrooms, refrigerators):
- cannot, by nature, participate in DNS and CA systems
- likely on private network block
- user is the owner of the service, hence can trust self rather than CA

Suggested use:
- IoT devices generate unique, self-signed cert
- Friendlier interstitial (i.e., "Is this a device you recognize?") for self-signed connections on *.local, 192.168.*, 10.*, or on the same local network as the browser.
- user approves use on first https connection
- browser remembers (device is promoted to "secure" status)

A lot of IoT use cases could benefit from direct connection (not requiring a cloud service as a secure data proxy), but this currently gives the scariest of Chrome warnings. This is probably why the average home router or firewall is administered over http.
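
To make the "browser remembers" idea concrete, here is a rough trust-on-first-use (TOFU) sketch in Python. It is not how any shipping browser works; the device address and the pin-store path are made-up placeholders, and a real UA would need UI for the first-use and mismatch cases.

import hashlib
import json
import pathlib
import socket
import ssl

PIN_STORE = pathlib.Path("device_pins.json")   # made-up location for remembered pins

def device_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the device's certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # self-signed device cert: no CA-backed name to check
    ctx.verify_mode = ssl.CERT_NONE   # we pin the certificate ourselves instead
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def trust_on_first_use(host: str, port: int = 443) -> bool:
    """True on first connection, or if the remembered certificate still matches."""
    pins = json.loads(PIN_STORE.read_text()) if PIN_STORE.exists() else {}
    seen = device_cert_fingerprint(host, port)
    key = f"{host}:{port}"
    if key not in pins:
        pins[key] = seen                              # first use: remember the device
        PIN_STORE.write_text(json.dumps(pins, indent=2))
        return True
    return pins[key] == seen                          # later: the cert must not change

if __name__ == "__main__":
    print("Device trusted:", trust_on_first_use("192.168.1.1"))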

Chris Palmer

unread,
Dec 18, 2014, 2:29:26 PM12/18/14
to Gervase Markham, public-w...@w3.org, blink-dev, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 9:14 AM, Gervase Markham <ge...@mozilla.org> wrote:

> I think this is a good idea - in fact, it's essential if we are to make
> secure the 'new normal'.

Woo hoo! :)

> I agree that a phased transition plan based on telemetry thresholds is
> the right thing. This is a collective action problem ("Chrome tells me
> this site is insecure, but Firefox is fine - so I'll use Firefox") and
> so it would be awesome if we could get cross-browser agreement on what
> the thresholds were and how they were measured.

We don't currently have any hard thresholds, just numbers that I kind
of made up. Any suggestions?

Also, shall we measure resource loads, top-level navigations, minutes
spent looking at the top-level origin, ...? Probably all of those and
more...
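
Purely as an illustration of what gating on telemetry could look like, here is a small Python sketch that maps a measured secure-navigation ratio to a UI phase. The metric and the thresholds are placeholders in the spirit of the strawman in the original proposal, not agreed values.

from enum import Enum

class Phase(Enum):
    UNMARKED = "non-secure origins unmarked"
    DUBIOUS = "non-secure origins marked as Dubious"
    NON_SECURE = "non-secure origins marked as Non-secure"
    SECURE_UNMARKED = "secure origins unmarked, non-secure origins marked"

def rollout_phase(secure_ratio: float) -> Phase:
    """secure_ratio: fraction of top-level navigations (or resource loads, or
    time spent) that were to secure origins, as measured by telemetry."""
    if secure_ratio > 0.85:       # placeholder thresholds, very much up for discussion
        return Phase.SECURE_UNMARKED
    if secure_ratio > 0.75:
        return Phase.NON_SECURE
    if secure_ratio > 0.65:
        return Phase.DUBIOUS
    return Phase.UNMARKED

print(rollout_phase(0.58))   # roughly Chrome's current figure -> Phase.UNMARKED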

> I wonder whether we could make a start by marking non-secure origins in
> a neutral way, as a step forward from not marking them at all. Straw-man
> proposal for Firefox: replace the current greyed-out globe which appears
> where the lock otherwise is with a black eye icon. When clicked, instead
> of saying:
>
> "This website does not supply identity information.
>
> Your connection to this website is not encrypted."
>
> it has a larger eye icon, and says something like:
>
> "This web page was transferred over a non-secure connection, which means
> that the information could have been (was probably?!) intercepted and
> read by a third party while in transit."
>
> There are many degrees of this; let's start moving this way.

Yeah, that sounds good.

Thanks!

Chris Palmer

unread,
Dec 18, 2014, 2:33:36 PM12/18/14
to Jason Striegel, security-dev, public-w...@w3.org, blink-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 9:52 AM, jstriegel via blink-dev
<blin...@chromium.org> wrote:

> I'd like to propose consideration of a fourth category:
> Personal Devices (home routers, printers, IoT, raspberry pis in classrooms, refrigerators):
> - cannot, by nature, participate in DNS and CA systems
> - likely on private network block
> - user is the owner of the service, hence can trust self rather than CA
>
> Suggested use:
> - IoT devices generate unique, self-signed cert
> - Friendlier interstitial (i.e., "Is this a device you recognize?") for self-signed connections on *.local, 192.168.*, 10.*, or on the same local network as the browser.
> - user approves use on first https connection
> - browser remembers (device is promoted to "secure" status)
>
> A lot of IoT use cases could benefit from direct connection (not requiring a cloud service as secure data proxy), but this currently gives the scariest of Chrome warnings. This is probably why the average home router or firewall is administered over http.

Yes, I agree this is a problem. I am hoping to publish a proposal for
how UAs can authenticate private devices soon (in January probably).

A key goal is not having to ask the user "Is this a device you
recognize?" — I think we can get the UX flow even simpler, and still
be strong. Watch this space...

Monica Chew

unread,
Dec 18, 2014, 3:12:24 PM12/18/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Hello Chris,

I support the goal of this project, but I'm not sure how we can get to a point where showing warning indicators makes sense. It seems that about 67% of pageviews on the Firefox beta channel are http, not https. How are Chrome's numbers?

http://telemetry.mozilla.org/#filter=beta%2F34%2FHTTP_PAGELOAD_IS_SSL&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph

Security warnings are often overused and therefore ignored [1]; it's even worse to provide a warning for something that's not actionable. I think we'd have to see very low plaintext rates (< 1%) in order not to habituate users into ignoring a plaintext warning indicator.

Lots of site operators don't support HTTPS; in fact, some of them (e.g., https://nytimes.com and https://monica-at-mozilla.blogspot.com, which is out of my control) redirect to plaintext in order to avoid mixed content warnings. I don't think that user agents provided the right incentives in this case, and showing a warning 100% of the time to a NYTimes user seems like a losing battle.

Why not shift the onus from the user to the site operators? I would love to see a "wall of shame" for the Alexa top 1M sites that don't support HTTPS, redirect HTTPS to HTTP, and don't support HSTS. Perhaps search providers could use those to penalize rankings, as Google already does for non HTTPS sites. Efforts to make it cheap and easy to deploy HTTPS also need to advance.

Thanks,
Monica

[1] http://lorrie.cranor.org/pubs/sslwarnings.pdf


Peter Kasting

unread,
Dec 18, 2014, 3:20:10 PM12/18/14
to Monica Chew, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:
Security warnings are often overused and therefore ignored [1]; it's even worse to provide a warning for something that's not actionable. I think we'd have to see very low plaintext rates (< 1%) in order not to habituate users into ignoring a plaintext warning indicator.

The context of the paper you cite is for a far more intrusive type of warning than anyone has proposed here.  Interstitials or popups are very aggressive methods of warning that should only be used when something is almost certainly wrong, or else they indeed risk the "crying wolf" effect.  Some sort of small passive indicator is a very different thing.

PK 

Chris Palmer

unread,
Dec 18, 2014, 3:27:45 PM12/18/14
to Monica Chew, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:

> I support the goal of this project, but I'm not sure how we can get to a
> point where showing warning indicators makes sense. It seems that about 67%
> of pageviews on the Firefox beta channel are http, not https. How are
> Chrome's numbers?

Currently, roughly 58% of top-level navigations in Chrome are HTTPS.

> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think we'd
> have to see very low plaintext rates (< 1%) in order not to habituate users
> into ignoring a plaintext warning indicator.

(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.

(b) What Peter Kasting said: we propose a passive indicator, not a
pop-up or interstitial.

> Lots of site operators don't support HTTPS, in fact some of them (e.g.,
> https://nytimes.com and https://monica-at-mozilla.blogspot.com, which is out
> of my control) redirect to plaintext in order to avoid mixed content
> warnings. I don't think that user agents provided the right incentives in
> this case, and showing a warning 100% of the time to a NYTimes user seems
> like a losing battle.

Again, it's a passive indicator; and, the proposal is to *fix* what
you seem to agree is the wrong incentive.

The NY Times in particular is committed to change and challenges other
news sites to move to HTTPS:

http://open.blogs.nytimes.com/2014/11/13/embracing-https/

> Why not shift the onus from the user to the site operators?

This isn't about putting an onus on users, it's about allowing users
to at least perceive the reality. And yes, that will put pressure on
some site operators. At the same time, the industry is working to make
HTTPS more usable. These efforts are complementary.

Jeffrey Walton

unread,
Dec 18, 2014, 3:43:19 PM12/18/14
to Peter Kasting, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
According to Gutmann, they are equally ignored by users. In the first
case, the user will click through the intrusive popup. In the second
case, they won't know what the icon means or they will ignore it.
Refer to Chapter 2 and Chapter 3 of his book.

In both cases, the browser should do the right thing for the user. In
a security context, that's "defend, don't ask". Refer to Chapter 2 of
Gutmann's book.

Adrienne Porter Felt

unread,
Dec 18, 2014, 3:49:42 PM12/18/14
to Chris Palmer, Monica Chew, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 12:27 PM, 'Chris Palmer' via Security-dev <securi...@chromium.org> wrote:
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:

> I support the goal of this project, but I'm not sure how we can get to a
> point where showing warning indicators makes sense. It seems that about 67%
> of pageviews on the Firefox beta channel are http, not https. How are
> Chrome's numbers?

Currently, roughly 58% of top-level navigations in Chrome are HTTPS.

I'm curious about the difference between the two browsers. My guess is that we're treating same-origin navigations differently, particularly fragment changes. Monica, is Firefox collapsing all same-origin navigations into a single histogram entry? Given that people spend a lot of time on a small number of popular (and HTTPS) sites, it would account for the different stats.
 

> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think we'd
> have to see very low plaintext rates (< 1%) in order not to habituate users
> into ignoring a plaintext warning indicator.

(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.

I originally shared Monica's reservations --- I don't want to add another indicator that people will learn to ignore. But people are already ignoring http because we show no indicator at all, so in the worst case we will end up in the same place (but at least we will be consistent with how we label schemes).

Monica Chew

unread,
Dec 18, 2014, 4:18:57 PM12/18/14
to Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 12:27 PM, Chris Palmer <pal...@google.com> wrote:
On Thu, Dec 18, 2014 at 12:12 PM, Monica Chew <m...@mozilla.com> wrote:

> I support the goal of this project, but I'm not sure how we can get to a
> point where showing warning indicators makes sense. It seems that about 67%
> of pageviews on the Firefox beta channel are http, not https. How are
> Chrome's numbers?

Currently, roughly 58% of top-level navigations in Chrome are HTTPS.

Thanks for the numbers. That's a significant gap (58% vs 33%). Do you have any idea why this might be the case?
 

> Security warnings are often overused and therefore ignored [1]; it's even
> worse to provide a warning for something that's not actionable. I think we'd
> have to see very low plaintext rates (< 1%) in order not to habituate users
> into ignoring a plaintext warning indicator.

(a) Users are currently habituated to treat non-secure transport as
OK. The status quo is terrible.

(b) What Peter Kasting said: we propose a passive indicator, not a
pop-up or interstitial.

I understand the desire here, but a passive indicator is not going to change the status quo if it's shown 42% of the time (or 67% of the time, in Firefox's case). Other passive indicators (e.g., Prop 65 warnings if you live in California, or compiler warnings that aren't failures) haven't succeeded in changing the status quo. Again, what's the action that typical users are going to take when they see a passive indicator?

Thanks,
Monica

Monica Chew

unread,
Dec 18, 2014, 4:21:20 PM12/18/14
to Adrienne Porter Felt, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
I'm curious about the difference between the two browsers. My guess is that we're treating same-origin navigations differently, particularly fragment changes. Monica, is Firefox collapsing all same-origin navigations into a single histogram entry? Given that people spend a lot of time on a small number of popular (and HTTPS) sites, it would account for the different stats.

Thanks,
Monica

Peter Kasting

unread,
Dec 18, 2014, 4:34:25 PM12/18/14
to Monica Chew, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
I understand the desire here, but a passive indicator is not going to change the status quo if it's shown 42% of the time (or 67% of the time, in Firefox's case).

Which is presumably why the key question this thread asked is what metrics to use to decide it makes sense to start showing these warnings, and what the thresholds should be.

PK 

Monica Chew

unread,
Dec 18, 2014, 4:41:38 PM12/18/14
to Peter Kasting, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
OK. I think the thresholds should be < 5%, preferably < 1%. What do you think they should be?

Also I was wrong about collapsing fragment navigation, and that probably explains the difference between FF and Chrome.

Thanks,
Monica

Chris Palmer

unread,
Dec 18, 2014, 4:41:40 PM12/18/14
to Monica Chew, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:

> Thanks for the numbers. That's a significant gap (58% vs 33%). Do you have
> any idea why this might be the case?

I don't, unfortunately.

I think we (Chrome) are going to try measuring HTTPS vs. HTTP
deployment in other ways too, and then we might see discrepancies.

> I understand the desire here, but a passive indicator is not going to change
> the status quo if it's shown 42% of the time (or 67% of the time, in
> Firefox's case).

That's part of why we plan to gate the change on increasing HTTPS
adoption. Gervase liked that idea, too.

> Other passive indicators (e.g., Prop 65 warnings if you
> live in California, or compiler warnings that aren't failures) haven't
> succeeded in changing the status quo.

Citation needed...?

(If you're arguing that we should all compile with -Werror, I'll
surely agree with you. Chrome does. But I assume you did not mean to
suggest we should do the equivalent for HTTP navigation, at least not
yet...)

> Again, what's the action that typical
> users are going to take when they see a passive indicator?

First, keep in mind that you can't argue that showing the passive
indicator will be both ignored and crying wolf. It's one or the other.
Which argument are you making?

That said,

* Those few users who do look at it will at least be able to discern
the truth. Currently, they cannot.

* Site operators are likely to discern the truth, and may be motivated
to deploy HTTPS, if they feel that their user base might demand it.
(Complementarily, as site operators seek to use more powerful,
app-like features like Service Workers, they will increasingly deploy
HTTPS because they have to.)

* As we make the web more powerful and more app-like, we (Chrome) seek
to join the "what powerful permissions does this origin have?" views
and controls with the "by the way, how authentic is this origin?"
view. (See attached Chrome screenshot, showing that they are already
significantly merged. In my other window I am working on a patch to
make this easier to use.) As users become increasingly aware of these
controls, they may become increasingly aware of the authenticity
marking. And then they may make decisions about granting permissions
differently, or at least with more information. Basically, "how real
and how powerful is this origin" is gradually becoming a first-class
UX piece.

But, fundamentally, we owe it to users to tell the truth. I don't see
that the status quo is defensible.

Peter Kasting

unread,
Dec 18, 2014, 4:42:46 PM12/18/14
to Monica Chew, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
On Thu, Dec 18, 2014 at 1:41 PM, Monica Chew <m...@mozilla.com> wrote:
On Thu, Dec 18, 2014 at 1:34 PM, Peter Kasting <pkas...@google.com> wrote:
On Thu, Dec 18, 2014 at 1:18 PM, Monica Chew <m...@mozilla.com> wrote:
I understand the desire here, but a passive indicator is not going to change the status quo if it's shown 42% of the time (or 67% of the time, in Firefox's case).

Which is presumably why the key question this thread asked is what metrics to use to decide it makes sense to start showing these warnings, and what the thresholds should be.

OK. I think the thresholds should be < 5%, preferably < 1%. What do you think they should be?

I have no opinion.  I'm simply trying to keep the discussion on track.

PK

Chris Palmer

unread,
Dec 18, 2014, 4:46:13 PM12/18/14
to Monica Chew, public-w...@w3.org, blink-dev, security-dev, dev-se...@lists.mozilla.org
Sigh. Re-posting this, because the Mozilla list doesn't like large attachments.

To see the equivalent, go to https://www.google.com and click on the
green lock in the Omnibox.

Daniel Kahn Gillmor

unread,
Dec 18, 2014, 5:11:07 PM12/18/14
to Gervase Markham, Chris Palmer, public-w...@w3.org, blink-dev, security-dev, mozilla-de...@lists.mozilla.org
On 12/18/2014 12:14 PM, Gervase Markham wrote:
> I wonder whether we could make a start by marking non-secure origins in
> a neutral way, as a step forward from not marking them at all. Straw-man
> proposal for Firefox: replace the current greyed-out globe which appears
> where the lock otherwise is with a black eye icon. When clicked, instead
> of saying:
>
> "This website does not supply identity information.
>
> Your connection to this website is not encrypted."
>
> it has a larger eye icon, and says something like:
>
> "This web page was transferred over a non-secure connection, which means
> that the information could have been (was probably?!) intercepted and
> read by a third party while in transit."

I like this change.

Four proposed fine-tunings:

A) i don't think we should remove "This website does not supply
identity information" -- but maybe replace it with "The identity of this
site is unconfirmed" or "The true identity of this site is unknown"

B) snooping isn't the only issue -- modification is as well. Maybe the
updated statement can mention that the web page could have been modified
in transit as well.

C) if there was a way to ensure that the user knows we're talking about
the data they sent as well as the data they're looking at that would be
good too.

D) "a third party" is both legalistic and singular. there could be
multiple parties tapping the line. what about just "others"?


Here's my attempt at resolving these, fwiw:

-------
The true identity of this site is unknown.

This web page was transferred over a non-secure connection, which means
that the page and any information you sent to it could have been read or
modified by others while in transit.
-------

--dkg


Jeffrey Walton

unread,
Dec 18, 2014, 5:22:12 PM12/18/14
to Daniel Kahn Gillmor, public-w...@w3.org, blink-dev, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 5:10 PM, Daniel Kahn Gillmor
<d...@fifthhorseman.net> wrote:
> ...
> Four proposed fine-tunings:
>
> A) i don't think we should remove "This website does not supply
> identity information" -- but maybe replace it with "The identity of this
> site is unconfirmed" or "The true identity of this site is unknown"
None of them are correct when an interception proxy is involved. All
of them lead to a false sense of security.

Given the degree to which standard bodies accommodate (promote?)
interception, UA's should probably steer clear of making any
statements like that if accuracy is a goal.

Chris Palmer

unread,
Dec 18, 2014, 5:29:18 PM12/18/14
to nolo...@gmail.com, Daniel Kahn Gillmor, public-w...@w3.org, blink-dev, security-dev, mozilla-de...@lists.mozilla.org
On Thu, Dec 18, 2014 at 2:22 PM, Jeffrey Walton <nolo...@gmail.com> wrote:

>> A) i don't think we should remove "This website does not supply
>> identity information" -- but maybe replace it with "The identity of this
>> site is unconfirmed" or "The true identity of this site is unknown"
>
> None of them are correct when an interception proxy is involved. All
> of them lead to a false sense of security.
>
> Given the degree to which standard bodies accommodate (promote?)
> interception, UA's should probably steer clear of making any
> statements like that if accuracy is a goal.

Are you talking about if an intercepting proxy is intercepting HTTP
traffic, or HTTPS traffic?

A MITM needs a certificate for the proxied hostname that is signed
by an issuer the client trusts. Some attackers can achieve that,
but it's not trivial.
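
As a concrete illustration of the check being described here, the sketch below uses Python's standard ssl module, whose default context performs the same verification mainstream browsers do: the presented chain must end at a CA in the local trust store, and the certificate must match the requested hostname. A middlebox that substitutes its own certificate fails this handshake unless its issuer has been installed on the client.

import socket
import ssl

def fetch_verified_cert(host: str, port: int = 443) -> dict:
    """Connect with full verification and return the server's certificate."""
    ctx = ssl.create_default_context()       # loads the local CA trust store
    ctx.check_hostname = True                # cert's subject/SAN must match `host`
    ctx.verify_mode = ssl.CERT_REQUIRED      # chain must end at a trusted CA
    with socket.create_connection((host, port), timeout=5) as sock:
        # The handshake raises ssl.SSLCertVerificationError if a middlebox
        # presents a certificate that isn't signed by a trusted issuer or
        # that doesn't match the requested hostname.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

if __name__ == "__main__":
    cert = fetch_verified_cert("mail.google.com")
    print(cert["subject"])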

Michael Martinez

unread,
Dec 18, 2014, 5:55:58 PM12/18/14
to public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
No, it doesn't need a certificate. A MITM can be executed through a
compromised or rogue router. It's simple enough to set up a public
network in well-known wifi hotspots and attract unwitting users. Then
the HTTPS doesn't protect anyone's transmission from anything as the
router forms the other end of the secure connection and initiates its
own secure connection with the user's intended destination (either the
site they are trying to get to or whatever site the bad guys want them
to visit).

Google, Apple, and other large tech companies learned the hard way this
year that their use of HTTPS failed to protect users from MITM attacks.



--
Michael Martinez
http://www.michael-martinez.com/

YOU CAN HELP OUR WOUNDED WARRIORS
http://www.woundedwarriorproject.org/

Ryan Sleevi

unread,
Dec 18, 2014, 6:04:08 PM12/18/14
to michael....@xenite.org, mozilla-de...@lists.mozilla.org, security-dev, blin...@chromium.org, public-w...@w3.org

I'm sorry, this isn't how HTTPS works and isn't accurate, unless you have explicitly installed the router's cert as a CA cert. If you have, then you're not really an unwitting user (and it is quite hard to do this).

Of course, everything you said is true for insecure HTTP. Which is why this proposal is about reflecting that truth to the user.

Daniel Kahn Gillmor

unread,
Dec 18, 2014, 6:07:21 PM12/18/14
to michael....@xenite.org, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
On 12/18/2014 05:55 PM, Michael Martinez wrote:
> No it doesn't need a certificate. A MITM can be executed through a
> compromised or rogue router. It's simple enough to set up a public
> network in well-known wifi hotspots and attract unwitting users. Then
> the HTTPS doesn't protect anyone's transmission from anything as the
> router forms the other end of the secure connection and initiates its
> own secure connection with the user's intended destination (either the
> site they are trying to get to or whatever site the bad guys want them
> to visit).

It sounds like you're saying that browsers don't verify the X.509
certificate presented by the https origin server, or at least that they
don't verify that the hostname matches.

This is a serious and extraordinary claim. Please provide evidence for it.

--dkg


Michael Martinez

unread,
Dec 18, 2014, 6:39:17 PM12/18/14
to public-w...@w3.org, mozilla-de...@lists.mozilla.org, securi...@chromium.org, blin...@chromium.org
On 12/18/2014 6:04 PM, Ryan Sleevi wrote:



I'm sorry, this isn't how HTTPS works and isn't accurate, unless you have explicitly installed the router's cert as a CA cert. If you have, then you're not really an unwitting user (and it is quite hard to do this).


You're assuming people don't connect to open wifi hotspots where rogue routers can be set up by anyone. If thieves are willing to build fake ATMs and distribute them to shopping centers across a large geographical area, then they will certainly go to the same lengths to distribute rogue routers. Furthermore, at least two research teams showed earlier this year that wireless routers can be compromised with virus-like software; when these vulnerabilities are exploited, it won't matter whether the router has a valid certificate.

On top of that University of Birmingham researchers also found that inappropriately built apps for iOS and Android can leave users' security credentials vulnerable to sniffing.


Of course, everything you said is true for insecure HTTP. Which is why this proposal is about reflecting that truth to the user.

You people are putting your faith in a defense that has already been compromised in many ways.  The distributed nature of the network of access points virtually assures that MITM attacks will continue to bypass HTTPS security.  The manner of warnings you place on Websites with apparently invalid certificates is rendered moot.

Michael Martinez

unread,
Dec 18, 2014, 6:46:31 PM12/18/14
to Daniel Kahn Gillmor, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
No, what I am saying is that you can bypass the certificate for a MITM
attack via a new technique that was published earlier this year. If you
compromise someone else's router you can control it from your own nearby
router. The compromised router with the valid certificate sends the
user through whatever gateway you specify.

What makes the access points most vulnerable to attack is the human
factor. Someone has to monitor the system for breaches and how often
does that happen? It will vary by company and community, depending on
how well they budget for competent security techs. And how often are
these routers replaced with newer models? Look at what happened with
the ISPs earlier this year who had to replace all their routers because
they ran out of pathway memory. Even the "big guys" who are supposed to
think about this stuff all the time allow their equipment to depreciate
off the books or grow old until it's obsolete.

Meanwhile, you're trying to plug holes in a sieve with HTTPS and browser
warnings.

Chris Palmer

unread,
Dec 18, 2014, 6:49:48 PM12/18/14
to michael....@xenite.org, public-w...@w3.org, mozilla-de...@lists.mozilla.org, security-dev, blink-dev
On Thu, Dec 18, 2014 at 3:39 PM, Michael Martinez
<michael....@xenite.org> wrote:

> You're assuming people don't connect to open wifi hotspots where rogue
> routers can be set up by anyone. If thieves are willing to build fake ATM
> machines and distribute them to shopping centers across a large geographical
> area then they will certainly go to the same lengths to distribute rogue
> routers.

Indeed, there are many rogue wifi hotspots, and indeed many rogue
routers at ISPs (it's definitely not just "last mile" routing that we
need to be concerned about).

The part you're missing is that the man-in-the-middle attacker needs
to present a certificate for the server, say mail.google.com, that was
issued by a certification authority *that the client trusts*. Not just
any certificate for mail.google.com will do.

Now, this is not an insurmountable obstacle to the attacker. But it is
non-trivial: the CAs that clients trust are trying hard not to
mis-issue certificates. And, we are working to make it even more
difficult for attackers, such as with our Certificate Transparency and
public key pinning efforts.

Before arguing against HTTPS, you should make sure you know how it works.

I would encourage you to try to mount the attack you describe (only
against your own computers, of course!). I think you will find that
you won't get very far without a valid certificate issued by a
well-known CA.
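
As a side note on the key-pinning effort Chris mentions, the value that gets pinned is a hash of the certificate's SubjectPublicKeyInfo. The sketch below computes that "pin-sha256" value; it assumes a recent version of the third-party "cryptography" package (pip install cryptography), and www.google.com is only an example host.

import base64
import hashlib
import ssl

from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_pin(pem_cert: str) -> str:
    """Base64 SHA-256 of the certificate's SubjectPublicKeyInfo (a pin-sha256 value)."""
    cert = x509.load_pem_x509_certificate(pem_cert.encode())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

if __name__ == "__main__":
    pem = ssl.get_server_certificate(("www.google.com", 443))
    print("pin-sha256:", spki_pin(pem))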

Michael Martinez

unread,
Dec 18, 2014, 6:56:00 PM12/18/14
to public-w...@w3.org, mozilla-de...@lists.mozilla.org, securi...@chromium.org, blin...@chromium.org
On 12/18/2014 6:49 PM, Chris Palmer wrote:
> On Thu, Dec 18, 2014 at 3:39 PM, Michael Martinez
> <michael....@xenite.org> wrote:
>
>> You're assuming people don't connect to open wifi hotspots where rogue
>> routers can be set up by anyone. If thieves are willing to build fake ATM
>> machines and distribute them to shopping centers across a large geographical
>> area then they will certainly go to the same lengths to distribute rogue
>> routers.
> Indeed, there are many rogue wifi hotspots, and indeed many rogue
> routers at ISPs (it's definitely not just "last mile" routing that we
> need to be concerned about).
>
> The part you're missing is that the man-in-the-middle attacker needs
> to present a certificate for the server, say mail.google.com, that was
> issued by a certification authority *that the client trusts*. Not just
> any certificate for mail.google.com will do.
No, see my other reply for why that is no longer true. The point here
is that coercing the Web into changing over to HTTPS is equivalent to
forcing everyone to replace their cell phones with land lines. You're
trying to fix a technology that has been rendered obsolete by exploits
that were never anticipated in the original design.

> Now, this is not an insurmountable obstacle to the attacker. But it is
> non-trivial: the CAs that clients trust are trying hard not to
> mis-issue certificates. And, we are working to make it even more
> difficult for attackers, such as with our Certificate Transparency and
> public key pinning efforts.
>
> Before arguing against HTTPS, you should make sure you know how it works.
Before arguing FOR HTTPS you need to make sure you know about all the
latest exploits that render it useless.

I encourage you to spend more time doing research in this area and less
time repeating lectures that are outdated. HTTPS really doesn't
accomplish anything in the long run anyway. All the user data you're
encrypting eventually becomes vulnerable to hacking on the server side.
Sure, that data could be encrypted over there (and should be) but it's not.

So you're standing guard at the front door and the thieves are breaking
into the house through the windows. Meanwhile you're creating a bad
user experience with all these warnings and road-blocks to perfectly
legitimate Websites, burdening the system with extra processing cycles,
and not preventing massive MITM attacks from making the news every 1-2
months.

Your time and effort would be better spent improving the browser
experience.

Daniel Kahn Gillmor

unread,
Dec 18, 2014, 6:57:46 PM12/18/14
to michael....@xenite.org, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blin...@chromium.org
On 12/18/2014 06:46 PM, Michael Martinez wrote:
> No, what I am saying is that you can bypass the certificate for a MITM
> attack via a new technique that was published earlier this year.

Links, please.

> If you
> compromise someone else's router you can control it from your own nearby
> router. The compromised router with the valid certificate sends the
> user through whatever gateway you specify.

You seem to be saying now that the attacker does need a valid
certificate; earlier you claimed no certificate was needed.

I'm sure everyone agrees that the dominant X.509 certificate issuance
process and auditability can be improved, but it's not trivial to get a
fake cert automatically.

The fact that HTTPS is not 100% perfect does not mean that HTTP is
somehow secure.

You sound very concerned about MITM attacks. I am too.

Compared to HTTPS, HTTP is *trivially* vulnerable to MITM attacks.
Shouldn't we visibly mark HTTP connections as insecure?

--dkg


Matthew Dempsky

unread,
Dec 18, 2014, 7:02:45 PM12/18/14
to michael....@xenite.org, Daniel Kahn Gillmor, public-w...@w3.org, securi...@chromium.org, mozilla-de...@lists.mozilla.org, blink-dev
On Thu, Dec 18, 2014 at 3:46 PM, Michael Martinez <michael....@xenite.org> wrote:
No, what I am saying is that you can bypass the certificate for a MITM attack via a new technique that was published earlier this year.

Citation needed.

Earlier this year, you made these two G+ posts suggesting HTTPS is broken:

Google, the great champion of HTTPS/SSL, cannot prevent yet more man-in-the-middle attacks against its users: http://www.theregister.co.uk/2014/11/21/hackers_snaffling_smartphone_secrets_with_redirection_attack/
If your company is serious about using HTTPS it has to do it right (not that it will matter, but don't throw your money away on bad implementation).  http://www.darkreading.com/endpoint/the-week-when-attackers-started-winning-the-war-on-trust-/a/d-id/1317657

The first link is about an ARP-poisoning man-in-the-middle attack that has nothing to do with HTTPS/SSL, the article doesn't mention "HTTPS" or "SSL", and in fact the attack would have been *prevented* by HTTPS/SSL.

The second link is about how mismanaging your web server can compromise HTTPS's added security benefits (e.g., using long-unsupported MD5 certificates or revealing your SSL secret key).  That's true, but misleading: the risks are no more severe than if you mismanage an HTTP-only server.


You seem to be arguing that people shouldn't be encouraged to lock their doors when leaving because sometimes they forget to lock their windows.  But actually we need to encourage people to do *both*.