Intent to deprecate: Insecure HTTP

Richard Barnes

unread,
Apr 13, 2015, 10:57:58 AM4/13/15
to dev-pl...@lists.mozilla.org
There's pretty broad agreement that HTTPS is the way forward for the web.
In recent months, there have been statements from IETF [1], IAB [2], W3C
[3], and even the US Government [4] calling for universal use of
encryption, which in the case of the web means HTTPS.

In order to encourage web developers to move from HTTP to HTTPS, I would
like to propose establishing a deprecation plan for HTTP without security.
Broadly speaking, this plan would entail limiting new features to secure
contexts, followed by gradually removing legacy features from insecure
contexts. Having an overall program for HTTP deprecation makes a clear
statement to the web community that the time for plaintext is over -- it
tells the world that the new web uses HTTPS, so if you want to use new
things, you need to provide security. Martin Thomson and I drafted a
one-page outline of the plan with a few more considerations here:

https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Some earlier threads on this list [5] and elsewhere [6] have discussed
deprecating insecure HTTP for "powerful features". We think it would be a
simpler and clearer statement to avoid the discussion of which features are
"powerful" and focus on moving all features to HTTPS, powerful or not.

The goal of this thread is to determine whether there is support in the
Mozilla community for a plan of this general form. Developing a precise
plan will require coordination with the broader web community (other
browsers, web sites, etc.), and will probably happen in the W3C.

Thanks,
--Richard

[1] https://tools.ietf.org/html/rfc7258
[2]
https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
[3] https://w3ctag.github.io/web-https/
[4] https://https.cio.gov/
[5]
https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
[6]
https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion

DDD

unread,
Apr 13, 2015, 1:40:24 PM4/13/15
to
I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:

- Unauthenticated/Unencrypted [http]
- Unauthenticated/Encrypted [https ignoring untrusted cert warning]
- DNS based auth/Encrypted [TLSA certificate hash in DNS]
- Ditto with TLSA/DNSSEC
- Trusted CA Authenticated [Any root CA]
- EV Trusted CA [Special policy certificates]

Ironically, your problem is more a GUI thing. All the security technology you need actually exists already...

Eric Rescorla

unread,
Apr 13, 2015, 1:57:16 PM4/13/15
to DDD, dev-platform
On Mon, Apr 13, 2015 at 10:40 AM, DDD <david.a...@gmail.com> wrote:

> I think that you'll need to define a number of levels of security, and
> decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC
>

Note that Firefox does not presently support either DANE or DNSSEC,
so we don't need to distinguish these.

-Ekr




> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]
>
> Ironically, your problem is more a GUI thing. All the security technology
> you need actually exists already...

DDD

unread,
Apr 13, 2015, 2:29:00 PM4/13/15
to
>
> Note that Firefox does not presently support either DANE or DNSSEC,
> so we don't need to distinguish these.
>
> -Ekr
>

Nor does Chrome, and look what happened to both browsers...

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

...the keys to the castle are in the DNS registration process. It is illogical not to add TLSA support.

mh.in....@gmail.com

unread,
Apr 13, 2015, 2:33:02 PM4/13/15
to
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

May I suggest defining "security" here as either:

1) A secure host (SSL)

or

2) Protected by subresource integrity from a secure host

This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.

david.a...@gmail.com

unread,
Apr 13, 2015, 2:52:42 PM4/13/15
to

> 2) Protected by subresource integrity from a secure host
>
> This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.

Is that a complicated word for SHA512 HASH? :) You could envisage a new http URL pattern http://video.vp9?<SHA512-HASH>

Frederik Braun

unread,
Apr 13, 2015, 3:00:54 PM4/13/15
to dev-pl...@lists.mozilla.org
I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -

But, note that this will not give you extra security UI (or less
warnings): Browsers will still disable scripts served over HTTP on an
HTTPS page - even if the integrity matches.

This is because HTTPS promises integrity, authenticity and
confidentiality. SRI only provides the first of those.
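
For what it's worth, the digest step behind SRI really is that simple. A
minimal sketch in Python (the file name and the choice of sha384 are
assumptions for illustration, not anything mandated by the SRI spec):

    # Sketch of how an SRI-style integrity value is derived: hash the exact
    # bytes of the resource and base64-encode the digest.
    import base64
    import hashlib

    def sri_digest(path: str, algorithm: str = "sha384") -> str:
        with open(path, "rb") as f:
            data = f.read()
        digest = hashlib.new(algorithm, data).digest()
        return f"{algorithm}-{base64.b64encode(digest).decode('ascii')}"

    if __name__ == "__main__":
        # Prints e.g. 'sha384-...' suitable for an integrity="..." attribute.
        print(sri_digest("script.js"))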

Richard Barnes

unread,
Apr 13, 2015, 3:04:35 PM4/13/15
to Frederik Braun, dev-pl...@lists.mozilla.org
On Mon, Apr 13, 2015 at 3:00 PM, Frederik Braun <fbr...@mozilla.com> wrote:

> On 13.04.2015 20:52, david.a...@gmail.com wrote:
> >
> I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -
>
> But, note that this will not give you extra security UI (or less
> warnings): Browsers will still disable scripts served over HTTP on an
> HTTPS page - even if the integrity matches.
>
> This is because HTTPS promises integrity, authenticity and
> confidentiality. SRI only provides the former.
>

I agree that we should probably not allow insecure HTTP resources to be
looped in through SRI.

There are several issues with this idea, but the one that sticks out for me
is the risk of leakage from HTTPS through these http-schemed resource
loads. For example, the fact that you're loading certain images might
reveal which Wikipedia page you're reading.

--Richard

Gervase Markham

unread,
Apr 13, 2015, 3:11:32 PM4/13/15
to
On 13/04/15 15:57, Richard Barnes wrote:
> Martin Thomson and I drafted a
> one-page outline of the plan with a few more considerations here:
>
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Are you sure "privileged contexts" is the right phrase? Surely contexts
are "secure", and APIs or content is "privileged" by being only
available in a secure context?

There's nothing wrong with your plan, but that's partly because it's
hard to disagree with your principle, and the plan is pretty high level.
I think the big arguments will be over when and what features require a
secure context, and how much breakage we are willing to tolerate.

I know the Chrome team have a similar plan; is there any suggestion that
we might coordinate on feature re-privilegings?

Would we put an error on the console when a privileged API was used in
an insecure context?

Gerv

Gervase Markham

unread,
Apr 13, 2015, 3:12:44 PM4/13/15
to DDD
On 13/04/15 18:40, DDD wrote:
> I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC
> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]

I'm not quite sure what this has to do with the proposal you are
commenting on, but I would politely ask you how many users you think are
both interested in, able to understand, and willing to take decisions
based on _six_ different security states in a browser?

The entire point of this proposal is to reduce the web to 1 security
state - "secure".

Gerv


Martin Thomson

unread,
Apr 13, 2015, 3:28:45 PM4/13/15
to Gervase Markham, dev-platform
On Mon, Apr 13, 2015 at 12:11 PM, Gervase Markham <ge...@mozilla.org> wrote:
> Are you sure "privileged contexts" is the right phrase? Surely contexts
> are "secure", and APIs or content is "privileged" by being only
> available in a secure context?

There was a long-winded group bike-shed-painting session on the
public-webappsec list and this is the term they ended up with. I
don't believe that it is the right term either, FWIW.

> There's nothing wrong with your plan, but that's partly because it's
> hard to disagree with your principle, and the plan is pretty high level.
> I think the big arguments will be over when and what features require a
> secure context, and how much breakage we are willing to tolerate.

Not much, but maybe more than we used to.

> I know the Chrome team have a similar plan; is there any suggestion that
> we might coordinate on feature re-privilegings?

Yes, the intent is definitely to collaborate, as the original email
stated. Chrome isn't the only stakeholder, which is why we suggested
that we go to the W3C so that the browser formerly known as IE and
Safari are included.

> Would we put an error on the console when a privileged API was used in
> an insecure context?

Absolutely. That's likely to be a first step once the targets have
been identified. That pattern has already been established for bad
crypto and a bunch of other things that we don't like but are forced
to tolerate for compatibility reasons.

david.a...@gmail.com

unread,
Apr 13, 2015, 3:35:08 PM4/13/15
to
> I would politely ask you how many users you think are
> both interested in, able to understand, and willing to take decisions
> based on _six_ different security states in a browser?

I think this thread is about deprecating things and moving developers onto more secure platforms. To do that, you'll need to tell me *why* I need to make the effort. The only thing that I am going to care about is to get users closer to that magic green bar and padlock icon.

You may hope that security is black and white, but in practice it isn't. There is always going to be a sliding scale. Do you show me a green bar and padlock if I go to www.google.com, but the certificate is issued by my intranet? Do you show me the same certificate error I'd get if I were connecting to a site with a clearly malicious certificate?

What if I go to www.google.com, but the certificate has been issued incorrectly because Firefox ships with 500 equally trusted root certificates?


So - yeah, you're going to need a rating system for your security: A, B, C, D, Fail. You're going to have to explain what situations get you into what group, how as a developer I can move to a higher group (e.g. add a certificate hash into DNS, get an EV certificate costing $10,000, implement DNSSEC, use PFS ciphersuites and you get an A rating). I'm sure that there'll be new security vulnerabilities and best practice in future, too.

Then it is up to me as a developer to decide how much effort I can realistically put into this...

...for my web-site containing pictures of cats...

commod...@gmail.com

unread,
Apr 13, 2015, 3:36:56 PM4/13/15
to
Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!

stu...@testtrack4.com

unread,
Apr 13, 2015, 4:29:23 PM4/13/15
to
HTTP should remain optional and fully-functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug which point before hitting the TLS layer is corrupting requests.

byu...@gmail.com

unread,
Apr 13, 2015, 4:43:25 PM4/13/15
to
On Monday, April 13, 2015 at 3:36:56 PM UTC-4, commod...@gmail.com wrote:
> Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!

Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share. These guys have run amok with major decisions (like the HTTP/2 TLS mandate) because of a lack of competition.

These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas ...)

ipar...@gmail.com

unread,
Apr 13, 2015, 4:48:31 PM4/13/15
to
I have given this a lot of thought lately, and to me the only way forward is to do exactly what is suggested here: phase out and eventually drop plain HTTP support. There are numerous reasons for doing this:

- Plain HTTP allows someone to snoop on your users.

- Plain HTTP allows someone to misrepresent your content to the users.

- Plain HTTP is a great vector for phishing, as well as injecting malicious code that comes from your domain.

- Plain HTTP provides no guarantees of identity to the user. Arguably, the current HTTPS implementation doesn't do much to fix this, but more on this below.

- Lastly, arguing that HTTP is cheaper than HTTPS is going to be much harder once there are more providers giving away free certs (looking at StartSSL and Let's Encrypt).

My vision would be that HTTP should be marked with the same warning (except for wording of course) as an HTTPS site secured by a self-signed cert. In terms of security, they are more or less equivalent, so there is no reason to treat them differently. This should be the goal.

There are problems with transitioning to giving a huge scary warning for HTTP. They include:

- A large number of sites that don't support HTTPS. To fix this, I think the best method is to show the "http://" part of the URL in red, and publicly announce that over the next X months Firefox is moving to the model of giving a big scary warning a la self-signed cert warning if HTTPS is not enabled.

- A large number of corporate intranets that run plain HTTP. Perhaps a build-time configuration option could be provided that would let system administrators suppress the warning for certain subdomains or for RFC 1918 addresses as well as localhost. Note that carrier-grade NAT in IPv4 might make the latter a bad choice by default.

- Ad supported sites report a drop in ad revenue when switching to HTTPS. I don't know what the problem or solution here is, but I am certain this is a big hurdle for some sites.

- Lack of free wildcard certificates. Ideally, Let's Encrypt should provide these.

- Legacy devices that cannot be upgraded to support HTTPS or only come with self-signed certificates. This is a problem that can be addressed by letting the user bypass the scary warning (just like with self-signed certs).

Finally, some people conflate the idea of a global transition from plain HTTP to HTTPS as a move by CA's to make more money. They might argue that first, we need to get rid of CA's or provide an alternative path for obtaining certificates. I disagree. Switching from plain HTTP to HTTPS is step one. Step two might include adding more avenues for establishing trust and authentication. There is no reason to try to add additional methods of authenticating the servers while still allowing them to use no encryption at all. Let's kill off plain HTTP first, then worry about how to fix the CA system. Let's Encrypt will of course make this a lot easier by providing free certs.

ipar...@gmail.com

unread,
Apr 13, 2015, 4:55:01 PM4/13/15
to
On Monday, April 13, 2015 at 4:43:25 PM UTC-4, byu...@gmail.com wrote:

> These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas ...)

A self-signed cert is worse than HTTP, in that you cannot know if the site you are accessing is supposed to have a self-signed cert or not. If you know that, you can check the fingerprint and bypass the warning. But let's say you go to download a fresh copy of Firefox, just to find out that https://www.mozilla.org/ is serving a self-signed cert. How can you possibly be sure that you are not being MITM'ed? Arguably, it's worse if we simply ignore the fact that the cert is self-signed, and simply let you download the compromised version, vs giving you some type of indication that the connection is not secure (e.g.: no green bar because it's plain HTTP).

That is not to say that we should continue as is. HTTP is insecure, and should give the same warning as HTTPS with a self-signed cert.

Joshua Cranmer 🐧

unread,
Apr 13, 2015, 5:08:54 PM4/13/15
to
On 4/13/2015 3:29 PM, stu...@testtrack4.com wrote:
> HTTP should remain optional and fully-functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug which point before hitting the TLS layer is corrupting requests.

If you actually go to read the details of the proposal rather than
relying only on the headline, you'd find that there is an intent to
actually let you continue to use http for, e.g., localhost. The exact
boundary between "secure" HTTP and "insecure" HTTP is being actively
discussed in other forums.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist
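
To make the shape of that boundary concrete, here is a rough sketch of the
kind of check being discussed, in Python. The rule set below (https/wss/file
plus loopback hosts count as secure) is an assumption for illustration, not
the normative W3C definition:

    # Rough sketch of a "potentially secure origin" check.
    from urllib.parse import urlsplit

    SECURE_SCHEMES = {"https", "wss", "file"}
    LOOPBACK_HOSTS = {"localhost", "127.0.0.1", "::1"}

    def is_potentially_secure(url: str) -> bool:
        parts = urlsplit(url)
        if parts.scheme in SECURE_SCHEMES:
            return True
        # Plain http is allowed only for local development hosts here.
        return parts.scheme == "http" and parts.hostname in LOOPBACK_HOSTS

    assert is_potentially_secure("https://example.com/")
    assert is_potentially_secure("http://localhost:8000/dev")
    assert not is_potentially_secure("http://example.com/")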

bryan....@gmail.com

unread,
Apr 13, 2015, 5:11:02 PM4/13/15
to
One limiting factor is that Firefox doesn't treat form data the same on HTTPS sites.

Examples:

http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor

http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button

After losing a few forum posts or wiki edits to this bug in Firefox, you quickly insist on using unsecured HTTP as often as possible.

Boris Zbarsky

unread,
Apr 13, 2015, 5:29:23 PM4/13/15
to
On 4/13/15 5:11 PM, bryan....@gmail.com wrote:
> After losing a few forum posts or wiki edits to this bug in Firefox, you quickly insist on using unsecured HTTP as often as possible.

This is only done in cases in which the page explicitly requires that
nothing about the page be cached (no-cache), yes?

That said, we should see if we can stop doing the state-not-saving thing
for SSL+no-cache and tell banks who want it to use no-store.

-Boris
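
To illustrate the distinction Boris is drawing, here is a minimal sketch of a
server emitting the two headers; the handler, paths and port are hypothetical,
purely for illustration:

    # "no-cache" forces revalidation but still allows storage (so form state
    # can survive); "no-store" forbids keeping anything on the client at all.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/html; charset=utf-8")
            if self.path.startswith("/bank"):
                # Sensitive pages: nothing may be kept on the client.
                self.send_header("Cache-Control", "no-store")
            else:
                # Ordinary pages: may be stored, but must be revalidated.
                self.send_header("Cache-Control", "no-cache")
            self.end_headers()
            self.wfile.write(b"<form><textarea name='post'></textarea></form>")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()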

Joseph Lorenzo Hall

unread,
Apr 13, 2015, 5:57:59 PM4/13/15
to ipar...@gmail.com, dev-pl...@lists.mozilla.org
Late to the thread, but I'll use this reply to say we're very
supportive of the proposal at CDT.



--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
j...@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Eugene

unread,
Apr 13, 2015, 6:53:01 PM4/13/15
to
I fully support this proposal. In addition to APIs, I'd like to propose prohibiting caching of any resources loaded over insecure HTTP, regardless of the Cache-Control header, in Phase 2.N. The reasons are:
1) A MITM can pollute users' HTTP caches by modifying JavaScript files served with a long Cache-Control max-age (see the sketch below).
2) It won't break any websites; there is just some performance penalty for them.
3) Many website operators and users avoid HTTPS because they believe it is much slower than plaintext HTTP. After deprecating the HTTP cache, this argument will be even weaker.
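
The sketch mentioned in point 1, as a toy simulation of the poisoning
concern; all URLs, payloads and lifetimes are made up for illustration:

    # A response fetched once over insecure HTTP (possibly via a MITM) is
    # kept and reused for as long as its max-age allows, with no way to
    # notice the tampering later.
    import time

    cache = {}  # url -> (body, expires_at)

    def fetch(url: str, network_body: bytes, max_age: int) -> bytes:
        entry = cache.get(url)
        if entry and entry[1] > time.time():
            return entry[0]  # served from cache, network not consulted
        cache[url] = (network_body, time.time() + max_age)
        return network_body

    # First load goes through an attacker who injects a script and a
    # one-year max-age.
    fetch("http://cdn.example/lib.js", b"evil();", max_age=365 * 24 * 3600)
    # Later loads return the poisoned copy even if the real server is clean.
    later = fetch("http://cdn.example/lib.js", b"good();", max_age=60)
    assert later == b"evil();"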

Martin Thomson

unread,
Apr 13, 2015, 7:03:13 PM4/13/15
to Eugene, dev-platform
On Mon, Apr 13, 2015 at 3:53 PM, Eugene <imfasterth...@gmail.com> wrote:
> In addition to APIs, I'd like to propose prohibiting caching any resources loaded over insecure HTTP, regardless of Cache-Control header, in Phase 2.N.

This has some negative consequences (if only for performance). I'd
like to see changes like this properly coordinated. I'd rather just
treat "caching" as one of the features for Phase 2.N.

Karl Dubost

unread,
Apr 13, 2015, 7:13:48 PM4/13/15
to Richard Barnes, dev-pl...@lists.mozilla.org
Richard,

Le 13 avr. 2015 à 23:57, Richard Barnes <rba...@mozilla.com> a écrit :
> There's pretty broad agreement that HTTPS is the way forward for the web.

Yes, but that doesn't make deprecation of HTTP a consensus.

> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

This is not encouragement; this is called forcing. ^_^ Let's just use the right terms for the right things.


In the document
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

You say:
Phase 3: Essentially all of the web is HTTPS.

I understand this is the last hypothetical step, but it sounds a bit like "let's move the Web to XML". That didn't work out very well.

I would love to have a more secure Web, but this cannot happen without a few careful considerations.

* A mandatory third party for certificates is a no-go. It creates a system of authority and power, an additional layer of hierarchy which deeply modifies the ability for anyone to publish, and might in some circumstances increase the security risk.

* If we have to rely on certificates, their cost must be zero, for the simple reason that not everyone is living in a rich industrialized country.

* Setup and publication through HTTPS should be as easy as with HTTP. The Web brought publishing power to individuals. Imagine cases where you need to create a local network, develop on your own computer, or hack together a server for your school, community, etc. If it relies on a heavy process, it will not happen.


So instead of a plan based on technical features, I would love to see: "Let's move to a secure Web. What are the user scenarios we need to solve to achieve that?"

These user scenarios are economical, social, etc.


my 2 cents.
So yes, but not the way it is introduced and planned now.


--
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

david.a...@gmail.com

unread,
Apr 13, 2015, 7:48:27 PM4/13/15
to
> * If we have to rely, cost of certificates must be zero. These for the simple reason than not everyone is living in a rich industrialized country.

Certificates (and paying for them) are an artificial economy. If I register a DNS address, I should get a certificate to go with it. Heck, the last time I got an SSL certificate, they effectively bootstrapped the trust based on my DNS MX record...

Hence IMO TLS should be:
- DANE for everyone
- DANE & Trusted Third Party CAs for the few
- DANE & TTP & EV for sites that accept financial and medical details

The Firefox opportunistic encryption feature is a good first step towards this goal. If they could just nslookup the TLSA certificate hash, we'd be a long way down the road.
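
A rough sketch of what such a check could look like, with the DNS lookup
stubbed out. The helper name and the expected hash below are placeholders
(a real client would query the TLSA record at _443._tcp.<host>), so this is
an illustration of the idea rather than a working DANE implementation:

    import hashlib
    import ssl

    def published_tlsa_hash(host: str) -> str:
        # Stand-in for a DNS TLSA lookup -- an assumption, not a real API.
        return "00" * 32

    def certificate_matches_dns(host: str, port: int = 443) -> bool:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        # TLSA selector 0 (full certificate), matching type 1 (SHA-256).
        observed = hashlib.sha256(der).hexdigest()
        return observed == published_tlsa_hash(host)

    if __name__ == "__main__":
        print(certificate_matches_dns("example.com"))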

northrupt...@gmail.com

unread,
Apr 13, 2015, 8:57:41 PM4/13/15
to
On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
> Broadly speaking, this plan would entail limiting new features to secure
> contexts, followed by gradually removing legacy features from insecure
> contexts. Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over -- it
> tells the world that the new web uses HTTPS, so if you want to use new
> things, you need to provide security.

I'd be fully supportive of this if - and only if - at least one of the following is implemented alongside it:

* Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure
* Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

imfasterth...@gmail.com

unread,
Apr 13, 2015, 9:43:27 PM4/13/15
to
On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com wrote:
>
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

This feature (i.e. opportunistic encryption) was implemented in Firefox 37, but unfortunately an implementation bug made HTTPS insecure too. But I guess Mozilla will fix it and make this feature available in a future release.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority
>
> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

I don't think the current CA system is broken. Domain name registration is also centralized, but almost every website has a hostname rather than using an IP address, and few people complain about this.

Karl Dubost

unread,
Apr 13, 2015, 10:10:44 PM4/13/15
to imfasterth...@gmail.com, dev-pl...@lists.mozilla.org

Le 14 avr. 2015 à 10:43, imfasterth...@gmail.com a écrit :
> I don't think the current CA system is broken.

The current CA system creates issues for certain categories of the population. It is broken in some ways.

> The domain name registration is also centralized, but almost every website has a hostname, rather than using IP address, and few people complain about this.

Two points:

1. You do not need to register a domain name to have a Web site (IP address)
2. You do not need to register a domain name to run a local blah.test.site

Both are still working and not deprecated in browsers ^_^

Now, the fact that you have to rent your domain name ($$$), and that all URIs are tied to it, has strong social consequences in terms of permanent identifiers and the persistence of information over time. But that's another debate from the one in this thread on deprecating HTTP in favor of HTTPS.

I would love to see this discussion happening in Whistler too.

ipar...@gmail.com

unread,
Apr 13, 2015, 11:26:59 PM4/13/15
to
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

No. Namecoin has so many other problems that it is not feasible.

> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

Agree that it's broken. The fact that any CA can issue a cert for any domain is stupid, always was and always will be. It's now starting to bite us.

However, HTTPS and the CA system don't have to be tied together. Let's ditch the immediately insecure plain HTTP, then add ways to authenticate trusted certs in HTTPS by means other than our current CA system. The two problems are orthogonal, and trying to solve both at once will just leave us exactly where we are: trying to argue for a fundamentally different system.

ipar...@gmail.com

unread,
Apr 13, 2015, 11:31:19 PM4/13/15
to
On Monday, April 13, 2015 at 10:10:44 PM UTC-4, Karl Dubost wrote:

> Now the fact to have to rent your domain name ($$$) and that all the URIs are tied to this is in terms of permanent identifiers and the fabric of time on information has strong social consequences. But's that another debate than the one of this thread on deprecating HTTP in favor of HTTPS.

The registrars are, as far as I'm concerned, where the solution to the CA problem lies. You buy a domain name from someone, you are already trusting them with it. They can simply redirect your nameservers elsewhere and you can't do anything about it. Remember, you never buy a domain name, you lease it.

What does this have to do with plain HTTP to HTTPS transition? Well, why are we trusting CA's at all? Why not have the registrar issue you a wildcard cert with the purchase of a domain, and add restrictions to the protocol such that only your registrar can issue a cert for that domain?

Or even better, have the registrar sign a CA cert for you that is good for your domain only. That way you can issue unlimited certs for domains you own and *nobody but you can do that*.

However, like you said that's a separate discussion. We can solve the CA problem after we solve the plain HTTP problem.

commod...@gmail.com

unread,
Apr 14, 2015, 12:27:22 AM4/14/15
to
On Monday, April 13, 2015 at 1:43:25 PM UTC-7, byu...@gmail.com wrote:
> Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share.
Or, more likely, it'll be a chance for Microsoft and Apple to laugh all the way to the bank. Because seriously, what else would you expect to happen when the makers of a web browser announce that, starting in X months, they'll be phasing out compatibility with the vast majority of existing websites?

vic

unread,
Apr 14, 2015, 1:16:25 AM4/14/15
to
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> HTTP deprecation

I'm strongly against the proposal as it is described here. I work with small embedded devices (think sensor network) that are accessed over HTTP. These devices have very little memory, only a few kB, implementing SSL is simply not possible. Who are you to decree these devices become unfit hosts?

Secondly, the proposal to restrict unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine, but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

Lastly, deprecating HTTP in the current state of the certificate authority business is completely unacceptable. These are *not* separate issues: to implement HTTPS without warnings you must be able to obtain certificates (including wildcard ones) easily and affordably, and not only if you are a citizen of a rich western country. The "let's go ahead and we'll figure this out later" attitude is irresponsible considering the huge impact that this change will have.

I would view this proposal favorably if 1) you didn't try to force people to adopt the One True Way and 2) the CA situation was fixed.

b...@hutchins.co

unread,
Apr 14, 2015, 1:18:47 AM4/14/15
to
This isn't at all what Richard was trying to say. The original discussion states that the plan will be to make all new browser features only work under HTTPS, to help developers and website owners migrate to HTTPS. This does not mean these browsers will ever remove support for HTTP; the plan is simply to deprecate it. Browsers still support many legacy and deprecated features.

b...@hutchins.co

unread,
Apr 14, 2015, 1:28:41 AM4/14/15
to
An embedded device would not be using a web browser such as Firefox, so this isn't really much of a concern. The idea would be to only enforce HTTPS deprecation from browsers, not web servers. You can continue to use HTTP on your own web services and therefore use it through your embedded devices.

As all technology protocols change over time, enforcing encryption is a natural and logical step in evolving web technology. Additionally, while everyone is able to make their own decisions, it doesn't mean people make the right choice. A website that handles sensitive data insecurely over HTTP while its users are unaware (most web consumers do not even know what the difference between HTTP and HTTPS means) is not worth the risk. It'd be better to enforce security and reduce the risks to internet privacy. Mozilla never truly tries to operate with an authoritarian approach, but this suggestion is to protect the consumers of the web, not the developers of the web.

Mozilla is trying to get https://letsencrypt.org/ started, which will be free, removing all price arguments from this discussion.

IMHO, this debate should be focused on improving the way HTTP is deprecated, but I do not believe there are any valid concerns that HTTP should not be deprecated.

Yoav Weiss

unread,
Apr 14, 2015, 1:53:06 AM4/14/15
to b...@hutchins.co, dev-pl...@lists.mozilla.org
IMO, limiting new features to HTTPS only, when there's no real security
reason behind it, will only end up limiting feature adoption.
It directly "punishes" developers and adds friction to using new features,
but only influences business in a very indirect manner.

If we want to move more people to HTTPS, we can do any or all of the
following:
* Show user warnings when the site they're on is insecure
* Provide an opt-in "don't display non-HTTPS sites" mode as an integral part
of the browser. Make it extremely easy to opt in.

Search engines can also:
* Downgrade ranking of insecure sites in a significant way
* Provide a "don't show me insecure results" button

If you're limiting features to HTTPS with no reason, you're implicitly
saying that developer laziness is what's stalling adoption. I don't believe
that's the case.

There's a real eco-system problem with 3rd party widgets and ad networks
that makes it hard for large sites to switch until all of their site's
widgets have. Developers have no say here. Business does.

What you want is for the business folks to threaten that out-dated 3rd
party widget: if it doesn't move to HTTPS, the site will switch to the
competition. For that you need to use a stick that business folks
understand: "If you're on HTTP, you'll see less and less traffic". Limiting

Anne van Kesteren

unread,
Apr 14, 2015, 2:23:07 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org, b...@hutchins.co
On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Limiting new features does absolutely nothing in that aspect.

Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
Workers as a reason to start deploying HTTPS:

http://open.blogs.nytimes.com/2014/11/13/embracing-https/

(And anecdotally, I find it easier to convince developers to deploy
HTTPS on the basis of some feature needing it than on merit. And it
makes sense, if they need their service to do X, they'll go through
the extra trouble to do Y to get to X.)


--
https://annevankesteren.nl/

Anne van Kesteren

unread,
Apr 14, 2015, 2:25:56 AM4/14/15
to david.a...@gmail.com, dev-pl...@lists.mozilla.org

Anne van Kesteren

unread,
Apr 14, 2015, 3:29:05 AM4/14/15
to Karl Dubost, dev-pl...@lists.mozilla.org, imfasterth...@gmail.com
On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdu...@mozilla.com> wrote:
> 1. You do not need to register a domain name to have a Web site (IP address)

Name one site you visit regularly that doesn't have a domain name. And
even then, you can get certificates for public IP addresses.


> 2. You do not need to register a domain name to run a local blah.test.site

We should definitely allow whitelisting of sorts for developers. As a
start localhost will be a privileged context by default. We also have
an override in place for Service Workers.

This is not a reason not to do HTTPS. This is something we need to
improve along the way.


--
https://annevankesteren.nl/

david.a...@gmail.com

unread,
Apr 14, 2015, 3:29:26 AM4/14/15
to
Yawn - those were all terrible articles. To summarise their points: "NSA is bad, some DNS servers are out of date, DNSSEC may still be using shorter 1024-bit RSA key lengths (hmm... much like TLS then)"

The trouble is: Just because something isn't perfect, doesn't make it a bad idea. Certificates are not perfect, but they are not a bad idea. Putting certificate thumbprints in DNS is not perfect, but it's not half a *good* idea.

Think about it: if your completely clear-text, unauthenticated DNS connection is compromised, then your browser is going to go to the wrong server anyway. If it goes to the wrong server, so will your email, as will the identity verification messages from your CA.

Your browser needs to retrieve A and AAAA addresses from DNS anyway, so why not pull TLSA certificate hashes at the same time? Even without DNSSEC, this could only improve things.

Case in point, *absolutely* due to the frankly incomprehensible refusal to do this: http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

There is nothing you can do to fix this with traditional X509, or any single chain of trust. You need multiple, independent proofs of identity. A combination of X509 and a number of different signed DNS providers seem like a good way to approach this.

Finally - you can audit DNSSEC/TLSA responses programmatically, as the response records are cached publicly in globally dispersed DNS servers; it's really hard to do the equivalent of "send a different chain when IP address 1.2.3.4 connects".

I have my own opinions why TLSA certificate pinning records are not being checked and, having written an implementation myself, I can guarantee you that it isn't due to any technical complexity.

Anne van Kesteren

unread,
Apr 14, 2015, 3:39:29 AM4/14/15
to david.a...@gmail.com, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:29 AM, <david.a...@gmail.com> wrote:
> The trouble is: Just because something isn't perfect, doesn't make it a bad idea.

I think it's a pretty great idea and it's one people immediately think
of. However, as those articles explain in detail, it's also a far from
realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
and is the focus of this thread. Whether we can achieve similar
guarantees through DNS at some point is orthogonal and is best
discussed elsewhere:

https://tools.ietf.org/wg/dane/


--
https://annevankesteren.nl/

david.a...@gmail.com

unread,
Apr 14, 2015, 3:47:16 AM4/14/15
to
> realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
> and is the focus of this thread.

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

Sure it works :)

imm...@gmail.com

unread,
Apr 14, 2015, 3:48:48 AM4/14/15
to
> Secondly the proposal to restrain unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

One might note that this has already been tried, *and succeeded*, with SPDY and then HTTP 2.

HTTP 2 is faster than HTTP 1, but both Mozilla and Google are refusing to allow unencrypted HTTP 2 connections. Sites like http://httpvshttps.com/ intentionally mislead users into thinking that TLS improves connection speed, when actually the increased speed is from HTTP 2.

lorenzo...@gmail.com

unread,
Apr 14, 2015, 3:51:59 AM4/14/15
to
> The goal of this thread is to determine whether there is support in the
> Mozilla community for a plan of this general form. Developing a precise
> plan will require coordination with the broader web community (other
> browsers, web sites, etc.), and will probably happen in the W3C.
>

From the user/sysadmin point of view it would be very helpful to have information on how the following issues will be handled:

1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy that doesn't use MITM certificates. If all users must move to HTTPS there will be no way to re-use content downloaded for one user to accelerate another user. This is an important issue for locations with many users and poor internet connectivity.

2) Self signed certificates: in many situations it is hard/impossible to get certificates signed by a CA (e.g. provisioning embedded devices). The current approach in many of these situations is not to use HTTPS. If the plan goes into effect what other solution could be used?

Regarding problem 1: I guess that allowing HTTP for resources loaded with subresource integrity could be some sort of alternative, but it would require collaboration from the server owner. Since it is more work than simply letting the web server send out caching headers automatically, I wonder how many sites will implement it.

Regarding problem 2: in my opinion it can be mitigated by offering the user a new standard way to validate self-signed certificates: the user is prompted to enter the fingerprint of the certificate that she must have received out-of-band; if the user enters the correct fingerprint, the certificate is marked as trusted (see [1]). This clearly opens up some attacks that should be carefully assessed.

Best,
Lorenzo


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1012879
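
A minimal sketch of the fingerprint check proposed in problem 2, SSH-style.
The host, store path and fingerprints are placeholders, and a real
implementation would live inside the browser's certificate override
machinery rather than in a script like this:

    # The user supplies a fingerprint received through another channel; if
    # it matches the certificate the server presents, remember the pairing
    # (much like ~/.ssh/known_hosts).
    import hashlib
    import json
    import ssl

    TRUST_STORE = "trusted_fingerprints.json"

    def observed_fingerprint(host: str, port: int = 443) -> str:
        der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
        return hashlib.sha256(der).hexdigest()

    def trust_if_match(host: str, typed: str) -> bool:
        typed = typed.replace(":", "").strip().lower()
        if typed != observed_fingerprint(host):
            return False
        try:
            with open(TRUST_STORE) as f:
                store = json.load(f)
        except FileNotFoundError:
            store = {}
        store[host] = typed
        with open(TRUST_STORE, "w") as f:
            json.dump(store, f)
        return True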

imm...@gmail.com

unread,
Apr 14, 2015, 3:55:45 AM4/14/15
to
Another note:

Nobody, to within experimental error, uses IP addresses to access public websites.

But plenty of people use them for test servers, temporary servers, and embedded devices. (My home router is http://192.168.1.254/; do they need to get a certificate for 192.168.1.254? Or do home routers need to come with installation CDs that install the router's root certificate? How is that not a worse situation, where every web user has to trust the router manufacturer?)

And even though nobody uses IP addresses, and many public websites don't work with IP addresses (because vhosts), nobody in their right mind would ever suggest removing the possibility of accessing web servers without domain names.

Yoav Weiss

unread,
Apr 14, 2015, 3:56:01 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org, b...@hutchins.co
On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren <ann...@annevk.nl> wrote:

> On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > Limiting new features does absolutely nothing in that aspect.
>
> Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> Workers as a reason to start deploying HTTPS:
>
> http://open.blogs.nytimes.com/2014/11/13/embracing-https/


I stand corrected. So it's the 8th reason out of 9, right before technical
debt.

I'm not saying using new features is not an incentive, and I'm definitely
not saying HTTP2 and SW should have been enabled on HTTP.
But, when done without any real security or deployment issues that mandate
it, you're subjecting new features to significant adoption friction that is
unrelated to the feature itself, in order to apply some indirect pressure
on businesses to do the right thing.
You're inflicting developer pain without any real justification. A sort of
collective punishment, if you will.

If you want to apply pressure, apply it where it makes the most impact with
the least cost. Limiting new features to HTTPS is not the place, IMO.


>
> (And anecdotally, I find it easier to convince developers to deploy
> HTTPS on the basis of some feature needing it than on merit. And it
> makes sense, if they need their service to do X, they'll go through
> the extra trouble to do Y to get to X.)
>
>
Don't convince the developers. Convince the business. Drive users away from
insecure services and toward secure ones by displaying warnings, etc.
Anecdotally on my end, I saw small Web sites that care very little about
security move to HTTPS overnight after Google added HTTPS as a (weak)
ranking signal
<http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html>.
(reason #4 in that NYT article)

Anne van Kesteren

unread,
Apr 14, 2015, 4:05:09 AM4/14/15
to lorenzo...@gmail.com, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:51 AM, <lorenzo...@gmail.com> wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy that doesn't use MITM certificates. If all users must move to HTTPS there will be no way to re-use content downloaded for one user to accelerate another user. This is an important issue for locations with many users and poor internet connectivity.

Where is the evidence that this is a problem in practice? What do
these environments do for YouTube?


> 2) Self signed certificates: in many situations it is hard/impossible to get certificates signed by a CA (e.g. provisioning embedded devices). The current approach in many of these situations is not to use HTTPS. If the plan goes into effect what other solution could be used?

Either something like
https://bugzilla.mozilla.org/show_bug.cgi?id=1012879 as you mentioned
or overrides for local devices. This definitely needs more research
but shouldn't preclude rolling out HTTPS on public resources.


--
https://annevankesteren.nl/

Anne van Kesteren

unread,
Apr 14, 2015, 4:07:26 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.

Why is that you think there is no justification in deprecating HTTP?


>> (And anecdotally, I find it easier to convince developers to deploy
>> HTTPS on the basis of some feature needing it than on merit. And it
>> makes sense, if they need their service to do X, they'll go through
>> the extra trouble to do Y to get to X.)
>
> Don't convince the developers. Convince the business.

Why not both? There's no reason to only attack this top-down.


--
https://annevankesteren.nl/

Yoav Weiss

unread,
Apr 14, 2015, 4:39:27 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:07 AM, Anne van Kesteren <ann...@annevk.nl>
wrote:

> On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > You're inflicting developer pain without any real justification. A sort
> of
> > collective punishment, if you will.
>
> Why is that you think there is no justification in deprecating HTTP?
>

Deprecating HTTP is totally justified. Enabling some features on HTTP but
not others is not, unless there's a real technical reason why these new
features shouldn't be enabled.

Anne van Kesteren

unread,
Apr 14, 2015, 4:44:25 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Deprecating HTTP is totally justified. Enabling some features on HTTP but
> not others is not, unless there's a real technical reason why these new
> features shouldn't be enabled.

I don't follow. If HTTP is no longer a first-class citizen, why do we
need to treat it as such?


--
https://annevankesteren.nl/

Alex C

unread,
Apr 14, 2015, 4:54:41 AM4/14/15
to
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it would take more effort to disable a feature on HTTP than to let it work, and yet the feature is disabled anyway, that's more than just HTTP being "not a first class citizen".

Yoav Weiss

unread,
Apr 14, 2015, 5:33:14 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:43 AM, Anne van Kesteren <ann...@annevk.nl>
wrote:

> On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > Deprecating HTTP is totally justified. Enabling some features on HTTP but
> > not others is not, unless there's a real technical reason why these new
> > features shouldn't be enabled.
>
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?
>

I'm afraid the second class citizens in that scenario would be the new
features, rather than HTTP.

intell...@gmail.com

unread,
Apr 14, 2015, 5:42:09 AM4/14/15
to
Op maandag 13 april 2015 16:57:58 UTC+2 schreef Richard Barnes:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

Each organisation has its own reasons to push for moving away from HTTP.
It doesn't mean that each of those reasons is ethical.


> In order to encourage web developers to move from HTTP to HTTPS

Why ?
Large multinationals do not allow HTTPS traffic within the border gateways of their own infrastructure; why make it harder for them?

Why give people the impression that, because they are using HTTPS, they will be much safer in the future, when the implications are actually much larger (no dependability anymore, being forced to trust root CAs, etc.)?

Why burden hosting companies and webmasters with extra costs?


Do not forget that the most widely used webmaster/web-hosting control panels do not support SNI, so each HTTPS site has to have its own unique IP address.
Here in Europe we are still using IPv4, and RIPE can't issue new IPv4 addresses because they are all gone. So as long as that isn't resolved, it can't be done.


IMHO HTTPS would be safer if no large companies or governments were involved in issuing the certificates, and the certificates were free or otherwise compensated somehow.

The countries whose people profit less from HTTPS, because human rights are better respected there, have the means to pay for SSL certificates; but the people you want to protect don't, and even if they did, they still have government(s) to deal with.

As long as you think that root CAs are 100% trustworthy and that governments can't manipulate them or perform a replay attack afterwards, HTTPS is the way to go... but until those issues (and SNI/IPv4) are handled, don't, because it will cause more harm in the long run.

Do not get me wrong, the intention is good. But trying to protect humanity from humanity also means keeping in mind the issues surrounding it.

Mike de Boer

unread,
Apr 14, 2015, 6:11:12 AM4/14/15
to intell...@gmail.com, Mozilla dev-platform mailing list mailing list

> On 14 Apr 2015, at 11:42, intell...@gmail.com wrote:
>

Something entirely off-topic: I'd like to point out that replying to popular threads like this unsigned, with only a hint of identity in an obscure email address, makes me (and I'm sure others too) skip your message or, worse, not take it seriously. In my mind I imagine your message signed off with something like:

"Cheers, mYLitTL3P0nIEZLuLZrAinBowZ.

- Sent from a Galaxy Tab Nexuzzz Swift Super, Gold & Ruby Edition by an 8yr old stuck in Kindergarten.”

… which doesn’t feel like the identity anyone would prefer to assume.

Best, Mike.

Henri Sivonen

unread,
Apr 14, 2015, 6:30:21 AM4/14/15
to dev-platform
On Mon, Apr 13, 2015 at 5:57 PM, Richard Barnes <rba...@mozilla.com> wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

I agree that we should get the Web onto https and I'm very happy to
see this proposal.

> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> Some earlier threads on this list [5] and elsewhere [6] have discussed
> deprecating insecure HTTP for "powerful features". We think it would be a
> simpler and clearer statement to avoid the discussion of which features are
> "powerful" and focus on moving all features to HTTPS, powerful or not.

I understand that especially in debates about crypto, there's a strong
desire to avoid ratholing and bikeshedding. However, I think avoiding
the discussion on which features are "powerful" is the wrong way to
get from the current situation to where we want to be.

Specifically:

1) I expect that withholding new CSS effects (which are no more
privacy-sensitive than existing CSS effects) from http origins purely
as a way to force sites onto https will create resentment among Web
devs. That resentment would be better avoided in order to have Web
devs support the cause of encrypting the Web, and it could be avoided
by withholding features from http only on grounds that tie clearly to
the downsides of http relative to https.

2) I expect withholding certain *existing* privacy-sensitive features
from http to have a greater leverage to push sites to https than
withholding privacy-neutral *new* features.

Specifically, on point #2, I think we should start by, by default,
forgetting all cookies that don't have the "secure" flag set at the
end of the Firefox session. Persistent cookies have two main use
cases:
* On login-requiring sites, not requiring the user to have to
re-enter credentials in every browser session.
* Behavioral profiling.

The first has a clear user-facing benefit. The second is something
that users typically don't want and breaking it has no obvious
user-visible effect of breaking Web compat of the browser.

Fortunately, the most-used login-requiring sites use https already, so
forgetting insecure cookies at the end of the session would have no
adverse effect on the most-user-visible use of persistent cookies.
Also, if a login-requiring site is not already using https, it's
pretty non-controversial that they are Doing It Wrong and should
migrate to https.

One big reason why mostly content-oriented sites, such as news sites,
haven't migrated to https is that they are ad-funded and the
advertising networks are lagging behind in https deployment. Removing
persistence from insecure cookies would give a reason for the ad
networks to accelerate https deployment and do so in a way that
doesn't break the Web in user-visible ways during the transition. That
is, if ad networks want to track users, at least they shouldn't enable
collateral tracking by network eavesdroppers while doing so.

So I think withholding cookie persistence from insecure cookies could
well be way more effective per unit of disruption of user-perceived
Web compat than anything in your proposal.
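
(A minimal sketch of the policy described above, assuming a simplified
cookie record; the Cookie type here is a stand-in for whatever the real
cookie store uses:

    # At the end of the browsing session, keep only cookies that were set
    # with the "secure" flag; everything set over plain http is session-only.
    from dataclasses import dataclass

    @dataclass
    class Cookie:
        name: str
        value: str
        host: str
        secure: bool        # True only if the cookie carried the Secure attribute
        persistent: bool    # True if it had an Expires/Max-Age in the future

    def cookies_to_keep_after_session(jar: list[Cookie]) -> list[Cookie]:
        return [c for c in jar if c.persistent and c.secure]

    jar = [
        Cookie("sid", "abc", "mail.example.com", secure=True, persistent=True),
        Cookie("track", "xyz", "ads.example.net", secure=False, persistent=True),
    ]
    assert [c.name for c in cookies_to_keep_after_session(jar)] == ["sid"]

)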

In addition to persistent cookies, I think we should seek to be more
aggressive in making other features that allow sites to store
persistent state on the client https-only than in making new features
in general https-only. (I realize that applying this consistently to
the HTTP cache could be infeasible on performance grounds in the near
future at least.)

Furthermore, I think this program should have a UI aspect to it:
Currently, the UI designation for http is neutral while the UI
designation for mixed content is undesirable. I think we should make
the UI designation of plain http undesirable once x% of the sites that
users encounter on a daily basis are https. Since users don't interact
with the whole Web equally, this means that the UI for http would be
made undesirable much earlier than the time when x% of Web sites
migrates to https. x should be chosen to be high enough as to avoid
warning fatigue that'd desensitize users to the undesirable UI
designation.

--
Henri Sivonen
hsiv...@hsivonen.fi
https://hsivonen.fi/

Boris Zbarsky

unread,
Apr 14, 2015, 7:47:18 AM4/14/15
to
On 4/14/15 3:28 AM, Anne van Kesteren wrote:
> On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdu...@mozilla.com> wrote:
>> 1. You do not need to register a domain name to have a Web site (IP address)
>
> Name one site you visit regularly that doesn't have a domain name.

My router's configuration UI. Here "regularly" is probably once a month
or so.

> And even then, you can get certificates for public IP addresses.

It's not a public IP address.

We do need a solution for this space, which I expect includes the
various embedded devices people are bringing up; I expect those are
behind firewalls more often than on the publicly routable internet.

-Boris

david.a...@gmail.com

unread,
Apr 14, 2015, 7:53:37 AM4/14/15
to
> Something entirely off-topic: I'd like to inform people that your replies to popular threads like this unsigned, with only a notion of identity in an obscure email address, makes me - and I'm sure others too - skip your message or worse; not take it seriously.


Not everyone has the luxury of being public on the Internet. Especially in discussions about default Internet encryption. The real decision makers won't be posting at all.

Eric Shepherd

unread,
Apr 14, 2015, 8:32:54 AM4/14/15
to Joshua Cranmer 🐧, dev-pl...@lists.mozilla.org
Joshua Cranmer 🐧 wrote:
> If you actually go to read the details of the proposal rather than
> relying only on the headline, you'd find that there is an intent to
> actually let you continue to use http for, e.g., localhost. The exact
> boundary between "secure" HTTP and "insecure" HTTP is being actively
> discussed in other forums.
My main concern with the notion of phasing out unsecured HTTP is that
doing so will cripple or eliminate Internet access by older devices that
aren't generally capable of handling encryption and decryption on such a
massive scale in real time.

While it may sound silly, those of us who are into classic computers
and making them do fun new things use HTTP to connect 10 MHz (or even 1
MHz) machines to the Internet. These machines can't handle the demands
of SSL. So this is a step toward making their Internet connections go away.

This may not be enough of a reason to save HTTP, but it's something I
wanted to point out.

--

Eric Shepherd
Senior Technical Writer
Mozilla <https://www.mozilla.org/>
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy

Gervase Markham

unread,
Apr 14, 2015, 8:36:11 AM4/14/15
to
Yep. That's the system working. CA does something they shouldn't, we
find out, CA is no longer trusted (perhaps for a time).

Or do you have an alternative system design where no-one ever makes a
mistake and all the actors are trustworthy?

Gerv

ena...@gmail.com

unread,
Apr 14, 2015, 8:37:35 AM4/14/15
to
On Tuesday, April 14, 2015 at 3:05:09 AM UTC-5, Anne van Kesteren wrote:

> This definitely needs more research
> but shouldn't preclude rolling out HTTPS on public resources.

The proposal as presented is not limited to public resources. The W3C Privileged Context draft which it references exempts only localhost and file:/// resources, not resources on private networks. There are hundreds of millions of home routers and similar devices with web UIs on private networks, and no clear path under this proposal to keep them fully accessible (without arbitrary feature limitations) except to set up your own local CA, which is excessively burdensome.

Eli Naeher

Gervase Markham

unread,
Apr 14, 2015, 8:39:24 AM4/14/15
to
On 14/04/15 01:57, northrupt...@gmail.com wrote:
> * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> treated as less secure than HTTP is - to put this as politely and
> gently as possible - a pile of bovine manure

http://gerv.net/security/self-signed-certs/ , section 3.

But also, Firefox is implementing opportunistic encryption, which AIUI
gives you a lot of what you want here.
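
(For reference, and as I understand the mechanism: opportunistic
encryption is driven by the Alt-Svc response header. A plain-http
response can advertise a TLS alternative for the same origin, which
Firefox then uses without the URL or the UI changing, roughly:

  HTTP/1.1 200 OK
  Alt-Svc: h2=":443"

The alternative connection is encrypted but not authenticated, so it
defends against passive eavesdropping, not an active MITM.)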

Gerv

Gervase Markham

unread,
Apr 14, 2015, 8:45:15 AM4/14/15
to lorenzo...@gmail.com
On 14/04/15 08:51, lorenzo...@gmail.com wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by
> a proxy that doesn't use MITM certificates. If all users must move to
> HTTPS there will be no way to re-use content downloaded for one user
> to accelerate another user. This is an important issue for locations
> with many users and poor internet connectivity.

Richard talked, IIRC, about not allowing subloads over HTTP with
subresource integrity. This is one argument to the contrary. Sites could
use HTTP-with-integrity to provide an experience which allowed for
better caching, with the downside being some loss of coarse privacy for
the user. (Cached resources, by their nature, are not going to be
user-specific, so there won't be leak of PII. But it might leak what you
are reading or what site you are on.)
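
For concreteness, such a subload would be declared with the standard
subresource integrity attribute; the URL and hash below are
placeholders:

  <script src="http://static.example.com/framework.js"
          integrity="sha384-[base64 digest of the expected file]"
          crossorigin="anonymous"></script>

A caching proxy can store and serve those bytes for many users, but
if they are tampered with, the digest check fails and the script is
not executed.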

Gerv

david.a...@gmail.com

unread,
Apr 14, 2015, 8:47:50 AM4/14/15
to
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
>
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?
>
> Gerv

Yes - as I said previously. Do the existing certificate checks to a trusted CA root, then do a TLSA DNS lookup for the certificate pin and check that *as well*. If you did this (and Google publish their SHA-512 hashes in DNS), you could have had lots of copies of Firefox ringing back "potential compromise" messages. Who knows how long those certificates were out there (or what other ones are currently out there that you could find just by implementing TLSA).

The more routes to trust, the better. A trusted root CA is "all eggs in one basket". DANE is "all eggs in one basket", DNSSEC is "all eggs in one basket".

Put them all together and you have a pretty reliable basket :)

This is what I mean by working toward a security rating of A, B, C, D, Fail - not just a "yes/no" answer.
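
(For reference, the TLSA lookup described is just an extra DNS query keyed on port and protocol; the record below is illustrative, with a placeholder digest:

  _443._tcp.www.example.com. IN TLSA 3 1 1 <hex digest of the server's public key>

  $ dig +dnssec _443._tcp.www.example.com TLSA

The "3 1 1" fields mean: match the end-entity certificate (3), by its SubjectPublicKeyInfo (1), using a SHA-256 digest (1); SHA-512 is matching type 2.)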

hugoosval...@gmail.com

unread,
Apr 14, 2015, 9:57:21 AM4/14/15
to
I'm curious as to what would happen with things that cannot have TLS certificates: routers and similar web-configurable-only devices (like small PBX-like devices, etc).

They don't have a proper domain, and may grab an IP via radvd (or dhcp on IPv4), so there's no certificate to be had.

They'd have to use self-signed, which seems to be treated pretty badly (warning message, etc).

Would we be getting rid of the self-signed warning when visiting a website?

Aryeh Gregor

unread,
Apr 14, 2015, 10:01:41 AM4/14/15
to Gervase Markham, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 3:36 PM, Gervase Markham <ge...@mozilla.org> wrote:
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
>
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?

No, but it would make sense to require that sites be validated through
a single specific CA, rather than allowing any CA to issue a
certificate for any site. That would drastically reduce the scope of
attacks: an attacker would have to compromise a single specific CA,
instead of any one of hundreds. IIRC, HSTS already allows this on an
opt-in basis. If validation was done via DNSSEC instead of the
existing CA system, this would follow automatically, without sites
having to commit to a single CA. It also avoids the bootstrapping
problem with HSTS, unless someone has solved that in some other way
and I didn't notice.
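
(The opt-in mechanism likely being recalled here is HTTP Public Key
Pinning, a companion to HSTS rather than HSTS itself. A sketch of the
header, wrapped for readability, with placeholder pins; the spec
requires at least one backup pin:

  Public-Key-Pins: pin-sha256="<base64 digest of the site's key>";
                   pin-sha256="<base64 digest of a backup key>";
                   max-age=5184000; includeSubDomains

Like HSTS, it is trust-on-first-use, so it only protects visits after
the first one.)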

Richard Barnes

unread,
Apr 14, 2015, 10:11:56 AM4/14/15