
Intent to deprecate: Insecure HTTP


Richard Barnes

unread,
Apr 13, 2015, 10:57:58 AM4/13/15
to dev-pl...@lists.mozilla.org
There's pretty broad agreement that HTTPS is the way forward for the web.
In recent months, there have been statements from IETF [1], IAB [2], W3C
[3], and even the US Government [4] calling for universal use of
encryption, which in the case of the web means HTTPS.

In order to encourage web developers to move from HTTP to HTTPS, I would
like to propose establishing a deprecation plan for HTTP without security.
Broadly speaking, this plan would entail limiting new features to secure
contexts, followed by gradually removing legacy features from insecure
contexts. Having an overall program for HTTP deprecation makes a clear
statement to the web community that the time for plaintext is over -- it
tells the world that the new web uses HTTPS, so if you want to use new
things, you need to provide security. Martin Thomson and I drafted a
one-page outline of the plan with a few more considerations here:

https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Some earlier threads on this list [5] and elsewhere [6] have discussed
deprecating insecure HTTP for "powerful features". We think it would be a
simpler and clearer statement to avoid the discussion of which features are
"powerful" and focus on moving all features to HTTPS, powerful or not.

The goal of this thread is to determine whether there is support in the
Mozilla community for a plan of this general form. Developing a precise
plan will require coordination with the broader web community (other
browsers, web sites, etc.), and will probably happen in the W3C.

Thanks,
--Richard

[1] https://tools.ietf.org/html/rfc7258
[2]
https://www.iab.org/2014/11/14/iab-statement-on-internet-confidentiality/
[3] https://w3ctag.github.io/web-https/
[4] https://https.cio.gov/
[5]
https://groups.google.com/d/topic/mozilla.dev.platform/vavZdN4tX44/discussion
[6]
https://groups.google.com/a/chromium.org/d/topic/blink-dev/2LXKVWYkOus/discussion

DDD

unread,
Apr 13, 2015, 1:40:24 PM4/13/15
to
I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:

- Unauthenticated/Unencrypted [http]
- Unauthenticated/Encrypted [https ignoring untrusted cert warning]
- DNS based auth/Encrypted [TLSA certificate hash in DNS]
- Ditto with TLSA/DNSSEC
- Trusted CA Authenticated [Any root CA]
- EV Trusted CA [Special policy certificates]

Ironically, your problem is more a GUI thing. All the security technology you need actually exists already...

Eric Rescorla

unread,
Apr 13, 2015, 1:57:16 PM4/13/15
to DDD, dev-platform
On Mon, Apr 13, 2015 at 10:40 AM, DDD <david.a...@gmail.com> wrote:

> I think that you'll need to define a number of levels of security, and
> decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC
>

Note that Firefox does not presently support either DANE or DNSSEC,
so we don't need to distinguish these.

-Ekr




> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]
>
> Ironically, your problem is more a GUI thing. All the security technology
> you need actually exists already...

DDD

unread,
Apr 13, 2015, 2:29:00 PM4/13/15
to
>
> Note that Firefox does not presently support either DANE or DNSSEC,
> so we don't need to distinguish these.
>
> -Ekr
>

Nor does Chrome, and look what happened to both browsers...

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

...the keys to the castle are in the DNS registration process. It is illogical not to add TLSA support.

mh.in....@gmail.com

unread,
Apr 13, 2015, 2:33:02 PM4/13/15
to
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

May I suggest defining "security" here as either:

1) A secure host (SSL)

or

2) Protected by subresource integrity from a secure host

This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.
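
For concreteness, a minimal sketch (in Python, with a hypothetical file name) of how a Subresource Integrity value is computed; the result is what would go into the integrity attribute of the tag that references the asset:

import base64
import hashlib

def sri_hash(path):
    # SRI values have the form "<alg>-<base64 digest>"; sha384 is the commonly used strength.
    with open(path, "rb") as f:
        digest = hashlib.sha384(f.read()).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

# "static/app.js" is a hypothetical asset served from the non-HTTPS host.
print(sri_hash("static/app.js"))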

david.a...@gmail.com

unread,
Apr 13, 2015, 2:52:42 PM4/13/15
to

> 2) Protected by subresource integrity from a secure host
>
> This would allow website operators to securely serve static assets from non-HTTPS servers without MITM risk, and without breaking transparent caching proxies.

Is that a complicated word for a SHA-512 hash? :) You could envisage a new http URL pattern http://video.vp9?<SHA512-HASH>

Frederik Braun

unread,
Apr 13, 2015, 3:00:54 PM4/13/15
to dev-pl...@lists.mozilla.org
I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -

But note that this will not give you extra security UI (or fewer
warnings): browsers will still disable scripts served over HTTP on an
HTTPS page - even if the integrity matches.

This is because HTTPS promises integrity, authenticity and
confidentiality. SRI only provides the first of those.

Richard Barnes

unread,
Apr 13, 2015, 3:04:35 PM4/13/15
to Frederik Braun, dev-pl...@lists.mozilla.org
On Mon, Apr 13, 2015 at 3:00 PM, Frederik Braun <fbr...@mozilla.com> wrote:

> On 13.04.2015 20:52, david.a...@gmail.com wrote:
> >
> I suppose Subresource Integrity would be http://www.w3.org/TR/SRI/ -
>
> But note that this will not give you extra security UI (or fewer
> warnings): browsers will still disable scripts served over HTTP on an
> HTTPS page - even if the integrity matches.
>
> This is because HTTPS promises integrity, authenticity and
> confidentiality. SRI only provides the first of those.
>

I agree that we should probably not allow insecure HTTP resources to be
looped in through SRI.

There are several issues with this idea, but the one that sticks out for me
is the risk of leakage from HTTPS through these http-schemed resource
loads. For example, the fact that you're loading certain images might
reveal which Wikipedia page you're reading.

--Richard

Gervase Markham

unread,
Apr 13, 2015, 3:11:32 PM4/13/15
to
On 13/04/15 15:57, Richard Barnes wrote:
> Martin Thomson and I drafted a
> one-page outline of the plan with a few more considerations here:
>
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

Are you sure "privileged contexts" is the right phrase? Surely contexts
are "secure", and APIs or content is "privileged" by being only
available in a secure context?

There's nothing wrong with your plan, but that's partly because it's
hard to disagree with your principle, and the plan is pretty high level.
I think the big arguments will be over when and what features require a
secure context, and how much breakage we are willing to tolerate.

I know the Chrome team have a similar plan; is there any suggestion that
we might coordinate on feature re-privilegings?

Would we put an error on the console when a privileged API was used in
an insecure context?

Gerv

Gervase Markham

unread,
Apr 13, 2015, 3:12:44 PM4/13/15
to DDD
On 13/04/15 18:40, DDD wrote:
> I think that you'll need to define a number of levels of security, and decide how to distinguish them in the Firefox GUI:
>
> - Unauthenticated/Unencrypted [http]
> - Unauthenticated/Encrypted [https ignoring untrusted cert warning]
> - DNS based auth/Encrypted [TLSA certificate hash in DNS]
> - Ditto with TLSA/DNSSEC
> - Trusted CA Authenticated [Any root CA]
> - EV Trusted CA [Special policy certificates]

I'm not quite sure what this has to do with the proposal you are
commenting on, but I would politely ask you how many users you think are
both interested in, able to understand, and willing to take decisions
based on _six_ different security states in a browser?

The entire point of this proposal is to reduce the web to 1 security
state - "secure".

Gerv


Martin Thomson

unread,
Apr 13, 2015, 3:28:45 PM4/13/15
to Gervase Markham, dev-platform
On Mon, Apr 13, 2015 at 12:11 PM, Gervase Markham <ge...@mozilla.org> wrote:
> Are you sure "privileged contexts" is the right phrase? Surely contexts
> are "secure", and APIs or content is "privileged" by being only
> available in a secure context?

There was a long-winded group bike-shed-painting session on the
public-webappsec list and this is the term they ended up with. I
don't believe that it is the right term either, FWIW.

> There's nothing wrong with your plan, but that's partly because it's
> hard to disagree with your principle, and the plan is pretty high level.
> I think the big arguments will be over when and what features require a
> secure context, and how much breakage we are willing to tolerate.

Not much, but maybe more than we used to.

> I know the Chrome team have a similar plan; is there any suggestion that
> we might coordinate on feature re-privilegings?

Yes, the intent is definitely to collaborate, as the original email
stated. Chrome isn't the only stakeholder, which is why we suggested
that we go to the W3C so that the browser formerly known as IE and
Safari are included.

> Would we put an error on the console when a privileged API was used in
> an insecure context?

Absolutely. That's likely to be a first step once the targets have
been identified. That pattern has already been established for bad
crypto and a bunch of other things that we don't like but are forced
to tolerate for compatibility reasons.

david.a...@gmail.com

unread,
Apr 13, 2015, 3:35:08 PM4/13/15
to
> I would politely ask you how many users you think are
> both interested in, able to understand, and willing to take decisions
> based on _six_ different security states in a browser?

I think this thread is about deprecating things and moving developers onto more secure platforms. To do that, you'll need to tell me *why* I need to make the effort. The only thing that I am going to care about is to get users closer to that magic green bar and padlock icon.

You may hope that security is black and white, but in practice it isn't. There is always going to be a sliding scale. Do you show me a green bar and padlock if I go to www.google.com, but the certificate is issued by my intranet? Do you show me the same certificate error I'd get if I were connecting to a clearly malicious certificate?

What if I go to www.google.com, but the certificate has been issued incorrectly because Firefox ships with 500 equally trusted root certificates?


So - yeah, you're going to need a rating system for your security: A, B, C, D, Fail. You're going to have to explain what situations get you into what group, how as a developer I can move to a higher group (e.g. add a certificate hash into DNS, get an EV certificate costing $10,000, implement DNSSEC, use PFS ciphersuites and you get an A rating). I'm sure that there'll be new security vulnerabilities and best practice in future, too.

Then it is up to me as a developer to decide how much effort I can realistically put into this...

...for my web-site containing pictures of cats...

commod...@gmail.com

unread,
Apr 13, 2015, 3:36:56 PM4/13/15
to
Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!

stu...@testtrack4.com

unread,
Apr 13, 2015, 4:29:23 PM4/13/15
to
HTTP should remain optional and fully functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug at which point requests are being corrupted before they reach the TLS layer.

byu...@gmail.com

unread,
Apr 13, 2015, 4:43:25 PM4/13/15
to
On Monday, April 13, 2015 at 3:36:56 PM UTC-4, commod...@gmail.com wrote:
> Great, peachy, more authoritarian dictation of end-user behavior by the Elite is just what the Internet needs right now. And hey, screw anybody trying to use legacy systems for anything, right? Right!

Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share. These guys have run amok with major decisions (like the HTTP/2 TLS mandate) because of a lack of competition.

These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas ...)

ipar...@gmail.com

unread,
Apr 13, 2015, 4:48:31 PM4/13/15
to
I have given this a lot of thought lately, and to me the only way forward is to do exactly what is suggested here: phase out and eventually drop plain HTTP support. There are numerous reasons for doing this:

- Plain HTTP allows someone to snoop on your users.

- Plain HTTP allows someone to misrepresent your content to the users.

- Plain HTTP is a great vector for phishing, as well as injecting malicious code that comes from your domain.

- Plain HTTP provides no guarantees of identity to the user. Arguably, the current HTTPS implementation doesn't do much to fix this, but more on this below.

- Lastly, arguing that HTTP is cheaper than HTTPS is going to be much harder once there are more providers giving away free certs (looking at StartSSL and Let's Encrypt).

My vision would be that HTTP should be marked with the same warning (except for wording of course) as an HTTPS site secured by a self-signed cert. In terms of security, they are more or less equivalent, so there is no reason to treat them differently. This should be the goal.

There are problems with transitioning to giving a huge scary warning for HTTP. They include:

- A large number of sites that don't support HTTPS. To fix this, I think the best method is to show the "http://" part of the URL in red, and publicly announce that over the next X months Firefox is moving to the model of giving a big scary warning a la self-signed cert warning if HTTPS is not enabled.

- A large number of corporate intranets that run plain HTTP. Perhaps a build-time configuration option could let system administrators suppress the warning for certain subdomains, or for the RFC 1918 addresses as well as localhost. Note that carrier-grade NAT in IPv4 might make the latter a bad choice by default.

- Ad supported sites report a drop in ad revenue when switching to HTTPS. I don't know what the problem or solution here is, but I am certain this is a big hurdle for some sites.

- Lack of free wildcard certificates. Ideally, Let's Encrypt should provide these.

- Legacy devices that cannot be upgraded to support HTTPS or only come with self-signed certificates. This is a problem that can be addressed by letting the user bypass the scary warning (just like with self-signed certs).

Finally, some people conflate the idea of a global transition from plain HTTP to HTTPS as a move by CA's to make more money. They might argue that first, we need to get rid of CA's or provide an alternative path for obtaining certificates. I disagree. Switching from plain HTTP to HTTPS is step one. Step two might include adding more avenues for establishing trust and authentication. There is no reason to try to add additional methods of authenticating the servers while still allowing them to use no encryption at all. Let's kill off plain HTTP first, then worry about how to fix the CA system. Let's Encrypt will of course make this a lot easier by providing free certs.

ipar...@gmail.com

unread,
Apr 13, 2015, 4:55:01 PM4/13/15
to
On Monday, April 13, 2015 at 4:43:25 PM UTC-4, byu...@gmail.com wrote:

> These guys can go around thinking they're secure while trusting root CAs like CNNIC whilst ignoring DNSSEC and the like; the rest of us can get back on track with a new, sane browser. While we're at it, we could start treating self-signed certs like we do SSH, rather than as being *infinitely worse* than HTTP (I'm surprised Mozilla doesn't demand a faxed form signed by a notary public to accept a self-signed cert yet. But I shouldn't give them any ideas ...)

A self-signed cert is worse than HTTP, in that you cannot know if the site you are accessing is supposed to have a self-signed cert or not. If you know that, you can check the fingerprint and bypass the warning. But let's say you go to download a fresh copy of Firefox, just to find out that https://www.mozilla.org/ is serving a self-signed cert. How can you possibly be sure that you are not being MITM'ed? Arguably, it's worse if we simply ignore the fact that the cert is self-signed and let you download the compromised version, vs giving you some type of indication that the connection is not secure (e.g.: no green bar because it's plain HTTP).

That is not to say that we should continue as is. HTTP is insecure, and should give the same warning as HTTPS with a self-signed cert.

Joshua Cranmer 🐧

unread,
Apr 13, 2015, 5:08:54 PM4/13/15
to
On 4/13/2015 3:29 PM, stu...@testtrack4.com wrote:
> HTTP should remain optional and fully-functional, for the purposes of prototyping and diagnostics. I shouldn't need to set up a TLS layer to access a development server running on my local machine, or to debug which point before hitting the TLS layer is corrupting requests.

If you actually go to read the details of the proposal rather than
relying only on the headline, you'd find that there is an intent to
actually let you continue to use http for, e.g., localhost. The exact
boundary between "secure" HTTP and "insecure" HTTP is being actively
discussed in other forums.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

bryan....@gmail.com

unread,
Apr 13, 2015, 5:11:02 PM4/13/15
to
One limiting factor is that Firefox doesn't treat form data the same on HTTPS sites.

Examples:

http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor

http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button

After losing a few forum posts or wiki edits to this bug in Firefox, you quickly insist on using unsecured HTTP as often as possible.

Boris Zbarsky

unread,
Apr 13, 2015, 5:29:23 PM4/13/15
to
On 4/13/15 5:11 PM, bryan....@gmail.com wrote:
> After losing a few forum posts or wiki edits to this bug in Firefox, you quickly insist on using unsecured HTTP as often as possible.

This is only done in cases in which the page explicitly requires that
nothing about the page be cached (no-cache), yes?

That said, we should see if we can stop doing the state-not-saving thing
for SSL+no-cache and tell banks who want it to use no-store.

-Boris
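
As an illustration of the distinction Boris draws, here is a minimal sketch (Python standard library only; the /account path is hypothetical) of a server sending no-store for sensitive pages and no-cache elsewhere:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.path.startswith("/account"):
            # no-store: the response must never be written to any cache,
            # which is what a bank that wants no state kept should ask for.
            self.send_header("Cache-Control", "no-store")
        else:
            # no-cache: the response may be stored, but must be revalidated
            # with the server before being reused.
            self.send_header("Cache-Control", "no-cache")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()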

Joseph Lorenzo Hall

unread,
Apr 13, 2015, 5:57:59 PM4/13/15
to ipar...@gmail.com, dev-pl...@lists.mozilla.org
Late to the thread, but I'll use this reply to say we're very
supportive of the proposal at CDT.



--
Joseph Lorenzo Hall
Chief Technologist
Center for Democracy & Technology
1634 I ST NW STE 1100
Washington DC 20006-4011
(p) 202-407-8825
(f) 202-637-0968
j...@cdt.org
PGP: https://josephhall.org/gpg-key
fingerprint: 3CA2 8D7B 9F6D DBD3 4B10 1607 5F86 6987 40A9 A871

Eugene

unread,
Apr 13, 2015, 6:53:01 PM4/13/15
to
I fully support this proposal. In addition to APIs, I'd like to propose prohibiting caching any resources loaded over insecure HTTP, regardless of Cache-Control header, in Phase 2.N. The reasons are:
1) A MITM can pollute users' HTTP caches by modifying some JavaScript files and giving them a long Cache-Control max-age.
2) It won't break any websites, just impose some performance penalty on them.
3) Many website operators and users avoid HTTPS because they believe HTTPS is much slower than plaintext HTTP. After HTTP caching is deprecated, that argument will hold even less.

Martin Thomson

unread,
Apr 13, 2015, 7:03:13 PM4/13/15
to Eugene, dev-platform
On Mon, Apr 13, 2015 at 3:53 PM, Eugene <imfasterth...@gmail.com> wrote:
> In addition to APIs, I'd like to propose prohibiting caching any resources loaded over insecure HTTP, regardless of Cache-Control header, in Phase 2.N.

This has some negative consequences (if only for performance). I'd
like to see changes like this properly coordinated. I'd rather just
treat "caching" as one of the features for Phase 2.N.

Karl Dubost

unread,
Apr 13, 2015, 7:13:48 PM4/13/15
to Richard Barnes, dev-pl...@lists.mozilla.org
Richard,

Le 13 avr. 2015 à 23:57, Richard Barnes <rba...@mozilla.com> a écrit :
> There's pretty broad agreement that HTTPS is the way forward for the web.

Yes, but that doesn't make deprecation of HTTP a consensus.

> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.

This is not encouragement. This is called forcing. ^_^ Just so that we are using the right terms for the right thing.


In the document
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing

You say:
Phase 3: Essentially all of the web is HTTPS.

I understand this is the last hypothetical step, but it sounds a bit like "let's move the Web to XML". That didn't work out very well.

I would love to have a more secure Web, but this cannot happen without a few careful considerations.

* A mandatory third party for certificates is a no-go. It creates a system of authority and power, an additional layer of hierarchy which deeply modifies the ability of anyone to publish and might in some circumstances increase the security risk.

* If we have to rely on certificates, their cost must be zero, for the simple reason that not everyone lives in a rich industrialized country.

* Setup and publication through HTTPS should be as easy as HTTP. The Web brought publishing power to individuals. Imagine cases where you need to create a local network, do web development on your computer, or hack together a server for your school or community. If it relies on a heavy process, it will not happen.


So instead of a plan based on technical features, I would love to see: "Let's move to a secure Web. What are the user scenarios we need to solve to achieve that?"

These user scenarios are economic, social, etc.


my 2 cents.
So yes, but not the way it is introduced and planned now.


--
Karl Dubost, Mozilla
http://www.la-grange.net/karl/moz

david.a...@gmail.com

unread,
Apr 13, 2015, 7:48:27 PM4/13/15
to
> * If we have to rely, cost of certificates must be zero. These for the simple reason than not everyone is living in a rich industrialized country.

Certificates (and paying for them) are an artificial economy. If I register a DNS address, I should get a certificate to go with it. Heck, last time I got an SSL certificate, they effectively bootstrapped the trust based on my DNS MX record...

Hence IMO TLS should be:
- DANE for everyone
- DANE & Trusted Third Party CAs for the few
- DANE & TTP & EV for sites that accept financial and medical details

The Firefox opportunistic encryption feature is a good first step towards this goal. If they could just nslookup the TLSA certificate hash, we'd be a long way down the road.
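
A rough sketch of the kind of TLSA lookup described above, using the third-party dnspython package (the host name is hypothetical, and DNSSEC validation of the answer is not shown):

import hashlib
import ssl

import dns.resolver  # third-party: dnspython

def tlsa_matches(host, port=443):
    # Fetch the certificate the server actually presents.
    der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
    # Look up any TLSA records published for this service.
    answers = dns.resolver.resolve("_%d._tcp.%s" % (port, host), "TLSA")
    for rr in answers:
        # Only handle selector 0 (full certificate) with matching type 1 (SHA-256);
        # other usage/selector/matching-type combinations are omitted here.
        if rr.selector == 0 and rr.mtype == 1:
            if hashlib.sha256(der).digest() == rr.cert:
                return True
    return False

print(tlsa_matches("example.net"))  # hypothetical host publishing a TLSA record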

northrupt...@gmail.com

unread,
Apr 13, 2015, 8:57:41 PM4/13/15
to
On Monday, April 13, 2015 at 7:57:58 AM UTC-7, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
> Broadly speaking, this plan would entail limiting new features to secure
> contexts, followed by gradually removing legacy features from insecure
> contexts. Having an overall program for HTTP deprecation makes a clear
> statement to the web community that the time for plaintext is over -- it
> tells the world that the new web uses HTTPS, so if you want to use new
> things, you need to provide security.

I'd be fully supportive of this if - and only if - at least one of the following is implemented alongside it:

* Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure
* Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

imfasterth...@gmail.com

unread,
Apr 13, 2015, 9:43:27 PM4/13/15
to
On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com wrote:
>
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

This feature (i.e. opportunistic encryption) was implemented in Firefox 37, but unfortunately an implementation bug made HTTPS insecure too. But I guess Mozilla will fix it and make this feature available in a future release.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority
>
> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

I don't think the current CA system is broken. Domain name registration is also centralized, but almost every website has a hostname rather than a bare IP address, and few people complain about this.

Karl Dubost

unread,
Apr 13, 2015, 10:10:44 PM4/13/15
to imfasterth...@gmail.com, dev-pl...@lists.mozilla.org

Le 14 avr. 2015 à 10:43, imfasterth...@gmail.com a écrit :
> I don't think the current CA system is broken.

The current CA system creates issues for certain categories of the population. It is broken in some ways.

> The domain name registration is also centralized, but almost every website has a hostname, rather than using IP address, and few people complain about this.

Two points:

1. You do not need to register a domain name to have a Web site (IP address)
2. You do not need to register a domain name to run a local blah.test.site

Both are still working and not deprecated in browsers ^_^

Now, the fact that you have to rent your domain name ($$$), and that all your URIs are tied to it, has strong social consequences in terms of permanent identifiers and the persistence of information over time. But that's a different debate from this thread's topic of deprecating HTTP in favor of HTTPS.

I would love to see this discussion happening in Whistler too.

ipar...@gmail.com

unread,
Apr 13, 2015, 11:26:59 PM4/13/15
to
> * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure

I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure.

> * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority

No. Namecoin has so many other problems that it is not feasible.

> Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.

Agree that it's broken. The fact that any CA can issue a cert for any domain is stupid, always was and always will be. It's now starting to bite us.

However, HTTPS and the CA system don't have to be tied together. Let's ditch the immediately insecure plain HTTP, then add ways to authenticate trusted certs in HTTPS by means other than our current CA system. The two problems are orthogonal, and trying to solve both at once will just leave us exactly where we are: trying to argue for a fundamentally different system.

ipar...@gmail.com

unread,
Apr 13, 2015, 11:31:19 PM4/13/15
to
On Monday, April 13, 2015 at 10:10:44 PM UTC-4, Karl Dubost wrote:

> Now the fact to have to rent your domain name ($$$) and that all the URIs are tied to this is in terms of permanent identifiers and the fabric of time on information has strong social consequences. But's that another debate than the one of this thread on deprecating HTTP in favor of HTTPS.

The registrars are, as far as I'm concerned, where the solution to the CA problem lies. You buy a domain name from someone, you are already trusting them with it. They can simply redirect your nameservers elsewhere and you can't do anything about it. Remember, you never buy a domain name, you lease it.

What does this have to do with plain HTTP to HTTPS transition? Well, why are we trusting CA's at all? Why not have the registrar issue you a wildcard cert with the purchase of a domain, and add restrictions to the protocol such that only your registrar can issue a cert for that domain?

Or even better, have the registrar sign a CA cert for you that is good for your domain only. That way you can issue unlimited certs for domains you own and *nobody but you can do that*.

However, like you said that's a separate discussion. We can solve the CA problem after we solve the plain HTTP problem.

commod...@gmail.com

unread,
Apr 14, 2015, 12:27:22 AM4/14/15
to
On Monday, April 13, 2015 at 1:43:25 PM UTC-7, byu...@gmail.com wrote:
> Let 'em do this. When Mozilla and Google drop HTTP support, then it'll be open season for someone to fork/make a new browser with HTTP support, and gain an instant 30% market share.
Or, more likely, it'll be a chance for Microsoft and Apple to laugh all the way to the bank. Because seriously, what else would you expect to happen when the makers of a web browser announce that, starting in X months, they'll be phasing out compatibility with the vast majority of existing websites?

vic

unread,
Apr 14, 2015, 1:16:25 AM4/14/15
to
On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> HTTP deprecation

I'm strongly against the proposal as it is described here. I work with small embedded devices (think sensor networks) that are accessed over HTTP. These devices have very little memory, only a few kB, so implementing SSL is simply not possible. Who are you to decree that these devices become unfit hosts?

Secondly, the proposal to restrict unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

Lastly, deprecating HTTP in the current state of the certificate authority business is completely unacceptable. These are *not* separate issues: to implement HTTPS without warnings you must be able to obtain certificates (including wildcard ones) easily and affordably, and not only if you are a citizen of a rich western country. The "let's go ahead and we'll figure this out later" attitude is irresponsible considering the huge impact that this change will have.

I would view this proposal favorably if 1) you didn't try to force people to adopt the One True Way and 2) the CA situation was fixed.

b...@hutchins.co

unread,
Apr 14, 2015, 1:18:47 AM4/14/15
to
This isn't at all what Richard was trying to say. The original discussion states that the plan will be to make all new browser features work only under HTTPS, to help developers and website owners migrate to HTTPS. This does not mean these browsers will ever remove support for HTTP; the plan is simply to deprecate it. Browsers still support many legacy and deprecated features.

b...@hutchins.co

unread,
Apr 14, 2015, 1:28:41 AM4/14/15
to
An embedded device would not be using a web browser such as Firefox, so this isn't really much of a concern. The idea would be to only enforce HTTPS deprecation from browsers, not web servers. You can continue to use HTTP on your own web services and therefore use it through your embedded devices.

As all technology protocols change over time, enforcing encryption is a natural and logical step in evolving web technology. Additionally, while everyone is able to make their own decisions, it doesn't mean people make the right choice. A website that handles sensitive data insecurely over HTTP while its users are unaware (most web consumers do not even know what the difference between HTTP and HTTPS means) is not a risk worth taking. It'd be better to enforce security and reduce the risks that exist with internet privacy. Mozilla never truly tries to operate anything with an authoritarian approach, though; this suggestion is meant to protect the consumers of the web, not the developers of the web.

Mozilla is trying to get https://letsencrypt.org/ started, which will be free, removing all price arguments from this discussion.

IMHO, this debate should be focused on improving the way HTTP is deprecated, but I do not believe there are any valid concerns that HTTP should not be deprecated.

Yoav Weiss

unread,
Apr 14, 2015, 1:53:06 AM4/14/15
to b...@hutchins.co, dev-pl...@lists.mozilla.org
IMO, limiting new features to HTTPS only, when there's no real security
reason behind it, will only end up limiting feature adoption.
It directly "punishes" developers and adds friction to using new features,
but only influences business in a very indirect manner.

If we want to move more people to HTTPS, we can do any or all of the
following:
* Show user warnings when the site they're on is insecure
* Provide an opt-in "HTTPS only" mode (don't display insecure sites) as an
integral part of the browser. Make it extremely easy to opt in.

Search engines can also:
* Downgrade ranking of insecure sites in a significant way
* Provide a "don't show me insecure results" button

If you're limiting features to HTTPS for no reason, you're implicitly
saying that developer laziness is what's stalling adoption. I don't believe
that's the case.

There's a real ecosystem problem with 3rd-party widgets and ad networks
that makes it hard for large sites to switch until all of their site's
widgets have. Developers have no say here. Business does.

What you want is for the business folks to threaten that out-dated 3rd
party widget: if it doesn't move to HTTPS, the site will switch to the
competition. For that you need to use a stick that business folks
understand: "If you're on HTTP, you'll see less and less traffic". Limiting
new features does absolutely nothing in that aspect.

Anne van Kesteren

unread,
Apr 14, 2015, 2:23:07 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org, b...@hutchins.co
On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Limiting new features does absolutely nothing in that aspect.

Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
Workers as a reason to start deploying HTTPS:

http://open.blogs.nytimes.com/2014/11/13/embracing-https/

(And anecdotally, I find it easier to convince developers to deploy
HTTPS on the basis of some feature needing it than on merit. And it
makes sense, if they need their service to do X, they'll go through
the extra trouble to do Y to get to X.)


--
https://annevankesteren.nl/

Anne van Kesteren

unread,
Apr 14, 2015, 2:25:56 AM4/14/15
to david.a...@gmail.com, dev-pl...@lists.mozilla.org

Anne van Kesteren

unread,
Apr 14, 2015, 3:29:05 AM4/14/15
to Karl Dubost, dev-pl...@lists.mozilla.org, imfasterth...@gmail.com
On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdu...@mozilla.com> wrote:
> 1. You do not need to register a domain name to have a Web site (IP address)

Name one site you visit regularly that doesn't have a domain name. And
even then, you can get certificates for public IP addresses.


> 2. You do not need to register a domain name to run a local blah.test.site

We should definitely allow whitelisting of sorts for developers. As a
start localhost will be a privileged context by default. We also have
an override in place for Service Workers.

This is not a reason not to do HTTPS. This is something we need to
improve along the way.


--
https://annevankesteren.nl/

david.a...@gmail.com

unread,
Apr 14, 2015, 3:29:26 AM4/14/15
to
Yawn - those were all terrible articles. To summarise their points: "NSA is bad, some DNS servers are out of date, DNSSEC may still be using shorter 1024-bit RSA key lengths (hmm... much like TLS then)"

The trouble is: Just because something isn't perfect, doesn't make it a bad idea. Certificates are not perfect, but they are not a bad idea. Putting certificate thumbprints in DNS is not perfect, but it's not half a *good* idea.

Think about it: if your completely clear-text, unauthenticated DNS connection is compromised, then your browser is going to go to the wrong server anyway. If it goes to the wrong server, so will your email, as will the identity verification messages from your CA.

Your browser needs to retrieve A and AAAA addresses from DNS anyway, so why not pull TLSA certificate hashes at the same time? Even without DNSSEC, this could only improve things.

Case in point, *absolutely* due to the frankly incomprehensible refusal to do this: http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

There is nothing you can do to fix this with traditional X509, or any single chain of trust. You need multiple, independent proofs of identity. A combination of X509 and a number of different signed DNS providers seems like a good way to approach this.

Finally - because you can audit DNSSEC/TLSA responses programmatically, and the response records are cached publicly in globally dispersed DNS servers, it's really hard to do the equivalent of "send a different chain when IP address 1.2.3.4 connects".

I have my own opinions why TLSA certificate pinning records are not being checked and, having written an implementation myself, I can guarantee you that it isn't due to any technical complexity.

Anne van Kesteren

unread,
Apr 14, 2015, 3:39:29 AM4/14/15
to david.a...@gmail.com, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:29 AM, <david.a...@gmail.com> wrote:
> The trouble is: Just because something isn't perfect, doesn't make it a bad idea.

I think it's a pretty great idea and it's one people immediately think
of. However, as those articles explain in detail, it's also a far from
realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
and is the focus of this thread. Whether we can achieve similar
guarantees through DNS at some point is orthogonal and is best
discussed elsewhere:

https://tools.ietf.org/wg/dane/


--
https://annevankesteren.nl/

david.a...@gmail.com

unread,
Apr 14, 2015, 3:47:16 AM4/14/15
to
> realistic idea. Meanwhile, HTTPS exists, is widely deployed, works,
> and is the focus of this thread.

http://www.zdnet.com/article/google-banishes-chinas-main-digital-certificate-authority-cnnic/

Sure it works :)

imm...@gmail.com

unread,
Apr 14, 2015, 3:48:48 AM4/14/15
to
> Secondly the proposal to restrain unrelated new features like CSS attributes to HTTPS sites only is simply a form of strong-arming. Favoring HTTPS is fine but authoritarianism is not. Please consider that everyone is capable of making their own decisions.

One might note that this has already been tried, *and succeeded*, with SPDY and then HTTP 2.

HTTP 2 is faster than HTTP 1, but both Mozilla and Google are refusing to allow unencrypted HTTP 2 connections. Sites like http://httpvshttps.com/ intentionally mislead users into thinking that TLS improves connection speed, when actually the increased speed is from HTTP 2.

lorenzo...@gmail.com

unread,
Apr 14, 2015, 3:51:59 AM4/14/15
to
> The goal of this thread is to determine whether there is support in the
> Mozilla community for a plan of this general form. Developing a precise
> plan will require coordination with the broader web community (other
> browsers, web sites, etc.), and will probably happen in the W3C.
>

From the user/sysadmin point of view it would be very helpful to have information on how the following issues will be handled:

1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy that doesn't use MITM certificates. If all users must move to HTTPS there will be no way to re-use content downloaded for one user to accelerate another user. This is an important issue for locations with many users and poor internet connectivity.

2) Self signed certificates: in many situations it is hard/impossible to get certificates signed by a CA (e.g. provisioning embedded devices). The current approach in many of these situations is not to use HTTPS. If the plan goes into effect what other solution could be used?

Regarding problem 1: I guess that allowing HTTP for resources loaded with subresource integrity could be some sort of alternative, but it would require collaboration from the server owner. Since it is more work than simply letting the web server send out caching headers automatically, I wonder how many sites will implement it.

Regarding problem 2: in my opinion it can be mitigated by offering the user a new standard way to validate self-signed certificates: the user is prompted to enter the fingerprint of the certificate, which she must have received out-of-band; if the user enters the correct fingerprint, the certificate is marked as trusted (see [1]). This clearly opens up some attacks that should be carefully assessed.

Best,
Lorenzo


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1012879
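
A minimal sketch of the fingerprint check proposed above (Python standard library; the device address is hypothetical). The printed value is what the user would compare against a fingerprint received out-of-band:

import hashlib
import ssl

def fingerprint(host, port=443):
    # SHA-256 over the DER-encoded certificate the server presents.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

print(fingerprint("192.0.2.10"))  # hypothetical embedded device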

imm...@gmail.com

unread,
Apr 14, 2015, 3:55:45 AM4/14/15
to
Another note:

Nobody, to within experimental error, uses IP addresses to access public websites.

But plenty of people use them for test servers, temporary servers, and embedded devices. (My home router is http://192.168.1.254/; does it need to get a certificate for 192.168.1.254? Or do home routers need to come with installation CDs that install the router's root certificate? How is that not a worse situation, where every web user has to trust the router manufacturer?)

And even though nobody uses IP addresses, and many public websites don't work with IP addresses (because vhosts), nobody in their right mind would ever suggest removing the possibility of accessing web servers without domain names.
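
A small sketch of the kind of check a browser or policy could apply to tell local devices apart from public sites (Python standard library; note that is_private covers RFC 1918 plus other reserved ranges):

import ipaddress

for host in ["192.168.1.254", "10.0.0.1", "8.8.8.8"]:
    addr = ipaddress.ip_address(host)
    print(host, "private/reserved" if addr.is_private else "public")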

Yoav Weiss

unread,
Apr 14, 2015, 3:56:01 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org, b...@hutchins.co
On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren <ann...@annevk.nl> wrote:

> On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > Limiting new features does absolutely nothing in that aspect.
>
> Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> Workers as a reason to start deploying HTTPS:
>
> http://open.blogs.nytimes.com/2014/11/13/embracing-https/


I stand corrected. So it's the 8th reason out of 9, right before technical
debt.

I'm not saying using new features is not an incentive, and I'm definitely
not saying HTTP2 and SW should have been enabled on HTTP.
But, when done without any real security or deployment issues that mandate
it, you're subjecting new features to significant adoption friction that is
unrelated to the feature itself, in order to apply some indirect pressure
on businesses to do the right thing.
You're inflicting developer pain without any real justification. A sort of
collective punishment, if you will.

If you want to apply pressure, apply it where it makes the most impact with
the least cost. Limiting new features to HTTPS is not the place, IMO.


>
> (And anecdotally, I find it easier to convince developers to deploy
> HTTPS on the basis of some feature needing it than on merit. And it
> makes sense, if they need their service to do X, they'll go through
> the extra trouble to do Y to get to X.)
>
>
Don't convince the developers. Convince the business. Drive users away from
insecure services and toward secure ones by displaying warnings, etc.
Anecdotally on my end, I saw small Web sites that care very little about
security move to HTTPS overnight after Google added HTTPS as a (weak)
ranking signal
<http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html>.
(reason #4 in that NYT article)

Anne van Kesteren

unread,
Apr 14, 2015, 4:05:09 AM4/14/15
to lorenzo...@gmail.com, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:51 AM, <lorenzo...@gmail.com> wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by a proxy that doesn't use MITM certificates. If all users must move to HTTPS there will be no way to re-use content downloaded for one user to accelerate another user. This is an important issue for locations with many users and poor internet connectivity.

Where is the evidence that this is a problem in practice? What do
these environments do for YouTube?


> 2) Self signed certificates: in many situations it is hard/impossible to get certificates signed by a CA (e.g. provisioning embedded devices). The current approach in many of these situations is not to use HTTPS. If the plan goes into effect what other solution could be used?

Either something like
https://bugzilla.mozilla.org/show_bug.cgi?id=1012879 as you mentioned
or overrides for local devices. This definitely needs more research
but shouldn't preclude rolling out HTTPS on public resources.


--
https://annevankesteren.nl/

Anne van Kesteren

unread,
Apr 14, 2015, 4:07:26 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.

Why is it that you think there is no justification in deprecating HTTP?


>> (And anecdotally, I find it easier to convince developers to deploy
>> HTTPS on the basis of some feature needing it than on merit. And it
>> makes sense, if they need their service to do X, they'll go through
>> the extra trouble to do Y to get to X.)
>
> Don't convince the developers. Convince the business.

Why not both? There's no reason to only attack this top-down.


--
https://annevankesteren.nl/

Yoav Weiss

unread,
Apr 14, 2015, 4:39:27 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:07 AM, Anne van Kesteren <ann...@annevk.nl>
wrote:

> On Tue, Apr 14, 2015 at 9:55 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > You're inflicting developer pain without any real justification. A sort
> of
> > collective punishment, if you will.
>
> Why is it that you think there is no justification in deprecating HTTP?
>

Deprecating HTTP is totally justified. Enabling some features on HTTP but
not others is not, unless there's a real technical reason why these new
features shouldn't be enabled.

Anne van Kesteren

unread,
Apr 14, 2015, 4:44:25 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> Deprecating HTTP is totally justified. Enabling some features on HTTP but
> not others is not, unless there's a real technical reason why these new
> features shouldn't be enabled.

I don't follow. If HTTP is no longer a first-class citizen, why do we
need to treat it as such?


--
https://annevankesteren.nl/

Alex C

unread,
Apr 14, 2015, 4:54:41 AM4/14/15
to
On Tuesday, April 14, 2015 at 8:44:25 PM UTC+12, Anne van Kesteren wrote:
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?

When it would take more effort to disable a feature on HTTP than to let it work, and yet the feature is disabled anyway, that's more than just HTTP being "not a first class citizen".

Yoav Weiss

unread,
Apr 14, 2015, 5:33:14 AM4/14/15
to Anne van Kesteren, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 10:43 AM, Anne van Kesteren <ann...@annevk.nl>
wrote:

> On Tue, Apr 14, 2015 at 10:39 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > Deprecating HTTP is totally justified. Enabling some features on HTTP but
> > not others is not, unless there's a real technical reason why these new
> > features shouldn't be enabled.
>
> I don't follow. If HTTP is no longer a first-class citizen, why do we
> need to treat it as such?
>

I'm afraid the second class citizens in that scenario would be the new
features, rather than HTTP.

intell...@gmail.com

unread,
Apr 14, 2015, 5:42:09 AM4/14/15
to
Op maandag 13 april 2015 16:57:58 UTC+2 schreef Richard Barnes:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

Each organisation has its own reasons for pushing the move to HTTPS.
It doesn't mean that each of those reasons is ethical.


> In order to encourage web developers to move from HTTP to HTTPS

Why?
Large multinationals do not allow HTTPS traffic within the border gateways of their own infrastructure; why make it harder for them?

Why give people the impression that because they are using HTTPS they are much safer, when the implications are actually much larger (no dependability anymore, being forced to trust root CAs, etc.)?

Why force extra costs on hosting companies and webmasters?


Do not forget that the most widely used webmaster/webhosting control panels do not support SNI, and that each HTTPS site then has to have its own unique IP address.
Here in Europe we are still using IPv4, and RIPE can't issue new IPv4 addresses because they are all gone. As long as that isn't resolved, it can't be done.


IMHO HTTPS would be safer if no large companies or governments were involved in issuing the certificates, and if the certificates were free or otherwise compensated somehow.

The countries where people profit less from HTTPS because human rights are better respected have the means to pay for SSL certificates, but the people you want to protect don't, and even if they did, they always have a government (or governments) to deal with.

As long as you think that root CAs are 100% trustworthy and governments can't manipulate them or do a replay attack afterwards, HTTPS is the way to go... but until that issue (and SNI/IPv4) is handled, don't, because it will cause more harm in the long run.

Do not get me wrong, the intention is good. But trying to protect humanity from humanity also means keeping in mind the issues surrounding it.

Mike de Boer

unread,
Apr 14, 2015, 6:11:12 AM4/14/15
to intell...@gmail.com, Mozilla dev-platform mailing list mailing list

> On 14 Apr 2015, at 11:42, intell...@gmail.com wrote:
>

Something entirely off-topic: I’d like to inform people that replying to popular threads like this unsigned, with only a hint of identity in an obscure email address, makes me - and I’m sure others too - skip your message or, worse, not take it seriously. In my mind I fantasize your message signed off with something like:

"Cheers, mYLitTL3P0nIEZLuLZrAinBowZ.

- Sent from a Galaxy Tab Nexuzzz Swift Super, Gold & Ruby Edition by an 8yr old stuck in Kindergarten.”

… which doesn’t feel like the identity anyone would prefer to assume.

Best, Mike.

Henri Sivonen

unread,
Apr 14, 2015, 6:30:21 AM4/14/15
to dev-platform
On Mon, Apr 13, 2015 at 5:57 PM, Richard Barnes <rba...@mozilla.com> wrote:
> There's pretty broad agreement that HTTPS is the way forward for the web.
> In recent months, there have been statements from IETF [1], IAB [2], W3C
> [3], and even the US Government [4] calling for universal use of
> encryption, which in the case of the web means HTTPS.

I agree that we should get the Web onto https and I'm very happy to
see this proposal.

> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> Some earlier threads on this list [5] and elsewhere [6] have discussed
> deprecating insecure HTTP for "powerful features". We think it would be a
> simpler and clearer statement to avoid the discussion of which features are
> "powerful" and focus on moving all features to HTTPS, powerful or not.

I understand that especially in debates about crypto, there's a strong
desire to avoid ratholing and bikeshedding. However, I think avoiding
the discussion on which features are "powerful" is the wrong way to
get from the current situation to where we want to be.

Specifically:

1) I expect that withholding from http origins e.g. new CSS effects
that are no more privacy-sensitive than existing CSS effects, merely as
a way to force sites onto https, will create resentment among Web devs.
That resentment would be better avoided in order to have Web devs
support the cause of encrypting the Web, and it could be avoided by
withholding features from http only on grounds that tie clearly to the
downsides of http relative to https.

2) I expect withholding certain *existing* privacy-sensitive features
from http to have greater leverage to push sites to https than
withholding privacy-neutral *new* features.

Specifically, on point #2, I think we should start by, by default,
forgetting at the end of the Firefox session all cookies that don't
have the "secure" flag set. Persistent cookies have two main use
cases:
* On login-requiring sites, not requiring the user to
re-enter credentials in every browser session.
* Behavioral profiling.

The first has a clear user-facing benefit. The second is something
that users typically don't want, and breaking it has no obvious
user-visible Web-compat impact on the browser.

Fortunately, the most-used login-requiring sites use https already, so
forgetting insecure cookies at the end of the session would have no
adverse effect on the most-user-visible use of persistent cookies.
Also, if a login-requiring site is not already using https, it's
pretty non-controversial that they are Doing It Wrong and should
migrate to https.

One big reason why mostly content-oriented sites, such as news sites,
haven't migrated to https is that they are ad-funded and the
advertising networks are lagging behind in https deployment. Removing
persistence from insecure cookies would give a reason for the ad
networks to accelerate https deployment and do so in a way that
doesn't break the Web in user-visible ways during the transition. That
is, if ad networks want to track users, at least they shouldn't enable
collateral tracking by network eavesdroppers while doing so.

So I think withholding cookie persistence from insecure cookies could
well be way more effective per unit of disruption of user-perceived
Web compat than anything in your proposal.
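
To make the "secure" flag distinction concrete, a small sketch (Python standard library; cookie names and values are hypothetical) of the attribute that would decide whether a cookie keeps its persistence under this proposal:

from http.cookies import SimpleCookie

jar = SimpleCookie()

jar["session"] = "abc123"
jar["session"]["secure"] = True                 # only ever sent over HTTPS
jar["session"]["max-age"] = 60 * 60 * 24 * 30   # persists across sessions

jar["tracker"] = "xyz789"                       # no Secure flag: under the proposal,
jar["tracker"]["max-age"] = 60 * 60 * 24 * 365  # persistence would end with the session

print(jar.output())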

In addition to persistent cookies, I think we should seek to be more
aggressive in making other features that allow sites to store
persistent state on the client https-only than in making new features
in general https-only. (I realize that applying this consistently to
the HTTP cache could be infeasible on performance grounds in the near
future at least.)

Furthermore, I think this program should have a UI aspect to it:
Currently, the UI designation for http is neutral while the UI
designation for mixed content is undesirable. I think we should make
the UI designation of plain http undesirable once x% of the sites that
users encounter on a daily basis are https. Since users don't interact
with the whole Web equally, this means that the UI for http would be
made undesirable much earlier than the time when x% of Web sites
migrate to https. x should be chosen to be high enough to avoid
warning fatigue that'd desensitize users to the undesirable UI
designation.

--
Henri Sivonen
hsiv...@hsivonen.fi
https://hsivonen.fi/

Boris Zbarsky

unread,
Apr 14, 2015, 7:47:18 AM4/14/15
to
On 4/14/15 3:28 AM, Anne van Kesteren wrote:
> On Tue, Apr 14, 2015 at 4:10 AM, Karl Dubost <kdu...@mozilla.com> wrote:
>> 1. You do not need to register a domain name to have a Web site (IP address)
>
> Name one site you visit regularly that doesn't have a domain name.

My router's configuration UI. Here "regularly" is probably once a month
or so.

> And even then, you can get certificates for public IP addresses.

It's not a public IP address.

We do need a solution for this space, which I expect includes the
various embedded devices people are bringing up; I expect those are
behind firewalls more often than on the publicly routable internet.

-Boris

david.a...@gmail.com

unread,
Apr 14, 2015, 7:53:37 AM4/14/15
to
> Something entirely off-topic: I'd like to inform people that your replies to popular threads like this unsigned, with only a notion of identity in an obscure email address, makes me - and I'm sure others too - skip your message or worse; not take it seriously.


Not everyone has the luxury of being public on the Internet. Especially in discussions about default Internet encryption. The real decision makers won't be posting at all.

Eric Shepherd

unread,
Apr 14, 2015, 8:32:54 AM4/14/15
to Joshua Cranmer 🐧, dev-pl...@lists.mozilla.org
Joshua Cranmer 🐧 wrote:
> If you actually go to read the details of the proposal rather than
> relying only on the headline, you'd find that there is an intent to
> actually let you continue to use http for, e.g., localhost. The exact
> boundary between "secure" HTTP and "insecure" HTTP is being actively
> discussed in other forums.
My main concern with the notion of phasing out unsecured HTTP is that
doing so will cripple or eliminate Internet access by older devices that
aren't generally capable of handling encryption and decryption on such a
massive scale in real time.

While it may sound silly, those of us who are into classic computers
and making them do fun new things use HTTP to connect 10 MHz (or even 1
MHz) machines to the Internet. These machines can't handle the demands
of SSL. So this is a step toward making their Internet connections go away.

This may not be enough of a reason to save HTTP, but it's something I
wanted to point out.

--

Eric Shepherd
Senior Technical Writer
Mozilla <https://www.mozilla.org/>
Blog: http://www.bitstampede.com/
Twitter: http://twitter.com/sheppy

Gervase Markham

unread,
Apr 14, 2015, 8:36:11 AM4/14/15
to
Yep. That's the system working. CA does something they shouldn't, we
find out, CA is no longer trusted (perhaps for a time).

Or do you have an alternative system design where no-one ever makes a
mistake and all the actors are trustworthy?

Gerv

ena...@gmail.com

unread,
Apr 14, 2015, 8:37:35 AM4/14/15
to
On Tuesday, April 14, 2015 at 3:05:09 AM UTC-5, Anne van Kesteren wrote:

> This definitely needs more research
> but shouldn't preclude rolling out HTTPS on public resources.

The proposal as presented is not limited to public resources. The W3C Privileged Context draft which it references exempts only localhost and file:/// resources, not resources on private networks. There are hundreds of millions of home routers and similar devices with web UIs on private networks, and no clear path under this proposal to keep them fully accessible (without arbitrary feature limitations) except to set up your own local CA, which is excessively burdensome.

Eli Naeher

Gervase Markham

unread,
Apr 14, 2015, 8:39:24 AM4/14/15
to
On 14/04/15 01:57, northrupt...@gmail.com wrote:
> * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> treated as less secure than HTTP is - to put this as politely and
> gently as possible - a pile of bovine manure

http://gerv.net/security/self-signed-certs/ , section 3.

But also, Firefox is implementing opportunistic encryption, which AIUI
gives you a lot of what you want here.

Gerv

Gervase Markham

unread,
Apr 14, 2015, 8:45:15 AM4/14/15
to lorenzo...@gmail.com
On 14/04/15 08:51, lorenzo...@gmail.com wrote:
> 1) Caching proxies: resources obtained over HTTPS cannot be cached by
> a proxy that doesn't use MITM certificates. If all users must move to
> HTTPS there will be no way to re-use content downloaded for one user
> to accelerate another user. This is an important issue for locations
> with many users and poor internet connectivity.

Richard talked, IIRC, about not allowing subloads over HTTP with
subresource integrity. This is one argument to the contrary. Sites could
use HTTP-with-integrity to provide an experience which allowed for
better caching, with the downside being some loss of coarse privacy for
the user. (Cached resources, by their nature, are not going to be
user-specific, so there won't be a leak of PII. But it might leak what you
are reading or what site you are on.)
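
For what it's worth, the integrity value such a scheme relies on is just a
base64-encoded cryptographic digest of the resource bytes, declared by the
(HTTPS) embedding page, so an intermediary that modifies the cached copy
would fail the check. A minimal sketch in Python; the filename is only an
illustration:

    import base64, hashlib

    def sri_sha384(resource_bytes):
        """Return an SRI-style digest ("sha384-<base64>") for the given bytes."""
        digest = hashlib.sha384(resource_bytes).digest()
        return "sha384-" + base64.b64encode(digest).decode("ascii")

    with open("framework.js", "rb") as f:   # hypothetical local copy of the subresource
        print(sri_sha384(f.read()))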

Gerv

david.a...@gmail.com

unread,
Apr 14, 2015, 8:47:50 AM4/14/15
to
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
>
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?
>
> Gerv

Yes - as I said previously. Do the existing certificate checks to a trusted CA root, then do a TLSA DNS lookup for the certificate pin and check that *as well*. If you did this (and Google published their SHA-512 hashes in DNS) you could have had lots of copies of Firefox ringing back "potential compromise" messages. Who knows how long those certificates were out there (or what other ones are currently out there that you could find just by implementing TLSA).

The more routes to trust, the better. A trusted root CA is "all eggs in one basket". DANE is "all eggs in one basket", DNSSEC is "all eggs in one basket".

Put them all together and you have a pretty reliable basket :)

This is what I mean by working to a security rating of A, B, C, D, Fail - not just a "yes/no" answer.
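
For illustration, a minimal sketch of the extra check being described, assuming the dnspython package is available. It only handles TLSA records that pin the full certificate with a SHA-256 hash (selector 0, matching type 1); real DANE also allows SPKI-only pins and SHA-512 hashes.

    import hashlib, ssl
    import dns.resolver   # dnspython, assumed installed

    def tlsa_also_matches(host, port=443):
        """After normal CA validation, compare the served cert against TLSA."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        cert_sha256 = hashlib.sha256(der).digest()
        answers = dns.resolver.resolve("_%d._tcp.%s" % (port, host), "TLSA")
        return any(r.selector == 0 and r.mtype == 1 and r.cert == cert_sha256
                   for r in answers)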

hugoosval...@gmail.com

unread,
Apr 14, 2015, 9:57:21 AM4/14/15
to
I'm curious as to what would happen with things that cannot have TLS certificates: routers and similar web-configurable-only devices (like small PBX-like devices, etc).

They don't have a proper domain, and may grab an IP via radvd (or dhcp on IPv4), so there's no certificate to be had.

They'd have to use self-signed, which seems to be treated pretty badly (warning message, etc).

Would we be getting rid of the self-signed warning when visiting a website?

Aryeh Gregor

unread,
Apr 14, 2015, 10:01:41 AM4/14/15
to Gervase Markham, dev-pl...@lists.mozilla.org
On Tue, Apr 14, 2015 at 3:36 PM, Gervase Markham <ge...@mozilla.org> wrote:
> Yep. That's the system working. CA does something they shouldn't, we
> find out, CA is no longer trusted (perhaps for a time).
>
> Or do you have an alternative system design where no-one ever makes a
> mistake and all the actors are trustworthy?

No, but it would make sense to require that sites be validated through
a single specific CA, rather than allowing any CA to issue a
certificate for any site. That would drastically reduce the scope of
attacks: an attacker would have to compromise a single specific CA,
instead of any one of hundreds. IIRC, HSTS already allows this on an
opt-in basis. If validation was done via DNSSEC instead of the
existing CA system, this would follow automatically, without sites
having to commit to a single CA. It also avoids the bootstrapping
problem with HSTS, unless someone has solved that in some other way
and I didn't notice.

Richard Barnes

unread,
Apr 14, 2015, 10:11:56 AM4/14/15
to bryan....@gmail.com, dev-pl...@lists.mozilla.org
On Mon, Apr 13, 2015 at 5:11 PM, <bryan....@gmail.com> wrote:

> One limiting factor is that Firefox doesn't treat form data the same on
> HTTPS sites.
>
> Examples:
>
>
> http://stackoverflow.com/questions/14420624/how-to-keep-changed-form-content-when-leaving-and-going-back-to-https-page-wor
>
>
> http://stackoverflow.com/questions/10511581/why-are-html-forms-sometimes-cleared-when-clicking-on-the-browser-back-button
>
> After losing a few forum posts or wiki edits to this bug in Firefox, you
> quickly insist on using unsecured HTTP as often as possible.
>

Interesting observation. ISTM that that's a bug in HTTPS. At least I
don't see an obvious security reason for the behavior to be that way.

More generally: I expect that this process will turn up bugs in HTTPS
behavior, either "actual" bugs in terms of implementation errors, or
"logical" bugs where the intended behavior does not meet the expectations
or needs of websites. So we should be open to adapting our HTTPS behavior
some (within the bounds of the security requirements) in order to
facilitate this transition.

--Richard



> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>

Richard Barnes

unread,
Apr 14, 2015, 10:15:36 AM4/14/15
to Martin Thomson, dev-platform, Eugene
On Mon, Apr 13, 2015 at 7:03 PM, Martin Thomson <m...@mozilla.com> wrote:

> On Mon, Apr 13, 2015 at 3:53 PM, Eugene <imfasterth...@gmail.com>
> wrote:
> > In addition to APIs, I'd like to propose prohibiting caching any
> resources loaded over insecure HTTP, regardless of Cache-Control header, in
> Phase 2.N.
>
> This has some negative consequences (if only for performance). I'd
> like to see changes like this properly coordinated. I'd rather just
> treat "caching" as one of the features for Phase 2.N.
>

That seems sensible.

I was about to propose a lifetime limit on caching (say a few hours?) to
limit the persistence scope of MitM, i.e., require periodic re-infection.
There may be ways to circumvent this (e.g., the MitM's code sending cache
priming requests), but it seems incrementally better.
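
A minimal sketch of the clamp being suggested (illustrative numbers, not a
real Firefox policy): freshness for plain-http resources would be capped at
a few hours no matter what Cache-Control asks for.

    INSECURE_MAX_AGE = 4 * 3600   # illustrative cap: a few hours

    def effective_max_age(url, declared_max_age):
        """Clamp the cache lifetime for resources fetched over plain http."""
        if url.startswith("https://"):
            return declared_max_age
        return min(declared_max_age, INSECURE_MAX_AGE)

    print(effective_max_age("http://example.com/app.js", 30 * 24 * 3600))   # 14400
    print(effective_max_age("https://example.com/app.js", 30 * 24 * 3600))  # 2592000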

Eric Rescorla

unread,
Apr 14, 2015, 10:16:37 AM4/14/15
to Aryeh Gregor, dev-pl...@lists.mozilla.org, Gervase Markham
On Tue, Apr 14, 2015 at 7:01 AM, Aryeh Gregor <a...@aryeh.name> wrote:

> On Tue, Apr 14, 2015 at 3:36 PM, Gervase Markham <ge...@mozilla.org> wrote:
> > Yep. That's the system working. CA does something they shouldn't, we
> > find out, CA is no longer trusted (perhaps for a time).
> >
> > Or do you have an alternative system design where no-one ever makes a
> > mistake and all the actors are trustworthy?
>
> No, but it would make sense to require that sites be validated through
> a single specific CA, rather than allowing any CA to issue a
> certificate for any site. That would drastically reduce the scope of
> attacks: an attacker would have to compromise a single specific CA,
> instead of any one of hundreds. IIRC, HSTS already allows this on an
> opt-in basis.


This is called "pinning".

https://developer.mozilla.org/en-US/docs/Web/Security/Public_Key_Pinning
https://tools.ietf.org/html/draft-ietf-websec-key-pinning-21


> If validation was done via DNSSEC instead of the
> existing CA system, this would follow automatically, without sites
> having to commit to a single CA.


Note that pinning does not require sites to commit to a single CA. You can
pin
multiple CAs.
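
For reference, a pin in this scheme is just the base64 of the SHA-256 hash
of a certificate's DER-encoded SubjectPublicKeyInfo. A minimal sketch that
computes one, assuming the "cryptography" package is available:

    import base64, hashlib, ssl
    from cryptography import x509                      # assumed installed
    from cryptography.hazmat.primitives import serialization

    def spki_pin(host, port=443):
        """Return the pin-sha256 value for the certificate a host serves."""
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode("ascii"))
        spki = cert.public_key().public_bytes(
            serialization.Encoding.DER,
            serialization.PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode("ascii")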

Using DNS and DNSSEC for this purpose is described in
http://tools.ietf.org/html/rfc6698.
However, to my knowledge no mainstream browser presently accepts DANE/TLSA
authentication for reasons already described upthread.

-Ekr

Richard Barnes

unread,
Apr 14, 2015, 10:17:16 AM4/14/15
to Eugene, dev-platform
On Mon, Apr 13, 2015 at 9:43 PM, <imfasterth...@gmail.com> wrote:

> On Monday, April 13, 2015 at 8:57:41 PM UTC-4, northrupt...@gmail.com
> wrote:
> >
> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> This feature (i.e. opportunistic encryption) was implemented in Firefox
> 37, but unfortunately an implementation bug made HTTPS insecure too. But I
> guess Mozilla will fix it and make this feature available in a future
> release.
>
> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
> >
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken. Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> I don't think the current CA system is broken. The domain name
> registration is also centralized, but almost every website has a hostname,
> rather than using IP address, and few people complain about this.
>

I would also note that Mozilla is contributing heavily to Let's Encrypt,
which is about as close to a decentralized CA as we can get with current
technology.

If people have ideas for decentralized CAs, I would be interested in
listening, and possibly adding support in the long run. But unfortunately,
the state of the art isn't quite there yet.

Richard Barnes

unread,
Apr 14, 2015, 10:18:24 AM4/14/15
to ipar...@gmail.com, dev-platform
On Mon, Apr 13, 2015 at 11:26 PM, <ipar...@gmail.com> wrote:

> > * Less scary warnings about self-signed certificates (i.e. treat
> HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with
> HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less
> secure than HTTP is - to put this as politely and gently as possible - a
> pile of bovine manure
>
> I am against this. Both are insecure and should be treated as such. How is
> your browser supposed to know that gmail.com is intended to serve a
> self-signed cert? It's not, and it cannot possibly know it in the general
> case. Thus it must be treated as insecure.
>

This is a good point. This is exactly why the opportunistic security
feature in Firefox 37 enables encryption without certificate checks for
*http* resources.

--Richard



> > * Support for a decentralized (blockchain-based, ala Namecoin?)
> certificate authority
>
> No. Namecoin has so many other problems that it is not feasible.
>
> > Basically, the current CA system is - again, to put this as gently and
> politely as possible - fucking broken. Anything that forces the world to
> rely on it exclusively is not a solution, but is instead just going to make
> the problem worse.
>
> Agree that it's broken. The fact that any CA can issue a cert for any
> domain is stupid, always was and always will be. It's now starting to bite
> us.
>
> However, HTTPS and the CA system don't have to be tied together. Let's
> ditch the immediately insecure plain HTTP, then add ways to authenticate
> trusted certs in HTTPS by means other than our current CA system. The two
> problems are orthogonal, and trying to solve both at once will just leave
> us exactly where we are: trying to argue for a fundamentally different
> system.

Richard Barnes

unread,
Apr 14, 2015, 10:40:32 AM4/14/15
to Karl Dubost, dev-platform, Eugene
On Mon, Apr 13, 2015 at 10:10 PM, Karl Dubost <kdu...@mozilla.com> wrote:

>
> Le 14 avr. 2015 à 10:43, imfasterth...@gmail.com a écrit :
> > I don't think the current CA system is broken.
>
> The current CA system creates issues for certain categories of population.
> It is broken in some ways.
>
> > The domain name registration is also centralized, but almost every
> website has a hostname, rather than using IP address, and few people
> complain about this.
>
> Two points:
>
> 1. You do not need to register a domain name to have a Web site (IP
> address)
> 2. You do not need to register a domain name to run a local blah.test.site
>
> Both are still working and not deprecated in browsers ^_^
>
> Now the fact that you have to rent your domain name ($$$) and that all the
> URIs are tied to it has strong social consequences in terms of permanent
> identifiers and the fabric of time on information. But that's another
> debate than the one of this thread on deprecating HTTP in favor of HTTPS.
>

This is a fair point, and we should probably figure out a way to
accommodate these. My inclination is to mostly punt this to manual
configuration (e.g., installing a new trusted cert/override), since we're
not talking about generally available public service on the Internet. But
if there are more elegant solutions that don't reduce security, I would be
interested to hear them.



> I would love to see this discussion happening in Whistler too.
>

Agreed. That sounds like an excellent opportunity to hammer out details
here, assuming we can agree on overall direction in the meantime.

--Richard



>
> --
> Karl Dubost, Mozilla
> http://www.la-grange.net/karl/moz

Richard Barnes

unread,
Apr 14, 2015, 11:01:12 AM4/14/15
to Yoav Weiss, dev-pl...@lists.mozilla.org, b...@hutchins.co
On Tue, Apr 14, 2015 at 3:55 AM, Yoav Weiss <yo...@yoav.ws> wrote:

> On Tue, Apr 14, 2015 at 8:22 AM, Anne van Kesteren <ann...@annevk.nl>
> wrote:
>
> > On Tue, Apr 14, 2015 at 7:52 AM, Yoav Weiss <yo...@yoav.ws> wrote:
> > > Limiting new features does absolutely nothing in that aspect.
> >
> > Hyperbole much? CTO of the New York Times cited HTTP/2 and Service
> > Workers as a reason to start deploying HTTPS:
> >
> > http://open.blogs.nytimes.com/2014/11/13/embracing-https/
>
>
> I stand corrected. So it's the 8th reason out of 9, right before technical
> debt.
>
> I'm not saying using new features is not an incentive, and I'm definitely
> not saying HTTP2 and SW should have been enabled on HTTP.
> But, when done without any real security or deployment issues that mandate
> it, you're subjecting new features to significant adoption friction that is
> unrelated to the feature itself, in order to apply some indirect pressure
> on businesses to do the right thing.
>

Please note that there is no inherent security reason to limit HTTP/2 to be
used only over TLS (as there is for SW), at least not any more than the
security reasons for carrying HTTP/1.1 over TLS. They're semantically
equivalent; HTTP/2 is just faster. So if you're OK with limiting HTTP/2 to
TLS, you've sort of already bought into the strategy we're proposing here.



> You're inflicting developer pain without any real justification. A sort of
> collective punishment, if you will.
>
> If you want to apply pressure, apply it where it makes the most impact with
> the least cost. Limiting new features to HTTPS is not the place, IMO.
>

I would note that these options are not mutually exclusive :) We can apply
pressure with feature availability at the same time that we work on the
ecosystem problems. In fact, I had a call with some advertising folks last
week about how to get the ad industry upgraded to HTTPS.

--Richard



>
>
> >
> > (And anecdotally, I find it easier to convince developers to deploy
> > HTTPS on the basis of some feature needing it than on merit. And it
> > makes sense, if they need their service to do X, they'll go through
> > the extra trouble to do Y to get to X.)
> >
> >
> Don't convince the developers. Convince the business. Drive users away to
> secure services by displaying warnings, etc.
> Anecdotally on my end, I saw small Web sites that care very little about
> security, move to HTTPS over night after Google added HTTPS as a (weak)
> ranking signal
> <
> http://googlewebmastercentral.blogspot.fr/2014/08/https-as-ranking-signal.html
> >.
> (reason #4 in that NYT article)

Richard Barnes

unread,
Apr 14, 2015, 11:09:10 AM4/14/15
to Eric Shepherd, Joshua Cranmer 🐧, dev-platform
On Tue, Apr 14, 2015 at 8:32 AM, Eric Shepherd <eshe...@mozilla.com>
wrote:

> Joshua Cranmer [image: 🐧] wrote:
>
>> If you actually go to read the details of the proposal rather than
>> relying only on the headline, you'd find that there is an intent to
>> actually let you continue to use http for, e.g., localhost. The exact
>> boundary between "secure" HTTP and "insecure" HTTP is being actively
>> discussed in other forums.
>>
> My main concern with the notion of phasing out unsecured HTTP is that
> doing so will cripple or eliminate Internet access by older devices that
> aren't generally capable of handling encryption and decryption on such a
> massive scale in real time.
>
> While it may sound silly, those of us who are into classic computers and
> making them do fun new things use HTTP to connect 10 MHz (or even 1 MHz)
> machines to the Internet. These machines can't handle the demands of SSL.
> So this is a step toward making their Internet connections go away.
>
> This may not be enough of a reason to save HTTP, but it's something I
> wanted to point out.


As the owner of a Mac SE/30 with a 100MB Ethernet card, I sympathize.
However, consider it part of the challenge! :) There are definitely TLS
stacks that work on some pretty small devices.

--Richard



>
>
> --
>
> Eric Shepherd
> Senior Technical Writer
> Mozilla <https://www.mozilla.org/>
> Blog: http://www.bitstampede.com/
> Twitter: http://twitter.com/sheppy
>

Richard Barnes

unread,
Apr 14, 2015, 11:14:39 AM4/14/15
to hugoosval...@gmail.com, dev-platform
Well, no. :)

Note that the primary difference between opportunistic security (which is
HTTP) and HTTPS is authentication. We should think about what sorts of
expectations people have for these devices, and to what degree those
expectations can be met.

Since you bring up IPv6, there might be some possibility that devices could
authenticate their IP addresses automatically, using cryptographically
generated addresses and self-signed certificates using the same public key.
http://en.wikipedia.org/wiki/Cryptographically_Generated_Address
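
A heavily simplified sketch of that idea (not RFC 3972 itself, which uses a
different hash construction, modifiers and reserved bits): derive the 64-bit
interface identifier from a hash of the device's public key, so a
self-signed certificate using that key can be checked against the address
it is served from.

    import hashlib, ipaddress

    def cga_like_address(prefix, spki_der):
        """Pack a hash of the public key into the interface-ID half of an IPv6 address."""
        iid = hashlib.sha256(spki_der).digest()[:8]          # 64-bit interface ID
        net = ipaddress.IPv6Network(prefix)                  # e.g. "fd00:1234::/64"
        return ipaddress.IPv6Address(int(net.network_address)
                                     | int.from_bytes(iid, "big"))

    # usage: cga_like_address("fd00:1234::/64", spki_der), where spki_der is the
    # DER-encoded SubjectPublicKeyInfo of the device's key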

--Richard

Richard Barnes

unread,
Apr 14, 2015, 11:24:18 AM4/14/15
to Karl Dubost, dev-platform
On Mon, Apr 13, 2015 at 7:13 PM, Karl Dubost <kdu...@mozilla.com> wrote:

> Richard,
>
> Le 13 avr. 2015 à 23:57, Richard Barnes <rba...@mozilla.com> a écrit :
> > There's pretty broad agreement that HTTPS is the way forward for the web.
>
> Yes, but that doesn't make deprecation of HTTP a consensus.
>
> > In order to encourage web developers to move from HTTP to HTTPS, I would
> > like to propose establishing a deprecation plan for HTTP without
> security.
>
> This is not encouragement. This is called forcing. ^_^ Just that we are
> using the right terms for the right thing.
>

If so, then it's about the most gentle forcing we could do. If your web
page works today over HTTP, it will continue working for a long time,
O(years) probably, until we get around to removing features you care about.

The idea of this proposal is to start communicating to web site operators
that in the *long* run, HTTP will no longer be viable, while giving them
time to transition.



> In the document
> >
> https://docs.google.com/document/d/1IGYl_rxnqEvzmdAP9AJQYY2i2Uy_sW-cg9QI9ICe-ww/edit?usp=sharing
>
> You say:
> Phase 3: Essentially all of the web is HTTPS.
>
> I understand this is the last hypothetical step, but it sounds a bit like
> "let's move the Web to XML". It didn't work out very well.
>

The lack of XML doesn't enable things like the Great Cannon.
https://citizenlab.org/2015/04/chinas-great-cannon/



> I would love to have a more secure Web, but this can not happen without a
> few careful consideration.
>
> * Making a third party mandatory for certificates is a no-go. It creates
> a system of authority and power, an additional layer of hierarchy which
> deeply modifies the ability for anyone to publish and might in some
> circumstances increase the security risk.
>
> * If we have to rely on them, the cost of certificates must be zero. This
> is for the simple reason that not everyone lives in a rich industrialized
> country.
>

There are already multiple sources of free publicly-trusted certificates,
with more on the way.
https://www.startssl.com/
https://buy.wosign.com/free/
https://blog.cloudflare.com/introducing-universal-ssl/
https://letsencrypt.org/



> * Setup and publication through HTTPS should be as easy as HTTP. The Web
> brought publishing power to individuals. Imagine cases where you need to
> create a local network, do web development on your own computer, hack
> together a server for your school, community, etc. If it relies on a heavy
> process, it will not happen.
>

I agree that we should work on this, and Let's Encrypt is making a big push
in this direction. However, we're not that far off today. Most hosting
platforms already allow HTTPS with only a few more clicks. If you're
running your own server, there's lots of documentation, including
documentation provided by Mozilla:

https://mozilla.github.io/server-side-tls/ssl-config-generator/?1
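
As a further illustration of how little is needed for a toy setup, a minimal
sketch with the Python standard library; cert.pem and key.pem are assumed to
exist already (e.g. from one of the free CAs above), and a real deployment
would instead follow the config generator for its actual web server.

    import http.server, ssl

    httpd = http.server.HTTPServer(("", 8443), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="cert.pem", keyfile="key.pem")  # assumed to exist
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()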

In other words, this is a gradual plan, and while you've raised some
important things to work on, they shouldn't block us getting started.

--Richard




>
>
> So instead of a plan based on technical features, I would love to see a:
> "Let's move to a secure Web. What are the user scenarios, we need to solve
> to achieve that."
>
> These user scenarios are economical, social, etc.
>
>
> my 2 cents.
> So yes, but not the way it is introduced and plan now.

david.a...@gmail.com

unread,
Apr 14, 2015, 11:39:51 AM4/14/15
to

> There are already multiple sources of free publicly-trusted certificates,
> with more on the way.
> https://www.startssl.com/
> https://buy.wosign.com/free/
> https://blog.cloudflare.com/introducing-universal-ssl/
> https://letsencrypt.org/
>

I think that you should avoid making this an exercise in marketing Mozilla's "Let's Encrypt" initiative. "Let's Encrypt" is a great idea and definitely has a place in the world, but it's very important to be impartial.

In my mind there is no particular advantage in swapping lock-in from one CA to another, even if the Mozilla one is free.

justin...@gmail.com

unread,
Apr 14, 2015, 11:53:13 AM4/14/15
to
Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates. HTTPS only would make running personal sites more difficult for individuals, and would make the internet slightly less democratic.


Adam Roach

unread,
Apr 14, 2015, 12:02:41 PM4/14/15
to justin...@gmail.com, dev-pl...@lists.mozilla.org
On 4/14/15 10:53, justin...@gmail.com wrote:
> Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates. HTTPS only would make running personal sites more difficult for individuals, and would make the internet slightly less democratic.

I'm not sure I follow. I have a cert for a web site running on a dynamic
address using DynDNS, and it works just fine. Certs are bound to names,
not addresses.
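
A minimal sketch of why that works, in Python: certificate validation checks
the name you connected to against the names inside the certificate, not the
IP address that name happens to resolve to at that moment. (The hostname
below is hypothetical.)

    import socket, ssl

    host = "myhome.dyndns.example"        # hypothetical dynamic-DNS name
    ctx = ssl.create_default_context()    # verifies the chain and the hostname
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.getpeercert()["subjectAltName"])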

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863

Boris Zbarsky

unread,
Apr 14, 2015, 12:07:15 PM4/14/15
to
On 4/14/15 11:53 AM, justin...@gmail.com wrote:
> Dynamic DNS might be difficult to run on HTTPS as the IP address needs to change when say your cable modem IP updates.

Justin, I'm not sure I follow the problem here. If I understand
correctly, you're talking about a domain name, say "foo.bar", which is
mapped to different IPs via dynamic DNS, and a website running on the
machine behind the relevant cable modem, right?

Is the site being accessed directly via the IP address or via the
foo.bar hostname? Because if it's the latter, then a cert issued to
foo.bar would work fine as the IP changes; certificates are bound to a
hostname string (which can happen to have the form "123.123.123.123", of
course), not an IP address.

And if the site is being accessed via (changeable) IP address, then how
is dynamic DNS relevant?

I would really appreciate an explanation of what problem you're seeing
here that I'm missing.

-Boris

j...@chromium.org

unread,
Apr 14, 2015, 12:47:01 PM4/14/15
to
Hi Mozilla friends. Glad to see this proposal! As Richard mentions, we over on Chromium are working on a similar plan, albeit limited to "powerful features."

I just wanted to mention that regarding subresource integrity (https://w3c.github.io/webappsec/specs/subresourceintegrity/), the general consensus over here is that we will not treat origins as secure if they are over HTTP but loaded with integrity. We believe that security includes confidentiality, which that approach would lack.
--Joel

mh.in....@gmail.com

unread,
Apr 14, 2015, 1:09:44 PM4/14/15
to
> We believe that security includes confidentiality, which that approach would lack.

Hey Joel,

SSL already leaks which domain name you are visiting anyway, so the most confidentiality this can bring you is hiding the specific URL involved in a cache miss. That's a fairly narrow upgrade to confidentiality.

A scenario where it would matter: a MITM wishes to block viewing of a specific video on a video hosting site, but is unwilling to block the whole site. In such cases you would indeed want full SSL, assuming the host can afford it.

A scenario where it would not matter: some country wishes to fire a Great Cannon. There integrity is enough.

I think the case for requiring integrity for all connections is strong: malware injection is simply not on. The case for confidentiality of user data and cookies is equally clear. The case for confidentiality of cache misses of static assets is a bit less clear: sites that host a lot of very different content like YouTube might care and a site where all the content is the same (e.g. a porn site) might feel the difference between a URL and a domain name is so tiny that it's irrelevant - they'd rather have the performance improvements from caching proxies. Sites that have a lot of users in developing countries might also feel differently to Google engineers with workstations hard-wired into the internet backbone ;)

Anyway, just my 2c.

Martin Thomson

unread,
Apr 14, 2015, 1:21:04 PM4/14/15
to Henri Sivonen, dev-platform
On Tue, Apr 14, 2015 at 3:29 AM, Henri Sivonen <hsiv...@hsivonen.fi> wrote:
> Specifically, on point #2, I think we should start by, by default,
> forgetting all cookies that don't have the "secure" flag set at the
> end of the Firefox session. Persistent cookies have two main use
> cases:
> * On login-requiring sites, not requiring the user to have to
> re-enter credentials in every browser session.
> * Behavioral profiling.

This is a reasonable proposal. I think that this, as well as the
caching suggestion up-thread, fall into the general category of things
we've identified as "persistence" features. Persistence has been
identified as one of the most dangerous aspects of the unsecured web.

I like this sort of approach, because it can be implemented at a much
lower https:// adoption rate (i.e., today's rate) than other more
obvious things.

Eric Shepherd

unread,
Apr 14, 2015, 1:39:15 PM4/14/15
to Richard Barnes, Joshua Cranmer 🐧, dev-platform
Richard Barnes wrote:
> As the owner of a Mac SE/30 with a 100MB Ethernet card, I
> sympathize. However, consider it part of the challenge! :) There
> are definitely TLS stacks that work on some pretty small devices.
That's a lot faster machine than the ones I play with. My fastest retro
machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory,
with a 10 Mbps ethernet card. And the ethernet is underutilized because
the bus speed of the computer is too slow to come anywhere close to
saturating the bandwidth available. :)

connor...@gmail.com

unread,
Apr 14, 2015, 2:26:27 PM4/14/15
to
HTTPS has its moments, but the majority of the web does not need it. I certainly wouldn't appreciate the encryption overhead just for visiting David's lolcats website. Mozilla is one of the most important organizations related to free software, so it's sad to see its developers join the war on plaintext: http://arc.pasp.de/ The owners of websites like this have a right to serve their pages in formats that do not make hypocrites of themselves.

emmanuel...@gmail.com

unread,
Apr 14, 2015, 4:35:05 PM4/14/15
to
Hello,

On Monday, April 13, 2015 at 4:57:58 PM UTC+2, Richard Barnes wrote:
> In order to encourage web developers to move from HTTP to HTTPS, I would
> like to propose establishing a deprecation plan for HTTP without security.
>
> <snip>
>
> Thanks,
> --Richard

While I fully understand what's at stake here and the reasoning behind this, I'd like to ask an admittedly troll-like question :

Will Mozilla start to offer certificates to every single domain name owner ?

Without that, your proposal tells me: either you pay for a certificate or you don't use the latest supported features on your personal (or professional) web site. This is a call for a revival of the "best viewed with XXX browser" banners.

Making the warning page easier to bypass is a very, very bad idea. The warning page is here for a very good reason, and its primary function is to scare non-technical literate people so that they don't put themselves in danger. Make it less scary and you'll get the infamous Windows Vista UAC dialog boxes where people click OK without even reading the content.

The proposal fails to foresee another consequence of a full HTTPS web: the rise and fall of root CAs. If everyone needs to buy a certificate you can be sure that some companies will sell them for a low price, with limited background check. These companies will be spotted - and their root CA will be revoked by browser vendors (this already happened in the past and I fail to see any reason why it would not happen again). Suddenly, a large portion of the web will be seen as even worse than "insecure HTTP" - it will be seen as "potentially dangerous HTTPS". The only way to avoid this situation is to put all the power in a very limited number of hands - then you'll witness a sharp rise on certificate prices.

Finally, Mozilla's motto is to keep the web open. Requiring one to pay a fee - even if it's a small one - in order to allow him to have a presence on the Intarweb is not helping.

Best regards,

-- Emmanuel Deloget

Adam Roach

unread,
Apr 14, 2015, 4:45:28 PM4/14/15
to emmanuel...@gmail.com, dev-pl...@lists.mozilla.org
On 4/14/15 15:35, emmanuel...@gmail.com wrote:
> Will Mozilla start to offer certificates to every single domain name owner ?

Yes [1].

https://letsencrypt.org/


____
[1] I'll note that Mozilla is only one of several organizations involved
in making this effort happen.

Chris Peterson

unread,
Apr 14, 2015, 5:30:31 PM4/14/15
to
On 4/14/15 3:29 AM, Henri Sivonen wrote:
> Specifically, on point #2, I think we should start by, by default,
> forgetting all cookies that don't have the "secure" flag set at the
> end of the Firefox session. Persistent cookies have two main use
> cases:
> * On login-requiring sites, not requiring the user to have to
> re-enter credentials in every browser session.
> * Behavioral profiling.

I searched for an existing bug to treat non-secure cookies as session
cookies, but I couldn't find one.

However, I did find bug 530594 ("eternalsession"). Firefox's session
restore, as the name suggests, restores session cookies even after the
user quits and restarts the browser. This is somewhat surprising, but
the glass-half-full perspective is that the negative effects of Henri's
suggestion would be lessened (until bug 530594 is fixed).

northrupt...@gmail.com

unread,
Apr 14, 2015, 5:32:40 PM4/14/15
to
On Monday, April 13, 2015 at 8:26:59 PM UTC-7, ipar...@gmail.com wrote:
> > * Less scary warnings about self-signed certificates (i.e. treat HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do with HTTPS+selfsigned now); the fact that self-signed HTTPS is treated as less secure than HTTP is - to put this as politely and gently as possible - a pile of bovine manure
>
> I am against this. Both are insecure and should be treated as such. How is your browser supposed to know that gmail.com is intended to serve a self-signed cert? It's not, and it cannot possibly know it in the general case. Thus it must be treated as insecure.

Except that one is encrypted, and the other is not. *By logical measure*, the one that is encrypted but unauthenticated is more secure than the one that is neither encrypted nor authenticated, and the fact that virtually every HTTPS-supporting browser assumes the precise opposite is mind-boggling.

I agree that authentication/verification is necessary for security, but to pretend that encryption is a non-factor when it's the only actual subject of this thread as presented by its creator is asinine.

>
> > * Support for a decentralized (blockchain-based, ala Namecoin?) certificate authority
>
> No. Namecoin has so many other problems that it is not feasible.

Like?

And I'm pretty sure none of those problems (if they even exist) even remotely compare to the clusterfsck that is our current CA system.

>
> > Basically, the current CA system is - again, to put this as gently and politely as possible - fucking broken. Anything that forces the world to rely on it exclusively is not a solution, but is instead just going to make the problem worse.
>
> Agree that it's broken. The fact that any CA can issue a cert for any domain is stupid, always was and always will be. It's now starting to bite us.
>
> However, HTTPS and the CA system don't have to be tied together. Let's ditch the immediately insecure plain HTTP, then add ways to authenticate trusted certs in HTTPS by means other than our current CA system. The two problems are orthogonal, and trying to solve both at once will just leave us exactly where we are: trying to argue for a fundamentally different system.

Indeed they don't, but with the current ecosystem they are, which is my point; by deprecating HTTP *and* continuing to treat self-signed certs as literally worse than Hitler *and* relying on the current CA system exclusively for verification of certificates, we're doing nothing to actually solve anything.

As orthogonal as those problems may seem, an HTTPS-only world will fail rather spectacularly without significant reform and refactoring on the CA side of things.

Cameron Kaiser

unread,
Apr 14, 2015, 5:51:32 PM4/14/15
to
On 4/14/15 10:38 AM, Eric Shepherd wrote:
> Richard Barnes wrote:
>> As the owner of a Mac SE/30 with a 100MB Ethernet card, I
>> sympathize. However, consider it part of the challenge! :) There
>> are definitely TLS stacks that work on some pretty small devices.
> That's a lot faster machine than the ones I play with. My fastest retro
> machine is an 8-bit unit with a 10 MHz processor and 4 MB of memory,
> with a 10 Mbps ethernet card. And the ethernet is underutilized because
> the bus speed of the computer is too slow to come anywhere close to
> saturating the bandwidth available. :)

Candidly, and not because I still run such a site, I've always found
Gopher to be a better fit for resource-constrained computing. The
Commodore 128 sitting next to me does very well for that because the
protocol and menu parsing conventions are incredibly trivial.

What is your 10MHz 8-bit system?

Cameron Kaiser
gopher://gopher.floodgap.com/

Adam Roach

unread,
Apr 14, 2015, 5:57:28 PM4/14/15
to northrupt...@gmail.com, dev-pl...@lists.mozilla.org
On 4/14/15 16:32, northrupt...@gmail.com wrote:
> *By logical measure*, the [connection] that is encrypted but unauthenticated is more secure than the one that is neither encrypted nor authenticated, and the fact that virtually every HTTPS-supporting browser assumes the precise opposite is mind-boggling.

That depends on what kind of resource you're trying to access. If the
resource you're trying to reach (in both circumstances) isn't demanding
security -- i.e., it is an "http" URL -- then your logic is sound.
That's the basis for enabling OE.

The problem here is that you're comparing:

* Unsecured connections working as designed

with

* Supposedly secured connections that have a detected security flaw


An "https" URL is a promise of encryption _and_ authentication; and when
those promises are violated, it's a sign that something has gone wrong
in a way that likely has stark security implications.

Resources loaded via an "http" URL make no such promises, so the
situation isn't even remotely comparable.

northrupt...@gmail.com

unread,
Apr 14, 2015, 5:59:51 PM4/14/15
to
On Tuesday, April 14, 2015 at 5:39:24 AM UTC-7, Gervase Markham wrote:
> On 14/04/15 01:57, northrupt...@gmail.com wrote:
> > * Less scary warnings about self-signed certificates (i.e. treat
> > HTTPS+selfsigned like we do with HTTP now, and treat HTTP like we do
> > with HTTPS+selfsigned now); the fact that self-signed HTTPS is
> > treated as less secure than HTTP is - to put this as politely and
> > gently as possible - a pile of bovine manure
>
> http://gerv.net/security/self-signed-certs/ , section 3.

That whole article is just additional shovelfuls of bovine manure slopped onto the existing heap.

The article assumes that when folks connect to something via SSH and something changes - causing MITM-attack warnings and a refusal to connect - folks default to just removing the existing entry in ~/.ssh/known_hosts without actually questioning anything. This conveniently ignores the fact that - when people do this - it's because they already know there's been a change (usually due to a server replacement); most folks (that I've encountered at least) *will* stop and think before editing their known_hosts if it's an unexpected change.

"The first important thing to note about this model is that key changes are an expected part of life."

Only if they've been communicated first. In the vast majority of SSH deployments, a host key will exist at least as long as the host does (if not longer). If one is going to criticize SSH's model, one should, you know, actually freaking understand it first.

"You can't provide [Joe Public] with a string of hex characters and expect it to read it over the phone to his bank."

Sure you can. Joe Public *already* has to do this with social security numbers, credit card numbers, checking/savings account numbers, etc. on a pretty routine basis, whether it's over the phone, over the Internet, by mail, in person, or what have you. What makes an SSH fingerprint any different? The fact that now you have the letters A through F to read? Please.

"Everyone can [install a custom root certificate] manually or the IT department can use the Client Customizability Kit (CCK) to make a custom Firefox. "

I've used the CCK in the past for Firefox customizations in enterprise settings. It's a royal pain in the ass, and is not nearly as viable a solution as the article suggests (and the alternate suggestion of "oh just use the broken, arbitrarily-trusted CA system for your internal certs!" is a hilarious joke at best; the author of the article would do better as a comedian than as a serious authority when it comes to security best practices).

A better solution might be to do this on a client workstation level, but it's still a pain and usually not worth the trouble for smaller enterprises v. just sticking to the self-signed cert.

The article, meanwhile, also assumes (in the section before the one you've cited) that the CA system is immune to being compromised while DNS is vulnerable. Anyone with a number of brain cells greater than or equal to one should know better than to take that assumption at face value.

>
> But also, Firefox is implementing opportunistic encryption, which AIUI
> gives you a lot of what you want here.
>
> Gerv

Then that needs to happen first. Otherwise, this whole discussion is moot, since absolutely nobody in their right mind would want to be shoehorned into our current broken CA system without at least *some* alternative.

Richard Barnes

unread,
Apr 14, 2015, 6:23:49 PM4/14/15
to northrupt...@gmail.com, dev-platform
OE shipped in Firefox 37. It's currently turned off pending a bugfix, but
it will be back soon.

Joshua Cranmer 🐧

unread,
Apr 14, 2015, 6:25:22 PM4/14/15
to
On 4/14/2015 4:59 PM, northrupt...@gmail.com wrote:
> The article assumes that when folks connect to something via SSH and
> something changes - causing MITM-attack warnings and a refusal to
> connect - folks default to just removing the existing entry in
> ~/.ssh/known_hosts without actually questioning anything. This
> conveniently ignores the fact that - when people do this - it's
> because they already know there's been a change (usually due to a
> server replacement); most folks (that I've encountered at least)
> *will* stop and think before editing their known_hosts if it's an
> unexpected change.
I've had an offending key at least 5 times. Only once did I seriously
think to consider what specifically had changed to cause the ssh key to
change. The other times, I assumed there was a good reason and deleted it.

This illustrates a very, very, very important fact about UX: the more
often people see a dialog, the more routine it becomes to deal with
it--you stop considering whether or not it applies, because it's always
applied and it's just yet another step you have to go through to do it.

--
Joshua Cranmer
Thunderbird and DXR developer
Source code archæologist

commod...@gmail.com

unread,
Apr 14, 2015, 6:47:19 PM4/14/15
to
On Tuesday, April 14, 2015 at 2:51:32 PM UTC-7, Cameron Kaiser wrote:
> Candidly, and not because I still run such a site, I've always found
> Gopher to be a better fit for resource-constrained computing. The
> Commodore 128 sitting next to me does very well for that because the
> protocol and menu parsing conventions are incredibly trivial.
Certainly true on a "how well can it keep up?" level, but unfortunately precious few sites support Gopher these days, so while it may be a better fit it offers vastly more constricted access to online resources.

Cameron Kaiser

unread,
Apr 14, 2015, 8:19:36 PM4/14/15
to
The counter argument is, of course, that the "modern Web" (however you
define it) is effectively out of reach of computers older than a decade
or so, let alone an 8-bit system, due to loss of vendor or browser
support, or just plain not being up to the task. So even if they could
access the pages, meaningfully displaying them is another thing
entirely. I won't dispute the much smaller amount of content available
in Gopherspace, but it's still an option that has *some* support, and
that support is often in the retrocomputing community already.

Graceful degradation went out the window a couple years back, unfortunately.

Anyway, I'm derailing the topic, so I'll put a sock in it now.
Cameron Kaiser

Karl Dubost

unread,
Apr 14, 2015, 8:33:49 PM4/14/15
to Henri Sivonen, dev-platform
Henri,
great points, about…

Le 14 avr. 2015 à 19:29, Henri Sivonen <hsiv...@hsivonen.fi> a écrit :
> Currently, the UI designation for http is neutral while the UI
> designation for mixed content is undesirable. I think we should make
> the UI designation of plain http undesirable once x% the sites that
> users encounter on a daily basis are https.

What about changing the color of the grey world icon for http into something more telling?
An icon that would mean "eavesdropping possible". But yes, UI should be part of the work.

About mixed content, insecure Web sites, wrong certificates, etc., people should head to these bugs to understand that it's not that simple.

* https://bugzilla.mozilla.org/show_bug.cgi?id=1126620
[Bug 1126620] [META] TLS 1.1/1.2 version intolerant sites
* https://bugzilla.mozilla.org/show_bug.cgi?id=1138101
[Bug 1138101] [META] Sites that still haven't upgraded to something better than RC4
* https://bugzilla.mozilla.org/show_bug.cgi?id=844556
[Bug 844556] [tracking] compatibility issues with mixed content blocker on non-Mozilla websites


For Web compatibility, dropping non-secure cookies would be an interesting thing to survey, to see how much (or how little) it breaks the Web and the user experience.

Gervase Markham

unread,
Apr 15, 2015, 5:37:13 AM4/15/15
to Eric Shepherd, Joshua Cranmer 🐧
On 14/04/15 13:32, Eric Shepherd wrote:
> My main concern with the notion of phasing out unsecured HTTP is that
> doing so will cripple or eliminate Internet access by older devices that
> aren't generally capable of handling encryption and decryption on such a
> massive scale in real time.
>
> While it may sound silly, those of us who are into classic computers
> and making them do fun new things use HTTP to connect 10 MHz (or even 1
> MHz) machines to the Internet. These machines can't handle the demands
> of SSL. So this is a step toward making their Internet connections go away.

If this is important to you, then you could simply run them through a
proxy. That's what jwz did when he wanted to get Netscape 1.0 running again:
http://www.jwz.org/blog/2008/03/happy-run-some-old-web-browsers-day/

Gerv


Gervase Markham

unread,
Apr 15, 2015, 5:44:42 AM4/15/15
to
On 14/04/15 22:59, northrupt...@gmail.com wrote:
> The article assumes that when folks connect to something via SSH and
> something changes - causing MITM-attack warnings and a refusal to
> connect - folks default to just removing the existing entry in
> ~/.ssh/known_hosts without actually questioning anything.

https://www.usenix.org/system/files/login/articles/105484-Gutmann.pdf

> "The first important thing to note about this model is that key
> changes are an expected part of life."
>
> Only if they've been communicated first.

How does a website communicate with all its users that it is expecting
to have (or has already had) a key change? After all, you can't exactly
put a notice on the site itself...

> "You can't provide [Joe Public] with a string of hex characters and
> expect it to read it over the phone to his bank."
>
> Sure you can. Joe Public *already* has to do this with social
> security numbers, credit card numbers, checking/savings account
> numbers, etc. on a pretty routine basis, whether it's over the phone,
> over the Internet, by mail, in person, or what have you. What makes
> an SSH fingerprint any different? The fact that now you have the
> letters A through F to read? Please.

You have missed the question of motivation. I put up with reading a CC
number over the phone (begrudgingly) because I know I need to do that in
order to buy something. If I have a choice of clicking "OK" or phoning
my bank, waiting in a queue, and eventually saying "Hi. I need to verify
the key of your webserver's cert so I can log on to do my online
banking. Is it 09F9.....?" then I'm just going to click "OK" (or
"Whatever", as that button should be labelled).

Gerv

Gervase Markham

unread,
Apr 15, 2015, 5:47:03 AM4/15/15
to
On 14/04/15 16:39, david.a...@gmail.com wrote:
>
>> There are already multiple sources of free publicly-trusted certificates,
>> with more on the way.
>> https://www.startssl.com/
>> https://buy.wosign.com/free/
>> https://blog.cloudflare.com/introducing-universal-ssl/
>> https://letsencrypt.org/
>
> I think that you should avoid making this an exercise in marketing Mozilla's "Let's Encrypt" initiative.

Perhaps that's why Richard took the time to make a comprehensive list of
all known sources of free certs, rather than just mentioning LE?

Gerv


Gervase Markham

unread,
Apr 15, 2015, 5:50:09 AM4/15/15
to
On 14/04/15 17:46, j...@chromium.org wrote:
> I just wanted to mention that regarding subresource integrity
> (https://w3c.github.io/webappsec/specs/subresourceintegrity/), the
> general consensus over here is that we will not treat origins as
> secure if they are over HTTP but loaded with integrity. We believe
> that security includes confidentiality, which that would approach
> would lack. --Joel

Radical idea: currently, the web has two states, insecure and secure.
What if it still had two states, with the same UI, but insecure meant
"HTTPS top-level, but some resources may be loaded using HTTP with
integrity", and secure meant "HTTPS throughout"?

That is to say, we don't have to tie the availability of new features to
the same criteria as we tie the HTTP vs. HTTPS icon/UI in the browser.
We could allow powerful features for
HTTPS-top-level-and-some-HTTP-with-integrity, while still displaying it
as insecure.

Gerv

