
Over 14K 'Let's Encrypt' SSL Certificates Issued To PayPal Phishing Sites


David E. Ross

Mar 26, 2017, 11:53:35 AM
to mozilla-dev-s...@lists.mozilla.org
The subject is the title of a Slashdot article posted today. The
article can be accessed at
<https://it.slashdot.org/story/17/03/25/2222246/over-14k-lets-encrypt-ssl-certificates-issued-to-paypal-phishing-sites>.


The article contains two links. One is to a Bleeping Computer article
that gives more detail.

The other link is to a Let's Encrypt page where that certification
authority states:

> Let's Encrypt is going to be issuing Domain Validation (DV)
> certificates. On a technical level, a DV certificate asserts that a
> public key belongs to a domain – it says nothing else about a site's
> content or who runs it. DV certificates do not include any
> information about a website's reputation, real-world identity, or
> safety.

To me, this means that certificates can be freely issued to criminal
enterprises.

--
David E. Ross
<http://www.rossde.com>

Consider:
* Most states mandate that drivers have liability insurance.
* Employers are mandated to have worker's compensation insurance.
* If you live in a flood zone, flood insurance is mandatory.
* If your home has a mortgage, fire insurance is mandatory.

Why then is mandatory health insurance so bad??

Adam Caudill

Mar 26, 2017, 1:14:34 PM
to David E. Ross, mozilla-dev-s...@lists.mozilla.org
Much has been written about this issue of late. Most of the focus has been
on Let's Encrypt, but they are not the only CA issuing certificates to
phishing sites; because of the scale Let's Encrypt operates at, they simply
issue the most and thus take most of the heat.

One of the better articles on the topic is this one by Scott Helme, which
is well worth reading:

https://scotthelme.co.uk/lets-encrypt-are-enabling-the-bad-guys-and-why-they-should/

DV certificates only prove control of a domain, not who operates it, or if
it should be trusted. To try to read anything more into it is a mistake;
this applies to all CAs issuing DV certificates, not just those issued by
Let's Encrypt.

The goal many share is to achieve near-ubiquitous TLS use to minimize
insecure traffic as much as possible. To achieve that goal, the barrier to
entry needs to be minimal, which means freely available DV certificates.
Let's Encrypt issues certificates to anyone that can prove control of a
domain (with few restrictions), and as with most other forms of secure
communications, this means not everyone that uses it will have honest
intentions. That is simply the cost of achieving ubiquitous encryption.

Some have suggested a significant change to how browsers display status:
display a warning for HTTP, and show HTTPS with a DV certificate as neutral
(handling of OV & EV certificates in such an arrangement is more
contentious). This would help to eliminate the erroneous feeling some have
that certificates impart trust.
--
*Adam Caudill*
http://adamcaudill.com

Vincent Lynch

Mar 26, 2017, 2:15:09 PM
to Adam Caudill, David E. Ross, mozilla-dev-s...@lists.mozilla.org
Hi David,

I am the author of the research discussed in that Bleeping Computer post.

Your post is a bit brief, so I'm not sure if you are just sharing news, or
wanted to discuss a certain aspect of this story or topic.

So I will just share some general thoughts:



1. The most important thing to note is that Let's Encrypt has not violated
any issuance requirements by issuing these certificates.

The debate over malicious sites using SSL is certainly a popular topic, and
while many regularly discuss whether this is right or wrong, there is no
such ambiguity in policy.

The CA/B Forum's Baseline Requirements (which set forth issuance practices
for CAs) do *not* forbid issuing certificates to phishing sites, or require
certificates for such sites to be revoked.

There is a section in the Baseline Requirements (BRs) about "high-risk"
certificate requests, which some cite and interpret as applying to this
situation. However, that section of the BRs is incredibly vague and does
not lay out any specific requirements.

The only relevant requirements a CA may need to follow would come from a
root program. For instance, Microsoft's root program includes a stipulation
that they may request a CA to revoke a certificate.

I don't think it's publicly known how often Microsoft exercises that rule.
But I am not aware of any cases where Let's Encrypt has ignored such
requests, and I highly doubt they would as the risks to their organization
would be obvious.



2. The topic of malicious certs has been thoroughly discussed by the
industry. These discussions have already taken place on this list, such as
this one:

https://groups.google.com/d/topic/mozilla.dev.security.policy/vMrncPi3tx8/discussion


Not to be rude to anyone, but I strongly suggest reading those discussions
before starting a new one, as most points have already been made and
debated.



3. After reading (and participating in) many discussions about the use of
SSL on malicious sites and CAs "policing" the content/purpose of websites,
it is clear to me that this is an issue we will never come to an
ideological consensus on as an industry. So we must defer to the rules,
which do not forbid such a practice and are unlikely to ever be changed.

While I personally believe there are some changes that could be made to
improve current practices, it is more important to recognize that this is
simply an area where meaningful cooperation is not possible. Instead of
using up more time (and stressing relationships) on this debate, we should
look to make improvements in other areas of Web PKI where we can find
consensus.




I think my research is useful in quantifying and showing the scale of SSL
use on phishing sites, and I think it should be considered in security
UX/user safety/user education discussions. But overall, I hope that people
will not re-start a debate here over whether Let's Encrypt should be
distrusted or whether it's acceptable to issue such certificates.

Martin Heaps

Mar 27, 2017, 11:09:07 AM
to mozilla-dev-s...@lists.mozilla.org
This topic is frustrating in that there seems to be a widespread attempt by people to use one form of authentication (DV TLS) to verify another form of authentication (EV TLS).

There seems to be an issue with people not understanding that a FREE service with a very low bar in knowledge requirements on the part of the end user (the website owner) will be used across the spectrum of human achievement (good and bad).

Economics: If something costs money, then far fewer people will make use of it; this has been one of the core reasonings behind "Let's Encrypt" and other free SSL service providers.

Education: If something requires skill and background knowledge to work properly and correctly, then far fewer people will be willing to deploy it across their websites.

The next level is that any business or high-value entity should, over the course of the next few years, implement EV certificates (many already have) and that browsers should make EV certificates MORE noticeable on websites.

BUT:
The end necessity is that the general public needs to be educated that a "secure" website does not mean an "authenticated" business, person or organisation. The general public needs to be aware of the difference between a DV and an EV certificate.

The community has spent many years trying to highlight the lack of secure SSL/TLS on websites; now that it's in place, the community needs to highlight the different *types* of certificates available and what they mean for the website visitor.

In addition, I think this topic is being highlighted by parties who (for some reason) don't like Let's Encrypt and similar services, as a way to convince people who don't understand what DV TLS actually does that Let's Encrypt is somehow Bad or Evil for providing a secure service to nefarious websites.

Some ideas:

1) Browsers could gradually make EV certificates more prominent, for example by showing a popup notice the first time a site with an EV certificate is visited, declaring the name and address of the owner of the site.

2) Websites themselves need to deploy better Content Security Policy practices. Very few websites seem to be using CSP, despite it being a very powerful and flexible tool for preventing one site from masquerading as another by "borrowing" its media and content.

3) There could be a system of word recognition / repetition counting, in something such as a browser plugin, to detect websites that use the word "PayPal", for instance, above a certain level and then notify the user that the site is NOT an actual PayPal domain (a rough sketch of this idea follows).
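
For what it's worth, a very rough sketch of idea 3 might look like the
following (the allow-list, threshold and function names are illustrative
assumptions, not any real plugin API):

# Hypothetical heuristic for idea 3: flag a page that repeats the word
# "paypal" many times but is not served from a genuine PayPal domain.
import re
import urllib.request
from urllib.parse import urlparse

GENUINE_DOMAINS = ("paypal.com", "paypal.me")   # assumed allow-list
THRESHOLD = 5                                   # arbitrary repetition count

def is_genuine(host):
    return any(host == d or host.endswith("." + d) for d in GENUINE_DOMAINS)

def suspicious_paypal_use(url):
    host = (urlparse(url).hostname or "").lower()
    if is_genuine(host):
        return False                 # the real thing, nothing to flag
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", errors="replace").lower()
    return len(re.findall(r"paypal", text)) >= THRESHOLD

A real plugin would of course need a maintained brand list and much better
heuristics; this is only meant to show the shape of the check.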

(sorry, I'm sure most of you reading this are well aware of the details, I wanted a bit of a vent)

Gervase Markham

Mar 27, 2017, 4:03:26 PM
to mozilla-dev-s...@lists.mozilla.org
On 27/03/17 16:08, Martin Heaps wrote:
> The next level is now that any business or high valued entity should
> over the course of the next few years implement EV certificates (many
> already have) and that browsers should make EV certificates MORE
> noticable on websites..

....or we should decide that the phishing problem is not best solved at
the level of certificates, but instead at a higher level (e.g. Safe
Browsing and similar mechanisms).

Gerv

Kurt Roeckx

Mar 27, 2017, 4:17:32 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Mon, Mar 27, 2017 at 09:02:48PM +0100, Gervase Markham via dev-security-policy wrote:
> On 27/03/17 16:08, Martin Heaps wrote:
> > The next level is now that any business or high valued entity should
> > over the course of the next few years implement EV certificates (many
> > already have) and that browsers should make EV certificates MORE
> > noticable on websites..
>
> ....or we should decide that the phishing problem is not best solved at
> the level of certificates, but instead at a higher level (e.g. Safe
> Browsing and similar mechanisms).

I've been wondering if CT is a good tool for things like safe
browsing to monitor possible phishing sites and possibly detect
them faster.
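
For illustration, here is a minimal sketch of what that hand-off might look
like, assuming you already have a feed of newly logged names from whatever
CT monitor you use (the keyword and domain lists are illustrative only):

# Sketch of the CT-as-early-warning idea: given a stream of newly logged
# certificate names, pick out ones that mention a watched brand on a
# non-brand domain, for hand-off to something like Safe Browsing.
WATCH_KEYWORDS = ("paypal", "appleid", "bankofamerica")
GENUINE_DOMAINS = {"paypal.com", "apple.com", "bankofamerica.com"}

def flag_candidates(logged_names):
    """Yield CT-logged names that deserve a closer, content-level look."""
    for name in logged_names:
        name = name.lower().lstrip("*.")
        base = ".".join(name.split(".")[-2:])    # crude registrable domain
        if base in GENUINE_DOMAINS:
            continue                             # the brand's own certs
        if any(k in name for k in WATCH_KEYWORDS):
            yield name

print(list(flag_candidates(
    ["paypal.com.secure-login.example", "shop.example"])))
# -> ['paypal.com.secure-login.example']

Anything flagged this way would still need content-level review before
being treated as phishing.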


Kurt

Peter Gutmann

Mar 27, 2017, 7:24:29 PM
to mozilla-dev-s...@lists.mozilla.org, Martin Heaps
Martin Heaps via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>This topic is frustrating in that there seems to be a wide attempt by people
>to use one form of authentication (DV TLS) to verify another form of
>authentication (EV TLS).

The overall problem is that browser vendors have decreed that you can't have
encryption unless you have a certificate, i.e. a CA-supplied magic token to
turn the crypto on. Let's Encrypt was an attempt to kludge around this by
giving everyone one of these magic tokens. Like a lot of other kludges, it
had negative consequences...

So it's now being actively exploited... how could anyone *not* see this
coming? How can anyone actually be surprised that this is now happening? As
the late Bob Jueneman once said on the PKIX list (over a different PKI-related
topic), "it's like watching a train wreck in slow motion, one freeze-frame at
a time". It's pre-ordained what's going to happen, the most you can do is
artificially delay its arrival.

>The end nessecity is that the general public need to be educated [...]

Quoting Vesselin Bontchev, "if user education was going to work, it would have
worked by now". And that was a decade ago.

Peter.

mono...@gmail.com

Mar 27, 2017, 7:41:54 PM
to mozilla-dev-s...@lists.mozilla.org
> I've been wondering if CT is a good tool for things like safe
> browsing to monitor possible phishing sites and possibly detect
> them faster.

Are there general proposals yet on how to distinguish phishing vs legitimate when it comes to domains? (like apple.com vs app1e.com vs mom'n'pop farmer's myapple.com)

Thanks,

Nico

Matt Palmer

Mar 27, 2017, 8:36:34 PM
to dev-secur...@lists.mozilla.org
On Mon, Mar 27, 2017 at 10:16:52PM +0200, Kurt Roeckx via dev-security-policy wrote:
> On Mon, Mar 27, 2017 at 09:02:48PM +0100, Gervase Markham via dev-security-policy wrote:
> > On 27/03/17 16:08, Martin Heaps wrote:
> > > The next level is now that any business or high valued entity should
> > > over the course of the next few years implement EV certificates (many
> > > already have) and that browsers should make EV certificates MORE
> > > noticable on websites..
> >
> > ....or we should decide that the phishing problem is not best solved at
> > the level of certificates, but instead at a higher level (e.g. Safe
> > Browsing and similar mechanisms).
>
> I've been wondering if CT is a good tool for things like safe
> browsing to monitor possible phishing sites and possibly detect
> them faster.

I'm about 100% sure that having a pre-populated list of sites that are
likely to be used for URL-confusion phishing would be a valuable thing for
systems like safe browsing to implement.

- Matt

Florian Weimer

Mar 28, 2017, 5:53:04 AM
to mozilla-dev-s...@lists.mozilla.org
* mono riot:

>> I've been wondering if CT is a good tool for things like safe
>> browsing to monitor possible phishing sites and possibly detect
>> them faster.
>
> Are there general proposals yet on how to distinguish phishing vs
> legitimate when it comes to domains? (like apple.com vs app1e.com vs
> mom'n'pop farmer's myapple.com)

If there was a general rule, people would game the system, making the
rule useless.

In general, recognizing malicious web content requires looking at said
content. It is not possible to go by the domain name alone.

Gervase Markham

Mar 28, 2017, 6:03:40 AM
to mozilla-dev-s...@lists.mozilla.org
On 27/03/17 23:15, mono...@gmail.com wrote:
> Are there general proposals yet on how to distinguish phishing vs
> legitimate when it comes to domains? (like apple.com vs app1e.com vs
> mom'n'pop farmer's myapple.com)

Not for those sorts of differences. There are in an IDN context:
http://unicode.org/reports/tr39/
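
As a toy illustration of the confusable-skeleton idea from that report (the
hand-picked mapping below is only a hypothetical subset of the real
confusables.txt data, just to show the shape of the check):

CONFUSABLE_MAP = {
    "\u0430": "a",   # Cyrillic 'а' -> Latin 'a'
    "\u0435": "e",   # Cyrillic 'е' -> Latin 'e'
    "\u043e": "o",   # Cyrillic 'о' -> Latin 'o'
    "\u0440": "p",   # Cyrillic 'р' -> Latin 'p'
}

def skeleton(label):
    return "".join(CONFUSABLE_MAP.get(ch, ch) for ch in label.lower())

def confusable_with(candidate, protected):
    return candidate != protected and skeleton(candidate) == skeleton(protected)

print(confusable_with("\u0430pple.com", "apple.com"))   # True (Cyrillic 'а')
print(confusable_with("myapple.com", "apple.com"))      # False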

Gerv

Hector Martin

Mar 29, 2017, 7:31:06 AM
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org, Martin Heaps
On 28/03/17 08:23, Peter Gutmann via dev-security-policy wrote:
> Martin Heaps via dev-security-policy <dev-secur...@lists.mozilla.org> writes:
>
>> This topic is frustrating in that there seems to be a wide attempt by people
>> to use one form of authentication (DV TLS) to verify another form of
>> authentication (EV TLS).
>
> The overall problem is that browser vendors have decreed that you can't have
> encryption unless you have a certificate, i.e. a CA-supplied magic token to
> turn the crypto on. Let's Encrypt was an attempt to kludge around this by
> giving everyone one of these magic tokens. Like a lot of other kludges, it
> had negative consequences...

It's not a kludge, though. Let's Encrypt is not (merely) a workaround
for the fact that self-signed certificates are basically considered
worthless. If it were, it wouldn't meet BR rules. Let's Encrypt actively
performs validation of domains, and in that respect is as legitimate as
any other DV CA.

We actually have *five* levels of trust here:

1. HTTP
2. HTTPS with no validation (self-signed or anonymous ciphersuite)
3. HTTPS with DV
4. HTTPS with OV
5. HTTPS with EV

These are technically objective levels of trust (mostly). There is also
a technically subjective tangential attribute:

a. Is not a phishing or malicious site.

Let's Encrypt aims to obsolete levels 1 and 2 by making 3 ubiquitously
accessible.

The problem is that browser vendors have historically treated trust as
binary, confounding (3), (4), and (a), mostly because the ecosystem at
the time made it hard to get (3) without meeting (a). They also
inexplicably treated (2) as worse than (1), which is of course nonsense,
but I guess was driven by some sort of backwards thinking that "if you
have security at all, you'd better have good security" (or,
equivalently: "normal people don't need security, and a mediocre attempt
at security implies Bad Evil Things Are Happening").

With time, certificates have become more accessible, everyone has come
to agree that we all need security, and with that, that thinking has
become obsolete. Getting a DV cert for a phishing site was by no means
hard before Let's Encrypt. Now that Let's Encrypt is here, it's trivial.

> So it's now being actively exploited... how could anyone *not* see this
> coming? How can anyone actually be surprised that this is now happening? As
> the late Bob Jueneman once said on the PKIX list (over a different PKI-related
> topic), "it's like watching a train wreck in slow motion, one freeze-frame at
> a time". It's pre-ordained what's going to happen, the most you can do is
> artificially delay its arrival.

And this question should be directed at browser vendors. After years of
mistakenly educating users that "green lock = good, safe, secure,
awesome, please type in all your passwords", how could they *not* see
this coming?

>> The end nessecity is that the general public need to be educated [...]
>
> Quoting Vesselin Bontchev, "if user education was going to work, it would have
> worked by now". And that was a decade ago.

This is strictly a presentation layer problem. We *know* what the
various trust levels mean. We need to present them in a way that is
*useful* to users.

Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5)
full EV banner. (a) still correlates reasonably well with (4) and (5).
HTTPS is no longer optional. All those phishing sites get a neutral URL
bar. We've already educated users that their bank needs a green lock in
the URL.

--

mono...@gmail.com

Mar 29, 2017, 5:34:25 PM
to mozilla-dev-s...@lists.mozilla.org
> Not for those sorts of differences. There are in an IDN context:
> http://unicode.org/reports/tr39/

wasn't aware of that TS, thanks!

Ryan Sleevi

Mar 29, 2017, 6:40:00 PM
to Hector Martin, Martin Heaps, mozilla-dev-s...@lists.mozilla.org, Peter Gutmann
On Wed, Mar 29, 2017 at 7:30 AM, Hector Martin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> We actually have *five* levels of trust here:
>
> 1. HTTP
> 2. HTTPS with no validation (self-signed or anonymous ciphersuite)
> 3. HTTPS with DV
> 4. HTTPS with OV
> 5. HTTPS with EV
>

No, we actually only have three levels.

1. HTTP
2. "I explicitly asked for security and didn't get it" (HTTPS with no
validation)
3. HTTPS

> Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5)
> full EV banner. (a) still correlates reasonably well with (4) and (5).
> HTTPS is no longer optional. All those phishing sites get a neutral URL
> bar. We've already educated users that their bank needs a green lock in the
> URL.


And that was a mistake - one which has been known since the very
introduction of EV in the academic community, but sadly, like Cassandra,
was not heeded.

http://www.adambarth.com/papers/2008/jackson-barth-b.pdf should be required
reading for anyone who believes OV or EV objectively improves security,
because it explains how, since the very beginning of browser support for
SSL/TLS (~1995), there's been a security policy in place that determines
equivalence - the Same Origin Policy.

While the proponents of SSL/TLS then - and now - want certificates to be
Something More, the reality has been that, from the get-go, the only
boundary has been the Origin.

I think the general community here would agree that making HTTPS simple and
ubiquitous is the goal, and that efforts by CAs - commercial and
non-commercial - towards that goal, whether by making certificates more
affordable to obtain, simpler to install, or easier to support, are
well-deserving of praise.

But if folks want OV/EV, then they also have to accept there needs to be an
origin boundary, like Barth/Jackson originally called for in 2008
(httpsev://), and that any downtrust in that boundary needs to be blocked
(similar to mixed content blocking of https -> http, as those degrade the
effective assurance). Further, it seems as if it would be necessary to
obtain the goals of 4, 5, or (a) that the boundary be 'not just'
httpsev://, but somehow bound to the organization itself - an
origin-per-organization, if you will.

And that, at its core, is fundamentally opposed to how the Web was supposed
to and does work. Which is why (4), (5), and (a) are unreasonable and
unrealistic goals, despite having been around for over 20 years, and no new
solutions have been put forward since Barth/Jackson called out the obvious
one nearly a decade ago, which no one was interested in.

Hector Martin

Mar 30, 2017, 10:26:03 AM
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Martin Heaps, Peter Gutmann
On 2017-03-30 07:39, Ryan Sleevi via dev-security-policy wrote:
> No, we actually only have three levels.
>
> 1. HTTP
> 2. "I explicitly asked for security and didn't get it" (HTTPS with no
> validation)
> 3. HTTPS

(2) assumes users ask for security. 99% of users do not ask for
security, they follow links that choose whether to ask for security for
them or not. In fact, the whole point of HSTS is to enforce some kind of
persistent request for security, because users aren't expected to know
to prefix their URLs with https://

(2) is also strictly more secure than (1), even though it may not meet
some arbitrary definition of "security". If it weren't, the security
community would be up in arms over SSH's TOFU model, and it isn't. That
model has its problems, but I would argue that TOFU with no validation
is closer to (3) than it is to (1), and even with no TOFU/persistence,
protection against passive snooping is still a significant plus.

Of course, now that Let's Encrypt is a thing, that's mostly moot since
everyone can get a cert, but IMO we've wasted many years by waiting
until now for a solution that sites could use to deploy encryption *at
all* with a low barrier to entry. NSA-style dragnet fiber tap
surveillance would not have worked in a world where everyone could just
use a self-signed cert for a modicum of security. Plaintext HTTP
could've been obsoleted a decade ago.

>> Obvious answer? Make (1)-(2) big scary red, (3) neutral, (4) green, (5)
>> full EV banner. (a) still correlates reasonably well with (4) and (5).
>> HTTPS is no longer optional. All those phishing sites get a neutral URL
>> bar. We've already educated users that their bank needs a green lock in the
>> URL.
>
>
> And that was a mistake - one which has been known since the very
> introduction of EV in the academic community, but sadly, like Cassandra,
> was not heeded.

Indeed, but good luck correcting it *now*. Practically speaking, we need
to work with what we have. And "green = good" ultimately extends way
beyond technology.

> http://www.adambarth.com/papers/2008/jackson-barth-b.pdf should be required
> reading for anyone who believes OV or EV objectively improves security,
> because it explains how since the very beginning of browsers support for
> SSL/TLS (~1995), there's been a security policy at place that determines
> equivalence - the Same Origin Policy.
>
> While the proponents of SSL/TLS then - and now - want certificates to be
> Something More, the reality has been that, from the get-go, the only
> boundary has been the Origin.

I agree with the premise of that paper, but it doesn't really counter my
view. I'm arguing that EV certificates do more than endorse the public
key with further validation: they endorse the *origin* with further
validation. It's still the same origin.

The phishing threat model is not that an attacker gets ahold of a cert
for paypal.com. That's a higher bar. EV doesn't help you there (today).
The threat model is that an attacker gets ahold of a cert for paypa1.com
and claims to be PayPal.

EV (today) doesn't say "this server is trustworthy", it says "this
origin is a given organization" and then the regular DV properties of
the cert transitively apply. Yes, if a CA issued a non-EV cert for
paypal.com to a party other than PayPal, then a website MitMing PayPal
could steal all of its credentials without an EV cert (and httpsev://
could help prevent that). But that's a *very* different attack scenario
from what we're focusing on here, which is an attacker with an origin
that *looks* like PayPal (or not, many users don't even care).

> But if folks want OV/EV, then they also have to accept there needs to be an
> origin boundary, like Barth/Jackson originally called for in 2008
> (httpsev://), and that any downtrust in that boundary needs to be blocked
> (similar to mixed content blocking of https -> http, as those degrade the
> effective assurance). Further, it seems as if it would be necessary to
> obtain the goals of 4, 5, or (a) that the boundary be 'not just'
> httpsev://, but somehow bound to the organization itself - an
> origin-per-organization, if you will.

SOP aside, that's the point of displaying the organization name in the
URL, is it not? IMO the point of EV is to allow a user to determine that
the origin they are currently accessing is controlled by the legal
entity displayed in the URL bar.

This does not afford protection against "Bob G. Parker hijacks PayPal's
IP and gets a DV cert for PayPal and steals everyone's password", but it
does afford protection against typical phishing sites. Creating a
separate origin for EV, or per-organization, strengthens this, but the
lack thereof does not make it useless.

The value of OV is less clear, but still nonzero over DV. It at least
implies that the certificate was issued with some amount of legal
validation, which presumably affords higher chances of catching abuse
attempts than automated DV issuance.

> And that, at its core, is fundamentally opposed to how the Web was supposed
> to and does work. Which is why (4), (5), and (a) are unreasonable and
> unrealistic goals, despite having been around for over 20 years, and no new
> solutions have been put forward since Barth/Jackson called out the obvious
> one nearly a decade ago, which no one was interested in.

Is it? This is a philosophical discussion, not a technical one. I think
there is value in having some assurance (against some threat models at
least) that you are talking to a given legal organization. Trust is
hard, and trust based on anonymous identifiers should always be
supported (enabling decentralized determination of trust), but some
organizations operate strictly within the framework of a legal identity,
and I don't see why we shouldn't have *some* way and process of
exporting that legal identity to a user.

Ultimately, if EV is not the answer, we need *some* way of having users
be aware of who they are interacting with. Any suggestions?

--
Hector Martin (mar...@marcan.st)
Public Key: https://mrcn.st/pub

Alex Gaynor

Mar 30, 2017, 10:30:43 AM
to Hector Martin, ry...@sleevi.com, Martin Heaps, mozilla-dev-s...@lists.mozilla.org, Peter Gutmann
On Thu, Mar 30, 2017 at 10:25 AM, Hector Martin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 2017-03-30 07:39, Ryan Sleevi via dev-security-policy wrote:
> > No, we actually only have three levels.
> >
> > 1. HTTP
> > 2. "I explicitly asked for security and didn't get it" (HTTPS with no
> > validation)
> > 3. HTTPS
>
> (2) assumes users ask for security. 99% of users do not ask for
> security, they follow links that choose whether to ask for security for
> them or not. In fact, the whole point of HSTS is to enforce some kind of
> persistent request for security, because users aren't expected to know
> to prefix their URLs with https://
>

> (2) is also strictly more secure than (1), even though it may not meet
> some arbitrary definition of "security". If it weren't, the security
> community would be up in arms over SSH's TOFU model, and it isn't. That
> model has its problems, but I would argue that TOFU with no validation
> is closer to (3) than it is to (1), and even with no TOFU/persistence,
> protection against passive snooping is still a significant plus.
>

You're not wrong that (2) is better than (1). It's also indistinguishable
from a downgrade attack from (3).

If we got to do the web all over again, I think we'd make the UX for (1)
have an interstitial, or just not exist. Unfortunately, we're paying down
two decades of technical debt :-)

Hector Martin

Mar 30, 2017, 10:44:23 AM
to Alex Gaynor, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Martin Heaps, Peter Gutmann
On 2017-03-30 23:30, Alex Gaynor via dev-security-policy wrote:
>>> 1. HTTP
>>> 2. "I explicitly asked for security and didn't get it" (HTTPS with no
>>> validation)
>>> 3. HTTPS
>
> You're not wrong that (2) is better than (1). It's also indistinguishable
> from a downgrade attack from (3).

But so is (1) if the URI didn't come from somewhere that already
requested HTTPS. Enter HSTS, etc. Ultimately, yes, ideally we'd have had
something like HSTS levels for each trust level, plus matching URI
schemes or some other way of requesting a minimum trust level in the URI.

> If we got to do the web all over again, I think we'd make the UX for (1)
> have an interstitial, or just not exist. Unfortunately, we're paying down
> two decades of technical debt :-)

Indeed. This is something that was a day 1 design flaw in HTTPS (with
the UX as implemented). The moment you start throwing up big scary
warnings for self-signed certs and not for HTTP, you've lost, because
the people with certs aren't going to want to become susceptible to
downgrade attacks. Though browser makers have progressively made this
worse by making the warning scarier and scarier.

Ah well, we are where we are. I'm grateful I can finally nuke a couple
random personal CAs and just Let's Encrypt everything, with HSTS. With
any luck browsers will start significantly penalizing the HTTP UX and
we'll finally get on the path to ubiquitous encryption.

Nick Lamb

Mar 30, 2017, 2:57:20 PM
to mozilla-dev-s...@lists.mozilla.org
Doesn't Chrome's behaviour already "penalise" plaintext HTTP? You can't build a login form, or use shiny new features.

We aren't where we'd ideally be; everybody agrees about that. But that's not the same thing as agreeing that our direction of travel is wrong.

I am far from home, reduced to using mobile devices, or I'd do it myself, but I recommend someone try to measure the proximate cause of these certificates. Unlike with earlier "free" certs, the advent of ACME means hosts are throwing in certs with hosting, and I suspect that some sizeable fraction of the 14k were issued on this basis. If so, phishers may not even be using the HTTPS feature, any more than they'd have used free vouchers for discounted T-shirts if the host included those. So 14k becomes a measure not of criminal interest in TLS certificates but of the success of full automation in bulk hosting, combined with the high turnover of phishing sites.
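
A rough starting point for that measurement might be something like the
sketch below; it assumes crt.sh's public JSON interface (an assumption
about that service, not a documented, stable API) and treats the issuer
string as a crude first-pass signal of hosting-bundled issuance, so take it
as illustrative only:

import json
import urllib.request
from collections import Counter
from urllib.parse import quote

def issuer_breakdown(keyword="paypal"):
    # Ask crt.sh for certificates whose names contain the keyword and
    # tally them by issuer; hosting-provider CAs showing up near the top
    # would be a hint that the certs came bundled with hosting accounts.
    url = "https://crt.sh/?q=" + quote("%" + keyword + "%") + "&output=json"
    with urllib.request.urlopen(url, timeout=120) as resp:
        entries = json.load(resp)
    return Counter(e.get("issuer_name", "unknown") for e in entries)

if __name__ == "__main__":
    for issuer, count in issuer_breakdown().most_common(10):
        print(count, issuer)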