
2018.01.09 Issue with TLS-SNI-01 and Shared Hosting Infrastructure

jo...@letsencrypt.org

Jan 10, 2018, 4:33:31 AM1/10/18
to mozilla-dev-s...@lists.mozilla.org
At approximately 5 p.m. Pacific time on January 9, 2018, we received a report from Frans Rosén of Detectify outlining a method of exploiting some shared hosting infrastructures to obtain certificates for domains he did not control, by making use of the ACME TLS-SNI-01 challenge type. We quickly confirmed the issue and mitigated it by entirely disabling TLS-SNI-01 validation in Let’s Encrypt. We’re grateful to Frans for finding this issue and reporting it to us.

We’d like to describe the issue and our plans for possibly re-enabling TLS-SNI-01 support.

Problem Summary

In the ACME protocol’s TLS-SNI-01 challenge, the ACME server (the CA) validates a domain name by generating a random token and communicating it to the ACME client. The ACME client uses that token to create a self-signed certificate with a specific, invalid hostname (for example, 773c7d.13445a.acme.invalid), and configures the web server on the domain name being validated to serve that certificate. The ACME server then looks up the domain name’s IP address, initiates a TLS connection, and sends the specific .acme.invalid hostname in the SNI extension. If the response is a self-signed certificate containing that hostname, the ACME client is considered to be in control of the domain name, and will be allowed to issue certificates for it.
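
To make the check concrete, here is a minimal sketch in Python (the key authorization value below is illustrative; the name derivation follows the draft ACME specification for this challenge type):

```python
import hashlib

def tls_sni_01_name(key_authorization: str) -> str:
    # Per the draft ACME spec, the expected SAN is derived from the hex
    # SHA-256 digest of the key authorization, split into two
    # 32-character labels under the reserved .acme.invalid suffix.
    z = hashlib.sha256(key_authorization.encode()).hexdigest()
    return f"{z[:32]}.{z[32:64]}.acme.invalid"

def validation_passes(presented_sans, key_authorization) -> bool:
    # The CA resolves the domain's IP, connects with the derived name in
    # SNI, and accepts if the returned self-signed certificate contains
    # that name among its SANs.
    return tls_sni_01_name(key_authorization) in presented_sans
```

Note that the entire "answer" (the derived name) is computable by anyone who holds the challenge, which is central to the issue described below.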

However, Frans noticed that at least two large hosting providers combine two properties that together violate the assumptions behind TLS-SNI:

* Many users are hosted on the same IP address, and
* Users have the ability to upload certificates for arbitrary names without proving domain control.

When both are true of a hosting provider, an attack is possible. Suppose example.com’s DNS is pointed at the same shared hosting IP address as a site controlled by the attacker. The attacker can run an ACME client to get a TLS-SNI-01 challenge, then install their .acme.invalid certificate on the hosting provider. When the ACME server looks up example.com, it will connect to the hosting provider’s IP address and use SNI to request the .acme.invalid hostname. The hosting provider will serve the certificate uploaded by the attacker. The ACME server will then consider the attacker’s ACME client authorized to issue certificates for example.com, and be willing to issue a certificate for example.com even though the attacker doesn’t actually control it.
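
Modeled minimally in Python (all names and token values here are illustrative), the failure is that the shared host, not the target domain's owner, answers the CA's probe:

```python
import hashlib

def challenge_name(key_authorization: str) -> str:
    # Same derivation the CA uses: hex SHA-256 of the key authorization,
    # split into two labels under .acme.invalid.
    z = hashlib.sha256(key_authorization.encode()).hexdigest()
    return f"{z[:32]}.{z[32:64]}.acme.invalid"

# One SNI-name -> SAN-list table shared by every tenant on the IP
# address that example.com's DNS also points at.
shared_host_certs = {}

# The attacker, a tenant on that IP, requests a TLS-SNI-01 challenge for
# example.com and uploads a matching self-signed certificate; the
# provider accepts the upload without proving control of any name.
attacker_key_auth = "hypothetical-token.attacker-thumbprint"
name = challenge_name(attacker_key_auth)
shared_host_certs[name] = [name]

# The CA resolves example.com to the shared IP, sends `name` via SNI,
# and is served the attacker's certificate, so validation succeeds.
attack_succeeds = challenge_name(attacker_key_auth) in shared_host_certs.get(name, [])
```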

This issue only affects domain names that use hosting providers with the above combination of properties. It is independent of whether the hosting provider itself acts as an ACME client.

Our Plans

Shortly after the issue was reported, we disabled TLS-SNI-01 in Let’s Encrypt. However, a large number of people and organizations use the TLS-SNI-01 challenge type to get certificates. It’s important that we restore service if possible, though we will only do so if we’re confident that the TLS-SNI-01 challenge type is sufficiently secure.

At this time, we believe that the issue can be addressed by having certain service providers implement stronger controls for domains hosted on their infrastructure. We have been in touch with the providers we know to be affected, and mitigations will start being deployed for their systems shortly.

Over the next 48 hours we will be building a list of vulnerable providers and their associated IP addresses. Our tentative plan, once the list is completed, is to re-enable the TLS-SNI-01 challenge type with vulnerable providers blocked from using it.

We’re also going to be soliciting feedback on our plans from our community, partners and other PKI stakeholders prior to re-enabling the TLS-SNI-01 challenge. There is a lot to consider here and we’re looking forward to feedback.

We will post more information and details as our plans progress.

Kurt Roeckx

Jan 10, 2018, 8:16:30 AM1/10/18
to jo...@letsencrypt.org, mozilla-dev-s...@lists.mozilla.org
On Wed, Jan 10, 2018 at 01:33:20AM -0800, josh--- via dev-security-policy wrote:
> * Users have the ability to upload certificates for arbitrary names without proving domain control.

So a user can always take over the domain of another user on
those providers just by installing a (self-signed) certificate?
I guess it works most easily if the other user simply doesn't have SSL.


Kurt

Dmitry Belyavsky

Jan 10, 2018, 8:33:28 AM1/10/18
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org, jo...@letsencrypt.org
Hello,
If SSL is off, the hosting provider may not include any SSL-related
directives in the web server's config on that machine at all.


--
SY, Dmitry Belyavsky

Patrick Figel

Jan 10, 2018, 9:10:54 AM1/10/18
to jo...@letsencrypt.org, mozilla-dev-s...@lists.mozilla.org
First of all: Thanks for the transparency, the detailed report and quick
response to this incident.

A user on Hacker News brought up the possibility that the fairly popular
DirectAdmin control panel might also demonstrate the problematic
behaviour mentioned in your report[1].

I successfully reproduced this on a shared web hosting provider that
uses DirectAdmin. The control panel allowed me to set the vhost domain
to a value like "12345.54321.acme.invalid" and to deploy a self-signed
certificate that included this domain. The web server responded with
said certificate given the following request:

  openssl s_client -servername 12345.54321.acme.invalid \
    -connect 192.0.2.0:443 -showcerts

I did not perform an end-to-end test against a real ACME server, but my
understanding is that this would be enough to issue a certificate for
any other domain on the same IP address.

I couldn't find any public data on DirectAdmin's market share, but I
would expect a fairly large number of domains to be affected.

It might also be worth investigating whether other control panels are
similarly affected.

Patrick

[1]: https://news.ycombinator.com/item?id=16114181

On 10.01.18 10:33, josh--- via dev-security-policy wrote:
> [full text of the announcement quoted; snipped]
> _______________________________________________
> dev-security-policy mailing list
> dev-secur...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>

Jakob Bohm

Jan 10, 2018, 9:34:51 AM1/10/18
to mozilla-dev-s...@lists.mozilla.org
Depending on exactly how the shared web server is misconfigured, it
might still direct the traffic of actual (real) hostnames of other users
to the correct user account, even while matching the SNI to the rogue
certificate. This boils down to the fact that many web servers use
neither the client-supplied SNI value nor the list of certificate SAN
DNS values as an alternative / override / filter for the HTTP/1.x Host:
header and/or the full URL in the HTTP request line.

It is also quite possible that a number of affected hosting systems will
only allow this for domains not already hosted by another user (such as
acme.invalid).

Enforcement on shared hosting systems would be easier if the TLS-SNI-01
ACME mechanism used names such as
1234556-24356476._acme.requested.domain.example.com
since that would allow hosting providers to restrict certificate uploads
that claim to be for other customers' domains. Maybe the name form used
by TLS-SNI-02 could be the same as for the DNS-01 challenge.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

ssimo...@gmail.com

Jan 10, 2018, 10:57:55 AM1/10/18
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, January 10, 2018 at 3:34:51 PM UTC+1, Jakob Bohm wrote:
> Depending on exactly how the shared web server is misconfigured

I don't think the web server is misconfigured: serving a self-signed cert for any domain - even one that I don't own - is something that is absolutely valid and done for testing purposes.

> Enforcement on shared hosting systems would be easier if the TLS-SNI-01
> ACME mechanism used names such as
> 1234556-24356476._acme.requested.domain.example.com
> since that would allow hosting providers to restrict certificate uploads
> that claim to be for other customers domains. Maybe the name form used
> by TLS-SNI-02 could be the same as for the DNS-01 challenge.

I think that the assumptions TLS-SNI-01/2 make are not valid:
- it assumes that you control the IP address the domain resolves to, AND
- it assumes that the TLS certificate returned by the web server responding on that IP is your own.

Those two assumptions are not valid, as SNI is designed exactly for the use case of multiple domains on the same IP, and shared hosts are just providers for that use case.

IMHO, returning a self-signed cert from the IP address that the domain resolves to should not be proof of ownership of that domain.

Gervase Markham

Jan 10, 2018, 11:36:36 AM1/10/18
to Jakob Bohm
On 10/01/18 14:34, Jakob Bohm wrote:
> Enforcement on shared hosting systems would be easier if the TLS-SNI-01
> ACME mechanism used names such as
>   1234556-24356476._acme.requested.domain.example.com
> since that would allow hosting providers to restrict certificate uploads
> that claim to be for other customers domains. 

Hosting providers can simply refuse to accept uploads of any certificate
which contains names ending in "acme.invalid".

AIUI, this is Let's Encrypt's recommended mitigation method.
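
As a sketch (Python; the function name is mine), the provider-side check reduces to a filter over the uploaded certificate's SAN list:

```python
def upload_allowed(san_names) -> bool:
    # Hosting-side mitigation: refuse any customer certificate upload
    # that claims a name under the reserved acme.invalid suffix, which
    # only ACME TLS-SNI-01 validation certificates should ever use.
    return not any(n == "acme.invalid" or n.endswith(".acme.invalid")
                   for n in san_names)
```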

Gerv

Jakob Bohm

Jan 10, 2018, 11:54:57 AM1/10/18
to mozilla-dev-s...@lists.mozilla.org
It is (with this special exception) as much proof as serving a magic
file from the web server at this IP address.

The two possible shared hosting configurations causing problems are:

a) The ability to upload a certificate for *another user's* domain.

b) The ability to upload a certificate for a non-hosted domain.

(b) is actually a valid thing to do, especially if the certificate
contains SAN values for both the uploader's domain and a
non-conflicting domain (that the uploader might be hosting
elsewhere). This is why the TLS-SNI-01 test, which uses a
non-existent (and thus never hosted) domain, fails badly on shared
hosting.

Enforcing restrictions against (a) also prevents existing attacks, such
as uploading a less-trusted certificate for another user as a local DoS
attack.

Adding a special ban, just to please Let's Encrypt (and the new ACME
providers launched recently), is on the other hand a classic example
of an arbitrary annoyance from the perspective of hosting environments
that don't use them (at the hoster level). I fear that a lot of hosting
environments will be belligerent and insist that they have no obligation
to honor the request.

Matthew Hardeman

Jan 10, 2018, 12:04:16 PM1/10/18
to Gervase Markham, mozilla-dev-security-policy, Jakob Bohm
On Wed, Jan 10, 2018 at 10:35 AM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> Hosting providers can simply refuse to accept uploads of any certificate
> which contains names ending in "acme.invalid".
>
> AIUI, this is Let's Encrypt's recommended mitigation method.
>
> Gerv
>
>
That seems remarkably deficient. No other validation mechanism which is
accepted by the community relies upon specific preventative behavior by any
number of random hosting companies on the internet.

Why would that suffice?

Matthew Hardeman

Jan 10, 2018, 12:22:14 PM1/10/18
to jo...@letsencrypt.org, mozilla-dev-security-policy
I applaud LetsEncrypt for disclosing rapidly and thoroughly.

During the period after the initial announcement and before the full
report, I quickly read the ACME spec portion pertaining to TLS-SNI-01.

I had not previously read the details of that validation method, as it
was not one I intended to utilize.

Upon reading, I was surprised that the mechanism had survived scrutiny to
make it through to industry adoption and production use.

There exists an unambiguous comparative deficiency between the TLS-SNI-01
validation mechanism and every other validation mechanism presently
utilized by LetsEncrypt:

Specifically, the portion of the protocol that validates a connection to
the infrastructure responding for a given domain label presents the
entire value of the correct "answer" to the challenge within the
question itself (the TLS SNI name indication sent to the server that DNS
says hosts the domain label being tested).

The result of this is that we can definitively assert that the TLS-SNI-01
protocol provides no evidence that the party who requested the validation
(and would receive the certificate) is the party responsible for the answer
which arises from the infrastructure that the DNS says is the right
infrastructure for a given domain label.

Furthermore, it would not be shocking if a plausible design for a load
balancer or hosting infrastructure were to generate, on demand, a
self-signed or corporate-CA-signed certificate for a domain label
heretofore unknown to the infrastructure, as surfaced in the TLS SNI
name value. That would "just work" in terms of validating any TLS-SNI-01
challenge on behalf of any outside party who happens to know that a
given domain label is directed in the DNS to infrastructure of that
behavioral mode.

LetsEncrypt has been such a shining beacon of good practice in this space
that I feel that many -- certainly it is my own opinion -- view LetsEncrypt
as a "best practices" model CA for domain validation. The continuance of
the TLS-SNI-01 validation method, to my mind, would be a marked departure
from that position.

I believe LetsEncrypt should give careful consideration to the reputational
risks involved. Now that the mode of the problem with this method is in
the public mind, there will be detractors looking to achieve a publishable
mis-issuance. LetsEncrypt's proposed plan to work with hosting service
providers on the Internet seems naive in that light. Participants in that
market come and go all the time. If the plan for returning TLS-SNI-01 to
sufficient integrity for reliance by the WebPKI requires affirmative effort
on the part of an uncountable number of current and future participants in
the hosting space... I do not mean to be rude, but are you saying this
with a straight face?

Just my thoughts...

Matt Hardeman

Gervase Markham

Jan 10, 2018, 12:25:20 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
On 10/01/18 17:04, Matthew Hardeman wrote:
> That seems remarkably deficient. No other validation mechanism which is
> accepted by the community relies upon specific preventative behavior by any
> number of random hosting companies on the internet.

I don't think that's true. If your hosting provider allows other sites
to respond to HTTP requests for your domain, there's a similar
vulnerability in the HTTP-01 checker. One configuration where this can
happen is when multiple sites share an IP but only one gets port 443
(i.e. the pre-SNI support situation), and it's not you.

Or, if an email provider allows people to claim any of the special email
addresses, there's a similar vulnerability in email-based methods.

The "don't allow acme.invalid" mitigation is the easiest one to
implement, but another perfectly good one would be "don't allow people
to deploy certs for sites they don't own or control", or even "don't
allow people to deploy certs for sites your other customers own or
control". Put that way, that doesn't seem like an unreasonable
requirement, does it?

Gerv

Matthew Hardeman

Jan 10, 2018, 12:39:41 PM1/10/18
to Gervase Markham, mozilla-dev-security-policy
On Wed, Jan 10, 2018 at 11:24 AM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> I don't think that's true. If your hosting provider allows other sites
> to respond to HTTP requests for your domain, there's a similar
> vulnerability in the HTTP-01 checker. One configuration where this can
> happen is when multiple sites share an IP but only one gets port 443
> (i.e. the pre-SNI support situation), and it's not you.
>
>
There's a significant difference here. At a minimum the original request
arrives on port 80 and with a proper Host: header identifying the target
website to be validated. Yes, it's possible that your host redirects
that, but presumably you, as the website at that address, have some say
or control over it. Furthermore, at a minimum the target being forwarded
to still has to have knowledge of a calculated challenge value to return
to the validator, which the validator does not reveal in the process of
raising the question. It follows that the target was being manipulated
by the requestor of the validation -- a fact which some failure modes of
the TLS-SNI-01 mechanism would not be able to establish. The TLS-SNI-01
validation process never even surfaces to the hosting infrastructure
exactly what domain label is being validated.


> Or, if an email provider allows people to claim any of the special email
> addresses, there's a similar vulnerability in email-based methods.
>

Those mechanisms have carried that well-known risk for a very long time
now. Certainly, I have no doubt that one can still today bootstrap their
way to a bad certificate via these mechanisms. I note that LetsEncrypt
and ACME chose to eschew those methods. I admit to merely presuming that
they chose not to implement them, at least in part, due to those risks.


> The "don't allow acme.invalid" mitigation is the easiest one to
> implement, but another perfectly good one would be "don't allow people
> to deploy certs for sites they don't own or control", or even "don't
> allow people to deploy certs for sites your other customers own or
> control". Put that way, that doesn't seem like an unreasonable
> requirement, does it?
>

Here again, I think we have a problem. It's regarded as normal and
acceptable at many web hosts to pre-stage sites for domain labels not
yet in use, to allow for development and test deployment. Split-horizon
DNS, /etc/hosts entries, and the like are utilized to direct the "dev
and test browser" to the right infrastructure for the pending label. It
will be an uphill battle to get arbitrary web hosts to implement any one
of the mitigations you've set out, especially when it removes
functionality some of their clients like and doesn't get them any
tangible benefit.

In the course of adopting the 10 blessed methods, did any of the methods
move forward with the expectation that active effort by non-CA
participants, beyond the status quo, would be required to ensure the
continuing reliability of the method?

Wayne Thayer

Jan 10, 2018, 1:00:38 PM1/10/18
to Matthew Hardeman, mozilla-dev-security-policy, Gervase Markham
On Wed, Jan 10, 2018 at 10:39 AM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Wed, Jan 10, 2018 at 11:24 AM, Gervase Markham via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
> >
> > I don't think that's true. If your hosting provider allows other sites
> > to respond to HTTP requests for your domain, there's a similar
> > vulnerability in the HTTP-01 checker. One configuration where this can
> > happen is when multiple sites share an IP but only one gets port 443
> > (i.e. the pre-SNI support situation), and it's not you.
> >
> >
> There's a significant difference here. At a minimum the original request
> arrives on port 80 and with a proper Host: header identifying the target
> website to be validated. Yes, it's possible that your host redirects that,
> but presumably you the website at that address have some say or control
> over that. Furthermore, at a minimum the target being forwarded to still
> has to have knowledge of a calculated challenge value to return to the
> validator which the validator does not reveal in the process of raising the
> question. A fact which arises from this is that the target was being
> manipulated by the requestor of the validation -- a fact which some modes
> of failure of the TLS-SNI-01 mechanism would not be able to assert. The
> TLS-SNI-01 validation process never even surfaces to the hosting
> infrastructure just exactly what domain label is being validated.
>
Although the BRs allow method 6 to be performed over TLS, my understanding
is that Let's Encrypt only supports the HTTP-01 mechanism on port 80 in
order to prevent the exploit that Gerv described. Similarly, my
understanding is that the updated TLS-SNI-02 mechanism prevents the attack
that Matthew described.

>
> > Or, if an email provider allows people to claim any of the special email
> > addresses, there's a similar vulnerability in email-based methods.
> >
>
> Clearly those mechanisms have that well known risk for a very long time
> now. Certainly, I have no doubt that one can still today bootstrap their
> way to a bad certificate via these mechanisms. I note that LetsEncrypt
> and ACME chose to eschew those methods. I admit to merely presuming that
> those chose not to implement, at least in part, due to those risks.
>
>
> > The "don't allow acme.invalid" mitigation is the easiest one to
> > implement, but another perfectly good one would be "don't allow people
> > to deploy certs for sites they don't own or control", or even "don't
> > allow people to deploy certs for sites your other customers own or
> > control". Put that way, that doesn't seem like an unreasonable
> > requirement, does it?
> >
>
> Here again, I think we have a problem. It's regarded as normal and
> acceptable at many web host infrastructures to pre-stage sites for
> domain-labels not yet in use to allow for development and test deployment.
> Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
> direct the "dev and test browser" to the right infrastructure for the
> pending label. It will be an uphill battle to get arbitrary web hosts to
> implement any one of the mitigations you've set out. Especially when it
> reduces a functionality some of their clients like and doesn't seem to get
> them any tangible benefit.
>
I agree with this point. It's common and by design for shared hosting
environments to allow sites to exist without any sort of domain name
validation.


> In the course of adopting the 10 blessed methods, did any of the methods
> move forward with the expectation that active effort on the part of non-CA
> participants versus the status quo would be required in order to ensure the
> continuing reliability of the method?
>

In my opinion, adoption of the 10 blessed methods was only an effort to
document what CAs were already doing in practice so that the catch-all "any
other method" could be removed. There is more work to be done, as can be
seen in the current discussion of method #1 on the CAB Forum Public list.

Matthew Hardeman

Jan 10, 2018, 1:52:12 PM1/10/18
to Wayne Thayer, mozilla-dev-security-policy, Gervase Markham
On Wed, Jan 10, 2018 at 12:00 PM, Wayne Thayer <wth...@mozilla.com> wrote:

>> There's a significant difference here. At a minimum the original request
>> arrives on port 80 and with a proper Host: header identifying the target
>> website to be validated. Yes, it's possible that your host redirects
>> that,
>> but presumably you the website at that address have some say or control
>> over that. Furthermore, at a minimum the target being forwarded to still
>> has to have knowledge of a calculated challenge value to return to the
>> validator which the validator does not reveal in the process of raising
>> the
>> question. A fact which arises from this is that the target was being
>> manipulated by the requestor of the validation -- a fact which some modes
>> of failure of the TLS-SNI-01 mechanism would not be able to assert. The
>> TLS-SNI-01 validation process never even surfaces to the hosting
>> infrastructure just exactly what domain label is being validated.
>>
> Although the BRs allow method 6 to be performed over TLS, my
> understanding is that Let's Encrypt only supports the HTTP-01 mechanism on
> port 80 in order to prevent the exploit that Gerv described. Similarly, my
> understanding is that the updated TLS-SNI-02 mechanism prevents the attack
> that Matthew described.
>

I acknowledge that the TLS-SNI-02 improvements do eliminate certain
risks of the TLS-SNI-01 validation method -- and they do at least
restore a promise that the answering TLS infrastructure to which the
validation request is being made has been modified/configured/affected
by the party who requested the certificate/validation -- but there does
remain a significant gap. I'll discuss that below in my response to your
commentary on the state of web hosting practices.


>>
>> Here again, I think we have a problem. It's regarded as normal and
>> acceptable at many web host infrastructures to pre-stage sites for
>> domain-labels not yet in use to allow for development and test deployment.
>> Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
>> direct the "dev and test browser" to the right infrastructure for the
>> pending label. It will be an uphill battle to get arbitrary web hosts to
>> implement any one of the mitigations you've set out. Especially when it
>> reduces a functionality some of their clients like and doesn't seem to get
>> them any tangible benefit.
>
> I agree with this point. It's common and by design for shared hosting
> environments to allow sites to exist without any sort of domain name
> validation.
>
>

To the extent that this is true, I harbor significant concern as to
whether TLS-SNI-01 could responsibly return to use.

I also see a possibility that the mitigations in TLS-SNI-02 may be
ineffective in this case. TLS-SNI-02 would prevent naive and automatic
accidental success of validations by some infrastructure, but an attacker
who can still create the proper zone in .acme.invalid and upload a custom
certificate to be served for this zone would still be able to succeed at
validation.

My belief is that THAT risk could be further hedged by modifying the
mechanism (say, TLS-SNI-03) to incorporate changes such that the only
SAN dnsName in the certificate is a well-known child of the domain label
to be validated [ex: well-known-acme-pki.example.com for example.com
validation] and that another certificate property (description, org, org
unit, ???) be stuffed with a signed challenge response calculated over
some derivation of the challenge token generated by the CA and
transmitted to the requestor, together with the requestor's account key.

However, even that plan only actually gains security if the hosting
infrastructure would generally apply protection for heretofore unknown
names which are children of existing boarded names on another customer's
account. In other words, how likely is it that if I have a login at some
hosting company, and I have boarded on my account a hosting zone that
includes the labels www.example.com and example.com that a totally separate
login would be allowed to prospectively create a zone called
notreallyexample.example.com? If that's likely or even non-rare, there's
still a problem with the mechanism.

Ryan Sleevi

Jan 10, 2018, 3:39:23 PM1/10/18
to Matthew Hardeman, Gervase Markham, mozilla-dev-security-policy, Wayne Thayer
On Wed, Jan 10, 2018 at 1:51 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I acknowledge that the TLS-SNI-02 improvements do eliminate certain risks
> of the TLS-SNI-01 validation method -- and they do at least restore a
> promise that the answering TLS infrastructure to which the validation
> request is being made has been modified/configured/affected by the party
> who requested the certificate / validation, there does remain a significant
> gap. I'll discuss that below in my response to your commentary on the
> state of web hosting practices.
>

I think it's important to point out that this level of technical
discussion is best directed to the IETF ACME WG, under the auspices of
the IETF Note Well - https://datatracker.ietf.org/wg/acme/about/


> To the extent that this is true, I harbor significant concern that
> TLS-SNI-01 could responsibly return to use.
>
> I also see a possibility that the mitigations in TLS-SNI-02 may be
> ineffective in this case. TLS-SNI-02 would prevent naive and automatic
> accidental success of validations by some infrastructure, but an attacker
> who can still create the proper zone in .acme.invalid and upload a custom
> certificate to be served for this zone would still be able to succeed at
> validation.
>

Can you explain what you mean by 'create a proper zone'? .invalid is an
explicitly reserved TLD.


> However, even that plan only actually gains security if the hosting
> infrastructure would generally apply protection for heretofore unknown
> names which are children of names already boarded on another customer's
> account. In other words, how likely is it that if I have a login at some
> hosting company, and I have boarded on my account a hosting zone that
> includes the labels www.example.com and example.com that a totally
> separate
> login would be allowed to prospectively create a zone called
> notreallyexample.example.com? If that's likely or even non-rare, there's
> still a problem with the mechanism.
>
>
It is likely and non-rare (in fact, quite common, as it turns out). There
are very few providers that match such names against domain authorizations
in some way. Note that this is further 'difficult' because it would also
require that cloud providers be aware of the tree-walking notion of
authorization domain name.

So I don't think this buys any improvement over the status quo, and
actually makes it considerably more complex and failure prone, due to the
cross-sectional lookups, versus the fact that .invalid is a reserved TLD.

Santhan Raj

unread,
Jan 10, 2018, 3:40:12 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
As others have mentioned, the transparency in the disclosure and the quick response are commendable. However, the disclosure doesn't mention whether anyone has exploited this already. Have you started analyzing your existing certs to see if any may have been mis-issued? If so, how?

Thanks,
Santhan

Nick Lamb

unread,
Jan 10, 2018, 4:06:09 PM1/10/18
to dev-secur...@lists.mozilla.org, Patrick Figel
On Wed, 10 Jan 2018 15:10:41 +0100
Patrick Figel via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:

> A user on Hacker News brought up the possibility that the fairly
> popular DirectAdmin control panel might also demonstrate the
> problematic behaviour mentioned in your report[1].

Although arguably tangential to the purpose of m.d.s.policy, I think it
would be really valuable to understand what behaviours are actually out
there and in what sort of volumes.

I know from personal experience that my own popular host lets me create
web hosting for a 2LD I don't actually control. I had management
agreement to take control and began setting up the web site, but then
technical inertia meant control over the name was never actually
transferred. The site is still there, but obviously in that case it needs
an /etc/hosts override to visit from a normal web browser.

Would that host:

* Let me do this even if another of their customers was hosting that
exact site? If so, would mine sometimes "win" over theirs, perhaps if
they temporarily disabled access, or due to some third criterion like
our usernames or seniority of account age?

* Let me do this for sub-domains or sub-sub-domains of other customers,
including perhaps ones which have a wildcard DNS entry so that "my"
site would actually get served to ordinary users?

* Let me do this for DNS names that can't exist (like *.acme.invalid,
leading to the Let's Encrypt issue we started discussing)?


I don't know the answer to any of those questions, but I think that
even if they're tangential to m.d.s.policy somebody needs to find out,
and not just for the company I happen to use.

Matthew Hardeman

unread,
Jan 10, 2018, 4:37:38 PM1/10/18
to Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Wayne Thayer
On Wed, Jan 10, 2018 at 2:38 PM, Ryan Sleevi <ry...@sleevi.com> wrote:


>
> I think it's important to point out that these levels of technical
> discussions are best directed to the IETF ACME WG, under the auspices of
> the IETF NoteWell - https://datatracker.ietf.org/wg/acme/about/
>

Noted. If you think there's potentially merit in the modifications I've
rough sketched here, please indicate as much and I will consider attempting
to pursue as directed.


>
>
>> To the extent that this is true, I harbor significant concern that
>> TLS-SNI-01 could responsibly return to use.
>>
>> I also see a possibility that the mitigations in TLS-SNI-02 may be
>> ineffective in this case. TLS-SNI-02 would prevent naive and automatic
>> accidental success of validations by some infrastructure, but an attacker
>> who can still create the proper zone in .acme.invalid and upload a custom
>> certificate to be served for this zone would still be able to succeed at
>> validation.
>>
>
> Can you explain what you mean by 'create a proper zone'? .invalid is an
> explicitly reserved TLD.
>

I apologize. I realized almost immediately on posting that message that I
had erred significantly in overloading that word. My prior message's
"zone" would have been better written as "web hosting context": roughly,
the embodiment of configuration and resources that determines a hosting
architecture's responses to HTTP requests arriving with one of the domain
labels configured and bound to that context in the Host: header, and, for
TLS, to connections reaching the hosting infrastructure with a TLS SNI
name drawn from that same set of bound domain labels.

You correctly point out that .invalid is a reserved TLD. I imagine there
are a great many hosting infrastructures which allow creating such a new
web hosting context and then prospectively binding DNS labels not yet used
elsewhere on the infrastructure, without any kind of actual DNS
validation. More importantly, I need not imagine it: as Patrick Figel
pointed out, DirectAdmin, a hosting software platform with a not
insignificant number of deployments, appears to do just that.

I imagine that Mr. Thayer can add more to the conversation regarding the
reasons for and extent of this practice, but it does appear that many
shared hosting infrastructures will allow creating new configurations
before the matching real DNS entries point to the infrastructure. There
are pre-staging, development, testing, etc. reasons that some customers
want that. Obviously there are better ways to handle it, and yet, insofar
as others have already found examples, it is a market reality, absent
compelling evidence to the contrary.

In the exact text above, what I meant by "create the proper zone in
.acme.invalid" was to create that web hosting context (or actually a set of
web hosting contexts) and bind it to the Host names that are the
z(i)[0...32].z(i)[33...64].acme.invalid labels that the attacker knows to
be the set which may arrive in the TLS SNI name values for the
validation calls from the CA to the TLS infrastructure. I clarify again
that I'm not speaking of any real DNS mapping at all. I'm speaking of a
mapping between a received TLS SNI label and a web hosting context on the
hosting infrastructure.
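For reference, the construction behind those z(i) labels can be sketched in a few lines of Python, following the ACME draft's TLS-SNI-01 challenge: Z is the lowercase hex SHA-256 digest of the key authorization, split into two 32-character labels under .acme.invalid. The key authorization string below is a placeholder, not a real token.

```python
import hashlib


def tls_sni_01_name(key_authorization: str) -> str:
    """Derive the .acme.invalid SNI name per the ACME draft's
    TLS-SNI-01 challenge: split the hex SHA-256 digest of the key
    authorization into two 32-character labels."""
    z = hashlib.sha256(key_authorization.encode("ascii")).hexdigest()
    return "{}.{}.acme.invalid".format(z[:32], z[32:64])


# The attacker only needs to bind a hosting context to this name;
# no DNS record for it can ever exist, since .invalid is reserved.
name = tls_sni_01_name("token.accountKeyThumbprint")
```

Since the ACME client computes this name itself, an attacker on shared hosting knows exactly which SNI labels to pre-bind before the CA's validation connection arrives.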


>
>
>> However, even that plan only actually gains security if the hosting
>> infrastructure would generally apply protection for heretofore unknown
>> names which are children of names already boarded on another customer's
>> account. In other words, how likely is it that if I have a login at some
>> hosting company, and I have boarded on my account a hosting zone that
>> includes the labels www.example.com and example.com that a totally
>> separate
>> login would be allowed to prospectively create a zone called
>> notreallyexample.example.com? If that's likely or even non-rare, there's
>> still a problem with the mechanism.
>>
>>
> It is likely and non-rare (infact, quite common as it turns out). There
> are very few that match domain authorizations in some way. Note that this
> is further 'difficult' because it would also require cloud providers be
> aware of the tree-walking notion of authorization domain name.
>
> So I don't think this buys any improvement over the status quo, and
> actually makes it considerably more complex and failure prone, due to the
> cross-sectional lookups, versus the fact that .invalid is a reserved TLD.
>

If this is the case, I can only conclude that all presently proposed
enhancements to TLS-SNI-01 and TLS-SNI-02 validation, including my own
rough sketch recommendations, are deficient for improvement of security and
all of these TLS-SNI validation mechanisms are materially less secure and
less useful than the other ACME methods that Let's Encrypt presently
implements.

All the recommendations and guidance in the world are unlikely to timely
change the various (and there are so many) hosting providers' behavior with
regard to allowing creation of web hosting contexts for labels like
"*.*.acme.invalid". The CAs are beholden to the CABforum and root
programs. The various web hosts are not.

That being the case, I would recommend that the proper change to the
TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
of those mechanisms.

As long as the web hosting infrastructure does not automatically create new
contexts for heretofore unseen labels, it won't be possible to validate in
an automated fashion whether a given hosting infrastructure would allow
any random customer to create some blah.blah.acme.invalid label and bind
it to a certificate that said random customer controls. Because of the
various incentives and motivations, it seems almost inevitable that this
would eventually occur. When a mis-issuance arises from that scenario, I
wonder how the community would view it?

Thanks,

Matt Hardeman

Matthew Hardeman

unread,
Jan 10, 2018, 4:45:22 PM1/10/18
to Nick Lamb, Patrick Figel, dev-secur...@lists.mozilla.org
I agree with Nick's questions, and I can certainly see the relevance in
matching what actually happens out there to the effectiveness and
appropriateness of the various domain validation mechanisms.

Having said that, I think it should effectively be a "read only" affair,
shaping community and CA response to the conditions that exist rather than
striving for better conditions. I think it would be impractical to assume
that the community can persuade the entire web hosting industry to effect
meaningful universal change in a relevantly short time frame.

Matthew Hardeman

unread,
Jan 10, 2018, 4:55:51 PM1/10/18
to Nick Lamb, Patrick Figel, dev-secur...@lists.mozilla.org
As another tangential question on the advisability of resuming the
TLS-SNI-01 validation method: can/will Let's Encrypt share any data on the
prevalence of the various validation mechanisms over time and how they
stack up against each other? It might also be helpful to know attempted
versus successfully completed counts.

I wonder how big a problem it is if all of the TLS-SNI-01/02 mechanisms go
away?

On Wed, Jan 10, 2018 at 3:45 PM, Matthew Hardeman <mhar...@gmail.com>
wrote:

Ryan Sleevi

unread,
Jan 10, 2018, 4:57:52 PM1/10/18
to Matthew Hardeman, Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Wayne Thayer
On Wed, Jan 10, 2018 at 4:37 PM, Matthew Hardeman <mhar...@gmail.com>
wrote:
>
> In the exact text above, what I meant by "create the proper zone in
> .acme.invalid" was to create that web hosting context (or actually set of
> web hosting contexts) and bind to the Host names that are the
> z(i)[0...32].z(i)[33...64].acme.invalid labels that the attacker knows to
> be the set of those which may arrive in the TLS SNI name values for the
> validation calls from the CA to the TLS infrastructure. I clarify again
> that I'm not speaking of any real DNS mapping at all. I'm speaking of a
> mapping between a received TLS SNI label to a web hosting context on the
> hosting infrastructure.
>

Got it. Yes, a large number of web hosting providers allow for potentially
binding names not yet bound to DNS.

This becomes an issue iff they share the same IP (which is a far more
varied story) and they allow control over the SNI<->certificate mapping
(which is also far more variable). So the lack of a binding to a 'real'
name in and of itself is not an issue, merely the confluence of things.


> If this is the case, I can only conclude that all presently proposed
> enhancements to TLS-SNI-01 and TLS-SNI-02 validation, including my own
> rough sketch recommendations, are deficient for improvement of security and
> all of these TLS-SNI validation mechanisms are materially less secure and
> less useful than the other ACME methods that Let's Encrypt presently
> implements.
>

Note that the presumptive discussion re: .well-known has ignored that the
Host header requirements are underspecified, thus the fundamental issue
still exists for that too. That said, there absolutely has been both
tension regarding and concern over the use of file-based or
certificate-based proofs of control, rather than DNS-based proofs. This is
a complex tradeoff though - unquestionably, the ability to use the
certificate-based proof has greatly expanded the ease in which to get a
certificate, and for the vast majority of those certificates, this is not
at all a security issue.

I think the apt comparison is about introducing a 'new' reserved e-mail
address, in addition to the ones already in the Baseline Requirements. The
conversation being held now is a natural consequence of removing the 'any
other' method and performing more rigorous examination of the application
in practice.

For comparison of "What could be worse", you could imagine a CA using the
.10 method to assert the Random Value (which, unlike .7, is not bounded in
its validity) is expressed via the serial number. In this case, a CA could
validate a request and issue a certificate. Then, every 3 years (or 2 years
starting later this year), connect to the host, see that it's serving their
previously issued certificate, assert that the "Serial Number" constitutes
the Random Value, and perform no other authorization checks beyond that. In
a sense, fully removing any reasonable assertion that the domain holder has
authorized (by proof of acceptance) the issuance.


> All the recommendations and guidance in the world is unlikely to timely
> change the various (and there are so many) hosting providers' behavior with
> regards to allowing creating of web hosting contexts for labels like
> "*.*.acme.invalid". The CAs are beholden to the CABforum and root
> programs. The various web hosts are not.
>

Agreed; although, pragmatically, I hope that the visibility of the issue,
and the excellent documentation provided by Let's Encrypt, may allow us the
opportunity to provide a graceful transition into a more robust
implementation and a more restrictive version of .10 over the coming months.


> That being the case, I would recommend that the proper change to the
> TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
> of those mechanisms.
>

I'm not sure I agree that haste is advisable or desirable, but I'm still
evaluating. At the core, we're debating whether something should be opt-out
by default (which blacklisting .invalid is essentially doing) or opt-in. An
opt-in mechanism cannot be signaled in-band within the certificate, but may
be signalable in-band to the TLS termination, such as via a TLS extension
or via the use of an ALPN protocol identifier (such as "acme").

End-users (e.g. those who are not cloud) with full-stack control of their
TLS termination can 'simply' add the "acme" ALPN advertisement to signal
their configuration.

Cloud providers that provide a degree of segmentation and isolation can
similarly allow the "acme" ALPN protocol to be negotiated, and complete the
enrollment (either themselves, as some providers do, or by allowing their
customers to do so).

Providers in that proverbial 'long tail' that don't update to explicitly
advertise the TLS extension or ALPN identifier (or equivalent TLS handshake
signal) would otherwise fail the ACME challenge, since it wouldn't be clear
that it was safe to do so.
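A minimal sketch of RFC 7301-style selection logic shows why this opt-in fails closed for un-updated hosts. The "acme" identifier here is the hypothetical one proposed above, not a registered ALPN protocol ID, and this is a simulation of the negotiation, not a TLS implementation.

```python
def alpn_select(server_protocols, client_protocols):
    """RFC 7301-style selection: the server picks the first of its
    own supported protocols that the client also advertised, or
    returns None when there is no overlap (no protocol negotiated)."""
    for proto in server_protocols:
        if proto in client_protocols:
            return proto
    return None


# A host that explicitly opts in to the hypothetical "acme"
# identifier can complete the validation handshake...
assert alpn_select(["acme", "http/1.1"], ["acme"]) == "acme"

# ...while a long-tail host that never updated its configuration
# never negotiates it, so the ACME challenge fails closed.
assert alpn_select(["http/1.1"], ["acme"]) is None
```

The design point is that the signal lives in the TLS handshake itself, so a shared-hosting customer cannot forge it merely by uploading a certificate.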

As long as the web hosting infrastructure does not automatically create new
> contexts for heretofore never seen labels, it won't be possible to fully
> validate in an automated fashion whether or not a given hosting
> infrastructure would or would not allow any random customer to create some
> blah.blah.acme.invalid label and bind it to a certificate that said random
> customer controls. Because of the various incentives and motivations, it
> seems almost inevitable that it would eventually occur. When a
> mis-issuance arises resulting from that scenario, I wonder how the
> community would view that?
>

I'm not sure I'd classify it as misissuance, no more than those who were
able to get certificates by registering mailboxes such as 'hostmaster' or
'webmaster' on free email providers (despite the RFC's that reserve such
names).

While I admit that .invalid (and needing to blacklist it) is
unquestionably a backwards-incompatible change to the 'real world' and,
unfortunately, did not turn out to be as safe as presumed, the method
itself remains in the BRs, and as the example showed, can be creatively
used (or is it abused?) while fully complying with the BRs. Much as with a
cloud provider that allowed unrestricted access to .well-known across
hosting accounts, or a web message board that allowed direct file upload
into .well-known, at some point we need to acknowledge that what happened
was fully permissible, question whether or not it was documented and
acknowledged as risky (both the TLS-SNI and .well-known file methods are
called out as such in the ACME draft), and ask what steps the CA took to
assuage, mitigate, or minimize those risks.

Tim Hollebeek

unread,
Jan 10, 2018, 5:12:10 PM1/10/18
to ry...@sleevi.com, Matthew Hardeman, mozilla-dev-security-policy, Gervase Markham, Wayne Thayer

> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
> its validity) is expressed via the serial number. In this case, a CA could
> validate a request and issue a certificate. Then, every 3 years (or 2 years
> starting later this year), connect to the host, see that it's serving their
> previously issued certificate, assert that the "Serial Number" constitutes
> the Random Value, and perform no other authorization checks beyond that. In
> a sense, fully removing any reasonable assertion that the domain holder has
> authorized (by proof of acceptance) the issuance.

My "Freshness Value" ballot should fix this, by requiring that Freshness
Values actually be fresh.

-Tim

Matthew Hardeman

unread,
Jan 10, 2018, 5:54:03 PM1/10/18
to Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Wayne Thayer
On Wed, Jan 10, 2018 at 3:57 PM, Ryan Sleevi <ry...@sleevi.com> wrote:

>
>
> Note that the presumptive discussion re: .well-known has ignored that the
> Host header requirements are underspecified, thus the fundamental issue
> still exists for that too. That said, there absolutely has been both
> tension regarding and concern over the use of file-based or
> certificate-based proofs of control, rather than DNS-based proofs. This is
> a complex tradeoff though - unquestionably, the ability to use the
> certificate-based proof has greatly expanded the ease in which to get a
> certificate, and for the vast majority of those certificates, this is not
> at all a security issue.
>
>
As you note, http-01 as strictly specified has some weaknesses. The Host
header requirement should be shored up. Redirect chasing, if any, should
be shored up. Etc. I do believe that LE's implementation largely mitigates
the major vulnerability. What vulnerability remains requires that a web
host literally fail in their duty to protect what resources may be served
up under quite literally the same named label as is being validated. The
difference, from a web host's perspective, between that duty and the duty
we would like to impose upon them in TLS-SNI-01 is that it is commonly
expected that the web host will take responsibility for ensuring that only
the customer paying them for www.example.com is able to publish content at
www.example.com. Additionally, the community, the customer, and the web
host can all intellectually understand, without a great deal of complex
thought, why a resource under the correct domain label must be controlled
by the customer. What's less clear to all, I should think, is why the web
host has a duty not to serve some resource under a totally unrelated name
like rofl.blah.acme.invalid in defense of his customer www.example.com.

Ultimately, as you suggest, I wonder if the [hehehe] "shocking" conclusion
of all of this is that, perhaps, if we seek to demonstrate meaningful
control of a domain or DNS label, the proper way to do so is by requiring
specific manipulation of only the DNS infrastructure, as, for example, in
dns-01? DNS infrastructure and its behavior are literally in scope of
demonstration of meaningful control of a domain label. Any behavior on the
part of any web host really technically isn't. I do understand the reasons
it's presently allowed that non-DNS mechanisms be used.


>
> For comparison of "What could be worse", you could imagine a CA using the
> .10 method to assert the Random Value (which, unlike .7, is not bounded in
> its validity) is expressed via the serial number. In this case, a CA could
> validate a request and issue a certificate. Then, every 3 years (or 2 years
> starting later this year), connect to the host, see that it's serving their
> previously issued certificate, assert that the "Serial Number" constitutes
> the Random Value, and perform no other authorization checks beyond that. In
> a sense, fully removing any reasonable assertion that the domain holder has
> authorized (by proof of acceptance) the issuance.
>

That, indeed, is a chilling picture. I'd like to think the community's
response to any such stretch of the rules would be along the lines of "Of
course, you're entirely correct. Technically this was permitted. Oh, by
the way, we're pulling your roots, we've decided you're too clever to be
trusted."


>
>
>> That being the case, I would recommend that the proper change to the
>> TLS-SNI-0X mechanisms at the IETF level would be the hasty discontinuance
>> of those mechanisms.
>>
>
> I'm not sure I agree that haste is advisable or desirable, but I'm still
> evaluating. At the core, we're debating whether something should be opt-out
> by default (which blacklisting .invalid is essentially doing) or opt-in. An
> opt-in mechanism cannot be signaled in-band within the certificate, but may
> be signalable in-band to the TLS termination, such as via a TLS extension
> or via the use of an ALPN protocol identifier (such as "acme").
>
>
The TLS extension or ALPN protocol seem feasible to secure, though
obviously there's a lot of infrastructure change and deployment to get
there.


>
> As long as the web hosting infrastructure does not automatically create
>> new contexts for heretofore never seen labels, it won't be possible to
>> fully validate in an automated fashion whether or not a given hosting
>> infrastructure would or would not allow any random customer to create some
>> blah.blah.acme.invalid label and bind it to a certificate that said random
>> customer controls. Because of the various incentives and motivations, it
>> seems almost inevitable that it would eventually occur. When a
>> mis-issuance arises resulting from that scenario, I wonder how the
>> community would view that?
>>
>
> I'm not sure I'd classify it as misissuance, no more than those who were
> able to get certificates by registering mailboxes such as 'hostmaster' or
> 'webmaster' on free email providers (despite the RFC's that reserve such
> names).
>

Perhaps "misissuance" is the wrong term, in a strict sense. Maybe instead
we could call it "irresponsible issuance". What distinguishes, in my mind,
an issuance following the described attack on TLS-SNI-01 from an attack
via HTTP-01 on a web host that has a shared .well-known directory across
all clients is that in the TLS-SNI-01 exploit, the web host had no
pre-existing duty to know that new web contexts with names entirely
unrelated to current client contexts could and would cause security risks
for his customers. It is indisputable that the web host who shares a
world-writeable .well-known directory across all his clients is doing
something wrong and has gone from being a distributor of data to a
publisher of data. If there's a clear failing of a baseline
responsibility of a web host to their customer and that results in a bad
issuance, I think the CA can sleep soundly. If there is not such a clear
and affirmative duty of a particular behavior on the part of the web host,
and yet an improper third party has managed to finagle a certificate, I
think the CA has to start sweating about such issuance that occurred
because the web host didn't know about, or didn't want to invest in, what
you've called a backwards-incompatible change to the existing "real world".

Matthew Hardeman

unread,
Jan 10, 2018, 5:58:56 PM1/10/18
to Tim Hollebeek, ry...@sleevi.com, mozilla-dev-security-policy, Gervase Markham, Wayne Thayer
You've just triggered me with an early 2000s flashback.

Now I can't get that "So fresh and so clean, clean..." rap line out of my
head from OutKast's "So Fresh, So Clean".

On Wed, Jan 10, 2018 at 4:11 PM, Tim Hollebeek <tim.ho...@digicert.com>
wrote:

Jakob Bohm

unread,
Jan 10, 2018, 6:36:03 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
Agree.

Hence my suggestion that TLS-SNI-0next use a name under the customer's
domain (such as the name used for DNS-01), not a name under .invalid.

> Ultimately, as you suggest, I wonder if the [hehehe] "shocking" conclusion
> of all of this is that, perhaps, if we seek to demonstrate meaningful
> control of a domain or DNS label, the proper way to do so is by requiring
> specific manipulation of only the DNS infrastructure, as, for example, in
> dns-01? DNS infrastructure and its behavior are literally in scope of
> demonstration of meaningful control of a domain label. Any behavior on
> part of any web host really technically isn't. I do understand the reasons
> it's presently allowed that non-DNS mechanisms be used.
>

Disagree.

In the world of real hosting providers, users often don't get
to control the DNS of a domain purchased through that hosting provider,
while they might still have the ability to "purchase" (for free from
letsencrypt.org) their own certificates and the ability to configure
simple aspects of their website, such as available files.

But wouldn't the backward compatibility features of TLS itself (and/or
some permissive TLS / https implementations) either ignore ALPN
extensions when they "know" they are only going to serve up HTTP/1.x
(not HTTP/SPDY), or complete the TLS handshake before deciding that they
don't have an "acme" service to connect to?

And even if it wasn't so, most sites that do "control" the whole stack
and run on their own dedicated machine and IP probably lack the ability
and/or patience to modify the https code in "their" web server.

This may come back to the unfortunate use of BR language to redefine the
plain word "misissuance".

To me, "misissuance" means either issuing a certificate to a party not
actually the entity identified in that certificate and/or issuing a
certificate that is known to cause harm (such as a certificate for
IP:127.0.0.1 or an MD5 certificate with no random serial mitigation).

A certificate can have been issued according to all formal procedures
and still be misissuance (Imagine a rogue employee in a privileged
position at the subscribing organization going through all the
procedures but not actually letting the organization have the private
key). Such would not be cases of CA non-compliance (with associated
bugzilla bugs), but would still justify revocation with misissuance
related reason codes etc.

Another certificate can have been issued in violation of all formal
procedures but (by dumb luck) been issued to the right party and thus
not misissued anyway (though proving so may be difficult within the
short timeframe needed to revoke it due to lack of reason to believe it
wasn't misissued in the real sense).

Jakob Bohm

unread,
Jan 10, 2018, 6:49:56 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
On 10/01/2018 18:39, Matthew Hardeman wrote:

> Here again, I think we have a problem. It's regarded as normal and
> acceptable at many web host infrastructures to pre-stage sites for
> domain-labels not yet in use to allow for development and test deployment.
> Split horizon DNS or other in-browser or /etc/hosts, etc, are utilized to
> direct the "dev and test browser" to the right infrastructure for the
> pending label. It will be an uphill battle to get arbitrary web hosts to
> implement any one of the mitigations you've set out. Especially when it
> reduces a functionality some of their clients like and doesn't seem to get
> them any tangible benefit.
>

Another common use of setting up web hosting for a label before pointing
it there is to simply keep an existing site running on another host
until the new one is fully configured and validated, then switching over
the DNS to the new server (with the usual DNS-caching overlap in time).

A specific use of hosting ".invalid" domain labels is to temporarily
disable a site that is in some state of maintenance/construction/etc.
with an intent to switch to a valid label later, especially if the
intended valid label is currently pointing to another vhost on the same
host.

Thus setup for previously unknown domain labels that don't yet point to a
host is the normal situation whenever customers move to that host. And
with all the new TLDs allowed by ICANN, it is no longer practical or
reliable for providers to keep whitelists and blacklists of hostable
TLDs.

Ryan Sleevi

unread,
Jan 10, 2018, 7:10:20 PM1/10/18
to Jakob Bohm, mozilla-dev-security-policy
On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> Agree.
>
> Hence my suggestion that TLS-SNI-0next use a name under the customer's
> domain (such as the name used for DNS-01), not a name under .invalid.


I thought it had been pointed out, but that doesn't actually address the
vulnerability. It presumes a hierarchal nature of certificate/account
management that simply does not align with how the real world and how real
providers work.

I can understand why it might seem intuitive - and, I agree, for providers
that create a lock between customer<->domain hierarchy, that might work -
but I would assert that they're not unique. And given that the concern is
precisely about those that *don't* do such bonding, it simply fails as a
solution.

In short, any solution that relies solely on the name will be technically
deficient in the real world, as this issue shows us. So any 'solution' that
proposes to shift the names around misunderstands that risk.


> Disagree.
>
> In the world of real hosting providers, sometimes users often don't get
> to control the DNS of a domain purchased through that hosting provider,
> while they might still have the ability to "purchase" (for free from
> letsencrypt.org) their own certificates and the ability to configure
> simple aspects of their website, such as available files.
>

If they can't control the DNS (for permission reasons), then they didn't
really purchase the domain. If they can't control the DNS for technical
reasons, then that's a deficiency of the hosting provider, and that doesn't
mean we should weaken the validation methods to accommodate those hosts who
can't invest in infrastructure.

But wouldn't the backward compatibility features of TLS itself (and/or
> some permissive TLS / https implementations) either ignore ALPN
> extensions when they "know" they are only going to serve up HTTP/1.x
> (not HTTP/SPDY) or complete the TLS handshake before deciding that they
> don't have an "acme" service to connect to?
>

No. You've misunderstood how ALPN works then.


> And even if it wasn't so, most sites that do "control" the whole stack,
> and run on their own dedicated machine and IP probably lack the ability
> and/or patience to modify the https code in "their" web server.
>

Do you believe people are bespoke minting these ACME challenges on the fly?
Because that's not how it's working in the real world - it's being based on
tooling, generally directly integrated in the server to automatically
enroll, manage, and renew (and indeed, that is explicitly what is
recommended as the ACME integration). In such a model - i.e. how it works
today - that lack of ability/patience is a non-issue, because it's simply
handled by the ACME client without any additional work by the server
operator - the same as it is today.


> This may come back to the unfortunate use of BR language to redefine the
> plain word "misissuance".
>

You can blame the BRs, but this is really just a matter of the language
of PKI, and it is not a new debate by any means, so probably not worth
haggling about here, inasmuch as the nuance doesn't alter the
conclusions; the point stands that "The CA met their obligations, but
the undesirable result happened". The solution for that is to fix the
requirements to prevent undesirable results. If "The CA didn't meet their
obligations", well, that's a different conversation.

Ryan Sleevi

unread,
Jan 10, 2018, 7:17:34 PM1/10/18
to Matthew Hardeman, Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Wayne Thayer
On Wed, Jan 10, 2018 at 5:53 PM, Matthew Hardeman <mhar...@gmail.com>
wrote:

> For comparison of "What could be worse", you could imagine a CA using the
>> .10 method to assert the Random Value (which, unlike .7, is not bounded in
>> its validity) is expressed via the serial number. In this case, a CA could
>> validate a request and issue a certificate. Then, every 3 years (or 2 years
>> starting later this year), connect to the host, see that it's serving their
>> previously issued certificate, assert that the "Serial Number" constitutes
>> the Random Value, and perform no other authorization checks beyond that. In
>> a sense, fully removing any reasonable assertion that the domain holder has
>> authorized (by proof of acceptance) the issuance.
>>
>
> That, indeed, is a chilling picture. I'd like to think the community's
> response to any such stretch of the rules would be along the lines of "Of
> course, you're entirely correct. Technically this was permitted. Oh, by
> the way, we're pulling your roots, we've decided you're too clever to be
> trusted."
>

GlobalSign proposed this as a new method -
https://cabforum.org/pipermail/validation/2017-May/000553.html
Amazon pointed out that .10 already permitted this -
https://cabforum.org/pipermail/validation/2017-May/000557.html

Your reaction means you must be one of the "worrywarts who treat
certificate owners like criminals" though, in the words of Steve Medin of
Symantec/Digicert -
https://cabforum.org/pipermail/validation/2017-May/000554.html - who was
also excited about the 'brand stickiness' it would create ('brand
stickiness' being the term typically used for how likely or difficult it
is for a customer to switch to another, potentially more competent, CA -
in this case, stickiness created by the ease of the lower-security
method).

Matthew Hardeman

unread,
Jan 10, 2018, 7:36:37 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
Wow. The economic incentives for behaving badly clearly were at work in those.

I think I am one of those worrywarts, in fact.

Also, I just reread and contemplated the .10 method's definition. It's lacking. A legitimate definition would have included a normative reference clarifying what accessing something "on the authorization domain name" over TLS means, and likely would have required that the SNI value be the authorization domain name. As written, the claim that TLS-SNI-01 complies with .10 is really just a tenuous land-grab.

One of these days I need to sign the IPR waiver and join the cabforum mailing list as an interested party.

Matt Palmer

unread,
Jan 10, 2018, 8:42:43 PM1/10/18
to dev-secur...@lists.mozilla.org
On Wed, Jan 10, 2018 at 05:24:41PM +0000, Gervase Markham via dev-security-policy wrote:
> On 10/01/18 17:04, Matthew Hardeman wrote:
> > That seems remarkably deficient. No other validation mechanism which is
> > accepted by the community relies upon specific preventative behavior by any
> > number of random hosting companies on the internet.
>
> I don't think that's true. If your hosting provider allows other sites
> to respond to HTTP requests for your domain, there's a similar
> vulnerability in the HTTP-01 checker.

That's quite different, though, from your hosting provider allowing other
sites to respond to SNI requests for some completely other domain that
happens to then authorise certificate issuance for your domain.

> Or, if an email provider allows people to claim any of the special email
> addresses, there's a similar vulnerability in email-based methods.

Yeah, and that's a continuing gift of amusing blog posts ("check out who I
got a certificate for this time!"). I'd hope we'd all have learnt from
that, though, and not be looking to cheer on other validation methods that
suffer from the same problems. Playing whack-a-mole with hosting providers
to get them to do something that is *only* needed to secure certificate
issuance, and provides zero operational benefit otherwise, seems like a
losing proposition.

- Matt

Jakob Bohm

unread,
Jan 10, 2018, 8:47:05 PM1/10/18
to mozilla-dev-s...@lists.mozilla.org
On 11/01/2018 01:08, Ryan Sleevi wrote:
> On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>>
>> Agree.
>>
>> Hence my suggestion that TLS-SNI-0next use a name under the customer's
>> domain (such as the name used for DNS-01), not a name under .invalid.
>
>
> I thought it had been pointed out, but that doesn't actually address the
> vulnerability. It presumes a hierarchal nature of certificate/account
> management that simply does not align with how the real world and how real
> providers work.
>

There are TWO related vulnerabilities at work here:

1. A lot of hosting providers allow users to provision certificates for
whatever.acme.invalid on SNI-capable hosts, even though those users
do not own the domain for which the challenge encoded in the
"whatever" part was issued. Other than adding code to specifically
block "acme.invalid" to every software stack/configuration used by
hosting providers, this is almost unsolvable at the host provider end,
thus it may be easier to change the TLS-SNI-xx method in some way.

2. A much smaller group of hosting providers allow users to set up
hosting for subdomains of domains already provisioned for other users
(e.g. user B setting up hosting for whatever.acme.example.com when
user A is already using the host for example.com). This case is not
solved by changing the SNI challenge to be a subdomain of the domain
being validated. But since this is a smaller population of hosting
providers, getting them to at least enforce that the parent domain
user needs to authorize which other users can host a subdomain with
them is much more tractable, especially as it has obvious direct
security benefits outside the ACME context.

(Hosting providers who allow uploading certificates for the specific
DNS/SNI names of other users are a security problem in itself, as it
could allow e.g. uploading an untrusted exact domain cert to disrupt
another user's site having only a wildcard certificate).

Note that neither issue #1, nor issue #2 involves any kind of DNS
checking or walking, as it is perfectly OK for either or both involved
domains to not point their DNS at the configured server at any given
point in time. Of course the CA would use its view of the DNS to
locate the host that will be probed for the challenge certificate, but
the probed host's configuration need not match that DNS.

If a popular hosting package such as DirectAdmin suffers from issue #2,
then that would rule out the subdomain solution.
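Issue #1's provider-side mitigation (blocking "acme.invalid") amounts to a name check wherever customers upload certificates or configure vhosts. A minimal sketch, assuming a provider applies one shared policy function (the function name and the conservative choice of blocking the whole reserved ".invalid" TLD are illustrative, not any real provider's code):

```python
def is_forbidden_vhost_name(hostname: str) -> bool:
    """Reject names reserved for ACME TLS-SNI-01 validation.

    TLS-SNI-01 challenge certificates carry names of the form
    <token>.<token>.acme.invalid, so no ordinary hosting customer
    should ever be able to claim a name under ".acme.invalid" (or,
    more conservatively, under the reserved ".invalid" TLD at all).
    """
    labels = hostname.lower().rstrip(".").split(".")
    return labels[-1] == "invalid"
```

A provider would run such a check both when a customer uploads a certificate and when a vhost/SNI name is configured, which is exactly the "adding code to every software stack" burden described above.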

> I can understand why it might seem intuitive - and, I agree, for providers
> that create a lock between customer<->domain hierarchy, that might work -
> but I would assert that they're not unique. And given that the concern is
> precisely about those that *don't* do such bonding, it simply fails as a
> solution.
>
> In short, any solution that relies solely on the name will be technically
> deficient in the real world, as this issue shows us. So any 'solution' that
> proposes to shift the names around is to misunderstand that risk.
>
>
>> Disagree.
>>
>> In the world of real hosting providers, sometimes users often don't get
>> to control the DNS of a domain purchased through that hosting provider,
>> while they might still have the ability to "purchase" (for free from
>> letsencrypt.org) their own certificates and the ability to configure
>> simple aspects of their website, such as available files.
>>
>
> If they can't control the DNS (for permission reasons), then they didn't
> really purchase the domain. If they can't control the DNS for technical
> reasons, then that's a deficiency of the hosting provider, and that doesn't
> mean we should weaken the validation methods to accommodate those hosts who
> can't invest in infrastructure.

Reality at many providers is like that. Users typically need to jump
through hoops to transfer their domains to a 3rd-party DNS hoster that
allows them to change DNS entries, after which the original hosting
provider stops helping them with their "unsupported" configuration,
thereby also forcing them to switch to more expensive hosting providers.

On the other hand, such providers will often (included or at extra fee)
allow provisioning arbitrary subdomains that are then typically added to
the HTTP(S) vhost configuration and the hosted DNS configuration, which
is good enough for TLS-SNI-modified-to-use-subdomain and HTTP-01, but
won't allow users to respond to DNS-01 challenges, and may or may not
allow users to respond to TLS-SNI-01 challenges (the feature allowing
responses to TLS-SNI-01 challenges is likely to suffer from security
issue #1).

The overall HTTPS-everywhere goal would fail if we restricted ACME to
only "the best" providers running "the most popular" server software in
"the latest version".


>
> But wouldn't the backward compatibility features of TLS itself (and/or
>> some permissive TLS / https implementations) either ignore ALPN
>> extensions when they "know" they are only going to serve up HTTP/1.x
>> (not HTTP/SPDY) or complete the TLS handshake before deciding that they
>> don't have an "acme" service to connect to?
>>
>
> No. You've misunderstood how ALPN works then.
>

Just reread RFC7301. While it does say that servers SHALL reject such
connections (or at least not send back an ALPN indicating a selected
value, as if not implementing the extension), I find it likely that some
combinations of TLS implementation and application implementation will
blindly accept whatever unknown protocol identifier a client lists as
the only option.
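For reference, the server-side selection rule from RFC 7301 (section 3.2, the no_application_protocol alert) can be sketched as follows; a stack that skips the final rejection step is exactly the "blindly accept" case speculated about above. The class and function names are illustrative:

```python
class NoApplicationProtocol(Exception):
    """Models RFC 7301's fatal no_application_protocol alert."""

def select_alpn(client_protocols, server_protocols):
    # RFC 7301: the server selects from its OWN supported list, in its
    # own order of preference, considering only what the client offered.
    for proto in server_protocols:
        if proto in client_protocols:
            return proto
    # No overlap: a conforming server does not echo back an unknown
    # client value; it aborts the handshake with a fatal alert.
    raise NoApplicationProtocol(client_protocols)
```

So a client offering only a hypothetical "acme" protocol identifier should get a handshake failure from a conforming server that doesn't support it, rather than a completed TLS connection.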


>
>> And even if it wasn't so, most sites that do "control" the whole stack,
>> and run on their own dedicated machine and IP probably lack the ability
>> and/or patience to modify the https code in "their" web server.
>>
>
> Do you believe people are bespoke minting these ACME challenges on the fly?
> Because that's not how it's working in the real world - it's being based on
> tooling, generally directly integrated in the server to automatically
> enroll, manage, and renew (and indeed, that is explicitly what is
> recommended as the ACME integration). In such a model - i.e. how it works
> today - that lack of ability/patience is a non-issue, because it's simply
> handled by the ACME client without any additional work by the server
> operator - the same as it is today.
>

No, but I do know that not every HTTPS server in existence supports
deep acme integration like Apache and a few other famous ones do.

Those are currently supported by using scripting to externally
reconfigure them to do their part in ACME negotiations, such as by
simply adding a TLS-SNI-01 challenge response certificate to the
existing SNI configuration of a server. But requiring them to support a
new protocol extension (in the form of a specially handled ALPN value or
otherwise) would not allow that.

Ryan Sleevi

unread,
Jan 10, 2018, 11:39:12 PM1/10/18
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Thu, Jan 11, 2018 at 2:46 AM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 11/01/2018 01:08, Ryan Sleevi wrote:
> > On Wed, Jan 10, 2018 at 6:35 PM, Jakob Bohm via dev-security-policy <
> > dev-secur...@lists.mozilla.org> wrote:
> >>
> >> Agree.
> >>
> >> Hence my suggestion that TLS-SNI-0next use a name under the customer's
> >> domain (such as the name used for DNS-01), not a name under .invalid.
> >
> >
> > I thought it had been pointed out, but that doesn't actually address the
> > vulnerability. It presumes a hierarchal nature of certificate/account
> > management that simply does not align with how the real world and how
> real
> > providers work.
> >
>
> There are TWO related vulnerabilities at work here:
>
> 1. A lot of hosting providers allow users to provision certificates for
> whatever.acme.invalid on SNI-capable hosts, even though those users
> do not own the domain for which the challenge encoded in the
> "whatever" part was issued. Other than adding code to specifically
> block "acme.invalid" to every software stack/configuration used by
> hosting providers, this is almost unsolvable at the host provider end,
> thus it may be easier to change the TLS-SNI-xx method in some way.
>
> 2. A much smaller group of hosting providers allow users to set up
> hosting for subdomains of domains already provisioned for other users
> (e.g. user B setting up hosting for whatever.acme.example.com when
> user A is already using the host for example.com). This case is not
> solved by changing the SNI challenge to be a subdomain of the domain
> being validated. But since this is a smaller population of hosting
> providers, getting them to at least enforce that the parent domain
> user needs to authorize which other users can host a subdomain with
> them is much more tractable, especially as it has obvious direct
> security benefits outside the ACME context.


This is categorically false. It is, itself, more complex and more
error-prone (for example, due to the nature of authorization domains),
and, at the end of the day, it fails to achieve its goals.

The simplest way I can try to get you to think about it is to consider a
cert for foo.bar.example.com being requested by User C, with preexisting
domains of www.example.com (User A) and example.com (User B). Think about
how that would be “checked” - or even simply who the authorizers should be.

I assure you, it both fails to address the problem (of limiting risk) and
increases the complexity. Put simply, it doesn’t work - so there is no
value in doubling down trying to make it work, especially given that it
also fails to provide a solution for the overall population (like
blacklisting does).

Finally, the assumption there will be fewer of X so it’s easier to fix is,
also, counterintuitively false - the fewer there are and the more baroque
and complex the solution is, the harder it is to make any assumption about
adoption uptake.

(Hosting providers who allow uploading certificates for the specific
> DNS/SNI names of other users are a security problem in itself, as it
> could allow e.g. uploading an untrusted exact domain cert to disrupt
> another user's site having only a wildcard certificate).


Not really. You say this, but that is the reality today, and it can be
and is mitigated.

On the other hand, such providers will often (included or at extra fee)
> allow provisioning arbitrary subdomains that are then typically added to
> the HTTP(S) vhost configuration and the hosted DNS configuration, which
> is good enough for TLS-SNI-modified-to-use-subdomain and HTTP-01, but
> won't allow users to respond to the DNS-01 and may or may not allow or
> users to respond to TLS-SNI-01 challenges (the feature allowing
> responding to TLS-SNI-01 challenges is likely to suffer from security
> issue #1).


The problem in your thinking, which I wasn’t clear enough about I suppose,
is that those use cases are already met by other validation means, so
there’s no assumption of, nor need for, TLS-SNI; and while you pose your
solution as an improvement, it in no way makes things easier or more
widespread - it simply limits what the method can do and overlaps with
other methods.

In any event, I think if you want to continue to explore that line of
thinking, you’re more than free to within the IETF, where you can learn
more directly about the requirements rather than construct hypothetical
environments.

Just reread RFC7301. While it does say that servers SHALL reject such
> connections (or at least not send back an ALPN indicating a selected
> value, as if not implementing the extension), I find it likely that some
> combinations of TLS implementation and application implementation will
> blindly accept whatever unknown protocol identifier a client lists as
> the only option.


That is completely unproductive speculative strawmanning that doesn’t allow
for productive dialog. More specifically, I do not think the “if I were
king, and we assume it works like X, here’s what I would do” approach is at
all useful or appropriate for this Forum. The right venue would be ACME if
you wanted to discuss designs; what is relevant and appropriate here is
merely a critical evaluation of comparative risk against _what the
Baselines permit_.

“I think we shouldn’t allow X, because it introduces condition Y, where if
met would result in Z, and that is new and unique to this method” is useful.


Ryan Sleevi

unread,
Jan 10, 2018, 11:43:45 PM1/10/18
to Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org
On Thu, Jan 11, 2018 at 1:36 AM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Wednesday, January 10, 2018 at 6:17:34 PM UTC-6, Ryan Sleevi wrote:
> Wow. The economic incentives for behaving badly clearly were at work in
> those.
>
> I think I am one of those worrywarts, in fact.
>
> Also, I just reread and contemplated the .10 method's definition. It's
> lacking. A legitimate definition of "on the authorization domain name"
> would have clarified a normative reference for what accessing that over
> TLS means and likely would have included that the SNI needed to be the
> authorization domain name. As such, it's really just a tenuous land-grab
> that TLS-SNI-01 is compliant with .10.


I do not know why you say that, considering the Forum explicitly decided to
make .10 flexible as it is to accommodate both solutions.

The goal was explicitly NOT to make an ideally secure solution; it was to
document what was already practiced, in exchange for removing “any other
method”.

To that end, it is more useful to point out, “As written, X is permissible,
but not desired, while restricting to Y reduces that risk”. The goal is
honestly less to provide solutions (“I think it should be this”) and more
to provide risk assessments and suggestions. The latter is far more
beneficial for walking folks through the risks and concerns and how to
mitigate.


>
> One of these days I need to sign the IPR waiver and join the cabforum
> mailing list as an interested party.
>

Matthew Hardeman

unread,
Jan 11, 2018, 1:20:52 AM1/11/18
to Ryan Sleevi, mozilla-dev-security-policy
On Wed, Jan 10, 2018 at 10:42 PM, Ryan Sleevi <ry...@sleevi.com> wrote:
>
>
> I do not know why you say that, considering the Forum explicitly decided
> to make .10 flexible as it is to accommodate both solutions.
>
> The goal was explicitly NOT to make an ideal-secure solution, it was to
> document what is practiced in favor of replacing “any other method”
>
> To that end, it is more useful to point out, “As written, X is
> permissible, but not desired, while restricting to Y reduces that risk”.
> The goal is honestly less to provide solutions (“I think it should be
> this”) and more to provide risk assessments and suggestions. The latter is
> far more beneficial for walking folks through the risks and concerns and
> how to mitigate.
>

Ouch. I was not aware of that aspect of the history. What I recall most
was that there was some IPR drama over some of the blessed methods.

So, essentially, the bargain that was struck was something along the lines
of "Confess your validation method sins and let them -- at least for a time
-- be blessed, as long as they're not entirely egregious and in exchange
for killing the ability to hide behind `or any other method`?"

Gervase Markham

unread,
Jan 11, 2018, 11:19:10 AM1/11/18
to Matthew Hardeman
On 10/01/18 17:39, Matthew Hardeman wrote:
> Here again, I think we have a problem. It's regarded as normal and
> acceptable at many web host infrastructures to pre-stage sites for
> domain-labels not yet in use to allow for development and test deployment.

I agree that "no unknown domain names" is hard to implement. "No domain
names owned by another customer" is easier and doesn't cause the
problems you raise. "No acme.invalid" is easier still.

> In the course of adopting the 10 blessed methods, did any of the methods
> move forward with the expectation that active effort on the part of non-CA
> participants versus the status quo would be required in order to ensure the
> continuing reliability of the method?

"Active effort vs. the status quo" is a skewed framing because security
always requires active effort in the face of change, new entrants etc. A
new entrant in the email market has to make an active effort to make
sure that the special addresses are not claimable, even though that
issue has been known for years.

Gerv

Ryan Sleevi

unread,
Jan 11, 2018, 4:36:50 PM1/11/18
to jo...@letsencrypt.org, mozilla-dev-security-policy
(Wearing a Google Chrome hat on behalf of our root store policy)

Josh,

Thanks for bringing this rapidly to the attention of the broader community
and proactively reaching out to root programs.

As framing to the discussion, we still believe TLS-SNI is fully permitted
by the Baseline Requirements, which, while not ideal, still permits
issuance using this method. As such, the 'root' cause is that the Baseline
Requirements permit methods that are less secure than desired, and the
discussion that follows is now around what steps to take - as CAs, as Root
Programs, for site operators, and for the CA/Browser Forum.

When faced with a vulnerable validation method that is permitted, it's
always a challenge to balance the need for security - for sites and users -
with the risk of compatibility and breakage from the removal of such a
method. Fundamentally, the issues you raise call into question the level of
assurance of 3.2.2.4.9 and 3.2.2.4.10 in the Baseline Requirements; they are
not limited to TLS-SNI, and potentially affect every CA using these
methods.

When evaluating these methods and their risks, compared to, say, the
also-weak 3.2.2.4.1 and 3.2.2.4.5 methods under ongoing discussion in the
CA/Browser Forum, a few key distinctions, although non-exhaustive, apply
and are factored into our response and proposal here:

- The average lifetime of certificates using these methods, across CAs,
compared to 3.2.2.4.1/3.2.2.4.5, is significantly shorter - very close to
the 90 days that Let's Encrypt uses, based on the available information we
have. The fact that so many of these certificates are short-lived creates a
situation where there's simultaneously more risk to the ecosystem in
rapidly removing these methods as acceptable (due to the need to
obtain/renew certificates), while there's also much less risk in allowing
this method to continue to be used for a limited time, because
certificates that could be obtained by exploiting this will expire much
sooner than the 2-3 years that many other certificates are issued with.
That is, the security risk of a bad validation that lives for 3 years is
much greater than the risk of a bad validation that lives for 90 days, and
the fact that the badness is only valid for 90 days means that it's easier
to allow it to more gracefully shut down than potentially accepting that
implied risk for years.

- The ease of which alternative methods exist. Methods that are manual are
substantially easier to remove quickly, as alternative manual processes can
also be used during the human-to-human interaction, while methods that are
highly automated conversely create greater challenges, due to the need to
update client software to whatever new automated methods may be used. While
3.2.2.4.1 and 3.2.2.4.5 are highly human-driven methods, methods like
3.2.2.4.9 and .10 are designed for automation - and why we were supportive
of their addition - but also mean that any mitigations will necessarily
face ecosystem challenges, much like deploying new versions of TLS or
deprecating old ones.

- The ease of which alternative automated methods can be used. As automated
methods are generally designed around integrated systems and certain design
constraints, it's not always possible to move to an equivalently
automatable method (as it is with manual methods), and it may be that no
equivalent automated method exists to fill the design niche. If that design
niche is a substantial one for clients, and enables otherwise unautomatable
systems, it can pose greater risk in prematurely removing it. Specific
applications of the .9 and .10 methods, such as ACME's TLS-SNI, occupy an
important niche, similar to the 3.2.2.4.6 method and ACME's HTTP-01 method,
provide a level of automation for systems not directly integrated with DNS,
and while that means they must be particularly attentive to the security
risks that come from that, done correctly, they can provide a greater path
towards security.

- Compared to 3.2.2.4.1 and 3.2.2.4.5, specific applications of 3.2.2.4.9
and 3.2.2.4.10 can be evaluated against possible mitigations for the risk,
both short- and long-term, and steps that site operators can take to
affirmatively protect themselves offer better assurance than approaches
that rely entirely on the CA's good behaviour. As you call out, the specific
risks of TLS-SNI are limited to shared providers (not individual users)
that meet certain conditions, and these shared providers can already take
existing steps to minimize the immediate risk, such as blocking the use of
certificates or SNI negotiations that contain the '.invalid' TLD. While
this is not an ideal long-term solution, by any means, it allows us to
frame both the immediate and specific risks and the ways to reduce that.
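The '.invalid' blocking mentioned above can, for instance, be wired into a TLS terminator via the servername (SNI) callback. A sketch using Python's ssl module, as one hypothetical deployment point; the helper name and the choice of the unrecognized_name alert are assumptions, and a real shared-hosting frontend would implement this in its own stack:

```python
import ssl

def is_reserved_validation_name(server_name: str) -> bool:
    # ".invalid" is a reserved TLD; no real customer site lives there.
    return server_name.lower().rstrip(".").endswith(".invalid")

def refuse_invalid_sni(ssl_socket, server_name, initial_context):
    # Returning an ALERT_DESCRIPTION_* integer from an sni_callback
    # makes the handshake abort with that alert (Python 3.7+ ssl API).
    if server_name and is_reserved_validation_name(server_name):
        return ssl.ALERT_DESCRIPTION_UNRECOGNIZED_NAME
    return None  # continue the handshake normally

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.sni_callback = refuse_invalid_sni
```

With such a filter in place, an ACME server probing the shared IP with an `.acme.invalid` SNI value would get a handshake failure instead of an attacker-uploaded certificate.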

For the sake of brevity, I'll end my comparisons there, but hopefully
highlights some of the factors we've considered in our response to your
proposal.

Given the risks identified to 3.2.2.4.9 and 3.2.2.4.10, we think it would
be best for CAs using these Baseline Requirements-acceptable methods of
validation to begin immediately transitioning away from them, with the goal
of either removing them entirely from the Baseline Requirements, or
identifying ways in which .9 and .10 can be better specified to mitigate
such risks. That said, given the potential risks to the ecosystem,
particularly for those with pre-existing short-lived certificates, and
provided that the new certificates are valid for 90 days or less, we're
open to allowing the specific TLS-SNI methods identified by the ACME
specification to continue to be used for a limited time, while the broader
community works to identify potential mitigations (if possible) or
transition away from these methods.

While we don't think the current status quo represents a viable long-term
solution, we think this represents an appropriate short-term balance, given
that the ACME TLS-SNI methods have been broadly reviewed within the IETF,
that the risks apply to a limited subset of specific infrastructures, that
mitigations are possible for these infrastructures to deploy, that Let's
Encrypt is actively working with the community to identify (and, ideally,
share) those that haven't or cannot deploy such mitigations, and all of the
other items previously mentioned.

If and as new facts become available, it may be necessary to revisit this.
We may have overlooked additional risks, or failed to consider mitigating
factors. Further, this response is contextualized in the application of
ACME's TLS-SNI methods for validation, and such a response may not be
appropriate for other forms of validations within the framework of
3.2.2.4.9 and 3.2.2.4.10. Similarly, this response doesn't apply to
certificates that may be valid for longer periods, as they may present
substantially greater risk to making effective improvements to or an
orderly transition away from these methods.

We look forward to working with other browser vendors, site operators, and
the relying community to work out ways to provide an orderly and effective
transition to more secure methods - whether that means away from the
3.2.2.4.9/.10 series of domain validations, or to more restrictive forms
that are more clearly "opt-in" rather than the explicit "opt-out" proposed
(of 'blacklisting .invalid').

We're also curious if we've overlooked salient details in our response, and
thus welcome feedback from Let's Encrypt, other CAs utilizing these
validation methods (both TLS-SNI and 3.2.2.4.9 and 3.2.2.4.10), and the
broader community as to our proposed next steps. Please consider this a
draft response, and we look forward to future updates regarding proposed
next steps.

jo...@letsencrypt.org

unread,
Jan 11, 2018, 5:29:09 PM1/11/18
to mozilla-dev-s...@lists.mozilla.org
We have published an update on our plans for TLS-SNI:

https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188

The short summary is that we do not plan to generally re-enable TLS-SNI validation, but we will introduce various forms of whitelists to limit impact during our transition away from TLS-SNI.

Thanks to everyone for the feedback on this thread already. Let us know if you have any questions or concerns.

Wayne Thayer

unread,
Jan 11, 2018, 6:01:32 PM1/11/18
to jo...@letsencrypt.org, mozilla-dev-security-policy
On Thu, Jan 11, 2018 at 3:28 PM, josh--- via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> https://community.letsencrypt.org/t/2018-01-11-update-regard
> ing-acme-tls-sni-and-shared-hosting-infrastructure/50188
>
> Speaking for myself, this is an excellent game plan that prioritizes the
protection of Mozilla users and the Web PKI in general.

Jakob Bohm

unread,
Jan 11, 2018, 11:47:17 PM1/11/18
to mozilla-dev-s...@lists.mozilla.org
I explicitly stated why fixing #2 would be simpler than fixing #1; you
are making no factual argument.

Which specific "nature of authorization domains" are you referring to?

> The simplest way I can try to get you to think about it is to consider a
> cert for foo.bar.example.com being requested by User C, and preexisting
> domains of www.example.com (User A) and example.com (User B). Think about
> how that would be “checked” - or even simply who the authorizers should be.
>

I was referring to strict subdomains. So User B would have to give
permission to User C (e.g. in a control panel or via an admin procedure,
that's up to the host).

User B would also have to give permission (in the same way) for user A
(or vice versa if user A requested before user B). Again the details
can be left to the discretion of the host, as long as user C would need
permission from user B, issue 2 goes away as far as a hypothetical
improved TLS-SNI-0next using a subdomain of the requested domain is
concerned.
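As a sketch of how a host might enforce the permission rule described above (all user names, domain names, and the data structure are hypothetical; a real host would tie this to its own account and control-panel system):

```python
def may_request(requester, fqdn, owners):
    """Hosting-side check for Jakob's proposed rule: a user may obtain a
    challenge subdomain for `fqdn` only if every strict parent domain
    already claimed on this host is either owned by the requester or has
    granted the requester permission.

    `owners` maps domain -> (owner, set of users granted permission).
    This is an illustrative sketch, not any host's actual interface.
    """
    labels = fqdn.split(".")
    for i in range(1, len(labels)):
        parent = ".".join(labels[i:])
        if parent in owners:
            owner, permitted = owners[parent]
            if requester != owner and requester not in permitted:
                return False  # a parent-domain holder has not consented
    return True

owners = {
    "example.com":     ("userB", {"userA"}),  # userB granted userA permission
    "www.example.com": ("userA", set()),
}
print(may_request("userA", "www.example.com", owners))      # -> True
print(may_request("userC", "foo.bar.example.com", owners))  # -> False (needs userB)
```

Under this sketch, issue 2 goes away exactly as described: user C cannot provision a challenge name under example.com without user B's recorded consent.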

> I assure you, it both fails to address the problem (of limiting risk) and
> increases the complexity. Put simply, it doesn’t work - so there is no
> value in doubling down trying to make it work, especially given that it
> also fails to provide a solution for the overall population (like
> blacklisting does).

Again you make no factual argument.

>
> Finally, the assumption there will be fewer of X so it’s easier to fix is,
> also, counterintuitively false - the fewer there are and the more baroque
> and complex the solution is, the harder it is to make any assumption about
> adoption uptake.
>

I am trying to make solutions simpler and less baroque than what Let's
Encrypt is apparently proposing (according to 3rd party posts here),
namely to introduce a special rule that hosting providers must follow to
protect their users against the newly discovered vulnerability.

I am also trying to avoid a solution likely to suffer from a "long tail"
problem of hosts that won't implement needed fixes anytime soon.

> (Hosting providers who allow uploading certificates for the specific
>> DNS/SNI names of other users are a security problem in itself, as it
>> could allow e.g. uploading an untrusted exact domain cert to disrupt
>> another user's site having only a wildcard certificate).
>
>
> Not really. You say this but that is the reality today and can and is
> mitigated.
>

How is it mitigated today in a way that would not stop users from
uploading a cert for somenumber._acme.www.example.com on the same host
where some other user is hosting www.example.com?

> On the other hand, such providers will often (included or at extra fee)
>> allow provisioning arbitrary subdomains that are then typically added to
>> the HTTP(S) vhost configuration and the hosted DNS configuration, which
>> is good enough for TLS-SNI-modified-to-use-subdomain and HTTP-01, but
>> won't allow users to respond to the DNS-01 and may or may not allow or
>> users to respond to TLS-SNI-01 challenges (the feature allowing
>> responding to TLS-SNI-01 challenges is likely to suffer from security
>> issue #1).
>
>
> The problem in your thinking, which I wasn’t clear enough about I suppose,
> is that those use cases are already met by other validation means and
> there’s no assumption nor need for TLS-SNI, and while you pose your
> solution as an improvement, in no way makes it easier or more widespread,
> and simply limits what it can do and overlaps with other methods.

You are claiming no need for TLS-SNI, without data. Someone obviously
saw a need to create it, and Let's Encrypt said there is sufficient need
that they are looking for a way to restore that particular service as
soon as practical.


>
> In any event, I think if you want to continue to explore that line of
> thinking, you’re more than free to within the IETF, where you can learn
> more directly about the requirements rather than construct hypothetical
> environments.
>

You always want to send discussions elsewhere.

> Just reread RFC7301. While it does say that servers SHALL reject such
>> connections (or at least not send back an ALPN indicating a selected
>> value, as if not implementing the extension), I find it likely that some
>> combinations of TLS implementation and application implementation will
>> blindly accept whatever unknown protocol identifier a client lists as
>> the only option.
>
>
> That is completely unproductive speculative strawmanning that doesn’t allow
> for productive dialog. More specifically, I do not think it all useful for
> this Forum for the “if I was king, and we assume it works like x, here’s
> what I would do” is actually at all productive or appropriate. The right
> venue would be ACME if you wanted to discuss designs, and what is relevant
> here and appropriate is merely a critical evaluation of comparative risk to
> _what the Baselines permit_.

I am talking about which real-world issues (not rules and regulations)
might make your (not mine) suggestion of using ALPN insecure.
Specifically I am /guessing/ at the likelihood of a specific
implementation bug/detail causing servers affected by the current issue
to remain equally affected with an ALPN of "acme" added to the probing
for TLS-SNI-01.
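For concreteness, RFC 7301 requires a conforming server to pick from the client's offered protocols or abort the handshake with a no_application_protocol alert. The disputed question is whether deployed servers actually do this. A minimal Python model of the spec'd server-side selection (the "acme" protocol identifier is illustrative only, not a registered ALPN id):

```python
class NoApplicationProtocol(Exception):
    """Models RFC 7301's no_application_protocol fatal alert."""

def select_alpn(server_supported, client_offered):
    """RFC 7301 server-side ALPN selection: the server picks its most
    preferred protocol that the client also offered; with no overlap it
    must abort rather than proceed.

    A validation probe offering only a dedicated ACME protocol id would
    thus be refused by an ordinary HTTPS vhost - unless, per the worry
    above, the server ignores ALPN and completes the handshake anyway.
    """
    for proto in server_supported:  # server preference order
        if proto in client_offered:
            return proto
    raise NoApplicationProtocol(client_offered)

# An ordinary web server speaks only HTTP; an ACME-only probe fails:
print(select_alpn(["h2", "http/1.1"], ["http/1.1"]))  # -> http/1.1
try:
    select_alpn(["h2", "http/1.1"], ["acme"])
except NoApplicationProtocol:
    pass  # handshake aborted: the certificate is never served to the prober
```

Whether the right-hand branch is reliably taken by real TLS stacks is exactly the empirical question being argued here.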

There is no claim of what I would want to command people to do.

As for the Baselines, they currently permit the existing TLS-SNI-01
method, and presumably a security-improved TLS-SNI-future would
not change that.

>
> “I think we shouldn’t allow X, because it introduces condition Y, where if
> met would result in Z, and that is new and unique to this method” is useful.
>
>>
>>

I was simply suggesting that your suggested new method X (ALPN "acme")
would remain affected by the /existing issues/ under discussion if a
specific issue Y (ALPN ignored or checking deferred beyond duration of
probe) exists, thus indicating that, subject to checking if Y is
widespread, your novel suggestion X /might/ be useless. There's no new
Z.

In the part of my post that you conveniently snipped, I explained why
your suggestion of using an ALPN to prevent old web servers from
accepting a hypothetical TLS-SNI-future would be detrimental to real
world non-vulnerable implementations of ACME clients (Specifically all
those using a standalone program such as Certbot). That part contained
no hypotheticals at all.

Ryan Sleevi

Jan 12, 2018, 2:13:00 AM1/12/18
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Fri, Jan 12, 2018 at 5:46 AM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 11/01/2018 05:38, Ryan Sleevi wrote:
> I explicitly stated why fixing #2 would be simpler than fixing #1, you
> are making no factual argument.


It would not be because, it would seem, you do not understand the
constraints of the affected servers or of the issue itself.

It would appear you are guessing as to how they work, but they do not work
that way, thus this proposal would not in any way be better.


>
> Which specific "nature of authorization domains" are you referring to?


Read the Baseline Requirements for Authorization Domain Name and how label
pruning works. Your design assumes a single account is associated with a
registrable domain and all its subdomains, but if you read about who the
issue affects, you would see that it precisely affects those who do not
tie such a notion of authorized domains to accounts.

I was referring to strict subdomains. So User B would have to give
> permission to User C (e.g. in a control panel or via an admin procedure,
> that's up to the host).


Correct. And these are not the folks affected, and thus this is not
worth discussing further.


>
> User B would also have to give permission (in the same way) for user A
> (or vice versa if user A requested before user B). Again the details
> can be left to the discretion of the host, as long as user C would need
> permission from user B, issue 2 goes away as far as a hypothetical
> improved TLS-SNI-0next using a subdomain of the requested domain is
> concerned.


If and only if Authorization is done by accounts, which it is not, which
means this solves nothing.

> Finally, the assumption there will be fewer of X so it’s easier to fix is,
> > also, counterintuitively false - the fewer there are and the more baroque
> > and complex the solution is, the harder it is to make any assumption
> about
> > adoption uptake.
> >
>
> I am trying to make solutions simpler and less baroque than what Let's
> encrypt is apparently (according to 3rd party posts here) proposing
> namely to introduce a special rule that hosting providers must follow to
> protect their users against the newly discovered vulnerability).


“According to third party posts” - they’ve already made first party
updates. I fail to see what you hope to attain from that comment, but it is
also clear that you don’t understand the constraints - at best, you are
guessing, but speaking as if it’s correct (rather than soliciting feedback
or displaying uncertainty) and ignoring criticism, which is completely
unproductive. Your solution is not a simpler way to mitigate the issue for
providers, and thus provides no value for those affected, while being more
complex and fundamentally a protocol change that no one would be able to
use until they changed clients - a design concern so obvious that any
proposal must address it first and foremost.

I am also trying to avoid a solution likely to suffer from a "long tail"
> problem of hosts that won't implement needed fixes anytime soon.


Except it doesn’t do that, and it’s not clear why that isn’t immediately
obvious.

How is it mitigated today in a way that would not stop users from
> uploading a cert for somenumber._acme.www.example.com on the same host
> where some other user is hosting www.example.com?


That is the point.

First, that is allowed, today, by a number of providers.

Second, Authorization Domain Names mean you also need to worry about
hash._acme.example.com (note the stripped www). This is further exemplified
if the existing domain is mail.corp.example.com, and that’s what the attacker
wants. It is legal for CAs to validate at “mail.corp.example.com” or
its parents - thus hash._acme.mail.corp.example.com,
hash._acme.corp.example.com, and hash._acme.example.com are all valid
authorization domain names in your scheme. To mitigate this, the cloud
provider would need to know to scope authorizations to the BR’s notion of
Authorization Domain Name (namely, the account that uploaded www.example.com
is valid for all of example.com).
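The label pruning described above can be sketched in a few lines (a Python sketch; a real implementation must consult the Public Suffix List to find the registrable domain, which is simply supplied by the caller here):

```python
def authorization_domain_names(fqdn, registrable_domain):
    """Enumerate the Authorization Domain Names a CA may validate for an
    FQDN, per the Baseline Requirements' label-pruning rule: strip leading
    labels one at a time, stopping at the registrable domain.

    `registrable_domain` is assumed correct (PSL lookup is out of scope).
    """
    if not (fqdn == registrable_domain
            or fqdn.endswith("." + registrable_domain)):
        raise ValueError("FQDN is not under the registrable domain")
    names = [fqdn]
    while names[-1] != registrable_domain:
        names.append(names[-1].split(".", 1)[1])  # drop the leftmost label
    return names

# The attack surface described above: a challenge name could be planted
# under any of these, not just under the exact FQDN being validated.
print(authorization_domain_names("mail.corp.example.com", "example.com"))
# -> ['mail.corp.example.com', 'corp.example.com', 'example.com']
```

This is why scoping uploads per exact hostname, as in the subdomain proposal, is insufficient: every pruned parent is also a valid validation target.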

Third, this is not how the real world works - a number of providers
intentionally allow multiple users to upload certs for www.example.com (as
DNS will sort it out if these customers are assigned to different IPs, and
first come first served if only one IP). Even without that, they allow the
accounts for www.example.com and Corp.example.com to be different, meaning
both have legitimate claim for example.com above.

Fourth, an underscore as the first character of a domain label is not valid
for use with A/AAAA records (i.e. you propose something not technically valid).

Fifth, this is a new protocol, thus requires new client and server support,
and thus is not at all a relevant or applicable solution that address the
key problem that CAs face with existing customers. This should be
immediately and blatantly obvious.


That said, please do not reply to this criticism - I’ve shown you why it
doesn’t work, but if you want to continue to try to explore ideas that you
believe do work, the proper venue is the ACME mailing list, for the obvious
and previously stated reason of the IP policy.

To be more explicit: Every idea or suggestion you propose on m.d.s.p., for
this issue or others, related or not, is immediately biased against it for
IP reasons. This is not a good venue to discuss your technical ideas, full
stop.


> >
> > The problem in your thinking, which I wasn’t clear enough about I
> suppose,
> > is that those use cases are already met by other validation means and
> > there’s no assumption nor need for TLS-SNI, and while you pose your
> > solution as an improvement, in no way makes it easier or more widespread,
> > and simply limits what it can do and overlaps with other methods.
>
> You are claiming no need for TLS-SNI, without data. Someone obviously
> saw a need to create it, and Let's encrypt said there is sufficient need
> that they are looking for a way to restore that particular service as
> soon as practical.


No. You again misunderstand.

Your proposal is only workable for environments which have existing
alternatives to TLS-SNI, and does not address the set of environments that
TLS-SNI is trying to serve. As mentioned above, it fails to understand what
“restore that particular service” means, or the constraints therein, thus
only serves those that aren’t necessarily affected by this.

It’s a bad proposal, as it tries to sketch an outline for a problem that
isn’t the problem people are looking for a solution for. And it’s presented
with such length and detail, while being wrong in the fundamental
assumptions, that it is not a productive use of folks time.

In short, the “guessing” is the fundamental problem, and it would be better
to ask questions and wait and listen to answers than it would be to
attempt to sketch out full solutions (in addition to, or in lieu of, asking
questions).

> In any event, I think if you want to continue to explore that line of
> > thinking, you’re more than free to within the IETF, where you can learn
> > more directly about the requirements rather than construct hypothetical
> > environments.
> >
>
> You always want to send discussions elsewhere.


Because you “always” want to make technical solutions on how you believe
things work. Beyond being more productive to simply ask if things work that
way (without offering solutions), for IP reasons, as previously stated, any
solution you offer is inherently tainted and biased against, and this group
is intentionally not the place for them. Please internalize this, if
anything - this is not a good place for technical discussions about what
you think CAs should do or other people should implement, when talking at
the ecosystem layer. While Mozilla forums are generally excellent places to
discuss changes to Mozilla products (because then only Mozilla assumes the
risk), when you try to design for others, this is not that venue. Hopefully
that is clear now.


>
> > Just reread RFC7301. While it does say that servers SHALL reject such
> >> connections (or at least not send back an ALPN indicating a selected
> >> value, as if not implementing the extension), I find it likely that some
> >> combinations of TLS implementation and application implementation will
> >> blindly accept whatever unknown protocol identifier a client lists as
> >> the only option.
> >
> >
> > That is completely unproductive speculative strawmanning that doesn’t
> allow
> > for productive dialog. More specifically, I do not think it all useful
> for
> > this Forum for the “if I was king, and we assume it works like x, here’s
> > what I would do” is actually at all productive or appropriate. The right
> > venue would be ACME if you wanted to discuss designs, and what is
> relevant
> > here and appropriate is merely a critical evaluation of comparative risk
> to
> > _what the Baselines permit_.
>
> I am talking about which real-world issues (not rules and regulations)


Not a real world issue - a hypothetical issue until shown otherwise, and
thus a straw man.


> might make your (not mine) suggestion of using ALPN insecure.
> Specifically I am /guessing/ at the likelihood of a specific
> implementation bug/detail causing servers affected by the current issue
> to remain equally affected with an ALPN of "acme" added to the probing
> for TLS-SNI-01.


Yes, you are guessing, which is an unproductive reason to reject or
redesign, or to offer a radically different, incomplete, inadequate, and
incompatible solution.

>
> > “I think we shouldn’t allow X, because it introduces condition Y, where
> if
> > met would result in Z, and that is new and unique to this method” is
> useful.
> >
> >>
> >>
>
> I was simply suggesting that your suggested new method X (ALPN "acme")
> would remain affected by the /existing issues/ under discussion if a
> specific issue Y (ALPN ignored or checking deferred beyond duration of
> probe) exists, thus indicating that, subject to checking if Y is
> widespread, your novel suggestion X /might/ be useless. There's no new
> Z.


The problem with this straw man is that it’s no different than saying “If
someone doesn’t implement TCP/IP correctly, this wouldn’t be able to
connect to them.” It’s not new information or useful contributions to
imagine folks that don’t implement the spec correctly, because their issue
is not with this, it’s that they don’t implement the ALPN spec correctly.
If you can’t talk to someone because they didn’t implement TCP/IP
correctly, the “issue” is not that you need to connect to someone, the
issue is they didn’t implement properly.

It’s unnecessary speculation, without any effort at evidence, about a violation
of the spec’d behavior. One of the many purposes of specs is to give us a
shared vocabulary for discussing and building solutions, so speculation
without evidence of spec violating behavior only serves to derail the very
conversation specs are trying to enable.

In the part of my post that you conveniently snipped, I explained why
> your suggestion of using an ALPN to prevent old web servers from
> accepting a hypothetical TLS-SNI-future would be detrimental to real
> world non-vulnerable implementations of ACME clients (Specifically all
> those using a standalone program such as Certbot). That part contained
> to hypotheticals at all.


The problem is that your solution doesn’t actually allow for clients to
continue to work as before. They would need to change (to change the SNI
they configure that the server will expect). As such, only servers that
update can use your method - the same as only servers that update can use
what I was proposing. You don’t get free backwards compat (among the many
other technical flaws).

The problem with your solution is that even if servers update, it doesn’t
address the vulnerability. I agree that the solution offered also wouldn’t
address the vulnerability if servers implement specs incorrectly. However,
at that point, the issue isn’t the proposal - it’s that a server didn’t
implement a spec correctly. And this is only relevant given actual evidence of
that, since no one knows what hypothetical spec violating servers might
have done incorrectly, while evidence of spec violation actually provides
concrete data that can then be discussed, evaluated, and worked around.
Absent that, it merely derails conversation.

Jakob Bohm

Jan 12, 2018, 8:29:07 AM1/12/18
to mozilla-dev-s...@lists.mozilla.org
When I wrote my previous reply, I had not yet received Let's Encrypt's
post in which they announced they would not re-enable TLS-SNI-01
globally. So this was written based on Let's Encrypt only *temporarily*
disabling TLS-SNI-01 as stated in their original post and *allegedly*
(according to 3rd party posts) asking hosting providers to block uploads
of certificates for acme.invalid.

This situation has since changed, and my suggestions are thus
mostly moot.

jo...@letsencrypt.org

Jan 12, 2018, 10:38:42 PM1/12/18
to mozilla-dev-s...@lists.mozilla.org
Another update, the main thing being that we have deployed patches to our CA that allow TLS-SNI for both renewal and whitelisted accounts, as we said we would in our previous update:

https://community.letsencrypt.org/t/tls-sni-challenges-disabled-for-most-new-issuance/50316

jo...@letsencrypt.org

Jan 12, 2018, 11:14:18 PM1/12/18
to mozilla-dev-s...@lists.mozilla.org
I would like to thank our community, including many people who read m.d.s.p., for helping with our response. This includes individuals in the PKI community, other CAs, hosting and infrastructure providers, corporate security teams, and root programs.

Our response depended on quickly consuming large amounts of information from different external sources. We sought outside opinions regarding our vulnerability analysis; we needed to know how widespread the problem was, how fast many different organizations could patch, and what the impact of disabling TLS-SNI for different periods of time would be; and we had compliance questions...

Community members and partners immediately stepped up to provide input, many in the middle of the night via both phone and email. We're very grateful and we'll pay it forward given the opportunity.

Hector Martin 'marcan'

Jan 13, 2018, 3:35:47 AM1/13/18
to jo...@letsencrypt.org, mozilla-dev-s...@lists.mozilla.org
On 2018-01-13 12:38, josh--- via dev-security-policy wrote:
> Another update, the main thing being that we have deployed patches to our CA that allow TLS-SNI for both renewal and whitelisted accounts, as we said we would in our previous update:
>
> https://community.letsencrypt.org/t/tls-sni-challenges-disabled-for-most-new-issuance/50316

Would it make sense to effectively allow "self-service" whitelisting by
using a DNS TXT record? This would allow a static DNS configuration (no
need for dynamic records as in DNS-01) and basically allow TLS-SNI-01
users to continue using their existing setup. The record would basically
be an assertion that yes, the domain owner allows the usage of
TLS-SNI-01 and the server it is pointed to will not allow third-party
provisioning of acme.invalid certs.

Another suggestion is to use an SRV record for TLS-SNI-01 validation.
This would serve as an assertion that the method is acceptable and also
allow choosing a different port or even a different hostname/IP
altogether. Supporting this for HTTP-01 would also make sense, e.g. that
would allow using certbot in standalone mode on a nonstandard port,
making it perhaps one of the simplest and most universal validation
configurations, working with any server software as long as you can
provision a single static DNS record.
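A sketch of how such a policy lookup might work on the CA side (the record names, the TXT token, and the stub resolver below are all hypothetical illustrations, not part of any specified protocol):

```python
def tls_sni_policy(domain, lookup):
    """Sketch of the self-service idea: before attempting TLS-SNI-01,
    the CA checks a static DNS record under the domain. `lookup(name,
    rtype)` stands in for a real resolver and returns a list of records.

    Returns (host, port) to probe, or None if the method is not opted in.
    """
    # SRV variant: asserts consent *and* redirects the probe to a
    # chosen host/port (SRV records are priority, weight, port, target).
    for prio, weight, port, target in lookup("_acme-tls-sni._tcp." + domain, "SRV"):
        return (target.rstrip("."), port)
    # TXT variant: consent only; probe the domain itself on 443.
    if "tls-sni-01-allowed" in lookup("_acme-challenge-policy." + domain, "TXT"):
        return (domain, 443)
    return None  # no opt-in: the CA falls back to other challenge types

# Stub resolver for illustration:
records = {
    ("_acme-tls-sni._tcp.example.com", "SRV"): [(0, 0, 8443, "vhost7.example.net.")],
    ("_acme-challenge-policy.example.org", "TXT"): ["tls-sni-01-allowed"],
}
lookup = lambda name, rtype: records.get((name, rtype), [])

print(tls_sni_policy("example.com", lookup))  # -> ('vhost7.example.net', 8443)
print(tls_sni_policy("example.org", lookup))  # -> ('example.org', 443)
print(tls_sni_policy("example.net", lookup))  # -> None
```

Because both records are static, a subscriber could provision them once and keep an otherwise unchanged TLS-SNI-01 setup, which is the appeal of the proposal.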

--
Hector Martin "marcan" (mar...@marcan.st)
Public Key: https://mrcn.st/pub

jacob.hoff...@gmail.com

Jan 14, 2018, 4:32:48 PM1/14/18
to mozilla-dev-s...@lists.mozilla.org
On Saturday, January 13, 2018 at 12:35:47 AM UTC-8, Hector Martin 'marcan' wrote:
> Would it make sense to effectively allow "self-service" whitelisting by
> using a DNS TXT record?

We discussed a similar approach (using CAA) on our community forum, and concluded we don't want to pursue it at this time: https://community.letsencrypt.org/t/tls-sni-via-caa/50172. The TXT record would probably work more widely than CAA, but it would still be encouraging further integration with TLS-SNI-01, when we really want to encourage migration away from it. Right now it's our feeling that the account and renewal whitelisting should mitigate most of the pain of migrating away, but experience and feedback from subscribers will help inform that over time.

Gervase Markham

Jan 15, 2018, 9:27:52 AM1/15/18
to mozilla-dev-s...@lists.mozilla.org
On 14/01/18 21:32, jacob.hoff...@gmail.com wrote:
> We discussed a similar approach (using CAA) on our community forum,
> and concluded we don't want to pursue it at this time:
> https://community.letsencrypt.org/t/tls-sni-via-caa/50172. The TXT
> record would probably work more widely than CAA, but it would still
> be encouraging further integration with TLS-SNI-01, when we really
> want to encourage migration away from it. Right now it's our feeling
> that the account and renewal whitelisting should mitigate most of the
> pain of migrating away, but experience and feedback from subscribers
> will help inform that over time.

Why would you want to continue migrating away if it were based on a
self-serve whitelist? Would that not re-secure the method?

Gerv

Alex Gaynor

Jan 16, 2018, 9:17:27 AM1/16/18
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
It would come at the expense of a more streamlined and secure approach
(e.g. the ALPN proposal on the acme-wg list), which once standardized I
assume Let's Encrypt (and other ACME CAs) would want to fully migrate to.

Alex

On Mon, Jan 15, 2018 at 9:27 AM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 14/01/18 21:32, jacob.hoff...@gmail.com wrote:
> > We discussed a similar approach (using CAA) on our community forum,
> > and concluded we don't want to pursue it at this time:
> > https://community.letsencrypt.org/t/tls-sni-via-caa/50172. The TXT
> > record would probably work more widely than CAA, but it would still
> > be encouraging further integration with TLS-SNI-01, when we really
> > want to encourage migration away from it. Right now it's our feeling
> > that the account and renewal whitelisting should mitigate most of the
> > pain of migrating away, but experience and feedback from subscribers
> > will help inform that over time.
>
> Why would you want to continue migrating away if it were based on a
> self-serve whitelist? Would that not re-secure the method?
>
> Gerv
>