
StartEncrypt considered harmful today


Rob Stradling

Jun 30, 2016, 11:31:20 AM
to mozilla-dev-s...@lists.mozilla.org, Eddy Nigg (StartCom Ltd.)
https://www.computest.nl/blog/startencrypt-considered-harmful-today/

Eddy, is this report correct? Are you planning to post a public
incident report?

Thanks.

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Peter Kurrasch

Jun 30, 2016, 12:10:54 PM
to Rob Stradling, mozilla-dev-s...@lists.mozilla.org, Eddy Nigg (StartCom Ltd.)
Very interesting. This is exactly the sort of thing I'm concerned about with respect to Let's Encrypt and ACME.

I would think that all CAs should issue some sort of statement regarding the security testing of any similar, Internet-facing API they might be using. I would actually like to see a statement regarding any interface, including browser-based ones, but one step at a time. Let's at least know that all the other interfaces undergo regular security scans--or when a CA might start doing them.

Anyone proposing updates in CABF?


  Original Message  
From: Rob Stradling
Sent: Thursday, June 30, 2016 10:31 AM
To: mozilla-dev-s...@lists.mozilla.org; 'Eddy Nigg (StartCom Ltd.)'
Subject: StartEncrypt considered harmful today
_______________________________________________
dev-security-policy mailing list
dev-secur...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

Phillip Hallam-Baker

Jun 30, 2016, 12:24:22 PM
to Rob Stradling, Eddy Nigg (StartCom Ltd.), mozilla-dev-s...@lists.mozilla.org
Argh....

As with Ethereum, the whole engineering approach gives me a cold sweat.
Security and scripting languages are not a good mix.

What makes something easy to hack in Perl does not make for good security
architecture.


:(



On Thu, Jun 30, 2016 at 11:30 AM, Rob Stradling <rob.st...@comodo.com>
wrote:

Juergen Christoffel

Jun 30, 2016, 12:46:53 PM
to dev-secur...@lists.mozilla.org
On 30.06.16 18:24, Phillip Hallam-Baker wrote:
> What makes something easy to hack in Perl does not make for good security
> architecture.

Bad design, engineering or implementation is not primarily a problem of the
language used. Or we would never have seen buffer overflows in C. Please
castigate the implementor instead.

--jc

Tom Ritter

Jun 30, 2016, 12:57:41 PM
to Peter Kurrasch, Eddy Nigg (StartCom Ltd.), Rob Stradling, mozilla-dev-s...@lists.mozilla.org
On 30 June 2016 at 11:10, Peter Kurrasch <fhw...@gmail.com> wrote:
> Very interesting. This is exactly the sort of thing I'm concerned about with respect to Let's Encrypt and ACME.
>
> I would think that all CA's should issue some sort of statement regarding the security testing of any similar, Internet-facing API interface they might be using. I would actually like to see a statement regarding any interface, including browser-based, but one step at a time. Let's at least know that all the other interfaces undergo regular security scans--or when a CA might start doing them.
>
> Anyone proposing updates in CABF?

In theory I would support this; in practice it has no teeth. There is
no (real) accreditation for security reviews, and the accreditations
that exist do not, in practice, ensure that someone holding the
accreditation is skilled. You can say "APIs must have a security review" or an
"adversarial security scan" or a "vulnerability scan", or "manual
penetration test", or a "red team assessment" - but the definition of
the terms and the skillsets of people performing them vary so widely
that it would not guarantee very much in practice.

I believe that the CAs who want to be a leader in this niche already
are, and the CAs who cannot afford to do so (because I assume every CA
wants to take security seriously, but is confined in practice) will
wind up meeting the requirement in a way that does not significantly
improve their security. (And various shades in between)

But I'm biased, being a security consultant and all.

-tom

Daniel Veditz

Jun 30, 2016, 1:04:50 PM
to Rob Stradling, mozilla-dev-s...@lists.mozilla.org, Eddy Nigg (StartCom Ltd.)
On 6/30/16 8:30 AM, Rob Stradling wrote:
> https://www.computest.nl/blog/startencrypt-considered-harmful-today/
>
> Eddy, is this report correct? Are you planning to post a public
> incident report?

Does StartCom honor CAA?

Does StartCom publish to CT logs?

How many mis-issued certs were obtained by the researchers? Has there
been an investigation to see if there were similarly mis-issued certs
prior to this report?

Have those certs been revoked?

-Dan Veditz
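As an aside on the CAA question: a CA that honors CAA looks up the domain's CAA resource records in DNS and declines to issue when those records authorize a different CA. A minimal, illustrative sketch of such a pre-issuance check (not StartCom's actual code; the dnspython dependency and the CA identifier string are assumptions):

```python
# Simplified CAA pre-issuance check. A real check must also walk up the DNS
# tree to parent domains and handle CNAMEs, per RFC 6844.
import dns.resolver

def caa_allows_issuance(domain: str, ca_id: str) -> bool:
    try:
        answers = dns.resolver.resolve(domain, "CAA")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return True  # no CAA record at this name: issuance is not restricted
    issuers = [r.value.decode() for r in answers if r.tag == b"issue"]
    return not issuers or ca_id in issuers

print(caa_allows_issuance("example.com", "ca.example"))  # placeholder CA id
```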

Peter Kurrasch

Jun 30, 2016, 1:13:53 PM
to Daniel Veditz, Rob Stradling, mozilla-dev-s...@lists.mozilla.org, Eddy Nigg (StartCom Ltd.)
Let's be even more pointed: How do we know that *any* of the certs issued through this ‎interface were issued to the right person for the right domain? How can StartCom make that determination?

  Original Message  
From: Daniel Veditz
Sent: Thursday, June 30, 2016 12:04 PM‎

...

How many mis-issued certs were obtained by the researchers? Has there
been an investigation to see if there were similarly mis-issued certs
prior to this report?

Have those certs been revoked?

-Dan Veditz

Peter Kurrasch

Jun 30, 2016, 1:59:07 PM
to Tom Ritter, Eddy Nigg (StartCom Ltd.), Rob Stradling, mozilla-dev-s...@lists.mozilla.org
All good points. ‎I wonder if we need to start with something more basic: setting expectations.

Maybe we need to communicate to all participating CAs that we expect them to perform a security scan of all Internet-facing interfaces. That we expect each interface to be able to pass the OWASP Top Ten. That we expect a scan to be performed at least once per year.

To be sure, that's a pretty low bar, but I don't know that all CAs could pass even that minimal benchmark today. If they can't, that's a big problem.


  Original Message  
From: Tom Ritter
Sent: Thursday, June 30, 2016 11:57 AM‎

Richard Barnes

Jun 30, 2016, 2:05:16 PM
to Peter Kurrasch, Eddy Nigg (StartCom Ltd.), mozilla-dev-s...@lists.mozilla.org, Rob Stradling, Tom Ritter
TBH, the OWASP Top Ten is not really a metric, it's a set of general bins
of threats. There's no such thing as "passing the OWASP Top Ten".

I think you're going to struggle to establish any sort of objective,
general criteria. These protocol bugs are challenging to find and specific
to different operational models.

Honestly, I think the best thing for the industry is to align around one
issuance API so that CAs don't have to keep reinventing the wheel. In
addition to all the benefits that come from code reuse, it means that you
can go ahead and nail down a bunch of security stuff, so that there are
fewer ways for a CA deploying a new API to screw up. For example, several
of the issues with the StartEncrypt API have already been raised on the
ACME mailing list and addressed in the draft protocol. (And to be clear,
though I obviously think ACME is the bee's knees, I would be just as happy
to get alignment around *a* protocol.)

--Richard

Phillip Hallam-Baker

Jun 30, 2016, 2:52:37 PM
to Juergen Christoffel, dev-secur...@lists.mozilla.org
My college tutor, Tony Hoare, used his Turing Award acceptance speech to
warn people why that feature of C was a terrible architectural blunder.

If you are writing security code without strong type checking and robust
memory management with array bounds checking, then you are doing it wrong.

Jonathan Rudenberg

Jun 30, 2016, 3:54:41 PM
to Christiaan Ottow, dev-secur...@lists.mozilla.org

> On Jun 30, 2016, at 15:44, Christiaan Ottow <cot...@computest.nl> wrote:
>
The certificates we had issued to us as proof of concept (only for our own domains) were not revoked and we don't see them in the CT logs. However, we informed StartCom that we had only issued certificates for domains under our control, so I can imagine no red flags were raised by their helpdesk.

The lack of CT logging is interesting, as StartCom claims that all certificates they issue are being logged to at least three CT servers: https://www.startssl.com/NewsDetails?date=20160323

Do you mind uploading the certificate files that were obtained somewhere and linking us to them?

Thanks,

Jonathan

Andrew Ayer

Jun 30, 2016, 3:57:50 PM
to Christiaan Ottow, dev-secur...@lists.mozilla.org
On Thu, 30 Jun 2016 21:44:02 +0200
Christiaan Ottow <cot...@computest.nl> wrote:

> > On 6/30/16 8:30 AM, Rob Stradling wrote:
> > > https://www.computest.nl/blog/startencrypt-considered-harmful-today/
> > >
> > > Eddy, is this report correct? Are you planning to post a public
> > > incident report?
> >
> > Does StartCom honor CAA?
> >
> > Does StartCom publish to CT logs?
> >
> > How many mis-issued certs were obtained by the researchers? Has
> > there been an investigation to see if there were similarly
> > mis-issued certs prior to this report?
> >
> > Have those certs been revoked?
> >
> > -Dan Veditz
> >
>
> The certificates we had issued to us as proof of concept (only for
> our own domains), were not revoked and we don't see them in the CT
> logs. However, we informed StartCom that we had only issued
> certificates for domains under our control, so I can imagine no red
> flags were raised by their helpdesk.

Hi Christiaan,

First of all, thank you for conducting this research!

It's very interesting that you did not see the certs in CT, since it
would contradict StartCom's claim that they log all certs:
https://www.startssl.com/NewsDetails?date=20160323

I have a couple questions:

1. Did you check the CT logs at least 24 hours after the certificates
were issued? If not, the log entries might not have been incorporated
yet.

2. Do your certificates contain the embedded SCT extension (OID
1.3.6.1.4.1.11129.2.4.2)? If so, would you be willing to provide the
contents of the extension?

Regards,
Andrew
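For anyone who wants to check a certificate for that extension themselves, a rough sketch with the Python "cryptography" package (the file path is a placeholder, and a reasonably recent release with SCT support is assumed):

```python
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("cert.pem", "rb") as f:                    # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())
try:
    scts = cert.extensions.get_extension_for_oid(
        ExtensionOID.PRECERT_SIGNED_CERTIFICATE_TIMESTAMPS).value
    for sct in scts:
        print(sct.log_id.hex(), sct.timestamp)       # which log, and when
except x509.ExtensionNotFound:
    print("no embedded SCT extension (OID 1.3.6.1.4.1.11129.2.4.2)")
```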

Andrew Ayer

Jun 30, 2016, 4:00:59 PM
to Jonathan Rudenberg, dev-secur...@lists.mozilla.org, Christiaan Ottow
On Thu, 30 Jun 2016 15:54:02 -0400
Jonathan Rudenberg <jona...@titanous.com> wrote:

>
> > On Jun 30, 2016, at 15:44, Christiaan Ottow <cot...@computest.nl>
> > wrote:
> >
> > The certificates we had issued to us as proof of concept (only for
> > our own domains), were not revoked and we don't see them in the CT
> > logs. However, we informed StartCom that we had only issued
> > certificates for domains under our control, so I can imagine no red
> > flags were raised by their helpdesk.
>
> The lack of CT logging is interesting, as StartCom claims that all
> certificates they issue are being logged to at least three CT
> servers: https://www.startssl.com/NewsDetails?date=20160323
>
> Do you mind uploading the certificate files that were obtained
> somewhere and linking us to them?

It would be best not to release the full certificates quite yet, since
doing so would make it impossible to determine who logged them if they
later show up in CT logs.

Providing a hash of the certificate and the contents of the SCT
extension, if any, would be OK.

Regards,
Andrew

Andrew Ayer

Jun 30, 2016, 5:10:57 PM
to Christiaan Ottow, dev-secur...@lists.mozilla.org
On Thu, 30 Jun 2016 22:36:19 +0200
Christiaan Ottow <cot...@computest.nl> wrote:

> We acquired certificates for a private domain (and some subdomains)
> of the tester in question, and one for our domain pine.nl. Details of
> the latter are attached, with the modulus and signature left out. The
> SHA256 fingerprint of the certificate is:
> A7:E5:BD:6E:81:8F:A8:CE:FD:73:97:32:70:06:89:59:98:86:33:5A:06:7E:FD:ED:EA:B6:19:B3:3F:67:F6:A1

Thanks. There's no SCT extension, despite StartCom claiming to embed
SCTs in all certificates they issue. Also, the cert was issued over a
week ago, so even if StartCom was logging post-issuance the cert should
have been logged by now.
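One quick way to look for the certificate among the logs that crt.sh monitors is to search by the SHA-256 fingerprint quoted above. A rough sketch, assuming the "requests" package and that crt.sh's search form accepts fingerprints (the response check is only a heuristic):

```python
import requests

# SHA-256 fingerprint from the post above, colons removed
fp = ("A7E5BD6E818FA8CEFD73973270068959"
      "9886335A067EFDEDEAB619B33F67F6A1")
resp = requests.get("https://crt.sh/", params={"q": fp}, timeout=30)
resp.raise_for_status()
# Heuristic: crt.sh echoes the fingerprint on a certificate page when it
# knows of a matching log entry.
print("possible CT entry" if fp in resp.text.upper() else "no entry found")
```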

I would like to hear StartCom explain this as well.

Regards,
Andrew

Nick Lamb

Jun 30, 2016, 7:15:34 PM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, 30 June 2016 18:13:53 UTC+1, Peter Kurrasch wrote:
> Let's be even more pointed: How do we know that *any* of the certs issued through this ‎interface were issued to the right person for the right domain? How can StartCom make that determination?

Assuming that

* This was the only practical vulnerability that could lead to mis-issuance (other issues are listed in the article but without enough detail to be sure they were sufficient to cause mis-issuance - most certainly somebody independent of StartSSL should check these carefully)

* StartSSL keeps sufficiently detailed records of the circumstances in which they issued certificates, suitable to verify _for themselves_ what follows below

* We're interested only in the same degree of confidence we have assigned to existing DV certificates, such as those verified by email to a WHOIS contact

Then it seems we can achieve that by checking that the default /signfile URL was used for verification (not some arbitrary forced URL) and that no HTTP redirect was followed.

This would mean either the applicant did own the site, or a hypothetical attacker was somehow able to arrange for /signfile, without redirection, to contain the desired value, which would be tricky even for most pastebin-type sites and impossible for an ordinary web site.

I'm not _happy_ about this; as I understand it the CA/B Forum has already discussed a revision to the BRs that would require this sort of check to use the IETF-approved /.well-known/ prefix. The reason is that it's not inconceivable that somewhere on the web there is a site where an attacker can make paths like /signfile contain arbitrary text; it's a LOT less likely that they can do this to a path beginning /.well-known/acme-challenge/

Still though, it does achieve similar levels of confidence to the usual "send an email to an address we found in WHOIS and check the applicant can read it" approach we see today, and which there is seemingly no hurry to deprecate.

If they don't have the records to check whether redirects were followed or whether the default path was used, then I agree they can't hope to achieve the usual levels of confidence in their own validation prior to the fix.
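A rough sketch of the kind of check described above: fetch a fixed, CA-chosen path over HTTP without following redirects, and fail validation on anything other than an exact match. This is illustrative only, not StartCom's or ACME's actual validation code; real ACME http-01 compares a key authorization rather than a bare expected string.

```python
import requests

def validation_content_matches(domain: str, token: str, expected: str) -> bool:
    # Fixed, well-known path chosen by the CA, never one supplied by the applicant.
    url = f"http://{domain}/.well-known/acme-challenge/{token}"
    resp = requests.get(url, allow_redirects=False, timeout=10)
    # Any redirect (3xx) is treated as a failure rather than followed, so an
    # attacker cannot satisfy the check with content hosted somewhere else.
    return resp.status_code == 200 and resp.text.strip() == expected
```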

Matt Palmer

Jul 1, 2016, 1:16:48 AM
to dev-secur...@lists.mozilla.org
On Thu, Jun 30, 2016 at 11:10:45AM -0500, Peter Kurrasch wrote:
> Very interesting. This is exactly the sort of thing I'm concerned about
> with respect to Let's Encrypt and ACME.

Why? StartCom isn't the first CA to have had quite glaring holes in its
automated DCV interface and code, and I'm sure it won't be the last. What
is so special about Let's Encrypt and ACME that you feel the need to
constantly refer to it as though it's some sort of new and special threat to
the PKI ecosystem?

- Matt

Eddy Nigg

Jul 1, 2016, 3:35:20 AM
to mozilla-dev-s...@lists.mozilla.org
On 06/30/2016 06:30 PM, Rob Stradling wrote:
> https://www.computest.nl/blog/startencrypt-considered-harmful-today/
>
> Eddy, is this report correct? Are you planning to post a public
> incident report?

Hi Rob and all,

There were indeed a couple of issues with the client software - known
bugs have been fixed by our developers (hope there won't be anything more
significant than that :-) ).

So far fewer than three hundred certificates have been issued using this
method; none should have been effectively issued wrongfully, due to our
backend checks.

At the moment I don't believe that a public incident report is
necessary, but should anything change in our current assessment we will
obviously act accordingly. I instructed additional verifications and
confirmations to assert that assessment.

--
Regards
Signer: Eddy Nigg, COO/CTO
StartCom Ltd. <http://www.startcom.org>
XMPP: star...@startcom.org <xmpp:star...@startcom.org>
Blog: Join the Revolution! <http://blog.startcom.org>
Twitter: Follow Me <http://twitter.com/eddy_nigg>

Patrick Figel

Jul 1, 2016, 10:54:31 AM
to dev-secur...@lists.mozilla.org
On Friday, July 1, 2016 at 9:35:20 AM UTC+2, Eddy Nigg wrote:
> So far less than three hundred certificates have been issued using
> this method, none should have been effectively issue wrongfully due
> to our backend checks.

Can you comment on how your backend checks would have prevented any
misissuance? My understanding of the report is that this was not so much
an issue with the client software, but rather an oversight in the
protocol that allows Domain Validation checks that are not sufficient in
assuring domain ownership, thus the issue was very much a backend issue.
I assume there are reasonable controls in place to prevent misissuance
for high-risk domains, but what about other domains? Would they have
been affected by this?

I would also be curious about why the certificate has not been logged to
CT, given StartCom's prior statements with regards to CT adoption.

Peter Kurrasch

Jul 1, 2016, 3:44:00 PM
to Matt Palmer, dev-secur...@lists.mozilla.org
The only reason I'm focusing on Let's Encrypt and ACME is that they are currently under review for inclusion. As far as I'm concerned, all CAs with similar interfaces warrant this extra scrutiny.

I am somewhat curious whether any of this has come up before in other forums--that these interfaces can be abused and lead to certificate mis-issuance?


  Original Message  
From: Matt Palmer
Sent: Friday, July 1, 2016 12:16 AM
To: dev-secur...@lists.mozilla.org
Subject: Re: StartEncrypt considered harmful today

Nick Lamb

Jul 1, 2016, 4:26:52 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, 1 July 2016 20:44:00 UTC+1, Peter Kurrasch wrote:
> Only reason I'm focusing on Let's Encrypt and ACME is because they are currently under review for inclusion.‎ As far as I'm concerned all CA's with similar interfaces warrant this extra scrutiny.
>
> I am somewhat curious if any of this has come up before in other forums--that these interfaces can ‎be abused and lead to certificate mis-issuance? 

As I understand it StartCom sprang their protocol and its implementation, which are proprietary and very thinly documented, as a surprise from first announcement to general availability in a day or less - presumably for commercial advantage. I'm not aware of - and suspect there hasn't been any - independent analysis of their system.

ACME is a protocol intended to become an IETF Standards Track RFC. You are welcome to read the existing discussions of the protocol, or to participate (subject to usual IETF rules) https://www.ietf.org/mailman/listinfo/acme. As with Mozilla's inclusion process the IETF process ends up partly being a test of endurance, as even simple ideas are dragged out over several months with posts that have some technical meat being mixed in with axe-grinding and larger politics.

Let's Encrypt's implementation of ACME, Boulder, is on github for anyone to inspect. I am not aware of any independent formal analysis, but it's obvious from the contributions to Boulder that people outside Let's Encrypt do look at it.

Eddy Nigg

Jul 2, 2016, 2:39:48 PM
to mozilla-dev-s...@lists.mozilla.org
On 07/01/2016 05:54 PM, Patrick Figel wrote:
>
> Can you comment on how your backend checks would have prevented any
> misissuance? My understanding of the report is that this was not so much
> an issue with the client software, but rather an oversight in the
> protocol that allows Domain Validation checks that are not sufficient in
> assuring domain ownership, thus the issue was very much a backend issue.
> I assume there are reasonable controls in place to prevent misissuance
> for high-risk domains, but what about other domains? Would they have
> been affected by this?

Hi Patrick,

Depending on the flagging parameters and the attending certificate
officer, some certificates might or might not have been issued -
I'm careful with this statement because suspicion can arise for one
reason or another, but it's not 100%. High-profile names would have been
flagged and not issued, though.

> I would also be curious about why the certificate has not been logged to
> CT, given StartCom's prior statements with regards to CT adoption.

We are checking it; it might have been logged in the wrong place. I'll
try to provide an answer on this too when possible.

jo...@letsencrypt.org

Jul 3, 2016, 1:58:16 AM
to mozilla-dev-s...@lists.mozilla.org
On Friday, July 1, 2016 at 3:26:52 PM UTC-5, Nick Lamb wrote:
> ACME is a protocol intended to become an IETF Standards Track RFC. You are welcome to read the existing discussions of the protocol, or to participate (subject to usual IETF rules) https://www.ietf.org/mailman/listinfo/acme. As with Mozilla's inclusion process the IETF process ends up partly being a test of endurance, as even simple ideas are dragged out over several months with posts that have some technical meat being mixed in with axe-grinding and larger politics.
>
> Let's Encrypt's implementation of ACME, Boulder, is on github for anyone to inspect. I am not aware of any independent formal analysis, but it's obvious from the contributions to Boulder that people outside Let's Encrypt do look at it.

We'll probably never be 100% sure that ACME or any other protocol doesn't contain serious flaws but we've done quite a bit to build up confidence that ACME is secure. Here's a list, in no particular order:

1) Clearly documented the spec in public. This allows anyone to read it and give us feedback, from day one and into the future. We know that people do look at it and provide valuable feedback (Andrew Ayer and Martin Thomson are good examples). Our server implementation of the spec is also open source.

2) We're working to get ACME standardized in the IETF. This gets us more high quality feedback.

3) We paid to have the cryptography and application security teams at NCC Group audit the spec. This is also true for our Boulder software (at least annually).

4) We commissioned a review and formal modeling of the ACME protocol from Karthikeyan Bhargavan (INRIA).

5) ACME has proven to be quite solid in production so far. It has been used to safely issue millions of certificates.

6) We have skilled technical staff, community, and partners who helped to build ACME and continue to review and think about it every day. These people are not only skilled, but have strong reputations in the security, cryptography, and open source communities.

Like I said, we probably can't ever be 100% sure there aren't serious flaws in ACME, but I think this constitutes pretty reasonable due diligence. If/when the next flaw is found, it'll be found earlier than it might have been due to our transparency.

I don't know enough about what StartCom is doing to comment on this thread's actual topic, but I do wish others would use ACME (if ACME doesn't work for someone, let's talk about what we might do to fix that in the IETF working group). If not ACME, then at least something with a well-written public specification.

CAs ask the public to trust them and there is no trust without transparency.

Peter Gutmann

Jul 6, 2016, 4:50:46 AM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
Nick Lamb <tiala...@gmail.com> writes:

>ACME is a protocol intended to become an IETF Standards Track RFC.

Oh dear God, another one? We've already got CMP, CMC, SCEP, EST, and a whole
slew of other ones that failed to get as far as RFCs, which all do what ACME
is trying to do. What's the selling point for ACME? That it blows up in your
face at the worst possible time?

Peter.

Nick Lamb

Jul 6, 2016, 9:16:14 AM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, 6 July 2016 09:50:46 UTC+1, Peter Gutmann wrote:
> Oh dear God, another one? We've already got CMP, CMC, SCEP, EST, and a whole
> slew of other ones that failed to get as far as RFCs, which all do what ACME
> is trying to do. What's the selling point for ACME? That it blows up in your
> face at the worse possible time?

In the examples I've reviewed the decision seems to have been made (either explicitly or tacitly) to leave the really difficult problem - specifically achieving confidence in the identity of the subject - completely unaddressed. ACME went out of its way to address it for the domain we care about around here.

Your work on SCEP is probably appreciated by people who aren't interested in that problem, but this forum is concerned with the Web PKI, where that problem is pre-eminent, and this thread is about another provider, StartCom trying and failing to solve that problem.

So the answer to your question is that ACME's selling point is that it solves the problem lots of people actually have, a problem which was traditionally solved by various ad hoc methods whose security (or more often otherwise) was only inspected after the fact rather than being considered in advance.

I presume the "blows up in your face" comment was purely because of ACME's hilarious choice of name, but if not please elaborate _in a thread about ACME_

Richard Barnes

Jul 6, 2016, 10:15:47 AM
to Peter Gutmann, Nick Lamb, mozilla-dev-s...@lists.mozilla.org
On Wed, Jul 6, 2016 at 4:50 AM, Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Nick Lamb <tiala...@gmail.com> writes:
>
> >ACME is a protocol intended to become an IETF Standards Track RFC.
>
> Oh dear God, another one? We've already got CMP, CMC, SCEP, EST, and a
> whole
> slew of other ones that failed to get as far as RFCs, which all do what
> ACME
> is trying to do. What's the selling point for ACME? That it blows up in
> your
> face at the worse possible time?
>

Read the draft, man. ACME is targeted at problems that none of those
other protocols solve -- most critically, enabling the applicant to
demonstrate control of an identifier. That's the reason you have all of
these CA proprietary APIs and ACME; these previous efforts failed to solve
the problems people actually cared about.

--Richard


>
> Peter.

Peter Gutmann

Jul 6, 2016, 8:52:23 PM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
Nick Lamb <tiala...@gmail.com> writes:

>In the examples I've reviewed the decision seems to have been made (either
>explicitly or tacitly) to leave the really difficult problem - specifically
>achieving confidence in the identity of the subject - completely unaddressed.

There wasn't any decision to leave it unaddressed; no-one had ever expressed
any interest in this at any point during the work on the previous protocols,
so there's nothing about it in any of the specs. If anyone did care about it,
it shouldn't be too hard to add support for it to any of the existing
protocols.

>So the answer to your question is that ACME's selling point is that it solves
>the problem lots of people actually have

Well, it solves a problem that no previous protocol, or potential user of the
protocol, had even acknowledged as a problem before. Whether that's (a) worth
creating an entirely new protocol rather than just adding support for it to an
existing, long-established one and (b) will make said new protocol a success
when every other attempt to do this has failed, is another matter.

>I presume the "blows up in your face" comment was purely because of ACME's
>hilarious choice of name,

You guys really need to do some work on that one :-).

Peter.

Nick Lamb

Jul 6, 2016, 11:16:47 PM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, 7 July 2016 01:52:23 UTC+1, Peter Gutmann wrote:
> There wasn't any decision to leave it unaddressed, no-one had ever expressed
> any interest in this at any point during the work on the previous protocols,
> so there's nothing about it in any of the specs.

This claim is plainly false. Early drafts of SCEP, before it confined itself to "closed networks", even spell out what the problem is before basically saying they're not going to make any real attempt to tackle it.

CMP, CMC and SCEP all resort to saying that some "out of band" mechanism should be used to verify that the applicant is or controls the subject DN and treat this problem as completely out of scope. Even by 2005 this should have seemed like weak sauce indeed.

> If anyone did care about it,
> it shouldn't be too hard to add support for it to any of the existing
> protocols.

"Schneier's Law" very much applies.

> Well, it solves a problem that no previous protocol, or potential user of the
> protocol, had even acknowledged as a problem before. Whether that's (a) worth
> creating an entirely new protocol rather than just adding support for it to an
> existing, long-established one and (b) will make said new protocol a success
> when every other attempt to do this has failed, is another matter.

Each week several hundred thousand certificates are issued using (an earlier draft of) ACME by what is now, as a result, one of the Web PKI's top five Certificate Authorities in terms of how many sites use its certificates.

I'm content to label this "success" even before ACME becomes an RFC.

Peter Gutmann

Jul 8, 2016, 2:04:49 AM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
Nick Lamb <tiala...@gmail.com> writes:

>Early drafts of SCEP, before it confined itself to "closed networks" even
>spell out what the problem is before they basically say they're not going to
>make any real attempt to tackle it. CMP, CMC and SCEP all resort to saying
>that some "out of band" mechanism should be used to verify that the applicant
>is or controls the subject DN and treat this problem as completely out of
>scope.

Various SCEP drafts have contained all sorts of stuff that was dropped when
no-one cared about it. The "out of band"/"beyond the scope of this document"
is standard boilerplate that's used when no-one cares enough to include it in
the document. In fact it pretty much explicitly says that it's not covered in
the doc because no-one cared how it was done.

So I'll repeat this again: It wasn't added to any existing protocol because
no-one's ever cared about it before. If people do care about it, why not add
it to any one of the many existing protocols rather than inventing yet another
incompatible way of doing what numerous other protocols already do?

Or is it that ACME is just a desperate attempt to derail StartCom's
StartEncrypt at any cost?

>"Schneier's Law" very much applies.

What does that have to do with no-one bothering to add whatever magic
ingredient ACME is claiming to have to any other protocol that does the same
thing? Or are you claiming that ACME is flawed because it's a reinvention of
the wheel by amateurs (which is what Schneier's Law would be saying)? That
seems a bit unlikely...

>Each week several hundred thousand certificates are issued using (an earlier
>draft of) ACME by what is now as a result one of the Web PKI's top five
>Certificate Authorities in terms of how many sites use its certificates.

OK, I think I can parse that convoluted sentence... in response: Each week who
knows how many certificates are issued using HTTP POST, Xenroll.dll, SCEP,
CMP, and who knows what else. What's your point?

Peter.

Patrick Figel

Jul 8, 2016, 5:25:25 AM
to dev-secur...@lists.mozilla.org
On 08/07/16 08:04, Peter Gutmann wrote:
> Or is it that ACME is just a desperate attempt to derail StartCom's
> StartEncrypt at any cost?

That doesn't make any sense - ACME has been in production for close to a
year, while StartAPI was launched this April (and StartEncrypt just a
couple of weeks ago).

Peter Gutmann

Jul 8, 2016, 5:50:58 AM
to Patrick Figel, dev-secur...@lists.mozilla.org
Patrick Figel <patf...@gmail.com> writes:

Fair enough. Just trying to figure out why someone would invent an entirely new
protocol rather than tweak any one of the existing ones.

Peter.

Peter Kurrasch

Jul 8, 2016, 10:36:39 AM
to mozilla-dev-s...@lists.mozilla.org
At Nick's request, I've changed the subject line. Also, for my part, my comments are not intended to single out ACME to the exclusion of other protocols or implementations to which my comments might equally apply.

I see on the GitHub site for the draft that updates are frequently and continuously being made to the protocol spec (at least one a week, it appears). Is there any formalized process to review the updates? Is there any expectation for when a "stable" version might be achieved (by which I mean that further updates are unlikely)? How are compatibility issues being addressed? Has any consideration been given to possible saboteurs who might like to introduce backdoors?

I personally don't see the wisdom in having the server implementation details in what is ostensibly a protocol specification. Will there be any sort of audit to establish compliance between a particular server implementation and this Internet-Draft? Will the client software be able to determine the version of the specification under which the server is operating? (I apologize if it is in the spec; I didn't do a detailed reading of it.)

On the client side, is there a document describing the details of an ideal implementation? Does the client inform the server to which version of the protocol it is adhering--for example, in a user-agent string (again, I didn't notice one)? Is there any test to validate the compliance of a client with a particular version of the Internet-Draft?

One thought for consideration is the idea of a saboteur who seeks to compromise the client software. This is of particular concern if the client software can also generate the key pair, since there are obvious benefits to bad actors if certain sites are using a weaker key. Just as Firefox is a target for malware, the developers of client-side software should be cognizant of bad actors who might seek to compromise their software.


From: Nick Lamb
Sent: Wednesday, July 6, 2016 8:16 AM
Subject: Re: StartEncrypt considered harmful today

On Wednesday, 6 July 2016 09:50:46 UTC+1, Peter Gutmann wrote:
> Oh dear God, another one? We've already got CMP, CMC, SCEP, EST, and a whole
> slew of other ones that failed to get as far as RFCs, which all do what ACME
> is trying to do. What's the selling point for ACME? That it blows up in your
> face at the worse possible time?

In the examples I've reviewed the decision seems to have been made (either explicitly or tacitly) to leave the really difficult problem - specifically achieving confidence in the identity of the subject - completely unaddressed. ACME went out of its way to address it for the domain we care about around here.

Your work on SCEP is probably appreciated by people who aren't interested in that problem, but this forum is concerned with the Web PKI, where that problem is pre-eminent, and this thread is about another provider, StartCom trying and failing to solve that problem.

So the answer to your question is that ACME's selling point is that it solves the problem lots of people actually have, a problem which was traditionally solved by various ad hoc methods whose security (or more often otherwise) was only inspected after the fact rather than being considered in advance.

I presume the "blows up in your face" comment was purely because of ACME's hilarious choice of name, but if not please elaborate _in a thread about ACME_

Patrick Figel

Jul 8, 2016, 5:43:39 PM
to dev-secur...@lists.mozilla.org
Before getting into specifics, I should say that you're likely to get a
better answer to most of these questions on the IETF ACME WG mailing list[1].

On 08/07/16 16:36, Peter Kurrasch wrote:
> I see on the gitub site for the draft that updates are frequently
> and continuously being made to the protocol spec (at least one a
> week, it appears). Is there any formalized process to review the
> updates? Is there any expectation for when a "stable" version might
> be achieved (by which I mean that further updates are unlikely)?‎

The IETF has a working group for ACME that's developing this protocol.
The IETF process is hard to describe in a couple of words (you can read
up on it on ietf.org if you're interested). Other related protocols such
as TLS are developed in a similar fashion.

> How are compatibility issues being addressed?

Boulder (the only ACME server implementation right now, AFAIK) plans to
tackle this by providing new endpoints (i.e. server URLs) whenever
backwards-incompatible changes are introduced in a new ACME draft, while
keeping the old endpoints available and backwards-compatible until the
client ecosystem catches up. I imagine once ACME becomes an internet
standard, future changes will be kept backwards-compatible (i.e.
"extensions" of some sort), but that's just me guessing.

> Has any consideration been given to possible saboteurs who might like
> to introduce backdoors?

The IETF process is public, which makes this harder (though not
impossible) to pull off. A number of people have reviewed and audited
the protocol (including a formal model[2]).

> I personally don't see the wisdom in having the server
> implementation details‎ in what is ostensibly a protocol
> specification.

Which part of the specification mentions implementation details?

> Will there be any sort of audit to establish compliance between a
> particular sever implementation and this Internet-Draft?

Someone could definitely build tools to check compliance, but who would
enforce this, and what happens to a server/client that's not compliant?

> Will the client software be able to determine the version of the
> specification under which the server is operating? (I apologize if it
> is in the spec; I didn't do a detailed reading of it.) On the client
> side, is there a document describing the details of an ideal
> implementation? Does the client inform the server to which version of
> the protocol it is adhering--for example, in a user-agent string
> (again, I didn't notice one). Is there any test to validate the
> compliance of a client with a particular version of the
> Internet-Draft?

See the previous paragraph on compatibility: Server URLs can be
considered backwards-compatible; there's currently no protocol version
negotiation or something like that.

> One thought for consideration is the idea of a saboteur who seeks to
> compromise the client software.‎ This is of particular concern if the
> client software can also generate the key pair since there are the
> obvious benefits to bad actors if certain sites are using a weaker
> key. Just as Firefox is a target for malware, the developers of
> client-side software should be cognizant of bad actors who might seek
> to compromise their software.

That's certainly something to keep in mind, but not something that can
be solved by the protocol. It's also not specific to ACME clients, the
same concern applies to any software that touches keys in the course of
normal operation. FWIW, functional ACME client implementations can be
written in < 200 LOC, which would be relatively easy to review, and a
client would not necessarily need access to the private key of your
certificate - a CSR would be sufficient.

[1]: https://www.ietf.org/mailman/listinfo/acme
[2]: https://mailarchive.ietf.org/arch/msg/acme/9HX2i0oGyIPuE-nZYAkTTYXhqnk

Nick Lamb

Jul 8, 2016, 5:48:07 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, 8 July 2016 07:04:49 UTC+1, Peter Gutmann wrote:
> Various SCEP drafts have contained all sorts of stuff that was dropped when
> no-one cared about it. The "out of band"/"beyond the scope of this document"
> is standard boilerplate that's used when no-one cares enough to include it in
> the document. In fact it pretty much explicitly says that it's not covered in
> the doc because no-one cared how it was done.

But alas, even if you didn't care, it does matter.

Which is why there's VU#971035

SCEP (and all the real SCEP implementations that I could find) take the optimistic view that this is somebody else's problem, and so the practical result is security theatre. Certificates are issued, public key mathematics is done, there is a superficial appearance of a secure system, but no useful assurance of identity is achieved and so no real threat is neutralised.

> What does that have to do with no-one bothering to add whatever magic
> ingredient ACME is claiming to have to any other protocol that does the same
> thing?

This idea that you should just be able to "add whatever magic ingredient" is the exact sort of naivety that Bruce is talking about.

> OK, I think I can parse that convoluted sentence... in response: Each week who
> knows how many certificates are issued using HTTP POST, Xenroll.dll, SCEP,
> CMP, and who knows what else. What's your point?

This is still mozilla.dev.security.policy. ACME automatically issues certificates that are trustworthy in the web PKI. That's the point of the protocol and the point of my statistic.

Counting up certificates that aren't ever going to be trusted by Mozilla's software may make you feel better about the time you invested, but it's not relevant to this group.

Peter Gutmann

Jul 10, 2016, 9:29:34 AM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
Nick Lamb <tiala...@gmail.com> writes:

>SCEP (and all the real SCEP implementations that I could find) take the
>optimistic view that this is somebody else's problem, and so the practical
>result is security theatre.

Uhh, do you even know how SCEP is used? When you're provisioning a VPN
gateway, SCADA device, an iPhone, or one of the other systems that SCEP is
used with, the cert issuance is auth'd with a PSK unique to that system, so you
know that if you see a cert with DNP3 ID ABC or AppleID XYZ, you really are
talking to the exact device/service you think you're talking to.
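For readers unfamiliar with that mechanism: in SCEP the enrolment request typically carries a PKCS#9 challengePassword attribute holding the per-device pre-shared secret, which the CA/RA compares against what it provisioned for that device. A hedged sketch of just that attribute (names and the secret are placeholders, not any vendor's enrolment code; a recent Python "cryptography" release is assumed for add_attribute):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import AttributeOID, NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-0001")]))
    # The per-device PSK travels as the challengePassword attribute in the CSR.
    .add_attribute(AttributeOID.CHALLENGE_PASSWORD, b"per-device-shared-secret")
    .sign(key, hashes.SHA256())
)
```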

In contrast with the web PKI (and presumably ACME), you think you're talking
to Paypal but you could just as easily be talking to a Paypal phishing site.
Here's one example, a phishing email identified by Paypal security as being a
phish, redirecting people to an equally phishy site www.paypal-special.com,
which however had an EV cert with a serial number that duplicated that of a
genuine Paypal cert:
http://security.stackexchange.com/questions/49200/apparently-paypal-affiliated-site-with-very-suspect-security/58470
Is it genuine or a phish? Paypal security says it's a phish, the EV cert says
it isn't.

So SCEP provides pretty strong assurance of who you're talking to; ACME (if it
follows the web PKI) provides the illusion of assurance (you think you're
talking to Paypal but it's actually a phisher). The fact that SCEP, in its
original design, doesn't do DV or whatever isn't a shortcoming of the spec;
it's because there's no need for such a low-assurance mechanism when you've
already got a high-assurance mechanism available.

Having said that, if you do think DV or whatever provides some guarantee of
security, there's nothing preventing you from adding it to SCEP, CMP, CMC,
EST, or whatever.

>Certificates are issued, public key mathematics is done, there is superficial
>appearance of a secure system but no useful assurance of identity is achieved
>and so no real threat is neutralised.

That's actually a pretty good description of the web PKI. And, presumably,
ACME if that's what it's meant to automate.

>This idea that you should just be able to "add whatever magic ingredient" is
>the exact sort of naivety that Bruce is talking about.

So adding a minor extension to a proven, established protocol is naivety but
inventing a totally new, unproven one from scratch isn't?

Look, I can see that this is obviously an issue of religious dogma for you
that ACME is perfect and everything else isn't, in light of which it doesn't
seem productive to continue this discussion, it'll just annoy everyone else
who's having to listen. Also, while it was fun initially, I'm now getting
bored. I'll bow out now.

Peter.

Nick Lamb

Jul 10, 2016, 7:09:56 PM
to mozilla-dev-s...@lists.mozilla.org
On Sunday, 10 July 2016 14:29:34 UTC+1, Peter Gutmann wrote:
> Uhh, do you even know how SCEP is used? When you're provisioning a VPN
> gateway, SCADA device, an iPhone, or one of the other systems that SCEP is
> used with, the cert issue is auth'd with a PSK unique to that system, so you
> know that if you see a cert with DNP3 ID ABC or AppleID XYZ, you really are
> talking to the exact device/service you think you're talking to.

You've described the optimism; I was talking about the reality. I already provided the CERT reference for where the chasm between them opens up.

Apple provide a nice diagram for would-be deployers in which the actual verification step is labelled "optional", and they provide source code for a reference implementation of their profile service needed to issue iPhones with a certificate, which is more or less a complete working system... and does no verification whatsoever. Funny thing about "reference implementations", they tend to become the actual implementation by a process of copy-paste.

> In contrast with the web PKI (and presumably ACME), you think you're talking
> to Paypal but you could just as easily be talking to a Paypal phishing site.

You'd have been talking to PayPal, specifically their "Partner Support" department, which will be a group managing affinity relationships within PayPal.

> Here's one example, a phishing email identified by Paypal security as being a
> phish, redirecting people to an equally phishy site www.paypal-special.com,
> which however had an EV cert with a serial number that duplicated that of a
> genuine Paypal cert:
> http://security.stackexchange.com/questions/49200/apparently-paypal-affiliated-site-with-very-suspect-security/58470).

A genuine EV certificate for a genuine (but crappy) PayPal marketing site.

2.5.4.5 aka id-at-serialNumber is an X500 OID used in the subject DN. As such it identifies the subject of the certificate, not the certificate itself. In particular for the web PKI id-at-serialNumber will be the identifier for this company in a register or index, such as in this case PayPal's company number 3014267 in the Delaware Division of Corporations.

The certificates themselves do, as expected and required, each have their own completely different serial numbers, and they're both genuine. The fact that human agents behave inconsistently (in transcribing or not the punctuation at the end of PayPal Inc.) is permitted by the current BRs since this information is intended to be human readable.
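To make the distinction concrete, here is a small sketch (file path is a placeholder) showing the two different values with the Python "cryptography" package: the certificate's own serial number versus the id-at-serialNumber (2.5.4.5) attribute in the subject DN.

```python
from cryptography import x509
from cryptography.x509.oid import NameOID

with open("cert.pem", "rb") as f:            # placeholder path
    cert = x509.load_pem_x509_certificate(f.read())

# The serial number of the certificate itself, unique per certificate.
print("certificate serial:", format(cert.serial_number, "x"))

# The X.500 serialNumber attribute (2.5.4.5) in the subject DN, if present;
# for an EV cert this is typically a company registration number.
for attr in cert.subject.get_attributes_for_oid(NameOID.SERIAL_NUMBER):
    print("subject DN serialNumber:", attr.value)
```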

> Is it genuine or a phish? Paypal security says it's a phish, the EV cert says
> it isn't.

So far as I can see, a Customer Services agent pushed the button to generate a form reply "thanks for telling us about phishing" email out to a concerned customer. Maybe they should not have done that. Calling this person "Paypal security" without the pay jump that would imply seems a bit unfair; they're probably expected to process dozens or even hundreds of such emails per hour.

PayPal had been operating this site since at least 2013, but they eventually shut it down some time after that post, perhaps because it gave a negative impression, as you've demonstrated.

> So SCEP provides pretty strong assurance of who you're talking to

SCEP itself provides no assurance whatever, as you were originally proud to point out that "no-one cared". It is possible to achieve some assurance out of band but I still haven't seen examples of this in the wild.

> ACME (if it follows the web PKI) provides the illusion of assurance
> (you think you're talking to Paypal but it's actually a phisher).

ACME itself today provides reasonable assurance only as to the (DNS) name of the entity you're talking to. Semi-automation of EV has come a long way, but I doubt that fully automated issuance of EV certificates is on the horizon. It is anticipated that an outfit like Symantec (VeriSign), if they used ACME for EV, would add one or more ACME challenges that could only be completed manually and require those for their EV certificates.

> The fact that SCEP, in its
> original design, doesn't do DV or whatever isn't a shortcoming of the spec,
> it's because there's no need for such a low-assurance mechanism when you've
> already got a high-assurance mechanism available.

Previously you said nobody cared; now you claim they have an unspecified "high-assurance mechanism". The reality is closer to the former, unfortunately.

> So adding a minor extension to a proven, established protocol is naivety but
> inventing a totally new, unproven one from scratch isn't?

Insistence that it would constitute a "minor extension" is all yours and I think reflects further on your naivety. The challenge mechanisms make up a considerable fraction of the entire ACME standard on paper, and practical experience shows that this is the hardest part to get right.

> Look, I can see that this is obviously an issue of religious dogma for you
> that ACME is perfect and everything else isn't

ACME is much too complicated to ever be perfect, but while you were bothering people in this unrelated thread, development of ACME continued, draft-ietf-acme-acme-03 was published and discussion of the open items continued in the appropriate IETF working group.

> it doesn't seem productive to continue this discussion, it'll just annoy
> everyone else who's having to listen. Also, while it was fun initially,
> I'm now getting bored. I'll bow out now.

Thank you. In the event you have anything actually useful to contribute to ACME, please do it on the ACME Working Group mailing list or bring it to IETF 96. If you have contributions (other than general disdain) to the state of the Web PKI as it pertains to Mozilla, you should put them in a new thread here in mozilla.dev.security.policy with an appropriate subject line.

Peter Kurrasch

Jul 19, 2016, 11:00:00 PM
to mozilla-dev-s...@lists.mozilla.org
Thanks, Patrick. This is helpful. A few answers/responses:

Regarding the on-going development of the spec: I was thinking more about the individual commits on github and less about the IETF process. I presume that most commits will not get much scrutiny but a periodic (holistic?) review of the doc is expected to find and resolve conflicts, etc. Is that a fair statement?

The report on the security audit was interesting to read. It's good to see someone even attempted it. In addition to the protocol itself it would be interesting to see an analysis of an ACME server (Boulder, I suppose). Maybe someone will do some pentesting at least?

The 200 LOC is an interesting idea. I assume such an implementation would rely heavily on external libraries for certain functions (e.g. key generation‎, https handling, validating the TLS certificate chain provided by the server, etc.)? If so, does anyone anticipate that someone will develop a standalone, all-in-one (or mostly-in-one) client? Is a client expected to do full cert chain validation including revocation checks?


In the -03 version of the draft, section 6.1 is where I felt the spec was getting too much into server implementation details. I think there were some other spots where "server must" statements felt a little over-specified.


After reading the latest draft of the spec and the audit report, I figured I would offer up my take on the "state of the protocol", if you will.‎ I know there will be sharp disagreements and that's fine; this is but one person's perspective.

In terms of an overly broad, overly general statement, the protocol strikes me as being too new, too immature. There are gaps to be filled, complexities to be distilled, and (unknown) problems to be fixed. I doubt this comes as new information to anyone, but I think there's value in recognizing that the protocol has not had the benefit of time for it to reach its full potential.

The big, unaddressed (or insufficiently addressed) issue as I see it is compatibility. This is likely to become a bigger problem should other CAs deploy ACME and as interdependencies grow over time. Plus, when vulnerabilities are found and resolved, compatibility problems become inevitable (the security audit results hint at this).

The versioning strategy of having CAs provide different URLs for different versions of different clients might not scale well. One should not expect all cert applicants to have and use only the latest client software. This approach might work for now but it could easily become unmanageable. Picture, if you will, a CA that must support 20 different client versions and the headaches that can bring.

My recommendation is for the protocol to accommodate version information (data and status codes) but for a separate document to discuss deployment details. A deployment doc could also be used to cover the pros and cons of using one server to do both ACME and other Web sites and services. The chief concern is whether a vulnerability in the web site can lead to remote code execution, which can then impact handling on the ACME side of the fence. Just a thought.

Thanks.

Patrick Figel

Jul 20, 2016, 5:25:27 AM
to dev-secur...@lists.mozilla.org
On 20/07/16 04:59, Peter Kurrasch wrote:
> Regarding the on-going development of the spec: I was thinking more
> about the individual commits on github and less about the IETF
> process. I presume that most commits will not get much scrutiny but
> a periodic (holistic?) review of the doc is expected to find and
> resolve conflicts, etc. Is that a fair statement?

Yep, the GitHub repository is not what I would call the canonical source
of the "approved" draft produced by the working group. Implementers
should look at the published drafts (-01, -02, -03) and the final RFC
once it's released.

> The report on the security audit was interesting to read. It's good
> to see someone even attempted it. In addition to the protocol itself
> it would be interesting to see an analysis of an ACME server
> (Boulder, I suppose). Maybe someone will do some pentesting at
> least?

I'm having difficulties finding a source for this, but I seem to recall
that in addition to the WebTrust audit, ISRG hired an independent
infosec company to perform a pentest/review of Boulder. FWIW, I don't
think this is an ACME/Let's Encrypt-specific concern, and I'd personally
be much more worried about the large number of other CAs whose CA
software is closed-source (and thus impossible for anyone to review).

> The 200 LOC is an interesting idea. I assume such an implementation
> would rely heavily on external libraries for certain functions (e.g.
> key generation‎, https handling, validating the TLS certificate chain
> provided by the server, etc.)? If so, does anyone anticipate that
> someone will develop a standalone, all-in-one (or mostly-in-one)
> client? Is a client expected to do full cert chain validation
> including revocation checks?

acme-tiny[1] would be an example of a client that comes in at just shy
of 200 LOC. Yes, it definitely makes use of other libraries such as
OpenSSL. I'm not exactly sure what you're referring to with chain
validation/revocation checks? Communication with the CA server uses
HTTPS and validates the server's certificate, if that's what you mean.

> In terms of an overly broad, overly general statement, the protocol
> strikes me as being too new, too immature. There are gaps to be
> filled, complexities to be distilled, and (unknown) problems to be
> fixed. I doubt this comes as new information to anyone but I think
> there's value in recognizing that the protocol has not had the
> benefit of time for it to reach it's full potential.

The IETF process might be far from perfect (and certainly not what
anyone would call fast), but it's currently most likely the best and
most secure way for the internet to come up with new protocols. In the
context of publicly-trusted CAs, I personally doubt that any CA has put
in the same amount of effort for any of their internal or external APIs
for certificate issuance, and past examples show this to be true (see
the recent StartCom fiasco). In that context, I don't see why we should
allow CAs to continue using their own proprietary systems for issuance
while at the same time calling ACME too new and immature to be trusted
with the security of the Web PKI.

> The big, unaddressed (or insufficiently addressed) issue as I see
> it‎ is compatibility. This is likely to become a bigger problem
> should other CA's deploy ACME and as interdependencies grow over
> time. Plus, when vulnerabilities are found and resolved,
> compatibility problems become inevitable (the security audit results
> hint at this).
>
> The versioning strategy of having CA's provide different URL's for
> different versions of different clients might not scale well.‎ One
> should not expect all cert applicants to have and use only the
> latest client software. This approach might work for now but it could
> easily become unmanageable. Picture, if you will, a CA that must
> support 20 different client versions and the headaches that can
> bring.

I think you're overestimating the number of incompatible API endpoints
ACME CAs will launch in the first place. There's a good chance this
won't happen at all for Let's Encrypt until the final RFC is released,
at which point we're looking at two endpoints to maintain. In the
meantime, backwards-compatible changes from newer drafts can continue to
be pulled into the current endpoint. Let's Encrypt has recently added
some documentation on this matter[2].

> [...] a separate document to discuss deployment details. A deployment
> doc could also be used to cover the pro's and con's of using one
> server to do both ACME and other Web sites and services. The chief
> concern is if a vulnerability in the web site can lead to remote code
> execution which can then impact handling on the ACME side of the
> fence. Just a thought.

There are a number of other documents that specify operational details
for publicly-trusted CAs, such as the Baseline or Network Security
Requirements. I certainly hope there's something in there that would
prevent CAs from hosting issuance-related code on the same
infrastructure as their public web site. I seem to recall that there was
a discussion on the ACME mailing list regarding this (or something
similar?) where it was decided that ACME should not attempt to
re-implement the Baseline Requirements (and other documents relevant to
CAs), but rather focus on (operational) details that are specific to
ACME. Separating the web site from your issuance infrastructure seems
like a general recommendation that's not particularly specific to ACME.


[1]: https://github.com/diafygi/acme-tiny
[2]: https://letsencrypt.org/docs/acme-protocol-updates/