
Intermediates Supporting Many EE Certs


Gervase Markham

Feb 13, 2017, 7:23:47 AM
to mozilla-dev-s...@lists.mozilla.org
The GoDaddy situation raises an additional issue.

Mozilla is neither adding any of the 8951 revoked certificates to
OneCRL, nor untrusting any GoDaddy intermediates. However, a more
serious incident might have led us to consider that course of action. In
that regard, the following information is worth considering.

The certificates in the GoDaddy case chain up to two different
intermediates:

GoDaddy Secure Certificate Authority - G2 : 6563
Starfield Secure Certificate Authority - G2 : 2388

Those two intermediates together support over a million certificates,
with a roughly 90/10 split between GoDaddy and Starfield. GoDaddy does
not have a rotation policy for intermediates. Un-trusting such
intermediates from the current date onwards is possible; un-trusting
them from a date in the past would be extremely impactful to sites.
Using this particular problem as an example, it occurred between July
2016 and January 2017. If we wanted to untrust these intermediates from
July 2016 onwards, that would affect (by my rough guess) around 300,000
certificates. (The actual figure will depend upon the lifetime
distribution histogram.) Contacting that many customers and getting them
to rotate their certs would be a gargantuan effort.
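
As a rough illustration of how that lifetime distribution drives the
estimate, here is a back-of-envelope sketch in Python; the population size,
window length and lifetime mix are all hypothetical, not GoDaddy's actual
figures:

    # Back-of-envelope: of the currently-valid certificates under an intermediate,
    # how many were issued after a given cut-off date? Assumes steady-state
    # issuance; every number below is made up for illustration.
    LIVE_CERTS = 1_000_000                 # hypothetical total of currently-valid certs
    WINDOW_DAYS = 230                      # roughly July 2016 to mid-February 2017
    LIFETIME_MIX = {365: 0.2, 730: 0.3, 1095: 0.5}   # lifetime in days -> share of live certs

    affected = sum(
        LIVE_CERTS * share * min(WINDOW_DAYS / lifetime, 1.0)
        for lifetime, share in LIFETIME_MIX.items()
    )
    print(f"~{affected:,.0f} certificates issued since the cut-off are still live")

With these made-up inputs the answer is around 325,000, in the same ballpark
as the rough guess above; a mix skewed towards one-year certificates pushes
it much higher.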

What can be done about the potential future issue (which might happen
with any large CA) of the need to untrust a popular intermediate?
Suggestions welcome.

Gerv

Steve Medin

Feb 13, 2017, 11:18:46 AM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org

> -----Original Message-----
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve_medin=symant...@lists.mozilla.org] On Behalf Of
> Gervase Markham via dev-security-policy
> Sent: Monday, February 13, 2017 7:23 AM
> To: mozilla-dev-s...@lists.mozilla.org
> Subject: Intermediates Supporting Many EE Certs
>
>
> What can be done about the potential future issue (which might happen with
> any large CA) of the need to untrust a popular intermediate?
> Suggestions welcome.
>
> Gerv
>

Either timespan or total certificates issued limits, as ballots, accounting for quantity growth from the end entity certificate lifespan reduction proposals, would be an approach.

Getting all user agents with interest is issuance limits to implement the CA Issuers form of AIA for dynamic path discovery and educating server operators to get out of the practice of static chain installation on servers would make CA rollovers fairly fluid and less subject to operator error of failing to install the proper intermediate.

Ryan Sleevi

Feb 13, 2017, 12:26:15 PM
to Steve Medin, mozilla-dev-s...@lists.mozilla.org, Gervase Markham
On Mon, Feb 13, 2017 at 8:17 AM, Steve Medin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Getting all user agents with interest is issuance limits to implement the
> CA Issuers form of AIA for dynamic path discovery and educating server
> operators to get out of the practice of static chain installation on
> servers would make CA rollovers fairly fluid and less subject to operator
> error of failing to install the proper intermediate.


Can you explain more to support that statement?

The issue that Gerv is discussing is primarily related to intermediate
issuance; a CA can easily roll over to a new intermediate and provide their
customers a holistic chain that represents a path to a Mozilla root. The
issue you describe - with AIA fetching - is one primarily restricted to
handling _root_ rollover, not _intermediate_ rollover; that is, when you're
constructing an alternative trust path for a set of existing certificates,
rather than, as Gerv raised, ensuring that new certificates come from a
single ('new') trust path once the existing intermediate has been
'exhausted'.

While a strong proponent of AIA, I don't believe your argument here is
relevant, although I'm quite happy to understand what technical criteria
exist that make you believe it would be beneficial to address this specific
problem.

okaphone.e...@gmail.com

Feb 13, 2017, 12:34:24 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, 13 February 2017 13:23:47 UTC+1, Gervase Markham wrote:
> The GoDaddy situation raises an additional issue.
>
> What can be done about the potential future issue (which might happen
> with any large CA) of the need to untrust a popular intermediate?
> Suggestions welcome.
>
> Gerv

Isn't this mostly something that CAs should keep in mind when they set up "shop"?

I mean it would be nice to have a way of avoiding that kind of impact of course, but if they think it's best to put all their eggs in one basket... ;-)

On the other hand, it's probably not possible to foresee what kind of partitioning will be needed if/when things go wrong.

CU Hans

David E. Ross

Feb 13, 2017, 12:53:32 PM
to mozilla-dev-s...@lists.mozilla.org
On 2/13/2017 8:17 AM, Steve Medin wrote:
> Getting all user agents with interest is issuance limits to implement
> the CA Issuers form of AIA for dynamic path discovery and educating
> server operators to get out of the practice of static chain
> installation on servers would make CA rollovers fairly fluid and less
> subject to operator error of failing to install the proper
> intermediate.

That is all one, very long sentence, far too long to really understand.
On top of that, I think
> with interest is issuance limits
should have been
> with interest in issuance limits
but the sentence is so long, I am not sure.

--
David E. Ross
<http://www.rossde.com/>

Paraphrasing Mark Twain, who was quoting someone else:
There are three kinds of lies: lies, damned lies, and
alternative truths.

Nick Lamb

Feb 13, 2017, 1:10:46 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, 13 February 2017 16:18:46 UTC, Steve Medin wrote:
> Getting all user agents with interest is issuance limits to implement the CA Issuers form of AIA for dynamic path discovery and educating server operators to get out of the practice of static chain installation on servers would make CA rollovers fairly fluid and less subject to operator error of failing to install the proper intermediate.

Rather than teaching the User Agents about AIA path discovery, surely if you're concerned about operator error it makes more sense to teach the Servers about AIA instead ? I don't know if any TLS Server vendors read m.d.s.policy (they probably should) but I'd suggest they're the best people to reach out to.

Patrick Figel

Feb 13, 2017, 2:10:02 PM
to ry...@sleevi.com, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 13/02/2017 18:25, Ryan Sleevi via dev-security-policy wrote:
> On Mon, Feb 13, 2017 at 8:17 AM, Steve Medin via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> Getting all user agents with interest is issuance limits to implement the
>> CA Issuers form of AIA for dynamic path discovery and educating server
>> operators to get out of the practice of static chain installation on
>> servers would make CA rollovers fairly fluid and less subject to operator
>> error of failing to install the proper intermediate.
>
>
> Can you explain more to support that statement?
>
> The issue that Gerv is discussing is primarily related to intermediate
> issuance; a CA an easily roll over to a new intermediate and provide their
> customers a holistic chain that represents a path to a Mozilla root. The
> issue you describe - with AIA fetching - is one primarily restricted to
> handling _root_ rollover, not _intermediate_ rollover; that is, when you're
> constructing an alternative trust path for a set of existing certificates,
> rather than, as Gerv raised, ensuring that new certificates come from a
> single ('new') trust path once the existing intermediate has been
> 'exhausted'.
>
> While a strong proponent of AIA, I don't believe your argument here is
> relevant, although I'm quite happy to understand what technical criteria
> exist that make you believe it would be beneficial to address this specific
> problem.

I suspect many CAs would be reluctant to rotate intermediates regularly
because updating the intermediate certificate would be yet another thing
that server administrators can get wrong during renewal. Data from
Chrome shows that incorrect or missing intermediates account for 10-30%
of all certificate validation errors depending on platform[1], though
I'm guessing missing intermediates would account for most of that.

Let's Encrypt switched to a new intermediate certificate about a year
ago. Despite plenty of warnings that it may change at any time and a
protocol that allowed retrieving the intermediate certificate
programmatically, there were still a number of clients and guides with
static intermediates out there, which caused breakage for some sites
once they renewed. This is the main reason why later versions of the
ACME draft switched to delivering the end-entity certificates and
intermediates as one file by default.
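
One low-tech mitigation on the subscriber side is to sanity-check the bundle
before installing it. A minimal sketch, assuming the Python 'cryptography'
package (a reasonably recent version); "fullchain.pem" is just an example
filename:

    # Sanity check for a "fullchain"-style PEM bundle: is the second
    # certificate actually the issuer of the first?
    from cryptography import x509

    with open("fullchain.pem", "rb") as f:
        certs = x509.load_pem_x509_certificates(f.read())

    if len(certs) < 2:
        print("no intermediate bundled with the end-entity certificate")
    elif certs[1].subject == certs[0].issuer:
        print("bundled intermediate matches the leaf's issuer")
    else:
        print("wrong intermediate; the leaf expects:", certs[0].issuer.rfc4514_string())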

Having support for AIA fetching in all major browsers would reduce the
impact of such misconfigurations. IIRC the only browsers that currently
don't do this are Firefox and Chrome on Android, though the latter seems
to have plans to change this.

Something else that needs to be considered is how it affects HPKP
deployment. I don't know if there are any CAs out there who recommend
pinning to (not-customer-specific) intermediates, but if there are any,
we might need either an exemption for renewals or make sure CAs
recommend pins that aren't affected by the rotation policy. I'd be
interested in the CA perspective on this; perhaps the lack of HPKP
adoption and CA documentation on this topic makes this a non-issue anyway.
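
For reference, an HPKP pin is just the base64 of a SHA-256 hash over a
certificate's SubjectPublicKeyInfo, so a pin tied to an intermediate breaks
the moment the CA rotates to an intermediate with a new key. A minimal
sketch with the Python 'cryptography' package ("intermediate.pem" is a
placeholder):

    # Compute an HPKP-style pin-sha256 value for a certificate, e.g. an
    # intermediate. If the CA rotates to an intermediate with a new key,
    # this value changes and any site pinned to the old one breaks.
    import base64, hashlib
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    with open("intermediate.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    spki = cert.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()
    print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')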

(It would, of course, be possible to rotate the intermediate certificates
without rotating the keys, but doing that causes interoperability
issues[2], so I suspect most CAs wouldn't want to do that regularly.)


[1]:
https://docs.google.com/document/d/1ryqFMSHHRDERg1jm3LeVt7VMfxtXXrI8p49gmtniNP0/edit?pli=1
[2]:
https://serverfault.com/questions/706252/iis-sends-incorrect-intermediate-ssl-certificate/706278#706278

Jeremy Rowley

Feb 13, 2017, 2:22:41 PM
to Patrick Figel, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Gervase Markham
I think rotating intermediates is a good way to limit the impact of
intermediate revocation/compromise, and one we tried to implement a while
ago. We had a policy that every x companies would be put on an intermediate.
I think we even pitched it as a suggested requirement on the Mozilla list
back in 2010.

As we tied the intermediate to a specific set of companies (which correlated
roughly to a specific volume of certificates), renewal and pinning were
non-issues. As long as each company was identified under the same umbrella,
an entity renewing, ordering a new cert, or pinning received the same
intermediate each time and was tied to the specific entity.

Jeremy
_______________________________________________
dev-security-policy mailing list
dev-secur...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

Steve Medin

Feb 13, 2017, 2:57:33 PM
to Patrick Figel, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Gervase Markham
Patrick, thanks, it appears my attempt at brevity produced density.

- No amount of mantra, training, email notification, blinking text and
certificate installation checkers make 100% of IT staff who install
certificates on servers aware that issuing CAs change and need to be
installed with the server certificate when they do.
- Many servers do not support PKCS#7 installation.
- When you roll an intermediate issuer and you modify the end entity
certificate's AIA CA Issuers URI at the same time, the server presents an EE
to the browser that provides a remedy to path validation failure.
- The browser does its normal path discovery using cached discovered
intermediates.
- At rollover, the browser doesn't find the EE's issuer cached locally.
- The browser chases AIA to the issuer that the EE asserts is its issuer,
validates that, and caches the issuer for another <lifespan limit> years.
It's a one-validation latency cost per end user given cached path discovery.
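
For concreteness, the chase in the last three bullets boils down to
something like the sketch below, using Python's 'cryptography' package and
urllib; a real client also handles PKCS#7 responses, missing extensions,
HTTP failures, caching policy and loop detection:

    # Sketch of the AIA "CA Issuers" chase: read the caIssuers URI out of the
    # end-entity certificate, fetch the issuer, and hand it to path validation.
    # Happy path only; assumes the URI returns a single DER certificate.
    import urllib.request
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID, AuthorityInformationAccessOID

    def fetch_issuer(ee_cert):
        aia = ee_cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
        for desc in aia:
            if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS:
                der = urllib.request.urlopen(desc.access_location.value, timeout=10).read()
                return x509.load_der_x509_certificate(der)
        return None   # no caIssuers pointer; fall back to whatever chain was served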

Recently, we renewed a subordinate under the Federal Bridge CA and we
deployed trust across the community by updating a JBoC PKCS#7 file
referenced by the CA Issuers AIA. Granted, this is what some may call a
cross-certificate rather than subordination, but my point is that the end
entities that point to the .p7c enable peers and clients to discover path of
FBCA > SSP Gen 1 CA > EE as easily as FBCA > SSP Gen 2 CA > EE.

Ryan Sleevi

Feb 13, 2017, 3:35:42 PM
to Steve Medin, ry...@sleevi.com, Gervase Markham, Patrick Figel, mozilla-dev-s...@lists.mozilla.org
On Mon, Feb 13, 2017 at 11:56 AM, Steve Medin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Patrick, thanks, it appears my attempt at brevity produced density.
>
> - No amount of mantra, training, email notification, blinking text and
> certificate installation checkers make 100% of IT staff who install
> certificates on servers aware that issuing CAs change and need to be
> installed with the server certificate when they do.
> - Many servers do not support PKCS#7 installation.
> - When you roll an intermediate issuer and you modify the end entity
> certificate's AIA CA Issuers URI at the same time, the server presents an
> EE
> to the browser that provides a remedy to path validation failure.
> - The browser does its normal path discovery using cached discovered
> intermediates.
> - At rollover, the browser doesn't find the EE's issuer cached locally.
> - The browser chases AIA to the issuer that the EE asserts is its issuer,
> validates that, and caches the issuer for another <lifespan limit> years.
> It's a one-validation latency cost per end user given cached path
> discovery.
>

In the absence of AIA, this quickly becomes discoverable for servers. The
only reason it represents a burden on CAs today is precisely because of
customers' (inadvertent) reliance on AIA to correct for server
misconfiguration.

As mentioned, I'm a strong proponent of AIA - I think it serves a valuable
role in ecosystem agility for root migrations - but I don't think it's
necessarily good for users when it's used to paper over (clear) server
misconfigurations, which is the situation you describe - where the path
from the EE to the Intermediate is improper. I'm more thinking about
situations for where the Intermediate to Root path may change, in order to
accommodate changes in the Root (from Root 1 to Root 2).

Ultimately, it seems like it's a question of whether it's "too burdensome"
to expect servers to properly configure their TLS certificate, and therefore
whether browsers should employ logic to obviate that need. However,
given tools like CFSSL, is that really a good or compelling argument -
particularly one to suggest it's a gating factor for improvement? Isn't it
largely a question of how CAs engage with their customers for the
provisioning and deployment of certificates, rather than a holistic
ecosystem issue (of which I consider root migration to be part of the
latter)?

Steve Medin

Feb 13, 2017, 5:40:45 PM
to ry...@sleevi.com, Gervase Markham, Patrick Figel, mozilla-dev-s...@lists.mozilla.org
With de facto use of AIA, there is no issuer installation on the server that could be improper. Proper is defined at the moment, either by cache or discovery hints.



CA Issuers AIA resolution of EE to issuer is good enough for government work, and fortunately the market offers software that consumes it. I don’t consider the Federal Bridge papered over or the FPKI PA an authority that papers over their solutions.



We can roll issuing CAs with less operator error if we can reliably use AIA and educate that the issuing chain does not need to be installed, maintained, or passed in the TLS payload.



We all understand that microseconds are core to your business model. We’re talking about one hit every N years or N-thousand certificates. You’re going to earn back the time spent through smaller TLS payload no longer sending intermediates that are already cached.



We can deploy AIA CA Issuers support across user agents faster than we can deploy PKCS#7 support across servers.





From: Ryan Sleevi [mailto:ry...@sleevi.com]
Sent: Monday, February 13, 2017 3:35 PM
To: Steve Medin <Steve...@symantec.com>
Cc: Patrick Figel <pat...@figel.email>; ry...@sleevi.com; mozilla-dev-s...@lists.mozilla.org; Gervase Markham <ge...@mozilla.org>
Subject: Re: Intermediates Supporting Many EE Certs





Nick Lamb

Feb 13, 2017, 6:36:45 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, 13 February 2017 22:40:45 UTC, Steve Medin wrote:
> With de facto use of AIA, there is no issuer installation on the server that could be improper. Proper is defined at the moment, either by cache or discovery hints.

Much as I should like ubiquitous ambient Internet to be a ground truth, the reality is that clients connecting to a TLS server today don't necessarily have access in order to resolve URLs baked into AIA. Indeed in many cases (including for products sold by your own company, Symantec) the whole reason the client is talking to this particular server is in order to get access _to_ the Internet.

As a result, and indeed exactly as we see today in the wild, trying to "paper over" this gap from the client cannot work reliably.

Ryan Sleevi

Feb 13, 2017, 6:45:48 PM
to Steve Medin, ry...@sleevi.com, Gervase Markham, Patrick Figel, mozilla-dev-s...@lists.mozilla.org
On Mon, Feb 13, 2017 at 2:39 PM, Steve Medin <Steve...@symantec.com>
wrote:

> With de facto use of AIA, there is no issuer installation on the server
> that could be improper. Proper is defined at the moment, either by cache or
> discovery hints.
>

I think this may be the crux of our disagreement. I believe that an ideal
configuration is one that is the most efficient for the most users.
Anything less - that is, things that slow connections or require all
clients to introduce additional logic - is an improper configuration. This
is similar to an HTTP server that always forced an extra redirect or which
failed to use modern cryptographic algorithms or which sent along an extra
40KB of non-gzip'd JS. It may be valid in the protocol, but it's an
improper configuration.

Some improper configurations - however valid - can cause breakage. Others
can be papered over. Any time it's papered over, that's a "hack", not a
desired end state.


> We all understand that microseconds are core to your business model. We’re
> talking about one hit every N years or N-thousand certificates. You’re
> going to earn back the time spent through smaller TLS payload no longer
> sending intermediates that are already cached.
>

In practice, we don't, given CAs' poor responsiveness for AIA fetches and
the poor configurations for cache lifetimes. In theory, yes. In practice,
no.


>
>
> We can deploy AIA CA Issuers support across user agents faster than we can
> deploy PKCS#7 support across servers.
>

This is false for most values, because "user agents" extend beyond the Big
Five browsers (Chrome, Edge/IE, Firefox, Safari, Opera). Consider software
such as curl, or clients such as Python or Perl. In those scenarios, your
deployment scenario is as-bad-or-worse than the server support, but
focusing on server support is a several orders of magnitude less work for
implementation or deployment. Also, to be clear, the deployment of
intermediates in no way requires PKCS#7 support.
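
To make the non-browser point concrete: a stock Python client does no AIA
fetching, so a server that omits its intermediate simply fails verification.
A minimal sketch; "incomplete-chain.example" is a placeholder for such a
misconfigured host:

    # Python's ssl module does no AIA chasing; if the server omits its
    # intermediate, the handshake fails with a verification error.
    import socket, ssl

    HOST = "incomplete-chain.example"   # placeholder for a host missing its intermediate
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((HOST, 443), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                print("handshake OK (server sent a complete chain)")
    except ssl.SSLCertVerificationError as err:
        print("verification failed:", err.verify_message)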

Steve Medin

Feb 14, 2017, 8:05:24 AM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org

> -----Original Message-----
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve_medin=symant...@lists.mozilla.org] On Behalf Of Nick
> Lamb via dev-security-policy
> Sent: Monday, February 13, 2017 6:37 PM
> To: mozilla-dev-s...@lists.mozilla.org
> Subject: Re: Intermediates Supporting Many EE Certs
>
> On Monday, 13 February 2017 22:40:45 UTC, Steve Medin wrote:
> > With de facto use of AIA, there is no issuer installation on the server
> > that could be improper. Proper is defined at the moment, either by cache
> > or discovery hints.
>
> Much as I should like ubiquitous ambient Internet to be a ground truth, the
> reality is that clients connecting to a TLS server today don't necessarily
> have access in order to resolve URLs baked into AIA. Indeed in many cases
> (including for products sold by your own company, Symantec) the whole
> reason the client is talking to this particular server is in order to get
> access _to_ the Internet.

Locally resolved on access points, gateways and egress inspection devices by
full chain installation; not the problem I'm working on.

Steve Medin

Feb 14, 2017, 8:47:51 AM
to ry...@sleevi.com, Gervase Markham, Patrick Figel, mozilla-dev-s...@lists.mozilla.org
Top comments for readability.



- IT professionals, server administrators, are humans, often overworked, who need care, assistance, and attention. In my past version, I offered helpdesk to helpdesk support and lost business that demanded helpdesk to end user server admin.

- The caching I’m talking about is not header directives, I mean how CAPI and NSS retain discovered path for the life of the intermediate. One fetch, per person, per CA, for the life of the CA certificate.

- AIA CAI URIs pushed to CDN? Mindless, one click.

- I use the term user agent intentionally acknowledging that if all it took was 6 contracts, we’d have to run CABF meetings in convention centers.

- When Microsoft first supported dynamic path discovery using AIA, we all fielded the support questions: why does IE work and X does not? We all pulled our AIA CAI extensions because the confusion wasn’t worth the benefit.

- Ever since Vista, CAPI’s root store has been pulled over a wire upon discovery. Only kernel mode driver code signing roots are shipped.

- Once the mass market UAs enable dynamic path discovery as an option, server admins can opt in based on analytics.

- PKCS#7 chains are indeed not a requirement, but see point 1. It’s probably no coincidence that IIS supports it given awareness of the demands placed on enterprise IT admins.



At this point, I may as well be hitting tennis balls off a cliff. You’re dug in.







From: Ryan Sleevi [mailto:ry...@sleevi.com]
Sent: Monday, February 13, 2017 6:45 PM
To: Steve Medin <Steve...@symantec.com>
Cc: ry...@sleevi.com; Patrick Figel <pat...@figel.email>; mozilla-dev-s...@lists.mozilla.org; Gervase Markham <ge...@mozilla.org>
Subject: Re: Intermediates Supporting Many EE Certs







On Mon, Feb 13, 2017 at 2:39 PM, Steve Medin <Steve...@symantec.com> wrote:

With de facto use of AIA, there is no issuer installation on the server that could be improper. Proper is defined at the moment, either by cache or discovery hints.



Ryan Sleevi

Feb 14, 2017, 10:04:07 AM
to Steve Medin, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Patrick Figel, Gervase Markham
On Tue, Feb 14, 2017 at 5:47 AM, Steve Medin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> - The caching I’m talking about is not header directives, I mean
> how CAPI and NSS retain discovered path for the life of the intermediate.
> One fetch, per person, per CA, for the life of the CA certificate.
>

Right, which has problematic privacy issues, and is otherwise not advisable
- certainly not advisable in a world of technically constrained sub-CAs (in
which the subscriber can use such caching as a supercookie). So if a UA
doesn't do such 'permacache' and instead respects the HTTP cache, you get
those issues.

(Also, NSS doesn't do that behaviour by default; that was a Firefox-ism)


> - Ever since Vista, CAPI’s root store has been pulled over a wire
> upon discovery. Only kernel mode driver code signing roots are shipped.
>

No, this isn't accurate.


> - Once the mass market UAs enable dynamic path discovery as an
> option, server admins can opt in based on analytics.
>

Not really. Again, you're largely ignoring the ecosystem issues, so perhaps
this is where the tennis ball remark comes into play. There are effectively
two TLS communities that matter - the browser community, and the
non-browser community. Mozillian and curl maintainer Daniel Stenberg pretty
accurately captures this in
https://daniel.haxx.se/blog/2017/01/10/lesser-https-for-non-browsers/

> - PKCS#7 chains are indeed not a requirement, but see point 1.
> It’s probably no coincidence that IIS supports it given awareness of the
> demands placed on enterprise IT admins.
>

My point was that PKCS#7 is an abomination of a format (in the general
sense), and, as to the specific technical choice, a poor one,
because the format lacks any structure for expressing order/relationship. A
server supporting PKCS#7 needs not just support PKCS#7, but the complexity
of chain building, in order to reorder the unstructured PKCS#7. And if the
server supports chain building, then it could be argued just as well, that
the server supports AIA. Indeed, if you're taking an ecosystem approach,
the set of clouds to argue at is arguably the TLS server market improving
their support to match IIS's (which, I agree, is quite good). That includes
basic things like OCSP stapling (e.g.
https://gist.github.com/sleevi/5efe9ef98961ecfb4da8 ) and potentially
support for AIA fetching, as you mention. Same effect, but instead of
offloading the issues to the clients, you centralize at the server. But
even if you set aside PKCS#7 as the technical delivery method and set aside
chain building support, you can accomplish the same goal, easier, by simply
utilizing a structured PEM-encoded file.

My point here is that you're advocating a specific technology here that's
regrettably poorly suited for the job. You're not wrong - that is, you can
deliver PKCS#7 certs - but you're not right either that it represents the
low-hanging fruit.
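
To illustrate the chain-building burden an unordered PKCS#7 bundle pushes
onto the server: given the leaf plus an unordered bag of CA certificates,
the software has to reconstruct the order itself, whereas a leaf-first PEM
file already encodes it. A toy sketch that ignores cross-signs, duplicate
subjects and signature verification:

    # Toy chain builder: order an unordered bag of CA certificates (what a
    # PKCS#7 bundle effectively gives you) into a leaf-first chain by matching
    # each certificate's issuer name to a candidate's subject name.
    def order_chain(leaf, bag):
        chain, current, remaining = [leaf], leaf, list(bag)
        while True:
            parent = next((c for c in remaining if c.subject == current.issuer), None)
            if parent is None:
                return chain                       # ran out of candidates
            chain.append(parent)
            remaining.remove(parent)
            if parent.subject == parent.issuer:    # reached a self-signed root
                return chain
            current = parent

Writing the result out as concatenated PEM, leaf first, gives exactly the
structured file described above.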


Philosophically, the discussion here is where the points of influence lie -
with a few hundred CAs, with a few thousand server software stacks (and a
few million deployments), and a few billion users. It's a question about
whether the solution only needs to consider the browser (which, by
definition, has a fully functioning HTTP stack and therefore _could_
support AIA) or the ecosystem (which, in many cases, lacks such a stack -
meaning no AIA, no OCSP, and no CRLs either). We're both right in that
these represent technical solutions to the issues, but we disagree on which
offers the best lever for impact - for end-users and for relying parties.
This doesn't mean it's a fruitless argument of intractable positions - it
just means we need to recognize our differences in philosophy and approach.

You've highlighted a fair point - which is that if CAs rotate intermediates
periodically, and if CAs do not (a) deliver the full chain (whether through
PEM or PKCS#7) or (b) subscribers are using software/hardware that does not
support configuring the full chain when installing a certificate, then
there's a possibility of increased errors for users due to servers sending
the wrong intermediate. That's a real problem, and what we've described are
different approaches to solving that problem, with different tradeoffs. The
question is whether that problem is significant enough to prevent or block
attempts to solve the problem Gerv highlighted - intermediates with
millions of certificates. We may also disagree here, but I don't believe
it's a blocker.

Nick Lamb

Feb 14, 2017, 12:14:33 PM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, 14 February 2017 13:47:51 UTC, Steve Medin wrote:
> - PKCS#7 chains are indeed not a requirement, but see point 1. It’s probably no coincidence that IIS supports it given awareness of the demands placed on enterprise IT admins.

I don't see how PKCS#7 offers any advantage at all.

I end up helping lots of ordinary people with certificate installation (on things which are more or less web servers, and other things), which today mostly means Let's Encrypt: even though Let's Encrypt focuses on automation, that $0 price point is very attractive even without the automation when you've got no idea what you're doing.

Not once have I thought "This would be easier with PKCS#7". Literally I've never even had to walk a user through how to make a PKCS#7 file, because it never comes up. In addition to PEM they've needed JKS and PKCS#12 and ZIP files but never PKCS#7.

When it comes to installation, the main problem is usually the awful UX in the GUI they're trying to use. Invalid inputs are often swallowed with no visible commentary or result, let alone helpful error messages; the system may expect them to wait for a lengthy restart or reboot before their changes take effect; and nomenclature is arbitrary, one program's "CA Cert" is another's "Chain File" and yet another's "Intermediate Certificates".

I would pressure server vendors to clean this up, except that really in most cases what they actually need to do is embrace at least one of the automation options and bake that into their software instead. We didn't make the safety elevator easier to use by affixing a great many wordy instruction panels about the correct means of closing the doors and sequence of operation for the motors, we just made the machine smarter so that all the humans do is press a floor button and try to avoid eye-contact with strangers. As a result even an illiterate child can confidently operate such an elevator once they can reach the buttons. Nobody would purchase an old-style manual elevator today even if it were available a little cheaper from a major manufacturer, it's just not worth the hassle.

Jakob Bohm

Feb 14, 2017, 12:55:18 PM
to mozilla-dev-s...@lists.mozilla.org
Unfortunately, for these not-quite-web-server things (printers, routers
etc.), automating use of the current ACME Let's encrypt protocol with
or without hardcoding the Let's Encrypt URL is a non-starter for anyone
using these things in a more secure network and/or beyond the firmware
renewal availability from the vendor.

On a simple network where public certs are acceptable, such devices
will often need to get renewed certificates long past the availability
of upstream firmware updates to adapt to ecosystem changes (such as
Let's Encrypt switching to an incompatible ACME version in the year
2026 or WoSign free certs becoming a thing of the past in 2016).

On a secure network, existence and address of each such device should
not be revealed to an outside entity (such as Let's encrypt admins),
let alone anyone who knows how to read CT logs. For such devices I
generally use an in-house CA which is trusted only in-house and uses
the validation procedure "The subject is known personally to the CA
admin and the transport of the CSR and cert have been secured by
out-of-band means"

Thus the ability to install certificates and keys in a standard format
such as PKCS#12 or PEM is much more important than the ability to talk
to a public service such as Let's Encrypt/ACME or WoSign.

Similarly, it would be useful to have an easily findable tool/script
for doing ACME in a semi-offline way that doesn't presume that the ACME
client has any kind of direct control over the servers that will be
configured with the certificates. Such a tool could be installed once
by a site and then used to generate certs for the various "web-managed"
devices that need them.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Steve Medin

Feb 14, 2017, 1:13:43 PM
to Nick Lamb, mozilla-dev-s...@lists.mozilla.org
> -----Original Message-----
> From: dev-security-policy [mailto:dev-security-policy-
> bounces+steve_medin=symant...@lists.mozilla.org] On Behalf Of Nick
> Lamb via dev-security-policy
> Sent: Tuesday, February 14, 2017 12:14 PM
> To: mozilla-dev-s...@lists.mozilla.org
> Subject: Re: Intermediates Supporting Many EE Certs
>
> On Tuesday, 14 February 2017 13:47:51 UTC, Steve Medin wrote:
> > - PKCS#7 chains are indeed not a requirement, but see point 1. It’s
> probably no coincidence that IIS supports it given awareness of the demands
> placed on enterprise IT admins.
>
>
> Not once have I thought "This would be easier with PKCS#7". Literally I've
> never even had to walk a user through how to make a PKCS#7 file, because it
> never comes up. In addition to PEM they've needed JKS and PKCS#12 and ZIP
> files but never PKCS#7.
>

But Nick, you carry PKI around in your back pocket. Any of us reading this know JKS, CAPI, apache mod-ssl directives and prefer a manifest of separate files.

I mention P7 because IIS inhales them in one click and ensures that the intermediate gets installed. There is an audience that likes that. In my last version, my enrollment portal asked for server type at request time and delivered target-friendly files on fulfillment with a link to other formats at a download center.

Ryan Sleevi

Feb 14, 2017, 3:12:46 PM
to Steve Medin, Nick Lamb, mozilla-dev-s...@lists.mozilla.org
On Tue, Feb 14, 2017 at 10:13 AM, Steve Medin via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> I mention P7 because IIS inhales them in one click and ensures that the
> intermediate gets installed.


Yes, but that's not because of PKCS#7, as I tried to explain and capture.
That's because of a host of other things - including AIA fetching - that IIS
does.

You're not wrong that PKCS#7 could partially address this. But your desired
end-state has no intrinsic relationship to the use of PKCS#7 - it's because
of all the other server implementation decisions - so it'd be wrong to
assume PKCS#7 is the lever to be pulled.

Nick Lamb

Feb 14, 2017, 4:03:23 PM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, 14 February 2017 17:55:18 UTC, Jakob Bohm wrote:
> Unfortunately, for these not-quite-web-server things (printers, routers
> etc.), automating use of the current ACME Let's encrypt protocol with
> or without hardcoding the Let's Encrypt URL is a non-starter for anyone
> using these things in a more secure network and/or beyond the firmware
> renewal availability from the vendor.

Whilst I agree there are challenges, I think greater automation is both possible and necessary for these things.

> On a simple network where public certs are acceptable, such devices
> will often need to get renewed certificates long past the availability
> of upstream firmware updates to adapt to ecosystem changes (such as
> Let's Encrypt switching to an incompatible ACME version in the year
> 2026 or WoSign free certs becoming a thing of the past in 2016).

Ecosystem changes that make stuff stop working are much more likely to be algorithmic changes (Does your printer know SHA-3? Elliptic curve crypto? Will it work if we need quantum-resistant crypto?).

> On a secure network, existence and address of each such device should
> not be revealed to an outside entity (such as Let's encrypt admins),
> let alone anyone who knows how to read CT logs. For such devices I
> generally use an in-house CA which is trusted only in-house and uses
> the validation procedure "The subject is known personally to the CA
> admin and the transport of the CSR and cert have been secured by
> out-of-band means"

Like the manual verification of SSH host fingerprints, I fear such a system most often looks successful because it's not coming up against any serious adversaries rather than because it's actually implemented in a sound way. Unless everybody is very careful it easily becomes the Yale lock of PKIs, successfully keeping out small children and sufficient to show legally that you intended to forbid entry, but not exactly an impediment to organised criminals.

Also it's weird that you mentioned transporting the CSR and certificate out of band. I can kind of get that if you take the CSR from the device to the CA issuer by hand then you feel as though you avoid MITM replacement of the CSR so it makes your reasoning about the Subject simpler. But why the certificate ?

In this scenario (personal knowledge of subject's identity) I am currently fairly confident that something like SCEP is the right approach. As with your manual system I expect that SCEP will often be deployed in a fashion that does not resist attack, but in _principle_ it's possible to have this work well and unlike hordes of workers traipsing about with CSR files it might actually scale.

> Similarly, it would be useful to have an easily findable tool/script
> for doing ACME in a semi-offline way that doesn't presume that the ACME
> client has any kind of direct control over the servers that will be
> configured with the certificates. Such a tool could be installed once
> by a site and then used to generate certs for the various "web-managed"
> devices that need them.

Probably for that type of environment you'd want to do DNS validation: That is, have your certificate-obtaining tool able to reach out to your DNS service and add TXT records for validation so that it can obtain a certificate for any name in the domains you control. Of course the parameters of how exactly this works will vary from one site to another, particularly depending on which DNS servers they use, and whether they're a Unix house or not. Also it matters whether you're going to have the devices create CSRs, or just inject a new private key when you give them a certificate.
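
For what it's worth, the DNS side of that is small: for an ACME dns-01 challenge the tool only has to publish one TXT record, whose value is derived from the challenge token and the account key's JWK thumbprint (per the ACME drafts, later RFC 8555). A sketch of the computation; the token and key material are placeholders:

    # Value for the _acme-challenge TXT record in an ACME dns-01 challenge:
    # base64url( SHA-256( token "." base64url(JWK thumbprint) ) ).
    import base64, hashlib

    def b64url(data):
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    token = "TOKEN-FROM-THE-CHALLENGE-OBJECT"                                       # placeholder
    thumbprint = b64url(hashlib.sha256(b"canonical-JSON-of-account-JWK").digest())  # placeholder

    key_authorization = token + "." + thumbprint
    txt_value = b64url(hashlib.sha256(key_authorization.encode()).digest())
    print("_acme-challenge TXT value:", txt_value)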

Here's an example from Steve, who hand-rolled such a solution; his approach also deploys the certificates via SSH, but for the "web-managed" devices you mention that isn't an option and may need to remain manual for now.

https://www.crc.id.au/using-centralised-management-with-lets-encrypt/

You'll also see people building on other Unix shell tools like:

https://github.com/srvrco/getssl or https://acme.sh/

Inevitably an OpenBSD purist has written one in C that's split into a dozen privilege-separated components:

https://kristaps.bsd.lv/acme-client/

And somebody who likes Python, but didn't fall in love with Certbot (the EFF's Python ACME implementation that was once the "official" client) wrote:

https://github.com/plinss/acmebot

Peter Gutmann

Feb 14, 2017, 11:28:01 PM
to mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
Jakob Bohm via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>Unfortunately, for these not-quite-web-server things (printers, routers
>etc.), automating use of the current ACME Let's encrypt protocol with or
>without hardcoding the Let's Encrypt URL is a non-starter for anyone using
>these things in a more secure network and/or beyond the firmware renewal
>availability from the vendor.

That's one of the least concerns with IoT devices. For one thing they're
mostly going to have RFC 1918 addresses or non-qualified names, which CAs
aren't supposed to issue certs for (not that that's ever stopped them in the
past). Then the CA needs to connect back to the device to verify connection
to the domain name it's issuing the cert for, which shouldn't be possible for
any IoT device that's set up properly. And I'm sure there's more...

Peter.

Gervase Markham

Feb 15, 2017, 12:26:09 PM
to mozilla-dev-s...@lists.mozilla.org
On 13/02/17 16:17, Steve Medin wrote:
> Getting all user agents with interest is issuance limits to implement
> the CA Issuers form of AIA for dynamic path discovery and educating
> server operators to get out of the practice of static chain
> installation on servers would make CA rollovers fairly fluid and less
> subject to operator error of failing to install the proper
> intermediate.

Regardless of the merits of this proposal, this is:
https://bugzilla.mozilla.org/show_bug.cgi?id=399324
which was reported 10 years ago, and resolved WONTFIX a year ago. It
seems unlikely that this decision will be reversed, that it will be
implemented in Firefox and Chrome for Android, and that it will then become
ubiquitous, any time soon.

Gerv

Gervase Markham

Feb 15, 2017, 12:27:28 PM
to okaphone.e...@gmail.com
On 13/02/17 17:34, okaphone.e...@gmail.com wrote:
> Isn't this mostly something that CAs should keep in mind when they
> setup "shop"?
>
> I mean it would be nice to have a way of avoiding that kind of impact
> of course, but if they think it's best to put all their eggs in one
> basket... ;-)

Well, if it's harder for us to dis-trust an intermediate with many leaf certs
due to the site impact, the CA may decide to do it that way precisely
because it is harder!

Gerv

Gervase Markham

Feb 15, 2017, 12:33:50 PM
to mozilla-dev-s...@lists.mozilla.org
On 13/02/17 19:22, Jeremy Rowley wrote:
> As we tied the intermediate to a specific set of companies (which correlated
> roughly to a specific volume of certificates), renewal and pinning were
> non-issues. As long as each company was identified under the same umbrella,
> an entity renewing, ordering a new cert, or pinning received the same
> intermediate each time and was tied to the specific entity.

This seems like a sane idea. Any CA which was required to rotate its
intermediates would not be required to rotate them on a time basis; they
could choose any rotation scheme they liked which kept them within the
per-intermediate limits.

_However_, if issuance is happening under multiple intermediates at once, and
there is a process or other problem, the likelihood of them all being
affected is high. (The rest of the validation path would likely be the
same.) Therefore, you haven't necessarily solved the problem.

Can a more complex rotation scheme square this circle?

Gerv

Jakob Bohm

Feb 15, 2017, 3:40:09 PM
to mozilla-dev-s...@lists.mozilla.org
On 14/02/2017 22:03, Nick Lamb wrote:
> On Tuesday, 14 February 2017 17:55:18 UTC, Jakob Bohm wrote:
>> Unfortunately, for these not-quite-web-server things (printers, routers
>> etc.), automating use of the current ACME Let's encrypt protocol with
>> or without hardcoding the Let's Encrypt URL is a non-starter for anyone
>> using these things in a more secure network and/or beyond the firmware
>> renewal availability from the vendor.
>
> Whilst I agree there are challenges, I think greater automation is both possible and necessary for these things.
>
>> On a simple network where public certs are acceptable, such devices
>> will often need to get renewed certificates long past the availability
>> of upstream firmware updates to adapt to ecosystem changes (such as
>> Let's Encrypt switching to an incompatible ACME version in the year
>> 2026 or WoSign free certs becoming a thing of the past in 2016).
>
> Ecosystem changes that make stuff stop working are much more likely to be algorithmic changes (Does your printer know SHA-3? Elliptic curve crypto? Will it work if we need quantum-resistant crypto?).
>

Broken algorithms can still be used on closed networks where the
encryption is secondary to the perimeter protection. Biggest problem
would be Browsers aggressively removing algorithms by (once again)
failing to consider the intranet use cases.

The real world equivalent is the use of ultra-primitive locks on the
inside doors of a house, while using high quality locks on outside
doors (public servers).

>> On a secure network, existence and address of each such device should
>> not be revealed to an outside entity (such as Let's encrypt admins),
>> let alone anyone who knows how to read CT logs. For such devices I
>> generally use an in-house CA which is trusted only in-house and uses
>> the validation procedure "The subject is known personally to the CA
>> admin and the transport of the CSR and cert have been secured by
>> out-of-band means"
>
> Like the manual verification of SSH host fingerprints, I fear such a
> system most often looks successful because it's not coming up against
> any serious adversaries rather than because it's actually implemented
> in a sound way. Unless everybody is very careful it easily becomes the
> Yale lock of PKIs, successfully keeping out small children and sufficient
> to show legally that you intended to forbid entry, but not exactly an
> impediment to organised criminals.
>
> Also it's weird that you mentioned transporting the CSR and certificate
> out of band. I can kind of get that if you take the CSR from the device
> to the CA issuer by hand then you feel as though you avoid MITM
> replacement of the CSR so it makes your reasoning about the Subject
> simpler. But why the certificate ?
>

Cert transport is important only for the devices where PKCS#12
transport is the norm. Not every embedded CPU has a high quality RNG.

> In this scenario (personal knowledge of subject's identity) I am
> currently fairly confident that something like SCEP is the right
> approach. As with your manual system I expect that SCEP will often
> be deployed in a fashion that does not resist attack, but in
> _principle_ it's possible to have this work well and unlike hordes
> of workers traipsing about with CSR files it might actually scale.
>

Who said anything about hordes of workers? The process is centralized
with a small number of people completing the whole process. Process
would be different if the IoT devices did not pass through the central
office before deployment or if a huge number of devices needed to be
set up on an assembly line.

Thanks for the tips; this was really just a side-jab at the Let's
Encrypt homepage for not grouping ACME clients by their features, only
by their origin.

okaphone.e...@gmail.com

Feb 15, 2017, 3:50:41 PM
to mozilla-dev-s...@lists.mozilla.org
Ehm... play chicken? Nah, perhaps better not. ;-)

So you really would like to make distrust more doable. But if it doesn't "hurt" enough you don't get the effect you want either. Difficult to know what level would be optimum.

So I guess that means what you really need is a certain scalability in the solution.

(Thanks for explaining. I'm just trying to understand what is happening here.)

Gervase Markham

Mar 1, 2017, 6:44:16 AM
to mozilla-dev-s...@lists.mozilla.org
On 13/02/17 12:23, Gervase Markham wrote:
> The GoDaddy situation raises an additional issue.
....
> What can be done about the potential future issue (which might happen
> with any large CA) of the need to untrust a popular intermediate?
> Suggestions welcome.

Reviewing the discussion, I unfortunately don't see any workable
solutions proposed yet. I think AIA chasing is a red herring. Jeremy's
engagement on intermediate rotation was illuminating, but it seems to me
that having multiple intermediates in play at the same time over an
extended period is very likely not to solve the problem, because any
issuance problem would cut across them all.

If customers tend to renew annually, one could imagine a "January
intermediate", "February intermediate" and so on, with the January
intermediate used every January, etc. This might reduce the need for an
intermediate change when an EE cert changes, as I have sympathy for the
view that in today's world changing intermediate does make the process a
little more error prone. (Although it shouldn't, and that's a technology
fail I hope can be addressed.) Then, if you have an issuance problem
which persisted for a month but which has led to a situation where you
can't trust anything off the intermediates used during those times, only
1/6th of your outstanding certs from that root are at risk of needing
immediate change rather than all of them.
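
The arithmetic behind that "1/6th" is easy to sketch (purely illustrative):

    # With one intermediate per calendar month and annual renewals, an issuance
    # problem only taints the intermediates active during the problem window.
    from datetime import date

    def months_touched(start, end):
        months, y, m = set(), start.year, start.month
        while (y, m) <= (end.year, end.month):
            months.add((y, m))
            m += 1
            if m > 12:
                y, m = y + 1, 1
        return months

    bad = months_touched(date(2016, 12, 20), date(2017, 1, 25))   # month-long window
    print(f"{len(bad)} of 12 monthly intermediates at risk "
          f"(~{len(bad) / 12:.0%} of a year's certificates)")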

I guess the question is: is it worth it? Are the chances of this proving
useful in an actual scenario high enough compared to the cost and hassle
of imposing such a scheme on all CAs? If we decide to dis-trust the
intermediate under such a scheme, is the CA practically as stuffed as it
would be if it had just used one intermediate? :-)

Gerv

okaphone.e...@gmail.com

Mar 1, 2017, 7:06:36 AM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, 1 March 2017 12:44:16 UTC+1, Gervase Markham wrote:
> On 13/02/17 12:23, Gervase Markham wrote:
> > The GoDaddy situation raises an additional issue.
> ....
> > What can be done about the potential future issue (which might happen
> > with any large CA) of the need to untrust a popular intermediate?
> > Suggestions welcome.
> ...
> If customers tend to renew annually, one could imagine a "January
> intermediate", "February intermediate" and so on, and one uses the
> former every January, etc.
> ...

Or a different intermediate each day? ;-)

I guess what you really are looking for is being able to distrust a CA for a date range. Any requirement that doesn't produce that is probably not worth the effort.

CU Hans

Jakob Bohm

Mar 1, 2017, 12:42:06 PM
to mozilla-dev-s...@lists.mozilla.org
On 01/03/2017 12:43, Gervase Markham wrote:
> On 13/02/17 12:23, Gervase Markham wrote:
>> The GoDaddy situation raises an additional issue.
> ....
>> What can be done about the potential future issue (which might happen
>> with any large CA) of the need to untrust a popular intermediate?
>> Suggestions welcome.
>
> Reviewing the discussion, I unfortunately don't see any workable
> solutions proposed yet. I think AIA chasing is a red herring. Jeremy's
> engagement on intermediate rotation was illuminating, but it seems to me
> that having multiple intermediates in play at the same time over an
> extended period is very likely not to solve the problem, because any
> issuance problem would cut across them all.
>
> If customers tend to renew annually, one could imagine a "January
> intermediate", "February intermediate" and so on, and one uses the
> former every January, etc. This might reduce the need for an
> intermediate change when an EE cert changes, as

CAs tend to encourage ahead-of-time annual renewals, offering to add
the leftover days to the validity period of the replacement
certificate (this is presumably also the reason BR max validity periods
are slightly more than a whole number of years; for example, a 3-year
certificate renewed 2½ months ahead of expiry would have a validity of
38.5 months, which is less than 39).

> I have sympathy for the
> view that in today's world changing intermediate does make the process a
> little more error prone. (Although it shouldn't, and that's a technology
> fail I hope can be addressed.) Then, if you have an issuance problem
> which persisted for a month but which has led to a situation where you
> can't trust anything off the intermediates used during those times, only
> 1/6th of your outstanding certs from that root are at risk of needing
> immediate change rather than all of them.

I would say a lot could be improved if certificate issuance customer
messages provided the *relevant* chains and their individual
certificates directly and with human-friendly names; it would greatly
reduce the confusion compared to the common practice of asking each EE
cert holder to manually navigate disorganized support pages for individual
certificates with semi-numerical names such as "Foo intermediate G3".

For example the confirmation mail or individualized web page could
be something like this:

--- Begin example for an OV SSL cert validated by the Joe's certs RA ---
Here is your new SSL/TLS certificate for www.example.com, example.com
and static.example.com (serial number
1231231421432453255268924750932758934750): example_com_2017_03_17.cer
(PEM format).

Needed intermediate certificates to put on your server:

All in one file (including your certificate):
example_com_2017_03_17_with_chain.pem (PEM format as needed by most
servers) Or example_com_2017_03_17_with_chain.p7 (PKCS#7 for Microsoft
IIS, Exchange etc.)

One at a time (These are included in the all-in-one files above except
the ones marked with an X, which your server shouldn't send)

NiceCA-via-JoeRA-SSL-Regular-March-2017.cer (PEM format)

X NiceCA-SHA256RSA-Root-2016.cer (PEM format)

NiceCA-SHA256RSA-Root-2016-cross-by-UserFirst.cer (PEM format)

X UserFirst-ancient-root.cer (PEM format)

--- End example ---