
SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert


Ryan Sleevi

Jul 1, 2020, 5:06:22 PM
to mozilla-dev-security-policy
I've created a new batch of certificates that violate 4.9.9 of the BRs,
which was introduced with the first version of the Baseline Requirements as
a MUST. This is https://misissued.com/batch/138/

A quick inspection of the affected CAs shows O fields that include: QuoVadis,
GlobalSign, DigiCert, HARICA, Certinomis, AS Sertifitseerimiskeskus,
Actalis, Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.

Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
include an id-pkix-ocsp-nocheck extension. RFC 6960 Section 4.2.2.2 defines
an OCSP Delegated Responder as indicated by the presence of the
id-kp-OCSPSigning EKU.

These certificates lack the necessary extension, and as such, violate the
BRs. As the vast majority of these were issued on-or-after 2013-02-01, the
Effective Date of Mozilla Root CA Policy v2.1, these are misissued. You
could also consider the effective date as 2013-05-15, described later in
[1], without changing the results.

This batch is NOT comprehensive. According to crt.sh, there are
approximately 293 certificates that meet the criteria of "issued by a
Mozilla-trusted, TLS-capable CA, with the OCSPSigning EKU, and without
pkix-nocheck". misissued.com had some issues with parsing some of these
certificates, due to other non-conformities, so I only included a sample.

Censys.io is aware of approximately 276 certificates that meet these
criteria, as you can see at [2]. The difference in perspectives underscores
how important it is for CAs to carefully examine the certificates they've
issued.

It's important for CAs to understand this is Security Relevant. While they
should proceed with revoking these certificates within seven (7) days, as
defined under Baseline Requirements Section 4.9.1.2, the severity of this
issue likely also justifies requiring witnessed Key Destruction Reports, in
order to preserve the integrity of the issuer of these certificates (which
may include the CA's root).

The reason for this is simple: In every case I examined, these are
certificates that appear to nominally be intended as Issuing CAs, not as
OCSP Responder Certificates. It would appear that many CAs were unfamiliar
with RFC 6960 when constructing their certificate profiles, and similarly
ignored discussion of this issue in the past [3], which highlighted the
security impact of this. I've flagged this as a SECURITY matter for CAs to
carefully review, because in the cases where a third-party, other than the
Issuing CA, operates such a certificate, the Issuing CA has delegated the
ability to mint arbitrary OCSP responses to this third-party!

For example, consider a certificate like https://crt.sh/?id=2657658699 .
This certificate, from HARICA, meets Mozilla's definition of "Technically
Constrained" for TLS, in that it lacks the id-kp-serverAuth EKU. However,
because it includes the OCSP Signing EKU, this certificate can be used to
sign arbitrary OCSP messages for HARICA's Root!

This also applies to non-technically-constrained sub-CAs. For example,
consider this certificate https://crt.sh/?id=21606064 . It was issued by
DigiCert to Microsoft, granting Microsoft the ability to provide OCSP
responses for any certificate issued by Digicert's Baltimore CyberTrust
Root. We know from DigiCert's disclosures that this is independently
operated by Microsoft.

Unfortunately, revocation of this certificate is simply not enough to
protect Mozilla TLS users. This is because this Sub-CA COULD provide OCSP
for itself that would successfully validate, AND provide OCSP for other
revoked sub-CAs, even if it was revoked. That is, if this Sub-CA's key was
maliciously used to sign a GOOD response for itself, it would be accepted.
These security concerns are discussed in Section 4.2.2.2.1 of RFC 6960, and
are tied to a reliance on the CRL. Mozilla users COULD be protected through
the use of OneCRL, although this would not protect other PKI participants
or use cases that don't use OneCRL.

A little more than a third of the affected certificates have already been
revoked, which is encouraging. However, I'm not aware of any incident
reports discussing this failure, nor am I aware of any key destruction
reports to provide assurance that these keys cannot be used maliciously.
While this seems like a benign failure of "we used the wrong profile", it
has a meaningful security impact on end users, even if it was made with the
best of intentions.

This has been a requirement of the BRs since the first version, and is
spelled out within the OCSP RFCs; CAs are expected to be deeply
knowledgeable in both of these areas. There is no excusing such an
oversight, especially if it was (most likely) to work around an issue with
a particular CA Software Vendor's product. Recall that the same
justification (work around an issue in a vendor's product) has been used to
justify MITM interception. Ignorance and malice are, unfortunately, often
indistinguishable, and thus have to be treated the same.

While I'll be looking to create Compliance Incidents for the affected CAs,
and attempting to report through their problem reporting mechanisms, the
fact that technically constrained CAs are not yet required to be disclosed,
and are most likely invisible to CT (e.g. S/MIME issuing CAs that do not
issue TLS), still poses substantial risk; it requires that every CA closely
examine their practices.

CAs affected MUST ensure they revoke such certificates within 7 days, as
per 4.9.1.2 (5) and 4.9.1.2 (6).

[1]
https://wiki.mozilla.org/CA:CertificatePolicyV2.1#Time_Frames_for_included_CAs_to_comply_with_the_new_policy
[2]
https://censys.io/certificates?q=%28parsed.extensions.extended_key_usage.ocsp_signing%3Atrue+and+validation.nss.valid%3Atrue+and+parsed.validity.start%3A%5B2013-05-15+TO+*%5D%29+and+not+parsed.unknown_extensions.id%3A1.3.6.1.5.5.7.48.1.5
[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/XQd3rNF4yOo/bXYjt1mZAwAJ

Ryan Sleevi

Jul 1, 2020, 10:10:26 PM
to Ryan Sleevi, mozilla-dev-security-policy
On Wed, Jul 1, 2020 at 5:05 PM Ryan Sleevi <ry...@sleevi.com> wrote:

> While I'll be looking to create Compliance Incidents for the affected CAs,
>

This is now done, I believe. However, as mentioned, just because a
compliance bug was not filed does not mean that a CA may not be affected;
it may just be that CT does not know of the cert and the CA did not
disclose it via CCADB. I only filed incidents for CAs where the issuer is
not already revoked via OneCRL.

https://bugzilla.mozilla.org/show_bug.cgi?id=1649961 - Actalis
https://bugzilla.mozilla.org/show_bug.cgi?id=1649963 - ATOS
https://bugzilla.mozilla.org/show_bug.cgi?id=1649944 - Camerfirma
https://bugzilla.mozilla.org/show_bug.cgi?id=1649951 - DigiCert
https://bugzilla.mozilla.org/show_bug.cgi?id=1649943 - Firmaprofesional
https://bugzilla.mozilla.org/show_bug.cgi?id=1649937 - GlobalSign
https://bugzilla.mozilla.org/show_bug.cgi?id=1649945 - HARICA
https://bugzilla.mozilla.org/show_bug.cgi?id=1649947 - Microsec
https://bugzilla.mozilla.org/show_bug.cgi?id=1649938 - QuoVadis
https://bugzilla.mozilla.org/show_bug.cgi?id=1649964 - PKIoverheid
https://bugzilla.mozilla.org/show_bug.cgi?id=1649962 - SECOM
https://bugzilla.mozilla.org/show_bug.cgi?id=1649942 - SK ID
https://bugzilla.mozilla.org/show_bug.cgi?id=1649941 - T-Systems
https://bugzilla.mozilla.org/show_bug.cgi?id=1649939 - WISeKey

Peter Gutmann

Jul 1, 2020, 11:48:29 PM
to mozilla-dev-security-policy, ry...@sleevi.com
Ryan Sleevi via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST include
>an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP Delegated
>Responder within Section 4.2.2.2 as indicated by the presence of the id-kp-
>OCSPSigning as an EKU.

Unless I've misread your message, the problem isn't the presence or not of a
nocheck extension but the invalid presence of an OCSP EKU:

>I've flagged this as a SECURITY matter [...] the Issuing CA has delegated the
>ability to mint arbitrary OCSP responses to this third-party

So the problem would be the presence of the OCSP EKU when it shouldn't be
there, not the absence of the nocheck extension.

Peter.

Ryan Sleevi

Jul 2, 2020, 12:31:16 AM
to Peter Gutmann, mozilla-dev-security-policy, ry...@sleevi.com
On Wed, Jul 1, 2020 at 11:48 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:
Not quite. It’s both.

The BR violation is caused by the lack of the extension.

The security issue is caused by the presence of the EKU.

However, since some CAs only view things through the lens of BR/program
violations, despite the sizable security risk they pose, the compliance
incident is what is tracked. The fact that it’s security relevant is
provided so that CAs understand that revocation is necessary, and that it’s
also not sufficient, because of how dangerous the issue is.

Pedro Fuentes

Jul 2, 2020, 2:19:34 AM
to mozilla-dev-s...@lists.mozilla.org
Hello,
Our understanding when creating this SubCA was that the CA certificate should include the EKUs that it would be allowed to issue; therefore, as it would generate certificates for the OCSP responders, it should include that EKU, the same way it would include the EKU for clientAuthentication, for example.
Can you please clarify why this is not correct and what is the security problem it creates?
Thanks,
Pedro

Pedro Fuentes

Jul 2, 2020, 3:11:10 AM
to mozilla-dev-s...@lists.mozilla.org
Sorry, my message was incomplete... please read the last part as:

Can you please clarify why this is not correct and what is the security problem it creates if the CA is not operated externally?

Paul van Brouwershaven

Jul 2, 2020, 3:23:19 AM
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Pedro Fuentes
Thanks for raising this issue Ryan, I'm trying to update
http://revocationcheck.com/ to cover this issue.

From my understanding:

- The OCSP nocheck extension is only required for a delegated OCSP responder
  certificate, as it can't provide answers for itself.
- For a CA certificate (in CA-signed responses) the OCSP nocheck extension
  MUST NOT be present, as the CA is not authorized to create a status for
  itself.
- A CA certificate MUST NOT include the OCSPSigning EKU, even when using
  CA-signed responses.
- When using CA-signed responses, the digitalSignature key usage bit MUST be
  set.
- Delegated OCSP signing certificates MUST only have the OCSPSigning EKU set
  (Microsoft A12).
- Delegated OCSP signing certificates MUST be issued directly by the CA that
  is identified in the request as the issuer of the EE certificate (RFC 6960
  4.2.2.2)

But as Pedro also mentioned, the EKU extension in intermediate certificates
acts as a constraint on the permitted EKU OIDs in end-entity certificates,
which means you won't be able to use delegated OCSP signing certificates
with strict EKU validation on the path? While not every client might have
strict validation on this, it would be really confusing if it's required
for one EKU and forbidden for the other.

On Thu, 2 Jul 2020 at 08:19, Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Hello,
> Our understanding when creating this SubCA was that the CA certificate
> should include the EKUs that would be allowed to issue, and therefore, as
> it would generate certificates for the OCSP responders, it should include
> such EKU, the same it would include the EKU for clientAuthentication, for
> example.
> Can you please clarify why this is not correct and what is the security
> problem it creates?
> Thanks,
> Pedro
>


--
Regards,

Paul van Brouwershaven

http://linkedin.com/in/pvanbrouwershaven
http://facebook.com/p.vanbrouwershaven
http://twitter.com/vanbroup

Rob Stradling

Jul 2, 2020, 6:14:37 AM
to mozilla-dev-security-policy, ry...@sleevi.com
> This batch is NOT comprehensive. According to crt.sh, there are approximately 293 certificates that meet the criteria of "issued by a Mozilla-trusted, TLS-capable CA, with the OCSPSigning EKU, and without pkix-nocheck". misissued.com had some issues with parsing some of these certificates, due to other non-conformities, so I only included a sample.

I just reproduced this result. I've posted my SQL query and (thanks to GitHub) a searchable TSV report of all 293 certificates here:
https://gist.github.com/robstradling/6c737c97a7a3ab843b6f24747fc9ad1f


Peter Mate Erdosi

Jul 2, 2020, 7:04:29 AM
to mozilla-dev-security-policy
Just for my better understanding, is the following CA certificate
"TLS-capable"?

X509v3 Basic Constraints: critical
    CA:TRUE
X509v3 Key Usage: critical
    Certificate Sign, CRL Sign
X509v3 Extended Key Usage:
    Time Stamping, OCSP Signing


Peter

Rob Stradling

Jul 2, 2020, 7:21:51 AM
to mozilla-dev-security-policy, Peter Mate Erdosi
Hi Peter. The "following CA certificate" (which I'll call Certificate X) is not capable of issuing id-kp-serverAuth leaf certificates that will be trusted by Mozilla, but that fact is entirely irrelevant to this discussion. Notice that Ryan wrote "issued by a Mozilla-trusted, TLS-capable CA" rather than "is a Mozilla-trusted, TLS-capable CA".

Certificate X contains the id-kp-OCSPSigning EKU. This means that it can be used as a delegated OCSP signer, to sign OCSP responses on behalf of its issuer. If its issuer is "a Mozilla-trusted, TLS-capable CA", then all of its issuer's delegated OCSP signer certificates are in scope for the BRs and for the Mozilla Root Store Policy.

Certificate X is an intermediate CA certificate, which is capable of issuing id-kp-timeStamping leaf certificates. That's all very nice, but it doesn't alter the fact that Certificate X is also a (misissued) delegated OCSP signing certificate that is in scope for the BRs and the Mozilla Root Store Policy.


Pedro Fuentes

Jul 2, 2020, 7:52:34 AM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, July 2, 2020 at 9:23:19 (UTC+2), Paul van Brouwershaven wrote:
> But as Pedro also mentioned, the EKU extension in intermediate certificates
> acts as a constraint on the permitted EKU OIDs in end-entity certificates,
> which means you won't be able to use delegated OCSP signing certificates
> with strict EKU validation on the path? While not every client might have
> strict validation on this, it would be really confusing if it's required
> for one EKU and forbidden for the other.
>

If we look at the BR, it says:
"[^**]: Generally Extended Key Usage will only appear within end entity certificates (as highlighted in RFC 5280 (4.2.1.12)), however, Subordinate CAs MAY include the extension to further protect relying parties until the use of the extension is consistent between Application Software Suppliers whose software is used by a substantial portion of Relying Parties worldwide."

Therefore, in my humble opinion it's fully logical to understand this requirement "as it's written", which is to restrict the CA and protect relying parties... In other words, the BR is clearly saying that the meaning of the EKU in SubCAs MUST be understood as a constraint and NOT as expressing the EKU of the certificate itself. The same applies to other EKUs: for example, the serverAuth EKU is evidently understood as a constraint, and no one takes it to mean that the CA certificate itself can be used to protect a web server; in that case the rest of the certificate profile would also have to be consistent with the requirements for leaf TLS certificates. I think it's not logical for the BR to consider the implications of setting some EKUs and not others.

I would consider this as two derived issues that need to be considered separately and appropriately:

#1. There's an evident "gap" in the BR in section 7.1.2.2-g that is creating a potential inconsistency with the RFC, and also creates incompatibilities with certain software solutions.

#2. This inconsistency could, under certain conditions, create a security risk; in particular this applies in the case of externally-operated subCAs. This security risk must be analysed and mitigated, perhaps by revoking these SubCAs.

But I would say that treating this as a single problem needing a single solution is not appropriate.

Best,
Pedro

Peter Mate Erdosi

Jul 2, 2020, 7:59:42 AM
to mozilla-dev-security-policy
Hi Rob, thanks for the clarification.

What would the situation be if the issuer is a Root CA instead of a "TLS-capable
(intermediate or subordinate) CA"?
As far as I have understood till now, it is not misissued if the root CA
cannot be considered a "Mozilla-trusted, TLS-capable CA".

And considering chapter 7.1.2.1 b) of the CAB Forum BRG, extendedKeyUsage MUST
NOT be present in root CA certificates, but "If the Root CA Private Key is
used for signing OCSP responses, then the digitalSignature bit MUST be
set", which is the same as 7.1.2.2 e): "If the Subordinate CA
Private Key is used for signing OCSP responses, then the digitalSignature
bit MUST be set."

I have not seen that the SQL query considered the digitalSignature bit,
but as I have interpreted it until now, the CA cannot sign OCSP responses
without setting the digitalSignature bit, even if the OCSPSigning EKU is
used. And Mozilla also requires CAs to be BRG-conformant, doesn't it?

So, I am a bit confused.


Thanks again,

Peter

On Thu, Jul 2, 2020 at 1:21 PM Rob Stradling <r...@sectigo.com> wrote:

> Hi Peter. The "following CA certificate" (which I'll call Certificate X)
> is not capable of issuing id-kp-serverAuth leaf certificates that will be
> trusted by Mozilla, but that fact is entirely irrelevant to this
> discussion. Notice that Ryan wrote "*issued by* a Mozilla-trusted,
> TLS-capable CA" rather than "*is* a Mozilla-trusted, TLS-capable CA".
>
> Certificate X contains the id-kp-OCSPSigning EKU. This means that it can
> be used as a delegated OCSP signer, to sign OCSP responses on behalf of its
> issuer. If its issuer is "a Mozilla-trusted, TLS-capable CA", then all of
> its issuer's delegated OCSP signer certificates are in scope for the BRs
> and for the Mozilla Root Store Policy.
>
> Certificate X is an intermediate CA certificate, which is capable of
> issuing id-kp-timeStamping leaf certificates. That's all very nice, but it
> doesn't alter the fact that Certificate X is also a (misissued) delegated
> OCSP signing certificate that is in scope for the BRs and the Mozilla Root
> Store Policy.

Neil Dunbar

Jul 2, 2020, 8:11:22 AM
to dev-secur...@lists.mozilla.org

On 02/07/2020 12:52, Pedro Fuentes via dev-security-policy wrote:
> If we look at the BR, it says:
> "[^**]: Generally Extended Key Usage will only appear within end entity certificates (as highlighted in RFC 5280 (4.2.1.12)), however, Subordinate CAs MAY include the extension to further protect relying parties until the use of the extension is consistent between Application Software Suppliers whose software is used by a substantial portion of Relying Parties worldwide."
>
> Therefore, in my humble opinion it's fully logical to understand this requirement "as it's written", which is to restrict the CA and protect relying parties... In other words, the BR is clearly saying that the meaning of the EKU in SubCAs MUST be understood as a constraint and NOT to express the EKU of the certificate itself.

Pedro,

I think the problem here isn't what the BRs indicate about how EKUs in a CA
certificate should be read.

It's that RFC 6960 (Section 4.2.2.2) states that 

> OCSP signing delegation SHALL be designated by the inclusion of
> id-kp-OCSPSigning in an extended key usage certificate extension
> included in the OCSP response signer's certificate.


In other words, if a certificate X (CA or otherwise) contains that EKU
value, by definition, it becomes a valid delegated OCSP responder
certificate, regardless of the intentions surrounding EKU interpretation
in CA certificates. Thus, OCSP responses signed by that X, on behalf of
X's issuing CA, _would_ be properly validated by compliant RP software.
If a hostile party grabs hold of the private key for the CA certificate,
their harm is not limited to the PKI described by the original CA
certificate, but extends to all of the sibling certificates of X.

Now, it's true that the BRs also require the id-pkix-ocsp-nocheck
extension, but RFC 6960 does not require it (it's just the way to
say "trust this delegated cert for as long as it is valid, and don't
consult OCSP/CRLs").

Regards,

Neil

Paul van Brouwershaven

Jul 2, 2020, 10:34:58 AM
to dev-secur...@lists.mozilla.org
I did some testing on EKU chaining in Go, but from my understanding this
works the same for Microsoft:


An OCSP responder certificate with Extended Key Usage OCSPSigning, but an
issuing CA without the EKU (result: certificate specifies an incompatible
key usage)

https://play.golang.org/p/XSsKfxytx3O


The same chain but now the ICA includes the Extended Key Usage OCSPSigning
(result: ok)

https://play.golang.org/p/XL7364nSCe8


Microsoft requires the EKU to be present in issuing CA certificates:



"Issuing CA certificates that chain to a participating Root CA must be
constrained to a single EKU (e.g., separate Server Authentication, S/MIME,
Code Signing, and Time Stamping uses). This means that a single Issuing CA
must not combine server authentication with S/MIME, code signing or time
stamping EKU. A separate intermediate must be used for each use case."
https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#a-root-requirements
(8)


Technically constraining issuing CA’s based on the EKU as Microsoft
requires feels like a good thing to do. But if we leave out the OCSPSigning
EKU we must leave out all EKU constraints (and talk to Microsoft) or move
away from delegated OCSP signing certificates and all move to CA signed
responses.

Ryan Sleevi

Jul 2, 2020, 10:41:11 AM
to Paul van Brouwershaven, MDSP
On Thu, Jul 2, 2020 at 10:34 AM Paul van Brouwershaven via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> I did do some testing on EKU chaining in Go, but from my understand this
> works the same for Microsoft:
>

Go has a bug https://twitter.com/FiloSottile/status/1278501854306095104

The understanding for Microsoft isn't correct, as linked earlier in the
reference materials.


> Microsoft requires the EKU to be present in issuing CA certificates:


> *Issuing CA certificates that chain to a participating Root CA must be
> constrained to a single EKU (e.g., separate Server Authentication, S/MIME,
> Code Signing, and Time Stamping uses. This means that a single Issuing CA
> must not combine server authentication with S/MIME, code signing or time
> stamping EKU. A separate intermediate must be used for each use case.
>
> https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#a-root-requirements
> <
> https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#a-root-requirements
> >
> (8)*
>

Did you paste the wrong section? This doesn't seem to be consistent with
what you're saying, and perhaps it was just a bad copy/paste? Even if
quoting Microsoft policy, how do you square this with: "A CA must
technically constrain an OCSP responder such that the only EKU allowed is
OCSP Signing." (from that same section)

Did you read the related thread where this was previously discussed on
m.d.s.p.?

> Technically constraining issuing CA’s based on the EKU as Microsoft
> requires feels like a good thing to do. But if we leave out the OCSPSigning
> EKU we must leave out all EKU constraints (and talk to Microsoft) or move
> away from delegated OCSP signing certificates and all move to CA signed
> responses.


That's not correct, and is similar to the mistake I originally/previously
made, and was thankfully corrected on, which also highlighted the
security-relevant nature of it. I encourage you to give another pass at
Robin's excellent write-up, at
https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/XQd3rNF4yOo/bXYjt1mZAwAJ

Paul van Brouwershaven

Jul 2, 2020, 1:16:05 PM
to ry...@sleevi.com, MDSP
On Thu, 2 Jul 2020 at 16:41, Ryan Sleevi <ry...@sleevi.com> wrote:

>
> On Thu, Jul 2, 2020 at 10:34 AM Paul van Brouwershaven via
> dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
>> I did do some testing on EKU chaining in Go, but from my understand this
>> works the same for Microsoft:
>>
>
> Go has a bug https://twitter.com/FiloSottile/status/1278501854306095104
>
> The understanding for Microsoft isn't correct, as linked earlier in the
> reference materials.
>

I wasn't aware that this would be for ADCS only.
The Windows certificate viewer doesn't validate the purpose, but after a
quick test with the PowerShell command Test-Certificate, it does look to
validate the EKU path on Windows 10:

Get-ChildItem -Path
Cert:\currentUser\addressbook\63D6AEAD044E9D720930D7F814B7C74DBB541572 |
Test-Certificate -User -AllowUntrustedRoot -EKU "1.3.6.1.5.5.7.3.9"
WARNING: Chain status:
CERT_TRUST_IS_NOT_VALID_FOR_USAGE
Test-Certificate : The certificate is not valid for the requested usage.
0x800b0110 (-2146762480 CERT_E_WRONG_USAGE)
At line:1 char:94
+ ... DBB541572 | Test-Certificate -User -AllowUntrustedRoot -EKU "1.3.6.1.
...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:Certificate) [Test-Certificate],
Exception
+ FullyQualifiedErrorId :
CryptographicError,Microsoft.CertificateServices.Commands.TestCertificate


(the certificates in the test above are from the chain generated by my
previous example)


>
>> Microsoft requires the EKU to be present in issuing CA certificates:
>
>
>> *Issuing CA certificates that chain to a participating Root CA must be
>> constrained to a single EKU (e.g., separate Server Authentication, S/MIME,
>> Code Signing, and Time Stamping uses. This means that a single Issuing CA
>> must not combine server authentication with S/MIME, code signing or time
>> stamping EKU. A separate intermediate must be used for each use case.
>>
>> https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#a-root-requirements
>> <
>> https://docs.microsoft.com/en-us/security/trusted-root/program-requirements#a-root-requirements
>> >
>> (8)*
>>
>
> Did you paste the wrong section? This doesn't seem to be consistent with
> what you're saying, and perhaps it was just a bad copy/paste? Even if
> quoting Microsoft policy, how do you square this with: "A CA must
> technically constrain an OCSP responder such that the only EKU allowed is
> OCSP Signing." (from that same section)
>

No, it does state that the EKU for other purposes must be set in the issuing
CA; my point here is that when you set these, it does exclude OCSP signing.

The other item of the policy you refer to is confusing, as it doesn't seem
to distinguish between CA-signed and CA-delegated responses; it might
even prohibit CA-signed responses?


> Did you read the related thread where this was previously discussed on
> m.d.s.p.?
>
> Technically constraining issuing CA’s based on the EKU as Microsoft
>> requires feels like a good thing to do. But if we leave out the
>> OCSPSigning
>> EKU we must leave out all EKU constraints (and talk to Microsoft) or move
>> away from delegated OCSP signing certificates and all move to CA signed
>> responses.
>
>
> That's not correct, and is similar to the mistake I originally/previously
> made, and was thankfully corrected on, which also highlighted the
> security-relevant nature of it. I encourage you to give another pass at
> Robin's excellent write-up, at
> https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/XQd3rNF4yOo/bXYjt1mZAwAJ
>

Thanks, it's an interesting thread, but as shown above, Windows does
validate the EKU chain, yet doesn't appear to validate it for delegated OCSP
signing certificates?

Ryan Sleevi

Jul 2, 2020, 1:26:14 PM
to Paul van Brouwershaven, Ryan Sleevi, MDSP
On Thu, Jul 2, 2020 at 1:15 PM Paul van Brouwershaven <
pa...@vanbrouwershaven.com> wrote:

> That's not correct, and is similar to the mistake I originally/previously
>> made, and was thankfully corrected on, which also highlighted the
>> security-relevant nature of it. I encourage you to give another pass at
>> Robin's excellent write-up, at
>> https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/XQd3rNF4yOo/bXYjt1mZAwAJ
>>
>
> Thanks, it's an interesting thread, but as shown above, Windows does
> validate the EKU chain, but doesn't appear to validate it for delegated
> OCSP signing certificates?
>

The problem is providing the EKU as you're doing, which forces chain
validation of the EKU, as opposed to validating the OCSP response, which
does not.

A more appropriate test is to install the test root R as a locally trusted
CA, issue an intermediate I (without the EKU/only id-kp-serverAuth), issue
an OCSP responder O (with the EKU), and issue a leaf cert L. You can then
validate the OCSP response from the responder cert (that is, an OCSP
response signed by the chain O-I-R) for the certificate L-I-R.

Pedro Fuentes

unread,
Jul 2, 2020, 2:34:22 PM7/2/20
to mozilla-dev-s...@lists.mozilla.org
Hello.
Sorry if this question is incorrect, but I’d like to know if it would be acceptable that, for CAs that are owned and operated by the same entity as the Root, the CA certificate is reissued with the same key pair without the offending EKU, instead of doing a full issuance with new keys.
I consider this particular case less risky than externally operated CAs, so I wonder if this could make a smoother solution possible.
Your comments and guidance are appreciated.
Thanks,
Pedro

Paul van Brouwershaven

unread,
Jul 2, 2020, 2:42:50 PM7/2/20
to ry...@sleevi.com, MDSP
When validating the EKU using `Test-Certificate` Windows states it's
invalid, but when using `certutil` it's accepted or not explicitly checked.
https://gist.github.com/vanbroup/64760f1dba5894aa001b7222847f7eef

When/if I have time I will try to do some further tests with a custom setup
to see if the EKU is validated at all.

Ryan Sleevi

unread,
Jul 2, 2020, 4:11:52 PM7/2/20
to Pedro Fuentes, mozilla-dev-security-policy
This is definitely a hard question, but I don't see how we can easily
resolve that. That's why the comments about Key Destruction were made.

So, first, let me say it definitely mitigates *some* of the security
concerns, particularly the most major one: a third-party being able to
arbitrarily "unrevoke" a certificate, particularly "their" certificate. In
the cases of 3P Sub-CAs, this is just so fundamentally terrifying. Even if
the Sub-CA "hasn't" abused such a capability, the mere knowledge that they
could gives them greater flexibility to "live dangerously" - or to make
them a ripe target for compromise.

Now assuming the keys are all part of the same (audited) CA infrastructure,
what does the risk perspective there look like? In effect, for every
Issuing CA that has issued one of these, Browsers/Relying Parties have no
assurance that any revocations are "correct". This is important, because
when a CA revokes a Sub-CA, even their own, we often stop worrying about
audits, for example, or stop looking for misissued certificates, because
"of course" these certificates can't be used for that purpose. The mere
existence of these certificates undermines that whole design: we have to
treat every revoked Sub-CA as if it was unrevoked, /especially/ if that
Sub-CA was the one that had the EKU.

Now, it might be tempting to say "Well, can't we audit the key usage to
make sure it never signs a delegated OCSP response"? But that shifts the
burden and the risk now onto the client software, for what was actually a
CA mistake. A new form of audit would have to be designed to account for
that, and browsers would have to think *very* carefully about what controls
were suitable, whether the auditor was qualified to examine those controls
and had the necessary knowledge. In short, it requires Browsers/the client
to work through every possible thing that could go wrong with this key
existing, and then think about how to defend against it. While the CA might
see this as "saving" a costly revocation, it doesn't really "save" anything
- it just shifts all the cost onto browsers.

It might be tempting to ask how many had the digitalSignature KU, and can
we check the KU on OCSP responses to make sure it matches? In theory,
clients wouldn't accept it, so they wouldn't be unrevocable and able to
cause shenanigans, and we're saved! But again, this is a cost transfer:
every client and relying party now needs to be updated to enforce this, and
work out the compatibility issues, and test and engineer. And even then, it
might be months or years for users to be protected, when the BRs are
supposed to provide protection "within 7 days". Even "clever" alternatives,
like "Don't allow a delegated responder to provide a response for itself"
don't fully address the issue, because it can still provide responses for
others, and that would further require mutating the revocation checking
process described in RFC 5280 to "skip" OCSP (or fall back) to CRLs. All of
this is more complexity, more testing, and contributes to the body of "dark
knowledge" needed for a secure implementation, which makes it harder to
write new browsers / new libraries to verify certificates.

This is the risk analysis we expect CAs to work through, and think about.
What is the cost of this decision on others? Often, CAs focus
(understandably) on the cost to those they've issued certificates to, but
ignore the externalized ecosystem that they're simply shifting those costs
to. The BRs try to force the CA to account for this up front, because they
*know* that if *anything* goes wrong, they have 7 days to revoke, but then
they don't design their PKIs to be resilient for that.

You can imagine a CA that was rotating issuing intermediates every year
would be in a better position, for example, if this was a "previous"
mistake, since fixed. The impact/blast radius of revoking an intermediate
is a linear decay tied to how many unexpired certificates from that
intermediate there are, which is precisely why you should rotate often.
It's a point I've made often, especially with respect to certificate
lifetimes, but it still doesn't seem to have been taken to heart yet by
many. I'm encouraged GlobalSign's new infrastructure appears to be doing
so, and although it was also affected by this issue, it's hopefully
"easier" for them to clean up versus others.

But this is what disaster recovery plans are for. The "compromise" of a
delegated signing CA is, as noted in RFC 6960, in many ways *as bad as*
compromise of the CA key. CAs have to get better at planning for things
blowing up, and I hope every incident report looks at exactly those best
practices: minimizing the pain of revoking a cert within 7d.

For CAs with non-TLS certs, this is going to be especially painful to
revoke, but it's still necessary. This underscores the "Don't use the TLS
root to issue non-TLS certs".

Pedro Fuentes

unread,
Jul 2, 2020, 5:30:29 PM7/2/20
to mozilla-dev-s...@lists.mozilla.org
Hello Ryan,
Thanks for your detailed response.

Just to be sure that we are on the same page: my question was about reissuing a new CA using the same key pair, but this also implies the revocation of the previous version of the certificate.

You elaborate on the need to revoke, but this would still be done anyway.

Thanks,
Pedro

Ryan Sleevi

unread,
Jul 2, 2020, 5:33:05 PM7/2/20
to Pedro Fuentes, mozilla-dev-security-policy
On Thu, Jul 2, 2020 at 5:30 PM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Hello Ryan,
> Thanks for your detailed response.
>
> Just to be sure that we are in the same page. My question was about
> reissuing a new CA using the same key pair, but this implies also the
> revocation of the previous version of the certificate.
>

Right, but this doesn't do anything, because the previous key pair can be
used to sign an OCSP response that unrevokes itself.

This is the problem and why "key destruction" is the best of the
alternatives (that I discussed) for ensuring that this doesn't happen,
because it doesn't shift the cost to other participants.

Pedro Fuentes

unread,
Jul 2, 2020, 6:05:16 PM7/2/20
to mozilla-dev-s...@lists.mozilla.org
I understand your rationale, but my point is that this is happening in the same infrastructure where the whole PKI is operated, and under the responsibility of the same operator as the Root. In my understanding, the operator of the Root has full rights to do delegated OCSP responses if those responses are produced by its own OCSP responders.

I'm failing to see what is the main problem you don't consider solved. As per your own dissertations in the related posts, there are two issues:

1. The certificate contains incorrect extensions, so it's a misissuance. This is solved by revoking the certificate, and this is done not only internally in the PKI, but also in OneCRL.

2. The operator of the SubCA could produce improper revocation responses, so this is a security risk. This risk is hard to see when the operator of the subCA is the same as the operator of the Root... If such an entity wants to do wrongdoing, there are far easier ways to do it than overcomplicated things like unrevoking its own subCA...

Sorry, but I don't see the likelihood of the risks you evoke... I see the potential risk in externally operated CAs, but not here.

Ryan Sleevi

unread,
Jul 2, 2020, 6:25:19 PM7/2/20
to Pedro Fuentes, mozilla-dev-security-policy
On Thu, Jul 2, 2020 at 6:05 PM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I understand your rationale, but my point is that this is happening in the
> same infrastructure where the whole PKI is operated, and under the
> responsibility of the same operator as the Root. In my understanding, the
> operator of the Root has full rights to do delegated OCSP responses if
> those responses are produced by its own OCSP responders.
>
> I'm failing to see what is the main problem you don't consider solved. As
> per your own dissertations in the related posts, there are two issues:
>
> 1. The certificate contains incorrect extensions, so it's a misissuance.
> This is solved by revoking the certificate, and this is done not only
> internally in the PKI, but also in OneCRL.
>

This solves *nothing* for anyone not using OneCRL. It doesn't meet the
obligations the CA warranted within its CP/CPS. It doesn't meet the BRs. It
simply "shifts" risk onto everyone else in the ecosystem, and that's a
grossly negligent and irresponsible thing to do.

"Revoking the certificate" is the minimum bar, which is already a promise
the CA made, to everyone who decides to trust that CA, that they will do
within 7 days. But it doesn't mitigate the risk.


> 2. The operator of the SubCA could produce improper revocation responses,
> so this is a security risk. This risk is hard to see when the operator of
> the subCA is the same as the operator of the Root... If such an entity
> wants to do wrongdoing, there are far easier ways to do it than
> overcomplicated things like unrevoking its own subCA...
>
> Sorry, but I don't see the likelihood of the risks you evoke... I see the
> potential risk in externally operated CAs, but not here.


The risk is just the same! As a CA, I can understand you would say "Surely,
we would never do anything nefarious", but as a member of the community,
why should we trust what you say? Why would the risk be any different with
externally operated CAs? After all, they're audited too; like roots,
shouldn't the risk be the same? Of course you'd realize that no, they're
not the same, because the CA has no way of truly knowing the sub-CA is
being nefarious. The same is true for the Browser trusting the root: it has
no way of knowing you're not being nefarious.

Look, we've had Root CAs that have actively lied in this Forum,
misrepresenting things to the community they later admitted they knew were
false, and had previously been a CA in otherwise good standing (or at
least, no worse standing than other CAs). A CA is a CA, and the risk is
treated the same.

The line of argument being pursued here is a bit like saying "If no one
abuses this, what's the harm?" I've already shown how any attempt to
actually verify it's not abused ends up just shifting whatever cost onto
Relying Parties, when it's the CA and the Subscribers that should bear the
cost, because it's the CA that screwed up. I simply can't see how "just
trust us" is better than objective verification, especially when "just
trust us" got us into this mess in the first place. How would you provide
assurances to the community that this won't be abused? And how is the cost
for the community, in risk, better?

Tim Hollebeek

unread,
Jul 2, 2020, 6:37:19 PM7/2/20
to ry...@sleevi.com, Peter Gutmann, Mozilla
So, from our perspective, the security implications are the most important thing here.
We understand them, and even in the absence of any compliance obligations they
would constitute an unacceptable risk to trustworthiness of our OCSP responses,
so we have already begun the process of replacing the ICAs we are responsible for.
There are already several key ceremonies scheduled and they will continue through
the holiday weekend. We're prioritizing the ICAs that are under the control of third
parties and/or outside our primary data centers, as they pose the most risk. We are
actively working to mitigate internal ICAs as well. Expect to see revocations start
happening within the next day or two.

I understand the attraction of using a BR compliance issue to attract attention to
this issue, but honestly, that shouldn't be necessary. The BRs don't really adequately
address the risks of the OCSPSigning EKU, and there's certainly lots of room for
improvement there. I think, especially in the short term, it is more important to
focus on how to mitigate the security risks and remove the inappropriate EKU from
the affected ICAs. We can fix the BRs later.

It's also important to note that, much like SHA-1, this issue doesn't respect the
normal assumptions about certificate hierarchies. Non-TLS ICAs can have a significant
impact on their TLS-enabled siblings. This means that CA review needs to extend
beyond the certificates that would traditionally be in scope for the BRs.

I would also caution CAs to carefully analyze the implications before blindly adding the
pkix-ocsp-nocheck extension to their ICAs. That might fix the compliance issue,
but in the grand scheme of things probably makes the problem worse, as ICAs
have fairly long lifetimes, and doing so effectively makes the inadvertent delegated
responder certificate unrevocable. So while the compliance problems might be
fixed, it makes resolving the security issues much more challenging.

-Tim

> -----Original Message-----
> From: dev-security-policy <dev-security-...@lists.mozilla.org>
> On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Thursday, July 2, 2020 12:31 AM
> To: Peter Gutmann <pgu...@cs.auckland.ac.nz>
> Cc: ry...@sleevi.com; Mozilla <mozilla-dev-security-
> pol...@lists.mozilla.org>
> Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the
> Dangerous Delegated Responder Cert
>
> On Wed, Jul 1, 2020 at 11:48 PM Peter Gutmann
> <pgu...@cs.auckland.ac.nz>
> wrote:
>
> > Ryan Sleevi via dev-security-policy
> > <dev-secur...@lists.mozilla.org>
> > writes:
> >
> > >Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> > include
> > >an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP Delegated
> > >Responder within Section 4.2.2.2 as indicated by the presence of the
> > id-kp-
> > >OCSPSigning as an EKU.
> >
> > Unless I've misread your message, the problem isn't the presence or
> > not of a nocheck extension but the invalid presence of an OCSP EKU:
> >
> > >I've flagged this as a SECURITY matter [...] the Issuing CA has
> > >delegated
> > the
> > >ability to mint arbitrary OCSP responses to this third-party
> >
> > So the problem would be the presence of the OCSP EKU when it shouldn't
> > be there, not the absence of the nocheck extension.
>
>
> Not quite. It’s both.
>
> The BR violation is caused by the lack of the extension.
>
> The security issue is caused by the presence of the EKU.
>
> However, since some CAs only view things through the lens of BR/program
> violations, despite the sizable security risk they pose, the compliance incident
> is what is tracked. The fact that it’s security relevant is provided so that CAs
> understand that revocation is necessary, and that it’s also not sufficient,
> because of how dangerous the issue is.

Pedro Fuentes

unread,
Jul 2, 2020, 6:42:21 PM7/2/20
to mozilla-dev-s...@lists.mozilla.org
Hello Ryan,
I fully understand your line of argument, but I’d still have a question for you:

Does the operator of a root and its hierarchy have the right to delegate OCSP responses to its own responders?

If your answer is “No”, then I don’t have anything else to say, but if your answer is “Yes”, then I’ll still have a hard time seeing the security risk derived from this issue.

Thanks.

Ben Wilson

unread,
Jul 2, 2020, 7:13:33 PM7/2/20
to Ryan Sleevi, mozilla-dev-security-policy
All,


Thank you to Ryan for identifying this problem, and to all of you who are
earnestly investigating what this problem means and the impact to your CA
hierarchies. Mozilla::pkix requires that an OCSP responder certificate be
an end entity certificate, so we believe that Firefox and Thunderbird are
not impacted by this problem. Historically, as per
https://bugzilla.mozilla.org/show_bug.cgi?id=991209#c10, Mozilla has
allowed CA certificates to have the OCSP signing EKU because some CAs
reported that some Microsoft server software required CA certificates to
have the id-kp-OCSPSigning EKU.

The comments in the code[1] say:

// When validating anything other than an delegated OCSP signing cert,
// reject any cert that also claims to be an OCSP responder, because such
// a cert does not make sense. For example, if an SSL certificate were to
// assert id-kp-OCSPSigning then it could sign OCSP responses for itself,
// if not for this check.
// That said, we accept CA certificates with id-kp-OCSPSigning because
// some CAs in Mozilla's CA program have issued such intermediate
// certificates, and because some CAs have reported some Microsoft server
// software wrongly requires CA certificates to have id-kp-OCSPSigning.
// Allowing this exception does not cause any security issues because we
// require delegated OCSP response signing certificates to be end-entity
// certificates.

Additionally, as you all know, Firefox uses OneCRL for checking the
revocation status of intermediate certificates, so as long as the revoked
intermediate certificate is in OneCRL, the third-party would not be able to
“unrevoke” their certificate (for Firefox). Therefore, Mozilla does not
need the certificates that incorrectly have the id-kp-OCSPSigning EKU to be
revoked within the next 7 days, as per section 4.9.1.2 of the BRs.

However, as Ryan has pointed out in this thread, others may still have risk
because they may not have a OneCRL equivalent, or they may have certificate
verification implementations that behave differently than mozilla::pkix in
regards to processing OCSP responder certificates. Therefore, it is
important to identify a path forward to resolve the security risk that this
problem causes to the ecosystem.

We are concerned that revoking these impacted intermediate certificates
within 7 days could cause more damage to the ecosystem than is warranted
for this particular problem. Therefore, Mozilla does not plan to hold CAs
to the BR requirement to revoke these certificates within 7 days. However,
an additional Incident Report for delayed revocation will still be
required, as per our documented process[2]. We want to work with CAs to
identify a path forward, which includes determining a reasonable timeline
and approach to replacing the certificates that incorrectly have the
id-kp-OCSPSigning EKU (and performing key destruction for them).

Therefore, we are looking forward to your continued input in this
discussion about the proper response for CAs to take to resolve the
security risks caused by this problem, and ensure that this problem is not
repeated in future certificates. We also look forward to your suggestions
on how we can improve OCSP responder requirements in Mozilla’s Root Store
Policy, and to your continued involvement in the CA/Browser Forum to
improve the BRs.

Thanks,


Ben

[1]
https://dxr.mozilla.org/mozilla-central/rev/c68fe15a81fc2dc9fc5765f3be2573519c09b6c1/security/nss/lib/mozpkix/lib/pkixcheck.cpp#858-869

[2] https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation


On Wed, Jul 1, 2020 at 3:06 PM Ryan Sleevi via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I've created a new batch of certificates that violate 4.9.9 of the BRs,
> which was introduced with the first version of the Baseline Requirements as
> a MUST. This is https://misissued.com/batch/138/
>
> A quick inspection among the affected CAs include O fields of: QuoVadis,
> GlobalSign, Digicert, HARICA, Certinomis, AS Sertifitseeimiskeskus,
> Actalis, Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.
>
> Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> include an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP
> Delegated Responder within Section 4.2.2.2 as indicated by the presence of

Ryan Sleevi

unread,
Jul 2, 2020, 8:10:10 PM7/2/20
to Pedro Fuentes, mozilla-dev-security-policy
On Thu, Jul 2, 2020 at 6:42 PM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Does the operator of a root and its hierarchy have the right to delegate
> OCSP responses to its own responders?
>
> If your answer is “No”, then I don’t have anything else to say, but if
> your answer is “Yes”, then I’ll still have a hard time seeing the
> security risk derived from this issue.
>

Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
the "security risk".

I totally appreciate that your argument is "but we wouldn't misuse the
key". The "risk" that I'm talking about is how can anyone, but the CA, know
that's true? All of the compliance obligations assume certain facts when
the CA is operating a responder. This issue violates those assumptions, and
so it violates the controls, and so we don't have any way to be confident
that the key is not misused.

I think the confusion may be from the overloading of the word "risk". Here,
I'm talking about "the possibility of something bad happening". We don't
have any proof any 3P Sub-CAs have mis-signed OCSP responses: but we seem
to agree that there's risk of that happening. It seems we disagree on
whether there is risk of the CA themselves doing it. I can understand the
view that says "Of course the CA wouldn't", and my response is that the
risk is still the same: there's no way to know, and it's still a
possibility.

I can understand that our views may differ: you may see 3P as "great risk"
and 1p as "acceptable risk". However, from the view of a browser or a
relying party, "1p" and "3p" are the same: they're both CAs. So the risk is
the same, and the risk is unacceptable for both cases.

Ryan Sleevi

unread,
Jul 2, 2020, 8:22:43 PM7/2/20
to Ben Wilson, Ryan Sleevi, mozilla-dev-security-policy
On Thu, Jul 2, 2020 at 7:13 PM Ben Wilson <bwi...@mozilla.com> wrote:

> We are concerned that revoking these impacted intermediate certificates
> within 7 days could cause more damage to the ecosystem than is warranted
> for this particular problem. Therefore, Mozilla does not plan to hold CAs
> to the BR requirement to revoke these certificates within 7 days. However,
> an additional Incident Report for delayed revocation will still be
> required, as per our documented process[2]. We want to work with CAs to
> identify a path forward, which includes determining a reasonable timeline
> and approach to replacing the certificates that incorrectly have the
> id-kp-OCSPSigning EKU (and performing key destruction for them).
>

I'm not sure I understand this. The measurement is "damage to the
ecosystem", but the justification is "Firefox is protected, even though
many others are not" (e.g. OpenSSL-derived systems, AFAICT), because
AFAICT, Firefox does a non-standard (but quite reasonable) thing.

I can totally appreciate the answer "The risk to Mozilla is low", but this
response seems... different? It also seems to place CAs that adhere to
4.9.1.2, because they designed their systems robustly, at greater
disadvantage than those that did not, and seems like it only encourages the
problem to get worse over time, not better. Regardless, I do hope that any
delay for revocation is not treated as a "mitigate the EKU incident", but
rather more specifically, "what is your plan to ensure every Sub-CA able to
be revoked as required by 4.9.1.2", which almost invariably means
automating certificate issuance and regularly rotating intermediates. If we
continue to allow CAs to place Mozilla, or broadly, browsers, as somehow
responsible for the consequences of the CA's design decisions, things will
only get worse.

Setting aside the security risk factors, which understandably for Mozilla
are seen as low, at its core, this is a design issue for any CA that can't
or doesn't meet the obligations they warranted to Mozilla, and the broader
community, that they would meet. Getting to a path where this design issue,
this lack of agility, is remediated is essential, not just in the "oh no,
what if the key is compromised" risk, but within the broader "how do we
have an agile ecosystem?" Weak entropy with serial numbers "should" have
been the wake-up call on investing in this.

Ryan Sleevi

unread,
Jul 2, 2020, 9:09:36 PM7/2/20
to Tim Hollebeek, Mozilla, Peter Gutmann, ry...@sleevi.com
Thanks Tim.

It’s deeply reassuring to see DigiCert tackling this problem responsibly
and head-on.

And thank you for particularly calling attention to the fact that blindly
adding id-pkix-ocsp-nocheck to these ICAs introduces worse security
problems. This is why RFC 6960 warns so specifically about this.

What does a robust design look like?
- Omit the EKU for ICAs. You can work around the ADCS issue using Sectigo’s
guidance.
- For your actual delegated responders, omitting OCSP URLs can help “some”
clients, but not all. A sensible minimum profile is:
- basicConstraints:CA=FALSE
- extKeyUsage=id-kp-OCSPSigning (and ONLY that)
- validity period of 90 days or less (30?)
- id-pkix-ocsp-nocheck

basicConstraints is to guarantee it works with Firefox. EKU so it’s a
delegated responder (and only that). Short lived because nocheck means it’s
high risk.

Invariably, any profile (e.g. in the CABForum) would also need to ensure
that these keys are protected to the same assurance level as CA keys,
because of the similar function they pose. I had previously proposed both
the lifetime and protection requirements in CABF, but met significant
opposition. This still lives in
https://github.com/sleevi/cabforum-docs/pull/2/files , although several of
these changes have found their way in through other ballots, such as SC31
in the SCWG of the CABF.

On Thu, Jul 2, 2020 at 6:37 PM Tim Hollebeek <tim.ho...@digicert.com>
wrote:
> > > Ryan Sleevi via dev-security-policy
> > > <dev-secur...@lists.mozilla.org>
> > > writes:
> > >
> > > >Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> > > include
> > > >an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP Delegated
> > > >Responder within Section 4.2.2.2 as indicated by the presence of the
> > > id-kp-
> > > >OCSPSigning as an EKU.
> > >
> > > Unless I've misread your message, the problem isn't the presence or
> > > not of a nocheck extension but the invalid presence of an OCSP EKU:
> > >
> > > >I've flagged this as a SECURITY matter [...] the Issuing CA has
> > > >delegated
> > > the
> > > >ability to mint arbitrary OCSP responses to this third-party
> > >
> > > So the problem would be the presence of the OCSP EKU when it shouldn't
> > > be there, not the absence of the nocheck extension.
> >
> >
> > Not quite. It’s both.
> >
> > The BR violation is caused by the lack of the extension.
> >
> > The security issue is caused by the presence of the EKU.
> >
> > However, since some CAs only view things through the lens of BR/program
> > violations, despite the sizable security risk they pose, the compliance
> incident
> > is what is tracked. The fact that it’s security relevant is provided so
> that CAs
> > understand that revocation is necessary, and that it’s also not
> sufficient,
> > because of how dangerous the issue is.

Filippo Valsorda

unread,
Jul 3, 2020, 12:32:33 AM7/3/20
to ry...@sleevi.com, Paul van Brouwershaven, dev-security-policy
2020-07-02 10:40 GMT-04:00 Ryan Sleevi via dev-security-policy <dev-secur...@lists.mozilla.org>:
> On Thu, Jul 2, 2020 at 10:34 AM Paul van Brouwershaven via
> dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
> > I did do some testing on EKU chaining in Go, but from my understanding this
> > works the same for Microsoft:
> >
>
> Go has a bug https://twitter.com/FiloSottile/status/1278501854306095104

Yep. In fact, Go simply doesn't have an OCSP verifier. We should fix that! I filed an issue: https://github.com/golang/go/issues/40017

The pieces are there (OCSP request serialization and response parsing, signature verification, a chain builder) but the logic stringing them together is not. That includes building the chain without requesting the EKU up the path, and then checking the EKU only on the Responder itself.

It's unfortunate that the Mozilla requirement (that the Responder must be an EE) is not standard, because that would have allowed the OCSP EKU to work like any other, nested up the chain. But that's just not how it works, and it's too late to change, so it has to be special-cased out of the chain nesting requirement; otherwise it wouldn't be possible to mint an Intermediate that can in turn mint Responders without making the Intermediate a Responder itself.

Pedro Fuentes

unread,
Jul 3, 2020, 3:24:05 AM7/3/20
to mozilla-dev-s...@lists.mozilla.org
>
> Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
> the "security risk".

But the point then is that a delegated responder that had the required "noCheck" extension wouldn't be affected by this issue, and CAs wouldn't need to react. Therefore the issue to solve is the "mis-issuance" itself, due to the lack of the extension, not the fact that the CA certificate could be used to do delegated responses for the same operator of the Root, which is acceptable, as you said.

In fact, the side effect is that externally operated delegated responders that have the required noCheck extension don't seem to be affected by the issue and would be deemed acceptable, without requiring further action from CAs, while the evident risk problem is still there.

>
> I can understand that our views may differ: you may see 3P as "great risk"
> and 1p as "acceptable risk". However, from the view of a browser or a
> relying party, "1p" and "3p" are the same: they're both CAs. So the risk is
> the same, and the risk is unacceptable for both cases.

But this is not actually like that, because what is required now of CAs is to react appropriately to this incident, and you are imposing a unique approach while the situations are fundamentally different. The implications of this issue are not the same for CAs that had 3P delegation (or a mix of 1P and 3P) as for those, like us, that have no such delegation.

In our particular case, where we have three affected CAs, owned and operated by WISeKey, we are proposing this action plan, for which we request feedback:
1.- Monday, new CAs will be created with new keys, which will be used to substitute the existing ones
2.- Monday, the existing CAs will be reissued with the same keys, removing the OCSP Signing EKU and with A REDUCED VALIDITY OF THREE MONTHS
3.- The existing CAs will be disabled for any new issuance, and will only be kept operative for signing CRLs and attending revocation requests
4.- Within the 7-day period, the previous certificates of the CAs will be revoked, updating CCADB and OneCRL
5.- Once the re-issued certificates expire, we will destroy the keys and write the appropriate report

In my humble opinion, this plan is:
- Solving the BR compliance issue by revoking the offending certificate within the required period
- Reducing even more the potential risk of hypothetical misuse of the keys by establishing a short life-time

I hope this plan is acceptable.

Best,
Pedro

Paul van Brouwershaven

unread,
Jul 3, 2020, 6:35:49 AM7/3/20
to ry...@sleevi.com, MDSP
For those who are interested, in contrast to the direct EKU validation with
Test-Certificate, certutil does validate the OCSP signing EKU on the
delegated OCSP signing certificate but doesn't validate the
certificate chain for the OCSP signing EKU.

Full test script and output can be found here:
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d
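As a minimal sketch of the "nested EKU" chain behavior being tested here (an editorial illustration, assuming a simplified per-certificate representation; this mirrors Microsoft-style processing, which RFC 5280 does not itself mandate):

```python
# Minimal sketch of "nested EKU" chain processing: an EKU is honored for
# the leaf only if every certificate in the chain either omits the EKU
# extension entirely or explicitly lists the required usage.
# (Simplified model; the chain representation is illustrative.)

ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"
ID_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"

def chain_permits_eku(chain_ekus, required_oid):
    """chain_ekus: per-certificate EKU sets, leaf first; None = no EKU ext."""
    return all(ekus is None or required_oid in ekus for ekus in chain_ekus)

# A responder leaf under an issuer with no EKU extension passes...
assert chain_permits_eku([{ID_KP_OCSP_SIGNING}, None], ID_KP_OCSP_SIGNING)
# ...but under a TLS-only issuer a nested check would reject it.
assert not chain_permits_eku(
    [{ID_KP_OCSP_SIGNING}, {ID_KP_SERVER_AUTH}], ID_KP_OCSP_SIGNING)
```

Under this model, a validator that checks only the leaf's EKU (as the certutil test above suggests) accepts chains that a nested check would reject.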

Ryan Sleevi

unread,
Jul 3, 2020, 7:26:07 AM7/3/20
to Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
Hi Pedro,

I’m not sure how best to proceed here. It seems like we’ve reached a point
where you’re wanting to discuss possible ways to respond to this, as a CA,
and it feels like this should be captured on the bug.

I’m quite worried here, because this reply demonstrates that we’re at a
point where there is still a rather large disconnect, and I’m not sure how
to resolve it. It does not seem that there’s an understanding here of the
security issues, and while I want to help as best I can, I also believe
it’s appropriate that we accurately consider how well a CA understands
security issue as part of considering incident response. I want there to be
a safe space for questions, but I’m also deeply troubled by the confusion,
and so I don’t know how to balance those two goals.

On Fri, Jul 3, 2020 at 3:24 AM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> >
> > Yes. But that doesn't mean we blindly trust the CA in doing so. And
> that's
> > the "security risk".
>
> But the point then is that a delegated responder that had the required
> "noCheck" extension wouldn't be affected by this issue and CAs wouldn't
> need to react, and therefore the issue to solve is the "mis-issuance"
> itself due to the lack of the extension, not the fact that the CA
> certificate could be used to do delegated responses for the same operator
> of the Root, which is acceptable, as you said.


I don’t understand why this is difficult to understand. If you have the
noCheck extension, then per RFC 6960, you need to make that certificate
short-lived. The BRs require the cert have the extension.
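As a minimal sketch of the condition at issue (an editorial illustration, assuming the certificate's extensions have already been parsed into a dict keyed by dotted OID; any X.509 parser can produce such a mapping):

```python
# Sketch of the BR 4.9.9 condition: a certificate asserting the
# id-kp-OCSPSigning EKU MUST also carry id-pkix-ocsp-nocheck.
# Assumes extensions are pre-parsed into {dotted_oid: value}.

OID_EXT_KEY_USAGE = "2.5.29.37"
ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"
ID_PKIX_OCSP_NOCHECK = "1.3.6.1.5.5.7.48.1.5"

def violates_br_4_9_9(extensions):
    """True for a delegated responder cert that lacks ocsp-nocheck."""
    ekus = extensions.get(OID_EXT_KEY_USAGE, [])
    return ID_KP_OCSP_SIGNING in ekus and ID_PKIX_OCSP_NOCHECK not in extensions
```

A cert without the OCSPSigning EKU is out of scope of this check; one with both the EKU and the nocheck extension is, on this particular point, compliant.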

Similarly, if something goes wrong with such a responder, you also have to
consider revoking the root, because it is as bad as a root key compromise.

In fact the side effect is that delegated responders operated externally
> that have the required no check extension don't seem to be affected by the
> issue and would be deemed acceptable, without requiring further action to
> CAs, while the evident risk problem is still there.


The “nocheck” extension discussion here is to highlight the compliance
issue.

The underlying issue is a security issue: things capable of providing OCSP
responses that shouldn’t be.

It seems you understand the security issue when viewing external sub-CAs:
they can now impact the security of the issuer.

It seems we’re at an impasse for understanding the issue for
internally-operated Sub CAs: this breaks all of the auditable controls and
assurance frameworks, and breaks the security goals of a “correctly”
configured delegated responder, as discussed in the security considerations
throughout RFC 6960.


>
> >
> > I can understand that our views may differ: you may see 3P as "great
> risk"
> > and 1p as "acceptable risk". However, from the view of a browser or a
> > relying party, "1p" and "3p" are the same: they're both CAs. So the risk
> is
> > the same, and the risk is unacceptable for both cases.
>
> But this is not actually like that, because what is required of CAs now is
> to react appropriately to this incident, and you are imposing a single
> approach while the situations are fundamentally different. The
> ramifications of this issue are not the same for CAs that had 3P delegation
> (or a mix of 1P and 3P) as for ones, like us, that don't have such delegation.


The burden is for your CA to establish that, in the incident response. I’ve
seen nothing from you to reasonably establish that; you just say “but it’s
different”. And that worries me, because it seems you don’t recognize that
all of the controls and tools and expectations we have, both in terms of
audits but also in all of the checks we make (for example, with crt.sh)
*also* lose their credibility for as long as this exists.

Again, I understand and appreciate the view that you seem to be advocating:
“If nothing goes wrong, no one is harmed. If third-parties were involved,
things could go wrong, so we understand that. But we won’t let anything go
wrong ourselves.”

But you seem to be misunderstanding what I’m saying: “If anything goes
wrong, we will not be able to detect it, and all of our assumptions and
safety features will fail. We could try and design new safety features, but
now we’re having to literally pay for your mistake, which never should have
happened in the first place.”

That is a completely unfair and unreasonable thing for WISeKey to ask of
the community: for everyone to change and adapt because WISeKey failed to
follow the expectations.

The key destruction is the only way I can see being able to provide some
assurance that “things won’t go wrong, because it’s impossible for them to
go wrong, here’s the proof”

Anything short of that is asking the community to either accept the
security risk that things can go wrong, or for everyone to go modify their
code, including their tools to do things like check CT, to appropriately
guard against that. Which is completely unreasonable. That’s how
fundamentally this breaks the assumptions here.


In our particular case, where we have three affected CAs, owned and
> operated by WISeKey, we are proposing this action plan, for which we
> request feedback:
> 1.- Monday, new CAs will be created with new keys, that will be used to
> substitute the existing ones
> 2.- Monday, the existing CAs would be reissued with the same keys,
> removing the OCSP Signing EKU and with A REDUCED VALIDITY OF THREE MONTHS


To match your emphasis, THIS DOES NOTHING to solve the security problem. It
doesn’t *matter* how short the validity of the new certificates is; what
matters is the *old* certificates and whether the *old* certificates’
private keys still exist. Which is exactly what you’re proposing to continue.


> 3.- The existing CAs will be disabled for any new issuance, and will only
> be kept operative for signing CRLs and to attend revocation requests
> 4.- Within the 7 days period, the previous certificate of the CAs will be
> revoked, updating CCADB and OneCRL


This does nothing to address the security risk. It *only* addresses the
compliance issue.

5.- Once the re-issued certificates expire, we will destroy the keys and
> write the appropriate report


This ignores the security issues and just focuses on the compliance issues.
Which is, I think, an extremely poor response for a CA. If that was the
response, then like I said, the mitigation I would encourage clients to do
is remove trust in those intermediates. Since that turns out to be rather
difficult, I think the safest option would be to remove trust in the root.

The most important thing I want CAs to solve is the security issue. That’s
why I highlighted *every* CA, not just those I reported, needs to examine
their systems. The point isn’t to focus on compliance and ignore security:
it’s to solve the problem at its core.

Because of the nature of this issue, anything short of revoking the
intermediates with a key destruction is asking “trust us to not screw up
but we have no way to prove we didn’t and detecting it is now even harder
and oh if we do you have to revoke the issuer CA/Root anyways”. Rather than
wait for something to go wrong, and accept that risk, I’d rather we just
tackle it head on and start removing trust in whatever is needed. That’s
how big the risk is, and how just hoping things won’t go wrong isn’t a
strategy.

In my humble opinion, this plan is:
> - Solving the BR compliance issue by revoking the offending certificate
> within the required period
> - Reducing even more the potential risk of hypothetical misuse of the keys
> by establishing a short life-time
>
> I hope this plan is acceptable.


This plan still highlights a misunderstanding about the security issues,
and as a consequence, doesn’t seem to understand that it doesn’t reduce the
potential risk. Revoking the old intermediate does nothing for the security
risk, unless and until the key is no longer usable, or everyone in the
world changes their code to defend against these situations. That’s just
how this bug works.

Pedro Fuentes

unread,
Jul 3, 2020, 8:06:19 AM7/3/20
to mozilla-dev-s...@lists.mozilla.org
Ryan,
I don’t think I’m failing to see the security problem, but we evidently have different perception of the risk level for the particular case of internal delegation.
Anyway I will just cease in my intent and just act as it’s expected, looking as guidance to the reaction of other CAs where possible.

I would just have a last request for you. I would appreciate if you can express your views on Ben’s message about Mozilla’s position, in particular about the 7-day deadline.
I think it’s of extreme benefit for all if the different browsers are aligned.

Thanks,
Pedro

Rob Stradling

unread,
Jul 3, 2020, 8:28:59 AM7/3/20
to ry...@sleevi.com, Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On 03/07/2020 12:24, Ryan Sleevi via dev-security-policy wrote:
<snip>
> The key destruction is the only way I can see being able to provide some
> assurance that “things won’t go wrong, because it’s impossible for them to
> go wrong, here’s the proof”

Ryan, distrusting the root(s) would be another way to provide this
assurance (for up-to-date clients anyway), although I'd be surprised if
any of the affected CAs would prefer to go that route!

--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited

Arvid Vermote

unread,
Jul 3, 2020, 10:04:58 AM7/3/20
to ry...@sleevi.com, mozilla-dev-security-policy
GlobalSign recognizes the reported security issue and associated risk, and
is working on a plan to remediate the impacted CA hierarchies with first
priority on terminating those branches that include issuing CA with private
keys outside of GlobalSign's realm. We will soon share an initial plan on
our Bugzilla ticket https://bugzilla.mozilla.org/show_bug.cgi?id=1649937.

One question we have for the root store operators specifically is what type
of assurance they are looking for on the key destruction activities. In the
past we've both done key destruction ceremonies without and with (e.g. in
the case of addressing a compliance issue like
https://bugzilla.mozilla.org/show_bug.cgi?id=1591005) an external auditor
witnessing the destruction and issuing an independent ISAE3000 witnessing
report.

> -----Original Message-----
> From: dev-security-policy <dev-security-...@lists.mozilla.org>
On
> Behalf Of Ryan Sleevi via dev-security-policy
> Sent: woensdag 1 juli 2020 23:06
> To: mozilla-dev-security-policy
<mozilla-dev-s...@lists.mozilla.org>
> Subject: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous
> Delegated Responder Cert
>
> I've created a new batch of certificates that violate 4.9.9 of the BRs,
which was
> introduced with the first version of the Baseline Requirements as a MUST.
This is
> https://misissued.com/batch/138/
>
> A quick inspection among the affected CAs include O fields of: QuoVadis,
> GlobalSign, Digicert, HARICA, Certinomis, AS Sertifitseeimiskeskus,
Actalis,
> Atos, AC Camerfirma, SECOM, T-Systems, WISeKey, SCEE, and CNNIC.
>
> Section 4.9.9 of the BRs requires that OCSP Delegated Responders MUST
> include an id-pkix-ocsp-nocheck extension. RFC 6960 defines an OCSP
> Delegated Responder within Section 4.2.2.2 as indicated by the presence of
the
> in the past [3], which highlighted the security impact of this. I've
flagged this as a
> SECURITY matter for CAs to carefully review, because in the cases where a
> third-party, other than the Issuing CA, operates such a certificate, the
Issuing CA
> has delegated the ability to mint arbitrary OCSP responses to this

Ryan Sleevi

unread,
Jul 3, 2020, 10:43:47 AM7/3/20
to Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On Fri, Jul 3, 2020 at 8:06 AM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Ryan,
> I don’t think I’m failing to see the security problem, but we evidently
> have different perception of the risk level for the particular case of
> internal delegation.
> Anyway I will just cease in my intent and just act as it’s expected,
> looking as guidance to the reaction of other CAs where possible.


Again, I don’t disagree that it’s reasonable to see a difference between 1P
and 3P if you’re a CA. But I look at the risk of “what attacks are now
enabled if this is unaddressed”?

Put differently, if a CA used a long-lived delegated responder, and I found
out about it, I would absolutely be concerned and expect explanations.
However, if the “worst” thing the delegated responder could do, if
compromised, was “simply” sign OCSP responses, that’s at least better than
the situation we’re in here. Because these delegated responders can, in
many cases, cause new issuance, and because they are actively signing
things other than OCSP responses (e.g. certificates), the threat model
becomes unreasonably complex.

The BRs have clear guidance on who absorbs the risk for a CA’s mistake: the
CA and anyone who has obtained certificates from the CA. Historically,
browsers have rather nobly stepped in and absorbed that risk for the
ecosystem, introducing ever more complex solutions to try to allow site
operators to continue with business as usual with minimal disruption. But
that’s not what the “social contract” of a CA says should happen, and it’s
not what the CP/CPS of the CA says will happen.

It’s true that 4.9.1.2 “only” demands revocation, but any CA that fails to
recognize the security impact and the similarity to a key misuse and/or
compromise - even if “hypothetical” - does a disservice to all.

I would just have a last request for you. I would appreciate if you can
> express your views on Ben’s message about Mozilla’s position, in particular
> about the 7-day deadline.
> I think it’s of extreme benefit for all if the different browsers are
> aligned.


I participate here largely in an individual capacity, except as specified.
I’ve already shared my risk analysis with Ben on that thread, but I also
think it would be grossly negligent for a CA to try to look for “browser
alignment” as a justification for ignoring the BRs, especially when so many
platforms and libraries are put at risk by the CA’s misissuance. It sounds
like you’re looking for an explicit “exception”, and while Ben’s message
seems like a divergence from past Mozilla positions, I think it at least
maintains consistency that “this is an incident, no exceptions”.

Again, while wanting to ensure a safe space for questions, as a CA, your
responsibility is to understand these issues and act accordingly. I wholly
appreciate wanting to have an open and transparent discussion about the
facts, and I am quite sensitive to the fact that there very well can be
information being overlooked. As I said, and will say again: this is for
your incident response to demonstrate, and as with any CA, and for any
incident, you will be judged on how well and how timely you respond to the
risks. Similarly, if you fail to revoke on time, how comprehensively you
mitigate the lack of timeliness so that you can comprehensively demonstrate
it will never happen again.

Every time we encounter some non-compliance with an intermediate, CAs push
back on their 4.9.1.2 obligations, saying it would be too disruptive. Heck,
we’re still in a place where CAs are arguing even revoking the Subscriber
cert under 4.9.1.1 is too disruptive, despite the CA claiming they would do
so. This **has** to change, and so it needs to be clear that the first
order is to expect a CA to **do what they said they would**, and revoke on
the timetable defined. If there truly is an exceptional situation that
prevents this, and the CA files a second incident for not complying with
the BRs by not revoking, then the only way that can or should not result in
distrust of the CA is if their incident report can show that they
understand the issue sufficiently and can commit to never delaying
revocation again by showing comprehensively the step they are taking.

There is little evidence that the majority of CAs are capable of this, but
it’s literally been a Baseline Requirement since the start. For lack of a
better analogy: it’s a borrower who is constantly asking for more and more
credit to keep things going, and so they don’t default. If they default,
you are unlikely to get your money back, but if you continue to loan them
more and more, you’re just increasing your own risk for if/when they do
default. The CA is the borrower, defaulting is being distrusted, browsers
are the lenders, and the credit being extended is how flexible they are
when misissuance events occur. Taking a delay on revocation for this issue
is asking for a *huge* loan. For some CAs, that’s beyond the credit they
have available, and the answer is no. For others, it might be yes. Just
like credit scores are used in credit, different CAs have different risks
based on how well they’ve handled past issues and, in some part, based on
how well they handle this issue.

The best way you can avoid taking on new “trust” debt, and even potentially
pay down some of any existing debt caused by past incidents, is to promptly
revoke and provide a key destruction ceremony report to that effect. Short
of that, it’s going to depend on the incident report as to what happens and
whether further credit is extended.

>

Ryan Sleevi

unread,
Jul 3, 2020, 10:47:56 AM7/3/20
to Arvid Vermote, mozilla-dev-security-policy, ry...@sleevi.com
On Fri, Jul 3, 2020 at 10:04 AM Arvid Vermote <arvid....@globalsign.com>
wrote:

> GlobalSign recognizes the reported security issue and associated risk, and
> is working on a plan to remediate the impacted CA hierarchies with first
> priority on terminating those branches that include issuing CA with private
> keys outside of GlobalSign's realm. We will soon share an initial plan on
> our Bugzilla ticket https://bugzilla.mozilla.org/show_bug.cgi?id=1649937.
>
> One question we have for the root store operators specifically is what type
> of assurance they are looking for on the key destruction activities. In the
> past we've both done key destruction ceremonies without and with (e.g. in
> the case of addressing a compliance issue like
> https://bugzilla.mozilla.org/show_bug.cgi?id=1591005) an external auditor
> witnessing the destruction and issuing an independent ISAE3000 witnessing
> report.


Since the goal here is to be able to demonstrate, with some reasonable
assurance, that this key will not come back and haunt the ecosystem, my
intent of the suggestion was yes: an independently witnessed ceremony with
an appropriate report to that effect (e.g. ISAE3000).

The reason for this is that so much of our current design around controls
and audits assume that once something is revoked, the key can no longer do
harm and is not interesting if it gets compromised. This threat model
defeats that assumption, because for the lifetime of the responder
certificate(s) associated with that key, it can be misused to revoke
itself, or its siblings, and cause pain anew.

I suspect that the necessity of a destruction ceremony is probably influenced
by a variety of factors, such as how long the responder cert is valid for.
This is touched on some in RFC 6960. I don’t know what the “right” answer
is, but my gut is that any responder cert valid for more than a year from
now would benefit from such a report. If it’s less than a year out, and
internally operated, perhaps it’s reasonable not to require a report? I’m
not sure where that line is, and this is where the CAs can share their
analysis of the facts to better inform and find the “right” balance here.

>

Peter Bowen

unread,
Jul 3, 2020, 10:58:06 AM7/3/20
to Ryan Sleevi, Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
Ryan,

I have read through this thread and am also somewhat perplexed.

I want to be clear, I'm posting only for myself, as an individual, not
on behalf of any current or former employers.

On Fri, Jul 3, 2020 at 4:26 AM Ryan Sleevi via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:
> On Fri, Jul 3, 2020 at 3:24 AM Pedro Fuentes via dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
>
> > >
> > > Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
> > > the "security risk".
> >
> > But the point then is that a delegated responder that had the required
> > "noCheck" extension wouldn't be affected by this issue and CAs wouldn't
> > need to react, and therefore the issue to solve is the "mis-issuance"
> > itself due to the lack of the extension, not the fact that the CA
> > certificate could be used to do delegated responses for the same operator
> > of the Root, which is acceptable, as you said.
>
>
> I don’t understand why this is difficult to understand. If you have the
> noCheck extension, then per RFC 6960, you need to make that certificate
> short-lived. The BRs require the cert have the extension.

I think this is difficult to understand because you are adding
requirements that don't currently exist.

There is nothing in the BRs or other obligations of the CA to make a
delegated OCSP responder certificate short-lived. From 6960:

"CAs may choose to issue this type of certificate with a very short
lifetime and renew it frequently."

6960 explicitly says "may" is defined in RFC 2119, which says:

"MAY This word, or the adjective "OPTIONAL", mean that an item is
truly optional."

Contrast with "should" which is "This word, or the adjective
"RECOMMENDED", mean that there may exist valid reasons in particular
circumstances to ignore a particular item, but the full implications
must be understood and carefully weighed before choosing a different
course."

While it may be viewed as best practice to have short lived responder
certificates, it must not be viewed as a hard requirement for the BRs
or for the Mozilla program. As you have pointed out previously, a
browser could make this a requirement, but I am unaware of any
publicly available requirement to do so.

> Similarly, if something goes wrong with such a responder, you also have to
> consider revoking the root, because it is as bad as a root key compromise.

I think we can all agree to this point.

> In fact the side effect is that delegated responders operated externally
> > that have the required no check extension don't seem to be affected by the
> > issue and would be deemed acceptable, without requiring further action to
> > CAs, while the evident risk problem is still there.
>
>
> The “nocheck” extension discussion here is to highlight the compliance
> issue.
>
> The underlying issue is a security issue: things capable of providing OCSP
> responses that shouldn’t be.
>
> It seems you understand the security issue when viewing external sub-CAs:
> they can now impact the security of the issuer.
>
> It seems we’re at an impasse for understanding the issue for
> internally-operated Sub CAs: this breaks all of the auditable controls and
> assurance frameworks, and breaks the security goals of a “correctly”
> configured delegated responder, as discussed in the security considerations
> throughout RFC 6960.

This is where I disagree. Currently CAs can create as many delegated
OCSP responder certificates as they like with as many distinct keys as
they like. There are no public requirements from any browser on the
protection of OCSP responder keys, as far as I know. The few
requirements on revocation information are to provide accurate
information and provide the information within 10 seconds "under
normal operating conditions" (no SLO if the CA determines it is not
operating under normal conditions).

For the certificates you identified in the beginning of this thread,
we know they have a certain level of key protection - they are all
required to be managed using cryptographic modules that are validated
as meeting overall Level 3 requirements in FIPS 140. We also know
that these CAs are monitoring these keys, as they have an obligation in
BR 6.2.6 to "revoke all certificates that include the Public Key
corresponding to the communicated Private Key" if the "Subordinate
CA’s Private Key has been communicated to an unauthorized person or an
organization not affiliated with the Subordinate CA".

I agree with Pedro here. If the CA has control over the keys in the
certificates in question, then I do not see that there is a risk that
is greater than already exists. The CA can determine that these are
approved OCSP responders and easily assess whether they have controls
in place since the creation of the certificate that provide assurance
that all OCSP responses signed using the key were accurate (if any
such responses exist). They can also easily validate that they have
controls around these keys to provide assurance that any future OCSP
responses signed using the key will be accurate, the same that they
presumably do for their other OCSP responders.

So, from my personal view, what needs to happen here is that each CA
needs to acknowledge each of the keys in the certificates as a valid
OCSP responder key, according to their internal procedures for such,
if they have not already done so. Then they need to revoke the
certificates and issue new certificates with different serial numbers
that do not have the OCSP EKU to correct the certificate profile
issue. While not required, they should also consider having controls
in place for the lifetime of the key (until destruction) to provide
assurance that any OCSP responses it signs are accurate, as we know
some certificate status consumers may not check the validity of the
response signer certificate.

This situation does suggest that root programs should consider adding
public requirements around:
1) Delegation of OCSP response generation to third parties
2) Maximum validity period of OCSP responder certificates
3) Use of CA private keys, specifically requirements for data they sign

I will also note that any CA that determines that a key has been included
in a certificate usable for OCSP response signing, but that the key is not
authorized for OCSP response generation, should immediately ensure
destruction of that key, and should document its consideration of revoking
the CA that issued such a certificate, so that its auditor can verify that
the risks of doing so were considered.

Thanks,
Peter

Ryan Sleevi

unread,
Jul 3, 2020, 12:18:49 PM7/3/20
to Peter Bowen, Ryan Sleevi, Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On Fri, Jul 3, 2020 at 10:57 AM Peter Bowen <pzb...@gmail.com> wrote:

> While it may be viewed as best practice to have short lived responder
> certificates, it must not be viewed as a hard requirement for the BRs
> or for the Mozilla program. As you have pointed out previously, a
> browser could make this a requirement, but I am unaware of any
> publicly available requirement to do so.
>

Thanks, and I think you're making a useful clarification here. The 'need'
being talked about is the logical consequence of a secure design, and the
assumptions that go with it, as well as the role the BRs play in "profiling
down" RFC 5280-and-related into sensible subsets appropriate for their use
case.

I think we're in agreement here that nocheck is required, and that the
consequences of nocheck's presence, namely:
CAs issuing such a certificate should realize that a compromise of
the responder's key is as serious as the compromise of a CA key
used to sign CRLs, at least for the validity period of this
certificate. CAs may choose to issue this type of certificate with
a very short lifetime and renew it frequently.

Among BR-consuming clients, the majority of implementations I looked at don't
recursively check revocation for the delegated responder. That is, rather
than "nocheck" determining client behaviour (vs its absence), "nocheck" is
used to reflect what the clients will do regardless. This can be
understandably seen as 'non-ideal', but gets back into some of the
discussion with Corey regarding profiles and client behaviours.

"need" isn't an explicitly spelled out requirement, I agree, but falls from
the logical consequences of designing such a system and ensuring equivalent
security properties. For example, consider 4.9.10's requirement that OCSP
responses for subordinate CA certificates not have a validity greater than
1 year (!!). We know that's a clear security goal, and so a CA needs to
ensure they can meet that property. If issuing a Delegated Responder, it
logically follows that its validity period should be a year or less,
because if that Delegated Responder is compromised, the objective of 4.9.10
can't be fulfilled. I agree that a CA might argue "well, we published a new
response, and 4.9.10 doesn't say anything about achieving the result, just
performing the action", but I think we can agree that such an approach, even
if technically precise, calls into question much of their overall
interpretation of the BRs.
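The lifetime consequence argued above can be sketched numerically (an editorial illustration; the one-year bound is the consequence derived from BR 4.9.10's limit on OCSP response validity for subordinate CA certificates, not an explicit BR rule for responder certificates):

```python
from datetime import datetime, timedelta

# Illustration of the argument: if OCSP responses for subordinate CA
# certificates may be valid for at most 1 year (BR 4.9.10), a delegated
# responder certificate living longer than that defeats the same security
# goal should its key be compromised. The bound below is derived, not
# mandated anywhere.

DERIVED_MAX_LIFETIME = timedelta(days=366)  # ~1 year, leap-year tolerant

def responder_lifetime_consistent(not_before: datetime, not_after: datetime) -> bool:
    """True if the responder cert's validity fits the 4.9.10-derived bound."""
    return (not_after - not_before) <= DERIVED_MAX_LIFETIME
```

On this reading, a responder certificate valid for, say, five years is inconsistent with the security objective even if every other field is compliant.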


> > It seems we’re at an impasse for understanding the issue for
> > internally-operates Sub CAs: this breaks all of the auditable controls
> and
> > assurance frameworks, and breaks the security goals of a “correctly”
> > configured delegated responder, as discussed in the security
> considerations
> > throughout RFC 6960.
>
> This is where I disagree. Currently CAs can create as many delegated
> OCSP responder certificates as they like with as many distinct keys as
> they like. There are no public requirements from any browser on the
> protection of OCSP responder keys, as far as I know. The few
> requirements on revocation information are to provide accurate
> information and provide the information within 10 seconds "under
> normal operating conditions" (no SLO if the CA determines it is not
> operating under normal conditions).
>

I suspect this is the reopening of the discussion about "the CA
organization" or "the CA certificate"; does 6.2.7 apply to all Private Keys
that logically make up the CA "the organization"'s services, or is 6.2.7
only applicable to keys with CA:TRUE. Either extreme is an unsatisfying
answer: do the TLS keys need to be on FIPS modules? (no, ideally not). Does
this only apply to CA keys and not to delegated responders? (no, ideally
not).

Going back to 6960 and the requirement of pkix-nocheck, we know that such a
responder certificate is 'as powerful as' the Private Key associated with
the CA Certificate for which the responder is a responder for. Does the
short-lived validity eliminate the need for protection?

I suspect when you disagree, is with respect to the auditable
controls/assurance frameworks, and less with respect to security goals
captured in 6960, since we seemed to agree on those above.


> For the certificates you identified in the beginning of this thread,
> we know they have a certain level of key protection - they are all
> required to be managed using cryptographic modules that are validated
> as meeting overall Level 3 requirements in FIPS 140. We also know
> that there CAs are monitoring these keys as they have an obligation in
> BR 6.2.6 to "revoke all certificates that include the Public Key
> corresponding to the communicated Private Key" if the "Subordinate
> CA’s Private Key has been communicated to an unauthorized person or an
> organization not affiliated with the Subordinate CA".
>

Sure, but we know that such revocation is largely symbolic for the vast
majority of clients while these certificates exist, and so the security
goal cannot be reasonably demonstrated while that Private Key still exists.

Further, once this action is performed according to 6.2.6, it disappears
with respect to the obligations under the existing auditing/reporting
frameworks. This is a known deficiency of the BRs, which you rather
comprehensively tried to address when representing a CA member of the
Forum, in your discussion about object hierarchies and signing actions. A
CA may have provisioned actions within 6.2.10 of their CPS, but that's not
a consistent baseline that they can rely on.

At odds here: how do we square the CA performing the action with the result
of that action not being achieved?


> I agree with Pedro here. If the CA has control over the keys in the
> certificates in question, then I do not see that there is a risk that
> is greater than already exists. The CA can determine that these are
> approved OCSP responders and easily assess whether they have controls
> in place since the creation of the certificate that provide assurance
> that all OCSP responses signed using the key were accurate (if any
> such responses exist). They can also easily validate that they have
> controls around these keys to provide assurance that any future OCSP
> responses signed using the key will be accurate, the same that they
> presumably do for their other OCSP responders.
>

But that's not an assurance "we", the relying party, have. We don't have a
way to know what process the CA used to assess what controls they have in
place, and whether those controls were reliable. After all, CAs were
supposed to have controls in place with respect to Delegated Responders,
and this incident is, in part, because some CAs assessed those controls as
compliant, but they were not. This is the breakdown of assurance I spoke to
above. You yourself are familiar with the fact that 5.4.1 of the BRs
doesn't actually require that CAs maintain logs of the messages they've
signed via the HSM, so that's not a guarantee we can simply take at face
value.

Obviously, we're talking about a spectrum of responses, and I am of course
interested if there are options other than what I've outlined, based on
facts not yet considered.

I would hope we would agree that if a CA simply revoked the certificate,
and did nothing beyond that (no reissuance, no destruction), it would not
provide the necessary assurance regarding that key.
- The key would be outside the scope of much of the audited activities,
which are presently oriented around "certificates"
- For the lifetime of the certificates that were revoked, we have to
worry that they, or their siblings, may "come back to haunt us"

Pedro's option is to reissue a certificate for that key, which as you point
out, keeps the continuity of CA controls associated with that key within
the scope of the audit. I believe this is the heart of Pedro's risk
analysis justification.
- However, controls like you describe are not ones that are audited, nor
consistent between CAs
- They ultimately rely on the CA's judgement, which is precisely the
thing an incident like this calls into question, and so it's understandable
not to want to throw "good money after bad"


> So, from my personal view, what needs to happen here is that each CA
> needs to acknowledge each of the keys in the certificates as a valid
> OCSP responder key, according to their internal procedures for such,
> if they have not already done so. Then they need to revoke the
> certificates and issue new certificates with different serial numbers
> that do not have the OCSP EKU to correct the certificate profile
> issue. While not required, they should also consider having controls
> in place for the lifetime of the key (until destruction) to provide
> assurance that any OCSP responses it signs are accurate, as we know
> some certificate status consumers may not check the validity of the
> response signer certificate.
>

Is this from the point of view of "what meets the compliance obligations"
or "what demonstrates the CA understands the risks, has sufficient
safeguards, and can demonstrate them other than 'trust us'"?

I readily admit, I framed the security issue within a compliance issue
precisely because CAs were having trouble recognizing the security issue. I
would hate to see the response predicated on treating it like a compliance
issue, though. When you talk about "while not required", that is indeed the
heart of what I'm talking about in terms of how do we mitigate the security
risks? "It's a security risk, but we're not required to address it" is, I
think, a reasonable ground to criticize the incident response and the CA's
handling of it.

The question of controls in place for the lifetime of the certificate is
the "cost spectrum" I specifically talked about in
https://www.mail-archive.com/dev-secur...@lists.mozilla.org/msg13530.html
, namely "Well, can't we audit the key usage to make sure it never signs a
delegated OCSP response".

As I mentioned, I think the response to these incidents is going to have to
vary on a CA-by-CA basis, because there isn't a one-size-fits-all category
for mitigating these risk factors. Some CAs may be able to demonstrate a
sufficient number of controls that mitigate these concerns in a way that
some browsers will accept. But I don't think we can treat this as "just" a
compliance failure, particularly when the failure means that the intended
result of a significant number of obligations cannot be met because of it.


> This situation does suggest that root programs should consider adding
> public requirements around:
> 1) Delegation of OCSP response generation to third parties
> 2) Maximum validity period of OCSP responder certificates
> 3) Use of CA private keys, specifically requirements for data they sign
>

Wholeheartedly agree that this reveals that greater clarity of profiles is
needed, and I will be working on resurrecting/rebasing
https://github.com/sleevi/cabforum-docs/pull/2 to try and account for some
of this. Those are "prevent" solutions, though, and we're focused here on
"detect and mitigate" solutions.

Peter Bowen

Jul 3, 2020, 4:19:22 PM
to Ryan Sleevi, Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On Fri, Jul 3, 2020 at 9:18 AM Ryan Sleevi <ry...@sleevi.com> wrote:
>
>
>
> On Fri, Jul 3, 2020 at 10:57 AM Peter Bowen <pzb...@gmail.com> wrote:
>>
>> While it may be viewed as best practice to have short lived responder
>> certificates, it must not be viewed as a hard requirement for the BRs
>> or for the Mozilla program. As you have pointed out previously, a
>> browser could make this a requirement, but I am unaware of any
>> publicly available requirement to do so.
>
>
> Thanks, and I think you're making a useful clarification here. The 'need' being talked about is the logical consequence of a secure design, and the assumptions that go with it, as well as the role the BRs play in "profiling down" RFC 5280-and-related into sensible subsets appropriate for their use case.
>
> I think we're in agreement here that nocheck is required, and that the consequences of nocheck's presence, namely:
> CAs issuing such a certificate should realize that a compromise of
> the responder's key is as serious as the compromise of a CA key
> used to sign CRLs, at least for the validity period of this
> certificate. CAs may choose to issue this type of certificate with
> a very short lifetime and renew it frequently.
>
> As BR-consuming clients, the majority of implementations I looked at don't recursively check revocation for the delegated responder. That is, rather than "nocheck" determining client behaviour (vs its absence), "nocheck" is used to reflect what the clients will do regardless. This can be understandably seen as 'non-ideal', but gets back into some of the discussion with Corey regarding profiles and client behaviours.

So we are in agreement that it is a certificate consumer bug if the
consumer fails to check revocation on certificates that do not have nocheck
set. (Yes, I know that is a massive set of negations, sorry.) The good news
is that all the certificates are in the category of "need revocation
checking".


>> > It seems we’re at an impasse for understanding the issue for
>> > internally-operated Sub CAs: this breaks all of the auditable controls and
>> > assurance frameworks, and breaks the security goals of a “correctly”
>> > configured delegated responder, as discussed in the security considerations
>> > throughout RFC 6960.
>>
>> This is where I disagree. Currently CAs can create as many delegated
>> OCSP responder certificates as they like with as many distinct keys as
>> they like. There are no public requirements from any browser on the
>> protection of OCSP responder keys, as far as I know. The few
>> requirements on revocation information are to provide accurate
>> information and provide the information within 10 seconds "under
>> normal operating conditions" (no SLO if the CA determines it is not
>> operating under normal conditions).
>
>
> I suspect this is the reopening of the discussion about "the CA organization" or "the CA certificate"; does 6.2.7 apply to all Private Keys that logically make up the CA "the organization"'s services, or is 6.2.7 only applicable to keys with CA:TRUE. Either extreme is an unsatisfying answer: do the TLS keys need to be on FIPS modules? (No, ideally not.) Does this only apply to CA keys and not to delegated responders? (No, ideally not.)

As I read the BRs today, the requirement only applies to CA keys and
not to keys for delegated responders. However, that does not matter in
this case, because all the certificates you identified are for CAs, so
we know their keys are in HSMs.

> Going back to 6960 and the requirement of pkix-nocheck, we know that such a responder certificate is 'as powerful as' the Private Key associated with the CA Certificate for which the responder is a responder for. Does the short-lived validity eliminate the need for protection?
>
> I suspect where you disagree is with respect to the auditable controls/assurance frameworks, and less with respect to the security goals captured in 6960, since we seemed to agree on those above.
>
>>
>> For the certificates you identified in the beginning of this thread,
>> we know they have a certain level of key protection - they are all
>> required to be managed using cryptographic modules that are validated
>> as meeting overall Level 3 requirements in FIPS 140. We also know
>> that these CAs are monitoring these keys as they have an obligation in
>> BR 6.2.6 to "revoke all certificates that include the Public Key
>> corresponding to the communicated Private Key" if the "Subordinate
>> CA’s Private Key has been communicated to an unauthorized person or an
>> organization not affiliated with the Subordinate CA".
>
>
> Sure, but we know that such revocation is largely symbolic in the existence of these certificates for the vast majority of clients, and so the security goal cannot be reasonably demonstrated while that Private Key still exists.
>
> Further, once this action is performed according to 6.2.6, it disappears with respect to the obligations under the existing auditing/reporting frameworks. This is a known deficiency of the BRs, which you rather comprehensively tried to address when representing a CA member of the Forum, in your discussion about object hierarchies and signing actions. A CA may have provisioned actions within 6.2.10 of their CPS, but that's not a consistent baseline that they can rely on.
>
> At odds here: how do we square the CA performing the action with the result of that action not being achieved?

As long as the key is a CA key, the obligations stand.

>> I agree with Pedro here. If the CA has control over the keys in the
>> certificates in question, then I do not see that there is a risk that
>> is greater than already exists. The CA can determine that these are
>> approved OCSP responders and easily assess whether they have controls
>> in place since the creation of the certificate that provide assurance
>> that all OCSP responses signed using the key were accurate (if any
>> such responses exist). They can also easily validate that they have
>> controls around these keys to provide assurance that any future OCSP
>> responses signed using the key will be accurate, the same that they
>> presumably do for their other OCSP responders.
>
>
> But that's not an assurance "we", the relying party, have. We don't have a way to know what process the CA used to assess what controls they have in place, and whether those controls were reliable. After all, CAs were supposed to have controls in place with respect to Delegated Responders, and this incident is, in part, because some CAs assessed those controls as compliant, but they were not. This is the breakdown of assurance I spoke to above. You yourself are familiar with the fact that 5.4.1 of the BRs doesn't actually require that CAs maintain logs of the messages they've signed via the HSM, so that's not a guarantee we can simply take at face value.
>
> Obviously, we're talking about a spectrum of responses, and I am of course interested if there are options other than what I've outlined, based on facts not yet considered.
>
> I would hope we would agree that if a CA simply revoked the certificate, and did nothing beyond that (no reissuance, no destruction), it would not provide the necessary assurance regarding that key.
> - The key would be outside the scope of much of the audited activities, which are presently oriented around "certificates"
> - For the lifetime of the certificates that were revoked, we have to worry that they, or their siblings, may "come back to haunt us"

Agreed, simply revoking doesn't solve the issue; arguably it makes it
worse than doing nothing.

> Pedro's option is to reissue a certificate for that key, which as you point out, keeps the continuity of CA controls associated with that key within the scope of the audit. I believe this is the heart of Pedro's risk analysis justification.
> - However, controls like you describe are not ones that are audited, nor consistent between CAs
> - They ultimately rely on the CA's judgement, which is precisely the thing an incident like this calls into question, and so it's understandable not to want to throw "good money after bad"

To be clear, I don't necessarily see this as a bad judgement on the
CA's part. Microsoft explicitly documented that _including_ the OCSP
EKU was REQUIRED in the CA certificate if using a delegated OCSP
responder (see https://support.microsoft.com/en-us/help/2962991/you-cannot-enroll-in-an-online-certificate-status-protocol-certificate).
Using a delegated OCSP responder can be a significant security
enhancement in some CA designs, such as when the CA key itself is
stored offline.

>> So, from my personal view, what needs to happen here is that each CA
>> needs to acknowledge each of the keys in the certificates as a valid
>> OCSP responder key, according to their internal procedures for such,
>> if they have not already done so. Then they need to revoke the
>> certificates and issue new certificates with different serial numbers
>> that do not have the OCSP EKU to correct the certificate profile
>> issue. While not required, they should also consider having controls
>> in place for the lifetime of the key (until destruction) to provide
>> assurance that any OCSP responses it signs are accurate, as we know
>> some certificate status consumers may not check the validity of the
>> response signer certificate.
>
>
> Is this from the point of view of "what meets the compliance obligations" or "what demonstrates the CA understands the risks, has sufficient safeguards, and can demonstrate them other than 'trust us'"?
>
> I readily admit, I framed the security issue within a compliance issue precisely because CAs were having trouble recognizing the security issue. I would hate to see the response predicated on treating it like a compliance issue, though. When you talk about "while not required", that is indeed the heart of what I'm talking about in terms of how do we mitigate the security risks? "It's a security risk, but we're not required to address it" is, I think, a reasonable ground to criticize the incident response and the CA's handling of it.
>
> The question of controls in place for the lifetime of the certificate is the "cost spectrum" I specifically talked about in https://www.mail-archive.com/dev-secur...@lists.mozilla.org/msg13530.html , namely "Well, can't we audit the key usage to make sure it never signs a delegated OCSP response".
>
> As I mentioned, I think the response to these incidents is going to have to vary on a CA-by-CA basis, because there isn't a one-size-fits-all category for mitigating these risk factors. Some CAs may be able to demonstrate a sufficient number of controls that mitigate these concerns in a way that some browsers will accept. But I don't think we can treat this as "just" a compliance failure, particularly when the failure means that the intended result of a significant number of obligations cannot be met because of it.

As you pointed out, I pushed for a number of improvements in WebTrust
audits when I was involved with operation of a publicly trusted CA.
One of those resulted in practitioner guidance that is relevant here.
In the guidance at
https://www.cpacanada.ca/en/business-and-accounting-resources/audit-and-assurance/overview-of-webtrust-services/practitioner-qualification-and-guidance
, you can see two relevant items:

- Reporting When Certain Criteria Not Applicable as Services Not Performed by CA
- Disclosure of Changes in Scope or Roots with no Activity

This makes it clear that auditors can report on the absence of
activity. In an attestation engagement, the auditor can also provide
an opinion on statements in the attestation that can be tested,
regardless of whether they match a specific criterion in the audit
criteria.

Given this, I believe relying parties and root programs could
determine there are sufficient controls via audit reporting. The CA
can include statements in the assertion that (as applicable) the keys
were not used to sign any OCSP responses or that the keys were not
used to sign OCSP responses for certificates issued by CAs other than
the CA identified as the subject of the CA certificate where the key is
bound to that subject.

I think (but an auditor would need to confirm) that an auditor could
be in a position to make statements about past periods based on the
controls they observed at the time, as recorded in their work papers.
For example, a CA might have controls that a given key is only used
during specific ceremonies where all the ceremonies are known to not
contain inappropriate OCSP response signing. Alternatively the
configuration of the HSM and attached systems that the auditor
validated at the time may clearly show that OCSP signing is not
possible and the auditor may have observed controls that the key is
restricted to only be used with the observed system configuration.

I agree that we cannot make blanket statements that apply to all CAs,
but these are some examples where it seems like there are alternatives
to key destruction.

Thanks,
Peter

Ryan Sleevi

Jul 3, 2020, 5:30:47 PM
to Peter Bowen, Ryan Sleevi, Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen <pzb...@gmail.com> wrote:

> >> For the certificates you identified in the beginning of this thread,
> >> we know they have a certain level of key protection - they are all
> >> required to be managed using cryptographic modules that are validated
> >> as meeting overall Level 3 requirements in FIPS 140. We also know
> >> that these CAs are monitoring these keys as they have an obligation in
> >> BR 6.2.6 to "revoke all certificates that include the Public Key
> >> corresponding to the communicated Private Key" if the "Subordinate
> >> CA’s Private Key has been communicated to an unauthorized person or an
> >> organization not affiliated with the Subordinate CA".
> >
> >
> > Sure, but we know that such revocation is largely symbolic in the
> existence of these certificates for the vast majority of clients, and so
> the security goal cannot be reasonably demonstrated while that Private Key
> still exists.
> >
> > Further, once this action is performed according to 6.2.6, it disappears
> with respect to the obligations under the existing auditing/reporting
> frameworks. This is a known deficiency of the BRs, which you rather
> comprehensively tried to address when representing a CA member of the
> Forum, in your discussion about object hierarchies and signing actions. A
> CA may have provisioned actions within 6.2.10 of their CPS, but that's not
> a consistent baseline that they can rely on.
> >
> > At odds here is how to square with the CA performing the action, but not
> achieving the result of that action?
>
> As long as the key is a CA key, the obligations stand.
>

Right, but we're in agreement that these obligations don't negate the risk
posed - e.g. of inappropriate signing - right? And those obligations
ostensibly end once the CA has revoked the certificate, since the status
quo today doesn't require key destruction ceremonies for CAs. I think that
latter point is something discussed in the CA/Browser Forum with respect to
"Audit Lifecycle", but even then, that'd be difficult to work through for
CA keys that were compromised.

Put differently: If the private key for such a responder is compromised, we
lose the ability to be assured it hasn't been used for shenanigans. As
such, either we accept the pain now, and get assurance it /can't/ be used
for shenanigans, or we have hope that it /won't/ be used for shenanigans,
but are in an even worse position if it is, because this would necessitate
revoking the root.
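
To make the "shenanigans" concrete: whoever holds a delegated responder's
private key can sign an apparently valid "good" OCSP response for any serial
number under the issuing CA, including certificates the CA has revoked. A
hedged sketch with the third-party `cryptography` package (all keys, names,
and certificates here are toy constructions, not real CA material):

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509 import ocsp
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def make_cert(subject, issuer_name, issuer_key, key, eku=None):
    """Helper: issue a simple certificate (illustrative only)."""
    now = datetime.now(timezone.utc)
    builder = (
        x509.CertificateBuilder()
        .subject_name(subject)
        .issuer_name(issuer_name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=365))
    )
    if eku:
        builder = builder.add_extension(x509.ExtendedKeyUsage(eku), critical=False)
    return builder.sign(issuer_key, hashes.SHA256())


ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Toy Issuing CA")])
ca_cert = make_cert(ca_name, ca_name, ca_key, ca_key)

# The mis-scoped responder: its key holder can vouch for ANY serial under the CA.
resp_key = ec.generate_private_key(ec.SECP256R1())
resp_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Delegated Responder")])
resp_cert = make_cert(
    resp_name, ca_name, ca_key, resp_key, eku=[ExtendedKeyUsageOID.OCSP_SIGNING]
)

# A subscriber certificate the CA has (hypothetically) revoked.
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "revoked.example")])
leaf_cert = make_cert(leaf_name, ca_name, ca_key, leaf_key)

# Whoever holds resp_key can still sign a "good" answer for the revoked leaf.
now = datetime.now(timezone.utc)
response = (
    ocsp.OCSPResponseBuilder()
    .add_response(
        cert=leaf_cert,
        issuer=ca_cert,
        algorithm=hashes.SHA1(),
        cert_status=ocsp.OCSPCertStatus.GOOD,
        this_update=now,
        next_update=now + timedelta(days=7),
        revocation_time=None,
        revocation_reason=None,
    )
    .responder_id(ocsp.OCSPResponderEncoding.NAME, resp_cert)
    .certificates([resp_cert])
    .sign(resp_key, hashes.SHA256())
)
print(response.certificate_status)  # OCSPCertStatus.GOOD
```

A client that honors this response without checking the responder
certificate's own status would accept the revoked leaf, which is why
revoking the responder certificate alone is largely symbolic while the key
survives.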

It is, at the core, a gamble on the order of "Pay me now vs pay me later".
If we address the issue now, it's painful up-front, but consistent with the
existing requirements and mitigates the potential-energy of a security
mistake. If we punt on the issue, we're simply storing up more potential
energy for a more painful revocation later.

I recognize that there is a spectrum here on options. In an abstract sense,
the calculus of "the key was destroyed a month before it would have been
compromised" is the same as "the key was destroyed a minute before it would
have been compromised" - the risk was dodged. But "the key was destroyed a
second after it was compromised" is doom.


> > Pedro's option is to reissue a certificate for that key, which as you
> point out, keeps the continuity of CA controls associated with that key
> within the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.
> > - However, controls like you describe are not ones that are audited,
> nor consistent between CAs
> > - They ultimately rely on the CA's judgement, which is precisely the
> thing an incident like this calls into question, and so it's understandable
> not to want to throw "good money after bad"
>
> To be clear, I don't necessarily see this as a bad judgement on the
> CA's part. Microsoft explicitly documented that _including_ the OCSP
> EKU was REQUIRED in the CA certificate if using a delegated OCSP
> responder (see
> https://support.microsoft.com/en-us/help/2962991/you-cannot-enroll-in-an-online-certificate-status-protocol-certificate
> ).
> Using a delegated OCSP responder can be a significant security
> enhancement in some CA designs, such as when the CA key itself is
> stored offline.
>

Oh, I agree on the value of delegated responders, for precisely that
reason. I think the bad judgement is not trying to find an alternative
solution. Some CAs did, and I think that highlights strength. Other CAs, no
doubt, simply said "Sorry, we can't do it" or "You need to run a different
platform".

I'm not trying to suggest bad /intentions/, but I am trying to say that
it's bad /judgement/, no different than the IP address discussions had in
the CA/B Forum or internal server names. The intentions were admirable, the
execution was inappropriate.

Right, and you know I'm familiar with these options :) I'm thoroughly glad
WebTrust has been responsive to the needs of the ecosystem.

That said, I think it's reasonable to question exactly what controls are
used, because different auditors' judgement as to the level of
assurance needed to report on those criteria is going to differ, just like
we see different CAs' responses differ.

Let's assume an "ideal" CA behaviour, which would be to ensure that they
use the Detailed Controls Report to include the controls tested as part of
forming the basis of the opinion. Could this provide a level of assurance
appropriate? Very possibly! But ingesting and examining that is simply a
cost that is passed on, from the CA, onto all the Relying Parties, for
which Browsers generally do the bulk of representation of those interests.
Using your example of including these statements means that for the
lifetime of the CA associated with that key, browsers would have to be
fastidious in their inspection of reports.

And what happens if things go wrong? Well, we're back to revoking the root.
All we've done is incur greater cost / "trust debt" along the way.


> Given this, I believe relying parties and root programs could
> determine there are sufficient controls via audit reporting. The CA
> can include statements in the assertion that (as applicable) the keys
> were not used to sign any OCSP responses or that the keys were not
> used to sign OCSP responses for certificates issued by CAs other than
> the CA identified as the subject of the CA certificate where the key is
> bound to that subject.
>
> I think (but an auditor would need to confirm) that an auditor could
> be in a position to make statements about past periods based on the
> controls they observed at the time, as recorded in their work papers.
> For example, a CA might have controls that a given key is only used
> during specific ceremonies where all the ceremonies are known to not
> contain inappropriate OCSP response signing. Alternatively the
> configuration of the HSM and attached systems that the auditor
> validated at the time may clearly show that OCSP signing is not
> possible and the auditor may have observed controls that the key is
> restricted to only be used with the observed system configuration.
>

CAs have proposed this before, but this again falls into an area of whether
or not it provides a suitable degree of assurance. This is, for example,
part of why new CAs coming in to Mozilla's program need to have keys
subjected to the BRs from their creation; because the level of assurance
with respect to these "pre-verification" events is questionable.


> I agree that we cannot make blanket statements that apply to all CAs,
> but these are some examples where it seems like there are alternatives
> to key destruction.
>

Right, and I want to acknowledge that there are some potentially viable
paths specific to WebTrust (I have no such faith with respect to ETSI,
precisely because of the nature and design of ETSI audits) that, in an
ideal world, could provide the assurance desired. However, the effect of
that assurance is to shift the cost of supervision and maintenance onto the
Browser, and I think that's a reasonable cause for concern. At best, it
would only be a temporary solution, and the cost would only be justified if
there were a clear set of actionable systemic improvements attached to it
that demonstrated why this situation was exceptional, despite being a BR
violation.

This is the cost calculus for CAs to demonstrate in their incident report,
but I think the baseline expectation should be set that it's asking for
significant cost to be taken on by consumers/browsers, as a way of
offsetting the cost to any of the Subscribers of that CA. In some cases, it
may be a justified tradeoff, particularly for CAs that have been exemplary
models of best practices and/or systemic improvements. In other cases, I
think it's asking too much to be taken on faith, and too much credit to be
extended, in what is fundamentally an "unjust" model in which a CA commits
to something, but then when asked to actually deliver on that commitment,
reneges on it.

Pedro Fuentes

Jul 4, 2020, 6:22:04 AM
to mozilla-dev-s...@lists.mozilla.org
El viernes, 3 de julio de 2020, 18:18:49 (UTC+2), Ryan Sleevi escribió:
> Pedro's option is to reissue a certificate for that key, which as you point
> out, keeps the continuity of CA controls associated with that key within
> the scope of the audit. I believe this is the heart of Pedro's risk
> analysis justification.

I didn't want to participate here for now and just learn from others' opinions, but as my name has been invoked, I'd like to make a clarification.

My proposal was not JUST to reissue the certificate with the same key. My proposal was to reissue the certificate with the same key AND a short lifetime (3 months) AND do a proper key destruction after that period.

As I said, this:
- Removes the offending EKU
- Makes the certificate short-lived, for its consideration as delegated responder
- Ensures that the keys are destroyed for peace of mind of the community

And all that was, of course, pondering the security risk based on the fact that the operator of the key is also operating the keys of the Root and is also rightfully operating the OCSP services for the Root.

I don't want to start another discussion, but I just feel it necessary to make this clarification, in case my previous message was unclear.

Best.

Ryan Sleevi

Jul 4, 2020, 10:03:44 AM
to Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
On Sat, Jul 4, 2020 at 6:22 AM Pedro Fuentes via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> El viernes, 3 de julio de 2020, 18:18:49 (UTC+2), Ryan Sleevi escribió:
> > Pedro's option is to reissue a certificate for that key, which as you
> point
> > out, keeps the continuity of CA controls associated with that key within
> > the scope of the audit. I believe this is the heart of Pedro's risk
> > analysis justification.
>
> I didn't want to participate here for now and just learn from others'
> opinions, but as my name has been invoked, I'd like to make a clarification.
>
> My proposal was not JUST to reissue the certificate with the same key. My
> proposal was to reissue the certificate with the same key AND a short
> lifetime (3 months) AND do a proper key destruction after that period.
>
> As I said, this:
> - Removes the offending EKU
> - Makes the certificate short-lived, for its consideration as delegated
> responder
> - Ensures that the keys are destroyed for peace of mind of the community
>
> And all that was, of course, pondering the security risk based on the fact
> that the operator of the key is also operating the keys of the Root and is
> also rightfully operating the OCSP services for the Root.
>
> I don't want to start another discussion, but I just feel it necessary to
> make this clarification, in case my previous message was unclear.


Thanks! I really appreciate you clarifying, as I had actually missed that
you proposed key destruction at the end of this. I agree, this is a
meaningfully different proposal that tries to balance the risks of
compliance while committing to a clear transition date.

>

Pedro Fuentes

Jul 4, 2020, 10:22:06 AM
to mozilla-dev-s...@lists.mozilla.org
Thanks, Ryan.
I’m happy we are now in agreement in this respect.

Then I’d adjust the plan that is currently in progress. We should have the new CAs hopefully today. Then, perhaps also today, I would do the reissuance of the bad ones, and I’ll revoke the offending certificates during the period.

Best.

Ryan Sleevi

Jul 4, 2020, 11:10:51 AM
to Pedro Fuentes, mozilla-dev-s...@lists.mozilla.org
Pedro: I said I understood you, and I thought we were discussing in the
abstract.

I encourage you to reread this thread to understand why such a response
varies on a case by case basis. I can understand your *attempt* to balance
things, but I don’t think it would be at all appropriate to treat your
email as your incident response.

You still need to holistically address the concerns I raised. As I
mentioned in the bug: either this is a safe space to discuss possible
options, which will vary on a CA-by-CA basis based on a holistic set of
mitigations, or this was having to repeatedly explain to a CA why they were
failing to recognize a security issue.

I want to believe it’s the former, and I would encourage you, that before
you decide to delay revocation, you think very carefully. Have you met the
Mozilla policy obligations on a delay to revocation? Perhaps it’s worth
re-reading those expectations, before you make a decision that will also
fail to uphold community expectations.

Pedro Fuentes

Jul 4, 2020, 11:27:54 AM
to mozilla-dev-s...@lists.mozilla.org
Ryan,
I'm moving our particular discussions to Bugzilla.

I just want to clarify, again, that I'm not proposing to delay the revocation of the offending CA certificate; what I'm proposing is to give more time for the key destruction. Our position right now is that the certificate would be revoked in any case within the 7-day period.

Thanks,
Pedro

mark.a...@gmail.com

Jul 4, 2020, 12:52:20 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, July 3, 2020 at 5:30:47 PM UTC-4, Ryan Sleevi wrote:
> On Fri, Jul 3, 2020 at 4:19 PM Peter Bowen wrote:
>

I feel compelled to respond here for the first time even though I have never participated in a CA/B Forum proceeding and have never read through a single one of the 55 versions of the BRs that have been published over the last 8 years.

I was informed yesterday that I would have to replace just over 300 certificates in 5 days because my CA is required by rules from the CA/B forum to revoke its subCA certificate.

This is insane!
Those 300 certificates are used to secure healthcare information systems at a time when the global healthcare system is strained by a global pandemic. I have to coordinate with more than 30 people to make this happen. This includes three subsidiaries and three contract partner organizations as well as dozens of managers and systems engineers. One of my contract partners follows the guidance of an HL7 specification that requires them to do certificate pinning. When we replace these certificates we must give them 30 days lead time to make the change.

After wading through this very long chain of messages I see little discussion of the impact this will have on end users. Ryan Sleevi, in the name of Google, is purporting to speak for the end users, but it is obvious that Ryan does not understand the implication of applying these rules.

Peter Bowen says
> ... simply revoking doesn't solve the issue; arguably it makes it
> worse than doing nothing.

You are absolutely right, Peter. Doctors will not be able to communicate with each other effectively and people could die if the CA/B forum continues to blindly follow its rules without consideration for the greater impact this will have on the security of the internet.

In the CIA triad Availability is as important as Confidentiality. Has anyone done a threat model and a serious risk analysis to determine what a reasonable risk mitigation strategy is?

Ryan Sleevi

Jul 4, 2020, 2:06:53 PM
to mark.a...@gmail.com, mozilla-dev-security-policy
On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> This is insane!
> Those 300 certificates are used to secure healthcare information systems
> at a time when the global healthcare system is strained by a global
> pandemic. I have to coordinate with more than 30 people to make this
> happen. This includes three subsidiaries and three contract partner
> organizations as well as dozens of managers and systems engineers. One of
> my contract partners follows the guidance of an HL7 specification that
> requires them to do certificate pinning. When we replace these
> certificates we must give them 30 days lead time to make the change.
>

As part of this, you should re-evaluate certificate pinning. As one of the
authors of that specification, and indeed, my co-authors on the
specification agree, certificate pinning does more harm than good, for
precisely this reason.

Ultimately, the CA is responsible for following the rules, as covered in
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation . If
they're not going to revoke, such as for the situation you describe,
they're required to treat this as an incident and establish a remediation
plan to ensure it does not happen again. In this case, a remediation plan
certainly involves no longer certificate pinning (it is not safe to do),
and also involves implementing controls so that it doesn't require 30
people, across three subsidiaries, to replace "only" 300 certificates. The
Baseline Requirements require those certificates to be revoked in as short
as 24 hours, and so you need to design your systems robustly to meet that.
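Designing for that deadline is, in practice, an inventory problem: knowing which deployed certificates chain to an affected issuer and when each must be replaced. A hedged sketch, with hypothetical host and issuer names (not a real API):

```python
from datetime import datetime, timedelta

# Assumption: the affected intermediates are known by distinguished name.
AFFECTED_ISSUERS = {"CN=Example Issuing CA 2"}

def replacement_plan(inventory, disclosure_time, window_days=7):
    """Return (host, deadline) pairs for certificates chaining to an
    affected issuer, using the revocation window from disclosure."""
    deadline = disclosure_time + timedelta(days=window_days)
    return [(host, deadline)
            for host, issuer in inventory
            if issuer in AFFECTED_ISSUERS]

plan = replacement_plan(
    [("ehr.example.net", "CN=Example Issuing CA 2"),
     ("www.example.org", "CN=Unaffected CA")],
    datetime(2020, 7, 1, 21, 6),
)
# plan → [("ehr.example.net", datetime(2020, 7, 8, 21, 6))]
```

With such an inventory in place, "300 certificates in 5 days" becomes a scripted rollout rather than a 30-person coordination exercise.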

There are proposals to the Baseline Requirements which would ensure this is
part of the legal agreement you make with the CA, to make sure you
understand these risks and expectations. It's already implicitly part of
the agreement you made, and you're expected to understand the legal
agreements you enter into. It's unfortunate that this is the first time
you're hearing about them, because the CA is responsible for making sure
their Subscribers know about this.


> After wading through this very long chain of messages I see little
> discussion of the impact this will have on end users. Ryan Sleevi, in the
> name of Google, is purporting to speak for the end users, but it is obvious
> that Ryan does not understand the implication of applying these rules.
>

I realize you're new here, and so I encourage you to read
https://wiki.mozilla.org/CA/Policy_Participants for context about the
nature of participation.

I'm very familiar with the implications of applying these rules, both
personally and professionally. This is why policies such as
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
This is where such information is shared, gathered, and considered, as
provided by the CA. It is up to the CA to demonstrate the balance of
equities, but also to ensure that going forward, they actually adhere to
the rules they agreed to as a condition of trust. Simply throwing out
agreements and contractual obligations when it's inconvenient,
*especially *when
these were scenarios contemplated when they were written and CAs
acknowledged they would take steps to ensure they're followed, isn't a
fair, equitable, or secure system.

This is the unfortunate nature of PKI: as a system, the cost of revocation
is often not properly accounted for when CAs or Subscribers are designing
their systems, and so folks engage in behaviours that increase risk, such
as lacking automation or certificate pinning. For lack of a better analogy,
it's like a contract that was agreed, a service rendered, and then refusing
to pay the invoice because it turns out, it's actually more than you can
pay. We wouldn't accept that within businesses, so why should we accept it
here? CAs warrant to the community that they understand the risks and have
designed their systems, as they feel appropriate, to account for that. That
some have failed to do is unfortunate, but that highlights poor design by
the CA, not the one sending the metaphorical invoice for what was agreed to.

Just like with invoices that someone can't pay, sometimes it makes sense to
work on payment plans, collaboratively. But now that means the person who
was expecting the money similarly may be short, and that can quickly
cascade into deep instability, so has to be done with caution. That's what
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation is about

However, if someone is regularly negligent in paying their bills, and have
to continue to work on payment agreements, eventually, you'll stop doing
business with them, because you realize that they are a risk. That's the
same as when we talk about distrust.

Peter Bowen says
> > ... simply revoking doesn't solve the issue; arguably it makes it
> > worse than doing nothing.
>
> You are absolutely right, Peter. Doctors will not be able to communicate
> with each other effectively and people could die if the CA/B forum
> continues to blindly follow its rules without consideration for the greater
> impact this will have on the security of the internet.


To be clear; "the issue" we're talking about is only truly 'solved' by the
rotation and key destruction. Anything else, besides that, is just a risk
calculation, and the CA is responsible for balancing that. Peter's
highlighting how the fix for the *compliance* issue doesn't fix the
*security* issue, as other CAs, like DigiCert, have also noted.

The process, as described at
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation ,
understands these tradeoffs and acknowledges that CAs, by and large, have
been derelict in their duties to balance them. It's easy to blame folks
holding CAs accountable to their commitments and contractual obligations,
but that doesn't make a better system. CAs that are faced in these
difficult positions are supposed to come up with plans to correct it, but
the system has to be moving to a place where this no longer happens. This
can be by the eventual distrusting of a CA that continues to have incidents
and fails to design for them, or by the distrusting of a CA that continues
to disregard their commitments. It's possible for a single incident to be
so severe it leads to distrust, but in general, these incidents are viewed
as collaborations where the CA comes up with a plan to systematically
identify the issue and fix it.

Peter Bowen

Jul 4, 2020, 3:01:34 PM
to Ryan Sleevi, mark.a...@gmail.com, mozilla-dev-security-policy
One of the things that can be very non-obvious to many people is that
"incident" as Ryan describes it is not a binary thing. When Ryan says
"treat this as an incident" it is not necessarily the same kind of
incident system where there is a goal to have zero incidents forever.
In some environments the culture is that any incident is a career
limiting event or has financial impacts - for example, a factory might
pay out bonuses to employees for every month in which zero incidents
are reported. This does not align with what Ryan speaks about.
Instead, based on my experience working with Ryan, incidents are the
trigger for blameless postmortems which are used to teach. Google
documented this in their SRE book
(https://landing.google.com/sre/sre-book/chapters/postmortem-culture/
) and AWS includes this as part of their well-architected framework
(https://wa.aws.amazon.com/wat.concept.coe.en.html ).

One of the challenges is that not everyone in the WebPKI ecosystem has
aligned around the same view of incidents as learning opportunities.
This makes it very challenging for CAs to find a path that suits all
participants and frequently results in hesitancy to use the blameless
post-mortem version of incidents.

> > After wading through this very long chain of messages I see little
> > discussion of the impact this will have on end users. Ryan Sleevi, in the
> > name of Google, is purporting to speak for the end users, but it is obvious
> > that Ryan does not understand the implication of applying these rules.
> >
>
> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

To clarify what Ryan is saying here: he is pointing out that he is not
representing the position of Google or Alphabet, rather he is stating
he is acting as an independent party.

As you can see from earlier messages, Mozilla has clearly stated that
they are NOT requiring revocation in 7 days in this case, as they
judge the risk from revocation greater than the risks from not
revoking on that same timeframe. Ben Wilson, who does represent
Mozilla, stated:

"Mozilla does not need the certificates that incorrectly have the
id-kp-OCSPSigning EKU to be revoked within the next 7 days, as per
section 4.9.1.2 of the BRs. We want to work with CAs to identify a
path forward, which includes determining a reasonable timeline and
approach to replacing the certificates that incorrectly have the
id-kp-OCSPSigning EKU (and performing key destruction for them)."

The reason this discussion is ongoing is that Ryan does work for
Google and it is widely understood that: 1) certificates that are not
trusted by the Google Chrome browser in its default configuration
(e.g. install on a home version of Windows with no further
configuration) or not trusted on widely used Android devices by
default are not commercially viable as they do not meet the needs of
many organizations and individuals who request certificates and 2)
Ryan appears to be highly influential in Chrome and Android decision
making about what certificates to trust.

If Google were to officially state something similar to Mozilla, then
this thread would likely resolve itself quickly. Yes, there are other
trust stores to deal with, but they have historically not engaged in
this Mozilla forum, so discussion here is not helpful for them.

> This is the unfortunate nature of PKI: as a system, the cost of revocation
> is often not properly accounted for when CAs or Subscribers are designing
> their systems, and so folks engage in behaviours that increase risk, such
> as lacking automation or certificate pinning.

This is the nature of PKIs that are used with browsers today. As you,
Ryan, have frequently stated, one of the big challenges is when a PKI
is used for multiple purposes. It seems that what Mark is pointing
out is that HL7 has a set of contradictory requirements to those of
Chrome. In some environments, it would be completely reasonable to
have a certificate policy that certificates must NOT be revoked with
less than 180 days notice (or 30 days or similar). In these
environments availability is far more critical than confidentiality.
I've seen environments that would strongly prefer to use
TLS_ECDHE_NONE_WITH_AES_256_GCM if such a thing existed, meaning they
would simply encrypt data without any authentication of the remote
party. I've also seen environments that would prefer a scheme whereby
certificates never expire and are only invalidated if the relying
party records another certificate for the same subject with a newer
issued date. This makes sense for them, given other controls in place.

For the future, HL7 probably would be well served by working to create
a separate PKI that meets their needs. This would enable a different
risk calculation to be used - one that is specific to the HL7 health
data interoperability world. I don't know if you or your organization
has influence in HL7, but it is something worth pushing if you can.

Thanks,
Peter

Mark Arnott

Jul 4, 2020, 5:32:12 PM
to mozilla-dev-s...@lists.mozilla.org
On Saturday, July 4, 2020 at 2:06:53 PM UTC-4, Ryan Sleevi wrote:
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>
> As part of this, you should re-evaluate certificate pinning. As one of the
> authors of that specification, and indeed, my co-authors on the
> specification agree, certificate pinning does more harm than good, for
> precisely this reason.
>
I agree that certificate pinning is a bad practice, but it is not a decision that I made or that I can reverse quickly. It will take time to convince several different actors that this needs to change.

> I realize you're new here, and so I encourage you to read
> https://wiki.mozilla.org/CA/Policy_Participants for context about the
> nature of participation.

Thank you for helping me understand who the participants in this discussion are and what roles they fill.

> I'm very familiar with the implications of applying these rules, both
> personally and professionally. This is why policies such as
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation exist.
> This is where such information is shared, gathered, and considered, as
> provided by the CA. It is up to the CA to demonstrate the balance of
> equities, but also to ensure that going forward, they actually adhere to
> the rules they agreed to as a condition of trust. Simply throwing out
> agreements and contractual obligations when it's inconvenient,
> *especially *when
> these were scenarios contemplated when they were written and CAs
> acknowledged they would take steps to ensure they're followed, isn't a
> fair, equitable, or secure system.

I think that the lack of fairness comes from the fact that the CA/B Forum only represents the viewpoints of two interests - the CAs and the browser vendors. Who represents the interests of industries and end users? Nobody.


Mark Arnott

Jul 4, 2020, 5:32:13 PM
to mozilla-dev-s...@lists.mozilla.org
On Saturday, July 4, 2020 at 3:01:34 PM UTC-4, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 11:06 AM Ryan Sleevi via dev-security-policy
> <dev-secur...@lists.mozilla.org> wrote:

> One of the challenges is that not everyone in the WebPKI ecosystem has
> aligned around the same view of incidents as learning opportunities.
> This makes it very challenging for CAs to find a path that suits all
> participants and frequently results in hesitancy to use the blameless
> post-mortem version of incidents.
>
Why aren't we hearing more from the 14 CAs that this affects? Correct me if I am wrong, but the CA/B Forum has something like 23 members. An issue that affects 14 CAs indicates a problem with the way the forum collaborates (or should I say 'fails to work together'). Maybe this incident should have followed a responsible disclosure process and not been fully disclosed right before holidays in several nations.

> To clarify what Ryan is saying here: he is pointing out that he is not
> representing the position of Google or Alphabet, rather he is stating
> he is acting as an independent party.

> As you can see from earlier messages, Mozilla has clearly stated that
> they are NOT requiring revocation in 7 days in this case, as they
> judge the risk from revocation greater than the risks from not
> revoking on that same timeframe. Ben Wilson, who does represent
> Mozilla, stated:

> If Google were to officially state something similar to Mozilla, then
> this thread would likely resolve itself quickly. Yes, there are other
> trust stores to deal with, but they have historically not engaged in
> this Mozilla forum, so discussion here is not helpful for them.

Thank you for explaining that. We need to hear the official position from Google. Ryan Hurst, are you out there?

> For the future, HL7 probably would be well served by working to create
> a separate PKI that meets their needs. This would enable a different
> risk calculation to be used - one that is specific to the HL7 health
> data interoperability world. I don't know if you or your organization
> has influence in HL7, but it is something worth pushing if you can.

This has been discussed in the past and abandoned, but this incident will probably restart that discussion.

Buschart, Rufus

Jul 4, 2020, 6:15:55 PM
to mozilla-dev-security-policy, ry...@sleevi.com, mark.a...@gmail.com
Dear Mark!

> -----Original Message-----
> From: dev-security-policy <dev-security-...@lists.mozilla.org> On Behalf Of Ryan Sleevi via dev-security-policy
> Sent: Samstag, 4. Juli 2020 20:06
>
> On Sat, Jul 4, 2020 at 12:52 PM mark.arnott1--- via dev-security-policy < dev-secur...@lists.mozilla.org> wrote:
>
> > This is insane!
> > Those 300 certificates are used to secure healthcare information
> > systems at a time when the global healthcare system is strained by a
> > global pandemic.

Thank you for bringing in your perspective as a certificate consumer. We at Siemens - as a certificate consumer - also have ~ 700 k affected personal S/MIME certificates out in the field, all of them stored on smart cards (+ code signing and TLS certificates ...). You can imagine, that rekeying them on short notice would be a total nightmare.

> To be clear; "the issue" we're talking about is only truly 'solved' by the rotation and key destruction. Anything else, besides that, is just
> a risk calculation, and the CA is responsible for balancing that. Peter's highlighting how the fix for the *compliance* issued doesn't fix
> the *security* issue, as other CAs, like DigiCert, have also noted.

Currently, I'm not convinced, that the underlying security issue (whose implication I of course fully understand and don't want to downplay) can only be fixed by revoking the issuing CAs and destructing the old keys. Sadly, all the brilliant minds on this mailing list are discussing compliance issues and the interpretation of RFCs, BRGs and 15-year-old Microsoft announcements, but it seems nobody is trying to find (or at least publicly discuss) a solution that can solve the security issue, is BRG / RFC compliant and doesn't require the replacement of millions of certificates - especially since many of those millions of certificates are not even TLS certificates and their consumers never expected the hard revocation deadlines of the BRGs to be of any relevance for them. And therefore they didn't design their infrastructure to be able to do an automated mass-certificate exchange.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.b...@siemens.com

Ryan Sleevi

Jul 4, 2020, 6:43:22 PM
to Mark Arnott, mozilla-dev-s...@lists.mozilla.org
On Sat, Jul 4, 2020 at 5:32 PM Mark Arnott via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Why aren't we hearing more from the 14 CAs that this affects. Correct me
> if I am wrong, but the CA/B form has something like 23 members?? An issue
> that affects 14 CAs indicates a problem with the way the forum collaborates
> (or should I say 'fails to work together') Maybe this incident should have
> followed a responsible disclosure process and not been fully disclosed
> right before holidays in several nations.


This was something disclosed 6 months ago and 6 years ago. This is not
something “new”. The disclosure here is precisely because CAs failed, when
engaged privately, to understand both the compliance failure and the
security risk.

Unfortunately, debates about “responsible” disclosure have existed for as
long as computer security has been an area of focus, and itself was a term
that was introduced as way of having the language favor the vendor, not the
reporter. We have a security risk introduced by a compliance failure, which
has been publicly known for some time, and which some CAs have dismissed as
not an issue. Transparency is an essential part of bringing attention and
understanding. This is, in effect, a “20-year day”. It’s not some new
surprise.

Even if disclosed privately, the CAs would still be under the same 7 day
timeline. The mere act of disclosure triggers this obligation, whether
private or public. That’s what the BRs obligate CAs to do.


> Thank you for explaining that. We need to hear the official position from
> Google. Ryan Hurst are you out there?


Just to be clear: Ryan Hurst does not represent Google/Chrome’s decisions
on certificates. He represents the CA, which is accountable to
Google/Chrome just as it is to Mozilla/Firefox or Apple/Safari.

In the past, and when speaking on behalf of Google/Chrome, it’s been
repeatedly emphasized: Google/Chrome does not grant exceptions to the
Baseline Requirements. In no uncertain terms, Google/Chrome does not give
CAs blank checks to ignore violations of the Baseline Requirements.

Ben’s message, while seeming somewhat self-contradictory in messaging,
similarly reflects Mozilla’s long-standing position that it does not grant
exceptions to the BRs. They treat violations as incidents, as Ben’s message
emphasized, including the failure to revoke, and as Peter highlighted, both
Google and Mozilla work through a public post-mortem process that seeks to
understand the facts and nature of the CA’s violations and how the
underlying systemic issues are being addressed. If a CA demonstrates poor
judgement in handling these incidents, they may be distrusted, as some have
in the past. However, CAs that demonstrate good judgement and demonstrate
clear plans for improvement are given the opportunity to do so.
Unfortunately, because some CAs think that the exact same plan should work
for everyone, it’s necessary to repeatedly make it clear that there are no
exceptions, and that each situation is case-by-case.

This is not a Google/Mozilla issue either: as Mozilla reminds CAs at
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , delayed
revocation issues affect everyone, and CAs need to directly communicate
with all of the root programs that they have made representations to.
WISeKey knows how to do this, but they also know what the expectation and
response will be, which is aligned with the above.

Some CAs have had a string of failures, and around matters like this, and
thus know that they’re at risk of being seen as CAs that don’t take
security seriously, which may lead to distrust. Other CAs recognize that
security, while painful, is also a competitive advantage, and so look to be
leaders in an industry of followers and do the right thing, especially when
this leadership can help ensure greater flexibility if/when they do have an
incident. Other CAs may be in uniquely difficult positions where they see
greater harm resulting, due to specific decisions made in the past that
were not properly thought through: but the burden falls to them to
demonstrate that uniqueness, that burden, and both what steps the CA is
taking to mitigate that risk **and the cost to the ecosystem** and what
steps they’re taking to prevent that going forward. Each CA is different
here, which is why blanket statements aren’t a one-size fits all solution.

I’m fully aware there are some CAs who are simply not prepared to rotate
intermediates within a week, despite them promising they were capable of
doing so. Those CAs need to have a plan to establish that capability, they
need to truly make sure this is exceptional and not just a continuance of a
pattern of problematic behavior, and they need to be transparent about all
of this. That’s consistent with all of my messages to date, and consistent
with Ben’s message regarding Mozilla’s expectations. They are different
ways of saying the same thing: you can’t sweep this under the rug, you need
to have a plan, and you need to understand the risks.

To use the payment analogy: No one is going to say “If you can’t pay, don’t
worry about it”, especially not when that will be used as precedent for
future actions. Like, I might entirely be inclined to say that if we were
sharing lunch and you forgot your wallet, but that doesn’t mean you can
then not pay back the $100,000 you were invoiced for simply because I
covered your lunch. Yet that’s exactly the perspective we see from some
CAs, and that’s why I emphasize the incident so strongly.

Nor is a business going to tell everyone that has outstanding invoices,
"Everyone can go on a payment plan on the exact same per-month terms until
they can pay back the full amount"; for some, their outstanding debt is
already substantial, and for others, the terms are going to vary based on
how much is owed. These blanket one-size-fits-all statements don't happen.
However, as spelled out in
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation , if a CA
is going to ask to go on a payment plan, they have obligations to meet and
need to carefully think about the terms they’re proposing. Paying back $10
over 20 years is not reasonable, just like paying back $100,000 may not be:
it’s situational, depends on the facts, and that’s what these incident bugs
are meant to gather and track.

> For the future, HL7 probably would be well served by working to create
> > a separate PKI that meets their needs. This would enable a different
> > risk calculation to be used - one that is specific to the HL7 health
> > data interoperability world. I don't know if you or your organization
> > has influence in HL7, but it is something worth pushing if you can.
>
> This has been discussed in the past and abandoned, but this incident will
> probably restart that discussion.


This is encouraging. This is exactly the sort of things we look for from
CAs when responding, as per
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation - how are
they transitioning participants who pose risk off, and how are they
mitigating that risk while they do so. Some CAs have addressed this via
contracts: making it unambiguous and clear they will revoke as needed, so
Subscribers know up front. Some have addressed this through certificate
profiles: e.g. forcing certificates to be shorter lived so as to reduce the
pain of these rotations by encouraging or even necessitating automation.
And many CAs have, in the past, worked to establish alternative transition
plans for their Subscribers that move on to alternative PKIs that can
provide the stability and certainty they need. These are all good things,
and they’re all things we look for from CAs as part of understanding
https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation
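The renewal-window rule underlying such automation is simple: renew once some fraction of the certificate's lifetime remains, rather than at a fixed calendar date. A hedged Python sketch (the one-third fraction is a common client default, not a BR value):

```python
from datetime import datetime, timedelta

def should_renew(not_before, not_after, now, remaining_fraction=1/3):
    """Renew once less than `remaining_fraction` of the lifetime remains."""
    lifetime = not_after - not_before
    return (not_after - now) < lifetime * remaining_fraction

# A 90-day certificate enters its renewal window for its last 30 days.
nb = datetime(2020, 7, 1)
na = nb + timedelta(days=90)
early = should_renew(nb, na, nb + timedelta(days=30))  # False: 60 days left
late = should_renew(nb, na, nb + timedelta(days=70))   # True: 20 days left
```

Run on a schedule, this keeps rotation routine, so an incident-driven replacement becomes a parameter change rather than an emergency.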

Eric Mill

Jul 4, 2020, 6:55:57 PM
to Buschart, Rufus, mozilla-dev-security-policy, ry...@sleevi.com, mark.a...@gmail.com
On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> ...especially since many of those millions of certificates are not even
> TLS certificates and their consumers never expected the hard revocation
> deadlines of the BRGs to be of any relevance for them. And therefore they
> didn't design their infrastructure to be able to do an automated
> mass-certificate exchange.
>

This is a clear, straightforward statement of perhaps the single biggest
core issue that limits the agility and security of the Web PKI: certificate
customers (particularly large enterprises) don't seem to actually expect
they may have to revoke many certificates on short notice, despite it being
extremely realistic that they may need to do so. We're many years into the
Web PKI now, and there have been multiple mass-revocation events along the
way. At some point, these expectations have to change and result in
redesigns that match them.

As Ryan [Sleevi] said, neither Mozilla nor Google employ some binary
unthinking process where either all the certs are revoked or all the CAs
who don't do it are instantly cut loose. If a CA makes a decision to not
revoke, citing systemic barriers to meeting the security needs of the
WebPKI that end users rely on, their incident reports are expected to
describe how the CA will work towards systemic solutions to those barriers
- to project a persuasive vision of why these sorts of events will not
result in a painful crucible going forward.

It's extremely convenient and cost-effective for organizations to rely on
the WebPKI for non-public-web needs, and given that the WebPKI is still
(relatively) more agile than a lot of private PKIs, there will likely
continue to be security advantages for organizations that do so. But if the
security and agility needs of the WebPKI don't align with an organization's
needs, using an alternate PKI is a reasonable solution that reduces
friction on both sides of the equation.

--
Eric Mill
617-314-0966 | konklone.com | @konklone <https://twitter.com/konklone>

Matthew Hardeman

unread,
Jul 4, 2020, 7:00:09 PM7/4/20
to Mark Arnott, mozilla-dev-s...@lists.mozilla.org
Just chiming in as another subscriber and relying party, with a view to
speaking to the other subscribers on this topic.

To the extent that your use case is not specifically the WebPKI as pertains
to modern browsers, it became clear to me several years ago, and it gets
clearer every day: the WebPKI is not for you, us, or anyone outside that
very particular scope.

Want to pin server cert public keys in an app? Have a separate TLS
endpoint for that with an industry or org specific private PKI behind it.

Make website endpoints that need to face broad swathes of public users’ web
browsers participate in the WebPKI. Get client certs and API endpoints out
of it.

That was the takeaway I had quite some years ago and I’ve been saved much
grief for having moved that way.

On Saturday, July 4, 2020, Ryan Sleevi via dev-security-policy <
> > For the future, HL7 probably would be well served by working to create
> > > a separate PKI that meets their needs. This would enable a different
> > > risk calculation to be used - one that is specific to the HL7 health
> > > data interoperability world. I don't know if you or your organization
> > > has influence in HL7, but it is something worth pushing if you can.
> >
> > This has been discussed in the past and abandoned, but this incident will
> > probably restart that discussion.
>
>
> This is encouraging. This is exactly the sort of thing we look for from
> CAs when responding, as per
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation - how are
> they transitioning off participants who pose risk, and how are they
> mitigating that risk while they do so. Some CAs have addressed this via
> contracts: making it unambiguous and clear they will revoke as needed, so
> Subscribers know up front. Some have addressed this through certificate
> profiles: e.g. forcing certificates to be shorter lived so as to reduce the
> pain of these rotations by encouraging or even necessitating automation.
> And many CAs have, in the past, worked to establish alternative transition
> plans for their Subscribers that move on to alternative PKIs that can
> provide the stability and certainty they need. These are all good things,
> and they’re all things we look for from CAs as part of understanding
> https://wiki.mozilla.org/CA/Responding_To_An_Incident#Revocation

Buschart, Rufus

unread,
Jul 4, 2020, 7:21:02 PM7/4/20
to Eric Mill, mozilla-dev-security-policy, ry...@sleevi.com, mark.a...@gmail.com

From: Eric Mill <er...@konklone.com>
Sent: Sonntag, 5. Juli 2020 00:55
To: Buschart, Rufus (SOP IT IN COR) <rufus.b...@siemens.com>
Cc: mozilla-dev-security-policy <mozilla-dev-s...@lists.mozilla.org>; ry...@sleevi.com; mark.a...@gmail.com
Subject: Re: SECURITY RELEVANT FOR CAs: The curious case of the Dangerous Delegated Responder Cert


On Sat, Jul 4, 2020 at 3:15 PM Buschart, Rufus via dev-security-policy <dev-secur...@lists.mozilla.org<mailto:dev-secur...@lists.mozilla.org>> wrote:
...especially since many of those millions of certificates are not even TLS certificates and their consumers never expected the hard revocation deadlines of the BRGs to be of any relevance for them. And therefore they didn't design their infrastructure to be able to do an automated mass-certificate exchange.

This is a clear, straightforward statement of perhaps the single biggest core issue that limits the agility and security of the Web PKI: certificate customers (particularly large enterprises) don't seem to actually expect they may have to revoke many certificates on short notice, despite it being extremely realistic that they may need to do so. We're many years into the Web PKI now, and there have been multiple mass-revocation events along the way. At some point, these expectations have to change and result in redesigns that match them.

[>] Maybe I wasn’t able to get my message across: those 700k certificates that are hurting us most have never been “WebPKI” certificates. They are from technically constrained issuing CAs that are limited to S/MIME and client authentication. We are just ‘collateral damage’ from a compliance point of view (though of course not from a security point of view). In the upcoming BRGs for S/MIME I hope that the technical differences between TLS certificates (nearly all stored as P12 files on on-line servers) and S/MIME certificates (many of them stored off-line on smart-cards or other tokens) will also be reflected in the revocation requirements. For WebPKI (aka TLS) certificates, we are getting better based on the lessons learned from the last mass exchanges.

It's extremely convenient and cost-effective for organizations to rely on the WebPKI for non-public-web needs, and given that the WebPKI is still (relatively) more agile than a lot of private PKIs, there will likely continue to be security advantages for organizations that do so. But if the security and agility needs of the WebPKI don't align with an organization's needs, using an alternate PKI is a reasonable solution that reduces friction on both sides of the equation.

[>] But we are talking in S/MIME also about “public needs”: It’s about the interchange of signed and encrypted emails between different entities that don’t share a private PKI.

--
Eric Mill
617-314-0966 | konklone.com | @konklone <https://twitter.com/konklone>



With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.b...@siemens.com
www.twitter.com/siemens<http://www.twitter.com/siemens>
www.siemens.com/ingenuityforlife<https://siemens.com/ingenuityforlife>

Peter Gutmann

unread,
Jul 4, 2020, 9:21:47 PM7/4/20
to Buschart, Rufus, Eric Mill, ry...@sleevi.com, mark.a...@gmail.com, mozilla-dev-security-policy
Eric Mill via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>This is a clear, straightforward statement of perhaps the single biggest core
>issue that limits the agility and security of the Web PKI

That's not the biggest issue by a long shot. The biggest issue is that the
public PKI (meaning public/commercial CAs, not sure what the best collective
noun for that is) assumes that the only possible use for certificates is the
web. For all intents and purposes, public PKI = Web PKI. For example, for
embedded systems, SCADA devices, anything on an RFC 1918 LAN, and much more,
the only appropriate expiry date for a certificate is never. However, since
the Web PKI has decided that certificates should constantly expire because
$reasons, everything that isn't the web has to deal with this, or more usually
suffer under it.

The same goes for protocols like HTTP and TLS, the current versions (HTTP/2 /3
and TLS 1.3) are designed for efficient content delivery by large web service
providers above everything else. When some SCADA folks requested a few minor
changes to the SCADA-hostile HTTP/2 from the WG, not mandatory but just
negotiable options to make it more usable in a SCADA environment, the response
was "let them eat HTTP/1.1". In other words they'd explicitly forked HTTP,
there was HTTP/2 for the web and HTTP/1.1 for the rest of them.

So the problem isn't "everyone should do what the Web PKI wants, no matter how
inappropriate it is in their environment", it's "CAs (and protocol designers)
need to acknowledge that something other than the web exists and accommodate
it".

Peter.

Ryan Sleevi

unread,
Jul 4, 2020, 9:36:17 PM7/4/20
to Peter Gutmann, Buschart, Rufus, Eric Mill, ry...@sleevi.com, mark.a...@gmail.com, mozilla-dev-security-policy
On Sat, Jul 4, 2020 at 9:21 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> So the problem isn't "everyone should do what the Web PKI wants, no matter
> how
> inappropriate it is in their environment", it's "CAs (and protocol
> designers)
> need to acknowledge that something other than the web exists and
> accommodate
> it".


And they are accommodated - by using something other than the Web PKI.

Your examples of SCADA are apt: there's absolutely no reason to assume a
default phone device, for example, should be able to manage a SCADA device.
Of course we'd laugh at that and say "Oh god, who would do something that
stupid?"

Yet that's what happens when one tries to make a one-size-fits-all PKI.

Of course the PKI technologies accommodate these scenarios: you use locally
trusted anchors, specific to your environment, and hope that the OS vendor
does not remove support for your use case in a subsequent update. Yet it
would be grossly negligent if we allowed SCADA, in your example, to hold
back the evolution of the Web. As you yourself note, it's something other
than the Web. And it can use its own PKI.

Peter Gutmann

unread,
Jul 4, 2020, 9:41:27 PM7/4/20
to ry...@sleevi.com, Buschart, Rufus, Eric Mill, mark.a...@gmail.com, mozilla-dev-security-policy
Ryan Sleevi <ry...@sleevi.com> writes:

>And they are accommodated - by using something other than the Web PKI.

That's the HTTP/2 "let them eat cake" response again. For all intents and
purposes, PKI *is* the Web PKI. If it wasn't, people wouldn't be worrying
about having to reissue/replace certificates that will never be used in a web
context because of some Web PKI requirement that doesn't apply to them.

Peter.

Ryan Sleevi

unread,
Jul 4, 2020, 9:46:53 PM7/4/20
to Peter Gutmann, ry...@sleevi.com, Buschart, Rufus, Eric Mill, mark.a...@gmail.com, mozilla-dev-security-policy
On Sat, Jul 4, 2020 at 9:41 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:
Thanks Peter, but I fail to see how you're making your point.

The problem that "PKI *is* the Web PKI" is the problem to be solved. That's
not a desirable outcome, and exactly the kind of thing we'd expect to see
as part of a CA transition.

PKI is a technology, much like HTTP/2 is a protocol. Unlike your example
of HTTP/2 not being considerate of SCADA devices, PKI is an abstract
technology fully capable of addressing the SCADA needs. The only
distinction is that, by design and rather intentionally, it doesn't mean
that the billions of devices out there, in their default configuration, can
or should expect to talk to SCADA servers. I would hope you recognize why
that's undesirable, just like it would be if your phone were to ship with a
SCADA client. At the end of the day, this is something that should require
a degree of intentionality. Whether it's HL7 or SCADA, these are limited
use cases that aren't part of a generic and interoperable Web experience,
and it's not at all unreasonable to think they may require additional,
explicit configuration to support.

Matt Palmer

unread,
Jul 4, 2020, 10:08:51 PM7/4/20
to dev-secur...@lists.mozilla.org
On Sat, Jul 04, 2020 at 12:51:32PM -0700, Mark Arnott via dev-security-policy wrote:
> I think that the lack of fairness comes from the fact that the CA/B forum
> only represents the view points of two interests - the CAs and the Browser
> vendors. Who represents the interests of industries and end users?
> Nobody.

CAs claim that they represent what I assume you mean by "industries" (that
is, the entities to which WebPKI certificates are issued). If you're
unhappy with the way which your interests are being represented by your CA,
I would encourage you to speak with them. Alternately, anyone can become an
"Interested Party" within the CA/B Forum, which a brief perusal of the CA/B
Forum website will make clear.

- Matt

Matt Palmer

unread,
Jul 4, 2020, 10:13:02 PM7/4/20
to dev-secur...@lists.mozilla.org
On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy wrote:
> I was informed yesterday that I would have to replace just over 300
> certificates in 5 days because my CA is required by rules from the CA/B
> forum to revoke its subCA certificate.

The possibility of such an occurrence should have been made clear in the
subscriber agreement with your CA. If not, I encourage you to have a frank
discussion with your CA.

> In the CIA triad Availability is as important as Confidentiality. Has
> anyone done a threat model and a serious risk analysis to determine what a
> reasonable risk mitigation strategy is?

Did you do a threat model and a serious risk analysis before you chose to
use the WebPKI in your application?

- Matt

Peter Bowen

unread,
Jul 4, 2020, 10:42:35 PM7/4/20
to Matt Palmer, dev-secur...@lists.mozilla.org
I think it is important to keep in mind that many of the CA
certificates that were identified are constrained to not issue TLS
certificates. The certificates they issue are explicitly excluded
from the Mozilla CA program requirements.

The issue at hand is caused by a lack of standardization of the
meaning of the Extended Key Usage certificate extension when included
in a CA-certificate. This has resulted in some software developers
taking certain EKUs in CA-certificates to act as a constraint (similar
to Name Constraints), some to take it as the purpose for which the
public key may be used, and some to simultaneously take both
approaches - using the former for id-kp-serverAuth key purpose and the
latter for the id-kp-OCSPSigning key purpose.
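
The divergence can be sketched in a toy model (a hypothetical simplification of my own; real validators walk full X.509 chains and apply many more rules, and the OIDs are the only real values here):

```python
# Toy model of the two divergent readings of an EKU in a CA certificate.
# The OIDs are the real id-kp values; everything else is a simplification.

ID_KP_SERVER_AUTH = "1.3.6.1.5.5.7.3.1"   # id-kp-serverAuth
ID_KP_OCSP_SIGNING = "1.3.6.1.5.5.7.3.9"  # id-kp-OCSPSigning

def allowed_as_constraint(chain_ekus, purpose):
    """Reading 1: an EKU in a CA certificate constrains everything below it.
    A purpose is allowed only if every certificate in the chain either omits
    the EKU extension entirely (None) or lists the purpose."""
    return all(ekus is None or purpose in ekus for ekus in chain_ekus)

def allowed_as_key_purpose(cert_ekus, purpose):
    """Reading 2: the EKU states what THIS certificate's own key may do."""
    return cert_ekus is not None and purpose in cert_ekus

# A subCA carrying both serverAuth (intended as a constraint) and
# OCSPSigning: under reading 2, the subCA's own key is a delegated
# OCSP responder -- the crux of this incident.
subca_ekus = [ID_KP_SERVER_AUTH, ID_KP_OCSP_SIGNING]

assert allowed_as_constraint([None, subca_ekus], ID_KP_SERVER_AUTH)  # constrains issuance
assert allowed_as_key_purpose(subca_ekus, ID_KP_OCSP_SIGNING)        # ...but also delegates OCSP
```

The same extension value satisfies both readings at once, which is exactly how software taking both approaches simultaneously ends up treating such a subCA as a delegated responder.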

I don't think it is reasonable to assert that everyone impacted by
this should have been aware of the possibility of revocation - it is
completely permissible under all browser programs to issue end-entity
certificates with infinite duration that guarantee that they will
never be revoked, even in the case of full key compromise, as long as
the certificate does not assert a key purpose in the EKU that is
covered under the policy. The odd thing in this case is that the
subCA certificate itself is the certificate in question.

As several others have indicated, WebPKI today is effectively a subset
of the more generic shared PKI. It is beyond time to fork the WebPKI
from the general PKI and strongly consider making WebPKI-only CAs that
are subordinate to the broader PKI; these WebPKI-only CAs can be
carried by default in public web browsers and operating systems, while
the broader general PKI roots can be added locally (using centrally
managed policies or local configuration) by those users who want a
superset of the WebPKI.

Thanks,
Peter

Ryan Sleevi

unread,
Jul 4, 2020, 10:56:10 PM7/4/20
to Peter Bowen, Matt Palmer, dev-secur...@lists.mozilla.org
On Sat, Jul 4, 2020 at 10:42 PM Peter Bowen via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> As several others have indicated, WebPKI today is effectively a subset
> of the more generic shared PKI. It is beyond time to fork the WebPKI
> from the general PKI and strongly consider making WebPKI-only CAs that
> are subordinate to the broader PKI; these WebPKI-only CAs can be
> carried by default in public web browsers and operating systems, while
> the broader general PKI roots can be added locally (using centrally
> managed policies or local configuration) by those users who want a
> superset of the WebPKI.
>

+1. This is the only outcome that, long term, balances the tradeoffs
appropriately.

Matt Palmer

unread,
Jul 5, 2020, 12:36:42 AM7/5/20
to dev-secur...@lists.mozilla.org
On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
> <dev-secur...@lists.mozilla.org> wrote:
> >
> > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy wrote:
> > > I was informed yesterday that I would have to replace just over 300
> > > certificates in 5 days because my CA is required by rules from the CA/B
> > > forum to revoke its subCA certificate.
> >
> > The possibility of such an occurrence should have been made clear in the
> > subscriber agreement with your CA. If not, I encourage you to have a frank
> > discussion with your CA.
> >
> > > In the CIA triad Availability is as important as Confidentiality. Has
> > > anyone done a threat model and a serious risk analysis to determine what a
> > > reasonable risk mitigation strategy is?
> >
> > Did you do a threat model and a serious risk analysis before you chose to
> > use the WebPKI in your application?
>
> I think it is important to keep in mind that many of the CA
> certificates that were identified are constrained to not issue TLS
> certificates. The certificates they issue are explicitly excluded
> from the Mozilla CA program requirements.

Yes, I'm aware of that.

> I don't think it is reasonable to assert that everyone impacted by
> this should have been aware of the possibility of revocation

At the limits, I agree with you. However, to whatever degree that there is
complaining to be done, it should be directed at the CA(s) which sold a
product that, it is now clear, was not fit for whatever purpose it has been
put to, and not at Mozilla.

> it is completely permissible under all browser programs to issue
> end-entity certificates with infinite duration that guarantee that they
> will never be revoked, even in the case of full key compromise, as long as
> the certificate does not assert a key purpose in the EKU that is covered
> under the policy. The odd thing in this case is that the subCA
> certificate itself is the certificate in question.

And a sufficiently[1] thorough threat modelling and risk analysis exercise
would have identified the hazard of a subCA certificate that needed to be
revoked, assessed the probability of that hazard occurring, and either
accepted the risk (and thus have no reasonable cause for complaint now), or
would have controlled the risk until it was acceptable.

That there are people cropping up now demanding that Mozilla do a risk
analysis for them indicates that they themselves didn't do the necessary
risk analysis beforehand, which pegs my irony meter.

I wonder how these Masters of Information Security have "threat modelled"
the possibility that their chosen CA might get unceremoniously removed from
trust stores. Show us yer risk register!

- Matt

[1] one might also substitute "impossibly" for "sufficiently" here; I've
done enough "risk analysis" to know that trying to enumerate all possible
threats is an absurd notion. The point I'm trying to get across is
that someone asking Mozilla to do what they can't is not the iron-clad,
be-all-and-end-all argument that some appear to believe it is.

Buschart, Rufus

unread,
Jul 5, 2020, 5:30:47 PM7/5/20
to mozilla-dev-security-policy
> From: dev-security-policy <dev-security-...@lists.mozilla.org> On Behalf Of Matt Palmer via dev-security-policy
> Sent: Sonntag, 5. Juli 2020 06:36
>
> On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> > On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
> > <dev-secur...@lists.mozilla.org> wrote:
> > >
> > > > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy wrote:
> > > >
> > > > In the CIA triad Availability is as important as Confidentiality.
> > > > Has anyone done a threat model and a serious risk analysis to
> > > > determine what a reasonable risk mitigation strategy is?
> > >
> > > Did you do a threat model and a serious risk analysis before you
> > > chose to use the WebPKI in your application?
> >
> > I think it is important to keep in mind that many of the CA
> > certificates that were identified are constrained to not issue TLS
> > certificates. The certificates they issue are explicitly excluded
> > from the Mozilla CA program requirements.
>
> Yes, I'm aware of that.
>
> > I don't think it is reasonable to assert that everyone impacted by
> > this should have been aware of the possibility of revocation
>
> At the limits, I agree with you. However, to whatever degree that there is complaining to be done, it should be directed at the CA(s)
> which sold a product that, it is now clear, was not fit for whatever purpose it has been put to, and not at Mozilla.

Let me quote from the NSS website of Mozilla (https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):

If you want to add support for SSL, S/MIME, or other Internet security standards to your application, you can use Network Security Services (NSS) to implement
all your security features. NSS provides a complete open-source implementation of the crypto libraries used by AOL, Red Hat, Google, and other companies in a
variety of products, including the following:
* Mozilla products, including Firefox, Thunderbird, SeaMonkey, and Firefox OS.
* [and a long list of additional reference applications]

Probably it would be good if someone from the Mozilla team steps in here, but S/MIME _is_ an advertised use-case for NSS. And the Mozilla website nowhere says that the demands and rules of the WebPKI / CA/B Forum overrule all other demands. A consumer of S/MIME certificates simply does not expect them to become invalid within 7 days just because the BRGs for TLS certificates require it. This feels close to intrusive behavior by the WebPKI community.

With best regards,
Rufus Buschart

Siemens AG
Siemens Operations
Information Technology
Value Center Core Services
SOP IT IN COR
Freyeslebenstr. 1
91058 Erlangen, Germany
Tel.: +49 1522 2894134
mailto:rufus.b...@siemens.com
www.twitter.com/siemens

www.siemens.com/ingenuityforlife

Buschart, Rufus

unread,
Jul 5, 2020, 5:39:04 PM7/5/20
to mozilla-dev-security-policy

> From: dev-security-policy <dev-security-...@lists.mozilla.org> On Behalf Of Ryan Sleevi via dev-security-policy
+1. Maybe a first step would be to write an RFC that explains how technical constraining based on EKU (and Certificate Policies) should work through the layers of a multi-tier PKI hierarchy. We have seen in this thread that different Application Software Suppliers have different ideas, sometimes not even consistent within their own application. I would be willing to support it.

Ryan Sleevi

unread,
Jul 5, 2020, 6:26:50 PM7/5/20
to Buschart, Rufus, mozilla-dev-security-policy
On Sun, Jul 5, 2020 at 5:30 PM Buschart, Rufus via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> > From: dev-security-policy <dev-security-...@lists.mozilla.org>
Mozilla already places requirements on S/MIME revocation:
https://github.com/mozilla/pkipolicy/blob/master/rootstore/policy.md#62-smime
- The only difference is that for Subscriber certificates, a timeline is
not yet attached.

The problem is that this is no different than if you issued a TLS-capable
S/MIME issuing CA, which, as we know from past incidents, many CAs did
exactly that, and had to revoke them due to the lack of appropriate audits.
Your Root is subject to the TLS policies, because that Root was enabled for
TLS, and so everything the Root issues is, to some extent, subject to those
policies.

The solution here for CAs has long been clear: maintaining separate
hierarchies, from the root onward, for separate purposes, if they
absolutely want to avoid any cohabitation of responsibilities. They *can*
continue on the current path, but they have to plan for the most
restrictive policy applying throughout that hierarchy and designing
accordingly. Continuing to support other use cases from a common root -
e.g. TLS client authentication, document signing, timestamping, etc -
unnecessarily introduces additional risk, and for limited concrete benefit,
either for users or for the CA. Having to maintain two separate "root"
certificates in a root store, one for each purpose, is no more complex than
having to maintain a single root trusted for two purposes; and
operationally, for the CA, it is vastly less complex.
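
As a back-of-the-envelope illustration, the "most restrictive policy applies throughout" effect can be sketched as follows. The deadline values here are illustrative placeholders of my own, apart from the seven-day subCA figure discussed in this thread:

```python
# Sketch: a hierarchy inherits the tightest rule among the purposes
# its root is trusted for. Values are illustrative, not policy text;
# None means no fixed revocation timeline is attached (yet).
DEADLINE_DAYS = {
    "serverAuth": 7,         # BR-style subCA revocation deadline
    "emailProtection": None  # S/MIME subscriber certs: no timeline yet
}

def strictest_deadline(root_trust_purposes):
    """A root trusted for several purposes is held to the tightest
    deadline among them; a single-purpose root only to its own."""
    fixed = [DEADLINE_DAYS[p] for p in root_trust_purposes
             if DEADLINE_DAYS[p] is not None]
    return min(fixed) if fixed else None

# Single root trusted for both TLS and S/MIME: the 7-day rule reaches
# everything beneath it, S/MIME certificates included.
print(strictest_deadline(["serverAuth", "emailProtection"]))  # 7
# Dedicated S/MIME root: no TLS deadline attaches.
print(strictest_deadline(["emailProtection"]))                # None
```

This is the design consequence: splitting the hierarchies keeps TLS deadlines from reaching non-TLS certificates at all.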

Matt Palmer

unread,
Jul 5, 2020, 7:51:24 PM7/5/20
to dev-secur...@lists.mozilla.org
On Sun, Jul 05, 2020 at 09:30:33PM +0000, Buschart, Rufus via dev-security-policy wrote:
> > From: dev-security-policy <dev-security-...@lists.mozilla.org> On Behalf Of Matt Palmer via dev-security-policy
> > Sent: Sonntag, 5. Juli 2020 06:36
> >
> > On Sat, Jul 04, 2020 at 07:42:12PM -0700, Peter Bowen wrote:
> > > On Sat, Jul 4, 2020 at 7:12 PM Matt Palmer via dev-security-policy
> > > <dev-secur...@lists.mozilla.org> wrote:
> > > >
> > > > > On Sat, Jul 04, 2020 at 08:42:03AM -0700, Mark Arnott via dev-security-policy wrote:
> > > > >
> > > > > In the CIA triad Availability is as important as Confidentiality.
> > > > > Has anyone done a threat model and a serious risk analysis to
> > > > > determine what a reasonable risk mitigation strategy is?
> > > >
> > > > Did you do a threat model and a serious risk analysis before you
> > > > chose to use the WebPKI in your application?
> > >
> > > I think it is important to keep in mind that many of the CA
> > > certificates that were identified are constrained to not issue TLS
> > > certificates. The certificates they issue are explicitly excluded
> > > from the Mozilla CA program requirements.
> >
> > Yes, I'm aware of that.
> >
> > > I don't think it is reasonable to assert that everyone impacted by
> > > this should have been aware of the possibility of revocation
> >
> > At the limits, I agree with you. However, to whatever degree that there is complaining to be done, it should be directed at the CA(s)
> > which sold a product that, it is now clear, was not fit for whatever purpose it has been put to, and not at Mozilla.
>
> Let me quote from the NSS website of Mozilla (https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/Overview):
>
> If you want to add support for SSL, S/MIME, or other Internet security standards to your application, you can use Network Security Services (NSS) to implement
> all your security features. NSS provides a complete open-source implementation of the crypto libraries used by AOL, Red Hat, Google, and other companies in a
> variety of products, including the following:

[snip]

Are you using NSS for your S/MIME implementation? If not, I fail to see how
it is relevant here.

- Matt

Ryan Hurst

unread,
Jul 5, 2020, 8:04:55 PM7/5/20
to mozilla-dev-s...@lists.mozilla.org
On Saturday, July 4, 2020 at 3:43:22 PM UTC-7, Ryan Sleevi wrote:
> > Thank you for explaining that. We need to hear the official position from
> > Google. Ryan Hurst are you out there?

Although Ryan Sleevi has already pointed this out, since I was named explicitly, I wanted to respond and re-affirm that I am not responsible for Chrome's (or anyone else's) root program. I represent Google Trust Services (GTS), a Certificate Authority (CA) that is subject to the same requirements as any other WebPKI CA.

While I am watching this issue closely, as I do all WebPKI related incidents, since this is not an issue that directly impacts GTS I have chosen to be a quiet observer.

With that said, as a long-time member of the WebPKI community, and in a personal capacity, I would say one of the largest challenges in operating a CA is how to handle incidents when they occur. In every incident, I try to keep in mind that a CA's ultimate responsibility is to the users that rely on the certificates it issues.

This means when balancing the impact of decisions a CA should give weight to protecting those users. This reality unfortunately also means that sometimes it is necessary to take actions that may cause pain for the subscribers they provide services to.

Wherever possible a CA should minimize the pain on relying parties, but more often than not the decision to use the WebPKI for these non-browser TLS use cases was made to externalize the costs of deploying a dedicated PKI that is fit for purpose, and as with most trade-offs there may be later consequences to that decision.

As for my take on this topic, I think Peter Bowen has done an excellent job capturing the issue, its risks, origins, and the choices available.

Peter Gutmann

unread,
Jul 5, 2020, 11:48:22 PM7/5/20
to Matt Palmer, dev-secur...@lists.mozilla.org
Matt Palmer via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>If you're unhappy with the way which your interests are being represented by
>your CA, I would encourage you to speak with them.

It's not the CAs, it's the browsers, and many other types of clients. Every
Internet-enabled (meaning web-enabled) device is treated by browsers as if it
were a public web server, no matter how illogical and nonsensical that
actually is. You don't have a choice to opt out of the Web PKI because all of
the (mainstream) clients you can use force you into it. Ever tried connecting
to a local (RFC1918 LAN) IoT device that has a self-signed cert?

It's not really the CAs that are the problem, everything you're likely to use
assumes there's only the Web PKI and nothing else. When the clients all
enforce use of the Web PKI, there's no way out even if the CAs want to help
you.

Peter.

Peter Gutmann

unread,
Jul 5, 2020, 11:52:05 PM7/5/20
to dev-secur...@lists.mozilla.org
Several people write:

>Go to another CA.

>Talk to your CA.

>Have a frank discussion with your CA.

This phrase seems to be the PKI equivalent of "come back with some code",
which is in turn the OSS equivalent of the more widely-recognised "FOAD".

Peter.

Matt Palmer

unread,
Jul 6, 2020, 12:18:59 AM7/6/20
to dev-secur...@lists.mozilla.org
On Mon, Jul 06, 2020 at 03:48:06AM +0000, Peter Gutmann wrote:
> Matt Palmer via dev-security-policy <dev-secur...@lists.mozilla.org> writes:
> >If you're unhappy with the way which your interests are being represented by
> >your CA, I would encourage you to speak with them.
>
> It's not the CAs, it's the browsers, and many other types of clients.

How, exactly, is it not the CAs' fault that they claim to represent their
customers in the CA/B Forum, and then fail to do so effectively?

> Ever tried connecting to a local (RFC1918 LAN) IoT device that has a
> self-signed cert?

If we expand "IoT device" to include, say, IPMI web-based management
interfaces, then yes, I do so on an all-too-regular basis. But mass-market
web browsers are not built specifically for that use-case, so the fact that
they don't do a stellar job is hardly a damning indictment on them.

That IoT/IPMI devices piggyback on mass-market web browsers (and the Web PKI
they use) is, as has been identified previously, an example of externalising
costs, which doesn't always work out as well as the implementers might have
liked. That it doesn't end well is hardly the fault of the Web PKI, the
BRs, or the browsers.

Your question is roughly equivalent to "ever tried fitting a screw with a
hammer?", or perhaps "ever tried making a request to https://google.com
using telnet and a pen and paper?". That your arithmetic skills might not
be up to doing a TLS negotiation by hand is not the fault of TLS, it's that
you're using the wrong tool for the job.

- Matt

Dimitris Zacharopoulos

Jul 6, 2020, 1:12:13 AM7/6/20
to dev-secur...@lists.mozilla.org

I'd like to chime in on this particular topic because I had thoughts
similar to Pedro's and Peter's.

I would like to echo Pedro's, Peter's, and others' argument that it is
unreasonable for Relying Parties and Browsers to say "I trust the CA
(the Root Operator) to do the right thing and manage their Root Keys
adequately", and not do the same for their _internally operated_ and
audited Intermediate CA Certificates. The same Operator could do "nasty
things" with revocation, without needing to go to all the trouble of
creating -possibly- incompatible OCSP responses (at least for some
currently known implementations) using a CA Certificate that has the
id-kp-OCSPSigning EKU. Browsers have never asked for public records on
"current CA operations", except in very rare cases where the CA was
accused of "bad behavior". Ryan's response on
https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems
unreasonably harsh (too much "bad faith" in affected CAs, even if these
CA Certificates were operated by the Root Operator). There are auditable
events that auditors could check and attest to, if needed, for example
OCSP responder configuration changes or key signing operations, and
these events are kept/archived according to the Baseline Requirements
and the CA's CP/CPS. This attestation could be done during a "special
audit" (as described in the ETSI terminology) and possibly a
Point-In-Time audit (under WebTrust).

We did some research and this "convention", as explained by others,
started from Microsoft.

In
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/dn786428(v=ws.11),
one can read "if a CA includes EKUs to state allowed certificate usages,
then its EKUs will be used to restrict usages of certificates issued by
this CA" in the paragraph titled "Extended Key Usage Constraints".

Mozilla agreed to this convention and added it to Firefox
https://bugzilla.mozilla.org/show_bug.cgi?id=725351. The rest of the
information was already covered in this thread (how it also entered into
the Mozilla Policy).

IETF made an attempt to set an extension for EKU constraints
(https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/)
where Rob Stradling made an indirect reference in
https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ
(Rob, please correct me if I'm wrong).

There was a follow-up discussion in IETF that concluded that no one should
deal with this issue
(https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/).
A day later, all attempts died off, apparently because no one would
actually implement it:
https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/.
If this extension had been standardized, we would probably not be having
this issue right now. However, this entire topic demonstrates the
necessity of standardizing the EKU's presence in CA Certificates as a
constraint on the EKUs of leaf certificates.

We even found a comment referencing the CA/B Forum about whether it has
accepted that EKUs in CA Certificates are considered constraints
(https://mailarchive.ietf.org/arch/msg/spasm/Y1V_vbEw91D2Esv_SXxZpo-aQgc/).
Judging from the result and the discussion of this issue, even today, it
is unclear how the CA/B Forum (as far as its Certificate Consumers are
concerned) treats EKUs in CA Certificates.

CAs that enabled the id-kp-OCSPSigning EKU in the Intermediate CA
Profiles were following the letter of the Baseline Requirements to
"protect relying parties". According to the BRs 7.1.2.2:

    "Generally Extended Key Usage will only appear within end entity
    certificates (as highlighted in RFC 5280 (4.2.1.12)), however,
    Subordinate CAs MAY include the extension to further protect
    relying parties until the use of the extension is consistent
    between Application Software Suppliers whose software is used by a
    substantial portion of Relying Parties worldwide."

So, on one hand, a Root Operator was trying to do "the right thing"
following the agreed standards and go "above and beyond" to "protect"
relying parties by adding this EKU in the issuing CA Certificate (at a
minimum it "protected" users using Microsoft that required this "EKU
Chaining"), and on the other hand it unintentionally tripped into a case
where a CA Certificate with such an EKU could be used in an OCSP
responder service to sign status messages for its parent.
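To make the problematic profile concrete, here is a minimal lint sketch. It assumes the third-party Python `cryptography` package, and the self-signed certificate built at the end is a synthetic example (not any real CA's certificate): the check flags a CA certificate that asserts id-kp-OCSPSigning without the id-pkix-ocsp-nocheck extension.

```python
# Sketch only: flag the "dangerous delegated responder" profile discussed
# above -- a CA certificate asserting id-kp-OCSPSigning without the
# id-pkix-ocsp-nocheck extension. Assumes the third-party "cryptography"
# package; the certificate below is a synthetic test case.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID, NameOID


def is_dangerous_responder(cert: x509.Certificate) -> bool:
    """True if cert is a CA cert with the OCSPSigning EKU but no nocheck."""
    try:
        bc = cert.extensions.get_extension_for_oid(
            ExtensionOID.BASIC_CONSTRAINTS).value
        eku = cert.extensions.get_extension_for_oid(
            ExtensionOID.EXTENDED_KEY_USAGE).value
    except x509.ExtensionNotFound:
        return False
    if not (bc.ca and ExtendedKeyUsageOID.OCSP_SIGNING in eku):
        return False
    try:
        cert.extensions.get_extension_for_oid(ExtensionOID.OCSP_NO_CHECK)
        return False  # nocheck present: a BR-compliant delegated responder
    except x509.ExtensionNotFound:
        return True   # the misissuance pattern described in this thread


# Synthetic CA certificate mirroring the problematic profile.
key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example ICA")])
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(name).issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=7))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                   critical=True)
    .add_extension(x509.ExtendedKeyUsage(
        [ExtendedKeyUsageOID.SERVER_AUTH, ExtendedKeyUsageOID.OCSP_SIGNING]),
        critical=False)
    .sign(key, hashes.SHA256())
)
print(is_dangerous_responder(cert))
```

Running such a check across a CA's issued intermediates is, in essence, what the crt.sh and Censys queries referenced earlier in the thread did.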

There was also an interesting observation that came up during a recent
discussion. As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be
normative constraints on *end-entity Certificates*, not CA Certificates.
Should RFC 6960 be read in conjunction with RFC 5280 and not on its
own? Should OCSP Clients conforming to the Publicly-Trusted
Certificates (PTC) and BR-compliant solutions implement both? If the
answer is yes, this means that a "conforming" OCSP client should not
place trust in the id-kp-OCSPSigning EKU in a *CA* Certificate
(basicConstraints: CA:TRUE). It is quite possible that browser
PTC-conforming implementations should take into account not only RFC
5280 but also BRs 7.1.2.2, since this is the only normative policy
conformance tool for handling the EKUs in CA Certificates as a
constraint. This interpretation is consistent with what the CAs have
implemented, and is also consistent with the implementation of the
mozilla::pkix code. Do others share this interpretation? It sounds a
bit "stretched", but it's not the first time we have seen different
interpretations of normative requirements. This is somewhat similar to
Corey's interpretation of the keyUsage extension.

I support analyzing the practical implications on existing Browser
software to check if existing Web Browsers verify and accept an OCSP
response signed by a delegated CA Certificate (with the
id-kp-OCSPSigning EKU) on behalf of its parent. We already know the
answer for Firefox. Do we know whether Apple, Chromium and Microsoft web
browsers treat OCSP responses signed from delegated CA Certificates
(that include the id-kp-OCSPSigning EKU) on behalf of their parent
RootCA, as valid? Some tests were performed by Paul van Brouwershaven
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d. This
does not dismiss any of the previous statements about the additional
burden placed on clients, or the security concerns; it's just a way to
assess this particular incident with popular web browsers. This analysis
could be taken into account, along with other parameters (like existing
controls) when deciding timelines for mitigation, and could also be used
to assess the practical security issues/impact. Until this analysis is
done, we must all assume that the possible attack that was described by
Ryan Sleevi can actually succeed with Apple, Chrome and Microsoft Web
browsers.

As others have already highlighted, estimating the real practical attack
threat and the likelihood of its occurring is critical to the remediation
timeline, because the issue affects not just TLS-server issuing CAs but
even Technically Constrained non-TLS ones, some of which are unaudited.

Finally, a lot has been said about separate hierarchies per usage. Ryan
Sleevi, representing Google Chrome at the CA/B Forum, is very clear
about Google's position pushing for separate hierarchies per usage [1].
It would be greatly appreciated if there were similar positions from
other Certificate Consumers that handle more than the server TLS usage
in their Root Stores (Apple, Microsoft and Mozilla). I plan on starting
such a discussion as stated in [1] at the CA/B Forum, but since this is
a Mozilla Forum, it would be great if we could have the position of the
Mozilla CA Certificate Policy owner. If this position is aligned with
Google's, as Rufus highlighted, some pages on the wiki (and possibly the
policy?) will need to be updated to highlight this position. When new
CAs apply, they should clearly be guided, with a "strong preference", to
use a separate hierarchy for S/MIME and a separate one for server TLS. A
plan could also be discussed for existing included CAs that currently
have both trust bits enabled in their Roots, which is the majority
judging from
https://ccadb-public.secure.force.com/mozilla/IncludedCACertificateReport.


Thanks,
Dimitris.


[1]:
https://lists.cabforum.org/pipermail/servercert-wg/2020-July/002048.html

On 2020-07-03 3:09 π.μ., Ryan Sleevi via dev-security-policy wrote:
> On Thu, Jul 2, 2020 at 6:42 PM Pedro Fuentes via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> Does the operator of a root and it’s hierarchy have the right to delegate
>> OCSP responses to its own responders?
>>
>> If your answer is “No”, then I don’t have anything else to say, but if
>> your answer is “Yes”, then I’ll be having still a hard time to see the
>> security risk derived of this issue.
>>
> Yes. But that doesn't mean we blindly trust the CA in doing so. And that's
> the "security risk".
>
> I totally appreciate that your argument is "but we wouldn't misuse the
> key". The "risk" that I'm talking about is how can anyone, but the CA, know
> that's true? All of the compliance obligations assume certain facts when
> the CA is operating a responder. This issue violates those assumptions, and
> so it violates the controls, and so we don't have any way to be confident
> that the key is not misused.
>
> I think the confusion may be from the overloading of the word "risk". Here,
> I'm talking about "the possibility of something bad happening". We don't
> have any proof any 3P Sub-CAs have mis-signed OCSP responses: but we seem
> to agree that there's risk of that happening. It seems we disagree on
> whether there is risk of the CA themselves doing it. I can understand the
> view that says "Of course the CA wouldn't", and my response is that the
> risk is still the same: there's no way to know, and it's still a
> possibility.
>
> I can understand that our views may differ: you may see 3P as "great risk"
> and 1p as "acceptable risk". However, from the view of a browser or a
> relying party, "1p" and "3p" are the same: they're both CAs. So the risk is
> the same, and the risk is unacceptable for both cases.

Ryan Sleevi

Jul 6, 2020, 2:54:35 AM7/6/20
to Dimitris Zacharopoulos, dev-secur...@lists.mozilla.org
On Mon, Jul 6, 2020 at 1:12 AM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> Ryan's response on
> https://bugzilla.mozilla.org/show_bug.cgi?id=1649939#c8 seems
> unreasonably harsh (too much "bad faith" in affected CAs, even of these
> CA Certificates were operated by the Root Operator).


Then revoke within 7 days, as required. That’s a discussion with WISeKey,
not HARICA, and HARICA needs to have its own incident report and be judged
on it. I can understand wanting to wait to see what others do first, but
that’s not leadership.

The duty is on the CA to demonstrate nothing can go wrong and nothing has
gone wrong. Unlike a certificate the CA “intended” as a responder, there is
zero guarantee about the controls, unless and until the CA establishes the
facts around such controls. The response to Pedro is based on Peter Bowen’s
suggestion that controls are available, and uses those controls.

As an ETSI-audited CA, I can understand why you might balk, because the
same WebTrust controls aren’t available and the same assurance isn’t
possible. The baseline assurance expectation is you will revoke in 7 days.
That’s not unreasonable, that’s the promise you made the community you
would do.

It’s understandable that it turns out to be more difficult than you
thought. You want more time to mitigate, to avoid disruption. As expected,
you’re expected to file an incident report on that revocation delay, in
addition to an incident report on the certificate profile issue that was
already filed, that examines why you’re delaying, what you’re doing to
correct that going forward, and your risk analysis. You need to establish
why and how nothing can go wrong: simply saying “it’s a CA key” or
“trust us” surely can’t be seen as sufficient.

There are auditable
> events that auditors could check and attest to, if needed, for example
> OCSP responder configuration changes or key signing operations, and
> these events are kept/archived according to the Baseline Requirements
> and the CA's CP/CPS. This attestation could be done during a "special
> audit" (as described in the ETSI terminology) and possibly a
> Point-In-Time audit (under WebTrust).


This demonstrates a failed understanding about the level of assurance these
audits provide. A Point in Time Audit doesn’t establish that nothing has
gone wrong or will go wrong; just at a single instant, the configuration
looks good enough. The very moment the auditors leave, the CA can configure
things to go wrong, and that assurance is lost. I further believe you’re
confusing this with an Agreed Upon Procedures report.

In any event, the response to WISeKey is acknowledging a path forward
relying on audits. The Relying Party bears all the risk in accepting such
audits. The path you describe above, without any further modification, is
just changing “trust us (to do it right)” to “trust our auditor”, which is
just as risky. I outlined a path to “trust, but verify,” to allow some
objective evaluation. Now it just seems like “we don’t like that either,”
and this just recycled old proposals that are insufficient.

Look, the burden is on the CA to demonstrate how nothing can go wrong or
has gone wrong. This isn’t a one size fits all solution. If you have a
specific proposal from HARICA, filing it, before the revocation deadline,
where you show your work and describe your plan and timeline, is what’s
expected. It’s the same expectation as before this incident and consistent
with Ben’s message. But you have to demonstrate why, given the security
concerns, this is acceptable, and “just trust us” can’t be remotely seen as
reasonable.

We


Who is we here? HARICA? The CA Security Council? The affected CAs in
private collaboration? It’s unclear which of the discussions taking place
are being referenced here.

If this extension was standardized, we would probably not be having this
> issue right now. However, this entire topic demonstrates the necessity
> to standardize the EKU existence in CA Certificates as constraints for
> EKUs of leaf certificates.


This is completely the *wrong* takeaway.

Addressing this at the CABF level, via profiles, would clearly resolve it.
If the OCSPSigning EKU was prohibited from appearing with other EKUs, as
proposed, this would have resolved it. There’s no guarantee that a
hypothetical specification would have resolved this, since the
ambiguity/issue is not with respect to the EKU in a CA cert, it’s whether
or not the profile for an OCSP Responder is allowed to assert the CA bit.
This *same* ambiguity also exists for TLS certs, and Mozilla has similar
non-standard behavior here that prevents a CA cert from being a server cert
unless it’s also self-signed.


There was also an interesting observation that came up during a recent
> discussion.


You mean when I dismissed this line of argument? :)

As mandated by RFC 5280 (4.2.1.12), EKUs are supposed to be
> normative constrains to *end-entity Certificates*, not CA Certificates.
> Should RFC 6960 need to be read in conjunction with RFC 5280 and not on
> its own?


Even when you read them together, nothing prohibits them from appearing on
CA certificates, nor prohibits their semantic interpretation. As mentioned
above, this is no different from the hypothetical misissued “google.com” CA
cert: put in a SAN, a TLS EKU, and a KU of only CertSign/CRLSign. Is that
misissued?

The argument you’re applying here would say it’s not misissued: after all,
it’s a CA cert, the KU means it “cannot” be used for TLS, and, using your
logic, the BRs don’t explicitly prohibit it. Yet that very certificate can
be used to authenticate “google.com”, and it shouldn’t matter how many
certificates that hypothetical CA has issued, the security risk is there.

Similarly, it might be argued that you know it hasn’t been used to attack
Google because, hey, it’s in an HSM, at the CA, so what’s the big deal? The
big deal there, as with here, is the potential for mischief and the mere
existence of that cert, for which no control prohibits the CA from MITMing
*using* their HSM.

If you can understand that, then you can understand the risk, and why the
burden falls to the CA to demonstrate.

I support analyzing the practical implications on existing Browser
> software to check if existing Web Browsers verify and accept an OCSP
> response signed by a delegated CA Certificate (with the
> id-kp-OCSPSigning EKU) on behalf of its parent. We already know the
> answer for Firefox.


Note: NSS is vulnerable, which implies Thunderbird (which, as far as I can
tell, does not use mozilla::pkix) is also vulnerable.

Do we know whether Apple, Chromium and Microsoft web
> browsers treat OCSP responses signed from delegated CA Certificates
> (that include the id-kp-OCSPSigning EKU) on behalf of their parent
> RootCA, as valid?


Yes

Some tests were performed by Paul van Brouwershaven
> https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d.


As mentioned, those tests weren’t correct. I’ve provided sample test cases
to several other browser vendors, and heard back or demonstrated that
they’re vulnerable. As are the majority of open-source TLS libraries with
support for OCSP.

This
> does not dismiss any of the previous statements of putting additional
> burden on clients or the security concerns, it's just to assess this
> particular incident with particular popular web browsers. This analysis
> could be taken into account, along with other parameters (like existing
> controls) when deciding timelines for mitigation, and could also be used
> to assess the practical security issues/impact. Until this analysis is
> done, we must all assume that the possible attack that was described by
> Ryan Sleevi can actually succeed with Apple, Chrome and Microsoft Web
> browsers.


You should assume this, regardless, with any incident report. That is,
non-compliance arguments around the basis of “oh, but it shouldn’t work”
aren't really a good look. The CA needs to do their work, and if they know
they don’t plan to revoke on time, make sure they publish their work for
others to check.

It’s not hard to whip up a synthetic response for testing: I know, because
I’ve done it (more aptly, constructed 72 possible permutations of EKU, KU,
and basicConstraints) and tested those against popular libraries or traced
through their handling of these.
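For illustration only, a test matrix of that size can be enumerated mechanically. The option sets below are hypothetical guesses (the actual 72-case matrix was not published), but they show the shape of such a harness:

```python
# Hypothetical sketch of enumerating EKU/KU/basicConstraints permutations
# for OCSP-responder acceptance testing. The option sets are illustrative
# guesses, not the actual matrix from the thread; 6 x 6 x 2 = 72 cases.
from itertools import product

eku_options = ["absent", "ocsp-signing", "serverAuth", "ocsp+serverAuth",
               "anyExtendedKeyUsage", "clientAuth"]
ku_options = ["absent", "digitalSignature", "keyCertSign", "cRLSign",
              "keyCertSign+cRLSign", "digitalSignature+keyCertSign"]
bc_options = ["CA:TRUE", "CA:FALSE"]

cases = list(product(eku_options, ku_options, bc_options))
print(len(cases))  # 72

# Each tuple would become a synthetic responder certificate whose signed
# OCSP response is fed to the client library under test.
for eku, ku, bc in cases[:3]:
    print(f"test responder: EKU={eku} KU={ku} BC={bc}")
```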

Mozilla’s only protection seems to be the CA bit check, not the KU argument
Corey made; as noted when that check was introduced, it doesn’t seem to
have been a prohibited configuration or a required check in any
specification.

This is why it is important for CAs to reliably demonstrate that the key
cannot be misused, whether revoking at 7 days or longer, and why any
proposal for “longer” needs to come with some objectively verifiable plan
for demonstrating it wasn’t / isn’t misused, and which does something more
than “trust us (and/or our auditor)”. I gave one such proposal to WISeKey,
and HARICA is free to make its own proposal based on the facts specific to
HARICA.

But, again, I want to stress when it comes to audits: they are not magic
pixie dust we sprinkle on and say “voilà, we have created assurance”. It’s
understandable for CAs to misunderstand audits, particularly if they’re the
ones producing them and don’t consume any. Understanding the limitations
they provide is, unfortunately, necessary for browsers, and my remarks here
and to WISeKey reflect knowing where, and how, a CA could exploit audits to
provide the semblance of assurance while causing trouble. ETSI and WebTrust
provide different assurance levels and different fundamental approaches, so
what works for WebTrust may not (almost certainly, will not) work for ETSI.
Unfortunately, when an incident relies on audits as part of the
justification for delays, this means that those limits and differences have
to be factored in, and cannot just be dismissed or hand-waved away.

Dimitris Zacharopoulos

Jul 6, 2020, 3:38:47 AM7/6/20
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On 6/7/2020 9:47 π.μ., Ryan Sleevi wrote:
> I can understand wanting to wait to see what others do first, but
> that’s not leadership.

This is a security community, and it is expected that we watch and learn
from others, which is just as good as proposing new things. I'm not sure
what you mean by "leadership". Leadership for whom?

> We
>
>
> Who is we here? HARICA? The CA Security Council? The affected CAs in
> private collaboration? It’s unclear which of the discussions taking
> place are being referenced here.

HARICA.

> There was also an interesting observation that came up during a
> recent
> discussion.
>
>
> You mean when I dismissed this line of argument? :)

Yep. You have dismissed it but others may have not. If no other voices
are raised, then your argument prevails :)


Dimitris.

Ryan Sleevi

Jul 6, 2020, 4:03:51 AM7/6/20
to Dimitris Zacharopoulos, dev-secur...@lists.mozilla.org, ry...@sleevi.com
On Mon, Jul 6, 2020 at 3:38 AM Dimitris Zacharopoulos <ji...@it.auth.gr>
wrote:

> On 6/7/2020 9:47 π.μ., Ryan Sleevi wrote:
>
> I can understand wanting to wait to see what others do first, but that’s
> not leadership.
>
>
> This is a security community, and it is expected to see and learn from
> others, which is equally good of proposing new things. I'm not sure what
> you mean by "leadership". Leadership for who?
>

Leadership as a CA affected by this, taking steps to follow through on
their commitments and operate beyond reproach, suspicion, or doubt.

As a CA, the business is built on trust, and that is the most essential
asset. Trust takes years to build and seconds to lose. Incidents, beyond
being an opportunity to share lessons learned and mitigations applied,
provide an opportunity for a CA to earn trust (by taking steps that are
disadvantageous for their short-term interests but which prioritize being
irreproachable) or lose trust (by taking steps that appear to minimize or
dismiss concerns or fail to take appropriate action).

Tim’s remarks on behalf of DigiCert, if followed through on, stand in stark
contrast to remarks by others. And that’s encouraging, in that it seems
that past incidents at DigiCert have given rise to a stronger focus on
security and compliance than may have existed there in the past, and which
there were concerns about with the Symantec PKI acquisition/integration.
Ostensibly, that is an example of leadership: making difficult choices to
prioritize relying parties over subscribers, and to focus on removing
any/all doubt.

You mean when I dismissed this line of argument? :)
>
>
> Yep. You have dismissed it but others may have not. If no other voices are
> raised, then your argument prevails :)
>

I mean, it’s not a popularity contest :)

It’s a question of what information is available to the folks ultimately
deciding things. If there is information being overlooked, if there are
facts worth considering, this is the time to bring it up. Conclusions will
ultimately be decided by those trusting these certificates, but that’s why
it’s important to introduce any new information that may have been
overlooked.


Paul van Brouwershaven

Jul 6, 2020, 4:39:25 AM7/6/20
to ry...@sleevi.com, Dimitris Zacharopoulos, MDSP
>
> Some tests were performed by Paul van Brouwershaven
> > https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d.
>
> As mentioned, those tests weren’t correct. I’ve provided sample test cases
> to several other browser vendors, and heard back or demonstrated that
> they’re vulnerable. As are the majority of open-source TLS libraries with
> support for OCSP.


Ryan, you made a statement about a bug in Golang; the test case linked by
Dimitris was about the follow-up tests I did with certutil and
Test-Certificate in PowerShell.

As a follow-up to Dimitris's comments, I tested the scenario where a
sibling issuing CA [ICA 2] with the OCSP signing EKU (but without the
digitalSignature KU) under [ROOT] signs a revoked OCSP response for
[ICA], also under [ROOT]:
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d

I was actually surprised to see that certutil fails to validate/decode
the OCSP response in this scenario. But this doesn't mean it's not a
problem, as other clients or versions might accept the response.

I will try to perform the same test on Mac in a moment.

Dimitris Zacharopoulos

Jul 6, 2020, 6:08:39 AM7/6/20
to ry...@sleevi.com, dev-secur...@lists.mozilla.org
On 6/7/2020 11:03 π.μ., Ryan Sleevi via dev-security-policy wrote:
>> Yep. You have dismissed it but others may have not. If no other voices are
>> raised, then your argument prevails:)
>>
> I mean, it’s not a popularity contest:)

As others have highlighted already, there are times when people get
confused by your posting by default in a personal capacity. It is easy to
confuse readers when you use the word "I" in your emails.

Even if you use your "Google Chrome hat" to make a statement, there
might be a different opinion or interpretation from the Mozilla Module
owner, whom this Forum is mainly for. There's more agreement than
disagreement between Mozilla and Google when it comes to policy so I
hope my statement was not taken the wrong way as an attempt to "push"
for a disagreement.

I have already asked for the Mozilla CA Certificate Policy owner's
opinion regarding separate hierarchies for Mozilla Root program in
https://groups.google.com/d/msg/mozilla.dev.security.policy/EzjIkNGfVEE/jOO2NhKAAwAJ,
highlighting your already clearly stated opinion on behalf of Google,
because I am interested to hear their opinion as well. I hope I'm not
accused of doing something wrong by asking for more "voices", if there
are any.



Dimitris Zacharopoulos

Jul 6, 2020, 6:10:04 AM7/6/20
to Paul van Brouwershaven, MDSP
On 6/7/2020 11:39 π.μ., Paul van Brouwershaven via dev-security-policy
wrote:
> As follow up to Dimitris comments I tested the scenario where a
> sibling issuing CA [ICA 2] with the OCSP signing EKU (but without
> digitalSignature KU) under [ROOT] would sign a revoked OCSP response for
> [ICA] also under [ROOT]
> https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d
>
> I was actually surprised to see that certutil fails to validate/decode the
> OCSP response in this scenario. But this doesn't mean it's not a problem, as
> other clients or versions might accept the response.
>
> I will try to perform the same test on Mac in a moment.

Thank you very much Paul, this is really helpful.

Dimitris.

Paul van Brouwershaven

Jul 6, 2020, 7:21:16 AM7/6/20
to Dimitris Zacharopoulos, MDSP
Summary of some OCSP client tests:

- `Root` is self-signed and does not have any EKUs
- `ICA` is signed by `Root` with the EKUs ServerAuth and ClientAuth
- `ICA 2` is signed by `Root` with the EKUs ServerAuth, ClientAuth, and
OCSPSigning
- `Server certificate` is signed by `ICA` with the EKUs ServerAuth and
ClientAuth
- Both `ICA 2` and `ICA` have their own delegated OCSP responder
certificate.
- `ICA 2` signs an OCSP response for `ICA` and overrules the response
created by the delegated responder.

certutil (Windows): Recognizes but rejects the revoked response
openssl (Ubuntu & MacOS): Accepts the response
ocspcheck (MacOS): Accepts the response

Output and script located on:
https://gist.github.com/vanbroup/84859cd10479ed95c64abe6fcdbdf83d
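Paul's scenario can also be sketched in code. The following is a sketch under assumptions (it uses the third-party Python `cryptography` package; all keys and names are synthetic): `ICA 2` signs a REVOKED response for its sibling `ICA`, and a naive client check that only asks whether the signer asserts id-kp-OCSPSigning accepts that signer.

```python
# Sketch only (assumes the third-party "cryptography" package; all keys,
# names, and profiles are synthetic): `ICA 2`, which carries the
# OCSPSigning EKU, signs a REVOKED response for its sibling `ICA`.
import datetime

from cryptography import x509
from cryptography.x509 import ocsp
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

NOW = datetime.datetime.now(datetime.timezone.utc)


def make_ca(cn, issuer_cn, issuer_key, key, ekus=None):
    """Build a minimal CA certificate; EKUs are optional."""
    builder = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)]))
        .issuer_name(
            x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(NOW)
        .not_valid_after(NOW + datetime.timedelta(days=7))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                       critical=True)
    )
    if ekus:
        builder = builder.add_extension(x509.ExtendedKeyUsage(ekus),
                                        critical=False)
    return builder.sign(issuer_key, hashes.SHA256())


root_key, ica_key, ica2_key = (ec.generate_private_key(ec.SECP256R1())
                               for _ in range(3))
root = make_ca("Root", "Root", root_key, root_key)
ica = make_ca("ICA", "Root", root_key, ica_key,
              [ExtendedKeyUsageOID.SERVER_AUTH,
               ExtendedKeyUsageOID.CLIENT_AUTH])
ica2 = make_ca("ICA 2", "Root", root_key, ica2_key,
               [ExtendedKeyUsageOID.SERVER_AUTH,
                ExtendedKeyUsageOID.CLIENT_AUTH,
                ExtendedKeyUsageOID.OCSP_SIGNING])

# `ICA 2` signs a "revoked" response for its sibling `ICA`.
resp = (
    ocsp.OCSPResponseBuilder()
    .add_response(cert=ica, issuer=root, algorithm=hashes.SHA256(),
                  cert_status=ocsp.OCSPCertStatus.REVOKED,
                  this_update=NOW,
                  next_update=NOW + datetime.timedelta(days=1),
                  revocation_time=NOW, revocation_reason=None)
    .responder_id(ocsp.OCSPResponderEncoding.NAME, ica2)
    .certificates([ica2])
    .sign(ica2_key, hashes.SHA256())
)

# Naive acceptance check: nothing here stops a sibling CA certificate.
signer_eku = ica2.extensions.get_extension_for_class(
    x509.ExtendedKeyUsage).value
naive_ok = ExtendedKeyUsageOID.OCSP_SIGNING in signer_eku
print(resp.certificate_status, naive_ok)
```

A client is only safe here if, beyond checking the EKU, it also requires the responder certificate to have been issued by the same CA that issued the certificate whose status is being checked (or to match an explicitly configured responder).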

On Mon, 6 Jul 2020 at 12:09, Dimitris Zacharopoulos <ji...@it.auth.gr>
wrote:

Rob Stradling

Jul 6, 2020, 7:47:22 AM7/6/20
to Dimitris Zacharopoulos, dev-secur...@lists.mozilla.org
On 06/07/2020 06:11, Dimitris Zacharopoulos via dev-security-policy wrote:
<snip>
> IETF made an attempt to set an extension for EKU constraints
> (https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/)
> where Rob Stradling made an indirect reference in
> https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ
>
> (Rob, please correct me if I'm wrong).
>
> There was a follow-up discussion in IETF that concluded that no one should
> deal with this issue
> (https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/).
> A day later, all attempts died off because no one would actually
> implement it:
> https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/.
> If this extension was standardized, we would probably not be having this
> issue right now. However, this entire topic demonstrates the necessity
> to standardize the EKU existence in CA Certificates as constraints for
> EKUs of leaf certificates.

If only we could edit RFC2459 so that it (1) defined an "EKU
constraints" extension and (2) said that the EKU extension MUST NOT
appear in CA certificates...

Unfortunately, we're more than 20 years too late to do that. And whilst
it completely sucks that real-world use of the EKU extension comes with
some nasty footguns, I just don't see how you'd ever persuade the WebPKI
ecosystem to adopt a new "EKU Constraints" extension at this point in
history.

--
Rob Stradling
Senior Research & Development Scientist
Sectigo Limited

Rob Stradling

Jul 6, 2020, 9:37:48 AM7/6/20
to Dimitris Zacharopoulos, dev-secur...@lists.mozilla.org
On 06/07/2020 12:47, Rob Stradling via dev-security-policy wrote:
> On 06/07/2020 06:11, Dimitris Zacharopoulos via dev-security-policy wrote:
> <snip>
>> IETF made an attempt to set an extension for EKU constraints
>> (https://datatracker.ietf.org/doc/draft-housley-spasm-eku-constraints/)
>> where Rob Stradling made an indirect reference in
>> https://groups.google.com/d/msg/mozilla.dev.security.policy/f5-URPoNarI/yf2YLpKJAQAJ
>>
>>
>> (Rob, please correct me if I'm wrong).
>>
>> There was a follow-up discussion in IETF that concluded that no one should
>> deal with this issue
>> (https://mailarchive.ietf.org/arch/msg/spasm/3zZzKa2lcT3gGJOskVrnODPBgM0/).
>>
>> A day later, all attempts died off because no one would actually
>> implement it:
>> https://mailarchive.ietf.org/arch/msg/spasm/_gJTeUjxc2kmDcRyWPb9slUF47o/.
>> If this extension was standardized, we would probably not be having this
>> issue right now. However, this entire topic demonstrates the necessity
>> to standardize the EKU existence in CA Certificates as constraints for
>> EKUs of leaf certificates.

Oh, I misread.

Standardizing the use of the existing EKU extension in CA certificates
as a constraint for permitted EKUs in leaf certificates has been
proposed at IETF before. Probably many times before. However, plenty
of people take the (correct, IMHO) view that the EKU extension was not
intended to be (ab)used in this way, and so the chances of getting
"rough consensus" for a Standards Track RFC to specify this seems rather
remote.

I suppose it might be worth drafting an Informational RFC that explains
how the EKU extension is used in practice, what the footguns are and how
to avoid them, what the security implications are of doing EKU wrong, etc.

zxzxz...@gmail.com

Jul 7, 2020, 11:07:20 AM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, July 2, 2020 at 12:06:22 AM UTC+3, Ryan Sleevi wrote:
> Unfortunately, revocation of this certificate is simply not enough to
> protect Mozilla TLS users. This is because this Sub-CA COULD provide OCSP
> for itself that would successfully validate, AND provide OCSP for other
> revoked sub-CAs, even if it was revoked.

If I understand correctly, the logic behind the proposal to destroy the intermediate CA private key now is to avoid a situation where, if that key is later compromised, the intermediate CA becomes non-revocable until it expires.

So the action now is required to mitigate a potential security risk that can materialize later.

Can't the affected CAs decide on their own whether to destroy the intermediate CA private key now, or in case the affected intermediate CA private key is later compromised, revoke the root CA instead?

Matt Palmer

Jul 7, 2020, 10:36:58 PM
to dev-secur...@lists.mozilla.org
On Mon, Jul 06, 2020 at 10:53:50AM -0700, zxzxzx66669--- via dev-security-policy wrote:
> Can't the affected CAs decide on their own whether to destroy the
> intermediate CA private key now, or in case the affected intermediate CA
> private key is later compromised, revoke the root CA instead?

No, because there's no reason to believe that a CA would follow through on
their decision, and rapid removal of trust anchors (which is what "revoke
the root CA" means in practice) has all sorts of unpleasant consequences
anyway.

- Matt

Ryan Sleevi

Jul 7, 2020, 11:02:56 PM
to Matt Palmer, MDSP
Er, not quite?

I mean, yes, removing the root is absolutely the final answer, even if
waiting until something "demonstrably" bad happens.

The question is simply whether or not user agents will accept the risk of
needing to remove the root suddenly, and with significant (e.g. active)
attack, or whether they would, as I suggest, take steps to remove the root
beforehand, to mitigate the risk. The cost of issuance plus the cost of
revocation are a fixed cost: it's either pay now or pay later. And it seems
like if one needs to contemplate revoking roots, it's better to do it
sooner, than wait for it to be an inconvenient or inopportune time. This is
what I meant earlier, when I said a solution that tries to wait until the
'last possible minute' is just shifting the cost of misissuance onto
RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
shift costs onto the ecosystem like that seems like it's not a CA that can
be trusted to, well, be trustworthy.

ccam...@gmail.com

Jul 10, 2020, 12:01:01 PM
to mozilla-dev-s...@lists.mozilla.org
Wouldn't it be enough to check that OCSP responses are signed with a certificate which presents the (mandatory, per the BRs) id-pkix-ocsp-nocheck extension? I've not checked, but I don't think that subordinate CA certificates have that extension.
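[Editor's note: a minimal sketch of the certificate-side check being proposed here, using Python's third-party `cryptography` library; the helper functions and demo certificate are illustrative, not from the thread. A delegated responder is identified by the id-kp-OCSPSigning EKU (RFC 6960 §4.2.2.2), and BR §4.9.9 requires such certificates to carry id-pkix-ocsp-nocheck.]

```python
from datetime import datetime, timedelta, timezone

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID


def is_delegated_responder(cert: x509.Certificate) -> bool:
    # RFC 6960 4.2.2.2: a delegated responder is indicated by the
    # presence of the id-kp-OCSPSigning EKU.
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
    except x509.ExtensionNotFound:
        return False
    return ExtendedKeyUsageOID.OCSP_SIGNING in eku.value


def has_ocsp_nocheck(cert: x509.Certificate) -> bool:
    # BR 4.9.9: delegated responders MUST include id-pkix-ocsp-nocheck.
    try:
        cert.extensions.get_extension_for_class(x509.OCSPNoCheck)
        return True
    except x509.ExtensionNotFound:
        return False


def _demo_cert(with_nocheck: bool) -> x509.Certificate:
    # Toy self-signed certificate with the OCSPSigning EKU; omitting
    # id-pkix-ocsp-nocheck mimics the misissued certificates.
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "demo responder")])
    now = datetime.now(timezone.utc)
    builder = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=1))
        .add_extension(
            x509.ExtendedKeyUsage([ExtendedKeyUsageOID.OCSP_SIGNING]),
            critical=False,
        )
    )
    if with_nocheck:
        builder = builder.add_extension(x509.OCSPNoCheck(), critical=False)
    return builder.sign(key, hashes.SHA256())


ok = _demo_cert(with_nocheck=True)
misissued = _demo_cert(with_nocheck=False)
print(is_delegated_responder(misissued) and not has_ocsp_nocheck(misissued))
```

As Ryan notes in the reply below this message, making relying-party software enforce this would be a client behaviour change working around the CA's profile violation, not a fix for it.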

zxzxz...@gmail.com

Jul 10, 2020, 12:01:57 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, July 8, 2020 at 6:02:56 AM UTC+3, Ryan Sleevi wrote:
> The question is simply whether or not user agents will accept the risk of
> needing to remove the root suddenly, and with significant (e.g. active)
> attack, or whether they would, as I suggest, take steps to remove the root
> beforehand, to mitigate the risk. The cost of issuance plus the cost of
> revocation are a fixed cost: it's either pay now or pay later. And it seems
> like if one needs to contemplate revoking roots, it's better to do it
> sooner, than wait for it to be an inconvenient or inopportune time. This is
> why I meant earlier, when I said a solution that tries to wait until the
> 'last possible minute' is just shifting the cost of misissuance onto
> RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
> shift costs onto the ecosystem like that seems like it's not a CA that can
> be trusted to, well, be trustworthy.


This assumes that the private key of these intermediate CAs will inevitably get compromised.

Why such an assumption?

Following the same argument we can assume that the private key of any root CA will inevitably get compromised and suggest all CAs to revoke their roots already today. Does not seem to make sense.

Tofu Kobe

Jul 10, 2020, 12:55:59 PM
to zxzxz...@gmail.com, mozilla-dev-s...@lists.mozilla.org
Mr. zxzxzx66669,

The "real" risk, which would be illustrated through an adversary,
vulnerability, impact probability, risk mitigation strategy and the
residual risk, doesn't matter. Hence it is not discussed. I've yet to see
a comprehensive risk assessment on this matter.

The primary reason there is no real discussion is that all the CAs have
chickened out due to the "distrust" flag from Mr. Sleevi. This is
supposed to be a community for free discussion, but he essentially
implied that arguing = distrust. "Distrust" is equivalent to a death
sentence for a CA. So...can't really blame 'em for chickening out.

As an individual observing this whole situation, I'm wondering too.
You are not alone.

Best regards,

T.K.


On 7/10/2020 7:35 PM, zxzxzx66669--- via dev-security-policy wrote:
> On Wednesday, July 8, 2020 at 6:02:56 AM UTC+3, Ryan Sleevi wrote:
>> The question is simply whether or not user agents will accept the risk of
>> needing to remove the root suddenly, and with significant (e.g. active)
>> attack, or whether they would, as I suggest, take steps to remove the root
>> beforehand, to mitigate the risk. The cost of issuance plus the cost of
>> revocation are a fixed cost: it's either pay now or pay later. And it seems
>> like if one needs to contemplate revoking roots, it's better to do it
>> sooner, than wait for it to be an inconvenient or inopportune time. This is
>> why I meant earlier, when I said a solution that tries to wait until the
>> 'last possible minute' is just shifting the cost of misissuance onto
>> RPs/Browsers, by leaving them to clean up the mess. And a CA that tries to
>> shift costs onto the ecosystem like that seems like it's not a CA that can
>> be trusted to, well, be trustworthy.
>
> This assumes that the private key of these intermediate CAs will inevitably get compromised.
>
> Why such an assumption?
>
> Following the same argument we can assume that the private key of any root CA will inevitably get compromised and suggest all CAs to revoke their roots already today. Does not seem to make sense.

Ryan Sleevi

Jul 10, 2020, 1:30:57 PM
to ccam...@gmail.com, mozilla-dev-security-policy
On Fri, Jul 10, 2020 at 12:01 PM ccampetto--- via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Wouldn't be enough to check that OCSP responses are signed with a
> certificate which presents the (mandatory, by BR) id-pkix-ocsp-nocheck?
> I've not checked, but I don't think that subordinate CA certificates have
> that extension


You're describing a behaviour change to all clients, in order to work
around the CA not following the profile.

This is a common response to many misissuance events: if the client
software does not enforce that CAs actually do what they say, then it's not
really a rule. Or, alternatively, that the only rules should be what
clients enforce. We see this come up from time to time, e.g. certificate
lifetimes, but this is a way of externalizing the costs/risks onto clients.

None of this changes what clients, in the field, today do. And if the
problem was caused by a CA, isn't it reasonable to expect the problem to be
fixed by the CA?

Oscar Conesa

Jul 11, 2020, 1:18:02 PM
to dev-secur...@lists.mozilla.org
As a summary of the situation, we consider that:

a) Affected certificates do not comply with the requirements (EKU
OCSPSigning without the id-pkix-ocsp-nocheck extension). They are
misissued and must be revoked.

b) This non-compliance issue has potential security risks in case of key
compromise and/or malicious use of the keys, as indicated by Ryan Sleevi.

c) No key has been compromised nor has the malicious or incorrect use of
the key been detected, so at the moment there are no security incidents

d) There are two groups of affected CAs: (i) CAs that maintain sole
control of the affected keys and (ii) CAs that have delegated the
control of these keys to other entities.

e) In the case of CAs who DO NOT have sole control of the affected keys:
in addition to revoking the affected certificates, they should request
the delegated entities to proceed with the destruction of the keys in a
safe and audited manner. This does not guarantee 100% that all copies of
the keys will indeed be destroyed, as audits and procedures have their
limitations. But it does guarantee that the CA has done everything in
their power to avoid the compromise of these keys.

f) For CAs that DO have sole control of the keys: There is no reason to
doubt the CA's ability to continue to maintain the security of these
keys, so the CA could reuse the keys by reissuing the certificate with
the same keys. If there are doubts about the ability of a CA to protect
its own critical keys, that CA cannot be considered "trusted" in any way.

g) On the other hand, if the affected certificate (with EKU OCSPSigning)
does not have the KU Digital Signature, then that certificate cannot
generate valid OCSP responses according to the standard. This situation
has two consequences: (i) the CA cannot generate OCSP responses by
mistake using this certificate, since its own software prevents it, and
(ii) in the event that an attacker compromises the keys and uses
modified software to generate malicious OCSP responses, it would also be
necessary for the client software to have a bug that accepted these
malicious and malformed OCSP responses. In this case, the hypothetical
scenarios involving security risks are even more limited.


Filippo Valsorda

Jul 11, 2020, 6:36:30 PM
to dev-secur...@lists.mozilla.org
2020-07-11 13:17 GMT-04:00 Oscar Conesa via dev-security-policy <dev-secur...@lists.mozilla.org>:
> f) For CAs that DO have sole control of the keys: There is no reason to
> doubt the CA's ability to continue to maintain the security of these
> keys, so the CA could reuse the keys by reissuing the certificate with
> the same keys. If there are doubts about the ability of a CA to protect
> its own critical keys, that CA cannot be considered "trusted" in any way.

In this section, you argue that we (the relying party ecosystem, I am speaking in my personal capacity) should not worry about the existence of unrevocable ICAs with long expiration dates, because we can trust CAs to operate them safely.

> g) On the other hand, if the affected certificate (with EKU OCSPSigning)
> does not have the KU Digital Signature, then that certificate cannot
> generate valid OCSP responses according to the standard. This situation
> has two consequences: (i) the CA cannot generate OCSP responses by
> mistake using this certificate, since its own software prevents it, and
> (ii) in the event that an attacker compromises the keys and uses
> modified software to generate malicious OCSP responses, it will be also
> necessary that the client software had a bug that validated these
> malicious and malformed OCSP responses. In this case, the hypothetical
> scenarios involving security risks are even more limited.

In this section, you argue that we can't trust CAs to apply the id-kp-OCSPSigning EKU correctly and it's then our responsibility to check the rest of the profile for consistency.

These two arguments seem at odds to me.

Ryan Sleevi

Jul 11, 2020, 8:22:13 PM
to Oscar Conesa, MDSP
On Sat, Jul 11, 2020 at 1:18 PM Oscar Conesa via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> f) For CAs that DO have sole control of the keys: There is no reason to
> doubt the CA's ability to continue to maintain the security of these
> keys, so the CA could reuse the keys by reissuing the certificate with
> the same keys. If there are doubts about the ability of a CA to protect
> its own critical keys, that CA cannot be considered "trusted" in any way.
>

While Filippo has pointed out the logical inconsistency of f and g, I do
want to establish here the problem with f, as written.

For CAs that DO have sole control of the keys: There is no reason to
*trust* the CA's ability to continue to maintain the security of the keys.

I want to be clear here: CAs are not trusted by default. The existence of a
CA, within a Root Program, is not a blanket admission of trust in the CA.
We can see this through elements such as annual audits, and we can also see
this in the fact that, for the most part, CAs have largely not been removed
on the basis of individual incident reports.

CAs seem to assume that they're trusted until they prove otherwise, when
rather, the opposite is true: we constantly view CAs through the lens of
distrusting them, and it is only by the CA's action and evidence that we
hold off on removing trust in them. Do they follow all of the requirements?
Do they disclose sufficient detail in how they operate? Do they maintain
annual audits with independent evaluation? Do they handle incident reports
thoughtfully and thoroughly, or do they dismiss or minimize them?

As it comes to this specific issue: there is zero reason to trust that a
CA's key, intended for issuing intermediates, is sufficiently protected
from being able to issue OCSP responses. As you point out in g), that's not
a thing some CAs have expected to need to do, so why would or should they?
The CA needs to provide sufficient demonstration of evidence that this has
not, can not, and will not happen. And even then, it's merely externalizing
risk: the community has to constantly be evaluating that evidence in
deciding whether to continue. That's why any failure to revoke, or any
revocation by rotating EKUs but without rolling keys, is fundamentally
insufficient.

The question is not "Do these keys need to be destroyed", but rather, "when
do these keys need to be destroyed" - and CAs need to come up with
meaningful plans to get there. I would consider it unacceptable if that
process lasted a year, and highly questionable if it lasted 9 months,
because these all rely on clients, globally, accepting the risk that a
control will fail. If a CA is going beyond the 7 days require by the BRs -
which, to be clear, it would seem the majority are - they absolutely need
to come up with a plan to remove this eventual risk, and detail their logic
for the timeline about when, how, and why they've chosen when they chose.

As I said, there's no reason to trust the CA here: there are plenty of ways
the assumed controls are insufficient. The CA needs to demonstrate why.

Oscar Conesa

Jul 12, 2020, 4:19:21 PM
to MDSP
On 12/7/20 2:21, Ryan Sleevi wrote:
> I want to be clear here: CAs are not trusted by default. The existence
> of a CA, within a Root Program, is not a blanket admission of trust in
> the CA.

Here we have a deep disagreement: A CA within a Root Program must be
considered a trusted CA by default. Mistrust in a CA's ability to operate
safely can occur BEFORE it is admitted to the Root Program or AFTER it is
removed from the Root Program. Relying parties trust the Root Program
(which implies that they trust all the CAs that are part of the program,
without exception).

To obtain this confidence, CAs must comply with all the requirements
that are imposed on them in the form of Policies, Norms, Standards and
Audits that are decided on an OBJECTIVE basis for all CAs. The
fulfillment of all these requirements must be NECESSARY, but also
SUFFICIENT to stay in the Root Program.

Some CAs may want to assume a leadership role in the sector and
unilaterally assume more additional strict security controls. That is
totally legitimate. But it is also legitimate for other CAs to assume a
secondary role and limit ourselves to complying with all the
requirements of the Root Program. You cannot remove a CA from a Root
Program for not meeting fully SUBJECTIVE additional requirements.

I want to highlight that both the "destruction of uncompromised keys"
and "the prohibition to reuse uncompromised keys" are two security
controls that do not appear in any requirement of the Mozilla Root
Program, so CAs have no obligation to fulfill them. If someone considers
these security controls necessary, they can request that they be
included in the next version of the corresponding standard.

Matt Palmer

Jul 12, 2020, 6:59:58 PM
to dev-secur...@lists.mozilla.org
On Sun, Jul 12, 2020 at 10:13:59PM +0200, Oscar Conesa via dev-security-policy wrote:
> Some CAs may want to assume a leadership role in the sector and unilaterally
> assume more additional strict security controls. That is totally legitimate.
> But it is also legitimate for other CAs to assume a secondary role and limit
> ourselves to complying with all the requirements of the Root Program. You
> cannot remove a CA from a Root Program for not meeting fully SUBJECTIVE
> additional requirements.

I fear that your understanding of the Mozilla Root Store Policy is at odds
with the text of that document.

"Mozilla MAY, at its sole discretion, decide to disable (partially or fully)
or remove a certificate at any time and for any reason."

I'd like to highlight the phrase "at its sole discretion", and also "for any
reason".

If the CA Module owner wakes up one day and, having had a dream which causes
them to dislike the month of July, decides that all CAs whose root
certificates have a notBefore in July must be removed, the impacted CAs do
not have any official cause for complaint. I have no doubt that such an
arbitrary decision would be reversed, and the consequences would not make it
into production, but the decision would not be reversed because it "cannot"
happen, but rather because it is contrary to the interests of Mozilla and
the user community which Mozilla serves.

- Matt

Ryan Sleevi

Jul 12, 2020, 9:03:25 PM
to Oscar Conesa, MDSP
On Sun, Jul 12, 2020 at 4:19 PM Oscar Conesa via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> To obtain this confidence, CAs must comply with all the requirements
> that are imposed on them in the form of Policies, Norms, Standards and
> Audits that are decided on an OBJECTIVE basis for all CAs. The
> fulfillment of all these requirements must be NECESSARY, but also
> SUFFICIENT to stay in the Root Program.
>

As Matt Palmer points out, that's not consistent with how any root program
behaves. This is not unique to Mozilla, as you find similar text at
Microsoft and Google, and I'm sure you'd find similar text with Apple were
they to say anything.

Mozilla's process is transparent, in that it seeks to weigh public
information, but it's inherently subjective: this can easily be seen by the
CP/CPS reviews, which are, by necessity, a subjective evaluation of risk
criteria based on documentation. You can see this in the CAs that have been
accepted, and the applications that have been rejected, and the CAs that
have been removed and those that have not been. Relying parties act on the
information available to them, including how well the CA handles and
responds to incidents.

Some CAs may want to assume a leadership role in the sector and
> unilaterally assume more additional strict security controls. That is
> totally legitimate. But it is also legitimate for other CAs to assume a
> secondary role and limit ourselves to complying with all the
> requirements of the Root Program. You cannot remove a CA from a Root
> Program for not meeting fully SUBJETIVE additional requirements.
>

CAs have been, can be, and will continue to be. I think we should be
precise here: we're talking about an incident response. Were things as
objective as you present, then every CA who has misissued such a
certificate would be at immediate risk of total and complete distrust. We
know that's not a desirable outcome, nor a likely one, so we recognize that
there is, in fact, a shade of gray here for judgement.

That judgement is whether or not the CA is taking the issue seriously, and
acting to assume a leadership role. CAs that fail to do so are CAs that
pose risk, and it may be that the risk they pose is unacceptable. Key
destruction is one way to reassure relying parties that the risk is not
possible. I think a CA that asked for it to be taken on faith,
indefinitely, is a CA that fundamentally misunderstands the purpose and
goals of a root program.

Chema Lopez

Jul 13, 2020, 1:40:16 PM
to dev-secur...@lists.mozilla.org
From my point of view, the arguments at
https://www.mail-archive.com/dev-secur...@lists.mozilla.org/msg13642.html
are as incontestable as the ones stated by Corey Bonnell here:
https://www.mail-archive.com/dev-secur...@lists.mozilla.org/msg13541.html


RFC 5280 and RFC 6960 have to be considered, and thus a certificate
without the KU digitalSignature is not an OCSP responder. We cannot pick
and choose what to comply with, which requirements are mandatory, or
whether an RFC is mandatory when the BRs merely "profile" it. And when I
say "we" I mean all the players, especially the ones in the CA/Browser
Forum.


And yes, relying parties need to check this. For their own benefit,
relying parties need to understand how a proper OCSP response is
constructed and check it accordingly.


It is astonishing how what looks like a bad practice of (some) relying
parties has mutated into a security risk on the CAs' side.


It is not only a matter of CAs leading the resolution of an, at best,
questionable security risk. It is a matter of all of us working together.


It is no secret that the CA/B Forum is not living its best moments, in
part due to unilateral decisions by (again, some) browsers against the
democratic (in terms of the CA/B Forum bylaws) decision of a ballot.


It is time for CAs and Browsers to collaborate again, instead of the
lately-usual pattern of (some) Browsers slapping CAs. For transparency's
sake, I think it would be a nice initiative for Browsers to disclose
their practices regarding the validation of OCSP responses and, all
working together, to improve or even design practices to be followed,
although following RFC 5280 and RFC 6960 should be sufficient.


Thanks,

Chema.