
Mozilla Policy and CCADB Disclosure scope


Gervase Markham

May 19, 2017, 9:48:14 AM
to mozilla-dev-s...@lists.mozilla.org
We need to have a discussion about the appropriate scope for:

1) the applicability of Mozilla's root policy
2) required disclosure in the CCADB

The two questions are related, with 2) obviously being a subset of 1).
It's also possible we might decide that for some certificates, some
subset of the Mozilla policy applies, but not all of it.

I'm not even sure how best to frame this discussion, so let's have a go
from this angle, and if it runs into the weeds, we can try again another
way.

The goal of scoping the Mozilla policy is, to my mind, to make the
policy sufficiently broadly applicable that it covers all
publicly-trusted certs, while also not leaving unregulated such a
large number of untrusted certs inside publicly-trusted hierarchies
that it holds back forward progress on standards and security.

The goal of CCADB disclosure is to see what's going on inside the WebPKI
in sufficient detail that we don't miss important things. Yes, that's vague.

Here follows a list of scenarios for certificate issuance. Which of
these situations should be in full Mozilla policy scope, which should
be in partial scope (if any), and which of those should require CCADB
disclosure? Are there scenarios I've missed? (A sketch of a few of
these constraints as X.509 extensions follows the list.)

A) Unconstrained intermediate
AA) EE below
B) Intermediate constrained to id-kp-serverAuth
BB) EE below
C) Intermediate constrained to id-kp-emailProtection
CC) EE below
D) Intermediate constrained to anyEKU
DD) EE below
E) Intermediate usage-constrained some other way
EE) EE below
F) Intermediate name-constrained (dnsName/ipAddress)
FF) EE below
G) Intermediate name-constrained (rfc822Name)
GG) EE below
H) Intermediate name-constrained (srvName)
HH) EE below
I) Intermediate name-constrained some other way
II) EE below
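
To make these scenarios concrete, here is a minimal sketch (in Python,
using the pyca/cryptography library; all names and values are
illustrative assumptions, not drawn from any real hierarchy) of the
extension objects that distinguish the usage-constrained scenarios
(B/C) from the name-constrained ones (F/G):

    import ipaddress

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    # Scenario B: intermediate constrained to id-kp-serverAuth
    eku_server = x509.ExtendedKeyUsage([ExtendedKeyUsageOID.SERVER_AUTH])

    # Scenario C: intermediate constrained to id-kp-emailProtection
    eku_email = x509.ExtendedKeyUsage([ExtendedKeyUsageOID.EMAIL_PROTECTION])

    # Scenario F: intermediate name-constrained via dnsName/ipAddress
    nc_dns = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("example.com")],
        excluded_subtrees=[
            x509.IPAddress(ipaddress.IPv4Network("0.0.0.0/0")),  # all of IPv4
            x509.IPAddress(ipaddress.IPv6Network("::/0")),       # all of IPv6
        ],
    )

    # Scenario G: intermediate name-constrained via rfc822Name
    nc_email = x509.NameConstraints(
        permitted_subtrees=[x509.RFC822Name("example.com")],
        excluded_subtrees=None,
    )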

If a certificate were to only be partially in scope, one could imagine
it being exempt from one or more of the following sections of the
Mozilla policy:

* BR Compliance (2.3)
* Audit (3.1) and auditors (3.2)
* CP and CPS (3.3)
* CCADB (4)
* Revocation (6)

It's also further possible that BR Compliance could be split, requiring
compliance to some parts of the BRs but not others.

So this is a complicated question!

One reasonable enquiry would be: what is the status quo? I _think_ it's
as follows:

1) Applicability (Mozilla Root Store Policy section 1.1):
all except E), EE), I) and II), but only those EE certs with no EKU,
or at least one of serverAuth, emailProtection or anyEKU.
2) Disclosure (CCADB Common Policy section 4):
A), B), D), possibly H) and I) depending on EKU.

Is that right?

Gerv

Matthew Hardeman

May 19, 2017, 3:41:20 PM
to mozilla-dev-s...@lists.mozilla.org
Not speaking to the status quo, but in terms of updates/changes which might be considered for incorporation into policy: I would suggest recognizing the benefit of name-constrained intermediates and allowing a reduction in burden for entities holding and utilizing name-constrained intermediates, both in SSL Server Authentication and in Email Protection. (Probably also allowing OCSP signing, client authentication, certain encrypted-storage extended key usages, etc.)

From the perspective of risk to the broader web PKI, it would appear that a properly name-constrained intermediate with (for example) only the Server and Client TLS authentication EKUs and name constraints limited to particular validated domains (via dnsName constraints, along with exclusions covering the entire IPv4 and IPv6 address space) is not substantively more risky than a multi-SAN wildcard certificate covering the same domains. Indeed, in the kind of enterprise environment where such an intermediate might be used, it is quite possible that one intermediate certificate in an enterprise CA would result in fewer wildcard certificates being distributed within that organization, the organization electing instead to dynamically generate system- and purpose-specific certificates internally without further direct external issuance cost.
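
As a concrete illustration of the profile described above, here is a
hedged sketch (Python, pyca/cryptography; the organization names, keys,
and the 825-day lifetime are hypothetical assumptions for illustration)
of how such an intermediate could be built:

    import datetime
    import ipaddress

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import ExtendedKeyUsageOID, NameOID

    parent_key = ec.generate_private_key(ec.SECP256R1())  # stands in for the public CA's key
    subca_key = ec.generate_private_key(ec.SECP256R1())
    now = datetime.datetime.now(datetime.timezone.utc)

    tcsc = (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Corp TCSC")]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Public CA")]))
        .public_key(subca_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=825))  # lifetime is debated below
        # A CA certificate, but with no deeper intermediates permitted
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # Usage-constrained: TLS server and client authentication only
        .add_extension(
            x509.ExtendedKeyUsage(
                [ExtendedKeyUsageOID.SERVER_AUTH, ExtendedKeyUsageOID.CLIENT_AUTH]
            ),
            critical=False,
        )
        # Name-constrained: one permitted dnsName subtree, all IP space excluded
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("example.com")],
                excluded_subtrees=[
                    x509.IPAddress(ipaddress.IPv4Network("0.0.0.0/0")),
                    x509.IPAddress(ipaddress.IPv6Network("::/0")),
                ],
            ),
            critical=True,
        )
        .sign(parent_key, hashes.SHA256())
    )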

Similarly, name constraints for email protection and client authentication certificates within limited domain trees could be useful for S/MIME and login certs.

If I am not gravely mistaken with respect to the overall risk to the ecosystem, clear rules encouraging issuance of such intermediates would seem preferable to other mechanisms which require close observation of the signatures eventually issued by the intermediate. (Can we really trust that a CA cut for an enterprise customer is held by a trusted CA in their infrastructure, and that EE certificate contents are constrained by systems and rules at said CA, merely because the CA says that's how it is?)

I would propose, for example, that the intermediate certificate itself, and the process which led to its issuance, be fully within the scope of the program and its requirements: the validation data would be authenticated and preserved by the CA, and so on. With respect to the running of the technically constrained CA, is it possible to reward the narrow technical constraints with a hands-off approach to the use of the intermediate in issuing down-line certificates, especially end-entity certificates? For example, no requirement of audit by the enterprise holding the technically constrained intermediate, and no requirement for audit or disclosure of certificates issued by the enterprise from that technically constrained intermediate.

In short, a compromise of the technically constrained intermediate has quite a limited scope of harm and generally impacts only matters in which the enterprise that lost control of the intermediate is at least one of the parties to the transaction. (For example, a bank losing control of a trusted but technically constrained intermediate might cause an outside customer to believe that they're at the bank's website; but even while the outside customer is a victim, the bank is also a victim in this circumstance.) In reality, the loss of a wildcard certificate with the same domain(s) is just as damning for the same bank. The difference is that it is highly probable that a wildcard certificate with those domains will be installed on multiple systems, and far less likely that the technically constrained intermediate certificate will be.

As an enterprise customer, managing an in-house PKI with external CA costs limited to the price of issuing a technically constrained intermediate, versus buying hundreds of specific endpoint certificates, may be attractive to a category of enterprises with complex needs for shared internal/external trust paths, and may encourage better deployment practice through cost optimization.

In an ideal world, if (and only if) the integrity of the Web PKI is not compromised by taking a freer hand with technically constrained intermediates of limited scope, it is my belief that overall security and adoption of good practice might be improved by reducing the compliance burdens upon enterprises that utilize such a constrained intermediate. If this is so, providing clear rules for the standards and issuance of these certificates may allow a relatively new and unknown product (technically constrained CAs standardized to the point that they're on the price list just like an SSL cert) to enter the market on more competitive terms, perhaps a product through which various CAs can offer greater differentiation and value.

I think the right compromise might be to treat the intermediate itself as in scope for the policy, with the issuance of the intermediate subject to audit (e.g. WebTrust), but with the down-line activities engaged in with a name-constrained intermediate not subject to audit, disclosure, or policies beyond those already enforced by the technical constraints.

As to disclosure of these name constrained intermediates, I should think that if they became popular, even among largish enterprises, there might arise quite a lot of such intermediates. Perhaps rather than in CCADB, these name constrained intermediates should be required as a matter of policy to be submitted to CT logs (to an acceptable number of logs, with an acceptable number of those under separate administrative control). A possibility to strengthen issuance of these types of certificates would be to require submission in pre-certificate form with an effective "publish for opposition" period of a week or two to elapse after CT pre-submission before the signed final certificate is returned to the purchaser.

Just my thoughts...

Matt

Nick Lamb

May 19, 2017, 9:57:17 PM
to mozilla-dev-s...@lists.mozilla.org
On Friday, 19 May 2017 20:41:20 UTC+1, Matthew Hardeman wrote:
> From the perspective of risk to the broader web PKI, it would appear that a properly name-constrained intermediate with (for example) only the Server and Client TLS authentication EKUs and name constraints limited to particular validated domains (via dnsName constraints, along with exclusions covering the entire IPv4 and IPv6 address space) is not substantively more risky than a multi-SAN wildcard certificate covering the same domains.

Unlike a wildcard, the constrained intermediate impacts all names under that tree. For example, a certificate for *.example.com definitely isn't valid for mail.research.example.com, www.research.example.com, etc., whereas a constrained intermediate for example.com _is_ able to issue for those names.
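
A toy comparison of the two matching rules (a simplified sketch; it
ignores public-suffix handling and other subtleties of RFC 6125 and
RFC 5280):

    def wildcard_matches(pattern: str, host: str) -> bool:
        # RFC 6125-style wildcard: "*" covers exactly one left-most label
        if not pattern.startswith("*."):
            return pattern.lower() == host.lower()
        return "." in host and host.lower().split(".", 1)[1] == pattern[2:].lower()

    def in_dns_subtree(constraint: str, host: str) -> bool:
        # RFC 5280 dNSName constraint: the name and every subdomain, any depth
        host, constraint = host.lower(), constraint.lower()
        return host == constraint or host.endswith("." + constraint)

    print(wildcard_matches("*.example.com", "mail.research.example.com"))  # False
    print(in_dns_subtree("example.com", "mail.research.example.com"))      # True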

But yes, overall Matt's approach makes sense to me; lightweight disclosure, such as via CT logging of such intermediates, is appropriate from what I can see. Issuance _of_ the intermediates needs to have good oversight, but we don't need to freak out about issuance _from_ them too much. If they're badly run, they will join the huge number of poorly-looked-after end-entity certificates, and have not dissimilar risk, narrowed to just the affected subject domain(s).

Matthew Hardeman

May 19, 2017, 11:03:35 PM
to mozilla-dev-s...@lists.mozilla.org
Thanks, Nick, for the comment on the scope difference between dnsName constraints and SAN wildcards. I hadn't contemplated that. As you note, the real risk isn't dissimilar. (I would presume this is because a CA willing to issue a SAN dnsName of *.example.com would also be willing to issue a SAN dnsName of *.research.example.com.)

Having given further consideration to the potential exposures and risks of certificates issued downline of a name-constrained intermediate, I did come up with one metric on which the risk profile _could_ differ from that of a wildcard EE cert: wildcard EE certs issued by a CA in the normal course of business must comply with the BRs pertaining to the period for which validation information may be reused and the maximum validity period of an EE certificate (and, of course, with the rest of the BRs). It's not clear that such a requirement would presently apply to a name-constrained, technically constrained intermediate certificate.

Perhaps it makes sense to accord preferential treatment only to name- and EKU-constrained intermediates which are also constrained to a limited validity period. Perhaps that constraint should mirror, or only slightly exceed, the limits on EE certs.

Out of curiosity, I checked crt.sh's CA certificates disclosure report to see if I could find an in-the-wild name constrained intermediate that I think largely reflects the overall attributes/constraints which might allow for preferential treatment, and found one for Southern Company (an energy & telecom company in the southeast USA). The intermediate certificate in question is at: https://crt.sh/?id=11501550

Southern Company seems to use this certificate to issue some of the EE certificates on some of their public facing websites. https://southernlinc.com features a leaf certificate issued from this technically constrained intermediate.

Features that I think may allow for preferential treatment of these intermediates, as demonstrated in the crt.sh-linked intermediate above, include:

1. The pathlen constraint in this example is 0, but I would presume that a pathlen of 1, to allow policy CAs with separate internal issuing intermediates, might be allowable too. I'm not sure whether allowing that adds further risk exposure. The effective EKU applied to the EE certificate is the most restrictive EKU of the certificates in the chain, including the leaf certificate, right? (Thus, declaring a further intermediate with lesser/different name constraints or extra EKU permissions would not be effective, correct? See the sketch after this list.)
2. The EKU values here are constrained to TLS Server Auth, TLS Client Auth, Email Protection, and OCSP Signing.
3. As the Email Protection EKU is selected, at least one permitted rfc822Name constraint exists. Here there are actually two: .southernco.com and southernco.com (presumably to allow issuance of email certs covering anything in the southernco.com namespace).
4. As the TLS Server Auth EKU is present, name-constraint exclusions covering the whole IPv4 and IPv6 address space are present. Additionally, a number of permitted dnsName constraints are provided, listing the domains over which the validated organization has control.
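
On question 1 above: RFC 5280 itself defines EKU only for end-entity
certificates, but many clients (NSS, Chrome, and Windows, among others)
do in practice intersect EKUs down the chain, so a deeper intermediate
cannot widen its parent's EKU set in those clients. A hedged sketch of
that intersection behavior (Python, pyca/cryptography; client handling
of anyEKU varies and is glossed over here):

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    def effective_ekus(chain: list[x509.Certificate]):
        """chain is leaf-first. Returns the set of EKU OIDs that survive
        intersection, as many clients compute it; None means unrestricted.
        A certificate with no EKU extension imposes no restriction here."""
        allowed = None
        for cert in chain:
            try:
                ext = cert.extensions.get_extension_for_oid(
                    ExtensionOID.EXTENDED_KEY_USAGE)
            except x509.ExtensionNotFound:
                continue
            ekus = set(ext.value)
            allowed = ekus if allowed is None else allowed & ekus
        return allowed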

In contemplating the risk of such intermediates specifically to the Web PKI, the only risk that I can presently imagine exceeding that of wildcard EE certificates thus pertains to validity period. The example certificate I found has a 5-year validity period.

While I am aware that one could cause the constrained intermediate to sign a certificate which is NOT Baseline Requirements compliant, it seems to me that modern Web PKI software would not honor such a certificate. With that caveat, it would appear that proper name constraints, validity period, EKUs, and path length constraints can combine to effectively force subordinate leaf certificates into alignment with the Baseline Requirements. Thus, it would seem that disclosure of certificates descending from such intermediates is likely unnecessary.

As long as the validation and issuance of such constrained intermediates is watched by an externally auditable mechanism like CT, I don't think these intermediates themselves would need specific disclosure in the CCADB.

Thanks,

Matt

Gervase Markham

May 22, 2017, 4:41:50 AM
to Matthew Hardeman
On 19/05/17 20:40, Matthew Hardeman wrote:
> Not speaking to the status quo, but in terms of updates/changes
> which might be considered for incorporation into policy: I would
> suggest recognizing the benefit of name-constrained intermediates
> and allowing a reduction in burden for entities holding and
> utilizing name-constrained intermediates, both in SSL Server
> Authentication and in Email Protection. (Probably also allowing
> OCSP signing, client authentication, certain encrypted-storage
> extended key usages, etc.)

This is certainly a question worth considering. I think a careful
comparative risk analysis is in order, and so thank you for starting
that process.

The issue with excluding any certificate or group of certificates from
the entire scope of the policy is that the issuer would then be free to
issue SHA-1 certs, certs with bad or unpermitted algorithms, and so on.
Are you suggesting that EE certs issued from such an intermediate be
entirely unregulated, or that we should strip down the regulation to
merely technical requirements, ignoring requirements on audit, CP/CPS,
revocation etc.?

>> From the perspective of risk to the broader web PKI, it would
>> appear that a properly name-constrained intermediate with (for
>> example) only the Server and Client TLS authentication EKUs and
>> name constraints limited to particular validated domains (via
>> dnsName constraints, along with exclusions covering the entire
>> IPv4 and IPv6 address space) is not substantively more risky than
>> a multi-SAN wildcard certificate covering the same domains.

I currently agree this is broadly true, with the exception of the
lifetime issue which you raise in a later message.

There would be little point in such a TCSC having a max lifetime equal
to the max lifetime of an EE cert, because then after day 1, the EE
certs it issues couldn't have the max lifetime (because the EE cert
can't last longer than the intermediate!). So perhaps max lifetime of EE
+ 1 year, so the issuing TCSC needs to be replaced once a year in order
for the organization to continue issuing max length certs?
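
A back-of-the-envelope version of that replacement schedule (the
825-day figure is an assumption taken from the contemporaneous BR
discussions of maximum EE lifetime):

    from datetime import date, timedelta

    MAX_EE = timedelta(days=825)                   # assumed max EE lifetime
    TCSC_LIFETIME = MAX_EE + timedelta(days=365)   # Gerv's "EE + 1 year"

    issued = date(2017, 6, 1)
    tcsc_expiry = issued + TCSC_LIFETIME
    # Last day the TCSC can issue a full-length EE cert that still nests
    # inside its own validity:
    last_full_length_issuance = tcsc_expiry - MAX_EE
    print(last_full_length_issuance)  # 2018-06-01: one year in, hence annual replacement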

The sub-subdomain issue is also a difference, but my current view is
that it doesn't have much of an effect on the risk profile in practice.

> As to disclosure of these name constrained intermediates, I should
> think that if they became popular, even among largish enterprises,
> there might arise quite a lot of such intermediates. Perhaps rather
> than in CCADB, these name constrained intermediates should be
> required as a matter of policy to be submitted to CT logs (to an
> acceptable number of logs, with an acceptable number of those under
> separate administrative control).

If we exempt the certs they issue from CP/CPS and audit requirements,
the need for such TCSCs to be disclosed in CCADB is much reduced.

Gerv

Matthew Hardeman

May 22, 2017, 11:43:45 AM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 3:41:50 AM UTC-5, Gervase Markham wrote:
> On 19/05/17 20:40, Matthew Hardeman wrote:
> > Not speaking to the status quo, but in terms of updates/changes
> > which might be considered for incorporation into policy: I would
> > suggest recognizing the benefit of name-constrained intermediates
> > and allowing a reduction in burden for entities holding and
> > utilizing name-constrained intermediates, both in SSL Server
> > Authentication and in Email Protection. (Probably also allowing
> > OCSP signing, client authentication, certain encrypted-storage
> > extended key usages, etc.)
>
> This is certainly a question worth considering. I think a careful
> comparative risk analysis is in order, and so thank you for starting
> that process.

I'm a long-time lurker on the list and happy to contribute what thoughts I may. I was uncertain whether the question(s) I raised were within the scope of this particular thread, but as you have engaged in dialogue, I'll proceed under the assumption that they are unless otherwise directed.

>
> The issue with excluding any certificate or group of certificates from
> the entire scope of the policy is that the issuer would then be free to
> issue SHA-1 certs, certs with bad or unpermitted algorithms, and so on.
> Are you suggesting that EE certs issued from such an intermediate be
> entirely unregulated, or that we should strip down the regulation to
> merely technical requirements, ignoring requirements on audit, CP/CPS,
> revocation etc.?
>

I am suggesting that regulations pertaining to issuance from these technically constrained subCAs be stripped down to purely technically enforced requirements, if and only if this can be done without disproportionate risk to the Web PKI. My belief is that compelling product offerings could arise in this space if there exist guidelines for the issuance of technically constrained subCAs, and if proper issuance and lifecycle management of these subCA certificates can be structured in a way that does not burden the root programs and their member CAs with significant manual audit duties AND still preserves the integrity of the web PKI.

Regarding specifically the risk of the holder of a technically constrained subCA issuing a certificate with an SHA-1 signature or other improper signature / algorithm, my belief at this time is that with respect to the web PKI, we should be able to rely upon the modern client software to exclude these certificates from functioning. My understanding was that IE / Edge was the last holdout on that front but that it now distrusts SHA-1 signatures.

> >> From the perspective of risk to the broader web PKI, it would
> >> appear that a properly name-constrained intermediate with (for
> >> example) only the Server and Client TLS authentication EKUs and
> >> name constraints limited to particular validated domains (via
> >> dnsName constraints, along with exclusions covering the entire
> >> IPv4 and IPv6 address space) is not substantively more risky than
> >> a multi-SAN wildcard certificate covering the same domains.
>
> I currently agree this is broadly true, with the exception of the
> lifetime issue which you raise in a later message.
>
> There would be little point in such a TCSC having a max lifetime equal
> to the max lifetime of an EE cert, because then after day 1, the EE
> certs it issues couldn't have the max lifetime (because the EE cert
> can't last longer than the intermediate!). So perhaps max lifetime of EE
> + 1 year, so the issuing TCSC needs to be replaced once a year in order
> for the organization to continue issuing max length certs?

While I certainly think it would be fine to extend the life of a technically constrained subCA significantly beyond that of an EE certificate, as long as the risks are balanced, I do have a question here:

How do the various validation routines in the field today validate a scenario in which a leaf certificate's validity period exceeds a validity period constraint upon the chosen trust path? Is the certificate treated as trusted, but only to the extent that the present time is within the most restrictive view of the validity period in the chain, or is the certificate treated as invalid regardless for failure to fully conform to the technical policy constraints promulgated by the chain?

The reason I raise this question pertains to the lifecycle of EE certificates issued subordinate to a technically constrained subCA, where the notAfter date of the leaf certificate exceeds the notAfter date of the parent subCA. If we contemplate a subsequent renewal of the technically constrained subCA with the same SPKI, it occurs to me that the subCA can issue a certificate whose notAfter date exceeds the subCA's own, and then merely change which subCA certificate is distributed to build the trust path, thus allowing the leaf to remain valid as long as the subCA is renewed and deployed on time. This may especially be the case if AIA chasing upon initial path-validation failure becomes common practice.
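
The two interpretations can be sketched as follows (hedged; this uses
the pyca/cryptography 42+ accessors, and real client behavior varies,
as discussed later in the thread):

    import datetime
    from cryptography import x509

    def chain_valid_at(chain: list[x509.Certificate],
                       when: datetime.datetime) -> bool:
        """RFC 5280 behavior: every certificate in the chain must be within
        its own validity period at evaluation time; nothing requires the
        leaf's notAfter to nest inside the intermediate's."""
        return all(
            c.not_valid_before_utc <= when <= c.not_valid_after_utc
            for c in chain
        )

    def chain_nests(chain: list[x509.Certificate]) -> bool:
        """Stricter 'nesting' behavior some clients have required: each
        certificate's validity must lie inside its issuer's (leaf-first)."""
        return all(
            child.not_valid_before_utc >= parent.not_valid_before_utc
            and child.not_valid_after_utc <= parent.not_valid_after_utc
            for child, parent in zip(chain, chain[1:])
        )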

>
> The sub-subdomain issue is also a difference, but my current view is
> that it doesn't have much of an effect on the risk profile in practice.
>

I believe this is particularly the case if we presume that a technically constrained subCA will have been issued by way of at least an organization validation. Including, of course, demonstration of ownership & control of the included domains (or at a minimum a validated authorization to include a domain owned & controlled by another party).

> > As to disclosure of these name constrained intermediates, I should
> > think that if they became popular, even among largish enterprises,
> > there might arise quite a lot of such intermediates. Perhaps rather
> > than in CCADB, these name constrained intermediates should be
> > required as a matter of policy to be submitted to CT logs (to an
> > acceptable number of logs, with an acceptable number of those under
> > separate administrative control).
>
> If we exempt the certs they issue from CP/CPS and audit requirements,
> the need for such TCSCs to be disclosed in CCADB is much reduced.

I submit, then, that the real questions become further analysis of and feedback on the risk(s), followed by specification and guidance on the specific constraints that would form the certificate profile which would enjoy the reduced CP/CPS, audit, and disclosure burdens. As a further exercise, it seems likely that to truly create a market in which such an offering from CAs would grow in prevalence, someone would need to carry the torch to see such guidance (or at least the relevant portions) make its way into the Baseline Requirements and other root programs. Is that a reasonable assessment?


>
> Gerv

Gervase Markham

May 22, 2017, 12:50:59 PM
to Matthew Hardeman
On 22/05/17 16:43, Matthew Hardeman wrote:
> Regarding specifically the risk of the holder of a technically
> constrained subCA issuing a certificate with an SHA-1 signature or
> other improper signature / algorithm, my belief at this time is that
> with respect to the web PKI, we should be able to rely upon the
> modern client software to exclude these certificates from
> functioning. My understanding was that IE / Edge was the last
> holdout on that front but that it now distrusts SHA-1 signatures.

So your proposal is that technical requirements should be enforced
in-product rather than in-policy, and so effectively there's no need for
policy for the EE certs under a TCSC.

This is not an unreasonable position.

> How do the various validation routines in the field today validate a
> scenario in which a leaf certificate's validity period exceeds a
> validity period constraint upon the chosen trust path? Is the
> certificate treated as trusted, but only to the extent that the
> present time is within the most restrictive view of the validity
> period in the chain, or is the certificate treated as invalid
> regardless for failure to fully conform to the technical policy
> constraints promulgated by the chain?

Good question. I think the former, but Ryan Sleevi might have more info,
because I seem to remember him discussing this scenario and its compat
constraints recently.

Either way, it's a bad idea, because the net effect is that your cert
suddenly stops working before the end date in it, and so you are likely
to be caught short.

> I submit, then, that the real questions become further analysis of
> and feedback on the risk(s), followed by specification and guidance
> on the specific constraints that would form the certificate profile
> which would enjoy the reduced CP/CPS, audit, and disclosure burdens.
> As a further exercise, it seems likely that to truly create a market
> in which such an offering from CAs would grow in prevalence, someone
> would need to carry the torch to see such guidance (or at least the
> relevant portions) make its way into the Baseline Requirements and
> other root programs. Is that a reasonable assessment?

Well, it wouldn't necessarily need to make its way into other places.
Although that's always nice.

Gerv

Matthew Hardeman

May 22, 2017, 1:41:44 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 11:50:59 AM UTC-5, Gervase Markham wrote:

> So your proposal is that technical requirements should be enforced
> in-product rather than in-policy, and so effectively there's no need for
> policy for the EE certs under a TCSC.
>
> This is not an unreasonable position.
>

That is a correct assessment of my position. If we are able to unambiguously enforce a policy matter by technological means -- and most especially where such technological means already exist and are deployed -- then we should be able to rely upon those technological constraints to relieve the administrative burden of auditing and enforcing compliance through business process.

> > How do the various validation routines in the field today validate a
> > scenario in which a leaf certificate's validity period exceeds a
> > validity period constraint upon the chosen trust path? Is the
> > certificate treated as trusted, but only to the extent that the
> > present time is within the most restrictive view of the validity
> > period in the chain, or is the certificate treated as invalid
> > regardless for failure to fully conform to the technical policy
> > constraints promulgated by the chain?
>
> Good question. I think the former, but Ryan Sleevi might have more info,
> because I seem to remember him discussing this scenario and its compat
> constraints recently.
>
> Either way, it's a bad idea, because the net effect is that your cert
> suddenly stops working before the end date in it, and so you are likely
> to be caught short.

Here I would concur that it would be bad practice for precisely the reason you indicate. I was mostly academically interested in the specifics of that topic. I would agree that extending the certificate lifecycle to some period beyond the max EE validity period would alleviate the need. Having said that, I can still envision workable scenarios and value cases for such technically constrained CA certificates even if it were deemed unacceptable to extend their validity period.

>
> > I submit, then, that the real questions become further analysis of
> > and feedback on the risk(s), followed by specification and guidance
> > on the specific constraints that would form the certificate profile
> > which would enjoy the reduced CP/CPS, audit, and disclosure burdens.
> > As a further exercise, it seems likely that to truly create a market
> > in which such an offering from CAs would grow in prevalence, someone
> > would need to carry the torch to see such guidance (or at least the
> > relevant portions) make its way into the Baseline Requirements and
> > other root programs. Is that a reasonable assessment?
>
> Well, it wouldn't necessarily need to make its way into other places.
> Although that's always nice.

Agreed. I primarily mentioned the other rules, etc., because it occurs to me that the standards for what might qualify for preferential/different treatment of technically constrained subCAs with respect to disclosure might also neatly align with issuance policy as it might pertain in, for example, your separate thread titled "Policy 2.5 Proposal: Fix definition of constraints for id-kp-emailProtection".

The question of audit & disclosure requirements pertaining to technically constrained subCAs seems ripe for discussion. I note that Doug Beattie sought clarification on exactly this question, on the matter of a name-constrained subCA with the emailProtection EKU only, several days ago in the thread "Next CA Communication".

Thanks,

Matt

Ryan Sleevi

May 22, 2017, 3:14:57 PM
to Matthew Hardeman, mozilla-dev-security-policy
On Mon, May 22, 2017 at 1:41 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Monday, May 22, 2017 at 11:50:59 AM UTC-5, Gervase Markham wrote:
> > > How do the various validation routines in the field today validate a
> > > scenario in which a leaf certificate's validity period exceeds a
> > > validity period constraint upon the chosen trust path? Is the
> > > certificate treated as trusted, but only to the extent that the
> > > present time is within the most restrictive view of the validity
> > > period in the chain, or is the certificate treated as invalid
> > > regardless for failure to fully conform to the technical policy
> > > constraints promulgated by the chain?
> >
> > Good question. I think the former, but Ryan Sleevi might have more info,
> > because I seem to remember him discussing this scenario and its compat
> > constraints recently.
> >
> > Either way, it's a bad idea, because the net effect is that your cert
> > suddenly stops working before the end date in it, and so you are likely
> > to be caught short.
>
> Here I would concur that it would be bad practice for precisely the reason
> you indicate. I was mostly academically interested in the specifics of
> that topic. I would agree that extending the certificate lifecycle to some
> period beyond the max EE validity period would alleviate the need. Having
> said that, I can still envision workable scenarios and value cases for such
> technically constrained CA certificates even if it were deemed unacceptable
> to extend their validity period.
>

As Gerv notes, clients behave inconsistently with respect to this.

With respect to what is specified in RFC 5280, the critical requirement is
that all certificates in the chain be valid at the time of evaluation. This
allows, for example, the replacement of an intermediate certificate to
'extend' the lifetime of the leaf to its originally expressed value.

However, some clients require that the validity periods be 'nested'
appropriately - and, IIRC, at one time Mozilla NSS equally required this.

So the need exists to define some upper-bound for the TCSC relative to the
risk.

One approach is to make an argument that the upper-bound for a certificate
is bounded on the validity period of an equivalently issued leaf
certificate - that is, say, 825 days.
Another approach is to make an argument that since a CA can validate a
domain at T=0, issue a certificate at T=0 with a validity period of 825
days, then issue a certificate at T=824 with a validity period of 825 days,
the 'net' validity period of a domain validation is T=(825 days * 2) - 1
second.
(Here, I'm using 825 as shorthand for the cascading dates)
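
Spelled out (a hedged bit of arithmetic on that shorthand):

    from datetime import timedelta

    reuse = timedelta(days=825)
    # Validate the domain at T=0, issue at T=0 for 825 days, then issue
    # again one second before the reuse window closes, for another 825 days:
    net_window = (reuse - timedelta(seconds=1)) + reuse
    print(net_window)  # 1649 days, 23:59:59, i.e. (825 * 2) days - 1 second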

Another approach is to make an argument that such validations are already
accounted for in the EV Guidelines, in which a certificate may be issued
for 27 months, but for which the domain must be revalidated at 13 months.
In this case, the TCSC might be issued for an 'extended' period (perhaps on
an order of many-years), with the expectation that the CA will revoke the
TCSC if the domain does not (periodically) revalidate.

Each of these approaches comes with its own tradeoffs in design,
complexity, and risk, and argues more or less strongly for disclosure
accordingly.

My own take is that I would prefer to see TCSCs uniquely named (e.g.
through the use of dnQualifier), limited to the validity period
permitted for leaf certs (since they're, effectively, "ultra"
wildcards), with some robust disclosure & revocation story. I'm
concerned that extending the period of time would incentivize such
certs, which introduce additional risks to the evolution of the Web
PKI.

Ryan Sleevi

May 22, 2017, 3:21:41 PM
to Gervase Markham, mozilla-dev-security-policy, Matthew Hardeman
On Mon, May 22, 2017 at 12:50 PM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 22/05/17 16:43, Matthew Hardeman wrote:
> > Regarding specifically the risk of the holder of a technically
> > constrained subCA issuing a certificate with an SHA-1 signature or
> > other improper signature / algorithm, my belief at this time is that
> > with respect to the web PKI, we should be able to rely upon the
> > modern client software to exclude these certificates from
> > functioning. My understanding was that IE / Edge was the last
> > holdout on that front but that it now distrusts SHA-1 signatures.
>
> So your proposal is that technical requirements should be enforced
> in-product rather than in-policy, and so effectively there's no need for
> policy for the EE certs under a TCSC.
>
> This is not an unreasonable position.
>

I think it may be based on an incomplete understanding of the evolution of
the Web PKI. While it's certainly correct that we've been able to
technically mitigate many risks, it's not been without issue. The historic
path to deprecation has been on the basis of establishing some form of
sunset date or requirements change, either within the CA/Browser Forum or
through policy, with the understanding and appreciation that, on or after
that sunset date, it can be technically enforced without any breakage (save
for misissued certificates).

TCSCs substantially change that dynamic, in a way that I believe would
be detrimental to further evolution. This is already a concern when
thinking about requirements such as Certificate Transparency - despite
the majority of commercial CAs (and thus, equally, commercially managed
CAs) being prepared, TCSCs in existence may be ill-prepared to handle
such a transition. We saw this with the imposition of the Baseline
Requirements, which thankfully saw many enterprise-managed CAs become
commercially managed CAs due to their inability to abide by the
requirements; so we can reasonably conclude that future requirements
will also be challenging for enterprise-managed CAs, which TCSCs
effectively are.

Consider, on one extreme, if every one of the Top 10000 sites used
TCSCs to issue their leaves. A policy change, such as deprecating
SHA-1, would be substantially harder, as there would now be a
communication overhead of O(10000 + every root CA) rather than
O(# of root store CAs).

It may be that the benefits of TCSCs are worth such risk - after all,
the Web Platform and the evolution of its related specs (URL, Fetch,
HTML) deal with this problem routinely. But it's also worth noting the
incredible difficulty and friction of deprecating insecure, dangerous
APIs - and the difficulty of deprecating SHA-1 (or commonNames) for
"enterprise" PKIs - and as such, TCSCs may represent a significant
slowdown in progress, and a corresponding significant increase in
user-exposed risk.

This is why it may be more useful to take a principled approach, and to, on
a case by case basis, evaluate the risk of reducing requirements for TCSCs
(which are already required to abide by the BRs, and simply exempted from
auditing requirements - and this is independent of any Mozilla
dispensations), both in the short-term and in the "If every site used this"
long-term.

Matthew Hardeman

May 22, 2017, 3:42:32 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 2:14:57 PM UTC-5, Ryan Sleevi wrote:

> Another approach is to make an argument that such validations are already
> accounted for in the EV Guidelines, in which a certificate may be issued
> for 27 months, but for which the domain must be revalidated at 13 months.
> In this case, the TCSC might be issued for an 'extended' period (perhaps on
> an order of many-years), with the expectation that the CA will revoke the
> TCSC if the domain does not (periodically) revalidate.
>
> Each of these approaches comes with its own tradeoffs in design,
> complexity, and risk, and argues more or less strongly for disclosure
> accordingly.
>
> My own take is that I would prefer to see TCSCs uniquely named (e.g.
> through the use of dnQualifier), limited to the validity period
> permitted for leaf certs (since they're, effectively, "ultra"
> wildcards), with some robust disclosure & revocation story. I'm
> concerned that extending the period of time would incentivize such
> certs, which introduce additional risks to the evolution of the Web
> PKI.

Hi, Ryan,

Thanks for the information regarding the chain validity periods and their impact as to the RFC defined behavior and in-field implementation behaviors. I suspected there would be some differences across implementations.

I am inclined to agree with your assessment as to the validity period. The future in which I would envision more common use of TCSCs would still be a future that encourages leaf deployment automation and, ideally, quite limited with respect to the validity period (at least, I should say, with respect to TLS server certificates).

To any such extent that TCSCs might discourage server operators from establishing good certificate and key lifecycle management, or server operators might attempt to rely upon a TCSC as a way of generating longer-than-BR-allows-for-validity leaf certificates, I would say that policy should probably prevent this as even a potential incentive.

I would be interested to learn more of your perspective on "robust disclosure & revocation story". What constitutes a robust disclosure? For example, does that imply a mandatory timely publication in CCADB? With respect to a revoked TCSC, does that require formalized submission to the root programs for distribution in their respective centralized revocation distribution mechanisms (OneCRL, etc.)? Which remaining features of a TCSC provide capabilities which might be mitigated by this level of disclosure versus mere mandatory publication to CT?

Thanks,

Matt

Peter Bowen

May 22, 2017, 3:43:14 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Fri, May 19, 2017 at 6:47 AM, Gervase Markham via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
> We need to have a discussion about the appropriate scope for:
>
> 1) the applicability of Mozilla's root policy
> 2) required disclosure in the CCADB
>
> The two questions are related, with 2) obviously being a subset of 1).
> It's also possible we might decide that for some certificates, some
> subset of the Mozilla policy applies, but not all of it.
>
> I'm not even sure how best to frame this discussion, so let's have a go
> from this angle, and if it runs into the weeds, we can try again another
> way.
>
> The goal of scoping the Mozilla policy is, to my mind, to make the
> policy sufficiently broadly applicable that it covers all
> publicly-trusted certs, while also not leaving unregulated such a
> large number of untrusted certs inside publicly-trusted hierarchies
> that it holds back forward progress on standards and security.
>
> The goal of CCADB disclosure is to see what's going on inside the WebPKI
> in sufficient detail that we don't miss important things. Yes, that's vague.
>
> Here follows a list of scenarios for certificate issuance. Which of
> these situations should be in full Mozilla policy scope, which should
> be in partial scope (if any), and which of those should require CCADB
> disclosure? Are there scenarios I've missed?

You seem to be assuming each of A-I has a path length constraint of
0, as your scenarios don't include CA certs below each category.
I would say that any CA certificate signed by a CA that does not have
name constraints and is not constrained to things outside the set
{id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
This would mean that the top level of all constrained hierarchies is
disclosed, but subordinate CAs further down the tree and EE certs are
not. I think that this is a reasonable trade-off of privacy vs
disclosure.
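
That rule, expressed as a hedged sketch of a disclosure predicate
(Python, pyca/cryptography; the in-scope EKU set follows the wording
above, and actual CCADB policy language differs):

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID, ExtensionOID

    IN_SCOPE_EKUS = {
        ExtendedKeyUsageOID.SERVER_AUTH,
        ExtendedKeyUsageOID.EMAIL_PROTECTION,
        ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE,
    }

    def must_disclose(subca: x509.Certificate,
                      issuer: x509.Certificate) -> bool:
        """Disclose any CA certificate whose *issuer* lacks name constraints
        and is not EKU-constrained to things outside the in-scope set. This
        captures the top layer of every constrained hierarchy while leaving
        deeper subordinates and EE certificates undisclosed."""
        try:
            bc = subca.extensions.get_extension_for_oid(
                ExtensionOID.BASIC_CONSTRAINTS).value
        except x509.ExtensionNotFound:
            return False  # not a CA certificate
        if not bc.ca:
            return False
        try:
            issuer.extensions.get_extension_for_oid(
                ExtensionOID.NAME_CONSTRAINTS)
            return False  # issuer is name-constrained: below the top layer
        except x509.ExtensionNotFound:
            pass
        try:
            ekus = set(issuer.extensions.get_extension_for_oid(
                ExtensionOID.EXTENDED_KEY_USAGE).value)
        except x509.ExtensionNotFound:
            return True  # issuer carries no EKU restriction: in scope
        return bool(ekus & IN_SCOPE_EKUS)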

Thanks,
Peter

Matthew Hardeman

May 22, 2017, 4:02:11 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:

>
> I would say that any CA certificate signed by a CA that does not have
> name constraints and is not constrained to things outside the set
> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
> This would mean that the top level of all constrained hierarchies is
> disclosed, but subordinate CAs further down the tree and EE certs are
> not. I think that this is a reasonable trade-off of privacy vs
> disclosure.

I would agree that those you've identified as "should be disclosed" definitely should be disclosed. I am concerned, however, that SOME of the remaining certificates beyond those should probably also be disclosed. For safety's sake, it may be better to start with the assumption that all CA and subCA certificates require full disclosure to the CCADB, and then define specific rule sets for those which don't require that level.

Matthew Hardeman

May 22, 2017, 4:34:37 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 2:21:41 PM UTC-5, Ryan Sleevi wrote:
> > > Regarding specifically the risk of the holder of a technically
> > > constrained subCA issuing a certificate with an SHA-1 signature or
> > > other improper signature / algorithm, my belief at this time is that
> > > with respect to the web PKI, we should be able to rely upon the
> > > modern client software to exclude these certificates from
> > > functioning. My understanding was that IE / Edge was the last
> > > holdout on that front but that it now distrusts SHA-1 signatures.
> >
> > So your proposal is that technical requirements should be enforced
> > in-product rather than in-policy, and so effectively there's no need for
> > policy for the EE certs under a TCSC.
> >
> > This is not an unreasonable position.
> >
>
> I think it may be based on an incomplete understanding of the evolution of
> the Web PKI. While it's certainly correct that we've been able to
> technically mitigate many risks, it's not been without issue. The historic
> path to deprecation has been on the basis of establishing some form of
> sunset date or requirements change, either within the CA/Browser Forum or
> through policy, with the understanding and appreciation that, on or after
> that sunset date, it can be technically enforced without any breakage (save
> for misissued certificates).

I grant that there are certainly limitations to my knowledge, but I feel that I have a fair grasp of the history and the evolution.

>
> TCSCs substantially change that dynamic, in a way that I believe would
> be detrimental to further evolution. This is already a concern when
> thinking about requirements such as Certificate Transparency - despite
> the majority of commercial CAs (and thus, equally, commercially managed
> CAs) being prepared, TCSCs in existence may be ill-prepared to handle
> such a transition. We saw this with the imposition of the Baseline
> Requirements, which thankfully saw many enterprise-managed CAs become
> commercially managed CAs due to their inability to abide by the
> requirements; so we can reasonably conclude that future requirements
> will also be challenging for enterprise-managed CAs, which TCSCs
> effectively are.

I'm not certain that I accept the premise that TCSCs fundamentally or substantively change that dynamic. Particularly if the validity period of the TCSC is limited in much the same manner as an EE certificate, it would seem that there's a sufficiently limited time window for changing any needed aspect of the restrictions in a TCSC.

>
> > Consider, on one extreme, if every one of the Top 10000 sites used
> > TCSCs to issue their leaves. A policy change, such as deprecating
> > SHA-1, would be substantially harder, as there would now be a
> > communication overhead of O(10000 + every root CA) rather than
> > O(# of root store CAs).

I definitely concede that there would arise risks in having more TCSCs in deployment, with respect specifically to compatibility, if and only if an expectation of lax timelines and enforcement were required.

I think the key issue which held back the SHA-256 migration was not CA readiness as much as pushback from server administrators and consuming applications (generally proprietary non-browser stuff on dual-purpose (web + proprietary interface) shared endpoints).

Indeed, many mis-issuances which occurred (WoSign, StartCom, Symantec) seem to have been attempts to improperly satisfy end-customer demand for certificates in those kinds of use cases.

>
> It may be that the benefits of TCSCs are worth such risk - after all,
> the Web Platform and the evolution of its related specs (URL, Fetch,
> HTML) deal with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous
> APIs - and the difficulty of deprecating SHA-1 (or commonNames) for
> "enterprise" PKIs - and as such, TCSCs may represent a significant
> slowdown in progress, and a corresponding significant increase in
> user-exposed risk.

I think part of the right balance would be to presume that those who are advanced enough to use TCSCs will do what maintenance is necessary to continue to comply with technically enforced browser requirements, assuming there is reasonable notice of those changes. Ultimately, it should also be a burden of the sponsoring CA to communicate to their TCSCs about impending changes.

>
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

If case-by-case assessment requires anything more than "this subCA certificate meets rule specification XYZ123, and thus requires no CCADB publication, and entities below this cert are not subject to audit", then the utility of a distinct TCSC classification becomes severely limited, as it would naturally have a very high barrier to deployment in terms of time-to-approval, time-to-creation, time-to-delivery, etc. It would also necessarily increase the price for the buyer of the TCSC and the cost for the sponsoring CA in man-hours dedicated to the process.

It may in fact be that there are risks of TCSCs that I've failed to account for which might well justify such an outcome.

Having said that, I think that future compatibility concerns in the face of the potential of more TCSCs being deployed can be headed off by taking a firm stance toward the necessity of those entities reliant on TCSCs keeping their infrastructure and practices up to date.

Deployment in this mode should probably be regarded as "This is for the advanced class. If that isn't you and/or you encounter problems, go back to working with a CA for your EE certificates."

Thanks,

Matt

Peter Bowen

May 22, 2017, 4:46:14 PM
to Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org
On Mon, May 22, 2017 at 1:02 PM, Matthew Hardeman via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:
> On Monday, May 22, 2017 at 2:43:14 PM UTC-5, Peter Bowen wrote:
>
>>
>> I would say that any CA certificate signed by a CA that does not have
>> name constraints and is not constrained to things outside the set
>> {id-kp-serverAuth, id-kp-emailProtection, anyEKU} should be disclosed.
>> This would mean that the top level of all constrained hierarchies is
>> disclosed, but subordinate CAs further down the tree and EE certs are
>> not. I think that this is a reasonable trade-off of privacy vs
>> disclosure.
>
> I would agree that those you've identified as "should be disclosed" definitely should be disclosed. I am concerned, however, that SOME of the remaining certificates beyond those should probably also be disclosed. For safety's sake, it may be better to start with the assumption that all CA and subCA certificates require full disclosure to the CCADB, and then define specific rule sets for those which don't require that level.

Right now the list excludes anything with a certain set of name
constraints and anything that has EKU constraints outside the in-scope
set. I'm suggesting that the first "layer" of CA certs always should
be disclosed.

Thanks,
Peter

Matthew Hardeman

May 22, 2017, 4:48:03 PM
to mozilla-dev-s...@lists.mozilla.org
>
> Right now the list excludes anything with a certain set of name
> constraints and anything that has EKU constraints outside the in-scope
> set. I'm suggesting that the first "layer" of CA certs always should
> be disclosed.
>

I understand now. In that, I fully concur.

Ryan Sleevi

May 22, 2017, 4:50:30 PM
to Matthew Hardeman, mozilla-dev-security-policy
On Mon, May 22, 2017 at 4:34 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> I'm not certain that I accept the premise that TCSCs fundamentally or
> substantively change that dynamic. Particularly if the validity period of
> the TCSC is limited in much the same manner as an EE certificate, it would
> seem that there's a sufficiently limited time window for changing any needed
> aspect of the restrictions in a TCSC.
>

Right, but I reject that :)

The window of change is not simply the validity window of the extant
certificates. That is, you cannot, at T=1, necessitate some change from
T=0. The approach that has been used has been to phase in - e.g. after
T=[some point in the future], all publicly trusted certificates need to
change.

The success or failure of the ability to make those changes has been gated
upon the diversity of the ecosystem. That is, technically diverse CAs -
those that operate on diverse infrastructures, perhaps over the course of
several years of acquisitions - have been the most difficult to adapt to
changing requirements. Those with homogenous infrastructures have
demonstrated their willingness and ability to change in a timely fashion.

When you extend that to TCSCs, it simply doesn't scale. Changing an
algorithm like SHA-1, on the extreme side, is no longer a centralized
problem but a decentralized one. That's the benefit being argued for
TCSCs, but it's also the extreme detriment. On the other side, things
that should be 'simple' changes - like the proper encoding of a
certificate, which apparently CAs find remarkably hard - are now matters
of coordinating between ISVs implementing, organizations deploying,
testing, etc. In today's world, this is a problem, but not an
overwhelming one. In a world of TCSCs, it certainly would be.


>
> >
> > Consider, on one extreme, if every one of the Top 10000 sites used
> > TCSCs to issue their leaves. A policy change, such as deprecating
> > SHA-1, would be substantially harder, as there would now be a
> > communication overhead of O(10000 + every root CA) rather than
> > O(# of root store CAs).
>
> I definitely concede that there would arise risks in having more TCSCs in
> deployment, with respect specifically to compatibility, if and only if an
> expectation of lax timelines and enforcement were required.
>
> I think the key issue which held back the SHA-256 migration was not CA
> readiness as much as pushback from server administrators and consuming
> applications (generally proprietary non-browser stuff on dual-purpose
> (web + proprietary interface) shared endpoints).
>

Unfortunately, this is not an accurate summary :) The SHA-256 migration
was very much held back by CA readiness, more so than by server
administrator unreadiness. Many CAs were simply not capable of issuing
SHA-256 certificates as recently as late 2014/early 2015, either
directly or through their APIs. And we're talking LARGE CAs.


> I think part of the right balance would be to presume that those who are
> advanced enough to use TCSCs will do what maintenance is necessary to
> continue to comply with technically enforced browser requirements, assuming
> there is reasonable notice of those changes. Ultimately, it should also be
> a burden of the sponsoring CA to communicate to their TCSCs about impending
> changes.
>

There is no such evidence to support such presumption, and ample evidence
(as from the BRs) to suggest it doesn't work.

While I agree it should be the burden of the sponsoring CA, the
externalities are aligned entirely incorrectly to ensure that happens.
That is, if a CA fails to communicate, much as we saw with SHA-1, it
becomes an issue of a site operator/TCSC operator being ill-prepared
for a browser change, and the result is a broken site in the browser,
which further increases warning fatigue on the user.

In an ideal world, it'd work like you describe. The past decade of CA
changes has shown the world is anything but ideal :)


> Having said that, I think that future compatibility concerns in the face
> of the potential of more TCSCs being deployed can be headed off by taking a
> firm stance toward the necessity of those entities reliant on TCSCs keeping
> their infrastructure and practices up to date.
>
> Deployment in this mode should probably be regarded as "This is for the
> advanced class. If that isn't you and/or you encounter problems, go back
> to working with a CA for your EE certificates."
>

A firm stance to use users as hostages in negotiations? Browsers undertake
that with fear and trembling - because as much as you can say it, at the
end of the day the user is going to blame the most recent party to change -
which will consistently be the browser.

Matthew Hardeman

May 22, 2017, 5:35:18 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 3:50:30 PM UTC-5, Ryan Sleevi wrote:

> Right, but I reject that :)
>

I hope to better understand your position. In transitioning from a long-time lurker to actively commenting on this list, it is my hope to contribute what I usefully can, bow out gracefully when I cannot, but above all to learn at least as much as I contribute.

> > Having said that, I think that future compatibility concerns in the face
> > of the potential of more TCSCs being deployed can be headed off by taking a
> > firm stance toward the necessity of those entities reliant on TCSCs keeping
> > their infrastructure and practices up to date.
> >
> > Deployment in this mode should probably be regarded as "This is for the
> > advanced class. If that isn't you and/or you encounter problems, go back
> > to working with a CA for your EE certificates."
> >
>
> A firm stance to use users as hostages in negotiations? Browsers undertake
> that with fear and trembling - because as much as you can say it, at the
> end of the day the user is going to blame the most recent party to change -
> which will consistently be the browser.

It is within the above that I can see what makes broader use of TCSCs problematic. If the browser community does not effectively move in the fashion of a single actor when a breaking change is necessary for addressing a security concern, I would agree that frankly anything which adds additional field deployment scenarios into the ecosystem will only make things worse.

On the other hand, perhaps the lesson to be learned there is that better consensus on the scheduling and impact of breaking changes should be negotiated amongst the browser participants and handed down in one voice to the Root CAs and onward to the web community.

The user can't blame Chrome if Safari and Firefox break for the same use case in the quite near term. When there is no one left to blame for a broken website BUT the broken website, the blame will land where it is deserved.

One particular hesitation that I have in fully accepting your position is that it would seem to recommend a Web PKI with a very few concentrated actors working subject to the best practices and with minimal differentiation. (Say, for example, a LetsEncrypt and 3 distinct competitors, diverse in geography and management but homogeneous as to intent and operational practice.) The 4 CAs could quickly be communicated with and could adapt to the needs of the community. Extrapolating from LetsEncrypt's performance also suggests it would be technologically feasible for just a few entities to pull this off, too. Yet, I don't see a call for that. Where's the balance in between, and how does one arrive at it?

Ryan Sleevi

May 22, 2017, 7:43:21 PM
to Matthew Hardeman, mozilla-dev-security-policy
On Mon, May 22, 2017 at 5:35 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> It is within the above that I can see what makes broader use of TCSCs
> problematic. If the browser community does not effectively move in the
> fashion of a single actor when a breaking change is necessary for
> addressing a security concern, I would agree that frankly anything
> which adds additional field deployment scenarios into the ecosystem will
> only make things worse.
>
> On the other hand, perhaps the lesson to be learned there is that better
> consensus on the scheduling and impact of breaking changes should be
> negotiated amongst the browser participants and handed down in one voice to
> the Root CAs and onward to the web community.
>
> The user can't blame Chrome if Safari and Firefox break for the same use
> case in the quite near term. When there is no one left to blame for a
> broken website BUT the broken website, the blame will land where it is
> deserved.
>

There are both technical and non-technical hurdles that prevent that from
being meaningfully accomplished. On the non-technical front, the nature of
relationships with third-party entities (CAs) makes it complex to act in a
coordinated fashion, for what are hopefully obvious reasons.

On a pragmatic, technical front, the asymmetric release cycles prevent
there from ever being a true Flag Day, and as such, means there's always a
first mover penalty, and there's always jockeying to avoid the pain of that
first-mover penalty. I'm not sure whether to draw the parallel to the
prisoner's dilemma, but it's worth pointing out that Microsoft was the
first to announce a SHA-1 date, and the last to implement it (having only
recently shipped it, after other browsers worked through the issues - and
the user pain).

To achieve parity, one would need to implement a concerted flag day, much
like World IPv6 Day. Unfortunately, such flag days inherently mean there is
a limit to the degree of testing and assessing breakage, and any bugs -
bugs that would cause the change to be reverted or fixed - cannot be fixed
ahead of time.

These are issues that the browser community is unable to solve. Not
unwilling, but on a purely technical front, unable to achieve while also
serving their goals of shipping reliable products to users.


> One particular hesitation that I have in fully accepting your position is
> that it would seem to recommend a Web PKI with a very few concentrated
> actors working subject to the best practices and with minimal
> differentiation. (Say, for example, a LetsEncrypt and 3 distinct
> competitors, diverse in geography and management but homogeneous as to
> intent and operational practice.) The 4 CAs could quickly be communicated
> with and could adapt to the needs of the community. Extrapolating from
> LetsEncrypt's performance also suggests it would be technologically
> feasible for just a few entities to pull this off, too. Yet, I don't see a
> call for that. Where's the balance in between, and how does one arrive at
> it?


The Web PKI already has virtually zero differentiation. That's a foregone
conclusion, by virtue of compliance to the Baseline Requirements. That is,
the only real differentiation is on ubiquity of roots, probability of
removal (due to misissuance), and price.

That said, despite this, it should be clear, for very obvious reasons much
like those above, why that conclusion is not one that is actively pursued.

Would such a system be better on a security front? In many ways, yes. The
distributed nature of the Web PKI was not, as some might claim, an
intentional design goal for security, but was done more so for non-technical
reasons, such as perceived liability. Having 5 entities with keys to the
Internet is, unquestionably, better than having 5000 entities with the keys
to the Internet. To date, most systems have maintained an unbounded root
store (modulo per-company limits), because there has not been a desire to
include technical differentiation. One could just as easily imagine a goal
of, in the furtherance of Internet security, limiting a root store to
10 CAs, all implementing a common issuance API and objectively measured in
terms of things like performance, availability, and systemic security.
However, as you can see from just the inclusion reviews as it stands,
that's a time-consuming and difficult task, and for most root stores, the
resources that vendors are dedicating average around 1.5 - 2 people
for the entire company, which is far less than needed to implement such
changes.

But to the original point - can browsers unilaterally cut off (potentially
large) swaths of the Internet? No. And a profile of TCSCs that has 10,000
of them can easily mean that's what it entails. If it were otherwise
possible, we would have HTTPS-by-default by now - but as you can see from
those discussions, or the discussions of disabling plugins (which are a
security disaster) by default, changes in the Web Platform move glacially
slowly compared to the current agility and flexibility of the Web PKI.

Peter Bowen

May 22, 2017, 7:58:17 PM
to Ryan Sleevi, mozilla-dev-security-policy, Gervase Markham, Matthew Hardeman
On Mon, May 22, 2017 at 12:21 PM, Ryan Sleevi via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:
> Consider, on one extreme, if every of the Top 10000 sites used TCSCs to
> issue their leaves. A policy, such as deprecating SHA-1, would be
> substantially harder, as now there's a communication overhead of O(10000 +
> every root CA) rather than O(# of root store CAs).

Why do you need to add 10,000 communication points? A TCSC is, by
definition, a subordinate CA. The WebPKI is not a single PKI; it is a
set of parallel PKIs which do not share a common anchor. The
browser-to-CA relationship is between the browser vendor and each root
CA. This is O(root CA operators), not even O(every root CA). If a root
CA issues 10,000 subordinate CAs, then they had better have a compliance
plan in place to have assurance that all of them will do the necessary
things.

> It may be that the benefits of TCSCs are worth such risk - after all, the
> Web Platform and the evolution of its related specs (URL, Fetch, HTML)
> deals with this problem routinely. But it's also worth noting the
> incredible difficulty and friction of deprecating insecure, dangerous APIs
> - and the difficulty in SHA-1 (or commonNames) for "enterprise" PKIs - and
> as such, may represent a significant slowdown in progress, and a
> corresponding significant increase in user-exposed risk.
>
> This is why it may be more useful to take a principled approach, and to, on
> a case by case basis, evaluate the risk of reducing requirements for TCSCs
> (which are already required to abide by the BRs, and simply exempted from
> auditing requirements - and this is independent of any Mozilla
> dispensations), both in the short-term and in the "If every site used this"
> long-term.

It seems this discussion is painting TCSCs with a broad brush. I
don't see anything in this discussion that makes the TCSC relationship
any different from any other subordinate CA. Both can be operated
either by the same organization that operates the root CA or an
unrelated organization. The Apple and Google subordinate CAs are
clearly not TCSCs but raise the same concerns. If there were 10,000
subordinates all with WebTrust audits, you would have the exact same
problem.

Thanks,
Peter

Ryan Sleevi

May 22, 2017, 8:24:42 PM
to Peter Bowen, Ryan Sleevi, mozilla-dev-security-policy, Gervase Markham, Matthew Hardeman
On Mon, May 22, 2017 at 7:58 PM, Peter Bowen <pzb...@gmail.com> wrote:
>
> Why do you need to add 10,000 communication points? A TCSC is, by
> definition, a subordinate CA. The WebPKI is not a single PKI; it is a
> set of parallel PKIs which do not share a common anchor. The
> browser-to-CA relationship is between the browser vendor and each root
> CA. This is O(root CA operators), not even O(every root CA). If a root
> CA issues 10,000 subordinate CAs, then they had better have a compliance
> plan in place to have assurance that all of them will do the necessary
> things.
>

https://groups.google.com/d/msg/mozilla.dev.security.policy/yS_L_OgI5qk/OhLX9iyZBAAJ
specifically proposed

"For example, no requirement of audit by the enterprise holding the
technically constrained intermediate, and no requirement for audit or
disclosure of certificates issued by the enterprise from the technically
constrained subordinate."

You're certainly correct that, under today's scheme, TCSCs' exemption from
requirements under the Baseline Requirements simply requires Self-Audits
(pursuant to Section 8.7). However, that does not mean that TCSCs must be
on the same infrastructure as the issuing CA - simply that "the CA which
signed the Subordinate CA SHALL monitor adherence to the CA's CP and the
SubCA's CPS" and a sampling audit, by the issuing CA, of either one
certificate or three percent of certificates issued.

That's a much weaker requirement than subCAs.
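
For concreteness, the arithmetic of that sampling floor is trivial. A
minimal sketch, assuming the Section 8.7 reading of "the greater of one
certificate or three percent" (Python, illustrative only):

    def self_audit_sample_size(certs_issued: int) -> int:
        # The greater of one certificate or three percent (rounded up) of
        # the certificates issued in the period being self-audited.
        return max(1, (3 * certs_issued + 99) // 100)

    assert self_audit_sample_size(10) == 1       # tiny TCSC: one cert
    assert self_audit_sample_size(10000) == 300  # large TCSC: 3% sample

Even at the 10,000-certificate scale discussed earlier, that's a
300-certificate sample - far lighter than a full audit.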


> It seems this discussion is painting TCSCs with a broad brush. I
> don't see anything in this discussion that makes the TCSC relationship
> any different from any other subordinate CA. Both can be operated
> either by the same organization that operates the root CA or an
> unrelated organization. The Apple and Google subordinate CAs are
> clearly not TCSCs but raise the same concerns. If there were 10,000
> subordinates all with WebTrust audits, you would have the exact same
> problem.
>

Indeed, although the realities and costs of that make it impractical - as
do the risks exposed to CAs (as recently seen) in engaging in such
relationships without sufficient and appropriate oversight.

But I'm responding in the context of the desired goal, and not simply
today's reality, since it is the goal that is far more concerning.

Matthew Hardeman

May 22, 2017, 9:34:49 PM
to mozilla-dev-s...@lists.mozilla.org
On Monday, May 22, 2017 at 7:24:42 PM UTC-5, Ryan Sleevi wrote:

> https://groups.google.com/d/msg/mozilla.dev.security.policy/yS_L_OgI5qk/OhLX9iyZBAAJ
> specifically proposed
>
> "For example, no requirement of audit by the enterprise holding the
> technically constrained intermediate, and no requirement for audit or
> disclosure of certificates issued by the enterprise from the technically
> constrained subordinate."
>
> You're certainly correct that, under today's scheme, TCSCs' exemption from
> requirements under the Baseline Requirements simply requires Self-Audits
> (pursuant to Section 8.7). However, that does not mean that TCSCs must be
> on the same infrastructure as the issuing CA - simply that "the CA which
> signed the Subordinate CA SHALL monitor adherence to the CA's CP and the
> SubCA's CPS" and a sampling audit, by the issuing CA, of either one
> certificate or three percent of certificates issued.
>
> That's a much weaker requirement than subCAs.
>

It's true that I set forth a particular goal that I represented as part of my interest in seeing the bar for the issuance of a properly designed TCSC lowered. I concur that the realization of that goal would mean that there are far more unique systems issuing publicly trusted (although within a very narrowly defined window as enforced by technical constraints) certificates.

I even concede that that alone does create a potential for compatibility issues should a need arise to make a global web pki-wide change to certificate issuance (say, for example, sudden need to deprecate SHA-256 signatures in favor of NGHA-1 [ostensibly the Next Great Hash Algorithm Instance #1]). For mitigation of that matter, I firmly believe that any research and development in the area of improved techniques for demonizing server administrators would be most beneficial.

>
> > It seems this discussion is painting TCSCs with a broad brush. I
> > don't see anything in this discussion that makes the TCSC relationship
> > any different from any other subordinate CA. Both can be operated
> > either by the same organization that operates the root CA or an
> > unrelated organization. The Apple and Google subordinate CAs are
> > clearly not TCSCs but raise the same concerns. If there were 10,000
> > subordinates all with WebTrust audits, you would have the exact same
> > problem.
> >
>
> Indeed, although the realities and costs of that make it impractical - as
> do the risks exposed to CAs (as recently seen) in engaging in such
> relationships without sufficient and appropriate oversight.

If I understand correctly, then, the issue is that you wish to minimize the growth of distinct issuing systems wherever they may occur in the PKI hierarchy, not TCSCs in particular.

>
> But I'm responding in the context of the desired goal, and not simply
> today's reality, since it is the goal that is far more concerning.

If I understand correctly, your position is that full disclosure and indexing of the TCSCs is to be desired principally because the extra effort of doing so may discourage their prevalence in deployment?

Ryan Sleevi

May 23, 2017, 9:05:30 AM
to Matthew Hardeman, mozilla-dev-security-policy
On Mon, May 22, 2017 at 9:34 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I even concede that that alone does create a potential for compatibility
> issues should a need arise to make a global web pki-wide change to
> certificate issuance (say, for example, sudden need to deprecate SHA-256
> signatures in favor of NGHA-1 [ostensibly the Next Great Hash Algorithm
> Instance #1]). For mitigation of that matter, I firmly believe that any
> research and development in the area of improved techniques for demonizing
> server administrators would be most beneficial.
>

Note that global pki-wide changes are made regularly - as evidenced by the
CA/Browser Forum BRs :)


> If I understand correctly, then, the issue is that you wish to minimize
> the growth of distinct issuing systems wherever they may occur in the PKI
> hierarchy, not TCSCs in particular.
>

That's perhaps asserting an intent, which is more than I said. I simply
highlighted that security is significantly improved through the limited
number of distinct issuing systems, and would be harmed by the introduction
of TCSCs with new and distinct issuing infrastructures.

I would not seek to limit - simply to highlight where the existing
controls serve a significant and valuable security purpose, and where
reducing those controls would undermine that security.


> > But I'm responding in the context of the desired goal, and not simply
> > today's reality, since it is the goal that is far more concerning.
>
> If I understand correctly, your position is that full disclosure and
> indexing of the TCSCs is to be desired principally because the extra effort
> of doing so may discourage their prevalence in deployment?


My position is that disclosure and indexing of TCSCs, and the further
requirements that they be operated in accordance with the BRs (and simply
audited by the Issuing CA), serve a valuable security function, for which
any situation that seeks to remove those requirements should strive to
provide an equivalent security function.

Note that I'm being very careful in what I'm saying, for obvious
non-technical reasons, and hopefully it's clear that the existing
requirements serve an objective and measurable security function, and are
not there to limit growth.

Jakob Bohm

May 23, 2017, 9:46:14 AM
to mozilla-dev-s...@lists.mozilla.org
Maybe there is a simpler, less onerous way to sanely impose new CAB/F or
other policy requirements on TCSCs without having them operate as
full-fledged public CAs with the related complexities.

How about this:

* TCSCs can, by their existing definition, be programmatically
recognized by certificate validation code e.g. in browsers and other
clients (a rough sketch of such a check follows at the end of this post).

* If TCSCs are limited, by requirements on BR-compliant unconstrained
SubCAs, to lifetimes that are the BR maximum of N years + a few months
(e.g. 2 years + a few months for the latest CAB/F requirements), then
any new CAB/F requirements on the algorithms etc. in SubCAs will be
phased in as quickly as for EE certs.

* If TCSCs cannot be renewed with the same public key, then TCSC issued
EEs are also subject to the same phase in deadlines as regular EEs.

* When issuing new/replacement TCSCs, CA operators should (by policy) be
required to inform the prospective TCSC holders which options in EE
certs (such as key strengths) will not be accepted by relying parties
after certain phase-out dates during the TCSC lifetime. It would then
be foolish (and of little consequence to the WebPKI as a whole) if any
TCSC holders ignore those restrictions.

* With respect to initiatives such as CT-logging, properly written
certificate validation code should simply not impose this below TCSCs.

With the above and similar measures (mostly) already in place, I see no
good reason to subject TCSCs to any of the administrative burdens
imposed on public SubCAs.
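
As a rough sketch of the first bullet's claim, here is how validation code
might recognize a TCSC. This uses the Python "cryptography" library, and
the qualification rules are my loose paraphrase of the BR
technical-constraint profile, not an official definition:

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def looks_like_tcsc(cert: x509.Certificate) -> bool:
        try:
            bc = cert.extensions.get_extension_for_class(x509.BasicConstraints)
            if not bc.value.ca:
                return False  # not a CA certificate at all
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage)
            if ExtendedKeyUsageOID.ANY_EXTENDED_KEY_USAGE in eku.value:
                return False  # anyEKU defeats the usage constraint
            nc = cert.extensions.get_extension_for_class(x509.NameConstraints)
        except x509.ExtensionNotFound:
            return False  # no EKU or no name constraints => unconstrained
        permitted = nc.value.permitted_subtrees or []
        if ExtendedKeyUsageOID.SERVER_AUTH in eku.value:
            # For serverAuth, require at least one permitted dNSName or
            # iPAddress subtree; an empty permitted list constrains nothing.
            return any(isinstance(st, (x509.DNSName, x509.IPAddress))
                       for st in permitted)
        return True

This deliberately ignores corner cases (dirName constraints, excluded
subtrees, path length), which is of course where "reliably, across the
ecosystem" gets hard.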



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Ryan Sleevi

May 23, 2017, 10:23:25 AM
to Jakob Bohm, mozilla-dev-security-policy
On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> * TCSCs can, by their existing definition, be programmatically
> recognized by certificate validation code e.g. in browsers and other
> clients.
>

In theory, true.
In practice, not even close.


> * If TCSCs are limited, by requirements on BR-compliant unconstrained
> SubCAs, to lifetimes that are the BR maximum of N years + a few months
> (e.g. 2 years + a few months for the latest CAB/F requirements), then
> any new CAB/F requirements on the algorithms etc. in SubCAs will be
> phased in as quickly as for EE certs.
>

I'm not sure what you're trying to say here, but the limits of lifetime for
EE certs are different from those of unconstrained subCAs (substantially).


> * If TCSCs cannot be renewed with the same public key, then TCSC issued
> EEs are also subject to the same phase in deadlines as regular EEs.
>

Renewing with the same public key is a problematic practice that should be
stopped.


> * When issuing new/replacement TCSCs, CA operators should (by policy) be
> required to inform the prospective TCSC holders which options in EE
> certs (such as key strengths) will not be accepted by relying parties
> after certain phase-out dates during the TCSC lifetime. It would then
> be foolish (and of little consequence to the WebPKI as a whole) if any
> TCSC holders ignore those restrictions.
>

This seems to be operating on an ideal world theory, not a real world
incentives theory.

First, there's the obvious problem that "required to inform" is
fundamentally problematic, and has been pointed out to you by Gerv in the
past. CAs were required to inform for a variety of things - but that
doesn't change market incentives. For that matter, "required to inform" can
be met by white text on a white background, or a box that clicks through,
or a default-checked "opt-in to future communications" requirement. The
history of human-computer interaction (and the gamification of regulatory
action) shows this is a toothless and not meaningful action.

I understand your intent is to be like "Surgeon General's Warning" on
cigarettes (in the US), or more substantive warnings in other countries,
and while that is well-intentioned as a deterrent - and works for some
cases - it otherwise ignores the public health risk, or tries to sweep it
under the rug under the auspices of "doing something".

Similarly, the market incentives are such that the warning will ultimately
be ineffective for some segment of the population. Chrome's own warnings
with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
showed how many sites were ill-prepared for the SHA-1 breakage (read: many).

Warnings feel good, but they don't do (enough) good. So the calculus comes
down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
or folks like Andrew and I on behalf of Google - of whether or not to
'break' sites that worked yesterday, and which won't work tomorrow. When
that breakage is low, it can fit within the acceptable tolerances -
https://www.chromium.org/blink/removing-features and
https://www.chromium.org/blink try to spell out how we do this in the Web
Platform - but too large, and it becomes a game of chicken.

So even though you say "it would be foolish," every bit of history suggests
it will be done. And since we know this, we also have to consider what the
impact will be afterwards. No browser manufacturer - or its employees,
more specifically - wakes up each morning and says "Gee, I wonder what I
can break today!", and so we shouldn't trivialize the significant risk
that breaking a ton of sites would impose.


> * With respect to initiatives such as CT-logging, properly written
> certificate validation code should simply not impose this below TCSCs.
>

"properly written"? What makes it properly written? It just means what you
want as the new policy.


> With the above and similar measures (mostly) already in place, I see no
> good reason to subject TCSCs to any of the administrative burdens
> imposed on public SubCAs.


While I hope I've laid them out for you in a way that can convince you, I
also suspect that the substance will be disregarded because of the source.
That said, the risk of breaking something is not taken lightly, and while
you may feel it's the site operator's fault - and perhaps even rightfully so
- the cost is not borne by the site operator (even when users can't get to
their site!) or the CA (who didn't warn "hard enough"), but by the user.
And systems that externalize cost onto the end user are not good systems.

Jakob Bohm

May 23, 2017, 11:53:03 AM
to mozilla-dev-s...@lists.mozilla.org
On 23/05/2017 16:22, Ryan Sleevi wrote:
> On Tue, May 23, 2017 at 9:45 AM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> * TCSCs can, by their existing definition, be programmatically
>> recognized by certificate validation code e.g. in browsers and other
>> clients.
>>
>
> In theory, true.
> In practice, not even close.
>
>

Note that as this is about a proposed future policy, it concerns validation
code updated if and when such a policy is enacted. Current validation
code has no reason to check a non-existent policy.

What part of "Has DNS/e-mail name constraints to at least second-level
domains or TLDs longer than 3 chars", "Has DN name constraints that
limit at least O and C", "Has EKU limitations that exclude AnyEKU and
anything else problematic", "Has lifetime and other general constraints
within the limits of EE certs" AND "Has a CTS" cannot be detected
programmatically?
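
As a toy illustration of just the first of those checks, under my own
reading of "second-level domains or TLDs longer than 3 chars" (hypothetical
helper, not from any standard; a real check would want public-suffix
awareness):

    def dns_constraint_is_specific(constraint: str) -> bool:
        # Accept constraints at the second level or deeper ("example.com"),
        # or a lone label longer than 3 characters ("museum"); reject "com".
        labels = constraint.lstrip(".").split(".")
        return len(labels) >= 2 or len(labels[0]) > 3

    assert dns_constraint_is_specific(".example.com")
    assert dns_constraint_is_specific("museum")
    assert not dns_constraint_is_specific("com")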

Or could this be solved by require such "TCSC light" SubCA certs to
carry a specific CAB/F policy OID with CT-based community enforcement
that all SubCA certs with this policy OID comply with the more stringent
non-computable requirements likely to be in such a policy (if passed)?

>> * If TCSCs are limited, by requirements on BR-compliant unconstrained
>> SubCAs, to lifetimes that are the BR maximum of N years + a few months
>> (e.g. 2 years + a few months for the latest CAB/F requirements), then
>> any new CAB/F requirements on the algorithms etc. in SubCAs will be
>> phased in as quickly as for EE certs.
>>
>
> I'm not sure what you're trying to say here, but the limits of lifetime for
> EE certs are different from those of unconstrained subCAs (substantially).

I am trying to limit the scope of this to the kind of TCSC (Technically
Constrained SubCA) that Matthew was advocating for. Thus none of this
applies to long lived or public SubCAs.

If an organization wants ongoing TCSC availability, they may subscribe
to getting a fresh TCSC halfway through the lifetime of the previous
one, to provide a constantly overlapping chain of SubCAs.

>
>
>> * If TCSCs cannot be renewed with the same public key, then TCSC issued
>> EEs are also subject to the same phase in deadlines as regular EEs.
>>
>
> Renewing with the same public key is a problematic practice that should be
> stopped.
>

Some other people seem to disagree; however, in this case I am
constraining the discussion to a specific case where this would be
forbidden (and enforced via CT logging of the TCSC certs). Thus no
debate on that particular issue.

>
>> * When issuing new/replacement TCSCs, CA operators should (by policy) be
>> required to inform the prospective TCSC holders which options in EE
>> certs (such as key strengths) will not be accepted by relying parties
>> after certain phase-out dates during the TCSC lifetime. It would then
>> be foolish (and of little consequence to the WebPKI as a whole) if any
>> TCSC holders ignore those restrictions.
>>
>
> This seems to be operating on an ideal world theory, not a real world
> incentives theory.
>
> First, there's the obvious problem that "required to inform" is
> fundamentally problematic, and has been pointed out to you by Gerv in the
> past. CAs were required to inform for a variety of things - but that
> doesn't change market incentives. For that matter, "required to inform" can
> be met by white text on a white background, or a box that clicks through,
> or a default-checked "opt-in to future communications" requirement. The
> history of human-computer interaction (and the gamification of regulatory
> action) shows this is a toothless and not meaningful action.
>
> I understand your intent is to be like "Surgeon General's Warning" on
> cigarettes (in the US), or more substantive warnings in other countries,
> and while that is well-intentioned as a deterrent - and works for some
> cases - it otherwise ignores the public health risk, or tries to sweep it
> under the rug under the auspices of "doing something".
>

It would be more like a disclaimer telling their customers that if they
issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
won't work in a lot of browsers, and that for their own protection they
should issue only SHA-256 or stronger certs. So the incentive for the
issuing CA is to minimize tech support calls and angry customers.

If the CA fails to inform their customers, the customer will get angry,
but the WebPKI will be unaffected.

> Similarly, the market incentives are such that the warning will ultimately
> be ineffective for some segment of the population. Chrome's own warnings
> with SHA-1 - warnings that CAs felt were unquestionably 'too much' - still
> showed how many sites were ill-prepared for the SHA-1 breakage (read: many).
>
> Warnings feel good, but they don't do (enough) good. So the calculus comes
> down to those making the decision - Gerv and Kathleen on behalf of Mozilla,
> or folks like Andrew and I on behalf of Google - of whether or not to
> 'break' sites that worked yesterday, and which won't work tomorrow. When
> that breakage is low, it can fit within the acceptable tolerances -
> https://www.chromium.org/blink/removing-features and
> https://www.chromium.org/blink try to spell out how we do this in the Web
> Platform - but too large, and it becomes a game of chicken.
>
> So even though you say "it would be foolish," every bit of history suggests
> it will be done. And since we know this, we also have to consider what the
> impact will be afterwards. No browser manufacturer - or its employees,
> more specifically - wakes up each morning and says "Gee, I wonder what I
> can break today!", and so we shouldn't trivialize the significant risk
> that breaking a ton of sites would impose.
>

One could also add a requirement that certain occasional messages,
prewritten by the CAB/F, be forwarded verbatim to all TCSC holders.
For example, a notice about the SHA-1 deprecation (historic example).

>
>> * With respect to initiatives such as CT-logging, properly written
>> certificate validation code should simply not impose this below TCSCs.
>>
>
> "properly written"? What makes it properly written? It just means what you
> want as the new policy.
>

"Properly written" = "Written to take into account the new relaxed
policy proposed by Matthew (not me), if and when adopted", I have argued
above why I think this should be technically doable.

Matthew (not I) have argued why such a policy might be good. I have
merely provided technical input as to how to handle certain
implementation issues.

>
>> With the above and similar measures (mostly) already in place, I see no
>> good reason to subject TCSCs to any of the administrative burdens
>> imposed on public SubCAs.
>
>
> While I hope I've laid them out for you in a way that can convince you, I
> also suspect that the substance will be disregarded because of the source.
> That said, the risk of breaking something is not done lightly, and while
> you may feel it's the site operators fault - and perhaps even rightfully so
> - the cost is not born by the site operator (even when users can't get to
> their site!) or the CA (who didn't warn "hard enough"), but by the user.
> And systems that externalize cost onto the end user are not good systems.
>

Believe me, I have read the substance of your messages, and have tried
to respond with intelligent arguments, and/or acceptance where
applicable.

Ryan Sleevi

May 23, 2017, 12:19:13 PM
to Jakob Bohm, mozilla-dev-security-policy
On Tue, May 23, 2017 at 11:52 AM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> Note that as this is about a proposed future policy, it concerns validation
> code updated if and when such a policy is enacted. Current validation
> code has no reason to check a non-existent policy.
>

Mozilla strives, in the best possible way, to be interoperable with other
vendors, and not to introduce security risks that would affect others, nor
unduly require things that would inhibit others.

In this aspect, the proposal of TCSCs - and the rest of the radical changes
you propose - is incompatible with many other libraries.

While it's true that Mozilla could change their code at any point, much
of the Web Platform's evolution - and in particular, TLS - has been
achieved through multi-vendor collaboration.

This is why it's important, when making proposals, to not simply work on a
blank canvas and attempt to sketch something, but to be aware of the lines
in the ecosystem that exist and the opportunities for collaboration - and
the times in which it's important to "go it alone".

> What part of "Has DNS/e-mail name constraints to at least second-level
> domains or TLDs longer than 3 chars", "Has DN name constraints that
> limit at least O and C", "Has EKU limitations that exclude AnyEKU and
> anything else problematic", "Has lifetime and other general constraints
> within the limits of EE certs" AND "Has a CTS" cannot be detected
> programmatically?
>

These are not things that can be reliably implemented across the ecosystem,
nor would they be reasonable costs to bear for the proposed benefits, no.


> Or could this be solved by require such "TCSC light" SubCA certs to
> carry a specific CAB/F policy OID with CT-based community enforcement
> that all SubCA certs with this policy OID comply with the more stringent
> non-computable requirements likely to be in such a policy (if passed)?
>

No.


> I am trying to limit the scope of this to the kind of TCSC (Technically
> Constrained SubCA) that Matthew was advocating for. Thus none of this
> applies to long lived or public SubCAs.
>
> If an organization wants ongoing TCSC availability, they may subscribe
> to getting a fresh TCSC halfway through the lifetime of the previous
> one, to provide a constantly overlapping chain of SubCAs.
>

Except this doesn't meaningfully address the "day+1" issuance problem that
was highlighted, unless you proposed that the non-nesting constraints that
I mentioned aren't relevant.


> It would be more like a disclaimer telling their customers that if they
> issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
> won't work in a lot of browsers, and that for their own protection they
> should issue only SHA-256 or stronger certs. So the incentive for the
> issuing CA is to minimize tech support calls and angry customers.
>
> If the CA fails to inform their customers, the customer will get angry,
> but the WebPKI will be unaffected.


And I'm trying to tell you that your model of the incentives is wrong, and
it does not work like that, as can be shown by every other real world
deprecation.

If they made the disclaimer, and yet still 30% of sites had these, browsers
would not turn it off. As such, the disclaimer would be pointless - the
incentive structure is such that browsers aren't going to start throwing
users under the bus.

When the browser makes the change, the issuing CA does not get the calls.
The site does not get the calls. The browser gets the anger. This is
because "most recent to change is first to blame" - and it was the browser,
not the CA, that made the most recent change.

This is how it has worked out for every change in the past. And while I
appreciate your optimism that it would work with TCSCs, there's nothing in
this proposal that would change that incentive structure, such as to ensure
that you don't have 30% of the Internet doing "Whatever thing will be
deprecated", and as a consequence, _it will not be deprecate_.


> One could also add a requirement that certain occasional messages,
> prewritten by the CAB/F, be forwarded verbatim to all TCSC holders.
> For example, a notice about the SHA-1 deprecation (historic example).
>

The CA/Browser Forum did not produce such documentation, but we also have ample
evidence that the notices were disregarded, not forwarded to the right
people, went to people whose mailboxes were turned off (since it was 3
years since they last got a cert), etc.

Again, I appreciate your optimism that it would work, but I'm speaking from
experience and evidence to say it does not. That's the core of the problem
here - TCSCs being 'unrestricted' means that the existing problems in making
evolutionary changes amplify, the number of parties to update grows, and
the ability to make change significantly slows.

It may be that unrestricted TCSCs are 'so amazing' that they justify this
cost to the ecosystem. If that's the case, it's a far more productive
avenue to discuss _why_ that is, rather than set out the _how_ to do it,
since the well of "how" has been plumbed deep by browsers trying to make
positive security changes, and we know that these proposals don't work.
However, if there's a compelling reason why - why the Web PKI should move
to a more rigid, harder to change system - then it could be worth
re-exploring.

Matthew Hardeman

May 23, 2017, 12:33:50 PM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, May 23, 2017 at 10:53:03 AM UTC-5, Jakob Bohm wrote:

>
> Or could this be solved by require such "TCSC light" SubCA certs to
> carry a specific CAB/F policy OID with CT-based community enforcement
> that all SubCA certs with this policy OID comply with the more stringent
> non-computable requirements likely to be in such a policy (if passed)?
>

I wish to clarify a couple of points about what I proposed.

With respect to the topic of this thread -- the certificate policy & disclosure scope at Mozilla -- I have proposed that particular categories of intermediate certificate (name constrained subCAs with particular features) might reasonably be subjected to a lower burden, requiring no formal disclosure to Mozilla beyond that their existence and issuance be CT logged. Also, I proposed that further subCAs and EEs issued descending from those constrained subCAs be regarded as entirely beyond the scope of the Mozilla Policy and disclosure.

I maintain that I've not seen a compelling technical reason presented that would suggest that such a change to Mozilla policy would reduce security in the Web PKI if adopted. If that is the case, reducing the requirement for disclosure to the CCADB and attendant audit statements, etc, for these TCSCs would seem to reduce the work burden on Mozilla as well as on the public CAs.

Quite separately, I would personally like to see some BR changes similarly in line with the above, but I am not positioned to make such a request, as I am not a CA. Further, I acknowledge that this thread is probably not the appropriate forum for that particular case to be pleaded.

Having said all of that, I wish to make clear that I have not proposed that the technological burdens of certificate issuance by an entity utilizing a technically constrained subCA should be lightened in actual issuance practice:

Specifically, I am a supporter of Certificate Transparency. I see no reason, for example, why an EE certificate issued subordinate to a TCSC should be exempted from Chrome's CT Policy, etc. An enterprise PKI utilizing a TCSC could certainly submit the certificates they issue to CT logging. Those same certificates do, in fact, chain to trusted roots. I can think of no reason that a CT log would reject those submissions.
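
As a sketch of that claim: an RFC 6962 log accepts any chain that terminates in a root it trusts, via the add-chain endpoint. Something like the following (Python standard library only; the log URL is a placeholder, not a real log) would be all an enterprise needs:

    import base64, json, urllib.request

    LOG_URL = "https://ct.example.com"  # placeholder; any RFC 6962 log

    def submit_chain(der_certs):
        # der_certs: [EE cert, TCSC, further intermediates, ...] as DER bytes.
        body = json.dumps(
            {"chain": [base64.b64encode(c).decode() for c in der_certs]}
        ).encode()
        req = urllib.request.Request(
            LOG_URL + "/ct/v1/add-chain", data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)  # SCT: version, log ID, timestamp, signature

The returned SCT can then be delivered via the TLS extension or OCSP stapling as usual; nothing in the submission protocol distinguishes a TCSC-issued leaf from any other.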

I wish to clarify that my position, that EE certificates issued subordinate to a name constrained CA need be of no concern to Mozilla and the other programs from a monitoring perspective, relies upon the quite limited scope of effect the EE certificate can have after accounting for the constraints in the TCSC.

In short, I believe that enforcing audits, etc, over what an enterprise that has been issued a proper TCSC actually does with that TCSC is unnecessary, because anything they could do would be limited in scope to their own operations. This includes issuing certificates which don't comply with CT logging, etc. I fully believe the same standards of technical constraint applied to certificates of a public CA would also apply to trust in certificates issued subordinate to a TCSC.

I just think there's no need for concern if someone quite clever (whatever that means) decides to ASN.1-encode a Trollface GIF and roll that into an EE cert subordinate to their corporate TCSC. No need to report that as a BR violation. No need for the sponsoring public CA to be concerned if they discover that upon audit, because I think there's no need for said audit. Anything that audit could have found could have been discovered by browser validation code, with the judgement rendered instantly and with proportionate consequence (i.e. this is garbage, not a certificate, I'm going with the untrusted interstitial error).

Jakob Bohm

May 23, 2017, 1:19:03 PM
to mozilla-dev-s...@lists.mozilla.org
On 23/05/2017 18:18, Ryan Sleevi wrote:
> On Tue, May 23, 2017 at 11:52 AM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>>
>> Note that as this is about a proposed future policy, it concerns validation
>> code updated if and when such a policy is enacted. Current validation
>> code has no reason to check a non-existent policy.
>>
>
> Mozilla strives, in the best possible way, to be interoperable with other
> vendors, and not to introduce security risks that would affect others, nor
> unduly require things that would inhibit others.
>
> In this aspect, the proposal of TCSCs - and the rest of the radical changes
> you propose - is incompatible with many other libraries.
>

NOT my proposal; I was trying to help out with the technical details of
Matthew's proposal, that's all.

> While it's true that Mozilla could change their code at any point, much
> of the Web Platform's evolution - and in particular, TLS - has been
> achieved through multi-vendor collaboration.
>

Which I repeatedly referred to; in my latest e-mail I phrased it as "if
and when" such a policy would be enacted.

> This is why it's important, when making proposals, to not simply work on a
> blank canvas and attempt to sketch something, but to be aware of the lines
> in the ecosystem that exist and the opportunities for collaboration - and
> the times in which it's important to "go it alone".
>

I fully agree with that, and wrote so.

>> What part of "Has DNS/e-mail name constraints to at least second-level
>> domains or TLDs longer than 3 chars", "Has DN name constraints that
>> limit at least O and C", "Has EKU limitations that exclude AnyEKU and
>> anything else problematic", "Has lifetime and other general constraints
>> within the limits of EE certs" AND "Has a CTS" cannot be detected
>> programmatically?
>>
>
> These are not things that can be reliably implemented across the ecosystem,
> nor would they be reasonable costs to bear for the proposed benefits, no.
>

You seem keen to reject things out of hand, with no explanation.
Good luck convincing Matthew or others that way.

>
>> Or could this be solved by require such "TCSC light" SubCA certs to
>> carry a specific CAB/F policy OID with CT-based community enforcement
>> that all SubCA certs with this policy OID comply with the more stringent
>> non-computable requirements likely to be in such a policy (if passed)?
>>
>
> No.
>
>
>> I am trying to limit the scope of this to the kind of TCSC (Technically
>> Constrained SubCA) that Matthew was advocating for. Thus none of this
>> applies to long lived or public SubCAs.
>>
>> If an organization wants ongoing TCSC availability, they may subscribe
>> to getting a fresh TCSC halfway through the lifetime of the previous
>> one, to provide a constantly overlapping chain of SubCAs.
>>
>
> Except this doesn't meaningfully address the "day+1" issuance problem that
> was highlighted, unless you proposed that the non-nesting constraints that
> I mentioned aren't relevant.

The idea would be: TCSC issued for the BR maximum period (N years plus M
months), fresh TCSC issued every M months; the customer can always issue
EE certs valid up to at least N years out.

I do realize the M months in the BRs are for another business purpose
related to renewal payments, but because TCSCs issue to non-paying
internal users, they don't need those months for the payment use case.
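
A quick sketch of that cadence, with assumed example values (N = 2 years,
M = 3 months, months approximated as 30 days), just to show the newest
TCSC always leaves at least N years of issuance headroom:

    from datetime import date, timedelta

    N_YEARS, M_MONTHS = 2, 3  # assumed values, for illustration only
    LIFETIME = timedelta(days=N_YEARS * 365 + M_MONTHS * 30)
    REISSUE_EVERY = timedelta(days=M_MONTHS * 30)

    def tcsc_schedule(start: date, count: int):
        # Yield (notBefore, notAfter) for successive overlapping TCSCs.
        for i in range(count):
            not_before = start + i * REISSUE_EVERY
            yield not_before, not_before + LIFETIME

    for nb, na in tcsc_schedule(date(2017, 6, 1), 3):
        print(nb, "->", na)  # consecutive windows overlap by N years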

>
>
>> It would be more like a disclaimer telling their customers that if they
>> issue a SHA-1 cert after 2016-01-01 from their SHA-256 TCSC, it probably
>> won't work in a lot of browsers, and that for their own protection they
>> should issue only SHA-256 or stronger certs. So the incentive for the
>> issuing CA is to minimize tech support calls and angry customers.
>>
>> If the CA fails to inform their customers, the customer will get angry,
>> but the WebPKI will be unaffected.
>
>
> And I'm trying to tell you that your model of the incentives is wrong, and
> it does not work like that, as can be shown by every other real world
> deprecation.
>
> If they made the disclaimer, and yet still 30% of sites had these, browsers
> would not turn it off. As such, the disclaimer would be pointless - the
> incentive structure is such that browsers aren't going to start throwing
> users under the bus.
>
> When the browser makes the change, the issuing CA does not get the calls.
> The site does not get the calls. The browser gets the anger. This is
> because "most recent to change is first to blame" - and it was the browser,
> not the CA, that made the most recent change.
>
> This is how it has worked out for every change in the past. And while I
> appreciate your optimism that it would work with TCSCs, there's nothing in
> this proposal that would change that incentive structure, such as to ensure
> that you don't have 30% of the Internet doing "Whatever thing will be
> deprecated", and as a consequence, _it will not be deprecate_.
>

OK, that is a sad state of affairs, that someone will have to solve for
this to fly.

>
>> One could also add a requirement that certain occasional messages,
>> prewritten by the CAB/F, be forwarded verbatim to all TCSC holders.
>> For example, a notice about the SHA-1 deprecation (historic example).
>>
>
> The CA/Browser Forum did not produce such documentation, but we also have ample
> evidence that the notices were disregarded, not forwarded to the right
> people, went to people whose mailboxes were turned off (since it was 3
> years since they last got a cert), etc.
>

With TCSC renewal every few months (see above), such dead addresses
would be less likely. And we are talking about a medium (someone said
10000) number of admins, not everyone with an HTTPS website and
associated unexpired cert.

> Again, I appreciate your optimism that it would work, but I'm speaking from
> experience and evidence to say it does not. That's the core of the problem
> here - TCSCs being 'unrestricted' means that the existing problems in making
> evolutionary changes amplify, the number of parties to update grows, and
> the ability to make change significantly slows.
>
> It may be that unrestricted TCSCs are 'so amazing' that they justify this
> cost to the ecosystem. If that's the case, it's a far more productive
> avenue to discuss _why_ that is, rather than set out the _how_ to do it,
> since the well of "how" has been plumbed deep by browsers trying to make
> positive security changes, and we know that these proposals don't work.
> However, if there's a compelling reason why - why the Web PKI should move
> to a more rigid, harder to change system - then it could be worth
> re-exploring.
>

I'll leave that argumentation to Matthew and others actually proposing
this; I was just helping at the technical corners.

Ryan Sleevi

May 23, 2017, 1:39:05 PM
to Matthew Hardeman, mozilla-dev-security-policy
On Tue, May 23, 2017 at 12:33 PM, Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I just think there's no need for concern if someone quite clever (whatever
> that means) decides to ASN.1-encode a Trollface GIF and roll that into an
> EE cert subordinate to their corporate TCSC. No need to report that as a
> BR violation. No need for the sponsoring public CA to be concerned if they
> discover that upon audit, because I think there's no need for said audit.
> Anything that audit could have found could have been discovered by browser
> validation code, with the judgement rendered instantly and with
> proportionate consequence (i.e. this is garbage, not a certificate, I'm
> going with the untrusted interstitial error).


I think it may be that you're looking at this issue as a per-site matter,
rather than an ecosystem issue.

I agree that, in theory, the most 'damage' you could do is to a single site
(although there are TCSCs with dozens or hundreds of domains). But from an
ecosystem perspective, it's incredibly damaging - the ability to reject
trollface GIFs used to exploit users, for example, is now no longer a
matter of contacting CAs / updating the BRs, but a coordinated change
across the entire ecosystem, where turning off support can easily break
sites (and thus cause users more pain).

Even if we start with a maximally strict model in clients (which, for what
it's worth, RFC 5280 specifically advises against - and thankfully so,
otherwise something like CT could never have been deployed), as we change
the ecosystem, we'll need to deprecate things.

Consider this: There is nothing stopping a CA from making a "TCSC in a
box". I am quite certain that, as proposed, it would be far more economical
for CAs to spin up a TCSC for every one of their customers, and then allow
complete and total issuance from it. This is already on the border of
possibility in today's world, due to a loophole in intermediate key generation
ceremony text. By posting it here, I'm sure some enterprising CA will
realize this new opportunity :)

The mitigation, however, has been that it's not a "wild west" PKI (the
very thing the BRs set out to stop), but instead a constrained profile.

Setting aside even the 'damage' aspect, consider the ecosystem impact.
Assume a wild west - we would not have been able to effectively curtail and
sunset SHA-1. We would not have been able to deploy and require Certificate
Transparency. We would not have been able to raise the minimum RSA key
size. That's because all of these things, at the time the efforts began,
were at significantly high rates to cause breakages. Even with the Baseline
Requirements, even with ample communications and PR blitzes, these changes
still were razor thin in terms of the breakages vendors would be willing to
tolerate. Microsoft and Apple, for example, weren't able to tolerate the
initial SHA-1 pain, and relied on Chrome and Firefox to push the ecosystem
forward in that respect.

It's in this holistic picture we should be mindful of the risk of these
changes - the ability to make meaningful change, in a timely fashion, while
minimizing breakage. And while it's easy to say that "Oh, the site's wrong,
interstitial" - that just acculturates users to errors, inducing warning
fatigue and undermining the value of having errors at all. It also
undermines the security assurances of HTTPS itself - because now it's
harder to ensure it meets whatever minimum bar deemed necessary to ensure
users confidentiality, privacy, and integrity.

Matthew Hardeman

May 23, 2017, 3:45:43 PM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, May 23, 2017 at 12:39:05 PM UTC-5, Ryan Sleevi wrote:

> Setting aside even the 'damage' aspect, consider the ecosystem impact.
> Assume a wildwest - we would not have been able to effectively curtail and
> sunset SHA-1. We would not have been able to deploy and require Certificate
> Transparency. We would not have been able to raise the minimum RSA key
> size. That's because all of these things, at the time the efforts began,
> were at significantly high rates to cause breakages. Even with the Baseline
> Requirements, even with ample communications and PR blitzes, these changes
> still were razor thin in terms of the breakages vendors would be willing to
> tolerate. Microsoft and Apple, for example, weren't able to tolerate the
> initial SHA-1 pain, and relied on Chrome and Firefox to push the ecosystem
> forward in that respect.
>

I don't disagree with the ecosystem impact concept to which you have referred. Where I diverge is in my belief that we already do have a wild west situation. There are LOTS of Root CA members and lots of actual roots and way way more unconstrained intermediates. So many that SHA-1 was already a nightmare to deprecate and move forward on.

As a brief aside, let's talk about SHA-1 migration and the lessons that should have been learned earlier and how they weren't and how I don't see anything to suggest that it will be better next time, regardless of whether my humble proposal even got consideration -- much less that someone should take up the torch and carry it to adoption. History already provided a great example of urgent need for deprecation of a hash algorithm in the Web PKI. The MD5 deprecation. Not having been a participant other than as an end-enterprise in either of these slow moving processes, I can not say for certain... but... A few Google searches don't make me believe that the SHA-1 migration was any smoother or more efficient than the MD5 migration. As I read, it appears to be arguable that the SHA-1 migration to SHA-256 was even slower and messier.

The point I come around to is that in most ecosystems, there's a "criticality" of size past which coordinating changes gets harder. In many such ecosystems, once you cross that boundary, further increases in the size of the ecosystem and the number of unique participants have a diminishing effect on the overall difficulty of coordinating changes.

What rational basis makes you believe that the next hash algorithm migration will be better than this most recent one?

The way I see it, absent some incredible new mitigating circumstances, the next time a rotation to a new hash algorithm is needed, the corpus of Root CA participants and Root CA Certificates / Issuance systems will be larger than it was this time. It seems to get larger all the time, as a trend.

My argument is: as the probability of a smooth transition asymptotically approaches 0, taking actions which push that probability still closer to 0 has increasingly lower practical cost, as we can just admit it's not going to be a smooth transition.

At this point, I feel I should back away. I feel I've made a fairly compelling case (at least, I shall say, the best case for it that I could make) for the limited impact that the specific changes as to Mozilla policy pertaining to audit & disclosure for TCSCs compliant to certain guidelines would have. I also accept that this isn't really the place to lobby for baseline requirements changes. A CA will have to carry that torch, if any are interested.

I have very much enjoyed this dialogue and hope that I've contributed some useful thoughts to the discussion.

Thanks,

Matt

Ryan Sleevi

May 23, 2017, 6:56:21 PM
to Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org
On Tue, May 23, 2017 at 3:45 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Tuesday, May 23, 2017 at 12:39:05 PM UTC-5, Ryan Sleevi wrote:
>
> > Setting aside even the 'damage' aspect, consider the ecosystem impact.
> > Assume a wildwest - we would not have been able to effectively curtail
> and
> > sunset SHA-1. We would not have been able to deploy and require
> Certificate
> > Transparency. We would not have been able to raise the minimum RSA key
> > size. That's because all of these things, at the time the efforts began,
> > were at significantly high rates to cause breakages. Even with the
> Baseline
> > Requirements, even with ample communications and PR blitzes, these
> changes
> > still were razor thin in terms of the breakages vendors would be willing
> to
> > tolerate. Microsoft and Apple, for example, weren't able to tolerate the
> > initial SHA-1 pain, and relied on Chrome and Firefox to push the
> ecosystem
> > forward in that respect.
> >
>
> I don't disagree with the ecosystem impact concept to which you have
> referred. Where I diverge is in my belief that we already do have a wild
> west situation. There are LOTS of Root CA members and lots of actual roots
> and way way more unconstrained intermediates. So many that SHA-1 was
> already a nightmare to deprecate and move forward on.
>
> As a brief aside, let's talk about the SHA-1 migration, the lessons
> that should have been learned earlier but weren't, and why I see
> nothing to suggest it will go better next time -- regardless of
> whether my humble proposal ever gets consideration, much less whether
> someone takes up the torch and carries it to adoption. History had
> already provided a great example of an urgent need to deprecate a
> hash algorithm in the Web PKI: the MD5 deprecation. Not having been a
> participant other than as an end-enterprise in either of these
> slow-moving processes, I cannot say for certain... but a few Google
> searches don't convince me that the SHA-1 migration was any smoother
> or more efficient than the MD5 migration. As I read it, it is arguable
> that the SHA-1 to SHA-256 migration was even slower and messier.


I don't think that is a reasonable conclusion. The MD5 transition took 5
years from active exploit; SHA-1 was dead the same week as the shattered.it
work. Far more middleboxes were prepared for the transition - and browsers
had much smoother transitions.

Was it ideal? No.
Was it significantly better? Yes - in part because of the BRs banning
new SHA-1 issuance.

>
> The point I come around to is that in most ecosystems, there's a
> critical size beyond which any change becomes hard to coordinate.
> Once an ecosystem crosses that boundary, further growth in its size
> and in the number of unique participants has a diminishing marginal
> effect on the overall difficulty of coordinating change.
>
> What rational basis makes you believe that the next hash algorithm
> migration will be better than this most recent one?


See above. The CA/Browser Forum continues to discuss the lessons learned,
but it's certainly gotten better.

But more importantly - there are plenty of incremental changes - like CT -
that don't require wholesale replacements. For the next five years, I'm
particularly concerned with improving OCSP Stapling and CT support - and
those certainly don't suffer (from the CA side) from the limits you describe.

> The way I see it, absent some incredible new mitigating circumstances,
> the next time a rotation to a new hash algorithm is needed, the corpus
> of Root CA participants and Root CA Certificates / Issuance systems
> will be larger than it was this time; the trend is that it grows
> continually.


I disagree. I believe we're getting better, in time.

> At this point, I feel I should back away. I feel I've made a fairly
> compelling case (at least, I shall say, the best case for it that I
> could make) for the limited impact of the specific changes to Mozilla
> policy on audit & disclosure for TCSCs compliant with certain
> guidelines. I also accept that this isn't really the place to lobby
> for Baseline Requirements changes. A CA will have to carry that
> torch, if any are interested.


Oh, I would say this is absolutely the place (although perhaps in a forked
thread) for that discussion. The Baseline Requirements reflect what
browsers require, and if you want to change browser requirements, there is
no greater place for that public discussion than Mozilla.

To be clear: I'm critical of the goal in large part because I used to argue
the same position you're now arguing, with many of the same arguments. The
experience of enacting meaningful change, the challenges therein, and a lot
of time spent contemplating the economic incentives for the various
ecosystem actors to support change have left me far more concerned about
the potential harm :)

Gervase Markham

unread,
May 31, 2017, 11:34:34 AM5/31/17
to mozilla-dev-s...@lists.mozilla.org
On 19/05/17 14:47, Gervase Markham wrote:
> We need to have a discussion about the appropriate scope for:
>
> 1) the applicability of Mozilla's root policy
> 2) required disclosure in the CCADB

I've now reviewed the previous discussion we had on this topic, here:
https://groups.google.com/forum/#!msg/mozilla.dev.security.policy/ZMUjQ6xHrDA/ySofsF_PAgAJ

In it, Ryan writes: "This wording implies that technically constrained
sub-CAs, from a Mozilla Policy standpoint, are not required to adhere to
the Baseline Requirements."

I don't believe that's true now, if it ever was. I think the scope
statement in policy 2.4.1 section 1.1 and the BR applicability statement
in section 2.3 make it clear that Mozilla policy applies, and Mozilla
policy expects the BRs to apply, to all certificates in publicly-trusted
hierarchies except those which are constrained to not issue/be used for
SSL or email.

However, that discussion suggests to me that we should do the following:

* Given CT, and the need within it to disclose TCSCs, the privacy
argument seems to have been abandoned. So I think it's reasonable to
also require disclosure of TCSCs themselves. This allows checking that
they are indeed appropriately constrained. The obvious place to put them
is the CCADB but it's possible we could consider something CT-based, as
we don't need to track the paperwork the CCADB stores (audits, etc.).

* So the options for intermediate certs would change from "(technically
constrained) or (publicly disclosed and audited)" to "(publicly
disclosed) and (technically constrained or fully audited)".

* In terms of my original message, this would mean adding type F) to the
CCADB disclosure list.

* We should consider going above and beyond the BRs by tweaking the
parameters for the section 8.7 audit of the certs below a TCSC. At the
moment, it's "the greater of one certificate or at least three percent
of the Certificates issued". I think it should be more like: MAX(MIN(5
certificates, all certificates), 3% of certificates). In other words:

Issued  Audited
     0        0
     1        1
   ...      ...
     5        5
     6        5
   ...      ...
   166        5
   167        6
   ...      ...

Auditing just a single certificate (currently OK up until 33 are issued)
makes it too easy to overlook problems when volumes are small.
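
For concreteness, here is a minimal sketch of the proposed rule (my
illustration, not normative text; it assumes "at least three percent"
means rounding up to a whole certificate):

    import math

    def audit_sample_size(issued: int) -> int:
        # Proposed: MAX(MIN(5 certificates, all certificates), 3%).
        if issued <= 0:
            return 0
        return max(min(5, issued), math.ceil(0.03 * issued))

    # Spot-check against the table above:
    for n in (1, 5, 6, 33, 166, 167):
        print(n, audit_sample_size(n))  # 1, 5, 5, 5, 5, 6

The current BR rule is effectively max(1, math.ceil(0.03 * issued)),
which returns 1 all the way up to 33 issued certificates.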

Comments?

Gerv

Matthew Hardeman

unread,
May 31, 2017, 12:26:26 PM5/31/17
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, May 31, 2017 at 10:34:34 AM UTC-5, Gervase Markham wrote:

> However, that discussion suggests to me that we should do the following:
>
> * Given CT, and the need within it to disclose TCSCs, the privacy
> argument seems to have been abandoned. So I think it's reasonable to
> also require disclosure of TCSCs themselves. This allows checking that
> they are indeed appropriately constrained. The obvious place to put them
> is the CCADB but it's possible we could consider something CT-based, as
> we don't need to track the paperwork the CCADB stores (audits, etc.).
>

Regardless of whether policy ultimately dictates that they be disclosed in the CCADB, I do think mandatory disclosure via CT mechanisms would encourage public participation in discovering improperly constrained certificates. It would also allow potentially affected third parties to monitor for mis-issued TCSCs whose constraints would permit infringement of those parties' properties.
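
As a rough illustration of the kind of automated check such monitoring
could perform, here is a minimal sketch using the Python "cryptography"
library (a recent version; the file name is illustrative, not from any
real CT feed):

    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    # A candidate TCSC pulled from a CT log (path is illustrative).
    with open("tcsc.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    try:
        nc = cert.extensions.get_extension_for_oid(
            ExtensionOID.NAME_CONSTRAINTS)
    except x509.ExtensionNotFound:
        print("No name constraints: not technically constrained by name")
    else:
        if not nc.critical:
            print("Warning: name constraints extension is not critical")
        print("Permitted:", nc.value.permitted_subtrees)
        print("Excluded: ", nc.value.excluded_subtrees)

An affected third party could filter the permitted subtrees against its
own domains to spot a TCSC constrained onto names it never authorized.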

> * So the options for intermediate certs would change from "(technically
> constrained) or (publicly disclosed and audited)" to "(publicly
> disclosed) and (technically constrained or fully audited)".
>
> * In terms of my original message, this would mean adding type F) to the
> CCADB disclosure list.

That would seem to be the case, if CCADB disclosure of these is still required in light of CT. If you're not also tracking, for TCSCs, the additional matters one would want in the CCADB for roots and for less stringently constrained intermediates, one wonders whether there's really value in having them in the CCADB at all.

>
> * We should consider going above and beyond the BRs by tweaking the
> parameters for the section 8.7 audit of the certs below a TCSC. At the
> moment, it's "the greater of one certificate or at least three percent
> of the Certificates issued". I think it should be more like: MAX(MIN(5
> certificates, all certificates), 3% of certificates). In other words:
>
> Issued  Audited
>      0        0
>      1        1
>    ...      ...
>      5        5
>      6        5
>    ...      ...
>    166        5
>    167        6
>    ...      ...
>
> Auditing just a single certificate (currently OK up until 33 are issued)
> makes it too easy to overlook problems when volumes are small.
>
> Comments?

I still maintain that if the TCSC is correctly constructed and the validation library is correct, it would seem difficult for even random hot garbage wrapped with a correct signature by the TCSC's key to pose an actual immediate risk to the Web PKI. If I'm right about that, I would ask Mozilla to deviate in the other direction, waiving, in Mozilla's policy, the 8.7 requirements for the certs below the TCSC. That said, if the decision is to require compliance in line with or exceeding the 8.7 requirement, I do like the direction you're heading. A sample of one is too small to identify a pattern of behavior.

>
> Gerv