
Policy 2.5 Proposal: Add definition of "mis-issuance"


Gervase Markham

May 31, 2017, 12:04:53 PM
to mozilla-dev-s...@lists.mozilla.org
It has been suggested we need a formal definition of what we consider
mis-issuance. The closest we have currently is a couple of sentences in
section 7.3:

"A certificate that includes domain names that have not been verified
according to section 3.2.2.4 of the Baseline Requirements is considered
to be mis-issued. A certificate that is intended to be used only as an
end entity certificate but includes a keyUsage extension with values
keyCertSign and/or cRLSign or a basicConstraints extension with the cA
field set to true is considered to be mis-issued."

This is clearly not an exhaustive list; one would also want to include
BR violations, RFC violations, and insufficient EV vetting, at least.
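Those two section 7.3 criteria are mechanical enough to sketch as a check. A minimal illustration (the helper and its pre-parsed inputs are hypothetical; the policy itself mandates no implementation):

```python
def violates_section_7_3(is_end_entity: bool,
                         key_cert_sign: bool,
                         crl_sign: bool,
                         ca_flag: bool,
                         all_domains_validated: bool) -> bool:
    """Return True if a certificate matches section 7.3's mis-issuance criteria.

    Inputs are assumed to be pre-parsed from the certificate:
      - key_cert_sign / crl_sign: bits of the keyUsage extension
      - ca_flag: the cA boolean of the basicConstraints extension
      - all_domains_validated: whether every included domain name was
        verified per BR section 3.2.2.4
    """
    # Any unvalidated domain name is mis-issuance.
    if not all_domains_validated:
        return True
    # An end-entity certificate must not carry CA-only attributes.
    if is_end_entity and (key_cert_sign or crl_sign or ca_flag):
        return True
    return False
```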

The downside of defining it is that CAs might try and rules-lawyer us in
a particular situation.

Here's some proposed text which provides more clarity while hopefully
avoiding rules-lawyering:

"The category of mis-issued certificates includes (but is not limited
to) those issued to someone who should not have received them, those
containing information which was not properly validated, those having
incorrect technical constraints, and those using algorithms other than
those permitted."

If you have suggestions on how to improve this definition, let's keep
brevity in mind :-)

This is: https://github.com/mozilla/pkipolicy/issues/76

-------

This is a proposed update to Mozilla's root store policy for version
2.5. Please keep discussion in this group rather than on Github. Silence
is consent.

Policy 2.4.1 (current version):
https://github.com/mozilla/pkipolicy/blob/2.4.1/rootstore/policy.md
Update process:
https://wiki.mozilla.org/CA:CertPolicyUpdates

Matthew Hardeman

May 31, 2017, 1:02:25 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, May 31, 2017 at 11:04:53 AM UTC-5, Gervase Markham wrote:
>
> If you have suggestions on how to improve this definition, let's keep
> brevity in mind :-)

Perhaps some reference to technologically incorrect syntax (i.e. an incorrectly encoded certificate) being a mis-issuance?

How far does "those containing information which was not properly validated" go? Does that leave the opportunity for someone's tortured construction of the rule to suggest that a certificate that everyone agrees is NOT mis-issued is in fact technically mis-issued?

Gervase Markham

Jun 1, 2017, 4:35:56 AM
to Matthew Hardeman
On 31/05/17 18:02, Matthew Hardeman wrote:
> Perhaps some reference to technologically incorrect syntax (i.e. an incorrectly encoded certificate) being a mis-issuance?

Well, if it's so badly encoded Firefox doesn't recognise it, we don't
care too much (apart from how it speaks to incompetence). If Firefox
does recognise it, then I'm not sure "misissuance" is the right word if
all the data is correct.

> How far does "those containing information which was not properly validated" go? Does that leave the opportunity for someone's tortured construction of the rule to suggest that a certificate that everyone agrees is NOT mis-issued is in fact technically mis-issued?

Certs containing data which is not properly validated, which
nevertheless happens by chance to be correct, are still mis-issued,
because they are BR-non-compliant. It may be hard to detect this case,
but I think it should be in the definition. A CA has a positive duty to
validate/revalidate all data within the timescales established.

Gerv

Ryan Sleevi

Jun 1, 2017, 8:50:22 AM
to Gervase Markham, mozilla-dev-security-policy, Matthew Hardeman
On Thu, Jun 1, 2017 at 4:35 AM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 31/05/17 18:02, Matthew Hardeman wrote:
> > Perhaps some reference to technologically incorrect syntax (i.e. an
> incorrectly encoded certificate) being a mis-issuance?
>
> Well, if it's so badly encoded Firefox doesn't recognise it, we don't
> care too much (apart from how it speaks to incompetence). If Firefox
> does recognise it, then I'm not sure "misissuance" is the right word if
> all the data is correct.
>

I would encourage you to reconsider this, or perhaps I've misunderstood
your position. To the extent that Mozilla's mission includes "The
effectiveness of the Internet as a public resource depends upon
interoperability (protocols, data formats, content) ....", the
well-formedness and encoding directly affects Mozilla users (sites working
in Vendors A, B, C but not Mozilla) and the broader ecosystem (sites
Mozilla users are protected from that vendors A, B, C are not).

I think considering this in the context of "CA problematic practices" may
help make this clearer - they are all things that speak to either
incompetence or confusion (and a generous dose of Hanlon's Razor) - but
their compatibility issues presented both complexity and risk to Mozilla
users.

So I would definitely encourage that improper application of the protocols
and data formats constitutes misissuance, as they directly affect
interoperability and indirectly affect security :)


>
> > How far does "those containing information which was not properly
> validated" go? Does that leave the opportunity for someone's tortured
> construction of the rule to suggest that a certificate that everyone agrees
> is NOT mis-issued is in fact technically mis-issued?
>
> Certs containing data which is not properly validated, which
> nevertheless happens by chance to be correct, are still mis-issued,
> because they are BR-non-compliant. It may be hard to detect this case,
> but I think it should be in the definition. A CA has a positive duty to
> validate/revalidate all data within the timescales established.
>

Wholeheartedly agree.

Gervase Markham

Jun 1, 2017, 9:03:33 AM
to ry...@sleevi.com, Matthew Hardeman
On 01/06/17 13:49, Ryan Sleevi wrote:
> I would encourage you to reconsider this, or perhaps I've misunderstood
> your position. To the extent that Mozilla's mission includes "The
> effectiveness of the Internet as a public resource depends upon
> interoperability (protocols, data formats, content) ....", the
> well-formedness and encoding directly affects Mozilla users (sites working
> in Vendors A, B, C but not Mozilla) and the broader ecosystem (sites
> Mozilla users are protected from that vendors A, B, C are not).

My point is not that we are entirely indifferent to such problems, but
that perhaps the category of "mis-issuance" is the wrong one for such
errors. I guess it depends what we mean by "mis-issuance" - which is the
entire point of this discussion!

So, if mis-issuance means there is some sort of security problem, then
my original definition still seems like a good one to me. If
mis-issuance means any problem where the certificate is not as it should
be, then we need a wider definition.

I wonder whether we need a new word for certificates which are bogus for
a non-security-related reason. "Mis-constructed"?

Gerv

Matthew Hardeman

Jun 1, 2017, 2:35:23 PM
to mozilla-dev-s...@lists.mozilla.org
On Thursday, June 1, 2017 at 8:03:33 AM UTC-5, Gervase Markham wrote:

>
> My point is not that we are entirely indifferent to such problems, but
> that perhaps the category of "mis-issuance" is the wrong one for such
> errors. I guess it depends what we mean by "mis-issuance" - which is the
> entire point of this discussion!
>
> So, if mis-issuance means there is some sort of security problem, then
> my original definition still seems like a good one to me. If
> mis-issuance means any problem where the certificate is not as it should
> be, then we need a wider definition.
>

It was in that spirit that I raised the questions that I did.

For example, when I mentioned "those containing information which was not properly validated", I meant to imply that in a rather tortured construction, taken ad absurdum, every bit and byte of that certificate is certainly "information" and the certificate overall most certainly contains every byte of itself.

Having said that, can we literally say that a certificate is only properly issued if quite literally every byte within it has been "validated"? Or are we really just talking about the certificate subject? But not really, because we also certainly mean that the contents of certain certificate extensions, like SAN, contain only validated information.

At the same time, though, what documentation would one draw upon to assert that the serial number had been "validated"? Validated in this context generally means supported via documentation as true and correct. The serial number is meant to contain a random component. We can validate the mechanism that generated the serial number, but we cannot say that the serial number itself is validated.

If a policy OID is added to the certificate, is it validated in the sense that my CP/CPS says that I may put OID X with meaning Y there? Or is it just a series of bytes which point to an OID within the OID tree assigned to my enterprise?

I wonder if the pedant can use these arguments to call any certificate "mis-issued" under the proposed definition. If so, I wonder if we should care if such a tortured argument might be made.

Nick Lamb

Jun 1, 2017, 5:40:38 PM
to mozilla-dev-s...@lists.mozilla.org
I think a broad definition is appropriate here. Mozilla is not obliged to do anything at all, much less anything drastic if it is discovered that mis-issuance has occurred. At most we might think it time to re-evaluate this policy.

Fools are endlessly inventive, so too narrow a definition runs the risk of missing something so obvious that, when a CA inevitably gets it wrong, we're astonished to find it's not included.

One risk I do anticipate is CAs which have inadequately bound together separate validation steps: say a domain validation was done by an applicant, and a CSR proving control of a particular private key was presented by an applicant, but it turns out they weren't actually the same applicant, so it would be an error to issue a certificate binding the two together.
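That binding failure can be made concrete as a bookkeeping rule: refuse to issue unless the domain validation and the key-control proof are tied to the same applicant. A hypothetical sketch (all names invented for illustration):

```python
class BindingError(Exception):
    """Raised when validation steps cannot be bound to one applicant."""


def check_binding(domain_validations: dict, csr_proofs: dict, domain: str) -> str:
    """Return the applicant id entitled to a certificate for `domain`.

    domain_validations maps domain -> applicant id that completed domain
    validation; csr_proofs maps domain -> applicant id that proved control
    of the private key. Issuing when the two ids differ would bind a
    validated name to a stranger's key.
    """
    validator = domain_validations.get(domain)
    prover = csr_proofs.get(domain)
    if validator is None or prover is None:
        raise BindingError("missing validation step for " + domain)
    if validator != prover:
        raise BindingError("validation and key proof came from different applicants")
    return validator
```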

Peter Kurrasch

Jun 1, 2017, 7:47:54 PM
to mozilla-dev-s...@lists.mozilla.org
So how about this:

A proper certificate is one that...

- contains the data as provided by the requester that the requester intended to use;

- contains the data as provided by the issuer that the issuer intended to use;

- contains data that has been properly verified by the issuer, to the extent that the data is verifiable in the first place;

- uses data that is recognized as legitimate for a certificate's intended use, per the relevant standards, specifications, recommendations, and policies, as well as the software products that are likely to utilize the certificate;

- is suitably constructed in accordance with the relevant standards, specifications, recommendations, and policies, as appropriate; and

- is produced by equipment and systems whose integrity is assured by the issuer and verified by the auditors.

Thus, failing one or more of the above conditions will constitute a mis-issuance situation.




Jakob Bohm

Jun 1, 2017, 9:11:18 PM
to mozilla-dev-s...@lists.mozilla.org
On 31/05/2017 18:04, Gervase Markham wrote:
> It has been suggested we need a formal definition of what we consider
> mis-issuance. The closest we have currently is a couple of sentences in
> section 7.3:
>
> "A certificate that includes domain names that have not been verified
> according to section 3.2.2.4 of the Baseline Requirements is considered
> to be mis-issued. A certificate that is intended to be used only as an
> end entity certificate but includes a keyUsage extension with values
> keyCertSign and/or cRLSign or a basicConstraints extension with the cA
> field set to true is considered to be mis-issued."
>
> This is clearly not an exhaustive list; one would also want to include
> BR violations, RFC violations, and insufficient EV vetting, at least.
>
> The downside of defining it is that CAs might try and rules-lawyer us in
> a particular situation.
>
> Here's some proposed text which provides more clarity while hopefully
> avoiding rules-lawyering:
>
> "The category of mis-issued certificates includes (but is not limited
> to) those issued to someone who should not have received them, those
> containing information which was not properly validated, those having
> incorrect technical constraints, and those using algorithms other than
> those permitted."
>

How about: Any issued certificate which violates any applicable policy,
requirement or standard, which was not requested by all its alleged
subject(s) or which should otherwise not have been issued, is by
definition mis-issued. Policies and requirements include but are not
limited to this policy, the CCADB policy, the applicable CPS, and the
baseline requirements. Any piece of data which technically resembles an
X.509 or PKCS#6 extended certificate, and which is signed by the CA
private key is considered an issued certificate.

(Note: I mention the ancient PKCS#6 certificate type because mis-issuing
those is still mis-issuance, even if there is no current reason to
validly issue those).

(Note: The last sentence above was phrased to try to cover semi-garbled
certificates, without accidentally banning things like CRLs and OCSP
responses).


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Peter Bowen

Jun 1, 2017, 10:19:31 PM
to Ryan Sleevi, mozilla-dev-security-policy, Gervase Markham, Matthew Hardeman
On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:
> On Thu, Jun 1, 2017 at 4:35 AM, Gervase Markham via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> On 31/05/17 18:02, Matthew Hardeman wrote:
>> > Perhaps some reference to technologically incorrect syntax (i.e. an
>> incorrectly encoded certificate) being a mis-issuance?
>>
>> Well, if it's so badly encoded Firefox doesn't recognise it, we don't
>> care too much (apart from how it speaks to incompetence). If Firefox
>> does recognise it, then I'm not sure "misissuance" is the right word if
>> all the data is correct.
>>
>
> I would encourage you to reconsider this, or perhaps I've misunderstood
> your position. To the extent that Mozilla's mission includes "The
> effectiveness of the Internet as a public resource depends upon
> interoperability (protocols, data formats, content) ....", the
> well-formedness and encoding directly affects Mozilla users (sites working
> in Vendors A, B, C but not Mozilla) and the broader ecosystem (sites
> Mozilla users are protected from that vendors A, B, C are not).
>
> I think considering this in the context of "CA problematic practices" may
> help make this clearer - they are all things that speak to either
> incompetence or confusion (and a generous dose of Hanlon's Razor) - but
> their compatibility issues presented both complexity and risk to Mozilla
> users.
>
> So I would definitely encourage that improper application of the protocols
> and data formats constitutes misissuance, as they directly affect
> interoperability and indirectly affect security :)

I think the policy needs to be carefully thought out here, as there is
no limitation to what can be signed with the key used to sign
certificates. What is a malformed certificate to one person might be
a valid document to someone else. Maybe you could disallow signing
things that are not valid ASN.1 DER?
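For what it's worth, "valid ASN.1 DER" is a checkable property: every element is a tag-length-value triple, lengths are definite and minimally encoded, and nested contents fill their container exactly. A minimal stdlib-only sketch of such a check (illustrative only; it skips high-tag-number form and SET OF ordering, so it is far from a complete validator):

```python
def is_valid_der(data: bytes) -> bool:
    """Check that `data` is exactly one well-formed DER element (recursively)."""

    def parse(buf: bytes, pos: int) -> int:
        # Tag octet; this sketch rejects multi-byte (high-tag-number) tags.
        if pos >= len(buf):
            raise ValueError("truncated tag")
        tag = buf[pos]
        if tag & 0x1F == 0x1F:
            raise ValueError("high-tag-number form not handled in this sketch")
        pos += 1
        # Length: DER requires definite form, minimally encoded.
        if pos >= len(buf):
            raise ValueError("truncated length")
        first = buf[pos]
        pos += 1
        if first == 0x80:
            raise ValueError("indefinite length is BER, not DER")
        if first < 0x80:
            length = first
        else:
            n = first & 0x7F
            if pos + n > len(buf):
                raise ValueError("truncated long-form length")
            length = int.from_bytes(buf[pos:pos + n], "big")
            # Long form may not encode a value that fits in short form,
            # and may not carry leading zero octets.
            if length < 0x80 or buf[pos] == 0:
                raise ValueError("non-minimal length encoding")
            pos += n
        end = pos + length
        if end > len(buf):
            raise ValueError("content shorter than stated length")
        if tag & 0x20:  # constructed: contents must themselves be DER elements
            while pos < end:
                pos = parse(buf, pos)
            if pos != end:
                raise ValueError("nested element overruns container")
        return end

    try:
        return parse(data, 0) == len(data)
    except ValueError:
        return False
```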

Thanks,
Peter

Ryan Sleevi

Jun 2, 2017, 7:28:17 AM
to Peter Bowen, Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Matthew Hardeman
On Thu, Jun 1, 2017 at 10:19 PM, Peter Bowen via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
> > So I would definitely encourage that improper application of the
> protocols
> > and data formats constitutes misissuance, as they directly affect
> > interoperability and indirectly affect security :)
>
> I think the policy needs to be carefully thought out here, as there is
> no limitation to what can be signed with the key used to sign
> certificates. What is a malformed certificate to one person might be
> a valid document to someone else. Maybe you could disallow signing
> things that are not valid ASN.1 DER?
>

I'm not really sure I see the slippery slope argument you mention.

On the most basic level, this is describing what Mozilla considers
misissuance - warranting at least some discussion and, if appropriate,
updates to policies. It naturally suggests that, in order to improve both
security and transparency, the policy should be maximally applicable, and
then allow situational constructs to override, rather than to try to
accommodate every unknown hypothetically good activity.

For example, I think we'd easily agree that improper DER encoding is
misissuance - that's an improper application of the data format. I think
we'd also hopefully agree that if, for example, a CA asserted a set of KUs
that were 'validated', but not conformant to RFC 5280 (e.g. 5280 says MUST
NOT and a CA does it anyways), then that's an improper application of the
data format/protocol.

I suspect you're raising a concern since a CA can use a SIGNED{ToBeSigned}
construct from RFC 6025[1] to express a signature over a structure defined
by "ToBeSigned", and wanting to distinguish that, for example, a
certificate is not a CRL, as they're distinguished from their ToBeSigned
construct. I would argue here that any signatures produced / structures
provided should have an appropriate protocol or data format definition to
justify the application of that signature, and that it would be misissuance
in the absence of that support. Logically, I'm suggesting it's misissuance
to, for example, expose a prehash signing oracle using a CA key, or to sign
arbitrary data if it's not encoded 'like' a certificate (without having an
equivalent appropriate standard defining what the CA is signing).

It seems far better for avoiding confusion to treat everything as wrong,
and then if it is indeed acceptable, to modify the policy at that time. I
don't think we can reasonably give "the benefit of the doubt" anymore.

[1] https://tools.ietf.org/html/rfc6025#section-2.3

Peter Bowen

Jun 2, 2017, 9:42:04 AM
to Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Matthew Hardeman
On Fri, Jun 2, 2017 at 4:27 AM, Ryan Sleevi <ry...@sleevi.com> wrote:
>
>
> On Thu, Jun 1, 2017 at 10:19 PM, Peter Bowen via dev-security-policy
> <dev-secur...@lists.mozilla.org> wrote:
>>
>> On Thu, Jun 1, 2017 at 5:49 AM, Ryan Sleevi via dev-security-policy
>> > So I would definitely encourage that improper application of the
>> > protocols
>> > and data formats constitutes misissuance, as they directly affect
>> > interoperability and indirectly affect security :)
>>
>> I think the policy needs to be carefully thought out here, as there is
>> no limitation to what can be signed with the key used to sign
>> certificates. What is a malformed certificate to one person might be
>> a valid document to someone else. Maybe you could disallow signing
>> things that are not valid ASN.1 DER?
>
>
> I suspect you're raising a concern since a CA can use a SIGNED{ToBeSigned}
> construct from RFC 6025[1] to express a signature over a structure defined
> by "ToBeSigned", and wanting to distinguish that, for example, a certificate
> is not a CRL, as they're distinguished from their ToBeSigned construct. I
> would argue here that any signatures produced / structures provided should
> have an appropriate protocol or data format definition to justify the
> application of that signature, and that it would be misissuance in the
> absence of that support. Logically, I'm suggesting it's misissuance to, for
> example, expose a prehash signing oracle using a CA key, or to sign
> arbitrary data if it's not encoded 'like' a certificate (without having an
> equivalent appropriate standard defining what the CA is signing)

Yes, my concern is that this could make SIGNED{ToBeSigned} considered
misissuance if ToBeSigned is not a TBSCertificate. For example, if I
could sign an ASN.1 sequence which had the following syntax:

TBSNotCertificate ::= SEQUENCE {
    notACertificate  UTF8String,
    COMPONENTS OF TBSCertificate
}

Someone could argue that this is mis-issuance because the resulting
"certificate" is clearly corrupt, as it fails to start with an
INTEGER. On the other hand, I think that this is clearly not
mis-issuance of a certificate, as there is no sane implementation that
would accept this as a certificate.

Thanks,
Peter

Ryan Sleevi

Jun 2, 2017, 9:55:35 AM
to Peter Bowen, Ryan Sleevi, Gervase Markham, mozilla-dev-security-policy, Matthew Hardeman
Would it be a misissuance of a certificate? Hard to argue, I think.

Would it be a misuse of key? I would argue yes, unless the
TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
WD, at the least).

As a practical matter, this largely only applies to the use of signatures
for which collisions are possible - since, of course, the TBSNotCertificate
might be constructed in such a way as to collide with the TBSCertificate.

As an "assume a jackass genie is interpreting the policy" matter, what about
situations where a TBSNotCertificate has the same structure as
TBSCertificate? The fact that they are identical representations
on-the-wire could be argued as irrelevant, since they are non-identical
representations "in the spec". Unfortunately, this scenario has come up
once before already - in the context of RFC 6962 (and hence the
clarifications in the Baseline Requirements) - so it's not unreasonable a
scenario to expect.

The general principle I was trying to capture was one of "Only sign these
defined structures, and only do so in a manner conforming to their
appropriate encoding, and only do so after validating all the necessary
information. Anything else is 'misissuance' - of a certificate, a CRL, an
OCSP response, or a Signed-Thingy"

Jakob Bohm

Jun 2, 2017, 10:10:26 AM
to mozilla-dev-s...@lists.mozilla.org
Thing is, there is still serious work involving the definition of
new CA-signed things, such as the recent (2017) paper on a super-
compressed CRL-equivalent format (available as a Firefox plugin).

Banning those by policy would be as bad as banning the first OCSP
responder because it was not yet on the old list {Certificate, CRL}.

Hence my suggested phrasing of "Anything that resembles a certificate"
(my actual wording a few posts up was more precise, of course).

Note that signing a wrong CRL or OCSP response is still bad, but not
mis-issuance.

Ryan Sleevi

Jun 2, 2017, 11:12:55 AM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
> Thing is, that there are still serious work involving the definition of
> new CA-signed things, such as the recent (2017) paper on a super-
> compressed CRL-equivalent format (available as a Firefox plugin).


This does not rely on CA signatures - but it also perfectly demonstrates the
point - that these things should be getting widely reviewed before
implementing.


>
> Banning those by policy would be as bad as banning the first OCSP
> responder because it was not yet on the old list {Certificate, CRL}.


This argument presumes technical competence of CAs, for which collectively
there is no demonstrable evidence.

Functionally, this is identical to banning the "any other method" for
domain validation. Yes, it allowed flexibility - but at the extreme cost to
security.

If there are new and compelling things to sign, the community can review and
the policy be updated. I cannot understand the argument against this basic
security sanity check.


>
> Hence my suggested phrasing of "Anything that resembles a certificate"
> (my actual wording a few posts up was more precise, of course).


Yes, and I think that wording is insufficient and dangerous, despite your
understandable goals, for the reasons I outlined.


>
> Note that signing a wrong CRL or OCSP response is still bad, but not
> mis-issuance.


What would you call it? A malformed OCSP response, when signed, can be
indistinguishable from a certificate.

There is little objective technical or security reason to distinguish the
thing that is signed - it should be a closed set (whitelists, not
blacklists), just like algorithms, keysizes, or validation methods - due to
the significant risk to security and stability.


Peter Bowen

Jun 2, 2017, 2:23:53 PM
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On Fri, Jun 2, 2017 at 8:12 AM, Ryan Sleevi wrote:
> On Fri, Jun 2, 2017 at 10:09 AM Jakob Bohm wrote:
>
>> On 02/06/2017 15:54, Ryan Sleevi wrote:
>> > On Fri, Jun 2, 2017 at 9:33 AM, Peter Bowen wrote:
>> >
>> >> Yes, my concern is that this could make SIGNED{ToBeSigned} considered
>> >> misissuance if ToBeSigned is not a TBSCertificate. For example, if I
>> >> could sign an ASN.1 sequence which had the following syntax:
>> >>
>> >> TBSNotCertificate ::= SEQUENCE {
>> >>     notACertificate  UTF8String,
>> >>     COMPONENTS OF TBSCertificate
>> >> }
>> >>
>> >> Someone could argue that this is mis-issuance because the resulting
>> >> "certificate" is clearly corrupt, as it fails to start with an
>> >> INTEGER. On the other hand, I think that this is clearly not
>> >> mis-issuance of a certificate, as there is no sane implementation that
>> >> would accept this as a certificate.
>> >>
>> >
>> > Would it be a misissuance of a certificate? Hard to argue, I think.
>> >
>> > Would it be a misuse of key? I would argue yes, unless the
>> > TBSNotCertificate is specified/accepted for use in the CA side (e.g. IETF
>> > WD, at the least).
>> >
>> >
>> > The general principle I was trying to capture was one of "Only sign these
>> > defined structures, and only do so in a manner conforming to their
>> > appropriate encoding, and only do so after validating all the necessary
>> > information. Anything else is 'misissuance' - of a certificate, a CRL, an
>> > OCSP response, or a Signed-Thingy"
>> >
>>
>> Thing is, that there are still serious work involving the definition of
>> new CA-signed things, such as the recent (2017) paper on a super-
>> compressed CRL-equivalent format (available as a Firefox plugin).
>
>
> This does not rely on CA signatures - but it also perfectly demonstrates the
> point - that these things should be getting widely reviewed before
> implementing.
>>
>> Banning those by policy would be as bad as banning the first OCSP
>> responder because it was not yet on the old list {Certificate, CRL}.
>
>
> This argument presumes technical competence of CAs, for which collectively
> there is no demonstrable evidence.
>
> Functionally, this is identical to banning the "any other method" for
> domain validation. Yes, it allowed flexibility - but at the extreme cost to
> security.
>
> If there are new and compelling things to sign, the community can review and
> the policy be updated. I cannot understand the argument against this basic
> security sanity check.
>
>
>>
>> Hence my suggested phrasing of "Anything that resembles a certificate"
>> (my actual wording a few posts up was more precise, of course).
>
>
> Yes, and I think that wording is insufficient and dangerous, despite your
> understandable goals, for the reasons I outlined.
>
> There is little objective technical or security reason to distinguish the
> thing that is signed - it should be a closed set (whitelists, not
> blacklists), just like algorithms, keysizes, or validation methods - due to
> the significant risk to security and stability.

Back in November 2016, I suggested that we try to create stricter
rules around CAs:
https://cabforum.org/pipermail/public/2016-November/008966.html and
https://groups.google.com/d/msg/mozilla.dev.security.policy/UqjD1Rff4pg/8sYO2uzNBwAJ.
It generated some discussion but I never pushed things forward. Maybe
the following portion should be part of Mozilla policy?

Private Keys which are CA private keys must only be used to generate signatures
that meet the following requirements:

1. The signature must be over a SHA-256, SHA-384, or SHA-512 hash
2. The data being signed must be one of the following:
* CA Certificate (a signed TBSCertificate, as defined in [RFC
5280](https://tools.ietf.org/html/rfc5280), with a
id-ce-basicConstraints extension with the cA component set to true)
* End-entity Certificate (a signed TBSCertificate, as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280), that is not a CA
Certificate)
* Certificate Revocation Lists (a signed TBSCertList as defined in
[RFC 5280](https://tools.ietf.org/html/rfc5280))
* OCSP response (a signed ResponseData as defined in [RFC
6960](https://tools.ietf.org/html/rfc6960))
* Precertificate (as defined in draft-ietf-trans-rfc6962-bis)
3. Data that does not meet the above requirements must not be signed
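Enforced literally, the rules above amount to a whitelist gate in front of every signing operation: both the structure type and the hash algorithm must be explicitly allowed, and everything else is refused. A hypothetical sketch (the type labels are invented for illustration, not wire-format detection):

```python
# Closed sets per the proposed policy text; anything absent is denied.
ALLOWED_HASHES = {"sha-256", "sha-384", "sha-512"}
ALLOWED_STRUCTURES = {
    "ca-certificate",          # signed TBSCertificate, cA=true basicConstraints
    "end-entity-certificate",  # signed TBSCertificate that is not a CA cert
    "crl",                     # signed TBSCertList (RFC 5280)
    "ocsp-response",           # signed ResponseData (RFC 6960)
    "precertificate",          # draft-ietf-trans-rfc6962-bis
}


def may_sign(structure_type: str, hash_algorithm: str) -> bool:
    """Whitelist gate: allow only explicitly listed structures and hashes."""
    return (structure_type in ALLOWED_STRUCTURES
            and hash_algorithm in ALLOWED_HASHES)
```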

Thanks,
Peter

Matt Palmer

Jun 3, 2017, 10:04:19 PM
to dev-secur...@lists.mozilla.org
On Fri, Jun 02, 2017 at 09:54:55AM -0400, Ryan Sleevi via dev-security-policy wrote:
> The general principle I was trying to capture was one of "Only sign these
> defined structures, and only do so in a manner conforming to their
> appropriate encoding, and only do so after validating all the necessary
> information. Anything else is 'misissuance' - of a certificate, a CRL, an
> OCSP response, or a Signed-Thingy"

For whatever it is worth, I am a fan of this way of defining "misissuance".

- Matt

Jakob Bohm

Jun 5, 2017, 6:22:27 PM
to mozilla-dev-s...@lists.mozilla.org
If you read the paper, it contains a proposal for the CAs to countersign
the computed super-crl to confirm that all entries for that CA match the
actual revocations and non-revocations recorded by that CA. This is not
currently deployed, but is an example of something that CAs could safely
do using their private key, provided sufficient design competence by the
central super-crl team.

Another good example could be signing a "certificate white-list"
containing all issued but not revoked serial numbers. Again someone
(not a random CA) should provide a well thought out data format
specification that cannot be maliciously confused with any of the
current data types.

>
>>
>> Banning those by policy would be as bad as banning the first OCSP
>> responder because it was not yet on the old list {Certificate, CRL}.
>
>
> This argument presumes technical competence of CAs, for which collectively
> there is no demonstrable evidence.

In this case, it would presume that technical competence exists at high
end crypto research / specification teams defining such items, not at
any CA or vendor. For example any such a format could come from the
IETF, ITU-T, NIST, IEEE, ICAO, or any of the big crypto research centers
inside/outside the US (too many to enumerate in a policy).

Here's one item no-one listed so far (just to demonstrate our collective
lack of imagination):

Using the CA private key to sign a CSR to request cross-signing from
another CA (trusted or untrusted by Mozilla).

>
> Functionally, this is identical to banning the "any other method" for
> domain validation. Yes, it allowed flexibility - but at the extreme cost to
> security.
>

However the failure mode for "signing additional CA operational items"
would be a lot less risky and a lot less reliant on CA competency.

> If there are new and compelling thing to sign, the community can review and
> the policy be updated. I cannot understand the argument against this basic
> security sanity check.
>

It is restrictions for restrictions sake, which is always bad policy
making.

>
>>
>> Hence my suggested phrasing of "Anything that resembles a certificate"
(my actual wording a few posts up was more precise of course).
>
>
> Yes, and I think that wording is insufficient and dangerous, despite your
> understandable goals, for the reasons I outlined.
>


If necessary, one could define a short list of technical characteristics
that would make a signed item non-confusable with a certificate. For
example, it could be a PKCS#7 structure, or any DER structure whose
first element is a published specification OID nested in one or more
layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
added to this.
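One possible concretization of that idea, as a hedged sketch (the function name and the exact tag walk are my own, not taken from any proposal): descend through constructed SEQUENCE/SET tags and check whether the first leaf element is an OID. A PKCS#7 ContentInfo begins SEQUENCE { OID contentType ... }, so it passes; a TBSCertificate begins with a [0] version tag or an INTEGER serial number, so it fails and cannot be confused with the other kind of blob.

```python
def first_leaf_is_oid(der: bytes) -> bool:
    """Return True if, descending through nested SEQUENCE/SET tags, the first
    non-constructed DER element is an OBJECT IDENTIFIER (tag 0x06)."""
    i = 0
    while i + 1 < len(der):
        tag, length_byte = der[i], der[i + 1]
        if length_byte & 0x80:              # long-form length octets
            header = 2 + (length_byte & 0x7F)
        else:                               # short-form length
            header = 2
        if tag in (0x30, 0x31):             # SEQUENCE / SET: descend into it
            i += header
            continue
        return tag == 0x06                  # first leaf element reached
    return False
```

For example, `SEQUENCE { OID 2.5.4.3 }` (bytes `30 05 06 03 55 04 03`) passes, while `SEQUENCE { INTEGER 1 }` (bytes `30 03 02 01 01`), shaped like the start of a certificate, does not. Whether such a syntactic test is robust enough is exactly the kind of question the peer review discussed in this thread would have to settle.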

>
>>
>> Note that signing a wrong CRL or OCSP response is still bad, but not
>> mis-issuance.
>
>
> What would you call it? A malformed OCSP response, when signed, can be
> indistinguishable from a certificate.

An incorrect CRL is an incorrect CRL and falls under the CRL policy
requirements.

An incorrect OCSP response is an incorrect OCSP response and falls under
the OCSP policy requirements.

>
> There is little objective technical or security reason to distinguish the
> thing that is signed - it should be a closed set (whitelists, not
> blacklists), just like algorithms, keysizes, or validation methods - due to
> the significant risk to security and stability.
>

Those whitelists have already proven problematic, banning (for example)
any serious test deployment of well-reviewed algorithms such as non-NIST
curves, SHA-3, non-NIST hashes, quantum-resistant algorithms, perhaps
even RSA-PSS (RFC3447, I haven't worked through the exact wordings to
check for inclusion of this one).

Ryan Sleevi

Jun 6, 2017, 1:46:13 AM
to Jakob Bohm, mozilla-dev-security-policy
On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> If you read the paper, it contains a proposal for the CAs to countersign
> the computed super-crl to confirm that all entries for that CA match the
> actual revocations and non-revocations recorded by that CA. This is not
> currently deployed, but is an example of something that CAs could safely
> do using their private key, provided sufficient design competence by the
> central super-crl team.
>

I did read the paper - and provide feedback on it.

And that presumption that you're making here is exactly the reason why you
need a whitelist, not a blacklist. "provided sufficient design competence"
does not come for free - it comes with thoughtful peer review and community
feedback. Which can be provided in the aspect of policy.


> Another good example could be signing a "certificate white-list"
> containing all issued but not revoked serial numbers. Again someone
> (not a random CA) should provide a well thought out data format
> specification that cannot be maliciously confused with any of the
> current data types.
>

Or a bad example. And that's the point - you want sufficient technical
review (e.g. an SDO ideally, but minimally m.d.s.p review).

Look, you could easily come up with a dozen examples of improved validation
methods - but just because they exist doesn't mean keeping the "any other
method" is good. And, for what it's worth, of those that did shake out of
the discussions, many of them _were_ insecure at first, and evolved through
community discussion.


> In this case, it would presume that technical competence exists at high
> end crypto research / specification teams defining such items, not at
> any CA or vendor. For example any such a format could come from the
> IETF, ITU-T, NIST, IEEE, ICAO, or any of the big crypto research centers
> inside/outside the US (too many to enumerate in a policy).
>

And so could new signature algorithms. But that doesn't mean there
shouldn't be a policy on signature algorithms.


> Here's one item no-one listed so far (just to demonstrate our collective
> lack of imagination):
>

This doesn't need imagination - it needs solid review. No one is
disagreeing with you that there can be improvements. But let's start with
the actual concrete matters at hand, appropriately reviewed by the
Mozilla-using community that serves a purpose consistent with the mission,
or doesn't pose risks to users.


> However the failure mode for "signing additional CA operational items"
> would be a lot less risky and a lot less reliant on CA competency.


That is demonstrably not true. Just look at the CAs who have had issues
with their signing ceremonies. Or the signatures they've produced.


> It is restrictions for restrictions sake, which is always bad policy
> making.
>

No it's not. You would have to reach very hard to find a single security
engineer who would argue that a blacklist is better than a whitelist for
security. It's not - you validate your inputs, you don't just reject the
badness you can identify. Unless you're an AV vendor, which would explain
why so few security engineers work at AV vendors.


> If necessary, one could define a short list of technical characteristics
> that would make a signed item non-confusable with a certificate. For
> example, it could be a PKCS#7 structure, or any DER structure whose
> first element is a published specification OID nested in one or more
> layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
> added to this.
>

You could try to construct such a definition - but that's a needless
technical complexity with considerable ambiguity for a hypothetical
situation that you are the only one advocating for, and using an approach
that has repeatedly led to misinterpretations and security failures.


> An incorrect CRL is an incorrect CRL and falls under the CRL policy
> requirements.
>
> An incorrect OCSP response is an incorrect OCSP response and falls under
> the OCSP policy requirements.


This is an unnecessary ontology split, because it leaves it ambiguous where
something that 'ends up in the middle' is. Which is very much the risk from
these things (e.g. SHA-1 signing of OCSP responses, even if the
certificates signed are SHA-256)


> Those whitelists have already proven problematic, banning (for example)
> any serious test deployment of well-reviewed algorithms such as non-NIST
> curves, SHA-3, non-NIST hashes, quantum-resistant algorithms, perhaps
> even RSA-PSS (RFC3447, I haven't worked through the exact wordings to
> check for inclusion of this one).


I suspect this is the core of our disagreement. It has prevented a number
of insecure deployments or incompatible deployments that would pose
security or compatibility risk to the Web Platform. Crypto is not about
"everything and the kitchen sink" - which you're advocating both here and
overall - it's about having a few, well reviewed, well-oiled joints. The
unnecessary complexity harms the overall security of the ecosystem through
its complexity and harms interoperability - both key values in Mozilla's
mission statement.

Rather than arguing for the sake of the hypothetical, what some CA "might"
want to do, it's far more productive to have the actual use cases with
actual interest in deployment (of which none of those things are) come
forward to have the public discussion. Otherwise, we're just navel gazing,
and it's unproductive :)

Gervase Markham

Jun 6, 2017, 5:17:25 AM
to Matt Palmer
On 04/06/17 03:03, Matt Palmer wrote:
> For whatever it is worth, I am a fan of this way of defining "misissuance".

So you think we should use the word "misissuance" for all forms of
imperfect issuance, and then have a gradated reaction depending on the
type and circumstances, rather than use the word "misissuance" for a
security problem, and another word (e.g. "misconstructed") for the other
ones?

Gerv

Jakob Bohm

Jun 6, 2017, 2:29:26 PM
to mozilla-dev-s...@lists.mozilla.org
On 06/06/2017 07:45, Ryan Sleevi wrote:
> On Mon, Jun 5, 2017 at 6:21 PM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> If you read the paper, it contains a proposal for the CAs to countersign
>> the computed super-crl to confirm that all entries for that CA match the
>> actual revocations and non-revocations recorded by that CA. This is not
>> currently deployed, but is an example of something that CAs could safely
>> do using their private key, provided sufficient design competence by the
>> central super-crl team.
>>
>
> I did read the paper - and provide feedback on it.
>
> And that presumption that you're making here is exactly the reason why you
> need a whitelist, not a blacklist. "provided sufficient design competence"
> does not come for free - it comes with thoughtful peer review and community
> feedback. Which can be provided in the aspect of policy.
>

I am saying that setting an administrative policy for inclusion in a
root program is not the place to do technical reviews of security
protocols. And I proceeded to list places that *do* perform such peer
review at the highest level of competency, but had to note that the list
would be too long to enumerate in a stable root program policy.

>
>> Another good example could be signing a "certificate white-list"
> containing all issued but not revoked serial numbers. Again someone
> (not a random CA) should provide a well thought out data format
>> specification that cannot be maliciously confused with any of the
>> current data types.
>>
>
> Or a bad example. And that's the point - you want sufficient technical
> review (e.g. an SDO ideally, but minimally m.d.s.p review).

SDO? Unfamiliar with that TLA.

And why should Mozilla (and every other root program) be consulted to
unanimously preapprove such technical work? This will create a massive
roadblock for progress. I really see no reason to create another FIPS
140 style bureaucracy of meaningless rule enforcement (not to be
confused with the actual security tests that are also part of FIPS 140
validation).

>
> Look, you could easily come up with a dozen examples of improved validation
> methods - but just because they exist doesn't mean keeping the "any other
> method" is good. And, for what it's worth, of those that did shake out of
> the discussions, many of them _were_ insecure at first, and evolved through
> community discussion.
>

Interestingly, the list of revocation checking methods supported by
Chrome (and proposed to be supported by future Firefox versions) is
essentially _empty_ now. Which is completely insecure.

>
>> Here's one item no-one listed so far (just to demonstrate our collective
>> lack of imagination):
>>
>
> This doesn't need imagination - it needs solid review. No one is
> disagreeing with you that there can be improvements. But let's start with
> the actual concrete matters at hand, appropriately reviewed by the
> Mozilla-using community that serves a purpose consistent with the mission,
> or doesn't pose risks to users.
>

Within *this thread* proposed policy language would have banned that.

And neither I, nor any other participant seemed to realize this specific
omission until my post this morning.

>
>> However the failure mode for "signing additional CA operational items"
>> would be a lot less risky and a lot less reliant on CA competency.
>
>
> That is demonstrably not true. Just look at the CAs who have had issues
> with their signing ceremonies. Or the signatures they've produced.

Did any of those involve erroneously signing non-certificates of a
wholly inappropriate data type?

>
>
>> It is restrictions for restrictions sake, which is always bad policy
>> making.
>>
>
> No it's not. You would have to reach very hard to find a single security
> engineer who would argue that a blacklist is better than a whitelist for
> security. It's not - you validate your inputs, you don't just reject the
> badness you can identify. Unless you're an AV vendor, which would explain
> why so few security engineers work at AV vendors.

I am not an AV vendor.

Technical security systems work best with whitelists wherever possible.

Human-to-human policy making works best with blacklists wherever
possible.

Root inclusion policies are human-to-human policies.

>
>
>> If necessary, one could define a short list of technical characteristics
>> that would make a signed item non-confusable with a certificate. For
>> example, it could be a PKCS#7 structure, or any DER structure whose
>> first element is a published specification OID nested in one or more
>> layers of SEQUENCE or SET tags, perhaps more safe alternatives could be
>> added to this.
>>
>
> You could try to construct such a definition - but that's a needless
> technical complexity with considerable ambiguity for a hypothetical
> situation that you are the only one advocating for, and using an approach
> that has repeatedly led to misinterpretations and security failures.
>

Indeed, and I was trying not to until forced by posts rejecting simply
saying that if it looks like a certificate, it counts as a certificate
issuance for policy purposes.

>
>> An incorrect CRL is an incorrect CRL and falls under the CRL policy
>> requirements.
>>
>> An incorrect OCSP response is an incorrect OCSP response and falls under
>> the OCSP policy requirements.
>
>
> This is an unnecessary ontology split, because it leaves it ambiguous where
> something that 'ends up in the middle' is. Which is very much the risk from
> these things (e.g. SHA-1 signing of OCSP responses, even if the
> certificates signed are SHA-256)
>

Just trying to preserve existing ontologies. Prior to this thread,
failures in OCSP and CRL operations were never classified as
"mis-issuance", because it shares nothing relevant with "mis-issuance".

For example you cannot "revoke a mis-issued OCSP response" within 24
hours by adding it to CRLs etc. It's nonsense.

>
>> Those whitelists have already proven problematic, banning (for example)
>> any serious test deployment of well-reviewed algorithms such as non-NIST
>> curves, SHA-3, non-NIST hashes, quantum-resistant algorithms, perhaps
>> even RSA-PSS (RFC3447, I haven't worked through the exact wordings to
>> check for inclusion of this one).
>
>
> I suspect this is the core of our disagreement. It has prevented a number
> of insecure deployments or incompatible deployments that would pose
> security or compatibility risk to the Web Platform. Crypto is not about
> "everything and the kitchen sink" - which you're advocating both here and
> overall - it's about having a few, well reviewed, well-oiled joints. The
> unnecessary complexity harms the overall security of the ecosystem through
> its complexity and harms interoperability - both key values in Mozilla's
> mission statement.
>

I think you are exaggerating my position here. What I am trying to
avoid is a frozen monoculture ecosystem that will fail spectacularly
when the single permitted security configuration is proven inadequate,
because every player in the ecosystem was forced, by policy, to not have
any alternatives ready.

> Rather than arguing for the sake of the hypothetical, what some CA "might"
> want to do, it's far more productive to have the actual use cases with
> actual interest in deployment (of which none of those things are) come
> forward to have the public discussion. Otherwise, we're just navel gazing,
> and it's unproductive :)
>

The attitudes on this newsgroup seem to strongly discourage any attempt
to express such interest. Thus I would not expect any CA wishing to
stay in the root program to risk expressing such interest here.

As a non-CA, I have the freedom to advocate that they be given a fair
chance.

Ryan Sleevi

Jun 6, 2017, 4:08:54 PM
to Jakob Bohm, mozilla-dev-security-policy
On Tue, Jun 6, 2017 at 2:28 PM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> I am saying that setting an administrative policy for inclusion in a
> root program is not the place to do technical reviews of security
> protocols.


Of course it is. It is the only one that has reliably worked in the history
of the Web PKI. I would think that would be abundantly evident over the
past five years.


> And I proceeded to list places that *do* perform such peer
> review at the highest level of competency, but had to note that the list
> would be too long to enumerate in a stable root program policy.
>

Except none of them are, as evidenced by what they've turned out. The only
place where Mozilla users are considered, en masse, is in Mozilla policy.
It is the one and only place Mozilla can ensure its needs are appropriately
and adequately reflected.


> SDO? Unfamiliar with that TLA.
>

Standards defining organization.


> And why should Mozilla (and every other root program) be consulted to
> unanimously preapprove such technical work? This will create a massive
> roadblock for progress. I really see no reason to create another FIPS
> 140 style bureaucracy of meaningless rule enforcement (not to be
> confused with the actual security tests that are also part of FIPS 140
> validation).
>

This is perhaps the disconnect. It's not meaningless. A significant amount
of the progress made in the past five years in the Web PKI has come from
one of two things:
1) Mozilla or Google forbidding something
2) Mozilla or Google requiring something

The core of your argument seems to be that you don't believe Mozilla can
update its policy in a timely fashion (which this list provides ample
counter-evidence to this), or that the Mozilla community should not be
consulted about what is appropriate for the Mozilla community (which is, on
its face, incorrect).

Look, you could easily come up with a dozen examples of improved validation
>> methods - but just because they exist doesn't mean keeping the "any other
>> method" is good. And, for what it's worth, of those that did shake out of
>> the discussions, many of them _were_ insecure at first, and evolved
>> through
>> community discussion.
>>
>>
> Interestingly, the list of revocation checking methods supported by
> Chrome (and proposed to be supported by future Firefox versions) is
> essentially _empty_ now. Which is completely insecure.
>

Not really "interestingly", because it's not a response to the substance of
the point, but in fact goes to an unrelated (and technically incorrect)
tangent.

Rather than engage with you on that derailment, do you agree with the
easily-supported claim (by virtue of the CABF Validation WG's archives) that CAs
proposed the use of insecure methods for domain validation, and those were
refined in time to be more appropriately secure? That's something easily
supported.


> Within *this thread* proposed policy language would have banned that.


> And neither I, nor any other participant seemed to realize this specific
> omission until my post this morning.
>

Yes, and? You're showing exactly the value of community review - and where
it would be better to make a mistake that prevents something benign, rather
than allows something dangerous, given the pattern and practice we've seen
over the past decades.


> However the failure mode for "signing additional CA operational items"
>>> would be a lot less risky and a lot less reliant on CA competency.
>>>
>>
>>
>> That is demonstrably not true. Just look at the CAs who have had issues
>> with their signing ceremonies. Or the signatures they've produced.
>>
>
> Did any of those involve erroneously signing non-certificates of a
> wholly inappropriate data type?
>

I'm not sure I fully understand or appreciate the point you're trying to
make, but I feel like you may have misunderstood mine.

We know that CAs have had issues with their signing ceremonies (e.g.
signing tbsCertificates that they should not have)
We know that CAs have had issues with integrating new technologies (e.g.
CAA misissuance)
We know that CAs have had considerable issues adhering to the relevant
standards (e.g. certlint, x509lint, Mozilla Problematic Practices)

Signing data is heavily reliant on CA competency, and that's in
unfortunately short supply, as the economics of the CA market make it easy
to fire all the engineers, while keeping the sales team, and outsourcing
the rest.


> I am not an AV vendor.
>
> Technical security systems work best with whitelists wherever possible.
>
> Human-to-human policy making works best with blacklists wherever
> possible.
>
> Root inclusion policies are human-to-human policies.
>

Root inclusion policies are the embodiment of technical security systems.
While there is a human aspect in determining the trustworthiness for
inclusion, it is the technical competency that is the core to that trust.
The two are deeply related, and the human aspect of the CA trust business
is deeply flawed - as the past decade of issuance shows you.

The mitigation for this was, is, and will be technical mitigation for human
failures. You want to remove humans from the loop, not depend on them.


> Just trying to preserve existing ontologies. Prior to this thread,
> failures in OCSP and CRL operations were never classified as
> "mis-issuance", because it shares nothing relevant with "mis-issuance".
>
> For example you cannot "revoke a mis-issued OCSP response" within 24 hours
> by adding it to CRLs etc. It's nonsense.
>

Of course they were! They were and are part of the Baseline Requirements as
policy violations (e.g. 'unrevoking' a certificate, a CRL with a
certificateSuspension, etc).

The ontology you seek to preserve wasn't actually an enshrined policy. If
your debate is that the word 'issue' can only apply to certificates, and
that OCSP responses are 'signed', as are CRLs, then all you've stated is
that there is 'misissuance' and 'missigning', which are technically
identical activities, but utilize different verbs.

A CA that unrevokes an intermediate has violated the Mozilla Root
Certificate Policy. That's always been the case.

Jakob Bohm

Jun 6, 2017, 5:27:27 PM
to mozilla-dev-s...@lists.mozilla.org
On 06/06/2017 22:08, Ryan Sleevi wrote:
> On Tue, Jun 6, 2017 at 2:28 PM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>>
>> I am saying that setting an administrative policy for inclusion in a
>> root program is not the place to do technical reviews of security
>> protocols.
>
>
> Of course it is. It is the only one that has reliably worked in the history
> of the Web PKI. I would think that would be abundantly evident over the
> past five years.
>

I have yet to see (though I have studied ancient archives) the root
program and/or the CAB/F doing actual review of technical security
protocols and data formats.

>
>> And I proceeded to list places that *do* perform such peer
>> review at the highest level of competency, but had to note that the list
>> would be too long to enumerate in a stable root program policy.
>>
>
> Except none of them are, as evidenced by what they've turned out. The only
> place where Mozilla users are considered, en masse, is in Mozilla policy.
> It is the one and only place Mozilla can ensure its needs are appropriately
> and adequately reflected.
>
>
>> SDO? Unfamiliar with that TLA.
>>
>
> Standards defining organization.
>

Ah, like the very examples I gave of competent protocol review
organizations that should do this.

>
>> And why should Mozilla (and every other root program) be consulted to
>> unanimously preapprove such technical work? This will create a massive
>> roadblock for progress. I really see no reason to create another FIPS
>> 140 style bureaucracy of meaningless rule enforcement (not to be
>> confused with the actual security tests that are also part of FIPS 140
>> validation).
>>
>
> This is perhaps the disconnect. It's not meaningless. A significant amount
> of the progress made in the past five years in the Web PKI has come from
> one of two things:
> 1) Mozilla or Google forbidding something
> 2) Mozilla or Google requiring something
>

Yes, but there is a fundamental difference between Mozilla/Google
enforcing best practices and Mozilla/Google arbitrarily banning
progress.

> The core of your argument seems to be that you don't believe Mozilla can
> update its policy in a timely fashion (which this list provides ample
> counter-evidence to this), or that the Mozilla community should not be
> consulted about what is appropriate for the Mozilla community (which is, on
> its face, incorrect).
>

No, I am saying that the root program is the wrong place to do technical
review and acceptance/rejection of additional CA features that might
improve security with non-Mozilla code, with the potential that at some
future point in time Mozilla might decide to start including such
facilities.

For example, the Mozilla root program was not the right place to discuss
if CAs should be allowed to do CT logging at a time when only Google
code was actually using that.

The right place was Google submitting the CT system to a standard
organization (in this case the IETF), and once any glaring security
holes had been reviewed out, begin to have some CAs actually do this,
before the draft RFC could have the implementations justifying
publication as a standards track RFC. Which is, I believe, exactly what
happened. The Mozilla root policy did not need to change to allow this
work to be done.

One litmus-test for a good policy would be "If this policy had existed
before CT, and Mozilla was not involved with CT at all, would this
policy have interfered with the introduction of CT by Google?".


> Look, you could easily come up with a dozen examples of improved validation
>>> methods - but just because they exist doesn't mean keeping the "any other
>>> method" is good. And, for what it's worth, of those that did shake out of
>>> the discussions, many of them _were_ insecure at first, and evolved
>>> through
>>> community discussion.
>>>
>>>
>> Interestingly, the list of revocation checking methods supported by
>> Chrome (and proposed to be supported by future Firefox versions) is
>> essentially _empty_ now. Which is completely insecure.
>>
>
> Not really "interestingly", because it's not a response to the substance of
> the point, but in fact goes to an unrelated (and technically incorrect)
> tangent.
>
> Rather than engage with you on that derailment, do you agree with the
> easily-supported claim (by virtue of the CABF Validation WG's archives) that CAs
> proposed the use of insecure methods for domain validation, and those were
> refined in time to be more appropriately secure? That's something easily
> supported.
>

I am not at all talking about "domain validation" and the restrictions
that had to be imposed to stop bad CA practices.

I am talking about allowing non-Mozilla folk, working with competent
standard defining organizations to create additional security
measures requiring signatures from involved CAs.


>
>> Within *this thread* proposed policy language would have banned that.
>
>
>> And neither I, nor any other participant seemed to realize this specific
>> omission until my post this morning.
>>
>
> Yes, and? You're showing exactly the value of community review - and where
> it would be better to make a mistake that prevents something benign, rather
> than allows something dangerous, given the pattern and practice we've seen
> over the past decades.
>

That is a rhetorical trick. While the fact that I found one concrete
omission does add to the policy review process, it also remains a valid
example of how easily something like that could be (and was) missed.

>
>> However the failure mode for "signing additional CA operational items"
>>>> would be a lot less risky and a lot less reliant on CA competency.
>>>>
>>>
>>>
>>> That is demonstrably not true. Just look at the CAs who have had issues
>>> with their signing ceremonies. Or the signatures they've produced.
>>>
>>
>> Did any of those involve erroneously signing non-certificates of a
>> wholly inappropriate data type?
>>
>
> I'm not sure I fully understand or appreciate the point you're trying to
> make, but I feel like you may have misunderstood mine.
>
> We know that CAs have had issues with their signing ceremonies (e.g.
> signing tbsCertificates that they should not have)

Which is not applicable.

> We know that CAs have had issues with integrating new technologies (e.g.
> CAA misissuance)

Which is not proof that all new technologies should therefore be
preemptively banned or require preapproval by organizations with
no interest in using those technologies.

> We know that CAs have had considerable issues adhering to the relevant
> standards (e.g. certlint, x509lint, Mozilla Problematic Practices)

Which is not a reason to retard the creation of new standards.

>
> Signing data is heavily reliant on CA competency, and that's in
> unfortunately short supply, as the economics of the CA market make it easy
> to fire all the engineers, while keeping the sales team, and outsourcing
> the rest.
>

Which is why I am heavily focused on allowing new technology to be
developed by competent non-CA staff (such as IETF), then field tested in
cooperation with a few CAs with sufficient engineers, before deciding if
it is good enough, and well enough documented, to be deployed by all the
other CAs.

>
>> I am not an AV vendor.
>>
>> Technical security systems work best with whitelists wherever possible.
>>
>> Human-to-human policy making works best with blacklists wherever
>> possible.
>>
>> Root inclusion policies are human-to-human policies.
>>
>
> Root inclusion policies are the embodiment of technical security systems.
> While there is a human aspect in determining the trustworthiness for
> inclusion, it is the technical competency that is the core to that trust.
> The two are deeply related, and the human aspect of the CA trust business
> is deeply flawed - as the past decade of issuance shows you.

They are a human-to-human policy on what and how technical systems may
be deployed. They are not the actual technical embodiments of those
requirements.

>
> The mitigation for this was, is, and will be technical mitigation for human
> failures. You want to remove humans from the loop, not depend on them.
>

"2. A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law." (Asimov)

>
>> Just trying to preserve existing ontologies. Prior to this thread,
>> failures in OCSP and CRL operations were never classified as
>> "mis-issuance", because it shares nothing relevant with "mis-issuance".
>>
>> For example you cannot "revoke a mis-issued OCSP response" within 24 hours
>> by adding it to CRLs etc. It's nonsense.
>>
>
> Of course they were! They were and are part of the Baseline Requirements as
> policy violations (e.g. 'unrevoking' a certificate, a CRL with a
> certificateSuspension, etc).

"Unrevoking a certificate" is a previously common practice banned by a
BR, completely separately from the rules about issuance and
mis-issuance.

>
> The ontology you seek to preserve wasn't actually an enshrined policy. If
> your debate is that the word 'issue' can only apply to certificates, and
> that OCSP responses are 'signed', as are CRLs, then all you've stated is
> that there is 'misissuance' and 'missigning', which are technically
> identical activities, but utilize different verbs.
>
> A CA that unrevokes an intermediate has violated the Mozilla Root
> Certificate Policy. That's always been the case.
>

Yes, but that's not the only thing that can go wrong in CRL and OCSP
signing. A key example from the past year was when the revocation of
one of GlobalSign's cross-certs caused their OCSP software to
erroneously report millions of valid certificates as revoked. Stopping
this damage was not a BR-violating unrevocation of those millions of
certificates, and was accepted as a purely technical incident that did
not jeopardize security. None of the "mis-issuance" policy requirements
were invoked.

Similarly, the fact that until recently OCSP responders were not
prohibited from replying "good" for unknown certs did not make each
such incident a "mis-issuance", and none of the "mis-issuance"
policy requirements were invoked. Nor should they be invoked if it
turns out that some CAs fail to implement the new policy prohibiting
that practice.

Signing bad CRLs and/or bad OCSP responses may be a purely technical
incident, or it may be a security incident. Either way, the policy rules
that apply differ significantly from the rules for mis-issuing an actual
certificate and/or an actual precertificate.
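To illustrate the distinction: the responder behaviour in question boils
down to what certStatus is returned for a serial the CA never issued. A
toy sketch (illustrative only, not any responder's actual code):

```python
def ocsp_status(serial, issued, revoked, strict=True):
    """Return the OCSP certStatus for a queried serial number.

    issued / revoked are the sets of serials the CA has issued and
    revoked. strict=True models the new policy (answer "unknown" for
    serials never issued); strict=False models the legacy behaviour of
    answering "good" for unknown serials.
    """
    if serial in revoked:
        return "revoked"
    if serial in issued:
        return "good"
    return "unknown" if strict else "good"


print(ocsp_status(3, {1, 2}, {2}))                # -> unknown (new policy)
print(ocsp_status(3, {1, 2}, {2}, strict=False))  # -> good (legacy behaviour)
```

A responder still exhibiting the legacy behaviour violates the new
policy, but no certificate has been mis-issued.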

Rob Stradling

Jun 7, 2017, 6:55:57 AM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org, Ryan Sleevi
On 06/06/17 22:26, Jakob Bohm wrote:
> On 06/06/2017 22:08, Ryan Sleevi wrote:
<snip>
>> Signing data is heavily reliant on CA competency, and that's in
>> unfortunately short supply, as the economics of the CA market make it
>> easy to fire all the engineers, while keeping the sales team, and
>> outsourcing the rest.

Ryan, thankfully at least some CAs have some engineers. :-)

> Which is why I am heavily focused on allowing new technology to be
> developed by competent non-CA staff (such as IETF),

Jakob, if I interpret that literally it seems you're objecting to CA
staff contributing to IETF efforts. If so, may I advise you to beware
of TLS Feature (aka Must Staple), CAA, CT v1 (RFC6962) and especially CT
v2 (6962-bis)?

;-)

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Nick Lamb

Jun 7, 2017, 10:43:13 AM
to mozilla-dev-s...@lists.mozilla.org
On Tuesday, 6 June 2017 21:08:54 UTC+1, Ryan Sleevi wrote:
> Standards defining organization.

More usually a Standards _Development_ Organization. I wouldn't usually feel the need to offer this correction, but in this context we care a good deal about the fact that SDOs are where the actual engineering is done, where the expertise about the particular niche being standardised exists.

Even in the IETF, which is unusual in having some pretty technical people making its top level decisions, the serious work is mostly done in specialist working groups, with their products percolating up afterwards. For most standards bodies the top level stuff is purely politics - at the ITU the members are (notionally) sovereign nations themselves, same for the UPU, at ISO they're entire national standards bodies, and so on - utterly unsuitable to the meat of standards development itself. Most expertise is instead present in smaller, specialised SDOs in these cases.

Anyway, to Jakob's point, it is _extremely_ unlikely that a new piece of infrastructure will spring into existence fully formed and ready for use in anger in the Web PKI without enough time for Mozilla and m.d.s.policy to evaluate it and, if necessary, update the relevant policy documents. Much more likely, in my opinion, is that something half-baked is tried by a CA, and later realised to have opened an unsuspected hole in security.

Nothing even prevents this policy being updated to permit, for example, trials of some particular promising new idea that needs testing at scale, although I think in most cases that won't be necessary. Consider the CRL signing idea: it can be tested perfectly well without using any trusted CA or subCA keys at all. A final production version would probably use trusted keys, but you don't need to start with them to see it work.
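To make that concrete, here is a rough sketch of such a trial using the
pyca/cryptography library and a freshly generated, untrusted throwaway
key (the issuer name and serial are illustrative only, not anyone's
production setup):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

# Throwaway key - trusted by nobody, which is the point of the trial.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
now = datetime.datetime.utcnow()

# One revoked entry, with a made-up serial number.
revoked = (
    x509.RevokedCertificateBuilder()
    .serial_number(1234)
    .revocation_date(now)
    .build()
)

# Build and sign a CRL with the untrusted key.
crl = (
    x509.CertificateRevocationListBuilder()
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"Test CA")]))
    .last_update(now)
    .next_update(now + datetime.timedelta(days=1))
    .add_revoked_certificate(revoked)
    .sign(private_key=key, algorithm=hashes.SHA256())
)

# Structurally and cryptographically valid...
assert crl.is_signature_valid(key.public_key())
```

The resulting CRL is a perfectly well-formed, verifiable artifact, yet
it chains to nothing - exactly what you want for an at-scale trial.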

Jakob Bohm

Jun 7, 2017, 11:18:46 AM
to mozilla-dev-s...@lists.mozilla.org
On 07/06/2017 12:55, Rob Stradling wrote:
> On 06/06/17 22:26, Jakob Bohm wrote:
>> On 06/06/2017 22:08, Ryan Sleevi wrote:
> <snip>
>>> Signing data is heavily reliant on CA competency, and that's in
>>> unfortunately short supply, as the economics of the CA market make it
>>> easy to fire all the engineers, while keeping the sales team, and
>>> outsourcing the rest.
>
> Ryan, thankfully at least some CAs have some engineers. :-)
>
>> Which is why I am heavily focused on allowing new technology to be
>> developed by competent non-CA staff (such as IETF),
>
> Jakob, if I interpret that literally it seems you're objecting to CA
> staff contributing to IETF efforts. If so, may I advise you to beware
> of TLS Feature (aka Must Staple), CAA, CT v1 (RFC6962) and especially CT
> v2 (6962-bis)?
>

No, I was just stating that if (as suggested by Mr. Sleevi) the Mozilla
root program does not trust CA engineers to design new to-be-signed data
formats, maybe Mozilla could at least trust designs that have been
positively peer reviewed in organizations such as the IETF, the NIST
computer security/crypto groups, etc. etc.

I was in no way suggesting that CA engineers do not participate in those
efforts, giving as an example their participation in early CT
deployments together with Google engineers.

Jakob Bohm

Jun 7, 2017, 11:25:22 AM
to mozilla-dev-s...@lists.mozilla.org
Note that I also had a second, related, point: The possibility that such
a new piece of infrastructure was, for other reasons, not endorsed by
Mozilla, but of great interest to one of the other root programs (not
all of which are browser vendors).

Conversely consider the possibility that some other root programs had a
similarly restrictive policy and refused to support the introduction of
CT PreCertificates. This could have stopped a useful improvement that
Mozilla seems to like.

The Golden Rule would then imply that Mozilla should not reserve to
itself a power it would not want other root programs to have.

Ryan Sleevi

Jun 7, 2017, 11:42:23 AM
to Jakob Bohm, mozilla-dev-security-policy
On Wed, Jun 7, 2017 at 11:25 AM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:
>
> Note that I also had a second, related, point: The possibility that such
> a new piece of infrastructure was, for other reasons, not endorsed by
> Mozilla, but of great interest to one of the other root programs (not
> all of which are browser vendors).
>
> Conversely consider the possibility that some other root programs had a
> similarly restrictive policy and refused to support the introduction of
> CT PreCertificates. This could have stopped a useful improvement that
> Mozilla seems to like.
>
> The Golden Rule would then imply that Mozilla should not reserve to
> itself a power it would not want other root programs to have.


As much as I love an application of the Golden Rule as the next person, I
think it again misunderstands the realities on the ground in the Web PKI,
and puts an ideal ahead of the practical security implications.

Root programs do sometimes disagree. For example, Microsoft reserves the
right to revoke certificates it disagrees with. That right means that other
browser programs cannot effectively use CRLs or OCSP for revocation, as to
do so is to recognize Microsoft's ability to (negatively) affect their
users. They have means of working out those disagreements.

You're positioning the hypothetical as if it's inflexible - but the reality
is, each root program exists to first and foremost best reflect the needs
of its userbase. For Microsoft, they've determined that's best accomplished
by giving themselves unilateral veto power over the CAs they trust.
Mozilla, recognizing its global mission, tries to operate openly and to
best serve the ecosystem.

The scenario you remark as undesirable - the blocking of precertificates -
is, in fact, a desirable and appropriate outcome. As Nick has noted, these
efforts do not spring forth like Athena, fully grown and robust, but
iteratively develop through the community process - leaving ample time for
Mozilla to update and respond. There is ample evidence of this
happening - see the discussions related to CAA, or the validation methods
employed by ACME.

The idealized view of SDOs - that they are infallible organizations or
represent some neutral arbitration - is perhaps misguided. For example,
it's trivial to publish an I-D within the IETF as a draft, and it's not
unreasonable to have an absolutely terrible technology assigned an RFC (for
example, an informational submission documenting an existing, but insecure,
technology). As Nick mentions, the reality - and far more common occurrence
- is someone being too clever by half and doing it with good intentions, but
bad security. And those decisions - and that flexibility - create
significantly more risk for the overall ecosystem.

I suspect we will continue to disagree on this point, so I doubt I can
offer more meaningful arguments to sway you, and certainly would not appeal
to authority. I would simply note that the scenarios you raise as
hypotheticals do not and have not played out as you suggest, and if the
overall goal of this discussion is to ensure the security of Mozilla users,
then the contents, provenance, and correctness of what is signed plays an
inescapable part of that security, and the suggested flexibility is
inimical to that security.

Jakob Bohm

Jun 7, 2017, 5:14:50 PM
to mozilla-dev-s...@lists.mozilla.org
On 07/06/2017 17:41, Ryan Sleevi wrote:
> On Wed, Jun 7, 2017 at 11:25 AM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>>
>> Note that I also had a second, related, point: The possibility that such
>> a new piece of infrastructure was, for other reasons, not endorsed by
>> Mozilla, but of great interest to one of the other root programs (not
>> all of which are browser vendors).
>>
>> Conversely consider the possibility that some other root programs had a
>> similarly restrictive policy and refused to support the introduction of
>> CT PreCertificates. This could have stopped a useful improvement that
>> Mozilla seems to like.
>>
>> The Golden Rule would then imply that Mozilla should not reserve to
>> itself a power it would not want other root programs to have.
>

So much rhetoric, so little substance...

>
> As much as I love an application of the Golden Rule as the next person, I
> think it again misunderstands the realities on the ground in the Web PKI,
> and puts an ideal ahead of the practical security implications.
>
> Root programs do sometimes disagree. For example, Microsoft reserves the
> right to revoke certificates it disagrees with. That right means that other
> browser programs cannot effectively use CRLs or OCSP for revocation, as to
> do so is to recognize Microsoft's ability to (negatively) affect their
> users. They have means of working out those disagreements.

Bad example, but illustrates that reasonable agreement is not always
obtainable.

>
> You're positioning the hypothetical as if it's inflexible - but the reality
> is, each root program exists to first and foremost best reflect the needs
> of its userbase. For Microsoft, they've determined that's best accomplished
> by giving themselves unilateral veto power over the CAs they trust.
> Mozilla, recognizing its global mission, tries to operate openly and to
> best serve the ecosystem.
>

I'm saying that if someone not interested in some good protocol had veto
power over CAs using that protocol, then that someone could be
inflexible to the detriment of everyone else. Hence the desire not to
establish a precedent for such veto power.

The goodness of Mozilla does not minimize that precedent; it only
increases the danger that it might be cited by less honorable root programs.

> The scenario you remark as undesirable - the blocking of precertificates -
> is, in fact, a desirable and appropriate outcome. As Nick has noted, these
> efforts do not spring forth like Athena, fully grown and robust, but
> iteratively develop through the community process - leaving ample time for
> Mozilla to update and respond. There is ample evidence of this
> happening - see the discussions related to CAA, or the validation methods
> employed by ACME.

So what would/should the CT project have done if some inflexible root
program had vetoed its deployment?

And ample time to respond does not guarantee a fair response, especially
when considering the extension of such veto power to less flexible root
programs.

>
> The idealized view of SDOs - that they are infallible organizations or
> represent some neutral arbitration - is perhaps misguided. For example,
> it's trivial to publish an I-D within the IETF as a draft, and it's not
> unreasonable to have an absolutely terrible technology assigned an RFC (for
> example, an informational submission documenting an existing, but insecure,
> technology). As Nick mentions, the reality - and far more common occurrence
> - is someone being too clever by half and doing it with good intentions, but
> bad security. And those decisions - and that flexibility - create
> significantly more risk for the overall ecosystem.

When talking about the IETF as an SDO, I am obviously talking about IETF
standards track working groups, not individual submission RFCs or BOF
talks at IETF meetings.

>
> I suspect we will continue to disagree on this point, so I doubt I can
> offer more meaningful arguments to sway you, and certainly would not appeal
> to authority. I would simply note that the scenarios you raise as
> hypotheticals do not and have not played out as you suggest, and if the
> overall goal of this discussion is to ensure the security of Mozilla users,
> then the contents, provenance, and correctness of what is signed plays an
> inescapable part of that security, and the suggested flexibility is
> inimical to that security.
>


Gervase Markham

Jun 8, 2017, 5:33:20 AM
to Matt Palmer
On 04/06/17 03:03, Matt Palmer wrote:
> For whatever it is worth, I am a fan of this way of defining "misissuance".

This is an "enumerating badness" vs. "enumerating goodness" question. My
original draft attempted to (without limitation) enumerate some badness,
and you and Ryan are suggesting that it would be better instead to
enumerate goodness.

I agree. However, enumerating goodness is a bit harder because you need
to make sure you get all the goodness, so as not to accidentally ban
something you want. This we could do, but I feel it would require
consultation with CAs.

Therefore, I will add the non-limiting enumerating badness version to
the policy, as an improvement on the current wording which also
enumerates badness, but I've filed these two issues:

https://github.com/mozilla/pkipolicy/issues/86
https://github.com/mozilla/pkipolicy/issues/85

on improving this further in the future.

Gerv