
General issues that came up in the DarkMatter discussion(s)

1,291 views

Jakob Bohm

Mar 7, 2019, 12:59:43 PM
to mozilla-dev-s...@lists.mozilla.org
This thread is intended as a catalog of general issues that came up
at various points in the DarkMatter discussions but which are not
about DarkMatter specifically.

Each response in this thread should have a subject line naming the single
issue it discusses and should not mention DarkMatter except to cite
the timestamp, message-ID, and author of the message in which it came up.

Further discussion of each issue should be in response to that issue.

Each new such issue should be a response directly to this introductory
post, and I will make a few such subject posts myself.

Once again, no further mentions of DarkMatter are allowed in this
thread; keep those in the actual DarkMatter threads.


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Jakob Bohm

Mar 7, 2019, 1:30:03 PM
to mozilla-dev-s...@lists.mozilla.org
In the course of the other discussion it was revealed that EJBCA by
PrimeKey has apparently:

1. Made serial numbers with 63 bits of entropy the default, which does
not comply with the BRs for globally trusted CAs and SubCAs.

2. Misled CAs into believing this setting actually provided 64 bits of
entropy.

3. Discouraged CAs from changing that default.

This raises 3 derived concerns:

4. Any CA using the EJBCA platform needs to manually check whether it
has patched EJBCA to comply with the BR entropy requirement, despite
EJBCA's publisher (PrimeKey) telling it otherwise.
Maybe this should be added to the next quarterly mail from Mozilla to
the CAs.

5. Is it good for the CA community that EJBCA seems to be the only
generally available software suite for large CAs to use?

6. Should the CA and root program community be more active in ensuring
compliance by critical CA infrastructure providers such as EJBCA and
the companies providing global OCSP network hosting?


The above issue first came up in Message ID
<mailman.266.1551055169....@lists.mozilla.org>
posted on Mon, 25 Feb 2019 08:39:07 UTC by Scott Rea, and subsequently
led to a number of replies, including at least one reply from Mike
Kushner of EJBCA and the discovery that Google Trust Services was
also hit by this issue to the tune of 100K non-compliant certificates.
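To make point 1 concrete, here is a minimal Python sketch (an illustration of the general pattern only, not EJBCA's actual code) of how forcing the top bit of a 64-bit CSPRNG draw to zero, so the DER INTEGER encoding never needs a leading sign-padding byte, halves the output space to 63 bits:

```python
import math
import secrets

def serial_63_bit() -> int:
    """Illustration only (not EJBCA's actual code): draw 64 bits from a
    CSPRNG, then clear the top bit so the encoded INTEGER stays positive
    and fixed-length without a leading 0x00 byte."""
    return secrets.randbits(64) & ~(1 << 63)

# Fixing one bit halves the output space: 2**63 possible values,
# i.e. 63 bits of entropy rather than 64.
print(math.log2(2 ** 63))  # 63.0
assert serial_63_bit() < 2 ** 63
```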

Jakob Bohm

Mar 7, 2019, 2:45:25 PM
to mozilla-dev-s...@lists.mozilla.org
Currently the Mozilla root program contains a large number of roots that
are apparently single-nation CA programs serving their local community
almost exclusively, including by providing certificates that local
entities can use to serve content to the rest of the world.

For purposes of this, I define a national CA as a CA that has publicly
self-declared that it serves a single geographic community almost
exclusively, with that area generally corresponding to national borders
of a country or territory.

As highlighted by the discussion, this raises some common concerns for
such CAs:

1. Due to the technical way Mozilla products handle the root program
data, each national CA is trusted to issue certificates for anyone
anywhere in the world despite them not having any self-declared
interest to do so. This constitutes an unintentional security risk
as highlighted years ago by the 2011 DigiNotar (NL) incident.

2. For a variety of reasons, the existence of all these globally trusted
national CAs has made establishment of such national CAs a matter of
pride for governments, regardless of whether they currently have such CAs.

3. There is a legitimate concern that any national CA (government run or
not) may be used by that government as a means to project force in a
manner inconsistent with being trusted outside that country (as
reflected in current Mozilla policy), but consistent with a general
view of the rights of nations (as expressed in the UN charter and
ancient traditions).

4. Some of the greatest nations on Earth have had their official
national CAs rejected by the root program because of #1 or #3,
including the US federal bridge CA and China's CNNIC.

This in turn leads to some practical issues:

5. Should the root program policies provide rules that enforce the
self-declared scope restrictions on a CA? For example, if a CA
has declared that it only intends to issue for entities in the
Netherlands, should certificates for entities beyond that be
considered misissuance incidents for that reason alone?
(DigiNotar involved misissuance in a much more literal sense.)

6. How should rules for the meaning of such geographical intent be
mapped for things like IP address certificates? For example,
should the rules use the geography indicated in NRO address space
assignments to national ISPs? Or perhaps some information provided
by the ISPs themselves? (Commercial IP-to-country databases have
too high an error rate for certificate policy use.)

7. How should rules for the meaning of such geographical intent be
mapped for certificates for domains under gTLDs such as
visit-countryname.org or countryname-government.com?

8. Should Mozilla champion a specification for adding such geographic
restrictions to CA cert name constraints in a manner that is both
backward compatible with other clients and adaptive to the ongoing
movement/reassignment of name spaces to/between nations?

9. Should Mozilla attempt to enforce such intent in its clients (Firefox
etc.) once the technical data exists?

10. The root trust data provided in the Firefox user interface does not
clearly indicate the national or other affiliation of the trusted
roots, such that concerned users may make informed decisions
accordingly. Ditto for the root program dumps provided to other
users of the Mozilla root program data (inside and outside the Mozilla
product family). For example, few users outside Scandinavia would
know that "Sonera" is really a national CA for the countries in which
Telia-Sonera is the incumbent Telco (Finland, Sweden and Åland).


This overall issue was touched repeatedly in the thread, especially
point 3 above, but the earliest I could find was in Message ID
<mailman.257.1550879505....@lists.mozilla.org>
posted on Fri, 22 Feb 2019 23:45:39 UTC by "cooperq"


Ryan Sleevi

Mar 7, 2019, 5:03:12 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
Do you believe there is new information or insight you’re providing from
the last time this was discussed and decided?

For example:
https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/JP1gk7atwjg

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/tr_PDVsZ6-k

https://groups.google.com/forum/m/#!searchin/mozilla.dev.security.policy/Government$20CAs/mozilla.dev.security.policy/qpwFbcRfBmk

I included the search query in the URL, so that you can examine for
yourself what new insight or information is being provided. I may have
missed some salient point in your message, but I didn’t see any new insight
or information that warranted revisiting such discussion.

In the spirit of
https://www.mozilla.org/en-US/about/forums/etiquette/ , it may be best to
let sleeping dogs lie here, rather than continuing this thread. However, if
you feel there has been some significant new information that’s been
overlooked, perhaps you can clearly and succinctly highlight that new
information.

Peter Gutmann

Mar 7, 2019, 8:47:27 PM
to mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
Jakob Bohm via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>This raises 3 derived concerns:

And a fourth, which has been overlooked during all the bikeshedding...
actually I'll call it question 0, since that's what it should have been:

0. Given that the value of 64 bits was pulled out of thin air (or possibly
less well-lit regions), does it really matter whether it's 63 bits, 64
bits, 65 3/8th bits, or e^i*pi bits?

Peter.

Matthew Hardeman

Mar 7, 2019, 8:58:00 PM
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On Thu, Mar 7, 2019 at 7:47 PM Peter Gutmann via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> 0. Given that the value of 64 bits was pulled out of thin air (or possibly
> less well-lit regions), does it really matter whether it's 63 bits, 64
> bits, 65 3/8th bits, or e^i*pi bits?
>

I was actually joking on Twitter...

Let's say there's a CA that specializes in -- among other things -- special
requests...

What if they typically utilize 71 bits of entropy, encoded with a fixed
high-order bit value of 0 to ensure no extra encoding, so that 7/8 of one
byte plus the following 8 bytes are fully populated with 71 bits of entropy
as requested from an appropriate entropy source...

What if a special customer (who may be a degenerate gambler, but isn't
necessarily -- it's merely theorized) insists that they're only going to
accept a "lucky" certificate whose overall serial number decimal value is
any one of the set of any and all prime numbers which may be expressed in
the range of 71-bit unsigned integers?

Can the CA's agent just request the cert, review the to-be-signed
certificate data, and reject and retry until they land on a prime? Then
issue that certificate?

Does current policy address that? Should it?
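As a back-of-the-envelope check of that hypothetical (my own arithmetic, using the prime number theorem; the 71-bit width is taken from the scenario above), restricting serials to primes costs only a handful of bits:

```python
import math

BITS = 71  # hypothetical serial width from the scenario above

# Prime number theorem: roughly N / ln(N) primes below N, so a uniform
# pick among 71-bit primes carries about log2(N / ln(N)) bits of entropy.
n = 2 ** BITS
effective_entropy = math.log2(n / math.log(n))

# The restriction costs about log2(ln 2**71) ~ 5.6 bits, leaving roughly
# 65.4 bits, which as it happens still clears the 64-bit floor.
print(round(effective_entropy, 1))
```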

Peter Gutmann

Mar 7, 2019, 9:15:06 PM
to Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
Matthew Hardeman <mhar...@gmail.com> writes:

>Can the CA's agent just request the cert, review the to-be-signed certificate
>data, and reject and retry until they land on a prime? Then issue that
>certificate?
>
>Does current policy address that? Should it?

Yeah, you can get arbitrarily silly with this. For example, my code has
always used 8-byte serial numbers (based on the German Tank Problem,
nothing to do with the BR). It requests 9 bytes of entropy; if the first
byte of the 8 that get used is zero, it uses the surplus byte, and if
that's still zero it sets it to 1 (again nothing to do with the BR, purely
an ASN.1 encoding thing so you always get a fixed-length value). So
there's a bias of 1 in 64K values. Is that small enough? What if I make
it 32 bits, so it's 1 in 4G values? What about 48 bits? What if I use a
variant of what you're suggesting, a >64-bit structured value that
contains 64 bits of entropy (perhaps something using parity bits or
similar); is that valid?
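My reading of that scheme, sketched in Python (an interpretation for discussion, not the actual library code):

```python
import secrets

def serial_8_bytes() -> bytes:
    """Sketch of the scheme described above (my interpretation, not the
    actual library code): request 9 random bytes; if the leading byte of
    the 8 that get used is zero, substitute the surplus 9th byte, and if
    that is also zero, force it to 1 so the encoded value keeps a fixed
    length. The forced value 1 occurs on ~1 in 64K draws."""
    raw = secrets.token_bytes(9)
    serial = bytearray(raw[:8])
    if serial[0] == 0:
        serial[0] = raw[8]   # fall back to the surplus byte
    if serial[0] == 0:
        serial[0] = 1        # last resort: never a zero leading byte
    return bytes(serial)

s = serial_8_bytes()
assert len(s) == 8 and s[0] != 0
```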

As I said above, you can get arbitrarily silly with this. I'm sure if we
looked at other CA's code at the insane level of nitpickyness that
DarkMatter's use of EJBCA has been examined, we'd find reasons why their
implementations are non-compliant as well.

Peter.

Matthew Hardeman

Mar 7, 2019, 9:17:50 PM
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On Thu, Mar 7, 2019 at 8:14 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

>
> As I said above, you can get arbitrarily silly with this. I'm sure if we
> looked at other CA's code at the insane level of nitpickyness that
> DarkMatter's use of EJBCA has been examined, we'd find reasons why their
> implementations are non-compliant as well.


As if on cue, comes now GoDaddy with its confession.

Peter Gutmann

Mar 7, 2019, 9:18:14 PM
to Matthew Hardeman, Peter Gutmann, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
I wrote:

As I said above, you can get arbitrarily silly with this. I'm sure if we
looked at other CA's code at the insane level of nitpickyness that
DarkMatter's use of EJBCA has been examined, we'd find reasons why their
implementations are non-compliant as well.

Seconds after sending it, this arrived:

As of 9pm AZ on 3/6/2019 GoDaddy started researching the 64bit certificate
Serial Number issue. We have identified a significant quantity of
certificates (> 1.8million) not meeting the 64bit serial number requirement.

I rest my case.

Oh, and the BRs need an update so that half the CAs on the planet aren't
suddenly non-BR compliant based on the DarkMatter-specific interpretation.

Peter.

Peter Gutmann

Mar 7, 2019, 9:20:21 PM
to Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
Matthew Hardeman <mhar...@gmail.com> writes:

>As if on cue, comes now GoDaddy with its confession.

I swear I didn't plan that in advance :-).

Peter.

Matthew Hardeman

Mar 7, 2019, 9:23:52 PM
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org, Jakob Bohm
On Thu, Mar 7, 2019 at 8:20 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> I swear I didn't plan that in advance :-).


I believe you. When the comedy is this good, it's because it wrote itself.
:-)

Ryan Sleevi

Mar 7, 2019, 9:29:21 PM
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org
On Thu, Mar 7, 2019 at 9:18 PM Peter Gutmann via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Oh, and the BR's need an update so that half the CAs on the planet aren't
> suddenly non-BR compliant based on the DarkMatter-specific interpretation.


Past analysis and discussion have shown the interpretation is hardly
specific to a single CA. It was a problem quite literally publicly
discussed during the drafting and wording of the ballot. References were
provided to those discussions. Have you gone and reviewed them? It might be
helpful to do so, before making false statements that mislead.

Matthew Hardeman

Mar 7, 2019, 9:48:06 PM
to Ryan Sleevi, Peter Gutmann, mozilla-dev-s...@lists.mozilla.org
On Thu, Mar 7, 2019 at 8:29 PM Ryan Sleevi via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Past analysis and discussion have shown the interpretation is hardly
> specific to a single CA. It was a problem quite literally publicly
> discussed during the drafting and wording of the ballot. References were
> provided to those discussions. Have you gone and reviewed them? It might be
> helpful to do so, before making false statements that mislead.
>

The actual text of the guideline is quite clear -- in much the same manner
that frosted glass is.

"Effective September 30, 2016, CAs SHALL generate non-sequential
Certificate serial numbers greater than zero (0) containing at least 64
bits of output from a CSPRNG." [1]

Irrespective of the discussion underlying the modifications of the BRs to
incorporate this rule, there are numerous respondent CAs of varying
operational vintage, varying size, and varying organizational complexity.

The history underlying a rule should not be necessary to implement and
faithfully obey a rule. And yet...

Rather than have us theorize as to why non-compliance with this rule seems
to be so widespread, even among a number of organizations which have more
typically adhered to industry best practices, would you be willing to posit
a plausible scenario for why all of this non-compliance has gone on for so
long, by so many, and across so many certificates?

Additionally, assuming a large CA with millions of issued certificates
using an actual 64-bit random serial number... Should the CA also do an
exhaustive issued-serial-number search to ensure that the to-be-signed
serial number is not off-by-one in either direction from a previously
issued certificate serial number? However implausible, if it occurred,
this would indeed result in having participated in the issuance of 2
certificates with sequential serial numbers.
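For scale, a rough birthday-style estimate (my own arithmetic, assuming uniform independent 64-bit serials) of the chance that any two of a CA's certificates end up with equal or adjacent serial numbers:

```python
import math

def adjacent_pair_probability(n_certs: int, bits: int = 64) -> float:
    """Rough estimate: each unordered pair of uniform `bits`-bit serials
    is equal or off-by-one with probability about 3 / 2**bits (equal,
    +1, or -1), and there are n*(n-1)/2 pairs to worry about."""
    pairs = n_certs * (n_certs - 1) / 2
    return -math.expm1(pairs * math.log1p(-3 / 2 ** bits))

# Even at 10 million certificates the chance is on the order of 1e-5,
# so an exhaustive off-by-one search would buy essentially nothing.
print(adjacent_pair_probability(10_000_000))
```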

I agree with Peter Gutmann's statement. Whatever the cause of the final
language in BR 7.1, the language as presently written is awful and needs
to be fixed in such a manner as will eliminate ambiguity within the rules.
I cannot imagine that would hurt compliance; I rather suspect it may
improve it.

[1] https://cabforum.org/wp-content/uploads/CA-Browser-Forum-BR-1.6.3.pdf

bif

Mar 7, 2019, 9:53:57 PM
to mozilla-dev-s...@lists.mozilla.org
Ballot 164's statement of intent is pretty clear: an (arbitrary) 64 bits of randomness was needed to defeat collisions in broken MD5.

With SHA2, the missing 1 bit does not seem to have any impact on the possible collisions.

But BRs are not to be interpreted, just to be applied to the letter, whether it makes sense or not. When it no longer makes sense, the wording can be improved for the future.

PS: replacing a handful of certs within 5 days is fairly easy; replacing thousands (or millions, as we are finding out) is much less likely. Should the BRs account for that?

Matthew Hardeman

Mar 7, 2019, 10:03:42 PM
to bif, mozilla-dev-security-policy
On Thu, Mar 7, 2019 at 8:54 PM bif via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> But BRs are not to be interpreted, just to be applied to the letter,
> whether it makes sense or not. When it no longer makes sense, the wording
> can be improved for the future.
>

Indeed. But following BR 7.1 to the letter apparently doesn't get you all
the way to compliance, by some opinions. After all, nothing in 7.1
requires anything as to the quality of the underlying CSPRNG utilized. It
does not specify whether the 64-bits must be comprised of sequential bits
of data output by the CSPRNG, nor does it specify that one is not permitted
to discard inconvenient values (assuming you seek replacement values from
the CSPRNG).

It is therefore my belief that either the BR 7.1 guideline is
wrong/inadequate, or the opinions which hold that following BR 7.1 to the
written letter is not quite adequate are themselves wrong.

Matt Palmer

Mar 7, 2019, 10:28:17 PM
to dev-secur...@lists.mozilla.org
On Thu, Mar 07, 2019 at 09:03:22PM -0600, Matthew Hardeman via dev-security-policy wrote:
> On Thu, Mar 7, 2019 at 8:54 PM bif via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
> > But BRs are not to be interpreted, just to be applied to the letter,
> > whether it makes sense or not. When it no longer makes sense, the wording
> > can be improved for the future.
>
> Indeed. But following BR 7.1 to the letter apparently doesn't get you all
> the way to compliance, by some opinions.

No, *misinterpreting* BR 7.1 doesn't get you all the way to compliance.

> After all, nothing in 7.1
> requires anything as to the quality of the underlying CSPRNG utilized.

The "CS" in "CSPRNG" stands for "cryptographically secure", and "CSPRNG" is
defined in the BRs.

> It
> does not specify whether the 64-bits must be comprised of sequential bits
> of data output by the CSPRNG,

Nor does it need to.

> nor does it specify that one is not permitted
> to discard inconvenient values (assuming you seek replacement values from
> the CSPRNG).

If you generate a 64-bit random value, then discard some values based on any
sort of quality test, the end result is a 64-bit value with
less-than-64-bits of randomness. The reduction in randomness depends on the
exact quality function employed.

- Matt

Ryan Sleevi

Mar 7, 2019, 10:32:17 PM
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Thu, Mar 7, 2019 at 9:47 PM Matthew Hardeman <mhar...@gmail.com> wrote:

>
> The actual text of the guideline is quite clear -- in much the same manner
> that frosted glass is.
>

> "Effective September 30, 2016, CAs SHALL generate non-sequential
> Certificate serial numbers greater than zero (0) containing at least 64
> bits of output from a CSPRNG. " [1]
>

Isn’t it amazing how “at least” is one of those requirements where you can
look at it, and ask “Should I do the absolute bare minimum, or should I
maybe build in safety?” I find it amazing how, when you rely on doing the
bare minimum, it can somehow backfire.

Irrespective of the discussion underlying the modifications of the BRs to
> incorporate this rule, there are numerous respondent CAs of varying
> operational vintage, varying size, and varying organizational complexity.
>
> The history underlying a rule should not be necessary to implement and
> faithfully obey a rule. And yet...
>

It isn’t required. A basic understanding of ASN.1 is all that’s required,
combined with critical and defensive thinking.

You don’t have to be a CA to have that. As previously provided, there was
discussion on m.d.s.p. a year ago about that. You can find discussions on
zlint about it [1] [2].

These aren’t skills participating in the discussions here necessarily
require, but are absolutely required of CAs operating globally trusted
PKIs.

Rather than have us theorize as to why non-compliance with this rule seems
> to be so widespread, even by a number of organizations which have more
> typically adhered to industry best practices, would you be willing to posit
> a plausible scenario for why all of this non-compliance has gone on for so
> long and by so many across so many certificates?
>

As noted, it has been called out in the past. You can see issues with how,
purely from a linting perspective, the best we can say is something looks
wrong, and to have the CAs explain. I think the framing and implication of
that last question is profoundly unhelpful and misguided. The answer is
that there are a number of CAs continuing to have issues [3], and this is
merely a symptom of yet another issue. These issues would be far easier to
close out if CAs were consistent in following the expectations of incident
reporting, but we continue to see CAs struggle with performing any sort of
meaningful introspective analysis.

While I don’t want to throw Thomas and the PrimeKey folks under the bus
here, it’s clear that the incidents being reported are that CAs are
outsourcing their compliance requirements. They have an obligation to
review and evaluate the code they use - whether it’s EJBCA, ADCS, UniCERT,
or some other stack. Every responsible CA should be having their compliance
teams holistically engage in evaluating the software they use, looking for
other issues. The incident responses we are seeing demonstrate some of them
being proactive in this. It would be absolutely disastrous for a currently
trusted CA to demonstrate this issue in 6 months - not on the basis of the
single bit, but due to the complete dereliction of professional duty to
stay abreast of the industry and compliance that it would represent.

Additionally, assuming a large CA with millions of issued certificates
> using an actual 64-bit random serial number... Should the CA also do an
> exhaustive issued-serial-number search to ensure that the to-be-signed
> serial number is not off-by-one in either direction from a previously
> issued certificate serial number? However implausible, if it occurred,
> this would indeed result in having participated in the issuance of 2
> certificates with sequential serial numbers.
>

These strawman arguments demonstrate a lack of understanding of the
fundamental issue. It’s rather defensible for a CA to issue a one-byte
serial number - or even sequential serial numbers - as you hypothesize,
while still being compliant with the requirements. If such a matter were to
be brought - e.g. to the CA’s problem reporting email - they could examine
and determine that, no, they did have 64 bits of entropy, and it was merely
the probability that what could happen, would.

But that’s not what we’re talking about, and while it is posed as an
argumentum ad absurdum, it belies the substance of what is more meaningful:
how a CA monitors the discussions, ensures compliance, and investigates
issues. A CA that makes a meaningful investigation into the context and
history of an issue, or who takes steps to do more than the bare minimum,
and takes actions to be beyond reproach, is far, far better for the
ecosystem.

Incident reports are the opportunity for the CA to demonstrate how it is
improving, and for the industry to learn and identify risks and challenges
to collectively improve. CAs that promote and encourage that are far more
helpful to the ecosystem.

Frankly, to some extent, it doesn’t matter whether or not participants here
want to debate how well they understood it. It matters whether CAs did -
and they are both expected to be as-or-more-knowledgeable than participants
here, and to rise to a higher standard of expectations.

[1] https://github.com/zmap/zlint/issues/187
[2] https://github.com/zmap/zlint/pull/112
[3] https://wiki.mozilla.org/CA/Incident_Dashboard

Matthew Hardeman

Mar 7, 2019, 10:34:50 PM
to Matt Palmer, MDSP
On Thu, Mar 7, 2019 at 9:28 PM Matt Palmer via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> The "CS" in "CSPRNG" stands for "cryptographically secure", and "CSPRNG" is
> defined in the BRs.
>

Yes. There are various levels of qualification and quality for algorithms
and entropy sources bearing that designation and they've changed over the
years.


>
> > It
> > does not specify whether the 64-bits must be comprised of sequential bits
> > of data output by the CSPRNG,
>
> Nor does it need to.
>

Really, why not? The rule says that 64-bits of output from a CSPRNG must
be utilized. It does not clearly delineate that one can't be choosy about
which 64 to take.


>
> > nor does it specify that one is not permitted
> > to discard inconvenient values (assuming you seek replacement values from
> > the CSPRNG).
>
> If you generate a 64-bit random value, then discard some values based on
> any
> sort of quality test, the end result is a 64-bit value with
> less-than-64-bits of randomness. The reduction in randomness depends on
> the
> exact quality function employed.
>

I understand well the reasons that entropy is desired and I understand well
exactly the way, mathematically, that this behavior would reduce total
entropy. My complaint is that nothing in the rule demands an actual set
minimum amount of true entropy even though that result is clearly what was
really desired.

Jakob Bohm

Mar 7, 2019, 11:38:27 PM
to mozilla-dev-s...@lists.mozilla.org
I was stating that the very specific discussion that recently
unfolded (and which I promised not to mention by name in this thread)
has contained very many opinions on the topic. In fact, the majority of
posts by others have centered on either the entropy issue or this very
issue of what criteria and procedures should be used for trusting
national CAs and whether those criteria should be changed.

Your own posts on Feb 28, 2019 13:54 UTC and Mar 4, 2019 16:31 UTC were
among those posts, as were posts by hackurx, Alex Gaynor, nadim, Wayne
Thayer, Kristian Fiskerstrand and Matthew Hardeman.

I took care not to state what decisions should be made, merely to
summarize the issues in a clear and seemingly non-controversial way,
trying to be inclusive of the opinions stated by all sides. If there
are additional points on the topic that I forgot or that may arise later
in the specific discussion, they can and should be added such that there
will be a useful basis for discussion of whatever should or should not
be done long term, once the specific single case has been handled.

I did not wake this sleeping dog, it was barking and yanking its chain
all week.

Peter Gutmann

Mar 7, 2019, 11:38:53 PM
to Matt Palmer, dev-secur...@lists.mozilla.org
Matt Palmer via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>If you generate a 64-bit random value, then discard some values based on any
>sort of quality test, the end result is a 64-bit value with less-than-64-bits
>of randomness.

That's not what 7.1 says, merely:

CAs SHALL generate non-sequential Certificate serial numbers greater than
zero (0) containing at least 64 bits of output from a CSPRNG

There's nothing there about whether you can, for example, discard values that
you don't like and generate another one (in fact it specifically requires that
you reject the value 0 and generate another one). In particular, for your
objection, how is one totally random value different from another?
Specifically, if I discard a totally random value that has the high bit set
(because of ASN.1 encoding issues) and take the next value generated, how is
that (a) not compliant with 7.1 and (b) different from another totally random
value that happens to not have the high bit set in the first place?

What if I call every cert that would end up with the sign bit set a test cert
and only issue the ones where they're not set? Again, fully compliant with
the wording of 7.1, but presumably not compliant with your particular
interpretation of the wording (OK, it might be, I'm sure you'll let me know if
it is or isn't). That's the problem with rules-lawyering, if you're going to
insist on your own very specific interpretation of a loosely-worded
requirement then it's open season for anyone else to find dozens of other
fully compatible but very different interpretations.

And, again, question zero: Given that the value of 64 bits was pulled out of
thin air, why does it even matter?

Can we just agree that the bikeshed can be any colour people want as long as
you're not using lead-based paint and move on from this bottomless pit?

Peter.

Ryan Sleevi

Mar 8, 2019, 12:10:17 AM
to Jakob Bohm, mozilla-dev-security-policy
On Thu, Mar 7, 2019 at 11:38 PM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 07/03/2019 23:02, Ryan Sleevi wrote:
> > Do you believe there is new information or insight you’re providing from
> > the last time this was discussed and decided?
>

> I took care not to state what decisions should be made, merely to
> summarize the issues in a clear and seemingly non-controversial way,
> trying to be inclusive of the opinions stated by all sides.


These issues have already been decided. Neither the previous posting, nor
this, adds any new information or value to the discussion. The answers have
been provided by Module Owners and Peers previously, as to the questions
that you believe need answering, as I just demonstrated. There is no need to
pose or summarize them as somehow unanswered questions - a cursory
examination, as demonstrated, reveals they have been discussed, debated,
and decided.

If you believe there is significantly new information that merits
revisiting, the burden is on you to demonstrate that and contextualize it
to see how it compares to the past conversation. However, at present,
attempting to simply repeat questions that have already been answered, as
if to prompt new debate, is not only unproductive - it's actively
detrimental.

This thread should end here.

Peter Bowen

Mar 8, 2019, 12:27:31 AM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Thu, Mar 7, 2019 at 11:45 AM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Currently the Mozilla root program contains a large number of roots that
> are apparently single-nation CA programs serving their local community
> almost exclusively, including by providing certificates that they can
> use to serve content with the rest of the world.
>
> For purposes of this, I define a national CA as a CA that has publicly
> self-declared that it serves a single geographic community almost
> exclusively, with that area generally corresponding to national borders
> of a country or territory.
>


> 5. Should the root program policies provide rules that enforce the
> self-declared scope restrictions on a CA[?]


This has been discussed and the decision was no. This in turn moots your
points 6-9.

10. The root trust data provided in the Firefox user interface does not
> clearly indicate the national or other affiliation of the trusted
> roots, such that concerned users may make informed decisions
> accordingly. Ditto for the root program dumps provided to other
> users of the Mozilla root program data (inside and outside the Mozilla
> product family). For example, few users outside Scandinavia would
> know that "Sonera" is really a national CA for the countries in which
> Telia-Sonera is the incumbent Telco (Finland, Sweden and Åland).
>

Mozilla has specifically chosen to not distinguish between "government
CAs", "national CAs", "commercial CAs", "global CAs", etc. The same rules
apply to every CA in the program. Therefore, the "national or other
affiliation" is not something that is relevant to the end user.

These have all been discussed before and do not appear to be relevant to
any current conversation.

Thanks,
Peter

okaphone.e...@gmail.com

unread,
Mar 8, 2019, 3:08:35 AM3/8/19
to mozilla-dev-s...@lists.mozilla.org
Could be me, but when I read this spec as a programmer, I would probably decide that the serial number needs to be bigger than 64 bits. After all, the spec requires you to exclude the value zero, so the entropy of the serial number can only reach 64 bits if its size is actually larger than 64 bits.

And if you need more than 64 bits anyway... having a bit more entropy than required is not going to hurt, so I'd probably go for 128 bits. ;-)
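To sketch what I mean (Python, with made-up helper names, purely for illustration): clearing the top bit to keep the encoding at 8 bytes is exactly what caps a 64-bit draw at 63 bits of entropy, while drawing 128 bits and rejecting zero loses a negligible amount.

```python
import secrets

def clamped_serial() -> int:
    # 64 bits drawn, but the top bit is forced to 0 so the DER INTEGER
    # stays positive without a padding byte: 63 bits of entropy remain.
    return secrets.randbits(64) & ~(1 << 63)

def wide_serial(bits: int = 128) -> int:
    # Draw well over 64 bits and reject zero; at this size the excluded
    # value costs a negligible fraction of the entropy.
    while True:
        n = secrets.randbits(bits)
        if n != 0:
            return n
```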

CU Hans

Matt Palmer

unread,
Mar 8, 2019, 4:10:54 AM3/8/19
to dev-secur...@lists.mozilla.org
On Thu, Mar 07, 2019 at 08:47:46PM -0600, Matthew Hardeman via dev-security-policy wrote:
> On Thu, Mar 7, 2019 at 8:29 PM Ryan Sleevi via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
> > Past analysis and discussion have shown the interpretation is hardly
> > specific to a single CA. It was a problem quite literally publicly
> > discussed during the drafting and wording of the ballot. References were
> > provided to those discussions. Have you gone and reviewed them? It might be
> > helpful to do so, before making false statements that mislead.
>
> "Effective September 30, 2016, CAs SHALL generate non-sequential
> Certificate serial numbers greater than zero (0) containing at least 64
> bits of output from a CSPRNG. " [1]
>
> Irrespective of the discussion underlying the modifications of the BRs to
> incorporate this rule, there are numerous respondent CAs of varying
> operational vintage, varying size, and varying organizational complexity.

Yes, there are, and they all have a huge burden of trust placed on them.

> The history underlying a rule should not be necessary to implement and
> faithfully obey a rule.

I absolutely agree with this. Thankfully, there is no requirement to
understand the history behind the changes under discussion in order to
correctly implement it.

> Rather than have us theorize as to why non-compliance with this rule seems
> to be so widespread, even by a number of organizations which have more
> typically adhered to industry best practices, would you be willing to posit
> a plausible scenario for why all of this non-compliance has gone on for so
> long and by so many across so many certificates?

Because, like so many other things that go on for a long time before they're
discovered, nobody took a look.

> Additionally, assuming a large CA with millions of issued certificates
> using an actual 64-bit random serial number... Should the CA also do an
> exhaustive issued-serial-number search to ensure that the to-be-signed
> serial number is not off-by-one in either direction from a previously
> issued certificate serial number? However implausible, if it occurred,
> this would indeed result in having participated in the issuance of 2
> certificates with sequential serial numbers.

Having sequential serial numbers is not problematic. Having *predictable*
serial numbers is problematic.

- Matt

Jakob Bohm

unread,
Mar 8, 2019, 7:31:17 AM3/8/19
to mozilla-dev-s...@lists.mozilla.org
On 08/03/2019 06:27, Peter Bowen wrote:
> ...
>
> Mozilla has specifically chosen to not distinguish between "government
> CAs", "national CAs", "commercial CAs", "global CAs", etc. The same rules
> apply to every CA in the program. Therefore, the "national or other
> affiliation" is not something that is relevant to the end user.
>
> These have all been discussed before and do not appear to be relevant to
> any current conversation.
>

Many (not me) in the recent discussion of a certain CA have called
for this to be changed one way or another. This is the only thing
that is new.

As I wrote earlier, there were a lot of general policy ideas and
questions mixed into the discussion of that specific case, and my
post was an attempt to summarize those questions and ideas raised by
others.

Maybe the ultimate result will be no change, maybe not. The
discussion certainly has been raised by a lot of people.

Ryan Sleevi

unread,
Mar 8, 2019, 8:19:08 AM3/8/19
to Jakob Bohm, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 7:31 AM Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Many (not me) in the recent discussion of a certain CA have called
> for this to be changed one way or another. This is the only thing
> that is new.
>

I do not believe this is an accurate or correct summary of others
viewpoints that have been shared, and certainly not to the degree that it's
reasonable to highlight "other" people as a basis for violating etiquette.

Given the concerns raised, and as has been pointed out, the decisions made,
unless you personally believe there is significant new information that
would reasonably prompt reconsideration of the original decision, please do
not attempt threads like this. As has been previously summarized, the
present discussion is very similar to past discussions. The burden is to
demonstrate what new information warrants reconsidering, and that burden
rests with the person opening the thread. This is why it's best to let
people speak for themselves and advocate their own positions.

Mike Kushner

unread,
Mar 8, 2019, 10:39:56 AM3/8/19
to mozilla-dev-s...@lists.mozilla.org
Hi Jakob,

On Thursday, March 7, 2019 at 7:30:03 PM UTC+1, Jakob Bohm wrote:
> In the course of the other discussion it was revealed that EJBCA by PrimeKey
> has apparently:
>
> 1. Made serial numbers with 63 bits of entropy the default. Which is
> not in compliance with the BRs for globally trusted CAs and SubCAs.

This has been the default setting since 2001, predating Ballot 164 by quite some time. At the time it was more than sufficient, and we hadn't revisited it.

> 2. Mislead CAs to believe this setting actually provided 64 bits of
> entropy.

I'm presuming that you're not assigning any intention to the above.

We have always been open about the fact that EJBCA generates 64-bit serial numbers, compliant with RFC 5280 (RFC 3280 back then) and X.690. At the start of the previous thread we weren't aware of the requirements made by CABF Ballot 164, as we've only been actively following the proceedings for the last couple of years; as a vendor, we have historically viewed the responsibility for configuring EJBCA correctly to meet existing standards as resting with the end customer. We've been aware of the BR requirement for some time now, but we were not aware of the detailed previous discussion of 63 vs. 64 bits.

> 3. Discouraged CAs from changing that default.

This is not true in the least. Serial number size has been configurable since at least 2008 (long before B164). There is a note in the configuration file about not changing it unless you understand what you're doing, but that pertains to not lowering it below 64 bits. Raising the serial number size (and thus the entropy) has in fact been done by several of our customers (in response to B164, or for whatever other reason).

> This raises 3 derived concerns:
>
> 4. Any CA using the EJBCA platform needs to manually check if they
> have patched EJBCA to comply with the BR entropy requirement despite
> EJBCAs publisher (PrimeKey) telling them otherwise.
> Maybe this should be added to the next quarterly mail from Mozilla to
> the CAs.

A patch isn't required as the value is configurable in a config file, and in response to the concerns raised here we've added functionality for changing the serial number size without requiring a change to the config files.
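For an operator who wants a quick sanity check of an existing installation, a rough heuristic (my own sketch, not an official PrimeKey tool) is to look at a sample of issued serials: a 64-bit draw with the top bit cleared can never exceed 2^63, whereas a true 64-bit draw exceeds it about half the time.

```python
import secrets

def serials_look_63_bit(serials: list[int]) -> bool:
    # Under a true 64-bit CSPRNG draw, roughly half of all serials have
    # bit 63 set; seeing none across a large sample strongly suggests
    # the top bit is being cleared (i.e. 63 bits of entropy).
    sample = [s for s in serials if 0 < s < 2**64]
    return len(sample) >= 100 and all(s < 2**63 for s in sample)

# Synthetic demonstration: serials with the top bit cleared vs. forced set.
clamped = [secrets.randbits(64) & ~(1 << 63) for _ in range(200)]
full = [secrets.randbits(64) | (1 << 63) for _ in range(200)]
```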

Again, I don't agree with your statement that we've told anybody that EJBCA, in its default configuration, complies with the BRs, as much configuration is needed to issue BR-compliant certificates. A correct technical description of serial numbers is documented in the configuration file. The documentation did not consider the above discussion regarding entropy, as 64-bit serial numbers were leagues past the entropy requirements prior to 2016.

The documentation absolutely does encourage the end user to use larger serial number sizes. EJBCA is used in a lot of non-public-CA use cases, and as such the ability to configure according to the BR requirement (or not) is a key feature.

It does bear mentioning that the purpose of requiring serial number entropy was to mitigate collision attacks against SHA-1, something which today has no bearing whatsoever. I'm not claiming that there will never be a collision found against SHA-256, but I hope you understand that this is a compliance issue against a requirement that, for now, has no actual security impact.

> 5. Is it good for the CA community that EJBCA seems to be the only
> generally available software suite for large CAs to use?

While I agree that diversity is good for any ecosystem, I would remind you that a majority of CAs don't actually run our software, and many run their own proprietary solutions. The availability of EJBCA is likely due to the fact that the vast majority of our code is FOSS (and the rest is always available to customers), which is partly due to our wish to make PKI available to all, and partly because we encourage external inspection of our implementation.


> 6. Should the CA and root program community be more active in ensuring
> compliance by critical CA infrastructure providers such as EJBCA and
> the companies providing global OCSP network hosting.

Absolutely, and I would say that many already are. We don't have any goal other than making EJBCA as compliant as possible, and have always aimed to stay a step ahead of existing standards. Since 2016 (as a part of maturing as a software vendor) we changed how we handle requirements work, actively following various standards orgs (including CABF) instead of receiving requirements from our customers.

Cheers,
Mike

Matthew Hardeman

unread,
Mar 8, 2019, 6:50:55 PM3/8/19
to Matt Palmer, MDSP
On Fri, Mar 8, 2019 at 3:10 AM Matt Palmer via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

Having sequential serial numbers is not problematic. Having *predictable*
> serial numbers is problematic.


My problem with this is that, if we parse the English-language constructs
of the rule as stated in the BRs, the first requirement of a certificate
serial number is literally "non-sequential Certificate serial numbers", and
then furthermore that these must consist of at least 64 bits of output from
a CSPRNG.

Both your and Ryan Sleevi's comments seem to suggest that the
non-sequential part doesn't really matter when it arises incidentally as
long as they're randomly generated and that two certificates with
certificate serial numbers off-by-one from each other would not be a
problem.

I am well aware of the reason for the entropy in the certificate serial
number. What I'm having trouble with is that there can be no dispute that
two certificates with serial numbers off by one from each other, no matter
how you wind up getting there, are in fact sequential serial numbers and
that this would appear to be forbidden explicitly.

It seems that in reality that your perspective calls upon the CA to act
according to the underlying risk that the rule attempts to mitigate rather
than abide the literal text. That seems a really odd way to construe a
rule.

Ryan Sleevi

unread,
Mar 8, 2019, 7:05:05 PM3/8/19
to Matthew Hardeman, Matt Palmer, MDSP
On Fri, Mar 8, 2019 at 6:50 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I am well aware of the reason for the entropy in the certificate serial
> number. What I'm having trouble with is that there can be no dispute that
> two certificates with serial numbers off by one from each other, no matter
> how you wind up getting there, are in fact sequential serial numbers and
> that this would appear to be forbidden explicitly.
>
> It seems that in reality that your perspective calls upon the CA to act
> according to the underlying risk that the rule attempts to mitigate rather
> than abide the literal text. That seems a really odd way to construe a
> rule.
>

I think this is fundamentally an unhelpful way to think about and frame the
problem, but I think it's largely a result of not having to be a CA placed
in this position.

You're absolutely correct that two certificates, placed next to each other,
could appear sequential. Someone might then make a claim that the CA has
violated the requirements. The CA can then respond by discussing how they
actually validate serial numbers, and the whole matter can be dismissed as
compliant.

You're fixating on the "rules lawyering" part, but that's not a remotely
productive framing. A CA that is doing the right thing can demonstrate how
they were doing it. A CA that is doing the wrong thing can't. That
'problem' you see is easily and rapidly resolved by such an explanation.

This framing is exactly why the ZLint side of things warns, rather than
errors - because it can arise in legitimate cases. If there's concern that
it's arising in illegitimate cases, a report is made, the CA investigates,
and the information shared, and we all move on. It's not that hard or
unreasonable :)

A CA is expected to be adversarially reading the BRs, and from that, either
highlighting concerns to m.d.s.p. and the CA/Browser Forum ("you COULD
interpret it like this") or taking steps to mitigate even under the most
adversarial reading. If and when an incident occurs, a CA that performed
such steps is in a far better place to explain the incident and the steps
taken. A CA that doesn't, and says "We didn't know", is more likely
indicative of a CA not adversarially reading or critically evaluating it.

Matthew Hardeman

unread,
Mar 8, 2019, 7:22:24 PM3/8/19
to mozilla-dev-s...@lists.mozilla.org
On Friday, March 8, 2019 at 6:05:05 PM UTC-6, Ryan Sleevi wrote:

> You're absolutely correct that two certificates, placed next to eachother,
> could appear sequential. Someone might then make a claim that the CA has
> violated the requirements. The CA can then respond by discussing how they
> actually validate serial numbers, and the whole matter can be dismissed as
> compliant.

Let's set aside certificates for a moment and talk about serial numbers, defined elsewhere as positive integers.

Certificate serial number A (represented as plain unencoded integer): 123456
Certificate serial number B (represented as plain unencoded integer): 123457

Can we agree that those two numbers are factually provable as sequential as pertains integer mathematics?

If so, then regardless of when (or in what order) two different certificates arise in which those serial numbers feature, as long as they arise as certificates issued by the same issuing CA, two certificates with definitionally sequential numbers have at that point been issued.

Pursuant to the plain language of 7.1 as written, that circumstance -- regardless of how it would occur -- would appear to be a misissuance.
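To put numbers on how likely that circumstance actually is, here is a back-of-the-envelope estimate (my own arithmetic, not anything drawn from the BRs):

```python
def expected_adjacent_pairs(n: int) -> float:
    # Each unordered pair of independent, uniform 64-bit serials differs
    # by exactly 1 with probability about 2/2**64, so by linearity the
    # expected number of "sequential" pairs among n certificates is:
    return n * (n - 1) / 2 * 2 / 2**64
```

Even at a billion certificates that works out to roughly 0.05 expected adjacent pairs, yet on the plain reading a single unlucky pair would constitute misissuance.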

I concur with you fully that a CA (and anyone, really) should view the BRs with an adversarial approach to review.

The rule as written requires that the output bits come from a CSPRNG. But it doesn't say that they have to come from a single invocation of a CSPRNG, or that they have to be collected as a contiguous bit stream with no bits of output discarded and replaced by further invocations. Clearly a technicality, but shouldn't the rules be engineered with the assumption that implementers (or their software vendors) might take a different interpretation?

Peter Gutmann

unread,
Mar 8, 2019, 8:11:31 PM3/8/19
to mozilla-dev-s...@lists.mozilla.org
I didn't post this as part of yesterday's message because I didn't want to
muddy the waters even further, but let's look at the exact wording of BR 7.1:

CAs SHALL generate non-sequential Certificate serial numbers greater than

zero (0) containing at least 64 bits of output from a CSPRNG

Note the comment I made yesterday:

That's the problem with rules-lawyering, if you're going to insist on your
own very specific interpretation of a loosely-worded requirement then it's
open season for anyone else to find dozens of other fully compatible but
very different interpretations.

So let's look at the most pathologically silly, but still fully BR 7.1
compliant, serial number you can come up with. Most importantly, 7.1 never
says what form those bits should be in, merely that the number needs to
contain "at least 64 bits of output from a CSPRNG". In particular, it doesn't
specify which order those bits should be in, or which bits should be used, as
long as there are at least 64.

So the immediate application of this observation is to make any 64-bit value
comply with the ASN.1 encoding rules: If the first bit is 1 (so the sign bit
is set), swap it with any convenient zero bit elsewhere in the value.
Similarly, if the first 9 bits are zero, swap one of them with a one bit from
somewhere else. Fully compliant with BR 7.1, and now also fully compliant
with ASN.1 DER.
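
In code, the sign-bit swap looks something like this (sketching only the
first transformation, and assuming a zero bit exists to swap with):

```python
import secrets

def der_friendly(x: int) -> int:
    # Swap a set sign bit (bit 63) with the lowest clear bit: the value
    # still consists of exactly the same 64 CSPRNG-output bits, but now
    # encodes as a positive INTEGER without a leading padding octet.
    if x >> 63:
        for i in range(63):
            if not (x >> i) & 1:
                return (x | (1 << i)) & ~(1 << 63)
    return x  # already positive, or all-ones (nothing to swap)

serial = der_friendly(secrets.randbits(64))
```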

Let's take it further. Note that there's no requirement for the order to be
preserved. So let's define the serial number as:

serialNumber = sortbits( CSPRNG64() );

On average you're going to get a 50:50 mix of ones and zeroes, so your serial
numbers are all going to be:

0x00000000FFFFFFFF

plus/minus a few bits around the middle. When encoded, this will actually be
0x00FFFFFFFF, with the remaining zero bits implicit - feel free to debate
whether the presence of implicit zero bits is compliant with BR 7.1 or not.

Anyway, continuing, you can also choose to alternate the bits so you still get
a fixed-length value:

0x5555555555555555

(plus/minus a bit or two at the LSB, as before).

Or you could sort the bits into patterns, for example to display as rude
messages in ASCII:

"BR7SILLY"

Or, given that you've got eight virtual pixels to play with, create ASCII art
in a series of certificates, e.g. encode one line of an emoji in each serial
number.

Getting back to the claim that "BR 7.1 allows any serial number except 0",
here's how you get this:

At one end of the range, your bit-selection rule is "discard every one bit
except the 64th one", so your serial number is:

0x0000000000000001

or, when DER encoded:

0x01

At the other end of the scale, "discard every zero bit except the first one":

0x7FFFFFFFFFFFFFFF

or INT_MAX.

All fully compliant with the requirement that:

CAs SHALL generate non-sequential Certificate serial numbers greater than

zero (0) containing at least 64 bits of output from a CSPRNG

I should note in passing that this also allows all the certificates you issue
to have the same serial number, 1, since they're non-sequential and greater
than zero.

Peter.

Ryan Sleevi

unread,
Mar 8, 2019, 8:42:47 PM3/8/19
to Peter Gutmann, mozilla-dev-s...@lists.mozilla.org
On Fri, Mar 8, 2019 at 8:11 PM Peter Gutmann via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> I didn't post this as part of yesterday's message because I didn't want to
> muddy the waters even further, but let's look at the exact wording of BR
> 7.1:
>
<snip>

> All fully compliant with the requirement that:
>
> CAs SHALL generate non-sequential Certificate serial numbers greater than
> zero (0) containing at least 64 bits of output from a CSPRNG
>

I'm not sure this will be a very productive or valuable line of discussion.

As best I can tell, you're discussing pathological interpretations,
implicitly, it seems, with a critique of the current language. Complaining
about things "as they are" doesn't really help anyone, other than perhaps
the speaker, but if there is a positive suggestion that you believe
addresses the concerns you see, it may be useful to just make that argument
and make it clearly. It may be that you don't know what the proposed
language should be, in which case, you should clearly state that.

Alternatively, you're attempting to explore "What would happen if I was a
CA and I gave these answers". As you seem to acknowledge, silly answers get
silly results. It's well known in this community that attempting to hem and
haw through creative interpretations of language, rather than trying to
operate beyond reproach (and thus actively try to avoid silly answers),
tend to be less likely to gain or retain trust. The world is fully of silly
answers from silly people, and while that's great, it doesn't seem very
worthwhile or productive to discuss - all that really matters is whether
expected-to-be-smart people, CAs, are the ones giving silly answers.

Of course, there are quite glaring flaws in the argument, particularly that
"all" of these are compliant. None of them are compliant under any
reasonable reading. None of those are defensible outputs of a CSPRNG; they
are outputs of Peter's Silly Algorithm, a non-cryptographically-strong
non-pseudo-random-number generator.

The reason we can see these arguments as silly is that the burden of proof
rests on demonstrating how the serial complies. Language lawyering over
whether "contains" allows "as input to my Silly Algorithm" is a tactic we can take as a
thought experiment, but it's not productive, because all that matters is
whether or not a CA is silly enough to make such a silly argument. And if
they are that silly, they are unlikely to find such a silly answer
well-received.

Of course, all of this is understandable, when you consider that the 'how'
you respond to an incident is just as important as the 'what' of the
incident response. A CA that responds to incidents using silly examples is
not doing themselves favors. Focusing on the fact that someone "could" give
silly answers is to simply ignore whether or not it would be wise or
defensible to do so, or whether there are alternatives that avoid silliness
entirely.

Peter Gutmann

unread,
Mar 8, 2019, 8:43:09 PM3/8/19
to mozilla-dev-s...@lists.mozilla.org, Peter Gutmann
I wrote:

>So the immediate application of this observation is to make any 64-bit value
>comply with the ASN.1 encoding rules: If the first bit is 1 (so the sign bit
>is set), swap it with any convenient zero bit elsewhere in the value.
>Similarly, if the first 9 bits are zero, swap one of them with a one bit from
>somewhere else. Fully compliant with BR 7.1, and now also fully compliant
>with ASN.1 DER.

Oops, need to clarify that: Note the specific use of "swap one of them". You
can't just drop in a zero bit you made up yourself, you have to use one of the
original zero bits that came from the CSPRNG or you won't be compliant with BR
7.1 any more. So you need to swap in a genuine zero bit from elsewhere in the
value, not just replace it with your own made-up zero bit.

Peter.

Peter Gutmann

unread,
Mar 8, 2019, 9:27:57 PM3/8/19
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org
Ryan Sleevi <ry...@sleevi.com> writes:

>I'm not sure this will be a very productive or valuable line of discussion.

What I'm pointing out is that beating up CAs over an interpretation of the
requirements that didn't exist until about a week ago, when it was pointed
out in relation to DarkMatter, is unfair to the CAs.  If you're going to
impose a specific interpretation on them, then get it added to the BRs at a
future date and enforce it then; don't retroactively punish CAs for
something that didn't exist until a week or two ago.

>Of course, there are quite glaring flaws in the argument, particularly that
>"all" of these are compliant. None of them are compliant under any reasonable
>reading.

Again, it's your definition of "reasonable".  A number of CAs, who applied
their own reasonable reading of the same requirements, seem to think
otherwise.  They're now being punished for the fact that their reasonable
reading differs from Mozilla's reasonable reading.

>I would strongly caution CAs against adopting any of these interpretations,
>and suggest it would be best for CAs to wholly ignore the message referenced.

"Pay no attention to the message behind the curtain".

Peter.

Ryan Sleevi

unread,
Mar 8, 2019, 9:38:30 PM3/8/19
to Peter Gutmann, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org
On Fri, Mar 8, 2019 at 9:27 PM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

> Ryan Sleevi <ry...@sleevi.com> writes:
>
> >I'm not sure this will be a very productive or valuable line of
> discussion.
>
> What I'm pointing out is that beating up CAs over an interpretation of the
> requirements that didn't exist until about a week ago


I'm not sure if there's any value in continuing to highlight that you're
factually misrepresenting things, rather significantly, and thus
undermining much of your contribution.

Several times now, multiple people have pointed out the discussions related
to this that happened prior to, during, and following the introduction of
this requirement. Your choice to ignore or deny such evidence is extremely
counter-productive.


> If you're going to impose a
> specific interpretation on them then get it added to the BRs at a future
> date
> and enforce it then, don't retroactively punish CAs for something that
> didn't
> exist until a week or two ago.


This framing is factually and materially false. There is no retroactive
punishment occurring, just as the guidance was long-existing.

I don't see there being any opportunity to productively engage, given the
good-faith effort to correct your misunderstanding, which you still persist
in advocating. Similarly, I do not think it at all helpful that you
continue to ignore the objectives and goals of the incident response
process, the value and importance it serves the community, and the
expectations of the CAs.

Perhaps there's an argument to be made that we should litigate what "the"
means. It would be a fantastic spectacle, but it would be both thoroughly
unproductive and fail to achieve any of the goals or objectives of a
healthy Web PKI. Such exercises can and should be conducted elsewhere,
while the rest of us try to make progress on improving how CAs respond to
incidents caused by behaviours long-documented as incompatible with the
requirements.

Ryan Sleevi

unread,
Mar 8, 2019, 9:52:33 PM3/8/19
to Matthew Hardeman, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 7:22 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Pursuant to the plain language of 7.1 as written, that circumstance --
> regardless of how it would occur -- would appear to be a misissuance.
>

I've already addressed this line of reasoning several times now as to what
the expectations are. Indeed, the very message you're replying to has
already addressed the scenario and the expectations, so I'm not sure
repeating it is furthering much new insight.

The rule as written requires that the output bits have come from a CSPRNG.
> But it doesn't say that they have to come from a single invocation of a
> CSPRNG or that they have to be collected as a contiguous bit stream from
> the CSPRNG with no bits of output from the CSPRNG discarded and replaced by
> further invocation of the CSPRNG. Clearly a technicality, but shouldn't
> the rules be engineered with the assumption that implementers (or their
> software vendors) might take a different interpretation?
>

We can only do so much to defend against the jackass genie [1]. Likewise,
there is no defensible interpretation under which an algorithm that, say,
takes bits from a CSPRNG and either keeps or discards them based on some
non-CSPRNG algorithm, and then outputs them, still represents a CSPRNG,
just as we cannot say that we sample bits from RFC 3514 [2] and keep only
the good ones, thus ensuring we never corrupt our machine with evil. The
substance of this argument is quite literally haggling over what "output"
and "contains" mean, and that, frankly, is obnoxious.

I appreciate the attention to detail, but I find it difficult to feel that
it is a good faith effort that is designed to produce results consistent
with the goals that many of this community have and share, and thus don't
think it would be a particularly valuable thing to continue discussing.
While there is certainly novelty in the approach, which is not at all
unfamiliar in the legal profession [3], care should be taken to make sure
that we are making forward progress, rather than beating dead horses.

I'm not going to tell you to stop talking about this, as I think it'd be
improper to police such a thread given my involvement in it. However, I
want to highlight that nothing productive has emerged from it, and the
creative interpretations being advocated. We are no further in
understanding any of the questions being proposed by those applying
creative interpretations, because the principles have not changed. Every
time a CA has tried to apply such problematic logic, it has worked out
poorly for them in this community: whether arguing it was "ambiguous" that
MITM was prohibited, even though there was a requirement to validate (and
ensure validation of) all domain names in a certificate, or arguing that
it's not "misissuance" even though no evidence can be provided that the
requirements were followed, nothing good has come of it.

Indeed, I think advocating this line of reasoning is doing more active
harm, to this community and to CAs, and I hope it's abundantly clear as to
why that is.

[1] https://tvtropes.org/pmwiki/pmwiki.php/Main/JackassGenie
[2] https://www.ietf.org/rfc/rfc3514.txt
[3] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_dispute

Matthew Hardeman

unread,
Mar 8, 2019, 10:29:09 PM3/8/19
to Ryan Sleevi, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 8:52 PM Ryan Sleevi <ry...@sleevi.com> wrote:

I appreciate the attention to detail, but I find it difficult to feel that
> it is a good faith effort that is designed to produce results consistent
> with the goals that many of this community have and share, and thus don't
> think it would be a particularly valuable thing to continue discussing.
> While there is certainly novelty in the approach, which is not at all
> unfamiliar in the legal profession [3], care should be taken to make sure
> that we are making forward progress, rather than beating dead horses.
>

In the spirit of demonstrating good faith, looking forward, and perhaps
even making a useful contribution, I have started a new thread [1] in
which I propose alternative language which might replace the specification
presently in BR 7.1. I would appreciate your thoughts on it.

[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/PDzNNsxhzLU/F0uxY6qmCAAJ


Ryan Sleevi

unread,
Mar 8, 2019, 10:49:58 PM3/8/19
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 10:29 PM Matthew Hardeman <mhar...@gmail.com>
wrote:

> I would appreciate your thoughts on it.
>

I consider the matter more than settled, based on the clear historical
evidence, so I see no value in engaging further. The amount of time and
energy necessary to evaluate and reason about it seems extremely wasteful,
given the ease at which alternative and plainly obvious solutions to comply
with the existing language exist. I consider that only a single CA has
represented any ambiguity as being their explanation as to why the
non-compliance existed, and even then, clarifications to resolve that
ambiguity already existed, had they simply been sought.

I do appreciate the effort, even though I believe it misspent. I do
sincerely hope that this thoroughly beaten horse corpse may be left alone
for some time. Of all the things to improve in the BRs that can have real
and meaningful improvement to the ecosystem, this does not rate on my Top
100.


Matthew Hardeman

Mar 8, 2019, 10:55:03 PM
to Ryan Sleevi, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 9:49 PM Ryan Sleevi <ry...@sleevi.com> wrote:

> I consider that only a single CA has represented any ambiguity as being
> their explanation as to why the non-compliance existed, and even then,
> clarifications to resolve that ambiguity already existed, had they simply
> been sought.
>

Please contemplate this question, which is intended as rhetorical, in the
most generous and non-judgmental light possible. Have you contemplated the
possibility that only one CA attempted to do so because you've stated your
interpretation and because they're subject to your judgement and mercy,
rather than because the text as written reflects a single objective
mechanism which matches your own position?

Ryan Sleevi

Mar 8, 2019, 11:03:26 PM
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 10:54 PM Matthew Hardeman <mhar...@gmail.com>
wrote:


Peter Bowen

Mar 8, 2019, 11:33:20 PM
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-security-policy
On Fri, Mar 8, 2019 at 7:55 PM Matthew Hardeman via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On Fri, Mar 8, 2019 at 9:49 PM Ryan Sleevi <ry...@sleevi.com> wrote:
>
> > I consider that only a single CA has represented any ambiguity as being
> > their explanation as to why the non-compliance existed, and even then,
> > clarifications to resolve that ambiguity already existed, had they simply
> > been sought.
> >
>
> Please contemplate this question, which is intended as rhetorical, in the
> most generous and non-judgmental light possible. Have you contemplated the
> possibility that only one CA attempted to do so because you've stated your
> interpretation and because they're subject to your judgement and mercy,
> rather than because the text as written reflects a single objective
> mechanism which matches your own position?
>

Matthew,

I honestly doubt so. It seems that one CA software vendor had a buggy
implementation, but we know this is not universal. For example,
https://github.com/r509/r509/blob/05aaeb1b0314d68d2fcfd2a0502f31659f0de906/lib/r509/certificate_authority/signer.rb#L132
and https://github.com/letsencrypt/boulder/blob/master/ca/ca.go#L511 are
open source CA software packages that clearly do not have the issue.
Further at least one CA has publicly stated their in-house written CA
software does not have the issue.

I know, as the author of cablint, that I didn't have any confusion. I
didn't add more checks because of the false positive rate issue; if I
checked for 64 or more bits, it would be wrong 50% of the time. The rate
is still unacceptable with even looser rules; in 1/256 cases the top 8
bits will all be zero, leaving the serial a whole byte shorter.
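To make the arithmetic above concrete, here is a small Python sketch (an illustration, not cablint itself) that samples minimal 64-bit CSPRNG serials and counts how often the value "looks short":

```python
import secrets

# Illustration of the false-positive rates described above: draw exactly
# 64 bits from a CSPRNG, as a minimal reading of BR 7.1 allows, and count
# how often the resulting value would trip a naive length check.
N = 100_000
top_bit_clear = 0   # would fail a naive "64 or more encoded bits" check
top_byte_zero = 0   # encodes a whole byte shorter
for _ in range(N):
    serial = int.from_bytes(secrets.token_bytes(8), "big")
    if serial < 2**63:
        top_bit_clear += 1
    if serial < 2**56:
        top_byte_zero += 1

print(top_bit_clear / N)   # ≈ 0.5
print(top_byte_zero / N)   # ≈ 1/256 ≈ 0.004
```

Both rates match the figures in the post: half of all draws have a clear top bit, and roughly one in 256 loses an entire leading byte.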

I do personally think that the CAs using EJBCA should not be faulted here;
their vendor added an option to be compliant with the BRs and it was very
non-obvious that it had a bug in the implementation. Based on my
experience with software development, we should be encouraging CAs to use
well tested software rather than inventing their own, when possible.

Thanks,
Peter

Dimitris Zacharopoulos

Mar 9, 2019, 12:46:41 AM
to mozilla-dev-security-policy
Adding to this discussion, and to show that there were, in fact,
different interpretations (all in good faith), please check the issue I
had created in Dec 2017: https://github.com/awslabs/certlint/issues/56.

My simple interpretation of the updated requirement in 7.1 at the time
was that "no matter what, the resulting serial number should be at least
64 bits long". However, experts like Peter Bowen, Rob Stradling and Matt
Palmer, paying attention to details of the new requirement, gave a
different interpretation. According to their explanation, if you take
64-bits from a CSPRNG, there is a small but existing probability that
the resulting serialNumber will be something smaller.  So, "shorter"
serial numbers were not considered a violation of the BRs as long as the
64-bits came out of a CSPRNG.

I am personally shocked that a large part of this community considers
that now is the time for CAs to demonstrate "agility to replace
certificates", as lightly as that, without considering the significant
pain that Subscribers will experience having to replace hundreds of
thousands of certificates around the globe. It is very possible that
Relying parties will also suffer availability issues.

As discussed before, automation is one of the goals (other opinions had
been raised, noting security concerns to this automation). Centralized
systems like large web hosting providers or single large Subscribers
like the ones already mentioned in current incident reports, can build
automation easier. However, simple/ordinary Subscribers that don't have
the technical skills to automate the certificate replacement, that
struggled to even install certificates in their TLS servers in the first
place, will face a huge burden for no good reason.

I don't know if others share the same view about the interpretation of
7.1 but it seems that some highly respected members of this community
did. If we have to count every CA that had this interpretation, then I
suppose all CAs that were using EJBCA with the default configuration
have the same interpretation.

BTW, the configuration in EJBCA that we are talking about, as the
default number of bytes, had the number "8" in the setting, resulting in
64-bits, not 63. So, as far as the CA administrator was concerned, this
setting resulted in using 64 random bits from a CSPRNG. One would have
to see the internal code to determine that the first bit was replaced by
a zero.
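A purely hypothetical Python sketch of the behaviour described above (illustrative only, not PrimeKey's actual code): the administrator asks for 8 random bytes, but the sign bit is silently cleared:

```python
import secrets

# Hypothetical sketch, not PrimeKey's actual implementation: take the 8
# CSPRNG bytes implied by the "8" setting, then force the serial positive
# by clearing the top bit -- leaving only 63 bits that can actually vary.
raw = bytearray(secrets.token_bytes(8))
raw[0] &= 0x7F                      # first bit replaced by a zero
serial = int.from_bytes(raw, "big")
assert 0 <= serial < 2**63          # 63 bits of entropy, not 64
```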

IMO, Mozilla should also treat this as an incident and evaluate the
specific parameters (strict interpretation of section 7.1, CAs did not
deliberately violate the requirement, a globally-respected software
vendor and other experts had a different allowable interpretation of a
requirement, the security impact on Subscribers and Relying Parties for
1 bit of entropy is negligible), and consider treating this incident at
least as the underscore issue. In the underscore case, there was a SCWG
ballot with an effective date where CAs had to ultimately revoke all
certificates that included an underscore.


Thanks,
Dimitris.

On 9/3/2019 6:32 AM, Peter Bowen via dev-security-policy wrote:

Peter Gutmann

Mar 9, 2019, 1:13:58 AM
to mozilla-dev-security-policy, Dimitris Zacharopoulos
Dimitris Zacharopoulos via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>If we have to count every CA that had this interpretation, then I suppose all
>CAs that were using EJBCA with the default configuration have the same
>interpretation.

There's also an unknown number of CAs not using EJBCA that may have even
further interpretations. For example my code, which I'll point out in advance
has nothing to do with the BR and predates the existence of the CAB Forum
itself, may or may not be compliant with whatever Mozilla's interpretation of
7.1 is. I literally have no idea whether it meets Mozilla's expectations. It
doesn't do what EJBCA does, so at least it's OK there, but beyond that I have
no idea whether it does what Mozilla wants or not.

I assume any number of other CAs are in the same position, and given that if
they guessed wrong they have to revoke an arbitrarily large number of certs,
it's in their best interests to keep their heads down and wait for this to
blow over.

So perhaps instead of trying to find out which of the hundreds of CAs in the
program aren't compliant, we can check which ones are. Would any CA that
thinks it's compliant let us know, and indicate why they think they're
compliant? For example "we take 64 bits of CSPRNG output, pad it with a
leading <whatever>, and use that as the serial number", in other words what
Matthew Hardeman suggested, would seem to be OK.

Peter.

Ryan Sleevi

Mar 9, 2019, 7:38:06 AM
to Dimitris Zacharopoulos, mozilla-dev-security-policy
I’m chiming in, Dimitris, as it sounds like you may have unintentionally
misrepresented the discussion and positions, and I want to provide you, and
possibly HARICA, the guidance and clarity it needs in this matter.

On Sat, Mar 9, 2019 at 12:46 AM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

> I am personally shocked that a large part of this community considers
> that now is the time for CAs to demonstrate "agility to replace
> certificates", as lightly as that, without considering the significant
> pain that Subscribers will experience having to replace hundreds of
> thousands of certificates around the globe. It is very possible that
> Relying parties will also suffer availability issues.


I believe this significantly misunderstands the discussion and motivation.
Having read all of the discussion to date, I do not believe this is at all
an accurate framing of the expectations or motivations. I would humbly ask
that you provide citations to back this claim.

You are correct if you were to say one or two people have provided such a
goal, but that’s certainly not consistent with the majority of the
discussion to date from the root program participants. Indeed, the
expectation expressed is that, *as with every other incident*, the CA
consistently follow the expectations.

I highlight this because I don’t think it’s reasonable to portray
existing expectations, which have been repeatedly clarified, as somehow
motivated by one or two participants’ views.

If you truly feel this way, please revisit the discussion in
https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/S2KNbJSJ-hs
, as I hope that mine and Wayne’s responses can demonstrate this. Judging
by that thread, only a single voice has expressed something remotely as to
how you’ve phrased it.

I don't know if others share the same view about the interpretation of
> 7.1 but it seems that some highly respected members of this community
> did. If we have to count every CA that had this interpretation, then I
> suppose all CAs that were using EJBCA with the default configuration
> have the same interpretation.


I believe this also misunderstands the discussion to date, and as a
consequence misrepresents this. I don’t believe it is reasonable or fact
based to suggest that the CAs that had incidents necessarily shared the
interpretation. The incident reports demonstrate that there are a myriad
reasons, beyond interpretive differences, that one can find themselves in
such a situation. Avoiding conflating the two is necessary, although if you
feel it is justified, then I would implore you when summarizing others
views to support your view, that you provide the direct links and
references. This makes it easier to respond to and provide CAs the
necessary clarity of expectations, as well as allows other participants to
evaluate and judge themselves the accuracy of the summary.

BTW, the configuration in EJBCA that we are talking about, as the
> default number of bytes, had the number "8" in the setting, resulting in
> 64-bits, not 63. So, as far as the CA administrator was concerned, this
> setting resulted in using 64 random bits from a CSPRNG. One would have
> to see the internal code to determine that the first bit was replaced by
> a zero.


This is exactly the point. CAs have an obligation to understand the code
they’re using, regardless of the software platform. The failing is not with
EJBCA, it is with the CAs that have done so. While there are a number of
considerable and profound benefits to using EJBCA - most notably, it seems,
in the dearth of issues the CAs presently reporting have had compared to
those using closed-source platforms or home-grown platforms - the strength
of that platform is not a reasonable mitigation for the CAs not being
thorough.

Every CA, when Ballot 164 was passed, had a clear obligation to review how
they construct serial numbers, from the technical implementation to the
policy configuration. CAs that did so could have placed in a request for
the newly announced functionality, or, given the open source nature,
contributed such a change themselves.

That is guidance that stands regardless of the serial number, and trying to
conflate it as somehow a unique response to this incident only does it a
disservice.

IMO, Mozilla should also treat this as an incident and evaluate the
> specific parameters (strict interpretation of section 7.1, CAs did not
> deliberately violate the requirement, a globally-respected software
> vendor and other experts had a different allowable interpretation of a
> requirement, the security impact on Subscribers and Relying Parties for
> 1 bit of entropy is negligible), and consider treating this incident at
> least as the underscore issue. In the underscore case, there was a SCWG
> ballot with an effective date where CAs had to ultimately revoke all
> certificates that included an underscore.


The response from Wayne in
https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/S2KNbJSJ-hs
already sets the clear expectations, which are and have been consistent
with how we handle incidents. It’s thoroughly unproductive to have this
discussion every time there is an incident, especially when the principles
and goals are quite sound and to the benefit of users. There is nothing
punitive about them, and rather clear guidance about the expectations.

While a point-by-point comparison to underscores could be made, with the
result being a very clear divergence in the material facts, the end
result would still be the consistent application of principles. I don’t
think treating misissuances as popularity contests is indeed productive -
that same logic would have us ignore many of the requirements, given
https://wiki.mozilla.org/CA/Incident_Dashboard .


Dimitris Zacharopoulos

Mar 9, 2019, 2:49:24 PM
to ry...@sleevi.com, mozilla-dev-security-policy


On 9/3/2019 2:37 μ.μ., Ryan Sleevi wrote:
> I’m chiming in, Dimitris, as it sounds like you may have
> unintentionally misrepresented the discussion and positions, and I
> want to provide you, and possibly HARICA, the guidance and clarity it
> needs in this matter.
>
> On Sat, Mar 9, 2019 at 12:46 AM Dimitris Zacharopoulos via
> dev-security-policy <dev-secur...@lists.mozilla.org
> <mailto:dev-secur...@lists.mozilla.org>> wrote:
>
> I am personally shocked that a large part of this community considers
> that now is the time for CAs to demonstrate "agility to replace
> certificates", as lightly as that, without considering the
> significant
> pain that Subscribers will experience having to replace hundreds of
> thousands of certificates around the globe. It is very possible that
> Relying parties will also suffer availability issues.
>
>
> I believe this significantly misunderstands the discussion and
> motivation. Having read all of the discussion to date, I do not
> believe this is at all an accurate framing of the expectations or
> motivations. I would humbly ask that you provide citations to back
> this claim.

I must admit that I may have over-reacted with this one, taking one
particular paragraph from
https://groups.google.com/d/msg/mozilla.dev.security.policy/S2KNbJSJ-hs/HNDX5LaZCAAJ


which made me focus on the word "agility" as a requirement that CAs are
ultimately responsible for building, and the sooner the better. Having
worked with Subscribers that had a very hard time to manually install
certificates in legacy web servers, I am very worried that CAs will have
to repeat these tasks because, in several cases, there are no tools to
assist the automation process.

>
> You are correct if you were to say one or two people have provided
> such a goal, but that’s certainly not consistent with the majority of
> the discussion to date from the root program participants. Indeed, the
> expectation expressed is that, *as with every other incident*, the CA
> consistently follow the expectations.
>
> I highlight this, because I don’t think it’s reasonable to conflate
> existing expectations, which have been repeatedly clarified, as
> somehow motivated by some other motivation based on one or two
> participants’ views.
>
> If you truly feel this way, please revisit the discussion in
> https://groups.google.com/forum/m/#!topic/mozilla.dev.security.policy/S2KNbJSJ-hs
> , as I hope that mine and Wayne’s responses can demonstrate this.
> Judging by that thread, only a single voice has expressed something
> remotely as to how you’ve phrased it.

I stand corrected.

>
> I don't know if others share the same view about the
> interpretation of
> 7.1 but it seems that some highly respected members of this community
> did. If we have to count every CA that had this interpretation,
> then I
> suppose all CAs that were using EJBCA with the default configuration
> have the same interpretation.
>
>
> I believe this also misunderstands the discussion to date, and as a
> consequence misrepresents this. I don’t believe it is reasonable or
> fact based to suggest that the CAs that had incidents necessarily
> shared the interpretation. The incident reports demonstrate that there
> are a myriad reasons, beyond interpretive differences, that one can
> find themselves in such a situation. Avoiding conflating the two is
> necessary, although if you feel it is justified, then I would implore
> you when summarizing others views to support your view, that you
> provide the direct links and references. This makes it easier to
> respond to and provide CAs the necessary clarity of expectations, as
> well as allows other participants to evaluate and judge themselves the
> accuracy of the summary.

I think I provided a link to an issue in the github repository of
cablint where this topic was briefly discussed in the past.

Although I agree with you on that summarizing others views without them
explicitly saying so (my comment for CAs using EJBCA with the default
configuration) is not very objective, I see that over the past years,
more and more CAs avoid participating in m.d.s.p., leaving us with no
choice but to "guess". That is unfortunate. At some point, the issue of
less-and-less CA participation in m.d.s.p. should be discussed.

>
> BTW, the configuration in EJBCA that we are talking about, as the
> default number of bytes, had the number "8" in the setting,
> resulting in
> 64-bits, not 63. So, as far as the CA administrator was concerned,
> this
> setting resulted in using 64 random bits from a CSPRNG. One would
> have
> to see the internal code to determine that the first bit was
> replaced by
> a zero.
>
>
> This is exactly the point. CAs have an obligation to understand the
> code they’re using, regardless of the software platform. The failing
> is not with EJBCA, it is with the CAs that have done so. While there
> are a number of considerable and profound benefits to using EJBCA -
> most notably, it seems, in the dearth of issues the CAs presently
> reporting have had compared to those using closed-source platforms or
> home-grown platforms - the strength of that platform is not a
> reasonable mitigation for the CAs not being thorough.

As others have mentioned [1], building a fully custom system from
scratch seems to bring a lot more risks to the ecosystem and although I
agree with the general expectation you described (that CAs must verify
the software's compliance with all the policies/requirements set by the
BRs and Root store programs), for this particular issue, it seems very
unlikely that a CA could perform this level of code analysis, especially
if they interpreted the BRs as requiring only 64 random bits
from a CSPRNG. Thus, for this particular topic, you might be raising the
bar to an unreachable level.


>
> Every CA, when Ballot 164 was passed, had a clear obligation to review
> how they construct serial numbers, from the technical implementation
> to the policy configuration. CAs that did so could have placed in a
> request for the newly announced functionality, or, given the open
> source nature, contributed such a change themselves.
>

IMO that is the main difference with other cases of mis-issuance. EJBCA
constructed serial numbers according to the policy and the technical
implementation was getting 64 bits of entropy from a CSPRNG. To the best
of my knowledge and after reading the response from PrimeKey and other
community members [2], [3] (to list some), my conclusion is that this
was certainly not a creative interpretation of the requirements.
I believe Root programs have the necessary policy in place to treat
incidents -in exceptional circumstances- on a case-by-case basis. Wayne
had mentioned in a previous post [4] that Mozilla doesn't want to be
responsible for assessing the potential impact, but that statement took
for granted that there was a definite violation of a requirement.

The question I'm having trouble answering, and I would appreciate if
this was answered by the Mozilla CA Certificate Policy Module Owner, is

"does Mozilla treat this finding as a violation of the current language
of section 7.1 of the CA/B Forum Baseline Requirements"?

I believe answering this question would bring some clarity to the
participating CAs.


Thank you,
Dimitris.


[1]
https://groups.google.com/d/msg/mozilla.dev.security.policy/7WuWS_20758/HfyVD5-sCAAJ

[2]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/OVKywVZIBgAJ

[3]
https://groups.google.com/d/msg/mozilla.dev.security.policy/nnLVNfqgz7g/KSaNV9-vAAAJ

[4]
https://groups.google.com/d/msg/mozilla.dev.security.policy/hbo2SCH8c_M/efVpglelBQAJ

Tomas Gustavsson

Mar 9, 2019, 3:11:48 PM
to mozilla-dev-s...@lists.mozilla.org
Hi,

As others have already pointed out, the subject in this thread is incorrect. There are not, and have never been, any 63-bit serial numbers created by EJBCA.

As the specific topic has already been discussed, I just wanted to reference the post [1] with the technical details, in case anyone ends up in this thread without background.

Regards,
Tomas

[1]
https://groups.google.com/forum/#!topic/mozilla.dev.security.policy/nnLVNfqgz7g%5B26-50%5D

Ryan Sleevi

Mar 9, 2019, 3:24:29 PM
to Dimitris Zacharopoulos, mozilla-dev-security-policy, ry...@sleevi.com
On Sat, Mar 9, 2019 at 2:49 PM Dimitris Zacharopoulos <ji...@it.auth.gr>
wrote:

> The question I'm having trouble answering, and I would appreciate if this
> was answered by the Mozilla CA Certificate Policy Module Owner, is
>
> "does Mozilla treat this finding as a violation of the current language of
> section 7.1 of the CA/B Forum Baseline Requirements"?
>

I think for Mozilla, this is best answered by Kathleen, Wayne, the Mozilla
CA Policy Peers, and which I am not.

On behalf of Google and the Chrome Root Authority Program, and consistent
with past discussion in the CA/Browser Forum regarding expectations [1], we
do view this as a violation of the Baseline Requirements. As such, the
providing of incident reports, and the engagement with public discussion of
them, represents the most transparent and acceptable course of action.

Historically, we have found that the concerns around incident reporting
have been best addressed through a single, unified, and transparent
engagement in the community. Much as ct-p...@chromium.org has happily and
intentionally supported collaboration from counterparts at Mozilla and
Apple, Mozilla has historically graciously allowed for the unified
discussion on this mailing list, and the use of their bugtracker for the
purpose of engaging publicly and transparently on incident reports that
affect the Web PKI. Should Mozilla have a different interpretation of the
Baseline Requirements’ expectations on this, we’d seek guidance as to
whether or not the bug tracker and mailing list continue to represent the
best place for discussion of this specific issue, although note that
historically, this has been the case.

This should make it clear that CAs which extracted 64 bits of entropy as an
input to an algorithm that then set the sign bit to positive, potentially
decreasing the entropy to 63 bits, as opposed to
unconditionally guaranteeing that there was a positive integer with _at
least_ 64 bits of entropy, are non-compliant with the BRs and program
expectations, and should file incident reports and include such disclosures
in their reporting by and assertions to auditors.

[1]
https://cabforum.org/pipermail/public/2016-April/007245.html

James Burton

Mar 9, 2019, 4:43:17 PM
to Ryan Sleevi, Dimitris Zacharopoulos, mozilla-dev-security-policy
What concerns me overall in this discussion is the fact that some CAs
thought it was completely acceptable to barely scrape through to meet the
most basic minimum of requirements. I hope these CAs have a better security
posture and are not operating at the minimum.

Thank you,

Burton

Wayne Thayer

Mar 9, 2019, 7:15:50 PM
to Dimitris Zacharopoulos, Ryan Sleevi, mozilla-dev-security-policy
On Sat, Mar 9, 2019 at 12:49 PM Dimitris Zacharopoulos via
dev-security-policy <dev-secur...@lists.mozilla.org> wrote:

>
> The question I'm having trouble answering, and I would appreciate if
> this was answered by the Mozilla CA Certificate Policy Module Owner, is
>
> "does Mozilla treat this finding as a violation of the current language
> of section 7.1 of the CA/B Forum Baseline Requirements"?
>
>
Speaking as the CA Certificate Policy Module Owner, and being aware of the
discussions that led to the current wording, I believe the intent of the BR
language is for serial numbers to contain 64-bits of entropy. I certainly
agree that the language could be improved, but I think the meaning is clear
enough and yes I do expect CAs to treat serial numbers that do not actually
consist of 64-bits of entropy as a BR and a Mozilla policy section 5.2
violation.

> I believe answering this question would bring some clarity to the
> participating CAs.

Thank you for pointing this out, Dimitris. While it seems obvious to me, I
can understand if there is some uncertainty resulting from the opposing
arguments.

- Wayne

Daymion Reynolds

Mar 11, 2019, 11:24:43 AM
to mozilla-dev-s...@lists.mozilla.org
When it comes to entropy, how does the industry feel about leading zeros? There have been a few online and offline discussions around the requirement for the most significant bit to be set to one (1).

For example:
10000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 which results in an integer of 9223372036854775808.

In my opinion, to achieve a full 64 bits of serial number entropy we should not fix any of the bits. What are the thoughts on this?

The following value also has 64 bits but does not have the most significant bit set, yet seems to meet the section 7.1 Baseline Requirements.

00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 which results in an integer of 72057594037927936.

In both cases the certificate field header lists the length of the serial number as 64 bits.

For GoDaddy, we will be moving to 128bit serial numbers to resolve this permanently.

Ryan Sleevi

Mar 11, 2019, 11:57:27 AM
to Daymion Reynolds, mozilla-dev-s...@lists.mozilla.org
I don’t think there’s anything inherently wrong with an approach that uses
a fixed prefix, whether of one bit or more, provided that there is at least
64 bits of entropy included in the serial prior to encoding to DER.

This means a scheme which guarantees a positive INTEGER will generate
*encoded* serials in the range of one bit to sixty-five bits, if the goal
is to use the smallest possible amount of entropy.

However, as you note, this issue only arises when one uses the absolute
minimum. A robust solution is to use 159 bits, the maximum allowed. This
helps ensure that, even when encoded, it will not exceed 20 bytes, thus
avoiding any client interpretation issues regarding whether the 20 bytes
mentioned in 5280 are pre-encoding (the intent) or post-encoding (as a few
embedded libraries implemented).

Note, however, even with 159 bits of entropy, it’s still possible to have a
compressed encoding of one byte, due to zero folding. Using a one bit
prefix in addition to the sign bit (thus, two fixed bits in the serial) can
help ensure that a leading run of zero bits are not folded when encoding.
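A minimal sketch of that robust construction, assuming Python's secrets module as the CSPRNG: the top two bits are fixed to 01 (the zero sign bit plus a one-bit guard prefix), trading one bit of entropy for a constant-length encoding:

```python
import secrets

# Sketch of the robust construction described above.  Fixing the top two
# bits to 01 means zero folding can never shorten the encoding, and the
# remaining 158 bits come straight from the CSPRNG.
def new_serial() -> int:
    return (1 << 158) | secrets.randbits(158)

s = new_serial()
assert s.bit_length() == 159       # the guard bit pins the length
der_len = s.bit_length() // 8 + 1  # content octets of a positive DER INTEGER
assert der_len == 20               # always exactly 20 bytes, never 21
```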

Daymion Reynolds

Mar 11, 2019, 1:00:22 PM
to mozilla-dev-s...@lists.mozilla.org
Glad you agree 64bit serial numbers can have no fixed bits, as a fixed bit in a 64 bit serial number would result in less than 64 bits of entropy. If you are going to fix a significant bit it must be beyond the 64th bit. If your 64 bit serial number does not contain 1's in the significant byte, as long as you still write 64 full bits of data to the cert with 0's left padded, then the desired entropy is achieved and is valid. CAs should keep this in mind while building their revocation lists.

Peter Bowen

Mar 11, 2019, 4:32:09 PM
to Daymion Reynolds, mozilla-dev-s...@lists.mozilla.org
On Mon, Mar 11, 2019 at 10:00 AM Daymion Reynolds via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> Glad you agree 64bit serial numbers can have no fixed bits, as a fixed bit
> in a 64 bit serial number would result in less than 64 bits of entropy. If
> you are going to fix a significant bit it must be beyond the 64th bit. If
> your 64 bit serial number does not contain 1's in the significant byte, as
> long as you still write 64 full bits of data to the cert with 0's left
> padded, then the desired entropy is achieved and is valid. CAs should keep
> this in mind while building their revocation lists.
>

You can't left pad with zeros in DER. DER requires that the maximum
padding is a leading zero to set the sign, otherwise no leading zeros.

You could go for more than a 64-bit serial length and set the upper bits to
01 to avoid the issue, so the most significant byte is between 64 and 127
inclusive. You would need at least 9 octets for the serial number, but
this is no more than what you have 50% of the time now.
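The DER rule described here can be sketched in a few lines of Python (an illustrative helper, not production code):

```python
# Illustrative helper for the DER rule above: a non-negative INTEGER is
# encoded in the fewest big-endian octets, with a single 0x00 pad byte
# added only when the high bit would otherwise read as a negative sign.
def der_integer_content(n: int) -> bytes:
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:
        body = b"\x00" + body
    return body

# The two example values from earlier in the thread:
assert der_integer_content(2**63) == b"\x00\x80" + b"\x00" * 7  # 9 octets, padded
assert der_integer_content(2**56) == b"\x01" + b"\x00" * 7      # 8 octets, no pad
```

Any additional left padding with zeros would make the encoding non-minimal and therefore invalid DER.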

Thanks,
Peter

Jaime Hablutzel

Mar 13, 2019, 10:40:34 PM
to mozilla-dev-s...@lists.mozilla.org
> The rule as written requires that the output bits have come from a CSPRNG. But it doesn't say that they have to come from a single invocation of a CSPRNG or that they have to be collected as a contiguous bit stream from the CSPRNG with no bits of output from the CSPRNG discarded and replaced by further invocation of the CSPRNG.

This reasoning has the potential to decrease the security that is provided by a requirement for a given minimum entropy and I'll try to illustrate my point better with the following fictional situation where the requirement would be something like this:

> ... CAs SHALL generate non-sequential Certificate serial numbers greater than zero (0) containing at least 8 bits of output from a CSPRNG.

So we think we can comply by generating serial numbers with a fixed size of exactly 1 byte, as the requirement asks for 8 bits.

Then we start generating serial number candidates, but we need to perform some filtering:

1. First, as we want to produce one-byte, constant-length, positive serial numbers, we filter out values where the high-order bit is 1, and we are left with only 128 possible values.
2. Then, we filter out the 0 value and now we have 127 possible values to choose from.
3. Finally, we have to discard serial numbers assigned to previously issued certificates and let's say we've issued 126 certificates previously, so now we're left with only one possible serial number to choose from.

And there it is, full predictability for the next serial number to be generated.

Now, this is just an example but my point is that the interpretation that allowed for one byte fixed size serial numbers was a clear mistake in the context of this fictional requirement.

Nevertheless, in real life we would be reducing 64 bits by just a little (e.g. to 63 bits), but anyway, the security is being reduced, maybe not enough to allow for a real attack... but there is a reduction.

Finally, as I see it, CAs should elaborate their serial number generation strategy guaranteeing that generated serial numbers, now and in the future (after issuing many quadrillions of certificates), will always contain at least 64 bits of unfiltered entropy.
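The collapse in steps 1-3 of the fictional example can be sketched directly (illustrative only):

```python
# Sketch of the fictional 8-bit requirement above: enumerate the 256
# possible one-byte serials and apply the filters the post describes.
candidates = set(range(256))
candidates = {v for v in candidates if not v & 0x80}  # 1. positive, one byte: 128 left
candidates.discard(0)                                 # 2. non-zero: 127 left
already_issued = set(sorted(candidates)[:126])        # 3. 126 already used
candidates -= already_issued
assert len(candidates) == 1   # the next serial is fully predictable
```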

Jaime Hablutzel

Mar 14, 2019, 12:07:49 AM
to mozilla-dev-s...@lists.mozilla.org
> I believe Root programs have the necessary policy in place to treat
> incidents -in exceptional circumstances- on a case-by-case basis. Wayne
> had mentioned in a previous post [4] that Mozilla doesn't want to be
> responsible for assessing the potential impact, but that statement took
> for granted that there was a definite violation of a requirement.

It looks like it would be useful to have this exception-handling procedure in place, especially for situations like the current one, with low security impact but a high potential for producing service disruption everywhere.

Is Mozilla considering introducing a procedure to handle exceptions?