
Machine- and human-readable format for root store information?


Gervase Markham

Jun 26, 2017, 5:42:15 PM
to mozilla-dev-s...@lists.mozilla.org
A few root store operators at the recent CAB Forum meeting informally
discussed the idea of a common format for root store information, and
that this would be a good idea. More and more innovative services find
it useful to download and consume trust store data from multiple
parties, and at the moment there are various hacky solutions and
conversion scripts in use.

Apple are already moving to publish their trust store in
machine-readable form (at the moment, the most machine-readable version
is in their open source repo, and that's often out of date). I'm not
sure what format they are planning, but it may not be too late to sell
them on something common. We currently have certdata.txt, which is
perhaps not ideal as a format; if we moved to something better, we could
always generate certdata.txt from that for those who still needed that form.

I'm told there are a couple of formats out there, including one in XML
(urk). But it would be nice to have something which was both machine and
human readable and writeable; in the age where the bar is set by JSON,
I'm not sure XML counts as that any more.

The trouble is, I'm not sure anyone in those conversations was also
musing about how much free time they had for such work. Is anyone here
interested in taking on the task of gathering requirements and editing a
spec for an (e.g.) JSON root store format?
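
For illustration, a single entry in such a JSON format might look roughly
like the sketch below (every field name is invented for the example, not a
proposed schema):

    import json

    # Hypothetical root store entry; all field names are illustrative only.
    entry = {
        "label": "Example Root CA",
        "sha256Fingerprint": "9acf...e1b2",
        "trustBits": ["serverAuth", "emailProtection"],
        "ev": False,
        "pem": "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----",
    }

    print(json.dumps({"version": 1, "roots": [entry]}, indent=2))

Something in that spirit would stay editable in a text editor while being
trivially parseable.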

Gerv

Moudrick M. Dadashov

Jun 26, 2017, 5:53:53 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
Hi Gerv,

FYI: ETSI TS 119 612 V2.2.1 (2016-04), Electronic Signatures and
Infrastructures (ESI); Trusted Lists
http://www.etsi.org/deliver/etsi_ts/119600_119699/119612/02.02.01_60/ts_119612v020201p.pdf

Thanks,
M.D.

Ryan Sleevi

Jun 26, 2017, 8:37:36 PM
to Gervase Markham, mozilla-dev-security-policy
On Mon, Jun 26, 2017 at 9:50 AM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> A few root store operators at the recent CAB Forum meeting informally
> discussed the idea of a common format for root store information, and
> that this would be a good idea. More and more innovative services find
> it useful to download and consume trust store data from multiple
> parties, and at the moment there are various hacky solutions and
> conversion scripts in use.
>

Gerv,

Do you anticipate this being used to build trust decisions in other
products, or simply inform what CAs are trusted (roughly)?

My understanding from the discussions is that this is targeted at the
latter - that is, informative, and not to be used for trust decision
capability - rather than being a full expression of the policies and
capabilities of the root store.

The reason I raise this is that you quickly get into the problem of
inventing a domain-specific language (or vendor-extensible, aka
'non-format') if you're trying to express what the root store does or what
constraints it applies. And that seems a significant amount of work, for
what is an unclear consumption / use case.

I'm hoping you can clarify with the concrete intended users you see Mozilla
wanting to support, and if you could share what the feedback these other
store providers offered.

FWIW, Microsoft's (non-JSON, non-XML) machine readable format is
http://unmitigatedrisk.com/?p=259

Rob Stradling

Jun 27, 2017, 7:16:44 AM
to Gervase Markham, ry...@sleevi.com, mozilla-dev-security-policy
Hi Gerv. crt.sh consumes the various trust store data, so I may be
interested in helping to write a spec. However, it depends very much on
how the end product would be used.

If the aim is to replace certdata.txt, authroot.stl, etc, with this new
format, then I'm more interested.

If the aim is to offer yet another mechanism for obtaining trust store
data (which may fall out of sync with the "official" data), then I'm
less interested.

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Ivan Ristic

Jun 27, 2017, 9:14:25 AM
to Rob Stradling, ry...@sleevi.com, mozilla-dev-security-policy, Gervase Markham
I concur with Rob. If this is something the root stores might officially
adopt, then I'd be willing to help with the work. I think it would be
useful for the ecosystem to make it easier to understand the root stores'
contents; it's a lot of work to do otherwise.

For some background, for Hardenize (a free comprehensive security testing
tool I am now building), we extracted the roots from a bunch of stores and
we try to determine if a particular leaf would be trusted. You can see it
here (some recent stores are missing), bottom right:
https://www.hardenize.com/report/hardenize.com#www_certs



--
Ivan

Gervase Markham

Jun 27, 2017, 9:58:18 AM
to ry...@sleevi.com
On 26/06/17 17:36, Ryan Sleevi wrote:
> Do you anticipate this being used to build trust decisions in other
> products, or simply inform what CAs are trusted (roughly)?

I don't have strong opinions about what people use the data for; I would
hope it would be usable for either purpose. After all, people use
certdata.txt for the latter purpose even though
https://wiki.mozilla.org/CA/Additional_Trust_Changes
exists...

> My understanding from the discussions is that this is targeted at the
> latter - that is, informative, and not to be used for trust decision
> capability - rather than being a full expression of the policies and
> capabilities of the root store.

I want it to be at least as capable as certdata.txt; I agree with the
issues raised in previous discussions about a domain-specific language,
and I don't want to go down the route of attempting something which can
specify arbitrarily-complex restrictions. But it could certainly have
reasonably simple mods like "only trusted for certs issued before date
X", or "name constrained in this way".

Gerv

Gervase Markham

Jun 27, 2017, 9:59:34 AM
to Rob Stradling
On 27/06/17 04:16, Rob Stradling wrote:
> If the aim is to replace certdata.txt, authroot.stl, etc, with this new
> format, then I'm more interested.

I can't speak for other providers, but if such a spec existed, I would
be pushing for Mozilla to maintain our root store in that format, and
auto-generate certdata.txt (and perhaps ExtendedValidation.cpp) on
checkin for legacy uses.

Gerv

Ryan Sleevi

Jun 27, 2017, 1:36:50 PM
to Gervase Markham, Rob Stradling, mozilla-dev-s...@lists.mozilla.org
On Tue, Jun 27, 2017 at 9:58 AM Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 27/06/17 04:16, Rob Stradling wrote:
> > If the aim is to replace certdata.txt, authroot.stl, etc, with this new
> > format, then I'm more interested.
>
> I can't speak for other providers, but if such a spec existed, I would
> be pushing for Mozilla to maintain our root store in that format, and
> auto-generate certdata.txt (and perhaps ExtendedValidation.cpp) on
> checkin for legacy uses.
>


If that is the goal, it may be useful to know what the proposed limitations
/ dependencies are. For example, the translation of the txt to the c file
generated non-trivial concern among the NSS development team to support.

For example, one possible suggestion is to adopt a scheme similar to, or
identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
for indicating age and expiration, and the ability to extend with
vendor-specific attributes as needed. One perspective would be to say that
Mozilla should just use this work.

However, the NSS developer would rightfully point out the complexity
involved in this - such as what language or tools should be used to
translate this form into the native code. Perl or Python (part of MozBuild)
may be acceptable to the Mozilla developer, but challenging for the NSS
developer. A native tool integrated into the build system (as signtool is
for updating the chk tool) presents a whole host of challenges for
cross-compilers.

Yet if the goal is cross-vendor compatibility, one can argue that is the
best approach, as it changes the number of vendors implementing it to 2,
from the present 1, and thus achieves that goal. As you introduce Apple,
which has historically been a non-participant here, it makes it hard to
design a system acceptable to them. Further, one could
reasonably argue that an authroot.stl approach would trouble Apple, much as
other non-SDO driven efforts have, due to IP concerns in the space.
Presumably, such collaboration would need to occur somewhere with
appropriate IP protections.

These criticisms are not meant to suggest I disagree with your goal, merely
that it seems there would be a number of challenges in achieving your goal
that discussion on m.d.s.p. would not resolve. The way to address these
challenges seems to involve getting firm commitments and collaboration with
other vendors (since that is your primary goal), as well as to explore the
constraints and limits of the NSS (and related) build systems, since the
combination of those two factors will determine whether this is just
another complex transition (as changing certdata.c to be generated and not
checked in was) with limited applicability.


> Gerv

Gervase Markham

Jun 27, 2017, 1:49:40 PM
to mozilla-dev-s...@lists.mozilla.org
On 27/06/17 10:35, Ryan Sleevi wrote:
> If that is the goal, it may be useful to know what the proposed limitations
> / dependencies are. For example, the translation of the txt to the c file
> generated non-trivial concern among the NSS development team to support.

I propose it be part of the checkin process (using a CI tool or similar)
rather than part of the build process. Therefore, there would be no new
build-time dependencies for NSS developers.

> For example, one possible suggestion is to adopt a scheme similar to, or
> identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
> for indicating age and expiration, and the ability to extend with
> vendor-specific attributes as needed. One perspective would be to say that
> Mozilla should just use this work.

That's one option. I would prefer something which is both human and
computer-readable, as certdata.txt (just about) is.

> Yet if the goal is cross-vendor compatibility, one can argue that is the
> best approach, as it changes the number of vendors implementing it to 2,
> from the present 1, and thus achieves that goal. As you introduce Apple,
> which has historically been a non-participant here, it makes it hard to
> design a system acceptable to them.

Apple suggested they'd like to make this data available; my hope would
be that if a format could be defined, they might be persuaded to adopt it.

> Further, one could
> reasonably argue that an authroot.stl approach would trouble Apple, much as
> other non-SDO driven efforts have, due to IP concerns in the space.
> Presumably, such collaboration would need to occur somewhere with
> appropriate IP protections.

Like, really? Developing a set of JSON name-value pairs to encode some
fairly simple structured data has potential IP issues? What kind of mad
world do we live in?

> These criticisms are not meant to suggest I disagree with your goal, merely
> that it seems there would be a number of challenges in achieving your goal
> that discussion on m.d.s.p. would not resolve. The way to address these
> challenges seems to involve getting firm commitments and collaboration with
> other vendors (since that is your primary goal),

Well, if there was some chance of someone taking on the work - which
perhaps there seems to be, based on other replies - then that would be a
good next step. But there's no point in me having those discussions if
there's no-one willing to do the work. Hence my original question.

Gerv

Jakob Bohm

Jun 27, 2017, 2:31:11 PM
to mozilla-dev-s...@lists.mozilla.org
On 27/06/2017 19:49, Gervase Markham wrote:
> On 27/06/17 10:35, Ryan Sleevi wrote:
> ...
>> Further, one could
>> reasonably argue that an authroot.stl approach would trouble Apple, much as
>> other non-SDO driven efforts have, due to IP concerns in the space.
>> Presumably, such collaboration would need to occur somewhere with
>> appropriate IP protections.
>
> Like, really? Developing a set of JSON name-value pairs to encode some
> fairly simple structured data has potential IP issues? What kind of mad
> world do we live in?
>

I think he was referring to possible IP concerns with reusing
Microsoft's ASN.1 based format.

P.S.

Note that Microsoft has two variants of their "Certificate Trust List"
format:

One that actually includes all the trusted certificates, which is more
useful as inspiration for this effort.

Another that contains only metadata, but not the actual certs. This is
the one loosely described at http://unmitigatedrisk.com/?p=259



Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Ryan Sleevi

Jun 27, 2017, 3:18:25 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Tue, Jun 27, 2017 at 1:49 PM Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 27/06/17 10:35, Ryan Sleevi wrote:
> > If that is the goal, it may be useful to know what the proposed
> limitations
> > / dependencies are. For example, the translation of the txt to the c file
> > generated non-trivial concern among the NSS development team to support.
>
> I propose it be part of the checkin process (using a CI tool or similar)
> rather than part of the build process. Therefore, there would be no new
> build-time dependencies for NSS developers.


This was something the NSS developers explicitly moved away from with
respect to certdata.c

> For example, one possible suggestion is to adopt a scheme similar to, or
> > identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
> > for indicating age and expiration, and the ability to extend with
> > vendor-specific attributes as needed. One perspective would be to say
> that
> > Mozilla should just use this work.
>
> That's one option. I would prefer something which is both human and
> computer-readable, as certdata.txt (just about) is.



Why? Opinions without justification aren't as useful ;)

(To be fair, this is broadly about articulating and agreeing use cases
before too much effort is spent)

Apple suggested they'd like to make this data available; my hope would
> be that if a format could be defined, they might be persuaded to adopt it.



And if they can't, is that justified?

That is, it sounds like you're less concerned about cross-vendor
interoperability, and only concerned with Apple interoperability. Is that
correct?

> Further, one could
> > reasonably argue that an authroot.stl approach would trouble Apple, much
> as
> > other non-SDO driven efforts have, due to IP concerns in the space.
> > Presumably, such collaboration would need to occur somewhere with
> > appropriate IP protections.
>
> Like, really? Developing a set of JSON name-value pairs to encode some
> fairly simple structured data has potential IP issues? What kind of mad
> world do we live in?


It doesn't matter the format - it matters how and where it was developed.

Jakob Bohm

Jun 27, 2017, 3:23:10 PM
to mozilla-dev-s...@lists.mozilla.org
On 26/06/2017 23:53, Moudrick M. Dadashov wrote:
> Hi Gerv,
>
> FYI: ETSI TS 119 612 V2.2.1 (2016-04), Electronic Signatures and
> Infrastructures (ESI); Trusted Lists
> http://www.etsi.org/deliver/etsi_ts/119600_119699/119612/02.02.01_60/ts_119612v020201p.pdf
>

Having skimmed through this document, I find that particular format
unsuited for general use, due to the following issues:

- Excessive inclusion of information duplicated from the certificates
themselves.
- Complete repetition of all information for any root that is trusted
for multiple purposes.
- The use of long ETSI/EU-specific uris to specify simply things such as
"trusted"/"not trusted".
- Apparent lack of syntax for specifying scopes that are global but do
not represent a global authority (such as the UN).

- A notable lack of fields to represent the trust data that real world
commercial root programs typically need to specify for trusted CA
certs.
- The apparent need to go through ETSI-specific registration procedures
to add "extensions" and/or "identifiers" for anything missing.
- Mandatory provision of snail-mail technical support.

- EU-specific oddities, such as alternative identifiers for some EU
member states.

That said, it could provide some inspiration.

Gervase Markham

Jun 27, 2017, 3:53:22 PM
to Ryan Sleevi
On 27/06/17 12:17, Ryan Sleevi wrote:
> This was something the NSS developers explicitly moved away from with
> respect to certdata.c

It would be interesting to know the history of that; but we are in a
different place now in terms of the SCM system we use and the CI tools
available, versus what we were a few years ago.

If you were able to elaborate on the relevant history here, as you
obviously know it, that would be helpful.

>> That's one option. I would prefer something which is both human and
>> computer-readable, as certdata.txt (just about) is.
>
> Why? Opinions without justification aren't as useful ;)

:-) Because human-readable only is clearly silly, and computer-readable
only is harder to maintain (requires tools other than a text editor). I
want it to be easily maintainable, easily browseable and also
unambiguously consumable by tools.

> Apple suggested they'd like to make this data available; my hope would
>> be that if a format could be defined, they might be persuaded to adopt it.
>
> And if they can't, is that justified?
>
> That is, it sounds like you're less concerned about cross-vendor
> interoperability, and only concerned with Apple interoperability. Is that
> correct?

I'm after interoperability with whoever wants to interoperate. The other
benefits I see for Mozilla are being able to better (if not perfectly)
express our root store's opinions on our level of trust for roots in a
single computer-readable file, rather than the combination of a text
file, a C++ file and a wiki page.

Given that the plan is to auto-generate the old formats when necessary,
I didn't think that maintaining the data in a different format would
cause anyone significant difficulty or hardship.

>> Like, really? Developing a set of JSON name-value pairs to encode some
>> fairly simple structured data has potential IP issues? What kind of mad
>> world do we live in?
>
> It doesn't matter the format - it matters how and where it was developed.

As in, if I just make it up and start using it, people will be scared
I'm going to sue them over its use?

Gerv

Jos Purvis (jopurvis)

Jun 27, 2017, 3:54:33 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 2017-Jun-27, 13:49 , "dev-security-policy on behalf of Gervase Markham via dev-security-policy" wrote:

On 27/06/17 10:35, Ryan Sleevi wrote:
> For example, one possible suggestion is to adopt a scheme similar to, or
> identical to, Microsoft's authroot.stl, which is PKCS#7, with attributes
> for indicating age and expiration, and the ability to extend with
> vendor-specific attributes as needed. One perspective would be to say that
> Mozilla should just use this work.

That's one option. I would prefer something which is both human and
computer-readable, as certdata.txt (just about) is.

One possibility would be to look at the Trust Anchor Management Protocol (TAMP - RFC5934). It uses CMS, which would give you the flexibility to define usages and signed attributes, but it might not land well in terms of human readability, I don’t know. Ryan Hurst over at Google pointed us in that direction and mentioned he was looking at that for his tl-create tool (https://github.com/PeculiarVentures/tl-create), so it might be worth a look. An open standard like that might also allay concerns over something more proprietary like STL.


--
Jos Purvis (jopu...@cisco.com)
.:|:.:|:. cisco systems | Cryptographic Services
PGP: 0xFD802FEE07D19105


Peter Gutmann

Jun 27, 2017, 10:44:05 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org, Jos Purvis (jopurvis)
Jos Purvis (jopurvis) via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>One possibility would be to look at the Trust Anchor Management Protocol
>(TAMP - RFC5934).

Note that TAMP is one of PKIX' many, many gedanken experiments that were
created with little, most likely no, real-world evaluation before it was
declared ready. It may or may not actually work, and may or may not (and
looking at its incredible complexity and flexibility, almost certainly "may
not") interoperate with any other implementation that turns up. So you'd need
to write a second spec which is a profile of TAMP that nails down what's
expected by an implementation, and then run interop tests to see whether it
works at all.

(In case you're wondering why the CMP protocol, another PKIX cert management
protocol that in theory already does what TAMP does, starts at version 2, it's
because when attempts were made to deploy the initial spec it was found that
it didn't work, so they had to create a "version 2" that tried to patch up the
published standard. Even then, try finding two CMP implementations that can
interop out of the box...).

Peter.

Ryan Sleevi

Jun 28, 2017, 9:38:43 AM
to Gervase Markham, Ryan Sleevi, mozilla-dev-security-policy
On Tue, Jun 27, 2017 at 3:52 PM, Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 27/06/17 12:17, Ryan Sleevi wrote:
> > This was something the NSS developers explicitly moved away from with
> > respect to certdata.c
>
> It would be interesting to know the history of that; but we are in a
> different place now in terms of the SCM system we use and the CI tools
> available, versus what we were a few years ago.
>

Not really, at least from the NSS perspective. There's been the CVS ->
Mercurial -> Git(ish) transitions, but otherwise, the tools and
dependencies have largely remained the same.


> If you were able to elaborate on the relevant history here, as you
> obviously know it, that would be helpful.
>

Well, the obvious issue remains cross-compiling and what dependencies are
acceptable. So the minimal set was - in order to maintain compatibility
with NSS consumers like Red Hat and Oracle - the set of tools already
integrated into the build system. Even the transition to GTests has not
been without controversy, and not all GTests are run by all NSS consumers,
due to the dependency on (modern) C++.

I highlight this because the "Mozilla build environment" is not necessarily
aligned with the "NSS build environment".


> >> That's one option. I would prefer something which is both human and
> >> computer-readable, as certdata.txt (just about) is.
> >
> > Why? Opinions without justification aren't as useful ;)
>
> :-) Because human-readable only is clearly silly, and computer-readable
> only is harder to maintain (requires tools other than a text editor). I
> want it to be easily maintainable, easily browseable and also
> unambiguously consumable by tools.
>

Put differently: If a human-readable version could be generated from a
machine-readable file, is the objective met or not?

You've put a very particular constraint here (both human and machine
readable), which is very much a subjective question (as to whether it's
human readable), and which arguably can be produced from anything that
meets a machine readable format.

For example, you highlight that computer-readable only requires other tools
to maintain, but that's not intrinsically true (you can have
machine-readable text files, for example), and in any case you're just
shifting the tooling concern from "NSS maintainers" to "NSS consumers"
(which is worth calling out here; it's increasing the scale and scope of
impact).

I can understand the preference, but I'm trying to suss out what the actual
hard requirements and goals are: as exciting as the prospect is, it requires
work not only to define said schema but also to integrate it, and I want to
understand what the long-term payout is.


> > Apple suggested they'd like to make this data available; my hope would
> >> be that if a format could be defined, they might be persuaded to adopt
> it.
> >
> > And if they can't, is that justified?
> >
> > That is, it sounds like you're less concerned about cross-vendor
> > interoperability, and only concerned with Apple interoperability. Is that
> > correct?
>
> I'm after interoperability with whoever wants to interoperate.


That doesn't really helpfully answer the question, but apologies for not
making it explicit.

You've proposed solutions and goals that appear to align with "We want
Apple to use our format", and are explicitly rejecting "We will
interoperate with Microsoft using their format", while presenting it as "We
want interoperability"

1) Is it correct that you value Apple interoperability (because they've
privately expressed some interest, or because you hope to convince them to
adopt it, given their public statements)
2) Is it correct that you do not value Microsoft interoperability (because
you're explicitly defining criteria that would reject that interoperability)
3) If neither party arrives at an interoperable solution, are your goals
met and is the work justified?


> The other
> benefits I see for Mozilla are being able to better (if not perfectly)
> express our root store's opinions on our level of trust for roots in a
> single computer-readable file, rather than the combination of a text
> file, a C++ file and a wiki page.
>

Well, regardless, you need the C file, unless you're also supposing that
NSS directly consume the computer-readable file (adding both performance
and security implications).

The wiki page you mention is already automatically generated (by virtue of
Salesforce), and you're certainly not eliminating that burden of
maintenance, so it seems like you still have three files - the 'source in
tree', the generated code, and the Salesforce-driven output. Can you
explain to me the benefit there?


> Given that the plan is to auto-generate the old formats when necessary,
> I didn't think that maintaining the data in a different format would
> cause anyone significant difficulty or hardship.
>

Transitioning to txt caused hardship for downstream NSS consumers, some of
which had to integrate manually regenerating the C file as part of their
workflows (of which, Google was one). Was that justified by the transition?


> >> Like, really? Developing a set of JSON name-value pairs to encode some
> >> fairly simple structured data has potential IP issues? What kind of mad
> >> world do we live in?
> >
> > It doesn't matter the format - it matters how and where it was developed.
>
> As in, if I just make it up and start using it, people will be scared
> I'm going to sue them over its use?


Yes. This is why, for example, certain large vendors working in the
CA/Browser Forum found it necessary to establish the IP policy.

Certainly, having worked with Microsoft and Apple in a variety of fora, and
been indirect and incidental party to a number of the discussions related
to choice of venue, I feel confident saying that, whether immediately or
eventually, the need for a strong IP commitment will certainly be necessary
once the lawyers are aware. It's not for lack of interest that certain
members do not publicly comment in certain venues - it's part of the
company policy.

If your goal is to produce a standard - and not just a library that
implements said code (and which would be directly integrated into these
vendors products) - you should seriously consider reaching out to these
parties - and their counsel - and determine an appropriate venue. Even
something like WICG may be able to satisfy the various needs, but I have
incredible difficulty imagining "throw it up on GitHub" would suffice, as
would "hash it out in m.d.s.p"

Gervase Markham

Jun 28, 2017, 5:29:19 PM
to ry...@sleevi.com
On 28/06/17 06:38, Ryan Sleevi wrote:
> Not really, at least from the NSS perspective. There's been the CVS ->
> Mercurial -> Git(ish) transitions, but otherwise, the tools and
> dependencies have largely remained the same.

Well, the fact that we now use Git, I suspect, means anyone could plug
in a modern CI tool that did "Oh, you changed file X. Let me regenerate
file Y and check it in alongside". Without really needing anyone's
permission beyond checkin access.
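
As a rough sketch of that pattern (everything here is hypothetical - the
file names, the conversion helper, and the commit step - and it does not
reflect any actual NSS or Mozilla infrastructure):

    #!/usr/bin/env python3
    """Regenerate certdata.txt whenever a hypothetical rootstore.json changes."""
    import json
    import subprocess

    def generate_certdata(store):
        # Placeholder conversion; a real tool would emit the PKCS#11
        # attribute blocks that certdata.txt actually consists of.
        lines = ["BEGINDATA"]
        for root in store["roots"]:
            lines.append("# " + root["label"])
        return "\n".join(lines) + "\n"

    with open("rootstore.json") as f:
        store = json.load(f)

    with open("certdata.txt", "w") as f:
        f.write(generate_certdata(store))

    subprocess.run(["git", "add", "certdata.txt"], check=True)
    subprocess.run(
        ["git", "commit", "-m", "Regenerate certdata.txt from rootstore.json"],
        check=True,
    )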

> Put differently: If a human-readable version could be generated from a
> machine-readable file, is the objective met or not?

Well, I don't do the actual maintenance of certdata.txt, but I assume
(perhaps without evidence) that telling whoever does that "hey, you now
need to use this tool to edit the canonical information store, instead
of the text editor you have been using" might not go down well. It
wouldn't if it were me.

> For example, you highlight that computer-readable only requires other tools
> to maintain, but that's not intrinsically true (you can have
> machine-readable text files, for example), and in any case you're just
> shifting the tooling concern from "NSS maintainers" to "NSS consumers"
> (which is worth calling out here; it's increasing the scale and scope of
> impact).

No, because NSS consumers could choose to continue consuming the
(autogenerated by the CI tool) certdata.txt.

> You've proposed solutions and goals that appear to align with "We want
> Apple to use our format", and are explicitly rejecting "We will
> interoperate with Microsoft using their format", while presenting it as "We
> want interoperability"

You want me to rank my goals in order of preference? :-)

If Apple said "we are happy to use the MS format", I guess the next
thing I would do is find Kai or whoever maintains certdata.txt and say
"hey, it's not ideal, but what do you think, for the sake of everyone
using the same thing?".

> 3) If neither party arrives at an interoperable solution, are your goals
> met and is the work justified?

It's not a massive improvement if we are the only group using it. I
think there is value to Mozilla even if MS and Apple don't get on board,
because our root store gets more descriptive of reality, but that value
alone might not be enough to convince someone like the two people who
have expressed interest thusfar to take the time to work on the spec. I
don't know.

> Well, regardless, you need the C file, unless you're also supposing that
> NSS directly consume the computer-readable file (adding both performance
> and security implications).

The C file I meant was ExtendedValidation.cpp.

> The wiki page you mention is already automatically generated (by virtue of
> Salesforce),

No. The wiki page I meant was
https://wiki.mozilla.org/CA/Additional_Trust_Changes . Sorry for not
being clear on this.

Mozilla's opinions on roots are defined by the sum total of:

1) certdata.txt
2) ExtendedValidation.cpp
3) The changes listed on
https://wiki.mozilla.org/CA/Additional_Trust_Changes

It's these 3 files I'm hoping could be combined (almost totally or
totally) into one machine-readable store.

Gerv

Ryan Sleevi

Jun 28, 2017, 6:09:10 PM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, June 28, 2017 at 5:29:19 PM UTC-4, Gervase Markham wrote:
> Well, the fact that we now use Git, I suspect, means anyone could plug
> in a modern CI tool that did "Oh, you changed file X. Let me regenerate
> file Y and check it in alongside". Without really needing anyone's
> permission beyond checkin access.

I don't believe the state of NSS infrastructure is well-placed to support that claim. I'd be curious for Kai's/Red Hat's feedback.

> Well, I don't do the actual maintenance of certdata.txt, but I assume
> (perhaps without evidence) that telling whoever does that "hey, you now
> need to use this tool to edit the canonical information store, instead
> of the text editor you have been using" might not go down well. It
> wouldn't if it were me.

It already (effectively) requires a tool to make sure it's done right, AIUI :)

But I think you're still conflating "text" vs "human readable", and I'm not sure that they represent equivalents. That is, "human readable" introduces a subjective element that can easily lead to ratholes about whether or not something is "readable enough", or coming up with sufficient ontologies so that it can "logically map" - just look at XML for the case study in this.

You can have a JSON file, but that doesn't mean it's human-readable in the least.

That's why I'm pushing very hard on that.

> No, because NSS consumers could choose to continue consuming the
> (autogenerated by the CI tool) certdata.txt.

The CI tools don't check in artifacts. You're proposing giving some piece of infrastructure the access to generate and check in files? I believe Mozilla may do that, but NSS does not, and the infrastructure is separately maintained.

> You want me to rank my goals in order of preference? :-)

Moreso be more explicit in the goals. It's trying to figure out how 'much' interoperability is being targeted here :)

> If Apple said "we are happy to use the MS format", I guess the next
> thing I would do is find Kai or whoever maintains certdata.txt and say
> "hey, it's not ideal, but what do you think, for the sake of everyone
> using the same thing?".

Thought experiment: Why not have certdata.txt generate a CI artifact that interoperates for other consumers to use?

Which is all still a facet of the original question: Trying to determine what your goals are / what the 'necessary' vs 'nice to have' features are :)

> It's not a massive improvement if we are the only group using it. I
> think there is value to Mozilla even if MS and Apple don't get on board,
> because our root store gets more descriptive of reality, but that value
> alone might not be enough to convince someone like the two people who
> have expressed interest thusfar to take the time to work on the spec. I
> don't know.

But why doesn't certdata.txt meet that already, then? It's a useful thought experiment to find out what you see the delta as, so that we can understand what are and are not acceptable solutions.

> Mozilla's opinions on roots are defined by the sum total of:
>
> 1) certdata.txt
> 2) ExtendedValidation.cpp
> 3) The changes listed on
> https://wiki.mozilla.org/CA/Additional_Trust_Changes

1 & 2 for sure. I don't believe #3 can or should be, certainly not effectively maintained. Certainly, Google cannot and would not be able to find an acceptable solution on #3, just looking at things like CT, without introducing otherwise meaningless ontologies such as "Follows implementation #37 for this root".

(Which, for what it's worth, is what Microsoft does with the authroot.stl, effectively)

Gervase Markham

Jun 28, 2017, 7:39:37 PM
to Ryan Sleevi
On 28/06/17 15:08, Ryan Sleevi wrote:
> It already (effectively) requires a tool to make sure it's done right, AIUI :)

Well, we should ask Kai what methods he uses to maintain it right now,
and whether he uses a tool.

> You can have a JSON file, but that doesn't mean it's human-readable in the least.

You mean you can stick it all on one line? Or you can choose opaque key
and value names? Or something else?

> The CI tools don't check in artifacts. You're proposing giving some piece of infrastructure the access to generate and check in files?

I am led to understand this is a fairly common pattern these days.

>> If Apple said "we are happy to use the MS format", I guess the next
>> thing I would do is find Kai or whoever maintains certdata.txt and say
>> "hey, it's not ideal, but what do you think, for the sake of everyone
>> using the same thing?".
>
> Thought experiment: Why not have certdata.txt generate a CI artifact that interoperates for other consumers to use?

Because certdata.txt's format is not rich enough to support all the data
we would want to encode in a root store. We could consider extending it,
but why would we roll our own container format when there exist
perfectly good ones?

>> Mozilla's opinions on roots are defined by the sum total of:
>>
>> 1) certdata.txt
>> 2) ExtendedValidation.cpp
>> 3) The changes listed on
>> https://wiki.mozilla.org/CA/Additional_Trust_Changes
>
> 1 & 2 for sure. I don't believe #3 can or should be, certainly not effectively maintained. Certainly, Google cannot and would not be able to find an acceptable solution on #3, just looking at things like CT, without introducing otherwise meaningless ontologies such as "Follows implementation #37 for this root".

There are seven items on the list in #3. The first one is item 2, above.
The second is not a root store modification, technically. The third,
fifth and sixth would be accommodated if the new format had a "notAfter"
field. The fourth and seventh would be accommodated if the new format
had a "name constraints" field.

So putting all of #3, as it currently stands, into a new format seems
eminently doable. That doesn't mean every restriction we ever think of
could be covered, but the current ones (which are ones I can see us
using again in the future) could be.
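
Concretely, that might amount to nothing more than a couple of extra fields
per entry, along these lines (field names and values invented for the
example, not the actual constrained CAs):

    # Sketch of how "notAfter" and "name constraints" style restrictions
    # could be attached to a root; everything here is illustrative only.
    restricted_root = {
        "label": "Example Constrained CA",
        "trustBits": ["serverAuth"],
        "distrustCertsIssuedAfter": "2016-10-21T00:00:00Z",
        "nameConstraints": {"permittedDNSNames": ["example.org", "example.net"]},
    }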

Gerv

Ryan Sleevi

Jun 29, 2017, 11:27:23 AM
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, June 28, 2017 at 7:39:37 PM UTC-4, Gervase Markham wrote:
> Well, we should ask Kai what methods he uses to maintain it right now,
> and whether he uses a tool.

For the recent name constraints, it was a tool.

> > You can have a JSON file, but that doesn't mean it's human-readable in the least.
>
> You mean you can stick it all on one line? Or you can choose opaque key
> and value names? Or something else?

Well, the current certdata.txt is a text file. Do you believe it's human-readable, especially sans-comments?

>
> > The CI tools don't check in artifacts. You're proposing giving some piece of infrastructure the access to generate and check in files?
>
> I am led to understand this is a fairly common pattern these days.

Please realize that this makes it impossible to effectively test changes, without running said tool. This is, again, why certdata.txt being generated is part of the build - so that when you change a file, it's reflected in the build and code and you can effectively test.

Moving to a CI system undermines the ability to effectively contribute and test.

That's why "machine-readable" is, in effect, a must-have. Whether or not "human-readable" is (and what constitutes human-readable) is the point of discussion, but if you check in the machine-readable form, then anyone can generate the human-readable form at any time.

>
> >> If Apple said "we are happy to use the MS format", I guess the next
> >> thing I would do is find Kai or whoever maintains certdata.txt and say
> >> "hey, it's not ideal, but what do you think, for the sake of everyone
> >> using the same thing?".
> >
> > Thought experiment: Why not have certdata.txt generate a CI artifact that interoperates for other consumers to use?
>
> Because certdata.txt's format is not rich enough to support all the data
> we would want to encode in a root store. We could consider extending it,
> but why would we roll our own container format when there exist
> perfectly good ones?

Could you explain how you arrive at that conclusion? That may simply be a technical misunderstanding, as certdata.txt's format allows for the expression of arbitrary attributes (as recently added with the "Mozilla Root" attribute) in an appropriate form.

Which may be why we're at cross-purposes here - the existing certdata.txt is already technically capable of expressing the constraints. However, it is a complex technical burden to express that in metadata, rather than in code - and that is true no matter what format you choose.

If your understanding was based on a misunderstanding that "certdata.txt cannot be extended to support arbitrary metadata", then I can easily tell you that's not the case. It's a matter of changing NSS to, rather than express something simply and cleanly in code (relatively speaking), finding an ontology to express the constraint in a machine-readable (but not-code) format, and then code to parse that and apply in 100 lines what might take 5 lines in code.
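
For readers who haven't looked at the file: certdata.txt is a sequence of PKCS#11-style objects, each a run of "ATTRIBUTE TYPE VALUE" lines, so a new attribute is just another line on an object. A simplified sketch of a reader (only roughly matching the real grammar, and treating MULTILINE_OCTAL bodies as opaque strings) might be:

    def parse_certdata(path):
        """Very simplified reader for certdata.txt-style attribute blocks."""
        objects, current = [], None
        with open(path) as f:
            lines = iter(f)
            for line in lines:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split(None, 2)
                attr = parts[0]
                typ = parts[1] if len(parts) > 1 else ""
                if attr == "CKA_CLASS":
                    current = {}          # CKA_CLASS starts a new object
                    objects.append(current)
                if typ == "MULTILINE_OCTAL":
                    chunks = []
                    for body in lines:    # octal lines run until a bare END
                        if body.strip() == "END":
                            break
                        chunks.append(body.strip())
                    value = "".join(chunks)
                else:
                    value = parts[2] if len(parts) > 2 else ""
                if current is not None:
                    current[attr] = value
        return objects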

This is the same as the authroot.stl - both are quite robust, arbitrarily-extensible formats. The choice to not extend is not one about technical limitation, but about unreasonable return for the cost to implement.

>
> >> Mozilla's opinions on roots are defined by the sum total of:
> >>
> >> 1) certdata.txt
> >> 2) ExtendedValidation.cpp
> >> 3) The changes listed on
> >> https://wiki.mozilla.org/CA/Additional_Trust_Changes
> >
> > 1 & 2 for sure. I don't believe #3 can or should be, certainly not effectively maintained. Certainly, Google cannot and would not be able to find an acceptable solution on #3, just looking at things like CT, without introducing otherwise meaningless ontologies such as "Follows implementation #37 for this root".
>
> There are seven items on the list in #3. The first one is item 2, above.
> The second is not a root store modification, technically. The third,
> fifth and sixth would be accommodated if the new format had a "notAfter"
> field. The fourth and seventh would be accommodated if the new format
> had a "name constraints" field.
>
> So putting all of #3, as it currently stands, into a new format seems
> eminently doable. That doesn't mean every restriction we ever think of
> could be covered, but the current ones (which are ones I can see us
> using again in the future) could be.

That takes a very Mozilla-centric view, but that doesn't align with, say, the goal of supporting Apple.

For example, Apple has three CAs where only certain, previously disclosed (via CT) certificates are trusted - https://opensource.apple.com/source/security_certificates/security_certificates-55070.30.7/certificates/allowlist/ - CNNIC and WoSign. In a machine-readable form, either you put that in a unified file, or you come up with an ontology for expressing dependencies that stretches well beyond the sane bounds.

Mozilla's solution to this was, unsurprisingly, with code ( see https://dxr.mozilla.org/mozilla-central/source/security/certverifier/CNNICHashWhitelist.inc ) - would you see that be expressed in the same file?

Similarly, consider Google's implementation of CT requirements (for Symantec), or the recently removed whitelist for WoSign/StartCom - both implemented via code.

So clearly, we get in situations where not all restrictions are expressible. From a purely pragmatic standpoint, I think it would be undesirable to bake those assumptions and logic into the code. Our goal with Chrome is to remove such transition code as quickly and efficiently as possible - there's no reason to burden a billion users with code 'just in case' a CA has issues again. Rather, we focus our limited time and engineering efforts on helping find solutions that avoid such issues in the first place, while recognizing the conceptual solution is what we may want to redeploy - and if so, we can introduce the specific code for the limited time.

Put differently, rather than try to design and enshrine in code that CAs will fail and we should have methods X, Y, and Z to deal with it, it seems much more beneficial to try to avoid the failure in the first place.

Kai Engert

Jun 30, 2017, 12:38:59 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
Hello Gerv,

given that today we don't have a single place where all of Mozilla's certificate
trust decisions can be found, introducing one would be helpful.

I think the new format should be as complete as possible, including both trust
and distrust information, including EV and description of rules for partial
distrust.

As of today, certdata.txt contains:
- whitelisted root CAs (trusted for one or more purposes)
- distrusted/blacklisted certificates (which can be either CAs, intermediate
CAs or end entity certificates), based on varying identification criteria
(sometimes we distrust all matches based on issuer/serial, 
sometimes we are more specific and only distrust if the certificate also
matches exactly a specific hash)

But it doesn't list the additional decisions that Mozilla has implemented in
code:
- additional domain name constraints
- additional validity constraints for issued certificates
- additional required whitelist matching

In the past, some consumers of the Mozilla CA list didn't even implement the few
distrust decisions that are already listed in certdata.txt, and focused only
on the positive trust. I don't know if this was because consumers didn't worry,
or because they didn't even notice, but might have also been done because of
technical limitations.

It would be good if the new format made it very clear that there are distrust
entries, and that trust for some CAs is only partial. The latter could make it
easier for list consumers to identify the partially restricted CA. E.g. some
might decide to rather not trust a restricted CA at all, if the consumer is
technically unable to implement the restricting checks.

We could define identifiers for each class of trust restrictions (CTR), e.g.:
- permitted name constraint
- excluded name constraints
- restricted to serial/name whitelist
- not valid for serial/name blacklist
- restrict validity period of root CA
- restrict allowed validity of issued EE or intermediates
- require successful revocation checking
- require successful Certificate Transparency lookup
- ...

This list could be expanded in the future, so a list consumer that has
implemented all of the older CTRs could decide to not trust new CAs that have
unknown CTRs defined.
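
That "unknown CTR means don't trust" rule could be as simple as the sketch
below (the CTR identifiers and entry layout are invented here purely for
illustration):

    # Consumer-side sketch: only keep roots whose restrictions we know how
    # to enforce. All identifiers below are invented for this example.
    KNOWN_CTRS = {
        "permitted-name-constraints",
        "excluded-name-constraints",
        "restrict-validity-period",
        "require-ct",
    }

    def usable(entry):
        return all(ctr in KNOWN_CTRS for ctr in entry.get("ctrs", []))

    roots = [
        {"label": "Plain CA", "ctrs": []},
        {"label": "Constrained CA", "ctrs": ["permitted-name-constraints"]},
        {"label": "Future CA", "ctrs": ["some-new-restriction"]},
    ]
    print([r["label"] for r in roots if usable(r)])  # drops "Future CA"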

There were several comments in this thread about the file format and questions
what we use today.

Let me mention the concept of implementing CTRs as "stapled certificate
extensions", i.e. reuse the standard certificate format definitions, create the
binary extension that implements a specific CTR, and embed it into the trust
list file. This approach allows software to attach these extensions to the
certificates in memory, with the effect that standard certificate validation
code can see and use them, without requiring additional logic.

We already use this stapling approach in Firefox and NSS for name constraints.
Because this requires a very specific ASN.1 encoding, we manually used tools to
create such an extension, and then copy the binary data. That might be a
reasonable approach even for the near future, until it can be automated
completely.
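
For anyone curious what producing such a stapled extension can look like, a
sketch using the Python "cryptography" package (this assumes a recent release
where extension values can be serialized via public_bytes(); the permitted
domain is a placeholder, not any real constrained CA):

    from cryptography import x509

    # DER-encode a name constraints extension value that could be "stapled"
    # to a root; the domain below is purely a placeholder.
    nc = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("example.org")],
        excluded_subtrees=None,
    )
    print(nc.public_bytes().hex())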

Currently the encoding of these name constraints is copied into source code,
but this could also live inside a future trust file, if we define the file
format to represent such binary extensions, and if we enhance the code to load
such extensions dynamically from the list.

Regarding the question how we create new entries for certdata.txt today, we
currently use the NSS tool "addbuiltin". It takes a certificate as input, and
can create both positive trust or distrust lines in the current file format,
that we simply appended to the certdata.txt file.

Regarding which file format should be used for the new master trust list: unless
we want to change the way NSS works, it will probably be helpful to continue
to use the certdata.txt file format, even if it's just used as an intermediate
in a double conversion.

Instead of requiring everything to be a single file, maybe it could even work to
use an archive file (e.g. zip), that contains all information in easily
consumable pieces, which would make it unnecessary to serialize and deserialize
the certificates while working with the list, and allows maintainers to use
tools that work with the certificates directly.

E.g. there could be a single JSON file inside that archive, with a well-defined
name, that lists all entries. For each entry, it says if it's a trust, a
distrust, or a restricted-trust entry, and for which purposes (web, email, ...).
It could list the filename of the certificate file this JSON entry refers to
(plus the certificate's SHA256), or if it's just a distrust entry without a full
certificate, no separate file is required. It would list the CTR classes that
are required. For restrictions, an archive file format would make it easier to
distribute the full details, even if they are large, like whitelists. Stapled
binary extensions, like prepared domain name constraints extensions, could also
simply get included as additional files, referenced by filename from within
JSON.
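
As a sketch of what that index file could contain (layout, file names and
values invented for the example):

    import json

    # Hypothetical manifest for the archive idea: one JSON index whose
    # entries point at certificate and extension files shipped alongside it.
    manifest = {
        "entries": [
            {
                "kind": "trust",
                "purposes": ["web", "email"],
                "certFile": "certs/example-root.crt",
                "certSHA256": "aa11...ff99",
                "ctrs": ["permitted-name-constraints"],
                "extensionFiles": {"nameConstraints": "ext/example-root-nc.der"},
            },
            {
                "kind": "distrust",
                "issuer": "CN=Example Revoked Intermediate",
                "serialNumber": "0123ab",
            },
        ]
    }

    print(json.dumps(manifest, indent=2))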

With this approach, we could also declare that the master location for this
trust list is somewhere outside of NSS (in a separate repository). If we did
that, the primary location could simply be its own HG/GIT repository, with all
the individual files. Releases of Mozilla trust list could be an archive file
that gets published with a checksum file/signature.

Software that implements a specific revision of that trust list would convert
that master data into its own file format, e.g. into certdata.txt, and can get
checked in to the project, so its clear which data was used e.g. in a NSS
release. In the future, we could define how CTRs could get represented as part
of certdata.txt and enable NSS to consume it.

Firefox developers could also have a converter that extracts the portions they
need from the master list, e.g. EV policy information.

Kai

David Adrian

Jun 30, 2017, 2:47:26 PM
to Kai Engert, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
I just want to drop in a couple thoughts from the perspective of Censys
with regard purely to _obtaining_ root stores.

Censys validates certificates against multiple root stores. At the end of
the day, what we want is a reliable and repeatable way to get an up-to-date
version of a root store in PEM format. Right now, obtaining root stores is
a combination of cloning Android source and hoping they don't change their
standard for git tags, parsing an Apple webpage to get a list of tarballs
and hoping the format of the webpage doesn't change, fetching the NSS
source and running it through agl's utility, and then the method linked
above from Ryan Hurst for fetching Microsoft. [1]

This is ridiculous. I don't particularly have strong opinions on how root
stores are released, and I understand wanting to avoid a direct PEM release
to prevent downstream users from consuming it incorrectly, but we _should
not_ have to run a webpage through BeautifulSoup to try to find a root
store. I'd like to see either a reliable URL to fetch that can be converted
to PEM (i.e. what Microsoft does), or some API you can hit to the store
(e.g. what CT does).

[1]: https://github.com/zmap/rootfetch
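
For what it's worth, the certdata.txt half of that conversion is mostly a
matter of turning the MULTILINE_OCTAL blobs back into DER and wrapping them;
a rough sketch, with a fake two-byte "certificate" standing in for real data:

    import base64
    import re
    import textwrap

    def octal_to_pem(multiline_octal):
        """Turn a certdata.txt MULTILINE_OCTAL value (\\ooo escapes) into PEM."""
        der = bytes(int(o, 8) for o in re.findall(r"\\([0-7]{3})", multiline_octal))
        b64 = base64.b64encode(der).decode("ascii")
        return ("-----BEGIN CERTIFICATE-----\n"
                + "\n".join(textwrap.wrap(b64, 64))
                + "\n-----END CERTIFICATE-----\n")

    # Fake two-byte "certificate" just to show the mechanics:
    print(octal_to_pem(r"\060\003"))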
--
David Adrian
https://dadrian.io

Peter Gutmann

Jul 1, 2017, 12:46:32 AM
to Kai Engert, Gervase Markham, mozilla-dev-s...@lists.mozilla.org, David Adrian
David Adrian via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>I'd like to see either a reliable URL to fetch that can be converted to PEM
>(i.e. what Microsoft does), or some API you can hit to the store (e.g. what
>CT does).

PEM. You keep using that word... I do not think it means what you think it
does. Technically speaking, PEM is the data format for Privacy Enhanced Mail,
usually applied to the ASCII wrapping for the binary data. In practice, it's
used to denote OpenSSL's proprietary private-key format. Neither of those
seem terribly useful for communicating trusted certificates.

If you do want a standard format for them that pretty much anything should
already be able to understand, why not use CMS/PKCS #7 certificate
sets/collections/chains? Almost anything that deals with certs should already
be able to read those. Sure, it won't do metadata, but for that you'll need
to spend three years arguing in a standards group and produce a 100-page RFC
that no-one can get interoperability on. OTOH PKCS #7 works right now.

Peter.

Peter Gutmann

Jul 1, 2017, 12:52:29 AM
to Kai Engert, Gervase Markham, mozilla-dev-s...@lists.mozilla.org, David Adrian, Peter Gutmann
Peter Gutmann via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>You keep using that word... I do not think it means what you think it does.

"... what you think it means". Dammit.

Peter.

David Adrian

Jul 2, 2017, 3:13:25 PM
to Peter Gutmann, Kai Engert, Gervase Markham, mozilla-dev-s...@lists.mozilla.org, David Adrian
To be clear: I don't care what format the certificates are released in, I
am primarily interested in a reliable URL to download for each root store.
I personally will be converting them to OpenSSL-style PEM-encoded-DER to be
used with common X.509 libraries. I suspect others will also be interested
in this format, but I see no reason to bikeshed what PEM means.

On Sat, Jul 1, 2017 at 12:52 AM Peter Gutmann <pgu...@cs.auckland.ac.nz>
wrote:

Kai Engert

Jul 2, 2017, 5:54:08 PM
to David Adrian, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Fri, 2017-06-30 at 18:46 +0000, David Adrian wrote:
> Censys validates certificates against multiple root stores. At the end of
> the day, what we want is a reliable and repeatable way to get an up-to-date
> version of a root store in PEM format.

Can you please clarify if you're talking about the file format that starts with
a header line
(a) "-----BEGIN CERTIFICATE-----"
or about the file format that starts with
(b) "-----BEGIN TRUSTED CERTIFICATE-----"
?

The Mozilla trust list cannot be correctly represented in file format (a),
because it can only carry a list of certificates, but not trust information.

It cannot carry information like "this CA should be trusted only for email
security, but not for SSL/TLS servers".

It cannot carry information like "this intermediate CA (or this end entity
certificate) must NOT be trusted, although it has been issued by a trusted CA".

Maybe Mozilla shouldn't publish a single, simple PEM file in format (a), because
it could give consumers the false impression that it's equivalent to the Mozilla
trust list.

Potentially Mozilla could publish multiple different PEM files in format (a),
one for the list of CAs that are trusted for email security, and another list of
CAs that are trusted for web security. An additional PEM file could be published
in format (a), which lists all the certificates that are explicitly
blacklisted/distrusted in certdata.txt.

Multiple files would be necessary, because the standard PEM file format (a)
cannot contain trust or distrust flags, so the name of each list would
have to indicate its meaning.

However, file format (b) is able to represent trust and distrust information. I
think it might have been invented by the OpenSSL project. You can read more
about it on the manual page that can be accessed with "man x509", see the
-trustout and -addtrust and -addreject parameters.

Today's certdata.txt can mostly be represented in file format (b).
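
For anyone wanting to try it, producing a format (b) entry from a plain
certificate is roughly the following (driving the openssl command line
mentioned above; the file names are placeholders):

    import subprocess

    # Mark a root as trusted for TLS servers and explicitly rejected for
    # email, emitting an OpenSSL "TRUSTED CERTIFICATE" (format (b)) file.
    # Requires the openssl binary; file names are placeholders.
    subprocess.run(
        ["openssl", "x509", "-in", "root.pem",
         "-addtrust", "serverAuth",
         "-addreject", "emailProtection",
         "-trustout", "-out", "root.trusted.pem"],
        check=True,
    )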

However, even today's certdata.txt is incomplete. It doesn't contain the list of
distrusted certificates that Mozilla publishes with the OneCRL project, which
means only the Mozilla applications can benefit from that information, but not
other applications based on NSS. If Mozilla works on developing a consolidated
CA trust and distrust list, ideally that list should contain the distrust
information from OneCRL, too.

But even with that, it will still exclude the dynamic distrust rules that this
newsgroup has decided, which partially restrict some of the trusted CAs, such as
the whitelist-only approval for some CNNIC roots, the restriction to certain
domains for some ANSSI and TUBITAK roots, or the date-based limitation for some
StartCom and WoSign roots.

If Mozilla is asked to publish a single file containing all trust in a PEM file
format, which cannot express these partial distrust rules, should Mozilla
include or exclude the partially trusted CAs?

Kai

Gervase Markham

unread,
Jul 3, 2017, 10:13:26 AM7/3/17
to Kai Engert
Hi Kai,

On 30/06/17 17:38, Kai Engert wrote:
> given that today we don't have a single place where all of Mozilla's certificate
> trust decisions can be found, introducing that would be helpful.

I'm glad you see the value in my goal :-)

> I think the new format should be as complete as possible, including both trust
> and distrust information, including EV and description of rules for partial
> distrust.

I agree, as long as we can stay away from defining a format of arbitrary
complexity.

> It would be good if the new format made it very clear that there are distrust
> entries, and that trust for some CAs is only partial. The latter could make it
> easier for list consumers to identify the partially restricted CA. E.g. some
> might decide to rather not trust a restricted CA at all, if the consumer is
> technically unable to implement the restricting checks.

Yes, indeed.

> Regarding the question how we create new entries for certdata.txt today, we
> currently use the NSS tool "addbuiltin". It takes a certificate as input, and
> can create both positive trust or distrust lines in the current file format,
> that we simply appended to the certdata.txt file.

Ah, OK. So you would not be against the idea of using a tool to maintain
the list in the future?

> Regarding which file format should be used for the new master trust list. Unless
> we want to change the way how NSS works, it will probably be helpful to continue
> to use the certdata.txt file format, even if it's just used as an intermediate
> in a double conversion.

I certainly think we should continue to maintain the store in that
format. The question is whether that format is the canonical format, or
a derivative format. My feeling was that if we want to be able to add
these new forms of restriction, EV status and so on, we should define a
new format. Ryan seems to think we may be able to do this within the
existing certdata.txt format.

> Instead of requiring everything to be a single file, maybe it could even work to
> use an archive file (e.g. zip), that contains all information in easily
> consumable pieces, which would make it unnecessary to serialize and deserialize
> the certificates while working with the list, and allows maintainers to use
> tools that work with the certificates directly.

I think that runs the greater risk of people creating systems which just
trust every certificate in the bundle...

> With this approach, we could also declare that the master location for this
> trust list is somewhere outside of NSS (in a separate repository). If we did
> that, the primary location could simply be its own HG/GIT repository, with all
> the individual files. Releases of Mozilla trust list could be an archive file
> that gets published with a checksum file/signature.

We could do this with any approach. Are you interested in the idea of
making the trust list an independently-maintained item, which is just
pulled into NSS each time an NSS release is done?

Gerv

Kai Engert

unread,
Jul 3, 2017, 11:10:14 AM7/3/17
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Mon, 2017-07-03 at 15:12 +0100, Gervase Markham wrote:
>
> > I think the new format should be as complete as possible, including both
> > trust
> > and distrust information, including EV and description of rules for partial
> > distrust.
>
> I agree, as long as we can stay away from defining a format of arbitrary
> complexity.

I agree the complexity shouldn't be part of the format. It might be sufficient
to have an identifier for the type of restriction, which is described elsewhere,
plus a flexible list of {name,value} pairs for parameters.
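
As a purely hypothetical sketch (the identifier and parameter names below are
invented, not a proposal for actual naming), such a restriction entry could
look like:

# One restriction record: an identifier for a rule documented elsewhere,
# plus free-form name/value parameters. All names are invented examples.
restriction = {
    "type": "distrust-certs-issued-after",
    "parameters": {
        "not-after": "2016-10-21",
    },
}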


> > Regarding the question how we create new entries for certdata.txt today, we
> > currently use the NSS tool "addbuiltin". It takes a certificate as input,
> > and
> > can create both positive trust or distrust lines in the current file format,
> > that we simply appended to the certdata.txt file.
>
> Ah, OK. So you would not be against the idea of using a tool to maintain
> the list in the future?

Do you have a particular kind of tool in mind?

I'd prefer a simple open source tool that operates on files, which can be used
from a command line, with a free license, e.g. MPL2.


> > Regarding which file format should be used for the new master trust list.
> > Unless
> > we want to change the way how NSS works, it will probably be helpful to
> > continue
> > to use the certdata.txt file format, even if it's just used as an
> > intermediate
> > in a double conversion.
>
> I certainly think we should continue to maintain the store in that
> format. The question is whether that format is the canonical format, or
> a derivative format. My feeling was that if we want to be able to add
> these new forms of restriction, EV status and so on, we should define a
> new format. Ryan seems to think we may be able to do this within the
> existing certdata.txt format.

I don't have a strong preference.

I agree that it should be possible to extend the existing certdata.txt file
format. For meta level items that NSS cannot consume yet, we could define new
identifiers that NSS ignores (or might potentially process at a later time).

However, the certdata.txt file format was built specifically around the needs of
NSS, and we currently have the flexibility to change it in any way we want to.
Also, it's based on the idea that the file's elements will be converted rather
directly into PKCS#11 objects. Other root stores might not use PKCS#11 to store
their list of root CAs at all.

If the intention is to define a file format that is shared with other groups,
who would be the owner of the file format? What if another group needs to
introduce additional fields into the file format that aren't of interest to
Mozilla or NSS?

Having a more abstract file format could give anyone more flexibility to add
information that doesn't need to be coordinated with others beforehand, and
allows consumers to ignore the fields they aren't interested in.

For example, some root store maintainer might invent the golden circle of CA
vouching, which everyone else considers questionable. It might require storing
a flexible list of vouchers for each CA. With JSON it would be trivial to add
another arbitrary-length list for that.
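
To make that concrete, a hypothetical entry with such a vendor-specific
addition (all field names invented) might look like:

# Consumers that don't know the "vouched-by" field would simply ignore it.
entry = {
    "label": "Example Root CA",
    "trust": {"server-auth": True, "email": False},
    "vouched-by": ["Some Other Root CA", "Yet Another Root CA"],
}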

So, if the intention is to have a shared file format that everyone can accept,
today and in the future, a more flexible file format seems more appropriate to
me.


> > Instead of requiring everything to be a single file, maybe it could even
> > work to
> > use an archive file (e.g. zip), that contains all information in easily
> > consumable pieces, which would make it unnecessary to serialize and
> > deserialize
> > the certificates while working with the list, and allows maintainers to use
> > tools that work with the certificates directly.
>
> I think that runs the greater risk of people creating systems which just
> trust every certificate in the bundle...

There could be ways to avoid that, for example by using subdirectories, named
like:
- trusted-for-ssl-tls-only
- partially-trusted-for-ssl-tls-only
- trusted-for-email-security-only
- partially-trusted-for-email-security-only
- trusted-for-multiple-uses
- partially-trusted-for-multiple-uses
- distrusted

This is just a thought. If there's too much doubt, I don't mind staying with the
concept of having a single file that contains serializations of all attributes.
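
As a rough sketch of the consumer side of such a layout (the archive path,
directory names and file extension are hypothetical), a tool that cannot
implement the partial restrictions could simply skip those directories:

from pathlib import Path

# Load only roots from directories whose semantics this consumer fully
# implements; the "partially-trusted-*" directories are deliberately skipped.
WANTED_DIRS = {"trusted-for-ssl-tls-only", "trusted-for-multiple-uses"}
store = Path("mozilla-root-store")

certs = []
for d in sorted(WANTED_DIRS):
    for path in sorted((store / d).glob("*.pem")):
        certs.append(path.read_bytes())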


> > With this approach, we could also declare that the master location for this
> > trust list is somewhere outside of NSS (in a separate repository). If we did
> > that, the primary location could simply be its own HG/GIT repository, with
> > all
> > the individual files. Releases of Mozilla trust list could be an archive
> > file
> > that gets published with a checksum file/signature.
>
> We could do this with any approach. Are you interested in the idea of
> making the trust list an independently-maintained item, which is just
> pulled into NSS each time an NSS release is done?

Yes, I had previously suggested this here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1294150

It would make it easier to publish new versions of the root CA list,
independently of software versions.

Converting and copying a snapshot of the root CA list into the repository of a
software project, to make it clear which data set was used by a particular
software release, might sufficiently address the concerns that had been raised
in that bug.

Kai

Kai Engert

unread,
Jul 3, 2017, 11:53:46 AM7/3/17
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org
On Wed, 2017-06-28 at 15:08 -0700, Ryan Sleevi via dev-security-policy wrote:
> On Wednesday, June 28, 2017 at 5:29:19 PM UTC-4, Gervase Markham wrote:
> > Well, the fact that we now use Git,

NSS and the root store don't use Git, it uses HG/Mercurial.


> > I suspect, means anyone could plug
> > in a modern CI

CI meaning Continuous Integration ?


> > tool that did "Oh, you changed file X. Let me regenerate
> > file Y and check it in alongside". Without really needing anyone's
> > permission beyond checkin access.

I'd prefer a bit more control. Any conversion script that translates from a
new high-level file format to a specific technical file format used by our
software could have bugs.

If everything is automated, there's more risk that changes might not get
reviewed, and bugs aren't identified.


> I don't believe the state of NSS infrastructure is well-placed to support that
> claim. I'd be curious for Kai's/Red Hat's feedback.

I'm not sure I correctly understand this sentence, but if you're asking if we
have such conversion magic, we don't.

There's the technical possibility of having commit hooks. But I'm not sure I
like that approach.


> > Well, I don't do the actual maintenance of certdata.txt, but I assume
> > (perhaps without evidence) that telling whoever does that "hey, you now
> > need to use this tool to edit the canonical information store, instead
> > of the text editor you have been using" might not go down well. It
> > wouldn't if it were me.
>
> It already (effectively) requires a tool to make sure it's done right, AIUI :)
>
> But I think you're still conflating "text" vs "human readable", and I'm not
> sure that they represent equivalents. That is, "human readable" introduces a
> subjective element that can easily lead to ratholes about whether or not
> something is "readable enough", or coming up with sufficient ontologies so
> that it can "logically map" - just look at XML for the case study in this.
>
> You can have a JSON file, but that doesn't mean it's human-readable in the
> least.
>
> That's why I'm pushing very hard on that.

I wouldn't call our existing certdata.txt format easily human readable either.
It's only human readable because our tool, which produces new entries, also adds
human readable comments. It would be very difficult to notice if the text
differs from the binary representation (unless you write and execute a
verification script that ensures everything matches). We currently achieve the
matching (hopefully) by carefully reviewing changes.

When introducing a JSON file format, I would discourage a few things, such as
unnecessary changes in line wrapping or reordering, so that it stays easy to
compare different revisions.
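
One way to get that property, as a sketch: always serialize deterministically
(fixed key order and indentation), so that diffs between revisions only show
real changes. The data and file name below are placeholders.

import json

data = {"version": "2017-07-03", "entries": []}  # placeholder content

with open("root-store.json", "w") as f:
    json.dump(data, f, indent=2, sort_keys=True)
    f.write("\n")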


> > No, because NSS consumers could choose to continue consuming the
> > (autogenerated by the CI tool) certdata.txt.
>
> The CI tools don't check in artifacts.

What does artifact mean in this context?


> You're proposing giving some piece of infrastructure the access to generate
> and check in files? I believe Mozilla may do that, but NSS does not, and the
> infrastructure is separately maintained.
>
> > You want me to rank my goals in order of preference? :-)
>
> Moreso be more explicit in the goals. It's trying to figure out how 'much'
> interoperability is being targeted here :)
>
> > If Apple said "we are happy to use the MS format", I guess the next
> > thing I would do is find Kai or whoever maintains certdata.txt and say
> > "hey, it's not ideal, but what do you think, for the sake of everyone
> > using the same thing?".
>
> Thought experiment: Why not have certdata.txt generate a CI artifact that
> interoperates for other consumers to use?

Are you suggesting that we should convert certdata.txt into a file format that
others would prefer to consume?

Yes, that's another option.

But it wouldn't address the idea of also storing the Mozilla EV attributes in
the common place. Firefox developers would have to start converting information
found inside NSS into Firefox application code.


> Which is all still a facet of the original question: Trying to determine what
> your goals are / what the 'necessary' vs 'nice to have' features are :)
>
> > It's not a massive improvement if we are the only group using it. I
> > think there is value to Mozilla even if MS and Apple don't get on board,
> > because our root store gets more descriptive of reality, but that value
> > alone might not be enough to convince someone like the two people who
> > have expressed interest thusfar to take the time to work on the spec. I
> > don't know.
>
> But why doesn't certdata.txt meet that already, then? It's a useful thought
> experiment to find out what you see the delta as, so that we can understand
> what are and are not acceptable solutions.
>
> > Mozilla's opinions on roots are defined by the sum total of:
> >
> > 1) certdata.txt
> > 2) ExtendedValidation.cpp
> > 3) The changes listed on
> > https://wiki.mozilla.org/CA/Additional_Trust_Changes
>
> 1 & 2 for sure. I don't believe #3 can or should be, certainly not effectively
> maintained.

I think Mozilla could and should try to. See my suggestion to use invented
identifiers for describing each category of invented partial distrust.

Kai


> Certainly, Google cannot and would not be able to find an acceptable solution
> on #3, just looking at things like CT, without introducing otherwise
> meaningless ontologies such as "Follows implementation #37 for this root".
>

Ryan Sleevi

unread,
Jul 3, 2017, 8:47:59 PM7/3/17
to Kai Engert, Ryan Sleevi, mozilla-dev-security-policy
On Mon, Jul 3, 2017 at 11:53 AM, Kai Engert <ka...@kuix.de> wrote:

> > > I suspect, means anyone could plug
> > > in a modern CI
>
> CI meaning Continuous Integration ?
>

Yes. Gerv's proposal rests on the idea of having a file committed that
explains it in human-readable and machine-readable (simultaneously) form,
and then have a continuous integration build translate that into something
consumable by NSS, and then commit that generated file back into the tree
(as I understand it). For example, the resulting certdata.txt or certdata.c


> I'd prefer a bit more control. Any conversion script, which translates
> from a
> new high level file format, to a specific technical file format used by our
> software, could have bugs.
>
> If everything is automated, there's more risk that changes might not get
> reviewed, and bugs aren't identified.
>

Agreed


> > I don't believe the state of NSS infrastructure is well-placed to
> support that
> > claim. I'd be curious for Kai's/Red Hat's feedback.
>
> I'm not sure I correctly understand this sentence, but if you're asking if
> we
> have such conversion magic, we don't.
>

That's what I was asking about.


> There's the technical possibility of having commit hooks. But I'm not
> sure I
> like that approach.
>

I agree.


> I would discourage a few things when introducing a JSON file format, like,
> avoid
> unnecessary changes in line wrapping or reordering, to make it easier to
> compare
> different revisions.
>

Right. And JSON can't have comments. So we'd lose substantially in
expressiveness.


> > > No, because NSS consumers could choose to continue consuming the
> > > (autogenerated by the CI tool) certdata.txt.
> >
> > The CI tools don't check in artifacts.
>
> What does artifact mean in this context?
>

"Artifact" = generated file run as part of a build process, and then
checked back in.


> > Thought experiment: Why not have certdata.txt generate a CI artifact that
> > interoperates for other consumers to use?
>
> Are you suggesting that we should convert certdata.txt into a file format
> that
> others would prefer to consume?
>
> Yes, that's another option.
>
> But it wouldn't solve the idea to also store the Mozilla EV attributes in
> the
> common place. Firefox developers would have to start converting information
> found inside NSS to Firefox application code.
>

I'm not sure I fully understand your response. The suggestion was that if
there's some 'other format' that provides interoperability for downstream
consumers, it 'could' be a path to take certdata.txt and have a tool that
can generate that 'other format' from certdata.txt.

The purpose of this thought experiment was to find what, if any,
limitations exist in certdata.txt. You've highlighted a very apt and
meaningful one, in theory - which is that EV data is a Mozilla Firefox (and
exclusively Firefox) concept, while trust records are an aspect of the root
store, hence, the dual expression between Mozilla Firefox source and NSS
source. If we wanted to make "EV" a portion of NSS (which makes no sense
for, say, Thunderbird), we could certainly express that - but it means
carrying around unneeded and unused attributes for other NSS consumers.


> > > Mozilla's opinions on roots are defined by the sum total of:
> > >
> > > 1) certdata.txt
> > > 2) ExtendedValidation.cpp
> > > 3) The changes listed on
> > > https://wiki.mozilla.org/CA/Additional_Trust_Changes
> >
> > 1 & 2 for sure. I don't believe #3 can or should be, certainly not
> effectively
> > maintained.
>
> I think Mozilla could and should try to. See my suggestion to use invented
> identifiers for describing each category of invented partial distrust.
>

I don't disagree we can - on a technical level. But I don't agree that the
ontology of invented partial distrust holds, nor is it terribly useful to
expect us to generalize distrust for the various ways in which CAs
fail the community. That said, even when thinking about the concepts, the
fact that the goal is presently woefully underspecified means we cannot
have a good objective discussion about why "Apply the WoSign policy" is
better or worse than a notion of "Distrust certificates after this date" -
or perhaps even a more complex policy, like "Distrust X certificates after
A date, Y certificates after B date, Z certificates after C date, unless
conditions M, N, O are also satisfied"

Gervase Markham

unread,
Jul 5, 2017, 7:33:13 AM7/5/17
to Ryan Sleevi
On 29/06/17 16:27, Ryan Sleevi wrote:
> Well, the current certdata.txt is a text file. Do you believe it's human-readable, especially sans-comments?

Human readability is, of course, a little bit of a continuum. You can
open it in a text editor and get some sense of what's going on, but it's
far from ideal.

How it is sans-comments is irrelevant, because it has comments. :-)

(For those not familiar, here's a sample from certdata.txt:

# Trust for Certificate "Verisign/RSA Secure Server CA"
CKA_CLASS CK_OBJECT_CLASS CKO_NETSCAPE_TRUST
CKA_TOKEN CK_BBOOL CK_TRUE
CKA_PRIVATE CK_BBOOL CK_FALSE
CKA_MODIFIABLE CK_BBOOL CK_FALSE
CKA_LABEL UTF8 "Verisign/RSA Secure Server CA"
CKA_CERT_SHA1_HASH MULTILINE_OCTAL
\104\143\305\061\327\314\301\000\147\224\141\053\266\126\323\277
\202\127\204\157
END
CKA_CERT_MD5_HASH MULTILINE_OCTAL
\164\173\202\003\103\360\000\236\153\263\354\107\277\205\245\223
END
....

> Please realize that this makes it impossible to effectively test changes, without running said tool. This is, again, why certdata.txt being generated is part of the build - so that when you change a file, it's reflected in the build and code and you can effectively test.

Of course, those changing the root store might need access to the
compilation tool. But from a Mozilla PoV, that's just Kai normally. And
if people were used to editing and consuming certdata.txt, they could
continue to do it that way.

Thought experiment for you: if we decided to make the root store its own
thing with its own repo and its own release schedule, and therefore NSS
became a downstream consumer of it, where on occasion someone would
"take a release" by generating and checking in certdata.txt from
whatever format we decided to use, what problems would that cause?

> That's why "machine-readable" is, in effect, a must-have.

I'm not sure anyone is arguing with that.

> So clearly, we get in situations where not all restrictions are expressible.

Sure. As I said, I'm not interested in an arbitrarily complex file
format, so it will always be possible to come up with restrictions we
can't encode. But whatever format Apple chooses, unless they go the
"arbitrary complexity" path, they will have that problem, no?

Gerv

Gervase Markham

unread,
Jul 5, 2017, 7:35:09 AM7/5/17
to Kai Engert
On 03/07/17 16:53, Kai Engert wrote:
> On Wed, 2017-06-28 at 15:08 -0700, Ryan Sleevi via dev-security-policy wrote:
>> On Wednesday, June 28, 2017 at 5:29:19 PM UTC-4, Gervase Markham wrote:
>>> Well, the fact that we now use Git,
>
> NSS and the root store don't use Git, it uses HG/Mercurial.

Yes, apologies. I guess I meant $MODERN_VCS.

>>> I suspect, means anyone could plug
>>> in a modern CI
>
> CI meaning Continuous Integration ?

Yes.

Gerv

Gervase Markham

unread,
Jul 5, 2017, 7:40:02 AM7/5/17
to Kai Engert
On 03/07/17 16:09, Kai Engert wrote:
> I'd prefer a simple open source tool that operates on files, which can be used
> from a command line, with a free license, e.g. MPL2.

Of course.

> If the intention is to define a file format that is shared with other groups,
> who would be the owner of the file format?

Good question.

> What if another group needs to
> introduce additional fields into the file format, that aren't of interest to
> Mozilla or NSS?

Using something like JSON means that people can add arbitrary keys for
their own use that everyone else can ignore. We'd need a lightweight
mechanism for how to do that, but it's not an uncommon pattern.
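
One lightweight mechanism, purely as a hypothetical sketch: reserve a prefix
for non-standard keys, so that consumers can drop or ignore them mechanically.
The prefix and field names are invented.

# Keys starting with "x-" are treated as vendor extensions and may be ignored.
def strip_extensions(entry: dict) -> dict:
    return {k: v for k, v in entry.items() if not k.startswith("x-")}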

>> We could do this with any approach. Are you interested in the idea of
>> making the trust list an independently-maintained item, which is just
>> pulled into NSS each time an NSS release is done?
>
> Yes, I had previously suggested this here:
> https://bugzilla.mozilla.org/show_bug.cgi?id=1294150

I think that having a new file format which encoded more or all of the
restrictions on CAs would mitigate some of the issues raised in that bug.

Gerv

Ryan Sleevi

unread,
Jul 5, 2017, 1:08:56 PM7/5/17
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Wed, Jul 5, 2017 at 4:32 AM Gervase Markham via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 29/06/17 16:27, Ryan Sleevi wrote:
> > Well, the current certdata.txt is a text file. Do you believe it's
> human-readable, especially sans-comments?
>
> Human readability is, of course, a little bit of a continuum. You can
> open it in a text editor and get some sense of what's going on, but it's
> far from ideal.


Unfortunately, your answers don't really help capture your goals - and thus
make this a very difficult endeavor to satisfy.

You haven't really established on what principles you believe JSON (which
seems to be your preferred format, and which does not support comments) is
more favorable than the current format.

That is, the difference between, say:
"label": "Verisign/RSA Secure Server CA"
And
CKA_LABEL "Verisign/RSA Secure Server CA"

I would argue there isn't a meaningful difference for "human readability",
and it's more a subjective preference. Before we fixate on those, I'm
hoping we should get objective use cases nailed down. That's why I'm trying
to understand how you're evaluating that spectrum. Is it because it's
something you'd like to maintain, because you think it should be "readable"
on a webpage, etc?


> How it is sans-comments is irrelevant, because it has comments. :-)


It isn't, because JSON can't.


> Of course, those changing the root store might need access to the
> compilation tool. But from a Mozilla PoV, that's just Kai normally. And
> if people were used to editing and consuming certdata.txt, they could
> continue to do it that way.


I'm thinking you may have misunderstood? Are you suggesting certdata.txt is
canonical or not? Otherwise, they can't continue doing it that way - they
would have to use whatever format you adopt, and whatever new tools.

>
> Thought experiment for you: if we decided to make the root store its own
> thing with its own repo and its own release schedule, and therefore NSS
> became a downstream consumer of it, where on occasion someone would
> "take a release" by generating and checking in certdata.txt from
> whatever format we decided to use, what problems would that cause?


Would you see it as being independent, or subservient to Firefox? If you
saw it as independent, then you would presumably need to ensure that - like
today - Firefox-specific needs, like EV or trust restrictions, did not
creep into the general code.

Of course, it seems like your argument is you want to express the Firefox
behaviors either directly in NSS (as some trust restrictions are, via code)
or via some external, shared metafile, which wouldn't be relevant to
non-Firefox consumers.

More broadly, that proposal simply adds more work and moving parts, and
arguably undermines your stated goals - because downstream parties like
those identified are not interested in what the "upstream root store" is
doing - they're interested in what Firefox is doing, and to get that, they
would need to consume certdata.txt as well.

I'm fairly certain we're not on the same page as to what problems consumers
are facing in this space, and this may be contributing to the
misunderstanding. If you look at major parties doing stuff in this space -
Cloudflare's CFSSL, SSLLabs, Censys - the goal is generally "trusted by
Firefox," as the goal is debugging and helping users properly configure.
crt.sh is more interested in "trusted by NSS," due to the policy
enforcement.

That is - there are two separate problems - trusted by browser X, and
trusted by root program Y. We should at least recognize these as related,
but separable problems. The need to identify the former is why, for
example, folks scrape the historic releases (or maintain copies, such as of
the Microsoft CTLs).

>
> > So clearly, we get in situations where not all restrictions are
> expressible.
>
> Sure. As I said, I'm not interested in an arbitrarily complex file
> format, so it will always be possible to come up with restrictions we
> can't encode.


I'm still not sure I understand what you believe is arbitrarily complex.
All restrictions can be encoded - it's a question of whether the complexity
is useful. For example, you could encode a BPF-like state machine for
restrictions - which can be fully encoded and processed, but which would
add code. But one could easily make the argument that a BPF-like filter
library is useful and worthwhile for any number of root stores.

It's very easy to get lost in these games, and so perhaps it may be useful
if you could contemplate what your core goals are, for Mozilla. I'm not
sure it would be fair to express hypotheticals for Apple or others, in
their absence, but I hope you can appreciate why this feels like a lot of
"ambiguous make work," as specified.

But whatever format Apple chooses, unless they go the
> "arbitrary complexity" path, they will have that problem, no?


Are you familiar with how Apple currently expresses their root store and
how applications consume it?

I don't think it is necessary for Mozilla to do anything here, certainly
not yet, certainly not without their involvement and a clearer set of
requirements. Apple currently has a workflow - based on a Ruby script,
Xcode, and processing files in a directory. If you want to know what a
given macOS or iOS version supports, your options are to download the
.tar.gz and attempt to do the same work, but even then, you may not get the
updates a given security patch point release supports (they're generally
documented in the release notes, but not always).

Having them express that in any reliable form, consistent with any changes
- with even a modicum of documentation - is a substantial improvement. The
community already supports certdata.txt and authroots.stl fine - the issue
isn't trying to support N formats (which is trivial relative to the
complexity proposed here for NSS, at least based on the objectives shared
so far), it's needing a format to support at all.

The two formats currently used by the industry are both binary -
Microsoft's, which is documented (in the SDK headers) and PKCS#7, and
Mozilla's, which in distribution is an opaque PKCS#11 DLL that you have to
execute methods on, or in source is a text file expression of a PKCS#11
attribute set. Both permit arbitrary complexity - but only one vendor
(Microsoft) uses it to any great extent. Even if Apple switches to some
unified format, it's likely that it will be a binary file, even if
just a .plist (binary XML-ish), and so the same problems I mentioned will
all still exist.

Rob Stradling

unread,
Jul 5, 2017, 4:03:27 PM7/5/17
to Ryan Sleevi, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On 05/07/17 18:08, Ryan Sleevi wrote:
> On Wed, Jul 5, 2017 at 4:32 AM Gervase Markham wrote:
<snip>
> That is, the difference between, say:
> "label": "Verisign/RSA Secure Server CA"
> And
> CKA_LABEL "Verisign/RSA Secure Server CA"
>
> I would argue there isn't a meaningful difference for "human readability",
> and it's more a subjective preference. Before we fixate on those, I'm
> hoping we should get objective use cases nailed down. That's why I'm trying
> to understand how you're evaluating that spectrum. Is it because it's
> something you'd like to maintain, because you think it should be "readable"
> on a webpage, etc?
>
>> How it is sans-comments is irrelevant, because it has comments. :-)
>
> It isn't, because JSON can't.

Unless...

{"label":"Verisign/RSA Secure Server CA","comments":"These are some
comments"}

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Kai Engert

unread,
Jul 6, 2017, 9:45:20 AM7/6/17
to ry...@sleevi.com, mozilla-dev-security-policy
On Mon, 2017-07-03 at 20:47 -0400, Ryan Sleevi wrote:
> On Mon, Jul 3, 2017 at 11:53 AM, Kai Engert <ka...@kuix.de> wrote:
>
> > > > I suspect, means anyone could plug
> > > > in a modern CI
> >
> > CI meaning Continuous Integration ?
> >
>
> Yes. Gerv's proposal rests on the idea of having a file committed that
> explains it in human-readable and machine-readable (simultaneously) form,
> and then have a continuous integration build translate that into something
> consumable by NSS, and then commit that generated file back into the tree
> (as I understand it). For example, the resulting certdata.txt or certdata.c

OK. Should we go that path, I'd prefer that the new file format live in its
own repository, that the conversion be done as a manual step, and that the
conversion results (certdata.txt for NSS, something else for Firefox EV data,
etc.) get checked in to the NSS and Firefox repositories, together with version
information about the source. This would enable us to compare the converted
results and review them for correctness.

> Right. And JSON can't have comments. So we'd lose substantially in
> expressiveness.

I agree with Rob's comment that comments could be added as attributes, if
necessary. But ideally, everything that's needed as a comment could just be
added as real attributes. The tool that adds new entries could produce the various
human-readable values that humans want to see, like the extracted subject/issuer
names and fingerprints.

It would be good if the tool offered a consistency check, to verify that all
derived attributes match the embedded certificates. (Or simpler, just regenerate
them.)
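
A minimal sketch of such a check (the field names are invented; a real entry
would carry more attributes): recompute the fingerprint from the embedded
certificate bytes and compare it with the stored value.

import base64
import hashlib

def entry_is_consistent(entry) -> bool:
    # "der-base64" and "sha256-fingerprint" are hypothetical field names.
    der = base64.b64decode(entry["der-base64"])
    actual = hashlib.sha256(der).hexdigest()
    stored = entry["sha256-fingerprint"].replace(":", "").lower()
    return actual == stored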


> "Artifact" = generated file run as part of a build process, and then
> checked back in.

Thanks for the explanation.


> > > Thought experiment: Why not have certdata.txt generate a CI artifact that
> > > interoperates for other consumers to use?
> >
> > Are you suggesting that we should convert certdata.txt into a file format
> > that
> > others would prefer to consume?
> >
> > Yes, that's another option.
> >
> > But it wouldn't solve the idea to also store the Mozilla EV attributes in
> > the
> > common place. Firefox developers would have to start converting information
> > found inside NSS to Firefox application code.
> >
>
> I'm not sure I fully understand your response.

My response was based on my interpretation of Gerv's suggestion, which I
understood as follows:
- certdata.txt remains the master, and keeps being maintained and published with NSS
- we define a new file format that's accepted as the standard for several
root stores
- we convert certdata.txt to that interchange format
- we publish the conversion result (the Artifact)

My comment meant, if certdata.txt is the master, and if certdata.txt is supposed
to be the master source for the complete set of CA trust/distrust information,
then it would also be the master place to store any EV attributes.

As a consequence, adding such EV attributes to the Firefox code, if needed,
would require an additional conversion process from certdata.txt to the
subsets that the Firefox code needs to embed.


> The suggestion was that if
> there's some 'other format' that leads interoperability to downstream
> consumers, it 'could' be a path to take certdata.txt and have a tool that
> can generate that 'other format' from certdata.txt.

Understood. I was commenting on the consequence it would have for EV and Firefox
embedded code.


> The purpose of this thought experiment was to find what, if any,
> limitations exist in certdata.txt. You've highlighted a very apt and
> meaningful one, in theory - which is that EV data is a Mozilla Firefox (and
> exclusively Firefox) concept, while trust records are an aspect of the root
> store, hence, the dual expression between Mozilla Firefox source and NSS
> source. If we wanted to make "EV" a portion of NSS (which makes no sense
> for, say, Thunderbird), we could certainly express that - but it means
> carrying around unneeded and unused attributes for other NSS consumers.

Correct. If we defined certdata.txt as the master source for all data, we'd have
to carry all attributes that Firefox needs.

I don't see a problem with that; however, it would require full agreement from
the Firefox developers that certdata.txt is indeed the master location, and
that the Firefox code must never fork this information, but only ever pick up
converted snapshots from certdata.txt. Not sure if this could be enforced.


> I don't disagree we can - on a technical level. But I don't agree that the
> ontology of invented partial distrust holds, nor is it terribly useful to
> try to expect us to generalize distrust for the various ways in which CAs
> fail the community.

Well, the invented partial distrust mechanisms are the status quo, and it seems
this group hasn't been able to identify better practical solutions yet.

Why not document the status quo in a structured way, if it allows other
consumers to benefit? Maybe projects like OpenSSL would start to implement these
rules, too, if they were clearly documented?


> That said, even when thinking about the concepts, the
> fact that the goal is presently woefully underspecified means we cannot
> have a good objective discussion about why "Apply the WoSign policy" is
> better or worse than a notion of "Distrust certificates after this date" -
> or perhaps even a more complex policy, like "Distrust X certificates after
> A date, Y certificates after B date, Z certificates after C date, unless
> conditions M, N, O are also satisfied"

I think so far this discussion was about
"How can we document decisions about partial CA distrust?".

Can't this technical specification issue remain completely separate from the
process of finding answers to the question
"How should the trust list be adjusted to react to CA incidents"?

I believe your point is, today's partial distrust rules were arbitrary
decisions, and in the future, any kind of completely different arbitrary rules
might be decided.

I don't think that's a problem. As long as we define some identifier for each
specific distrust rule (parameterized e.g. by cutoff date), and have some wiki
page that clearly explains the rules for each such category, we still empower
third party consumers to obtain this information more easily.
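
A sketch of what that buys a third-party consumer (rule and field names
invented for illustration): a root carrying a restriction identifier the
consumer does not implement can simply be treated as untrusted.

# Fall back to distrust whenever an entry carries a restriction rule this
# consumer does not know how to enforce.
KNOWN_RULES = {"distrust-certs-issued-after", "restrict-to-domain-whitelist"}

def trusted_for_tls(entry) -> bool:
    for rule in entry.get("restrictions", []):
        if rule["type"] not in KNOWN_RULES:
            return False
    return entry.get("trust", {}).get("server-auth", False)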

Kai

Gervase Markham

unread,
Jul 6, 2017, 10:52:25 AM7/6/17
to Kai Engert, ry...@sleevi.com
On 06/07/17 14:44, Kai Engert wrote:
> My response was based on my interpretation of Gerv's suggestion, which I
> understood as follows:
> - certdata.txt remains the master, keeps maintained and published with NSS
> - we define a new file format that's accepted as the standard for several
> root stores
> - we convert certdata.txt to that interchange format
> - we publish the conversion result (the Artifact)

My apologies. My suggestion is almost what you say, but with the
difference that the new format is the master (as it contains more info
than certdata.txt does) and certdata.txt gets regenerated whenever NSS
takes a new release of the root list, rather than the other way around.

So in this scenario the EV C++ file would be directly generated from the
new format; certdata.txt would not need to carry EV info. In fact, the
file format of certdata.txt would be unchanged.

Gerv

Gervase Markham

unread,
Jul 6, 2017, 10:57:50 AM7/6/17
to Ryan Sleevi
On 05/07/17 18:08, Ryan Sleevi wrote:
> That is, the difference between, say:
> "label": "Verisign/RSA Secure Server CA"
> And
> CKA_LABEL "Verisign/RSA Secure Server CA"

Not much, but you've picked the clearest part of certdata.txt to compare :-)

> It isn't, because JSON can't.

As Rob notes, you can basically have them in all but name.

> I'm thinking you may have misunderstood? Are you suggesting certdata.txt is
> canonical or not? Otherwise, they can't continue doing it hat way - they
> would have to use whatever format you adopt, and whatever new tools.

I apologise that I seem not to have made this clear; my suggestion is
that the new file is canonical and (near-)complete, and certdata.txt,
ExtendedValidation.cpp and other files get generated from it, whenever
NSS/Firefox want to take a new release of the root store.

> Would you see it being as independent, or subservient to Firefox? If you
> saw it as independent, then you would presumably need to ensure that - like
> today - Firefox-specific needs, like EV or trust restrictions, did not
> creep into the general code.

I don't think that follows. EV trustworthiness is a property of the root
store. The root program makes those decisions, and it's entirely
appropriate that they be encoded in root program releases. We also make
decisions on "trust restrictions", so I'm not sure why you call that a
"Firefox-specific need".

> Of course, it seems like your argument is you want to express the Firefox
> behaviors either directly in NSS (as some trust restrictions are, via code)
> or via some external, shared metafile, which wouldn't be relevant to
> non-Firefox consumers.

Perhaps this is the disconnect. Several non-Firefox consumers have said
they are very interested in an encoding of the root program's partial
trust decisions.

> doing - they're interested in what Firefox is doing, and to get that, they
> would need to consume certdata.txt as well.

No, because they could consume whatever copy of the upstream file
Firefox had imported.

I don't expect "Mozilla's root store's trust view" and "Trusted by
Firefox" ever to diverge, apart from due to time skew, and perhaps
occasionally due to unencodeable restrictions.

Anyway, off on holiday, back in 3 weeks :-)

Gerv

Ryan Sleevi

unread,
Jul 6, 2017, 11:32:55 AM7/6/17
to Gervase Markham, Ryan Sleevi, mozilla-dev-security-policy
On Thu, Jul 6, 2017 at 10:57 AM, Gervase Markham <ge...@mozilla.org> wrote:

> On 05/07/17 18:08, Ryan Sleevi wrote:
> > That is, the difference between, say:
> > "label": "Verisign/RSA Secure Server CA"
> > And
> > CKA_LABEL "Verisign/RSA Secure Server CA"
>
> Not much, but you've picked the clearest part of certdata.txt to compare
> :-)
>

Sure - because you haven't given much of a sense for what human readability
means. That is, whether or not \104\143 is more or less readable than 68:8F
(hex) or aI8= (base64) or NCHQ==== (base32), as an example.

The presumption here seems to be "format that I'm familiar with", but
that's a fairly subjective read. We already have machine-readability, and
we've established that tool-generation is strongly preferred (both for
correctness and consistency), so human-writability does not seem like an
agreed-upon goal. So where does human-readability factor in, and does it
make more sense to derive human-readability from the existing
machine-readability?


>
> > It isn't, because JSON can't.
>
> As Rob notes, you can basically have them in all but name.
>

I don't think that really holds, but I'm surprised to see no one pointing
it out yet.

For example, there is a meaningful difference between

# This is the CA with serial abcd
CKA_LABEL UTF8 "Verisign/RSA Secure Server CA"

# This is the hash 00:ab:cd:ef
CKA_CERT_SHA1_HASH MULTILINE_OCTAL
\104\143\305\061\327\314\301\000\147\224\141\053\266\126\323\277
\202\127\204\157
END

If you wanted to express that in JSON, using Rob's bit, you'd end up with
{
"label": "VeriSign/RSA Secure Server CA",
"comment": "This is the CA with serial abcd"
},
{
"sha1_hash": "\x00\xab\xcd\xed",
"comment": "This is the hash 00:ab:cd:ef"
}

Except that wouldn't be a valid JSON string (or at least, not all
expressible byte sequences are, as they'd result in invalid unicode
sequences), so you'd have to do a further transformation, such as base64
decoding (or de-hexing), which means it's once again less human-maintainable.

I suspect we're at risk of ratholing here, but the lack of JSON
comments is a well-known limitation that continually negatively affects
those who pursue JSON schemas, so we should not be so quick to brush away
what is frequently a maintenance complaint.


> > Would you see it being as independent, or subservient to Firefox? If you
> > saw it as independent, then you would presumably need to ensure that -
> like
> > today - Firefox-specific needs, like EV or trust restrictions, did not
> > creep into the general code.
>
> I don't think that follows. EV trustworthiness is a property of the root
> store. The root program makes those decisions, and it's entirely
> appropriate that they be encoded in root program releases. We also make
> decisions on "trust restrictions", so I'm not sure why you call that a
> "Firefox-specific need".
>

EV trustworthiness is an aspect of the application code - in this case, a
Web browser with UI surface being exposed. Do you believe EV makes sense
for, say, a utility like cURL or wget? Or for an application like PHP? Does
the EV issuance status of a CA affect something like Thunderbird?

Or consider other stores - like Chrome - in which EV-SSL status is granted
not solely by the presence of a policy, but also by the associated Certificate
Transparency information. One cannot equivalently determine EV status
solely based on a policy status - it's more than that.


> > Of course, it seems like your argument is you want to express the Firefox
> > behaviors either directly in NSS (as some trust restrictions are, via
> code)
> > or via some external, shared metafile, which wouldn't be relevant to
> > non-Firefox consumers.
>
> Perhaps this is the disconnect. Several non-Firefox consumers have said
> they are very interested in an encoding of the root program's partial
> trust decisions.
>

Could you recall where this happened? It doesn't seem to be from this thread,
beyond Kai's remarks, but perhaps you're evaluating against the previous
threads?

> No, because they could consume whatever copy of the upstream file
> Firefox had imported.
>
> I don't expect "Mozilla's root store's trust view" and "Trusted by
> Firefox" ever to diverge, apart from due to time skew, and perhaps
> occasionally due to unencodeable restrictions.
>

But they already do, regularly. Compare Firefox ESR with Firefox Beta with
Firefox stable, and then compare that with NSS releases (and different OS
distributions of those releases). There is already an inherent divergence.

bel...@gmail.com

unread,
Jul 25, 2017, 7:20:49 AM7/25/17
to mozilla-dev-s...@lists.mozilla.org
Hello Kai,

On Friday, June 30, 2017 at 7:38:59 PM UTC+3, Kai Engert wrote:
> Hello Gerv,
>
> I think the new format should be as complete as possible, including both trust
> and distrust information, including EV and description of rules for partial
> distrust.
>
> As of today, certdata.txt contains:
> - whitelisted root CAs (trusted for one or more purposes)
> - distrusted/blacklisted certificates (which can be either CAs, intermediate
> CAs or end entity certificates), based on varying identification criteria
> (sometimes we distrust all matches based on issuer/serial, 
> sometimes we are more specific and only distrust if the certificate also
> matches exactly a specific hash)
>
> But it doesn't list the additional decisions that Mozilla has implemented in
> code:
> - additional domain name constraints
> - additional validity constraints for issued certificates
> - additional required whitelist matching

...

> We could define identifiers for each class of trust restrictions (CTR), e.g.:
> - permitted name constraint
> - excluded name constraints
> - restricted to serial/name whitelist
> - not valid for serial/name blacklist
> - restrict validity period of root CA
> - restrict allowed validity of issued EE or intermediates
> - require successful revocation checking
> - require successful Certificate Transparency lookup
> - ...
>
> This list could be expanded in the future, so a list consumer that has
> implemented all of the older CTRs could decide to not trust new CAs that have
> unknown CTRs defined.


Let me introduce an IETF draft https://datatracker.ietf.org/doc/draft-belyavskiy-certificate-limitation-policy/

The draft is in its initial phase. I gave a presentation based on it during the SAAG meeting at IETF 99. It describes a possible format for such a list of limitations applied to trusted certificates. The specification is designed to avoid, as far as possible, hard-coding limitations in applications.

So if there is any interest in improving and finalizing the draft, that would be great. I have already received some interest in it from the OpenSSL team.

--
Sincerely yours, Dmitry Belyavskiy