Restricting roots to one TLD


Gervase Markham

Mar 12, 2007, 11:56:55 AM
I am interested in investigating with the NSS developers whether it
would be possible to restrict a particular root certificate to signing
end entity certificates only for domains with a particular TLD.

For example, I would like to admit the CA of the Government of Lilliput
to the root store, because it meets most of the criteria. However, they
don't have an audit (or perhaps their audit documents are classified).
This is understandable; the citizens of Lilliput must already trust
their government anyway (or not); an audit would achieve very little to
enhance that confidence.

However, because citizens of the rest of the world should not be
required to trust the government of Lilliput, I would like to make it so
that chains ending at their root are only reported as valid if the
domain name in question ends in .ll (the Lilliputian TLD).

Is this technically feasible? Would this function be best implemented in
NSS or at a higher level?

Gerv

Wan-Teh Chang

Mar 12, 2007, 8:09:23 PM
to dev-tec...@lists.mozilla.org

Yes, this is technically feasible. The CA's certificate can have
a "name constraints" extension that constrains the domain names in
the .ll name space. See RFC 3280, Section 4.2.1.11 Name Constraints.
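The subtree-matching rule from that section can be sketched in a few lines of Python. This is only an illustration of the rule, with invented helper names; NSS implements this in C:

```python
def dns_name_matches(name, base):
    """RFC 3280-style dNSName subtree match: any name formed by adding
    labels to the left of `base` satisfies the constraint."""
    name = name.lower().rstrip(".")
    base = base.lower().strip(".")
    return name == base or name.endswith("." + base)

def permitted(name, permitted_subtrees):
    """A name is acceptable if it falls within at least one permitted subtree."""
    return any(dns_name_matches(name, b) for b in permitted_subtrees)

# A hypothetical Lilliputian government root constrained to the .ll TLD:
print(permitted("www.gov.ll", ["ll"]))       # True
print(permitted("www.example.com", ["ll"]))  # False
```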

Wan-Teh

Frank Hecker

Mar 12, 2007, 8:50:55 PM
Wan-Teh Chang wrote:
> Gervase Markham wrote:
>> I am interested in investigating with the NSS developers whether it
>> would be possible to restrict a particular root certificate to signing
>> end entity certificates only for domains with a particular TLD.

In this context Gerv's reference is to end-entity certs used for
SSL/TLS-enabled servers. My understanding is that name constraints could
be used in a similar way for S/MIME certs, e.g., requiring that the
subject alternative name end in .ll or whatever. I'm guessing for object
signing certs one could implement a restriction based on the presence
of, e.g., C=LL in the subject DN.

> Yes, this is technically feasible. The CA's certificate can have
> a "name constraints" extension that constrains the domain names in
> the .ll name space. See RFC 3280, Section 4.2.1.11 Name Constraints.

Of course using name constraints in the classic sense requires the
cooperation of the CA (since they have to add the extension to the CA
cert). I think Gerv was thinking of the more general case where for
policy reasons we might want to impose constraints on a CA even in the
case where there was no name constraints extension.

I guess if the NSS code already has code to implement name constraints
anyway (i.e., keying off the extension) then it would be in theory
possible for the code to optionally act as if name constraints were in
effect even if they are not specified in the CA cert itself. (E.g., the name
constraints information could be stored as metadata with the root CA.)
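A rough sketch of what that metadata approach might look like, in Python with invented names (real NSS/PSM code would differ): the constraints are keyed off the root's fingerprint rather than off any extension in the cert, so they apply even to roots that carry no name constraints extension.

```python
import hashlib

# Hypothetical metadata table shipped with the root store: root-certificate
# fingerprint -> permitted dNSName subtrees, imposed by policy and
# independent of any extension actually present in the root cert itself.
EXTERNAL_NAME_CONSTRAINTS = {
    # sha256 fingerprint of the (made-up) Lilliput government root would go here
}

def root_fingerprint(root_der):
    """Key the metadata off the root's SHA-256 fingerprint."""
    return hashlib.sha256(root_der).hexdigest()

def check_external_constraints(root_der, server_name):
    """Run after normal chain validation: roots absent from the table are
    unconstrained; listed roots must match one of their permitted subtrees."""
    subtrees = EXTERNAL_NAME_CONSTRAINTS.get(root_fingerprint(root_der))
    if subtrees is None:
        return True  # no externally imposed constraints for this root
    name = server_name.lower().rstrip(".")
    return any(name == b or name.endswith("." + b) for b in subtrees)
```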

Frank

--
Frank Hecker
hec...@mozillafoundation.org

Gervase Markham

Mar 13, 2007, 6:26:29 AM
Frank Hecker wrote:
> Of course using name constraints in the classic sense requires the
> cooperation of the CA (since they have to add the extension to the CA
> cert). I think Gerv was thinking of the more general case where for
> policy reasons we might want to impose constraints on a CA even in the
> case where there was no name constraints extension.

That is correct.

And (forgive my lack of knowledge here) is it possible that even if the
government of Lilliput were happy for their root cert to be restricted
in this way, they may not technically be able to reissue it with the
constraint built in? And even if they are technically able to, they may
not wish to.

> I guess if the NSS code already has code to implement name constraints
> anyway (i.e., keying off the extension) then it would be in theory
> possible for the code to optionally act as if name constraints were in
> effect even if they are not specified in the CA cert itself. (E.g., the name
> constraints information could be stored as metadata with the root CA.)

That sounds, to my untrained ear, like a reasonable implementation
strategy which would achieve the goal.

Gerv

Bob Relyea

Mar 13, 2007, 2:40:56 PM
to Frank Hecker, dev-tec...@lists.mozilla.org
Frank Hecker wrote:

> Wan-Teh Chang wrote:
>
>> Gervase Markham wrote:
>>
>>> I am interested in investigating with the NSS developers whether it
>>> would be possible to restrict a particular root certificate to
>>> signing end entity certificates only for domains with a particular TLD.
>>
>
> In this context Gerv's reference is to end-entity certs used for
> SSL/TLS-enabled servers. My understanding is that name constraints
> could be used in a similar way for S/MIME certs, e.g., requiring that
> the subject alternative name end in .ll or whatever. I'm guessing for
> object signing certs one could implement a restriction based on the
> presence of, e.g., C=LL in the subject DN.
>
>> Yes, this is technically feasible. The CA's certificate can have
>> a "name constraints" extension that constrains the domain names in
>> the .ll name space. See RFC 3280, Section 4.2.1.11 Name Constraints.
>
>
> Of course using name constraints in the classic sense requires the
> cooperation of the CA (since they have to add the extension to the CA
> cert). I think Gerv was thinking of the more general case where for
> policy reasons we might want to impose constraints on a CA even in the
> case where there was no name constraints extension.

In addition, we only parse these kinds of constraints on intermediate
certs (we currently don't have a mechanism to place name constraints on
a trusted root). Even if the trusted root had constraints itself, they
would be ignored once we identify the cert as trusted.

Paul Hoffman

Mar 13, 2007, 7:14:35 PM
to Bob Relyea, dev-tec...@lists.mozilla.org
At 11:40 AM -0700 3/13/07, Bob Relyea wrote:
>In addition, we only parse these kinds of constraints on
>intermediate certs (we currently don't have a mechanism to place
>name constraints on a trusted root). Even if the trusted root had
>constraints itself, they would be ignored once we identify the cert
>as trusted.

A related question that I was intending to do some research on: if a
trust anchor ("trusted root" in this thread) has an expiration date
in the past, does NSS still treat it as a trust anchor, or does it
ignore it?

Gervase Markham

Mar 14, 2007, 6:00:21 AM
Paul Hoffman wrote:
> A related question that I was intending to do some research on: if a
> trust anchor ("trusted root" in this thread) has an expiration date in
> the past, does NSS still treat it as a trust anchor, or does it ignore it?

I can't say for certain because I haven't seen the code, but I would
certainly hope it ignores it!

Then again, this situation almost never arises, because CAs create roots
for a duration of 30 years, and then deprecate them after five years or so.

Gerv

Gervase Markham

Mar 14, 2007, 6:03:29 AM
Bob Relyea wrote:
> In addition, we only parse these kinds of constraints on intermediate
> certs (we currently don't have a mechanism to place name constraints on
> a trusted root). Even if the trusted root had constraints itself, they
> would be ignored once we identify the cert as trusted.

Would someone be able to estimate how much work it would be to extend
the name constraints mechanism to the trusted roots, and add the ability
for NSS to store its own name constraints, to be added to any in the
root itself?

As you know, we do have a number of government CAs under consideration,
and I would like to at least be able to propose that we make it policy
that government-run or controlled CAs are trusted only for their
country's TLD. I think it's only fair. But clearly I can't do that if
it's a great deal of work, or if the NSS team is opposed to it for
technical or other reasons.

Gerv

Paul Hoffman

Mar 14, 2007, 1:47:44 PM
to Gervase Markham, dev-tec...@lists.mozilla.org
At 10:00 AM +0000 3/14/07, Gervase Markham wrote:
>Paul Hoffman wrote:
>>A related question that I was intending to do some research on: if
>>a trust anchor ("trusted root" in this thread) has an expiration
>>date in the past, does NSS still treat it as a trust anchor, or does
>>it ignore it?
>
>I can't say for certain because I haven't seen the code, but I would
>certainly hope it ignores it!

I would hope that NSS *would* use this information because it is what
the CA has asserted about itself. RFC 3280 does not require that the
processor use this information.

>Then again, this situation almost never arises, because CAs create
>roots for a duration of 30 years, and then deprecate them after five
>years or so.

That is not a good way to set a security policy. There are many
reasons why an organization might want to push a locally-generated
trust root to its users, but would not want that trust root to be
valid forever, particularly if they don't fully trust their ability
to keep the private key secret for the long term.

So, does someone who knows how to read the appropriate part of the
code have an answer about this?

--Paul Hoffman

Nelson Bolyard

Mar 16, 2007, 6:58:03 PM
Gervase Markham wrote:
> Bob Relyea wrote:
>> In addition, we only parse these kinds of constraints on intermediate
>> certs (we currently don't have a mechanism to place name constraints
>> on a trusted root). Even if the trusted root had constraints itself,
>> they would be ignored once we identify the cert as trusted.
>
> Would someone be able to estimate how much work it would be to extend
> the name constraints mechanism to the trusted roots, and add the ability
> for NSS to store its own name constraints, to be added to any in the
> root itself?

I won't estimate in man hours, but I will say that
a) what is being proposed is non-standard behavior, and implementing it in
NSS would require non-standard extensions to standards-based APIs.
b) it would require changes to many different parts of NSS, if it is done
inside of NSS.
c) it could be done by PSM, and perhaps it should be.

Your proposal would require storing the equivalent of a name constraints
extension along with the root CA cert. It would also require additional
processing, because name constraints are generally not processed inside
trust anchors. That is, usually a CA puts the name constraints extension
into subordinate CA certs that it issues, and a root CA does not attempt
to constrain itself. Standard RFC 3280 cert chain validation does not
expect the "trust anchor" (root) to have any such constraints, and the
algorithm in that standard does not use such info in a root cert, if present.

(As an aside, EV also requires non-standard behavior and if implemented in
NSS would also require similar (though fewer) non-standard extensions. EV
requires the storage of one or more EV policy OIDs that are permitted to be
used in certs issued by the root CA and its subordinates. (One universal EV
policy OID would have eliminated this extra burden). EV requires additional
non-standard processing, during the validation of a cert chain, to check that
an EE cert's policy OID is the one permitted for that root CA.)
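That per-root EV check might look roughly like the following sketch, with made-up names and an illustrative OID rather than the eventual PSM/NSS code:

```python
# Each EV-enabled root is stored together with the policy OID(s) it is
# permitted to assert; a single universal EV OID would have collapsed
# this table to one entry, as Nelson notes.
EV_POLICY_OIDS = {
    "Example EV Root CA": {"2.16.840.1.114028.10.1.2"},  # illustrative OID
}

def is_ev(root_name, ee_policy_oids):
    """Non-standard post-validation step: the EE cert's certificatePolicies
    must include an OID that this particular root is trusted to assert."""
    allowed = EV_POLICY_OIDS.get(root_name, set())
    return bool(allowed & set(ee_policy_oids))
```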

NSS stores certificates and meta-information about certificates in its
cert DB which is inside of a PKCS#11 module. The interface to that data
is the PKCS#11 API, which is a standardized API.

PKCS#11 allows vendor-defined extensions to its API. It allows vendor-
defined object types and vendor-defined attributes for standard objects.

NSS already uses vendor defined "trust" objects to store trust information
about certs stored in the PKCS#11 module. The extensions for your proposal
and for EV would necessitate creating more vendor-defined extensions
(objects and/or attributes). It would require
a) database changes
b) PKCS#11 module changes,
c) changes to the cert chain validation code (which has recently been
reimplemented to be fully RFC 3280 compliant, albeit this work is not
yet integrated).

Alternatively these non-standard extensions could be implemented in PSM.
PSM would be free to store the information in any non-standard format
it likes. After a cert chain has been validated by NSS's cert path
validation code, PSM could do additional checks on policy OIDs and/or
RFC 822 names according to the rules/info it has stored.

Since mozilla clients are likely the only NSS-based applications that
will want these extensions, I think a case can be made that these
extensions should be done in PSM rather than in NSS.

Gervase Markham

Mar 20, 2007, 8:08:07 AM
Nelson Bolyard wrote:
> Your proposal would require storing the equivalent of a name constraints
> extension along with the root CA cert. It would also require additional
> processing, because name constraints are generally not processed inside
> trust anchors. That is, usually a CA puts the name constraints extension
> into subordinate CA certs that it issues, and a root CA does not attempt
> to constrain itself. Standard RFC 3280 cert chain validation does not
> expect the "trust anchor" (root) to have any such constraints, and the
> algorithm in that standard does not use such info in a root cert, if present.

OK. And the root certificate is treated as a special case in the cert
chain validation? In other words, the work would be more than doing the
equivalent of changing:

if (!root)
{
    check_name_constraints();
}

into

if (!root || cert.has_external_name_constraints_imposed)
{
    check_name_constraints();
}

?

> (As an aside, EV also requires non-standard behavior and if implemented in
> NSS would also require similar (though fewer) non-standard extensions. EV
> requires the storage of one or more EV policy OIDs that are permitted to be
> used in certs issued by the root CA and its subordinates. (One universal EV
> policy OID would have eliminated this extra burden).

...but would have removed the possibility of us easily revoking the EV
status of some CAs, but not others.

> Alternatively these non-standard extensions could be implemented in PSM.
> PSM would be free to store the information in any non-standard format
> it likes. After a cert chain has been validated by NSS's cert path
> validation code, PSM could do additional checks on policy OIDs and/or
> RFC 822 names according to the rules/info it has stored.

I presume it has access to the information it would need via the API?

> Since mozilla clients are likely the only NSS-based applications that
> will want these extensions, I think a case can be made that these
> extensions should be done in PSM rather than in NSS.

OK. It's good to know that you have no objections to it being added in
this way.

My concern about implementing it "on top of" NSS is that, as I
understand it, it would require a parallel database to be kept of
"certificate metadata". Would this not raise architectural objections?

Gerv

Kyle Hamilton

Mar 20, 2007, 7:19:04 PM
to Gervase Markham, dev-tec...@lists.mozilla.org, Kyle Hamilton
I thought we'd had this type of conversation before... or maybe it
was on the TLS discussion list, and I'm not remembering. Regardless...

A "trust anchor" is a public key. (It's not a certificate that
contains the public key, or anything which can be validated with the
public key -- it's the public key itself. It just so happens that an
X.509 certificate is a convenient package for it.)

A "CA" is a string of characters. It's not a public key or a trust
anchor. (This is the concept that made possible the "S/MIME
including bogus CA certificate disables certificate validation"
attack of a few years ago.)

All of the workarounds that have been emplaced are limited,
necessarily, by these two concepts. Now, you're advocating placing
an external limit on the trust allowed to be delegated from a trust
anchor. (which is also what EV requires.) However, this is an
explicit violation of the concept of a trust anchor: a trust anchor
is the anchor for absolute trust.

Within the X.500 model, the better way to do this would be to
generate a CA for Mozilla, convert the self-signed trust anchors into
CSRs, have the Mozilla CA re-sign them with the appropriate
restrictions and validations, and embed the Mozilla root into the
library as the absolute trust anchor. (This would also provide an
"audit trail" for the operations performed... i.e., "who said that
this CA can't sign a bank's certificate?" "Why can't I verify this
person's cert even though he's a citizen of the country that signed
it, and he's living abroad at the moment?" and others.)

I am ABSOLUTELY against any concept of a "silent and unaccountable"
restriction being placed, on anything. With unaccountability and
silence comes a "what the hell is it set that way for? I'll just fix
it..." mentality without the benefit of being able to see the
reasoning behind it. At least if there's a true anchor that signs
things, an explanatory URL could be placed in its X.509 package and
they could see an explanation of the reasoning.

-Kyle H


Gervase Markham

Mar 21, 2007, 6:44:23 AM
Kyle Hamilton wrote:
> I thought we'd had this type of conversation before... or maybe it was
> on the TLS discussion list, and I'm not remembering. Regardless...

I don't remember participating in one; maybe I wasn't around, or maybe
it was elsewhere. Regardless, you need to dust off your trusty cluebat.

> A "trust anchor" is a public key. (It's not a certificate that contains
> the public key, or anything which can be validated with the public key
> -- it's the public key itself. It just so happens that an X.509
> certificate is a convenient package for it.)
>
> A "CA" is a string of characters. It's not a public key or a trust
> anchor. (This is the concept that made possible the "S/MIME including
> bogus CA certificate disables certificate validation" attack of a few
> years ago.)
>
> All of the workarounds that have been emplaced are limited, necessarily,
> by these two concepts. Now, you're advocating placing an external limit
> on the trust allowed to be delegated from a trust anchor. (which is
> also what EV requires.) However, this is an explicit violation of the
> concept of a trust anchor: a trust anchor is the anchor for absolute trust.

We already place external limits on trust anchors in this way. For
example, certificates in the root store can be allowed to sign websites
(or not), emails (or not) and code (or not). Why are you happy with a
distinction being made between "websites" and "email", but not with one
being made between "this set of websites" and "that set of websites"?

In fact, one could argue that the Mozilla Foundation is already the
ultimate trust anchor, as we choose the certificates to place in the
root store. Most users of products which use the store (e.g. Firefox)
are ultimately trusting us to make good decisions about what CA root
certs to include.

> Within the X.500 model, the better way to do this would be to generate a
> CA for Mozilla, convert the self-signed trust anchors into CSRs, have
> the Mozilla CA re-sign them with the appropriate restrictions and
> validations, and embed the Mozilla root into the library as the absolute
> trust anchor.

We could do this. However, it would place a highly important private key
into the hands of an organisation with little experience in key
management at the level required (that is, the Mozilla Project). I'm not
sure the security of the system would improve as a result.

I guess we could outsource this function. But that has its own issues,
many of which I'm sure you can imagine.

> (This would also provide an "audit trail" for the
> operations performed... i.e., "who said that this CA can't sign a bank's
> certificate?"

Is there any audit trail today if you go to a site with a CAcert
certificate, get an error dialog and ask "who said that this CA can't
sign this site's certificate"?

> I am ABSOLUTELY against any concept of a "silent and unaccountable"
> restriction being placed, on anything. With unaccountability and
> silence comes a "what the hell is it set that way for? I'll just fix
> it..." mentality without the benefit of being able to see the reasoning
> behind it. At least if there's a true anchor that signs things, an
> explanatory URL could be placed in its X.509 package and they could see
> an explanation of the reasoning.

I agree that we need a "lack of silence" and we need accountability for
our actions in this regard, but they do not have to be embedded in the
certificate chain.

I fancy that people who are technically capable of, when faced with a
certificate problem, analysing the chain, finding the embedded URL which
explains the policy, visiting it and reading it, are also capable of
(in the alternative technical scenario) realising that the organisation
which shipped the product must have put a restriction on, and heading
over to their website to find out why (if it's not obvious).
Particularly if the user agent makes it clear why the error has occurred.

Gerv

Kyle Hamilton

Mar 21, 2007, 4:09:51 PM
to Gervase Markham, dev-tec...@lists.mozilla.org
On 3/21/07, Gervase Markham <ge...@mozilla.org> wrote:
> >
> > All of the workarounds that have been emplaced are limited, necessarily,
> > by these two concepts. Now, you're advocating placing an external limit
> > on the trust allowed to be delegated from a trust anchor. (which is
> > also what EV requires.) However, this is an explicit violation of the
> > concept of a trust anchor: a trust anchor is the anchor for absolute trust.
>
> We already place external limits on trust anchors in this way. For
> example, certificates in the root store can be allowed to sign websites
> (or not), emails (or not) and code (or not). Why are you happy with a
> distinction being made between "websites" and "email", but not with one
> being made between "this set of websites" and "that set of websites"?

All told, I'm not. I think it's a horribly coarse and nearly
completely useless way of doing things. See, identity is identity.
The only function of limiting the types of things that a root can
sign certificates for is to raise the bar and force people who want to
do certain things (like sign code) to get identity certificates from
more expensive sources. The net result is something that smells of
collusion: "If you want to run code on our browsers, you have to pay
dearly for the privilege." The only thing that makes it stink less is
that the browser manufacturer doesn't get a cut of the profit.

To be perfectly honest, in my view X.509 is nearly completely broken
as a protocol, and even more broken as a paradigm. The original
specification was created at a time when the only users of
cryptography were spies and national
intelligence/counterintelligence networks, and had no realistic
experience to back it up. X.509v2 was intended to patch it to say
"oh, wait, we need to have some means of telling people when a
certificate is invalid if it becomes invalid before it expires".
X.509v3 wasn't even done by the ITU, it was done by the IETF.

However, idealistic arguments aren't going to sway anyone. For better
or for worse, we have an infrastructure that requires additional
patching to make up for the deficiencies in the paradigm (including
the concept of "an anchor is an emplacement of absolute trust", when
the original paradigm also specified a singleton trust anchor as "the
anchor is the emplacement of absolute trust").

Incidentally, this problem has been in place since before the ITAR was
replaced by the EAR -- I recall "how to get 128-bit encryption" how-to
documents and tools for Netscape Navigator's certificate store. (This
was during the days of server gated cryptography.)

> In fact, one could argue that the Mozilla Foundation is already the
> ultimate trust anchor, as we choose the certificates to place in the
> root store. Most users of products which use the store (e.g. Firefox)
> are ultimately trusting us to make good decisions about what CA root
> certs to include.

No, a 'trust anchor' is a technical location where all trust can be
proven to devolve from (the private key, with the one-to-one
correspondence to the public key).

The choice of certificates is made by an authority. Without an
anchor, though, it's possible to willy-nilly add certificates to the
database and mark them as trusted (and I hope no one decides to do so
in a combination phishing attack).

> > Within the X.500 model, the better way to do this would be to generate a
> > CA for Mozilla, convert the self-signed trust anchors into CSRs, have
> > the Mozilla CA re-sign them with the appropriate restrictions and
> > validations, and embed the Mozilla root into the library as the absolute
> > trust anchor.
>
> We could do this. However, it would place a highly important private key
> into the hands of an organisation with little experience in key
> management at the level required (that is, the Mozilla Project). I'm not
> sure the security of the system would improve as a result.

This is part of why I say X.509 is so broken. The paradigms in place
only work when reality is clubbed like a baby seal to "conform". They
don't really work in any kind of open-source environment, and
implementing them requires a certain degree of organizational
inflexibility.

The folks at Microsoft have done a lot more thinking about the concept
of a public key infrastructure, and what it means, and what kinds of
extensions have to be in place to properly support it. One of their
extensions is "this certificate will be used to verify organizational
acceptance of lists of trusted roots." I haven't the faintest idea
how to USE it, but it at least exists in their certificate services
server.

> Is there any audit trail today if you go to a site with a CAcert
> certificate, get an error dialog and ask "who said that this CA can't
> sign this site's certificate"?

That's straightforward when the anchor isn't in the store.

What happens when the anchor is already there?

> > I am ABSOLUTELY against any concept of a "silent and unaccountable"
> > restriction being placed, on anything. With unaccountability and
> > silence comes a "what the hell is it set that way for? I'll just fix
> > it..." mentality without the benefit of being able to see the reasoning
> > behind it. At least if there's a true anchor that signs things, an
> > explanatory URL could be placed in its X.509 package and they could see
> > an explanation of the reasoning.
>
> I agree that we need a "lack of silence" and we need accountability for
> our actions in this regard, but they do not have to be embedded in the
> certificate chain.

I completely disagree, partly for the "local store manipulation"
reason earlier, and partly because they give a full affirmation that a
given policy was intended.

> I fancy that people who are technically capable of, when faced with a
> certificate problem, analysing the chain, finding the embedded URL which
> explains the policy, visiting it and reading it, are also capable of
> (in the alternative technical scenario) realising that the organisation
> which shipped the product must have put a restriction on, and heading
> over to their website to find out why (if it's not obvious).

Challenge: go to http://www.mozilla.org/ and find the root inclusion
policy following links (and only following links) from that URL.
Report how many links you had to go through, and how many unhelpful or
otherwise useless links you also followed. For bonus points, find a
root inclusion policy on a TLS/SSL-encrypted page served with a
"Mozilla Foundation" certificate which additionally states all of the
approved root certificates and their thumbprints.

Embedding a URL in the certificate chain makes things a lot easier for
the person who's trying to do the troubleshooting. It also makes the
support burden for the browser vendor lighter.

> Particularly if the user agent makes it clear why the error has occurred.

Much of this set of problems could be worked around if we could touch
the chrome, but we've had many arguments on this list about the fact
that we can't. Has this policy been changed?

-Kyle H

Gervase Markham

Mar 22, 2007, 7:58:20 AM
Kyle Hamilton wrote:
> See, identity is identity.

I don't agree.

"This site's identity is www.example.com" is a different sort of
identity to "This site is owned and operated by Foo Corp. of Bermuda",
which is again different to "This site is owned and operated by Gervase
Markham, of Enfield, London, UK, passport number XXXXXXXX". But all have
their place.

> The only function of limiting the types of things that a root can
> sign certificates for is to raise the bar and force people who want to
> do certain things (like sign code) to get identity certificates from
> more expensive sources.

Or, alternatively, sources which make correspondingly more effort to
ascertain that you are who you say you are. This has the side effect of
making the work take more time, and therefore cost more.

> To be perfectly honest, in my view X.509 is nearly completely broken
> as a protocol, and even more broken as a paradigm.

You are entitled to hold that view. However, we're not planning to ditch
the current way of securing user <-> site communications any time soon,
and I suspect we aren't about to reinvent the certificate wheel either.

>> In fact, one could argue that the Mozilla Foundation is already the
>> ultimate trust anchor, as we choose the certificates to place in the
>> root store. Most users of products which use the store (e.g. Firefox)
>> are ultimately trusting us to make good decisions about what CA root
>> certs to include.
>
> No, a 'trust anchor' is a technical location where all trust can be
> proven to devolve from (the private key, with the one-to-one
> correspondence to the public key).

<sigh> You know what I meant, surely? "The Mozilla Foundation is
ultimately the location where all trust can be proven to devolve from",
with the proof being that you downloaded the software from us and then
ran it on your machine and used it to access websites/send email.

> The choice of certificates is made by an authority. Without an
> anchor, though, it's possible to willy-nilly add certificates to the
> database and mark them as trusted.

Currently, that's regarded as a feature, although ordinary users are
discouraged from using it. What good would it serve to try and put
technical measures in place to prevent additions to the root store?

> That's straightforward when the anchor isn't in the store.
>
> What happens when the anchor is already there?

The error dialog says "Sorry, the CA who signed this certificate is not
permitted to sign a certificate for this website." or similar, better
wording.

> Challenge: go to http://www.mozilla.org/ and find the root inclusion
> policy following links (and only following links) from that URL.
> Report how many links you had to go through, and how many unhelpful or
> otherwise useless links you also followed.

Site Map | Security Center | Mozilla CA certificate policy. No
backtracking required.

That's not to say that it couldn't be more obvious. But we also have to
face the fact that 99.99% of people don't give a stuff about our root
inclusion policy.

> For bonus points, find a
> root inclusion policy on a TLS/SSL-encrypted page served with a
> "Mozilla Foundation" certificate which additionally states all of the
> approved root certificates and their thumbprints.

The approved roots are the ones in the store that you get when you
download e.g. Firefox (which is signed by a MoFo certificate).

>> Particularly if the user agent makes it clear why the error has occurred.
>
> Much of this set of problems could be worked around if we could touch
> the chrome, but we've had many arguments on this list about the fact
> that we can't. Has this policy been changed?

People keep stating that we can't, but I've not seen any evidence of
this, despite asking for it more than once. Who is the phantom evildoer
who blocks all chrome changes on principle?

Kai made some chrome changes recently, I believe. He rewrote some error
messages to be better.

Gerv

Kyle Hamilton

Mar 22, 2007, 12:13:15 PM3/22/07
to Gervase Markham, dev-tec...@lists.mozilla.org
On 3/22/07, Gervase Markham <ge...@mozilla.org> wrote:
> Kyle Hamilton wrote:

>
> > The only function served by limiting the types of things that a root
> > can sign certificates for is to raise the bar and force people who
> > want to do certain things (like sign code) to get identity
> > certificates from more expensive sources.
>
> Or, alternatively, sources which make correspondingly more effort to
> ascertain that you are who you say you are. This has the side effect of
> making the work take more time, and therefore cost more.

Any individual or organization that can run code on my machine can do
anything to my machine that the code allows them to do. How am I
protected, for example, just by knowing that a given piece of software
was signed by Sony BMG when it inserts a driver into the stack of my
Windows XP machine's CD-ROM functionality? "I know who to sue" --
that's been the historical statement of why X.509 is necessary -- but
quite honestly there's no real set of protections there.

> >> In fact, one could argue that the Mozilla Foundation is already the
> >> ultimate trust anchor, as we choose the certificates to place in the
> >> root store. Most users of products which use the store (e.g. Firefox)
> >> are ultimately trusting us to make good decisions about what CA root
> >> certs to include.
> >
> > No, a 'trust anchor' is a technical location where all trust can be
> > proven to devolve from (the private key, with the one-to-one
> > correspondence to the public key).
>
> <sigh> You know what I meant, surely? "The Mozilla Foundation is
> ultimately the location where all trust can be proven to devolve from",
> with the proof being that you downloaded the software from us and then
> ran it on your machine and used it to access websites/send email.

The Mozilla Foundation is the authority which determines whether a
given root certificate is included in its default certificate list.
If you're going to assert that it's "provable", you suddenly create a
lot more liability for the Foundation -- because it's not provable.
For example, if you upgrade Firefox, does the root certificate store
get replaced? If so, then how can local administrators assert their
own certificate policy? If not, then how can the user know that the
certificate list in the database has been unmodified?

Quoting my question and your answer:
> For bonus points, find a
> root inclusion policy on a TLS/SSL-encrypted page served with a
> "Mozilla Foundation" certificate which additionally states all of the
> approved root certificates and their thumbprints.

The approved roots are the ones in the store that you get when you
download e.g. Firefox (which is signed by a MoFo certificate).

How can I, as a user, verify that the certificates that are in my
store are the ones that the Mozilla Foundation put there? How can I,
as a matter of forensics, tell if it's been adulterated?

(Also: your assertion that signing the roots would place a strain on
the organization, by placing a highly-trusted key in the hands of an
organization which has no experience in key management of that nature,
has just been obviated. The MoFo code-signing private key is just as
highly trusted, in this particular model, yet it's harder to verify
and still leaves the end contents subject to tampering after they have
been emplaced.)

> > The choice of certificates is made by an authority. Without an
> > anchor, though, it's possible to willy-nilly add certificates to the
> > database and mark them as trusted.
>
> Currently, that's regarded as a feature, although ordinary users are
> discouraged from using it. What good would it serve to try and put
> technical measures in place to prevent additions to the root store?

It is certainly a feature -- but it's a "feature" in the same way that
debugging capabilities in operating systems are "features". In the
right hands, properly used, they do much good. However, they also
open up the capability for much harm.

My question, which has been inadequately addressed thus far, is this:
How can I verify that the certificates that I did not place in the
store by affirmative user or administrator action are, in actuality,
the certificates that the Mozilla Foundation approved?

You can put a technical measure in place to attempt to prevent
additions to the root store, or you can make the data that MoFo placed
there verifiable back to MoFo.

>
> > That's straightforward when the anchor isn't in the store.
> >
> > What happens when the anchor is already there?
>
> The error dialog says "Sorry, the CA who signed this certificate is not
> permitted to sign a certificate for this website." or similar, better
> wording.

Great, make Big Brother even more intrusive by preventing
organizations like CAcert from signing any useful certificate.

I prefer gracefully degraded functionality which still allows it to
"work", while enforcing warnings on form submission.

> >> Particularly if the user agent makes it clear why the error has occurred.
> >
> > Much of this set of problems could be worked around if we could touch
> > the chrome, but we've had many arguments on this list about the fact
> > that we can't. Has this policy been changed?
>
> People keep stating that we can't, but I've not seen any evidence of
> this, despite asking for it more than once. Who is the phantom evildoer
> who blocks all chrome changes on principle?

To find that, just find who blocks all the proposed chrome changes in
the security area of Bugzilla. I believe you can look for the issue
about S/MIME certificate handling presenting a denial of service when a
message included a bogus root certificate in its certificate chain with
the same name as an existing issuer. This was fixed in 2004, but that
issue is the one that comes most clearly to mind.

> Kai made some chrome changes recently, I believe. He rewrote some error
> messages to be better.

"Rewriting error messages to be better" is not the same as "change any
part of the workflow".

If we /can/ touch the chrome, then why are the important pieces of the
certificates still hidden in the full DER parse tree? Why is it still
so difficult to get the certificate's full subject?

-Kyle H

Gervase Markham

Mar 23, 2007, 6:10:19 AM3/23/07
to
Kyle Hamilton wrote:
> The Mozilla Foundation is the authority which determines whether a
> given root certificate is included in its default certificate list.
> If you're going to assert that it's "provable", you suddenly create a
> lot more liability for the Foundation -- because it's not provable.
> For example, if you upgrade Firefox, does the root certificate store
> get replaced?

Yes, potentially.

> If so, then how can local administrators assert their
> own certificate policy?

As I understand it, when local admins or end users add their own certs,
those go into a different location, with the store being the union of both.

> How can I, as a user, verify that the certificates that are in my
> store are the ones that the Mozilla Foundation put there? How can I,
> as a matter of forensics, tell if it's been adulterated?

We assert, by signing the download binaries, that the certificates you
get are the ones we wanted to put there.

Or are you asking how we ensure that the ones we put there are the ones
we wanted to put there?

> (Also: your assertion that signing the roots would place a strain on
> the organization, by placing a highly-trusted key in the hands of an
> organization which has no experience in key management of that nature,
> has just been obviated. The MoFo code-signing private key is just as
> highly trusted, in this particular model, yet it's harder to verify
> and still leaves the end contents subject to tampering after they have
> been emplaced.)

The risk is less great, because someone who obtained the signing key
would also need to break into our download infrastructure in order to
place the trojaned binary. And if they had that power, they wouldn't
bother with subverting the root store, they'd just insert the trojan
code directly.

But yes, you are right, the code signing key is valuable.

> My question, which has been inadequately addressed thus far, is this:
> How can I verify that the certificates that I did not place in the
> store by affirmative user or administrator action are, in actuality,
> the certificates that the Mozilla Foundation approved?

By comparing the contents of your store to the appropriate version of
the file "certdata.txt" in the Mozilla CVS tree.
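A rough way to make that comparison today might be the following sketch; it assumes NSS's certutil is installed, and `<profile>` is a stand-in for your actual profile directory:

```shell
# List every certificate the profile can see, with its trust flags
# (the <profile> path is hypothetical; substitute your own):
certutil -L -d ~/.mozilla/firefox/<profile>

# Extract the labels Mozilla ships, for eyeball comparison; certdata.txt
# records each shipped root's name on a CKA_LABEL line:
grep '^CKA_LABEL' certdata.txt | sort
```

Anything present locally but absent from certdata.txt was added (or re-trusted) outside the shipped list.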

In the future, there may be a list on a web page. Anyone willing to take
on that work would be welcomed with open arms.

> To find that, just find who blocks all the proposed chrome changes in
> the security area of bugzilla.

Oh, for goodness sake. The way for the name in your head to get into my
head with full fidelity transmission is for you to actually type it into
your reply. (Then perhaps the person you are talking about can defend
their actions.)

The last time I looked into this, in one sense there were no candidates
at all (in that, I couldn't find an explicit "no, you can't, there's a
chrome freeze" statement) and in one sense there were lots of
candidates, in that various people had expressed opinions of various
strengths about the UI. I have no intention of trawling through Bugzilla
again to reach the same conclusion.

If you want something done about this, name names and give examples. If
you really feel you can't criticise someone in public, then send me
private mail (still with the examples). Otherwise, I will continue to
assert there is no such freeze and you are being paranoid.

> If we /can/ touch the chrome, then why are the important pieces of the
> certificates still hidden in the full DER parse tree? Why is it still
> so difficult to get the certificate's full subject?

Where's the bug with the attached patch and the comments saying "No, you
can't put this in because I'm unilaterally instituting a chrome freeze"?

Gerv

Paul Hoffman

unread,
Mar 26, 2007, 9:20:11 AM3/26/07
to Gervase Markham, dev-tec...@lists.mozilla.org
At 10:10 AM +0000 3/23/07, Gervase Markham wrote:
>Kyle Hamilton wrote:
>> The Mozilla Foundation is the authority which determines whether a
>> given root certificate is included in its default certificate list.
>> If you're going to assert that it's "provable", you suddenly create a
>> lot more liability for the Foundation -- because it's not provable.
>> For example, if you upgrade Firefox, does the root certificate store
>> get replaced?
>
>Yes, potentially.

If true, this is a security bug. If I have removed FooCA because they
have been proven untrustworthy, and the Mozilla Foundation adds it
back in when I do a needed update for security reasons, that is a
violation of basic security principles.

If the cert store gets replaced *silently*, that is a horrible security bug.

Kyle Hamilton

Mar 27, 2007, 2:34:14 AM3/27/07
to Paul Hoffman, dev-tec...@lists.mozilla.org, Gervase Markham
On 3/26/07, Paul Hoffman <phof...@proper.com> wrote:
> At 10:10 AM +0000 3/23/07, Gervase Markham wrote:
> >Kyle Hamilton wrote:
> >> The Mozilla Foundation is the authority which determines whether a
> >> given root certificate is included in its default certificate list.
> >> If you're going to assert that it's "provable", you suddenly create a
> >> lot more liability for the Foundation -- because it's not provable.
> >> For example, if you upgrade Firefox, does the root certificate store
> >> get replaced?
> >
> >Yes, potentially.
>
> If true, this is a security bug. If I have removed FooCA because they
> have been proven untrustworthy, and the Mozilla Foundation adds it
> back in when I do a needed update for security reasons, that is a
> violation of basic security principles.
>
> If the cert store gets replaced *silently*, that is a horrible security bug.

You mean like Thawte, which started issuing domain-validated certs
under a root with a CPS that stated that it would only be used with
certificates with a higher degree of identity assertion? (Granted,
this was after it was acquired by Verisign, but that doesn't change
the fact that the company violated the root's CPS, was informed of it
over a year ago, and has never resolved the issue. The last I had
heard was "I'll bring this up with the company lawyers.")

-Kyle H

Nelson B

Mar 27, 2007, 4:15:15 AM3/27/07
to
Here's a suggestion for the participants in this thread.
Instead of all this conjecture, imagining various bad designs for NSS and
then criticizing them, try to figure out how the products *really* work.
There are major clues in Certificate Manager.

Here are some hints.

1. The root CA list that comes with the product is in a read-only shared
library. Nothing the user can do with the product alters the contents of
that shared library in any way. The shared library is updated only when
the product is updated.

2. Any certificates added by the user, and any trust information edited
by the user, is stored in the user's cert database. The trust information
in the user's cert database overrides ALL other trust information stored in
any other cert store, including the product's root CA list. All *apparent*
modifications of the root CA list are actually edits to the trust
information in the user's cert database.

3. The only modifications the product ever makes to the trust information
in the user's cert DB are initiated by the user. Product updates don't
modify the set of certs or trust information in the user's cert DB.
On those rare occasions where the format of the cert DB changes, the
information in the old cert DB is migrated to the new cert DB.
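Points 1 and 2 are visible directly with the NSS command-line tools; a sketch, assuming a current Linux build (the module file name and profile path are assumptions that may differ on other platforms):

```shell
# The shipped roots live in a read-only PKCS#11 module (libnssckbi.so
# on Linux); modutil lists it alongside the writable user database:
modutil -list -dbdir ~/.mozilla/firefox/<profile>

# certutil shows the merged view; the trust flags in the rightmost
# column (e.g. "CT,C,C") come from the user DB once the user edits them:
certutil -L -d ~/.mozilla/firefox/<profile>
```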

--
Nelson B

Paul Hoffman

Mar 27, 2007, 9:07:48 AM3/27/07
to dev-tec...@lists.mozilla.org
At 1:15 AM -0700 3/27/07, Nelson B wrote:
>Here's a suggestion for the participants in this thread.
>Instead of all this conjecture, imagining various bad designs for NSS and
>then criticizing them, try to figure out how the products *really* work.

Did that and failed. It may be my stupidity, or it may be the lack of
documentation overview and so on, or both.

>There are major clues in Certificate Manager.

Thanks for the pointer! (The doc page is clearly for 1.x and not 2.x,
but it is close enough to follow.)

Gervase Markham

Apr 12, 2007, 11:30:40 AM4/12/07
to
Paul Hoffman wrote:
> At 10:00 AM +0000 3/14/07, Gervase Markham wrote:
>> Paul Hoffman wrote:
>>> A related question that I was intending to do some research on: if a
>>> trust anchor ("trusted root" in this thread) has an expiration date
>> in the past, does NSS still treat it as a trust anchor, or does it
>>> ignore it?
>>
>> I can't say for certain because I haven't seen the code, but I would
>> certainly hope it ignores it!
>
> I would hope that NSS *would* use this information because it is what
> the CA has asserted about itself. RFC 3280 does not require that the
> processor use this information.

I may have been unclear here. By "ignores it", I meant "ignore the
cert", not "ignore the expiration date". I believe that's the original
sense in which you used it, but I could be misreading you.

Gerv

Gervase Markham

Apr 12, 2007, 11:33:49 AM4/12/07
to
Nelson Bolyard wrote:
> Your proposal would require storing the equivalent of a name constraints
> extension along with the root CA cert. It would also require additional
> processing, because name constraints are generally not processed inside
> trust anchors. That is, usually a CA puts the name constraints extension
> into subordinate CA certs that it issues, and a root CA does not attempt
> to constrain itself. Standard RFC 3280 cert chain validation does not
> expect the "trust anchor" (root) to have any such constraints, and the
> algorithm in that standard does not use such info in a root cert, if present.
>
> (As an aside, EV also requires non-standard behavior and if implemented in
> NSS would also require similar (though fewer) non-standard extensions. EV
> requires the storage of one or more EV policy OIDs that are permitted to be
> used in certs issued by the root CA and its subordinates.

And is it planned (is there a plan?) to store this information using the
vendor-defined extensions you mentioned in your message?

Where does NSS/PSM store the values of the three checkboxes you can set
through the UI to decide whether a cert is trusted for web/SSL/code?

> Alternatively these non-standard extensions could be implemented in PSM.
> PSM would be free to store the information in any non-standard format
> it likes. After a cert chain has been validated by NSS's cert path
> validation code, PSM could do additional checks on policy OIDs and/or
> RFC 822 names according to the rules/info it has stored.

(Question repeated for completeness)

I presume it has access to the information it would need via the API?
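For reference, the name constraints extension Nelson describes can be minted with stock OpenSSL; a sketch only, assuming OpenSSL 1.1.1 or later (for -addext) and using Gerv's hypothetical .ll TLD:

```shell
# Self-signed root whose name constraints permit only names under .ll:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ll-ca.key -out ll-ca.pem \
  -subj "/CN=Lilliput Government CA" \
  -addext "nameConstraints=critical,permitted;DNS:.ll"

# Confirm the extension is present:
openssl x509 -in ll-ca.pem -noout -text | grep -A2 "Name Constraints"
```

As Nelson notes, though, RFC 3280 path validation does not apply constraints found in the trust anchor itself, so honouring this in NSS would still require extra processing.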

> Since mozilla clients are likely the only NSS-based applications that
> will want these extensions, I think a case can be made that these
> extensions should be done in PSM rather than in NSS.

My concern about implementing it "on top of" NSS is that, as I

Paul Hoffman

Apr 12, 2007, 1:11:52 PM4/12/07
to Gervase Markham, dev-tec...@lists.mozilla.org

Ah! We are in agreement. If a cert says "I expire on this date in the
past", we both would prefer that NSS would use the information and
not use it as a trust anchor.

I still cannot find the code that would or would not implement this, however.

Kaspar Brand

Apr 14, 2007, 5:08:49 AM4/14/07
to Paul Hoffman, dev-tec...@lists.mozilla.org, Gervase Markham
Paul Hoffman wrote:
> Ah! We are in agreement. If a cert says "I expire on this date in the
> past", we both would prefer that NSS would use the information and
> not use it as a trust anchor.
>
> I still cannot find the code that would or would not implement this, however.

It's in cert_VerifyCertChain(), I would say, cf.

http://lxr.mozilla.org/security/source/security/nss/lib/certhigh/certvfy.c#755
and
http://lxr.mozilla.org/security/source/security/nss/lib/certhigh/certvfy.c#820

In the first case, SEC_ERROR_EXPIRED_ISSUER_CERTIFICATE is set when
checking the signature of a cert and the issuer cert is expired (calls
CERT_VerifySignedData(), which in turn has a check based on
CERT_CheckCertValidTimes()):

> 745 /* verify the signature on the cert */
> 746 if ( checkSig ) {
> 747 rv = CERT_VerifySignedData(&subjectCert->signatureWrap,
> 748 issuerCert, t, wincx);
> 749
> 750 if ( rv != SECSuccess ) {
> 751 if (sigerror) {
> 752 *sigerror = PR_TRUE;
> 753 }
> 754 if ( PORT_GetError() == SEC_ERROR_EXPIRED_CERTIFICATE ) {
> 755 PORT_SetError(SEC_ERROR_EXPIRED_ISSUER_CERTIFICATE);
> 756 LOG_ERROR_OR_EXIT(log,issuerCert,count+1,0);
> 757 } else {
> 758 PORT_SetError(SEC_ERROR_BAD_SIGNATURE);
> 759 LOG_ERROR_OR_EXIT(log,subjectCert,count,0);
> 760 }
> 761 }
> 762 }

If this test is skipped (because the caller explicitly chose not to
check signatures by setting checkSig to false), then there's still the
CRL check where an expired trust anchor would be rejected, IINM:

> 819 /* check revoked list (issuer) */
> 820 rv = SEC_CheckCRL(handle, subjectCert, issuerCert, t, wincx);

This will call CERT_CheckCRL() in turn, which again has a check for the
validity of the issuing cert by calling CERT_CheckCertValidTimes() - cf.
http://lxr.mozilla.org/security/source/security/nss/lib/certdb/crl.c#2570.
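As an aside, the validity-window test that CERT_CheckCertValidTimes() performs can be approximated outside NSS with OpenSSL; a sketch only, not the NSS code path itself, with ca.pem as a hypothetical stand-in for the anchor under examination:

```shell
# -checkend N exits 0 if the certificate is still valid N seconds from
# now, 1 if it will have expired by then; N=0 asks "expired right now?"
if openssl x509 -checkend 0 -noout -in ca.pem >/dev/null; then
  echo "anchor still inside its validity window"
else
  echo "anchor has expired"
fi
```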

Kaspar

Paul Hoffman

Apr 15, 2007, 8:02:57 PM4/15/07
to Kaspar Brand, dev-tec...@lists.mozilla.org, Gervase Markham
This is very good to hear. I have no idea if this is true for other
browsers or OS components that have root stores.

Man, this would be a good research project for some adventurous undergrad.

Kyle Hamilton

Apr 16, 2007, 7:13:05 AM4/16/07
to Mozilla Crypto
I should mention that on the tls@ietf list, there's been a fair amount
of discussion on this topic. The concept that is put forth is that
the trust anchor is the key -- and any metadata that the key surrounds
itself with (such as a certificate, for ease of trust anchor
distribution) is non-binding.

This gets into the concept of "key continuity management" for an
entity as opposed to hierarchal trust for the entity. This is
unfortunately a concept which is foreign to most X.509
implementations.

My view? If a trust anchor asserts its validity ending on a given
date, that's a policy decision asserted by that trust anchor (even
though a CA is identified by its name, not by its key).

-Kyle H


Paul Hoffman

Apr 16, 2007, 12:26:09 PM4/16/07
to Kyle Hamilton, Mozilla Crypto
At 4:13 AM -0700 4/16/07, Kyle Hamilton wrote:
>I should mention that on the tls@ietf list, there's been a fair amount
>of discussion on this topic. The concept that is put forth is that
>the trust anchor is the key -- and any metadata that the key surrounds
>itself with (such as a certificate, for ease of trust anchor
>distribution) is non-binding.

My reading of the archives is that there is disagreement on this.
These are TLS folks who have been forced to live with the silliness
of PKIX.

I would characterize the sentiment on the list more as "non-binding
but often useful".

>This gets into the concept of "key continuity management" for an
>entity as opposed to hierarchical trust for the entity. This is
>unfortunately a concept which is foreign to most X.509
>implementations.

Fully agree. The PKIX WG has made even more of a mess of key and cert
management than they have with the cert format.

>My view? If a trust anchor asserts its validity ending on a given
>date, that's a policy decision asserted by that trust anchor (even
>though a CA is identified by its name, not by its key).

Agree.
