
Name Constraints


Richard Barnes

Mar 6, 2015, 7:26:47 PM
to mozilla-dev-s...@lists.mozilla.org
Hey all,

I've been doing some research on the potential benefits of adding name
constraints into the Mozilla root program. I've drafted an initial
proposal and put it on a wiki page:

https://wiki.mozilla.org/CA:NameConstraints

Questions and comments are very welcome. There's a lot of details to work
out here, but I think there's some significant benefit to be realized.

--Richard
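
A minimal sketch of the mechanism under discussion, for readers skimming the archive: the X.509 NameConstraints extension (RFC 5280), expressed here with Python's "cryptography" package. The permitted subtrees are placeholder values, not anything taken from the proposal.

```python
# A minimal sketch of an X.509 name constraint, using the Python
# "cryptography" package. The subtrees below are placeholders; per
# RFC 5280, a dNSName constraint of "gov" matches "gov" itself and
# every subdomain under it.
from cryptography import x509

constraints = x509.NameConstraints(
    permitted_subtrees=[x509.DNSName("gov"), x509.DNSName("mil")],
    excluded_subtrees=None,
)

# Attached as a critical extension to a CA certificate, this limits the
# names that certificates chaining to that CA may carry.
print(constraints)
```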

Ryan Sleevi

Mar 6, 2015, 8:40:59 PM
to Richard Barnes, mozilla-dev-s...@lists.mozilla.org
This seems unfortunate, especially given ICANN's efforts to extend the set
of gTLDs.

While it might seem simple to argue from a security benefit, the reality
is that it further ensures "too big to fail", by reducing the number of
CAs that can issue for a given name.

If a CA wishes to extend beyond the assigned scope, it would now have a 1
month waiting period, although there will inevitably be a queue, and then
have to wait for a 12-18 month upgrade period for projects that have used
the name constrained roots.

We've already seen the negative effects this can have on roots wishing to
migrate to stronger algorithms (ECC, SHA-2), in which they have to wait a
long time in the queue.

Given that sites in consideration already have multiple existing ways to
mitigate these threats (among them, Certificate Transparency, Public Key
Pinning, and CAA), and that there are further proposed solutions to
mitigate the risks (such as OCSP Must Staple), I'm curious what are the
specific benefits you see versus the real costs for users and CAs.

While the CA costs are both obvious and somewhat mentioned above, consider
the user costs. If there's a site that operates in multiple gTLDs (say,
for sake of example, Google), the set of CAs they can now use is the set
of CAs that are authorized to issue for the union of those domains, or
they must obtain multiple certificates for multiple domains and
manage them, their policies, and their expirations separately. As we know,
many users of certificates complain the operational costs are a
significant burden, and while ACME hopes to address some of them, it's
also hopefully evident that it will fail to do so for some time.

What would you imagine the name restrictions for the major CAs to be? Or
for Let's Encrypt's nascent CA? Presumably unrestricted, correct?

Jeremy Rowley

Mar 8, 2015, 2:49:18 PM
to ryan-mozde...@sleevi.com, Richard Barnes, mozilla-dev-s...@lists.mozilla.org
+1 to Ryan's comments. The plan locks small CAs into being small while letting big CAs continue to dominate the market. It basically prevents new CAs from even entering the market.

Eric Mill

Mar 8, 2015, 2:55:36 PM
to ryan-mozde...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Fri, Mar 6, 2015 at 8:39 PM, Ryan Sleevi
<ryan-mozde...@sleevi.com> wrote:
>
> On Fri, March 6, 2015 4:26 pm, Richard Barnes wrote:
> > Hey all,
> >
> > I've been doing some research on the potential benefits of adding name
> > constraints into the Mozilla root program. I've drafted an initial
> > proposal and put it on a wiki page:
> >
> > https://wiki.mozilla.org/CA:NameConstraints
> >
> > Questions and comments are very welcome. There's a lot of details to work
> > out here, but I think there's some significant benefit to be realized.
>
> This seems unfortunate, especially given ICANN's efforts to extend the set
> of gTLDs.
>
> While it might seem simple to argue from a security benefit, the reality
> is that it further ensures "too big to fail", by reducing the number of
> CAs that can issue for a given name.

That comes down to how this program is implemented. The intent seems
pretty clearly to identify the space CAs are already issuing in.
Perhaps newer gTLDs merit some unrestrained time in the wild before
they're constrained in this way -- or perhaps it's simpler to make the
gradations more black and white (e.g. "unrestricted" vs "niche" CAs,
and avoiding "somewhat unrestricted" or "nearly unrestricted").

For CAs whose business model is designed for a specific subset of the
web, a name constraint program could clear a path to entry without
endangering domains that the CA is not designed to serve.


> If a CA wishes to extend beyond the assigned scope, it would now have a 1
> month waiting period, although there will inevitably be a queue, and then
> have to wait for a 12-18 month upgrade period for projects that have used
> the name constrained roots.
>
> We've already seen the negative effects this can have on roots wishing to
> migrate to stronger algorithms (ECC, SHA-2), in which they have to wait a
> long time in the queue.

This is a great point, and suggests that name constraint updates
should either a) have a clear and defined update path, or b) only be
implemented when the chances of updates are low.

I would argue that a name constraint program could accomplish a couple
things beyond just limiting raw attack surface area:

* Add friction to applicants that claim in their initial application
to serve a specific subset of the web, and then wish to expand their
issuance surface area after their inclusion.

* Reduce the friction for niche CAs to be included in the first place.
For tightly constrained CAs, it's plausible to imagine that the
operational complexity they need to demonstrate can be reduced.


> Given that sites in consideration already have multiple existing ways to
> mitigate these threats (among them, Certificate Transparency, Public Key
> Pinning, and CAA), and that there are further proposed solutions to
> mitigate the risks (such as OCSP Must Staple), I'm curious what are the
> specific benefits you see versus the real costs for users and CAs.

CT, CAA, and PKP are all great advances for securing the web, and none
of them is complete. They each extend the web's defenses in different
ways. Name constraints address a different problem, and only augment
these extensions.

* CT only detects misissuance after the fact.
* CAA constrains what CAs can issue for a particular domain, not what
domains a CA is allowed to issue for. Domain owners must individually
opt-in to CAA.
* PKP is similar to CAA, and also requires domain owners to opt-in. It
can also be dangerous (crypto.cat finally got un-pin-blocked in Chrome
41).

By contrast, name constraints protect *everyone*, even if the domain
owner has never heard of them, or heard of CT, CAA, or PKP.
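
To make the opt-in burden concrete: a rough sketch of what PKP asks of a site operator, computing a pin-sha256 value from a certificate's SubjectPublicKeyInfo with Python's "cryptography" package. The file path and header values are hypothetical.

```python
# Rough sketch of the opt-in PKP requires from a site operator: a pin is the
# base64-encoded SHA-256 hash of a certificate's SubjectPublicKeyInfo, which
# the domain owner publishes in a Public-Key-Pins response header (real
# deployments must also include a backup pin). "server.pem" is a placeholder.
import base64
import hashlib

from cryptography import x509
from cryptography.hazmat.primitives import serialization

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

spki = cert.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)
pin = base64.b64encode(hashlib.sha256(spki).digest()).decode()

print(f'Public-Key-Pins: pin-sha256="{pin}"; max-age=5184000')
```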


> While the CA costs are both obvious and somewhat mentioned above, consider
> the user costs. If there's a site that operates in multiple gTLDs (say,
> for sake of example, Google), the set of CAs they can now use are the set
> of CAs that are authorized to issue for the union of those domains, or
> they must issue and manage multiple certificates for multiple domains and
> manage them, their policies, and their expirations separately. As we know,
> many users of certificates complain the operational costs are a
> significant burden, and while ACME hopes to address some of them, it's
> also hopefully evident that it will fail to do so for some time.
>
> What would you imagine the name restrictions for the major CAs to be? Or
> for Let's Encrypt's nascent CA? Presumably unrestricted, correct?

Constraining the current major unrestricted CAs seems thorny. But the
clearest example to me of the benefit of name constraints is the US
government's FPKI application:

https://bugzilla.mozilla.org/show_bug.cgi?id=478418
https://bug478418.bugzilla.mozilla.org/attachment.cgi?id=8561777

While this is not finalized, and the specific constrained domains in
the application are not accurate (.gov.us is not a public suffix, or
in use at all), name constraints seem to be a highly practical way of
bringing government CAs into the trusted root program.

Ensuring that a US government CA can only issue for .gov and .mil
reduces the amount of trust the root program (and thus, the web) needs
to place in the US government, and lessens the burden of work the CA
needs to do to protect against mis-issuance. As importantly, this will
not constrain which CAs .gov and .mil domains can use -- they can
still go out to the private sector, as they currently do, to protect
their domains. However, this means they will have options, and
probably cheaper ones at that.

While the US government is unique in owning their own TLDs, there are
other government CAs already in the program, and with pending applications,
that could benefit from constraints. I know an animating motivation
for the constraint program is the compromise of a French government
intermediate certificate.[1]

I understand that if the name constraint program gets too fancy, it
could add unwanted complexity to the trusted root program and alter
the CA market in undesired ways.

However, if properly implemented, I think it can very much protect
website owners the world over from attack, without making the CA
market more of an oligopoly than it already is. In the best case, it
leads to a wider marketplace whose business functions are more
accurately described and enforced.

-- Eric

[1] https://blog.mozilla.org/security/2013/12/09/revoking-trust-in-one-anssi-certificate/





--
konklone.com | @konklone

Ryan Sleevi

Mar 8, 2015, 3:50:26 PM
to Eric Mill, mozilla-dev-s...@lists.mozilla.org, ryan-mozde...@sleevi.com, Richard Barnes
On Sun, March 8, 2015 11:53 am, Eric Mill wrote:
> That comes down to how this program is implemented. The intent seems
> pretty clearly to identify the space CAs are already issuing in.
> Perhaps newer gTLDs merit some unrestrained time in the wild before
> they're constrained in this way -- or perhaps it's simpler to make the
> gradations more black and white (e.g. "unrestricted" vs "niche" CAs,
> and avoiding "somewhat unrestricted" or "nearly unrestricted").
>
> For CAs whose business model is designed for a specific subset of the
> web, a name constraint program could clear a path to entry without
> endangering domains who are not designed to be served by that CA.

This is a dangerous line of reasoning, I think.

One reason is that it encourages calcifying the trust space ("If you
were there already, you can stay there; if you weren't, you're now kept
out").

Another reason is that it encourages the trust store to be used for regional or
arbitrary distinctions ("not designed to be served by that CA"). This
amounts to naught more than recognizing borders on the Internet, a
somewhat problematic practice, for sure. That is, for every "constrained"
CA that you can imagine that ONLY wants to issue for a .ccTLD, you can
also imagine the inverse, where ONLY a given CA is allowed to issue for
that .ccTLD. The reasoning behind the two are identical, but the
implications of the latter - to online trust - are far more devastating.

> This is a great point, and suggests that name constraint updates
> should either a) have a clear and defined update path, or b) only be
> implemented when the chances of updates are low.

It's nigh impossible to quantify (b), given the rate of gTLD adoption. It
also favours incumbents who have the ability to issue for those domains,
since any upstarts need to demonstrate need/desire to issue for the new
gTLDs.

The problem with (a) is that it's already an issue today, so why should we
believe it would be solved now?

> * Add friction to applicants that claim in their initial application
> to serve a specific subset of the web, and then wish to expand their
> issuance surface area after their inclusion.

Why is this a good thing, and why should it be seen as such?

> * Reduce the friction for niche CAs to be included in the first place.
> For tightly constrained CAs, it's plausible to imagine that the
> operational complexity they need to demonstrate can be reduced.

I strongly disagree with this sentiment. The holders of a .ccTLD domain
have just as much desire and reason to have strong security as the holders
of a .com domain. This idea that somehow we can be less stringent with a
.ccTLD-constrained CA is downright dangerous, because it suggests now a
balkanization of web security as a desirable outcome.

> By contrast, name constraints protect *everyone*, even if the domain
> owner has never heard of them, or heard of CT, CAA, or PKP.

I'm well aware of the distinctions between CT/CAA/PKP. The issue here is
simply that the existing measures are available to site operators. The
argument for why we need "yet another" - one that is centrally managed,
slow to update, inherently political, and lacking firm criteria - is
somewhat problematic.

> While this is not finalized, and the specific constrained domains in
> the application are not accurate (.gov.us is not a public suffix, or
> in use at all), name constraints seem to be a highly practical way of
> bringing government CAs into the trusted root program.

Stop right here.

Why is this a good thing?
Why is it in the interest of Mozilla's users?
Why is it in the interest of the Web at large?

There's a fundamental mistake that assumes bringing the government CAs in
(of which the US FPKI is but one example) is somehow a good thing. The
closest it comes is "Well, the government said users must use the Federal
PKI, ergo it's nice that they CAN use the federal PKI", but that's simply
an argument that private industry should exist to enable governments'
legislative whims on technology.

We've already seen the impact the past decades' legislative whims have had
on security. While FREAK is perhaps a modern example, the complexity and
security implications of FIPS 140-(1/2/3) remain a matter of active
discussion. Let alone the complexity involved with say, Fortezza, which
has been exploitable in NSS in the past.

As it relates to the online trust ecosystem, we can see these government CAs
have either botched things quite spectacularly (India CCA) or been highly
controversial (CNNIC). The arguments for CNNIC aren't "Well, if they're
only MITMing .cn users, that's OK", it's "Well, they could MITM".

That's why name constraints are misleading. They exist because we lack
confidence that these government CAs (often audited under ETSI) are
competent to operate the technology necessary to be stewards of Internet
trust. The solution shouldn't be to find ways to hinder their damage, the
solution should be to make it impossible for them to damage things in the
first place.

Name constraints, as presented, give tacit approval to the CAs constrained
to botch things, as long as they do so only in their little fiefdoms. But
when these fiefdoms easily represent millions-to-billions of Internet
users, especially in emerging markets, do we really believe that their
needs are being served?

That is, in essence, why I think a change like this is so dangerous. It
strives to draw borders around the (secure) Internet, and to acknowledge
that what you do in your own borders, to your own users, is an issue
between you and them. I don't think that's a good state for anyone to be
in.

Eric Mill

Mar 8, 2015, 5:14:34 PM
to ryan-mozde...@sleevi.com, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
On Sun, Mar 8, 2015 at 3:49 PM, Ryan Sleevi
<ryan-mozde...@sleevi.com> wrote:
> On Sun, March 8, 2015 11:53 am, Eric Mill wrote:
>> That comes down to how this program is implemented. The intent seems
>> pretty clearly to identify the space CAs are already issuing in.
>> Perhaps newer gTLDs merit some unrestrained time in the wild before
>> they're constrained in this way -- or perhaps it's simpler to make the
>> gradations more black and white (e.g. "unrestricted" vs "niche" CAs,
>> and avoiding "somewhat unrestricted" or "nearly unrestricted").
>>
>> For CAs whose business model is designed for a specific subset of the
>> web, a name constraint program could clear a path to entry without
>> endangering domains who are not designed to be served by that CA.
>
> One reason is because it encourages calcifying the trust space ("If you
> were there already, you can stay there; if you weren't, you're now kept
> out")

It _could_ be done that way, but definitely doesn't have to be.


> Another reason is it encourages the trust store to be used for regional or
> arbitrary distinctions ("not designed to be served by that CA"). This
> amounts to naught more than recognizing borders on the Internet, a
> somewhat problematic practice, for sure. That is, for every "constrained"
> CA that you can imagine that ONLY wants to issue for a .ccTLD, you can
> also imagine the inverse, where ONLY a given CA is allowed to issue for
> that .ccTLD. The reasoning behind the two are identical, but the
> implications of the latter - to online trust - are far more devastating.

I can imagine the inverse, but that doesn't at all mean it will
necessarily occur. I share some of your misgivings about scanning the
gTLD space and bluntly shaving off space wherever you can. But the
program could be as targeted as the trusted root program wants it to
be.

You envision a more constricted market, but properly managed this
could expand the market and increase the competition for popular TLDs.


>> This is a great point, and suggests that name constraint updates
>> should either a) have a clear and defined update path, or b) only be
>> implemented when the chances of updates are low.
>
> It's nigh impossible to quantify (b), given the rate of gTLD adoption. It
> also favours incumbents who have the ability to issue for those domains,
> since any upstarts need to demonstrate need/desire to issue for the new
> gTLDs.

The program would clearly need to go in stages, testing assumptions
along the way. Letting the gTLD market mature, while the program
tackles the traditional TLD space, could be one way to go about it.


>> * Add friction to applicants that claim in their initial application
>> to serve a specific subset of the web, and then wish to expand their
>> issuance surface area after their inclusion.
>
> Why is this a good thing, and why should it be seen as such?

Because an initial application should describe what a CA wishes to do,
and their inclusion should be decided based on that description. If a CA
applies to the trusted root program and declares they're only
interested in selling *.club domains, or *.yahoo.com domains, changing
their mind on that after inclusion should come at a real but
non-prohibitive cost.

CAs are charged, collectively, with guarding the trust of the web.
This doesn't seem like an unreasonable piece of friction to add.


>> * Reduce the friction for niche CAs to be included in the first place.
>> For tightly constrained CAs, it's plausible to imagine that the
>> operational complexity they need to demonstrate can be reduced.
>
> I strongly disagree with this sentiment. The holders of a .ccTLD domain
> have just as much desire and reason to have strong security as the holders
> of a .com domain. This idea that somehow we can be less stringent with a
> .ccTLD-constrained CA is downright dangerous, because it suggests now a
> balkanization of web security as a desirable outcome.

I don't think I was sufficiently clear. If a CA comes and wants to be
able to issue certificates for *.us or *.ly, or any ccTLD which is
operated openly, that shouldn't imply any reduction in auditing or
operational sophistication as compared to an unconstrained CA.

TLDs and public suffixes like *.gov.uk, *.gouv.fr, *.gov, or *.mil --
these are not operated openly. I know that at least in the case of
.gov, the registration process itself is centrally managed and is
effectively "organizationally validated". There are legitimate reasons
that a CA constrained to these domains could require less burdensome
auditing practices and agree to take on more liability for those
domains. It's up to each individual domain to choose to use that CA,
because this proposal is not intending to restrict which CAs a domain
can use.

Decisions like this should be based on the actual operational
realities of a domain. It's not about geographic location, or
balkanization of the web, even though government domains are
effectively geographic in nature. If certain name constraints
realistically -- present and future -- identify a centrally managed
piece of the domain space, then a CA targeting that rigidly defined
space should be evaluated on what additional levels of trust are
required.

>
>> By contrast, name constraints protect *everyone*, even if the domain
>> owner has never heard of them, or heard of CT, CAA, or PKP.
>
> I'm well aware of the distinctions between CT/CAA/PKP. The issue here is
> simply one that the existing measures exist for site operators. The
> argument for why we need "yet another" - one that is centrally managed,
> slow to update, inherently political, and lacking firm criteria - is
> somewhat problematic.

"centrally managed, slow to update, inherently political, and lacking
firm criteria" already describes the trusted root program. With the
way web PKI works today, I don't see how it can be otherwise. If name
constraints change those politics in a way that increases protections
for the web's everyday users, this seems like an improvement.


>> While this is not finalized, and the specific constrained domains in
>> the application are not accurate (.gov.us is not a public suffix, or
>> in use at all), name constraints seem to be a highly practical way of
>> bringing government CAs into the trusted root program.
>
> Stop right here.
>
> Why is this a good thing?
> Why is it in the interest of Mozilla's users?
> Why is it in the interest of the Web at large?
>
> There's a fundamental mistake that assumes bringing the government CAs in
> (of which the US FPKI is but one example) is somehow a good thing.

An unconstrained government CA doesn't meet my definition of "a good
thing" either.

However, I don't understand why the current group of CAs that make up
the trusted root store today represent entities that hold the
interests of me, the Web, or Mozilla's users at heart.

Instead, I see some of the best people and organizations who _do_ have
the Web's best interests at heart spending tremendous amounts of time
and energy on convincing an organization in the existing trust store
to cross-sign them (e.g. Let's Encrypt) -- or on building a tolerably
usable and automatable interface as a reseller on top of other
CA/reseller organizations that could yank their relationship at any
time (e.g. SSLMate).

The private sector has created a distinctly uninspiring,
security-hostile situation for managing the web PKI.

> The
> closest it comes is "Well, the government said users must use the Federal
> PKI, ergo it's nice that they CAN use the federal PKI", but that's simply
> an argument that private industry should exist to enable governments'
> legislative whims on technology.

The US government has not, to my knowledge, mandated anything about
which CAs domains can use for web PKI. Which I'm extremely glad of,
because it means that in my professional work (which, full disclosure,
is currently for an independent civilian agency in the US government)
I can take full advantage of advances in the private sector, and start
publicly pushing the envelope on certificate management while in a
public sector role.


> We've already seen the impact the past decades legislative whims have had
> on security. While FREAK is perhaps a modern example, the complexity and
> security implications of FIPS 140-(1/2/3) remain a matter of active
> discussion. Let alone the complexity involved with say, Fortezza, which
> has been exploitable in NSS in the past.

I'm certainly not going to defend the US government's terrible
cryptographic choices over the years. I also think we're getting a bit
far afield from the merits of a general name constraint program.


> As it relates to online trust ecosystem, we can see these government CAs
> have either botched things quite spectacularly (India CCA) or been highly
> controversial (CNNIC). The arguments for CNNIC aren't "Well, if they're
> only MITMing .cn users, that's OK", it's "Well, they could MITM".

There's a big difference between *.cn and *.gov.cn, or *.alibaba.cn.


> That's why name constraints are misleading. They exist because we lack
> confidence that these government CAs (often audited under ETSI) are
> competent to operate the technology necessary to be stewards of Internet
> trust. The solution shouldn't be to find ways to hinder their damage, the
> solution should be to make it impossible for them to damage things in the
> first place.
>
> Name constraints, as presented, give tacit approval to the CAs constrained
> to botch things, as long as they do so only in their little fiefdoms. But
> when these fiefdoms easily represent millions-to-billions of Internet
> users, especially in emerging markets, do we really believe that their
> needs are being served?
>
> That is, in essence, why I think a change like this is so dangerous. It
> strives to draw borders around the (secure) Internet, and to acknowledge
> that what you do in your own borders, to your own users, is an issue
> between you and them. I don't think that's a good state for anyone to be
> in.

Putting governments aside for a second: why is the status quo acceptable?

We're all spending a *lot* of time strengthening HTTPS and widening
its use. We're putting a bunch of patches in place to make HTTPS
stronger and making the surface area for MITM smaller and more
expensive to attack.

Once we've got patched-up HTTPS in place across the internet, and
manage to make the deprecation of HTTP an inevitability -- which will
take some years -- what's the long term plan? More patches?

I truly hope that the long-term plan for trust in the web PKI does not
continue to mean a tightly guarded cluster of mostly for-profit
companies, even when we've bent the curve of their incentives somewhat
towards protecting users.

I do not believe that the definition of secure origins is sustainable
over the course of this century without the ability to extend the
roots of that trust more widely in a safe way. It's not about
governments -- they're just the entities on the table in front of us
in the immediate future and with the broadest reach. We should discuss
whether and how name constraints can contribute to a future for the
trusted root program where people and organizations have more freedom
to choose who guards their trust.

-- Eric

--
konklone.com | @konklone

Ryan Sleevi

Mar 8, 2015, 11:02:12 PM
to Eric Mill, mozilla-dev-s...@lists.mozilla.org, ryan-mozde...@sleevi.com, Richard Barnes
On Sun, March 8, 2015 2:12 pm, Eric Mill wrote:
> TLDs and public suffixes like *.gov.uk, *.gouv.fr, *.gov, or *.mil --
> these are not operated openly. I know that at least in the case of
> .gov, the registration process itself is centrally managed and is
> effectively "organizationally validated". There are legitimate reasons
> that a CA constrained to these domains could require less burdensome
> auditing practices and agree to take on more liability for those
> domains. It's up to each individual domain to choose to use that CA,
> because this proposal is not intending to restrict which CAs a domain
> can use.

The first problem is that this definition of "openly" is nebulous. Even ICANN
hasn't quite nailed it down (see the .brand registry for an example).

With a few legacy exceptions, most operate at second- or third-level
domains. A commercial CA is already entirely capable of issuing name
constrained sub-CAs to such entities (or, as it may be, cross-signing such
certificates), such that the operational day-to-day issuance is managed by
the government CA, but the root of trust is based in, for lack of a better
word, a commercial CA.

The Baseline Requirements already provide for this - and with a less
burdensome auditing practice. So if this was indeed the reason for such a
change, no change to the root program (except, perhaps, to further
discourage government CAs) is needed.

(I'm not even touching the complexity of public suffixes here, other than
to note that the existence of a domain boundary isn't by itself a reasonable
basis for assuming a trust boundary.)
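
A minimal sketch of the arrangement described above, assuming hypothetical names and keys: a commercial root cross-signs a government-operated issuing CA, adding a critical NameConstraints extension that limits it to "gov.example", using Python's "cryptography" package.

```python
# Sketch only: a commercial root signs a name-constrained government sub-CA.
# All subjects, keys, dates, and the "gov.example" constraint are made up.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

root_key = ec.generate_private_key(ec.SECP256R1())  # commercial root's key
sub_key = ec.generate_private_key(ec.SECP256R1())   # government CA's key

gov_intermediate = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Gov Issuing CA")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Commercial Root CA")]))
    .public_key(sub_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime(2015, 3, 9))
    .not_valid_after(datetime.datetime(2020, 3, 9))
    .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
    .add_extension(
        # Day-to-day issuance stays with the government CA, but anything it
        # signs outside gov.example is rejected by relying parties.
        x509.NameConstraints(
            permitted_subtrees=[x509.DNSName("gov.example")],
            excluded_subtrees=None,
        ),
        critical=True,
    )
    .sign(root_key, hashes.SHA256())
)

print(gov_intermediate.subject.rfc4514_string())
```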

> Instead, I see some of the best people and organizations who _do_ have
> the Web's best interests at heart spending tremendous amounts of time
> and energy on convincing an organization in the existing trust store
> to cross-sign them (e.g. Let's Encrypt) -- or on building a tolerably
> usable and automatable interface as a reseller on top of other
> CA/reseller organizations that could yank their relationship at any
> time (e.g. SSLMate).

Which this does nothing to change, and which is not so much an issue with
trust store management as it is the mistaken idea that software is ever
complete.

(The whole reason to wait on cross-signing for launching is for ubiquity,
which includes far more platforms than those using NSS or Mozilla's trust
store)

> The US government has not, to my knowledge, mandated anything about
> what CAs domains can use for web PKI. Which I'm extremely glad of,
> because it means that in my professional work (which, full disclosure,
> is currently for an independent civilian agency in the US government)
> I can take full advantage of advances in the private sector, and start
> publicly pushing the envelope on certificate management while in a
> public sector role.

The entire reason we allow ETSI in the program is that ETSI exists to
provide guidance for governments that DO want to dictate how trust
services behave - that is, backed by law. The FPKI doesn't use ETSI, but
plenty have (DigiNotar, TurkTrust, India CCA, and ANSSI, among others).

> I'm certainly not going to defend the US government's terrible
> cryptographic choices over the years. I also think we're getting a bit
> far afield from the merits of a general name constraint program.

It's hardly just the US government, which is the point. Consider GOST (which
is required to be used) or SEED (which was also required to be used).
Considering that ETSI is the trust service enabler for requiring that
specific CAs be used, why is this unrelated? It's core to the discussion -
we're reacting to ETSI-audited CAs that are part of national schemes, and
their inability to operate at a level of international competence.

If it really were about *.gov.ccTLD, then a name-constrained CA that could
be cross-certified, and could itself cross-certify its regional trust store
participants, is a system we already have: shovel-ready and working.

The problem is the same though - these regional CAs want to serve wherever
their citizens want. If a new bank in a random country wants a .com
instead of a .ccTLD, then that CA couldn't serve it.

> There's a big difference between *.cn and *.gov.cn, or *.alibaba.cn.

Except that's not what's been proposed, so it's somewhat irrelevant.

> Once we've got patched-up HTTPS in place across the internet, and
> manage to make the deprecation of HTTP an inevitability -- which will
> take some years -- what's the long term plan? More patches?

Less reactionism? A process that lends itself to being apolitical - even
if controversial - seems far better than a process that says "I don't
trust you, but it's OK, it's your own citizens" - which is inherently what
is behind such a statement.

Reducing the scope of risk assumes that there is some degree of risk
greater with these CAs. If there is, we should be looking at the core
issues. Is ETSI at fault here as an audit standard? Is it the quality of
auditors? What is the real root issue to solve - because saying we don't
trust someone but giving them the keys to a duchy, rather than the
kingdom, doesn't seem to be in anyone's real interests.

> I do not believe that the definition of secure origins is sustainable
> over the course of this century without the ability to extend the
> roots of that trust more widely in a safe way. It's not about
> governments -- they're just the entities on the table in front of us
> in the immediate future and with the broadest reach. We should discuss
> whether and how name constraints can contribute to a future for the
> trusted root program where people and organizations have more freedom
> to choose who guards their trust.

Unsurprisingly, while this sounds inspiring and motivational, what it boils
down to is "I want root stores to update faster", so that the problem of
ubiquity is no longer an issue. I agree, it'd be awesome if all software
updated faster. But even the most "open" and "people-minded" software
distributions can't get this right (Debian security updates, I'm looking
at you ಠ_ಠ)

And even with these update systems, they only work if people update.
Windows has had a root autoupdate since Vista. How many users do? It's
depressing.

And that's not even touching Android - whether we are talking about how often
people update the OS or how often people update applications, which
generally only prompt on WiFi - it's depressing.

And until you solve *those* problems, it won't matter what Firefox does to
improve the root update, because they aren't the long tail (despite CAs
claiming otherwise).

Michael Ströder

Mar 9, 2015, 11:40:05 AM
to mozilla-dev-s...@lists.mozilla.org
Ryan Sleevi wrote:
> Given that sites in consideration already have multiple existing ways to
> mitigate these threats (among them, Certificate Transparency, Public Key
> Pinning, and CAA),

Any clients which already make use of CAA RRs in DNS?

Or did you mean something else with the acronym CAA?

Ciao, Michael.

Ryan Sleevi

Mar 9, 2015, 1:12:51 PM
to "Michael Ströder", mozilla-dev-s...@lists.mozilla.org
On Mon, March 9, 2015 8:38 am, Michael Ströder wrote:
> Any clients which already make use of CAA RRs in DNS?
>
> Or did you mean something else with the acronym CAA?
>
> Ciao, Michael.

CAA (RFC 6844) is not for clients. It's for CAs, as another way of
restricting CAs authorized to issue for a domain.

Since Ballot 125 in the CA/Browser Forum (
https://cabforum.org/2014/10/14/ballot-125-caa-records/ ) CAs are required
to disclose how they use/process CAA records.
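
A small, simplified sketch of those RFC 6844 semantics (ignoring CNAME chasing, wildcards, and the "issuewild"/"iodef" tags): before issuing, the CA climbs the DNS tree from the requested name to the closest published CAA record set, and may issue only if its own issuer domain appears in an "issue" property. The in-memory record table and all domains are hypothetical.

```python
# Simplified RFC 6844 check: find the closest ancestor with CAA records and
# see whether the CA's issuer domain is named in an "issue" property.
# Real CAs also have to handle CNAMEs, DNSSEC failures, "issuewild", etc.
def relevant_caa(domain, caa_records):
    """Return the CAA record set of the closest ancestor, or None."""
    labels = domain.split(".")
    for i in range(len(labels)):
        candidate = ".".join(labels[i:])
        if candidate in caa_records:
            return caa_records[candidate]
    return None

def may_issue(ca_issuer_domain, requested_name, caa_records):
    records = relevant_caa(requested_name, caa_records)
    if records is None:
        return True  # no CAA published anywhere: any CA may issue
    issue_values = [value for tag, value in records if tag == "issue"]
    return ca_issuer_domain in issue_values

records = {"example.gov": [("issue", "ca-one.example")]}
print(may_issue("ca-one.example", "www.example.gov", records))  # True
print(may_issue("ca-two.example", "www.example.gov", records))  # False
```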

Phillip Hallam-Baker

Mar 9, 2015, 1:52:40 PM
to Michael Ströder, mozilla-dev-s...@lists.mozilla.org
On Mon, Mar 9, 2015 at 11:38 AM, Michael Ströder <mic...@stroeder.com>
wrote:

> Ryan Sleevi wrote:
> > Given that sites in consideration already have multiple existing ways to
> > mitigate these threats (among them, Certificate Transparency, Public Key
> > Pinning, and CAA),
>
> Any clients which already make use of CAA RRs in DNS?
>
> Or did you mean something else with the acronym CAA?
>
> Ciao, Michael.
>
>
Sites can use CAA. But the checking is not meant to happen in the client as
the client cannot know what the CAA records looked like when the cert was
issued.

A third party can check the CAA records for each new entry on a CT log
however. And I bet that every CA that implements CAA will immediately start
doing so in the hope of catching out their competitors.


CAA also provides an extensible mechanism that could be used for more
general key distribution if you were so inclined.

Martin Rublik

Mar 10, 2015, 4:11:52 AM
to dev-secur...@lists.mozilla.org
On 9. 3. 2015 18:11, Ryan Sleevi wrote:
> Since Ballot 125 in the CA/Browser Forum (
> https://cabforum.org/2014/10/14/ballot-125-caa-records/ ) CAs are required
> to disclose how they use/process CAA records.

Does somebody know how many CAs actually use CAA records?

Kind regards

Martin

Ryan Sleevi

Mar 10, 2015, 4:18:58 AM
to Martin Rublik, dev-secur...@lists.mozilla.org
As noted in the link I posted, CAs have until April 14, 2015 to disclose
in their CP/CPS if and how they handle CAA records. Until then, it's
inconclusive.


Gervase Markham

Mar 10, 2015, 2:33:37 PM
to Eric Mill
On 08/03/15 11:53, Eric Mill wrote:
> That comes down to how this program is implemented. The intent seems
> pretty clearly to identify the space CAs are already issuing in.
> Perhaps newer gTLDs merit some unrestrained time in the wild before
> they're constrained in this way

I don't understand this point. The plan restricts CAs to issuing for a
whitelist of TLDs; it doesn't restrain gTLDs.

If all CAs are restricted to a whitelist of TLDs, then the number of CAs
who could issue for a new gTLD would be 0. That's clearly bad. So some
CAs need to be unconstrained. How do we choose which, and how is that
not massively market-distorting?

> For CAs whose business model is designed for a specific subset of the
> web, a name constraint program could clear a path to entry without
> endangering domains who are not designed to be served by that CA.

I don't think anyone opposes CAs requesting voluntary name constraints.
:-) The proposal here is that we impose them on CAs without their consent.

> This is a great point, and suggests that name constraint updates
> should either a) have a clear and defined update path, or b) only be
> implemented when the chances of updates are low.

b) sounds like "predict the future business models of lots of companies".

> * Reduce the friction for niche CAs to be included in the first place.
> For tightly constrained CAs, it's plausible to imagine that the
> operational complexity they need to demonstrate can be reduced.

Which TLDs do you think it would be OK to issue for with "reduced
operational complexity" on the part of the CA? Which TLDs are in your
"less valuable" bucket?

> Constraining the current major unrestricted CAs seems thorny. But the
> clearest example to me of the benefit of name constraints is the US
> government's FPKI application:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=478418
> https://bug478418.bugzilla.mozilla.org/attachment.cgi?id=8561777
>
> While this is not finalized, and the specific constrained domains in
> the application are not accurate (.gov.us is not a public suffix, or
> in use at all), name constraints seem to be a highly practical way of
> bringing government CAs into the trusted root program.

As Ryan asked: why do we want to do that?

> While the US government is unique in owning their own TLDs, there are
> other government CAs already in the program, and pending application,
> that could benefit from constraints. I know an animating motivation
> for the constraint program is the compromise of a French government
> intermediate certificate.[1]

The idea of forcibly constraining government CAs to issue for their own
TLDs is, to me, a lot more plausible. For one thing, many government CAs
don't have the same audits that non-governmental CAs do.

The difficulty here is defining "governmental", particularly in
countries where the "N" in "NGO" is silent.

Gerv

Gervase Markham

Mar 10, 2015, 2:47:49 PM
to mozilla-dev-s...@lists.mozilla.org
On 06/03/15 16:26, Richard Barnes wrote:
> https://wiki.mozilla.org/CA:NameConstraints

I am unconvinced by the wisdom of this proposal.

Others have made some of the same points, but my major objection is that
I feel that this will calcify the trust list and entrench existing
players, because inevitably the "ubiquity" of CAs whose constraints are
released under the release process will suffer for years afterwards due
to old clients, and so their offerings will be uncompetitive.

Other objections include:

* This plan can't cope with the flood of new gTLDs without creating
some "unconstrained" CAs, at which point we've just handed over a
market oligopoly. How do we decide who gets that?

* It can't both be true that "The name constraints for a root should
allow issuance for any name that the CA wishes to issue for" and that
"The content of the permitted suffix list will be determined by
community consensus". Which is it?

"Too big to fail" is a problem - the solution is more CAs and more
competition. "Weakest CA breaks everything" is a problem - the solution
is fewer CAs and less competition. How to reconcile? Many solutions
which solve one make the other worse. This is a solution which addresses
"weakest CA breaks everything" but makes "too big to fail" worse.

There are solutions which address these problems without aggravating
them. CAA is one example. But this isn't.

I _do_ think we should do the following:

* Encourage CAs to volunteer for name constraints, as HARICA did,
starting by approaching those CAs who have never issued for .com;

* Forcibly name constrain _government_ CAs to their own TLDs.
Government CAs are a different breed; slower to update, different
audits, no competition to put them out of business.

Gerv

Ryan Sleevi

Mar 10, 2015, 6:15:35 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
On Tue, March 10, 2015 11:33 am, Gervase Markham wrote:
> The idea of forcibly constraining government CAs to issue for their own
> TLDs is, to me, a lot more plausible. For one thing, many government CAs
> don't have the same audits that non-governmental CAs do.
>
> The difficulty here is defining "governmental", particularly in
> countries where the "N" in "NGO" is silent.

I think you touched on what I was (somewhat intentionally) skirting.

Currently, the Mozilla inclusion policy allows several audit schemes
- ETSI TS 101 456 v1.4.3+
- ETSI TS 102 042 v2.3.1+
- ISO 21188:2006
- WebTrust Principles for CAs 2.0+ & WebTrust SSL BRs v1.1+
- WebTrust Principles for CAs EV 1.4+

Now, ISO 21188 is a bit anachronistic. I'm surprised we still allow it;
none of the currently included CAs follow it, and I don't think anyone
would miss it if it were dropped.

WebTrust is handled by AICPA/CPA Canada, and the list of global
practitioners is at
http://www.webtrust.org/licensed-webtrust-practitions-international/item64419.aspx
. Of course, anyone can become licensed, per
http://www.webtrust.org/signing-up-for-the-trust-services-program/item64422.aspx

ETSI audits are a bit different. Like WebTrust, a set of criteria are
developed (derived) from the CA/Browser Forum requirements. These are then
enshrined into the ETSI TS guidelines. These guidelines then become
incorporated into local legislative frameworks, which is also responsible
for establishing the auditor qualifications. This is already a mess for
those familiar with it, as some ETSI TS adopting countries have not set up
qualified auditors, and thus CAs within that country need to get audits
from qualified international practitioners. A better summary can be seen
on https://cabforum.org/2014/10/16/ballot-135-etsi-auditor-qualifications/

I mention all of this because it's easy to suggest that the ETSI TS
framework allows for governments to set up their national accreditation
body to allow an NGO-but-really-GO to perform the audits. It's also true
that many of the recent issues have been with ETSI TS-audited entities.

However, a quick scan of https://wiki.mozilla.org/CA:IncludedCAs shows
that there are plenty of government CAs (e.g. the Government of Hong Kong,
the Government of Spain) that are WebTrust audited, and by Big Firms
(whether that increases or decreases your trust is a different matter),
just as there are a large number that are ETSI TS audited.

Why do I say all this? If you wanted to argue that government CAs don't
have the same audits as non-government CAs - a question of ETSI vs
WebTrust - one solution would be to simply disallow ETSI audits, on the
basis that the government operating the CA gets to (effectively, even if
not in principle) define the audit rigor. I'm sure Erwann would chime in
on why that might be unwise, but it does highlight that if we really do
believe that government CAs are not audited to the same level as
non-governmental CAs, then shouldn't we try to reduce the audit
discrepancy, rather than try to limit the issuance? Wouldn't that serve
more users AND offer a more agile path for future changes?

Peter Kurrasch

Mar 12, 2015, 6:54:45 PM
to Richard Barnes, mozilla-dev-s...@lists.mozilla.org
I think some good work has been done here but the proposed solution is the wrong way to go since it establishes boundaries for CAs that are "locked in" at the browser, either in the form of a special table or in the trusted root store itself. While a method is provided for making changes to the boundaries, those changes won't be applied retroactively, which means the older versions will be locked out as a CA expands its service offering.

This backwards compatibility problem is a fatal flaw, but I have an alternative in mind: establish and enforce boundaries within the intermediates. The browser can enforce a policy that a technical constraint be specified somewhere between the root and the end cert. Where exactly in the chain that happens is not important so long as it's found and the boundaries established are not violated. The absence of the constraint would flag an error. Or, perhaps, a special table would be used to provide "default" boundaries.
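
A rough sketch of the check Peter describes, with the chain modelled as plain data and RFC 5280 dNSName matching simplified to suffix comparison; all names are hypothetical.

```python
# Rough sketch: require that some CA certificate between root and leaf
# carries a DNS name constraint, and that the leaf's names satisfy it.
# The chain is plain data, not parsed certificates.
def dns_name_permitted(name, constraint):
    return name == constraint or name.endswith("." + constraint)

def chain_has_effective_constraint(chain, leaf_names):
    for ca_cert in chain:  # ordered root first, then intermediates
        permitted = ca_cert.get("permitted_dns")
        if permitted is None:
            continue  # this CA certificate is unconstrained; keep looking
        if all(any(dns_name_permitted(n, p) for p in permitted) for n in leaf_names):
            return True
        return False  # a constraint exists but the leaf's names violate it
    return False  # no constraint anywhere in the chain: flag an error

chain = [
    {"subject": "Example Root CA", "permitted_dns": None},
    {"subject": "Example Issuing CA", "permitted_dns": ["example.gov"]},
]
print(chain_has_effective_constraint(chain, ["www.example.gov"]))  # True
print(chain_has_effective_constraint(chain, ["www.example.com"]))  # False
```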

The advantage here is that CAs would maintain the flexibility to expand or contract the DNS space in which they choose to operate. Presumably this would not be a high-frequency occurrence. This flexibility not only helps the CAs but it can also help give cert purchasers more options when choosing a CA. That's the theory, anyway.

It is certainly a good idea to encourage any CA to self-constrain but we do need a way to forcibly constrain all CAs. Allowing any CA to opt out defeats the whole purpose.

Thoughts?


From: Richard Barnes
Sent: Friday, March 6, 2015 6:26 PM
Subject: Name Constraints

Hey all,

I've been doing some research on the potential benefits of adding name
constraints into the Mozilla root program. I've drafted an initial
proposal and put it on a wiki page:

https://wiki.mozilla.org/CA:NameConstraints

Questions and comments are very welcome. There's a lot of details to work
out here, but I think there's some significant benefit to be realized.

--Richard

Florian Weimer

Mar 17, 2015, 7:36:40 AM
to Richard Barnes, mozilla-dev-s...@lists.mozilla.org
* Richard Barnes:

> I've been doing some research on the potential benefits of adding name
> constraints into the Mozilla root program. I've drafted an initial
> proposal and put it on a wiki page:
>
> https://wiki.mozilla.org/CA:NameConstraints
>
> Questions and comments are very welcome.

A PKIX-compliant implementation of Name Constraints is not effective
in the browser PKI because these constraints are not applied to the
Common Name.

NSS used to be non-compliant (and deliberately so), so the constraints
do work there, but I don't know if that's still the case.
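
A simplified illustration of the gap Florian describes, with hypothetical names: a dNSName constraint is evaluated against subjectAltName entries, so a hostname carried only in the Common Name escapes a strictly PKIX-compliant check, while an implementation that also applies the constraint to the CN (as Florian says NSS deliberately did) rejects it.

```python
# Simplified illustration: the constraint is applied to subjectAltName
# dNSName entries; whether the Common Name is also checked depends on the
# implementation. All names are hypothetical.
def passes_dns_constraint(san_dns_names, common_name, permitted, constrain_cn=False):
    def ok(name):
        return name == permitted or name.endswith("." + permitted)
    names = list(san_dns_names)
    if constrain_cn and common_name is not None:
        names.append(common_name)
    return all(ok(n) for n in names)

# Leaf with no SAN and a hostname only in the CN, chaining to a CA
# constrained to example.gov:
print(passes_dns_constraint([], "evil.example.com", "example.gov"))
# True: a strictly PKIX-compliant checker never applies the constraint
print(passes_dns_constraint([], "evil.example.com", "example.gov", constrain_cn=True))
# False: an implementation that also constrains the CN rejects it
```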

Gervase Markham

Mar 19, 2015, 9:55:01 AM
to Peter Kurrasch, Richard Barnes
On 12/03/15 22:54, Peter Kurrasch wrote:
> This backwards compatibility problem is a fatal flaw, but I have an
> alternative in mind: establish and enforce boundaries within the
> intermediates. The browser can enforce a policy that a technical
> constraint be specified somewhere between the root and the end cert.
> Where exactly in the chain that happens is not important so long as it's
> found and the boundaries established are not violated. The absence of
> the constraint would flag an error. Or, perhaps, a special table would
> be used to provide "default" boundaries.

What would prevent that constraint being extremely lax?

Or what would prevent a CA issuing one intermediate for all the TLDs
starting a-m, and another for all of the TLDs starting n-z?

The mere presence of a restriction is not a meaningful restriction :-)

> It is certainly a good idea to encourage any CA to self-constrain but we
> do need a way to forcibly constrain all CA's. Allowing any CA to opt-out
> defeats the whole purpose.

And not allowing CAs to opt out means we are forcibly constraining the
business areas in which particular CAs may operate. I shudder at the
thought of the task of trying to do that in a fair manner. (And I don't
think "preserve the status quo" is fair.)

Gerv

Peter Kurrasch

Mar 24, 2015, 1:02:03 AM
to Gervase Markham, Richard Barnes, mozilla-dev-s...@lists.mozilla.org
Hi Gerv,

Obviously you are correct; it wouldn't make much sense to say "please constrain yourself to everything...or almost everything!"

I think the only way for my alternative to work is to develop a system of increased scrutiny of the intermediates: a more rigorous set of policy agreements and technical verifications. I am conflicted about that, since on the one hand it seems like the totally wrong approach, yet on the other hand I wonder if it's inevitable that we'll basically have to come up with such a system anyway.

Some thoughts:

1) As a first step on the path to fairness, perhaps there can be agreement that the goal of any name constraint policy should be the idea that a single root does not "get the whole internet". Maybe a whole CA organization might, but a single root should not. Could everyone agree?

2) I picture a broadcast mechanism along the lines of OneCRL that would/could play a role in helping determine when a root's scope has become too broad. This mechanism combined with live browsing data could flag potential problems and conflicts with the policy agreements.

3) I was hoping that by now someone would have spoken up here regarding how much of an appetite some of the larger CAs have for discussing constraints. I assume the smaller ones are open to the idea and some are already doing it, but the bigger players...?

4) Do any CAs have public announcements or policies regarding newer TLDs and their plans to issue certs for them? Has anyone said "we will not issue certs for the .ninja domain"?


I guess a final thought is that the work Richard (?) did to come up with an initial set of constraints for the trusted roots is a good place to start the conversation of how to fairly divvy up the DNS space. It's like saying to the CAs, "since these are the areas where your business is, why not just constrain yourself to these TLDs?" As long as it's not carved in stone it should be a reasonable way to go...?


From: Gervase Markham
Sent: Thursday, March 19, 2015 8:53 AM
To: Peter Kurrasch; Richard Barnes; mozilla-dev-s...@lists.mozilla.org
Subject: Re: Name Constraints

On 12/03/15 22:54, Peter Kurrasch wrote:
> This backwards compatibility problem is a fatal flaw, but I have an
> alternative in mind: establish and enforce boundaries within the
> intermediates. The browser can enforce a policy that a technical
> constraint be specified somewhere between the root and the end cert.
> Where exactly in the chain that happens is not important so long as it's
> found and the boundaries established are not violated. The absence of
> the constraint would flag an error. Or, perhaps, a special table would
> be used to provide "default" boundaries.

What would prevent that constraint being extremely lax?

Or what would prevent a CA issuing one intermediate for all the TLDs
starting a-m, and another for all of the TLDs starting n-z?

The mere presence of a restriction is not a meaningful restriction :-)

> It is certainly a good idea to encourage any CA to self-constrain but we
> do need a way to forcibly constrain all CA's. Allowing any CA to opt-out
> defeats the whole purpose.

Gervase Markham

Mar 24, 2015, 5:26:26 AM
to Peter Kurrasch, Richard Barnes
On 24/03/15 05:01, Peter Kurrasch wrote:
> 1) As a first step on the path to fairness, perhaps there can be
> agreement that the goal of any name constraint policy should be the idea
> that a single root does not "get the whole internet". Maybe a whole CA
> organization might, but a single root should not. Could everyone agree?

I don't agree on that, because I don't yet think that a forced name
constraints policy for all CAs is a good idea at all.

Your proposal might reduce the risk to some degree, but not much. If I
broke into Foo CA's issuing system, and Foo CA has two roots, one for
one half of the internet, and the other for the other half, then I can
just use whichever half I need. This provides extra protection only in
the case where a CA is part-compromised and it happens to be the wrong
part for what the attacker wants to do.

The other problem is that some CAs don't have more than one root, and in
fact it's been both Mozilla and Microsoft policy to encourage CAs not to
multiply roots without end. I heard a soft limit of 3 being mentioned at
one point for Microsoft's program, although that may have been a rumour.
Certainly, some CAs in our program only have a single root. Do they get
penalized by being given only half the Internet because of that?

> 2) ‎I picture a broadcast mechanism along the lines of OneCRL that
> would/could play a role in helping determine when a root's scope has
> become too broad. This mechanism combined with live browsing data could
> flag potential problems and conflicts with the policy agreements.

This all sounds like a massive technical effort and an administrative
nightmare, as well as affecting reliability (as all complex systems do).
You would need to make a clear case for a significant improvement in the
security of the internet, realisable in the short to medium term, in
order for something like this to even be contemplated.

> I guess a final thought is that the work Richard (?) did to come up with
> an initial set of constraints for the trusted roots is a good place to
> start‎ the conversation of how to fairly divvy up the DNS space. It's
> like saying to the CA's, "since these are the areas where your business
> is, why not just constrain yourself to these TLD's?" As long as it's not
> carved in stone it should be a reasonable way to go...?

If you were running a business with, say, 10 different product lines,
and the government came along and said "you're currently making these 10
different products; we are going to pass a law which says you can't make
any other products without making it public that you intend to move into
a new area of business, asking us for permission and, if we decide to
give it, waiting a year or so", how would you react?

Gerv

Peter Kurrasch

Mar 24, 2015, 8:40:25 AM
to Gervase Markham, Richard Barnes, mozilla-dev-s...@lists.mozilla.org
I'm confused because it sounds like you're advocating for the status quo but I didn't think that was your position?

  Original Message  
From: Gervase Markham
Sent: Tuesday, March 24, 2015 4:25 AM
To: Peter Kurrasch; Richard Barnes; mozilla-dev-s...@lists.mozilla.org
Subject: Re: Name Constraints

On 24/03/15 05:01, Peter Kurrasch wrote:
> 1) As a first step on the path to fairness, perhaps there can be
> agreement that the goal of any name constraint policy should be the idea
> that a single root does not "get the whole internet". Maybe a whole CA
> organization might, but a single root should not. Could everyone agree?

I don't agree on that, because I don't yet think that a forced name
constraints policy for all CAs is a good idea at all.

Your proposal might reduce the risk to some degree, but not much. If I
broke into Foo CA's issuing system, and Foo CA has two roots, one for
one half of the internet, and the other for the other half, then I can
just use whichever half I need. This provides extra protection only in
the case where a CA is part-compromised and it happens to be the wrong
part for what the attacker wants to do.

The other problem is that some CAs don't have more than one root, and in
fact it's been both Mozilla and Microsoft policy to encourage CAs not to
multiply roots without end. I heard a soft limit of 3 being mentioned at
one point for Microsoft's program, although that may have been a rumour.
Certainly, some CAs in our program only have a single root. Do they get
penalized by being given only half the Internet because of that?

> 2) ‎I picture a broadcast mechanism along the lines of OneCRL that
> would/could play a role in helping determine when a root's scope has
> become too broad. This mechanism combined with live browsing data could
> flag potential problems and conflicts with the policy agreements.

This all sounds like a massive technical effort and an administrative
nightmare, as well as affecting reliability (as all complex systems do).
You would need to make a clear case for a significant improvement in the
security of the internet, realisable in the short to medium term, in
order for something like this to even be contemplated.

> I guess a final thought is that the work Richard (?) did to come up with
> an initial set of constraints for the trusted roots is a good place to
> start‎ the conversation of how to fairly divvy up the DNS space. It's
> like saying to the CA's, "since these are the areas where your business
> is, why not just constrain yourself to these TLD's?" As long as it's not
> carved in stone it should be a reasonable way to go...?

Gervase Markham

unread,
Mar 24, 2015, 9:20:56 AM3/24/15
to Peter Kurrasch, Richard Barnes
On 24/03/15 12:40, Peter Kurrasch wrote:
> I'm confused because it sounds like you're advocating for the status
> quo but I didn't think that was your position?

I am not in favour of non-consensual name constraints for CAs in
general. I think it would either be ineffective in improving security
(in milder forms) or lead to Mozilla making massive interventions into
the CA market which cannot possibly be done in a fair and reasonable
manner, along with a massive administrative burden (in stronger forms).

I am in favour of verbally encouraging all CAs to accept consensual name
constraints, and I am in favour of name-constraining a set of government
CAs to the TLD(s) associated with their country. It's a point of
discussion as to whether this is all government CAs simply because they
are controlled by governments, or just those who have not had the
standard audits, because they don't meet the standard criteria.

Gerv

Peter Kurrasch

unread,
Mar 24, 2015, 5:13:09 PM3/24/15
to Gervase Markham, Richard Barnes, mozilla-dev-s...@lists.mozilla.org
Looping the mailing list back in; it was dropped by mistake....

The issues at hand are: Who will choose to self-constrain? Who should be forced to constrain? Who benefits from any constraints made?

To that last question, it's a bit of a paradox because we are asking an entity to take action that has minimal benefit to itself. The benefit from the constraints actually goes to everyone else on the internet!

True, an argument could be made that a CA which constrains itself will be less of a target for attack because its ability to issue fraudulent certs is, in theory, reduced. While I don't disagree with that argument, I don't find it all that persuasive because, quite apart from whether a CA is a desirable target, once it's been compromised the result is the same: everyone within that CA's sphere of influence is at risk. If that sphere of influence is "the whole internet" we now have a big problem. If that sphere is only "everyone in .cn" then I'm still concerned, but less so.

So that's the thinking behind my previous suggestion that "nobody gets the whole internet". A compromise or sloppiness or deliberate fraud at one CA should not mean that everyone is potentially at risk.


Now, as to who will want to self-constrain, I don't think it's a very long list. Anyone who chooses to do so should be lauded, of course, but they are basically doing it out of the goodness of their hearts. As I said, the benefit doesn't really go to the CA, and since there is a potential loss of business if they can't issue certs for some customers, I really don't see a strong motivation to self-constrain.


As to who should be forced to constrain, this is controversial. I would argue that everyone should be forced, but that has certain problems. One can argue that only government-run and certain other CA's should be forced, but then we are put in the position of having to decide objectively which ones are more trustworthy than others. That can be a tricky path to navigate and doesn't change the underlying threat: that any CA can be a victim of outright attack, sloppy operations, deliberate bad acts, and even simple mistakes.

So while it may be safer, forcing constraints on everyone creates problems. And while it may be more doable, forcing only some CA's might not have enough of an impact. It's a giant risk/reward calculation. 


Hopefully this better explains where I'm coming from.

From: Gervase Markham
Sent: Tuesday, March 24, 2015 12:37 PM

On 24/03/15 17:26, Peter Kurrasch wrote:
> Be careful you don't invalidate your whole argument: that people
> should self-constrain even though the security benefit is minimal.

It depends on whose perspective. The security benefit to the CA system
of HARICA, the Greek academic CA, name-constraining itself to .gr, .org
and .net (I think) was minimal. But the security benefit to HARICA
itself was significant, because if they can't issue for .com it makes
them much less of a target.

So I think some smaller CAs may be open to voluntarily taking on name
constraints.

> I'm also not sure I see the reason to target government-run CA's?

You really don't see any reason why people might be less trusting of
government-run CAs? :-)

Also, we have an audit exception for government-run CAs. They often have
internal audits only, and we can't easily tell them to go away and get a
WebTrust audit. So we might decide that in order to take advantage of
that exception, you have to be name constrained.

Gerv

Gervase Markham

unread,
Mar 25, 2015, 7:54:55 AM3/25/15
to mozilla-dev-s...@lists.mozilla.org
On 24/03/15 21:12, Peter Kurrasch wrote:
> As to who should be forced to constrain, this is controversial. I would
> argue that everyone should be forced, but that has certain problems. One
> can argue that only government-run and certain other CA's should be
> forced but then we are put in the position of having to decide
> objectively which ones are more‎ trustworthy than others. That can be a
> tricky path to navigate and doesn't change the underlying threat: that
> any CA can be a victim of outright attack, sloppy operations, deliberate
> bad acts, and even simple mistakes.

Forcing everyone to constrain does not solve this problem of having to
decide who is more trustworthy. It just transfers it.

All CAs want to issue for .com. Which ones do you allow to do so? (Let's
say for the sake of argument that they have all already done so in the
past.)

Gerv

Peter Kurrasch

unread,
Mar 25, 2015, 11:59:44 PM3/25/15
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
Perhaps I chose my words poorly because my intention actually was to avoid having to pass judgment at all. Instead of saying to a CA "we don't trust you enough, please constrain" I was hoping for something along the lines of "everybody is asked to constrain to make the internet safer for everyone".

In terms of who gets to issue for .com, I wouldn't impose a limit on who can do it, just that you have to tell us you're doing it. If an intermediate were to be constrained to .com, .net, and .org and nothing else, I would be fine with that. That would actually be quite an accomplishment if we could get every CA to just agree to that much.
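
For the sake of illustration, here is roughly what such a constrained intermediate could look like, sketched with the Python cryptography library (the keys, names and validity period below are placeholders I made up, not any real CA's):

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Placeholder keys and names, for illustration only.
    root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    intermediate_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    root_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
    intermediate_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Constrained CA")])

    now = datetime.utcnow()
    intermediate_cert = (
        x509.CertificateBuilder()
        .subject_name(intermediate_name)
        .issuer_name(root_name)
        .public_key(intermediate_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + timedelta(days=5 * 365))
        # A CA certificate that may only issue end-entity certificates.
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # RFC 5280 dNSName constraints: "com" is satisfied by com itself and by
        # any name formed by adding labels on the left (e.g. www.example.com).
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[
                    x509.DNSName("com"),
                    x509.DNSName("net"),
                    x509.DNSName("org"),
                ],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(root_key, hashes.SHA256())
    )

Any cert that intermediate then issued for a name outside .com, .net or .org should be rejected by a client that enforces the extension.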


  Original Message  
From: Gervase Markham
Sent: Wednesday, March 25, 2015 6:54 AM
To: mozilla-dev-s...@lists.mozilla.org
Subject: Re: Name Constraints

On 24/03/15 21:12, Peter Kurrasch wrote:
> As to who should be forced to constrain, this is controversial. I would
> argue that everyone should be forced, but that has certain problems. One
> can argue that only government-run and certain other CA's should be
> forced but then we are put in the position of having to decide
> objectively which ones are more‎ trustworthy than others. That can be a
> tricky path to navigate and doesn't change the underlying threat: that
> any CA can be a victim of outright attack, sloppy operations, deliberate
> bad acts, and even simple mistakes.

Forcing everyone to constrain does not solve this problem of having to
decide who is more trustworthy. It just transfers it.

All CAs want to issue for .com. Which ones do you allow to do so? (Let's
say for the sake of argument that they have all already done so in the
past.)

Gerv

Gervase Markham

unread,
Mar 26, 2015, 6:08:45 AM3/26/15
to Peter Kurrasch
On 26/03/15 03:59, Peter Kurrasch wrote:
> Perhaps I chose my words poorly because my intention actually was to
> avoid having to pass judgment at all. Instead of saying to a CA "we
> don't trust you enough, please constrain" I was hoping for something
> along the lines of "everybody is asked to constrain to make the
> internet safer for everyone".

But you say "asked" - and that's the entire difference between my
position and yours. I am saying "'ask' is OK; 'require' is not". You are
arguing for 'require'.

> In terms of who gets to issue for .com, I wouldn't impose a limit of
> who can do it, just that you have to tell us you're doing it. If a
> intermediate were to be constrained to .com, .net, and .org and
> nothing else, I would be fine with that. That would actually be quite
> an accomplishment if we could get every CA to just agree to that
> much.

It depends on the configuration of the CA's systems, but I'm not
convinced that a CA significantly improves its security posture by
having 10 intermediates which split the entire DNS space up into 10
pieces, rather than one. Those certs may well all be tied to the same
issuing system.

Also, it means they would have to cut a new intermediate every month, at
the moment, if they wanted to serve the new gTLD market.

Gerv

Peter Kurrasch

unread,
Mar 26, 2015, 9:19:09 AM3/26/15
to mozilla-dev-s...@lists.mozilla.org, Gervase Markham
Yes, I am arguing for 'require' so I'll restate: "Everybody is required to constrain in order to make the Internet safer for everyone. CA's may change their constraints at a later date, but you have to tell us."

As I stated previously, the security benefit is not to the CA itself but to everyone else on the Internet--regular, everyday users. When a CA system becomes compromised, the bad actor will say: "Cool! Now, how much damage can I inflict?" We should have a way to impose boundaries intended to limit that damage.

My whole viewpoint regarding name constraints is that it is a solvable problem. It's not an easy problem to solve, but it can be done. This whole debate, though, is starting to get tedious because, while I can make any number of suggestions (many of which would be controversial!), what's missing here is how much appetite Mozilla has to change the status quo.

So, how much work does Mozilla feel like doing?

  Original Message  
From: Gervase Markham
Sent: Thursday, March 26, 2015 5:07 AM

On 26/03/15 03:59, Peter Kurrasch wrote:
> Perhaps I chose my words poorly because my intention actually was to
> avoid having to pass judgment at all. Instead of saying to a CA "we
> don't trust you enough, please constrain" I was hoping for something
> along the lines of "everybody is asked to constrain to make the
> internet safer for everyone".

But you say "asked" - and that's the entire difference between my
position and yours. I am saying "'ask' is OK; 'require' is not". You are
arguing for 'require'.

> In terms of who gets to issue for .com, I wouldn't impose a limit of
> who can do it, just that you have to tell us you're doing it. If a
> intermediate were to be constrained to .com, .net, and .org and
> nothing else, I would be fine with that. That would actually be quite
> an accomplishment if we could get every CA to just agree to that
> much.

Gervase Markham

unread,
Mar 26, 2015, 11:45:49 AM3/26/15
to Peter Kurrasch
On 26/03/15 13:18, Peter Kurrasch wrote:
> So, how much work does Mozilla feel like doing?

You know my views; other Mozilla people will have to speak for
themselves. And then we'll have an argument to see whose view prevails
:-) However, this dialogue was very useful in exploring some of the
ramifications of a constraints policy, so thank you.

Gerv

Brian Smith

unread,
Apr 2, 2015, 10:58:34 PM4/2/15
to Florian Weimer, mozilla-dev-s...@lists.mozilla.org, Richard Barnes
Florian Weimer <f...@deneb.enyo.de> wrote:
> A PKIX-compliant implementation of Name Constraints is not effective
> in the browser PKI because these constraints are not applied to the
> Common Name.
>
> NSS used to be non-compliant (and deliberately so), so the constraints
> do work there, but I don't know if that's still the case.

mozilla::pkix does apply name constraints to domain names in the CN attribute.

https://mxr.mozilla.org/mozilla-central/source/security/pkix/test/gtest/pkixnames_tests.cpp#2186

Note that this is "PKIX-compliant" because RFC 5280 lets an
implementation apply additional constraints on top of the RFC 5280
rules.
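
To make the distinction concrete, the dNSName constraint check is roughly the following (a simplified sketch, not the mozilla::pkix code; the function names are made up, and it ignores IDNs, IP addresses and other corner cases):

    # A name satisfies a dNSName constraint if zero or more labels can be
    # prepended to the constraint to obtain the name (RFC 5280, 4.2.1.10).
    def dns_name_matches(name, constraint):
        name = name.lower().rstrip(".")
        constraint = constraint.lower().rstrip(".")
        return name == constraint or name.endswith("." + constraint)

    def within_permitted(name, permitted_subtrees):
        return any(dns_name_matches(name, c) for c in permitted_subtrees)

    # A minimal PKIX check only looks at SAN dNSNames; the stricter behaviour
    # described above also treats a host name found in the subject CN as if
    # it were a dNSName.
    def leaf_ok(san_dns_names, cn, permitted_subtrees, also_check_cn):
        names = list(san_dns_names)
        if also_check_cn and cn is not None:
            names.append(cn)
        return all(within_permitted(n, permitted_subtrees) for n in names)

    # An issuer constrained to .gr, faced with a cert whose only name is a CN:
    print(leaf_ok([], "www.example.com", ["gr"], also_check_cn=False))  # True
    print(leaf_ok([], "www.example.com", ["gr"], also_check_cn=True))   # False

The first result illustrates Florian's point about a strictly PKIX-minimal check; the second is the additional check that closes that gap.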

Cheers,
Brian