On Sun, Mar 8, 2015 at 3:49 PM, Ryan Sleevi
<ryan-mozde...@sleevi.com> wrote:
> On Sun, March 8, 2015 11:53 am, Eric Mill wrote:
>> That comes down to how this program is implemented. The intent seems
>> pretty clearly to be to identify the space CAs are already issuing in.
>> Perhaps newer gTLDs merit some unrestrained time in the wild before
>> they're constrained in this way -- or perhaps it's simpler to make the
>> gradations more black and white (e.g. "unrestricted" vs "niche" CAs,
>> and avoiding "somewhat unrestricted" or "nearly unrestricted").
>>
>> For CAs whose business model is designed for a specific subset of the
>> web, a name constraint program could clear a path to entry without
>> endangering domains that are not meant to be served by that CA.
>
> One reason is because it encourages calcifying the trust space ("If you
> were there already, you can stay there; if you weren't, you're now kept
> out")
It _could_ be done that way, but definitely doesn't have to be.
> Another reason is it encourages the trust store to be used for regional or
> arbitrary distinctions ("not designed to be served by that CA"). This
> amounts to naught more than recognizing borders on the Internet, a
> somewhat problematic practice, for sure. That is, for every "constrained"
> CA that you can imagine that ONLY wants to issue for a .ccTLD, you can
> also imagine the inverse, where ONLY a given CA is allowed to issue for
> that .ccTLD. The reasoning behind the two are identical, but the
> implications of the latter - to online trust - are far more devastating.
I can imagine the inverse, but that doesn't at all mean it will
necessarily occur. I share some of your misgivings about scanning the
gTLD space and bluntly shaving off space wherever you can. But the
program could be as targeted as the trusted root program wants it to
be.
You envision a more constricted market, but properly managed, this
program could expand the market and increase competition for popular TLDs.
>> This is a great point, and suggests that name constraint updates
>> should either a) have a clear and defined update path, or b) only be
>> implemented when the chances of updates are low.
>
> It's nigh impossible to quantify (b), given the rate of gTLD adoption. It
> also favours incumbents who have the ability to issue for those domains,
> since any upstarts need to demonstrate need/desire to issue for the new
> gTLDs.
The program would clearly need to go in stages, testing assumptions
along the way. Letting the gTLD market mature, while the program
tackles the traditional TLD space, could be one way to go about it.
>> * Add friction to applicants that claim in their initial application
>> to serve a specific subset of the web, and then wish to expand their
>> issuance surface area after their inclusion.
>
> Why is this a good thing, and why should it be seen as such?
Because an initial application should describe what a CA wishes to do,
and their inclusion be decided based on that description. If a CA
applies to the trusted root program and declares they're only
interested in selling *.club domains, or *.yahoo.com domains, changing
their mind on that after inclusion should come at a real but
non-prohibitive cost.
CAs are charged, collectively, with guarding the trust of the web.
This doesn't seem like an unreasonable piece of friction to add.
>> * Reduce the friction for niche CAs to be included in the first place.
>> For tightly constrained CAs, it's plausible to imagine that the
>> operational complexity they need to demonstrate can be reduced.
>
> I strongly disagree with this sentiment. The holders of a .ccTLD domain
> have just as much desire and reason to have strong security as the holders
> of a .com domain. This idea that somehow we can be less stringent with a
> .ccTLD-constrained CA is downright dangerous, because it suggests now a
> balkanization of web security as a desirable outcome.
I don't think I was sufficiently clear. If a CA comes and wants to be
able to issue certificates for *.us or *.ly, or any ccTLD which is
operated openly, that shouldn't imply any reduction in auditing or
operational sophistication as compared to an unconstrained CA.
TLDs and public suffixes like *.gov.uk, *.gouv.fr, *.gov, or *.mil --
these are not operated openly. I know that at least in the case of
.gov, the registration process itself is centrally managed and is
effectively "organizationally validated". There are legitimate reasons
that a CA constrained to these domains could require less burdensome
auditing practices and agree to take on more liability for those
domains. It's up to each individual domain to choose to use that CA,
because this proposal is not intending to restrict which CAs a domain
can use.
Decisions like this should be based on the actual operational
realities of a domain. It's not about geographic location, or
balkanization of the web, even though government domains are
effectively geographic in nature. If certain name constraints
realistically identify -- now and in the future -- a centrally managed
piece of the domain space, then a CA targeting that rigidly defined
space should be evaluated on what additional levels of trust are
required.
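For concreteness, this is roughly what such a constraint looks like at
the X.509 level. Below is a sketch of an OpenSSL v3 extension section
for a hypothetical government-constrained intermediate; the section
name and the permitted subtrees are illustrative, not a real CA's
profile:

```ini
[ v3_gov_constrained_ca ]
# Standard CA extensions for an intermediate certificate.
basicConstraints = critical, CA:true
keyUsage         = critical, keyCertSign, cRLSign
# RFC 5280 Name Constraints, marked critical: this CA can only issue
# certificates for names under the permitted DNS subtrees.
nameConstraints  = critical, permitted;DNS:.gov, permitted;DNS:.mil
```

Conforming clients reject any leaf certificate chaining to such a CA
whose names fall outside the permitted subtrees, which is what makes
the constraint enforceable rather than merely contractual.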
>
>> By contrast, name constraints protect *everyone*, even if the domain
>> owner has never heard of them, or heard of CT, CAA, or PKP.
>
> I'm well aware of the distinctions between CT/CAA/PKP. The issue here is
> simply one that the existing measures exist for site operators. The
> argument for why we need "yet another" - one that is centrally managed,
> slow to update, inherently political, and lacking firm criteria - is
> somewhat problematic.
"centrally managed, slow to update, inherently political, and lacking
firm criteria" already describes the trusted root program. With the
way web PKI works today, I don't see how it can be otherwise. If name
constraints change those politics in a way that increases protections
for the web's everyday users, this seems like an improvement.
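For comparison, CAA -- one of the site-operator measures mentioned
above -- only protects a domain whose owner explicitly publishes a DNS
record like the following (the CA identifier here is invented for
illustration):

```
example.gov.  IN  CAA  0 issue "constrained-ca.example"
```

That opt-in requirement is exactly the gap: a name constraint in the
root program protects a domain whether or not its owner has ever
published such a record.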
>> While this is not finalized, and the specific constrained domains in
>> the application are not accurate (.gov.us is not a public suffix, or
>> in use at all), name constraints seem to be a highly practical way of
>> bringing government CAs into the trusted root program.
>
> Stop right here.
>
> Why is this a good thing?
> Why is it in the interest of Mozilla's users?
> Why is it in the interest of the Web at large?
>
> There's a fundamental mistake that assumes bringing the government CAs in
> (of which the US FPKI is but one example) is somehow a good thing.
An unconstrained government CA doesn't meet my definition of "a good
thing" either.
However, I don't see why the current group of CAs that make up the
trusted root store today should be assumed to hold the interests of
me, the Web, or Mozilla's users at heart.
Instead, I see some of the best people and organizations who _do_ have
the Web's best interests at heart spending tremendous amounts of time
and energy on convincing an organization in the existing trust store
to cross-sign them (e.g. Let's Encrypt) -- or on building a tolerably
usable and automatable interface as a reseller on top of other
CA/reseller organizations that could yank their relationship at any
time (e.g. SSLMate).
The private sector has created a distinctly uninspiring,
security-hostile situation for managing the web PKI.
> The
> closest it comes is "Well, the government said users must use the Federal
> PKI, ergo it's nice that they CAN use the federal PKI", but that's simply
> an argument that private industry should exist to enable governments'
> legislative whims on technology.
The US government has not, to my knowledge, mandated anything about
what CAs domains can use for web PKI. Which I'm extremely glad of,
because it means that in my professional work (which, full disclosure,
is currently for an independent civilian agency in the US government)
I can take full advantage of advances in the private sector, and start
publicly pushing the envelope on certificate management while in a
public sector role.
> We've already seen the impact the past decades' legislative whims have had
> on security. While FREAK is perhaps a modern example, the complexity and
> security implications of FIPS 140-(1/2/3) remain a matter of active
> discussion. Let alone the complexity involved with say, Fortezza, which
> has been exploitable in NSS in the past.
I'm certainly not going to defend the US government's terrible
cryptographic choices over the years. I also think we're getting a bit
far afield from the merits of a general name constraint program.
> As it relates to the online trust ecosystem, we can see these government CAs
> have either botched things quite spectacularly (India CCA) or been highly
> controversial (CNNIC). The arguments for CNNIC aren't "Well, if they're
> only MITMing .cn users, that's OK", it's "Well, they could MITM".
There's a big difference between *.cn and *.gov.cn, or *.alibaba.cn.
> That's why name constraints are misleading. They exist because we lack
> confidence that these government CAs (often audited under ETSI) are
> competent to operate the technology necessary to be stewards of Internet
> trust. The solution shouldn't be to find ways to hinder their damage, the
> solution should be to make it impossible for them to damage things in the
> first place.
>
> Name constraints, as presented, give tacit approval to the CAs constrained
> to botch things, as long as they do so only in their little fiefdoms. But
> when these fiefdoms easily represent millions-to-billions of Internet
> users, especially in emerging markets, do we really believe that their
> needs are being served?
>
> That is, in essence, why I think a change like this is so dangerous. It
> strives to draw borders around the (secure) Internet, and to acknowledge
> that what you do in your own borders, to your own users, is an issue
> between you and them. I don't think that's a good state for anyone to be
> in.
Putting governments aside for a second: why is the status quo acceptable?
We're all spending a *lot* of time strengthening HTTPS and widening
its use. We're putting a bunch of patches in place to make HTTPS
stronger and to make the surface area for MITM attacks smaller and
more expensive to exploit.
Once we've got patched-up HTTPS in place across the internet, and
manage to make the deprecation of HTTP an inevitability -- which will
take some years -- what's the long term plan? More patches?
I truly hope that the long-term plan for trust in the web PKI does not
continue to mean a tightly guarded cluster of mostly for-profit
companies, even when we've bent the curve of their incentives somewhat
towards protecting users.
I do not believe that the definition of secure origins is sustainable
over the course of this century without the ability to extend the
roots of that trust more widely in a safe way. It's not about
governments -- they're just the entities on the table in front of us
in the immediate future and with the broadest reach. We should discuss
whether and how name constraints can contribute to a future for the
trusted root program where people and organizations have more freedom
to choose who guards their trust.
-- Eric
--
konklone.com | @konklone