Here are my initial attempts at a policy and accompanying FAQ:
http://www.hecker.org/mozilla/certificate-policy/
http://www.hecker.org/mozilla/certificate-faq/
The FAQ is incomplete; I want to do a section on rationales behind the
policy, but haven't had time to do a proper draft yet. However I can
describe here some of my motivations and rationales for the policy
approach I personally prefer; think of this as a first draft of the
rationales FAQ:
* After doing a couple of mozilla.org policies, I've decided I like to keep
the policies themselves relatively short and general, and push detailed
discussions into the FAQ. Hence the particular form I've chosen. For
purposes of discussion you can consider the "policy" in toto to be the
policy document itself supplemented by the guidance provided in the FAQ.
* As a public project we need a policy and associated decision process
that is relatively transparent. However at the same time I don't want to
over-specify things in the policy (see also above) and would prefer to
leave some flexibility for the application of human judgement by
whoever is charged with making the actual decisions (whom I'll refer to
as the "evaluators" in the discussion below).
To take one example, at a high-level I think it's appropriate to take
CA-related risks into account in making decisions, and at an
intermediate-level (as specified in the FAQ) I think it's appropriate to
evaluate how well CAs do in protecting signing keys and related
material. However I don't think it is appropriate to mandate a specific
approach to key protection; I prefer to defer to the evaluators' judgement.
* As I've previously mentioned in another post, I personally prefer a
policy that evaluates not only CA-related risks and risk mitigation, but
also potential benefits of including a CA's certificates.
Besides being a better approach in general IMO, I think such an approach
is specifically suited to the situation the Mozilla project finds itself
in: We have a lot of CAs whose certificates have been included as a
matter of course in Mozilla based on their inclusion in Netscape 6 and
7, and IMO it's pretty unlikely that we're going to go back and give
those CAs the same level of scrutiny we give new CAs.
IMO this is unlikely for three reasons: First, there are a lot of
pre-existing CA certs, and we don't have a lot of resources to do CA
vetting. I think most if not all attention will be focused on the more
pressing issue of looking at new CAs. Second, if we do go back and vet
already-included CAs, we have limited options for what we can do in the
event we deem a particular CA to be problematic. As previously noted,
it's difficult under the current scheme to "turn off" a CA cert except
through the user's manual intervention. Finally, there are some CAs for
which it would be very disruptive to users if we "turned off" their CA
certs, given the large number of sites using their certs; so in practice
I doubt anyone would ever seriously attempt to do this.
Given that in practice existing CAs are not going to have to go through
the same process as new CAs, I believe it would be unfair to new CAs to
impose strict requirements on them without at the same time formally
considering the potential benefits of including those new CAs, and
giving CAs a chance to make a positive case to us on why their certs
should be included.
* One question that has been raised is why we shouldn't just defer to
third-party judgements on CAs, e.g., WebTrust/AICPA, for legal reasons
and also to take advantage of an already-defined and -operating process
for CA vetting. First, the legal argument is not nearly as
compelling to me as others seem to find it, and as I mentioned in a
previous post I have what I believe to be sound reasons for trying to do
the right thing independent of specifically legal considerations.
Second, it is not clear to me that the goals embodied in the AICPA and
similar evaluation processes overlap 100% with our goals in doing CA
evaluation in the context of the Mozilla project. Therefore I think we
should take AICPA, etc., endorsements into account, but not make them
our sole criterion.
To expand on this point: Despite what people say about "Mozilla is now a
real end user product", IMO Mozilla is fundamentally different from a
commercial software product like IE or Outlook. It is in some sense an
"experimental" product, not in the sense of being unfinished or
bug-ridden, but in the sense that (IMO) one of the major goals of the
project and product should be to help advance the state of the art of
Internet technologies in general, including browsing and mail
technologies in particular. I think this is to the ultimate benefit of
Mozilla users, and I think Mozilla users should take this into account
when deciding whether to use Mozilla or another alternative product.
Now in the case of PKI-based systems, my personal opinion is that the
traditional approaches have in many ways favored the ideal at the
expense of the real (I see this very much in some of the Federal PKI
efforts I've been involved in), and have taken a very commerce-centric
and legal-centric approach to the CA issue. I believe that these factors
and related ones have arguably hindered both innovation in and adoption
of more secure systems for the "ordinary" end user applications
(browsing, email, etc.) that are at the heart of the Mozilla project.
Therefore I do not want to simply adopt wholesale or replicate existing
CA evaluation criteria that come from this traditional approach, but
would much prefer that we have a more flexible policy in deciding which
CA certs to include in Mozilla, one that is more in tune with the nature
of the project and its goals.
That's all my comments for now. Please respond with comments in this
forum. Besides asking for comments, questions, objections, etc., on the
actual policy issues, I'd also like a technical critique on the part of
the FAQ that provides background information. My goal is for that part
of the FAQ to give a solid grounding in the underlying issues for
Mozilla users who are not that knowledgeable about PKIs and CAs, so that
such users can understand what the policy discussions are actually
about. (And of course if you want to contribute new questions and
proposed answers for the FAQ I'd be more than happy to consider
including them.)
Frank
--
Frank Hecker
hec...@hecker.org
However, I thought you might be interested in how the state of
California approves certificate authorities under its Government
Code Section 16.5. This code section deals with digital
signatures on documents that require signatures but are filed
electronically with the state or a local government. PKI keys
used for this must be authenticated no less than keys used for
encryption or for establishing secure communication between a Web
browser and a Web server.
See <http://www.ss.ca.gov/digsig/regulations.htm>. This is the
California Secretary of State's regulation implementing Government
Code Section 16.5. Of particular interest for Mozilla's policy,
see sections 22003(a)6(C) and 22003(a)6(D) of the regulation (a
bit more than half-way down the page). (Section 22003 begins at
<http://www.ss.ca.gov/digsig/regulations.htm#22003>.) 6(C) deals
with how a CA gains approval by the state; 6(D) deals with relying
on national and international accreditation bodies for granting
approval and with revoking approval. The latter contains a link
to a notice that WebTrust audits are accepted for determining
which CAs are approved.
6(C) and 6(D) together might take two pages to print, thereby
meeting the goal of keeping the Mozilla policies short. The
notice about WebTrust audits is itself only a single page.
--
David E. Ross
<http://www.rossde.com/>
I use Mozilla as my Web browser because I want a browser that
complies with Web standards. See <http://www.mozilla.org/>.
I reviewed both the policy and FAQ.
My comments on the policy are in the PDF file at
<http://www.rossde.com/Mozilla_certs/Policy.pdf>. These comments
are in the form of suggested revisions, highlighted in underlined
blue. Those revisions primarily address how a CA's certificates
are approved for inclusion in the default database. My concern is
that CA certificates should indeed be trusted.
Specifically:
#3: I indicate that a CA that fails an audit or loses
accreditation should have its certificates removed and the removal
should be publicized. Mozilla users should not rely on a
deficient CA.
#6 (new): I added this new section to indicate that only reliable
CAs should have their certificates in the default database.
Rather than having the Mozilla Foundation investigate CAs for
reliability, I used standards based on the California
regulations. Then the only effort required of the Foundation
would be to review an audit report and verify that the audit was
conducted by a qualified professional.
#7 (new): Despite wishes to the contrary, you cannot escape the
legalisms. I suggest the Mozilla Foundation's lawyer should word
the necessary clause in your license. Reliance on outside
standards and outside auditors (especially when that reliance is
already recognized in law in the state where the Foundation is
incorporated) will offer some protection against liability, but
you should also make sure the Foundation's general liability
insurance addresses this issue.
My comments on the FAQ are in the PDF file at
<http://www.rossde.com/Mozilla_certs/FAQ.pdf>. I had comments on
only two questions under "Details of the Mozilla Certificate
Policy", one of which relates back to my suggestions regarding the
policy.
I think the Policy is good, except for one comment on
the risk language, to which I've responded with reference to
the FAQ entry here:
http://www.hecker.org/mozilla/certificate-faq/policy-details/
> In particular, we will evaluate whether or not a CA
> operates in a manner likely to cause undue risk for
> Mozilla users.
Risk is a very tricky thing to assess. Firstly, risk
cannot be assessed without proper attention to the
value at risk, and the threats against that value.
Secondly, by assessing the risk, however it is done, and
then presenting the results for others to rely upon,
liability is created. This liability is perhaps
limited by the price paid by the user ($0), but it is
nonetheless present and available for some smart
lawyer to exploit.
One way to overcome this would be to deny any risk-based
assessment (a "common carrier" approach) but this would
then leave Mozilla users at the mercy of costless attacks
that the PKI permits. Another way would be to ask for
the CAs to provide an indemnity; this however is unlikely,
as their own businesses are constructed to reduce their
risks, not increase them.
A better way may be to reflect those risk assessments
back to those that carry the losses - the users.
This could be done by opening up a forum for every new
CA proposal. (Actually, it could be done for all old
ones as well). Just like the current CACert bug that
started this thread, each CA could have an ongoing
forum for user comment.
In this way, users can comment on the information
published, and they can present their findings. This
would mean real scrutiny would now be possible, as
it is likely that Mozilla users have more resources
than the Mozilla Foundation.
Most users would never look at the practices of a CPA,
as a) they have not the time nor patience, or b) there
is nowhere to place their comments and assessments even
if they had the time. However, if there was a defined
forum for comment, it could be hoped that sufficiently
close Mozilla users would do enough analysis on
the major CAs such that the Mozilla Foundation could
simply refer to the sentiment on the forums.
Thus, they would outsource the risk assessment. As
policy, this would also remove the liability.
Note 1: the original CACert bug, in a near perfect forum:
<http://bugzilla.mozilla.org/show_bug.cgi?id=215243>
Note 2: this form of open governance is practiced in the
gold issuance community, where lack of regulators means
that the users have to protect themselves by demanding
certain measures of issuers.
One other minor comment:
> We may elect to publish submitted information for use
> by Mozilla users and others; please note any information
> which you consider to be proprietary and not for public
> release.
This opens up a bait and switch. Secret information
may be provided to Mozilla that will be suppressed and
unavailable to the public. In the event of a dispute,
this information may be relevant to the public party,
but will be unknown to them. I'd recommend that all
information provided be deemed public, non-proprietary,
and publishable by Mozilla.
iang
Of course what Frank Hecker meant was "the probability of loss" :-)
Frank
--
Frank Hecker
hecker.org
Thanks for your comments. I especially appreciate your taking the time
to create suggested revisions.
> #3: I indicate that a CA that fails an audit or loses
> accreditation should have its certificates removed and the removal
> should be publicized. Mozilla users should not rely on a
> deficient CA.
Note that in practice this will be problematic, since AFAIK removing a
cert from the default database affects only users who are installing
Mozilla for the first time. I'll let others speak to this issue.
> #6 (new): I added this new section to indicate that only reliable
> CAs should have their certificates in the default database.
> Rather than having the Mozilla Foundation investigate CAs for
> reliability, I used standards based on the California
> regulations. Then the only effort required of the Foundation
> would be to review an audit report and verify that the audit was
> conducted by a qualified professional.
Every time I've worked on a mozilla.org policy there have been at least
one or two "wedge issues" on which people fundamentally disagreed, with
strong opinions on and plausible arguments for either side of the
issue. I suspect that this idea of mandating third-party audits of CAs
will be one of the major wedge issues, if not the biggest, for any
Mozilla Foundation certificate policy.
For the record, I personally oppose mandating third-party audits as a
condition of including a CA certificate in Mozilla. I think it's fine to
use independent audits (e.g., WebTrust) as an input to the decision, and
perhaps as the only thing needed for our decision where a CA has gotten
such a "seal of approval". However I do not believe that we should
automatically reject a CA that has not gone through such an audit; in
that case I think we should rather do our own vetting, to whatever level
we feel necessary.
Before I explain my reasoning, let me first say that I have no objection
in principle to audits and lawyers in the PKI/CA context; in my work
I've been involved in formal security evaluations (FIPS 140 and Common
Criteria) and have worked closely with lawyers as co-workers and also as
a client. However I also believe that there are trade-offs to getting
lawyers and independent auditors involved, and those trade-offs are not
always worth making.
More specifically, I see this proposed independent audit mandate as an
example of insurance: by mandating that all included CAs have undergone
(i.e., paid for and passed) an independent audit, we are presumably
insuring the Mozilla project and the Mozilla Foundation against the
possibility of bad things happening related to the included CA certs.
Now in general getting insurance may or may not make sense; it depends
on the size of the possible loss, the probability of loss, and the cost
of the insurance. In the context of this policy discussion we'll assume
that the possible loss to the project and to the Mozilla Foundation
could be major if not catastrophic, just as when insuring my house I
assume that my house could be completely destroyed.
What about the probability of loss? Insurance makes most sense when the
probability of loss is relatively low (so insurance is affordable) but
not too low (in which case insurance may not be necessary). For example,
I consider it relatively unlikely that my house will burn down, but
major house fires do occur (including one in my neighborhood a few years
ago), so it's worth it to me to buy fire insurance for my home. On the
other hand, if someone offered to sell me insurance specifically against
the possibility of a meteor destroying my house, I would not consider
paying even $1 for it -- the probability of loss is so low (1 in 10^8? 1
in 10^9?) that the "expected loss" (loss probability times potential
loss amount) is close to zero. I have better uses for that dollar.
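To make the arithmetic concrete, here's a toy sketch of the expected-loss comparison; the probabilities and dollar amounts are purely illustrative, not real estimates:

```python
# Expected loss = probability of the event times the size of the loss.
# All figures below are hypothetical, chosen only to illustrate the
# house-fire vs. meteor-strike comparison in the text.

def expected_loss(probability: float, loss_amount: float) -> float:
    """Return the expected loss for a single potential event."""
    return probability * loss_amount

house_value = 300_000  # hypothetical replacement cost

# "House fire" risk: unlikely, but not too unlikely to insure against.
fire = expected_loss(1e-3, house_value)    # around $300 -- insurance is sensible

# "Meteor strike" risk: same potential loss, vanishingly small probability.
meteor = expected_loss(1e-8, house_value)  # fractions of a cent -- not worth even $1

print(f"fire: ${fire:,.2f}  meteor: ${meteor:,.5f}")
```

The same comparison applies to the policy question: the size of the potential loss may be large in both cases, but if the probability is meteor-strike small, the expected loss hardly justifies the premium.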
Now the question is: Is the loss we are insuring against here more like
a house fire or more like a meteor strike? The world has been using
browsers and SSL for almost ten years now, and S/MIME-capable email
products and downloadable signed code about as long. Over that time how
many lawsuits have there been involving the issues we're concerned about
here, e.g., failures on the part of CAs, on the part of people who
blithely embedded those CAs' certs in applications, and so on? Thousands
of lawsuits? Hundreds? Dozens? A few? One or two? None?
I genuinely don't know the answer to this question. However in all the
discussions around this subject I've never heard anyone cite an actual
example lawsuit or other legal action, so the answer may well be none.
If that's the case, I hope I can be forgiven for concluding that what we
are worried about here is more like a meteor strike than a building fire.
What about the costs of this proposed "insurance"? You might say, "There
is no cost to the Mozilla project or the Mozilla Foundation -- the CAs
pay the cost of audits, and by relying on those audits the Mozilla
Foundation avoids the cost of doing its own CA vetting." But these are
not the costs I am concerned about. IMO the true cost is that by
mandating independent audits for CAs, we make it difficult to field
Mozilla-related applications and product features where we might want to
use a CA that hasn't undergone independent audit (e.g., because they
can't afford it, or whatever).
For example, a growing community of independent developers is creating
extensions for the Mozilla and Firefox browsers and the Thunderbird
email program. These extensions are packaged in the form of so-called
"XPI" files, and are designed to be installed by clicking on a link
pointing to the extension file. Ideally these files should be digitally
signed, with signatures validated prior to installation; Mozilla et al.
do in fact support this feature. However in practice people don't sign
their extensions, at least the ones I've looked at. Why don't they?
Maybe it's a hassle to get a developer cert for object signing, maybe
it's the cost. (Remember that even small costs can be a significant
barrier to developers in certain countries, or for that matter
developers in certain life circumstances.)
One could imagine the mozdev.org or texturizer.net folks sponsoring a
no-cost CA specifically for use by extension developers, and it's quite
conceivable that they could do a good job of operating such a CA,
particularly if they had help from other individuals and non-profit
groups with CA expertise. However I very much doubt they'd go to the
trouble and expense of having an independent audit.
If we then require independent audits as a condition of having a CA cert
included in Mozilla, etc., then we can't include the extension
developers' CA cert, and that means that Mozilla/Fx/TB users would have
to explicitly download the CA cert before installing the extensions.
Based on experience most people wouldn't do this, so in practice
developers still wouldn't sign their extensions, and users would still
run whatever security risks they run by downloading and installing
unsigned code.
This then is part of the cost of the proposed "insurance". One could no
doubt come up with additional examples of things that might be
beneficial to the Mozilla project and Mozilla users, but would be
foreclosed by this mandated independent audit requirement for included CAs.
Unless someone comes up with a good argument otherwise, my personal
opinion is that this "insurance" is not worth the price the project
would have to pay. As I said earlier, I have no problem with using the
results of independent audits as a factor in deciding whether to include
a particular CA's certs, but at the same time I believe it is absolutely
necessary to have an alternative approach for CAs that have not been
independently audited and are not likely to be audited. I believe the
most appropriate alternative approach is to do our own vetting according
to some reasonable criteria.
That is my position, and I'm sticking to it unless the opposition is so
overwhelming, and the opposing arguments so compelling, that I would be
stupid not to reconsider.
> #7 (new): Despite wishes to the contrary, you cannot escape the
> legalisms. I suggest the Mozilla Foundation's lawyer should word
> the necessary clause in your license. Reliance on outside
> standards and outside auditors (especially when that reliance is
> already recognized in law in the state where the Foundation is
> incorporated) will offer some protection against liability, but
> you should also make sure the Foundation's general liability
> insurance addresses this issue.
I can certainly suggest this to the Mozilla Foundation. Whether or not
they do anything about it is up to them.
See my response to David Ross for related comments.
> A better way may be to reflect those risk assessments
> back to those that carry the losses - the users.
>
> This could be done by opening up a forum for every new
> CA proposal. (Actually, it could be done for all old
> ones as well). Just like the current CACert bug that
> started this thread, each CA could have an ongoing
> forum for user comment.
I have actually been thinking about this, based on the principle of
providing more transparency into mozilla.org processes and policies. I'd
like to see others weigh in on this issue, whether pro or con. One way
to do this would be through a combination of bugzilla and a forum for
interested parties -- somewhat analogous to the "security group" we
created to address reports of security vulnerabilities, except that in
this case I see no reason not to make this a fully public process.
> Most users would never look at the practices of a CPA,
> as a) they have not the time nor patience, or b) there
> is nowhere to place their comments and assessments even
> if they had the time. However, if there was a defined
> forum for comment, it could be hoped that sufficiently
> close Mozilla users would do enough analysis on
> the major CAs such that the Mozilla Foundation could
> simply refer to the sentiment on the forums.
>
> Thus, they would outsource the risk assessment. As
> policy, this would also remove the liability.
I agree that "outsourcing" risk assessment in this way, whether in part
or in whole, is worth considering. However it's not clear to me that
this would actually mitigate whatever liability issues might exist. (Of
course, this could still be worth doing for other reasons.)
> One other minor comment:
>
> > We may elect to publish submitted information for use
> > by Mozilla users and others; please note any information
> > which you consider to be proprietary and not for public
> > release.
>
> This opens up a bait and switch. Secret information
> may be provided to Mozilla that will be suppressed and
> unavailable to the public. In the event of a dispute,
> this information may be relevant to the public party,
> but will be unknown to them. I'd recommend that all
> information provided be deemed public, non-proprietary,
> and publishable by Mozilla.
That's a good point; I will definitely consider revising this language
along the lines you suggest.
If CAs are in effect asking users to trust them (by getting
included), shouldn't information about their security procedures be
open in general (if perhaps not in every specific detail), allowing
the public at large to know what they really are trusting?
4.1 is merely a corollary of the "benefits" requirement.
4.2 is only necessary to evaluate the "risks" requirement.
4.3 should add a requirement that the data be compatibly licensed.
I do believe we need more details somewhere on key risk factors.
In the "details of policy" FAQ:
The "How will the Mozilla Foundation decide" entry significantly
understates the risks side of things. I believe the word "undue" should
be removed, as it suggests Mozilla will accept a fairly high level of
risk per CA. Remember, every CA we add increases the risk, as an
attacker only needs to break one of them to succeed. The entry should
probably list risks separately from benefits.
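To illustrate why the risk accumulates per CA, here is a sketch with made-up numbers; p = 0.01 is an arbitrary illustrative figure, not an estimate of any real CA's failure rate:

```python
# If each trusted CA independently has probability p of being compromised
# (or of mis-issuing) in a given period, an attacker who only needs to
# break any single root succeeds with probability 1 - (1 - p)^n across
# n trusted CAs, which grows steadily as CAs are added.

def p_any_failure(p: float, n: int) -> float:
    """Probability that at least one of n independent CAs fails."""
    return 1 - (1 - p) ** n

for n in (1, 10, 50, 100):
    print(f"{n:3d} CAs -> {p_any_failure(0.01, n):.3f}")
```

Even with a small per-CA probability, the aggregate risk across a long default list is much larger than the risk of any single entry, which is why "undue risk per CA" understates the problem.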
The discontinuation entry should mention a change in the risk/reward
evaluation as being the most likely reason.
The "free certs" section goes into a digression about email certs. This
information, if it belongs anywhere, belongs in the "how will decide"
entry. The entire second paragraph is redundant with that entry.
In the "Exactly what information" section, I don't entirely agree with
the continuity of CA operations requirement. While continuity
requirements for any CRL and/or OCSP service might make sense, there is
no risk to Mozilla users if a listed CA fails to continue issuing certs.
I think you have just opened a big can of worms with this Certificate
policy.
- It should be called a Mozilla Certificate Authority policy, not a
Certificate policy. I don't think there is any plan to include any
non-CA certificates.
- I think the term "default certificate database" is somewhat ambiguous.
Technically, there is a built-in PKCS#11 module containing a database of
root certificates and trust. This module is separate from the
certificate database associated with each Mozilla profile. In fact, the
root certs module/database can be removed by the user altogether and
security in Mozilla can continue to function without it. I just had to
point that out. The CA certs don't get added to the profile certificate
database, unless their trust is modified.
- I am not a lawyer, but I really think you are underestimating the
liability issues for the foundation if it chooses to select
certificates. Has the Mozilla Foundation hired a lawyer to look at the
issue to make a determination of the liability risks the security policy
exposes the Foundation to, or is the Foundation in the process of hiring
one? I would love to be wrong, but I think this is definitely something
that needs to be looked at by a lawyer, because it's the sort of thing
that could take down the foundation if not done very carefully. Just
because Mozilla has a legal disclaimer does not mean that you won't be
sued. Commercial software comes with plenty of disclaimers, too.
- As the (soon-to-be-former) AOL/Netscape employee who has been doing
most of the check-ins to the built-in root certs for NSS in recent
years, I know I would not feel comfortable at all with a policy that is
so arbitrary and devoid of verifiable objective criteria - section 4.1 in
particular.
- The current official certifications for commercial CAs such as
WebTrust are extensive and expensive. They don't match 1 to 1 with the
spirit of the Mozilla foundation, in that they may be overly restrictive
on who can join the party. So they shouldn't be a sine qua non condition
for inclusion.
- Most users don't understand PKI security and are not able to make CA
certificate trust decisions. And it would indeed be laughable to expect
them to be able to do so with a pop-up that simply shows a few fields in
the certificate. Ever tried to verify a root CA certificate just by
looking at its contents? What did you do, call the company's 800 number
and check the fingerprint and public key to make sure they matched? The
point is, you need an external source of trust to help with the decision.
There is no one-size-fits-all list of trusted CAs. That's why trust is
editable, and not static. People are using Mozilla in diverse
environments. I personally use Mozilla as if it were commercial
software, for personal needs such as banking, and wouldn't expect it to
include MyFriendlyNonProfitCAWhoCan'tAffordWebTrust, Joe'sPersonalCA, or
MilitarySecretCA.
In the latter two cases, the end-users are savvy enough to install the
certificates themselves before they actually start to use them (i.e.,
long before the browser pops up an "unknown CA - do you want to trust
it?" dialog).
You on the other hand might want to use
MyFriendlyNonProfitCAWhoCan'tAffordWebTrust without being presented a
trust pop-up that is very hard to act upon.
Unfortunately, I don't know of any organization that will vouch for CAs
in the MyFriendlyNonProfitCAWhoCan'tAffordWebTrust category, but it
sounds like that's what you need here. I don't think it can or should be
the Mozilla foundation itself doing it through its policy.
I also don't think they should be blanket included together with all the
commercial CAs that passed a certification.
I think MF should defer to such a CA verification organization when one
is created. When one exists, these CA certs can be compiled into a
separate PKCS#11 module containing only the certificates of CAs in this
category.
The Mozilla browser could then prompt the user for the security policy
he wants to adopt when creating his profile: there could be a checkbox
for the commercial CAs, which would basically be the current built-in
module, and another checkbox for
MyFriendlyNonProfitCAWhoCan'tAffordWebTrust CAs (for lack of a better
term) who did not go through the WebTrust (or other) commercial
certification required to be included in the first group.
The effect of each checkbox would be to load or not load a given PKCS#11
module containing a set of trusted CA certificates. Zero, one, two, or n
PKCS#11 modules containing trusted CA certificates can be loaded in
Mozilla in any one profile.
This way, the user makes the decision of which CAs he trusts on a
rational basis when creating his profile with a question that he can answer.
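The checkbox scheme above can be sketched as a toy model. The module and CA names below are invented, and this is the set logic only, not the NSS or PKCS#11 API:

```python
# Each PKCS#11 module contributes a set of trusted roots; a profile's
# effective trust is the union of whichever modules the user chose to
# load at profile-creation time. All names are made up for illustration.

commercial_roots = {"BigCommercialCA-1", "BigCommercialCA-2"}
nonprofit_roots = {"FriendlyNonProfitCA", "ExtensionDevelopersCA"}

def effective_trust(*loaded_modules):
    """Union of trusted roots across 0, 1, 2, or n loaded modules."""
    trusted = set()
    for module in loaded_modules:
        trusted |= module
    return trusted

# User who checked only the commercial box:
print(sorted(effective_trust(commercial_roots)))
# User who checked both boxes:
print(sorted(effective_trust(commercial_roots, nonprofit_roots)))
```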
Even if MF relies on a 3rd party, what's to absolve them of all
responsibility? After all, they still included the certificate,
regardless of any 3rd party saying it was OK. And, as previously stated,
WebTrust/AICPA are a bunch of accountants, with current certificate
practices revolving around commerce rather than the hundreds of other
purposes certificates could be used for but are too expensive to obtain
and use. In any case, what has WebTrust/AICPA done in light of blatant
mistakes by companies they have approved? Without any consequence, what
is to stop any CA, commercial or otherwise, from issuing certificates to
anyone at all, as long as they make a buck from it?
Ignoring the semantics of any particular legal
threat, it may be worth considering creating a
single corporation, wholly owned by the Foundation,
that is given total responsibility for all CA issues
including creating the default list. This is a
well known ring-fencing or firewalling technique,
and is generally quite acceptable if clearly
documented (and the parent Foundation never makes
any independent judgement or decision). It would
mean that any suit against the single corporation
that made all the decisions would not threaten the
rest of the project.
iang
I originally called it the Mozilla CA Certificate Policy, but changed it
just to have a shorter name. I can certainly change it back.
But to play devil's advocate: Is it 100% guaranteed that we would never
ever want to include a non-CA cert in Mozilla?
> - I think the term "default certificate database" is somewhat ambiguous.
> Technically, there is a built-in PKCS#11 module containing a database of
> root certificates and trust. This module is separate from the
> certificate database associated with each Mozilla profile. In fact, the
> root certs module/database can be removed by the user altogether and
> security in Mozilla can continue to function without it. I just had to
> point that out. The CA certs don't get added to the profile certificate
> database, unless their trust is modified.
I am open to using different terms and a simple way to explain what
actually is done. Suggestions welcome.
> - I am not a lawyer, but I really think you are underestimating the
> liability issues for the foundation if it chooses to select
> certificates.
That may well be. As I said before, I will certainly submit any proposed
policy to the Mozilla Foundation for approval by the appropriate people
(MF officers, and the MF board if necessary), and recommend that they
have appropriate legal counsel review the policy. But I am not going to
attempt to do the lawyers' job for them; that is not what I'm being paid
to do (well, I'm not being paid anything at all, but you get the point).
Please forgive me now if I rant for a bit: I'd like to have a
conversation about mitigating security risks, but people keep dragging
me off to start a conversation about legal risks. Why is that? What is
it about CA certs (as opposed to a host of other important
security-related issues) that prompts this relentlessly single-minded
focus on bad things that can happen from a legal point of view? (I am
tempted to say, "because with PKI and CAs the lawyers got there first",
but I'll hold that thought for now.)
You may recall that I was the lead on mozilla.org creating a policy on
addressing and disclosing security vulnerabilities in Mozilla. We had
plenty of hard-hitting discussions on how best to mitigate security
risks to Mozilla users. We spent very little time (if any) worrying
about how to mitigate legal risks. But the types of security
vulnerabilities under discussion were fully as serious as the types of
vulnerabilities resulting from breakdowns in the CA cert scheme. (In
fact on first impression I'd take the vulnerabilities to be formally
equivalent: a Mozilla exploit allowing file writing could lead to CA
certs being invisibly added and/or trust flags reset, and a bad CA cert,
e.g., for object signing, could lead to a user downloading exploit code.)
I guess the difference is that with "normal" vulnerabilities we've
internalized the idea that license liability disclaimers do at least a
reasonable job of mitigating any legal risks to developers and
distributors, and we focus primarily on security risks. If we consider
things like formal security certifications (e.g., Common Criteria), it's
as a potentially-useful option for customers who care about it, but of a
somewhat different nature than standard "designing for security", and
not a substitute for it. On the other hand with CA certs we seem to get
paralyzed by the sheer amount and complexity of the legal paperwork and
audit frameworks, to the point where we feel we can't move without
consulting a lawyer.
Past a certain point I just don't understand why this is the case. I
don't understand why we have to consult a lawyer before deciding whether
to add a CA cert, and not when deciding how to best configure Mozilla
security options for the typical user. (And in fact isn't the former
just a special case of the latter?)
As a final point, I've actually looked at the ABA documents, and I can't
figure out how their whole legal discussion applies in the case of
something like Mozilla. IIRC it is organized around the concept of CAs,
certificate holders, and "relying parties". We are certainly not a CA
and not a certificate holder. It's possible that we would be considered
a "relying party", but that role really seems to be played by Mozilla
users, e.g., who connect to certificate-presenting web sites and so on.
I guess we could be considered a sort of agent acting on behalf of a
relying party, but I don't recall the ABA documents addressing that
situation. I'd be interested in any online references that actually
discuss this.
Anyway, that's the end of my rant (at least for now).
> - As the (soon-to-be-former) AOL/Netscape employee who has been doing
> most of the check-ins to the built-in root certs for NSS in recent
> years, I know I would not feel comfortable at all with a policy that is
> so arbitrary and void of verifiable objective criteria - section 4.1 in
> particular.
Then let's come up with some verifiable objective criteria -- but let's
focus on criteria that mitigate security risks, as opposed to legal
risks. The lawyers can take care of themselves.
> - The current official certifications for commercial CAs such as
> WebTrust are extensive and expensive. They don't match 1 to 1 with the
> spirit of the Mozilla foundation, in that they may be overly restrictive
> on who can join the party. So they shouldn't be a sine qua non condition
> for inclusion.
Glad to hear it.
> - Most users don't understand PKI security and are not able to make CA
> certificate trust decisions. And it would be indeed laughable to expect
> them to be able to do so with a pop-up that simply shows a few fields in
> the certificate. Ever tried to verify a root CA certificate just by
> looking at its contents? What did you do, call a company's 800 number and
> check the fingerprint and public key to make sure it matched? The point
> is, you need an external source of trust to help with the decision.
>
> There is no one-size-fits-all list of trusted CAs.
But of course the problem is that in this respect the Mozilla Foundation
offers Mozilla as a one-size-fits-all product, in large part as a
consequence of the design of the underlying security/crypto mechanisms.
We can't easily offer "Mozilla for casual Internet use", "Mozilla for
online banking", "Mozilla for Federal government agencies and
contractors", and so on.
Ideally we could handle this through the extension model being
implemented by Firefox and Thunderbird -- download an extension to
enable a particular set of CA certs for a particular purpose. (Of course
we'd have to address the bootstrap problem of validating such
extensions, e.g., by signing them with an object signing cert issued by
a CA whose cert is present in the base product.) But this is speculation
about the ideal, not the reality we have to deal with right now.
> That's why trust is
> editable, and not static. People are using Mozilla in diverse
> environments. I personally use Mozilla as if it were commercial
> software, for personal needs such as banking, and wouldn't expect it to
> include MyFriendlyNonProfitCAWhoCan'tAffordWebTrust, Joe'sPersonalCA, or
> MilitarySecretCA.
>
> In the later two cases, the end-users are savvy enough to install the
> certificates themselves, before they actually start to use them (ie.
> long before the browser pops-up an "unknown CA - do you want to trust
> it?" pop-up).
The example of MilitarySecretCA reminds me of a point worth emphasizing:
IMO the most significant legal implications of CA certs come in
situations where the certificate holders are large enterprises engaging
in large-dollar-volume commerce or similar activities with relatively
severe consequences if things go awry, and where the parties involved
(the certificate holders) operate in fairly heavyweight pre-existing
legal frameworks of contracts, etc. To a large extent these parties can
and IMO should be able to "take care of themselves" with regard to CA
certs, by actively vetting the software they use to perform these
activities, including consulting independent auditors, and reconfiguring
it if necessary (e.g., deleting/adding CA certs and setting trust flags
appropriately).
> You on the other hand might want to use
> MyFriendlyNonProfitCAWhoCan'tAffordWebTrust without being presented a
> trust pop-up that is very hard to act upon.
>
> Unfortunately, I don't know of any organization that will vouch for CAs
> in the MyFriendlyNonProfitCAWhoCan'tAffordWebTrust category, but it
> sounds like that's what you need here. I don't think it can or should be
> the Mozilla foundation itself doing it through its policy.
> I also don't think they should be blanket included together with all the
> commercial CAs that passed a certification.
Above you wrote "The current official certifications for commercial CAs
such as WebTrust ... shouldn't be a sine qua non condition for
inclusion", implying that a CA could be included without going through
such a certificate. But here you write "I also don't think
[non-certified CAs] should be blanket included together with all the
commercial CAs that passed a certification." So I assume that you are
using "blanket included" to mean something different than plain
"included", and that the difference is defined in your next statement:
"blanket included" means included in the same PKCS#11 module.
> I think MF should defer to such a CA verification organization when one
> is created. When it does, these CA certs can be compiled into a separate
> PKCS#11 module containing only certificates CAs in this category.
>
> The Mozilla browser could then prompt the user for the security policy
> he wants to adopt when creating his profile : there could be a checkbox
> for the commercial CAs, which would basically be the current built-in
> module, and another checkbox for
> MyFriendlyNonProfitCAWhoCan'tAffordWebTrustCAs(for lack of a better
> term) who did not go through the WebTrust (or other) commercial
> certification required to be included in the first group.
>
> The effect of each checkbox would be to load or not load a given PKCS#11
> modules containing a set of trusted CA certificates. 0, 1, 2 or n
> PKCS#11 modules containing trusted CA certificates can be loaded in
> Mozilla in any one profile.
>
> This way, the user makes the decision of which CAs he trusts on a
> rational basis when creating his profile with a question that he can
> answer.
This is a fine idea, and it matches my naive conception of an
extension-style mechanism to let users customize Mozilla in terms of
accepted CAs as they customize it in terms of features.
But this mechanism doesn't exist today, and may never exist if nobody
does the work of creating it. I want to create a policy now, and what
you seem to be recommending is that the policy must mandate independent
audits of CAs until whatever point in the (possibly far distant) future
that the Mozilla implementation provides a way to group CAs in this way.
I don't agree with that.
After reviewing the discussion in this thread (and other threads),
I must conclude that the whole approach to developing a policy is
flawed. A policy should represent specifics based on a more
general philosophy, but I don't think the philosophy itself is
clear in this case.
The first question that must be answered is: Why continue
developing Mozilla? I would hope the answer does NOT revolve
around an exercise in computer science but instead reflects a
desire to create a high-quality software application for personal
and commercial use -- an application for the real world.
If Mozilla is intended for real use, the next question is: Who
uses Mozilla? Given my hope for the answer to the first question,
the answer to this question should be: Anyone who uses the
Internet.
This means that most Mozilla users are not truly sophisticated
software experts.
The answer to the second question raises the next question: In
that context, how are (not how should) CA certificates used?
Clearly (at least to me), the answer is: The primary and most
important use of a CA certificate is to provide the Mozilla user
with assurance that (1) a critical Web site is indeed what it
purports to be and (2) sensitive data communicated to a Web server
travels across the Internet securely.
If this chain of questions and answers is valid, then the Mozilla
Foundation has an obligation to those who use its products to
authenticate not only the validity of each CA certificate in the
default database but also the integrity of the CA's process of
issuing and signing Web server certificates with that CA
certificate. This requires specific, objective, and verifiable
criteria for authenticating both validity and integrity. I
advocate third-party audits because those criteria already exist
and are already being applied through such audits.
No, this does not mean only WebTrust audits. Earlier in this
thread, I cited a California state regulation that specifies
either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
22003(a)6(D) under
<http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
that regulation provides criteria for accepting other
accreditation criteria. However, until other criteria can be
clearly identified and documented, the WebTrust and SAS 70 audits
are the only trustworthy and reliable bases for accepting CA
certificates.
In the end, the real question is: Can we trust and rely on the CA
certificates in the Mozilla default database to protect our
privacy and our assets? The answer to that question will
determine whether we can trust the Mozilla Foundation, which needs
to clarify the underlying philosophy upon which the proposed
policy should be based.
Of course, my original assumption -- my hope for the answer to the
first question -- might not be valid. In this case, Mozilla is
merely an interesting toy; and I will then have to rely on some
other browser for online banking and other critical Web uses.
(It may be that the Mozilla users in the majority
are not sophisticated. But, that does not mean
that the software is written for them.)
> The answer to the second question raises the next question: In
> that context, how are (not how should) CA certificates used?
> Clearly (at least to me), the answer is: The primary and most
> important use of a CA certificate is to provide the Mozilla user
> with assurance that (1) a critical Web site is indeed what it
> purports to be and (2) sensitive data communicated to a Web server
> travels across the Internet securely.
(This is not clear at all. I think it rests on
a number of false assumptions, but those are
quite hard to describe in a quick email, so
I'll skip that here.)
> If this chain of questions and answers is valid, then the Mozilla
> Foundation has an obligation to those who use its products to
> authenticate not only the validity of each CA certificate in the
> default database but also the integrity of the CA's process of
> issuing and signing Web server certificates with that CA
> certificate.
How do you conclude that? As users don't pay
anything, there can not be much of an obligation
of any form, let alone something as sensitive as
the validity of a signature chain (something that
evidently other competitors have also failed to
treat as "obligations").
> No, this does not mean only WebTrust audits. Earlier in this
> thread, I cited a California state regulation that specifies
> either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
> 22003(a)6(D) under
> <http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
> that regulation provides criteria for accepting other
> accreditation criteria. However, until other criteria can be
> clearly identified and documented, the WebTrust and SAS 70 audits
> are the only trustworthy and reliable bases for accepting CA
> certificates.
Is there a specific reason why Mozilla should
decide to write and distribute its software
according to these regulations? It seems to
be a bad idea, on the face of it...
> In the end, the real question is: Can we trust and rely on the CA
> certificates in the Mozilla default database to protect our
> privacy and our assets? The answer to that question will
> determine whether we can trust the Mozilla Foundation, which needs
> to clarify the underlying philosophy upon which the proposed
> policy should be based.
No way. This is FUD. Just because the default
list of certs might have some flaws does not mean
that we or users or anyone should not trust the
Mozilla Foundation. The Foundation is under no
obligation to provide a list to you or anyone.
Trying to shame them into providing your list,
one that you can trust, will achieve nothing for
Mozilla or the users. This is easy to see - if
you could pick the list, as trustworthy, then so
could anyone else. As there is a debate, it is
clear that picking the list is a vexing issue.
Thus, no room for FUD tactics.
> Of course, my original assumption -- my hope for the answer to the
> first question -- might not be valid. In this case, Mozilla is
> merely an interesting toy; and I will then have to rely on some
> other browser for online banking and other critical Web uses.
iang
> Julien Pierre wrote:
>> - It should be called a Mozilla Certificate authority policy, not
>> Certificate policy. I don't think there is any plan to include any
>> non-CA certificates.
>
>
> I originally called it the Mozilla CA Certificate Policy, but changed it
> just to have a shorter name. I can certainly change it back.
Well "CA certificate" is somewhat redundant (it includes "Certificate"
twice). I would say "Mozilla [built-in] Certificate Authority Policy"
would be a good name.
>
> But to play devil's advocate: Is it 100% guaranteed that we would never
> ever want to include a non-CA cert in Mozilla?
It is not guaranteed. You can use the built-ins module for anything you
want, including negative trust on some known compromised popular server
certs (ie. like a global CRL). But I would not recommend such use. I
think in practice you would only ever want root CA certs on it.
>> - I think the term "default certificate database" is somewhat
>> ambiguous. Technically, there is a built-in PKCS#11 module containing
>> a database of root certificates and trust. This module is separate
>> from the certificate database associated with each Mozilla profile. In
>> fact, the root certs module/database can be removed by the user
>> altogether and security in Mozilla can continue to function without
>> it. I just had to point that out. The CA certs don't get added to the
>> profile certificate database, unless their trust is modified.
>
>
> I am open to using different terms and a simple way to explain what
> actually is done. Suggestions welcome.
Well, I don't know yet what the right name should be, but if we choose
to have several modules with different set of certs, then the
distinction becomes more important since there won't be a single
"default certificate database".
> (MF officers, and the MF board if necessary), and recommend that they
Make that "require".
> Please forgive me now if I rant for a bit: I'd like to have a
> conversation about mitigating security risks, but people keep dragging
> me off to start a conversation about legal risks. Why is that? What is
> it about CA certs (as opposed to a host of other important
> security-related issues) that prompts this relentlessly single-minded
> focus on bad things that can happen from a legal point of view? (I am
> tempted to say, "because with PKI and CAs the lawyers got there first",
> but I'll hold that thought for now.)
Does it really need spelling out? If you have a rogue or compromised
trusted CA in Mozilla, which willingly signs fake server certificates,
that opens the door to all kinds of scams, where Mozilla users will
think they are doing business with somebody when in fact they are not.
Remember that one of the most common uses of SSL is for financial
transactions. If Mozilla users suffer financial losses due to a rogue
trusted CA, you can bet they will sue whoever approved that trusted CA,
disclaimer or not. So it is in the interest of the Foundation not to
make the decision itself.
> Past a certain point I just don't understand why this is the case. I
> don't understand why we have to consult a lawyer before deciding whether
> to add a CA cert, and not when deciding how to best configure Mozilla
> security options for the typical user. (And in fact isn't the former
> just a special case of the latter?)
You have a point. And I think the MF should have a good answer to that
question, since it distributes all the security code, not just the CA
certs. The liability situation is different now that there is an MF,
rather than a corporate distributor of the open-source code.
>> - As the (soon-to-be-former) AOL/Netscape employee who has been doing
>> most of the check-ins to the built-in root certs for NSS in recent
>> years, I know I would not feel comfortable at all with a policy that
>> is so arbitrary and void of verifiable objective criteria - section
>> 4.1 in particular.
>
>
> Then let's come up with some verifiable objective criteria -- but let's
> focus on criteria that mitigate security risks, as opposed to legal
> risks. The lawyers can take care of themselves.
The policy will have to address both risks, for the sake of the MF and
the contributors editing the database.
>> - Most users don't understand PKI security and are not able to make CA
>> certificate trust decisions. And it would be indeed laughable to
>> expect them to be able to do so with a pop-up that simply shows a few
>> fields in the certificate. Ever tried to verify a root CA certificate
>> just by looking at its contents? What did you do, call a company's 800
>> number and check the fingerprint and public key to make sure it
>> matched? The point is, you need an external source of trust to help
>> with the decision.
>>
>> There is no one-size-fits-all list of trusted CAs.
>
>
> But of course the problem is that in this respect the Mozilla Foundation
> offers Mozilla as a one-size-fits-all product, in large part as a
> consequence of the design of the underlying security/crypto mechanisms.
> We can't easily offer "Mozilla for casual Internet use", "Mozilla for
> online banking", "Mozilla for Federal government agencies and
> contractors", and so on.
That single product could still offer the user a choice of several
security policies for the various types of users.
> Ideally we could handle this through the extension model being
> implemented by Firefox and Thunderbird -- download an extension to
> enable a particular set of CA certs for a particular purpose. (Of course
> we'd have to address the bootstrap problem of validating such
> extensions, e.g., by signing them with an object signing cert issued by
> a CA whose cert is present in the base product.)
Downloading a set of certs and asking the user to trust them all is an
even more difficult decision to make than asking the user to trust one
cert ... I think we should limit this discussion to the built-in certs
that are distributed with Mozilla.
>> In the later two cases, the end-users are savvy enough to install the
>> certificates themselves, before they actually start to use them (ie.
>> long before the browser pops-up an "unknown CA - do you want to trust
>> it?" pop-up).
>
>
> The example of MilitarySecretCA reminds me of a point worth emphasizing:
> IMO the most significant legal implications of CA certs come in
> situations where the certificate holders are large enterprises engaging
> in large-dollar-volume commerce or similar activities with relatively
> severe consequences if things go awry, and where the parties involved
> (the certificate holders) operate in fairly heavyweight pre-existing
> legal frameworks of contracts, etc. To a large extent these parties can
> and IMO should be able to "take care of themselves" with regard to CA
> certs, by actively vetting the software they use to perform these
> activities, including consulting independent auditors, and reconfiguring
> it if necessary (e.g., deleting/adding CA certs and setting trust flags
> appropriately).
I agree. For these applications, the environment is not an open one, and
these Mozilla users can customize the software with their own built-in
list of trusted certs compiled in, as opposed to the one that comes with
Mozilla.
Alternatively, if they want to use the binaries distributed on
Mozilla.org, they can delete the built-in root certs module altogether,
and add their trusted root certs obtained from a known reliable source
manually.
> Above you wrote "The current official certifications for commercial CAs
> such as WebTrust ... shouldn't be a sine qua non condition for
> inclusion", implying that a CA could be included without going through
> such a certificate. But here you write "I also don't think
> [non-certified CAs] should be blanket included together with all the
> commercial CAs that passed a certification." So I assume that you are
> using "blanket included" to mean something different than plain
> "included", and that the difference is defined in your next statement:
> "blanket included" means included in the same PKCS#11 module.
Yes, I mean that there should be different groups of trusted certs for
these categories, in separate PKCS#11 modules, clearly marked and named.
There wouldn't be one default module that would be always trusted.
You could ask the user during profile creation if he wants to trust
X "commercial CAs (entities verified by WebTrust, Inc)"
X "other non-profit CAs (entities verified by MyCheaperAuditCompany, Inc)"
One, both, or neither of those checkboxes could be set by default, but
the user would have to be presented with this choice when creating his
Mozilla profile to make sure he chooses the set he wants.
To preserve compatibility with the current way Mozilla operates, and to
protect non-security savvy Mozilla users, I think the preferred default
would be to have a checkbox next to the commercial CAs, and no checkbox
next to the non-profit CAs.
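The checkbox-to-module mapping sketched above could be modeled roughly as
follows. This is only an illustration of the proposed selection logic; the
module names and defaults are hypothetical, not actual NSS module specs:

```python
# Sketch of the proposed profile-creation choice: each checkbox corresponds
# to a separate PKCS#11 root-certs module, and the checked modules are the
# ones loaded into the new profile. Module names are hypothetical.

# Suggested defaults: commercial CAs checked, non-profit CAs unchecked,
# preserving compatibility with current Mozilla behavior.
DEFAULTS = {
    "commercial-cas": True,   # e.g., WebTrust-audited CAs
    "nonprofit-cas": False,   # e.g., CAs verified by some other auditor
}

def modules_to_load(user_choices=None):
    """Return the root-cert modules to load for a new profile."""
    choices = dict(DEFAULTS)
    if user_choices:
        choices.update(user_choices)
    return [name for name, checked in choices.items() if checked]

# By default only the commercial-CA module is loaded; a user can opt in
# to the non-profit module (or opt out of both) at profile creation.
print(modules_to_load())
print(modules_to_load({"nonprofit-cas": True}))
```

The point of the sketch is that the user makes one answerable decision per
module group, rather than per certificate.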
> This is a fine idea, and it matches my naive conception of an
> extension-style mechanism to let users customize Mozilla in terms of
> accepted CAs as they customize it in terms of features.
>
> But this mechanism doesn't exist today, and may never exist if nobody
> does the work of creating it. I want to create a policy now, and what
> you seem to be recommending is that the policy must mandate independent

> audits of CAs until whatever point in the (possibly far distant) future
> that the Mozilla implementation provides a way to group CAs in this way.
> I don't agree with that.
My take on this is, the policy should be carefully examined before it is
decided, it's not something to do in a hurry just because there are a
couple CAs that are shouting that they want to be included right away.
It may well be that the right policy requires some work to actually
implement.
Let's examine the work that's actually required to implement my proposal :
- NSS already has the ability to load any number of PKCS#11 modules. No
code changes needed here.
- It is quite trivial work to generate multiple PKCS#11 root-cert
modules from multiple CA cert lists. All that needs to be done is to
rebuild the builtins directory with a different certdata.txt, and a
different DLL/.so target name. That's mostly scripting/Makefile work.
Again, no code changes are needed.
- PSM already has a UI and code to load PKCS#11 modules manually, under
"Security Devices", which could be used to load alternate/additional
root certificate modules. This is not very user-friendly and buried in
the preferences/privacy and security dialog, but it actually does the job.
- The only real new code that needs to be written to make the process
seamless for Mozilla users is a GUI prompt in the Mozilla profile
creation to ask the user to choose the CA policy(s) he wants to use, and
load the corresponding PKCS#11 module(s), which is done by a single
existing NSS API call. Now, it's been a long time since I wrote any GUI
code, and I have never written any for Mozilla itself, but it does not
strike me as a lot of work.
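The "scripting/Makefile work" in the second step might look something like
this sketch. The directory layout mirrors the NSS builtins source, but the
tree is created locally here so the sketch runs anywhere, and the make
target name is an assumption, not the real build invocation:

```python
# Illustrative only: stage an alternate certdata.txt and print the rebuild
# command that would produce a second root-certs module under a different
# library name. Paths and LIBRARY_NAME are hypothetical stand-ins.
import pathlib

# Stand-in for the NSS builtins source directory in a checkout.
builtins = pathlib.Path("mozilla/security/nss/lib/ckfw/builtins")
builtins.mkdir(parents=True, exist_ok=True)

# An alternate CA cert list for the second module.
alt_list = pathlib.Path("nonprofit-certdata.txt")
alt_list.write_text("# alternate (non-profit) CA cert list\n")

# Swap in the alternate list, then rebuild under a different DLL/.so name.
(builtins / "certdata.txt").write_text(alt_list.read_text())
print(f"make -C {builtins} LIBRARY_NAME=nssckbi-nonprofit")
```

As the post says, no NSS code changes are needed for this part; it is just
a matter of feeding a different cert list into the existing build.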
Of course there is still the dependency on finding a third party to
verify the non-commercial CAs. I think that's something that is
inevitable, and you should start looking for one now. If you can't find
one, somebody else suggested creating a separate legal entity tasked
with that specific role, which would protect the MF from any bad calls
on the CAs included.
I kind of agree with Frank's statement about security issues relating to
Mozilla vs. this one; surely those are directly related a lot more to MF
liability than any CA issue, as the CAs themselves should be liable, not
the MF, for any poor judgment in the certificates they issue.
Would you really trust a Web server certificate issued by a CA
that lost its accreditation or received less than an unqualified
opinion on an audit? I would not, and I would be extra suspicious
about server certificates issued by that CA before the negative
action against it. After all, such negative action would be the
result of past discrepancies by the CA, not future discrepancies.
And I would certainly not trust server certificates issued after
the negative action until someone -- definitely not the CA itself
-- pronounced the discrepancies corrected. Then, I would trust
only those server certificates issued after the corrections were
determined.
We are talking about MONEY and PRIVACY. How much risk are you
willing to take with these?
--
David E. Ross
<http://www.rossde.com/>
I use Mozilla as my Web browser because I want a browser that
So I take it you remove a lot of certificates from your copy of Mozilla
then?
>
>After reviewing the discussion in this thread (and other threads),
>I must conclude that the whole approach to developing a policy is
>flawed. A policy should represent specifics based on a more
>general philosophy, but I don't think the philosophy itself is
>clear in this case.
>
What Frank is calling the policy is, I believe, what you are calling the
philosophy. Simply put, it is that the Mozilla Foundation should decide
whether or not to include a CA based on a balancing of the risks and
benefits of doing so.
What we still need to nail down are some more specifics as to how to
evaluate the benefits and risks. I believe Frank's "FAQ" does a
reasonable job of describing how to evaluate the benefits. The risks
side needs much more definition.
>If this chain of questions and answers is valid, then the Mozilla
>Foundation has an obligation to those who use its products to
>authenticate not only the validity of each CA certificate in the
>default database but also the integrity of the CA's process of
>issuing and signing Web server certificates with that CA
>certificate.
>
I'm not sure I'd call it an "obligation", but given the minimalist
threat model I proposed earlier, this is something that is necessary in
order to evaluate the risks.
>No, this does not mean only WebTrust audits. Earlier in this
>thread, I cited a California state regulation that specifies
>either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
>22003(a)6(D) under
><http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
>that regulation provides criteria for accepting other
>accreditation criteria. However, until other criteria can be
>clearly identified and documented, the WebTrust and SAS 70 audits
>are the only trustworthy and reliable bases for accepting CA
>certificates.
>
>
WebTrust and SAS 70 audits outsource the bulk of the risk assessment.
They are only useful if the threat model used for the audit is
compatible with one's own threat model. It is quite possible that their
threat model protects against things that Mozilla users don't care
about, so requiring CAs to pass their criteria might unreasonably
exclude CAs. It also might be possible and worthwhile to perform such a
risk assessment without outsourcing.
But we do clearly need a threat model in order to assess risks.
> David Ross wrote:
>
>> Clearly (at least to me), the answer is: The primary and most
>> important use of a CA certificate is to provide the Mozilla user
>> with assurance that (1) a critical Web site is indeed what it
>> purports to be
>
> (This is not clear at all. I think it rests on
> a number of false assumptions, but those are
> quite hard to describe in a quick email, so
> I'll skip that here.)
As (1) is the definition of a certificate (modulo the fact that
applicability goes beyond just web sites), it is as clear to me as any
derivation from definitions. That you state it is not clear, omitting
any argument, is in no way convincing.
> In the "Exactly what information" section, I don't entirely agree with
> the continuity of CA operations requirement. While continuity
> requirements for any CRL and/or OCSP service might make sense, there is
> no risk to mozilla users if a listed CA fails to continue issuing certs.
I agree with that last sentence. Continuity of operations is primarily
to keep revocation going. If revocation stops, rightful private key
holders are thereafter unprotected from damages due to compromised keys.
> > #3: I indicate that a CA that fails an audit or loses
> > accreditation should have its certificates removed and the removal
> > should be publicized. Mozilla users should not rely on a
> > deficient CA.
>
> Note that in practice this will be problematic, since AFAIK removing a
> cert from the default database affects only users who are installing
> Mozilla for the first time. I'll let others speak to this issue.
Frank, things work rather differently now than they did 4 years ago.
The "built-in" list of CAs, and the built-in list of trust info is
no longer stored in the cert DB. It's in a shared library that gets
replaced when a new (or old) version of mozilla is installed.
If users CHANGE the trust settings on a root CA, or import a new root
CA and trust, the new CA and trust info goes into the cert DB.
Anyway, I think it's easier to remove trust for a built-in root CA now
than before.
Sorry, yes, I should have left that bit out.
The underlying fact here is that a CA certificate
carries a signature from a third party (CA)
on a key for a second party (website).
That's a cryptographic fact, in general, and
other claims are assumptions that may or may
not be founded.
It's by no means definitional whether that
signature delivers anything like "providing
assurance that a critical web site is indeed
what it purports to be." The question is
whether we can move from a cryptographic
statement (this key signs that key) to a
business statement (this site is who they
say they are) with any degree of confidence.
The answer to that seems to be no. Not with
any confidence.
Just as an example of one only amongst a
long list of difficulties, the present issue
is that, as no browser goes to any trouble
to separate out *which* CA made the claim,
the confidence is reduced to the lowest
common denominator. (There are many more
issues, but that one is apropos.)
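The distinction being drawn here -- that chain verification proves only
"this key signs that key", never "this site is who they say they are" --
can be sketched in a toy model. This is not real cryptography; the
hash-based "signature", the CA names, and the site names are all
illustrative assumptions:

```python
import hashlib

def toy_sign(issuer_secret: str, subject_name: str, subject_key: str) -> str:
    # Stand-in for a real digital signature: a hash bound to the issuer.
    data = f"{issuer_secret}|{subject_name}|{subject_key}".encode()
    return hashlib.sha256(data).hexdigest()

def toy_verify(issuer_secret: str, subject_name: str,
               subject_key: str, sig: str) -> bool:
    # Verification re-derives the signature and compares.
    return toy_sign(issuer_secret, subject_name, subject_key) == sig

# A CA signs a key it *believes* belongs to "www.onlinebank.com" ...
ca_secret = "ca-private-key"
cert = {"name": "www.onlinebank.com", "key": "site-public-key"}
cert["sig"] = toy_sign(ca_secret, cert["name"], cert["key"])

# The cryptographic statement checks out: this key signed that key.
assert toy_verify(ca_secret, cert["name"], cert["key"], cert["sig"])

# But nothing in the verification step tells us whether the CA actually
# checked that the key holder controls onlinebank.com -- that depends
# entirely on the CA's issuance practices, which live outside the math.
```

The business statement thus rides on the CA's practices, not on anything
the verifying software can check for itself.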
iang
PS: C.f, branding discussion started by Tim Dierks.
AFAIK, Peter Gutmann first made the observation
about "one size" security policy resulting in
no security.
Would it make sense for MF to have some assurance from the CA that the
CRL would be kept running for a minimum of 12 months afterward, either
by the CA itself, by a 3rd party, or even by MF?
The uniting of the business assertion with the cryptographic assertion
is accomplished via a 2-step process:
1. The statement from the CA on how the cryptographic assertion is made
- what checks and balances, identification and authentication mechanisms
are employed to assure that the details in the cryptographic assertion
(e.g. name, domain ownership etc.) are valid - you can get this from the
Certification Practice Statement [CPS] (this is generally referenced in
the certificate)
2. The audit of the CA by an independent body rating the CA on its
adherence to its CPS - in the world of CAs we have SAS 70 and WebTrust
that are prevalent, the latter seeming to gain greater emphasis of late.
I seem to have read somewhere recently that Microsoft was considering
requiring CAs to pass the WebTrust audit before they would allow their
certs to be embedded in their browser - anyone confirm that?
Regards,
-Scott
Ian Grigg wrote:
Were you sleeping the last two/three years, or more? :-)
It must be since IE 5.5, or at the latest since IE 6, that CAs that did
not pass an audit are not present in the browser's built-in list.
The current news is rather that XP will try to check whether its list of
CAs is up to date with the latest version on Windows Update every time a
certificate chain is verified.
So updates to the list, addition or removal, will be effective very fast
for all XP/CAPI users with an on-line connection.
The list is updated for older clients when they start an update download
from Windows.
Thanks for the info. This has not been the first time, nor will it be
the last, that my ignorance has led me astray.
> If users CHANGE the trust settings on a root CA, or import a new root
> CA and trust, the new CA and trust info goes into the cert DB.
So in essence a new release of Mozilla could remove or "revoke" CA certs
on behalf of all the users who were trusting to Mozilla to do the right
thing, while not affecting users who had exercised their own judgement.
But I guess this is not *quite* true: If a new CA cert were added and
trust flags turned on, that would affect everyone who upgraded to the
new version, and users who preferred to trust their own judgement on CA
certs would not necessarily be alerted during the installation process
or thereafter. Instead they would have to manually check the CA cert
list after the upgrade (or read the release notes).
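The upgrade behavior described above can be sketched as a toy model
(the CA names and trust values are purely illustrative, not actual NSS
data structures): the built-in root list ships with the binary and is
replaced wholesale on upgrade, while user-made trust changes live in the
cert DB and persist across upgrades.

```python
# Built-in list shipped with version 1 of the browser.
builtin_roots_v1 = {"GoodCA": "trusted", "ShadyCA": "trusted"}
# User trust overrides live in the cert DB, separate from the binary.
cert_db_overrides: dict[str, str] = {}

def effective_trust(ca: str, builtin: dict[str, str]) -> str:
    # Cert DB overrides take precedence over the built-in list.
    return cert_db_overrides.get(ca, builtin.get(ca, "untrusted"))

# A cautious user explicitly distrusts ShadyCA; that lands in the cert DB.
cert_db_overrides["ShadyCA"] = "untrusted"

# A new release drops ShadyCA and adds NewCA to the built-in list.
builtin_roots_v2 = {"GoodCA": "trusted", "NewCA": "trusted"}

# Default-trusting users lose ShadyCA automatically on upgrade, while the
# cautious user's explicit setting is simply preserved:
assert effective_trust("ShadyCA", builtin_roots_v2) == "untrusted"
# A newly *added* root, however, is trusted by everyone who upgrades,
# with no alert to users who prefer their own judgement:
assert effective_trust("NewCA", builtin_roots_v2) == "trusted"
```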
Frank
--
Frank Hecker
hecker.org
David Ross wrote:
> After reviewing the discussion in this thread (and other threads),
> I must conclude that the whole approach to developing a policy is
> flawed. A policy should represent specifics based on a more
> general philosophy, but I don't think the philosophy itself is
> clear in this case.
This is an excellent comment which I'm going to take to heart. I have
concluded that it would be very useful for me to write and post a
"meta-policy" document that clarifies the underlying type of policy I
personally want to see us develop, and why that policy has the features
that it does; this would in essence outline the more general philosophy
behind the policy itself.
> The first question that must be answered is: Why continue
> developing Mozilla? I would hope the answer does NOT revolve
> around an exercise in computer science but instead reflects a
> desire to create a high-quality software application for personal
> and commercial use -- an application for the real world.
Yes, but additional background is useful here: With the founding of the
Mozilla Foundation the explicit focus of the project is now indeed to
produce an end user software product. (Prior to that the nominal focus
was to produce a developer product from which others would create an end
user product.) So, yes, we do want to create an "application for the
real world".
However, although Mozilla is an end user product, it is not a commercial
proprietary product but rather a non-commercial open source product. IMO
that has implications for what users' expectations are, or at least
should be, both in general and in the area of security in particular.
Note carefully: I am *not* saying that users should have lower
expectations regarding the quality and security of non-commercial open
source products like Mozilla. Rather I am saying that users do (or
should) have different expectations about how that quality and security
is going to be maintained in practice.
For a commercial proprietary product a user's expectations are (or
should be) something like this:
* I've paid a vendor good money for this product (whether directly or
indirectly, e.g., for a bundled product like IE).
* The vendor has total control over this product and how it's developed
(since it's a proprietary closed source product).
* If the product has bugs, including security flaws, then I expect that
the vendor will take the money that I and others have given it and
through its own efforts (and no one else's) will provide the necessary
resources (people, systems, etc.) to fix the bugs and provide me with a
better product in the future.
* If this proves not to be the case then I will lose faith in the
product and the vendor, and will look for an alternative vendor and product.
On the other hand, for a non-commercial open source product like Mozilla
a user's expectations are (or should be) something like this:
* I've paid nothing for this product, and the licensing terms are such
that I can do pretty much anything with it, including modifying it using
the source code, redistributing it, and so on.
* The organization (or individual) distributing the product doesn't own
or control all the resources (people or otherwise) used to develop the
product.
* If the product has bugs, including security flaws, then I expect that
the product's distributor and/or others involved with the product will
have established processes that maximize the probability that the bugs
will be fixed and that I will be provided with a better product in the
future.
* If this proves not to be the case then I may lose faith in the
product, the processes, and the distributor and/or others that are
involved with them, and I may look for an alternative product. On the
other hand, I may decide to try to fix my own problems (which is
possible since I have the source code and necessary rights to that
source), or I may decide to participate in the processes myself and help
make them more effective at fixing the bugs that I and possibly others
have found.
Now, you may say: "So what? What does this difference, if indeed it is
real, have to do with anything, including the policy we're discussing?"
I'll come back to this question further on in my comments.
> If Mozilla is intended for real use, the next question is: Who
> uses Mozilla? Given my hope for the answer to the first question,
> the answer to this question should be: Anyone who uses the
> Internet.
> This means that most Mozilla users are not truly sophisticated
> software experts.
Agreed, and more specifically most Mozilla users are not security experts.
> The answer to the second question raises the next question: In
> that context, how are (not how should) CA certificates used?
> Clearly (at least to me), the answer is: The primary and most
> important use of a CA certificate is to provide the Mozilla user
> with assurance that (1) a critical Web site is indeed what it
> purports to be and (2) sensitive data communicated to a Web server
> travels across the Internet securely.
This is true for web server certificates. With email certificates issued
by CAs (e.g., for S/MIME) we have the somewhat different expectation
that the certificate will provide assurance that the entity signing a
signed email message is in fact the entity who controls that email
account. (In other words, if I receive signed email with an accompanying
certificate that lists "jd...@foo.com" as the email address, that the
message really came from whomever uses and controls the jd...@foo.com
email account.) And we have yet other expectations with CA certificates
issued for use in signing downloadable executable code, etc.
> If this chain of questions and answers is valid, then the Mozilla
> Foundation has an obligation to those who use its products to
> authenticate not only the validity of each CA certificate in the
> default database but also the integrity of the CA's process of
> issuing and signing Web server certificates with that CA
> certificate.
I pretty much agree. I think the responsibility is in practice divided
among multiple parties, since the Mozilla Foundation doesn't own and
control all aspects of Mozilla development. But the Mozilla Foundation
is indeed responsible for the product that it distributes.
> This requires specific, objective, and verifiable
> criteria for authenticating both validity and integrity.
Ah, here's where I think opinions might begin to diverge. (Actually,
based on Ian Grigg's comments here and elsewhere I suspect his opinions
may have diverged a comment or two back -- but I'll let him speak for
himself.)
Let's take a moment to discuss this supposed need for "specific,
objective, and verifiable" criteria. In particular, recall that I
claimed in another message (and have not yet been contradicted) that CA
cert-related "bugs" (e.g., including a cert for a CA that did not
perform its proper functions) are simply a special class of security
vulnerabilities in general, and are formally equivalent to other
security vulnerabilities in the sense that the effects on the user may
be equally serious, and in some cases identical or nearly so.
As a concrete example, recall the recent vulnerability in IE -- and to
some extent Mozilla -- regarding display of URLs to a user. The net
effect of this vulnerability was that a user thinking they were
accessing one web site (e.g., http://www.onlinebank.com) ended up
accessing another site (e.g., http://www.badguys.org) instead, with
little or no indication that this had happened. This is basically the
same situation that could be caused by a CA issuing a
"www.onlinebank.com" server certificate to the wrong person/entity. (And
IIRC use of SSL/TLS would not have protected the user here, since the
attackers could have gotten a valid cert for "www.badguys.org", and the
browser would be checking that cert against the "real" URL -- i.e., the
one being accessed -- as opposed to the URL as falsely displayed to the
user.)
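The parenthetical point about SSL/TLS not helping can be sketched as a
toy check (the URLs and host names are the illustrative ones from the
example above): the certificate is checked against the host actually
being contacted, not against whatever the browser *displays*, so a
spoofed display defeats the user while every cryptographic check passes.

```python
def cert_matches(cert_subject: str, connected_host: str) -> bool:
    # Simplified stand-in for TLS host-name checking: the cert subject
    # is compared to the host the connection actually goes to.
    return cert_subject == connected_host

displayed_url = "http://www.onlinebank.com"   # what the user sees
connected_host = "www.badguys.org"            # where the bytes really go
attacker_cert_subject = "www.badguys.org"     # a perfectly valid cert

# The TLS-level check passes, because it never consults the displayed URL:
assert cert_matches(attacker_cert_subject, connected_host)
# ...even though the display and the real destination disagree:
assert displayed_url != "http://" + connected_host
```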
So, if CA cert-related vulnerabilities are formally equivalent to non-CA
related security vulnerabilities and vice versa, and if decisions on
including CA certs require "specific, objective, and verifiable"
criteria, then logically we should also specify and apply such criteria
for everything else in Mozilla related to user security.
But in fact we don't do this, even though such criteria exist (e.g.,
Common Criteria and related standards). Instead we depend on the "three
P's": people, processes, and publicity. The Mozilla project (under the
ultimate direction of the Mozilla Foundation) puts its trust in
designated "module owners" responsible for particular code areas,
requires that those module owners and others follow particular
processes in developing and maintaining Mozilla (e.g., use of Bugzilla,
review and super-review, etc.), and does all that in a public manner,
where the details of the code and processes are open to public review.
As it happens, handling security vulnerabilities doesn't fully follow
this model, since the process isn't totally open at all times and in all
aspects. This was not for lack of trying -- the actual processes
recommended by mozilla.org policy were the result of a compromise
between the "full disclosure" position and the "fix in private"
position. But that doesn't change my essential point -- the Mozilla
project has never applied specific, objective, and verifiable criteria
to all aspects of Mozilla security, and doesn't seem to have especially
suffered for not doing so.
> I advocate third-party audits because those criteria already exist
> and are already being applied through such audits.
But as I mentioned earlier, mandating independent audits 1) imposes
other costs (really externalities in the economic sense) that are borne
by the Mozilla project and Mozilla users, and 2) may not actually be an
appropriate form of security risk mitigation in all cases.
Rather than repeat my previous comments addressing these issues in the
context of CAs and CA auditing, let's turn to a similar issue in another
closely-related context, namely independent auditing of cryptographic
implementations according to FIPS 140-x and related standards.
As it happens the Mozilla project was the beneficiary of a fortunate
historical accident: It was able to take advantage of a high-quality
field-proven open source cryptographic implementation, namely NSS, that
had also been FIPS 140-1 validated.
But let's turn back the clock a few years and suppose that NSS never
existed, and that the only available open source crypto library were
OpenSSL, which at the time was not FIPS validated. Let's further suppose
that there were another alternative choice, a proprietary crypto library
(call it "ClosedSSL") whose vendor had made it available in binary form
on the main Mozilla platforms (Windows, Mac OS, and Linux), with license
terms permitting it to be included in Mozilla and redistributed at no
charge.
If you had to pick which crypto library to include in Mozilla, which
would it have been: OpenSSL, a product with source code available and a
fairly public development process, but no formal validation against
specific, objective, and verifiable criteria, or ClosedSSL, a product
formally validated against specific, objective, and verifiable criteria
but developed behind closed doors with source code not available?
I think reasonable people could decide either way and justify the
choice. However I can tell you what I would have done: I would have
recommended use of OpenSSL instead of ClosedSSL, for at least two reasons:
First, use of an open source product that could be reviewed in the
public eye would have been consistent with practices and processes in
the rest of the Mozilla project. Otherwise we would have been able to
take advantage of public review and distributed bug detection and fixing
for the rest of Mozilla, but would have been hampered in attempting to
find and fix potential bugs in the crypto library. This would mean that
we couldn't leverage the distributed nature of open source bug fixing
with regard to the crypto library, and that the reputation of Mozilla as
a whole could be compromised by problems with a product (ClosedSSL) over
which we had no control or oversight.
Second, use of an open source product would help enable Mozilla to be
ported to more platforms, including platforms that the vendor of
ClosedSSL did not support and might not be interested in supporting.
This list of otherwise "deprived" platforms might have included OS/2,
the various *BSD distributions, non-Red Hat distributions of Linux,
Solaris, HP-UX, AIX, Irix, and others. Most people may not care whether
Mozilla is available on, say, OS/2, but I can guarantee that the users
of OS/2 care a lot, and the widespread availability of Mozilla on lots
of different platforms has been a major factor in its popularity and
success thus far.
So in this case the informal "validation" made possible by public review
of open source code would trump the formal validation of closed code
against specific, objective, and verifiable criteria, at least for me.
Based on the market success of OpenSSL over the years I think a lot of
people hold the same opinion as I do. As it happens OpenSSL is now being
validated against the FIPS 140-2 criteria, but note the cause and
effect: OpenSSL is being validated because it became so popular that its
user base came to include users for which FIPS validation was important,
but the popularity of OpenSSL had nothing to do with whether it was FIPS
validated or not.
This ties back to Ian Grigg's comments about "markets" in this context.
I don't agree with everything Ian writes, but I think this line of
thinking can be fruitful, particularly with regard to the role and value
of independent auditors:
If we look at why we have independent auditors in the case of public
companies, it's in large part because most of what goes on in any
company is closed to public view. Investors don't have access to
detailed internal sales forecasts, or customer lists, or development
plans, or other things that they might use to evaluate a company. So we
have independent auditors who are in a sense "stand-ins" for investors,
and who have access to information that investors are denied.
But at the same time independent auditors can't be complete stand-ins
for investors. For one thing, the auditors are paid by the company, and
so their interests are not 100% aligned with investors: Although the
vast majority of individual auditors and audit firms may act in a manner
beyond reproach, there is always at some level the temptation to "fudge"
the results, and there is almost always someone somewhere who succumbs
to that temptation, at least to some extent.
Besides whatever other virtues it might have, the requirement for
specific, objective, and verifiable criteria can be seen in one light as
a response to the issues raised by the temptations inherent in the role
of paid independent auditor: By tightly restricting the "degrees of
freedom" available to auditors, we make it more difficult for auditors
to "bend the rules" to help a company obtain a favorable evaluation.
However in public markets like the stock exchange investors still don't
put complete trust in the results of corporate audits, no matter how
carefully conducted. They also take into account any other information
available to them, and the final value assigned to a company is based on
the totality of information known about a company, of which the audited
results are only a part. If a company's operations were significantly
more transparent than they typically are today (and a number of people
have recommended that companies do this), then IMO the audited results
would be an even smaller factor in determining perceived company value.
If you substitute "users" for "investors" and "CAs" for "companies" (the
"auditors" are still "auditors") then I think you pretty much capture
the essence of what Ian Grigg is saying (or at least what I take him to
be saying).
So, to turn once again back to the case of deciding which CA certs to
include, a possible alternative policy would be for the Mozilla
Foundation to assign this task to a particular "module owner" and
require that they follow normal Mozilla project processes when making
their decisions: track requests and comments on them in Bugzilla,
supplement with discussions in public forums, and take public comments
and publicly-available information into account when making the
decisions. There would be no specific, objective, and verifiable
criteria outlined as part of the original policy; any such criteria
would emerge as part of the public decision process, and any particular
decision might apply some criteria but not others.
Now I suspect that whatever policy I end up proposing will in fact
include a large dose of specific, objective, and verifiable criteria for
CAs. That's because any policy, including this one, is a product of
compromise, and there are a lot of people who think formal criteria are
important in this context. I think it will be much easier to get a
policy completed if we include enough formal criteria to satisfy most
people concerned about this.
> In the end, the real question is: Can we trust and rely on the CA
> certificates in the Mozilla default database to protect our
> privacy and our assets?
I respectfully disagree. The real question is: Can we trust and rely on
the Mozilla project to produce a product that properly protects the
security of users? The whole CA cert scheme is but an aspect of that.
The answer to that question will
> determine whether we can trust the Mozilla Foundation, which needs
> to clarify the underlying philosophy upon which the proposed
> policy should be based.
I agree that we need to clarify the underlying philosophy, which is why
my next task is to create the "meta-policy" I mentioned above. Only then
will I feel comfortable creating a new revision of the proposed policy
and FAQ.
Rather than "for a minimum of 12 months", I would say "until the last
issued EE cert expires". Then, yes, I think that makes sense.
>> The "built-in" list of CAs, and the built-in list of trust info is
>> no longer stored in the cert DB. It's in a shared library that gets
>> replaced when a new (or old) version of mozilla is installed.
[snip]
>> If users CHANGE the trust settings on a root CA, or import a new root
>> CA and trust, the new CA and trust info goes into the cert DB.
> So in essence a new release of Mozilla could remove or "revoke" CA certs
> on behalf of all the users who were trusting to Mozilla to do the right
> thing, while not affecting users who had exercised their own judgement.
Prior to NSS 3.4, which was introduced into mozilla in moz 1.3 or perhaps
earlier (not sure), the built-in certs and their trust info were all
copied into the cert DB. So users of mozilla whose cert DBs originated
before NSS 3.4 will still have a LOT of root CA certs in them.
But users whose cert DBs originated in moz 1.3 or later (including N7.1
IINM), should have rather few CA certs in their cert DBs.
> But I guess this is not *quite* true: If a new CA cert were added and
> trust flags turned on, that would affect everyone who upgraded to the
> new version, and users who preferred to trust their own judgement on CA
> certs would not necessarily be alerted during the installation process
> or thereafter. Instead they would have to manually check the CA cert
> list after the upgrade (or read the release notes).
Yes, this has always been true for NSS users, IINM.
> Frank
As you know, a certificate is a signed statement that is either true or
false. If it is false, then the act of presenting it as if it were true
is an act of fraud. The statement implicit in every cert has been
"spoken" and signed by the cert's issuer. An English
approximation of that statement would read something like this:
"Here is a public key, and a collection of one or more names (which
may include one or more of each of the following:
- a directory name (which may include
- a person's name,
- names of organizations,
- names of locations and states,
- postal addresses, etc.) and
- an email address, and/or
- a server's domain name, and/or
- an IP address.
I (the issuer) certify that the private key that complements this