Here are my initial attempts at a policy and accompanying FAQ:
http://www.hecker.org/mozilla/certificate-policy/
http://www.hecker.org/mozilla/certificate-faq/
The FAQ is incomplete; I want to do a section on rationales behind the
policy, but haven't had time to do a proper draft yet. However I can
describe here some of my motivations and rationales for the policy
approach I personally prefer; think of this as a first draft of the
rationales FAQ:
* After doing a couple of mozilla.org policies, I've decided I like to keep
the policies themselves relatively short and general, and push detailed
discussions into the FAQ. Hence the particular form I've chosen. For
purposes of discussion you can consider the "policy" in toto to be the
policy document itself supplemented by the guidance provided in the FAQ.
* As a public project we need a policy and associated decision process
that is relatively transparent. However at the same time I don't want to
over-specify things in the policy (see also above) and would prefer to
leave some flexibility for the application of human judgement by
whoever is charged with making the actual decisions (whom I'll refer to
as the "evaluators" in the discussion below).
To take one example, at a high level I think it's appropriate to take
CA-related risks into account in making decisions, and at an
intermediate level (as specified in the FAQ) I think it's appropriate to
evaluate how well CAs do in protecting signing keys and related
material. However I don't think it is appropriate to mandate a specific
approach to key protection; I prefer to defer to the evaluators' judgement.
* As I've previously mentioned in another post, I personally prefer a
policy that evaluates not only CA-related risks and risk mitigation, but
also potential benefits of including a CA's certificates.
Besides being a better approach in general IMO, I think such an approach
is specifically suited to the situation the Mozilla project finds itself
in: We have a lot of CAs whose certificates have been included as a
matter of course in Mozilla based on their inclusion in Netscape 6 and
7, and IMO it's pretty unlikely that we're going to go back and give
those CAs the same level of scrutiny we give new CAs.
IMO this is unlikely for three reasons: First, there are a lot of
pre-existing CA certs, and we don't have a lot of resources to do CA
vetting. I think most if not all attention will be focused on the more
pressing issue of looking at new CAs. Second, if we do go back and vet
already-included CAs, we have limited options for what we can do in the
event we deem a particular CA to be problematic. As previously noted,
it's difficult under the current scheme to "turn off" a CA cert except
through the user's manual intervention. Finally, there are some CAs for
which it would be very disruptive to users if we "turned off" their CA
certs, given the large number of sites using their certs; so in practice
I doubt anyone would ever seriously attempt to do this.
Given that in practice existing CAs are not going to have to go through
the same process as new CAs, I believe it would be unfair to new CAs to
impose strict requirements on them without at the same time formally
considering the potential benefits of including those new CAs, and
giving CAs a chance to make a positive case to us on why their certs
should be included.
* One question that has been raised is why we shouldn't just defer to
third-party judgements on CAs, e.g., WebTrust/AICPA, for legal reasons
and also to take advantage of an already-defined and -operating process
for CA vetting. First, the legal argument is not nearly as
compelling to me as others seem to find it, and as I mentioned in a
previous post I have what I believe to be sound reasons for trying to do
the right thing independent of specifically legal considerations.
Second, it is not clear to me that the goals embodied in the AICPA and
similar evaluation processes overlap 100% with our goals in doing CA
evaluation in the context of the Mozilla project. Therefore I think we
should take AICPA, etc., endorsements into account, but not make them
our sole criterion.
To expand on this point: Despite what people say about "Mozilla is now a
real end user product", IMO Mozilla is fundamentally different from
commercial software product like IE or Outlook. It is in some sense an
"experimental" product, not in the sense of being unfinished or
bug-ridden, but in the sense that (IMO) one of the major goals of the
project and product should be to help advance the state of the art of
Internet technologies in general, including browsing and mail
technologies in particular. I think this is to the ultimate benefit of
Mozilla users, and I think Mozilla users should take this into account
when deciding whether to use Mozilla or another alternative product.
Now in the case of PKI-based systems, my personal opinion is that the
traditional approaches have in many ways favored the ideal at the
expense of the real (I see this very much in some of the Federal PKI
efforts I've been involved in), and have taken a very commerce-centric
and legal-centric approach to the CA issue. I believe that these factors
and related ones have arguably hindered both innovation in and adoption
of more secure systems for the "ordinary" end user applications
(browsing, email, etc.) that are at the heart of the Mozilla project.
Therefore I do not want to simply adopt wholesale or replicate existing
CA evaluation criteria that come from this traditional approach, but
would much prefer that we have a more flexible policy in deciding which
CA certs to include in Mozilla, one that is more in tune with the nature
of the project and its goals.
That's all my comments for now. Please respond with comments in this
forum. Besides asking for comments, questions, objections, etc., on the
actual policy issues, I'd also like a technical critique on the part of
the FAQ that provides background information. My goal is for that part
of the FAQ to give a solid grounding in the underlying issues for
Mozilla users who are not that knowledgeable about PKIs and CAs, so that
such users can understand what the policy discussions are actually
about. (And of course if you want to contribute new questions and
proposed answers for the FAQ I'd be more than happy to consider
including them.)
Frank
--
Frank Hecker
hec...@hecker.org
However, I thought you might be interested in how the state of
California approves certificate authorities under its Government
Code Section 16.5. This code section deals with digital
signatures on documents that require signatures but are filed
electronically with the state or a local government. PKI keys
used for this must be authenticated no less rigorously than keys used for
encryption or for establishing secure communication between a Web
browser and a Web server.
See <http://www.ss.ca.gov/digsig/regulations.htm>. This is the
California Secretary of State's regulation implementing Government
Code Section 16.5. Of particular interest for Mozilla's policy,
see sections 22003(a)6(C) and 22003(a)6(D) of the regulation (a
bit more than half-way down the page). (Section 22003 begins at
<http://www.ss.ca.gov/digsig/regulations.htm#22003>.) 6(C) deals
with how a CA gains approval by the state; 6(D) deals with relying
on national and international accreditation bodies for granting
approval and with revoking approval. The latter contains a link
to a notice that WebTrust audits are accepted for determining
which CAs are approved.
6(C) and 6(D) together might take two pages to print, thereby
meeting the goal of keeping the Mozilla policies short. The
notice about WebTrust audits is itself only a single page.
--
David E. Ross
<http://www.rossde.com/>
I use Mozilla as my Web browser because I want a browser that
complies with Web standards. See <http://www.mozilla.org/>.
I reviewed both the policy and FAQ.
My comments on the policy are in the PDF file at
<http://www.rossde.com/Mozilla_certs/Policy.pdf>. These comments
are in the form of suggested revisions, highlighted in underlined
blue. Those revisions primarily address how a CA's certificates
are approved for inclusion in the default database. My concern is
that the included CA certificates should indeed be trustworthy.
Specifically:
#3: I indicate that a CA that fails an audit or loses
accreditation should have its certificates removed and the removal
should be publicized. Mozilla users should not rely on a
deficient CA.
#6 (new): I added this new section to indicate that only reliable
CAs should have their certificates in the default database.
Rather than having the Mozilla Foundation investigate CAs for
reliability, I used standards based on the California
regulations. Then the only effort required of the Foundation
would be to review an audit report and verify that the audit was
conducted by a qualified professional.
#7 (new): Despite wishes to the contrary, you cannot escape the
legalisms. I suggest the Mozilla Foundation's lawyer should word
the necessary clause in your license. Reliance on outside
standards and outside auditors (especially when that reliance is
already recognized in law in the state where the Foundation is
incorporated) will offer some protection against liability, but
you should also make sure the Foundation's general liability
insurance addresses this issue.
My comments on the FAQ are in the PDF file at
<http://www.rossde.com/Mozilla_certs/FAQ.pdf>. I had comments on
only two questions under "Details of the Mozilla Certificate
Policy", one of which relates back to my suggestions regarding the
policy.
I think the Policy is good, except for one comment on
the Risk, to which I've responded in terms of the FAQ
entry here:
http://www.hecker.org/mozilla/certificate-faq/policy-details/
> In particular, we will evaluate whether or not a CA
> operates in a manner likely to cause undue risk for
> Mozilla users.
Risk is a very tricky thing to assess. Firstly, risk
cannot be assessed without proper attention to the
value at risk, and the threats against that value.
Secondly, by assessing the risk, however so done, and
then presenting the results for others to rely upon,
liability is created. This liability is perhaps
limited by the price paid by the user ($0) but is
nonetheless present and available for some smart
lawyer to exploit.
One way to overcome this would be to deny any risk-based
assessment (a "common carrier" approach) but this would
then leave Mozilla users at the mercy of costless attacks
that the PKI permits. Another way would be to ask for
the CAs to provide an indemnity; this however is unlikely,
as their own businesses are constructed to reduce their
risks, not increase them.
A better way may be to reflect those risk assessments
back to those that carry the losses - the users.
This could be done by opening up a forum for every new
CA proposal. (Actually, it could be done for all old
ones as well). Just like the current CACert bug that
started this thread, each CA could have an ongoing
forum for user comment.
In this way, users can comment on the information
published, and they can present their findings. This
would mean real scrutiny would now be possible, as
it is likely that Mozilla users have more resources
than the Mozilla Foundation.
Most users would never look at the practices of a CPA,
as a) they have neither the time nor the patience, or b) there
is nowhere to place their comments and assessments even
if they had the time. However, if there were a defined
forum for comment, it could be hoped that enough
Mozilla users would do sufficient analysis on
the major CAs that the Mozilla Foundation could
simply refer to the sentiment on the forums.
Thus, they would outsource the risk assessment. As
policy, this would also remove the liability.
Note 1: the original CACert bug, in a near-perfect forum:
<http://bugzilla.mozilla.org/show_bug.cgi?id=215243>
Note 2: this form of open governance is practiced in the
gold issuance community, where lack of regulators means
that the users have to protect themselves by demanding
certain measures of issuers.
One other minor comment:
> We may elect to publish submitted information for use
> by Mozilla users and others; please note any information
> which you consider to be proprietary and not for public
> release.
This opens up a bait and switch. Secret information
may be provided to Mozilla that will be suppressed and
unavailable to the public. In the event of a dispute,
this information may be relevant to the public party,
but will be unknown to them. I'd recommend that all
information provided be deemed public, non-proprietary,
and publishable by Mozilla.
iang
Of course what Frank Hecker meant was "the probability of loss" :-)
Frank
--
Frank Hecker
hecker.org
Thanks for your comments. I especially appreciate your taking the time
to create suggested revisions.
> #3: I indicate that a CA that fails an audit or loses
> accreditation should have its certificates removed and the removal
> should be publicized. Mozilla users should not rely on a
> deficient CA.
Note that in practice this will be problematic, since AFAIK removing a
cert from the default database affects only users who are installing
Mozilla for the first time. I'll let others speak to this issue.
> #6 (new): I added this new section to indicate that only reliable
> CAs should have their certificates in the default database. Rather
> than having the Mozilla Foundation investigate CAs for
> reliability, I used standards based on the California
> regulations. Then the only effort required of the Foundation
> would be to review an audit report and verify that the audit was
> conducted by a qualified professional.
Every time I've worked on a mozilla.org policy there have been at least
one or two "wedge issues" on which people fundamentally disagreed, with
strong opinions on and plausible arguments for either side of the
issue. I suspect that this idea of mandating third-party audit of CAs
will be one of the major wedge issues, if not the biggest, for any Mozilla Foundation
certificate policy.
For the record, I personally oppose mandating third-party audits as a
condition of including a CA certificate in Mozilla. I think it's fine to
use independent audits (e.g., WebTrust) as an input to the decision, and
perhaps as the only thing needed for our decision where a CA has gotten
such a "seal of approval". However I do not believe that we should
automatically reject a CA that has not gone through such an audit; in
that case I think we should rather do our own vetting, to whatever level
we feel necessary.
Before I explain my reasoning, let me first say that I have no objection
in principle to audits and lawyers in the PKI/CA context; in my work
I've been involved in formal security evaluations (FIPS 140 and Common
Criteria) and have worked closely with lawyers as co-workers and also as
a client. However I also believe that there are trade-offs to getting
lawyers and independent auditors involved, and those trade-offs are not
always worth making.
More specifically, I see this proposed independent audit mandate as an
example of insurance: by mandating that all included CAs have undergone
(i.e., paid for and passed) an independent audit, we are presumably
insuring the Mozilla project and the Mozilla Foundation against the
possibility of bad things happening related to the included CA certs.
Now in general getting insurance may or may not make sense; it depends
on the size of the possible loss, the probability of loss, and the cost
of the insurance. In the context of this policy discussion we'll assume
that the possible loss to the project and to the Mozilla Foundation
could be major if not catastrophic, just as when insuring my house I
assume that my house could be completely destroyed.
What about the probability of loss? Insurance makes most sense when the
probability of loss is relatively low (so insurance is affordable) but
not too low (in which case insurance may not be necessary). For example,
I consider it relatively unlikely that my house will burn down, but
major house fires do occur (including one in my neighborhood a few years
ago), so it's worth it to me to buy fire insurance for my home. On the
other hand, if someone offered to sell me insurance specifically against
the possibility of a meteor destroying my house, I would not consider
paying even $1 for it -- the probability of loss is so low (1 in 10^8? 1
in 10^9?) that the "expected loss" (loss probability times potential
loss amount) is close to zero. I have better uses for that dollar.
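To make that concrete (numbers invented purely for illustration): at a
loss probability of 1 in 10^8 per year, even a total loss of $500,000
carries an expected loss of only 10^-8 * $500,000 = $0.005 per year
(half a cent), so a $1 premium costs hundreds of times the risk it
covers. A house fire at 1 in 1,000 per year against the same $500,000
gives an expected loss of $500 per year, and suddenly a premium of a
few hundred dollars looks entirely sensible.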
Now the question is: Is the loss we are insuring against here more like
a house fire or more like a meteor strike? The world has been using
browsers and SSL for almost ten years now, and S/MIME-capable email
products and downloadable signed code about as long. Over that time how
many lawsuits have there been involving the issues we're concerned about
here, e.g., failures on the part of CAs, on the part of people who
blithely embedded those CAs' certs in applications, and so on? Thousands
of lawsuits? Hundreds? Dozens? A few? One or two? None?
I genuinely don't know the answer to this question. However in all the
discussions around this subject I've never heard anyone cite an actual
example lawsuit or other legal action, so the answer may well be none.
If that's the case, I hope I can be forgiven for concluding that what we
are worried about here is more like a meteor strike than a building fire.
What about the costs of this proposed "insurance"? You might say, "There
is no cost to the Mozilla project or the Mozilla Foundation -- the CAs
pay the cost of audits, and by relying on those audits the Mozilla
Foundation avoids the cost of doing its own CA vetting." But these are
not the costs I am concerned about. IMO the true cost is that by
mandating independent audits for CAs, we make it difficult to field
Mozilla-related applications and product features where we might want to
use a CA that hasn't undergone independent audit (e.g., because they
can't afford it, or whatever).
For example, a growing community of independent developers is creating
extensions for the Mozilla and Firefox browsers and the Thunderbird
email program. These extensions are packaged in the form of so-called
"XPI" files, and are designed to be installed by clicking on a link
pointing to the extension file. Ideally these files should be digitally
signed, with signatures validated prior to installation; Mozilla et al.
do in fact support this feature. However in practice people don't sign
their extensions, at least the ones I've looked at. Why don't they?
Maybe it's a hassle to get a developer cert for object signing, maybe
it's the cost. (Remember that even small costs can be a significant
barrier to developers in certain countries, or for that matter
developers in certain life circumstances.)
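(For what it's worth, the signing mechanics are not the hard part. As a
sketch, assuming a developer already had an object signing cert in an
NSS certificate database, an invocation of NSS's signtool along these
lines should produce a signed XPI; the nickname and paths here are made
up for illustration:

    # sign the contents of ext-dir/ and emit a signed XPI
    signtool -d /path/to/cert-db -k "My Object Signing Cert" \
        -Z myextension.xpi ext-dir/

The real barrier is obtaining the cert, not using it.)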
One could imagine the mozdev.org or texturizer.net folks sponsoring a
no-cost CA specifically for use by extension developers, and it's quite
conceivable that they could do a good job of operating such a CA,
particularly if they had help from other individuals and non-profit
groups with CA expertise. However I very much doubt they'd go to the
trouble and expense of having an independent audit.
If we then require independent audits as a condition of having a CA cert
included in Mozilla, etc., then we can't include the extension
developers' CA cert, and that means that Mozilla/Fx/TB users would have
to explicitly download the CA cert before installing the extensions.
Based on experience most people wouldn't do this, so in practice
developers still wouldn't sign their extensions, and users would still
run whatever security risks they run by downloading and installing
unsigned code.
This then is part of the cost of the proposed "insurance". One could no
doubt come up with additional examples of things that might be
beneficial to the Mozilla project and Mozilla users, but would be
foreclosed by this mandated independent audit requirement for included CAs.
Unless someone comes up with a good argument otherwise, my personal
opinion is that this "insurance" is not worth the price the project
would have to pay. As I said earlier, I have no problem with using the
results of independent audits as a factor in deciding whether to include
a particular CA's certs, but at the same time I believe it is absolutely
necessary to have an alternative approach for CAs that have not been
independently audited and are not likely to be audited. I believe the
most appropriate alternative approach is to do our own vetting according
to some reasonable criteria.
That is my position, and I'm sticking to it unless the opposition is so
overwhelming, and the opposing arguments so compelling, that I would be
stupid not to reconsider.
> #7 (new): Despite wishes to the contrary, you cannot escape the
> legalisms. I suggest the Mozilla Foundation's lawyer should word
> the necessary clause in your license. Reliance on outside
> standards and outside auditors (especially when that reliance is
> already recognized in law in the state where the Foundation is
> incorporated) will offer some protection against liability, but
> you should also make sure the Foundation's general liability
> insurance addresses this issue.
I can certainly suggest this to the Mozilla Foundation. Whether or not
they do anything about it is up to them.
See my response to David Ross for related comments.
> A better way may be to reflect those risk assessments
> back to those that carry the losses - the users.
>
> This could be done by opening up a forum for every new
> CA proposal. (Actually, it could be done for all old
> ones as well). Just like the current CACert bug that
> started this thread, each CA could have an ongoing
> forum for user comment.
I have actually been thinking about this, based on the principle of
providing more transparency into mozilla.org processes and policies. I'd
like to see others weigh in on this issue, whether pro or con. One way
to do this would be through a combination of bugzilla and a forum for
interested parties -- somewhat analogous to the "security group" we
created to address reports of security vulnerabilities, except that in
this case I see no reason not to make this a fully public process.
> Most users would never look at the practices of a CPA,
> as a) they have neither the time nor the patience, or b) there
> is nowhere to place their comments and assessments even
> if they had the time. However, if there were a defined
> forum for comment, it could be hoped that enough
> Mozilla users would do sufficient analysis on
> the major CAs that the Mozilla Foundation could
> simply refer to the sentiment on the forums.
>
> Thus, they would outsource the risk assessment. As
> policy, this would also remove the liability.
I agree that "outsourcing" risk assessment in this way, whether in part
or in whole, is worth considering. However it's not clear to me that
this would actually mitigate whatever liability issues might exist. (Of
course, this could still be worth doing for other reasons.)
> One other minor comment:
>
> > We may elect to publish submitted information for use
> > by Mozilla users and others; please note any information
> > which you consider to be proprietary and not for public
> > release.
>
> This opens up a bait and switch. Secret information
> may be provided to Mozilla that will be suppressed and
> unavailable to the public. In the event of a dispute,
> this information may be relevant to the public party,
> but will be unknown to them. I'd recommend that all
> information provided be deemed public, non-proprietary,
> and publishable by Mozilla.
That's a good point; I will definitely consider revising this language
along the lines you suggest.
If they are in effect asking users to trust them (by getting included),
shouldn't information about their security procedures (in general, if
not in every specific) be just as open, allowing the public at large to
know what they really are trusting...
4.1 is merely a corollary of the "benefits" requirement.
4.2 is only necessary to evaluate the "risks" requirement.
4.3 should add a requirement that the data be compatibly licensed.
I do believe we need more details somewhere on key risk factors.
In the "details of policy" FAQ:
The "How will the Mozilla Foundation decide" entry significantly
understates the risks side of things. I believe the word "undue" should
be removed, as it suggests Mozilla will accept a fairly high level of
risk per CA. Remember, every CA we add increases the risk, as an
attacker only needs to break one of them to succeed. The entry should
probably list risks separately from benefits.
The discontinuation entry should mention a change in the risk/reward
evaluation as being the most likely reason.
The "free certs" section goes into a digression about email certs. This
information, if it belongs anywhere, belongs in the "how will decide"
entry. The entire second paragraph is redundant with that entry.
In the "Exactly what information" section, I don't entirely agree with
the continuity of CA operations requirement. While continuity
requirements for any CRL and/or OCSP service might make sense, there is
no risk to Mozilla users if a listed CA fails to continue issuing certs.
I think you have just opened a big can of worms with this Certificate
policy.
- It should be called a Mozilla Certificate Authority Policy, not a
Certificate Policy. I don't think there is any plan to include any
non-CA certificates.
- I think the term "default certificate database" is somewhat ambiguous.
Technically, there is a built-in PKCS#11 module containing a database of
root certificates and trust. This module is separate from the
certificate database associated with each Mozilla profile. In fact, the
root certs module/database can be removed by the user altogether and
security in Mozilla can continue to function without it. I just had to
point that out. The CA certs don't get added to the profile certificate
database, unless their trust is modified.
- I am not a lawyer, but I really think you are underestimating the
liability issues for the Foundation if it chooses to select
certificates. Has the Mozilla Foundation hired a lawyer to look at the
issue and determine the liability risks the security policy
exposes the Foundation to, or is it in the process of hiring
one? I would love to be wrong, but I think this is definitely something
that needs to be looked at by a lawyer, because it's the sort of thing
that could take down the Foundation if not done very carefully. Just
because Mozilla has a legal disclaimer does not mean that you won't be
sued. Commercial software comes with plenty of disclaimers, too.
- As the (soon-to-be-former) AOL/Netscape employee who has been doing
most of the check-ins to the built-in root certs for NSS in recent
years, I know I would not feel comfortable at all with a policy that is
so arbitrary and devoid of verifiable objective criteria - section 4.1 in
particular.
- The current official certifications for commercial CAs such as
WebTrust are extensive and expensive. They don't match 1 to 1 with the
spirit of the Mozilla foundation, in that they may be overly restrictive
on who can join the party. So they shouldn't be a sine qua non condition
for inclusion.
- Most users don't understand PKI security and are not able to make CA
certificate trust decisions. And it would indeed be laughable to expect
them to be able to do so with a pop-up that simply shows a few fields in
the certificate. Ever tried to verify a root CA certificate just by
looking at its contents? What did you do, call a company's 800 number and
check the fingerprint and public key to make sure it matched? The point
is, you need an external source of trust to help with the decision.
There is no one-size-fits-all list of trusted CAs. That's why trust is
editable, and not static. People are using Mozilla in diverse
environments. I personally use Mozilla as if it were commercial
software, for personal needs such as banking, and wouldn't expect it to
include MyFriendlyNonProfitCAWhoCan'tAffordWebTrust, Joe'sPersonalCA, or
MilitarySecretCA.
In the latter two cases, the end-users are savvy enough to install the
certificates themselves, before they actually start to use them (i.e.,
long before the browser pops up an "unknown CA - do you want to trust
it?" pop-up).
You on the other hand might want to use
MyFriendlyNonProfitCAWhoCan'tAffordWebTrust without being presented a
trust pop-up that is very hard to act upon.
Unfortunately, I don't know of any organization that will vouch for CAs
in the MyFriendlyNonProfitCAWhoCan'tAffordWebTrust category, but it
sounds like that's what you need here. I don't think it can or should be
the Mozilla foundation itself doing it through its policy.
I also don't think they should be blanket included together with all the
commercial CAs that passed a certification.
I think MF should defer to such a CA verification organization when one
is created. When that happens, these CA certs can be compiled into a separate
PKCS#11 module containing only certificates of CAs in this category.
The Mozilla browser could then prompt the user for the security policy
he wants to adopt when creating his profile: there could be a checkbox
for the commercial CAs, which would basically be the current built-in
module, and another checkbox for
MyFriendlyNonProfitCAWhoCan'tAffordWebTrustCAs (for lack of a better
term) who did not go through the WebTrust (or other) commercial
certification required to be included in the first group.
The effect of each checkbox would be to load or not load a given PKCS#11
module containing a set of trusted CA certificates. 0, 1, 2 or n
PKCS#11 modules containing trusted CA certificates can be loaded in
Mozilla in any one profile.
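(As a side note, NSS's modutil utility can already do this kind of
module juggling from the command line today; the module name and
library path below are hypothetical:

    # see which PKCS#11 modules a profile currently loads
    modutil -list -dbdir /path/to/mozilla-profile
    # load an extra module of root certs into the profile
    modutil -add "Non-Profit CA Roots" -libfile /path/to/libnonprofitroots.so \
        -dbdir /path/to/mozilla-profile
    # ... and unload it again
    modutil -delete "Non-Profit CA Roots" -dbdir /path/to/mozilla-profile

The proposed checkboxes would simply drive the equivalent of this under
the hood.)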
This way, the user makes the decision of which CAs he trusts on a
rational basis when creating his profile with a question that he can answer.
Even if MF relies on a 3rd party, what's to absolve them of all
responsibility? After all, they still included the certificate regardless
of any 3rd party saying it was OK. And as previously stated,
WebTrust/AICPA are a bunch of accountants, with current certificate
practices revolving around commerce rather than the hundreds of other
purposes certificates could be used for, but for which they are too
expensive to get and use. In any case, what has WebTrust/AICPA done in
light of blatant mistakes by companies they have approved? Without
consequences, what is to make any CA, commercial or otherwise, care who
they issue certificates to, as long as they make a buck from it?
Ignoring the semantics of any particular legal
threat, it may be worth considering creating a
single corporation, wholly owned by the Foundation,
that is given total responsibility for all CA issues
including creating the default list. This is a
well known ring-fencing or firewalling technique,
and is generally quite acceptable if clearly
documented (and the parent Foundation never makes
any independent judgement or decision). It would
mean that any suit against the single corporation
that made all the decisions would not threaten the
rest of the project.
iang
I originally called it the Mozilla CA Certificate Policy, but changed it
just to have a shorter name. I can certainly change it back.
But to play devil's advocate: Is it 100% guaranteed that we would never
ever want to include a non-CA cert in Mozilla?
> - I think the term "default certificate database" is somewhat ambiguous.
> Technically, there is a built-in PKCS#11 module containing a database of
> root certificates and trust. This module is separate from the
> certificate database associated with each Mozilla profile. In fact, the
> root certs module/database can be removed by the user altogether and
> security in Mozilla can continue to function without it. I just had to
> point that out. The CA certs don't get added to the profile certificate
> database, unless their trust is modified.
I am open to using different terms and a simple way to explain what
actually is done. Suggestions welcome.
> - I am not a lawyer, but I really think you are underestimating the
> liability issues for the Foundation if it chooses to select
> certificates.
That may well be. As I said before, I will certainly submit any proposed
policy to the Mozilla Foundation for approval by the appropriate people
(MF officers, and the MF board if necessary), and recommend that they
have appropriate legal counsel review the policy. But I am not going to
attempt to do the lawyers' job for them; that is not what I'm being paid
to do (well, I'm not being paid anything at all, but you get the point).
Please forgive me now if I rant for a bit: I'd like to have a
conversation about mitigating security risks, but people keep dragging
me off to start a conversation about legal risks. Why is that? What is
it about CA certs (as opposed to a host of other important
security-related issues) that prompts this relentlessly single-minded
focus on bad things that can happen from a legal point of view? (I am
tempted to say, "because with PKI and CAs the lawyers got there first",
but I'll hold that thought for now.)
You may recall that I was the lead on mozilla.org creating a policy on
addressing and disclosing security vulnerabilities in Mozilla. We had
plenty of hard-hitting discussions on how best to mitigate security
risks to Mozilla users. We spent very little time (if any) worrying
about how to mitigate legal risks. But the types of security
vulnerabilities under discussion were fully as serious as the types of
vulnerabilities resulting from breakdowns in the CA cert scheme. (In
fact on first impression I'd take the vulnerabilities to be formally
equivalent: a Mozilla exploit allowing file writing could lead to CA
certs being invisibly added and/or trust flags reset, and a bad CA cert,
e.g., for object signing, could lead to a user downloading exploit code.)
I guess the difference is that with "normal" vulnerabilities we've
internalized the idea that license liability disclaimers do at least a
reasonable job of mitigating any legal risks to developers and
distributors, and we focus primarily on security risks. If we consider
things like formal security certifications (e.g., Common Criteria), it's
as a potentially-useful option for customers who care about it, but of a
somewhat different nature than standard "designing for security", and
not a substitute for it. On the other hand with CA certs we seem to get
paralyzed by the sheer amount and complexity of the legal paperwork and
audit frameworks, to the point where we feel we can't move without
consulting a lawyer.
Past a certain point I just don't understand why this is the case. I
don't understand why we have to consult a lawyer before deciding whether
to add a CA cert, and not when deciding how to best configure Mozilla
security options for the typical user. (And in fact isn't the former
just a special case of the latter?)
As a final point, I've actually looked at the ABA documents, and I can't
figure out how their whole legal discussion applies in the case of
something like Mozilla. IIRC it is organized around the concept of CAs,
certificate holders, and "relying parties". We are certainly not a CA
and not a certificate holder. It's possible that we would be considered
a "relying party", but that role really seems to be played by Mozilla
users, e.g., who connect to certificate-presenting web sites and so on.
I guess we could be considered a sort of agent acting on behalf of a
relying party, but I don't recall the ABA documents addressing that
situation. I'd be interested in any online references that actually
discuss this.
Anyway, that's the end of my rant (at least for now).
> - As the (soon-to-be-former) AOL/Netscape employee who has been doing
> most of the check-ins to the built-in root certs for NSS in recent
> years, I know I would not feel comfortable at all with a policy that is
> so arbitrary and devoid of verifiable objective criteria - section 4.1 in
> particular.
Then let's come up with some verifiable objective criteria -- but let's
focus on criteria that mitigate security risks, as opposed to legal
risks. The lawyers can take care of themselves.
> - The current official certifications for commercial CAs such as
> WebTrust are extensive and expensive. They don't match 1 to 1 with the
> spirit of the Mozilla foundation, in that they may be overly restrictive
> on who can join the party. So they shouldn't be a sine qua non condition
> for inclusion.
Glad to hear it.
> - Most users don't understand PKI security and are not able to make CA
> certificate trust decisions. And it would indeed be laughable to expect
> them to be able to do so with a pop-up that simply shows a few fields in
> the certificate. Ever tried to verify a root CA certificate just by
> looking at its contents? What did you do, call a company's 800 number and
> check the fingerprint and public key to make sure it matched? The point
> is, you need an external source of trust to help with the decision.
>
> There is no one-size-fits-all list of trusted CAs.
But of course the problem is that in this respect the Mozilla Foundation
offers Mozilla as a one-size-fits-all product, in large part as a
consequence of the design of the underlying security/crypto mechanisms.
We can't easily offer "Mozilla for casual Internet use", "Mozilla for
online banking", "Mozilla for Federal government agencies and
contractors", and so on.
Ideally we could handle this through the extension model being
implemented by Firefox and Thunderbird -- download an extension to
enable a particular set of CA certs for a particular purpose. (Of course
we'd have to address the bootstrap problem of validating such
extensions, e.g., by signing them with an object signing cert issued by
a CA whose cert is present in the base product.) But this is speculation
about the ideal, not the reality we have to deal with right now.
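(Concretely, what such an extension would do per profile is little more
than what NSS's certutil already does from the command line; the
nickname, file name, and path here are invented for the sketch:

    # import a root CA cert (PEM format) into a profile's cert database,
    # trusted for SSL servers, S/MIME email, and object signing
    certutil -A -a -n "Example Root CA" -t "C,C,C" \
        -i example-root.pem -d /path/to/mozilla-profile

so the hard part is the packaging and the bootstrap trust problem, not
the cert handling itself.)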
> That's why trust is
> editable, and not static. People are using Mozilla in diverse
> environments. I personally use Mozilla as if it were commercial
> software, for personal needs such as banking, and wouldn't expect it to
> include MyFriendlyNonProfitCAWhoCan'tAffordWebTrust, Joe'sPersonalCA, or
> MilitarySecretCA.
>
> In the latter two cases, the end-users are savvy enough to install the
> certificates themselves, before they actually start to use them (i.e.,
> long before the browser pops up an "unknown CA - do you want to trust
> it?" pop-up).
The example of MilitarySecretCA reminds me of a point worth emphasizing:
IMO the most significant legal implications of CA certs come in
situations where the certificate holders are large enterprises engaging
in large-dollar-volume commerce or similar activities with relatively
severe consequences if things go awry, and where the parties involved
(the certificate holders) operate in fairly heavyweight pre-existing
legal frameworks of contracts, etc. To a large extent these parties can
and IMO should be able to "take care of themselves" with regard to CA
certs, by actively vetting the software they use to perform these
activities, including consulting independent auditors, and reconfiguring
it if necessary (e.g., deleting/adding CA certs and setting trust flags
appropriately).
> You on the other hand might want to use
> MyFriendlyNonProfitCAWhoCan'tAffordWebTrust without being presented a
> trust pop-up that is very hard to act upon.
>
> Unfortunately, I don't know of any organization that will vouch for CAs
> in the MyFriendlyNonProfitCAWhoCan'tAffordWebTrust category, but it
> sounds like that's what you need here. I don't think it can or should be
> the Mozilla foundation itself doing it through its policy.
> I also don't think they should be blanket included together with all the
> commercial CAs that passed a certification.
Above you wrote "The current official certifications for commercial CAs
such as WebTrust ... shouldn't be a sine qua non condition for
inclusion", implying that a CA could be included without going through
such a certification. But here you write "I also don't think
[non-certified CAs] should be blanket included together with all the
commercial CAs that passed a certification." So I assume that you are
using "blanket included" to mean something different than plain
"included", and the that difference is defined in your next statement:
"blanket included" means included in the same PKCS#11 module.
> I think MF should defer to such a CA verification organization when one
> is created. When that happens, these CA certs can be compiled into a separate
> PKCS#11 module containing only certificates of CAs in this category.
>
> The Mozilla browser could then prompt the user for the security policy
> he wants to adopt when creating his profile: there could be a checkbox
> for the commercial CAs, which would basically be the current built-in
> module, and another checkbox for
> MyFriendlyNonProfitCAWhoCan'tAffordWebTrustCAs (for lack of a better
> term) who did not go through the WebTrust (or other) commercial
> certification required to be included in the first group.
>
> The effect of each checkbox would be to load or not load a given PKCS#11
> module containing a set of trusted CA certificates. 0, 1, 2 or n
> PKCS#11 modules containing trusted CA certificates can be loaded in
> Mozilla in any one profile.
>
> This way, the user makes the decision of which CAs he trusts on a
> rational basis when creating his profile with a question that he can
> answer.
This is a fine idea, and it matches my naive conception of an
extension-style mechanism to let users customize Mozilla in terms of
accepted CAs as they customize it in terms of features.
But this mechanism doesn't exist today, and may never exist if nobody
does the work of creating it. I want to create a policy now, and what
you seem to be recommending is that the policy must mandate independent
audits of CAs until whatever point in the (possibly far distant) future
that the Mozilla implementation provides a way to group CAs in this way.
I don't agree with that.
After reviewing the discussion in this thread (and other threads),
I must conclude that the whole approach to developing a policy is
flawed. A policy should represent specifics based on a more
general philosophy, but I don't think the philosophy itself is
clear in this case.
The first question that must be answered is: Why continue
developing Mozilla? I would hope the answer does NOT revolve
around an exercise in computer science but instead reflects a
desire to create a high-quality software application for personal
and commercial use -- an application for the real world.
If Mozilla is intended for real use, the next question is: Who
uses Mozilla? Given my hope for the answer to the first question,
the answer to this question should be: Anyone who uses the
Internet.
This means that most Mozilla users are not truly sophisticated
software experts.
The answer to the second question raises the next question: In
that context, how are (not how should) CA certificates used?
Clearly (at least to me), the answer is: The primary and most
important use of a CA certificate is to provide the Mozilla user
with assurance that (1) a critical Web site is indeed what it
purports to be and (2) sensitive data communicated to a Web server
travels across the Internet securely.
If this chain of questions and answers is valid, then the Mozilla
Foundation has an obligation to those who use its products to
authenticate not only the validity of each CA certificate in the
default database but also the integrity of the CA's process of
issuing and signing Web server certificates with that CA
certificate. This requires specific, objective, and verifiable
criteria for authenticating both validity and integrity. I
advocate third-party audits because those criteria already exist
and are already being applied through such audits.
No, this does not mean only WebTrust audits. Earlier in this
thread, I cited a California state regulation that specifies
either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
22003(a)6(D) under
<http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
that regulation provides criteria for accepting other
accreditation criteria. However, until other criteria can be
clearly identified and documented, the WebTrust and SAS 70 audits
are the only trustworthy and reliable bases for accepting CA
certificates.
In the end, the real question is: Can we trust and rely on the CA
certificates in the Mozilla default database to protect our
privacy and our assets? The answer to that question will
determine whether we can trust the Mozilla Foundation, which needs
to clarify the underlying philosophy upon which the proposed
policy should be based.
Of course, my original assumption -- my hope for the answer to the
first question -- might not be valid. In this case, Mozilla is
merely an interesting toy; and I will then have to rely on some
other browser for online banking and other critical Web uses.
(It may be that the majority of Mozilla users
are not sophisticated. But that does not mean
that the software is written for them.)
> The answer to the second question raises the next question: In
> that context, how are (not how should) CA certificates used?
> Clearly (at least to me), the answer is: The primary and most
> important use of a CA certificate is to provide the Mozilla user
> with assurance that (1) a critical Web site is indeed what it
> purports to be and (2) sensitive data communicated to a Web server
> travels across the Internet securely.
(This is not clear at all. I think it rests on
a number of false assumptions, but those are
quite hard to describe in a quick email, so
I'll skip that here.)
> If this chain of questions and answers is valid, then the Mozilla
> Foundation has an obligation to those who use its products to
> authenticate not only the validity of each CA certificate in the
> default database but also the integrity of the CA's process of
> issuing and signing Web server certificates with that CA
> certificate.
How do you conclude that? As users don't pay
anything, there cannot be much of an obligation
of any form, let alone something as sensitive as
the validity of a signature chain (something that
evidently other competitors have also failed to
treat as "obligations").
> No, this does not mean only WebTrust audits. Earlier in this
> thread, I cited a California state regulation that specifies
> either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
> 22003(a)6(D) under
> <http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
> that regulation provides criteria for accepting other
> accreditation criteria. However, until other criteria can be
> clearly identified and documented, the WebTrust and SAS 70 audits
> are the only trustworthy and reliable bases for accepting CA
> certificates.
Is there a specific reason why Mozilla should
decide to write and distribute its software
according to these regulations? It seems to
be a bad idea, on the face of it...
> In the end, the real question is: Can we trust and rely on the CA
> certificates in the Mozilla default database to protect our
> privacy and our assets? The answer to that question will
> determine whether we can trust the Mozilla Foundation, which needs
> to clarify the underlying philosophy upon which the proposed
> policy should be based.
No way. This is FUD. Just because the default
list of certs might have some flaws does not mean
that we or users or anyone should not trust the
Mozilla Foundation. The Foundation is under no
obligation to provide a list to you or anyone.
Trying to shame them into providing your list,
one that you can trust, will achieve nothing for
Mozilla or the users. This is easy to see - if
you could pick the list, as trustworthy, then so
could anyone else. As there is a debate, it is
clear that picking the list is a vexing issue.
Thus, no room for FUD tactics.
> Of course, my original assumption -- my hope for the answer to the
> first question -- might not be valid. In this case, Mozilla is
> merely an interesting toy; and I will then have to rely on some
> other browser for online banking and other critical Web uses.
iang
> Julien Pierre wrote:
>> - It should be called a Mozilla Certificate authority policy, not
>> Certificate policy. I don't think there is any plan to include any
>> non-CA certificates.
>
>
> I originally called it the Mozilla CA Certificate Policy, but changed it
> just to have a shorter name. I can certainly change it back.
Well "CA cerificate" is somewhat redundant (it includes "Certificate"
twice). I would say "Mozilla [built-in] Certificate Authority Policy"
would be a good name.
>
> But to play devil's advocate: Is it 100% guaranteed that we would never
> ever want to include a non-CA cert in Mozilla?
It is not guaranteed. You can use the built-ins module for anything you
want, including negative trust on some known compromised popular server
certs (i.e., like a global CRL). But I would not recommend such use. I
think in practice you would only ever want root CA certs on it.
>> - I think the term "default certificate database" is somewhat
>> ambiguous. Technically, there is a built-in PKCS#11 module containing
>> a database of root certificates and trust. This module is separate
>> from the certificate database associated with each Mozilla profile. In
>> fact, the root certs module/database can be removed by the user
>> altogether and security in Mozilla can continue to function without
>> it. I just had to point that out. The CA certs don't get added to the
>> profile certificate database, unless their trust is modified.
>
>
> I am open to using different terms and a simple way to explain what
> actually is done. Suggestions welcome.
Well, I don't know yet what the right name should be, but if we choose
to have several modules with different sets of certs, then the
distinction becomes more important since there won't be a single
"default certificate database".
> (MF officers, and the MF board if necessary), and recommend that they
Make that "require".
> Please forgive me now if I rant for a bit: I'd like to have a
> conversation about mitigating security risks, but people keep dragging
> me off to start a conversation about legal risks. Why is that? What is
> it about CA certs (as opposed to a host of other important
> security-related issues) that prompts this relentlessly single-minded
> focus on bad things that can happen from a legal point of view? (I am
> tempted to say, "because with PKI and CAs the lawyers got there first",
> but I'll hold that thought for now.)
Does it really need spelling out? If you have a rogue or compromised
trusted CA in Mozilla, which willingly signs fake server certificates,
that opens the door to all kinds of scams, where Mozilla users will
think they are doing business with somebody when in fact they are not.
Remember that one of the most common uses of SSL is for financial
transactions. If Mozilla users suffer financial losses due to a rogue
trusted CA, you can bet they will sue whoever approved that trusted CA,
disclaimer or not. So it is in the interest of the Foundation not to
make the decision itself.
> Past a certain point I just don't understand why this is the case. I
> don't understand why we have to consult a lawyer before deciding whether
> to add a CA cert, and not when deciding how to best configure Mozilla
> security options for the typical user. (And in fact isn't the former
> just a special case of the latter?)
You have a point. And I think the MF should have a good answer to that
question, since it distributes all the security code, not just the CA
certs. The liability situation is different now that there is an MF,
rather than a corporate distributor of the open-source code.
>> - As the (soon-to-be-former) AOL/Netscape employee who has been doing
>> most of the check-ins to the built-in root certs for NSS in recent
>> years, I know I would not feel comfortable at all with a policy that
>> is so arbitrary and devoid of verifiable objective criteria - section
>> 4.1 in particular.
>
>
> Then let's come up with some verifiable objective criteria -- but let's
> focus on criteria that mitigate security risks, as opposed to legal
> risks. The lawyers can take care of themselves.
The policy will have to address both risks, for the sake of the MF and
the contributors editing the database.
>> - Most users don't understand PKI security and are not able to make CA
> certificate trust decisions. And it would indeed be laughable to
> expect them to be able to do so with a pop-up that simply shows a few
> fields in the certificate. Ever tried to verify a root CA certificate
> just by looking at its contents? What did you do, call a company's 800
> number and check the fingerprint and public key to make sure it
> matched? The point is, you need an external source of trust to help
>> with the decision.
>>
>> There is no one-size-fits-all list of trusted CAs.
>
>
> But of course the problem is that in this respect the Mozilla Foundation
> offers Mozilla as a one-size-fits-all product, in large part as a
> consequence of the design of the underlying security/crypto mechanisms.
> We can't easily offer "Mozilla for casual Internet use", "Mozilla for
> online banking", "Mozilla for Federal government agencies and
> contractors", and so on.
That single product could still offer the user a choice of several
security policies for the various types of users.
> Ideally we could handle this through the extension model being
> implemented by Firefox and Thunderbird -- download an extension to
> enable a particular set of CA certs for a particular purpose. (Of course
> we'd have to address the bootstrap problem of validating such
> extensions, e.g., by signing them with an object signing cert issued by
> a CA whose cert is present in the base product.)
Downloading a set of certs and asking the user to trust them all is an
even more difficult decision to make than asking the user to trust one
cert ... I think we should limit this discussion to the built-in certs
that are distributed with Mozilla.
>> In the latter two cases, the end-users are savvy enough to install the
>> certificates themselves, before they actually start to use them (ie.
>> long before the browser pops-up an "unknown CA - do you want to trust
>> it?" pop-up).
>
>
> The example of MilitarySecretCA reminds me of a point worth emphasizing:
> IMO the most significant legal implications of CA certs come in
> situations where the certificate holders are large enterprises engaging
> in large-dollar-volume commerce or similar activities with relatively
> severe consequences if things go awry, and where the parties involved
> (the certificate holders) operate in a fairly heavyweight pre-existing
> legal framework of contracts, etc. To a large extent these parties can
> and IMO should be able to "take care of themselves" with regard to CA
> certs, by actively vetting the software they use to perform these
> activities, including consulting independent auditors, and reconfiguring
> it if necessary (e.g., deleting/adding CA certs and setting trust flags
> appropriately).
I agree. For these applications, the environment is not an open one, and
these Mozilla users can customize the software with their own built-in
list of trusted certs compiled in, as opposed to the one that comes with
Mozilla.
Alternatively, if they want to use the binaries distributed on
Mozilla.org, they can delete the built-in root certs module altogether,
and add their trusted root certs obtained from a known reliable source
manually.
> Above you wrote "The current official certifications for commercial CAs
> such as WebTrust ... shouldn't be a sine qua non condition for
> inclusion", implying that a CA could be included without going through
> such a certification. But here you write "I also don't think
> [non-certified CAs] should be blanket included together with all the
> commercial CAs that passed a certification." So I assume that you are
> using "blanket included" to mean something different than plain
> "included", and the that difference is defined in your next statement:
> "blanket included" means included in the same PKCS#11 module.
Yes, I mean that there should be different groups of trusted certs for
these categories, in separate PKCS#11 modules, clearly marked and named.
There wouldn't be one default module that would be always trusted.
You could ask the user during profile creation if he wants to trust
[X] "commercial CAs (entities verified by WebTrust, Inc)"
[X] "other non-profit CAs (entities verified by MyCheaperAuditCompany, Inc)"
One, both, or neither of those checkboxes could be set by default, but
the user would have to be presented with this choice when creating his
Mozilla profile to make sure he chooses the set he wants.
To preserve compatibility with the current way Mozilla operates, and to
protect non-security-savvy Mozilla users, I think the preferred default
would be to have a check mark next to the commercial CAs, and none next
to the non-profit CAs.
> This is a fine idea, and it matches my naive conception of an
> extension-style mechanism to let users customize Mozilla in terms of
> accepted CAs as they customize it in terms of features.
>
> But this mechanism doesn't exist today, and may never exist if nobody
> does the work of creating it. I want to create a policy now, and what
> you seem to be recommending is that the policy must mandate independent
> audits of CAs until whatever point in the (possibly far distant) future
> that the Mozilla implementation provides a way to group CAs in this way.
> I don't agree with that.
My take on this is: the policy should be carefully examined before it
is decided on; it's not something to do in a hurry just because there
are a couple of CAs shouting that they want to be included right away.
It may well be that the right policy requires some work to actually
implement.
Let's examine the work that's actually required to implement my proposal:
- NSS already has the ability to load any number of PKCS#11 modules. No
code changes needed here.
- It is quite trivial work to generate multiple PKCS#11 root-cert
modules from multiple CA cert lists. All that needs to be done is to
rebuild the builtins directory with a different certdata.txt, and a
different DLL/.so target name. That's mostly scripting/Makefile work.
Again, no code changes are needed.
- PSM already has a UI and code to load PKCS#11 modules manually, under
"Security Devices", which could be used to load alternate/additional
root certificate modules. This is not very user-friendly and is buried
in the preferences/privacy and security dialog, but it actually does the job.
- The only real new code that needs to be written to make the process
seamless for Mozilla users is a GUI prompt in the Mozilla profile
creation to ask the user to choose the CA policy (or policies) he wants
to use, and
load the corresponding PKCS#11 module(s), which is done by a single
existing NSS API call. Now, it's been a long time since I wrote any GUI
code, and I have never written any for Mozilla itself, but it does not
strike me as a lot of work.
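(For what it's worth, here is a rough sketch of that last step. I'm
assuming the existing call in question is SECMOD_LoadUserModule(); the
profile path and the name/path of the second builtins library are made
up for illustration -- this is not actual Mozilla code.)

    /* Sketch: load an alternate root-certs PKCS#11 module via NSS.
     * "libnssckbi-nonprofit.so" stands in for a hypothetical second
     * builtins library built from a different certdata.txt. */
    #include <nss.h>
    #include <secmod.h>
    #include <stdio.h>

    int main(void)
    {
        SECMODModule *mod;

        /* NSS must be initialized against a profile directory first. */
        if (NSS_Init("/path/to/profile") != SECSuccess) {
            fprintf(stderr, "NSS_Init failed\n");
            return 1;
        }

        /* The single API call: one module spec string names the
         * library and gives the module a user-visible name. */
        mod = SECMOD_LoadUserModule(
            "library=libnssckbi-nonprofit.so name=\"Non-Profit Roots\"",
            NULL /* parent */, PR_FALSE /* recurse */);
        if (mod == NULL || !mod->loaded) {
            fprintf(stderr, "failed to load root-certs module\n");
            if (mod)
                SECMOD_DestroyModule(mod);
            NSS_Shutdown();
            return 1;
        }

        printf("loaded module: %s\n", mod->commonName);
        /* Drop our reference; the module stays loaded for the session. */
        SECMOD_DestroyModule(mod);
        NSS_Shutdown();
        return 0;
    }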
Of course there is still the dependency on finding a third party to
verify the non-commercial CAs. I think that's something that is
inevitable, and you should start looking for one now. If you can't find
one, somebody else suggested creating a separate legal entity tasked
with that specific role, which would protect the MF from any bad calls
on the CAs included.
I kind of agree with Frank's statement about security issues relating to
Mozilla vs. this one; surely those relate a lot more directly to MF
liability than any CA issue does, as the CAs themselves, not the MF,
should be liable for any poor judgment in the certificates they issue.
Would you really trust a Web server certificate issued by a CA
that lost its accreditation or received less than an unqualified
opinion on an audit? I would not, and I would be extra suspicious
about server certificates issued by that CA before the negative
action against it. After all, such negative action would be the
result of past discrepancies by the CA, not future discrepancies.
And I would certainly not trust server certificates issued after
the negative action until someone -- definitely not the CA itself
-- pronounced the discrepancies corrected. Then, I would trust
only those server certificates issued after the corrections were
determined.
We are talking about MONEY and PRIVACY. How much risk are you
willing to take with these?
--
David E. Ross
<http://www.rossde.com/>
I use Mozilla as my Web browser because I want a browser that
So I take it you remove a lot of certificates from your copy of Mozilla
then?
>
>After reviewing the discussion in this thread (and other threads),
>I must conclude that the whole approach to developing a policy is
>flawed. A policy should represent specifics based on a more
>general philosophy, but I don't think the philosophy itself is
>clear in this case.
>
What Frank is calling the policy is, I believe, what you are calling the
philosophy. Simply put, it is that the Mozilla Foundation should decide
whether or not to include a CA based on a balancing of the risks and
benefits of doing so.
What we still need to nail down are some more specifics as to how to
evaluate the benefits and risks. I believe Frank's "FAQ" does a
reasonable job of describing how to evaluate the benefits. The risks
side needs much more definition.
>If this chain of questions and answers is valid, then the Mozilla
>Foundation has an obligation to those who use its products to
>authenticate not only the validity of each CA certificate in the
>default database but also the integrity of the CA's process of
>issuing and signing Web server certificates with that CA
>certificate.
>
I'm not sure I'd call it an "obligation", but given the minimalist
threat model I proposed earlier, this is something that is necessary in
order to evaluate the risks.
>No, this does not mean only WebTrust audits. Earlier in this
>thread, I cited a California state regulation that specifies
>either WebTrust or SAS 70 audits. (See Sections 22003(a)6(C) and
>22003(a)6(D) under
><http://www.ss.ca.gov/digsig/regulations.htm#22003>.) Further,
>that regulation provides criteria for accepting other
>accreditation criteria. However, until other criteria can be
>clearly identified and documented, the WebTrust and SAS 70 audits
>are the only trustworthy and reliable bases for accepting CA
>certificates.
>
>
WebTrust and SAS 70 audits outsource the bulk of the risk assessment.
They are only useful if the threat model used for the audit is
compatible with one's own threat model. It is quite possible that their
threat model protects against things that Mozilla users don't care
about, so requiring CAs to pass their criteria might unreasonably
exclude CAs. It also might be possible and worthwhile to perform such a
risk assessment without outsourcing.
But we do clearly need a threat model in order to assess risks.
> David Ross wrote:
>
>> Clearly (at least to me), the answer is: The primary and most
>> important use of a CA certificate is to provide the Mozilla user
>> with assurance that (1) a critical Web site is indeed what it
>> purports to be
>
> (This is not clear at all. I think it rests on
> a number of false assumptions, but those are
> quite hard to describe in a quick email, so
> I'll skip that here.)
As (1) is the definition of a certificate (modulo the fact that
applicability goes beyond just web sites), it is as clear to me as any
derivation from definitions. That you state it is not clear, omitting
any argument, is in no way convincing.
> In the "Exactly what information" section, I don't entirely agree with
> the continuity of CA operations requirement. While continuity
> requirements for any CRL and/or OCSP service might make sense, there is
> no risk to mozilla users if a listed CA fails to continue issuing certs.
I agree with that last sentence. Continuity of operations is primarily
to keep revocation going. If revocation stops, rightful private key
holders are thereafter unprotected from damages due to compromised keys.
> > #3: I indicate that a CA that fails an audit or loses
> > accreditation should have its certificates removed and the removal
> > should be publicized. Mozilla users should not rely on a
> > deficient CA.
>
> Note that in practice this will be problematic, since AFAIK removing a
> cert from the default database affects only users who are installing
> Mozilla for the first time. I'll let others speak to this issue.
Frank, Things work rather differently now than they did 4 years ago.
The "built-in" list of CAs, and the built-in list of trust info is
no longer stored in the cert DB. It's in a shared library that gets
replaced when a new (or old) version of mozilla is installed.
If users CHANGE the trust settings on a root CA, or import a new root
CA and trust, the new CA and trust info goes into the cert DB.
Anyway, I think it's easier to remove trust for a built-in root CA now
than before.
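(For reference, a trust-flag override looks roughly like this
programmatically -- a sketch under my assumptions, not the actual PSM
code; the same thing can be done from the command line with
certutil -M -t ",," -n <nickname>.)

    /* Sketch: distrust a root CA. The override is stored in the
     * profile's cert DB, so it survives upgrades that replace the
     * built-ins shared library. */
    #include <cert.h>
    #include <certdb.h>
    #include <pk11func.h>

    SECStatus distrust_root(const char *nickname)
    {
        CERTCertificate *cert = PK11_FindCertFromNickname(nickname, NULL);
        CERTCertTrust trust;
        SECStatus rv;

        if (cert == NULL)
            return SECFailure;

        /* ",,": no trust flags for SSL, email, or object signing. */
        if (CERT_DecodeTrustString(&trust, ",,") != SECSuccess) {
            CERT_DestroyCertificate(cert);
            return SECFailure;
        }

        /* The new trust record goes into the cert DB, overriding the
         * trust shipped with the built-in module. */
        rv = CERT_ChangeCertTrust(CERT_GetDefaultCertDB(), cert, &trust);
        CERT_DestroyCertificate(cert);
        return rv;
    }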
Sorry, yes, I should have left that bit out.
The underlying fact here is that a CA certificate
carries a signature from a third party (CA)
on a key for a second party (website).
That's a cryptographic fact, in general, and
other claims are assumptions that may or may
not be founded.
It's by no means definitional whether that
signature delivers anything like "providing
assurance that a critical web site is indeed
what it purports to be." The question is
whether we can move from a cryptographic
statement (this key signs that key) to a
business statement (this site is who they
say they are) with any degree of confidence.
The answer to that seems to be no. Not with
any confidence.
Just as an example of one only amongst a
long list of difficulties, the present issue
> is that, as no browser goes to any trouble
> to separate out *which* CA made the claim,
the confidence is reduced to the lowest
common denominator. (There are many more
issues, but that one is apropos.)
iang
PS: Cf. branding discussion started by Tim Dierks.
AFAIK, Peter Gutmann first made the observation
about "one size" security policy resulting in
no security.
Would it make sense for the MF to have some assurance from the CA that
the CRL would be kept running for a minimum of 12 months afterwards,
either by the CA itself, by a 3rd party, or even by the MF?
The uniting of the business assertion with the cryptographic assertion
is accomplished via a 2-step process:
1. The statement from the CA on how the cryptographic assertion is made
- what checks and balances, identification and authentication mechanisms
are employed to assure that the details in the cryptographic assertion
(e.g. name, domain ownership etc) are valid - you can get this from the
Certification Practice Statement [CPS] (this is generally referenced in
the certificate)
2. The audit of the CA by an independent body rating the CA on its
adherence to its CPS - in the world of CAs we have SAS 70 and WebTrust
that are prevalent, the latter seeming to gain greater emphasis of late.
I seem to have read somewhere recently that Microsoft was considering
requiring CAs to pass the WebTrust audit before they would allow their
certs to be embedded in their browser - anyone confirm that?
Regards,
-Scott
Were you sleeping the last two or three years, or more? :-)
It must be since IE 5.5, or at the latest since IE 6, that CAs that did
not pass an audit are not present in the browser's built-in list.
The current news is more that XP will try to check whether its list of
CAs is up-to-date with the latest version on Windows Update every time a
certificate chain is verified.
So updates to the list, additions or removals, will take effect very
fast for all XP/CAPI users with an on-line connection.
The list is updated for older clients when they start an update download
from Windows.
Thanks for the info. This has not been the first time, nor will it be
the last, that my ignorance has led me astray.
> If users CHANGE the trust settings on a root CA, or import a new root
> CA and trust, the new CA and trust info goes into the cert DB.
So in essence a new release of Mozilla could remove or "revoke" CA certs
on behalf of all the users who were trusting to Mozilla to do the right
thing, while not affecting users who had exercised their own judgement.
But I guess this is not *quite* true: If a new CA cert were added and
trust flags turned on, that would affect everyone who upgraded to the
new version, and users who preferred to trust their own judgement on CA
certs would not necessarily be alerted during the installation process
or thereafter. Instead they would have to manually check the CA cert
list after the upgrade (or read the release notes).
Frank
--
Frank Hecker
hecker.org
David Ross wrote:
> After reviewing the discussion in this thread (and other threads),
> I must conclude that the whole approach to developing a policy is
> flawed. A policy should represent specifics based on a more
> general philosophy, but I don't think the philosophy itself is
> clear in this case.
This is an excellent comment which I'm going to take to heart. I have
concluded that it would be very useful for me to write and post a
"meta-policy" document that clarifies the underlying type of policy I
personally want to see us develop, and why that policy has the features
that it does; this would in essence outline the more general philosophy
behind the policy itself.
> The first question that must be answered is: Why continue
> developing Mozilla? I would hope the answer does NOT revolve
> around an exercise in computer science but instead reflects a
> desire to create a high-quality software application for personal
> and commercial use -- an application for the real world.
Yes, but additional background is useful here: With the founding of the
Mozilla Foundation the explicit focus of the project is now indeed to
produce an end user software product. (Prior to that the nominal focus
was to produce a developer product from which others would create an end
user product.) So, yes, we do want to create an "application for the
real world".
However, although Mozilla is an end user product, it is not a commercial
proprietary product but rather a non-commercial open source product. IMO
that has implications for what users' expectations are, or at least
should be, both in general and in the area of security in particular.
Note carefully: I am *not* saying that users should have lower
expectations regarding the quality and security of non-commercial open
source products like Mozilla. Rather I am saying that users do (or
should) have different expectations about how that quality and security
is going to be maintained in practice.
For a commercial proprietary product a user's expectations are (or
should be) something like this:
* I've paid a vendor good money for this product (whether directly or
indirectly, e.g., for a bundled product like IE).
* The vendor has total control over this product and how it's developed
(since it's a proprietary closed source product).
* If the product has bugs, including security flaws, then I expect that
the vendor will take the money that I and others have given it and
through its own efforts (and no one else's) will provide the necessary
resources (people, systems, etc.) to fix the bugs and provide me with a
better product in the future.
* If this proves not to be the case then I will lose faith in the
product and the vendor, and will look for an alternative vendor and product.
On the other hand, for a non-commercial open source product like Mozilla
a user's expectations are (or should be) something like this:
* I've paid nothing for this product, and the licensing terms are such
that I can do pretty much anything with it, including modifying it using
the source code, redistributing it, and so on.
* The organization (or individual) distributing the product doesn't own
or control all the resources (people or otherwise) used to develop the
product.
* If the product has bugs, including security flaws, then I expect that
the product's distributor and/or others involved with the product will
have established processes that maximize the probability that the bugs
will be fixed and that I will be provided with a better product in the
future.
* If this proves not to be the case then I may lose faith in the
product, the processes, and the distributor and/or others that are
involved with them, and I may look for an alternative product. On the
other hand, I may decide to try to fix my own problems (which is
possible since I have the source code and necessary rights to that
source), or I may decide to participate in the processes myself and help
make them more effective at fixing the bugs that I and possibly others
have found.
Now, you may say: "So what? What does this difference, if indeed it is
real, have to do with anything, including the policy we're discussing?"
I'll come back to this question further on in my comments.
> If Mozilla is intended for real use, the next question is: Who
> uses Mozilla? Given my hope for the answer to the first question,
> the answer to this question should be: Anyone who uses the
> Internet.
> This means that most Mozilla users are not truly sophisticated
> software experts.
Agreed, and more specifically most Mozilla users are not security experts.
> The answer to the second question raises the next question: In
> that context, how are (not how should) CA certificates used?
> Clearly (at least to me), the answer is: The primary and most
> important use of a CA certificate is to provide the Mozilla user
> with assurance that (1) a critical Web site is indeed what it
> purports to be and (2) sensitive data communicated to a Web server
> travels across the Internet securely.
This is true for web server certificates. With email certificates issued
by CAs (e.g., for S/MIME) we have the somewhat different expectation
that the certificate will provide assurance that the entity signing a
signed email message is in fact the entity who controls that email
account. (In other words, if I receive signed email with an accompanying
certificate that lists "jd...@foo.com" as the email address, that the
message really came from whomever uses and controls the jd...@foo.com
email account.) And we have yet other expectations with CA certificates
issued for use in signing downloadable executable code, etc.
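(In code terms, the S/MIME expectation might look something like the
hypothetical helper below -- my illustration, not PSM code: verify the
signer's cert for email use, then check that the address the CA bound
into it matches the message's From: address.)

    /* Sketch: does the signer's cert match the From: address? */
    #include <cert.h>
    #include <plstr.h>

    PRBool signer_matches_from(CERTCertificate *signerCert,
                               const char *fromAddr)
    {
        char *certAddr;

        /* First, is the cert itself valid for signing email? */
        if (CERT_VerifyCertNow(CERT_GetDefaultCertDB(), signerCert,
                               PR_TRUE, certUsageEmailSigner,
                               NULL) != SECSuccess)
            return PR_FALSE;

        /* Then, the address the issuing CA certified. */
        certAddr = CERT_GetCertificateEmailAddress(signerCert);
        return (certAddr != NULL && PL_strcasecmp(certAddr, fromAddr) == 0)
                   ? PR_TRUE : PR_FALSE;
    }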
> If this chain of questions and answers is valid, then the Mozilla
> Foundation has an obligation to those who use its products to
> authenticate not only the validity of each CA certificate in the
> default database but also the integrity of the CA's process of
> issuing and signing Web server certificates with that CA
> certificate.
I pretty much agree. I think the responsibility is in practice divided
among multiple parties, since the Mozilla Foundation doesn't own and
control all aspects of Mozilla development. But the Mozilla Foundation
is indeed responsible for the product that it distributes.
> This requires specific, objective, and verifiable
> criteria for authenticating both validity and integrity.
Ah, here's where I think opinions might begin to diverge. (Actually,
based on Ian Grigg's comments here and elsewhere I suspect his opinions
may have diverged a comment or two back -- but I'll let him speak for
himself.)
Let's take a moment to discuss this supposed need for "specific,
objective, and verifiable" criteria. In particular, recall that I
claimed in another message (and have not yet been contradicted) that CA
cert-related "bugs" (e.g., including a cert for a CA that did not
perform its proper functions) are simply a special class of security
vulnerabilities in general, and are formally equivalent to other
security vulnerabilities in the sense that the effects on the user may
be equally serious, and in some cases identical or nearly so.
As a concrete example, recall the recent vulnerability in IE -- and to
some extent Mozilla -- regarding display of URLs to a user. The net
effect of this vulnerability was that a user thinking they were
accessing one web site (e.g., http://www.onlinebank.com) ended up
accessing another site (e.g., http://www.badguys.org) instead, with
little or no indication that this had happened. This is basically the
same situation that could be caused by a CA issuing a
"www.onlinebank.com" server certificate to the wrong person/entity. (And
IIRC use of SSL/TLS would not have protected the user here, since the
attackers could have gotten a valid cert for "www.badguys.org", and the
browser would be checking that cert against the "real" URL -- i.e., the
one being accessed -- as opposed to the URL as falsely displayed to the
user.)
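(To make that parenthetical concrete: the name check in NSS-based code
compares the certificate against the host actually connected to, never
against the URL as displayed. A minimal sketch -- the helper name is
mine, not actual browser code:)

    /* Sketch: hostname check. Succeeds for a perfectly valid
     * www.badguys.org cert even while the location bar is spoofed
     * to read www.onlinebank.com, because only the host actually
     * being accessed enters into the comparison. */
    #include <cert.h>

    SECStatus check_peer_name(CERTCertificate *peerCert,
                              const char *connectedHost)
    {
        return CERT_VerifyCertName(peerCert, connectedHost);
    }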
So, if CA cert-related vulnerabilities are formally equivalent to non-CA
related security vulnerabilities and vice versa, and if decisions on
including CA certs require "specific, objective, and verifiable"
criteria, then logically we should also specify and apply such criteria
for everything else in Mozilla related to user security.
But in fact we don't do this, even though such criteria exist (e.g.,
Common Criteria and related standards). Instead we depend on the "three
P's": people, processes, and publicity. The Mozilla project (under the
ultimate direction of the Mozilla Foundation) puts its trust in
designated "module owners" responsible for particular code areas,
requires that those module owners and others follow particular
processes in developing and maintaining Mozilla (e.g., use of Bugzilla,
review and super-review, etc.), and does all that in a public manner,
where the details of the code and processes are open to public review.
As it happens, handling security vulnerabilities doesn't fully follow
this model, since the process isn't totally open at all times and in all
aspects. This was not for lack of trying -- the actual processes
recommended by mozilla.org policy were the result of a compromise
between the "full disclosure" position and the "fix in private"
position. But that doesn't change my essential point -- the Mozilla
project has never applied specific, objective, and verifiable criteria
to all aspects of Mozilla security, and doesn't seem to have especially
suffered for not doing so.
> I advocate third-party audits because those criteria already exist
> and are already being applied through such audits.
But as I mentioned earlier, mandating independent audits 1) imposes
other costs (really externalities in the economic sense) that are borne
by the Mozilla project and Mozilla users, and 2) may not actually be an
appropriate form of security risk mitigation in all cases.
Rather than repeat my previous comments addressing these issues in the
context of CAs and CA auditing, let's turn to a similar issue in another
closely-related context, namely independent auditing of cryptographic
implementations according to FIPS 140-x and related standards.
As it happens the Mozilla project was the beneficiary of a fortunate
historical accident: It was able to take advantage of a high-quality
field-proven open source cryptographic implementation, namely NSS, that
had also been FIPS 140-1 validated.
But let's turn back the clock a few years and suppose that NSS never
existed, and that the only available open source crypto library were
OpenSSL, which at the time was not FIPS validated. Let's further suppose
that there were another alternative choice, a proprietary crypto library
(call it "ClosedSSL") whose vendor had made it available in binary form
on the main Mozilla platforms (Windows, Mac OS, and Linux), with license
terms permitting it to be included in Mozilla and redistributed at no
charge.
If you had to pick which crypto library to include in Mozilla, which
would it have been: OpenSSL, a product with source code available and a
fairly public development process, but no formal validation against
specific, objective, and verifiable criteria, or ClosedSSL, a product
formally validated against specific, objective, and verifiable criteria
but developed behind closed doors with source code not available?
I think reasonable people could decide either way and justify the
choice. However I can tell you what I would have done: I would have
recommended use of OpenSSL instead of ClosedSSL, for at least two reasons:
First, use of an open source product that could be reviewed in the
public eye would have been consistent with practices and processes in
the rest of the Mozilla project. Otherwise we would have been able to
take advantage of public review and distributed bug detection and fixing
for the rest of Mozilla, but would have been hampered in attempting to
find and fix potential bugs in the crypto library. This would mean that
we couldn't leverage the distributed nature of open source bug fixing
with regard to the crypto library, and that the reputation of Mozilla as
a whole could be compromised by problems with a product (ClosedSSL) over
which we had no control or oversight.
Second, use of an open source product would help enable Mozilla to be
ported to more platforms, including platforms that the vendor of
ClosedSSL did not support and might not be interested in supporting.
This list of otherwise "deprived" platforms might have included OS/2,
the various *BSD distributions, non-Red Hat distributions of Linux,
Solaris, HP-UX, AIX, Irix, and others. Most people may not care whether
Mozilla is available on, say, OS/2, but I can guarantee that the users
of OS/2 care a lot, and the widespread availability of Mozilla on lots
of different platforms has been a major factor in its popularity and
success thus far.
So in this case the informal "validation" made possible by public review
of open source code would trump the formal validation of closed code
against specific, objective, and verifiable criteria, at least for me.
Based on the market success of OpenSSL over the years I think a lot of
people hold the same opinion as I do. As it happens OpenSSL is now being
validated against the FIPS 140-2 criteria, but note the cause and
effect: OpenSSL is being validated because it became so popular that its
user base came to include users for which FIPS validation was important,
but the popularity of OpenSSL had nothing to do with whether it was FIPS
validated or not.
This ties back to Ian Grigg's comments about "markets" in this context.
I don't agree with everything Ian writes, but I think this line of
thinking can be fruitful, particularly with regard to the role and value
of independent auditors:
If we look at why we have independent auditors in the case of public
companies, it's in large part because most of what goes on in any
company is closed to public view. Investors don't have access to
detailed internal sales forecasts, or customer lists, or development
plans, or other things that they might use to evaluate a company. So we
have independent auditors who are in a sense "stand-ins" for investors,
and who have access to information that investors are denied.
But at the same time independent auditors can't be complete stand-ins
for investors. For one thing, the auditors are paid by the company, and
so their interests are not 100% aligned with investors: Although the
vast majority of individual auditors and audit firms may act in a manner
beyond reproach, there is always at some level the temptation to "fudge"
the results, and there is almost always someone somewhere who succumbs
to that temptation, at least to some extent.
Besides whatever other virtues it might have, the requirement for
specific, objective, and verifiable criteria can be seen in one light as
a response to the issues raised by the temptations inherent in the role
of paid independent auditor: By tightly restricting the "degrees of
freedom" available to auditors, we make it more difficult for auditors
to "bend the rules" to help a company obtain a favorable evaluation.
However in public markets like the stock exchange investors still don't
put complete trust in the results of corporate audits, no matter how
carefully conducted. They also take into account any other information
available to them, and the final value assigned to a company is based on
the totality of information known about a company, of which the audited
results are only a part. If a company's operations were significantly
more transparent than they typically are today (and a number of people
have recommended that companies do this), then IMO the audited results
would be an even smaller factor in determining perceived company value.
If you substitute "users" for "investors" and "CAs" for "companies" (the
"auditors" are still "auditors") then I think you pretty much capture
the essence of what Ian Grigg is saying (or at least what I take him to
be saying).
So, to turn once again back to the case of deciding which CA certs to
include, a possible alternative policy would be for the Mozilla
Foundation to assign this task to a particular "module owner" and
require that they follow normal Mozilla project processes when making
their decisions: track requests and comments on them in Bugzilla,
supplement with discussions in public forums, and take public comments
and publicly-available information into account when making the
decisions. There would be no specific, objective, and verifiable
criteria outlined as part of the original policy; any such criteria
would emerge as part of the public decision process, and any particular
decision might apply some criteria but not others.
Now I suspect that whatever policy I end up proposing will in fact
include a large dose of specific, objective, and verifiable criteria for
CAs. That's because any policy, including this one, is a product of
compromise, and there are a lot of people who think formal criteria are
important in this context. I think it will be much easier to get a
policy completed if we include enough formal criteria to satisfy most
people concerned about this.
> In the end, the real question is: Can we trust and rely on the CA
> certificates in the Mozilla default database to protect our
> privacy and our assets?
I respectfully disagree. The real question is: Can we trust and rely on
the Mozilla project to produce a product that properly protects the
security of users? The whole CA cert scheme is but one aspect of that.
The answer to that question will
> determine whether we can trust the Mozilla Foundation, which needs
> to clarify the underlying philosophy upon which the proposed
> policy should be based.
I agree that we need to clarify the underlying philosophy, which is why
my next task is to create the "meta-policy" I mentioned above. Only then
will I feel comfortable creating a new revision of the proposed policy
and FAQ.
Rather than "for a minimum of 12 months", I would say "until the last
issued EE cert expires". Then, yes, I think that makes sense.
>> The "built-in" list of CAs, and the built-in list of trust info is
>> no longer stored in the cert DB. It's in a shared library that gets
>> replaced when a new (or old) version of mozilla is installed.
[snip]
>> If users CHANGE the trust settings on a root CA, or import a new root
>> CA and trust, the new CA and trust info goes into the cert DB.
> So in essence a new release of Mozilla could remove or "revoke" CA certs
> on behalf of all the users who were trusting to Mozilla to do the right
> thing, while not affecting users who had exercised their own judgement.
Prior to NSS 3.4, which was introduced into mozilla in moz 1.3 or perhaps
earlier (not sure), the built-in certs and their trust info were all
copied into the cert DB. So users of mozilla whose cert DBs originated
before NSS 3.4 will still have a LOT of root CA certs in them.
But users whose cert DBs originated in moz 1.3 or later (including N7.1
IINM), should have rather few CA certs in their cert DBs.
> But I guess this is not *quite* true: If a new CA cert were added and
> trust flags turned on, that would affect everyone who upgraded to the
> new version, and users who preferred to trust their own judgement on CA
> certs would not necessarily be alerted during the installation process
> or thereafter. Instead they would have to manually check the CA cert
> list after the upgrade (or read the release notes).
Yes, this has always been true for NSS users, IINM.
> Frank
As you know, a certificate is a signed statement that is either true or
false. If it is false, then the act of presenting it as if it were true
is an act of fraud. The statement implicit in every cert has been "spoken"
by the Cert's issuer, and is signed by the cert's issuer. An English
approximation of that statement would read something like this:
"Here is a public key, and a collection of one or more names (which
may include one or more of each of the following:
- a directory name (which may include
- a person's name,
- names of organizations,
- names of locations and states,
- postal addresses, etc.) and
- an email address, and/or
- a server's domain name, and/or
- an IP address.
I (the issuer) certify that the private key that complements this
public key is held by persons (or systems) rightfully identified
by all these names, and that the rightful holder(s) have the right to
use this public key for the following purposes: (list of purposes),
from this beginning date until this ending date."
That statement is essentially a "binding" of names to a public key.
By itself, this signed statement, this certificate, *DOES NOT*
"provide the Mozilla user with assurance that (1) a critical Web site
is indeed what it purports to be"
ANYONE can make a copy of that cert, and put it on their website.
The mere possession of, and presentation of, that certificate provides
NO assurances whatsoever that the presenting party is the party named
on the certificate.
ONLY the successful demonstration, by the party presenting the
certificate, that he possesses the private key that complements the
public key in the cert, coupled with the validated CA signature on the
cert, assures the recipient that the party presenting the cert is the
named party.
That successful demonstration can take the form of
a) a signature that is verifiable by the party to whom the cert is
presented (the relying party), which signature incorporates information
provided by the relying party, or
b) the demonstrable decryption of data that was encrypted by the
relying party using the public key in the cert.
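(In NSS terms the two halves are separate. Validating the issuer's
signed statement is one call -- a sketch, with a hypothetical helper
name; the proof-of-possession half happens in the SSL handshake or in
signature verification, not here:)

    /* Sketch: check only the issuer's signed statement, i.e. the
     * signature chain up to a trusted root. This says nothing about
     * whether the presenter actually holds the private key. */
    #include <cert.h>

    SECStatus validate_issuer_statement(CERTCertificate *presentedCert)
    {
        return CERT_VerifyCertNow(CERT_GetDefaultCertDB(), presentedCert,
                                  PR_TRUE,            /* check signatures */
                                  certUsageSSLServer, /* intended usage */
                                  NULL);              /* no PIN callback */
    }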
> Rather than "for a minimum of 12 months", I would say "until the last
> issued EE cert expires". Then, yes, I think that makes sense.
This would have to be a policy decision for the MF I think, and if you
were to require this I also think that the MF would need to decide on a
term for which they would be willing to pay for domains and host
CRL/OCSP services...
If a company goes bust tomorrow, I doubt there would be any funding to
keep a CRL/OCSP running beyond that, and I doubt any company large or
small these days is immune to that, with numerous "large" companies
suddenly going out of business owing billions...
--
Best regards,
Duane
http://www.cacert.org - Free Security Certificates
http://www.nodedb.com - Think globally, network locally
http://www.sydneywireless.com - Telecommunications Freedom
http://happysnapper.com.au - Sell your photos over the net!
The purpose of third-party audits is to provide evidence that the
CA's practices include some defined level of care when using the
CA certificate to sign a Web server certificate. If CA
certificates are installed only when the CA has passed such an
audit, then I indeed have some assurance that a critical Web site
is indeed what it purports to be. That assurance is greater than
if merely the CA itself said, "Trust me." It is also greater than
if Mozilla said, "Don't worry. We know what we're doing."
For protecting my bank and stock accounts and my privacy, I want
to know that the CA that issued and signed my bank's or mutual
fund's server certificate has itself been vetted by a professional
using recognized, objective standards.
--
David E. Ross
<http://www.rossde.com/>
I use Mozilla as my Web browser because I want a browser that
If I recall correctly, you recently posted something about all your
certs being good for only 6 or 12 months (I forget which), and you
were thinking of even lowering that.
So, my point was, there's no point in promising you'll keep OCSP
going for 12 months if all your certs will expire sooner than that.
After the last cert expires, shut 'em down!
No, that was for "unknown" people in the system that come along and
sign up with no one verifying they are who they say they are... But
my point about the MF running a CRL/OCSP service after a company goes
bust was a generalised one regardless of which CA it is, and relates
back to your comments about guarantees of CAs continuing to run after
the principal gets hit by a bus, when in reality all that needs to
happen is that the CRL/OCSP remain in operation, which in the event of
a CA going bust the MF might want to take responsibility for running,
if it were deemed that this was a good idea... I'm just thinking out
loud about the fact that companies are going bust left, right and
centre, and how to ensure their CRL/OCSP remains accessible till the
last certificate they issued expires... Although the problem with this
is how does a user revoke an existing certificate between a CA ceasing
operation and their certificate expiring...
Good CAs pay for insurance to cover that case. If they go bust, their
insurance pays someone to ensure that minimal service.
Normally if your bank goes bust, some other bank makes arrangements, and
your account is transferred to another bank just like nothing happened.
So it does happen in other areas.
> [...] Although the problem
> with this is how does a user revoke an existing certificate between a CA
> ceasing operation and their certificate expiring...
The insured minimal service should cover that too.
In your case, you could pay an advance hosting charge that covers at
least the longest validity period of the user certificates you issue.
The day you go bust, you close the enrolment URLs, and the rest runs on
its own until the end of the already-paid period.
You could have an arrangement with some people/institution, so that they
will check it stays in working order.
It should be possible to find a solution that way, where these people
would just have to be able to do some basic maintenance, *not* correct
bugs, and would not pay any hosting charge.
> It should be possible to find a solution that way, where these people
> would just have to be able to do some basic maintenance, *not* correct
> bugs, and would not pay any hosting charge.
We're actually going forwards in terms of money, as income from
donations/memberships/google ads is covering costs... In reality
certificates are like printing money; once you've covered minimum basic
costs and such, everything else is pure profit...
For the average person, this is fairly meaningless.
It's akin to "trust me, we have auditors." Enron,
and all that, are just the beginning of the failure
of this process. As auditors tend to just check
that the CA is following its own declared practices,
and have little incentive to detect real failures,
audits are not all they are cracked up to be.
(For the serious professional, it is even more daunting,
as each layer that gets peeled off in the CA reveals
either "trust me" or "I wonder if that really works...")
> If CA
> certificates are installed only when the CA has passed such an
> audit, then I indeed have some assurance that a critical Web site
> is indeed what it purports to be. That assurance is greater than
> if merely the CA itself said, "Trust me." It is also greater than
> if Mozilla said, "Don't worry. We know what we're doing."
Actually, I would agree that Mozilla should not say
"We know what we are doing." I would suggest that
Mozilla present a little more info to the user, and
suggest the user decide for themselves whether the
CA concerned is worth anything.
We've had good success using GeoTrust certificates,
for example. But, none of the users of our sites
know this. To them, the browser says "trust me,
it's probably Verisign." That's daft.
It would be much better if the browser simply said
something akin to "GeoTrust says this is the right
place, have a nice day!"
> For protecting my bank and stock accounts and my privacy, I want
> to know that the CA that issued and signed my bank's or mutual
> fund's server certificate has itself been vetted by a professional
> using recognized, objective standards.
David, I've got bad news for you.
While you were worried about some mythical man
in the middle sneaking in and stealing your
password for no good purpose (the bank/fund
would be covered against that in general), you
were probably being robbed blind by your mutual
fund.
You were sold a bill of goods. Certs do not
provide much protection in the scheme of things,
simply because there is little or no threat from
any MITM (and spoofs go right past them). The
serious, critical threats to those institutions
come from inside, and from other more simplistic
attacks. What SSL/TLS certificates *did* do,
however, was distract you, and countless other
professionals, from properly analysing the
security of the institution.
I'm hoping that Mozilla can realise this. There
is an opportunity here to restart the security
process that has lain dormant for a decade. And
a crying need - the threats today are from spoofs/
phishing, viruses, insider robbery, database hacks,
and so forth - all of which need to be addressed by
a wholistic approach to security, not by worrying
about this cert or that CA covering a threat that
doesn't exist except in the minds of cryptography
academics.
Mind you, I'm very curious - has anyone evaluated
the threat level that certs cover? Any evidence
of MITMs down your way? I've never seen any, and
I'd love to add some hard numbers to the analysis.
iang
> For protecting my bank and stock accounts and my privacy, I want
> to know that the CA that issued and signed my bank's or mutual
> fund's server certificate has itself been vetted by a professional
> using recognized, objective standards.
> We are talking about MONEY and PRIVACY. How much risk are you
> willing to take with these?
I'm inclined to agree with Ian here; while you're being distracted by
flashy audits, how many of those online shopping carts with a
commercially issued certificate have their MS SQL database hacked and
all the credit cards contained in it stolen? Shouldn't things be done to
encourage security (as he said) as a whole, rather than be bogged down
by one detail of it? This isn't just education of users, but poor
programming practices with handling financial information on servers
etc... Perhaps commercial CAs issuing certificates should take a more
proactive approach and run basic audits themselves on who they are
supposedly protecting... (Smoke and mirrors)
-Scott
So this is why identity fraud is starting to get out of hand then? :)
Obviously they aren't doing their homework either... So if banks get it
wrong so often, and I'm sure they are audited and held in a lot more
regard than any CA.
If commercial CAs want to assert the idea about how much checking they
do, how hard could it be to organise some company in bulk to do simple
audits... "Your SQL server is wide open, fix it and we'll issue you your
certificate"... It's not as if the money charged by some CAs wouldn't
cover these "incidental" expenses after all...
What good is auditing the CA if the weak link is after them?
When the infrastructure providing protection for the CA's private keys
can no longer be guaranteed, then the integrity of the CA is called into
question and it should be revoked. If the CA is revoked, any assertions
made in End Entity certificates are no longer in force and they too
should be revoked. Before decommissioning the CA, it should issue one
last CRL with a validity period past the last expiry date of any End
Entity certificate it has issued that includes all the remaining End
Entity certs that it has issued with a reason of cessationOfOperation (5).
-Scott
> I totally agree with what you are saying - and maybe there is a business
> opportunity in there.... a CA could issue 2 types of SSL certs - 1)
> based around the current model that simply asserts the identity of the
> server; 2) that additionally asserts that the company has passed some
> sort of cursory security audit surrounding the server. But then how far
> do you expand the realm of that audit?? Who cares if the server is
> locked down tighter than a banker's wallet if as you say the SQL back
> end is more open than Janet Jackson's bodice....
Call it a network audit then, obviously automated processes don't care
if they scan 1 host or 50... However with most smaller websites, the
kind that don't get patched and subsequently get infected with worms and
chew all the bandwidth on the internet, the back end is usually on the
same server as the website itself, which is more specifically what my
point was aimed at; *usually* larger firms have their own audits because
it's becoming too embarrassing for companies not to these days...
Even if they were all revoked, the CRL/OCSP needs to be hosted and
responsive till all current certificates reach their predetermined
expiry date... However, as Jean pointed out, insurance should cover the
cost of that, but how many commercial CAs are covered for that
particular outcome?
--
Nelson B
I have disabled all CA certificates on my PC except those of the
three CAs vetted by the California Secretary of State, plus one
other vetted by my ISP. I'm not deleting the others (just
disabling them) because they might be accredited or otherwise
audited in the future, and it's too hard to get new copies.
Actually, I don't expect anything beyond that. If you read the
actual "WebTrust Program for Certification Authorities", you will
see that an accredited CA verifies that the purchaser is who he
says he is and that the CA signing key is kept secure to avoid
issuing unauthorized or unverified server certificates, both of
which are very important now that such frauds as "phishing" are
growing. A third-party audit serves to verify that the CA does
indeed exercise care when issuing server certificates. Nothing in
the WebTrust process involves having the CA verify the business
practices of the owners of server certificates issued by CAs.
If the Mozilla Foundation wants to do its own independent
verification of CA practices, I would accept such a policy.
However, the Foundation's verification process should be
documented. I merely advocate third-party audits because the
process for those audits is already documented and the audits
are already being done.
Also, since third-party financial auditors have been found liable
for investor losses when their audits have been inaccurate or
inadequate, I think third-party CA audits could shift liability
away from the Mozilla Foundation. Such audits are endorsed by
California law, and the Foundation is incorporated in California.
Thus, reliance on such audits might be a good defense for the
Foundation if an accredited CA whose own certificate is contained
in the Mozilla default database happens to issue a server
certificate improperly (e.g., to a fraudulently identified server
owner). Note that the fact that Mozilla products can be obtained
for free does not eliminate the Foundation's liability if someone
suffers measurable harm from using those products (e.g., the
emptying of a bank account by a phishing fraud).
Ian Grigg wrote:
> While you were worried about some mythical man
> in the middle sneaking in and stealing your
> password for no good purpose (the bank/fund
> would be covered against that in general), you
> were probably being robbed blind by your mutual
> fund.
Those banking/fund protections may apply in some cases in the USA, but
they certainly don't always apply in other countries. If someone steals your
credit card number in France, you may still be liable. So SSL security
plays a much more important role than you think. I know this from
experience.
> I'm hoping that Mozilla can realise this. There
> is an opportunity here to restart the security
> process that has lain dormant for a decade. And
> a crying need - the threats today are from spoofs/
> phishing, viruses, insider robbery, database hacks,
> and so forth - all of which need to be addressed by
> a wholistic approach to security, not by worrying
> about this cert or that CA covering a threat that
> doesn't exist except in the minds of cryptography
> academics.
Certainly other attacks exist, but attacks on certificates are one type
of attack that is possible. I agree that indeed Mozilla should be
reviewed for all types of attacks, not just crypto/certificate attacks,
but not that we should ignore crypto/certificate attacks.
> Duane wrote:
>
>>>We are talking about MONEY and PRIVACY. How much risk are you
>>>willing to take with these?
>>
>>So I take it you remove a lot of certificates from your copy of Mozilla
>>then?
>
>
> I have disabled all CA certificates on my PC except those of the
> three CAs vetted by the California Secretary of State, plus one
> other vetted by my ISP. I'm not deleting the others (just
> disabling them) because they might be accredited or otherwise
> audited in the future, and it's too hard to get new copies.
The only way to "delete" the other built-in certs is to remove the
built-in module altogether. Otherwise, you can only remove the trust.
Which in practice should have the same effect.
Ummm, last time I checked most phishing scams didn't bother with SSL,
and in fact they even hosted them on Geocities and exploited bugs in IE.
Fact is most people don't care; they are sheeples, following
instructions in an email, which is why MyDoom had such a huge impact: it
didn't exploit any computer-related bugs, it exploited people-related
flaws. Who needs security when you can simply hack the people en masse
quite easily instead? Attackers aren't interested in one or two
credit card numbers; they want banks full of them.
What if they steal your credit card, not because of the certificate, but
because of weak security in protecting it in storage? Security is, after
all, about the weakest link; what point is there in auditing CAs if you
don't audit the hosts interacting with financial information after you
send it over the net?
> Certainly other attacks exist, but attacks on certificates are one type
> of attack that is possible. I agree that indeed Mozilla should be
> reviewed for all types of attacks, not just crypto/certificate attacks,
> but not that we should ignore crypto/certificate attacks.
And how often has it happened? That, I think you'll find, is his point:
not often, if at all. They don't need to use SSL; just look at how much
money is lost every year to 419ers.
Julien Pierre wrote:
> So SSL security
> plays a much more important role than you think. I know this from
> experience.
You have experience of someone stealing your
credit card over a connection? That's something
I'd like to hear about. It would be very useful
to apply some statistics to the situation.
>> I'm hoping that Mozilla can realise this. There
>> is an opportunity here to restart the security
>> process that has lain dormant for a decade. And
>> a crying need - the threats today are from spoofs/
>> phishing, viruses, insider robbery, database hacks,
>> and so forth - all of which need to be addressed by
>> a wholistic approach to security, not by worrying
>> about this cert or that CA covering a threat that
>> doesn't exist except in the minds of cryptography
>> academics.
>
>
> Certainly other attacks exist, but attacks on certificates are one type
> of attack that is possible. I agree that indeed Mozilla should be
> reviewed for all types of attacks, not just crypto/certificate attacks,
> but not that we should ignore crypto/certificate attacks.
How much time is spent arguing about crypto/cert
attacks? How much time is spent coding for phishing
attacks? How many of each attack occur, and how
much are people losing on each attack?
In the sector I've spent most of my time monitoring,
DGCs (digital gold currencies) I've seen maybe 50
phishing attacks. One used SSL. None were protected
by the CAs. Zero, zip, nada.
In fact, one DGC, a quite successful one, didn't
even bother to use a CA cert. The site purchased
a multi-year one about 2 years back and took over
a year to install it; meantime customers had to
"suffer" doing $1000 transactions over "unprotected"
self-signed cert-protected SSL connections.
Everybody knew this, and nothing happened. Why?
No crook in his right mind or even his wrong mind
would do an MITM. It just isn't a practical attack.
That applies as much to open, cleartext connections
as to SSL connections. So, what's the threat here?
It's possible to scale Everest, and has been done
many times by the daft and the frigid. That doesn't
mean that Nepal has to worry about a flood of refugees
from that direction....
iang
The threat I think everyone is complaining about is that CAs might
issue (intentionally or unintentionally) certificates for a
mydodgyonlineshop.com, and that they don't want to take responsibility
for deciding whether that shop/bank/financial institution is what they
thought it was, or whether it's trustworthy enough to send financial
information to.
This is yet another example of people not wanting to take responsibility
for their own actions, then suing the moment they think they can take
advantage of the situation. A good example of this mentality is the
woman who used nail glue on her daughter because she grabbed the wrong
bottle; the first thing she did was pass the buck, saying the bottles
looked the same, and then call a lawyer to try and sue someone else for
her mistake. I mean, c'mon, if everyone is so worried, go to a real damn
shop!
Frankly I'd be more worried about domain hijacking. How many large ISPs
have the ability to point bankingsite.com to another location if their
DNS server was compromised? Furthermore, how many end users would notice
the lock was missing as they entered their banking details into the site?
A person I knew doing a security audit for a bank did just that to a
major ISP here in Australia. After users went to what they thought
was the bank's login page, it just showed a simple notice: sorry, online
banking is currently down, please try again later. Within an hour he had,
I think, over 9,000 or 10,000 login details for that bank. No SSL, just a
simple DNS redirect, and he didn't even have access to the bank's name
server; he didn't need it.
Now, to put things into perspective, there would have been only about a
million users potentially affected, if that. What if it had been AOL or
other larger ISPs in the US with tens of millions of users?
Actually, the burden of responsibility here is that any company, not
just CAs, should be audited before getting any sort of financial
information; perhaps some sort of "audited by Visa", Mastercard, et al.,
so that anyone without such a symbol shouldn't be accepting financial
info of any kind. But this is a much bigger social issue, beyond the
scope of this list, or the MF in general for that matter... In any case,
unless something more is done on this front, the problem of leaked
credit card/banking information is not only a matter of time but is
going to get increasingly worse...
Duane wrote:
> Frankly I'd be more worried about domain hijacking. How many large ISPs
> have the ability to point bankingsite.com to another location if their
> DNS server was compromised? Furthermore, how many end users would notice
> the lock was missing as they entered their banking details into the site?
>
> A person I knew doing a security audit for a bank did just that to a
> major ISP here in Australia. After users went to what they thought
> was the bank's login page, it just showed a simple notice: sorry, online
> banking is currently down, please try again later. Within an hour he had,
> I think, over 9,000 or 10,000 login details for that bank. No SSL, just a
> simple DNS redirect, and he didn't even have access to the bank's name
> server; he didn't need it.
That's a good story - you should write it up!
Can you ask your mate
a) how many connections came in where
users didn't pursue it / didn't enter
their details, and
b) how many people complained / notified
/ otherwise thought that something was
fishy?
These would be very very useful statistics, and
would enable developers to better understand
the user base that we are dealing with.
iang
PS: I did have a much longer reply, but, ominously,
thunderbird decided to crash and take it away...
Duane wrote:
>> Those banking/fund protections may apply in some cases in the USA, but
>> they certainly don't always apply in other countries. If someone steals
>> your credit card number in France, you may still be liable. So SSL security
>> credit card number in France, you may still be liable. So SSL security
>> plays a much more important role than you think. I know this from
>> experience.
>
>
> What if they steal your credit card, not because of the certificate, but
> because of weak security in protecting it in storage?
You would still be liable too.
> Security is, after
> all, about the weakest link; what point is there in auditing CAs if you
> don't audit the hosts interacting with financial information after you
> send it over the net?
The point in auditing the CAs is that it's better than not auditing the
CAs at all.
>> Certainly other attacks exist, but attacks on certificates are one
>> type of attack that is possible. I agree that indeed Mozilla should
>> be reviewed for all types of attacks, not just crypto/certificate
>> attacks, but not that we should ignore crypto/certificate attacks.
>
>
> And how often has it happened? That, I think you'll find, is his point:
> not often, if at all. They don't need to use SSL; just look at how much
> money is lost every year to 419ers.
If that's his point, then I completely disagree with it. Just because
every other part of Mozilla does security reviews wrong (or not at all)
doesn't mean we also should do the same for the NSS and other security
components of Mozilla.
Ian Grigg wrote:
> > So SSL security
>
>> plays a much more important role than you think. I know this from
>> experience.
>
>
>
> You have experience of someone stealing your
> credit card over a connection? That's something
> I'd like to hear about. It would be very useful
> to apply some statistics to the situation.
No, I know from experience that if you have a bogus transaction on your
card in France, it's up to you to prove it, and the bank will not
automatically reverse it. You have to file police reports and so on.
It's very painful. I know several other people to whom it happened over
there, as well. I don't know for sure how the card numbers got
compromised, but through an insecure connection is a strong possibility,
since retail transactions in France use smartcards, not magnetic
stripes, and more than just a "number" is required to authorize any
retail transaction. The number method is only used for remote
transactions (mail order, internet).
I also know someone in the US who lost her credit card number over a
connection. She did a non-SSL transaction (with a business that didn't
have a cert) on a university network. And other students were snooping
on the connection and collecting numbers.
> How much time is spent arguing about crypto/cert
> attacks? How much time is spent coding for phishing
> attacks?
> How many of each attack occur, and how
> much are people losing on each attack?
> In the sector I've spent most of my time monitoring,
> DGCs (digital gold currencies) I've seen maybe 50
> phishing attacks. One used SSL. None were protected
> by the CAs. Zero, zip, nada.
That shows that current SSL security with trusted CAs is rarely attacked.
We should not lower the value of using SSL in this model by adding
random unaudited CA certs without distinction.
The entire discussion of CA certificate policy is about the SSL with
trusted CA case. Any other case is irrelevant to the CA policy
discussion, IMO. The other cases are relevant to browser security
preferences and defaults. And I'm all for having more security warnings
on by default. But it's another discussion.
> If that's his point, then I completely disagree with it. Just because
> every other part of Mozilla does security reviews wrong (or not at all)
> doesn't mean we also should do the same for the NSS and other security
> components of Mozilla.
The point is, if you set this bar too high, does it impact security in
a detrimental way in other areas, causing people to run sites collecting
money without any encryption at all? There are some mediums gaining a
lot of market share, such as cable internet and wireless, that are
somewhat inherently insecure by their very nature.
Also, people after credit details usually don't want one or two,
they want thousands of them, and while we're all focusing on CAs and SSL
enabled websites these details are poorly secured in other areas. Cost in
a lot of countries is a significant factor, and because of this online
shops may forgo the expense. As stated before, only approx. 0.3% of
webservers have SSL, valid or otherwise; I'm sure there are a lot of
sites out there collecting personal information at the same time.
Security should be a whole approach, not a specific focus on one part
of it that in its current form will leave people with a false sense of
security.
>> Security is, after all, about the weakest link; what point is there
>> in auditing CAs if you don't audit the hosts interacting with financial
>> information after you send it over the net?
>
>
> The point in auditing the CAs is that it's better than not auditing the
> CAs at all.
It's not an absolute. There is no point in auditing
the CAs if it achieves little or nothing, in terms of
security, and costs money. The reason that Frank wrote
his policy on these points, presumably, is that it's
not clear that audits of CAs deliver value for money.
>>> Certainly other attacks exist, but attacks on certificates are one
>>> type of attack that is possible. I agree that indeed Mozilla should
>>> be reviewed for all types of attacks, not just crypto/certificate
>>> attacks, but not that we should ignore crypto/certificate attacks.
>>
>>
>>
>> And how often has it happened? That, I think you'll find, is his
>> point: not often, if at all. They don't need to use SSL; just look at
>> how much money is lost every year to 419ers.
>
>
> If that's his point, then I completely disagree with it. Just because
> every other part of Mozilla does security reviews wrong (or not at all)
> doesn't mean we also should do the same for the NSS and other security
> components of Mozilla.
It's one of my points! Another of my points is
someone has to pay for it, even if it doesn't
happen. So, a good security view will ask, what's
the value for money here?
iang
You mean a bank *operating* in France, Julien?
If that's so, that's a disgusting thing to do.
You can call any consumers' association and denounce that.
If your bank really did that, they lied to and cheated you.
French law is very clear. You cannot be held liable for a
transaction if neither your signature nor your secret code was used.
You only need to write a letter to repudiate it, and if they still want
to charge you, the burden of proof that the merchandise was actually
sent to you is on them.
http://www.legifrance.gouv.fr/WAspad/UnArticleDeCode?code=CMONFINL.rcv&art=L132-4
Your problem may have occurred before 2001, but that law did nothing but
make explicit the case law that existed before that date.
> I also know someone in the US who lost her credit card number over a
> connection. She did a non-SSL transaction (with a business that didn't
> have a cert) on a university network.
Americans are not as protected by law as French people are, and this kind
of thing can be a much larger problem in the US.
Jean-Marc Desperrier wrote:
>> I also know someone in the US who lost her credit card number over a
>> connection. She did a non-SSL transaction (with a business that
>> didn't have a cert) on a university network.
I'd be interested in establishing that - this is
the first time I've ever heard anyone claim an
actual case of a credit card being lost over a
connection.
And, I've been looking for the last decade or so...
Is there any documentation on this? Is there any
indication that the card was in fact lost over the
network, rather than being hacked from the business's
computer? Any correlation between the thief and the
victim? Or was the card maybe lifted from the dorm
room?
iang
> You mean a bank *operating* in France, Julien?
> If that's so, that's a disgusting thing to do.
> You can call any consumers' association and denounce that.
> If your bank really did that, they lied to and cheated you.
Yes they did ...
> French law is very clear. You cannot be held liable for a
> transaction if neither your signature nor your secret code was used.
> You only need to write a letter to repudiate it, and if they still
> want to charge you, the burden of proof that the merchandise was
> actually sent to you is on them.
> http://www.legifrance.gouv.fr/WAspad/UnArticleDeCode?code=CMONFINL.rcv&art=L132-4
>
>
> Your problem may have occurred before 2001, but that law did nothing
> but make explicit the case law that existed before that date.
It happened to me before, to my mother after.
> I also know someone in the US who lost her credit card number over a
> connection. She did a non-SSL transaction (with a business that
> didn't have a cert) on a university network.
>
> Americans are not as protected by law as French people are, and this
> kind of thing can be a much larger problem in the US.
That's not actually the case.
>>> I also know someone in the US who lost her credit card number over a
>>> connection. She did a non-SSL transaction (with a business that
>>> didn't have a cert) on a university network.
>>
>
>
> I'd be interested in establishing that - this is
> the first time I've ever heard anyone claim an
> actual case of a credit card being lost over a
> connection.
Well, now you have heard one. What do you want me to do to prove it,
give you the person's name, e-mail and phone number, the name of the
university? I do have that info, but I don't believe she would want me
to share it.
Also, I have seen legitimate (but security-ignorant) businesses that ask
for credit card numbers by insecure e-mail. And very likely many
security-ignorant customers will just volunteer the information over
insecure e-mail.
I don't need to tell you how vulnerable that is to snooping by all the
ISPs and relays, or any thief in between. I don't have any stats on it,
but I bet it's a significant cause of fraud.
> And, I've been looking for the last decade or so...
Where? What was your research based on?
Did you ask the banks for their statistics on credit card fraud?
Try asking the US credit card processors why they charge a higher rate
for online transactions than for retail transactions. I don't think they
are just greedy (though they certainly are), but online fraud is a
significant problem for them, and they compensate for it with a higher rate.
However, it may be difficult to establish in many cases how exactly the
credit card numbers were compromised since there are so many different
ways. And the thieves probably don't go and brag about the most popular
methods.
More secure technology would reduce the processing rates and benefit
both merchants and consumers. The rate on smartcard transactions in
Europe is much lower than the rate for VISA/Mastercard, even retail.
Most businesses in Germany stopped accepting VISA/Mastercard because
they didn't want to pay the high processing rates. The foreigners have
to pay cash, and nationals all have smartcards. That way German businesses
don't pay the 2-3% that gets you "free" frequent flyer miles. They
either pocket the difference in profit, or have lower prices.
> Is there any documentation on this? Is there any
> indication that the card was in fact lost over the
> network, rather than being hacked from the business's
> computer? Any correlation between the thief and the
> victim? Or was the card maybe lifted form the dorm
> room?
I don't know the answers to those questions. I only know what she told
me about 4 years ago when she was in school - that someone stole her
credit card number after she made an insecure transaction on the
university network, and that a bunch of transactions that weren't hers
appeared on her statement soon after. She knew this for a fact because
it had happened to other people as well and word had gotten out that
there were people snooping on the university network (but they had not
been caught yet). I couldn't help but give her a little lecture on
security, as she was doing an internship on the Netscape/iPlanet web
server, where I was working on security. I haven't been in touch with
her since.
I think the only ones you would be able to check the story with are
those with whom she shared it personally, such as me, or the bank, which
obviously had to be notified of the fraud to reverse the charges, but
wouldn't necessarily know the exact cause of the card compromise. After
they reversed the charges, they canceled the old card account number,
opened a new one with a new number, and sent her the new card very
securely ... via US postal mail. I believe this to be very common. And
this is one of the key risks SSL tries to protect against.
Ian Grigg wrote:
>> The point in auditing the CAs is that it's better than not auditing
>> the CAs at all.
>
> It's not an absolute. There is no point in auditing
> the CAs if it achieves little or nothing, in terms of
> security, and costs money.
True, but I lost you after the "if". I think the current audits are a
useful attempt at establishing the identity behind peer certs, if not a
guarantee. They may cost, but I consider them to be very good value
for money, vs. not auditing and simply trusting any random cert without
verification, which in a consumer environment would basically make SSL
worthless.
> The reason that Frank wrote
> his policy on these points, presumably, is that it's
> not clear that audits of CAs deliver value for money.
I did not see him write that. I think he was happy to accept audited
CAs, meaning that he did attribute some value to the audits, but that
these audits were not all things to all people. I.e., for people who
have no money, they get no value from no audit ...
> It's one of my points!
> Another of my points is
> someone has to pay for it, even if it doesn't
> happen. So, a good security view will ask, what's
> the value for money here?
Neither the end-user nor the Mozilla Foundation is the one paying for
the audit of the CA certs; the CA is paying for the audit.
The end-user may be subject to more scams/fraud by doing SSL transactions
with certs issued by unaudited CAs. The value of trusting only audited
CAs is clear to me: it reduces this type of risk.
I rate this about the same as companies that get credit card information
from people talking on mobile and/or cordless phones...
They are both as prone to interception as email is, but how many details
are transferred by this method, either spoken or typed into a keypad...
Surely any form of encryption is better than in the clear?
> Julien Pierre wrote:
>
>> I don't need to tell you how vulnerable that is to snooping by all
>> the ISPs and relays, or any thief in between. I don't have any stats
>> on it, but I bet it's a significant cause of fraud.
>
>
> I rate this about the same as companies that get credit card
> information from people talking on mobile and/or cordless phones...
>
> They are both as prone to interception as email is, but how many details
> are transferred by this method, either spoken or typed into a keypad...
>
> Surely any form of encryption is better than in the clear?
>
Only if you are encrypting to the correct party, and not to a thief.
This is why we have CAs and trust.
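To make the distinction concrete, here is a minimal sketch in today's
Python (the hostname is illustrative, and this is obviously not what any
2004 browser ran) of the difference between an authenticated SSL
connection and mere encryption to whoever answers:

import socket
import ssl

HOST = "example.com"  # illustrative host

# Encryption *and* authentication: the default context checks the
# server's cert against the trusted root CA list and verifies the
# hostname, so we know who we are encrypting to.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("verified peer:", tls.getpeercert()["subject"])

# Encryption only: this context accepts any cert, so a transparent
# proxy or DNS hijacker could terminate the session unnoticed.
ctx_noauth = ssl.create_default_context()
ctx_noauth.check_hostname = False
ctx_noauth.verify_mode = ssl.CERT_NONE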
> Only if you are encrypting to the correct party, and not to a thief.
> This is why we have CAs and trust.
Ian made a point of this about a gold company using a self-signed
certificate and not having a problem. At this current point in time, if I
were a thief, there are numerous ways of getting information out of
people for not much more than the cost of a pen. So is all the fuss
about security so overrated, to the point that people resort to using
unencrypted emails and unencrypted websites just because security is
too costly or too difficult? I'd say yes. At the first site (say Google,
for example), their browser will tell the user about entering information
into unencrypted websites; the user will dismiss that dialog box because
they are only doing a simple search (by default it won't come back).
Later on they go to Orkut, which has made news lately about social
networking and all that sort of thing. It collects potentially a ton of
personal information about its users, yet none of this is deemed worthy
of encryption, not even the login. Security in the here and now is out
of hand, and out of the grasp of most people, so much so that they risk
personal details by not using it.
http://theregister.co.uk/content/archive/30324.html
(April last year)
Couple of choice quotes...
"Nine in ten (90 per cent) of office workers at London's Waterloo
Station gave away their computer password for a cheap pen, compared with
65 per cent last year."
And this article
http://theregister.co.uk/content/55/35393.html
"A third of employees quizzed write their computer passwords down to
help them remember and one in ten keeps them on a Post-It note on their
desk. More than half (55 per cent) of those quizzed base their passwords
on people's names, making them far easier to guess."
> Julien Pierre wrote:
>
>> Only if you are encrypting to the correct party, and not to a thief.
>> This is why we have CAs and trust.
>
>
> Ian made a point of this about a gold company using a self-signed
> certificate and not having a problem. At this current point in time,
> if I were a thief, there are numerous ways of getting information out
> of people for not much more than the cost of a pen. So is all the fuss
> about security so overrated, to the point that people resort to using
> unencrypted emails and unencrypted websites just because security is
> too costly or too difficult? I'd say yes. At the first site (say
> Google, for example), their browser will tell the user about entering
> information into unencrypted websites; the user will dismiss that
> dialog box because they are only doing a simple search (by default it
> won't come back).
I guess I am the only one in the world who has that option turned on;
the dialog does come up for every one of my Google searches and other
posts. And I know to watch for it when I submit sensitive data; it has
come up on a few occasions. In Mozilla, the dialog is on by default, and
if you click "continue", the dialog will come back. You have to
explicitly uncheck "alert me whenever I submit information that's not
encrypted". Perhaps we should have another dialog explaining to the user
in plain English, but with more detail, what they are really doing by
disabling this option, with a second confirmation dialog. It should stay
enabled.
While you're at it, explain to them in plain English what self-signed
certificates are... "The server you are connected to uses a self-signed
certificate; this might not be desirable for financial transactions, but
in any case your connection WILL be secure from people trying to listen
to the data sent to the server."
Well, something to that effect :) and if it's a non-default CA, perhaps
something similar, but pointing out the URL of the signing CA's website
for more information.
Nope. We need a better, less intrusive solution in terms of GUI.
This one will always be disabled by users in 99% of cases, and saying
they're stupid will not change that.
A really working system can only be an advance warning, it seems.
The "lock" icon is probably a step in the right direction, but is
unfortunately completely inadequate (it doesn't tell you at all if the
form will go to an encrypted page).
Maybe we need a lock inside the form entry?
With a different visual aspect based on the level of security?
But then we'd need a way to forbid a page from simulating the same
behaviour using dynamic HTML.
Maybe the trick would instead be to use a visual warning that the form
is unsafe; it would be a lot easier to make sure this warning cannot be
removed by dynamic HTML.
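As a rough illustration of what such a per-form check would have to do,
here is a toy Python sketch (the HTML and URLs are invented) that flags
forms whose resolved submission target is not HTTPS - the kind of test a
per-form warning indicator would be built on:

from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class InsecureFormFinder(HTMLParser):
    """Collect <form> elements whose resolved action is not https."""
    def __init__(self, page_url):
        super().__init__()
        self.page_url = page_url
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        # A form with no action submits back to the page itself.
        action = dict(attrs).get("action") or self.page_url
        target = urljoin(self.page_url, action)
        if urlparse(target).scheme != "https":
            self.insecure.append(target)

# Invented example page: this form would post in the clear.
html = '<form action="http://bank.example/login"><input name="pw"></form>'
finder = InsecureFormFinder("https://bank.example/")
finder.feed(html)
print(finder.insecure)  # targets that would submit unencrypted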
> Maybe the trick would instead be to use a visual warning that the form
> is unsafe; it would be a lot easier to make sure this warning cannot be
> removed by dynamic HTML.
Make things too annoying and webmasters will promote another product,
and users will do likewise. Again, making things too secure will only
make people look for an easier way out.
Maybe *this* is where French consumers are less protected than American
ones. An American company would never dare do such a thing, but French
companies estimate there's little risk of being caught, and even if they
are, they'll get away with saying "oh, sorry, the clerk wasn't correctly
informed", and nothing actually bad will happen to them.
Ultimately, what it comes down to is: we want checks and warnings if the
user is entering sensitive and/or financial information, and we don't
want them in other cases.
There is no automatic way for the browser to distinguish the correct
behavior when a user connects to a particular server. The option is
currently set on a global basis.
Perhaps we should have a better system of policy selection than a global
preference buried in a menu.
Maybe it could be a frame for the browser window: red means no insecure
submission check, green means check. The user would be able to open
either type of window, which would set the policy to warn for everything
that happens within it. He would have the ability to start either type
of window (start sensitive browser window? start regular browser
window?) or toggle the policy (strict / not strict), which would be
clearly marked by the frame color.
The policy could also be saved as an attribute for each URL in the
bookmarks file, so when you go to your bank, it can force the policy to
strict (and at the same time show it with the appropriate frame color).
And for things like Google you could bookmark it with the non-sensitive
policy (add bookmark would save the current policy).
Of course the visual indicator could be something other than the frame
color... But it needs to be prominently visible.
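A toy sketch of that per-bookmark policy lookup, in Python; the
hostnames, attribute format, and strict default are invented for
illustration, since nothing like this exists in the actual bookmarks
file:

from urllib.parse import urlparse

# Policy saved per bookmarked site (invented data).
bookmark_policy = {
    "www.mybank.example": "strict",   # warn on any insecure submission
    "www.google.com": "relaxed",      # stay quiet for casual searches
}

def policy_for(url, default="strict"):
    """Resolve the warning policy for a URL, falling back to strict."""
    host = urlparse(url).hostname or ""
    return bookmark_policy.get(host, default)

print(policy_for("https://www.mybank.example/login"))  # strict
print(policy_for("http://www.google.com/search"))      # relaxed
print(policy_for("http://unknown.example/"))           # strict (default)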
Julien Pierre wrote:
> Well, now you have heard one. What do you want me to do to prove it,
> give you the person's name, e-mail and phone number, the name of the
> university? I do have that info, but I don't believe she would want me
> to share it.
Of course. The 1st issue here is whether it really
was a sniffing of a credit card. (I believe you've
given the key clues below...)
The second issue is how much was lost, and then how
frequent it is. Once we establish a cost of this,
and multiply by the frequency, we can then work
out how much to spend on protecting against it.
Say there were 1000 instances every year, and we
lost $1000 each time. I'm picking numbers here;
these are the figures we should be hearing about.
That would be total losses of $1 million. So that's
how much - give or take - we want to spend to protect
against credit card losses. Across the net society.
Currently, certs are sold at about 40k per year [1].
Imagine each cert costs $1000 (include some hassle
time in there).
That makes the total cost to protect against the
loss $40 million. If we only lose $1m per year,
that's not a good deal.
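Restated as a quick back-of-the-envelope calculation in Python (all
numbers are the illustrative ones picked above, not measured figures):

# Illustrative numbers from the argument above, not measured data.
incidents_per_year = 1000   # assumed wire-sniffing losses per year
loss_per_incident = 1000    # assumed dollars lost each time
annual_loss = incidents_per_year * loss_per_incident    # $1,000,000

certs_sold_per_year = 40000  # from the survey cited in [1] below
cost_per_cert = 1000         # assumed price plus hassle time
annual_spend = certs_sold_per_year * cost_per_cert      # $40,000,000

print("losses: $%d, spend: $%d" % (annual_loss, annual_spend))
print("spend is %dx the losses" % (annual_spend // annual_loss))  # 40x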
Hence, we can conclude two things:
* we really *really* want to know how many losses
(like your friend's) there are, and
* in considering the acceptance of a new CA cert
by MF or any other, there isn't much economic
support for insisting on costly protection such
as audits.
[1] http://www.securityspace.com/s_survey/sdata/200401/certca.html
> Also, I have seen legitimate (but security-ignorant) businesses that ask
> for credit card numbers by insecure e-mail. And very likely many
> security-ignorant customers will just volunteer the information over
> insecure e-mail.
Yes, I did a very basic test using Google about
6 months back, and established there were about
10-30k sites that ask for credit cards without
using any form of SSL. This sits against the
approximately 100k sites that use SSL (these
numbers are all orders of magnitude). The
existence of significant numbers of people who
transmit CCs across HTTP or email is one reason
why I believe there to be unmeasurable numbers
of cases of snooping.
> I don't need to tell you how vulnerable that is to snooping by all the
> ISPs and relays, or any thief in between. I don't have any stats on it,
> but I bet it's a significant cause of fraud.
Nope, I doubt it is even measurable. Mind you,
it would be really nice if we could provide a
form of encryption protection to the very small
businesses that can't afford the current expensive
infrastructure. (It is for this reason that I
suggest that Apache should install out of the
box with a self-signed cert immediately generated,
and browsers should accept self-signed certs as a
valid protected session.)
>> And, I've been looking for the last decade or so...
>
>
> Where? What was your research based on?
Anecdotal sources (talking to credit card people,
looking at the various media reports, etc). No
company will reveal this formally, unfortunately,
:-/ I have challenged a lot of people in the
field on this point, and they've maintained their
silence...
> Did you ask the banks for their statistics on credit card fraud?
No, mostly the credit card people.
> Try asking the US credit card processors why they charge a higher rate
> for online transactions than for retail transactions.
Almost all fraud is one of these classes:
* insider fraud, where someone with access
to the information sells it in bulk,
* hacks of boxes, or
* false charge-backs. This latter is very
prevalent in Adult/Gaming.
Because of these factors, in general, there is
a much higher rate for online transactions:
* stolen batches of cards can be used over
the net to acquire goods,
* cards are at risk in the databases, no
matter how many security instructions
are sent out, and
* high chargeback rates in different areas.
Not because of anyone sniffing on the wire.
> I don't think they
> are just greedy (though they certainly are), but online fraud is a
> significant problem for them, and they compensate for it with a higher rate.
Right. But, they know it is not to do with
sniffing on the wire. If it was, they would
investigate where and when it was happening,
and identify which insiders were doing it.
For example, have you ever heard of a sysadmin
being arrested for sniffing credit cards? Or,
an advisory that states that someone is sniffing
cards in this or that place?
> However, it may be difficult to establish in many cases how exactly the
> credit card numbers were compromised since there are so many different
> ways. And the thieves probably don't go and brag about the most popular
> methods.
Actually, it is fairly well known how it is
all done. There are chat groups and rooms
and so forth where one can pick up the info
on how to do it, and find prices to buy, etc
(don't ask me *where*, that's not my game,
but I gather it is mostly in IRC and some of
the anon variants...).
> .... She knew this for a fact because
> it had happened to other people as well and word had gotten out that
> there were people snooping on the university network (but they had not
> been caught yet).
Ah, well, that latter part is certainly apropos.
If there were a bunch of these events happening,
then it is a plausible conclusion - looks like
this may be a case of students snooping over the
uni networks!
> ... After
> they reversed the charges, they canceled the old card account number,
> opened a new one with a new number, and sent her the new card very
> securely ... via US postal mail.
OK, so her cost was zero dollars, and some wasted
time and hassle. The bank reversed the charges
on the merchant, so the merchant was out for the
cost of the goods sent.
> I believe this to be very common. And
> this is one of the key risks SSL tries to protect against.
Well, I've been told by people who worked at
credit card companies that they've never ever
seen any proven case of credit cards being
compromised while on the wire. But, they
can document squillions of cases based on
insider fraud, cracking, etc. This is all
informal, of course, so I'm curious as to how
to establish this more scientifically.
iang
That's too big a jump. It's quite hard for a thief
to jump in the middle and change things. It's
much easier to eavesdrop. And, even that's only
easy for outsiders when on open networks such as
unswitched ethernet or 802.11b, so quite a limited
part of the net (e.g., the University experience
recently discussed).
There is now substantial experience with crypto
sans certs serving as an essential protection.
SSH has (for one) shown that in an aggressive
attack environment, opportunistic cryptography
works well. In a market experience sense, SSH
saw off the wannabes of telnet and secure-telnet
(which used SSL and certs); they're just bad
dreams to those who've used both.
iang
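For concreteness, here is a minimal Python sketch of the SSH-style
key-continuity model being described, applied to TLS certs: trust
whatever key is seen on first contact, then raise an alarm if it ever
changes. The host and cache path are illustrative:

import hashlib
import json
import os
import ssl

CACHE = os.path.expanduser("~/.tofu-cache.json")  # illustrative path

def fingerprint(host, port=443):
    # Fetch the server's cert without CA validation and hash it,
    # ssh-known-hosts style.
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

def check(host):
    known = {}
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            known = json.load(f)
    fp = fingerprint(host)
    if host not in known:
        known[host] = fp                      # first contact: remember it
        with open(CACHE, "w") as f:
            json.dump(known, f)
        return "new key accepted"
    if known[host] != fp:
        return "KEY CHANGED - possible MITM"  # the ssh-style alarm
    return "key matches"

print(check("example.com"))  # illustrative host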
Not really. Without the authentication, any proxy, including the
so-called transparent proxies, could decrypt all traffic in both
directions without the end parties detecting it.
There are entire countries whose internet access all passes through
transparent proxies, so their governments can snoop. If they could
do MITM attacks, you can bet they would. They cannot do undetected
MITM on https, today, because of cert-based authentication.
I spent some time in one of them this past year. You can bet I
was particularly careful to ensure I had uncompromised software and
an uncompromised root CA list. It would take only one compromised
root CA for them to be able to do MITM attacks on all https traffic.
Oh, and cert based secure AIM was my friend.
--
Nelson B
So, we are saying here that, because there is a small
threat of an active/compromised node doing an MITM,
there is no point in protecting against passive
eavesdropping, which is a demonstrably larger threat?
Is this logic sufficient to justify effectively
denying users any protection against eavesdropping,
within the open, non-commercial world, as befits
the open source community?
> There are entire countries whose internet access all passes through
> transparent proxies, so their governments can snoop. If they could
> do MITM attacks, you can bet they would.
Governments are trying to snoop on what, exactly?
Credit cards? Doesn't make a lot of sense.
Mail? Most web mail systems don't use any form
of HTTPS, whether it be cert'd or not. Straight
HTTP. Getting them to use any form of encryption
would be fantastic. It'd certainly be a lot easier
if they could bootstrap their webmail servers into
a self-signed cert, accepted by the browsers, and
then upgrade to an expensive CA-signed cert if the
traffic warranted it.
People worried about government snooping on email
based comms generally do in fact get found and
beaten up badly. Some of them get killed for
their efforts. They then look at various offerings,
which I guess vary in their nature. OpenPGP is
widely used by the human rights people for example,
and has been since the early 90s, it was one of the
core user groups since forever. (If anyone is keen
on *serious* threat models, check out cryptorights.org)
In this case, I'd suggest that if the HRC people
were using HTTPS, they should use a cert. But, as
the browsers that would be used are potentially
compromised, they would have to validate the browser
somehow. Hard problem.
So, they might end up having to use OpenPGP on
floppies, install onto a machine, run the program
from the floppy, and then use webmail to transmit
the mail. In which case they'd be happy with any
form of encryption because they've already protected
themselves using other means.
Right now, HTTPS is basically limited to merchants
doing, e.g., credit card stuff or similar. If the
servers and browsers weren't so serious about merchants
and other server operators being charged hard currency
for running what is basically open source software, the
notion that governments are a threat might make more
sense, simply because there would be non-commercial
usage of HTTPS. I.e., HRC. But, for now, no, sorry,
HTTPS is too expensive for the average non-profit,
so I don't see governments being interested. Correct
me if I'm wrong, please!
> They cannot do undetected
> MITM on https, today, because of cert-based authentication.
Almost all traffic is over HTTP. There is a bit
of ecommerce over HTTPS. Is this for real?
Unless you are talking about just your common or
garden criminal bureaucrat working for a government
and also doing a bit of credit card snooping on the
side. Granted, many governments employ / encourage
criminals, but they are still subject to economic
forces and would rather steal 10,000 credit cards
by hacking than sit there hoping a foreigner comes
along into a net cafe.
> I spent some time in one of them this past year. You can bet I
> was particularly careful to ensure I had uncompromised software and
> an uncompromised root CA list. It would take only one compromised
> root CA for them to be able to do MITM attacks on all https traffic.
If they compromised the browser you were using,
they could then compromise the traffic from that
machine - unless you used a CA cert list.
But, if they could compromise the browser, and/or
its root CA list, they could also compromise the
entire machine?
What was the threat model here?
> Oh, and cert based secure AIM was my friend.
Certs are great if they're available and costless.
They're just not costless. And, the decision
of the browser and the server to insist on their
usage means that it puts a lot of pressure on
things like CACert to reduce their cost, so we
can see the use of this software by the ordinary
people, rather than by the payments people.
Currently, the cost of certs is the primary
reason that HTTPS has achieved less than 1%
penetration of the market over the last decade.
Presumably AIM does it differently, by using
one cert at the server. But, if it did it
from user-to-user, as per p2p, then one could
be sure that there would be a desire to reduce
those certs down to their natural zero cost.
iang