<http://www.hecker.org/mozilla/ca-certificate-metapolicy/>:
> 11. ... We have to deal with the world as it is, not as it might be or
> as we might wish it to be
I disagree. We're here to *change* the world, aren't we? ;-)
> 17. The assessment of CAs that have not undergone independent audit
> should be based only on information that is (or will be) available to
> the general public
If the Mozilla person assessing the CA gains deep insight into the
processes of the CA, e.g. by speaking with the CA people, being
physically onsite, being virtually onsite by being allowed to access and
check their servers (using a user/guest account over SSH), and he gains
confidence in the CA as a result, and documents that publicly as far
as possible (not all of the above may be possible to publish), and bases
his decision partially on that confidence, would that satisfy your rule
or conflict with it?
> 18. ... The policy should not arbitrarily exclude CAs from
> consideration based on factors such as the CA's size, reputation,
> business practices not related to certificate issuance, profit or
> nonprofit status, geographic location, and the like.
Not so sure about that. If a company has proven to be ruthless, I don't
want their certs to be in Mozilla by default, no matter how many
independent audits they passed. Hypothetical case: If Microsoft was a
CA, would you include them, if they formally passed our criteria? If a
company has a track record of issuing bad certs or even having their
root cert stolen, that's reputation, and IMHO very relevant.
> However a CA's certificate should be removed only if there is adequate
> reason, not just in the form of increased risk
So, if I'm a CA: start nice, or even just pretend to be nice, and once I
am included, I can get sloppy or evil?
Yes, the certificate holders of a CA have a reasonable expectation not
to have the certificate pulled out from under them, and this should be
considered when removing CAs.
But I wouldn't state it *that* explicitly in the policy, or it would be
perceived as a right to stay in the root. I'd rather word it as
"consider", not "should not" or 'increased risk is not a reason'.
<http://www.hecker.org/mozilla/ca-certificate-policy/>:
> the actual criteria themselves will be included in another document,
> most likely the FAQ on policy details, not in the policy itself.
I'd say the actual criteria *are* the policy, so they should be part of it.
You already have a meta-policy, with this you make that a meta-meta-policy.
> CAs should send an email message to certif...@mozilla.org
It should be mandatory policy to announce new root CAs: after the
decision for interested users, and before the decision for people who
want to take part in the discussion.
> other entities distributing Mozilla and related software are free to
> adopt their own policies.
Good and necessary.
<http://www.hecker.org/mozilla/ca-certificate-faq/policy-details/>
> For example, a given CA might issue relatively few certificates, but
> those certificates might be issued to particular web sites used by a
> large number of Mozilla users.
I'd rather add the actual certs, not a root cert. Much smaller risk.
> Which risks and threats are you concerned about in connection with
> certificates and CAs?
The answer doesn't really show the cases we consider problematic. Most
controversial is probably: Do we trust governments? I don't. This implies
that I don't trust Verisign either. You may argue that I'm paranoid, but
many governments (incl. US and Germany, unfortunately) are snooping on
their citizens, even innocent third parties, so I'd argue that this is
some risk even for average users.
It's hard to fight governments, but there are some things which can be
done to mitigate that risk (e.g. not blindly accepting a *changed*
certificate), which could be considered, if we acknowledge that risk.
> (This check can be thought of as providing additional security against
> DNS spoofing beyond what is present in the DNS itself.)
DNS provides security? ;-)
> First, recall that we are co...
> Next, given these assumptions we need to...
I'd strike these paragraphs. They don't fit, and almost anybody
interested in root certs knows that, I'd guess.
> Also, does the CA vet the certificate field
typo?
Should there be rules wrt malware, i.e. abuse of rights when running
software signed with the cert? If an extension comes with a signature
with mozdev as root CA, I bet almost all users (actually including me)
expect that the software doesn't do anything evil.
I know that this would probably be against standard CA policy (we just
certify the name, not the holder or content), but it runs *totally*
against most user expectations.
Maybe this is a UI question, though, I don't know.
Criteria:
Given the long documents, the actual concrete criteria are very vague and
weak.
Is there a minimum requirement for *how* the server name, email address
or real name (for a person or company) must be determined? In particular,
I think (I might have misunderstood it) there are several "classes" of
certificates you can get for email: Class 1 checks only the email
address, but does not include your real name; Class 2 checks your real
name, using a passport?; there's also a Class 3. Even with GPG, it's
generally expected that you check the real name against the passport, in
person, before you publicly sign a key. The policy here doesn't mention
*any* of that. If I find a company name in a certificate, signed by any
CA, can I rest assured that the CA actually checked the records, in a
secure way, to ensure that the name in the cert matches the holder's
name? That the actual person/computer holding the cert is allowed to act
on behalf of that company?
What about CA security? How does the CA ensure that the root key is not
stolen, the server not rooted (virtually or physically), and that no
man-in-the-middle attack is taking place during sign-up? I'd
personally say it's a minimum requirement that the server is physically
safe from access and guarded, and I would *not* count datacenters of
hosting services in that; it should be guarded by people of the CA. I'm
pretty sure that my servers at home are safe, but I don't know if
anybody accesses my dedicated server at the hosting company. Similarly,
securing a server virtually is a very hard task. As a basic rule, I'd
say that the CA server should have no other functions (esp. no FTP
server, CVS server etc.), no known remote holes, etc.
> What is the process by which our request will be evaluated and a
> decision made?
The answer is nice, but as said earlier, that should be part of the
actual policy, and should include announcement of positive decisions of
inclusion.
> The module owner and other interested parties will discuss the request
> in Bugzilla (/not/ in the newsgroup and mailing list)
Sure? Bugzilla is totally inadequate for longer discussions.
No need to apologize. It's useful to have comments from more people than
just those who've commented thus far. Thanks for taking the time to read
the documents and respond.
>> 11. ... We have to deal with the world as it is, not as it might be or
>> as we might wish it to be
>
> I disagree. We're here to *change* the world, aren't we? ;-)
Yes, but... I think there's a limit to how much we can change the world
at one time. For example, as I noted in the metapolicy, many people
have proposed that Mozilla move away from a CA-based model (i.e., for
SSL-enabled servers, S/MIME email, and signed code) and move to a
SSH-like model of self-signed certificates. Regardless of whether this
would be a good idea in theory or not, IMO it's certainly not a change
we can make in the near term, for lots of reasons. (Among other things,
we don't have an agreed-upon design for the UI and underlying code, nor
do we have anyone committed to implement such a design.)
> If the Mozilla person assessing the CA gains deep insight into the
> processes of the CA, e.g. by speaking with the CA people, being
> physically onsite, being virtually onsite by being allowed to access and
> check their servers (using a user/guest account over SSH), and he gains
> confidence in the CA as a result, and documents that publicly as far
> as possible (not all of the above may be possible to publish), and bases
> his decision partially on that confidence, would that satisfy your rule
> or conflict with it?
Yes, IMO this would satisfy the requirements of the proposed policy. It
would obviously be nice to have as much documentation as possible, but
at a certain point you have to trust the evaluator's judgement. (This is
assuming of course that the evaluator has a good reputation as someone
whose judgement you can trust.)
> If a company has proven to be ruthless, I don't
> want their certs to be in Mozilla by default, no matter how many
> independent audits they passed. Hypothetical case: If Microsoft was a
> CA, would you include them, if they formally passed our criteria? If a
> company has a track record of issuing bad certs or even having their
> root cert stolen, that's reputation, and IMHO very relevant.
I wasn't proposing to ignore the CA's track record specifically as a CA,
I was referring instead to the CA's general reputation as a business. To
answer your hypothetical question: if Microsoft acted as a CA, and if
Microsoft properly did the things one would expect a CA to do, then why
should their root CA cert not be included? Whether Microsoft is a "good"
company or "bad" company in terms of other non-CA-related business
practices (for example, the sorts of things that got them in trouble
with the US and EU) is IMO of little or no relevance.
>> However a CA's certificate should be removed only if there is adequate
>> reason, not just in the form of increased risk
>
> So, if I'm a CA: start nice, or even just pretend to be nice, and once I
> am included, I can get sloppy or evil?
> Yes, the certificate holders of a CA have a reasonable expectation not
> to have the certificate pulled out from under them, and this should be
> considered when removing CAs.
>
> But I wouldn't state it *that* explicitly in the policy, or it would be
> perceived as a right to stay in the root. I'd rather word it as
> "consider", not "should not" or 'increased risk is not a reason'.
The text you're quoting is from the meta-policy, not the policy itself.
The policy itself simply states that "The Mozilla Foundation reserves
the right to discontinue including any CA certificate in Mozilla, at any
time and for any reason." The policy details FAQ expands on this a bit:
http://www.hecker.org/mozilla/ca-certificate-faq/policy-details/#discontinue
The language I included in the meta-policy is more of a statement about
the practical difficulty of removing root CA certs that are relied upon
by a lot of Mozilla users.
> <http://www.hecker.org/mozilla/ca-certificate-policy/>:
>
>> the actual criteria themselves will be included in another document,
>> most likely the FAQ on policy details, not in the policy itself.
>
> I'd say the actual criteria *are* the policy, so they should be part of it.
> You already have a meta-policy, with this you make that a meta-meta-policy.
This is really a question of what text should go where. Initially I
wanted to keep the policy document itself short, and put the more
lengthy details in a separate document. Now I'm thinking about extending
the policy document to put the most important items, namely the risks,
threats, and criteria, as a second section after the current content.
>> CAs should send an email message to certif...@mozilla.org
>
> It should be mandatory policy to announce new root CAs: after the
> decision for interested users, and before the decision for people who
> want to take part in the discussion.
That's partly specified in the policy details document:
http://www.hecker.org/mozilla/ca-certificate-faq/policy-details/#request-process
See item 5. What I forgot to include was a step to publicly announce
decisions about new CA certs being included. I think the best way to
handle this is to announce decisions first in n.p.m.crypto, and then
also to include information about this in release notes for milestones.
(I'd also like to have a page on the mozilla.org web site where we
include links to information about the root CAs whose certs are
included. This would include links to public documents referenced in the
evaluation decision, as well as links to the relevant Bugzilla bugs. Of
course, someone -- presumably me -- will have to find the time to put
this together. I'd be happy to see someone else volunteer for this task.)
> <http://www.hecker.org/mozilla/ca-certificate-faq/policy-details/>
>
>> For example, a given CA might issue relatively few certificates, but
>> those certificates might be issued to particular web sites used by a
>> large number of Mozilla users.
>
>
> I'd rather add the actual certs, not a root cert. Much smaller risk.
Someone else can comment here, but I think there's been a long-standing
policy of adding only CA certs to the default cert list, not certs for
web sites or other servers ("end entity" certs, to use the PKI jargon).
Besides, "relatively few" in this context could mean at least a few
dozen certs; that's a lot of certs to add individually and mark trusted.
>> Which risks and threats are you concerned about in connection with
>> certificates and CAs?
>
> The answer doesn't really show the cases we consider problematic. Most
> controversial is probably: Do we trust governments? I don't. This implies
> that I don't trust Verisign either. You may argue that I'm paranoid, but
> many governments (incl. US and Germany, unfortunately) are snooping on
> their citizens, even innocent third parties, so I'd argue that this is
> some risk even for average users.
>
> It's hard to fight governments, but there are some things which can be
> done to mitigate that risk (e.g. not blindly accepting a *changed*
> certificate), which could be considered, if we acknowledge that risk.
I don't really understand exactly what you mean here. What is the
specific threat that you're concerned about here, and how specifically
would you propose to mitigate it? (Also, what additional criteria would
you propose to add to the policy relating to this issue?)
>> (This check can be thought of as providing additional security against
>> DNS spoofing beyond what is present in the DNS itself.)
>
> DNS provides security? ;-)
Well, though I'm not a DNS expert by any means, I thought there was at
least some ongoing work aimed at improving DNS security. But maybe I'm
wrong.
>> First, recall that we are co...
>> Next, given these assumptions we need to...
>
> I'd strike these paragraphs. They don't fit, and almost anybody
> interested in root certs knows that, I'd guess.
I presume you're talking about the first two paragraphs in
http://www.hecker.org/mozilla/ca-certificate-faq/policy-details/#criteria
I respectfully disagree that these paragraphs are redundant (though it's
an open question whether they should be moved elsewhere). In fact, at
least in my own mind these points are absolutely critical to creating a
reasonable set of criteria.
First, in order to assess potential threats for a typical Mozilla user
you have to specify what that user's likely security settings are going
to be and how you anticipate the user will behave in security-relevant
situations; that's what the first paragraph provides.
Second, in order to determine the value of various criteria for CAs, you
have to determine how conformance to those criteria might or might not
actually mitigate risk for a typical user. My claim is that if a CA conforms
to a particular criterion, and that conformance in fact has no effect on
the behavior of Mozilla as experienced by a typical user, then you have
to question the value of including that criterion in a policy intended
to benefit typical users.
(Of course the criterion might be useful to non-typical users, in which
case we would make a judgement call as to whether it's worth including
the criterion anyway. CRLs and OCSP are good examples here: At present
they appear to provide no benefit to typical users as defined, because
they aren't enabled by default; thus one could question the usefulness
of including criteria related to CRLs and OCSP in the policy. However
CRLs and OCSP do provide benefit to non-typical users who know about
them and turn them on, and CRLs and OCSP might provide a benefit to
typical users if Mozilla were changed in the future; based on that it
would arguably make sense to include criteria in the policy related to
CRLs and OCSP.)
The second paragraph thus attempts to describe the ways Mozilla might
behave differently in various situations involving the use of CA-issued
certificates, so we can "test" the criteria to see how criteria
conformance or nonconformance might affect this behavior.
>> Also, does the CA vet the certificate field
>
> typo?
You think that "vet" is a typo? No, it's the word I meant, though I
could use another one. (Also I should change "field" to "attribute" I
think, and of course "displaed" in that sentence really is a typo.)
> Should there be rules wrt malware, i.e. abuse of rights when running
> software signed with the cert? If an extension comes with a signature
> with mozdev as root CA, I bet almost all users (actually including me)
> expect that the software doesn't do anything evil.
>
> I know that this would probably be against standard CA policy (we just
> certify the name, not the holder or content), but it runs *totally*
> against most user expectations.
You're right that most users would expect this. I suspect this arises
from the common (mis)perception that CA's are providing a sort of "Good
Housekeeping Seal of Approval" (sorry, US-centric reference) for their
customers.
I don't have any good thoughts at the moment as to whether it makes
sense to try to police CAs in the manner you suggest.
>
> Criteria:
>
> Given the long documents, the actual concrete criteria are very vague and
> weak.
Yes, you're correct that the criteria are sketchy. This is because they
are still being written. I left the criteria for last because IMO before
looking at criteria we had to settle various issues regarding the
context in which the criteria would be used. Thus, for example, we had a
lot of discussion about whether we should pay attention to legal risks,
and whether or not the criteria should address those issues. We had
other discussions about whether risks varied from country to country,
and how that might change the criteria. The metapolicy was/is my attempt
to address all these preliminary issues and prepare us to focus on the
criteria in a single specified context.
> Is there a minimum requirement for *how* the server name, email address
> or real name (for a person or company) must be determined? In particular,
> I think (I might have misunderstood it) there are several "classes" of
> certificates you can get for email: Class 1 checks only the email
> address, but does not include your real name; Class 2 checks your real
> name, using a passport?; there's also a Class 3. Even with GPG, it's
> generally expected that you check the real name against the passport, in
> person, before you publicly sign a key. The policy here doesn't mention
> *any* of that. If I find a company name in a certificate, signed by any
> CA, can I rest assured that the CA actually checked the records, in a
> secure way, to ensure that the name in the cert matches the holder's
> name? That the actual person/computer holding the cert is allowed to act
> on behalf of that company?
This is an open question that we need to discuss; see my post at
http://www.google.com/groups?selm=c57v2e%24fvi2%40ripley.netscape.com&output=gplain
in particular the three paragraphs beginning "There's another
criteria-related issue...".
> What about CA security? How does the CA ensure that the root key is not
> stolen, the server not rooted (virtually or physically), and that no
> man-in-the-middle attack is taking place during sign-up? I'd
> personally say it's a minimum requirement that the server is physically
> safe from access and guarded, and I would *not* count datacenters of
> hosting services in that; it should be guarded by people of the CA. I'm
> pretty sure that my servers at home are safe, but I don't know if
> anybody accesses my dedicated server at the hosting company. Similarly,
> securing a server virtually is a very hard task. As a basic rule, I'd
> say that the CA server should have no other functions (esp. no FTP
> server, CVS server etc.), no known remote holes, etc.
We need to discuss your and others' suggestions about criteria relating
to CA key protection. Again, I left this unspecified because I wasn't
yet sure what to include.
However I will make one general point here: I would like to have the
minimum set of criteria that will accomplish our purpose. We could spend
all day and night thinking of things we'd like CAs to do or not do, but
we need to evaluate whether those things belong in the policy, based on
cost/benefit trade-offs and other factors.
>> What is the process by which our request will be evaluated and a
>> decision made?
>
> The answer is nice, but as said earlier, that should be part of the
> actual policy, and should include announcement of positive decisions of
> inclusion.
I agree with you; see my earlier comment.
>> The module owner and other interested parties will discuss the request
>> in Bugzilla (/not/ in the newsgroup and mailing list)
>
> Sure? Bugzilla is totally inadequate for longer discussions.
Yes, but at least a bug provides a single point of reference, as opposed
to trying to follow a set of newsgroup threads to determine why
particular decisions were made. In reality longer discussions will
probably spill out to the newsgroup(s) anyway, but I'd like to keep
discussion on the core issues in bugzilla.
Frank
--
Frank Hecker
hec...@hecker.org
> For example, as I noted in the metapolicy, many people have proposed
> that Mozilla move away from a CA-based model (i.e., for SSL-enabled
> servers, S/MIME email, and signed code) and move to a SSH-like model
> of self-signed certificates.
I'm not alone? Interesting. So, maybe add a "for now" to that meta-rule?
> Regardless of whether this would be a good idea in theory or not, IMO
> it's certainly not a change we can make in the near term, for lots of
> reasons.
Agreed about the near term.
But if really so many people believe in that model, we should consider
moving to it, or at least fully supporting it. Right now it's not
possible *at all* in practice, not even for advanced users, because of
bugs like bug 211498 <http://bugzilla.mozilla.org/show_bug.cgi?id=211498>
(I was upset about the treatment of that bug). This goes in scope far
beyond the policy.
> why should [Microsoft's] root CA cert not be included?
One reason would be that I/we don't trust them, and CAs are all about trust.
> the risks, threats, and criteria, as a second section after the
> current content.
I don't think you need to put the general risks and threats in the
concrete policy, but I'd expect the verifiable criteria (CA does this
and that to verify cert holders, does this and that to ensure security,
does not do this and that, etc.) to be in there. If there is any
step-by-step process as a guideline to evaluate CAs, it should be part of
the concrete policy, too.
> See item 5. What I forgot to include was a step to publicly announce
> decisions about new CA certs being included. I think the best way to
> handle this is to announce decisions first in n.p.m.crypto.
A n.p.m.announce.security would be a good fit, but we don't have that
yet (we should have it).
> (I'd also like to have a page on the mozilla.org web site where we
> include links to information about the root CAs whose certs are
> included. This would include links to public documents referenced in
> the evaluation decision, as well as links to the relevant Bugzilla
> bugs. Of course, someone -- presumably me -- will have to find the
> time to put this together. I'd be happy to see someone else volunteer
> for this task.)
The beef of that page would be the concrete information about each CA,
and that can be done while evaluating the CAs, can't it? Then you probably
also know what information makes sense to put there (I wouldn't know that).
> Besides, "relatively few" in this context could mean at least a few
> dozen certs
Ah, I thought 1-3.
>> The answer doesn't really show the cases we consider problematic.
>> Most controversial is probably: Do we trust governments? I don't. This
>> implies that I don't trust Verisign either. You may argue that I'm
>> paranoid, but many governments (incl. US and Germany, unfortunately)
>> are snooping on their citizens, even innocent third parties, so I'd
>> argue that this is some risk even for average users.
>
> I don't really understand exactly what you mean here. What is the
> specific threat that you're concerned about here
Let's say your enemy (the one you want to protect yourself from) is not
a competing company, but the government. E.g. you are a Chinese
dissident, or like to criticise US policy, or have information that's
interesting to the government in some way, or are just friends with such
people. You are scared that the government may read certain information,
or pretend to be one of your allies to make you do things (e.g. come to
a certain place), or break into your computer and use your own webcam for
surveillance. Or maybe you have information on your computer that could
threaten your life in some way (e.g. information that could be abused,
with that abuse leading to your death), if it falls into the wrong hands.
Or maybe you just want to be sure that nobody (apart from you and the
addressee) can read your love letters, really *nobody*.
Let's say you exchange sensitive emails with Arthur Friend
<fri...@example.com> in these circumstances via S/MIME. The US
government indeed wants to read your mails. It can just create its own
certificate for Arthur Friend <fri...@example.com>, walk into the
VeriSign office, ask them to sign it, and then start to pose as A.
Friend towards you or just intercept the mails, read them, maybe alter
them, and forward them. As I understand it, PSM/NSS will currently
accept new certificates signed by trusted CAs, even if a *different*
certificate is already known for that entity (I think even if the CA
mismatches). Mozilla would show the lock / pen icon as if everything
were OK and you'd never notice that you're now talking with the US
government, unless you check the cert fingerprint, which you are
unlikely to do for every mail.
Similar threats exist for signed software.
So, in the current model, you are vulnerable to governments (actually
anybody) which control root CAs.
> and how specifically would you propose to mitigate it? (Also, what
> additional criteria would you propose to add to the policy relating to
> this issue?)
You asked about threat models, and that's mine ;-).
There's probably not much we can do in the policy (the typical SSL model
is probably (intentionally) inadequate to protect against that). Maybe
it will make a difference in practice during CA evaluation in one case
or the other, maybe we can't do anything in practice (in the policy).
But I personally think we should acknowledge that risk, be concerned
about that, and state so.
>> DNS provides security? ;-)
>
> Well, though I'm not a DNS expert by any means, I thought there was at
> least some ongoing work aimed at improving DNS security.
There was an attempt a few years ago to put crypto signatures into
DNS, but that hasn't gone forward, largely due to .com (Verisign and
ICANN) not cooperating. I don't know of any current efforts. As-is, it
seems to be relatively easy to make a certain victim go to your server
instead of mail.yahoo.com, but IIRC it involves active attacks (or of
course controlling part of the infrastructure). Given that DNS *could*
be safe in principle, it's quite unsafe.
>>> Also, does the CA vet the certificate field
>>
>> typo?
>
> You think that "vet" is a typo? No, it's the word I meant
Nod. I just didn't know that word, and my dict said 'a doctor for
animals; treating or examining an animal'. Another dict I just used says
'check thoroughly'; I guess that's what you meant. I just had a hard
time making sense of the sentence, but since that's due to my lacking
English knowledge, ignore my comment.
> Yes, you're correct that the criteria are sketchy. This is because
> they are still being written.
Ah, I thought it was supposed to be almost done.
> for example, we had a lot of discussion about whether we should pay
> attention to legal risks, and whether or not the criteria should
> address those issues.
FYI, I completely agree with the current policy on that.
> We had other discussions about whether risks varied from country to
> country, and how that might change the criteria. The metapolicy was/is
> my attempt to address all these preliminary issues and prepare us to
> focus on the criteria in a single specified context.
I see. I didn't know that these meta-criteria were so much of an issue.
> I would like to have the minimum set of criteria that will accomplish
> our purpose. We could spend all day and night thinking of things we'd
> like CAs to do or not do
I think these criteria are critical to the well-functioning of the whole
system. The criteria I mentioned are things I would expect as a *matter
of course* from all default CAs. If they are not followed, I'd consider
the whole system to be broken. I probably didn't even include everything
I consider necessary. Basically, because the whole security model stands
and falls with the CAs, I think the criteria for the CA's own security
can hardly be too strong (as long as they are still doable). And weak
verification of certs by one CA practically weakens the whole system,
because all CAs are treated equally.
Yes, the system is rendered useless if getting individual certs is
prohibitively expensive (or has unacceptable requirements bound to it).
But I don't think we should allow people to just add a CA to a server
that (to give an extreme example) already carries a large HTTP server
with lots of scripts, an FTP server, CVS server, DNS, mail, mailing lists,
shell accounts etc. Running a CA requires dedication and a lot of time;
it's IMHO nothing you can just add to your larger list of services.
>> Sure? Bugzilla is totally inadequate for longer discussions.
>
> In reality longer discussions will probably spill out to the
> newsgroup(s) anyway
Full agreement.
You are not alone! There has been substantial
work done in the last 5 years or so that has
revealed the "CA-based model" to be .. not as
advertised.
There's also been enough time, and a change in
business climate, so that people can look back
and see how it worked in practice, and how well
or badly competing models worked.
>> Regardless of whether this would be a good idea in theory or not, IMO
>> it's certainly not a change we can make in the near term, for lots of
>> reasons.
>
>
> Agreed about the near term.
There is one big problem: phishing. This is a
growing issue, related to identity fraud. It is
a breach of the browser's security model, and
costs a lot of people a lot of money.
Phishing can be addressed by some of the changes
that have been suggested. Specifically, it can
be addressed by moving from a CA-centric model
to a graduated model (CA and opportunistic) where
users have to complete the security loop.
How big phishing gets will determine how much
pressure there is to modify the model - and thus
defines "the near term." Right now, there are no
hard stats on how many losses are experienced, so
it's easy to ignore.
(I've seen one ludicrous number, and I likewise
found it easy to ignore: http://www.antiphishing.org/ )
> But if really so many people believe in that model, we should consider
> moving to it, or at least fully supporting it. Right now it's not
> possible *at all* in practice, not even for advanced users, because of
> bugs like bug 211498 <http://bugzilla.mozilla.org/show_bug.cgi?id=211498>
> (I was upset about the treatment of that bug).
It was certainly interesting reading! The assumptions
from the RFC (at the beginning of comment #11) are the
sort of thing that we now recognise as inadequate for
the construction of Internet security systems.
> This goes in scope far
> beyond the policy.
Yes, in principle. There is an open question as to
whether the policy makes sense without addressing the
weaknesses in the model. Right now, for example, if
the policy had to be "concluded" then the security
model would be left at listing phishing as a current
but unaddressed threat.
> So, in the current model, you are vulnerable to governments (actually
> anybody) which control root CAs.
Correct. This isn't going to change any time soon.
Anyone who's up against "national technical means"
had better do their research. It's a bit of a stretch
to say that Mozilla or any browser or any other app
should protect people against *all* threats.
> There's probably not much we can do in the policy (the typical SSL model
> is probably (intentionally) inadequate to protect against that). Maybe
> it will make a difference in practice during CA evaluation in one case
> or the other, maybe we can't do anything in practice (in the policy).
> But I personally think we should acknowledge that risk, be concerned
> about that, and state so.
Yes, legal or governmental control over a CA should
be listed in any threat model.
> I think these criteria are critical to the well-functioning of the whole
> system. The criteria I mentioned are things I would expect as a *matter
> of course* from all default CAs. If they are not followed, I'd consider
> the whole system to be broken. I probably didn't even include everything
> I consider necessary. Basically, because the whole security model stands
> and falls with the CAs, I think the criteria for the CA's own security
> can hardly be too strong (as long as they are still doable). And weak
> verification of certs by one CA practically weakens the whole system,
> because all CAs are treated equally.
That indeed is the crux of the debate. Strong
quality control on CAs just begs the question:
how strong? where strong?
Or, fix the model. Which begs further questions...
iang
This reply is also directed to Ben. I doubt self-signed certs without
some kind of notification will work; in fact they would leave us more
open to government MitM than under a CA model. I can't see any way to
defend against that kind of attack unless you know the person in person
and swap fingerprints. Sure, this works for PGP under a limited set of
circumstances, but then what? How do you do business with someone in
another country with no prior connection to you? Lots of people do
business with companies in the US/UK all the time; do you think they'd
take the time and effort to verify signatures, or would they just click
through warnings?
Yes, the security model for SSL is flawed, but self-signed isn't the
answer for large-scale use either; if anything, trust would become so
weak that even your ISP could walk all over the 95% of the population
that hasn't a clue about security.
--
Best regards,
Duane
http://www.cacert.org - Free Security Certificates
http://www.nodedb.com - Think globally, network locally
http://www.sydneywireless.com - Telecommunications Freedom
http://happysnapper.com.au - Sell your photos over the net!
http://e164.org - Using Enum.164 to interconnect asterisk servers
> The assumptions from the RFC (at the beginning of comment #11) are the
> sort of thing that we now recognise as inadequate for the construction
> of Internet security systems.
I think they've always been recognized as such in hardcore security
circles (I don't mean the cryptographers, but those using it). PGP was
much earlier than SSL. Old fight...
>> So, in the current model, you are vulnerable to governments (actually
>> anybody) which control root CAs.
>
> Correct. This isn't going to change any time soon.
I don't see that being the case for PGP. Nor for SSH, assuming that the
gov'ts usually don't listen to and alter the first connection. At least
from the crypto side (I do see very real threat there from the side of
security bugs, but that's another subject).
> That indeed is the crux of the debate. Strong quality control on CAs
> just begs the question: how strong? where strong?
Yup, and the current policy *completely* leaves that out; that's why I
pointed it out.
> I doubt self-signed certs without some kind of notification will work;
> in fact they would leave us more open to government MitM than under a CA
> model. I can't see any way to defend against that kind of attack
> unless you know the person in person and swap fingerprints.
I don't know what you mean with "notification".
How can you pull a man-in-the-middle attack, if the browser warns you
about (or prevents) *changed* certificates (as described)?
The initial connection definitely not that serious, but not a problem
for many cases in practice. You probably have been at your bank's site
before, so you do know their certificate. If then a malicious site wants
to pretent to be your bank, it can't (any more than with current SSL),
because the browser notices the certificate change.
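To make that concrete, here's a minimal sketch of such a "remember and
compare" check, in Python. This is *not* how PSM/NSS works today; the
store location, the hash choice and keying by hostname are all just
illustration:

    import hashlib
    import json
    import os

    STORE = os.path.expanduser("~/.known_certs.json")  # illustrative path

    def check_pinned(hostname, der_cert):
        """Return 'new', 'ok' or 'CHANGED' for the cert a host presented."""
        fingerprint = hashlib.sha256(der_cert).hexdigest()
        known = {}
        if os.path.exists(STORE):
            with open(STORE) as f:
                known = json.load(f)
        if hostname not in known:
            known[hostname] = fingerprint   # first contact: remember it
            with open(STORE, "w") as f:
                json.dump(known, f)
            return "new"
        if known[hostname] != fingerprint:
            return "CHANGED"                # warn loudly, or refuse
        return "ok"                         # same cert as last time

The point is just that a *changed* certificate is detectable locally,
with no CA involved, exactly like SSH's known_hosts file.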
If I get to know a person on the internet, it doesn't matter to me that
her real name is indeed "Mary Franklin", but that I am always talking to
the *same* person.
> Yes the security model for SSL is flawed but self-signed isn't the
> answer for large scale use either
Maybe not. But it has its uses and should not be prevented or
discouraged by the software, as it currently is.
> How can you pull off a man-in-the-middle attack, if the browser warns you
> about (or prevents) *changed* certificates (as described)?
One of Frank's earlier postings on this was about how to deal with
normal, security-dumb users. You and I would be able to deal with a
situation like this, but most users would probably end up okay'ing this
situation; same with the initial connection, they just don't get how
important security is. Most viruses lately have taken numerous steps to
infect a user, and users still keep getting infected, and you expect them
to protect themselves in this kind of situation???
They give their passwords away for a cheap pen, why would they keep
their browsing with self signed certificates secure?
In my day-to-day work I have to deal with some pretty non-security-minded
individuals, and if they had to make a choice on all this, they would
not care the slightest what you told them or how many times you told
them, as long as they could get to the site they thought they wanted to
get to...
> Maybe not. But it has its uses and should not be prevented or
> discouraged by the software, as it currently is.
It has its uses for power users, but end users are acutely clueless in
this respect... You can only make things easy for someone to a certain
extent; otherwise they get lazy and nonchalant about it, their security
goes to nil, and we all go with it, as a virus infects their system and
sends virus- and spam-infected emails to us, encrypted, bypassing all
scanners and filters... Other similar people then click OK -- I know Bob,
he's a good guy, he wouldn't send me anything harmful -- and another one
bites the dust...
This assertion as stated above is not really correct, and I thought I'd
correct it myself before someone else embarrasses me into doing so.
Instead of writing "has no effect on the behavior of Mozilla" I should
have written "has no effect on the sequence of events".
To restate this, a criterion placed on a CA should make a difference in
what actually happens in practice: If the CA conforms to the criterion
then as a consequence bad things should be prevented from happening, bad
things that would have (or plausibly might have) happened had the CA not
conformed to the criterion.
I'll expand on this point with three examples:
First, let's consider the criterion that a CA verify in some way the
domain name stored in a SSL server certificate (in the CN or wherever it
goes these days), as it might apply to a (fictitious) bill-paying
service at www.payyourbills.biz.
Suppose that an attacker is able to spoof DNS to direct
www.payyourbills.biz traffic to his own malicious site, and further
suppose that the attacker was able to find a CA who would issue him (or
anyone else) an SSL server cert with CN=www.payyourbills.biz, no
questions asked.
In this case it would make a difference for typical Mozilla users to
require that CAs do domain name verification as a criterion for
including the CAs' trusted root CA certs in Mozilla. The attacker would
either not be able to get a cert referencing www.payyourbills.biz, or
would have to get it from a CA whose cert is not included by default in
Mozilla. In either case the typical Mozilla user would be presented with
some sort of warning dialog, and could be warned away from the malicious
site.
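(To make the mechanics concrete, here is a minimal sketch, in Python, of
the two client-side checks involved -- chain validation against the
trusted roots, and matching the name in the cert against the requested
hostname. This is an illustration only, not what Mozilla/NSS actually
does internally; www.payyourbills.biz is of course the fictitious site
from the example.

    import socket
    import ssl

    def check_site(hostname, port=443):
        ctx = ssl.create_default_context()  # trusts the default root CAs
        ctx.check_hostname = True           # require CN/subjectAltName match
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                return tls.getpeercert()["subject"]

    # A cert that doesn't chain to a trusted root, or doesn't carry the
    # name www.payyourbills.biz, makes wrap_socket raise
    # ssl.SSLCertVerificationError instead of silently connecting.
    check_site("www.payyourbills.biz")

In Mozilla itself the failure surfaces as the warning dialog described
above rather than an exception, but the sequence of events is the same.)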
Second, let's consider the criterion that a CA issue CRLs on a regular
basis and/or maintain an OCSP responder. (I know I'm beating this
example to death, but it's a good example.)
Suppose that an attacker is able to get copies of the private key
corresponding to the www.payyourbills.biz SSL server cert, and again
does the DNS spoofing to redirect traffic to his own malicious version
of www.payyourbills.biz. Further suppose that the CA learns of the key
compromise and revokes the www.payyourbills.biz cert.
In this case it makes no difference to typical Mozilla users whether we
impose CRL- or OCSP-related criteria on CAs as a condition of including
their certs, since (as previously noted) CRL or OCSP checks would never
be done under the default Mozilla settings. No matter what the criteria
are, typical Mozilla users would see no warnings whatsoever when they go
to the fake www.payyourbills.biz site.
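(Again for concreteness, here is roughly what an OCSP check involves,
sketched with the third-party Python "cryptography" package -- an
illustration only, not what Mozilla/NSS does internally. The cert, its
issuer cert and the responder URL are assumed to be already in hand:

    import urllib.request

    from cryptography.x509 import ocsp
    from cryptography.hazmat.primitives import hashes, serialization

    def ocsp_status(cert, issuer, responder_url):
        # Identify the cert to the responder by issuer and serial number.
        builder = ocsp.OCSPRequestBuilder().add_certificate(
            cert, issuer, hashes.SHA1())
        der_request = builder.build().public_bytes(serialization.Encoding.DER)
        http_request = urllib.request.Request(
            responder_url,
            data=der_request,
            headers={"Content-Type": "application/ocsp-request"},
        )
        with urllib.request.urlopen(http_request) as response:
            ocsp_response = ocsp.load_der_ocsp_response(response.read())
        # GOOD, REVOKED or UNKNOWN; the revoked www.payyourbills.biz cert
        # would come back REVOKED -- but only if the client ever asks.
        return ocsp_response.certificate_status

Under the default Mozilla settings the client never asks, which is the
whole point of the example.)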
Finally, a more "in between" example: let's consider the criterion that
a CA validate the correctness of every piece of data in the certificate,
including in particular the O and OU attributes.
Suppose that an attacker sets up a new site www.payyourtaxes.biz, and
represents it as an offshoot site of www.payyourbills.biz, run by
PayYourBills Inc., the operator of the original site. Suppose also that
the attacker finds a CA who will issue him a cert with
CN=www.payyourtaxes.biz and O=PayYourBills Inc. This cert then contains
a mix of true information (the attacker does in fact operate the
www.payyourtaxes.biz site) and false information (the attacker has no
association with PayYourBills Inc.).
Now, would having the above-mentioned criterion make a difference to a
typical Mozilla user or not? If the typical user never clicks on a "View
Certificate" button in Mozilla then they will never see the O
attribute; since the certificate would check out as OK otherwise
(including passing the domain name check), the user would have no reason
to suspect that anything is amiss.
So the criterion in question fails the test of making a difference from
this point of view. However to be fair some users (typical or otherwise)
might sometime find occasion to do a "View Certificate" operation, even
though most users would not, and imposing the criterion would make a
difference for those users. It's also possible that future versions of
Mozilla might give greater visibility to the O attribute, perhaps by
displaying it in the chrome somewhere; this might make it much more
likely that a typical Mozilla user would catch what was happening.
However, given the current state of Mozilla, imposing this criterion is
not as obviously useful as imposing the criterion to verify the domain name.
This example is of course specific to SSL server certificates, and the
situation might be different for email certs or developer certs. For
example, as I understand it a typical user would always get a warning
dialog the first time they downloaded a signed code object (e.g., Java
applet or XPI) from a new developer. In that event a typical user might
be more likely to click the "View Certificate" button and inspect other
fields of the developer's certificate, so there would be more value in a
criterion to ensure that CAs validate more of the information placed in
such a certificate.
To sum up, my point in these comments is *not* that we should not have
criteria regarding how CAs operate, or that we should have very loose
criteria. My point is rather to argue that for each and every criterion
we decide to adopt, we have to justify why adopting that criterion would
actually make a difference in the case of the typical Mozilla user.
(And recall that the criteria in question here are really the criteria
we would use for our own evaluation of CAs, in cases where independent
evaluations either don't exist or we choose not to rely entirely on
them, for whatever reason.)
> most users would probably end up okay'ing this situation
Then the current SSL situation doesn't help them either, because all
that currently happens in a man-in-the-middle attack is also just a
warning dialog.
> as I understand it a typical user would always get a warning dialog
> the first time they downloaded a signed code object (e.g., Java applet
> or XPI) from a new developer. In that event a typical user might be
> more likely to click the "View Certificate" button
FYI, Mozilla currently always shows the warning before installing an
XPI, and it does (to my knowledge) show the real name of the signer
right in the dialog. (DougT implemented that some time ago, IIRC.)
> (And recall that the criteria in question here are really the criteria
> we would use for our own evaluation of CAs, in cases where independent
> evaluations either don't exist or we choose not to rely entirely on
> them, for whatever reason.)
I think we should enforce the same requirements on all CAs. If a CA has
a WebTrust attestation, but knowingly violates a rule we have for our
evaluation, it should not pass. One big exception: I don't think we can
do without VeriSign=Thawte in any case :-( , if we are to keep the
current security model.
As I said before, current SSL situation might be flawed, but self signed
isn't the answer either...
Based on some brief experimentation, I think Mozilla specifically shows
the value of the CN attribute of the code signer's certificate; whether
or not that is the signer's "real name" is presumably up to the signer
and the CA :-)
First, you're automatically (and perhaps mistakenly) extrapolating from
"I" to "we" (depending on who "we" refers to). Second, the "trust"
placed in a CA is specifically limited to practices involving cert
issuance, revocation, etc., and IMO this has little to do with whether
you approve or disapprove of a CA's general business practices,
especially practices in business areas separate from the CA itself.
> Let's say your enemy (the one you want to protect yourself from) is not
> any competing company, but the government.
<omitting example of possible threat from government>
> So, in the current model, you are vulnerable to governments (actually
> anybody) which control root CAs.
>
>> and how specifically would you propose to mitigate it? (Also, what
>> additional criteria would you propose to add to the policy relating to
>> this issue?)
>
> You asked about threat models, and that's mine ;-).
>
> There's probably not much we can do in the policy (the typical SSL model
> is probably (intentionally) inadequate to protect against that). Maybe
> it will make a difference in practice during CA evaluation in one case
> or the other, maybe we can't do anything in practice (in the policy).
> But I personally think we should acknowledge that risk, be concerned
> about that, and state so.
I agree that threats from governments should be for the most part "out
of scope" as far as the policy is concerned. However I'll consider
adding an item to the meta-policy briefly discussing this issue.
>> You think that "vet" is a typo? No, it's the word I meant
>
> Nod. I just didn't know that word, and my dict said 'a doctor for
> animals; treating or examining an animal'. Another dict I just used says
> 'check thoroughly'; I guess that's what you meant. I just had a hard
> time making sense of the sentence, but since that's due to my lacking
> English knowledge, ignore my comment.
No, I'll change the word. IMO the policy should be written so that
people don't have to look up words in the dictionary. (And I think your
"English knowledge" is pretty darn good.)
>> Yes, you're correct that the criteria are sketchy. This is because
>> they are still being written.
>
> Ah, I thought it is supposed to be almost done.
I wish :-)
> But I don't think we should allow people to just add a CA to a server
> that (to give an extreme example) already carries a large HTTP server
> with lots of scripts, an FTP server, CVS server, DNS, mail, mailing lists,
> shell accounts etc. Running a CA requires dedication and a lot of time;
> it's IMHO nothing you can just add to your larger list of services.
Agreed. In practice I don't see us approving anybody who is not running
a true CA operation (as opposed to the folks Nelson Bolyard complains
about, who install OpenSSL, issue a few certs, and decide they're a CA).
The least "formal" application we've received is that from the
CAcert.org folks; regardless of what one thinks of their operation as
compared to commercial CAs, clearly they're trying to run a dedicated CA
service as opposed to just a CA service "stuck on the side" of some
other service.
So, you agree that we should change that from a warning to an error,
no override possible?
--
Nelson B
That would be to make the same error as is already
made with the warnings - decide in advance what the
"right thing" is, when there is no user information
to help make that judgement.
iang
If you're serious about mozilla moving away from a CA model to an SSH model
then do this:
Go into mozilla, and disable ALL trust for all the CAs. All of them.
Root CAs and intermediate CAs. Every one. And leave them disabled. NO CAs.
You can trust self-signed SSL and email certs from your friends, or that
you yourself issued, but NOTHING else. And live with it like that for 90
days, without trusting any CAs at any time during those 90 days.
No Cheating. No turning on one CA so you can go to your bank, and then
turning it off again. If you cheat, your proposal loses.
Also, no trusting SSL certs for your bank or your favorite merchant,
unless you get the fingerprint for that cert from the bank or merchant
themselves, over some channel more secure than email (for this purpose,
we'll say a phone line is OK, but only if you're not a dissident in China).
Because that's what the SSH model is all about.
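(For the record, here's a minimal sketch, in Python, of computing the
fingerprint you'd be reading over that phone line; the hostname is
whatever site you're trying to verify:

    import hashlib
    import socket
    import ssl

    def sha1_fingerprint(hostname, port=443):
        # Deliberately skip CA validation: in the SSH model there is no
        # CA, so we hash whatever cert the server presents.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((hostname, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                der = tls.getpeercert(binary_form=True)
        digest = hashlib.sha1(der).hexdigest().upper()
        return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

Now do that, and the phone call, for every site you use. For 90 days.)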
Live with what you propose. No cheating. Then after 90 days, come back
and tell me you never needed any CAs to do anything. Tell us how many
calls you had to make to get fingerprints. Tell us how many times you
wanted to visit a web site but found that the cert was untrusted.
Tell us how much better that was than with CAs.
When you and numerous others can honestly tell us that's better, THEN
maybe mozilla should start to consider that approach.
In the meantime, the issue before mozilla foundation is to choose new
CAs for admission to the list of trusted CAs.
--
Nelson B
> So, you agree that we should change that from a warning to an error,
> no override possible?
No, because there are a lot of legitimate sites that don't use a
(default) CA (my test server, for one).
> So, you agree that we should change that from a warning to an error,
> no override possible?
No, that would be painful as well. But on the other hand, if you removed
the option to add webserver certificates from the pop-up you get, and
forced the user to open the file themselves (like a CA file), they
couldn't accidentally accept the certificate as easily... No need
to punish us all because not everyone understands the implications; on
the other hand, if we know better, then this shouldn't make things any
harder in practice...
Nelson, I wonder if you misunderstand what people
are saying. I know this is a tough ask; most people
who have spent much of the last decade working on
SSL security systems also take any criticism of the
whole SSL thing as an unwarranted attack.
Nobody is saying that "SSH is *the* answer."
And, nobody is saying that the CA system be disabled,
or dismantled or disposed of. Nobody is suggesting
that the lines of code that people have slaved
over are to be wasted or thrown away. Also, nobody
is saying "we should get rid of all the CAs."
There is no change to the crypto protocols mooted,
no reduction in the information available to the
user, and no reduction in their choices.
What instead people *are* saying is that CAs aren't
the only way to do security, and that SSH offers a
successful example of another way. What is being
proposed is not less CA certs, but more CA certs.
CAs are needed by some people, but not by all people.
There should be more information for users describing
the real details on the cert, rather than a bland
and tiny lock symbol that we all agree is ignored.
> Live with what you propose. No cheating. Then after 90 days, come back
> and tell me you never needed any CAs to do anything. Tell us how many
> calls you had to make to get fingerprints. Tell us how many times you
> wanted to visit a web site but found that the cert was untrusted.
> Tell us how much better that was than with CAs.
People do this all the time over HTTP.
...
> When you and numerous others can honestly tell us that's better, THEN
> maybe mozilla should start to consider that approach.
This is why I keep my ear to the ground for any data
about MITMs. There is very little. There is the
one story I heard on this group, relating to a
credit card on a student campus, and then there
are a few stories from other protocol areas (one email
story, and one other, I can't recall right now).
This allows me to claim - honestly - that MITM is
not the threat that you think it is. I can't prove
it because there is an absence of information. But,
I can show - honestly - that credit cards do not in
general get stolen over HTTP(S), because the card
issuers know of few or no cases, even though there
are large numbers of merchants that use HTTP and
email for transmission of credit card info. They
get stolen by hackers and inside thefts, because
crooks can count and they understand the risks.
Now, can you honestly tell us that phishing is not
a threat, and Mozilla shouldn't respond to it?
Phishing is a direct attack on the browser's security
model, and it breaches it very nicely. Your favourite
bank has been attacked by phishing (if it is of any
size and is in the USA, that is, you can look it up
on the antiphishing.org site). Your favourite bank
is suffering because what was supposed to be secure,
isn't.
Phishing is addressed by the sort of measures we
have proposed, which you think to be bad. So,
honestly, what do we have to do to get Mozilla to
start considering improving security to cope with
the threats that are today breaching the security
model?
> In the meantime, the issue before mozilla foundation is to choose new
> CAs for admission to the list of trusted CAs.
> I'm all for open discussions, in the newsgroups, where they belong.
We're ok, then. The point of discussing the CA
admission policy within the realms of a wider
security discussion is that without an understanding
of the "one size fits all" CA bug, any policy is
... as good as the next.
If, however, the "one size fits all" issue can be
included in the discussion, then a policy can
be created that a) makes sense in the face of
the bug, and b) aims at a suitable point when
the bug can be addressed in the security model.
(All of which can be done without disabling certs.)
iang
Why then would you continue to want or need to issue your own
server certs?
And, if you have no further need to issue your own certs, what
further objection do you then have to removing the CA addition
dialog?
Duane, I'd like to know your answer to this question, too.
You are, after all, trying to get into the trusted CA list.
If you succeed, why would you champion a UI policy that obviates
your service?
> Duane, I'd like to know your answer to this question, too.
> You are, after all, trying to get into the trusted CA list.
> If you succeed, why would you champion a UI policy that obviates
> your service?
My comments have been to the effect the system is broken but nothing
better is being suggested... I don't think self signed or pgp styled
systems would even work large scale as well as the current CA based
system. For example I don't see how some person in the middle of Africa
would be able to build up a trust network that would extend to the
middle of South America, yet based on the sheer number of people that
exist I'm sure this very situation is happening at present... Even if it
is only the Nigerians trying to get 20 of whatever your site says you
sell and are more than happy to provide credit card details for the
purchase...
My only other comment on this was the fact I wouldn't risk my life based
on PKI. Don't get me wrong here, it's not the technology I have doubts
about. Happy to use it to protect my credit card transactions, my pop3
connections hiding my password, my smtp connections and even my webmail,
however I wouldn't be happy to be a martyr for companies getting rich
from selling certificates who could be coerced into breaching trust, or
solely set up for the purpose of breaching trust if/when needed...
> What instead people *are* saying is that CAs aren't
> the only way to do security, and that SSH offers a
> successful example of another way. What is being
> proposed is not less CA certs, but more CA certs.
Yes, infinitely more. Everyone gets to be a CA. Opportunism!
Every web site and email would be saying "Trust me! Trust me!"
Users would have no idea what or whom to trust.
Let's not go there again.
I had a conversation today with a long-time open source developer about
the proposals to change mozilla to use an "SSH model" for security.
His eyes got very big in disbelief! I said "yes, people are proposing
to read cert fingerprints over the phone to authenticate public keys."
He burst out laughing!
He said "That's the theoretical SSH model! Let me tell you about the
REAL SSH model". He went on to say that people visit an SSH server,
it presents an RSA public key, and they just blindly trust it without
any effort to check its authenticity first, because that's too
inconvenient.
In other words, people use SSH in a way that provides NO authentication
whatsoever. They get encryption, and feel good about that, not
realizing how easy MITM attacks with transparent proxies and routers
really are. They get no more real security than without any encryption
at all, especially not from any governments.
(*God help* any political dissidents who fall for that! The human
tendency to skip over technical details that are not well understood
is *exactly* the reason why non-Uber-Geeks should not use SSH!)
A cert and public key should mean "you (the party relying on this cert)
have MORE assurance of the authenticity of the source (or destination)
of this connection (for SSL) or message (for SMIME) than you would have
if you didn't use cryptographic security." If a cert/key doesn't better
assure authenticity, then it is a sham, giving the (naive) user false
security, baseless peace of mind.
Now, if you believe authentication is not needed for adequate security,
if you believe we really don't need more authentication than what we
get with present insecure protocols, then why not just drop encryption
altogether? If mozilla really just wants to set people's minds at ease,
without going to any of the bother of providing real authentication, then
there's a MUCH easier and cheaper way to do that than with encryption.
It's the HTML <LOCK> tag.
You just put a <LOCK> tag somewhere in the head or body of an html
page or html email message, and it makes the lock icon appear locked!
HTML engines silently ignore any tags they don't understand. The ones
that do understand it show the lock/key/pen icon in the secure state.
No costly crypto, no certs, no CAs, not even signatures are needed.
Users get the peace of mind of seeing that icon in the secure state.
It's amazingly easy, joyously cheap. It hasn't been implemented in
mozilla yet, but it would be a LOT easier to implement that than adding
any new CAs to the list. Same results, much less cost and effort.
Before proposing any more unauthenticated crypto, ask yourself, "how is
the authentication this provides better than the <LOCK> tag proposal?"
If the answer is "it isn't" then please champion the <LOCK> tag proposal instead.
> This is why I keep my ear to the ground for any data
> about MITMs. There is very little. There is the
> one story I heard on this group, relating to a
> credit card on a student campus, and then there
> are a few stories from other protocol areas (one email
> story, and one other I can't recall right now).
>
> This allows me to claim - honestly - that MITM is
> not the threat that you think it is. I can't prove
> it because there is an absence of information.
Obviously, people who have successfully implemented MITM attacks do
not find it in their own best interest to reveal what they've done.
Victims may also wish to keep quiet. So, the information about MITM
attacks is not very public.
If someone could show you a massive on-going MITM attack on http and
https, affecting thousands of users, how would that influence your
position? Please do answer that.
> Phishing is addressed by the sort of measures we have proposed,
Everyone calls their bank (or ebay, or amazon) and asks for the
fingerprint of their self-signed cert?
The only thing I recall that *might* help is the idea that the browser
display more info from the cert to the user. This doesn't help if the
user is trusting certs from an untrustworthy source. An MITM attacker
will copy the entire subject name from the legitimate server's cert,
and could just as easily copy the issuer name also (with some tiny
modification, such as a seemingly harmless addition).
/Nelson
Nelson, I would suggest that you seriously
misunderstand what people are saying. Now, I
know you are not doing it deliberately, because
most people who have spent much of the last
decade working on SSL security systems also
take any criticism of the whole SSL thing as
an unwarranted attack.
Nobody is saying that "SSH is *the* answer." And,
nobody is saying that the CA system be disabled,
or dismantled or disposed of. Nobody is suggesting
that the lines of code that people have slaved
over are to be wasted or thrown away. Also,
nobody is saying "we don't need CAs." There is
no change to the crypto protocols, no reduction
in the information available to the user, and no
reduction in their choices.
What instead people *are* saying is that CAs
aren't the only way to do security, and that
SSH offers an example of another way. What is
being proposed is not less CA certs, but more
CA certs. CAs are needed by some people, but
not by all people. There would be more information
for users describing the real details on the cert,
rather than a bland and ignored lock.
> Let us suppose ... cacert.org gets into the trusted certs list
> Why then would you continue to want or need to issue your own
> server certs?
Maybe I wouldn't, but there'd still be a lot of sites out there. But
maybe CAcert is not acceptable for me for some reason, or I just won't
want to go through the hassle, because it's not important in my case.
Also, for example, a huge German webmailer (web.de) has its own CA
(authenticating *all* full users via paper mail and issuing all of them
an email cert) and uses that to sign the certs of the webmail servers.
Not many people import their root cert, so many people see the dialog
when they want to read webmail. Then there's Black Helicopter and
similar, not all of which will get into the Mozilla root. There are
probably more such cases out there.
> I said "yes, people are proposing to read cert fingerprints over the
> phone to authenticate public keys."
Who is proposing that? I certainly wasn't, and I hope you didn't say so.
(But, FWIW, in the case of my bank, I did exchange keys with my bank
using a paper letter. They do that with all customers for RSA
authentication with HBCI, a German banking standard for financial
applications.)
> He said "That's the theoretical SSH model! Let me tell you about the
> REAL SSH model". He went on to say that people visit an SSH server,
> it presents an RSA public key, and they just blindly trust it without
> any effort to check its authenticity first, because that's too
> inconvenient.
Right.
> In other words, people use SSH in a way that provides NO authentication
> whatsoever. They get encryption, and feel good about that, not
> realizing how easy MITM attacks with transparent proxies and routers
> really are. They get no more real security than without any encryption
> at all, especially not from any governments.
True - if, and only if, the first connection is tampered with. If the
first connection is secure, all are, because they all use the same keys.
The fingerprints are stored automatically and compared the next time.
The whole *idea* of this model is that I *do* store the keys (or
fingerprints). If I don't store them and always just use the keys the
site presents, then I indeed have no security at all.
If the first connection is tampered with and the rest isn't, the
(successful) attack is uncovered in retrospect. So, to have the
victim not notice the problem, you have to run the man-in-the-middle
attack *always*, basically *forever*. Even if the victim changes ISPs or
travels.
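To make the model concrete, here is a minimal sketch of such a
continuity check in Python (the store path and names are invented
for illustration; this is not Mozilla code):

    import hashlib, json, os

    PIN_STORE = os.path.expanduser("~/.cert_pins.json")  # invented path

    def check_continuity(host, der_cert):
        # Trust-on-first-use: remember the cert's fingerprint at first
        # contact, and flag any change on later connections.
        fingerprint = hashlib.sha1(der_cert).hexdigest()
        pins = json.load(open(PIN_STORE)) if os.path.exists(PIN_STORE) else {}
        if host not in pins:
            pins[host] = fingerprint              # first contact: store it
            json.dump(pins, open(PIN_STORE, "w"))
            return "NEW"      # no history yet; verify out of band if needed
        if pins[host] == fingerprint:
            return "SAME"     # continuity holds
        return "CHANGED"      # key changed - possible MITM, warn loudly

A "CHANGED" result is exactly the attack case described above: either
the server really changed its key, or someone is in the middle.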
> Now, if you believe authentication is not needed for adequate security,
> if you believe we really don't need more authentication than what we
> get with present insecure protocols, then why not just drop encryption
> altogether?
You have a different kind of authentication in mind. CAs are based on
the idea that (esp. in the case of S/MIME) the *real name* is what's
important. I claim that for a number of cases, it's the *continuity*
that's important.
I meet somebody online via IRC or find a company online via Google. I
gain trust in that entity. A week later, I want to come back to that
entity. That *same* entity. It would be a breach of security for me to
unknowingly talk to another entity with the same name, but I don't care
what the real name of the entity is. E.g. if I talked to Mary Franklin
last week and now talk to a Mary Franklin again, but it's a different
person this time, I have a problem. If the Mary Franklin I talked to is
really Mary Bullard in real life, it's usually not a problem for me.
Note that this is no different from domain names. I find a company
online, remember the domain name or bookmark it, and then come back to
that domain. The domain name has hardly any connection to real life.
Nevertheless, it's the key for trust on the web - even with SSL.
The case is different for ecommerce, where liability matters. When I
need the ability to sue, the real name matters, and the CA-based model
is of real utility. Also, with ecommerce, it usually doesn't matter if
the government is listening.
The two models are not mutually exclusive:
IMO, the continuity of the cert should be checked in *any* case, with or
without CAs, because of the threat model I outlined earlier. For me, it
would be a severe security bug to not do that. I don't know if Mozilla
currently does, I haven't tested it.
Additionally, you can use the proposed model for self-signed keys, but
then not trust the real name. Only when the cert is signed by a trusted
CA do you mark the real name as trusted / verified 'by CA XYZ'.
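A rough sketch of that combined rule in Python (names invented;
this describes the proposal, not existing Mozilla behaviour):

    def evaluate_cert(fingerprint, issuer, pinned_fingerprint, trusted_cas):
        # Continuity is checked for *every* cert, CA-signed or not.
        if pinned_fingerprint is not None and fingerprint != pinned_fingerprint:
            return "WARN: certificate changed since last contact"
        # The real name is marked verified only when a trusted CA vouches.
        if issuer in trusted_cas:
            return "name verified by CA %s" % issuer
        return "continuity OK, name unverified (self-signed)"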
Well, actually, let me adjust that: more CAs,
and existing large commercial CAs will sell more
CAs to a wider market. Read my rant of a few
weeks ago.
However, users would have much more idea as to
whom to trust. Right now, they only have this
binary idea, which is ludicrously bad: some
cryptographer said that some developer said
that some piece of code said that some cert
said that some CA said that .... and this site
is trusted because there is a green lock that
I always ignored.
Yeah, we can improve on that a bit. We can
move users on from where they are now.
> I had a conversation today with a long-time open source developer about
> the proposals to change mozilla to use an "SSH model" for security.
> His eyes got very big in disbelief! I said "yes, people are proposing
> to read cert fingerprints over the phone to authenticate public keys."
> He burst out laughing!
Well, I'd encourage this long-time guy to
join and debate. BTW, I'm not sure that anyone
is suggesting that cert fingerprints had to be
read over the phone to authenticate public keys.
Also, what do you care? If this idea is dumb,
then surely this will encourage merchants of
distinction to use a CA-signed cert? Isn't
that a good thing?
> He said "That's the theoretical SSH model! Let me tell you about the
> REAL SSH model". He went on to say that people visit an SSH server,
> it presents an RSA public key, and they just blindly trust it without
> any effort to check its authenticity first, because that's too
> inconvenient.
Right. It works. The fact that it doesn't
follow the no-risk model is the one thing
that has contributed to its success, over,
for example, Secure-telnet, now just a bad
dream, as far as most sysadmins are concerned.
Now it's ok to disparage either, but all you
are really doing is revealing that you don't
agree with the risk equation that system
administrators make for themselves, for their
networks, and over the open Internet, with
their money and their bosses and their bosses'
customers.
I.e., you are saying that you know better
than the sysadmins and web site operators.
So much so that you think that they have
to spend money because their notion of
security is too poor to be trusted.
[Ben B answered your other point.]
> (*God help* any political dissidents who fall for that! The human
> tendency to skip over technical details that are not well understood
> is *exactly* the reason why non-Uber-Geeks should not use SSH!)
God helps any who help themselves. Mozilla's mission
is not to save political dissidents on exactly
the same terms as they would save grandma's credit
card. They are different and incompatible missions.
If you confuse them, you will fail in one or the
other of the missions (right now, it's the dissident
who dies, and grandma doesn't get robbed).
Real security involves knowing your threat model
and creating a security model to cover a set of
the threats that you are comfortable with.
In this case, the security model for the CA certs
system in SSL / browsing, etc, does not cover
dissidents who need their lives protected. If you
are unsure of this, go ask Verisign. Or, ask
QuoVadis or CACert guys, here on this list, to
confirm whether they cover lives of dissidents
or not.
> A cert and public key should mean "you (the party relying on this cert)
> have MORE assurance of the authenticity of the source (or destination)
> of this connection (for SSL) or message (for SMIME) than you would have
> if you didn't use cryptographic security." If a cert/key doesn't better
> assure authenticity, then it is a sham, giving the (naive) user false
> security, baseless peace of mind.
No, not really. There are two considerations,
encryption and authentication. (Also, integrity,
but we can ignore that.) Encryption provides a
measure of protection, with one weakness (MITM).
Authenticated encryption avoids that weakness,
either self-signed with SOME assurance, or, CA
with MORE assurance, but at a cost. It's horses
for courses.
That isn't a sham, unless you are trying to market
certs to all and sundry.
> Now, if you believe authentication is not needed for adequate security,
> if you believe we really don't need more authentication than what we
> get with present insecure protocols, then why not just drop encryption
> altogether?
That is to confuse encryption with authentication;
they do not necessarily walk hand in hand. The
tying together of these was one of the characteristics
of the crypto theory in the nineties, and a thing
that has haunted us for some time. Still, the
protocol is in place, so we'd best use it as best
we can. What people propose is keeping the entire
protocol in place, because it's hard to change the
crypto, and instead, just use what's there in better
ways.
...
>> This is why I keep my ear to the ground for any data
>> about MITMs. There is very little. There is the
>> one story I heard on this group, relating to a
>> credit card on a student campus, and then there
>> are a few stories from other protocol areas (one email
>> story, and one other I can't recall right now).
>>
>> This allows me to claim - honestly - that MITM is
>> not the threat that you think it is. I can't prove
>> it because there is an absence of information.
>
>
> Obviously, people who have successfully implemented MITM attacks do
> not find it in their own best interest to reveal what they've done.
> Victims may also wish to keep quiet. So, the information about MITM
> attacks is not very public.
Um, you'd maybe hope that, but no, that's not
how it works. People who have successfully
implemented MITM realise that it is riskier than
the alternatives. And those attacks we do see working
cause reports to be filed. Eventually a pattern
emerges as to attacks, and statistics on the form
and success start to build up. There are no
statistics on MITM, ergo (and we have a fair degree
of confidence in this) it isn't happening to any
great extent - not enough to be worth worrying
about.
This is standard "insurance industry" style logic.
For example, you probably can't get standard insurance
against MITM because there is no way to calculate the
premium - as there is no body of statistics available.
Standard insurance process. Mind you, you would be
able to pick up snake oil insurance against MITM.
> If someone could show you a massive on-going MITM attack on http and
> https, affecting thousands of users, how would that influence your
> position? Please do answer that.
I will - if you address my answer!
Your scenario will allow us to calculate how the risk
of MITM affects our decisions. So, taking your numbers
without question, if there are thousands of users
being hit by MITM, and there are, say $200 of losses
every time they are hit, let's call that 200 x 5000
per year, then we get total losses of $1 million.
Now, if the total cost of protecting these people
is CA signed certs for all merchants, the current
situation, then we look at that cost and compare.
(We can *NOW* look at that number: http://www.securityspace.com/
http://www.securityspace.com/s_survey/sdata/200403/certca.html )
There are 43,430 valid certs out there. Assume
an approximate cost (including time) of $1000 per
cert. Cost to protect against the MITM: 43 million
dollars.
So, I conclude, using your numbers, that it is not
worth it.
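Spelling out that arithmetic (all of the figures are the assumed
ones above):

    mitm_victims  = 5000      # assumed users hit per year
    loss_per_hit  = 200       # assumed dollars lost per incident
    total_losses  = mitm_victims * loss_per_hit    # $1,000,000 per year

    valid_certs   = 43430     # securityspace.com survey figure
    cost_per_cert = 1000      # assumed cost, including time
    total_cost    = valid_certs * cost_per_cert    # $43,430,000

    print(total_cost > total_losses)  # True: the cure costs ~43x the disease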
What does this mean? Not disabling certs, or stopping
SSL or ditching the CAs. It implies that the decision
to force all merchants to adopt certificates is wrong,
costly, wasteful, bone-headed, and should be dropped.
That is, forcing them to adopt certificates costs them
more money than it saves.
The HTTPS policy has created a "tax" or "transfer"
that delivers practically no productive benefit to the
merchants, at some annual cost.
Now, all this means is: don't force them to use CA-
signed certs. Let them start with self-signed (and
more CA-signed certs will result). Let them choose,
because they know more about their risks, and all risks,
than you do.
>> Phishing is addressed by the sort of measures we have proposed,
>
>
> Everyone calls their bank (or ebay, or amazon) and asks for the
> fingerprint of their self-signed cert?
No, I'm sorry that you've picked up this FUD from
somewhere. Here's how it is:
Those merchants that want to use CA-signed certs,
should do so. Those servers that feel that the
self-signed cert is adequate to their needs, should
do so. Those users that find a self-signed cert to
be insufficient, should choose another provider.
Those that don't care, don't use SSL.
In practice, nobody's going to do much in the way
of phoning to get the fingerprint, that doesn't make
any sort of economic sense. The fact that the SSL
self-signed cert *can* be authenticated via
fingerprint and phone is more of an "entry-level" technique
for a startup business, or for a club or association
that simply doesn't want to generate additional costs
for no real benefit, or a test website or any number
of want-to ideas.
> The only thing I recall that *might* help is the idea that the browser
> display more info from the cert to the user. This doesn't help if the
> user is trusting certs from an untrustworthy source. An MITM attacker
> will copy the entire subject name from the legitimate server's cert,
> and could just as easily copy the issuer name also (with some tiny
> modification, such as a seemingly harmless addition).
Of course it helps. It alerts the user to the fact
that it is a self-signed cert, and allows them to
decide how trustworthy that cert is! I'm sure the UI
people will be able to pick a suitably bland colour
to show their displeasure...
It is also how the merchant shows that it is caring
about security - bigger merchants will choose a branded
CA cert. (BTW, as far as I know, all CAs are unified
in wanting this branding capability.)
The other thing that should be added is that the
browser should cache *all* certs. This is needed
for the phishing protection (which breaches the CA
signed cert protection), and also helps the
self-signed cert protection.
iang
> Let us suppose for this discussion that, within the next 90 days,
> cacert.org gets into the trusted certs list, and consequently,
> (If I understand what cacert.org is offering) you can then get a
> legitimate SSL server cert from a trusted CA for free.
>
> Why then would you continue to want or need to issue your own
> server certs?
Cost. The CAcert certs mentioned above are free
only in terms of dollars sent to Duane. There
is still the cost of setting up
the server. What the server should do is start
up and generate its own self-signed cert at
install time, so it's up and running straight
away. That's free to the server operator, or
at least indistinguishable in cost from the
installation of the SSL server in the first place.
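As a sketch of that, an install script could do something along
these lines with the stock openssl tool (paths and key size are
illustrative):

    import socket, subprocess

    def make_self_signed(keyfile="server-key.pem",
                         certfile="server-cert.pem"):
        # Generate a throwaway self-signed cert at install time, so
        # the server is up and running with SSL straight away.
        host = socket.getfqdn()
        subprocess.check_call([
            "openssl", "req", "-x509", "-newkey", "rsa:2048", "-nodes",
            "-keyout", keyfile, "-out", certfile, "-days", "365",
            "-subj", "/CN=%s" % host,
        ])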
However, getting a cert from CAcert or from
anywhere will always require some degree of mucking
around in protocol, and also in selection. As
browsers start to show that info to the users,
that will also create a draw up from self-signed
to "cheap and free" CAcerts, and then up to ones
with real due diligence applied.
>> Duane, I'd like to know your answer to this question, too.
>> You are, after all, trying to get into the trusted CA list.
>> If you succeed, why would you champion a UI policy that obviates
>> your service?
>
>
> My comments have been to the effect the system is broken but nothing
> better is being suggested... I don't think self signed or pgp styled
> systems would even work large scale as well as the current CA based
> system. For example I don't see how some person in the middle of Africa
> would be able to build up a trust network that would extend to the
> middle of South America, yet based on the sheer number of people that
> exist I'm sure this very situation is happening at present... Even if it
> is only the Nigerians trying to get 20 of whatever your site says you
> sell and are more than happy to provide credit card details for the
> purchase...
I'm not sure it is easy to graft WoT onto
SSL. For a start, x.509 doesn't support
multiple sigs on the certs. Secondly, there
is way too much assumption and infrastructure
out there assuming the one model. Thirdly,
WoT assumes a "web" and that's not really in
place with www, in the WoT sense, for browsing
(although it might be for S/MIME). Still,
there are a bunch of proposals out there...
> My only other comment on this was the fact I wouldn't risk my life based
> on PKI. Don't get me wrong here, it's not the technology I have doubts
> about. Happy to use it to protect my credit card transactions, my pop3
> connections hiding my password, my smtp connections and even my webmail,
> however I wouldn't be happy to be a martyr for companies getting rich
> from selling certificates who could be coerced into breaching trust, or
> solely set up for the purpose of breaching trust if/when needed...
I've not heard of the dissident world trusting
certificates with lives; they mostly are rather
worried about the security model for browsing
and assume it has to be reworked. For example,
they have to assume that the machines - both
ends - have been tampered with, unless they
can prove otherwise. Now, as it happens, this
assumption completely breaks the SSL security
model, as documented, because the assumption
is "the network is untrusted, and the nodes
are trusted." But, the dissident guys (e.g.,
cryptorights.org) don't really care if the
security work was duff, they are more keen on
using what they can at a reasonable cost.
iang
> Mozilla's mission is not to save political dissidents on exactly the
> same terms as they would save grandma's credit card. They are
> different and incompatible missions. If you confuse them, you will
> fail in one or the other of the missions (right now, it's the
> dissident who dies, and grandma doesn't get robbed).
So, you are proposing to ignore the dissident threat model? The
consequence would be to go on as we have been, but you seem to be arguing
against that.
> Encryption provides a measure of protection, with one weakness (MITM).
If there's no man in the middle, why encryption?
> There are no statistics on MITM, ergo (and we have a fair degree of
> confidence in this) it isn't happening to any great extent - not
> enough to be worth worrying about.
No. There are good reasons to believe that the NSA processes *all*
Internet traffic it can get its hands on - or at least used to, and
still tries to. Encrypted traffic is way harder than cleartext, but
we're talking about just such a case of how that could be circumvented.
And as for publicity:
NSA = No Such Agency ;-)
In short, yes. The problem is to construct a
security system that does the greatest good for
the benefit of as many as we can determine. Now,
skipping for now how we measure or determine
that greatest good, it does seem that any
generic browser manufacturer is going to be
closer to the goal when thinking about grandma
and her finances, rather than the dissident and
his life.
A more analytical answer would construct the
number of each class of users (a million grandmas
and a thousand dissidents) and assign values to
their losses. E.g., $200 per grandma and $2m
per dissident. And then multiply it out to
decide what we are deciding to cover. Then,
balance that with the cost of constructing the
solutions. (Unfortunately, this is a slam dunk,
as you cannot protect the dissident; he is
subject to so many attacks outside the control
of the browser distributor that there isn't much
point in including him specifically.)
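To make that multiplication concrete (every number here is the
illustrative one above, nothing more):

    grandmas           = 1000000
    loss_per_grandma   = 200        # dollars, assumed
    dissidents         = 1000
    loss_per_dissident = 2000000    # dollars, assumed

    print(grandmas * loss_per_grandma)      # $200,000,000 exposure
    print(dissidents * loss_per_dissident)  # $2,000,000,000 exposure
    # That exposure then gets balanced against the cost and feasibility
    # of protecting each class - which is where the dissident case
    # falls over, as noted above.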
Another factor is that there are organisations
(cryptorights.org is one I know) that exist to
serve the small, high work factor market of the
dissident. Those guys know security as it is
really done - to save lives - rather than the
sort of vanilla security model that browsing
implies, one flavour for all people.
(For example, unlike all the net cryptographers
I have ever spoken to, the cryptorights guys have
actual experience of active attacks like MITMs.)
>> Encryption provides a measure of protection, with one weakness (MITM).
>
>
> If there's no man in the middle, why encryption?
Because there is the eavesdropper. The MITM
is a more sophisticated form of attack - it
involves an active component; it has both
the ability to inject packets, and the side-
effect of leaving tracks. That raises the costs
significantly, and reduces the attack commensurately.
Not so the eavesdropper. They only listen to
the packets, and the net is not "quantum" in
that it cannot detect these copies. Simple
encryption without authentication defeats the
simple eavesdropper completely, and forces her
(Eve) to move to MITMing. Which, in the process,
eliminates most people's interest because they
only do things that are costless.
>> There are no statistics on MITM, ergo (and we have a fair degree of
>> confidence in this) it isn't happening to any great extent - not
>> enough to be worth worrying about.
>
>
> No. There are good reasons to believe that the NSA processes *all*
> Internet traffic it can get its hands on or used to and still tries to.
> Encrypted traffic is way harder than cleartext, but we're talking about
> just such a case how that could be circumvented.
No, the NSA is an eavesdropper, in general. It
does not do MITMing, again, in general. The NSA
is (as far as we know) defeated completely by
unauthenticated crypto techniques such as self-
signed certs, ADH, and shared secret encryption.
Until it decides to take special action against
a particular target, any of a half dozen basic
crypto techniques will give complete privacy. As
the cost of the NSA taking special action against
a target is quite large - in effort terms and in
risk terms - and as the coverage is only one target
each time, this is done infrequently.
Now, examine the numbers. The NSA (and partners *)
scoop up the traffic of 100 million or so people
(all who do international calls, much of the net,
etc etc).
These people could be protected - completely -
by ADH or self-signed. The NSA could possibly
attack a thousand people directly (with MITMs,
or other techniques). These people would not be
protected by ADH/SS.
So, the policy is to protect the thousand, and
not protect the 100 million. And pay for it.
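Putting rough numbers on that trade-off (both figures assumed, as
above):

    eavesdropped = 100 * 1000 * 1000  # people scooped up passively
    mitm_targets = 1000               # plausible direct-attack capacity

    # Opportunistic crypto (ADH / self-signed) would protect the
    # passive mass; only the directly targeted thousand still need
    # stronger key verification.
    print(float(mitm_targets) / eavesdropped)  # 1e-05 of the population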
Mind you, this analysis only pertains if the
NSA is the threat. If you are someone who doesn't
care if the NSA & friends read your traffic, then
it doesn't apply.
> And as for publicity: NSA = No Such Agency ;-)
They are at the moment advertising in the open
market for jobs, so they are quite open these days.
iang
* What we are talking about falls under the
rubric of Echelon, which is a system within the
UKUSA partnership to scoop up as many forms of
the open channels as possible: phone, net, etc.
The UKUSA partnership includes the UK, US, AU,
CA, NZ countries.
> the dissident and his life....
> E.g., ... $2m per dissident.
Eh, sorry, $2m for a life?
You could argue that dissidents should use PGP. But then, as I outlined,
we are not just talking about dissidents, but about husband<->wife or
anybody having anything *really* private in email, and that's almost
anybody or at least 20% of the userbase. The value of privacy can't be
expressed in dollars.
> The NSA is (as far as we know) defeated completely by unauthenticated
> crypto techniques such as self-signed certs
haha.
> Until it decides to take special action against a particular target
...which is when things get interesting...
> any of a half dozen basic crypto techniques will give complete privacy.
"As long as nobody cares to listen, you have complete privacy." Surely
true, but not very satisfactory.
> Echelon
BTW: A good source for information about that is the special in
Telepolis; I think it was actually the magazine which brought all this
into the public eye, esp. Christiane Schulzki Haddouti. Unfortunately
most of it
is in German.
<http://translate.google.com/translate?hl=en&sl=de&u=http://www.heise.de/tp/deutsch/special/ech/>
Which part are you questioning? My convenient
number, my low/high estimate, or my temerity
in placing a value on a life? For the first
two, pick your own number. For the last, this
is standard risk analysis - convert all assets
to dollar values and compare.
> You could argue that dissidents should use PGP.
In the context of this maillist, I'd not argue anything
about dissidents, other than they are special customers
with special needs, and they should look after themselves,
because a generic offering like Mozilla products,
as distributed in standard form, is unlikely to
be good enough. This is no problem, they generally
know this, because they also worked out long ago
that their life is on the line, and they have to
take care.
(Outside this maillist - they need hardware as
well, and custom installs.)
> But then, as I outlined,
> we are not just talking about dissidents, but about husband<->wife or
> anybody having anything *really* private in email, and that's almost
> anybody or at least 20% of the userbase. The value of privacy can't be
> expressed in dollars.
Well, that would be *another* group of users.
And herein lies one of the big difficulties
in the security model for browsing: the
group of users includes lots of diverse
users and lots of diverse needs. Also, we
(being the people on the build side) don't
have enough of a clue to be able to say "we
know what our users are doing, and we know
what they want."
Husband/wife? That can be covered with 40 bit
ADH, in general. Lawyers sending client-
confidential docs back and forth over the open net:
that's something I'd prefer myself to see done
in at least DES (56bit) or better. Just IMHO.
All of these "anybodys" that we can list will
be substantially benefitted by something as
boring as 40bit with ADH or "any old key
swapping" protocol. The fact that they can
get better for no cost is a bonus of some
value. The fact that they can get something
even better but pay money for it is something
only they should be asked to decide.
>> The NSA is (as far as we know) defeated completely by unauthenticated
>> crypto techniques such as self-signed certs
>
>
> haha.
In eavesdropping terms, I mean. That's the
current understanding. Now, curiously, even
if it is not true, we are still protected,
as they would save any secret tricks (such as
being able to breach ADH or RSA) they know
for real high value targets. The last thing
they want is open speculation of how they got
the info, because they don't want real enemies
to figure it out. This is the Enigma dilemma.
>> Until it decides to take special action against a particular target
>
>
> ...which is when things get interesting...
Right. Then we have the competing key distribution
models (CA, opportunistic, WoT, other media, etc).
But this is a really small proportion of the total
user base.
>> any of a half dozen basic crypto techniques will give complete privacy.
>
>
> "As long as nobody cares to listen, you have complete privacy." Surely
> true, but not very satisfactory.
Well, true, and that's what the open internet is.
Nobody much cares to listen, and for the most part,
the whole world gets by without worrying about it.
Same for the telephone.
Then you get the passive listeners.
And finally, you get the active attackers.
>> Echelon
>
>
> BTW: A good source for information about that is the special in
> Telepolis, I think it was actually the magazine which brought all this
> into public, esp. Christiane Schulzki Haddouti. Unfortunately most of it
> is in German.
> <http://translate.google.com/translate?hl=en&sl=de&u=http://www.heise.de/tp/deutsch/special/ech/>
We owe a debt of gratitude to the Germans for
taking up the flame against echelon with gpg!
iang
Ummm, I thought mail order brides were only $5000 :) Jokes aside, I WOULD
NEVER trust PKI for anything like that, simply because I don't trust
most/all CAs regardless of what auditing they've had. At the end of the day
they're in it for the money: not for security, not for the benefit of the
human race, but simply for what they can milk from commercial enterprise.
I have no doubt in my mind that Commercial CAs are highly susceptible to
coercion from governments, not to mention the fact that certain
governments no doubt had a pretty good hand in setting up some of the
CAs, and some governments have choke points on their international
traffic links...
Judging from the arrest the other week, the NSA DOES scan emails in the
clear regardless what the source and destination are...
Basically the horse has bolted on that one, we need to acknowledge the
threat and realise the only solution to this in situations of people vs
governments is using some means other than PKI...
Mind you if everyone used encryption it would make it a tad difficult to
scan every single piece of email, so in effect the governments would be
forced to restrict actions against only those they suspect rather than
the public at large... Hell, if spam was encrypted it could be annoying
us to death and doing us a favour at the same time!
> You could argue that dissidents should use PGP. But then, as I outlined,
> we are not just talking about dissidents, but about husband<->wife or
> anybody having anything *really* private in email, and that's almost
> anybody or at least 20% of the userbase. The value of privacy can't be
> expressed in dollars.
husband<->wife, I'd say in 99% of cases neither of them could even
decode rot13... They'd see garbage and think it was corruption...
> What the server should do is start
> up and generate its own self-signed cert at
> install time, so it's up and running straight
> away. That's free to the server operator, or
> at least indistinguishable in cost from the
> installation of the SSL server in the first place.
Which is DWARFED by the cost of setting up the clients!
You set up the server once. If you set it up with a cert
that is already recognized by the client, the client setup cost
to use that server is zero. If you set the server up with a
self-signed cert, every client must be set up to use it.
That absolutely dwarfs the few extra seconds required at the
time the server is set up.
> I'm not sure it is easy to graft WoT onto
> SSL. For a start, x.509 doesn't support
> multiple sigs on the certs.
I *think* Duane's model is that he will issue a cert when some
number of PGP signatures have appeared on a PGP key on some PGP
server. Duane please correct that if that statement is mistaken.
--
Nelson B
> Ben Bucksch wrote:
>
>> Eh, sorry, $2m for a life?
>
> Which part are you questioning? My convenient number, my low/high
> estimate, or my temerity in placing a value on a life?
You are even asking? The last.
> For the last, this is standard risk analysis - convert all assets to
> dollar values and compare.
Fine. Then "standard risk analysis" is inherently flawed, because
obviously (at least to me) not all values can be expressed in dollars. I
mean, even *MasterCard* got *that* ;-P.
> Husband/wife? That can be covered with 40 bit ADH, in general.
Huh? I certainly don't want my love letters to be read by *anyone*,
*ever*, apart from the recipient and me. That means not even the NSA or
my little brother, if they really try. In fact, I expect that as a basic
right.
(I have neither wife nor little brother atm, as it happens ;-) .)
> I have no doubt in my mind that Commercial CAs are highly susceptible
> to coercion from governments
> Basically the horse has bolted on that one, we need to acknowledge the
> threat and realise the only solution to this in situations of people
> vs governments is using some means other then PKI...
That's all I'm saying, basically :-).
But, assuming that Mozilla warns me when I get an email from a known
recipient (with a known certificate), but with a new certificate, and
I'd optionally check the fingerprint when needed, S/MIME could work,
right? Or am I missing something?
> husband<->wife, I'd say in 99% of cases neither of them could even
> decode rot13... They'd see garbage and think it was corruption...
hm? That's a question of UI, not security.
>> For the last, this is standard risk analysis - convert all assets to
>> dollar values and compare.
>
>
> Fine. Then "standard risk analysis" is inherently flawed, because
> obviously (at least to me) not all values can be expressed in dollars. I
> mean, even *MasterCard* got *that* ;-P.
It might sound as though it makes no intuitive sense
to express all values in dollars. Yet, in practically
all large dollar decision questions that might raise
this question, it is standard practice to express the
dollar value of a life.
The reason for this is quite simple: if not done
this way, there is no other way to justifiably decide
between opposing choices. E.g., which life to save.
For example, the roads management people happen to
have a value. I recall it being $186,000, about 20
years back. This allows them to decide whether to
expend money to correct or improve a road. First
they calculate, statistically, how many lives the
planned improvement will save, and also, the cost.
Divide those two numbers, and if the answer is better
than $186,000 per life saved, then they make the
change.
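As a toy version of that decision rule (the threshold is the figure
recalled above; the example costs are invented):

    VALUE_OF_LIFE = 186000.0  # dollars per statistical life

    def worth_doing(improvement_cost, lives_saved):
        # Approve the improvement if the cost per life saved comes
        # in under the standard value of a statistical life.
        return improvement_cost / lives_saved < VALUE_OF_LIFE

    print(worth_doing(500000, 4))  # True:  $125,000 per life saved
    print(worth_doing(500000, 2))  # False: $250,000 per life saved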
The Military have these numbers. Hospitals have
these numbers. Life insurance sells these numbers.
Anywhere where there are serious calculations made
(serious constraints meet serious demands) these
numbers exist. Certainly, this is standard in
security engineering.
My suggestion here is that Mozilla simply bypasses
this whole question by doing what practically all
retail and open source software organisations do:
explicitly declare that they are not in the life-
saving game (it's part of the disclaimers).
>> Husband/wife? That can be covered with 40 bit ADH, in general.
>
>
> Huh? I certainly don't want my love letters to be read by *anyone*,
> *ever*, apart from the recipient and me. That means not even the NSA or
> my little brother, if they really try. In fact, I expect that as a basic
> right.
I approve!
You might then be a great supporter of our proposals
to, for example, permit enhanced self-signed cert
browsing. This would mean, for example, there would
be many more free webmail interfaces that used certs
to protect those very sensitive love letters. You'd
also be a great fan of having all those chat rooms
where you trade personal information such as pre-
divorce advice over the open net converted to using
some form of easy crypto.
Encouraging those servers to use self-signed certs
would be a great boon to privacy. Alternatively, if
we subscribe to conspiracy theories and believe that
the NSA has already acquired all the root copies it
needs, then we really want some alternatives...
> (I have neither wife nor little brother atm, as it happens ;-) .)
That might be the safest course :)
iang
> I *think* Duane's model is that he will issue a cert when some
> number of PGP signatures have appeared on a PGP key on some PGP
> server. Duane please correct that if that statement is mistaken.
Nope, basically the website "acts" as the web of trust. People already
trusted in the system are able to issue trust to others; usually it
takes 3 or more existing trusted parties to enable another to become
trusted within the system. So it's still more like PKI, in the sense
that you need an existing trusted party, rather than simply "joe is my
friend and we cross-sign each other's keys". Mind you, if you become
trusted you can cross-sign the keys of the 3 people that trusted you...
Least that's the gist of it...
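A rough sketch of that threshold rule (purely illustrative - not
CAcert's actual code):

    TRUST_THRESHOLD = 3  # existing trusted parties needed, as described

    def becomes_trusted(vouchers, trusted_set):
        # A candidate joins the trusted set once enough already-trusted
        # members have vouched for (identified) them.
        backing = [v for v in vouchers if v in trusted_set]
        return len(backing) >= TRUST_THRESHOLD

    print(becomes_trusted(["a", "b", "c"], {"a", "b", "c"}))  # True
    print(becomes_trusted(["a", "m"], {"a", "b"}))            # False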
> But, assuming that Mozilla warns me when I get an email from a known
> recipient (with a known certificate), but with a new certificate, and
> I'd optionally check the fingerprint when needed, S/MIME could work,
> right? Or am I missing something?
Bingo. Most CAs only require you to supply a CSR, not the private key.
In any case, if you check the fingerprints of certificates, trusted or
not, and verify they are who they say they are, you should be ok.
***BUT*** the thing is, how many people actually do that? Not many. They
look at the icon; it's locked, there were no warning messages, all is
fine, right?
They've been given just enough rope to hang themselves with...
> hm? That's a question of UI, not security.
If you give a person all the tools and a jack and a spare tyre, why
does roadside assistance still get called out? :)
Ian Grigg wrote:
> Your scenario will allow us to calculate how the risk
> of MITM affects our decisions. So, taking your numbers
Not my numbers.
> without question, if there are thousands of users
> being hit by MITM, and there are, say $200 of losses
> every time they are hit, let's call that 200 x 5000
> per year, then we get total losses of $1 million.
>
> Now, if the total cost of protecting these people
> is CA signed certs for all merchants, the current
> situation, then we look at that cost and compare.
> There are 43,430 valid certs out there. Assume
> an approximate cost (including time) of $1000 per
> cert. Cost to protect against the MITM: 43 million
> dollars.
>
> So, I conclude, using your numbers, that it is not
> worth it.
None of the numbers in the text quoted above came from me.
I doubt most of them are even approximately correct.
> http://www.securityspace.com/s_survey/sdata/200403/certca.html
That's very interesting. I don't know how trustworthy the numbers are, though.
If anyone else has numbers about how widespread the root CAs are used, I
think that would be very helpful in the consideration of which CAs to add.
> Mind you, if you become trusted you can cross-sign the keys of the 3
> people that trusted you...
Maybe I misunderstood you (middle of the night here), but isn't that a
problem? Evil guy gets fake cert, trusts you with his cert, you get
trusted, you sign his cert, the fake cert gets trusted?
> Maybe I misunderstood you (middle of the night here), but isn't that a
> problem? Evil guy gets fake cert, trusts you with his cert, you get
> trusted, you sign his cert, the fake cert gets trusted?
The trust programme only interacts with the certificate process as far
as identifying people in the first place; the evil guy would have to get
3 people to trust him, with ID checks etc, although judging by the
following
article photo ID documents can be just as useless...
http://smh.com.au/articles/2004/04/13/1081621954002.html
> As I understand it, PSM/NSS will currently accept new certificates
> signed by trusted CAs, even if a *different* certificate is already
> known for that entity (I think even if the CA mismatches). Mozilla
> would show the lock / pen icon as if everything were OK and you'd
> never notice that you're now talking with the US government
Nelson, can you confirm or deny that, so that I can stop wondering? I
don't feel like applying for 2 different certs just to try it out. If
that problem indeed exists, I'd consider it a severe security problem
and file a bug about it.
> If anyone else has numbers about how widespread the root CAs are used, I
> think that would be very helpful in the consideration of which CAs to add.
I've asked on the CAcert mailing list and so far no one I know knows
of any other stats on this subject...
Then you should file a bug against the IETF and the ISO who created the
PKI standards.
Seriously, I can confirm that the 2 certs will work. And under the PKI
model, that's perfectly valid. A single entity, such as a user, can have
an infinity of certs, for a variety of good reasons, such as having
different keys, different usages (certs for encrypting and signing),
getting multiple certifications for the same key, or simply renewing the
cert.
There is no rule that any of those certs must be issued by the same CA. In
fact, there is nothing that binds the subject name (e.g. a domain name
for a web site, or an e-mail address for an individual) to a particular
CA. That allows you to choose any CA you trust that will verify you to
get your cert, as opposed to having that choice made for you.
If you don't like that choice, you should talk to the IETF and complain.
I think you will be in good company, as people are actually working on
protocols to make that determination, but they haven't become standard.
To give you a concrete example, when I worked at Netscape I had a cert
with my business e-mail address from the corporate CA. I also had a
second cert with my business e-mail address from Thawte. I used the
former to log in to internal corporate sites with client auth, and the
latter in my signed e-mails.
The internal servers enforced that they only trusted the corporate CA,
and no public root certs, so I could only use my corporate cert to log
in. Any properly set-up SSL server would have that property, so there
is no security issue there.
For e-mail, things were different. Clients such as Mozilla trust a large
set of CA certs for e-mail, including both of the roots that my two
certs chained to, GTE and Thawte. That meant I had a choice of which
certificate to use for signing my e-mails. Unless they specifically
checked the certs in my valid e-mail signatures, my correspondents could
not tell which cert I was using.
Some corporate types might consider that a bad thing and require that I
use my corporate cert and its escrowed key. The way to implement such a
policy would be to distribute a custom e-mail client without any root
certs for e-mail purposes, with only the corporate CA trusted for
e-mail, without the ability to add e-mail trust to any other CA, and
disallowing anyone from using any other e-mail program. It's not a
practical policy to implement.
> Seriously, I can confirm that the 2 certs will work.
Duh :-(((
Thanks for the clarification.
> And under the PKI model, that's perfectly valid.
OK, then I claim that the PKI model is inherently flawed (probably
intentionally) and not suitable to protect email between private persons
where *nobody* else is supposed to listen. It is appropriate when only
money matters and serious corporate espionage is not a factor.
> That allows you to choose any CA you trust that will verify you to get
> your cert, as opposed to having that choice made for you.
With my proposal, you'd still have the choice (even the choice to use no
CA), but once made, you'd be *bound* to that choice as long as you want
your trust relationships to be valid. Makes sense to me. (Actually, even
that could be solved with software support, see below.)
> IETF...people are actually working on protocols to make that determination
Do you know any protocol names offhand?
> To give you a concrete example, when I worked at Netscape I had a cert
> with my business e-mail address from the corporate CA. I also had a
> second cert with my business e-mail address from Thawte. I used the
> former to log in to internal corporate sites with client auth, and the
> latter in my signed e-mails.
That would be fine, as each party would know you only by one cert, ever.
It gets critical when you *change* the cert towards one party. E.g. you
wrote an email to me yesterday with the AOL cert, but today using the
Thawte cert. I *should* get a bold warning from Mozilla about that, just
like SSH does. I'd have to re-validate you, which is hard and people
wouldn't do in practice, unless there's an automatic way to do it, e.g.
by you sending the new cert to all your contacts, that mail signed with
the old cert, and the client automatically detects that and chains the 2
certificates (in that direction only).
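As a minimal sketch of that automatic chaining step (assuming RSA keys
and Python's "cryptography" library; the function and names are
illustrative, not any existing mail client's API), the client would
accept a new cert for a known correspondent only if the announcement
carrying it verifies under the previously known key:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def accept_new_cert(old_public_key, new_cert_der, signature):
        """Accept the new cert only if the old key signed it (sketch).

        old_public_key: the RSA public key already on file for the sender
        new_cert_der:   the announced replacement cert, DER-encoded
        signature:      signature over new_cert_der, made with the old key
        """
        try:
            old_public_key.verify(signature, new_cert_der,
                                  padding.PKCS1v15(), hashes.SHA256())
            return True    # continuity holds: old cert vouches for new one
        except InvalidSignature:
            return False   # unchained cert change: treat as possible attack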
> For e-mail, things were different. ... Unless they specifically
> checked the certs in my valid e-mail signatures, my correspondents
> could not tell which cert I was using.
That's exactly the security problem. If I can coerce any root CA to give
me a cert for your email address, you lost.
Actually, that probably wouldn't even be that hard; I wouldn't need to
be a government for that, I'd only need to be able to listen to (and
maybe intercept) your mailbox (that's exactly the problem that crypto
tries to solve, right?). In that case I could apply for a Class 1
certificate (which only validates the email mailbox) from any CA, catch
and respond to the verification mail to your mailbox, and then use that
new certificate to pose as you in email towards your correspondents.
Given what you said, they wouldn't notice the certificate change and
would answer me encrypted with the new key; I would catch the email
from your mailbox again, decrypt it using my fake cert, and be done.
Attack successful.
If I can pull the same attack against your recipients, I could play the
man in the middle, unnoticed unless someone looks very closely at the
cert (and *maybe* the received headers).
"Ben Bucksch" <ben.buck...@beonex.com> wrote in message
news:407E06F1...@beonex.com...
I'm sorry, I missed the original question - is this what
you are looking for:
http://www.securityspace.com/s_survey/sdata/200403/certca.html
?
iang
Oops! Found the original question :-) Must have
more than 2 coffees before posting.....
iang
> It gets critical when you *change* the cert towards one party. E.g. you
> wrote an email to me yesterday with the AOL cert, but today using the
> Thawte cert. I *should* get a bold warning from Mozilla about that, just
> like SSH does. I'd have to re-validate you, which is hard and people
> wouldn't do in practice, unless there's an automatic way to do it, e.g.
> by you sending the new cert to all your contacts, that mail signed with
> the old cert, and the client automatically detects that and chains the 2
> certificates (in that direction only).
Actually there are a few major assumptions in your thinking here...
1) You assume the CA to always be valid, and always under the same root
certificate. This isn't the case; CAs have already onsold root
certificates or just gone out of business.
2) It's still anti-competitive to tie people to one CA: they sign up the
first year for, say, $5, then every other year the CA slugs them $500
because they can't go anywhere else...
3) Certificates expire and are stolen. Sure, they can be revoked, but
said CA has a revocation fee of $500...
> Ben Bucksch wrote:
>
>> there's an automatic way to do it, e.g. by you sending the new cert
>> to all your contacts, that mail signed with the old cert, and the
>> client automatically detects that and chains the 2 certificates (in
>> that direction only).
>
> Actually there are a few major assumptions in your thinking here...
[cut]
In all these cases, you could send out the notification described
above, go on with a new cert, and still have the trust chain
uninterrupted.
Correct, this is what we have been debating. This
is another side-effect of the "one size fits all"
bug in the PKI.
> If I can pull the same attack against your recipients, I could play the
> man in the middle, unnoticed unless someone looks very closely at the
> cert (and *maybe* the received headers).
Yup. This is why it is proposed that clients should
display the salient details of the cert that is used
on the chrome. Also, the client should cache each
cert and show the number of times it is used. So,
when a cert changes, someone with whom you have
corresponded many times will suddenly report a "0"
instead of a "100".
Without integrating the user into the security protocol,
the current applications fall to MITM attacks, no
matter how the underlying SSL protocol advertises
itself. This is more or less what happens in phishing.
iang
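A rough sketch of that counter idea (illustrative Python only, not
anything that exists in PSM/NSS): keep a tally per (correspondent, cert
fingerprint) pair, and surface the count in the chrome.

    import hashlib

    seen = {}  # (sender, fingerprint) -> prior uses; persist in practice

    def record_use(sender, cert_der):
        """Return how often this exact cert was seen from this sender."""
        fp = hashlib.sha256(cert_der).hexdigest()
        count = seen.get((sender, fp), 0)
        seen[(sender, fp)] = count + 1
        return count  # a long-time contact suddenly reporting 0 is the flag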
Do you have one of them? I'm interested to know how
far apart they are. I noticed that they report rather
different numbers on the number of web sites out there,
and it seemed to relate to one counting "parked" sites
and the other not.
It would be very useful to know if NetCraft agree in the
number of certificates that are being used out there,
but at the price they are charging, I'm not buying.
iang
>>>http://www.securityspace.com/s_survey/sdata/200403/certca.html
> In all these cases, you could send out the notification described
> above, go on with a new cert, and still have the trust chain
> uninterrupted.
Still that pesky $500 charge to revoke, the user can't revoke only the
CA can...
Nelson, you said:
"If someone could show you a massive on-going MITM attack on http and
https, affecting thousands of users, how would that influence your
position? Please do answer that."
So I did. Now, I wrote that in a hurry, so you are totally
correct to pick up the exaggerations in the claim of where
the numbers came from. When you said "affecting thousands
of users" I took that as the starting point, so not *all*
of the numbers came from you.
Apologies for that loose attribution.
> None of the numbers in the text quoted above came from me.
> I doubt most of them are even approximately correct.
You picked an illustrative number "thousands", and I agree
your number wasn't correct - if it was, please give us a
source. Your lead appears hypothetical, and I followed.
Labelling these minor quibbles as "false accusations" might
make someone else think that you are not being serious, and
trying to avoid the real substance of the debate. As you asked
a serious question, please respond to that which you asked for,
and let me know of the "SO MANY flaws" there.
iang
PS: for an additional example on how to construct a risk-
benefit analysis as one input into a security model, see
this link:
http://www.schneier.com/crypto-gram-0404.html#4
By Bruce Schneier and Paul Kocher.
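For concreteness, here is the thread's own back-of-envelope comparison
redone as a few lines of Python (every figure is one of the illustrative
numbers from the earlier MITM calculation in this thread, not measured
data):

    users_hit_per_year = 5000
    loss_per_incident = 200         # dollars
    total_mitm_loss = users_hit_per_year * loss_per_incident   # $1,000,000

    valid_certs = 43430
    cost_per_cert = 1000            # dollars, including time
    protection_cost = valid_certs * cost_per_cert              # $43,430,000

    print("worth it" if protection_cost < total_mitm_loss else "not worth it")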
"Duane" <sup...@cacert.org> wrote in message
news:c5mchq$t...@ripley.netscape.com...
I think you are referring here to the user's cost
of accepting a new self-signed cert when it first
appears, and requiring the user to click through
the wizard to accept it. (Alternatively, you may
be referring to the cost of adding the root into
the clients.)
In general, I've seen new certs for SSL servers
run from several days of emails and waiting around
(Verisign renewal) down to 30 mins (a GeoTrust cert
done immediately after a prior one, sold by a good
friend, with hard net payment, and coordinated on
the phone as well as email...).
In managerial terms, it's quite a nuisance, and if
I was paying city dollars, I'd insist on GeoTrust
for quickness alone, as at standard programming
costs, 30 mins is about $50. Several days can be
up to thousands of dollars in internal costs, not
to mention issues like how hard the due diligence
is (collecting up paper work etc).
From a techie point of view, adding the cert into
the right file seems trivial. But, with CACert, if
the cost in dollars goes to zero, we still have to
find and smooth up to 3 other trusted players.
So, we can I hope agree that there are *some* costs
associated with getting a CA-signed cert, even a
CACert one. For this, if each user has to go through
10-60 seconds of pain to accept a self-signed cert,
I can see that self-signed certs are definitely going
to be valuable up to (e.g.) 100 or so users.
Only if you are like a serious merchant with dozens
of clients a day, and taking hundreds of thousands
of dollars would you be financially interested in
getting a CA-signed cert, just to save your users
from the time wasted in clicking through the wizard.
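To make the break-even explicit, a sketch using the figures above
(every number is illustrative): each user pays 10-60 seconds of wizard
pain, the cheapest CA-signed cert above costs about $50, and time is
valued at the $50-per-30-minutes rate mentioned earlier.

    seconds_per_user = 30                 # midpoint of the 10-60s estimate
    dollars_per_second = 50.0 / (30 * 60) # $50 per 30 minutes of time
    cert_cost = 50                        # cheapest CA-signed option above

    cost_per_user = seconds_per_user * dollars_per_second
    break_even = cert_cost / cost_per_user
    print("self-signed is cheaper below ~%d users" % break_even)  # ~60 here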
>> What the server should do is start
>> up and generate its own self-signed cert on
>> install time, so it's up and running straight
>> away. That's free to the server operator, or,
>> it's indistinguishable in cost to the installation
>> of SSL server in the first place.
>
>
> You set up the server once. If you set it up with a cert
> that is already recognized by the client, the client setup cost
> to use that server is zero. If you set the server up with a
> self-signed cert, every client must be setup to use it.
> That absolutely dwarfs the few extra seconds required at the
> time the server is setup.
Sometimes only, depending on how many clients there are
and how much each of those setups costs. There is still a big gap
in there where self-signed certs are more efficient
than a zero-dollar third party cert.
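Generating such a cert at install time really is only a few lines; a
sketch with Python's "cryptography" library (the hostname is a
placeholder, and a real installer would also write the key and cert to
disk):

    import datetime
    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "server.example")])
    cert = (x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)            # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow()
                             + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256()))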
>> I'm not sure it is easy to graft WoT onto
>> SSL. For a start, x.509 doesn't support
>> multiple sigs on the certs.
>
>
> I *think* Duane's model is that he will issue a cert when some
> number of PGP signatures have appeared on a PGP key on some PGP
> server. Duane please correct that if that statement is mistaken.
Got it, thanks. I can see how this works. I'm not
so sure it will cause a flood of certs though, as
getting three people to sign your application isn't
so easy. The PGP WoT is powerful in concept, but the
reality hasn't really sizzled in terms of numbers,
IMHO (and, I'm a great supporter of it all). Still,
a great choice, methinks.
iang
>>
>> Actually there are a few major assumptions in your thinking here...
>
>
> [cut]
>
> In all these cases, you could send out the notification described
> above, go on with a new cert, and still have the trust chain
> uninterrupted.
There is no such notification, nor a process for it. Revocation is not
done that way in PKI.
Ben Bucksch wrote:
>> IETF...people are actually working on protocols to make that
>> determination
>
>
> Do you know any protocol names offhand?
I can't recall the acronym name. I haven't resubscribed to ietf-pkix
yet. The list is very high traffic. But you can search the archives. I
believe Verisign and Microsoft had something to do with it.
>> To give you a concrete example, when I worked at Netscape I had a cert
>> with my business e-mail address from the corporate CA. I also had a
>> second cert with my business e-mail address from Thawte. I used the
>> former to log in to internal corporate sites with client auth, and the
>> latter in my signed e-mails.
>
>
> That would be fine, as each party would know you only by one cert, ever.
>
> It gets critical when you *change* the cert towards one party. E.g. you
> wrote an email to me yesterday with the AOL cert, but today using the
> Thawte cert. I *should* get a bold warning from Mozilla about that, just
> like SSH does.
SSH doesn't use certs. Your analogy does not apply.
NSS/Mozilla does memorize (cache) the certificates of e-mail recipients.
So we can technically find out if there are existing certificates for
the same person. But we can't say any of the certificates is better than
the other, because they are all good.
> I'd have to re-validate you, which is hard and people
> wouldn't do in practice, unless there's an automatic way to do it, e.g.
> by you sending the new cert to all your contacts, that mail signed with
> the old cert, and the client automatically detects that and chains the 2
You are thinking of re-validation outside of the PKI model.
In the PKI model, there is an automatic way of verifying the
certificates, and that's done through the chain of trust. Since no one
certificate is more valid than any other, there is obviously no
mechanism for passing around which certificate is the most correct. Each
program can choose to use any of the valid certificates.
Speaking for NSS/Mozilla, we choose the most recent one, so that if you
renew your cert with a different CA, that one will automatically be used
over your old one.
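That selection rule reduces to something like the following (a sketch
over "cryptography"-style certificate objects, not the actual NSS code):

    import datetime

    def pick_signing_cert(certs):
        """Of the recipient's valid certs, prefer the newest issuance."""
        now = datetime.datetime.utcnow()
        valid = [c for c in certs
                 if c.not_valid_before <= now <= c.not_valid_after]
        return max(valid, key=lambda c: c.not_valid_before, default=None)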
>> For e-mail, things were different. ... Unless they specifically
>> checked the certs in my valid e-mail signatures, my correspondents
>> could not tell which cert I was using.
>
>
> That's exactly the security problem. If I can coerce any root CA to give
> me a cert for your email address, you lost.
Yes. And if you can do that, then that root CA needs to be revoked,
which is done by removing it from the mozilla trust db.
If this is reported and happens repeatedly, that root CA should lose its
webtrust CA seal, so we should drop it from Mozilla. That relates to the
policy.
However, most root CAs don't typically issue end-user certs, but only
intermediate CA certs, which will in turn sign the e-mail certs for
users. For those intermediate CA certs, other revocation mechanisms are
available - OCSP and CRLs - which are much easier to deploy and don't
involve upgrading your browser to remove a root, or explicitly disabling
trust on a root.
> Actually, that probably wouldn't even be that hard; I wouldn't need to
> be a government for that, I'd only need to be able to listen to (and
> maybe intercept) your mailbox (that's exactly the problem that crypto
> tries to solve, right?). In that case I could apply for a Class 1
> certificate (which only validates the email mailbox) from any CA, catch
> and respond to the verification mail to your mailbox, and then use that
> new certificate to pose as you in email towards your correspondents.
> Given what you said, they wouldn't notice the certificate change and
> would answer me encrypted with the new key; I would catch the email
> from your mailbox again, decrypt it using my fake cert, and be done.
> Attack successful.
Correct, that would be a successful attack, and nothing can stop it today.
Note that this attack depends on the fact that the verification of your
identity by the CA is done through an insecure channel, i.e. SMTP. If
you could ensure that SMTP/SSL was used all the way between your
mailbox and the CA, nobody could snoop on it. You could then download
your mail from your POP/SSL or IMAP/SSL mailbox account. This attack
exists because PKI is not fully implemented.
Unfortunately, most e-mail goes through gateways which don't use SMTP
over SSL, so in practice, cert enrollments through e-mail aren't secure.
I don't know if there is an RFC that proposes to enforce SMTP/SSL on the
Internet for e-mail transports. That would be a very good thing. There
probably is. I suspect it will be adopted, and RFC822 dropped, long
after IPv6 gets popular :-(
This was part of my reasoning on why not to lock people to CAs; the CA
can then act in an anti-competitive manner...
> I can't recall the acronym name. I haven't resubscribed to ietf-pkix
> yet. The list is very high traffic. But you can search the archives. I
> believe Verisign and Microsoft had something to do with it.
Hmmm, now why would Microsoft and Verisign want a method to lock clients
into their services even more? I mean, that's so unlike them to act in a
monopolistic fashion...
But this strays so far away from the discussion here, which is to select a
methodology for selecting CAs for inclusion in Mozilla. Frankly, if a CA
acts up -- you pull them out.
Beginning with Windows XP, Microsoft has this capability (auth.cab) and I
would suggest Mozilla consider a similar feature.
It is important to have an independent standard against which to judge CA
behavior (and WebTrust seems to be the most likely candidate).
To stretch your argument to say that "lock-in" could encourage CAs to charge
for revocation is inflammatory. Most CAs are on the hook under a warranty
or other forms of liability if they continue to validate a certificate that
is no longer valid -- they need to encourage certificate owners to revoke
when appropriate.
With the increased competition in the CA business, price battles have
undermined most "lock-ins". If anything, the CAs are throwing in freebies
(such as vuln scans) to increase the "value" of the certificate purchase.
The only "lock-in" that I see possible is when enterprises have integrated
their applications with the CA -- such as for the issuance of S/MIME
certificates to a large community. But most relationships of this kind are
not "retail" -- they are the outcome of a long development of
trust/relationship and are normally covered by multiyear contracts/SLAs.
Also, much of the dialogue regarding dissidents is ... well ... just
irrelevant in the commercial CA business!! Most commercial CAs have
commercial enterprises as their clients -- that's just the way the money
grubbing world is. Our clients are not normally afraid of the NSA ... they
are trying to fulfill regulatory obligations to protect user privacy or to
enable business efficiency.
You must have missed the emails advocating lock-in by technical means
by CAs, then. All I'm saying is that doing so would end up an
anti-competitive exercise, and removing free choice won't stop any
government from simply breaking and entering, getting into your
computer, and stealing your private key... If you prevent things by
technical means, you'd better have the physical security as well, or
it's a pointless exercise...
The FBI bugged the keyboard of a gangster who used PGP; I doubt the
Windows security model would be much of a challenge for them...
I agree that this discussion sometimes broadens
away from the key question. But, I suggest this
is necessary to develop a policy that can survive.
It's fine to say that competition might make CAs
competitive, but it wasn't always that way, and
it may not be so in the future [1].
CAs being anti-competitive is a very key issue
to the policy, and the policy should assume that
manipulation for anti-competitive purposes will
be the norm. CAs are going to be anti-competitive,
if they can get away with it, and if money is
involved. And, they will use any tool they can
think of to reach their goals.
> Frankly, if a CA
> acts up -- you pull them out.
People say that, but has anyone done it? Has any
CA been pulled, ever? And what for? How hard was
it to do?
Imagine if a CA instituted a policy of charging
a disconnect fee, nominally because its ongoing
due diligence would need to be closed down. Of
course this is fair... and any business could
construct a reason to do this, with its
slow-moving client base.
If however there was a challenge to this very fair
and reasonable fee of $500 for disconnect, then
no doubt the CA would fight hard to keep from being
dropped.
(Or, imagine *any* reason for pulling the CA.)
If the CA was "active" and in the market place,
I'd say the very first thing that would happen
is that Mozilla Foundation would be sued in
court and an injunction requested. This would
be granted [2], and then it would take about 4
years to battle the case through, plus/minus a
couple of years.
I think the notion that a CA can be "pulled" from
the list if it misbehaves should be treated with
intense suspicion. Also, I think I can comfortably
suggest that the cost of pulling a CA will exceed
the cost of adding a CA. In terms of time and
analysis and emails and risk and user support.
So, it would seem sensible to design a policy
whereby CAs did not need to be pulled. Now, I
know this is all bad news (and I hate being the
harbinger of evil tidings) but the CA business
is not the walk in the park that some programmers
wish it were.
> It is important to have an independent standard against which to judge CA
> behavior (and WebTrust seems to be the most likely candidate).
This is an important point. So, the question
then is, how does WebTrust do it? How does it
decide, process, analyse and advise a decision
to drop a CA? Does it indeed do anything, other
than decline to conduct another audit?
iang
[1] I think it's fair to say that the origins of the
CA market were a case study in a pure anti-competitive
market. Legislation was proposed and pushed through
by CAs in some places that created a barrier to entry.
Luckily, legislators around the world caught on to the
game and declined to follow the original model. Now,
most legislation simply reserves the right to pursue
an anti-competitive framework, rather than mandates
it. Any policy should consider that this unfortunate
past may arise again.
[2] In the normal vein of legal proceedings, injunctions
would be granted. The injunction is granted to preserve
the balance, pending the case being resolved in court.
So, the assumption is that the party with the power has
to defer its employment of that power until the issue
has been heard by the judge.
In general, injunctions are granted. Further, they are
not lifted (again, in general) until the resolution of
the case. If incorrectly applied, your normal remedy is
damages after the case (again, in general), not to have
it lifted.
Consult your lawyers on this, I'm only talking from a
low knowledge base: I got hit by one, and had to fight
it. Luckily, the injunction seeker made mistakes which
could be seen as deceptive, and the judge saw fit to
drop the injunction. But it was considered highly
unusual to have made such mistakes.
> Also, much of the dialogue regarding dissidents is ... well ...
First, the dissident was just one 'threat model' I mentioned. As I said,
the husband/wife case (in case they care) also applies. When I say I
want nobody to listen, I mean *nobody*.
> just irrelevant in the commercial CA business!! Most commercial CAs
> have commercial enterprises as their clients -- that's just the way
> the money grubbing world is.
So, you're saying that CAs and PKIs are inappropriate for private users?
Frankly, I don't care much about protecting the little secrets of large
companies. I care about protecting *privacy* for *people*.
> Our clients are not normally afraid of the NSA
Maybe not yours, but large companies in other countries do have reason
to fear the NSA for corporate espionage. There have been reports (and
IIRC even admissions) about government secret agencies (of USA and other
countries) "helping out" local companies in important international
contracts.
> First, the dissident was just one 'threat model' I mentioned. As I said,
> the husband/wife case (in case they care) also applies. When I say I
> want nobody to listen, I mean *nobody*.
Simply isn't possible with the PKI model; there is no way to guarantee
all links in the chain are secure when there is so much money and
political influence involved... PGP and SSH models are even worse for
the general public, because they'd simply click through; at least with
PKI you know exactly who is a threat...
> So, you're saying that CAs and PKIs are inappropriate for private users?
I'd have to agree with that sentiment, ***BUT*** I don't know of
anything for the general run-of-the-mill Joe Public that would be any
better... In most cases I doubt most people would be targeted by a
government, and even if they were, if a government decided to break into
your house and steal your computer for the private keys without you
finding out about it, people would still be in ignorant bliss...
> Frankly, I don't care much about protecting the little secrets of large
> companies. I care about protecting *privacy* for *people*.
This is a people problem, not a technical problem, and as always money
is the root of all evil... If certificates were commoditised and
easy/cheap to get hold of, this would actually go a long way to
protecting privacy, by the sheer number of emails encrypted. It would
make world government surveillance oh so much more fun; then of course
the governments in each respective country would possibly try and crack
down on it again...
> Maybe not yours, but large companies in other countries do have reason
> to fear the NSA for corporate espionage. There have been reports (and
> IIRC even admissions) about government secret agencies (of USA and other
> countries) "helping out" local companies in important international
> contracts.
The French complained about Echelon and its effect on their companies,
and the end result was that the French installed their own network with
more or less the same capabilities in their own territories around the
world; if you can't beat 'em, join 'em...
> Ben Bucksch wrote:
>
>> When I say I want nobody to listen, I mean *nobody*.
>
> Simply isn't possible with the PKI model...
> I don't know of anything for the general run-of-the-mill Joe Public
> that would be any better...
What about the model I proposed? First cert for a person is either
CA-based or self-signed, subsequent certs *must* be authorized and
signed by the previous cert or will be treated as attack.
The only 2 problems I see are:
* Identify "person". People still change their email addresses, and
different people with the same name exist. Might be solvable with
help from the user.
* People using certs, but being careless: signing up to one, then
deleting it, e.g. when reinstalling their hard drive. What do I do as
recipient? Do I believe the story or not? Most people would, and
then they'd fall for the attack as well. But at least we turned a
technical attack, invisible to the user, into a social engineering
attack, which is much harder and can be prevented by
smart/knowledgeable recipients.
A 3rd problem: a race condition. The thief and the owner race to the
door to lock it to keep the other out... Did you really get the valid
certificate in the first place? And even if you did, what if someone
steals the key and you're unaware of the theft... There are possibly
numerous more, not to mention the worst: it would increase the
anti-competitive nature of the PKI industry while offering little if
anything to the end user, as they don't have enough knowledge of
security practices to know what the hell they're doing...
Web of trust solves the "nobody" requirement, but
CA/PKI does not, in general, and neither does the
SSH model (more properly, opportunistic crypto).
In web of trust (WoT), both parties
communicate their fingerprints over other means
(generally phone or person to person) and then
sign each other's keys. From then on, they are
protected from all on-the-wire attacks (but they are
not protected if their nodes/keys are compromised).
This would be relatively easy to put into x.509 mail
agents from a conceptual pov, but there would need
to be some hackery, as the x.509 layout does not
support more than one signature on the cert. I.e.,
it does not support WoT.
(In your example above, the CA signature on the
cert is irrelevant, as you are starting from a
known key and using that to bootstrap comms. This
is the SSH model. But, you have to establish
what the known key is to both parties - hence, WoT.)
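The out-of-band step is cheap. A sketch of a human-comparable
fingerprint (illustrative Python; real OpenPGP fingerprints are computed
differently, over defined key material):

    import hashlib

    def fingerprint(public_key_der):
        """Short digest two people can read to each other over the phone."""
        fp = hashlib.sha256(public_key_der).hexdigest()
        return ":".join(fp[i:i+4] for i in range(0, 16, 4))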
> The only 2 problems I see are:
>
> * Identify "person". People still change their email addresses, and
> different people with the same name exist. Might be solvable with
> help from user.
> * People using certs, but being careless: signing up to one, then
> deleting it, e.g. when reinstalling their hard drive. What do I do as
> recipient? Do I believe the story or not? Most people would, and
> then they'd fall for the attack as well. But at least we turned a
> technical attack, invisible to the user, into a social engineering
> attack, which is much harder and can be prevented by
> smart/knowledgeable recipients.
All good crypto protocols are made of two parts,
the first part of which tells the second to "trust
this key completely."
In CA/PKI architecture, the CA is the first part,
and the CA tells the second part (SSL) to trust
keys signed by the CA completely. Thus, any issues
generally occur with reference to the CA.
* The CA defines what a person is. They might do this
by demanding company docs, or in CAcert's case, by
demanding three trusted OpenPGP sigs. If a person
changes her name (or email address), then she becomes
another "person" as far as the CA is concerned.
* If the person loses their cert, she has to go
back to the CA and get another.
(If it were a WoT system, then the user would
generate another key and exchange fingerprints
again.)
iang
> * The CA defines what a person is. They might do this
> by demanding company docs, or in CAcert's case, by
> demanding three trusted OpenPGP sigs. If a person
> changes her name (or email address), then she becomes
> another "person" as far as the CA is concerned.
Erm no, our PGP section of the website (signing, not looking at sigs)
has nothing to do with any of the trust component; the trust component
involves forms, paper trails, and all the fun stuff dealing with due
diligence and identity checks. The person doing the checks then fills in
the details on the website via an HTML form. Most of the guys that go
out (usually for free/the cost of a coffee) take the face to face checks
more seriously than most government bodies who are paid to take these
things seriously...
> (If it were a WoT system, then the user would
> generate another key and exchange fingerprints
> again.)
How much time would be required if they needed to do face to face
checking on 100s if not 1000s of people? It would be both time and cost
prohibitive, with no guarantee you'd be able to cover everyone, directly
or indirectly, with this method...
Ben Bucksch wrote:
>
> What about the model I proposed? First cert for a person is either
> CA-based or self-signed, subsequent certs *must* be authorized and
> signed by the previous cert or will be treated as attack.
If the key for the first cert was compromised (fell into the wrong
hands), and that cert was self-signed, how can you possibly do
revocation on it ?
Why can't a self-signed cert/key revoke itself?
Unless the user lost the private key, *and*
it fell into someone else's hands... That
would be a nuisance.
Mind you, revocations seem rather rare. Most
people just get a new key setup and tell
everyone by other means that their old setup
is dead.
iang
> If the key for the first cert was compromised (fell into the wrong
> hands), and that cert was self-signed, how can you possibly do
> revocation on it ?
I don't know, but I could in any case send out a (computer-parsable)
statement "this cert is invalid from now on", signed by that cert. Then
I am no worse off than if I never had a cert. This is assuming, of course,
that I also still have a copy of the private key somewhere.
I personally don't worry all that much about the compromised key case,
because that's something I can prevent (or I am screwed anyways). I
can't prevent the problems in the model.
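Such a statement is easy to produce while you still hold the key; a
sketch, assuming an RSA key and Python's "cryptography" library (the
statement format is invented for illustration):

    import datetime
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def self_revocation(private_key, cert_fingerprint):
        """Sign an 'invalid from now on' notice with the cert's own key."""
        statement = ("REVOKED cert=%s from=%sZ" % (
            cert_fingerprint,
            datetime.datetime.utcnow().isoformat())).encode()
        sig = private_key.sign(statement, padding.PKCS1v15(), hashes.SHA256())
        return statement, sig  # verifiable by anyone holding the old cert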
> I personally don't worry all that much about the compromised key case,
> because that's something I can prevent (or I am screwed anyways). I
> can't prevent the problems in the model.
Physical security is as important as digital: if you lose the key, all
the mail becomes just as readable, unless you deleted it all after
reading/replying etc...
> Physical security is as important as digital: if you lose the key, all
> the mail becomes just as readable, unless you deleted it all after
> reading/replying etc...
Well, right. But, and it's a big but, the protocol
designer cannot do much about it. The system has to
be built to deal with the realities that the user
might stuff up and lose the key.
What does this mean? OT1H, ignoring or assuming it
away can lead to errors in both directions. OTOH,
there isn't much that can be done except offer the
user the choice of the system under each of two
assumptions: a) you might lose your key, or b)
you take good care and the key is secure.
This is what makes protocol design so interesting -
walking the fine line presented by unknowables that
might be in conflict.
iang
> Why can't a self-signed cert/key revoke itself?
How would it do so?
Would it publish a CRL listing itself?
And if you found a CRL that listed its signer's cert, would you
trust that CRL?
Isn't that like choosing whether or not to believe the person
who says "everything I say is a lie"?
> Mind you, revocations seem rather rare.
Look at the size of any CA's CRL.
Even cacert's CRL seems to have a lot of entries, and seems to
have expanded at a significant rate.
--
Nelson B
In the original scenario, Ben was leaning towards
person to person communications, such as email.
So, to do a revocation, the user could hit the
button to revoke (which might create a CRL if that
is the best way to do it) and then mail the results
to the people in the address book.
In browsing, one could publish the revocation,
but as self-certs would be normally used for low
monetary value, or otherwise protected activities,
then just replace the self-signed cert with another
and tell everyone you mucked up.
> Isn't that like choosing whether or not to believe the person
> who says "everything I say is a lie"?
No, the key is saying that "I am compromised"
and the key is as authoritative in its statements
as anything else.
Even if this is a false statement (the owner
only thinks it is true), it is still acceptable
as a true statement. It simply means there are
some cases where one is over-zealous.
>> Mind you, revocations seem rather rare.
>
>
> Look at the size of any CA's CRL.
> Even cacert's CRL seems to have a lot of entries, and seems to
> have expanded at a significant rate.
Oh, ok! Now, how many of those are actual
results of compromise? As opposed to routine
replacements or expiries or other benign
effects. Are we saying that CACert has a
lot of compromises already? That would be
a surprise.
Perhaps I should have said compromise
revocations are rare, or important revocations
are rare...
iang
> > Frankly, if a CA
> > acts up -- you pull them out.
>
> People say that, but has anyone done it? Has any
> CA been pulled, ever? And what for? How hard was
> it to do?
Please compare the built-in CA list for Communicator 4.7x
and mozilla (any recent version). IIRC, mozilla's list
is smaller. Yet it was derived from Communicator's list.
If my memory isn't mistaken here, then CAs have been
pulled from the list.
Netscape's original policy, IIRC, was that CAs paid a fee to
be included in the list until the next "major" revision of the
software, at which point the list would start fresh.
> Imagine if a CA instituted a policy of charging
> a disconnect fee.
I'd really rather stop these straw men.
> (Or, imagine *any* reason for pulling the CA.)
Perhaps it's time for a "major" revision of the CA list.
>> It is important to have an independent standard against which to
>> judge CA
>> behavior (and WebTrust seems to be the most likely candidate).
>
>
> This is an important point. So, the question
> then is, how does WebTrust do it? How does it
> decide, process, analyse and advise a decision
> to drop a CA? Does it indeed do anything, other
> than decline to conduct another audit?
That's a fair question. Another is, what does it take to convince
WebTrust that some party they've audited is no-longer following the
audited practices, and therefore that party's seal ought to be
reconsidered.
I recently learned that at least one "authenticode" cert has been
revoked by its issuer because the issuer believed that the party to
whom the cert was issued was violating some rule, probably some aspect
of some agreement. I'm not familiar with the terms of the agreement(s)
to which an applicant must agree to receive an authenticode cert, but
that might be instructive to find out.
> [1] I think it's fair to say that the origins of the
> CA market were a case study in a pure anti-competitive
> market. Legislation was proposed and pushed through
> by CAs in some places that created a barrier to entry.
That occurred well AFTER Netscape first offered clients with CA lists.
AFAIK, those laws presented barriers to CAs wanting to do business
with the state. But they didn't stop CAs from getting into Netscape's
list. And I think they have no bearing on mozilla, unless mozilla
decides they do.
--
Nelson B
Right, but that's not quite "pulling" is it?
That's "declining to copy."
Mozilla Foundation is a separate organisation
to Netscape. They are very different, one is
a not-for-profit open source developer, and
the other is a for-profit, closed source,
seller of browsers and servers. In business
terms, it's just about all different.
From this point of view of Mozilla being a
separate entity, it would be ludicrous to say
that just because some CA was listed in the
Netscape list, Mozilla cannot remove it...
A very different thing to Mozilla adding a
CA this year and taking it out next year.
...
> Perhaps it's time for a "major" revision of the CA list.
I'd suggest that take place after Frank's current
WebTrust set is done, and after the non-WebTrusts
are done.
>>> It is important to have an independent standard against which to
>>> judge CA
>>> behavior (and WebTrust seems to be the most likely candidate).
>>
>>
>>
>> This is an important point. So, the question
>> then is, how does WebTrust do it? How does it
>> decide, process, analyse and advise a decision
>> to drop a CA? Does it indeed do anything, other
>> than decline to conduct another audit?
>
>
> That's a fair question. Another is, what does it take to convince
> WebTrust that some party they've audited is no-longer following the
> audited practices, and therefore that party's seal ought to be
> reconsidered.
>
> I recently learned that at least one "authenticode" cert has been
> revoked by its issuer because the issuer believed that the party to
> whom the cert was issued was violating some rule, probably some aspect
> of some agreement. I'm not familiar with the terms of the agreement(s)
> to which an applicant must agree to receive an authenticode cert, but
> that might be instructive to find out.
I suppose the issue here is that if a CA has
a WebTrust, and the seal is pulled, then there
is no problem with pulling Mozilla's root distro.
Then, for a CA without a WebTrust, they probably
wouldn't cause too much of a difficulty anyway,
so that isn't an issue.
The remaining danger area is a CA with a WebTrust
where Mozilla has decided to pull it, and WebTrust
has not. On this, having a policy that clearly
spells out that it can be pulled at MF's sole
discretion, and taking no money (very important,
as this means there is no contract), would
cover it.
Except for the costs of proving that in court,
however.
>> market. Legislation was proposed and pushed through
>> by CAs in some places that created a barrier to entry.
>
>
> That occurred well AFTER Netscape first offered clients with CA lists.
As the legislation I have in mind passed in 1995,
I'm not sure that "well after" is on the money
there. Wasn't the earliest discussion
of SSL in 1994?
Mind you, I never heard of Netscape having
anything to do with the legislation. And, I
don't really see what is to be gained by
digging up the past that much, as long as we
can recognise it for what it was.
> AFAIK, those laws presented barriers to CAs wanting to do business
> with the state.
No, the original model set liability limits
for all users of the CAs. It was a messy
area, and I think the whole thing blew up
in their faces. Which was why the original
models weren't so widely adopted.
> But they didn't stop CAs from getting into Netscape's
> list. And I think they have no bearing on mozilla, unless mozilla
> decides they do.
Yes, it was a purely historical comment,
meant to imply that HTTPS got off to a bad
start, and resorting back to the good old
days is not much use.
iang
Ian Grigg wrote:
> Julien Pierre wrote:
>
>> Ben,
>>
>> Ben Bucksch wrote:
>>
>>>
>>> What about the model I proposed? First cert for a person is either
>>> CA-based or self-signed, subsequent certs *must* be authorized and
>>> signed by the previous cert or will be treated as attack.
>>
>>
>>
>> If the key for the first cert was compromised (fell into the wrong
>> hands), and that cert was self-signed, how can you possibly do
>> revocation on it ?
>
>
>
> Why can't a self-signed cert/key revoke itself?
>
> Unless the user lost the private key, *and*
> it fell into someone else's hands... That
> would be a nuisance.
That's precisely the case I was concerned about.
Ben Bucksch wrote:
>
> I don't know, but I could in any case send out a (computer-parsable)
> statement "this cert is invalid from now on", signed by that cert. Then
> I am no worse off than if I never had a cert. This is assuming, of course,
> that I also still have a copy of the private key somewhere.
>
> I personally don't worry all that much about the compromised key case,
> because that's something I can prevent (or I am screwed anyways). I
> can't prevent the problems in the model.
You can't, but CAs can !
If your cert was signed by a CA, and your private key was compromised,
you can notify your CA of the key compromise, and they will put the
serial number of your certificate on their CRL (Certificate Revocation
List).
Also, if they operate an OCSP responder (OCSP is Online Certificate
Status Protocol), the responder will state that the cert was revoked to
anybody who asks, as well as the reason why.
CRL and OCSP are computer-parsable statements, and they are secure. If
there was no CA signature on that computer-parsable statement, then
anybody can fake that statement and revoke your self-signed cert !
In other words, if you have a self-signed cert, there is nothing you can
do to revoke it in a secure way if the reason was a key compromise,
particularly, as Ian pointed out, if you lost your own private key and
somebody else got ahold of it - for example, if the computer with your
private key was stolen, which is surely something you would want to
protect against.
It would be of great benefit to you to read the specifications for the
existing and secure PKI revocation mechanisms of CRLs and OCSP. There is
no need to reinvent the wheel. I won't be answering any more of your
messages until you do your due diligence.
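For reference, the CRL lookup itself is mechanically simple; a sketch
with Python's "cryptography" library (a real client must also fetch a
fresh CRL from the distribution point named in the cert):

    from cryptography import x509

    def is_revoked(cert, crl, issuer_public_key):
        """Look the cert's serial number up in its issuer's CRL (sketch)."""
        # The CRL is only meaningful if the issuing CA really signed it.
        if not crl.is_signature_valid(issuer_public_key):
            raise ValueError("CRL signature invalid; do not trust it")
        hit = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
        return hit is not None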
Ian Grigg wrote:
> No, the key is saying that "I am compromised"
> and the key is as authoritative in its statements
> as anything else.
>
> Even if this is a false statement (the owner
> only thinks it is true), it is still acceptable
> as a true statement. It simply means there are
> some cases where one is over-zealous.
Unfortunately, as you pointed out yourself, if you no longer have your
private key, and you know or suspect that somebody else got a copy of
it, then you cannot make that revocation statement yourself.
A very common case for this would be that the computer that has your
unique copy of the private key stored on it gets stolen. If the thief
was after your private key, he may be able to password-crack your key
database, and get ahold of the key. You would have absolutely no way to
do anything about it. And if the thief was indeed after your private
key, then I wouldn't hold my breath for *him* to make the revocation
statement !
> Oh, ok! Now, how many of those are actual
> results of compromise? As opposed to routine
> replacements or expiries or other benign
> effects. Are we saying that CACert has a
> lot of compromises already? That would be
> a surprise.
I'm assuming quite a few would be from lost certs when they've
reformatted or viruses did it for them.... While not being compromised,
I'm sure a lot of people have lost access to PGP keys the same way...
There is no way to revoke a PGP key in that instance...
>> Unless the user lost the private key, *and*
>> it fell into someone else's hands... That
>> would be a nuisance.
>
>
> That's precisely the case I was concerned about.
Ah, well. In that case, the user would have
to "revoke" via shouting from the roof tops.
Seems like a reasonable compromise. If a user
is concerned about this risk, then I suppose
they could use a CA-signed cert instead. But
for the average p2p email scenario, it would
be simpler just to mail the address book and
say "sorry, it ain't me."
iang
> Unfortunately, as you pointed out yourself, if you no longer have your
> private key, and you know or suspect that somebody else got a copy of
> it, then you cannot make that revocation statement yourself.
Right, that's a separate case. By definition, a
self-signed cert cannot deal with that, at the
protocol level. No biggie. It's not compulsory.
> A very common case for this would be that the computer that has your
> unique copy of the private key stored on it gets stolen. If the thief
> was after your private key, he may be able to password-crack your key
> database, and get ahold of the key. You would have absolutely no way to
> do anything about it. And if the thief was indeed after your private
> key, then I wouldn't hold my breath for *him* to make the revocation
> statement !
I don't know how common this is, really. I've heard
of all these things happening in isolation, but I've
never heard of someone stealing a laptop, searching
for the key, cracking it open with a password cruncher,
and then going out and ... doing some damage like
stealing your value using your cracked key.
I mean, all these things are possible, but they are
rather unlikely. It's only in the last year that
viruses have targeted e-gold and Paypal passwords;
I've never heard of anyone targeting keys (although
there is a paper on this, google "lunchtime attack").
It's like walking out the front door - we take adequate
precautions for normal risks, like looking left and right
on entering the road. But we don't worry about meteors.
Most people walk around with cash in their pockets. If
they lose their wallets, they lose their cash. What do
they do? Take care, mostly. These are normal risks and
normal responses. Ben says he wants to take care, is
all.
Self-signed certs have limitations. But, they are nice
and cheap. You don't get everything for free, but you
do get quite a lot.
(talking to Ben, you said:)
> You can't, but CAs can !
>
> If your cert was signed by a CA,...
I count about 6 ifs there, that the average CA is
selling. $100-$900 buys far too many ifs for many
uses. I think a lot of people will be happy with
a lack of hand-holding, at the price.
> It would be of great benefit to you to read the specifications for the
> existing and secure PKI revocation mechanisms of CRLs and OCSP. There is
> no need to reinvent the wheel. I won't be answering any more of your
> messages until you do your due diligence.
Well, here's some due diligence: How much has been lost
due to lack of 3rd party revocation capabilities in the
OpenPGP or SSH or any world? Indeed, how much has been
lost due to lack of 3rd party revocation, in the SSL
world - given that we are only just now seeing something
like an infrastructure that could be considered to be
substantial? Correct me if I'm wrong, but the 1st
decade of SSL was ... revocation weak. Surely there
are some risks, some losses to show for it?
Some merchants saying that "I lost my cert, I lost my
house?"
Also, given the nature of self-signed certificates, it
is pretty clear that the user gives up any benefit of
revocation by CAs. What on earth is offensive about
that? A self-signed cert user doesn't want anything
to do with a CA, including revocation. There is simply
no drama here.
iang
As expected. You know, in the Linux community,
machines are hacked all the time, I guess we would
have heard of stolen certs by now.
> I'm sure a lot of people have lost access to PGP keys the same way...
> There is no way to revoke a PGP key in that instance...
Right. In OpenPGP, one is supposed to create
a revocation certificate up front, and then
keep that in a safe place. I have never bothered.
iang
PS: Funny story from back in '96 or so. I was sitting
all alone in the office and someone sent an urgent
message to the corporate PGP key. Which I couldn't
find. So, thinking that I could be smart and search
the entire system for the corporate key, I wrote a
little program to try the message on the key.
Well, to my surprise there were 5000 keys on the
machine... and only many hours later did the script
eventually bumble its way to the key. Then, I
only had to remember the password, which took another
several hours.
Now, this is not really representative, there aren't
many companies out there that built systems that
scattered keys around like they were free!
> As expected. You know, in the Linux community,
> machines are hacked all the time, I guess we would
> have heard of stolen certs by now.
We know they're mass defaced by script kiddies running scripts, and we
know about it from the effects on websites; now, if someone seriously
wanted to get keys and you didn't know about it, I see that as a real
issue...
> Right. In OpenPGP, one is supposed to create
> a revocation certificate up front, and then
> keep that in a safe place. I have never bothered.
I doubt most others have either, I know I have for the CAcert signing
keys, but never for my own personal keys...
> Now, this is not really representative, there aren't
> many companies out there that built systems that
> scattered keys around like they were free!
Slowed you down, didn't stop you...
> I don't know how common this is, really. I've heard
> of all these things happening in isolation, but I've
> never heard of someone stealing a laptop, searching
> for the key, cracking it open with a password cruncher,
> and then going out and ... doing some damage like
> stealing your value using your cracked key.
The FBI broke into a gangster's place (legally) and placed a key logger
on his keyboard to get his PGP password to break his crypto...
> Self-signed certs have limitations. But, they are nice
> and cheap. You don't get everything for free, but you
> do get quite a lot.
CAcert is also free (well, unless people want to donate to us :), but
the added benefit is that an impartial 3rd party (with NO monetary gain)
will try to do as much checking as possible for as minimal a cost as
possible (due diligence), whereas with self-signed certificates it's
dicey: email addresses can be easily forged, and self-signed
certificates created within seconds... Hello encrypted spam!
> Well, here's some due diligence: How much has been lost
> due to lack of 3rd party revocation capabilities in the
> OpenPGP or SSH or any world? Indeed, how much has been
SSH is a special case where you SHOULD be intimately knowledgeable
about the system you're connecting to; you don't go out and SSH to
machines you have no prior relationship with, otherwise you're there
for, well, non-legit reasons. You do go out and email people you have
no prior relationship with, and you do go out and connect to websites
you have no prior relationship with, etc etc etc....
>>> Mind you, revocations seem rather rare.
>>
>> Look at the size of any CA's CRL.
>> Even cacert's CRL seems to have a lot of entries, and seems to
>> have expanded at a significant rate.
>
> Oh, ok! Now, how many of those are actual
> results of compromise? As opposed to routine
> replacements or expiries or other benign
> effects.
I doubt that any of them are due to mere expiration.
A CRL is never required to list expired certs.
A cert's date of expiration is the end of the issuer's
obligation to carry it in the CRL.
One reason to issue certs with short expiration times (e.g. only
a year, even for keys that are thought to require 50+ years to
break) is to mitigate the amount of information that must be carried
in the issuer's CRL.
I think it is considered good practice to carry a cert on
a CRL for some small time after it expires, but not continually
thereafter.
> Are we saying that CACert has a lot of compromises
> already? That would be a surprise.
Let's ask Duane. Duane: why the revocations?
--
Nelson B
> Let's ask Duane. Duane: why the revocations?
I'd have to ask users. We don't issue a CRL for expired certificates, so
the only cases are lost private keys:
We get numerous emails about this as people try to reinstall their
certificates from the site, but we never get their private key, so they
email us asking why it doesn't work. I can only assume formatting the
HDD, then needing their key again, would be a common reason.
With openssl, only 1 valid certificate is allowed for any CN at a time,
so people wanting to renew certificates before they expire revoke and
renew them.
I haven't heard of PCs being stolen to get the private key for any
CAcert issued certificates, maybe this is a question better suited for
commercial CAs that have been in operation a longer amount of time...
>> Please compare the built-in CA list for Communicator 4.7x
>> and mozilla (any recent version). IIRC, mozilla's list
>> is smaller. Yet it was derived from Communicator's list.
>> If my memory isn't mistaken here, then CAs have been
>> pulled from the list.
>
> Right, but that's not quite "pulling" is it?
> That's "declining to copy."
Not if it's done by the same party, which it was.
mozilla's present CA list is actually Netscape 7.1's list, that is,
the final Netscape browser's CA list. Netscape/AOL managed the NSS
CA list until about a year ago, up until moz 1.4, which is approximately
equal to Netscape 7.1, in my judgement. Netscape 7.x's CA list did not
include all the CAs in Communicator 4.x's list, IINM.
You doubted that any root CA list has ever been reduced, that
any CAs have ever been removed. I cited an example.
It's not at all apparent to me that mozilla should have any less
control, less ability, or less risk, than Netscape had, over
removing CAs from the list. And Netscape did take money.
I see no reason why mozilla shouldn't do something similar, and
say "we're going to concoct a new list every so often".
>> [...] what does it take to convince
>> WebTrust that some party they've audited is no-longer following the
>> audited practices, and therefore that party's seal ought to be
>> reconsidered.
>>
>> I recently learned that at least one "authenticode" cert has been
>> revoked by its issuer because the issuer believed that the party to
>> whom the cert was issued was violating some rule, probably some aspect
>> of some agreement. I'm not familiar with the terms of the agreement(s)
>> to which an applicant must agree to receive an authenticode cert, but
>> that might be instructive to find out.
>
> I suppose the issue here is that if a CA has
> a WebTrust, and the seal is pulled, then there
> is no problem with pulling Mozilla's root distro.
> Then, for a CA without a WebTrust, they probably
> wouldn't cause too much of a difficulty anyway,
> so that isn't an issue.
>
> The remaining danger area is a CA with a WebTrust
> where Mozilla has decided to pull it, and WebTrust
> has not. On this, having a policy that clearly
> spells out that it can be pulled at sole discretion
> by MF, and taking no money (very important,
I agree. MF's policy needs to address that.
--
Nelson B
IANAL etc. However, I could see how that could go a long way toward
alleviating the situation: they wouldn't be removed so much as just not
included in the new list...
It's been shown that CA-issued certs have the advantages of
being revocable, with two separate revocation mechanisms in place,
and with the ability to revoke remaining even if/when the cert
owner loses his only/last private key for the cert.
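For the CRL side of that, a minimal relying-party sketch in Python
(the distribution-point URL and serial number are made up; again
assuming the "cryptography" package) of checking a cert against the
issuer's published CRL:

  # Sketch: relying-party check of a cert's serial against a CRL
  # (URL and serial are hypothetical; python-cryptography API).
  import urllib.request
  from cryptography import x509

  CRL_URL = "http://crl.example-ca.invalid/ca.crl"  # hypothetical
  SERIAL = 0x1001                                   # hypothetical

  der = urllib.request.urlopen(CRL_URL).read()
  crl = x509.load_der_x509_crl(der)

  entry = crl.get_revoked_certificate_by_serial_number(SERIAL)
  if entry is not None:
      print("revoked on", entry.revocation_date)
  else:
      print("not on this CRL; it is valid until", crl.next_update)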
Let's leave this point, and move on.
--
Nelson B
Ian Grigg wrote:
> Also, given the nature of self-signed certificates, it
> is pretty clear that the user gives up any benefit of
> revocation by CAs. What on earth is offensive about
> that? A self-signed cert user doesn't want anything
> to do with a CA, including revocation. There is simply
> no drama here.
What that means is that applications such as Mozilla should treat
self-signed certs differently from certs issued by trusted CAs. And they
do currently, by bringing you a big warning. I think it should remain.
Perhaps wording can be added, to explain that if the user accepts the
cert, there is no way it will ever get revoked. But I believe that's too
complicated for most people to understand. Under most circumstances, the
cert simply shouldn't be trusted period, especially by people who have
no understanding of certs. Having an "advanced" mode for the browser
where it's possible to change trust, and a normal mode where it's not,
for unsophisticated users, would be a good compromise.
The logical conclusion of this is that self-signed certs, if they are
ever to be used, should only be used by sophisticated users, who have
means to validate the certs outside of PKI, in a manual way (!).
Having an application automatically generate self-signed certs to widen
PKI use for the masses, as you have suggested, would be a very bad
disservice to the value of certs, because the masses could never
understand the risks associated with using and trusting any of those
self-signed certs. They just don't read the security warnings, they
blindly click through.
Ian Grigg wrote:
> Duane wrote:
>
>> Ian Grigg wrote:
>>
>>> Oh, ok! Now, how many of those are actual
>>> results of compromise? As opposed to routine
>>> replacements or expiries or other benign
>>> effects. Are we saying that CACert has a
>>> lot of compromises already? That would be
>>> a surprise.
>>
>>
>>
>> I'm assuming quite a few would be from lost certs when they've
>> reformatted or viruses did it for them.... While not being compromised
>
>
>
> As expected. You know, in the Linux community,
> machines are hacked all the time, so I guess we would
> have heard of stolen certs by now.
Why do you guess that?
What makes you think people would tell you that their certs
were compromised?
And what makes you think that the Linux community cares about trusted
X.509 certs that they have to pay for, when they are perfectly happy to
exchange untrusted public keys, which are FREE? That's what matters
most to them, after all, not security.
> Right. In OpenPGP, one is supposed to create
> a revocation certificate up front, and then
> keep that in a safe place. I have never bothered.
If you, with the knowledge you have of these issues, haven't bothered,
I wouldn't expect many other PGP users to bother either. Even if you
had created that revocation cert, it's possible you wouldn't have
backed it up, or the data would have been lost, just like the private
key data itself. It's a fundamentally broken revocation model.
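For what it's worth, generating that revocation certificate up front
is a single (interactive) command; a rough Python sketch wrapping it
(the key id and output file are made up):

  # Sketch: create an OpenPGP revocation certificate up front with GnuPG
  # and stash it somewhere safe (key id and file name are hypothetical).
  import subprocess

  KEYID = "alice@example.org"  # hypothetical key to protect

  # gpg prompts interactively for confirmation and a revocation reason.
  subprocess.run(["gpg", "--output", "revoke-alice.asc",
                  "--gen-revoke", KEYID], check=True)

  # Print revoke-alice.asc out or copy it offline; importing it later
  # marks the key as revoked even if the private key is long gone.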
> Physical security is as important as digital
I didn't say otherwise. But physical security is not the subject here.
> you lose the key all the mail becomes just as readable unless you
> deleted it all after reading/replying etc...
Right. Which is a *very* good reason to keep a backup of your key at
some other (safe) place, somewhere where it won't be stolen, burnt etc.
together with the primary copy.
Absolutely, I agree with that.
> And they
> do currently, by bringing you a big warning. I think it should remain.
Ah, oh. Well, there I think we can improve things,
by showing the nature of the cert in more glorious
detail. For browsing, it can be some bland "Self-Signed
Cert" box, whereas I *know* Verisign want to fill their
spot with a more persuasive description of the qualities
of their cert.
The difficulty with the warning is that it discourages
use, unfairly. If you believe in that warning, then
you should display an even bigger warning when they
are using HTTP.
But, I do think that we are all agreed that the warning
is generally ignored anyway by users, and should be
improved, as you say below: "They just don't read the
security warnings, they blindly click through."
> Perhaps wording can be added, to explain that if the user accepts the
> cert, there is no way it will ever get revoked. But I believe that's too
> complicated for most people to understand. Under most circumstances, the
> cert simply shouldn't be trusted period, especially by people who have
> no understanding of certs. Having an "advanced" mode for the browser
> where it's possible to change trust, and a normal mode where it's not,
> for unsophisticated users, would be a good compromise.
Sure it can be trusted! The browser should say
that you've seen this very cert X times before, and
if you want to check, these were the times. Adding
cert caching to the application is very important;
it is critical to addressing phishing, which bypasses
the cert system altogether, so it's vital to *somehow*
show the user that there is no cert involved *this time*.
(Other apps would think differently.)
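A rough sketch of the kind of cert caching I mean, in Python
(everything here is invented for illustration): remember each cert's
fingerprint per host and count the sightings, so the UI can say
"seen X times before":

  # Sketch of per-host certificate caching ("seen this cert X times
  # before"). Illustrative only; a real browser would persist the store.
  import hashlib
  import time

  class CertCache:
      def __init__(self):
          # host -> {fingerprint: [timestamps of sightings]}
          self.store = {}

      def record(self, host, cert_der):
          fp = hashlib.sha256(cert_der).hexdigest()
          sightings = self.store.setdefault(host, {}).setdefault(fp, [])
          sightings.append(time.time())
          return fp

      def describe(self, host, cert_der):
          fp = hashlib.sha256(cert_der).hexdigest()
          seen = self.store.get(host, {}).get(fp, [])
          if not seen:
              return "never seen this cert for %s before" % host
          return "seen this cert %d time(s) before, first on %s" % (
              len(seen), time.ctime(seen[0]))

  cache = CertCache()
  cache.record("example.org", b"...DER bytes...")  # placeholder bytes
  print(cache.describe("example.org", b"...DER bytes..."))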
> The logical conclusion of this is that self-signed certs, if they are
> ever to be used, should only be used by sophisticated users, who have
> means to validate the certs outside of PKI, in a manual way (!).
No, the logical conclusion is that self-signed
certs should be used by every unsophisticated user
who is not as yet sophisticated enough to demand a
CA-signed cert.
Because it's better than the most common alternative,
which is nothing. (I'm thinking here of the 99%
of servers that offer no protection, and you are
thinking of the 0.4% of servers that offer
CA-model protection.)
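Minting such a cert is cheap; a rough Python sketch of a server
generating its own at install time (the hostname and lifetime are
made up; again assuming the "cryptography" package):

  # Sketch: a server minting its own self-signed cert at install time
  # (hostname and lifetime are hypothetical; python-cryptography API).
  import datetime
  from cryptography import x509
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa
  from cryptography.x509.oid import NameOID

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.org")])
  now = datetime.datetime.utcnow()

  cert = (x509.CertificateBuilder()
          .subject_name(name)
          .issuer_name(name)              # self-signed: subject == issuer
          .public_key(key.public_key())
          .serial_number(x509.random_serial_number())
          .not_valid_before(now)
          .not_valid_after(now + datetime.timedelta(days=365))
          .sign(key, hashes.SHA256()))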
> Having an application automatically generate self-signed certs to widen
> PKI use for the masses, as you have suggested, would be a very bad
> disservice to the value of certs, because the masses could never
> understand the risks associated with using and trusting any of those
> self-signed certs. They just don't read the security warnings, they
> blindly click through.
The fact that browser manufacturers as a group have
failed to create a security UI of value is not proof
that (a) it can't be created, (b) users don't or
can't deal with security, or (c) self-signed certs
would squeeze out CA-signed certs from their job of
protecting users. I believe Nelson nailed this one
when he described how the Netscape managers tried
to get the UI to do something sensible, and the UI
programmers didn't agree, so they overrode the
security concerns.
Work done in Mozilla to show that users can understand
security UIs and UIs can present security in a way that
helps:
Ye and Smith, "Trusted Paths for Browsers",
11th USENIX Security Symposium, 2002.
Advertisement by Sean Smith <s...@cs.dartmouth.edu>:
"we also built this into Mozilla, for Linux and Windows.
http://www.cs.dartmouth.edu/~pkilab/demos/countermeasures/"
(It's been a while since I read this; I should read
it again to see that it says what I recall...)
iang