
MITM in the wild


Nelson B Bolyard

Oct 18, 2008, 2:22:59 PM
to mozilla's crypto code discussion list
In bug https://bugzilla.mozilla.org/show_bug.cgi?id=460374 the reporter
complained about how difficult it is to override bad cert errors in FF3.
She complained because she was getting bad cert errors on EVERY https
site she visited. ALL the https sites she visited were apparently
presenting self-signed certs. The example for which she provided evidence
was www.paypal.com. By the time she filed the bug, she had already
overridden the bad cert errors for all the major https sites that she
visited with any frequency, including facebook, myspace, hotmail, her
college's network servers, and more. In hacker speak, she was *owned*.
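[Editor's note: the pattern described above, every site presenting a self-signed certificate, is mechanically recognizable. A minimal sketch in Python; the cert dicts mimic the shape of `ssl.getpeercert()` output, and all hostnames and values are invented for illustration.]

```python
# Sketch (invented data): recognizing the attack pattern Nelson describes,
# where EVERY https site in a session presents a self-signed certificate.
# The dicts mimic the shape returned by Python's ssl.getpeercert().

def is_self_signed(cert):
    """A cert whose issuer equals its subject was signed by its own key."""
    return cert["issuer"] == cert["subject"]

def looks_like_blanket_mitm(certs_by_host):
    """If *every* site seen is self-signed, suspect an on-path attacker
    rather than many independently misconfigured sites."""
    return bool(certs_by_host) and all(
        is_self_signed(c) for c in certs_by_host.values()
    )

# Hypothetical session log: every host returned a self-signed leaf.
session = {
    "www.paypal.com": {"subject": ((("commonName", "www.paypal.com"),),),
                       "issuer":  ((("commonName", "www.paypal.com"),),)},
    "www.facebook.com": {"subject": ((("commonName", "www.facebook.com"),),),
                         "issuer":  ((("commonName", "www.facebook.com"),),)},
}

print(looks_like_blanket_mitm(session))  # True: suspicious across the board
```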

(Please discuss this here, not in that bug.)

Despite all the additional obstacles that FF3 put in her way, and all
the warnings about "legitimate sites will never ask you to do this",
she persisted in overriding every error, and thus giving away most of
her valuable passwords to her attacker.

None of this had triggered any suspicion in the victim. She was merely
upset that the browser made it so difficult for her to get to the sites
she wanted to visit. She was complaining about the browser.

FF3 had utterly failed to convey to her any understanding that she was
under attack. The mere fact that the browser provided a way to override
the error was enough to convince her that the errors were not serious.
I submit that the user received no real protection whatsoever from FF3 in
this case.

KCM would not have helped. If anything, it would have reduced the pain
of overriding those errors to the point where the victim would never have
cried for help, and never would have learned of the attack to which she
was a victim.

The question is: how can FF3+ *effectively* protect users like her from
MITM attackers better than FF3 has already done?

Is removal of the ability to override bad certs the ONLY effective
protection for such users?

The evolution of that UI is under discussion in bug
https://bugzilla.mozilla.org/show_bug.cgi?id=431826

Ian G

Oct 18, 2008, 3:32:58 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> In bug https://bugzilla.mozilla.org/show_bug.cgi?id=460374 the reporter
> complained about how difficult it is to override bad cert errors in FF3.
> She complained because she was getting bad cert errors on EVERY https
> site she visited. ALL the https sites she visited were apparently
> presenting self-signed certs. The example for which she provided evidence
> was www.paypal.com. By the time she filed the bug, she had already
> overridden the bad cert errors for all the major https sites that she
> visited with any frequency, including facebook, myspace, hotmail, her
> college's network servers, and more. In hacker speak, she was *owned*.
>
> (Please discuss this here, not in that bug.)
>
> Despite all the additional obstacles that FF3 put in her way, and all
> the warnings about "legitimate sites will never ask you to do this",
> she persisted in overriding every error, and thus giving away most of
> her valuable passwords to her attacker.


Yep, no surprise. FF3 tries too hard, way too hard, imho.


> None of this had triggered any suspicion in the victim. She was merely
> upset that the browser made it so difficult for her to get to the sites
> she wanted to visit. She was complaining about the browser.
>
> FF3 had utterly failed to convey to her any understanding that she was
> under attack.


I would say it slightly differently: it was clear that in her mind,
the problem was the browser, not anything else. This is because for
the last 14 years, and for the last 99.999% of times this has
happened, it is the browser that is stopping her (and everyone like
her) getting to the place she wanted to go.


> The mere fact that the browser provided a way to override
> the error was enough to convince her that the errors were not serious.


History provided her the confidence that the errors were browser
problems, not anything else. https://paypal.com/

Overrides didn't affect that situation. OTOH if you had removed the
overrides she would have switched browser. Also note, she is pretty
savvy; she compiles her own stuff, it seems.


> I submit that the user received no real protection whatsoever from FF3 in
> this case.


I agree.

> KCM would not have helped.


I agree, KCM would not have helped. In both cases, the warnings are
delivered, and the user is given the responsibility for the overrides.


> If anything, it would have reduced the pain
> of overriding those errors to the point where the victim would never have
> cried for help, and never would have learned of the attack to which she
> was a victim.


Not sure about that, but it's probably moot :)


> The question is: how can FF3+ *effectively* protect users like her from
> MITM attackers better than FF3 has already done?


It cannot. Note the above assumption that she made:

"there is no MITM, there cannot be an attack,
this stupid UI is something made up by crazy
people to annoy me."

And, to a very high confidence level, she had a good assumption. I
think there is a Bayesian logic that explains this fallacy
somewhere, to do with what happens when the false negatives rate is
too high.

The only way any tool can protect her is if the assumption itself --
that the tool is broken -- is challenged. She needs to learn that
there are MITMs. (Which she has now learnt. And, now, she will
work with the UIs ...)

Now, the broader question is about the wider public, and the wider
MITMs. Will MITMs now become a regular enough topic to be a
suitable learning experience? Will the wider public get a wider
lesson? Only time will tell, and more data; one data point doesn't
do more than tease us. Until that time, FF3's security UI is a
skyscraper built on sand.

This is the pathological problem with MITM protection that has
existed from day 1 of SSL: it was a solution in advance of a
problem. Given that the solution was theoretical, and the problem
had no practical existence (until recently), the solution could
never be trialled against a real attacker. Add in some complexity,
hello brittleness, meet shatter!

Which is to say, everything that has been done until now may have to
be re-thought ... as it moves from theoretical to practical ...
because now we face a real attacker. Assuming her attacker becomes
common, only now can we find the right balance between overrides,
convenience and protections, and losses.

(Yes, we will face losses. Real security is about losses.)


> Is removal of the ability to override bad certs the ONLY effective
> protection for such users?


No: in this case, removal of the overrides will (I speculate)
convince her to switch browser.

Yes: but only if you redesign the web to follow the principles of
security architecture ;)


> The evolution of that UI is under discussion in bug
> https://bugzilla.mozilla.org/show_bug.cgi?id=431826


Nice case study! What would be wonderful is if you could ask her to
go out and publicise her trauma.

iang

Eddy Nigg

Oct 18, 2008, 4:17:11 PM
to mozilla's crypto code discussion list
Ian G:

> Nelson B Bolyard wrote:
>>
>> Despite all the additional obstacles that FF3 put in her way, and all
>> the warnings about "legitimate sites will never ask you to do this",
>> she persisted in overriding every error, and thus giving away most of
>> her valuable passwords to her attacker.
>
>
> Yep, no surprise. FF3 tries too hard, way too hard, imho.

Quite the opposite... just imagine Firefox hadn't made it that hard
and annoying: she wouldn't have filed a bug report and we wouldn't
know.

>
> I would say it slightly differently: it was clear that in her mind,
> the problem was the browser, not anything else. This is because...

...she never saw how Firefox behaves with really secured web sites.


> for
> the last 14 years, and for the last 99.999% of times this has
> happened, it is the browser that is stopping her (and everyone like
> her) getting to the place she wanted to go.

If that were true, we wouldn't have the problems today! But it's not
true, and the browser must convince the user that something is wrong.
Otherwise it should not let the user connect at all!

>
>> The mere fact that the browser provided a way to override
>> the error was enough to convince her that the errors were not serious.

Nelson: Yes

>
>
> History provided her the confidence that the errors were browser
> problems, not anything else. https://paypal.com/

Paypal uses an EV certificate, and wildcards are not allowed. However,
Paypal could fix this by adding paypal.com to the SAN. It's their
shortcoming, not that of the browser.
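[Editor's note: the reason a wildcard cert does not cover the bare domain follows from the hostname-matching rules of RFC 2818/RFC 6125: a wildcard stands in for exactly one DNS label. A simplified sketch in Python, with invented hostnames; real matching also restricts the wildcard to the leftmost label.]

```python
# Simplified sketch of SAN hostname matching: per RFC 2818/6125 a
# wildcard covers exactly one label, so "*.paypal.com" matches
# "www.paypal.com" but NOT the bare "paypal.com" -- which is why adding
# "paypal.com" itself to the SAN list would be the fix Eddy suggests.

def matches(hostname, pattern):
    """Exact match, or a '*' wildcard standing in for exactly one label."""
    host_labels = hostname.lower().split(".")
    pat_labels = pattern.lower().split(".")
    if len(host_labels) != len(pat_labels):
        return False  # wildcard never absorbs extra or missing labels
    return all(p == "*" or p == h for h, p in zip(host_labels, pat_labels))

def valid_for(hostname, san_list):
    return any(matches(hostname, pat) for pat in san_list)

san = ["www.paypal.com", "*.paypal.com"]
print(valid_for("www.paypal.com", san))  # True
print(valid_for("paypal.com", san))      # False: the wildcard doesn't cover it
```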

>
>> I submit that the user received no real protection whatsoever from FF3 in
>> this case.

Nelson: Only either because she was playing around with her own build
and/or she never saw it functioning without being MITM'd.

>> If anything, it would have reduced the pain
>> of overriding those errors to the point where the victim would never have
>> cried for help, and never would have learned of the attack to which she
>> was a victim.

Nelson: Correct!

>
> Not sure about that, but it's probably moot :)
>

Not moot at all...

>
>> The question is: how can FF3+ *effectively* protect users like her from
>> MITM attackers better than FF3 has already done?
>

Allow connecting to such sites only after modifying about:config

>
> It cannot. Note the above assumption that she made:
>
> "there is no MITM, there cannot be an attack,
> this stupid UI is something made up by crazy
> people to annoy me."
>

Where exactly did she say that? This is YOUR assumption, not hers.

>
>> Is removal of the ability to override bad certs the ONLY effective
>> protection for such users?
>

Nelson: Yes, require editing of about:config

>
> Nice case study! What would be wonderful is if you could ask her to
> go out and publicise her trauma.
>

I did that here:
http://www.linuxtoday.com/news_story.php3?ltsn=2008-10-18-012-35-OS-CY-NT


--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: star...@startcom.org
Blog: https://blog.startcom.org

Steffen Schulz

Oct 18, 2008, 8:31:03 PM
to dev-tec...@lists.mozilla.org
On 081018 at 20:30, Nelson B Bolyard wrote:
> FF3 had utterly failed to convey to her any understanding that she was
> under attack. The mere fact that the browser provided a way to override
> the error was enough to convince her that the errors were not serious.

I find it amazing that someone shows this level of ignorance but then
manages to file a bug report... :-)


> KCM would not have helped. If anything, it would have reduced the pain
> of overriding those errors to the point where the victim would never have
> cried for help, and never would have learned of the attack to which she
> was a victim.

> The question is: how can FF3+ *effectively* protect users like her from
> MITM attackers better than FF3 has already done?

Personally, I like the idea of a 'safe mode' in the browser. Safe-mode
is very visible, provides limited scripting and https-only to a defined
set of sites. If mom wants to go banking, she's been told she has to
activate safe-mode. Otherwise banking is insecure.

It is an action that the user initiates: she tells the program when
some critical operation starts and ends. If she has to exit safe-mode
to go to a bank, then that is a very obvious decision to test her luck.


> Is removal of the ability to override bad certs the ONLY effective
> protection for such users?

No. Vista/IE7 seems to ship with various scripting deactivated by
default. So what happens? The page worked before, now it doesn't.
That's clearly a problem of the stupid new computer. So we ask the
neighbour's kid to solve this and everything 'works'...


I would, though, like some sane alternative for people who are aware
of the certificate stuff: the possibility to choose Yes/No/Ignore with
one click, and to optionally display certificate details plus KCM info
instead of a verbose warning.


/steffen

David E. Ross

Oct 18, 2008, 10:45:46 PM
to
On 10/18/2008 11:22 AM, Nelson B Bolyard wrote [in part]:
>
> Is removal of the ability to override bad certs the ONLY effective
> protection for such users?

I visit some Web sites with self-signed certificates. None of those
sites request any input from me. The only reason they have site
certificates is that the site owners want to show off how technically
astute they are. Hah! However, those sites do indeed contain
information that I want. I definitely do not want to be locked out of
them.

I have also visited sites with incorrectly configured site certificates.
In at least one situation, the owner decided to change the domain name
without getting a new certificate for the new domain. In several cases,
intermediate certificates were not installed, contrary to explicit
instructions from the certificate authorities. I definitely do not want
to be locked out of these sites either.

--
David E. Ross
<http://www.rossde.com/>

Go to Mozdev at <http://www.mozdev.org/> for quick access to
extensions for Firefox, Thunderbird, SeaMonkey, and other
Mozilla-related applications. You can access Mozdev much
more quickly than you can Mozilla Add-Ons.

Eddy Nigg

Oct 18, 2008, 11:10:21 PM
to
David E. Ross:

> I visit some Web sites with self-signed certificates. None of those
> sites request any input from me. The only reason they have site
> certificates is that the site owners want to show off how technically
> astute they are. Hah! However, those sites do indeed contain
> information that I want. I definitely do not want to be locked out of
> them.

Connect in plain text then.

>
> I have also visited sites with incorrectly configured site certificates.
> In at least one situation, the owner decided to change the domain name
> without getting a new certificate for the new domain. In several cases,
> intermediate certificates were not installed, contrary to explicit
> instructions from the certificate authorities. I definitely do not want
> to be locked out of these sites either.
>

When the visitor statistics suddenly go down, web site owners will
take action. Besides that, I think Firefox MUST do a better job in case
of missing CA certificates in the chain (yes, I'd prefer it otherwise
too, but it's too common an error to ignore).

At the end of the day, 98% of the time you'll be able to connect in
plain http. For the other 2% there must be a way to visit that site -
for you and other savvy users. Not for 99.9% of the users however. They
should not visit such sites because they don't have the knowledge to
differentiate between an attack and carelessness.

Requiring a change to about:config would facilitate your needs (because
you have the knowledge to do both - change the config and know what it
means), while still protecting the standard user who neither cares about
security nor has any clue what certificates are.
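[Editor's note: the about:config gate Eddy proposes can be sketched in a few lines. The pref name below is invented for illustration; the point is simply that the override UI never appears unless a hidden flag has been flipped by hand.]

```python
# Sketch of the about:config proposal (the pref name is hypothetical):
# the "Add exception" path is offered only when a hidden pref has been
# deliberately set, so ordinary users never see an override at all.

DEFAULT_PREFS = {"security.certerrors.allow_override": False}  # invented name

def override_available(prefs):
    """True only if the user has explicitly opted in via the hidden pref."""
    return prefs.get("security.certerrors.allow_override", False)

print(override_available(DEFAULT_PREFS))                                 # False
print(override_available({"security.certerrors.allow_override": True}))  # True
```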

Graham Leggett

Oct 19, 2008, 6:28:59 AM
to mozilla's crypto code discussion list
David E. Ross wrote:

> I visit some Web sites with self-signed certificates. None of those
> sites request any input from me. The only reason they have site
> certificates is that the site owners want to show off how technically
> astute they are. Hah! However, those sites do indeed contain
> information that I want. I definitely do not want to be locked out of
> them.

For a scary example of exactly this, read the following thread:

http://lists.gnucash.org/pipermail/gnucash-user/2008-October/027009.html

(Of course, you have to accept the self signed certificate to do so).

The page http://wiki.gnucash.org/wiki/Mailing_Lists is covered in little
padlocks giving the user the impression that some level of security
exists, when in reality there is none.

If admins are telling users that self signed certs are ok, what hope
does a user have if they are not clued up on security?

> I have also visited sites with incorrectly configured site certificates.
> In at least one situation, the owner decided to change the domain name
> without getting a new certificate for the new domain. In several cases,
> intermediate certificates were not installed, contrary to explicit
> instructions from the certificate authorities. I definitely do not want
> to be locked out of these sites either.

This is the classic balance between convenience and security.

If the next door neighbour's kid can jimmy the computer so that you can
see the sites you want to see, even though the security is broken, then
the site has no incentive to fix their security issue.

In one case a while back, a business banking portal I use ran with an
expired certificate for some months. Customers who called their helpdesk
were cheerfully told how to systematically switch off all the security
in IE6, which allowed their site to work. I was the only customer to
complain.

Regards,
Graham
--

Ian G

Oct 19, 2008, 8:09:06 AM
to mozilla's crypto code discussion list
Ian G wrote:

> Nelson B Bolyard wrote:
>> KCM would not have helped.
>
>
> I agree, KCM would not have helped. In both cases, the warnings are
> delivered, and the user is given the responsibility for the overrides.


I was thinking about this, and actually, KCM would have helped here.
If you look at the two cert viewers side by side, then there is a
clear difference:

https://bugzilla.mozilla.org/attachment.cgi?id=343662
https://bugzilla.mozilla.org/attachment.cgi?id=343663

Now, this info and the difference is available to a browser operating
in KCM mode. It would be an easy thing to display the two certs, with
the differences highlighted, perhaps in red or somesuch.

Especially, if the bad one said "Self-signed cert, can be made by
anyone" the trigger might have been there.

This approach actually works much better because the KCM and PKI
would be working together, they would augment each other's protections.

iang
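[Editor's note: the side-by-side diff Ian proposes can be sketched as a comparison over the field/value pairs a cert viewer shows; all cert values below are invented.]

```python
# Sketch (hypothetical data) of Ian's suggestion: compare the cert a site
# presents against a reference cert and surface only the fields that
# differ -- the part a browser could highlight in red.

def cert_diff(old_cert, new_cert):
    """Return {field: (old_value, new_value)} for every field that changed."""
    fields = set(old_cert) | set(new_cert)
    return {f: (old_cert.get(f), new_cert.get(f))
            for f in fields
            if old_cert.get(f) != new_cert.get(f)}

# Invented values standing in for the two certs in the bug attachments.
genuine = {"subject": "www.paypal.com", "issuer": "Example EV CA",
           "fingerprint": "ab:cd:ef:01"}
offered = {"subject": "www.paypal.com", "issuer": "www.paypal.com",
           "fingerprint": "12:34:56:78"}

for field, (old, new) in sorted(cert_diff(genuine, offered).items()):
    print(f"{field}: was {old!r}, now {new!r}")  # the lines to show in red
```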

Ian G

Oct 19, 2008, 8:50:27 AM
to dev-tec...@lists.mozilla.org
Steffen Schulz wrote:
> On 081018 at 20:30, Nelson B Bolyard wrote:
>> FF3 had utterly failed to convey to her any understanding that she was
>> under attack. The mere fact that the browser provided a way to override
>> the error was enough to convince her that the errors were not serious.
>
> I find it amazing that someone shows this level of ignorance but then
> manages to file a bugreport... :-)


And ... reformat drive, play with compilers, flags, build own
browser, switch between versions, bum off others' wireless, maintain
a login at bugzilla, make a near-perfect bug report ...

This is not your average end-user. I'll bet you a dime to a dollar
she knew precisely what the certificates are for. The general
excuse of "users are stupid" isn't going to work this time :)


>> The question is: how can FF3+ *effectively* protect users like her from
>> MITM attackers better than FF3 has already done?
>
> Personally, I like the idea of a 'safe mode' in the browser. Safe-mode
> is very visible, provides limited scripting and https-only to a defined
> set of sites. If mom wants to go banking, she's been told she has to
> activate safe-mode. Otherwise banking is insecure.


I have thought about that too, and I don't think it is going to work
for the general users. Originally I thought it would, but I think
we have crossed that Rubicon already.

I run NoScript which cuts away about 95% of the crap on most sites,
and actually makes FF run nicely, because it isn't struggling under
all that javascript crap. (It is worth it for that alone.)

However, it breaks a lot of ecommerce sites that use credit cards.
Three times now I've found that certain (big) ecommerce sites that
use credit cards totally break in the actual payment phase. I have
to close the browser, restart, retype in the transaction from
scratch, and use the nuclear button on NoScript:

Allow scripts *Globally* (Dangerous!)

before the transaction goes live. Then it goes through.

I don't know what these sites are doing, but this is far too
regular. And, NoScript is as good as it gets atm (so I am told,
opinions welcome).


> It is some action that the user initiates, she tells the program when
> some critical operation starts and ends. If she has to exit safe-mode
> to go to a bank then that is a very obvious decision to test her luck.


This unfortunately will be the case, and too many times. I have to
permit all scripting for my online bank. What is the combined sum
of these messages:

Bank uses scripting,
NoScript turns off scripting because it is dangerous,
User has to turn off NoScript ?

We have a mess. Users have a right to be confused if this is forced
on them...


>> Is removal of the ability to override bad certs the ONLY effective
>> protection for such users?
>

> No. Vista/IE7 seems to ship with various scripting deactivated by
> default. So what happens? The page worked before, now it doesn't.
> Thats clearly a problem of the stupid new computer. So we ask the
> neighbour's kid to solve this and everything 'works'...


Right. That's reality.


> I do though would like some sane alternative for people who are aware
> of the certificate stuff. The possibility to chose Yes/No/Ignore with
> one click and to optionally display certiciate details plus KCM info
> instead of a verbose warning.


I would definitely like to see KCM deployed. Both KCM and the
CA/PKI model work well enough when nothing is happening; now stuff
is happening, and we need more. Use every tool we can; hopefully
they can work together.

Other than that, I would like to figure out a nice story that says
"use Firefox for all your general browsing ... but use XXXX for your
online bank". I just don't know what XXXX is.

I liked the Google Chrome approach of separate VMs for each
tab/page. There are definite limits to how far we can expect a general
user app like Firefox to firewall itself with "quality code" without
general overflow protection ... putting hard boundaries around the
virtual site within the browser is a very good idea, I think.

Some people maintain separate Firefox installs. I've tried using
"fast user switching" in MacOSX. But these are too hard to expect
ordinary users to follow.

iang

Kaspar Brand

Oct 19, 2008, 12:25:28 PM
to dev-tec...@lists.mozilla.org
Ian G wrote:

> Steffen Schulz wrote:
>> I find it amazing that someone shows this level of ignorance but then
>> manages to file a bugreport... :-)
>
>
> [...] play with compilers, flags, build own browser,

To provide the output shown at the end of
https://bugzilla.mozilla.org/show_bug.cgi?id=460374#c0, typing
"about:buildconfig" and copy-pasting is absolutely sufficient.

> This is not your average end-user. I'll bet you a dime to a dollar
> she knew precisely what the certificates are for.

I don't think so.

Kaspar

Nelson B Bolyard

Oct 19, 2008, 2:09:09 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-19 05:09:
> Ian G wrote:
>> Nelson B Bolyard wrote:
>>> KCM would not have helped.
>>
>> I agree, KCM would not have helped. In both cases, the warnings are
>> delivered, and the user is given the responsibility for the overrides.
>
> I was thinking about this, and actually, KCM would have helped here.

No, it couldn't have. In fact, it could have been hurtful.

> If you look at the two cert viewers side by side, then there is a
> clear difference:
>
> https://bugzilla.mozilla.org/attachment.cgi?id=343662
> https://bugzilla.mozilla.org/attachment.cgi?id=343663
>
> Now, this info and the difference is available to a browser operating
> in KCM mode. It would be an easy thing to display the two certs, with
> the differences highlighted, perhaps in red or somesuch.

This was a brand new installation. Formatted hard drive, reinstalled OS,
installed browser. FIRST contact with every https site produced the
self-signed cert warning. There simply were no other certs with which
to compare. KCM would have accepted those certs without any complaint.

THEN, later, if and when she came upon the REAL server certs from the
real server, KCM would have warned her away! It would have said
"Wait, don't trust this new cert!"

And don't forget the Debian key generator. It showed us that a serious
flaw in KCM is the complete lack of any revocation mechanism.
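[Editor's note: Nelson's first-contact point can be made concrete with a tiny trust-on-first-use pin store; the fingerprints below are invented. On a freshly wiped profile the attacker's cert is the first one seen, so it gets pinned silently, and it is the genuine cert that later trips the warning.]

```python
# Sketch (invented fingerprints) of the KCM/TOFU failure mode Nelson
# describes: the MITM's cert is pinned on first contact, and the warning
# fires later, against the REAL cert.

class PinStore:
    def __init__(self):
        self.pins = {}  # host -> fingerprint first seen

    def check(self, host, fingerprint):
        """Return 'first-contact' (pin it), 'match', or 'MISMATCH' (warn)."""
        if host not in self.pins:
            self.pins[host] = fingerprint
            return "first-contact"
        return "match" if self.pins[host] == fingerprint else "MISMATCH"

store = PinStore()            # fresh profile: empty pin store
attacker_fp = "12:34:ab:cd"   # MITM's self-signed cert (invented)
real_fp = "ef:56:78:90"       # the genuine server cert (invented)

print(store.check("www.paypal.com", attacker_fp))  # first-contact: silently pinned
print(store.check("www.paypal.com", attacker_fp))  # match: attack continues quietly
print(store.check("www.paypal.com", real_fp))      # MISMATCH: warns on the REAL cert
```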

I want to drive a stake through the heart of something, too.
Can you guess what it is?

Nelson B Bolyard

Oct 19, 2008, 2:20:36 PM
to mozilla's crypto code discussion list
Eddy Nigg wrote, On 2008-10-18 20:10:

> Requiring a change to about:config would facilitate your needs (because
> you have the knowledge to do both - change the config and know what it
> means), while still protecting the standard user who neither cares about
> security nor has any clue what certificates are.

Isn't that just a few more clicks?
There are already lots of web pages telling users how to set up security
exceptions, to work around the "bug" in FF3.
Do you imagine that such pages would not also quickly arise for an
about:config setting?

Eddy Nigg

Oct 19, 2008, 2:36:07 PM
to
Nelson B Bolyard:


Yes, but the ones doing that must have better knowledge of the
browser. It still would effectively - I mean really effectively -
prevent the other 99% from accessing such a site. Just think about the
pain of finding and understanding exactly what to do in about:config.
This isn't something my parents, wife and children would do; it's
something _ME_ and _YOU_ would do.

The ones who really need it belong either to the crowd which prefers
to use self-signed certificates by all means, or to professionals who
are unfortunate enough to configure routers and other administrative
sites and applications. However, both groups know about the dangers
involved (the former don't care and mostly never understood the meaning
of certificates anyway; the latter might be aware of the complications).

Ian G

Oct 19, 2008, 6:17:59 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> Ian G wrote, On 2008-10-19 05:09:
>> Ian G wrote:
>>> Nelson B Bolyard wrote:
>>>> KCM would not have helped.
>>> I agree, KCM would not have helped. In both cases, the warnings are
>>> delivered, and the user is given the responsibility for the overrides.
>> I was thinking about this, and actually, KCM would have helped here.
>
> No, it couldn't have. In fact, it could have been hurtful.
>
>> If you look at the two cert viewers side by side, then there is a
>> clear difference:
>>
>> https://bugzilla.mozilla.org/attachment.cgi?id=343662
>> https://bugzilla.mozilla.org/attachment.cgi?id=343663
>>
>> Now, this info and the difference is available to a browser operating
>> in KCM mode. It would be an easy thing to display the two certs, with
>> the differences highlighted, perhaps in red or somesuch.
>
> This was a brand new installation. Formatted hard drive, reinstalled OS,
> installed browser. FIRST contact with every https site produced the
> self-signed cert warning. There simply were no other certs with which
> to compare.


Lol... Yes, you are right, I missed that completely. KCM can not
use the user's original validation if there was none before.

(I commented on the bug about where our views diverge so I won't
repeat that here.)


> KCM would have accepted those certs without any complaint.


Ahhh, not exactly! With KCM, it is not up to it to accept any certs
any time: unfamiliar certs are passed up to the user for validation.

If the user does not validate, then she has done a bad thing. Yes,
KCM would be at its weakest at that point, but no software tool is
perfect; at some stage we have to ask the user, and then by
definition the software is weak, dependent on the user.

> THEN, later, if and when she came upon the REAL server certs from the
> real server, KCM would have warned her away! It would have said
> "Wait, don't trust this new cert!"


Right, that too !

Although, what I suggested above is a little better than disaster in
this scenario. It would have presented her both certificates. If
she had marked the self-signed one as "good" and then noticed that
the new CA-signed "bad" one is superior because it has a brand name
sig on it, this better info displayed would still have given her
enough to deal with her "upgrade" attack.


> And don't forget the Debian key generator. It showed us that a serious
> flaw in KCM is the complete lack of any revocation mechanism.


Not sure about that one? Do you mean all the SSH servers that were
exposed to compromise because of the Debian OpenSSL random snafu?

http://xkcd.com/424/

Sure, but the comparison is of chalk and cheese. In practice, the
SSH community takes what it is given for free, and wouldn't trade
that for all the revocation in the world. Even the nice low $$$
cost of a Startcom cert -- free! -- isn't going to wrest them away
from their precious KCM, and for good reason: for that particular
application, revocation isn't worth the costs that it would add to
the solution.

(Funnily enough, I used telnet with SSL support for a year or so
back in 1995 :)


> I want to drive a stake through the heart of something, too.
> Can you guess what it is?


This one I can guess [1] :)

However, bear in mind that KCM still requires: the user has to
validate the first time. If the user does not validate, then we
have a potential problem.

Compare and contrast: CA-signed models ask the user to validate
when the certificates don't make sense to the algorithms.

If the user does not validate, or validates badly, then the world
will eventually drift to failures.

Assuming that there is a requirement to validate, built into the
system, then no tool can protect against a failure to validate.
It's part of the system.

iang

[1] but I couldn't guess the one in your essay!

Eddy Nigg

Oct 19, 2008, 6:39:33 PM
Ian G:

> If the user does not validate, then she has done a bad thing. Yes,
> KCM would be at its weakest at that point, but no software tool is
> perfect; at some stage we have to ask the user, and then by
> definition the software is weak, dependent on the user.
>

Chiming in here....

PKI wasn't meant to facilitate certificates issued from "random". PKI is
mean disallow anything it doesn't know and doesn't chain to the root. In
the browser we have many roots, but it's the browser fault to allow the
user to ignore and click all th way through to heaven...or hell. :-)

PKI is mean to be strict (avoiding the word perfect)! It's not meant to
be "maybe" valid, "possibly" chained to a root and "likely" not an MITM.
It's meant to provide a clear YES/NO answer. PKI provides what KCM can
not accomplish.

Eddy Nigg

Oct 19, 2008, 6:41:19 PM
Eddy Nigg:

> PKI wasn't meant to facilitate certificates issued from "random". PKI is
> mean disallow anything it doesn't know and doesn't chain to the root. In
> the browser we have many roots, but it's the browser fault to allow the
> user to ignore and click all th way through to heaven...or hell. :-)
>
> PKI is mean to be strict (avoiding the word perfect)! It's not meant to
> be "maybe" valid, "possibly" chained to a root and "likely" not an MITM.
> It's meant to provide a clear YES/NO answer. PKI provides what KCM can
> not accomplish.
>

Arrg.../PKI is mean disallow/PKI is meant to disallow/

Nelson B Bolyard

Oct 19, 2008, 9:31:23 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-19 15:17:
> Nelson B Bolyard wrote:

>> KCM would have accepted those certs without any complaint.
>
> Ahhh, not exactly! With KCM, it is not up to the software to accept
> any certs at any time: unfamiliar certs are passed up to the user for
> validation.

Yes, but the users are conditioned to accept all certs upon initial
presentation.

I used to think SSH's KCM model was pretty good, until someone (it was
You, actually) opened my eyes to the fact that users do not attempt to
verify key correctness, do not attempt to do out-of-band verification of
key "thumbprints" or any other reasonable verification, but instead merely
always assume that the key they get is valid, the first time they connect
to the server. When I learned that, I contacted many people who were SSH
aficionados, and they all confirmed the truth of that situation that had
been too horrible for me to even imagine until it was told to me.

So, today, I equate KCM with accepting all keys at face value, upon first
contact. That's just what the victim in bug 460374 did. I would not say
that it served her well.
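The "accept all keys at face value on first contact" behaviour is SSH's
trust-on-first-use logic, which can be sketched in a few lines of Python
(the host names and fingerprint format here are invented for
illustration; real clients persist this state in a file such as
~/.ssh/known_hosts):

```python
# Toy sketch of key-continuity / trust-on-first-use (TOFU), as debated
# in this thread.  Names and fingerprints are illustrative only.
known_hosts = {}  # hostname -> pinned key fingerprint

def tofu_check(host, fingerprint):
    """Return 'first-contact', 'match', or 'MISMATCH' for a presented key."""
    pinned = known_hosts.get(host)
    if pinned is None:
        # First contact: nothing to compare against.  This is exactly the
        # moment at issue -- users typically just accept.
        known_hosts[host] = fingerprint
        return "first-contact"
    if pinned == fingerprint:
        return "match"
    # Continuity broken: the pinned key changed.  KCM's one strong signal.
    return "MISMATCH"
```

An MITM who is present at first contact gets pinned as "good"; only a
later key change, such as the real site's certificate finally appearing,
trips the mismatch warning.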

> If the user does not validate, then she has done a bad thing.

Um, er, well, in this case, she would have done a GOOD thing, no?

>> And don't forget the Debian key generator. It showed us that a serious
>> flaw in KCM is the complete lack of any revocation mechanism.
>
> Not sure about that one? Do you mean all the SSH servers that were
> exposed to compromise because of the Debian OpenSSL random snafu?

Yes. And the 10MB file that SSH users must now drag around containing
all those bad keys, since there is no service to which they can turn for
revocation help.
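That static blocklist stands in for revocation roughly as follows; this
is an illustrative sketch only, not the actual openssh-blacklist file
format, and the key material is made up:

```python
import hashlib

# Illustrative stand-in for the Debian weak-key blocklist: a static set
# of hashes of known-compromised keys, shipped with the client.  The
# real openssh-blacklist package uses its own truncated-hash format.
WEAK_KEY_HASHES = {
    hashlib.sha256(b"predictable-debian-key-1").hexdigest(),
    hashlib.sha256(b"predictable-debian-key-2").hexdigest(),
}

def key_is_blacklisted(public_key_bytes):
    """A static list lookup is the only 'revocation' pure KCM offers."""
    return hashlib.sha256(public_key_bytes).hexdigest() in WEAK_KEY_HASHES
```

Unlike a CRL or OCSP responder, the shipped set can never be updated
after the fact, which is exactly the ongoing cost being described.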

> Even the nice low $$$ cost of a Startcom cert -- free! -- isn't going to
> wrest them away from their precious KCM, and for good reason: for that
> particular application, revocation isn't worth the costs that it would
> add to the solution.

That 10MB file that they all must drag around now is an ongoing cost
of the solution. It's a back breaker for browsers, more than doubling
the size of the browser download to include that file.

>> I want to drive a stake through the heart of something, too.
>> Can you guess what it is?

> This one I can guess [1] :)

> [1] but I couldn't guess the one in your essay!

I'm quite curious! What would you guess instead?

> If the user does not validate, or validates badly, then the world
> will eventually drift to failures.

And you have taught me well that users simply do not validate, but
merely accept all server keys at face value on initial contact.

Nelson B Bolyard

Oct 19, 2008, 10:03:37 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-18 12:32:

> This is the pathological problem with MITM protection that has
> existed from day 1 of SSL: it was a solution in advance of a
> problem. Given that the solution was theoretical, and the problem
> had no practical existence (until recently), the solution could
> never be trialled against a real attacker. Add in some complexity,
> hello brittleness, meet shatter!

Be careful not to confuse and conflict the MITM detection properties
of SSL with the MITM resistance properties of the browser UI.

As we see in this case, SSL did not fail to detect a single one of the
attacks, but the browser UI allowed the value of that detection to be lost.

Failure of browser UI is not a bad reflection on SSL, except to the extent
that people who write about this confuse the two.

Nelson B Bolyard

Oct 19, 2008, 10:10:51 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-19 05:50:
> [...] I would like to figure out a nice story that says

> "use Firefox for all your general browsing ... but use XXXX for your
> online bank". I just don't know what XXXX is.

As much as it pains me to say it, I agree. That is what is needed.

This incident has shown that FF3, with its all-too-easy-to-defeat MITM
reporting, is NOT suitable for high-value web transactions such as
online banking.

I wish (and have wished for a decade now) that Mozilla browsers WERE
those browsers that were trustworthy enough to be relied upon for high
value online transactions, such as online banking, but they are not.

If I could find a product that was suitable, I would seek to work on it.

I have little, and decreasing, desire to continue to invest in strong
security for a product that discards that security for the masses,
in exchange for acceptance by a very small number of people whose
intense desire to be entirely self reliant causes them to insist on
unverifiable security measures.

Nelson B Bolyard

Oct 19, 2008, 10:12:01 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote, On 2008-10-19 19:03:

> Be careful not to confuse and conflict the MITM detection properties
> of SSL with the MITM resistance properties of the browser UI.

s/conflict/conflate/ :(

Eddy Nigg

Oct 19, 2008, 11:15:50 PM
Nelson B Bolyard:

> This incident has shown that FF3, with its all-too-easy-to-defeat MITM
> reporting, is NOT suitable for high-value web transactions such as
> online banking.
>

FF3 is suitable for people on this list. It appears that it's not yet
suitable for the average user. At least FF3 succeeded in getting the
user to complain about the "broken" browser, which is a step forward.
The question that remains is perhaps how many undetected and unreported
events exist.

Incidentally, I'm browsing with browser.identity.ssl_domain_display set
to 1, even though I'm perhaps one of the last people who really needs
it, while the average user has to do without it... quite odd.

Jean-Marc Desperrier

Oct 20, 2008, 4:50:28 AM
Eddy Nigg wrote:
> Ian G:
>> Nelson B Bolyard wrote:
>>>
>>> Despite all the additional obstacles that FF3 put in her way, and all
>>> the warnings about "legitimate sites will never ask you to do this",
>>> she persisted in overriding every error, and thus giving away most of
>>> her valuable passwords to her attacker.
>>
>> Yep, no surprise. FF3 tries too hard, way too hard, imho.
>
> Quite the opposite...just imagine Firefox wouldn't have made it that
> hard and annoying, she wouldn't have filed a bug report and we wouldn't
> know.

As has *already* been reported on this group, *many*, *many*, *many*
users did not file a bug report until now and switched browsers instead.

You have found the single user knowledgeable enough to file a bug
report instead of switching browsers. The mozilla community absolutely
*needs* to understand that this is *not* the standard behaviour until
now. The standard behaviour of users has always been to switch browsers
and not report anything.

Jean-Marc Desperrier

Oct 20, 2008, 5:01:32 AM
Eddy Nigg wrote:
> [...]

> When the visitor statistics suddenly goes down, web site owners will
> take action.[...]

It will not go down. It's only the percentage of users using Firefox
that will go down.

Please note that we've seen *one* webmaster knowledgeable enough to
report here that the number of Firefox users had gone down on his
website as a consequence of this, with almost no reaction from the group.

Most webmasters who do that sort of stupid thing also don't care that
the number of Firefox users goes down. Most probably their reaction will
be something like: "Oh! I see that the hype around that so-called
Firefox browser is definitely dying down nowadays."

And that *one* knowledgeable enough webmaster was disappointed to see
fewer Firefox users, but not really inclined to change his website;
instead he was thinking about how Firefox should change its behaviour.

The mozilla community may think this should not happen, but the attitude
of "We won't change anything. They will end up seeing the error of their
ways" is not a very effective one.

Jean-Marc Desperrier

Oct 20, 2008, 5:10:22 AM
Graham Leggett wrote:
> David E. Ross wrote:
>> [...]

>> I have also visited sites with incorrectly configured site certificates.
>> [...]. I definitely do not want to be locked out of these sites either.

>
> This is the classic balance between convenience and security.

inconvenience != security.

inconvenience == insecurity.

At Chernobyl, the security was implemented in a very inconvenient way.

The prime reason why Western nuclear power plants are more secure is
not that they have more security than Chernobyl had.
It's that their security is much more convenient, and that's probably
the number one lesson people took away from Chernobyl:
recheck every security procedure and make sure it's easy enough to use
that people won't switch it off.
The Chernobyl disaster happened after people had switched off almost
every security mechanism, because the mechanisms were so broken and
inconvenient.

It is very hard to find a solution that's both convenient and secure.
But that's the only way. Inconvenient solutions are strongly insecure.

Jean-Marc Desperrier

Oct 20, 2008, 5:32:21 AM
Nelson B Bolyard wrote:
> [...]

> This incident has shown that FF3, with its all-too-easy-to-defeat MITM
> reporting, is NOT suitable for high-value web transactions such as
> online banking.

You know, Nelson, the reason why you are taking this the wrong way is
that you have *no* direct experience of how "average" users interact
with broken SSL sites.

Let me explain how I had the revelation that Fx 3 is broken *because*
it tries too hard to block access to web sites with invalid certificates.

It happened when one of my colleagues came to me to talk about this new
Fx 3 browser. He told me it was nice but SSL support was broken.
Broken? Yes, instead of getting to the web site, he got some error
screen, and had to run IE instead.
This was a developer with around two years of experience writing
SSL-related software.

Since then I'm definitely convinced the current Firefox method is
broken *and makes the average Joe insecure* because it blocks access to
the site (and not just the average Joe, but many users who should know
better).

Now, the answer about what to do next is not easy. But it's *not* to
block even more access to those web sites. Whilst I have no magic
bullet, it definitely lies along the lines of finding a way to *explain*
to the user *what* exactly is broken, and to provide him an effective
and easy way to check whether it's an error or an attack.

Eddy Nigg

Oct 20, 2008, 6:29:16 AM
Jean-Marc Desperrier:

> Eddy Nigg wrote:
>> [...]
>> When the visitor statistics suddenly goes down, web site owners will
>> take action.[...]
>
> It will not go down. It's only the percentage of user using Firefox that
> will go down.
>

Can you please back up your assumptions?

MY sources show clearly that both the number of web sites using
legitimate certificates and the "market share" of Firefox have gone up.
This is true both in absolute numbers and in relative percentages.

Eddy Nigg

Oct 20, 2008, 7:49:39 AM
Jean-Marc Desperrier:

> Graham Leggett wrote:
>>
>> This is the classic balance between convenience and security.
>
> inconvenience != security.
>
> inconvenience == insecurity.
>

Every time I come from shopping it's very inconvenient to put down the
shopping bags, grab for my keys and open the front door of my house.
Then pick up my bags again. After entering I have to lock the door again
(by convenience, if I want). But overall, what an inconvenience...why
did they put a door and lock there?

Ian G

Oct 20, 2008, 7:53:12 AM
to mozilla's crypto code discussion list
Eddy Nigg wrote:
> Jean-Marc Desperrier:
>> Graham Leggett wrote:
>>>
>>> This is the classic balance between convenience and security.
>>
>> inconvenience != security.
>>
>> inconvenience == insecurity.
>>
>
> Every time I come from shopping it's very inconvenient to put down the
> shopping bags, grab for my keys and open the front door of my house.
> Then pick up my bags again. After entering I have to lock the door again
> (by convenience, if I want). But overall, what an inconvenience...why
> did they put a door and lock there?


Curious! Eddy, how did you learn how to go to all that inconvenience?


iang

Jean-Marc Desperrier

Oct 20, 2008, 7:53:29 AM
Eddy Nigg wrote:
> [...]

> MY sources show clearly that both web sites using legitimate
> certificates and "market share" of Firefox has gone up. This is correct
> in real number and relative percentage wise.

The second number hardly actually proves anything. In what I describe,
users will continue to use Firefox most of the time, and switch to IE
only for broken SSL sites.

The first number is more interesting: you actually saw a statistically
significant percentage of people correcting their sites after the Fx 3
release?

But even if you prove me wrong by showing this happened as a significant
phenomenon, I'd still be worried by the phenomenon of people switching
to IE to view those sites.

My personal evidence might be anecdotal, but it's massive.
Thorsten Becker had actual numbers showing the decline in FF usage and
that switching browsers was the number one reaction of users in
http://groups.google.fr/group/mozilla.dev.tech.crypto/msg/7e1680e605ab8228

You see, saying this results in some sites correcting their SSL use is
not the end of the story if it also has the result that many users will
"just use IE".

Because the second case can lead to some very vicious attacks.

You could see an attacker who wants to propagate malware that is only
able to attack IE deliberately put a version of his site behind broken
SSL, so as to get many habitual Firefox users to use IE to access it
and be infected.

Also, saying that we need to find a way so that the number one reaction
of the average user is not to switch to IE *does not* mean we won't try
to do as much as possible so that site owners will be convinced to
correct their sites.

Eddy Nigg

Oct 20, 2008, 8:02:04 AM
Jean-Marc Desperrier:

> Broken? Yes, instead of getting to the web site, he got some error
> screen, and had to run IE instead.

Oh yes, and IE let him straight through, with no errors, no red address
bar, and no "We recommend not to visit this site", right?

> This was a developer with around two years of experience writing
> SSL-related software.

Which SSL software does he write so I can avoid it?

> Now, the answer about what to do next is not easy. But it's *not* to
> block even more access to those web sites.

At least one shortcoming must be fixed: the fetching of missing CA
certificates in the chain. Sites which are unfortunately misconfigured
but otherwise use a perfectly good certificate shouldn't be blocked;
the browser should try to build the chain by fetching the missing CA
certificates.
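Mechanically, what's being asked for is AIA chasing: follow each
certificate's issuer pointer (in real X.509, the caIssuers URL in the
Authority Information Access extension) and fetch whatever intermediates
the server forgot to send. A toy model in Python, with an invented dict
standing in for real certificate parsing and HTTP fetches:

```python
# Toy model of chain building with "fetching" of missing intermediates.
# Subjects and issuers are invented; a real implementation would parse
# the AIA extension and download the issuer certificate over HTTP.
CERT_DIRECTORY = {
    # subject -> (issuer, subject_is_trusted_root)
    "www.example.com":         ("Example Intermediate CA", False),
    "Example Intermediate CA": ("Example Root CA", False),
    "Example Root CA":         ("Example Root CA", True),  # trust anchor
}

def build_chain(leaf, max_depth=8):
    """Follow issuer links; return the chain if it reaches a trust anchor."""
    chain, subject = [leaf], leaf
    for _ in range(max_depth):
        entry = CERT_DIRECTORY.get(subject)
        if entry is None:
            return None        # issuer cert not obtainable: chain incomplete
        issuer, is_trusted_root = entry
        if is_trusted_root:
            return chain       # chain complete, no error page needed
        chain.append(issuer)   # "fetch" the missing intermediate
        subject = issuer
    return None                # give up: no trusted root within max_depth
```

The point is that only a genuinely unknown issuer, not a merely missing
intermediate, should produce an error.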

> Whilst I have no magic bullet, it definitely lies along the lines of
> finding a way to *explain* to the user *what* exactly is broken, and
> to provide him an effective and easy way to check whether it's an
> error or an attack.

Here I agree with you completely. Education is a very important part of
what we must do. I've been thinking that allowing users to click through
warnings was very bad from an educational point of view. One of the
problems is that users simply don't read; they don't care until their
passwords are stolen and their credit cards emptied.

I also agree with you that there is no magic bullet - except that we've
tried the current way of presenting warnings, error screens, etc. for
years. Maybe we should try it otherwise, because SSL does protect
against MITM attacks - that's one of its major tasks.

Jean-Marc Desperrier

Oct 20, 2008, 8:06:24 AM
Eddy Nigg wrote:
> [...]

> Every time I come from shopping it's very inconvenient to put down the
> shopping bags, grab for my keys and open the front door of my house.
> Then pick up my bags again. After entering I have to lock the door again
> (by convenience, if I want). But overall, what an inconvenience...why
> did they put a door and lock there?

The practical result of inconvenience is a threshold level that depends
on two factors: the inconvenience and the perceived threat.

Once the level of inconvenience is higher than the perceived threat,
people stop applying the security measure.

*Many* people do not lock their door after entering because the
perceived level of risk is lower than the inconvenience.

In small villages, many people do not lock their door at all.

When people stop using a security feature because the inconvenience is
higher than the perceived security, you can indeed play on two factors
to correct this: either lower the inconvenience or heighten the
perceived risk.

It's not uncommon to see some people only and systematically play the
second card. But please do consider that when the inconvenience is
something that people meet every day, it's extremely hard to play the
second card high enough that it's still effective. The only way might
be, at some point, to transform the potential risk into an actual risk
by going and using the weakness to hack their computer. But even if you
did that, you might only be showing them that you are a nasty person,
and not that it's a real risk.
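The threshold model sketched here fits in a line of code; the numeric
scales are of course invented:

```python
# Jean-Marc's threshold model: a user keeps applying a security measure
# only while the perceived threat outweighs the inconvenience.
def user_complies(perceived_threat, inconvenience):
    return perceived_threat > inconvenience
```

Either lowering the inconvenience or heightening the perceived threat
flips the outcome, which is exactly the choice of "two cards" above.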

Eddy Nigg

Oct 20, 2008, 8:08:32 AM
to mozilla's crypto code discussion list
Ian G:

>
> Curious! Eddy, how did you learn how to go to all that inconvenience?
>

LOL

Because I'm a security expert I guess :-)

Eddy Nigg

Oct 20, 2008, 8:25:01 AM
Jean-Marc Desperrier:

> The second number hardly actually proves anything. In what I describe,
> users will continue to use Firefox most of the time, and switch to IE
> only for broken SSL sites.

Believe me, I have counts of web site owners "fixing" their web sites
because of the mounting complaints they receive from Firefox users. The
second number proves that overall usage continues to grow regardless.
It's an important fact to recognize (and we would be in trouble if it
were the other way around; we'd have to do some hard thinking).

But decade-old behavior doesn't change overnight. Give it a chance!
It's just a few months since the FF3 release and the effects are
starting to show (in a positive way). We can't really judge in such a
short term, except that market share hasn't declined.

>
> The first number is more interesting: you actually saw a statistically
> significant percentage of people correcting their sites after the Fx 3
> release?

Yes, I have testimony from scores of web site owners fixing their sites
and getting third-party-issued certificates. And I'm almost certain that
the other CAs are seeing the same. It's a clear trend!

Actually it took only a few short weeks for the effect to seep in, but
the trend is very steady and growing. And note that we haven't changed
our offerings during that time.

> My personal evidence might be anecdotal, but it's massive.
> Thorsten Becker had actual numbers showing the decline in FF usage and
> that switching browsers was the number one reaction of users in
> http://groups.google.fr/group/mozilla.dev.tech.crypto/msg/7e1680e605ab8228

Well, this is a different problem and this problem will be solved.
Despite that, http://www.xitimonitor.com/ has testimony to a growing
market share of Firefox in Europe, including Germany. Go figure...

Jean-Marc Desperrier

Oct 20, 2008, 8:33:54 AM
Jean-Marc Desperrier wrote:
> Eddy Nigg wrote:
>> [...]
>> Every time I come from shopping it's very inconvenient to put down the
>> shopping bags, grab for my keys and open the front door of my house.
>> Then pick up my bags again. After entering I have to lock the door again
>> (by convenience, if I want). But overall, what an inconvenience...why
>> did they put a door and lock there?
>
> [...]

> *Many* people do not lock their door after entering because the
> perceived level of risk is lower than the inconvenience.

After writing that, I realized that there's a specific reason why I
don't lock my door after entering.

It's that someone else has perceived that problem ("people who don't
value the security brought by locking their door after entering more
than the inconvenience of locking the door") and has found a smart way
to mitigate it. The door of my apartment doesn't have an outside
handle. You can't enter without using the key.

This is a very smart solution, because if I'm outside the apartment
and want to enter, most of the time there's nobody else inside, so I
would have locked the door anyway and needed the key to open it.
If there's someone inside and I find it inconvenient to search for the
key, I can probably just call the person and ask him to open the door.

But if no one had cared about the problem, and had just said "if people
aren't stupid, they'll lock the door", I might find myself in the same
situation as when I was younger in my parents' house, where there was
an outside handle and very often the door was unlocked, even though it
did happen that we were all at the other side of the house and someone
really could have stolen something without us noticing.

Eddy Nigg

Oct 20, 2008, 8:36:30 AM
Jean-Marc Desperrier:

> The practical result of inconvenience is a threshold level that
> depends on two factors: the inconvenience and the perceived threat.

I agree with every word you said in this mail! Risk assessment is
important! I believe that we just don't agree (yet) on where to draw
the line. But if we believe that we should get to the point of
preventing users from clicking through errors (because of the risk
involved), then we are very close already. Implementation proposals may
vary, but I think that by providing better security for the AVERAGE
user, overall usability of the Internet will improve, facilitating more
business on the Internet in every respect (not only financial
transactions, but getting applications from the OS to the Internet, and
many other conveniences).

Therefore, the current inconvenience will be balanced by far greater
gains in other fields, making it a great investment instead of a burden.

Jean-Marc Desperrier

Oct 20, 2008, 8:36:47 AM
Eddy Nigg wrote:
> [...]

> Despite that, http://www.xitimonitor.com/ has testimony to a growing
> market share of Firefox in Europe, including Germany. Go figure...

I *never* claimed that this problem would lower the *general* use of
Firefox. The SSL use case is small enough that it has *no* weight when
compared to global web usage.

Ian G

Oct 20, 2008, 8:55:43 AM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> Ian G wrote, On 2008-10-19 15:17:
>> Nelson B Bolyard wrote:
>
>>> KCM would have accepted those certs without any complaint.
>> Ahhh, not exactly! With KCM, it is not up to the software to accept
>> any certs at any time: unfamiliar certs are passed up to the user
>> for validation.
>
> Yes, but the users are conditioned to accept all certs upon initial
> presentation.


Right. Users are trained to avoid the inconvenience of the security
model. Chernobyl style, as pointed out by Jean-Marc, they'll keep
going until something bad happens. Inconvenience isn't helpful in
this scenario, and adding more inconvenience isn't any more helpful.

So the question right now, assuming the Chernobyl thesis, is not how
inconvenient the Firefox UI can make it, but what is the bad thing
that will happen?

K showed us one possible future.


> I used to think SSH's KCM model was pretty good, until someone (it was
> You, actually) opened my eyes to the fact that users do not attempt to
> verify key correctness, do not attempt to do out-of-band verification of
> key "thumbprints" or any other reasonable verification, but instead merely
> always assume that the key they get is valid, the first time they connect
> to the server. When I learned that, I contacted many people who were SSH
> aficionados, and they all confirmed the truth of that situation that had
> been too horrible for me to even imagine until it was told to me.


I should keep my mouth shut sometimes :) But, really, it is very
important to look at how real users react. Our deep technical
experience makes us absorb and skip over things in very different
ways, which leads us to paint the world rosy. This is why I don't
bother to learn the about: thing, as it takes me away from user-land.


> So, today, I equate KCM with accepting all keys at face value, upon first
> contact. That's just what the victim in bug 460374 did. I would not say
> that it served her well.


Right. However, click-thru syndrome is the same, more or less, for
KCM and for PKI. Users just click through both.

How it affects the different models varies, of course, but in both
cases, users will click through.

Until Armageddon (minor): you stop offering the click-thru option,
in which case they say "it's broken" and they switch away from
Firefox. Or Armageddon (major), in which case they do get in a
mess, lose their money, say "it's broken", then switch from the
Internet.

Which do you fancy, the devil or the deep blue sea?


>> If the user does not validate, then she has done a bad thing.
>
> Um, er, well, in this case, she would have done a GOOD thing, no?


Well, indeed! We wish that she would decide not to go to that site,
and that we (the browser) have correctly picked the site as evil.
Problem is, there are many possibilities:


                            User accepts        User rejects
                            recommendation      recommendation

  Browser correctly
  says the site is evil

  Browser incorrectly
  says the site is evil

Depending on our politics, we are arguing over which square we are
in. We would all like the rosy wonderful life of being in top-left.
But, because of the history of the product, we do not really find
ourselves there, but elsewhere.

The problem here is that, depending on which square we find ourselves
in, the UI recommendations are radically different.


>>> And don't forget the Debian key generator. It showed us that a serious
>>> flaw in KCM is the complete lack of any revocation mechanism.
>> Not sure about that one? Do you mean all the SSH servers that were
>> exposed to compromise because of the Debian OpenSSL random snafu?
>
> Yes. And the 10MB file that SSH users must now drag around containing
> all those bad keys, since there is no service to which they can turn for
> revocation help.


I don't follow that. I don't know if I am doing it. But I do have
to ssh in to a handful of debian machines ... so I'll ask at the
next tech meeting. Maybe I'm just the user! Click-click, lemme in :)


>> Even the nice low $$$ cost of a Startcom cert -- free! -- isn't going to
>> wrest them away from their precious KCM, and for good reason: for that
>> particular application, revocation isn't worth the costs that it would
>> add to the solution.
>
> That 10MB file that they all must drag around now is an ongoing cost
> of the solution. It's a back breaker for browsers, more than doubling
> the size of the browser download to include that file.


I guess I'm *not* doing that...


>>> I want to drive a stake through the heart of something, too.
>>> Can you guess what it is?
>
>> This one I can guess [1] :)
>> [1] but I couldn't guess the one in your essay!
>
> I'm quite curious! What would you guess instead?


You wrote:

"It's already part of the TLS standard, and in NSS. Need I name it?"

For me, there are many authentication possibilities that might
relate to TLS:

* passwords over TLS
* secureId tokens over TLS
* client certs
* smartcards
* PSK
* SRP

In approximate order of popularity. They all suck, in their
separate ways. Curiously, one of the things that we had a chance to
experiment with over at CAcert was client certificates. They
"work", but they have some very annoying security weaknesses.


>> If the user does not validate, or validates badly, then the world
>> will eventually drift to failures.
>
> And you have taught me well that users simply do not validate, but
> merely accept all server keys at face value on initial contact.

iang, the messenger!

Jean-Marc Desperrier

Oct 20, 2008, 9:09:48 AM
Eddy Nigg wrote:
> [...]. But if we believe that we should get to the point to prevent users

> from clicking through errors (because of the risk involved) than we are
> very close already. Implementation proposals may vary, but I think that
> with providing better security for the AVERAGE user, overall usability
> of the Internet will improve and facilitate more business on the
> Internet in every respect (not only financial transactions, but getting
> applications from the OS to the Internet and many other conveniences.
>[...]

I'm more convinced by your other message that I understand as follows :

"Yes, users have been trained in the past to ignore warning and access
the site anyway, so the current solution doesn't work perfectly well for
them, *but* it is very effective to push site owner to correct their
site, so that at the end meeting an invalid site will be very rare,
which will result in users being *untrained* to ignore warnings and will
result in them considering the warning as a real security risk instead
of just an annoyance to work around".

That's the best justification I've seen for the current Fx behavior yet.

But for this to work, false positives (sites that are not an attack but
for which the user gets a warning) need to become extremely rare, and I
think some work is needed to ensure that.
Currently I'm seeing a lot of those false positives.
I have some ideas about what could be done, if not to make access to the
site easier, at least to lower the number of false positives, or to let
users understand more easily why they got a false positive, but I have
already spent too much time on this for today, so I'll present my ideas
tomorrow.

Nelson B Bolyard

Oct 20, 2008, 1:16:56 PM
to mozilla's crypto code discussion list

So, what's your point, Jean-Marc?

Do you argue that Firefox should ignore bad cert errors, or make them
utterly trivial to override, so that users will continue to use Firefox,
even if it means that they will be *owned*, as the user of bug 460374
was?

Perhaps Firefox should not even bother to report bad cert errors, then?
That would be consistent with caring only about keeping users, and not
caring about user security. Is that what you advocate?

Paul Hoffman

Oct 20, 2008, 2:22:26 PM
to mozilla's crypto code discussion list
Everybody take a deep breath. If we start treating this as black-and-white extremes, it is unlikely that most users will get the best security and usability.

Few if any of us active in this thread are HCI experts. Few of us have anything more than small amounts of anecdotal evidence. Many of us have strongly held religions about what users should want for the security we offer them.

It is quite clear that almost anything that is wanted along the spectrum of easy-and-insecure to cumbersome-and-very-secure is implementable in NSS and in software that uses NSS. It also is likely that NSS could embody many points along that spectrum and let the software decide; it would be our responsibility to choose those points wisely and to document them very well. My personal religion would have more points on the cumbersome-and-very-secure side, FWIW, but I know that there is a whole lot that I don't know.

This discussion is an important one, but it is one that should involve way more than just us. In fact, maybe we should be only minor players in the discussion, better adept at implementing what others want than at trying to lead them to the best solution for the users. I don't see the expertise here for any of us to be stating the One True Solution.

--Paul Hoffman

Nelson B Bolyard

Oct 20, 2008, 2:37:12 PM
to mozilla's crypto code discussion list
Jean-Marc Desperrier wrote, On 2008-10-20 05:33:
> Jean-Marc Desperrier wrote:
> I realized that there's a specific reason why I don't lock my door after
> entering. [...] The door of my apartment doesn't have an outside handle.
> You can't enter without using the key.

In other words, you don't have a choice. You don't need to lock your
door after entering, because your door is always locked after entering.
There is no easy way around using a key to enter. You could replace
your door with one that works differently, but you have not apparently
chosen to do so.

You seem to like it. You described it as

> This is a very smart solution,

This is exactly analogous to what Eddy has proposed for Firefox.
Yet you object vociferously to doing for Firefox what you do for your
own front door.

Nelson B Bolyard

Oct 20, 2008, 2:49:26 PM
to mozilla's crypto code discussion list
Jean-Marc Desperrier wrote, On 2008-10-20 01:50:

> As has *already* been reported on this group, *many*, *many*, *many*
> users did not file a bug report until now and switched browsers instead.

OK. So, many users who have been MITM attacked chose to defeat their
protections, and switch to a product with less security. Shall I weep?

They choose to make themselves vulnerable to their attackers.
Shall I regret that they were less vulnerable with my product?

They switch products, and they suffer consequences. Does that mean that
we should strive to ensure that they suffer those consequences with our
product also?

If I were a maker of locks, or a bank executive, and people began to
reveal that they switched from my locks/vaults to the competition,
and then were robbed blind, would I be incented to lessen my security?

Should I say "Oh, come back to my bank! We've made it easier for robbers
here, too!" ??

> You have found the one single user knowledgeable enough to file a bug
> report instead of switching browser. The mozilla community absolutely
> *needs* to understand this is *not* the standard behaviour until now.
> The standard behaviour of users has always been to switch browser and
> not report anything.

Yes, and apparently, you think we should change Firefox so that not even
this user would detect or report her attack.

Kyle Hamilton

Oct 20, 2008, 2:56:31 PM
to mozilla's crypto code discussion list
On Mon, Oct 20, 2008 at 4:49 AM, Eddy Nigg <eddy...@startcom.org> wrote:
> Jean-Marc Desperrier:
>>
>> Graham Leggett wrote:
>>>
>>> This is the classic balance between convenience and security.
>>
>> inconvenience != security.
>>
>> inconvenience == unsecurity.
>>
>
> Every time I come from shopping it's very inconvenient to put down the
> shopping bags, grab for my keys and open the front door of my house. Then
> pick up my bags again. After entering I have to lock the door again (by
> convenience, if I want). But overall, what an inconvenience...why did they
> put a door and lock there?

To keep honest people honest, and to inconvenience you in a visible
way that gives you a false sense of security. If someone really wants
to steal something from your home, they'll break a window -- which is
a much more expensive replacement than a lock or door, and much less
secure.

-Kyle H

Wes Kussmaul

Oct 20, 2008, 3:50:34 PM
to mozilla's crypto code discussion list
My good and knowledgeable friend Eddy Nigg will have a fit about my
putting into this list a link to something that is just an illustration.

Eddy, forgive me, but the folks on this list should be allowed to see a
new approach to a solution that is worth noting here.

See the bottom paragraph on this page...

http://osmio.org/cityhall_vehicles.html

...and click on the Apply link.

And please keep in mind that this is just an illustration, not a live site.

Wes Kussmaul
QE Alliance


Ian G

Oct 20, 2008, 4:28:28 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> Jean-Marc Desperrier wrote, On 2008-10-20 05:33:
>> Jean-Marc Desperrier wrote:
>
>> I realized that there's a specific reason why I don't lock my door after
>> entering. [...] The door of my apartment doesn't have an outside handle.

>> You can't enter without using the key.
>
> In other words, you don't have a choice. You don't need to lock your
> door after entering, because your door is always locked after entering.
> There is no easy way around using a key to enter. You could replace
> your door with one that works differently, but you have not apparently
> chosen to do so.
>
> You seem to like it. You described it as
>
>> This is a very smart solution,
>
> This is exactly analogous to what Eddy has proposed for Firefox.


One side is exactly analogous: the defence side. Lock it up!

The threat side is not analogous.

The difference here is that Jean-Marc's lock is in place because
there is a lot of experience with what is an appropriate, cost
effective way to deal with burglars. This has evolved over
centuries, and we really do know how to do this -- as a society.
The lock on his door is far more subtle than "just a lock."

It is a lot easier because of the history, also because of the
tangibility of the crime. When something goes missing, the average
person can draw a line from the missing spot ... to the door ... to
the perpetrator in a far off place.

When the user forgets to lock the door ... eventually someone
discovers that it is easier to have a door that can only be in the
locked state. Therefore we must all carry keys.

However, with the attack we face here, few -- and certainly not the
users -- have the first clue what is happening or how to fix it.

(e.g., we do agree that we'd like to write something that says "for
high value commerce, use XXXX" ... except we don't know what XXXX is.)


> Yet you object vociferously to doing for Firefox what you do for your
> own front door.


Yes. E.g., did you know that the point of a good lock on a door is
*not* to stop a burglar getting in, but to stop him getting out?
That's why it is called a deadbolt. The burglar can always get in,
the game is to stop him getting out the front door, carrying your stuff.

Now, if we install a deadbolt in Firefox, that means ... something
like one quarter of websites with SSL cannot be accessed.

We might agree that "the state of the world today" is annoying, but
we should also be able to see that such a drastic change will cause
more trouble than it is worth.

iang

PS: https://financialcryptography.com/ for one will be "deadbolted".
You may laugh, but will you have made me or my readers more secure?
No chance. Will you have caused mass confusion and a move across
to IE? Probably.

Ian G

Oct 20, 2008, 4:50:29 PM
to mozilla's crypto code discussion list
Kyle Hamilton wrote:
> On Mon, Oct 20, 2008 at 4:49 AM, Eddy Nigg <eddy...@startcom.org> wrote:
>> Jean-Marc Desperrier:
>>> Graham Leggett wrote:
>>>> This is the classic balance between convenience and security.
>>> inconvenience != security.
>>>
>>> inconvenience == unsecurity.
>>>
>> Every time I come from shopping it's very inconvenient to put down the
>> shopping bags, grab for my keys and open the front door of my house. Then
>> pick up my bags again. After entering I have to lock the door again (by
>> convenience, if I want). But overall, what an inconvenience...why did they
>> put a door and lock there?
>
> To keep honest people honest, and to inconvenience you in a visible
> way that gives you a false sense of security. If someone really wants
> to steal something from your home, they'll break a window -- which is
> a much more expensive replacement than a lock or door, and much less
> secure.

Ahhh... it's more subtle than that.

The purpose of a good lock is *not* to keep the burglar out.

It is more subtle; it is to *stop the burglar getting out*.

This is because the theory of burglary is that the crook can always
get in, but he needs to carry heavy stuff out. Sure, he can break a
window to get in. But can he climb out the window carrying a TV?

Most burglaries are conducted by entering through the window and
leaving through the front door.

Hence, deadbolts.

(You might validly ask then how Jean-Marc's lock works. I'm
guessing that it also has a mode to lock it "dead" on exiting, which
is easy to unlock on entering.)

iang

Paul Hoffman

Oct 20, 2008, 6:01:36 PM
to mozilla's crypto code discussion list
At 11:49 AM -0700 10/20/08, Nelson B Bolyard wrote:
>Jean-Marc Desperrier wrote, On 2008-10-20 01:50:
>
>> As has *already* been reported on this group, *many*, *many*, *many*
>> users did not file a bug report until now and switched browsers instead.
>
>OK. So, many users who have been MITM attacked chose to defeat their
>protections, and switch to a product with less security.

There is zero evidence that the people who switched were under attack. They may have been going to a site that was, in fact, self-signed. We have no way of knowing.

--Paul Hoffman

Nelson B Bolyard

Oct 20, 2008, 6:07:00 PM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-20 13:28:

> Yes. E.g., did you know that the point of a good lock on a door is
> *not* to stop a burglar getting in, but to stop him getting out?
> That's why it is called a deadbolt. The burglar can always get in,
> the game is to stop him getting out the front door, carrying your stuff.

I think you are using the term "deadbolt" to describe locks that require
a key on both the inside and the outside to lock or unlock them.

I think that is not the definition of "deadbolt" commonly used in the USA.
I wonder if that is a regional thing, US English vs. UK English, or something.

In the USA, a deadbolt lock is any lock whose "bolt" must be explicitly
locked each time the door is closed, or else it remains unlocked.
While such locks are common, typically they have a simple handle on
the "inside", and require a key only on the outside.

I suppose that makes them not "good" locks by your definition, and I
agree that the typical US deadbolt lock does not hinder egress, but
only hinders ingress.

> (e.g., we do agree that we'd like to write something that says "for
> high value commerce, use XXXX" ... except we don't know what XXXX is.)

I keep wondering about that. Lots of people seem to agree that they want
some kind of half-vast SSL, providing some encryption, but no assurance
that the party to whom they're connected is who they intended it to be.
No protection against MITM, just a warm fuzzy feeling that "well, at least
we're using encryption". I think the term "security theater" applies.

How do we give them that in a way that clearly distinguishes between that
and real authenticated connections? I think there are (at least) two parts
to the puzzle:

a) some way to convey to the browser that the EXPECTED amount of security
is low, so the browser won't try to impose all the usual high security
requirements on the connection (e.g. not impose strong authentication
requirements) and hence won't show any warnings. I'm thinking we need an
alternative to https for this.

httpst:// (security theater) maybe? or
httpwf:// (warm fuzzy) or
mitm://

b) some unmistakeable blatantly obvious way to show the user that this
site is not using security that's good enough for banking but, well,
is pretty good security theater. Flashing pink chrome?
Empty wallet icon? The whistling sounds associated with falling things?
http://www.sounds.beachware.com/2illionzayp3may/dhy/BOMBFALL.mp3

With such an alternative to regular https, we could raise the bar on https
certs (stop allowing overrides) while still offering an alternative for
those who want it.

Eddy Nigg

Oct 20, 2008, 7:36:28 PM
Nelson B Bolyard:

>
> httpst:// (security theater) maybe? or
> httpwf:// (warm fuzzy) or
> mitm://
>

LOL....I can't hold myself on the chair anymore...I'm laughing myself
kaput! Because of you I had to change my shirt and clean the keyboard
from coffee stains....Can you warn me next time upfront not to drink?!

That's the best comment I've seen for a long time! I'd vote for mitm:// :-)

> With such an alternative to regular https, we could raise the bar on https
> certs (stop allowing overrides) while still offering an alternative for
> those who want it.

Even if it sounds really funny, maybe this isn't such a bad idea after
all. Just please allow me to disable the usage of mitm:// entirely. I
don't mind editing about:config.

Robert Relyea

Oct 20, 2008, 9:08:44 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> b) some unmistakeable blatantly obvious way to show the user that this
> site is not using security that's good enough for banking but, well,
> is pretty good security theater. Flashing pink chrome?
> Empty wallet icon? The whistling sounds associated with falling things?
> http://www.sounds.beachware.com/2illionzayp3may/dhy/BOMBFALL.mp3
>
http://new.wavlist.com/movies/325/strk1-intrdra.wav
Chrome turns flashing red and yellow;).

bob

Nelson B Bolyard

Oct 20, 2008, 9:23:48 PM
to mozilla's crypto code discussion list
OK, I was too flippant, but I'm serious about wanting an alternative
to https, something that means security not good enough for financial
transactions, but OK for your private home router/server.

Nelson B Bolyard wrote, On 2008-10-20 15:07:
> Ian G wrote, On 2008-10-20 13:28:

>> (e.g., we do agree that we'd like to write something that says "for
>> high value commerce, use XXXX" ... except we don't know what XXXX is.)
>
> I keep wondering about that. Lots of people seem to agree that they want
> some kind of half-vast SSL, providing some encryption, but no assurance
> that the party to whom they're connected is who they intended it to be.
> No protection against MITM, just a warm fuzzy feeling that "well, at least
> we're using encryption". I think the term "security theater" applies.
>
> How do we give them that in a way that clearly distinguishes between that
> and real authenticated connections? I think there are (at least) two parts
> to the puzzle:
>
> a) some way to convey to the browser that the EXPECTED amount of security
> is low, so the browser won't try to impose all the usual high security
> requirements on the connection (e.g. not impose strong authentication
> requirements) and hence won't show any warnings. I'm thinking we need an
> alternative to https for this.

serious alternatives to https wanted.

> b) some unmistakeable blatantly obvious way to show the user that this
> site is not using security that's good enough for banking but,

Serious chrome ideas wanted.

Eddy Nigg

Oct 20, 2008, 9:57:44 PM
Nelson B Bolyard:

> OK, I was too flippant, but I'm serious about wanting an alternative
> to https, something that means security not good enough for financial
> transactions, but OK for your private home router/server.
>

One way of doing it is going to http://www.ietf.org/ and proposing it.

Another way could be to enable, for professionals and service personnel, a
special mode that allows configuring routers and other similar
appliances (I suggested editing about:config but there might be
better choices and ideas), while keeping the average user out of this cycle.

Incidentally, the Mozilla manifesto principles call in #4 for
"Individuals' security on the Internet is fundamental and cannot be
treated as optional." I believe that what is suggested and proposed above
is perfectly in line with - and a direct implementation of - this
principle.

Self-signed certificates are by design not validated by a third
party and are responsible for the current insecurity - and the browser,
by providing the convenience to override them, makes individuals'
security optional. One could claim that the current behavior is counter
to the Mozilla manifesto principles.

Better security will strengthen the other goals and principles of the
manifesto; it will make the browser and the Internet stronger and more
usable than ever.

Ian G

Oct 20, 2008, 10:24:17 PM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> OK, I was too flippant, but I'm serious about wanting an alternative
> to https, something that means security not good enough for financial
> transactions, but OK for your private home router/server.
>
> Nelson B Bolyard wrote, On 2008-10-20 15:07:
>> Ian G wrote, On 2008-10-20 13:28:
>
>>> (e.g., we do agree that we'd like to write something that says "for
>>> high value commerce, use XXXX" ... except we don't know what XXXX is.)
>> I keep wondering about that.


Actually above I was referring to online banking. Right now,
browsers aren't up to it. That's a big concern. I have been toying
with NoScript, but it requires too much interaction for the masses
to follow.


>> Lots of people seem to agree that they want
>> some kind of half-vast SSL, providing some encryption, but no assurance
>> that the party to whom they're connected is who they intended it to be.
>> No protection against MITM, just a warm fuzzy feeling that "well, at least
>> we're using encryption". I think the term "security theater" applies.


Well, from high to low ...

There are possibilities. One is the server-side self-signed certs,
which would generally prefer KCM to be useful, so add Petnames.
This is ok for small sites, small communities, but valuable there as
compromised boxes are a pain.

The second is the ADH style where it just boots up in promiscuous
mode and we hope the person we're with is the one we wanted.
Personally I think this is not worth worrying about; if KCM is
coming then self-signed certs would be more bang-for-buck.

The alternate aspect is that once in ADH, we could upgrade to
certificate-based security if needed. E.g., for a login; this is
more or less what all online banking does. Sometimes it then falls
back, which would be fine if the cert-based step confirmed the ADH
key exchange.

However, caveats: just idle late night thoughts. Also, this is all
pie in the sky. There is little ability to change the protocols or
get these things implemented, these things are set in stone. We
can't even get the "approved" stuff implemented, let alone new stuff.


>> How do we give them that in a way that clearly distinguishes between that
>> and real authenticated connections? I think there are (at least) two parts
>> to the puzzle:
>>
>> a) some way to convey to the browser that the EXPECTED amount of security
>> is low, so the browser won't try to impose all the usual high security
>> requirements on the connection (e.g. not impose strong authentication
>> requirements) and hence won't show any warnings. I'm thinking we need an
>> alternative to https for this.


Well, we had in the past suggested that a white URL bar would be
fine for that.

(But now that is being used for full CA-authenticated SSL.)

Also, black is a neutral colour, so invert the URL bar, perhaps?

If you wanted a padlock replacement that was kind of weak, you could
put a fig leaf ... although if drawn too small it might look like
that other popular weed :)

Actually the current display gives much more info by clicking, so
you could just put the "lowsec" rating in there, no?


> serious alternatives to https wanted.


https is generally thought to be 443 and/or SSL. Are you saying you
want to vary those?


A lot depends on what you are trying to do ("requirements") and how
much you want to re-use the existing SSL infrastructure.

For my money, I would not drift at all from the SSL infrastructure,
and I would add in an upgrade path; the goal would be to get more
people upgrading to full SSL, rather than wholescale rewrites.

Also, bear in mind the maxim: all protocols divide naturally into
two parts: "Distribute this key", then "Trust this key completely".
So, the essence is to break it into two phases: distribute the
cert or key in cleartext / http, cache it, then upgrade and encrypt.

E.g., distro the key/cert/port in the HTTP headers, or have the
client do a GET of a wellknown file. Then have the client do an
automatic upgrade.

Just some thoughts.
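The "distribute this key, then trust this key completely" split just described can be sketched in a few lines. This is a toy illustration of the caching logic only, under the assumption that the key fingerprint was fetched once in cleartext (e.g. via a GET of a well-known file, itself a hypothetical mechanism); it is not an actual protocol.

```python
# Toy sketch of the two-phase split: cache a key fingerprint at first
# contact (cleartext distribution), then require the encrypted upgrade
# to present the same key. Hostnames and fingerprints are made up.

cache = {}  # host -> first-seen key fingerprint

def first_contact(host, fingerprint):
    """Phase 1: distribute this key. setdefault() means a later,
    different fingerprint can never silently replace the cached one."""
    cache.setdefault(host, fingerprint)

def upgrade_ok(host, fingerprint):
    """Phase 2: trust this key completely -- the upgraded, encrypted
    connection must present exactly the fingerprint cached earlier."""
    return cache.get(host) == fingerprint

first_contact("router.local", "ab:cd:ef")
print(upgrade_ok("router.local", "ab:cd:ef"))  # True
print(upgrade_ok("router.local", "00:11:22"))  # False: key changed, possible MITM
```

As the thread notes, this only defends against attackers who were not present at first contact; that is the same leap-of-faith trade-off SSH-style KCM makes.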

>> b) some unmistakeable blatantly obvious way to show the user that this
>> site is not using security that's good enough for banking but,
>
> Serious chrome ideas wanted.


Yes, this part is harder.


>> With such an alternative to regular https, we could raise the bar on https
>> certs (stop allowing overrides) while still offering an alternative for
>> those who want it.


Certainly, having a series of steps up that allows most sites to
settle for "medium" and those that need it to stay at "high" is a
good idea. As long as we keep an eye on easy upgrades, then it
should help everyone.

Oh, and implement TLS/SNI. You already have? Darn, someone must be
holding back!


iang

PS: it's the httpd team at Apache. They just missed the release, again.

Kyle Hamilton

Oct 21, 2008, 12:00:26 AM
to mozilla's crypto code discussion list
https is a perfectly valid protocol, and I don't think that it should
be changed (or any aspect of it should be changed or supplanted). The
ONLY problem that exists is the chrome.

On Mon, Oct 20, 2008 at 6:23 PM, Nelson B Bolyard <nel...@bolyard.me> wrote:
>
>> b) some unmistakeable blatantly obvious way to show the user that this
>> site is not using security that's good enough for banking but,
>
> Serious chrome ideas wanted.
>

Serious chrome idea:

How about a little popup at the bottom right hand corner of the
window, that gives information on:

1) Firefox's opinion of how trusted the certificate is ('high' for EV,
'medium' for certs from vetted CAs, 'low' for self-signed and
private-label CAs, 'not at all' for any CA that hasn't been added to
the PSM by the user)
2) Who it says it belongs to (and whether Firefox considers the
information trustworthy, which it only does for EV certificates)
2a) The SubjectAlternativeName (or Subject) that the site's DNS name
validates against
3) Who says it belongs to that entity (and again, whether Firefox
considers that information trustworthy, with the same caveat -- this
should be the Issuer, not the ultimate root)
3a) The ultimate root that the certificate chains to, and how
trustworthy Firefox considers it ('very' for EV-enabled roots even if
the certificate is not marked EV, 'fairly' for non-EV roots included
in the distributed root list, 'not at all' for private-label or
self-signed CAs)
4) Information on the cipher in use for the session, and how long that
'session' has been active (with a button to clear the session to force
a full renegotiation on next connection)
5) a button to see the entire certificate chain
6) A button to dismiss the pop-up

Show this on initial connection, and on ALL pages that have forms to
submit. If someone tries to submit a form on a ('medium' or?) 'low'
opinion without dismissing the popup, shake the popup to draw
attention to it.

Maybe for EV certs, say "Firefox trusts this site for banking" in nice
green letters at or next to the dismissal button in the popup, or
"Firefox does NOT trust this site for banking" in red (or at least
not-green) lettering, in the same place.
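The four-level rating proposed in item 1 could be sketched as below. The level names mirror this message; the function and its inputs are hypothetical illustrations, not actual Firefox/PSM logic.

```python
# Illustrative sketch of the proposed trust ratings. The three boolean
# inputs are a simplification of what a browser actually knows about a
# certificate chain.

def trust_level(is_ev, in_builtin_roots, user_added_root):
    """Rate a site's certificate chain, highest trust first."""
    if is_ev:
        return "high"        # EV cert from an EV-enabled root
    if in_builtin_roots:
        return "medium"      # cert from a vetted CA in the shipped root list
    if user_added_root:
        return "low"         # self-signed or private-label CA the user added
    return "not at all"      # chains to nothing the user or vendor trusts

print(trust_level(True, True, False))    # high
print(trust_level(False, True, False))   # medium
print(trust_level(False, False, False))  # not at all
```

The popup described above would then render this level along with the subject, issuer, and cipher details, and block form submission at the lower levels until the user dismisses it.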

The user-interface features of this are:

1) Allow private-label CAs, if the client wants to
2) Make sure that the user is ALWAYS presented with the information,
rather than simply telling people to "look for the lock"
3) Increase the amount of information that is easily available to the user
4) Motion to indicate something the user really should pay attention to
5) Use the information that Firefox already has to present information
which is otherwise very close to inaccessible to the user.

I don't like modal dialogs. I don't like my browser interposing
itself into my workflow. If it's going to, I'd like to minimize the
annoyance factor that it carries with it (in this case, 'block the
form submission' is the only workflow alteration, and it's something
I'd be willing to deal with).

-Kyle H

Nelson B Bolyard

Oct 21, 2008, 12:10:31 AM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-20 19:24:

> There are possibilities. One is the server-side self-signed certs,
> which would generally prefer KCM to be useful, so add Petnames.
> This is ok for small sites, small communities, but valuable there as
> compromised boxes are a pain.

The Debian OpenSSL fiasco caused the creation of 3*65536 bad keys of
each and every conceivable size (e.g., 1024 bit, 1025 bit, 1026 bit ...).
A file was created that contained all those keys for two popular sizes,
1024 bit and 2048 bit, and when compressed, that file is about the size
of the entire browser download.

It is widely agreed that, since KCM has no central revocation facility,
the only way to effectively handle revocation is for individual KCM
clients and servers, which is to say, users, to download those enormous
files of bad keys, and check their sets of trusted keys against those
files. Tools for doing that are available to SSH users now. Users who
don't do that, who don't download and use those enormous compromised key
lists (CKLs) and their checking programs, will be forever vulnerable to
those compromised keys.
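The check described above, testing keys against a compromised-key list, can be sketched simply. Note the real Debian-era tools (e.g. openssl-vulnkey) use their own blacklist format; this sketch just assumes, for illustration, a CKL of hex SHA-256 digests of the raw public-key bytes.

```python
# Simplified sketch of a compromised-key-list (CKL) check. The
# fingerprint scheme and key bytes are illustrative assumptions, not
# the actual Debian blacklist format.

import hashlib

def fingerprint(pubkey_bytes):
    """Hex digest used as the key's identity in the CKL."""
    return hashlib.sha256(pubkey_bytes).hexdigest()

def load_ckl(lines):
    """Parse one fingerprint per line into a set for O(1) membership tests."""
    return {line.strip() for line in lines if line.strip()}

def is_compromised(pubkey_bytes, ckl):
    return fingerprint(pubkey_bytes) in ckl

bad_key = b"weak debian-era key bytes"        # stand-in for a blacklisted key
ckl = load_ckl([fingerprint(bad_key)])
print(is_compromised(bad_key, ckl))           # True
print(is_compromised(b"fresh key", ckl))      # False
```

The lookup itself is cheap; the objection in this message is about the size of the list that must be shipped and kept current, not the cost of checking it.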

Further, new KCM keys should be tested against those files before being
added to the user's trusted list. This has given rise to the proposal
to add code to do that to the browser. But the prospect of adding such
enormous CKLs to browser downloads seems to be unacceptable to nearly
everyone in Mozilla land. I think that says that KCM really must be
relegated to the uses that really don't care about MITM, not even in the
least tiny little bit.

Personally, I have no such uses. I have no need for encryption that is
vulnerable to MITM, but evidently lots of people think they do.

Ian G

Oct 21, 2008, 1:41:07 AM
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> Ian G wrote, On 2008-10-20 19:24:
>
>> There are possibilities. One is the server-side self-signed certs,
>> which would generally prefer KCM to be useful, so add Petnames.
>> This is ok for small sites, small communities, but valuable there as
>> compromised boxes are a pain.
>
> The Debian OpenSSL fiasco caused the creation of 3*65536 bad keys of
> each and every conceivable size (e.g., 1024 bit, 1025 bit, 1026 bit ...).
> A file was created that contained all those keys for two popular sizes,
> 1024 bit and 2048 bit, and when compressed, that file is about the size
> of the entire browser download.
>
> It is widely agreed that, since KCM has no central revocation facility,


KCM is not central, period. Talking about revocation is a strawman.


> the only way to effectively handle revocation is for individual KCM
> clients and servers, which is to say, users, to download those enormous
> files of bad keys, and check their sets of trusted keys against those
> files. Tools for doing that are available to SSH users now. Users who
> don't do that, who don't download and use those enormous compromised key
> lists (CKLs) and their checking programs, will be forever vulnerable to
> those compromised keys.


What's your point? Sounds to me like most of the last 1000 security
bugs. Patch it, or remain vulnerable?

It seems like you are searching for any reason to stick that stake
in the heart of KCM. Problem is, it has to be an honest stake; the
concept doesn't care if you don't like it.


> Further, new KCM keys should be tested against those files before being
> added to the user's trusted list. This has given rise to the proposal
> to add code to do that to the browser. But the prospect of adding such
> enormous CKLs to browser downloads seems to be unacceptable to nearly
> everyone in Mozilla land.


What has this got to do with KCM? Is KCM being used to create keys
now? Or are you saying that the KCM module has to now test all the
PKI keys too?


> I think that says that KCM really must be
> relegated to the uses that really don't care about MITM, not even in the
> least tiny little bit.


Nelson, you sound really bitter about this. SSH has protected
people for a decade or more. If you can't see why that is, well,
perhaps you can at least see that people are not abandoning it, and
it will be protecting for another decade.


> Personally, I have no such uses. I have no need for encryption that is
> vulnerable to MITM, but evidently lots of people think they do.


If your choice is to pay that cost, yourself, that's fine. Just be
careful that you are not one of the ones who dictate to others how
much they will pay for your choices.

iang

Nelson B Bolyard

Oct 21, 2008, 2:05:40 AM
to mozilla's crypto code discussion list
Ian G wrote, On 2008-10-20 22:41:
> Nelson B Bolyard wrote:

>> It is widely agreed that, since KCM has no central revocation facility,
>
> KCM is not central, period. Talking about revocation is a strawman.

I should have said "central revocation SERVICE". Sadly, it DOES have a
central revocation facility now, a central source for that awful 10MB
file that every KCM user must now use.

>> Further, new KCM keys should be tested against those files before being
>> added to the user's trusted list. This has given rise to the proposal
>> to add code to do that to the browser. But the prospect of adding such
>> enormous CKLs to browser downloads seems to be unacceptable to nearly
>> everyone in Mozilla land.
>
> What has this got to do with KCM? Is KCM being used to create keys
> now? Or are you saying that the KCM module has to now test all the
> PKI keys too?

If you're going to have the browser use KCM for SSL servers, then the
browser has need of a revocation method for KCM, just like SSH does,
and that presently means dragging around that 10MB file.

>> I think that says that KCM really must be relegated to the uses that
>> really don't care about MITM, not even in the least tiny little bit.

> Nelson, you sound really bitter about this. SSH has protected
> people for a decade or more. If you can't see why that is, well,
> perhaps you can at least see that people are not abandoning it, and
> that it will go on protecting them for another decade.

I know that lots of SSH users have still never downloaded the 10MB
file+program package and run it locally. Yes, I know why they cling
to SSH, even though they do not use the Debian Key Finding program/file.
It's because they don't understand the danger, and simply like the warm
and fuzzy feeling they have from using SSH in blissful ignorance.

>> Personally, I have no such uses. I have no need for encryption that is
>> vulnerable to MITM, but evidently lots of people think they do.

> If your choice is to pay that cost, yourself, that's fine.

Pay? Just what is that cost?
The cost of a cert from a free CA?

Eddy Nigg

unread,
Oct 21, 2008, 5:03:48 AM10/21/08
to
Ian G:
> Nelson B Bolyard wrote:
>> It is widely agreed that, since KCM has no central revocation facility,
>
> KCM is not central, period. Talking about revocation is a strawman.
>

I think that's the point he is making.

>
> What's your point? Sounds to me like most of the last 1000 security
> bugs. Patch it, or remain vulnerable?
>

Patching is fine, and they did patch. However, the (SSH) keys don't have
a validity period attached to them, nor can they be revoked. At least
CAs could revoke the vulnerable keys, which the CAs in fact did.

If you encounter a cert from StartCom today, you can be assured that it's
not a weak key. You can't (easily) do that with KCM, nor is there an
authority who cares and takes responsibility. Nor would Mozilla be in
a position to take over the role of the CAs. The idea of scanning for
weak keys was not feasible.

>
> What has this got to do with KCM? Is KCM being used to create keys
> now? Or are you saying that the KCM module has to now test all the
> PKI keys too?
>

Compare that to the above and you understand the difference
between having a third party and KCM. Besides that, the self-signed
certs don't provide any value...

>
> Nelson, you sound really bitter about this. SSH has protected
> people for a decade or more.

You can use PKI with SSH. Not many use it, but that's not SSH's fault.

Bernie Sumption

unread,
Nov 4, 2008, 7:04:19 AM11/4/08
to
> Is removal of the ability to override bad certs the ONLY effective
> protection for such users?

No. If we can detect MITM attacks, the problem goes away. There are
ways of detecting MITM attacks, but first of all, this is why we need
to do it:

The problem as I see it is that the same warning UI is shown whenever
there is a less-than-perfect certificate. Let us assume that 99.99% of
the time, this is either a misconfigured web server or a homebrew site
that is using self-signed certs because they only care about
encryption, not authentication. 0.01% of the time it is a MITM attack.
In the MITM scenario the UI is not harsh enough. In the common case it
is too harsh.

The important thing is that we recognise that some kind of MITM
detection is essential, no matter how hard it might be to implement,
because if you show the same UI for a MITM attack as you show for a
misconfigured/homebrew web server, even quite savvy users are going to
assume that a real MITM is a misconfiguration/homebrew.

In the event of a MITM attack, the user should be shown a huge red
warning, like the phishing and malware warnings, stating that "Firefox
has detected a man-in-the-middle attack: we think that an attacker is
intercepting your connection". Whether you let users override this can
be debated.

In the event of a misconfigured web server / homebrew site, the user
could be shown a more qualified warning: "this site uses
encryption, but can't be identified because {$REASON}. It is
difficult, but not impossible, for an attacker to see any data you
send or receive. You should not use this site for important
communications or financial transactions. Please contact the site
owners and let them know about this problem."

Here's one idea for detecting MITM attacks, but I'm not a security
expert so please don't jump on me and call me an idiot. If this way
doesn't work for some reason, I'm sure that there are other ways:

The browser could send all self-signed or invalid certificates to a
trusted MITM detection service, say https://mitm.mozilla.com. A MITM
on this site is impossible because it would have a valid certificate.
This site could inspect the certificate and use a variety of
heuristics to detect MITM attacks:
* The service could connect to the same site and check that it has the
same certificate, which obviously only works if the attacker is not in
a position to MITM the trusted server too (if the attacker is on the
same network as the host, they can MITM any client on the Internet).
* The service could use a community-based approach, as used by phishing
detection, to report MITM attacks mounted so close to the target host
that they can MITM the entire Internet.
* There could be some kind of opt-in way (through a DNS record?) for a
site to specify a MITM policy, so banks could state that anything but
a properly signed certificate is treated as an MITM.
* Any other ideas?

Care would need to be taken with privacy, but if this approach works
with phishing, why not MITM?
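The first heuristic above (a trusted service re-fetching the certificate from a second vantage point and comparing) can be sketched roughly as follows. This is only an illustration of the comparison logic, not Firefox code; the function names and the `verdict` helper are invented for the example:

```python
import hashlib
import socket
import ssl

def fetch_cert_fingerprint(host, port=443, timeout=10):
    """Fetch the certificate the server presents to *this* vantage point
    and return its SHA-256 fingerprint (hex)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we only want the raw DER bytes
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

def verdict(client_fp, notary_fp):
    """Compare what the client saw against what the notary saw."""
    return "match" if client_fp == notary_fp else "possible-mitm"
```

A real service would have to query several vantage points, since content distribution networks legitimately serve different certificates from different locations, so a single mismatch is not proof of an attack.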

Graham Leggett

unread,
Nov 4, 2008, 7:19:58 AM11/4/08
to mozilla's crypto code discussion list
Bernie Sumption wrote:

> The problem as I see it is that the same warning UI is shown whenever
> there is a less-than-perfect certificate. Let us assume that 99.99% of
> the time, this is either a misconfigured web server or a homebrew site
> that is using self-signed certs because they only care about
> encryption, not authentication. 0.01% of the time it is a MITM attack.
> In the MITM scenario the UI is not harsh enough. In the common case it
> is too harsh.

The trouble is that there is no way to distinguish between an MITM and an
admin who installed a self-signed certificate. The difference between
the two is the intention of the person who put the certificate there,
and this is impossible to measure using a computer.

The problem right now is that this is the message that the end user
currently sees:

+--------------------------------------+
| BLAH BLAH BLAH |
| Blah blah blah, blah blah, fishpaste |
| blah, call your nephew, he'll fix it |
+--------------------------------------+

Perhaps icons should be better used, as they are easier for an end user
to understand.

A great big red STOP sign perhaps, or some indicator that clearly says
"no really, do not continue".

In order for this icon to remain undiluted, and so not lose its
meaning, the icon should not be used anywhere else in the browser.

Regards,
Graham
--

Nelson B Bolyard

unread,
Nov 4, 2008, 11:06:31 AM11/4/08
to mozilla's crypto code discussion list
Bernie Sumption wrote, On 2008-11-04 04:04:
>> Is removal of the ability to override bad certs the ONLY effective
>> protection for such users?
>
> No. If we can detect MITM attacks, the problem goes away.

It does?

The absence of a detectable MITM attack does not prove the identity of
the server.

> There are ways of detecting MITM attacks,

There are ways of detecting SOME MITM attacks on SOME servers: those
attacks that affect only a limited portion of the Internet, mounted
against servers that are not part of content distribution networks.

The methods currently proposed also have the problem that they interfere
with so-called content distribution networks (like Akamai, for one).
They may detect MITMs when no MITM is in effect, simply because different
servers rightfully act as www.foo.com in different parts of the Internet.

> The important thing is that we recognise that some kind of MITM
> detection is essential, no matter how hard it might be to implement,
> because if you show the same UI for a MITM attack as you show for a
> misconfigured/homebrew web server, even quite savvy users are going to
> assume that a real MITM is a misconfiguration/homebrew.

If you could implement a perfect MITM detection service, that would be
of some value. But an imperfect MITM detection service simply becomes
the favorite new target of attackers.

A perfect MITM detection service is useful in that if it detects an MITM
then that might be a basis upon which to stop the client cold. But in
the absence of such detection, there is still no proof that the cert
accurately identifies the party it claims to identify. Trouble is,
users will learn to treat the absence of a definitive MITM detection as
if it WAS proof of the server's identity.

Eddy Nigg

unread,
Nov 4, 2008, 1:51:39 PM11/4/08
to
On 11/04/2008 02:04 PM, Bernie Sumption:

>
> The problem as I see it is that the same warning UI is shown whenever
> there is a less than perfect certificate. Let us assume

The concept of SSL certificates isn't based on assumptions! Nor does
the cryptographic library assume things; it makes decisions.

How about dropping encryption with certain web sites because the browser
assumes them to be more or less important to secure?

>
> The important thing is that we recognise that some kind of MITM
> detection is essential,

There is a very specific MITM detection tool already in wide use...and
it's not "some kind"...it works, it does the job, and many have invested
in it (from NSS up to the Mozilla Foundation to the CAs). It has very
clear rules to follow, and detection is 100% guaranteed. Those very
specific rules rule out self-signed certificates. Is that so hard to get?

> The browser could send all self-signed or invalid certificates to a
> trusted MITM detection service, say https://mitm.mozilla.com. A MITM
> on this site is impossible because it would have a valid certificate.

I know you brought it up somewhere on Bugzilla....go ahead and implement
it. Obviously mitm.mozilla.com will be the first target to attack, in
order to invalidate the service by making it send out false positives,
up to the point where it would become unreliable.

> * Any other ideas?

Yes, how about simple, available x.509 certificates from a big range of
CAs fitting every pocket and taste? :-)

Bernie Sumption

unread,
Nov 6, 2008, 6:57:20 AM11/6/08
to
Graham, Nelson, Eddy, you all make good points.

I'll take your word for it that it's impossible to detect MITM attacks
with 100% reliability, as I said I'm not a security expert.

How about an MITM detection service that gives no false positives, but
might give false negatives? If you positively identify an MITM attack,
you can present users with a much more definite UI saying "this *is*
an MITM attack" and giving advice about what to do in the event of an
MITM.

I'm not talking about fixing all the problems for all the users, just
a real improvement for a proportion of users.

For example, one could give site owners a way of specifying that their
domain must not be accessed if it presents a self-signed certificate.
Paypal.com would no doubt take this option, as would any large bank.
If the method were made easy enough, so might other sites like facebook.
Two possible methods that don't require a detection service
(mitm.mozilla.org) would be a DNS record (doesn't work if the attacker
has compromised DNS) or a subdomain naming convention (e.g.
secure.example.com requires a valid certificate, which presents adoption
issues for existing sites).

This would likely have stopped the original bug poster from revealing
her password.
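The decision rule such an opt-in policy implies is simple enough to sketch. The policy values and function name below are invented for illustration; a real design would also have to pin down how the policy itself is delivered securely:

```python
# Hypothetical policy values a site might publish, e.g. in a DNS record:
#   "strict"  -> never allow the user to override a bad certificate
#   "default" -> current behaviour (warn, but allow override)

def override_allowed(cert_ok, policy):
    """Decide whether the bad-cert override UI may be offered at all."""
    if cert_ok:
        return True  # nothing to override
    return policy != "strict"
```

Under this rule a bank publishing "strict" would make the override screen simply unavailable on its domain, which is the behaviour Paypal would presumably want.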

> If you could implement a perfect MITM detection service, that would be
> of some value. But an imperfect MITM detection service simply becomes
> the favorite new target of attackers.
>
> A perfect MITM detection service is useful in that if it detects an MITM
> then that might be a basis upon which to stop the client cold. But in
> the absence of such detection, there is still no proof that the cert
> accurately identifies the party it claims to identify. Trouble is,
> users will learn to treat the absence of a definitive MITM detection as
> if it WAS proof of the server's identity.

I can see how there are philosophical reasons to avoid any MITM
detection even if it gave no false positives, because a false negative
would be interpreted as an "all clear".

However, if the user who filed the bug report is anything to go by,
people are already misinterpreting real MITM attacks as false
positives. Making the error screen scarier for all errors won't fix
this because users will just learn that the new scarier screen is what
false positives now look like. Introducing a new screen that has a far
lower rate of false positives seems a reasonable thing to try.

> I know you brought it up somewhere on Bugzilla....go ahead and implement it.

Implement what? There's no proposal yet, I'm just trying to start a
constructive discussion. If there is interest in implementing
something resembling my suggestions, I'll pitch in as much as my
schedule and ability allow.

Nelson B Bolyard

unread,
Nov 6, 2008, 3:20:31 PM11/6/08
to mozilla's crypto code discussion list
Bernie Sumption wrote, On 2008-11-06 03:57:
> Graham, Nelson, Eddy, you all make good points.
>
> I'll take your word for it that it's impossible to detect MITM attacks
> with 100% reliability, as I said I'm not a security expert.
>
> How about an MITM detection service that gives no false positives, but
> might give false negatives?

I don't think that's possible, either.

It is possible on the Internet to set up different physical servers
around the globe, all of which appear to users on different parts of
the Internet to be the same server. This technology can be used for
good or for evil. It is my understanding that this is how "Content
Distribution Networks" like Akamai work. But obviously it can also
be used to perform MITM attacks.

The only difference between a CDN server and an MITM attacker is the
presence or absence of authorization given to the alternative site
operator by the true and rightful owner of the site. I doubt that the
presence of that authorization can be detected by the likes of
Perspectives.

> If you positively identify an MITM attack, you can present users with a
> much more definite UI saying "this *is* an MITM attack" and giving advice
> about what to do in the event of an MITM.

If we create an error display that says "No kidding, this absolutely
is an attack and we're stopping you cold to protect you from it."
it seems unavoidable that users will learn to treat the absence
of such an unbypassable error display as proof to the contrary,
proof that the site is genuine and verified.

Do we want to train them that way?

Nelson B Bolyard

unread,
Nov 6, 2008, 3:41:27 PM11/6/08
to mozilla's crypto code discussion list
What curious things do you notice about these certs?

Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224169969 (0x48f759f1)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=unaportal.una.edu,O=University of North Alabama"
Validity:
Not Before: Fri May 18 00:00:00 2007
Not After : Sun May 17 23:59:59 2009
Subject: "CN=unaportal.una.edu,O=University of North Alabama"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
24:58:0b:ec:78:87:81:c4:d5:25:8b:b1:3e:30:a6:0f:
ae:02:8a:d0:e9:5e:15:b8:ba:37:e2:b0:70:1e:3d:f5:
3f:31:b1:fe:af:b7:dc:e4:c2:9c:9d:fb:f1:80:8e:18:
7c:e8:3b:d4:00:24:28:1f:7f:43:e5:53:ea:40:39:44:
68:af:9e:10:94:c6:c2:31:d6:04:84:a9:1c:ef:a8:9e:
47:19:c1:2c:29:a8:3f:14:2c:2c:4f:49:f1:85:27:06:
a3:85:73:3b:18:70:87:11:aa:02:43:f1:64:ee:41:80:
27:e3:a3:95:34:22:10:26:ce:9f:21:db:32:eb:66:ee

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1225207685 (0x49072f85)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=*.govdelivery.com,O="GovDelivery, Inc""
Validity:
Not Before: Sat Feb 24 00:00:00 2007
Not After : Mon Feb 23 23:59:59 2009
Subject: "CN=*.govdelivery.com,O="GovDelivery, Inc""
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
90:67:64:97:95:11:35:4b:43:30:19:ba:24:69:3a:03:
23:6f:33:ac:0f:bc:2c:52:7b:b8:81:8d:e9:51:c5:f8:
72:db:bf:54:5d:c1:3d:dd:42:75:89:c8:a4:c7:0d:5a:
38:23:d8:70:5e:85:a0:7d:80:e6:a1:38:e5:97:48:a3:
c2:28:90:6b:ef:7c:9d:20:89:b2:30:04:e4:67:36:01:
c9:05:b9:1b:eb:3c:9f:7c:ec:94:c8:4d:04:9e:ff:9b:
68:3d:a5:72:9a:a8:8f:73:b5:41:0e:e8:fe:2e:d5:3a:
29:80:0b:32:3f:c7:64:9a:05:f8:e2:49:36:bc:c2:87

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224197811 (0x48f7c6b3)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=login.yahoo.com,O=Yahoo! Inc."
Validity:
Not Before: Wed Jan 04 17:09:06 2006
Not After : Tue Jan 04 17:09:06 2011
Subject: "CN=login.yahoo.com,O=Yahoo! Inc."
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
07:33:d2:77:35:11:10:31:72:6c:01:65:46:59:36:8a:
1d:d1:2d:fd:61:74:3a:50:c2:0c:a7:3d:3d:29:7e:3a:
01:64:28:92:a6:98:64:a8:23:64:55:d4:cf:5c:c3:df:
dd:6e:21:a9:59:02:9d:ec:be:bc:86:eb:18:54:63:85:
f8:de:65:c1:e4:44:92:e3:6f:97:f8:f8:34:eb:97:58:
f1:0e:5f:d8:3c:7e:b0:91:62:d3:56:f6:90:35:9f:55:
62:d7:78:c7:cd:0c:64:97:23:6b:c3:5e:92:83:5c:e0:
4c:59:16:10:0f:1b:77:0b:a4:5a:b9:fd:c3:3c:12:b9

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1225208630 (0x49073336)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=edit.yahoo.com,O=Yahoo! Inc."
Validity:
Not Before: Fri Apr 25 15:41:54 2008
Not After : Wed May 26 15:41:54 2010
Subject: "CN=edit.yahoo.com,O=Yahoo! Inc."
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
4a:41:64:98:50:67:93:1a:05:d7:d3:a2:3c:b2:63:89:
13:5b:a5:e0:bf:2f:1a:a3:ca:d1:d5:bb:7d:9c:ed:4d:
ee:ca:38:6f:49:33:74:98:d5:a2:19:01:d1:61:39:ef:
b5:cb:22:b5:74:fa:df:35:ea:42:90:32:d0:e1:d3:df:
19:88:2a:dc:79:4d:95:c6:d4:a5:2b:62:ed:38:d5:87:
cd:0c:b0:ae:fd:57:7b:6d:7c:e9:3f:cc:03:cb:23:5e:
b1:1e:c2:20:a2:33:8c:7c:10:50:45:f4:5f:7e:74:cb:
7d:d8:4e:cd:c7:f3:2d:b4:5c:74:b3:82:65:88:84:33

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224169923 (0x48f759c3)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=wireless.una.edu,O=University of North Alabama"
Validity:
Not Before: Fri Jan 11 00:00:00 2008
Not After : Sun Jan 10 23:59:59 2010
Subject: "CN=wireless.una.edu,O=University of North Alabama"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
4f:31:b0:91:c9:01:06:0f:d0:32:21:88:e8:df:da:a1:
fd:83:56:cb:1e:ab:e6:e0:f4:11:e0:51:56:55:28:09:
09:78:f5:d5:7b:cf:6c:a3:ea:d4:97:c6:ea:bb:14:c2:
52:fc:74:9b:cd:0f:91:9f:10:f4:06:32:72:bd:15:a0:
36:e9:db:15:cd:08:fb:a5:9b:ad:07:44:f7:71:1c:1d:
64:10:cc:91:36:1d:95:1d:da:3e:49:04:47:c2:36:88:
16:d5:29:03:fe:67:4c:6a:8a:33:b8:bb:61:8a:5f:fd:
52:98:d6:22:d1:19:ea:38:f5:93:ed:f0:57:cc:e2:c5

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224211462 (0x48f7fc06)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=*.wireless.att.com,O=AT&T"
Validity:
Not Before: Thu Mar 13 15:48:05 2008
Not After : Fri Mar 13 15:48:05 2009
Subject: "CN=*.wireless.att.com,O=AT&T"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
a1:1f:7a:63:e0:74:97:fa:1a:de:cf:b4:2a:9b:87:3f:
1a:6c:99:8e:5c:f3:d8:d5:b5:be:6c:78:75:d7:c7:bd:
81:8f:5d:3e:3d:b1:23:48:4d:c3:20:e3:38:bf:8d:28:
57:ad:79:93:ba:c7:48:66:97:2f:b1:d4:17:fc:a4:bc:
2c:9a:e3:49:cc:91:ca:3f:2a:49:d6:8c:44:67:fe:cc:
e6:41:2b:cf:85:e1:9e:e4:16:0a:88:3a:39:a1:7d:0d:
13:49:95:8b:d5:50:f0:80:46:f4:77:32:b1:c1:3a:31:
09:b3:10:23:29:94:60:af:54:03:91:5e:2c:ef:4b:06

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1225288698 (0x49086bfa)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=register.go.com,O=Disney Enterprises"
Validity:
Not Before: Thu Oct 02 05:39:09 2008
Not After : Fri Oct 02 05:39:09 2009
Subject: "CN=register.go.com,O=Disney Enterprises"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
0a:4f:12:79:3f:d6:8f:32:63:91:70:95:ae:69:31:09:
a4:e6:2b:69:1a:b7:bf:ab:bc:8d:d5:a6:df:f8:e2:17:
1d:05:7b:38:5d:dd:c3:51:f2:6f:88:f0:d8:5a:9e:0d:
62:80:1a:cb:9d:1d:f0:60:9f:8f:78:2e:a0:fd:31:ed:
7c:28:58:f1:3d:06:bf:e5:f9:7c:c9:7c:55:6f:a4:67:
7c:33:a2:cf:b4:87:04:9e:a0:6b:3d:8d:2a:1e:88:c1:
a9:86:5c:6f:3b:84:45:4b:f8:73:be:a7:dd:f0:75:d4:
a4:e1:14:dc:00:46:a5:16:b6:61:cc:46:fb:8f:16:e1

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224127195 (0x48f6b2db)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=login.live.com,O=Microsoft Corporation"
Validity:
Not Before: Thu Jun 19 00:00:00 2008
Not After : Mon Jul 20 23:59:59 2009
Subject: "CN=login.live.com,O=Microsoft Corporation"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
27:3a:b5:98:6e:ea:5e:62:e9:f5:3a:9c:03:dd:47:16:
60:4f:01:7a:00:a2:c6:13:32:41:54:1e:7f:99:5a:e6:
75:82:0c:d6:61:86:ab:66:3a:97:6c:82:82:a4:1c:46:
da:78:fb:df:48:c9:da:fa:19:5f:a6:8f:60:18:3c:58:
6a:83:5c:1e:a2:6d:29:55:e2:b2:62:f3:ae:df:d8:d7:
15:af:52:c9:b2:27:59:28:a5:01:ee:53:48:2c:87:a6:
10:b1:0e:a1:26:f8:eb:c7:61:40:68:54:ca:63:9f:3f:
cb:91:b3:0b:13:ea:26:51:95:b1:cd:c7:a5:b1:d1:5e
4D:68:80:AA:00:8F:59:E2:1B:FF:DA:12:DD:59:29:30
4D:07:A2:03:F3:0E:9F:F7:7A:73:F7:28:33:80:DC:63:90:7B:51:3B

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224170001 (0x48f75a11)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=*.mozilla.org,O=Mozilla Corporation"
Validity:
Not Before: Mon Dec 10 18:02:33 2007
Not After : Thu Dec 10 18:02:33 2009
Subject: "CN=*.mozilla.org,O=Mozilla Corporation"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
3e:28:5f:a5:35:0c:b0:fd:21:ea:ac:7b:3d:00:e7:0b:
6f:fa:11:fc:f4:ed:3b:19:17:89:b1:47:8e:01:6d:47:
96:35:87:48:26:72:fb:df:36:61:d2:bf:a8:40:ab:c8:
97:24:67:cc:59:17:40:ea:32:32:5f:bb:24:dd:e0:36:
3e:0c:4f:26:82:bd:e9:4c:3e:e4:5e:f5:f0:f2:0d:79:
79:29:a6:95:dd:79:18:2c:dd:2a:30:a8:67:e7:67:63:
f5:2d:43:85:d5:b9:b2:29:20:61:7d:f4:48:44:ab:6f:
50:91:c9:35:3b:fa:1b:10:3f:6e:97:a7:aa:fb:c1:81
38:7E:0F:8D:3C:B8:67:96:D2:BC:21:77:E7:96:40:9F
10:3E:59:AC:F6:81:38:5E:4C:03:89:9C:11:01:D3:7F:B1:56:8C:61

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1224115668 (0x48f685d4)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=login.facebook.com,O=login.facebook.com"
Validity:
Not Before: Wed Dec 13 07:31:49 2006
Not After : Tue Jan 12 07:31:49 2010
Subject: "CN=login.facebook.com,O=login.facebook.com"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
ac:5f:5a:12:29:0f:ac:a2:81:5a:c4:06:8a:4f:5d:ff:
f7:33:9e:89:39:9e:6a:fc:bc:36:7a:62:54:27:37:e5:
02:d5:ef:81:3a:c6:d0:56:42:db:83:3b:0c:89:78:95:
a5:5a:56:79:02:4e:62:29:ca:d4:a5:5b:86:d9:ff:6b:
6a:5c:8d:45:aa:4a:6f:35:65:a4:08:7d:09:3e:60:b5:
52:12:c4:7b:49:a6:19:5b:5c:69:04:fb:3b:28:45:f1:
5b:16:1a:f5:a0:af:ae:9d:63:f9:69:c7:b6:e7:2a:9f:
60:64:c1:0b:f8:de:00:26:92:12:eb:6a:40:ee:dd:b3
0C:74:29:98:79:87:1C:00:33:14:83:DC:2B:30:74:65
BD:52:8B:11:28:9B:6B:09:D3:5C:55:D3:2F:84:6E:11:A2:83:C1:A1

=============== ============
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1225208188 (0x4907317c)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Issuer: "CN=register.facebook.com,O=register.facebook.com"
Validity:
Not Before: Thu Feb 08 23:11:02 2007
Not After : Wed Mar 10 23:11:02 2010
Subject: "CN=register.facebook.com,O=register.facebook.com"
Subject Public Key Info:
Public Key Algorithm: PKCS #1 RSA Encryption
RSA Public Key:
Modulus:
b3:7a:4d:3d:3b:5d:1a:55:52:90:ca:45:0b:40:d4:c9:
ce:ba:95:64:3a:e7:0f:bc:a7:98:00:15:b7:46:d8:3f:
ae:0a:0c:53:92:b2:56:96:4e:bb:2e:95:ab:1f:cd:c5:
b2:1b:ca:d5:dd:58:89:ac:b9:7e:93:f6:81:ac:e2:ab:
71:fc:2f:42:8f:84:e4:f1:b7:18:6e:73:5f:cc:33:b2:
8d:6c:b2:d3:5a:aa:7f:79:4a:82:33:81:84:7d:1c:bb:
04:88:aa:8e:ab:f2:0c:4f:21:f8:58:89:45:42:95:6d:
3d:4b:a9:97:f1:4a:3b:1e:6f:84:3d:40:d5:c6:88:e3
Exponent: 65537 (0x10001)
Signature Algorithm: PKCS #1 MD5 With RSA Encryption
Signature:
60:ee:f1:c8:e0:37:6b:03:00:eb:8b:ba:ca:0b:c3:eb:
fa:10:05:22:eb:6d:1c:56:b0:ad:91:2e:17:0f:77:b0:
78:8c:6d:3f:dd:f5:03:51:b1:0e:c9:48:7f:b2:8b:4c:
84:cd:15:5b:31:27:1c:b6:bf:06:13:5a:f8:bc:a4:99:
7a:e6:88:b3:9c:a7:db:5b:2e:08:97:e2:d6:70:e4:9d:
98:5f:d0:31:e4:f7:40:35:21:79:d7:ac:dd:e1:6d:7f:
b5:97:dc:28:b2:1f:10:2f:fb:43:9b:ab:eb:a9:45:f0:
53:be:85:e0:8d:f4:75:10:dc:68:0c:2b:22:03:d7:65

Ian G

unread,
Nov 6, 2008, 3:48:19 PM11/6/08
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> What curious things do you notice about these certs?


Only one key? All have same Issuer + Subject?

iang

Kyle Hamilton

unread,
Nov 6, 2008, 3:49:00 PM11/6/08
to mozilla's crypto code discussion list
Aside from the fact that they all claim to be issued by themselves,
and that the key modulus is the same across all of them?

Perhaps the fact that they're all version 3 certificates that don't
show any version 3 extensions, such as "keyUsage" and
"extendedKeyUsage"?

Should there be a check to make sure that disparate sites aren't using
the same public key modulus/exponent?

-Kyle H

On Thu, Nov 6, 2008 at 12:41 PM, Nelson B Bolyard <nel...@bolyard.me> wrote:
> What curious things do you notice about these certs?
>

> _______________________________________________
> dev-tech-crypto mailing list
> dev-tec...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>

Kyle Hamilton

unread,
Nov 6, 2008, 3:49:32 PM11/6/08
to mozilla's crypto code discussion list
...and they're all using MD5?

-Kyle H

On Thu, Nov 6, 2008 at 12:48 PM, Ian G <ia...@iang.org> wrote:
> Nelson B Bolyard wrote:
>>
>> What curious things do you notice about these certs?
>
>

> Only one key? All have same Issuer + Subject?
>
> iang

Julien R Pierre - Sun Microsystems

unread,
Nov 6, 2008, 4:41:50 PM11/6/08
to mozilla's crypto code discussion list
Kyle,

Kyle Hamilton wrote:

> Should there be a check to make sure that disparate sites aren't using
> the same public key modulus/exponent?

That would be fairly hard to implement reliably.

Currently, we don't persist end-entity certs of web sites in general in PSM.

Even if we did, what is the likelihood that one individual browser would
have visited all those sites and be able to detect the duplication?

Those are problems that should be dealt with by revocation, which is not
a process that works for self-signed certs.
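Kyle's proposed check could in principle look like the sketch below: remember which hosts presented which public key, and flag a key that shows up across unrelated domains. This is only an illustration; the class and the crude base-domain heuristic are invented here, and as Julien notes, a single browser rarely sees enough sites for such a check to fire.

```python
from collections import defaultdict

def base_domain(host):
    # Crude registrable-domain heuristic; a real implementation would
    # consult the Public Suffix List.
    return ".".join(host.split(".")[-2:])

class KeyReuseDetector:
    """Flag a public key that appears on unrelated hostnames: the
    signature of the single-key MITM proxy seen in this thread."""

    def __init__(self):
        self._hosts_by_key = defaultdict(set)

    def observe(self, hostname, pubkey_fingerprint):
        self._hosts_by_key[pubkey_fingerprint].add(hostname)

    def suspicious_keys(self):
        # One key shared inside a single domain (SAN certs, CDNs) is
        # common; one key across unrelated domains is not.
        return {fp: hosts
                for fp, hosts in self._hosts_by_key.items()
                if len({base_domain(h) for h in hosts}) > 1}
```

Fed the certs from Nelson's dump, every hostname maps to the one shared modulus, so the key is flagged as soon as a second unrelated domain appears.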

Nelson B Bolyard

unread,
Nov 6, 2008, 5:25:58 PM11/6/08
to mozilla's crypto code discussion list
Ian G wrote, On 2008-11-06 12:48:
> Nelson B Bolyard wrote:
>> What curious things do you notice about these certs?
>
> Only one key?

Yup. That's the biggie. It allows the MITM to get by with just a
single private key.

> All have same Issuer + Subject?

Yeah, all self signed. All DNs consist of CN=<something>,O=<something>
attributes, in that order, and the values of those attributes come from
the real https server's cert Subject name. All other attributes from
the real server's cert subject name are lost.

The Validity period dates also come straight from the real server cert.

The 32-bit serial numbers are actually Unix time_t's (count of seconds
since midnight Jan 1, 1970 UTC). I believe they show the time the cert
was created.

1224115668 Wed Oct 15 17:07:48 2008
1224127195 Wed Oct 15 20:19:55 2008
1224169923 Thu Oct 16 08:12:03 2008
1224169969 Thu Oct 16 08:12:49 2008
1224170001 Thu Oct 16 08:13:21 2008
1224197811 Thu Oct 16 15:56:51 2008
1224211462 Thu Oct 16 19:44:22 2008
1225207685 Tue Oct 28 08:28:05 2008
1225208188 Tue Oct 28 08:36:28 2008
1225208630 Tue Oct 28 08:43:50 2008
1225288698 Wed Oct 29 06:58:18 2008
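The conversion is easy to reproduce — a quick sketch (the human-readable times in the list above appear to be local Pacific time; this prints UTC):

```python
# Interpret the 32-bit certificate serial numbers above as Unix
# time_t values (seconds since 1970-01-01 00:00:00 UTC).
from datetime import datetime, timezone

serials = [1224115668, 1224127195, 1225288698]
for serial in serials:
    created = datetime.fromtimestamp(serial, tz=timezone.utc)
    print(serial, created.strftime("%a %b %d %H:%M:%S %Y UTC"))
```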

Ian G

unread,
Nov 6, 2008, 6:06:11 PM11/6/08
to mozilla's crypto code discussion list
Nelson B Bolyard wrote:
> Ian G wrote, On 2008-11-06 12:48:
>> Nelson B Bolyard wrote:
>>> What curious things do you notice about these certs?
>> Only one key?
>
> Yup. That's the biggie. It allows the MITM to get by with just a
> single private key.


OK. We can of course all imagine ways to exploit that weakness, but it
seems rather pointless to me. In that, if any defence worked, the
attacker would just start using different keys. How long does it take
to generate a pool of thousands of keys? How many million machines on
your botnet?

Is this a real live attack? Any other details? Or is this K's attack
as per current thread?


iang

Robert Relyea

unread,
Nov 6, 2008, 7:33:16 PM11/6/08
to mozilla's crypto code discussion list
Given the Data Nelson presented, there can be no doubt that the certs
were created on the fly.

bob

Nelson B Bolyard

unread,
Nov 6, 2008, 8:09:45 PM11/6/08
to mozilla's crypto code discussion list
Ian G wrote, On 2008-11-06 15:06:
> Nelson B Bolyard wrote:
>> Ian G wrote, On 2008-11-06 12:48:
>>> Nelson B Bolyard wrote:
>>>> What curious things do you notice about these certs?
>>> Only one key?
>> Yup. That's the biggie. It allows the MITM to get by with just a
>> single private key.

> OK. We can of course all imagine ways to exploit that weakness, but it
> seems rather pointless to me.

I'm merely providing evidence of an MITM attack.

These certs were extracted from a Firefox user's cert DB, after
"security exceptions" had been created for every one of them.

The idea that it was an MITM attack came about because the user
could not access any https sites (for some time) without encountering
one of Firefox's self-signed cert dialogs. The fact that all the
certs bear a common public key is only confirmation of that conclusion.

Kyle Hamilton

unread,
Nov 6, 2008, 10:18:00 PM11/6/08
to mozilla's crypto code discussion list
So, essentially, what you're saying is that it was a targeted attack
against a user, instead of an attack targeted against a server?

Apparently, keeping track of keys in certificates placed individually
into NSS might be a good idea regardless.

-Kyle H

Julien R Pierre - Sun Microsystems

unread,
Nov 6, 2008, 10:37:07 PM11/6/08
to
Kyle,

Kyle Hamilton wrote:
> So, essentially, what you're saying is that it was a targeted attack
> against a user, instead of an attack targeted against a server?
>
> Apparently, keeping track of keys in certificates placed individually
> into NSS might be a good idea regardless.

The attacker absolutely didn't have to reuse the same key for this
attack. He could have regenerated a new key on the fly for every site
the user visited.

Remember that there are plenty of cases where it's perfectly valid to
reuse the same keypair - cases like cross-certification.

Even if we detected duplicate public keys between certs in NSS, that is
not necessarily something we want to fail on. We would have to know that
the keys have been assigned to completely different entities, as in the
example Nelson posted. Sometimes it may be unclear, for example if
somebody changes CA, or changes domain and happens to reuse the same
private key for 2 different certs.

This kind of attack is easily mitigated - the certs were self-signed.
That's a dead giveaway that the user shouldn't accept them.

Eddy Nigg

unread,
Nov 7, 2008, 4:09:30 AM11/7/08
to
On 11/07/2008 05:18 AM, Kyle Hamilton:

> So, essentially, what you're saying is that it was a targeted attack
> against a user, instead of an attack targeted against a server?
>

What is an attack targeted against a server in the context of browsers
and MITMs?

Bernie Sumption

unread,
Nov 7, 2008, 4:39:12 AM11/7/08
to
> If we create an error display that says "No kidding, this absolutely
> is an attack and we're stopping you cold to protect you from it."
> it seems unavoidable that users will learn to treat the absence
> of such an unbypassable error display as proof to the contrary,
> proof that the site is genuine and verified.
>
> Do we want to train them that way?

I don't think that this is an issue. I believe most users never
see a MITM attack in their browsing career - indeed this rarity of real
MITM attacks is the reason why real attacks are interpreted as false
positives: a false positive is simply the most likely explanation for a
cert error screen.

If a MITM detection service could be designed that gave no false
negatives, most users would never see it, so would not learn to
associate the existing cert error screen with an "all clear".

I have no idea if MITM attacks are generally targeted at users, as in
the case of this thread, or against servers.

If MITM attacks are targeted at servers, I accept that there is very
little that Firefox can do to stop this. If the attack is targeting a
user, surely there is an opportunity for Firefox to help the user
realise that they are being MITM'd? This could be a sustained attack,
lasting days or weeks, slowly collecting all of the user's passwords.
The idea of it makes me shudder! Any solution will be an imperfect
trade-off, but is it really the consensus that there's no better trade-
off than the current situation?

Ian G

unread,
Nov 7, 2008, 10:39:23 AM11/7/08
to mozilla's crypto code discussion list
Eddy Nigg wrote:
> On 11/07/2008 05:18 AM, Kyle Hamilton:
>> So, essentially, what you're saying is that it was a targeted attack
>> against a user, instead of an attack targeted against a server?
>>
>
> What is an attack targeted against a server in the context of browsers
> and MITMs?


Possibly, it is much closer to the user, so one user gets the full
effect, across all her services. As opposed to being much closer to the
server, so one server gets the full effect, across all its users.

Using the word "targeting" is possibly a stretch :)

iang

Iang

unread,
Nov 7, 2008, 11:22:53 AM11/7/08
to mozilla's crypto code discussion list
Bernie Sumption wrote:
> Graham, Nelson, Eddy, you all make good points.
>
> I'll take your word for it that it's impossible to detect MITM attacks
> with 100% reliability, as I said I'm not a security expert.
>
> How about an MITM detection service that gives no false positives, but
> might give false negatives? If you positively identify an MITM attack,
> you can present users with a much more definite UI saying "this *is*
> an MITM attack" and giving advice about what to do in the event of an
> MITM.


This is what we have now, sort of. It detects any
certificate MITMs. It also treats any misconfigurations or
use of non-CA certs as potential attacks. It pretty much
picks up all real cert-based attacks on the browser.

The problem here is that real MITMs are almost
non-existent, the "false positives" are routine, and there
is no real way to tell the difference. What is then
displayed is (generally) not an attack, the users know
(generally) that it is not an attack, so the users believe,
fairly, that the display is wrong.

Click-thru syndrome.

This part is well known. What is less easy is what to do
about it. It all depends on ones commercial or structural
or security viewpoint.

What is clear is that there are no easy answers. Solution A
will offend group X, solution B will offend group Y, etc.

The only solution that seems not to be offensive is to do
much more TLS so that much more attention can be fixed on
the problem. Attention at all levels: user, developer,
LAMPs, ...

But, this is currently blocked by two factors: the absence
of TLS/SNI in servers, and the difficulty of getting certs
into servers. Both situations are slowly getting better,
but aren't really the subject here.

(I'm talking high level here. Please don't respond with the
normal self-serving low level message.)


> I'm not talking about fixing all the problems for all the users, just
> a real improvement for a proportion of users.
>
> For example, can one give site owners a way of specifying that their
> domain must not be accessed if it presents a self-signed certificate.
> Paypal.com would no doubt take this option, as would any large bank.
> If the method is made easy enough, so might other sites like facebook.


Yes, this is the solution known hereabouts as Key Continuity
Management. (With a twist.)

> Two possible methods that don't require a detection service
> (mitm.mozilla.org) might be a DNS record (doesn't work if the attacker
> has compromised DNS) or a subdomain naming convention (i.e.
> secure.example.com requires a valid certificate - presents adoption
> issues for existing sites).
>
> This would likely have stopped the original bug poster from revealing
> her password.


Easily defeated with secure2.example.com ? The problem with
technical solutions (only) is that the attack is not at the
technical level, it is at the interface between the tech and
the human. Adi Shamir puts it well in his 3rd law:

"Cryptography is typically bypassed, not penetrated."

The corollary to this is that it is typically wrong to
improve the crypto. E.g., think about all the efforts to
move from 40bits to 256bits ... Security Architecture is
about expanding the security model, and integrating the
human into it, not improving the bits & bobs.


> Introducing a new screen that has a far
> lower rate of false positives seems a reasonable thing to try.


Yes, but that is fundamentally impossible without a massive
increase in the number of actual MITMs (won't happen for
many and various reasons) or a massive decrease in the
number of other cert errors (won't happen for many and various
reasons).

As far as the false positives versus false negatives is
concerned, we are fundamentally stuck with the current balance.

Although, I agree with one point. The screen should analyse
the self-signed cert and show it is self-signed. It is easy
enough to say "look, Mate, this would never be done on a big
ecommerce site or a bank, but it might be done on a hobbyist
or sysadm site."

iang

Ian G

unread,
Nov 7, 2008, 11:43:33 AM11/7/08
to mozilla's crypto code discussion list
Bernie Sumption wrote:
>> If we create an error display that says "No kidding, this absolutely
>> is an attack and we're stopping you cold to protect you from it."
>> it seems unavoidable that users will learn to treat the absence
>> of such an unbypassable error display as proof to the contrary,
>> proof that the site is genuine and verified.
>>
>> Do we want to train them that way?
>
> I don't think that this is an issue. I believe most users likely never
>> see a MITM attack in their browsing career - indeed this rarity of real
> MITM attacks is the reason why real attacks are interpreted as false
> positives, it's just the most likely explanation for a cert error
> screen.


Yes.

> If a MITM detection service could be designed that gave no false
> negatives, most users would never see it, so would not learn to
> associate the existing cert error screen with an "all clear".


It is kind of plausible to design any service that does "something" and
there are a few examples. But there are difficulties. Firstly, the
existing service already "promises" it, so what went wrong, and what
will happen when you bypass it? Are we breaking the existing service?
Are we just adding in more complications?

Secondly, recall Adi's 3rd law: the attacker typically bypasses. This
means that any service has to consider whether it is trivially bypassed,
and/or whether there are better attacks outside its boundary already.
The answer at this level is "probably, yes, c.f., phishing." At which
point, it then becomes less valuable, even if it is "right" technically,
and it is likely to become more costly than the benefits it delivers.
See above.

Thirdly, there isn't really any hope of "no false negatives" ... because
the service isn't really close enough to the two players to be
absolutely sure. It can only create that absolutism by mandating that
everyone believes its viewpoint, which is a trick that isn't easy to
pull off, and is wrong.


> I have no idea if MITM attacks are generally targeted at users, as in
> the case of this thread, or against servers.

We have too little data to answer that. In this case, it was a wireless
attack. In the past, we have predicted that wireless would lead to an
increase in MITMs of this nature, but we were wrong, there are still
only isolated cases. These MITMs are just too rare. (Why that is, and
what to do about it are interesting questions...)

But the fact is, real anti-cert MITMs are too rare.


> If MITM attacks are targeted at servers, I accept that there is very
> little that Firefox can do to stop this. If the attack is targeting a
> user, surely there is an opportunity for Firefox to help the user
> realise that they are being MITM'd? This could be a sustained attack,
> lasting days or weeks, slowly collecting all of the user's passwords.
> The idea of it makes me shudder!


LOL... Did someone tell you that browsing was safe?


> Any solution will be an imperfect
> trade-off, but is it really the consensus that there's no better trade-
> off than the current situation?


No. There is no consensus. There are opposing camps. One camp
believes that the solution is to drop all self-signed certs. Another
camp believes that Key Continuity Management is the answer. Yet a third
camp believes that user training has to be done, and the UI needs a
little tweaking, is all. A fourth camp has written off SSL / secure
browsing as irreparably flawed. A fifth camp believes that only
protocol bugs and the number of bits is security, the rest is outside
purview. A sixth camp believes this is not a technical issue at all,
and will be solved by the lawyers. If we look over the hill, we'll see
other camps, hear much muttering, and in the end, we all return to our
cups and mutter on...

There is no consensus! Sorry about that... you want a cup of wine with
your muttering? :)

iang

Robert Relyea

unread,
Nov 7, 2008, 1:14:55 PM11/7/08
to mozilla's crypto code discussion list
Bernie Sumption wrote:
>> If we create an error display that says "No kidding, this absolutely
>> is an attack and we're stopping you cold to protect you from it."
>> it seems unavoidable that users will learn to treat the absence
>> of such an unbypassable error display as proof to the contrary,
>> proof that the site is genuine and verified.
>>
>> Do we want to train them that way?
>>
>
> I don't think that this is an issue. I believe most users likely never
> see a MITM attack in their browsing career - indeed this rarity of real
> MITM attacks is the reason why real attacks are interpreted as false
> positives, it's just the most likely explanation for a cert error
> screen.
>
I think this has been historically true... even though we know there are
holes in DNS, the ability to generally exploit those holes has been
difficult. That is no longer the case in the wireless world.

The NSS team has been worried about this kind of attack for a while,
which is why we pushed for changes in the UI. In some sense the bug
report we saw spoke to a partial success. The UI was annoying enough the
user wrote a bug about the problem, allowing the user to find out that
they were potentially hacked. With our old UI, the user would have
dismissed the warning dialog and proceeded. We know from experience
users train themselves to dismiss those dialogs without even seeing
them. In that case no bug would have been written up. Where the new UI
failed was the user was able to proceed without the sophistication
needed to evaluate that she was being attacked.

These attacks are easy to produce, and with a large number of mobile,
wireless devices out there (including laptops), potentially profitable.
I think if we don't take steps to protect the user, like was done in FF
3, the rate of these attacks will likely increase.

bob

Nelson B Bolyard

unread,
Nov 7, 2008, 4:21:46 PM11/7/08
to mozilla's crypto code discussion list
Iang wrote, On 2008-11-07 08:22:
> Bernie Sumption wrote:

>> How about an MITM detection service that gives no false positives, but
>> might give false negatives? If you positively identify an MITM attack,
>> you can present users with a much more definite UI saying "this *is*
>> an MITM attack" and giving advice about what to do in the event of an
>> MITM.
>

Ian, I agree with all that you wrote, quoted above.

I will add that, while MITMs have historically been very rare, they are
on the upswing. I see two broad areas where MITM attacks are on the
increase, and they're both directed at the user, not the server.

1) ISPs who want to intercept their customers' traffic, ostensibly to
alter URLs for links and images to point to advertisements of their
choosing, rather than to advertisements chosen by the content provider.

(Note that this is what cable TV companies have done on cable channels
for decades, substituting their own ads for the ads coming from the
cable channel's content feed. So this seems perfectly natural to them.
But defeating secrecy and authenticity measures is a real threat.)

2) software that runs on the user's own PC, and intercepts and modifies
his https traffic. In some cases, this is installed by the user himself,
ostensibly to block advertisements and certain scripts, and/or do virus
detection and prevention. In other cases, it is attack software, malware,
plain and simple. In the cases where the user has consciously installed it,
the software has merely claimed that it would stop advertisements, and has
not explained that it would intercept secure traffic, and defeat all (or
most) MITM warnings, to do so.

The ISP MITM phenomenon is on the rise, just getting started now. I
would encourage users to periodically examine their systems for trusted
root CA certs that belong to their ISP, because such certs make it EASY
for the ISP to do MITM. (Hint: there's one ISP with roots in FF)

Eddy Nigg

unread,
Nov 7, 2008, 5:34:44 PM11/7/08
to
On 11/07/2008 11:21 PM, Nelson B Bolyard:

> I will add that, while MITMs have historically been very rare, they are
> on the upswing. I see two broad areas where MITM attacks are on the
> increase, and they're both directed at the user, not the server.

One must recognize the fact that MITM attacks were in the past rather
expensive when compared to other options to deceive a user. However due
to better anti-phishing measures and on some operating systems also
anti-viruses and with the rise of wireless, MITM attacks have become
more attractive.

Obviously such attacks can be performed more cheaply as well, by simply
redirecting to the regular http protocol, for which I suggest setting
browser.identity.ssl_domain_display to 1. It should be the default
setting IMO since it raises awareness and by my own account I've become
quite used to it (after the switch from the yellow address bar).

> The ISP MITM phenomenon is on the rise, just getting started now. I
> would encourage users to periodically examine their systems for trusted
> root CA certs that belong to their ISP, because such certs make it EASY
> for the ISP to do MITM. (Hint: there's one ISP with roots in FF)

Actually the attack which started this thread might have been simply a
router which creates the certs on the fly...

Kyle Hamilton

unread,
Nov 8, 2008, 3:50:44 PM11/8/08
to mozilla's crypto code discussion list
There are two ways to target MITM attacks.

First is the attack against the user, sending everything destined for
TLS (either via HTTP proxy or via port-forwarding techniques) from the
user's machine to the attacker.
Second is the attack against the server, sending network traffic
destined for the server -- to the attacker (this seems to be the
'classical' view, that a bank is a bigger fish and thus a bigger
target than the individual user).

Both of these types of attack can be done from Cisco IOS's enable mode, even
without reconfiguring the user's system to talk through a proxy. I
was concerned when the IOS buffer overflow was announced, as well as
the IOS rootkit.

Bernie's solution might actually be doable, if you could get all of
the CAs who are in Firefox's trust list to check with each other on
the subjects, subject alternative names and organizations who are
registering certificates, and perhaps even make an XML-RPC or SOAP
query interface to this checking mechanism available to the public.
(Of course, they're all competitors, so they're unlikely to share
information about their client bases.) This would allow for an
emulation of the underlying condition that actually makes X.509
theoretically work -- the notion of the central X.500 Directory, where
everything about a given Subject could be looked up from a delegated,
distributed database very much like DNS.

The basic idea for querying this would be as follows: hash the Subject
and each/all SANs in the certificate, and query for that hash (perhaps
to a web service). If there's a match, ensure it's signed by a CA in
the default db; if it isn't, conclude that it's an MITM. If there
isn't a match, pop up a small notification (like the 'Firefox has
blocked this download' notification) that Firefox can't authenticate
the certificate, and they proceed at their own risk. (If they add the
certificate to their store, the notification can say "You've manually
accepted the certificate for this site, Firefox didn't do it
automatically"?)
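The query-and-decision flow described above might look roughly like this — a sketch only; the hash canonicalization, the query service, and the function names are all assumptions, not an existing API:

```python
# Sketch of the proposed lookup: hash the Subject plus SANs, query a
# (hypothetical) CA-backed service, then classify the result.
import hashlib

def name_hash(subject, sans):
    # Sort the SANs so the digest is independent of their order in the cert.
    material = subject + "|" + "|".join(sorted(sans))
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

def classify(registered, signed_by_default_ca):
    """Decision flow: registered + trusted CA -> ok; registered but not
    CA-signed -> treat as MITM; unknown -> notify, proceed at own risk."""
    if registered:
        return "ok" if signed_by_default_ca else "mitm-suspected"
    return "unverified-notify-user"

h = name_hash("CN=www.example.com,O=Example Inc",
              ["www.example.com", "example.com"])
# registered=True would come from the web-service lookup of h.
print(classify(registered=True, signed_by_default_ca=False))  # mitm-suspected
```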

I would have no problem with changing the chrome when people step
outside of the assurances that Firefox tries to provide. I /do/ have
a problem with removing the ability for users to try to self-organize
their own networks. (The threat model is different, the policies are
different, and the fact that everyone on this list is talking about
removing the ability for self-signed roots to be used at all is an
extremely counterproductive and cartel-supporting view.)

-Kyle H

Ian G

unread,
Nov 8, 2008, 4:47:04 PM11/8/08
to mozilla's crypto code discussion list
Kyle Hamilton wrote:

> The basic idea for querying this would be as follows: hash the Subject
> and each/all SANs in the certificate, and query for that hash (perhaps
> to a web service). If there's a match,


Would I as an attacker use a perfect Subject / SAN that would leave
itself easily matchable by software?


> ensure it's signed by a CA in
> the default db;


Does this mean, on examining each cert, we would have to go to each CA
to see if it records that Subject / SANs ?


> if it isn't, conclude that it's an MITM. If there
> isn't a match, pop up a small notification (like the 'Firefox has
> blocked this download' notification) that Firefox can't authenticate
> the certificate, and they proceed at their own risk. (If they add the
> certificate to their store, the notification can say "You've manually
> accepted the certificate for this site, Firefox didn't do it
> automatically"?)


Yes, certs that the user has accepted should be shown differently. They
have a different trust chain.


> I would have no problem with changing the chrome when people step
> outside of the assurances that Firefox tries to provide. I /do/ have
> a problem with removing the ability for users to try to self-organize
> their own networks. (The threat model is different, the policies are
> different, and the fact that everyone on this list is talking about
> removing the ability for self-signed roots to be used at all is an
> extremely counterproductive and cartel-supporting view.)


I don't think it is "everyone" although there is a loud minority against
self-signed certs. As far as I can see, there is no consensus to drop
them from Firefox, and my understanding is that Firefox is still
planning to enhance the KCM in future generations. Also, Firefox isn't
discussed on this group, this is dev-tech-crypto.

(I could be wrong, but I'm not a developer so there is not a lot I can
do about it ... either being wrong or doing the code :) ).


iang

Eddy Nigg

unread,
Nov 8, 2008, 11:05:38 PM11/8/08
to
On 11/08/2008 10:50 PM, Kyle Hamilton:

> I would have no problem with changing the chrome when people step
> outside of the assurances that Firefox tries to provide. I /do/ have
> a problem with removing the ability for users to try to self-organize
> their own networks. (The threat model is different, the policies are
> different, and the fact that everyone on this list is talking about
> removing the ability for self-signed roots to be used at all is an
> extremely counterproductive and cartel-supporting view.)
>

Kyle, why don't you do that the proper way, especially for corporate
networks? Creating a root and distributing the root is the proper way to
go, not some lousy self-signed crap you will never verify anyway.

I'm not against somebody being his own CA - not wanting to depend on
others, but I'm against risking others by their actions. I think by
creating your own root and by distributing it throughout your network
and affected circles, you provide a certain protection level self-signed
can't. You may even issue CRLs. Others who encounter a site without
having imported the root can (currently) still accept the cert.

There is open source software out there which provides excellent support
for setting up a corporate CA with minimum effort. I suggest enabling
users to "self-organize their own networks" correctly, mitigating even
their own risks! Think about it...

Kyle Hamilton

unread,
Nov 9, 2008, 1:38:49 AM11/9/08
to mozilla's crypto code discussion list
Because you're assuming that everything that occurs in this world
exists in a corporate environment, Eddy. That is the environment
where CAs flourish, where CAs thrive, where CAs can do what they're
best at -- *because all authority and trust trickles down from the
corporation, a tool used to help its workers which are acting on its
behalf*.

Realistically, there is exactly one reason for third-party scrutiny:
any situation which has an interest defined in law as "fiduciary"
requires due diligence. Employment contracts, communications (even
privileged communications, such as between an attorney and client),
money, contract law, etc.

This is the reason why X.509 was developed by an international
consortium of governments and government contractors. This is why
X.509's trust model is the way it is. This is why X.509 doesn't have
any mechanism defined for a sovereign end-user to limit the amount of
trust placed in any given credential or assertion. This is why X.509
doesn't allow a user to limit the policies that they're willing to
accept in certificate path validation. (To be fair, some minimal nod
toward this was put into the PKIX profiles, but realistically there's
no implementation. In PKIX, an end-user simply does not have a
certifying key, period. This means that literally everything that the
end-user wishes to do must start with a "CA, may I <blank>?" question,
just to get an assignation of authority -- a certificate, linking a
key that the user holds to that which the user wishes to do.

I don't care who you are, what interests you represent -- in 1996, the
US gave up on the Clipper chip and the entire key-escrow debacle, and
that was a full-fledged effort with the full, vast weight of the
military, the CIA, and every law-enforcement agency (federally- and
state-chartered) behind it. Do you really think that I or anyone else
are going to be willing to limit our behavior to those things which an
entity which isn't even a government is willing to assign authority
for?

I'm going to go out on a limb and suggest that I'm already well-aware
of "corporate CA software which requires minimum effort". I've been
following cryptography for a very long time, and I think that my
position outside of the structure which has accreted since 1995 (which
requires the use and imposition of a 'central identity/authentication
model' simply to continue to exist, much less make any money -- and
let's face it, everything which has been stated about the reasons for
X.509 certificates and EV certificates is simply designed to inspire
fear which makes the acceptance of such a central model much more
palatable to those who allow the fear to control their decisions)
allows me to see the pitfalls in the currently-dominant paradigm.

"I suggest to enable users "self-organize their own networks"
correctly, mitigating even their own risks!" Oddly, I have thought
about this. I have espoused the ability for users to identify
themselves -- either via running their own CA (their "self-signed
certificates" that you are unhappy with accepting) or via their own
out-of-band communication method for authenticating their own
certificate thumbprints. Think about what you say, and then think
about how all of the Mozilla products which use NSS make it damnably
frustrating to do so. (And since it's as easy to generate a client
certificate which is signed by the user's self-signed CA certificate,
you can't simply say that you would let any certificate that wasn't
signed by itself have something of a pass in the trust evaluation --
it would chain to an unknown root, which would present the same issue
as a self-signed root.)

If you have some time, I would ask you to look at
http://web.mac.com/wolfoftheair/internetpkirethought.txt . My last
edit on it was 8/25/2008, according to the file modification time.
(you will likely want to view the source, then turn on word-wrapping,
since my text editor soft-wraps for me.) This is an attempt to use
RFC 3647's CP/CPS framework to allow individual sites and groups the
ability to build their own CAs, and perhaps more importantly to
describe how to authenticate certificates from disparate providers
that the user doesn't necessarily have knowledge of, much less any
desire to assign fiduciary status to -- as well as a user-interface
suggestion to reduce the stress of dealing with such untrusted
identification/authentication to the absolute minimum required to
allow the user to make his own access decisions (and to reduce the
possibility of mis-sent messages when multiple conversations are
open).

At this point, I don't know of any PKIX client that actually supports
policy evaluation. I'm pretty sure that NSS doesn't, and I'm also
pretty sure that OpenSSL doesn't (I can't speak for other open-source
projects) -- as well as being pretty sure that none of the
closed-source clients I'm aware of support it either. This is where
focus needs to be placed, a means of identifying what policies are
being used to issue each certificate, as well as a means of policy
mapping (NSS could do this by creating a "web policy", an "email
policy", and a "software policy", then issuing a cross-certificate to
each of its included CAs that maps the individual policy OIDs to the
'master' web/email/software OIDs -- but nobody wants Mozilla to run a
CA, sigh).
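
The policy mapping idea above could be sketched roughly as follows.
This is only an illustration, not NSS or OpenSSL code: the OIDs and the
"master" policy names are invented, and real path validation would use
RFC 5280's valid_policy_tree rather than a flat dictionary.

```python
# Hypothetical sketch of the policy mapping described above: a relying
# party maps each CA's individual policy OIDs onto a small set of
# "master" OIDs (web/email/software) and then checks whether every
# certificate in a chain was issued under a policy acceptable for the
# intended use. All OIDs below are made up for illustration.

# Cross-certificate policy mappings: issuer-domain OID -> master OID.
POLICY_MAP = {
    "1.3.6.1.4.1.99999.1.1": "master.web",       # some CA's SSL policy
    "1.3.6.1.4.1.88888.2.5": "master.web",       # another CA's SSL policy
    "1.3.6.1.4.1.99999.1.2": "master.email",     # an S/MIME policy
    "1.3.6.1.4.1.77777.3.9": "master.software",  # a code-signing policy
}

def effective_policies(cert_policy_oids):
    """Translate a certificate's asserted policy OIDs via the map;
    unmapped OIDs pass through unchanged."""
    return {POLICY_MAP.get(oid, oid) for oid in cert_policy_oids}

def chain_valid_for(chain_policy_sets, required_master_oid):
    """A chain is acceptable for a use only if every certificate in it
    asserts (after mapping) the required master policy -- a greatly
    simplified version of RFC 5280's policy intersection."""
    return all(required_master_oid in effective_policies(oids)
               for oids in chain_policy_sets)

chain = [
    {"1.3.6.1.4.1.99999.1.1"},   # end-entity cert's policies
    {"1.3.6.1.4.1.88888.2.5"},   # intermediate CA cert's policies
]
print(chain_valid_for(chain, "master.web"))    # True
print(chain_valid_for(chain, "master.email"))  # False
```

The point of the mapping layer is that the relying party (or a vendor
acting for it) decides which issuer policies count as equivalent,
without every client having to know each CA's private OID arc.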

-Kyle H

Eddy Nigg

Nov 9, 2008, 7:33:04 AM
to
On 11/09/2008 08:38 AM, Kyle Hamilton:

> Because you're assuming that everything that occurs in this world
> exists in a corporate environment, Eddy.

Well, I didn't mean only the corporate environment, but also any
hobbyist geek. Those are the ones who lament against PKI in general
and promote self-signed certs.

> I don't care who you are, what interests you represent --

I represent the interests of anybody seeking a secure Internet and
better reliance and productivity. I personally invested time and money
to provide the services StartCom does, in the way it does, mainly
because I realized that an alternative to the established CAs had to be
offered in order to improve overall security and reliability. The
StartCom CA wasn't established with the goal of further facilitating
the establishment and its cause, but to provide an alternative. Just as a
reminder, in 2004/5 digital certificates weren't so cheaply available as
today...

Now I'm interested in getting rid of self-signed certificates if
possible. They undermine "legitimate" certificates and put the majority
of users under an unneeded risk. That's one of my goals today!


> Do you really think that I or anyone else
> am going to be willing to limit our behavior to those things which an
> entity which isn't even a government is willing to assign authority
> for?

Browser vendors effectively govern CAs and with it all digital
certificates. It's a fact and it's in their hand what they do with it. I
recognized that power very early and planned accordingly. You can bang
your head against a wall - it won't help.

Personally I feel that they do it quite right and have reasonable
requirements - first and foremost Mozilla.

> (And since it's as easy to generate a client
> certificate which is signed by the user's self-signed CA certificate,
> you can't simply say that you would let any certificate that wasn't
> signed by itself have something of a pass in the trust evaluation --
> it would chain to an unknown root, which would present the same issue
> as a self-signed root.)

No! It's much better than that, because it requires you to explicitly
ask the user to trust the CA root. This way users can "self-organize
their own networks" as you requested earlier. Anybody willing to trust
you or any other CA can install the root, benefit from your decisions
and procedures and you even have the ability to revoke issued
certificates. No UI will prevent the install of the root and no error
screen will appear thereafter.

And that's much better than having to trust a self-signed certificate
on-the-fly because somebody got directed to a certain URL.

BTW, the only certificates which should be self-signed are roots, as
you quite well know - that's how PKI is implemented. And roots
shouldn't be used to secure web sites or to sign emails. Actually,
using one that way would also say something about the basic
capabilities of the issuer of the cert...

Ian G

Nov 9, 2008, 8:57:40 AM
to mozilla's crypto code discussion list
Kyle Hamilton wrote:
> Because you're assuming that everything that occurs in this world
> exists in a corporate environment, Eddy. That is the environment
> where CAs flourish, where CAs thrive, where CAs can do what they're
> best at -- *because all authority and trust trickles down from the
> corporation, a tool used to help its workers which are acting on its
> behalf*.


Kyle, if that were true, the PC wouldn't have defeated IBM and we'd
still be using proprietary networks like Minitel :) The reason
computing is as it is today is because of the success of peer
arrangements sans intermediation. Corporations benefit as much from
peer-to-peer as anyone else. The entire dotcom boom was about that
benefit... leading on to the current financial crisis, another elegant
proof that authority & trust in corporations does not trickle down ;)


> ... This is why X.509


> doesn't allow a user to limit the policies that they're willing to
> accept in certificate path validation. (To be fair, some minimal nod
> toward this was put into the PKIX profiles, but realistically there's
> no implementation. In PKIX, an end-user simply does not have a
> certifying key, period. This means that literally everything that the
> end-user wishes to do must start with a "CA, may I <blank>?" question,
> just to get an assignation of authority -- a certificate, linking a
> key that the user holds to that which the user wishes to do.


This part is deeply interesting. PKI was supposed to be about extending
meaning into contracts and certs, but that failed. That failure is
probably understandable (to me at least).

The part that is less easy to understand is "what is the meaning that
was left over when we stopped trying to put meanings in?" This question
is at the root of many symptomatic ills, such as the whole
EV-versus-non-EV thing, and the failure of S/MIME.

Once we isolate, align and fix these meanings, it should be possible to
unlock the potential in the code. This is a future challenge.


> At this point, I don't know of any PKIX client that actually supports
> policy evaluation. I'm pretty sure that NSS doesn't, and I'm also
> pretty sure that OpenSSL doesn't (I can't speak for other open-source
> projects) -- as well as being pretty sure that none of the
> closed-source clients I'm aware of support it either.


"Policy evaluation" would probably reduce to "contracts in digital
form." The places where this has been looked at are things like my
Ricardian Contracts, Nick Szabo's smart contracts, and the like. There
is a link of related stuff here:

http://www.webfunds.org/guide/ricardian.html#alternates

Short summary: contracts are difficult to do in digital form!


> This is where
> focus needs to be placed, a means of identifying what policies are
> being used to issue each certificate, as well as a means of policy
> mapping


My view would be slightly different, although we may be looking at the
opposite sides of the same coin: We need to figure out what these
current practices are, as a starting point. Right now they are not
clear, as the practices are a combination of various claims spread
across various documents and various people at various times.

(I.e., they are not policies, just practices.)

E.g., although there is a practice of "all CAs are equal" there is no
real documentation that says that, nor examines just how equal they are
and what steps are taken to equalise the CAs. E.g. (2), there is no
clear description of what a signature in S/MIME means, a situation that
comes close all by itself to breaking S/MIME (any lawyer will tell you
not to sign something unless you intend to be bound by it ...).


> (NSS could do this by creating a "web policy", an "email
> policy", and a "software policy", then issuing a cross-certificate to
> each of its included CAs that maps the individual policy OIDs to the
> 'master' web/email/software OIDs -- but nobody wants Mozilla to run a
> CA, sigh).


Yes, there are fierce pressures on Mozilla to do the work of the CAs.
This is not the CAs fault, nor Mozilla's fault. But the forces still exist.

Question is, what to do about it?

iang

Ian G

Nov 9, 2008, 9:25:27 AM
to mozilla's crypto code discussion list
Eddy Nigg wrote:

> Now I'm interested in getting rid of self-signed certificates if
> possible. They undermine "legitimate" certificates and put the majority
> of users under an unneeded risk. That's one of my goals today!


Well, all the arguments have been heard on this already, and positions
are fairly entrenched. It seems futile to have the debate over and
over, and I for one would like to point out that it is uncomfortable to
treat it like a political campaign.

Perhaps a vote?

It seems that Eddy and Nelson are in the anti-self-signed-certs camp,
and I would join Kyle in the pro-self-signed-certs camp.

Do others have strong-enough feelings? I'm searching for a way here to
show one side or the other which way the wind is blowing.

iang

Eddy Nigg

Nov 9, 2008, 10:26:36 AM
to
On 11/09/2008 04:25 PM, Ian G:

>
> Well, all the arguments have been heard on this already, and positions
> are fairly entrenched. It seems futile to have the debate over and over,
> and I for one would like to point out that it is uncomfortable to treat
> it like a political campaign.

Well, Kyle stated that he doesn't care who I am nor which interests I
represent. This was a short statement about what I did and which
interests I represent (including obviously my personal preference) :-)

Perhaps I should have added that I'm coming from the same grass-roots
camp as Kyle and others. We have/had some very similar goals... however
I actually did something in order to establish an alternative. I was
the driving "force" behind the establishment of a legitimate authority
where
the commercial aspects aren't the primary goal and where certification
isn't bound to the financial capabilities of the subscriber.

For some reason, advocates like Kyle are our opponents today, and
needless to say the hard-core CAcert crowd isn't happy with what we do
and have achieved either - first we were ignored, then publicly
laughed at, then publicly decried. Go figure...
Today I'm afraid I don't have much sympathy left for those either, and
I continue to serve those who appreciate it.

>
> Perhaps a vote?
>

Vote on what? It's a process which started a while ago. It's usually
not discussed here, but by the UI designers and in other forums.

> It seems that Eddy and Nelson are in the anti-self-signed-certs camp,
> and I would join Kyle in the pro-self-signed-certs camp.

I think that's way too simple...

> Do others have strong-enough feelings? I'm searching for a way here to
> show one side or the other which way the wind is blowing.
>

You don't have to search a lot....just visit Paypal and look at your
browser. That's the way the wind is blowing. And if regular certificates
are further circumvented and devalued, this is what you'll be left with
in the end. Think about it!

Kyle Hamilton

Nov 9, 2008, 7:11:12 PM
to mozilla's crypto code discussion list
On Sun, Nov 9, 2008 at 7:26 AM, Eddy Nigg <eddy...@startcom.org> wrote:
> On 11/09/2008 04:25 PM, Ian G:
>>
>> Well, all the arguments have been heard on this already, and positions
>> are fairly entrenched. It seems futile to have the debate over and over,
>> and I for one would like to point out that it is uncomfortable to treat
>> it like a political campaign.
>
> Well, Kyle stated that he doesn't care who I am nor which interests I
> represent. This was a short statement about what I did and which interests I
> represent (including obviously my personal preference) :-)

Since there's a fairly argumentative tone going on, I think I should
explain what my viewpoint is:
0) First and foremost, I'm a citizen and resident of the United States
of America, educated in the public school system in the 1980s to early
1990s. This means that I was brought up with a particular set of
beliefs (most notably that only governmental authority is
proscriptive, and there is no generally-applicable prescriptive
authority). This colors every value judgement that I make, most
importantly the following...
1) I believe that users who own their own machines are sovereign. The
USER, not the CA, is the root of all trust on that user's machine.
(Families are different, but only in that the actual owner of the
machine -- usually a parent -- is the root of all trust for the kids
who use it. There are many rights which corporations have that family
owners don't, including the right to capture and log keystrokes, video
output, audio output, and network activity on the machine.)
2) I believe that CAs which have been audited against their CPs and
CPSes have a severe disincentive against making new types of
certificates available.
3) I believe that CAs which have been audited against their CPs and
CPSes have a severe disincentive against binding different types of
identities (other than legal identity) to public keys. The reason for
this is that CAs must adhere to their CPs, and thus far none of them
have brought any expression of non-legal identities (this is different
from 'illegal identities', which include identities that people take
on for the purpose of defrauding another or any identity used for an
illegal purpose) into their certificate policies.
4) I believe that CAs which have been audited against their CPs and
CPSes have a severe disincentive against placing user-requested
extensions in their certificates, because these extensions can mean
anything and be interpreted as anything.
5) I believe the disincentive against placing user-requested
extensions in certificates reduces the incentive for general-purpose
PKIX clients to properly handle arbitrary extensions -- the
information doesn't exist, so there's no implementation thereof. If
there's no implementation thereof, there's no pressure to provide the
implementation.
6) I believe that the USA has a chartered requirement to not infringe
on anyone's right of free association. (Among other things, a person
can associate with gays -- and one of the more deeply-held beliefs in
the gay community is that if you know that someone's gay, you do not
under any circumstances make that group membership known -- it is the
individual's decision when (or if) to "come out of the closet", i.e.
make their own group membership known. I have other examples to
suggest that this is an appropriate view to take, but I cannot express
them without betraying my own associations in ways that I wish not
to.)
7) Because of #1, I believe that users have the right to use their
computers as tools they can work in any manner, for communications
they wish to share in any manner, and with pseudonyms as they may
require or desire -- not simply with the de facto legal identity
information embedded in ways that CAs currently require. Also because
of #1, I believe that assigning "ultimate trust" to any CA that is not
run according to the need of the user is a violation of the user's
sovereignty.
8) in light of #2, 3, 4, and 5, I conclude that trying to get any
audited CA to embed a non-legal identity or group-membership extension
is pointless.
9) in light of the interactions between #3 and #6, and the conclusion
in #8, I conclude that using the current CAs for anything outside of
the top-down, hierarchical, legal-identity declaration that they
currently are is pointless.
10) I believe that there are many, many applications that would
benefit from the application of cryptography. However, the dominant
paradigm of cryptographically binding an identity to a key (but only
as long as the identity that's bound is the legal identity) makes it
difficult for advocates of cryptography to gain any traction in those
environments.

In addition, regarding your desire to do away with self-signed
certificates as website identification... unfortunately, I can't see a
way within a strict reading of X.509 to do that. I do believe it
would be good practice, but it just seems to be... well, dogmatically
prohibited. As far as I've been able to figure out, all trust flows
from the trust anchor... so the trust anchor itself can be used for
any purpose for which it's trusted. "I trust this anchor to
authenticate websites. If I see the trust anchor itself in a
certificate for a website which authenticates according to Subject and
SAN processing, I will trust it even though the key itself is being
used in a way that it should not be." (A few years ago we had a
discussion on here which led to the statement that the 'trust anchor'
is not the CA certificate itself, but rather the CA's public key.)

Also, the case that Nelson brought up would have been made less
intrusive into the user's mindspace by simply asking the user to
accept a CA which (even with the same key in the server certificates)
would have only brought a single security exception dialog up. As
well, with the new multi-process database capability, it will now be
possible for a rogue extension to add CAs to the profile's certificate
database without any intervention from the user and without any
corruption of the database, which would make for a discoverable but
not obvious security boondoggle.

> Perhaps I should have added, that I'm coming from the same grass-root camp
> as Kyle and others. WE have/had some very similar goals... however I
> actually did something in order to establish an alternative. I was the
> thriving "force" behind the establishing of a legitimate authority where the
> commercial aspects aren't the primary goal and where certification isn't
> bound to the financial capabilities of the subscriber.

By trying to appear 'legitimate' the authority which you created falls
into the same problems which plague every other authority. As well,
since the 'authority' that you run does not issue the credentials
which can be used to authenticate a legal identity, your 'authority'
is not 'authoritative'. That's one of the most important things that
CAs need to recognize -- they're not authoritative for anything which
they are not the sole arbiter of. (A user group is authoritative on
its membership, but a general-purpose certificate issuer is only
authoritative on 'the entities which have chosen to request issuance
of a certificate from it' -- the general-purpose certificate issuer is
not authoritative for 'the identities of entities'. In this light,
even the DMV/MVD isn't authoritative, and it's only because the US
Department of State takes several weeks to verify presented documents
with the issuing authorities that it really can be called
authoritative for US citizenship and passport issuance.)

> For some reason, advocates like Kyle are opponents to us today and needless
> to mention that the hard-core CAcert crowd isn't happy with what we do and
> achieved either - first we were ignored, then publicly laughed at, then
> publicly decried. Go figure...

I have no issue with what you're doing, and I think that it's a very
good thing. I believe that the CAcert project is interesting, at
least insofar as it's getting more people to be aware of the massive
issues of legal identity (individual and corporate) and legal name as
they are interpreted around the world.

What I do have issue with, though, is that you seem to think that the
service you created is a panacea, that the concept of monetary
exchange is the only thing which prevents people from using the
services of the general-purpose CAs. It's not. (Among other things,
you issue end-user certificates, which cannot themselves issue other
certificates. This means that a user cannot use the certificate you
issue to issue certificates to his router, his PC, to his friends, to
any services that he chooses to run -- all he can issue are proxy
certificates, which cannot be used for identity; they can only be used
for delegation of the permission that an end-user certificate has.
Since end-user certificates cannot be used to identify servers, the
end-user still has an artificial barrier to entry if he wants to, say,
protect his home network with ipsec.)

> Today I'm afraid that for those I don't have much sympathy left either and
> continue to serve those who appreciate it.

I have issues with the entire X.509 paradigm, but the biggest problem
is this: X.509 was designed around the concept of a central, worldwide
Directory. This would have made Subjects (and SANs) searchable,
globally... and the fact that there is no centrally-searchable
database is what allows for certificates to be mis-issued by audited
and trusted CAs, under the strict readings of their CPs.

>> It seems that Eddy and Nelson are in the anti-self-signed-certs camp,
>> and I would join Kyle in the pro-self-signed-certs camp.
>
> I think that's way too simple...

I agree, this characterization is far too simple. I just want to
maintain the ability for users to reason their ways through the
process of adding new roots to their stores. I would like to simplify
this process as much as possible. I would like to ensure that it is a
"reasoning" process, though, rather than just a "the damned browser's
getting in my way so I'll accept this" process.

I think I would be amenable to the following:

1) A certificate should not be self-signed if it is being used to
identify a server, as a means of encouraging best-practice CA key
management.
2) If a certificate issued by a CA and a known CA have the same key,
the certificate should be flagged for review (again, encourage
best-practice CA key management).
3) The process of adding a CA should not be directly accessible from
the security exception UI. Individual sites should be able to have
their certificates added on a case-by-case basis, because a site may
be legitimate in the user's estimation even if they don't want to
trust the authority identifying it. (This is outside NSS's scope.)
4) There should be an extension defined in CA and end-entity
certificates for a CA to embed information on how to add the CA to the
trusted store. (There's already two extensions defined for CAs to
link to their certification practice statements, but none for any
tutorials or information on how to add a CA to the root store.) This
link could be shown in the security exception UI. (This is outside
NSS's scope.)
5) The security exception dialog (even if it's not an actual dialog
box, it's still an interaction between the user and software) should
also have a prominent link to the criteria for inclusion in the
default NSS database, and those criteria should have their rationale
explained -- so that the user has the ability to understand what
exactly is at stake. As well, there should be a statement "Mozilla,
Firefox, and NSS take no responsibility for any consequences which may
happen if you add a CA that they have not included." (Again, outside
of NSS's scope.)
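
Points (1) and (2) are mechanical enough to sketch. This is a toy
illustration only -- the certificate "records" are plain dictionaries
with invented field names, standing in for real X.509 parsing of the
Issuer/Subject DNs and the DER-encoded SubjectPublicKeyInfo:

```python
import hashlib

# Toy sketch of checks (1) and (2) above. A real implementation would
# compare parsed Issuer/Subject DNs and hash the DER SubjectPublicKeyInfo;
# here "spki" is just a byte string standing in for that structure.

def is_self_signed(cert):
    """Check (1): a certificate whose issuer equals its subject is
    self-signed and should be refused for server identification."""
    return cert["issuer"] == cert["subject"]

def spki_fingerprint(cert):
    """SHA-256 over the (stand-in) SubjectPublicKeyInfo bytes."""
    return hashlib.sha256(cert["spki"]).hexdigest()

def find_reused_keys(known_ca_certs, new_cert):
    """Check (2): flag a certificate whose public key matches a known
    CA's key -- a sign of poor CA key management that deserves review."""
    fp = spki_fingerprint(new_cert)
    return [ca["subject"] for ca in known_ca_certs
            if spki_fingerprint(ca) == fp]

root = {"subject": "CN=Example Root", "issuer": "CN=Example Root",
        "spki": b"root-key-bytes"}
server = {"subject": "CN=www.example.test", "issuer": "CN=Example Root",
          "spki": b"root-key-bytes"}  # reuses the root's key!

print(is_self_signed(root))              # True
print(is_self_signed(server))            # False
print(find_reused_keys([root], server))  # ['CN=Example Root']
```

The second check is cheap because a browser already keeps its known CA
certificates in a database; hashing each SPKI once and indexing by the
digest makes the reuse lookup a constant-time dictionary probe.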

> You don't have to search a lot....just visit Paypal and look at your
> browser. That's the way the wind is blowing. And if regular certificates are
> further circumvented and devalued, this is what you'll be left with in the
> end. Think about it!

Er, honestly? "regular certificates" have already been devalued by
the entry of over 100 competitors to Verisign. The chrome for the
browser, everyone keeps telling me, is outside the scope of this group
-- so why bring it up? But, since you did, I'll return an anecdote...

The fact is, "regular certificates" didn't go far enough. The idea of
domain-validated certificates was created by an audited and accepted
CA, and that idea appears to me to be what led to the creation of the
EV certificate profile by the CAB forum anyway. I removed my trust in
Firefox for every CA that I could find a CPS for which suggested that
they could issue domain-validated certificates, and I found that the
browser's chrome made it impossible for me to get anything done after
I did that.

I brought up the additional cost for EV certificates before; you
thought it wasn't appropriate for me to bring up because "only major
e-commerce sites and banks" would need them. And now, you seem to
suggest that eventually, it likely will end up that everyone who
touches credit card numbers or other fiduciary data will need an EV
cert anyway? I hate to say it, but look within your own industry for
the cause for that. (And then realize -- once again -- that
MasterCard and Visa, at the least, offer a $0 liability to anyone who
has a fraudulent transaction posted to their account. The credit card
issuers have already taken action to protect their clients... so what
exactly is it that any certificate is supposed to protect against, at
e-commerce sites? AFAICT, it's only really banks and other
fiduciaries that need the EV protection. The idea that everyone needs
an EV certificate is a natural herd reaction to the whispers of
uncertainty and doubt that the CA industry representatives
collectively whisper or have whispered into the ears of those who
control the writing of the software.)

-Kyle H

Paul Hoffman

Nov 9, 2008, 8:15:51 PM
to mozilla's crypto code discussion list
>Well, all the arguments have been heard on this already, and positions are fairly entrenched. It seems futile to have the debate over and over, and I for one would like to point out that it is uncomfortable to treat it like a political campaign.
>
>Perhaps a vote?

Not for me, but perhaps a design competition.

>It seems that Eddy and Nelson are in the anti-self-signed-certs camp, and I would join Kyle in the pro-self-signed-certs camp.
>

>Do others have strong-enough feelings?

I would like to see self-signed certs allowed, but not in the way that they have been in the past, and not with the UI they are now. My design would be:

a) Firefox keeps track of every TLS site you have ever visited. If a site has ever used an externally trusted cert, it can never use a self-signed cert. "Never" here means that there is no way in the UI to get to the site over HTTPS after a backwards transition: you get a long error message, but not a choice.

b) Mozilla keeps track of every TLS site known to have used an externally trusted cert. It does this by its own probes, possibly with help from its friends like Google or the CAs. This information is optionally used in the calculation from (a), with the default being to use it.

c) When you first get to a site that is self-signed that does not trigger a failure above, you get an informational (not scary) explanation of what is going on; this message has the SHA-256 fingerprint of the public key. You are told that you can go there just this once, or you can have Firefox remember this self-signed cert. If the site had a previously-memorized self-signed cert that is different than the one now, you get a different warning explaining the situation that asks if you are sure that the site has changed its self-signed cert; this message is half-ominous, but clickable through.

This system works without (b), but works much more safely with it.

Using such a system, all other cert errors can now have more useful warnings that will not be confused with those for sites that self-sign.
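
The state machine implied by (a) and (c) is small. Here is a rough
sketch, with a plain dict standing in for the browser's persistent
per-host store and the return strings standing in for the UI outcomes
(all of it illustrative, not an actual Firefox design):

```python
import hashlib

# Per-host memory: has the host ever presented an externally trusted
# cert, and which self-signed cert (by SHA-256 fingerprint) is pinned?
store = {}  # host -> {"trusted_ca_seen": bool, "pinned_fp": str or None}

def fingerprint(der):
    """SHA-256 fingerprint of the (stand-in) DER certificate bytes."""
    return hashlib.sha256(der).hexdigest()

def evaluate(host, der, externally_trusted):
    entry = store.setdefault(host,
                             {"trusted_ca_seen": False, "pinned_fp": None})
    if externally_trusted:
        entry["trusted_ca_seen"] = True
        return "ok"
    if entry["trusted_ca_seen"]:
        # Rule (a): a site that once had an externally trusted cert may
        # never fall back to self-signed -- hard failure, no override.
        return "hard-fail"
    fp = fingerprint(der)
    if entry["pinned_fp"] is None:
        entry["pinned_fp"] = fp       # rule (c): first visit, memorize
        return "inform-and-pin"       # informational, not scary
    if entry["pinned_fp"] != fp:
        return "warn-cert-changed"    # half-ominous, clickable through
    return "ok"

print(evaluate("wiki.test", b"cert-A", externally_trusted=False))
print(evaluate("wiki.test", b"cert-A", externally_trusted=False))
print(evaluate("wiki.test", b"cert-B", externally_trusted=False))
print(evaluate("bank.test", b"real-cert", externally_trusted=True))
print(evaluate("bank.test", b"cert-C", externally_trusted=False))
```

Part (b) would simply pre-populate `trusted_ca_seen` from Mozilla's own
probe data, so the hard-fail rule protects even first-time visitors.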

Eddy Nigg

Nov 9, 2008, 8:56:46 PM
to
On 11/10/2008 02:11 AM, Kyle Hamilton:

> On Sun, Nov 9, 2008 at 7:26 AM, Eddy Nigg<eddy...@startcom.org> wrote:
> Since there's a fairly argumentative tone going on, I think I should
> explain what my viewpoint is:

Kyle, your reply was highly interesting! Nevertheless I'll cut down my
response to a few highlights only... (after writing my reply I realized
that's more than expected).

> 1) I believe that users who own their own machines are sovereign. The
> USER, not the CA, is the root of all trust on that user's machine.

In principle I agree with this statement and I believe this is in fact
the case.

> 2) I believe that CAs which have been audited against their CPs and
> CPSes have a severe disincentive against making new types of

> certificates available....

I agree with most if not all points up to here...

> 6) I believe that the USA has a chartered requirement to not infringe
> on anyone's right of free association.

I don't think that anything prevents you from doing so.

> 7) Because of #1, I believe that users have the right to use their
> computers as tools they can work in any manner, for communications
> they wish to share in any manner, and with pseudonyms as they may
> require or desire -- not simply with the de facto legal identity
> information embedded in ways that CAs currently require. Also because
> of #1, I believe that assigning "ultimate trust" to any CA that is not
> run according to the need of the user is a violation of the user's
> sovereignity.

You have the freedom to embed your own CA, remove existing CAs, etc.
Nobody violates the user's sovereignty if he cares. Ways also exist
(mainly domain/email validated certificates) to get a certificate
without really having to disclose who you are publicly.

However, the same right also exists for the software vendor, who has
the freedom to decide what's in the best interest of the average user.
Its own interests are also legitimate up to a certain extent
(depending on its position in the market and other factors).


> 10) However, the dominant


> paradigm of cryptographically binding an identity to a key (but only
> as long as the identity that's bound is the legal identity) makes it
> difficult for advocates of cryptography to gain any traction in those
> environments.

aero...@gmail.com is hardly a legal identity...

>
> By trying to appear 'legitimate' the authority which you created falls
> into the same problems which plague every other authority.

I don't sense the problem really.

> As well,
> since the 'authority' that you run does not issue the credentials
> which can be used to authenticate a legal identity, your 'authority'
> is not 'authoritative'.

Doesn't this depend on the type of verification done? I agree that CAs
resemble much more a Notary Public than the authority governing the real
legal identities.

> but a general-purpose certificate issuer is only
> authoritative on 'the entities which have chosen to request issuance
> of a certificate from it'

Obviously. Nobody forces anybody into it.

> What I do have issue with, though, is that you seem to think that the
> service you created is a panacea

No it's not. As I stated - besides being legitimate and similar - we
provide an alternative and remove the financial barrier. For many this
is still a big hurdle which prevents mass adoption of (legitimate)
digital certificates (not the only one, but one of them - ease of use
is another).

> that the concept of monetary
> exchange is the only thing which prevents people from using the
> services of the general-purpose CAs. It's not.

Of course not, but I can't serve you otherwise. We should recognize
that the context we are discussing here is hundreds of millions of users
(of the browser), potentially millions of web sites, a handful of CAs
and a few browsers. There are various goals we try to achieve with PKI,
starting from preventing eavesdropping, reliance (mainly for business
purpose?), prevent fraud (phishing) if possible etc. In this context,
PKI makes a lot of sense...it may not for your (other) alternative
networks, affiliations and software.

> (Among other things,
> you issue end-user certificates, which cannot themselves issue other
> certificates.

Of course not. Would be the same as "self-authorized".

> This means that a user cannot use the certificate you
> issue to issue certificates to his router, his PC, to his friends, to
> any services that he chooses to run -- all he can issue are proxy
> certificates, which cannot be used for identity; they can only be used
> for delegation of the permission that an end-user certificate has.
> Since end-user certificates cannot be used to identify servers, the
> end-user still has an artificial barrier to entry if he wants to, say,
> protect his home network with ipsec.)

Mmmhh...can you explain that again? Get the right certificate for the
right purpose then...nothing prevents you from doing that.

> and the fact that there is no centrally-searchable
> database is what allows for certificates to be mis-issued by audited
> and trusted CAs, under the strict readings of their CPs.

I don't understand which misuse you refer to. However strict reading of
CPs sounds good to me ;-)

> I agree, this characterization is far too simple. I just want to
> maintain the ability for users to reason their ways through the
> process of adding new roots to their stores. I would like to simplify
> this process as much as possible.

Kyle! It's easier to add a CA root to your store than add an exception
for a self-authorized certificate. Did you try? Try for example this
link: http://www.startssl.com/?app=9

> 1) A certificate should not be self-signed if it is being used to
> identify a server, as a means of encouraging best-practice CA key
> management.

I'd sign on that :-)

> 2) If a certificate issued by a CA and a known CA have the same key,
> the certificate should be flagged for review (again, encourage
> best-practice CA key management).

Indeed...that actually should never happen.
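Kyle's point (2) above amounts to a collision check over public keys. A minimal sketch of such a check, assuming the DER-encoded SubjectPublicKeyInfo of each certificate has already been extracted (the labels and key bytes below are placeholders, not real keys):

```python
import hashlib

def key_fingerprint(spki_der: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded SubjectPublicKeyInfo."""
    return hashlib.sha256(spki_der).hexdigest()

def flag_shared_keys(certs: dict[str, bytes]) -> list[set[str]]:
    """Return groups of certificate labels that share the same public key.

    `certs` maps a label (e.g. a subject DN) to the DER bytes of that
    certificate's SubjectPublicKeyInfo. Any group with more than one
    member should be flagged for review, since a CA certificate and an
    end-entity certificate should never carry the same key.
    """
    by_fp: dict[str, set[str]] = {}
    for label, spki in certs.items():
        by_fp.setdefault(key_fingerprint(spki), set()).add(label)
    return [labels for labels in by_fp.values() if len(labels) > 1]
```

Hashing the SubjectPublicKeyInfo rather than the whole certificate is what makes this catch the case Kyle describes: two different certificates, same key.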

> 4) There should be an extension defined in CA and end-entity
> certificates for a CA to embed information on how to add the CA to the
> trusted store. (There's already two extensions defined for CAs to
> link to their certification practice statements, but none for any
> tutorials or information on how to add a CA to the root store.)

Many CAs have that already. It's the CA Issuers entry of the AIA
(Authority Information Access) extension. For example, examine the
intermediate CA certificate at
http://www.startssl.com/certs/sub.class1.server.ca.crt
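As an illustration of reading that entry programmatically, here is a sketch using the third-party Python `cryptography` package (not part of NSS; purely for demonstration):

```python
from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID, ExtensionOID

def ca_issuer_urls(pem_data: bytes) -> list[str]:
    """Return the CA Issuers URLs from a certificate's AIA extension."""
    cert = x509.load_pem_x509_certificate(pem_data)
    try:
        aia = cert.extensions.get_extension_for_oid(
            ExtensionOID.AUTHORITY_INFORMATION_ACCESS).value
    except x509.ExtensionNotFound:
        return []  # certificate carries no AIA extension at all
    # Keep only the CA Issuers entries (AIA may also carry OCSP URLs).
    return [desc.access_location.value
            for desc in aia
            if desc.access_method == AuthorityInformationAccessOID.CA_ISSUERS]
```

The same information is visible in any certificate viewer; the point is simply that the pointer to the issuing CA is machine-readable.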

> 5) The security exception dialog (even if it's not an actual dialog
> box, it's still an interaction between the user and software) should
> also have a prominent link to the criteria for inclusion in the
> default NSS database, and those criteria should have their rationale
> explained

Even though it's overkill, this is actually a nice idea. Something like
this should maybe be made available through the UI (the web site
information dialog via Larry). Obviously I wouldn't promote adding an
exception for correctly secured sites.

> so that the user has the ability to understand what
> exactly is at stake.

Again, 99.9% of users will never see it, let alone understand it.

> As well, there should be a statement "Mozilla,
> Firefox, and NSS take no responsibility for any consequences which may
> happen if you add a CA that they have not included."

So you want to make it harder to import a new root into the browser?

>
>> You don't have to search a lot....just visit Paypal and look at your
>> browser. That's the way the wind is blowing. And if regular certificates are
>> further circumvented and devalued, this is what you'll be left with in the
>> end. Think about it!
>
> Er, honestly? "regular certificates" have already been devalued by
> the entry of over 100 competitors to Verisign.

> The fact is, "regular certificates" didn't go far enough. The idea of
> domain-validated certificates was created by an audited and accepted
> CA, and that idea appears to me to be what led to the creation of the
> EV certificate profile by the CAB forum anyway.

You may have misunderstood. I'm in favor of domain-validated
certificates; they serve a legitimate purpose if applied correctly. What
I meant is that if DV (and otherwise validated) certificates are further
devalued by various means (including self-authorized certificates), and
the only way to rely on digital certification is by means of EV, then
the trend is clearly that support for DV will be dropped at some point
by the browser and CA alliance.

> I removed my trust in
> Firefox for every CA that I could find a CPS for which suggested that
> they could issue domain-validated certificates,

Aren't you contradicting yourself? Removing support for domain-validated
certs but promoting self-validated ones?

> and I found that the
> browser's chrome made it impossible for me to get anything done after
> I did that.

If you removed the StartCom root, then you deserve to be punished ;-)

> I brought up the additional cost for EV certificates before; you
> thought it wasn't appropriate for me to bring up because "only major
> e-commerce sites and banks" would need them. And now, you seem to
> suggest that eventually, it likely will end up that everyone who
> touches credit card numbers or other fiduciary data will need an EV
> cert anyway?

I'm saying that those are the trends! I didn't say anywhere that I
promote it, nor that I'm in favor of it!

> AFAICT, it's only really banks and other
> fiduciaries that need the EV protection.

Well, I consider private information even more worth protecting than
credit card numbers (besides the hassle). Sharing information may be
more critical, and it's good to know with whom you share it.

Anders Rundgren

unread,
Nov 10, 2008, 2:24:51 AM11/10/08
to mozilla's crypto code discussion list
I haven't followed this lengthy discussion in detail, but I have long wondered how DNSSEC
and SSL-CA-Certs should coexist.

Which one will be the "most" authoritative?

Could DNSSEC (if it finally succeeds) be the end of SSL-CA-certs?

Anders

Ian G

unread,
Nov 10, 2008, 9:31:26 AM11/10/08
to mozilla's crypto code discussion list
Eddy Nigg wrote:
> On 11/10/2008 02:11 AM, Kyle Hamilton:
>> On Sun, Nov 9, 2008 at 7:26 AM, Eddy Nigg<eddy...@startcom.org> wrote:
>> Since there's a fairly argumentative tone going on, I think I should
>> explain what my viewpoint is:
>
> Kyle, your reply was highly interesting! Nevertheless I'll cut down my
> response to a few highlights only... (after writing my reply I realized
> that's more than expected).


I also find myself unaccustomed to dealing with other people's long
replies ;) But this point struck me:


>> 10) However, the dominant
>> paradigm of cryptographically binding an identity to a key (but only
>> as long as the identity that's bound is the legal identity) makes it
>> difficult for advocates of cryptography to gain any traction in those
>> environments.
>
> aero...@gmail.com is hardly a legal identity...


That's because there is no such thing as a "legal identity."

(If you think this is "wrong" then please provide a definition of same,
preferably one that is useful for us.)


>> By trying to appear 'legitimate' the authority which you created falls
>> into the same problems which plague every other authority.
>
> I don't sense the problem really.


Legitimacy is 99% marketing. If the 99% believe you are legit, then you
are. If not, then not.

(If you need to ask what the other 1% is, you're in trouble.)


iang

Nelson Bolyard

unread,
Nov 10, 2008, 2:52:36 PM11/10/08
to mozilla's crypto code discussion list

DNSSEC only attempts to ensure that you get the (a) correct IP address.
It does absolutely nothing to ensure that you actually are connected to
the site you wanted. It doesn't obviate SSL or PKI at all.
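That division of labor is visible in how a TLS client is written; a minimal Python sketch using the standard library (the host name is an example, and the function needs network access to actually run):

```python
import socket
import ssl

def verified_peer_cert(host: str, port: int = 443) -> dict:
    """Connect to `host` and return its verified certificate.

    Name resolution (the part DNSSEC can protect) only picks the address
    we connect to. It is the TLS handshake, checked against the trusted
    CA roots and the expected host name, that proves who is answering.
    """
    ctx = ssl.create_default_context()  # loads the trusted root CAs
    # The default context both requires a valid chain and checks that
    # the certificate matches the name we intended to reach:
    assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
    with socket.create_connection((host, port)) as raw:      # DNS + TCP
        with ctx.wrap_socket(raw, server_hostname=host) as tls:  # PKI
            return tls.getpeercert()
```

Even with a DNSSEC-validated answer, dropping the certificate check would let any machine at that address (or any MITM on the path) impersonate the site.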
