.cn is authorized for i18n, and the * will match anything, allowing all
the classic i18n based attacks.
He enhanced the attack by finding some i18n characters that look like '/' or
'?', making it possible to hide the ".ijjk.cn" very far to the right, in many
cases beyond the end of the visible part of the URL bar.
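The trick can be sketched roughly as follows (all names are hypothetical, and the two confusable characters chosen here are just illustrative, not necessarily the ones used in the actual demo):

```python
# Sketch of the URL-bar trick described above. All names are hypothetical;
# the confusable characters are illustrative examples only.

FAKE_SLASH = "\u2044"  # FRACTION SLASH, resembles '/'
FAKE_DOT = "\u2024"    # ONE DOT LEADER, resembles '.'

def spoofed_host(real_domain="ijjk.cn"):
    # One long DNS label full of look-alike separators; because it is a
    # single label, it is matched by the wildcard cert *.ijjk.cn.
    label = FAKE_DOT.join(["www", "bank", "com"]) + FAKE_SLASH + "login"
    return label + "." + real_domain

def visible(url, width=20):
    # A narrow URL bar shows only the first `width` characters.
    return url[:width]

url = "https://" + spoofed_host()
# visible(url) looks like "https://www.bank.com"; the ".ijjk.cn" is cut off
```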
So what is the proper immediate/long-term solution? Disable punycode for
the wildcard part of certificates?
PS: Some of his other remarks about the current state of SSL are
interesting, but they are not really news for anyone on this group
and do not require similar immediate action.
This was striking:
Get a domain-validated SSL wildcard cert for *.ijjk.cn
> So what is the proper immediate/long-term solution? Disable punycode for
> the wildcard part of certificates?
Disallow domain-validated wild card certificates. Make identity
validation a requirement, same as with code signing. It has been said
over and over again, and not by chance.
>
> PS: Some of his other remarks about the current state of SSL are
> interesting, but they are not really news for anyone on this group
> and do not require similar immediate action.
Nope... but this is another good one:
If we want to avoid the dialogs of death, start with HTTP not HTTPS.
Yeah, why not, actually, because it's easy to fake that bluish icon too...
--
Regards
Signer: Eddy Nigg, StartCom Ltd.
Jabber: star...@startcom.org
Blog: https://blog.startcom.org
Other than this specific attack, what are the concerns about wildcards that
would make us take such a drastic action?
It sounds to me that we could and should fix this bug simply by disabling
punycode for the wildcard portion.
--BDS
> PS: Some of his other remarks about the current state of SSL are
> interesting, but they are not really news for anyone on this group
> and do not require similar immediate action.
I think actually his presentation is much better than just "old news"
and the results will be news for people on this group.
1. He has clearly laid out the trap of negative versus positive
feedback, and explained why Firefox 3 UI changes make the result less
secure than Ff2.
(This is not to say that the Ff3 UI should never have been done; we
needed the experiment to clarify why that direction was wrong. I, for
one, could not have put it that way.)
2. Also, he has highlighted the trap of HTTP versus HTTPS. This has
been known as a critical weakness since day 1 of SSL / secure browsing.
It was discovered within months of rollout, and basically ignored
because business overrode the security model. Because it opens up a
systemic attack across and between boundaries, it means that secure
browsing can never be "high security".
Fixing these requires a lot of changes, none of which are possible
without agreement on the basic weaknesses. Which we don't have.
3. And then there is the punycode thing. That's just spice, as far as I
can see.
iang
> Other than this specific attack, what are the concerns about wildcards that
> would make us take such a drastic action?
>
> It sounds to me that we could and should fix this bug simply by disabling
> punycode for the wildcard portion.
The issue is one of cross-area complexity. Punycode is "powerful" and
so are wildcards. By themselves, they are OK, and they work "on paper".
But when you combine them, there are possible weird interactions. As
the paper showed, there are ways in which you can combine these things
to create a good attack.
To a large extent, there may be some merit in establishing a principle
or criteria, such as Eddy is pointing towards:
* powerful features are only available to well-verified people.
+ wildcards
+ punycode
+ codesigning
(That's just a hypothetical.)
iang
Because punycode isn't the real problem here...
I don't think this is what he is saying exactly, but rather that for
HTTP the world always looks fine...
...the positive indications for Secure are weak; the ones for Plain
don't exist. Some of my thoughts about this from last year:
http://blog.startcom.org/?p=86
> 3. And then there is the punycode thing. That's just spice, as far as I
> can see.
>
Also the wild card issue we've been discussing here already. Some of it
is presented here: http://blog.startcom.org/?p=83
(PS. I'm moving the discussion to mozilla.dev.security.policy, please
follow up there as this is the right news group for it)
Yes, it's surprising how obvious some of these attacks seem *after*
they have been done, but it takes so long to realize they can be done.
The md5 collision between a normal and a *CA* certificate was similar
for me: "how did we not think of it earlier, when it was already
obvious someone would soon create a collision between two real md5
certs, that they just had to do that to make the attack really effective".
This being said: is there already a bug open for this? The only thing
that stops me from opening it myself is that it might already exist but
be security-restricted.
PS: I think this discussion should be on mozilla.dev.security since
it's about a security vulnerability, not crypto and not security policy.
Does everyone share my opinion? (I'm setting the follow-up there)
> It sounds to me that we could and should fix this bug simply by disabling
> punycode for the wildcard portion.
I'm not sure what you're proposing here, Ben, or what effect you think
it would have.
Homographic characters aren't a problem for wildcard matching. They're a
problem for users' eyeballs. The attack that was demonstrated could have
been done without wildcards. Changing the wildcard matching rules would
not eliminate this attack (in the general case).
In any case, I think Dan's recent IDN blacklist bug is on the right track.
I'm proposing that when Firefox displays the domain name of a site, it
should only use decoded IDN display for the portion of the domain name
which actually appears in the certificate, leaving the rest in raw
punycode. So for a wildcard cert *.ijjk.cn, the display would be
xn--blahblahunreadablepunycode.ijjk.cn
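A minimal sketch of that display rule, as I read the proposal (`decode_idn` here is a toy stand-in for the browser's real IDNA handling, which applies many more checks):

```python
def decode_idn(name):
    # Toy IDNA decoding: turn each xn-- label back into Unicode.
    # Real browsers apply many additional checks before doing this.
    labels = []
    for label in name.split("."):
        if label.startswith("xn--"):
            try:
                label = label[4:].encode("ascii").decode("punycode")
            except UnicodeError:
                pass  # leave undecodable labels as-is
        labels.append(label)
    return ".".join(labels)

def display_name(host, cert_name):
    # Decode IDN only for the suffix that literally appears in the
    # certificate; the wildcard-matched part stays as raw punycode.
    if cert_name.startswith("*."):
        suffix = cert_name[2:]
        if host.endswith("." + suffix):
            wild = host[: -(len(suffix) + 1)]
            return wild + "." + decode_idn(suffix)
    return decode_idn(host)
```

So a host matched by `*.ijjk.cn` keeps its unreadable `xn--` label on the left, while a name that appears verbatim in the cert is decoded normally.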
> Homographic characters aren't a problem for wildcard matching. They're a
> problem for users' eyeballs. The attack that was demonstrated could have
> been done without wildcards. Changing the wildcard matching rules would
> not eliminate this attack (in the general case).
I don't see how the attack could have been done without wildcards. CA
guidelines say that certificates should not be issued with homographic
characters that might cause confusion, and as far as we know these
guidelines are being followed. The attack here takes place entirely within
the wildcard portion of the domain because that's the portion the CA can't
verify when they issue the certificate.
--BDS
Thank you for confirming and being clear on this! This is a general
problem with wild cards. The solution to prevent this could be easy.
Thanks for explaining that. You're proposing a change to the Firefox
display, not to the actual wildcard matching rules.
One implication of your proposal is that the code that would attempt to
determine which part of the name matches a wildcard would need a way to
fetch the particular DNS name string from the cert that was used in the
match. That's quite feasible, but today, the function that does that
name matching does not output the particular string against which it
successfully matched. You would want a version of the function that
could do that, I think.
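In pseudocode terms, the needed change is roughly the following: a matcher that returns the matched cert name instead of a bare boolean. This is only a sketch, not the actual NSS API; the single-label wildcard rule follows common practice (e.g. RFC 2818):

```python
def match_cert_name(host, cert_names):
    # Return the cert name that matched `host`, or None. Returning the
    # matched pattern (rather than just True/False) is what lets the UI
    # know which part of the host was covered by a wildcard.
    host = host.lower()
    for name in cert_names:
        n = name.lower()
        if n == host:
            return name
        if n.startswith("*."):
            parts = host.split(".", 1)
            # '*' matches exactly one left-most label
            if len(parts) == 2 and parts[1] == n[2:]:
                return name
    return None
```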
>> Homographic characters aren't a problem for wildcard matching. They're a
>> problem for users' eyeballs. The attack that was demonstrated could have
>> been done without wildcards. Changing the wildcard matching rules would
>> not eliminate this attack (in the general case).
>
> I don't see how the attack could have been done without wildcards. CA
> guidelines
Which (whose) guidelines? Are you referring to RFC 5280 section 7, or
to some other guidelines?
Mozilla's CA cert policy doesn't even mention this subject.
> say that certificates should not be issued with homographic
> characters that might cause confusion, and as far as we know these
> guidelines are being followed.
By all CAs? That would be surprisingly delightful, if true.
When I consider the problems we've recently seen with fundamental issues
like properly verifying the identity of the certified subject, I'd be
surprised if something as esoteric as IDN is handled correctly by all CAs.
> The attack here takes place entirely within the wildcard portion of the
> domain because that's the portion the CA can't verify when they issue the
> certificate.
A wildcard cert enabled numerous different sites to be spoofed with a
single cert for this demo. But I'd be surprised to learn that there are
NO CAs out there who are willing to issue certs with seemingly verifiable
non-wildcard IDN domain names.
>>> Homographic characters aren't a problem for wildcard matching. They're a
>>> problem for users' eyeballs. The attack that was demonstrated could have
>>> been done without wildcards. Changing the wildcard matching rules would
>>> not eliminate this attack (in the general case).
>> I don't see how the attack could have been done without wildcards. CA
>> guidelines
>
> Which (whose) guidelines? Are you referring to RFC 5280 section 7, or
> to some other guidelines?
>
> Mozilla's CA cert policy doesn't even mention this subject.
>
>> say that certificates should not be issued with homographic
>> characters that might cause confusion, and as far as we know these
>> guidelines are being followed.
>
> By all CAs? That would be surprisingly delightful, if true.
> When I consider the problems we've recently seen with fundamental issues
> like properly verifying the identity of the certified subject, I'd be
> surprised if something as esoteric as IDN is handled correctly by all CAs.
I agree with Nelson's comments. I'd even go further and say it is not
likely that a CA can reliably identify an IDN or even an ordinary domain
that is "sensitive" before the event.
CAs are not "global branding police" and do not have the wherewithal to
become such. Consider two countries with different languages and little
common cultural connection, say Peru and Iraq. How is the Iraqi CA
going to spot that an Iraqi just purchased a domain that looks like
Peru's biggest bank? (IDNs and wildcards are just distractors in this
question; they make the problem worse, but don't change it fundamentally
that I can see.)
It *might* be ok if we were just talking about one country's market and
everyone knows the names of all banks ... but that's not true in USA
where the banks number in the many thousands, and that's where the hot
threat scenario is.
(I do not see a solution for this being possible at the guidelines / CA /
Mozilla policy level, including EV, but please correct me if I'm wrong...)
>> The attack here takes place entirely within the wildcard portion of the
>> domain because that's the portion the CA can't verify when they issue the
>> certificate.
>
> A wildcard cert enabled numerous different sites to be spoofed with a
> single cert for this demo. But I'd be surprised to learn that there are
> NO CAs out there who are willing to issue certs with seemingly verifiable
> non-wildcard IDN domain names.
How would they do it?
iang
This does not fix the problem that Eddy pointed out, that you don't need Punycode to make a sensible-looking domain name appear on the left of a wild-carded domain name.
>I don't see how the attack could have been done without wildcards. CA
>guidelines say that certificates should not be issued with homographic
>characters that might cause confusion
They do? Where?
>, and as far as we know these
>guidelines are being followed.
Pointers, please. This is fascinating, if true.
>The attack here takes place entirely within
>the wildcard portion of the domain because that's the portion the CA can't
>verify when they issue the certificate.
That's true whether or not it is an IDNA label.
I believe that Unicode Technical Report #36 addresses this.
-Kyle H
UTR #36 is not a CA guideline, it is a guideline that some CAs might read and implement. I know of none that have. Does anyone here know which CAs, if any, do any filtering based on IDNA labels in requested certs?
--Paul Hoffman
I think part of what's going on here is a confusion between CAs and
domain name registrars. IIRC there was indeed some sort of agreement
among domain name registrars to implement special checking for
internationalized domain names. I think we (Mozilla) made this a
condition for turning on IDN support in Mozilla products for particular
TLDs. However as you note I'm not aware of an agreement addressing
similar measures to be taken by CAs.
Gerv Markham was pretty heavily involved in the IDN issues with domain
name registrars. I've copied him on this in hopes he can add more
information.
Frank
--
Frank Hecker
hec...@mozillafoundation.org
There was no such agreement. TLD registries ask which language a name is in; some then do some filtering based on what characters they think are used by particular languages. This is far from a science and fails miserably for most European languages.
>I think we (Mozilla) made this a condition for turning on IDN support in Mozilla products for particular TLDs.
True. And, IMHO, embarrassing to Mozilla. The reason for showing the Punycode for www.éxample.com but the actual characters for www.éxample.org takes a lot of stretching, to say the least.
>However as you note I'm not aware of an agreement addressing similar measures to be taken by CAs.
Of course, it would have to be an agreement with *every* CA in your trust anchor pile, which is kind of unlikely.
>Gerv Markham was pretty heavily involved in the IDN issues with domain name registrars. I've copied him on this in hopes he can add more information.
It will be interesting to see if he has anything to say about CAs, who are the real security concern here.
Some CA policies do. I can't recall right now, but EV might address that
as well.
>> The attack here takes place entirely within
>> the wildcard portion of the domain because that's the portion the CA can't
>> verify when they issue the certificate.
>
> That's true whether or not it is an IDNA label.
Yup.
>> I believe that Unicode Technical Report #36 addresses this.
>
> UTR #36 is not a CA guideline, it is a guideline that some CAs might
> read and implement. I know of none that have. Does anyone here know
> which CAs, if any, do any filtering based on IDNA labels in requested certs?
>
You don't like that I mention particular CAs, but the one I'm affiliated
with does to some extent. ;-)
Nelson, and everyone else not knowing the details of this:
The problem is solved not at the CA level, but at the registry/TLD level.
By default, i18n is in the *disabled* state and the punycode is
displayed instead of the i18n domain name.
When the registry in charge of a given TLD states that it has a
homograph avoidance method in place that guarantees that nobody will be
able to obtain a dangerous i18n domain name, then punycode decoding is
allowed for that TLD.
See the details of that policy here:
http://www.mozilla.org/projects/security/tld-idn-policy-list.html
When issuing an SSL server cert there is no need for special checking
at the CA level, because nobody will be able to obtain a dangerous
domain name within that TLD in the first place.
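The decision the browser makes is, in essence, just a TLD lookup. A sketch (the whitelist contents below are an illustrative subset, not the authoritative list, which is at the URL above):

```python
# Illustrative subset of Mozilla's IDN-enabled TLD whitelist; the
# authoritative list is at
# http://www.mozilla.org/projects/security/tld-idn-policy-list.html
IDN_ENABLED_TLDS = {"cn", "jp", "de", "at"}

def should_decode_idn(host):
    # Decode punycode for display only if the TLD's registry has an
    # accepted homograph-avoidance policy; otherwise show raw xn-- form.
    tld = host.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in IDN_ENABLED_TLDS
```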
Writing the above was very useful for me, because it made me realize the
current problem is much wider than just wildcard certificates.
The attack is possible even without a wildcard certificate.
If it fails, then report it to secu...@mozilla.org as per the policy here:
http://www.mozilla.org/projects/security/tld-idn-policy-list.html
If the failure is confirmed and not solvable, then i18n should be
disabled for that TLD.
If you feel your report gets wrongly ignored or that mozilla.org
acknowledges it but fails to take proper action to disable i18n for that
TLD, then feel free to report it on the mozilla.dev.security
newsgroup/mailing-list.
But don't complain that the current system doesn't work and that
mozilla.org does nothing about it, without precisely explaining how it
fails and without giving to mozilla.org an opportunity to correct the
problem.
Like the IANA requirement to state correct information in the WHOIS
records? Makes me laugh...
> Writing the above was very useful for me, because it made me realize the
> current problem is much wider than just wildcard certificates.
> The attack is possible even without a wildcard certificate.
That's correct. IDN presents a different problem than wild cards.
Unfortunately the original reporter badly mixed the two up, which
wasn't useful. The two issues need to be treated separately.
This has been a very interesting exploration! OK, so in the sense of
"wildcard versus IDN" ... and of apples & oranges, chalk and cheese: Do
people feel that:
* IDNs present more danger than wildcards,
* wildcards present more danger than IDNs,
* they are approximately the same level of danger,
  and trying to separate them out is not efficacious
  at this level of discussion?
Pick one?
This would feed into "problematic practices" and a clause that says "do
you treat these things with more care?" For example, if we look at:
https://wiki.mozilla.org/CA:Problematic_Practices
We see one, and not the other.
iang
I do not like you mentioning particular CAs to advertise (yourself) or attack (your competitor); asking for a list of CAs that implement policies that some might agree or disagree with seems reasonable, depending on how you color your answer. :-)
Jean-Marc, you have fallen for Gerv's wishful thinking and security theater. There are multiple TLDs on that list that have policies that say *nothing* about preventing homograph spoofing.
>If the failure is confirmed and not solvable, then i18n should be disabled for that TLD.
The failure listed on that page does not match the policies listed for the TLDs. That is, if a TLD never said that it had a policy relating to confusing homographs, just that it had a policy of some sort, there is nothing to report. Note how few even list the allowed characters (and, in one important case, the character list does not exist at the URL given).
>If you feel your report gets wrongly ignored or that mozilla.org acknowledges it but fails to take proper action to disable i18n for that TLD, then feel free to report it on the mozilla.dev.security newsgroup/mailing-list.
You completely misunderstood my message: the failure is in Mozilla thinking that asking a registrant to say what language they are registering in will achieve any significant security. I laugh along with Eddy on this.
>But don't complain that the current system doesn't work and that mozilla.org does nothing about it, without precisely explaining how it fails and without giving to mozilla.org an opportunity to correct the problem.
I didn't say "mozilla.org does nothing about it": you are doing something about it. You're tenaciously sticking to a silly bit of security theater that makes the users' experience worse. To me, that's a failure, but it seems like you think that it is worth it.
Again: it is not clear how you can say that www.éxample.com is unsafe but www.éxample.org is safe given what is said on that page.
--Paul Hoffman
Anything which can be misused in such a way should be effectively
prevented by policy.
> https://wiki.mozilla.org/CA:Problematic_Practices
>
> We see one, and not the other.
>
The one which isn't listed (IDN) requires another policy decision
concerning what CAs should do in order to prevent occurrences such as
PAYPA1.COM, MICR0S0FT.COM, PayPaI.com and similar. Only then is it
reasonable to limit IDNs on the same basis.
Every TLD on that list should have a published set of characters it
permits and, if that set contains homographic characters, an
anti-spoofing policy. If this is not true for one or more, please let me
know.
> You completely misunderstood my message: the failure is in Mozilla
> thinking that asking a registrant to say what language they are
> registering in will achieve any significant security. I laugh along
> with Eddy on this.
Mozilla does not think this. What makes you say we do? Unlike other
browsers', Mozilla's anti-spoofing mechanisms currently do not rely on
things like "no mixed scripts".
> Again: it is not clear how you can say that www.éxample.com is unsafe
> but www.éxample.org is safe given what is said on that page.
The "rationale" section of this document explains very well why our
policy and technical implementation is as it is:
http://www.mozilla.org/projects/security/tld-idn-policy-list.html
Gerv
Why do we think that wildcard certs are required to demonstrate the problem?
Gerv
You'll need to elaborate on what you are saying here, because the way I
read it, he _hates_ the new FF3 security error pages, and will do
anything to avoid them. That looks like a win to me.
Gerv
The security of our anti-spoofing measures does not rest on any sort of
"language" being associated with a name, or any filtering that may be
done on that basis. It rests on each registry having a list of
characters which are the only ones it permits and either:
a) that list not having any homographs; or
b) them having a list of which characters are homographic to each other
and an anti-spoofing policy they deploy whenever a domain name with one
of any such pair is registered.
All the registries added to the list had this when they were added. As I
said in my previous message, if you know of a registry which no longer
meets these criteria, please let me know.
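The registry-side check being described could be sketched like this (the permitted character set and homograph pairs below are hypothetical data, not any real registry's policy):

```python
# Hypothetical registry data: the published permitted character set and
# the pairs the registry treats as homographs of each other.
PERMITTED = set("abcdefghijklmnopqrstuvwxyz0123456789-")
HOMOGRAPH_FOLD = {"0": "o", "1": "l"}

def fold(label):
    # Map each character to its homograph-class representative.
    return "".join(HOMOGRAPH_FOLD.get(c, c) for c in label)

def check_registration(label, existing_labels):
    # (a) reject characters outside the published set;
    # (b) flag names that collide with an existing registration once
    #     homographic characters are folded together.
    if any(c not in PERMITTED for c in label):
        return "reject"
    for other in existing_labels:
        if fold(label) == fold(other) and label != other:
            return "review: homograph of " + other
    return "ok"
```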
> It will be interesting to see if he has anything to say about CAs,
> who are the real security concern here.
CAs are irrelevant to spoofing issues. If www.something.com is a
homograph for www.someth1ng.com, that's a bad thing irrespective of
whether the owners of each of the two domains can get a certificate for
them.
Gerv
> The "rationale" section of this document explains very well why our
> policy and technical implementation is as it is:
> http://www.mozilla.org/projects/security/tld-idn-policy-list.html
>
OK, reading the IDN policy I understand that registrars use human, and
not technical, means to provide a chain of trust from the registry to
the application to the user.
That's about the same crap as this one:
http://www.icann.org/en/announcements/advisory-10may02.htm
In particular:
"At its expense, Registrar shall provide an interactive web page and a
port 43 Whois service providing free public query-based access" and
"*Require* each registrant to submit (and keep updated) accurate contact
details"
Now I could give you scores of registrars which not only don't provide
WHOIS lookup service but also effectively and willingly prevent the
publishing of those details (WhoisGuard anybody?).
How to prove it? Does Mozilla buy domain names (or purchase certificates)
from time to time in order to check that its policies are being followed?
> CAs are irrelevant to spoofing issues. If www.something.com is a
> homograph for www.someth1ng.com, that's a bad thing irrespective of
> whether the owners of each of the two domains can get a certificate for
> them.
Only CAs are relevant if at all. You don't expect that 200 domain names
were registered by going through anti-spoofing checking and measures, do
you?!
Concerning the example above, a certificate for the latter would
represent a problem at least for some CAs; it might nevertheless be
issued if there is evidence that no basis for concern exists (due to
out-of-band identity validation, for example).
It's important to realize that security must be designed into the
system from the ground up, and that all the pieces of a secure system
must operate together properly. It's not *just* the CA; it's everything.
Since we don't have a secure system, we need to find a way to make
things as secure as possible given the lack of cooperation from the
registrars/ICANN/browser vendors/CAs/users.
-Kyle H
Ideally, yes, you are right...
> Since we don't have a secure system, we need to find a way to make
> things as secure as possible given the lack of cooperation from the
> registrars/ICANN/browser vendors/CAs/users.
...but I think that the CAs would be the best equipped and most capable
parties among those (beyond unilateral actions on the part of the browser
vendors, like removing support for wild cards, IDN and numbers in domain
names generally and/or in certificates particularly). What's lacking is
perhaps a policy making those requirements. It's of course just my
opinion on this matter...
Browser vendors can't take unilateral actions (except perhaps showing
both the punycode and interpreted versions of the site name, and
explicitly in the chrome breaking it into the 'protocol', 'host',
'port', and 'query string' portions of the URL).
Removal of support for wildcards can't be done without PKIX action, if
one wants to claim conformance to RFC 3280/5280.
This is part of the reason why the CAB Forum was created. However, at
this point, ICANN also needs to be brought into the discussion.
-Kyle H
Recognising that this is an old debate, etc. etc., I thought myself it
was clear enough.
The negative response from FF3 is now so fierce that the attacker
prefers not to trigger it. Yes, this is "a win" on paper.
The point that is made is that the "positive response" is so weak that
it doesn't support the overall effect; the attacker just prefers to
trick the user using HTTP and some favicons or other simple symbols.
And (so the author claims) gets away with it easily enough, because
there is no "positive response" that is worth much these days.
He lays out the trap, but reasonable people can differ as to whether the
trap has closed, and whether it really hurts us.
Probably, we could see this as more of a killer issue if an ordinary
sysadmin or an ordinary developer thought the "FF3 negative response"
was so fierce that they also preferred to stick to HTTP.
Such a thing, if it happened, would be a failure of security; the
principle here is that "the first requirement of security is usability."
This is easily shown: if usability goes down, users drop the
system's security, and then they don't get security. Overall delivered
security goes down, and this tends to dominate theoretical or
cryptographic security.
However, this is theorising; there is no evidence that this is happening.
iang
On 21/2/09 15:34, Peter Gutmann wrote:
> "Steven M. Bellovin"<s...@cs.columbia.edu> writes:
>
>> http://www.theregister.co.uk/2009/02/19/ssl_busting_demo/ -- we've talked
>> about this attack for quite a while; someone has now implemented it.
>
> My analysis of this (part of a much longer writeup):
>
> -- Snip --
>
> [...] it's now advantageous for attackers to spoof non-SSL rather than their
> previous practice of trying to spoof SSL. The reason for this is that the
> Hamming distance between the eye-level SSL indicators and the no-SSL
> indicators (even without using the trick of putting a blue border around the
> favicon) is now so small that, as shown in the magnified view in [Reference to
> graphic snipped], it's barely noticeable (imagine this crammed up into the
> corner of a 1280 x 1024 display, at which point the difference is practically
> invisible). What makes this apparently counterintuitive spoof worthwhile is
> the destructive interaction between the near-invisible indicators and the
> change in the way that certificate errors are handled. In Firefox 3 any form
> of certificate error (including minor bookkeeping ones like forgetting to pay
> your annual CA tax) results in a huge scary warning that requires a great many
> clicks to bypass. In contrast not having a certificate at all produces almost
> no effect. Since triggering negative feedback from the browser is something
> that attackers generally want to avoid while failing to trigger positive
> feedback has little to no effect, the unfortunate interaction of these two
> changes in Firefox is that it's now of benefit to attackers to spoof non-SSL
> rather than spoofing SSL.
>
> -- Snip --
>
> It's the law of unintended consequences in effect, HCI people pointed out some
> time ago that the change in the security indicators in FF3 was a bad idea but
> AFAIK 'Moxie Marlinspike' is the first person to show that it's even worse
> than that because of the destructive interaction between the
> security-indicator change and the cert-warning change.
>
> The first step in fixing this would be to undo several of the UI changes that
> lead to the easily-spoofed security indicators in FF3 and bring back the FF2
> versions, which would at least partially upset the nasty interaction that
> makes this attack effective.
>
> Peter.
>
> ---------------------------------------------------------------------
> The Cryptography Mailing List
I agree that the positive versus negative indicators are not favorable,
especially for regular SSL. We had "fierce" fights on this subject here
and at the bugs...
...however I must correct an impression which seems to have taken
hold because of the way FF3 handles SSL errors, but which is
absolutely not correct.
I remember when we discussed the adoption of EV here that I pointed out,
and could reasonably prove, that SSL certs were and are not part of
phishing attacks - meaning that the vast majority of all known phishing
sites never used SSL certs in the first place. Now this was way before
the debut of FF3. Also these days, phishing sites don't use SSL but
plain text; in itself this is hardly news, and it is not due to the SSL
UI and error pages of FF3 either (and despite what Peter Gutmann has to
say concerning the non-existent CA tax ;-) ).
Huh? Both these RFCs completely step out of the way when it comes to
wildcard certificates - just read the last paragraph of section
4.2.1.7/4.2.1.6. PKIX never did wildcards in its RFCs.
Kaspar
Definitely not a PKIX RFC. "Removal of support for wildcards" doesn't
need any PKIX action.
Kaspar
Right. This can also be seen as evidence that secure browsing has not
protected the users, because it was so easily bypassed.
Security is a balance, not a binary. The point of security is to ...
deliver security to users, not feelgood to cryptographers. If the users
aren't using it, then it isn't delivering security.
If the security is "too hard to use" and therefore delivers less
security, we should be making security easier to use, so that it covers
more users. The first requirement of security is usability - this time,
every time, and always - because if the user decides not to use it, it's
game over. And this is the danger that the current FF3 beta page may
have invited.
(Having said that, I think the "negative response" has been
substantially improved by Johnathan and his team. Although I've not
seen it in action myself, it may have brought things back closer to
balance and this conversation would be out of date.)
iang
Or... the price to stage an attack using SSL is still considered too
high. That's rather a point for SSL than against it, IMO.
> If the security is "too hard to use" and therefore delivers less
> security, we should be making security easier to use.
Where do you see the problem exactly? Is it hard for a user browsing to
a web site when in SSL mode? I guess not...
...better indicators when submitting information via plain text should
always prompt (that's a setting I have in my browser, and it's
astonishing how many times I must confirm or deny the information to go
through). Negative indicators whenever a user interaction happens in
plain text should be made clearer, certainly when the password field
type is used in a form, but not only then.
Which says:
Finally, the semantics of subject alternative names that include
wildcard characters (e.g., as a placeholder for a set of names) are
not addressed by this specification. Applications with specific
requirements MAY use such names, but they must define the semantics.
At 10:50 PM -0800 2/23/09, Kyle Hamilton wrote:
>RFC 2818 ("HTTP Over TLS"), section 3.1.
RFC 2818 is Informational, not Standards Track. Having said that, it is also widely implemented, and is the main reason that the paragraph above is in the PKIX spec.
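Since the RFCs leave wildcard semantics open, here is a minimal sketch of the left-most-label matching convention that RFC 2818-style implementations commonly use. This is an illustration of the common practice, not normative text from either RFC:

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Match a certificate name against a hostname, allowing '*'
    only as the entire left-most label (a common RFC 2818-style
    convention; the RFCs themselves leave the semantics open)."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    # A wildcard never spans multiple labels: label counts must agree.
    if len(p_labels) != len(h_labels):
        return False
    # Every label to the right of the left-most one must match exactly.
    if p_labels[1:] != h_labels[1:]:
        return False
    return p_labels[0] == "*" or p_labels[0] == h_labels[0]

# '*.ijjk.cn' matches any single left-most label -- including a
# punycode one that a browser will decode for display.
print(wildcard_matches("*.ijjk.cn", "xn--anything.ijjk.cn"))  # True
print(wildcard_matches("*.ijjk.cn", "a.b.ijjk.cn"))           # False
```

Note that under this convention the CA never sees, and cannot vet, the label the wildcard ends up matching -- which is exactly what the attack exploits.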
We rely on good citizens like you to let us know when there's a problem
:-) We don't regularly attempt to break the security of CA cert issuance
procedures, either.
> Only CAs are relevant if at all. You don't expect that 200 domain names
> were registered by going through anti-spoofing checking and measures, do
> you?!
I don't understand what you are saying here. :-(
Gerv
That's a very bad idea if you RELY on that. Instead you should implement
a procedure and plan for random checks of various issues. Remember the
bug with Equifax/GeoTrust, which issued directly from a CA root, and how
surprised you were, and how surprised I was that you were surprised? ;-)
I can't know what Mozilla knows or doesn't know!
>> Only CAs are relevant if at all. You don't expect that 200 domain names
>> were registered by going through anti-spoofing checking and measures, do
>> you?!
>
> I don't understand what you are saying here. :-(
>
Ouch, sorry! That should have been 200 *million* domain names that were
registered by going through some anti-spoofing checks and measures...
OTOH domain spoofing is dangerous *even* when there's no certificate
involved, so it makes sense to require to solve it at the registrar/DNS
level, and not at the CA level.
But you are right to point out that the volume of domain names involved
makes any procedure that is not fully automated unrealistic.
So I think Mozilla should require that the procedure be fully
automated, and not accept any solution that requires human
intervention to approve requests, even if only for a portion of them.
Just one thing: the use of a wildcard certificate was a misleading red
herring in the implementation of the attack.
What's truly broken is that the current i18n attack protection relies on
the checking done by the registrar/IDN registry, and that the registrar
can only check the second-level domain name component.
Once they have obtained their domain name, attackers can freely use the
third-level domain name component to implement any i18n attack they
want, even if no wildcard certificate is authorized.
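To illustrate the point: once a registrant controls a second-level domain, any IDN label at the third level is entirely theirs, with no registrar vetting. A rough sketch using Python's standard-library IDNA 2003 codec (the hostname here is made up; a Cyrillic 'а' stands in for the Latin 'a' it resembles):

```python
# The registrar only vets the second-level label ('ijjk.cn' here);
# the third-level label is under the registrant's sole control.
spoof_label = "p\u0430ypal"            # Cyrillic U+0430, looks like 'paypal'
hostname = spoof_label + ".ijjk.cn"

# The built-in 'idna' codec (IDNA 2003) happily produces a valid
# ASCII-compatible DNS name for it:
ace = hostname.encode("idna")
print(ace)                             # left-most label is 'xn--...'
assert ace.split(b".")[0].startswith(b"xn--")

# ...and any client that decodes punycode for display renders the
# homoglyph form right back:
assert ace.decode("idna") == hostname
```

Nothing in this path involves a certificate at all, which is why fixing the wildcard rules alone would not close the hole.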
This is not to say that wildcard certificates are not bad, evil, or
anything else, but that nothing truly new about them has been brought to
light by this attack.
So talk about wildcard certificate all you want, but this is a separate
discussion from the discussion about the solution for this new i18n attack.
And the solution for it will not be wildcard-certificate related, will
not be easy or obvious, and so needs to be discussed as widely as possible.
Also, there will be no crypto involved in the solution, as it's not
acceptable to just leave ordinary DNS users out in the cold with regard
to the attack. So it needs to be discussed in the security group, not
crypto.
The vast majority of those domain names are ASCII, not IDN.
> So I think Mozilla should require that the procedure be fully
> automated, and not accept any solution that requires human
> intervention to approve requests, even if only for a portion of them.
Why? If a registrar wants to do all their checking manually, what's that
to us? It'll raise their costs, but that's their business.
Gerv
The author was showing that even looking at the lock doesn't help in a spoofing attack if the attacker has a wildcard certificate. In this way, it is an attack improvement.
>This is not to say that wildcard certificates are not bad, evil, anything, but that nothing new has been truly brought about that by this attack.
>
>So talk about wildcard certificate all you want, but this is a separate discussion from the discussion about the solution for this new i18n attack.
>And the solution for it will not be wildcard certificate related, will not be easy or obvious, and so needs to be discussed as widely as possible.
>Also there will be no crypto involved in the solution, as it's not acceptable to just leave ordinary DNS users out in the cold with regard to the attack. So it needs to be discussed in the security group, not crypto.
We disagree here: it should be discussed in both places. In "security", the topic is what the browser should do about spoofing. In crypto-policy (or whatever that list will be called when it is turned on), the topic is how wildcards assist in the attack when a user is looking at the lock.