
More detailed Mozilla statement; starting a discussion


Gervase Markham

Mar 25, 2011, 11:46:19 AM
to mozilla-dev-s...@lists.mozilla.org
Mozilla has made a more detailed statement about the Comodo misissuance
incident:
http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-follow-up/

As the last paragraph states, we wish to begin (or join; we would not
want to suggest we are the first to think this is necessary) a
wide-ranging conversation, both about what we can do in the short term,
and also where we are headed - what should the web security landscape
look like in 3-5 years, and what can Mozilla do to help us get there?

In support of this, I have prepared a page of resources:
https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
which will collect ideas and position papers from around the web. My
next step will be to write up my own opinions :-)

Gerv

Eddy Nigg

Mar 25, 2011, 3:03:06 PM
to mozilla-dev-s...@lists.mozilla.org
On 03/25/2011 05:46 PM, From Gervase Markham:

> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)
>

Some thoughts about TOFU: I have disabled a couple of CA roots in my
Firefox profile and take any certificate that comes from such a root
with a grain of salt (and never use it for anything critical).
Luckily this works for most of what I need; however, had I come
across one of the certificates that were mistakenly issued, I would
have accepted it and clicked through (in the absence of a revocation
message, that is).

Building an opinion about a certificate that chains to one of the
roots I have disabled is mostly useless, so my decision rests mostly
on whether I trust the root or not. But it also indicates to me that
TOFU mostly doesn't work.

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
XMPP: star...@startcom.org
Blog: http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg

Juha Luoma

Mar 25, 2011, 4:01:16 PM
to mozilla-dev-s...@lists.mozilla.org
On 25.3.2011 17:46, Gervase Markham wrote:

> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)

I was shocked when I realized that Firefox does not actually load and
check CRLs, even when end-entity and intermediate certificates carry
proper certificate revocation list pointers. Even though CRLs have
their problems, I really think any serious browser should do what is
possible today, even if it is a bit clumsy.
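
For illustration, a minimal sketch of doing such a check by hand in
Python with the "cryptography" package (the library choice and file
name are assumptions for the sketch, not anything NSS does
internally):

    # Fetch a certificate's CRL and check its serial against it.
    # Note: a real client must also verify the CRL's signature and
    # freshness; this sketch skips that.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend

    pem = open("server.pem", "rb").read()    # illustrative file name
    cert = x509.load_pem_x509_certificate(pem, default_backend())

    # Pull the CRL distribution point URL(s) out of the certificate.
    ext = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    urls = [name.value
            for dp in ext.value if dp.full_name
            for name in dp.full_name
            if isinstance(name, x509.UniformResourceIdentifier)]

    # Download and parse the first CRL, then look up the serial number.
    crl = x509.load_der_x509_crl(urllib.request.urlopen(urls[0]).read(),
                                 default_backend())
    entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
    print("REVOKED" if entry is not None else "not on this CRL")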

What is written about Chrome here
http://www.imperialviolet.org/2011/03/18/revocation.html
seems to be the best practice among the browsers examined. Mozilla
should do at least the same, or even better: give the user a clear
message if the revocation status cannot be checked right now. If
there is an older CRL in the cache, say that the certificate was last
known to be valid as of the end of that cached CRL's validity period.

OCSP stapling is also a very good thing to do. How common is it on
HTTPS servers today?
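
For anyone who wants to measure this, a rough stapling probe in
Python with pyOpenSSL (the library, host name, and TLS version choice
are assumptions for the sketch):

    # Ask a server to staple an OCSP response, and report what comes back.
    import socket
    from OpenSSL import SSL

    def ocsp_cb(conn, ocsp_data, data):
        # Called with the raw stapled response; empty bytes means none.
        print("stapled OCSP response:", len(ocsp_data), "bytes")
        return True  # continue the handshake regardless

    ctx = SSL.Context(SSL.TLSv1_2_METHOD)
    ctx.set_ocsp_client_callback(ocsp_cb)

    sock = socket.create_connection(("example.org", 443))
    conn = SSL.Connection(ctx, sock)
    conn.set_tlsext_host_name(b"example.org")  # SNI
    conn.request_ocsp()                        # request stapling
    conn.set_connect_state()
    conn.do_handshake()
    conn.shutdown()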

What is proposed in the last paragraph here
http://www.imperialviolet.org/2011/03/18/revocation.html
sounds also possible idea, but would require first some kind of standard
way for servers to refresh their certificates automatically. And this
can not be the answer to the problem in short term.

- Juha

Matt McCutchen

Mar 25, 2011, 5:21:18 PM
to mozilla-dev-s...@lists.mozilla.org
On Mar 25, 11:46 am, Gervase Markham <g...@mozilla.org> wrote:
> As the last paragraph states, we wish to begin (or join; we would not
> want to suggest we are the first to think this is necessary) a
> wide-ranging conversation, both about what we can do in the short term,
> and also where we are headed - what should the web security landscape
> look like in 3-5 years, and what can Mozilla do to help us get there?

Great news!

I would call the topic web site identity (cf. the site identity
button); web security is much broader, including server-side and
browser-based security issues. Let's distinguish up front between TLS
server authentication (a.k.a. SSL_AuthCertificate) and owner
attributes (EV, etc.), and recognize that under the same-origin
policy, owner attributes can only meaningfully be bound to a DNS name.

My view is that DANE (DNSSEC-based certificate designation) is the
ideal TLS server authentication scheme and should be promoted as the
preferred scheme for use with Mozilla applications. Indeed, the DNS
itself is the authoritative source for information asserted by the
holder of a DNS name; conceptually there is no reason to use any other
system now that DNS has the necessary cryptographic identity
protection. For sites that do not opt in to DANE, Mozilla
applications can continue to use a CA list.
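
Mechanically, a DANE check is simple; here is a hypothetical
client-side sketch in Python (assuming dnspython, a DNSSEC-validating
resolver, and the TLSA record semantics later standardized in RFC
6698):

    # Compare a server's certificate against its published TLSA record.
    import hashlib, ssl
    import dns.resolver  # dnspython, an assumed dependency

    host = "example.org"  # illustrative
    pem = ssl.get_server_certificate((host, 443))
    der = ssl.PEM_cert_to_DER_cert(pem)

    for tlsa in dns.resolver.resolve("_443._tcp." + host, "TLSA"):
        # Selector 0 = full certificate, matching type 1 = SHA-256.
        if tlsa.selector == 0 and tlsa.mtype == 1:
            print("TLSA match:", hashlib.sha256(der).digest() == tlsa.cert)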

This position leaves open some issues:
1. How to bolster the CA system for owner attributes. If EV
attributes continue to be the only ones used by Mozilla applications,
and problems with EV continue to be rare, nothing may need to be done
here.
2. How to bolster the CA system for TLS server authentication for
sites that do not opt in to DANE. Any of the other techniques (HSTS,
CA pinning, Perspectives, ...) may be applicable here.
3. Some users may not even wish to be exposed to the DNS registries
for some purposes. I believe this can only be properly solved by
switching to a different naming system that reflects user intent,
which will have far-reaching consequences. I think Mozilla can put
this aside for now.

Concerns other than web site identity, such as phishing and malware
protection, should be addressed by other means.

--
Matt

Kyle Hamilton

Mar 27, 2011, 7:23:26 PM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org

On Fri, Mar 25, 2011 at 8:46 AM, Gervase Markham <ge...@mozilla.org> wrote:
> As the last paragraph states, we wish to begin (or join; we would not want
> to suggest we are the first to think this is necessary) a wide-ranging
> conversation, both about what we can do in the short term, and also where we
> are headed - what should the web security landscape look like in 3-5 years,
> and what can Mozilla do to help us get there?

The security landscape should be opaque, as opaque as possible.

This should be done as simply as possible.

Creating a mechanism by which clients and servers can dynamically create their own credential bundles during the handshake would be ideal.

Identity Certifying Authorities -- the members of the root program -- should examine ways to provide assertions which are Subject-privacy-oriented, while still providing all of the services which they currently do.

Certifying Authorities should require notarized/acknowledged statements of state identity and public key bindings, utilizing the state infrastructure which already exists for authoritatively binding state identities. (Gerv, this is what I meant when I suggested "to the level necessary for a notary public in their jurisdiction".) Identity CAs are -excellent- at document authentication, and I think that Identity CAs should run continuing-education programs for notaries public to explain the science's intricacies. (This is the reason I wish the CABF were a strong central organization.)

Certifying Authorities should automate key enrollment, utilizing the capacity to provide multiple assertions in the X.509/PKIX handshake which I proposed elsewhere to permit multiple statements of authority (for example, the device's key as certified by the authoritatively-bound key above, and the authoritatively-bound key itself) to be made over a PKCS#10 Certificate Signing Request, proving that the owner of the authoritatively-bound key authorized the CSR key's enrollment and binding. In my dream world, the content of the request would be honored (to the extent that it could be accommodated under CA policy), allowing it to be automated and generated dynamically by the client to include only the information which needs to be asserted by the client in the transaction it's working with.

Certifying Authorities should share information about hash values of DNs, domain names, and authoritative keys that they certify. The intent would be to create a central point to determine if another CA is already servicing a particular domain or DN -- not so that the colliding CA must refuse to service the request, but to give it a point of reference that manual verification is required.

Certificates should flow like water, to prevent back-end database matching on the same key (which is, by definition, an identity which must be strongly bound to other identity information).

Instead of forcing every device which accesses a single email account to use the same keypair, every device which accesses an email account should be individually identifiable. This is so that in the inevitable event of a compromise, the keys held by the compromised device can be identified and revoked individually without inconveniencing every other device -- but it also should be completely opaque to the correspondent, who doesn't need to know the name or type of device you're using if he has surety that your identity is known and available for state process.

I don't understand why everyone insists that this must be difficult. Why does everyone adhere to rigid "state identity is the only thing you'll ever need, so state identity is the only thing you'll ever get" X.500 dogma? Even computer-level IPsec certificates don't bother identifying their owner in Microsoft's implementations, even though the intent of the standard was to identify who owns a particular device.

Ideally, CAs (and Mozilla) will move away from the idea of "OMG IT'S UNTRUSTED I'M STOPPING PROCESSING" to alternative user interface systems which permit untrusted certificates to still be parsed.

Of course, ideally, Mozilla would also stop treating "the presence of a trusted certificate" as "authorization to display UI without completely blocking the client's flow". Just because a particular assertion is untrustworthy does not mean that I want to be protected from myself. Instead, I'd prefer a reddened address-bar with a tooltip that pops up when I mouse over it.

Ideally, the server and the client would never throw fatal alerts to each other. As it stands right now, if the server throws an "unknown issuer" alert, then the connection is closed and the (context-insensitive) error page is shown -- which has no information about how to correct the error. Apache's mod_ssl documentation says that "optional_no_ca" is against the idea of security, but it's not. The application layer can decide "okay, this isn't trusted so I can't act on its information -- but I can guide the user into my own context's error-processing path to help the user fix the problem."

The authentication layer IS NOT the authorization layer. The authentication layer provides information to the authorization layer, for the authorization layer to be able to apply local (and thus context-specific) policy.

-Kyle H

Kyle Hamilton

Mar 27, 2011, 7:50:02 PM
to Matt McCutchen, mozilla-dev-s...@lists.mozilla.org


On Fri, Mar 25, 2011 at 2:21 PM, Matt McCutchen <ma...@mattmccutchen.net> wrote:
> 2. How to bolster the CA system for TLS server authentication for
> sites that do not opt in to DANE.  Any of the other techniques (HSTS,
> CA pinning, Perspectives, ...) may be applicable here.

I believe that the CA system can best be bolstered by applying e.g. the VISA/MasterCard model: a central organization to administer the terms of each individual CA's contract with it (ideally, the contracts would be unilaterally offered to the CAs, so they would all be working under the same rules). The contracts would include enforceable penalties for failure to comply. Then, that central organization's root could be the only trust anchor included in the software as having authority to permit the display of the certificate's contained identity information by the software. That organization could then cross-certify other CAs to delegate the authority to display authoritative identity information.

The largest problem that I'm perceiving is that the state-identity interest believes that it is the only legitimate interest. It is not. There's also the privacy interest, the enrollment authority interest, the consumer utility interest, and the non-state-identity/reputation conveyance interest.

> Concerns other than web site identity, such as phishing and malware
> protection, should be addressed by other means.

Technically, 'phishing' is all about website identity. It is the false assumption of trusted status by a user who is fooled by very similar markup. This is, in fact, the entire thing that the CA model was intended to protect against.

Malware protection is not something that state identity alone can solve. It requires the application of reputation (non-state-identity) metrics to mitigate or eliminate.

-Kyle H

David E. Ross

Mar 27, 2011, 10:29:20 PM
to mozilla-dev-s...@lists.mozilla.org
On 3/27/11 3:23 PM, Kyle Hamilton wrote [in part]:

>
>
> On Fri, Mar 25, 2011 at 8:46 AM, Gervase Markham <ge...@mozilla.org> wrote:
>> As the last paragraph states, we wish to begin (or join; we would not want
>> to suggest we are the first to think this is necessary) a wide-ranging
>> conversation, both about what we can do in the short term, and also where we
>> are headed - what should the web security landscape look like in 3-5 years,
>> and what can Mozilla do to help us get there?
>
> The security landscape should be opaque, as opaque as possible.

I thoroughly disagree. The security landscape should be as transparent
as possible. When interested and knowledgeable outsiders can see how it
works, vulnerabilities are discovered, reported, and fixed. This is why
it was so important that the source code of PGP was available for all to
inspect (at least until Symantec bought out the PGP Corp.).

--

David E. Ross
<http://www.rossde.com/>

On occasion, I might filter and ignore all newsgroup messages
posted through GoogleGroups via Google's G2/1.0 user agent
because of spam from that source.

Matt McCutchen

Mar 28, 2011, 2:23:52 AM
to mozilla-dev-s...@lists.mozilla.org
On Sunday, March 27, 2011 7:50:02 PM UTC-4, Kyle Hamilton wrote:
> I believe that the CA system can best be bolstered by applying e.g. the VISA/MasterCard model: a central organization to administer the terms of each individual CA's contract with it (ideally, the contracts would be unilaterally offered to the CAs, so they would all be working under the same rules). The contracts would include enforceable penalties for failure to comply. Then, that central organization's root could be the only trust anchor included in the software as having authority to permit the display of the certificate's contained identity information by the software. That organization could then cross-certify other CAs to delegate the authority to display authoritative identity information.

Mozilla is doing essentially this via the CA policy and the built-in certs list, but maybe the policy and/or the enforcement could be strengthened.

> > Concerns other than web site identity, such as phishing and malware
> > protection, should be addressed by other means.
>
> Technically, 'phishing' is all about website identity. It is the false assumption of trusted status from the user who is fooled by very-similar markup.

You are right, that statement came out patently false. What I meant was, I believe SSL_AuthCertificate should authenticate the holder of the DNS name and nothing else. Technically speaking, there is nothing wrong with the holder of evil.com operating a TLS service at www.paypal.com.evil.com that serves malware. Consequently, I believe DANE is a sufficient backend for SSL_AuthCertificate (see previous discussion surrounding https://groups.google.com/d/msg/mozilla.dev.security.policy/xStt5FitVL4/-a9Wb9AsDFUJ) and oppose the proposal for extra requirements on "wildcard" style server authentication (https://bugzilla.mozilla.org/show_bug.cgi?id=481725).

> This is, in fact, the entire thing that the CA model was intended to protect against.

EV can do this. DV does it awkwardly, and I would prefer that it stick to its basic function of authentication of services bound to DNS names.

--
Matt

Kyle Hamilton

Mar 28, 2011, 3:41:09 AM
to mozilla-dev-s...@lists.mozilla.org

On Sun, Mar 27, 2011 at 7:29 PM, David E. Ross <nob...@nowhere.invalid> wrote:
>> The security landscape should be opaque, as opaque as possible.
>
> I thoroughly disagree.  The security landscape should be as transparent
> as possible.  When interested and knowledgeable outsiders can see how it
> works, vulnerabilities are discovered, reported, and fixed.  This is why
> it was so important that the source code of PGP was available for all to
> inspect (at least until Symantec bought out the PGP Corp.)

My concept of 'opaque' is related to 'the information flowing across the wires must be as opaque as possible'.

I agree that the openness of implementations is very important. I also agree that the openness of the specifications is very important. I believe that open reference implementations are a necessary thing.

My issue is primarily that I want to get away from the idea of "it's okay if the data streams are tagged with their owner(s) in the clear".

-Kyle H

Kyle Hamilton

Mar 28, 2011, 5:39:46 AM
to mozilla.dev.s...@googlegroups.com, Matt McCutchen, mozilla-dev-s...@lists.mozilla.org


On Sun, Mar 27, 2011 at 11:23 PM, Matt McCutchen <ma...@mattmccutchen.net> wrote:
> On Sunday, March 27, 2011 7:50:02 PM UTC-4, Kyle Hamilton wrote:
>> I believe that the CA system can best be bolstered by applying e.g. the VISA/MasterCard model: a central organization to administer the terms of each individual CA's contract with it (ideally, the contracts would be unilaterally offered to the CAs, so they would all be working under the same rules).  The contracts would include enforceable penalties for failure to comply.  Then, that central organization's root could be the only trust anchor included in the software as having authority to permit the display of the certificate's contained identity information by the software.  That organization could then cross-certify other CAs to delegate the authority to display authoritative identity information.
>
> Mozilla is doing essentially this via the CA policy and the built-in certs list, but maybe the policy and/or the enforcement could be strengthened.

I have no faith in Mozilla's capacity to administer its root program for the benefit of its users, instead of pandering to untrustworthy CAs. Also, Mozilla refuses to run a Certification Authority of any kind, even though it would be better for its root program and its capacity to (for example) revoke an untrustworthy CA's capacity to harm the users.

Mozilla does not require contracts, and thus cannot enforce compliance.

> You are right, that statement came out patently false.  What I meant was, I believe SSL_AuthCertificate should authenticate the holder of the DNS name and nothing else.  Technically speaking, there is nothing wrong with the holder of evil.com operating a TLS service at www.paypal.com.evil.com that serves malware.  Consequently, I believe DANE is a sufficient backend for SSL_AuthCertificate (see previous discussion surrounding https://groups.google.com/d/msg/mozilla.dev.security.policy/xStt5FitVL4/-a9Wb9AsDFUJ) and oppose the proposal for extra requirements on "wildcard" style server authentication (https://bugzilla.mozilla.org/show_bug.cgi?id=481725).

I oppose extra requirements on wildcard server authenticators as well, because there is indeed a technical restriction against "www.paypal.com.evil.com". A single * should only match one DNS label: *.evil.com would permit paypal.evil.com or com.evil.com, but not paypal.com.evil.com (RFC 2818, page 5). Notably, RFC 5280 disclaims wildcard semantics for SAN DNSNames, leaving it for the application to define.

I am okay with the idea of limiting wildcards in certificates to a single domain level.
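
A simplified matcher for the one-label rule described above (RFC 2818
also permits partial-label wildcards such as f*.com, which this
sketch deliberately ignores):

    # "*" matches exactly one DNS label; it never spans a dot.
    def wildcard_matches(pattern, hostname):
        p = pattern.lower().split(".")
        h = hostname.lower().split(".")
        if len(p) != len(h):
            return False  # a wildcard may not absorb extra labels
        return all(pl == "*" or pl == hl for pl, hl in zip(p, h))

    assert wildcard_matches("*.evil.com", "paypal.evil.com")
    assert not wildcard_matches("*.evil.com", "www.paypal.com.evil.com")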

>> This is, in fact, the entire thing that the CA model was intended to protect against.
>
> EV can do this.  DV does it awkwardly, and I would prefer that it stick to its basic function of authentication of services bound to DNS names.

Its basic function is "prove that the person who requested the certificate has authority to bind the domain". DV is completely inappropriate for this purpose, and ICANN (as the authority over the DNS) is the appropriate place for identity registration to be rooted.

-Kyle H

Gervase Markham

Mar 28, 2011, 7:01:52 AM
to mozilla-dev-s...@lists.mozilla.org
On 25/03/11 20:01, Juha Luoma wrote:
> I was shocked when I realized that Firefox does not actually load and
> check CRLs even if end certificates and intermediate certificates would
> have proper certificate revocation list pointers. Even if CRLs have
> their problems, I really think any serious browser should do what is
> possible today, even if it is a bit clumsy.

This is for historical reasons - there was a patent on this ability :-(
This problem has now been resolved, and NSS implemented a new library,
libpkix, which includes this ability. Switching to use libpkix is high
on our developers' priority lists for Firefox 5.

> OCSP stapling is also very good thing to do. How common it is at HTTPS
> servers today?

Very uncommon indeed. There is no stable version of Apache which ships
with it.

Gerv

David Illsley

Mar 28, 2011, 2:34:51 PM
to mozilla-dev-s...@lists.mozilla.org
On Mar 25, 4:46 pm, Gervase Markham <g...@mozilla.org> wrote:
> Mozilla has made a more detailed statement about the Comodo misissuance
> incident: http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-...

>
> As the last paragraph states, we wish to begin (or join; we would not
> want to suggest we are the first to think this is necessary) a
> wide-ranging conversation, both about what we can do in the short term,
> and also where we are headed - what should the web security landscape
> look like in 3-5 years, and what can Mozilla do to help us get there?
>
> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)
>
> Gerv

I can't claim to be any kind of expert, but I've got a few thoughts
having read the wiki page...

CA incentives
* Has there been any kind of thought or discussion around having a
cash penalty for poor behaviour - possibly including a sum held in
escrow with the amount based on the number of certificates issued
against a specific root?

HSTS/OCSP Failure Modes
* My reading of the HSTS spec is a little unclear on whether section
7.3 requires the connection to fail if OCSP checking fails. If it
does, surely this makes it unlikely that those with reservations about
OCSP checking will use HSTS, which doesn't seem desirable. Would it
make sense to make mandatory OCSP checking an explicit option in HSTS
- allowing those sites which prefer the security a way to opt in? (A
sketch of what such an opt-in might look like follows.)
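
Purely to illustrate the shape of such an opt-in - the directive name
below is invented here and appears in no spec:

    # Hypothetical HSTS header carrying an opt-in hard-fail directive.
    def hsts_header(max_age, require_revocation_check=False):
        value = "max-age=%d" % max_age
        if require_revocation_check:
            value += "; requireRevocationCheck"  # invented directive
        return ("Strict-Transport-Security", value)

    print(hsts_header(31536000, require_revocation_check=True))
    # ('Strict-Transport-Security', 'max-age=31536000; requireRevocationCheck')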

David
--
As always, my comments are my own and represent my views and not those
of my employer.

Steingruebl, Andy

Mar 29, 2011, 8:39:37 PM
to David Illsley, mozilla-dev-s...@lists.mozilla.org
> From: David Illsley

>
> HSTS/OCSP Failure Modes
> * My reading of the HSTS spec is a little unclear on whether section
> 7.3 requires the connection to fail if OCSP checking fails. If it does, surely this
> makes it unlikely that those with reservations about OCSP checking will use
> HSTS, which doesn't seem desirable. Would it make sense to make
> mandatory OCSP checking an explicit option in HSTS
> - allowing those sites which prefer the security a way to opt in?

First, that discussion belongs more properly on the IETF WebSec list where HSTS is one of the work items.
web...@ietf.org

Second, to briefly answer your question: we didn't want the HSTS spec to be about exactly how certificate validation happens, but to be rules about when to use HTTPS and how to deal with failures - not specifically what constitutes a failure, though I admit we went there a little bit with the self-signed statements....

The same was true of several issues including "mixed-content" which some folks wanted to include into the spec even though it doesn't really belong there.

Fundamentally, the problem here is that for any security behavior that is documented in something like the BSH (http://code.google.com/p/browsersec/wiki/Main) where there is divergent behavior among the browsers, there ought to be a spec that presses browsers towards convergence on one similar behavior. Clearly the world isn't there yet... but I can dream :)

- Andy

Kyle Hamilton

Mar 30, 2011, 3:53:41 AM
to dev-secur...@lists.mozilla.org

On Sun, Mar 27, 2011 at 11:23 PM, Matt McCutchen <ma...@mattmccutchen.net> wrote:
> On Sunday, March 27, 2011 7:50:02 PM UTC-4, Kyle Hamilton wrote:
>> I believe that the CA system can best be bolstered by applying e.g. the VISA/MasterCard model: a central organization to administer the terms of each individual CA's contract with it (ideally, the contracts would be unilaterally offered to the CAs, so they would all be working under the same rules).  The contracts would include enforceable penalties for failure to comply.  Then, that central organization's root could be the only trust anchor included in the software as having authority to permit the display of the certificate's contained identity information by the software.  That organization could then cross-certify other CAs to delegate the authority to display authoritative identity information.
>
> Mozilla is doing essentially this via the CA policy and the built-in certs list, but maybe the policy and/or the enforcement could be strengthened.

I have no faith in Mozilla's capacity to administer its root program for the benefit of its users, instead of pandering to untrustworthy CAs. Also, Mozilla refuses to run a Certification Authority of any kind, even though it would be better for its root program because of its capacity to (for example) revoke an untrustworthy CA's capacity to harm the users without requiring client updates.

Mozilla does not require contracts, and thus cannot enforce compliance.

Even Godzilla eventually falls, in all the movies.

> You are right, that statement came out patently false.  What I meant was, I believe SSL_AuthCertificate should authenticate the holder of the DNS name and nothing else.  Technically speaking, there is nothing wrong with the holder of evil.com operating a TLS service at www.paypal.com.evil.com that serves malware.  Consequently, I believe DANE is a sufficient backend for SSL_AuthCertificate (see previous discussion surrounding https://groups.google.com/d/msg/mozilla.dev.security.policy/xStt5FitVL4/-a9Wb9AsDFUJ) and oppose the proposal for extra requirements on "wildcard" style server authentication (https://bugzilla.mozilla.org/show_bug.cgi?id=481725).

There actually is a mechanism in place to prevent *.evil.com from matching www.paypal.com.evil.com, unless the W3C has come out with something that I don't know about.

RFC 5280 disclaims semantics for wildcards in subjectAltName DNSNames, leaving them for the application to define. RFC 2818 (HTTP over TLS, even though it's Informational) limits a single * to one domain component, on page 5: *.evil.com would permit paypal.evil.com or com.evil.com, but not paypal.com.evil.com. So I oppose extra requirements on wildcard server authenticators as well.

I am quite okay with the idea of limiting wildcards in certificates to a single domain component, and with this limit in place I cannot see how there is any more danger to the user. I believe that the prohibition against wildcards should be negated.

>> This is, in fact, the entire thing that the CA model was intended to protect against.
>
> EV can do this.  DV does it awkwardly, and I would prefer that it stick to its basic function of authentication of services bound to DNS names.

DV's basic function is "to prove that the actor who requested the certificate has the authority to bind the domain to the actor's key".

DV certificates as they currently exist are completely inappropriate for this purpose, and (IMO) ICANN (as the authority over DNS and IP delegations) is the appropriate place for domain and DNS identity authority to be rooted. Under this model, accredited registrars like GoDaddy (which operate their own CAs for their own customers) could be cross-certified by ICANN, and thus claim the benefit of state authority for their DV-level assertions.

-Kyle H

Tangomike

Mar 30, 2011, 4:51:32 PM
to mozilla-dev-s...@lists.mozilla.org
On Mar 25, 11:46 am, Gervase Markham <g...@mozilla.org> wrote:
> Mozilla has made a more detailed statement about the Comodo misissuance
> incident: http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-...

>
> As the last paragraph states, we wish to begin (or join; we would not
> want to suggest we are the first to think this is necessary) a
> wide-ranging conversation, both about what we can do in the short term,
> and also where we are headed - what should the web security landscape
> look like in 3-5 years, and what can Mozilla do to help us get there?
>
> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)
>
> Gerv

Hypothesis: revocation doesn’t work because of the following vicious
circle:

1. Browsers display content even when revocation information is
unobtainable.
2. Revocation information is not guaranteed to be obtainable.
3. Site operators don’t pressure CAs to make their revocation
information reliably obtainable.
4. Browsers display content even when revocation information is
unobtainable.

You could break the circle at any point. But, in the nature of
vicious circles, no one wants to make the first move.

Perhaps the problem with revocation information is that (in the
current design) it must be obtained from one source: the issuing CA.
If site operators were to be reliant for their business continuity on
revocation information, and that revocation information was only
available from one source, then they would be right to be nervous.

On the other hand, if there were choice and redundancy in the source
of revocation information, then site operators might accept that
browsers would refuse to display content when revocation information
was not available. A site operator with a certificate from one CA
could shop around for an OCSP response from another CA. (Naturally,
I’m envisaging the use of OCSP Stapling).
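
The mechanics of querying an arbitrary responder are already
straightforward; a sketch with the Python "cryptography" package
(file names and the responder URL are placeholders):

    # Build an OCSP request for a certificate and POST it to a responder.
    import urllib.request
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.x509 import ocsp

    cert = x509.load_pem_x509_certificate(
        open("site.pem", "rb").read(), default_backend())
    issuer = x509.load_pem_x509_certificate(
        open("issuer.pem", "rb").read(), default_backend())

    # The request names the certificate by issuer-and-serial.
    req = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())

    # POST it to whichever responder the site has chosen to "shop" from.
    http_req = urllib.request.Request(
        "http://ocsp.example-responder.test/",  # placeholder URL
        data=req.public_bytes(serialization.Encoding.DER),
        headers={"Content-Type": "application/ocsp-request"})
    resp = ocsp.load_der_ocsp_response(urllib.request.urlopen(http_req).read())
    print(resp.response_status)     # e.g. OCSPResponseStatus.SUCCESSFUL
    print(resp.certificate_status)  # GOOD / REVOKED / UNKNOWN if successful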

In addition, all CAs would have to publish their CRLs, and the CA
issuing the OCSP response would have to consult the issuing CA’s CRL.
Bad guys would first have to trick a CA into issuing a certificate,
and then trick one or more CAs (but, possibly the original one) into
issuing a continuous sequence of OCSP responses. This doesn’t appear
to be any more vulnerable than the current system.

While it’s true that a CA could declare a certificate from one of its
competitors invalid, the site simply wouldn’t staple such a response.

It’s worth noting that this approach was tried many years ago by
ValiCert. It didn’t work then! And, maybe, it won’t work now. But,
times have changed. Maybe, in the restricted circumstances of HTTPS
on the Web, it can be made to work.

Multi-source OCSP stapling isn’t so very different from multi-source
short-lifetime certificates. In this latter solution, an existing
certificate from one CA would be used to authenticate a request for a
replacement short-lifetime certificate from another CA. The main
difference (perhaps) is that the CA issuing the replacement
certificate would be relying on the verification performed by the
initial CA. And, such reliance would have to be time-limited.

Hard-coded blacklists aren't the answer. Somehow, we have to break
the vicious circle.

All the best. Tim.

David E. Ross

Mar 31, 2011, 11:50:12 AM
to mozilla-dev-s...@lists.mozilla.org
On 3/30/11 12:51 PM, Tangomike wrote [in part]:

>
> Hypothesis: revocation doesn’t work because of the following vicious
> circle:
>
> 1. Browsers display content even when revocation information is
> unobtainable.
> 2. Revocation information is not guaranteed to be obtainable.
> 3. Site operators don’t pressure CAs to make their revocation
> information reliably obtainable.
> 4. Browsers display content even when revocation information is
> unobtainable.
>

Are you saying that, if I set my preferences to "When an OCSP connection
fails, treat the certificate as invalid", I will still access the
affected Web page?

Gervase Markham

Apr 1, 2011, 11:50:08 AM
to mozilla-dev-s...@lists.mozilla.org
On 31/03/11 16:50, David E. Ross wrote:
>> 4. Browsers display content even when revocation information is
>> unobtainable.
>
> Are you saying that, if I set my preferences to "When an OCSP connection
> fails, treat the certificate as invalid", I will still access the
> affected Web page?

No, he is saying that the default for that preference is Off, not On -
i.e. by default, browsers display content even if they can't get
revocation information.

Gerv

Gervase Markham

Apr 1, 2011, 12:20:43 PM
to mozilla-dev-s...@lists.mozilla.org
On 28/03/11 19:34, David Illsley wrote:
> CA incentives
> * Has there been any kind of thought or discussion around having a
> cash penalty for poor behaviour - possibly including a sum held in
> escrow with the amount based on the number of certificates issued
> against a specific root?

A small amount of thought; that thought immediately brings to mind the
following possible problems:

- How do you define "poor behaviour"? Who arbitrates?
- Can a CA insure against this risk? If so, the penalty for them is not
so great anyway.

Gerv

Tangomike

Apr 1, 2011, 5:05:18 PM
to mozilla-dev-s...@lists.mozilla.org
On Mar 25, 11:46 am, Gervase Markham <g...@mozilla.org> wrote:
> Mozilla has made a more detailed statement about the Comodo misissuance
> incident: http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-...

>
> As the last paragraph states, we wish to begin (or join; we would not
> want to suggest we are the first to think this is necessary) a
> wide-ranging conversation, both about what we can do in the short term,
> and also where we are headed - what should the web security landscape
> look like in 3-5 years, and what can Mozilla do to help us get there?
>
> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)
>
> Gerv

Hypothesis: revocation doesn’t work because of the following vicious
circle:

1. Browsers display content even when revocation information is
unobtainable.
2. Revocation information is not guaranteed to be obtainable.
3. Site operators don’t pressure CAs to make their revocation
information reliably available.
4. Browsers display content even when revocation information is
unobtainable.

You could break the circle at any point. But, in the nature of
vicious circles, no one has an incentive to make the first move.

Perhaps the problem with revocation information is that (in the
current design) it must be obtained from only one source: the issuing
CA. If site operators were to be reliant for their business
continuity on revocation information, and that revocation information
were only available from one source, then they would be right to be
nervous.

On the other hand, if there were choice and redundancy in the source
of revocation information, then site operators might accept that
browsers would refuse to display content when revocation information
was not available. A site operator with a certificate from one CA
could shop around for an OCSP response from another CA. (Naturally,
I’m envisaging the use of OCSP Stapling).

As it wouldn’t necessarily be public information who was issuing OCSP
responses for a particular certificate, there may have to be one or
more central services for reporting certificate misuse. Those issuing
OCSP responses would have to consult these services.

Bad guys would first have to trick a CA into issuing a certificate,
and then trick one or more CAs (but, possibly the original one) into
issuing a continuous sequence of OCSP responses. This doesn’t appear
to be any more vulnerable than the current system.

While it’s true that a CA could declare a certificate from one of its
competitors invalid, the site simply wouldn’t staple such a response.

It’s worth noting that this approach was tried many years ago by
ValiCert. It didn’t work then! And, maybe, it won’t work now. But,
times have changed. Maybe, in the restricted circumstances of HTTPS
on the Web, it can be made to work.

Multi-source OCSP stapling isn’t so very different from multi-source
short-lifetime certificates. In this latter solution, an existing
certificate from one CA would be used to authenticate a request for a
replacement short-lifetime certificate from another CA. The main
difference (perhaps) is that the CA issuing the replacement
certificate would be relying on the verification performed by the
initial CA. And, such reliance would have to be time-limited.

A hard-coded blacklist isn’t the answer. Somehow, we have to break
the vicious circle.

Kyle Hamilton

Apr 1, 2011, 9:11:38 PM
to Tangomike, mozilla-dev-s...@lists.mozilla.org

On Fri, Apr 1, 2011 at 2:05 PM, Tangomike <tang...@rogers.com> wrote:
>
> A hard-coded blacklist isn’t the answer.  Somehow, we have to break
> the vicious circle.

To break this vicious cycle, one must break the mental shackles which enable and enforce this cycle.

The shackle which must be broken, most directly, is the thought that identity CAs are good for authorizing operation of sites. They are not -- they are only good for telling the endpoint who the other endpoint is. At that point, it's up to endpoint policy.

(Unencrypted http is orders of magnitude more dangerous and undesirable -- it has all of the properties that TLS attempts to protect against -- but it still has the classic "show this warning again" dialog associated with it in the browser.)

-Kyle H

David Illsley

Apr 6, 2011, 2:16:24 PM
to mozilla-dev-s...@lists.mozilla.org
On Apr 1, 5:20 pm, Gervase Markham <g...@mozilla.org> wrote:
> On 28/03/11 19:34, David Illsley wrote:
>
> > CA incentives
> >    * Has there been any kind of thought or discussion around having a
> > cash penalty for poor behaviour - possibly including a sum held in
> > escrow with the amount based on the number of certificates issued
> > against a specific root?
>
> A small amount of thought; that thought immediately brings to mind the
> following possible problems:
>
> - How do you define "poor behaviour"? Who arbitrates?
Uhm... allowing certs to be issued without the appropriate level of
verification might be a start. I suspect it'd be possible to reach
some sort of consensus on scenarios which represent a failure to live
up to the trust implied in being a trusted CA. I guess the
arbitration is the hard part. Do the annual audits cover how the
response to failures matched the plan? That might identify the
failures via an independent authority.

> - Can a CA insure against this risk? If so, the penalty for them is not
>    so great anyway.

Good point... but then there would still be someone else with a
business relationship and leverage to discourage poor practices. And
once you've lost a bunch of money belonging to one insurer, likely
it's your own on the line in the future.

>
> Gerv

Ian G

Apr 6, 2011, 4:21:35 PM
to David Illsley, mozilla-dev-s...@lists.mozilla.org
On 7/04/11 4:16 AM, David Illsley wrote:

> On Apr 1, 5:20 pm, Gervase Markham<g...@mozilla.org> wrote:
>> On 28/03/11 19:34, David Illsley wrote:
>>
>>> CA incentives
>>> * Has there been any kind of thought or discussion around having a
>>> cash penalty for poor behaviour - possibly including a sum held in
>>> escrow with the amount based on the number of certificates issued
>>> against a specific root?
>>
>> A small amount of thought; that thought immediately brings to mind the
>> following possible problems:
>>
>> - How do you define "poor behaviour"? Who arbitrates?
> Uhm... allowing certs to be issued without the appropriate level of
> verification might be a start. I suspect it'd be possible to reach
> some sort of consensus on scenarios which represent a failure to live
> up to the trust implied in being a trusted CA. I guess the
> arbitration is the hard part.


Here's a thought experiment from a while back:

https://wiki.mozilla.org/CA:Dispute_resolution

There are many ways that we can address difficult issues such as "poor
behaviour." One way is to document it, with a set of rules and
guidelines. Another way is to leave it broad, and use a forum of
dispute resolution to deal with each question on a case by case basis.

When it comes to dispute resolution, the standard forum everyone thinks
of is the courts. For various reasons, these will not satisfy: they
won't resolve anywhere near enough disputes, and those they do resolve
will be too expensive.

A viable alternative is to invent our own. This is formed by agreement,
everyone has to agree. That's easy enough to secure over time.

> Do the annual audits cover how the
> response to failures matched the plan?

(The DRC does; words to the effect of "The CPS describes the CA's
procedures for recovering from disasters and other operating
interruptions" and "CA personnel demonstrate knowledge of disaster
recovery procedures.")

> That might identify the
> failures via an independent authority.

(Yes, to the auditor. The results might not be disclosed beyond that.)

>> - Can a CA insure against this risk? If so, the penalty for them is not
>> so great anyway.

> Good point... but then there would still be someone else with a
> business relationship and leverage to discourage poor practices. And
> once you've lost a bunch of money belonging to one insurer, likely
> it's your own on the line in the future.

(I would guess that it is more cost-effective to self-insure.)

iang
