As the last paragraph states, we wish to begin (or join; we would not
want to suggest we are the first to think this is necessary) a
wide-ranging conversation, both about what we can do in the short term,
and also where we are headed - what should the web security landscape
look like in 3-5 years, and what can Mozilla do to help us get there?
In support of this, I have prepared a page of resources:
https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
which will collect ideas and position papers from around the web. My
next step will be to write up my own opinions :-)
Gerv
Some thoughts about TOFU: I have disabled a couple of CA roots in my
Firefox profile and take any certificate that chains to such a root
with a grain of salt (and never use it for anything critical).
Luckily this works for most of what I need; however, had I come
across one of the certs that were mistakenly issued, I would have
accepted it and clicked through (in the absence of a revocation
message, that is).
Building an opinion about a certificate that chains to one of those
disabled roots is mostly useless, so my decision is based mostly on
whether I trust the root or not. But it also indicates to me that
TOFU mostly doesn't work.
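To make the weakness concrete, here is a minimal TOFU pin store in the
style of SSH's known_hosts (a Python sketch; the file name and format
are made up). It only helps on repeat visits, so a misissued
certificate presented on first contact is silently trusted:

    import hashlib, ssl

    PIN_FILE = "known_certs.txt"  # hypothetical store: "host sha256hex" lines

    def check_tofu(host, port=443):
        der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
        fp = hashlib.sha256(der).hexdigest()
        pins = {}
        try:
            with open(PIN_FILE) as f:
                pins = dict(line.split() for line in f if line.strip())
        except FileNotFoundError:
            pass
        if host not in pins:      # first use: trust and remember, no questions
            with open(PIN_FILE, "a") as f:
                f.write("%s %s\n" % (host, fp))
            return "trusted on first use"
        return "match" if pins[host] == fp else "MISMATCH: possible MITM"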
--
Regards
Signer: Eddy Nigg, StartCom Ltd.
XMPP: star...@startcom.org
Blog: http://blog.startcom.org/
Twitter: http://twitter.com/eddy_nigg
> In support of this, I have prepared a page of resources:
> https://wiki.mozilla.org/CA:Comodo_Misissuance_Response
> which will collect ideas and position papers from around the web. My
> next step will be to write up my own opinions :-)
I was shocked when I realized that Firefox does not actually load and
check CRLs, even when end-entity certificates and intermediate
certificates carry proper CRL distribution points. Even though CRLs
have their problems, I really think any serious browser should do what
is possible today, even if it is a bit clumsy.
What is written about Chrome here
http://www.imperialviolet.org/2011/03/18/revocation.html
seems to be the best practice among the browsers checked. Mozilla should
do at least the same, or even better: give a clear message to the user if
revocation status cannot be checked right now. If there is an older CRL
in the cache, tell the user that the certificate was last known to be
valid at the end of that cached CRL's validity period.
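A hedged sketch of that fallback in Python: fetch a fresh CRL from the
certificate's distribution point; on failure, fall back to a cached copy
and report when it stopped being authoritative. It assumes the
"cryptography" package, and `cache` is a hypothetical url-to-DER-bytes map:

    import urllib.request
    from cryptography import x509

    def revocation_message(crl_url, cache):
        try:
            der = urllib.request.urlopen(crl_url, timeout=5).read()
            cache[crl_url] = der
            return "Revocation status checked against a fresh CRL."
        except OSError:
            if crl_url in cache:
                crl = x509.load_der_x509_crl(cache[crl_url])
                return ("Revocation status cannot be checked right now; the "
                        "certificate was last known good at %s (end of the "
                        "cached CRL's validity)." % crl.next_update)
            return "Revocation status cannot be checked."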
OCSP stapling is also a very good thing to do. How common is it on HTTPS
servers today?
What is proposed in the last paragraph here
http://www.imperialviolet.org/2011/03/18/revocation.html
also sounds like a possible idea, but it would first require some kind of
standard way for servers to refresh their certificates automatically. And
it cannot be the answer to the problem in the short term.
- Juha
Great news!
I would call the topic web site identity (cf. the site identity
button); web security is much broader, including server-side and
browser-based security issues. Let's distinguish up front between TLS
server authentication (a.k.a. SSL_AuthCertificate) and owner
attributes (EV, etc.), and recognize that under the same origin
policy, owner attributes can only meaningfully be bound to a DNS name.
My view is that DANE (DNSSEC-based certificate designation) is the
ideal TLS server authentication scheme and should be promoted as the
preferred scheme for use with Mozilla applications. Indeed, the DNS
itself is the authoritative source for information asserted by the
holder of a DNS name; conceptually there is no reason to use any other
system now that DNS has the necessary cryptographic identity
protection. For sites that do not opt in to DANE, Mozilla
applications can continue to use a CA list.
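As a rough illustration of the kind of lookup I have in mind (a hedged
Python sketch, not a real validator: it assumes the dnspython package, a
DNSSEC-validating resolver upstream, and the TLSA parameters from the
DANE drafts, i.e. usage 3 = end-entity, selector 0 = full certificate,
matching type 1 = SHA-256):

    import hashlib
    import ssl
    import dns.resolver

    def dane_matches(host, port=443):
        # Hash the DER encoding of the certificate the server actually serves.
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        cert_hash = hashlib.sha256(der).digest()
        # Look up the TLSA record designated for this service.
        answers = dns.resolver.resolve("_%d._tcp.%s" % (port, host), "TLSA")
        for rr in answers:
            if rr.usage == 3 and rr.selector == 0 and rr.mtype == 1:
                if rr.cert == cert_hash:
                    return True
        return False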
This position leaves open some issues:
1. How to bolster the CA system for owner attributes. If EV
attributes continue to be the only ones used by Mozilla applications,
and problems with EV continue to be rare, nothing may need to be done
here.
2. How to bolster the CA system for TLS server authentication for
sites that do not opt in to DANE. Any of the other techniques (HSTS,
CA pinning, Perspectives, ...) may be applicable here.
3. Some users may not even wish to be exposed to the DNS registries
for some purposes. I believe this can only be properly solved by
switching to a different naming system that reflects user intent,
which will have far-reaching consequences. I think Mozilla can put
this aside for now.
Concerns other than web site identity, such as phishing and malware
protection, should be addressed by other means.
--
Matt
On Fri, Mar 25, 2011 at 8:46 AM, Gervase Markham <ge...@mozilla.org> wrote:
> As the last paragraph states, we wish to begin (or join; we would not want
> to suggest we are the first to think this is necessary) a wide-ranging
> conversation, both about what we can do in the short term, and also where we
> are headed - what should the web security landscape look like in 3-5 years,
> and what can Mozilla do to help us get there?
The security landscape should be opaque, as opaque as possible.
This should be done as simply as possible.
Creating a mechanism by which clients and servers can dynamically create their own credential bundles during the handshake would be ideal.
Identity Certifying Authorities -- the members of the root program -- should examine ways to provide assertions which are Subject-privacy-oriented, while still providing all of the services which they currently do.
Certifying Authorities should require notarized/acknowledged statements of state identity and public key bindings, utilizing the state infrastructure which already exists to authoritatively bind information to state identities. (Gerv, this is what I meant when I suggested "to the level necessary for a notary public in their jurisdiction".) Identity CAs are -excellent- at document authentication, and I think that the Identity CA should run continuing-education programs for notaries public to explain the science's intricacies. (This is the reason I wish CABF were a strong central organization.)
Certifying Authorities should automate key enrollment, utilizing the capacity to provide multiple assertions in the X.509/PKIX handshake which I proposed elsewhere to permit multiple statements of authority (for example, the device's key as certified by the authoritatively-bound key above, and the authoritatively-bound key itself) to be made over a PKCS#10 Certificate Signing Request, proving that the owner of the authoritatively-bound key authorized the enrollment and binding of the CSR key. In my dream world, the content of the request would be honored (to the extent that it could be accommodated under CA policy), allowing it to be automated and generated dynamically by the client to include only the information which needs to be asserted by the client in the transaction it's working with.
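A rough sketch of the endorsement I mean (Python, using the
"cryptography" package; the names are illustrative, and how the
counter-signature would travel alongside the CSR is deliberately left
open):

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    # The authoritatively-bound key, certified via the state-identity
    # process above, and a fresh per-device key being enrolled.
    bound_key = ec.generate_private_key(ec.SECP256R1())
    device_key = ec.generate_private_key(ec.SECP256R1())

    csr = (
        x509.CertificateSigningRequestBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                                    "device-01")]))
        .sign(device_key, hashes.SHA256())
    )
    csr_der = csr.public_bytes(serialization.Encoding.DER)

    # The extra statement of authority: a counter-signature over the CSR
    # by the bound key, proving its owner authorized this enrollment.
    endorsement = bound_key.sign(csr_der, ec.ECDSA(hashes.SHA256()))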
Certifying Authorities should share information about hash values of DNs, domain names, and authoritative keys that they certify. The intent would be to create a central point to determine if another CA is already servicing a particular domain or DN -- not so that the colliding CA must refuse to service the request, but to give it a point of reference that manual verification is required.
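Illustratively (a toy Python sketch; the shared store is hypothetical,
and a collision routes the request to manual verification rather than
refusal):

    import hashlib

    shared_registry = set()   # hypothetical store the CAs jointly maintain

    def needs_manual_review(domain):
        digest = hashlib.sha256(domain.lower().encode()).hexdigest()
        collision = digest in shared_registry
        shared_registry.add(digest)
        return collision      # True -> another CA already services this name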
Certificates should flow like water, to prevent back-end database matching on the same key (which is, by definition, an identity which must be strongly bound to other identity information).
Instead of forcing every device which accesses a single email account to use the same keypair, every device which accesses an email account should be individually identifiable. This is so that in the inevitable event of a compromise, the keys held by the compromised device can be identified and revoked individually without inconveniencing every other device -- but it also should be completely opaque to the correspondent, who doesn't need to know the name or type of device you're using if he has surety that your identity is known and available for state process.
I don't understand why everyone insists that this must be difficult. Why does everyone adhere to rigid "state identity is the only thing you'll ever need, so state identity is the only thing you'll ever get" X.500 dogma? Even computer-level IPsec certificates don't bother identifying their owner in Microsoft's implementations, even though the intent of the standard was to identify who owns a particular device.
Ideally, CAs (and Mozilla) will move away from the idea of "OMG IT'S UNTRUSTED I'M STOPPING PROCESSING" to alternative user interface systems which permit untrusted certificates to still be parsed.
Of course, ideally, Mozilla would also stop treating "the presence of a trusted certificate" as "authorization to display UI without completely blocking the client's flow". Just because a particular assertion is untrustworthy does not mean that I want to be protected from myself. Instead, I'd prefer a reddened address-bar with a tooltip that pops up when I mouse over it.
Ideally, the server and the client would never throw fatal alerts to each other. As it stands right now, if the server throws an "unknown issuer" alert, then the connection is closed and the (context-insensitive) error page is shown -- which has no information about how to correct the error. Apache's mod_ssl documentation says that "optional_no_ca" is against the idea of security, but it's not. The application layer can decide "okay, this isn't trusted so I can't act on its information -- but I can guide the user into my own context's error-processing path to help the user fix the problem."
The authentication layer IS NOT the authorization layer. The authentication layer provides information to the authorization layer, for the authorization layer to be able to apply local (and thus context-specific) policy.
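To make the layering concrete, a toy Python sketch (chain_validates() is
a hypothetical helper): authentication only reports facts and never
hard-fails; authorization applies local, context-specific policy.

    def authenticate(cert_chain, trust_store):
        # Report what we learned; throw no fatal alerts.
        return {"chain_ok": chain_validates(cert_chain, trust_store),
                "subject": cert_chain[0].subject}

    def authorize(facts, context):
        if facts["chain_ok"]:
            return "proceed"
        if context == "interactive-browsing":
            return "proceed-with-red-address-bar"  # warn, don't block
        return "deny"                              # strict contexts hard-fail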
-Kyle H
I thoroughly disagree. The security landscape should be as transparent
as possible. When interested and knowledgeable outsiders can see how it
works, vulnerabilities are discovered, reported, and fixed. This is why
it was so important that the source code of PGP was available for all to
inspect (at least until Symantec bought out the PGP Corp.)
--
David E. Ross
<http://www.rossde.com/>
On occasion, I might filter and ignore all newsgroup messages
posted through GoogleGroups via Google's G2/1.0 user agent
because of spam from that source.
Mozilla is doing essentially this via the CA policy and the built-in certs list, but maybe the policy and/or the enforcement could be strengthened.
> > Concerns other than web site identity, such as phishing and malware
> > protection, should be addressed by other means.
>
> Technically, 'phishing' is all about website identity. It is the false assumption of trusted status from the user who is fooled by very-similar markup.
You are right, that statement came out patently false. What I meant was, I believe SSL_AuthCertificate should authenticate the holder of the DNS name and nothing else. Technically speaking, there is nothing wrong with the holder of evil.com operating a TLS service at www.paypal.com.evil.com that serves malware. Consequently, I believe DANE is a sufficient backend for SSL_AuthCertificate (see previous discussion surrounding https://groups.google.com/d/msg/mozilla.dev.security.policy/xStt5FitVL4/-a9Wb9AsDFUJ) and oppose the proposal for extra requirements on "wildcard" style server authentication (https://bugzilla.mozilla.org/show_bug.cgi?id=481725).
> This is, in fact, the entire thing that the CA model was intended to protect against.
EV can do this. DV does it awkwardly, and I would prefer that it stick to its basic function of authentication of services bound to DNS names.
--
Matt
This is for historical reasons - there was a patent on this ability :-(
This problem has now been resolved, and NSS implemented a new library,
libpkix, which includes this ability. Switching to use libpkix is high
on our developers' priority lists for Firefox 5.
> OCSP stapling is also a very good thing to do. How common is it on HTTPS
> servers today?
Very uncommon indeed. There is no stable version of Apache which ships
with it.
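One hedged way to measure this for a given server (a Python sketch
assuming a recent pyOpenSSL; it deliberately does no certificate
verification):

    import socket
    from OpenSSL import SSL

    def stapled_ocsp(host, port=443):
        seen = {}

        def ocsp_cb(conn, ocsp_der, data):
            seen["der"] = ocsp_der     # b"" when the server stapled nothing
            return True                # continue the handshake regardless

        ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)
        ctx.set_ocsp_client_callback(ocsp_cb)
        sock = socket.create_connection((host, port))
        conn = SSL.Connection(ctx, sock)
        conn.set_tlsext_host_name(host.encode())
        conn.request_ocsp()            # must be requested before the handshake
        conn.set_connect_state()
        conn.do_handshake()
        sock.close()
        return seen.get("der") or None # DER bytes if stapled, else None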
Gerv
I can't claim to be any kind of expert, but I've got a few thoughts
having read the wiki page...
CA incentives
* Has there been any kind of thought or discussion around having a
cash penalty for poor behaviour - possibly including a sum held in
escrow with the amount based on the number of certificates issued
against a specific root?
HSTS/OCSP Failure Modes
* My reading of the HSTS spec is a little unclear on whether section
7.3 requires the connection to fail if OCSP checking fails. If it
does, surely this makes it unlikely that those with reservations about
OCSP checking will use HSTS, which doesn't seem desirable. Would it
make sense to make mandatory OCSP checking an explicit option in HSTS,
allowing those sites which prefer the security a way to opt in, perhaps
along the lines sketched below?
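Purely hypothetically ("mandatoryOCSP" is not a real HSTS directive;
this is only what such an opt-in might look like on the wire if it were
adopted), shown here as a Python header map:

    response_headers = {
        "Strict-Transport-Security":
            "max-age=31536000; includeSubDomains; mandatoryOCSP",
    }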
David
--
As always, my comments are my own and represent my views and not those
of my employer.
First, that discussion belongs more properly on the IETF WebSec list where HSTS is one of the work items.
web...@ietf.org
Second, to briefly answer your question: we didn't want the HSTS spec to be about exactly how certificate validation happens, but to be rules about when to use HTTPS and how to deal with failures, not specifically about what constitutes a failure, though I admit we went there a little bit with the statements about self-signed certificates....
The same was true of several issues including "mixed-content" which some folks wanted to include into the spec even though it doesn't really belong there.
Fundamentally, the problem here is that for any security behavior documented in something like the BSH (http://code.google.com/p/browsersec/wiki/Main) where behavior diverges among the browsers, there ought to be a spec that presses browsers towards convergence on one behavior. Clearly the world isn't there yet... but I can dream :)
- Andy
Hypothesis: revocation doesn’t work because of the following vicious
circle:
1. Browsers display content even when revocation information is
unobtainable.
2. Revocation information is not guaranteed to be obtainable.
3. Site operators don’t pressure CAs to make their revocation
information reliably obtainable.
4. Browsers display content even when revocation information is
unobtainable.
You could break the circle at any point. But, in the nature of
vicious circles, no one wants to make the first move.
Perhaps the problem with revocation information is that (in the
current design) it must be obtained from one source: the issuing CA.
If site operators were to be reliant for their business continuity on
revocation information, and that revocation information was only
available from one source, then they would be right to be nervous.
On the other hand, if there were choice and redundancy in the source
of revocation information, then site operators might accept that
browsers would refuse to display content when revocation information
was not available. A site operator with a certificate from one CA
could shop around for an OCSP response from another CA. (Naturally,
I’m envisaging the use of OCSP Stapling).
In addition, all CAs would have to publish their CRLs, and the CA
issuing the OCSP response would have to consult the issuing CA’s CRL.
Bad guys would first have to trick a CA into issuing a certificate,
and then trick one or more CAs (but, possibly the original one) into
issuing a continuous sequence of OCSP responses. This doesn’t appear
to be any more vulnerable than the current system.
While it’s true that a CA could declare a certificate from one of its
competitors invalid, the site simply wouldn’t staple such a response.
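(A rough sketch of the client policy this implies, in Python with the
"cryptography" package; signed_by_any() is a hypothetical helper that
checks the responder against the whole trusted CA set, which is the
multi-source twist, and the policy is hard-fail: no response, no
content.)

    import datetime
    from cryptography.x509 import ocsp

    def accept_stapled(stapled_der, trusted_cas):
        if stapled_der is None:
            return False                  # breaking the circle at step 1
        resp = ocsp.load_der_ocsp_response(stapled_der)
        if resp.response_status != ocsp.OCSPResponseStatus.SUCCESSFUL:
            return False
        if not signed_by_any(resp, trusted_cas):    # hypothetical helper
            return False
        if resp.certificate_status != ocsp.OCSPCertStatus.GOOD:
            return False
        now = datetime.datetime.utcnow()
        return resp.this_update <= now and (resp.next_update is None
                                            or now < resp.next_update)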
It’s worth noting that this approach was tried many years ago by
Valicert. It didn’t work then! And, maybe, it won’t work now. But,
times have changed. Maybe, in the restricted circumstances of HTTPS
on the Web, it can be made to work.
Multi-source OCSP stapling isn’t so very different from multi-source
short-lifetime certificates. In this latter solution, an existing
certificate from one CA would be used to authenticate a request for a
replacement short-lifetime certificate from another CA. The main
difference (perhaps) is that the CA issuing the replacement
certificate would be relying on the verification performed by the
initial CA. And, such reliance would have to be time-limited.
Hard-coded blacklists aren't the answer. Somehow, we have to break
the vicious circle.
All the best. Tim.
Are you saying that, if I set my preferences to "When an OCSP connection
fails, treat the certificate as invalid", I will still access the
affected Web page?
No, he is saying that the default for that preference is Off, not On -
i.e. by default, browsers display content even if they can't get
revocation information.
Gerv
A small amount of thought; that thought immediately brings to mind the
following possible problems:
- How do you define "poor behaviour"? Who arbitrates?
- Can a CA insure against this risk? If so, the penalty for them is not
so great anyway.
Gerv
Hypothesis: revocation doesn’t work because of the following vicious
circle:
1. Browsers display content even when revocation information is
unobtainable.
2. Revocation information is not guaranteed to be obtainable.
3. Site operators don’t pressure CAs to make their revocation
information reliably available.
4. Browsers display content even when revocation information is
unobtainable.
You could break the circle at any point. But, in the nature of
vicious circles, no one has an incentive to make the first move.
Perhaps the problem with revocation information is that (in the
current design) it must be obtained from only one source: the issuing
CA. If site operators were to be reliant for their business
continuity on revocation information, and that revocation information
were only available from one source, then they would be right to be
nervous.
On the other hand, if there were choice and redundancy in the source
of revocation information, then site operators might accept that
browsers would refuse to display content when revocation information
was not available. A site operator with a certificate from one CA
could shop around for an OCSP response from another CA. (Naturally,
I’m envisaging the use of OCSP Stapling).
As it wouldn’t necessarily be public information who was issuing OCSP
responses for a particular certificate, there may have to be one or
more central services for reporting certificate misuse. Those issuing
OCSP responses would have to consult these services.
Bad guys would first have to trick a CA into issuing a certificate,
and then trick one or more CAs (but, possibly the original one) into
issuing a continuous sequence of OCSP responses. This doesn’t appear
to be any more vulnerable than the current system.
While it’s true that a CA could declare a certificate from one of its
competitors invalid, the site simply wouldn’t staple such a response.
It’s worth noting that this approach was tried many years ago by
Valicert. It didn’t work then! And, maybe, it won’t work now. But,
times have changed. Maybe, in the restricted circumstances of HTTPS
on the Web, it can be made to work.
Multi-source OCSP stapling isn’t so very different from multi-source
short-lifetime certificates. In this latter solution, an existing
certificate from one CA would be used to authenticate a request for a
replacement short-lifetime certificate from another CA. The main
difference (perhaps) is that the CA issuing the replacement
certificate would be relying on the verification performed by the
initial CA. And, such reliance would have to be time-limited.
A hard-coded blacklist isn’t the answer. Somehow, we have to break
the vicious circle.
> - Can a CA insure against this risk? If so, the penalty for them is not
> so great anyway.
Good point... but then there would still be someone else with a
business relationship and leverage to discourage poor practices. And
once you've lost a bunch of money belonging to one insurer, likely
it's your own on the line in the future.
Here's a thought experiment from a while back:
https://wiki.mozilla.org/CA:Dispute_resolution
There are many ways that we can address difficult issues such as "poor
behaviour." One way is to document it, with a set of rules and
guidelines. Another way is to leave it broad, and use a forum of
dispute resolution to deal with each question on a case by case basis.
When it comes to dispute resolution, the standard forum everyone thinks
of is the courts. For various reasons, the courts will not satisfy:
they won't resolve anywhere near enough disputes, and those they do
resolve will be too expensive.
A viable alternative is to invent our own. This is formed by agreement;
everyone has to agree. That's easy enough to secure over time.
> Do the annual audits cover how the
> response to failures matched the plan?
(The DRC does, in words to the effect of "The CPS describes the CA's
procedures for recovering from disasters and other operating
interruptions" and "CA personnel demonstrate knowledge of disaster
recovery procedures.")
> That might identify the
> failures via an independent authority.
(Yes, to the auditor. The results might not be disclosed beyond that.)
>> - Can a CA insure against this risk? If so, the penalty for them is not
>> so great anyway.
> Good point... but then there would still be someone else with a
> business relationship and leverage to discourage poor practices. And
> once you've lost a bunch of money belonging to one insurer, likely
> it's your own on the line in the future.
(I would guess that it is more cost-effective to self-insure.)
iang