
CA pinning within something like HSTS


Paul Tiemann

May 10, 2011, 4:28:12 PM5/10/11
to mozilla-dev-s...@lists.mozilla.org
Websites should be able to tell browsers what CAs they use, so that browsers can know to reject self-signed certificates and so high value targets can reduce their MITM risk from rogue certificates obtained at another CA.

Support for that opinion:

* Last Friday it was reported that someone in Syria was attempting to MITM Syrian Facebook users with a bogus 512-bit self-signed certificate for s.static.ak.facebook.com.

https://www.eff.org/deeplinks/2011/05/syrian-man-middle-against-facebook

Firefox and other browsers currently have to use very neutral language when they describe a certificate-not-trusted error:

"This Connection is Untrusted
You have asked Firefox to connect securely to www.bogus-site.com, but we can't confirm that your connection is secure.
Normally, when you try to connect securely, sites will present trusted identification to prove that you are going to the right place. However, this site's identity can't be verified."

But if the site operators had previously sent HTTP response headers that told the browser "We only use CAs X and Y for this site" with a max-age of 3 months or 6 months, then the browser would be armed with enough information to actually protect the user in cases like the Syrian certificate -- the browser could have enough information to reject the connection.

* I think sites should be able to push that information back to browsers using HTTP headers because if we do it via a DNS query, then new browsers would start generating lots more DNS traffic to non-existent DNS records (if FF had to ask for _allowed_cas._tcp.domain.com then it would have to always ask that when connecting to sites, even for the majority of sites who aren't worried about it enough to limit the trust anchor list.) If you use an HTTP header similar to HSTS then you can just send that header on a few of your site's pages (login pages, home page perhaps) and your user base would be inoculated against rogue certs and self-signed spoof certs.

* As a general observation, we're having a hard time coming up with a good way to enable security.OCSP.require=true, and other areas of concern aren't much easier to resolve either (like BR for external sub CAs, etc) -- and perhaps the difficulty stems from trying to design a perfect system that is perfect for all use cases simultaneously. What if we leave security.OCSP.require=false as default and the first step is to come up with a way for sites to opt-in to higher security restrictions using something similar to HSTS?
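As a rough illustration of the HSTS-style opt-in described above -- the header name, its syntax, and the cache logic here are entirely hypothetical, just a sketch of the intended browser behavior:

```python
import time

# Hypothetical header a site might send on its login/home pages, e.g.:
#   X-Accepted-CAs: max-age=7776000; ca="CA X"; ca="CA Y"
# Nothing about this header is real or standardized; it only illustrates
# the HSTS-like "we only use CAs X and Y" opt-in.

pin_cache = {}  # host -> (allowed_ca_names, expiry_timestamp)

def remember_pins(host, header_value):
    """Parse the hypothetical header and cache the allowed-CA list."""
    max_age = 0
    cas = []
    for part in header_value.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            max_age = int(part[len("max-age="):])
        elif part.startswith('ca="') and part.endswith('"'):
            cas.append(part[4:-1])
    if cas and max_age > 0:
        pin_cache[host] = (set(cas), time.time() + max_age)

def connection_allowed(host, issuer_ca_name):
    """Reject a cert whose issuing CA is not in the cached pin set."""
    entry = pin_cache.get(host)
    if entry is None or entry[1] < time.time():
        return True  # no (unexpired) pins: fall back to normal validation
    return issuer_ca_name in entry[0]
```

With pins cached, a bogus self-signed certificate (as in the Syrian incident) would fail the issuer check outright instead of producing a clickable warning.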

=JeffH

May 10, 2011, 6:51:39 PM5/10/11
to mozilla-dev-s...@lists.mozilla.org
> Websites should be able to tell browsers what CAs they use, so that browsers
> can know to reject self-signed certificates and so high value targets can
> reduce their MITM risk from rogue certificates obtained at another CA.
>
> Support for that opinion:
>
> * Last Friday it was reported that someone in Syria was attempting to MITM
> Syrian Facebook users with a bogus 512-bit self-signed certificate for
> s.static.ak.facebook.com.
>
> https://www.eff.org/deeplinks/2011/05/syrian-man-middle-against-facebook
>
> Firefox, and other browsers currently have to use very neutral language when
> they describe a certificate not trusted error:
>
> "This Connection is Untrusted
> You have asked Firefox to connect securely to www.bogus-site.com, but we
> can't confirm that your connection is secure.
> Normally, when you try to connect securely, sites will present trusted
> identification to prove that you are going to the right place. However, this
> site's identity can't be verified."
>
> But if the site operators had previously sent HTTP response headers that
> told the browser "We only use CAs X and Y for this site" with a max-age of 3
> months or 6 months, then the browser would be armed with enough information
> to actually protect the user in cases like the Syrian certificate -- the
> browser could have enough information to reject the connection.

Yep.


> * I think sites should be able to push that information back to browsers
> using HTTP headers because if we do it via a DNS query, then new browsers
> would start generating lots more DNS traffic to non-existent DNS records (if
> FF had to ask for _allowed_cas._tcp.domain.com then it would have to always
> ask that when connecting to sites, even for the majority of sites who aren't
> worried about it enough to limit the trust anchor list.)

And that is to some extent the least of the (present) issues with using
(DNSSEC-secured) DNS to distribute such policy directives -- the info would
likely be cached, etc., precisely to address that traffic issue -- alongside
harder problems such as even getting DNSSEC-secured DNS replies through to end
systems.


> If you use an HTTP
> header similar to HSTS then you can just send that header on a few of your
> site's pages (login pages, home page perhaps) and your user base would be
> inoculated against rogue certs and self-signed spoof certs.

yes, tho there's a few more details one has to get right (such as serving such
policy directives only over secure connections).


> * As a general observation, we're having a hard time coming up with a good
> way to enable security.OCSP.require=true, and other areas of concern aren't
> much easier to resolve either (like BR for external sub CAs, etc) -- and
> perhaps the difficulty stems from trying to design a perfect system that is
> perfect for all use cases simultaneously. What if we leave
> security.OCSP.require=false as default and the first step is to come up with
> a way for sites to opt-in to higher security restrictions using something
> similar to HSTS?

The former (security.OCSP.require=false) is something that could be done pretty
quickly, while the latter will take a while. In any case, are they not
effectively separable concerns?

=JeffH


Rob Stradling

May 11, 2011, 5:24:28 AM5/11/11
to dev-secur...@lists.mozilla.org, Paul Tiemann, =JeffH
On Tuesday 10 May 2011 23:51:39 =JeffH wrote:
> On Tuesday 10 May 2011 21:28:12 Paul Tiemann wrote:
<snip>

> > But if the site operators had previously sent HTTP response headers that
> > told the browser "We only use CAs X and Y for this site" with a max-age
> > of 3 months or 6 months,

What if the site operator decides to switch to a different CA, and for some
reason wants/needs to do so in a hurry? Having to wait 3-6 months for all
clients to accept their new site certificate would be problematic.

> > then the browser would be armed with enough
> > information to actually protect the user in cases like the Syrian
> > certificate -- the browser could have enough information to reject the
> > connection.
>
> Yep.

This approach could help, but it's worth remembering that (unless there is a
"preloaded list" of which CAs are used by which sites) it does rely on the
user's browser...
- visiting the legitimate site.
- caching your proposed HTTP response header.
- never clearing that cache.
...before ever visiting the attacker's site.

Google have been implementing a small, hard-coded "preloaded list" in Chrome:
http://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security_state.cc?view=markup
(search for "kGoogleAcceptableCerts")

http://dev.chromium.org/sts notes that as this list grows, "it can change into
a list this (sic) is shared across browsers, like the safe-browsing database
is today".

> > * I think sites should be able to push that information back to browsers
> > using HTTP headers because if we do it via a DNS query, then new
> > browsers would start generating lots more DNS traffic to non-existent
> > DNS records (if FF had to ask for _allowed_cas._tcp.domain.com then it
> > would have to always ask that when connecting to sites, even for the
> > majority of sites who aren't worried about it enough to limit the trust
> > anchor list.)

A "list...shared across browsers" could generate lots more traffic too. Either
the browser would have to download the whole list (including entries for many
sites that the user won't ever visit), or the browser would have to call some
kind of web service (sending the domain name, and receiving back the STS and
CA pinning information) prior to contacting the site for the first time.

DNSSEC-secured "CA pinning" approaches (such as CAA -
http://tools.ietf.org/html/draft-hallambaker-donotissue) would not need a
"pre-loaded list", so perhaps they would actually generate less additional
traffic, not more.

Incidentally, Google co-authored CAA with us and AFAIK they have not lost
their interest in DNSSEC-secured "CA pinning".

<snip>

Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Rob Stradling

May 11, 2011, 5:44:14 AM5/11/11
to dev-secur...@lists.mozilla.org, =JeffH
On Tuesday 10 May 2011 23:51:39 =JeffH wrote:
> On Tuesday 10 May 2011 21:28:12 Paul Tiemann wrote:
> > What if we leave security.OCSP.require=false as default and the first step
> > is to come up with a way for sites to opt-in to higher security
> > restrictions using something similar to HSTS?
>
> The former (security.OCSP.require=false) is something that could be done
> pretty quickly,

Jeff, security.OCSP.require=false is already the default. Did you mean that
changing it to "true" could be done pretty quickly?

Changing it to "true" could be done very quickly. It just needs a tiny
code/config change in the source code and then a new point release of the
browser. The problem is that various parties are reluctant to see this change
made yet, because it would block users from accessing many sites when the
relevant CAs' OCSP Responders are unavailable.

> while the latter will take a while. In any case, are they not effectively
> separable concerns?

The concern is that certificate "revocation doesn't work".
http://www.imperialviolet.org/2011/03/18/revocation.html

Therefore, we're looking at every idea we can possibly think of for how to
make certificate revocation "work".

Paul Tiemann

May 11, 2011, 8:28:03 PM5/11/11
to Rob Stradling, dev-secur...@lists.mozilla.org, =JeffH
On May 11, 2011, at 3:24 AM, Rob Stradling wrote:

> On Tuesday 10 May 2011 23:51:39 =JeffH wrote:
>> On Tuesday 10 May 2011 21:28:12 Paul Tiemann wrote:

> <snip>
>>> But if the site operators had previously sent HTTP response headers that
>>> told the browser "We only use CAs X and Y for this site" with a max-age
>>> of 3 months or 6 months,
>
> What if the site operator decides to switch to a different CA, and for some
> reason wants/needs to do so in a hurry? Having to wait 3-6 months for all
> clients to accept their new site certificate would be problematic.

Not much of a problem if more than one CA could be referenced. (List 1 backup CA in case you need it one day?)

And maybe a smaller max-age would make more sense?
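For instance, a pin set that already names a pre-arranged backup CA would accept a certificate from that CA immediately, with no waiting period. A toy sketch (the CA names are made up):

```python
# Toy sketch: each site's pin set names its primary CA plus a pre-arranged
# backup CA it may never use. The host and CA names are made up.
pins = {"example.com": {"Primary CA", "Backup CA"}}

def issuer_accepted(host, issuer_ca_name):
    """Accept any issuer in the pin set; unpinned hosts fall through."""
    allowed = pins.get(host)
    return allowed is None or issuer_ca_name in allowed
```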

>>> then the browser would be armed with enough
>>> information to actually protect the user in cases like the Syrian
>>> certificate -- the browser could have enough information to reject the
>>> connection.
>>
>> Yep.
>
> This approach could help, but it's worth remembering that (unless there is a
> "preloaded list" of which CAs are used by which sites) it does rely on the
> user's browser...
> - visiting the legitimate site.
> - caching your proposed HTTP response header.
> - never clearing that cache.
> ...before ever visiting the attacker's site.

Those are just the inherent limitations of CA pinning, I guess. I don't think we need to distribute a list of pinned sites, and I don't favor approaches that waste resources on the 95% of sites that don't care so that 5% of sites can put special pinning records into DNS or elsewhere -- not when it's easy enough for those who want pinning to specify it using a header on their HTTPS login page and other HTTPS pages...

To borrow a principle from PayPal's recent position paper on stopping cybercrime:

"We believe that this principle is self-evident: do no more than needed in order to make the Internet safer."

If some kind of header based pinning were available, the interested sites could protect themselves from almost all of the organizations big enough to MITM their traffic.

The problem I'm seeing with things like Perspectives and other funny-business detectors is that they have to guess what the site operator intended, and that guess doesn't work so well when very large sites legitimately open a new POP in another country and use a different SSL certificate at that POP...

> Google have been implementing a small, hard-coded "preloaded list" in Chrome:
> http://src.chromium.org/viewvc/chrome/trunk/src/net/base/transport_security_state.cc?view=markup
> (search for "kGoogleAcceptableCerts")
>
> http://dev.chromium.org/sts notes that as this list grows, "it can change into
> a list this (sic) is shared across browsers, like the safe-browsing database
> is today".
>
>>> * I think sites should be able to push that information back to browsers
>>> using HTTP headers because if we do it via a DNS query, then new
>>> browsers would start generating lots more DNS traffic to non-existent
>>> DNS records (if FF had to ask for _allowed_cas._tcp.domain.com then it
>>> would have to always ask that when connecting to sites, even for the
>>> majority of sites who aren't worried about it enough to limit the trust
>>> anchor list.)
>
> A "list...shared across browsers" could generate lots more traffic too. Either
> the browser would have to download the whole list (including entries for many
> sites that the user won't ever visit), or the browser would have to call some
> kind of web service (sending the domain name, and receiving back the STS and
> CA pinning information) prior to contacting the site for the first time.

I wasn't clear in that paragraph -- I was actually saying I think site operators should be able to do it via HTTP headers and not via DNS entries or other pre-shared lists or other mechanisms. I know it has the limitations mentioned above, but ...

* It doesn't require any extra network traffic for all the people who don't care to set it up.
* It doesn't need to wait for everyone to support DNSSEC

<snip>

Paul

Paul Tiemann

May 11, 2011, 8:29:39 PM5/11/11
to Rob Stradling, dev-secur...@lists.mozilla.org, =JeffH
On May 11, 2011, at 3:44 AM, Rob Stradling wrote:

> The concern is that certificate "revocation doesn't work".
> http://www.imperialviolet.org/2011/03/18/revocation.html
>
> Therefore, we're looking at every idea we can possibly think of for how to
> make certificate revocation "work".

I'd also add 'to make certificate revocation "work" without making OCSP responders the next big DDoS target.'

Paul
