
The need to improve HSTS (etc.) preloading based on our implementation experience


Brian Smith

Sep 16, 2013, 10:24:17 PM
to mozilla-de...@lists.mozilla.org
See http://news.cnet.com/8301-13578_3-57602701-38/nsa-disguised-itself-as-google-to-spy-say-reports/

The way we currently do HSTS preloading, and the way that I've been
hoping we'd do key pinning preloading, has us verify the authenticity
of the data to preload by connecting to the site in question and
verifying that the site actually sends the headers. Further, in order
to minimize the risk of a site getting DoS'd (stuck with outdated
preload info hard-coded into the browser), we've required sites to set
an 18-week max-age on the HSTS header. With an 18-week max-age, there
will be at least two Firefox releases before the HSTS information
expires, so if the website needs to turn HSTS off for whatever
reason, the potential DoS is limited to approximately 6-12
weeks (the update frequency of Firefox).
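To make the arithmetic concrete, here is a small sketch (not Mozilla's actual tooling) of the 18-week floor expressed as the header a qualifying site would send; RFC 6797 specifies max-age in seconds:

```python
# Sketch only: the 18-week preload floor as an HSTS header value.
# RFC 6797's max-age directive is expressed in seconds.

WEEK = 7 * 24 * 60 * 60          # 604800 seconds
MIN_PRELOAD_AGE = 18 * WEEK      # 10886400 seconds, roughly two release cycles

header = "Strict-Transport-Security: max-age=%d; includeSubDomains" % MIN_PRELOAD_AGE
print(header)
# Strict-Transport-Security: max-age=10886400; includeSubDomains
```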

Ultimately, I would like us to scan the entire internet (as much as we
can find) to collect this SSL-related metadata for all sites
automatically, and then preload all the found data into the browser,
until it gets impractically large. My expectation is that this
approach will eventually allow our preloaded HSTS/pinning/must-staple
lists to cover a much larger amount of the internet than Chrome's
preload list, which requires manual registration with Google and which
is curated manually (AFAICT) at Google.
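A scanner of that sort would mostly be a header check per site. The sketch below is hypothetical (the names parse_hsts and qualifies_for_preload are illustrative, not an actual Mozilla tool): parse a site's Strict-Transport-Security header and decide whether it clears the 18-week floor.

```python
# Hypothetical preload-scanner core: given the Strict-Transport-Security
# header a site actually sent, decide whether it qualifies for preloading.

MIN_PRELOAD_AGE = 18 * 7 * 24 * 60 * 60  # the 18-week floor, in seconds

def parse_hsts(value):
    """Parse an HSTS header value into (max_age, include_subdomains)."""
    max_age, include_sub = None, False
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            max_age = int(directive.split("=", 1)[1].strip('"'))
        elif directive.lower() == "includesubdomains":
            include_sub = True
    return max_age, include_sub

def qualifies_for_preload(header_value):
    max_age, _ = parse_hsts(header_value)
    return max_age is not None and max_age >= MIN_PRELOAD_AGE

print(qualifies_for_preload("max-age=10886400; includeSubDomains"))  # True
print(qualifies_for_preload("max-age=31536"))                        # False
```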

Unfortunately, we heard from Google that it is actually difficult for
them to send these headers in HTTPS responses from their servers due
to some special requirements that I am not so familiar with. I know
that Google's servers require exceptions to the HSTS includeSubdomains
rules, but providing a way to specify exceptions was rejected during
the work on defining HSTS. My understanding is that that isn't the
only roadblock.

So far, we've taken the stance that we should avoid treating Google
specially. Perhaps it is time to admit that perfect is the enemy of
the good here. We should find some way to work with Google and other
Google-like targets to get their SSL-related metadata preloaded into
Firefox sooner than our current policy would otherwise allow.

I still think that the general approach of explicitly and
automatically verifying/gathering the SSL-related metadata we preload
by contacting the website is the right way to go in the long run. We
should find more general solutions to the problems that Google has
encountered. I know other sites have expressed the need for specifying
exceptions to includeSubdomains and that seems like an easy extension
to add, though I don't know if that change alone would be sufficient
for Google or others that have found the standard way of communicating
HSTS information to be impractical.
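To illustrate what an includeSubdomains exception might mean for preload matching, here is a purely hypothetical sketch; no such exclusion directive exists in RFC 6797, and the entry format is invented for illustration:

```python
# Hypothetical: preload-list matching if entries could carry an
# exclusion set alongside includeSubdomains. Not part of RFC 6797.

def is_hsts_host(host, entry):
    """entry: {'host': ..., 'include_subdomains': bool, 'excluded': set}"""
    if host == entry["host"]:
        return True
    if not entry["include_subdomains"]:
        return False
    if host in entry["excluded"]:
        return False  # the carved-out exception
    return host.endswith("." + entry["host"])

# charts.apis.google.com appears elsewhere in this thread as exactly
# this kind of exception; the entry below is illustrative.
entry = {
    "host": "google.com",
    "include_subdomains": True,
    "excluded": {"charts.apis.google.com"},
}
print(is_hsts_host("apis.google.com", entry))         # True
print(is_hsts_host("charts.apis.google.com", entry))  # False
```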

Thoughts?

Cheers,
Brian
--
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)

Stefan Arentz

Sep 17, 2013, 4:28:33 AM
to Brian Smith, mozilla-de...@lists.mozilla.org

On Sep 17, 2013, at 4:24 AM, Brian Smith <br...@briansmith.org> wrote:

> Ultimately, I would like us to scan the entire internet (as much as we
> can find) to collect this SSL-related metadata for all sites
> automatically, and then preload all the found data into the browser,
> until it gets impractically large. My expectation is that this
> approach will eventually allow our preloaded HSTS/pinning/must-staple
> lists to cover a much larger amount of the internet than Chrome's
> preload list, which requires manual registration with Google and which
> is curated manually (AFAICT) at Google.

I am very interested in working on this part. Let’s talk.

S.

Yvan Boily

Sep 17, 2013, 4:55:45 AM
to Brian Smith, mozilla-de...@lists.mozilla.org

> So far, we've taken the stance that we should avoid treating Google
> specially. Perhaps it is time to admit that perfect is the enemy of
> the good here. We should find some way to work with Google and other
> Google-like targets to get their SSL-related metadata preloaded into
> Firefox sooner than our current policy would otherwise allow.

Is there a clear set of requirements in terms of what we need to support for Google? Are there similar issues for FB, Twitter, and other major properties?

I think that aiming for the whole web as a starting point could be painful, but starting with the Alexa Top X (1000 or so) to shake out any major issues, then moving to the top million, and then the whole web, would let us deal with the "high priority" sites that most affect usability.

Am I being too conservative on this?

Otherwise I totally agree with your proposal, and would like to know how we can help on the automation side of things to make sure those lists are generated and kept up to date.

Cheers,
Yvan Boily

Gervase Markham

Sep 17, 2013, 4:58:28 AM
to Brian Smith
On 17/09/13 03:24, Brian Smith wrote:
> Unfortunately, we heard from Google that it is actually difficult for
> them to send these headers in HTTPS responses from their servers due
> to some special requirements that I am not so familiar with.

Can we work out what those requirements are by studying the pinning
configuration for google.com and its subdomains in Chrome?

> I know
> that Google's servers require exceptions to the HSTS includeSubdomains
> rules, but providing a way to specify exceptions was rejected during
> the work on defining HSTS. My understanding is that that isn't the
> only roadblock.

So, just to be clear: Google is achieving pinning in Chrome by
preloading, and is 'happy' (willing) not to be pinned in other browsers
because they are not sending the headers?

> So far, we've taken the stance that we should avoid treating Google
> specially.

In what context? :-)

I think that if we have a convenient "just set it up and it'll work"
scanning system, and an inconvenient "you have to contact us and
register" system, then the use of the latter will be limited to
companies who can't use the former, and hopefully we'll avoid scaling
issues.

> Perhaps it is time to admit that perfect is the enemy of
> the good here. We should find some way to work with Google and other
> Google-like targets to get their SSL-related metadata preloaded into
> Firefox sooner than our current policy would otherwise allow.

I don't have an issue with people registering to be preloaded, as long
as we allow more people than just Google to do so.

Gerv

a...@google.com

Sep 17, 2013, 10:18:47 AM
to
On Tuesday, September 17, 2013 4:58:28 AM UTC-4, Gervase Markham wrote:
> Can we work out what those requirements are by studying the pinning
> configuration for google.com and its subdomains in Chrome?

There are two different things that I fear are getting conflated here:

1) HSTS (i.e. "HTTPS required") preloading.
2) Public key pinning.

Chromium contains a list of HSTS preloads which is open to anyone and which I currently manage manually.

Chromium also contains preloaded pinning for Google, Twitter, Tor and CryptoCat. This is also manually managed, but is not open to everyone as it takes much more time to handle these.

*In the strongest terms*: no other client should take the pinning preloads from Chromium. Pinning is fairly high risk and we have broken large numbers of clients with it in the past. It would be very bad for anyone to start hardcoding those sorts of assumptions without our knowledge. (If you wish to deal with Twitter etc directly, then you are, of course, free to do so.)


When it comes to the HSTS preloading, I am rather bored of managing that list, although I'm not sure whether the answer is to scan the network and gather it automatically, or to concentrate only on high-value targets, like pinning, and let the HSTS headers do the rest.


Google production is fairly bad at sending HSTS headers, I'm afraid, and we do have several special cases. Some of that was historically my fault, which I've now fixed, but it remains the case that not all teams at Google are configuring their services to send HSTS headers. It is something that I and others continue to remind them of.

From memory, the HSTS exceptions for Google are that charts.apis.google.com is deprecated but still somewhat common on the web and doesn't support HSTS, while apis.google.com is, otherwise, HSTS.

We also have a number of domains ("gmail.com", "googlemail.com", etc.) which require SNI to serve the correct certificate and therefore cannot be HSTS if the user has only enabled SSLv3. We used to expose this in the preferences UI for some reason but have removed it now. It may well be that we shouldn't be worrying about supporting that configuration any longer, but it's still there for the moment.
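For anyone unfamiliar with the mechanism: SNI is the TLS extension that lets one IP address serve distinct certificates for gmail.com, googlemail.com, and so on. A minimal client-side sketch (illustrative only, not Google's configuration) — the client names the desired host via server_hostname, and pre-SNI clients such as SSLv3-only ones omit it, leaving the server able to return only a single default certificate:

```python
import socket
import ssl

def fetch_cert(host, port=443):
    """Connect to host and return the certificate it presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        # server_hostname fills in the SNI extension of the ClientHello,
        # telling a multi-site server which certificate to present.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# fetch_cert("gmail.com") would return the certificate for gmail.com
# itself, which works only because SNI named the site being requested.
```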


Cheers

AGL

Gervase Markham

Sep 18, 2013, 5:44:22 AM
to mozilla-de...@lists.mozilla.org
On 17/09/13 15:18, a...@google.com wrote:
> On Tuesday, September 17, 2013 4:58:28 AM UTC-4, Gervase Markham
> wrote:
>> Can we work out what those requirements are by studying the
>> pinning configuration for google.com and its subdomains in Chrome?
>
> There are two different things that I fear are getting conflated
> here:
>
> 1) HSTS (i.e. "HTTPS required") preloading. 2) Public key pinning.

Yes, my fault, sorry.

> Chromium also contains preloaded pinning for Google, Twitter, Tor and
> CryptoCat. This is also manually managed, but is not open to everyone
> as it takes much more time to handle these.

Have you reached out to other high-profile sites that have been attacked
in the past to ask for pinning info, and had them decline? Or are you
aiming to keep this list short for now?

> We also have a number of domains ("gmail.com", "googlemail.com" etc)
> which require SNI to serve the correct certificate

Change of topic: that's really interesting. You are using SNI in
production? What about IE on Windows XP and the other non-SNI-supporting
platforms?

Gerv

Paul van Brouwershaven

Sep 18, 2013, 2:14:35 PM
to Gervase Markham, mozilla-de...@lists.mozilla.org
On Wed, Sep 18, 2013 at 11:44 AM, Gervase Markham <ge...@mozilla.org> wrote:

> On 17/09/13 15:18, a...@google.com wrote:
> > We also have a number of domains ("gmail.com", "googlemail.com" etc)
> > which require SNI to serve the correct certificate
>
> Change of topic: that's really interesting. You are using SNI in
> production? What about IE on Windows XP and the other non-SNI-supporting
> platforms?
>

It's interesting when you combine it with a multi-domain certificate for
non-SNI-supporting platforms, see:
https://www.globalsign.com/cloud/multiple-ssl-certificates-single-ip-address.html

This would serve a specific certificate via SNI to 92% of your visitors
(with the possibility to use OV/EV and the performance advantage of a
smaller certificate), while the remaining 8% get the bigger/slower
multi-domain certificate.
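The server-side selection that setup implies can be sketched as follows (illustrative only; real servers do this inside their TLS stack, typically via an SNI callback, and the hostnames here are placeholders):

```python
# Sketch of the fallback logic described above: SNI clients get a
# per-site certificate, non-SNI clients get the shared multi-domain one.

CERTS = {
    "www.example.com": "per-site OV/EV cert (smaller, faster)",
    "mail.example.com": "per-site OV/EV cert (smaller, faster)",
}
FALLBACK = "multi-domain SAN cert covering every hosted name (bigger, slower)"

def select_cert(sni_hostname):
    """sni_hostname is None when the client sent no SNI (the ~8% case)."""
    if sni_hostname is None:
        return FALLBACK
    return CERTS.get(sni_hostname, FALLBACK)

print(select_cert("www.example.com"))  # the per-site certificate
print(select_cert(None))               # the multi-domain fallback
```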