See
http://news.cnet.com/8301-13578_3-57602701-38/nsa-disguised-itself-as-google-to-spy-say-reports/
The way we currently do HSTS preloading, and the way that I've been
hoping we'd do key pinning preloading, has us verify the authenticity
of the data to preload by connecting to the site in question and
verifying that the site actually sends the headers. Further, in order
to minimize the risk of a site getting DoS'd (stuck with outdated
preload info hard-coded into the browser), we've required sites to set
an 18-week max-age on the HSTS header. With an 18-week max-age, there
will be at least two Firefox releases before the HSTS information
expires, so if the website needs to turn HSTS off for whatever
reason, the potential DoS is limited to approximately 6-12 weeks (one
to two Firefox release cycles).
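For concreteness: 18 weeks is 18 * 7 * 24 * 3600 = 10,886,400 seconds,
so a site that qualifies under the current policy would be sending a
header along the lines of the following (includeSubDomains being
optional):

    Strict-Transport-Security: max-age=10886400; includeSubDomains
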
Ultimately, I would like us to scan the entire internet (as much of it
as we can find) to collect this SSL-related metadata for all sites
automatically, and then preload all of the data we find, up to the
point where the list becomes impractically large. My expectation is
that this approach will eventually allow our preloaded
HSTS/pinning/must-staple lists to cover a much larger portion of the
internet than Chrome's preload list, which requires sites to register
with Google and which is, AFAICT, curated manually at Google.
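To make the gathering step concrete, here is a rough sketch of what
the per-site HSTS check might look like. This is illustrative only,
not our actual crawler; the host list, the HEAD-request approach, and
the policy checks are all placeholders:

    # Rough sketch, not the real crawler: connect over TLS, read the
    # HSTS header, and decide whether the site qualifies for preloading.
    import http.client
    import re

    MIN_MAX_AGE = 18 * 7 * 24 * 3600  # the 18-week requirement, in seconds

    def check_hsts(host):
        """Fetch a site's HSTS policy over an authenticated connection."""
        conn = http.client.HTTPSConnection(host, timeout=10)
        try:
            conn.request("HEAD", "/")
            header = conn.getresponse().getheader("Strict-Transport-Security")
        finally:
            conn.close()
        if header is None:
            return None  # no HSTS header; nothing to preload
        match = re.search(r"max-age\s*=\s*(\d+)", header)
        if match is None:
            return None  # malformed header; skip it
        max_age = int(match.group(1))
        return {
            "host": host,
            "max_age": max_age,
            "include_subdomains": "includesubdomains" in header.lower(),
            "preloadable": max_age >= MIN_MAX_AGE,
        }

    # Stand-in for the crawl's host list.
    for host in ["example.com"]:
        print(host, check_hsts(host))
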
Unfortunately, we heard from Google that it is actually difficult for
them to send these headers in HTTPS responses from their servers, due
to some special requirements that I am not too familiar with. I do
know that Google's servers require exceptions to the HSTS
includeSubdomains rules: if google.com sent includeSubDomains, every
subdomain of google.com would have to be reachable over HTTPS, with no
way to carve out the ones that can't be. Providing a way to specify
such exceptions was rejected during the work on defining HSTS, and my
understanding is that this isn't the only roadblock.
So far, we've taken the stance that we should avoid treating Google
specially. Perhaps it is time to admit that perfect is the enemy of
the good here: we should find some way to work with Google, and with
other similarly high-value targets, to get their SSL-related metadata
preloaded into Firefox sooner than our current policy would otherwise
allow.
I still think that the general approach of explicitly and
automatically verifying/gathering the SSL-related metadata we preload
by contacting the website is the right way to go in the long run. We
should find more general solutions to the problems that Google has
encountered. I know other sites have expressed the need to specify
exceptions to includeSubdomains, and that seems like an easy extension
to add, though I don't know whether that change alone would be
sufficient for Google or for others that have found the standard way
of communicating HSTS information to be impractical.
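To illustrate the shape of the extension I mean: something like the
following, where the "exclude" directive is entirely hypothetical
(nothing like it exists in RFC 6797), would let a site opt its
subdomains in while carving out the ones that can't do HTTPS yet:

    Strict-Transport-Security: max-age=10886400; includeSubDomains;
        exclude="legacy.example.com"

Again, "exclude" is invented syntax, only meant to show what such an
exception mechanism might look like on the wire.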
Thoughts?
Cheers,
Brian
--
Mozilla Networking/Crypto/Security (Necko/NSS/PSM)