Intent To Deprecate And Remove: Public Key Pinning


Chris Palmer

Oct 27, 2017, 3:07:54 PM
to blink-dev

Primary eng (and PM) emails


pal...@chromium.org, rsl...@chromium.org, est...@chromium.org, a...@chromium.org


Summary


Deprecate support for public key pinning (PKP) in Chrome, and then remove it entirely.


This will first remove support for HTTP-based PKP (“dynamic pins”), in which the user-agent learns of pin-sets for hosts by HTTP headers. We would like to do this in Chrome 67, which is estimated to be released to Stable on 29 May 2018.
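
For reference, a dynamic pin is delivered in a Public-Key-Pins response header along the following lines (the pin values and report endpoint here are placeholders, and the header is wrapped for readability):

  Public-Key-Pins: max-age=5184000;
      pin-sha256="PRIMARY_SPKI_SHA256_IN_BASE64=";
      pin-sha256="BACKUP_SPKI_SHA256_IN_BASE64=";
      includeSubDomains;
      report-uri="https://example.com/hpkp-report"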


Finally, remove support for built-in PKP (“static pins”) at a point in the future when Chrome requires Certificate Transparency for all publicly-trusted certificates (not just newly-issued publicly-trusted certificates). (We don’t yet know when this will be.)


Motivation


PKP offers a way to defend against certificate misissuance, by providing a Web-exposed mechanism (HPKP) for sites to limit the set of certificate authorities (CAs) that can issue for their domain. However, this exposes as part of the Open Web Platform considerations that are external to it: specifically, the choice and selection of CAs is a product-level security decision made by browsers or by OS vendors, and the choice and use of sub-CAs, cross-signing, and other aspects of the PKI hierarchy are made independently by CAs.


As a consequence, site operators face difficulties selecting a reliable set of keys to pin to, and adoption of PKP has remained low. When site operators’ expectations don’t match the reality of trust anchors on real world client machines, users suffer. Unexpected or spurious pinning errors can result in error fatigue rather than user safety.


Concretely:


  • It is hard to build a pin-set that is guaranteed to work, due to the variance in both user-agent trust stores and CA operations.

  • There is a risk of rendering a site unusable.

  • There is a risk of hostile pinning, should an attacker obtain a misissued certificate. While there are no confirmed or rumored cases of this having happened, the risk is present even for sites that don’t use PKP.


Interoperability And Compatibility Risk


There is no compatibility risk; no web site will stop working as a result of the removal of static or dynamic PKP.


Edge: Currently does not support key pinning.

Firefox: No official signals yet.

Safari: Currently does not support key pinning.

Opera: If Opera wishes to continue to support pinning, they will need to carry a patch that reverts our diff(s).


Alternative implementation suggestion for web developers


To defend against certificate misissuance, web developers should use the Expect-CT header, including its reporting function. Expect-CT is safer than HPKP due to the flexibility it gives site operators to recover from any configuration errors, and due to the built-in support offered by a number of CAs. Site operators can generally deploy Expect-CT on a domain without needing to take any additional steps when obtaining certificates for the domain. Even if the CT log ecosystem substantially changes during the validity period of the certificate, site operators can provide updated SCTs in the form of OCSP responses (if their CA supports it) or via a TLS extension (if they wish for greater control). The combination of these mitigations substantially reduces the risk of DoS (either accidental or hostile) via Expect-CT deployment. By combining Expect-CT with active monitoring for relevant domains, which a growing number of CAs and third-parties now provide, site operators can proactively detect misissuance in a way that HPKP does not achieve, while also reducing the risk of misconfiguration and avoiding the risk of hostile pinning.
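
For illustration, a deployment would typically start in report-only mode and add enforcement later; the report endpoint below is a placeholder:

  Expect-CT: max-age=86400, report-uri="https://example.com/ct-report"
  Expect-CT: max-age=86400, enforce, report-uri="https://example.com/ct-report"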


Usage information from UseCounter


Scott Helme found in August 2016 that very few of the Alexa Top 1 Million sites were using HPKP (375) or the Report-Only (RO) variant (76): https://scotthelme.co.uk/alexa-top-1-million-crawl-aug-2016/. My own scans (using slightly different/additional corpora) earlier in 2016 showed even lower numbers.


The UMA histogram Net.PublicKeyPinSuccess records a high number of successes, but it includes popular domains for which Chrome has static key pins: likely Google sites.


The UMA histogram Net.PublicKeyPinReportSendingFailure2, which is probably a much better indication of dynamic PKP adoption than Net.PublicKeyPinSuccess, shows extremely low numbers of success and failure, indicating that very few sites are using it, and/or that only very low-traffic sites are using it.


If necessary, we can add UMA histograms to find the ratio of HTTP responses with and without HPKP(-RO) headers.


OWP launch tracking bug


The Chromium tracking bug is crbug.com/779166.


Entry on the feature dashboard


I’m not sure if we need this. If we do, I’ll add one.


Requesting approval to remove too?


Yes.

bry...@zadegan.net

Oct 27, 2017, 6:30:38 PM
to blink-dev, pal...@chromium.org
There is no compatibility risk; no web site will stop working as a result of the removal of static or dynamic PKP.

Well, cyph.com (cyph.ws specifically) would stop working since HPKP Suicide underpins its WebSign implementation, so that's at least one. In addition to unearthing interesting attack cases e.g. RansomPKP, using HPKP suicide as a builder technique also underpinned much of our talk at BH/DC last year, so I'd prescribe to the team a rigorous investigation before killing the standard. My understanding is that DigiCert and possibly others expanded support for rapid key rotation in part to enable use cases like this.

In short: there is a non-zero count of sites which would fundamentally break if HPKP were removed.

zeug...@gmail.com

Oct 27, 2017, 7:20:22 PM
to blink-dev, pal...@chromium.org

On Friday, 27 October 2017 at 21:07:54 UTC+2, Chris Palmer wrote:
To defend against certificate misissuance, web developers should use the Expect-CT header, including its reporting function.

Isn't the idea of PKP to also stop "security boxes" from intercepting TLS traffic?

Rich Baldry

Oct 27, 2017, 9:07:07 PM
to blink-dev, pal...@chromium.org, zeug...@gmail.com
That is one of the effects of PKP but it was not stated as a primary goal of the HPKP extension. The RFC focuses on the ability of HPKP to 
   "...reduce the incidence of man-in-the-middle attacks due to compromised Certification Authorities."



zeug...@gmail.com

Oct 27, 2017, 9:42:17 PM
to blink-dev, pal...@chromium.org, zeug...@gmail.com
Ok, thank you for the clarification.

Andrew Meyer

Oct 27, 2017, 11:40:49 PM
to blink-dev, pal...@chromium.org
Related discussion on Hacker News: https://news.ycombinator.com/item?id=15572143

Jochen Eisinger

Oct 28, 2017, 3:23:28 AM
to Chris Palmer, blink-dev
lgtm1

I read about the HPKP suicide approach to implement trust on first use, which is interesting but doesn't strike me as something HPKP was designed for. Instead, this might be something that we could address with signature-based SRI in the future.


mk...@chromium.org

Oct 28, 2017, 5:41:55 AM
to blink-dev, pal...@chromium.org
On Friday, October 27, 2017 at 9:07:54 PM UTC+2, Chris Palmer wrote:

This will first remove support for HTTP-based PKP (“dynamic pins”), in which the user-agent learns of pin-sets for hosts by HTTP headers. We would like to do this in Chrome 67, which is estimated to be released to Stable on 29 May 2018.


I expect y'all will be putting in a deprecation warning in trunk shortly, and if there's no risk of breakage (and I agree with y'all's analysis that no site will stop working), it doesn't seem like you'd need a 3-release gap. Is there a reason you chose M67?

Finally, remove support for built-in PKP (“static pins”) at a point in the future when Chrome requires Certificate Transparency for all publicly-trusted certificates (not just newly-issued publicly-trusted certificates). (We don’t yet know when this will be.)


It looks like the built-in pinsets we have in https://chromium.googlesource.com/chromium/src/+/9bf166893edc3cea7ae7194b0784d85e9ad2cf0f/net/http/transport_security_state_static.json are limited to Google, Tor, Twitter, Facebook, Dropbox, SpiderOak, Yahoo, swehack.org, NCSCCS, and Tumblr. It looks like we support requiring CT on an origin-by-origin basis with `Expect-CT`'s 'enforce' directive. If that's the case, is there still value in keeping the hard-coded list? It seems like we could drop the hard-coded list as soon as these origins independently start sending `Expect-CT: max-age=whatever; enforce`, rather than waiting for ecosystem-wide changes.
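
For context, each entry in that file pairs a hostname with a named pinset; sketching roughly from memory rather than the exact schema, an entry looks something like:

  {
    "entries": [
      { "name": "example.com", "include_subdomains": true, "pins": "examplepins" }
    ],
    "pinsets": [
      { "name": "examplepins", "static_spki_hashes": ["ExamplePrimary", "ExampleBackup"] }
    ]
  }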

Alternative implementation suggestion for web developers


To defend against certificate misissuance, web developers should use the Expect-CT header, including its reporting function. Expect-CT is safer than HPKP due to the flexibility it gives site operators to recover from any configuration errors, and due to the built-in support offered by a number of CAs. Site operators can generally deploy Expect-CT on a domain without needing to take any additional steps when obtaining certificates for the domain. Even if the CT log ecosystem substantially changes during the validity period of the certificate, site operators can provide updated SCTs in the form of OCSP responses (if their CA supports it) or via a TLS extension (if they wish for greater control). The combination of these mitigations substantially reduces the risk of DoS (either accidental or hostile) via Expect-CT deployment. By combining Expect-CT with active monitoring for relevant domains, which a growing number of CAs and third-parties now provide, site operators can proactively detect misissuance in a way that HPKP does not achieve, while also reducing the risk of misconfiguration and avoiding the risk of hostile pinning.


I note that you didn't mention the `enforce` mechanism here. It seems like something we ought to be recommending folks to opt-into, right?

Entry on the feature dashboard


I’m not sure if we need this. If we do, I’ll add one.


Please do. The feature dashboard drives things like release notes, so it's useful even for small features (and this doesn't seem small to me :) ).

-mike 

bardi.h...@gmail.com

Oct 28, 2017, 12:54:34 PM
to blink-dev, pal...@chromium.org
I would like to renew calls for deployment of DANE, or a similar specification to replace some of the security guarantees previously provided by HPKP. Certificate Transparency allows post-mortem identification of an attack but does not stop attacks. The time to intervention is raised from zero to the time for the site operator to be notified by their CT monitoring software of a malicious issuance, plus the time for the site operator to notify the certificate authority (or in the case of a rogue CA, the Chrome team), plus the time for OCSP staples to expire (or CRLSets to be pushed). This is an entirely manual process, and may never occur for domains with inadequate monitoring.

jbash

Oct 28, 2017, 9:50:56 PM
to blink-dev, pal...@chromium.org
Patience, please.

It's unreasonable to measure the success of something like that over a couple of years. It's the sort of thing you'd expect to take 10 or 15 years to catch on, and to have some growing pains along the way. Most Web sites aren't Google. They don't even install software updates, much less adopt new crypto practices in a couple of years.

If people had backed away from IPv6 after two years of non-adoption, or given up on it because it was hard to deploy at the time it was designed, we'd all be in serious trouble now.

There are reasonable knocks against pinning. You can see it as partly a band-aid for the fundamental every-CA-can-sign-any-name brokenness of X.509. It was really strange to choose to do it at the HTTP layer rather than the TLS layer. And among your issues, malicious pins ring true.

... but certificate transparency isn't remotely a substitute.

CT works OK for big public services like Google. It works basically not at all for small, low-traffic, or completely private services; nobody is going to audit the log for www.podunk.com. Even if they did, nobody could be sure whether any given update was actually a problem or merely looked "suspicious" under some possibly broken heuristic. Nor is it obvious what you're supposed to do if you *find* something suspicious, again especially for a small site.

And where pinning is partly a band-aid on the fundamental X.509 brokenness of every-CA-signs-anything, CT is *entirely* such a band-aid.

If you want to make a real difference, I suggest DANE, which eliminates the basic problem... and stick with it for at least 5 years before you even think about reevaluating that support.

... but given that you already have pinning, and it really is a pretty trivial thing, you should also give *it* 5 years or more.

hal...@gmail.com

Oct 29, 2017, 6:00:16 PM
to blink-dev
It is a pity that this hasn't worked. But I am not convinced that CT covers the same set of concerns.

The big problem with pinning is that for most sites, losing the ability to publish the site is a vastly more significant concern than confidentiality. So I am not at all surprised that takeup was slim. Basically it is not possible to use it without an ops team that includes a world-class crypto group or a world-class Dunning-Kruger effect.

Pinning has two major effects:

1) It forces the use of TLS
2) It requires the use of a particular set of trust anchors.

CT and CAA help address the first but not the second. Currently, CT and CAA are two independent systems. Perhaps it is time to consider linking them and closing the loop via CT for DNSSEC.


On the security policy issue, I think it is past time to admit the failure of DANE and look at the blocking problems that have to be solved to get any sort of client side enforcement of server security policy. I see the problems of DANE as being

1) There is no administration infrastructure. Any security policy scheme that depends on administrators updating DNS records to match server configuration manually is doomed to fail.

2) The scheme is limited to TLS and is not integrated with DNS Discovery (RFC6763).

3) The scheme links security policy with trust anchor validation. While this might be feasible if you are working within an entirely novel discovery infrastructure (e.g. UDDI if it had worked), trying to do that in DNS is not.

4) Any scheme that requires additional RR round trips is going to add latency and be unacceptable.

5) There is still far too much infrastructure that blocks unknown DNS RRs to make anything that requires delivery practical.


Despite the length of the list, these are all fixable if there is a will to do so.


Ryan Sleevi

Oct 30, 2017, 9:38:43 AM
to Bryant Zadegan, blink-dev, Chris Palmer
Without wanting to focus too much on the technical details, I think it's worth noting: the HPKP Suicide approach outlined is not intended to be functional as part of the Web Platform.

More specifically:
- WebSign based on AppCache is relying on deprecated technology ( https://www.fxsitecompat.com/en-CA/docs/2016/application-cache-support-will-be-removed/ )
- WebSign based on AppCache works due to manifest update failures not resulting in the disabling of the manifest, which is a security downside of AppCache
- WebSign based on Service Workers only functions for 24 hours (due to the service worker activation lifecycle - https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle )

Put differently, it seems as if this is largely relying on misfeatures of AppCache rather than HPKP itself. HPKP is just used to force network errors even when the network is functional. For the case outlined, one doesn't need HPKP or HPKP Suicide - just to induce network errors. Similarly, if one were to analyze the security properties of HPKP Suicide, they largely rely on some form of self-attestation that the server is well-behaving - and one can achieve that with Service Workers (and no HPKP).


Ryan Sleevi

Oct 30, 2017, 9:42:02 AM
to Mike West, blink-dev, Chris Palmer
On Sat, Oct 28, 2017 at 5:41 AM, <mk...@chromium.org> wrote:
On Friday, October 27, 2017 at 9:07:54 PM UTC+2, Chris Palmer wrote:

This will first remove support for HTTP-based PKP (“dynamic pins”), in which the user-agent learns of pin-sets for hosts by HTTP headers. We would like to do this in Chrome 67, which is estimated to be released to Stable on 29 May 2018.


I expect y'all will be putting in a deprecation warning in trunk shortly, and if there's no risk of breakage (and I agree with y'all's analysis that no site will stop working), it doesn't seem like you'd need a 3-release gap. Is there a reason you chose M67?

Finally, remove support for built-in PKP (“static pins”) at a point in the future when Chrome requires Certificate Transparency for all publicly-trusted certificates (not just newly-issued publicly-trusted certificates). (We don’t yet know when this will be.)


It looks like the built-in pinsets we have in https://chromium.googlesource.com/chromium/src/+/9bf166893edc3cea7ae7194b0784d85e9ad2cf0f/net/http/transport_security_state_static.json are limited to Google, Tor, Twitter, Facebook, Dropbox, SpiderOak, Yahoo, swehack.org, NCSCCS, and Tumblr. It looks like we support requiring CT on an origin-by-origin basis with `Expect-CT`'s 'enforce' directive. If that's the case, is there still value in keeping the hard-coded list? It seems like we could drop the hard-coded list as soon as these origins independently start sending `Expect-CT: max-age=whatever; enforce`, rather than waiting for ecosystem-wide changes.

Unfortunately, it's not a unilateral replacement - that is, we can't independently transition, say, Facebook, to Expect-CT - because that would first require Facebook to be able to ensure that any certificate it ever gets or deploys will be CT compliant. Much like HPKP, this is something that individual sites can work out with their respective CAs - or, unlike HPKP, they can configure their servers to guarantee this (via the TLS extension) - but it's not something that we can just change itself.

So the date outlined here represents the upper-bound: That is, even if none of the pinned sites transition their servers or collaborate with their respective CAs, at the time in which all publicly-trusted certificates are accompanied with CT information, one can easily remove the pin list.

That's not to say we can't move it sooner, just that it's not part of the deprecation plan to aggressively push other sites into going sooner.

Ryan Sleevi

Oct 30, 2017, 9:50:01 AM
to Phillip Hallam-Baker, blink-dev
On Sun, Oct 29, 2017 at 6:00 PM, <hal...@gmail.com> wrote:
Pinning has two major effects:

1) It forces the use of TLS
2) It requires the use of a particular set of trust anchors.

Note: HPKP does not force the use of TLS. HSTS does.
 
CT and CAA help address the first but not the second. Currently, CT and CAA are two independent systems. Perhaps it is time to consider linking them and closing the loop via CT for DNSSEC.

This would be an orthogonal and unrelated improvement to the CA ecosystem.

It is important to note that this intent tries to establish that "requires the use of a particular set of trust anchors" is not a valuable goal as part of the Web Platform. More precisely, it is actively harmful towards the security of users, both individually and as an ecosystem. HPKP in particular exposes the details of the underlying platform (whether independent software stack - in the case of Mozilla - or OS library in the case of Apple and Microsoft) - and in doing so, undermines the value of interoperability and openness.

While the desire is unquestionably to provide a robust platform with extensibility throughout, certain aspects of the platform are intentionally not exposed - for example, there's no Web API to control the process-level sandboxing of a given site, nor is there a way to customize the Same Origin Policy (CORS notwithstanding). The choice of a particular set of trust anchors is similarly 'below' the platform to be able to be effectively, interoperably, and securely exposed - hence the Intent to Deprecate.

Rick Byers

Oct 30, 2017, 10:32:55 AM
to Ryan Sleevi, Phillip Hallam-Baker, blink-dev
Although I'm not an expert in this space, I'm a little uneasy about the apparent inconsistency between saying that the static pinning list must persist for some indefinite time, while the dynamic approach no longer has value.  I'd feel more comfortable if either (like Mike suggests) we removed static key pinning at the exact same time, or we at least had an open policy and clear instructions for what anyone who was sufficiently motivated could do to get their keys into our static list (I did a quick google search and checked the FAQ, but maybe I'm missing it?).  For me this mostly falls under the "we go out of our way to make it clear that Google properties do not receive special treatment in Blink" comment in the blink compat principles.

Also, what's the best guidance we can point people to for understanding CT log anomalies and how to respond to them?  I don't see anything really addressing that question in the CT FAQ.


Phillip Hallam-Baker

Oct 30, 2017, 11:06:42 AM
to rsl...@chromium.org, blink-dev
On Mon, Oct 30, 2017 at 9:49 AM, Ryan Sleevi <rsl...@chromium.org> wrote:
>
>
> On Sun, Oct 29, 2017 at 6:00 PM, <hal...@gmail.com> wrote:
>>
>> Pinning has two major effects:
>>
>> 1) It forces the use of TLS
>> 2) It requires the use of a particular set of trust anchors.
>
>
> Note: HPKP does not force the use of TLS. HSTS does.

So it does. That seems an odd choice: if you are going to accept a
downgrade to plaintext, guarding against a downgrade to a possibly
less-than-satisfactory key seems to miss the point.


>> CT and CAA help address the first but not the second. Currently, CT and
>> CAA are two independent systems. Perhaps it is time to consider linking them
>> and closing the loop via CT for DNSSEC.
>
> This would be an orthogonal and unrelated improvement to the CA ecosystem.

Probably. However, there is a connection in that there is a finite
limit to the number of outstanding proposals extending this space that
the community can track. If the authors are no longer promoting
RFC 7469 as an ongoing proposal, perhaps you could propose to make it
HISTORIC and free up the slot?


> It is important to note that this intent tries to establish that "requires
> the use of a particular set of trust anchors" is not a valuable goal as part
> of the Web Platform. More precisely, it is actively harmful towards the
> security of users, both individually and as an ecosystem. HPKP in particular
> exposes the details of the underlying platform (whether independent software
> stack - in the case of Mozilla - or OS library in the case of Apple and
> Microsoft) - and in doing so, undermines the value of interoperability and
> openness.

I am unable to parse or make sense of that. It is not clear whether
you are referring to an assumption in what I originally posted or
making a separate claim. I think it is wrong either way.

If you are asserting something is 'actively harmful' then there should
be some specific risk that is introduced.


> While the desire is unquestionably to provide a robust platform with
> extensibility throughout, certain aspects of the platform are intentionally
> not exposed - for example, there's no Web API to control the process-level
> sandboxing of a given site, nor is there a way to customize the Same Origin
> Policy (CORS notwithstanding). The choice of a particular set of trust
> anchors is similarly 'below' the platform to be able to be effectively,
> interoperably, and securely exposed - hence the Intent to Deprecate.

I think that the problem is that there is a layering violation in
HPKP. A URI is an identifier in a Uniform (aka Universal) naming
space.

Introducing a protocol mechanism that semi-permanently changes the
interpretation of the naming space for an unpredictable set of clients
is problematic to say the least.

That objection would not apply to a restriction that was carried in
the naming space itself, e.g. as DNS records, DNS labels or some new
form of URI.

--
Website: http://hallambaker.com/

Ryan Sleevi

Oct 30, 2017, 11:16:02 AM
to Rick Byers, Ryan Sleevi, blink-dev
On Mon, Oct 30, 2017 at 10:32 AM, Rick Byers <rby...@chromium.org> wrote:
Although I'm not an expert in this space, I'm a little uneasy about the apparent inconsistency between saying that the static pinning list must persist for some indefinite time, while the dynamic approach no longer has value.

Could you indicate where/how you arrived at that interpretation? We should definitely work to clarify that.

I think the statement here is that dynamic pinning is substantially more risky - both for those who try to use it (a bad fit for the platform) and those who don't want to use it (e.g. hostile pinning). These risks substantially outweigh the reward.

Static pinning is different, in this respect, on multiple dimensions. More importantly, the incremental approach allows for a meaningful reduction in risk to users and the ecosystem, without compatibility concerns.
 
  I'd feel more comfortable if either (like Mike suggests) we removed static key pinning at the exact same time, or we at least had an open policy and clear instructions for what anyone who was sufficiently motivated could do to get their keys into our static list (I did a quick google search and checked the FAQ, but maybe I'm missing it?).  For me this mostly falls under the "we go out of our way to make it clear that Google properties do not receive special treatment in Blink" comment in the blink compat principles.

Historically, we've accepted anyone - it's been handled the same as HSTS submissions. However, given the deprecation plan - and the costs - I don't think we'd want to continue accepting inclusion requests.

With respect to the Blink Compat principles, I think that may be a misapplication, although well intentioned - after all, it was the same argument I put forward when arguing for standardization of HPKP. You can look at the static pin list on a number of dimensions - from a site operator perspective, it doesn't 'guarantee' more security, because it's not a fundamental part of the Web Platform. From a user perspective, it doesn't make one browser that includes such pins more or less secure than another browser that doesn't - the sites and/or UA vendors themselves are responsible for detecting and mitigating the risk. From a Compat/Interop side, this doesn't provide any special advantages to one site over another - so I don't think it conflicts with those principles at all.

I think you'd agree that static pinning for various Google-specific features (e.g. translation, updates, sync) wouldn't be a concern - both from the fact that these aren't part of the Web Platform stack, but also that these are vendor-specific security settings. The major UA vendors (Apple, Microsoft, Mozilla) all provide some level of restrictions on updates and specific features, and this also doesn't undermine interop or compat.

Does that help address?
 
Also, what's the best guidance we can point people to for understanding CT log anomalies and how to respond to them?  I don't see anything really addressing that question in the CT FAQ.

"Best" is going to be inherently subjective and complex - it's somewhat similar to asking what's the best guidance we can give for people running a high-traffic site or for monitoring for brand/trademark abuse. There will be general principles, for sure, but many details will end up being specific to the use case/objective.

The simple answer is site operators should look for certificates for their domains they don't recognize - and a number of tools exist to help with that:

For example, two automated tools are https://www.facebook.com/notes/protect-the-graph/introducing-our-certificate-transparency-monitoring-tool/1811919779048165/ or https://sslmate.com/certspotter/ (and a number of CAs provide their customers with equivalent tools)
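
For a manual spot check, the public log search front-ends work as well. For example, assuming crt.sh's JSON output interface and using a placeholder domain:

  # list certificates logged for example.com and its subdomains
  curl -s 'https://crt.sh/?q=%25.example.com&output=json'

Any certificate in that output from an issuer the site operator doesn't recognize is worth investigating.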

Chris Palmer

Oct 30, 2017, 1:12:29 PM
to rsleevi, Bryant Zadegan, blink-dev
Regarding "cyph.ws will stop working", that's not the case. It will continue to work as well as it currently does in browsers that don't support HPKP, such as Safari, Edge, and IE. (I just tested Safari.)

Chris Palmer

Oct 30, 2017, 1:27:43 PM
to bardi.h...@gmail.com, blink-dev
The strength of PKP ― run-time enforcement with hard-fail — is also exactly its weakness. Some people are OK with the risk, but overall I think the market has spoken: HPKP has seen close to no adoption. People don't want to brick their sites.

As for DANE, it would at best have the same strength/weakness of HPKP, but with the additional problems of DNSSEC. But this thread (or even this list) is not a good place to debate the merits of DANE or DNSSEC.

Rick Byers

Oct 30, 2017, 2:30:56 PM
to Ryan Sleevi, blink-dev
On Mon, Oct 30, 2017 at 11:15 AM, Ryan Sleevi <rsl...@chromium.org> wrote:


On Mon, Oct 30, 2017 at 10:32 AM, Rick Byers <rby...@chromium.org> wrote:
Although I'm not an expert in this space, I'm a little uneasy about the apparent inconsistency between saying that the static pinning list must persist for some indefinite time, while the dynamic approach no longer has value.

Could you indicate where/how you arrived at that interpretation? We should definitely work to clarify that.

I'm sure I was just mis-interpreting / reading too much into it.  But it was these lines that struck me (lacking context in the details you describe below) as inconsistent:

"Unexpected or spurious pinning errors can result in error fatigue rather than user safety." (all the listed concrete drawbacks sound, to me, like they'd apply equally to both types of PKP). Then "remove support for built-in PKP at a point in the future ... we don’t yet know when this will be" (which I presumed to also mean "may never actually happen").

I think the statement here is that dynamic pinning is substantially more risky - both for those who try to use it (a bad fit for the platform) and those who don't want to use it (e.g. hostile pinning). These risks substantially outweigh the reward.
Static pinning is different, in this respect, on multiple dimensions. More importantly, the incremental approach allows for a meaningful reduction in risk to users and the ecosystem, without compatibility concerns.

That's a good argument, thank you.  The cost/benefit tradeoff to static pins makes it worthwhile for some people at this time, but that for dynamic pins doesn't really.  I can believe that, but then see below.
 
  I'd feel more comfortable if either (like Mike suggests) we removed static key pinning at the exact same time, or we at least had an open policy and clear instructions for what anyone who was sufficiently motivated could do to get their keys into our static list (I did a quick google search and checked the FAQ, but maybe I'm missing it?).  For me this mostly falls under the "we go out of our way to make it clear that Google properties do not receive special treatment in Blink" comment in the blink compat principles.

Historically, we've accepted anyone - it's been handled the same as HSTS submissions. However, given the deprecation plan - and the costs - I don't think we'd want to continue accepting inclusion requests.

Keeping some in while not accepting new ones seems inconsistent to me.  Would we also accept changes to the existing ones, or freeze the list entirely?

With respect to the Blink Compat principles, I think that may be a misapplication, although well intentioned - after all, it was the same argument I put forward when arguing for standardization of HPKP. You can look at the static pin list on a number of dimensions - from a site operator perspective, it doesn't 'guarantee' more security, because it's not a fundamental part of the Web Platform.

It's not about guarantees, it's about statistical risk and expected costs.  Let's put this another way with a completely hypothetical and improbable example. Imagine GitHub were to claim that their expected cost-per-user for a certificate misissuance incident were higher than for other auth providers like Facebook and Google due to their inclusion in the static pin list (since a significant subset of their users would be largely immune from attack, and therefore their recovery costs for a massive password phishing operation would be lower).  Would we really tell GitHub that's just too bad?  I think we had this conversation a couple years ago and you satisfied me with "HPKP is the right answer for the long tail" <grin>.

In practice perhaps this will never come up - that static pinning isn't really that important to anyone for it to matter.  But I believe the integrity of our open intent process depends on us being consistent in treating all of the web the same.  Maybe we could agree that if such a request with Chrome usage above some threshold (applied retroactively to existing pinsets) comes in, then we'll commit to either adding it, or removing all our static pins at that time?  Ideally I'd rather we treat sites the same regardless of usage level, but if we're going to have site whitelists then at least inclusion should be controlled by some principled equation (i.e. chrome users impacted / cost, or just the top 12 in terms of usage) instead of being an arbitrary accident of history.

From a user perspective, it doesn't make one browser that includes such pins more or less secure than another browser that doesn't - the sites and/or UA vendors themselves are responsible for detecting and mitigating the risk. From a Compat/Interop side, this doesn't provide any special advantages to one site over another - so I don't think it conflicts with those principles at all.

Yeah, I agree - it's not the compat and interop risk I'm worried about here.  It's about "openness" beyond interop, I just did a poor job of separating that into its own section in the compat principles doc (IP rights is really the only other section like that).

I think you'd agree that static pinning for various Google-specific features (e.g. translation, updates, sync) wouldn't be a concern - both from the fact that these aren't part of the Web Platform stack, but also that these are vendor-specific security settings. The major UA vendors (Apple, Microsoft, Mozilla) all provide some level of restrictions on updates and specific features, and this also doesn't undermine interop or compat.

Yes, agreed - none of this applies to what we do for Google services built into Chrome. 

Does that help address?
 
Also, what's the best guidance we can point people to for understanding CT log anomalies and how to respond to them?  I don't see anything really addressing that question in the CT FAQ.

"Best" is going to be inherently subjective and complex - it's somewhat similar to asking what's the best guidance we can give for people running a high-traffic site or for monitoring for brand/trademark abuse. There will be general principles, for sure, but many details will end up being specific to the use case/objective.

Sorry, I shouldn't have said "best" - just what are the one or two resources we should point people to when they ask (probably linked to from the chromestatus entry that the console warning and deprecations blog post link to)?

The simple answer is site operators should look for certificates for their domains they don't recognize - and a number of tools exist to help with that:

For example, two automated tools are https://www.facebook.com/notes/protect-the-graph/introducing-our-certificate-transparency-monitoring-tool/1811919779048165/ or https://sslmate.com/certspotter/ (and a number of CAs provide their customers with equivalent tools)

Cool.  I see SSLMate even mentions some possible "corrective actions" that can be taken in their docs.  Is there a page somewhere that has, or could be modified to have links like these so we at least have some explicit guidance we can link to?  



Ryan Sleevi

Oct 30, 2017, 2:45:46 PM
to Rick Byers, Ryan Sleevi, blink-dev
On Mon, Oct 30, 2017 at 2:30 PM, Rick Byers <rby...@chromium.org> wrote:
Historically, we've accepted anyone - it's been handled the same as HSTS submissions. However, given the deprecation plan - and the costs - I don't think we'd want to continue accepting inclusion requests.

Keeping some in while not accepting new ones seems inconsistent to me.  Would we also accept changes to the existing ones, or freeze the list entirely?

Indeed, it's an uneasy inconsistency. We've had some recent updates - for example, due to discussions around DigiCert and Symantec, Yahoo recently updated their set of pins. If those updates weren't made, then their static pins may have broken due to changes in the CA ecosystem. Alternatively, simply removing all the pins entirely may have also worked - but this Intent wasn't out yet :)

So I don't think we can straight freeze the list - there has to be some change, otherwise users suffer.

As to whether we'd want other site operators to add themselves to the pin list, as the intent tries to capture - that's trying to offer site operators a guarantee that they can't actually rely on. It may be better aligned with, say, Pepper or Extension private APIs - there's a whitelisted set, but that whitelist is trying to be shrunk and the API eliminated, rather than expanded.


In practice perhaps this will never come up - that static pinning isn't really that important to anyone for it to matter.  But I believe the integrity of our open intent process depends on us bein consistent in treating all of the web the same.  Maybe we could agree that if such a request with Chrome usage above some threshold (applied retroactively to existing pinsets) comes in, then we'll commit to either adding it, or removing all our static pins at that time?  Ideally I'd rather we treat sites the same regardless of usage level, but if we're going to have site whitelists then at least inclusion should be controled by some principled equation (i.e. chrome users impacted / cost, or just the top 12 in terms of usage) instead of being an arbitrary accident of history.

Well, that's where the "no new inclusions" attempts to establish a clear principled equation. I don't think we can escape the arbitrary accident of history, and I think attempting to derive a set of rules either runs the risk of new inclusions (which we know are not beneficial to users, to browser developers, or to site operators), or quickly removing things still in use.

We know that the dynamic usage of HPKP is low - low enough to support removal - but the static usage is, well, an arbitrary accident of history that also needs time to correct :)
 
The simple answer is site operators should look for certificates for their domains they don't recognize - and a number of tools exist to help with that:

For example, two automated tools are https://www.facebook.com/notes/protect-the-graph/introducing-our-certificate-transparency-monitoring-tool/1811919779048165/ or https://sslmate.com/certspotter/ (and a number of CAs provide their customers with equivalent tools)

Cool.  I see SSLMate even mentions some possible "corrective actions" that can be taken in their docs.  Is there a page somewhere that has, or could be modified to have links like these so we at least have some explicit guidance we can link to?  

Where do you see that page-of-links being linked to? The chromestatus entry? And what's the goal for the explicit guidance? To document alternatives to a rarely used (and itself underdocumented) feature? 

ry...@cyph.com

Oct 30, 2017, 3:54:55 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, Chris Palmer, Bryant Zadegan
Thanks Ryan and Chris for responding to the Suicide/WebSign-specific concerns raised by Bryant! I'll fully read through the latest messages and respond privately, as the explanations and follow-up questions will be a bit out of scope here. However, I'll say here that the cyph.ws compatibility issue noted above, while heavily simplified, is essentially correct.

Anyway:

First, a huge +1 to everything said by @jbash. This seems way too early to be judging the uptake of HPKP. And the proposed alternative doesn't address the same use case at all; it's just the regular old TLS trust model with an improved ability to conduct postmortems.

Second, why is mass uptake suddenly a goal that HPKP needs to achieve? In my mind it seems pretty clearly not a feature that makes sense for every single website to use, but rather the small subset of sites with a particular need for confidentiality, which I think is okay. This would be like deprecating WebRTC or Workers simply because they're relatively niche advanced features, when that would've been obvious all the way back during their standardization processes.

All that being said, clearly the accidental self-bricking footgun aspect is a problem. However, this is a shortcoming in the usability of the API, not in the utility of the functionality provided. Why not open a discussion with the community about how this could be improved before jumping straight to "Intent To Deprecate"? Here is my suggestion:

1. Short-term (Chrome 67): Restrict dynamic PKP to an HSTS-Preload-like whitelist, with a big scary warning in the submission form that the use of this feature is highly discouraged unless they have a specific need for it and an experienced InfoSec team to manage it.

2. Medium-to-long-term (ecosystem-wide collaboration): Disregard dynamic PKP headers unless the domain in question i) has CAA enabled, ii) has an EV cert, and iii) has some kind of new indicator in the certificate to indicate that the CA has validated that the site owner is really really sure they want to use HPKP and understands the risks involved (i.e. offload the whitelisting/validation responsibility from individual browser vendors to the broader CA industry).

Alternatively, maybe #2.i and #2.ii could be merged into #1, and maybe the existing whitelisting system is good enough and #2.iii isn't needed.

jba...@gmail.com

Oct 30, 2017, 5:33:58 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, pal...@chromium.org, bry...@zadegan.net, ry...@cyph.com

First, a huge +1 to everything said by @jbash.

Wow, thanks. I'm afraid I'm still going to disagree with some stuff YOU said, though...

In my mind it seems pretty clearly not a feature that makes sense for every single website to use, but rather the small subset of sites with a particular need for confidentiality,

It may or may not be niche in the long run, but a lot of the value I see is actually in private networks, and I don't see it as being about "particular needs for confidentiality". Everything has a need for confidentiality.

1. Short-term (Chrome 67): Restrict dynamic PKP to an HSTS-Preload-like whitelist, with a big scary warning in the submission form that the use of this feature is highly discouraged unless they have a specific need for it and an experienced InfoSec team to manage it.

What would be the point of dynamic pinning if you had to get the browser to whitelist it before it would work? Why would you want to force people to figure out how to supplicate themselves to however many browsers before they could make a server-side configuration decision?

> 2. Medium-to-long-term (ecosystem-wide collaboration): Disregard dynamic PKP headers unless the domain in question i) has CAA enabled,

What does CAA have to do with it?

This sounds like "we don't think you're smart enough to know what to do with your own domain, so we're going to make you jump through irrelevant hoops". And random dependencies like that are always bars to adoption. Something that used to be deployable with a simple local decision is now a big administrative hassle that has to be negotiated with a bunch of people.

If you're going to overcome browsers' traditional reluctance go out and request data from DNS, then, again, please use that juice to do DANE.

> ii) has an EV cert, a

I (generic "I", not me personally) can't get an EV cert for every router or management station or whatever in an enterprise network, and that's one of the major places where you need the extra assurance. Nor am generic-I very likely willing to have my internal CA generate EV flags, and if I am willing I'm going to have to go through a change control board to do it.

Not only that, but one very reasonable pinning use case is trust on first use with self signed certs.

> iii) has some kind of new indicator in the certificate to indicate that the CA has validated that the site owner is really really sure they want to use HPKP and understands the risks involved (i.e. offload the whitelisting/validation responsibility from individual browser vendors to the broader CA industry).

Now I have to change CAs or wait until my CA supports this, plus doing extra administrative work to communicate this to the CA. Although this one is probably the least objectionable idea mentioned so far if you really want to deal with the malicious pin issue.
 
This would be like deprecating WebRTC or Workers simply because they're relatively niche advanced features, when that would've been obvious all the way back during their standardization processes.

Actually, if people could take all that security-damaging complexity out of browsers, I'd be pretty happy. I might even shut up about certs. I can give you a list of stuff to drop. I might let you keep JavaScript, but not without some grumbling. :-)

ry...@cyph.com

Oct 30, 2017, 6:34:18 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, pal...@chromium.org, bry...@zadegan.net, ry...@cyph.com, jba...@gmail.com
It may or may not be niche in the long run, but a lot of the value I see is actually in private networks, and I don't see it as being about "particular needs for confidentiality". Everything has a need for confidentiality.

Just about everything can benefit from some level of confidentiality, but not everything needs strong confidentiality more than they need the site to stay up. e.g. The bricking risk makes sense for something like my bank's website, GitHub, or Ashley Madison; but (to steal tptacek's example from the HN thread) a small ecommerce business using GoDaddy would be justified in not trying to correctly configure/manage HPKP.

The difficulty of setting up HPKP has been massively overblown here IMO, but I think it's fair to say that it shouldn't be implemented by a novice admin for anything important given its current interface and the current state of surrounding tooling.

What would be the point of dynamic pinning if you had to get the browser to whitelist it before it would work? Why would you want to force people to figure out how to supplicate themselves to however many browsers before they could make a server-side configuration decision?

Why would the point change? I as a site operator want to use dynamic pinning on balls.com for whatever reason I have in mind. The only difference is now I have to wait a little bit for it to start working in Chrome until Google approves me (and potentially likewise for other browsers), and I have a few warnings shoved in my face that'll hopefully scare me off if I truly don't know what I'm doing.

> 2. Medium-to-long-term (ecosystem-wide collaboration): ...

Sorry, I was unclear on the full reasoning for that. Aside from pulling the whitelisting responsibility away from browser vendors (so that going through a whitelisting process N times per domain, as you noted, would not be a requirement), this was also intended as a mitigation for hostile pinning attacks.

I listed CAA first because (as noted in my and Bryant's Black Hat / DEF CON talk that brought attention to this risk in the form of "RansomPKP") it blocks low-effort attacks involving taking over any arbitrary box and grabbing a cert to pin from LetsEncrypt; you'd specifically need a cert from the CA they're already using. Further, by attaching an EV requirement, there would be a higher level of confidence that whoever requested the certificate is actually the site owner and not an attacker. These two things do secondarily add some hoops to jump through to reduce the likelihood of someone inexperienced deploying a bad HPKP setup, however.

Not saying my proposal is ideal, but I'd say it's a lot more productive a stepping stone to something that everyone will be more or less okay with than "Intent to Deprecate".

As far as your examples, I think those make a lot of sense to consider. I would rather end up with something than nothing, but hopefully we can collectively come up with something that the Chrome team is okay with and doesn't break any existing legitimate/non-malicious uses of HPKP.

Actually, if people could take all that security-damaging complexity out of browsers, I'd be pretty happy. I might even shut up about certs. I can give you a list of stuff to drop. I might let you keep JavaScript, but not without some grumbling. :-)

lol. Those were just examples off the top of my head. My general point is that relatively low usage isn't a great argument for killing something that wouldn't have been expected to have high usage to begin with (on top of your point that it's still too early to gauge its success in that regard).

jba...@gmail.com

Oct 30, 2017, 10:31:09 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, pal...@chromium.org, bry...@zadegan.net, ry...@cyph.com, jba...@gmail.com


On Monday, October 30, 2017 at 6:34:18 PM UTC-4, Ryan Lester wrote:

 [Moving the most important part to the top...]

Not saying my proposal is ideal, but I'd say it's a lot more productive a stepping stone to something that everyone will be more or less okay with than "Intent to Deprecate".

I understand the concern. It's not nice at all if somebody who can compromise you once can make every client that visits you while compromised shun you for the next couple of months. And I can see where that would be really bad (TM) for a public Web site with a ton of clueless users, in a way that it would not necessarily be for a private server. But I don't think overloading CAA would even work (see below), and requiring EV certs gives me the heebie-jeebies.

The difficulty of setting up HPKP has been massively overblown here IMO, but I think it's fair to say that it shouldn't be implemented by a novice admin for anything important given its current interface and the current state of surrounding tooling.

I have to admit that I'm having an awful lot of culture shock here. I just can't see how it's a browser's job to keep a sysadmin from turning on a feature on the server side, regardless of how bad an idea it may be. It's simply flat-out none of the browser's business to protect sysadmins, many of whom may not even use that browser, from getting behavior they explicitly request.

... and I haven't looked recently, but I suspect that you have to be pretty technical to even get the average server to set a pin at all at the moment. It's not like you can accidentally click the "set a pin and lose the key" button.

Frankly, I think a browser should do exactly zero to prevent a server admin from hosing themselves. I do agree that it should help prevent other people from hosing them, though.
 
What would be the point of dynamic pinning if you had to get the browser to whitelist it before it would work? Why would you want to force people to figure out how to supplicate themselves to however many browsers before they could make a server-side configuration decision?

Why would the point change? I as a site operator want to use dynamic pinning on balls.com for whatever reason I have in mind. The only difference is now I have to wait a little bit for it to start working in Chrome until Google approves me (and potentially likewise for other browsers), and I have a few warnings shoved in my face that'll hopefully scare me off if I truly don't know what I'm doing.

I sort of thought that a big part of the point was that it was an easy, lightweight thing to do.

If I'm setting up a server in a standardized protocol, I don't traditionally have to worry about configuring every possible client that any random user might choose to use. I don't traditionally even have to worry about knowing what clients exist. Whitelisting asks me not only to know, but to explicitly ask for a modification to the distributed code of (potentially) every single client. I can't see how that's sane at all.

Yes, I know that you can't be truly client-blind when running public Web sites (although you could be a lot more so if people would just quit it with the pointless bells and whistles). But running a private server inside your own network can be like that.

There are other issues, too:
  • How is the browser distributor going to authenticate my request? Are they all going to make me jump through different hoops?
  • Would that authentication work for, say, a .onion site, which could very reasonably want to pin?
  • What if I don't want to disclose the existence of the server I'm using pinning on? Not everything is a public service, and even things that will be public in the future may not want to announce it until they are.
  • What if I want to test the stuff using different names?
  • What if I want to learn to set it up, or teach somebody else to set it up? Do hostnames in my learning lab now have to be embedded in Chrome?
  • What if I want to spin something up quickly?
  • Where's the RFC that tells me I have to communicate with the browsers at all?
I listed CAA first because (as noted in my and Bryant's Black Hat / DEF CON talk that brought attention to this risk in the form of "RansomPKP") it blocks low-effort attacks involving taking over any arbitrary box and grabbing a cert to pin from LetsEncrypt; you'd specifically need a cert from the CA they're already using.

Wait, that's different. If I understand what you say correctly, you want the browser not only to check whether CAA records exist, but also to look at the content of the CAA RR(s).

That changes the effective semantics of CAA, which is explicitly only meant to be looked at in the first place by the CAs themselves, not the clients. At the moment, I can publish a CAA record authorizing firstca.com, get a bunch of firstca.com certs that are valid for a year... and the very next day change my CAA to secondca.com, without invalidating any of my firstca.com certs. If I understand what you're proposing, you'd require me to retain the CAA pointing to firstca.com for as long as I wanted to keep using the certs I already have.
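
For concreteness, a CAA authorization is just a DNS record; using the placeholder names above, something like:

  example.com.  3600  IN  CAA  0 issue "firstca.com"
  example.com.  3600  IN  CAA  0 iodef "mailto:security@example.com"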

Also, CAA records use FQDNs, but CA root certs use DNs. It's not clear to me that it's even possible to determine whether a given issued cert matches a given CAA record.

I submit that if you want to tell a client what CA or certs are valid at the time of a connection, there's already a standardized way of doing that: DANE.
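
For comparison, a DANE pin is expressed as a TLSA record; a hypothetical DANE-EE record pinning the server's SPKI by its SHA-256 digest would look something like:

  _443._tcp.www.example.com.  IN  TLSA  3 1 1 <hex-encoded SHA-256 digest of the SPKI>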

But I still would rather not require somebody to set up DANE to use HPKP. Part of the beauty of HPKP is that it's totally local.

Further, by attaching an EV requirement, there would be a higher level of confidence that whoever requested the certificate is actually the site owner and not an attacker.

Higher. :-)

 

Joe Medley

Oct 31, 2017, 11:19:59 AM
to Chris Palmer, blink-dev
Re feature dashboard, my feeling based on first glance is that you do need an entry. The text that caught my eye is in the motivation section: "Web-exposed mechanism".

Joe

Joe Medley | Technical Writer, Chrome DevRel | jme...@google.com | 816-678-7195
If an API's not documented it doesn't exist.


Joe Medley

Oct 31, 2017, 11:21:54 AM
to Chris Palmer, blink-dev
One more thing: DevRel made a deal of this when we rolled it out. It would be courteous to let developers know we're turning it off.


Joe Medley | Technical Writer, Chrome DevRel | jme...@google.com | 816-678-7195
If an API's not documented it doesn't exist.

Ryan Lester

Oct 31, 2017, 7:28:38 PM
to blink-dev, rby...@chromium.org, rsl...@chromium.org, pal...@chromium.org, bry...@zadegan.net, ry...@cyph.com, jba...@gmail.com
re: jbash, all great points (I hadn't considered internal networking at all), and if it were up to me there would be no or very conservative changes. I don't think it warrants a change in the standard itself at this point if people have deployed it incorrectly in the past, but rather the documentation and tooling should be improved; for example, maybe certbot could handle automatic HPKP configuration with optional cloud backups of keys and all major docs like MDN could be updated with examples/guides based on the new certbot command along with more prominent/scary warnings about the dangers of HPKP.

Based on your feedback, here's a modification to my earlier proposal:

1. Short-term (Chrome 67): Any time dynamic PKP is used, print a console warning that additional requirements are planned to be attached to the use of dynamic PKP with a relevant link.

2. Medium-to-long-term (ecosystem-wide collaboration): Disregard dynamic PKP headers unless the domain in question has some kind of new indicator in the certificate to show that the CA has validated that the site owner is really really sure they want to use HPKP and understands the risks involved (i.e. offload the whitelisting/validation responsibility from individual browser vendors to the broader CA industry).

Note that what was originally #2.iii is now #2 in its entirety. This should also work for self-signed certs, reducing the hindrance for internal networking use cases; it also pretty much encompasses any benefit requiring EV would've provided. There are no changes here that impact static PKP, as my impression is that no one has any particular problem with static PKP provided that HPKP in general continues to exist.

Maybe this is too conservative to change the team's mind, and maybe they would prefer to attach different restrictions to public and private hostnames (with more relaxed requirements for private ones), but I think this makes a lot of sense as-is. Any thoughts, Chris and Ryan?

---

Replies to specific comments:

Would that authentication work for, say, a .onion site, which could very reasonably want to pin?

Incidentally, I'm using HPKP on cyphdbyhiddenbhs.onion, so this would be a concern of mine to ensure is handled correctly as well.

Wait, that's different. If I understand what you say correctly, you want the browser not only to check whether CAA records exist, but also to look at the content of the CAA RR(s).

Nope, I hadn't suggested that (the exact phrasing I used was "Disregard dynamic PKP headers unless the domain in question i) has CAA enabled"). It's a little weak because a sysadmin could always enable CAA after the fact of getting a cert (and not even necessarily use an RR that's compatible with their existing cert), but it would help drive adoption of CAA, which is a good long-term hostile pinning mitigation for the reason you quoted. I get that tying that to HPKP is also kind of messy for the reasons you mentioned, particularly at this point now that HPKP is already out there in production use, but might be worth considering as a requirement for public hostnames.

Chris Palmer

Oct 31, 2017, 8:00:36 PM
to Joe Medley, blink-dev
Yes, good idea. We'll write a Developers blog post, when the time comes.

Chris Palmer

Oct 31, 2017, 8:27:28 PM
to blink-dev
In the short term, we'll certainly add a warning on the console. But, about the deprecation. :)

I don't see how (2) really addresses the concerns we stated in the original post. As stated there, there are simply too many uncoupled or loosely-coupled business and policy interests to result in a reliable API guarantee of the kind that we need for the Open Web Platform. For example, even a CA might have a hard time ensuring that a given pin set will work for all clients, who build certificate paths in their own ways based on their own sets of trust anchors and policies. And those things change over time, sometimes surprisingly.

Similarly, client platforms can't always know for sure what decisions CAs are going to make as regards cross-signing, issuer cert lifetimes, and so on. Coordinating more closely is the goal of the CA/Browser Forum, but to date, the status quo is the best it has ever been (IMHO). But it's not sufficient to help establish a reliable API.

Also, "set a header to indicate your intent oh actually no you need a header AND an X.509 extension in your cert") is not a super clear 'calling convention' (if you will) f