(Pre-)Intent to Deprecate: <keygen> element and application/x-x509-*-cert MIME handling


Ryan Sleevi

Jul 28, 2015, 3:46:51 PM
to blink-dev

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for the special handling of the application/x-x509-user-cert MIME type [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate-installation MIME types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost as the initial way to do certificate provisioning, and along with it came support in Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) has never supported the <keygen> tag, so its cross-browser applicability is suspect [3]. Microsoft has made it clear, in no uncertain terms, that it has no desire to support <keygen> [4][5].

2) <keygen> is unique in HTML (JavaScript or otherwise) in that, by design, it offers a way to persistently modify the user's operating system, by virtue of inserting keys into a keystore that affects all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both of which use a per-application keystore).

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 as the signing algorithm for the generated SPKAC. This can't easily be changed without breaking compatibility with other UAs.

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.
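For concreteness on point 4: the SPKAC blob that <keygen> submits is, per the HTML spec, a base64-encoded DER SEQUENCE of publicKeyAndChallenge, signatureAlgorithm, and signature. A minimal sketch of peeling apart that outer structure (the helper names are mine, and the toy test blob below is not a real SPKAC, just a well-formed SEQUENCE of three elements):

```python
import base64

def der_tlv(buf: bytes, offset: int = 0):
    """Read one DER tag-length-value triple; return (tag, value, next_offset)."""
    tag = buf[offset]
    length = buf[offset + 1]
    offset += 2
    if length & 0x80:  # long form: low 7 bits give the number of length octets
        n = length & 0x7F
        length = int.from_bytes(buf[offset:offset + n], "big")
        offset += n
    return tag, buf[offset:offset + length], offset + length

def spkac_children(spkac_b64: str):
    """Split the outer SEQUENCE of an SPKAC-shaped blob into its children
    (publicKeyAndChallenge, signatureAlgorithm, signature)."""
    der = base64.b64decode(spkac_b64)
    tag, body, _ = der_tlv(der)
    assert tag == 0x30, "SPKAC must start with a DER SEQUENCE"
    children, off = [], 0
    while off < len(body):
        tag, value, off = der_tlv(body, off)
        children.append((tag, value))
    return children
```

The third child is the signature over the first, and it is that signature whose algorithm is fixed to MD5-with-RSA in practice, which is why the format can't be hardened without breaking existing consumers.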


[3] https://connect.microsoft.com/IE/feedback/details/793734/ie11-feature-request-support-for-keygen

[4] https://lists.w3.org/Archives/Public/public-html/2009Sep/0153.html

[5] https://blog.whatwg.org/this-week-in-html5-episode-35

[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen

[7] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[8] https://developer.mozilla.org/en-US/docs/Archive/Mozilla/JavaScript_crypto/generateCRMFRequest


Compatibility Risk

While there is no doubt that <keygen> remains used in the wild, both the use counters [9] and Google's own crawler indicate that its use is extremely minimal. Given that Mozilla implements a version different from the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates in ways inconsistent with what was specified, a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring that the user already have a matching private key, but how that matching works is not detailed anywhere. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), behaviour that Chrome later implemented but that is not well specified.
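To illustrate the kind of lenient sniffing described above (the function name and exact heuristics here are mine, not any browser's actual code), one might normalize an application/x-x509-*-cert body like this:

```python
import base64
import binascii

def normalize_cert_payload(payload: bytes) -> bytes:
    """Return raw DER for a certificate body that may arrive as DER,
    bare base64-of-DER, or PEM. A sketch only; real browsers differ."""
    if payload.lstrip().startswith(b"-----BEGIN"):  # PEM armour
        b64 = b"".join(
            line for line in payload.splitlines()
            if line and not line.startswith(b"-----")
        )
        return base64.b64decode(b64)
    # DER certificates start with SEQUENCE (0x30). Note the heuristic is
    # ambiguous: 0x30 is also ASCII '0', a legal first base64 character.
    if payload[:1] == b"\x30":
        return payload
    try:  # otherwise assume bare base64 of DER
        return base64.b64decode(payload, validate=True)
    except binascii.Error:
        raise ValueError("unrecognised certificate encoding")
```

The inherent ambiguity in that first-byte check is one illustration of why this behaviour, implemented independently by each browser without a spec, cannot be made interoperable after the fact.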


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc.) represent a far more appropriate path. That is, just as you would not expect a web page to be able to configure a VPN or a user's proxy, you should not expect a web page to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. enrollment integrated into the web server, CA-specific tools such as SSLMate, protocols such as Let's Encrypt's ACME, etc.). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME).


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities for enterprise use cases. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.
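The second alternative above can be sketched schematically (this is a simulation of the flow only - the function names are hypothetical and the "keys" and "certificate" are placeholder data, not real cryptography):

```python
import secrets

def generate_keypair():
    """Stand-in for client-side key generation (e.g. WebCrypto's
    generateKey). Placeholder random bytes, NOT a real key pair."""
    private_key = secrets.token_bytes(32)
    public_key = secrets.token_bytes(32)
    return private_key, public_key

def ca_issue_certificate(public_key: bytes, subject: str) -> dict:
    """Stand-in for the CA's issuance endpoint (hypothetical)."""
    return {"subject": subject, "spki": public_key.hex()}

def build_import_bundle(private_key: bytes, certificate: dict) -> dict:
    """Stand-in for assembling a PKCS#12-style key+certificate bundle
    that would be handed to the platform's native import UI."""
    return {"private_key": private_key.hex(), "certificate": certificate}

priv, pub = generate_keypair()
cert = ca_issue_certificate(pub, "CN=Example User")
bundle = build_import_bundle(priv, cert)
```

The design point is that, until the CA issues the certificate, the pending private key lives in the enrolling origin's own storage rather than sitting orphaned in the system-wide key store, and the key and certificate enter the platform store together as one unit.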


This removal will, intentionally and by design, eliminate support for arbitrarily adding keys to the user's device-wide store.


While a use case exists for provisioning TLS client certificates for authentication, such a flow is inherently hostile to usability and represents an authentication scheme that does not work well for the web. An alternative means of addressing this use case is to employ the work of the FIDO Alliance [12], which has strong positive signals from Microsoft and Google (both in the WG) and is already supported via extensions in Chrome [13], with Mozilla evaluating support via similar means [14]. This offers a more meaningful way to provide strong, non-phishable authentication in a manner that is more privacy-preserving and offers a better user experience, better standards support, and more robust security capabilities.


[11] https://developer.chrome.com/extensions/enterprise_platformKeys

[12] https://fidoalliance.org/

[13] https://fidoalliance.org/google-launches-security-key-worlds-first-deployment-of-fast-identity-online-universal-second-factor-fido-u2f-authentication/

[14] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729


Usage information from UseCounter

https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement


Based on data seen from Google crawler (unfortunately, not public), the number of sites believed to be affected is comically low. 


OWP launch tracking bug

https://code.google.com/p/chromium/issues/detail?id=514767


Entry on the feature dashboard

https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement


Requesting approval to remove too?

No

David Benjamin

Jul 28, 2015, 4:25:23 PM
to Ryan Sleevi, blink-dev
On Tue, Jul 28, 2015 at 3:46 PM 'Ryan Sleevi' via Security-dev <securi...@chromium.org> wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for the special handling of the application/x-x509-user-cert MIME type [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate-installation MIME types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost as the initial way to do certificate provisioning, and along with it came support in Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) has never supported the <keygen> tag, so its cross-browser applicability is suspect [3]. Microsoft has made it clear, in no uncertain terms, that it has no desire to support <keygen> [4][5].

2) <keygen> is unique in HTML (JavaScript or otherwise) in that, by design, it offers a way to persistently modify the user's operating system, by virtue of inserting keys into a keystore that affects all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both of which use a per-application keystore).

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 as the signing algorithm for the generated SPKAC. This can't easily be changed without breaking compatibility with other UAs.

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.


To add to your list, <keygen> requires that the browser process perform an expensive operation (key generation) to compute a <form>'s POST data. This is currently implemented with a synchronous IPC (https://crbug.com/52949). <keygen> is the only place in the platform where we'd need to compute this information asynchronously. (File uploads are done via a different mechanism in Chrome.)
 

While I gleefully support <keygen>'s removal and the deprecation of anything related to client certificates on general principle, use counters are probably not a very good metric here, both because of enterprise underrepresentation in UMA and because people using <keygen> to enroll in certificates likely only use the element once every year or so.

Also, one more data point: <keygen> was broken on Android in Chrome 42, but this was only noticed two months later. (Which could mean that no one uses <keygen>, or that those who do use it very infrequently, or that it sees especially little use on Android. Probably all three.)

This also means that the fallout from the removal will be somewhat protracted as it will take a very long time between removal and users reporting problems. (Or, if we put a deprecation notice, it's likely that notice won't get much coverage without running it for a long time.)
 

Given that Mozilla implements a version different from the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates in ways inconsistent with what was specified, a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring that the user already have a matching private key, but how that matching works is not detailed anywhere. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), behaviour that Chrome later implemented but that is not well specified.


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc.) represent a far more appropriate path. That is, just as you would not expect a web page to be able to configure a VPN or a user's proxy, you should not expect a web page to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. enrollment integrated into the web server, CA-specific tools such as SSLMate, protocols such as Let's Encrypt's ACME, etc.). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME).


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities for enterprise use cases. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.


It may be worth hooking up a default download handler for PKCS#12 files (or whatever format) to the about:settings certificate management UI on Linux so that we needn't reinstate these instructions for client certificate enrollment:
 

Alex Russell

Jul 28, 2015, 4:35:52 PM
to Ryan Sleevi, blink-dev
This argument seems weak. We absolutely allow browsers to do things like register for protocol handlers.

David Benjamin

Jul 28, 2015, 4:47:16 PM
to Alex Russell, Ryan Sleevi, blink-dev

We allow browsers to *become* your native email client by allowing it to register a mailto handler. That's a fairly reasonable operation since registering protocol handlers is a very standard inter-application integration point in basically all platforms. The security story is clear (when you open a mailto: URL you expect it to open in another application), and it's one that we can explain to users fairly well ("do you use mail.google.com as your email client?")

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.
 

Ryan Sleevi

Jul 28, 2015, 4:53:48 PM
to Alex Russell, blink-dev
On Tue, Jul 28, 2015 at 1:35 PM, Alex Russell <sligh...@chromium.org> wrote:

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


This argument seems weak. We absolutely allow browsers to do things like register for protocol handlers.

That's an apples-to-oil-changes sort of comparison. It's one thing to expect your browser to be able to be launched on an action; it's quite another to expect it to modify your network configuration or affect how other (native and web) applications behave and execute.

<keygen> is used for things like configuring devices' Wi-Fi configurations or affecting (native) applications. Surely that sounds bonkers to you - in the same way that having a web application configure a VPN should - and if it doesn't create a deep negative reaction, then that worries me. We don't allow unmediated access to the filesystem, much in the same way we shouldn't allow arbitrary websites to, say, configure Wi-Fi. The closest we get is the (non-standard) Chrome Native Messaging, and even that requires a priori opt-in via a native installer, run in administrative mode, before arbitrary websites can begin modifying how applications behave and interact.

<keygen> can easily be abused to DoS a device in ways that cannot be easily mitigated. At best, simply foisting the decision onto the user is a solution, but one that doesn't offer anything actionable other than "Hope you don't get owned along the way".

Similarly, application/x-x509-*-cert represent powerful ways to brick devices, again _by design_.

If we were to evaluate support for adding <keygen> today, it seems extremely unlikely that it would begin to pass the sniff-test for https://w3ctag.github.io/security-questionnaire/, let alone our own security review. Even as an extension API for a replacement, there were a number of concerns about the capability being exposed and the ability to do harm, as well as the mitigation concerns.

Alex Russell

Jul 28, 2015, 5:28:44 PM
to David Benjamin, Ryan Sleevi, blink-dev
That's a stronger argument and one that resonates with me.

kai.e...@gmail.com

Jul 29, 2015, 6:42:34 AM
to blink-dev, sle...@google.com

Am Dienstag, 28. Juli 2015 21:46:51 UTC+2 schrieb Ryan Sleevi:

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.



Does WebCrypto offer any way to guarantee that the client side private key will never leave the client computer?

Ryan Sleevi

Jul 29, 2015, 11:36:07 AM
to kai.e...@gmail.com, blink-dev


On Jul 29, 2015 3:42 AM, <kai.e...@gmail.com> wrote:
>
> Does WebCrypto offer any way to guarantee that the client side private key will never leave the client computer?
>

extractable=false, but I doubt that is the level of guarantee you're looking for.

It only guarantees that the browser won't export it - naturally, the browser can't make any guarantees about other software on the machine, and it is intentional and by-design that WebCrypto does not expose any smart card access whatsoever (to do so is fundamentally insecure, and long-term privacy hostile)

melvinc...@gmail.com

Jul 30, 2015, 10:42:10 AM
to blink-dev, sle...@google.com

-1 KEYGEN is in use.

This move will be severely detrimental to several grassroots communities, such as the WebID community.

[1] https://www.w3.org/community/webid/participants
 

David Benjamin

Jul 30, 2015, 11:43:16 AM
to Ryan Sleevi, kai.e...@gmail.com, blink-dev
Does our implementation of <keygen> give you smartcard access anyway? In theory one could render <keygen> as a more complex widget that selects what token/whatever to put the key on, but I don't think we do more than the dropdown. I would think it just uses the default software key store everywhere.

David 

Ryan Sleevi

Jul 30, 2015, 11:47:00 AM
to David Benjamin, Kai Engert, blink-dev


On Jul 30, 2015 8:43 AM, "David Benjamin" <davi...@chromium.org> wrote:
>
>
> Does our implementation of <keygen> give you smartcard access anyway?

No

> In theory one could render <keygen> as a more complex widget that selects what token/whatever to put the key on, but I don't think we do more than the dropdown. I would think it just uses the default software key store everywhere.

Correct. 

Ryan Sleevi

Jul 30, 2015, 11:53:56 AM
to Melvin Carvalho, blink-dev


On Jul 30, 2015 7:42 AM, <melvinc...@gmail.com> wrote:
>

>
> -1 KEYGEN is in use.
>
> This move will be severely detrimental several grass roots communities, such as the WebID community. 
>
> [1] https://www.w3.org/community/webid/participants
>  

This comment doesn't really address any of the technical concerns raised. WebID has repeatedly demonstrated a willingness to appropriate ill-suited technology, and has readily acknowledged that no browser implements the desired functionality for WebID to be successful.

WebID is still nascent, and readily admits it won't work with Edge. An alternative would be for WebID to proceed with standards that are actually widely used and have a viable chance at being usable - such as WebCrypto.

But it seems odd to hold a feature that was never fit to purpose nor working as desired hostage for experimental activity in a CG.

henry...@gmail.com

Jul 31, 2015, 4:44:38 AM
to blink-dev, melvinc...@gmail.com, rsl...@chromium.org
As I understand it, WebCrypto (the JS libraries) does not allow the chrome to ask the user to select keys. This is probably due to the fact that JS is too powerful a language to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the only way to allow the user to select a key and use it.

As I understand it, WebCrypto requires the generated private keys to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser, the private key can, from that storage, only be used for that one origin (which means that it can still be misused by any JS that is downloaded to the web site!). That restriction removes many of the uses of public-key cryptography that would allow one to strengthen the security of the web. Furthermore, storing private keys in local storage seems a lot less secure than storing them encrypted in a keychain that is either tied to the browser, like Netscape's, or tied to the OS.

So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

As for whether WebID can work with Edge (Microsoft Edge?) or not, I don't think we have yet looked at it. Why would it not work there?



 

Ryan Sleevi

Jul 31, 2015, 5:33:21 AM
to henry...@gmail.com, blink-dev, Melvin Carvalho


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand WebCrypto ( the JS libraries ) do not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the  only way to allow the user to select a key and use it. 

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.

> As I understand web crypto requires the private keys generated to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser the private key can from that storage only be used for that one origin ( which means that it can still be misused by any JS that is downloaded to a web site! ) That restriction removes many of the uses of public key cryptography that would allow one to strengthen the security of the web. Furthermore storing private keys in local storage seems a lot less secure than storing it encrypted in a keychain that is either tied to the browser like Netscape or tied to the OS. 

<keygen> was never spec'd to behave as you describe. That it does is simply a matter of emulating Netscape's original behaviour, but as noted earlier, even those that emulate the behaviour do so inconsistently.

For example, using <keygen> on iOS in a webview doesn't affect the Safari keychain - as apps can't write into that.

>
> So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

Correct, because it intentionally is not trying to offer a 1:1 emulation of client certificates, because they're a horrible and broken technology for the web at large (e.g. outside niche enterprise use cases). This is a fundamental flaw that WebID has been struggling with, without solution or adoption, for years.

However, alternatives exist - both via WebCrypto (as clearly demonstrated by Mozilla's past efforts through Persona) and via FIDO (as clearly demonstrated by Google, Microsoft, and others) - that offer more meaningful, usable, robust alternatives.

> As for whether WebID can work with Edge ( Microsoft Edge? ) or not I don't think we have yet looked at it. Why would it not work there?

Please read my original message, detailing the varying incompatibilities with <keygen>.

None of these are new to the WebID folks - nor has there been any traction whatsoever on the pet features the WebID CG has deemed essential to WebID (such as TLS logout). The hope is that WebID will, after the considerable time spent as a CG, realize there are alternative technical solutions that offer the same fundamental properties in more usable, more widely supported ways, without relying on the esoteric, incompatible behaviours of browsers (as it does today with, for example, TLS logout).

henry...@gmail.com

Jul 31, 2015, 6:23:14 AM
to blink-dev, henry...@gmail.com, melvinc...@gmail.com, rsl...@chromium.org


On Friday, 31 July 2015 11:33:21 UTC+2, Ryan Sleevi wrote:


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand WebCrypto ( the JS libraries ) do not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the  only way to allow the user to select a key and use it. 

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.


It does if you use it the way the WebID-TLS spec specifies it:

  http://www.w3.org/2005/Incubator/webid/spec/

This allows you to use one certificate to authenticate to all servers. WebCrypto, as you admit, does not do that.

 

> As I understand web crypto requires the private keys generated to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser the private key can from that storage only be used for that one origin ( which means that it can still be misused by any JS that is downloaded to a web site! ) That restriction removes many of the uses of public key cryptography that would allow one to strengthen the security of the web. Furthermore storing private keys in local storage seems a lot less secure than storing it encrypted in a keychain that is either tied to the browser like Netscape or tied to the OS. 

<keygen> was never spec'd to behave as you describe. That it does is simply a matter of emulating Netscape's original behaviour, but as noted earlier, even those that emulate the behaviour do so inconsistently.

For example, using <keygen> on iOS in a webview doesn't affect the Safari keychain - as apps can't write into that.


So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first verify that your favourite technology meets the use case that WebID tries to address; once it does, a case can be made for deprecating other technologies. For the moment your attempt is too hasty.
  

>
> So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

Correct, because it intentionally is not trying to offer a 1:1 emulation of client certificates, because they're a horrible and broken technology for the web at large (e.g. outside niche enterprise use cases). This is a fundamental flaw that WebID has been struggling with, without solution or adoption, for years.

However, alternatives exist - both via WebCrypto (as clearly demonstrated by Mozilla's past efforts through Persona) and via FIDO (as clearly demonstrated by Google, Microsoft, and others) - that offer more meaningful, usable, robust alternatives.


WebID shows that client-side certificates can be used outside of what you call "niche enterprise use cases", for the general public. Mozilla's Persona effort has been wound down and can be considered a failure - presumably partly because it tried to use JS to do crypto and did not want to tie the security to the chrome. Other reasons may be that it did not want to adopt WebID profile information to create a web of trust.

 

> As for whether WebID can work with Edge ( Microsoft Edge? ) or not I don't think we have yet looked at it. Why would it not work there?

Please read my original message, detailing the varying incompatibilities with <keygen>.

None of these are new to the WebID folks - nor has there been any traction whatsoever on the pet features the WebID CG have deemed essential to WebID (such as TLS logout). The hope is that WebID will, after the considerable time spent as a CG, realize there are alternative technical solutions that offer the same fundamental properties in more usable, more widely supported ways, without relying on the esoteric incompatible behaviours of browsers (as it does today, for example, TLS logout).


You did not answer my concerns about JS crypto:
 * requires private keys to be stored in the user's local storage in a non-secure manner
 * is open to being read by any JS from the same origin, which is extremely difficult to control, and as a result creates very difficult to enforce limitations of JS on web servers
 * is therefore open to phishing attacks
 * suffers from all of the JS security issues

These are all related to JS breaking the rule of least power, which is very important in the case of security.


I am building highly complex single page applications with JS and so I certainly recognise there is an important role for JS in the modern web. 

  I have not looked at Fido in detail to see if it is compatible with WebID, as my policy on that group was to work with deployed existing technologies. TLS fit the bill. WebCrypto has been in deployment for a few years, but does not really solve the problems that TLS was able to solve, even if not perfectly. 

 For example logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it just requires some UI innovation. This issue will pop up any other way you try to do things, which is why the Web Crypto folks still cannot do the right thing.

I'll look at FIDO to see if it would allow WebID authentication, which would have been easy to add to Mozilla Persona too btw. But I have seen a lot of large corporations get together to try to build standards ( eg. the Liberty Alliance , Persona, OpenId,... ) and most of those have failed. So I'd suggest a wait and see attitude.

Henry 


Ryan Sleevi

Jul 31, 2015, 7:10:34 AM
to Henry Story, blink-dev, Melvin Carvalho


On Jul 31, 2015 3:23 AM, <henry...@gmail.com> wrote:
>
> It does if you use it the way the WebID-TLS spec specifies it:
>   http://www.w3.org/2005/Incubator/webid/spec/
>

I think we will just have to agree to disagree here. As it relates to UX, as it relates to interactions with users' antivirus, as it relates to other devices and form factors, WebID-TLS is irreparably flawed. WebID-TLS advocates - such as yourself - should find this as no surprise. You (the CG) have lamented about 'bugs' (things working as designed) that hold it back.

As it relates to the browser, WebID-TLS merely appropriates the nearest technology, poorly, and with a number of problems. But finding a way to solve WebID-TLS isn't entirely germane to this discussion, and I've offered copious constructive feedback over the years of these efforts to try to assist you. Unfortunately, I think it's best to simply note that you disagree, but given that viable technical alternatives exist, that disagreement is by no means reason to keep a non-interoperable, problematic, insecure-by-design API around.

> So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first see whether your favorite technology meets the use case that WebID tries to address; then, once it does, a case can be made for deprecating other technologies. For the moment your attempt is too hasty.

I have provided you constructive feedback for over four years, advising you of the inherent flaws of the design. It is neither reasonable nor polite to suggest that I spend another year, or more, offering much of the same feedback, which will simply be ignored not because it is insufficient for your needs, but because you're change averse.

> You did not answer my concerns about JS crypto:
>  * requires private keys to be stored in the user's local storage in a non-secure manner

No different than <keygen>

>  * is open to being read by any JS from the same origin, which is extremely difficult to control, and as a result creates very difficult to enforce limitations of JS on web servers

If you can't control same-origin scripts, you have lost the game. Security game over.

>  * is therefore open to phishing attacks

This isn't how phishing works. The same-origin policy gives you greater protection, not less.

>  * suffers from all of the JS security issues

>  For example logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it just requires some UI innovation. This issue will pop up any other way you try to do things, which is why the Web Crypto folks still cannot do the right thing.
>

I spent several years detailing to you in excruciating detail why this is technically flawed and fundamentally incompatible with how the web, TLS, and servers work. I seriously doubt you will take heed this time, but there is no doubt I've made extensive good faith effort at providing actionable technical solutions.

I hope you can understand my frustration that, despite years of providing technical feedback, WebID continues to insist on a broken model.

I can appreciate deprecating <keygen> will break the academic, experimental concept that is WebID. However, suitable technical alternatives do exist, if the WebID CG were to invest in the technical work, so this really falls on WebID. It's also clear that 'no' (for some value of 'no' that is nearly non-existent) users are actually using WebID, which again is consistent with the four+ years of the CG trying to promote WebID-TLS, so even the impact of WebID needing to readjust the design is exceedingly minimal.

My hope is that you'll take the time to evaluate the many messages I've sent in the past on this, as well as the ample feedback to the non-bugs, as I doubt there is anything more I can offer you in conversation on this thread beyond what has been said. When I wrote this message, I was and am very familiar with the WebID work, on a deep technical level, and I still stand by every bit that I said in the original message arguing for the deprecation in spite of this.

Melvin Carvalho

Jul 31, 2015, 7:52:25 AM
to rsl...@chromium.org, Henry Story, blink-dev
On 31 July 2015 at 13:10, Ryan Sleevi <rsl...@chromium.org> wrote:


On Jul 31, 2015 3:23 AM, <henry...@gmail.com> wrote:
>
> It does if you use it the way the WebID-TLS spec specifies it:
>   http://www.w3.org/2005/Incubator/webid/spec/
>

I think we will just have to agree to disagree here. As it relates to UX, as it relates to interactions with users' antivirus, as it relates to other devices and form factors, WebID-TLS is irreparably flawed. WebID-TLS advocates - such as yourself - should find this as no surprise. You (the CG) have lamented about 'bugs' (things working as designed) that hold it back.

As it relates to the browser, WebID-TLS merely appropriates the nearest technology, poorly, and with a number of problems. But finding a way to solve WebID-TLS isn't entirely germane to this discussion, and I've offered copious constructive feedback over the years of these efforts to try to assist you. Unfortunately, I think it's best to simply note that you disagree, but given that viable technical alternatives exist, that disagreement is by no means reason to keep a non-interoperable, problematic, insecure-by-design API around.

> So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first see whether your favorite technology meets the use case that WebID tries to address; then, once it does, a case can be made for deprecating other technologies. For the moment your attempt is too hasty.

I have provided you constructive feedback for over four years, advising you of the inherent flaws of the design. It is neither reasonable nor polite to suggest that I spend another year, or more, offering much of the same feedback, which will simply be ignored not because it is insufficient for your needs, but because you're change averse.

> You did not answer my concerns about JS crypto:
>  * requires private keys to be stored in the user's local storage in a non-secure manner

No different than <keygen>

>  * is open to being read by any JS from the same origin, which is extremely difficult to control, and as a result creates very difficult to enforce limitations of JS on web servers

If you can't control same-origin scripts, you have lost the game. Security game over.

>  * is therefore open to phishing attacks

This isn't how phishing works. The same-origin policy gives you greater protection, not less.

>  * suffers from all of the JS security issues

>  For example logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it just requires some UI innovation. This issue will pop up any other way you try to do things, which is why the Web Crypto folks still cannot do the right thing.
>

I spent several years detailing to you in excruciating detail why this is technically flawed and fundamentally incompatible with how the web, TLS, and servers work. I seriously doubt you will take heed this time, but there is no doubt I've made extensive good faith effort at providing actionable technical solutions.


I can't find a record of you stating your concerns to the community group, among the thousands of emails in the WebID CG archive?

https://lists.w3.org/Archives/Public/public-webid/

Jeffrey Yasskin

Jul 31, 2015, 11:07:29 AM
to henry...@gmail.com, blink-dev, melvinc...@gmail.com, Ryan Sleevi
On Fri, Jul 31, 2015 at 3:23 AM, <henry...@gmail.com> wrote:


On Friday, 31 July 2015 11:33:21 UTC+2, Ryan Sleevi wrote:


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand it, WebCrypto (the JS libraries) does not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the only way to allow the user to select a key and use it.

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.


It does if you use it the way the WebID-TLS spec specifies it:

This allows you to use one certificate to authenticate to all servers. WebCrypto, as you admit, does not do that.

Doesn't it? If someone runs an identity provider, perhaps with a Service Worker to work offline, relying parties can iframe the identity provider, and the identity provider can store a key in WebCrypto and prove its presence to the relying parties. With fallback request interception (https://github.com/slightlyoff/ServiceWorker/issues/684), the relying parties can also ping the identity provider from their service workers.

Jeffrey

01234567...@gmail.com

Jul 31, 2015, 3:59:37 PM
to blink-dev, sle...@google.com


On Tuesday, July 28, 2015 at 3:46:51 PM UTC-4, Ryan Sleevi wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


Maybe this strategic direction, taking us away from asymmetric crypto in the hands of users and developers, should be reconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate installation mime-types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost, as being the initial way to do certificate provisioning, and along with it brought support into Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]


Microsoft didn't implement SVG for about a decade. For many, that left it as a questionable technology. The solution was for Microsoft to implement it.
 

2) <keygen> is unique in HTML (Javascript or otherwise) in that by design, it offers a way to persistently modify the users' operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both which use a per-application keystore)


If it is unique, then it should be removed. If you replace it with another way of allowing the user to create a key pair and control the use of the key, then fine. But please don't remove keygen until then.
 

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])


A solution, then, would be to extend the standard in future versions and code compatibly to that.
 

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.


Why not? Almost all secure protocols have to have an upgrade path, where you allow back-compatibility for a certain time.
 
If the alternative is the browser vendors or the developers re-creating a new tool, then -- that ain't gonna be back-compatible either, is it?

 

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.


There is a typical browser vendor mindset: this feature allows a web site to do something to the user. 
Think of it: this allows a user to do something very powerful with their browser which gives them greater security in their relationships with web sites. The browser acts as the user's agent.


 

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.


[3] https://connect.microsoft.com/IE/feedback/details/793734/ie11-feature-request-support-for-keygen

[4] https://lists.w3.org/Archives/Public/public-html/2009Sep/0153.html

[5] https://blog.whatwg.org/this-week-in-html5-episode-35

[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen

[7] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[8] https://developer.mozilla.org/en-US/docs/Archive/Mozilla/JavaScript_crypto/generateCRMFRequest


Compatibility Risk

While there is no doubt that <keygen> remains used in the wild, both the use counters [9] and Google's own crawler indicate that its use is extremely minimal. Given that Mozilla implements a version different from the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates that are inconsistent with what was specified, offering a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android, defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring that the user have a matching key a priori, but how that works is not detailed. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), which Chrome later implemented, but this is not well specified.


Well, obviously a move to create more interop through a better standard would be a good idea. Meanwhile keygen seems to work for me for WebID on the 4 browsers I use.
 
 


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc) represent a far more appropriate path. That is, just as you would not expect a web page to be able to configure a VPN or a user's proxy, so too should you not expect a webpage to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.
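The enrollment alternative described above can be sketched with WebCrypto: generate a key pair, export the public half (SPKI) for the CA to certify, and export the private half (PKCS#8) so the key and the issued certificate can be packaged in a platform-appropriate container for OS-level installation. A hedged sketch; the function names are illustrative and the actual CA interaction is elided:

```javascript
// Browser: globalThis.crypto; Node >= 15: the equivalent webcrypto object.
function getSubtle() {
  return (globalThis.crypto ?? require("node:crypto").webcrypto).subtle;
}

async function generateEnrollmentKeyPair() {
  return getSubtle().generateKey(
    {
      name: "RSASSA-PKCS1-v1_5",
      modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), // 65537
      hash: "SHA-256",
    },
    /* extractable: */ true, // export is the point of this flow
    ["sign", "verify"]
  );
}

async function exportForEnrollment(keyPair) {
  const subtle = getSubtle();
  // SPKI: what a CA needs to bind the public key into a certificate.
  const spki = await subtle.exportKey("spki", keyPair.publicKey);
  // PKCS#8: the private key, to be bundled with the issued certificate.
  const pkcs8 = await subtle.exportKey("pkcs8", keyPair.privateKey);
  return { spki: new Uint8Array(spki), pkcs8: new Uint8Array(pkcs8) };
}
```

Note the contrast with a login-only key: here the key is deliberately extractable, because the whole flow ends with the OS, not the page, holding it.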


The creation of a key pair should be as simple for the user as possible. This is a user creating an account; it has to be smooth. It has to happen completely within the browser, no other apps involved: users will not be able to cope. This will lead to enterprises and organizations which want to be able to authenticate users writing special code (like MIT's certaid) which runs in the OS, not in the web, and does who knows what behind the user's back.


 


On some level, this removal will remove support for arbitrarily adding keys to the users' device-wide store, which is an intentional, by-design behavior.


I want to be able to write a web site which gives me as a user (or my family member or friend) a private key so I can communicate with them securely.

While a use case exists for provisioning TLS client certificates for authentication, such a use case is inherently user-hostile for usability,


What? What is the problem? Compared to other things suggested, <keygen> seems simpler and more user-friendly to me. There is a bug that the website doesn't get an event back when the key has been generated. That needs to be fixed, so that the website can continue the dialog and pick up doing whatever the user wanted to do when they found they needed to mint an identity. That is a bug to be fixed.


 

and represents an authentication scheme that does not work well for the web.


Asymmetric auth is fundamentally more useful and powerful than the mass shared passwords and stuff which is the alternative. If it "doesn't work well for the web", is that just that the UI for dealing with certs needs improvement?

 

An alternative means for addressing this use case is to employ the work of the FIDO Alliance [12], which has strong positive signals from Microsoft and Google (both in the WG), is already supported via extensions in Chrome [13], with Mozilla evaluating support via similar means [14]. This offers a more meaningful way to offer strong, non-phishable authentication, in a manner that is more privacy preserving, offers a better user experience, better standards support, and more robust security capabilities.


Well, let's see the result of that work.  Don't throw out keygen until it is up and running and FIDO does what we need smoothly and securely.

In the mean time, please fix bugs in client certs which are just a pain.

- Firefox offers to save your choice of client cert ("Remember this choice?") for a site, but doesn't.
- Safari displays a list of all kinds of deleted and old and expired certs as well as your active ones, which is confusing.

- In general, the client cert selection needs more real estate and a bit more info about each cert (not just my name, which is (duh) often the same for each certificate! Unless I have remembered to make up a different name every time to be able to select between certs later).

So don't abandon <keygen>, fix it.

And sure, bring in a really nice, secure, powerful FIDO-based system which will make <keygen> obsolete and make management of personas and identities by users a dream, has a non-phishable UI, allows the creation of a secure relationship between a user and a web site to happen as easily as any sign-up one could imagine, and allows the users to manage them when they need to. And is compatible in all the browsers and doesn't have these silly interop problems. That would be great, and I'm sure people will be happy to move to it. I'll believe it when I see it.

In the meantime please support keygen and improve client cert handling. 

timbl






 

01234567...@gmail.com

Jul 31, 2015, 5:16:23 PM
to blink-dev, sle...@google.com


On Tuesday, July 28, 2015 at 3:46:51 PM UTC-4, Ryan Sleevi wrote:



Alternative implementation suggestion for web developers

...

Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.


Do you have a pointer to "OS X Enterprise settings"? What is that?
 

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.


Remember, Ryan, that you work for a browser company. Solutions you can dream up involve coding the browser. WebID was written by developers who don't have that luxury. Don't compare it with browser initiatives past or future. It was written using the things which you the browser vendors made available at the time, and keygen was that. If you want to criticize WebID, then you have to say what they should do to get the same effect under the same constraints.

So for example, you suggest doing it all in WebCrypto JS. That would be possible, except that keygen pops the key pair straight into the certificate store, which is accessed using harder-to-phish browser dialogs at login time. Generating the key in webapp JS and handing it to the user to install is horrible as a workflow, and trains users to do random stuff outside the browser in order to log on to things, which opens them up to phishing. It asks the user to trust using a private key which has been generated by some random web page; much worse than using one which has been generated and kept within the browser. Supporting that process on all platforms would be a major pain. (It becomes tempting on the open platforms to just give the users an app to fix the whole thing and do certificate management properly.) Just using WebCrypto, then, is not a better solution.

Storing the certs generated by WebCrypto in common client-side storage has its own problems: phishing, lack of a trusted interface. But it is possible, and people have explored that... indeed, ignoring the whole TLS layer and putting in WebCrypto-to-WebCrypto, app-to-app asymmetric encryption and authentication is a possibility.

You suggest that the things that WebID developers have described as 'bugs' you feel are 'working as intended'.... hmmm.


 

Ryan Sleevi

Jul 31, 2015, 5:36:04 PM
to 01234567...@gmail.com, blink-dev
On Fri, Jul 31, 2015 at 2:16 PM, <01234567...@gmail.com> wrote:
If you want to criticize webid, then you have to say what they should do to get the same effect under the same constraints.

Not when the desired effect is unreasonable. The point is to show comparable solutions exist, with reasonable tradeoffs. There's not an incumbent demand to show there's a 100% functionally equivalent solution, because the functionality itself is problematic and flawed by design.
 
Supporting that process on all platforms would be a major pain.  (It becomes tempting on the open platforms to just give the users an app to fix the whole thing, do certificate management, properly). 

This is exactly what I said in the original message, and which you acknowledge has a very important adjective there - "properly".

Systems management is not the realm of the browser. You should not have functionality that, say, allows arbitrary websites to configure your Wifi. I say this knowing full well that there are a tremendous number of interesting and exciting things you could do if such a functionality existed. Similarly, your browser should not be the vector for VPN management, despite there being exciting and compelling things you could do.

Certificate and key management - which is system wide, affects all other applications and origins - is itself a fundamentally flawed premise for a browser to be involved in, nor is it required for any 'spec-following' <keygen> implementation. As the original message laid out, there's no way such a feature would have a reasonable chance of succeeding. The very premise of WebID rests on the ability to create a cross-origin super-cookie (the client cert), and the bugs filed make it clear there's a desire to reduce or eliminate the user interaction in doing so.
 
You suggest that the things that webid developers have described as 'bugs' you feel is 'working as intended'.... hmmm.

Yes. They are. I've explained at great length to your fellow WebID CG members, during discussions in the WebCrypto WG, on the Chromium tracker, in private emails, and at W3C TPACs and F2Fs, how several of the fundamental requirements of WebID are, at a protocol level, incompatible with how the Web works.

Regardless of your feelings towards WebID, which I fully acknowledge that after several years of effort, I'm unlikely to dissuade any members of the technical folly, let's go back to first principles.

Should an arbitrary website be able to make persistent modifications to your device, affecting all applications, without user interaction (as implemented today in several browsers)? The answer is immediately and obviously no.

What tools exist to mitigate this concern? As browser operators, we have two choices:
1) Only allow the origin to affect that origin.
2) Defer the decision to the user, hoping they're sufficiently aware and informed of the risks and implications.

1) is entirely viable if <keygen> were kept around, but it would entirely eliminate the value proposition of <keygen> from those arguing for it being kept (both on this thread and the similar Mozilla thread - https://groups.google.com/d/msg/mozilla.dev.platform/pAUG2VQ6xfQ/FKX63BwOIwAJ ). So we can quickly rule that out as causing the same concerns.

2) is not a viable pattern. You yourself have commented, on past threads about encouraging TLS adoption, on what you view as the complexity involved in certificate management, and these are challenges being faced by populations with a modicum of technical background (web developers, server operators, and the like). Surely you can see these challenges - and concepts - of key management and hygiene are well beyond the ken of the average user. But let's assume we could draft some perfect language to communicate to the user; what would we need to communicate?

"If you continue, this website may be able prevent your machine from accessing the Internet?"
"If you continue, this website may be able to allow arbitrary parties to intercept all of your secure communications?"

Is that a reasonable permission to grant? No, absolutely not. But that's what <keygen> - and the related application/x-x509-*-cert - rely on. That itself is a fundamental problem, and one which is not at all reasonable to ask.

I do hope you can re-read my original message and see the many, fundamental, by-design issues with <keygen>. Its existence was to compete with a Microsoft-developed ActiveX system management control intended for internal networks only, and whose inception predated even the modern extension systems now used for management of Mozilla Firefox. I don't deny that you've found utility in the fact that something insecure, and broken by design, has been exposed to the Web, but that's not itself an argument for its continuation. Given the risks, it's also not reasonable to suggest that the feature be held hostage by a small minority of developers, with a virtually non-existent group of users, until they can be convinced of technical alternatives. Somewhere, there has to be a split as to what represents a reasonable part of the Web platform - and what represents an artifact of the heady, but terribly insecure, days of the First Browser Wars and the pursuit of vendor lock-in.

henry...@gmail.com

Jul 31, 2015, 6:40:41 PM
to blink-dev, 01234567...@gmail.com, sle...@google.com


On Friday, 31 July 2015 23:36:04 UTC+2, Ryan Sleevi wrote:


On Fri, Jul 31, 2015 at 2:16 PM, <01234567...@gmail.com> wrote:
If you want to criticize webid, then you have to say what they should do to get the same effect under the same constraints.

Not when the desired effect is unreasonable. The point is to show comparable solutions exist, with reasonable tradeoffs. There's not an incumbent demand to show there's a 100% functionally equivalent solution, because the functionality itself is problematic and flawed by design.

Asymmetric key usage is flawed by design? For whom? For groups that wish to have an easier time surveilling or breaking into systems? There are indeed things that can be improved with TLS, just as with any other web technology. Think about the evolution of HTML 1.0 to HTML5, or of HTTP/1.0 to HTTP/2.0. A statement as strong as "flawed by design" would require very serious argumentation to back it up. Can you point us to it, so that we can look it up?
 
 
Supporting that process on all platforms would be a major pain.  (It becomes tempting on the open platforms to just give the users an app to fix the whole thing, do certificate management, properly). 

This is exactly what I said in the original message, and which you acknowledge has a very important adjective there - "properly".

Systems management is not the realm of the browser. You should not have functionality that, say, allows arbitrary websites to configure your Wifi. I say this knowing full well that there are a tremendous number of interesting and exciting things you could do if such a functionality existed. Similarly, your browser should not be the vector for VPN management, despite their being exciting and compelling things you could do.

Certificate and key management - which is system-wide, and affects all other applications and origins - is itself a fundamentally flawed premise for a browser to be involved in, nor is it required for any 'spec-following' <keygen> implementation. As the original message laid out, there's no way such a feature would have a reasonable chance of succeeding. The very premise of WebID rests on the ability to create a cross-origin super-cookie (the client cert), and the bugs filed make it clear there's a desire to reduce or eliminate the user interaction in doing so.

This would be a lot more convincing if the certificates added to the keychains were trust certificates. Here we are speaking of user certificates that allow a user to authenticate to a different web site, not certificates that make decisions about which sites are trusted. Here the user gets to choose whether the certificate gets added to his keychain and when the certificate is used to log in to different web sites. The UI to make this obvious could be much more clearly designed. Aza Raskin made suggestions to that effect a while ago for Firefox: http://www.azarask.in/blog/post/identity-in-the-browser-firefox/ .
On the whole, Chrome has one of the best UIs, but there is a lot more that can be done to improve the situation.
 
 
You suggest that the things WebID developers have described as 'bugs' are, in your view, 'working as intended'... hmmm.

Yes. They are. I've explained at great length to your fellow WebID CG members, during discussions in the WebCrypto WG, on the Chromium tracker, in private emails, and at W3C TPACs and F2Fs, how several of the fundamental requirements of WebID are, at a protocol level, incompatible with how the Web works.

Perhaps there are things you have not understood about the functioning of WebID. And as I mentioned, not being browser vendors, we can only use technologies that are available to us in the browser. Improvements there will allow us to improve the protocol. (I'll write another, more detailed mail to respond to Jeffrey Yasskin as soon as I have studied some of the pointers to the still-being-debated proposals for navigator.connect.)
 

Regardless of your feelings towards WebID - and I fully acknowledge that, after several years of effort, I'm unlikely to dissuade any members from the technical folly - let's go back to first principles.

Should an arbitrary website be able to make a persistent modification to your device that affects all applications, without user interaction? (As implemented today in several browsers.) The answer is immediately and obviously no.

1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers
2) If I remember correctly Firefox allowed the user to accept or deny the addition of the certificate to his browser
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates (if a browser automatically added those, there would be a justified global outcry). These client certificates do not influence the security of the web.
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

You have not provided an argument here based on first principles. If you had, it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc.
Of course, if you don't take those dimensions into account, I can see how you come to the conclusion that your argument is airtight.
 

What tools exist to mitigate this concern? As browser operators, we have two choices:
1) Only allow the origin to affect that origin.
2) Defer the decision to the user, hoping they're sufficiently aware and informed of the risks and implications.

1) is entirely viable if <keygen> were kept around, but it would entirely eliminate the value proposition of <keygen> for those arguing that it be kept (both on this thread and the similar Mozilla thread - https://groups.google.com/d/msg/mozilla.dev.platform/pAUG2VQ6xfQ/FKX63BwOIwAJ ). So we can quickly rule that out.

Yes, a keygen that created a certificate allowing you to log in to one site only would not be very interesting. That would rather defeat the whole point of asymmetric crypto.
 

2) is not a viable pattern. You yourself have commented, on past threads about encouraging TLS adoption, on what you view as the complexity involved in certificate management, and these are challenges being faced by populations with a modicum of technical background (web developers, server operators, and the like). Surely you can see these challenges - and concepts - of key management and hygiene are well beyond the ken of the average user. But let's assume we could draft some perfect language to communicate to the user. What would we need to communicate?

"If you continue, this website may be able prevent your machine from accessing the Internet?"
"If you continue, this website may be able to allow arbitrary parties to intercept all of your secure communications?"

This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But browsers usually have three different classes of certificates, which go into isolated stores: user certificates, user-installed CA certificates, and browser-installed CA certificates.

Installing client certificates is only complicated and unusable if you do not use keygen. Keygen was not widely known until it was included in HTML5, so it is not surprising that it may not have been used widely.
 

Is that a reasonable permission to grant? No, absolutely not. But that's what <keygen> - and the related application/x-x509-*-cert - rely on. That itself is a fundamental problem, and one which is not at all reasonable to ask.

Yes, but it is a straw man, as I don't think any browser would automatically allow CA-based certificates to be added.
 

I do hope you can re-read my original message and see the many fundamental, by-design issues with <keygen>. Its existence was to compete with a Microsoft-developed ActiveX system management control intended for internal networks only, and whose inception predated even the modern extension systems now used for management of Mozilla Firefox. I don't deny that you've found utility in the fact that something insecure, and broken by design, has been exposed to the Web, but that's not itself an argument for its continuation. Given the risks, it's also not reasonable to suggest that the feature be held hostage by a small minority of developers, with a virtually non-existent group of users, until they can be convinced of technical alternatives. Somewhere, there has to be a split between what represents a reasonable part of the Web platform and what represents an artifact of the heady, but terribly insecure, days of the First Browser Wars and the pursuit of vendor lock-in.

From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?
 

Ryan Sleevi

unread,
Jul 31, 2015, 7:00:23 PM7/31/15
to Henry Story, blink-dev, 01234567...@gmail.com
On Fri, Jul 31, 2015 at 3:40 PM, <henry...@gmail.com> wrote:


1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers

I never said it was.
 
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)

While generous, I already demonstrated why this is a fundamentally flawed argument.
 
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates (if a browser automatically added those, there would be a justified global outcry). These client certificates do not influence the security of the web.

This is factually wrong, as responded to below.
 
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

Yes, and we no longer do. So that's also consistent :)
 
You have not provided an argument here based on first principles. If you had, it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc.
Of course, if you don't take those dimensions into account, I can see how you come to the conclusion that your argument is airtight.

While I appreciate your concern about what I am and am not thinking about, you will find that the very message you were replying to was focused very much on these. My past replies to you, in the WebCrypto WG (and WebCrypto CG) and on the bugs you have filed have also provided ample evidence why it's not a UI issue, but a technical one. It's clear I won't be able to convince you of that any further, but I do want to highlight the very real disagreement with your assertions.

Yes, a keygen that created a certificate allowing you to log in to one site only would not be very interesting. That would rather defeat the whole point of asymmetric crypto.

This is a statement that makes no sense. It defeats the point of _your_ goal, but it's perfectly consistent with asymmetric crypto. There is no requirement for multi-party exchanges in public/private keypairs.

This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But browsers usually have three different classes of certificates, which go into isolated stores: user certificates, user-installed CA certificates, and browser-installed CA certificates.

Sorry Henry, but this isn't the case.

Let me make it absolutely clear, although I stated as much in the original message: Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation, and one on which the entire premise of WebID is balanced.

 
Yes, but it is a straw man, as I don't think any browser would automatically allow CA-based certificates to be added.

No, you're ignoring what I wrote in the original message. Through simple use of <keygen> - without any certificates - I can break users' devices. Full stop.

Now you can argue it's a "bug" that needs fixing - and I agree, the fact that a <keygen> tag can do this IS quite unfortunate AND well known (although not by the WebID folks, it would seem). But the question is whether or not there is anything worth saving in <keygen> in the face of bugs like this - and the many other reasons I outlined in the original message - and the answer is increasingly "No".
 
From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?

Thanks for the vote of confidence. This will be my last reply, as I've yet again been suckered into making a good faith attempt to explain to you the issues.

henry...@gmail.com

unread,
Aug 1, 2015, 2:01:10 AM8/1/15
to blink-dev, henry...@gmail.com, 01234567...@gmail.com, sle...@google.com


On Saturday, 1 August 2015 01:00:23 UTC+2, Ryan Sleevi wrote:


On Fri, Jul 31, 2015 at 3:40 PM, <henry...@gmail.com> wrote:


1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers

I never said it was.
 
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)

While generous, I already demonstrated why this is a fundamentally flawed argument.

No you did not.
 
 
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates (if a browser automatically added those, there would be a justified global outcry). These client certificates do not influence the security of the web.

This is factually wrong, as responded to below.

I read the response below, and found no answer to this point. Has it been possible for 10 years to use keygen to add new CA certificates to the browser? Without the user even being notified? Have there been attacks that use keygen? How do they work? You have only made a theoretical point about the attack "modifying the system of the user". But you have not detailed this attack, or pointed to actual uses of this until-now hypothetical attack.
 
 
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

Yes, and we no longer do. So that's also consistent :)

( Sigh ! At least you seem to be consistent. What were the actual attacks that were launched using these passwords? )
 
 
You have not provided an argument here based on first principles. If you had, it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc.
Of course, if you don't take those dimensions into account, I can see how you come to the conclusion that your argument is airtight.

While I appreciate your concern about what I am and am not thinking about, you will find that the very message you were replying to was focused very much on these. My past replies to you, in the WebCrypto WG (and WebCrypto CG) and on the bugs you have filed have also provided ample evidence why it's not a UI issue, but a technical one. It's clear I won't be able to convince you of that any further, but I do want to highlight the very real disagreement with your assertions.

(Note: you provide once again neither arguments nor pointers to actual arguments here. The readers of _this_ list can only go by trusting you. But can they absolutely exclude that your account has been hijacked while the real Ryan Sleevi is on holiday, or in hospital, or worse? That attack is a well-known social-engineering way to infiltrate systems. But what is the attack where the client certificate is misused?)
 

Yes, a keygen that created a certificate allowing you to log in to one site only would not be very interesting. That would rather defeat the whole point of asymmetric crypto.

This is a statement that makes no sense. It defeats the point of _your_ goal, but it's perfectly consistent with asymmetric crypto. There is no requirement for multi-party exchanges in public/private keypairs.

If asymmetric crypto were only used to log in to a site that had access to the secret, then I would not need asymmetric crypto. The point of asymmetric crypto is to allow secure communication with parties you have _never_ communicated with before! The Romans 2000 years ago knew how to do cryptography with symmetric keys, where the two parties knew the secret - a.k.a. a password! It was only in the 1970s that the mathematics was invented that enables one to publicly share a key - so that it can be read even by your enemies - while still allowing you to communicate securely.

So while one can of course use asymmetric crypto the way symmetric crypto was used, I still stand by my point that if you do that you lose one of the main technical advantages of asymmetric crypto. I'll also note that a number of governments have tried, and many even succeeded, to make asymmetric crypto illegal for their citizens for fear of the power of that technology. As I understand it, in the US it is protected by the right to bear arms.


This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But browsers usually have three different classes of certificates, which go into isolated stores: user certificates, user-installed CA certificates, and browser-installed CA certificates.

Sorry Henry, but this isn't the case.

Let me make it absolutely clear, although I stated as much in the original message: Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation, and one on which the entire premise of WebID is balanced.

Ok, so please point to a document that describes the attack vector and shows that this is not a user interface issue.
How can installing a client certificate be a risky and dangerous operation? 
I have not found any answers to this question in this thread here.
 

 
Yes, but it is a straw man, as I don't think any browser would automatically allow CA-based certificates to be added.

No, you're ignoring what I wrote in the original message. Through simple use of <keygen> - without any certificates - I can break users' devices. Full stop.

How?
 

Now you can argue it's a "bug" that needs fixing - and I agree, the fact that a <keygen> tag can do this IS quite unfortunate AND well known (although not by the WebID folks, it would seem). But the question is whether or not there is anything worth saving in <keygen> in the face of bugs like this - and the many other reasons I outlined in the original message - and the answer is increasingly "No".

You have managed to avoid in this whole thread explaining how this could be done or pointing to a document describing this "well known" attack.
 
 
From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?

Thanks for the vote of confidence. This will be my last reply, as I've yet again been suckered into making a good faith attempt to explain to you the issues.

If you refuse to point to the document explaining how client certificates can be used as an attack vector perhaps someone else can? 

henry...@gmail.com

unread,
Aug 1, 2015, 4:27:02 AM8/1/15
to blink-dev, davi...@chromium.org, sle...@google.com, sligh...@chromium.org
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key, with metadata (when it was generated, on which site, what form fields were filled in, etc.), in a browser-specific store
2) when the certificate is later downloaded, using all this metadata to remind the user of when this was done. The further back it was, the more detail obviously needs to be given to remind the user of what he was trying to do.
3) sites that do certificate spamming would then be obvious to the user, who can avoid them like any other spamming site
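As a hedged sketch of the bookkeeping proposed in points 1-3 (a hypothetical model, not any API a browser actually ships; all names here are invented for illustration):

```python
import time

# Hypothetical model of a browser-side pending-key store: each generated
# key is recorded with provenance metadata, and a later certificate can
# only bind to a pending key created for the same origin.
class PendingKeyStore:
    def __init__(self):
        self._pending = {}  # key_id -> provenance metadata

    def on_keygen(self, key_id, origin, form_fields):
        self._pending[key_id] = {
            "origin": origin,
            "created": time.time(),
            "form_fields": form_fields,
        }

    def on_certificate(self, key_id, origin):
        meta = self._pending.get(key_id)
        if meta is None or meta["origin"] != origin:
            return None  # no matching pending key for this origin: reject
        # A real browser UI would display this metadata so the user can
        # recall what they were doing before accepting the certificate.
        return self._pending.pop(key_id)

store = PendingKeyStore()
store.on_keygen("key-1", "https://example.org", {"user": "henry"})
assert store.on_certificate("key-1", "https://evil.test") is None
assert store.on_certificate("key-1", "https://example.org") is not None
```

Note this sketch addresses provenance and origin binding, but not, by itself, the key-spamming concern raised elsewhere in the thread.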

Note that in the WebID-TLS authentication protocol we can get the certificate to be returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents ( see details in the documents http://www.w3.org/2005/Incubator/webid/spec/ ).
 
 
 
It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and sole control of the browser, is much more secure.

David Benjamin

unread,
Aug 2, 2015, 12:25:34 PM8/2/15
to henry...@gmail.com, blink-dev, sle...@google.com, sligh...@chromium.org
On Sat, Aug 1, 2015 at 4:27 AM <henry...@gmail.com> wrote:
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key, with metadata (when it was generated, on which site, what form fields were filled in, etc.), in a browser-specific store
2) when the certificate is later downloaded, using all this metadata to remind the user of when this was done. The further back it was, the more detail obviously needs to be given to remind the user of what he was trying to do.

This does not address the key-spamming issue. Or are you proposing that keygen include a user prompt too? Both this and the prompt below are going to be incomprehensible to most users.
 
3) sites that do certificate spamming would then be obvious to the user, who can avoid them like any other spamming site

This kind of scheme would be rejected today. Installing user certificates modifies state outside an origin and thus cannot just be resolved by the user deciding not to visit the site anymore.

Client certificates are also mired in legacy assumptions by other uses. Deployments expect that certificates from the system-wide store be usable, which constrains changes.

This should be using an origin-scoped storage API. We already have more of those than I can count. I believe WebCrypto intentionally declined to add a new one and (correctly) leans on existing ones. We cannot add random APIs for every niche use case that comes up. Instead, we should add a minimal set of APIs that allow maximal flexibility and power while satisfying other goals. (Easy to develop for, satisfies security policies, etc.) keygen and x-x509-user-cert satisfy none of this. They're pure legacy.

This is also relevant:
 
Note that in the WebID-TLS authentication protocol we can get the certificate to be returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents ( see details in the documents http://www.w3.org/2005/Incubator/webid/spec/ ).

In that case, you have no need for persistent storage and can simply hold the key in memory while waiting for the certificate to be minted.
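A minimal sketch of that flow, under stated assumptions: all names are hypothetical, mint_certificate stands in for the WebID server, and placeholder strings stand in for real key material generated through an origin-scoped crypto API.

```python
# Immediate issuance without persistent storage: the key pair lives only
# in the page's own memory until the certificate arrives, so no shared
# system-wide key store is ever touched.
def generate_keypair():
    # placeholder for an origin-scoped key-generation API
    return {"public": "pub-bytes", "private": "priv-bytes"}

def mint_certificate(public_key):
    # stands in for a server that mints the certificate right away
    return {"subject_key": public_key, "issuer": "webid.example"}

keypair = generate_keypair()                # held in page memory only
cert = mint_certificate(keypair["public"])  # returned immediately
bundle = {"certificate": cert, "private_key": keypair["private"]}
assert bundle["certificate"]["subject_key"] == keypair["public"]
```

The design point is that key and certificate stay together under the origin's own storage policies until the user explicitly imports the bundle.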
 
It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and sole control of the browser, is much more secure.

If you can't trust your origin then your enrollment application is already bust. The web's security model assumes that code running in an origin is running on behalf of that origin, just like native code must avoid arbitrary code exec on a traditional platform. In MIT's enrollment system, for instance, the origin is already trusted to handle the user's password, a more valuable credential than the certificates it mints.

Perhaps WebID's enrollment application doesn't have a password it verifies. But defending an origin from itself is not consistent with the browser's security model. We can build mechanisms to help an origin maintain its integrity (CSP, SRI) or help an origin subdivide itself (<iframe sandbox> or this per-page suborigins proposal I've seen floating around), but trying to defend an origin from *all* of itself doesn't make sense. That boils down to moving entire applications into the browser with one-off single-use-case APIs (like keygen and x-x509-user-cert). That is not how to develop a platform. It would be absurd to add a playChess() API to protect your chess win rate from a compromised https://chess.example.com origin.


David

henry...@gmail.com

unread,
Aug 2, 2015, 8:33:12 PM8/2/15
to blink-dev, henry...@gmail.com, sle...@google.com, sligh...@chromium.org

On Sunday, 2 August 2015 18:25:34 UTC+2, David Benjamin wrote:
On Sat, Aug 1, 2015 at 4:27 AM <henry...@gmail.com> wrote:
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key, with metadata (when it was generated, on which site, what form fields were filled in, etc.), in a browser-specific store
2) when the certificate is later downloaded, using all this metadata to remind the user of when this was done. The further back it was, the more detail obviously needs to be given to remind the user of what he was trying to do.

This does not address the key-spamming issue.

We were promised by Ryan Sleevi

Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation [snip]

and all we have at the moment is a potential spam issue that is not yet clearly specified. I suppose I am to imagine a user going to a site which populates its forms with <keygen> elements, so that each time the user submits a form the browser creates a public/private key pair, sends the public key thus generated along with the other form elements to the server, and then what...? Up to this point of the story the site has just managed to make usage of its pages slow, as generating public/private key pairs is CPU intensive. How is that different from a site where some bad JS just uses up CPU resources, or a very slow server? What is the server going to do with the public key to spam and endanger the user's computer? It could return a certificate to the client immediately or at some later point in time, but how is that going to compromise anything? Please give a detailed scenario.
 
Or are you proposing that keygen include a user prompt too? Both this and the prompt below are going to be incomprehensible to most users.

As I said, I think Firefox does or did have a prompt asking the user whether he wanted to add the certificate to his store. Of course the design of this part should not be left up to cryptographers, as that would most likely make it incomprehensible to most users.

Again we were promised a proof of a fundamental security issue, and all we have is a claim that a few people are unable to think of a good user interface, and from that premise the conclusion that it cannot be done. But from my lack of skill at playing the piano it does not follow that no one can play the piano, and from your lack of skill at UI design it does not follow that other gifted designers cannot succeed. This reminds me that people used to repeat everywhere, as a matter-of-fact truth, that Unix could never succeed because the command line was too difficult to use: OS X and Android have shown just how wrong they were (and that was not because the command line became popular!)

 
3) sites that do certificate spamming would then be obvious to the user, who can avoid them like any other spamming site

This kind of scheme would be rejected today. Installing user certificates modifies state outside an origin and thus cannot just be resolved by the user deciding not to visit the site anymore.

So the idea is, I suppose, again that you go to a site that sends you certificates that, even after a good UI helps you understand what you are about to do, you nevertheless install. Then you go to another site that asks you to log in, and to do so you need to select a certificate. So what is the attack vector here? You suddenly have a large number of certificates from some spam sites you visited pop up? Why not give the user a chance to delete them then? Problem solved.
 
Client certificates are also mired in legacy assumptions by other uses. Deployments expect that certificates from the system-wide store be usable, which constrains changes.

Can you detail these problems, or point to a place where this has been detailed carefully? For the moment this is hand-waving.
 
This should be using an origin-scoped storage API. We already have more of those than I can count. I believe WebCrypto intentionally declined to add a new one and (correctly) leans on existing ones. We cannot add random APIs for every niche use case that comes up. Instead, we should add a minimal set of APIs that allow maximal flexibility and power while satisfying other goals. (Easy to develop for, satisfies security policies, etc.) keygen and x-x509-user-cert satisfy none of this. They're pure legacy.

They are not legacy, since:
1. storing public/private keys in local storage is a security problem (the private key is much more widely visible than it needs to be), whereas it has not yet been shown to be a security problem for plain old certificates
2. X.509 certificates have been usable to authenticate across domains for the past 15 years, whereas at present I have only seen hand-waving attempts using completely alpha JS APIs that are still on the drawing board, and that may or may not turn out to be usable for this.
Yes, the store is more widely accessible, but we are not installing DLLs here, just certificates, which are signed documents.
 
 
Note that in the WebID-TLS authentication protocol we can get the certificate returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents (see details in http://www.w3.org/2005/Incubator/webid/spec/).

In that case, you have no need for persistent storage and can simply hold the key in memory while waiting for the certificate to be minted.
 
It's just a bad mechanism all around. A much, much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate, and then pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.
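A minimal sketch of that division of responsibility, assuming the enrollment page keys its pending private keys by a fingerprint of the public half (all names here are hypothetical; the in-memory dict stands in for real origin-scoped storage, and the returned pair stands in for real PKCS#12 serialization):

```python
import hashlib

# Origin-scoped storage (stand-in for IndexedDB or similar): fingerprint -> private key bytes.
pending_keys: dict[str, bytes] = {}

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def remember_key(public_key: bytes, private_key: bytes) -> None:
    """Hold the private half in the origin's own storage until the CA responds."""
    pending_keys[fingerprint(public_key)] = private_key

def build_bundle(public_key: bytes, certificate: bytes) -> tuple[bytes, bytes]:
    """Once the certificate arrives, pair it with its key and hand both off as one
    unit (in practice this would be serialized as a PKCS#12 bundle before import)."""
    private_key = pending_keys.pop(fingerprint(public_key))  # no orphan left behind
    return private_key, certificate

remember_key(b"pub", b"priv")
bundle = build_bundle(b"pub", b"cert-from-ca")
print(bundle)        # (b'priv', b'cert-from-ca')
print(pending_keys)  # {} -- nothing orphaned in the store
```

Note how this avoids the key-spamming complaint from earlier in the thread: until the certificate is minted, nothing has touched the system-wide store, and the pending key is removed the moment the bundle is built.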

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and sole control of the browser, is much more secure.

If you can't trust your origin then your enrollment application is already bust. The web's security model assumes that code running in an origin is running on behalf of that origin, just like native code must avoid arbitrary code exec on a traditional platform. In MIT's enrollment system, for instance, the origin is already trusted to handle the user's password, a more valuable credential than the certificates it mints.

Here my feeling is that there is a lot more to be done to improve the JS security model. I am not the only one to think so, as projects such as http://cowl.ws/ reveal. It would be a pity to use the weakness of the current JS security model to weaken something that is stronger.

Clearly a system where the private key is guaranteed never to leave the browser's storage gives much stronger security guarantees. This may prove very useful later with a stronger JS security model. It's certainly worth considering.

 
Perhaps WebID's enrollment application doesn't have a password it verifies. But defending an origin from itself is not consistent with the browser's security model. We can build mechanisms to help an origin maintain its integrity (CSP, SRI) or help an origin subdivide itself (<iframe sandbox> or this per-page suborigins proposal I've seen floating around), but trying to defend an origin from *all* of itself doesn't make sense. That boils down to moving entire applications into the browser with one-off single-use-case APIs (like keygen and x-x509-user-cert). That is not how to develop a platform. It would be absurd to add a playChess() API to protect your chess win rate from a compromised https://chess.example.com origin.

Yes, so there are improvements to be made. Currently the browser has a very strong security mechanism for keeping a private key in inaccessible storage. Let's not remove strong security features in favor of weaker ones; rather, let's try to increase the security of the web all around.
 


David

henry...@gmail.com

Aug 2, 2015, 8:53:07 PM
to blink-dev, henry...@gmail.com, melvinc...@gmail.com, rsl...@chromium.org
Thanks for these interesting pointers.

A lot of this, though, is still research projects relying on very new technologies, none of which have yet been tested to see if they allow us to use authentication the way I am currently able to do it with X.509 certificates using WebID.

Reading it earlier today, I wondered how a random site I come across would know what iframe or service worker to communicate with, given that it does not yet know who I am. Currently this is done using the well-known NASCAR pattern: a site one goes to lists all the top centralised providers with which one can log on (see https://indiewebcamp.com/NASCAR_problem). The NASCAR solution to the problem just reinforces centralised players, is bad for the web, and is a UI disaster.

How do you deal with this without the user needing to type in a URL for his account? With WebID the browser presents a set of certificates which the user can then just select from using point and click. Furthermore, that is recognisably part of the chrome.
 

Jeffrey

min...@sharp.fm

Aug 4, 2015, 7:50:11 PM
to Security-dev, blin...@chromium.org, sle...@google.com
On Tuesday, 28 July 2015 21:46:47 UTC+2, Ryan Sleevi wrote:

> Primary eng (and PM) emails
>
> rsl...@chromium.org
>
> Summary
> This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].

Please don't.

If - and only if - compelling replacements for keygen are introduced and become widely and practically available, *then* consider dropping keygen. Until that time, please leave it alone.

For an example of the embarrassment and chaos caused when Mozilla removed the crypto.signText API without checking with anybody, and without offering any replacement, see this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1083118

If keygen has shortcomings, fix them.

Regards,
Graham
--

Ryan Sleevi

Aug 4, 2015, 8:28:33 PM
to min...@sharp.fm, blink-dev
On Tue, Aug 4, 2015 at 4:50 PM, <min...@sharp.fm> wrote:
For an example of the embarrassment and chaos caused when Mozilla removed the crypto.signText API without checking with anybody, and without offering any replacement, see this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1083118

I wouldn't call that embarrassment and chaos - that was Mozilla being good stewards of the platform and removing Browser-specific public APIs that have never been implemented by any other browser, nor would they be. While Mozilla ultimately decided to fund development of an extension to replace it, it did so primarily because it was clear that the software vendors were selling products with a clear intent to "only support Firefox", and were unwilling or unable to engineer solutions that were browser agnostic.

This sort of gets to the core point - whether or not it's a healthy function of a browser to encourage and enable developers to write sites that are browser-specific. The answer is, for most browsers, "Definitely not". That is, openness and interoperability are key.

So then the next question is whether <keygen>, window.crypto.signText, generateCRMFRequest, application/x-x509-*-cert, or any of these other "PKI management" functions have a role in the open web platform. And the answer there is, again, no; they're artifices of an early time when security was an afterthought and a rational, interoperable web platform that can scale to billions of devices - from your phone to your PC to your TV - was not a priority.

I know Mozilla does not view this as a success story - that is, that governments and banks rely on these non-standard features. Our own experiences with the vendors selling these governments/banks software is that they are some of the worst actors for standards compliance, and consistently fail to properly implement what they're supposed to for smart cards, so it comes as no surprise that they would use a variety of non-standard features (and worse, unnecessarily so, since they ended up developing plugins to support other browsers and they could have just done the same in Firefox).

However, none of this is relevant to <keygen>, for which the metrics - both crawler and real-world usage - show there's virtually no usage, and there are alternatives for what Chrome has implemented (I clarify this to ensure that no improper parallels to Firefox are drawn, but even still, alternatives exist for Firefox as well).

Jeffrey Yasskin

Aug 4, 2015, 9:20:15 PM
</