(Pre-)Intent to Deprecate: <keygen> element and application/x-x509-*-cert MIME handling


Ryan Sleevi

28 Jul 2015 15:46:51
to blink-dev

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for the special handling of the application/x-x509-user-cert MIME type [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally a Firefox exclusive, it was adopted by several mobile platforms (notably Nokia and BlackBerry), along with support for the certificate installation MIME types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost, as it was the initial way to do certificate provisioning there, and along with it brought support into Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]

2) <keygen> is unique in HTML (JavaScript or otherwise) in that, by design, it offers a way to persistently modify the user's operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both of which use a per-application keystore)

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 as the signing algorithm in the SPKAC it generates. This can't easily be changed without breaking compatibility with existing UAs.

5) <keygen> just generates keys, and relies on application/x-x509-*-cert handling to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet another way for a website to make persistent modifications to the user's system (a sketch of the legacy flow follows this list).

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8] to compete with the CertEnroll/XEnroll flexibility, but recently removed support for it because it was Firefox-only. This highlights that even at the time of introduction, <keygen> was inadequate for its purpose.
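For readers unfamiliar with the element, here is a minimal sketch of the legacy flow being discussed; the URLs and field names are illustrative only, not taken from any real CA:

  <!-- Step 1: the enrollment page asks the browser to generate a key pair.
       On submit, the browser creates the key, stores the private half in the
       platform/profile key store, and POSTs an MD5-signed SPKAC blob. -->
  <form action="/enroll" method="post">
    <label>Key strength:
      <keygen name="spkac" challenge="server-issued-nonce" keytype="rsa">
    </label>
    <input type="submit" value="Request certificate">
  </form>

  <!-- Step 2, possibly days later: the CA serves the issued certificate with
       Content-Type: application/x-x509-user-cert, and the browser pairs it
       with the private key generated in step 1. -->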


[3] https://connect.microsoft.com/IE/feedback/details/793734/ie11-feature-request-support-for-keygen

[4] https://lists.w3.org/Archives/Public/public-html/2009Sep/0153.html

[5] https://blog.whatwg.org/this-week-in-html5-episode-35

[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen

[7] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[8] https://developer.mozilla.org/en-US/docs/Archive/Mozilla/JavaScript_crypto/generateCRMFRequest


Compatibility Risk

While there is no doubt that <keygen> remains used in the wild, both the use counters [9] and Google's own crawler indicate that its use is extremely minimal. Given that Mozilla implements a version different from the HTML spec, and given that Microsoft has made it clear it has no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap in interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will also reorder certificates in ways inconsistent with what was specified, a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring that the user already have a matching key a priori, but how that is determined is not detailed anywhere. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), behaviour which Chrome later implemented but which is not well specified.


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc.) represent a far more appropriate path. That is, just as you would not expect a web page to be able to configure a VPN or a user's proxy, so too should you not expect a web page to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integration into the web server, CA-specific tools such as SSLMate, or protocols such as Let's Encrypt). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities for enterprise use cases. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.
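As a rough illustration of the second alternative, here is a sketch of what an enrollment page could do with WebCrypto; the /submit-csr endpoint and the final packaging step are assumptions for illustration, not an existing API:

  <script>
  // Generate a signing key pair with WebCrypto (extractable, so the private
  // key can later be packaged together with the issued certificate for import).
  async function enroll() {
    const keyPair = await crypto.subtle.generateKey(
      { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
        publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
      /* extractable = */ true,
      ["sign", "verify"]);

    // Send the public key to the CA; "/submit-csr" is a hypothetical endpoint.
    const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
    const response = await fetch("/submit-csr", { method: "POST", body: spki });

    // When the CA returns the certificate, export the private key and hand
    // both to whatever platform-appropriate packaging step comes next.
    const pkcs8 = await crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
    return { certificate: await response.arrayBuffer(), privateKey: pkcs8 };
  }
  </script>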


On some level, this removal will remove support for arbitrarily adding keys to the user's device-wide store, which is an intentional, by-design behaviour.


While a use case exists for provisioning TLS client certificates for authentication, such a use case is inherently user-hostile in terms of usability, and represents an authentication scheme that does not work well for the web. An alternative means of addressing this use case is to employ the work of the FIDO Alliance [12], which has strong positive signals from Microsoft and Google (both in the WG), is already supported via extensions in Chrome [13], and is being evaluated by Mozilla via similar means [14]. This offers a more meaningful way to provide strong, non-phishable authentication, in a manner that is more privacy-preserving and offers a better user experience, better standards support, and more robust security capabilities.


[11] https://developer.chrome.com/extensions/enterprise_platformKeys

[12] https://fidoalliance.org/

[13] https://fidoalliance.org/google-launches-security-key-worlds-first-deployment-of-fast-identity-online-universal-second-factor-fido-u2f-authentication/

[14] https://bugzilla.mozilla.org/show_bug.cgi?id=1065729


Usage information from UseCounter

https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement


Based on data seen from Google crawler (unfortunately, not public), the number of sites believed to be affected is comically low. 


OWP launch tracking bug

https://code.google.com/p/chromium/issues/detail?id=514767


Entry on the feature dashboard

https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement


Requesting approval to remove too?

No

David Benjamin

28 Jul 2015 16:25:23
to Ryan Sleevi, blink-dev
On Tue, Jul 28, 2015 at 3:46 PM 'Ryan Sleevi' via Security-dev <securi...@chromium.org> wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate installation mime-types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost, as being the initial way to do certificate provisioning, and along with it brought support into Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]

2) <keygen> is unique in HTML (Javascript or otherwise) in that by design, it offers a way to persistently modify the users' operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both which use a per-application keystore)

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.


To add to your list, <keygen> requires that the browser process perform an expensive operation (key generation) to compute a <form>'s POST data. This is currently implemented with a synchronous IPC (https://crbug.com/52949). <keygen> is the only place in the platform where we'd need to compute this information asynchronously. (File uploads are done via a different mechanism in Chrome.)
 

While I gleefully support <keygen>'s removal and the deprecation of anything related to client certificates on general principle, use counters are probably not a very good metric here, both because of enterprise underrepresentation in UMA and because people using <keygen> to enroll in certificates likely only use the element once every year or so.

Also, one more data point: <keygen> was broken on Android in Chrome 42, but this was only noticed two months later. (Which could mean that no one uses <keygen>, or that those who do use it very infrequently, or that it sees especially little use on Android. Probably all three.)

This also means that the fallout from the removal will be somewhat protracted as it will take a very long time between removal and users reporting problems. (Or, if we put a deprecation notice, it's likely that notice won't get much coverage without running it for a long time.)
 

Given that Mozilla implements a version different than the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates that are inconsistent with what was specified, offering a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android, defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring the user having a matching key a-priori, except that's not detailed as to how it works. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), which Chrome later implemented, but is not well specified.


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc) represent a far more appropriate path. That is, you would not expect a web page to be able to configure a VPN or a users proxy, so too should you not expect a webpage to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.


It may be worth hooking up a default download handler for PKCS#12 files (or whatever format) to the about:settings certificate management UI on Linux so that we needn't reinstate these instructions for client certificate enrollment:
 

Alex Russell

28 Jul 2015 16:35:52
to Ryan Sleevi, blink-dev
This argument seems weak. We absolutely allow browsers to do things like register for protocol handlers.

David Benjamin

28 Jul 2015 16:47:16
to Alex Russell, Ryan Sleevi, blink-dev

We allow browsers to *become* your native email client by allowing it to register a mailto handler. That's a fairly reasonable operation since registering protocol handlers is a very standard inter-application integration point in basically all platforms. The security story is clear (when you open a mailto: URL you expect it to open in another application), and it's one that we can explain to users fairly well ("do you use mail.google.com as your email client?")

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.
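A sketch of that flow, under stated assumptions: buildPkcs12() stands in for whatever PKCS#12 packaging library the enrollment application would ship; there is no built-in web API for that step.

  <script>
  // Assumes the private key and certificate were produced as in the WebCrypto
  // sketch earlier in the thread; buildPkcs12() is a hypothetical helper
  // supplied by a library bundled with the enrollment app.
  function downloadBundle(privateKeyPkcs8, certificateDer, password) {
    const p12Bytes = buildPkcs12(privateKeyPkcs8, certificateDer, password);

    // Hand the bundle to the OS/browser import flow as a single unit.
    const blob = new Blob([p12Bytes], { type: "application/x-pkcs12" });
    const link = document.createElement("a");
    link.href = URL.createObjectURL(blob);
    link.download = "client-identity.p12";
    link.click();
  }
  </script>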
 

Ryan Sleevi

28 Jul 2015 16:53:48
to Alex Russell, blink-dev
On Tue, Jul 28, 2015 at 1:35 PM, Alex Russell <sligh...@chromium.org> wrote:

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


This argument seems weak. We absolutely allow browsers to do things like register for protocol handlers.

That's an apples-to-oil-changes sort of comparison. It's one thing to expect your browser to be able to be launched on an action; it's quite another to expect it to modify your network configuration or affect how other (native and web) applications behave and execute.

<keygen> is used for things like configuring a device's Wi-Fi configuration or affecting (native) applications. Surely that sounds bonkers to you - in the same way that having an application configure a VPN should - and if it doesn't create a deep negative reaction, then that worries me. We don't allow unmediated access to the filesystem, much in the same way we shouldn't allow arbitrary websites to, say, configure Wi-Fi. The closest we get is the (non-standard) Chrome Native Messaging, and even that requires a priori opt-in via a native install, in administrative mode, before arbitrary websites can begin modifying how applications behave and interact.

<keygen> can easily be abused to DoS a device in ways that cannot be easily mitigated. At best, simply foisting the decision onto the user is a solution, but one that doesn't offer anything actionable other than "Hope you don't get owned along the way".

Similarly, application/x-x509-*-cert represent powerful ways to brick devices, again _by design_.

If we were to evaluate support for adding <keygen> today, it seems extremely unlikely that it would begin to pass the sniff-test for https://w3ctag.github.io/security-questionnaire/, let alone our own security review. Even as an extension API for a replacement, there were a number of concerns about the capability being exposed and the ability to do harm, as well as the mitigation concerns.

Alex Russell

28 Jul 2015 17:28:44
to David Benjamin, Ryan Sleevi, blink-dev
That's a stronger argument and one that resonates with me.

kai.e...@gmail.com

29 Jul 2015 06:42:34
to blink-dev, sle...@google.com

Am Dienstag, 28. Juli 2015 21:46:51 UTC+2 schrieb Ryan Sleevi:

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.



Does WebCrypto offer any way to guarantee that the client side private key will never leave the client computer?

Ryan Sleevi

29 Jul 2015 11:36:07
to kai.e...@gmail.com, blink-dev


On Jul 29, 2015 3:42 AM, <kai.e...@gmail.com> wrote:
>
> Does WebCrypto offer any way to guarantee that the client side private key will never leave the client computer?
>

extractable=false, but I doubt that is the level you're trying to get.

It only guarantees that the browser won't export it - naturally, the browser can't make any guarantees about other software on the machine, and it is intentional and by-design that WebCrypto does not expose any smart card access whatsoever (to do so is fundamentally insecure, and long-term privacy hostile)
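A small sketch of what extractable=false buys you (and what it doesn't - nothing here constrains other software on the machine):

  <script>
  // Generate a non-extractable signing key: the browser will refuse to export
  // the private material, though that says nothing about the rest of the system.
  crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" },
      /* extractable = */ false,
      ["sign", "verify"])
    .then(function(keyPair) {
      // Signing still works within this origin...
      return crypto.subtle.sign({ name: "ECDSA", hash: "SHA-256" },
                                keyPair.privateKey, new Uint8Array([1, 2, 3]))
        // ...but any attempt to export the private key is rejected.
        .then(function() {
          return crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
        });
    })
    .catch(function(err) {
      console.log("export refused for non-extractable key:", err);
    });
  </script>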

melvinc...@gmail.com

30 Jul 2015 10:42:10
to blink-dev, sle...@google.com

-1 KEYGEN is in use.

This move will be severely detrimental to several grass-roots communities, such as the WebID community.

[1] https://www.w3.org/community/webid/participants
 

David Benjamin

30 Jul 2015 11:43:16
to Ryan Sleevi, kai.e...@gmail.com, blink-dev
Does our implementation of <keygen> give you smartcard access anyway? In theory one could render <keygen> as a more complex widget that selects what token/whatever to put the key on, but I don't think we do more than the dropdown. I would think it just uses the default software key store everywhere.

David 

Ryan Sleevi

30 Jul 2015 11:47:00
to David Benjamin, Kai Engert, blink-dev


On Jul 30, 2015 8:43 AM, "David Benjamin" <davi...@chromium.org> wrote:
>
>
> Does our implementation of <keygen> give you smartcard access anyway?

No

> In theory one could render <keygen> as a more complex widget that selects what token/whatever to put the key on, but I don't think we do more than the dropdown. I would think it just uses the default software key store everywhere.

Correct. 

Ryan Sleevi

30 Jul 2015 11:53:56
to Melvin Carvalho, blink-dev


On Jul 30, 2015 7:42 AM, <melvinc...@gmail.com> wrote:
>

>
> -1 KEYGEN is in use.
>
> This move will be severely detrimental several grass roots communities, such as the WebID community. 
>
> [1] https://www.w3.org/community/webid/participants
>  

This comment doesn't really address any of the technical concerns raised. WebID has repeatedly demonstrated a willingness to appropriate ill-suited technology, and has readily acknowledged that no browser implements the desired functionality for WebID to be successful.

WebID is still nascent, and readily admits it won't work with Edge. An alternative would be for WebID to proceed with standards that are actually widely used and have a viable chance at being usable - such as WebCrypto.

But it seems odd to hold a feature that was never fit to purpose nor working as desired hostage for experimental activity in a CG.

henry...@gmail.com

31 Jul 2015 04:44:38
to blink-dev, melvinc...@gmail.com, rsl...@chromium.org
As I understand it, WebCrypto (the JS libraries) does not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the only way to allow the user to select a key and use it.

As I understand it, WebCrypto requires the private keys generated to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser, the private key can, from that storage, only be used for that one origin (which means that it can still be misused by any JS that is downloaded to that web site!). That restriction removes many of the uses of public key cryptography that would allow one to strengthen the security of the web. Furthermore, storing private keys in local storage seems a lot less secure than storing them encrypted in a keychain that is either tied to the browser, as in Netscape, or tied to the OS.

So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

As for whether WebID can work with Edge (Microsoft Edge?) or not, I don't think we have looked at it yet. Why would it not work there?



 

Ryan Sleevi

31 Jul 2015 05:33:21
to henry...@gmail.com, blink-dev, Melvin Carvalho


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand WebCrypto ( the JS libraries ) do not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the  only way to allow the user to select a key and use it. 

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.
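A sketch of the pattern Ryan describes - per-origin keys with application-controlled UI; the database and store names are arbitrary:

  <script>
  // Generate a non-extractable key and persist the CryptoKey handle in
  // IndexedDB. CryptoKey objects are structured-cloneable, so the key remains
  // usable across visits while never being exposed to script or other origins.
  async function createOriginKey() {
    const keyPair = await crypto.subtle.generateKey(
      { name: "ECDSA", namedCurve: "P-256" }, false, ["sign", "verify"]);

    const db = await new Promise(function(resolve, reject) {
      const req = indexedDB.open("auth-keys", 1);
      req.onupgradeneeded = function() { req.result.createObjectStore("keys"); };
      req.onsuccess = function() { resolve(req.result); };
      req.onerror = function() { reject(req.error); };
    });

    db.transaction("keys", "readwrite").objectStore("keys").put(keyPair, "me");
    return keyPair;
  }
  </script>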

> As I understand web crypto requires the private keys generated to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser the private key can from that storage only be used for that one origin ( which means that it can still be misused by any JS that is downloaded to a web site! ) That restriction removes many of the uses of public key cryptography that would allow one to strengthen the security of the web. Furthermore storing private keys in local storage seems a lot less secure than storing it encrypted in a keychain that is either tied to the browser like Netscape or tied  the OS. 

<keygen> was never spec'd to behave as you describe. That it does is simply a matter of emulating Netscape's original behaviour, but as noted earlier, even those that emulate the behaviour do so inconsistently.

For example, using <keygen> on iOS in a webview doesn't affect the Safari keychain - as apps can't write into that.

>
> So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

Correct, because it intentionally is not trying to offer a 1:1 emulation of client certificates, because they're a horrible and broken technology for the web at large (i.e. outside niche enterprise use cases). This is a fundamental flaw that WebID has been struggling with, without solution or adoption, for over four years.

However, alternatives exist - both via WebCrypto (as clearly demonstrated by Mozilla's past efforts through Persona) and via FIDO (as clearly demonstrated by Google, Microsoft, and others) - that offer more meaningful, usable, robust alternatives.

> As for wether WebID can work with Edge ( Microsoft Edge? ) or not I don't think we have yet looked at it. Why would it not work there?

Please read my original message, detailing the varying incompatibilities with <keygen>.

None of these are new to the WebID folks - nor has there been any traction whatsoever on the pet features the WebID CG have deemed essential to WebID (such as TLS logout). The hope is that WebID will, after the considerable time spent as a CG, realize there are alternative technical solutions that offer the same fundamental properties in more usable, more widely supported ways, without relying on the esoteric incompatible behaviours of browsers (as it does today, for example, TLS logout).

henry...@gmail.com

31 Jul 2015 06:23:14
to blink-dev, henry...@gmail.com, melvinc...@gmail.com, rsl...@chromium.org


On Friday, 31 July 2015 11:33:21 UTC+2, Ryan Sleevi wrote:


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand WebCrypto ( the JS libraries ) do not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the  only way to allow the user to select a key and use it. 

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.


It does if you use it the way the WebID-TLS spec specifies it:
  http://www.w3.org/2005/Incubator/webid/spec/

This allows you to use one certificate to authenticate to all servers. WebCrypto as you admit does not do that.

 

> As I understand web crypto requires the private keys generated to be downloaded into the local storage of the browser. Due to entirely reasonable security restrictions of the browser the private key can from that storage only be used for that one origin ( which means that it can still be misused by any JS that is downloaded to a web site! ) That restriction removes many of the uses of public key cryptography that would allow one to strengthen the security of the web. Furthermore storing private keys in local storage seems a lot less secure than storing it encrypted in a keychain that is either tied to the browser like Netscape or tied  the OS. 

<keygen> is never speced to behave as you describe. That it does is simply a matter of emulating Netscape's original behaviour, but as noted earlier, even those that emulate the behaviour do so inconsistently.

For example, using <keygen> on iOS in a webview doesn't affect the Safari keychain - as apps can't write into that.


So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first try to see if your favorite technology meets the use case that WebID tries to address; then, when it does, a case can be made for deprecating other technologies. For the moment your attempt is too hasty.
  

>
> So I don't see that WebCrypto has proved itself yet as a substitute for client side certificates. 

Correct, because it intentionally is not trying to offer a 1:1 emulation of client certificates, because they're a horrible and broken technology for the web at large (e.g. outside niche enterprise use cases). This is a fundamental flaw that WebID has been struggling with, without solution or adoption, for over for years.

However, alternatives exist - both via WebCrypto (as clearly demonstrated by Mozilla's past efforts through Persona) and via FIDO (as clearly demonstrated by Google, Microsoft, and others) - that offer more meaningful, usable, robust alternatives.


WebID shows that client-side certificates can be used outside of what you call "niche enterprise use cases", for the general public. Mozilla's Persona effort has been wound down and can be considered a failure, presumably partly because it tried to use JS to do crypto and did not want to tie the security to the chrome. Other reasons may be that it did not want to adopt WebID profile information to create a web of trust.

 

> As for wether WebID can work with Edge ( Microsoft Edge? ) or not I don't think we have yet looked at it. Why would it not work there?

Please read my original message, detailing the varying incompatibilities with <keygen>.

None of these are new to the WebID folks - nor has there been any traction whatsoever on the pet features the WebID CG have deemed essential to WebID (such as TLS logout). The hope is that WebID will, after the considerable time spent as a CG, realize there are alternative technical solutions that offer the same fundamental properties in more usable, more widely supported ways, without relying on the esoteric incompatible behaviours of browsers (as it does today, for example, TLS logout).


You did not answer my concerns about JS crypto:
 * requires private keys to be stored in the user's local storage in a non-secure manner
 * is open to being read by any JS from the same origin, which is extremely difficult to control, and as a result creates limitations on the JS served by web servers that are very difficult to enforce
 * is therefore open to phishing attacks
 * suffers from all of the JS security issues

These are all related to JS breaking the rule of least power, which is very important in the case of security.


I am building highly complex single page applications with JS and so I certainly recognise there is an important role for JS in the modern web. 

I have not looked at FIDO in detail to see if it is compatible with WebID, as my policy on that group was to work with deployed, existing technologies. TLS fit the bill. WebCrypto has been in deployment for a few years, but does not really solve the problems that TLS was able to solve, even if not perfectly.

For example, logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it just requires some UI innovation. This issue will pop up any other way you try to do things, which is why the WebCrypto folks still cannot do the right thing.

I'll look at FIDO to see if it would allow WebID authentication, which would have been easy to add to Mozilla Persona too, btw. But I have seen a lot of large corporations get together to try to build standards (e.g. the Liberty Alliance, Persona, OpenID, ...) and most of those have failed. So I'd suggest a wait-and-see attitude.

Henry 


Ryan Sleevi

31 Jul 2015 07:10:34
to Henry Story, blink-dev, Melvin Carvalho


On Jul 31, 2015 3:23 AM, <henry...@gmail.com> wrote:
>
> It does if you use it the way the WebID-TLS spec specifies it:
>   http://www.w3.org/2005/Incubator/webid/spec/
>

I think we will just have to agree to disagree here. As it relates to UX, as it relates to interactions with users' antivirus, as it relates to other devices and form factors, WebID-TLS is irreparably flawed. WebID-TLS advocates - such as yourself - should find this as no surprise. You (the CG) have lamented about 'bugs' (things working as designed) that hold it back.

As it relates to the browser, WebID-TLS merely appropriates the nearest technology, poorly, and with a number of problems. But finding a way to solve WebID-TLS isn't entirely germane to this discussion, and I've offered copious constructive feedback over the years of these efforts to try to assist you. Unfortunately, I think it's best to simply note that you disagree, but given that viable technical alternatives exist, that disagreement is by no means reason to keep a non-interoperable, problematic, insecure-by-design API around.

> So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first try to see if your favorite technology meets the use case that WebID tries to address, and then when it does a case can be made for deprecating other technologies.  For the moment your attempt is too hasty.

I have provided you constructive feedback for over four years on advising you about the inherent flaws of the design. It is neither reasonable nor polite to suggest that I spend another year, or more, offering much of the same feedback, which will simply be ignored not because it is insufficient for your needs, but because you're change averse.

> You did not answer my concerns about JS crypto:
>  * requires private keys to be stored in the user's local storage in a non-secure manner

No different than <keygen>

>  * is open to being read by any JS from the same origin, which is extremely difficult to control, and as a result creates limitations on the JS served by web servers that are very difficult to enforce

If you can't control same-origin scripts, you have lost the game. Security game over.

>  * is therefore open to phishing attacks

This isn't how phishing works. The same-origin policy gives you greater protection, not less.

>  * suffers from all of the JS security issues

>  For example logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it's just requires some UI innovtaion. This issue will pop up any other way you try to do things, which is why the Web Crypto folks still cannot do the right thing.
>

I spent several years detailing to you in excruciating detail why this is technically flawed and fundamentally incompatible with how the web, TLS, and servers work. I seriously doubt you will take heed this time, but there is no doubt I've made extensive good faith effort at providing actionable technical solutions.

I hope you can understand my frustration that, despite years of providing technical feedback, WebID continues to insist on a broken model.

I can appreciate deprecating <keygen> will break the academic, experimental concept that is WebID. However, suitable technical alternatives do exist, if the WebID CG were to invest in the technical work, so this really falls on WebID. It's also clear that 'no' (for some value of 'no' that is nearly non-existent) users are actually using WebID, which again is consistent with the four+ years of the CG trying to promote WebID-TLS, so even the impact of WebID needing to readjust the design is exceedingly minimal.

My hope is that you'll take the time to evaluate the many messages I've sent in the past on this, as well as the ample feedback to the non-bugs, as I doubt there is anything more I can offer you in conversation on this thread beyond what has been said. When I wrote this message, I was and am very familiar with the WebID work, on a deep technical level, and I still stand by every bit that I said in the original message arguing for the deprecation in spite of this.

Melvin Carvalho

31 Jul 2015 07:52:25
to rsl...@chromium.org, Henry Story, blink-dev
On 31 July 2015 at 13:10, Ryan Sleevi <rsl...@chromium.org> wrote:


On Jul 31, 2015 3:23 AM, <henry...@gmail.com> wrote:
>
> It does if you use it the way the WebID-TLS spec specifies it:
>   http://www.w3.org/2005/Incubator/webid/spec/
>

I think we will just have to agree to disagree here. As it relates to UX, as it relates to interactions with users' antivirus, as it relates to other devices and form factors, WebID-TLS is irreparably flawed. WebID-TLS advocates - such as yourself - should find this as no surprise. You (the CG) have lamented about 'bugs' (things working as designed) that hold it back.

As it relates to the browser, WebID-TLS merely appropriates the nearest technology, poorly, and with a number of problems. But finding a way to solve WebID-TLS isn't entirely germane to this discussion, and I've offered copious constructive feedback over the years of these efforts to try to assist you. Unfortunately, I think it's best to simply note that you disagree, but given that viable technical alternatives exist, that disagreement is by no means reason to keep a non-interoperable, problematic, insecure-by-design API around.

> So you admit that you want to remove a feature that cannot be emulated by other technologies. Why not instead first try to see if your favorite technology meets the use case that WebID tries to address, and then when it does a case can be made for deprecating other technologies.  For the moment your attempt is too hasty.

I have provided you constructive feedback for over four years on advising you about the inherent flaws of the design. It is neither reasonable nor polite to suggest that I spend another year, or more, offering much of the same feedback, which will simply be ignored not because it is insufficient for your needs, but because you're change averse.

> You did not answer my concerns about JS crypto:
>  * requires private keys to be served on the users local storage in a non secure manner

No different than <keygen>

>  * is open to be read by any JS from the same origin, which is extreemly difficult to control, and as a result creates very difficult to enforce limitations of JS on web servers

If you can't control same-origin scripts, you have lost the game. Security game over.

>  * is thefore open to fishing attacks

This isn't how phishing works. The same-origin policy gives you greater protection, not less.

>  * suffers from all of the JS security issues

>  For example logout could easily be solved by browsers by making that capability visible in the chrome. Proposals for that have been made: it's just requires some UI innovtaion. This issue will pop up any other way you try to do things, which is why the Web Crypto folks still cannot do the right thing.
>

I spent several years detailing to you in excruciating detail why this is technically flawed and fundamentally incompatible with how the web, TLS, and servers work. I seriously doubt you will take heed this time, but there is no doubt I've made extensive good faith effort at providing actionable technical solutions.


I can't find a record of you stating your concerns to the community group among the thousands of emails in the WebID CG archive?

https://lists.w3.org/Archives/Public/public-webid/

Jeffrey Yasskin

31 Jul 2015 11:07:29
to henry...@gmail.com, blink-dev, melvinc...@gmail.com, Ryan Sleevi
On Fri, Jul 31, 2015 at 3:23 AM, <henry...@gmail.com> wrote:


On Friday, 31 July 2015 11:33:21 UTC+2, Ryan Sleevi wrote:


On Jul 31, 2015 1:44 AM, <henry...@gmail.com> wrote:
>
> As I understand WebCrypto ( the JS libraries ) do not allow the chrome to ask the user to select the keys. This is probably due to the fact that JS is too powerful a language to be able to correctly constrain what is done, breaking the rule of least power [1]. Currently TLS client certificate selection is the  only way to allow the user to select a key and use it. 

No, this is not correct. You can create keys for your origin, store them, and have full control over the UI.

What you don't get is to use those keys for TLS client auth, and that's a good thing, because TLS client auth simply doesn't work for the web at scale.


It does if you use it the way the WebID-TLS spec specifies it:

This allows you to use one certificate to authenticate to all servers. WebCrypto as you admit does not do that.

Doesn't it? If someone runs an identity provider, perhaps with a Service Worker to work offline, relying parties can iframe the identity provider, and the identity provider can store a key in WebCrypto and prove its presence to the relying parties. With fallback request interception (https://github.com/slightlyoff/ServiceWorker/issues/684), the relying parties can also ping the identity provider from their service workers.
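A sketch of that shape, with all names illustrative: the relying party embeds idp.example in an iframe and asks it to prove possession of a key over postMessage.

  <script>
  // Relying-party side: embed the identity provider and send it a challenge.
  const idp = document.createElement("iframe");
  idp.src = "https://idp.example/prove";
  document.body.appendChild(idp);

  const challenge = crypto.getRandomValues(new Uint8Array(32));
  idp.onload = function() {
    idp.contentWindow.postMessage({ challenge: challenge }, "https://idp.example");
  };

  // The identity provider signs the challenge with a key kept in its own
  // origin's storage (e.g. the IndexedDB pattern sketched earlier) and replies;
  // verifyOnServer() is a hypothetical relying-party helper.
  window.addEventListener("message", function(event) {
    if (event.origin !== "https://idp.example") return;
    verifyOnServer(challenge, event.data.signature);
  });
  </script>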

Jeffrey

01234567...@gmail.com

31 Jul 2015 15:59:37
to blink-dev, sle...@google.com


On Tuesday, July 28, 2015 at 3:46:51 PM UTC-4, Ryan Sleevi wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


Maybe, if this is a strategic direction taking us away from asymmetric crypto and away from putting it in the hands of users and developers, it should be reconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate installation mime-types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost, as being the initial way to do certificate provisioning, and along with it brought support into Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]


Microsoft didn't implement SVG technology for about a decade.  For many that left it as a questionable technology.  The solution was for Microsoft to implement it.
 

2) <keygen> is unique in HTML (Javascript or otherwise) in that by design, it offers a way to persistently modify the users' operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both which use a per-application keystore)


If it is unique, then it should be removed. If you replace it with another way of allowing the user to create a key pair and control the use of the key, then fine. But please don't remove keygen until then.
 

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])


A solution would then be to extend the standard in future versions and code compatibly to that.
 

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.


Why not?  Almost all secure protocols have to have an upgrade path, where you allow back-compatibility for a certain time
 
If the alternative is the browser vendors or the developers re-creating a new tool, then -- that ain't gonna be back-compatible either, is it?

 

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.


There is a typical browser-vendor mindset here: this feature allows a web site to do something to the user.
Think of it this way: this allows a user to do something very powerful with their browser which gives them greater security in their relationships with web sites. The browser acts as the user's agent.


 

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.


[3] https://connect.microsoft.com/IE/feedback/details/793734/ie11-feature-request-support-for-keygen

[4] https://lists.w3.org/Archives/Public/public-html/2009Sep/0153.html

[5] https://blog.whatwg.org/this-week-in-html5-episode-35

[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen

[7] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[8] https://developer.mozilla.org/en-US/docs/Archive/Mozilla/JavaScript_crypto/generateCRMFRequest


Compatibility Risk

While there is no doubt that <keygen> remains used in the wild, both the use counters [9] and Google's own crawler indicate that it's use is extremely minimal. Given that Mozilla implements a version different than the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates that are inconsistent with what was specified, offering a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android, defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring the user having a matching key a-priori, except that's not detailed as to how it works. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), which Chrome later implemented, but is not well specified.


Well, obviously a move to create more interop through a better standard would be a good idea. Meanwhile keygen seems to work for me for WebID on the four browsers I use.
 
 


[9] https://www.chromestatus.com/metrics/feature/popularity#HTMLKeygenElement

[10] https://bugzilla.mozilla.org/show_bug.cgi?id=1024871


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc) represent a far more appropriate path. That is, you would not expect a web page to be able to configure a VPN or a users proxy, so too should you not expect a webpage to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.


The creation of a key pair should be as simple for the user as possible. This is a user creating an account; it has to be smooth. It has to happen completely within the browser, with no other apps involved: users will not be able to cope. Otherwise this will lead to enterprises and organizations which want to be able to authenticate users writing special code (like MIT's certaid) which runs in the OS, not in the web, and does who knows what behind the user's back.


 


On some level, this removal will remove support for arbitrarily adding keys to the users' device-wide store, which is an intentional, by-design behavior.


I want to be able to write a web site which gives me as a user (or my family member or friend) a private key so I can communicate with them securely.

While a use case exists for provisioning TLS client certificates for authentication, such a use case is inherently user-hostile for usability,


What? What is the problem? Compared to other things suggested, <keygen> seems simpler and more user-friendly to me. There is a bug that the website doesn't get an event back when the key has been generated. That needs to be fixed, so that the website can continue the dialog and pick up doing whatever the user wanted to do when they found they needed to mint an identity. That is a bug to be fixed.


 

and represents an authentication scheme that does not work well for the web.


Asymmetric auth is fundamentally more useful and powerful than the mass of shared passwords and the like which is the alternative. If it "doesn't work well for the web", is that just because the UI for dealing with certs needs improvement?

 

An alternative means for addressing this use case is to employ the work of the FIDO Alliance [12], which has strong positive signals from Microsoft and Google (both in the WG), is already supported via extensions in Chrome [13], with Mozilla evaluating support via similar means [14]. This offers a more meaningful way to offer strong, non-phishable authentication, in a manner that is more privacy preserving, offers a better user experience, better standards support, and more robust security capabilities.


Well, let's see the result of that work.  Don't throw out keygen until it is up and running and FIDO does what we need smoothly and securely.

In the mean time, please fix bugs in client certs which are just a pain.

- Firefox offers to save your choice of client cert ("remember this choice?") for a site, but doesn't.
- Safari displays a list of all kinds of deleted and old and expired certs as well as your active ones, which is confusing.

- In general, the client cert selection needs more real estate and a bit more info about each cert (not just my name, which is (duh) often the same for each certificate! Unless I have remembered to make up a different name every time to be able to select between certs later)

So don't abandon <keygen>, fix it.

And sure, bring in a really nice, secure, powerful FIDO-based system which will make <keygen> obsolete and make management of personas and identities by users a dream, which has a non-phishable UI, allows the creation of a secure relationship between a user and a web site to happen as easily as any sign-up one could imagine, and allows the users to manage them when they need to. And is compatible in all the browsers and doesn't have these silly interop problems. That would be great, and I'm sure people will be happy to move to it. I'll believe it when I see it.

In the meantime please support keygen and improve client cert handling. 

timbl






 

01234567...@gmail.com

31 Jul 2015 17:16:23
to blink-dev, sle...@google.com


On Tuesday, July 28, 2015 at 3:46:51 PM UTC-4, Ryan Sleevi wrote:



Alternative implementation suggestion for web developers

...

Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.


Do you have a pointer to "OS X Enterprise settings"? What is that?
 

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.


Remember, Ryan, that you work for a browser company. Solutions you can dream up involve coding the browser. WebID was written by developers who don't have that luxury. Don't compare it with browser initiatives past or future. It was written using the things which you, the browsers, made available at the time, and keygen was that. If you want to criticize WebID, then you have to say what they should do to get the same effect under the same constraints.

So, for example, you suggest doing it all in WebCrypto JS would be possible, except that keygen pops the key pair straight into the certificate store, which is accessed using harder-to-phish browser dialogs at login time. Generating it in web-app JS and handing it to the user to install is horrible as a workflow, and trains users to do random stuff outside the browser in order to log on to things, which opens them up to phishing. It asks the user to trust using a private key which has been generated by some random web page - much worse than using one which has been generated and kept within the browser. Supporting that process on all platforms would be a major pain. (It becomes tempting on the open platforms to just give the users an app to fix the whole thing and do certificate management properly.) Just using WebCrypto, then, is not a better solution.

Storing the certs generated by WebCrypto in common client-side storage has its own problems - phishing, lack of a trusted interface. But it is possible, and people have explored that... indeed, ignoring the whole TLS layer and putting in WebCrypto-to-WebCrypto, app-to-app asymmetric encrypted and authenticated communication is a possibility.

You suggest that the things that WebID developers have described as 'bugs' are, you feel, 'working as intended'... hmmm.


 

Ryan Sleevi

unread,
31 Jul 2015 17:36:04
to 01234567...@gmail.com, blink-dev
On Fri, Jul 31, 2015 at 2:16 PM, <01234567...@gmail.com> wrote:
If you want to criticize webid, then you have to say what they should do to get the same effect under the same constraints.

Not when the desired effect is unreasonable. The point is to show comparable solutions exist, with reasonable tradeoffs. There's not an incumbent demand to show there's a 100% functionally equivalent solution, because the functionality itself is problematic and flawed by design.
 
Supporting that process on all platforms would be a major pain.  (It becomes tempting on the open platforms to just give the users an app to fix the whole thing and do certificate management properly.)

This is exactly what I said in the original message, and which you acknowledge has a very important adjective there - "properly".

Systems management is not the realm of the browser. You should not have functionality that, say, allows arbitrary websites to configure your Wi-Fi. I say this knowing full well that there are a tremendous number of interesting and exciting things you could do if such functionality existed. Similarly, your browser should not be the vector for VPN management, despite there being exciting and compelling things you could do.

Certificate and key management - which is system wide, affects all other applications and origins - is itself a fundamentally flawed premise for a browser to be involved in, nor is it required for any 'spec-following' <keygen> implementation. As the original message laid out, there's no way such a feature would have a reasonable chance of succeeding. The very premise of WebID rests on the ability to create a cross-origin super-cookie (the client cert), and the bugs filed make it clear there's a desire to reduce or eliminate the user interaction in doing so.
 
You suggest that the things that WebID developers have described as 'bugs' are, you feel, 'working as intended'.... hmmm.

Yes. They are. I've explained at great length to your fellow WebID CG members, during discussions in the WebCrypto WG, on the Chromium tracker, in private emails, and at W3C TPACs and F2Fs, how several of the fundamental requirements of WebID are, at a protocol level, incompatible with how the Web works.

Regardless of your feelings towards WebID, which I fully acknowledge that after several years of effort, I'm unlikely to dissuade any members of the technical folly, let's go back to first principles.

Should an arbitrary website be able to make persistent modification to your device, that affects all applications, without user interaction? (As implemented today in several browsers?). The answer is immediately and obviously no.

What tools exist to mitigate this concern? As browser operators, we have two choices:
1) Only allow the origin to affect that origin.
2) Defer the decision to the user, hoping they're sufficiently aware and informed of the risks and implications.

1) is entirely viable if <keygen> were kept around, but it would entirely eliminate the value proposition of <keygen> from those arguing for it being kept (both on this thread and the similar Mozilla thread - https://groups.google.com/d/msg/mozilla.dev.platform/pAUG2VQ6xfQ/FKX63BwOIwAJ ). So we can quickly rule that out as causing the same concerns.

2) is not a viable pattern. You yourself have commented, on past threads about encouraging TLS adoption, on what you view as the complexity involved in certificate management, and these are challenges being faced by populations with a modicum of technical background (web developers, server operators, and the like). Surely you can see that these challenges - and concepts - of key management and hygiene are well beyond the ken of the average user. But let's assume we could draft some perfect language to communicate to the user; what would we need to communicate?

"If you continue, this website may be able prevent your machine from accessing the Internet?"
"If you continue, this website may be able to allow arbitrary parties to intercept all of your secure communications?"

Is that a reasonable permission to grant? No, absolutely not. But that's what <keygen> - and the related application/x-x509-*-cert - rely on. That itself is a fundamental problem, and one which is not at all reasonable to ask.

I do hope you can re-read my original message and see the many, fundamental, by-design issues with <keygen>. Its existence was to compete with a Microsoft-developed ActiveX system management control intended for internal networks only, and whose inception predated even the modern extension systems now used for management of Mozilla Firefox. I don't deny that you've found utility in the fact that something insecure, and broken by design, has been exposed to the Web, but that's not itself an argument for its continuation. Given the risks, it's also not reasonable to suggest that the feature be held hostage by a small minority of developers, with a virtually non-existent group of users, until they can be convinced of technical alternatives. Somewhere, there has to be a split as to what represents a reasonable part of the Web platform - and what represents an artifact of the heady, but terribly insecure, days of the First Browser Wars and the pursuit of vendor lock-in.

henry...@gmail.com

unread,
31 Jul 2015 18:40:41
to blink-dev, 01234567...@gmail.com, sle...@google.com


On Friday, 31 July 2015 23:36:04 UTC+2, Ryan Sleevi wrote:


On Fri, Jul 31, 2015 at 2:16 PM, <01234567...@gmail.com> wrote:
If you want to criticize webid, then you have to say what they should do to get the same effect under the same constraints.

Not when the desired effect is unreasonable. The point is to show comparable solutions exist, with reasonable tradeoffs. There's not an incumbent demand to show there's a 100% functionally equivalent solution, because the functionality itself is problematic and flawed by design.

Asymmetric key usage is flawed by design? For whom? For groups that wish to have an easier time surveilling or breaking into systems? There are indeed things that can be improved with TLS, just as with any other web technology. Think about the evolution of HTML 1.0 to HTML 5.0, or of HTTP/1.0 to HTTP/2.0. A statement as strong as "flawed by design" would require very serious argumentation to back it up. Can you point us to it, so that we can look it up?
 
 
Supporting that process on all platforms would be a major pain.  (It becomes tempting on the open platforms to just give the users an app to fix the whole thing and do certificate management properly.)

This is exactly what I said in the original message, and which you acknowledge has a very important adjective there - "properly".

Systems management is not the realm of the browser. You should not have functionality that, say, allows arbitrary websites to configure your Wi-Fi. I say this knowing full well that there are a tremendous number of interesting and exciting things you could do if such functionality existed. Similarly, your browser should not be the vector for VPN management, despite there being exciting and compelling things you could do.

Certificate and key management - which is system wide, affects all other applications and origins - is itself a fundamentally flawed premise for a browser to be involved in, nor is it required for any 'spec-following' <keygen> implementation. As the original message laid out, there's no way such a feature would have a reasonable chance of succeeding. The very premise of WebID rests on the ability to create a cross-origin super-cookie (the client cert), and the bugs filed make it clear there's a desire to reduce or eliminate the user interaction in doing so.

This would be a lot more convincing if the certificates added to the keychains were trust certificates. Here we are speaking of user certificates that allow a user to authenticate to a different web site, not ones that make decisions about which sites are authenticated. Here the user gets to choose if the certificate gets added to his keychain and when the certificate is used to log in to different web sites. The UI to make this obvious could be much more clearly designed. Aza Raskin made suggestions to that effect a while ago for Firefox: http://www.azarask.in/blog/post/identity-in-the-browser-firefox/ .
On the whole Chrome has one of the best UIs but there is a lot more that can be done to improve the situation.
 
 
You suggest that the things that WebID developers have described as 'bugs' are, you feel, 'working as intended'.... hmmm.

Yes. They are. I've explained at great length to your fellow WebID CG members, during discussions in the WebCrypto WG, on the Chromium tracker, in private emails, and at W3C TPACs and F2Fs, how several of the fundamental requirements of WebID are, at a protocol level, incompatible with how the Web works.

Perhaps there are things you have not understood about the functioning of WebID. And as I mentioned, not being browser vendors, we can only use technologies that are available to us in the browser. Improvements there will allow us to improve the protocol. ( I'll write another, more detailed, mail to respond to Jeffrey Yasskin as soon as I have studied some of the pointers to the still-being-debated proposals for navigator.connect )
 

Regardless of your feelings towards WebID, which I fully acknowledge that after several years of effort, I'm unlikely to dissuade any members of the technical folly, let's go back to first principles.

Should an arbitrary website be able to make persistent modification to your device, that affects all applications, without user interaction? (As implemented today in several browsers?). The answer is immediately and obviously no.

1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers
2) If I remember correctly Firefox allowed the user to accept or deny the addition of the certificate to his browser
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates ( IF a browser automatically added those there would be justified global outcry). These client certificates do not influence the security of the web.
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

You have not provided an argument here based on first principles. If you had it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc...
Of course if you don't take those dimensions into account I can see how you can come to the conclusion that your argument is fail safe. 
 

What tools exist to mitigate this concern? As browser operators, we have two choices:
1) Only allow the origin to affect that origin.
2) Defer the decision to the user, hoping they're sufficiently aware and informed of the risks and implications.

1) is entirely viable if <keygen> were kept around, but it would entirely eliminate the value proposition of <keygen> from those arguing for it being kept (both on this thread and the similar Mozilla thread - https://groups.google.com/d/msg/mozilla.dev.platform/pAUG2VQ6xfQ/FKX63BwOIwAJ ). So we can quickly rule that out as causing the same concerns.

yes, a keygen that would create a certificate that allowed you to log in to one site only would not be very interesting. Kind of defeating the whole point of asymmetric crypto.
 

2) is not a viable pattern. You yourself have commented, on past threads about encouraging TLS adoption, on what you view as the complexity involved in certificate management, and these are challenges being faced by populations with a modicum of technical background (web developers, server operators, and the like). Surely you can see that these challenges - and concepts - of key management and hygiene are well beyond the ken of the average user. But let's assume we could draft some perfect language to communicate to the user; what would we need to communicate?

"If you continue, this website may be able prevent your machine from accessing the Internet?"
"If you continue, this website may be able to allow arbitrary parties to intercept all of your secure communications?"

This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But the browsers usually have 3 different levels of certificates which go into different boxes that are isolated: user certificates, user installed CA certificates and browser CA installed certificates.

Installing client certificates is only complicated and unusable if you do not use keygen. Keygen was not widely known until it was supported by HTML5, so it is not surprising that it may not have been used widely.
 

Is that a reasonable permission to grant? No, absolutely not. But that's what <keygen> - and the related application/x-x509-*-cert - rely on. That itself is a fundamental problem, and one which is not at all reasonable to ask.

yes, but it is a straw man, as I don't think any browser automatically would allow CA based certificates to be added. 
 

I do hope you can re-read my original message and see the many, fundamental, by-design issues with <keygen>. Its existence was to compete with a Microsoft-developed ActiveX system management control intended for internal networks only, and whose inception predated even the modern extension systems now used for management of Mozilla Firefox. I don't deny that you've found utility in the fact that something insecure, and broken by design, has been exposed to the Web, but that's not itself an argument for its continuation. Given the risks, it's also not reasonable to suggest that the feature be held hostage by a small minority of developers, with a virtually non-existent group of users, until they can be convinced of technical alternatives. Somewhere, there has to be a split as to what represents a reasonable part of the Web platform - and what represents an artifact of the heady, but terribly insecure, days of the First Browser Wars and the pursuit of vendor lock-in.

From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?
 

Ryan Sleevi

unread,
31 Jul 2015 19:00:23
to Henry Story, blink-dev, 01234567...@gmail.com
On Fri, Jul 31, 2015 at 3:40 PM, <henry...@gmail.com> wrote:


1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers

I never said it was.
 
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)

While generous, I already demonstrated why this is a fundamentally flawed argument.
 
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates ( IF a browser automatically added those there would be justified global outcry). These client certificates do not influence the security of the web.

This is factually wrong, as responded to below.
 
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

Yes, and we no longer do. So that's also consistent :)
 
You have not provided an argument here based on first principles. If you had it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc...
Of course if you don't take those dimensions into account I can see how you can come to the conclusion that your argument is fail safe. 

While I appreciate your concern about what I am and am not thinking about, you will find that the very message you were replying to was focused very much on these. My past replies to you, in the WebCrypto WG (and WebCrypto CG) and on the bugs you have filed have also provided ample evidence why it's not a UI issue, but a technical one. It's clear I won't be able to convince you of that any further, but I do want to highlight the very real disagreement with your assertions.

yes, a keygen that would create a certificate that allowed you to log in to one site only would not be very interesting. Kind of defeating the whole point of asymmetric crypto.

This is a statement that makes no sense. It defeats the point of _your_ goal, but it's perfectly consistent with asymmetric crypto. There is no requirement for multi-party exchanges in public/private keypairs.

This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But the browsers usually have 3 different levels of certificates which go into different boxes that are isolated: user certificates, user installed CA certificates and browser CA installed certificates.

Sorry Henry, but this isn't the case.

Let me make it absolutely clear, although I stated as much in the original message: Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation, and one on which the entire premise of WebID is balanced.

 
yes, but it is a straw man, as I don't think any browser automatically would allow CA based certificates to be added. 

No, you're ignoring what I wrote in the original message. Through simple use of <keygen> - without any certificates - I can break users devices. Full stop.

Now you can argue it's a "bug" that needs fixing - and I agree, the fact that a <keygen> tag can do this IS quite unfortunate AND well known (although not by the WebID folks, it would seem). But the question is whether or not there is anything worth saving in <keygen> in the face of bugs like this - and the many other reasons I outlined in the original message - and the answer is increasingly "No".
 
From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?

Thanks for the vote of confidence. This will be my last reply, as I've yet again been suckered into making a good faith attempt to explain to you the issues.

henry...@gmail.com

unread,
1 Aug 2015 02:01:10
to blink-dev, henry...@gmail.com, 01234567...@gmail.com, sle...@google.com


On Saturday, 1 August 2015 01:00:23 UTC+2, Ryan Sleevi wrote:


On Fri, Jul 31, 2015 at 3:40 PM, <henry...@gmail.com> wrote:


1) Firefox at least until recently had its own keychain, so it did not affect all other apps. So your statement is not true for all browsers

I never said it was.
 
3) It would be easy to improve the user interface to allow the user to view and inspect the certificate before accepting it. The user could for example add a small nick name to the certificate to identify it more easily. I can think of a lot of other improvements that would massively improve the UI. (I am available for consultation to help out there)

While generous, I already demonstrated why this is a fundamentally flawed argument.

No you did not.
 
 
4) The certificates added allow the user to authenticate to other sites; they were not certificate authority certificates ( IF a browser automatically added those there would be justified global outcry). These client certificates do not influence the security of the web.

This is factually wrong, as responded to below.

I read the response below, and found no answer to this point. Has it been possible for 10 years to add new CA certificates to the browser using keygen? Without the user even being notified? Have there been attacks that use keygen? How do these work?  You have only made a theoretical point about the attack of "modifying the system of the user". But you have not detailed this attack, or pointed to actual uses of this until-now hypothetical attack.
 
 
5) Your argument would apply just as well for adding passwords to the keychain too. I don't think many people would be convinced that that was a great problem. I myself find that feature really nice by the way. It allows me to move between Safari and Chrome from time to time ( and to Firefox with an addon ), without having to add all the passwords back again. The reason I never used Opera was the fact that I could never extract passwords again from their keychain. I did not want to invest in a browser that did not give me the freedom to switch later. 

Yes, and we no longer do. So that's also consistent :)

( Sigh ! At least you seem to be consistent. What were the actual attacks that were launched using these passwords? )
 
 
You have not provided an argument here based on first principles. If you had it would not be so easy to show how minor improvements to the UI could change the situation. I think your problem here is that you are so focused on the crypto or the lower layers that you fail to take into account the dimensions closer to the user that involve psychology, artistic design, etc...
Of course if you don't take those dimensions into account I can see how you can come to the conclusion that your argument is fail safe. 

While I appreciate your concern about what I am and am not thinking about, you will find that the very message you were replying to was focused very much on these. My past replies to you, in the WebCrypto WG (and WebCrypto CG) and on the bugs you have filed have also provided ample evidence why it's not a UI issue, but a technical one. It's clear I won't be able to convince you of that any further, but I do want to highlight the very real disagreement with your assertions.

(note: You provide once again neither arguments nor pointers to actual arguments here. The readers of _this_ list can only go by trusting you. But can they absolutely exclude the possibility that your account has been hijacked while the real Ryan Sleevi is on holiday, or in hospital, or worse? That attack is a well-known social-engineering way to infiltrate systems. But what is the attack where the client certificate is misused? )
 

yes, a keygen that would create a certificate that allowed you to log in to one site only would not be very interesting. Kind of defeating the whole point of asymmetric crypto.

This is a statement that makes no sense. It defeats the point of _your_ goal, but it's perfectly consistent with asymmetric crypto. There is no requirement for multi-party exchanges in public/private keypairs.

If asymmetric crypto were only used to log in to a site that had access to the secret, then I would not need asymmetric crypto. The point of asymmetric crypto is to allow secure communication with parties you have _never_ communicated with before! The Romans 2000 years ago knew how to do cryptography with symmetric keys, where the two parties knew the secret - a.k.a. a password! It was only in the 1970s that the mathematics was invented that enables one to publicly share a key - so that it can be read even by your enemies - while still allowing you to communicate securely.

So while one can of course use asymmetric crypto the way symmetric crypto was used, I still stand by my point that if you do that you lose one of the main technical advantages of asymmetric crypto. I'll also note that a number of governments have tried, and many have even succeeded, to make asymmetric crypto illegal for their citizens for fear of the power of that technology. As I understand it, in the US it is protected by the right to bear arms.


This would only be the case if the browser allowed Certificate Authority level certificates to be installed. But the browsers usually have 3 different levels of certificates which go into different boxes that are isolated: user certificates, user installed CA certificates and browser CA installed certificates.

Sorry Henry, but this isn't the case.

Let me make it absolutely clear, although I stated as much in the original message: Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation, and one on which the entire premise of WebID is balanced.

Ok, so please point to a document - that describes the attack vector - and that shows that this is not a user interface issue.
How can installing a client certificate be a risky and dangerous operation? 
I have not found any answers to this question in this thread here.
 

 
yes, but it is a straw man, as I don't think any browser automatically would allow CA based certificates to be added. 

No, you're ignoring what I wrote in the original message. Through simple use of <keygen> - without any certificates - I can break users devices. Full stop.

How?
 

Now you can argue it's a "bug" that needs fixing - and I agree, the fact that a <keygen> tag can do this IS quite unfortunate AND well known (although not by the WebID folks, it would seem). But the question is whether or not there is anything worth saving in <keygen> in the face of bugs like this - and the many other reasons I outlined in the original message - and the answer is increasingly "No".

You have managed to avoid in this whole thread explaining how this could be done or pointing to a document describing this "well known" attack.
 
 
From the above I don't get the feeling that you really understand how the certificate systems work. What am I missing here?

Thanks for the vote of confidence. This will be my last reply, as I've yet again been suckered into making a good faith attempt to explain to you the issues.

If you refuse to point to the document explaining how client certificates can be used as an attack vector perhaps someone else can? 

henry...@gmail.com

unread,
1 Aug 2015 04:27:02
to blink-dev, davi...@chromium.org, sle...@google.com, sligh...@chromium.org
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key with metadata ( when was it generated, on which site, what form fields were filled in, etc ) in a browser specific store
2) if the certificate is then downloaded, use all this metadata to remind the user of when this was done. The further back it is, the more detail obviously needs to be given to remind the user of what he was trying to do. 
3) sites that do certificate spamming should then be obvious to the user, who can then avoid them like any other spamming site

Note that in the WebID-TLS authentication protocol we can get the certificate to be returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents ( see details in the documents http://www.w3.org/2005/Incubator/webid/spec/ ).
 
 
 
It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and only control of the browser, is much more secure.

David Benjamin

unread,
2 Aug 2015 12:25:34
to henry...@gmail.com, blink-dev, sle...@google.com, sligh...@chromium.org
On Sat, Aug 1, 2015 at 4:27 AM <henry...@gmail.com> wrote:
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key with metadata ( when was it generated, on which site, what form fields were filled in, etc ) in a browser specific store
2) if the certificate is then downloaded, use all this metadata to remind the user of when this was done. The further back it is, the more detail obviously needs to be given to remind the user of what he was trying to do.  

This does not address the key-spamming issue. Or are you proposing that keygen include a user prompt too? Both this and the prompt below are going to be incomprehensible to most users.
 
3) sites that do certificate spamming should then be obvious to the user, who can then avoid them like any other spamming site

This kind of scheme would be rejected today. Installing user certificates modifies state outside an origin and thus cannot just be resolved by the user deciding not to visit the site anymore.

Client certificates are also mired with legacy assumptions by other uses. Deployments expect that certificates from the system-wide store be usable which constrains changes.

This should be using an origin-scoped storage API. We already have more of those than I can count. I believe WebCrypto intentionally declined to add a new one and (correctly) leans on existing ones. We cannot add random APIs for every niche use case that comes up. Instead, we should add a minimal set of APIs that allow maximal flexibility and power while satisfying other goals. (Easy to develop for, satisfies security policies, etc.) keygen and x-x509-user-cert satisfy none of this. They're pure legacy.
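(Editor's sketch of the origin-scoped flavour of this, using only existing APIs - WebCrypto plus IndexedDB; the database name, store name, and key label are made up for illustration.)

  // Open a per-origin IndexedDB database to hold enrollment keys.
  function openKeyDb() {
    return new Promise(function (resolve, reject) {
      var req = indexedDB.open('enrollment', 1);
      req.onupgradeneeded = function () { req.result.createObjectStore('keys'); };
      req.onsuccess = function () { resolve(req.result); };
      req.onerror = function () { reject(req.error); };
    });
  }

  crypto.subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    false,                       // non-extractable: the private key never leaves this origin
    ['sign', 'verify']
  ).then(function (keyPair) {
    return openKeyDb().then(function (db) {
      // CryptoKey objects are structured-cloneable, so the pair can be persisted
      // per origin without the raw key material ever being exposed to script.
      var tx = db.transaction('keys', 'readwrite');
      tx.objectStore('keys').put(keyPair, 'enrollment-key');
      return new Promise(function (resolve) { tx.oncomplete = resolve; });
    });
  });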

This is also relevant:
 
Note that in the WebID-TLS authentication protocol we can get the certificate to be returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents ( see details in the documents http://www.w3.org/2005/Incubator/webid/spec/ ).

In that case, you have no need for persistent storage and can simply hold the key in memory while waiting for the certificate to be minted.
 
It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and only control of the browser, is much more secure.

If you can't trust your origin then your enrollment application is already bust. The web's security model assumes that code running in an origin is running on behalf of that origin, just like native code must avoid arbitrary code exec on a traditional platform. In MIT's enrollment system, for instance, the origin is already trusted to handle the user's password, a more valuable credential than the certificates it mints.

Perhaps WebID's enrollment application doesn't have a password it verifies. But defending an origin from itself is not consistent with the browser's security model. We can build mechanisms to help an origin maintain its integrity (CSP, SRI) or help an origin subdivide itself (<iframe sandbox> or this per-page suborigins proposal I've seen floating around), but trying to defend an origin from *all* of itself doesn't make sense. That boils down to moving entire applications into the browser with one-off single-use-case APIs (like keygen and x-x509-user-cert). That is not how to develop a platform. It would be absurd to add a playChess() API to protect your chess win rate from a compromised https://chess.example.com origin.


David

henry...@gmail.com

unread,
2 Aug 2015 20:33:12
to blink-dev, henry...@gmail.com, sle...@google.com, sligh...@chromium.org

On Sunday, 2 August 2015 18:25:34 UTC+2, David Benjamin wrote:
On Sat, Aug 1, 2015 at 4:27 AM <henry...@gmail.com> wrote:
Skimming the list for some arguments on the danger of client certificate generation I found this.

On Tuesday, 28 July 2015 23:28:44 UTC+2, Alex Russell wrote:
[snip]

On Tue, Jul 28, 2015 at 1:47 PM, David Benjamin <davi...@chromium.org> wrote:
[snip]

This is allowing a website to inject a certificate and key pair into the global key store for use *by* your native email client. Now, this is also not thaaaat uncommon of an inter-application integration point, but it's one whose ramifications are extremely unclear. And the way it is done with <keygen> is pretty horrific. Websites can spam the key store with persistent private keys, all of which are orphaned and useless. Only a week later, when the CA has approved your certificate request, does the key then get bound to a certificate and become usable. But, even then, we can either do it without prompting (and now my store is further spammed) or we can do it with a prompt that no user will understand ("would you like to add this certificate to your certificate store?").

That's a stronger argument and one that resonates with me.

It looks like this would be quite easy to fix on the browser side by:

1) placing the generated private key with metadata ( when was it generated, on which site, what form fields were filled in, etc ) in a browser specific store
2) if the certificate is then downloaded, use all this metadata to remind the user of when this was done. The further back it is, the more detail obviously needs to be given to remind the user of what he was trying to do.  

This does not address the key-spamming issue..

We were promised by Ryan Sleevi

Installing client certificates is a risky and dangerous operation that can break the ability for a user to securely browse the web. This is true for ALL platforms I'm concerned about (e.g. all the platforms on Chrome). I cannot make this more abundantly obvious, but the sheer act of installing a client certificate is a _dangerous_ operation [snip]

and all we have at the moment is a potential spam issue that is not yet clearly specified. I suppose I am to imagine a user going to a site which populates its forms with <keygen> elements, so that each time the user submits a form the browser creates a public/private key pair, sends the public key thus generated along with the other form elements to the server, and then what...? Up to this point of the story the site has just managed to make usage of its pages slow, as generating public/private key pairs is CPU-intensive. How is that different from a site where some bad JS just uses up CPU resources, or a very slow server? What is the server going to do with the public key to spam and endanger the user's computer? It could return a certificate to the client immediately or at some later point in time, but how is that going to compromise something? Please give a detailed scenario.
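(Editor's sketch of the form pattern being described, assuming a browser that still implements <keygen>; the endpoint and field name are placeholders.)

  // A page builds a form containing a keygen element.
  var form = document.createElement('form');
  form.method = 'POST';
  form.action = '/signup';
  var keygen = document.createElement('keygen');
  keygen.name = 'pubkey';
  keygen.keytype = 'rsa';
  keygen.challenge = 'challenge-string-from-server';
  form.appendChild(keygen);
  document.body.appendChild(form);
  // On submit the browser mints a key pair, stores the private key in the
  // platform key store, and posts only the signed public key (SPKAC) in the
  // 'pubkey' field. The private key stays behind whether or not a matching
  // certificate is ever returned and installed - which is the orphaned-key
  // concern quoted above.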
 
 Or are you proposing that keygen include a user prompt too? Both this and the prompt below are going to be incomprehensible to most users

As I said I think Firefox does or did have a prompt asking the user if he wanted to add the certificate to his store. Of course the design of this part should not be left up to cryptographers, as that would most likely make it incomprehensible to most users. 

Again we were promised a proof of a fundamental security issue, and all we have is a claim that a few people are unable to think of a good user interface, and, from that premise, the conclusion that it cannot be done. But from my lack of skill at playing the piano it does not follow that no one can play the piano, and from your lack of skill at UI design it does not follow that other, gifted, designers cannot succeed. This reminds me that people used to repeat everywhere, as matter-of-fact truth, that Unix could never succeed because the command line was too difficult to use: OS X and Android have shown just how wrong they were ( and that was not because the command line became popular! )

 
3) sites that do certificate spamming should then obvious to the user who can then avoid them like any other spamming site

This kind of scheme would be rejected today. Installing user certificates modifies state outside an origin and thus cannot just be resolved by the user deciding not to visit the site anymore.

So the idea is, I suppose, again that you go to a site that sends you certificates which - even after a good UI helps you understand what you are about to do - you nevertheless install. Then you go to another site that asks you to log in, and to do so you need to select a certificate. So what is the attack vector here? You suddenly have a large number of certificates from some spam sites you visited pop up? Why not give the user a chance to delete them then? Problem solved.
 
Client certificates are also mired with legacy assumptions by other uses. Deployments expect that certificates from the system-wide store be usable which constrains changes.

Can you detail these problems, or point to a place where this has been detailed carefully? For the moment this is hand waving.
 
This should be using an origin-scoped storage API. We already have more of those than I can count. I believe WebCrypto intentionally declined to add a new one and (correctly) leans on existing ones. We cannot add random APIs for every niche use case that comes up. Instead, we should add a minimal set of APIs that allow maximal flexibility and power while satisfying other goals. (Easy to develop for, satisfies security policies, etc.) keygen and x-x509-user-cert satisfy none of this. They're pure legacy.

They are not legacy since 
1. storing public/private keys in local storage is a security problem ( the private key is much more widely visible than it need be ), whereas it has not yet been shown to be a security problem for plain old certificates
2. X.509 certificates have been usable to authenticate across domains for the past 15 years, whereas at present I have only seen hand-waving attempts using completely alpha JS APIs that are only on the drawing board and that may be usable to do this, or may not be. 
Yes, we are not installing DLLs here, just certificates ( which are signed documents ) into a store that is more widely accessible.
 
 
Note that in the WebID-TLS authentication protocol we can get the certificate to be returned IMMEDIATELY. There is no need to go through a CA "verification" procedure. The extra information could be added and removed later by the CA by adding or removing relations from the WebID Profile document or from other linked documents ( see details in the documents http://www.w3.org/2005/Incubator/webid/spec/ ).

In that case, you have no need for persistent storage and can simply hold the key in memory while waiting for the certificate to be minted.
 
It's just a bad mechanism all around. A much much better mechanism would be, as Ryan describes below, for the enrollment application to construct a PKCS#12 bundle (or some other format) with both key and certificate and then we pass that bundle off somewhere to import as one unit. Before the two halves are correlated, storage of the private key is the enrollment application's responsibility and falls under the isolation and other policies of any other web application storage.

I can't quite tell from the description, but this raises alarm bells, as it sounds like it would allow the private key to leak out to a wider world. The way things work currently, where the private key is under the full and only control of the browser, is much more secure.

If you can't trust your origin then your enrollment application is already bust. The web's security model assumes that code running in an origin is running on behalf of that origin, just like native code must avoid arbitrary code exec on a traditional platform. In MIT's enrollment system, for instance, the origin is already trusted to handle the user's password, a more valuable credential than the certificates it mints.

Here my feeling is that there is a lot more to be done to improve the JS security model. I am not the only one to think so as projects such as http://cowl.ws/ reveal. It would be a pity to use the weakness of the current JS security model to weaken something that is stronger. 

Clearly a system where the private key is guaranteed to not leave the browser storage gives a lot stronger security guarantees. This may prove to be very useful later with a stronger JS security model. It's certainly worth considering. 

 
Perhaps WebID's enrollment application doesn't have a password it verifies. But defending an origin from itself is not consistent with the browser's security model. We can build mechanisms to help an origin maintain its integrity (CSP, SRI) or help an origin subdivide itself (<iframe sandbox> or this per-page suborigins proposal I've seen floating around), but trying to defend an origin from *all* of itself doesn't make sense. That boils down to moving entire applications into the browser with one-off single-use-case APIs (like keygen and x-x509-user-cert). That is not how to develop a platform. It would be absurd to add a playChess() API to protect your chess win rate from a compromised https://chess.example.com origin.

yes, so there are improvements to be made. Currently the browser has a very strong security mechanism for keeping a private key in inaccessible storage. Let's not remove strong security features in favor of weaker ones. Rather let's try to increase the security of the web all round.
 


David

henry...@gmail.com

unread,
2 Aug 2015 20:53:07
to blink-dev, henry...@gmail.com, melvinc...@gmail.com, rsl...@chromium.org
Thanks for these interesting pointers.

A lot of this though is still research projects, relying on very new technologies, none of which have yet been tested to see if they allow us to use authentication the way I am currently able to do it with X.509 certificates using WebID.

Reading it earlier today I wondered how a random site I come across would know what iframe or service worker to communicate with, given that it does not yet know who I am? Currently this is done using the well-known NASCAR pattern, that is: a site one goes to lists all the top centralised providers with which one can log on. ( see https://indiewebcamp.com/NASCAR_problem ) The NASCAR solution to the problem just reinforces centralised players, is bad for the web, and is a UI disaster.

How do you deal with this without the user needing to type in a URL for his account? With WebID the browser presents a set of certificates which the user can then just select from using point and click. Furthermore, that is recognisably part of the chrome.
 

Jeffrey

min...@sharp.fm

unread,
4 Aug 2015 19:50:11
to Security-dev, blin...@chromium.org, sle...@google.com
On Tuesday, 28 July 2015 21:46:47 UTC+2, Ryan Sleevi wrote:

> Primary eng (and PM) emails
>
> rsl...@chromium.org
>
> Summary
> This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].

Please dont.

If - and only if - a compelling replacement(s) for keygen are introduced and become widely and practically available, *then* consider dropping keygen. Until that time please leave it alone.

For an example of the embarrassment and chaos caused when Mozilla removed the crypto.signText API without checking with anybody, and without offering any replacement, see this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1083118

If keygen has shortcomings, fix them.

Regards,
Graham
--

Ryan Sleevi

unread,
4 Aug 2015 20:28:33
to min...@sharp.fm, blink-dev
On Tue, Aug 4, 2015 at 4:50 PM, <min...@sharp.fm> wrote:
For an example of the embarrassment and chaos caused when Mozilla removed the crypto.signText API without checking with anybody, and without offering any replacement, see this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1083118

I wouldn't call that embarrassment and chaos - that was Mozilla being good stewards of the platform and removing Browser-specific public APIs that have never been implemented by any other browser, nor would they be. While Mozilla ultimately decided to fund development of an extension to replace it, it did so primarily because it was clear that the software vendors were selling products with a clear intent to "only support Firefox", and were unwilling or unable to engineer solutions that were browser agnostic.

This sort of gets to the core point - whether or not it's a healthy function of a browser to encourage and enable developers to write sites that are browser-specific. The answer is, for most browsers, "Definitely not". That is, openness and interoperability is key.

So then the next question is whether <keygen>, window.crypto.signText, generateCRMFRequest, application/x-x509-*-cert, or any of these other "PKI management" functions have a role in the open web platform. And the answer there is, again, no; they're artifices of an early time when security was an afterthought and a rational, interoperable web platform that can scale to billions of devices - from your phone to your PC to your TV - was not a priority.

I know Mozilla does not view this as a success story - that is, that governments and banks rely on these non-standard features. Our own experiences with the vendors selling these governments/banks software is that they are some of the worst actors for standards compliance, and consistently fail to properly implement what they're supposed to for smart cards, so it comes as no surprise that they would use a variety of non-standard features (and worse, unnecessarily so, since they ended up developing plugins to support other browsers and they could have just done the same in Firefox).

However, none of this is relevant to <keygen>, for which the metrics - both crawler and real world usage - show there's virtually no usage, and there are alternatives for what Chrome has implemented (I clarify this to ensure that no improper parallels to Firefox are drawn, but even still, alternatives exist for Firefox as well)

Jeffrey Yasskin

unread,
4 Aug 2015 21:20:15
to Henry Story, blink-dev, Melvin Carvalho, Ryan Sleevi
Service Workers are only needed to make this work offline, which isn't
really in the WebID scope anyway. You can do the online part with just
iframes and Web Crypto, neither of which is a research project.
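(Editor's sketch of the iframe-plus-WebCrypto flow being suggested; the identity-provider URL, the message shapes, and the loadKeyPairFromIndexedDB helper are all hypothetical.)

  // On the relying page: embed the identity-provider iframe and ask it to
  // sign a login challenge with a key that never leaves the provider's origin.
  var idp = document.createElement('iframe');
  idp.src = 'https://idp.example/connect.html';
  document.body.appendChild(idp);

  window.addEventListener('message', function (event) {
    if (event.origin !== 'https://idp.example') return;
    // event.data.signature proves control of the key; send it to this site's
    // server to finish the login.
  });

  idp.onload = function () {
    idp.contentWindow.postMessage({ challenge: 'nonce-from-server' },
                                  'https://idp.example');
  };

  // Inside https://idp.example/connect.html, roughly:
  //   window.addEventListener('message', function (event) {
  //     loadKeyPairFromIndexedDB()           // hypothetical helper returning a
  //       .then(function (keyPair) {         // non-extractable CryptoKey pair
  //         return crypto.subtle.sign(
  //           { name: 'ECDSA', hash: 'SHA-256' }, keyPair.privateKey,
  //           new TextEncoder().encode(event.data.challenge));
  //       })
  //       .then(function (signature) {
  //         event.source.postMessage({ signature: signature }, event.origin);
  //       });
  //   });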

> Reading it earlier today I wondered how a random site I come across would
> know what iframe or service worker to communicate with, given that it does
> not yet know who I am? Currently this is done using the well-known NASCAR
> pattern, that is: a site one goes to lists all the top centralised providers
> with which one can log on. ( see https://indiewebcamp.com/NASCAR_problem )
> The NASCAR solution to the problem just reinforces centralised players, is
> bad for the web, and is a UI disaster.
>
> How do you deal with this without the user needing to type in a URL for his
> account? With WebID the browser presents a set of certificates which the
> user can then just select from using point and click. Furthermore, that is
> recognisably part of the chrome.

Doesn't WebID already assume users know their identifying URL? Can you
have the FOAF file point to the web page that stores the private key?
Is the problem just that you want browsers to autofill a URL? You
don't need the whole <keygen> feature for that.

Jeffrey

henry...@gmail.com

unread,
5 Aug 2015 01:37:55
to blink-dev, henry...@gmail.com, melvinc...@gmail.com, rsl...@chromium.org
Thanks. I suppose it is something like this that Mozilla Persona used...
 

> Reading it earlier today I wondered how a random site I come across would
> know what iframe or service worker to communicate with, given that it does
> not yet know who I am? Currently this is done using the well-known NASCAR
> pattern, that is: a site one goes to lists all the top centralised providers
> with which one can log on. ( see https://indiewebcamp.com/NASCAR_problem )
> The NASCAR solution to the problem just reinforces centralised players, is
> bad for the web, and is a UI disaster.
>
> How do you deal with this without the user needing to type in a URL for his
> account? With WebID the browser presents a set of certificates which the
> user can then just select from using point and click. Furthermore, that is
> recognisably part of the chrome.

Doesn't WebID already assume users know their identifying URL?

The user does not need to remember or type the full URL to use it, which is
why it is one of the most user-friendly authentication methods on the web,
and it has worked in all browsers for the past 15 years. All it needs are minor
UI improvements to make it excellent ( in the case of Firefox, and for some reason
on Linux, the UI is so bad it is clear it was written by a student of UI design with no
more talent than someone using Visual Basic for the first time in his life ).

When landing on a resource that requires authentication a popup window
that is clearly part of the chrome appears. This asks the user to select among
a set of certificates. The user just has to click one of the certificates to get going.

There is a video I put together of this a few years ago here:

It would be easy to make the cert selection much more user friendly by 
for example showing information from the WebID profile, so that people updating
their profile could immediately see the change in their selection box: this would
help create a tie between their home page and the certificate.

Instead of removing all of these very well thought out features of current browsers,
there are various ways to make it evolve into something that is competitive and 
( who knows? )  compatible with FIDO  but certainly more decentralised out of the box. 
WebID + TLS is the only user friendly secure  decentralised  authentication scheme 
for a distributed social web that  actually works in current browsers. 

It is clear that it can be improved at many levels - and for that
I am willing to help out with consulting, as it is clear from this thread that
this subject is complex and easy for specialists to get lost in. But removing this
feature will end up costing a fortune as you will need to add those features back
later, in a different guise.

henry...@gmail.com

unread,
5 Aug 2015 01:55:05
to blink-dev, min...@sharp.fm, sle...@google.com


On Wednesday, 5 August 2015 02:28:33 UTC+2, Ryan Sleevi wrote:


On Tue, Aug 4, 2015 at 4:50 PM, <min...@sharp.fm> wrote:
For an example of the embarrassment and chaos caused when Mozilla removed the crypto.signText API without checking with anybody, and without offering any replacement, see this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=1083118

I wouldn't call that embarrassment and chaos - that was Mozilla being good stewards of the platform and removing Browser-specific public APIs that have never been implemented by any other browser, nor would they be. While Mozilla ultimately decided to fund development of an extension to replace it, it did so primarily because it was clear that the software vendors were selling products with a clear intent to "only support Firefox", and were unwilling or unable to engineer solutions that were browser agnostic.

This sort of gets to the core point - whether or not it's a healthy function of a browser to encourage and enable developers to write sites that are browser-specific. The answer is, for most browsers, "Definitely not". That is, openness and interoperability is key.

yes, and keygen has been tested and is now part of html5. 


Often what happens is that something is first tried out in the wild, then if it shows potential  it is standardised. Tim Berners-Lee first wrote a web browser and server without going through a standards body ( though he relied on well-tested standards such as TCP/IP, DNS, etc. to get going ).  Then, as the need was felt to come to agreement, some pieces were standardised in existing bodies, e.g. the URL spec, and then, as specific standardisation needs appeared, the W3C appeared to help the process move along.

In the same way, keygen was non-standard to start off with, even though it was deployed in at least 3 browsers for about 10 years.  Then it was standardised as part of html5, as you can see here:
  http://www.w3.org/TR/html5/forms.html#the-keygen-element
HTML5 is a living standard, so it can be improved. 
 

So then the next question is whether <keygen>, window.crypto.signText, generateCRMFRequest, application/x-x509-*-cert, or any of these other "PKI management" functions have a role in the open web platform. And the answer there is, again, no; they're artifices of an early time when security was an afterthought and a rational, interoperable web platform that can scale to billions of devices - from your phone to your PC to your TV - was not a priority.

They show that people at Netscape had a much more ambitious vision than yours. Netscape was out to replace the
OS with the Web - a vision that is still alive with Android, Firefox OS, and others. In that view the keystore is PART of the web browser, not a separate application, because the web browser's and the OS's features are designed to grow together.


I know Mozilla does not view this as a success story - that is, that governments and banks rely on these non-standard features.

They are standard: they are in HTML5! 
 
Our own experiences with the vendors selling these governments/banks software is that they are some of the worst actors for standards compliance, and consistently fail to properly implement what they're supposed to for smart cards, so it comes as no surprise that they would use a variety of non-standard features (and worse, unnecessarily so, since they ended up developing plugins to support other browsers and they could have just done the same in Firefox).

However, none of this is relevant to <keygen>, for which the metrics - both crawler and real world usage - show there's virtually no usage, and there are alternatives for what Chrome has implemented (I clarify this to ensure that no improper parallels to Firefox are drawn, but even still, alternatives exist for Firefox as well)

Well if none of the above is relevant, and your only argument is that crawls of the open web have found that secure authentication methods are not visible, then please just cut the rest of the arguments out.  


 

Ryan Sleevi

okunmadı,
5 Ağu 2015 14:43:175.08.2015
alıcı Henry Story, Graham Leggett, blink-dev


On Aug 4, 2015 10:55 PM, <henry...@gmail.com> wrote:
>

> yes, and keygen has been tested and is now part of html5. 
>
>  http://www.w3.org/TR/html5/forms.html#the-keygen-element

I encourage you to review the history behind this. Even at the time of inclusion, there was significant disagreement, and its inclusion was historic documentation, not aspirational inclusion. And what is in the spec is, again, not what is implemented, so pointing to the spec as proof of standardization is highly misleading.

>
> Often what happens is that something is first tried out in the wild, then if it shows potential  it is standardised.

And when it doesn't, and the experiment fails, it's removed.

The experiment had failed.

> In the same way, Keygen was non standard to start off with, even though it was deployed in at least 3 browsers for about 10 years.  

Never interoperably.

> They are standard: they are in HTML5! 

My entire comment was about the bits that Mozilla only ever implemented - the very thing I was replying to. window.crypto.signText and generateCRMFRequest, two Firefox only features that were never implemented by another browser, but depressingly and unnecessarily became a core part of certain software vendors' offering to government and banking clients. There's no reason for that.

Again, given the opposition to <keygen> by one major browser (who objected at the time to even including in the spec - and which Chrome didn't support for several years after anyways), given our fundamental security concerns, and given Mozilla's vastly different implementation (which they too are looking to remove), it's somewhat silly to argue this is something with consensus and a bright future.

You keep suggesting it is merely an issue of UI, as you have for four years. I've explained repeatedly to you why it isn't. I wrote the Chrome implementation for multiple platforms. I maintain the code underlying Firefox's implementation. I'm extensively familiar with Microsoft's cryptographic APIs down to the assembly level. I've worked with governments and finance on the systems that have relied on such features, and work extensively with CAs and the web PKI, and participate in the IETF TLS WG. This is not a matter of me not understanding certificate systems, as you've suggested, but rather being all too painfully aware of the non-future such systems have, and the many fundamental problems that WebID has.

At this point, I'm not sure what more I can do for you. I am 100% confident that secure and viable alternatives exist for the things that are reasonable, and 100% confident that some of what WebID wants is hostile to security and privacy and thus unreasonable for the platform. I don't think it's reasonable to suggest holding deprecation hostage until Chromium engineers write an alternative, much as was done to Mozilla engineering (not by WebID explicitly, to be clear), but I am wholly convinced - knowing the WebID spec AND how many of the implementations work and have worked for years - that WebID will find suitable and viable alternatives once motivated.

Since it seems we will continue to disagree on even the most basic, core points - such as the documented history of <keygen> and its status of interoperability - and because the security and privacy concerns are wholly being dismissed - I think it best that this be my last reply to you, for I fear we have nothing more productive to offer, and it seems that no matter how much evidence or history provided, we will still keep circling on these points.

The exciting part for you should be that although <keygen> is a demonstrable technological dead end, now, more than ever, there are ample means and opportunity to explore ways of creating new APIs that work for the web. There are ample means to polyfill high-level APIs based on the low-level primitives available, such as WebCrypto and Service Workers, and browsers more than ever provide ample opportunity to explore even new low-level primitives, such as via extensions. This gives you plenty of tools to explore different shapes of a new API that can address the many fundamental, by design, flaws of <keygen>, and then propose them for standardization if they end up being meaningful. In the process, it can also help the WebID supporters discover how much, if not all, of what they want can be replaced with a simple JavaScript file.

Melvin Carvalho

okunmadı,
5 Ağu 2015 15:20:125.08.2015
alıcı Ryan Sleevi, Henry Story, Graham Leggett, blink-dev
On 5 August 2015 at 20:43, 'Ryan Sleevi' via blink-dev <blin...@chromium.org> wrote:


On Aug 4, 2015 10:55 PM, <henry...@gmail.com> wrote:
>

> yes, and keygen has been tested and is now part of html5. 
>
>  http://www.w3.org/TR/html5/forms.html#the-keygen-element

I encourage you to review the history behind this. Even at the time of inclusion, there was significant disagreement, and its inclusion was historic documentation, not aspirational inclusion. And what is in the spec is, again, not what is implemented, so pointing to the spec as proof of standardization is highly misleading.

>
> Often what happens is that something is first tried out in the wild, then if it shows potential  it is standardised.

And when it doesn't, and the experiment fails, it's removed.

The experiment had failed.

> In the same way, Keygen was non standard to start off with, even though it was deployed in at least 3 browsers for about 10 years.  

Never interoperably.

> They are standard: they are in HTML5! 

My entire comment was about the bits that Mozilla only ever implemented - the very thing I was replying to. window.crypto.signText and generateCRMFRequest, two Firefox only features that were never implemented by another browser, but depressingly and unnecessarily became a core part of certain software vendors' offering to government and banking clients. There's no reason for that.

Again, given the opposition to <keygen> by one major browser (who objected at the time to even including in the spec - and which Chrome didn't support for several years after anyways), given our fundamental security concerns, and given Mozilla's vastly different implementation (which they too are looking to remove), it's somewhat silly to argue this is something with consensus and a bright future.

You keep suggesting it is merely an issue of UI, as you have for four years. I've explained repeatedly to you why it isn't. I wrote the Chrome implementation for multiple platforms. I maintain the code underlying Firefox's implementation. I'm extensively familiar with Microsoft's cryptographic APIs down to the assembly level. I've worked with governments and finance on the systems that have relied on such features, and work extensively with CAs and the web PKI, and participate in the IETF TLS WG. This is not a matter of me not understanding certificate systems, as you've suggested, but rather being all too painfully aware of the non-future such systems have, and the many fundamental problems that WebID has.

At this point, I'm not sure what more I can do for you. I am 100% confident that secure and viable alternatives exist for the things that are reasonable, and 100% confident that some of what WebID wants is hostile to security and privacy and thus unreasonable for the platform. I don't think it's reasonable to suggest holding deprecation hostage until Chromium engineers write an alternative, much as was done to Mozilla engineering (not by WebID explicitly, go be clear), but I am wholly convinced - knowing the WebID spec AND how many of the implementations work and have worked for years - that WebID will find suitable and viable alternatives once motivated.


Ryan, thanks for summarizing the case.  I think perhaps your familiarity with WebID might not be as up to date as it could be.  By coincidence I was reading a post entitled "Mistaking Authentication for Identification" today, on an unrelated matter, and this seems a common misconception of the technology.

WebID is primarily about identifying yourself on the web in a decentralized way.  It is actually security agnostic, and authentication agnostic.  The goal is to create identity on the web that breaks out of silos.

Related is timbl's post about social cloud storage ( http://www.w3.org/DesignIssues/CloudStorage.html )

As a pure identity system, I think you will find it is actually used.  Indeed, bradfitz from google has in the past said google would adopt it (but he may have moved on to other things now) and facebook have made a good effort at an implementation (we didn't even expect them to be first!).  I can testify that there are millions of webid identities on the web across hundreds of sites (still tiny compared with the web at large) but not zero.

The problem with identity on its own is that it's quite a limited, read-only experience.  Think back to the way the web was 15 years ago when Rich McManus wrote his seminal article on the "read write web".  Then came paying online bills, blogs, social networks and more.

Our goal with webid was for users to be able to authenticate without necessarily building the huge password silos that have been commonplace.  We didn't know how to do it, until it was noticed that client-side authentication was built into most browsers, and has been for over a decade.

The main advantage of this is that it is tested and we don't have a 10-year lead time to wait to get it to users.

The system works quite well and, while we acknowledge KEYGEN is not perfect, in our small community we have yet to have a single security violation reported in 4 years.

The main issue we have come across is that there is a perception that the certificate dialog is confusing for end users.  For example, firefox has a "remember this choice" button, but it doesn't actually remember that choice.  Other things, like it being text based rather than text and avatar based, make it look dated to some people.

I can testify to this as Mark Shuttleworth singled out WebID specifically as a technology to evaluate for the Ubuntu platform.  I was on the call and the feeling was that while the UI might be OK for "geeks" it would be a challenge to roll it out to your grandmother, and that the lead time for getting browser vendors to change was too uncertain to adopt it at that time.  This was about 4 years ago, and a lost opportunity for our community.

While the cert UI has improved a bit in 4 years, it's a lot less than the advances we've seen in other areas of the chrome.  We've filed bug reports and presented suggestions, but browser vendors have said they haven't had time to prioritize modernizing this component.  At least not yet.

More recently, informal corridor conversations with Mozilla have suggested that they may be prepared to accept a pull request for an improved UI, but don't have the manpower to do it themselves yet, so that's a big step forward.

The WebID community would love to use other auth methods to augment the current offerings, but it has proved a challenge so far.  Please don't think we are closed to suggestions; we are just a small community of grass-roots volunteers trying to use the tools available.  We have looked at the Web Crypto API, but it has been a challenge to use it without introducing central points of failure.  Work on WebID+RSA is in progress; maybe that's something you'd like to advise us on?

We would love to hear your opinions on how this can be done, but we've yet to hear feedback on the list about possible strategy.  What would make it easy for you to provide feedback?  Writing to the mailing list?  Could we open a GitHub repo for issues?  Something else?

So in summary, WebID is perhaps used more than some may think, and we'd love to improve it.  No one is trying to hold anyone hostage.  But please hold off deprecating things a little longer, until we can agree on good, usable and tested alternatives, and we'd really appreciate it if some resource could be spent making what's there today a little more usable.

Ryan Sleevi

okunmadı,
5 Ağu 2015 15:41:135.08.2015
alıcı Melvin Carvalho, blink-dev, Graham Leggett, Henry Story


On Aug 5, 2015 12:20 PM, "Melvin Carvalho" <melvinc...@gmail.com> wrote:
>
>  And this seems a common misconception of the technology.

It doesn't matter the why, so much as the how, and whether alternatives exist for that. And they do.

>  I can testify that there are millions of webid identities on the web across hundreds of sites (still tiny compared with the web at large) but not zero.

As someone responsible for creating over a thousand of those trying to dig into bugs, I suspect that the numbers here are far more misleading.

> Our goal with webid was for users to be able to authenticate without necessarily building huge password silos that have been commonplace.  We didnt know how to do it, until it was noticed that client side authentication was built into most browsers, and has been for over a decade.

And it doesn't work with HTTP/2, and is significantly simplified in TLS 1.3 - both in ways incompatible with WebID without reengineering it.

>
> The main advantage of this is that it is tested and we dont have a 10 year lead time to wait for getting it to users. 

But it isn't tested. Like, literally, it isn't tested - in Firefox or Chrome - because the interactions with the platform can't be reliably tested due to the side effects.

Which is yet another problem.

>
> The system works quite well, and in our small community, which we acknowledge KEYGEN is not perfect, we have yet to have a single security violation reported in 4 years.

That's great, but I didn't say WebID was fundamentally insecure, I said <keygen> and application/x-x509* are. And they're insecure in a way that no one would ever report to you, because it's a browser security issue fundamentally.

> The main issue we have come across is that there is a perception that the certificate dialog is confusing for end users.  For example firefox has a "remember this choice" button, but it doesnt actually remember that choice.  Other things like it being text based, rather than, text and avatar based make it look dated to some people.

It's far, far more than that, but you don't see those issues because you're held back by the basics. And let's not forget that logout is fundamentally incompatible with the Web as designed and works. It really is. TLS sessions don't work like you want. HTTP connection pooling doesn't work like you need. Browser logic to support client certificates where they are used in abundance (enterprises) is incompatible with WebID. Even mobile platforms like Android and iOS explicitly don't work like you need. And you won't solve that with UI.

The premise is that <keygen> is necessary because client certs are necessary, but client certs are a technological dead end for the How of what WebID is trying to accomplish - irreparably so, and they have been, as you acknowledge - and so <keygen>, as a vector to them, is a flawed conclusion as well.


> So in summary, WebID is perhaps used more that some may think, and we'd love to improve it.  No one is trying to hold anyone hostage.  But please hold off deprecating things a little longer, until we can agree on good usable and tested alternatives, and we'd really appreciate it if some resource could be spent making what's there today be a little more usable.

I've been communicating these points to folks involved in WebID for years, on the bugs, at W3C meetings, and in emails. The WebID work has not seen any measurable advance on these measures in four years. It's also clear, from your own numbers, that the number of sites affected is in the hundreds, if that. I certainly have evidence that the numbers of actual Chrome users encountering WebID are on a similar order of magnitude. To the extent it will be disruptive to WebID, I readily acknowledge, but to the impact at the web and to users, it's demonstrably and admittedly a vocal minority, where the number of emails we've sent alone discussing it is within an order of magnitude of sites actually affected.

Graham Leggett

okunmadı,
5 Ağu 2015 16:18:115.08.2015
alıcı Ryan Sleevi, Melvin Carvalho, blink-dev, Henry Story
On 05 Aug 2015, at 9:41 PM, Ryan Sleevi <sle...@google.com> wrote:

> The premise is that <keygen> is necessary because client certs are necessary, but the client certs are a technological dead end for the How of what WebID is trying to accomplish - irreparably so, and has been, and you acknowledge - and so too does <keygen> as a vector to those represent a flawed conclusion.

Please don’t dictate to the rest of us how we should use our technology. Your job is to implement the standards.

Otherwise you’re no better than Microsoft, arguing for the sake of arguing, and being the reason we can’t have nice things. Leave the keygen tag alone.

Regards,
Graham


henry...@gmail.com

okunmadı,
6 Ağu 2015 05:05:526.08.2015
alıcı blink-dev, henry...@gmail.com, min...@sharp.fm, sle...@google.com


On Wednesday, 5 August 2015 20:43:17 UTC+2, Ryan Sleevi wrote:


On Aug 4, 2015 10:55 PM, <henry...@gmail.com> wrote:
>

> yes, and keygen has been tested and is now part of html5. 
>
>  http://www.w3.org/TR/html5/forms.html#the-keygen-element

I encourage you to review the history behind this. Even at the time of inclusion, there was significant disagreement, and its inclusion was historic documentation, not aspirational inclusion. And what is in the spec is, again, not what is implemented, so pointing to the spec as proof of standardization is highly misleading.

>
> Often what happens is that something is first tried out in the wild, then if it shows potential  it is standardised.

And when it doesn't, and the experiment fails, it's removed.

The experiment had failed.

> In the same way, Keygen was non standard to start off with, even though it was deployed in at least 3 browsers for about 10 years.  

Never interoperably.


You were previously arguing that keygen was not standard. I pointed out that it is standard. Now you are saying that
it is standard but there was disagreement in making the standard! How unusual!

I have been able to use keygen on all platforms except IE. There we need to use an ActiveX workaround. It would be nicer if MS implemented keygen too of course, and perhaps this would be a place to work out what improvements are needed.
 

> They are standard: they are in HTML5! 

My entire comment was about the bits that Mozilla only ever implemented - the very thing I was replying to. window.crypto.signText and generateCRMFRequest, two Firefox only features that were never implemented by another browser, but depressingly and unnecessarily became a core part of certain software vendors' offering to government and banking clients. There's no reason for that.


A very different situation from that of <keygen>, let us note.
 

Again, given the opposition to <keygen> by one major browser (who objected at the time to even including in the spec - and which Chrome didn't support for several years after anyways), given our fundamental security concerns, and given Mozilla's vastly different implementation (which they too are looking to remove), it's somewhat silly to argue this is something with consensus and a bright future.


What is interesting in all of this is that what is being removed here is not just keygen, etc.; it's the built-in capability of the browser to do asymmetric key cryptography. You are of course keeping the algorithms of asymmetric cryptography, but are making them difficult to use across domains, where their real value is to be found.

Now there have been huge political pressures to remove or weaken crypto in the browser. You seem to be one of those who is bending rather than holding strong.
 

You keep suggesting it is merely an issue of UI, as you have for four years. I've explained repeatedly to you why it isn't. I wrote the Chrome implementation for multiple platforms. I maintain the code underlying Firefox's implementation.


If I were you I would not boast about the work at Firefox. How come you never managed to improve the UI here even though
many bug reports have been made stating all the obvious ways of improving it? Here is a picture of a certificate selection box made in Firefox a few years ago:





The obvious errors are:
  * the deep grey box "Test User's we..." is a certificate selection box displayed as a menu where only 1 item is visible.  It does not show which other options are available, when clearly that should be the first thing shown ( as it is in Chrome, IE, and Safari )
  * The details panel shows a lot of information repeated from the selection box, most of it useless for helping the user make a decision
  * Instead of saying at the top "the site auth.fcsn.eu has requested.... ", it is written out cryptically on a new line

Here is a much better interface in Chrome, which starts to make a lot more sense.


 
But you could go much further by taking inspiration from the work of the FIDO Alliance, for example.
 

I'm extensively familiar with Microsoft's cryptographic APIs down to the assembly level.


We are speaking of the UI layer, the human-computer interaction layer that encompasses psychology, sociology and computing, and not the low-level machine assembly level.  Your failure to get any progress on the UI at Firefox clearly
shows that you are not qualified to judge the UI layer. ( It is very, very rare that people with the skills to go down to
the assembly level also have the time to think about the vastly different but just as complicated social/psychological layer. )
 

I've worked with governments and finance on the systems that have relied on such features, and work extensively with CAs and the web PKI, and participate in the IETF TLS WG. This is not a matter of me not understanding certificate systems, as you've suggested, but rather being all too painfully aware of the non-future such systems have, and the many fundamental problems that WebID has.


You are again just trying to argue by authority. Please write up in detail as a thesis the problems at all layers that you see and why you think they cannot be solved, then we'll have a discussion concerning that document. The arguments you have put forward in this thread are weak at best.

In the meantime sites like Facebook are creating huge centralised authentication systems where they track everything everybody does. But perhaps that is what you want?

All that WebID allows us to do is build a distributed, secure social web of trust. The main idea is so minimal there is very little that you can fault with WebID. The faults you see are those of the UI design of the certificate selection systems, TLS ( which is being improved with TLS 1.3 ), etc...
 

At this point, I'm not sure what more I can do for you. I am 100% confident that secure and viable alternatives exist for the things that are reasonable, and 100% confident that some of what WebID wants is hostile to security and privacy and thus unreasonable for the platform.


Please write up why this is the case and don't just state it and ask us to take your word for it.
Is it better to have a centralised surveillance system or decentralised security? WebID enables a decentralised web of trust. I can see this may be problematic to some large players.
 

I don't think it's reasonable to suggest holding deprecation hostage until Chromium engineers write an alternative, much as was done to Mozilla engineering (not by WebID explicitly, go be clear), but I am wholly convinced - knowing the WebID spec AND how many of the implementations work and have worked for years - that WebID will find suitable and viable alternatives once motivated.


Of course there are ways of proceeding with broken browsers. We do just that all the time.  One thing one can do is move all of it to server-to-server communication as a fall back for broken browser designs. So if you are trying to kill decentralised authentication systems you are going to fail in any case.
 

Since it seems we will continue to disagree on even the most basic, core points - such as the documented history of <keygen> and its status of interoperability - and because the security and privacy concerns are wholly being dismissed - I think it best that this be my last reply to you, for I fear we have nothing more productive to offer, and it seems that no matter how much evidence or history provided, we will still keep circling on these points.


You have only just brought up the security and privacy concerns, but have not described what they are. 
 

The exciting part for you should be that although <keygen> is a demonstrable technological dead end, now, more then ever is there ample means and opportunity to explore ways in creating new APIs that work for the web. There is ample means to polyfill high level APIs based on the low level primitives available, such as WebCrypto and Service Workers, and browsers more than ever provide ample opportunity to explore even new low level primitives such as via extensions.


Well, let's wait to see how well these fare before removing <keygen>, then.

As I pointed out, these are (from what I can see) limited:

 * they rely on the Same Origin Policy, which is a huge weakness in the WebApps space, as it conflates the publisher of a piece of code with the author of that piece of code. The only way to deal securely with that is to create subdomains for each piece of JS that one has, as otherwise the least reliable piece of code can destroy the security of the whole system. Perhaps part of this is being taken care of by http://cowl.ws/ , but I am not sure where that is going. But even with this subdomain creation it does not help the user, who cannot know what the security policies are for each web site.

* The certificate selection box built into the chrome has the advantage of being unphishable, of protecting my private keys, and of allowing me to authenticate if I want wherever I want without having to type a username or password. It is not clear that there is a way to do this without the user typing an identifier when authenticating to a web site.

This gives you plenty of tools to explore different shapes of a new API that can address the many fundamental, by design, flaws of <keygen>, and then propose them for standardization if they end up being meaningful. In the process, it can also help the WebID supporters discover how much, if not all, of what they want can be replaced with a simple JavaScript file.


There seem to be a huge number of problems with JS security that you don't even want to consider. Given how little success we have had at getting any feedback from people like you on improving some obvious flaws in the UI design of certificate selection boxes, I doubt we'll get much more here either. After all, you seem to be fundamentally opposed to authentication across sites - and so, really, to a Social Web. You are clearly in the silos-everywhere camp. There is an obvious interest in companies like Google and Facebook controlling the APIs of this at different layers, such as Facebook Connect etc., giving them a much deeper oversight into the privacy of all people on earth, as you should be aware if you followed the Snowden revelations.

samuel...@gmail.com

okunmadı,
25 Ağu 2015 20:21:4125.08.2015
alıcı blink-dev, sle...@google.com
One of the main differences between Webcrypto and keygen, besides the ability to work with the server to create TLS certs, is that the only secure, browser-chrome-driven system that seems to exist for user security is TLS client auth.

Webcrypto solves a lot of issues in theory by making private keys non-extractable by choice and the API immutable, but the weakness that still exists in an overarching fashion is the nature of control the server has over the executing JS. Ideally, there should be some sort of visual component to Webcrypto that allows it to work purely in the interest of the user, such that a malicious server is unable to purposefully get ahold of a private key - whether by using handrolled crypto libs that appear to work as intended but aren't native, thereby exposing the private key to the server invisibly, or even just by the server setting extractable to true and getting the private key invisibly (invisibly assuming the person isn't combing their network requests).

There needs to be some sort of browser-chrome-driven way for the user to be confident that a malicious server simply could not get ahold of their private keys, because their private keys were generated in browser, unextractably. The only interface that exists for anything like this at all is keygen and the client cert chrome. I do not believe this is an absurd use case given that web crypto is a standard now. The key discovery standard, which no one has implemented yet, may address some of these things, I am unsure, but the point still stands that whether client auth TLS is a good thing or not, there really should be some sort of in-browser crypto that truly works in favor of the user, even so that a malicious server cannot "win" by dictating all facets of the JS execution.
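
For concreteness, this is roughly what the "non-extractable by choice" part looks like with today's WebCrypto API - a minimal sketch only, and note that all of it still runs in page JS the server chose to serve, which is exactly the weakness described above:

    // Sketch: generate a signing key the page cannot export afterwards.
    (async () => {
      const keyPair = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        false,                                  // extractable: false
        ["sign", "verify"]);

      // The key can still be used for signing...
      const data = new TextEncoder().encode("challenge from the server");
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" }, keyPair.privateKey, data);

      // ...but it cannot be exported; this call rejects.
      try {
        await crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
      } catch (e) {
        console.log("private key is not extractable:", e.name);
      }
    })();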


On Tuesday, July 28, 2015 at 3:46:51 PM UTC-4, Ryan Sleevi wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


[1] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[2] https://wiki.mozilla.org/CA:Certificate_Download_Specification


Motivation

History: <keygen> was an early development by Mozilla to explore certificate provisioning. Originally Firefox exclusive, it was adopted by several mobile platforms (notably, Nokia and Blackberry), along with support for the certificate installation mime-types. During the First Browser Wars, Microsoft provided an alternative, via ActiveX, called CertEnroll/XEnroll. When iOS appeared on the scene, <keygen> got a boost, as being the initial way to do certificate provisioning, and along with it brought support into Safari. It was then retro-spec'd into the HTML spec.


Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.

1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]

2) <keygen> is unique in HTML (Javascript or otherwise) in that by design, it offers a way to persistently modify the users' operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both which use a per-application keystore)

3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.

5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.

6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.


[3] https://connect.microsoft.com/IE/feedback/details/793734/ie11-feature-request-support-for-keygen

[4] https://lists.w3.org/Archives/Public/public-html/2009Sep/0153.html

[5] https://blog.whatwg.org/this-week-in-html5-episode-35

[6] https://developer.mozilla.org/en-US/docs/Web/HTML/Element/keygen

[7] https://html.spec.whatwg.org/multipage/forms.html#the-keygen-element

[8] https://developer.mozilla.org/en-US/docs/Archive/Mozilla/JavaScript_crypto/generateCRMFRequest


Compatibility Risk

While there is no doubt that <keygen> remains used in the wild, both the use counters [9] and Google's own crawler indicate that it's use is extremely minimal. Given that Mozilla implements a version different than the HTML spec, and given that Microsoft has made it clear they have no desire to implement, the compatibility risk is believed to be low in practice.


Mozilla is also exploring whether or not to remove support for the application/x-x509-*-cert types [10], but has not yet (to my knowledge) discussed <keygen> support - either aligning with the (much more limited) spec, extending the spec with the Mozilla-specific extensions, or removing support entirely.


On the application/x-x509-*-cert support, there is a wide gap of interoperability. Chrome does not support multiple certificates, but Firefox does. Firefox will further reorder certificates that are inconsistent with what was specified, offering a non-standard behaviour. Chrome does not support application/x-x509-ca-cert on Desktop, and on Android, defers to the platform capabilities, which further diverge from Firefox. Both browsers have the underspecified behaviour of requiring the user having a matching key a-priori, except that's not detailed as to how it works. Firefox also handles various mis-encodings (improper DER, DER when it should be base64), which Chrome later implemented, but is not well specified.


Alternative implementation suggestion for web developers

The primary use cases for <keygen>, from our investigations, appear to be tied to one of two use cases:

- Enterprise device management, for which on-device management capabilities (Group Policies, MDM profiles, etc) represent a far more appropriate path. That is, you would not expect a web page to be able to configure a VPN or a users proxy, so too should you not expect a webpage to configure a user's system key store.

- CA certificate enrollment, for which a variety of alternatives exist (e.g. integrated to the web server, CA specific-tools such as SSLMate or protocols such as Let's Encrypt, etc). Here again, it does not seem reasonable to expect your web browser to configure your native e-mail client (such as for S/MIME)


Within the browser space, alternatives exist such as:

- Use the device's native management capabilities if an enterprise use case. On Windows, this is Group Policy. On iOS/Android, this is the mobile device management suites. On OS X, this is Enterprise settings. On ChromeOS, there is chrome.enterprise.platformKeys [11] for enterprise-managed extensions.

- Use WebCrypto to implement certificate enrollment, then deliver the certificate and (exported) private key in an appropriate format for the platform (such as PKCS#7) and allow the native OS UI to guide users through installation of certificates and keys.
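
A rough sketch of the WebCrypto-based enrollment flow suggested in the last bullet above (the "/enroll" endpoint and its response format are hypothetical; a real CA would define its own protocol and return something the platform can import, e.g. a PKCS#7 or PKCS#12 blob):

    // Sketch only: generate a key pair, send the public key to a CA, and
    // export the private key so certificate + key can be bundled for the
    // native OS UI to install.
    (async () => {
      const keyPair = await crypto.subtle.generateKey(
        { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
          publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
        true,                                   // extractable, so it can be packaged
        ["sign", "verify"]);

      const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
      const response = await fetch("/enroll", { method: "POST", body: spki });
      const certificateDer = await response.arrayBuffer();

      const pkcs8 = await crypto.subtle.exportKey("pkcs8", keyPair.privateKey);
      // ...bundle certificateDer + pkcs8 into a platform format and hand it off.
    })();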


On some level, this removal will remove support for arbitrarily adding keys to the users' device-wide store, which is an intentional, by-design behaviour.


While a use case exists for provisioning TLS client certificates for authentication, such a use case is inherently user-hostile for usability, and represents an authentication scheme that does not work well for the web. An alternative means for addressing this use case is to employ the work of the FIDO Alliance [12], which has strong positive signals from Microsoft and Google (both in the WG), is already supported via extensions in Chrome [13], with Mozilla evaluating support via similar means [14]. This offers a more meaningful way to offer strong, non-phishable authentication, in a manner that is more privacy preserving, offers a better user experience, better standards support, and more robust security capabilities.

Henry Story

okunmadı,
26 Ağu 2015 01:36:3526.08.2015
alıcı samuel...@gmail.com, public-webid, blink-dev, sle...@google.com
On 26 Aug 2015, at 02:21, samuel...@gmail.com wrote:

One of the main differences between Webcrypto and keygen besides the ability to work with the server to create TLS certs is that the only secure browser chrome driven system that seems to exist for user security is TLS client auth. Webcrypto solves a lot of issues in theory by making private keys non-extractable by choice and the API immutable but the weakness that still exists in an overarching fashion is the nature of control the server has over the executing JS. Ideally, there should be some sort of visual component to Webcrypto that allows it to work purely in the interest of the user, such that a malicious server is unable to purposefully get ahold of a private key whether by using handrolled crypto libs that appear to work as intended but arent native and therefore exposing the private key to the server invisibly or even just the server setting extractable to true and getting the private key invisibly (invisibly assuming the person isn't combing their network requests). There needs to be some sort of browser chrome driven way for the user to be confident that a malicious server simply could not get ahold of their private keys because their private keys were generated in browser, unextractably. The only interface that exists for anything like this at all is keygen and client cert chrome. I do not believe this is an absurd use case given that web crypto is a standard now. The key discovery standard which no one has implemented yet may address some of these things, I am unsure, but the point still stands that whether client auth TLS is a good thing or not, there really should be some sort of in browser crypto that truly works in favor of the user even so that a malicious server cannot "win" by dictating all facets of the JS execution.

I think one can make a slightly stronger point. You speak here of a malicious server. What people tend to think of here is a server that has been broken into, and root taken. Perhaps code in the Apache or Nginx server has been changed surreptitiously. The problem is: nothing this dramatic needs to have taken place.

With the javascript same-origin policy and some HTTP upload method like WebDAV - even if well secured to only allow a specific set of users to upload anything - any javascript uploaded anywhere on the origin can potentially get access to those private keys in the browser's local storage and then use them to impersonate the user. This is not true of uploaded html, css, or any other data. Furthermore, since JS is a Turing-complete language, it can be near impossible even for an advanced programmer to tell if a piece of uploaded code is actually doing something malicious.

The only way to secure a server then is to not allow any upload anywhere of any JS. But then how would one use the JS Crypto libs?

If the JS crypto libs had a way to get the user to sign things through the chrome and everyone be guaranteed that the JS never had access to the private keys, then it would be possible to distinguish who had signed what with the private key x of some public key. 

This would of course then require some good UI design in the chrome - which is unavoidable in the end. But I at least don't believe that this is an impossible task. It's just more complex for things that go beyond the micro capacities of TLS client certificate authentication. 

Henry

Social Web Architect

David Benjamin

okunmadı,
26 Ağu 2015 11:34:3226.08.2015
alıcı Henry Story, samuel...@gmail.com, public-webid, blink-dev, sle...@google.com
On Wed, Aug 26, 2015 at 1:36 AM Henry Story <henry...@gmail.com> wrote:
On 26 Aug 2015, at 02:21, samuel...@gmail.com wrote:

One of the main differences between Webcrypto and keygen besides the ability to work with the server to create TLS certs is that the only secure browser chrome driven system that seems to exist for user security is TLS client auth. Webcrypto solves a lot of issues in theory by making private keys non-extractable by choice and the API immutable but the weakness that still exists in an overarching fashion is the nature of control the server has over the executing JS. Ideally, there should be some sort of visual component to Webcrypto that allows it to work purely in the interest of the user, such that a malicious server is unable to purposefully get ahold of a private key whether by using handrolled crypto libs that appear to work as intended but arent native and therefore exposing the private key to the server invisibly or even just the server setting extractable to true and getting the private key invisibly (invisibly assuming the person isn't combing their network requests). There needs to be some sort of browser chrome driven way for the user to be confident that a malicious server simply could not get ahold of their private keys because their private keys were generated in browser, unextractably. The only interface that exists for anything like this at all is keygen and client cert chrome. I do not believe this is an absurd use case given that web crypto is a standard now. The key discovery standard which no one has implemented yet may address some of these things, I am unsure, but the point still stands that whether client auth TLS is a good thing or not, there really should be some sort of in browser crypto that truly works in favor of the user even so that a malicious server cannot "win" by dictating all facets of the JS execution.

I think one can make a slightly stronger point. You speak here of a malicious server. What people tend to think of here is a server that has been broken into, and root taken. Perhaps code in the Apache or Ngingx server has been changed surrpetitiously. The problem is: nothing this dramatic needs to have taken place.

With javascript same origin policy and some HTTP upload method like WebDAV - even if well secured to only allow a specific set of users to upload anything - any javascript uploaded to the origin anywhere can potentially get access to those private keys in the browser local storage and then use it to impersonate the user. This is not true of uploaded html, css, or any other data. Furthermore: since JS is turing complete language, it can be near impossible for an advanced programmer to tell if a piece of code uploaded is actually doing something malicious. 

That's not quite how it works. Stray JS files are fine because they don't execute when you navigate to them. It's stray HTML files that are a problem because those are executable formats.

(NB: Don't have stray files on your code origin in general. There are interesting interactions with MIME sniffing, headers to disable it, CSP, etc. This mail is not thorough web security advice. Please consult your local web security expert and copy of The Tangled Web.)
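
For the upload case, one common (non-exhaustive, and again not thorough advice) combination of response headers for serving untrusted files looks along these lines:

    Content-Type: application/octet-stream
    X-Content-Type-Options: nosniff
    Content-Disposition: attachment
    Content-Security-Policy: sandbox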
 
The only way to secure a server then is to not allow any upload anywhere of any JS. But then how would one use the JS Crypto libs?

No, the way to do this is to separate your code and data. Don't allow random HTTP upload of executable resources (i.e. HTML) under the origin that speaks for your code. If you need to host user-uploaded files then you have a number of options, the simplest of which is to use a different origin (e.g. github.io, googleusercontent.com). WebDAV is probably not the best mechanism and accordingly you don't see much use of it.
 
If the JS crypto libs had a way to get the user to sign things through the chrome and everyone be guaranteed that the JS never had access to the private keys, then it would be possible to distinguish who had signed what with the private key x of some public key. 

The origin that owns the key is the one that's allowed to use WebCrypto or any other JS API on it. That's how platforms with application isolation work. Each application has its own state and can do what it likes with it.

If there's another application that's not mutually-trusting, that application MUST live on a different origin. <keygen> does not save you from that; you're fixating on the keys, but keys are just random state. Your auth application may have other state too---a cookie here, some preference there, some user contact information here, a live document with a password form there---It would be just as unacceptable to leak those as it would be for keys.

Should the other origin wish to do key operations, which is what I think you're alluding to with user-uploaded JS, we have cross-origin channels, such as postMessage. The auth origin is free to interpret postMessage and friends as it likes.

The browser tells the user what top-level origin is in control, and the rest is your business. YOU get to decide what "good UI design" means. YOU get to decide what kinds of operations to allow. Is a signing oracle okay? Maybe, maybe not. Maybe you want to require that the message signed always includes the origin's name. Maybe you allow the operation, but only after user prompt. The auth origin can implement all that and more.
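
A minimal sketch of that pattern on the auth origin's side - isTrustedEmbedder() and signingKey (a non-extractable CryptoKey held by this origin) are assumptions here, and the policy is entirely the origin's own:

    window.addEventListener("message", async (event) => {
      if (!isTrustedEmbedder(event.origin)) return;   // the origin's own policy check
      // Bind the caller's origin into the signed message so the signature
      // can't be replayed as something else (event.data assumed to be a string).
      const payload = new TextEncoder().encode(event.origin + "|" + event.data);
      const signature = await crypto.subtle.sign(
        { name: "ECDSA", hash: "SHA-256" }, signingKey, payload);
      event.source.postMessage({ signature }, event.origin);
    });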

The browser doesn't know any of that. If we try to enforce things at the level of keys, no "good UI design" will make "hey, is it okay to sign this hexdump with that key?" usable. You want questions like "hey, is it okay for Fancy TODO List to read your email?" Only applications know what their private state means. We can only enforce high-level things: application isolation, what application is active, don't access the webcam, etc.

David

israel...@gmail.com

okunmadı,
28 Ağu 2015 18:46:0528.08.2015
alıcı blink-dev, sle...@google.com
Microsoft agrees with Google that keygen is not very useful anymore. Keygen doesn’t appear to be rich enough to deal with strong authentication requirements of web applications. Currently, we are investigating MDM as a solution for device management and believe that WebCrypto will play a role in solving national ID problems.  Furthermore, we believe that the future direction for strong authentication, on the web, revolves around FIDO.

rsl...@chromium.org

okunmadı,
31 Ağu 2015 20:52:4331.08.2015
alıcı blink-dev, sle...@google.com


On Tuesday, July 28, 2015 at 12:46:51 PM UTC-7, Ryan Sleevi wrote:

Primary eng (and PM) emails

rsl...@chromium.org


Summary

This is an intent to deprecate, with the goal of eventually removing, support for the <keygen> element [1], along with support for special handling for the application/x-x509-user-cert [2].


This is a pre-intent, to see if there are any show-stopping use cases or compatibility risks that have hitherto been unconsidered.


While the discussion was certainly long and fruitful, and ultimately sparked other browsers to evaluate their support, it does not appear any new information or use cases ultimately came to light beyond those already provided.

Given that, I'd like to propose we continue by actively deprecating the <keygen> tag (e.g. "Intent to Deprecate"). Usage sits at/below .0002%, with a slight uptick following the kick-off of this thread.

With knowledge that the HTML Spec has deprecated Keygen ( https://github.com/whatwg/html/commit/1b438067d84da2ee95988842cdd8fe2655444936 ) and that since the tag was first documented, UAs are not required to support any <keygen> algorithms ( https://html.spec.whatwg.org/#the-keygen-element ), it seems reasonable to begin to warn developers via deprecation notices. The infrastructure will still remain in place for the time being (e.g. this is a "Deprecate", not "Deprecate and Remove"), although not indefinitely.

OWNERs, does that work? Chime in with your LGTMs :)

samuel...@gmail.com

okunmadı,
31 Ağu 2015 22:40:4631.08.2015
alıcı blink-dev, sle...@google.com, rsl...@chromium.org
Going back to the last few posts, I understand why this stuff is being phased out with my main disappointment being what I state above, that it is the only browser chrome driven authentication that seems to exist.

However, seeing as how it looks like deprecation is imminent and furthermore I get to a degree, in principle, why it is occurring, I would like to know if there is any movement in attempting to push forward the key discovery spec that for some reason was split from the WebCrypto spec. Having pre-provisioned named keys would certainly alleviate a lot of the issues of striking public key certs from the browser. It would be even nicer if browsers built in some kind of key management interface such that Javascript could trigger a browser chrome prompt for "Do you allow example.com to perform named key generation?"

This would also address the final shortcoming of the WebCrypto API that I mentioned above: no matter how immutable the global crypto instance is and no matter how opaque the private keys are, the entire thing could simply be bypassed by someone writing JS that just makes extractable private keys or even implements the algorithms directly. There are no backdoors into WebCrypto but there is also no wall, so to speak. With this, you would be able to ensure the browser created the key pair in a secure and opaque way, not trust that the Javascript served to you did what it was supposed to and used WebCrypto with extractable set to false.
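
The bypass is as simple as it sounds - nothing in the platform stops the page's own JS from asking for an extractable key and shipping it off (the "/collect" endpoint here is hypothetical):

    (async () => {
      const keyPair = await crypto.subtle.generateKey(
        { name: "ECDSA", namedCurve: "P-256" },
        true,                                   // extractable: true
        ["sign", "verify"]);
      const jwk = await crypto.subtle.exportKey("jwk", keyPair.privateKey);
      await fetch("/collect", { method: "POST", body: JSON.stringify(jwk) });
    })();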

Finally, named keys are technically on track but with no real progress, except that key management is out of scope, I think, but realistically needed for implementation. So I am not pulling something out of thin air here.

To answer a few things above:

I referred to weaknesses with WebCrypto when there is a malicious server, which I see implies a server that was taken over. A malicious server may be a bad term. There are plenty of reasons to desire a server in which no trust is placed but still is used. For example: someone wants to host signed documents. The server shouldn't need to be trusted, but it can still host signatures and documents. This is a completely legitimate reason for a server that is not rooted or even actively malicious to be barred from private keys: the guarantee that comes with it even if no one on the other end is actually trying to forge things.

Another thing, after reading above about how you can't just solve crypto problems with good UI design, I totally understand the point. After thinking about what good browser chrome for low level crypto would be, I started pondering writing a Chrome plugin as proof of concept in which low level subtle crypto operations were called through the plugin instead and the plugin acted as a gatekeeper in which little popups would say "hey, this website generated a key pair nicknamed foobar" and later perhaps "This website wants to sign blah blah blah with foobar".

Besides realizing how complex this would be having never actually written a Chrome plugin before, and also the general shortcomings with structured clone messaging destroying equality checks for opaque objects like CryptoKeys, I was forced to confront how subtle crypto simply can't be done in a user friendly way because of all the potential oracles and other security weaknesses you give an attacker by letting them use keys even if they can't see them, depending on many factors I am not really qualified to enumerate on.

After working through it on paper for a bit, I realized the futility.

The closest thing to a replacement for keygen seems to be progress on the key discovery front with named keys, with the additional out of bounds scoping of a named key provisioning hook in the web crypto API that triggers some sort of browser chrome indication that the browser is generating a key on your behalf for use with the current origin. However, once again this would still be plagued by the fact that the key itself can be used by the server-controlled website in many ways that expose weaknesses to the key or abuse its privileges: for example the site could sign anything it wanted in the user's name, decrypt things and get the message defeating encryption, or attack the key as the WebCrypto API essentially provides a ton of oracles and a lot of crypto breaks when this is possible.

Since this brings us back to square one, again, it seems like the shot in the dark solution which has absolutely no basis for implementation at this point in time is a non-subtle WebCrypto API that allows for common high level cryptographic operations done in favor of the user, with the assistance of the browser. For example, in addition to named key provisioning, there is a simple "sign" operation which leads to a browser prompt about "do you wish to sign this text: blah" or "do you wish to sign this binary data" or whatever. This, unfortunately is on no road maps and just me speculating. Even, say, if there was something as simple as non-subtle implementations of public key authentication protocols that were browser assisted using the WebCrypto API.. essentially a replacement for client SSL certs where the authentication was separated from the keygen tag, the automatic server assisted cert generation/installation and TLS connections themselves, but where the browser still exerted some level control in favor of the user.

Tl;dr what I would like to know is, if anything, is the key discovery API on track for implementation? This seems like the closest thing developers could get to a replacement for client authentication, if the scope of the spec was expanded a bit.

-
Samuel

Ryan Sleevi

okunmadı,
31 Ağu 2015 22:54:3231.08.2015
alıcı samuel...@gmail.com, blink-dev, Ryan Sleevi
On Mon, Aug 31, 2015 at 7:40 PM, <samuel...@gmail.com> wrote:
However, seeing as how it looks like deprecation is imminent and furthermore I get to a degree, in principle, why it is occurring, I would like to know if there is any movement in attempting to push forward the key discovery spec that for some reason was split from the WebCrypto spec.

No, it remains as problematic from privacy, security, and usability as it always has.
 
not trust that the Javascript served to you did what it was supposed to and used WebCrypto with extract set to false.

If you can't guarantee this, all pretense of security is lost.
 
I referred to weaknesses with WebCrypto when there is a malicious server, which I see implies a server that was taken over. A malicious server may be a bad term. There are plenty of reasons to desire a server in which no trust is placed but still is used. For example: someone wants to host signed documents. The server shouldn't need to be trusted, but it can still host signatures and documents. This is a completely legitimate reason for a server that is not rooted or even actively malicious to be barred from private keys: the guarantee that comes with it even if no one on the other end is actually trying to forge things.

And such systems can and do exist with Web Crypto.
 
for example the site could sign anything they wanted in the user's name, decrypt things and get the message defeating encryption, or attack the key as the WebCrypto API essentially provides a ton of oracles and a lot of crypto breaks when this is possible.

Correct.
 
This, unfortunately is on no road maps and just me speculating.

Such systems did (kinda) exist, by Firefox, called window.crypto.signText. It was, as you can expect, problematic. The space has been continually explored in a variety of ways - from signing 'plain text' (insufficient for most use cases), signing XML (of which XML DSig is a nightmare, and XAdEs is a joke), signing PDF (with all the attendant issues), signing JSON (not as human readable as you might hope), to even signing manifests of all the things contributing to the DOM (notwithstanding any/all mutations to the DOM that may have modified the internal JS behaviours but then removed themselves from the DOM).

On paper, it looks an easy task. In practice, it's quite unrealistic - which is why the Web Crypto WG ruled it out of scope.
 
Even, say, if there was something as simple as non-subtle implementations of public key authentication protocols that were browser assisted using the WebCrypto API.. essentially a replacement for client SSL certs where the authentication was separated from the keygen tag, the automatic server assisted cert generation/installation and TLS connections themselves, but where the browser still exerted some level control in favor of the user.

This exists today. It's part of the work of the FIDO Alliance, which provides a workable JS API for a constrained set of operations that allow for browser-mediated signatures that can be used for a variety of cases, from providing end-to-end security (vis-a-vis attestations of Token Binding / Channel ID) to providing bootstrap-able, discover-able signature schemes.
 
Tl;dr: what I would like to know is whether the key discovery API is on track for implementation.

There is near zero chance of it being implemented by Chrome :) 

samuel...@gmail.com

unread,
31 Aug 2015 23:06:31
to blink-dev, samuel...@gmail.com, rsl...@chromium.org, sle...@google.com
I suppose all those conclusions mirror the ones I came to but hoped smarter people had solved :) When you say "And such systems can and do exist with Web Crypto" have they successfully managed to guarantee the "intended" usage of the WebCrypto API or are you just saying that if you can't trust the server to even serve non-malicious Javascript, then you already shouldn't be using a web page for the task?

I'll take a closer look at FIDO... it's a bit hard to wrap my head around because so much of the literature seems to be about how you have to buy your way into the group which guarantees you won't get hit with patent lawsuits or something. I didn't dig too deep because that was a bit of a turnoff.

Good to know that the Key Discovery spec is facing pushback... well not "good" as it's disappointing but it's probably the first actual explanation I've seen as to why it's separate and unimplemented. A lot of blog posts around WebCrypto exist expressing disappointment about the key discovery spec not being implemented, but no real answer until now.

Samuel

Philip Jägenstedt

unread,
1 Sep 2015 05:07:06
to Ryan Sleevi, blink-dev, Ryan Sleevi
Do you have a timeline in mind for when the eventual removal should happen, i.e. a date to put in the deprecation message? In either case, LGTM to deprecate.


henry...@gmail.com

unread,
1 Sep 2015 05:32:57
to blink-dev, samuel...@gmail.com, rsl...@chromium.org, sle...@google.com
I opened an issue on the WHATWG's recently-moved-to-GitHub repository, as the move to deprecate keygen there seems to me to have been done over-hastily, following a quick PR which may not have been noticed by the other WHATWG members.


I also added a summary of the discussion in the commit


On Tuesday, 1 September 2015 04:54:32 UTC+2, Ryan Sleevi wrote:


On Mon, Aug 31, 2015 at 7:40 PM, <samuel...@gmail.com> wrote:
However, seeing as how it looks like deprecation is imminent and furthermore I get to a degree, in principle, why it is occurring, I would like to know if there is any movement in attempting to push forward the key discovery spec that for some reason was split from the WebCrypto spec.

No, it remains as problematic from privacy, security, and usability as it always has.
 
not trust that the Javascript served to you did what it was supposed to and used WebCrypto with extract set to false.

If you can't guarantee this, all pretense of security is lost.
 
I referred to weaknesses with WebCrypto when there is a malicious server, which I see as implying a server that was taken over. "Malicious server" may be a bad term. There are plenty of reasons to want a server in which no trust is placed but which is still used. For example: someone wants to host signed documents. The server shouldn't need to be trusted, but it can still host signatures and documents. That is a completely legitimate reason to bar even a server that is not rooted or actively malicious from the private keys: you keep the guarantee that comes with it even if no one on the other end is actually trying to forge things.

And such systems can and do exist with Web Crypto.
 
for example the site could sign anything they wanted in the user's name, decrypt things and get the message defeating encryption, or attack the key as the WebCrypto API essentially provides a ton of oracles and a lot of crypto breaks when this is possible.

Correct.
 
This, unfortunately is on no road maps and just me speculating.

Such systems did (kinda) exist, by Firefox, called window.crypto.signText. It was, as you can expect, problematic. The space has been continually explored in a variety of ways - from signing 'plain text' (insufficient for most use cases), signing XML (of which XML DSig is a nightmare, and XAdEs is a joke), signing PDF (with all the attendant issues), signing JSON (not as human readable as you might hope), to even signing manifests of all the things contributing to the DOM (notwithstanding any/all mutations to the DOM that may have modified the internal JS behaviours but then removed themselves from the DOM).

Yes, it is not surprising that higher-level capabilities require even better UI design. Anything general will tend not to work in the security space, which can be explained in part by the TAG's finding on the "rule of least power": http://www.w3.org/2001/tag/doc/leastPower.html

Which is why precise and narrow uses of keys, such as TLS authentication (potentially extending to HTTP/2 authentication in the future), are more likely to succeed. Signatures are more complex and so would require even more work.
But removing keygen, and with it the ability to do this, is the wrong way to go.
 

On paper, it looks an easy task. In practice, it's quite unrealistic - which is why the Web Crypto WG ruled it out of scope.
 
Even, say, if there was something as simple as non-subtle implementations of public key authentication protocols that were browser assisted using the WebCrypto API.. essentially a replacement for client SSL certs where the authentication was separated from the keygen tag, the automatic server assisted cert generation/installation and TLS connections themselves, but where the browser still exerted some level control in favor of the user.

This exists today. It's part of the work of the FIDO Alliance, which provides a workable JS API for a constrained set of operations that allow for browser-mediated signatures that can be used for a variety of cases, from providing end-to-end security (vis-a-vis attestations of Token Binding / Channel ID) to providing bootstrap-able, discover-able signature schemes.

So here you are arguing that such specialised applications of crypto are in fact possible, and that UIs are being built for them. It's OK, it seems, if it is part of FIDO, but not if it is part of W3C specs.

Is FIDO usable yet, so that we can get some idea of how to build applications in that space? Where should one look?

Melvin Carvalho

unread,
1 Sep 2015 08:25:25
to rsl...@chromium.org, blink-dev, Ryan Sleevi
-1

As has been stated by a number of commentators, including the inventor of the web browser, please do not deprecate this until we can agree that there is an acceptable replacement.

1. KEYGEN is in use, and is part of the HTML spec [1]

2. This is the wrong timing to deprecate because

2.1 Let's Encrypt will issue its first certificate next week (coincidence?), which should lead to a significant uptick in web PKI, particularly in the grass-roots community.  I would expect over 1 million profiles to support keygen soon after Let's Encrypt is ready.

2.2 The FIDO Alliance has not finished its spec

2.3 Ongoing work is being done to try to find replacements, e.g. WebID-RSA; why not try to help us?

3. AFAIK, KEYGEN is used in the wild and no known security breach has been reported

If there are bugs or usability issues in the implementations, let's fix them together with the free, libre and open source community.  As a community, we should not be so strongly influenced by decisions made by Microsoft.  Indeed, part of the motivation for modern free-software browsers was a response to heavy-handed tactics in the 90s.  Google has traditionally been a great ally to the FLOSS and grass-roots communities.  Please continue to be so, by making changes over a more reasonable time frame.

[1] http://www.w3.org/TR/html5/forms.html#the-keygen-element
 

Melvin Carvalho

unread,
1 Sep 2015 09:54:35
to blink-dev


On 30 July 2015 at 16:42, <melvinc...@gmail.com> wrote:


On Tuesday, July 28, 2015 at 9:46:51 PM UTC+2, Ryan Sleevi wrote:

[snip]

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.


A question about MD5.  When you say the signing algorithm for SPKAC is insecure due to the use of MD5, do you have a reference for what makes it insecure?  I have seen mixed reports from a web search: some sources say the preimage attack costs about 2^123 operations, others say a collision attack is cheaper.  Is this actually a problem as of today, or a theoretical problem for the future?  Any reference material on this topic would be helpful.
 

-1 KEYGEN is in use.

This move will be severely detrimental to several grass-roots communities, such as the WebID community. 

[1] https://www.w3.org/community/webid/participants
 

Melvin Carvalho

unread,
1 Sep 2015 09:58:15
to rsl...@chromium.org, blink-dev, Ryan Sleevi
On 1 September 2015 at 02:52, <rsl...@chromium.org> wrote:


On Tuesday, July 28, 2015 at 12:46:51 PM UTC-7, Ryan Sleevi wrote:

[snip]


While the discussion was certainly long and fruitful, and ultimately sparked other browsers to evaluate their support, it does not appear any new information or use cases ultimately came to light beyond those already provided.

Given that, I'd like to propose we continue by actively deprecating the <keygen> tag (e.g. "Intent to Deprecate"). Usage sits at/below .0002%, with a slight uptick following the kick-off of this thread.

I don't think these usage statistics are comparing like for like here.  For example, a daily user may use KEYGEN once and then surf a site for a year before ever needing to visit the tag again.  This would make its usage stats orders of magnitude lower than its utility.

I think there's going to be a significant uptick in usage as Let's Encrypt is rolled out and PKI is delivered to the masses on the web.

More detailed stats would be helpful if you have them.

David Benjamin

unread,
1 Sep 2015 11:40:15
to Melvin Carvalho, rsl...@chromium.org, blink-dev, Ryan Sleevi
On Tue, Sep 1, 2015 at 9:58 AM Melvin Carvalho <melvinc...@gmail.com> wrote:
I think there's going to be a significant uptick in usage as Let's Encrypt is rolled out and PKI is delivered to the masses on the web.

Why would Let's Encrypt do anything here? They're signing server certificates, not client certificates. I don't believe one typically generates those keys in a browser. From the website, they seem to have a command-line tool:


Or are you simply suggesting that Let's Encrypt => more HTTPS => HTTPS is a prereq for doing client certs? I don't think the cost or difficulty of getting a server certificate is the limiting factor in deploying client certificates. There are plenty of properties that deploy HTTPS just fine, but not that many that use client certificates.

David 

Ryan Sleevi

unread,
1 Sep 2015 12:40:50
to Philip Jägenstedt, Ryan Sleevi, blink-dev
On Tue, Sep 1, 2015 at 2:07 AM, Philip Jägenstedt <phi...@opera.com> wrote:
Do you have a timeline in mind for when the eventual removal should happen, i.e. a date to put in the deprecation message? In either case, LGTM to deprecate.

We know that FIDO is looking to replace several of the core use cases in a way that respects users and offers better security, and there's multi-vendor interest and collaboration going on there. So I don't think we want to fully remove it until we've had multiple browsers ship it. Chrome makes it available behind an extension presently, there's an open bug with mixed interest from Firefox (more so from Web developers), and you saw the IE/Edge remarks, so I think we're making progress there.

So I think the timelines of the two are somewhat linked - FIDO's ascent heralds <keygen>'s descent - but we should be communicating that to developers early (via deprecation messages), especially that <keygen> shouldn't be looked to as the future, but as the past.

Since that still leaves the timeline ambiguous: let's say that in a year's time we have fully decided that FIDO as an idea is a flop and should be abandoned (unlikely, but hey, it's the web). What does that mean for <keygen> then? I still think we should proceed with deprecation, for many of the reasons given earlier in the thread.

Given that FIDO hasn't yet shipped in stable, there might be an argument made that we shouldn't show any deprecation until the replacement is fully launched. However, again, for the reasons given on this thread, I think we still want to signal to developers that <keygen> is problematic for new systems.

So yes, it's a potentially prolonged deprecation. However, given that many of the 'key' users of <keygen> (ha ha, so punny) are in the enterprise/systems-management space, and we know upgrade cycles for such systems are on the order of 12-18 months, that still provides a lot of advance signalling for developing/exploring replacements.

henry...@gmail.com

unread,
4 Sep 2015 10:11:04
to blink-dev, sle...@google.com
Sadly a lot of issues and responses got mixed up in this thread. Here I answer one issue in particular, the MD5 one:


On Tuesday, 28 July 2015 21:46:51 UTC+2, Ryan Sleevi wrote:

[snip]

4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.


To address this point in particular I wrote up a detailed response on the WHATWG mailing list 
which I copy below

<keygen> was deprecated a few days ago, and the issue has been taken up by @timbl on the Technical Architecture Group, as the deprecation removes a useful asymmetric public-key cryptography tool available in browsers.

One reason given for deprecating, and one that recurs often, is that keygen uses MD5, which opens the attack vector presented in the very good paper "MD5 considered harmful today", given at the Chaos Communication Congress in Berlin in 2008 by a number of researchers, among them Jacob Appelbaum (aka @ioerror). (That was the time when Julian Assange was working freely on Wikileaks, so you can even hear him ask a question at the end.)

The slides are here:
https://www.trailofbits.com/resources/creating_a_rogue_ca_cert_slides.pdf
The video is here:
http://chaosradio.ccc.de/25c3_m4v_3023.html

In short, they were able to create a fake Certificate Authority (CA): because the CA signed its certificates with MD5 and they were able to create hash collisions, they could take a certificate signed by the CA, change some information in it, and produce their own top-level certificate, with which they could create a certificate for any site they wished! (Pretty awesomely bad - though they did this carefully to avoid misuse.) This is why projects such as the IETF's DANE, DNSSEC, and many other improvements to the internet infrastructure are vital.

This was 7 years ago, so all of this should be fixed by now. There should be no CA signing server certificates with MD5 anymore.

Great. But that has nothing to do with what is going on with <keygen>. The problem may well be that the documentation of <keygen> is misleading here. The WHATWG documentation on keygen currently states:

If the keytype attribute is in the RSA state: Generate an RSA key pair using the settings given by the user, if appropriate, using the md5WithRSAEncryption RSA signature algorithm (the signature algorithm with MD5 and the RSA encryption algorithm) referenced in section 2.2.1 ("RSA Signature Algorithm") of RFC 3279, and defined in RFC 3447. [RFC3279] [RFC3447]

But whether or not keygen wraps the key and signs it with MD5 is of not much importance, since it is the key request we are speaking of here, not the generated certificate!

To summarise how keygen is actually used (see the sketch below):
1. the browser creates a public/private key pair and saves the private key in the secure keychain,
2. and sends the public key in an SPKAC request to the server,
3. which, on receipt of the certificate request and verification of the data, uses it to create a client certificate with any signature algorithm it wants (and so it SHOULD NOT USE MD5: see the CCC talk above),
4. which it returns using one of the x509 MIME types available to it.
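
As a rough sketch of steps 2-4 (this is not the WebID code below - it uses Node.js's crypto.Certificate SPKAC helper, and issueCertificate is a hypothetical stand-in for whatever certificate library the server uses), the point is simply that the server keeps only the public key from the SPKAC and then signs the certificate with whatever algorithm it chooses:

import { Certificate } from "node:crypto";

// Hypothetical certificate builder - stands in for a real library (e.g. the Bouncy Castle code below).
declare function issueCertificate(publicKeyPem: string, opts: { signatureAlgorithm: string }): Buffer;

// spkacField: the base64 SPKAC value posted by the <keygen> form field.
function handleCertificateRequest(spkacField: string): Buffer {
  const spkac = Buffer.from(spkacField, "base64");
  // The MD5-signed SPKAC wrapper is only a proof of possession of the private key;
  // the only thing the server keeps from it is the public key (PEM).
  const publicKeyPem = Certificate.exportPublicKey(spkac).toString("utf8");
  // The certificate itself is signed with a modern algorithm of the CA's choosing.
  const der = issueCertificate(publicKeyPem, { signatureAlgorithm: "SHA256withRSA" });
  // The response is then served with Content-Type: application/x-x509-user-cert.
  return der;
}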

Here is the illustration of this flow that we use in the WebID-TLS spec:
[Figure: Certificate Creation Flow]

To see some real code implementing this, I point you to my ClientCertificateApp.scala code, which receives a certificate request and either returns an error or a certificate. The key parts of the code are extracted below:

def generate = Action { implicit request =>
    certForm.bindFromRequest.fold(
      // the form did not validate: re-display it with the errors
      errors => BadRequest(html.webid.cert.genericCertCreator(errors)),
      // the form validated: return the DER-encoded certificate with the
      // certificate-installation MIME type so the browser installs it
      certreq => {
        Result(
          //https://developer.mozilla.org/en-US/docs/NSS_Certificate_Download_Specification
          header = ResponseHeader(200, Map("Content-Type" -> "application/x-x509-user-cert")),
          body   = Enumerator(certreq.certificate.getEncoded)
        )
      }
    )
}

certForm just takes the data from the HTML form (verifying all fields are OK) and generates a CertReq object (or it can also take a CertReq object and generate a form, so that errors can be shown to the user):

val certForm = Form(
  mapping(
    "CN" -> email,
    "webids" -> list(of[Option[URI]]).
      transform[List[URI]](_.flatten,_.map(e=>Some(e))).
      verifying("require at least one WebID", _.size > 0),
    "spkac" -> of(spkacFormatter),
    "years" -> number(min=1,max=20)
  )((CN, webids, pubkey,years) => CertReq(CN,webids,pubkey,tenMinutesAgo,yearsFromNow(years)))
    ((req: CertReq) => Some(req.cn,req.webids,null,2))
)

The spkacFormatter just returns a public key. ( It plays around with testing the challenge, but I am not sure what that is for - would like to know ).

Anyway, as I wrote above: if successful, the generate method returns an encoded certificate with the right MIME type. And as you can see, we create the certificate with SHA1withRSA:

// the certificate will be signed with SHA1withRSA - no MD5 anywhere in the issued cert
val sigAlgId = new DefaultSignatureAlgorithmIdentifierFinder().find("SHA1withRSA")
val digAlgId = new DefaultDigestAlgorithmIdentifierFinder().find(sigAlgId)
// convert the issuing CA's private key into Bouncy Castle key parameters
val rsaParams = CertReq.issuerKey.getPrivate match {
  case k: RSAPrivateCrtKey =>
    new RSAPrivateCrtKeyParameters(
      k.getModulus(), k.getPublicExponent(), k.getPrivateExponent(),
      k.getPrimeP(), k.getPrimeQ(), k.getPrimeExponentP(), k.getPrimeExponentQ(),
      k.getCrtCoefficient());
  case k: RSAPrivateKey =>
    new RSAKeyParameters(true, k.getModulus(), k.getPrivateExponent());
}

// build the content signer and sign the certificate under construction
val sigGen = new BcRSAContentSignerBuilder(sigAlgId, digAlgId).build(rsaParams);
x509Builder.build(sigGen)

So the MD5 plays no serious role in all this.

This should not be a big surprise. The only thing of value sent to the server is the public key. The server sends back a certificate based on that public key (and on other information it may have about the user). But the only one able to use that certificate is the person holding the private key.

I don't doubt my code could be improved in many places. But this should show how <keygen> is actually used. After all, remember that <keygen> was added to the spec 10 years after it appeared in browsers, and that there was not much discussion about the documentation when it was added.


Ryan Sleevi

unread,
4 Sep 2015 12:18:13
to Henry Story, blink-dev
Henry,

While I appreciate your continued contributions on this matter, unfortunately, you're operating on an incomplete and inaccurate set of information, and much of what you said has little bearing and brings no new information to the table.

Regardless of your views, you've heard from others - and you can indeed see from the mail archives from when the WHATWG first spec'd keygen - that it was *already deprecated* when it was introduced. It was documented for history's sake, as the WHATWG was trying to document the reality of the union of all sorts of browser features that were then un/under-specified, not as an encouragement or endorsement of implementation or usage.

The use of MD5 is required. When vendors explored changing it to support alternative algorithms, it immediately became clear that this was a breaking and incompatible change for the few (limited) <keygen> users. To change this behaviour is expressly a backwards incompatible change - much like you're arguing deprecating is.

You've also completely ignored the many platform security implications, and, in other fora, suggested these are not real. I'm sorry that you may not fully appreciate the inherent risks of violating the Same Origin Policy, nor may you be familiar enough with the keystore APIs to recognize the intrinsic security risks. I'm trying to walk the delicate line between suggesting an argument from authority and dropping multiple POCs for bugs you don't have visibility into. But there are multiple issues, affecting multiple platforms, that are intrinsic to the behaviours of <keygen>, and for which we've found no solutions yet that are particularly acceptable or desirable when weighed against both the intent of <keygen> and the cost of implementing them.

Finally, your timeline is shaky. "10 years after it appeared in browsers" is a generous mischaracterization of the timelines for implementing. Up until very, very recently (in terms of the timescales you're talking about), the only non-feature-phone browser that implemented this was Firefox. As you can again see from the discussion of when <keygen> was added, there was opposition to even adding it, both in the WHATWG and when it came to W3C. Even if you were to disregard every single argument I've made (which you certainly can), the historic evidence is ample as to the concerns from vendors regarding this, none of which have been addressed.

henry...@gmail.com

unread,
4 Sep 2015 13:50:05
to blink-dev, henry...@gmail.com, sle...@google.com


On Friday, 4 September 2015 18:18:13 UTC+2, Ryan Sleevi wrote:
Henry,

While I appreciate your continued contributions on this matter, unfortunately, you're operating on an incomplete and inaccurate set of information, and much of what you said has little bearing and brings no new information to the table.

Regardless of your views, you've heard from others - and you can indeed see from the mail archives from when the WHATWG first spec'd keygen - that it was *already deprecated* when it was introduced. It was documented for history's sake, as the WHATWG was trying to document the reality of the union of all sorts of browser features that were then un/under-specified, not as an encouragement or endorsement of implementation or usage.

The use of MD5 is required. When vendors explored changing it to support alternative algorithms, it immediately became clear that this was a breaking and incompatible change for the few (limited) <keygen> users. To change this behaviour is expressly a backwards incompatible change - much like you're arguing deprecating is.

The whole point of my post is that MD5 is required for the certificate request, but this causes no security threat. The code I pointed to uses keygen and yet allows the server to create a fully secure certificate such as the following:

$ openssl pkcs12 -clcerts -nokeys -in ~/Certificates.p12 | openssl x509 -text

Enter Import Password:
MAC verified OK
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            01:4c:19:67:ea:05
        Signature Algorithm: sha1WithRSAEncryption
        Issuer: CN=WebID, O={}
        Validity
            Not Before: Mar 14 17:39:42 2015 GMT
            Not After : Mar 13 17:49:42 2019 GMT
        Subject: dnQualifier=he...@bblfish.net
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (2048 bit)
                Modulus (2048 bit):
                    00:da:b9:d1:e9:41:f6:f8:5a:08:63:16:9d:0d:b6:
                    32:8d:1d:4a:15:a7:1d:ff:e3:d4:f4:d0:87:52:a5:
                    2f:b1:45:4d:73:58:e4:a5:ec:f3:50:1e:39:24:bc:
                    02:52:f3:00:4b:0b:b2:1a:0d:6b:64:ca:05:3f:0f:
                    bc:b5:a5:4e:c9:3e:be:2d:c9:b9:1e:4c:43:2b:82:
                    78:84:c4:cc:2a:d8:a1:02:b4:6d:2a:20:17:bf:45:
                    d9:d4:c8:8a:56:4d:42:02:34:48:4a:1b:2e:44:6d:
                    bb:4c:d4:38:e7:9c:24:66:ce:31:0f:32:77:73:a7:
                    79:d2:4e:d7:b6:0a:05:a6:18:b9:84:75:7b:94:6d:
                    67:ba:79:f2:e0:64:e6:ae:d3:8b:d6:55:9c:e7:fc:
                    95:02:72:08:23:d5:6d:b1:c0:34:09:93:67:d7:db:
                    27:b6:bd:af:da:8c:c4:83:47:13:3f:4a:14:67:5f:
                    67:5f:b4:84:ce:32:df:66:c1:1a:36:38:fa:84:d5:
                    be:69:b1:a6:f2:38:11:5d:ef:9b:0f:79:bb:25:c0:
                    cb:7e:4a:39:45:9a:08:29:b1:fd:35:c0:d1:db:dd:
                    60:f9:c6:79:d8:94:15:ed:7e:a4:1e:b0:2f:bc:01:
                    6f:c0:e7:92:cb:96:98:c9:f4:db:84:2c:da:d5:b5:
                    f5:c9
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Subject Alternative Name: critical
                URI:http://bblfish.net/people/henry/card#me
            X509v3 Key Usage: critical
                Digital Signature, Non Repudiation, Key Encipherment, Key Agreement, Certificate Sign
            X509v3 Basic Constraints: critical
                CA:FALSE
            Netscape Cert Type: 
                SSL Client, S/MIME
    Signature Algorithm: sha1WithRSAEncryption
        03:25:38:47:76:34:ba:da:0b:40:ea:75:63:98:6b:b0:0b:b6:
        11:85:c7:b1:c4:91:cc:5c:99:a5:b5:01:24:6f:1f:8c:03:39:
        80:03:e7:50:59:9f:b0:48:6e:e7:16:b8:b7:92:6f:31:cd:cc:
        ba:60:40:08:9e:3c:38:5d:19:94:fd:2c:be:6d:84:57:d4:ea:
        7f:54:a7:69:73:aa:37:a4:b8:81:21:0c:65:dc:f1:f6:a3:40:
        d1:18:cf:04:a4:d6:8b:9a:1f:43:c2:67:4a:0e:8d:00:b7:e8:
        49:e3:b7:d5:f9:00:0f:98:32:b2:09:5e:ca:c0:44:37:dc:df:
        3b:57:e0:c2:5a:8a:79:0d:55:7a:4a:73:4a:24:64:27:e5:16:
        78:d4:c9:35:5e:f8:67:9c:e9:41:bd:c6:25:6b:1b:d7:03:c1:
        af:64:d0:e3:0a:ea:58:a4:bc:3a:a4:8f:51:8d:33:58:ed:ba:
        af:3d:b7:75:28:32:33:76:65:80:56:ae:ec:43:db:9e:7e:4b:
        74:f5:88:07:9f:2d:e8:74:f1:89:d1:af:52:34:07:52:f3:54:
        2f:60:fd:de:96:f6:00:67:2e:8f:10:23:e6:af:95:bf:a6:3c:
        61:0d:8c:24:47:cf:52:45:0f:96:ee:ca:3a:69:82:69:3b:20:
        87:06:5c:58
-----BEGIN CERTIFICATE-----
MIIDITCCAgmgAwIBAgIGAUwZZ+oFMA0GCSqGSIb3DQEBBQUAMB0xDjAMBgNVBAMM
BVdlYklEMQswCQYDVQQKDAJ7fTAeFw0xNTAzMTQxNzM5NDJaFw0xOTAzMTMxNzQ5
NDJaMBwxGjAYBgNVBC4TEWhlbnJ5QGJibGZpc2gubmV0MIIBIDALBgkqhkiG9w0B
AQEDggEPADCCAQoCggEBANq50elB9vhaCGMWnQ22Mo0dShWnHf/j1PTQh1KlL7FF
TXNY5KXs81AeOSS8AlLzAEsLshoNa2TKBT8PvLWlTsk+vi3JuR5MQyuCeITEzCrY
oQK0bSogF79F2dTIilZNQgI0SEobLkRtu0zUOOecJGbOMQ8yd3OnedJO17YKBaYY
uYR1e5RtZ7p58uBk5q7Ti9ZVnOf8lQJyCCPVbbHANAmTZ9fbJ7a9r9qMxINHEz9K
FGdfZ1+0hM4y32bBGjY4+oTVvmmxpvI4EV3vmw95uyXAy35KOUWaCCmx/TXA0dvd
YPnGediUFe1+pB6wL7wBb8DnksuWmMn024Qs2tW19ckCAwEAAaNqMGgwNQYDVR0R
AQH/BCswKYYnaHR0cDovL2JibGZpc2gubmV0L3Blb3BsZS9oZW5yeS9jYXJkI21l
MA4GA1UdDwEB/wQEAwIC7DAMBgNVHRMBAf8EAjAAMBEGCWCGSAGG+EIBAQQEAwIF
oDANBgkqhkiG9w0BAQUFAAOCAQEAAyU4R3Y0utoLQOp1Y5hrsAu2EYXHscSRzFyZ
pbUBJG8fjAM5gAPnUFmfsEhu5xa4t5JvMc3MumBACJ48OF0ZlP0svm2EV9Tqf1Sn
aXOqN6S4gSEMZdzx9qNA0RjPBKTWi5ofQ8JnSg6NALfoSeO31fkAD5gysgleysBE
N9zfO1fgwlqKeQ1VekpzSiRkJ+UWeNTJNV74Z5zpQb3GJWsb1wPBr2TQ4wrqWKS8
OqSPUY0zWO26rz23dSgyM3ZlgFau7EPbnn5LdPWIB58t6HTxidGvUjQHUvNUL2D9
3pb2AGcujxAj5q+Vv6Y8YQ2MJEfPUkUPlu7KOmmCaTsghwZcWA==
-----END CERTIFICATE-----



There is no MD5 in there.
( note: watch out that certain viewers automatically add MD5 signatures for ease of use )
 
In short the MD5 in the spkac has no impact.


You've also completely ignored the many platform security implications, and, in other fora, suggested these are not real. I'm sorry that you may not fully appreciate the inherent risks of violating the Same Origin Policy, nor may you be familiar enough with the keystore APIs to recognize the intrinsic security risks. I'm trying to walk the delicate line between suggesting an argument from authority and dropping multiple POCs for bugs you don't have visibility into. But there are multiple issues, affecting multiple platforms, that are intrinsic to the behaviours of <keygen>, and for which we've found no solutions yet that are particularly acceptable or desirable when weighed against both the intent of <keygen> and the cost of implementing them.

Those are other issues. I am trying to address issues one by one in this thread, as all the issues otherwise become completely entangled and difficult to follow. 

Ryan Sleevi

unread,
4 Sep 2015 14:20:58
to Henry Story, blink-dev
On Fri, Sep 4, 2015 at 10:50 AM, <henry...@gmail.com> wrote:
In short the MD5 in the spkac has no impact.

This is not true, and I believe suggests a misunderstanding about the literature related to MD5. I don't know if it's entirely appropriate for the discussion here on blink-dev, especially when so much material on the topic is readily available. 
 
Those are other issues. I am trying to address issues one by one in this thread, as all the issues otherwise become completely entangled and difficult to follow. 

I feel doing so entirely misses the point about the concerns, and does not so much advance a cause as demonstrate part of the point of deprecation. There are multiple SERIOUS issues with <keygen>. We certainly could try to work through each and every one of them - trying to turn our little balsa biplane into a jumbo jet one component at a time, all while gracefully flying through the air with nary a bump - but to attempt that is to ignore that we're not using the right tool for the job; it's time to land, get on a plane suited for the transatlantic task at hand, and progress.

I feel you may have read the argument for deprecation as "We don't know how to solve these, so we're just going to give up" - but that's not quite how it plays out. It's "We know what it would take to solve, we know it'd take a lot of engineering time, we know it'd break a lot of things, and we know that when all is said and done, fully addressing all of the concerns and needs means ending up with something entirely different from what <keygen> is or can [effectively] be transmuted into."

I tried to communicate that in the early messages - that we're very aware of the problems, we're aware of the viable (and non-viable) options, and that in the balance of cost (whether you look at this as a dimension of complexity of code, user experience, overall security, or comprehensibility of the web platform) versus benefit (the low usage, the viable and soon to be viable alternatives), the natural and logical conclusion is that deprecation is the only sensible, reasonable alternative.

I realize you disagree with that analysis. But discussing possible solutions doesn't change their intrinsic cost, you haven't substantially demonstrated any as-yet-unconsidered benefits, and the WebID folks are but one consumer of <keygen> (and seemingly not even the main consumer - which is CAs doing S/MIME certificate provisioning for third-party email applications and enterprises doing device enrollments).

I don't want you to feel as if your voice has not been heard - you and other WebID enthusiasts have brought it up in multiple fora, so I think most of the people involved in these discussions have, in some form, heard your concerns. But I do want you to understand that the goal is not to demonstrate that there's "some" value, but to determine whether or not that value exceeds the costs. It's clear from these threads that you don't seem to fully appreciate the costs, and while some can be explained (such as the use of MD5 as part of the key proof of possession), some can't be fully explained (such as the security implications for the platform), at least not until this feature is removed or disabled, and some require familiarity with the broader set of both implementation challenges and overall Web Platform functionality.

Again, I would encourage you to consider that a more fruitful venture - rather than disagreeing on the cost, or trying to demonstrate value which is already known and factored into the discussion - would be to explore the solutions proposed to you on a technical level, in which multiple people are confident you'll find suitable tools to accomplish your goals.

For example, I see several WebID folks (and I don't recall if you in particular have done so, so apologies) discussing FIDO in the context of biometrics, except that discussion is entirely irrelevant to the aspect you're being encouraged towards, which is the U2F portion of asymmetric key enrollment, exchange, and signing, and the potential to strongly bind those statements to communication channels (such as TLS), and with the full flexibility and control to dictate the user experience, _without_ any of the attendant security and usability concerns raised by your current solution. Hopefully, in doing so, you can realize you're being presented with a technology even more flexible, robust, and secure than what you're currently employing.

But to suggest this biplane would make a good jumbo jet is, unfortunately, missing the point :/

Melvin Carvalho

unread,
4 Sep 2015 14:29:04
to Ryan Sleevi, Henry Story, blink-dev
On 4 September 2015 at 20:20, 'Ryan Sleevi' via blink-dev <blin...@chromium.org> wrote:


On Fri, Sep 4, 2015 at 10:50 AM, <henry...@gmail.com> wrote:
In short the MD5 in the spkac has no impact.

This is not true, and I believe suggests a misunderstanding about the literature related to MD5. I don't know if it's entirely appropriate for the discussion here on blink-dev, especially when so much material on the topic is readily available. 

Could you please state what you believe the attack vector relating to MD5's use in SPKAC and/or KEYGEN is? You brought this up, so it is reasonable to ask what you think it is. Subsequent discussion has shown no attack vector that compromises either identity or key material. If there is one, please elaborate; if not, let's agree that KEYGEN is not insecure.

Ryan Sleevi

unread,
4 Sep 2015 14:33:47
to Melvin Carvalho, Henry Story, blink-dev
On Fri, Sep 4, 2015 at 11:28 AM, Melvin Carvalho <melvinc...@gmail.com> wrote:
Could you please state what you believe the attack vector relating to MD5's use in SPKAC and/or KEYGEN is? You brought this up, so it is reasonable to ask what you think it is. Subsequent discussion has shown no attack vector that compromises either identity or key material. If there is one, please elaborate; if not, let's agree that KEYGEN is not insecure.


Are you arguing that MD5 is secure because you don't understand how it's not secure? 

henry...@gmail.com

unread,
6 Sep 2015 03:57:44
to blink-dev, melvinc...@gmail.com, henry...@gmail.com, sle...@google.com
MD5 is a message digest algorithm. Whether it is secure depends on what it is used for. For example, if I create an MD5 of a file on my file system as a way to explain the command-line tool md5, this presents no security threat:

$ md5 ~/rfc-6592.txt 

MD5 (/Users/hjs/rfc-6592.txt) = 1e001b0a4d2913233714772624d91f29


I can create MD5s of any string I want. For some reason the <keygen> authors added an MD5 signature to the key request format sent to the server. But the server does nothing with that MD5 string. All that interests the server is the public key that comes in the SPKAC packet.


So there was a security problem when Certificate Authorities used MD5 to sign certificates.

But since, in the actual usage of <keygen>, the MD5 is not used at all, there is and can be no security hole.


It's a bit of a joke, a bit like the Null Packet of RFC 6592.

henry...@gmail.com

unread,
7 Sep 2015 05:20:04
to blink-dev, melvinc...@gmail.com, henry...@gmail.com, sle...@google.com, public-webid
Ok, the above is way too terse and misleading. So let me try again.

The SPKAC clearly signs the generated public key + challenge with the private key, using an MD5-based signature [1]. The server is then meant to verify the signature and that it matches the challenge, as a way of verifying that the agent that sent the public key is actually the agent connected in the session. I suppose this is to prevent the server from signing certificates for clients presenting random public keys. Of course such a client would not be able to do anything with the generated certificate, given that we assume it does not have the private key. So this signature and challenge can be seen as an extra security feature.

(So I admit that I have not been using this verification in my code - which has not been required for anything at this high level of security, but it will be nice to be able to add it.)
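
A sketch of what that verification could look like on the server (not part of the code above; it relies on Node.js's crypto.Certificate SPKAC helpers, and the parameter names are only illustrative):

import { Certificate } from "node:crypto";

// spkacField: the base64 SPKAC posted by the form; sessionChallenge: the random string
// the server placed in <keygen challenge="..."> and stored against the user's session.
function spkacMatchesSession(spkacField: string, sessionChallenge: string): boolean {
  const spkac = Buffer.from(spkacField, "base64");
  const proofOfPossession = Certificate.verifySpkac(spkac);    // checks the (MD5-based) self-signature
  const echoedChallenge = Certificate.exportChallenge(spkac).toString("utf8");
  return proofOfPossession && echoedChallenge === sessionChallenge;
}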

Exploiting the MD5 weakness here assumes that, between the creation of the form and challenge (which should of course contain as random a string as possible, include a date, and be tied to the user session - perhaps even to the TLS session) and the receipt of the form and public key, someone would be able to find a public key for which they did not have the private key, find an MD5 hash collision for the given challenge, and send that to the certificate service. Let's stop here for a second.

Let us note the complexity of the attack:

1) for the MD5 break demonstrated 7 years ago by Jacob Appelbaum and colleagues, they had to spend 3 days of computation to create the collision (OK, computers are faster now, of course),
2) there is a lot more information in an X509 cert than in an SPKAC, so MD5 collisions are presumably much more difficult to generate for an SPKAC,
3) plus they were dealing with a certificate authority that created server certificates with sequential, and therefore completely predictable, serial numbers (hence the answer that randomising the string would make this difficult to duplicate).

All of the above is very, very hard.

Let us now think of the result of this attack:

The attackers would end up receiving an X509 certificate signed with SHA-2, which I am told is what is recommended now. But they would not have the private key for the given public key (for if they did have the private key, none of the above attack would of course be necessary).

This is the opposite of the attack demonstrated by Appelbaum, where the whole game was to start off with a public key for which they did have the private key, in order to get the CA to generate a certificate signed with MD5, and then to be able to generate another certificate with the same signature that put them in control of a root certificate!

Here the attacker ends up with a secure certificate which he cannot break, as it uses SHA-2, and for which he does not have the private key!

And apart from that, it is really quite unbelievable that it would be hard to improve the keygen tag with an option making it SHA-2 capable while still allowing for backward compatibility!

Henry

 

Ryan Sleevi

unread,
7 Sep 2015 16:02:02
to Henry Story, blink-dev, Melvin Carvalho
On Sun, Sep 6, 2015 at 12:57 AM, <henry...@gmail.com> wrote:

MD5 is a message digest algorithm. Whether it is secure depends on what it is used for. For example, if I create an MD5 of a file on my file system as a way to explain the command-line tool md5, this presents no security threat.

Henry,

Thanks for clarifying that you don't understand/appreciate the security consequences of MD5 collisions, particularly as they apply to signature schemes (such as those employed by CAs issuing certificates - *or* by SPKAC certifying keys) or to message digests (allowing multiple messages to result in the same MD5).

I don't believe it's germane to blink-dev@ to explain the security significance of collision resistance, given that there are two decades' worth of research readily available on MD5, and ample security literature on its significance.

I'm afraid you've entirely missed the point by concluding that the concerns of "MD5 in SPKAC" are equivalent to "MD5 in issued certificates for keys attested to via SPKAC", since the two are as far apart as can be.

Henry Story

unread,
8 Sep 2015 08:38:13
to Ryan Sleevi, blink-dev, Carvalho Melvin, public-webid, Chadwick David, Tim Berners-Lee
On 7 Sep 2015, at 22:01, Ryan Sleevi <sle...@google.com> wrote:

On Sun, Sep 6, 2015 at 12:57 AM, <henry...@gmail.com> wrote:

MD5 is a message digest algorithm. Whether it is secure depends on what it is used for. For example, if I create an MD5 of a file on my file system as a way to explain the command-line tool md5, this presents no security threat.

Henry,

Thanks for clarifying that you don't understand/appreciate the security consequences of MD5 collisions, particularly as they apply to signature schemes (such as those employed by CAs issuing certificates - *or* by SPKAC certifying keys) or to message digests (allowing multiple messages to result in the same MD5).

I don't believe it's germane to blink-dev@ to explain the security significance of collision resistance, given that there are two decades' worth of research readily available on MD5, and ample security literature on its significance.

I am catching up quickly on 20 years of literature. Until now I was relying on you folks to improve this, so I did not have to spend time learning it in detail. It seemed to me, and to Tim Berners-Lee and others, that improving this functionality would come quite naturally. Even if one just stuck to the <keygen> HTML element, one could imagine quite a lot of ways forward. For example, if MD5 is broken, one could give the server the option of asking the client for other signature algorithms by generating HTML such as:

<keygen keytype="rsa" signature="sha1 sha2">

Perhaps, as a default, if the client still sent back an MD5-signed SPKAC, the server could create a much shorter-lived certificate or refuse to create one at all.

If the SPKAC format were the real problem, then the keygen tag could have been improved with support for JOSE [1], the JSON-based format, with:

<keygen keytype="rsa" signature="sha1 sha2" signedpk="jose">

All of these have the advantage that they can work in browsers without JavaScript enabled, which removes some JS-related security holes for sites that have high security demands.

But let's look at this as an opportunity to have a discussion we should have had years ago, so that we can learn from each other. How much do you know about the Semantic Web, logic and modal logic? Those are also key parts of the security story.

I'm afraid you've entirely missed the point by concluding that the concerns of "MD5 in SPKAC" are equivalent to "MD5 in issued certificates for keys attested to via SPKAC", since the two are as far apart as can be.

That was not my point. And I did not make just one point but a number of related ones that work at different logical levels. Let me move from the top level down.

Let's assume that the MD5 in the SPKAC is broken (as all of us in the WebID group now do) and that therefore we have a protocol that is currently much simpler than what was initially intended (which is what I described in my previous mail). This simplified protocol goes like this (let's also abstract away from any format arguments):

  1.a. User Joe clicks on the keygen form presented by his web agent
     b. the web agent's keystore creates a public/private key pair
     c. the web agent sends the form data + public key to the server (for all intents and purposes the public key is not signed)
  2.a. the server receives the form data,
     b. creates a certificate using the public key and other data,
     c. sends it, signed, back to the client
  3.a. the client receives the signed certificate (let's assume it is a top-quality certificate)
     b. asks the user if he wishes to add it to the keystore
     c. if the user agrees, it is added to the keystore and associated with the existing private key

So the question from a number of us was:
   what does an attacker gain by sending a public key in 1.b for which he does not have the private key?

Well he gets a signed certificate containing a relation of the form 

  joe cert:key pk .

 where joe is the identifier for Joe and pk is the name of the public key - some long set of numbers that I don't want to write out here to keep the text concise.

  And here we could just stop and say the problem is that the CA (be it my freedom box or a large CA) has signed something false, and that saying something unverified is wrong. I think there are grounds for thinking that when someone says or writes something they are responsible for what they write. (Note of course that context matters a lot here: as we know from going to the cinema, an actor murdering someone is not liable for murder in the real world, though the character he plays should be in the story of the film - we will get to context right in the next paragraph.)

Still, an example of a misuse would be better, as it will help us work out what the logical inference looks like that can lead someone into error. To do that we need to distinguish between what someone says and what is said. For that we need a quotation mechanism, as that is what a certificate is: a signed statement saying that some agent (often thought of as a CA) is certifying something. To be formal about this we can use the N3 graph quotation symbols '{' and '}'. A quoted block { } allows one to say what someone said without taking what is said to be true. For example you could use it in belief contexts like this:

     jane believes { jon at Work }

It is not because Jane believes something that I or you or Jon should believe it. On the other hand it is not because she believes something that it is false either. That is the whole purpose of quotation contexts.

 So using this we can see that a cert logically looks something like this. 

JoesCertWithNoPrivKey 
   signature [ cert:signedBy <https://myfreedombox.example/>;                      
               cert:signedKey <https://myfreedombox.example/pk#>;
               cert:signature "2asdefs32asda...." ];
   cert:certifies { 
       joe cert:key pk .
   } .

where JoesCertWithNoPrivKey is the name of the certificate for which Joe does not have a private key. We have a signature composed of a number of fields, including who signed it, with what key, and the signature value itself, and in the cert:certifies quoted block we have the content of the statement that was signed.

We can imagine that another CA has published a certificate for Tim Berners Lee with the same public key 

timblsCert 
   signature [ cert:signedBy <https://w3.org/#>;                      
               cert:signedKey <https://w3.org/pk#>;
               cert:signature "de238ab73...." ];
    cert:certifies {
       timbl cert:key pk .
         }

Now someone trusting both <https://myfreedombox.example/> and <https://w3.org/#> could come to the conclusion that

   joe cert:key pk .
  timbl cert:key pk .

If we take the cert:key relation to be inverse functional (owl:InverseFunctionalProperty [2]), then from the above two statements it would follow that

   joe = timbl .

So someone, or some software agent, could mistakenly come to the conclusion that joe was timbl. This could be especially problematic if, say, Joe was an annoying troll. (But you can imagine all kinds of negative consequences here.)

So how do we deal with this? Essentially this boils down to a number of ways of making sure that the CA does not need to say something false. There are three avenues here:

   A. Fixing Signature and challenge

Adding a signature and challenge to the protocol that actually work (i.e. fixing the MD5 problem), perhaps with the help of FIDO hardware tricks. This would make it easier for CAs to have some grounds for thinking that they are publishing something true.

  B. Unlinkability


 Not allowing the key to be used across origins as FIDO requires for unlinkability ( see my argument to the TAG https://lists.w3.org/Archives/Public/www-tag/2015Sep/0023.html )

  Of course with FIDO the servers still actually need to save the data for each user. So there is a data structure that they need to store, which we could describe like this:

  _:agent cert:key pk .

  and then later they could tie an OpenID, WebID, email address, and other identifiers to that agent as they ask the user for more information (all this is allowed by FIDO), at which point their data structure would look like this:

  timbl cert:key pk;
     foaf:mbox <mailto:ti...@w3.org>
     foaf:homepage <http://www.w3.org/People/Berners-Lee/> .


The FIDO philosophy, as it stands now, is to make the clash described earlier impossible, because any key that is generated is always generated for one web site only. But still: what if someone accidentally creates the same key, or a database is corrupted and two people on two sites end up with the same key, and these two sites are communicating? I suppose other hacks are imaginable.

   Given that FIDO and friends only want the key to be used on one origin, each site could be a bit more careful with its reasoning by creating a structure that does not directly tie a key to an account, but rather ties a key plus the origin to the account. Perhaps like this:

   timbl authn [ fido:key pk;
                 origin <https://w3.org/> ].

and the other key can be written out as:

   joe authn [ fido:key pk;
               origin <https://myfreedombox.example/> ].
  
The idea here is that even if the two servers exchanged this information behind the scenes, they should not be able to conclude that joe = timbl. They could be surprised that the two accounts share the same key, but they would not jump to the conclusion that the people are the same.

  C. change the meaning of cert:certifies

Up till now we have been understanding the cert:certifies relation in the claim 

timblsCert 
   signature [ cert:signedBy <https://w3.org/#>;                      
               cert:signedKey <https://w3.org/pk#>;
               cert:signature "de238ab73...." ];
    cert:certifies {
       timbl cert:key pk .
         }

as meaning that this is a simple statement by the W3C that timbl's key is pk. But this is not actually how certificates work. They are more complicated than this. For example, in X509 the contained statement is not unconditional:
 • the statement is meant to be valid only if the date range is valid,
 • or if the user has verified that the certificate is still valid by using CRL or OCSP,
 • or if there are no need-to-know structures in the certificate,
 ...

So here the extra rule could just be: the content of the certificate should only be believed if a connection is made with someone who can prove that they have the private key of the public key given in the certificate.

This would remove the ability of someone collecting certificates to jump to the conclusion that, because two certificates signed by trusted CAs (perhaps the same one) contain the same public key, they refer to the same person. Rather, the logic has to be that one can only believe the content of the certificate if someone has proven they have the private key of the given public key.

This seems a pretty reasonable restriction to make. There is little reason anyone should be collecting certificates and, from that, coming to conclusions about the identity of people across certificates. This is not a primary use case for certificates.

For the WebID protocol this still leaves the issue that there is another place where the public key is published, namely the WebID Profile Document [3]. An agent crawling the linked data network would still come across the two documents and add the following graphs to its graph store:

   joeProfile log:semantics { joe cert:key pk . }
  timblProfile log:semantics { timbl cert:key pk . }

 But here again, it is not because someone writes something in their profile that it should be believed, so taking the union of the two profile documents is not automatic.

  One may go a step further and take an idea from solution B (Unlinkability): instead of publishing in the WebID profile document a cert:key relation directly to the public key, publish a relation to the certificate. The profile document would then look like this:

   joeProfile log:semantics { joe cert:cert JoesCertWithNoPrivKey . }
  timblProfile log:semantics { timbl cert:cert timblsCert . }

Now merging the two graphs would not lead to an identification of timbl with joe.
But joe could still lie here about his certificate by claiming he had timblsCert.
So I am not sure this really helps that much. More work is needed here.

Conclusion
---------------

The main problem with creating certificates for users who do not hold the private key for a public key is that such a certificate could be used to identify as the same person people who are in fact not identical. But this requires a logical rule of inference that is not a primary use case for certificates, and it need not be thought of that way. So the easiest fix is simply to state that such reasoning is invalid. There are many reasons it may be: people may lie about certificates, people may, oddly enough, try to create certificates for which they do not hold the private key, etc... If this logical inference is not allowed, then the fact that MD5 is broken in current SPKACs cannot really lead to any major problems. Rather, it is important that this failure be widely known, as it makes it easier to argue against the logical inference we want to argue against.


Hope this helps, and I welcome well-argued feedback based on reasoning and, if possible, logic.

Henry Story


Melvin Carvalho

unread,
8 Sep 2015 09:02:37
to Henry Story, Ryan Sleevi, blink-dev, public-webid, Chadwick David, Tim Berners-Lee
+1

Thanks for going through this. The private key never leaves the browser, so it cannot be compromised. Talk of an attack vector which has never been described seems to be mainly FUD, imho.

matt.we...@gmail.com

unread,
8 Sep 2015 15:44:44
to Security-dev, min...@sharp.fm, blin...@chromium.org, sle...@google.com
Please do not deprecate this support.

1. There is extensive use of keygen provisioning and certificate usage for authentication within highly secure, closed networks (e.g. DoD, DIB) that Google does not have visibility on.
2. If you run a website and want to support cryptographically strong authentication for your clients without requiring them to change browsers or install different client software, your only option now is issuing client certificates. There is no similar replacement that is supported in major browsers by default. Even IE, while not directly supporting keygen, supports JavaScript APIs that can generate the same result, allowing scripts like IEkeygen.js to send the same data to the server.
3. There is also no alternative nearly as secure and frictionless for general certificate issuance. For example, some CAs, if keygen could not be used, simply generated a private key and emailed it to the user for S/MIME certificates, HTTPS certificates, and code signing certificates. Alternatives generally involve pasting long command-line scripts into a terminal and/or complex manual certificate request specification/generation, both of which are extremely time-consuming, can pose graver security risks, and are more error prone.
4. If you run a website and want to prevent your clients from falling victim to a MITM attack without requiring a change in browser or the installation of different client software, your only option now is issuing client certificates. There are hundreds, if not thousands, of root and intermediate CAs trusted by browsers, and a compromise of any of them, which has happened many times before, results in loss of secure user data. By checking for client certificates, you can authenticate your clients and secure your communication channel even in the presence of a CA compromise.

If you remove support for keygen, not only will you break many authentication systems, but even if such systems are re-written to support FIDO, you will end up with a worse scenario for end-user security:

Scenario 1 (keygen/certificates): User goes to website to set up account, clicks a button, certificate is automatically issued and stored in the browser. User can securely authenticate and browse.
Scenario 2 (FIDO): User goes to website to authenticate. Website informs user that to securely authenticate, user must install browser extension. User downloads and executes privileged code (browser extension) specified by arbitrary website.

FIDO relaxes the MITM protection that is strongest in certificates (it is not a required part of the protocol), has some other technical characteristics that are concerning, and has governance issues that make it cost-prohibitive for a small business like ourselves to have a voice in the alliance, in contrast to more open standards processes. It is also a newer technology that does not have the decades of support client certificates do, which is why we prefer client certificates. The FIDO protocols are nevertheless a great improvement over most authentication systems in use today. But as it stands now, FIDO is far too young a technology, with too many flaws and not even deployed in most browsers by default, to be ready to replace client certificates. Trying to do so introduces severe negative security externalities, such as training users to download and execute code to access a website, that more than outweigh the advantages.

The author makes several assertions denigrating client certificates for authentication that are given apparently without any justification: "While a use case exists for provisioning TLS client certificates for authentication, such a use case is inherently user-hostile for usability, and represents an authentication scheme that does not work well for the web." The provisioning of a client certificate generally happens in a single click, which is the easiest authentication setup of any authentication system currently on the web. It is certainly easier than attempting to create and remember a unique and strong password, and at this point significantly easier than FIDO as well. Client certificate-based authentication is used by millions of people for everything from web-based email (https://technet.microsoft.com/en-us/library/Cc995195.aspx) to site authentication and more.

Client certificates have been an integral part of the TLS protocol for decades, since the first RFCs. They remain the best option for those concerned about very strong security and usability, and are indeed working well for millions of users. Deprecating keygen etc. would introduce incompatibilities and security and usability issues for many use cases. As the primary means of interacting with websites and authenticating organizations in general, browsers are the logical and safest means of handling this type of authentication with the least amount of friction for users. The loss of these abilities would be a blow to a more secure web.

Henry Story

unread,
9 Sep 2015 10:52:57
to Carvalho Melvin, Ryan Sleevi, blink-dev, public-webid, Chadwick David, Tim Berners-Lee
Things are not that simple. Prof David Chadwick (CCed in the previous post) pointed out to me the following:

the only attack I can think of in your given example is that, acting as
MITM, I substitute the certificate on a signed message with my false
certificate containing a different subject and then let the message
continue on its way. In this way the recipient MAY (only may, not will,
because it depends upon the signed content not containing the real
details of the sender) believe that the message came from someone other
than the real sender.

Here is where it might be useful. You submit a patent application
electronically and I, acting as a MITM, substitute your certificate with
one containing me as the subject, and the patent gets registered in my
name and not yours.

So we can imagine that Tim Berners-Lee, on inventing the web, decided to patent it,
and that he somehow mistakenly used a man-in-the-middle web site https://patants.org/
where he uploaded the specs for the web and signed them with his certificate (though at no
point mentioning his name in the patent). My man-in-the-middle web site could then,
using the fake certificate, connect to the real patent web site and send the patent along
with my certificate that used his public key, and hey presto, I'd be potentially super rich. :-)

Of course Tim could then, in a court of law, still prove that the real certificate was his, as he,
and not I, had the private key. But this lawsuit would certainly mark a bad start for the web (apart of course from the idea of patenting it).

This is not a problem for WebID-TLS as it is used now, as it only uses the authentication mechanism of TLS, and browsers do not provide a signing mechanism. Still, other software with signing abilities could access the certificate in the keychain and propose to use it to sign some document.

It follows that certificates made without solid secure-challenge abilities (e.g. the current MD5 situation) should not be enabled for signing (X509 allows one to specify what a certificate can be used for).
Clearly this type of implication should be documented. We should note this in the WebID-TLS spec.

It also gives a good reason for having stronger secure-challenge features in <keygen> or whatever replaces it.

Btw, could a FIDO-based system - assuming it were extended to allow public keys to be used across origins - improve the secure-challenge feature?

I really feel we are making progress here.

Henry

helpc...@gmail.com

unread,
10 Sep 2015 14:30:38
to blink-dev, sle...@google.com, 01234567...@gmail.com

So don't abandon <keygen>, fix it.

My 2 cents:

Wouldn't something like the WebCrypto API, extended to allow generating keys in the browser/system keystore, be enough to replace current <keygen> usage for certificate enrollment?

window.crypto.subtle.generateKey(
  { name: "RSASSA-PKCS1-v1_5",
    modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]),
    hash: { name: "SHA-256" },
    keystore: true       // <-- that's just an example; even a PKCS#11 URI could be used!
  },
  false,                 // non-extractable private key
  ["sign", "verify"]
);
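
For what it's worth, most of that flow is already expressible with today's WebCrypto; a rough sketch (the /enroll endpoint is invented, and the final keystore/certificate-import step is exactly the part that is missing today) might look like this:

// Sketch only: standard WebCrypto + fetch; the /enroll URL is made up.
async function enroll() {
  const keyPair = await crypto.subtle.generateKey(
    { name: "RSASSA-PKCS1-v1_5", modulusLength: 2048,
      publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
    false,                          // private key stays non-extractable
    ["sign", "verify"]);
  // Public keys are always exportable, even for non-extractable pairs.
  const spki = await crypto.subtle.exportKey("spki", keyPair.publicKey);
  const resp = await fetch("/enroll", { method: "POST", body: spki });
  const certDer = await resp.arrayBuffer();
  // The missing piece: there is no standard way to bind certDer + keyPair
  // into the browser/OS keystore, which is what <keygen> plus
  // application/x-x509-user-cert does today.
  return { keyPair, certDer };
}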

WebCrypto Discovery API should be able to find keys there too. ie: Firefox has to give access to NSS/softoken/pkcs11 and Chrome/IE to windows  keystore/keychain on OSX

IMHO this is simple and very easy to adopt for everyone - the WebCrypto WG/spec, browsers and users (developers) - while keeping the keys under the user/owner's control.

In fact this is something I'm missing from the current WebCrypto specs: the lack of any link between creating keys and (current) keystores. I understand each vendor could have its own way, but it seems you were only thinking of volatile/server keys, not X509 user certs or local keys. Not to mention smartcards.

Then, when this alternative is ready, I won't complain about removing keygen. But please don't do it without providing a REAL alternative.


BTW: couple of questions about FIDO:

 - If I create keys for foo.com using a U2F compliant USB device (yubikey) on my PC and then I want to login using my mobile...shall I generate another keypair? can't I use one for all? (reverse question may apply)

 - could a FIDO key be used for many domains (like eID, allowing me to auth/sign on different domains)

Regards

Ryan Sleevi

unread,
10 Sep 2015 20:32:31
to helpcrypto helpcrypto, blink-dev
On Thu, Sep 10, 2015 at 11:30 AM, <helpc...@gmail.com> wrote:
WebCrypto Discovery API should be able to find keys there too. ie: Firefox has to give access to NSS/softoken/pkcs11 and Chrome/IE to windows  keystore/keychain on OSX

IE does not grant such access - it's never implemented <keygen>. The ActiveX object it does provide allows for full system modification, so that's hardly a good model.

And there are zero plans for WebCrypto to do this (it's deliberately out of scope of the charter), and I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.

BTW: couple of questions about FIDO:

 - If I create keys for foo.com using a U2F compliant USB device (yubikey) on my PC and then I want to login using my mobile...shall I generate another keypair? can't I use one for all? (reverse question may apply)

Yes. https://fidoalliance.org/wp-content/uploads/html/fido-appid-and-facets-v1.0-ps-20141208.html

 - could a FIDO key be used for many domains (like eID, allowing me to auth/sign on different domains)

No. And it's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare. 

Henry Story

unread,
11 Sep 2015 05:59:25
to Ryan Sleevi, helpcrypto helpcrypto, blink-dev, sa...@ietf.org
Thanks Ryan for bringing up below what I think are the three underlying reasons
for the move to deprecate <keygen> in html by relating in a very strong way:

1. Same Origin Policy to Linkability,
2. Linkability to loss of Privacy,
3. certificates to loss of Privacy

These are high-level architectural statements that need to be made explicit, as they affect many other groups in the web, so I am also CCing this to the TAG, and to the SAAG at the IETF, as they are working on both TLS 1.3 and a number of client-side certificate technologies.

What would be nice would be if we could have actual documents from the W3C and the SAAG that
clarified where these concepts are actually applicable and what their limitations are.

As background for people new to this thread, this follows up on a request by the HTML5 WG to deprecate <keygen> https://github.com/whatwg/html/issues/67 . The main discussion actually seems to be occurring on the blink-dev chrome list for some reason, and it was brought up by Tim Berners-Lee for discussion on the W3C TAG.

> On 11 Sep 2015, at 01:32, 'Ryan Sleevi' via blink-dev <blin...@chromium.org> wrote in an
> e-mail that is archived https://groups.google.com/a/chromium.org/d/msg/blink-dev/pX5NbX0Xack/y5R4Ky9KAQAJ
>
>> On Thu, Sep 10, 2015 at 11:30 AM, <helpc...@gmail.com> wrote:
>> WebCrypto Discovery API should be able to find keys there too. ie: Firefox has to give access to NSS/softoken/pkcs11 and Chrome/IE to windows keystore/keychain on OSX
> IE does not grant such access - it's never implemented <keygen>. The ActiveX object it does provide allows for full system modification, so that's hardly a good model.

What you mean is that it can add client certificates to the keystore, as implementations of <keygen> can, and have, right? It can't also launch programs in user or root mode, edit files, etc.? Because that's a bit what it sounds like when you say "full system modification". (Many people on the web thought the ActiveX idea was not a good one, as it bound the web to particular binary implementations; nevertheless I suppose no serious security leaks persist with this feature, or else it would have been removed a long time ago, especially as Microsoft is keeping it for enterprise customers.)

The major difference between the ActiveX control and keygen is that the ActiveX control
requires JS to be used (more on that below).

>
> And there are zero plans for WebCrypto to do this (it's deliberately out of scope of the charter), and I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.

The Same Origin Policy has always been applied to JavaScript, especially when JS from one origin then connects to other origins to get information. As JS is a non-declarative, Turing-complete language, it is quite clear why these restrictions apply: the JS can actually do things in place of the user. It is procedural code, meaning that an extra agency is added to the web page.

On the other hand the web itself is built on linkability between pages from different origins: that is what makes the web the web. This form of linkability comes historically and conceptually before the non-linkability of JS. So linkability is more essential to the web than JS non-linkability.

As a result, non-linkability and the same origin policy cannot be invoked without careful attention to the circumstances in which they apply.

What is the difference between the two? The declarative nature of the web-without-JS means that most of the actions in the web remain under the user's control. What the User Agent (UA) does is on the whole very limited: it fetches and displays documents. It is up to the user to decide which link she clicks, what he bookmarks, what forms he submits, what document she saves to her local file system. When you add JS to the mix, you add the agency of the JS, which can now also follow links, click form elements, etc. So instead of 2 agents (the UA and the user) you suddenly have n agents to deal with: the user, the UA, and the JS agent for each of n-2 origins (and here it is clear that bunching all the JS on one origin together gives a really vague concept of code identity).

As it happens, the <keygen> element that is being put up for deprecation is part of the html-non-JS web, and works on clients that have JS disabled.
Client certificate authentication as used by TLS is also declarative: the browser chrome gives the (human) user the ability to choose what certificate to use, and to cancel the authentication request if desired.
In none of these cases is the user out of control (in a privacy-respecting browser - and more can be done in many browsers to improve user control).

There is no reason that this TLS client certificate authentication feature can't be improved to work better with HTTP/2 (aka SPDY); see the HTTP WG thread:
• starting: https://lists.w3.org/Archives/Public/ietf-http-wg/2015AprJun/0558.html
• most recent: https://lists.w3.org/Archives/Public/ietf-http-wg/2015JulSep/0310.html

>
> BTW: couple of questions about FIDO:
>
> - If I create keys for foo.com using a U2F compliant USB device (yubikey) on my PC and then I want to login using my mobile...shall I generate another keypair? can't I use one for all? (reverse question may apply)
>
> Yes. https://fidoalliance.org/wp-content/uploads/html/fido-appid-and-facets-v1.0-ps-20141208.html
>
> - could a FIDO key be used for many domains (like eID, allowing me to auth/sign on different domains)
>
> No. And it's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare.

What is the forgery of shared-key attacks? (A detailed pointer would be nice.)
Is that due to the MD5 weakness in the Signed Public Key and Challenge (SPKAC) format used by <keygen> at present, which allows the UA's keychain to sign the public key with the private key, allowing the server to verify that the certificate request comes from an agent that is actually in possession of the private key? I tried to consider what could
be done with such an attack in the thread
https://groups.google.com/a/chromium.org/d/msg/blink-dev/pX5NbX0Xack/dn_7RguGAAAJ
As far as we can see not that much can be done with it, especially if one considers how
certificates are actually used.

But of course it is easy to imagine improving keygen so that this weakness does not arise, for example by extending it with

<keygen keytype="rsa" signature="sha1 sha2" signedpk="jose">

JOSE is a format that is being developed by the IETF https://tools.ietf.org/wg/jose/
and so are X509 Certificates.

Given that it is so easy to see how keygen could be extended to allow for different certificate formats to be used, your argument cannot be that it is just X509 that is
problematic, but that any certificate format is a privacy problem.

If so this needs to be brought to the attention of the IETF SAAG, as they are investing
time working on this.

As you have been singing the praises of the FIDO alliance spec, I'd like to note that it is difficult to see how FIDO actually would work without certificates. It requires them for TLS verification of servers.

Also, OpenID, OAuth and SAML, which are designed to fit on top of FIDO as seen here

https://fidoalliance.org/wp-content/uploads/html/fido-uaf-overview-v1.0-ps-20141208.html#relationship-to-other-technologies

require some form of globally known relation between a public key and a global identifier to be available. Otherwise how would attribute exchange work? If the Relying Party has no way to
know who the Identity Provider is, why should it even trust the Identity Provider?

So server certificates are good, but client certificates are bad, apparently. What is the principled argument that allows one to make this distinction?

Henry

Ryan Sleevi

unread,
11 Sep 2015 11:10:53
to Henry Story, helpcrypto helpcrypto, blink-dev
On Fri, Sep 11, 2015 at 2:59 AM, Henry Story <henry...@gmail.com> wrote:
Thanks Ryan for bringing up below what I think are the three underlying reasons
for the move to deprecate <keygen> in html

Henry,

I encourage you to re-read the original thread. It is not a fair or accurate summary to suggest these are the "three underlying reasons for deprecating <keygen>"

Regardless of any argument or discussion beyond that, these are neither the sole nor primary motivating factors, as was explained several times.

On a matter of basic list etiquette, please note that crossposting is considered quite rude and problematic. I have removed your cross-posts, although I do hope you can make sure to let the relevant CC'd lists know that your summary is not correct.

Cheers.

Henry Story

unread,
11 Sep 2015 11:45:15
to Ryan Sleevi, helpcrypto helpcrypto, blink-dev
On 11 Sep 2015, at 16:10, Ryan Sleevi <sle...@google.com> wrote:



On Fri, Sep 11, 2015 at 2:59 AM, Henry Story <henry...@gmail.com> wrote:
Thanks Ryan for bringing up below what I think are the three underlying reasons
for the move to deprecate <keygen> in html

Henry,

I encourage you to re-read the original thread. It is not a fair or accurate summary to suggest these are the "three underlying reasons for deprecating <keygen>"

I meant three implicit reasons. These reasons are not the reasons you state at the top of this thread, which anyone can review as I kept a link to this thread in the posts I CCed. They are reasons you yourself mention in the post to which the mail was a reply. 

Let me quote 

 I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.

and 

It's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare.

I believe the 3 reasons mentioned above explain many of the explicitly stated reasons that launched this thread and the consequent deprecation of <keygen> in html5, which I have had trouble understanding the basis of. You have used these arguments throughout the thread. And they are explicit goals of the FIDO alliance which you support strongly.

These may be reasonable principles in certain circumstances, which need to be stated explicitly and defended. I think that you are applying them too widely. The TAG and the SAAG need to review these principles; then you can use them. That is why I brought their attention to this.

Regardless of any argument or discussion beyond that, these are neither the sole nor primary motivating factors, as was explained several times.

They have come up quite a lot and deserve to be explicitly stated in the reasons given for deprecating keygen. The Google Chrome issue does not mention them either: https://code.google.com/p/chromium/issues/detail?id=514767


On a matter of basic list etiquette, please note that crossposting is considered quite rude and problematic. I have removed your cross-posts, although I do hope you can make sure to let the relevant CC'd lists know that your summary is not correct.

The mail I sent contains a link to the post I was replying to. I am sure the TAG and SAAG members are aware of mail etiquette. But point taken. At the same time, this is quite a serious move you are making, and those two organisations' members should be aware of it.


Cheers.

Henry Story

unread,
11 Sep 2015 12:09:28
to Ryan Sleevi, helpcrypto helpcrypto, blink-dev

> On 11 Sep 2015, at 16:45, Henry Story <henry...@gmail.com> wrote:
>
>
>> On 11 Sep 2015, at 16:10, Ryan Sleevi <sle...@google.com> wrote:
>>
>>
>>
>> On Fri, Sep 11, 2015 at 2:59 AM, Henry Story <henry...@gmail.com> wrote:
>> Thanks Ryan for bringing up below what I think are the three underlying reasons
>> for the move to deprecate <keygen> in html
>>
>> Henry,
>>
>> I encourage you to re-read the original thread. It is not a fair or accurate summary to suggest these are the "three underlying reasons for deprecating <keygen>"
>
> I meant three implicit reasons. These reasons are not the reasons you state at the top of this thread, which anyone can review as I kept a link to this thread in the posts I CCed. They are reasons you yourself mention in the post to which the mail was a reply.
>
> Let me quote
>
>> I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.
>
> and
>
>> It's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare.
>
> I believe the 3 reasons mentioned above explain many of the explicitly stated reasons that launched this thread and the consequent deprecation of <keygen> in html5, which I have had trouble understanding the basis of. You have used these arguments throughout the thread. And they are explicit goals of the FIDO alliance which you support strongly.

It is worth substantiating this with a few quotes, which I have already done by citing the FIDO alliance specs in the discussion at the TAG:
https://lists.w3.org/Archives/Public/www-tag/2015Sep/0023.html

I'll make the points again on this list:

FIDO makes a big deal of unlinkability. We find in the FIDO UAF Architectural Overview:

• "The UAF protocol generates unique asymmetric cryptographic key pairs on a per-device, per-user account, and per-relying party basis. Cryptographic keys used with different relying parties will not allow any one party to link all the actions to the same user, hence the unlinkability property of UAF."
• "UAF authenticators can only be identified by their attestation certificates on a production batch-level or on manufacturer- and device model-level. They cannot be identified individually. The UAF specifications require implementers to ship UAF authenticators with the same attestation certificate and private key in batches of 100,000 or more in order to provide unlinkability.

https://fidoalliance.org/wp-content/uploads/html/fido-uaf-overview-v1.0-ps-20141208.html

I also mention Brad Hill at the "Web Cryptography Next Steps" W3C workshop, where he is reported as saying:

• "Users and servers want hardware-bound keys but users also want unlinkability, and FIDO has ways outside the browser to approach these problems."

http://www.w3.org/2012/webcrypto/webcrypto-next-workshop/report.html

Here we see how public keys are related to linkability.

It is felt by people coming from this perspective that certificate authentication across sites creates linkability, if only because two different sites can see that someone authenticated with the same public key, which they think of as a global cookie. But this does not take into account that, unlike cookies, browsers actually ask the user which certificate, if any, they wish to use. See also the other points I made earlier in this thread.

Of course what users will actually do is then connect to their OpenID or OAuth provider and use that to create a linkable profile, which will pass on the OpenID or, for OAuth, give access to some account with a global ID. I have never seen any use of OpenID or OAuth that does not do this (though I have heard rumours of such uses).

Furthermore, I don't see how, if attribute exchange is used, the attributes can be exchanged without the identity provider providing its identity, using something like a globally known public key and identifier (for at least the ID provider). So again certificates (signed statements by some agent) are important here in some way.

Anyway, the principle of unlinkability is so strong that it is mentioned in the FIDO Architectural specs, and Brad Hill felt it was important enough to make the point about the importance of the Same Origin Policy at the W3C Workshop, as you do. But I have argued that these do not obviously apply to keygen.

Ryan Sleevi

unread,
11 Sep 2015 12:09:32
to Henry Story, helpcrypto helpcrypto
On Fri, Sep 11, 2015 at 8:45 AM, Henry Story <henry...@gmail.com> wrote:
I believe the 3 reasons mentioned above explain many of the explicitly stated reasons that launched this thread and the consequent deprecation of <keygen> in html5, 

I'll be explicit, then: They really don't.

The reasons for deprecation of <keygen> were provided. This isn't a meta-hidden-agenda sort of thing. It really is just that simple.

which I have had trouble understanding the basis of.

I can understand you're still confused about the reasoning. But setting up a strawman, that you can then understand, doesn't help advance the discussion. If you're confused, it may help to simply ask questions, rather than try to present arguments.

Let's also keep in mind the context. You forked this thread from a discussion specifically about WebCrypto and smartcards. While WebCrypto has certainly been brought up in the discussion of <keygen>, it's important to keep in mind that <keygen> was not being responded to - WebCrypto is. While there are similar principles, which may help further your understanding of the arguments for deprecating <keygen>, care must be taken not to naively equate the arguments as being identical or interchangeable.

While I appreciate your willingness to change the topic to suit what's being discussed, I'll note that it has the consequence of fragmenting discussions across multiple threads, and when you keep interchanging arguments from one thread to another, that fragmentation doesn't really serve your ostensible goal of keeping threads "narrowly focused". I think it'd be far more beneficial to not constantly fork, especially as you work through questions to further your own understanding.

While you wrote a considerable amount in your previous replies, much was spent demolishing strawmen that were built through your own misunderstanding, and it'd be a mistake for me to engage on the technical accuracies of your 'rebuttals' when the 'rebuttals' themselves were to fallacious and fictitious arguments. While I can realize and appreciate the benefit of trying to fully address everything in a single email, perhaps it would have been more fruitful for you - and more likely to result in appropriate engagement - to simply first make sure your understanding is correct before trying to argue against or provide counterfactuals, or, worse, engage other groups because you believe their domain to be encroached upon.

From your previous reply, it's clear you were and are confused about several of the arguments put forth. Perhaps it's better to just state that - and nothing else - and work from an honest point of understanding before advancing to rebuttals.

For example, rewriting your previous email to simply seek understanding:

"I'm not sure I understand the ActiveX issues. Could you explain those further? I thought the argument was about binary objects. If there were security issues, wouldn't Microsoft have disabled ActiveX.

I don't understand what you meant by forgery of shared key attacks. Is that related to MD5, or is it something different?

You've been singing the praises of FIDO, but it seems like that requires certificates. I understand your issue to be about certificates and privacy, is that correct? I'm not sure I understand why server certificates would be different security than client certificates - could you explain?"

The above message, considerably shorter, would have actually worked to enhance your understanding. Almost the entirety of the email, however, was spent operating on a flawed conclusion, derived from an answer you provided to questions you asked. It's understandably difficult when you put words in others' mouths - whether in trying to 'reduce' their argument when you're clear that you don't yet understand it, or provide incorrect answers for questions you ask, and then work to demonstrate how those answers are incorrect.

Jeffrey Yasskin

unread,
11 Sep 2015 13:21:11
to Henry Story, blink-dev
On Fri, Sep 11, 2015 at 2:59 AM, Henry Story <henry...@gmail.com> wrote:
As background for people new to this thread, this follows up on a request by the HTML5 WG to deprecate <keygen> https://github.com/whatwg/html/issues/67 .

The WHATWG isn't a W3C working group.
 
The main discussion actually seems to be occurring on the blink-dev chrome list for some reason …

It's happening on a Chromium list because it's Chromium deciding whether to deprecate the feature. Other browsers might follow along, and some of the coordination on doing that might happen through standardization organizations, but ultimately it's an implementation's decision which features to support. The W3C can't force browsers to keep a feature any more than they could force browsers to implement XHTML2.

Jeffrey

Henry Story

unread,
11 Sep 2015 13:59:38
to Ryan Sleevi, blink-dev, helpcrypto helpcrypto
On 11 Sep 2015, at 17:09, Ryan Sleevi <sle...@google.com> wrote:

On Fri, Sep 11, 2015 at 8:45 AM, Henry Story <henry...@gmail.com> wrote:
I believe the 3 reasons mentioned above explain many of the explicitly stated reasons that launched this thread and the consequent deprecation of <keygen> in html5, 

I'll be explicit, then: They really don't.

The reasons for deprecation of <keygen> were provided. This isn't a meta-hidden-agenda sort of thing. It really is just that simple.

Let's look at the reasons given then:

Issues: There are a number of issues with <keygen> today that make it a very incompatible part of the Web Platform.
1) Microsoft IE (and now Edge) have never supported the <keygen> tag, so its cross-browser applicability is suspect. [3] Microsoft has made it clear, in no uncertain terms, they don't desire to support Keygen [4][5]
2) <keygen> is unique in HTML (Javascript or otherwise) in that by design, it offers a way to persistently modify the users' operating system, by virtue of inserting keys into the keystore that affect all other applications (Safari, Chrome, Firefox when using a smart card) or all other origins (Firefox, iOS, both which use a per-application keystore)
3) <keygen> itself is not implemented consistently across platforms, nor spec'd consistently. For example, Firefox ships with a number of extensions not implemented by any other browser (compare [6] to [7])
4) <keygen> itself is problematically and incompatibly insecure - requiring the use of MD5 in a signing algorithm as part of the SPKAC generated. This can't easily be changed w/o breaking compatibility with UAs.
5) <keygen> just generates keys, and relies on application/x-x509-*-cert to install certificates. This MIME handling, unspecified but implemented by major browsers, represents yet-another-way for a website to make persistent modifications to the user system.
6) Mozilla (then Netscape) quickly realized that <keygen> was inadequate back in the early 2000s, and replaced it with window.crypto.generateCRMFRequest [8], to compete with the CertEnroll/XEnroll flexibility, but recently removed support due to being Firefox only. This highlights that even at the time of introduction, <keygen> was inadequate for purpose.

1) Is not really a reason, since even when MS did not support it, Firefox, Chrome and Opera had it. Also it has been pointed out that there are numerous features that are not supported by all browsers: the proof is that it was added to the html5 spec even without MS supporting it!

2) Is badly phrased: keygen does not modify the operating system; it adds a public/private key, and later a certificate, to the keychain. The keychain can be part of the browser, as it was in Firefox most of the time.


Certificate and key management - which is system wide, affects all other applications and origins - is itself a fundamentally flawed premise for a browser to be involved in, nor is it required for any 'spec-following' <keygen> implementation. As the original message laid out, there's no way such a feature would have a reasonable chance of succeeding. The very premise of WebID rests on the ability to create a cross-origin super-cookie (the client cert), and the bugs filed make it clear there's a desire to reduce or eliminate the user interaction in doing so.

Note how you quickly move to the notion of a "super cookie" and invoke the dangerous effects of "cross origin", in line with the FIDO alliance. But as explained, this does not apply to keygen, as the browser asks the user which certificate, if any, he should use, whereas cookies are set without needing to do so (except for various local legal requirements in various countries, but these are not implemented in the browser chrome).

You made other statements along the same lines.

3) Here you jump from <keygen> to speaking about JS APIs that were removed from Firefox after the JS web crypto group got going, as they standardised those features. So you bring in an irrelevant point.

4) I have treated the MD5 issue in SPKAC at length in this thread.
  We found no serious security problem here, and you were unable to cite any, though you mentioned there are 20 years of literature on the subject. In any case this is not a <keygen> problem but an SPKAC problem. There are a number of ways of improving keygen to fix this. Off the top of my head I imagined the following:

<keygen keytype="rsa" signature="sha1 sha2" signedpk="jose">

As Tim Berners-Lee wrote in his post to the TAG: "Don't abandon <keygen>, fix it". And it is easy to see how this can be fixed (perhaps even in alliance with FIDO technology).

5) is the same point as 2), which we already saw is misstated and relies on the unstated linking of the "same origin policy" - which actually applies to JS (which keygen does not require) - and linkability, which is not a problem if one asks the user (as one can with FIDO by using OpenID or OAuth, as mentioned in the "FIDO UAF Architectural Overview" document's section on its relationship to other technologies).


6) is the same point as 3). Adding JS features and then removing them does not invalidate the use of <keygen>.

Now, as the spec writer of the JS crypto API it's understandable that you want it to succeed. But there is no need to remove other features to do that, nor to bring issues relating to JS into an HTML tag that does not suffer from those issues.


which I have had trouble understanding the basis of.

I can understand you're still confused about the reasoning. But setting up a strawman, that you can then understand, doesn't help advance the discussion. If you're confused, it may help to simply ask questions, rather than try to present arguments.

Let's also keep in mind the context. You forked this thread from a discussion specifically about WebCrypto and smartcards. While WebCrypto has certainly been brought up in the discussion of <keygen>, it's important to keep in mind that <keygen> was not being responded to - WebCrypto is. While there are similar principles, which may help further your understanding of the arguments for deprecating <keygen>, care must be taken not to naively equate the arguments as being identical or interchangeable.

The arguments you put up against web crypto echo the arguments you put up against <keygen>. Let me repeat them 


 I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.

and 

It's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare.

Now it would be nice if instead of just repeating my other questions as you do for me so nicely, you would actually answer them. Let me quote you quoting me here:

"I don't understand what you meant by forgery of shared key attacks. Is that related to MD5, or is it something different?"

"You've been singing the praises of FIDO, but it seems like that requires certificates. I understand your issue to be about certificates and privacy, is that correct? I'm not sure I understand why server certificates would be different security than client certificates - could you explain?"

helpcrypto helpcrypto

unread,
14 Sep 2015 04:05:07
to Ryan Sleevi, blink-dev
On Fri, Sep 11, 2015 at 2:32 AM, Ryan Sleevi <sle...@google.com> wrote:

On Thu, Sep 10, 2015 at 11:30 AM, <helpc...@gmail.com> wrote:
WebCrypto Discovery API should be able to find keys there too. ie: Firefox has to give access to NSS/softoken/pkcs11 and Chrome/IE to windows  keystore/keychain on OSX

IE does not grant such access - it's never implemented <keygen>. The ActiveX object it does provide allows for full system modification, so that's hardly a good model.

Well... here I wasn't proposing full system modification, but a way to add/use keys to/from the keystore.
If MS doesn't provide this feature, that is not a reason not to support it. (Otherwise Chrome shouldn't have supported <file multiple> until... 2015? lol)

 
And there are zero plans for WebCrypto to do this (it's deliberately out of scope of the charter), and I/royal we remain opposed to on numerous grounds (security and privacy being foremost). The Same Origin Policy is a critical piece of the Web.

I understand SOP as a security principle, but I'm also worried about the user needing to register on each government domain to generate a keypair...one for taxes, another for work-office. another for townhall...how many keys is yubikey able to store? :P
If I understand properly, only if a National SSO is deployed, this could be "bypassed".
 


BTW: couple of questions about FIDO:

 - If I create keys for foo.com using a U2F compliant USB device (yubikey) on my PC and then I want to login using my mobile...shall I generate another keypair? can't I use one for all? (reverse question may apply)


I still don't understand, but probably it's a lack-of-english issue.
Yes=you need a new keypair? or Yes=you can share the keypair somehow?

 
 
 - could a FIDO key be used for many domains (like eID, allowing me to auth/sign on different domains)

No. And it's the fact that it explicitly lacks that property that makes FIDO viable for the Web, without being a security (due to forgery of shared-key attacks) or privacy (due to linkability) nightmare. 

Clear as water. As I stated previously, a National SSO or one key per domain (user nightmare for the registering process?) are possible.
 

Anyhow, I don't yet see the problem (/reasons not) to integrate FIDO+keystore and even FIDO+X509. Could you elaborate?

Nor do I see the "urgency" to deprecate <keygen>. At least, you have to admit this process has been "quite fast" and perhaps "didn't have enough public discussion" before being deprecated on WHATWG/Chrome.

BTW: If this decision is already made (there's no way back, no matter how many reasons anyone gives against it, including timbl), is it correct to consider that keygen will be removed/disabled in Chrome 47? Help me with the maths: in 3 months? Can you give an estimated date?


Regards (and thanks for your time).

PhistucK

unread,
14 Sep 2015 04:21:01
to helpcrypto helpcrypto, Ryan Sleevi, blink-dev

On Mon, Sep 14, 2015 at 11:04 AM, helpcrypto helpcrypto <helpc...@gmail.com> wrote:
BTW: If this decision is already made (there's no way back, no matter how many reasons anyone gives against it, including timbl), is it correct to consider that keygen will be removed/disabled in Chrome 47?

While a bit shady, this is just a pre-intent to deprecate, it seems. So it is not deprecated just yet, I think.
Once it is deprecated (which can be a matter of a few months, I think), its removal will probably take place after a few (or more) months (like with NPAPI, window.showModalDialog and others). That depends on the usage - if it goes even further down, the period will be shortened. If all of the currently supporting browsers agree on a coordinated date for removal, it may be longer (or shorter ;)).


PhistucK

Ryan Sleevi

unread,
14 Sep 2015 04:39:06
to helpcrypto helpcrypto, blink-dev
On Mon, Sep 14, 2015 at 1:04 AM, helpcrypto helpcrypto <helpc...@gmail.com> wrote:
If MS doesn't provide this feature, that is not a reason not to support it. (Otherwise Chrome shouldn't have supported <file multiple> until... 2015? lol)


It's one thing for a browser vendor to not support something _yet_. It's quite another to not support it and explicitly have no plans to support it and objections to it.
 
I understand SOP as a security principle, but I'm also worried about the user needing to register on each government domain to generate a keypair...one for taxes, another for work-office. another for townhall...how many keys is yubikey able to store? :P
If I understand properly, only if a National SSO is deployed, this could be "bypassed".

All of this is a misdirect. No national ID card is using <keygen> for enrollment in Chrome, as the functionality does not and has never existed in Chrome to let you provision national ID cards, nor will it.

So it's clear that no national ID scheme - such as filing taxes and for work - will be affected. If you have such an ID card, and you install the appropriate smart card middleware/drivers, it will just work, regardless of the handling for either application/x-x509-user-cert or <keygen>. If you need to renew your card, you have to go in person with that card, the same as you would have to do with <keygen> support today.

There is not and has never been a way to provision smartcards via HTML in Chrome.

I still don't understand, but probably it's a lack-of-english issue.
Yes=you need a new keypair? or Yes=you can share the keypair somehow?

I had hoped the link would have made it clear, because in the introductory paragraph, it explains precisely that it is intended for using the same key across multiple applications (which may themselves be across multiple devices). That is, using the same key between an Android App, an iOS app, and a Web origin - provided that all three are operated by the same (logical) entity.

No, you do not need a new keypair.
Yes, you can share the keypair - using the steps in the link I gave.
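
For illustration, the trusted-facets list described in that document looks roughly like the following (the facet identifiers below are invented; see the linked spec for the authoritative format). The relying party serves it from its AppID URL, so the same key can be used from its web origin and its mobile apps:

// Illustrative only - the facet identifiers below are made up.
const trustedFacetList = {
  trustedFacets: [{
    version: { major: 1, minor: 0 },
    ids: [
      "https://accounts.example.com",                 // web origin
      "android:apk-key-hash:2jmj7l5rSw0yVb_vlWAYkK",  // Android app signing key hash
      "ios:bundle-id:com.example.accounts"            // iOS app bundle id
    ]
  }]
};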
 
Anyhow, I don't yet see the problem (/reasons not) to integrate FIDO+keystore and even FIDO+X509. Could you elaborate?

I suspect this is a terminology issue, but the entire reason that FIDO is viable is because it's not integrated with the keystore. And there's nothing preventing a FIDO-backed key from being integrated with X.509, but FIDO intentionally and explicitly defines the format of messages that can be signed with that key (which is the only reason that FIDO is secure in a multi-origin world). So you would not be able to integrate FIDO with an arbitrary protocol (such as CMS or SCEP), since it's explicitly designed to restrict the ability of an origin to affect the overall format of messages (the origin can sign whatever they want in the unauthenticated portion, but the browser/UA will always implement a trusted wrapper that provides browser-mediated information)
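
As a rough illustration of that "trusted wrapper" (field names follow the published U2F raw message format; the values are invented): the browser, not the page, assembles the client data, so the origin always ends up inside what is signed.

// Sketch of the U2F authentication signature input (values invented).
// The page supplies only the challenge; the browser supplies the rest.
const clientData = JSON.stringify({
  typ: "navigator.id.getAssertion",
  challenge: "opsXqUifDriAAmWclinfbS0e-USY0CgyJHe_Otd7z8o",
  origin: "https://accounts.example.com"
});
// What the authenticator actually signs is the concatenation of:
//   SHA-256(appId) || userPresence (1 byte) || counter (4 bytes) || SHA-256(clientData)
// so an origin cannot get an arbitrary message of its choosing signed.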
 
Nor do I see the "urgency" to deprecate <keygen>. At least, you have to admit this process has been "quite fast" and perhaps "didn't have enough public discussion" before being deprecated on WHATWG/Chrome.

Some behaviours of <keygen> are going away, soon, period.

The question is, in light of these changes, what value does <keygen> itself pose at all? None.
 
BTW: If this decision is already made (there's no way back, no matter how many reasons anyone gives against it, including timbl), is it correct to consider that keygen will be removed/disabled in Chrome 47? Help me with the maths: in 3 months? Can you give an estimated date?

You should arguably consider it deprecated now (as it was already deprecated since introduction). The only question is when and how the actual _specified_ behaviours will change.

Dirk-Willem van Gulik

unread,
14 Sep 2015 05:22:47
to blink-dev, sle...@google.com, di...@webweaving.org
Folks,

Having read the various arguments, I'd like to add a few things from having worked for, with or at various government institutions, large enterprises and fairly large public organisations (in the medical field, nuclear research labs and in broadcasting).

And most of all, I'd like to plead for an 'open' web in which one can build & experiment without asking permission.

While some state strongly that client side, x.509 based PKI's do not work at internet scale; they are a common beast in isolated communities. A very large percentage of the military on the western seaboard and NATO (NAVO/OTAN) uses them; there are countries where they are very widely deployed in places like hospitals, pharmacies and primary care; and lots and lots of enterprises.

While some smartcard vendors may want to spin this differently, I've found that a lot of this relies on *BOTH* keygen -and- the MS ActiveX component. If not throughout its whole lifecycle, then certainly during key phases at the start _and_ during the phase where it migrates into the more arcane parts of the organisation, with old browsers, Blackberries and odd industrial kit.

And it is my experience that this is, conversely, even 'more' true if there are mature MS Active Directory, SCEP and whatnot processes in place.

As the combination of keygen -and- MS's ActiveX enroll provides the 'fallback' to a very low (and crappy!) common denominator which is always there. And gives the organisation a fallback.

The primary use case for <keygen> is there, in that fallback.

Or to put it the other way: in organisations where one deploys a client PKI without such a fallback, there is generally more of a need for some special 'backdoor' or 'fallback password' over basic-auth. And with that you 'break' a lot of strategic & organisational patterns on the way to more mature technologies.

I would stress that it is _also_ a key social element in a large organisation while it follows its path to better things like FIDO and whatnot. It creates, for the tech thought leaders, especially those close to operations and engineering, a 'no need to ask permission' path to experiment and try tactics prior to embarking on something more strategic and 'big company'.

So for those two reasons I would strongly urge leaving keygen in, until there is a solid replacement that, in that key webby style, does not require permission & money or logistics up front, but can be experimented with early and deployed with forgiveness later.

Finally - contrary to a lot of these posts - I've generally found that enterprises do tend to have smart individuals who understand the brokenness of MD5 in the SPKAC and whatnot, and very consciously make the decision to use it anyway.

BECAUSE relying on some smartcard vendor, third party, some big system integrator or whatever is, in a lot of federated/distributed organisations, just as painful (albeit in a different manner) - i.e. it is almost as 'hostile' to the user as the type of hostility alluded to in the original post. 

My personal take on this is that having keygen around for quite a while longer is at the very least going to _help_, not hinder, the introduction of the various FIDO approaches; and at 'worst' it is a key safety net that is essential. People in the enterprise world are conservative, and they recall the Liberty Alliance, Windows CardSpace, Windows Live ID, Microsoft Passport and so on.

The open web and its outmoded KEYGEN (and its ActiveX.enroll brethren) have been the only constant through all of that.

Dw.





Ryan Sleevi

unread,
14 Sep 2015 05:30:14
to Dirk-Willem van Gulik, blink-dev, di...@webweaving.org
On Mon, Sep 14, 2015 at 2:22 AM, Dirk-Willem van Gulik <dir...@gmail.com> wrote:
While some state strongly that client side, x.509 based PKI's do not work at internet scale; they are a common beast in isolated communities. A very large percentage of the military on the western seaboard and NATO (NAVO/OTAN) uses them; there are countries where they are very widely deployed in places like hospitals, pharmacies and primary care; and lots and lots of enterprises.

While this is certainly true, it's also categorically true that none of these enterprises use <keygen>, because <keygen> does not allow interaction with the smart card.

What remains is internal PKIs in use without a smart card, and as such, existing systems management tools work (and work better) for this case. As you note, more mature solutions - such as Active Directory - exist to provision such devices.

Should the browser be the way to configure your operating system? Unquestionably, no. It may help you get the configuration files necessary, but the idea of the user agent making persistent operating system modifications is not a reasonable request or a sustainable feature.

dir...@gmail.com

unread,
14 Sep 2015 05:34:21
to blink-dev, helpc...@gmail.com, sle...@google.com

On Monday, September 14, 2015 at 10:39:06 AM UTC+2, Ryan Sleevi wrote:
I understand SOP as a security principle, but I'm also worried about the user needing to register on each government domain to generate a keypair...one for taxes, another for work-office. another for townhall...how many keys is yubikey able to store? :P
If I understand properly, only if a National SSO is deployed, this could be "bypassed".

All of this is a misdirect. No national ID card is using <keygen> for enrollment in Chrome, as the functionality does not and has never existed in Chrome to let you provision national ID cards, nor will it.
....
There is not and has never been a way to provision smartcards via HTML in Chrome.

While, in my personal experience, very few of the modern National ID cards use anything but a smartcard - and hence are centrally provisioned and then activated in the field without the need for KEYGEN (as it only operates on the 'soft token' in the browser) - I would not dare to say that no National ID card scheme uses Keygen at all.

I've found it fairly common to see things at the fringes of these schemes fall back to soft tokens and keygen for special things: a permission cert for an electronic passport reader, an entry stile, a national-ID-coupled energy credit scheme that needs a client cert in some solar panel array, an older generation taxi, weed-killer or nitrate-management app. And as it gets more industry specific (e.g. medical cards and the high end of embedded devices), the amount of leeway & tolerance towards 'non chipcard' soft-token devices gets more mature.

Dw.

dir...@gmail.com

unread,
14 Sep 2015 05:41:04
to blink-dev, dir...@gmail.com, di...@webweaving.org, sle...@google.com


On Monday, September 14, 2015 at 11:30:14 AM UTC+2, Ryan Sleevi wrote:


On Mon, Sep 14, 2015 at 2:22 AM, Dirk-Willem van Gulik <dir...@gmail.com> wrote:
While some state strongly that client side, x.509 based PKI's do not work at internet scale; they are a common beast in isolated communities. A very large percentage of the military on the western seaboard and NATO (NAVO/OTAN) uses them; there are countries where they are very widely deployed in places like hospitals, pharmacies and primary care; and lots and lots of enterprises.

While this is certainly true, it's also categorically true that none of these enterprises use <keygen>, because <keygen> does not allow interaction with the smart card.

I think you are going too fast there; yes, the common mode in the modern systems is indeed a smartcard, and KEYGEN has no bearing on that at all. But virtually all of these systems also need to deal with the realities of arcane devices, special cases, older kit, 'soft tokens', hardware, special exceptions where you actually want to permit a clonable key under some limited settings, and so on.

And there keygen comes in again.

What remains is internal PKIs in use without a smart card, and as such, existing systems management tools work (and work better) for this case. As you note, more mature solutions - such as Active Directory - exist to provision such devices.

And again, in my personal experience, the more mature and functional such an AD-based setup is, the more likely it is to find, at its fringes, key use of ActiveX.enroll() and keygen. And *because* of that, not despite it, the solution as a whole had the capture -and- staying power for a large enterprise.

Sure, technically a lot of this can be done differently, and better (e.g. with a few openssl commands and curl) - but the persuasive path of the web - experiment and ask forgiveness later, combined with a less steep learning curve - generally seems not to 'go' straight to the complex thing.

So IMHO the way to give FIDO et al. the best possible bedding to land on and grow is exactly by keeping a very low (and crappy) common denominator there until you no longer need it.

But strip it too early and you lose the 'trust' of the people who understand the tech (and who may have been bitten by an earlier 'identity' promise before). Because they trust it a lot more if they can verify that something like keygen still works, worst comes to worst.

Dw.


 

Ryan Sleevi

unread,
14 Sep 2015 05:59:26
to Dirk-Willem van Gulik, blink-dev, di...@webweaving.org
On Mon, Sep 14, 2015 at 2:41 AM, <dir...@gmail.com> wrote:

I think you are going too fast there; yes, the common mode in modern systems is indeed a smartcard, and KEYGEN has no bearing on that at all. But virtually all of these systems also need to deal with the realities of arcane devices, special cases, older kit, 'soft tokens', hardware, and special exceptions where you actually want to permit a clonable key under some limited settings, and so on.

But doesn't that logically imply that <keygen> is unnecessary for Chrome, since its use in such provisioning systems is for arcane devices, older kit, and special exceptions? That is, as you note, <keygen> has no bearing on modern, secure deployments, because it's unfit for that purpose.

The argument of soft token provisioning (aka modifying the operating system) doesn't seem particularly compelling, especially when, if you're willing to accept soft tokens, and you're willing to trust the provisioning endpoint (which is true for every one of these systems, by virtue of being enterprise), then you can use either server-generated keys (which you already have to do for the many devices that don't support <keygen>) or WebCrypto-mediated exportable keys.

If the argument is that these systems aren't updated, therefore can't use any of the _existing_ alternatives offered by Chrome (such as WebCrypto) or the OS (such as PKCS#12 import or out-of-band provisioning), well, that's not necessarily a strong argument. Inevitably though, if that were truly the case, then one could presumably polyfill <keygen> via a Chrome extension (which could be enterprise deployed), such that it injected JS into target pages and looked for <keygen> elements, replacing them with custom Javascript that used WebCrypto to create a key in extension-backed key storage, and then used PKCS#8/PKCS#12 to deliver that into the user's keystore of choice. That flow would still work.
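A minimal sketch of what such an extension content script could look like, assuming - purely for illustration, none of these names come from the thread - that the issuing server is adapted to accept a base64-encoded SPKI public key in place of an SPKAC blob:

// Illustrative sketch only: replace each <keygen> with a hidden field carrying
// a WebCrypto-generated public key (base64 SPKI). Assumes a server adapted to
// accept SPKI instead of SPKAC; key storage and PKCS#12 wrapping are elided.
async function polyfillKeygen(): Promise<void> {
  for (const el of Array.from(document.querySelectorAll('keygen'))) {
    const keyPair = await crypto.subtle.generateKey(
      {
        name: 'RSASSA-PKCS1-v1_5',
        modulusLength: 2048,
        publicExponent: new Uint8Array([1, 0, 1]),
        hash: 'SHA-256',
      },
      true, // extractable, so the private key can later be wrapped as PKCS#8/PKCS#12
      ['sign', 'verify'],
    );

    const spki = await crypto.subtle.exportKey('spki', keyPair.publicKey);
    const b64 = btoa(String.fromCharCode(...new Uint8Array(spki)));

    const input = document.createElement('input');
    input.type = 'hidden';
    input.name = el.getAttribute('name') ?? 'pubkey';
    input.value = b64;
    el.replaceWith(input);
    // The private key would live in extension-backed storage until the issued
    // certificate comes back, at which point both can be exported for import
    // into the user's keystore of choice.
  }
}

The point of the sketch is only that the <keygen>-shaped hole can be filled without the browser itself writing to the OS keystore; the import step can stay explicit and user-mediated.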

But to argue that no change is acceptable is not really a productive argument, since some change is inevitable, and arguing "no user interaction is acceptable" is neither viable for the present nor the future. While I appreciate the distinction between one click versus two, at least if we acknowledge that then we're acknowledging it's functionally identical, and it's just an experience issue - at which point, we can then accept that user experiences change.

Dirk-Willem van Gulik

unread,
14 Sep 2015 06:09:27
to Ryan Sleevi, Dirk-Willem van Gulik, blink-dev
Two thoughts here - I sought to make clear that it is not a situation of 'no change is acceptable’ — but one where you need to carry the torch until you are in a good position to change.

We are not today. We are sitting on a huge pile of failed initiatives; FIDO is not quite there yet; and the enterprise world is a mess.

So, as in the enterprise, today's modern system is tomorrow's legacy system; it behoves us to keep a lowest common denominator in our technology until we have the alternatives well established; and think in periods of 10 years and longer. Give UAF and U2F the decade they need to grow & establish themselves. And give the early pioneers enough (old, crappy) tech to fall back on _during_ that decade to give the FIDO community their fighting chance.

Dw.

Ryan Sleevi

unread,
14 Sep 2015 06:17:04
to Dirk-Willem van Gulik, Dirk-Willem van Gulik, blink-dev
On Mon, Sep 14, 2015 at 3:08 AM, Dirk-Willem van Gulik <di...@webweaving.org> wrote:

So, as in the enterprise, today's modern system is tomorrow's legacy system; it behoves us to keep a lowest common denominator in our technology until we have the alternatives well established;

And we have that - via WebCrypto.

If the argument is that we need to support <keygen> itself in perpetuity, and that no amount of suitable alternatives exist (whether they be via Extensions or via alternatives), then that's not really a good state either.
 
and think in periods of 10 years and longer.  

If those are the timescales you think of, then relying on a browser is not a good position :) Few platforms survive 10 years - you're either terribly insecure or you're in an entirely different position. It's not reasonable to ask the browser not to change for 10 years, when the OS underneath the browser can't even make that promise for 5 years.

For example, in the past 10 years, SSL2, SSL3, and MD5 support have been removed from browsers. RC4 is on the way out shortly. Servers have had to adjust to that - and thus should be able to adjust to <keygen>. For systems and servers that cannot, alternatives exist aplenty, so the tales of doom are not really there.

Henry Story

unread,
14 Sep 2015 06:32:57
to Ryan Sleevi, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
I'll answer two of Ryan Sleevi's points.
  1. the one about WebCrypto being a replacement for <keygen>
     summary: it can't because WebCrypto is limited to same origin, and has no tie into chrome, so no ability to put the user in control
  2. that keygen cannot be used to provision with an external hardware device
     We have a video of such a thing being done on open hardware

On 14 Sep 2015, at 11:16, 'Ryan Sleevi' via blink-dev <blin...@chromium.org> wrote:



On Mon, Sep 14, 2015 at 3:08 AM, Dirk-Willem van Gulik <di...@webweaving.org> wrote:

So, as in the enterprise, today's modern system is tomorrow's legacy system; it behoves us to keep a lowest common denominator in our technology until we have the alternatives well established;

And we have that - via WebCrypto.

You don't have that via WebCrypto, since WebCrypto does not allow one to create a certificate
that can be used in the keystore used by TLS to authenticate to other web sites.

WebCrypto, as has been argued by you here and on the TAG, is designed - as FIDO is - to be single origin.

I argued that SOP is different to User Control here

X509 Client Certificates ( and things that replace them ) allow an agent to authenticate cryptographically with built in support by the chrome across origins, whilst being in control,
which is why this is  so useful.


If the argument is that we need to support <keygen> itself in perpetuity, and that no amount of suitable alternatives exist (whether they be via Extensions or via alternatives), then that's not really a good state either.
 
and think in periods of 10 years and longer.  

If those are the timescales you think of, then relying on a browser is not a good position :) Few platforms survive 10 years - you're either terribly insecure or you're in an entirely different position. It's not reasonable to ask the browser not to change for 10 years, when the OS underneath the browser can't even make that promise for 5 years.

For example, in the past 10 years, SSL2, SSL3, and MD5 support have been removed from browsers. RC4 is on the way out shortly. Servers have had to adjust to that - and thus should be able to adjust to <keygen>. For systems and servers that cannot, alternatives exist aplenty, so the tales of doom are not really there.

yes, and as SSL2 and SSL3 were removed from browsers they were replaced by something better, namely TLS 1.0, TLS 1.2 and soon TLS 1.3, which is being worked on and could solve a lot of the client certificate authentication issues of current systems.

In each of these cases the aim was to improve the technology, not to abandon it. The failings
of keygen could be easily improved and there is no case made yet that this cannot be integrated 
into FIDO's hardware capabilities.

On Mon, Sep 14, 2015 at 2:22 AM, Dirk-Willem van Gulik <dir...@gmail.com> wrote:
While some state strongly that client-side, x.509-based PKIs do not work at internet scale, they are a common beast in isolated communities. A very large percentage of the military on the western seaboard and NATO (NAVO/OTAN) uses them; there are countries where they are very widely deployed in places like hospitals, pharmacies and primary care; and lots and lots of enterprises.

While this is certainly true, it's also categorically true that none of these enterprises use <keygen>, because <keygen> does not allow interaction with the smart card.

I have a video up that shows that this is not the case. It shows how the German Privacy Foundation's open source, open hardware crypto stick uses keygen for provisioning.

It's the video at the bottom of the page here. 

Perhaps he and others can provide more technical documentation pointers to such features.

Henry

helpcrypto helpcrypto

unread,
14 Sep 2015 06:37:19
to Ryan Sleevi, blink-dev
On Mon, Sep 14, 2015 at 10:38 AM, Ryan Sleevi <sle...@google.com> wrote:
On Mon, Sep 14, 2015 at 1:04 AM, helpcrypto helpcrypto <helpc...@gmail.com> wrote:
If MS doesn't provide this feature, is not a reason not to support it. (Otherwise Chrome shouldn't support <file multiple> until....2015? lol)

It's one thing for a browser vendor to not support something _yet_. It's quite another to not support it and explicitly have no plans to support it and objections to it.

Probably there will be examples of this kind of situations I'm not aware of. Anyhow, I see your point.

 
 
I understand SOP as a security principle, but I'm also worried about the user needing to register on each government domain to generate a keypair...one for taxes, another for work-office, another for townhall...how many keys is a YubiKey able to store? :P
If I understand properly, only if a National SSO is deployed, this could be "bypassed".

All of this is a misdirect. No national ID card is using <keygen> for enrollment in Chrome, as the functionality does not and has never existed in Chrome to let you provision national ID cards, nor will it.

At least in Spain, National ID is not populated via the browser, and considering EU Regulation 910/2014, each country has to support all the certificates published on European TSLs.
That is, there are some certs from specific Issuers/CAs that are generated/enrolled using <keygen>. We use some of them, e.g. FNMT.
Even more: Spanish eID is a pain in the ass, so most people use FNMT/other issuers/pre-shared passwords.

 
So it's clear that no national ID scheme - such as filing taxes and for work - will be affected.

I hope you now agree that "it will be affected".

 
If you have such an ID card, and you install the appropriate smart card middleware/drivers, it will just work, regardless of the handling for either application/x-x509-user-cert or <keygen>. If you need to renew your card, you have to go in person with that card, the same as you would have to do with <keygen> support today.

Not the case for Spain's main CA (and others also). Enrollment and renewal use <keygen>. In fact, FNMT renewal got screwed when Mozilla removed the signText function.

 
There is not and has never been a way to provision smartcards via HTML in Chrome.

We are using Firefox, but I think similar implications would arise. The Mozilla discussion seems to be "happening here" also.

 

I still don't understand, but probably it's a lack-of-english issue.
Yes=you need a new keypair? or Yes=you can share the keypair somehow?

I had hoped the link would have made it clear, because in the introductory paragraph, it explains precisely that it is intended for using the same key across multiple applications (which may themselves be across multiple devices). That is, using the same key between an Android App, an iOS app, and a Web origin - provided that all three are operated by the same (logical) entity.

No, you do not need a new keypair.
Yes, you can share the keypair - using the steps in the link I gave.

Thanks. I'll re-read to practise my english comprehension skills ;)

 
 
Anyhow, I don't yet see the problem (/reasons not) to integrate FIDO+keystore and even FIDO+X509. Could you elaborate?

I suspect this is a terminology issue, but the entire reason that FIDO is viable is because it's not integrated with the keystore. And there's nothing preventing a FIDO-backed key from being integrated with X.509, but FIDO intentionally and explicitly defines the format of messages that can be signed with that key (which is the only reason that FIDO is secure in a multi-origin world). So you would not be able to integrate FIDO with an arbitrary protocol (such as CMS or SCEP), since it's explicitly designed to restrict the ability of an origin to affect the overall format of messages (the origin can sign whatever they want in the unauthenticated portion, but the browser/UA will always implement a trusted wrapper that provides browser-mediated information)
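For a rough sense of what that trusted wrapper looks like - a sketch of the U2F raw message layout as I understand it; treat the exact field order as something to verify against the FIDO specs - the bytes an authenticator signs during authentication are mostly assembled by the browser and token, not the page:

// Illustrative layout of the U2F authentication signature base. The origin only
// influences the challenge inside the client data; the application parameter,
// user-presence byte and counter come from the browser/token.
async function u2fSignedData(
  appId: string,
  clientDataJson: string,
  userPresence: number,
  counter: number,
): Promise<Uint8Array> {
  const enc = new TextEncoder();
  const appParam = new Uint8Array(
    await crypto.subtle.digest('SHA-256', enc.encode(appId)),
  );
  const challengeParam = new Uint8Array(
    await crypto.subtle.digest('SHA-256', enc.encode(clientDataJson)),
  );

  const out = new Uint8Array(32 + 1 + 4 + 32);
  out.set(appParam, 0);                             // application parameter (browser-derived)
  out[32] = userPresence;                           // set by the token, not the page
  new DataView(out.buffer).setUint32(33, counter);  // token-maintained signature counter (big-endian)
  out.set(challengeParam, 37);                      // hash of the browser-assembled client data
  return out;
}

Because the signed structure is fixed, an origin cannot repurpose the key to sign arbitrary protocol messages such as CMS or SCEP payloads.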

My "real" question still remain: Why I can't store FIDO keys on NSS/browser/keystore?
From this paragraph...could I use an X509 public key for the FIDO key register process? (hence sharing X509/FIDO?)

 
 
Neither do I see the "urgency" to deprecate <keygen>. At least, you have to admit this process has been "quite fast" and perhaps "didn't have enough public discussion" before being deprecated in WHATWG/Chrome.

Some behaviours of <keygen> are going away, soon, period.

As already asked: I would love to know what precisely "soon" means. Tomorrow? Before 2016? 2Q 2016?
I understand your position, but an ETA would be much appreciated. If I'm not wrong, it seems to be M-47.
 

The question is, in light of these changes, what value does <keygen> itself pose at all? None.
 
BTW: If this decision is already made (there's no way back, no matter how many reasons anyone gives against it, including timbl), is it correct to consider that keygen will be removed/disabled in Chrome 47? Help me with the maths: in 3 months? Can you give an estimated date?

You should arguably consider it deprecated now (as it was already deprecated since introduction). The only question is when and how the actual _specified_ behaviours will change.

See above. :P

Thanks for your answers...you kind of opened Pandora's box :P

Dirk-Willem van Gulik

unread,
14 Sep 2015 06:39:25
to Ryan Sleevi, Dirk-Willem van Gulik, blink-dev
On 14 Sep 2015, at 12:16, Ryan Sleevi <sle...@google.com> wrote:

> On Mon, Sep 14, 2015 at 3:08 AM, Dirk-Willem van Gulik <di...@webweaving.org> wrote:
>
> So, as in the enterprise, today's modern system is tomorrow's legacy system; it behoves us to keep a lowest common denominator in our technology until we have the alternatives well established;
>
> And we have that - via WebCrypto.
>
> If the argument is that we need to support <keygen> itself in perpetuity, and that no amount of suitable alternatives exist (whether they be via Extensions or via alternatives), then that's not really a good state either.

No - again - as I tried to say in the sentence before that - I sought to make clear that it is not a situation of 'no change is acceptable' - but one where you need to carry the torch until you are in a good position to change.

> and think in periods of 10 years and longer.
>
> If those are the timescales you think of, then relying on a browser is not a good position :) Few platforms survive 10 years - you're either terribly insecure or you're in an entirely different position. It's not reasonable to ask the browser not to change for 10 years, when the OS underneath the browser can't even make that promise for 5 years.

Well - we have a lot of browser & server sundry and other technology that does live on the scales of 10 years and longer. And needs to.

Organisations and physical infrastructures have not yet learned how to change quickly. And while they are learning, quicker and quicker, 5 years is still very fast.

> For example, in the past 10 years, SSL2, SSL3, and MD5 support have been removed from browsers. RC4 is on the way out shortly. Servers have had to adjust to that - and thus should be able to adjust to <keygen>. For systems and servers that cannot, alternatives exist aplenty, so the tales of doom are not really there.

I’d be more optimistic about that - in that space we have learned a lot; and while we cannot easily swap ‘like for like’ a capability; we have learned how to deal with those parameter changes - and how to adjust.

The annual 'Algorithms, key size and parameters report 2014' with cryptographic guidelines by ENISA, and its equivalent NIST publications, are now widely understood and, in my experience, well implemented. So no need to worry about doom there. Publications like this have done a world of good.

And in fact - using publications like this may be a very good way to deal with the transition away from keygen. Having it announced, scheduled and annually reported on is very very helpful for the people who try to make this happen -and- are having to find arguments why it has to happen in 2018 rather than 2025.

And absolutely - dropping keygen is something we can deal with. No need to invoke doom there either. But the collateral damage to trust in the infrastructure from retiring it early is such that it becomes harder to introduce WebCrypto and, especially, the FIDO standards.

In an enterprise setting you do not easily move people with the stick of taking something away; people 'inside' are desperately aware that they need to move. And when something is important to the organisation - you cannot keep it away. Plenty of Win98 and, I hear, even IPv4!, still in use.

Instead - you give them the tools to move. And that means an alternative; at least one other modern alternative in case the first dies _AND_ most of all a ‘worst comes to worst’ fallback that is going to be around for as long as the credible plan you had for the new/modern alternative.

So the more we want to replace keygen - the more critical it is that we retire it in a very controlled fashion. As otherwise a common reaction is complete stagnation.

Dw

Dirk-Willem van Gulik

unread,
14 Sep 2015 06:46:54
to Henry Story, Ryan Sleevi, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On 14 Sep 2015, at 12:32, Henry Story <henry...@gmail.com> wrote:
>
> 2. that keygen cannot be used to provision with an external hardware device
> We have a video of such a thing being done on open hardware

Aye - it is also pretty common on commercial hardware. Though in terms of overall impact, I'd be more worried about losing the soft-token than the keygen+hardware use.

And it is noted that the latter would affect some of the more thought-leading user communities. Which is a double-edged sword, as it may also be exactly that community that is more able to modernise quicker. For the enterprise I would still hold to the position that the stick is the worst thing to use.

Dw

Henry Story

unread,
14 Sep 2015 07:39:41
to Dirk Willem van Gulik, Ryan Sleevi, Dirk-Willem van Gulik, blink-dev, Jan Suhr

> On 14 Sep 2015, at 11:45, Dirk-Willem van Gulik <di...@webweaving.org> wrote:
>
> On 14 Sep 2015, at 12:32, Henry Story <henry...@gmail.com> wrote:
>>
>> 2. that keygen cannot be used to provision with an external hardware device
>> We have a video of such a thing being done on open hardware
>> second video at bottom of page http://bblfish.net/blog/2011/05/25/
> Aye - it is also pretty common on commercial hardware. Though in terms of overall impact, I'd be more worried about losing the soft-token than the keygen+hardware use.
>
> And it is noted that the latter would affect some of the more thought-leading user communities. Which is a double-edged sword, as it may also be exactly that community that is more able to modernise quicker. For the enterprise I would still hold to the position that the stick is the worst thing to use.

The external stick helps demonstrate how ultra-secure hardware-based cryptography can tie into <keygen>. This means that it is directly relevant to the hardware-based crypto proposed by FIDO, which would not suffer from the complexities of an external device. Just imagine this with fingerprint-enabled crypto in the OS. One could use such keys to open one's front door, open one's car, etc, etc...

Hardware-generated public/private keys built into smartphones and other devices, in combination with an improved <keygen> or equivalent functionality, would allow Identity Providers to create much more reliable Client Certificates (be they X509 or future JOSE ones such as those worked on by the IETF) to be used for cross-origin authentication. This would make cryptography ever more secure and much more widely adopted. Add WebID to the mix, and one could have an explosion of decentralised, user-controlled client certificate authentication.

I am told this type of provisioning may be more widely used than currently acknowledged in this debate. Hopefully those more knowledgeable will be able to chime in here.

>
> Dw

Dirk-Willem van Gulik

unread,
14 Sep 2015 08:24:51
to Henry Story, Ryan Sleevi, Dirk-Willem van Gulik, blink-dev, Jan Suhr

> On 14 Sep 2015, at 13:39, Henry Story <henry...@gmail.com> wrote:
...
> ... stick helps demonstrate how ultra secure hardware based cryptography can tie into <keygen>.

But in all fairness - we often build our commercial or open source stuff and hook it into PKCS (#10, #12) (and hence the wiring up to Enroll and Keygen) - as that simply makes it spring to life much more easily, in places like the keygen pull-down, the OS keychain and so on.

Though I guess that is also a powerful argument to keep KEYGEN as one of the places to let that infrastructure surface. As your video on http://bblfish.net/blog/2011/05/25/ shows so nicely.

> I am told this type of provisioning may be more widely used than currently
> acknowledged in this debate.

Agreed. And a large part is that it pretty much is the only hook we have to wire this in; everything else means not just a simple PKCS module (which can then also be wired into PAM, into the keychain); but supplying a whole ecosystem.

Dw.


Ryan Sleevi

unread,
14 Sep 2015 11:28:08
to Henry Story, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On Mon, Sep 14, 2015 at 3:32 AM, Henry Story <henry...@gmail.com> wrote:
I'll answer two of Ryan Sleevi's points.
  1. the one about WebCrypto being a replacement for <keygen>
     summary: it can't because WebCrypto is limited to same origin, and has no tie into chrome, so no ability to put the user in control

You're not discussing <keygen>, you're discussing application/x-x509-user-cert then.
 
  2. that keygen cannot be used to provision with an external hardware device
     We have a video of such a thing being done on open hardware

This is not and has never been possible in Chrome, so it's clearly not a necessary or required implementation feature. It's also something Chrome would never implement.

You don't have that via WebCrypto, since WebCrypto does not allow one to create a certificate
that can be used in the keystore used by TLS to authenticate to other web sites.

Neither does <keygen>. <keygen> does *nothing* with certificates, nor has anything to do with them. That's application/x-x509-user-cert. However, that handling is also going away, with no replacement, and that's never been standardized behaviour nor implemented beyond Firefox and Chrome, and incompatibly so.

However, you can still synthesize and download a PKCS#12 file, so you still have a path to import a key and a certificate into an OS store, mediated with a user interactivity requirement (of finding and executing the downloaded file, and spawning the OS interactivity). This is the same requirement imposed by Safari, so again, if the concern is interoperability, the future direction is already interoperable with Safari's implementation.
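As a minimal sketch of that download path (illustrative only; the server is assumed to have assembled the PKCS#12 bundle already), the page merely hands the bytes to the user, and importing them remains an explicit, OS-mediated step:

// Illustrative sketch: offer a server-produced PKCS#12 bundle as a download.
// Opening the saved file invokes the OS certificate-import flow, which supplies
// the user-interaction requirement described above.
function offerPkcs12Download(p12Bytes: ArrayBuffer, filename = 'client-cert.p12'): void {
  const blob = new Blob([p12Bytes], { type: 'application/x-pkcs12' });
  const url = URL.createObjectURL(blob);

  const a = document.createElement('a');
  a.href = url;
  a.download = filename; // saved to disk; the user opens it to start the OS import wizard
  a.click();

  URL.revokeObjectURL(url);
}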

You're arguing for a preservation of functionality that was never part of <keygen>, nor specified, so you can hopefully see why that's so problematic.

Ryan Sleevi

unread,
14 Sep 2015 11:34:32
to helpcrypto helpcrypto, blink-dev
On Mon, Sep 14, 2015 at 3:36 AM, helpcrypto helpcrypto <helpc...@gmail.com> wrote:
At least in Spain, National ID is not populated via the browser, and considering EU Regulation 910/2014, each country has to support all the certificates published on European TSLs.
That is, there are some certs from specific Issuers/CAs that are generated/enrolled using <keygen>. We use some of them, e.g. FNMT.
Even more: Spanish eID is a pain in the ass, so most people use FNMT/other issuers/pre-shared passwords.

Except those certificates aren't QCP-SSD, they're, at best, QCP. You can't handle enrollment or renewal of eID cards through <keygen>, so FNMT is at best issuing QCP certificates, which don't carry the same 'disruption to basic services' that is being suggested here. That is, only a QCP-SSD cert carries the ability to be used for all the services, and Chrome has never been able to provision such certificates.

Would this affect Firefox? Sure. But the very idea of using the browser as the vector for provisioning QCP-SSD is already problematic on multiple fronts.
 
We are using Firefox, but I think similar implications would arise. The Mozilla discussion seems to be "happening here" also.

It helps to be clear in that distinction, because these concerns are not concerns that apply to Chrome. They're concerns of "One UA's implementation that is different from every other UA's interpretation", which hopefully positions it to be more easily understood as why relying on that is so problematic.
 
My "real" question still remain: Why I can't store FIDO keys on NSS/browser/keystore?
From this paragraph...could I use an X509 public key for the FIDO key register process? (hence sharing X509/FIDO?)

As I indicated before, you can use X.509 certificates (I have no idea what you mean by "X509 public key") with FIDO, but the keypair MUST be generated as part of the FIDO enrollment, and thus must necessarily be scoped to a single origin. You cannot create a keypair that can be used across origins, and that's precisely why FIDO represents a viable path forward.

It is by design, and intentionally not supported. Could FIDO relax that? Yes. Then it would not get implemented, because multi-origin keypairs are fundamentally problematic.

Henry Story

unread,
14 Sep 2015 11:58:23
to Ryan Sleevi, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On 14 Sep 2015, at 16:28, Ryan Sleevi <sle...@google.com> wrote:



On Mon, Sep 14, 2015 at 3:32 AM, Henry Story <henry...@gmail.com> wrote:
I'll answer two of Ryan Sleevi's points.
  1. the one about WebCrypto being a replacement for <keygen>
     summary: it can't because WebCrypto is limited to same origin, and has no tie into chrome, so no ability to put the user in control

You're not discussing <keygen>, you're discussing application/x-x509-user-cert then.

Note that in the name of this thread you have yourself tied both x509 and keygen together.
"(Pre-)Intent to Deprecate: <keygen> element and application/x-x509-*-cert MIME handling"

You can't deny that both <keygen> and x-x509-* cert mime type handling have tie ins into the chrome. <keygen> brings up an extra field in a form, and x509 cert mime type handling shows that
the certificate is being uploaded to the keystore.

And you can't deny that webcrypto does not. 

 
  2. that keygen cannot be used to provision with an external hardware device
     We have a video of such a thing being done on open hardware

This is not and has never been possible in Chrome, so it's clearly not a necessary or required implementation feature. It's also something Chrome would never implement.

It's not helpful in a discussion to take absolute positions like this. It makes you look like a dictator, with no intent to listen to what anyone has to say. As far as I know this is an open source project, and there is a community of web users beyond the browser vendors that has things to say; it would be polite to listen to them without dismissing all ideas in advance.


You don't have that via WebCrypto, since WebCrypto does not allow one to create a certificate
that can be used in the keystore used by TLS to authenticate to other web sites.

Neither does <keygen>. <keygen> does *nothing* with certificates nor has anything to do. That's application/x-x509-user-cert. However, that handling is also going away, with no replacement, and that's never been standardized behaviour nor implemented beyond Firefox and Chrome, and incompatibly so.

From my experience and others who have actually written code to use these features I'd like to disagree. We have been able to use keygen and the x-509 certificate mime types without distinction across Chrome, Opera, Firefox and Safari. In IE the code was a bit different but the logic was pretty much the same.

That this was not well documented is another matter entirely. And one that Tim Berners Lee in this thread has asked to be fixed.


However, you can still synthesize and download a PKCS#12 file, so you still have a path to import a key and a certificate into an OS store, mediated with a user interactivity requirement (of finding and executing the downloaded file, and spawning the OS interactivity). This is the same requirement imposed by Safari, so again, if the concern is interoperability, the future direction is already interoperable with Safari's implementation.

We'd like to import it into the store used by the browser. Otherwise what's the point?
Apple has no problem thinking of the browser as part of the OS, just as Microsoft does. So the OS keystore is, for them, not that different from the browser. You may disagree and create your own browser store, as Opera and Firefox do. Fine. But just because you don't want to see the relation between the two doesn't mean others don't.


You're arguing for a preservation of functionality that was never part of <keygen>, nor specified, so you can hopefully see why that's so problematic.

You are pretending that because something was not fully specified, it did not actually work cross-platform. We have code and experience to prove it does. That is why it is actually not problematic.

The problem is that you don't really want to take input from any users, be they Tim Berners-Lee, people from the BBC, governments, hackers, etc. etc... That is not the way to run an open source project.

Dirk-Willem van Gulik

unread,
14 Sep 2015 12:17:43
to Ryan Sleevi, Henry Story, Dirk-Willem van Gulik, blink-dev, Jan Suhr
Ryan,

On 14 Sep 2015, at 17:28, Ryan Sleevi <sle...@google.com> wrote:
> On Mon, Sep 14, 2015 at 3:32 AM, Henry Story <henry...@gmail.com> wrote:
...
> summary: it can't because WebCrypto is limited to same origin, and has no tie into chrome, so no ability to put the user in control
>
> You're not discussing <keygen>, you're discussing application/x-x509-user-cert then.
..
> This is not and has never been possible in Chrome, so it's clearly not a necessary or required implementation feature. It's also something Chrome would never implement.

I understand what you are trying to say; and the documents on whatwg.org are very light on what is implemented behind the keygen element (and do not even mention the Visual Basic/ActiveX enroll()).

But in all fairness - keygen is used as an (early) step in enrolling for a generally quite persistent client side x.509 certificate. And pretty much only for that. So yes; we need to discuss the whole x509 mess; as _that_ is what we break.

Now the docs may be crap or non-existent. Which is a problem and should be fixed so (in)compliance becomes measurable. If I had to implement it from scratch - I would loathe the inordinate amount of reverse engineering and testing it would take.

But at the same time - people have gotten this to work rather reliably; over very long periods of time; sometimes at a fairly large scale; across quite a few generations of browsers and servers - in a very wide range of settings, with little control over their environments; environments that evolved.

Now arguably a lot of that is thanks to PKCS; a lot of ‘hardcoded’ assumptions in OpenSC and downright cargo culting between the various implementations UI wise. So, while often as badly documented as keygen, they keep the backbone common & stable.

Some of the oldest ones I can think of date back to '98*; and are still operable enough for enterprises; across over 15 years of Windows, Solaris, and later OSX and Linux; and have been able to move along with the various key lengths and algorithms as well.

So despite the crappy docs & specs - folks have kept it all working.

Dw

*: The oldest commercial deployment I still have docs for was with Netscape Gold 3.0.1 though.

Ryan Sleevi

unread,
14 Sep 2015 13:15:01
to Henry Story, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On Mon, Sep 14, 2015 at 8:58 AM, Henry Story <henry...@gmail.com> wrote:
You can't deny that both <keygen> and x-x509-* cert mime type handling have tie ins into the chrome. <keygen> brings up an extra field in a form, and x509 cert mime type handling shows that
the certificate is being uploaded to the keystore.

And you can't deny that webcrypto does not. 

Let me try to put it in a different way:
- Can you accomplish with WebCrypto+application/x-x509-user-cert what you can with <keygen>+application/x-x509-user-cert?

The answer is yes.

Elsewhere, you argue that WebCrypto is not secure, because an attacker 'might' be able to inject JS. Well, if they can inject JS, they can manipulate <keygen> (such as removing it and replacing it with a WebCrypto polyfill), so that argument doesn't hold. The server's access to the key material is limited to the step of provisioning - but it's that same step of provisioning where they're expected to use <keygen>. An evil server could remove the <keygen> tag and just use application/x-x509-user-cert as a cert+key delivery vector - or condition the user to click a PKCS#12 file - so the 'security' strengths of <keygen> do not hold up under any conceivable attack model when you actually look at the system holistically, rather than in isolation.

The next question is whether you can do with WebCrypto alone what you can do with WebCrypto+application/x-x509-user-cert. And the answer again is "Yes" on a technical level, with the only difference being a degree of user interaction.

But here again, application/x-x509-user-cert has to change to include interactivity, so having a requirement of interaction is no different. So yes,  you can do the same thing, with the same costs.
 
It's not helpful in a discussion to take absolute positions like this. It makes you look like a dictator, with no intent to listen to what anyone has to say. As far as I know this is an open source project, and there is a community of web users beyond the browser vendors that has things to say; it would be polite to listen to them without dismissing all ideas in advance.

Security decisions are made to protect the security of users. When there's a conflict, our overriding concern is that of security. That's a point that seems to be repeatedly missed here - regardless of any discussion here, there are some solutions that are unacceptable from a security standpoint, and that's not negotiable. Yes, it's open-source, but there are also unacceptable lines to be crossed.

 
From my experience and others who have actually written code to use these features I'd like to disagree. We have been able to use keygen and the x-509 certificate mime types without distinction across Chrome, Opera, Firefox and Safari. In IE the code was a bit different but the logic was pretty much the same.

That this was not well documented is another matter entirely. And one that Tim Berners Lee in this thread has asked to be fixed.

This is the same group that in 2014 said <keygen> was entirely unnecessary ( https://lists.w3.org/Archives/Public/public-webid/2014Jul/0075.html ) , and in 2011 showed it was possible to polyfill with Javascript ( http://www.w3.org/2011/identity-ws/papers/idbrowser2011_submission_7.pdf )

You're arguing for the sanctity of a feature that even your fellow developers agree is unnecessary for your use case.
 
The problem is that you don't really want to take input from any users, be they Tim Berners-Lee, people from the BBC, governments, hackers, etc. etc... That is not the way to run an open source project.

I think that's a fairly gross mischaracterization. Your input and feedback has been taken into consideration - and your use case even before you provided feedback.

We remain at an impasse, however, because you don't understand why things are changing - despite efforts to explain - and you don't want to accept you don't need it - despite acknowledging it for years. Just because the conclusion is different than what you want doesn't mean your feedback wasn't considered - but it doesn't mean that "taking feedback" will result in the desired outcome. We gather feedback, but that doesn't mean we won't take action if there's objections - it means that those objections are weighed with the other risks and costs.

This is no different than the notion of "rough consensus" (whether you talk W3C or IETF). It's not that everyone needs to agree. It's that everyone needs to be heard. And I think the sheer number of replies detailing and explaining things to you should unquestionably give you the realization that you have been heard, and understood, but there remains disagreement, and there hasn't been new information brought to this thread that was not considered, not since the thread started.

I've tried to explain to you, in the 20+ messages on this and related threads, these concerns, so that you can understand *why* your feedback is not sufficient to meaningfully alter the discussion of considerations, precisely so that you understand you have been heard. But that's not something that can go on indefinitely - at the end of the day, work needs to be done.

Henry Story

unread,
14 Sep 2015 14:44:22
to Ryan Sleevi, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On 14 Sep 2015, at 18:14, Ryan Sleevi <sle...@google.com> wrote:



On Mon, Sep 14, 2015 at 8:58 AM, Henry Story <henry...@gmail.com> wrote:
You can't deny that both <keygen> and x-x509-* cert mime type handling have tie ins into the chrome. <keygen> brings up an extra field in a form, and x509 cert mime type handling shows that
the certificate is being uploaded to the keystore.

And you can't deny that webcrypto does not. 

Let me try to put it in a different way:
- Can you accomplish with WebCrypto+application/x-x509-user-cert what you can with <keygen>+application/x-x509-user-cert?

The answer is yes.

We are working on finding an answer to this question. For the moment I and a number of people are not convinced. But we are making progress in understanding what is at stake.


Elsewhere, you argue that WebCrypto is not secure, because an attacker 'might' be able to inject JS. Well, if they can inject JS, they can manipulate <keygen> (such as removing it and replacing it with a WebCrypto polyfill), so that argument doesn't hold. The server's access to the key material is limited to the step of provisioning - but it's that same step of provisioning where they're expected to use <keygen>. An evil server could remove the <keygen> tag and just use application/x-x509-user-cert as a cert+key delivery vector - or condition the user to click a PKCS#12 file - so the 'security' strengths of <keygen> do not hold up under any conceivable attack model when you actually look at the system holistically, rather than in isolation.

As far as I know, application/x-x509-user-cert responses do not come with private keys. So, if the evil JS generated an application/x-x509-user-cert, it would not have been able to place the private key in the browser's favorite keychain, and that certificate would be of no use.


The next question is whether you can do with WebCrypto alone what you can do with WebCrypto+application/x-x509-user-cert. And the answer again is "Yes" on a technical level, with the only difference being a degree of user interaction.

yes, that's what is important, since the legal principle we need to abide by is that of user control. See the TAG finding on Unsanctioned Web Tracking, which makes the importance of this concept clear.


But here again, application/x-x509-user-cert has to change to include interactivity, so having a requirement of interaction is no different. So yes,  you can do the same thing, with the same costs.
 
It's not helpful in a discussion to take absolute positions like this. It makes you look like a dictator, with no intent to listen to what anyone has to say. As far as I know this is an open source project, and there is a community of web users beyond the browser vendors that has things to say; it would be polite to listen to them without dismissing all ideas in advance.

Security decisions are made to protect the security of users. When there's a conflict, our overriding concern is that of security. That's a point that seems to be repeatedly missed here - regardless of any discussion here, there are some solutions that are unacceptable from a security standpoint, and that's not negotiable. Yes, it's open-source, but there are also unacceptable lines to be crossed.

You mean using public keys across origins? One can actually do that with WebCrypto


 
From my experience and others who have actually written code to use these features I'd like to disagree. We have been able to use keygen and the x-509 certificate mime types without distinction across Chrome, Opera, Firefox and Safari. In IE the code was a bit different but the logic was pretty much the same.

That this was not well documented is another matter entirely. And one that Tim Berners Lee in this thread has asked to be fixed.

This is the same group that in 2014 said <keygen> was entirely unnecessary ( https://lists.w3.org/Archives/Public/public-webid/2014Jul/0075.html ) ,

Kingsley Idehen meant that one can write plugins to fix the flaws of browsers. That is a workaround rather than a solution. 

and in 2011 showed it was possible to polyfill with Javascript ( http://www.w3.org/2011/identity-ws/papers/idbrowser2011_submission_7.pdf )

I hope you'll agree that using Flash is not a good answer.

There were discussions on the mailing list but these did not lead to consensus. 

Otherwise I suppose I could just cite Rigo Wenning, W3C legal counsel, who has just argued on WebAppSec that the Same Origin Policy cannot be used as an automatic argument in all cases.


SOP is a technical tool that is important in providing certain guarantees, but it does
not answer all the issues of security and control that are legal requirements.


You're arguing for the sanctity of a feature that even your fellow developers agree is unnecessary for your use case.
 
The problem is that you don't really want to take input from any users, be they Tim Berners-Lee, people from the BBC, governments, hackers, etc. etc... That is not the way to run an open source project.

I think that's a fairly gross mischaracterization. Your input and feedback has been taken into consideration - and your use case even before you provided feedback.

We remain at an impasse, however, because you don't understand why things are changing - despite efforts to explain - and you don't want to accept you don't need it - despite acknowledging it for years. Just because the conclusion is different than what you want doesn't mean your feedback wasn't considered - but it doesn't mean that "taking feedback" will result in the desired outcome. We gather feedback, but that doesn't mean we won't take action if there's objections - it means that those objections are weighed with the other risks and costs.

This is no different than the notion of "rough consensus" (whether you talk W3C or IETF). It's not that everyone needs to agree. It's that everyone needs to be heard. And I think the sheer number of replies detailing and explaining things to you should unquestionably give you the realization that you have been heard, and understood, but there remains disagreement, and there hasn't been new information brought to this thread that was not considered, not since the thread started.

I've tried to explain to you, in the 20+ messages on this and related threads, these concerns, so that you can understand *why* your feedback is not sufficient to meaningfully alter the discussion of considerations, precisely so that you understand you have been heard. But that's not something that can go on indefinitely - at the end of the day, work needs to be done.

Perhaps you can help me with the question I asked on the TAG here where I discuss an interesting
use of WebCrypto for distributed authentication 


As you see we are very keen to learn. It would help if instead of forcing deprecation of a well known
feature we had time to work out if WebCrypto actually allowed us to replace the functionality you say it allows us to. 

By entering this discussion with an open mind we may yet reach consensus.

Dirk-Willem van Gulik

unread,
14 Sep 2015 15:05:02
to Ryan Sleevi, Henry Story, Dirk-Willem van Gulik, blink-dev, Jan Suhr

On 14 Sep 2015, at 19:14, Ryan Sleevi <sle...@google.com> wrote:



On Mon, Sep 14, 2015 at 8:58 AM, Henry Story <henry...@gmail.com> wrote:
You can't deny that both <keygen> and x-x509-* cert mime type handling have tie ins into the chrome. <keygen> brings up an extra field in a form, and x509 cert mime type handling shows that
the certificate is being uploaded to the keystore.

And you can't deny that webcrypto does not. 

Let me try to put it in a different way:

- Can you accomplish with WebCrypto+application/x-x509-user-cert what you can with <keygen>+application/x-x509-user-cert?

Would you be able to point me to a fully functional/detailed example that works in a reasonable federated/non-single-origin fashion?

I.e. where entity A can issue a client cert against a private key that is (only) in a browser (let's limit this to soft tokens for now); and that entity B can rely on. And with the only 'out of band' collusion or coordination between A and B being some cert higher up in the chain that A used when it issued and signed.

I am not familiar with such a cross domain one.

I think that would help focus the discussion. Given how _few_ flavours there are of the server-side path behind keygen and Enroll() (and that support both sufficiently widely) - having such a detailed example could conceivably allow us to implement it for the 2 or 3 server-side paths everyone (that does not go full smartcard from some commercial entity) relies on.

And we can then tackle the PKCS et.al. module world separately.

Dw.

Ryan Sleevi

unread,
14 Sep 2015 15:06:35
to Henry Story, Dirk Willem van Gulik, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On Mon, Sep 14, 2015 at 11:44 AM, Henry Story <henry...@gmail.com> wrote:
As far as I know application/x-x509-user-cert  do not come with private keys.

It can. It's complicated.
 
So, if the evil JS generated an application/x-x509-user-cert it would not have been able to place
the private key in the browser's favorite keychain, so that certificate would be of no use.

I think there's an important distinction here. No use is not the same as no impact (to the user). The ability to place certificates - without keys - is equally impactful as the ability to create keys without certificates. Both are bad.
 
yes, that's what is important since the legal principle we need to abide by is that of user control.
See the tag finding on Unsanctioned Web Tracking that makes the importance of this concept

Well, it's not a legal principle. Let's stick to facts and clear terminology.

But it does sound like you agree that user interactivity is a necessary and acceptable control. Your assertion is that the user interactivity requirement is met when it's used for TLS client authentication in the browser. However, from a security and privacy standpoint, that's already too late in the game - you may have impacted other applications (outside of the browser) using TLS client authentication, disrupting service or communications.

As such, the user interactivity requirement must be met *before* modifications are made.

As such, a model where you download a PKCS#12 file, where the browser provides no interactive UI for it, and instead defers to the user making an intentional choice to open it (via the download experience), and, when doing so, letting the existing registered file handler (such as the OS) handle it, is no different than if the browser were to mediate/require the same level of interactivity.

Given that the OS has already developed such interactivity flows to explain to the user the actions they're about to take, it makes no sense for the browser to serve as this mediator, and thus simply 'saving' the file meets the security requirement of user consent.

Which is why removing application/x-x509-user-cert is no different than "fixing" it (with a browser mediated flow), except one is significantly less work and attack surface, and more intuitively interacts with the user's chosen configuration, thus fulfilling the obligation as The User's Agent.
 
You mean using public keys across origins? One can actually do that with WebCrypto

No. I do not. I mean generating keys in the OS keystore (which has OS-level effects) or storing certificates in the OS certificate store (which has OS-level effects), without a requirement of user interactivity.

That part is unacceptable.

So when we look at where do we go from there, if we keep <keygen>, we end up in a model where it's browser-mediated flows. Except the browser-mediated flows are just duplicating the existing OS-mediated flows for importing certificates and private keys, which offers a more consistent (from the OS application perspective) flow, and thus it's preferable NOT to invent a browser-mediated flow when an OS-mediated flow exists.

The original discussion regarding the same-origin and privacy properties of <keygen>/application/x-x509-user-cert was to clearly illustrate there's no middle ground between what WebID desires and what's necessary for security. There's no intermediate step: there are privacy and security issues related to cross-origin sharing, there are privacy and security issues related to OS interaction (*independent* of cross-origin sharing), and there's no utility for the <keygen> consumers in a single-origin-scoped key (as the WebID proponents have repeatedly asserted).

In this model - where there's no acceptable, workable solution - the only logical conclusion is to remove.

Much of the discussion has been arguing for some form of intermediate step, while the point I've been repeatedly trying to communicate is that there is no acceptable intermediate step, and not for lack of consideration or trying.
 
Kingsley Idehen meant that one can write plugins to fix the flaws of browsers. That is a workaround rather than a solution. 

That's not really what the thread says though. The thread is saying the same thing I've been saying for some time now - that you have viable alternatives to mediated access of the OS (key, certificate) store, so the premise that <keygen> is necessary, either from a technical side or from a security side, is not a valid premise.

The security properties for <keygen> under the threat models proposed don't hold. The only argument that really holds any water whatsoever is that <keygen>, as a tag, is not JS, ergo you can browse with JS disabled. Except in 2015, that's a shaky argument, doesn't quite hold with extensiblewebmanifesto, and means that <keygen>, as a tag, will necessarily lack the primitives and opportunities for implementors to address the security issues in a Web-compatible way. As such, it's a solid reason for deprecation - especially when WebCrypto affords all of the same security guarantees, but with a viable path towards implementor mitigation when necessary.
 
I hope you'll agree that using Flash is not a good answer.

Yet earlier in this thread you argued ActiveX was a perfectly acceptable answer. I hope you can spot the logical inconsistency here.
 
SOP is a technical tool that is important in providing certain guarantees, but it does
not answer all the issues of security and control that are legal requirements.

If you mean legal requirements related to browsers, then I'm sorry, you're just introducing terms that have no bearing.
If you mean legal requirements with respect to identity (such as eID schemes), then we've demonstrated already that such schemes have no bearing on <keygen> because they've never been able to use <keygen>.
If you mean it in some other sense, then it would do well to clarify, but my gut is that it's not entirely relevant, for the reasons outlined in this thread and the many preceding it.
 
Perhaps you can help me with the question I asked on the TAG here where I discuss an interesting
use of WebCrypto for distributed authentication 


That method is, of course, terribly insecure :)

I do hope you have someone with a cryptographic background willing to engage in the design of that and collaborate with you. As it stands, given that I've tried to engage the WebID community towards viable solutions since 2011, I'm not terribly keen to provide free cryptographic consulting to you, other than to point out that you have a variety of fundamental security issues there.

If the issues are not obvious, and you lack cryptographic expert engagement, then you will find most of the issues being represented on http://cryptopals.com/ . If you work through those exercises, and then examine the proposal, you should be able to readily spot the issues.
 
As you see we are very keen to learn. It would help if instead of forcing deprecation of a well known
feature we had time to work out if WebCrypto actually allowed us to replace the functionality you say it allows us to. 

While not wanting to be too entirely dismissive, I don't think this thread supports the claim that you're very keen to learn, especially considering that I've raised these concerns in person in the past (2012 TPAC, for example), and worked through explaining these issues. You're keen not to enact any changes until you understand the issues, and I can appreciate the place that it's coming from, but it's been years explaining these issues, and it's clear that some areas are blocked by fundamental philosophical differences that prevent forward progress. I have no reason to expect that delaying deprecation by three months, for example, would yield any more successful result than the discussions that have been ongoing the past three years. We really are at more or less the same point we've been for years, with the same level of enthusiasm and engagement, and no progress to show.

Ryan Sleevi

unread,
14 Sep 2015 15:08:30
to Dirk-Willem van Gulik, Henry Story, Dirk-Willem van Gulik, blink-dev, Jan Suhr
On Mon, Sep 14, 2015 at 12:03 PM, Dirk-Willem van Gulik <di...@webweaving.org> wrote:

I.e. where entity A can issue a client cert against a private key that is (only) in a browser (lets limit this to soft tokens for now); and that entity B can rely on. And with the only ‘out of band’, collusion or coordination between A and B being some cert higher up in the chain A used when it issued & signed.

The problem in your assumption is "against a private key that is (only) in a browser". My point is that under the security threat model you're implicitly operating on (which is "Don't trust the origin"), <keygen> fails to meet this requirement. As such, it's not a reasonable expectation that WebCrypto can meet this model.
 
And we can then tackle the PKCS et.al. module world separately.

If you mean PKCS#11, we're not going to tackle that.
If you mean PKCS#12, then that IS the example that provides the same (effective) security guarantees as <keygen> under an active-attacker model. 