
CA generated keys


Tim Hollebeek

Dec 9, 2017, 1:21:39 PM
to mozilla-dev-s...@lists.mozilla.org


Apologies for the new thread. It's difficult for me to reply to messages
that were sent before I joined Digicert.



With respect to CA generated SSL keys, there are a few points that I feel
should be considered.



First, third parties who are *not* CAs can run key generation and escrow
services, and then the third party service can apply for a certificate for
the key, and deliver the certificate and the key to a customer. I'm not
sure how this could be prevented. So if this actually did end up being a
Mozilla policy, the practical effect would be that SSL keys can be generated
by third parties and escrowed, *UNLESS* that party is trusted by Mozilla.
This seems ... backwards, at best.



Second, although I strongly believe that in general, as a best practice,
keys should be generated by the device/entity they belong to whenever
possible, we've seen increasing evidence that key generation is difficult
and many devices cannot do it securely. I doubt that forcing the owner of
the device to generate a key on a commodity PC is any better (it's probably
worse). With an increasing number of small devices running web servers,
keys generated by audited, trusted third parties under whatever rules
Mozilla chooses to enforce about secure key delivery may actually in many
circumstances be superior to what would happen if the practice is banned.



-Tim



Gervase Markham

Dec 11, 2017, 2:48:16 AM
to mozilla-dev-s...@lists.mozilla.org
Hi Tim,

The more I think about it, the more I see this is actually an interesting
question :-)

I suspect the first thing Mozilla allowing this would do would be to
make it much more common. (Let's assume there are no other policy
barriers.) I suspect there are several simpler workflows for certificate
issuance and installation that this could enable, and CAs would be keen
to make their customers' lives easier and reduce support costs.

On 09/12/17 18:20, Tim Hollebeek wrote:
> First, third parties who are *not* CAs can run key generation and escrow
> services, and then the third party service can apply for a certificate for
> the key, and deliver the certificate and the key to a customer.

That is true. Do you know how common this is in SSL/TLS?

> I'm not
> sure how this could be prevented. So if this actually did end up being a
> Mozilla policy, the practical effect would be that SSL keys can be generated
> by third parties and escrowed, *UNLESS* that party is trusted by Mozilla.

Another way of putting it: "unless that party were the party the
customer is already dealing with and trusts". IoW, there's a much lower
barrier for the customer in getting the CA to do it (trust and
convenience) compared to someone else. So removing this ban would
probably make it much more common, as noted above. If it's something we
want to discourage even if we can't prevent it, the current ban makes sense.

> Second, although I strongly believe that in general, as a best practice,
> keys should be generated by the device/entity it belongs to whenever
> possible, we've seen increasing evidence that key generation is difficult
> and many devices cannot do it securely. I doubt that forcing the owner of
> the device to generate a key on a commodity PC is any better (it's probably
> worse).

That's also a really interesting question. We've had dedicated device
key generation failures, but we've also had commodity PC key generation
failures (Debian weak keys, right?). Does that mean it's a wash? What do
the risk profiles look like here? One CA uses a MegaRNG2000 to generate
hundreds of thousands of certs... and then a flaw is found in it. Oops.
Better or worse than a hundred thousand people independently using a
broken OpenSSL shipped by their Linux vendor?
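The scale question Gerv raises has a concrete counterpart: correlated RNG failures in RSA key generation have historically been detected by taking pairwise GCDs over large collections of public moduli, since any two keys built from a shared prime expose that prime instantly. A toy sketch of the idea, using tiny illustrative primes rather than real key sizes:

```python
import math

# Toy moduli: a weak RNG has handed the same prime p to two
# "independently" generated keys; the third modulus is unrelated.
p, q1, q2 = 10007, 10009, 10037
moduli = [p * q1, p * q2, 10039 * 10061]

# Pairwise GCD over the collection: any result > 1 is a shared prime,
# which immediately factors (and so breaks) both affected keys.
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = math.gcd(moduli[i], moduli[j])
        if g > 1:
            print(f"moduli {i} and {j} share the factor {g}")
# → moduli 0 and 1 share the factor 10007
```

A single CA's key-generation pipeline concentrates this risk (and the ability to scan for it) in one place; a hundred thousand independent installations spread both.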

> With an increasing number of small devices running web servers,
> keys generated by audited, trusted third parties under whatever rules
> Mozilla chooses to enforce about secure key delivery may actually in many
> circumstances be superior to what would happen if the practice is banned.

Is there a way to limit the use of this to those circumstances?

Gerv

Nick Lamb

Dec 11, 2017, 10:55:49 AM
to dev-secur...@lists.mozilla.org, Tim Hollebeek
On Sat, 9 Dec 2017 18:20:56 +0000
Tim Hollebeek via dev-security-policy
<dev-secur...@lists.mozilla.org> wrote:

> First, third parties who are *not* CAs can run key generation and
> escrow services, and then the third party service can apply for a
> certificate for the key, and deliver the certificate and the key to a
> customer. I'm not sure how this could be prevented. So if this
> actually did end up being a Mozilla policy, the practical effect
> would be that SSL keys can be generated by third parties and
> escrowed, *UNLESS* that party is trusted by Mozilla. This seems ...
> backwards, at best.

I'm actually astonished that CAs would _want_ to be doing this.

A CA like Let's Encrypt can confidently say that it didn't lose the
subscriber's private keys, because it never had them and doesn't want them.
If there's an incident where the Let's Encrypt subscriber's keys go
"walk about" we can start by looking at the subscriber - because that's
where the key started.

In contrast a CA which says "Oh, for convenience and security we've
generated the private keys you should use" can't start from there. We
have to start examining their generation and custody of the keys. Was
generation predictable? Were the keys lost between generation and
sending? Were they mistakenly kept (even though the CA can't possibly
have any use for them) after sending? Were they properly secured during
sending?

So many questions, all trivially eliminated by just not having "Hold
onto valuable keys that belong to somebody else" as part of your
business model.

> Second, although I strongly believe that in general, as a best
> practice, keys should be generated by the device/entity it belongs to
> whenever possible, we've seen increasing evidence that key generation
> is difficult and many devices cannot do it securely.

I do not have any confidence that a CA will do a comprehensively better
job. I don't doubt they'd _try_ but the problem is Debian were trying,
we have every reason to assume Infineon were trying. Trying wasn't
enough.

If subscribers take responsibility for generating keys we benefit from
heterogeneity, and the subscriber gets to weigh better-quality
implementations against lower costs directly. Infineon's "Fast Prime" was
optional: if you were happy with a device using a proven method that took
a few seconds longer to generate a key, they'd sell you that. Most
customers, it seems, wanted faster but more dangerous.

Aside from the Debian weak keys (which were so few you could usefully
enumerate all the private keys for yourself) these incidents tend to
just make the keys easier to guess. This is bad, and we aim to avoid
it, but it's not instantly fatal. But losing a customer's keys to a bug
in your generation, dispatch or archive handling probably _is_
instantly fatal, and it's unnecessary when you need never have those
keys at all.


Nick.

Jeremy Rowley

Dec 11, 2017, 11:18:13 AM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org
I think key escrow services are pretty rare for TLS certs. However,
there are lots of CAs and services that escrow signing keys for S/MIME
certs. Although I'm not sure how companies can claim non-repudiation if
they've escrowed the signing key, a lot of enterprises use dual-use keys
and want at least the encryption portion in case an employee leaves.


Steve Medin

Dec 11, 2017, 11:35:28 AM
to Jeremy Rowley, Gervase Markham, mozilla-dev-s...@lists.mozilla.org
Loosen the interpretation of escrow from a box surrounded by KRAs, KROs, and access controls with a rolling LTSK, and "escrow" could describe what many white-glove and CDN-tier hosting operations do. The CDN has written consent, but the end customer never touches the TLS cert.

Tim Hollebeek

Dec 11, 2017, 11:44:09 AM
to Gervase Markham, mozilla-dev-s...@lists.mozilla.org

> The more I think about it, the more I see this is actually an interesting
question :-)

I had the same feeling. It seems like an easy question to answer until you
start thinking about it.

> I suspect the first thing Mozilla allowing this would do would be to make
it much more common. (Let's assume
> there are no other policy barriers.) I suspect there are several simpler
workflows for certificate issuance and
> installation that this could enable, and CAs would be keen to make their
customers' lives easier and reduce
> support costs.

This may or may not be true. I think it probably isn't. The standard
method via a CSR is actually simpler, so I think that will continue to be
the predominant way of doing things. I think it's more likely to remain
limited to large enterprise customers with unique requirements, IoT use
cases, and so on.

> > First, third parties who are *not* CAs can run key generation and
> > escrow services, and then the third party service can apply for a
> > certificate for the key, and deliver the certificate and the key to a
customer.
>
> That is true. Do you know how common this is in SSL/TLS?

I know it happens. I can try to find out how common it is, and what the use
cases are.

> > Second, although I strongly believe that in general, as a best
> > practice, keys should be generated by the device/entity it belongs to
> > whenever possible, we've seen increasing evidence that key generation
> > is difficult and many devices cannot do it securely. I doubt that
> > forcing the owner of the device to generate a key on a commodity PC is
> > any better (it's probably worse).
>
> That's also a really interesting question. We've had dedicated device key
generation failures, but we've also had
> commodity PC key generation failures (Debian weak keys, right?). Does that
mean it's a wash? What do the risk
> profiles look like here? One CA uses a MegaRNG2000 to generate hundreds of
thousands of certs... and then a
> flaw is found in it. Oops.
> Better or worse than a hundred thousand people independently using a
broken OpenSSL shipped by their
> Linux vendor?

I'd argue that the second is worse, since the large number of independent
people are going to have a much harder time becoming aware of the issue,
applying the appropriate fixes, and performing whatever remediation is
necessary.

The general rule is that you're able to do more rigorous things at scale
than you can when you're generating a key or two a year.

> > With an increasing number of small devices running web servers, keys
> > generated by audited, trusted third parties under whatever rules
> > Mozilla chooses to enforce about secure key delivery may actually in
> many circumstances be superior to what would happen if the practice is
banned.
>
> Is there a way to limit the use of this to those circumstances?

I don't know but it's worth talking about. I think the discussion should be
"when should this be allowed, and how can it be done securely?"

-Tim

Matthew Hardeman

Dec 11, 2017, 11:44:41 AM
to dev-secur...@lists.mozilla.org
The (I believe) meritorious point that Mr. Hollebeek raises mainly pertains
to embedded devices.

As the IoT craze heats up, I keep seeing platforms ship with unfinished OS
stacks, missing drivers, etc. A lot of hardware in the field is shipping
with decent dedicated entropy sources on board coupled with OS stacks that
don't bother to make use of said entropy.

Some of this stuff doesn't even have an RTC or high precision timers.

Some of these things try to helpfully generate to-be-used keys and CSRs
upon first boot. Unfortunately, they do so in a consistent first boot
configuration that creates a ridiculously low amount of true entropy and
yields the use of a PRNG over and over by like model devices under like
circumstances, significantly increasing the likelihood of "independent"
generation of the same small set of key pairs.
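The failure mode described above can be sketched in a few lines: if the only "entropy" available at first boot is a fixed or near-fixed state, every like-model device walks the same PRNG path and derives the same "random" key material. A deliberately broken toy model (Python's `random` module is used here precisely because it is deterministic; no real device or key format is implied):

```python
import random

def firstboot_key_material(boot_state: int) -> bytes:
    # Toy model of a device seeding its PRNG from whatever state it has
    # at first boot (e.g. a zeroed RTC). Deliberately insecure.
    rng = random.Random(boot_state)
    return bytes(rng.randrange(256) for _ in range(16))

# Two "independent" devices powering up in the same factory-fresh state
# derive identical key material -- the duplicate-key scenario above.
device_a = firstboot_key_material(0)
device_b = firstboot_key_material(0)
print(device_a == device_b)  # → True
```

An on-board hardware entropy source fixes this only if the OS stack actually feeds it into the generator before the key is made, which is exactly the integration step that keeps getting skipped.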

In a race to ship, quality gets overlooked. The people shipping this stuff
respond to issues that cause high return rates.

Today, people aren't returning retail product for bad number generation.

I don't really think CA generated keys is the right answer in the general
case. However, recent events and market trends suggest that more
continuous, ongoing key quality analysis needs to become part of the
ecosystem.






Wayne Thayer

Dec 12, 2017, 1:39:37 PM
to mozilla-dev-s...@lists.mozilla.org
On Mon, Dec 11, 2017 at 9:43 AM, Tim Hollebeek via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> I don't know but it's worth talking about. I think the discussion should
> be
> "when should this be allowed, and how can it be done securely?"
>
The outcome to be avoided is a CA that holds in escrow thousands of
private keys used for TLS. I don’t think that a policy permitting a CA to
generate the key pair is bad as long as the CA doesn’t hold on to the key
(unless the certificate was issued to the CA or the CA is hosting the
site).

What if the policy were to allow CA key generation but require the CA to
deliver the private key to the Subscriber and destroy the CA’s copy prior
to issuing a certificate? Would that make key generation easier? Tim, some
examples describing how this might be used would be helpful here.

A policy allowing CAs to generate key pairs should also include provisions
for:
- The CA must generate the key in accordance with technical best practices
- While in possession of the private key, the CA must store it securely

Wayne

Tim Hollebeek

Dec 12, 2017, 2:31:18 PM
to Wayne Thayer, mozilla-dev-s...@lists.mozilla.org

> A policy allowing CAs to generate key pairs should also include provisions
> for:
> - The CA must generate the key in accordance with technical best practices
> - While in possession of the private key, the CA must store it securely

Don't forget appropriate protection for the key while it is in transit. I'll
look a bit closer at the use cases and see if I can come up with some
reasonable suggestions.

-Tim

Jakob Bohm

Dec 12, 2017, 2:46:27 PM
to mozilla-dev-s...@lists.mozilla.org
On 12/12/2017 19:39, Wayne Thayer wrote:
> On Mon, Dec 11, 2017 at 9:43 AM, Tim Hollebeek via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>>
>> I don't know but it's worth talking about. I think the discussion should
>> be
>> "when should this be allowed, and how can it be done securely?"
>>
> The outcome to be avoided is a CA that holds in escrow thousands of
> private keys used for TLS. I don’t think that a policy permitting a CA to
> generate the key pair is bad as long as the CA doesn’t hold on to the key
> (unless the certificate was issued to the CA or the CA is hosting the
> site).
>
> What if the policy were to allow CA key generation but require the CA to
> deliver the private key to the Subscriber and destroy the CA’s copy prior
> to issuing a certificate? Would that make key generation easier? Tim, some
> examples describing how this might be used would be helpful here.
>

That would conflict with delivery in PKCS#12 format or any other format
that delivers the key and certificate together, as users of such
services commonly expect.

It would also conflict with keeping the issuing CA key far removed from
public web interfaces, such as the interface used by users to pick up
their key and certificate, even if separate, as it would not be fun to
have to log in twice with 1 hour in between (once to pick up key, then
once again to pick up certificate).

It would only really work with a CSR+key generation service where the
user receives the key at application time, then the cert after vetting.
And many end systems cannot easily import that.

> A policy allowing CAs to generate key pairs should also include provisions
> for:
> - The CA must generate the key in accordance with technical best practices
> - While in possession of the private key, the CA must store it securely
>
> Wayne
>


Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Wayne Thayer

Dec 12, 2017, 3:40:03 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

> On 12/12/2017 19:39, Wayne Thayer wrote:
>
>> The outcome to be avoided is a CA that holds in escrow thousands of
>> private keys used for TLS. I don’t think that a policy permitting a CA to
>> generate the key pair is bad as long as the CA doesn’t hold on to the key
>> (unless the certificate was issued to the CA or the CA is hosting the
>> site).
>>
>> What if the policy were to allow CA key generation but require the CA to
>> deliver the private key to the Subscriber and destroy the CA’s copy prior
>> to issuing a certificate? Would that make key generation easier? Tim, some
>> examples describing how this might be used would be helpful here.
>>
>>
> That would conflict with delivery in PKCS#12 format or any other format
> that delivers the key and certificate together, as users of such
> services commonly expect.
>
Yes, it would. But it's a clear policy. If the requirement is to deliver
the key at the same time as the certificate, then how long can the CA hold
the private key?



> It would also conflict with keeping the issuing CA key far removed from
> public web interfaces, such as the interface used by users to pick up
> their key and certificate, even if separate, as it would not be fun to
> have to log in twice with 1 hour in between (once to pick up key, then
> once again to pick up certificate).
>
I don't think I understand this use case, or how the proposed policy
relates to the issuing CA.


> It would only really work with a CSR+key generation service where the
> user receives the key at application time, then the cert after vetting.
> And many end systems cannot easily import that.
>
Many commercial CAs could accommodate a workflow where they deliver the
private key at application time. Maybe you are thinking of IOT scenarios?
Again, some use cases describing the problem would be helpful.


> A policy allowing CAs to generate key pairs should also include provisions
>> for:
>> - The CA must generate the key in accordance with technical best practices
>> - While in possession of the private key, the CA must store it securely
>>
>> Wayne
>>
>>
>
>

Jakob Bohm

Dec 12, 2017, 4:08:24 PM
to mozilla-dev-s...@lists.mozilla.org
On 12/12/2017 21:39, Wayne Thayer wrote:
> On Tue, Dec 12, 2017 at 7:45 PM, Jakob Bohm via dev-security-policy <
> dev-secur...@lists.mozilla.org> wrote:
>
>> On 12/12/2017 19:39, Wayne Thayer wrote:
>>
>>> The outcome to be avoided is a CA that holds in escrow thousands of
>>> private keys used for TLS. I don’t think that a policy permitting a CA to
>>> generate the key pair is bad as long as the CA doesn’t hold on to the key
>>> (unless the certificate was issued to the CA or the CA is hosting the
>>> site).
>>>
>>> What if the policy were to allow CA key generation but require the CA to
>>> deliver the private key to the Subscriber and destroy the CA’s copy prior
>>> to issuing a certificate? Would that make key generation easier? Tim, some
>>> examples describing how this might be used would be helpful here.
>>>
>>>
>> That would conflict with delivery in PKCS#12 format or any other format
>> that delivers the key and certificate together, as users of such
>> services commonly expect.
>>
> Yes, it would. But it's a clear policy. If the requirement is to deliver
> the key at the same time as the certificate, then how long can the CA hold
> the private key?
>
>

The point is that many end systems (including Windows IIS) are designed to
either import certificates from PKCS#12 or use a specific CSR generation
procedure. If the CA delivered the key and cert separately, then the
user (who is apparently not sophisticated enough to generate their own
CSR) will have a hard time importing the key+cert into their system.

>
>> It would also conflict with keeping the issuing CA key far removed from
>> public web interfaces, such as the interface used by users to pick up
>> their key and certificate, even if separate, as it would not be fun to
>> have to log in twice with 1 hour in between (once to pick up key, then
>> once again to pick up certificate).
>>
> I don't think I understand this use case, or how the proposed policy
> relates to the issuing CA.
>

If the issuing CA HSM is kept away from online systems and processes
vetted issuance requests only in a batched offline manner, then a user
responding to a message saying "your application has been accepted,
please log in with your temporary password to retrieve your key and
certificate" would have to download the key, after which the CA can
delete the key and queue the actual issuance to the offline CA system, and
only after that can the user actually download their certificate.

Another thing with similar effect is the BR requirement that all the
OCSP responders must know about issued certificates, which means that
both the serial number and a hash of the signed certificate must be
replicated to all the OCSP machines before the certificate is delivered.
(One of the good OCSP extensions is to include a hash of the valid
certificate in the OCSP response, thus allowing the relying party
software to check that a "valid" response is actually for the
certificate at hand).
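The relying-party side of that extension is simple to illustrate. A minimal stdlib sketch, assuming SHA-256 as the hash and placeholder DER bytes (the real extension's encoding and hash algorithm are defined by the OCSP profile in use):

```python
import hashlib

def response_matches_cert(cert_hash_from_response: bytes,
                          cert_der: bytes) -> bool:
    # Only treat a "good" OCSP response as applying to the certificate
    # in hand if the hash embedded in the response matches it.
    return hashlib.sha256(cert_der).digest() == cert_hash_from_response

der = bytes.fromhex("308201ff")                 # placeholder DER bytes
h = hashlib.sha256(der).digest()
print(response_matches_cert(h, der))            # → True
print(response_matches_cert(h, der + b"\x00"))  # → False
```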




>
>> It would only really work with a CSR+key generation service where the
>> user receives the key at application time, then the cert after vetting.
>> And many end systems cannot easily import that.
>>
>> Many commercial CAs could accommodate a workflow where they deliver the
> private key at application time. Maybe you are thinking of IOT scenarios?
> Again, some use cases describing the problem would be helpful.
>

One major such use case is IIS or Exchange at the subscriber end.
Importing the key and cert at different times is just not a feature of
Windows server.

Tim Hollebeek

Dec 13, 2017, 11:06:48 AM
to mozilla-dev-s...@lists.mozilla.org

Wayne,

For TLS/SSL certificates, I think PKCS #12 delivery of the key and certificate
at the same time should be allowed, and I have no problem with a requirement
to delete the key after delivery. I also think server-side generation along
the lines of RFC 7030 (EST) section 4.4 should be allowed. I realize RFC 7030
is about client certificates, but in a world with lots of tiny communicating
devices that interface with people via web browsers, there are lots of highly
resource-constrained devices with poor access to randomness out there running
web servers. And I think we are heading quickly towards that world.
Tightening up the requirements to allow specific, approved mechanisms is fine.
We don't want people doing random things that might not be secure.

As usual, non-TLS certificates have a completely different set of concerns.
Demand for escrow of client/email certificates is much higher and the practice
is much more common, for a variety of business reasons.

-Tim

Ryan Sleevi

unread,
Dec 13, 2017, 11:52:56 AM12/13/17
to Tim Hollebeek, mozilla-dev-s...@lists.mozilla.org
On Wed, Dec 13, 2017 at 11:06 AM, Tim Hollebeek via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery. I also think server side generation
> along
> the lines of RFC 7030 (EST) section 4.4 should be allowed. I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers. And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>

Tim,

I'm afraid that the use case to justify this change seems to be inherently
flawed and insecure. I'm hoping you can correct me if I've misunderstood.

As I understand it, the motivation for this is to support devices with
insecure random number generators that might be otherwise incapable of
generating secure keys. The logic goes that by having the CAs generate
these keys, we end up with better security - fewer keys leaking.

Yet I would challenge that assertion, and instead suggest that CAs
generating keys for these devices inherently makes the system less secure.
As you know, CAs are already on the hook to evaluate keys against known
weak sets and reject them. The BRs lack a formal definition of this,
other than calling out illustrative examples such as
Debian-generated keys (which share the flaw you mention) or, in more
recent discussions, the ROCA-affected keys. Or, for the academic take,
https://factorable.net/weakkeys12.extended.pdf , or the research at
https://crocs.fi.muni.cz/public/papers/usenix2016 that itself appears to
have led to ROCA being detected.

Quite simply, the population you're targeting - "tiny communication devices
... with poor access to randomness" - is inherently insecure in a TLS
world. TLS itself depends on entropy, especially for the ephemeral key
exchange ciphersuites required for use in HTTP/2 or TLS 1.3, and so such
devices do not somehow become 'more' secure by having the CA generate the
key while they then negotiate poor TLS ciphersuites.

More importantly, the change you propose would have the incidental effect
of making it more difficult to detect such devices and work with vendors to
replace or repair them. This seems to overall make Mozilla users less
secure, and the ecosystem less secure.

I realize that there is somewhat a conflict - we're today requiring that
CDNs and vendors can generate these keys (thus masking off the poor entropy
from detection), while not allowing the CA to participate - but I think
that's consistent with a viewpoint that the CA should not actively
facilitate insecurity, which I fear your proposal would.

Thus, I would suggest that the current status quo - a prohibition against
CA generated keys - is positive for the SSL/TLS ecosystem in particular,
and any such devices that struggle with randomness should be dismantled and
replaced, rather than encouraged and proliferated.

Tim Hollebeek

unread,
Dec 13, 2017, 12:41:06 PM12/13/17
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org
As I’m sure you’re aware, RSA key generation is far, far more reliant on the quality of the random number generation and the prime selection algorithm than TLS is dependent on randomness. In fact, it’s the combination of poor randomness with attempts to reduce the cost of RSA key generation that has caused, and will continue to cause, problems.



While the number of bits in the key pair is an important security parameter, the number of potential primes and their distribution has historically not gotten as much attention as it should. This is why there have been a number of high-profile breaches due to poor RSA key generation but, as far as I know, no known attacks due to the use of randomness elsewhere in the TLS protocol. This is because TLS, like most secure protocols, has enough of a gap between secure and insecure that small deviations from ideal behavior don’t break the entire protocol. RSA has a well-earned reputation for finickiness and fragility.



It doesn’t help that RSA key generation has a sort of birthday-paradox feel to it: if any two key pairs share a prime number, it’s just a matter of time before someone uses Euclid’s algorithm to find it. There are PLENTY of possible primes of the appropriate size, so this should never happen, but it has been seen to happen. I would be shocked if we’ve seen the last major security breach based on poor RSA key generation by resource-constrained devices.
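The shared-prime failure mode is easy to demonstrate. A toy sketch (the "primes" are Mersenne primes chosen only for brevity, and are far too small for real keys):

```python
from math import gcd

# Toy illustration: two RSA moduli that accidentally share a prime factor.
p  = 2**127 - 1   # the shared prime (Mersenne prime; not realistic key material)
q1 = 2**89 - 1
q2 = 2**107 - 1
n1, n2 = p * q1, p * q2

# Euclid's algorithm recovers the shared prime essentially instantly...
shared = gcd(n1, n2)
print(shared == p)  # True

# ...and with it both private keys, since the other factors fall out by division.
print(n1 // shared == q1, n2 // shared == q2)  # True True
```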



Given that there exist IETF approved alternatives that could help with that problem, they’re worth considering. I’ve been spending a lot of time recently looking at the state of the IoT world, and it’s not good.



-Tim



From: Ryan Sleevi [mailto:ry...@sleevi.com]
Sent: Wednesday, December 13, 2017 9:52 AM
To: Tim Hollebeek <tim.ho...@digicert.com>
Cc: mozilla-dev-s...@lists.mozilla.org
Subject: Re: CA generated keys

Matthew Hardeman

unread,
Dec 13, 2017, 1:07:48 PM12/13/17
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org, Tim Hollebeek
In principle, I support Mr. Sleevi's position, practically I lean toward
Mr. Thayer's and Mr. Hollebeek's position.

Sitting on my desk are not less than 3 reference designs. At least two of
them have decent hardware RNG capabilities. What's noteworthy is the
garbage software stack, kernel support, etc. for that hardware. The FAEs
for these run the gamut from "I'm ashamed of the reference software we're
crippling this fantastic design with" all the way to "Here, just use this
library for random." (It's a PRNG with a static seed. And a bad PRNG alg
at that.)

Access to this kind of hardware requires a devil's bargain in which you
sign away your right to detail these kinds of things. That's the case here.

What I can say is that fresh new reference designs being incorporated into
consumer products today certainly don't make things any easier for anyone
hurrying to bring a product to market. At least not if they want security.

Having said that, some practical thoughts:

It's mostly a Linux kernel universe on these devices. Even in cases where
the kernel isn't plumbed through to the hardware RNG, the entropy pool
improves to a tolerable level as a function of increasing run-time and
actual sporadic use. The trouble is that key generation tends to be part
of a new-device setup and onboarding procedure, and thus executes in a
predictable manner at pretty precisely predictable timing. As a result,
these keys tend to be generated before a sufficient entropy pool exists.
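A toy sketch of that failure mode (the function name is invented, and Python's non-cryptographic PRNG stands in for whatever the device firmware uses): when the PRNG is seeded from predictable boot timing, every unit of the design derives the same "private key".

```python
import random

def naive_first_boot_key(boot_ticks: int) -> int:
    # Seeding from boot timing, which is nearly identical across units of
    # the same design, makes the "random" key fully reproducible.
    rng = random.Random(boot_ticks)
    return rng.getrandbits(256)

# Two different devices, same predictable boot timing, same key:
print(naive_first_boot_key(1_000_000) == naive_first_boot_key(1_000_000))  # True
```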

Regarding the security of a device with poor original entropy and its
appropriateness in TLS, I would point out that a deterministic
pseudo-random number generator is perfectly acceptable for cryptographic
purposes as long as there is a sufficient initial random seed. In the
absence of a better source, the limited entropic data that is available
could be combined with a value deterministically derived from, for example,
the well-engineered, generated-off-device private key. This can all be
trivially implemented in user space, even by developers less familiar
with proprietary devices on various hardware buses, special
random-generation processor opcodes, etc.
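A sketch of that mixing step (all names invented; HMAC-SHA256 stands in here for a proper extractor such as HKDF-Extract):

```python
import hashlib
import hmac

def derive_dprng_seed(onboard_entropy: bytes, device_key_bytes: bytes) -> bytes:
    # Mix the limited on-board entropy with material derived from the
    # well-generated, off-device private key to form a 256-bit DPRNG seed.
    return hmac.new(device_key_bytes, onboard_entropy, hashlib.sha256).digest()

seed = derive_dprng_seed(b"a-few-bytes-of-jitter", b"<der-encoded-private-key>")
print(len(seed))  # 32
```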

It is naive to believe that you will become aware of the various
permutations of the weak keys in a timely fashion. It is equally naive to
believe that policy making it hard to get certificates for those devices
will cause those devices to be replaced in a timely fashion.

These SCADA devices caught up in the ROCA mess - did they actually replace
those devices, update the software with an off-platform key generator, or
just front them with a reverse proxy? I'm betting it was the second or
third of those options. And that's for professional gear deployed in
presumably large commercial environments.




Ryan Sleevi

unread,
Dec 13, 2017, 1:12:06 PM12/13/17
to Tim Hollebeek, ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org
Tim,

I appreciate your reply, but that seems to be backwards-looking rather than
forwards-looking. That is, it assumes static-RSA ciphersuites are
acceptable, and thus that the entropy risk to TLS is mitigated by the
client-random sent to these terrible TLS-server devices, and that the
issue to mitigate is the poor entropy on the server.

However, I don't think that aligns with what I was mentioning - that is,
the expectation going forward of the use of forward-secure cryptography and
ephemeral key exchanges, which do become more relevant to the quality of
entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
construction does not meaningfully improve the security of Mozilla users.

I'm curious whether any use case can be brought forward that isn't "So that
we can aid and support the proliferation of insecure devices into users'
everyday lives" - as surely that doesn't seem like a good outcome, both for
Mozilla users and for society at large. Nor do I think the proposed changes
meaningfully mitigate the harm caused by them, despite the well-meaning
attempt to do so.

Matthew Hardeman

unread,
Dec 13, 2017, 1:24:59 PM12/13/17
to Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org, Tim Hollebeek
> I appreciate your reply, but that seems to be backwards looking rather than
> forwards looking. That is, it looks and assumes static-RSA ciphersuites are
> acceptable, and thus the entropy risk to TLS is mitigated by client-random
> to this terrible TLS-server devices, and the issue to mitigate is the poor
> entropy on the server.
>
> However, I don't think that aligns with what I was mentioning - that is,
> the expectation going forward of the use of forward-secure cryptography and
> ephemeral key exchanges, which do become more relevant to the quality of
> entropy. That is, negotiating an ECDHE_RSA exchange with terrible ECDHE key
> construction does not meaningfully improve the security of Mozilla users.
>

As I pointed out, it can be demonstrated that quality ECDHE exchanges can
happen assuming a stateful DPRNG with a decent starting entropy corpus.

Beyond that, I should point out, I'm not talking about legacy devices
already in market. I'm not sure the community fully understands how much
hot-off-the-presses stuff (at least at the cheap end, and thus what the
marketplace selects) is really, really set up for failure in terms of
security.

What I want to emphasize is that I don't believe policy here will make
things better. In fact, there are real dangers that it gets worse.

It would be an egregiously bad decision -- though perhaps a tempting one in
the eyes of a budding device software stack developer -- to just implement
an RSA key pair generation algorithm in JavaScript and rely upon the
browser to build the key set that will form the raw private key and the
CSR. That's definitely not secure or better.

Assuming you lock JavaScript down to the point that large-integer
primitives and operations are unavailable outside secure mode, these
people will just stand up an HTTP endpoint that spits out a newly
generated RSA or EC key pair to feed to the device. And it'll be unsigned
and not even protected by HTTPS unless required, and then they'll do the
bare minimum.

The device reference design space is improving and becoming more
security-conscious, but you're YEARS away from anything resembling best
practice. I just don't believe anything Mozilla or anyone else outside
that world can do will speed it along.

Tim Hollebeek

unread,
Dec 13, 2017, 1:30:05 PM12/13/17
to ry...@sleevi.com, mozilla-dev-s...@lists.mozilla.org
So ECDHE is an interesting point that I had not considered, but as Matt noted, the quality of randomness in these devices does generally improve with time. It tends to be the initial bootstrapping where things go horribly wrong.



A couple years ago I was actually on the opposite side of this issue, so it’s very easy for me to see both sides. I just don’t see it as useful to categorically rule out something that can provide a significant security benefit in some circumstances.



-Tim



As an unrelated but funny aside, I once heard about an expensive, high-assurance device with an embedded bi-stable circuit for producing high-quality hardware random numbers. As part of a rigorous validation and review process intended to guarantee product quality, the instability was noticed and corrected late in the development process, and final testing showed that the output of the key generator was completely free of any pesky one bits that might interfere with the purity of all-zero keys.



From: Ryan Sleevi [mailto:ry...@sleevi.com]
Sent: Wednesday, December 13, 2017 11:11 AM
To: Tim Hollebeek <tim.ho...@digicert.com>
Cc: ry...@sleevi.com; mozilla-dev-s...@lists.mozilla.org
Subject: Re: CA generated keys

Ryan Sleevi

unread,
Dec 13, 2017, 1:50:38 PM12/13/17
to Matthew Hardeman, Ryan Sleevi, mozilla-dev-s...@lists.mozilla.org, Tim Hollebeek
On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman <mhar...@gmail.com>
wrote:

> As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> happen assuming a stateful DPRNG with a decent starting entropy corpus.
>

Agreed - but that's also true for the devices Tim is mentioning.

Which I guess is the point I was trying to make - if this can be 'fixed'
relatively easily for the use case Tim was bringing up, what other use
cases are there? The current policy serves a purpose, and although that
purpose is neither high in value nor technically rigorous, it serves as an
external check.

And yes, I realize the profound irony in me making such a comment in this
thread while simultaneously arguing against EV in a parallel thread, on the
basis that the purpose EV serves is neither high in value nor technically
rigorous - but I am having trouble, unlike in the EV thread, understanding
what harm is caused by the current policy, or what possible things that are
beneficial are prevented.

I don't think we'll see significant security benefit in some circumstances
- I think we'll see the appearance of benefit, but not the manifestation -
so I'm trying to understand why we'd want to introduce that risk?

I also say this knowing how uninteroperable the existing key delivery
mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
protection of those are. Combine that with CAs' repeated failure to
correctly implement the specs that are less ambiguous, and I'm worried
about a proliferation of private keys flying around - as some CAs do for
their other, non-TLS certificates. So I see a lot of potential harm in the
ecosystem, and question the benefit, especially when, as you note, this can
be mitigated rather significantly by developers not shoveling crap out the
door. If developers who view "time to market" as more important than
"Internet safety" can't get their toys, I ... don't lose much sleep.

Matthew Hardeman

unread,
Dec 13, 2017, 1:54:33 PM12/13/17
to mozilla-dev-s...@lists.mozilla.org

> As an unrelated but funny aside, I once heard about a expensive, high assurance device with a embedded bi-stable circuit for producing high quality hardware random numbers. As part of a rigorous validation and review process in order to guarantee product quality, the instability was noticed and corrected late in the development process, and final testing showed that the output of the key generator was completely free of any pesky one bits that might interfere with the purity of all zero keys.
>

More perniciously, an excellent PRNG algorithm will "whiten" its output sufficiently that standard statistical tests will not be able to detect that the stream completely lacks seed entropy.

I believe the CC EAL target evaluations standards require that during the testing a mode be enabled to access the raw uncleaned, pre-algorithmic-balancing, values so that tests can be incorporated to check the raw entropy source for that issue.
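The whitening point is easy to see in miniature (illustrative only, using Python's Mersenne Twister): a fixed, zero-entropy seed still produces output that sails through a naive monobit balance test.

```python
import random

# Zero-entropy seed: every run of this code produces the identical
# "random" stream, yet the stream itself looks statistically clean.
stream = random.Random(0).getrandbits(100_000)

# A naive monobit test sees nothing wrong: about half the bits are set.
ones = bin(stream).count("1")
print(ones)  # roughly 50,000 of the 100,000 bits
```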

Matthew Hardeman

unread,
Dec 13, 2017, 2:17:49 PM12/13/17
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, December 13, 2017 at 12:50:38 PM UTC-6, Ryan Sleevi wrote:
> On Wed, Dec 13, 2017 at 1:24 PM, Matthew Hardeman <mhar...@gmail.com>
> wrote:
>
> > As I pointed out, it can be demonstrated that quality ECDHE exchanges can
> > happen assuming a stateful DPRNG with a decent starting entropy corpus.
> >
>
> Agreed - but that's also true for the devices Tim is mentioning.

I do not mean this facetiously. If I kept a diary, I might make a note. I feel like I've accomplished something.

>
> Which I guess is the point I was trying to make - if this can be 'fixed'
> relatively easily for the use case Tim was bringing up, what other use
> cases are there? The current policy serves a purpose, and although that
> purpose is not high in value nor technically rigorous, it serves as an
> external check.
>
> And yes, I realize the profound irony in me making such a comment in this
> thread while simultaneously arguing against EV in a parallel thread, on the
> basis that the purpose EV serves is not high in value nor technically
> rigorous - but I am having trouble, unlike in the EV thread, understanding
> what harm is caused by the current policy, or what possible things that are
> beneficial are prevented.

I, for one, respect that you pointed out the dichotomy. I think I understand it.

I believe that opening the door to CA-side key generation under specific terms and circumstances offers an opportunity for various consumers of PKI key pairs to acquire higher-quality key pairs than many of the alternatives that would otherwise fill the void.

>
> I don't think we'll see significant security benefit in some circumstances
> - I think we'll see the appearances of, but not the manifestation - so I'm
> trying to understand why we'd want to introduce that risk?

Sometimes we accept one risk, under terms that we can audit and control, in order to avoid the risks which we can reasonably predict will arise in a vacuum. I am _not_ well qualified to weigh this particular set of risk exposures, most especially the risk of an untrustworthy CA intentionally acting to cache these keys, etc. I am well qualified to indicate that both risks exist. I believe they should probably be weighed as a "this or that" dichotomy.

>
> I also say this knowing how uninteroperable the existing key delivery
> mechanisms are (PKCS#12 = minefield), and how terrible the cryptographic
> protection of those are. Combine that with CAs repeated failure to
> correctly implement the specs that are less ambiguous, and I'm worried
> about a proliferation of private keys flying around - as some CAs do for

It _is_ absolutely essential that the question of secure transport and destruction be part of what is controlled for and monitored in a scheme where key generation by the CA is permitted. The mechanism becomes worse than almost everything else if that falls apart.


> their other, non-TLS certificates. So I see a lot of potential harm in the
> ecosystem, and question the benefit, especially when, as you note, this can
> be mitigated rather significantly by developers not shoveling crap out the
> door. If developers who view "time to market" as more important than
> "Internet safety" can't get their toys, I ... don't lose much sleep.

Aside from the cryptography enthusiast or professional, it is hard to find developers with the right intersection of skill and interest to address the security implications. It is complicated further when security isn't a business imperative, and further still when the customer base realizes security has real costs and begins to question the value. It's not just the developers. The trend in good-_looking_ quick reference designs lately is a great spec sheet paired with every imaginable shortcut wherever the requirements are not explicitly stated and audited. It's an ecosystem problem that is really hard to solve.

A couple of years ago, I and my team were doing interop testing between a device and one of our products. In the course of that, we discovered a nasty security issue that was blatantly obvious to someone skilled in our particular application area. We worked with the manufacturer to trace the product design back to a reference design from a Chinese ODM. They were ultimately amenable to fixing the issue, but we found at least 14 distinct affected products in the marketplace based upon that design that had not pulled in those changes as of a year later.

Even as the line between hardware engineer and software developer gets more and more blurred, there remains a stark division of skill set, knowledge base, and even understanding of each other's needs. That's problematic.

Peter Gutmann

unread,
Dec 13, 2017, 6:52:16 PM12/13/17
to Ryan Sleevi, Matthew Hardeman, mozilla-dev-s...@lists.mozilla.org, Tim Hollebeek
Matthew Hardeman via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>In principle, I support Mr. Sleevi's position, practically I lean toward Mr.
>Thayer's and Mr. Hollebeek's position.

I probably support at least one of those, if I can figure out who's been
quoted as saying what.

>Sitting on my desk are not less than 3 reference designs. At least two of
>them have decent hardware RNG capabilities.

My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
which has hardware RNGs. Or an OS, for that matter, at least in the sense of
something Unix-like. However, in all cases the RNG system is pretty secure,
you preload a fixed seed at manufacture and then get just enough changing data
to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
the very useful taskRegsGet() for which the docs tell you "self-examination is
not advisable as results are unpredictable", which is perfect).
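A rough sketch of that construction (values and names are assumed for illustration; a real implementation lives in the RTOS, not Python):

```python
import hashlib
import itertools

# Placeholder for the unique per-device seed preloaded at manufacture.
FACTORY_SEED = bytes(32)
_counter = itertools.count()

def embedded_random(volatile_state: bytes) -> bytes:
    # Mix the fixed seed with a monotonic counter plus whatever changing
    # data the RTOS can supply (e.g. raw task registers), guaranteeing
    # non-repeating output even when the extra data barely changes.
    c = next(_counter).to_bytes(8, "big")
    return hashlib.sha256(FACTORY_SEED + c + volatile_state).digest()

print(embedded_random(b"regs") != embedded_random(b"regs"))  # True
```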

In all of these cases, the device is going to be a safer place to generate
keys than the CA, in particular because (a) the CA is another embedded
controller somewhere so probably no better than the target device and (b)
there's no easy way to get the key securely from the CA to the device.

However, there's also an awful lot of IoS out there that uses shared private
keys (thus the term "the lesser-known public key" that was used at one
software house some years ago). OTOH those devices are also going to be
running decade-old unpatched kernels with every service turned on (also years-
old binaries), XSS, hardcoded admin passwords, and all the other stuff that
makes the IoS such a joy for attackers. So in that case I think a
less-than-good private key would be the least of your worries.

So the bits we need to worry about are what falls between "full of security
holes anyway" and "things done right". What is that, and does it matter if
the private keys aren't perfect?

Peter.

Matthew Hardeman

unread,
Dec 13, 2017, 7:10:52 PM12/13/17
to mozilla-dev-s...@lists.mozilla.org
On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:

> >Sitting on my desk are not less than 3 reference designs. At least two of
> >them have decent hardware RNG capabilities.
>
> My code runs on a lot (and I mean a *lot*) of embedded, virtually none of
> which has hardware RNGs. Or an OS, for that matter, at least in the sense of
> something Unix-like. However, in all cases the RNG system is pretty secure,
> you preload a fixed seed at manufacture and then get just enough changing data
> to ensure non-repeating values (almost every RTOS has this, e.g. VxWorks has
> the very useful taskRegsGet() for which the docs tell you "self-examination is
> not advisable as results are unpredictable", which is perfect).

I agree - and this same technique (a stateful deterministic pseudo-random number generator seeded with adequate entropy) is what I was proposing be used to generate the random data needed for EC signatures, ECDHE exchanges, etc.

This mechanism is only safe if that seed data process actually happens under secure circumstances, but for many devices and device manufacturers that can be assured.

>
> In all of these cases, the device is going to be a safer place to generate
> keys than the CA, in particular because (a) the CA is another embedded
> controller somewhere so probably no better than the target device and (b)
> there's no easy way to get the key securely from the CA to the device.

Agreed, as I mentioned the secure transport aspect is essential for remote key generation to be a secure option at any level.

>
> However, there's also an awful lot of IoS out there that uses shared private
> keys (thus the term "the lesser-known public key" that was used at one
> software house some years ago). OTOH those devices are also going to be
> running decade-old unpatched kernels with every service turned on (also years-
> old binaries), XSS, hardcoded admin passwords, and all the other stuff that
> makes the IoS such a joy for attackers. So in that case I think a less-then-
> good private key would be the least of your worries.

So, the platforms I'm talking about are the kind of stuff that sits somewhere in the middle of this. They're intended for professional consumption into the device development cycle, meant to be tweaked to the specifics of the use case. Often, the "manufacturer" makes very few changes to the hardware reference design, fewer still to the software reference design -- sometimes as shallow as branding -- and ships.

A lot of platforms with great potential at the hardware level but shockingly under-engineered, minimally designed software stacks are coming to prominence. They're cheap and, in the right hands, can be very effective. Unfortunately, some of these reference software stacks encourage just-good-enough practice that won't be quickly caught out -- no pre-built single shared private key, yet first-boot randomness initialized by a script that seeds a PRNG with uptime microseconds, clock ticks since reset, or something like that, which across a production line yields a very narrow band of values for a given first boot of a given reference design and set of boot scripts.

Nevertheless, many of these stacks do at least minimize extraneous services, and the target customers (pseudo-manufacturers to manufacturers) have gotten savvy to ancient kernels and known major remotely exploitable holes. We could call it the Internet of DeceptiveInThatImSomewhatShittyButHideItAtFirstGlance.

>
> So the bits we need to worry about are what falls between "full of security
> holes anyway" and "things done right". What is that, and does it matter if
> the private keys aren't perfect?

Agreed, and I attempted to address the first half of that just above -- my "Internet Of ....." description.

Wayne Thayer

unread,
Dec 13, 2017, 7:40:11 PM12/13/17
to Tim Hollebeek, mozilla-dev-s...@lists.mozilla.org
On Wed, Dec 13, 2017 at 4:06 PM, Tim Hollebeek via dev-security-policy <
dev-secur...@lists.mozilla.org> wrote:

>
> Wayne,
>
> For TLS/SSL certificates, I think PKCS #12 delivery of the key and
> certificate
> at the same time should be allowed, and I have no problem with a
> requirement
> to delete the key after delivery.


How would you define a requirement to discard the private key "after
delivery"? This seems like a very slippery slope.

> I also think server side generation along
> the lines of RFC 7030 (EST) section 4.4 should be allowed. I realize RFC
> 7030
> is about client certificates, but in a world with lots of tiny
> communicating
> devices that interface with people via web browsers, there are lots of
> highly
> resource constrained devices with poor access to randomness out there
> running
> web servers. And I think we are heading quickly towards that world.
> Tightening up the requirements to allow specific, approved mechanisms is
> fine.
> We don't want people doing random things that might not be secure.
>
Why is it unreasonable in this IoT scenario to require the private key to
be delivered prior to issuance?

Tim Hollebeek

unread,
Dec 14, 2017, 10:09:55 AM12/14/17
to Wayne Thayer, mozilla-dev-s...@lists.mozilla.org
Within 24 hours? Once the download completes? It doesn’t seem significantly harder than the other questions we grapple with. I’m sure there are plenty of reasonable solutions.



If you want to deliver the private key first, before issuance, that’d be fine too. It just means two downloads instead of one and I tend to prefer avoiding unnecessary complexity.



-Tim



From: Wayne Thayer [mailto:wth...@mozilla.com]
Sent: Wednesday, December 13, 2017 5:40 PM
To: Tim Hollebeek <tim.ho...@digicert.com>
Cc: mozilla-dev-s...@lists.mozilla.org
Subject: Re: CA generated keys



Ryan Hurst

unread,
Dec 15, 2017, 4:21:54 PM12/15/17
to mozilla-dev-s...@lists.mozilla.org
Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems is not a great candidate for the role of carrying keys anymore. You can see my blog post on this topic here: http://unmitigatedrisk.com/?p=543

The core issue is the use of old cryptographic primitives that barely live up to the equivalent cryptographic strength of keys in use today. The offline nature of the protection also enables an attacker to grind any value used as the password.
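A minimal sketch of that offline-grinding concern (PBKDF2 stands in here for the legacy PKCS#12 KDF, and the salt, iteration count, and wordlist are all illustrative): once the attacker holds the ciphertext, they can test guesses as fast as the KDF allows, with no server to throttle them.

```python
# Sketch: why purely offline protection invites password grinding.
# A legacy low-iteration KDF lets an attacker test many guesses per second;
# PBKDF2-HMAC-SHA1 stands in for the actual PKCS#12 PBE KDF here.
import hashlib

salt = b"\x8a" * 8
iterations = 2048          # in the ballpark of older PKCS#12 PBE defaults
target = hashlib.pbkdf2_hmac("sha1", b"hunter2", salt, iterations)

# The attacker simply replays a wordlist against the derived value.
wordlist = ["password", "letmein", "hunter2", "qwerty"]
found = next(w for w in wordlist
             if hashlib.pbkdf2_hmac("sha1", w.encode(), salt, iterations) == target)
print(found)  # a weak password falls to a wordlist almost instantly
```

The only brake on this attack is the KDF cost times the password's entropy, which is why weak transport passwords on CA-delivered key files are so dangerous.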

Any plan to allow a CA to generate keys on behalf of users (which I am not against, as long as there are strict and auditable practices associated with it) needs to take into consideration the protection of those keys in transit and storage.

I also believe that any language adopted here should clearly address cases where an organization that operates a CA is also a relying party. For example, Amazon, Google, and Apple all operate WebTrust-audited CAs, but they also operate cloud services where they are the subscriber of that CA. Any language used would need to make clear the relative scopes and responsibilities in such a case.

Ryan Hurst

unread,
Dec 15, 2017, 4:34:05 PM12/15/17
to mozilla-dev-s...@lists.mozilla.org
I agree that the "right way(tm)" is to have the keys generated in an HSM and exported only as ciphertext, in a way that the CA cannot decrypt the keys.

Technically the PKCS#12 format would allow for such a model, as you can encrypt the keybag to a public key (in a certificate). You could, for example, generate a key in an HSM, export it encrypted to a public key, and the CA would never see the key.
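A stdlib-only toy of that flow (this is NOT PKCS#12 certificate-based PBE, and the primitives here are illustrative rather than production crypto): the "HSM" side encrypts the freshly generated key to the subscriber's published public value, so the CA only ever relays ciphertext.

```python
# Toy sketch of "generate in the HSM, encrypt to the subscriber's public key".
# Finite-field Diffie-Hellman plus an HMAC-derived keystream, stdlib only;
# a real system would use RSA-OAEP or ECIES inside the HSM boundary.
import hashlib, hmac, os, secrets

# 1024-bit MODP modulus (Oakley Group 2, RFC 2409) -- illustration only.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381"
    "FFFFFFFFFFFFFFFFFFFFFFFF", 16)
G = 2

def keystream(shared: bytes, n: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n:
        out += hmac.new(shared, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:n]

# Subscriber publishes a long-term public value ahead of time.
sub_priv = secrets.randbelow(P - 2) + 1
sub_pub = pow(G, sub_priv, P)

# "HSM" side: generate the subject key, encrypt it to sub_pub, discard plaintext.
subject_key = os.urandom(32)               # stands in for the new private key
eph_priv = secrets.randbelow(P - 2) + 1
eph_pub = pow(G, eph_priv, P)
shared = hashlib.sha256(str(pow(sub_pub, eph_priv, P)).encode()).digest()
ciphertext = bytes(a ^ b for a, b in zip(subject_key, keystream(shared, 32)))
# The CA only ever handles (eph_pub, ciphertext).

# Subscriber side: recover the key with the long-term private value.
shared2 = hashlib.sha256(str(pow(eph_pub, sub_priv, P)).encode()).digest()
recovered = bytes(a ^ b for a, b in zip(ciphertext, keystream(shared2, 32)))
assert recovered == subject_key
```

The design point is simply that nothing the CA stores or relays is sufficient to recover the subject key; only the holder of `sub_priv` can.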

That model has several issues. The first is, of course, that you must trust the CA not to substitute a different key; this could be addressed by requiring the code performing this logic to be made public, and requiring that it utilize some transparent logging mechanism (e.g. Merkle trees) that could be audited against the HSM logs in some way. The second is that once you use such a mechanism, you have produced PKCS#12 files that cannot be opened by OpenSSL or Windows.

Another approach would be to write an applet that does all of this on the HSM (some HSMs contain a TEE); this would be more auditable, as you wouldn't have a soft linkage between the HSM logs and the outer logs. That said, you are still trusting the CA to pass in the right key, so I think you would want this logging mechanism to be publicly verifiable as well.

Regardless, you end up in a situation where you enable this use case, but in a way that produces files not consumable by the applications that need to use the keys. This may be appropriate for cases like IoT, where custom code could be relied upon on the client, but it would probably fail to meet the business requirements folks are trying to achieve here.

Matthew Hardeman

unread,
Dec 15, 2017, 4:34:30 PM12/15/17
to mozilla-dev-s...@lists.mozilla.org
On Friday, December 15, 2017 at 3:21:54 PM UTC-6, Ryan Hurst wrote:

> Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems is not a great candidate for the role of carrying keys anymore. You can see my blog post on this topic here: http://unmitigatedrisk.com/?p=543
>
> The core issue is the use of old cryptographic primitives that barely live up to the equivalent cryptographic strengths of keys in use today. The offline nature of the protection involved also enables an attacker to grind any value used as the password as well.
>
> Any plan to allow a CA to generate keys on behalf of users, which I am not against as long as there are strict and auditable practices associated with it, needs to take into consideration the protection of those keys in transit and storage.
>
> I also believe any language that would be adopted here would clearly addresses cases where a organization that happens to operate a CA but is also a relying party. For example Amazon, Google and Apple both operate WebTrust audited CAs but they also operate cloud services where they are the subscriber of that CA. Any language used would need to make it clear the relative scopes and responsibilities in such a case.

I had long wondered about the PKCS#12 issue. To the extent that any file format in use today is convenient for delivering a package of certificates, including a formal validation chain and associated private key(s), PKCS#12 is that format: convenient and fairly ubiquitous.

It is a pain that the cryptographic and integrity portions of the format are showing their age -- at least, as you point out, in the manner in which they're actually implemented in major software today.

Ryan Hurst

unread,
Dec 15, 2017, 5:03:01 PM12/15/17
to mozilla-dev-s...@lists.mozilla.org
So I have read this thread in its entirety now, and I think it makes sense to reset it to first principles, specifically:

What are the technological and business goals we are trying to achieve?
What are the requirements derived from those goals?
What are the negative consequences of those goals?

My feeling is there is simply an abstract desire to allow the CA to generate keys on behalf of the subject, but we have not sufficiently articulated a business case for this.

In my experience building and working with embedded systems I, like Peter, have found it is possible to build a sufficient pseudo-random number generator on these devices. In practice, however, deployed devices commonly either do not do so or seed their generators poorly.
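A hedged sketch of what "sufficient" can look like without dedicated hardware randomness (illustrative only; a real design would use vetted noise sources feeding a standard DRBG): fold many jittery timing samples through a hash, rather than seeding from a single clock read.

```python
# Sketch: entropy pooling on a constrained device. Instead of seeding a PRNG
# from one clock value, mix many noisy timing samples through a hash and use
# the digest to key a proper DRBG. Illustrative, not a production design.
import hashlib, time

def jitter_sample() -> int:
    # Count how many loop iterations fit in a tiny busy-wait window;
    # the low bits wobble with scheduling and clock jitter.
    t0 = time.perf_counter_ns()
    n = 0
    while time.perf_counter_ns() - t0 < 5_000:   # ~5 microseconds
        n += 1
    return n

pool = hashlib.sha256()
for _ in range(256):                  # many weak samples, mixed together
    pool.update(jitter_sample().to_bytes(8, "big"))
seed = pool.digest()
print(len(seed))  # a 32-byte value suitable for seeding a DRBG
```

Each individual sample may carry only a bit or two of real entropy, which is exactly why the pool has to accumulate hundreds of them before the output is worth trusting.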

This use case is one where transport would likely not need to be PKCS#12, given the custom nature of these solutions.

At the same time, these devices are often provisioned on a production line, and key generation could just as easily (and probably more appropriately) happen there.

In my experience as a CA, the desire to do server-side key generation almost always stems from a desire to reduce the friction for customers acquiring certificates for use in regular old web servers. Seldom does this case come up with network appliances, as they normally do not support the PKCS#12 format. While the reduction of friction is a laudable goal, it seems the better way to achieve it would be to adopt a protocol like ACME for certificate lifecycle management.

As I said in an earlier response, I am not against the idea of server-side key generation as long as:
There is a legitimate business need,
This can be done in a way that the CA does not have access to the key,
The process in which that this is done is fully transparent and auditable,
The transfer of the key is done in a way that is sufficiently secure,
The storage of the key is done in a way that is sufficiently secure,
We are extremely clear in how this can be done securely.

Basically, I believe that due to the varying degrees of technical background and skill in the CA operator ecosystem, allowing this without being extremely clear about how it must be done is probably a case of the cure being worse than the ailment.

With that background I wonder, is this even worth exploring?

Gervase Markham

unread,
Dec 15, 2017, 6:31:01 PM12/15/17
to Ryan Hurst
On 15/12/17 16:02, Ryan Hurst wrote:
> So I have read this thread in its entirety now and I think it makes sense for it to reset to first principles, specifically:
>
> What are the technological and business goals trying to be achieved,
> What are the requirements derived from those goals,
> What are the negative consequences of those goals.
>
> My feeling is there is simply an abstract desire to allow for the CA, on behalf of the subject, to generate the keys but we have not sufficiently articulated a business case for this.

I think I'm in exactly this position also; thank you for articulating
it. One might also add:

* What are the inevitable technical consequences of a scheme which meets
these goals? (E.g. "use of PKCS#12 for key transport" might be one
answer to that question.)

Gerv

Tim Hollebeek

unread,
Dec 18, 2017, 10:58:51 AM12/18/17
to Gervase Markham, Ryan Hurst, mozilla-dev-s...@lists.mozilla.org

> On 15/12/17 16:02, Ryan Hurst wrote:
> > So I have read this thread in its entirety now and I think it makes
> > sense for it to reset to first principles, specifically:
> >
> > What are the technological and business goals trying to be achieved,
> > What are the requirements derived from those goals,
> > What are the negative consequences of those goals.
> >
> > My feeling is there is simply an abstract desire to allow for the CA,
> > on behalf of the subject, to generate the keys but we have not
> > sufficiently articulated a business case for this.
>
> I think I'm in exactly this position also; thank you for articulating it.
> One might also add:
>
> * What are the inevitable technical consequences of a scheme which meets
> these goals? (E.g. "use of PKCS#12 for key transport" might be one answer
> to that question.)

I actually agree with Ryan, too. I think it's more of an issue of what sort
of future we want, and we have time. I'm actually far less interested in
the PKCS#12 use case, and more interested in things like RFC 7030, which
keeps popping up in the IoT space.

Also, in response to Ryan's other comments on PKCS#12, replacing it with
something more modern for the use cases where it is currently common (e.g.
client certificates, email certificates) would also be a huge improvement.

-Tim

Peter Gutmann

unread,
Dec 18, 2017, 8:02:22 PM12/18/17
to mozilla-dev-s...@lists.mozilla.org, Ryan Hurst
Ryan Hurst via dev-security-policy <dev-secur...@lists.mozilla.org> writes:

>Unfortunately, the PKCS#12 format, as supported by UAs and Operating Systems
>is not a great candidate for the role of carrying keys anymore. You can see
>my blog post on this topic here: http://unmitigatedrisk.com/?p=543

It's even worse than that; I use it as my teaching example of how not to
design a crypto standard:

https://www.cs.auckland.ac.nz/~pgut001/pubs/pfx.html

In other words, its main function is as a broad-spectrum antipattern that
you can use for teaching purposes.

>The core issue is the use of old cryptographic primitives that barely live up
>to the equivalent cryptographic strengths of keys in use today. The offline
>nature of the protection involved also enables an attacker to grind any value
>used as the password as well.

That, and about five hundred other issues. An easier solution would be to use
PKCS #15, which dates from roughly the same time as #12 but doesn't have any
of those problems (PKCS #12 only exists because it was a political compromise
created to appease Microsoft, who really, really wanted everyone to use their
PFX design).

Peter.

Michael Ströder

unread,
Dec 23, 2017, 8:25:22 AM12/23/17
to mozilla-dev-s...@lists.mozilla.org
Matthew Hardeman wrote:
> On Wednesday, December 13, 2017 at 5:52:16 PM UTC-6, Peter Gutmann wrote:
>> In all of these cases, the device is going to be a safer place to generate
>> keys than the CA, in particular because (a) the CA is another embedded
>> controller somewhere so probably no better than the target device and (b)
>> there's no easy way to get the key securely from the CA to the device.
>
> Agreed, as I mentioned the secure transport aspect is essential for
> remote key generation to be a secure option at any level.

I have strong doubts that all these Internet-of-shitty-things
manufacturers will ever get anything like this right.
I agree with Peter: private key generation is the least you have to
worry about when using such devices.

Also, I'm seriously concerned that if the policy is changed to allow
CA-side key generation and this gets adopted, CAs will be forced to
implement key escrow, disclosing keys to <name-any-interested-party-here>.

=> Mozilla policy *shall not* be changed to allow CAs to generate the
end entities' keys.

(The only reasonable use-case for a CA generating the private keys is to
ensure that they are immediately stored in a secure device. But that's
not really applicable in this broad use-case.)

Ciao, Michael.

Jakob Bohm

unread,
Dec 29, 2017, 12:26:09 AM12/29/17
to mozilla-dev-s...@lists.mozilla.org
Please reread my message above.

I was talking about the protections of the semi-offline HSM holding the
issuing CA key getting in the way of a procedure that delivers the
private key, erases it, and then signs the certificate, all in a matter
of seconds, while the subscriber (who is presumably not smart enough to
create a CSR locally) waits impatiently.

PKCS#12 encrypted to a certificate is unlikely to be useful to any
first-time, not-terribly-smart end user, because it would:

- Work on only a tiny minority of end-user systems (password-based PKCS#12
is the commonly implemented and tested case).

- Require the user to already have a certificate (and associated private
key) just to decrypt the CA-generated private key -- and a user needing
such a service, especially for first-time or post-expiry issuance, is
unlikely to have the training and tools to set one up.