
Removal of 1024 bit CA roots - interoperability


Hubert Kario

Jul 4, 2014, 9:27:49 AM
to dev-secur...@lists.mozilla.org
The newly released NSS 3.16.3 no longer includes 1024-bit CA
certificates[1]. This will of course impact users of servers that still
rely on them.

Interestingly, some intermediate CA certificates that were originally
signed by those 1024 bit CA certificates got cross signed using
different roots that will remain trusted[2]. In particular I mean the
"USERTrust Legacy Secure Server CA" certificate.

The problem is that some administrators haven't updated their servers
to provide the new intermediate certificate for 3 years. As such,
I don't think we can realistically expect all of them to update their
configuration now.

While testing found just 217 impacted sites as of 2014-05-30[2], it
covered only the top 200,000 SSL-enabled servers. I'd estimate the total
number in the Alexa top 1M alone at over 373k. Moreover, some of the
impacted sites are in the top 10,000, like groupon.my[3]. So loss of
connectivity to them may have a bigger impact than the 217 quoted above
would lead us to believe.

That's why I think that we should ship the intermediate CA certificates
to make Firefox continue to interoperate with such sites.
I don't mean only the USERTrust certificate, but others too, if they
exist.

1 - https://bugzilla.mozilla.org/show_bug.cgi?id=1021967
2 - https://bugzilla.mozilla.org/show_bug.cgi?id=936304
3 - https://www.ssllabs.com/ssltest/analyze.html?d=groupon.my
--
Regards,
Hubert Kario
Quality Engineer, QE BaseOS Security team
Email: hka...@redhat.com
Web: www.cz.redhat.com
Red Hat Czech s.r.o., Purkyňova 99/71, 612 45, Brno, Czech Republic

Kurt Roeckx

Jul 4, 2014, 1:23:08 PM
to Hubert Kario, dev-secur...@lists.mozilla.org
On Fri, Jul 04, 2014 at 09:27:49AM -0400, Hubert Kario wrote:
> The newly released NSS 3.16.3 doesn't include 1024 bit CA certificates
> any more[1]. This will of course impact users of servers that still use
> it.
>
> Interestingly, some intermediate CA certificates that were originally
> signed by those 1024 bit CA certificates got cross signed using
> different roots that will remain trusted[2]. In particular I mean the
> "USERTrust Legacy Secure Server CA" certificate.

Not sure which certificate you mean by that.

> Problem is, that some administrators haven't updated their servers
> to provide the new intermediate certificate for 3 years. As such,
> I don't think we can realistically expect all of them to update their
> configuration now.
>
> While testing found just 217 sites as of 2014-05-30 that are
> impacted by this change[2], it did test only top 200 000
> SSL enabled servers. I'd estimate the total number in Alexa top 1M
> alone at over 373k. Moreover, some of those sites include sites that
> are in the top 10000 sites, like groupon.my[3]. So loss of connectivity
> to them may have bigger impact than the above quoted 217 could lead
> us to believe.

Using Rapid7's Sonar data from 30 June 2014, I see these
certificates the following number of times:
99a69be61afe886b4d2b82007cb854fc317e1539 11204
97817950d81c9670cc34d809cf794431367ef474 19815
e5df743cb601c49b9843dcab8ce86a81109fe48e 7
317a2ad07f2b335ef5a1c34e4b57e8b7d8f1fca6 89707
69bd8cf49cd300fb592e1793ca556af3ecaa35fb 116

> That's why I think that we should ship the intermediate CA certificates
> to make Firefox continue to interoperate with such sites.

Is it an option to instead ship the intermediate so that we find
an alternative trust path? We might already pick up that
alternative in most cases.


Kurt

cl...@jhcloos.com

Jul 4, 2014, 2:24:42 PM
to mozilla-dev-s...@lists.mozilla.org
Hubert Kario <hka...@redhat.com> writes:

> Problem is, that some administrators haven't updated their servers
> to provide the new intermediate certificate for 3 years. As such,
> I don't think we can realistically expect all of them to update their
> configuration now.

That is not surprising. IME the vendors do not announce the new
intermediates to their customers. (Some might, but some definitely
do not.)

-JimC
--
James Cloos <cl...@jhcloos.com> OpenPGP: 1024D/ED7DAEA6

David E. Ross

Jul 4, 2014, 2:42:58 PM
to mozilla-dev-s...@lists.mozilla.org
Why should Mozilla provide cover for server administrators who fail to
update their servers and for certification authorities who fail to
communicate clearly with their customers? I believe such action will
only encourage further such failures.

If the servers and certification authorities can actually be identified
and contact individuals found, I would go as far as to inform them
that 1024-bit root certificates will no longer function in Mozilla
products by some date and to suggest how to mitigate that situation
(e.g., by updating intermediate certificates to point to newer roots).
I would not go further.

--

David E. Ross
<http://www.rossde.com/>

On occasion, I filter and ignore all newsgroup messages
posted through GoogleGroups via Google's G2/1.0 user agent
because of spam, flames, and trolling from that source.

Hubert Kario

Jul 7, 2014, 7:29:38 AM
to David E. Ross, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "David E. Ross" <nob...@nowhere.invalid>
> To: mozilla-dev-s...@lists.mozilla.org
> Sent: Friday, July 4, 2014 8:42:58 PM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On 7/4/2014 6:27 AM, Hubert Kario wrote:
> Why should Mozilla provide cover for server administrators who fail to
> update their servers and for certification authorities who fail to
> communicate clearly with their customers? I believe such action will
> only encourage further such failures.

Because it is Mozilla that distrusts 1024-bit RSA CA keys ahead of the
CA/Browser Forum schedule:

" Root CA Certificate issued prior to 31 Dec. 2010 with an RSA
key size less than 2048 bits MAY still serve as
a trust anchor for Subscriber Certificates issued in accordance
with these Requirements."

There is no date for when 1024-bit RSA roots are to be distrusted,
unlike the intermediate certificates, which all *do* have a hard date:
31st December 2014.

> If the servers and certification authorities can actually be identified
> and contact individuals be found, I would go as far as to inform them
> that 1024 root certificates will no longer function in Mozilla products
> by some date and suggest how to mitigate that situation (e.g., by
> updating intermediate certificates to point to newer roots). I would
> not go further.

Like I said, they were already contacted by the CAs. 3 years ago!

While it is negligence on the administrators' part, working around it
won't cause long-lasting effects or security problems.

I say that we should accommodate all the changes that are necessary to
increase the strength of the trust chain. If shipping a pre-cached (not
explicitly trusted!) intermediate CA certificate requires that, so be it.

Kurt Roeckx

Jul 7, 2014, 7:46:18 AM
to mozilla-dev-s...@lists.mozilla.org
On 2014-07-07 13:29, Hubert Kario wrote:
> ----- Original Message -----
>> From: "David E. Ross" <nob...@nowhere.invalid>
>> Why should Mozilla provide cover for server administrators who fail to
>> update their servers and for certification authorities who fail to
>> communicate clearly with their customers? I believe such action will
>> only encourage further such failures.
>
> Because it is Mozilla that distrusts 1024 bit RSA CA keys ahead of
> CA/Browser forum schedule:
>
> " Root CA Certificate issued prior to 31 Dec. 2010 with an RSA
> key size less than 2048 bits MAY still serve as
> a trust anchor for Subscriber Certificates issued in accordance
> with these Requirements."
>
> There is no date as to when 1024 bit RSA roots are to be untrusted,
> unlike the intermediate certificates which all *do* have a hard date:
> 31st December 2014.

That's 31st December 2013.

> I say that we should accommodate all the changes that are necessary to
> increase the strength of the trust chain. If shipping a pre cached (not
> explicitly trusted!) intermediate CA certificate requires that, so be it.

Yes, I've made the same suggestion and I think that is the best way forward.


Kurt

Hubert Kario

Jul 7, 2014, 7:52:04 AM
to Kurt Roeckx, dev-secur...@lists.mozilla.org
----- Original Message -----
> From: "Kurt Roeckx" <ku...@roeckx.be>
> To: "Hubert Kario" <hka...@redhat.com>
> Cc: dev-secur...@lists.mozilla.org
> Sent: Friday, July 4, 2014 7:23:08 PM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On Fri, Jul 04, 2014 at 09:27:49AM -0400, Hubert Kario wrote:
> > Interestingly, some intermediate CA certificates that were originally
> > signed by those 1024 bit CA certificates got cross signed using
> > different roots that will remain trusted[2]. In particular I mean the
> > "USERTrust Legacy Secure Server CA" certificate.
>
> Not sure which certificte you mean with that.

SHA1: 4a7edf9daa8955f800f8276ec70e9c44267416c7

the one referenced in comment 19 of BZ#936304.
But I have checked just this one root CA; that's why I was saying that
there may be more.

> > That's why I think that we should ship the intermediate CA certificates
> > to make Firefox continue to interoperate with such sites.
>
> Is it an option to instead ship the intermediate so that we find
> an alternative trust path? We might already pick up that
> alternative in most cases.

That's what I had in mind: not to explicitly trust it, but to have it
"precached", so that FF behaves the same way as if I had already visited
some different site that uses this intermediate CA and properly presents
it to the clients.
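To make that "precached, not trusted" distinction concrete, here is a toy sketch (the data model, the "2048-bit Root" anchor name, and the helper are all hypothetical, not NSS APIs): the shipped intermediate can complete a chain that a misconfigured server fails to send, but it never serves as a trust anchor itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    subject: str
    issuer: str

# Subjects that are explicitly trusted anchors.
TRUST_ANCHORS = {"2048-bit Root"}
# Shipped alongside the store but NOT trusted: only used to fill gaps.
PRECACHED = [Cert("USERTrust Legacy Secure Server CA", "2048-bit Root")]

def build_chain(served):
    """Extend the served chain with precached intermediates until a
    trust anchor is reached; return the chain, or None on failure."""
    chain = list(served)
    while chain[-1].issuer not in TRUST_ANCHORS:
        nxt = next((c for c in PRECACHED if c.subject == chain[-1].issuer), None)
        if nxt is None:
            return None
        chain.append(nxt)
    return chain

# A misconfigured server sends only its end-entity certificate:
chain = build_chain([Cert("www.example.test", "USERTrust Legacy Secure Server CA")])
```

With the intermediate precached, the chain completes exactly as it would have if the browser had previously cached it from a correctly configured site.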

Hubert Kario

Jul 7, 2014, 7:54:30 AM
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Kurt Roeckx" <ku...@roeckx.be>
> To: mozilla-dev-s...@lists.mozilla.org
> Sent: Monday, July 7, 2014 1:46:18 PM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On 2014-07-07 13:29, Hubert Kario wrote:
> > ----- Original Message -----
> >> From: "David E. Ross" <nob...@nowhere.invalid>
> >> Why should Mozilla provide cover for server administrators who fail to
> >> update their servers and for certification authorities who fail to
> >> communicate clearly with their customers? I believe such action will
> >> only encourage further such failures.
> >
> > Because it is Mozilla that distrusts 1024 bit RSA CA keys ahead of
> > CA/Browser forum schedule:
> >
> > " Root CA Certificate issued prior to 31 Dec. 2010 with an RSA
> > key size less than 2048 bits MAY still serve as
> > a trust anchor for Subscriber Certificates issued in accordance
> > with these Requirements."
> >
> > There is no date as to when 1024 bit RSA roots are to be untrusted,
> > unlike the intermediate certificates which all *do* have a hard date:
> > 31st December 2014.
>
> That's 31st December 2013.

yes, 2013. Mistyped a key.

Kathleen Wilson

Jul 25, 2014, 6:11:11 PM
to mozilla-dev-s...@lists.mozilla.org
On 7/4/14, 6:27 AM, Hubert Kario wrote:
> The newly released NSS 3.16.3 doesn't include 1024 bit CA certificates
> any more[1]. This will of course impact users of servers that still use
> it.
<snip>
> That's why I think that we should ship the intermediate CA certificates
> to make Firefox continue to interoperate with such sites.


Hubert and all,

I apologize for my delay in responding to this. I was on a 3-week family
vacation, and am still trying to catch up.

Thank you for your consideration and input on this topic. I don't yet
have an answer, but I wanted to let you all know that we have been
looking into this.


== Background ==
We have begun removal of 1024-bit roots with the following 2 bugs:
https://bugzilla.mozilla.org/show_bug.cgi?id=936304
-- Remove Entrust.net, GTE CyberTrust, and ValiCert 1024-bit root
certificates from NSS
https://bugzilla.mozilla.org/show_bug.cgi?id=986005
-- Turn off SSL and Code Signing trust bits for VeriSign 1024-bit roots

There are two more sets of 1024-bit root changes that will need to follow:
https://bugzilla.mozilla.org/show_bug.cgi?id=986014
-- Remove Thawte 1024-bit roots
https://bugzilla.mozilla.org/show_bug.cgi?id=986019
-- Turn off SSL and Code Signing trust bits for Equifax 1024-bit roots

Note that in the future we may have to take similar action for SHA1
roots, or for other old roots that are being replaced with new and more
compliant roots (e.g. baseline requirements). So, there will be an
ongoing need to transition servers from using old cert chains to new
cert chains.


== Problem ==
Some web server administrators have not updated their web servers to
provide a new intermediate certificate signed by a newer root, even
though the CA has implored them to do so. For those websites, users may
get the Untrusted Connection error when the old root is removed.


== Possible Solution ==
One possible way to help mitigate the pain of migration from an old root
is to directly include the cross-signed intermediate certificate that
chains up to the new root in NSS for 1 or 2 years. With classic NSS path
building, the path with the newer issuer is taken, so path validation
will go through the (newer) cross-signed intermediate certificate. With
mozilla::pkix all paths are considered until path validation succeeds.
Therefore, directly including the cross-signed intermediate certificate
for a while could provide a smoother transition. Presumably over that
time, the SSL certs will expire and the web server operators will
upgrade to the new cert chains.

This does not mean that we would begin including intermediate certs upon
request. We would only consider using this approach as a way to provide
a smoother transition when we remove a root certificate. Mozilla would
determine when it is necessary to include an intermediate certificate
for the purpose of removing a root certificate.
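The path-building difference described above can be sketched as follows (a toy depth-first search with hypothetical helper names, not the actual mozilla::pkix code): because every candidate path is tried until validation succeeds, a cross-signed intermediate lets validation complete via the new root even after the old root is removed.

```python
def find_valid_path(cert, issuers_of, is_trusted, validates):
    """Try every candidate issuer chain until one validates."""
    def search(chain):
        tail = chain[-1]
        if is_trusted(tail) and validates(chain):
            return chain
        for issuer in issuers_of(tail):
            if issuer in chain:          # avoid loops in cross-signed graphs
                continue
            found = search(chain + [issuer])
            if found:
                return found
        return None
    return search([cert])

# Cross-signed intermediate: same name and key, two possible issuers.
issuers = {"EE": ["Intermediate"], "Intermediate": ["Root-1024", "Root-2048"]}
path = find_valid_path(
    "EE",
    lambda c: issuers.get(c, []),
    lambda c: c == "Root-2048",          # only the 2048-bit root remains trusted
    lambda chain: True,                  # signature checks elided in this sketch
)
```

The path through Root-1024 dead-ends, but the search backtracks and succeeds through Root-2048, which is the smoother transition the cross-signed intermediate is meant to provide.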


== For this batch of root changes ==

We are still investigating if we should use this possible solution for
this batch of root changes. Please stay tuned and continue to share data
and test results that should be considered.


== Side note ==

>> 3 - https://www.ssllabs.com/ssltest/analyze.html?d=groupon.my

The SSL cert for https://www.groupon.my/ is a 2-year cert that was
created on July 23, 2012, and expires on October 22, 2014, so hopefully
they plan to update their SSL cert and chain soon.

The intermediate cert on this site (USERTrust Legacy Secure Server CA)
expires on November 1, 2015, so I expect that the new SSL cert will not
be issued by this intermediate cert. So the website operator will have
to update the entire cert chain -- which they should do whenever they
update their SSL cert.


Thanks,
Kathleen

Hubert Kario

Jul 28, 2014, 6:37:24 AM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Kathleen Wilson" <kwi...@mozilla.com>
> To: mozilla-dev-s...@lists.mozilla.org
> Sent: Saturday, 26 July, 2014 12:11:11 AM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On 7/4/14, 6:27 AM, Hubert Kario wrote:
> > The newly released NSS 3.16.3 doesn't include 1024 bit CA certificates
> > any more[1]. This will of course impact users of servers that still use
> > it.
> <snip>
> > That's why I think that we should ship the intermediate CA certificates
> > to make Firefox continue to interoperate with such sites.
>
> == Possible Solution ==
> One possible way to help mitigate the pain of migration from an old root
> is to directly include the cross-signed intermediate certificate that
> chains up to the new root in NSS for 1 or 2 years.
<snip>
> This does not mean that we would begin including intermediate certs upon
> request. We would only consider using this approach as a way to provide
> a smoother transition when we remove a root certificate. Mozilla would
> determine when it is necessary to include an intermediate certificate
> for the purpose of removing a root certificate.

Thank you for looking into this

> == For this batch of root changes ==
>
> We are still investigating if we should use this possible solution for
> this batch of root changes. Please stay tuned and continue to share data
> and test results that should be considered.

I did perform a scan of the Alexa top 1 million the week before this;
unfortunately, I haven't had time to write a script to analyze that data
yet. I'll try to do it this week.

Brian Smith

Jul 28, 2014, 2:00:34 PM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
On Fri, Jul 25, 2014 at 3:11 PM, Kathleen Wilson <kwi...@mozilla.com> wrote:
> == Possible Solution ==
> One possible way to help mitigate the pain of migration from an old root is
> to directly include the cross-signed intermediate certificate that chains up
> to the new root in NSS for 1 or 2 years.

I suggest that, instead of including the cross-signing certificates in
the NSS certificate database, the mozilla::pkix code should be changed
to look up those certificates when attempting to find them through NSS
fails. That way, Firefox and other products that use NSS will have a
lot more flexibility in how they handle the compatibility logic. Also,
leaving out the cross-signing certificates is a more secure default
configuration for NSS. We should be encouraging more secure default
configurations in widely-used crypto libraries instead of adding
compatibility hacks to them that are needed by just a few products.

> are considered until path validation succeeds. Therefore, directly including
> the cross-signed intermediate certificate for a while could provide a
> smoother transition. Presumably over that time, the SSL certs will expire
> and the web server operators will upgrade to the new cert chains.

I am not so sure. If the websites are using a cert chain like:

EE <- intermediate-1024 <- root-1024

then you are right. But, if the websites are using a cert chain like these:

EE <- intermediate-2048 <- root-1024
EE <- intermediate-2048 <- intermediate-1024 <- root-1024

Then it is likely that many of the websites may not update enough of
the cert chain to make the use of 1024-bit certificates to go away.

Cheers,
Brian

Rob Stradling

Jul 28, 2014, 3:00:40 PM
to Brian Smith, Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
On 28/07/14 19:00, Brian Smith wrote:
> On Fri, Jul 25, 2014 at 3:11 PM, Kathleen Wilson <kwi...@mozilla.com> wrote:
<snip>
>> are considered until path validation succeeds. Therefore, directly including
>> the cross-signed intermediate certificate for a while could provide a
>> smoother transition. Presumably over that time, the SSL certs will expire
>> and the web server operators will upgrade to the new cert chains.
>
> I am not so sure. If the websites are using a cert chain like:
>
> EE <- intermediate-1024 <- root-1024
>
> then you are right. But, if the websites are using a cert chain like these:
>
> EE <- intermediate-2048 <- root-1024
> EE <- intermediate-2048 <- intermediate-1024 <- root-1024
>
> Then it is likely that many of the websites may not update enough of
> the cert chain to make the use of 1024-bit certificates to go away.

The particular case that Hubert mentioned at the start of this thread is
the "USERTrust Legacy Secure Server CA": 2 intermediate certs containing
the same Name and Public Key, one signed by a 1024-bit root and the
other signed by a 2048-bit root. i.e...

EE <- intermediate-2048 <- root-1024
EE <- intermediate-2048 <- root-2048

USERTrust Legacy Secure Server CA ceased issuing certs in September
2012, but many of those certs still have significant lifetime remaining.
All of the certificate holders were initially instructed to configure
their servers to serve the chain up to the 1024-bit root.

Since then, despite efforts to persuade certificate holders to
reconfigure their servers to serve the chain up to the 2048-bit root, a
significant proportion of the certificate holders have left their server
configuration unchanged.

I can see the attraction of bundling the second intermediate cert
(signed by the 2048-bit root) with Firefox and/or NSS, especially given
that Firefox/NSS doesn't attempt to fetch "missing" intermediates using
AIA->caIssuers URLs.

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online

Kai Engert

Jul 28, 2014, 3:02:19 PM
to mozilla-dev-s...@lists.mozilla.org
On Mon, 2014-07-28 at 11:00 -0700, Brian Smith wrote:
> I suggest that, instead of including the cross-signing certificates in
> the NSS certificate database, the mozilla::pkix code should be changed
> to look up those certificates when attempting to find them through NSS
> fails.

We are looking for a way to fix all applications that use NSS, not just
Firefox. Only Firefox uses the mozilla::pkix library.

Kai


Kai Engert

Jul 28, 2014, 3:05:59 PM
to mozilla-dev-s...@lists.mozilla.org
Actually, including intermediates in the Mozilla root CA list should
even help applications that use other crypto toolkits (not just NSS).

Kai


Kathleen Wilson

Jul 28, 2014, 4:28:54 PM
to mozilla-dev-s...@lists.mozilla.org
On 7/25/14, 3:11 PM, Kathleen Wilson wrote:
> On 7/4/14, 6:27 AM, Hubert Kario wrote:
>> The newly released NSS 3.16.3 doesn't include 1024 bit CA certificates
>> any more[1]. This will of course impact users of servers that still use
>> it.
> <snip>
>> That's why I think that we should ship the intermediate CA certificates
>> to make Firefox continue to interoperate with such sites.
>
>
<snip>
>
> == For this batch of root changes ==
>
> We are still investigating if we should use this possible solution for
> this batch of root changes. Please stay tuned and continue to share data
> and test results that should be considered.
>


I have filed a bug regarding this:
https://bugzilla.mozilla.org/show_bug.cgi?id=1045189

Thanks,
Kathleen


Kathleen Wilson

Jul 30, 2014, 3:17:27 PM
to mozilla-dev-s...@lists.mozilla.org
On 7/28/14, 11:00 AM, Brian Smith wrote:
> I suggest that, instead of including the cross-signing certificates in
> the NSS certificate database, the mozilla::pkix code should be changed
> to look up those certificates when attempting to find them through NSS
> fails. That way, Firefox and other products that use NSS will have a
> lot more flexibility in how they handle the compatibility logic.


There's already a bug for fetching missing intermediates:
https://bugzilla.mozilla.org/show_bug.cgi?id=399324

I think it would help with removal of roots (the remaining 1024-bit
roots, non-BR-compliant roots, SHA1 roots, retired roots, etc.), and IE
has supported this capability for a long time.

So, should we do this?
Does it introduce security concerns?

Kathleen

Brian Smith

Jul 30, 2014, 4:00:48 PM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
On Wed, Jul 30, 2014 at 12:17 PM, Kathleen Wilson <kwi...@mozilla.com> wrote:
> On 7/28/14, 11:00 AM, Brian Smith wrote:
>>
>> I suggest that, instead of including the cross-signing certificates in
>> the NSS certificate database, the mozilla::pkix code should be changed
>> to look up those certificates when attempting to find them through NSS
>> fails. That way, Firefox and other products that use NSS will have a
>> lot more flexibility in how they handle the compatibility logic.
>
> There's already a bug for fetching missing intermediates:
> https://bugzilla.mozilla.org/show_bug.cgi?id=399324
>
> I think it would help with removal of roots (the remaining 1024-bit roots,
> non-BR-complaint roots, SHA1 roots, retired roots, etc.), and IE has been
> supporting this capability for a long time.

First of all, there is no such thing as a SHA1 root. Unlike the public
key algorithm, the hash algorithm is NOT fixed per root. That means
any RSA-2048 root can already issue certificates signed using SHA256
instead of SHA1. AFAICT, there's no reason for a CA to insist on
adding new roots for SHA256 support.

Other desktop browsers do support AIA certificate fetching, but many
mobile browsers don't. For example, Chrome on Android does not support
AIA fetching (at least, at the time I tried it) but Chrome on desktop
does support it. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging website administrators
to create websites that don't work on all browsers.

The AIA fetching mechanism is not reliable, for the same reasons that
OCSP fetching is not reliable. So, if Firefox were to add support for
AIA certificate fetching, it would be encouraging websites to create
websites that don't work reliably.

The AIA fetching process and OCSP fetching are both very slow--much
slower than the combination of all other SSL handshaking and
certificate verification. So, if Firefox were to add support for AIA
certificate fetching, it would be encouraging websites to create slow
websites.

The AIA fetching mechanism and OCSP fetching require an HTTP
implementation in order to verify certificates, and both of those
mechanisms require (practically, if not theoretically) the fetching to
be done over unauthenticated and unencrypted channels. It is not a
good idea to add the additional attack surface of an entire HTTP stack
to the certificate verification process.
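The fallback being debated can be sketched like this (stub callbacks and a dict-based cert model, all hypothetical): the missing issuer is fetched over plain HTTP only after local path building fails, which is exactly the extra latency and attack surface described above.

```python
def verify_with_aia_fallback(served_chain, build_path, http_get, parse_cert):
    """Try local path building first; on failure, fetch the missing
    issuer from the AIA caIssuers URL of the last served certificate."""
    path = build_path(served_chain)
    if path is not None:
        return path
    aia_url = served_chain[-1].get("ca_issuers")   # AIA extension, if present
    if not aia_url:
        return None
    issuer = parse_cert(http_get(aia_url))         # unauthenticated HTTP fetch!
    return build_path(served_chain + [issuer])

# Stub environment: path building succeeds only once "Legacy CA" is present.
chain = [{"subject": "EE", "ca_issuers": "http://ca.test/issuer.crt"}]
result = verify_with_aia_fallback(
    chain,
    build_path=lambda c: c if c[-1]["subject"] == "Legacy CA" else None,
    http_get=lambda url: b"(DER bytes)",
    parse_cert=lambda der: {"subject": "Legacy CA"},
)
```

Note that both `http_get` and `parse_cert` run on attacker-influenceable input before any trust decision has been made, which is the attack-surface concern raised above.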

If we are willing to encourage administrators to create websites that
don't work with all browsers, then we should just preload the
commonly-missing intermediate certificates into Firefox and/or NSS.
This would avoid all the performance problems, reliability problems,
and additional attack surface, and still provide a huge compatibility
benefit. In fact, most misconfigured websites would then work better
(faster, more reliably) in Firefox than in other browsers.

One of the motivations for creating mozilla::pkix was to make it easy
for Firefox to preload these certificates without having to have them
preloaded into NSS, because Wan-Teh had objected to preloading them
into NSS when I proposed it a couple of years ago. So, I think the
best course of action would be for us to try the preloading approach
first, and then re-evaluate whether AIA fetching is necessary later,
after measuring the results of preloading.

Cheers,
Brian

Matt Palmer

Jul 30, 2014, 4:29:32 PM
to dev-secur...@lists.mozilla.org
On Wed, Jul 30, 2014 at 12:17:27PM -0700, Kathleen Wilson wrote:
> On 7/28/14, 11:00 AM, Brian Smith wrote:
> >I suggest that, instead of including the cross-signing certificates in
> >the NSS certificate database, the mozilla::pkix code should be changed
> >to look up those certificates when attempting to find them through NSS
> >fails. That way, Firefox and other products that use NSS will have a
> >lot more flexibility in how they handle the compatibility logic.
>
>
> There's already a bug for fetching missing intermediates:
> https://bugzilla.mozilla.org/show_bug.cgi?id=399324
>
> I think it would help with removal of roots (the remaining 1024-bit
> roots, non-BR-complaint roots, SHA1 roots, retired roots, etc.), and
> IE has been supporting this capability for a long time.
>
> So, Should we do this?
> Does it introduce security concerns?

No more so than allowing servers to provide intermediates in the TLS
handshake. It's all untrusted info as far as the browser's concerned, until
it performs the path validation.

- Matt

Brian Smith

Jul 30, 2014, 5:02:46 PM
to Kai Engert, mozilla-dev-s...@lists.mozilla.org
On Mon, Jul 28, 2014 at 12:05 PM, Kai Engert <ka...@kuix.de> wrote:
> On Mon, 2014-07-28 at 21:02 +0200, Kai Engert wrote:
>> On Mon, 2014-07-28 at 11:00 -0700, Brian Smith wrote:
>> > I suggest that, instead of including the cross-signing certificates in
>> > the NSS certificate database, the mozilla::pkix code should be changed
>> > to look up those certificates when attempting to find them through NSS
>> > fails.
>>
>> We are looking for a way to fix all applications that use NSS, not just
>> Firefox. Only Firefox uses the mozilla::pkix library.
>
> Actually, including intermediates in the Mozilla root CA list should
> even help applications that use other crypto toolkits (not just NSS).

It depends on your definition of "help." I assume the goal is to
encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
or ECDSA-P-256 signatures. If so, then including the intermediates in
NSS so that all NSS-based applications can use them will be
counterproductive to the goal, because when the system administrator
is testing his server using those other NSS-based tools, he will not
notice that he is depending on 1024-bit certificates (cross-signed or
root) because everything will work fine.

Similarly, as you note, many non-NSS-based tools copy the NSS
certificate set into their own certificate databases. Thus, the effect
of encouraging the continued dependency on 1024-bit signatures would
have an even wider impact beyond NSS-based applications.

I remember that we had a discussion about this a long time ago, but I
think it might have been private. In the previous discussion, I noted
that removing a 1024-bit root but still supporting a
1024-bit-to-2048-bit cross-signed intermediate results in no
improvement in security, but it does have a negative performance
impact because all the affected certificate chains grow by one
certificate. That's why I've been against removing the 1024-bit roots
while continuing to trust the 1024-bit-to-2048-bit cross-signing
certificates.

It is important to understand the cryptographic aspect of why 1024-bit
signatures are bad. The concern is that some parties may be able to
create valid signatures using a 1024-bit key even though they were never
the holders of the private key. The only way to protect against somebody
with this capability is to reject ANY 1024-bit signature, whether it is
in a cross-signing certificate, a root certificate, or
something else.
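The policy argued for here can be sketched with a toy data model (hypothetical names, not NSS): what matters is the size of every key that *produced* a signature in the chain, so a 2048-bit cross-signing certificate does not help if a 1024-bit root signed it.

```python
from dataclasses import dataclass

@dataclass
class ChainCert:
    subject: str
    key_bits: int  # size of this certificate's own public key

def only_strong_signatures(chain, min_bits=2048):
    """chain[0] is the end-entity; chain[i+1] issued chain[i], and the
    final cert is a self-signed root. Every key that made a signature
    in the chain must be at least min_bits."""
    signing_certs = chain[1:] + [chain[-1]]  # issuers, plus the self-signing root
    return all(c.key_bits >= min_bits for c in signing_certs)

# 2048-bit cross-signing cert, but signed by a 1024-bit root:
weak = only_strong_signatures(
    [ChainCert("EE", 2048), ChainCert("X-Sign", 2048), ChainCert("Root", 1024)])
# Same chain re-anchored on a 2048-bit root:
strong = only_strong_signatures(
    [ChainCert("EE", 2048), ChainCert("X-Sign", 2048), ChainCert("Root", 2048)])
```

The first chain fails the check even though every certificate the server serves carries a 2048-bit key, which is the point being made about cross-signed intermediates under a 1024-bit root.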

If it is not reasonable to reject all 1024-bit signatures, then I'd
suggest trying to find a different approach for gradually removing
support for 1024-bit signatures. For example, Firefox could keep
trusting 1024-bit signatures for most websites, but start rejecting
them for HSTS sites and for key-pinned websites. This would provide a
useful level of protection for those sites at least, even if it
wouldn't afford any protection for other websites. That would be an
improvement over the current change, which seems to hurt compatibility
and/or performance without improving security for any websites.

Cheers,
Brian

David E. Ross

Jul 30, 2014, 6:14:39 PM
to mozilla-dev-s...@lists.mozilla.org
I do indeed have a security concern over this.

If a server's operator is lax in updating intermediate certificates or
(worse) not installing necessary intermediate certificates, that
indicates poor or non-existent attention to necessary security
procedures. That raises the question: What other security lapses exist
for that server?

Having a browser automatically supply a missing intermediate certificate
or replacing an incorrect one with the correct one effectively hides
other possible security lapses.

--
David E. Ross

The Crimea is Putin's Sudetenland.
The Ukraine will be Putin's Czechoslovakia.
See <http://www.rossde.com/editorials/edtl_PutinUkraine.html>.

Ondrej Mikle

Jul 30, 2014, 7:17:06 PM
to dev-secur...@lists.mozilla.org
On 07/30/2014 09:17 PM, Kathleen Wilson wrote:
> On 7/28/14, 11:00 AM, Brian Smith wrote:
>> I suggest that, instead of including the cross-signing certificates in
>> the NSS certificate database, the mozilla::pkix code should be changed
>> to look up those certificates when attempting to find them through NSS
>> fails. That way, Firefox and other products that use NSS will have a
>> lot more flexibility in how they handle the compatibility logic.
>
>
> There's already a bug for fetching missing intermediates:
> https://bugzilla.mozilla.org/show_bug.cgi?id=399324
>
> I think it would help with removal of roots (the remaining 1024-bit roots,
> non-BR-compliant roots, SHA1 roots, retired roots, etc.), and IE has been
> supporting this capability for a long time.
>
> So, Should we do this?
> Does it introduce security concerns?

It definitely introduces non-deterministic behavior controlled by a potential
MitM attacker, in addition to being hard to debug.

Example:

1. client requests the certificate indicated via AIA over http (common in IE)
2. MitM attacker supplies one that triggers a known bug - the attacker can
control what is being exploited
3. remote code execution, or chain validation success that shouldn't happen

I personally think that factorization of 1024-bit RSA roots or SHA-1 collisions
is much harder than exploiting certificate validation code.

Regards,
Ondrej

Ondrej Mikle

Jul 30, 2014, 7:29:19 PM
to dev-secur...@lists.mozilla.org
On 07/31/2014 01:17 AM, Ondrej Mikle wrote:
> On 07/30/2014 09:17 PM, Kathleen Wilson wrote:

[...]

>> So, Should we do this?
>> Does it introduce security concerns?
>
> It definitely introduces non-deterministic behavior controlled by a potential
> MitM attacker, in addition being hard to debug.
>
> Example:
>
> 1. client requests certificate indicated via AIA over http (common in IE)
> 2. MitM attacker supplies one that triggers known bug - attacker can control
> what is being exploited
> 3. remote code execution or chain validation success that shouldn't happen
>
> I personally think that factorization of 1024-bit RSA roots or SHA-1 collisions
> is much harder than exploiting certificate validation code.

I should probably add that a MitM attacker like an ISP can generally tamper with
certificate chains sent in TLS handshake anyway, but AIA fetching would allow an
adversary more hops away on a different continent to tamper with the connection.

Ondrej

Kurt Roeckx

Jul 31, 2014, 3:54:45 AM
to mozilla-dev-s...@lists.mozilla.org
On 2014-07-31 01:29, Ondrej Mikle wrote:
> I should probably add that a MitM attacker like an ISP can generally tamper with
> certificate chains sent in TLS handshake anyway, but AIA fetching would allow an
> adversary more hops away on a different continent to tamper with the connection.

How would an ISP tamper with the certificates sent in TLS without TLS
giving an error that the packets were tampered with?

I understand that it's possible with SSL 3.0 but not with TLS 1.0.


Kurt

Hubert Kario

Jul 31, 2014, 5:46:29 AM
to Brian Smith, mozilla-dev-s...@lists.mozilla.org, Kai Engert
----- Original Message -----
> From: "Brian Smith" <br...@briansmith.org>
> To: "Kai Engert" <ka...@kuix.de>
> Cc: mozilla-dev-s...@lists.mozilla.org
> Sent: Wednesday, 30 July, 2014 11:02:46 PM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On Mon, Jul 28, 2014 at 12:05 PM, Kai Engert <ka...@kuix.de> wrote:
> > On Mon, 2014-07-28 at 21:02 +0200, Kai Engert wrote:
> >> On Mon, 2014-07-28 at 11:00 -0700, Brian Smith wrote:
> >> > I suggest that, instead of including the cross-signing certificates in
> >> > the NSS certificate database, the mozilla::pkix code should be changed
> >> > to look up those certificates when attempting to find them through NSS
> >> > fails.
> >>
> >> We are looking for a way to fix all applications that use NSS, not just
> >> Firefox. Only Firefox uses the mozilla::pkix library.
> >
> > Actually, including intermediates in the Mozilla root CA list should
> > even help applications that use other crypto toolkits (not just NSS).
>
> It depends on your definition of "help." I assume the goal is to
> encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
> or ECDSA-P-256 signatures. If so, then including the intermediates in
> NSS so that all NSS-based applications can use them will be
> counterproductive to the goal, because when the system administrator
> is testing his server using those other NSS-based tools, he will not
> notice that he is depending on 1024-bit certificates (cross-signed or
> root) because everything will work fine.

The point is not to ship a 1024 bit cert, the point is to ship a 2048 bit cert.


So for sites that present a chain like this:

2048 bit host cert <- 2048 bit old sub CA <- 1024 bit root CA

we can find a certificate chain like this:

2048 bit host cert <- 2048 bit new cross-signed sub CA <- 2048 bit root CA

where the cross-signed sub CA is shipped by NSS
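A toy sketch of that substitution (all names and data structures here are hypothetical; real path building in NSS and mozilla::pkix is considerably more involved):

```python
# Toy model of certificate path building: an intermediate is a dict with
# subject, issuer and key size; a root has subject and key size.
def build_chain(host_issuer, intermediates, trusted_roots):
    """Return (intermediate, root) pairs that reach a trusted root,
    strongest path first (a path is as weak as its weakest CA key)."""
    paths = [(sub, root)
             for sub in intermediates if sub["subject"] == host_issuer
             for root in trusted_roots if root["subject"] == sub["issuer"]]
    paths.sort(key=lambda p: min(p[0]["key_bits"], p[1]["key_bits"]),
               reverse=True)
    return paths

old_sub = {"subject": "Sub CA", "issuer": "Old Root", "key_bits": 2048}
xsigned = {"subject": "Sub CA", "issuer": "New Root", "key_bits": 2048}
new_root = {"subject": "New Root", "key_bits": 2048}

# With the 1024-bit "Old Root" removed from the trust store, a path still
# exists via the cross-signed sub CA chaining to the 2048-bit root.
paths = build_chain("Sub CA", [old_sub, xsigned], [new_root])
assert paths and paths[0][1]["subject"] == "New Root"
```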

Varga Viktor

Jul 31, 2014, 6:00:15 AM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
Hello,

There are a few different views:

a) It weakens security, because of lazy administrators who don't install the intermediate.
At my workplace, we always try to tell IT people to install it.

b) It strengthens security, because the browser fills the gap via the AIA URL instead of the end user clicking through some alarm without reading it.

c) For personal certificates (used for authentication or form signing), it's hard to provide a good way to import the full chain, so this can be useful when importing a personal certificate.

And finally, do not forget that these problems also apply to Thunderbird, because a lot of SSL certificates are used for mail servers.

I think this behaviour should be copied, but perhaps with a confirmation popup, or a config option, for those who want to control it.

üdvözlettel/best regards:

Varga Viktor
Netlock Kft.
Üzemeltetési Vezető
IT Service Executive

-----Original Message-----
From: dev-security-policy [mailto:dev-security-policy-bounces+varga.viktor=netlo...@lists.mozilla.org] On Behalf Of Kathleen Wilson
Sent: Wednesday, July 30, 2014 9:17 PM
To: mozilla-dev-s...@lists.mozilla.org
Subject: Dynamic Path Resolution in AIA CA Issuers

On 7/28/14, 11:00 AM, Brian Smith wrote:
> I suggest that, instead of including the cross-signing certificates in
> the NSS certificate database, the mozilla::pkix code should be changed
> to look up those certificates when attempting to find them through NSS
> fails. That way, Firefox and other products that use NSS will have a
> lot more flexibility in how they handle the compatibility logic.


There's already a bug for fetching missing intermediates:
https://bugzilla.mozilla.org/show_bug.cgi?id=399324

I think it would help with removal of roots (the remaining 1024-bit roots, non-BR-compliant roots, SHA1 roots, retired roots, etc.), and IE has been supporting this capability for a long time.

So, Should we do this?
Does it introduce security concerns?

Kathleen

_______________________________________________
dev-security-policy mailing list
dev-secur...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy

_______________________________________________________________________

This email has been scanned for viruses and SPAM by the filter:mail MessageLabs System. More information: http://www.filtermax.hu


Hubert Kario

Jul 31, 2014, 5:54:23 AM
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Kurt Roeckx" <ku...@roeckx.be>
> To: mozilla-dev-s...@lists.mozilla.org
> Sent: Thursday, 31 July, 2014 9:54:45 AM
> Subject: Re: Dynamic Path Resolution in AIA CA Issuers
>
> On 2014-07-31 01:29, Ondrej Mikle wrote:
> > I should probably add that a MitM attacker like an ISP can generally tamper
> > with
> > certificate chains sent in TLS handshake anyway, but AIA fetching would
> > allow an
> > adversary more hops away on a different continent to tamper with the
> > connection.
>
> How would an ISP tamper with the certificates send in TLS without TLS
> giving an error that the packets were tampered with?

Because until you parse the certificates and validate the signatures you
have no way of knowing if the packets you receive are coming from the
server or the MitM box at the ISP.

--
Regards,
Hubert Kario

Brian Smith

Jul 31, 2014, 10:37:44 AM
to Hubert Kario, mozilla-dev-s...@lists.mozilla.org, Kai Engert
Hubert Kario <hka...@redhat.com> wrote:
> Brian Smith wrote:
>> It depends on your definition of "help." I assume the goal is to
>> encourage websites to migrate from 1024-bit signatures to RSA-2048-bit
>> or ECDSA-P-256 signatures. If so, then including the intermediates in
>> NSS so that all NSS-based applications can use them will be
>> counterproductive to the goal, because when the system administrator
>> is testing his server using those other NSS-based tools, he will not
>> notice that he is depending on 1024-bit certificates (cross-signed or
>> root) because everything will work fine.
>
> The point is not to ship a 1024 bit cert, the point is to ship a 2048 bit cert.
>
> So for sites that present a chain like this:
>
> 2048 bit host cert <- 2048 bit old sub CA <- 1024 bit root CA
>
> we can find a certificate chain like this:
>
> 2048 bit host cert <- 2048 bit new cross-signed sub CA <- 2048 bit root CA
>
> where the cross-signed sub CA is shipped by NSS

Sure. I have no objection to including cross-signing certificates
where both the subject public key and the issuer public key are 2048
bits (or more). I am objecting only to including any cross-signing
certificates of the 1024-bit-subject-signed-by-2048-bit-issuer
variety. It has been a long time since we had the initial
conversation, but IIRC both types of cross-signing certificates exist.

Cheers,
Brian

Ondrej Mikle

Jul 31, 2014, 11:15:58 AM
to dev-secur...@lists.mozilla.org
On 07/31/2014 09:54 AM, Kurt Roeckx wrote:
> On 2014-07-31 01:29, Ondrej Mikle wrote:
>> I should probably add that a MitM attacker like an ISP can generally tamper with
>> certificate chains sent in TLS handshake anyway, but AIA fetching would allow an
>> adversary more hops away on a different continent to tamper with the connection.
>
> How would an ISP tamper with the certificates send in TLS without TLS giving an
> error that the packets were tampered with?
>
> I understand that it's possible with SSL 3.0 but not with TLS 1.0.

IIRC the Server Certificate message in the handshake protocol is not authenticated.
Thus one could exchange an intermediate certificate for a different one
with an identical public key but, for example, a different validity period or
extensions (if such a certificate exists; CAs do reissue intermediates).

It's basically like finding another path with a path-building algorithm. Under
normal circumstances you'd get only another valid chain or a validation error.
Not very useful unless there is a way to trigger a validation bug this way.

Ondrej

David E. Ross

Jul 31, 2014, 1:24:34 PM
to mozilla-dev-s...@lists.mozilla.org
Furthermore, automatically providing an intermediate certificate when
none or a bad one is found on the server only encourages further lax
security procedures.

Kurt Roeckx

Jul 31, 2014, 1:37:31 PM
to Ondrej Mikle, dev-secur...@lists.mozilla.org
On Thu, Jul 31, 2014 at 05:15:58PM +0200, Ondrej Mikle wrote:
> On 07/31/2014 09:54 AM, Kurt Roeckx wrote:
> > On 2014-07-31 01:29, Ondrej Mikle wrote:
> >> I should probably add that a MitM attacker like an ISP can generally tamper with
> >> certificate chains sent in TLS handshake anyway, but AIA fetching would allow an
> >> adversary more hops away on a different continent to tamper with the connection.
> >
> > How would an ISP tamper with the certificates send in TLS without TLS giving an
> > error that the packets were tampered with?
> >
> > I understand that it's possible with SSL 3.0 but not with TLS 1.0.
>
> IIRC the Server Certificate message in hanshake protocol is not authenticated.
> Thus one could exchange an intermediate certificate(s) for a different one
> having identical public key etc, but for example different validity period or
> extensions (if such certificate exists, but CA reissue intermediates).

As far as I understand, the Finished message will detect that
the certificates have been changed. You might be validating
some other chain first, but you'll end up with an error.


Kurt

Kathleen Wilson

Jul 31, 2014, 4:17:23 PM
to mozilla-dev-s...@lists.mozilla.org
On 7/25/14, 3:11 PM, Kathleen Wilson wrote:
> == Background ==
> We have begun removal of 1024-bit roots with the following 2 bugs:
> https://bugzilla.mozilla.org/show_bug.cgi?id=936304
> -- Remove Entrust.net, GTE CyberTrust, and ValiCert 1024-bit root
> certificates from NSS
> https://bugzilla.mozilla.org/show_bug.cgi?id=986005
> -- Turn off SSL and Code Signing trust bits for VeriSign 1024-bit roots
>
> There are two more sets of 1024-bit root changes that will need to follow:
> https://bugzilla.mozilla.org/show_bug.cgi?id=986014
> -- Remove Thawte 1024-bit roots
> https://bugzilla.mozilla.org/show_bug.cgi?id=986019
> -- Turn off SSL and Code Signing trust bits for Equifax 1024-bit roots
>
> == Problem ==
> Some web server administrators have not updated their web servers to
> provide a new intermediate certificate signed by a newer root, even
> though the CA has implored them to do so. For those websites, users may
> get the Untrusted Connection error when the old root is removed.
>
> == For this batch of root changes ==
>
> We are still investigating if we should use this possible solution for
> this batch of root changes. Please stay tuned and continue to share data
> and test results that should be considered.
>


Here's what we are doing for this first batch of root changes that was
made in NSS 3.16.3, and is currently in Firefox 32, which is in Beta.

NSS 3.16.4 will be created and included in Firefox 32. It will only
contain these two changes:

1) https://bugzilla.mozilla.org/show_bug.cgi?id=1045189 -- Add the
2048-bit version of the "USERTrust Legacy Secure Server CA" intermediate
cert to NSS, this intermediate cert expires in November 2015.

2) https://bugzilla.mozilla.org/show_bug.cgi?id=1046343 -- Backout
removal of the 1024-bit GTE CyberTrust Global Root


I have filed another bug to make a new plan for migration off of the
1024-bit GTE CyberTrust Global Root, and then remove it.
https://bugzilla.mozilla.org/show_bug.cgi?id=1047011

Thanks,
Kathleen

Ondrej Mikle

Jul 31, 2014, 7:31:51 PM
to Kurt Roeckx, dev-secur...@lists.mozilla.org
On 07/31/2014 07:37 PM, Kurt Roeckx wrote:
> On Thu, Jul 31, 2014 at 05:15:58PM +0200, Ondrej Mikle wrote:
>> On 07/31/2014 09:54 AM, Kurt Roeckx wrote:
>>> On 2014-07-31 01:29, Ondrej Mikle wrote:
>>>> I should probably add that a MitM attacker like an ISP can generally tamper with
>>>> certificate chains sent in TLS handshake anyway, but AIA fetching would allow an
>>>> adversary more hops away on a different continent to tamper with the connection.
>>>
>>> How would an ISP tamper with the certificates send in TLS without TLS giving an
>>> error that the packets were tampered with?
>>>
>>> I understand that it's possible with SSL 3.0 but not with TLS 1.0.
>>
>> IIRC the Server Certificate message in hanshake protocol is not authenticated.
>> Thus one could exchange an intermediate certificate(s) for a different one
>> having identical public key etc, but for example different validity period or
>> extensions (if such certificate exists, but CA reissue intermediates).
>
> As far as I understand, the Finished message will detect that
> the certificates have been changed. You might be validating
> some other chain first, but you'll end up with an error.

This is interesting. I checked TLS 1.2 (RFC 5246) to see whether the Finished
message should work this way, but I'm not sure. I think you mean that
"Hash(handshake_messages)" should detect this, right? But it's still just a
hash, thus again not authenticated on its own and malleable by a MitM attacker.

Ondrej

Ryan Sleevi

Jul 31, 2014, 7:52:28 PM
to Ondrej Mikle, dev-secur...@lists.mozilla.org, Kurt Roeckx
On Thu, July 31, 2014 4:31 pm, Ondrej Mikle wrote:
> This is interesting. I checked TLS 1.2 RFC 5246 whether Finished message
> should
> work this way, but I'm not sure. I think you mean that
> "Hash(handshake_messages)" should detect this, right? But it's still just
> hash,
> thus again not authenticated and malleable by a MitM attacker.
>
> Ondrej

I suspect you're both talking past each other.

The set of certificates is protected by the Finished message. In an RSA
key exchange, the Premaster Secret is encrypted to the peer's public key,
the master secret is derived from the PMS, and the Finished messages act
as key confirmation. If the peer's Finished message doesn't align, they
either tampered with the message, or don't possess the key, ergo the
connection is dropped.
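To make the key-confirmation point concrete, here is a rough sketch of how TLS 1.2 computes the 12-byte Finished verify_data (simplified to the SHA-256 PRF of RFC 5246; a sketch, not a full TLS implementation):

```python
import hashlib
import hmac

def p_sha256(secret: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.2 P_SHA256 expansion function (RFC 5246, section 5)."""
    out = b""
    a = seed  # A(0)
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()            # A(i)
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def verify_data(master_secret: bytes, finished_label: bytes,
                handshake_messages: bytes) -> bytes:
    """12-byte Finished.verify_data (RFC 5246, section 7.4.9)."""
    transcript_hash = hashlib.sha256(handshake_messages).digest()
    return p_sha256(master_secret, finished_label + transcript_hash, 12)

# Tampering with any handshake message (e.g. the Certificate message)
# changes the transcript hash, so the peer's verify_data no longer matches -
# but only a peer that knows the master secret can compute a valid value.
ms = b"\x01" * 48
original = verify_data(ms, b"server finished", b"...Certificate...")
tampered = verify_data(ms, b"server finished", b"...Certif1cate...")
assert len(original) == 12 and original != tampered
```

This is why a swapped certificate is eventually detected at the Finished message, yet the certificate bytes themselves may already have been parsed and validated before that point.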

That doesn't prevent an attacker from forcing a client to validate a
'hostile' set of certificates, as the Certificate message from the server
is sent, and (because the Client Certificate message, if any, follows the
Server Certificate message), the assumption is that the client will
immediately validate the certificate upon receipt of the Certificate
message - before any confirmation of keys (or possession of keys) has
occurred. Ergo, a hostile ISP could cause TLS handshakes to have 'hostile'
certificates, exploiting the TLS stack, and this would happen prior to any
cryptographic confirmations.

Different clients do this differently - some don't validate certificates
until after the Finished message (most notably, SChannel didn't for some
time in some cases). Likewise, Chrome validates after the Finished
message for non-False Start, and before the Finished message but ALSO
before any app data is sent for False Start.

Brian's point still stands, though. Having a full-stack HTTP client
necessary for AIA chasing *is* a big attack surface, and *has* caused real
security bugs in the past, *and* serves to mask real misconfigurations.

AIA chasing's value is primarily in heterogeneous PKIs, of which the
Internet "shouldn't" be one (but which things like the Federal Bridge PKI or
the GRID PKI are), and primarily for platforms where the root store
cannot update over time (which NSS/Firefox can and does) or to work around
CAs with poorly designed PKIs (of which there are many).

I agree whole-heartedly with Brian that AIA chasing is one of those
"workarounds for the Internet" that makes everything harder to work with
and less predictable, impinges on performance, and largely should be
unnecessary for the issues that NSS is concerned about.

Ondrej Mikle

Jul 31, 2014, 8:09:04 PM
to ryan-mozde...@sleevi.com, dev-secur...@lists.mozilla.org, Kurt Roeckx
On 08/01/2014 01:52 AM, Ryan Sleevi wrote:
> On Thu, July 31, 2014 4:31 pm, Ondrej Mikle wrote:
>> This is interesting. I checked TLS 1.2 RFC 5246 whether Finished message
>> should
>> work this way, but I'm not sure. I think you mean that
>> "Hash(handshake_messages)" should detect this, right? But it's still just
>> hash,
>> thus again not authenticated and malleable by a MitM attacker.

[...]

> Different clients do this differently - some don't validate certificates
> until after the Finished message (most notably, SChannel didn't for some
> time under some cases. Likewise, Chrome validates after the Finished
> message for non-False Start, and before the Finished message but ALSO
> before any app data is sent for the False-Start).

[...]

> I agree whole-heartedly with Brian that AIA chasing is one of those
> "workarounds for the Internet" that makes everything harder to work with
> and less predictable, impinges performance, and largely should be
> unnecessary for the issues that NSS is concerned about.

Thanks for the insight on client-side validation implementations. I also agree
that AIA chasing makes things less predictable.

Ondrej

Hubert Kario

Aug 4, 2014, 10:03:13 AM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Hubert Kario" <hka...@redhat.com>
>
> ----- Original Message -----
> > From: "Kathleen Wilson" <kwi...@mozilla.com>
> >
> > == For this batch of root changes ==
> >
> > We are still investigating if we should use this possible solution for
> > this batch of root changes. Please stay tuned and continue to share data
> > and test results that should be considered.

So I've analysed the data.

I simulated removal of 11 roots mentioned in bugs linked to https://bugzilla.mozilla.org/show_bug.cgi?id=1021967:

Entrust.net Secure Server Certification Authority
99:A6:9B:E6:1A:FE:88:6B:4D:2B:82:00:7C:B8:54:FC:31:7E:15:39
GTE CyberTrust Global Root
97:81:79:50:D8:1C:96:70:CC:34:D8:09:CF:79:44:31:36:7E:F4:74
ValiCert Class 1 Policy Validation Authority
E5:DF:74:3C:B6:01:C4:9B:98:43:DC:AB:8C:E8:6A:81:10:9F:E4:8E
ValiCert Class 2 Policy Validation Authority
31:7A:2A:D0:7F:2B:33:5E:F5:A1:C3:4E:4B:57:E8:B7:D8:F1:FC:A6
ValiCert Class 3 Policy Validation Authority
69:BD:8C:F4:9C:D3:00:FB:59:2E:17:93:CA:55:6A:F3:EC:AA:35:FB
NetLock Uzleti (Class B) Tanusitvanykiado
87:9F:4B:EE:05:DF:98:58:3B:E3:60:D6:33:E7:0D:3F:FE:98:71:AF
NetLock Uzleti (Class C) Tanusitvanykiado
E3:92:51:2F:0A:CF:F5:05:DF:F6:DE:06:7F:75:37:E1:65:EA:57:4B
VeriSign, Inc. / Class 3 Public Primary Certification Authority
A1:DB:63:93:91:6F:17:E4:18:55:09:40:04:15:C7:02:40:B0:AE:6B
74:2C:31:92:E6:07:E4:24:EB:45:49:54:2B:E1:BB:C5:3E:61:74:E2
Sociedad Cameral de Certificacion Digital
CB:A1:C5:F8:B0:E3:5E:B8:B9:45:12:D3:F9:34:A2:E9:06:10:D3:36
TDC Internet Root CA
21:FC:BD:8E:7F:6C:AF:05:1B:D1:B3:43:EC:A8:E7:61:47:F2:0F:8A

The data (and as such, the certificates) were collected between
11th and 19th of July 2014 and did include Alexa top 1 million domain
names as of 10th of July.

With the above certificates:

Server provided chains Count Percent
-------------------------+---------+-------
complete 359484 61.6908
incomplete 29543 5.0699
untrusted 193692 33.2393

Without the above certificates:

Server provided chains Count Percent
-------------------------+---------+-------
complete 359265 61.6532
incomplete 29663 5.0904
untrusted 193791 33.2563

Change (without-with) Count
-------------------------+---------
complete -219
incomplete +120
untrusted +99

Attached are the lists of those servers.

(disclaimer: trust chain building did ignore host names, extended key
usage limitations, and used OpenSSL for chain building)


I've also gathered some extra statistics.

The use of key sizes in CA certificates (count is number of chains that use them):

With the 1024 bit roots:

Chains with CA key Count Percent
-------------------------+---------+-------
ECDSA 256 2 0.0004
ECDSA 384 2 0.0004
RSA 1024 1776 0.399
RSA 2045 1 0.0002
RSA 2048 443399 99.619
RSA 4096 16134 3.6248

Without the 1024 bit roots:

Chains with CA key Count Percent
-------------------------+---------+-------
ECDSA 256 2 0.0004
ECDSA 384 2 0.0004
RSA 1024 1539 0.3459
RSA 2045 1 0.0002
RSA 2048 443308 99.6229
RSA 4096 16121 3.6228

It has limited effect on the overall security of connections (if we assume an
80-bit level of security for both SHA-1 and 1024-bit RSA and ignore the
signature algorithm on the root certs):

With weak roots:

Eff. host cert chain LoS Count Percent
-------------------------+---------+-------
80 398413 89.5119
112 46680 10.4876
128 2 0.0004

Without weak roots:

Eff. host cert chain LoS Count Percent
-------------------------+---------+-------
80 398304 89.5093
112 46680 10.4902
128 2 0.0004
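The "effective level of security" figures above follow the usual weakest-link rule; a minimal sketch (the bit-strength mapping uses conventional NIST-style estimates and is an assumption, not taken from the message):

```python
# Approximate security level in bits per CA key type/size
# (conventional estimates; RSA-4096 capped at 128 here for simplicity).
STRENGTH = {("RSA", 1024): 80, ("RSA", 2048): 112, ("RSA", 4096): 128,
            ("ECDSA", 256): 128, ("ECDSA", 384): 192}

def chain_strength(ca_keys):
    """A chain is only as strong as its weakest CA key."""
    return min(STRENGTH[key] for key in ca_keys)

# Host cert signed by a 2048-bit sub CA under a 1024-bit root: still 80-bit.
assert chain_strength([("RSA", 2048), ("RSA", 1024)]) == 80
# After migrating to an all-2048-bit path: 112-bit.
assert chain_strength([("RSA", 2048), ("RSA", 2048)]) == 112
```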

Kai Engert

Aug 4, 2014, 6:24:33 PM
to Hubert Kario, mozilla-dev-s...@lists.mozilla.org, Kathleen Wilson
Hubert, what's your conclusion of your analysis?

Thanks
Kai


Kurt Roeckx

Aug 4, 2014, 6:44:13 PM
to Hubert Kario, mozilla-dev-s...@lists.mozilla.org, Kathleen Wilson
On Mon, Aug 04, 2014 at 10:03:13AM -0400, Hubert Kario wrote:
>
> So I've analysed the data.
>
> Change (without-with) Count
> -------------------------+---------
> complete -219
> incomplete +120
> untrusted +99

So this is on the order of 0.05% of the sites that would break?
I'm happy to ignore that and just do it.


Kurt

Kathleen Wilson

Aug 4, 2014, 6:52:10 PM
to mozilla-dev-s...@lists.mozilla.org
On 7/31/14, 1:17 PM, Kathleen Wilson wrote:
>
> Here's what we are doing for this first batch of root changes that was
> made in NSS 3.16.3, and is currently in Firefox 32, which is in Beta.
>
> NSS 3.16.4 will be created and included in Firefox 32. It will only
> contain these two changes:
>
> 1) https://bugzilla.mozilla.org/show_bug.cgi?id=1045189 -- Add the
> 2048-bit version of the "USERTrust Legacy Secure Server CA" intermediate
> cert to NSS, this intermediate cert expires in November 2015.
>

It turns out that including the 2048-bit version of the cross-signed
intermediate certificate does not help NSS at all. It would only help
Firefox, and would cause confusion.

https://bugzilla.mozilla.org/show_bug.cgi?id=1045189#c13
--
old intermediate:
Subject: "CN=USERTrust Legacy Secure Server CA,O=The USERTRUST
Network,L=Salt Lake City,ST=UT,C=US"
Issuer: "CN=Entrust.net Secure Server Certification Authority,OU=(c)
1999 Entrust.net Limited,OU=www.entrust.net/CPS incorp. by ref. (limits
liab.),O=Entrust.net,C=US"
Serial Number: 1184831531 (0x469f182b)
Validity:
Not Before: Thu Nov 26 20:33:13 2009
Not After : Sun Nov 01 04:00:00 2015

the replacement intermediate::
Subject: "CN=USERTrust Legacy Secure Server CA,O=The USERTRUST
Network,L=Salt Lake City,ST=UT,C=US"
Issuer: "CN=Entrust.net Certification Authority (2048),OU=(c) 1999
Entrust.net Limited,OU=www.entrust.net/CPS_2048 incorp. by ref. (limits
liab.),O=Entrust.net"
Serial Number: 946071786 (0x3863e8ea)
Validity:
Not Before: Thu Nov 26 20:05:16 2009
Not After : Sun Nov 01 05:00:00 2015

When given the choice of the above two certificates for chaining, which
use an identical subject, the legacy NSS chaining code will try only one
path. It will decide which certificate to use based on the validity
time/date. It will pick the one that looks newer.

Unfortunately, the time/date of the certificates don't indicate a clear
"winner".
--
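As a toy illustration of the "pick the one that looks newer" heuristic (assuming it compares notBefore; the real NSS logic is C code and considers more than that), using the dates quoted above:

```python
from datetime import datetime

# Toy model: among same-subject intermediates, pick the newest notBefore.
def pick_newer(certs):
    return max(certs, key=lambda c: c["not_before"])

old_ic = {"name": "signed by Entrust 1024-bit root",
          "not_before": datetime(2009, 11, 26, 20, 33, 13)}
new_ic = {"name": "signed by Entrust 2048-bit root",
          "not_before": datetime(2009, 11, 26, 20, 5, 16)}

# The replacement's notBefore is actually ~28 minutes *earlier*, so this
# heuristic keeps selecting the old intermediate.
assert pick_newer([old_ic, new_ic])["name"] == "signed by Entrust 1024-bit root"
```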

Kai tested adding the 2048-bit intermediate cert to NSS, and found
that the 1024-bit intermediate cert was still used.

It works for Firefox, because mozilla::pkix keeps trying until it finds
a certificate path that works.

Therefore, it looks like including the 2048-bit intermediate cert
directly in NSS would cause different behavior depending on where the
root store is being used. This would lead to confusion.

Kathleen

Brian Smith

Aug 4, 2014, 7:48:32 PM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org
On Mon, Aug 4, 2014 at 3:52 PM, Kathleen Wilson <kwi...@mozilla.com> wrote:
> It turns out that including the 2048-bit version of the cross-signed
> intermediate certificate does not help NSS at all. It would only help
> Firefox, and would cause confusion.

That isn't true, AFAICT.

> It works for Firefox, because mozilla::pkix keeps trying until it finds a
> certificate path that works.

NSS's libpkix also keeps trying until it finds a certificate path that
works. libpkix is used by Chromium and by Oracle's products (IIUC).

> Therefore, it looks like including the 2048-bit intermediate cert directly
> in NSS would cause different behavior depending on where the root store is
> being used. This would lead to confusion.

IMO, it isn't reasonable to make decisions like this based on the
behavior of the "classic" NSS path building. Really, the classic NSS
path building logic is obsolete, and anybody still using it is going
to have lots of compatibility problems due to this change and other
things, some of which are out of our control.

Cheers,
Brian

Brian Smith

Aug 4, 2014, 7:53:06 PM
to Hubert Kario, mozilla-dev-s...@lists.mozilla.org, Kathleen Wilson
On Mon, Aug 4, 2014 at 7:03 AM, Hubert Kario <hka...@redhat.com> wrote:
> it has limited effect on overall security of connection (if we assume 80 bit
> level of security for both SHA1 and 1024 bit RSA and ignore signature
> algorithm on the root certs):

Hi Hubert,

Thanks for doing that.

Note that because 1024-bit-to-2048-bit cross-signing certificates
exist for many CAs, removal of these roots alone isn't going to
have a big effect. Instead, removal of these roots is a
stepping stone. The next step is to stop accepting <2048-bit
*intermediate* CA certificates from the built-in trust anchors, even
if they chain to a trusted >=2048-bit root.

Cheers,
Brian

Rob Stradling

Aug 5, 2014, 4:34:35 AM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org, Kai Engert
Kathleen, to work around the "classic" NSS path building behaviour you
observed yesterday, we will issue another cross-certificate to
"USERTrust Legacy Secure Server CA", with a newer notBefore date, from
our "AddTrust External CA Root" built-in root.
Then, you can include this new cross-certificate in NSS instead of the
one issued by the 2048-bit Entrust built-in root.

We'll pull out all the stops and get this new cross-certificate issued
today.

Kai, just in case you were planning to tag NSS 3.16.4 within the next
few hours...please wait, if you can. :-)

On 04/08/14 23:52, Kathleen Wilson wrote:
> On 7/31/14, 1:17 PM, Kathleen Wilson wrote:
>>
>> Here's what we are doing for this first batch of root changes that was
>> made in NSS 3.16.3, and is currently in Firefox 32, which is in Beta.
>>
>> NSS 3.16.4 will be created and included in Firefox 32. It will only
>> contain these two changes:
>>
>> 1) https://bugzilla.mozilla.org/show_bug.cgi?id=1045189 -- Add the
>> 2048-bit version of the "USERTrust Legacy Secure Server CA" intermediate
>> cert to NSS, this intermediate cert expires in November 2015.
>>
>
> It turns out that including the 2048-bit version of the cross-signed
> intermediate certificate does not help NSS at all. It would only help
> Firefox, and would cause confusion.
>
> It works for Firefox, because mozilla::pkix keeps trying until it finds
> a certificate path that works.
>
> Therefore, it looks like including the 2048-bit intermediate cert
> directly in NSS would cause different behavior depending on where the
> root store is being used. This would lead to confusion.
>
> Kathleen
>
> _______________________________________________
> dev-security-policy mailing list
> dev-secur...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy
>

--
Rob Stradling
Senior Research & Development Scientist
COMODO - Creating Trust Online
Office Tel: +44.(0)1274.730505
Office Fax: +44.(0)1274.730909
www.comodo.com

COMODO CA Limited, Registered in England No. 04058690
Registered Office:
3rd Floor, 26 Office Village, Exchange Quay,
Trafford Road, Salford, Manchester M5 3EQ


Rob Stradling

Aug 5, 2014, 5:40:07 AM
to Kathleen Wilson, mozilla-dev-s...@lists.mozilla.org, Kai Engert
On 05/08/14 09:34, Rob Stradling wrote:
> Kathleen, to work around the "classic" NSS path building behaviour you
> observed yesterday, we will issue another cross-certificate to
> "USERTrust Legacy Secure Server CA", with a newer notBefore date, from
> our "AddTrust External CA Root" built-in root.
> Then, you can include this new cross-certificate in NSS instead of the
> one issued by the 2048-bit Entrust built-in root.
>
> We'll pull out all the stops and get this new cross-certificate issued
> today.
>
> Kai, just in case you were planning to tag NSS 3.16.4 within the next
> few hours...please wait, if you can. :-)

We've issued this new cross-certificate and I've attached it to bug 1045189.
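The workaround relies on the selection heuristic Rob alludes to: when several certificates share a subject, the classic builder prefers the one with the newest notBefore date, so re-issuing the cross-certificate with a fresher notBefore steers it onto the path that terminates at a built-in root. A minimal sketch of that preference (Python; the dates and dict model are illustrative, not taken from the real certificates):

```python
from datetime import datetime

def pick_issuer_cert(candidates):
    """Classic-NSS-style tie-break: among certs with the same subject,
    prefer the one with the newest notBefore date."""
    return max(candidates, key=lambda cert: cert["not_before"])

old_cross = {"issuer": "Entrust.net 2048-bit root",
             "not_before": datetime(2006, 1, 1)}
new_cross = {"issuer": "AddTrust External CA Root",
             "not_before": datetime(2014, 8, 5)}

# The freshly issued cross-certificate wins the tie-break.
print(pick_issuer_cert([old_cross, new_cross])["issuer"])
```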

Hubert Kario

Aug 5, 2014, 8:17:45 AM
to Kai Engert, mozilla-dev-s...@lists.mozilla.org, Kathleen Wilson
----- Original Message -----
> From: "Kai Engert" <ka...@kuix.de>
> To: "Hubert Kario" <hka...@redhat.com>
> Cc: "Kathleen Wilson" <kwi...@mozilla.com>, mozilla-dev-s...@lists.mozilla.org
> Sent: Tuesday, August 5, 2014 12:24:33 AM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> Hubert, what's your conclusion of your analysis?

Sorry, looks like mailman ate the attachments. I'll summarise them below.

Basically, the only sites that are severely affected are the ones that link up
to the GTE CyberTrust Global Root; there are 88 such sites. Since we're already
adding this root back, I won't quote that list here.

The other 11 sites affected (with the CAs they link up to) are:

191...@217.169.121.228 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
chevrole...@198.208.245.20 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
del...@143.166.83.38 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
dell....@143.166.83.38 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
impresase...@77.238.17.230 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
motu...@212.124.107.182 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
planchevro...@198.208.145.32 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
www.bas...@194.65.55.203 /C=CO/O=Sociedad Cameral de Certificaci\xC3\xB3n Digital - Certic\xC3\xA1mara S.A./CN=AC Ra\xC3\xADz Certic\xC3\xA1mara S.A.
www.chevr...@198.208.106.109 /L=ValiCert Validation Network/O=ValiCert, Inc./OU=ValiCert Class 2 Policy Validation Authority/CN=http://www.valicert.com//emailAddress=in...@valicert.com
www.e-fina...@213.13.158.241 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
www.theameric...@96.43.149.94 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority

So it doesn't look like the removal of these roots (in the upcoming version) has a large impact.

If we look at sites whose chains have gained incomplete status, 113 of them link
up to the Entrust.net roots and 16 link up to GTE CyberTrust roots (9 sites
had incomplete chains and then became untrusted).

So if we ship the new intermediate cert and the GTE root, only the above sites should be
affected.

Hubert Kario

Aug 5, 2014, 8:22:09 AM
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org, Kathleen Wilson
----- Original Message -----
> From: "Kurt Roeckx" <ku...@roeckx.be>
> To: "Hubert Kario" <hka...@redhat.com>
> Cc: "Kathleen Wilson" <kwi...@mozilla.com>, mozilla-dev-s...@lists.mozilla.org
> Sent: Tuesday, August 5, 2014 12:44:13 AM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
0.05% of sites doesn't mean 0.05% of users, especially if we look at local, not global,
user share. Some of them are high profile sites, e.g.:
volkswagen.at, dell.com, cadillaceurope.com, www.portaldasfinancas.gov.pt

Kurt Roeckx

Aug 5, 2014, 8:59:30 AM
to mozilla-dev-s...@lists.mozilla.org
On 2014-08-05 14:22, Hubert Kario wrote:
> 0.05% of sites doesn't mean 0.05% of users, especially if we look at local, not global,
> user share. Some of them are high profile sites, e.g.:
> volkswagen.at, dell.com, cadillaceurope.com, www.portaldasfinancas.gov.pt

Just because they have an HTTPS site doesn't mean that people actually
use it over HTTPS.

So, testing those sites:
- dell.com: doesn't work without www. It's not mentioned in your other
mail, but dell.cl and dell.com.br are. They all send the same
certificate, and it's not valid for those hostnames.
- cadillaceurope.com: the certificate is not valid without www.


Kurt




Hubert Kario

Aug 5, 2014, 10:30:10 AM
to Kurt Roeckx, mozilla-dev-s...@lists.mozilla.org
----- Original Message -----
> From: "Kurt Roeckx" <ku...@roeckx.be>
> To: mozilla-dev-s...@lists.mozilla.org
> Sent: Tuesday, August 5, 2014 2:59:30 PM
> Subject: Re: Removal of 1024 bit CA roots - interoperability
>
> On 2014-08-05 14:22, Hubert Kario wrote:
> > 0.05% of sites doesn't mean 0.05% of users, especially if we look at local,
> > not global,
> > user share. Some of them are high profile sites, e.g.:
> > volkswagen.at, dell.com, cadillaceurope.com, www.portaldasfinancas.gov.pt
>
> Just because they have an HTTPS site doesn't mean that people actually
> use it over HTTPS.
>
> So, testing those sites:
> - dell.com: doesn't work without www. It's not mentioned in your other
> mail, but dell.cl and dell.com.br are. They all send the same
> certificate, and it's not valid for those hostnames.
> - cadillaceurope.com: the certificate is not valid without www.

Sites that are listed without www appear that way because they resolve to the
same IP addresses as the sites that do have the www prefix; this is an
artefact of the way the scanning script works.
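The artefact can be illustrated with a toy deduplicator: a scanner that keeps one hostname per resolved address records whichever name it saw first, so a site can appear without the www prefix even when only the www name carries a valid certificate. A sketch with hypothetical data:

```python
def dedupe_by_address(hosts_to_ips):
    """Keep one hostname per IP address, the way a naive scanner might:
    later hostnames resolving to an already-seen address are dropped."""
    seen, kept = set(), []
    for host, ips in hosts_to_ips.items():
        if seen.isdisjoint(ips):
            kept.append(host)
            seen.update(ips)
    return kept

# dell.com and www.dell.com resolve to the same address, so only the
# first name scanned is recorded, even if the certificate is only
# valid for www.dell.com.
scan = {"dell.com": {"143.166.83.38"},
        "www.dell.com": {"143.166.83.38"}}
print(dedupe_by_address(scan))  # ['dell.com']
```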

Additionally, just because a site doesn't redirect to HTTPS doesn't mean that
it never uses it. It may use HTTPS only for administrator logins, only when
asking for personally identifiable information, only for specific subpages,
etc.