
Update to phasing out SHA-1 Certs


Kathleen Wilson

Oct 20, 2015, 2:39:58 PM
to mozilla-dev-s...@lists.mozilla.org
All,

We've posted a security blog to provide current status and direction on
phasing out SHA-1 certs:

https://blog.mozilla.org/security/2015/10/20/continuing-to-phase-out-sha-1-certificates/

Of particular note:

- In Firefox 43 we plan to show an overridable “Untrusted Connection”
error whenever Firefox encounters a SHA-1 based certificate that has
ValidFrom after Jan 1, 2016. This includes the web server certificate as
well as any intermediate certificates that it chains up to.

- We are re-evaluating when we should start rejecting all SHA-1 SSL
certificates (regardless of when they were issued). As we said before,
the current plan is to make this change on January 1, 2017. However, in
light of recent attacks on SHA-1, we are also considering the
feasibility of having a cut-off date as early as July 1, 2016.


Also of note:

- We do not currently plan to display an error if an OCSP response is
signed by a SHA-1 certificate.

- We do not currently plan to throw an error when SHA-1 S/MIME and
client authentication certificates are encountered.
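The Firefox 43 rule above can be sketched as follows. This is a hypothetical illustration, not Firefox's actual implementation; the function name and the tuple representation of the chain are invented for this sketch:

```python
from datetime import date

# Cut-off from the plan above: a SHA-1 signature on a cert whose
# validity begins on or after this date triggers the overridable
# "Untrusted Connection" error.
CUTOFF = date(2016, 1, 1)

def sha1_warning_needed(chain):
    """chain: list of (signature_algorithm, not_before) pairs for the
    web server certificate and any intermediates it chains up to."""
    return any(
        algo.lower().startswith("sha1") and not_before >= CUTOFF
        for algo, not_before in chain
    )
```

Per the rule, a sha1WithRSAEncryption server cert issued in March 2016 would warn, while the identical cert issued in 2015 would not; a SHA-256 leaf behind a 2016-issued SHA-1 intermediate would still warn.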


Kathleen

Kurt Roeckx

Oct 21, 2015, 4:18:13 AM
to mozilla-dev-s...@lists.mozilla.org
On 2015-10-20 20:39, Kathleen Wilson wrote:
> - We are re-evaluating when we should start rejecting all SHA-1 SSL
> certificates (regardless of when they were issued). As we said before,
> the current plan is to make this change on January 1, 2017. However, in
> light of recent attacks on SHA-1, we are also considering the
> feasibility of having a cut-off date as early as July 1, 2016.

I'm all for moving away from it as soon as possible. But from what I
understand the SHAppening is no reason to panic yet, so I currently
don't see a reason to change the schedule.


Kurt


s...@gmx.ch

Oct 21, 2015, 4:19:02 PM
to dev-secur...@lists.mozilla.org
There was also a plan for certificates with 'notAfter >= 2017-1-1'
(still valid in 2017+).
Chrome already shows a broken https icon for them.
See https://sha1-2017.badssl.com/

This was discussed in https://bugzilla.mozilla.org/show_bug.cgi?id=942515




On 21.10.2015 at 10:17, Kurt Roeckx wrote:
> On 2015-10-20 20:39, Kathleen Wilson wrote:
>> - We are re-evaluating when we should start rejecting all SHA-1 SSL
>> certificates (regardless of when they were issued). As we said before,
>> the current plan is to make this change on January 1, 2017. However, in
>> light of recent attacks on SHA-1, we are also considering the
>> feasibility of having a cut-off date as early as July 1, 2016.
>
> I'm all for moving away from it as soon as possible. But from what I
> understand the SHAppening is no reason to panic yet, so I currently
> don't see a reason to change the schedule.
>
>
> Kurt
>
>
> _______________________________________________
> dev-security-policy mailing list
> dev-secur...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-security-policy



Kurt Roeckx

Oct 22, 2015, 4:32:25 AM
to mozilla-dev-s...@lists.mozilla.org
On 2015-10-21 22:18, s...@gmx.ch wrote:
> There was also a plan for certificates with 'notAfter >= 2017-1-1'
> (still valid in 2017+).
> Chrome already shows a broken https icon for them.
> See https://sha1-2017.badssl.com/
>
> This was discussed in https://bugzilla.mozilla.org/show_bug.cgi?id=942515

So my understanding is that with Mozilla's current plan a SHA-1
certificate with NotBefore < 2016-01-01 (and NotAfter >= 2017-01-01)
will not get any "Untrusted Connection".

It would be nice if there were some indication of SHA-1 certificates
other than in the Web Console, which nobody will see.


Kurt

s...@gmx.ch

Nov 5, 2015, 2:36:04 PM
to dev-secur...@lists.mozilla.org
It seems that we are going to untrust SHA-1 generally on July 1, 2016
[1]. Do we already have a bug number for this? I can't find any.
I think certificates with 'notAfter >= 2017-7-1' should get a triangle
instead of the lock icon from now on.

[1]
https://blog.mozilla.org/security/2015/10/20/continuing-to-phase-out-sha-1-certificates/


On 22.10.2015 at 10:31, Kurt Roeckx wrote:
> On 2015-10-21 22:18, s...@gmx.ch wrote:
>> There was also a plan for certificates with 'notAfter >= 2017-1-1'
>> (still valid in 2017+).
>> Chrome already shows a broken https icon for them.
>> See https://sha1-2017.badssl.com/
>>
>> This was discussed in
>> https://bugzilla.mozilla.org/show_bug.cgi?id=942515
>
> So my understanding is that with Mozilla's current plan a SHA-1
> certificate with NotBefore < 2016-01-01 (and NotAfter >= 2017-01-01)
> will not get any "Untrusted Connection".
>
> It would be nice that there was some other indication about SHA-1
> certificates other than in the Web Console that nobody will see.
>
>

Kathleen Wilson

Nov 5, 2015, 3:27:45 PM
to mozilla-dev-s...@lists.mozilla.org
On 11/5/15 11:34 AM, s...@gmx.ch wrote:
> It seems that we are going to untrust SHA-1 generally on July 1, 2016
> [1]. Do we already have a bug number for this?


https://bugzilla.mozilla.org/show_bug.cgi?id=942515



> I think certificates with 'notAfter >= 2017-7-1' should get a triangle
> instead of the lock icon from now.
>


https://bugzilla.mozilla.org/show_bug.cgi?id=1183718



Lothsahn

Nov 6, 2015, 4:49:33 PM
to mozilla-dev-s...@lists.mozilla.org
If you're going to be phasing out SHA1, can Mozilla support SHA256 and GCM for DHE?
https://bugzilla.mozilla.org/show_bug.cgi?id=1084554

Rick Andrews

Nov 6, 2015, 7:15:31 PM
to mozilla-dev-s...@lists.mozilla.org
> - We are re-evaluating when we should start rejecting all SHA-1 SSL
> certificates (regardless of when they were issued). As we said before,
> the current plan is to make this change on January 1, 2017. However, in
> light of recent attacks on SHA-1, we are also considering the
> feasibility of having a cut-off date as early as July 1, 2016.

I think that pulling in this date will create chaos for some large enterprises who are already scrambling to phase out SHA-1 by the end of 2016. They had been counting on using all of 2016 to complete their migration. It wouldn't just be an inconvenience - it would make an already-difficult situation nearly impossible.

And I'll point out that Microsoft is considering the same thing but with a different date - June 1, 2016. Would you at least consider collaborating with other browser vendors to agree on the same date?


Bruce

Nov 10, 2015, 2:28:17 PM
to mozilla-dev-s...@lists.mozilla.org
I agree with Rick that the date should not change. Currently, CAs will stop issuing SHA-1 as of 1 January 2016. This will largely mitigate the collision attack, similar to the previous MD5 attack. The input from the SHAppening makes it arguable that mitigating collisions by 1 January 2016 was probably already too late, but there is not much we can do about that decision.

From what I understand, we are not yet concerned about preimage or second-preimage attacks on SHA-1, so this would not be a reason to change the date.

It would be great to understand what vulnerability we are trying to mitigate before changing the date when SHA-1 will be rejected.

Thanks, Bruce.

Richard Barnes

Nov 10, 2015, 3:42:17 PM
to Bruce, mozilla-dev-s...@lists.mozilla.org
To be clear, we have not decided on any hard cutoff date for SHA-1, much
less made it July 1.

January 1, 2017 and July 1, 2016 are both options under consideration. I
expect that we will try to come to a final decision some time in early
2016, once we have more data about how quickly people seem to be retiring
SHA-1. Right now, it's still used in around 10% of Firefox certificate
verifications in release, but the trend line is downward. The beta
population went from ~10.7% usage to ~8.2% usage over the course of October.

On Tue, Nov 10, 2015 at 2:28 PM, Bruce <bruce....@entrust.com> wrote:

> On Friday, November 6, 2015 at 7:15:31 PM UTC-5, Rick Andrews wrote:
> I agree with Rick that I don't think the date should change. Currently the
> CAs will stop issuing SHA-1 as of 1 January 2016. This will largely
> mitigate the collision attack similar to the previous MD5 attack. The input
> from SHAppening, makes it arguable that mitigating collision on 1 January
> 2016 was probably too late, but there is not much we can do about that
> decision.
>
> From what I understand we are not yet concerned about preimage or
> second-preimage attacks on SHA-1, so this would not be a reason to change
> the date.
>
> It would be great to understand what vulnerability we are trying to
> mitigate before changing the date when SHA-1 will be rejected.
>
> Thanks, Bruce.

gdelg...@gmail.com

Jan 17, 2016, 2:13:00 PM
to mozilla-dev-s...@lists.mozilla.org

it's early 2016 and wondering if a decision has been made on the dates?

s...@gmx.ch

Jan 17, 2016, 2:20:42 PM
to dev-secur...@lists.mozilla.org
We failed because of MITM certs:
https://blog.mozilla.org/security/2016/01/06/man-in-the-middle-interfering-with-increased-security/

But you can set security.pki.sha1_enforcement_level manually.
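For reference, a minimal user.js sketch for setting that pref. The value semantics are an assumption here (1 meaning "reject all SHA-1-signed server certificates") and are version-dependent, so check them against your Firefox release before relying on this:

```js
// Hypothetical user.js fragment. Assumption: on this Firefox version,
// security.pki.sha1_enforcement_level = 1 rejects all SHA-1-signed
// server certificates; verify the value meanings for your release.
user_pref("security.pki.sha1_enforcement_level", 1);
```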


On 16.01.2016 at 00:16, gdelg...@gmail.com wrote:
> it's early 2016 and wondering if a decision has been made on the dates?

Richard Barnes

Jan 18, 2016, 10:19:32 AM
to s...@gmx.ch, dev-secur...@lists.mozilla.org
"Failed" might be a bit strong :) We had a temporary setback.

Like the blog post says, we're working on more precisely characterizing how
widespread and how broken these middleboxes are, before taking steps to
re-enable the SHA-1 restrictions. I still think we're on track for turning
off SHA-1 entirely (together with the other browsers) sometime around EOY,
but obviously there's a bit more uncertainty now.

One thing that has been proposed is to have an exception for local roots,
i.e., to let non-default trust anchors continue to use SHA-1 for some more
time. What do folks here think about that idea?


On Sun, Jan 17, 2016 at 2:19 PM, <s...@gmx.ch> wrote:

> We failed because of MITM certs:
>
> https://blog.mozilla.org/security/2016/01/06/man-in-the-middle-interfering-with-increased-security/
>
> But you can set security.pki.sha1_enforcement_level manually.
>
>
> On 16.01.2016 at 00:16, gdelg...@gmail.com wrote:
> > it's early 2016 and wondering if a decision has been made on the dates?

Jakob Bohm

Jan 18, 2016, 11:07:43 AM
to mozilla-dev-s...@lists.mozilla.org
On 18/01/2016 16:19, Richard Barnes wrote:
> "Failed" might be a bit strong :) We had a temporary setback.
>
> Like the blog post says, we're working on more precisely characterizing how
> widespread and how broken these middleboxes are, before taking steps to
> re-enable the SHA-1 restrictions. I still think we're on track for turning
> off SHA-1 entirely (together with the other browsers) sometime around EOY,
> but obviously there's a bit more uncertainty now.
>
> One thing that has been proposed is to have an exception for local roots,
> i.e., to let non-default trust anchors continue to use SHA-1 for some more
> time. What do folks here think about that idea?
>
>

How about letting certs that chain to roots that are self-signed with
SHA-1 continue to use SHA-1, assuming no such roots remain in the
default trust list?

However, this would not work if the default root list contains roots
that are (historically) self-signed using SHA-1 but which no longer
issue certificates signed with SHA-1 (this is possible for non-DSA
roots only).

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded

Eric Mill

Jan 18, 2016, 3:27:36 PM
to Richard Barnes, dev-secur...@lists.mozilla.org
On Mon, Jan 18, 2016 at 10:19 AM, Richard Barnes <rba...@mozilla.com>
wrote:

> ...
>
> One thing that has been proposed is to have an exception for local roots,
> i.e., to let non-default trust anchors continue to use SHA-1 for some more
> time. What do folks here think about that idea?
>

That seems like a choice to make only if it must be made, in order to shut
off SHA-1 for public roots in the absence of change in the enterprise. It's
not something I would proactively accept and move towards, since it removes
all pressure from vendors and enterprises to fix up their stuff.

This also seems like something of enough import that a multi-browser/OS
plan would probably be more effective than any single browser leading on
it, since enterprises tend to have 0 qualms about directing their entire
staff to use whatever browser works around the problem they're seeing.

-- Eric


>
>
> On Sun, Jan 17, 2016 at 2:19 PM, <s...@gmx.ch> wrote:
>
> > We failed because of MITM certs:
> >
> >
> https://blog.mozilla.org/security/2016/01/06/man-in-the-middle-interfering-with-increased-security/
> >
> > But you can set security.pki.sha1_enforcement_level manually.
> >
> >
> > On 16.01.2016 at 00:16, gdelg...@gmail.com wrote:
> > > it's early 2016 and wondering if a decision has been made on the dates?



--
konklone.com | @konklone <https://twitter.com/konklone>

Richard Barnes

Jan 18, 2016, 4:18:30 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Mon, Jan 18, 2016 at 11:07 AM, Jakob Bohm <jb-mo...@wisemo.com> wrote:

> On 18/01/2016 16:19, Richard Barnes wrote:
>
>> "Failed" might be a bit strong :) We had a temporary setback.
>>
>> Like the blog post says, we're working on more precisely characterizing
>> how
>> widespread and how broken these middleboxes are, before taking steps to
>> re-enable the SHA-1 restrictions. I still think we're on track for
>> turning
>> off SHA-1 entirely (together with the other browsers) sometime around EOY,
>> but obviously there's a bit more uncertainty now.
>>
>> One thing that has been proposed is to have an exception for local roots,
>> i.e., to let non-default trust anchors continue to use SHA-1 for some more
>> time. What do folks here think about that idea?
>>
>>
>>
> How about letting certs that chain to roots that are self-signed with
> SHA-1 use SHA-1, assuming no such roots remain in the default trust
> list.
>

I don't think that assumption is true, unfortunately. And even if it were,
it seems like this strategy would result in some hard-to-debug errors
without much benefit.

--Richard


> However this would not work if the default root list contains roots
> that are self-signed (historically) using SHA-1, but which no longer
> issue certificates signed with SHA-1 (this is possible for non-DSA
> roots only).
>
> Enjoy
>
> Jakob
> --
> Jakob Bohm, CIO, Partner, WiseMo A/S. https://www.wisemo.com
> Transformervej 29, 2860 Søborg, Denmark. Direct +45 31 13 16 10
> This public discussion message is non-binding and may contain errors.
> WiseMo - Remote Service Management for PCs, Phones and Embedded
>

Richard Barnes

Jan 18, 2016, 4:21:08 PM
to Eric Mill, dev-secur...@lists.mozilla.org
On Mon, Jan 18, 2016 at 3:26 PM, Eric Mill <er...@konklone.com> wrote:

> On Mon, Jan 18, 2016 at 10:19 AM, Richard Barnes <rba...@mozilla.com>
> wrote:
>
>> ...
>>
>> One thing that has been proposed is to have an exception for local roots,
>> i.e., to let non-default trust anchors continue to use SHA-1 for some more
>> time. What do folks here think about that idea?
>>
>
> That seems like a choice to make only if it must be made, in order to shut
> off SHA-1 for public roots in the absence of change in the enterprise. It's
> not something I would proactively accept and move towards, since it removes
> all pressure from vendors and enterprises to fix up their stuff.
>

To be clear: I said "some more time", not "forever" :)



> This also seems like something of enough import that a multi-browser/OS
> plan would probably be more effective than any single browser leading on
> it, since enterprises tend to have 0 qualms about directing their entire
> staff to use whatever browser works around the problem they're seeing.
>

Even if the browsers coordinate, though, it seems like enterprises also
have very little problem using old browsers, which is something we
definitely don't want to encourage.

--Richard


>
> -- Eric
>
>
>>
>>
>> On Sun, Jan 17, 2016 at 2:19 PM, <s...@gmx.ch> wrote:
>>
>> > We failed because of MITM certs:
>> >
>> >
>> https://blog.mozilla.org/security/2016/01/06/man-in-the-middle-interfering-with-increased-security/
>> >
>> > But you can set security.pki.sha1_enforcement_level manually.
>> >
>> >
>> > On 16.01.2016 at 00:16, gdelg...@gmail.com wrote:
>> > > it's early 2016 and wondering if a decision has been made on the
>> dates?

Jakob Bohm

Jan 18, 2016, 6:02:19 PM
to mozilla-dev-s...@lists.mozilla.org
On 18/01/2016 22:18, Richard Barnes wrote:
> On Mon, Jan 18, 2016 at 11:07 AM, Jakob Bohm <jb-mo...@wisemo.com> wrote:
>
>> On 18/01/2016 16:19, Richard Barnes wrote:
>>
>>> "Failed" might be a bit strong :) We had a temporary setback.
>>>
>>> Like the blog post says, we're working on more precisely characterizing
>>> how
>>> widespread and how broken these middleboxes are, before taking steps to
>>> re-enable the SHA-1 restrictions. I still think we're on track for
>>> turning
>>> off SHA-1 entirely (together with the other browsers) sometime around EOY,
>>> but obviously there's a bit more uncertainty now.
>>>
>>> One thing that has been proposed is to have an exception for local roots,
>>> i.e., to let non-default trust anchors continue to use SHA-1 for some more
>>> time. What do folks here think about that idea?
>>>
>>>
>>>
>> How about letting certs that chain to roots that are self-signed with
>> SHA-1 use SHA-1, assuming no such roots remain in the default trust
>> list.
>>
>
> I don't think that assumption is true, unfortunately. And even if it were,
> it seems like this strategy would result in some hard-to-debug errors
> without much benefit.
>

I was attempting to avoid the even-harder-to-debug error where behavior
depends on how a root cert was added to the configuration.

Ryan Sleevi

Jan 18, 2016, 8:28:30 PM
to Jakob Bohm, mozilla-dev-s...@lists.mozilla.org
On Mon, January 18, 2016 3:01 pm, Jakob Bohm wrote:
> I was attempting to avoid the even-harder-to-debug error where behavior
> depends on how a root cert was added to the configuration.

I think you underestimate the challenges of debugging the solution you propose.

It's fairly easy to understand that "shipped by Mozilla" has a SHA-1
policy, much in the same way it must adhere to the BRs and Mozilla
inclusion policy, and *any* local changes are exempt from that.

It's much harder to explain why a publicly trusted root, which may have
any number of versions of its self-signed certificate in circulation
(for concrete examples, look at the MD5, SHA-1, and SHA-2 versions of
Symantec's various roots), should be treated differently depending on
which version happens to be installed.

It also makes it harder to alter behaviour. If a vendor product only
signs MITM certs with SHA-1, but happens to use SHA-2 for the
self-signed root (and yes, I have seen such certificates out there -
usually because the root is generated off-device and then installed on
the device, while the leaves are all generated on-device), then you have
to re-generate the root in order to meet your proposed policy language.

So I'm definitely in agreement with Richard that it'd be much harder to
debug and advise people on, compared to the simpler "Manually installed
roots are exempt".

Ryan Sleevi

Jan 18, 2016, 8:47:19 PM
to Eric Mill, dev-secur...@lists.mozilla.org, Richard Barnes
On Mon, January 18, 2016 12:26 pm, Eric Mill wrote:
> On Mon, Jan 18, 2016 at 10:19 AM, Richard Barnes <rba...@mozilla.com>
> wrote:
>
> > ...
> >
> > One thing that has been proposed is to have an exception for local
> > roots,
> > i.e., to let non-default trust anchors continue to use SHA-1 for some
> > more
> > time. What do folks here think about that idea?
> >
>
> That seems like a choice to make only if it must be made, in order to shut
> off SHA-1 for public roots in the absence of change in the enterprise.
> It's
> not something I would proactively accept and move towards, since it
> removes
> all pressure from vendors and enterprises to fix up their stuff.

I think this misses out as to how enterprise support generally works.

I think we're all in agreement that we want users safe by default.

I think it should be, but may not be, obvious that the risk profiles for
a publicly trusted CA are very different from those of an
enterprise-operated CA. In both cases, a SHA-1 chosen-prefix collision
puts all users who trust that certificate at risk, but the population
sizes for a given enterprise root versus a Mozilla-default root are many
orders of magnitude apart (a single enterprise versus all Mozilla users).

It's also worth considering that enterprise *users* are in a very
different position to enact change. Generally, they're dependent on
vendors, who, unlike CAs, don't participate in many (any) industry
forums and who have a very different set of use cases and constraints
than publicly trusted CAs do. Showing a *user* an error because the
*vendor* is slacking doesn't really help protect the user - it just gets
them to shift off that browser, or inoculates them to warnings, more
than it gets them to apply pressure to the vendor.

Because of this, anyone who has ever supported enterprise software is
aware of the all too present necessity for enterprise flags. That is,
off-by-default configuration flags that can change or alter behaviour to
support an enterprises' needs. Yes, some of these directly reduce security
- such as enabling NTLM over HTTP sites, for example, or the "Intranet
Zone" of IE - but are necessary to support the slow-moving,
contract-driven reality of enterprise software support.

So imagine a world where we turn SHA-1 off by default. I can assure you
that both Microsoft and Chrome will end up having some form of
enterprise policy to enable it, in some specific cases. Quite possibly
Android too. So what should the flag do when it's enabled? Enable it for
all sites? Probably not - that would encourage publicly trusted CAs to
act in bad faith toward Internet security. Most likely it would do
exactly what Richard proposes - whether via metadata associated with the
root at installation time ("Allow this certificate to issue SHA-1
certificates") or a general policy ("Allow enterprise-installed
certificates to issue SHA-1"), the effect is the same.

I agree that we (as browser vendors and security community members) want
to exert some influence onto vendors to guide them to secure behaviour.
But the reality is the incentives of these vendors is very much *not* in
security. If you have any doubt of that, see examples like
https://code.google.com/p/google-security-research/issues/detail?id=693 (
or really, any number of examples from
https://code.google.com/p/google-security-research/issues/list?can=1 )

But there's opportunity to exert softer pressure. For example, UI that
reminds users every time they access one of these SHA-1 sites that their
vendor software is insecure, but which doesn't require the user to get
conditioned to click through interstitials, and for which the noisiness
ratchets up over time.

There's no question that a number of vendors will need months to fix
their products, and aren't doing their due diligence now. And even when
they do, it may take enterprises months to years to deploy the new
versions, because it's always a trade-off (perhaps the vendor
discontinued Vital Feature X in the version that supports SHA-2, for
example).

So we definitely can't think of this as a game of chicken with enterprise
vendors - it's a complex ecosystem balance - and one where, if we want to
support enterprise users, needs to be sensitive to their needs. Whether
it's a default policy (which I support Richard's proposal, which is not
surprising, considering I was one of several to have suggested it) or a
custom flag in about:flags, or an environment variable to be set, a
registry key to be tweaked, whatever have you, I think we need to
recognize a distinction in timelines and risk factors between publicly
trusted and enterprise trusted.

> This also seems like something of enough import that a multi-browser/OS
> plan would probably be more effective than any single browser leading on
> it, since enterprises tend to have 0 qualms about directing their entire
> staff to use whatever browser works around the problem they're seeing.

This suggests it's all or nothing, but I tried to spell out above how it's
not the case. Collective action against enterprises consistently
backfires, and just alienates users and makes enterprises more risk averse
- which makes them less likely to apply updates and fixes if they think
the risks are high. We balance that with customization - defaults that
work for the 99.99% of users, and leave enough of an escape hatch where we
believe it's reasonable or that the tradeoffs are balanced.

It doesn't have to be a permanent exception, by any means - enterprise
flags go away all the time too, especially if and as they increase support
costs or decrease velocity in maintaining the code (which grows to support
each possible permutation and combination of flags). But all or nothing,
nah.

For what it's worth, the policy proposed by Richard is more or less what
Chrome has presently adopted ( see
https://googleonlinesecurity.blogspot.com/2015/12/an-update-on-sha-1-certificates-in.html
), and almost certainly more or less what Microsoft will adopt (given that
RSA keysize deprecation already has enterprise flags associated with it,
as does the 'strong crypto' behaviour of CryptoAPI & Edge), so it's not at
all unreasonable.

A possible migration path is to distinguish PTCs (Publicly Trusted Certs)
from ETCs (Enterprise Trusted Certs), and have separate flags, with
defaults being SHA-1 off for PTCs, and SHA-1 left on for ETCs by default.
Enterprises can use Firefox's policy management to adjust the ETC flags,
and with a date set in the future where the default for ETCs, in the
absence of policy explicitly configuring it, will switch from "On" to
"Off". This is how you securely migrate things, potentially with
additional UI to communicate if/when ETCs are encountered (which is a
separable issue, really - you can introduce the flag well before and w/o
needing any corresponding UI; the UI is for evangelism, not enforcement)

Alternatively, a more complicated change would be configuring per-root
whether SHA-1 was allowed or not-allowed, but the downside for that is it
creates incentives for PTCs to encourage or instruct users to do so. Given
the history of root CAs encouraging and evangelizing insecure practices
when it can net them additional revenue, I think that would perhaps be
unwise, but it is a more robust option for organizations that have many
ETCs with distinct threat/risk profiles. In my experience, however, such
enterprises are rare, and the risk/reward/complexity tradeoffs favor the
simple flag-for-all solution rather than flag-per-CA.
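The PTC/ETC split and its date-based default flip could look roughly like this. This is purely illustrative: the function, the policy date, and the defaults are invented for the sketch and are not an actual Firefox or Chrome mechanism:

```python
from datetime import date

# Illustrative future date on which the ETC default would flip from
# "on" to "off" in the absence of explicit enterprise policy.
ETC_SHA1_OFF_DATE = date(2017, 1, 1)

def sha1_allowed(root_is_builtin, today, enterprise_policy=None):
    """Return True if a SHA-1 chain to this trust anchor is accepted.

    root_is_builtin:    True for a publicly trusted (shipped) root.
    enterprise_policy:  explicit admin setting for ETCs, or None.
    """
    if root_is_builtin:
        return False  # SHA-1 off by default for publicly trusted certs
    if enterprise_policy is not None:
        return bool(enterprise_policy)  # explicit admin choice wins
    return today < ETC_SHA1_OFF_DATE  # ETC default flips at the set date
```

The design point is that the per-population defaults, not the enterprise override, carry the security posture: public roots are strict immediately, while manually installed roots get a bounded grace period.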

Eric Mill

Jan 18, 2016, 10:21:21 PM
to ryan-mozde...@sleevi.com, dev-secur...@lists.mozilla.org, Richard Barnes
On Mon, Jan 18, 2016 at 8:46 PM, Ryan Sleevi <
ryan-mozde...@sleevi.com> wrote:

> Because of this, anyone who has ever supported enterprise software is
> aware of the all too present necessity for enterprise flags. That is,
> off-by-default configuration flags that can change or alter behaviour to
> support an enterprises' needs. Yes, some of these directly reduce security
> - such as enabling NTLM over HTTP sites, for example, or the "Intranet
> Zone" of IE - but are necessary to support the slow-moving,
> contract-driven reality of enterprise software support.
>

I do understand this general dynamic, and have had the great pleasure of
enjoying the slow-moving contract-driven reality of enterprise software in
my daily life.

I was suggesting not deciding a year ahead of time that enterprise apathy
will be too overwhelmingly powerful to even try pushing on, in the specific
case of SHA-1 deprecation.


> So imagine a world where we turn SHA-1 off by default. I can assure you
> that both Microsoft and Chrome will end up having some form of enterprise
> policy to enable it, in some specific cases. Quite possibly Android too.
>

Is there an enterprise command line flag for MD5 support in Internet
Explorer? :) There isn't in Chrome, and here's the bug thread where the
Chrome team denied fervent requests by someone behind an enterprise
firewall to add MD5 support behind a command line flag:

https://code.google.com/p/chromium/issues/detail?id=121705

To quote rsl...@chromium.org in comment #15:

> From the net/security side, the discussion has universally been WontFix.
> So far, all vendors (of man in the middle boxes signing with MD5) have
> released updates to their product, to the best of my knowledge. The
> security of MD5 is distinctively troubling enough that this would be akin to
> disabling SSL security for users behind such proxies.

I understand that MD5 was in a much more dire situation in 2012 than
SHA-1 is in 2016, and that middlebox vendors are not as far along with
SHA-2 support.

But that's only a temporary situation, so I don't think it's all that
far-fetched to suggest applying any kind of pressure at all to enterprises
and middlebox vendors.



> I agree that we (as browser vendors and security community members) want
> to exert some influence onto vendors to guide them to secure behaviour.
> But the reality is the incentives of these vendors is very much *not* in
> security. If you have any doubt of that, see examples like
> https://code.google.com/p/google-security-research/issues/detail?id=693 (
> or really, any number of examples from
> https://code.google.com/p/google-security-research/issues/list?can=1 )
>

Oh I don't have any doubt of that, but I do like how the costs of
insecurity are rising and becoming more visible to enterprises -- the Trend
Micro example above being a good one. We should look for more opportunities
to make bad behavior expensive for enterprises and vendors.


>
> But there's opportunity to exert softer pressure. For example, UI that
> reminds users every time they access one of these SHA-1 sites that their
> vendor software is insecure, but which doesn't require the user to get
> conditioned to click through interstitials, and for which the noisiness
> ratchets up over time.
>

That's a great short-term strategy, if it's decided -- hopefully later in
2016 than January -- that outright removal of support isn't immediately
feasible.

Incidentally, it's also the same path I would argue that browsers should
take in handling the situation of HPKP being overridden by local roots:
even if hard-fail enforcement would ultimately lead to a bad ecosystem
outcome, surfacing this information in a non-interstitial fashion is far
likelier to strike the right balance of user awareness that doesn't create
an arms race.


> registry key to be tweaked, whatever have you, I think we need to
> recognize a distinction in timelines and risk factors between publicly
> trusted and enterprise trusted.
>

Absolutely.


> This also seems like something of enough import that a multi-browser/OS
> > plan would probably be more effective than any single browser leading on
> > it, since enterprises tend to have 0 qualms about directing their entire
> > staff to use whatever browser works around the problem they're seeing.
>
> This suggests it's all or nothing, but I tried to spell out above how it's
> not the case. Collective action against enterprises consistently
> backfires, and just alienates users and makes enterprises more risk averse
> - which makes them less likely to apply updates and fixes if they think
> the risks are high. We balance that with customization - defaults that
> work for the 99.99% of users, and leave enough of an escape hatch where we
> believe it's reasonable or that the tradeoffs are balanced.
>

How weak does SHA-1 have to get before that balance changes? Is it totally
dependent on existing enterprise adoption rates, and ambient non-disruptive
user warnings?

In any case, I very much get the general point you're making. Maybe it's
just because I don't personally develop and maintain a browser, but it
seems early to pre-capitulate on the issue.

It'd be nice to have data. Mozilla is now gathering and publicly releasing
telemetry data on middleboxes. Peter Bowen from Amazon recently publicly
released a large TLS telemetry dataset that shows middlebox fingerprints:

https://cabforum.org/pipermail/public/2015-December/006507.html

Maybe that's something other browsers could work on publishing too?

-- Eric

Ryan Sleevi

Jan 18, 2016, 11:24:54 PM
to Eric Mill, dev-secur...@lists.mozilla.org, Richard Barnes
On Mon, January 18, 2016 7:20 pm, Eric Mill wrote:
> I was suggesting not deciding a year ahead of time that enterprise apathy
> will be too overwhelmingly powerful to even try pushing on, in the
> specific
> case of SHA-1 deprecation.

Chrome has already decided this, as has (in effect) Microsoft.

> Is there an enterprise command line flag for MD5 support in Internet
> Explorer? :)

There is. KB2862973 only applied to roots in the Microsoft program - not
enterprise roots. There's also flags to set the strong encryption
settings.

> There isn't in Chrome, and here's the bug thread where the
> Chrome team denied fervent requests by someone behind an enterprise
> firewall to add MD5 support back behind a command line flag:

That's not a decision we would repeat today. It's a decision we made only
because the issues didn't surface until we hit stable, so any fix would
have taken us 6 months (based on the then scheduled 8 week release
iteration)

But even if we launched in 2017 for everything, we'd have a flag, and it'd
likely last for 18 months. But that's still TBD, and me reading tea
leaves.

> How weak does SHA-1 have to get before that balance changes? Is it totally
> dependent on existing enterprise adoption rates, and ambient
> non-disruptive
> user warnings?

Even if it's totally broken, I think the risk proposition is still
questionable, given how exploiting a chosen-prefix collision works.

> Maybe that's something other browsers could work on publishing too?

I think such telemetry (from Firefox and Chrome) will be horribly
misleading for this case. Our opt-in rate of metrics for enterprises is
so low that any conclusions would be grossly misleading. We've certainly
seen this with MD5 and SHA-1 measurements.

Eric Mill

Jan 19, 2016, 12:06:40 AM
to ryan-mozde...@sleevi.com, dev-secur...@lists.mozilla.org, Richard Barnes
On Mon, Jan 18, 2016 at 11:24 PM, Ryan Sleevi <
ryan-mozde...@sleevi.com> wrote:

>
> > There isn't in Chrome, and here's the bug thread where the
> > Chrome team denied fervent requests by someone behind an enterprise
> > firewall to add MD5 support back behind a command line flag:
>
> That's not a decision we would repeat today. It's a decision we made only
> because the issues didn't surface until we hit stable, so any fix would
> have taken us 6 months (based on the then scheduled 8 week release
> iteration)
>

Really? Given your last few years of experience, if you could time travel
back to 2012, you would tell Past Ryan Sleevi to make a different decision
at that time about adding a flag for MD5 support in the enterprise? Was
there significant observed negative fallout of that decision?



> > How weak does SHA-1 have to get before that balance changes? Is it
> totally
> > dependent on existing enterprise adoption rates, and ambient
> > non-disruptive
> > user warnings?
>
> Even if it's totally broken, I think the risk proposition is still
> questionable, given how exploiting a chosen-prefix works.
>

Sure, but part of the benefit of shutting off SHA-1 issuance is to remove
SHA-1 code from the overall software pipeline altogether, and to remove the
opportunity for bugs and mistakes from having outsized impacts on critical
infrastructure.

I would put browser certificate validation code in a similar category of
critical software infrastructure as CA issuance code. Removing SHA-1
validation code from browsers altogether is a much stronger guarantee than
depending on logic which distinguishes between publicly trusted and locally
trusted roots, which, as discussed on this thread already, is quite tricky.


> Maybe that's something other browsers could work on publishing too?
>
> I think such telemetry (from Firefox and Chrome) will be horribly
> misleading for this case. Our opt-in rate of metrics for enterprises are
> so low that any conclusions would be grossly misleading. We've certainly
> seen this with MD5 and SHA-1 measurements.
>

That's a great point, but Peter's data was from website logs, and detecting
middleboxes in that data is about comparing TLS "fingerprints" to sent user
agents. That's not something enterprises have to opt-in to. So, large
website operators could be providing valuable (appropriately aggregated,
etc.) data in this regard.
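As a rough sketch of that detection heuristic: a connection whose TLS ClientHello "fingerprint" (cipher suites, extensions, and so on) does not match what the claimed User-Agent is known to send suggests a middlebox re-originating the connection. All fingerprint strings and UA labels below are made-up placeholders, not real browser fingerprints:

```python
# Middlebox-detection heuristic: compare an observed TLS ClientHello
# fingerprint against the fingerprints a browser family is known to emit.
# The table values here are hypothetical placeholders for illustration.
KNOWN_FINGERPRINTS = {
    "Chrome/47": {"769,47-53-5-10,0-23-65281", "771,49195-49199,0-23-65281"},
    "Firefox/43": {"771,49195-49196-52393,0-23-65281"},
}

def looks_like_middlebox(user_agent_family: str, observed_fingerprint: str) -> bool:
    """Return True when the observed fingerprint is not one the
    claimed browser family is known to send."""
    expected = KNOWN_FINGERPRINTS.get(user_agent_family)
    if expected is None:
        return False  # unknown UA: no baseline, so no conclusion
    return observed_fingerprint not in expected

# A Chrome UA arriving with a Firefox-style ClientHello is suspicious:
print(looks_like_middlebox("Chrome/47", "771,49195-49196-52393,0-23-65281"))
```

Large site operators already have both signals in their logs, which is why this does not require any opt-in from the enterprises being measured.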

I'd say that a lack of clear data to describe the impact of middleboxes on
the internet today is part of what led to the Legacy Validation proposal in
December, and part of what made it hard to discuss in a clear and grounded
fashion.

Ryan Sleevi

Jan 19, 2016, 4:16:02 AM
to Eric Mill, dev-secur...@lists.mozilla.org, Richard Barnes
On Mon, January 18, 2016 9:05 pm, Eric Mill wrote:
> Really? Given your last few years of experience, if you could time travel
> back to 2012, you would tell Past Ryan Sleevi to make a different decision
> at that time about adding a flag for MD5 support in the enterprise?

Yes.

> Was there significant observed negative fallout of that decision?

Yes.

> Sure, but part of the benefit of shutting off SHA-1 issuance is to remove
> SHA-1 code from the overall software pipeline altogether, and to remove
> the
> opportunity for bugs and mistakes from having outsized impacts on critical
> infrastructure.

This argument doesn't really hold any water for the case of enterprise
CAs, nor does it (in a practical sense) hold water for SHA-1. For that
matter, even MD5 as an implementation isn't removed from most
cryptographic libraries - including OpenSSL, BoringSSL, and CryptoAPI -
because of its use in other constructs (e.g. HMAC-MD5 or the MD5+SHA1
concatenation in TLS < 1.2)

From an issuance standpoint, it really doesn't reduce code, and from the
validation standpoint, the need for SHA-1 in the PKI products will remain
for some time (e.g. AKI/SKI synthetic construction, or the use in OCSP)
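(For context on the SKI point: RFC 5280 §4.2.1.2's method (1) derives the Subject Key Identifier as the 160-bit SHA-1 hash of the subjectPublicKey BIT STRING. It's an identifier, not an integrity check, so SHA-1's collision weakness doesn't matter there, which is why the code lingers in validators. A minimal sketch; the key bytes are a placeholder, not a real key:)

```python
# RFC 5280 method (1) SKI: SHA-1 over the raw subjectPublicKey bits
# (excluding tag, length, and unused-bits byte). Used only as an
# identifier for chain building, never for signature verification.
import hashlib

def subject_key_identifier(public_key_bits: bytes) -> bytes:
    """Derive an RFC 5280 method-(1) Subject Key Identifier."""
    return hashlib.sha1(public_key_bits).digest()

ski = subject_key_identifier(b"\x30\x0d\x06\x09placeholder-key-bits")
print(ski.hex())  # 40 hex chars: a 20-byte identifier
```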

> I would put browser certificate validation code in a similar category of
> critical software infrastructure as CA issuance code. Removing SHA-1
> validation code from browsers altogether is a much stronger guarantee than
> depending on logic which distinguishes between publicly trusted and
> locally
> trusted roots, which, as discussed on this thread already, is quite
> tricky.

Again, not really.

> That's a great point, but Peter's data was from website logs, and
> detecting
> middleboxes in that data is about comparing TLS "fingerprints" to sent
> user
> agents. That's not something enterprises have to opt-in to. So, large
> website operators could be providing valuable (appropriately aggregated,
> etc.) data in this regard.

Only to an extent. You're again presuming the enterprise MITM box, which
may show for sites like Amazon (of course, it would not show at all for
enterprise MITM boxes that blocked it). This would not, however, show up
at all for the case of using UAs to access internal enterprise resources,
which is a far greater (by volume of users, though not necessarily by
volume of certificates) use case.

My point is that the over-reliance on metrics underestimates (on orders of
magnitude) the impact to enterprises, which is why IF a user agent wishes
to support enterprises (and it's a complex question of business and
product direction), more nuance is needed.

I think the benefits to restricting SHA-1 in PTCs is both obvious and
uncontroversial, and offers positive security steps - more than doing
nothing. I can understand the desire of those *not* impacted by such
changes to wish to push the industry towards changes - I know, I've been
there myself. But I certainly am more aware of the impact these decisions
have, and of the real tradeoffs involved for these users, and thus the
need for a softer touch.

Arguing that enterprise users should be thrown under the bus is not a new
argument, nor is it one without grounding. We've certainly seen the
energetically emotional appeals in cases like HPKP, where some corners
have argued for making it harder and harder for enterprises to accomplish
their goals because they're seen as disagreeable. And while I certainly
don't agree with many of the reasons why such organizations see the need
to MITM, I'm quite aware of the lengths that MITM vendors will go to, and
of the lengths businesses will go to to support their needs. The same can
be said for SHA-1 here.

To me, the root of the issue here is education - how do we educate
enterprises that SHA-1 issuance is risky (to their organization, not to
the Internet at large), such that they lean on their vendors, such that
they have the economic incentives to switch vendors or, in many cases, pay
the exorbitant fees the vendors demand in order to support better
security. It's certainly one strategy to "hold users hostage" (via
interstitials), but that's one that doesn't seem to pay off well. Even
holding users hostage via lock icons is one that, as it played out, was
significantly less effective than desired. Nor is it a good game - for
users or for browser vendors - to get in the habit of hostage taking "for
the greater good". Another strategy is "pure outreach", although that's
unlikely to have the economies of scale necessary to get the industry to
move, and is the one previously attempted with MD5. So there's likely
somewhere in the middle ground - and that's something I hope Mozilla will
consider, in taking the necessary steps to secure PTCs, and working to
employ all appropriate means for ETCs.

Jakob Bohm

Jan 19, 2016, 7:12:14 AM
to mozilla-dev-s...@lists.mozilla.org
On 19/01/2016 10:15, Ryan Sleevi wrote:
> On Mon, January 18, 2016 9:05 pm, Eric Mill wrote:
>> Really? Given your last few years of experience, if you could time travel
>> back to 2012, you would tell Past Ryan Sleevi to make a different decision
>> at that time about adding a flag for MD5 support in the enterprise?
>
> Yes.
>
>> Was there significant observed negative fallout of that decision?
>
> Yes.
>
> ...
>
>
> Only to an extent. You're again presuming the enterprise MITM box, which
> may show for sites like Amazon (of course, it would not show at all for
> enterprise MITM boxes that blocked it). This would not, however, show up
> at all for the case of using UAs to access internal enterprise resources,
> which is a far greater (by volume of users, though not necessarily by
> volume of certificates) use case.

Indeed. In our small enterprise, I have observed the following sources
of non-public weaker certificates:

1. Embedded https administration interface servers in difficult to
upgrade hardware (such as otherwise functional HP printers long past
warranty, with the added sting that downgrading to even older
firmware was the only solution for a major vulnerability).

2. An internal CA, which has been almost completely supplanted by a new
one now. It used SHA-1 solely for compatibility with older clients
and may survive in that role as long as such clients are needed for
special tasks.

3. MITM-box-like behavior in endpoint antivirus programs (these default
   to intercepting SSL/TLS traffic to scan it for viruses). I have not
   checked the algorithm used to sign the pseudo-certificates used
   for traffic that never leaves the computer on which the signature is
   checked, and this is going to be brand-specific anyway.

4. E-mail certificates compatible with Outlook 2007. That one is a
   real bummer because of the upgrade costs, and the lack of
   confidentiality when using "cloud-focused" programs that do too much
   telemetry.


>
> My point is that the over-reliance on metrics underestimates (on orders of
> magnitude) the impact to enterprises, which is why IF a user agent wishes
> to support enterprises (and it's a complex question of business and
> product direction), more nuance is needed.

Indeed. In any security conscious environment, telemetry is an alias
for industrial espionage and can easily get a product thrown out if
blocking the telemetry isn't trivially easy and reliable.

>
> ...

Eric Mill

Jan 19, 2016, 8:40:13 PM
to ryan-mozde...@sleevi.com, dev-secur...@lists.mozilla.org, Richard Barnes
On Tue, Jan 19, 2016 at 4:15 AM, Ryan Sleevi <
ryan-mozde...@sleevi.com> wrote:

> On Mon, January 18, 2016 9:05 pm, Eric Mill wrote:
> > Really? Given your last few years of experience, if you could time
> travel
> > back to 2012, you would tell Past Ryan Sleevi to make a different
> decision
> > at that time about adding a flag for MD5 support in the enterprise?
>
> Yes.
>
> > Was there significant observed negative fallout of that decision?
>
> Yes.
>

What was the nature of that fallout? I don't know how to incorporate that
into my worldview without details.


> That's a great point, but Peter's data was from website logs, and
> > detecting
> > middleboxes in that data is about comparing TLS "fingerprints" to sent
> > user
> > agents. That's not something enterprises have to opt-in to. So, large
> > website operators could be providing valuable (appropriately aggregated,
> > etc.) data in this regard.
>
> Only to an extent. You're again presuming the enterprise MITM box, which
> may show for sites like Amazon (of course, it would not show at all for
> enterprise MITM boxes that blocked it). This would not, however, show up
> at all for the case of using UAs to access internal enterprise resources,
> which is a far greater (by volume of users, though necessarily not volume
> of certificates) use case.
>

That is a great point. But do we throw up our hands and say we can just
never know how enterprises use browsers? It doesn't seem practical from a
product management standpoint either -- it'd be foolhardy to have no data
that tells you how it's being used.

The general public uses browsers which also serve the enterprise without
substantial modification, so data on how enterprises use browsers is
relevant to decisions that affect the general public. I encourage your team
to find a way to publicly contribute some version of the data you're using
to drive your decisions.


But I certainly am more aware of the impact these decisions
> have, and of the real tradeoffs involved for these users, and thus the
> need for a softer touch.
>
> Arguing that enterprise users should be thrown under the bus is not a new
> argument, nor is it one without grounding. We've certainly seen the
> energetically emotional appeals in cases like HPKP, where some corners
> have argued for making it harder and harder for enterprises to accomplish
> their goals because they're seen as disagreeable. And while I certainly
> don't agree with many of the reasons why such organizations see the need
> to MITM, I'm quite aware of the lengths that MITM vendors will go to, and
> of the lengths businesses will go to to support their needs. The same can
> be said for SHA-1 here.
>
> To me, the root of the issue here is education - how do we educate
> enterprises that SHA-1 issuance is risky (to their organization, not to
> the Internet at large), such that they lean on their vendors, such that
> they have the economic incentives to switch vendors or, in many cases, pay
> the exorbitant fees the vendors demand in order to support better
> security. It's certainly one strategy to "hold users hostage" (via
> interstitials), but that's one that doesn't seem to pay off well. Even
> holding users hostage via lock icons is one that, as it played out, was
> significantly less effective than desired. Nor is it a good game - for
> users or for browser vendors - to get in the habit of hostage taking "for
> the greater good".


I didn't think Chrome regretted using the lock icon to raise awareness
among site owners about SHA-1. It may not have been as effective as you
were hoping it would be, but I think plenty of people, myself included,
observed that it generated real action by site owners across the public and
enterprise spheres.

We don't have the alternate universe where Chrome didn't do that to compare
it to, but if it felt like less than desired, there's another conclusion
you could draw: that if Firefox and IE and Safari were in the habit of
trying the same thing, the effect it did have would have been amplified.

Another strategy is "pure outreach", although that's
> unlikely to have the economies of scale necessary to get the industry to
> move, and is the one previously attempted with MD5. So there's likely
> somewhere in the middle ground - and that's something I hope Mozilla will
> consider, in taking the necessary steps to secure PTCs, and working to
> employ all appropriate means for ETCs.
>

I can understand and agree with your larger point, and the idea that
browser influence is much more limited and specific than it may seem to the
outside community.

I just don't like the feeling of making decisions without any data, and I
don't like the idea that browsers should default to using kid gloves with
enterprises.

We're willing to break connections to significant fractions of the world's
general population over the next few years as the SHA-1 issuance deadline
has its intended effect -- but we're not willing to even show warnings in
enterprise environments?

I get what you're saying, but it still sounds like a classic browser
collective action problem: no single browser is willing to increase the
friction out of fear that the enterprise will marginalize that browser in
favor of others.

If your experience with MD5 supports the notion that removing support for
it in the enterprise hurt user security in some other way, such as causing
enterprises to lock their users to older versions of Chrome for a long
period of time, please give more qualitative or quantitative detail to
support that. Otherwise, I have to assume a more traditional and typical
competitive dynamic that doesn't generally work in the public's interest.

Ryan Sleevi

Jan 19, 2016, 9:38:47 PM
to Eric Mill, dev-secur...@lists.mozilla.org, Richard Barnes
On Tue, January 19, 2016 5:38 pm, Eric Mill wrote:
> If your experience with MD5 supports the notion that removing support for
> it in the enterprise hurt user security in some other way, such as causing
> enterprises to lock their users to older versions of Chrome for a long
> period of time, please give more qualitative or quantitative detail to
> support that. Otherwise, I have to assume a more traditional and typical
> competitive dynamic that doesn't generally work in the public's interest.

While I sent a more comprehensive reply off-list explaining why I have
trouble with your arguments, I don't believe I can in good-faith continue
this conversation with you, Eric.

I appreciate your curiosity and enthusiasm, but I don't believe your
questions are at all relevant to this discussion, nor do I appreciate the
implication that my participation is an attempt to gain competitive
advantage - simply because I don't want to see users switch to Firefox or
another browser.

I don't believe it's necessary to satiate your curiosity, nor is it a
reasonable request, especially when ample information exists about the
impact that the MD5 deprecation had (as shown on the bug you previously
linked), ample academic literature exists on warning fatigue, and by your
own admission, you're familiar with the purchasing, upgrade, and deployment
cycles of enterprises and the challenges therein.

I've suggested several paths that Richard and the Firefox team may
consider, as compromises that allow Firefox to ensure secure
communications for users, while allowing enterprises the necessary relief
valves for their (longer) timelines and unique challenges. I can
appreciate that you don't see the utility in the relief valve, but there's
ample evidence (and your own experience should tell you) that such things
would and are necessary. They are paths being pursued by the Chrome team,
and, based on the evidence and historical precedence, believed to be the
Microsoft strategy as well.

In any event, I don't believe either of us are contributing positively to
the conversation at this point, so I'll bow out, and would encourage
considering the same.

Best,

Eric Mill

Jan 19, 2016, 10:40:22 PM
to ryan-mozde...@sleevi.com, dev-secur...@lists.mozilla.org, Richard Barnes
On Tue, Jan 19, 2016 at 9:38 PM, Ryan Sleevi <
ryan-mozde...@sleevi.com> wrote:

> On Tue, January 19, 2016 5:38 pm, Eric Mill wrote:
> > If your experience with MD5 supports the notion that removing support
> for
> > it in the enterprise hurt user security in some other way, such as
> causing
> > enterprises to lock their users to older versions of Chrome for a long
> > period of time, please give more qualitative or quantitative detail to
> > support that. Otherwise, I have to assume a more traditional and typical
> > competitive dynamic that doesn't generally work in the public's
> interest.
>
> While I sent a more comprehensive reply off-list explaining why I have
> trouble with your arguments, I don't believe I can in good-faith continue
> this conversation with you, Eric.
>
> I appreciate your curiosity and enthusiasm, but I don't believe your
> questions are at all relevant to this discussion, nor do I appreciate the
> implication that my participation is an attempt to gain competitive
> advantage - simply because I don't want to see users switch to Firefox or
> another browser.
>

That was a wholly unintentional implication -- I did not mean to say that
you personally were arguing in bad faith, or were seeking competitive
advantage. In fact, you're one of the last people I would ever accuse of
bad faith, since your level of personal and direct honesty is maybe the
highest in the entire community.

However, I can see how my comments would be taken that way, which is my
fault, and I apologize to you for that, and for potentially lowering the
level of discourse on the thread.

Avoiding losing users is a legitimate product interest, not intrinsically
bad, and I didn't think the idea that browsers considered this interest
would be a controversial one. Again, my fault for addressing that poorly.


I've suggested several paths that Richard and the Firefox team may
> consider, as compromises that allow Firefox to ensure secure
> communications for users, while allowing enterprises the necessary relief
> valves for their (longer) timelines and unique challenges. I can
> appreciate that you don't see the utility in the relief valve, but there's
> ample evidence (and your own experience should tell you) that such things
> would and are necessary. They are paths being pursued by the Chrome team,
> and, based on the evidence and historical precedence, believed to be the
> Microsoft strategy as well.
>

I believe in the utility of that relief valve -- my only disagreement has
been whether it was early enough to know whether that relief valve was
needed in this particular case. Your position is clear, and even though I
don't think it's futile to consider making choices other than Chrome's or
Microsoft's on this issue, I appreciate the details and rationale you've
provided, and hope others continue discussing it.