
An open letter to Gervase Markham, re http://www.gerv.net/security/self-signed-certs/


Kyle Hamilton

Feb 1, 2010, 7:32:14 PM2/1/10
to dev-secur...@lists.mozilla.org
(Prior reading material: http://www.gerv.net/security/self-signed-certs/ )

There's a fourth situation, in your treatise, that you don't address,
and don't even acknowledge. You dismiss the most obvious reason
why someone might want to run self-signed certificates: when that
someone, herself, is the certifier. (i.e., that particular someone
knows, outside of what Firefox has available to report, that the site
is legitimate.)

Many network appliances (including, but not limited to, NetApp
equipment) have a command which is used to set up TLS. Generating
the TLS key also, by necessity, generates a self-signed certificate as
well as a CSR for its idea of its own FQDN. This is because the
next logical thing for the administrator to do, after configuring the
private and public TLS keys with this command, is to reconnect to it
with https -- before he gets a trusted certificate back to install, if
he even has need of an internal CA. Oops! Self-signed certificate,
abort! abort! Danger, Will Robinson!
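
(To make this concrete, here's roughly the sort of thing such a
command does internally -- a sketch in Python using the pyca/cryptography
library. The FQDN is made up, and real appliances obviously do this in
their own firmware, not in Python:)

    from datetime import datetime, timedelta
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    # One keypair, used for both the self-signed cert and the CSR.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         u"filer01.example.internal")])

    # Self-signed certificate: issuer == subject, signed with its own
    # key, so the admin can reconnect over https immediately.
    cert = (x509.CertificateBuilder()
            .subject_name(name).issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.utcnow())
            .not_valid_after(datetime.utcnow() + timedelta(days=365))
            .sign(key, hashes.SHA256()))

    # CSR for the same name, in case there *is* an internal CA to send
    # it to.
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(name)
           .sign(key, hashes.SHA256()))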

Now, in contrast, Bluetooth, the Xbox 360, and X10 Security all do
something similar in their pairing protocols -- the protocols used to
establish communications between two devices. A device must be in
"discoverable" mode before it can be discovered, and the device that's
trying to pair with that device must attempt discovery while the other
is in that mode. (For additional security, Bluetooth allows the use
of a passcode, but that's mostly irrelevant for the purpose of this
discussion.) This relies on a pair of attributes called "time" and
"location", combined with the fact that pairing is a
once-in-a-blue-moon thing.
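
(In code, the model is almost embarrassingly simple -- a toy sketch in
Python. The window length is illustrative, and "location" is enforced
by radio range, which software can't express:)

    import time

    PAIRING_WINDOW = 90  # seconds; illustrative window length

    class Device:
        def __init__(self):
            self.discoverable_until = 0.0
            self.paired = set()

        def enter_discoverable_mode(self):
            # "time": pairing is only possible during this short window
            self.discoverable_until = time.time() + PAIRING_WINDOW

        def try_pair(self, peer_id):
            if time.time() > self.discoverable_until:
                return False              # window closed; back to normal mode
            self.paired.add(peer_id)      # remember this peer from now on
            self.discoverable_until = 0.0 # one pairing per window
            return True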

This model of security is much more useful to the people who have to
maintain their enterprise and datacenter infrastructures from day to
day. You're on the same network segment, or you're working on setting
up your TLS webserver, or whatever -- why are you forcing network
administrators (who really have no time to figure out how to configure
the product that they use to configure the products that they
administer) to adhere to the same security policy as you expect every
one of your users to operate under?

This, by the way, is what is meant by "But in many situations, we're
not concerned about identity in particular, we just want to get the
basic https: crypto stream up and running": when the identity of the
host is known in some way outside of the inspection of the browser.
You know, like how you're supposed to identify and verify trust
anchors. You completely dismiss the entire configuration process.
(And most of the time, because Verisign and Verisign-like certifiers
charge an arm and a leg, they don't want to have externally-verifiable
certificates until they're staging an externally-facing site for
production.)

In this case, these devices *are* (for all intents and purposes) trust
anchors which have been accepted outside the inspection of the
browser. An administrator may need to deal with hundreds of these
things, and I personally know of infrastructures where there is no CA
-- the master repository for the devices' certificates is the softoken
in the admin's profile. (Why should they have to have a third-party
certificate for authentication when they've already been authenticated
by and to the only person who they have to authenticate themselves
to?)

Why do you make it so difficult for people who use your software for
their work to do so? Oh, right, it's in the interest of the users
who... haven't been systematically polled... or use-case studied... or
anything?

(I've heard mention of one usability study, but that's how many years old now?)

-Kyle H

Gervase Markham

Feb 2, 2010, 12:37:16 PM2/2/10
to
Hi Kyle,

Firstly, one of the normal things you do with an open letter is, er,
send it to the person it's addressed to, as well as make it public. I
haven't yet received a copy of this in my inbox, or by postal mail. Is
this the UK Royal Mail being slow again?

I'm afraid that all you do by calling this an Open Letter, rather than
just starting a discussion, is make people think you are grandstanding.

On 02/02/10 00:32, Kyle Hamilton wrote:
> There's a fourth situation, in your treatise, that you don't address,
> and don't even acknowledge. You dismiss

If I don't address or acknowledge it, how can I be dismissing it? If we
could ease off on the loaded language, we might have a more productive
discussion.

> the most obvious reason
> why someone might want to run self-signed certificates: when that
> someone, herself, is the certifier. (i.e., that particular someone
> knows, outside of what Firefox has available to report, that the site
> is legitimate.)

How? If they are comparing fingerprints, then that's not "outside of
what Firefox has available to report". Do you mean they know they've got
to the right place when they type http://mynetapp because they trust
everyone on (their section of) the network they are using?

> Many network appliances (including, but not limited to, NetApp
> equipment) have a command which is used to set up TLS. Generating
> the TLS key also, by necessity, generates a self-signed certificate as
> well as a CSR for its idea of its own FQDN. This is because the
> next logical thing for the administrator to do, after configuring the
> private and public TLS keys with this command, is to reconnect to it
> with https -- before he gets a trusted certificate back to install, if
> he even has need of an internal CA. Oops! Self-signed certificate,
> abort! abort! Danger, Will Robinson!

As you well know, Firefox allows one to connect to such sites. And as
the admin knows what they are doing, no doubt they will.

> Now, in contrast, Bluetooth, the Xbox 360, and X10 Security all do
> something similar in their pairing protocols -- the protocols used to
> establish communications between two devices. A device must be in
> "discoverable" mode before it can be discovered, and the device that's
> trying to pair with that device must attempt discovery while the other
> is in that mode. (For additional security, Bluetooth allows the use
> of a passcode, but that's mostly irrelevant for the purpose of this
> discussion.) This relies on a pair of attributes called "time" and
> "location", combined with the fact that pairing is a
> once-in-a-blue-moon thing.

Indeed. I've always thought that this design is good. It trades off
perfection ("But someone could interfere during the pairing process!")
for simplicity in what I think is an excellent way. The short range of
the wireless is the key factor here. I would trust this sort of thing a
lot less on a system which had a range of hundreds of metres.

This is the same model used by people who just click "Yeah, whatever"
the first time they connect to a new machine via SSH, rather than
checking the key fingerprint. The risk of doing so, of course, depends
strongly on your network environment. I'd be unlikely to do it on the
wifi at Black Hat.

> This model of security is much more useful to the people who have to
> maintain their enterprise and datacenter infrastructures from day to
> day. You're on the same network segment, or you're working on setting
> up your TLS webserver, or whatever -- why are you forcing network
> administrators (who really have no time to figure out how to configure
> the product that they use to configure the products that they
> administer) to adhere to the same security policy as you expect every
> one of your users to operate under?

Are you proposing a set of UI or technical changes to implement your
idea? How do you prevent putting people not in the categories you cite
at greater risk?

> This, by the way, is what is meant by "But in many situations, we're
> not concerned about identity in particular, we just want to get the
> basic https: crypto stream up and running": when the identity of the
> host is known in some way outside of the inspection of the browser.

Same question as above: how? Because you trust the whole network?

> Why do you make it so difficult for people who use your software for
> their work to do so?

Have you considered writing an extension to make it easier? You could
even ask for donations from relieved sysadmins...

> Oh, right, it's in the interest of the users
> who... haven't been systematically polled... or use-case studied... or
> anything?

I have a feeling that if we did a usability study of a random selection
of 1000 users, precisely 0 would be in the category you describe. With
great luck, you might get 1.

Gerv

Kyle Hamilton

Feb 2, 2010, 2:48:21 PM2/2/10
to Gervase Markham, dev-secur...@lists.mozilla.org
On Tue, Feb 2, 2010 at 9:37 AM, Gervase Markham <ge...@mozilla.org> wrote:
> Hi Kyle,
>
> Firstly, one of the normal things you do with an open letter is, er,
> send it to the person it's addressed to, as well as make it public. I
> haven't yet received a copy of this in my inbox, or by postal mail. Is
> this the UK Royal Mail being slow again?
>
> I'm afraid that all you do by calling this an Open Letter, rather than
> just starting a discussion, is make people think you are grandstanding.

Well, I knew you were part of this group. I figured that if there's an
open letter, the least I can do is ensure you get only one copy of it,
so you aren't inundated with additional, useless communication.

> On 02/02/10 00:32, Kyle Hamilton wrote:
>> There's a fourth situation, in your treatise, that you don't address,
>> and don't even acknowledge.  You dismiss
>
> If I don't address or acknowledge it, how can I be dismissing it? If we
> could ease off on the loaded language, we might have a more productive
> discussion.

You dismissed it through lack of acknowledgement: someone (Lauren
Weinstein) stated that there are times when his security policy has
less interest in identity than getting the (basic) https crypto stream
up and running. You *categorically* dismissed the concept that there
is any conceivable situation where the security policy can be relaxed
on the issue of identity, by restating your proposition -- without
proof -- that there is no security if there is no concept of identity.

I provided you one of the possible scenarios later in the mail, and
you're now trying to tell me that increasing the number of operations
that an administrator -- whose time is paid for at much higher rates
and is therefore demonstrably more valuable than the 99.999% of
end-users you're trying to "protect" -- must perform in a
mind-numbing, useless, data entry-like, demonstrably harmful manner is
helpful.

For example, I'm going to describe the manner in which I have to
identify the fingerprint of a particular key:

1) Go to the https:// site in question. It fills the page with the
"Untrusted Connection" stuff.
2) Technical Details:
172.18.***.*** uses an invalid security certificate.

The certificate is not trusted because it is self-signed.
The certificate is only valid for wolfden.hybridk9.com

(Error code: sec_error_untrusted_issuer)

[You might notice something missing: WHERE'S THE FINGERPRINT? Also,
I've redacted my internal network numbering. You know, security
policy and all that.]

3) I understand the risks.

If you understand what's going on, you can tell Firefox to start
trusting this site's identification. Even if you trust the site, this
error could mean that someone is tampering with your connection.

Don't add an exception unless you know there's a good reason why this
site doesn't use trusted identification.

(add Exception) button

[Well, duh, I know why this is the way it is. I've a machine with 4
interfaces, all of which are pointed to by A records from
wolfden.hybridk9.com (an internal machine name). I'm addressing a
single one of them. The certificate doesn't have a subjectAltName, so
addressing it by number won't work.]

[Oh yeah, and I actually *tried* to get a publicly-working certificate
for this machine. Its implementation of TLS isn't actually
conformant, as it won't provide an entire chain of certs up to a trust
anchor, but only provides its own certificate, relying on AIA or other
information to chase the pointer to the issuer.]

4) Push the button "Add Exception", and wait for the dialog to
populate. Once it does, look for fingerprint.

Wrong Site
Untrusted Issuer

Ummm... where's the fingerprint in this? Oh, it looks like I have to
click on the 'View' button.

And there I have it -- in a modal dialog, I might add, which makes it
impossible to look at other tabs, so I can't even check a central wiki
which has fingerprints for all the machines.

(And the dialog by which Firefox makes this information available is
the closest thing to completely antithetical to the concept of
"security" as I can imagine. There's no way to select text therein,
so there's no way to select the fingerprints, so there's no way to
paste them into a wiki or anything else. This means that the only way
available is via slow, laborious, error-prone manual transcription or
comparison.)
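
(The comparison I actually want is a five-line script away -- a sketch
using only the Python standard library. Point it at the host and paste
the output into the wiki:)

    import hashlib, ssl, sys

    # usage: python fingerprint.py <host> [port]
    host = sys.argv[1]
    port = int(sys.argv[2]) if len(sys.argv) > 2 else 443

    # Fetch the peer certificate WITHOUT validating it -- the point is
    # to establish trust out of band, not to rely on it.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)

    print("SHA1:   " + hashlib.sha1(der).hexdigest())
    print("SHA256: " + hashlib.sha256(der).hexdigest())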

Seriously, I can't imagine how Firefox says that it's a browser for
everyone when it not only asks, but demands and enforces to the best
of its user-interface ability, that the most basic things necessary
for security only be provided in the manner that it expects. We've
already seen how that model can be attacked, and we'll undoubtedly see
more attacks against that model.

>> the most obvious reason
>> why someone might want to run self-signed certificates: when that
>> someone, herself, is the certifier.  (i.e., that particular someone
>> knows, outside of what Firefox has available to report, that the site
>> is legitimate.)
>
> How? If they are comparing fingerprints, then that's not "outside of
> what Firefox has available to report". Do you mean they know they've got
> to the right place when they type http://mynetapp because they trust
> everyone on (their section of) the network they are using?

"Outside of what Firefox has available to report" includes "topology
of local network segment", "interface connected to segment", "other
machines on segment", and "this is what the Office of Security
Administrator of my company (which I hold) told the Office of System
Administrator (which I also hold) to do." Among others.

How is one to add trust anchors if there is no mechanism outside of
what Firefox has available to report to compare with what Firefox does
have available to report?

And remember the temporal nature of the configuration. With the Xbox
360, if you don't pair your controller and console within 90 seconds,
they fall back out of pairing mode into whatever mode they were running
in before. If someone's configuring a new machine, the best practices
of HTTPS (and TLS/SSL in general) have always stated that one does not
configure it with the final public-facing certificate until it's
otherwise completely configured.

A warning saying "this site is running a private-label certifier, and
is most likely not yet ready for public use" would be far better than
"legitimate sites will never ask you to" [the latter being patently
false, for definitions of "legitimate" that include "we're not asking
anyone to commit any crimes, and we're not committing any crimes
ourselves" but not "we're running a business so we need to post our
business permit"]. I will note that I do like the new text at the top
of the Exception dialog, even though it's too small to be worth
anything, and I would suggest raising it to around 16 point from its
current 12: "Legitimate banks, stores, and public sites will never ask
you to do this."

Certificates seem, to my mind, to be about as useful as business
permits. Never mind that the essential service the permit provides is
that a governmental entity knows about the business operation, and
thus knows where to look to enforce the law... no commercial CA can
offer a building, occupancy, residency, or transaction privilege
permit, as they are representatives of neither the state nor the
government, and thus the fact that Mozilla's predecessor created a
de-facto state where third parties must pay and repay and repay again
for the privilege of being able to say "this is a legitimate public
site" amounts to little more than extortion.

What people don't understand is that *there is often no need to know,
to any standard of due diligence, who you're talking with*.

(I'm using Wes Kussmaul's definitions for 'state' and 'government', by
the way -- for most purposes, they're the same, but I'm being precise
because there are semantic differences that could possibly come into
play.)

>> Many network appliances (including, but not limited to, NetApp
>> equipment) have a command which is used to set up TLS.  Generating
>> the TLS key also, by necessity, generates a self-signed certificate as
>> well as a CSR for its idea of its own FQDN.  This is because the
>> next logical thing for the administrator to do, after configuring the
>> private and public TLS keys with this command, is to reconnect to it
>> with https -- before he gets a trusted certificate back to install, if
>> he even has need of an internal CA.  Oops!  Self-signed certificate,
>> abort! abort!  Danger, Will Robinson!
>
> As you well know, Firefox allows one to connect to such sites. And as
> the admin knows what they are doing, no doubt they will.

As you well know, Firefox also has a checkbox to disable updates.
This checkbox is ignored, and when Mozilla pushes out updates it
*always* interrupts the flow of what's being done. "I told you never
to update, you stupid piece of $#!+!" is often heard around here, and
I'm never the one who says it.

As you may or may not know, this qualifies as "unauthorized code
execution" on any machine that I run. I explicitly opt out of
updates, and thus expect that the code to check for them *will never
run*. It does. I explicitly opt out of updates, and thus expect that
they *will never download silently*. They do. I explicitly opt out
of updates, expecting that my workflow will *never* be interrupted by
a piece of software that thinks it knows better than me. It is.

At some point I expect a grand jury investigation into this particular
practice. (Software can't have it both ways: either it's not
responsible for its own bugs and security holes, in which case the
update mechanism makes no sense... or it is, in which case the update
mechanism still doesn't make sense, because by disabling updates one
also explicitly opts into liability for security holes which were
patched by the vendor but never applied by the admin -- thus, prima
facie negligence on the admin's part.)

>> Now, in contrast, Bluetooth, the Xbox 360, and X10 Security all do
>> something similar in their pairing protocols -- the protocols used to
>> establish communications between two devices.  A device must be in
>> "discoverable" mode before it can be discovered, and the device that's
>> trying to pair with that device must attempt discovery while the other
>> is in that mode.  (For additional security, Bluetooth allows the use
>> of a passcode, but that's mostly irrelevant for the purpose of this
>> discussion.)  This relies on a pair of attributes called "time" and
>> "location", combined with the fact that pairing is a
>> once-in-a-blue-moon thing.
>
> Indeed. I've always thought that this design is good. It trades off
> perfection ("But someone could interfere during the pairing process!")
> for simplicity in what I think is an excellent way. The short range of
> the wireless is the key factor here. I would trust this sort of thing a
> lot less on a system which had a range of hundreds of metres.

I'm going to expand on this concept a bit:

Someone can always interfere during the configuration process. If
there were (in the case of the X360) a malicious 802.11 traffic storm,
it would be difficult to figure out who'd be doing what. If there are
multiple X360s in a house (there are where I live), only one of them
can be trying to pair controllers at a time, or the whole process
falls apart.

VLAN tagging goes a long way toward reducing the reach of a system --
but even if it didn't, there's still the fact that many systems are
brought up as part of hosted platforms. Each of these has a period of
"hey, this is initial configuration" the instant they're brought up
onto the network. It's up to the lessee to figure out what to do
during those times.

The temporal nature of configuration is a very good thing. If you can
limit the attacker-space even more (by, for example, having the
hosting provider add a firewall rule that only allows hosts on the
IPv4 /24 that you're on), you've achieved the other half of the "limit
configuration access" best practice.

(Notwithstanding the above: new business and technical models, such as
Amazon's EC2, allow the end-administrator even more control over
external provisioning than enterprise-class hardware does. This means
that these virtual machines can be started and configured within
seconds of being brought online. Of course, they also have
authorized_keys files and give you the SSH fingerprint of the VM.)

> This is the same model used by people who just click "Yeah, whatever"
> the first time they connect to a new machine via SSH, rather than
> checking the key fingerprint. The risk of doing so, of course, depends
> strongly on your network environment. I'd be unlikely to do it on the
> wifi at Black Hat.

If I just set up the machine, and it's on my own network... and it's
not connected to any Internet access except through my machine, and
then only when it needs to download service
packs/updates/firmware/whatever... I'm pretty sure that I can ignore
checking the fingerprint. Of either SSH or X.509 keys.

I'd be unlikely to do it at Black Hat, as well. There are extremes to
be found on both ends of the spectrum -- and the reality is probably
more in the center, with a moderate to strong leaning toward "less
intrusion".

(I think that most people adhere to their banks' security policies
just because their banks have them, and it's most convenient for
access to their money to follow those policies. Let's see... another
fundamental maxim: people are, on the whole, lazy. They tend to do
the minimum amount of work necessary to get what they need done.)

>> This model of security is much more useful to the people who have to
>> maintain their enterprise and datacenter infrastructures from day to
>> day.  You're on the same network segment, or you're working on setting
>> up your TLS webserver, or whatever -- why are you forcing network
>> administrators (who really have no time to figure out how to configure
>> the product that they use to configure the products that they
>> administer) to adhere to the same security policy as you expect every
>> one of your users to operate under?
>
> Are you proposing a set of UI or technical changes to implement your
> idea? How do you prevent putting people not in the categories you cite
> at greater risk?

I would promote an alternative UI that would be switched as an
about:config option. I'd call it
'security.administrator.mode.i-really-do-know-what-I-am-doing'. (only
half tongue-in-cheek.)

The idea is that it would have a bit of chrome that appears instead of
the current chrome when there's a certificate error. Instead of
having useless twist-down options ('useless' because they don't
provide the truly technical user anything useful -- sure, bad site,
and untrusted issuer, but unless I can see the fingerprint I can't
tell whether it's okay or not), it would present a page with most of
the content of the certificate and the session that Firefox used to
obtain that certificate (i.e., a text representation of the
certificate, and information such as hostname, host IP, port,
expected protocol, and possibly the Referer, to see if there's anything
weird going on) and ask if the admin wanted to accept it or not.
(Perhaps with a small countdown on the 'confirm exception' and
'confirm permanent exception' buttons, akin to the add-on installation
dialog.)
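
(Flipping it on could be as simple as one line in user.js -- the pref
name is my own invention from above; only the user_pref() mechanism is
real Firefox machinery:)

    // user.js -- hypothetical pref; user_pref() itself is standard.
    user_pref("security.administrator.mode.i-really-do-know-what-I-am-doing", true);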

>> This, by the way, is what is meant by "But in many situations, we're
>> not concerned about identity in particular, we just want to get the
>> basic https: crypto stream up and running":  when the identity of the
>> host is known in some way outside of the inspection of the browser.
>
> Same question as above: how? Because you trust the whole network?

Because I trust the same concept that I trust for Bluetooth and Xbox
360: it's staggeringly unlikely that someone else is going to be doing
the same operation on the same machine at the same time.

(Also, I should point out: I'm hearing these complaints from people
who work at businesses which create Common Criteria-evaluated hardware
for the US government. Some of these networks are physically,
optically separated, and others are simply VLANned. If someone who
understands Common Criteria, how it's supposed to work, and the
security principles involved is complaining, I would lay odds that
Mozilla's overprotection of its users goes far beyond any "reasonable"
measure. Especially since CC is usually considered "unreasonable"!)

>> Why do you make it so difficult for people who use your software for
>> their work to do so?
>
> Have you considered writing an extension to make it easier? You could
> even ask for donations from relieved sysadmins...

When the only extension that I can find (KeyManager) that interacts
with the PSM has to include its own XPCOM-exposed copies of the key
and certificate management code as part of its bundle, I don't think
that Mozilla has any idea (much less any care) just how thoroughly
they're alienating the people who are most likely to give tech support
to the people who they've brought into the Mozilla fold. I *can't*
write an extension that's cross-platform with native components. I do
not have the technical knowledge required.

At this point, it's bad enough that I'm recommending that users and
admins completely ditch Firefox. What's more, some of these admins I
know have already called for its removal from their
corporate-supported images. You (Netscape>Firefox|collectively) have
had over a decade to get your heads out of your respective dark
places, be they buried in sand or somewhere less pleasant -- and you
haven't.

This means that you are out of touch, and so bull-headed and mulish
in your belief that The Extended Exception Dialog Is The Way To Go
that there is no hope of convincing you that there is even a single
exception to your holy crusade.

>> Oh, right, it's in the interest of the users
>> who... haven't been systematically polled... or use-case studied... or
>> anything?
>
> I have a feeling that if we did a usability study of a random selection
> of 1000 users, precisely 0 would be in the category you describe. With
> great luck, you might get 1.

I have a feeling that it's impossible to know without actually running
the numbers. I also think that you'll get different responses from
different sectors in different types of areas with different kinds of
economies (i.e., farming/agriculture, blue-collar industrial,
white-collar knowledge), and that your feeling has about as much
weight behind it as mine. (If you did a usability study in San Jose
or Santa Clara, for example, my feeling is you're much more likely to
get 30 out of that 1000.)

I, at least, am willing to admit that I'm pulling numbers out of my
ass. I'm even trying to figure out a means by which I could get more
solid data. You're simply going on "feeling".

-Kyle H

Gervase Markham

Feb 4, 2010, 6:36:26 AM2/4/10
to
On 02/02/10 20:48, Kyle Hamilton wrote:
> You dismissed it through lack of acknowledgement: someone (Lauren
> Weinstein) stated that there are times when his security policy has
> less interest in identity than getting the (basic) https crypto stream
> up and running. You *categorically* dismissed the concept that there
> is any conceivable situation where the security policy can be relaxed
> on the issue of identity, by restating your proposition -- without
> proof -- that there is no security if there is no concept of identity.

Do you disagree with that proposition? I don't think you do. As far as I
can understand from your argument, you are saying that there are
different ways of confirming identity other than having that identity
encoded in the certificate. For example, if your computer and the target
computer are the only two on your network.

I certainly think that's true.

> I provided you one of the possible scenarios later in the mail, and
> you're now trying to tell me that increasing the number of operations
> that an administrator -- whose time is paid for at much higher rates
> and is therefore demonstrably more valuable than the 99.999% of
> end-users you're trying to "protect" -- must perform in a
> mind-numbing, useless, data entry-like, demonstrably harmful manner is
> helpful.

It's not helpful to them in particular, but I assert it's a net win in
general. What my article is about is trying to get people to grapple
with the tradeoff: "we have not found a way to make it any easier for
geeks to use the KCM model without putting at risk all the people who
only ever use the standard model."

> For example, I'm going to describe the manner in which I have to
> identify the fingerprint of a particular key:

<snip>

You know about
https://addons.mozilla.org/en-US/firefox/addon/6843
, right?

> [You might notice something missing: WHERE'S THE FINGERPRINT? Also,

That's a good point. The fingerprint should be displayed somewhere
without the user having to open the certificate viewer, and in a
non-modal dialog. Feel free to file a bug on that.

> (And the dialog by which Firefox makes this information available is
> the closest thing to completely antithetical to the concept of
> "security" as I can imagine. There's no way to select text therein,
> so there's no way to select the fingerprints, so there's no way to
> paste them into a wiki or anything else. This means that the only way
> available is via slow, laborious, error-prone manual transcription or
> comparison.)

That too. Hackers wanted :-)

> Seriously, I can't imagine how Firefox says that it's a browser for
> everyone

Er, that's Chrome's slogan.

http://3.bp.blogspot.com/_ShssuEE1cx0/SydsjP9dILI/AAAAAAAAEEg/wagxoXqougY/s400/Google%2BChrome%2BNewspaper%2Bad%2Bfront%2Bpage.jpg

Is their implementation of this better than ours?

> What people don't understand is that *there is often no need to know,
> to any standard of due diligence, who you're talking with*.

If you don't mind who you are talking with, don't use SSL. You are
concerned about being eavesdropped? But I thought you said you didn't
mind who you were talking with. Oh, you do? ...

> As you well know, Firefox also has a checkbox to disable updates.
> This checkbox is ignored, and when Mozilla pushes out updates it
> *always* interrupts the flow of what's being done. "I told you never
> to update, you stupid piece of $#!+!" is often heard around here, and
> I'm never the one who says it.

I claim "Tu quoque".

>> Are you proposing a set of UI or technical changes to implement your
>> idea? How do you prevent putting people not in the categories you cite
>> at greater risk?
>
> I would promote an alternative UI that would be switched as an
> about:config option. I'd call it
> 'security.administrator.mode.i-really-do-know-what-I-am-doing'. (only
> half tongue-in-cheek.)

Like the extension referenced above, in fact?

> At this point, it's bad enough that I'm recommending that users and
> admins completely ditch Firefox.

In favour of...?

> This means that you are out of touch, and so bull-headed and mulish
> in your belief that The Extended Exception Dialog Is The Way To Go
> that there is no hope of convincing you that there is even a single
> exception to your holy crusade.

This is why we have an extension system.

Gerv
