
Proposal for improving the security of add-on updates


Dave Townsend

Jun 14, 2007, 4:48:46 PM
Hi all, I am looking for some feedback on a proposal I'm working on to
improve the security of add-on updates in Mozilla products. Let me give
an overview of the problem I wish to solve and then what I have come up
with so far as a potential solution.

In the Mozilla applications we have add-ons installed. I'm ignoring
how the add-ons are installed, but let's assume that once there we have
some faith in them. The application periodically checks for available
updates to an add-on by downloading an update file from a URL specified
by the installed add-on.

What I want is to be able to establish some trust that the update file
retrieved is correct: that it has not been tampered with or
intercepted, and is as it was originally written by the add-on author.

The key problem is that I wish to do this in a way that does not cost
the add-on author any money (or at most a very small amount), so
getting a certificate signed by one of the root CAs is not an option,
nor is serving the file from an SSL server.

The potential solution that I am considering is to use a digital
signature. The add-on author, when he first writes the add-on, creates
a public and private key pair. The public key is included in the add-on
on initial install. After that, the private key is used to sign the
update file. In this way the application has the author's public key
and can use it to verify the signature of the update file.
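
[Sketch: the sign-and-verify flow described above, as toy Python. The
textbook-RSA key, the manifest string, and the URL are illustrative
stand-ins only; a real implementation would use a vetted crypto library
with full-size keys and proper padding.]

```python
# Toy sketch of the proposed scheme: the author signs the update file
# with a private key; the client verifies using the public key shipped
# in the initially installed add-on. Textbook RSA with a tiny fixed key,
# for illustration ONLY -- real code must use a vetted crypto library.
import hashlib

N = 61 * 53   # public modulus (3233); a real key would be 2048+ bits
E = 17        # public exponent, included in the add-on on install
D = 413       # private exponent, kept by the author (17*413 = 1 mod 780)

def digest(data):
    # Hash the update file, reduced into the toy modulus (demo only).
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(update_file):
    # Author side: sign the digest with the private key.
    return pow(digest(update_file), D, N)

def verify(update_file, signature):
    # Client side: recompute the digest and compare it against the
    # signature opened with the author's public key.
    return pow(signature, E, N) == digest(update_file)

manifest = b"exampleaddon 2.0 http://example.org/addon-2.0.xpi"
sig = sign(manifest)
assert verify(manifest, sig)                 # authentic update accepted
assert not verify(manifest, (sig + 1) % N)   # altered signature rejected
```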

There are a few problems already pointed out:

This all assumes that the initial add-on was not tampered with in the
first place. This is true; however, that is really a different problem
for the future. Right now I have to concentrate on the update mechanism
alone.

If the add-on author has their private key compromised then of course we
lose all security. I don't see a real way around this but we have
certain possibilities in place (add-on blacklisting for example) that
could be called into play in such an event.

Finally, the public key for the add-on on the user's machine could be
changed; however, in the event that someone has gained access to the
user's machine, I think ensuring the security of updates is a moot point.

Now I am not an expert on cryptography so I would appreciate any
comments you have on other weaknesses in the proposal or any other
solutions or problems you think I might have missed.

Cheers

Dave

Kyle Hamilton

Jun 14, 2007, 5:50:45 PM
to Dave Townsend, dev-tec...@lists.mozilla.org
So, essentially, you're trying to create a "key continuity" system
rather than a "trusted certification" system. As you point out, the
initial bootstrapping is outside the realm of what can be dealt with.

A public/private key pair could just as easily have a certificate
created for it (self-signed certificate), and the add-on should be
able to initiate adding the self-signed certificate to the certificate
store. [The user should be prompted 'this is to allow X addon to
automatically and securely update itself'.] This would allow for a
standardized certificate revocation process (add-on author sends the
certificate to a "revokeaddoncert" or such mailing address, perhaps
signed using the private key to the certificate, stating that it is
administratively revoked?) to be included in Mozilla suite updates.

I would propose that a critical certificate extension be included in
those self-signed certificates: 'name of Mozilla add-on'. This would
allow the certificate's scope to be limited appropriately. It should
also allow for the add-on's name to be blacklisted through the
authorization mechanism [remember, certificates are designed to solve
the authentication, not authorization, problem] even if the key is
never manipulated in the user's certificate store to be 'untrusted'.

The security of this system relies on two things: good key management
practices at the developer side, and appropriate limitations
implemented in the Mozilla software side as well. It allows for the
re-use of the existing code and model in a way that does not require
third-party intervention, while maintaining editorial control over
badware.

[For those not in the know: A 'critical extension' is a piece of
information stuck in an X.509v3 certificate that is specified as 'if
you don't know what this extension is, stop processing this
certificate and return a certificate validation error'. When properly
implemented, this would allow for the add-on update process to verify
the signature of the add-on with a critically-marked certificate,
without allowing that certificate to inadvertently be used for website
authentication. If the critical extension includes the name of the
add-on, then there comes a two-fold benefit: 1) it's cryptographically
authenticated, and 2) An add-on that uses one key cannot be
overwritten by any other -- for example, the 'firebug' add-on could
not be overwritten by an add-on signed with a key that belongs to a
'firebad' add-on.]
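
[Sketch: the critical-extension rule in the note above, modeled in
Python. The dict-based certificate records and the 'mozillaAddonName'
extension identifier are hypothetical stand-ins, not real NSS
structures or a registered OID.]

```python
# Model of X.509v3 critical-extension handling as described in the
# bracketed note: a validator that does not recognize a critical
# extension must stop and fail. Cert dicts and the 'mozillaAddonName'
# extension here are hypothetical stand-ins.

KNOWN_EXTENSIONS = {"basicConstraints", "keyUsage", "mozillaAddonName"}

def validate_extensions(cert):
    # Return the add-on name bound into the cert; raise if an unknown
    # critical extension is present (per the X.509v3 rule).
    addon_name = None
    for ext in cert["extensions"]:
        if ext["oid"] not in KNOWN_EXTENSIONS:
            if ext["critical"]:
                raise ValueError("unknown critical extension: " + ext["oid"])
            continue  # unknown but non-critical extensions are ignored
        if ext["oid"] == "mozillaAddonName":
            addon_name = ext["value"]
    return addon_name

def may_update(installed_name, cert):
    # Authorization check: a key scoped to one add-on name cannot sign
    # updates for a differently named add-on.
    return validate_extensions(cert) == installed_name

firebug_cert = {"extensions": [
    {"oid": "mozillaAddonName", "critical": True, "value": "firebug"},
]}
firebad_cert = {"extensions": [
    {"oid": "mozillaAddonName", "critical": True, "value": "firebad"},
]}
assert may_update("firebug", firebug_cert)
assert not may_update("firebug", firebad_cert)  # 'firebad' can't overwrite 'firebug'
```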

I will need to do some analysis to be able to describe/define the
threat model against this type of system. I should point out that
this is not a unique concept -- the One Laptop Per Child project
insists on a key continuity management system rather than a
certification authority system, though I have not looked at their
threat model and analysis.

Preliminarily, I like this proposal. I don't know how doable it will
be, given the politics involved in the MoFo, but I think that it
certainly bears looking into. (I'm suspending my own comments on the
weaknesses involved until I analyze it -- but I've mentioned at least
some of the already-existing pieces which can be re-used to reduce
implementation time and effort.)

-Kyle H

> _______________________________________________
> dev-tech-crypto mailing list
> dev-tec...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>

Nelson Bolyard

Jun 15, 2007, 1:44:41 PM
to Dave Townsend
Dave Townsend wrote:
> Hi all, I am looking for some feedback on a proposal I'm working on to
> improve the security of add-on updates in Mozilla products. Let me give
> an overview of the problem I wish to solve and then what I have come up
> with so far as a potential solution.
>
> In the Mozilla applications we have add-ons installed. I'm ignoring
> how the add-ons are installed but let's assume that once there we have
> some faith in them. The application periodically checks for available
> updates to an add-on by downloading an update file from a url specified
> by the installed add-on.
>
> What I want is to be able to establish some trust that the
> update file retrieved is correct, and has not been tampered with,
> intercepted and is as it was originally written by the add-on author.
>
> The key problem is that I wish to do this in a way that does not cost
> the add-on author any money (or at least a very small amount of money),
> so getting a certificate signed by one of the root CA's is not an option
> nor is serving the file from an ssl server.

$18/year is too expensive, eh?

Dave Townsend

Jun 15, 2007, 2:01:21 PM
Nelson Bolyard wrote:
>
> $18/year is too expensive, eh?
>

Heh, this is true. My attempts to find cheap SSL certificates had only
yielded $100-per-year jobs. Given that they are not that expensive, I
have started doing a straw poll of authors to see whether that would be
too much or not.

Dave

Nelson B

Jun 16, 2007, 7:39:22 PM

I have heard that SSL server certs are available for FREE from Startcom
(one of the CAs already known to mozilla products) at this web page:
http://cert.startcom.org/

I have not tried this, because I'm not trying to run an SSL server.
But I figure it's worth trying out.

--
Nelson B

Dave Townsend

Jun 17, 2007, 7:52:44 AM

I'm certainly looking into this, but so far my unscientific straw poll
of add-on authors suggests that SSL certification would not be a viable
route. The cost of the certificate is one thing, then there are the
additional hosting costs (general web hosting does not provide the unique
IP required), then there is just the complexity of getting and
installing such certificates.

Dave

Nelson B

Jun 17, 2007, 12:08:47 PM

There is no unique IP address required any more. Modern TLS implementations,
like the one in Mozilla products, allow the client and server to negotiate
the host name over the SSL connection before the server presents its cert,
so that the server can pick the right cert. It's good old virtual hosting,
done with https. Works well.

Now, I could believe that many of these hosting companies don't yet have
servers that know how to do that host name/cert negotiation. But if
mozilla authors start to create a demand for that service, and they go
to the hosting services that do offer it, that will cause more hosting
services to offer it.

> then there is just the complexity of getting and
> installing such certificates.

The administration part is generally the easy part. It only seems complex
the first time, like going on a first date. :)

--
Nelson B

Dave Townsend

Jun 17, 2007, 1:00:27 PM
Nelson B wrote:
> There is no unique IP address required any more. Modern TLS implementations,
> like the one in Mozilla products, allow the client and server to negotiate
> the host name over the SSL connection before the server presents its cert,
> so that the server can pick the right cert. It's good old virtual hosting,
> done with https. Works well.

Ah, interesting, I was under the impression that bug 276813 was required
for that sort of thing. I shall have to take a look over some popular
hosts to see whether they offer this. Out of interest, besides Mozilla,
do other browsers support this - IE? Safari? Opera?

Dave

Nelson B

Jun 17, 2007, 4:44:07 PM
I wrote:
>>> There is no unique IP address required any more. Modern TLS implementations,
>>> like the one in Mozilla products, allow the client and server to negotiate
>>> the host name over the SSL connection before the server presents its cert,
>>> so that the server can pick the right cert. It's good old virtual hosting,
>>> done with https. Works well.

Dave Townsend wrote:
>> Ah, interesting, I was under the impression that bug 276813 was required
>> for that sort of thing. I shall have to take a look over some popular
>> hosts to see whether they offer this. Out of interest, besides Mozilla,
>> do other browsers support this - IE? Safari? Opera?

Nope. That's something else, an alternative proposal that (IMO) has been
roundly rejected by the SSL and http implementors of the world.

Eddy Nigg (StartCom Ltd.) wrote:
> To the best of my knowledge this works only if the various domain names are
> bundled in one certificate as DNS extensions. I'm not aware that one can
> host multiple certificates and multiple hosts on one and the same IP address
> and port (443).

Well, then let me introduce you to "Server Name Indication" (SNI). It's
SSL on port 443 (it could be any port, such as the port for IMAP-over-SSL,
that negotiates SSL before starting the application protocol [http, IMAP, etc.]).

The client tells the server, in the SSL/TLS "client hello", the name of
the server that it's trying to contact at that address/port. The server
then uses that information to switch to the virtual host named in the SNI,
and also to switch to the certificate for that virtual host. Each Virtual
host has its own certificate. The server sends to the client the cert
for the virtual host that the client wants to see. The client sees no
host name mismatch. Everyone is happy.
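
[Sketch: the client side of the exchange just described, using Python's
ssl module. The server_hostname argument is what places the target host
name into the client hello; example.org is a placeholder and no
connection is actually made in this snippet.]

```python
# Client-side SNI sketch: server_hostname puts the desired virtual
# host's name into the TLS client hello, so the server can present the
# matching cert and the client sees no host name mismatch. No network
# I/O happens here; example.org is a placeholder.
import socket
import ssl

def wrap_with_sni(sock, hostname):
    ctx = ssl.create_default_context()
    # server_hostname both sends the SNI extension and names the host
    # the returned certificate will be checked against.
    return ctx.wrap_socket(sock, server_hostname=hostname)

# Usage (not executed here, since it would open a real connection):
#   raw = socket.create_connection(("example.org", 443))
#   tls = wrap_with_sni(raw, "example.org")
```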

This simplifies things for server admins a LOT, because there's no need to
have the hosting service try to get one cert with dozens (hundreds?
thousands?) of host names in it. The hosting admin can let each of
his customers who wants to use SSL buy his own separate cert, and he
merely puts all those certs into his server.

NSS supports SNI on the client side. Mozilla browsers have supported
SNI beginning with FF 2.0. It's just in there. No feature to enable.
It's incompatible with SSL2, so SSL2-format client hellos must be
disabled to use it, and they are disabled (by default) in FF2.

Vista supports SNI, too (it's tied to the SSL code in the OS, not to
the browser version, so XP users of IE7 don't get the feature, AFAIK).

For your reference, the b.m.o bug for adding client SNI to NSS was bug
https://bugzilla.mozilla.org/show_bug.cgi?id=116168
The bug for adding server side SNI to NSS is bug
https://bugzilla.mozilla.org/show_bug.cgi?id=360421
The RFC that defines SNI is RFC 4366, pages 9-10.
http://www.rfc-editor.org/rfc/rfc4366.txt

--
Nelson B

Nelson B

Jun 17, 2007, 11:25:56 PM
Eddy Nigg (StartCom Ltd.) wrote:

> Which means that at least 60 % of all clients don't support it yet. It's
> not there yet and it will take some time until real hosting providers
> can rely on that and deploy without fear...just imagine supporting only
> 40% of all clients/browsers ;-)

That's 40% more than they support today, evidently.

Gervase Markham

Jun 18, 2007, 6:22:16 AM
Dave Townsend wrote:
> Out of interests besides Mozilla
> do other browsers support this, IE? Safari? Opera?

Why does that matter? It's Firefox that's going to be downloading the
updates, isn't it?

Gerv

Gervase Markham

Jun 18, 2007, 6:24:43 AM
Dave Townsend wrote:
> What I want is to be able to establish some trust that the
> update file retrieved is correct, and has not been tampered with,
> intercepted and is as it was originally written by the add-on author.

Link Fingerprints was designed for precisely this purpose, and is
currently being implemented in Firefox by Ed Lee, who is sitting next to
Dan Veditz:
http://www.gerv.net/security/link-fingerprints/

You get the URL from a trusted source (i.e. the updates.rdf downloaded
from addons.mozilla.org over SSL) and then use the fingerprint to verify
that the data you get is actually the correct data. You can download it
over HTTP, from an "untrusted" site, because if you get the wrong data,
the implementation throws it away and tells you so.
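
[Sketch: the verify-then-discard step in Python. The exact Link
Fingerprints URL syntax is not reproduced here; the file bytes and
fingerprint are made up, and SHA-256 is used per the later discussion
of hash strength.]

```python
# Fingerprint check sketch: the expected hash arrives over a trusted
# channel (e.g. an https-served manifest), the file body over plain
# HTTP; wrong data is thrown away. File bytes and hash are made up.
import hashlib

def accept_download(data, expected_sha256_hex):
    # Discard the data (by raising) if it does not match the
    # published fingerprint.
    if hashlib.sha256(data).hexdigest() != expected_sha256_hex:
        raise ValueError("fingerprint mismatch: discarding download")
    return data

xpi = b"pretend XPI bytes"
fingerprint = hashlib.sha256(xpi).hexdigest()  # published by the author
assert accept_download(xpi, fingerprint) == xpi
```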

No changes would be required to the updates system (apart from update
authors having to specify the fingerprint when they register a new
update) and they can use any web host they like - cheap, free, whatever.
It's much easier to manage than certificates and I believe, for this
application, gives equivalent security.

Gerv

Nils Maier

Jun 18, 2007, 12:18:43 PM
Gervase Markham wrote:

Err, I actually started wondering when you'd get here advertising your
solution...

AMO already uses link-fingerprint like hash verification...

Link-Fingerprints cannot solve the problem of update.rdf-over-HTTP as
update.rdf is likely to update often and you cannot compute hash(es) for
it in advance.
So even if update.rdf included LF for the update XPI it wouldn't matter,
as update.rdf itself can be intercepted and the LF could be replaced or
removed.

If you're proposing that all extensions or at least their update.rdf
must be served via AMO, then no thanks.

Link-Fingerprints in general fail the requirements that Dave defined:


> What I want is to be able to establish some trust that the update file retrieved is correct, and has not been tampered with, intercepted and is as it was originally written by the add-on author.

* correct?
OK, possible at least in some scenarios.
* tampered with/intercepted?
It cannot be ensured that nobody has tampered with update.rdf.
* written by the add-on author?
No way for LF to solve this.

Nils

PS: the Link-Fingerprints "standard" says:
> Clients are encouraged not to implement any hash algorithms other than MD5 and SHA-256, until and unless SHA-256 is found to have flaws.
MD5 is broken, you can find collisions in just hours.
Therefore having authors provide md5 hashes doesn't really help either.

Dave Townsend

Jun 18, 2007, 12:23:51 PM

Sorry it doesn't matter for the scope of this thread, I was more
interested in general.

Dave

Dave Townsend

Jun 18, 2007, 12:29:39 PM
Gervase Markham wrote:
> Dave Townsend wrote:
>> What I want is to be able to establish some trust that the
>> update file retrieved is correct, and has not been tampered with,
>> intercepted and is as it was originally written by the add-on author.
>
> Link Fingerprints was designed for precisely this purpose, and is
> currently being implemented in Firefox by Ed Lee, who is sitting next to
> Dan Veditz:
> http://www.gerv.net/security/link-fingerprints/

No, this is really a different case from the ones where link fingerprints
are useful. The update manifest file cannot be hashed beforehand, i.e. in
version 1 of my extension I don't know the hash of the update manifest
in advance for when version 2 is released.

> You get the URL from a trusted source (i.e. the updates.rdf downloaded
> from addons.mozilla.org over SSL) and then use the fingerprint to verify
> that the data you get is actually the correct data. You can download it
> over HTTP, from an "untrusted" site, because if you get the wrong data,
> the implementation throws it away and tells you so.

Yes, for retrieving the final xpi a hash specified in the update manifest
is useful, and already implemented. And were there a possibility to host
third-party update manifests on addons.mozilla.org, then this could
work without any extra effort. Currently that is not in place; however, I
will be speaking to them to find out what possibilities exist along
those lines. Note, however, that some authors may not even want to do that
(I would certainly prefer not to).

Cheers

Dave

Nelson B

Jun 18, 2007, 3:37:34 PM
Eddy Nigg (StartCom Ltd.) wrote:
> Nelson B wrote:
>> Eddy Nigg (StartCom Ltd.) wrote:
>>> ...it will take some time until real hosting providers

>>> can rely on that and deploy without fear...just imagine supporting only
>>> 40% of all clients/browsers ;-)
>>>
>> That's 40% more than they support today, evidently.

> LOL...you mean todays hosting providers are providing support to 40% of
> their clients already? ;-)

No, I mean that today, very few hosting providers provide any SSL server
support at all, or do so only at greatly increased cost related to assigning
a fixed IP address, due to the perception that fixed IP addresses are
still required for https. I don't know of ANY hosting providers that even
attempt to use certs that include an amalgamation of served host names.
Consequently there are very few, almost no, https servers (~ 0%) operated
by hosting providers today.

If servers began to support SNI, and hosting providers allowed their
customers to provide certs for their individual virtual servers without the
cost of a fixed IP address, then, even if only 40% of the browser users
were able to make use of it, that would be ~40% more users able to access
https pages hosted by hosting providers than are available today.

--
Nelson B

Gervase Markham

Jun 19, 2007, 5:21:06 AM
Dave Townsend wrote:
> Gervase Markham wrote:
>> Dave Townsend wrote:
>>> What I want is to be able to establish some trust that the
>>> update file retrieved is correct, and has not been tampered with,
>>> intercepted and is as it was originally written by the add-on author.
>>
>> Link Fingerprints was designed for precisely this purpose, and is
>> currently being implemented in Firefox by Ed Lee, who is sitting next
>> to Dan Veditz:
>> http://www.gerv.net/security/link-fingerprints/
>
> No this is really a different case to where link fingerprints are
> useful. The update manifest file cannot be hashed before hand, i.e. in
> version 1 of my extension I don't know the hash of the update manifest
> in advance for when 2 is released.

Indeed not. I wasn't suggesting downloading the manifest using Link
Fingerprints, but the final download. I was under the impression that
all manifests were hosted on a.m.o.

> Yes for retrieving the final xpi a hash specified in the update manifest
> is useful, and already implemented.

OK, no problem then :-)

> And were there a possibility to host
> third party update manifests on addons.mozilla.org then this could work
> without any extra effort.

What do you mean by "third party manifests"? All the software on a.m.o.
is third party.

> Currently that is not in place, however I will
> be speaking to them to find out what possibilities exist along those
> lines.

This seems like the right solution to me. In fact, I had assumed it was
already the case, and that we were trying to solve the other half of the
problem.

Gerv

Gervase Markham

Jun 19, 2007, 5:25:24 AM
Nils Maier wrote:
> Link-Fingerprints cannot solve the problem of update.rdf-over-HTTP as
> update.rdf is likely to update often and you cannot compute hash(es) for
> it in advance.

I didn't suggest that it could.

> If you're proposing that all extensions or at least their update.rdf
> must be served via AMO, then no thanks.

What's the problem with centralising updates.rdf on a.m.o., for those
people who can't or won't host their own SSL site?

> Link-Fingerprints in general fail the requirements that Dave defined:
>> What I want is to be able to establish some trust that the update file retrieved is correct, and has not been tampered with, intercepted and is as it was originally written by the add-on author.
> * correct?

I misunderstood "update file" as the actual update data, rather than
updates.rdf. Hence my suggestion. Thanks for setting me straight so
politely.

> PS: the Link-Fingerprints "standard" says:
>> Clients are encouraged not to implement any hash algorithms other than MD5 and SHA-256, until and unless SHA-256 is found to have flaws.
> MD5 is broken, you can find collisions in just hours.

OK, then. I have a file with the following MD5:
6dfabd7a681569ac0b9d6b010bec88d6 /home/gerv/docs/hacking/doorbell.png
Please find any other file (I'll make it easy, and won't require it to
be a valid XPI with a trojan payload) with the same MD5 in "just hours",
and send me a copy as an attachment.

Gerv

Benjamin Smedberg

Jun 19, 2007, 8:30:41 AM
Gervase Markham wrote:

> This seems like the right solution to me. In fact, I had assumed it was
> already the case, and that we were trying to solve the other half of the
> problem.

We already support hashes specified by the update.rdf for the XPI, and AMO
uses this to serve the XPIs over HTTP. However, the issue at hand is when
the extension has nothing to do with AMO, and serves the update.rdf over
HTTP or the XPI over HTTP without specifying a hash.

--BDS

Nils Maier

Jun 19, 2007, 9:36:39 AM
Gervase Markham wrote:
> Nils Maier wrote:
> [...]

>> PS: the Link-Fingerprints "standard" says:
>>> Clients are encouraged not to implement any hash algorithms other
>>> than MD5 and SHA-256, until and unless SHA-256 is found to have flaws.
>> MD5 is broken, you can find collisions in just hours.
>
> OK, then. I have a file with the following MD5:
> 6dfabd7a681569ac0b9d6b010bec88d6 /home/gerv/docs/hacking/doorbell.png
> Please find any other file (I'll make it easy, and won't require it to
> be a valid XPI with a trojan payload) with the same MD5 in "just hours",
> and send me a copy as an attachment.
>
> Gerv

Good overview incl. some conclusions by the author:
http://cryptography.hyperlink.cz/md5/MD5_collisions.pdf
Certificates:
http://www.win.tue.nl/~bdeweger/CollidingCertificates/
http://www.win.tue.nl/hashclash/TargetCollidingCertificates/
Chosen Prefix (impractical ATM):
http://www.win.tue.nl/hashclash/ChosenPrefixCollisions/
Nice Demo (different executables, chosen suffixes):
http://www.mscs.dal.ca/~selinger/md5collision/
ISO format remarks (see Updates):
http://cryptography.hyperlink.cz/2004/collisions.htm

And just to give you that warm fuzzy feeling you hoped for when you
created your "challenge":
No, it isn't that easily possible to create a file for a given hash
(though they have found better ways than brute force already; see rainbow
tables). But that was never my point anyway (I talked about collisions)...
And it does not make md5 less broken than I claimed and researchers found
it to be.

Nils

Dave Townsend

Jun 19, 2007, 12:24:30 PM

Indeed, the issue is with add-on authors who do not want to host on AMO
(for a variety of quite valid reasons). A compromise allowing authors to
host their xpis on their own sites but the update.rdf on AMO or some
other Mozilla-provided secure site might be a potential solution, but I
think even this is not ideal from both the authors' and Mozilla's points of view.

Dave

Gervase Markham

Jun 20, 2007, 4:58:28 AM
Nils Maier wrote:
> But that was never my point anyway (I talked about collisions)... And
> does not make md5 less broken than I claimed and researchers found it was.

MD5 is not "broken" or "not broken" - it depends on your particular
application. In this case, the attacker would need to generate a valid
yet trojaned XPI for the same hash. This is much harder to do than the
limited attacks which have succeeded so far.

I entirely agree that if Link Fingerprints were used as part of the
solution here (which it seems they aren't needed for anyway), then
sha256 should be used. There's no reason not to.

Gerv

Gervase Markham

Jun 20, 2007, 5:01:29 AM
Benjamin Smedberg wrote:
> We already support hashes specified by the update.rdf for the XPI, and AMO
> uses this to serve the XPIs over HTTP. However, the issue at hand is when
> the extension has nothing to do with AMO, and serves the update.rdf over
> HTTP or the XPI over HTTP without specifying a hash.

Well, the latter we can just forbid - i.e. refuse to download the
update. There's no reason not to put the hash in the XPI. It doesn't
cost anything to get hash-generating tools :-)

Let's look at scenarios here. Someone wants to make their extension
available. They can either:

- Host it on a.m.o, for free, taking advantage of the security
infrastructure and download bandwidth

- Host it themselves, and pay $40-$60 per year for a fixed IP and
free/cheap SSL certificate, and some bandwidth to serve up the copies.

Who are we not serving here? If people don't want to pay any money to
put out an addon, a.m.o. is there for them. Why are we trying to solve
this problem? Let's just make updates.rdf over HTTPS compulsory.

Gerv

Gervase Markham

Jun 20, 2007, 5:02:11 AM
Dave Townsend wrote:
> Indeed, the issue is with add-on authors who do not want to host on AMO
> (for a variety of quite valid reasons).

Could you expand on what those reasons are?

> A compromise allowing authors to
> host their xpis on their own sites but the update.rdf on AMO or some
> other Mozilla provided secure site might be a potential solution, but I
> think even this is not ideal from both author's and Mozilla's point of
> view.

Why would this be not ideal?

Gerv

Dave Townsend

Jun 20, 2007, 6:47:23 AM
Gervase Markham wrote:
> Benjamin Smedberg wrote:
>> We already support hashes specified by the update.rdf for the XPI, and AMO
>> uses this to serve the XPIs over HTTP. However, the issue at hand is when
>> the extension has nothing to do with AMO, and serves the update.rdf over
>> HTTP or the XPI over HTTP without specifying a hash.
>
> Well, the latter we can just forbid - i.e. refuse to download the
> update. There's no reason not to put the hash in the XPI. It doesn't
> cost anything to get hash-generating tools :-)

Agreed, and my plan is to forbid updates without hashes; this just leaves
the issue of getting the update.rdf in a secure fashion.

> Let's look at scenarios here. Someone wants to make their extension
> available. They can either:
>
> - Host it on a.m.o, for free, taking advantage of the security
> infrastructure and download bandwidth
>
> - Host it themselves, and pay $40-$60 per year for a fixed IP and
> free/cheap SSL certificate, and some bandwidth to serve up the copies.
>
> Who are we not serving here? If people don't want to pay any money to
> put out an addon, a.m.o. is there for them. Why are we trying to solve
> this problem? Let's just make updates.rdf over HTTPS compulsory.

Yes, that plan allows everyone to host; however, we are forcing them down
a path they previously didn't want, i.e. hosting on AMO, or paying for
the privilege of writing extensions.

Don't get me wrong, I would almost love it if this was the chosen route,
it's a piece of cake to implement. The question is do we want to make
the implementation easy for us at the expense of making things harder
than they need to be for the extensions community?

Dave

Dave Townsend

Jun 20, 2007, 6:54:15 AM
Gervase Markham wrote:
> Dave Townsend wrote:
>> Indeed, the issue is with add-on authors who do not want to host on
>> AMO (for a variety of quite valid reasons).
>
> Could you expand on what those reasons are?

Some examples that I have heard (or experienced myself):

Long review times leading to slow updates for the users.
Dissatisfaction with the new Sandbox.
Poor download statistics.
Restrictions on what kind of add-ons they will host.
Restrictions on the application compatibility you can set.
Not appropriate for the market of the add-on (site specific add-ons etc.)

>> A compromise allowing authors to host their xpis on their own sites
>> but the update.rdf on AMO or some other Mozilla provided secure site
>> might be a potential solution, but I think even this is not ideal from
>> both author's and Mozilla's point of view.
>
> Why would this be not ideal?

Well, there would be an added burden on AMO (or whatever) to be able to
host those RDFs, and someone would have to develop and maintain whatever
system was used to update the files. And for authors it doesn't really
solve some of the problems with hosting on AMO itself, as above.

Dave

Kai Engert

Jun 20, 2007, 7:41:36 AM
Nelson B wrote:

> Dave Townsend wrote:
>
>> Nelson Bolyard wrote:
>>
>>> $18/year is too expensive, eh?
>>>
>> Heh, this is true. My attempts to find cheap SSL certificates had only
>> yielded $100/per year jobs. Given that they are not that expensive I
>> have started doing a straw poll of authors to see whether that would be
>> too much or not.
>>
>
> I have heard that SSL server certs are available for FREE from Startcom
> (one of the CAs already known to mozilla products) at this web page:
> http://cert.startcom.org/
>

Wouldn't he require an object-signing aka code-signing cert?
Are those available for a low price, too?

Kai

Gervase Markham

Jun 21, 2007, 5:12:48 AM
Kai Engert wrote:
> Wouldn't he require an object-signing aka code-signing cert?

Not as I understand it. We are talking about making sure that the
downloaded file is the correct file, not making sure that the code is
traceable back to a particular named individual. That's a separate issue.

Gerv

Gervase Markham

Jun 21, 2007, 5:15:10 AM
Dave Townsend wrote:
> Some examples that I have heard (or experienced myself):
>
> Long review times leading to slow updates for the users.
> Dissatisfaction with the new Sandbox.
> Poor download statistics.
> Restrictions on what kind of add-ons they will host.
> Restrictions on the application compatibility you can set.
> Not appropriate for the market of the add-on (site specific add-ons etc.)

OK. So instead of using our resources to fix these things, we are fixing
the problem that they can't afford $40 for SSL hosting?

a.m.o. isn't the best thing, but it's free. Hosting your own with SSL
isn't free, but it gives you more flexibility. I really think the two
options available here cover all the bases.

We seem to be planning to spend a great deal of resource to enable
people to not use our infrastructure, and also not spend $40. Is this
really cost-effective?

Gerv

Gervase Markham

unread,
Jun 21, 2007, 5:20:15 AM6/21/07
to
Dave Townsend wrote:
> Yes that plan allows everyone to host, however we are forcing them down
> a path they previously didn't want, i.e. hosting on AMO, or paying for
> the privilege of writing extensions.
>
> Don't get me wrong, I would almost love it if this was the chosen route,
> it's a piece of cake to implement. The question is do we want to make
> the implementation easy for us at the expense of making things harder
> than they need to be for the extensions community?

The question is: how much harder is "harder"? Anyone can write an
extension and make it available for free to the world today, paying not
a penny. OK, so the service isn't instantaneous, and they don't get
great stats. But it's free!

Let's also compare this with the digital signature solution proposed.
That doesn't make things harder in terms of money - anyone can generate
a key pair - but it does make things harder in terms of process
complexity, and the need to guard your key. It also has the potential
for a bad user experience if the addon author screws up the signing for
their latest update.

I'm really finding it hard to see the big win that all this effort
produces... Of course, I'm not the one who gets to tell you what to work
on :-)

Gerv

Eddy Nigg (StartCom Ltd.)

unread,
Jun 21, 2007, 5:31:55 AM6/21/07
to Gervase Markham, dev-tec...@lists.mozilla.org
Gervase Markham wrote:
> OK. So instead of using our resource to fix these things, we are fixing
> the problem that they can't afford $40 for SSL hosting?
>
> a.m.o. isn't the best thing, but it's free. Hosting your own with SSL
> isn't free, but it gives you more flexibility. I really think the two
> options available here cover all the bases.
>
> We seem to be planning to spend a great deal of resource to enable
> people to not use our infrastructure, and also not spend $40. Is this
> really cost-effective?
I agree! You can't have it both ways... Mozilla Add-ons provides a
reasonable solution which is also free for developers to use (except I
suggest improving the login procedures and switching to certificate
login instead of user/pass pairs). If one really needs to have the
application self-hosted, then the interested party obviously has to
cover the costs, which isn't really such a big deal either (if you are
picky then I guess you can afford it ;-)).

However, from the original question of this thread, I think the issue
was whether to require add-ons to be served from SSL-secured sites or
some other secure mechanism (policy-wise).

--
Regards

Signer: Eddy Nigg, StartCom Ltd.
Jabber: star...@startcom.org
Phone: +1.213.341.0390

Eddy Nigg (StartCom Ltd.)

unread,
Jun 21, 2007, 5:54:07 AM6/21/07
to Gervase Markham, dev-tec...@lists.mozilla.org
Gervase Markham wrote:
>
> The question is: how much harder is "harder"? Anyone can write an
> extension and make it available for free to the world today, paying not
> a penny. OK, so the service isn't instantaneous, and they don't get
> great stats. But it's free!
>
Gerv, I think the stats are something that could perhaps be improved?
How hard would this be? But I must confess that I don't have much idea
of how the Mozilla Add-ons site is organized etc...

> Let's also compare this with the digital signature solution proposed.
> That doesn't make things harder in terms of money - anyone can generate
> a key pair - but it does make things harder in terms of process
> complexity, and the need to guard your key. It also has the potential
> for a bad user experience if the addon author screws up the signing for
> their latest update.
>
> I'm really finding it hard to see the big win that all this effort
> produces...
I think security should be improved somewhat in that respect. I always
thought it funny that whenever I installed an extension in FF or TB,
this warning popped up saying the software isn't signed. However, to
date I have never encountered a signed one... it has become something of
a routine to wait for the Install button to appear...

But there are different options to achieve a better and more secure
mechanism perhaps. Certificates might be one (we could make an effort
and start providing ours faster if needed), but some hash embedded into
the software could be another... The correct hash could be served from a
secured web site, which would, by comparing the hashes, make it
reasonably secure?
Serving the download of the software itself via SSL isn't really the
best idea (even if possible); since the files are most likely served
from different mirrors I guess this is not a viable option. Besides, how
can one guarantee that the parent application (FF, TB etc.) wasn't
tampered with in the first place?
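The hash-comparison idea Eddy describes can be sketched in a few lines. This is a minimal illustration, not anything Mozilla shipped at the time; the function name and the choice of SHA-256 are my assumptions:

```python
import hashlib
import hmac

def verify_download(data: bytes, expected_hex: str) -> bool:
    """Compare a downloaded file's SHA-256 digest against the trusted
    hash, which would be fetched separately over a secure (https) channel."""
    actual = hashlib.sha256(data).hexdigest()
    # compare_digest avoids leaking, via timing, how many leading
    # characters matched (overkill here, but a good habit).
    return hmac.compare_digest(actual, expected_hex.lower())

# In the real flow the trusted hash would come from an https-served
# update manifest; here we hard-code the well-known SHA-256 of b"abc".
TRUSTED = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad"
print(verify_download(b"abc", TRUSTED))   # True
print(verify_download(b"abcd", TRUSTED))  # False
```

Whether the hash travels in an update manifest or is pinned inside the XPI, the comparison step is the same; the security rests entirely on the channel that delivers `expected_hex`.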

Dave Townsend

unread,
Jun 21, 2007, 6:19:56 AM6/21/07
to
Gervase Markham wrote:
> Dave Townsend wrote:
>> Some examples that I have heard (or experienced myself):
>>
>> Long review times leading to slow updates for the users.
>> Dissatisfaction with the new Sandbox.
>> Poor download statistics.
>> Restrictions on what kind of add-ons they will host.
>> Restrictions on the application compatibility you can set.
>> Not appropriate for the market of the add-on (site specific add-ons etc.)
>
> OK. So instead of using our resource to fix these things, we are fixing
> the problem that they can't afford $40 for SSL hosting?

I think it's foolish to think that any one add-on hosting solution is
going to be right for everyone. AMO is probably about what it needs to
be, right for 95% of people, and extra investment in making it right for
the other 5% is probably more work than making the update system right
for those other 5% of people.

> a.m.o. isn't the best thing, but it's free. Hosting your own with SSL
> isn't free, but it gives you more flexibility. I really think the two
> options available here cover all the bases.

It doesn't cover those who won't pay for SSL and don't want to host on
AMO. Yes, there are people saying they are in that situation. Numbers
are difficult to guess at though.

> We seem to be planning to spend a great deal of resource to enable
> people to not use our infrastructure, and also not spend $40. Is this
> really cost-effective?

At this stage there's been no decision on which way to go. I'm hoping to
draw this discussion to a close in the next day or so, then post some
kind of conclusion, and then the decision can be made. In order to do
that I was really hoping this thread would produce some suggestions on
my initial proposal. So far I've only seen one post which suggests that
it can work. Given that, I need to look at just how much work it would
be to implement before we can decide whether it is worth implementing in
order to continue to support those who need it.

Dave

Nils Maier

unread,
Jun 21, 2007, 9:12:57 AM6/21/07
to
Dave Townsend wrote:

Addressing gerv:
Just to give a counter-example for "these two options cover all bases":
we already have a "nightlies" repository for dTa, including an
update.rdf always pointing to the latest nightly.
The audience for this is obviously pretty limited. I'm not willing to
spend more effort and/or money on it just to get the stuff into the AMO
sandbox (hard to use) or to get a cert and set up an SSL server.
Oh, and we will likely publish 2 betas before 1.0 and additional RCs...
not really suitable for AMO either.

Another example:
I have some extensions I give to buddies, instructing them not to share
them as I want to control the distribution.
Nothing illegal, however. One is e.g. for a small private forum; maybe
100-150 folks use it. Not suited for AMO obviously; I would go nuts if I
had my search results "polluted" by useless-to-me site-specific
extensions (I actually hate all these xy-toolbars, or xy-helpers, that
are already there, and I'm not willing to add to that problem). Unless
this drastically changes, AMO is not an option, and SSL certs aren't
either.

And what about new/prototype extensions not yet ready to be consumed by
general public?

Do you really want to wipe out these use cases (and therefore a good
number of extensions) and limit FX/TB/SM/XULRunner as platforms for
innovation just to introduce a lazy-devs workaround for a theoretical
and practically hard-to-exploit MITM attack vector?
Now that would sound like the overreaction and misguided risk management
described by Bruce Schneier in that essay eric.jung copied over at
m.d.extensions.

Therefore I agree with Dave that this would be foolishness, not to say
destructive.

Addressing Dave's demand for proposals:
If there is no workable solution then don't implement one.
It isn't as if there are armies of evil folks out there lurking at wifi
hotspots just to intercept update requests.
Furthermore, the first installation could be hijacked as well, so until
that is addressed the problem persists anyway.
Make it opt-in, not mandatory or opt-out, if there is no solution in
reach which would satisfy all requirements.

Try to promote workable solutions, evangelize authors, to use what is
available already (extension code signing, SSL install/updates). Most if
not all high-impact (or better popular) extensions will likely implement
secure updates, especially if they get some guidance from mozilla.
Those are the ones with the highest risk of being "attacked" anyway.

That community program that gives away software/hardware could be
extended to provide popular extensions with (money for) code-signing
certificates...

Maybe make it easy for authors to add their own self-signed cert that
could check updates.
I'm thinking of a FUEL-esque way to easily add those certificates, and
tools that make creating certs and signing easier (all those "help me
sign that XPI" threads clearly indicate that it is not easy enough yet).
Maybe even allow self-signed certificates to be used for installation,
then ask the user to import the certificate if he trusts it. (Admittedly
this raises the question of whether users will understand such a
confirmation dialog and not just OK it away.)
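The "import the author's certificate on first install, then trust it for updates" idea is essentially trust-on-first-use, the same model SSH's known_hosts uses. Here is a stdlib-only sketch; all names (including the add-on id) are hypothetical, and a real implementation would persist the store and verify actual signatures rather than just fingerprints:

```python
import hashlib

# addon id -> fingerprint of the author's public key, recorded at
# first install (trust-on-first-use).
_keystore: dict = {}

def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()

def record_on_install(addon_id: str, public_key: bytes) -> None:
    """First install: remember which key this add-on shipped with."""
    _keystore[addon_id] = fingerprint(public_key)

def key_ok_for_update(addon_id: str, public_key: bytes) -> bool:
    """Updates must present the same key, otherwise reject them."""
    return _keystore.get(addon_id) == fingerprint(public_key)

record_on_install("dta@downthemall.net", b"author-key-bytes")
print(key_ok_for_update("dta@downthemall.net", b"author-key-bytes"))    # True
print(key_ok_for_update("dta@downthemall.net", b"attacker-key-bytes"))  # False
```

The trade-off the thread raises applies here too: the first install is still unauthenticated, and the user-facing "do you trust this key?" prompt only helps if users read it.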

My observation is that none of the distribution systems has fully solved
this trust issue yet.
E.g. dpkg/rpm will warn you but still allow you to install
unsigned or unknown-key-signed packages.

Nils

Dave Townsend

unread,
Jun 21, 2007, 9:27:37 AM6/21/07
to
Nils Maier wrote:
> Addressing Dave's demand for proposals:

Sorry but I didn't actually demand proposals. I gave one and asked for
opinions on it. I am of course open to other proposals and a few have
been given.

> If there is no workable solution then don't implement one.

As far as I can tell there is one, I need to investigate the code more
though.

> It isn't like there are armies of evil folks out there lurking on wifi
> hotspots just to intercept update requests.

Maybe not, but given the working demonstration, if we choose to do
nothing about it who can say that people won't start to give it a try?

> Furthermore the first installation could be hijacked as well, so until
> this is addressed the problem persists anyway.

Correct, but I'm not convinced it matters which order we tackle these
two problems in.

> Make it opt-in, not must or opt-out, if there is no solution in reach
> which would satisfy all requirements.

It will absolutely not be opt-in; that would be equivalent to doing
nothing. I seriously hope it will not be opt-out either, so yes, I am
looking for a solution that applies all the time. The only exception to
this could be local machine development.

> Try to promote workable solutions, evangelize authors, to use what is
> available already (extension code signing, SSL install/updates). Most if
> not all high-impact (or better popular) extensions will likely implement
> secure updates, especially if they get some guidance from mozilla.
> Those are the ones with the highest risk of being "attacked" anyway.

This is already in progress I believe.

> That community program that gives away software/hardware could be
> extended to provide popular extensions with (money for) code-signing
> certificates...

Worth a thought, I shall flag this for Seth's attention.

> Maybe make it easy for authors to add their own self-signed cert that
> might check updates.
> Thinking of a fuel-esque way to easily add those certificates, and tools
> that make creating certs and signing more easy (all those "help me
> signing that xpi" threads clearly indicate that it is not easy enough yet).
> Maybe even allow to use self-signed certificates for installation, then
> asking the user to import the certificate if he trust it. (Admittedly
> this raises the question if the users will understand such
> confirmation-dialog and not just OK-it away).
>
> My observation is that none of the distribution systems solved this
> trust-issue fully yet.
> E.g. DPKG/RPM will warn you but still allow you to install
> unsigned/unknown-key-signed packages.

I wouldn't consider that much of an indicator. dpkg/rpm aren't exactly
for novice users. It would be interesting to know why they allow you to
override the security though. As I understand it windows updates are
signed and rejected if they fail the signature check.

Dave

Nils Maier

unread,
Jun 21, 2007, 11:05:26 AM6/21/07
to
Dave Townsend wrote:

> Nils Maier wrote:
>> Addressing Dave's demand for proposals:
>
> Sorry but I didn't actually demand proposals. I gave one and asked for
> opinions on it. I am of course open to other proposals and a few have
> been given.
>
>> If there is no workable solution then don't implement one.
>
> As far as I can tell there is one, I need to investigate the code more
> though.
>
>> It isn't like there are armies of evil folks out there lurking on wifi
>> hotspots just to intercept update requests.
>
> Maybe not, but given the working demonstration and if we choose to do
> nothing about it who can say that people won't start to give it a try?

This threat is greatly exaggerated IMO.
There are far easier ways to get into a user's system anonymously than
lurking around wifi hotspots with the fear of getting caught.
From a (criminal) business point of view the risk and effort simply do
not match the revenue.
So I could imagine some script kiddies playing around and performing
such attacks, but not a serious bot herder or identity thief doing this.
But you're right of course, you never know beforehand ;)

>> Furthermore the first installation could be hijacked as well, so until
>> this is addressed the problem persists anyway.
>
> Correct, but I'm not convinced it matters which order we tackle these
> two problems in.

I was just saying that it doesn't make any sense to rush a solution for
updates instead of taking the time to get it right (and I fear rushing
is what could happen here) when the other major MITM attack vector is
still wide open.

>> Make it opt-in, not must or opt-out, if there is no solution in reach
>> which would satisfy all requirements.
>
> It will absolutely not be opt-in, that would be equivalent to doing
> nothing. I seriously hope it will not be opt-out either, so yes I am
> looking for a solution that applies all the time. The only exception to
> this could be local machine development.

Opt-in is not the same as "nothing". A damn lot of extension authors
will still opt in, limiting the possible targets and thus changing the
effort/revenue calculation the bad guys make, maybe down to the point
where it does not make sense to take the effort and risk of mounting a
MITM attack against the update service at all.
Remember, we are talking about just a small fraction of extensions that
use insecure updates, and providing workable opt-in solutions and
evangelizing authors will lower that small fraction even further.
It is still a security-versus-usability trade-off, but making/keeping
the numbers, and therefore the probability of an attack, low might be
secure "enough".


>
>> Try to promote workable solutions, evangelize authors, to use what is
>> available already (extension code signing, SSL install/updates). Most if
>> not all high-impact (or better popular) extensions will likely implement
>> secure updates, especially if they get some guidance from mozilla.
>> Those are the ones with the highest risk of being "attacked" anyway.
>
> This is already in progress I believe.

Yeah, but still these efforts could be maximized.

Starting with the devmo documentation:
I don't really see any real information about it, e.g. on devmo. There
is a warning in "Install manifests" suggesting the use of https, but no
further explanation besides a generic MITM remark. Not all authors know
what this means or what its implications are.

http://developer.mozilla.org/en/docs/Code_snippets:Signing_a_XPI is hard
to understand and has red text all over it.
There is no information about CAs that offer code-signing certs. I went
out to find and compare offers for this, but it was incredibly hard to
track down CAs and their pricing.

So somebody with real knowledge and some writing skills would need to
improve these articles and examples.

>> That community program that gives away software/hardware could be
>> extended to provide popular extensions with (money for) code-signing
>> certificates...
>
> Worth a thought, I shall flag this for Seth's attention.
>
>> Maybe make it easy for authors to add their own self-signed cert that
>> might check updates.
>> Thinking of a fuel-esque way to easily add those certificates, and tools
>> that make creating certs and signing more easy (all those "help me
>> signing that xpi" threads clearly indicate that it is not easy enough
>> yet).
>> Maybe even allow to use self-signed certificates for installation, then
>> asking the user to import the certificate if he trust it. (Admittedly
>> this raises the question if the users will understand such
>> confirmation-dialog and not just OK-it away).
>>
>> My observation is that none of the distribution systems solved this
>> trust-issue fully yet.
>> E.g. DPKG/RPM will warn you but still allow you to install
>> unsigned/unknown-key-signed packages.
>
> I wouldn't consider that much of an indicator. dpkg/rpm aren't exactly
> for novice users. It would be interesting to know why they allow you to
> override the security though. As I understand it windows updates are
> signed and rejected if they fail the signature check.
>
> Dave

Windows updates don't really apply; MS and only MS issues them.
However, there is not even a unified install/update mechanism for
Windows. There is MSI (which is not secure by default but can be made
secure, similar to dpkg/rpm), the Mozilla updater, ... There are
thousands of different application-specific mechanisms in fact, and I
guess most of them aren't secure.
Oh, and IIRC if you don't use automatic updates but pull them directly
from microsoft.com/download then MITM attacks similar to what we discuss
here are still possible (last time I checked they didn't have https
download servers, nor were those MSI packages required to be signed).

dpkg/rpm, however, are issued by different entities and maintained by
different entities (e.g. mirrors run by universities).
The first thing a lot of Ubuntu users will do is install Automatix or
similar. The corresponding deb is signed IIRC, but the key is unknown.
So even novice users are likely to be affected by this. ;)
You would either have to disallow those completely (not an option), or
require that the key be imported manually prior to installation, but
that is just the same overriding of security/trust, although admittedly
harder for n00bs to do (and therefore bad for user experience).

Disabling installation/updates of unsigned packages would create an
uproar, as it takes away the user's freedom (which cannot be tolerated
in F/OSS) and creates a de-facto monopoly where the distro makers would
effectively decide which software a regular user may run.
I could imagine further side implications, like courts ordering that
certain software be prevented from being installed because of legal
issues (like MPEG-2 codecs, which are covered by patents) although the
user either has a valid license or lives somewhere where it isn't
illegal (like Germany, where software patents are not enforceable ATM).

This whole issue can be seen as a case of proposed DRM, crafted with
good intentions, but which could easily turn bad if not crafted
carefully enough.

Nils

Nelson B

unread,
Jun 21, 2007, 3:49:12 PM6/21/07
to

As I understand it, presently downloads of Mozilla add-ons are
validated not with code signatures but by the following method:
a hash of the file is stored on an https server operated by Mozilla;
the actual file may be downloaded from anywhere, by any means including
ftp, but it must have the expected hash to be accepted. (OK so far.)

This scheme is intended to make code signatures unnecessary. (Someone
at Mozilla is allergic to code signing, evidently.) But it comes at the
cost that Mozilla must be given the new hashes for any new add-ons and
any new updates to add-ons.

The issue has to do with automated downloading of updates to addons.
Some makers of addons don't want to have to coordinate distribution of
addon updates with mozilla. They don't want to have to give new hashes
to mozilla for every update, as I understand it. So mozilla wants to
accommodate them, WITHOUT requiring them to sign their code.

So, the current idea being explored (as I understand it) is to allow others
to automatically download their addons (or rather, updates to their addons)
from their own servers, as long as their own servers are https servers.

I think all this aversion to code signing merely demonstrates a lack of
commitment to user security. (Frankly, Microsoft now seems to have the
stronger position with respect to requiring strongly authenticated code
for downloading of updates.) But I am merely an observer of all this
evolution.

--
Nelson B

Nils Maier

unread,
Jun 22, 2007, 2:26:03 AM6/22/07
to
Nelson B wrote:

> As I understand it, presently the downloads of mozilla addons are
> validated not with code signatures but by the following method:
> A hash of the file is stored on an https server operated by mozilla,
> the actual file may be downloaded from anywhere, by any means including
> ftp, but it must have the expected hash to be downloaded. (OK so far.)

Right.

> This scheme is intended to make code signatures unnecessary. (Someone
> at mozilla is allergic to code signing, evidently.) But at the cost that
> mozilla must be given the new hashes for any new addons and any new updates
> to addons.

No, AMO hashes the XPI once it's uploaded. This is an automated process;
the author does not have to do any hashing himself when using AMO.

Code signing isn't exactly free. The author has to spend money and
effort obtaining the certificate, maintaining it, and learning how to
sign his XPI, which isn't easy at the moment either.

Automated signing with a Mozilla-owned certificate isn't an option
either. This would put a big fat "mozilla" tag on third-party software.
Not to mention the problem of maintaining that cert (store the cert
somewhere without a password, or store that password somewhere too?)

> The issue has to do with automated downloading of updates to addons.
> Some makers of addons don't want to have to coordinate distribution of
> addon updates with mozilla. They don't want to have to give new hashes
> to mozilla for every update, as I understand it. So mozilla wants to
> accommodate them, WITHOUT requiring them to sign their code.

Right, some people don't want to use AMO for distribution of their
extensions, but not because they would have to provide hashes (see
above). The reasons not to use AMO are various: simply not liking AMO,
functionality AMO lacks in some areas like stats, extensions not
suitable for AMO, getting new extensions to testers, etc. A post by Dave
summarizes most of the reasons, and over at m.d.extensions authors
discussed a lot of reasons not to use AMO.

> So, the current idea being explored (as I understand it) is to allow others
> to automatically download their addons (or rather, updates to their addons)
> from their own servers, as long as their own servers are https servers.

Not exactly. What is currently being discussed are ways to let authors
host updates themselves in a secure manner.
Installation is out of scope of this discussion for now.
Proposals so far include https-only updates and signed updates, either
with GPG-like public keys or with self-signed certificates (with
critical extensions) that are provided by the initial installation XPI.
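The "GPG-like public keys" variant boils down to: ship the public key inside the install XPI, sign each update manifest with the author's private key, and verify before applying. Below is a toy sketch using the classic textbook RSA parameters (p=61, q=53), with the digest reduced mod n so it fits the tiny modulus; it is purely illustrative, with no padding and an absurdly small key, so never use it for real security. The manifest contents are made up:

```python
import hashlib

# Textbook RSA demo key (p=61, q=53): n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753

def _digest(data: bytes) -> int:
    # Reduce the SHA-256 digest mod N so it fits the toy modulus.
    # A real scheme signs a padded full digest (e.g. RSASSA-PSS).
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def sign(manifest: bytes) -> int:
    """Author side: sign the update manifest with the private exponent D."""
    return pow(_digest(manifest), D, N)

def verify(manifest: bytes, signature: int) -> bool:
    """Client side: check with the public exponent E shipped in the XPI."""
    return pow(signature, E, N) == _digest(manifest)

manifest = b'<update version="1.1" link="http://example.org/addon-1.1.xpi"/>'
sig = sign(manifest)
print(verify(manifest, sig))            # True
print(verify(manifest, (sig + 1) % N))  # False
```

The shape of the protocol (public key in the install package, signature alongside the update manifest) is what the thread is debating; a real deployment would differ only in key sizes, padding, and key management.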

> I think all this aversion to code signing merely demonstrates a lack of
> commitment to user security. (Frankly, Microsoft now seems to have the
> stronger position with respect to requiring strongly authenticated code
> for downloading of updates.) But I am merely an observer of all this
> evolution.
>

The aversion to code signing lies more in the money, effort and
knowledge required for it.
These requirements would stall extension development a lot if code
signing were mandatory. And IMO the risks and impact associated with
such a MITM attack aren't worth stalling extension development that
much.

So we (under Dave's lead) are looking for ways to secure updates without
burdening extension developers that much.

Oh, and MS isn't actually that much better. Downloads from
microsoft.com/downloads are affected by the very same MITM attack that
brought up this discussion in the first place.

Nils

Gervase Markham

unread,
Jun 22, 2007, 6:20:33 AM6/22/07
to
Nelson B wrote:
> This scheme is intended to make code signatures unnecessary. (Someone
> at mozilla is allergic to code signing, evidently.) But at the cost that
> mozilla must be given the new hashes for any new addons and any new updates
> to addons.

Not allergic; we don't want to accept sucky code-signing certs, and we
don't want app authors to have to pay a lot of money for non-sucky ones.

Gerv

Gervase Markham

unread,
Jun 22, 2007, 6:22:30 AM6/22/07
to
Dave Townsend wrote:
> It doesn't cover those that won't pay for the SSL and don't want to host
> on AMO. Yes there are people saying they are in that situation. Numbers
> are difficult to guess at though.

I'm sure there are people saying they are in that situation; there are
people who want something for nothing everywhere you look :-)

> At this stage there's been no decision on which way to go. I'm hoping to
> draw this discussion to a close in the next day or so and then post some
> kind of conclusion and then the decision can be made. In order to do
> that I was really hoping this thread to come out with some suggestions
> on my initial proposal. So far I've only seen one post which suggests
> that it can work. Given that I need to look at just how much work it
> would be to implement before we can decide if it is worth implementing
> it in order to continue to support those who need it.

OK. Well, you know what I think, so I'll leave you to cogitate :-)

Gerv

Jean-Marc Desperrier

unread,
Jun 22, 2007, 8:17:07 AM6/22/07
to

I agree. *Therefore* Mozilla.org needs to have its own code-signing
authority, and only accept code signed by it. You have all the
competence needed in this group to help you set it up.

Gervase Markham

unread,
Jun 22, 2007, 12:22:00 PM6/22/07
to
Jean-Marc Desperrier wrote:
> I agree. *Therefore* Mozilla.org need to have it's own code signing
> authority, and only accept code signed by it. You have all the
> competence needed on this group to help you set it up.

Where in this group is there competence and experience in worldwide
identity vetting and validation?

My definition of a "sucky" code signing cert is one in which the
information inside about the owner of the cert isn't accurate.

Gerv

Arrakis

unread,
Jun 22, 2007, 4:20:48 PM6/22/07
to dev-tec...@lists.mozilla.org
Why not use digital certificates provided by CAcert? They are free, and
have high levels of assurance, as opposed to CAs like Verisign that have
little to no assurance and charge a ransom.

> _______________________________________________
> dev-tech-crypto mailing list
> dev-tec...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-tech-crypto
>

Nils Maier

unread,
Jun 22, 2007, 5:33:39 PM6/22/07
to
Eddy Nigg (StartCom Ltd.) wrote:

> Nils Maier wrote:
>> The aversion to code signing lies more in the money, effort and required
>> knowledge associated with it.
> Hypothetical question: If Mozilla or an independent organization could
> provide this service for free and reduce the efforts required to a
> minimum, would this solve the problem? Would the various applications,
> add-ons etc be digitally signed from then on (policy wise)?
>
I cannot speak for everybody else, but it would solve the technical
issues for me, at least to the point where I can "live" with it.
I would be able to host some of my extensions myself, signed of course.
If it were easy to sign stuff then signing test versions wouldn't be a
problem either, so that problem would be gone too.

But to be honest, I don't really see signing for everyone under a single CA.

This would require (IMO) having various CAs that provide such
certificates for free, not just one (or two that are deeply affiliated,
for that matter).
The reason for this is quite simple: one is a monopoly.

See TiVo/DRM/GPL.

And somebody could sue Mozilla, e.g. over some extension infringing some
patent. I wouldn't normally care, as I'm a German citizen and software
patents aren't enforceable here and in many other countries.
But a court ordering the MoCo CA to revoke the cert would still prevent
me from distributing my legal extension.

Oh, and nothing prevents the Mozilla CA or an independent CA from saying
"well, we figured it would be best for our profits to charge for our
certs; awaiting your payments starting next month".

This stuff would affect https-only updates as well, but at least there
is some competition/choice in the SSL-Cert CA market.


Please note: this was just a more or less quick brainstorm. I might be
totally incorrect, missing the point or overlooking something totally
important ;)

Greets
Nils

Alaric Dailey

unread,
Jun 22, 2007, 5:53:38 PM6/22/07
to dev-tec...@lists.mozilla.org
Arrakis wrote:
> Why not use digital certificates provided by CACert. They are free, and
> have high levels of assurity, as opposed to a CAs like Verisign that
> have little to no assurity, and charge a ransom.
>
>
>
Well:

1) Not audited.
2) Not in Mozilla, or most other browsers, because of #1.
3) Assurance is burdensome, and impossible for some.
4) Assurances are made by hobbyists; such a system can easily be bought
off or compromised through carelessness, because the assurers have no
accountability.



--
*Pengdows, Inc.*

Everyone deserves privacy.

Pengdows, Inc. <http://www.pengdows.com> Alaric Dailey - President

* StartCom 'Web of Trust' Member <http://www.startssl.org>
* Thawte 'Web of Trust' Notary
<http://www.thawte.com/secure-email/web-of-trust-wot/index.html>
* Notary Public and NNA member <http://www.nationalnotary.org/>
* CAcert 'Web of Trust' Assurer <http://www.cacert.org/wot.php?id=3>


Gervase Markham

unread,
Jun 25, 2007, 4:27:45 AM6/25/07
to
Arrakis wrote:
> Why not use digital certificates provided by CAcert? They are free, and
> offer high levels of assurance, as opposed to CAs like Verisign that
> offer little to no assurance and charge a ransom.

Because CAcert have not applied for inclusion into (and therefore,
obviously, not been accepted into) the Mozilla root store, and so their
certs are not trusted.

Gerv

Jean-Marc Desperrier

unread,
Jun 25, 2007, 12:02:24 PM6/25/07
to
Gervase Markham wrote:
> My definition of a "sucky" code signing cert is one in which the
> information inside about the owner of the cert isn't accurate.

It's a bad definition of a "sucky" code signing certificate.

You don't care *who* the owner of the cert is. What you care about is if
he intends to use his signing cert to distribute spyware extensions. And
his identity tells you nothing about that.

Cf http://www.schneier.com/crypto-gram-0402.html#6
"In an ideal world, what we'd want is some kind of ID that denotes
intention." "This is, of course, ridiculous, so we rely on identity as a
substitute. In theory, if we know who you are, and if we have enough
information about you, we can somehow predict whether you're likely to
be an evildoer"
Schneier then goes on to explain how this doesn't really work in the
real world. But here you're below even that: you only have the identity,
and nothing on which to base a prediction of whether the owner is likely
to be an evildoer. And when I say that you have the identity, I'm being
generous. I'm afraid that with certain CAs, a personal code-signing
certificate only proves that the credit card transaction made to obtain
it has not yet been repudiated. Or that you have not yet found out it
was charged to a very recent bank account opened with a fake ID.

What you'd really want is some process to review the requester (or his
code) before granting him the code-signing certificate. You would want
to freely customize that process to find out what works best and what
the best compromises are between convenience and security.
But we know in advance that no process will be perfect. So what's really
important is to have an absolute guarantee that a certificate gets
revoked as soon as you decide it should, plus a very efficient
dissemination process for revocation information; relying on the user
downloading tens of CRLs from various CAs will never fit the bill.

All those things will work infinitely better with a private
certificate authority than with the various public code-signing
authorities that exist.

Also, regarding worldwide identity vetting and validation: there are few
public certificate authorities that can really do that. If it were
really needed, I think it would be easier for the Mozilla community to
find a recognised community member to do face-to-face authentication in
almost any place on the planet than it would be for them.

Gervase Markham

unread,
Jun 26, 2007, 5:11:00 AM6/26/07
to
Jean-Marc Desperrier wrote:
> You don't care *who* the owner of the cert is. What you care about is if
> he intends to use his signing cert to distribute spyware extensions. And
> his identity tells you nothing about that.

No, but it does tell you whose door the police can go knocking on if he
logs into your online banking and steals all your money.

Identity is a reasonable proxy for intention, because criminals don't
want to be caught.

> What you'd really want is some process to review the requester (or his
> code) before granting him the code signing certificate.

Except that you would need to review all the code before it was signed,
not just at the beginning, and (in the case of malicious intent) find
things the code did which the code author was intending to hide from
you. Which is impractically expensive and time-consuming.

> But we know in advance no process with be perfect. So what's really
> important is to have the absolute garantee that his certificate gets
> revoked as soon as you decide it should. And very efficient
> dissemination process for revocation information, relying on the user
> downloading tens of crl from various CAs will never fit the bill.

So it seems that you are suggesting that we should issue code-signing
certificates to anyone who wants them, and use revocation to pull out
the bad actors?

The problem with that is that because there's no strong identity, the
bad actor will just go back and get another code-signing cert from you
and repeat the process.

Gerv

Eddy Nigg (StartCom Ltd.)

unread,
Jun 26, 2007, 7:06:02 AM6/26/07
to Gervase Markham, dev-tec...@lists.mozilla.org
Gervase Markham wrote:
>
> No, but it does tell you whose door the police can go knocking on if he
> logs into your online banking and steals all your money.
>
> Identity is a reasonable proxy for intention, because criminals don't
> want to be caught.
> Except that you would need to review all the code before it was signed,
> not just at the beginning, and (in the case of malicious intent) find
> things the code did which the code author was intending to hide from
> you. Which is impractically expensive and time-consuming.
>
Absolutely right! This is the logic behind code-signing certificates,
something which many here seem to ignore...

>> But we know in advance no process with be perfect. So what's really
>> important is to have the absolute garantee that his certificate gets
>> revoked as soon as you decide it should. And very efficient
>> dissemination process for revocation information, relying on the user
>> downloading tens of crl from various CAs will never fit the bill.
One note concerning that: a CRL gets downloaded whenever a certificate
from the specific CA is encountered. Also, CRLs are valid for a certain
time, so there isn't a need to "download tens of CRLs"! Obviously CRLs
are not really flexible; that's why OCSP responders are (going to be)
used. OCSP provides almost instant information about the validity of a
certificate and will be used by default in Firefox 3.
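The CRL-reuse point above can be sketched as a simple cache: a client keeps a downloaded CRL keyed by URL and only re-fetches it once the validity window has passed. This is a minimal illustration, not how NSS actually behaves; the fetch callable and the fixed validity period are hypothetical placeholders (a real client would honor the nextUpdate field inside the CRL itself).

```python
import time

class CRLCache:
    """Toy cache: a CRL, once fetched, is reused until its validity
    window expires, so the client need not re-download it on every
    certificate check."""

    def __init__(self, fetch, validity_seconds=3600, clock=time.time):
        self._fetch = fetch          # callable: url -> CRL bytes (hypothetical)
        self._validity = validity_seconds
        self._clock = clock
        self._store = {}             # url -> (expires_at, crl_bytes)

    def get(self, url):
        now = self._clock()
        entry = self._store.get(url)
        if entry and entry[0] > now:         # still within validity window
            return entry[1]
        crl = self._fetch(url)               # re-download only when stale
        self._store[url] = (now + self._validity, crl)
        return crl
```

Calling `get` twice for the same URL within the validity window performs only a single download, which is the point Eddy makes above.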

But I don't believe that the Mozilla Foundation has an interest in
running a CA, because this entails much more than publishing a
CRL... Usually those are complex systems with very high security
requirements and regulations. I guess Gerv has a lot of knowledge
on this subject and can confirm that this isn't an option...

Bill

unread,
Jun 26, 2007, 8:28:14 AM6/26/07
to
> OCSP provides almost instant information about the validity of a
> certificate...

This depends on the OCSP implementation; the use of OCSP does not
automatically equate to 'real-time' or (in some cases) even
'moderately-close-to-real-time' certificate status. If the responder
is referencing a CRL for status info, the data will only be as fresh
as the CRL it references.
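Bill's caveat can be pictured with a toy freshness check: whatever the transport, the client can only judge staleness from the thisUpdate timestamp of the underlying data, so for a CRL-backed OCSP responder the backing CRL's timestamp is what matters. The max-age policy here is a hypothetical local choice, not something the OCSP protocol mandates.

```python
from datetime import datetime, timedelta

def status_is_fresh(this_update, max_age=timedelta(hours=1), now=None):
    # For a CRL-backed OCSP responder, pass the backing CRL's
    # thisUpdate here -- the OCSP response can be no fresher than that.
    # max_age is a hypothetical local policy threshold.
    if now is None:
        now = datetime.utcnow()
    return now - this_update <= max_age
```

A response produced thirty minutes ago passes a one-hour policy; one whose backing CRL is a day old does not, no matter how recently the OCSP responder repackaged it.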

Personally, I don't agree with such implementations of OCSP...but they
do exist.

Cheers,

Bill

Jean-Marc Desperrier

unread,
Jun 27, 2007, 9:09:28 AM6/27/07
to
Gervase Markham wrote:
> Jean-Marc Desperrier wrote:
>> You don't care *who* the owner of the cert is. What you care about is
>> if he intends to use his signing cert to distribute spyware
>> extensions. And his identity tells you nothing about that.
>
> No, but it does tell you whose door the police can go knocking on if he
> logs into your online banking and steals all your money.

How effective has this approach been until now to block spam and spyware?
Is it really that difficult to identify those responsible for it,
especially in the case of spamware? Do you really think it's much harder
to escape liability after buying a signing certificate than when
operating a large-scale spam platform? How hard, and how long, will it
be to convince a judge to take the right decision between one side's
"spyware" and the other side's "innovative marketing technique"?

In real life, "we'll sue" simply doesn't work to deter spyware, and MoFo
just doesn't want to wage this fight, as proven by this:
http://weblogs.mozillazine.org/asa/archives/2007/03/please_get_fire.html
(I mean, given that you know this exists, where is your action to get
judicial injunctions to make it stop?)

> Except that you would need to review all the code before it was signed,
> not just at the beginning, and (in the case of malicious intent) find
> things the code did which the code author was intending to hide from
> you. Which is impractically expensive and time-consuming.

Code review is certainly impracticable, and on the front of determining
what should be signed, things are not really rosy either.

But I think functional review would bring more than you might believe.
It's a significant barrier to entry if someone who wants to distribute a
spyware extension first needs to integrate real functionality into it
that can convince a reviewer it's useful enough for the general public
to deserve being signed (there would be documented alternatives to
signing when the extension is only useful to a closed group of people).
Extensions, of course, would not be accepted when there's already
another extension that does the same thing, or one that differs so
little that it would be better to provide a patch to it.

And when you say rejecting malicious extensions is impracticable,
doesn't AMO already have to handle this category of problem when
deciding whether an extension should be accepted on the site?

> So it seems that you are suggesting that we should issue code-signing
> certificates to anyone who wants them, and use revocation to pull out
> the bad actors?

In short, yes. Except that it's not to anyone who wants them: you have
the final say over who gets them, and if you find experimentally that
you have to be very restrictive to avoid a bad experience for your
users, then you'll just do that.

> The problem with that is that because there's no strong identity, the
> bad actor will just go back and get another code-signing cert from you
> and repeat the process.

With the functional review I suggest above, he will first have to create
a different, functionally useful extension. I don't think there will be
many cycles; or at least, it will have the added value of leading to the
creation of many good extensions :-)

The point is that it's not *that* difficult for the bad actors to do the
same with the existing commercial code-signing CAs. And in that case,
the seriousness of the CAs and the effectiveness of the legal threat are
your only lines of defense. Using revocation to pull out the bad actors
will be a lot more difficult to make really work and be effective.

Of course, you can also choose to make deals to accept only a few
selected commercial CAs for which a highly effective revocation channel
will be in place (in addition to a high-quality identity review process,
and only accepting requests from selected countries where the legal
threat will be effective).

Eddy Nigg (StartCom Ltd.)

unread,
Jun 27, 2007, 6:56:10 PM6/27/07
to Jean-Marc Desperrier, dev-tec...@lists.mozilla.org
Hi Jean-Marc,

Jean-Marc Desperrier wrote:
>
> How effective has this approach been until now to block spam and spyware?

Errrr... how many times did you encounter and install signed spyware
or adware on your computer? I guess close to zero!
If all the software one installs were signed with a verifiable
certificate, this wouldn't be an issue at all. Instead, users are
trained to click through the various pop-ups and warnings when
installing something on their computer...


> Is it really that difficult to identify those responsible for it,
> especially in the case of spamware,

If everybody used S/MIME signatures and refused any non-signed mail, it
would improve the spam problem a lot. Additionally, mail servers could
work in TLS-only mode to further reduce this problem.


> do you really think it's much harder
> to escape reliability after buying a signing certificate than to escape
> it when operating a large scale spam platform ?

Yes! First of all, it would make casual users aware of certificates;
second, you could trace such attempts much more easily. Remember, the
crooks always go the easiest way...


> How hard and how long
> will it be to convince a judge take the right decision between one
> side's "spyware" and the other's "innovative marketing technic" ?
>

That would be an interesting question indeed, but at least you know whom
you are talking to. I assume that a code-signing certificate is only
issued after reasonable verification (at least).


>> So it seems that you are suggesting that we should issue code-signing
>> certificates to anyone who wants them, and use revocation to pull out
>> the bad actors?
>>
>
> In short yes. Just that it's not to anyone who wants them, you have the
> final say to who gets them, and if you experimentally find out you have
> to be very restrictive to avoid a bad experience for your users, then
> you'll just do that.
>

That's about the same (failed) approach as blacklisting mail servers,
phishing web sites, etc...


>
> And in this case, the
> seriousness of the CAs and the effectiveness of the legal threat are
> your only lines of defense.
>

But perhaps the best one? What if YOU failed to detect a serious problem
in the code and YOU issued a certificate for that application? You'd be
liable under certain circumstances for the damage! Remember, you suggest
reviewing the code, not verifying the identity!


> Of course you can also choose to make deals to only accept a few
> selected commercials CA for which a highly effective revocation channel
> will be in place (in addition to a high quality identity review process,
> and only accepting requests from selected countries where the legal
> threat will be effective).

Which perhaps will shut out most developers, especially the latter
condition of "selected" countries... not to speak of the costs...

Gervase Markham

unread,
Jun 28, 2007, 5:57:44 AM6/28/07
to
Jean-Marc Desperrier wrote:
> Gervase Markham wrote:
>> Jean-Marc Desperrier wrote:
>>> You don't care *who* the owner of the cert is. What you care about is
>>> if he intends to use his signing cert to distribute spyware
>>> extensions. And his identity tells you nothing about that.
>>
>> No, but it does tell you whose door the police can go knocking on if
>> he logs into your online banking and steals all your money.
>
> How effective has this approach been until now to block spam and spyware?

Well, it has nothing to do with spam one way or the other. As for
spyware, it depends what type you mean. There's plenty of stuff out
there that we'd call spyware which is still legal. And the stuff that
isn't legal is normally installed via browser flaws, not via the user
clicking "OK" on some code-signing dialog.

> In real life "we'll sue" simply doesn't work to deter spyware and MoFo
> just doesn't want to wage this fight as proven by this :
> http://weblogs.mozillazine.org/asa/archives/2007/03/please_get_fire.html
> (I mean given that you know this exist where is your action to get
> judicial injunctions to make it stop ?)

We have an active enforcement program, going after people who abuse our
trademarks in various ways.

However, charging a fee to obtain Firefox is not illegal.

> But I think functional review would bring more than you might believe.
> It's a significant barrier to entry if someone who wants to distribute a
> spyware extension

It's a significant barrier to entry for someone who wants to distribute
_any_ sort of extension!

> Extensions of course would not be accepted when there's already another
> extension that does the same, or has so few difference that it would be
> better to provide a patch to it.

This would require a level of control over the extension community which
we have so far been entirely unwilling to even consider, and is
antithetical to our "choice and innovation" principles.

Gerv
