
Dan Stillman's concerns about Extension Signing


David Rajchenbach-Teller

Nov 25, 2015, 4:14:19 AM
to dev-pl...@lists.mozilla.org
I admit I have followed extension signing/scanning only very remotely,
but Dan Stillman has a number of good points:

http://danstillman.com/2015/11/23/firefox-extension-scanning-is-security-theater

Could someone who's actually involved in this feature provide an answer?

Cheers,
David

Mike Hommey

Nov 25, 2015, 4:30:48 AM
to David Rajchenbach-Teller, dev-pl...@lists.mozilla.org
As mentioned in the blog post, he posted an abridged version to
firefox-dev.

Mike

David Rajchenbach-Teller

Nov 25, 2015, 4:32:02 AM
to Mike Hommey, dev-pl...@lists.mozilla.org
And didn't receive any reply, afaict.
> _______________________________________________
> dev-platform mailing list
> dev-pl...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-platform
>

Till Schneidereit

Nov 25, 2015, 6:16:12 AM
to David Rajchenbach-Teller, Mike Hommey, dev-platform
FWIW, I received questions about this via private email and phone calls
from two people working on extensions that support their products. Their
extensions sit in the review queue with no chance of getting through it
before the signing requirement kicks in. This puts them into a situation
where their only reasonable course of action is to advise their users to
switch browsers.

Jeff Gilbert

Nov 25, 2015, 2:16:26 PM
to Till Schneidereit, Mike Hommey, David Rajchenbach-Teller, dev-platform
On Wed, Nov 25, 2015 at 3:16 AM, Till Schneidereit
<ti...@tillschneidereit.net> wrote:
> FWIW, I received questions about this via private email and phone calls
> from two people working on extensions that support their products. Their
> extensions sit in the review queue with no chance of getting through it
> before the signing requirement kicks in. This puts them into a situation
> where their only reasonable course of action is to advise their users to
> switch browsers.
>

Is it just me, or does this sound completely unacceptable? Sloughing off
more users? Things like this are why it's hard not to be cynical.

I doubt anyone is going to switch to Firefox because our extension
signing is safe. (though I do think we should have some form of
signing) But they will gladly switch away when anything breaks,
particularly when we reduce the activation energy needed to switch: If
their extension won't work in new Firefox, it doesn't matter so much
that they won't have that extension in, say, Chrome.

Chris Peterson

Nov 25, 2015, 4:22:55 PM
On 11/25/15 11:16 AM, Jeff Gilbert wrote:
> I doubt anyone is going to switch to Firefox because our extension
> signing is safe. (though I do think we should have some form of
> signing) But they will gladly switch away when anything breaks,
> particularly when we reduce the activation energy needed to switch: If
> their extension won't work in new Firefox, it doesn't matter so much
> that they won't have that extension in, say, Chrome.

And that assumes the same extension, or an equivalent, is not available
for Chrome. Picking a not-so-random example, Zotero has a Chrome extension:

https://chrome.google.com/webstore/detail/zotero-connector/ekhagklcjbdpajgpjgmbionohlpdbjgc

Thomas Zimmermann

Nov 26, 2015, 4:02:40 AM
to Jeff Gilbert, Till Schneidereit, Mike Hommey, David Rajchenbach-Teller, dev-platform
Am 25.11.2015 um 20:16 schrieb Jeff Gilbert:
> On Wed, Nov 25, 2015 at 3:16 AM, Till Schneidereit
> <ti...@tillschneidereit.net> wrote:
>> FWIW, I received questions about this via private email and phone calls
>> from two people working on extensions that support their products. Their
>> extensions sit in the review queue with no chance of getting through it
>> before the signing requirement kicks in. This puts them into a situation
>> where their only reasonable course of action is to advise their users to
>> switch browsers.
>>
> Is it just me, or does this sound completely unacceptable? Sloughing off
> more users? Things like this are why it's hard not to be cynical.

It's not just you. Reading the blog post made me think that extension
signing is complete nonsense and we should stop it now. This will only
break one of Firefox's best features for nothing. It was especially bad
to blacklist the proof-of-concept exploit instead of addressing the
actual problem.

Best regards
Thomas

>
> I doubt anyone is going to switch to Firefox because our extension
> signing is safe. (though I do think we should have some form of
> signing) But they will gladly switch away when anything breaks,
> particularly when we reduce the activation energy needed to switch: If
> their extension won't work in new Firefox, it doesn't matter so much
> that they won't have that extension in, say, Chrome.

Till Schneidereit

Nov 26, 2015, 7:56:58 AM
to Thomas Zimmermann, Mike Hommey, Jeff Gilbert, dev-platform, David Rajchenbach-Teller
On Thu, Nov 26, 2015 at 10:02 AM, Thomas Zimmermann <tzimm...@mozilla.com>
wrote:

> Am 25.11.2015 um 20:16 schrieb Jeff Gilbert:
> > On Wed, Nov 25, 2015 at 3:16 AM, Till Schneidereit
> > <ti...@tillschneidereit.net> wrote:
> >> FWIW, I received questions about this via private email and phone calls
> >> from two people working on extensions that support their products. Their
> >> extensions sit in the review queue with no chance of getting through it
> >> before the signing requirement kicks in. This puts them into a situation
> >> where their only reasonable course of action is to advise their users to
> >> switch browsers.
> >>
> > Is it just me, or does this sound completely unacceptable? Sloughing off
> > more users? Things like this are why it's hard not to be cynical.
>
> It's not just you. Reading the blog post made me think that extension
> signing is complete nonsense and we should stop it now. This will only
> break one of Firefox's best features for nothing. It was especially bad
> to blacklist the proof-of-concept exploit instead of addressing the
> actual problem.
>

I read the blog post, too, and if that were the final, uncontested word on
the matter, I think I would agree. As it is, this assessment strikes me as
awfully harsh: many people have put a lot of thought and effort into this,
so calling for it to simply be canned should require a substantial amount
of background knowledge.

I should also give a bit more information about the feedback I received: in
both cases, versions of the extensions exist for at least Chrome and
Safari. In at least one case, the extension uses a large framework that
needs to be reviewed in full for the extension to be approved. Apparently
this'd only need to happen once per framework, but it hasn't, yet. That
means that the review is bound to take much longer than if just the
extension's code were affected. While I think this makes sense, two things
strike me as very likely and would make it a substantial problem: many authors
of extensions affected in similar ways will come out of the woodwork very
shortly before 43 is released or even after that, in reaction to users'
complaints. And many of these extensions will use large frameworks not
encountered before, or simply be too complex to review within a day or two.

I *do* think that we shouldn't ship enforced signing without having a solid
way of dealing with this problem. Or without having deliberately decided
that we're willing to live with these extensions' authors recommending (or
forcing, as the case may be) their users to switch browsers.


till

Thomas Zimmermann

Nov 26, 2015, 8:50:41 AM
to Till Schneidereit, Mike Hommey, Jeff Gilbert, dev-platform, David Rajchenbach-Teller
Hi

Am 26.11.2015 um 13:56 schrieb Till Schneidereit:

> I read the blog post, too, and if that were the final, uncontested word on
> the matter, I think I would agree. As it is, this assessment strikes me as
> awfully harsh: many people have put a lot of thought and effort into this,
> so calling for it to simply be canned should require a substantial amount
> of background knowledge.

Ok, I take back the 'complete nonsense' part. There can be ways of
improving security that involve signing, but the proposed approach isn't
one of them. I think the blog post makes this obvious.


>
> I should also give a bit more information about the feedback I received: in
> both cases, versions of the extensions exist for at least Chrome and
> Safari. In at least one case, the extension uses a large framework that
> needs to be reviewed in full for the extension to be approved. Apparently
> this'd only need to happen once per framework, but it hasn't, yet. That
> means that the review is bound to take much longer than if just the
> extension's code were affected. While I think this makes sense, two things
> strike me as very likely and would make it a substantial problem: many authors
> of extensions affected in similar ways will come out of the woodwork very
> shortly before 43 is released or even after that, in reaction to users'
> complaints. And many of these extensions will use large frameworks not
> encountered before, or simply be too complex to review within a day or two.

Thanks for this perspective. Dan didn't seem to use any frameworks, but
the review process failed for an apparently trivial case. Regarding
frameworks in general: there are many, and there are usually different
versions in use. Sometimes people make additional modifications. So this
helps only partially.

And of course reviews are not a panacea at all. Our own Bugzilla is
proof of that. ;) Pretending that a reviewed extension (or any other
piece of code) is more trust-worthy is not credible IMHO. Code becomes
trust-worthy by working successfully in "the real world."

>
> I *do* think that we shouldn't ship enforced signing without having a solid
> way of dealing with this problem. Or without having deliberately decided
> that we're willing to live with these extensions' authors recommending (or
> forcing, as the case may be) their users to switch browsers.

I think a good approach would be to hand out signing keys to extension
developers and require them to sign anything they upload to AMO. That
would establish a trusted path from developers to users, so users would
know they downloaded the official release of an extension. A malicious
extension can then be disabled/blacklisted by simply revoking the keys,
and affected users would notice. For anything non-AMO, the user is on
their own.

Best regards
Thomas

>
>
> till
>

Kartikaya Gupta

Nov 26, 2015, 10:11:09 AM
to Thomas Zimmermann, Mike Hommey, Jeff Gilbert, Till Schneidereit, dev-platform, David Rajchenbach-Teller
On Thu, Nov 26, 2015 at 8:50 AM, Thomas Zimmermann
<tzimm...@mozilla.com> wrote:
> For anything non-AMO, the user is on
> their own.
>

I don't know if that would fly. As I understand it, a large part of
the purpose of extension signing is to protect users from malicious
add-ons that get installed by non-AMO means - sideloading, installed
by other apps, and so on. If we ignore the non-AMO add-ons then we're
not really solving any problems worth solving. (Caveat: I don't have
access to any actual data on the numbers and types of malicious
add-ons, but from what I've heard I believe this to be the case. I
could be wrong.)

kats

David Burns

Nov 26, 2015, 10:12:07 AM
to Thomas Zimmermann, Mike Hommey, Jeff Gilbert, Till Schneidereit, dev-platform, David Rajchenbach-Teller
Another data point that we seem to have overlooked is that users want to be
able to sideload their extensions for many different reasons. We see this
with apps on phones and with extensions currently. I appreciate that users
have grown to be warning-blind but, as others have pointed out, this feels
like a sure way to have users move from us to Chrome if their extension
lives there too. Once they are lost it will be non-trivial to get them back.

My main gripe is that we will be breaking tools like WebDriver[1] (better
known as Selenium), and not once have we approached that community. Luckily
we have Marionette being developed as a replacement, and it was in
development before we started add-on signing. When I mentioned this, I was
told that since WebDriver instruments the browser it can never get signed,
and that we need to get a move on or get everyone to change to the
"whitelabel" build to keep using WebDriver. Having spoken to peers at other
large tech companies, they said no: they will remain on older versions and,
if it breaks, stop supporting it until they have a like-for-like
replacement. They will also stop caring about WebCompat until then. We will
drive away other users because Firefox doesn't work as well on their
favourite website.

There are also companies that have developed internal tools as add-ons that
they don't want on AMO. We are essentially telling them that we don't care
about how much effort they have put in or how "sooper sekrit" their add-on
is. It's in AMO or else...

I honestly thought we would do the "signing keys to developers" approach
and revoke when they are being naughty.

David

[1] http://github.com/seleniumhq/selenium

On 26 November 2015 at 13:50, Thomas Zimmermann <tzimm...@mozilla.com>
wrote:
> and affected users would notice. For anything non-AMO, the user is on
> their own.
>
> Best regards
> Thomas
>
> >
> >
> > till

Thomas Zimmermann

Nov 26, 2015, 11:07:13 AM
to David Rajchenbach-Teller, dev-pl...@lists.mozilla.org
Hi,

I haven't followed the overall discussion closely, but I'm very
concerned about this change and that we're driving away extension
developers. I hope that some of the relevant people read this thread, as
I'd like to propose a different strategy for extension signing.

1) As dburns mentioned in this thread, some people have to run unsigned
extensions. We should continue to allow this if the user explicitly
enables it in about:config. Unsigned extensions would be disabled by default
and should come with a big warning sign.

2) If extension signing is enabled (the default), Firefox should only
allow extensions that have been signed with a Mozilla-generated key.

3) Obtaining a signing key from Mozilla should be automated in a way
similar to Let's Encrypt. So the overhead for extension developers is
minimal.

4) Keys should be bound to URLs and there can only be one URL per
extension. So it's not possible to modify and redistribute someone
else's extension.

5) Changing an extension's URL requires manual intervention.

6) If an extension turns out to be malicious, we can revoke the key.
Firefox would then notify all affected users and disable the extension
automatically.

7) Popular extensions on AMO should be reviewed by Mozilla staff 'behind
the scenes' and get an additional quality label or something similar.
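
Putting points 2), 4), and 6) together, the install-time check could look
roughly like this (a hypothetical sketch only; none of these helper names
exist in Firefox today):

  async function verifyExtension(xpi) {
    // 2) Must carry a signature that chains to a Mozilla-generated key.
    if (!(await mozillaKeys.verify(xpi.signature, xpi.contents))) {
      return reject("not signed with a Mozilla-issued key");
    }
    // 4) Each key is bound to exactly one distribution URL.
    if (xpi.signature.boundURL !== xpi.sourceURL) {
      return reject("redistributed from a different URL");
    }
    // 6) Revoking a key disables the extension for affected users.
    if (await revocationList.contains(xpi.signature.keyId)) {
      return reject("developer key has been revoked");
    }
    return accept();
  }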

Best regards
Thomas


Am 25.11.2015 um 10:14 schrieb David Rajchenbach-Teller:
> I admit I have followed extension signing/scanning only very remotely,
> but Dan Stillman has a number of good points:
>
> http://danstillman.com/2015/11/23/firefox-extension-scanning-is-security-theater
>
> Could someone who's actually involved in this feature provide an answer?
>
> Cheers,
> David

Gijs Kruitbosch

Nov 26, 2015, 11:18:46 AM
to Thomas Zimmermann, David Rajchenbach-Teller
On 26/11/2015 16:07, Thomas Zimmermann wrote:
> Hi,
>
> I haven't followed the overall discussion closely, but I'm very
> concerned about this change and that we're driving away extension
> developers. I hope that some of the relevant people read this thread, as
> I'd like to propose a different strategy for extension signing.
>
> 1) As dburns mentioned in this thread, some people have to run unsigned
> extensions. We should continue to allow this if the users explicitly
> enables it in about:config. Unsigned extensions are disabled by default
> and should come with a big warning sign.

This really misses the point. There have been many discussions about
this in the past. If we just use an about:config flag, malware/greyware
will set that in the user's pref file and then install itself anyway
(unsigned, obviously). There is nothing we can do in the UI if an
untrusted extension is installed permanently that that untrusted
extension won't be able to hide anyway.

For the signing to provide any meaningful protection, it needs to be
impossible to turn it off permanently.

If users want to run unsigned extensions repeatedly (rather than
one-offs for testing, for which support was recently added), they can
either self-build, or run unbranded builds, or run nightly or aurora.
IIRC 43 will still ship with a pref to turn off signing, and 44 won't
anymore.

Please read earlier discussions about this to get more context before
proposing alternatives.

~ Gijs

(who, ftr, is not on the add-on team or "relevant people" - just happens
to have been following this discussion for a long time)

Mike Hoye

Nov 26, 2015, 12:14:04 PM
to dev-pl...@lists.mozilla.org
On 2015-11-26 11:07 AM, Thomas Zimmermann wrote:
>
> I haven't followed the overall discussion closely, but
This is not OK.

Does anyone here actually think that the team that's been busting their
asses over this for months _doesn't_ have better information and more
insight into this problem than what you've come up with after thinking
about it for five minutes? That all the data they've gathered, all the
experience and expertise they're bringing to bear on this problem are
just sitting in a box in the corner somewhere while they daydream about
how much fun it is to write security-critical software and brush off our
users' rights and our developer community's needs?

Really?

Stillman wrote some new code and put it through a process meant to catch
problems in old code, and it passed. That's unfortunate, but does it
really surprise anyone that security is an evolving process? That it
might be full of hard tradeoffs? There is a _huge_ gap between "new
code can defeat old security measures" and "therefore all the old
security measures are useless". It's an even bigger step from there to
the implication that people working on this either haven't thought about
it already, or just don't care.

We're bad at communications, I get that, but maybe we could all talk to
someone on that team for ten minutes before telling them how to do their
jobs. Ask them about their reasoning, what decisions they made and why,
what the tradeoffs were. I have, and watching the discussion in this
thread is like watching someone tell Jason Bourne he should tie his
shoes and look both ways before crossing the street. It would be
hilarious if I didn't know for a fact that it's insulting and
demoralizing to really smart people who've worked hard and cared
intensely about Mozilla's users and developers for a long, long time.



- mhoye



WaltS48

Nov 26, 2015, 12:14:24 PM
Perhaps you missed this:

Add-ons/Extension Signing - MozillaWiki -
<https://wiki.mozilla.org/Addons/Extension_Signing#FAQ>

I've noticed a couple new items there about how an extension developer
can get their extension signed if it isn't hosted on AMO.


--
Linux Mint 17.2 "Rafaela" | KDE 4.14.2 | Thunderbird 45.0a1 (Daily)
You don't need zero-days when machines wherever are packed with old-days.
Go Bucs! (next season) Go Pens! Go Sabres! Go Pitt!
[Visit Pittsburgh]<http://www.visitpittsburgh.com/>
[Coexist · Understanding Across Divides]<https://www.coexist.org/>

David Rajchenbach-Teller

Nov 26, 2015, 12:51:25 PM
to Mike Hoye, dev-pl...@lists.mozilla.org
For what it's worth, this thread was not meant to point fingers, but
specifically to get an answer from said team. I see concern about
Extension Signing, and I see points made by add-on developers that
appear valid to me and that I am unable to answer.

That doesn't mean that we have done something wrong, but it is
sufficient to get my spider(monkey)-sense tingling. We have had cases in
the past where teams have « been busting their asses over [some feature]
for months » and we realized too late that the feature was not aligned
with what we needed. I have no idea whether this is the case here, hence
the need to communicate.

As a side-note, yeah, it would be great if signing add-ons was as simple
as using Let's Encrypt, without having to pile even more work upon an
understaffed team of reviewers.

Best regards,
David

On 26/11/15 18:13, Mike Hoye wrote:
> On 2015-11-26 11:07 AM, Thomas Zimmermann wrote:
>>
>> I haven't followed the overall discussion closely, but
> This is not OK.
>
> Does anyone here actually think that the team that's been busting their
> asses over this for months _doesn't_ have better information and more
> insight into this problem than what you've come up with after thinking
> about it for five minutes?

[...]

Philip Chee

Nov 26, 2015, 12:57:59 PM
On 27/11/2015 00:07, Thomas Zimmermann wrote:

> I haven't followed the overall discussion closely, but I'm very
> concerned about this change and that we're driving away extension
> developers. I hope that some of the relevant people read this thread, as
> I'd like to propose a different strategy for extension signing.
....

NOT THAT I DISAGREE WITH YOU (I am an extension developer too), but in
the original discussion months ago, all your points were raised and
ultimately dismissed [1]. Rehashing this won't change anybody's mind.

[1] jorgev and John-Galt can give you the gory details.

Phil

--
Philip Chee <phi...@aleytys.pc.my>, <phili...@gmail.com>
http://flashblock.mozdev.org/ http://xsidebar.mozdev.org
Guard us from the she-wolf and the wolf, and guard us from the thief,
oh Night, and so be good for us to pass.

Jorge Villalobos

Nov 26, 2015, 1:57:20 PM
Unfortunately, this discussion has been going on in too many places,
which is why I would like to keep it focused in the
mozilla.addons.user-experience group, which is where most of it is
happening.

This project has been discussed to death for years now, but it's only
now getting onto some people's radar, so it's understandable that
there are lots of old questions coming up, and some of us are responding
a bit impatiently.

We knew going into this that it would alienate a subset of the add-on
developer community, just like we know that dropping features in Firefox
will lose us some users, but help us in the long term. That doesn't mean
we're just dismissing all criticism, and we're discussing making some
adjustments to improve things for developers. Given that many people are
away for Thanksgiving Day, I don't expect anything to happen until early
next week. I'll make sure something is posted on this thread.

Jorge

Thomas Zimmermann

Nov 27, 2015, 3:46:55 AM
to WaltS48, dev-pl...@lists.mozilla.org
Am 26.11.2015 um 18:14 schrieb WaltS48:

> Perhaps you missed this:
>
> Add-ons/Extension Signing - MozillaWiki -
> <https://wiki.mozilla.org/Addons/Extension_Signing#FAQ>
>
> I've noticed a couple new items there about how an extension developer
> can get their extension signed if it isn't hosted on AMO.
>
>

Thanks for pointing to further information.

Gervase Markham

Nov 27, 2015, 7:16:49 AM
to Mike Hoye
On 26/11/15 17:13, Mike Hoye wrote:
> Stillman wrote some new code and put it through a process meant to catch
> problems in old code, and it passed. That's unfortunate, but does it
> really surprise anyone that security is an evolving process? That it
> might be full of hard tradeoffs? There is a _huge_ gap between "new
> code can defeat old security measures" and "therefore all the old
> security measures are useless".

But the thing is, members of our security group are now piling into the
bug pointing out that trying to find malicious JS code by static code
review is literally _impossible_ (and perhaps hinting that they'd have
said so much earlier if someone had asked them).

You can evolve your process all you like, but if something is
impossible, it's impossible. And not only that, but attempting it seems
to be causing significant collateral damage.

> It's an even bigger step from there to
> the implication that people working on this either haven't thought about
> it already, or just don't care.

I agree with that.

Gerv

Thomas Zimmermann

Nov 27, 2015, 7:39:13 AM
to Mike Hoye, dev-pl...@lists.mozilla.org
Hi

Am 26.11.2015 um 18:13 schrieb Mike Hoye:

> Stillman wrote some new code and put it through a process meant to
> catch problems in old code, and it passed. That's unfortunate, but
> does it really surprise anyone that security is an evolving process?
> That it might be full of hard tradeoffs? There is a _huge_ gap
> between "new code can defeat old security measures" and "therefore all
> the old security measures are useless". It's an even bigger step from
> there to the implication that people working on this either haven't
> thought about it already, or just don't care.

I was trying to suggest a possible solution; nothing more and nothing
less. I didn't intend to imply that you or anyone else didn't know or
didn't care, and I especially didn't intend to insult anyone. I'm sorry
if my mail came across like this.

Best regards
Thomas

Gijs Kruitbosch

Nov 27, 2015, 7:59:37 AM
On 27/11/2015 12:16, Gervase Markham wrote:
> On 26/11/15 17:13, Mike Hoye wrote:
>> Stillman wrote some new code and put it through a process meant to catch
>> problems in old code, and it passed. That's unfortunate, but does it
>> really surprise anyone that security is an evolving process? That it
>> might be full of hard tradeoffs? There is a _huge_ gap between "new
>> code can defeat old security measures" and "therefore all the old
>> security measures are useless".
>
> But the thing is, members of our security group are now piling into the
> bug pointing out that trying to find malicious JS code by static code
> review is literally _impossible_ (and perhaps hinting that they'd have
> said so much earlier if someone had asked them).

That's not what they're saying. They're saying it is impossible to
guarantee with static analysis that code is not malicious. Nobody
disputes that, and nobody did before they started saying it, either. The
distinction is that the static code review finds *some* malicious JS
code. Dan's charge is that this is not useful because malware authors
will realize this (we tell them that their add-on got rejected and why!)
and try to bypass that review until they manage it.

This entire discussion is pretty orthogonal to the fact that we're
signing add-ons and the issues that Till and Thomas were talking about
anyway (which was basically: now add-ons need to be reviewed to be
signed and that review takes a long time, and that disrupts users and
add-on developers).

> And not only that, but attempting it seems
> to be causing significant collateral damage.

This is the interesting bit. The reason Dan is bringing this up is not
his concern for users' safety but the fact that the same automated
scanning is flagging things in his add-on that he claims aren't "real"
issues, this causes him to land in the "manual review" queue, and that
takes time.

To my mind, the logical conclusion of Dan's post is that he should want
all add-ons to be manually reviewed all the time. He claims that if
static analysis does not guarantee benign code, it is not worth having
it at all (something which I would dispute (see distinction drawn
above), but let's stick with it for now).

The reason we're drawing different conclusions is that I believe we
still want to do something about the issues caused by frontloaded
add-ons not distributed through AMO, and so we will need to do some kind
of review, and if we get rid of the static analysis, the logical thing
to do would be to use manual review.

Of course, having to manually review all the add-ons is going to cause
even more delays, and is therefore not in Dan's interest, so instead he
posits that we should just drop all our attempts to control issues
caused by frontloaded add-ons not distributed through AMO.

The interesting thing is that Dan is nominally talking about how we're
trying to moderate quality and safety in a space where we haven't before
(ie add-ons not distributed through AMO), but then says stuff like "And
it’s just depressing that the entire Mozilla developer community spent
the last year debating extension signing and having every single
counterargument be dismissed only to end up with a system that is
utterly incapable of actually combating malware."

which basically boils down to an ad-hominem on Mozilla and an indictment
of "the system" and signing and the add-ons space generally, when
really, all we're talking about right now is how/whether to review
non-AMO-distributed add-ons before signing them. Dan acknowledges
elsewhere in his post that signing has other benefits, but the polemic
tone sure makes it seem like the entire plan we have here is rubbish
from start to finish.

There's been a general trend there that Dan sees our attempts to try to
do something in that space as a one-way street where Mozilla should
basically make sure that all add-ons that used to work and weren't
distributed through AMO should not be disrupted, and we have been saying
that it's hard to improve user experience here if there are 0
restrictions, and so "something's gotta give". Dan wants a system where
he can (grudgingly) submit his add-on to AMO, and AMO gives it back to
him signed (ideally automatically via APIs) and nobody from Mozilla
(human or otherwise) reviews his code or tells him how to do stuff. And
we're trying to improve the state of non-AMO add-ons. Those two desires
are fundamentally very hard to reconcile. And that has very little to do
with whether or not static analysis can guarantee you non-malicious code
or not.

~ Gijs

Frederik Braun

Nov 27, 2015, 8:41:29 AM
to dev-pl...@lists.mozilla.org
On 27.11.2015 13:16, Gervase Markham wrote:
> On 26/11/15 17:13, Mike Hoye wrote:
>> Stillman wrote some new code and put it through a process meant to catch
>> problems in old code, and it passed. That's unfortunate, but does it
>> really surprise anyone that security is an evolving process? That it
>> might be full of hard tradeoffs? There is a _huge_ gap between "new
>> code can defeat old security measures" and "therefore all the old
>> security measures are useless".
>
> But the thing is, members of our security group are now piling into the
> bug pointing out that trying to find malicious JS code by static code
> review is literally _impossible_ (and perhaps hinting that they'd have
> said so much earlier if someone had asked them).
>
> You can evolve your process all you like, but if something is
> impossible, it's impossible. And not only that, but attempting it seems
> to be causing significant collateral damage.
>

We can detect obfuscation and disallow it, though. It's not "all is
lost", but "impossible to be 100% exact, if we allow arbitrary
JavaScript". I think we already disallow certain language features (e.g.
eval?).

Gijs Kruitbosch

Nov 27, 2015, 9:33:47 AM
to Frederik Braun
I don't think we currently disallow eval() on the Firefox side, and I
think disallowing both eval() and Function() and friends might break a
lot of add-ons. AMO discourages its use, but I don't know that there are
literally 0 cases of it, and the use of eval() can be obfuscated itself...

That and we have the subscript loader, Cu.import and numerous other ways
of running script. If necessary as an add-on you could implement your
own protocol handler for strings and load scripts that way. It's pretty
hard to nail this down for XUL add-ons, and, to a lesser degree, jetpack
- yet another reason for Web Extensions.

It's actually an interesting idea, IMO, to see if we could just remove
access to the Function constructor and eval() in chrome compartments (ie
not make them available on |window| or any other chrome global at all),
and see how much stuff breaks. My guess is "quite a lot", but I've been
wrong before - and of course, we could fix it...
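
(A quick sketch of why the removal would have to happen at the
compartment/engine level rather than from script - illustrative only,
with doSomethingDynamic() standing in for any payload:

  delete window.eval;
  delete window.Function;
  // Dynamic evaluation is still reachable from any function object:
  var F = (function () {}).constructor;  // this is Function again
  F('doSomethingDynamic()')();

Simply dropping the globals isn't enough; the constructor itself would
have to be withheld from the compartment.)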

~ Gijs

Gavin Sharp

Nov 27, 2015, 10:51:03 AM
to Gervase Markham, dev-platform
On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org> wrote:
> But the thing is, members of our security group are now piling into the
> bug pointing out that trying to find malicious JS code by static code
> review is literally _impossible_ (and perhaps hinting that they'd have
> said so much earlier if someone had asked them).

No, that's not right. There's an important distinction between
"finding malicious JS code" and "finding _all_ malicious JS code". The
latter is impossible, but the former isn't.

Proving "the validator won't catch everything" isn't particularly
relevant when it isn't intended to, in the overall add-on signing
system design.

Gavin

dsti...@zotero.org

Nov 27, 2015, 6:46:24 PM
On Friday, November 27, 2015 at 7:59:37 AM UTC-5, Gijs Kruitbosch wrote:
> On 27/11/2015 12:16, Gervase Markham wrote:
> > On 26/11/15 17:13, Mike Hoye wrote:
> >> Stillman wrote some new code and put it through a process meant to catch
> >> problems in old code, and it passed. That's unfortunate, but does it
> >> really surprise anyone that security is an evolving process? That it
> >> might be full of hard tradeoffs? There is a _huge_ gap between "new
> >> code can defeat old security measures" and "therefore all the old
> >> security measures are useless".
> >
> > But the thing is, members of our security group are now piling into the
> > bug pointing out that trying to find malicious JS code by static code
> > review is literally _impossible_ (and perhaps hinting that they'd have
> > said so much earlier if someone had asked them).
>
> That's not what they're saying. They're saying it is impossible to
> guarantee with static analysis that code is not malicious. Nobody
> disputes that, and nobody did before they started saying it, either. The
> distinction is that the static code review finds *some* malicious JS
> code. Dan's charge is that this is not useful because malware authors
> will realize this (we tell them that their add-on got rejected and why!)
> and try to bypass that review until they manage it.

Repeatedly claiming (as a number of Mozilla folks have now done) that "nobody ever said we could detect all malicious code" isn't helpful. That's a strawman argument.

The issue here is that this new system -- specifically, an automated scanner sending extensions to manual review -- has been defended by Jorge's saying, from March when I first brought this up until yesterday on the hardening bug [1], that he believes the scanner can "block the majority of malware". That's simply not the case. The scanner cannot block even trivial attempts at obfuscation. That's what members of your security group are saying, and what my PoC demonstrates. You literally cannot block the sorts of examples in my PoC without blocking all extensions. (As I note on the hardening bug, the confusion here may be partly my fault. The examples in my PoC -- e.g., 'e'.replace() + 'val' -- were meant to be somewhat humorous, but the point is just that you can bypass the scanner by generating a string, and it's provably impossible to figure out what those strings would be from static analysis.)
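
To make that concrete, here is an illustrative sketch in the spirit of the PoC -- not the actual PoC code -- of why a scanner that greps for dangerous tokens like eval( can't win:

  // Nothing below matches a blacklist of dangerous tokens.
  var fn = 'e'.replace() + 'val';  // replace() with no arguments is a no-op, so this builds the string "eval"
  var payload = ['al', 'ert'].join('') + '("gotcha")';
  window[fn](payload);  // indirect eval of a string that only exists at runtime

The strings only come into existence at runtime, so no static rule can flag them without also flagging ordinary string manipulation.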

The implication of this is that manual review would be strictly voluntary on the part of the extension developer, and so you would almost by definition be sending only legitimate developers to manual review. If malware wants to be detected, it's sort of by definition not malware.

Jorge has been saying he believes the scanner can block most malware because he genuinely doesn't understand the technical issues here, as his statements (and his absurd blocklisting of the PoC) make clear. It's hard not to make this sound like a personal attack, but that's really not my intention. Throughout this process, Jorge should have had the support of Mozilla engineers who understand that the claims he's been making about the scanner are not possible. Some people are now attempting to defend him by saying "well we never said it could block all malware", but that's a misrepresentation of the disconnect here.

Even the developer of the validator, asking a month ago whether combating this sort of trivial obfuscation was possible (it's not, as he was told), said, "Without it the validator remains more an advisory/helpful tool than something we could use to automate security validation." [2]

> > And not only that, but attempting it seems
> > to be causing significant collateral damage.
>
> This is the interesting bit. The reason Dan is bringing this up is not
> his concern for users' safety but the fact that the same automated
> scanning is flagging things in his add-on that he claims aren't "real"
> issues, this causes him to land in the "manual review" queue, and that
> takes time.

Zotero is flagged for manual review 1) because of our use of things like nsIProcess and js-ctypes and 2) because we have hundreds of thousands of lines of code, which both time out the validator (though presumably that could be fixed) and all but guarantee that there will be something in every update that is flagged, because the scanner blocks all sorts of highly ambiguous things.

And yes, in our case the things it is flagging are indeed not "real" issues, as AMO editors have acknowledged in every review we've ever received. An AMO review has never identified a legitimate security issue in Zotero -- if they had, 1) we would've been grateful and 2) they wouldn't have approved us.

The collateral damage here is legitimate developers and the users of their extensions, who are denied timely updates (including important bug fixes) based on a scanner that blocks legitimate extensions but that by definition cannot block anyone who doesn't want to be blocked.

> To my mind, the logical conclusion of Dan's post is that he should want
> all add-ons to be manually reviewed all the time.

No, because that would be massively disruptive to all, not just some, developers of unlisted extensions, who are likely unlisted specifically because they want or need to release timely updates to their users. Furthermore, in Zotero's case, we have a vastly better understanding of our very complex code base than an AMO reviewer, and we have every incentive to keep our users, whose trust we've earned over a decade of direct distribution and support from our site, safe.

> He claims that if
> static analysis does not guarantee benign code, it is not worth having
> it at all (something which I would dispute (see distinction drawn
> above), but let's stick with it for now).

It is not worth causing massive disruption for legitimate developers in the name of a system that cannot stop even trivial attempts at obfuscation, yes.

> The reason we're drawing different conclusions is that I believe we
> still want to do something about the issues caused by frontloaded
> add-ons not distributed through AMO, and so we will need to do some kind
> of review, and if we get rid of the static analysis, the logical thing
> to do would be to use manual review.
>
> Of course, having to manually review all the add-ons is going to cause
> even more delays, and is therefore not in Dan's interest, so instead he
> posits that we should just drop all our attempts to control issues
> caused by frontloaded add-ons not distributed through AMO.

But you are doing something: you're introducing signing, you're forcing manual review for the far more dangerous and problematic category of sideloaded extensions, and you're creating a record of deployed code that can be reviewed later by AMO editors in order to provide feedback or (reliably) blocklist. I suggest, additionally, using the time saved by discontinuing repeated reviews of legitimate extensions to perform an initial review of submissions, which wouldn't prevent malware later but would reduce id churn and provide an opportunity to give initial feedback to new developers.

The problem is thinking that you're accomplishing anything by also trying to block malware in any meaningful way through automated scanning. As I note in my post, onerous but trivially bypassable automated rules could actually work against the benefits we actually agree on, by reducing the amount of code you have to search through or review later.

> The interesting thing is that Dan is nominally talking about how we're
> trying to moderate quality and safety in a space where we haven't before
> (ie add-ons not distributed through AMO), but then says stuff like "And
> it's just depressing that the entire Mozilla developer community spent
> the last year debating extension signing and having every single
> counterargument be dismissed only to end up with a system that is
> utterly incapable of actually combating malware."
>
> which basically boils down to an ad-hominem on Mozilla and an indictment
> of "the system" and signing and the add-ons space generally, when
> really, all we're talking about right now is how/whether to review
> non-AMO-distributed add-ons before signing them. Dan acknowledges
> elsewhere in his post that signing has other benefits, but the polemic
> tone sure makes it seem like the entire plan we have here is rubbish
> from start to finish.

It's the people defending automated scanning as a meaningful deterrent against malware that are failing to make a distinction between different parts of the system, not me. I make extremely clear in my post that signing has benefits. What I find frustrating is the seemingly reflexive unwillingness, in the name of defending some colleagues who are out of their technical depth, to acknowledge that some parts of the system don't and can't achieve the goals that Mozilla has laid out for them.

> There's been a general trend there that Dan sees our attempts to try to
> do something in that space as a one-way street where Mozilla should
> basically make sure that all add-ons that used to work and weren't
> distributed through AMO should not be disrupted, and we have been saying
> that it's hard to improve user experience here if there are 0
> restrictions, and so "something's gotta give". Dan wants a system where
> he can (grudgingly) submit his add-on to AMO, and AMO gives it back to
> him signed (ideally automatically via APIs) and nobody from Mozilla
> (human or otherwise) reviews his code or tells him how to do stuff.

Read my post. I'm not calling for no signing. I'm not calling for no restrictions. I'm not calling for no review. I'm calling for changing the parts of the process that provide essentially no additional protection against malicious code but that are hugely disruptive to legitimate developers.

But wait, now it's unreasonable that I should want us to be able to release Zotero using automated build tools rather than by running a build script, downloading the XPI, manually uploading it to a web app, waiting four days for a review that may then require weeks of back-and-forth discussion resulting in no actual changes, going back to the web app and downloading the signed XPI manually, uploading the XPI to our servers, and running additional build steps to update manifests and distribute it? And you wonder why some extension developers feel there's been a lack of respect demonstrated by Mozilla throughout this discussion?

> And
> we're trying to improve the state of non-AMO add-ons. Those two desires
> are fundamentally very hard to reconcile. And that has very little to do
> with whether or not static analysis can guarantee you non-malicious code
> or not.

It's Mozilla that has repeatedly called for balancing user safety and developer freedom. What you have now is a system that is extremely disruptive to legitimate developers while providing almost no additional security, because it can be trivially bypassed by anyone with a passing knowledge of JavaScript.

Fortunately, as Jorge has said, the process, and the manual review step in particular, is being "reconsidered".

- Dan

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867#c26
[2] https://groups.google.com/forum/#!topic/mozilla.dev.static-analysis/qTCBKh2bRsE

Ehsan Akhgari

Nov 27, 2015, 7:10:47 PM
to Gavin Sharp, dev-platform, Gervase Markham
On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp <ga...@gavinsharp.com> wrote:

> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org> wrote:
> > But the thing is, members of our security group are now piling into the
> > bug pointing out that trying to find malicious JS code by static code
> > review is literally _impossible_ (and perhaps hinting that they'd have
> > said so much earlier if someone had asked them).
>
> No, that's not right. There's an important distinction between
> "finding malicious JS code" and "finding _all_ malicious JS code". The
> latter is impossible, but the former isn't.
>

Note that malicious code here might look like this:

console.log("success");

It's impossible to tell by looking at the code whether that line prints a
success message on the console, or something entirely different, such as
running calc.exe.
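
For example (an illustrative sketch, not actual malware), an
innocuous-looking assignment elsewhere in the add-on could have rebound
the name long before the line a reviewer looks at:

  // Somewhere earlier in the add-on:
  console.log = function () {
    // could launch a process, exfiltrate data, etc.
  };
  // ...and later, the line that looks harmless in review:
  console.log("success");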

A better framing for the problem is "finding some arbitrary instances of
malicious JS code" vs "finding malicious JS code". My point in the bug and
in the discussions prior to that was that a static checker can only do the
former, and as such, if the goal of the validator is finding malicious
code, its effectiveness is bound to be that of a lint tool at best.


> Proving "the validator won't catch everything" isn't particularly
> relevant when it isn't intended to, in the overall add-on signing
> system design.
>

The specific problem here is that we allow automatic signing of extensions
once they pass the add-on validator checks, and we allow our users to run
signed extensions without any other checks. Therefore, the current system
is vulnerable to attacks such as what Dan's PoC extension has demonstrated.

No matter what the overall system design is and what the intended purpose
of the validator is, the concrete issue here is the vulnerability above. I
admit that I don't have a great understanding of the goals and design of
the add-on signing process (even though I have tried hard to understand it
by looking at the public info we've published), but we cannot dismiss this
vulnerability because the validator is intended to do something different.

Cheers,
--
Ehsan

Ehsan Akhgari

Nov 27, 2015, 7:12:34 PM
to Frederik Braun, dev-pl...@lists.mozilla.org
On 2015-11-27 8:41 AM, Frederik Braun wrote:
> On 27.11.2015 13:16, Gervase Markham wrote:
>> On 26/11/15 17:13, Mike Hoye wrote:
>>> Stillman wrote some new code and put it through a process meant to catch
>>> problems in old code, and it passed. That's unfortunate, but does it
>>> really surprise anyone that security is an evolving process? That it
>>> might be full of hard tradeoffs? There is a _huge_ gap between "new
>>> code can defeat old security measures" and "therefore all the old
>>> security measures are useless".
>>
>> But the thing is, members of our security group are now piling into the
>> bug pointing out that trying to find malicious JS code by static code
>> review is literally _impossible_ (and perhaps hinting that they'd have
>> said so much earlier if someone had asked them).
>>
>> You can evolve your process all you like, but if something is
>> impossible, it's impossible. And not only that, but attempting it seems
>> to be causing significant collateral damage.
>>
>
> We can detect obfuscation and disallow it, though.

No, we unfortunately cannot do that. That is really the same problem as
detecting malicious add-ons by looking at the code, which is impossible
for the previously mentioned reasons.

(Note that you may be thinking about obfuscations done by tools such as
minifiers, but the interesting obfuscation here is deliberate ones done
by an attacker trying to mislead a human or machine reviewing their code
for maliciousness.)

Eric Rescorla

Nov 27, 2015, 8:50:13 PM
to Ehsan Akhgari, Gavin Sharp, dev-platform, Gervase Markham
On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:

> On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp <ga...@gavinsharp.com>
> wrote:
>
> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org>
> wrote:
> > > But the thing is, members of our security group are now piling into the
> > > bug pointing out that trying to find malicious JS code by static code
> > > review is literally _impossible_ (and perhaps hinting that they'd have
> > > said so much earlier if someone had asked them).
> >
> > No, that's not right. There's an important distinction between
> > "finding malicious JS code" and "finding _all_ malicious JS code". The
> > latter is impossible, but the former isn't.
> >
>
> Note that malicious code here might look like this:
>
> console.log("success");
>
> It's impossible to tell by looking at the code whether that line prints a
> success message on the console, or something entirely different, such as
> running calc.exe.
>
> A better framing for the problem is "finding some arbitrary instances of
> malicious JS code" vs "finding malicious JS code". My point in the bug and
> in the discussions prior to that was that a static checker can only do the
> former, and as such, if the goal of the validator is finding malicious
> code, its effectiveness is bound to be a lint tool at best.


Indeed. And if the validator is publicly accessible, let alone has public
source code, it's likely to be straightforward for authors of malicious
code to evade the validator. All they need to do is run their code
through the validator, see what errors it spits out, and modify the
code until it no longer spits out errors.

Again, this goes back to threat model. If we're trying to make it easier
for authors to comply with our policies (and avoid writing problematic
add-ons), then a validator seems reasonable. However, if we're trying
to prevent authors of malicious add-ons from getting their add-ons
through, that seems much more questionable, for the reasons listed above.
However, once we accept that we can't stop authors who are trying
to evade detection, then treating it as a linter and allowing authors
to override it seems a lot more sensible.

-Ekr

Gavin Sharp

Nov 28, 2015, 2:07:27 AM
to Eric Rescorla, Gervase Markham, Ehsan Akhgari, dev-platform
The assumption that the validator must catch all malicious code for add-on signing to be beneficial is incorrect, and seems to be what's fueling most of this thread. Validation being a prerequisite for automatic signing is not primarily a security measure, but rather just a way of eliminating "obvious" problems (security-related or otherwise) from installed and enabled add-ons generally. With add-on signing fully implemented, if (when) malicious add-ons get automatically signed, you'll have several more effective tools to deal with them, compared to the status quo.

Gavin

> On Nov 27, 2015, at 8:49 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>
>
>> On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
>> On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp <ga...@gavinsharp.com> wrote:
>>
>> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org> wrote:
>> > > But the thing is, members of our security group are now piling into the
>> > > bug pointing out that trying to find malicious JS code by static code
>> > > review is literally _impossible_ (and perhaps hinting that they'd have
>> > > said so much earlier if someone had asked them).
>> >

Dan Stillman

Nov 28, 2015, 3:04:22 AM
to dev-platform
On 11/28/15 2:06 AM, Gavin Sharp wrote:
> The assumption that the validator must catch all malicious code for add-on signing to be beneficial is incorrect, and seems to be what's fueling most of this thread. Validation being a prerequisite for automatic signing is not primarily a security measure, but rather just a way of eliminating "obvious" problems (security-related or otherwise) from installed and enabled add-ons generally. With add-on signing fully implemented, if (when) malicious add-ons get automatically signed, you'll have several more effective tools to deal with them, compared to the status quo.

Gavin, an "assumption that the validator must catch all malicious code
for add-on signing to be beneficial" is not fueling any part of this
thread. Based on this comment, it sounds like you haven't read either my
original post [1] or my post to this list from a few hours ago [2]. It
would be helpful if you would do so before trying to engage in this
discussion.

Again, I'm not objecting to signing, automated review, or manual review
on their own — I explicitly explain their benefits in my original post —
but the pointlessly disruptive way they are currently implemented, which
stems from faulty assumptions about the capabilities of the automated
scanner.

[1]
http://danstillman.com/2015/11/23/firefox-extension-scanning-is-security-theater
[2]
https://groups.google.com/d/msg/mozilla.dev.platform/AGW3-zSBjl8/iOZ-kYSmCQAJ

Gijs Kruitbosch

Nov 28, 2015, 5:06:04 AM
On 27/11/2015 23:46, dsti...@zotero.org wrote:
> The issue here is that this new system -- specifically, an automated
> scanner sending extensions to manual review -- has been defended by
> Jorge's saying, from March when I first brought this up until
> yesterday on the hardening bug [1], that he believes the scanner can
> "block the majority of malware".

Funny how you omit part of the quote you've listed elsewhere, namely:
"block the majority of malware, but it will never be perfect".

You assert the majority of malware will be 'smarter' than the validator
expects (possibly after initial rejection) and bypass it. Jorge asserts,
from years of experience, that malware authors are lazy and the
validator has already been helpful, in conjunction with manual review.
It's not helpful to say that what Jorge is saying is "not true" - you
mean different things when you say "the majority of malware".

> Jorge has been saying he believes the scanner can block most malware
> because he genuinely doesn't understand the technical issues here, as
> his statements (and his absurd blocklisting of the PoC) make clear. It's
> hard not to make this sound like a personal attack,

This is what's so offensive. It's hard to make this not sound like a
personal attack because it *is* a personal attack. What's more, Jorge's
competence or otherwise is irrelevant to the discussion. Your
insistently bringing it up and your condescending attitude towards Jorge
and other Mozilla folks is offensive, unhelpful, and not constructive in
addressing the actual issue at hand. If we were some nameless
corporation you wouldn't even know the name of the person responsible
for the add-ons system, but that wouldn't change its quality or the
validity of its approach one iota.

As a sidenote about the blocklisting: without signing being required,
that's the only thing that could actually be done at that time. I mean,
that or close off submissions for all non-AMO-listed frontloaded
add-ons, which presumably would have made you (and many other people)
even more angry. I wasn't involved in the decision, but I don't think it
is "absurd", or that your calling attention to it (in your blogpost and
elsewhere) was anything but sensationalizing the issue.

>> [Dan] says stuff like "And
>> it's just depressing that the entire Mozilla developer community spent
>> the last year debating extension signing and having every single
>> counterargument be dismissed only to end up with a system that is
>> utterly incapable of actually combating malware."
>>
>> which basically boils down to an ad-hominem on Mozilla and an indictment
>> of "the system" and signing and the add-ons space generally, when
>> really, all we're talking about right now is how/whether to review
>> non-AMO-distributed add-ons before signing them. Dan acknowledges
>> elsewhere in his post that signing has other benefits, but the polemic
>> tone sure makes it seem like the entire plan we have here is rubbish
>> from start to finish.
>
> It's the people defending automated scanning as a meaningful
> deterrent against malware that are failing to make a distinction
> between different parts of the system, not me.

I quoted you in the paragraph above this statement of yours. It is a
matter of English spelling and grammar that your phrasing condemns all
of the signing and review changes. Stop blame-shifting.

>> There's been a general trend there that Dan sees our attempts to try to
>> do something in that space as a one-way street where Mozilla should
>> basically make sure that all add-ons that used to work and weren't
>> distributed through AMO should not be disrupted, and we have been saying
>> that it's hard to improve user experience here if there are 0
>> restrictions, and so "something's gotta give". Dan wants a system where
>> he can (grudgingly) submit his add-on to AMO, and AMO gives it back to
>> him signed (ideally automatically via APIs) and nobody from Mozilla
>> (human or otherwise) reviews his code or tells him how to do stuff.
>
> Read my post.

I read it before posting, so please don't insinuate I did not.

> I'm not calling for no signing. I'm not calling for no
> restrictions. I'm not calling for no review.

You're asking us to remove every bit of the automated review that
prevents you from publishing Zotero automatically without a blocking
human review of your codebase.

I don't know how many of those bits there are (ie which bits are
currently getting you dropped into the manual review queue), and how
much would be left, and you have not specified this. If there were just
a few, I assume you would simply have argued against those specific
rules because that would have been a simpler change to make and convince
people of, so I believe the conclusion I drew is reasonable.

In any case, if we left something of the automated review in, chances
are Zotero would just run into the same thing in a future update where
you added some more code that ran into the bit that wasn't problematic
before, right?

> I'm calling for changing
> the parts of the process that provide essentially no additional
> protection against malicious code but that are hugely disruptive to
> legitimate developers.

This sounds eminently reasonable - but doesn't correspond to the
specific parts of your original post and this reply that I have referred
to before. You could have constructively called out the automated review
requirement for frontloaded, non-AMO-distributed add-ons in an objective
and simple manner. Instead we get a long angry rant about it, mixed with
references to "security theatre" and calling people incompetent.

> But wait, now it's unreasonable

I simply said that this was what you wanted - ie no additional burden
for you compared to the status quo - and in both that and the next
paragraph, I outlined what "we" wanted, and that those two things are at
odds.

> What you have now is a system that is extremely
> disruptive to legitimate developers

I will just point out that not all legitimate developers seem to be
struggling as much with it as you do, so I don't know that your
generalization is justified. Struggling with signing, privately-run
add-ons, modifying public add-ons, the overall debate and its
consequences wrt e.g. government surveillance, centralizing a bunch of
infrastructure that used to be distributed - yes. Struggling
specifically with the automated portion of the review system for
frontloaded, non-AMO add-ons... not so much.

~ Gijs

Eric Rescorla

unread,
Nov 28, 2015, 10:32:39 AM11/28/15
to Gavin Sharp, Gervase Markham, Ehsan Akhgari, dev-platform
On Fri, Nov 27, 2015 at 11:06 PM, Gavin Sharp <ga...@gavinsharp.com> wrote:

> The assumption that the validator must catch all malicious code for add-on
> signing to be beneficial is incorrect, and seems to be what's fueling most
> of this thread.
>

I'm not sure how you got that out of my comments, since I explicitly said
the opposite: "If we're trying to make it easier for authors to comply
with our policies (and avoid writing problematic add-ons), then a
validator seems reasonable"


> Validation being a prerequisite for automatic signing is not primarily a
> security measure, but rather just a way of eliminating "obvious" problems
> (security-related or otherwise) from installed and enabled add-ons
> generally.
>

Sure, but the argument for it being a *hard* requirement is primarily a
security one, and that's the one that falls afoul of the threat model
point I made earlier.



> With add-on signing fully implemented, if (when) malicious add-ons get
> automatically signed, you'll have several more effective tools to deal with
> them, compared to the status quo.
>

Yes.

-Ekr

Eric Rescorla

unread,
Nov 28, 2015, 10:36:00 AM11/28/15
to Gijs Kruitbosch, dev-platform
On Sat, Nov 28, 2015 at 2:06 AM, Gijs Kruitbosch <gijskru...@gmail.com>
wrote:

> On 27/11/2015 23:46, dsti...@zotero.org wrote:
>
>> The issue here is that this new system -- specifically, an automated
>> scanner sending extensions to manual review -- has been defended by
>> Jorge's saying, from March when I first brought this up until
>> yesterday on the hardening bug [1], that he believes the scanner can
>> "block the majority of malware".
>>
>
> Funny how you omit part of the quote you've listed elsewhere, namely:
> "block the majority of malware, but it will never be perfect".
>
> You assert the majority of malware will be 'smarter' than the validator
> expects (possibly after initial rejection) and bypass it. Jorge asserts,
> from years of experience, that malware authors are lazy and the validator
> has already been helpful, in conjunction with manual review.


Did Jorge in fact assert that as a matter of fact or as a matter of
opinion? Maybe I missed it.

This seems like an empirical question: how many pieces of obvious malware
(in the sense that once the functionality is found it's clearly malicious
code as opposed to a mistake, not in the sense that it's easy to find the
functionality) have been found by the review process? How many pieces of
obvious malware (in the sense above) have passed the review process or
otherwise been found in the wild?

-Ekr

Kartikaya Gupta

unread,
Nov 28, 2015, 2:30:47 PM11/28/15
to Eric Rescorla, dev-platform, Gijs Kruitbosch
So it seems to me that people are actually in general agreement about
what the validator can and cannot do, but have different evaluations
of the cost-benefit tradeoff.

On the one hand we have the camp (let's say camp A) that believes the
validator provides negligible actual benefit, because it is trivial to
bypass, but at the same time provides a huge cost to add-on
developers. And on the other hand we have the camp ("camp B") that
believes the validator provides some non-negligible benefit, even
though it may significantly increase the cost to add-on developers.

From what I have been told by multiple people, Mozilla does have
actual data on the type and number of malicious add-ons in the wild,
and it cannot be published. I don't really like this since it goes
against openness and whatnot, but I can accept that there are
legitimate reasons for not publishing this data. So the question is -
do the people in camp A or the people in camp B have access to this
data? I would argue that whoever has access to the data is in a better
position to make the right call with respect to the cost-benefit
tradeoff, and everybody else should defer to them. If people in both
camps have access to the data, then clearly they have different
interpretations of the data and they should discuss it further.
Presumably they know who they are.

kats



Gavin Sharp

unread,
Nov 28, 2015, 2:34:56 PM11/28/15
to Eric Rescorla, Gervase Markham, Ehsan Akhgari, dev-platform
I wasn't suggesting that you had made that incorrect assumption.

Gavin

Eric Rescorla

unread,
Nov 28, 2015, 2:40:48 PM11/28/15
to Gavin Sharp, Gervase Markham, Ehsan Akhgari, dev-platform
How odd that your e-mail was in response to mine, then.

-Ekr

Dan Stillman

unread,
Nov 28, 2015, 2:42:42 PM11/28/15
to dev-platform
On 11/28/15 5:06 AM, Gijs Kruitbosch wrote:
> On 27/11/2015 23:46, dsti...@zotero.org wrote:
>> The issue here is that this new system -- specifically, an automated
>> scanner sending extensions to manual review -- has been defended by
>> Jorge's saying, from March when I first brought this up until
>> yesterday on the hardening bug [1], that he believes the scanner can
>> "block the majority of malware".
>
> Funny how you omit part of the quote you've listed elsewhere, namely:
> "block the majority of malware, but it will never be perfect".
>
> You assert the majority of malware will be 'smarter' than the
> validator expects (possibly after initial rejection) and bypass it.
> Jorge asserts, from years of experience, that malware authors are lazy
> and the validator has already been helpful, in conjunction with manual
> review. It's not helpful to say that what Jorge is saying is "not
> true" - you mean different things when you say "the majority of malware".

I've addressed this repeatedly. In my view, saying "it will never be
perfect" is a misleading statement that betrays a misunderstanding of
the technical issues. If the system is so trivial to bypass that anyone
with a basic grasp of JavaScript would have to essentially volunteer to
be manually reviewed, the system cannot block malware. Again, malware
that wants to be detected isn't really malware.

The idea that a malware author is going to say, "Oh, man, I was all set
to release this malware and make lots of money, but then the scanner
flagged 'nsIProcess', and 'n'.replace() + 'sIProcess' is just too much
effort, so I guess I'll just give up and go attack users some other way"
is totally absurd. You're really going to defend that claim? You can't
just say, "OK, I guess that argument doesn't really make sense, so maybe
we should reconsider what we're actually trying to block"?

>> Jorge has been saying he believes the scanner can block most malware
>> because he genuinely doesn't understand the technical issues here, as
>> his statements (and his absurd blocklisting of the PoC) make clear. It's
>> hard not to make this sound like a personal attack,
>
> This is what's so offensive. It's hard to make this not sound like a
> personal attack because it *is* a personal attack. What's more,
> Jorge's competence or otherwise is irrelevant to the discussion. Your
> insistence on bringing it up and your condescending attitude towards
> Jorge and other Mozilla folks are offensive, unhelpful, and not
> constructive in addressing the actual issue at hand. If we were some
> nameless corporation you wouldn't even know the name of the person
> responsible for the add-ons system, but that wouldn't change its
> quality or the validity of its approach one iota.

If the person who's been defending this system for the last year isn't
aware of the technical issues and has been making statements that aren't
borne out by what's technically possible, and I point that out, that's
not a personal attack. It's a relevant data point in understanding how a
bad policy might have been put into place and defended against
criticism. If that person has been the one refusing to implement a
whitelist for extensions like Zotero without understanding that, because
of what I've demonstrated, whitelisted extensions couldn't do anything
that unlisted extensions couldn't, that's relevant to the issue.

Jorge admitted he doesn't understand the PoC, so that's not really up
for debate: "I don't know if we will be able to detect the particular
workarounds implemented in this bypass add-on; I'll leave that to the
dev team to determine and file individual dependencies." [1]

> As a sidenote about the blocklisting: without signing being required,
> that's the only thing that could actually be done at that time. I
> mean, that or close off submissions for all non-AMO-listed frontloaded
> add-ons, which presumably would have made you (and many other people)
> even more angry. I wasn't involved in the decision, but I don't think
> it is "absurd", or that your calling attention to it (in your blogpost
> and elsewhere) was anything but sensationalizing the issue.

What? The only thing that could have been done? To accomplish what? It
was a proof of concept, with non-malicious example code hardcoded to
localhost. And the issues in it can't, by definition, be blocked by the
validator, which Jorge didn't understand (as he said himself on the
hardening bug). No one who understood what the PoC was or what it
implied would have blocklisted it, because it makes literally no sense
to do so.

>>> [Dan] says stuff like "And
>>> it's just depressing that the entire Mozilla developer community spent
>>> the last year debating extension signing and having every single
>>> counterargument be dismissed only to end up with a system that is
>>> utterly incapable of actually combating malware."
>>>
>>> which basically boils down to an ad-hominem on Mozilla and an
>>> indictment
>>> of "the system" and signing and the add-ons space generally, when
>>> really, all we're talking about right now is how/whether to review
>>> non-AMO-distributed add-ons before signing them. Dan acknowledges
>>> elsewhere in his post that signing has other benefits, but the polemic
>>> tone sure makes it seem like the entire plan we have here is rubbish
>>> from start to finish.
>>
>> It's the people defending automated scanning as a meaningful
>> deterrent against malware that are failing to make a distinction
>> between different parts of the system, not me.
>
> I quoted you in the paragraph above this statement of yours. It is a
> matter of English spelling and grammar that your phrasing condemns all
> of the signing and review changes. Stop blame-shifting.

OK, I think that's a willful misreading of my post, given how I clearly
explain the parts of extension signing I believe to be valuable, but if
you want me to say that that sentence could have been phrased better to
clarify that I was referring to relying on the automated scanner for
combating "a majority of malware", sure.

>> I'm not calling for no signing. I'm not calling for no
>> restrictions. I'm not calling for no review.
>
> You're asking us to remove every bit of the automated review that
> prevents you from publishing Zotero automatically without a blocking
> human review of your codebase.
>
> I don't know how many of those bits there are (ie which bits are
> currently getting you dropped into the manual review queue), and how
> much would be left, and you have not specified this. If there were
> just a few, I assume you would simply have argued against those
> specific rules because that would have been a simpler change to make
> and convince people of, so I believe the conclusion I drew is reasonable.
>
> In any case, if we left something of the automated review in, chances
> are Zotero would just run into the same thing in a future update where
> you added some more code that ran into the bit that wasn't problematic
> before, right?

I've explained why we feel non-blocking releases are necessary for
Zotero. We've gained people's trust over the last decade by being able
to quickly address issues, and we're not going to jeopardize that. We
haven't said we wouldn't respond to legitimate issues raised by AMO
editors in post-release reviews (which I actually call for in my post).

As for what, if anything, should block release without override, I'm
happy to talk specifics, but we can't have a discussion about that
without even agreeing on the point of the validator, and it seems no one
from Mozilla can even agree on this. Will it "block the majority of
malware" (Jorge)? Is it "not primarily a security measure" (Gavin)? Is
it "an advisory/helpful tool [rather] than something we could use to
automate security validation" (Matt, the author of the validator)?

In my view, if the scanner can be trivially bypassed by malware authors
and is just an advisory tool, there's no justification for blocking
release. It should be seen as a linter, providing conscientious
developers with an opportunity to fix potential (but rarely unambiguous)
issues and flagging them for later review by AMO editors. If AMO editors
feel that developers are ignoring legitimate security issues, they could
temporarily rescind the ability to publish without review. Essentially,
I'm calling for whitelist-by-default.

>> I'm calling for changing
>> the parts of the process that provide essentially no additional
>> protection against malicious code but that are hugely disruptive to
>> legitimate developers.
>
> This sounds eminently reasonable - but doesn't correspond to the
> specific parts of your original post and this reply that I have
> referred to before. You could have constructively called out the
> automated review requirement for frontloaded, non-AMO-distributed
> add-ons in an objective and simple manner. Instead we get a long angry
> rant about it, mixed with references to "security theatre" and calling
> people incompetent.

I'm sorry you felt it was an angry rant. I believe I provided context,
explained both the merits and flaws of the current system, and provided
detailed, concrete steps for how I think it could be improved to be more
consistent with Mozilla's stated goals. But yes, I'm angry that I had to
spend the last three months arguing with people about whitelisting when
it's now clear that whitelisting wouldn't allow anyone to do anything
they couldn't trivially do otherwise.

And yes, using the automated scanner to try to combat malware is, in my
view, security theater: "the practice of investing in countermeasures
intended to provide the feeling of improved security while doing little
or nothing to actually achieve it" [2]. It may be a harsh assessment,
but I don't think it's unfair.

>> What you have now is a system that is extremely
>> disruptive to legitimate developers
>
> I will just point out that not all legitimate developers seem to be
> struggling as much with it as you do, so I don't know that your
> generalization is justified. Struggling with signing, privately-run
> add-ons, modifying public add-ons, the overall debate and its
> consequences wrt e.g. government surveillance, centralizing a bunch of
> infrastructure that used to be distributed - yes. Struggling
> specifically with the automated portion of the review system for
> frontloaded, non-AMO add-ons... not so much.

I don't know how many extensions are being flagged for manual review,
true. But some certainly are, and for them it's extremely disruptive, to
the point where, in Zotero's case, we've decided that we would need to
cease development rather than be in a position where we couldn't release
timely updates to our users. Given that we now see how useless the
automated scanner is in its stated goal of actually combating malware,
I'm not sure why that wouldn't bother you.

- Dan


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867#c5
[2] https://en.wikipedia.org/wiki/Security_theater

Eric Rescorla

unread,
Nov 28, 2015, 2:52:53 PM11/28/15
to Kartikaya Gupta, dev-platform, Gijs Kruitbosch
On Sat, Nov 28, 2015 at 11:30 AM, Kartikaya Gupta <kgu...@mozilla.com>
wrote:

> So it seems to me that people are actually in general agreement about
> what the validator can and cannot do, but have different evaluations
> of the cost-benefit tradeoff.
>
> On the one hand we have the camp (let's say camp A) that believes the
> validator provides negligible actual benefit, because it is trivial to
> bypass, but at the same time provides a huge cost to add-on
> developers. And on the other hand we have the camp ("camp B") that
> believes the validator provides some non-negligible benefit, even
> though it may significantly increase the cost to add-on developers.
>
> From what I have been told by multiple people, Mozilla does have
> actual data on the type and number of malicious add-ons in the wild,
> and it cannot be published. I don't really like this since it goes
> against openness and whatnot, but I can accept that there are
> legitimate reasons for not publishing this data.
>

It may be the case that access to the raw data needs to be restricted
(though it's not clear to me why) but I don't see why the basic facts
I asked for need to be restricted, and those are all that is needed
to evaluate the question at hand.

-Ekr




Dan Stillman

unread,
Nov 28, 2015, 2:54:24 PM11/28/15
to dev-platform
On 11/28/15 2:30 PM, Kartikaya Gupta wrote:
> So it seems to me that people are actually in general agreement about
> what the validator can and cannot do, but have different evaluations
> of the cost-benefit tradeoff.
>
> On the one hand we have the camp (let's say camp A) that believes the
> validator provides negligible actual benefit, because it is trivial to
> bypass, but at the same time provides a huge cost to add-on
> developers. And on the other hand we have the camp ("camp B") that
> believes the validator provides some non-negligible benefit, even
> though it may significantly increase the cost to add-on developers.
>
> From what I have been told by multiple people, Mozilla does have
> actual data on the type and number of malicious add-ons in the wild,
> and it cannot be published. I don't really like this since it goes
> against openness and whatnot, but I can accept that there are
> legitimate reasons for not publishing this data. So the question is -
> do the people in camp A or the people in camp B have access to this
> data? I would argue that whoever has access to the data is in a better
> position to make the right call with respect to the cost-benefit
> tradeoff, and everybody else should defer to them. If people in both
> camps have access to the data, then clearly they have different
> interpretations of the data and they should discuss it further.
> Presumably they know who they are.

Unfortunately I think there is still some confusion about the
implications of my PoC [1].

But putting that aside, I don't see how historical data is valid, given
how trivial the bypass is. Since this sort of obfuscation hasn't been
necessary, there's been no reason for it to be done. But that doesn't
make it any less trivial, or require malware authors to be any less
"lazy" to get their code signed.

Certainly arguments that have been made against whitelisting over the
last few months don't hold up to scrutiny in light of the PoC, unless
you're willing to argue that someone who compromised Zotero's servers,
got into our VCS, got code past our review process, purchased Zotero
from a large research university to turn it into malware, etc., would
also be unable to dynamically generate a property name.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867#c26

Mike Hoye

unread,
Nov 28, 2015, 8:29:03 PM11/28/15
to dev-pl...@lists.mozilla.org
On 2015-11-28 2:40 PM, Eric Rescorla wrote:
> How odd that your e-mail was in response to mine, then.
>
Thanks, super helpful, really moved the discussion forward, high five.

To Ehsan's point that "malicious code here might look like this:
console.log("success"); [and] It's impossible to tell by looking at the
code whether that line prints a success message on the console, or
something entirely different, such as running calc.exe." - that's true,
but it also looks a lot like the sort of problem antivirus vendors have
been dealing with for a long time now. Turing completeness is a thing,
the halting problem exists and monsters are real, sure, but that doesn't
mean having antivirus software is a waste of time that solves no
problems and protects nobody.

One key claim Stillman made, that " A system that takes five minutes to
circumvent does not “raise the bar” in any real way", is perhaps true in
an academic sense, but not in a practical one. We know a lot more than
we did a decade ago about the nature of malicious online actors, and one
of the things we know for a fact is the great majority of malicious
actors on the 'net are - precisely as Jorge asserts - lazy, and that
minor speedbumps - sometimes as little as a couple of extra clicks - are
an effective barrier to people who are doing whatever it is they're
about to do because they're bored and it's easy. And that's most of them.

Any semicompetent locksmith can walk through your locked front door
without breaking stride, but you lock it anyway because keeping out
badly-raised teenagers is not "security theater", it's sensible,
cost-effective risk management.

- mhoye

Eric Rescorla

unread,
Nov 28, 2015, 8:50:46 PM11/28/15
to Mike Hoye, dev-platform
On Sat, Nov 28, 2015 at 5:28 PM, Mike Hoye <mh...@mozilla.com> wrote:

> On 2015-11-28 2:40 PM, Eric Rescorla wrote:
>
>> How odd that your e-mail was in response to mine, then.
>
> Thanks, super helpful, really moved the discussion forward, high five.


Glad I could help.


> To Ehsan's point that "malicious code here might look like this:
> console.log("success"); [and] It's impossible to tell by looking at the
> code whether that line prints a success message on the console, or
> something entirely different, such as running calc.exe." - that's true, but
> it also looks a lot like the sort of problem antivirus vendors have been
> dealing with for a long time now. Turing completeness is a thing, the
> halting problem exists and monsters are real, sure, but that doesn't mean
> having antivirus software is a waste of time that solves no problems and
> protects nobody.
>

Interesting you should mention antivirus. One of the advantages that
antivirus manufacturers have is that they are able to deploy signatures
for malware which is already in the wild, so that they get to update
their virus signatures after the malware is already written, and they
know that the fielded malware will be detectable. And even then, it's
well-known that malware authors test their prototype malware against
existing antivirus packages, which is part of the reason for the
relatively low effectiveness of commercial antivirus packages against
novel malware [0]. The system we are discussing here is quite similar,
except much easier for the attacker, because there is only one scanner
they need to defeat and they can download it and try it for themselves.



> One key claim Stillman made, that " A system that takes five minutes to
> circumvent does not “raise the bar” in any real way", is perhaps true in an
> academic sense, but not in a practical one. We know a lot more than we did
> a decade ago about the nature of malicious online actors, and one of the
> things we know for a fact is the great majority of malicious actors on the
> 'net are - precisely as Jorge asserts - lazy, and that minor speedbumps -
> sometimes as little as a couple of extra clicks - are an effective barrier
> to people who are doing whatever it is they're about to do because they're
> bored and it's easy. And that's most of them.
>

This might be true or it might not. I'd be interested in seeing some
evidence that it is in fact true; specifically, that the scanner catches
a lot of malware, as opposed to just broken-ware. Do you have such
evidence?

-Ekr


[0]
http://krebsonsecurity.com/2010/04/virus-scanners-for-virus-authors-part-ii/

Dan Stillman

unread,
Nov 28, 2015, 9:57:13 PM11/28/15
to dev-pl...@lists.mozilla.org
On 11/28/15 8:28 PM, Mike Hoye wrote:
> To Ehsan's point that "malicious code here might look like this:
> console.log("success"); [and] It's impossible to tell by looking at
> the code whether that line prints a success message on the console, or
> something entirely different, such as running calc.exe." - that's
> true, but it also looks a lot like the sort of problem antivirus
> vendors have been dealing with for a long time now. Turing
> completeness is a thing, the halting problem exists and monsters are
> real, sure, but that doesn't mean having antivirus software is a waste
> of time that solves no problems and protects nobody.

You can block known malware signatures with the scanner if you think
that's a good use of time. But that doesn't require blocking valid APIs
and patterns that have legitimate uses. That's what we're discussing
here. AV software doesn't result in long delays in legitimate software
updates so that AV vendors can manually review software.

> One key claim Stillman made, that " A system that takes five minutes
> to circumvent does not “raise the bar” in any real way", is perhaps
> true in an academic sense, but not in a practical one. We know a lot
> more than we did a decade ago about the nature of malicious online
> actors, and one of the things we know for a fact is the great majority
> of malicious actors on the 'net are - precisely as Jorge asserts -
> lazy, and that minor speedbumps - sometimes as little as a couple of
> extra clicks - are an effective barrier to people who are doing
> whatever it is they're about to do because they're bored and it's
> easy. And that's most of them.
>
> Any semicompetent locksmith can walk through your locked front door
> without breaking stride, but you lock it anyway because keeping out
> badly-raised teenagers is not "security theater", it's sensible,
> cost-effective risk management.

I just don't see how this argument makes any sense.

First, we're not talking about locksmiths. We're talking about people
who know how to turn doorknobs. Any JS developer is able to do this sort
of obfuscation in a minute or two.

But here's the point: just setting up the skeleton extension for my PoC
took longer than writing the examples. Actually writing any malicious
code certainly would take longer. And surely if they're as lazy as you
suggest, they're not going to bother creating a dummy extension,
creating an account, submitting it for an initial manual review (as I
suggest in my post), waiting days for approval, and adding in the
malicious code, only to then decide to go eat some Cheez-Its instead of
spending another minute modifying the code to pass the automated
scanner. Do you honestly believe that?

Even if you do — which seems crazy to me — the relevant question is
whether it's worth delaying legitimate extension updates for days at a
time and possibly driving developers away from the platform (as in the
case of Zotero) in the name of blocking that incomprehensible level of
laziness. Do you think it is?

And even if we somehow don't agree on any of that, surely we can agree
that someone who bought off or compromised a legitimate extension
developer (or was the developer to begin with) would be willing to put
in that extra minute?

Jonas Sicking

unread,
Nov 29, 2015, 6:54:31 PM11/29/15
to Mike Hoye, dev-platform
On Sat, Nov 28, 2015 at 5:28 PM, Mike Hoye <mh...@mozilla.com> wrote:
> One key claim Stillman made, that " A system that takes five minutes to
> circumvent does not “raise the bar” in any real way", is perhaps true in an
> academic sense, but not in a practical one. We know a lot more than we did a
> decade ago about the nature of malicious online actors, and one of the
> things we know for a fact is the great majority of malicious actors on the
> 'net are - precisely as Jorge asserts - lazy, and that minor speedbumps -
> sometimes as little as a couple of extra clicks - are an effective barrier
> to people who are doing whatever it is they're about to do because they're
> bored and it's easy. And that's most of them.

I don't understand this claim.

We are talking about malware authors who have decided to write a
Firefox-specific add-on, done a bunch of research into how Firefox
add-ons work, and then written and debugged a working Firefox add-on.

It does not seem likely to me that a person who has gone through all
that trouble would then simply give up after having spent time on all
the other steps, especially given that, in many ways, the other steps
are more work and take longer to accomplish.

This is also why I think the comparison to antivirus software doesn't
seem very fitting. Malware authors know that they don't have to bother
with working around antivirus software since a lot of people don't
have any antivirus at all. And so not working around it still gives
you benefit for your labor.

Compare that to Firefox add-ons, where if you don't work around the
scanner you will soon get literally zero installs. I have a hard time
imagining that malware authors are so lazy that they are ok with that
number.

/ Jonas

Gijs Kruitbosch

unread,
Nov 30, 2015, 4:53:29 AM11/30/15
to
On 29/11/2015 02:56, Dan Stillman wrote:
> You can block known malware signatures with the scanner if you think
> that's a good use of time. But that doesn't require blocking valid APIs
> and patterns that have legitimate uses. That's what we're discussing
> here. AV software doesn't result in long delays in legitimate software
> updates so that AV vendors can manually review software.

It doesn't work the same way because AV vendors have no control over
what apps the OS is letting run, but if it did, it would cause the same
problems. Quick bugzilla search:

https://bugzilla.mozilla.org/show_bug.cgi?id=1116819
https://bugzilla.mozilla.org/show_bug.cgi?id=1168855
https://bugzilla.mozilla.org/show_bug.cgi?id=1095049
https://bugzilla.mozilla.org/show_bug.cgi?id=799980

I haven't looked at the bugs, but they are a small sample of a large set
of bugs, and it's just a fact that we (just like other legitimate
software developers) occasionally get flagged by various
anti-virus/malware software.

~ Gijs

Gijs Kruitbosch

unread,
Nov 30, 2015, 5:31:39 AM11/30/15
to
We have data on pre-signing add-ons that we consider malware, but we
have no way of knowing (structurally, besides incidental reports on
bugzilla with the malware uploaded) the contents of the XPIs in question
and/or whether they would have passed the validator - they wouldn't go
through the validator, because they would have been distributed outside
of AMO (front- or sideloaded - either way we would not have source code).

So really, nobody has data on what will happen in a post-signing world.
There's an interesting question about how much the pre-signing system
can predict what will happen here, but it's sadly not as clear-cut as
you hope.

~ Gijs

Gijs Kruitbosch

unread,
Nov 30, 2015, 6:24:48 AM11/30/15
to
On 28/11/2015 19:42, Dan Stillman wrote:
> On 11/28/15 5:06 AM, Gijs Kruitbosch wrote:
>> On 27/11/2015 23:46, dsti...@zotero.org wrote:
>>> The issue here is that this new system -- specifically, an automated
>>> scanner sending extensions to manual review -- has been defended by
>>> Jorge's saying, from March when I first brought this up until
>>> yesterday on the hardening bug [1], that he believes the scanner can
>>> "block the majority of malware".
>>
>> Funny how you omit part of the quote you've listed elsewhere, namely:
>> "block the majority of malware, but it will never be perfect".
>>
>> You assert the majority of malware will be 'smarter' than the
>> validator expects (possibly after initial rejection) and bypass it.
>> Jorge asserts, from years of experience, that malware authors are lazy
>> and the validator has already been helpful, in conjunction with manual
>> review. It's not helpful to say that what Jorge is saying is "not
>> true" - you mean different things when you say "the majority of malware".
>
> I've addressed this repeatedly. In my view, saying "it will never be
> perfect" is a misleading statement that betrays a misunderstanding of
> the technical issues.

The validator has been used for years for AMO-published add-ons. It is
in that context that it did what it did, and it did so reasonably well -
there was always manual review for what the validator didn't catch. Not
all add-ons would be published on AMO, and there was no reason for
malware authors to publish on AMO except if they wanted to frontload it
and have the slightly-less-bumpy install flow that AMO offers because of
its default whitelist status in terms of sources of XPIs. Some people
did try this. I have no comprehensive data or anything, but I do not
believe that there was much if any such malware that made it to "fully
reviewed" status on AMO.

In that context, saying that the validator blocked the majority of
malware and would never be perfect was a perfectly valid statement and
does not imply any lack of technical understanding.

The change that muddies the waters here is that we're now signing
add-ons, and we're signing the ones that aren't distributed on AMO. We
do not have source code for non-AMO distributed add-ons pre-signing, and
so we have no way of knowing how much of that malware would or would not
be picked up by the validator as-is. It seems likely that, as-is, a lot
of it would be, because they have had no cause to use any techniques of
circumvention. It seems equally likely that they would proceed to use
such techniques in order to get signed anyway.

IOW, I think you're both right, and it would be helpful if you stopped
attacking Jorge and other folks because I don't think that is a
constructive discussion to be having.

> If the person who's been defending this system for the last year

Jorge does not make decisions in a vacuum, wasn't the only person who
architected the current solution, isn't some kind of lone dictator,
doesn't get to decide what the Firefox team does (the team that actually
implemented signing on the browser side), and he isn't the only person
to have "defended the system" for whatever definition of system you're
using here (it's not clear) -- and really, as I have pointed out before,
even if he was all those things, it would not make "the system" any
better or worse. Stop making this all about Jorge and his supposed lack
of understanding. As I have repeatedly said, it is not helpful.


> As for what, if anything, should block release without override, I'm
> happy to talk specifics, but we can't have a discussion about that
> without even agreeing on the point of the validator,

Why do we have to agree (and who are 'we'?) on the 'point' of the
validator? As it is, the people in this discussion without access to the
validator's results for Zotero (which is almost everybody) have no idea
what you're running into and which things are bothering you because they
flag up false positives. For all we know you obfuscate all your code and
use eval(atob(...)) all over the place. More seriously, all I remember
explicitly being mentioned is assigning to innerHTML in documents that
are not (and won't be) in any docshell and therefore shouldn't be
exploitable. It would be good to get a broader idea of the issues you're
actually running into (as I'm assuming there are more than just this
one, particularly because this one would be pretty easy to fix on your
side, as you say the line in question only runs under other browsers).

In any case, if you want to insist, here's my view: the point of the
validator is to raise the bar for both malware and for trivially-found
security holes in otherwise benign (or seemingly benign / greyware, if
you assume collusion between the 'benign' add-on submitted and the
website that will use that add-on's security hole for privilege
escalation of their website / remote code) add-ons. Raising the bar is
helpful so that editors don't waste time reviewing script kiddie or
copy-pasted / metasploit-style submissions of bad add-ons (assuming the
validator gets updated consistently, anything with 0 creativity compared
with a previous submission should be detectable in some way). The
validator won't be able to defeat a concerted malicious actor by itself,
and we should therefore manually review all add-ons in addition to what
the validator does. Yes, that means delays for everyone. I personally do
not believe that's avoidable if we want to make any kind of useful
guarantee about the add-ons.

For AMO-based add-ons, the validator should additionally help people
deal with Firefox code changes to interfaces, e10s, etc.

> In my view, if the scanner can be trivially bypassed by malware authors
> and is just an advisory tool, there's no justification for blocking
> release.

Can we be specific, please? Release of new add-ons? Updates? Both? Just
AMO add-ons? Frontloaded and sideloaded non-AMO add-ons?

> It should be seen as a linter, providing conscientious
> developers with an opportunity to fix potential (but rarely unambiguous)
> issues and flagging them for later review by AMO editors. If AMO editors
> feel that developers are ignoring legitimate security issues, they could
> temporarily rescind the ability to publish without review. Essentially,
> I'm calling for whitelist-by-default.

As I've said before, I think this provides no improvement over the
pre-signing era. If we allow just anyone to register as a new developer
with a throwaway email address and automatically get whatever kind of
rubbish signed, with an API to do that to boot, I don't see what the
point of signing would be. Getting post-facto blocking in place for
those add-ons is of comparatively little use if (a) by that time the
add-on can e.g. stop Firefox updating and/or fetching the blocklist; (b)
people can resubmit the exact same thing with a different id and get
signed - all fully automatically (note that you're saying that the
validator shouldn't be allowed to stop signing, so even if we had
patterns of bad add-ons encoded in the validator, this would be possible).

> And yes, using the automated scanner to try to combat malware is, in my
> view, security theater: "the practice of investing in countermeasures
> intended to provide the feeling of improved security while doing little
> or nothing to actually achieve it" [2]. It may be a harsh assessment,
> but I don't think it's unfair.

It is unfair when people have repeatedly pointed out that it has
actually achieved improved security. Perhaps not by as much as you would
like (ie not enough to dispense with manual review), but it raises the bar.

>>> What you have now is a system that is extremely
>> disruptive to legitimate developers
>>
>> I will just point out that not all legitimate developers seem to be
>> struggling as much with it as you do, so I don't know that your
>> generalization is justified. Struggling with signing, privately-run
>> add-ons, modifying public add-ons, the overall debate and its
>> consequences wrt e.g. government surveillance, centralizing a bunch of
>> infrastructure that used to be distributed - yes. Struggling
>> specifically with the automated portion of the review system for
>> frontloaded, non-AMO add-ons... not so much.
>
> I don't know how many extensions are being flagged for manual review,
> true.

This seems like something we should be able to get data about. (I do not
have such data.) Have you asked anyone?

> But some certainly are, and for them it's extremely disruptive, to
> the point where, in Zotero's case, we've decided that we would need to
> cease development rather than be in a position where we couldn't release
> timely updates to our users.

You have also dismissed all suggestions that you could fix at least some
of the issues the validator flags up (as noted above, I don't know what
all of them are, so I don't know if *all* of them are fixable, though
that seems likely enough to me), and that in case of "emergency" fixes,
you (and anyone else to whom this would apply) could email the AMO
editors list to get an expedited review. We have reviewers in a number
of timezones, both paid and volunteer, and so response times are usually
pretty quick.

In other words, it is not like we have not provided you options. You
don't think the options are good enough, but it's disingenuous to
suggest that we haven't been trying to help. What we haven't been
willing to do is dispense with automated and manual review altogether.

~ Gijs

Ehsan Akhgari

unread,
Nov 30, 2015, 9:40:07 AM11/30/15
to Gavin Sharp, Eric Rescorla, Gervase Markham, dev-platform
On 2015-11-28 2:06 AM, Gavin Sharp wrote:
> The assumption that the validator must catch all malicious code for
> add-on signing to be beneficial is incorrect, and seems to be what's
> fueling most of this thread.

It would be really helpful if we can get past defending the add-on
validator; the only thing that everyone in this thread seems to agree on
is the list of things it is capable of doing.

The problem is that how we're using it seems not to make sense given
what it can and does do.

> Validation being a prerequisite for
> automatic signing is not primarily a security measure, but rather just a
> way of eliminating "obvious" problems (security-related or otherwise)
> from installed and enabled add-ons generally.

Successful validation is currently not merely a prerequisite for
automatic signing of non-AMO add-ons; it is also a sufficient condition.
Let me repeat the part of my previous response which you didn't reply to:

"The specific problem here is that we allow automatic signing of
extensions once they pass the add-on validator checks, and we allow our
users to run signed extensions without any other checks. Therefore, the
current system is vulnerable to attacks such as what Dan's PoC extension
has demonstrated."

Perhaps that is not what was supposed to happen, but it is in fact what
we're doing, and it's definitely the wrong thing to do.

> With add-on signing fully
> implemented, if (when) malicious add-ons get automatically signed,
> you'll have several more effective tools to deal with them, compared to
> the status quo.
>
> Gavin
>
> On Nov 27, 2015, at 8:49 PM, Eric Rescorla <e...@rtfm.com> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 4:09 PM, Ehsan Akhgari
>> <ehsan....@gmail.com> wrote:
>>
>> On Fri, Nov 27, 2015 at 10:50 AM, Gavin Sharp
>> <ga...@gavinsharp.com> wrote:
>>
>> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org> wrote:
>> > > But the thing is, members of our security group are now piling into the
>> > > bug pointing out that trying to find malicious JS code by static code
>> > > review is literally _impossible_ (and perhaps hinting that they'd have
>> > > said so much earlier if someone had asked them).
>> >
>> > No, that's not right. There's an important distinction between
>> > "finding malicious JS code" and "finding _all_ malicious JS code". The
>> > latter is impossible, but the former isn't.
>> >
>>
>> Note that malicious code here might look like this:
>>
>> console.log("success");
>>
>> It's impossible to tell by looking at the code whether that line
>> prints a
>> success message on the console, or something entirely different,
>> such as
>> running calc.exe.
>>

Ehsan Akhgari

unread,
Nov 30, 2015, 9:40:13 AM11/30/15
to Mike Hoye, dev-pl...@lists.mozilla.org
On 2015-11-28 8:28 PM, Mike Hoye wrote:
> On 2015-11-28 2:40 PM, Eric Rescorla wrote:
>> How odd that your e-mail was in response to mine, then.
>>
> Thanks, super helpful, really moved the discussion forward, high five.
>
> To Ehsan's point that "malicious code here might look like this:
> console.log("success"); [and] It's impossible to tell by looking at the
> code whether that line prints a success message on the console, or
> something entirely different, such as running calc.exe." - that's true,
> but it also looks a lot like the sort of problem antivirus vendors have
> been dealing with for a long time now. Turing completeness is a thing,
> the halting problem exists and monsters are real, sure, but that doesn't
> mean having antivirus software is a waste of time that solves no
> problems and protects nobody.

As others have pointed out, your antivirus analogy is really irrelevant
here. Also, may I suggest that starting to say things such as "Turing
completeness is a thing... and monsters are real" in the discussion
related to an actual security issue trivializes the discussion to a
point where important issues will get ignored, as I've seen happen a few
times before in this thread?

> One key claim Stillman made, that " A system that takes five minutes to
> circumvent does not “raise the bar” in any real way", is perhaps true in
> an academic sense, but not in a practical one. We know a lot more than
> we did a decade ago about the nature of malicious online actors, and one
> of the things we know for a fact is the great majority of malicious
> actors on the 'net are - precisely as Jorge asserts - lazy, and that
> minor speedbumps - sometimes as little as a couple of extra clicks - are
> an effective barrier to people who are doing whatever it is they're
> about to do because they're bored and it's easy. And that's most of them.

I agree with Jonas about this. Even if all of the malware we have seen
on AMO so far has been stuff done by script kiddies, the right way to
think about this is "maybe we've not seen the more sophisticated ones."
It would be terrible to base the security of our add-on ecosystem on
assumptions about the laziness of the malicious actors.

(Also, anecdotally, some of the exploit code against Firefox from web
pages that I have seen myself is among the most sophisticated code and
tricks I've seen in my career so far.)

> Any semicompetent locksmith can walk through your locked front door
> without breaking stride, but you lock it anyway because keeping out
> badly-raised teenagers is not "security theater", it's sensible,
> cost-effective risk management.

Please see my reply to Gavin on Friday. To fit the status quo into this
analogy, we're currently handing out copies of our front door key to
strangers who successfully fill out a questionnaire.

Cheers,
Ehsan

David Rajchenbach-Teller

unread,
Nov 30, 2015, 10:29:20 AM11/30/15
to Ehsan Akhgari, Mike Hoye, dev-pl...@lists.mozilla.org
Could we perhaps organize a MozLando workshop to discuss add-ons security?

Thomas Zimmermann

unread,
Nov 30, 2015, 10:30:30 AM11/30/15
to Gavin Sharp, Gervase Markham, dev-platform
Hi

On 27.11.2015 at 16:50, Gavin Sharp wrote:
> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org> wrote:
>> But the thing is, members of our security group are now piling into the
>> bug pointing out that trying to find malicious JS code by static code
>> review is literally _impossible_ (and perhaps hinting that they'd have
>> said so much earlier if someone had asked them).
> No, that's not right. There's an important distinction between
> "finding malicious JS code" and "finding _all_ malicious JS code". The
> latter is impossible, but the former isn't.
>
> Proving "the validator won't catch everything" isn't particularly
> relevant when it isn't intended to, in the overall add-on signing
> system design.

I think the fact that the validator (or manual review) cannot catch
everything is very relevant.

Users cannot rely on the review process (automatic or manual), because
it can never catch all bugs (malicious or not). So users still have to
rely on an extension's developers to get their code into good shape,
just as is currently the case. And I'd guess that malicious code will
get more sophisticated as the review procedures improve.

Another point is that one never knows how close to 'good' an extension
or a review is, because this would require knowledge about the absolute
number of bugs in the extension. Getting this number requires a perfect
validator. So all bugs from a review might get fixed, but the overall
extension is still in the 'crap territory'. I'm a bit surprised that
this hasn't been mentioned here yet.

Therefore I'm skeptical about the effective benefit for the users. The
mandatory review seems to create a promise of security that it cannot
fulfill. Reviews and validation are good things, but holding back an
update for a pending review doesn't seem helpful.

Best regards
Thomas


Gavin Sharp

unread,
Nov 30, 2015, 10:40:59 AM11/30/15
to Thomas Zimmermann, dev-platform, Gervase Markham
It looks to me like you're arguing about a separate point (AMO review
requirements for add-on updates), when the subject at hand is the add-on
signing system's reliance on the AMO validator as the only prerequisite for
automatic signing.

Gavin

Gavin Sharp

unread,
Nov 30, 2015, 10:46:07 AM11/30/15
to Ehsan Akhgari, Gervase Markham, Eric Rescorla, dev-platform
> and it's definitely the wrong thing to do.

Fundamentally the add-on signing system was designed with an important
trade-off in mind: security (ensuring no malicious add-ons are
installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
that building and distributing add-ons is as easy as it can be).

If your proposed alternative plan is "get rid of automatic signing", then
we know that it's going to significantly hamper Mozilla's ability to
maintain a healthy add-on ecosystem, and harm what were considered some
important add-on use cases. I don't think it strikes the right balance.

If your proposed alternative plan is something else, maybe it would help to
clarify it.

Gavin

On Mon, Nov 30, 2015 at 9:33 AM, Ehsan Akhgari <ehsan....@gmail.com>
wrote:
>>> > On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org
>>> <mailto:ge...@mozilla.org>> wrote:
>>> > > But the thing is, members of our security group are now piling
>>> into the
>>> > > bug pointing out that trying to find malicious JS code by static
>>> code
>>> > > review is literally _impossible_ (and perhaps hinting that
>>> they'd have
>>> > > said so much earlier if someone had asked them).
>>> >
>>> > No, that's not right. There's an important distinction between
>>> > "finding malicious JS code" and "finding _all_ malicious JS code".
>>> The
>>> > latter is impossible, but the former isn't.
>>> >
>>>
>>> Note that malicious code here might look like this:
>>>
>>> console.log("success");
>>>
>>> It's impossible to tell by looking at the code whether that line
>>> prints a
>>> success message on the console, or something entirely different,
>>> such as
>>> running calc.exe.
>>>
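
To make the quoted point concrete, here is a minimal sketch (all names
hypothetical) of how an innocuous-looking call site can be repurposed
elsewhere in the same add-on:

    // Hypothetical sketch: somewhere else in the add-on, console.log
    // is silently wrapped, so the reviewed line no longer does only
    // what a reader would assume.
    var realLog = console.log;
    console.log = function (msg) {
      doSomethingDangerous();      // stand-in for arbitrary behavior
      realLog.call(console, msg);  // still prints, so nothing looks odd
    };

    // ...hundreds of lines later, in another file:
    console.log("success");        // a reviewer sees a harmless log call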

Thomas Zimmermann

Nov 30, 2015, 11:00:37 AM
to Gavin Sharp, dev-platform, Gervase Markham
Hi

Am 30.11.2015 um 16:40 schrieb Gavin Sharp:
> It looks to me like you're arguing about a separate point (AMO review
> requirements for add-on updates), when the subject at hand is the add-on
> signing system's reliance on the AMO validator as the only prerequisite for
> automatic signing.

OK. Or maybe I used the term 'update' a bit sloppily. My question is:
is it worth holding back an extension because of a pending review
(either by a tool or a human)? I guess updating existing add-ons is the
more common case, compared to signing new ones.

Your reply makes me think that the whole discussion implicitly assumes
that a manual review can fix any problems with the automated tools, or
is always better. I would not agree with that. Manual reviews depend a
lot on the reviewer and the reviewer's state of mind during the review.
With tools, at least you know what you get.

Best regards
Thomas

>
> Gavin
>
> On Mon, Nov 30, 2015 at 10:30 AM, Thomas Zimmermann <tzimm...@mozilla.com
>> wrote:
>> Hi
>>
>> Am 27.11.2015 um 16:50 schrieb Gavin Sharp:
>>> On Fri, Nov 27, 2015 at 7:16 AM, Gervase Markham <ge...@mozilla.org>
>> wrote:
>>>> But the thing is, members of our security group are now piling into the
>>>> bug pointing out that trying to find malicious JS code by static code
>>>> review is literally _impossible_ (and perhaps hinting that they'd have
>>>> said so much earlier if someone had asked them).
>>> No, that's not right. There's an important distinction between
>>> "finding malicious JS code" and "finding _all_ malicious JS code". The
>>> latter is impossible, but the former isn't.
>>>

Jonathan Kew

Nov 30, 2015, 11:15:52 AM
to Gavin Sharp, Ehsan Akhgari, dev-platform, Eric Rescorla, Gervase Markham
On 30/11/15 15:45, Gavin Sharp wrote:
>> and it's definitely the wrong thing to do.
>
> Fundamentally the add-on signing system was designed with an important
> trade-off in mind: security (ensuring no malicious add-ons are
> installed/executed) vs. maintaining a healthy add-on ecosystem (ensuring
> that building and distributing add-ons is as easy as it can be).
>
> If your proposed alternative plan is "get rid of automatic signing", then
> we know that it's going to significantly hamper Mozilla's ability to
> maintain a healthy add-on ecosystem, and harm what were considered some
> important add-on use cases. I don't think it strikes the right balance.
>
> If your proposed alternative plan is something else, maybe it would help to
> clarify it.
>

Perhaps if there were a mechanism whereby "trusted" add-on developers
could have their add-ons -- or even just updates for
previously-reviewed-and-signed add-ons -- automatically signed without
having to jump through the validator/review hoops each time?

How would a developer acquire "trusted" status? By demonstrating a track
record of producing add-ons that pass AMO review -- which may be a
combination of automatic validation and/or human review.

And of course any add-on developer who is found to have abused their
"trusted" status to sign and deploy malicious code would have that
status revoked, in addition to the malicious add-on being blocked.
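
A sketch of the decision logic this would imply (purely hypothetical
names, not an actual AMO API):

    // Hypothetical policy sketch: trusted developers skip the
    // validator gate; everyone else goes through the current flow.
    function signingPath(developer, addon) {
      if (developer.trusted) {
        return "sign-now-review-later";  // post-facto spot checks
      }
      return runValidator(addon).passed  // runValidator is hypothetical
        ? "sign-now"
        : "manual-review-queue";
    }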

ISTM this would maintain most of the intended benefits of the signing
system, while substantially smoothing the path for developers such as
Dan who need to deliver frequent updates to their users.

Feasible?

JK

Gavin Sharp

Nov 30, 2015, 11:25:19 AM
to Jonathan Kew, Gervase Markham, Ehsan Akhgari, dev-platform, Eric Rescorla
That's one of the suggestions Dan Stillman makes in his post, and it
seems like a fine idea to me.

Gavin

Bobby Holley

Nov 30, 2015, 2:32:00 PM
to Gavin Sharp, dev-platform, Ehsan Akhgari, Gervase Markham, Eric Rescorla, Jonathan Kew
(Gingerly wading into this thread and hoping not to get sucked in)

Given the fundamental limits of static analysis, dynamic analysis might be
a better approach. I think we can do a reasonable job (with the help of
interpositions) of monitoring the various escape points at which addon code
might do arbitrary dangerous things, without actually preventing it from
doing those things in a way that would break lots of addons. We could then
keep an eye on what addons are doing in the wild, and revoke the signatures
for the addon / developer if we find them to be misbehaving.

I proposed this in [1] and it got filed separately as [2]. Detailed
follow-up discussion is probably better to do in that bug.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1199628#c26
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1227464
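
As a rough illustration of what interposition could look like (all
names below are hypothetical, not an actual Gecko API), a wrapper can
record uses of a sensitive entry point without changing its behavior:

    // Minimal sketch: log calls to a sensitive function for later
    // review, while leaving the add-on's behavior untouched.
    function interpose(obj, method, report) {
      var original = obj[method];
      obj[method] = function () {
        report(method, arguments);              // record for review
        return original.apply(this, arguments); // behavior unchanged
      };
    }

    // Usage sketch: watch a hypothetical process-launching API.
    interpose(someAddonScope, "launchProcess", function (name, args) {
      recordForReview("add-on called " + name); // hypothetical reporter
    });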

David Rajchenbach-Teller

Nov 30, 2015, 2:51:02 PM
to Bobby Holley, Gavin Sharp, Gervase Markham, Ehsan Akhgari, dev-platform, Eric Rescorla, Jonathan Kew

Ehsan Akhgari

Nov 30, 2015, 2:53:21 PM
to Gavin Sharp, Jonathan Kew, Gervase Markham, Eric Rescorla, dev-platform
That sounds like a good idea to me as well.

Ehsan Akhgari

Nov 30, 2015, 2:53:22 PM
to David Rajchenbach-Teller, Mike Hoye, dev-pl...@lists.mozilla.org
On 2015-11-30 10:29 AM, David Rajchenbach-Teller wrote:
> Could we perhaps organize a MozLando workshop to discuss add-ons security?

I think you need to reach out to the add-ons team. I was not involved
in any of the design process; I just happened to notice the same issues
as Dan did, after the fact.

Dan Stillman

Nov 30, 2015, 2:57:42 PM
to dev-pl...@lists.mozilla.org
On 11/30/15 6:24 AM, Gijs Kruitbosch wrote:
> On 28/11/2015 19:42, Dan Stillman wrote:
>> As for what, if anything, should block release without override, I'm
>> happy to talk specifics, but we can't have a discussion about that
>> without even agreeing on the point of the validator,
>
> Why do we have to agree (and who are 'we'?) on the 'point' of the
> validator?

Because we're having a discussion about policy, and you can't make good
policy without understanding its goals, and actual capabilities have an
effect on whether those goals can be attained?

> For all we know you obfuscate all your code and use eval(atob(...))
> all over the place.

Zotero has been manually reviewed and approved countless times,
including a month ago. Our ability to get approved is not the issue here.

> In any case, if you want to insist, here's my view: the point of the
> validator is to raise the bar for both malware and for trivially-found
> security holes in otherwise benign (or seemingly benign / greyware, if
> you assume collusion between the 'benign' add-on submitted and the
> website that will use that add-on's security hole for privilege
> escalation of their website / remote code) add-ons. Raising the bar is
> helpful so that editors don't waste time reviewing script kiddie or
> copy-pasted / metasploit-style submissions of bad add-ons (assuming
> the validator gets updated consistently, anything with 0 creativity
> compared with a previous submission should be detectable in some way).

An initial manual review, which I suggest in my post, would slow down
script kiddies.

Blocking known malware signatures, which, as I've said, is an option,
could perhaps stop extremely lazy copy/paste jobs.

The scanner does not otherwise "raise the bar" — that's the point.
Anyone else who wants to release malware has already spent much longer
creating the extension, waiting for an initial review, and writing the
malicious code than they would spend bypassing the validator, which
requires either literally no work (Example 1) or a minute of work
(Examples 2/3, dynamically generated properties to get to eval or
anything else).
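
For illustration, a sketch of the kind of one-minute bypass being
described (hypothetical code): a computed property name reaches eval
without the literal token ever appearing in the source, so a static
pattern match never fires.

    // The string "eval" is assembled at runtime, so a scanner
    // searching the source for eval calls sees nothing suspicious.
    var key = "ev" + "al";
    var indirect = window[key];  // property lookup by computed name
    indirect(payload);           // payload: any attacker-supplied string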

As for trivially found security holes, I don't believe that blocking
ambiguous but possibly perfectly legitimate code patterns justifies
blocking releases of front-loaded unlisted extensions. These will be
flagged, and if AMO editors reviewing the code after the fact (even in
the same ~4-day window) feel the developers are ignoring legitimate
security issues, they can temporarily require manual review for updates
until the issues are fixed, with the threat of a version blocklist
depending on severity.

Note that after-the-fact reviews would still benefit from all the same
reviewing features that have been or are being built, such as the
ability of reviewers to whitelist specific instances of code patterns so
they're not repeatedly shown to the reviewer.

> The validator won't be able to defeat a concerted malicious actor by
> itself, and we should therefore manually review all add-ons in
> addition to what the validator does. Yes, that means delays for
> everyone. I personally do not believe that's avoidable if we want to
> make any kind of useful guarantee about the add-ons.

I'm pretty sure manual review of all extension updates was dismissed as
unrealistic a year or so ago, and I don't think you can argue for it
without misunderstanding (or not caring about) the needs of many
unlisted extensions and the reasons they're unlisted to begin with. It
would drive many developers away from the platform. It's also far from a
safety guarantee, as Ehsan has explained — tricking a human reviewer is
certainly tougher than bypassing the automated scanner, but it's still
pretty easy. In an extension of any significant size, it would be trivial.

> For AMO-based add-ons, the validator should additionally help people
> deal with Firefox code changes to interfaces, e10s, etc.

We're not discussing AMO extensions. But these benefits would apply to
unlisted extensions too without forcing manual review. If we operate
under the assumption that many/most extension developers are
conscientious (which perhaps you're not willing to do), that's a real
benefit.

>> In my view, if the scanner can be trivially bypassed by malware authors
>> and is just an advisory tool, there's no justification for blocking
>> release.
>
> Can we be specific, please? Release of new add-ons? Updates? Both?
> Just AMO add-ons? Frontloaded and sideloaded non-AMO add-ons?

I think I've been pretty clear on this. I don't think there's a
justification for blocking updates to front-loaded unlisted extensions.
I've suggested an initial manual review of front-loaded unlisted
extensions, but that's only to prevent id churn.

>> It should be seen as a linter, providing conscientious
>> developers with an opportunity to fix potential (but rarely unambiguous)
>> issues and flagging them for later review by AMO editors. If AMO editors
>> feel that developers are ignoring legitimate security issues, they could
>> temporarily rescind the ability to publish without review. Essentially,
>> I'm calling for whitelist-by-default.
>
> As I've said before, I think this provides no improvement over the
> pre-signing era. If we allow just anyone to register as a new
> developer with a throwaway email address and automatically get
> whatever kind of rubbish signed, with an API to do that to boot, I
> don't see what the point of signing would be. Getting post-facto
> blocking in place for those add-ons is of comparatively little use if
> (a) by that time the add-on can e.g. stop Firefox updating and/or
> fetching the blocklist; (b) people can resubmit the exact same thing
> with a different id and get signed - all fully automatically (note
> that you're saying that the validator shouldn't be allowed to stop
> signing, so even if we had patterns of bad add-ons encoded in the
> validator, this would be possible).

I've suggested an initial manual review to reduce id churn and said you
could block known malware signatures if you thought it would be useful.

But this is what's crazy: you're still arguing that automated signing
can stop someone from attacking the Firefox blocklist, or anything else!
Pretty much everyone else on this list has acknowledged that, no, it
can't do that — it can't stop someone from doing anything, because it
can be trivially bypassed.

Enforcing add-on ids, having a record of code to review, forcing manual
review of side-loaded extensions: these are useful improvements from the
pre-signing era. You have to take what you can get.

>> And yes, using the automated scanner to try to combat malware is, in my
>> view, security theater: "the practice of investing in countermeasures
>> intended to provide the feeling of improved security while doing little
>> or nothing to actually achieve it" [2]. It may be a harsh assessment,
>> but I don't think it's unfair.
>
> It is unfair when people have repeatedly pointed out that it has
> actually achieved improved security. Perhaps not by as much as you
> would like (ie not enough to dispense with manual review), but it
> raises the bar.

Who has made that claim, and where? I haven't seen anyone argue that
automated scanning as it exists currently has stopped malware. It would
be a bizarre claim to make, given that signing isn't even enforced yet.

>> I don't know how many extensions are being flagged for manual review,
>> true.
>
> This seems like something we should be able to get data about. (I do
> not have such data.) Have you asked anyone?

If it's only Zotero that's affected by this, then we should have been
whitelisted three months ago when we first asked about it. That would've
left Zotero users — and only Zotero users — in the exact same situation
they've been in for the last decade (or theoretically a little better,
by virtue of AMO having a copy of the code). If it's other extensions —
say, the other extensions I see in the unlisted review queue counter
while waiting days for approval — then it's affecting more people, with
who knows how many millions of users behind them. There's also an
opportunity cost: if the process is too onerous, no one is going to
bother trying to build the next Zotero on this platform.

Either way, if you can't actually accomplish the stated goal of
automated signing — combating malware — why would you insist on impeding
any legitimate developers?

> You have also dismissed all suggestions that you could fix at least
> some of the issues the validator flags up (as noted above, I don't
> know what all of them are, so I don't know if *all* of them are
> fixable, though that seems likely enough to me), and that in case of
> "emergency" fixes, you (and anyone else to whom this would apply)
> could email the amo editors list to get an expedited review. We have
> reviewers in a number of timezones, both paid and volunteer, and so
> response times are usually pretty quick.

I've addressed both of these.

We're flagged for things like nsIProcess, js-ctypes, evalInSandbox, safe
and unavoidable uses of innerHTML, checks for "about:blank", and various
other things. So no, we can't rewrite them. (I mean, we could rewrite
them to bypass the validator, but I assume that's not what you mean.)
Not to mention that the validator can't even run on Zotero without
timing out, as has been the case for most of the last four years, though
I assume that will be fixed with the new JS version. As I've said, if
this was just a matter of rewriting some
setTimeout(ZoteroPane_Local.updateToolbarPosition) calls, we would do
so, but that wouldn't fix the problem.
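
To give a flavor of the problem, here is a hypothetical sketch (not
actual Zotero code) of a safe use that a pattern-based scanner still
flags:

    // The assigned markup is a constant string containing no
    // untrusted input, but a validator that flags every innerHTML
    // assignment will flag this line regardless of context.
    var panel = document.getElementById("status-panel"); // hypothetical id
    panel.innerHTML = "<em>Saving...</em>";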

As for emergency fixes, as I've explained, every minute that a version
with a critical issue is available means hundreds more people — who rely
on Zotero for time-sensitive work — will be stuck with the bad version
for days. If it's a less-than-critical issue that's nevertheless
impeding users' work, we'll often have a beta version available for them
within minutes. We're not going to put ourselves in a position where
we're frantically emailing amo-editors at 3 a.m. on a Saturday night
begging for them to let a bug fix through. We'd just leave the platform.

Here's what you seem to be forgetting: we don't have to develop for
Firefox. Many people appreciate our full-featured Firefox version — our
original version, and why we're on the Mozilla platform to begin with —
and would be very sad to lose it, but they'll use Zotero regardless.
They come to our site and download Zotero, and have for the last decade.
If we say that, because of Mozilla's policies, we're no longer able to
provide what we believe to be an acceptable level of support for our
full-featured Firefox extension, they'll use our standalone version and
one of our lightweight browser extensions. Based on current browser
statistics, they'll probably do so in a browser other than Firefox. As
far as we can tell, the majority of Zotero for Firefox users use Firefox
because of Zotero rather than the other way around.

If you felt losing an extension like Zotero — and losing many of its
users to other browsers — was worth the benefits provided by the
automated scanner, that would be one thing. But it's just bizarre and
sad that you'd be comfortable losing Zotero or other ambitious
extensions, and so opposed to the alternatives I suggest, in light of
clear evidence of just how useless the scanner will be in actually
protecting users.

> What we haven't been willing to do is dispense with automated and
> manual review altogether.

Neither of which I've suggested. I've said that automated review
shouldn't block release of front-loaded unlisted updates, and that, for
flagged issues in legitimate extensions that won't try to bypass the
validator, manual review would be just as effective after the fact as
before.

By the way, I do appreciate that you actually proposed whitelist rules
on the AMO list. I've now shown that even the idea of a tightly
restricted whitelist doesn't make sense, since whitelisted extensions
wouldn't be able to do anything that non-whitelisted extensions couldn't
trivially do, but if some of your colleagues had actually moved forward
with your suggestions, things probably wouldn't have gotten to this point.

- Dan

emilian...@iris-advies.com

Nov 30, 2015, 3:24:46 PM
to
On Monday, November 30, 2015 at 8:57:42 PM UTC+1, Dan Stillman wrote:
> On 11/30/15 6:24 AM, Gijs Kruitbosch wrote:
> > This seems like something we should be able to get data about. (I do
> > not have such data.) Have you asked anyone?
>
> If it's only Zotero that's affected by this, then we should have been
> whitelisted three months ago when we first asked about it.

And the number is skewed in any case, as there will be extensions (such as mine) that neuter functionality in order to avoid having to go through manual review. So I would be put in the "not affected" pile even though I *am* affected.

Dan Stillman

Nov 30, 2015, 3:36:09 PM
to dev-platform
Just to give some context here, we've been asking for a "trusted author"
whitelist for three months. Gijs even helpfully proposed specific rules.
The reason things came to this point is that it was still being argued
as of last week that the whitelist was inherently more dangerous
because it would allow whitelisted developers to do malicious things
and evade detection. We can now see that's not true — non-whitelisted
extensions can do the same thing, trivially.

We've assumed that Zotero would be whitelisted, but that doesn't help
other legitimate extension developers if the whitelist is designed with
an assumption that a whitelisted extension is more dangerous. The
proposed rules were still going to restrict whitelist status to
extensions with large numbers of users.

Given what we know now, I can't see a justification for not
"whitelisting" (meaning allowing an automated review override) any
demonstrably legitimate developer. In terms of malware, you'd have to
argue that someone who bought, compromised, or developed a legitimate
extension would then be unable or unwilling to rewrite one line to get
code past the validator. In terms of potentially insecure code (e.g.,
innerHTML), you'd have to argue that those patterns posed sufficient
risk to users over the next several days that blocking an extension
update (which possibly contained other important bug fixes that
definitely affected users) made sense, as opposed to AMO reviewers
looking at them after the fact and asking for immediate fixes or
blocklisting depending on severity (and rescinding "whitelist"
privileges if there's a pattern of ignoring issues).

I've gone further and argued that, given the ease of a validator bypass,
just doing an initial manual review for the first release of any
front-loaded unlisted extension would be the meaningful blocking step,
but that's not as important. I just don't want to see obviously
legitimate developers of existing extensions blocked for no reason.

Adam Roach

Dec 2, 2015, 11:38:15 AM
to David Rajchenbach-Teller, dev-pl...@lists.mozilla.org
In case you missed it, Kev Needham (the Add-Ons Product Manager) has put
together a blog post on this topic:
https://blog.mozilla.org/addons/2015/12/01/de-coupling-reviews-from-signing-unlisted-add-ons/

He also sent the same information to the addons-user-experience mailing
list:
https://groups.google.com/forum/#!topic/mozilla.addons.user-experience/iwjQbLIb-Fo

I recommend that interested parties who wish to continue the discussion
respond in one of those two forums.

Thanks!

--
Adam Roach
Principal Platform Engineer
a...@mozilla.com
+1 650 903 0800 x863

Jorge Villalobos

Dec 3, 2015, 9:23:00 AM
to
We've added a session on Thursday afternoon. You can find it on Sched.

Jorge