On 11/25/15 9:59 PM, Mike Connor wrote:
> On Wed, Nov 25, 2015 at 5:18 PM, Dan Stillman <dsti...@zotero.org> wrote:
> On 11/25/15 4:21 PM, Mike Connor wrote:
> 1) Mozilla's position is not now, nor has it ever been, that
> the validator represents a panacea for malware. It's a useful
> tool for catching common bad patterns, and there's tons of
> room for improvement, but it'd be absurd to claim 100%
> protection is possible.
> Let's please not rewrite history.
> As I note in my post, it was Jorge who said the scanner could
> "block the majority of malware" when I first raised these concerns
> 9 months ago.
> I think it's a substantial stretch to take that statement, without
> context, and interpret it as meaning scanning is a panacea or 100% reliable.
Huh? "Panacea" and "100% protection" were your words. Here's what Jorge
said, when I asked about this sort of trivial string concatenation: "We
believe we can refine our detection system so we can block the majority
of malware, but it will never be perfect." 
That has been his underlying assumption throughout this entire process,
and it's simply not true. You can't block even the most trivial efforts
at obfuscation. Based on his blocklisting of the PoC id and his comments
on the hardening bug, it's not clear to me that Jorge understands
that even now. Which is OK — I don't mean that as a personal attack.
He's been operating under bad info. But it calls into question most of
the policy arguments he's made on this subject in the last year. You
can't make good policy without understanding just how little the
automated scanner can actually do.
Even the developer of the validator, asking a month ago whether
combating this sort of trivial obfuscation was possible (it's not, as he
was told), said, "Without it the validator remains more an
advisory/helpful tool than something we could use to automate security."
> You've repeatedly said throughout this discussion that automated
> scanning provides meaningful protections against malware, and that
> whitelisted extensions are therefore inherently more dangerous.
> My assumption has always been that we'd need to do something beyond
> static analysis to stop the smart bad guys. If I've failed to
> communicate that assumption, my bad.
The smart bad guys who can concatenate two strings?
Anyhow, if you've shared that assumption, I have not seen it. Certainly
nobody else has been talking about that. We've been talking about the
automated scanner and manual review.
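(For anyone without the context: the "two strings" refer to a trick like the following. This is a minimal hypothetical sketch, not the actual PoC, showing why static pattern matching fails; in an extension the lookup would be on window rather than globalThis.)

```javascript
// A scanner that pattern-matches the token "eval" never sees it here,
// because the name is assembled from two harmless halves at runtime.
var name = "ev" + "al";         // trivial string concatenation
var hidden = globalThis[name];  // property lookup by computed name
var result = hidden("6 * 7");   // behaves exactly like eval("6 * 7"), i.e. 42
```

No static signature for "eval(" ever matches, yet the code has full eval capability.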
> (We've had some interesting conversations with a potential partner
> who's already doing runtime analysis of add-ons, but the tech is
> proprietary, so that's been slow.)
Runtime analysis...which has no bearing on a discussion about an
automated scanner and manual review?
> That said, there's a lot of lazy, unskilled "hackers" out there.
Right, so again, you're advocating putting in place a system that's
massively disruptive to legitimate developers in order to combat
"hackers" who can't concatenate two strings, or read an HTTP header and
issue an XHR. And since you're saying it's potentially grounds for
denying whitelist status to an extension like Zotero, those are also the
people who would have purchased Zotero from George Mason University,
taken over Zotero development, compromised the Zotero servers,
compromised our VCS, snuck code past our review process, or any of the
other nightmare scenarios that have been offered.
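(To make the "read an HTTP header and issue an XHR" point concrete, here is a hypothetical sketch of the header-gated pattern. In the real attack, `headers` and `body` would come from an XMLHttpRequest response to a server the attacker controls; they are plain parameters here only so the sketch is self-contained.)

```javascript
// The shipped source contains nothing for a scanner to flag.
function maybeRunPayload(headers, body) {
  // The server withholds the opt-in header when it suspects it is being
  // watched, so reviewers and test installs only ever see a no-op.
  if (headers["x-run-payload"] !== "1") {
    return "noop";
  }
  // Computed lookup keeps even the word "eval" out of the shipped source.
  return globalThis["ev" + "al"](body);
}
```

The reviewed copy and the deployed copy are byte-identical; only the server's behavior differs, which is exactly what neither automated scanning nor manual review can observe.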
> And yes, in the magical future world where automated scanning catches
> bad actors a whitelisted add-on would be an attractive target. I
> don't think that's a bad assumption.
I'm sorry, are you now saying that for the last three months you've been
arguing not about the scanning system that everyone else is talking
about but a magical future system that can catch someone who doesn't
want to be manually reviewed, which no one else weighing in on my post,
including other Mozilla engineers, seems to think is possible?
> I defend code signing in my post, and give concrete suggestions
> for building off of the actual protections that code signing
> itself provides. But as currently implemented, the system only
> impedes legitimate developers, while providing no meaningful
> protections against malware and in fact endangering users by
> providing a false sense of security.
> Read your own point one. That's more or less the conversation we had
> over a year ago, before we announced any of this publicly. That
> people believe we can get an additional win from better automatic
> testing is entirely orthogonal to whether the signing approach has
> other wins for combating malware.
> To assert that the system has no value (money quote: "only to end up
> with a system that is utterly incapable of actually combating
> malware.") is where this point comes in. There's real, effective wins
> from the current system, and you're dismissing them entirely over what
> I assumed to be an additional opportunity to raise the bar. That's a
> huge disconnect.
Yes, it is not capable of combating malware from anyone who 1) has
slightest bit of JS knowledge and 2) does not want to be detected. Jorge
wasn't aware of that, but hopefully now is, after several Mozilla
engineers have explained it. The developer of the validator acknowledges
it, after Mozilla engineers confirmed it to him (and he's now calling it
a "linter" rather than a "validator"). You seem to still be disputing it.
I'm glad we agree on my point 1. It's point 2 — the parts of the system
that block legitimate developers from releasing extensions without
manual review — that this entire discussion has been about.
You've referred repeatedly to the need to balance user safety and
developer freedom. My PoC demonstrates that the scanner provides
essentially no user safety, because anyone who wants to can bypass it. I
hope by now everyone is clear on the costs it imposes on developers and
their users. You can keep arguing that those costs are worth it, but it
doesn't seem like many people are going to agree with you in light of
what I've shown.
In point 2 I suggest alterations to the new plan that would meaningfully
increase user safety while avoiding the serious drawbacks of the current
approach, given its sheer ineffectiveness.
> 3) It seems like a significant misrepresentation to claim that
> anyone's said Zotero "will either turn rogue or become an
> attack vector" if you are whitelisted. It's an extremely
> unlikely situation, as I've tried to take pains to note. That
> said, humans make errors in judgement, and we'd be foolish to
> ignore the possibility in drafting an official whitelisting
> policy. My primary goal on whitelisting has been to ensure we
> have a fair policy for all affected developers that
> incorporates clear resolutions to all possible situations.
> It's been repeatedly suggested on this list that those were
> significant possibilities.
> Link/quote? Again, I've taken great care to note that however unlikely
> we may believe something to be, it's something that a strong policy
> will cover as a possibility. That's it, at least for my part, but
> maybe I missed something absurd someone else said.
> I don't know how many times I need to say that for you to believe me.
> I'd sincerely like to collaborate, but I do react badly to an
> assumption of bad faith.
See the "nightmare scenarios" I list above. I'm not going to go find all
the threads, but I believe you've said most or all of those, except for
sneaking-code-past-Zotero-reviewers (which was Jorge). Jorge said some
of the same things. He also said that whitelisting "leads to code
getting significantly worse over time because there's no external party
reviewing it", including, in many cases, "exploitable security holes"
, and you backed him up on that, ignoring the fact that Zotero has in
effect been whitelisted for the last decade without that happening.
The issue isn't that you've suggested these as possibilities. It's that,
rather than evaluating the likelihood of them in Zotero's specific case
(e.g., the likelihood of Zotero being sold for malware) or weighing our
proven history of security against the risks, you've used them as
blanket reasons to not even commit to whitelisting Zotero under any
reasonable system. And you've done so in defense of a system that
everyone can now plainly see does essentially nothing to prevent
malware, because an attacker of any extension, whitelisted or not, can
get any code they want past the validator.
> But you're still not understanding. If Zotero or any other
> extension is compromised, the attacker will simply submit an
> extension that bypasses the automated scanner, which by
> definition, and as your own colleagues attest, cannot be
> prevented. The only change with whitelisting is that behaviors
> that no attempt was made to hide would be allowed through. There
> are no additional protections.
> If we intend to rely solely on static analysis, yes. Again, my
> assumption was that scanning was an initial step, and we would need to
> do much more over time. I thought that was obvious enough that I
> didn't need to expand the definition every time. Sadly not the case.
I don't know what "much more" you're referring to. Everyone else has
been talking about automated scanning, which provides essentially no
protections against malicious code, as all the engineers on the
hardening bug agree, and fallback manual review, which causes all
the problems for developers we've been discussing. But if the entire
dialogue with you has been one big misunderstanding, that's great.