With the recent unfortunate revelations about the NSA PRISM program, there
have been a number of people making note of Mozilla not being one of the
named participating parties. While the overall situation is horrible,
personally I think it's fantastic to see Mozilla's greater-than-usual
commitment to privacy acknowledged.
One of the things that has disappointed me, however, is that many people are
not really aware of the degree to which Mozilla's actions are protective:
"Although there's nothing in theory stopping the NSA from
getting data from Mozilla Corp. the way it does from
other companies, Mozilla has taken a fairly structured
stand against encroachments on civil liberties in the past"
(http://www.bizjournals.com/sanjose/news/2013/06/07/Protect-yourself-from-prism.html)
I'm sad to see this because "you can trust us because we have not, yet,
been observed to betray you" is something that a third party should not
regard as a useful commitment: betrayals of confidentiality are, by their
nature, rarely observable. That is all the author credits Mozilla for, and
until a short time ago you could easily have made the same statement with
just about any of the PRISM-named companies in place of "Mozilla".
But, in reality, what Mozilla has done is often much stronger: compared
to the alternatives, the design of services like Firefox Sync and Persona
or the very fact that Firefox itself is open-source fundamentally curtails
the amount of damage Mozilla could do if it were to act against its users'
interest. Deep in the protocol-bowels of the Internet, people working
with Mozilla have advanced things that make undetectable surveillance
hard, such as making DTLS-SRTP mandatory in WebRTC.
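To make the Sync-style point concrete: in a client-side-encryption design, the key is derived from a secret the server never learns, so the server stores only ciphertext and has nothing useful to hand over. The sketch below is a toy illustration of that idea and nothing more — the function names are mine, and the HMAC-based keystream is for demonstration only, not real cryptography (an actual design like Firefox Sync's uses vetted primitives such as AES with properly derived keys).

```python
import hashlib
import hmac

def derive_key(secret: bytes, salt: bytes) -> bytes:
    # Stretch the user's secret into a key; this happens on the
    # client, and the secret itself is never sent to the server.
    return hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy HMAC-counter keystream, purely for demonstration.
    out = b""
    counter = 0
    while len(out) < length:
        block = nonce + counter.to_bytes(4, "big")
        out += hmac.new(key, block, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# The client derives a key from a secret only the user knows...
salt = b"fixed-demo-salt"
nonce = b"demo-nonce"
key = derive_key(b"user recovery secret", salt)
plaintext = b"bookmarks and history"
ciphertext = encrypt(key, nonce, plaintext)

# ...and only the ciphertext is ever uploaded. Without the key,
# the operator (or anyone compelling it) learns nothing useful.
assert ciphertext != plaintext
assert encrypt(key, nonce, ciphertext) == plaintext
```

The structural property is what matters here: a court order served on the service operator yields ciphertext, because the design never gave the operator the key in the first place.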
It's my opinion that Mozilla has the freedom to build architecturally
secure systems because it doesn't have a business model that is predicated
on mining users' data, so making the data unavailable—even to Mozilla
itself—isn't disruptive. The fact that Mozilla develops its software
openly also makes using secrecy as an architectural tool simply more
expensive than it is for other organizations which are secret by
default and open by exception.
(As Jacob Appelbaum recently wrote:
"if the architecture of a system, even a mostly *technically* secure
system, is optimized for surveillance to the company's benefit - it
*will* almost certainly be forced to hand your data over when ordered.
Simply because it *is able to do so* at all,"
https://mailman.stanford.edu/pipermail/liberationtech/2013-April/008250.html )
Architectural security reduces the risk of making promises we
cannot keep: should Mozilla have to choose between
upholding its privacy promises in a strong sense and continuing to exist
as an organization, I don't think there is much ambiguity about what
would happen, as romantic as it might be to think otherwise. It may
someday be the case that people inside Mozilla with privileged access
are secretly serving other interests—perhaps with the belief
that qualified immunity would shield them from liability (or, perhaps,
against their will). There is realistically no hope that Mozilla—or
anyone else—could screen for this kind of subterfuge. When our
concerns include shielding people from misconduct by entities as powerful
as major nation-states, or just simply wealthy parties who are willing
to ignore the law, there is little we can assume is not within the
attacker's capability.
So I believe that one major reason people should trust Mozilla
is that we prove our trustworthiness by building systems where trust
isn't required. But this point is subtle. If we want people to choose
Firefox over the alternatives so they can benefit from it, we will
eventually have to educate people about the difference between "trust us
because we say so" and "trust us because you don't have to"—because
everyone can (and does) promise to be trustworthy, at least until people
get the bad news, always too late, that the trust was broken.
But what about services which we don't know how to provide in
a manner where they architecturally simply can't cheat? Cryptographic
tools are constantly becoming more powerful, but confidentiality is
particularly tricky, especially when people want 'social software' whose
whole basis is sharing data (but only with the "right" people!).
I don't believe that advertising a commitment to architectural security
precludes offering less-secure alternatives in areas where we aren't yet
able to do better, but having services with different privacy commitments
makes educating people complicated:
How do we tell people that our privacy commitment is better than someone
else's because we are maximally transparent and build systems where it's
harder for us to cheat… except where we aren't and don't?
If we don't make architectural security part of how we promote our privacy
and security commitment, how do we make it clear that our promises are
better than someone else's when everyone else makes similar sounding
promises?
How can we tell people how important these goals are for us without
understating the realistic limitations on our ability to achieve them for
our services which are not based on cryptographic reduced-trust? ("We
won't betray you! /Unless someone makes us/" sounds pretty lame, but
it's truthful)
But I suspect these trade-off questions can't be intelligently
navigated unless architectural security is first viewed as the aspirational
gold standard for upholding individuals’ security and privacy on
the Internet.