On 09/21/2013 05:49 AM, Jonas Sicking wrote:
> On Tue, Sep 10, 2013 at 9:17 AM, Jim Blandy <jbl...@mozilla.com> wrote:
>> Debugging is how many people are going to learn how to write apps; think
>> about the role of "view source" in introducing people to web technology. And
>> it's consistent with our goal of putting the user in charge of their own
>> device. I think we should treat all apps as debuggable, on any device.
>>
>> As far as non-developer users are concerned: debugging is turned off by
>> default. You need to go (rather deep) into the settings and explicitly turn
>> debugging on, before the server begins listening. There is no reason a
>> non-developer would ever need to enable debugging (and we should ensure this
>> remains true). So I don't see the risk to non-developer users.
> The attack here is if the user gets the device stolen, then the thief
> could go into the settings and explicitly turn debugging on. He/she
> could then use the debugger to suck out all sorts of data from various
> apps. Things like login tokens to your email or even raw passwords
> from applications that store those client-side.
Right - the debugging protocol allows users, thieves or otherwise,
access to data apps don't offer via their UIs. But most data the user
cares about protecting *is* available via the UIs. And access to the
user's email account will allow the attacker to change the user's
passwords (in the guise of "password recovery"), so auth tokens are
effectively available to the thief, debugger or not. So the additional
exposure here is minor; am I missing something?
> There's also the "evil maid" attack, where a maid who gets access to
> your phone for 5 minutes can do the same and quickly suck out all
> data from your phone.
I have to say, this attack, which only works against non-passcode-protected
phones, and is only valuable when the attacker is uninterested
in simply stealing the phone outright, seems a bit artificial to me,
like what Schneier calls a "movie plot attack": one that sounds dramatic,
but is not worth designing security policy around.
Is there more background here that would help me apprehend this better?
How can we responsibly assess the significance of this attack against
non-passcode-protected phones, and compare it to the value of having the
phone enable development-oriented users to actually learn how apps work?
(By the way - can we call it the 'office spy' attack, or something? The
'evil maid' moniker - and I know it was coined in full innocence - is
just a little icky, since hotel and custodial staff are, I'm told, often
blamed for thefts they didn't commit.)
> In neither scenario is the user particularly protected by hiding the
> debugging-enabling checkbox deeper in the settings app.
I completely agree with this. I phrased that poorly. Sorry for the distraction.
> Ideal would be if the user had to enter some code in order to turn on
> debugging, but what code would we use? It would be pointless to enable
> setting the code the first time debugging is turned on, since most
> people will never turn on debugging. And so the thief/maid would just
> be able to select the code themselves.
Without a passcode, there's no way for the phone to differentiate
between legitimate and illegitimate users. Whatever level of access we
offer goes to both sides; hence the question.
> One idea that was floated was that we're in a good state if turning on
> debugging only enables debugging of apps installed after debugging was
> enabled. That would let the user turn on debugging, then install an
> app whose workings they want to understand, and start debugging away.
This sounds similar to the "wipe sensitive data when enabling debugging"
approach. If we conclude that the additional exposure via the debugging
protocol is salient, then this might be a good solution, presuming users
can re-install pre-installed apps. But as I say above, I don't really
agree with the premise.
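
For concreteness, the install-time rule Jonas describes could amount to a
check roughly like the following in the debugger server. This is only a
sketch of my own; the names here (AppRecord, installTime, enabledSince,
isDebuggable) are made up for illustration and aren't the actual devtools
or Gecko APIs:

    // Hypothetical sketch, not the real devtools API: the remote debugging
    // server refuses to attach to any app installed before debugging was
    // last switched on.
    interface AppRecord {
      manifestURL: string;
      installTime: number;   // ms since epoch, recorded when the app is installed
    }

    interface DebugSettings {
      enabled: boolean;
      enabledSince: number;  // ms since epoch, set each time the user enables debugging
    }

    function isDebuggable(app: AppRecord, settings: DebugSettings): boolean {
      return settings.enabled && app.installTime > settings.enabledSince;
    }

Under a rule like this, pre-installed apps would fail the check until
re-installed, which is why the "can users re-install pre-installed apps?"
question above matters.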