Mark, I was just taking a look at Marc Stiegler's intro to capability systems. A key point is that if you were sent a virus like the classic Love Bug or Melissa, and you were tricked into running the attached executable: (the following is from
Suppose you were running a capability-secure operating system, or that your mail system was written in a capability-secure programming language. In either case, each time an executable program in your email executed, each time it needed a capability, you the user would be asked whether to grant that capability or not. So Melissa, upon starting up, would first find itself required to ask you, "Can I read your address book?" Since you received the message from a trusted friend, perhaps you would say yes - neither Melissa nor anything else can hurt you just by reading the file. But this would be an unusual request from an email message, and should reasonably set you on guard. Next, Melissa would have to ask you, "Can I have a direct connection to the Internet?" At this point only the most naive user would fail to realize that this email message, no matter how strong the claim that it came from a friend, is up to no good purpose. You would say "No!" And that would be the end of all such viruses. No fuss, no muss. They would never rate a mention in the news.

Every month or so, I get an enticing-looking request when I try using Facebook. It'll say that some fun experience will be mine to savor if only I will say "yes, go ahead". Then, Facebook says, oh, by the way, this application wants access to your list of friends. The first time I got one of these, I was fooled, and the malware application sent on its stupid message to all of my "friends". Once burned, forever shy, and I have not been fooled again. But every month or so, it is evident that someone I know fell for one of these.

In other words, I think Marc is being too optimistic about the users. In order for this to work, the user has to do a cost-benefit calculation. The cost may be hard to understand. A person might not realize what the real-world effects might be of OK'ing an app asking these questions.
More insidiously, the user might not even bother reading the questions if the user already feels at sea with Facebook's (notoriously confusing) privacy rules.

On the benefit side, what the little app is offering looks nice, and also it seems to be coming from a friend. Socially, that can be rather compelling. At least that's what I'm seeing.

A similar case is the warnings that browsers present when they do an SSL handshake and don't like the certificate or something else isn't copacetic. What used to happen is that a technically worded message came up, asking whether you wanted to go ahead. From the point of view of the user, the message meant "blah, blah, blah, blah, blah, click yes if you want me to do what you said, or click no if you want to not get what you asked for."

I think the browser writers have tried hard to do a good job of making these messages as clear as possible, taking into account that they don't know the technical sophistication of the user. You can use the handy https://www.fnmoc.navy.mil/ URL that I mentioned in my previous mail to see what each browser does with an untrusted cert. If you look at Chrome, you can see that even the basic message is fairly long and technical, and the "Help me understand" text - well, my father would never understand it. And my Dad is a physician, so he is generally an educated and intelligent person, but with no engineering background. He even has trouble understanding nested folders (i.e., a hierarchical file system), which really surprised me.

A LOT of the computer software developers I know are confused about PKI. Almost all of them talk as if a certificate were a set of credentials, i.e., that a certificate is just some fancy kind of password. If I correct them and explain that it's the private key that's the secret, not the certificate, they say something like "yeah, I know" but then they keep making the same mistake. They're extremely smart people, but it's just not an area they've studied.

This is just SUCH a hard problem.

-- Dan
So, the ideas for just about everything have evolved since I wrote the
Intro to Capabilities back in 1998 or thereabouts. For one thing, we
have distinguished "good" capabilities from the confusion of all the
different things that used to be called capabilities that didn't work;
we now talk about "object capabilities" :-) These are defined briefly in
"Emily: A High Performance Language for Enabling Secure Cooperation" at
http://ieeexplore.ieee.org/iel5/4144916/4144917/04144948.pdf?arnumber=4144948
as
An object-capability language is an object-oriented
language that, to a first approximation, ensures the
following additional properties (see [Miller06] for a full
definition):
• Objects are encapsulated.
• The only way for an object to influence the world
outside itself is to send messages on references.
• The creator of an object can deny that object
access to anything else simply by not providing
references to anything else.
You should read Markm's dissertation for the gory explanation.
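The three properties above can be illustrated with a tiny sketch in plain Python (all names here are invented for illustration; Python is not itself an object-capability language, but the discipline can be followed by convention):

```python
# A minimal sketch of the object-capability discipline. The Greeter can
# only influence the world through references it was handed at creation;
# deny it the logger and it can affect nothing outside itself.

class Logger:
    def __init__(self):
        self.lines = []

    def write(self, msg):
        self.lines.append(msg)

class Greeter:
    def __init__(self, log):
        # Authority arrives only as a reference, chosen by the creator.
        self._log = log

    def greet(self, name):
        # The only way to influence the outside world: send a message
        # on a reference it holds.
        self._log.write("hello, " + name)

log = Logger()
g = Greeter(log)      # the creator decided to grant the logger
g.greet("world")
print(log.lines)      # the sole effect happened via the granted reference
```

A Greeter constructed with no logger reference simply could not write anywhere: denying authority is just withholding a reference.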
A key advancement in the user interface area was the realization that
bundling designation of an object with the authorization to use that
object can make lots of elements of user interfaces seamlessly secure.
You can read several takes on this theme. Oldest is Granma's Rules of
POLA at
http://www.skyhunter.com/marcs/granmaRulesPola.html
which was written after developing CapDesk, in which we identified 2
basic mechanisms for conveying authority:
1) at application installation time one makes an endowment of
authorities. This approach is used in Android, though it is used at the
wrong level of abstraction to actually be user friendly.
2) During execution, when one specifies an object the app should use,
one confers the authority at the same time. The now-widely-understood
example of this is the file dialog box for which, when the user selects
a file, the authority to access the file is granted because the user
designated it.
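As a rough illustration of mechanism 2, here is a sketch in Python (names such as `FileCap` and `powerbox_choose_file` are hypothetical): the app never receives ambient filesystem access, only a capability to the single file the user designated, so designation and authorization arrive together:

```python
import os
import tempfile

class FileCap:
    """Capability to read exactly one file - no ambient filesystem access."""
    def __init__(self, path):
        self._path = path

    def read(self):
        with open(self._path) as f:
            return f.read()

def powerbox_choose_file(user_choice):
    # In a real system the trusted shell would show the file dialog;
    # here the "user" has already picked a path, and the act of picking
    # it is what grants the authority.
    return FileCap(user_choice)

def app_main(file_cap):
    # The app holds only this one capability; it cannot open anything else.
    return file_cap.read().upper()

# Demo with a throwaway file standing in for the user's document.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello")
    chosen = f.name
try:
    cap = powerbox_choose_file(chosen)
    print(app_main(cap))
finally:
    os.unlink(chosen)
```

No security question is ever asked; the user's ordinary act of choosing a file is the grant.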
Then there's the paper from SOUPS, "Not 1 Click for Security", about the
secure cooperative file sharing and email system we built named SCOOPFS,
at
http://www.hpl.hp.com/techreports/2009/HPL-2009-53.html
In this application, the security was so invisible that one of the users
in the pilot program asked, after about 2 weeks of usage, "you know,
this system is so cool, is there any way I can make it secure?"
Though we have demonstrated in several different contexts how to build
applications for which the security is effectively invisible - from file
sharing to desktop application launching to online payment systems to
secure remote bash shells for server management - one thing remains true:
invention in the user interface is still needed. Good user interface
design requires innovative user interface designers who are equipped with
good security paradigms that offer the possibility of good interfaces.
(A user interface designer who is told that the application will be using
a traditional username/password with a VeriSign cert on the server for
its security is pretty much out of luck; the bad decisions have already
been made.)
Let us take a look at the example you mention below. Consider the app
coming to your Facebook account that demands your friends list. First,
my apologies: I do not have a facebook account and don't know anything
about facebook apps. But here are some observations, perhaps some will
be helpful.
-- Traditional security approaches have screwed up the world so badly
that we are now all trained to click ok on security monolog boxes
without thinking (I call them "monolog" boxes, not "dialog", because the
user has no meaningful choice. He must click ok to get his work done,
and so that is what he will do). While it is sometimes possible to have
meaningful security dialog boxes that engage the user in an actual
dialog, we must accept that, for some number of years and for many many
people, the click ok reflex will impede progress. Obviously, you want to
avoid dialog/monolog boxes with no purpose other than to pester the user
about security whenever possible. In CapDesk and Polaris secure
desktops, we eliminated all but one such dialog box. In SCOOPFS file
sharing, we eliminated all such dialog boxes. In ShareShell remote
command execution, we eliminated all dialog boxes. In the Purse payment
system we eliminated all dialog boxes. So it can often be done, but
every new situation may require new inventions to succeed.
-- You had an unpleasant experience that taught you to be very careful
about clicking ok for friends, i.e., you needed a "teachable moment" to
appreciate how powerful that authority is. I don't see a way around
needing a certain amount of this. Sorry. There are things people must
learn to succeed in cyberspace, but what we must do is make the things
that people must learn actually learnable (a counterexample is teaching
people to avoid phishing by doing things like looking at certs.
Experimental results have demonstrated that, even given a teachable
moment, normal human beings cannot learn to be invulnerable to phishing
attacks. We must use an entirely different tech to replace
username/passwords if we want to eliminate phishing -- we must move to a
tech whose security characteristics are learnable to the extent to which
those characteristics are visible). The good news is that those who
experience such teachable moments may be able to teach their
grandmothers without forcing the grandmas to have teachable moments of
their own. While I know this is unsatisfying, in fact, this is how
people have been learning to work in new environments for about 100,000
years now.
-- In some sense, it sounds like the question about friends authority
was asked as part of the "endowment", i.e., during installation of the
app. If one can characterize facebook applications into categories, a
better strategy would be to eschew letting the app ask explicitly for
friends, and instead have the app specify to the facebook kernel what
category it belongs in. The user would then tell facebook what category the user
thinks the app belongs in, and then facebook would grant only those
authorities requested by the app that fit into the user-specified
category. For example, in the desktop world, interesting categories of
app include document processors (text editor, picture editor,
spreadsheet, etc), comm tools (mail, IM), and online games (world of
warcraft, etc). Each of these categories is associated with a limited
set of authorities that is reasonably safe as a distinct bundle.
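One plausible way to sketch that category scheme (the category names and authority strings here are invented for illustration): the kernel grants only the intersection of what the app requests and what the user-chosen category permits:

```python
# Hypothetical category-to-authority bundles. Each category maps to a
# limited set of authorities that is reasonably safe as a distinct bundle.
CATEGORY_AUTHORITIES = {
    "document_processor": {"open_user_chosen_file", "save_user_chosen_file"},
    "comm_tool":          {"read_address_book", "network"},
    "online_game":        {"network", "audio"},
}

def grant(requested, user_category):
    # Grant only those requested authorities that fit the category the
    # USER picked; anything outside the bundle is silently denied.
    allowed = CATEGORY_AUTHORITIES[user_category]
    return requested & allowed

# A "text editor" that also asks for the network gets only the document
# authorities, because the user said it is a document processor.
granted = grant({"open_user_chosen_file", "network"}, "document_processor")
print(sorted(granted))
```

The point of the design is that the user answers one coarse, comprehensible question ("what kind of thing is this?") instead of auditing a list of fine-grained permissions.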
-- The friends authority sounds powerful enough so that it is reasonable
to ask the question, how many different ways can it be attenuated? For
example, if the friends authority includes the ability to send email to
all the friends with your name as the sending party, can you separate
out that authority and create a weaker version of the friend power? For
example, how 'bout an authority without email power that can read those
friends' pages and do some analysis and deliver the result to the user,
but only if the app does not have the authority to communicate back to
its server, programmer, and master? Such an attenuated authority might
be adequate for interesting legitimate purposes, and is quite safe. Or
would it make sense to have a one-time-only access to the friends and
let it send one email message -- but only after the email message is
presented to the user who is about to send it -- the message goes out
when the user presses the Send button? And for heaven's sake, don't let
this be a dialog box. If this is not necessarily a part of an endowment,
have the app launch, and have a place where the user can drag and drop
additional authorities for the app to perform additional services.
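To make the attenuation idea concrete, here is a sketch in Python (all names invented): the full friends capability can both read the list and send mail in your name, while a read-only wrapper forwards only the read, so a holder of the weak capability cannot spam your friends:

```python
class FriendsCap:
    """The full, powerful authority: read the list AND send as the user."""
    def __init__(self, friends):
        self._friends = friends
        self.sent = []

    def list_friends(self):
        return list(self._friends)

    def send_to_all(self, msg):
        self.sent.extend((f, msg) for f in self._friends)

class ReadOnlyFriendsCap:
    """Attenuated authority: wraps the full capability, forwards only reads."""
    def __init__(self, full_cap):
        self._full = full_cap  # held privately; send_to_all is never exposed

    def list_friends(self):
        return self._full.list_friends()

full = FriendsCap(["alice", "bob"])
weak = ReadOnlyFriendsCap(full)
print(weak.list_friends())            # analysis is possible...
print(hasattr(weak, "send_to_all"))   # ...but the email power was separated out
```

An app endowed only with the weak capability can do its friend-graph analysis, yet has no way to reach the send power, because it never receives a reference to it.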
As I think about this more, a possible right answer for this particular
problem is that all apps are by default endowed with the authority to
create an email message for all the friends: but that email will be
submitted to the user for sending (i.e., all the user has to do is press
the Send button), with a list of all the friends with their addresses
checked Yes, which the user can toggle off before pressing the Send
button. In that case, legitimate apps never ask the user any questions
at all until it is time to send something, at which point the user is
brought into the loop to do the one thing that only the person can do,
namely, decide, for whom is this email appropriate?
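A minimal sketch of that draft-for-review default, with all names hypothetical: the app may compose a message addressed to every friend, but nothing is delivered until the user, who alone decides for whom the mail is appropriate, toggles recipients and presses Send:

```python
class Outbox:
    def __init__(self):
        self.delivered = []

    def deliver(self, recipient, body):
        self.delivered.append((recipient, body))

class Draft:
    """A message the APP composes but only the USER can release."""
    def __init__(self, friends, body):
        self.body = body
        # Every friend starts checked Yes, as in the default endowment.
        self.recipients = {f: True for f in friends}

    def toggle(self, friend):
        self.recipients[friend] = not self.recipients[friend]

    def send(self, outbox):
        # Invoked only by the user's Send button, never by the app.
        for friend, checked in self.recipients.items():
            if checked:
                outbox.deliver(friend, self.body)

draft = Draft(["alice", "bob", "carol"], "try this fun app!")
draft.toggle("bob")          # the user unchecks bob before sending
outbox = Outbox()
draft.send(outbox)           # the user presses Send
print([r for r, _ in outbox.delivered])
```

Legitimate apps never ask a security question at all; the one human decision left in the loop is the one only the human can make.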
But this is all based on my ignorance of facebook, so my apologies if I
have said nothing helpful.
The most important thing I said, which I'll say again :-), is: good
tools and good ui designers. You must have both. A good object
capability underpinning will not give you a good ui if you let a
programmer design the ui. A good ui designer will not give you a good ui
if you stick him with traditional security machinery.
Daniel, I see you have a google employee email account. Are you in the
Bay Area? If so, I'd be happy to get together in person to talk about
these things.
--marcs
Markm, we can discuss Friday whether I have to join another discussion group :-) Meanwhile, if you could post this probably-last reply to the group, I would appreciate it.
--marcs
Comments embedded between excerpts below:
I'm not sure how well it would work for the user to tell Facebook what category an app is in, although that's a great idea and better than anything I had ever heard.
Just to make sure credit is given where due: this idea was first proposed by Ken Kahn after seeing how CapDesk did installation endowments.
I have been given to understand that Android has evolved an interesting enhancement on the endowment acceptance process, by using some "crowd sourcing": the community identifies combinations of authorities that are sound and publish and endorse them. This is a great idea too, particularly if well-supported and integrated. You could use such a community process to evolve an understanding of the categories, wherein a "category" is a bundle of sound authorities, and applications could then specify that they want to use bundle XYZ that has already been endorsed by the community.
These days, a lot of apps seem to want permissions that one might not have expected. For example, it's so common today for someone developing an app to be told, hey, as long as you have that app, and it runs on (say) mobile devices, you've just GOT to take advantage of the location ability of the phone, so put in some kind of cool feature that sees where the user is geographically and does X.
I am about to write my first mobile app, and make no mistake, I'm going to give it a feature that uses the phone location too :-) However, I will make my app capable of working without the location authority, giving you all the functionality that is not location dependent :-) All honorable apps should be expected to do the same.
I don't think any of this is inconsistent withanything you have said, though.
I don't think so either. There are some apps for which you have to have a trust relationship with the vendor because the app must have powers that are extreme; examples are web browsers, operating systems, compilers, and code confinement verifiers. For everything else, every time we conclude that an app needs such powers that we have to trust the vendor, we should look askance at ourselves. Though we may have to ship something sooner rather than later, and hence leave the problem unsolved and trust the vendor, it is worthwhile to chew on the problem. There is an app that irritated me for almost ten years - the file navigation application for CapDesk. When we shipped CapDesk, the file navigator was a fully trusted component, i.e., if you wanted to use a third-party file navigator, you had to trust the vendor. After a decade of evolution of insights, I finally figured out how to confine file navigators so that you could use one developed by the Chinese Army's cyberwar division with very limited risk :-)