On Thu, Mar 8, 2012 at 10:25 AM, Jonas Sicking <
jo...@sicking.cc> wrote:
> There are plenty of examples of bad security models that we don't want
> to follow. One example is the security model that "traditional" OSs,
> like Windows and OS X, uses which is "anything installed has full
> access, so don't install something you don't trust".
technically that's inaccurate, but practically speaking it's true.
there's a difference.
* the security model of OSX is actually BSD-based i.e. POSIX underneath.
* the security model of windows is actually NT, which comes from
dave cutler's knowledge of VAX/VMS when he was working for DEC, and
is - i think the technical term is - a "DAC" system:
Discretionary Access Control.
however, once you put even a superb ACL system like the one that was
designed for NT in front of mass-volume users, it doesn't matter *how*
good the "sicuuurriteee" is: it always has to cater for the lowest
common denominator, and you also run into the "mono-culture" problem.
this is best expressed as a hilarious mathematical formula i saw
once on the whiteboard of a friend who worked for ISS:
    lim (n -> infinity)  SUM[idiots = 0 .. n]  Security  =  0
it's better done as a diagram :)
but in effect it says "security goes to shit as the number of idiots
increases".
> I.e. it's fully
> the responsibility of the user to not install something that they
> don't 100% trust.
yes.
idiots.
> Not only that, but the decision that the user has to
> make is pretty extreme, either grant full privileges to the
> application, or don't run the application at all.
that is a mistake made on the part of the application writers, who
have, by and large, given up on the whole "security" concept on both
windows and macosx, and have not made good use of the available
security measures.
in the case of windows, it's because they know there's no fucking
point: there are so many ways to piss all over an OS that is driven
by profit-maximisation that they too don't bother wasting their
money even trying.
in the case of MacOSX, it's because they aren't hit by the
"mono-culture" problem (there's not enough of a critical mass to make
viruses mainstream).
> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have.
... which STILL doesn't help you, because:
a) android is hitting the mono-culture "critical mass" problem
b) that increases the number of idiots
c) the pay-off for attackers is very very high: being able to send
SMS messages to premium-rate telephone numbers, gaining access to
bank accounts etc.
> This is somewhat better, but we're still relying heavily on the
> user to understand what all capabilities mean.
yes. exactly.
> And the user still has
> to make a pretty extreme decision: Either grant all the capabilities
> that the app is requesting, or don't run the app at all.
which you DON'T have the source code for, so you cannot even attempt
to verify that the code is secure, nor count on the "reputation" of
people and the whole Software (Libre) ethos, nor ask or pay anyone to
perform that verification audit even should they _want_ to.
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user.
yeah. rriiiight. i think my opinion of such a system is best not
expressed on public mailing lists. i'm soo tempted though. oh go on.
i'll just use the one phrase. close your ears if you have a
politically-sensitive disposition. "Jawohl, mein Führer!" *snaps
heels together*.
soo.. uhmm.... no. i don't think the Mozilla Foundation has the
right to set itself up, nor to set anyone *else* up, as the "God" that
shall dictate what users may put on the hardware which *they*
have purchased.
... apart from anything, what happens if you make a mistake?
anyway: it's worthwhile mentioning that you've left a rather
important security model off the list: it's called "FLASK". FLASK was
developed as a replacement for the hierarchical "ACL" concept. its
premise is "when you want to do something different from what you were
doing right now, it doesn't matter who you were before, you lose ALL
rights and you now start again. and one of those rights includes 'the
right to do something different' ".
translation: when an application runs a new application (exec)
there's absolutely *no* transference of any "rights" over the
execution boundary. the right to even *execute* a new program is also
a right which must be explicitly granted. in FLASK, what you can do
depends on who you were, where you are *AND* what you're executing.
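to make this concrete, here's a minimal C sketch (using libselinux,
which is the real userspace library; the target binary is invented
and the printed context is whatever policy happens to assign) of what
that exec boundary looks like from the application's side:

    /* minimal sketch: the exec() boundary under SE/Linux.  compile
       with -lselinux.  /usr/bin/someapp is a made-up example. */
    #include <selinux/selinux.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char *ctx = NULL;

        /* ask the kernel which security context we currently run in */
        if (getcon(&ctx) == 0) {
            printf("current context: %s\n", ctx);
            freecon(ctx);
        }

        /* policy alone decides (a) whether this domain may exec the
           target at all and (b) which *new* domain the resulting
           process runs in.  no transition rule = exec simply denied. */
        execl("/usr/bin/someapp", "someapp", (char *)NULL);
        perror("exec denied or failed");
        return 1;
    }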
it's rather neat, and it actually reflects *REAL LIFE* security
especially in military environments. take an example: i went to work
for NC3A. the "security policy" for me being able to do that was
written by NC3A and vetted by the UK Ministry of Defence. they wrote
the "rules". when i went to work each day, i had to hand over my
passport, and was given a "badge". my identity - my rights -
specifically the right to leave even the country known as "Holland"
had been removed. i was no longer an "EU Citizen". the "badge"
allowed me to open certain doors but not others. the "badge" had a
time-limit placed on it.
also, FLASK is, if memory serves me correctly, a "Mandatory"
Access Control system. the default is "zero permissions".
"Discretionary" Access Control is where you can do anything unless
told otherwise.
you will be familiar with FLASK through SE/Linux. SE/Linux is an
implementation of the FLASK security model.
the advantage of using FLASK is that it already exists. the
disadvantage of FLASK (i say "disadvantage" but it's not) is that its
primary method of isolating security areas is to use "exec()". the
NSA, who commissioned FLASK and helped implement it in SE/Linux, fully
recognise that "fork()" simply *cannot* be properly secured [caveat
described below].
you - the mozilla foundation - would do well to heed the experience
and knowledge of the NSA.
this is why i advocated that the mozilla foundation *not* design B2G
as a monolithic "WebAPI" extender but instead write all WebAPI code as
JSONRPC services which are contacted by the B2G applications using
nothing more sophisticated than XMLHttpRequest (AJAX).
the reasoning is very simple: if someone manages to exploit a buffer
overrun or other flaw and gains control of B2G's execution space,
it's "Game Over". it doesn't *matter* what kind of "nice security
API" is implemented in B2G, because that security API is a
*cooperative* i.e. *user-space* security model that can easily be
bypassed by simply... not calling the function, and because it's a
multi-threaded / multi-process application, *every* thread is
compromised. including access to the GSM/3G modem. whoops.
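to illustrate the shape of that proposal, here's a heavily-trimmed C
sketch of such a service: a separate process which *owns* one
privileged resource and answers JSON-RPC-ish requests over a loopback
socket. the port, the method name and the string-matching "parser"
are all invented for brevity; a real service would use a proper JSON
parser and authenticate its callers:

    /* sketch: a privileged "SMS service" as its own process.  a
       compromised B2G can only *ask* this process for things over the
       socket; it never gets direct access to the modem. */
    #include <arpa/inet.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(7777);                   /* invented port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* local only */

        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 8);

        for (;;) {
            char buf[4096];
            int fd = accept(srv, NULL, NULL);
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                /* the policy decision lives here, in *this* process,
                   behind a hard process boundary */
                const char *reply =
                    strstr(buf, "\"method\": \"sms.send\"")
                        ? "{\"result\": \"queued\"}"
                        : "{\"error\": \"denied\"}";
                write(fd, reply, strlen(reply));
            }
            close(fd);
        }
    }

a B2G app would then POST {"method": "sms.send", ...} at it with a
plain XMLHttpRequest (the sketch just greps the raw request, headers
and all).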
[caveat: when i worked with SE/Linux, i did describe a use-case for
allowing security-context "names" to be switched, with cooperation
from the actual application, on a "fork". normally, SE/Linux
automatically tracks processes across exec() boundaries so that
applications don't even have to know that they're being run under
SE/Linux. but because, up until that time, "fork" was not a
recognised secure boundary for SE/Linux contexts, samba and other
servers were in a spot of bother. the fix was to add a function
which notifies SE/Linux: "hey SE/Linux, a fork's just about to come
up, and we'd really appreciate it if any applications exec()'d by the
new fork()ed process were run under a completely different context
named {insert_name_of_user}, please take note, just fyi and all" :)
yes it's complicated, but.. welcome to security! ]
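if memory serves, the descendant of that idea ships in libselinux
today as setexeccon(); here's a sketch of the cooperative fork/exec
dance (the context string and helper binary are made-up examples):

    /* sketch: server cooperatively switching context across fork/exec.
       compile with -lselinux; all names are illustrative only. */
    #include <selinux/selinux.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* child: "hey SE/Linux, whatever we exec() next should
               run in this completely different per-user context" */
            if (setexeccon("user_u:user_r:user_t") != 0) {
                perror("setexeccon");
                return 1;
            }
            execl("/usr/bin/helper", "helper", (char *)NULL);
            perror("exec");
            return 1;
        }
        waitpid(pid, NULL, 0);  /* parent (e.g. samba) carries on */
        return 0;
    }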
> Proposal:
>
> The basic ideas of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example for
> almost all apps that want to have access to the users addressbook, we
> should check with the user that this is ok. Most of the time we should
> be able to show a "remember this decision" box, which many times can
> default to checked, so the user is only faced with this question once
> per app.
>
> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example asking the user "do you want to
> allow USB access for this app" is unlikely a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful.
... or, you enforce an absolute requirement that:
* all apps must have the source code published IN FULL;
* the application be digitally-signed by a PGP/GPG key which
must be within a registered and well-known "Web of Trust";
* the identity of the person who digitally-signs the application
be verified BEFORE they are allowed to publish apps.
there is pre-existing infrastructure and well-established precedent
for this type of security infrastructure: it's called "The Debian
Project".
the reputation of the debian maintainers is of ABSOLUTE paramount
importance. can you imagine what would happen a) to the debian
project b) to the maintainer if the source code of one of their
packages was found to contain malicious code?? what do you think
would happen if that debian maintainer had been found to actually be
the one RESPONSIBLE for inserting that malicious code into the
application's source code?
so this is "security by reputation", and its success is predicated on
being able to link the origins of the source code back to the person's
*real* identity.
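for illustration, the mechanical half of that is already a solved
problem; here's a sketch using GPGME, the real GnuPG C library
(filenames invented, error handling trimmed):

    /* sketch: verify a detached PGP signature on an app package.
       compile with -lgpgme; package filenames are made up. */
    #include <gpgme.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        gpgme_ctx_t ctx;
        gpgme_data_t sig, pkg;

        gpgme_check_version(NULL);            /* mandatory init call */
        if (gpgme_new(&ctx) != GPG_ERR_NO_ERROR)
            exit(1);

        gpgme_data_new_from_file(&sig, "app.tar.sig", 1);
        gpgme_data_new_from_file(&pkg, "app.tar", 1);

        /* detached signature, so the "plaintext" argument is NULL */
        if (gpgme_op_verify(ctx, sig, pkg, NULL) != GPG_ERR_NO_ERROR)
            exit(1);

        gpgme_verify_result_t res = gpgme_op_verify_result(ctx);
        for (gpgme_signature_t s = res->signatures; s; s = s->next)
            printf("key %s: %s\n", s->fpr,
                   (s->summary & GPGME_SIGSUM_VALID) ? "VALID"
                                                     : "not valid");
        /* the *hard* part - which this sketch doesn't do - is checking
           s->fpr against a vetted web-of-trust keyring, debian-style */

        gpgme_data_release(sig);
        gpgme_data_release(pkg);
        gpgme_release(ctx);
        return 0;
    }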
> This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means.
@begin Family Fortunes "our survey said"...
bzzzzt er-errrrrch
@end Family Fortunes
*grin* :)
> For example if the fitbit company comes
> to mozilla and says that they want to write an App which needs USB
> access so that they can talk with their fitbit hardware, and that they
> won't use that access to wipe the data on people's fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.
... yes. exactly. yeah, you're along the right lines. except that
you now have to have *technical* measures to actually enforce those
contracts, and funnily enough there happens to exist infrastructure
which does exactly that [the debian project].
> However we also don't want all app developers which need access to
> sensitive APIs to have to come to mozilla (and any other browser
> vendor which implement OWA). We should be able to delegate the ability
> to hand out this trust to parties that we trust.
yes. exactly like the Debian Project does. and has been developing
the infrastructure to handle exactly this scenario for 15+ years.
> So if someone else
> that we trust wants to open a web store, we could give them the
> ability to sell apps which are granted access to these especially
> sensitive APIs.
well... there you have a bit of a problem. how do you ensure that
devices are going to run a version of B2G which actually enforces the
security model?
i've changed tack slightly here; allow me to explain.
you've got several issues to cover:
a) the distribution of applications, and ensuring that the chances of
rogue apps entering the market are reduced in the first place
b) the management and granting of permissions for access within apps
to certain APIs
c) the *enforcement* of permissions on the actual physical devices
up until that last paragraph you were talking about a) and b) but
hadn't covered c). it's slightly confusing in that b) and c) are
related/connected; also, your introduction covered bits of a), b)
and c).
> This basically creates a chain of trust from the user to the apps.
yes. it's why the debian project actually formally codifies that
chain of trust in the "Debian Keyring". the debian keyring is, i
believe i may be correct in saying, one of the largest and most
heavily-used keyrings in the world? dunno.
> The
> user trusts the web browser (or other OWA-runtime) developers. The
> browser developers trusts the store owners. The store owners trust the
> app developers.
>
> Of course, in the vast majority of cases apps shouldn't need access to
> these especially sensitive APIs. But we need a solution for the apps
> that do.
>
> [1]
> https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>
>
> How to implement this:
>
> We create a set of capabilities, such as "allowed to see the users
> idle state", "allowed to modify files in the users photo directory",
> "allowed low-level access to the USB port", "allowed unlimited storage
> space on the device", "allowed access to raw TCP sockets".
these are the implementation details of b) and c) above. you also
need to make a decision on how to implement a) as well.
> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map to
> the same capabilities. However it should never need to be the case
> that an API requires two separate capabilities. This will keep the
> model simpler.
>
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.
no - you want two levels of access: allow or deny. you're confusing
this with network-style access control, which comes from those
dumbed-down windows-based firewall systems which go "you wanna let
dis program do sumfink on a port you don't even know wot it means?
it da inturnet maaan, dat's all u no".
so you're assuming that users are intelligent enough to know what
they're actually letting themselves in for.
as this is an OS for mass-volume products, you simply cannot make the
assumption that users will know what to do.
that having been said, yes you really do want something like the
"firewall popup" thing... but *not* at the actual hard access control
level in the OS. that should be "allow" or "deny" - period. with, it
has to be said, a notification system which allows the approval to be
"vetted"... *sigh*.
> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.
ok, now you're into a) and b) here.
i think... you [mozilla foundation] need to pick a security model
that is capable of reflecting all of these kinds of things. i'd
recommend FLASK (aka SE/Linux) because it will be capable of handling
these things and has a mature well-established infrastructure for
doing so.
it _is_ however a lot of work. security always is.
hey, did you hear about how long each Head of the Windows Security
Team lasts at microsoft? it's not long. each new enthusiastic
employee/victim who
gets the job goes to his superiors and says, "Hey boss! I've got a
Great Idea on how to improve the security of Windows!" and they get
asked "does it increase the profits of the Microsoft Corporation?" and
every time they say "err no, actually it would cost us money and
probably customers". they get told "well go away and find us
something that increases our profits".
pretty soon they quit.
the mozilla foundation doesn't have _limitless_ funds (but it _does_
have a different set of priorities) so you do actually have the
opportunity to do something different (an actually... like... properly
secure OS and infrastructure? y'know?) but it would be wise to
leverage existing infrastructure where possible, so as to save time,
funds and the sanity of people implementing it.
> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV-certs. This also applies to the
> stores which we trust to hand out these capabilities.
the more you're describing, the more convinced i am that you want
FLASK. it's capable of creating a security context that not only
links code with users but also links in networking (and other
resources) as well.
> There's also very interesting things we can do by playing around with
> cookies, but I'll leave that for a separate thread as that's a more
> narrow discussion.
adding an SE/Linux security context to a cookie. ooh bloody 'ellfire :)
l.