
OpenWebApps/B2G Security model


Jonas Sicking

Mar 8, 2012, 5:25:39 AM
to dev...@lists.mozilla.org, dev-w...@lists.mozilla.org
Hi All,

I'm way over due to write a proposal for the Open Web Apps and
Boot-to-Gecko security models.

Background:

In general our aim should always be to design any API such that we can
expose it to as broad a set of web pages/apps as possible. A good
example of this is the Vibration API [1], which was designed such that
it covers the vast majority of use cases, while still being safe enough
that we can expose it to all web pages without risk of annoying the
user too much, or putting him/her at a security or privacy risk.

But we can't always do that, and so we'll need a way to safely grant
certain pages/apps higher privilege. This gets very complicated in
scenarios where describing the security impact to the user is hard.

There are plenty of examples of bad security models that we don't want
to follow. One example is the security model that "traditional" OSs,
like Windows and OS X, use, which is "anything installed has full
access, so don't install something you don't trust". I.e. it's fully
the responsibility of the user to not install something that they
don't 100% trust. Not only that, but the decision that the user has to
make is pretty extreme: either grant full privileges to the
application, or don't run the application at all. The result, as we
all know, is plenty of malware/grayware out there, with users having a
terrible experience as a result.

A slightly better security model is that of Android, which when you
install an app shows you a list of what capabilities the app will
have. This is somewhat better, but we're still relying heavily on the
user to understand what all capabilities mean. And the user still has
to make a pretty extreme decision: Either grant all the capabilities
that the app is requesting, or don't run the app at all.

Another security model that often comes up is the Apple iOS one. Here
the security model is basically that Apple takes on the full
responsibility to check that the app doesn't do anything harmful to
the user. The nice thing about this is that we're no longer relying on
the user to make informed decisions about what is and what isn't safe.
However Apple further has the restriction that *only* they can say
what is safe and what is not. Additionally they deny apps for reasons
other than security/privacy problems. The result is that even when
there are safe apps being developed, that the user wants to run, the
user can't do so if Apple says "no". Another problem that iOS has, and
which has made headlines recently, is that Apple enforces some of its
privacy policies not using technical means, but rather using social
means. This has lately led to discoveries of apps which extract the
user's contact list and send it to a server without the user's
consent. These are things that Apple tries to catch during their
review, but it's obviously hard to do so perfectly.


Proposal:

The basic ideas of my proposal are as follows. For privacy-related
questions, we generally want to defer to the user. For example, for
almost all apps that want to have access to the user's address book,
we should check with the user that this is ok. Most of the time we
should be able to show a "remember this decision" box, which many
times can default to checked, so the user is only faced with this
question once per app.

For especially sensitive APIs, in particular security-related ones,
asking the user is harder. For example, asking the user "do you want to
allow USB access for this app" is unlikely to be a good idea since most
people don't know what that means. Similarly, for the ability to send
SMS messages, only relying on the user to make the right decision
seems like a big risk.

For such sensitive APIs I think we need to have a trusted party verify
and ensure that the app won't do anything harmful. This verification
doesn't need to happen by inspecting the code; it can be enforced
through non-technical means. For example if the fitbit company comes
to mozilla and says that they want to write an App which needs USB
access so that they can talk with their fitbit hardware, and that they
won't use that access to wipe the data on people's fitbit hardware, we
can either choose to trust them on this, or we can hold them to it
through contractual means.

However we also don't want all app developers which need access to
sensitive APIs to have to come to mozilla (or any other browser
vendor which implements OWA). We should be able to delegate the ability
to hand out this trust to parties that we trust. So if someone else
that we trust wants to open a web store, we could give them the
ability to sell apps which are granted access to these especially
sensitive APIs.

This basically creates a chain of trust from the user to the apps. The
user trusts the web browser (or other OWA-runtime) developers. The
browser developers trust the store owners. The store owners trust the
app developers.

Of course, in the vast majority of cases apps shouldn't need access to
these especially sensitive APIs. But we need a solution for the apps
that do.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966


How to implement this:

We create a set of capabilities, such as "allowed to see the user's
idle state", "allowed to modify files in the user's photo directory",
"allowed low-level access to the USB port", "allowed unlimited storage
space on the device", "allowed access to raw TCP sockets".

Each API which requires some sort of elevated privileges will require
one of these capabilities. There can be multiple APIs which
semantically have the same security implications and thus might map to
the same capabilities. However it should never need to be the case
that an API requires two separate capabilities. This will keep the
model simpler.

For each of these capabilities we'll basically have 4 levels of
access: "deny", "prompt default to remember", "prompt default to not
remember", "allow". For the two "prompt..." ones we'll pop up UI and
show the user yes/no buttons and a "remember this decision" box. The
box is checked for the "prompt default to remember" level.
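
To make the behavior concrete, here is a rough sketch of how a runtime
could act on the four levels. getGrantedLevel, setGrantedLevel and
showPrompt are invented helpers, not real Gecko interfaces:

  // Rough sketch only; none of these helpers are real Gecko APIs.
  function checkCapability(app, capability, callback) {
    var level = getGrantedLevel(app, capability);
    if (level === "allow") { callback(true); return; }
    if (level === "deny") { callback(false); return; }
    // One of the two "prompt..." levels: show yes/no buttons plus a
    // "remember this decision" box, pre-checked only for the
    // "prompt default to remember" level.
    showPrompt(app, capability,
               /* boxChecked */ level === "prompt-default-remember",
               function (allowed, remember) {
                 if (remember) {
                   setGrantedLevel(app, capability,
                                   allowed ? "allow" : "deny");
                 }
                 callback(allowed);
               });
  }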

We then enhance the OWA format such that an app can list which
capabilities it wants. We could possibly also allow listing which
level of access it wants for each capability, but I don't think that
is needed.
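
For concreteness, a manifest with such a list might look something
like the sketch below. The "capabilities" field name and the
capability strings are invented for illustration; this isn't a
settled format:

  {
    "name": "AwesomeSMS+",
    "launch_path": "/index.html",
    "capabilities": [
      "sms-send",
      "device-storage-photos",
      "tcp-socket"
    ]
  }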

When a call is made to the OWA .install function to install an app,
the store also passes along a list of capabilities that the store
entrusts the app with, and the level of trust for each of these
capabilities. The browser internally knows which stores it trusts to
hand out which capabilities, and at which level of trust. The
capabilities granted to the app are basically the intersection of
these two lists, i.e. the lowest level in either list for each
capability.

In the installation UI we could enable the user to see which
capabilities will be granted, and at which level. However it
should always be safe for the user to click yes, so we have a lot of
freedom in how we display this.

Further, we should allow the user to modify these settings during the
installation process as well as after an app is installed. We should
even allow users to set a default policy like "always 'deny' TCP
socket access", though this is mostly useful for advanced users. If
the user does that we intersect with this list too before granting
permissions to an app.
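
To put the last few paragraphs together: the effective level for a
capability is just the minimum across everyone who has a say - the
store's grant, the level we trust that store to hand out, and any
user policy. A sketch with illustrative names (the relative ranking
of the two "prompt" levels is a guess):

  // Sketch of the intersection rule: the effective access level is
  // the lowest level granted by any party. Names are illustrative.
  var RANK = { "deny": 0, "prompt-default-remember": 1,
               "prompt-default-not-remember": 2, "allow": 3 };

  function effectiveLevel(capability, storeGrant, browserTrustInStore,
                          userPolicy) {
    var fromStore   = storeGrant[capability] || "deny"; // store must grant it
    var fromBrowser = browserTrustInStore[capability] || "deny";
    var fromUser    = userPolicy[capability] || "allow"; // user limits are opt-in
    return [fromStore, fromBrowser, fromUser].reduce(function (a, b) {
      return RANK[a] <= RANK[b] ? a : b;
    });
  }

  // e.g. the store grants "allow" for "tcp-socket", the browser only
  // trusts this store up to "prompt-default-remember", and the user
  // has said "always deny TCP socket access" => effective level "deny".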

For any privacy-sensitive capabilities, we simply don't grant stores
the ability to hand out trust higher than one of the "prompt ..."
levels. That way we ensure that users are always asked before their
data is shared.

In addition to this, I think we should have a default set of
capabilities which are granted to installed apps. For example the
ability to use unlimited amount of device storage, the ability to
replace context menus and the ability to run background workers (once
we have those). This fits nicely with this model since we can simply
grant some capabilities to all installed apps (we'd need to decide if they
should still be required to list these capabilities in the manifest or
not).

Another thing which came up during a recent security review is that
we'll likely want to have some technical restrictions on which sites
can be granted some of these capabilities. For example something as
sensitive as SMS access might require that the site uses STS (strict
transport security) and/or EV-certs. This also applies to the
stores which we trust to hand out these capabilities.

There are also very interesting things we can do by playing around with
cookies, but I'll leave that for a separate thread as that's a more
narrow discussion.

Let me know what you think.

/ Jonas

Lucas Adamski

Mar 8, 2012, 11:31:10 AM
to Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list
Hi Jonas,

Thank you for sending this out! I really like the model overall.

With sensitive APIs, even if a 3rd party vouches for the capabilities of the app, I believe we would still want to communicate that to the user somehow at installation time? I'm concerned we'd end up with a pretty long and arcane list. Maybe we could map those to a general "system access" meta-capability.

Actually, does this proposal assume all apps will go through the same installation experience (i.e. do we have the concept of an app without an explicit installation)?

Adding dev-security for more brains.
Lucas.

--
Nothing can be more abhorrent to democracy than to imprison a person or keep him in prison because he is unpopular. This is really the test of civilization. - Sir Winston Churchill
> _______________________________________________
> dev-b2g mailing list
> dev...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-b2g

ptheriault

Mar 8, 2012, 4:48:46 PM
to cjo...@mozilla.com, Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list
Jonas,

Thanks for taking the time to document your thoughts. I also caught up with Chris Jones from B2G yesterday to go over security, and we discussed app permissions as well. I have written up a couple pages of notes, but I'd like to note a key difference. From our discussion yesterday (and Chris, correct me if I misunderstood) there will be two levels of trusted stores required: one that sells and hosts "privileged installed apps", and one that provides Web Apps under the current third party model. As I understood it from Chris you would have:

- Privileged store: this is the store which your B2G device comes preconfigured with - mozilla's marketplace, and/or the telco's, etc. Apps from this store will be able to be granted permissions like dialer and SMS - services which are critical to the phone's operation, and for which regulatory constraints exist (emergency calls etc). Detailed technical review of both code and permissions will be required, as will contractual terms for any apps that are third party. These Web Apps would be hosted on the marketplace (not a third party web server) so that code integrity can be maintained post review. All updates would be brokered by the store. The install process would be downloading a discrete package of files, to be served in an offline manner on a unique synthetic domain. I don't know that we would want users to be able to add privileged stores - or maybe they could, but it might void their warranty or something?

- Trusted store: this is the store where you get your "apps" (i.e. analogous to the App Store or Android Marketplace). This is as you have described below. Some permissions would not be allowed to be granted here (consider the regulatory requirements for the dialer app, for example), but this store is trusted to review/verify third party Web Apps. The challenge as I see it for these stores is that, as far as I understand it, Web Apps are hosted by third parties. The store has a local copy of the manifest, so it can check that it doesn't change, but the Web App site can change whatever it likes, so any review is meaningless down the track. So reviewing permissions in the manifest and enforcing contractual terms are the main controls here? I think it then becomes a question of which permissions we can trust to such a model; maybe permissions granted by this store must only be those which users would currently grant to a website that they trust.

Or maybe there is some middle ground between the two. The main balance I think we need to strike is between the risks introduced by having remotely hosted Web Apps outside of the control of a store, and sensitive permissions critical to a device's function.

In terms of the rest of your points, I think our discussion yesterday was pretty much in line. I've made a few comments in line below.
Isn't this just like Apple's policy except now without the technical review component? At a minimum we should be reviewing the manifests to ensure the permissions remain as requested, but even so, this doesn't seem like a strong control to me. Do we need to be careful about what permissions we grant under such a scheme? (My assumption is that if fitbit's Web App website gets owned, then all their users' devices are owned too - is that correct? Maybe I am missing something, or maybe we can do something technically to mitigate this risk...) As Lucas mentions, I also think we should be informing the user somehow, but it sounds like this will be part of the installation UI where the user can disable permissions an app has requested.


>>
>> However we also don't want all app developers which need access to
>> sensitive APIs to have to come to mozilla (and any other browser
>> vendor which implement OWA). We should be able to delegate the ability
>> to hand out this trust to parties that we trust. So if someone else
>> that we trust wants to open a web store, we could give them the
>> ability to sell apps which are granted access to these especially
>> sensitive APIs.
>>
>> This basically creates a chain of trust from the user to the apps. The
>> user trusts the web browser (or other OWA-runtime) developers. The
>> browser developers trusts the store owners. The store owners trust the
>> app developers.

Will this create a financial disincentive for marketplaces to review Apps properly? (more review = more cost = higher app prices?) Just a thought.

Also how will a user know which stores to trust?

How do domains which install themselves as Web Apps fit into this model? Is there perhaps a default lower set of permissions that websites can install themselves with - basically the same types as websites get, except that with apps, permissions might be able to get "prompt to remember" instead of just "prompt"?

>>
>> Of course, in the vast majority of cases apps shouldn't need access to
>> these especially sensitive APIs. But we need a solution for the apps
>> that do.
>>
>> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>>
>>
>> How to implement this:
>>
>> We create a set of capabilities, such as "allowed to see the users
>> idle state", "allowed to modify files in the users photo directory",
>> "allowed low-level access to the USB port", "allowed unlimited storage
>> space on the device", "allowed access to raw TCP sockets".
>>
>> Each API which requires some sort of elevated privileges will require
>> one of these capabilities. There can be multiple APIs which
>> semantically have the same security implications and thus might map to
>> the same capabilities. However it should never need to be the case
>> that an API requires two separate capabilities. This will keep the
>> model simpler.

Agreed, but APIs then need to be split accordingly into trust groups, like Camera API and Camera Control API.

>>
>> For each of these capabilities we'll basically have 4 levels of
>> access: "deny", "prompt default to remember", "prompt default to not
>> remember", "allow". For the two "prompt..." ones we'll pop up UI and
>> show the user yes/no buttons and a "remember this decision" box. The
>> box is checked for the "prompt default to remember" level.
>>
>> We then enhance the OWA format such that an app can list which
>> capabilities it wants. We could possibly also allow listing which
>> level of access it wants for each capability, but I don't think that
>> is needed.

"Allow" is quite different to "prompt"? If this isn't in the manifest, where is it set - would a store set these for an App?
Chris brought up the issue of regulatory controls for functions like the dialer. (e.g. phones always need to be able to make emergency calls.)
Hence the description of the privileged store concept above, where the store hosts the code of the Web App. It would likely also be a completely offline web app, so we might also be able to add technical controls like CSP. (e.g. the dialer should be restricted from making connections to the internet?)

Jonas Sicking

Mar 9, 2012, 6:53:33 PM
to Lucas Adamski, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list
On Thu, Mar 8, 2012 at 8:31 AM, Lucas Adamski <lada...@mozilla.com> wrote:
> Hi Jonas,
>
> Thank you for sending this out!  I really like the model overall.
>
> With sensitive APIs, even if a 3rd party vouches for the capabilities of the app, I believe we would still want to communicate that to the user somehow at installation time?  I'm concerned we'd end up with a pretty long and arcane list.  Maybe we could map those to a general "system access" meta-capability.

I think we'll have a lot of freedom in how we construct the UI here
given that in some sense it's mostly "informational". So we can group
things together as needed to reduce UI clutter.

> Actually, does this proposal assume all apps will go through the same installation experience (i.e. do we have the concept of an app without an explicit installation)?

I think for all really sensitive APIs we'd be relying on going
through the installation experience, yeah, in order to have the
trusted store validate that access is ok.

For a lot of not quite so sensitive APIs, like geolocation or ability
to add files to DeviceStorage, a simple prompt to the user should be
ok. This could be expressed as a default level for all of these
capabilities for non-installed apps, where we default to "no access"
for the especially sensitive APIs.
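
Such a default could be modeled as just one more grant table,
consulted when there is no installing store. The capability names and
levels below are purely illustrative:

  // Hypothetical default levels for plain, non-installed web content.
  var WEB_CONTENT_DEFAULTS = {
    "geolocation":        "prompt-default-not-remember", // simple prompt is ok
    "device-storage-add": "prompt-default-not-remember",
    "sms-send":           "deny", // especially sensitive: no access by default
    "usb-raw":            "deny"
  };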

/ Jonas

lkcl luke

Mar 9, 2012, 7:26:54 PM
to Jonas Sicking, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org
On Thu, Mar 8, 2012 at 10:25 AM, Jonas Sicking <jo...@sicking.cc> wrote:

> There are plenty of examples of bad security models that we don't want
> to follow. One example is the security model that "traditional" OSs,
> like Windows and OS X, use, which is "anything installed has full
> access, so don't install something you don't trust".

technically that's inaccurate, but practically speaking it is.
there's a difference.

* the security model of OSX is actually BSD-based i.e. POSIX underneath.

* the security model of windows is actually NT which is actually from
dave cutler's knowledge of VAX/VMS when he was working for DEC, and
is, i think the technical term is, a "DAC" system - a
Discretionary Access Control system.

however, once you put even a superb ACL system like that which was
designed for NT in front of mass-volume users, it doesn't matter *how*
good the "sicuuurriteee" is: it always has to cater for the lowest
common denominator, and you also run into the "mono-culture" problem.

this is best expressed as a hilarious mathematical formula i saw
once on the whiteboard of a friend who worked for ISS:

  Sum(idiots = 0 .. n) Security -> 0, as n -> infinity

it's better done as a diagram :)

but in effect it says "security goes to shit as the number of idiots
increases".


> I.e. it's fully
> the responsibility of the user to not install something that they
> don't 100% trust.

yes.

idiots.

> Not only that, but the decision that the user has to
> make is pretty extreme, either grant full privileges to the
> application, or don't run the application at all.

that is a mistake made on the part of the application writers, who
have, by and large, given up on the whole "security" concept on both
windows and macosx, and have not made good use of the available
security measures.

in the case of windows, it's because they know there's no fucking
point, there's so many ways to piss all over an OS that is written
through profit-maximisation that they too don't bother wasting their
money even trying.

in the case of MacOSX, it's because they aren't hit by the
"mono-culture" problem (there's not enough of a critical mass to make
viruses mainstream)

> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have.

... which STILL doesn't help you because, again a) android is hitting
the mono-culture "critical mass" problem b) that increases the number
of idiots c) the pay-off for attackers is very very high: being able
to send SMS messages to premium-rate telephone numbers, gaining access
to bank accounts etc.

> This is somewhat better, but we're still relying heavily on the
> user to understand what all capabilities mean.

yes. exactly.

> And the user still has
> to make a pretty extreme decision: Either grant all the capabilities
> that the app is requesting, or don't run the app at all.

which you DON'T have the source code for, so cannot even attempt to
verify that the code is secure, or even count on the "reputation" of
people and the whole Software (Libre) ethos, nor ask or pay anyone to
perform that verification audit should they even _want_ to.


> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user.

yeah. rriiiight. i think my opinion of such a system is best not
expressed on public mailing lists. i'm soo tempted though. oh go on.
i'll just use the one phrase. close your ears if you have a
politically-sensitive disposition. "Yahvohl mein Fuhrer!" *snaps
heels together*.

soo.. uhmm.... no. i don't think the Mozilla Foundation has the
right to set itself up, nor to set anyone *else* up, as the "God" that
shall dictate that which users shall put on the hardware which *they*
have purchased.

... apart from anything, what happens if you make a mistake?


anyway: it's worthwhile mentioning that you've left a rather
important security model off the list: it's called "FLASK". FLASK was
developed as a replacement for the hierarchical "ACL" concept. its
premise is "when you want to do something different from what you were
doing right now, it doesn't matter who you were before, you lose ALL
rights and you now start again. and one of those rights includes 'the
right to do something different' ".

translation: when an application runs a new application (exec)
there's absolutely *no* transference of any "rights" over the
execution boundary. the right to even *execute* a new program is also
a right which must be explicitly granted. in FLASK, what you can do
depends on who you were, where you are *AND* what you're executing.

it's rather neat, and it actually reflects *REAL LIFE* security
especially in military environments. take an example: i went to work
for NC3A. the "security policy" for me being able to do that was
written by NC3A and vetted by the UK Ministry of Defense. they wrote
the "rules". when i went to work each day, i had to hand over my
passport, and was given a "badge". my identity - my rights -
specifically the right to leave even the country known as "Holland"
had been removed. i was no longer an "EU Citizen". the "badge"
allowed me to open certain doors but not others. the "badge" had a
time-limit placed on it.

also, FLASK is, if memory serves me correctly, a "Mandatory"
Access Control system. the default is "zero permissions".
"Discretionary" Access Control is where you can do anything unless
told otherwise.

you will be familiar with FLASK through SE/Linux. SE/Linux is an
implementation of the FLASK security model.

the advantage of using FLASK will be that it already exists. the
disadvantage of FLASK (i say "disadvantage" but it's not) is that its
primary method of isolating security areas is to use "exec()". the
NSA, who commissioned FLASK and helped implement it in SE/Linux, fully
recognise that "fork()" simply *cannot* be properly secured [caveat
described below].

you - the mozilla foundation - would do well to heed the experience
and knowledge of the NSA.

this is why i advocated that the mozilla foundation *not* design B2G
as a monolithic "WebAPI" extender but instead write all WebAPI code as
JSONRPC services which are contacted by the B2G applications using
nothing more sophisticated than XMLHttpRequest (AJAX).

the reasoning is very simple: if someone manages to write a buffer
overrun or other exploit which gains control of B2G's execution space,
it's "Game Over". it doesn't *matter* what kind of "nice security
API" is implemented in B2G, because that security API is a
*cooperative* i.e. *user-space* security model that can easily be
bypassed by simply... not calling the function, and because it's a
multi-threaded / multi-process application, *every* thread is
compromised. including access to the GSM/3G modem. whoops.
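
for what it's worth, here's a rough sketch of the call pattern i'm
advocating. the endpoint, port and method names are invented for
illustration:

  // rough sketch: the privileged WebAPI implementation lives in a
  // separate service process, and apps reach it over JSON-RPC via
  // plain XMLHttpRequest. endpoint/port/method names are invented.
  function callWebApi(method, params, onResult) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "http://127.0.0.1:7777/webapi", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.onload = function () {
      onResult(JSON.parse(xhr.responseText).result);
    };
    xhr.send(JSON.stringify({ jsonrpc: "2.0", id: 1,
                              method: method, params: params }));
  }

  // the service process, not the app, holds the privileges, so code
  // that compromises the app can only *ask*; it cannot skip the check.
  callWebApi("sms.send", { to: "+15551234567", body: "hi" },
             function (result) { /* ... */ });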

[caveat: when i worked with SE/Linux, i did describe a use-case for
allowing security-context "names" to be switched, with cooperation
from the actual application, on a "fork". normally, SE/Linux
automatically tracks processes via exec() boundaries so that the
applications don't even have to know that they're being run under
SE/Linux. but because up until that time, "fork" was not a recognised
secure boundary for SE/Linux contexts, samba and other servers were in
a spot of bother. by adding in a function which notified SE/Linux,
"hey SE/Linux, a fork's just about to come up, and we'd really
appreciate it if any applications which were exec()'d by the new
fork()ed process would be done under a completely different context
named {insert_name_of_user}, please take note, just fyi and all" :)
yes it's complicated, but.. welcome to security! ]


> Proposal:
>
> The basic ideas of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example for
> almost all apps that want to have access to the users addressbook, we
> should check with the user that this is ok. Most of the time we should
> be able to show a "remember this decision" box, which many times can
> default to checked, so the user is only faced with this question once
> per app.
>
> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example asking the user "do you want to
> allow USB access for this app" is unlikely a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful.

... or, you enforce an absolute requirement that:

* all apps must have the source code published IN FULL;
* that the application be digitally-signed by a PGP/GPG key which
must be within a registered and well-known "Web of Trust"
* that the identity of the person who digitally-signs the application
be verified BEFORE they are allowed to publish apps.

there is pre-existing infrastructure and well-established precedent
for this type of security infrastructure: it's called "The Debian
Project".

the reputation of the debian maintainers is of ABSOLUTE paramount
importance. can you imagine what would happen a) to the debian
project b) to the maintainer if the source code of one of their
packages was found to contain malicious code?? what do you think
would happen if that debian maintainer had been found to actually be
the one RESPONSIBLE for inserting that malicious code into the
application's source code?

so this is "security by reputation", and its success is predicated on
being able to link the origins of the source code back to the person's
*real* identity.


> This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means.

@begin Family Fortunes "our survey said"...
bzzzzt er-errrrrch
@end Family Fortunes

*grin* :)

> For example if the fitbit company comes
> to mozilla and says that they want to write an App which needs USB
> access so that they can talk with their fitbit hardware, and that they
> won't use that access to wipe the data on people's fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.

... yes. exactly. yeah, you're along the right lines. except that
you now have to have *technical* measures to actually enforce those
contracts, and funnily enough there happens to exist infrastructure
which does exactly that [the debian project].

> However we also don't want all app developers which need access to
> sensitive APIs to have to come to mozilla (and any other browser
> vendor which implement OWA). We should be able to delegate the ability
> to hand out this trust to parties that we trust.

yes. exactly like the Debian Project does. and has been developing
the infrastructure to handle exactly this scenario, for 15+ years.

> So if someone else
> that we trust wants to open a web store, we could give them the
> ability to sell apps which are granted access to these especially
> sensitive APIs.

well... there you have a bit of a problem. how do you ensure that
devices are going to run a version of B2G which actually enforces the
security model?

i've changed tack slightly, here, allow me to explain.

you've got several issues to cover:

a) the distribution of applications, and ensuring that the chances of
rogue apps entering the market are reduced in the first place
b) the management and granting of permissions for access within apps
to certain APIs
c) the *enforcement* of permissions on the actual physical devices

up until that last paragraph, you were talking about a) and b) but
hadn't covered c). it's slightly confused in that b) and c) are
related/connected. also, you'd described an introduction which
covered bits of a, b and c.


> This basically creates a chain of trust from the user to the apps.

yes. it's why the debian project actually formally codifies that
chain of trust in the "Debian Keyring". the debian keyring is, i
believe i may be correct in saying, one of the largest and most
heavily-used keyrings in the world? dunno.

> The
> user trusts the web browser (or other OWA-runtime) developers. The
> browser developers trusts the store owners. The store owners trust the
> app developers.
>
> Of course, in the vast majority of cases apps shouldn't need access to
> these especially sensitive APIs. But we need a solution for the apps
> that do.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>
>
> How to implement this:
>
> We create a set of capabilities, such as "allowed to see the users
> idle state", "allowed to modify files in the users photo directory",
> "allowed low-level access to the USB port", "allowed unlimited storage
> space on the device", "allowed access to raw TCP sockets".

this is implementation details b+c above. you also need to make a
decision to implement an "a)" as well.

> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map to
> the same capabilities. However it should never need to be the case
> that an API requires two separate capabilities. This will keep the
> model simpler.
>
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.

no - you want two levels of access: allow or deny. you're confusing
network-style access control which comes from these dumbed-down
windows-based firewall systems which go "you wanna let dis program do
sumfink on a port you don't even know wot it means? it da inturnet
maaan, dat's all u no"

so you're assuming that users are intelligent enough to know what
they're actually letting themselves in for.

as this is an OS for mass-volume products, you simply cannot make the
assumption that users will know what to do.

that having been said, yes you really do want something like the
"firewall popup" thing... but *not* at the actual hard access control
level in the OS. that should be "allow" or "deny" - period. with, it
has to be said, a notification system which allows the approval to be
"vetted"... *sigh*.


> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.

ok, now you're into "a" and "b", here.
i think... you [mozilla foundation] need to pick a security model
that is capable of reflecting all of these kinds of things. i'd
recommend FLASK (aka SE/Linux) because it will be capable of handling
these things and has a mature well-established infrastructure for
doing so.

it _is_ however a lot of work. security always is.

hey, did you hear about how long each Head of the Windows Security
Team lasts, in microsoft? it's not long. each new enthusiastic
employee/victim who
gets the job goes to his superiors and says, "Hey boss! I've got a
Great Idea on how to improve the security of Windows!" and they get
asked "does it increase the profits of the Microsoft Corporation?" and
every time they say "err no, actually it would cost us money and
probably customers". they get told "well go away and find us
something that increases our profits".

pretty soon they quit.

the mozilla foundation doesn't have _limitless_ funds (but it _does_
have a different set of priorities) so you do actually have the
opportunity to do something different (an actually... like... properly
secure OS and infrastructure? y'know?) but it would be wise to
leverage existing infrastructure where possible, so as to save time,
funds and the sanity of people implementing it.


> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV-certs. This also applies to the
> stores which we trust to hand out these capabilities.

the more you're describing, the more convinced i am that you want
FLASK. it's capable of creating a security context that not only
links code with users but also links in networking (and other
resources) as well.


> There's also very interesting things we can do by playing around with
> cookies, but I'll leave that for a separate thread as that's a more
> narrow discussion.

adding an SE/Linux security context to a cookie. ooh bloody 'ellfire :)

l.

lkcl luke

Mar 9, 2012, 7:33:36 PM
to ptheriault, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list, cjo...@mozilla.com
> On Mar 9, 2012, at 3:31 AM, Lucas Adamski wrote:

> Also how will a user know which stores to trust?

[apologies to the dev-security list, the reply i wrote went to the
original recipients, i hadn't noted the addition of dev-security as it
was later in the thread. you can see a copy of what i wrote, which is
the background behind this particular follow-up reply, here:
https://groups.google.com/d/msg/mozilla.dev.b2g/AQYPkIjKxjE/65jok-pPKw0J
]

in the case of the debian distribution, that's encoded into the
/etc/apt/sources.list file. if users edit that file and start adding
e.g. "deb http://debian-multimedia.org" then they get prompted
"WARNING! application from untrusted source! wark wark". if however
they also take an *extra* step which is to add the debian-multimedia
keyring package (which, of course, will fire up a "WARNING!
application from untrusted source! wark wark" warning), *then* they're
ok, and have *actively* taken steps to say "we trust packages from
source named debian-multimedia.org".

l.

lkcl luke

Mar 9, 2012, 7:52:40 PM
to ptheriault, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list, cjo...@mozilla.com
On Thu, Mar 8, 2012 at 9:48 PM, ptheriault <pther...@mozilla.com> wrote:

> Chris brought up the issue of regulatory controls for functions like the dialer. (e.g. phones always need to be able to make emergency calls).

the experience of the OpenMoko project i believe is relevant here.
their infrastructure was so top-heavy based on d-bus that the entire
OS was completely unresponsive to receiving and making phone calls.

to cater for both that and the security issue, if you followed my
recommendation to use FLASK (SE/Linux) i would recommend the creation
of an actual separate application which is small but permanently
memory-resident (so as to be quick and instantly responsive) that has
access to the RIL/GSM, and can be a "one-shot-dialer". this
executable could also be given ultra-high priority by the linux
kernel.... because it is a separate application. it could even be
fired off by a "panic button" as well as being triggered by dialing
"911", "999" or "112" (depending on the country).

so even if the UI is heavily busy performing other tasks (such as
running the latest and greatest version of angry birds or consuming
vast amounts of memory because it's a google app or a flash
application) and oh look! the UI *is* the B2G application... even if
that were the case, it would *still* be possible to make emergency
calls, even if you couldn't actually see the UI telling you that the
call had connected because it was still too damn busy running "angri
burds".

_and_, most importantly, under the FLASK security model you'd be able
to prioritise that application with a different set of permissions to
allow it to be contacted by a wider range of other apps (or narrower?)
than might otherwise be granted to "ordinary" apps which are allowed
(or otherwise) to make "ordinary" phone calls.

_and_ it would still be accessible even if rogue apps had compromised
the rest of the B2G infrastructure.

l.

Jonas Sicking

Mar 9, 2012, 11:01:47 PM
to ptheriault, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list, cjo...@mozilla.com
On Thu, Mar 8, 2012 at 1:48 PM, ptheriault <pther...@mozilla.com> wrote:
> Jonas,
>
> Thanks for taking the time to document your thoughts. I also caught up with
> Chris Jones from B2G yesterday to go over security, and we discussed app
> permissions as well. I have written up a couple pages of notes, but I'd like
> to note a key difference. From our discussion yesterday (and Chris, correct
> me if I misunderstood) there will be two levels of trusted stores required:
> one that sells and hosts "privileged installed apps", and one that provides
> Web Apps under the current third party model. As I understood it from Chris
> you would have:
>
> - Privileged store: this is the store which your B2G device comes
> preconfigured with - mozilla's marketplace, and/or the telco's etc. Apps
> from this store will be able to be granted permissions like dialer and SMS -
> services which are critical to the phone's operation, and for which
> regulatory constraints exist (emergency calls etc). Detailed technical
> review of both code and permissions will be required, as will contractual
> terms for any apps that are third party. These Web Apps would be hosted on
> the marketplace (not a third party web server) so that code integrity can be
> maintained post review. All updates would be brokered by the store. The
> install process would be downloading a discrete package of files, to be
> served in an offline manner on a unique synthetic domain. I don't know that
> we would want users to be able to add privileged stores - or maybe they
> could, but it might void their warranty or something?

I'm not sure that we need a dedicated store for the built-in apps.

As described in my initial email, mozilla would have relationships
with a number of stores (including its own) which would allow those
stores to grant apps higher privileges. So for each of those stores we
would list the capabilities that that store is allowed to grant, and
how high a level of access for each capability.

I could certainly see that a telco shipping a B2G device would want
to add its own store to that list, and configure it such that the
telco's store has the ability to install apps with most capabilities
and with a very high level of access for them. And then only grant
such high levels to apps that they write themselves or otherwise feel
that they can trust. For example to do things like dialers.

If the telco so wants (or is required to by law) it can host these
apps on its own servers and do whatever review is needed before it
indicates trust in them through the store.

However there is no reason we also couldn't allow "preinstalled" apps
on B2G, i.e. apps that are already on the device and in the various
internal databases describing what level of trust each app should
have. So a telco could grant preinstalled apps any level of privilege
they want, independent of any stores.

Beyond that I don't think we need a concept of a "Privileged store".

> - Trusted store: this is the store where you get your "apps" (i.e. analogous
> to the App Store or Android Marketplace). This is as you have described
> below. Some permissions would not be allowed to be granted here (consider
> the regulatory requirements for the dialer app for example), but this store
> is trusted to review/verify third party Web Apps. The challenge as I see it
> for these stores is that, as far as I understand it, Web Apps are hosted by
> third parties. The store has a local copy of the manifest, so it can check
> that it doesn't change, but the Web App site can change whatever it likes,
> so any review is meaningless down the track. So reviewing permissions in the
> manifest and enforcing contractual terms are the main controls here? I think
> it then becomes a question of which permissions we can trust to such a
> model; maybe permissions granted by this store must only be those which
> users would currently grant to a website that they trust.

I don't have a whole lot of faith in code reviews catching people
misusing permissions given to them. Code review is notoriously hard,
and if developers try to get something past you they generally can.
And it has a tendency to get in the way of things like security
updates that need to happen on extremely tight schedules.

A better solution is to establish non-technical relationships with the
application authors and use those in combination with technical means
to enforce policies. So for example mozilla could have a policy of
only granting the ability to send SMS messages to applications from
companies that have signed contracts saying that they won't send
messages other than in response to explicit user actions to do so.

> > For such sensitive APIs I think we need to have a trusted party verify
> > and ensure that the app won't do anything harmful. This verification
> > doesn't need to happen by inspecting the code, it can be enforced
> > through non-technical means. For example if the fitbit company comes
> > to mozilla and says that they want to write an App which needs USB
> > access so that they can talk with their fitbit hardware, and that they
> > won't use that access to wipe the data on people's fitbit hardware, we
> > can either choose to trust them on this, or we can hold them to it
> > through contractual means.
>
> Isn't this just like Apple's policy except now without the technical review
> component?  At a minimum we should be reviewing the manifests to ensure the
> permissions remain as requested, but even so, this doesn't seem like a
> strong control to me. Do we need to be careful about what permission we
> grant under such a scheme?

Reviewing manifests won't really be needed. Access is granted based on
what the *store* tells us that it wants to grant that app, not based
on what is requested in the manifest. Or rather, it'd be the lowest of
the two. So even if the manifest changes, that won't automatically
grant the app any additional privileges.

We will need a way for an app to request additional privileges during
an upgrade though. But at that point we need to verify with the store
that the app was installed through that it is willing to grant the
app those additional privileges.

> (My assumption is that if fitbit's Web App
> website gets owned, then all their users' devices are owned too - is that
> correct? Maybe I am missing something, or maybe we can do something
> technically to mitigate this risk...)

This is correct. I don't think it's possible to create a permission
model where we can tell an app apart from hacked code running inside
the app. If we could, we would simply shut down the app any time we
detected that it was hacked.

> As Lucas mentions, I also think we
> should be informing the user somehow, but it sounds like this will be part
> of the installation UI where the user can disable permissions an app has
> requested.

I'm all for showing the user what privileges will be granted. However
the model should be very clear that this is strictly informative UI.
It should always be safe for users to simply press "yes". The store is
responsible for making sure that unsafe privileges aren't granted.

> > However we also don't want all app developers which need access to
> > sensitive APIs to have to come to mozilla (and any other browser
> > vendor which implement OWA). We should be able to delegate the ability
> > to hand out this trust to parties that we trust. So if someone else
> > that we trust wants to open a web store, we could give them the
> > ability to sell apps which are granted access to these especially
> > sensitive APIs.
> >
> > This basically creates a chain of trust from the user to the apps. The
> > user trusts the web browser (or other OWA-runtime) developers. The
> > browser developers trusts the store owners. The store owners trust the
> > app developers.
>
> Will this create a financial disincentive for marketplaces to review
> Apps properly? (more review = more cost = higher app prices?) Just a
> thought.

Yes. Though again, I don't think stores should get into the business
of reviewing apps' code. It's mozilla's responsibility to hand out
trust to stores which will handle that trust responsibly.

> Also how will a user know which stores to trust?

It's not the users that make that decision. As described in my email,
it's Mozilla's job to create the list of stores that we make Firefox
trust. Similarly it'll be Google's job to select who they want to put
in the list of trusted webapp stores, and Apple's job to select who
they put in Safari's, etc.

> How do domains which install themselves as Web Apps fit into this model?  Is
> there perhaps a default lower set of permissions that websites can install
> themselves with - basically the same types as websites, except that with
> apps permissions might be able to get "prompt to remember" instead of just
> "prompt"?

Such stores generally won't be trusted. So those stores will work
just fine; however they won't be able to install apps which need SMS
privileges.

It would be great if we could come up with a way where sites could
sell their own apps on their own website, but have them provide a
pointer to a trusted store which could vouch that they are a
trustworthy app. Or, to make an example:

Say that SMSMessagingInc has developed their AwesomeSMS+ app. They
go to mozilla and get mozilla to verify that they are to be trusted
with the "ability to send SMS messages" capability after a simple user
prompt. Mozilla puts the AwesomeSMS+ app in the Mozilla webstore. When
Firefox sees a .install(...) call from the Mozilla webstore for the
AwesomeSMS+ app which says that the app is to be granted "ability to
send SMS messages" capability after a simple user prompt, it knows
that the Mozilla store has the permission to grant that capability and
so the app will work as expected.

Additionally SMSMessagingInc wants to sell the app on their website.
They do so and somehow provide a pointer to the Mozilla webstore. When
Firefox gets the .install call from the SMSMessagingInc website, it
goes to the Mozilla webstore and checks which privileges Mozilla says
the app should have. The Mozilla webstore says that the app
should have "ability to send SMS messages" capability after a simple
user prompt and so the app gets installed with that capability and
works as it should.
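
Sketched from the runtime's side, that flow might look roughly like
this (the "/vouch" endpoint and all the helper names here are
hypothetical):

  // Hypothetical flow: a .install() call comes from a site that is
  // not itself a trusted store, but names a store that vouches for
  // it. isTrustedStore, fetchJson, installApp, intersect and
  // trustedGrantsFor are all invented helpers.
  function installViaVouchingStore(manifestUrl, vouchingStoreUrl) {
    if (!isTrustedStore(vouchingStoreUrl)) {
      installApp(manifestUrl, {}); // no extra capabilities granted
      return;
    }
    // Ask the vouching store which capabilities it grants this app.
    fetchJson(vouchingStoreUrl + "/vouch?app=" +
              encodeURIComponent(manifestUrl),
              function (grant) {
                // Same intersection rule as a direct store install:
                // the store's grant, capped by what the browser
                // trusts that store to hand out.
                installApp(manifestUrl,
                           intersect(grant,
                                     trustedGrantsFor(vouchingStoreUrl)));
              });
  }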

> > Each API which requires some sort of elevated privileges will require
> > one of these capabilities. There can be multiple APIs which
> > semantically have the same security implications and thus might map to
> > the same capabilities. However it should never need to be the case
> > that an API requires two separate capabilities. This will keep the
> > model simpler.
>
> Agree but APIs then need to be split accordingly into trust groups, like
> Camera API and Camera Control API.

Note that an "API" can be defined as a single function. So we can, and
likely will, have functions on the same object which have different
capability levels.

So for example we will likely have different capability requirements
for DeviceStorage.add and DeviceStorage.delete
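
In other words the capability check can hang off individual functions
rather than whole objects. A tiny sketch, with invented capability
names:

  // Hypothetical: per-function capability requirements, so two
  // methods on the same object can demand different capabilities.
  var REQUIRED_CAPABILITY = {
    "DeviceStorage.add":    "device-storage-add",
    "DeviceStorage.delete": "device-storage-delete"
  };

  function capabilityFor(fullName) {
    return REQUIRED_CAPABILITY[fullName] || null; // null = none needed
  }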

> > For each of these capabilities we'll basically have 4 levels of
> > access: "deny", "prompt default to remember", "prompt default to not
> > remember", "allow". For the two "prompt..." ones we'll pop up UI and
> > show the user yes/no buttons and a "remember this decision" box. The
> > box is checked for the "prompt default to remember" level.
>
> > We then enhance the OWA format such that an app can list which
> > capabilities it wants. We could possibly also allow listing which
> > level of access it wants for each capability, but I don't think that
> > is needed.
>
> Allow is quite different to prompt? If this isn't in the manifest, where is
> it set - would a store set these for an App?

As stated in the original email, the list of capabilities, and their
access levels, is handed to the .install() function when a store
installs an app. So yes, this comes from the store.

> > Another thing which came up during a recent security review is that
> > we'll likely want to have some technical restrictions on which sites
> > can be granted some of these capabilities. For example something as
> > sensitive as SMS access might require that the site uses STS (strict
> > transport security) and/or EV-certs. This also applies to the
> > stores which we trust to hand out these capabilities.
>
> Chris brought up the issue of regulatory controls for functions like the
> dialer. (e.g. phones always need to be able to make emergency calls).
> Hence the description of the privileged store concept above, where the store
> hosts the code of the Web App. It would likely also be a completely offline
> web app, so might also be able to add technical controls like CSP. (e.g.
> dialer should be restricted from making connections to the internet?)

I hope I answered the parts about privileged stores above. I.e. I
don't think we need them.

I also don't see why we wouldn't let dialer apps connect to the
internet? Especially in the scenario where the dialer app is
preinstalled and thus fully trusted.

/ Jonas

Jonas Sicking

Mar 9, 2012, 11:16:06 PM
to dev...@lists.mozilla.org, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org
On Thu, Mar 8, 2012 at 2:25 AM, Jonas Sicking <jo...@sicking.cc> wrote:
> Hi All,
>
> I'm way over due to write a proposal for the Open Web Apps and
> Boot-to-Gecko security models.
>
> Background:
>
> In general our aim should always be to design any API such that we can
> expose it to as broad of set of web pages/apps as possible. A good
> example of this is the Vibration API [1] which was designed such that
> it covers the vast majority of use cases, while still safe enough that
> we can expose it to all web pages with risk of annoying the user too
> much, or putting him/her at a security or privacy risk.
>
> But we can't always do that, and so we'll need a way to safely grant
> certain pages/apps higher privilege. This gets very complicated in
> scenarios where describing the security impact to the user
>
> There are plenty of examples of bad security models that we don't want
> to follow. One example is the security model that "traditional" OSs,
> like Windows and OS X, uses which is "anything installed has full
> access, so don't install something you don't trust". I.e. it's fully
> the responsibility of the user to not install something that they
> don't 100% trust. Not only that, but the decision that the user has to
> make is pretty extreme, either grant full privileges to the
> application, or don't run the application at all. The result, as we
> all know, is plenty of malware/grayware out there, with users having a
> terrible experience as a result.
>
> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have. This is somewhat better, but we're still relying heavily on the
> user to understand what all capabilities mean. And the user still has
> to make a pretty extreme decision: Either grant all the capabilities
> that the app is requesting, or don't run the app at all.
>
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user. The nice thing about this is that we're no longer relying on
> the user to make informed decisions about what is and what isn't safe.
> However Apple further has the restriction that *only* they can say
> what is safe and what is not. Additionally they deny apps for reasons
> other than security/privacy problems. The result is that even when
> there are safe apps being developed, that the user wants to run, the
> user can't do so if apple says "no". Another problem that iOS has, and
> which has made headlines recently, is that Apple enforces some of its
> privacy policies not using technical means, but rather using social
> means. This has lately lead to discoveries of apps which extracts the
> users contact list and sends it to a server, without the users
> consent. This is things that Apple tries to catch during their review,
> but it's obviously hard to do so perfectly.
>
>
> Proposal:
>
> The basic ideas of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example for
> almost all apps that want to have access to the users addressbook, we
> should check with the user that this is ok. Most of the time we should
> be able to show a "remember this decision" box, which many times can
> default to checked, so the user is only faced with this question once
> per app.
>
> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example asking the user "do you want to
> allow USB access for this app" is unlikely a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful. This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means. For example if the fitbit company comes
> to mozilla and says that they want to write an App which needs USB
> access so that they can talk with their fitbit hardware, and that they
> won't use that access to wipe the data on people's fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.
>
> However we also don't want all app developers which need access to
> sensitive APIs to have to come to mozilla (and any other browser
> vendor which implements OWA). We should be able to delegate the ability
> to hand out this trust to parties that we trust. So if someone else
> that we trust wants to open a web store, we could give them the
> ability to sell apps which are granted access to these especially
> sensitive APIs.
>
> This basically creates a chain of trust from the user to the apps. The
> user trusts the web browser (or other OWA-runtime) developers. The
> browser developers trust the store owners. The store owners trust the
> app developers.
>
> Of course, in the vast majority of cases apps shouldn't need access to
> these especially sensitive APIs. But we need a solution for the apps
> that do.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>
>
> How to implement this:
>
> We create a set of capabilities, such as "allowed to see the users
> idle state", "allowed to modify files in the users photo directory",
> "allowed low-level access to the USB port", "allowed unlimited storage
> space on the device", "allowed access to raw TCP sockets".
>
> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map to
> the same capabilities. However it should never need to be the case
> that an API requires two separate capabilities. This will keep the
> model simpler.
>
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.
>
> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.
>
> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV-certs. This also applies to the
> stores which we trust to hand out these capabilities.
>
> There are also very interesting things we can do by playing around with
> cookies, but I'll leave that for a separate thread as that's a more
> narrow discussion.
>
> Let me know what you think.
>
> / Jonas

There were a couple of pieces that I forgot to mention above and which
I think are quite important.

User control:

I think it's very important in all this that we put the user in
ultimate control. I don't think we want to rely on the user to make
security decisions for all APIs, however I think it's important that
we enable users to do so if they so desire. And I think that users
should be able to make security decisions in "both directions", I.e.
both enable more access as well as less access than the above system
provides.

So during installation I think users should be able to tune down
access on a capability-by-capability basis. I.e. the user should be
able to say, "I want to run this SMS app, but I want to completely
disable the ability to send SMS messages".

Additionally, we should have some way for a user to install an app
from a completely untrusted source and grant it any privilege that
he/she wants to. This needs to be a quite complicated UI so that users
don't do this accidentally, but I think it's important to allow as to
not create situations like on iOS where certain apps require users to
hack the device to get to install at all.


Block listing:

We need to figure out a good system for blocking apps if it's detected
that they are harming users. This can happen if for example an app is
hacked, or if it's detected after access has been granted that the app
is intentionally doing malicious things such as sending SMS messages
to high-cost phone numbers. In these situations it needs to be
possible for the app developer, for the store, as well as for mozilla
to push a message to the device that immediately disables the app.

We also need to make this system be user friendly. Right now in
firefox we often end up choosing not to blocklist an addon because the
user experience of doing so isn't good enough in some way or another.

So for example an app might have a bug which causes it to use
tremendous amounts of system resources. Enough that it brings the
phone to a crawl and thus needs to be disabled until this bug is
fixed. However some companies might be critically depending on this
app for their employees. If it's completely impossible for users to
override the block, then the various parties might be reluctant to
disable the app since it'd harm some of the users too much.

I don't know exactly what requirements we'd have for this blocklisting
system, but we should look at the experience we have from the firefox
addon blocklisting mechanism, in particular in the cases where we've
chosen *not* to use it, and learn from that.


/ Jonas

Jonas Sicking

unread,
Mar 9, 2012, 11:48:47 PM3/9/12
to dev...@lists.mozilla.org, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org
On Fri, Mar 9, 2012 at 8:16 PM, Jonas Sicking <jo...@sicking.cc> wrote:
> User control:
>
> I think it's very important in all this that we put the user in
> ultimate control. I don't think we want to rely on the user to make
> security decisions for all APIs, however I think it's important that
> we enable users to do so if they so desire. And I think that users
> should be able to make security decisions in "both directions", I.e.
> both enable more access as well as less access than the above system
> provides.
>
> So during installation I think users should be able to tune down
> access on a capability-by-capability basis. I.e. the user should be
> able to say, "I want to run this SMS app, but I want to completely
> disable the ability to send SMS messages".
>
> Additionally, we should have some way for a user to install an app
> from a completely untrusted source and grant it any privilege that
> he/she wants to. This needs to be a quite complicated UI so that users
> don't do this accidentally, but I think it's important to allow as to
> not create situations like on iOS where certain apps require users to
> hack the device to get to install at all.

And of course in this part I forgot to mention that I think we should
have a place where users can see a list of all the apps they have
installed, and which privileges they are granted, and give them the
ability to lower those privileges to either "prompt" or "deny" (where
prompt would be with "default to not remember" since at this point I
think we can assume that the user is capable of checking the box if
desired).
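
To make that concrete, the per-app permissions table such a UI would
manage might look roughly like this (a sketch only - the capability
names and level strings are hypothetical, not a finalized OWA format):

  // Sketch: hypothetical capability names and level values, mirroring
  // the four access levels proposed earlier in this thread.
  var appPermissions = {
    "https://sms-app.example.com": {
      "sms-send":          "deny",                        // user disabled it
      "contacts-read":     "prompt-default-remember",     // ask, box pre-checked
      "geolocation":       "prompt-default-not-remember", // ask each time
      "storage-unlimited": "allow"                        // granted silently
    }
  };

  // The management UI described above only ever lowers a privilege,
  // never raises it beyond what was granted at install time.
  function lowerPrivilege(origin, capability, newLevel) {
    var order = ["deny", "prompt-default-not-remember",
                 "prompt-default-remember", "allow"];
    if (order.indexOf(newLevel) <
        order.indexOf(appPermissions[origin][capability])) {
      appPermissions[origin][capability] = newLevel;
    }
  }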

/ Jonas

Jonas Sicking

unread,
Mar 10, 2012, 12:27:35 AM3/10/12
to lkcl luke, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org
On Fri, Mar 9, 2012 at 4:26 PM, lkcl luke <luke.l...@gmail.com> wrote:
>> Each API which requires some sort of elevated privileges will require
>> one of these capabilities. There can be multiple APIs which
>> semantically have the same security implications and thus might map to
>> the same capabilities. However it should never need to be the case
>> that an API requires two separate capabilities. This will keep the
>> model simpler.
>>
>> For each of these capabilities we'll basically have 4 levels of
>> access: "deny", "prompt default to remember", "prompt default to not
>> remember", "allow". For the two "prompt..." ones we'll pop up UI and
>> show the user yes/no buttons and a "remember this decision" box. The
>> box is checked for the "prompt default to remember" level.
>
>  no - you want two levels of access: allow or deny.  you're confusing
> network-style access control which comes from these dumbed-down
> windows-based firewall systems which go "you wanna let dis program do
> sumfink on a port you don't even know wot it means?  it da inturnet
> maaan, dat's all u no"

For certain things I definitely think the user is the only person that
can make the choice that is correct for the user.

For example determining which apps I want to give geolocation
information to is a highly personal choice. As is which apps to give
access to my list of contacts to.

But yes, we can only do so when we can explain to the user what it is
that they are granting. That means we can't include port numbers in
that question.

/ Jonas

lkcl luke

unread,
Mar 10, 2012, 11:08:44 AM3/10/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, ptheriault, Mozilla B2G mailing list, cjo...@mozilla.com
ok, apologies for asking this / chipping in. here's an assessment,
from an "outsider's" perspective [one with software engineering
experience and 10+ years of being a lead developer of free software
projects].

* this is a _massively_ complex area with far-reaching implications
for not only the success of the whole B2G project but also for the
codebase required to make it a success.

* the discussions so far qualify as informal "functional analysis".

* there's been no mention of a "requirements specification".

* there's no mention been made of a wiki page in which everything is
being collated. as a lead developer of free software projects i
absolutely insist that people collate "static information" into wiki
pages, regardless of its source (irc, mailing list, private
discussion) _and_ that they regularly refer to that wiki page as the
authoritative source. if they don't do that, i pitch in to the
conversation, recording in some cases verbatim the relevant sections
of the posted message onto the wiki.

* there is however a single mention of a bugreport, instigated by jonas.

if i was actively employed and paid to work on B2G by the mozilla
foundation i would informally chip in and help in this way, but i am
not being so paid so i will not be doing so until such time as payment
_is_ received.

so it is down to you.

why am i mentioning this at all?

because linux on smartphones is something that is of particular
interest to me, and after having seen the messy consequences for free
software of android i'd rather it didn't happen again.

so.

you can see that this is a complex area, given the length of the
write-up by jonas, and also the length of the reply that i sent.
security isn't something you can add in "after the fact" - it has to
be designed in *from the ground up*.

can i respectfully suggest that you begin by creating a wiki page and
that someone takes responsibility for ensuring that everyone refers to
it or that the mozilla foundation pays someone to specifically pay
attention to this rather important area?

i would recommend beginning with the 3 points a) b) and c) (reminder
below) which can be utilised as the top-level headings for the
Functional Analysis, and you can then move forward by linking to the
various discussions from there, as well as moving on once the issues
and implications are fully understood to the Requirements
Specification.

for example: in mentioning the debian project, the successful
digital-signing of source code of the application works *only* if the
source code of the app is actually available. with that in mind, are
there plans to:

* make damn sure that apps only *ever* are in source code form (e.g.
javascript)?
* allow any other programming languages into the mix? (python a la
pyjamas-desktop or a la pyxpcomext)?
* add this "assembly-level" thing which google chrome has added?

this latter will have serious consequences for security. arbitrary
code execution with access to the NPAPI through XPCOM at the assembly
level would be both fun _and_ scary!

but if that is planned some time in the future, you simply can't have
"random apps" being downloaded and expect them to be secure.

if such a strategy (execution of assembly code) is OUTRIGHT BANNED and
will NEVER BE CONSIDERED, then and only then can the security model be
dramatically simplified.

basically what i'm saying is that you a) need to think about these
things b) need to document them for input and review c) need to _tell_
people where they can go to review things d) need to make it easy for
them to review it.

the implications are too big to just walk into this blind.

l.

copy of points a b and c

Jim Straus

unread,
Mar 10, 2012, 4:00:52 PM3/10/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, ptheriault, Mozilla B2G mailing list, cjo...@mozilla.com
Jonas, Paul, etc. -
For any app, but in particular third-party hosted apps, we could require that the manifest contain a signed cryptographic hash of the core of the application (javascript, html, css?), along with the signature of the trusted store. This hash would be validated as signed by a trusted source (similar to or the same as SSL certs), and the application's core checked against the hash. This would require that the browser/device pre-load the given content, but hopefully apps will be using the local-cache mechanism, so this should not be burdensome.
Using this, once a trusted store has validated an application, the application can't be changed, even if it is hosted by a third party. We would have to enforce that a signed application can't download untrusted javascript (eval becomes a sensitive API?). This would allow a third party to host the apps approved by a given store. It would also prevent a hacked store site from distributing hacked apps (well, things like images could still be hacked, but not functionally) as long as the hacker doesn't have access to the signing system (which should clearly not be on a public machine). This doesn't prevent a hacker from gaining access to information communicated back to a server, but at least makes sure that it isn't re-directed somewhere else.
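
As a rough illustration of that verification step (written as
Node-style JavaScript purely for the sake of example - a real device
runtime would use its own crypto primitives, and the manifest field
names here are hypothetical):

  var crypto = require("crypto");

  // manifest.coreHash: hex SHA-256 over the app's core files
  // manifest.storeSignature: base64 signature over coreHash, made with
  //   the trusted store's private key
  function verifyApp(coreFiles, manifest, storePublicKeyPem) {
    var hash = crypto.createHash("sha256").update(coreFiles).digest("hex");
    if (hash !== manifest.coreHash)
      return false; // core files differ from what the store approved
    var verifier = crypto.createVerify("RSA-SHA256");
    verifier.update(manifest.coreHash);
    return verifier.verify(storePublicKeyPem, manifest.storeSignature,
                           "base64");
  }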
The signing mechanism can also be used to blacklist an app. If Mozilla maintains a site with a list of blacklisted signatures and devices query that site, the apps could be disabled. In whatever UI we have to view the list of apps and control their permissions, a blacklisted app would show up as blacklisted and all permissions denied. A user who needs the app would then explicitly re-enable it and re-add permissions (making it a pain to go through the process of looking at the permissions and enabling them), along with suitable warnings when they do so. Probably the blacklist site should contain both the signatures to deny and an explanation of why (consumes excess resources, connects to high-cost SMS servers, leaks contacts, etc.) so that the user can make an informed choice such as to allow an app that consumes excess resources, but not allow an app that leaks personal information or incurs excessive costs.
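
For illustration, a blacklist entry along those lines might carry the
offending signature plus a machine-readable reason (all field names
hypothetical):

  var blacklistEntry = {
    signature:   "base64-signature-of-the-offending-version",
    reason:      "excessive-resource-use", // or "premium-sms", "leaks-contacts"
    description: "Versions 1.2-1.4 keep the CPU pegged and drain the battery.",
    severity:    "warn" // "warn": user may re-enable; "block": fully disabled
  };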
-Jim Straus
>>> For such sensitive APIs I think we need to have a trusted party verify
>>> and ensure that the app won't do anything harmful. This verification
>>> doesn't need to happen by inspecting the code, it can be enforced
>>> through non-technical means. For example if the fitbit company comes
>>> to mozilla and says that they want to write an App which needs USB
>>> access so that they can talk with their fitbit hardware, and that they
>>> won't use that access to wipe the data on people's fitbit hardware, we
>>> can either choose to trust them on this, or we can hold them to it
>>> through contractual means.
>>
>>> However we also don't want all app developers which need access to
>>> sensitive APIs to have to come to mozilla (and any other browser
>>> vendor which implements OWA). We should be able to delegate the ability
>>> to hand out this trust to parties that we trust. So if someone else
>>> that we trust wants to open a web store, we could give them the
>>> ability to sell apps which are granted access to these especially
>>> sensitive APIs.
>>>
>>> This basically creates a chain of trust from the user to the apps. The
>>> user trusts the web browser (or other OWA-runtime) developers. The
>>> browser developers trust the store owners. The store owners trust the
>>> app developers.
>>
>>> Each API which requires some sort of elevated privileges will require
>>> one of these capabilities. There can be multiple APIs which
>>> semantically have the same security implications and thus might map to
>>> the same capabilities. However it should never need to be the case
>>> that an API requires two separate capabilities. This will keep the
>>> model simpler.
>>
>> Agree but APIs then need to be split accordingly into trust groups, like
>> Camera API and Camera Control API.
>
> Note that an "API" can be defined as a single function. So we can, and
> likely will, have functions on the same object which have different
> capability levels.
>
> So for example we will likely have different capability requirements
> for DeviceStorage.add and DeviceStorage.delete
>
>>> For each of these capabilities we'll basically have 4 levels of
>>> access: "deny", "prompt default to remember", "prompt default to not
>>> remember", "allow". For the two "prompt..." ones we'll pop up UI and
>>> show the user yes/no buttons and a "remember this decision" box. The
>>> box is checked for the "prompt default to remember" level.
>>
>>> We then enhance the OWA format such that an app can list which
>>> capabilities it wants. We could possibly also allow listing which
>>> level of access it wants for each capability, but I don't think that
>>> is needed.
>>
>> Allow is quite different to prompt? If this isn't in the manifest, where is
>> it set - would a store set these for an App?
>
> As stated in the original email. The list of capabilities, and their
> access level, is handed to the .install() function when a store
> installs an app. So yes, this comes from the store.
>
>>> Another thing which came up during a recent security review is that
>>> we'll likely want to have some technical restrictions on which sites
>>> can be granted some of these capabilities. For example something as
>>> sensitive as SMS access might require that the site uses STS (strict
>>> transport security) and/or EV-certs. This also applies to the
>>> stores which we trust to hand out these capabilities.
>>
>> Chris brought up the issue of regulatory controls for functions like the
>> dialer. (e.g. phones always need to be able to make emergency calls).
>> Hence the description of the privileged store concept above, where the store
>> hosts the code of the Web App. It would likely also be a completely offline
>> web app, so might also be able to add technical controls like CSP. (e.g.
>> dialer should be restricted from making connections to the internet?)
>
> I hope I answered the parts about privileged stores above. I.e. I
> don't think we need them.
>
> I also don't see why we wouldn't let dialer apps connect to the
> internet? Especially in the scenario where the dialer app is
> preinstalled and thus fully trusted.
>
> / Jonas

lkcl luke

unread,
Mar 10, 2012, 4:41:27 PM3/10/12
to Jim Straus, dev-w...@lists.mozilla.org, ptheriault, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Sat, Mar 10, 2012 at 9:00 PM, Jim Straus <jst...@mozilla.com> wrote:
> Jonas, Paul, etc. -
>  For any app, but in particular third-party hosted apps, we could require that the manifest contain a signed cryptographic hash of the core of the application (javascript, html, css?), along with the signature of the trusted store.

that's standard practice as part of e.g. debian package distribution
management, which is why i mentioned it as an example of
infrastructure to copy (verbatim) or "draw inspiration from".

>  This hash would be validated as signed by a trusted source (similar to or the same as SSL certs), and the application's core checked against the hash.

this is also again part of the same standard practice followed by
e.g. debian for 15+ years.


>  This would require that the browser/device pre-load the given content,

that would be inadvisable. i would recommend following verbatim the
practice deployed by e.g. debian package management. package
information is stored separately.

in fact, i'd _actually_ just recommend that B2G just... adopt the
debian packaging system, period. it works, it's proven, it does the
job, you don't need to arse about, and the tools for actually creating
packages _and_ the tools for managing the uploading of packages are
all well-established.

so far, as best as people have described the requirements
(not that there has been a formal document describing what they are,
hint hint bloody well write one *grumble* :) there's nothing in the
requirements for a B2G app store that is *any* different from that
which the debian packaging and distribution system already
provides.... perfectly.

you would also have the strong advantage that OS quotes firmware
quotes upgrades would be a matter of just doing "apt-get
dist-upgrade".

reeaaall simple.

> but hopefully apps will be using the local-cache mechanism, so this should not be burdensome.  Using this, once a trusted store has validated an application, the application can't be changed, even if it is hosted by a third party.  We would have to enforce that a signed application can't download untrusted javascript (eval becomes a sensitive API?).

*eyebrows raised*. interesting. you're the first person to raise
this as an issue within this thread. normally i _would_ say "if i was
the lead developer of this project i would be advising people to add
this to the relevant section of the wiki" but have received no
response as to where that wiki page is, whether anyone's taking
responsibility for that task etc. etc. so... *shrugs*.

ok, so, meta-comment: at this point, you've switched over from
"discussion of app distribution" to "discussion of enforcement of
permissions".

>  This would allow a third party to host the apps approved by a given store.

unf? ok, right, right. we're back to "discussion of app
distribution" mode. again: debian shows the way forward: you're
referring conceptually to debian "mirrors". as the mirrors store
digitally-signed apps which were centrally signed.... yyup, you got
it.

> It would also prevent a hacked store site from distributing hacked apps (well, things  like images could still be
>  hacked, but not functionally) as long as the hacker doesn't have access to the signing system (which should clearly not be on a public machine).

correct. again, here, you really need to talk to the debian people
as to how they go about doing this. some of them have created some
severely paranoid stuff, including i believe requiring that the master
keys be cryptographically distributed across N of M people.

so yes, even if the quotes master quotes distribution site is
compromised it still doesn't matter because the apps are actually
double-signed. first signature is by the app "maintainer" (which
requires that they register _as_ a maintainer and join the debian GPG
ring-of-trust). second signature is by the FTP masters. packages
that are not signed by the maintainer are REJECTED by the ftp upload
site, period.

etc. etc.

so i really wasn't kidding when i said that this stuff has already
been solved, done, working for 15+ years and you have about 1,000
people that you can call on to ask "how do we do this, really?"


> This doesn't prevent a hacker from gaining access to information communicated back to a server, but at least makes sure that it isn't re-directed somewhere else.
>  The signing mechanism can also be used to blacklist an app.

ok, that's something that debian *doesn't* have [explicitly]. this
should be added to that non-existent wiki which is non-existently
documenting the functional analysis from which the requirements
specification needs to be written and formally reviewed.

but, *thinks*.... ok, i know: you just create an updated package with
a revision number one higher than the rejected app, and you enable
auto-update. job done. package gone.

> If Mozilla maintains a site with a list of blacklisted signatures and devices query that site, the apps could be disabled.  In whatever UI we have to view the list of apps and control their permissions, a blacklisted app would show up as blacklisted and all permissions denied.

to be honest it would be best to just create an override package and
have it replaced. if the replacement package contains "no files",
it's effectively "blacklisted".

> A user who needs the app would then explicitly re-enable it and re-add permissions (making it a pain to go through the process of looking at the permissions and enabling them), along with suitable warnings when they do so.  Probably the blacklist site should contain both the signatures to deny and an explanation of why (consumes excess resources, connects to high-cost SMS servers, leaks contacts, etc.) so that the user can make an informed choice such as to allow an app that consumes excess resources, but not allow an app that leaks personal information or incurs excessive costs.

well, if the replacement application (one higher revision number) has
those ACLs etc. then heck it's all good, right? if it's _really_
serious you put a package (with one higher revision number) which has
zero files in it (and maybe a README / notice saying why and what
happened).

this is all really good stuff, jim. but i have to reiterate: WHERE
IS IT BEING FORMALLY DOCUMENTED? please don't say "in the mailing
list".

l.

Jonas Sicking

unread,
Mar 11, 2012, 7:46:26 PM3/11/12
to Jim Straus, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, ptheriault, Mozilla B2G mailing list, cjo...@mozilla.com
On Sat, Mar 10, 2012 at 1:00 PM, Jim Straus <jst...@mozilla.com> wrote:
> Jonas, Paul, etc. -
>  For any app, but in particular third-party hosted apps, we could require that the manifest contain a signed cryptographic hash of the core of the application (javascript, html, css?), along with the signature of the trusted store.  This hash would be validated as signed by a trusted source (similar to or the same as SSL certs), and the application's core checked against the hash.  This would require that the browser/device pre-load the given content, but hopefully apps will be using the local-cache mechanism, so this should not be burdensome.  Using this, once a trusted store has validated an application, the application can't be changed, even if it is hosted by a third party.  We would have to enforce that a signed application can't download untrusted javascript (eval becomes a sensitive API?).  This would allow a third party to host the apps approved by a given store.  It would also prevent a hacked store site from distributing hacked apps (well, things like images could still be hacked, but not functionally) as long as the hacker doesn't have access to the signing system (which should clearly not be on a public machine).  This doesn't prevent a hacker from gaining access to information communicated back to a server, but at least makes sure that it isn't re-directed somewhere else.

It's not entirely clear what problem it is that you are trying to
solve? Are you trying to avoid relying on SSL for safe delivery? Or
trying to provide the ability for stores to do code verification
before the grant apps access to sensitive APIs while still letting
those apps be hosted outside of the store?

I don't see a problem with relying on SSL. It's a technology that
developers understand very well and which we should IMHO encourage
more (especially in combination with SPDY).

I'm not a big believer in code signing. It's much too easy to miss
something in a big codebase and JS certainly doesn't lend itself well
to static analysis (which is what code review really is), whether by
humans or computers. Additionally one of the big benefits of the web
is the ease of deploying new code, which would be significantly
hampered if developers had to get any new version reviewed by stores.

So I'd prefer to push back against code reviews as much as we can. If
we end up needing it, then something like what you are describing
might be a possible solution.

>  The signing mechanism can also be used to blacklist an app.  If Mozilla maintains a site with a list of blacklisted signatures and devices query that site, the apps could be disabled.  In whatever UI we have to view the list of apps and control their permissions, a blacklisted app would show up as blacklisted and all permissions denied.  A user who needs the app would then explicitly re-enable it and re-add permissions (making it a pain to go through the process of looking at the permissions and enabling them), along with suitable warnings when they do so.  Probably the blacklist site should contain both the signatures to deny and an explanation of why (consumes excess resources, connects to high-cost SMS servers, leaks contacts, etc.) so that the user can make an informed choice such as to allow an app that consumes excess resources, but not allow an app that leaks personal information or incurs excessive costs.

I think black-listing would be more effectively done by black listing
an origin (scheme+host+port), rather than by signature. Working around
a blacklist that is by signature is easy enough that I'd be worried
people would even do it by mistake.

But I like your ideas about including a description when black
listing, as well as (probably optionally) disabling all of an app's
elevated privileges. In fact, I think one of our blacklist options
should be to let an app keep running, but disable a specific elevated
privilege. So for example a game which works great but ends up sharing
high scores over SMS a bit too much should still be able to run, but
have the SMS capability disabled.
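
As a purely hypothetical sketch of that option, an origin-keyed
blacklist entry could leave the app runnable but revoke a single
capability (the entry format and names are invented for illustration):

  var blacklist = {
    "https://highscores-game.example.com": {
      action:      "disable-capability",
      capability:  "sms-send",
      description: "Shares high scores over SMS far too aggressively."
    }
  };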

/ Jonas

Jonas Sicking

unread,
Mar 11, 2012, 7:48:09 PM3/11/12
to lkcl luke, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
On Sat, Mar 10, 2012 at 1:41 PM, lkcl luke <luke.l...@gmail.com> wrote:
> this is all really good stuff, jim.  but i have to reiterate: WHERE
> IS IT BEING FORMALLY DOCUMENTED?  please don't say "in the mailing
> list".

Once we've had a bit more of a discussion here on the list, I think we
should document everything both as part of the OWA documentation, as
well as part of the general B2G documentation. But at this point I'm
not sure that there is enough consensus to start editing wikis.

/ Jonas

lkcl luke

unread,
Mar 11, 2012, 8:09:34 PM3/11/12
to Jonas Sicking, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
not being funny or anything, but consensus be damned! can you
remember everything that's going on? because i can't! there's some
superb technical input and absolutely critical ideas being discussed
here; i'm used to remembering lots of different issues but even *i*
can't remember them all.

let me try and illustrate, by doing a recap.

* someone mentioned "do we need to put ACLs on eval()"?
* someone else came up with the brilliant idea of putting words into
dialog boxes rather than "yes" or "no", i think it was you
* the original post that you wrote was something like 3,000 words
long: i actually read it all and came up with 3 separate areas which
need discussion and expansion, that might have expanded to 4
* my reply was probably another 3,000 words, i mentioned things like
SE/Linux as a potential solution *if* certain criteria were met, i
also mentioned debian distro infrastructure as a way to solve one of
the _other_ requirements.

in other words, even with only .. what.... 15 messages on this topic,
it's already getting out of hand... and this is *security* being
discussed. even in 1 week's worth of discussion i can't remember who
mentioned what.

and the reason i can't remember it is because it's all in an
uncontrolled and unstructured format that has no top-level headings
which are a crucial memory-aid.

if this was a "User Interface" discussion i'd go "ahh fuckit: not important."

... but for something as critical and fundamental as the security of
an entire operating system and its applications management
infrastructure, which, if you get it wrong will result in a repeat of
the android monoculture nightmare or the windows monoculture
nightmare, it would be irresponsible of me _not_ to say "i really
think the mozilla foundation needs to take a _little_ bit more care
over how the security of B2G is discussed, designed and structured".

l.

lkcl luke

unread,
Mar 11, 2012, 8:20:43 PM3/11/12
to Jonas Sicking, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
On Sun, Mar 11, 2012 at 11:46 PM, Jonas Sicking <jo...@sicking.cc> wrote:
> On Sat, Mar 10, 2012 at 1:00 PM, Jim Straus <jst...@mozilla.com> wrote:
>> Jonas, Paul, etc. -
>>  For any app, but in particular third-party hosted apps, we could require that the manifest contain a signed cryptographic hash of the core of the application (javascript, html, css?), along with the signature of the trusted store.  This hash would be validated as signed by a trusted source (similar to or the same as SSL certs), and the application's core checked against the hash.  This would require that the browser/device pre-load the given content, but hopefully apps will be using the local-cache mechanism, so this should not be burdensome.  Using this, once a trusted store has validated an application, the application can't be changed, even if it is hosted by a third party.  We would have to enforce that a signed application can't download untrusted javascript (eval becomes a sensitive API?).  This would allow a third party to host the apps approved by a given store.  It would also prevent a hacked store site from distributing hacked apps (well, things like images could still be hacked, but not functionally) as long as the hacker doesn't have access to the signing system (which should clearly not be on a public machine).  This doesn't prevent a hacker from gaining access to information communicated back to a server, but at least makes sure that it isn't re-directed somewhere else.
>
> It's not entirely clear what problem it is that you are trying to
> solve?

the problem he's referring to is the one that the debian project has
solved with its distribution infrastructure.

> I'm not a big believer in code signing. It's much too easy to miss

jonas. i'm a bit shocked and taken aback that you're saying this.
do you understand why the debian project has the infrastructure that
it has?

this is *important*, jonas.

if you do not understand what jim is referring to, and you are going
to be the person in charge of implementing the security of B2G please
for god's sake do some research into why debian digitally-signs all
packages.

you've come up with some absolutely fantastic ideas on the issue of
B2G security but if you are, as you say, "not a big believer in code
signing" this is a really big alarm bell.

the simple version is: the reason why debian digitally signs each
individual package is to ensure that malicious packages cannot be
installed [on an uncompromised unmodified system].

if you do *not* have such a system in place it is incredibly easy for
a malicious user to take any arbitrary package, wrap it with malicious
code and release it under the exact same name.

how do you prevent this from happening? jim has described the
problem space, very very well. it turns out that there already exists
a near-perfect solution to that problem (debian packaging). SSL is
like... waayyyy down at the bottom of the infrastructure that is
*used* by those solutions.

l.

David Barrera

unread,
Mar 11, 2012, 9:11:48 PM3/11/12
to lkcl luke, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
I've been following this mailing list and thought I should chime in on
the topic of digital signatures for apps.

On Sun, Mar 11, 2012 at 8:20 PM, lkcl luke <luke.l...@gmail.com> wrote:
>  jonas.  i'm a bit shocked and taken aback that you're saying this.
> do you understand why the debian project has the infrastructure that
> it has?
>
>  this is *important*, jonas.
>
>  if you do not understand what jim is referring to, and you are going
> to be the person in charge of implementing the security of B2G please
> for god's sake do some research into why debian digitally-signs all
> packages.
>
>  you've come up with some absolutely fantastic ideas on the issue of
> B2G security but if you are, as you say, "not a big believer in code
> signing" this is a really big alarm bell.
>
>  the simple version is: the reason why debian digitally signs each
> individual package is to ensure that malicious packages cannot be
> installed [on an uncompromised unmodified system].
>

Debian signs packages so that they can use mirrors to distribute
updates. Some mirrors might use http, some mirrors may be malicious,
some mirrors may be defective and exhibit file corruption. Digitally
signing each file allows client software to verify that the packages
originated from Debian, and that they haven't been modified in
transit. If Debian decides to package up malware (intentionally or
not), digital signatures will still verify. Signing is used for
authentication, not malware prevention.

>  if you do *not* have such a system in place it is incredibly easy for
> a malicious user to take any arbitrary package, wrap it with malicious
> code and release it under the exact same name.

It is trivial to remove a signature from a package and re-sign it with
a different key. If the system is set up to only trust certain
certificates, this attack can be detected. If you are using, for
example, self-signed certificates and a trust on first use model (a la
Android), signatures don't help too much. You would need to check that
the first time you install a package, you got it from the right
source.

>  how do you prevent this from happening?  jim has described the
> problem space, very very well.  it turns out that there already exists
> a near-perfect solution to that problem (debian packaging).  SSL is
> like... waayyyy down at the bottom of the infrastructure that is
> *used* by those solutions.

SSL works well for authentication and as Jonas points out, it is well
understood by developers and the community. If B2G apps will *only* be
distributed through a single store, then SSL can provide the
authentication you need without the overhead of a signing/verifying
infrastructure. If there is going to be more than one app store, or
developers can distribute apps on their own, then I think it is
sensible to think about digital signatures.

> l.
> _______________________________________________
> dev-b2g mailing list
> dev...@lists.mozilla.org
> https://lists.mozilla.org/listinfo/dev-b2g



--
David Barrera
Carleton Computer Security Lab
Carleton University, Ottawa, ON. Canada

Jim Straus

unread,
Mar 11, 2012, 9:51:34 PM3/11/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, ptheriault, Mozilla B2G mailing list, cjo...@mozilla.com
Hello Jonas -
The problem I'm trying to solve is knowing that the app I think I'm getting is the app that I'm getting.  SSL doesn't do anything to solve this; it just protects the privacy on the wire, not what is actually going through it.  If a store wants to assign elevated permissions of any sort, I want assurance that the app they are providing the elevated permissions to is the app that I'm getting.  It doesn't matter if the store is validating the app by hand inspecting code, doing some automated inspection, contractual obligations, the word of the developer, whatever.  If they are asserting that the application is to be trusted with any elevated permissions, I don't want to get something else.  Code signing doesn't tell me what an app does, just that the app hasn't been modified - whether through a developer changing their application, a hacker breaking into a site, or anything else.  If the developer does want to update their app, I want the store to re-assert that the new app should be allowed to have whatever permissions it is granting, not just the developer doing it unilaterally.  I suspect that stores will compete partially on price and breadth of offerings, but also on their assurances that the apps they are providing are safe.
Actually, in thinking about it, I think that stores that sell apps that come from a third party server are more secure, not less, as a hacker would have to obtain the ability to sign an app and also break into the third party server to effect a change.  And they would have to hack into another server to affect a second app.  If a store hosts everything themselves, hacking that single server and getting the ability to sign apps would expose lots of apps to being hacked.
Black listing based on scheme/host/port is probably not sufficient for an organization that distributes more than one application. This was raised in a different discussion related to web apps in general. But even if it was, we may want to blacklist a particular version of an application, not the application in general. The signature provides a mechanism for this.
I agree that removing permissions for some application infractions might be a good idea.  The actual semantics of what the blacklist can do to a particular app can be discussed and enumerated.  But there will definitely be apps that we want to completely disable (e.g. if we find an app is hijacking information, I don't want it running at all.)
-Jim Straus

On Mar 11, 2012, at 7:46 PM, Jonas Sicking wrote:

> On Sat, Mar 10, 2012 at 1:00 PM, Jim Straus <jst...@mozilla.com> wrote:
>> Jonas, Paul, etc. -
>> For any app, but in particular third-party hosted apps, we could require that the manifest contain a signed cryptographic hash of the core of the application (javascript, html, css?), along with the signature of the trusted store. This hash would be validated as signed by a trusted source (similar to or the same as SSL certs), and the application's core checked against the hash. This would require that the browser/device pre-load the given content, but hopefully apps will be using the local-cache mechanism, so this should not be burdensome. Using this, once a trusted store has validated an application, the application can't be changed, even if it is hosted by a third party. We would have to enforce that a signed application can't download untrusted javascript (eval becomes a sensitive API?). This would allow a third party to host the apps approved by a given store. It would also prevent a hacked store site from distributing hacked apps (well, things like images could still be hacked, but not functionally) as long as the hacker doesn't have access to the signing system (which should clearly not be on a public machine). This doesn't prevent a hacker from gaining access to information communicated back to a server, but at least makes sure that it isn't re-directed somewhere else.
>
> It's not entirely clear what problem it is that you are trying to
> solve? Are you trying to avoid relying on SSL for safe delivery? Or
> trying to provide the ability for stores to do code verification
> before the grant apps access to sensitive APIs while still letting
> those apps be hosted outside of the store?
>
> I don't see a problem with relying on SSL. It's a technology that
> developers understand very well and which we should IMHO encourage
> more (especially in combination with SPDY).
>
> I'm not a big believer in code signing. It's much too easy to miss

Jim Straus

unread,
Mar 11, 2012, 10:09:19 PM3/11/12
to David Barrera, dev-w...@lists.mozilla.org, ptheriault, lkcl luke, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Hello David -
I agree that signing doesn't prevent malware.  But it does allow for people to assert that a particular package/app has been approved by both the developer and the issuer/store.  How they go about assuring there isn't malware is up to them and may involve a variety of mechanisms (contracts, people, automated tools, etc.)
I suspect that B2G won't want to accept self-signed certificates.  It would not lead to a robust eco-system and I also expect that the carriers would not accept that (or maybe require it to be removed if they don't notice it right away).  It has been asserted that app stores would have some kind of relationship with Mozilla, so we may not want this either.  And beyond that, signing allows for distinguishing versions.  Related to that, I, as a consumer, would like to know when an app has changed.  I don't want my apps changing on me willy nilly.  There are definitely apps that I have purchased for my iPhone that I specifically did not upgrade when an upgrade became available, as I was concerned that functionality had been removed.  It did turn out that in later upgrades the functionality came back, but I would want to allow the user to control that, not the developer unilaterally.

Just for fun, see http://www.youtube.com/watch?feature=player_embedded&v=k4EbCkotKPU  It has some relevance to the discussion.
-Jim Straus

Adrienne Porter Felt

unread,
Mar 11, 2012, 10:31:37 PM3/11/12
to Lucas Adamski, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
>
> > Each API which requires some sort of elevated privileges will require
> > one of these capabilities. There can be multiple APIs which
> > semantically have the same security implications and thus might map to
> > the same capabilities. However it should never need to be the case
> > that an API requires two separate capabilities. This will keep the
> > model simpler.
>

This constraint might prove rather difficult to stick to. Android has a
significant number of API calls with multiple permission requirements.
This seems to be a natural side effect of transitivity + a large API:
given methods A and B that are protected by different permissions, you will
eventually create a method C that invokes both A and B. A natural way that
this occurs is that you have separate "read" and "write" permissions, and
then you create a new action that involves both read and write actions.
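
A tiny sketch of how that arises in practice (the capability names in
the comments are made up):

  var photoStore = {};

  function readPhoto(name) {      // guarded by a "photos-read" capability
    return photoStore[name];
  }
  function addPhoto(name, data) { // guarded by a "photos-add" capability
    photoStore[name] = data;
  }
  function copyPhoto(src, dst) {  // transitively needs *both* capabilities
    addPhoto(dst, readPhoto(src));
  }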


> > Another thing which came up during a recent security review is that
> > we'll likely want to have some technical restrictions on which sites
> > can be granted some of these capabilities. For example something as
> > sensitive as SMS access might require that the site uses STS (strict
> > transport security) and/or EV-certs. This also applies to the
> > stores which we trust to hand out these capabilities.
>

I strongly second restricting certain capabilities to all-HTTPS websites.
I have no reason to believe that website developers are much better than
Chrome extension developers, and Chrome extension developers use HTTP
resources in insecure ways all the time regardless of how privileged the
extensions are.

On another note:

I think a permission test suite is crucial for the long-term success of the
API. Every time someone defines a new API, he/she should have to build an
accompanying permission check test. I don't know what Mozilla's code
review model is but perhaps this process could be incorporated into it.
This way, the permission policy is always known and it is always possible
to verify that it has been implemented correctly. Otherwise, you end up
with Android's complex-yet-undocumented permission model.
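
Something along these lines, say - the harness helpers
(withCapabilities, assertTrue) and the exact API call are placeholders
invented for the example:

  // Hypothetical permission-check test. Assumes a harness that can run
  // a function with a chosen set of granted capabilities.
  function testSmsSendRequiresCapability() {
    withCapabilities([], function () {
      var threw = false;
      try {
        navigator.mozSms.send("+15551234567", "hi");
      } catch (e) {
        threw = true; // expected: "sms-send" was not granted
      }
      assertTrue(threw, "send() must fail without the sms capability");
    });
    withCapabilities(["sms-send"], function () {
      navigator.mozSms.send("+15551234567", "hi"); // should now succeed
    });
  }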

lkcl luke

unread,
Mar 12, 2012, 12:08:24 AM3/12/12
to David Barrera, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
no it's more than that. it's good that you've described this, but
it's not the whole picture.

> It is trivial to remove a signature from a package and re-sign it with
> a different key.

yes but you've missed the point. it is *not* trivial to sign it with
the ftp master's private key, is it?

not only would you have to compromise the package maintainer's
private key, but also you would need to compromise the ftp master's
private key, which is actually distributed across N out of M different
people. also, those people are absolutely paranoid about where they
keep it.

there is actually a weaker link in the chain: you compromise the
person's machine and you disable the public key checking. or, you add
an extra line to /etc/apt/sources.list _and_ add a keychain package
which contains the "illegitimate" public key from the malwared site.

but that requires that you actually compromise the person's machine, first!

bottom line: the setup is a bit more sophisticated than it first
appears, and it definitely fulfils the requirements raised by ....
by... arg, going back through a few messages.... jim! yes.
definitely solves the issues raised by jim.

l.

lkcl luke

unread,
Mar 12, 2012, 12:19:54 AM3/12/12
to David Barrera, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Mon, Mar 12, 2012 at 1:11 AM, David Barrera
<dbar...@ccsl.carleton.ca> wrote:

> SSL works well for authentication and as Jonas points out, it is well
> understood by developers and the community. If B2G apps will *only* be

ah... actually.... :)

it was only when the BBC started using SSL as a means to terminate
the possibility of viewing iplayer videos that i understood that it
was even _possible_ to use SSL for that purpose.

up until that point i always understood SSL to be nothing more than a
means to guarantee end-to-end privacy.

if someone with 30 years experience of working with computers can be
ignorant (me!) of all the uses to which SSL can be put, it's probably
best *not* to assume that it's well-understood by "developers and the
community", eh? :)

if there was a wiki page which was being utilised to collate all this
information i would, of course be inviting people to add that
information such that its impact can be properly assessed.

even i forgot that SSL can be used for private-key/public-key
authentication, so i made an error earlier today in the replies to
jonas that i will have to reassess.

gaah. complicated.

l.

Jonas Sicking

unread,
Mar 12, 2012, 2:01:49 AM3/12/12
to Adrienne Porter Felt, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Lucas Adamski, Mozilla B2G mailing list
On Sun, Mar 11, 2012 at 7:31 PM, Adrienne Porter Felt <a...@berkeley.edu> wrote:
>> > Each API which requires some sort of elevated privileges will require
>> > one of these capabilities. There can be multiple APIs which
>> > semantically have the same security implications and thus might map to
>> > the same capabilities. However it should never need to be the case
>> > that an API requires two separate capabilities. This will keep the
>> > model simpler.
>
> This constraint might prove rather difficult to stick to.  Android has a
> significant number of API calls with multiple permission requirements.  This
> seems to be a natural side effect of transitivity + a large API: given
> methods A and B that are protected by different permissions, you will
> eventually create a method C that invokes both A and B.  A natural way that
> this occurs is that you have separate "read" and "write" permissions, and
> then you create a new action that involves both read and write actions.

So far I don't think we've run into this need. I'd be curious to know
where Android did.

One example where we might be pushing the boundaries is the
Device Storage API [1], where we'll have different levels of security
for (a possible capability mapping is sketched just below):

1. Adding new files
2. Reading existing files
3. Full read/write access

[1] https://wiki.mozilla.org/WebAPI/DeviceStorageAPI
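
Concretely, that per-function mapping might come out as follows (the
capability names here are invented, not actual API requirements):

  // Hypothetical: which capability each DeviceStorage function checks.
  var deviceStorageCapabilities = {
    "DeviceStorage.add":    "device-storage-add",       // 1. adding new files
    "DeviceStorage.get":    "device-storage-read",      // 2. reading existing files
    "DeviceStorage.delete": "device-storage-readwrite"  // 3. full read/write
  };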

>> > Another thing which came up during a recent security review is that
>> > we'll likely want to have some technical restrictions on which sites
>> > can be granted some of these capabilities. For example something as
>> > sensitive as SMS access might require that the site uses STS (strict
>> > transport security) and/or EV-certs. This also applies to the
>> > stores which we trust to hand out these capabilities.
>
>
> I strongly second restricting certain capabilities to all-HTTPS websites.  I
> have no reason to believe that website developers are much better than
> Chrome extension developers, and Chrome extension developers use HTTP
> resources in insecure ways all the time regardless of how privileged the
> extensions are.

Sounds good.

> On another note:
>
> I think a permission test suite is crucial for the long-term success of the
> API.  Every time someone defines a new API, he/she should have to build an
> accompanying permission check test.  I don't know what Mozilla's code review
> model is but perhaps this process could be incorporated into it.  This way,
> the permission policy is always known and it is always possible to verify
> that it has been implemented correctly.  Otherwise, you end up with
> Android's complex-yet-undocumented permission model.

Agreed. Mozilla has as a policy that everything that we check in has
tests. Having checks for security aspects is especially important
though I don't think that's spelled out explicitly in the policy.

The security policy will definitely be front-and-center in every API
that we design. I.e. we should design with security policy in mind
from the ground up. So far we haven't quite been able to do that given
that we haven't had the necessary vocabulary and infrastructure in
place. The aim of this thread is to fix that.

/ Jonas

Ian Melven

unread,
Mar 12, 2012, 2:34:01 PM3/12/12
to dev-se...@lists.mozilla.org, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org

Hi,

another note about 'always prompting' for sensitive/private APIs:

while i personally agree that these should always prompt, partially since
prompting when the permission is about to be used helps users understand
how and when it's going to be used, there will be pushback against
this - it's not how other devices operate, which helps the case of the
people who feel differently.
(even Firefox for Android doesn't always prompt for geolocation every time, see
https://bugzilla.mozilla.org/show_bug.cgi?id=682455)

there are definitely going to be users who want to permanently grant
sensitive permissions to apps and who will find being prompted every time
annoying, especially when other platforms don't do this. i particularly
dislike systems where a grant persists for doing something X times or, for
example, for some period of time (like iOS with geolocation: granting it
allows your location to be accessed by that domain for 24 hrs, with a status
bar indicator to show you something has access to your location - usually
lacking information about exactly what, beyond 'Safari', in my personal
experience) - because i feel that this has the user implicitly opting into
sharing their data via a contract that wasn't explicitly communicated to
them (I would be more alright with this if the user was informed what was
decided on their behalf and could possibly revoke it via an additional
dialog, perhaps).

I understand the viewpoint that 'users don't read dialogs and always click through them'
but Adrienne's research on Android has proved this isn't even true for those giant
lists of permissions at install time, and i've yet to hear a better suggestion
other than just implicitly making the decision for the user, which means it's going to be
wrong at least some amount of the time.

additionally thanks Chris Lee and others for stating the security model will be
documented on a wiki at some point. I hear many questions and concerns about
'what will the security model for B2G be ?' both from inside and outside the Mozilla
project, so it's great to see this starting to happen - i agree with other posters
that it's critical to get something up soon, both so people can see that security
is a first-class concern and to start debating/finding flaws in the model - just
like crypto, security models absolutely must have outside review and the more the better :)

thanks,
ian


----- Original Message -----
From: "Ian Melven" <ime...@mozilla.com>
To: dev-se...@lists.mozilla.org
Sent: Monday, March 12, 2012 11:14:50 AM
Subject: Re: [b2g] OpenWebApps/B2G Security model


Hi,

along similar lines, since the current discussion seems to rely heavily on SSL,
has thought been given to what root certs a B2G device will ship with ?

will it be the same as the Firefox root store ? personally, i think this
is not a good idea - there are many certs in, for example, the desktop
SSL store that i wouldn't want to trust for the purpose of granting
privileges to apps, or even necessarily authenticating a connection to a
store.

i assume the vendor/carrier for the device may also want to customize
the root cert store for their devices.

the current cert pinning proposal (https://wiki.mozilla.org/Security/Features/CA_pinning_functionality)
assuages some of my concerns - it would be great if the device vendor would pin the cert of their
own store (and Mozilla did the same for the Mozilla apps store) to
avoid MITM-with-valid-SSL-cert attacks on installs or permission granting.
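
as a rough illustration of the install-time check i have in mind (the
names and fingerprints here are invented, and this is not the actual
CA-pinning feature):

    // sketch of an install-time pin check
    const PINNED_STORE_FINGERPRINTS = new Set<string>([
      // SHA-256 fingerprint(s) of the store's expected cert(s)
      "AB:CD:EF:...", // placeholder
    ]);

    function checkStorePin(observedFingerprint: string): void {
      if (!PINNED_STORE_FINGERPRINTS.has(observedFingerprint)) {
        // a valid-but-unexpected cert (e.g. a MITM using some other
        // trusted CA) fails here even though the SSL handshake succeeded
        throw new Error("store certificate does not match pin");
      }
    }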

just some thoughts. personally i would prefer code signing (it helps secure upgrades,
if we follow Android's model where only an app signed with the same cert can upgrade an installed
app) rather than totally relying on SSL - Firefox updates are now signed in FF12+ as well
as being verified against a hash downloaded over SSL, for defense in depth.

thanks
ian



----- Original Message -----
From: "Kevin Chadwick" <ma1l...@yahoo.co.uk>
To: dev-se...@lists.mozilla.org
Sent: Monday, March 12, 2012 3:54:53 AM
Subject: Re: [b2g] OpenWebApps/B2G Security model

On Sun, 11 Mar 2012 21:11:48 -0400
David Barrera wrote:

> SSL works well for authentication and as Jonas points out, it is well
> understood by developers and the community.

Actually the whole security community is looking for ways to fix SSL.
Google Chrome has announced a lookup website. SSL lookup via DNS, similar
to DNSSEC, has an RFC, but record size was a problem (1500-bit RSA limit).
ECDSA keys, now given the secure-to-use go-ahead and being more
secure at a smaller key size (256-bit), make that viable, but it
hasn't picked up traction yet.

That may be a good, well-tested and RFC'd method, similar to DKIM for
email, in having a DNS record to validate that an app is from a domain. It's
easy, but it may be more difficult for a website owner to work out how to add TXT
records (dnsmadeeasy.com etc.) than to just put up a website via
Joomla.
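
As a rough sketch of the kind of lookup I mean, assuming Node's
dns/promises module and an invented record name/format:

    // Sketch: DKIM-style domain validation for an app's signing key
    import { resolveTxt } from "dns/promises";

    async function appKeyMatchesDomain(
      domain: string,
      appKeyFingerprint: string
    ): Promise<boolean> {
      // e.g. a TXT record at _appsig.example.com: "v=app1; fp=<fingerprint>"
      const records = await resolveTxt(`_appsig.${domain}`);
      return records
        .map((chunks) => chunks.join(""))
        .some((r) => r.includes(`fp=${appKeyFingerprint}`));
    }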

For closed-source apps like many on Android, private signing keys are
the only way to go. All you have to rely on is your trust in the
developer. As for the permissions model: it's good and useful and raises the
bar for an attacker, especially when users demand to install apps willy
nilly, but it's nothing to base real trust on, especially with a slow update
mechanism to battle exploits, like on phones.

For Debian you are trusting that their build machines are secure and
the download of the source code is secure. They take steps to
verify the developers' keys themselves etc., and possibly review code.
This eliminates the risk of an author gaining trust through releasing
source code but then offering a trojaned binary.

A distro's build infrastructure also makes things easier for the user,
especially when using the primary and more highly reviewed repos, but it
is almost a certainty that with the sheer number of packages (> 16GB) in
all of the official Debian repos there will be some
malware/rootkit in there somewhere. I know I've been too liberal before
and found a DDoS trying to go out from a Debian box.

p.s. Of course, I'm not saying don't use SSL for transport
security or domain verification - it is the easiest option for the
masses - but also bear in mind it is likely to be replaced. That's been
said for years, but the traction against it does seem to be increasing.

All these CA problems sure make the loudness of browsers' "don't
connect" warnings when a site has a self-signed key rather laughable/annoying.

Ian Melven

unread,
Mar 12, 2012, 2:34:58 PM3/12/12
to dev...@lists.mozilla.org, dev-w...@lists.mozilla.org

adding b2g and webapps lists

ian

Jim Straus

unread,
Mar 13, 2012, 4:09:06 PM3/13/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, phill...@gmail.com, mozilla.d...@googlegroups.com, Mozilla B2G mailing list
Hello all -
I've been sketching out an implementation of permissions. I've laid out some code framework, but wanted to throw this out for validation. Assumptions: B2G/Firefox will have separate processes for each app/tab. This is already declared to be true (but not implemented) for B2G. This proposal doesn't require IPC, but I believe we need to support it. Also note that this should be able to replace the existing Permission and PermissionManager in Gecko.

The basic idea is that there is a separate, single Permission Manager process that is the master owner of the permissions. It would have functions for:
• testing permissions (return whether permission is granted/denied, no user query)
• adding/removing permissions
• enumerate the permissions
• query permissions (same as testing permissions, but can query the user if the state of a permission is not known; maybe take a reason to display to the user if UI is needed)
• support observers of permission changes.

In general, the functions work with uri signatures (domain, partial URL, whatever is decided for the WebApps) and permission structures. The permission structure is an extension of the existing permission structure (a rough sketch follows the list):
• type
• uri signature
• value
• source (user, manifest, system)
• expiration type
• expiration time
• allow message (for the UI buttons, just an ID into some string list for space and localization consideration).
• deny message
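
A rough sketch of the above, with purely illustrative names (not the
final Gecko interfaces):

    // Sketch only - illustrative names, not the final Gecko interfaces.
    type PermissionValue = "allow" | "deny" | "prompt" | "unknown";

    interface Permission {
      type: string;             // e.g. "sms", "geolocation"
      uriSignature: string;     // domain / partial URL, format TBD
      value: PermissionValue;
      source: "user" | "manifest" | "system";
      expirationType: "never" | "session" | "time";
      expirationTime?: number;
      allowMessageId?: number;  // ID into a localized string list
      denyMessageId?: number;
    }

    interface PermissionManager {
      test(uriSignature: string, type: string): PermissionValue; // no UI
      add(perm: Permission): void;
      remove(uriSignature: string, type: string): void;
      enumerate(): Permission[];
      // may prompt the user when the state is unknown
      query(uriSignature: string, type: string,
            reasonId?: number): Promise<PermissionValue>;
      addObserver(cb: (changed: Permission) => void): void;
    }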

Each process has a similar proxy object that forwards the requests to the master process. The proxies can also implement a cache to speed up responses, and if so, observe the master so they can keep their caches coherent. These separate processes should also be responsible for populating the uri signature, so an app can't spoof it; or, if the master process can determine the uri signature from the call implicitly, that would be even better. I'm also assuming that these slave processes would block on a query function. If an app really wants to continue running until a user grants permission (when it is needed), it can do the query in a separate thread.
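
And a sketch of the per-process proxy with a coherent cache, reusing
the illustrative interfaces above (the real forwarding would be IPC):

    // Sketch of the per-process proxy: it forwards to the master and
    // keeps a cache kept coherent by observing permission changes.
    class PermissionProxy {
      private cache = new Map<string, PermissionValue>();

      constructor(private master: PermissionManager) {
        master.addObserver((changed) => {
          this.cache.delete(`${changed.uriSignature}|${changed.type}`);
        });
      }

      test(uriSignature: string, type: string): PermissionValue {
        const key = `${uriSignature}|${type}`;
        let v = this.cache.get(key);
        if (v === undefined) {
          v = this.master.test(uriSignature, type); // IPC in reality
          this.cache.set(key, v);
        }
        return v;
      }
    }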

I'm also assuming that the UI for querying the user is something that runs outside the normal app/web arena so that it can't easily be subverted and also that it is NOT replaceable for the same reason.

Access to controlled resources would do a permission check, to see if access is allowed. If the controlled resource is in another process, the caching mechanism would work for it as well (it would cache based on uri signature, whereas UI processes would cache based on permissions).

I've been considering whether or not the master process could be partially replaced with something as simple as a sqlite3 database. The database would live in a directory with different ownership than the rest of the UI, so that normal usage would have it opened read-only. The proxy could directly query the database for testing permissions and enumerate the database without having to call the permissions API. The master process would still be used to update the permissions, signal that something changed, and invoke the UI.
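
For the read-only database variant, the proxy's test path could be
roughly this (assuming the better-sqlite3 binding and an invented
schema, purely for illustration):

    // Sketch: read-only lookups straight from the database, bypassing
    // the master process. Schema, path and binding are illustrative.
    import Database from "better-sqlite3";

    const db = new Database("/permissions/perms.sqlite", { readonly: true });
    const stmt = db.prepare(
      "SELECT value FROM permissions WHERE uri_signature = ? AND type = ?"
    );

    function testPermission(uriSignature: string, type: string): string {
      const row = stmt.get(uriSignature, type) as { value: string } | undefined;
      return row ? row.value : "unknown";
    }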
-Jim Straus

Lucas Adamski

unread,
Mar 14, 2012, 2:16:10 PM3/14/12
to Jonas Sicking, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, lkcl luke, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
I have to agree with my namesake here. This is too complicated to design on a mailing list without having some sort of reference
point. It also makes it unrealistic to add new participants without the cliche "go read every post about this subject
first". If nothing else we should have a wiki that contains, at a high level:

a) general goals and requirements
b) open items
c) proposed design
d) threat model

These are living documents that should be evolving along with this discussion. Thanks!
Lucas.

On 3/11/2012 4:48 PM, Jonas Sicking wrote:
> On Sat, Mar 10, 2012 at 1:41 PM, lkcl luke <luke.l...@gmail.com> wrote:
>> this is all really good stuff, jim. but i have to reiterate: WHERE
>> IS IT BEING FORMALLY DOCUMENTED? please don't say "in the mailing
>> list".
> Once we've had a bit more of a discussion here on the list, I think we
> should document everything both as part of the OWA documentation, as
> well as part of the general B2G documentation. But at this point I'm
> not sure that there is enough consensus to start editing wikis.
>
> / Jonas

Lucas Adamski

unread,
Mar 14, 2012, 2:16:59 PM3/14/12
to Jonas Sicking, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, lkcl luke, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
P.S. If that sounds a lot like a Feature Page, that's probably because it wouldn't be a bad starting point.

Justin Lebar

unread,
Mar 14, 2012, 2:30:41 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, lkcl luke, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Wed, Mar 14, 2012 at 2:16 PM, Lucas Adamski <lada...@mozilla.com> wrote:
> P.S.  If that sounds a lot like a Feature Page, that's probably because it wouldn't be a bad starting point.
>  Lucas.

IME, developers have been traditionally hesitant to design using
feature pages, because there's no means for collaboration and
discussion in a feature page, save mediawiki talk pages, which nobody
uses (and for good reason).

We're currently exploring an idea, collaborating, discussing,
debating. All of this is IMO poorly suited to Mozilla's feature page
system.

> On 3/11/2012 4:48 PM, Jonas Sicking wrote:
>> On Sat, Mar 10, 2012 at 1:41 PM, lkcl luke <luke.l...@gmail.com> wrote:
>>> this is all really good stuff, jim.  but i have to reiterate: WHERE
>>> IS IT BEING FORMALLY DOCUMENTED?  please don't say "in the mailing
>>> list".
>> Once we've had a bit more of a discussion here on the list, I think we
>> should document everything both as part of the OWA documentation, as
>> well as part of the general B2G documentation. But at this point I'm
>> not sure that there is enough consensus to start editing wikis.
>>
>> / Jonas

Vivien

unread,
Mar 14, 2012, 4:03:25 PM3/14/12
to Jim Straus, dev-w...@lists.mozilla.org, phill...@gmail.com, Jonas Sicking, dev-se...@lists.mozilla.org, mozilla.d...@googlegroups.com, Mozilla B2G mailing list
On 13/03/2012 21:09, Jim Straus wrote:
> Hello all -
> I've been sketching out an implementation of permissions. I've laid out some code framework, but wanted to through tis out for validation. Assumptions: that B2G/Firefox will have separate processes for each app/tab. This is already declared to be true (but not implemented) for B2G. This proposal doesn't required ipc , but I believe we need to support it. Also note that this should be able to replace the existing Permission and PermissionManager in gecko.

Hey,

Is there a bug # or a URL to a WIP to look at?

Lucas Adamski

unread,
Mar 14, 2012, 4:20:49 PM3/14/12
to Justin Lebar, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, lkcl luke, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
I don't think the feature page should be the place for discussion, but it should track the discussion. As issues are
raised, discussed & resolved, those would be mirrored in the feature page. Otherwise, important issues will be lost,
and new participants won't be able to synthesize a design from a huge fragmented thread.
Lucas.

Lucas Adamski

unread,
Mar 14, 2012, 5:35:00 PM3/14/12
to David Barrera, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, lkcl luke, Jonas Sicking, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
My understanding is that there will be multiple app stores. But code signing has another benefit: reducing systemic risk.

This assumes code signing and sane key management, but let's say there's a very popular app with significant privileges.
To compromise a large number of people, you'd need to:
a) compromise the site hosting the app
b) compromise the key signing the app (assuming you require app updates to be signed with the same key)
c) compromise or trigger the update mechanism for the app
d) wait for updates to trickle out

This is a tedious process that slows down exploitation, and that's no fun.

If app authentication relies only on SSL, then you just need to pop a web server (which isn't hard, really). Everyone
using the app gets owned simultaneously.
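
The same-key rule in b) is cheap to enforce; roughly (illustrative
names only):

    // Sketch of the Android-style rule: an update installs only if it
    // is signed by the same key as the app it replaces. Names invented.
    interface SignedPackage {
      signerKeyFingerprint: string; // taken from a verified signature
    }

    function mayUpdate(installed: SignedPackage,
                       update: SignedPackage): boolean {
      // a compromised hosting server alone can't push code: the
      // attacker also needs the original signing key (step b above)
      return installed.signerKeyFingerprint === update.signerKeyFingerprint;
    }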
Lucas.

On 3/11/2012 6:11 PM, David Barrera wrote:
>> how do you prevent this from happening? jim has described the
>> problem space, very very well. it turns out that there already exists
>> a near-perfect solution to that problem (debian packaging). SSL is
>> like... waayyyy down at the bottom of the infrastructure that is
>> *used* by those solutions.
> SSL works well for authentication and as Jonas points out, it is well
> understood by developers and the community. If B2G apps will *only* be
> distributed through a single store, then SSL can provide the
> authentication you need without the overhead of a signing/verifying
> infrastructure. If there is going to be more than one app store, or
> developers can distribute apps on their own, then I think it is
> sensible to think about digital signatures.
>
>> l.

lkcl luke

unread,
Mar 14, 2012, 5:35:51 PM3/14/12
to Justin Lebar, dev-w...@lists.mozilla.org, ptheriault, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Wed, Mar 14, 2012 at 6:30 PM, Justin Lebar <justin...@gmail.com> wrote:
> On Wed, Mar 14, 2012 at 2:16 PM, Lucas Adamski <lada...@mozilla.com> wrote:
>> P.S.  If that sounds a lot like a Feature Page, that's probably because it wouldn't be a bad starting point.
>>  Lucas.
>
> IME, developers have been traditionally hesitant to design using
> feature pages, because there's no means for collaboration and
> discussion in a feature page, save mediawiki talk pages, which nobody
> uses (and for good reason).

tough. make it work. this is too important and fundamental an area
to mess about with.

if this was just a lovely bit of GUI code being developed, i would not
care one bit how it was developed. you would not be hearing from me,
and i would not be spending my personal time and money writing a
single word.



... but it's not: it's *security* being discussed. and that means
that you need to follow best Software Engineering practices [as
applicable and relevant in a free software collaborative context].

if people don't "want" to make it work (time, whatever), then you -
mozilla foundation dash B2G team members - compensate accordingly.
allocate at least one person who has the time and inclination to
follow in near-real-time to keep it up-to-date and to prompt everyone
to continuously refer to relevant sections.

yes by all means use the mailing list if that works best for you
personally but the online document *MUST* be the "authoritative"
document that is both living and also reflects as close as possible
and as soon as practical the input from all contributors.

and the process begins with informal discussion, morphs into a "functional
analysis", from which a "requirements specification" is written, and
_then_ you start coding.


> We're currently exploring an idea, collaborating, discussing,
> debating.

... around a vast number of technical areas that cover (at the very least):

* app distribution
* kernel security
* OS security
* application security
* network security of app distribution (evaluation thereof)
* network security of apps once they're _running_ (evaluation thereof)
* digital signing of apps
* public key management
* authentication
* mirroring
* UI design for permissions
* app security APIs
* developer documentation for app security design

in other words there is a need to cover quite literally everything
from how the app is written, right through its distribution, to
minimising the security risks of untrusted apps, to enforcing
permissions once running, to presenting the user with meaningful
choices that impress upon them the importance of their decisions.

as you can see from the above *small* list (which could potentially
be used as the basis for the headings of the security wiki page), this
project is very ambitious, very large, and it is absolutely critical
to get it right.

>  All of this is IMO poorly suited to Mozilla's feature page system.

well... tough. find something that works, or you _make_ it work.
this is too important. there's far too much for people to cover here.

so, may i respectfully suggest some rules?

that:

a) people take the time to refer to the wiki as the authoritative document
b) people write "i've updated the wiki, area xyz, reasoning was as
follows, please help review"
c) if someone _doesn't_ refer to the wiki, that someone take
responsibility to refer them to it.
d) if someone makes a valuable contribution but _doesn't_ put it on
the wiki, that someone take responsibility to gently prod them to do
it or if they don't respond or are too busy, or it's easier, just do
it for them (and of course say "i've updated the wiki").

l.

Jim Straus

unread,
Mar 14, 2012, 5:37:02 PM3/14/12
to Vivien, dev-w...@lists.mozilla.org, phill...@gmail.com, Jonas Sicking, dev-se...@lists.mozilla.org, mozilla.d...@googlegroups.com, Mozilla B2G mailing list
Yup, bug 707625
-Jim Straus

lkcl luke

unread,
Mar 14, 2012, 5:43:16 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, Justin Lebar, ptheriault, Jim Straus, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Wed, Mar 14, 2012 at 8:20 PM, Lucas Adamski <lada...@mozilla.com> wrote:
> I don't think the feature page should be the place for discussion, but it should track the discussion.  As issues are
> raised, discussed & resolved, those would be mirrored in the feature page.  Otherwise, important issues will be lost,
> and new participants won't be able to synthesize a design from a huge fragmented thread.

yes.

i have already been criticised for "repeating myself", which is a
clear sign that people are in fact overloaded and simply haven't got
time to read 10,000 (and increasing) words before contributing.

the wiki page should be subdivided into sections, and should
represent and reflect the "latest up-to-speed decisions and ideas".

it is absolute madness to expect even the actual people who have
been following this right from the beginning to be able to hold
everything in their heads.

if this could be subdivided into separate areas, and the tasks
allocated to different working groups, it would be fine.

you could even split it by mailing list.

.... but you can't do that here, because this is _security_. there
needs to be a clear understanding of the *entire* process (see
previous message with over 12 sub-headings) and how each part
contributes to the overall security.

remember: this could turn out to be as big as android and the iphone.
hundreds of millions of installations. you get that wrong, the
consequences don't bear thinking about.

it's not like PCs and web browsers. this is _phones_. the market is
potentially 10x to 100x bigger. worst case is: they'll all come
knocking on the mozilla foundation's door, some of them potentially
with class action lawsuits.

worse than that is the situation where you suddenly get 10 to 100x
the number of bugreports on bugzilla.

l.

Fabrice Desré

unread,
Mar 14, 2012, 5:50:01 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, lkcl luke, Mozilla B2G mailing list, ptheriault, cjo...@mozilla.com, Jonas Sicking
Lucas,

Are you considering signing the html/js/css/other-content from apps?

I can understand the nice properties that would give us, but that looks
extremely impractical in real life. Web sites change all the time, which
is not the case of native apps distributed from a store.

Fabrice

On 03/14/2012 02:35 PM, Lucas Adamski wrote:
> My understanding is that there will be multiple app stores. But code signing has another benefit: reducing systemic risk.
>
> This assumes code signing and sane key management, but let's say there's a very popular app with significant privileges.
> To compromise a large number of people, you'd need to:
> a) compromise the site hosting the app
> b) compromise the key signing the app (assuming you require app updates to be signed with the same key)
> c) compromise or trigger the update mechanism for the app
> d) wait for updates to trickle out
>
> This is a tedious process that slows down exploitation, and that's no fun.
>
> If app authentication relies only on SSL, then you just need to pop a web server (which isn't hard, really). Everyone
> using the app gets owned simultaneously.
> Lucas.

--
Fabrice Desré
b2g Team
Mozilla Corporation

lkcl luke

unread,
Mar 14, 2012, 5:51:23 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, Jonas Sicking, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Mozilla B2G mailing list
On Wed, Mar 14, 2012 at 9:35 PM, Lucas Adamski <lada...@mozilla.com> wrote:
> My understanding is that there will be multiple app stores.  But code signing has another benefit: reducing systemic risk.
>
> This assumes code signing and sane key management, but let's say there's a very popular app with significant privileges.
> To compromise a large number of people, you'd need to:
> a) compromise the site hosting the app
> b) compromise the key signing the app (assuming you require app updates to be signed with the same key)
> c) compromise or trigger the update mechanism for the app
> d) wait for updates to trickle out
>
> This is a tedious process that slows down exploitation, and that's no fun.

yyyup. it's why debian's package management has never been
compromised. it's actually worse than tedious: success actually
requires physical compromise techniques that are normally the
preserve of mafia, cartels, military and intelligence agencies.

( ok that's assuming that the app source code isn't hostile, for
whatever reason .... )

> If app authentication relies only on SSL, then you just need to pop a web server (which isn't hard, really).  Everyone
> using the app gets owned simultaneously.

there's another disadvantage of using SSL PKI authentication (on
HTTPS), which is that each mirror now needs a *separate* key, or you
need to distribute that key out across multiple sites.

so you either have the disadvantage that you have to weaken the
security of the SSL private key (oops)

or you have the disadvantage that you get criticised for "becoming
another apple" by "locking people out of the app system"

or you have the disadvantage that if there are a large number of
mirrors you have to spend vast amounts of time checking that the
mirror admin is competent.

no, the debian system of individually signing the apps (twice) is
much easier. and then it doesn't matter how the app packages are
distributed: you can use rsync, ftp, peer-to-peer networking, carrier
pigeons or CD/DVDs - it just doesn't matter.
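
the transport-independence falls straight out of detached signatures.
roughly, using node's crypto module purely for illustration (debian
actually uses GPG):

    // sketch: verify a detached signature over a package, so it doesn't
    // matter whether the bytes arrived via http, rsync or carrier pigeon
    import { createVerify } from "crypto";

    function packageIsAuthentic(
      packageBytes: Buffer,
      detachedSignature: Buffer,
      publisherPublicKeyPem: string
    ): boolean {
      const v = createVerify("RSA-SHA256");
      v.update(packageBytes);
      return v.verify(publisherPublicKeyPem, detachedSignature);
    }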

l.

Ian Bicking

unread,
Mar 14, 2012, 5:55:38 PM3/14/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org
On Thu, Mar 8, 2012 at 4:25 AM, Jonas Sicking <jo...@sicking.cc> wrote:

> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example asking the user "do you want to
> allow USB access for this app" is unlikely a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful. This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means. For example if the fitbit company comes
> to mozilla and says that they want to write an App which needs USB
> access so that they can talk with their fitbit hardware, and that they
> won't use that access to wipe the data on people's fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.
>

This is somewhat an aside to your proposal, but... well, I'll offer up the
idea:

As a kind of use case of both a useful permission and one that can be
abused, I am imagining camera access, where the application can actually
decide when to take a snapshot and get preview information. (This is
different than an interface where the UA handles taking the picture and
just gives the user-selected image to the app.)

If this permission is simply assigned to the application origin, you can
imagine finding an XSS hole where an attacker could upload bad code, and
the code could run from within a hidden iframe, creating a great way to spy
on the user at any time without them even intending to open the
picture-taking app.

So of course if the user has to interact explicitly with the app to
actually take a picture this fixes the problem, just like <input type=file>
doesn't easily let an attacker read arbitrary files. But apps would like
to present novel interfaces, or change the picture preview (e.g., applying
filters on the preview), or any number of novel things that they can do
when they get to control the entire interface.

I think it would be possible to mitigate the danger of APIs like these,
without forcing the UA to mediate all access, by keeping a secure record
and notifications on the side. For instance, with camera access you can
imagine a camera icon being presented somewhere securely whenever the
camera was accessed (maybe in the tab, maybe in B2G in some notification
area, maybe with a translucent message in a fullscreen situation). This
would allow the user to monitor after the fact for inappropriate use of a
permission, while not actually stopping the API from completing in the
context of the app's interface. It wouldn't be of tremendous help for
something like USB access where you could (or at least might be able to) do
something really malicious, and so it would be too late to see the problem
later. Also, it's very hard for a user to discover how to revoke a
permission once they've granted it; this would leave a handy place for
exactly that case.
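
Concretely, the secure record might look something like this (all
names are invented for illustration):

    // Sketch: a UA-side log of sensitive API use, driving an indicator
    // the app can't draw over, plus after-the-fact review.
    interface AccessRecord {
      origin: string;
      api: string;    // e.g. "camera"
      when: number;
    }

    const accessLog: AccessRecord[] = [];

    // assumed to be provided by UA chrome, not by the app
    declare function showSecureIndicator(api: string): void;

    function recordSensitiveAccess(origin: string, api: string): void {
      accessLog.push({ origin, api, when: Date.now() });
      showSecureIndicator(api); // e.g. camera icon in a notification area
    }

    // a settings page can later enumerate accessLog so the user can spot
    // misuse, report it to the store, or revoke the permission there.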

Using this notification of high-permission API use, you could also let
users report when they believe an application used the permission in an
inappropriate way. This report would probably make sense to send back to
the store, which could monitor for problems and according to the proposal
would have the ability to revoke a permission for an application, or revert
it to require user confirmation (e.g., a store might automatically do that
if there was a sudden burst of reports, as it might signal an attack).

There are however privacy implications to logging API access.


> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.
>
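
(As a sketch of how those four levels might drive the prompt - the
names are invented, and the prompt UI is assumed to be trusted UA
chrome:)

    // Sketch of the four access levels driving the prompt.
    type AccessLevel =
      | "deny"
      | "prompt-default-remember"
      | "prompt-default-not-remember"
      | "allow";

    declare function showYesNoPrompt(
      rememberDefault: boolean
    ): Promise<{ allowed: boolean; remember: boolean }>;
    declare function persistDecision(allowed: boolean): void;

    async function decide(level: AccessLevel): Promise<boolean> {
      switch (level) {
        case "deny": return false;
        case "allow": return true;
        default: {
          const rememberChecked = level === "prompt-default-remember";
          const { allowed, remember } = await showYesNoPrompt(rememberChecked);
          if (remember) persistDecision(allowed); // becomes allow/deny
          return allowed;
        }
      }
    }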

One thing I like about this is that the store could enable certain
permissions automatically when they are clearly essential to the
application. For instance, an application called "Silly Camera" would be
expected, by the user, to access the camera – simply by installing the
application we can infer the user intends to give the application that
permission (at least we can infer that given a human's review). Other
applications might have some camera-using feature but not so obviously that
we can infer user intent, and so the permission might be left at some
prompt state.

Ian

Lucas Adamski

unread,
Mar 14, 2012, 6:37:20 PM3/14/12
to Fabrice Desré, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, lkcl luke, Mozilla B2G mailing list, ptheriault, cjo...@mozilla.com, Jonas Sicking
At this point I'm just raising possibilities. If we go with something close to option b), then we have to figure out
how to deal with a set of threats not really present in other app stores. It doesn't preclude us from doing so, but we
might for example have to require a relatively strict CSP policy for apps to reduce the risk of MITM attacks,
or CA pinning.

I don't know of any way to mitigate the risk of server compromise without code signing, though - short of having a two-tier
system (more privilege for "installed" apps, less for "remote" apps), and I'd really like to avoid that.
Lucas.

On 3/14/2012 2:50 PM, Fabrice Desré wrote:
> Lucas,
>
> Are you considering signing the html/js/css/other-content from apps?
>
> I can understand the nice properties that would give us, but that looks extremely impractical in real life. Web sites
> change all the time, which is not the case of native apps distributed from a store.
>
> Fabrice
>
> On 03/14/2012 02:35 PM, Lucas Adamski wrote:
>> My understanding is that there will be multiple app stores. But code signing has another benefit: reducing systemic
>> risk.
>>
>> This assumes code signing and sane key management, but let's say there's a very popular app with significant privileges.
>> To compromise a large number of people, you'd need to:
>> a) compromise the site hosting the app
>> b) compromise the key signing the app (assuming you require app updates to be signed with the same key)
>> c) compromise or trigger the update mechanism for the app
>> d) wait for updates to trickle out
>>
>> This is a tedious process that slows down exploitation, and that's no fun.
>>
>> If app authentication relies only on SSL, then you just need to pop a web server (which isn't hard, really). Everyone
>> using the app gets owned simultaneously.
>> Lucas.
>

Lucas Adamski

unread,
Mar 14, 2012, 6:42:47 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Justin Lebar, ptheriault, Jim Straus, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Paul and I are putting together a feature page to capture some of the discussion so far, and to provide a foundation
going forward. We'll have something out for review shortly. Thanks!
Lucas.

David Chan

unread,
Mar 14, 2012, 6:44:38 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, lkcl luke, Jonas Sicking, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Mozilla B2G mailing list
I've updated the wiki page with the information presented in e-mails today
- definition of app instance / version
- link to B2G/App security model feature page
- benefits of mirroring Debian package management system
- link to Jim's permission manager bug
- complications of using SSL as authentication mechanism
- open question about what to sign

See https://wiki.mozilla.org/Apps/Security

David Chan

----- Original Message -----
> From: "Lucas Adamski" <lada...@mozilla.com>
> To: "Fabrice Desré" <fab...@mozilla.com>
> Cc: dev-w...@lists.mozilla.org, "David Barrera" <dbar...@ccsl.carleton.ca>, dev-se...@lists.mozilla.org, "Jim
> Straus" <jst...@mozilla.com>, "lkcl luke" <luke.l...@gmail.com>, "Mozilla B2G mailing list"
> <dev...@lists.mozilla.org>, "ptheriault" <pther...@mozilla.com>, cjo...@mozilla.com, "Jonas Sicking"
> <jo...@sicking.cc>
> Sent: Wednesday, March 14, 2012 3:37:20 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>

ptheriault

unread,
Mar 14, 2012, 7:02:24 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, Fabrice Desré, lkcl luke, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking

I actually liked the idea of "more privilege for 'installed' apps, less for 'remote' apps" - the number of apps that will need elevated permissions is a very small percentage (and I think that was B2G's original plan?). As I understand it, Gaia apps are already static HTML apps (i.e. it would be easier to package and sign these applications than normal websites), and these would be the main apps which require critical privileges. I suppose that developers will not want to be tied down to a static HTML model though.

Maybe it doesn't have to be a two-tier system - but I think it makes sense for there to be specific security requirements that need to be met in order to be granted critical permissions. I think trying to enforce a blanket set of requirements across the gamut of permissions will result in a model that compromises on both ends (it will be heavy-handed for apps which don't require permissions, and not strict enough for critical permissions).

- Paul

lkcl luke

unread,
Mar 14, 2012, 7:04:41 PM3/14/12
to Fabrice Desré, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, ptheriault, cjo...@mozilla.com, Jonas Sicking
@whoever is maintaining the wiki page: critical information is contained
about 1/2 way down this message which should go onto the wiki page. thanks.


On Wed, Mar 14, 2012 at 9:50 PM, Fabrice Desré <fab...@mozilla.com> wrote:
>  Lucas,
>
> Are you considering signing the html/js/css/other-content from apps?

yeeees. nooow you're getting it. that's what debian packages are.
you package them up (.deb) and then you get the maintainer to
digitally-sign them. then, prior to release, the "ftp master" *also*
signs them.


btw, fabrice, is there any particular reason why you chose to respond
to lucas, who described *exactly* the same architecture as that which
i proposed over 4 days ago, in some detail, with comprehensive
information that aids in evaluating the architecture and
validating that it is in fact solving the problem, yet have chosen to
completely ignore what i wrote?

that is somewhat disrespectful, and if you do not have a good reason
please could you kindly and publicly apologise.

if however there is a genuine reason then i apologise for "ringing
the alarm bells" and please disregard it completely.


> I can understand the nice properties that would give us, but that looks
> extremely impractical in real life.

tough. it's not "impractical" at all for the debian maintainers, is
it? they've been operating for over 20 years, why therefore are you
considering it "impractical"? debian's entirely free software and
_they_ don't consider it "impractical", do they?

likewise, redhat and fedora and suse - all of whom use digital
signatures on packages - do not consider it to be "impractical", do
they?

read what jonas originally wrote, fabrice. [this is why it's so damn
important to have an online document, folks].

jonas said [paraphrasing]: "we do not want to become another apple"
which you would have to be if you became "the authority" - the
zeig-heil dictator - which told people what apps they can and cannot
have on their own purchased hardware.

the alternatives are just... too nasty, fabrice.


> Web sites change all the time, which is
> not the case of native apps distributed from a store.

it would appear that you are giving a case *for* digital-signatures
of native apps distributed from a "store"!

:)

either that or you are thoroughly confusing "web sites" with "native
apps distributed from a store", believing that just because B2G is a
"web engine" that somehow the two must be the same.


web sites and the code on them which is downloaded via HTTP or HTTPS
by going to a web site (think "B2G's browser application") has
*NOTHING TO DO WITH* the "distribution of native apps from a store".

under the system that i am proposing which i know will solve the
problem of securing the apps that are distributed through stores, the
following actions would occur:

[and could someone please document this on the wiki page]:

* a user goes to a store (GUI front-end TBD)
* this "store" triggers the command "sudo apt-get update" in the
background (or this occurs on a regular basis)
* they go "i wanna app wiv a cherry on top"
* the selection fires off a LOCAL COMMAND "sudo apt-get install cherry-on-top".
* the GUI uses the dpkg / debconf communication system to inform them
of progress
* the GUI then walks them through the security questions (which is
all part of debconf)

in other words, the GUI is actually nothing more than yet another
debconf front-end! it just so happens that the GUI is also a B2G app
that has a communications-path (unix pipe, whatever) through (oh god
please no :) yet another WebAPI over to apt/debconf.
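
a rough sketch of that bridge, assuming a node-style child_process
purely for illustration (a real front-end would speak the debconf
protocol properly):

    // sketch of the bridge only: spawn apt and stream its output to
    // the store GUI; not the actual B2G plumbing
    import { spawn } from "child_process";

    function installApp(pkg: string,
                        onProgress: (line: string) => void): Promise<void> {
      const apt = spawn("sudo", ["apt-get", "install", "-y", pkg]);
      apt.stdout.on("data", (chunk: Buffer) => {
        // in reality this is where debconf's question/answer protocol
        // would drive the security prompts in the GUI
        onProgress(chunk.toString());
      });
      return new Promise<void>((resolve, reject) => {
        apt.on("exit", (code) =>
          code === 0 ? resolve() : reject(new Error(`apt exited ${code}`)));
      });
    }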

where in that did you see *any* "web site HTTP visitation"?

there *was* none, was there?

now it so happens that apt, at the back-end, depending on what
packages are installed (apt-p2p or the defaults which do http and
ftp), *happens* to perform HTTP requests, FTP requests or blah blah
requests, but this is *NOTHING TO DO WITH* the B2G executable.

it's *entirely* handled by dpkg, aptitude, debconf etc. etc.

the really nice thing about this is that the entire software-base
already exists, is proven, works, well-established and so on. all you
have to do is go find a debian person (steve if he's not madly busy,
wookey likewise) and say "hey folks how do we do this?"

also, if you _were_ to utilise the debian packaging system i'm sure that:

a) they would probably go "cool!" and would simply start allowing B2G
apps directly into the debian repositories. they'd probably even
create a "task" for you which installed B2G on any system. (*1)

b) some of the debian mirrors would, if asked, quite likely allow some
space for the B2G infrastructure and apps. many of them do ubuntu as
well so it's no big deal.

l.

(*1) as long as, of course, you sort out the stupid licensing on
mozilla forcing the advertising clause which just pisses them off, or
just let them come up with another stupid name for B2G such as icefart
or something wonderfully ridiculous. hurrah :)

lkcl luke

unread,
Mar 14, 2012, 7:15:37 PM3/14/12
to David Chan, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, Lucas Adamski, Jonas Sicking, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Mozilla B2G mailing list
On Wed, Mar 14, 2012 at 10:44 PM, David Chan <dc...@mozilla.com> wrote:
> I've updated the wiki page with the information presented in e-mails today
> - definition of app instance / version
> - link to B2G/App security model feature page
> - benefits of mirroring Debian package management system
> - link to Jim's permission manager bug
> - complications of using SSL as authentication mechanism
> - open question about what to sign

eyy! :) superb! *thumbs-up* to david :)

lkcl luke

unread,
Mar 14, 2012, 7:44:57 PM3/14/12
to ptheriault, dev-w...@lists.mozilla.org, David Barrera, Fabrice Desré, Lucas Adamski, Jonas Sicking, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jim Straus, Mozilla B2G mailing list
On Wed, Mar 14, 2012 at 11:02 PM, ptheriault <pther...@mozilla.com> wrote:
>
> I actually liked the idea of "more privilege for "installed" apps, less for "remote" apps" - the number of apps that will need elevated permissions are a very small percentage (and I think that was B2G's original plan?) . As I understand it, Gaia apps are already static HTML apps

yes!

> (i.e. it would be easier to package and sign these applications than normal websites),

correct!

again, it's worth reiterating: "normal" websites have *NOTHING TO DO
WITH* gaia apps.

you really really really really _really_ need to stop thinking of
Gaia apps as having ANYTHING to do with "web sites".

yes their source code happens to be in HTML and Javascript (and if i
had my way they'd be python as well *sigh* and if i _really_ had my
way they'd be absolutely any programming language available under the
sun. hint hint XPCOM big hint... :)

but just because the source code is HTML and Javascript it DOES NOT
MEAN THAT IT HAS ANYTHING TO DO WITH WEB SITES.

it is quite fascinating to see people who have been working with
mozilla for... N years making this same mistake :)

B2G really is quite fundamentally... odd, and wonderful :)


> and these would be the main apps which require critical privileges.  I suppose that developers will not want to be tied down to a static HTML model though.

well then: let's examine that case.

* the static HTML as part of the package downloads dynamic HTML,
stores it and runs it.

the equivalent analogy is that of a debian package such as ruby using
"ruby-gems", python using the zope-style package/downloader or perl
using cpan.

have you _any_ idea how much of a fuck-awful mess these "debian
package bypassing" mechanisms get people into?? extreme case: did you
hear what the ubuntu development team did? so many of them had personally
installed python 2.6 into /usr that they made a fucking STUPID
decision to package the *standard* python 2.6 runtime.... in
/usr/local! that's such a clear violation of the FHS it's just hard to
imagine why they made such a clueless decision.

it's particularly bad when the "vanilla" package is *different* from
the debian-packaged one. this often happens with system-level
packages.

so, another example which really drives this home: do you recall what
happened with webmin? webmin was such a fuck-up that i can no longer
even find it by doing "apt-cache search webmin". webmin was
self-upgradeable. but unfortunately it was also installable as a
debian package.

that meant that it could actually severely fuck up your system,
because you could potentially upgrade it to a version which not only
bypassed all the standard debian packaging (config file locations
etc.) but also the "upgrade" could potentially not even be SUITED to a
debian system, but could accidentally be one for REDHAT or gentoo! so
it would try to overwrite config files or try to access config files
that didn't exist!

you see how much of a serious problem it is, by analogy, to even
*contemplate* allowing people to download apps that do downloading of
apps?

i strongly *strongly* advise you *not* to even allow apps to save
source code (HTML+javascript) of other Gaia apps onto the local
filesystem for execution within B2G.

just... don't do it.

if you're going to allow anything, it should be temporary code.
perhaps... i dunno... download something that is *only* allowed to be
from certain web sites, where the URLs are displayed to the user
(wildcards allowed?), and its purpose is to cache stuff.

BUT, any such downloading should come with a severe warning that it
could result in arbitrary code execution from a remote source.

i can see why it should technically be made possible, but for many many
reasons it should be discouraged, and monitored carefully. that's
easy enough to do because the permissions set should come up as a red
flag before the app is allowed to go out to the public.

anyway, summary, paul:

* you very very much need to conceptually detach "Gaia apps which are
HTML/JS" from "web site source code which happens to be downloaded by
the Gaia B2G browser application".

* the excellent point you make about dynamically allowing arbitrary
download, storage and subsequent execution of HTML/JS (whether as an
app or as part of the execution of an app) is a serious one that
should either be blanket denied or watched very very carefully.

l.

lkcl luke

unread,
Mar 14, 2012, 7:53:56 PM3/14/12
to ptheriault, dev-w...@lists.mozilla.org, David Barrera, Fabrice Desré, Lucas Adamski, Jonas Sicking, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jim Straus, Mozilla B2G mailing list
review: https://wiki.mozilla.org/Apps/Security#Trusted_store_with_permissions_delegation

(thank you to david for documenting this stuff, yeah!)


"A store (parent) may permit a trusted store (child) to grant a subset
of parent's permissions"

no. this is a very bad idea. ok. maybe it is, maybe it isn't, but
it has a couple of implications which need to be considered.

* delegation implicitly means that you have a hierarchical permissions
system. hierarchical permissions systems have a bit of a problem in
that once the genie is out of the bottle, you can't really get it back
in. this is why the FLASK security model is *NOT* based on
"hierarchical permissions". delegation is fundamentally and
diametrically opposed to the principles behind FLASK (although you
could theoretically express a hierarchical permissions system _using_
SE/Linux, if you really really wanted to).

* thinking from the perspective of debian package maintenance, you
don't see them "delegating", do you? in 20 years, nobody's come up
with the idea of "delegating" package maintenance. it's simply not
needed. why? because...

* ...there is the concept of adding *peer* stores (in debian packaging
terms). take a look at http://debian-multimedia.org. note on there,
it says, "The first package to install is debian-multimedia-keyring.
Since Squeeze you can install this package with apt-get but you need
to presse Y when the package ask what to do and do not press return."

* then there are mirrors. mirrors just copy pre-signed packages. you
don't need to "delegate", you just... do it. they're signed. they've
been vetted. there's no problem. they're tamper-resistant.

l.

Lucas Adamski

unread,
Mar 14, 2012, 8:05:35 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, dev-se...@lists.mozilla.org, Fabrice Desré, Mozilla B2G mailing list, ptheriault, cjo...@mozilla.com, Jonas Sicking
In fairness, it's worth considering that a better overall user experience could be obtained by having apps dynamically web
hosted without an explicit update process, because
a) it maximizes the community that can participate in building really great apps (without having to figure out code
signing, versioning, Debian package management, etc)
b) it sidesteps the weekly ritual of having to update 20 apps per device

So I'm eager to entertain the idea of a "newer and better" model, but want to make sure at minimum we understand the
security compromises that might come with it.
Lucas.

lkcl luke

unread,
Mar 14, 2012, 8:07:06 PM3/14/12
to ptheriault, dev-w...@lists.mozilla.org, David Barrera, Fabrice Desré, Lucas Adamski, Jonas Sicking, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jim Straus, Mozilla B2G mailing list
review requested:
https://wiki.mozilla.org/Apps/Security#FLASK_for_enforcing_permissions

i've thought about this some more and have updated it for accuracy as
well. the basic position is that the present model which implements
WebAPIs within the same threads/processes (fork, pthread) is
fundamentally flawed and cannot be secured. period.

if you want proper security, you *have* to have a firebreak between
critical functionality such as actual dialing etc. but more than that
you have to have the dialer front-end *application* as a completely
separate application as well, such that permissions cannot be
"leeched" by a rogue app and used to pretend to the OS that the rogue
app _is_ the dialer.

the only way to fully protect against buffer overruns and other
loveliness is to actually have completely separate executables. not
even fork() will suffice.

l.

David Chan

unread,
Mar 14, 2012, 8:09:00 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Thanks for reviewing the wiki. I'll add your concerns to that section. If I'm
understanding correctly, you are arguing for a flat "hierarchy", peers in the
debian sense. The analogous idea in the B2G world would be that Mozilla,
telcos, company foo could all run their own stores. If a user doesn't like
the policies of the existing stores, they can start their own. However,
there wouldn't be a way for Mozilla to say "I trust store bar, so I'm
going to give them the same privileges as me."

The "peer" model sounds find to me. It still allows multiple stores. The
selfhost issue Jonas brought up doesn't seem to apply if the packages
are signed. I could host my own app and have it on the store at the same
time. There is still the question of granting permissions. I'm not sure
if the store is the proper entity to decide whether an app can obtain
permission X/Y/Z.

David Chan

----- Original Message -----
> From: "lkcl luke" <luke.l...@gmail.com>
> To: "ptheriault" <pther...@mozilla.com>
> Cc: dev-w...@lists.mozilla.org, "David Barrera" <dbar...@ccsl.carleton.ca>, "Fabrice Desré"
> <fab...@mozilla.com>, "Lucas Adamski" <lada...@mozilla.com>, "Jonas Sicking" <jo...@sicking.cc>,
> dev-se...@lists.mozilla.org, cjo...@mozilla.com, "Jim Straus" <jst...@mozilla.com>, "Mozilla B2G mailing list"
> <dev...@lists.mozilla.org>
> Sent: Wednesday, March 14, 2012 4:53:56 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>

David Chan

unread,
Mar 14, 2012, 8:24:13 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, Lucas Adamski, Jonas Sicking, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Mozilla B2G mailing list
I've added the debconf shim information to the wiki
https://wiki.mozilla.org/Apps/Security#debconf_shim_for_B2G_native_apps

It also does seem that there is some misunderstanding in the terminology
being used, so I added a section trying to define "webapp" and "native app".

However, I believe there are use cases / requirements for an app to
be able to retrieve remote resources. This obviously causes problems
if we want to use code signing. My question to the B2G team is:

Is there a compelling use case / business requirement to allow a
native app to retrieve remote content?


If not, I strongly agree with lkcl and others on the thread that we
restrict APIs such as dialer / contacts to signed + reviewed static
apps.


David Chan


----- Original Message -----
> From: "lkcl luke" <luke.l...@gmail.com>
> To: "Fabrice Desré" <fab...@mozilla.com>
> Cc: dev-w...@lists.mozilla.org, "David Barrera" <dbar...@ccsl.carleton.ca>, dev-se...@lists.mozilla.org, "Jim
> Straus" <jst...@mozilla.com>, "Lucas Adamski" <lada...@mozilla.com>, "Mozilla B2G mailing list"
> <dev...@lists.mozilla.org>, "ptheriault" <pther...@mozilla.com>, cjo...@mozilla.com, "Jonas Sicking"
> <jo...@sicking.cc>
> Sent: Wednesday, March 14, 2012 4:04:41 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>

David Chan

unread,
Mar 14, 2012, 8:45:15 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
> > The
> > selfhost issue Jonas brought up doesn't seem to apply if the
> > packages
> > are signed.
>
> self-host... selfhost... i'm lost. sorry. do you have a reference
> (wiki URL) which explains?
>

The scenario is more complicated than I remember.


> > I could host my own app and have it on the store at the same
> > time. There is still the question of granting permissions. I'm not
> > sure
> > if the store is the proper entity to decide whether an app can
> > obtain
> > permission X/Y/Z.
>
> *deep breath*.... :)
>
> the permissions need to be codified in some format (text file?)
> which
> is incorporated into the OS once they're downloaded, unpacked and
> installed.
>
> however because those permissions *are* just "a text file", they
> *can* be included.... as part of the GPG-signed package (by the
> developer) :)
>
> not only that, but prior to the FTP Masters letting it out the door,
> they can review the permissions file. if the permissions are
> ridiculously over-permissive, the FTP Masters really should not sign
> the package.
>
> meaning, it wouldn't get released. which, unfortunately, makes the
> people managing the store the equivalent of "apple". whoops. but,
> there you go. it seems to work for debian, but that's because they
> have 1,000 people with a ring-of-trust, and those 1,000 people are
> often *not* the developers of the package. they have some free time
> to give, and a reputation to maintain.
>
> ultimately, the _store_ doesn't decide... but a human does.
>
> l.
>

lkcl luke

unread,
Mar 14, 2012, 8:50:15 PM3/14/12
to Lucas Adamski, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, dev-se...@lists.mozilla.org, Fabrice Desré, Mozilla B2G mailing list, ptheriault, cjo...@mozilla.com, Jonas Sicking
[re-sending, whoops i forgot to send this to all recipients. good job
i decided to review it and update the wiki eh, else i wouldn't have
known that i'd made the mistake of sending it only to lucas]

On Thu, Mar 15, 2012 at 12:05 AM, Lucas Adamski <lada...@mozilla.com> wrote:
> In fairness, its worth considering that a better overall user experience could be obtained by having dynamically web
> hosted without an explicit update process, because
> a) it maximizes the community that can participate in building really great apps (without having to figure out code
> signing, versioning, Debian package management, etc)

yes - and likewise any debian user knows that if they go download the
source code of an app and compile/install it themselves, they are "on
their own".

hmmm, that's a good point. it would be a good idea to have the
concept of /usr and /usr/local within B2G to recognise this.

so... "...../gaia/usr" for pre-vetted "official" and "stable" stuff,
but "...../gaia/usr/local/" for stuff that people want to put on there
and do development.

the problem comes of course when people start doing that en-masse,
but that's their problem: they just "voided the warranty" anyway.
so.... don't make it too easy, would be my advice! :) force them to
get root access or something, make it part of the developer
documentation and bury it deep. only the patient and the intelligent
would find it :)

David Chan

unread,
Mar 14, 2012, 8:51:51 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Ignore my other email. I sent it too early.

> > The
> > selfhost issue Jonas brought up doesn't seem to apply if the
> > packages
> > are signed.
>
> self-host... selfhost... i'm lost. sorry. do you have a reference
> (wiki URL) which explains?
>

The situation is more complicated than I remember. See
http://groups.google.com/group/mozilla.dev.b2g/msg/b079d34ccdec0f85

The part about SMSMessageInc. The scenario is that we have an
untrusted store attempting to sell an app which is hosted on a
trusted store. The idea was for the installer to query our trusted
store to see what capabilities the app received. However if we go
with the dpkg model, then the permissions are in the signed
package and SMSMessageInc can "mirror" the trusted store. Handling
payments for this workflow doesn't appear to be defined.
The review process seems to work for the AMO team at Mozilla. I don't
know how much work it takes them. There were some mentions of
requiring app developers to explain why they are asking for certain
permissions in the app submission process. There has also been discussion
by the apps team to put permissions requests in the manifest file.

David Chan

lkcl luke

unread,
Mar 14, 2012, 8:54:14 PM3/14/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Thu, Mar 15, 2012 at 12:45 AM, David Chan <dc...@mozilla.com> wrote:
>> > The
>> > selfhost issue Jonas brought up doesn't seem to apply if the
>> > packages
>> > are signed.
>>
>>  self-host... selfhost... i'm lost.  sorry.  do you have a reference
>> (wiki URL) which explains?
>>
>
> The scenario is more complicated than I remember.

... ah :) good job there's other people out there, eh? :) anyone
else recall ... or was it part of the usr/local stuff i wrote about?
hmmm... 1sec lemme cut/paste some of that and summarise...

ok:
https://wiki.mozilla.org/Apps/Security#Other_.28topics_that_don.27t_fall_into_above_proposals.29

i just added the bit about /usr and /usr/local to this section.

also i'd added the bit about "eval()" which i'd seen whizzing by a
few days ago, don't want to lose track of that idea.

l.

lkcl luke

unread,
Mar 14, 2012, 8:58:12 PM3/14/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Thu, Mar 15, 2012 at 12:51 AM, David Chan <dc...@mozilla.com> wrote:
> Ignore my other email. I sent too early

*eyes shut*....

>> > The
>> > selfhost issue Jonas brought up doesn't seem to apply if the
>> > packages
>> > are signed.
>>
>>  self-host... selfhost... i'm lost.  sorry.  do you have a reference
>> (wiki URL) which explains?
>>
>
> The situation is more complicated than I remember. See
> http://groups.google.com/group/mozilla.dev.b2g/msg/b079d34ccdec0f85
>
> The part about SMSMessageInc. The scenario is that we have an
> untrusted store attempting to sell an app which is hosted on a
> trusted store.

ok just for now so it's not forgotten i've added it to
https://wiki.mozilla.org/Apps/Security#Other_.28topics_that_don.27t_fall_into_above_proposals.29

l.

lkcl luke

unread,
Mar 14, 2012, 9:16:08 PM3/14/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, ptheriault, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Some time ago, Paul wrote this:

> How do domains which install themselves as Web Apps fit into this model? Is
> there perhaps a default lower set of permissions that websites can install
> themselves with - basically the same types as websites, except that with
> apps permissions might be able to get "prompt to remember" instead of just
> "prompt"?)

paul, hi,

what do you mean "domains which install themselves as Web Apps?"

are you envisaging, perhaps, that a Gaia Application (which is merely
source code) be made available not from the local filesystem but off
of a remote one?

because if so, i have an idea that has some... eeenteresting implications.

the idea is very simple: you don't complicate B2G by forcing it to
understand loading of Gaia apps from other sources, you use something
which is well-known called "mount points" and "networked filesystems"
:)

so, you get an app (which is of course signed, and of course has a
permission-set in it).

the permission-set is necessarily complex, because it not only mounts
a networked filesystem (nfs, HTTP, andrewfs, whatever) but also has to
offer up an explanation as to why in hell it's doing this.

(all of these permissions can easily be covered by SE/Linux btw)

now, what's _really_ neat about the use of the debian packaging system
to do this is that you *don't* need to arse about: you can just do
"apt-get install nfs-client" or "apt-get install fuse" along with
another package that does the mounting etc. etc.

you could even have a package which does caching (for off-line
situations) of the remote filesystem and it's *not* your problem (i.e.
it's not a B2G coding problem).


the question is, however: why on earth would you a) want to do
something like this b) want to _allow_ something like this?

technically it's cool; technically it's a variation on the "google
chrome OS" theme, so technically yes it's kinda neat *if* you think
that google chrome technically stands a snowball's chance in hell of
success.

but it makes me nervous because if you're offline and there's no
cache, you're hosed; secondly, the users reaaalllly have to trust that
remote site; thirdly unlike the debian packaging system which does not
require secure transfer of the package (containing the app) because
it's digitally-signed securely, if you download apps over the network
you now HAVE to use SSL, authentication etc. because otherwise you
expose users to man-in-the-middle attacks etc. etc.

overall, it's absolutely absolutely fricking cool to have dynamic
domain-grade loading of B2G/Gaia apps, but to be really honest, if you
want "dynamic apps" then just for goodness sake demand that people
upgrade the app.

upgrading the app will achieve exactly the same effect, but will force
the developers to go through a proper formal review process.

if you allow dynamic apps you just allowed them to bypass all the
security. that's baaaad :)

l.

Lucas Adamski

unread,
Mar 14, 2012, 9:18:23 PM3/14/12
to Ian Bicking, dev-w...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
I would imagine in such a model that we'd require a UA mediated dialog, but maybe I'm missing something. Trusting apps
to implement privileged "buttons" without running into a swamp of framing & clickjacking issues seems unlikely. I'm
thinking of common "user-land" apps here, rather than "system" apps. If the latter are 100% locally installed and not
loadable/frameable by other apps, then we should be able to dodge that bullet.
Lucas.

ptheriault

unread,
Mar 14, 2012, 9:36:32 PM3/14/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, David Chan, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking

On Mar 15, 2012, at 12:16 PM, lkcl luke wrote:

> Some time ago, Paul wrote this:
>
>> How do domains which install themselves as Web Apps fit into this model? Is
>> there perhaps a default lower set of permissions that websites can install
>> themselves with - basically the same types as websites, except that with
>> apps permissions might be able to get "prompt to remember" instead of just
>> "prompt"?)
>
> paul, hi,
>
> what do you mean "domains which install themselves as Web Apps?"

Pages which call navigator.mozApps.install(<their own URL>) rather than being installed from a trusted store.

I believe that the idea is that they just won't be a trusted store, so they won't get sensitive permissions. Response from a previous email was:

>Such stores generally won't be trusted. So those stores will work
>just fine, however they won't be able to install apps which need SMS
> privileges.

I.e. this wouldn't be for internal phone apps (gaia-esque) but for more web-page-style apps that want the installed-app user experience but don't need sensitive permissions, and so don't need to go through a store. Or that is how I understood it.
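For reference, a page installing itself under the draft API looks
roughly like this (the manifest URL is made up, and the callback
details may still shift as the spec evolves):

  // sketch based on the draft Open Web Apps install API
  var request = navigator.mozApps.install("https://example.com/manifest.webapp");
  request.onsuccess = function () {
    // installed, with only the default non-sensitive permission set
  };
  request.onerror = function () {
    // the user declined, or the manifest was rejected
  };

No store is involved at any point, which is why sensitive permissions
are off the table for this path.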

I'll make a note on this in the wiki.

lkcl luke

unread,
Mar 14, 2012, 9:45:05 PM3/14/12
to ptheriault, dev-w...@lists.mozilla.org, Jim Straus, David Barrera, Fabrice Desré, Lucas Adamski, Mozilla B2G mailing list, David Chan, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
On Thu, Mar 15, 2012 at 1:36 AM, ptheriault <pther...@mozilla.com> wrote:
>
> On Mar 15, 2012, at 12:16 PM, lkcl luke wrote:
>
>> Some time ago, Paul wrote this:
>>
>>> How do domains which install themselves as Web Apps fit into this model?  Is
>>> there perhaps a default lower set of permissions that websites can install
>>> themselves with - basically the same types as websites, except that with
>>> apps permissions might be able to get "prompt to remember" instead of just
>>> "prompt"?)
>>
>> paul, hi,
>>
>> what do you mean "domains which install themselves as Web Apps?"
>
> Pages which call navigator.mozApps.install(<their own URL>) rather than being installed from a trusted store.

ahh right, ok. does this function allow writing to the local
filesystem? if so, does it allow *overwriting* of existing files? if
so, what protection is there?

(i.e. is there a specification page which describes this function)

it sounds to me like this function is intended to be the equivalent
of dpkg/aptitude, would that be a fair but rough / approximate
assessment?


> I believe that the idea is that they just won't be a trusted store, so they won't get sensitive permissions. Response from a previous email was:
>
>>Such stores generally won't be trusted. So those stores will work
>>just fine, however they won't be able to install apps which need SMS
>> privileges.
>
> I.e. this wouldn't be for internal phone apps (gaia-esque) but for more web-page-style apps that want the installed-app user experience but don't need sensitive permissions, and so don't need to go through a store. Or that is how I understood it.

*huffs*. if this function is a functional-equivalent of
dpkg/aptitude, it has *deep breath* one hell of a lot of catching up
to do. aptitude takes care of conflicts as well as dependencies; dpkg
takes care of file-conflicts and such. so if there are two packages
that accidentally have the same filename (which is not permitted and
is a severe violation of debian package policy), dpkg will notify you
and bomb out rather than let you proceed.

bottom line is: without looking closely at it,
navigator.mozApps.install is making me nervous :)

> I'll make a note on this in the wiki.

yeay! :)

if there's a spec for navigator.mozApps.install is there any chance
you could add a link to it there, too, so it can be reviewed?

l.

Lucas Adamski

unread,
Mar 14, 2012, 9:54:05 PM3/14/12
to ianG, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list
https://developer.mozilla.org/en/OpenWebApps has some good info.

But in terms of business objectives, I'll do a terrible job of paraphrasing the mission: maximize participation in the
open web. This means breaking up the app silos by maximizing the number of possible devices, developers, app stores,
and overall participation in the app ecosystem. It's the ability to get apps from any number of stores, and run them on
any number and type of devices. All based upon open web standards technologies.
Lucas.

On 3/14/2012 6:04 PM, ianG wrote:
> On 15/03/12 08:43 AM, lkcl luke wrote:
>
>> it is even absolute madness to even expect the actual people who have
>> been following this right from the beginning to be able to hold
>> everything in their heads.
>
> +1. This is what I know:
>
> "I'm way over due to write a proposal for the Open Web Apps and
> Boot-to-Gecko security models."
>
> But I don't know what those two nouns are... So there is zero comment possibly on any security model. First step to
> the security world is this: understand the business model.
>
> Doing a little poking around, it seems to be: build an application store?
>
> Google takes me to here: https://www.mozilla.org/en-US/apps/
>
> iang

Lucas Adamski

unread,
Mar 14, 2012, 10:00:37 PM3/14/12
to lkcl luke, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, dev-w...@lists.mozilla.org
I think the idea is the opposite: to make it easy and common. Not saying it's risk-free. :)

In other news, we have started a feature page for the B2G app security model. The idea is to work through the current
list of open issues in a (somewhat) structured process and capture the resulting decisions and logic here:
https://wiki.mozilla.org/B2G_App_Security_Model

The current wiki page (https://wiki.mozilla.org/Apps/Security) is still very relevant; it's more of a freeform way of
capturing ideas, issues, and discussions that will then feed into the design on the feature page.

The current feature page is just a bare start, and not everything will fit in there. You will note a link to a threat
model, for example. The next steps are to validate requirements, and determine the open issues. Please comment upon
those here and either Paul or I will update the page accordingly.

From there we'll burn down the open issues list into a functional design.
Lucas.

On 3/14/2012 6:34 PM, lkcl luke wrote:
> On Thu, Mar 15, 2012 at 1:24 AM, Lucas Adamski <lada...@mozilla.com> wrote:
>> I'm interpreting this idea as something much 'worse' than that. I'm suggesting that the most common B2G app would live
>> on a server with no particular signed package. They would be dynamically fetched and locally cached. This is fraught
>> with problems, some security and some simply usability/reliability (i.e. how to ensure the
>> completeness/freshness/integrity of the local app cache).
> yes. absolutely fine. i strongly suggest that you make it really
> really difficult for people (from a UI perspective) to do this sort of
> thing.
>
> 1) go to a dialog which is right at the bottom and says "enable
> untrusted apps".
> 2) then force them to go through a process of "enabling untrusted
> networked apps"
> 3) then warn them of the consequences.
> 4) then refuse to allow the untrusted app to even run.
> 5) then force them to go to the *permissions* page saying "enable
> permissions for untrusted apps"
> 6) then force them to "enable permissions for networked apps"
> 7) _then_ force them to go through an extra "this is an untrusted app
> from a network. do you really want to grant this?" on *every*
> permission.
>
> or some-such.
>
> if they don't _like_ that, then what will happen is that someone
> *else* will write a replacement for the permissions application which
> takes *away* all of that - and installs an application which just goes
> "yes yes yes".
>
> at which point, it's not your problem.
>
> actually if you wanted a "step 0" you could actually make it an app -
> which is on the equivalent of "debian unstable" - which enables all
> that functionality 1-7 above, so it's not even *possible* to
> (normally) bypass the security unless you really really actually
> actively want it.
>
> step -1 would of course involve adding to /etc/apt/sources.list [or
> equivalent] that line which has the "debian unstable" thing in it,
> which makes it even _more_ difficult :)
>
> or... you just ignore all that, let them do what they like, and it all
> goes to hell in a handbasket very quickly he he.
>
> l.
>
>
>> Lucas.
>>
>> On 3/14/2012 5:12 PM, lkcl luke wrote:
>>> On Thu, Mar 15, 2012 at 12:05 AM, Lucas Adamski <lada...@mozilla.com> wrote:
>>>> In fairness, its worth considering that a better overall user experience could be obtained by having apps dynamically web
>>>> hosted without an explicit update process, because
>>>> a) it maximizes the community that can participate in building really great apps (without having to figure out code
>>>> signing, versioning, Debian package management, etc)
>>> yes - and likewise any debian user knows that if they go download the
>>> source code of an app and compile/install it themselves, they are "on
>>> their own".
>>>
>>> hmmm, that's a good point. it would be a good idea to have the
>>> concept of /usr and /usr/local within B2G to recognise this.
>>>
>>> so... "...../gaia/usr" for pre-vetted "official" and "stable" stuff,
>>> but "...../gaia/usr/local/" for stuff that people want to put on there
>>> and do development.
>>>
>>> the problem comes of course when people start doing that en masse,
>>> but that's their problem: they just "voided the warranty" anyway.
>>> so.... don't make it too easy, would be my advice! :) force them to
>>> get root access or something, make it part of the developer
>>> documentation and bury it deep. only the patient and the intelligent
>>> would find it :)
>>>
>>> l.

Jim Straus

unread,
Mar 14, 2012, 10:15:58 PM3/14/12
to David Chan, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Fabrice Desré, lkcl luke, Lucas Adamski, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, cjo...@mozilla.com, Jonas Sicking
Actually, the suggestion was to have the reason included in the code/manifest. Then when the dialog asking the user to grant/deny a permission is displayed, the developer could tell the user why they want the permission granted. Of course, if the application is being reviewed for granting permission a priori, the reviewer could look at that reason directly. But the reason would always be there on the device, and could be looked at through whatever Permissions Management app is supplied.
-Jim Straus

Chris Jones

unread,
Mar 14, 2012, 11:35:58 PM3/14/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org
----- Original Message -----
> From: "Jonas Sicking" <jo...@sicking.cc>
> To: dev...@lists.mozilla.org, dev-w...@lists.mozilla.org
> Sent: Thursday, March 8, 2012 2:25:39 AM
> Subject: [b2g] OpenWebApps/B2G Security model
>
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user.

Another bit of prior art: addons.mozilla.org has a similar model for Firefox extensions. However the extensions are then given the keys to the kingdom after they pass review, as it were, with no runtime capability limitations.

Cheers,
Chris

Chris Jones

unread,
Mar 14, 2012, 11:50:18 PM3/14/12
to ptheriault, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list, Jonas Sicking
----- Original Message -----
> From: "ptheriault" <pther...@mozilla.com>
> To: cjo...@mozilla.com, "Jonas Sicking" <jo...@sicking.cc>
> Cc: dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, "Mozilla B2G mailing list"
> <dev...@lists.mozilla.org>
> Sent: Thursday, March 8, 2012 1:48:46 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>
> - Privileged store: this is the store which your B2G device comes

> - Trusted store: this is the store where you get your "apps" (i.e.

That's not what I had intended to communicate, sorry. The apps distributed by default in Gaia/OWD/whatever UI I would expect commonly to originate with a store in the trusted root set. Makes updating etc. simpler. But that's entirely up to the b2g vendor.

Cheers,
Chris

lkcl luke

unread,
Mar 15, 2012, 5:54:46 AM3/15/12
to Lucas Adamski, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Mozilla B2G mailing list, ianG
On Thu, Mar 15, 2012 at 1:54 AM, Lucas Adamski <lada...@mozilla.com> wrote:
> https://developer.mozilla.org/en/OpenWebApps has some good info.
>
> But in terms of business objectives, I'll do a terrible job of paraphrasing the mission:  maximize participation in the
> open web.

maximise participation for whom?

... providing a swiss cheese OS that allows virus writers and phishers
to walk all over everyone is participation, isn't it? :) eh, android
maximises participation, don't they?

l.

Justin Lebar

unread,
Mar 15, 2012, 9:30:21 AM3/15/12
to Jonas Sicking, dev-w...@lists.mozilla.org, dev...@lists.mozilla.org
To boil this proposal down into its key points:

1) App lists desired permissions in its manifest.
2) App store approves permissions on app's submission.
3) User approves permissions at install time.
4) Some permissions need to be granted by a trusted app store before
the user can grant them.
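To pin down how (2), (4), and any user defaults compose: per Jonas's
proposal, the effective level for a capability is simply the lowest of
what the store handed out, what the browser trusts that store to hand
out, and the user's own policy. A sketch (level names abbreviated from
the proposal; the function name is made up):

  // levels ordered from most restrictive to least restrictive
  var LEVELS = ["deny", "prompt", "prompt-remember", "allow"];

  function effectiveLevel(storeGrant, browserTrustInStore, userPolicy) {
    var i = Math.min(LEVELS.indexOf(storeGrant),
                     LEVELS.indexOf(browserTrustInStore),
                     LEVELS.indexOf(userPolicy));
    return LEVELS[i];
  }

  effectiveLevel("allow", "prompt-remember", "allow"); // "prompt-remember"

So a store that is only trusted up to "prompt-remember" can never
silently hand an app "allow", no matter what it claims.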


== Lazily approving permissions ==

I'd like us to consider changing (3) to

3a) User approves permissions when app touches the relevant API.

The reason is: It's often not clear to me why an app wants permission
X -- for example, it may not be clear to a new user why Yelp wants to
read the user's location. But if a dialog asking "do you want to
share your location with Yelp?" popped up right after I clicked the
"restaurants nearby" button in the app, it would be clearer to me what
the app was going to do if granted permission.

(3a) also solves the problem of granting apps more permissions when
they upgrade. On Android, I'm prompted to approve a set of
permissions during app upgrade, whenever the app wants a new
permission. This is pretty lame -- the upgrade process should be
seamless, but instead I have to click through a list of permissions.
Of course I'm going to grant the new permissions. I just want to get
on with my life.

But if we lazily prompt for permissions, then I'm not bothered until
the app tries to use those new permissions, at which point I hopefully
understand exactly why the app wants that permission. (The worst
case, when the app requests all its permissions immediately upon
loading, with no explanation, is no worse than requesting permissions
at install time.)

== Designing APIs to fail gracefully when permission is not granted ==

Since users may disallow permission for an app to use any API, we need
to design our APIs to fail gracefully when permission is not granted.

Page tries to read contact data. We ask if that's OK. If not, then
we present an empty contact book, instead of throwing an exception or
something. Apps won't be designed to handle the case when permission
is not granted, so we need to make not-granted behave similarly to
granted, from the page's perspective.

Maybe this is obvious, but I wanted to make sure we're all on the same
page here.
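A contacts sketch, just to pin down the shape -- the API here is
entirely hypothetical, "queryContacts" and "showContact" stand in for
whatever we actually ship:

  // hypothetical API; the point is that "permission denied" and
  // "genuinely empty address book" are the same code path
  queryContacts(function (contacts) {
    for (var i = 0; i < contacts.length; i++) {
      showContact(contacts[i]);
    }
  });

A denied request just invokes the callback with an empty array, and
the app never has to special-case it.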

== Trusted app stores and sophisticated users ==

How will sophisticated users work around requirement (4), that
sensitive APIs can only be granted by trusted app stores? I realize
this is a fine line.

For distributing beta software, perhaps we could have an "at your own
risk" store, with lots of scary warnings. I'm not sure if that would
work for the separate use-case of a developer testing her software --
I guess it would depend on whether an app submitted to the AYOR store
could be auto-updated, without any intervention on the store's part.

-Justin

On Thu, Mar 8, 2012 at 5:25 AM, Jonas Sicking <jo...@sicking.cc> wrote:
> Hi All,
>
> I'm way over due to write a proposal for the Open Web Apps and
> Boot-to-Gecko security models.
>
> Background:
>
> In general our aim should always be to design any API such that we can
> expose it to as broad of set of web pages/apps as possible. A good
> example of this is the Vibration API [1] which was designed such that
> it covers the vast majority of use cases, while still safe enough that
> we can expose it to all web pages with risk of annoying the user too
> much, or putting him/her at a security or privacy risk.
>
> But we can't always do that, and so we'll need a way to safely grant
> certain pages/apps higher privilege. This gets very complicated in
> scenarios where describing the security impact to the user
>
> There are plenty of examples of bad security models that we don't want
> to follow. One example is the security model that "traditional" OSs,
> like Windows and OS X, uses which is "anything installed has full
> access, so don't install something you don't trust". I.e. it's fully
> the responsibility of the user to not install something that they
> don't 100% trust. Not only that, but the decision that the user has to
> make is pretty extreme, either grant full privileges to the
> application, or don't run the application at all. The result, as we
> all know, is plenty of malware/grayware out there, with users having a
> terrible experience as a result.
>
> A slightly better security model is that of Android, which when you
> install an app shows you a list of what capabilities the app will
> have. This is somewhat better, but we're still relying heavily on the
> user to understand what all capabilities mean. And the user still has
> to make a pretty extreme decision: Either grant all the capabilities
> that the app is requesting, or don't run the app at all.
>
> Another security model that often comes up is the Apple iOS one. Here
> the security model is basically that Apple takes on the full
> responsibility to check that the app doesn't do anything harmful to
> the user. The nice thing about this is that we're no longer relying on
> the user to make informed decisions about what is and what isn't safe.
> However Apple further has the restriction that *only* they can say
> what is safe and what is not. Additionally they deny apps for reasons
> other than security/privacy problems. The result is that even when
> there are safe apps being developed, that the user wants to run, the
> user can't do so if apple says "no". Another problem that iOS has, and
> which has made headlines recently, is that Apple enforces some of its
> privacy policies not using technical means, but rather using social
> means. This has lately lead to discoveries of apps which extracts the
> users contact list and sends it to a server, without the users
> consent. This is things that Apple tries to catch during their review,
> but it's obviously hard to do so perfectly.
>
>
> Proposal:
>
> The basic ideas of my proposal is as follows. For privacy-related
> questions, we generally want to defer to the user. For example for
> almost all apps that want to have access to the users addressbook, we
> should check with the user that this is ok. Most of the time we should
> be able to show a "remember this decision" box, which many times can
> default to checked, so the user is only faced with this question once
> per app.
>
> For especially sensitive APIs, in particular security related ones,
> asking the user is harder. For example asking the user "do you want to
> allow USB access for this app" is unlikely to be a good idea since most
> people don't know what that means. Similarly, for the ability to send
> SMS messages, only relying on the user to make the right decision
> seems like a big risk.
>
> For such sensitive APIs I think we need to have a trusted party verify
> and ensure that the app won't do anything harmful. This verification
> doesn't need to happen by inspecting the code, it can be enforced
> through non-technical means. For example if the fitbit company comes
> to mozilla and says that they want to write an App which needs USB
> access so that they can talk with their fitbit hardware, and that they
> won't use that access to wipe the data on people's fitbit hardware, we
> can either choose to trust them on this, or we can hold them to it
> through contractual means.
>
> However we also don't want all app developers which need access to
> sensitive APIs to have to come to mozilla (and any other browser
> vendor which implement OWA). We should be able to delegate the ability
> to hand out this trust to parties that we trust. So if someone else
> that we trust wants to open a web store, we could give them the
> ability to sell apps which are granted access to these especially
> sensitive APIs.
>
> This basically creates a chain of trust from the user to the apps. The
> user trusts the web browser (or other OWA-runtime) developers. The
> browser developers trust the store owners. The store owners trust the
> app developers.
>
> Of course, in the vast majority of cases apps shouldn't need access to
> these especially sensitive APIs. But we need a solution for the apps
> that do.
>
> [1] https://bugzilla.mozilla.org/show_bug.cgi?id=679966
>
>
> How to implement this:
>
> We create a set of capabilities, such as "allowed to see the users
> idle state", "allowed to modify files in the users photo directory",
> "allowed low-level access to the USB port", "allowed unlimited storage
> space on the device", "allowed access to raw TCP sockets".
>
> Each API which requires some sort of elevated privileges will require
> one of these capabilities. There can be multiple APIs which
> semantically have the same security implications and thus might map to
> the same capabilities. However it should never need to be the case
> that an API requires to separate capabilities. This will keep the
> model simpler.
>
> For each of these capabilities we'll basically have 4 levels of
> access: "deny", "prompt default to remember", "prompt default to not
> remember", "allow". For the two "prompt..." ones we'll pop up UI and
> show the user yes/no buttons and a "remember this decision" box. The
> box is checked for the "prompt default to remember" level.
>
> We then enhance the OWA format such that an app can list which
> capabilities it wants. We could possibly also allow listing which
> level of access it wants for each capability, but I don't think that
> is needed.
>
> When a call is made to the OWA .install function, to install an app,
> the store also passes along a list of capabilities that the store
> entrusts the app with, and which level of trust for these
> capabilities. The browser internally knows which stores it trusts to
> hand out which capabilities and which level of trust it trusts the
> store to hand out. The capabilities granted to the app are basically the
> intersection of these two lists, i.e. the lowest level in either of
> these lists for each capability.
>
> In the installation UI we could enable the user to see which
> capabilities will be granted, and which level is granted. However it
> should always be safe for the user to click yes, so we have a lot of
> freedom in how we display this.
>
> Further, we should allow the user to modify these settings during the
> installation process as well as after an app is installed. We should
> even allow users to set a default policy like "always 'deny' TCP
> socket access", though this is mostly useful for advanced users. If
> the user does that we intersect with this list too before granting
> permissions to an app.
>
> For any privacy-sensitive capabilities, we simply don't grant stores
> the ability to hand out trust higher than one of the "prompt ..."
> levels. That way we ensure that users are always asked before their
> data is shared.
>
> In addition to this, I think we should have a default set of
> capabilities which are granted to installed apps. For example the
> ability to use unlimited amount of device storage, the ability to
> replace context menus and the ability to run background workers (once
> we have those). This fits nicely with this model since we can simply
> grant some capabilities to all installed apps (we'd need to decide if they
> should still be required to list these capabilities in the manifest or
> not).
>
> Another thing which came up during a recent security review is that
> we'll likely want to have some technical restrictions on which sites
> can be granted some of these capabilities. For example something as
> sensitive as SMS access might require that the site uses STS (strict
> transport security) and/or EV-certs. This also applies to the
> stores which we trust to hand out these capabilities.
>
> There's also very interesting things we can do by playing around with
> cookies, but I'll leave that for a separate thread as that's a more
> narrow discussion.
>
> Let me know what you think.
>
> / Jonas

Jim Straus

unread,
Mar 15, 2012, 9:44:59 AM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
Hello -
Comments in below.

On Mar 15, 2012, at 9:30 AM, Justin Lebar wrote:

> To boil this proposal down into its key points:
>
> 1) App lists desired permissions in its manifest.
> 2) App store approves permissions on app's submission.

The App store only needs to approve some of the permissions that it thinks can be safely granted without involving the user (access to the camera for a camera app, for example). Some will probably not be granted a priori (e.g. location) in most/all cases.

> 3) User approves permissions at install time.
> 4) Some permissions need to be granted by a trusted app store before
> the user can grant them.
>
>
> == Lazily approving permissions ==
>
> I'd like us to consider changing (3) to
>
> 3a) User approves permissions when app touches the relevant API.

I think we all generally agree with this. The Android model is known to not be very good.
Yes. And also provide a way for the app to determine whether the permission was granted and the user really doesn't have any contacts, or whether the permission was not granted. This is probably more useful for something like location. It also allows the app to stop querying if it knows that it wasn't granted location, rather than to keep trying in the case that location couldn't be determined at the time.

> Maybe this is obvious, but I wanted to make sure we're all on the same
> page here.
>
> == Trusted app stores and sophisticated users ==
>
> How will sophisticated users work around requirement (4), that
> sensitive APIs can only be granted by trusted app stores? I realize
> this is a fine line.
>
> For distributing beta software, perhaps we could have an "at your own
> risk" store, with lots of scary warnings. I'm not sure if that would
> work for the separate use-case of a developer testing her software --
> I guess it would depend on whether an app submitted to the AYOR store
> could be auto-updated, without any intervention on the store's part.

I think this can be handled by whatever Permissions Manager app we provide. With suitable warnings of course. The user can go into the Manager app and grant permissions to untrusted/beta test apps. Any time the app is updated, the permissions should probably revert to whatever would be granted when initially loaded. This would keep the user from downloading an untrusted app, granting some permissions and then having an update do something malicious. If they had to re-grant the permissions on update, they have to make a conscious decision to grant the permissions again and see the warnings.

Justin Lebar

unread,
Mar 15, 2012, 10:17:27 AM3/15/12
to Jim Straus, dev-w...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
>> 2) App store approves permissions on app's submission.
>
> The App store only needs to approve some of the permissions that it thinks can be safely granted without involving the user (access to the camera for a camera app, for example). Some will probably not be granted a priori (e.g. location) in most/all cases.

I'd expect the app store would approve the whole list of permissions.
That is, the app store might reject an app which wants the location
permission, if the app store deems that's inappropriate. (It's a
chess app, or something, which shouldn't be accessing your location.)

But this is a policy point, not a technical one, so it's not
particularly important right now.

>> ==  Designing APIs to fail gracefully when permission is not granted ==
>>
>> Since users may disallow permission for an app to use any API, we need
>> to design our APIs to fail gracefully when permission is not granted.
>>
>> Page tries to read contact data.  We ask if that's OK.  If not, then
>> we present an empty contact book, instead of throwing an exception or
>> something.  Apps won't be designed to handle the case when permission
>> is not granted, so we need to make not-granted behave similarly to
>> granted, from the page's perspective.
>>
> Yes. And also provide a way for the app to determine whether the permission was granted and the user really doesn't have any contacts, or whether the permission was not granted. This is probably more useful for something like location. It also allows the app to stop querying if it knows that it wasn't granted location, rather than to keep trying in the case that location couldn't be determined at the time.

I'm not sure we want the app to be able to query and see that it was
not granted a permission. Then an app can easily say "in order to use
this app, you need to grant this list of permissions," which is
exactly what we want to avoid!

Surely in some cases an app will be able to determine with high
probability that it was not granted a permission -- few people will
have an empty contact list, for example. But I don't think we should
make it easy for them.

We'll have to design our APIs so they work well with this model. I
don't think it's a problem wrt specifically geolocation, because the
app doesn't poll. Instead, it gets a callback when geolocation data
is available.
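That is, the existing API already has the right shape (usePosition
below is a made-up helper):

  // real geolocation API: the app hands over a callback and waits
  navigator.geolocation.getCurrentPosition(function (pos) {
    // only fires once the user has granted access and a fix exists
    usePosition(pos.coords.latitude, pos.coords.longitude);
  });
  // if permission is never granted, the success callback simply never
  // fires -- there is nothing for the app to poll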

>> For distributing beta software, perhaps we could have an "at your own
>> risk" store, with lots of scary warnings.  I'm not sure if that would
>> work for the separate use-case of a developer testing her software --
>> I guess it would depend on whether an app submitted to the AYOR store
>> could be auto-updated, without any intervention on the store's part.
>
> I think this can be handled by whatever Permissions Manager app we provide. With suitable warnings of course. The user can go into the Manager app and grant permissions to untrusted/beta test apps. Any time the app is updated, the permissions should probably revert to whatever would be granted when initially loaded. This would keep the user from downloading an untrusted app, granting some permissions and then having an update do something malicious. If they had to re-grant the permissions on update, they have to make a conscious decision to grant the permissions again and see the warnings.

What does it mean for an online web-app to be "updated"? An app
might, without changing the code stored on the device, behave entirely
differently one day to the next. Even if you're offline, the app
might have downloaded some scripts and stored them in local storage,
so those scripts don't count as an "update".

Anyway, using the permission manager for this sounds good. But let's
not rely on "not updated" for security, because that doesn't mean
much.

lkcl luke

unread,
Mar 15, 2012, 11:34:24 AM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
On Thu, Mar 15, 2012 at 1:30 PM, Justin Lebar <justin...@gmail.com> wrote:
> To boil this proposal down into its key points:
>
> 1) App lists desired permissions in its manifest.
> 2) App store approves permissions on app's submission.
> 3) User approves permissions at install time.
> 4) Some permissions need to be granted by a trusted app store before
> the user can grant them.
>
>
> == Lazily approving permissions ==

wiki section heading!

> I'd like us to consider changing (3) to
>
>  3a) User approves permissions when app touches the relevant API.

this is a superb idea. where is it recorded on the wiki? *nudge, nudge*

technically it's damn hard though.

* the correct place for verifying permissions is the kernel (not
userspace). it is really really important *not* to have a permissions
system which is in userspace: it'd be security by cooperation. which
is a bit like the unix "good will" virus, if anyone remembers that :)

* thus now what you're saying is that the application must be
PAUSED... at the point at which the actual system-level function call
occurs!

* that means that there needs to be an event notification system from
kernelspace to userspace (!)

* where the notification fires *another* application (a popup)...
bearing in mind now that one of the threads/processes/executables is
presently paused so you *must* not have any mutexes or other locks
which could result in deadlock here....

* the userspace popup gets a yay or nay from the user

* the userspace application hands the answer back to kernelspace

* kernelspace says "yay" or "nay" to the system-call that's blocking

* the userspace application carries on its merry way.

so yes: a simple decision that is technically possible, but one with
massive implications at an implementation level. no blocking for access to
the screen, for a start!

and the "popup" application itself must also not fire off any security
questions which also requires a "yes/no" interactive popup else you'll
get recursive melt-down. whoops.


> ==  Designing APIs to fail gracefully when permission is not granted ==
>
> Since users may disallow permission for an app to use any API, we need
> to design our APIs to fail gracefully when permission is not granted.
>
> Page tries to read contact data.  We ask if that's OK.  If not, then
> we present an empty contact book, instead of throwing an exception or
> something.  Apps won't be designed to handle the case when permission
> is not granted, so we need to make not-granted behave similarly to
> granted, from the page's perspective.
>
> Maybe this is obvious, but I wanted to make sure we're all on the same
> page here.

"obvious" or not, it's important that it go into the functional
analysis document (wiki page).

this is programming. nothing's "obvious", especially where security
is concerned. it's "obvious" that you should not de-reference a null
pointer, so why do people write code that does that, eh? he he :)


> == Trusted app stores and sophisticated users ==
>
> How will sophisticated users work around requirement (4), that
> sensitive APIs can only be granted by trusted app stores?  I realize
> this is a fine line.
>
> For distributing beta software, perhaps we could have an "at your own
> risk" store, with lots of scary warnings.

debian packaging has a comprehensive system for dealing with this.
*dig*, *dig* - surely people know this, right? it's "obvious" to me
that debian and other packaging systems have this? ;)

the package database is divided down into:

* stable - the latest thoroughly-tested release. upgrades are
guaranteed to work.
* backports - makes available new (stable) stuff for older releases
* testing - more comprehensive testing: gets turned into "stable"
every 9-18months
* unstable - stuff that has no complaints for 2 weeks goes into testing
* experimental - really really untested. like... really just you'd
be foolish to use it.
* volatile - this is just data that keeps changing

also there are lines in /etc/apt/sources.list for security-updates
(usually for stable) so that even if people stick with "stable" they
can still get updates for severe flaws in the software.

so when i said that debian distribution maintenance is extremely
comprehensive, i really wasn't arseing about: it really really is just
exceptionally thorough.

1000 opinions over 20 years _does_ eventually result in something that
actually works, if for no other reason than that 1 person does all the
work to solve a particular problem by ignoring the other 999 opinions
:)

l.

Adrienne Porter Felt

unread,
Mar 15, 2012, 1:52:56 PM3/15/12
to David Chan, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, lkcl luke, Lucas Adamski, Mozilla B2G mailing list, ptheriault, Fabrice Desré, cjo...@mozilla.com, Jonas Sicking
https://wiki.mozilla.org/Apps/Security#Management_.2F_granting_of_API_permissions_to_WebApps

Under "Management / granting of API permissions to WebApps", I think two
important points are missing:

4. User should be able to audit usage of permissions (this is different
from viewing what permissions an app has, since that does not tell you how
or when it is used)
5. Apps cannot request permission to do something that is not listed in the
manifest

I'd also like to raise the issue of what happens to permissions when
principals interact. Do webapps have iframes like websites? Can they
embed advertisements? Do the advertisers then get all of the permissions?

There are two ways iframes/permissions don't mix well:

* Child frame requests permission to do something. User thinks that the
dialog belongs to the parent frame, accidentally grants the child frame
access to something.

* Parent frame belongs to an untrusted app with no privileges. It opens a
child frame with a trusted app in it. Let's say the child frame performs a
privileged action as soon as it is opened, using a permanently-granted
permission. The untrusted parent frame has now caused some action to occur
without the user realizing it.

I think the solution is to not let cross-origin iframes wield permissions
that have dialogs. (Permissions that are always hidden from the user, or
permissions that are controlled via auditing only would be OK for an iframe
to access.)

lkcl luke

unread,
Mar 15, 2012, 2:51:42 PM3/15/12
to Adrienne Porter Felt, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, David Chan, ptheriault, Fabrice Desré, cjo...@mozilla.com, Jonas Sicking
On Thu, Mar 15, 2012 at 5:52 PM, Adrienne Porter Felt <a...@berkeley.edu> wrote:
> https://wiki.mozilla.org/Apps/Security#Management_.2F_granting_of_API_permissions_to_WebApps
>
> Under "Management / granting of API permissions to WebApps", I think two
> important points are missing:
>
> 4. User should be able to audit usage of permissions (this is different from
> viewing what permissions an app has, since that does not tell you how or
> when it is used)
> 5. Apps cannot request permission to do something that is not listed in the
> manifest

adrienne, hi,

i've added these two excellent points to the wiki page on your behalf.

> I'd also like to raise the issue of what happens to permissions when
> principals interact.  Do webapps have iframes like websites?  Can they embed
> advertisements?  Do the advertisers then get all of the permissions?

:)

ok, *deep breath*... :) as this is a complex discussion with some 65
messages, i understand it's hard to follow absolutely everything, so
i'm happy to repeat things in the context you raise, apologies to
everyone else who's seen this before, and to those who are familiar
with the B2G architecture.

gaia apps happen to be implemented as iframes. these are "chrome"
iframes (which chris kindly explained: firefox is actually implemented
as iframes! all that URL bar etc. is actually in a separate isolated
frame which has nothing to do with the main webpage-displaying
iframe). the security context of one of these frames is completely
separate from a webapp iframe which contains embedded advertisments
(standard URL etc.)

so it's really rather confusing, as there are separate security
contexts and entirely separate concepts for which the *exact* same
technology (iframes) is being utilised.

if you are familiar with x-windows, you've kinda said "do webapps
have windows like websites?", you see what i'm getting at? :)

anyway, the point is that there are separate security requirements for:

* the root frame (top-level one into which the top gaia HTML is loaded)
* individual gaia apps (sub-iframes, one per app)
* any gaia app that opens up a public-facing (URL-based) iframe - the
browser app is one such
* iframes *within* that iframe - as in "iframes that you normally
think of iframes being used for".

man that's as confusing as hell, but there simply isn't a glossary yet
for describing this stuff and giving it some unique unambiguous
terminology.

anyway: within that context, you can see that the "standard" job to
which the "normal" web site security applies (XSS) would do pretty
well to deal with "advertisements" etc... *as long as* those pages are
completely and totally cut off from the B2G APIs.

ah. damn. i've just thought of something. i'll document it
separately. it's to do with "eval".



> There are two ways iframes/permissions don't mix well:

adrienne, would you mind re-phrasing what you've written, in light of
the explanation of the *four* different types of iframe, above?

i realise this is extra work, but it's too easy to misunderstand.
"child frame" could mean any one of 1, 2, or 3 depending on whether
you're talking about a parent of type 2, 3 or 4, respectively.

the security context boundaries therefore change.

in fact, do you even want all "child" frames to even be _aware_ that
they even have a "parent"? this is the Cambridge Capabilities
Security Model (or, it's a "Capabilities" security model). if you
don't want something to be accessed you *don't even make code aware of
its existence*!

l.

lkcl luke

unread,
Mar 15, 2012, 2:59:20 PM3/15/12
to Adrienne Porter Felt, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, David Chan, ptheriault, Fabrice Desré, cjo...@mozilla.com, Jonas Sicking
https://wiki.mozilla.org/Apps/Security#Open_questions

point 3 - eval, which someone raised earlier.

ok, i'm dealing with a situation in pyjamas-desktop where it can't
actually execute javascript. so what has to be done is: you inject a
script node into the body of the HTML using python DOM bindings. the
code there stores its responses in a hidden iframe. the data in the
hidden iframe is monitored for changes (from the python code, using
python DOM bindings).

you can see what's coming, can't you.

in this way, any "security" measures which prevent or prohibit
arbitrary execution of code within one security context can be
_completely_ bypassed through this technique, when it is deployed in a
B2G app.

any gaia app that is "locked down" and is not given permission to
execute arbitrary code from remote sources.... well... all you have to
do is use this iframe trick, cooperate with an external web site to
provide the arbitrary code, then get it into the gaia/B2G app security
context with the above trick, and run "eval" on it.

this would actually be incredibly hard to spot within a rogue app....
unless eval was locked down (within the gaia/B2G app security
context).

l.

SUN Haitao

unread,
Mar 15, 2012, 2:59:18 PM3/15/12
to Adrienne Porter Felt, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, lkcl luke, Lucas Adamski, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Mozilla B2G mailing list
A security model that only considers packages seems insufficient:

As far as I can tell, there are 4 (or more) types of possible runnables on B2G:
0) Kernel, drivers (including virtual device drivers), CLI tools
(including services), browser engine and (maybe) plug-ins.
1) Packed programs written in HTML/CSS/JS.
2) Installed non-local Web apps (including sites).
3) Non-installed Web apps (including sites).

(It seems all type 1 runnables can be implemented as type 2 or 0. Maybe
we needn't treat them as a separate type.)

For type 0 & 1, a deployment mechanism like apt/yum works fine (and
seems required for type 0). But for type 2 & 3, such a mechanism may
not cover them. I'm afraid that many apps will be implemented as type
2 or 3 for smoothness of (re)deployment (and this is a huge advantage
of web apps over native ones). So we still need to think about what to
do when there is no package at all.

lkcl luke

unread,
Mar 15, 2012, 3:58:03 PM3/15/12
to SUN Haitao, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, Lucas Adamski, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Adrienne Porter Felt, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 6:59 PM, SUN Haitao <sunh...@devtaste.com> wrote:
> A security model that only considers packages seems insufficient:
>
> As far as I can tell, there are 4 (or more) types of possible runnables on B2G:
> 0) Kernel, drivers (including virtual device drivers), CLI tools
> (including services), browser engine and (maybe) plug-ins.
> 1) Packed programs written in HTML/CSS/JS.
> 2) Installed non-local Web apps (including sites).
> 3) Non-installed Web apps (including sites).

sun, hi, this is very useful categorisation. i've added what you
wrote as a section here:
https://wiki.mozilla.org/Apps/Security#Types_of_Runnables

i'm going to add a type 4 as well, if that's ok, which is the
conceptual equivalent of "/usr/local"
https://wiki.mozilla.org/Apps/Security#Other_.28topics_that_don.27t_fall_into_above_proposals.29

it's

> (It seems all type 1 runnables can be implemented as type 2 or 0. Maybe
> we needn't treat them as a separate type.)
>
> For type 0 & 1, a deployment mechanism like apt/yum works fine

i believe so, yes.

> (and seems required for type 0).

yes absolutely. it would be insane to go writing an entire new
packaging deployment system when there are perfectly good ones out
there. oh, for completeness it's probably worthwhile mentioning
openembedded: they've "adopted" the .deb system, renamed .deb to "ipk"
and slightly simplified it (removed all of the dependency-tracking and
much of the safety mechanisms, whoops, but it is smaller code. much
smaller)


> But for type 2 & 3, such a mechanism may not
> cover them. I'm afraid that many apps will be implemented as type 2 or 3
> for smoothness of (re)deployment (and this is a huge advantage of web
> apps over native ones). So we still need to think about what to do when there is
> no package at all.

yes i definitely agree. it would be good to have some input as to
what's actually envisaged (and possible / practical), here.

i have to say that the idea of dynamic loading of gaia apps makes me
rather twitchy. especially as you can achieve the same result by
going through the "install" process, and get better security all
round. both require a network connection, so why would you bypass the
security process? :)

... or were you referring to something else?

l.

Lucas Adamski

unread,
Mar 15, 2012, 5:03:11 PM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
Regarding 3 and 3a) I was under the impression that those were not mutually exclusive options. Andreas had explained it to me as the prompt for permissions at install time would be opt-in (for each individual/set of permissions). Any that were not opted into could be prompted when needed. That complicates things and maybe that alone makes it unattractive, but we could do both.

For failing gracefully, it would seem useful for apps to be able to differentiate between a permission issue and some other failure mode. Given location for example, an app might warn the user when they just don't have a good GPS fix, rather than when they've denied access to location entirely.
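For location specifically, the W3C geolocation error callback already
distinguishes these cases (usePosition below is a made-up success
handler):

  function usePosition(pos) { /* render the fix */ }

  navigator.geolocation.getCurrentPosition(usePosition, function (err) {
    if (err.code === err.PERMISSION_DENIED) {
      // denied outright -- whether apps should get to see this case
      // for every API is exactly the open question in this thread
    } else if (err.code === err.POSITION_UNAVAILABLE ||
               err.code === err.TIMEOUT) {
      // no usable fix right now; worth suggesting the user retry
    }
  });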
Lucas.


On Mar 15, 2012, at 6:30 AM, Justin Lebar wrote:

> To boil this proposal down into its key points:
>
> 1) App lists desired permissions in its manifest.
> 2) App store approves permissions on app's submission.
> 3) User approves permissions at install time.
> 4) Some permissions need to be granted by a trusted app store before
> the user can grant them.
>
>
> == Lazily approving permissions ==
>
> I'd like us to consider changing (3) to
>
> 3a) User approves permissions when app touches the relevant API.
>
> The reason is: It's often not clear to me why an app wants permission
> X -- for example, it may not be clear to a new user why Yelp wants to
> read the user's location. But if a dialog asking "do you want to
> share your location with Yelp?" popped up right after I clicked the
> "restaurants nearby" button in the app, it would be clearer to me what
> the app was going to do if granted permission.
>
> (3a) also solves the problem of granting apps more permissions when
> they upgrade. On Android, I'm prompted to approve a set of
> permissions during app upgrade, whenever the app wants a new
> permission. This is pretty lame -- the upgrade process should be
> seamless, but instead I have to click through a list of permissions.
> Of course I'm going to grant the new permissions. I just want to get
> on with my life.
>
> But if we lazily prompt for permissions, then I'm not bothered until
> the app tries to use those new permissions, at which point I hopefully
> understand exactly why the app wants that permission. (The worst
> case, when the app requests all its permissions immediately upon
> loading, with no explanation, is no worse than requesting permissions
> at install time.)
>
> == Designing APIs to fail gracefully when permission is not granted ==
>
> Since users may disallow permission for an app to use any API, we need
> to design our APIs to fail gracefully when permission is not granted.
>
> Page tries to read contact data. We ask if that's OK. If not, then
> we present an empty contact book, instead of throwing an exception or
> something. Apps won't be designed to handle the case when permission
> is not granted, so we need to make not-granted behave similarly to
> granted, from the page's perspective.
>
> Maybe this is obvious, but I wanted to make sure we're all on the same
> page here.
>
> == Trusted app stores and sophisticated users ==
>
> How will sophisticated users work around requirement (4), that
> sensitive APIs can only be granted by trusted app stores? I realize
> this is a fine line.
>
> For distributing beta software, perhaps we could have an "at your own
> risk" store, with lots of scary warnings. I'm not sure if that would
> work for the separate use-case of a developer testing her software --
> I guess it would depend on whether an app submitted to the AYOR store
> could be auto-updated, without any intervention on the store's part.
>

Justin Lebar

unread,
Mar 15, 2012, 5:09:25 PM3/15/12
to Lucas Adamski, dev-w...@lists.mozilla.org, Jonas Sicking, dev...@lists.mozilla.org
On Thu, Mar 15, 2012 at 5:03 PM, Lucas Adamski <lada...@mozilla.com> wrote:
> Regarding 3 and 3a) I was under the impression that those were not mutually exclusive options.  Andreas had explained it to me as the prompt for permissions at install time would be opt-in (for each individual/set of permissions).  Any that were not opted into could be prompted when needed.  That complicates things and maybe that alone makes it unattractive, but we could do both.

I guess we could do both. In practical terms, though, users will
agree to whatever is in front of them, even if they are able to opt
out and opt in at a later point in time.

So from the perspective of giving apps only the permissions they
actually need, I don't see why (3) and (3a) together is better than
(3a) alone. Is the idea just that it's a smoother experience for
users, since we can batch the permission grants into one window?

> For failing gracefully, it would seem useful for apps to be able to differentiate between a permission issue and some other failure mode.  Given location for example, an app might warn the user when they just don't have a good GPS fix, rather than when they've denied access to location entirely.

This is a tradeoff.

Personally, I'd be OK having an app give me a less-than-ideal error
message after I deny it permission to do something. I'd rather that
than have an app intentionally lock me out because I didn't give it
permission to read my location.

-Justin

David Chan

unread,
Mar 15, 2012, 5:14:50 PM3/15/12
to Jim Straus, dev-w...@lists.mozilla.org, phill...@gmail.com, Mozilla B2G mailing list, dev-se...@lists.mozilla.org, mozilla dev webapps, Jonas Sicking
I broke this out into its own heading
https://wiki.mozilla.org/Apps/Security#Centralized_permissions_manager

Similar ideas were discussed later in the thread. I don't believe
I've seen any objections to having permissions centralized for
control / auditing purposes.

There is still an open question on how a permissions manager should
respond in the event of a DENIED permission. One suggestion is to not
error out but return some default/safe value e.g. no contacts if an
app is not granted Contacts information. A concern of this proposal
is that an app may continue to poll for a permission until it is
granted or an app may pop up a dialog to have the user grant the
permission.
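
To make the default/safe value suggestion concrete, here is a rough
sketch. checkPermission and contactStore are hypothetical
placeholders, not real Gecko APIs:

    // Hypothetical sketch: a DENIED permission yields a safe default
    // instead of an error, so a denial is indistinguishable from an
    // empty address book. None of these names are real Gecko APIs.
    function getContacts(appId, onSuccess) {
      if (checkPermission(appId, 'contacts') === 'DENIED') {
        onSuccess([]); // same callback shape as the granted case
        return;
      }
      contactStore.getAll(onSuccess);
    }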

I don't think the dialogue prompt will be a big issue if we have
contextual permissions. If ChessApp asks for geolocation on start,
is denied, then pops up a dialogue saying it needs geolocation,
I would hope the user realizes something is fishy.

To address the polling issue, we could try exponential backoff
when an app requests a permission. The app would have to wait
1, 2, 4, 8 seconds etc. between requests. Of course the permissions
manager should always be able to change the permission even if
the backoff for an app is at 10 years.
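
As a rough sketch of what that backoff might look like (all names
hypothetical):

    // Hypothetical sketch: per-app exponential backoff between
    // permission prompts. The Permissions Manager UI would bypass
    // this entirely, so the user can always change a grant.
    var backoff = {}; // appId -> { delayMs, nextAllowedAt }

    function mayPrompt(appId) {
      var now = Date.now();
      var state = backoff[appId] || { delayMs: 1000, nextAllowedAt: 0 };
      if (now < state.nextAllowedAt) {
        return false; // still backing off; suppress the prompt
      }
      state.nextAllowedAt = now + state.delayMs;
      state.delayMs *= 2; // 1s, 2s, 4s, 8s, ...
      backoff[appId] = state;
      return true;
    }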


David Chan


----- Original Message -----
> From: "Jim Straus" <jst...@mozilla.com>
> To: "Jonas Sicking" <jo...@sicking.cc>
> Cc: dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, phill...@gmail.com, "mozilla dev webapps"
> <mozilla.d...@googlegroups.com>, "Mozilla B2G mailing list" <dev...@lists.mozilla.org>
> Sent: Tuesday, March 13, 2012 1:09:06 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>
> Hello all -
> I've been sketching out an implementation of permissions. I've
> laid out some code framework, but wanted to throw this out for
> validation. Assumptions: that B2G/Firefox will have separate
> processes for each app/tab. This is already declared to be true
> (but not implemented) for B2G. This proposal doesn't require IPC,
> but I believe we need to support it. Also note that this should
> be able to replace the existing Permission and PermissionManager
> in gecko.

Justin Lebar

unread,
Mar 15, 2012, 5:31:02 PM3/15/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, mozilla dev webapps, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
> There is still an open question on how a permissions manager should
> respond in the event of a DENIED permission. One suggestion is to not
> error out but return some default/safe value e.g. no contacts if an
> app is not granted Contacts information. A concern of this proposal
> is that an app may continue to poll for a permission until it is
> granted or an app may pop up a dialog to have the user grant the
> permission.

I don't understand this. Can you give a concrete example of what the
problem is here, for example with the current geolocation API?

All of our APIs are async, so there's no polling involved, as far as I
understand.


David Chan

unread,
Mar 15, 2012, 5:44:06 PM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, mozilla dev webapps, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
Sorry I should have used the same nomenclature that was used earlier.

Polling as in the app repeatedly asks for geolocation because it "failed".
Having different failures for denied vs can't lock onto GPS would solve
this, but I don't know how much it matters for an app.

David

Jim Straus

unread,
Mar 15, 2012, 5:50:14 PM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Lucas Adamski, dev...@lists.mozilla.org, Jonas Sicking
Hello Justin -
An app that could do something reasonable if it couldn't get a resource (e.g. location), but decides to lock you out if you deny permission, is just being obnoxious. I doubt any such app would be very popular, and there would be a lot of pressure from users not to take such action. I'm not particularly worried about that case.
I am slightly worried about the case where an app runs through all the permissions to see what it can and can't do. There might be a security issue there. If we decide that IS an issue, we could limit reading permissions to just those that have actually been used (i.e. if you try to use the geolocation API you can then check to see if you were denied, but if you haven't used the API, you get a null response). That would be a bit of a bookkeeping headache, but doable.
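A rough sketch of that bookkeeping (all names hypothetical):

    // Hypothetical sketch: an app can only read the status of a
    // permission it has actually tried to use; everything else
    // reads as null, so the permission table can't be enumerated.
    var used = {};   // appId -> { apiName: true }
    var grants = {}; // appId -> { apiName: 'GRANTED' or 'DENIED' }

    function recordUse(appId, api) {
      (used[appId] = used[appId] || {})[api] = true;
    }

    function readPermission(appId, api) {
      if (!used[appId] || !used[appId][api]) {
        return null; // never used the API: no information leaked
      }
      return grants[appId] ? grants[appId][api] || null : null;
    }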
-Jim Straus

Justin Lebar

unread,
Mar 15, 2012, 5:50:18 PM3/15/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, mozilla dev webapps, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 5:44 PM, David Chan <dc...@mozilla.com> wrote:
> Sorry I should have used the same nomenclature that was used earlier.
>
> Polling as in the app repeatedly asks for geolocation because it "failed".
> Having different failures for denied vs can't lock onto GPS would solve
> this, but I don't know how much it matters for an app.

We design our APIs so they don't have this problem, in general.

This failure mode can happen even when the user allows geolocation.
Suppose I access my maps app from the subway. If our geolocation api
were poorly designed, the maps app would continuously poll until I got
out of the subway.

In the case of the geolocation API, you request the location (either
as one-shot or continuous), and then you wait. You eventually get
called back once GPS is available.

If the user denies permission to access geolocation, that's the same
as being stuck in the subway permanently.

Or at least, that's how I think it should work.
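
Concretely, with the W3C geolocation API the app just registers
callbacks and waits; under the model above, a permanent denial would
simply mean the success callback never fires. (showMap is a
hypothetical app function, and the denial behavior sketched here is
the proposal, not necessarily what Gecko implements today.)

    // Real W3C geolocation API shape; showMap() is hypothetical.
    navigator.geolocation.watchPosition(
      function onFix(pos) {
        // Fires once a fix exists -- maybe only after leaving the
        // subway, or never, if the user denied access.
        showMap(pos.coords.latitude, pos.coords.longitude);
      },
      function onError(err) {
        // Transient errors could land here; under the model above a
        // permanent deny would not surface as PERMISSION_DENIED.
      }
    );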

lkcl luke

unread,
Mar 15, 2012, 5:53:07 PM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, mozilla dev webapps, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 9:31 PM, Justin Lebar <justin...@gmail.com> wrote:
>> There is still an open question on how a permissions manager should
>> respond in the event of a DENIED permission. One suggestion is to not
>> error out but return some default/safe value e.g. no contacts if an
>> app is not granted Contacts information. A concern of this proposal
>> is that an app may continue to poll for a permission until it is
>> granted or an app may pop up a dialog to have the user grant the
>> permission.
>
> I don't understand this.  Can you give a concrete example of what the
> problem is here, for example with the current geolocation API?

i believe david is referring to an app carrying out psychological
bullying / blackmail attacks on its users, by repeatedly demanding
permission for access, would that be a correct assessment, david?

l.

Jim Straus

unread,
Mar 15, 2012, 6:00:26 PM3/15/12
to lkcl luke, dev-w...@lists.mozilla.org, Justin Lebar, phill...@gmail.com, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, mozilla dev webapps, Mozilla B2G mailing list
I'm not sure an app can effectively bully the user. If the user selects "permanently deny", the dialog won't ever come up again (obviously, the user can change their mind by going to the Permissions Manager app). So, a chess program that wants to use geolocation would try to use the API. The dialog would come up. The user can either allow or deny, and either once or always. If once is selected, the next time the API is used the dialog will come up again. If always is selected, the dialog never comes up again. The app can't change the choices and the app can't display its own dialog (well, depending on what our dialog looks like, I guess it could display a dialog, but it would do it no good as it won't have permission to modify permissions). All the user has to do is select the always option and that's that. An app COULD complain to the user if they are denied access and try to get them to go to the Permissions Manager app, but I suspect any app that was so abusive would be deleted very quickly.
-Jim Straus

lkcl luke

unread,
Mar 15, 2012, 6:09:36 PM3/15/12
to David Chan, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, mozilla dev webapps, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 9:14 PM, David Chan <dc...@mozilla.com> wrote:
> I broke this out into its own heading
> https://wiki.mozilla.org/Apps/Security#Centralized_permissions_manager

i'm reading this section... it's very hard to understand the concept
being proposed. even the purpose of the proposed "Centralised
permissions manager" is hard for me to grok, for which i apologise.
it's particularly confusing for me because i understand how SE/Linux
works.

SE/Linux is fundamentally implemented at kernel level. any
significant system call, be it a file/socket operation such as read,
write, open, ioctl or other such as fork, exec, mmap etc., all of
these aren't just "allowed", they're audited and controlled... by the
*kernel*.

for proper security - for proper enforcement of permissions - it
*has* to be implemented at the kernel level. it just does. you
simply can't have security implemented in userspace: you've a
snowball's chance in hell of calling it "security".

so from that perspective, proposing the existence of a "centralised
permissions manager" is a misnomer. it's the kernel, and that's the
end of the matter. (sorry, but it is. even android implemented their
security system kernel-side).

so i believe you _mayyy_ be referring to a system which helps users
to interact with granting or denying access to certain features and
information.

parts of the description _may_ be referring to a system which helps
the developers to *create* the sets of permissions that will end up
being associated with the app.

it is very hard to tell, and i get completely lost when reading the
bit about "uri signatures".

i must be missing something, for which i apologise.

l.

lkcl luke

unread,
Mar 15, 2012, 6:14:50 PM3/15/12
to Justin Lebar, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, mozilla dev webapps, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 9:50 PM, Justin Lebar <justin...@gmail.com> wrote:
> On Thu, Mar 15, 2012 at 5:44 PM, David Chan <dc...@mozilla.com> wrote:
>> Sorry I should have used the same nomenclature that was used earlier.
>>
>> Polling as in the app repeatedly asks for geolocation because it "failed".
>> Having different failures for denied vs can't lock onto GPS would solve
>> this, but I don't know how much it matters for an app.
>
> We design our APIs so they don't have this problem, in general.

it's not an APIs issue, justin. i believe david is referring to
dealing with the case where apps try to bully the user into granting
the permission against their will and better judgement, just to get
rid of the app repeatedly advising them... *in the app*, after they've
explicitly said "NO".

scenario:

* application requests access to geolocation
* user says "no".
* application responds by creating a timer that goes off every 30 seconds
* on each timer ping, application puts up a popup "you didn't give me
access to geolocation. GIVE ME ACCESS TO GEOLOCATION".

that's harassment.

what do you do about this (extreme) situation, and subtle (less
extreme) variants thereof?

l.

Adrienne Porter Felt

unread,
Mar 15, 2012, 6:16:44 PM3/15/12
to lkcl luke, dev-w...@lists.mozilla.org, Justin Lebar, Jim Straus, phill...@gmail.com, mozilla dev webapps, David Chan, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
>
> scenario:
>
> * application requests access to geolocation
> * user says "no".
> * application responds by creating a timer that goes off every 30 seconds
> * on each timer ping, application puts up a popup "you didn't give me
> access to geolocation. GIVE ME ACCESS TO GEOLOCATION".
>
> that's harassment.
>
> what do you do about this (extreme) situation, and subtle (less
> extreme) variants thereof?


People can uninstall apps (or not visit websites). It seems like that's
what should happen here.

lkcl luke

unread,
Mar 15, 2012, 6:21:04 PM3/15/12
to Jim Straus, dev-w...@lists.mozilla.org, Justin Lebar, phill...@gmail.com, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, mozilla dev webapps, Mozilla B2G mailing list
On Thu, Mar 15, 2012 at 10:00 PM, Jim Straus <jst...@mozilla.com> wrote:
> I'm not sure an app can effectively bully the user.

[....]

>  An app COULD complain to the user if they are denied access and try to get them to go to the Permissions Manager app, but I suspect any app that was so abusive would be deleted very quickly.

ok, that was the answer i was looking for. if that's reasonable to
rely on that happening, then that's ok.

is the more subtle case worth considering? say... the app putting
up instructions to the user on how to change the permissions, and
making it look like part of the OS? phishing attacks, basically.

i'm not clutching at straws with this, i'm just being thorough.

l.

lkcl luke

unread,
Mar 15, 2012, 6:37:02 PM3/15/12
to Adrienne Porter Felt, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, David Chan, ptheriault, Fabrice Desré, cjo...@mozilla.com, Jonas Sicking
2012/3/15 lkcl luke <luke.l...@gmail.com>:

>  anyway, the point is that there are separate security requirements for:
>
>  * the root frame (top-level one into which the top gaia HTML is loaded)
>  * individual gaia apps (sub-iframes, one per app)
>  * any gaia app that opens up a public-facing (URL-based) iframe - the
> browser app is one such
>  * iframes *within* that iframe - as in "iframes that you normally
> think of iframes being used for".
>
> man that's as confusing as hell, but there simply isn't a glossary yet
> for describing this stuff and giving it some unique unambiguous
> terminology.

https://wiki.mozilla.org/Apps/Security#Concepts_to_be_given_Official_Definitions

i added this section because even i'm finding it hard to keep track
of the concepts. it would be very useful to the clear discussion of
this topic to have some "official input" that creates some terms to
refer to the above, in terminology that the "Official" people dealing
with the B2G project are themselves familiar with.

i'm struggling here somewhat because although i fully grok the chrome
concept i don't actually use the term myself so don't _actually_ know
if "the chrome concept" is the correct way to refer to what *i*
understand is going on.

argh.... :)

*gloop*, *gloop*, drowning in definitions...

l.

Asa Dotzler

unread,
Mar 15, 2012, 6:43:04 PM3/15/12
to mozilla...@lists.mozilla.org
This is an easy problem to solve. Simply expose an "and don't ask me again" option.

- A

Ben Francis

unread,
Mar 15, 2012, 6:45:56 PM3/15/12
to dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, dev...@lists.mozilla.org
I've had a read over the wiki page and there's certainly a lot of
information to take in and I think there's lots still to discuss.

Here are some of my (slightly naive) questions and opinions on what's been
written so far...

== Distribution/management of WebApps ==

"A telco can decide which stores to implicitly trust on their devices" -
what does this mean?

== App instance/version ==

Number 2 sounds closest to what I see a web app as.
* A web site which hosts an app manifest providing a name, description,
icons and a list of the permissions the app may ask for (a minimal example
follows this list)
* Once "installed", the app's origin (or list of origins, or subset of an
origin defined by a path mask in the manifest) is granted certain
additional permissions by the user (or can ask for them at first use) and
resources are downloaded and cached as specified by an app cache manifest,
if provided. These cached resources will not be removed from the cache
without the user's consent.
* When the appcache manifest is updated on the server, new versions of
resources may be downloaded and cached locally.
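
A minimal example of such a manifest might look like this (field names
loosely modeled on the Open Web Apps manifest proposal; the exact
schema may differ):

    {
      "name": "ChessApp",
      "description": "Play chess against a friend or the computer",
      "icons": { "64": "/icons/chess-64.png" },
      "permissions": ["geolocation", "storage"]
    }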

== Types of runnables ==
0) None of the items listed here (drivers, cli tools, browser engine or
plugins) should be handled by Open Web Apps, but rather a separate process.
Out of scope.
1) Web apps should never be "packaged" in the sense of a bunch of assets
zipped up into one file and downloaded; they should always be hosted, with
every resource having a persistent, unique URL. Otherwise they're not "web"
apps, they're just apps.
2) All web apps should be non-local web apps, but with the ability to cache
resources locally.
3) Non-installed web apps are essentially web sites viewed through a
browser app, or bookmarks.
4) Gaia apps will still use the Open Web Apps system and Gaia apps will be
in the Mozilla Marketplace, we will just pre-populate the appcache and
installed manifests at the time of shipping a device.

== Trusted store with permissions delegation ==

Surely there should be no central authority for permissions requests other
than the user?

Yes, listing requested permissions in the manifest seems sensible, although
not essential if permissions requests are not made up-front at install
time. Still useful for information purposes for the user though.

Why would a root store be able to grant permissions on behalf of the user
or delegate that right?

Why does the store have anything to do with what permissions a user wants
to give to an app? Is this because of regulatory restrictions on dialer and
homescreen apps? Surely an app from any store (or self-hosted) can ask for
any permission, it's up the user to decide whether to grant it.

Self-hosted apps should not need the intervention of a store for a user to
grant them permissions.

== FLASK ==

Huh? This doesn't sound like the web.

== Debian Keyring (Package Management) for distribution of apps ==
I love Debian, and apt. But their place is on the desktop or server to
install native applications and their dependencies - not web apps.

Web apps should be hosted, not packaged. How do you sign code that's
constantly changing? Sometimes when web apps are updated there are
different versions of resources on different nodes of a cluster behind a
load balancer, or different versions of the app are rolled out to subsets
of users at a time. How can you sign code in this environment?

I trust this origin to do these things, sometimes I want proof they really
are who they say they are, and sometimes I want them to ask me before they
do something. Isn't that enough?

== Permissions manager ==
Users should definitely be able to view the permissions which have been
given to which apps, and change those permissions at any time.

== Other ==
SSL shouldn't be mandatory for all apps, but maybe for apps which request
certain sensitive permissions. "It's just a stopwatch, why does it need
encrypting?!"

You should be able to self-host an app and have a direct relationship with
the user, without the involvement of a third party (store).

--
Ben Francis
http://tola.me.uk

Jim Straus

unread,
Mar 15, 2012, 6:57:12 PM3/15/12
to lkcl luke, dev-w...@lists.mozilla.org, Justin Lebar, phill...@gmail.com, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, mozilla dev webapps, Mozilla B2G mailing list
I'm not sure phishing works on a phone. There is no password. Yes, an app could put up a display that looks like the Permissions Manager app. Yes, the user could touch whatever controls would grant permissions. No, the permissions would not change. No, an app can't modify the display of the Permissions Manager app. No passwords are entered, so I'm not sure what the malicious app is phishing for. Can you give an example of what it might be doing?
Note that this doesn't mean phishing can't occur. An app could look like my bank app if I loaded it from an untrusted source. But it doesn't need any permission besides connecting to its server to phish in this case.
Another note. I suspect we may allow someone to deliver a "Better Permissions Manager". I would think this would be the kind of app that would want LOTS of inspection before being granted any permissions. And the only permission it should be granted is "Can modify permissions". I might go so far as to enforce that in the actual code that implements the permissions management. I would also not allow for Permissions Management to be set to "Deny Always". Otherwise you could be locked out of your device.

David Chan

unread,
Mar 15, 2012, 7:01:00 PM3/15/12
to lkcl luke, dev-w...@lists.mozilla.org, David Barrera, dev-se...@lists.mozilla.org, Jim Straus, Lucas Adamski, Mozilla B2G mailing list, ptheriault, Fabrice Desré, cjo...@mozilla.com, Adrienne Porter Felt, Jonas Sicking
Yeah, I was thinking more of the bullying case, which is something I don't
think Android and iOS apps have to deal with much currently. The
option to remember allow/deny addresses the issue, I believe.

Extremely hypothetical case. What if an app wants a permission but you
only want to allow it for a certain action? I don't think the current model
could accommodate this.

Chess app wants write access to store current game layout, and wants
write access to store tracking information.

I can't think of a good way to address this without a horrible user
experience. You could also think of a situation where an app asks
for a permission for a relatively benign use, then escalates to
abusing the permission.


David Chan


David Chan

unread,
Mar 15, 2012, 7:45:46 PM3/15/12
to lkcl luke, dev-w...@lists.mozilla.org, Jim Straus, phill...@gmail.com, mozilla dev webapps, dev-se...@lists.mozilla.org, Jonas Sicking, Mozilla B2G mailing list
I was trying to transcribe an earlier post by Jim in which he mentioned
some permissions manager work he has been doing.

Replies inline.

----- Original Message -----
> From: "lkcl luke" <luke.l...@gmail.com>
> To: "David Chan" <dc...@mozilla.com>
> Cc: "Jim Straus" <jst...@mozilla.com>, dev-w...@lists.mozilla.org, phill...@gmail.com, "Mozilla B2G mailing
> list" <dev...@lists.mozilla.org>, dev-se...@lists.mozilla.org, "mozilla dev webapps"
> <mozilla.d...@googlegroups.com>, "Jonas Sicking" <jo...@sicking.cc>
> Sent: Thursday, March 15, 2012 3:09:36 PM
> Subject: Re: [b2g] OpenWebApps/B2G Security model
>
> On Thu, Mar 15, 2012 at 9:14 PM, David Chan <dc...@mozilla.com>
> wrote:
> > I broke this out into its own heading
> > https://wiki.mozilla.org/Apps/Security#Centralized_permissions_manager
>
> i'm reading this section... it's very hard to understand the concept
> being proposed. even the purpose of the proposed "Centralised
> permissions manager" is hard for me to grok, for which i apologise.
> it's particularly confusing for me because i understand how SE/Linux
> works.
>
> SE/Linux is fundamentally implemented at kernel level. any
> significant system call, be it a file/socket operation such as read,
> write, open, ioctl or other such as fork, exec, mmap etc., all of
> these aren't just "allowed", they're audited and controlled... by the
> *kernel*.
>
> for proper security - for proper enforcement of permissions - it
> *has* to be implemented at the kernel level. it just does. you
> simply can't have security implemented in userspace: you've a
> snowball's chance in hell of calling it "security".
>

I agree that controls have to be implemented at a low level. This goes
back to the post by SUN Haitao with the various levels / rings. We
could place a permissions manager at level 1, but it is pointless, as
pointed out by you and others. I'll change the heading to say kernel
instead of centralized. If another proposal comes along with the
permissions manager at a different level, we can debate the merits of
that.


> so from that perspective, proposing the existence of a "centralised
> permissions manager" is a misnomer. it's the kernel, and that's the
> end of the matter. (sorry, but it is. even android implemented
> their
> security system kernel-side).
>
> so i believe you _mayyy_ be referring to a system which helps users
> to interact with granting or denying access to certain features and
> information.
>
> parts of the description _may_ be referring to a system which helps
> the developers to *create* the sets of permissions that will end up
> being associated with the app.
>
> it is very hard to tell, and i get completely lost when reading the
> bit about "uri signatures".
>


I believe "URI signature" is a term Jim was using to identify an app
requesting the permissions. All permissions grants/access should be
able to use this "signature" to identify the app. We briefly
mentioned UUID in another thread, or hash based off a key. There
needs to be a guarantee that even if another app spoofed the
identifier, that only the real app can prove it owns the identifier.

Jim, is my interpretation correct?
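
If that's the idea, a rough sketch of the proof might look like this
(signChallenge, verifySignature and makeRandomNonce stand in for
whatever public-key primitive is chosen; all names hypothetical):

    // Hypothetical sketch: the platform challenges the app with a
    // random nonce; only the holder of the private key behind the
    // identifier can produce a valid signature over it.
    function verifyAppIdentity(app) {
      var nonce = makeRandomNonce();            // hypothetical helper
      var signature = app.signChallenge(nonce); // app's private key
      // The identifier is (or commits to) the app's public key, so
      // a spoofed identifier fails this check.
      return verifySignature(app.publicKey, nonce, signature);
    }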


David Chan

Lucas Adamski

unread,
Mar 16, 2012, 12:32:35 AM3/16/12
to Ben Francis, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, dev...@lists.mozilla.org
On Mar 15, 2012, at 3:45 PM, Ben Francis wrote:

> I've had a read over the wiki page and there's certainly a lot of
> information to take in and I think there's lots still to discuss.
>
> Here are some of my (slightly naive) questions and opinions on what's been
> written so far...
>
> == Distribution/management of WebApps ==
>
> "A telco can decide which stores to implicitly trust on their devices" -
> what does this mean?

A given device may ship with several preconfigured app stores. Some may be trusted to provide privileged/Gaia apps, others only to provide "normal" B2G apps.
Lucas.

lkcl luke

unread,
Mar 16, 2012, 12:33:59 AM3/16/12
to Ben Francis, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, dev...@lists.mozilla.org
On Thu, Mar 15, 2012 at 10:45 PM, Ben Francis <b...@tola.me.uk> wrote:
> I've had a read over the wiki page and there's certainly a lot of
> information to take in and I think there's lots still to discuss.

he he - yeah it's a monster area this.

> Here are some of my (slightly naive) questions and opinions on what's been
> written so far...

eyy, new eyes are always valuable.

> == Distribution/management of WebApps ==
>
> "A telco can decide which stores to implicitly trust on their devices" -
> what does this mean?

i've updated https://wiki.mozilla.org/Apps/Security#Distribution_.2F_management_of_WebApps
to provide an analogy which helps illustrate.

> == App instance/version ==
>
> Number 2 sounds closest to what I see a web app as.

yes.... yet... conceptually, think "what if that https:// link was to
a web app and the file extension of that web app was '.deb' or '.ipk'
or '.rpm' "? it's an https link, right? therefore it's a "web app"?
:)

> * A web site which hosts an app manifest which provides a name,
> description, icons and lists permissions the app may ask for

all of these things can be covered by .deb or .rpm (not sure about
.ipk; it's a bit simplified)

> * Once "installed", the app's origin (or list of origins, or subset of an
> origin defined by a path mask in the manifest) is granted certain
> additional permissions by the user (or can ask for them at first use) and
> resources are downloaded and cached as specified by an app cache manifest,
> if provided. These cached resources will not be removed from the cache
> without user's consent.

/var/lib/dpkg/ etc....

> * When the appcache manifest is updated on the server, new versions of
> resources may be downloaded and cached locally.

apt-get update followed by apt-get upgrade :)

you see how simple that is? and, importantly, how little
code/infrastructure needs to be written in order to meet - and exceed
- the requirements?

> == Types of runnables ==
> 0) None of the items listed here (drivers, cli tools, browser engine or
> plugins) should be handled by Open Web Apps, but rather a separate process.
> Out of scope.

why is it "out of scope"? how should the B2G application (B2G the
executable) be packaged and upgraded? what about the linux kernel,
and other support infrastructure? whose problem is it to define the
security and secure distribution of all those components?

this is a serious question (one that i meant to raise given the
misleading nature of the name picked for the wiki page, which implies
that the discussion's scope must specifically be limited and
restricted to "app" security *only* when in fact it's critical to
consider the whole deal)

> 1) Web apps should never be "packaged" in the sense of a bunch of assets
> zipped up into one file and downloaded; they should always be hosted, with
> every resource having a persistent, unique URL. Otherwise they're not "web"
> apps, they're just apps.

ok... so if that's allowed, then as that adobe AIR security document
explains, you now need to create a separate security context which
restricts what that code can get access to.

however that's not to say that there is not room for *both* types -
"apps" and "web apps" as you've distinguished them - is there? the
fact that they could both be written as JS/HTML (and python if i had
my way... *sigh*...) doesn't actually have anything to do with it.

(convenient link to the AIR intro)
https://www.adobe.com/devnet/air/articles/introduction_to_air_security.html

so, that means needing a glossary which explains the difference. and
a security model devised and discussed which covers each.

but... you know what? now that i think about it, i _really_ don't
see the difference between "app" and "web app" as you've defined it,
if you create a new URI type such as "apt://{packagename}" or
"yum://{packagename}".

think about that for a minute. if you define the concept of "remote
web app" to be "a dynamically downloadable gaia app", you still have
to do security vetting, you still have to cache it locally, you still
have to present the user with the opportunity to vet its
permissions...

so in my mind, there's no difference between "remote web app" as
you've defined it and "just an app" as you've defined it, once you've
created that uri type "apt://" or "yum://". you see what i'm saying?

plus, yeah, crucially, some apps may require upgrades of other
functionality (dependencies) - how would a "remote web app" keep
everything up-to-date if it's not hooked into a proper dependency
management system?


> 2) All web apps should be non-local web apps, but with the ability to cache
> resources locally.

_should_ be is a strong word, ben :)

> == Trusted store with permissions delegation ==
>
> Surely there should be no central authority for permissions requests other
> than the user?

ah - yes. but are the users technically competent to evaluate the
safety of the code? no, they're not. they haven't got time. so
whilst the word "permissions" is the wrong word to use, the concepts
in this section are still kinda sound.

> Yes, listing requested permissions in the manifest seems sensible, although
> not essential if permissions requests are not made up-front at install
> time. Still useful for information purposes for the user though.
>
> Why would a root store be able to grant permissions on behalf of the user
> or delegate that right?

yes i did wonder about that one. i believe this whole section may
have confused various concepts together, using the word "permissions"
to refer to delegated trust that is inherent in digital certificates.

ultimately this is about trust. in the absence of time and technical
ability of the user to do a full code audit, can the user *trust* that
the code executed on their system is safe? someone else makes a
declaration, "i believe this is safe" and then digitally signs the
code.

private/public key signing can be delegated, if you put the right
infrastructure in place.

> == FLASK ==
>
> Huh? This doesn't sound like the web.

yeah, i know. AIR security does not "sound like the web". android's
security model "does not sound like the web". you still have to have
it.

> == Debian Keyring (Package Management) for distribution of apps ==
> I love Debian, and apt. But their place is on the desktop or server to
> install native applications and their dependencies - not web apps.

that's... i'm not going to judge your opinion, ben.

> Web apps should be hosted, not packaged.

the only difference between "hosted" and "packaged" is a matter of
the uri (or file extension type).

> How do you sign code that's constantly changing?

you don't: it means it hasn't been audited. thus it should not be
trusted. thus it should not be allowed through the "normal" channels.

my opinion is: if you _really_ want "constantly changing" code, then
all bets are off. btw, somewhere buried in the looong list, only 48
hours ago (!) the idea of bypassing the packaging system (and the
security system) was discussed: it was proposed to allow
intelligent people to install stuff in the B2G-conceptual-equivalent
of "/usr/local"

> Sometimes when web apps are updated there are
> different versions of resources on different nodes of a cluster behind a
> load balancer, or different versions of the app are rolled out to subsets
> of users at a time. How can you sign code in this environment?

exactly.

> I trust this origin to do these things, sometimes I want proof they really
> are who they say they are, and sometimes I want them to ask me before they
> do something. Isn't that enough?

yes... but the devil is in the details :)

> == Permissions manager ==
> Users should definitely be able to view the permissions which have been
> given to which apps, and change those permissions at any time.
>
> == Other ==
> SSL shouldn't be mandatory for all apps, but maybe for apps which request
> certain sensitive permissions. "It's just a stopwatch, why does it need
> encrypting?!"

i believe you're confusing a need for secrecy with a need for
validation.

> You should be able to self-host an app and have a direct relationship with
> the user, without the involvement of a third party (store).

yes. the analogous equivalent is that of e.g. debian-multimedia.org
which mr marillat happily maintains. another example: i worked with
phil hands on a small project, we actually set up a debian-formatted
repository for that purpose. that repository was "self-hosted". it
meant adding a line "deb http://hands.com/~phil/debian main" or
something like that to the /etc/apt/sources.list. (as that was almost
4 years ago now, before the apt-keyring stuff, we didn't create a
separate keyring package like mr marillat has)

but if that's "too complicated", another way to allow the "self-host"
concept is to allow ".deb" or ".rpm" to be easily downloaded and
installed. that will still end up going through the security review
and permissions process.

beyond that, if even _that_ is "too complicated" for people to "self
host", then you always have the B2G-conceptual-equivalent of
/usr/local as a fallback.

l.

Ben Francis

unread,
Mar 16, 2012, 7:47:18 AM3/16/12
to Lucas Adamski, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, dev...@lists.mozilla.org
On Fri, Mar 16, 2012 at 4:32 AM, Lucas Adamski <lada...@mozilla.com> wrote:

> On Mar 15, 2012, at 3:45 PM, Ben Francis wrote:
> > "A telco can decide which stores to implicitly trust on their devices" -
> > what does this mean?
>
> A given device may come ship with several preconfigured app stores. Some
> may be trusted to provide privileged/Gaia apps, others only to provide
> "normal" B2G apps.
>

But surely there is no difference between a B2G app and a Gaia app. A B2G
app is an Open Web App and a Gaia app is just an Open Web App written by
Mozilla.

A device may ship with one or more app store apps which have permission to
install and un-install Open Web Apps.

A device may also ship with pre-installed apps which already have
permissions assigned to them by the vendor on the user's behalf.

The user can view and change the permissions given to all apps (including
app store apps) using the permissions manager.

Ben

SUN Haitao

unread,
Mar 16, 2012, 9:05:30 AM3/16/12
to lkcl luke, dev-w...@lists.mozilla.org, David Barrera, ptheriault, Jim Straus, Lucas Adamski, Jonas Sicking, David Chan, dev-se...@lists.mozilla.org, Fabrice Desré, cjo...@mozilla.com, Adrienne Porter Felt, Mozilla B2G mailing list
>  sun, hi, this is very useful categorisation.  i've added what you
> wrote as a section here:
>  https://wiki.mozilla.org/Apps/Security#Types_of_Runnables
Call me 'Haitao'. That is my given name.

>  i'm going to add a type 4 as well, if that's ok, which is the
> conceptual equivalent of "/usr/local"
>    https://wiki.mozilla.org/Apps/Security#Other_.28topics_that_don.27t_fall_into_above_proposals.29
I believe this should be a sub-type of type 1.

>  i have to say that the idea of dynamic loading of gaia apps makes me
> rather twitchy.  especially as you can achieve the same result by
> going through the "install" process, and get better security all
> round.  both require a network connection, so why would you bypass the
> security process? :)
>
>  ... or were you referring to something else?
But this is how today's web sites work. This is what web developers
and users know best. It also provides a very smooth (if not the
smoothest) deployment experience.

I'm afraid that forcing all web apps to use a different deployment
mechanism is neither good nor practical.

I think we need a solution to make web apps work like native apps and
be built like web sites.

Ben Francis

unread,
Mar 16, 2012, 9:08:54 AM3/16/12
to lkcl luke, dev-w...@lists.mozilla.org, dev-se...@lists.mozilla.org, dev...@lists.mozilla.org
tl;dr: Devices ship with pre-installed apps (including app store apps)
which already have permissions granted to them by the device vendor; the
user can review and revoke these permissions if they wish. Web apps are
only web apps if they're written with web technologies and hosted on the
web. In the interests of working towards a web standard, we should follow
the existing model of Google Chrome Hosted Apps, with three key differences
proposed by the Open Web Apps team which make the model more open. Updating
Gecko and the kernel is outside the scope of Open Web Apps.

On Fri, Mar 16, 2012 at 4:33 AM, lkcl luke <luke.l...@gmail.com> wrote:

> > "A telco can decide which stores to implicitly trust on their devices" -
> > what does this mean?
>
> i've updated
> https://wiki.mozilla.org/Apps/Security#Distribution_.2F_management_of_WebApps
> to provide an analogy which helps illustrate.
>

OK, so as I said above:

"A device may ship with one or more app store apps which have permission to
install and un-install Open Web Apps."

This is an explicit permission granted by the vendor on behalf of the user
for pre-installed app store apps to install and un-install apps. The user
can review and revoke this permission if they wish, so it isn't implicit -
just a permission like any other.

yes.... yet... conceptually, think "what if that https:// link was to
> a web app and the file extension of that web app was '.deb' or '.ipk'
> or '.rpm' "? it's an https link, right? therefore it's a "web app"?
>

.exe files can be served from an HTTP URL too and they're not web apps.
Equally just because an app is written using HTML, CSS and JavaScript, that
doesn't make it a web app either.

In my view an app is only part of the web if it's written using web
standards (HTML, CSS and JavaScript) and each of those resources can be
addressed with a URI over HTTP.

I believe WebOS, Windows 8 Metro, WAC and Webinos apps are all written
using web technologies, but I wouldn't class any of them as open web apps
because as I understand it they're packaged, not hosted. Once they're
downloaded they're no longer part of the web.

If we're looking for an established model for installable web apps, our
friends at Google have a working implementation of an app store for hosted
web apps and a secure browser they can be installed on. Chrome Hosted Apps
are written in web technologies and hosted on the web
http://code.google.com/chrome/apps/docs/developers_guide.html

The Open Web Apps team has proposed a model very similar to the Chrome Web
Store model with three important differences:

1) There shouldn't just be just one web app store run by one corporation,
anyone should be able to run their own web app store and users should be
able to install apps from stores they trust, without the intervention of
Mozilla or anyone else.
2) Web app developers should be able to list their web apps in multiple app
stores and even host their own apps on their own web server and have a
direct relationship of trust with the user.
3) The JSON app manifest and referenced icons should be hosted on the web,
rather than packaged in a funny proprietary zip-like .crx file - note that
Google is actually also now proposing to make this change themselves, to
make projects like Mozilla's Apps project easier!
http://code.google.com/intl/en-US/chrome/apps/docs/no_crx.html

With a view to creating an open standard for installable web apps, my
opinion is that we should concentrate on a model as close as possible to
the one already implemented by Google, focusing on these three key
differences (which are themselves potentially very disruptive and have
plenty of challenges of their own).

:)
>
> > * A web site which hosts an app manifest which provides a name,
> > description, icons and lists permissions the app may ask for
>
> all of these things can be covered by .deb or .rpm (not sure about
> .ipk it's a bit simplified)
>

Probably, but that doesn't mean they'd make a very good web standard. Both
of these are great packaging systems for native applications with great
dependency management. I don't think we need that much complexity.

> == Types of runnables ==
> > 0) None of the items listed here (drivers, cli tools, browser engine or
> > plugins) should be handled by Open Web Apps, but rather a separate
> process.
> > Out of scope.
>
> why is it "out of scope"? how should the B2G application (B2G the
> executable) be packaged and upgraded? what about the linux kernel,
> and other support infrastructure? whose problem is it to define the
> security and secure distribution of all those components?
>

A "Gecko Updater" is on the B2G roadmap and I believe will allow Gecko to
be updated in a similar way to how Firefox is updated.

I don't know how the Linux kernel and other system packages will be
updated, but this isn't a concern for Open Web Apps. Why would Chrome,
Opera or IE care about how B2G updates its kernel? We're talking about a
different level in the stack.

however that's not to say that there is not room for *both* types -
> "apps" and "web apps" as you've distinguished them - is there?


No, in fact Google has both hosted and packaged apps in the Chrome Web
Store. My personal opinion is that packaged apps are bad for the web and
that if we improve the hosted model (including app cache) we don't need the
packaged model at all, but if people at Mozilla want to work on packaged
apps too then there's nothing to stop them. An open packaged model is
better than a closed one.


> so in my mind, there's no difference between "remote web app" as
> you've defined it and "just an app" as you've defined it, once you've
> created that uri type "apt://" or "yum://". you see what i'm saying?
>

A Microsoft Word document downloaded over ftp:// isn't a web page, so why
would a Python application downloaded over apt:// be a web app?


>
> plus, yeah, crucially, some apps may require upgrades of other
> functionality (dependencies) - how would a "remote web app" keep
> everything up-to-date if it's not hooked into a proper dependency
> management system?
>

The only dependencies between web apps I know of are solved by web services
over HTTP.

> > Surely there should be no central authority for permissions requests other
> > than the user?
>
> ah - yes. but are the users technically competent to evaluate the
> safety of the code? no, they're not. they haven't got time. so
> whilst the word "permissions" is the wrong word to use, the concepts
> in this section are still kinda sound.
>

To be honest, other than verifying that an app developer is who they say
they are and displaying this verification in the app description, I'm not
sure how feasible it is for the app store to verify that a web app (which
has a server-side component) is safe without having full access to the
entire source code of the app and checking every change that's made to that
source code. I could be wrong, but this seems to me to be more of a
contractual issue of trust between the owner of the store and app
developers than a technical one.