On 3/24/2012 1:27 AM, ianG wrote:
>> === Web pages ===
>> Description: A normal web page can request access to a certain set of WebAPIs.
>>
>> Use cases: Web pages would like to perform functions historically limited to plugins or other binary browser
>> extensions. They might want to capture audio or video input to stream to a server or process client-side, use
>> various cool input devices for games, enable desktop notifications for new emails or tweets, etc. It might
>> optionally be possible to "bookmark" a given web app, but this does not imply any additional trust.
>>
>> Technical characteristics: No manifest, does not need to be installed or cached locally, and has no client-visible
>> versioning scheme. No restrictions on transport or content outside of normal browser model (because it is just
>> normal browser content).
>>
>> Security & Privacy Characteristics: The user does not necessarily have any relationship with or trust in this site,
>> so these APIs require explicit user opt-in at runtime, and should present users with a choice where they can be
>> realistically expected to understand the inherent security and privacy risks. It's possible these APIs may be limited
>> entirely to things that present only privacy or annoyance risk, not security risk.
>>
>> Scope: Security permissions are granted for a fully qualified domain name.
>
>
> Who or what grants these security permissions?
The agent running the app does, technically, whether that's a browser, B2G OS, or some other app container.
> It seems that we are assuming that the browser has these capabilities; can read audio/video and can export that as an
> API to the remote website. So the model then seems to reduce to something like: user clicks in her browser some site,
> which enacts the API which asks the browser to open some privileged channel. Browser being a good agent asks user for
> permission.
Not quite. It simply permits or denies the code in question access to that API. That code is always technically
running locally (regardless of where or how it was delivered). No APIs or channels are directly "exported" to any server.
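To make the model concrete, here is a minimal sketch of the per-FQDN grant table described above. All names here are hypothetical illustrations, not any real browser internals: the agent keeps grants keyed by origin, shows a prompt on first use, and the page's code simply gets a yes/no when it calls the API locally.

```javascript
// Sketch of an agent-mediated, per-origin permission model (names invented).
// Nothing is "exported" to a server; the agent just gates local API calls.
class PermissionAgent {
  constructor() {
    // origin (fully qualified domain) -> Set of granted API names
    this.grants = new Map();
  }

  // Stand-in for the runtime prompt shown to the user. A real agent would
  // present UI here; this sketch simply simulates the user approving.
  promptUser(origin, api) {
    return true;
  }

  // Called when page code at `origin` tries to use `api`.
  request(origin, api) {
    let granted = this.grants.get(origin);
    if (granted && granted.has(api)) return true; // remembered grant
    if (!this.promptUser(origin, api)) return false; // user declined
    if (!granted) this.grants.set(origin, (granted = new Set()));
    granted.add(api);
    return true;
  }
}

const agent = new PermissionAgent();
console.log(agent.request("https://example.com", "camera")); // prompts, then true
console.log(agent.request("https://example.com", "camera")); // cached grant, true
```

Note the scope matches the proposal: the grant key is the fully qualified domain, so a grant to one site implies nothing about any other.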
>
>> === Installed applications with WebAPI access ===
>> Description: A web application installed from a specific server, discovered from one of potentially many web app stores.
>>
>> Use cases: Persistent apps that the user opens to perform specific tasks. They perform functions that native apps on
>> a given platform would be expected to perform. While some runtime dialogs might be expected, a typical feature-rich
>> app should not result in a flood of permission requests. Social networking is a typical use case, where a single app
>> may require access to the camera/microphone for chat, contacts for integration, photo access, and the ability to send
>> SMS, trigger notifications, determine the user's location, etc.
>>
>> Technical characteristics: An app manifest is referred to from an app store and retrieved from the app host. An app
>> store is required (is it?). The app store can limit the privileges the app is granted. The app is stored in the
>> appcache on the client. The manifest contains version information and an explicit list of files that comprise the
>> app, so that the appcache can be effectively updated from the server when necessary. Otherwise app is always
>> instantiated from local appcache.
>>
>> Security & Privacy Characteristics: The user makes a choice to install an app, which implies a limited degree of trust.
>> That limited trust may permit implicit access to certain low-risk APIs, and explicit access to the bulk of the rest.
>> Implicit access to APIs that could compromise the integrity of the OS or expose the user to direct financial risk
>> is prohibited. Note there's a big difference between a user approving an OS-mediated dialog to dial a number, and
>> an app that can dial a phone number directly without any user involvement.
>>
>> Scope: Security permissions are granted to code enumerated in the manifest.
>
> Again, as above, it seems that we are assuming a local agent on the user's computer to mediate this. Especially, in
> this case, the local agent (browser) will download an application that it understands, and run it internally in an
> environment of the browser's creation.
In this case yes, the proposal is they would be downloaded and stored in the appcache. This proposal is not without
controversy; many people would also like to maintain the current web application model which puts few security
constraints on content and does not require any code authentication. Unfortunately, this model is also insecure by default.
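For concreteness, a manifest along the lines described (version plus an explicit file list, so the appcache knows exactly what to refresh) might look like the following. The field names are purely illustrative, not any actual spec:

```javascript
// Hypothetical manifest shape: version info plus an enumerated file list,
// as described in the proposal. Field names are invented for illustration.
const manifest = {
  name: "ExampleChat",
  version: "1.4.2",
  // Explicit list of files that comprise the app, so the appcache can be
  // updated precisely rather than re-fetching everything.
  files: [
    "/index.html",
    "/js/app.js",
    "/css/style.css"
  ],
  // Permissions the store may narrow before granting.
  permissions: ["camera", "microphone", "contacts"]
};

// The appcache only re-fetches when the server's manifest version differs;
// otherwise the app is always instantiated from the local copy.
function needsUpdate(local, remote) {
  return local.version !== remote.version;
}

console.log(needsUpdate(manifest, { ...manifest, version: "1.4.3" })); // true
console.log(needsUpdate(manifest, manifest)); // false
```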
> Is that right? So this is the "browser plugin model" ?
No, it's the browser app model. Apps can already prompt for certain APIs that require user interaction for consent
(plugin installation, addon installation, geolocation, file download / upload).
Plugins are for extending the browser's functionality for web apps.
> Then, there seems to be a missing category of applications, those being downloaded outside the assumed local agent's
> control. In which case, they cannot be controlled and they are potentially outside scope of the discussion.
>
> *but* not really, because they still exist, and the user has a choice in application technologies. These represent an
> uncontrollable competition, which represents a low-tide watermark. The competition means that the local download
> agent must do at least as good a job, and in some areas substantially better. Elsewise users+developers switch.
>
> So, it may be useful to list this "out of scope" set as the competition.
If a user chooses to download and install an app directly, outside the runtime container context, then for all
practical purposes they are downloading a native desktop application. It is true, though, that we aren't focusing on
(and don't have a story for) offline or "unmanaged" installs.
>
>>
>> === Installed applications with OS-level API access ===
>> Description: Some apps are integral components of the device UI, and need direct access to highly sensitive APIs.
>> These apps are approved by a trusted 3rd party (i.e. carrier or manufacturer) app store for implicit access to
>> dangerous APIs.
>>
>> Use cases: User might want to swap out their default phone dialer or SMS client for a different one. Some APIs may
>> be too difficult to secure so such apps may only be granted privileges after the app store has obtained certain
>> assurances from the developer.
>>
>> Technical characteristics: Largely the same as the previous "Installed applications with WebAPI access" category,
>> except for the extra trust granted to it by the store.
>
>
> This I'm not understanding, sorry. How can the store grant "trust" (a bad word) on a user's computer? Surely when
> the user downloads and installs the application, she is doing so in full responsibility, and in that act, to use the
> above terminology, she is granting "trust" to that application. It comes as it is - the developer did the
> construction, the app store is either shipping it or not. As is, or not.
> She may be doing this on the recommendation of the app store. But that's not the same as "granting".
A computer is probably not the most relevant example; phones are more interesting. In the event of very dangerous APIs,
we may not want to permit access to them unless a 3rd party is willing to vouch for the safety of that app. In the case
of the phone, the carrier might only permit a specific set of apps to replace the stock phone dialer, for example. The
user still chooses to install an app, but this way the store is explicitly providing this as a "phone dialer app". For
such environments, the alternative might simply be "don't support this API."
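The three tiers discussed in this thread (plain web page, installed app, store-certified app) can be sketched as a simple policy table. The API classification below is invented for illustration; only the shape of the model comes from the proposal:

```javascript
// Illustrative three-tier policy: each trust level gets some implicit
// (no-prompt) APIs and some promptable ones; everything else is denied.
// Which API lands in which bucket here is purely hypothetical.
const POLICY = {
  web:       { implicit: [],                          promptable: ["geolocation", "camera"] },
  installed: { implicit: ["notifications"],           promptable: ["camera", "contacts", "sms"] },
  certified: { implicit: ["notifications", "dialer"], promptable: ["contacts", "sms"] }
};

// Returns "allow", "prompt", or "deny" for a given trust level and API.
function decide(level, api) {
  const p = POLICY[level];
  if (p.implicit.includes(api)) return "allow";
  if (p.promptable.includes(api)) return "prompt";
  return "deny";
}

console.log(decide("web", "dialer"));       // "deny"
console.log(decide("installed", "sms"));    // "prompt"
console.log(decide("certified", "dialer")); // "allow"
```

The "don't support this API" alternative mentioned above corresponds to the dialer simply never appearing in any level's lists.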
>
>> Security & Privacy Characteristics: Implicit access to dangerous APIs means the risk to the user or carrier should
>> this type of app be compromised is very high.
>
> Conceptually, all I am seeing to differentiate this group and the previous one is a personal judgement that the prior
> set of APIs are "probably ok" and the latter set are "possibly dangerous" ? As this seems to be rather vague, the
> distinction probably isn't enough to justify an architectural distinction. Which is long words to say, these two
> groups seem the same, it's just that the second group matters more (to some), and tests the architecture more (if
> their judgement means anything).
>
TBD.
>> For example, this type of app can dial a phone number directly without any user involvement or knowledge.
>
> OK. So this "badness" leads to another point. If (for whatever logic) we are led to the point where a carrier /
> manufacturer has "granted" some permissions that are considered to be highly interesting ("dangerous" ?) then we need
> to look at the fuller meaning of that.
>
> Dangerous means they can go wrong. If they never go wrong there isn't an issue and we don't care.
>
> In contrast to that, when they do go wrong, this is the moment when we care. We are forced to take care, we can no
> longer pretend. So let's look at that, as if it is important.
>
> Say the AngryBudgies app did go wrong and turned out to be HungryAlligators in disguise. It does damage (doesn't
> matter what).
>
> What now? This is where the rubber of a security system meets the road of reality. What happens when it all falls
> apart?
>
> Does Alice re-install? Buy a new computer? A new house? Does she damn Carol the Carrier on some ebay-like reputation
> outlet? Does she sue for damages? Does developer Carol's insurance fund pay out? Does Carol's private vigilante
> police force hunt down the Alligators and reinstall with prejudice? Does Bob the WebAPI builder form a standards
> committee to deal with this, and in the process shut out any user complaints?
>
> Without an answer to this, we're talking tech only. Worthless. We need to understand the full business cycle we're
> trying to protect, because only in that context do we understand the attacks.
>
> Maybe the answer is nothing? In which case we do "best efforts, all love, no responsibility" which is the case with
> most Internet security models. On the other hand, do we go an extra mile? Which?
>
It's "our" responsibility to build a model with the right incentives and mitigations in place to maximize the number of
great apps developers can build while minimizing the risk to our users. We are responsible for the overall health and
security of this ecosystem.
It's the developers' responsibility to build great apps that don't put the user at risk, and their fault when they don't.
The current web app model makes this extremely hard to do well, and very easy to mess up. We can't expect web
developers to be security experts; it's our responsibility to build a model that nudges them (sometimes forcefully)
towards taking the necessary precautions.
It's the app store's responsibility to provide apps that don't put users at undue security and privacy risk, and to remove
apps when they do. This is true both for malicious apps and for apps with serious security issues.
It's the user's responsibility to make informed decisions when choosing which apps to trust. It's everyone else's
responsibility to ensure they are presented with accurate information and relevant decisions, so that they can do so
effectively.
Nobody is responsible for delivering a panacea, however.
Lucas.