Sid,
Afaik, the current thinking in B2G is for apps to request permissions by
specifying a list of desired permissions in the manifest. There has been
talk of a "reason" field, which would accompany each permission for
basically the purposes you describe.
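Purely as a sketch (neither the field names nor the exact shape are
settled, and "camera" / "contacts" are just placeholder permission
names), a manifest entry along those lines might look something like:

    "permissions": {
      "camera": {
        "reason": "Take the photos you choose to stash."
      },
      "contacts": {
        "reason": "Let you share a stashed photo with a friend."
      }
    }

The exact format matters less than the idea that each requested
permission carries a short, app-supplied explanation that reviewers and
UI could surface.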
I think this could be a useful feature for app reviewers (be they
marketplace staff, community members, or just security/privacy-minded
users). We would need to implement it in such a way that it could not be
used as a social engineering mechanism, though. For example, if we just
presented a dialog with the permission and reason together, the app
could seek to confuse the user. Your Stashy camera app, for instance,
might try to trick the user into giving access to the address book by
prompting something like "Permission: Addressbook, Reason: Allow your
camera to take photos." We need to make sure the permission being
granted is clear.
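One way to keep it clear (purely a sketch, not a proposed UX) is to make
the permission label system-generated and visually separate from the
app-supplied reason, so the untrusted text can't masquerade as the thing
being granted, e.g.:

    Stashy is requesting access to: Address book
    The app says it needs this to: "Allow your camera to take photos."
    [Allow]  [Deny]

Here the first line always comes from the platform and names the real
capability; the second is clearly attributed to the app.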
But I do see the value per your points below, so I think we should have
a manifest format that supports this, and then figure out where and how
this information is presented, and what to do when this information
isn't available.
Regards,
Paul
On 5/23/12 8:31 AM, Sid Stamm wrote:
> Hey All,
>
> tl;dr: Apps should also explain how they'll use data from various
> permissions (like access to your contacts) since use of capabilities
> varies so much across apps.
>
> The Apps Permission Security model sets forth mechanisms by which users
> can allow or deny these capabilities to apps (and includes mechanisms to
> ensure that users make informed choices that are not burdensome).
>
> While the security model facilitates access control for these
> capabilities, it does not dictate what the apps do with data obtained
> from these capabilities.
>
> EXAMPLE: Perhaps an app, Stashy, may be granted access to my device's
> camera, but the B2G security model won't restrict what the app does with
> the bits that come from the camera sensor. These bits could be recorded
> and shipped off to the app's server at stashy.com and later used to
> identify me as I walk past other cameras controlled by the app developers.
>
> Everywhere the security model engages with its user, we can provide an
> opportunity for apps to represent their *usage intentions* to the user.
> This can be considered a commitment for limited data use by the app
> developers.
>
> I'd like to propose an update to the security model Lucas has been
> guiding through discussion[0]. Specifically, I'd like to add to the
> application permissions model so apps not only ask for permission to use
> various capabilities, but also explain their intended use of the data
> they gain from such access. I'm calling these "usage intentions".
>
> EXAMPLE: The same Stashy app in the previous example requests permission
> to use the camera in my device, and includes a usage intention that says
> "we will edit the camera video stream to paint a mustache on your face
> and display the composite image on your device's screen -- we won't
> record photos or videos from your camera." When the user is asked to
> authorize this capability for the app, he is shown not only what the
> app wants to *access* but also *why*.
>
> Ways this can improve users' privacy:
>
> == On the Hook ==
> Apps that make promises via usage intentions have essentially provided
> assurance to the user that their data will be used in a certain way. If
> it turns out the app developers use the data for another purpose (say
> actually recording Stashy photos and posting them on a public Twitter
> feed), users have a clear basis for showing that the app is operating
> deceptively.
>
> == Pre-Validation with Privacy Policies ==
> Many apps will have a privacy policy. An app store has the opportunity
> to pre-screen apps based on the usage intentions in their manifest and
> the privacy policy they provide. So long as the two are consistent,
> users have a commitment from the app about what it intends to do with
> their data. Apps whose stated intentions are inconsistent or vague can
> be rejected from an app store.
>
> == Auditing ==
> To provide a "trail of activity", B2G or other app runtime could
> additionally maintain a capability-access log for each app that keeps
> track of requests for capabilities and the usage intentions over time.
> That way a curious user could analyze the log to see how often an app
> used a permission and why it used it, which could help surface abuse
> of their consent.
>
> What do you think?
>
> -Sid
>
> [0] https://wiki.mozilla.org/Apps/Security/Discussion
>