Capabilities container


Robin Berjon

Mar 23, 2011, 11:35:05 AM
to mozill...@googlegroups.com
Hi all,

I hope that you're all enjoying a delightful hangover from the FX4 launch, congrats on that!

A little while back there was a short thread here[0] about the "capabilities" field in the OWA manifest. The idea is that installed web apps could be granted special superpowers not normally given to arbitrary web pages, or I'm guessing perhaps more accurately that they'd get said powers without requiring the series of user-interaction hoops that one would expect for arbitrary content.

This ties directly in to a larger problem, which I might characterise as follows: while there are plenty of sources out there that I would gladly entrust with greater capabilities to run code on my devices, there are none that I would trust never to be stupid. Or, put differently, with great(er) power comes great(er) XSS mitigation.

This here idea^Wbraindump is not intended to compete with or replace CSP, but rather to add another layer of protection. In a nutshell, as a developer CSP protects your users from your own stupidity. Containers try to protect the users of a user agent from the stupidity of other developers.

The basic idea is as follows. Let's take the example of a web app that's been granted extra privileges (by what magic does not matter). Those capabilities are *not* available to any script inlined or directly loaded from that page. Instead, in order to access them the developer must load a script that wishes to access them with something like the following (syntax up for bikeshedding):

var myContainer = new Container("super-script.js");

Those capabilities can then only be accessed from the code inside that container. Same-origin restrictions are naturally imposed on where that code can be loaded from, and in general the same loading mechanism as used with Workers is employed. The contained script has its own context that cannot be accessed directly from the global context (I don't believe that the reverse needs to be true).

Communication with the contained code can be imagined to happen in several ways. A simple one is to use the now common postMessage messaging as is done with Workers. Another interesting one is to treat that container as a CommonJS module featuring an "exports" object which can be used to, well, export stuff.

Obviously, this does not completely protect against stupidity. If the container one way or another exposes an rmRf() method then it can still be compromised by XSS. But it does provide an extra measure of mitigation at what I believe to be an acceptable cost.

Then there's a second piece of functionality that we can pile onto this approach. What with the increasing number of extra capabilities that we're starting to be able to pile onto web apps, several people have noticed that it can be a bit of a pain to ask permission for each and every one of them individually. The folks working on Web Notifications have drafted a solution[1], DougT has been noodling over it[2], and Anssi has prototyped a variant of this as a playground/skunkworks idea[3]. I've been wondering if we couldn't make permissions and containers work together as it seems like they would be vernacularly close. Something like:

var myContainer = new Container("super-script.js", ["geolocation", "contacts", "camera"]);

The UI would work as it does for multi-perms. The contained script would then have a way of finding out which permissions were granted (again, syntax can vary). Thinking ahead, there's a point at which this turns into a variant of one of the CommonJS require() transports. That may prove to be a good idea but I don't want to go there before mulling over whether this is a useful approach.

Thoughts? Ideas? Screams?

[0] http://groups.google.com/group/mozilla-labs/browse_thread/thread/1fa857342a6888c1/126f5bb966c10d60?lnk=gst&q=capabilities
[1] http://dev.w3.org/2006/webapi/WebNotifications/publish/FeaturePermissions.html#idl-Permissions
[2] http://dougt.org/wordpress/2011/03/device-api-permission-management/
[3] http://dev.w3.org/2009/dap/docs/feat-perms/feat-perms.html

--
Robin Berjon - http://berjon.com/

st...@sjbaker.org

Mar 23, 2011, 1:01:59 PM
to mozill...@googlegroups.com
Let me make sure that I understand what you're saying here:

"Let's take the example of a web app that's been
granted extra privileges (by what magic does not matter).
Those capabilities are *not* available to any script inlined
or directly loaded from that page. Instead, in order to access
them the developer must load a script that wishes to access them
with something like the following (syntax up for bikeshedding):

var myContainer = new Container("super-script.js");

Those capabilities can then only be accessed from the code
inside that container."

So all of the code that uses this extra capability has to reside in
a single source file that's walled off from the rest of the application?

For example - I'd like for there to be a capability to reposition the
mouse cursor from within JavaScript. It's generally an undesirable
browser capability because evil advertisers could keep your mouse entombed
in their advert and naive users could easily imagine that they are trapped
and do something drastic like rebooting their computers to get out of it.
But for something like a WebGL first person shooter game, you absolutely
need this kind of capability in order to be able to steer the camera
around in a "conventional" manner. (HINT: Could someone please add this
capability?!)

So are you saying that I can just wrap the hypothetical mouse cursor
repositioning API ("mozSetMousePosition(x,y);") in a 3 line wrapper
function of my own:

function mySetMouse(x,y){mozSetMousePosition(x,y);}

...put that inside a teeny-tiny script file - then call my wrapper
function from within my main JavaScript program using something like:

var myContainer = new Container ( "mySetMouse.js",
("mozSetMousePosition") ) ;
...
myContainer.mySetMouse ( 100, 100 ) ;

Did I get that right?

-- Steve


Anant Narayanan

Mar 23, 2011, 1:13:19 PM
to mozill...@googlegroups.com
Hi Robin,

On 3/23/2011 8:35 AM, Robin Berjon wrote:
> The basic idea is as follows. Let's take the example of a web app that's been granted extra privileges (by what magic does not matter). Those capabilities are *not* available to any script inlined or directly loaded from that page. Instead, in order to access them the developer must load a script that wishes to access them with something like the following (syntax up for bikeshedding):
>
> var myContainer = new Container("super-script.js");
>
> Those capabilities can then only be accessed from the code inside that container. Same-origin restrictions are naturally imposed on where that code can be loaded from, and in general the same loading mechanism as used with Workers is employed. The contained script has its own context that cannot be accessed directly from the global context (I don't believe that the reverse needs to be true).

> Thoughts? Ideas? Screams?

My initial reaction to the proposal is that it feels a lot like adding
an extra layer of indirection in hopes of reducing the chances of
webapps with extra privileges being bad actors. In the long run, I do
not think this would be the right approach -- simply making something
more difficult to do only postpones the real issue at hand, and we
needlessly add more complexity.

From a broader perspective, writing a webapp with HTML/JS is definitely
much easier than writing a traditional C-based application, and
necessarily so. With that ease, we must accept that if a language is
easier to learn, there will always be programmers who use it in ways
that we don't intend (malicious or otherwise). That is not to say that we shouldn't
put any effort in securing the technology itself, just that it will be
much harder to do.

Now, back to the original proposal: what is the real usefulness of
separating out the privileged code into its own container if the
container and 'insecure' code can freely communicate? Even if we
restrict it to just a postMessage style API, almost anything of
importance can be serialized into strings -- not to mention that the
'secure' container itself can contain really bad code. Since, I presume,
the author of the container and outside code is likely to be the same, I
do not see the point of it at all; but maybe I am missing something?

> Then there's a second piece of functionality that we can pile onto this approach. What with the increasing number of extra capabilities that we're starting to be able to pile onto web apps, several people have noticed that it can be a bit of pain to ask permission for each and every one of them individually.

Requesting multiple capabilities is definitely an important use-case,
but I think it is a different problem than the problem of making sure
webapps with elevated privileges don't misbehave.

Regards,
-Anant

Ian Bicking

Mar 23, 2011, 5:38:45 PM
to mozill...@googlegroups.com, Robin Berjon
On Wed, Mar 23, 2011 at 10:35 AM, Robin Berjon <robin....@gmail.com> wrote:
> Hi all,
>
> I hope that you're all enjoying a delightful hangover from the FX4 launch, congrats on that!
>
> A little while back there was a short thread here[0] about the "capabilities" field in the OWA manifest. The idea is that installed web apps could be granted special superpowers not normally given to arbitrary web pages, or I'm guessing perhaps more accurately that they'd get said powers without requiring the series of user-interaction hoops that one would expect for arbitrary content.
>
> This ties directly in to a larger problem, which I might characterise as follows: while there are plenty of sources out there that I would gladly entrust with greater capabilities to run code on my devices, there are none that I would trust never to be stupid. Or, put differently, with great(er) power comes great(er) XSS mitigation.

I also have been wondering about this.  If you have permissions attached to an entire origin/domain then that's a lot of power, power that the application writer may not really want.  If you have, for instance, an unsecured wiki lying in some forgotten subdirectory with an XSS hole you could be in trouble -- and even CSP probably won't help you because it won't be applied to the things you forgot you even implemented (as just one example).
> This here idea^Wbraindump is not intended to compete or replace CSP, but rather adds another layer of protection. In a nutshell, as a developer CSP protects your users from your own stupidity. Containers try to protect users of a user agent from the stupidity of other developers.
>
> The basic idea is as follows. Let's take the example of a web app that's been granted extra privileges (by what magic does not matter). Those capabilities are *not* available to any script inlined or directly loaded from that page. Instead, in order to access them the developer must load a script that wishes to access them with something like the following (syntax up for bikeshedding):
>
>  var myContainer = new Container("super-script.js");

The container's execution model is unclear, but also I think important.  When I've thought about this, I've found it hard to figure out exactly what you'd want to attach the permission to.

What you describe seems like it could be a web worker, and doesn't otherwise need any special features that web workers don't already have (except these permission details).  This does seem to make the limiting of permissions rather difficult -- in that you have to pipe everything through postMessage, which is probably a major code refactoring compared to using permissions inline in your code.  And many applications will probably have two code paths, one with and one without the permission (e.g., in-page notifications vs. Web Notifications), and there's no incentive to have the permissionless implementation in a web worker.  It's hard for me to imagine that all permission-granted access will be *required* to use this model, and so the carrot of higher application security needs to be at least not too bitter (to misuse a metaphor ;).
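The two-code-path situation can be sketched like this — makeNotifier and platformNotify are hypothetical stand-ins, not any real notifications API:

```javascript
// Sketch of the dual-code-path pattern: prefer a permission-backed
// notifications capability when one is available, otherwise fall back
// to an in-page banner. Nothing here is a real API.
function makeNotifier(platformNotify) {
  return function notify(message) {
    if (typeof platformNotify === "function") {
      platformNotify(message);   // permission-backed path
      return "platform";
    }
    // fallback path: a real app would insert a DOM element here
    return "in-page";
  };
}

var withPermission = makeNotifier(function (msg) { /* show OS notification */ });
var withoutPermission = makeNotifier(undefined);

withPermission("hi");     // takes the platform path
withoutPermission("hi");  // takes the in-page path
```

The point is that only the platform branch needs the permission, so there is little natural pressure to move the whole notifier into a worker.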

But outside of a web worker model I can't think of any practical way you could limit use of a permission.  There's just no good scope to define -- Javascript source code isn't a good scope, as it's not really used anywhere.  The scope of all execution is generally the HTML document that contains all the Javascript, and whitelisting a few HTML documents would be annoying and probably require iframe stuff which is basically a poor man's web worker anyway.  CSP is also not helpful in a lot of cases.

Anyway, assuming web workers... one way would be kind of like how many OS capabilities are I think handled, where essentially you start off by creating workers with the permissions they need, then drop permissions you no longer need.  So you might do:

  worker = new Worker('geo-script.js', ['geolocation']);
  dropPermissions(['geolocation'])

But that doesn't actually protect against much, IMHO, as most security problems will probably occur before dropPermissions() is called.  Much safer would be if you could ask for permissions for a specific script, e.g.:

  requestPermissions({geolocation: {scripts: ['/geo-script.js']}})

But then presumably you would need to ask for permissions again if you wanted another script to have a permission.  That's going to put a strong incentive to just ask for permissions everywhere so you don't have to re-ask for permissions just because you've changed the layout of your Javascript files.  Once an application has a confirmed permission the developer is going to really want to hold onto that permission, as re-asking is just asking for problems.  So I don't think any whitelist should be attached to the request process itself.  And anyway, '/geo-script.js' will never mean anything to anyone but the developer.  And if you have an XSS hole, you also have the potential that the attacking script can ask for a permission, and a bunch of users will grant that permission (because they trust your site, and because people hit OK pretty freely).

Since it seems like permissions are going to be attached to origins (not to specific pages), I wonder if CSP is somewhat problematic as it is generally attached to specific pages, meaning that any page on the origin that lacks CSP will be permissive while still inheriting permissions.  It would be nice if a policy (CSP or some extended policy that describes permissions and a whitelist) could be published in a well-known location so that forgotten corners of an application wouldn't have the opportunity to opt-out of any security policies.  This would allow developers to reliably update that information over time without losing permissions.
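Purely as a strawman, such a well-known policy file might look something like the following — none of these fields or the location are a real format:

```
{
  "//": "hypothetical site-wide policy, e.g. at /.well-known/app-policy",
  "csp": "allow 'self'",
  "permissions": {
    "geolocation": { "scripts": ["/geo-script.js"] }
  }
}
```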

But back to web workers: I wonder if it would be confusing to have a permission that is not invoked by an HTML page, but by a headless bit of Javascript running in the worker?  (Well, workers can be associated with a window, but do not have to be.)

 
> Those capabilities can then only be accessed from the code inside that container. Same-origin restrictions are naturally imposed on where that code can be loaded from, and in general the same loading mechanism as used with Workers is employed. The contained script has its own context that cannot be accessed directly from the global context (I don't believe that the reverse needs to be true).

Same-origin restrictions don't generally apply to Javascript.  Reading the web workers spec (http://www.whatwg.org/specs/web-workers/current-work/#dom-workerglobalscope-location) I think that the origin is considered to be the origin of the script that *created* the worker; the code itself can live anywhere, and besides being subject to CSP restrictions it doesn't otherwise affect the security.


Robin Berjon

Mar 24, 2011, 6:33:11 AM
to mozill...@googlegroups.com
Hi Anant,

On Mar 23, 2011, at 18:13 , Anant Narayanan wrote:
> From a broader perspective, writing a webapp with HTML/JS is definitely much easier than writing a traditional C-based application, but this is necessarily so. With it, we must accept that if a language is easier to learn, there will always be programmers who use it in ways that we don't intend (malicious or otherwise). That is not to say that we shouldn't put any effort in securing the technology itself, just that it will be much harder to do.

I don't think that the problem comes from the ease with which one can pick up web technology and run with it, or with an average competence level of developers. As someone who is largely useless with anything other than web technology, I'm very happy with this state of affairs and certainly don't want to change it :) That's not what's causing the security issues we're seeing, and are likely to see more with increased privileges.

The problem is that using web technology it is shockingly easy to perform script injection attacks. You think you're getting data, you add it to the tree without sanitising, and bam — attack done. This has nothing to do with developer quality, it's a general problem.

The timeless classic example is the GMail widget that was made for the Apple Dashboard platform, and vetted for distribution by Apple. I'm pretty sure that the Google engineers who wrote it and the Apple engineers who reviewed it are all highly skilled and competent at what they're doing. Nevertheless, when listing the emails in a mailbox the widget used innerHTML to assign the subject's content without prior sanitisation. This meant that if you were using that widget, all I'd need to do would be to send you an email with Subject: <script>widget.system("rm -Rf ~");</script> and I would wipe your data — no questions asked. You wouldn't even know it was taking place until it was very much too late.
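The widget hole above is the textbook innerHTML injection: untrusted text assigned via innerHTML is parsed as markup, whereas escaping it first (or using textContent) keeps it inert. A sketch — escapeHTML is a minimal illustrative helper, not a complete sanitiser:

```javascript
// Minimal escaping helper: replaces the characters that let text
// be interpreted as markup. The ampersand must be replaced first.
function escapeHTML(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

var subject = '<script>widget.system("rm -Rf ~");</script>';

// Unsafe: listItem.innerHTML = subject;             -> script executes
// Safe:   listItem.innerHTML = escapeHTML(subject); -> rendered as text
escapeHTML(subject);
```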

Granted, no one is talking about adding such a brutally powerful API for installed web apps, but we're still talking about things that could seriously hurt the user.

What this proposal creates is a separate context in which to place enhanced capabilities. That context can only be accessed through a limited API. This can prove to be a strong mitigation factor. For instance, the above attack would not have been possible.

> Now, back to original proposal; what is the real usefulness of separating out the privileged code into its own container if the container and 'insecure' code can freely communicate? Even if we restrict it to just a postMessage style API, almost anything of importance can be serialized into strings -- not to mention that the 'secure' container itself can contain really bad code. Since, I presume, the author of the container and outside code is likely to be the same, I do not see the point of it at all; but maybe I am missing something?

Let's take a roughly realistic example. I've written a Twitter client that's so incredibly awesome that you've just got to have it. And since you're a Twitter addict, it's running all the time. As you know, tweets can be geolocated and it's a feature that you like, so you've granted it blanket access to that capability.

Now I'm a nice guy and all, and on a good day I might even make it to being smart. But I made a mistake. The code I used to sanitise the tweets that you're reading has a hole in it that makes it possible to inject code. Someone's noticed this and has crafted a tweet that performs it. They @ you so that you see the message, but it just looks like garbage so you ignore it as some bot spam. The virus does something very simple: it monitors your geolocation position and reports it to a remote server. Since you're running this app all the time, you're essentially being constantly monitored. That's not very nice. Note that this isn't a far-fetched scenario: there have been HTML code injection attacks in Twitter HTML-based clients (including the Twitter website itself).

This only works because you've granted geolocation access to the full JS global context. Now let's look at what would have happened if containers were in effect. I'm the same developer, I'm no smarter. Let's assume that we're not using postMessage (because it's too restrictive) but are exposing an API through the container. Here's how I would have loaded my twitter client code:

var client = new Container("twitter-client.js", ["geolocation"]);

Pretty straightforward. The only place in which I need access to the location information is when I'm sending a message. So in my client, we have something like this code:

//...
exports.sendTweet = function (tweetText) {
  if (navigator.permissions["geolocation"]) {
    navigator.geolocation.getCurrentPosition(function (pos) {
      twitter.geoTweet(tweetText, pos.coords.latitude, pos.coords.longitude);
    });
  }
  else {
    twitter.simpleTweet(tweetText);
  }
};
//...

This is nothing specific, it's just pretty much the method I would have written anyway. There can't be said to be that much extra complexity. The difference is that the geolocation information is not available in the global context, therefore the attack fails (it's still injecting so it can do other things, notably tweet on your behalf, but the protection is still better). As a user, you've been protected from my stupidity. As a developer, I had virtually nothing to change to my code.

--
Robin Berjon
Robineko (http://robineko.com/)

Robin Berjon

Mar 24, 2011, 6:54:20 AM
to Ian Bicking, mozill...@googlegroups.com
On Mar 23, 2011, at 22:38 , Ian Bicking wrote:
> If you have, for instance, an unsecured wiki lying in some forgotten subdirectory with an XSS hole you could be in trouble -- and even CSP probably won't help you because it won't be applied to the things you forgot you even implemented (as just one example).

Actually I think that CSP helps precisely a lot with that because you can blanket CSP your entire site and get reports for unexpected injections.

>> The basic idea is as follows. Let's take the example of a web app that's been granted extra privileges (by what magic does not matter). Those capabilities are *not* available to any script inlined or directly loaded from that page. Instead, in order to access them the developer must load a script that wishes to access them with something like the following (syntax up for bikeshedding):
>>
>> var myContainer = new Container("super-script.js");
>
> The container's execution model is unclear, but also I think important. When I've thought about this, I've found it hard to figure out exactly what you'd want to attach the permission to.

I think that the capabilities get attached to a Javascript context that is not the global context. There is some porosity between the two so that they can communicate. This indeed can reopen security holes, but you're better off starting from a place where you need to reopen things than one in which you need to close them.

> What you describe seems like it could be a web worker, and doesn't otherwise need any special features that web workers don't already have (except these permission details). This does seem to make the limiting of permissions rather difficult -- in that you have to pipe everything through postMessage, which is probably a major code refactoring compared to using permissions inline in your code.

As I said, the details can be worked out — the important part is the context separation, which consists of never granting increased capabilities to the global context.

I can certainly see how postMessage is a pain. As I said we could simply have containers work like (a subset of) CommonJS modules. They get an exports object, and only expose what's expressly set on that. That means a whole lot less code refactoring compared to the same application written entirely in the global context. Also, since we're talking about a way of exposing (a lot) more capabilities, it's unlikely that there is existing code to refactor anyway. This is the right time to make this move.

> It's hard for me to imagine that all permission-granted access will be *required* to use this model

Why? That's entirely the point: getting more secure code without developers even noticing it (much).

> But outside of a web worker model I can't think of any practical way you could limit use of a permission. There's just no good scope to define -- Javascript source code isn't a good scope, as it's not really used anywhere. The scope of all execution is generally the HTML document that contains all the Javascript, and whitelisting a few HTML documents would be annoying and probably require iframe stuff which is basically a poor man's web worker anyway.

Same as for CommonJS, it's pretty much tried and true.

> Anyway, assuming web workers... one way would be kind of like how many OS capabilities are I think handled, where essentially you start off by creating workers with the permissions they need, then drop permissions you no longer need. So you might do:
>
> worker = new Worker('geo-script.js', ['geolocation']);
> dropPermissions(['geolocation'])

I don't think that that helps since it's likely attacks can occur in between the two. In many cases you'll need continuous access to a capability so dropping isn't an option. Insulation helps a lot more.

> But that doesn't actually protect against much, IMHO, as most security problems will probably occur before dropPermissions() is called. Much safer would be if you could ask for permissions for a specific script, e.g.:
>
> requestPermissions({geolocation: {scripts: ['/geo-script.js']}})

Hey, that's exactly my proposal but with a different syntax :) Actually I realised yesterday that we'd want this to be asynchronous since permissions are involved (duh).

> But then presumably you would need to ask for permissions again if you wanted another script to have a permission.

Right, but then you probably don't need to have a zillion scripts with the same permissions. Also, we could group them. It doesn't matter which syntax is picked so long as the global context for these scripts isn't the one from the HTML document.

> That's going to put a strong incentive to just ask for permissions everywhere so you don't have to re-ask for permissions just because you've changed the layout of your Javascript files.

Not if you can request multiple sources at once. And not if it isn't possible to ask for permission everywhere.

> And anyway, '/geo-script.js' will never mean anything to anyone but the developer.

How is that a problem?

> And if you have an XSS hole, you also have the potential that the attacking script can ask for a permission

Not in the model I described it can't. You can only request permissions on same-origin resources that are scripts. The attacker would have to place a script resource on your server — we've left the domain of XSS here and if they can do that you're screwed anyway.

> Since it seems like permissions are going to be attached to origins (not to specific pages), I wonder if CSP is somewhat problematic as it is generally attached to specific pages, meaning that any page on the origin that lacks CSP will be permissive while still inheriting permissions. It would be nice if a policy (CSP or some extended policy that describes permissions and a whitelist) could be published in a well-known location so that forgotten corners of an application wouldn't have the opportunity to opt-out of any security policies. This would allow developers to reliably update that information over time without losing permissions.

That's unrelated to this proposal, but I think that you're missing something with CSP. You can enable it on an entire site simply by asking your server to always send the CSP header.
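For instance, assuming Apache with mod_headers (and the vendor-prefixed header name Firefox uses at the moment), a single server directive covers the whole site:

```
# Send Mozilla's prefixed CSP header on every response, site-wide
Header set X-Content-Security-Policy "allow 'self'; report-uri /csp-report"
```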

Robin Berjon

Mar 24, 2011, 10:39:14 AM
to mozill...@googlegroups.com
Hi Steve,

On Mar 23, 2011, at 18:01 , st...@sjbaker.org wrote:
> So all of the code that uses this extra capability has to reside in
> a single source file that's walled off from the rest of the application?

Yes, that's the point.

> For example - I'd like for there to be a capability to reposition the
> mouse cursor from within JavaScript. It's generally an undesirable
> browser capability because evil advertisers could keep your mouse entombed
> in their advert and naive users could easily imagine that they are trapped
> and do something drastic like rebooting their computers to get out of it.

It's not just naive users. If I locked the mouse cursor and prevented the default action for all keyboard events (which is already possible) on some platforms you'd probably find yourself having to at least kill the browser (on OSX at least I think that only Cmd-Tab would get you out of the browser, and you'd then have no way to return to it without being captured again).

> But for something like a WebGL first person shooter game, you absolutely
> need this kind of capability in order to be able to steer the camera
> around in a "conventional" manner. (HINT: Could someone please add this
> capability?!)
>
> So are you saying that I can just wrap the hypothetical mouse cursor
> repositioning API ("mozSetMousePosition(x,y);") in a 3 line wrapper
> function of my own:
>
> function mySetMouse(x,y){mozSetMousePosition(x,y);}
>
> ...put that inside a teeny-tiny script file - then call my wrapper
> function from within my main JavaScript program using something like:
>
> var myContainer = new Container ( "mySetMouse.js",
> ("mozSetMousePosition") ) ;
> ...
> myContainer.mySetMouse ( 100, 100 ) ;
>
> Did I get that right?

You got the basics, but not completely, and you just gave an excellent example of defeating the purpose :)

I'm not extremely familiar with games but I'm guessing that you want to lock the mouse pointer to a given position while the game is being played (so place it inside your WebGL canvas when the game starts and on mouse move so that it doesn't exit) and release it when the game is paused or stopped. If that's the wrong assumption please correct me.

The way you code it above doesn't work; it completely defeats the idea. Let's assume that you have your game and that it includes a chat zone where you can talk to other players. You made a small, silly mistake that makes that chat vulnerable to code injection from other players (an easy mistake to make). With the approach you outline, all that the attacking code needs to do is call myContainer.mySetMouse(x, y) and it can do whatever it wants with your cursor just as if mozSetMousePosition() had been globally available. But then again, why would you expose that functionality globally? You don't need to. I think you're more likely to have something resembling the following, even without thinking much about it:

// in the global context
var shootGame = new Container("shoot-everything-up.js", ["mouse-control"]); // syntax may vary
shootGame.playGame();

// in shoot-everything-up.js
var lockMouse = false; // module-level so the event handler below can see it

exports.playGame = function () {
  lockMouse = true;
  // other stuff..
};

exports.pauseGame = function () {
  lockMouse = false;
  // other stuff..
};

exports.init = function () {
  document.onmousemove = function (ev) {
    // handle the motion
    if (lockMouse) mozSetMousePosition(100, 200);
  };
};

This is very similar to how you'd likely lay out your code in the first place. The difference is that mozSetMousePosition can't be called directly from the global context so the previous attack does not work. Of course, this isn't Caja so the protection is imperfect, and as a developer you can still defeat the protection — but you have to do it pretty much on purpose.

st...@sjbaker.org

Mar 24, 2011, 2:26:09 PM
to mozill...@googlegroups.com
> Hi Steve,
>
> On Mar 23, 2011, at 18:01 , st...@sjbaker.org wrote:
>> So all of the code that uses this extra capability has to reside in
>> a single source file that's walled off from the rest of the application?
>
> Yes, that's the point.
>
>> For example - I'd like for there to be a capability to reposition the
>> mouse cursor from within JavaScript. It's generally an undesirable
>> browser capability because evil advertisers could keep your mouse
>> entombed
>> in their advert and naive users could easily imagine that they are
>> trapped
>> and do something drastic like rebooting their computers to get out of
>> it.
>
> It's not just naive users. If I locked the mouse cursor and prevented the
> default action for all keyboard events (which is already possible) on some
> platforms you'd probably find yourself having to at least kill the browser
> (on OSX at least I think that only Cmd-Tab would get you out of the
> browser, and you'd then have no way to return to it without being captured
> again).

Yikes! But yes...exactly. You don't want to give this feature to people
whom you don't trust to produce reliable software...whatever that means!
Giving the end-user the choice to award that trust on a case-by-case basis
is where we're at here.

>> But for something like a WebGL first person shooter game, you absolutely
>> need this kind of capability in order to be able to steer the camera
>> around in a "conventional" manner. (HINT: Could someone please add this
>> capability?!)
>>
>> So are you saying that I can just wrap the hypothetical mouse cursor
>> repositioning API ("mozSetMousePosition(x,y);") in a 3 line wrapper
>> function of my own:
>>
>> function mySetMouse(x,y){mozSetMousePosition(x,y);}
>>
>> ...put that inside a teeny-tiny script file - then call my wrapper
>> function from within my main JavaScript program using something like:
>>
>> var myContainer = new Container ( "mySetMouse.js",
>> ("mozSetMousePosition") ) ;
>> ...
>> myContainer.mySetMouse ( 100, 100 ) ;
>>
>> Did I get that right?
>
> You got the basics, but not completely, and you just gave an excellent
> example of defeating the purpose :)
>
> I'm not extremely familiar with games but I'm guessing that you want to
> lock the mouse pointer to a given position while the game is being played
> (so place it inside your WebGL canvas when the game starts and on mouse
> move so that it doesn't exit) and release it when the game is paused or
> stopped. If that's the wrong assumption please correct me.

Not exactly - the issue is that in a typical 1st person shooter (Quake,
Halo, whatever) you aim the camera by use of the mouse - and you NEED to
be able to keep sliding the mouse (say) to the left indefinitely and have
the camera keep spinning around for as long as you do it.

You're probably going to argue that there is a better way to do it - but
please don't. **ALL** first person shooters work this way - and for
exceedingly carefully thought out reasons that won't change...the people
who came up with this are intelligent and experts at human factors stuff.

So the idea is that you don't care about the absolute position of the
mouse - you only care about how far it moved in some amount of time.
Hence, the standard method is to read the mouse position, then move it
back to the center of the window (more often, this would be a full-screen
app). Each iteration of the game loop comes frequently enough that even
the fastest player can't get the mouse off the edge of the screen in
1/30th of a second.
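The recenter-and-measure loop described above could be sketched roughly like this. Note that mozSetMousePosition() is the hypothetical API from this thread, and the canvas size, center point, and sensitivity factors are all made up for illustration:

```javascript
// Per-frame mouse handling for an FPS-style camera: only the delta
// from the center matters, never the absolute pointer position.

var CENTER = { x: 400, y: 300 }; // center of an assumed 800x600 canvas

// How far has the mouse drifted from the center since we last
// recentered it?
function mouseDelta(pos, center) {
    return { dx: pos.x - center.x, dy: pos.y - center.y };
}

function onFrame(mousePos, camera) {
    var d = mouseDelta(mousePos, CENTER);
    camera.yaw   += d.dx * 0.005; // sensitivity factors are arbitrary
    camera.pitch += d.dy * 0.005;
    // Snap the pointer back so the next frame measures a fresh delta:
    // mozSetMousePosition(CENTER.x, CENTER.y); // hypothetical API
    return camera;
}
```

Because the pointer is snapped back every frame, the player can slide the mouse left indefinitely and the yaw keeps accumulating, which is exactly the behavior Steve describes.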

But - inside a browser, this is 100% impossible. No JavaScript API lets
you reposition the pointer - and even if you hide the mouse pointer
(which is possible in Firefox but essentially impossible in Chrome), there
is still a problem when the mouse hits the edge of the window/screen.

So - if you want first-person shooter games that run in a browser (and,
trust me, even if you don't, there are at least 100,000,000 people who
do!) - then repositioning the mouse is a "must have" feature.

> The way you code it above doesn't work, it completely defeats the idea.
> Let's assume that you have your game and that it includes a chat zone
> where you can talk to other players. You made a small, silly mistake that
> makes that chat vulnerable to code injection from other players (an easy
> mistake to make). With the approach you outline, all that the attacking
> code needs to do is call myContainer.mySetMouse(x, y) and it can do
> whatever it wants with your cursor just as if mozSetMousePosition() had
> been globally available.

That's true - which is why we don't want everyone to have this potentially
dangerous feature without the user giving permission.

> But then again, why would you expose that
> functionality globally? You don't need to. I think you're more likely to
> have something resembling the following, even without thinking much about
> it:
>
> // in the global context
> var shootGame = new Container("shoot-everything-up.js",
> ["mouse-control"]); // syntax may vary
> shootGame.playGame();

So are you saying that I have to place all 1,000,000 lines of "shootGame"
in a single JavaScript source file?!

I surely hope that isn't what you're saying - because:

a) Given the astounding spike in game complexity you're about to see now
that WebGL and canvas are "out there", we are ALREADY seeing JavaScript
code in the multiple-megabytes range. What happens when I include
3rd-party middleware in my game? It probably comes with licensing terms
that effectively prohibit it from being merged into the same file as my
proprietary game code. And what happens as I stream code into the
application mid-run (e.g. as I load the "LEVEL 2" code when you finish
playing "LEVEL 1")?

b) For safety, I'd prefer to attach privileges to individual code files -
so that (for example) the player-avatar controller class gets the mouse
cursor repositioning privilege and the video-chat code gets access to the
camera - but neither gets access to any features that it doesn't strictly
need.


> // in shoot-everything.js
> exports.playGame = function () {
> this.lockMouse = true;
> // other stuff..
> };
>
> exports.pauseGame = function () {
> this.lockMouse = false;
> // other stuff..
> };
>
> exports.init = function () {
> document.onmousemove = function (ev) {
> // handle the motion
> if (this.lockMouse) mozSetMousePosition(100, 200);
> };
> };
>
> This is very much similar to how you'd likely layout your code in the
> first place. The difference is that mozSetMousePosition can't be called
> directly from the global context so the previous attack does not work. Of
> course, this isn't Caja so the protection is imperfect, and as a developer
> you can still defeat the protection — but you have to do it pretty much on
> purpose.

I'm unconvinced by the "let's make it difficult but not impossible"
argument. The bad guys are evil - but they aren't stupid - and they share
their work. It only takes someone producing a hack at some black-hat
conference and it'll be all over the place within a week. Now all you've
done is make life tough for the good guys, with no particular benefit to
the end user (who STILL has the option to say "No, I don't trust this
game enough to give it that privilege").

A better way to protect honest developers from making mistakes would be to
provide a way to stick text into an HTML DOM object that automatically
sanitizes it from anything that could cause code within it to run...or
better still, make that the default and force you to use a different
mechanism to deliberately stick code into an innerHTML.

-- Steve


Robin Berjon

Mar 25, 2011, 7:48:38 AM
to mozill...@googlegroups.com
On Mar 24, 2011, at 19:26 , st...@sjbaker.org wrote:
>> It's not just naive users. If I locked the mouse cursor and prevented the
>> default action for all keyboard events (which is already possible) on some
>> platforms you'd probably find yourself having to at least kill the browser
>> (on OSX at least I think that only Cmd-Tab would get you out of the
>> browser, and you'd then have no way to return to it without being captured
>> again).
>
> Yikes! But yes...exactly. You don't want to give this feature to people
> whom you don't trust to produce reliable software...whatever that means!
> Giving the end-user the choice to award that trust on a case-by-case basis
> is where we're at here.

Giving the end user the choice to grant capabilities is clearly part of the picture, but I think that we need to do more. Hence me throwing this idea around to see if it sticks.

>> I'm not extremely familiar with games but I'm guessing that you want to
>> lock the mouse pointer to a given position while the game is being played
>> (so place it inside your WebGL canvas when the game starts and on mouse
>> move so that it doesn't exit) and release it when the game is paused or
>> stopped. If that's the wrong assumption please correct me.
>
> Not exactly - the issue is that in a typical 1st person shooter (Quake,
> Halo, whatever) you aim the camera by use of the mouse - and you NEED to
> be able to keep sliding the mouse (say) to the left indefinitely and have
> the camera keep spinning around for as long as you do it.

Actually, I probably expressed it wrong but what you then went on to describe is pretty much what I had in mind.

>> But then again, why would you expose that
>> functionality globally? You don't need to. I think you're more likely to
>> have something resembling the following, even without thinking much about
>> it:
>>
>> // in the global context
>> var shootGame = new Container("shoot-everything-up.js",
>> ["mouse-control"]); // syntax may vary
>> shootGame.playGame();
>
> So are you saying that I have to place all 1,000,000 lines of "shootGame"
> in a single JavaScript source file?!
>
> I surely hope that isn't what you're saying

The example skunkworks syntax that I showed would require that, but as I said the syntax can be changed. It's certainly possible to extend the same idea to multiple source files at once. The important point is context separation: making sure that additional capabilities are only exposed in contexts that are not directly accessible to injected code.

>> This is very much similar to how you'd likely layout your code in the
>> first place. The difference is that mozSetMousePosition can't be called
>> directly from the global context so the previous attack does not work. Of
>> course, this isn't Caja so the protection is imperfect, and as a developer
>> you can still defeat the protection — but you have to do it pretty much on
>> purpose.
>
> I'm unconvinced by the "let's make it difficult but not impossible"
> argument. The bad guys are evil - but they aren't stupid - and they share
> their work. So it only takes someone to produce a hack at some
> blackhatter's conference and it'll be all over the place within a week.
> Now all you've done is make life tough for the good guys to no particular
> benefit to the end user. (Who STILL has the option to say "No, I don't
> trust this game enough to give it that privilage").

I'm afraid that you're completely missing the point :) The problem isn't whether end users can grant privileges — we have that anyway. It isn't whether evil people can use script injection — they already can. What this changes is what gets exposed to injected script. Instead of "everything the user has agreed to" we have "only what the developer has decided to expose". The cost in terms of additional code complexity to the developer is negligible — there is no making "life tough for the good guys" here.

> A better way to protect honest developers from making mistakes would be to
> provide a way to stick text into an HTML DOM object that automatically
> sanitizes it from anything that could cause code within it to run...or
> better still, make that the default and force you to use a different
> mechanism to deliberately stick code into an innerHTML.

That, in turn, would require far more sweeping changes and I fail to see how it could be done without indeed making web development a fair bit harder.

st...@sjbaker.org

Mar 25, 2011, 4:36:13 PM
to mozill...@googlegroups.com
> On Mar 24, 2011, at 19:26 , st...@sjbaker.org wrote:
>>> But then again, why would you expose that
>>> functionality globally? You don't need to. I think you're more likely
>>> to
>>> have something resembling the following, even without thinking much
>>> about
>>> it:
>>>
>>> // in the global context
>>> var shootGame = new Container("shoot-everything-up.js",
>>> ["mouse-control"]); // syntax may vary
>>> shootGame.playGame();
>>
>> So are you saying that I have to place all 1,000,000 lines of
>> "shootGame"
>> in a single JavaScript source file?!
>>
>> I surely hope that isn't what you're saying
>
> The example skunkworks syntax that I showed would require that, but as I
> said the syntax can be changed. It's certainly possible to extend the same
> idea to multiple source files at once. The important point is context
> separation: making sure that additional capabilities are only exposed in
> contexts that are not directly accessible to injected code.

The problem is that all of the bits of my code need to share context -
this is a game - the mouse controls talk to the camera code, the camera
talks to the graphics and the AI, both of those talk to the physics and
the pathfinder and all of it talks to networking...we can't hold one half
of the code incommunicado from the other half.

So allowing multiple files to share privileges will simply cause
developers to demand that the privileges they need be supplied to ALL of
their source code... and then you're no better off than if you hadn't
walled it all off in the first place, because the potential
script-injection stuff is now inside the walled-off context.

If my entire million-line game can't work efficiently across
privileged/non-privileged boundaries - then I really have no choice but to
put 100% of my code on the privileged side of the wall - and now you have
your problem back again.

What I'd like to do is to grant access to those privileges ONLY to small
library functions that I've written (e.g. the one tiny file that reads the
mouse and repositions it) - but give ALL of my (potentially insecure) code
the ability to pass data into and out of those functions... but I concede
your point that the script-injection stuff can mimic my legitimate calls
to the secure modules and thereby cause grief.

So it seems to me that this isn't at all an issue of what source files
have access to the security-sensitive features - it's an issue of what
kind of communication you allow between insecure and secure sections of
that code...and that's the problem:

* If you disallow bidirectional communication between insecure and secure
functions - then you force developers to put all of their code on the
secure side of things (including the bits where script injection is
possible).
* If you allow that communication to happen, then the script-injection
folks can call secure functions and (perhaps) make them do things we don't
want.

If you can manage that communication - then you can solve the problem.
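One way of "managing that communication" is for the privileged module to export only narrow, self-validating operations, rather than the raw capability. A minimal sketch, assuming the hypothetical mozSetMousePosition() from this thread (all other names here are made up for illustration):

```javascript
// --- privileged side (imagine this lives inside the container) ---
// The raw capability is injected once and never leaves this closure.
function makeMouseControl(rawSetMousePosition) {
    var locked = false;
    // Untrusted code gets ONLY these three functions, none of which
    // accepts arbitrary coordinates from the caller.
    return {
        lock:   function () { locked = true; },
        unlock: function () { locked = false; },
        recenter: function () {
            if (!locked) return false;     // refuse when not locked
            rawSetMousePosition(400, 300); // fixed, not caller-chosen
            return true;
        }
    };
}

// --- untrusted side ---
var calls = [];
var mouse = makeMouseControl(function (x, y) { calls.push([x, y]); });
mouse.recenter();  // ignored: not locked yet, returns false
mouse.lock();
mouse.recenter();  // allowed: pointer moves to the fixed center
```

Injected code that gets hold of `mouse` can still toggle the lock, but it can no longer place the pointer at arbitrary coordinates; the attack surface shrinks to exactly what the developer chose to pass across the boundary.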

>> A better way to protect honest developers from making mistakes would be
>> to
>> provide a way to stick text into an HTML DOM object that automatically
>> sanitizes it from anything that could cause code within it to run...or
>> better still, make that the default and force you to use a different
>> mechanism to deliberately stick code into an innerHTML.
>
> That, in turn, would require far more sweeping changes and I fail to see
> how it could be done without indeed making web development a fair bit
> harder.

Why?

myDOMelement.innerHTML = something ;

...results in "something" being unconditionally stuffed into the HTML -
resulting in a risk of script-injection problems if the application fails
to scrutinize "something" carefully enough.

That's difficult because there are millions of applications - and they
aren't all written carefully enough. But if you provided:

myDOMelement.setInnerHTMLSecurely ( something ) ;

...then the underlying implementation of "setInnerHTMLSecurely()" can
carefully check for nasty stuff and scrub the content of that element to
prevent script injection. If it's well known & documented that this is the
safe way to set HTML DOM elements then developers will use it in
preference to just setting innerHTML - because (a) it's easier than
writing and maintaining all of that sanitization stuff yourself, (b) it'll
be written by folks who understand all of the latest nasty black-hat
tricks, and (c) it'll be written in C++ and not JavaScript - so it'll be
fast.
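A userland approximation of this idea already exists today: escape anything HTML-significant before it reaches innerHTML, so injected markup renders as inert text instead of executing. setInnerHTMLSecurely() itself is hypothetical; the escaping below is just the standard entity-escaping technique:

```javascript
// Escape the five HTML-significant characters so untrusted text can
// be assigned to innerHTML without being parsed as markup.
// Note: "&" must be replaced first, or the other entities would be
// double-escaped.
function escapeHTML(s) {
    return String(s)
        .replace(/&/g, "&amp;")
        .replace(/</g, "&lt;")
        .replace(/>/g, "&gt;")
        .replace(/"/g, "&quot;")
        .replace(/'/g, "&#39;");
}

// Usage in a page would look like:
//   myDOMelement.innerHTML = escapeHTML(untrustedChatMessage);
// A native setInnerHTMLSecurely() could go further (whitelisting safe
// tags, stripping event-handler attributes) and, as Steve notes, would
// be maintained by people tracking the latest attacks.
```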

You could go one step further and prevent "secured" code (that which has
user-assigned privileges) from accessing the DOM except through these
kinds of mechanisms. If you did that then my entire million-line system
could be on the "secure" side of the fence... provided it doesn't ever
want to deliberately write script code into an HTML element.

You could even add:

myDOMelement.setInnerHTMLWithDangerousScriptingAllowed ( something ) ;

...to allow those kinds of "dangerous" accesses - but at least ensuring
that developers would be unlikely to call it without carefully considering
the issues.

-- Steve

