JEP 107 (Page Mods)


Nickolay Ponomarev

Apr 3, 2010, 6:35:26 PM
to mozilla-la...@googlegroups.com
Hi,

I spent some time implementing JEP 107, but since many things were either unclear or didn't have an obvious way to implement, I ended up writing a slightly different and more detailed spec [1] and its implementation [2].

[1] https://wiki.mozilla.org/User:Asqueella/JEP_107
[2] http://bitbucket.org/nickolay/jetpack-packages/src/tip/packages/page-mods/

Let me list the main issues I encountered with the original specification:
1) not clear what format to use with include/exclude.
--> I tried to specify a format and provide a rationale in my proposal
2) not clear how the example using mootools was supposed to work. Presumably the mootools URL was supposed to be loaded in the content page, in which case its APIs would not be available in the host module, where the function using addEvents runs. I didn't like that very different kinds of scripts (one running in content and another running in a module) were listed in a single list.
--> I tried to define the semantics more clearly in my proposal. Also I suggested using helper functions in the 'script' section to do things like adding a <script> tag to a page to load a script in content.
3) I don't see how applying and unloading script-based mods without reloading the page should be implemented. The mod would need to keep track of what it's doing and carefully implement undo. I think it's a lot of effort for little (if any) gain.
--> I implemented the simplest mechanism -- enabling/disabling script-based mods doesn't have any effect on the pages that started loading earlier.
4) style-based mods are combined with the script-based mods.
--> since CSS-only modifications can be easily (un)applied without a page reload, and doing so requires no effort from the mod's author, it makes sense to let the user define them using a separate object.

Any comments?

Nickolay

Daniel

Apr 3, 2010, 11:43:15 PM
to mozilla-labs-jetpack
So what was meant by the mootools js url and the actual js code block
was to show an example of how you could load a script in a page by tag
reference to a js file, or actually write in js code that would be
placed in the page as a script-tag-wrapped block. There was no mix of
module js and page js; not sure how that was concluded... all the code
in the page mod would be explicitly scoped to the page's context, not
the module's.

Btw, I just used a random url and random code, don't read into that
part too much :)

- Daniel

On Apr 3, 3:35 pm, Nickolay Ponomarev <asquee...@gmail.com> wrote:
> Hi,
>
> I spent some time implementing JEP 107, but since many things were either
> unclear or didn't have an obvious way to implement, I ended up writing a
> slightly different and more detailed spec [1] and its implementation [2].
>
> [1] https://wiki.mozilla.org/User:Asqueella/JEP_107
>
> [2] http://bitbucket.org/nickolay/jetpack-packages/src/tip/packages/page-...

Nickolay Ponomarev

Apr 4, 2010, 9:15:27 AM
to mozilla-la...@googlegroups.com
On Sun, Apr 4, 2010 at 7:43 AM, Daniel <dani...@gmail.com> wrote:
So what was meant by the mootools js url and the actual js code block
was to show an example of how you could load a script in a page by tag
reference to a js file, or actually write in js code that would be
placed in the page as a script tag wrapped block.

Hmm, that's counter-intuitive. You're saying the function should be serialized to a string, and then that string should be inserted into a page? I haven't seen any other APIs doing this; perhaps it would make more sense to force users to put such code in an external resource?

Nickolay

Daniel

Apr 4, 2010, 5:11:40 PM
to mozilla-labs-jetpack
Well I think it would be beneficial for the scripts to be injected
into the page in this way. If we could capture the page before parse
(which would take some work) the page could potentially cache the
scripts as it would if they were included with the page by the
domain.

On Apr 4, 6:15 am, Nickolay Ponomarev <asquee...@gmail.com> wrote:

Atul Varma

Apr 4, 2010, 8:28:16 PM
to mozilla-la...@googlegroups.com, Daniel
Hmm, one of the disadvantages of "stringifying" a function is that it actually folds constants and removes comments, so when errors are thrown from the function, the line numbers could get thrown off.  For example:
js> uneval(function() { /* yo! */ return 1 + 2; })
(function () {return 3;})
Not sure if there is an easy way to stringify functions without this effect.

We'd also need to make sure it's explicitly communicated to developers that the functions are being stringified, so they shouldn't refer to e.g. variables defined in closures.  One of the advantages of forcing users to put their code in an external resource is that it makes such confusion impossible, I think--though forcing things to be in separate files certainly has its own disadvantages.
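To make the closure point concrete, here's a plain-JS illustration (nothing Jetpack-specific; all names invented) of what stringify-then-re-evaluate does to a function that refers to variables defined in an enclosing scope:

```javascript
// Serializing a function with toString() and re-evaluating the string
// in another scope silently loses the closure variables it relied on.
function makeMod() {
  const secret = 42;                        // lives only in this closure
  return function mod() { return secret; };
}

const mod = makeMod();
mod();                                      // 42: the closure is intact

// What a stringifying loader would effectively do:
const source = "(" + mod.toString() + ")";
const reEvaled = (0, eval)(source);         // indirect eval: global scope

let failed = false;
try {
  reEvaled();                               // `secret` no longer exists here
} catch (e) {
  failed = e instanceof ReferenceError;
}
// failed === true
```

The serialized source still says `return secret;`, but the re-evaluated copy has no way to reach the original closure, so it throws instead of returning 42.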

- Atul

Daniel

Apr 5, 2010, 11:59:20 AM
to mozilla-labs-jetpack
If code was evaluated in Jetpack space and reached in to act on
content documents from its own sandbox, are there any issues with
leaking privileges to untrusted content code? If that were an issue,
would COWs neutralize any leakage of that sort? Just wondering if
that is another option now that COWs have landed :)

- Daniel

Atul Varma

Apr 5, 2010, 12:17:02 PM
to mozilla-la...@googlegroups.com, Daniel
It's entirely possible to interact with a page's DOM in a way that's similar to Chromium's concept of isolated worlds, an approach that doesn't let the mod interact with the actual JS objects on the page.  Such inter-script communication could be satisfied through a postMessage-like JSON message-passing bridge, though it obviously isn't as convenient.

I suspect when it comes down to it, we may need to experiment with a number of different secure solutions and see which ones developers find understandable and easy to use.
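For illustration, a postMessage-like bridge of the kind described above could be sketched roughly like this (plain JS, all names invented; a real bridge would cross a process or sandbox boundary rather than a function call):

```javascript
// Hypothetical sketch of a JSON message-passing bridge: the two sides
// exchange only serialized data, never live objects or functions.
function makeBridge() {
  const listeners = [];
  return {
    onMessage(fn) { listeners.push(fn); },
    postMessage(data) {
      // Round-trip through JSON: only plain data crosses the boundary,
      // so neither side can leak privileged objects to the other.
      const json = JSON.stringify(data);
      for (const fn of listeners) fn(JSON.parse(json));
    },
  };
}

const toJetpack = makeBridge();
let received = null;
toJetpack.onMessage(msg => { received = msg; });

// The content side reports an event as plain data:
toJetpack.postMessage({ type: "link-clicked", href: "https://example.com/" });
// received.type === "link-clicked"
```

The inconvenience Atul mentions shows up immediately: the jetpack side gets a copy of the data, not a reference it could call methods on.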

- Atul

Myk Melez

Apr 7, 2010, 2:03:41 PM
to mozilla-la...@googlegroups.com, Nickolay Ponomarev
On 04/03/2010 03:35 PM, Nickolay Ponomarev wrote:
Hi,

I spent some time implementing JEP 107, but since many things were either unclear or didn't have an obvious way to implement, I ended up writing a slightly different and more detailed spec [1] and its implementation [2].

[1] https://wiki.mozilla.org/User:Asqueella/JEP_107
[2] http://bitbucket.org/nickolay/jetpack-packages/src/tip/packages/page-mods/
Great stuff!

It's fine to leave style modification to a later phase of development; we should just make sure to take its future requirements into consideration!


Let me list the main issues I encountered with the original specification:
1) not clear what format to use with include/exclude.
--> I tried to specify a format and provide a rationale in my proposal
Yup, the original proposal was vague, and yours is specific, which is great!

My primary goals for this format are:
  1. expressiveness: it should support common use cases;
  2. simplicity: it should be as easy as possible to learn, use, and remember;
  3. security: it should mitigate the risk of rule exploitation.
A non-goal, however, is compatibility with existing implementations, including @moz-document rules. It's fine to reuse such implementations if possible, but they shouldn't constrain the API design (whose ergonomics take priority) nor our ability to evolve the API in the future (e.g. via the addition of filter functions or regular expressions).

With that in mind, it's not clear that the @moz-document-compatible format is preferable to the one being used by Chrome (whose invariant structure makes it easier to remember), the one used in Greasemonkey (whose rules are simpler), or the one I recommended in my earlier comments on this spec.

I need to take a closer look at this, but at first glance it looks like Chrome's format balances these goals best.
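For concreteness, the Greasemonkey-style rules mentioned above boil down to glob matching; here's a hypothetical sketch (not any of the actual implementations) of how include/exclude globs reduce to regular expressions:

```javascript
// Greasemonkey-style globs: `*` matches any run of characters,
// everything else is literal. (Illustrative sketch only.)
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

function shouldApply(url, includes, excludes = []) {
  const hit = pattern => globToRegExp(pattern).test(url);
  return includes.some(hit) && !excludes.some(hit);
}

const includes = ["https://*.mozilla.org/*"];
const excludes = ["https://bugzilla.mozilla.org/*"];

shouldApply("https://wiki.mozilla.org/Jetpack", includes, excludes);          // true
shouldApply("https://bugzilla.mozilla.org/show_bug.cgi", includes, excludes); // false
```

The security concern is visible here too: a pattern like `https://*.org/*` is one character away from matching far more than the author intended.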


2) not clear how the example using mootools was supposed to work. Presumably the mootools URL was supposed to be loaded in the content page, in which case its APIs would not be available in the host module, where the function using addEvents is running in. I didn't like that very different kinds of scripts (one running in content and another running in a module) were listed in a single list.
--> I tried to define the semantics more clearly in my proposal. Also I suggested using helper functions in the 'script' section to do things like adding a <script> tag to a page to load a script in content.
<script> is problematic, as you note, and will require careful consideration, so I think it's the right decision to delay its implementation while we gather use cases and think through the ramifications!

And I really like the use of the new "content-document-window-created" notification!

Given that the modifications are now functionally equivalent to event handlers, however, we should use the pattern we use elsewhere for such handlers, which is to give each one its own property, prefixed with "on" and named after the event, as described in the API design guidelines, i.e.:

var myMod = new ScriptMod({
  onWindowCreate: [function() {}, function() {}],
  onDOMReady: function() {}
});

This also allows us to reserve "script" for future use as an injector of <script> tags, if we decide to do that at a later date.

Regarding the naming of the "onWindowCreate" event, according to bug 549539 Jonas would have preferred to call the notification "content-window-created", but he thought that would lead to confusion with folks who are familiar with another low-level Gecko API.

We don't have that problem, though, as we aren't exposing that other API, so we can drop "document". And we can drop "content", too, since this API only exposes content windows to modification. Finally, "created" becomes present-tense "create" for consistency with other events.

Regarding the naming of the "onDOMReady" event, the naming should be consistent with other APIs that expose such a handler, like the tabs API, which calls this "onDOMReady" (I think we inherited this from jQuery).

I'm amenable to calling this something else, like onDOMLoad, onContentLoad, onDOMContentLoad, onDocumentReady, or simply onReady. But whatever we choose, we should use it everywhere, and it should be consistent with the other handlers wrt. the use of present tense.


3) I don't see how applying and unloading script-based mods without reloading the page should be implemented. The mod would need to keep track of what it's doing and carefully implement undo. I think it's a lot of effort for little (if any) gain.
--> I implemented the simplest mechanism -- enabling/disabling script-based mods doesn't have any effect on the pages that started loading earlier.
Yup, this makes perfect sense to me.


4) style-based mods are combined with the script-based mods.
--> since CSS-only modifications can be easily (un)applied without a page reload, and doing so requires no effort from the mod's author, it makes sense to let the user define them using a separate object.
It's true that style modifications don't require page reloads, but that doesn't seem like a distinction significant enough to necessitate creating separate APIs for script and style modifications.

That distinction is a difference in the machine model, not the author's mental model (although it leaks, since we need to document the difference for cases where it matters), and it's not even a very sharp distinction, since there are some kinds of plausible changes that are somewhat dynamic (like removing a DOMContentLoaded handler after a page has started loading but before DOMContentLoaded, preventing that handler from being called).

Also, separate APIs would make authors who make both script and style modifications repeat the include/exclude rules.

Nevertheless, it's fine to leave implementation of style mods to a future phase of development. And I think it's also fine for the constructor exported from the module to be called ScriptMods. But we should still call the module itself "page-mods" and reuse it when we implement style mods in the future by adding a PageMods constructor to it whose interface supports both script and style mods.

-myk

Nickolay Ponomarev

Apr 7, 2010, 5:52:10 PM
to Myk Melez, mozilla-la...@googlegroups.com
Hi, Myk.

First of all, thanks for your comments. I'll make the necessary changes.

On Wed, Apr 7, 2010 at 10:03 PM, Myk Melez <m...@mozilla.org> wrote:
1) what format to use with include/exclude.
[...] With that in mind, it's not clear that the @moz-document-compatible format is preferable to the one being used by Chrome (whose invariant structure makes it easier to remember), the one used in Greasemonkey (whose rules are simpler), or the one I recommended in my earlier comments on this spec.


I need to take a closer look at this, but at first glance it looks like Chrome's format balances these goals best.

The rest of your points make sense to me.

A couple of other topics I'd like to get input on (continuing the numbering):

5) One thing that bothered me a lot is how this API is going to work in multi-process Firefox. As far as I understand, the communication between JS of different processes will not be entirely transparent <https://wiki.mozilla.org/Electrolysis/CPOW>, and in general the process running the jetpack code may be different from the process holding the content page, so the currently suggested approach will likely not be implementable in the multi-process world.

Does it still make sense to implement this API in its current form?

6) My understanding is that page mods were also part of the jetpack prototype. Did anyone monitor feedback on the prototype's API? I haven't tried to find it, which makes me feel we're designing APIs in a vacuum, which in turn makes me a bit uncomfortable :)

Nickolay

Myk Melez

Apr 8, 2010, 8:06:47 PM
to mozilla-la...@googlegroups.com, Nickolay Ponomarev, Benjamin Smedberg, Atul Varma
On 04/07/2010 02:52 PM, Nickolay Ponomarev wrote:
> These links may be of interest:
> http://code.google.com/p/chromium/issues/detail?id=18259
> http://groups.google.com/a/chromium.org/group/chromium-extensions/browse_thread/thread/9e3903c0817b5837/3d305eb340f01763
Thanks, those are very interesting! I'm worried about the same security
issues, although I wouldn't necessarily make the same decisions.

In particular, I'm not sure it's better to prevent developers from
globbing the TLD in the include declaration if it just causes them to
include all sites and then do the filtering in their callbacks, since it
will be easier for reviewers and static analysis tools to detect
dangerous globs in the declaration than the callbacks.

I would be more inclined to provide a simple, relatively secure format
that satisfies common use cases along with the option to use regular
expressions or filter functions (which reviewers and static analysis
tools would scrutinize more carefully) for special cases.
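A hypothetical sketch of that layered approach (all names invented, not the proposed API): simple string rules for the common case, with regular expressions and filter functions as the escape hatches reviewers would scrutinize more carefully:

```javascript
// Compile a heterogeneous rule list into uniform URL predicates.
function compileRule(rule) {
  if (typeof rule === "function") return rule;            // filter function
  if (rule instanceof RegExp) return url => rule.test(url);
  // Plain string: exact-prefix match as the "simple, secure" default.
  return url => url.startsWith(rule);
}

const rules = [
  "https://www.mozilla.org/",                              // simple case
  /^https:\/\/bugzilla\.mozilla\.org\//,                   // regex escape hatch
  url => new URL(url).hostname.endsWith(".mozilla.com"),   // filter function
].map(compileRule);

rules.some(r => r("https://bugzilla.mozilla.org/show_bug.cgi?id=1")); // true
rules.some(r => r("https://example.com/"));                           // false
```

The point of the layering is exactly what the paragraph above argues: a static analyzer can cheaply approve the string rules and flag only the regex and function entries for human review.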

> 5) One thing that bothered me much is how this API is going to work in
> the multi-process Firefox. As far as I understand, the communication
> between JS of different processes will not be entirely transparent
> <https://wiki.mozilla.org/Electrolysis/CPOW>, and in general the
> process running the jetpack code may be different from the process
> holding content page, so currently suggested approach will likely not
> be implementable in the multi-process world.
>
> Does it still make sense to implement this API in its current form?

To the best of my knowledge, it will remain possible to provide a
Jetpack extension running in its own process with references to window
and document objects from a content process.

The references will be wrapped in CPOWs, and there will be some
restrictions on what the extension can do with them. In particular, it's
not clear from my reading of the Electrolysis and CPOW documentation
that it'll be possible for the extension to register an event handler on
a content node. But it seems like it'll be possible to modify the
content DOM.

Benjamin Smedberg and Atul Varma know more about the restrictions and
their impact on this API, though. cc:ing them for their feedback.

> 6) My understanding is that page mods were also part of jetpack
> prototype. Did anyone monitor feedback on the prototype's API? I
> haven't tried to find it, which makes me feel we're designing APIs in
> vacuum, which in turn makes me a bit uncomfortable :)

I and other folks have been monitoring feedback on the prototype APIs
generally and absorbing as much feedback as we can find about similar
page modification APIs (Greasemonkey, Chrome Extensions, Stylish, etc.).
The folks in this forum have also provided very helpful input on the
proposed Jetpack SDK APIs.

Nevertheless, we are in something of a vacuum, and that's to some degree
inevitable. Generally, I would bias towards implementing our best guess
based on the knowledge we currently have and then iterating on the APIs
based on the feedback we receive from folks trying them out, as trying
to get them exactly right the first time without real world testing is
difficult and error-prone!

-myk

Benjamin Smedberg

Apr 9, 2010, 12:25:13 PM
to mozilla-la...@googlegroups.com
On 4/8/10 8:06 PM, Myk Melez wrote:

>> 5) One thing that bothered me much is how this API is going to work in
>> the multi-process Firefox. As far as I understand, the communication
>> between JS of different processes will not be entirely transparent
>> <https://wiki.mozilla.org/Electrolysis/CPOW>, and in general the
>> process running the jetpack code may be different from the process
>> holding content page, so currently suggested approach will likely not
>> be implementable in the multi-process world.
>>
>> Does it still make sense to implement this API in its current form?

> To the best of my knowledge, it will remain possible to provide a
> Jetpack extension running in its own process with references to window
> and document objects from a content process.

I must admit this thread has thoroughly confused me about how pagemods are
supposed to work. There seem to be two fundamentally different models:

A. run a script in the web page context, and let it communicate with the
jetpack via postMessage-style APIs.

This is trivially straightforward to do in a multi-process world, because
the jetpack process communicates with content entirely via asynchronous
messages. But there are issues with polluting the content script namespace
(e.g. if the jetpack needs to define functions).

B. run a script in the jetpack context and pass it the window/document for a
page being loaded. This requires CPOW wrappers, which have some limitations
regarding lifetime. Under normal circumstances the jetpack can obtain a
reference to a content object (window/document/node/etc). But content cannot
obtain a reference to a jetpack-provided object. This means that callback
functions will fail, e.g.

contentDocument.foo = function() { ... };
fails because you're trying to pass a function which will be referenced by
content.

There are specific exceptions that we can make:

* allow the jetpack to add global functions (on the window object) which
content can call
* allow the jetpack to add event listeners to the window and toplevel document

We can allow these exceptions because the lifetime of the function/listener
is well-known (we can clear it when the user navigates to a new page).
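The lifetime argument can be illustrated with a small single-process simulation (all names invented; this is not the real wrapper code): the layer that hands jetpack functions to content records each one, so it can drop them all at a well-known point.

```javascript
// Sketch: the wrapper layer tracks every function exported to content
// and clears them on navigation, so their lifetime is always bounded.
function makeContentWindow() {
  const exported = new Set();   // names of jetpack functions alive in content
  const win = {};
  return {
    win,
    exportFunction(name, fn) {
      win[name] = fn;
      exported.add(name);
    },
    navigate() {                // user navigates away: clear everything
      for (const name of exported) delete win[name];
      exported.clear();
    },
  };
}

const page = makeContentWindow();
page.exportFunction("greet", () => "hi from the jetpack");
page.win.greet();               // callable from "content" while the page lives
page.navigate();
typeof page.win.greet;          // "undefined": nothing dangles after teardown
```

An arbitrary callback passed into content (Benjamin's `contentDocument.foo` example) has no such clearing point, which is why it can't get the same exception.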

For the record, I think one of the explicit use cases that should be called
out is jetpacks adding methods and properties to the global window, i.e. so
we could have implemented window.geolocation as a jetpack, and we could
implement window.camera or window.microphone as a jetpack in the future.

Also note that other pieces of jetpack are probably going to interact with
pagemods: for example, for content context menus jetpacks are going to want
to know which DOM element was clicked in order to set context menu options
correctly for that element and its parents.

--BDS

Myk Melez

Apr 13, 2010, 12:33:46 PM
to mozilla-la...@googlegroups.com, Benjamin Smedberg
On 04/09/2010 09:25 AM, Benjamin Smedberg wrote:
> A. run a script in the web page context, and let it communicate with
> the jetpack via postMessage-style APIs.
This is basically the API described in JEP 107, except that the
message-passing API is not yet defined by that JEP.

> B. run a script in the jetpack context and pass it the window/document
> for a page being loaded. This requires CPOW wrappers, which have some
> limitations regarding lifetime.

This is the API that Nickolay has proposed. Despite the limitations
imposed by the requirement for CPOW wrappers, its developer ergonomics
appeal to me. It's not yet clear what the relative security implications
are, however.

> Under normal circumstances the jetpack can obtain a reference to a
> content object (window/document/node/etc). But content cannot obtain a
> reference to a jetpack-provided object. This means that callback
> functions will fail, e.g.
>
> contentDocument.foo = function() { ... };
> fails because you're trying to pass a function which will be
> referenced by content.
>
> There are specific exceptions that we can make:
>
> * allow the jetpack to add global functions (on the window object)
> which content can call
> * allow the jetpack to add event listeners to the window and toplevel
> document

As long as the latter can be a capturing listener, these exceptions
should enable add-ons to do just about anything they'd like to do
(except perhaps handle non-bubbling events?).

> For the record, I think one of the explicit use cases that should be
> called out is jetpacks adding methods and properties to the global
> window, i.e. so we could have implemented window.geolocation as a
> jetpack, and we could implement window.camera or window.microphone as
> a jetpack in the future.

Yes, good point, these are great use cases.

> Also note that other pieces of jetpack are probably going to interact
> with pagemods: for example, for content context menus jetpacks are
> going to want to know which DOM element was clicked in order to set
> context menu options correctly for that element and its parents.

The Context Menu API takes care of showing/hiding context menu items
based on context (which API consumers specify via CSS selectors), so
add-ons don't have to know which DOM element is context-activated when
the context menu is initially displayed.

They do, however, obtain a reference to the element when one of their
context menu items is activated. And there may be some interactions with
page mods at that point.

-myk

Nickolay Ponomarev

Apr 15, 2010, 4:23:13 PM
to mozilla-la...@googlegroups.com
On Tue, Apr 13, 2010 at 8:33 PM, Myk Melez <m...@mozilla.org> wrote:
 On 04/09/2010 09:25 AM, Benjamin Smedberg wrote:
A. run a script in the web page context, and let it communicate with the jetpack via postMessage-style APIs.
This is basically the API described in JEP 107, except that the message-passing API is not yet defined by that JEP.


B. run a script in the jetpack context and pass it the window/document for a page being loaded. This requires CPOW wrappers, which have some limitations regarding lifetime.
This is the API that Nickolay has proposed. Despite the limitations imposed by the requirement for CPOW wrappers, its developer ergonomics appeal to me. It's not yet clear what the relative security implications are, however.

I'll note that I implemented (B) only because it was the easiest to implement and the JEP didn't give any rationale for implementing it either way.

(A) - running a script in the web page context (as in inserted via <script src=...>) - means we can't give it any additional privileges (e.g. by listening for postMessage'd requests asking to do something that requires chrome permissions or by providing additional APIs like GM_* in Greasemonkey). It's fine for simple scripts, but not in general, I think.

I think it makes the most sense to (C) run page mods in the content processes (to avoid getting involved with CPOWs and their limitations), but in a separate context from the page (to make it possible to write page mods that can do things that we don't want to expose to regular pages). My understanding is that it is similar to what Google Chrome does and similar to what Greasemonkey does.

In a single-process model it just means we load separate instances of the page mod for different pages and don't let them access the main Jetpack/Firefox APIs directly. The web page could be operated on via a reference to its document (wrapped in an XPCNativeWrapper by default, perhaps).
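A hedged sketch of what (C) might look like in that single-process model (all names and the mechanism are invented for illustration; a real implementation would presumably use a proper sandboxed context rather than `new Function`):

```javascript
// Each page gets a fresh page-mod instance that sees only a restricted
// view of the document, not the Jetpack/Firefox APIs.
function runPageMod(modSource, document) {
  const view = {
    // a deliberately small surface handed to the mod
    getElementById: id => document.elements[id] || null,
    setTitle: title => { document.title = title; },
  };
  // Compile the mod source in its own scope; its `document` binding
  // is the restricted view, not the real document or any chrome object.
  const mod = new Function("document", modSource);
  mod(view);
}

const doc = { title: "old title", elements: { header: { text: "hi" } } };
runPageMod("document.setTitle('modified by page mod')", doc);
doc.title; // "modified by page mod"
```

The design point is the argument from the paragraph above: the mod can do things regular page script can't (whatever `view` chooses to expose), without ever holding a direct reference to privileged Jetpack objects.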

Under normal circumstances the jetpack can obtain a reference to a content object (window/document/node/etc). But content cannot obtain a reference to a jetpack-provided object. This means that callback functions will fail, e.g.

contentDocument.foo = function() { ... };
fails because you're trying to pass a function which will be referenced by content.

There are specific exceptions that we can make:

* allow the jetpack to add global functions (on the window object) which content can call
* allow the jetpack to add event listeners to the window and toplevel document
As long as the latter can be a capturing listener, these exceptions should enable add-ons to do just about anything they'd like to do (except perhaps handle non-bubbling events?).

Not interact with page-defined (JS) APIs usefully though, which makes it a serious limitation, no? There should be a way to run JS in the content process.

So perhaps the API should be (B) initially with (C) implemented on top of it, e.g.

new ScriptMod({
  includes: [...],
  contentScript: require("self").data.url("content-script.js") // data.url example from JEP 106
});

Nickolay

Nickolay Ponomarev

Apr 15, 2010, 4:39:43 PM
to Myk Melez, mozilla-la...@googlegroups.com
FYI: I've updated the wiki page and the implementation this weekend according to the rest of your feedback. I'm not sure we reached a conclusion on the "include" format issue though, so I haven't worked on changes to the supported format yet.


On Fri, Apr 9, 2010 at 4:06 AM, Myk Melez <m...@mozilla.org> wrote:
Thanks, those are very interesting! I'm worried about the same security issues, although I wouldn't necessarily make the same decisions.

In particular, I'm not sure it's better to prevent developers from globbing the TLD in the include declaration if it just causes them to include all sites and then do the filtering in their callbacks, since it will be easier for reviewers and static analysis tools to detect dangerous globs in the declaration than the callbacks.

I would be more inclined to provide a simple, relatively secure format that satisfies common use cases along with the option to use regular expressions or filter functions (which reviewers and static analysis tools would scrutinize more carefully) for special cases.

This makes no sense to me; I thought the issue here is a matter of trust - e.g. some scripts might want to inject information about the user's Gmail contacts into some pages, but not into every page the user visits. Such scripts can't specify a mozilla.* (or mozilla.tld) pattern just because they trust Mozilla; they must specify domains they know are owned by Mozilla. Specifying "mozilla.*" is equivalent to specifying "*" from the trust perspective.

Nickolay

Nickolay Ponomarev

Apr 16, 2010, 1:38:26 PM
to Benjamin Smedberg, mozilla-la...@googlegroups.com
On Fri, Apr 16, 2010 at 7:45 PM, Benjamin Smedberg <bsme...@mozilla.com> wrote:

On 4/15/10 4:23 PM, Nickolay Ponomarev wrote:

Not interact with page-defined (JS) APIs usefully though, which makes it
a serious limitation, no? There should be a way to run JS in the content
process.

Sure it can: it can call functions provided by the page, and get arbitrary JS properties off of elements, for example:

page script:
document.getElementById('foo').myProperty = 17
function tryIt(msg) { ... }

jetpack script:
document.getElementById('foo').myProperty // retrieves 17
document.defaultView.tryIt(msg); // calls tryIt in the content process

What it can't do is pass a callback function. The following would fail:

document.defaultView.tryIt(function() { ... });

Yeah, this is what I talked about. There are things you can do and there are things you can't.

What if you want to use something like gmail's Greasemonkey API to get notified of changes in the web app? <http://code.google.com/p/gmail-greasemonkey/wiki/GmailGreasemonkey10API>

I think it makes the most sense to (C) run page mods in the content
processes (to avoid getting involved with CPOWs and their limitations),
but in a separate context from the page (to make it possible to write
page mods that can do things that we don't want to expose to regular
pages). My understanding is that it is similar to what Google Chrome
does and similar to what Greasemonkey does.

Hrm. That's attractive in some ways, but it breaks the normal jetpack behavior of being a single script that does everything. I'm not sure it's worth breaking that programming model.
 
My gut feeling is that breaking the "you can register callbacks" model (without an easy workaround even!) is no better.

There are two kinds of scripts - those that just modify each page separately and those that need a global "brain". The latter would benefit from what you called the "normal jetpack behavior", and the former are what I was thinking about when I suggested (C).

Note that my suggestion was to implement (B) and let people do (C) if they need transparent interaction with content.
 
And if you did this, how would the pagemod script running in the content process talk with the "main" jetpack context?

Via postMessage or a similar API presumably; I still don't grok the limitations imposed by the multi-process model on what can be allowed.

Nickolay


Dietrich Ayala

Apr 28, 2010, 5:52:08 PM
to mozilla-la...@googlegroups.com, Benjamin Smedberg
>> For the record, I think one of the explicit use cases that should be
>> called out is jetpacks adding methods and properties to the global window,
>> i.e. so we could have implemented window.geolocation as a jetpack, and we
>> could implement window.camera or window.microphone as a jetpack in the
>> future.
>
> Yes, good point, these are great use cases.

I added the window.* ability to the use-cases section of the JEP.

I didn't add anything to the spec itself though. Should it be a
property of a pageMod options object? Or a special value for the
include property? A separate API?

Nickolay Ponomarev

May 25, 2010, 8:11:04 PM
to Benjamin Smedberg, mozilla-la...@googlegroups.com, Myk Melez, Atul Varma
On Fri, Apr 9, 2010 at 8:08 PM, Benjamin Smedberg <bsme...@mozilla.com> wrote:
On 4/8/10 8:06 PM, Myk Melez wrote:

5) One thing that bothered me much is how this API is going to work in
the multi-process Firefox. As far as I understand, the communication
between JS of different processes will not be entirely transparent
<https://wiki.mozilla.org/Electrolysis/CPOW>, and in general the
process running the jetpack code may be different from the process
holding content page, so currently suggested approach will likely not
be implementable in the multi-process world.

Does it still make sense to implement this API in its current form?

To the best of my knowledge, it will remain possible to provide a
Jetpack extension running in its own process with references to window
and document objects from a content process.

I must admit this thread has thoroughly confused me about how pagemods are supposed to work. There seem to be two fundamentally different models:

A. run a script in the web page context, and let it communicate with the jetpack via postMessage-style APIs.

This is trivially straightforward to do in a multi-process world, because the jetpack process communicates with content entirely via asynchronous messages. But there are issues with polluting the content script namespace (e.g. if the jetpack needs to define functions).
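[Editor's sketch: a minimal, standalone illustration of model A, in which the jetpack and content sides share no live objects and talk only through serialized asynchronous-style messages. Every name here (makeChannel, contentScript, etc.) is invented for illustration; this is not the Jetpack API.]

```javascript
// A tiny in-memory stand-in for a cross-process message channel.
// Delivery is synchronous here only to keep the example short; a real
// channel would be asynchronous.
function makeChannel() {
  var handlers = [];
  return {
    onMessage: function (fn) { handlers.push(fn); },
    // Messages round-trip through JSON, as a process boundary would
    // force; no live object references can cross.
    postMessage: function (data) {
      var json = JSON.stringify(data);
      handlers.forEach(function (fn) { fn(JSON.parse(json)); });
    }
  };
}

// "Content" side: reports every image URL it finds on the page.
function contentScript(channel, imageURLs) {
  imageURLs.forEach(function (url) {
    channel.postMessage({ type: "image", url: url });
  });
}

// "Jetpack" side: collects the reports without ever touching the DOM.
var channel = makeChannel();
var seen = [];
channel.onMessage(function (msg) {
  if (msg.type === "image") seen.push(msg.url);
});
contentScript(channel, ["http://a.example/x.png", "http://b.example/y.png"]);
console.log(seen.length); // 2
```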

B. run a script in the jetpack context and pass it the window/document for a page being loaded. This requires CPOW wrappers, which have some limitations regarding lifetime. Under normal circumstances the jetpack can obtain a reference to a content object (window/document/node/etc). But content cannot obtain a reference to a jetpack-provided object. This means that callback functions will fail, e.g.

I posted a summary of e10s discussion at https://wiki.mozilla.org/User:Asqueella/JEP_107#Discussion_-_e10s

Can I assume we reached consensus that (B) is a useful baseline? Does anyone else think my suggested addition of (C) "Run page mods in the content processes, but in a separate context from the page" is useful? Who decides if the necessary platform bits for it will be implemented?
 
contentDocument.foo = function() { ... };
fails because you're trying to pass a function which will be referenced by content.

There are specific exceptions that we can make:

* allow the jetpack to add global functions (on the window object) which content can call
* allow the jetpack to add event listeners to the window and toplevel document

We can allow these exceptions because the lifetime of the function/listener is well-known (we can clear it when the user navigates to a new page).
 
Benjamin, I'd like to check if I understand the reason behind this: the issue with the event listeners and callbacks in general is that unlike a function exported as a property on |window|, it's possible that the callback stops being referenced from content code and we'd like to GC it. But on the other hand, we don't want the GC to have to deal with content-chrome cycles. Right?

Nickolay

Benjamin Smedberg

May 26, 2010, 1:38:38 AM
to Nickolay Ponomarev, mozilla-la...@googlegroups.com, Myk Melez, Atul Varma
On 5/25/10 5:11 PM, Nickolay Ponomarev wrote:

> I posted a summary of e10s discussion at
> https://wiki.mozilla.org/User:Asqueella/JEP_107#Discussion_-_e10s

Awesome.

>
> Can I assume we reached consensus that (B) is a useful baseline? Does
> anyone else think my suggested addition of (C) "Run page mods in the
> content processes, but in a separate context from the page" is useful?
> Who decides if the necessary platform bits for it will be implemented?

I believe C is useful, for those developers who need the additional
functionality. I'm worried about making the message-passing API useful, so
we might start with B and figure out from experience what the pain-points
are first.

I don't know who decides!

>
> contentDocument.foo = function() { ... };
> fails because you're trying to pass a function which will be
> referenced by content.
>
> There are specific exceptions that we can make:
>
> * allow the jetpack to add global functions (on the window object)
> which content can call
> * allow the jetpack to add event listeners to the window and
> toplevel document
>
> We can allow these exceptions because the lifetime of the
> function/listener is well-known (we can clear it when the user
> navigates to a new page).
>
>
> Benjamin, I'd like to check if I understand the reason behind this: the
> issue with the event listeners and callbacks in general is that unlike a
> function exported as a property on |window|, it's possible that the
> callback stops being referenced from content code and we'd like to GC
> it. But on the other hand, we don't want the GC to have to deal with
> content-chrome cycles. Right?

Yes, in general. The basic problem is a reference cycle:

content holds jetpack object, jetpack holds content object via a closure,
neither of them is ever freed. (We don't detect GC cycles across processes.)

For objects which are attached to the window, we can explicitly clear the
reference when the window is closed/navigated. For arbitrary objects, there
isn't an obvious point or reference from which we can break the cycle,
leading to a permanent leak.
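[Editor's sketch of the cycle problem and the window-scoped escape hatch described above. The names are invented and the model is deliberately simplified; real CPOWs and cross-process GC are far more involved.]

```javascript
// Stand-in for the chrome-side registry of jetpack functions that
// content is allowed to hold: globals and listeners attached to |window|.
function makeWindowProxy() {
  var exported = {}; // name -> jetpack-provided function
  return {
    exportFunction: function (name, fn) { exported[name] = fn; },
    call: function (name) { return exported[name](); },
    // Because every exported reference hangs off the window, navigation
    // gives one well-known point at which to drop them all, breaking any
    // content<->jetpack cycle without cross-process cycle collection.
    navigate: function () { exported = {}; },
    exportCount: function () { return Object.keys(exported).length; }
  };
}

var win = makeWindowProxy();
win.exportFunction("greet", function () { return "hi from the jetpack"; });
console.log(win.call("greet"));  // "hi from the jetpack"
win.navigate();
console.log(win.exportCount()); // 0 -- references cleared at navigation
```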

--BDS

Myk Melez

May 26, 2010, 4:07:41 PM
to Nickolay Ponomarev, Benjamin Smedberg, mozilla-la...@googlegroups.com, Atul Varma
On 05/25/2010 05:11 PM, Nickolay Ponomarev wrote:
> I posted a summary of e10s discussion at
> https://wiki.mozilla.org/User:Asqueella/JEP_107#Discussion_-_e10s
That's a very good summary of where we're at, and overall this is a
fantastic proposal that specifies just the right set of functionality
for a very useful initial implementation of the API.

I don't think this spec needs any significant changes, although I might
recommend some minor ones for API consistency. I'll look at those in
more detail and make some recommendations in a bit. And I'll also take
another look at the include syntax.

> Can I assume we reached consensus that (B) is a useful baseline? Does
> anyone else think my suggested addition of (C) "Run page mods in the
> content processes, but in a separate context from the page" is useful?

From the perspective of developer ergonomics, the ideal model is one in
which a single addon script running in its own context has both full
access to the Jetpack APIs and complete access to the DOM objects of a
page being modified, including the ability for content to obtain
references to addon objects (i.e. for addons to register event listeners
on DOM objects).

We really want to run addons (and eventually content pages) in their own
processes, however. And given the technical limitations inherent to
doing so, the question then becomes what feasible model best
approximates that ideal experience.

I think the answer to that question is in fact your suggested model C.

Most importantly, it preserves the ability for page mods to register
event callbacks and otherwise interact with the DOM objects of pages in
the same ways with which addon developers are already familiar (based on
their experiences with web, Greasemonkey, and traditional addon
development).

It also preserves the content namespace, which has both robustness and
privacy advantages, since it reduces the risk of both name conflicts and
detection by a content page or another addon that a particular addon is
in use.

It does require addon developers to put their page mods into separate
scripts, which is unfortunate. But I think that's the least costly
tradeoff. And in the case of a page mod that doesn't need access to
addon APIs (e.g. one that modifies the appearance and behavior of a page
without incorporating information from chrome or other pages), which
seems like a common case, it's pretty cheap.

That still leaves the question of how the page mod context communicates
with the main addon context in the addon process. Can we use CPOWs to
mediate interaction (modulo restrictions on passing content references
through to the addon context)? Or should we implement an asynchronous
JSON pipe for communication between them? The considerations here are
unclear.

> Who decides if the necessary platform bits for it will be implemented?

Ultimately, the platform folks decide whether or not to fix platform
dependencies for the functionality we want to provide in the SDK. They
seem keen to see Jetpack succeed, however, and the e10s team in
particular is focused right now on standing up support for addon process
isolation, so we have a good opportunity to make our case and get
platform folks to make the necessary changes we identify.

-myk

Myk Melez

May 26, 2010, 7:25:38 PM
to mozilla-labs-jetpack
On May 26, 1:07 pm, Myk Melez <m...@mozilla.org> wrote:
> I don't think this spec needs any significant changes, although I might
> recommend some minor ones for API consistency. I'll look at those in
> more detail and make some recommendations in a bit. And I'll also take
> another look at the include syntax.

Ok, I rereviewed the API, and overall it still looks great, with no
major issues, just a few minor ones...

Regarding the include syntax, after looking at it again, and reviewing
the various syntaxes used by similar APIs, I think this proposed
syntax is perfect for the initial implementation, and I wouldn't make
any changes to it, although we can certainly consider additions/
changes in the future.

(I'm particularly interested in filter functions in a future phase of
development, because I'd rather we provide a mechanism for advanced
developers to specify complex rules in a transparent fashion that is
relatively easy to review and audit than have those authors invent
their own more opaque mechanisms. And the current implementation seems
forwards-compatible with one that allows such functions to be
provided. In fact, I don't think they need their own "filter" option;
I would instead specify them as function values for the include/
exclude options.)
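[Editor's sketch: one way string patterns and filter functions could share the include/exclude options, with each value normalized to a predicate. All names are invented and the string-matching rule is deliberately simplistic; this is not the proposed syntax.]

```javascript
// Normalize a rule (string pattern or filter function) to a predicate.
function toPredicate(rule) {
  if (typeof rule === "function")
    return rule; // advanced case: an arbitrary filter function
  if (rule === "*")
    return function () { return true; };
  // Simple string rule: match by host suffix, purely for illustration.
  return function (url) {
    var host = url.replace(/^[a-z]+:\/\//, "").split("/")[0];
    return host === rule || host.endsWith("." + rule);
  };
}

// A URL matches if some include predicate passes and no exclude does.
function makeMatcher(include, exclude) {
  var inc = [].concat(include).map(toPredicate);
  var exc = [].concat(exclude || []).map(toPredicate);
  return function (url) {
    return inc.some(function (p) { return p(url); }) &&
           !exc.some(function (p) { return p(url); });
  };
}

var matches = makeMatcher(
  ["example.org", function (url) { return url.indexOf("debug=1") !== -1; }],
  ["private.example.org"]);
console.log(matches("http://www.example.org/page"));     // true
console.log(matches("http://private.example.org/page")); // false
console.log(matches("http://other.test/?debug=1"));      // true
```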

I would only make two changes to the way we process includes...

First, I would include no pages by default and require a value for the
include property in order for the mod to apply to pages, so that
developers make an explicit and intentional choice to apply a mod to
all pages (which has performance and other considerations) rather than
having that be the default. This change is consistent with Google
Chrome's content scripts, although different from Greasemonkey.

Second, I think we should restrict the schemes that URLs are allowed
to match to http, https, and ftp for the initial implementation, as
providing access to other schemes (file, chrome, etc.) has security
implications that we need to think through carefully.

Regarding the names of the callback properties, I had earlier
suggested the name onDOMReady for the DOMContentLoaded callback. Since
then, however, we've been standardizing on the simpler onReady in
other APIs, taking our cue from jQuery's "ready" method, and should do
so here as well.

Similarly, I had previously suggested the name onWindowCreate for the
content-document-global-created notification, but now I think we
should simplify to just onCreate (or, even better, onStart), which is
simpler and just as memorable (if not more so) as "the earliest
possible moment at which you can access the window (and document
object, I suspect, although not the complete DOM) of a page that the
browser has started to load."

And the argument the callback functions are passed should be called
simply "window" in the documentation rather than wrappedWindow, as
developers are unlikely to encounter the differences between the
object and a non-wrapped window object often enough to justify
emphasizing that the object is wrapped (although it's certainly worth
summarizing the differences in a sidenote).

Regarding the enable/disable methods, in other APIs that apply changes
to the browser and/or cache references to constructed objects, like
context menu, widget, page worker, and panel, we've recently been
standardizing on global add/remove methods that make/unmake those
changes instead of methods on the constructed instances. We should do
the same here, requiring that a script mod be added before it becomes
active, i.e.:

let mods = require("page-mods");

(function () {
  let mod = mods.ScriptMod({ ... });
})();
// The constructed object was never added, so it was never cached, and it
// can be GCed once it falls out of scope when the function returns.

(function () {
  let mod = mods.ScriptMod({ ... });
  mods.add(mod);
})();
// The constructed object persists until mods.remove() is called on it
// or the addon is uninstalled.

// `add` returns the added object, so you can also simplify to:
let mod = mods.add(mods.ScriptMod({ ... }));

The purpose of this syntax is to distinguish between objects that have
merely been constructed (and can be GCed when they fall out of scope)
and those that persist, making the behavior of constructed objects
resemble the behavior JavaScript developers are used to, where mere
construction of an object doesn't cause it to persist outside of the
scope in which it is defined.

This makes the API slightly more complicated than having a mod be
activated upon construction, but it's a worthy tradeoff for matching
developers' expectations about the behavior of the language.

Regarding the issue of cleaning up objects referenced from content
when an addon is uninstalled, for APIs that create such references
themselves (context menu, page worker, widget, etc.), the API itself
takes responsibility for doing this. However, I don't see how we can
implement that for content references created by mods, so instead
we'll just have to encourage addon developers to do so via the unload
module (or whatever other methods we invent in the future).

And regarding providing an example of using jQuery in a mod, I totally
agree that it should be possible to use not only jQuery but also other
JavaScript libraries in mods, so we should definitely provide an
example if we can!

However, I wouldn't block implementation on it. In fact, in our last
weekly meeting, we talked about jQuery and other library integration
(not just for page mods but also for page worker, widget, panel,
etc.), which Atul has been planning to work on, and decided not to
focus on it for this cycle, as Atul has too much other high priority
work, although if someone else wants to take this on, I welcome them
to do so!

Finally, one last issue we should be aware of is that currently a
single instance of a script mod handles the modification of all
included pages, but in the distant future (i.e. after Firefox 4), not
only will there be a separate addon process, but there will be
multiple content processes as well. Perhaps there is something we
should do now to prepare for that world, maybe in conjunction with
whatever we do for e10s generally!

-myk

Brian Warner

May 27, 2010, 11:20:48 PM
to mozilla-la...@googlegroups.com

So, I'm probably missing part of this conversation, but since we're
going with "option C" (in which a separate script is delivered to the
content process, to be executed near-but-not-quite-inside the content
context), don't we need an API which specifies JS files, rather than
function objects? Something like:

var data = require("self").data
var mymod = mods.ScriptMod({
  include: "example.org",
  onCreateScript: data.url("add-window.camera-neatstuff.js"),
  onReadyScript: data.url("remove-flash-tags.js")
})
mods.add(mymod)
mods.add(mymod)

(using data.url() requires that the ScriptMod code be able to pull the
script-to-be-injected out of that chrome: URL. It should also be legal
to use data.load() and therefore pass in a string).

If the API were to pass function objects to onCreate:, I think we'd have
to decompile the function, pass its bytecode over the wire to the
content process, and then somehow reconstitute it before execution.

We'll need a convention about how these scripts will get invoked:
perhaps they should be specified as CommonJS modules that create an
exports.main function that receives a "window" argument. Their ability
to use require() would be limited, but perhaps that's the vector through
which we give them access to libraries like jQuery.

On 5/26/10 4:25 PM, Myk Melez wrote:
>
> First, I would include no pages by default and require a value for the
> include property in order for the mod to apply to pages,

Definitely. From an auditing point of view, each pagemod gives the
jetpack author complete control over the user's interaction with the
selected domains, including the ability to see+use all their credentials
(gmail passwords, etc). So we need to give reviewers an easy and
reliable way to identify which domains will be modded. If the default
were "all pages", then the following declaration, while looking fairly
benign, would in fact give the jetpack complete control over all
domains:

var mymod = mods.ScriptMod({inc1ude: "obscure.example.org", ...});

(because props.include===undefined, and the code will ignore the
misspelled "inc-ONE-ude" property). That's a big review failure.

> Second, I think we should restrict the schemes that URLs are allowed
> to match to http, https, and ftp for the initial implementation, as
> providing access to other schemes (file, chrome, etc.) has security
> implications that we need to think through carefully.

Absolutely. Pagemodding a "chrome:" URL would probably give the jetpack
full control over the browser, equivalent to giving the jetpack full
chrome access directly, and that's the first thing we're trying to take
away from the jetpacks that don't need it.

> And the argument the callback functions are passed should be called
> simply "window" in the documentation rather than wrappedWindow,

One idea that occurred to me: what if the onCreate/onStart method
receives just the one "window" argument, but the onReady() method
receives both "window" and "document" ? The second argument is nominally
redundant, but then anyone looking at the docs would realize that the
document is not available to the onStart() code; the parameter would be
made notable by its absence.

> currently a single instance of a script mod handles the modification

> of all included pages, ...

The way I'd like to conceptualize this is a single "controller", which
lives in the main jetpack code (and runs in the "jetpack process"), with
connections to a number of "workers" (executing the other code, in the
content process, each being a separate instance of the onReadyScript
code), connected with JSON pipes in a "star"/"hub" topology: each worker
only connects to the controller. The controller needs to hear about new
workers being created (and maybe destroyed), so it can communicate with
them. Perhaps the ScriptMod() arguments should include a callback
function that is invoked for each new application of the mod script, and
given the near end of the JSON pipe, something like:

var data = require("self").data
var mymod = mods.ScriptMod({
  include: "*",
  onReadyScript: data.url("remove-blacklisted-images.js"),
  onNewPage: function (pipe) {
    // Set up the pipe callback to accept messages. Each message
    // contains the URL of an image. The controller checks
    // each imageURL against a single common blacklist, and
    // returns a boolean.
  }
})

The controller would be free to set up whatever context it wants for
each page, so it could treat them separately (i.e. it can distinguish
messages coming from modded page A vs modded page B, and respond
differently to each). Or the controller could have a UI which shows the
user a list of pages-under-control and sends messages to them in
response to user choices. Imagine a jetpack which finds all the
audio/video elements on all pages and puts a list of volume/pause
controls on a popup panel, so you could quickly figure out which
background tab (out of hundreds) was playing that horrible music and
pause it without also pausing the youtube video you're watching in the
foreground. The controller would send a "pause your audio" message to
the one modscript living in a specific page, and not send anything to
the rest.
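[Editor's sketch of the controller-plus-workers star topology just described: the controller keeps one pipe per modded page and can address a single page, as in the pause-the-audio example. All names are invented for illustration.]

```javascript
// The controller lives in the jetpack process and tracks worker pipes.
function makeController() {
  var pipes = []; // one entry per live modded page
  return {
    onNewPage: function (pipe) { pipes.push(pipe); },
    onPageGone: function (pipe) {
      pipes = pipes.filter(function (p) { return p !== pipe; });
    },
    // Send to exactly one worker rather than broadcasting to all.
    send: function (index, msg) { pipes[index].receive(msg); },
    count: function () { return pipes.length; }
  };
}

// Each worker is the near end of a JSON pipe into one content page.
function makeWorkerPipe(log, name) {
  return { receive: function (msg) { log.push(name + ":" + msg.type); } };
}

var log = [];
var controller = makeController();
var pageA = makeWorkerPipe(log, "A");
var pageB = makeWorkerPipe(log, "B");
controller.onNewPage(pageA);
controller.onNewPage(pageB);
controller.send(1, { type: "pause-audio" }); // only page B gets it
controller.onPageGone(pageA);
console.log(log);                // [ 'B:pause-audio' ]
console.log(controller.count()); // 1
```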

Instead of a "pipe" argument, maybe the onNewPage function should get a
"control" object, from which it can manipulate the pipe, ask about the
URL from which the target page was loaded, and register to hear about
the page going away. The latter would be necessary for the
all-volume-control jetpack to remove closed pages from its list.


cheers,
-Brian

Myk Melez

May 28, 2010, 12:34:19 PM
to mozilla-la...@googlegroups.com, Brian Warner
On 05/27/2010 08:20 PM, Brian Warner wrote:
> So, I'm probably missing part of this conversation, but since we're
> going with "option C" (in which a separate script is delivered to the
> content process, to be executed near-but-not-quite-inside the content
> context), don't we need an API which specifies JS files, rather than
> function objects?
Yes, that's right.

> Something like:
>
> var data = require("self").data
> var mymod = mods.ScriptMod(
> { include: "example.org",
> onCreateScript: data.url("add-window.camera-neatstuff.js"),
> onReadyScript: data.url("remove-flash-tags.js")
> })
> mods.add(mymod)
>
> (using data.url() requires that the ScriptMod code be able to pull the
> script-to-be-injected out of that chrome: URL. It should also be legal
> to use data.load() and therefore pass in a string).

Yup, that's what we need to do, although I prefer:

main.js:

var data = require("self").data;
var mymod = mods.ScriptMod({
  include: "example.org",
  script: data.url("my-example-org-mod.js")
});
mods.add(mymod);

my-example-org-mod.js:

function onStart() {
  // add-window.camera-neatstuff
}

function onReady() {
  // remove-flash-tags
}

That way it is only necessary to provide (and update, over time) a
single file containing all mod code.

> If the API were to pass function objects to onCreate:, I think we'd have
> to decompile the function, pass its bytecode over the wire to the
> content process, and then somehow reconstitute it before execution.

Right. This would be the uneval/eval approach Benjamin dislikes.

> We'll need a convention about how these scripts will get invoked:
> perhaps they should be specified as CommonJS modules that create an
> exports.main function that receives a "window" argument. Their ability
> to use require() would be limited, but perhaps that's the vector through
> which we give them access to libraries like jQuery.

This seems ok to me, in which case my-example-org-mod.js would be:

exports.onStart = function () {
  // add-window.camera-neatstuff
}

exports.onReady = function () {
  // remove-flash-tags
}


However, I'm a bit worried about the potential for confusion due to
conflating the two spaces by providing both with the same interface for
importing functionality but not allowing one to import the same
functionality as the other.

On the other hand, we do need a way for the mod context to access
self.data.url, JavaScript libraries like jQuery, and probably some other
functionality. And modules are our hammer.

If mods are modules, then it seems like they should go into lib/ rather
than data/, since that's where packages store modules in our
CommonJS-based directory layout.

> One idea that occurred to me: what if the onCreate/onStart method
> receives just the one "window" argument, but the onReady() method
> receives both "window" and "document" ? The second argument is nominally
> redundant, but then anyone looking at the docs would realize that the
> document is not available to the onStart() code.. the parameter would be
> made notable by its absence.

That seems fine, although I think onStart actually does have access to
the document object; it's just that the document hasn't been parsed
yet.

> The way I'd like to conceptualize this is a single "controller", which
> lives in the main jetpack code (and runs in the "jetpack process"), with
> connections to a number of "workers" (executing the other code, in the
> content process, each being a separate instance of the onReadyScript
> code), connected with JSON pipes in a "star"/"hub" topology:

This seems reasonable at first glance, although I don't yet have a good
enough conceptual model of how it would work. My concern is just making
sure we design the simplest and most ergonomic possible interface that
satisfies the technical requirements of cross-process communication
between the main addon code and the code that runs in the content
process(es).

Can you provide some example code that demonstrates how communication
would work in this model?

-myk

Brian Warner

May 28, 2010, 10:35:39 PM
to Myk Melez, mozilla-la...@googlegroups.com
On 5/28/10 9:34 AM, Myk Melez wrote:

>> We'll need a convention about how these scripts will get invoked:

> This seems ok to me, in which case my-example-org-mod.js would be:


>
> exports.onStart = function () {
> // add-window.camera-neatstuff
> }

I'm guessing you meant:

exports.onStart = function (window) {
  // add-window.camera-neatstuff
}

right?

Passing the window as an argument (versus providing it to the whole
module as a global) seems more in keeping with the "The Number Of
Globals In A CommonJS Module Shall Be Two: require and exports" pattern.
Also, it makes it possible for the onStart() function to selectively
deny access to 'window' to other code in that module, so that add-on
developers can use POLA (principle of least authority) inside the
module, not just between modules. In general, I favor passing around
authorities as function arguments rather than as module globals for this
very reason.

> However, I'm a bit worried about the potential for confusion due to
> conflating the two spaces by providing both with the same interface for
> importing functionality but not allowing one to import the same
> functionality as the other.
>
> On the other hand, we do need a way for the mod context to access
> self.data.url, JavaScript libraries like jQuery, and probably some other
> functionality. And modules are our hammer.

Yeah. Perhaps the first release will not provide a require() function,
and then later (once we figure out our story for the "search path" for
this context and how it differs from the other modules), we can make it
available.

I'm vaguely thinking that the PageMod() constructor, next to the script:
argument, could provide a list of libraries that are made available to
that script. Maybe a mapping, like:

var mymod = mods.ScriptMod({
  include: "example.org",
  script: data.url("my-example-org-mod.js"),
  scriptlibs: { jquery: data.url("jquery.js") }
});

Allowing my-example-org-mod.js to use:

var jq = require("jquery");

(and then of course connect it to window.document somehow).
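[Editor's sketch of the scriptlibs idea: the require() handed to a mod script resolves only the names listed in the scriptlibs mapping. Library loading is faked with an in-memory table; makeModRequire and the registry are invented names.]

```javascript
// Build a require() restricted to this mod's declared scriptlibs.
function makeModRequire(scriptlibs, loadLibrary) {
  var cache = {};
  return function require(name) {
    if (!(name in scriptlibs))
      throw new Error("not in this mod's scriptlibs: " + name);
    if (!(name in cache))
      cache[name] = loadLibrary(scriptlibs[name]); // load once, then cache
    return cache[name];
  };
}

// Fake "library registry" standing in for data.url() resolution.
var libraries = { "resource://jquery.js": { version: "1.4" } };
var require_ = makeModRequire(
  { jquery: "resource://jquery.js" },
  function (url) { return libraries[url]; });

console.log(require_("jquery").version); // "1.4"
try { require_("underscore"); } catch (e) { console.log("blocked"); }
```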


(huh, now, why did I think that the constructor was named PageMod()
instead of ScriptMod()? Doesn't PageMod() sound like a more accurate name?)

> If mods are modules, then it seems like they should go into lib/ rather
> than data/, since that's where packages store modules in our
> CommonJS-based directory layout.

Indeed. I'm not sure how I feel about having data.url() be able to grant
access to things outside data/ . Maybe we need a different API for that
(require("self").lib ?). I'll think about that.

> Can you provide some example code that demonstrates how communication
> would work in this model?

Sure.. I'll work on that over the weekend. Stay tuned.

cheers,
-Brian

Nickolay Ponomarev

May 30, 2010, 10:01:43 AM
to Benjamin Smedberg, mozilla-la...@googlegroups.com, Myk Melez, Atul Varma
On Wed, May 26, 2010 at 9:38 AM, Benjamin Smedberg <benj...@smedbergs.us> wrote:
On 5/25/10 5:11 PM, Nickolay Ponomarev wrote:

I posted a summary of e10s discussion at
https://wiki.mozilla.org/User:Asqueella/JEP_107#Discussion_-_e10s

Awesome.

Can I assume we reached consensus that (B) is a useful baseline? Does
anyone else think my suggested addition of (C) "Run page mods in the
content processes, but in a separate context from the page" is useful?
Who decides if the necessary platform bits for it will be implemented?

I believe C is useful, for those developers who need the additional functionality. I'm worried about making the message-passing API useful, so we might start with B and figure out from experience what the pain-points are first.

Benjamin, I didn't understand what your concern is. Could you elaborate? Since Myk likes C better, I think we'll need it at least eventually, and it's also the model for Chrome's content scripts <http://code.google.com/chrome/extensions/content_scripts.html>

Nickolay

Nickolay Ponomarev

May 30, 2010, 10:50:39 AM
to mozilla-la...@googlegroups.com
On Thu, May 27, 2010 at 3:25 AM, Myk Melez <m...@mozilla.org> wrote:
Ok, I rereviewed the API, and overall it still looks great, with no
major issues, just a few minor ones...

Regarding the include syntax[...]

[...filter functions in a future phase of
development[...]
I would only make two changes to the way we process includes...

First, I would include no pages by default and require a value for the
include property in order for the mod to apply to pages[...]

I made this change. I also went one step further and made the ScriptMod constructor throw when 'include' is not specified, to avoid the situation where the mod silently doesn't run and the developer gets no feedback. I don't see why one would need to create a script mod with no 'include', but we can relax this restriction later, if needed.

Second, I think we should restrict the schemes that URLs are allowed
to match to http, https, and ftp for the initial implementation, as
providing access to other schemes (file, chrome, etc.) has security
implications that we need to think through carefully.

I did not make this change yet. I realize that giving jetpack access to other schemes can let it do something it wouldn't be able to do just by having access to http pages, but as long as it's clear a jetpack has this ability, I think it's OK. (We are not trying to restrict functionality available to jetpacks, are we?)

Currently, "about:" pages are used in tests, we would need a http server for testing otherwise. I also expect being able to mod file:// pages to be useful for development.

How about we restrict "*" to http, https, ftp and let the jetpack author specify "file:*" if he wants to?
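[Editor's sketch of this compromise: a bare "*" matches only http/https/ftp, while other schemes must be named explicitly (e.g. "file:*"). The parsing here is deliberately simplistic and the function name is invented.]

```javascript
var DEFAULT_SCHEMES = ["http", "https", "ftp"];

// Decide whether a pattern's scheme restriction admits the given URL.
function schemeAllowed(pattern, url) {
  var scheme = url.split(":")[0];
  if (pattern === "*")
    return DEFAULT_SCHEMES.indexOf(scheme) !== -1; // "*" stays web-only
  if (pattern.slice(-2) === ":*")                  // e.g. "file:*"
    return scheme === pattern.slice(0, -2);        // explicit opt-in
  return true; // non-wildcard patterns are checked by the URL matcher
}

console.log(schemeAllowed("*", "http://example.org/"));  // true
console.log(schemeAllowed("*", "file:///etc/passwd"));   // false
console.log(schemeAllowed("file:*", "file:///home/me")); // true
```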
 
Regarding the names of the callback properties
I decided not to make these changes, since we might switch to another API altogether.
 
And the argument the callback functions are passed should be called
simply "window" in the documentation rather than wrappedWindow, as
developers are unlikely to encounter the differences between the
object and a non-wrapped window object often enough to justify
emphasizing that the object is wrapped (although it's certainly worth
summarizing the differences in a sidenote).

If they get an XPCNativeWrapper, they will surely encounter the difference soon enough: they can't see the page's JS objects at all, unless they go through .wrappedJSObject.

I think there's value in giving XPCNW by default (and that's all that Chrome gives its content scripts, as far as I can see), but I can see that wrappers generally confuse people...

I'm keeping wrappedWindow for now, pending the decision to just pass an unwrapped value to the callback.

Regarding the enable/disable methods, [...] we've recently been

standardizing on global add/remove methods that make/unmake those
changes instead of methods on the constructed instances. We should do
the same here, requiring that a script mod be added before it becomes
active[...]

I made this change (modeled add/remove after the context menu module, i.e. the methods only take a single ScriptMod and do not have a return value).

Regarding the issue of cleaning up objects referenced from content
when an addon is uninstalled[...]

(or a script mod is remove()d)
 
[...]we'll just have to encourage addon developers to do so via the unload

module (or whatever other methods we invent in the future).
And regarding providing an example of using jQuery in a mod, I totally
agree that it should be possible to use not only jQuery but also other
JavaScript libraries in mods, so we should definitely provide an
example if we can!

However, I wouldn't block implementation on it.

Noted, thanks.
 
Finally, one last issue we should be aware of is that currently a
single instance of a script mod handles the modification of all
included pages, but in the distant future (i.e. after Firefox 4), not
only will there be a separate addon process, but there will be
multiple content processes as well. Perhaps there is something we
should do now to prepare for that world, maybe in conjunction with
whatever we do for e10s generally!

Yes, I'm keeping this in mind when discussing e10s issues.

The wiki page and the implementation are updated according to this reply. I still have to process the more recent messages about the new API ("C").

Nickolay

Nickolay Ponomarev

May 30, 2010, 6:13:04 PM5/30/10
to mozilla-la...@googlegroups.com, Myk Melez, war...@mozilla.com
Hi,

First, I suggest reading about Chrome's Content Scripts and Message Passing if you haven't done so already.

Second, I see 4 main questions discussed here.
  • #1 At which point does the separate script run, how does it declare it wants to do something on-window-created or on-DOM-ready.
  • #2 How do we let the script include common libraries / modularize its code
  • #3 What is the script's global, how does it access page's Window and Document
  • #4 How does the script communicate with the main jetpack
As usual, I posted a summary of points to https://wiki.mozilla.org/User:Asqueella/JEP_107#Discsussion_-_script_context_in_Model_C (if I missed anything important, feel free to add it) and below are my comments. I don't have all the answers, but I wanted to post today anyway.


On Sat, May 29, 2010 at 6:35 AM, Brian Warner <war...@mozilla.com> wrote:
On 5/28/10 9:34 AM, Myk Melez wrote:

#3 What is the script's global, how does it access page's Window and Document
 
>> We'll need a convention about how these scripts will get invoked:
>   exports.onStart = function () {

I'm guessing you meant:
  exports.onStart = function (window) {

right?

Passing the window as an argument (versus providing it to the whole
module as a global) seems more in keeping with the "The Number Of
Globals In A CommonJS Module Shall Be Two: require and exports" pattern.
Also, it makes it possible for the onStart() function to selectively
deny access to 'window' to other code in that module, so that add-on
developers can use POLA (principle of least authority) inside the
module, not just between modules. In general, I favor passing around
authorities as function arguments rather than as module globals for this
very reason.

Both GreaseMonkey and Chrome create a clean object as the script's global with __proto__ set to XPCNativeWrapper(contentWindow) and necessary globals added to it (GM_*, chrome.extension.*)

I think we should implement a scheme like GM's/Chrome's, since it will be the most familiar and intuitive. The rare developers who want to use the principle of least authority can do so in the main jetpack (it can't be given access to any of the page's DOM anyway, so there's no reason to keep it in content process).
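For reference, the GM/Chrome-style global described above can be sketched in plain JavaScript. This is a simplified model, not the real implementation: `contentWindow` here is a stand-in for the XPCNativeWrapper'd page window, and `GM_log` is just a hypothetical example of an injected helper.

```javascript
// Sketch: building a content-script global in the GM/Chrome style.
// In Firefox the page would be wrapped with XPCNativeWrapper; here a
// plain object stands in for the wrapped content window.
const contentWindow = {
  document: { title: "Example page" },
  location: { href: "http://example.org/" },
};

function makeScriptGlobal(wrappedWindow, extraGlobals) {
  // A clean object with its prototype set to the wrapped window, so the
  // script sees document, location, etc. without touching the page.
  const g = Object.create(wrappedWindow);
  // Layer the extension-provided API (GM_*, chrome.extension.*, ...) on top.
  Object.assign(g, extraGlobals);
  return g;
}

// GM_log is a hypothetical helper, named after Greasemonkey's API.
const modGlobal = makeScriptGlobal(contentWindow, {
  GM_log: (msg) => console.log("[mod] " + msg),
});

// The content script reaches the page through the prototype chain...
console.log(modGlobal.document.title); // "Example page"
// ...but properties it defines do not leak onto the page's window.
modGlobal.myHelper = 42;
console.log("myHelper" in contentWindow); // false
```

The point of the extra object layer is exactly the one made above: the script gets convenient page access by default, while its own names stay invisible to content.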


#2 How do we let the script include common libraries / modularize its code

> However, I'm a bit worried about the potential for confusion due to
> conflating the two spaces by providing both with the same interface for
> importing functionality but not allowing one to import the same
> functionality as the other.
>
> On the other hand, we do need a way for the mod context to access
> self.data.url, JavaScript libraries like jQuery, and probably some other
> functionality. And modules are our hammer.

Yeah. Perhaps the first release will not provide a require() function,
and then later (once we figure out our story for the "search path" for
this context and how it differs from the other modules), we can make it
available.

I'm vaguely thinking that the PageMod() constructor, next to the script:
argument, could provide a list of libraries that are made available to
that script. Maybe a mapping, like:

 var mymod = mods.ScriptMod({
   include: "example.org",
   script: data.url("my-example-org-mod.js"),
   scriptlibs: { jquery: data.url("jquery.js") } }
 });

Allowing my-example-org-mod.js to use:

 var jq = require("jquery");

(and then of course connect it to window.document somehow).

(I'd rather not call it "require()" if it's different from the main require.)

I agree with Myk's comments on CommonJS in content scripts; we should also remember that the content scripts (and the related CommonJS machinery) will be reloaded on every single page load. I think that while the CommonJS hammer is attractive, content scripts should generally not be so complex as to require it. We can add it later if there's need.

Chrome lists the scripts to be loaded (in the single content script context) in order in the manifest. Simple and similar to web pages, but different from the jetpack execution model.

I like it. What do you think?

#1 At which point does the separate script run, how does it declare it wants to do something on-window-created or on-DOM-ready.

Chrome has a "run_at" option in the content script's manifest, which defaults to "document_idle", meaning "sometime between DOMReady and soon after onload", with the other options being DOMReady and WindowCreated. It's not clear how important this is for performance and why.

You both used an example with callback functions (onStart, onReady) defined in the content script, but since the script must be evaluated "on start" if it is to be able to do anything before the page starts loading, the onStart callback is actually redundant (just put the code at the top level, like in regular web pages).

We could either provide an easy way to register for DOMReady event or leave it to libraries.
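Put together, a content script under this model needs no exported callbacks at all. A minimal sketch of the two moments a script cares about, using a tiny mock event target in place of the real page window:

```javascript
// Sketch: "on start" vs. "on DOM ready" in a content script, with no
// onStart/onReady exports. `window` here is a mock standing in for the
// page window the script would actually receive.
const listeners = {};
const window = {
  addEventListener(type, fn) {
    (listeners[type] = listeners[type] || []).push(fn);
  },
  dispatchEvent(type) {
    (listeners[type] || []).forEach((fn) => fn());
  },
};

const order = [];

// Top-level code runs "on start", before the page begins loading --
// the place to set up anything the page must see from the beginning.
order.push("start");

// DOM work waits for DOMContentLoaded, registered the ordinary DOM way.
window.addEventListener("DOMContentLoaded", () => order.push("ready"));

// Simulate the page finishing parsing.
window.dispatchEvent("DOMContentLoaded");
console.log(order.join(",")); // "start,ready"
```

Whether the DOMContentLoaded registration stays this explicit or gets wrapped by a library is exactly the open question above.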

#4 How does the script communicate with the main jetpack

I don't have opinions yet, just some food for thought:
  • GreaseMonkey defines several GM_* globals to provide additional functionality to GM scripts.
  • Chrome implements bidirectional asynchronous message passing via chrome.extension.sendRequest(json, responseCallback) and another pipe (Port) based API for long-lived connections.
  • If we are going to allow exporting APIs (e.g. window.microphone) via this mechanism, we might need sync content->jetpack messaging. bsmedberg also mentioned this as a possibility.
Nickolay

P.S. I noticed that Google Groups messes up my glorious formatting (esp. quoted text). Does it get across the mail at least or is 2010 still too early to write in rich text?

Nickolay Ponomarev

May 30, 2010, 6:18:52 PM5/30/10
to Myk Melez, mozilla-la...@googlegroups.com
On Sat, May 29, 2010 at 6:35 AM, Brian Warner <war...@mozilla.com> wrote:
(huh, now, why did I think that the constructor was named PageMod()
instead of ScriptMod()? Doesn't PageMod() sound like a more accurate name?)

It's a left-over of my initial implementation that separated "ScriptMod"s and "StyleMod"s. We could rename it to PageMod, but it's not necessarily a mod (consider the case when a "mod" actually uses page APIs to notify the main jetpack of page events). I may sound too easily influenced by the Google team, but maybe require('content-script') and ContentScript?

Myk is our naming expert, so deferring this to him.

Nickolay

Myk Melez

May 31, 2010, 3:33:02 PM5/31/10
to Brian Warner, mozilla-la...@googlegroups.com
On 05/28/2010 07:35 PM, Brian Warner wrote:
> I'm guessing you meant:
>
> exports.onStart = function (window) {
> // add-window.camera-neatstuff
> }
>
> right?
>
> Passing the window as an argument (versus providing it to the whole
> module as a global) seems more in keeping with the "The Number Of
> Globals In A CommonJS Module Shall Be Two: require and exports" pattern.
Right. And I think it's the correct pattern to use for modules in general.

However, given that the sole purpose of "page mod" modules is to access
the pages they are modifying, it's worth simplifying access to the
"window" object by defining it globally, just as we define Components.*
shortcuts globally in modules that need XPCOM access (as well as those
that don't, at the moment, but that's something to fix).

As Nickolay noted, both Greasemonkey and Chrome Extensions do this, and
it seems to be the simplest, most ergonomic approach.

> Also, it makes it possible for the onStart() function to selectively
> deny access to 'window' to other code in that module, so that add-on
> developers can use POLA (principle of least authority) inside the
> module, not just between modules. In general, I favor passing around
> authorities as function arguments rather than as module globals for this
> very reason.

To me this constraint on the use of globals seems both too limiting
(given their utility for simplifying interfaces) and limited (given the
proclivity of addon developers to use them themselves).

I do understand the desire to enable code within modules to restrict
authority. However, the common case within modules is running trusted
code, so I would not hobble all module code with a requirement (for SDK
developers) and recommendation (in the case of addon developers) to
protect the global state.

Instead, we should find a different approach that enables SDK and addon
developers to use POLA within their modules in those cases where it is
appropriate, such as sandboxing or finer-grained modularity.

> Yeah. Perhaps the first release will not provide a require() function,
> and then later (once we figure out our story for the "search path" for
> this context and how it differs from the other modules), we can make it
> available.

As I've been thinking about it over the weekend, I've been coming to the
conclusion that page mod modules should have regular access to modules
that are bundled with the addon via "require", since it should be as
easy as possible for them to access addon functionality, and requiring
modules is the simplest way for them to do so.

The loader should load separate instances of modules that page mod
modules require, however, since page mod modules will eventually be
loaded in separate processes than the main module (initially the chrome
vs. the addon process, eventually content processes vs. the addon
process). Although it seems sufficient for the initial implementation to
note in the documentation that this will eventually be the case, and
developers shouldn't rely on modules required by both the main module
and a page mod module sharing state.

And I don't mean to suggest we shouldn't implement a pipe for
communication between page mod modules and the main module. Certain
kinds of work, like retrieving shared data and maintaining shared state,
should be done in the main module, and we should provide addon
developers with mechanisms for doing that and best practices for
choosing when to perform work in the main module vs. page mod modules.

> I'm vaguely thinking that the PageMod() constructor, next to the script:
> argument, could provide a list of libraries that are made available to
> that script. Maybe a mapping, like:
>
> var mymod = mods.ScriptMod({
> include: "example.org",
> script: data.url("my-example-org-mod.js"),
> scriptlibs: { jquery: data.url("jquery.js") } }
> });
>
> Allowing my-example-org-mod.js to use:
>
> var jq = require("jquery");
>
> (and then of course connect it to window.document somehow).

You have previously described a mechanism for deriving a list of the
modules being required through analysis of addon code and then
restricting addons from using other modules via manifest metadata that
is enforced by the loader.

That seems like a better approach than making addon developers specify
this list of libraries in the PageMods API, which is duplicate
information, since they then specify it again when they require those
modules in their page mod modules.

Even if we don't have this mechanism yet, I would bias towards giving
page mod modules unrestricted access to "require" to keep this interface
simple and then implementing the mechanism as soon as practicable to
prevent malicious page code that compromises page mod code from gaining
additional capabilities.

> I'm not sure how I feel about having data.url() be able to grant
> access to things outside data/ . Maybe we need a different API for that
> (require("self").lib ?). I'll think about that.

Yes, that makes perfect sense, i.e. a lib/ directory-specific API that
doesn't load a module (as does "require") but provides a reference to
one that can then be loaded by the page-mods module when a page to which
it applies is loaded.

-myk

Myk Melez

May 31, 2010, 6:02:23 PM5/31/10
to mozilla-la...@googlegroups.com, Nickolay Ponomarev
On 05/30/2010 07:50 AM, Nickolay Ponomarev wrote:
Second, I think we should restrict the schemes that URLs are allowed
to match to http, https, and ftp for the initial implementation, as
providing access to other schemes (file, chrome, etc.) has security
implications that we need to think through carefully.

I did not make this change yet. I realize that giving jetpack access to other schemes can let it do something it wouldn't be able to do just by having access to http pages, but as long as it's clear a jetpack has this ability, I think it's OK. (We are not trying to restrict functionality available to jetpacks, are we?)
No, we aren't trying to restrict the functionality available to addons. However, we do want to make it more obvious what functionality an addon is using and restrict its access to other functionality, to limit the amount of damage that can be done by an addon that gets compromised (or a malicious addon that pretends to do one thing and then tries to do another).

So if an addon wants to modify about:addons, it should be able to do so, although obviously we would review it more carefully than an addon that modifies http://example.com/. But an addon that wants to modify web pages (f.e. to add window.camera) shouldn't unintentionally also make that change to about:addons.


Currently, "about:" pages are used in tests; we would need an HTTP server for testing otherwise. I also expect being able to mod file:// pages to be useful for development.

How about we restrict "*" to http, https, ftp and let the jetpack author specify "file:*" if he wants to?
It accomplishes the goal of not surprising addons that use "*", although it's inconsistent with the typical meaning of a wildcard. Overall, it seems ok, and I don't have any better ideas at the moment, so let's go with it.
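The rule being agreed on here can be sketched as a small matcher. The pattern grammar below (bare `*`, `scheme:*`, and bare host names) is an assumption for illustration, not the final JEP 107 syntax:

```javascript
// Sketch: an include-pattern matcher where a bare "*" only covers
// http/https/ftp, and other schemes must be named explicitly
// (e.g. "file:*"), so addons don't get surprising access to
// privileged pages. Pattern grammar here is hypothetical.
const DEFAULT_SCHEMES = ["http:", "https:", "ftp:"];

function matchesPattern(pattern, url) {
  const { protocol, host } = new URL(url);
  if (pattern === "*") {
    // The unqualified wildcard deliberately excludes file:, about:, etc.
    return DEFAULT_SCHEMES.includes(protocol);
  }
  if (pattern.endsWith(":*")) {
    // Explicit scheme wildcard, e.g. "file:*".
    return protocol === pattern.slice(0, -1);
  }
  // Bare host: match any default scheme on that host.
  return DEFAULT_SCHEMES.includes(protocol) && host === pattern;
}

console.log(matchesPattern("*", "http://example.org/"));          // true
console.log(matchesPattern("*", "file:///etc/passwd"));           // false
console.log(matchesPattern("file:*", "file:///home/me/test.html")); // true
console.log(matchesPattern("example.org", "https://example.org/x")); // true
```

As noted, this makes "*" narrower than a literal wildcard, trading consistency for safety.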

-myk

Myk Melez

May 31, 2010, 6:54:16 PM5/31/10
to Nickolay Ponomarev, mozilla-la...@googlegroups.com, war...@mozilla.com
On 05/30/2010 03:13 PM, Nickolay Ponomarev wrote:
As usual, I posted a summary of points to https://wiki.mozilla.org/User:Asqueella/JEP_107#Discsussion_-_script_context_in_Model_C (if I missed anything important, feel free to add it) and below are my comments. I don't have all the answers, but I wanted to post today anyway.
Thanks, these summaries are really useful!


Both GreaseMonkey and Chrome create a clean object as the script's global with __proto__ set to XPCNativeWrapper(contentWindow) and necessary globals added to it (GM_*, chrome.extension.*)

I think we should implement a scheme like GM's/Chrome's, since it will be the most familiar and intuitive. The rare developers who want to use the principle of least authority can do so in the main jetpack (it can't be given access to any of the page's DOM anyway, so there's no reason to keep it in content process).
I agree. This seems like the most ergonomic approach for addon developers.


(I'd rather not call it "require()" if it's different from the main require.)

I agree with Myk's comments on CommonJS in content scripts; we should also remember that the content scripts (and the related CommonJS machinery) will be reloaded on every single page load. I think that while the CommonJS hammer is attractive, content scripts should generally not be so complex as to require it. We can add it later if there's need.
There are at least two APIs that I expect to be very commonly used: the request API for making network requests (i.e. XHR) and the "self" API for accessing bundled resources such as icons.

Others may be less common, but there is a long tail of built-in functionality that page mods might want to access (places, widget, page worker), not to mention third-party libraries and modules written by addon developers themselves.

We could design another mechanism to expose APIs to page mod modules, but that mechanism would either be a series of one-offs (and thus limited and stifling) or more general, in which case it would duplicate what "require" already provides (and all the work we're putting into making "require" robust, performant, and secure).

Despite my earlier hesitation, the more I think about it, the more I think that our existing module system is the appropriate mechanism for providing page mods with functionality that isn't provided by the main module via some communications pipe.


Chrome lists the scripts to be loaded (in the single content script context) in order in the manifest. Simple and similar to web pages, but different from the jetpack execution model.

I like it. What do you think?
I like the Chrome Extensions approach too. It's a reasonable and simple design, especially for the common case of an addon that only implements a page mod and doesn't need a main module. However, there are a couple issues with using it in Jetpack:
  1. Jetpack's manifest (like Chrome Extensions') is JSON, which doesn't allow function declarations, and that limits how we can implement interfaces like filter functions and the communications pipe between the main module and page mod modules.

  2. Jetpack's other APIs are all implemented as modules that addon code accesses via "require", so a manifest declaration approach would be inconsistent with the other APIs.

    (Where appropriate, it would be particularly useful for the page mods API to be consistent with other APIs that also provide access to content, like page worker, widget, and panel. That probably does mean changing those APIs to work more like page worker's option C, but it probably doesn't mean turning them all into manifest declarations.)

Overall, although using "require" and a constructor is a bit more complicated, it is still a relatively simple, declarative interface, and it's consistent with Jetpack's other APIs while giving us more flexibility on the design of page mods, so I think it's the right approach.


You both used an example with callback functions (onStart, onReady) defined in the content script, but since the script must be evaluated "on start" if it is to be able to do anything before the page starts loading, the onStart callback is actually redundant (just put the code at the top level, like in regular web pages).
Right, that makes sense, which means there is no need to export a separate "start" handler.


We could either provide an easy way to register for DOMReady event or leave it to libraries.
It's unclear how well we'll support libraries right off the bat. Nevertheless, I'm ok with leaving out exports.onReady as long as registering for the DOMContentLoaded event is not too hard and the documentation provides a good example for how to do so.

However, if we do leave out exports.onReady, we should actively monitor usage of the initial implementation of the API and implement such a callback (or JS library support, or something else) as needed in future iterations to improve the developer experience.


#4 How does the script communicate with the main jetpack

I don't have opinions yet, just some food for thought:
  • GreaseMonkey defines several GM_* globals to provide additional functionality to GM scripts.
  • Chrome implements bidirectional asynchronous message passing via chrome.extension.sendRequest(json, responseCallback) and another pipe (Port) based API for long-lived connections.
  • If we are going to allow exporting APIs (e.g. window.microphone) via this mechanism, we might need sync content->jetpack messaging. bsmedberg also mentioned this as a possibility.
I'm not yet familiar enough with these kinds of interfaces to make an educated guess, but I'm hoping Brian can come up with a proposal.


P.S. I noticed that Google Groups messes up my glorious formatting (esp. quoted text). Does it get across the mail at least or is 2010 still too early to write in rich text?
Heh, no, not too early! Personally, I'm a big fan of rich content and use it myself in messages I send to this list. And Google Groups passes it through in emails, so if you read the group via email (as I usually do), then you see the formatting just fine.

Google Groups converts messages to plain text when displaying them in its web interface, though, and it loses some formatting in the process. So if you read the group via the web (as I sometimes do), the experience is suboptimal. But you do still get URLs for links, lists are formatted, etc.

-myk

Myk Melez

May 31, 2010, 7:05:21 PM5/31/10
to Nickolay Ponomarev, mozilla-la...@googlegroups.com
I'm wary of designing too far in advance, but in a future iteration of this API, I'd really like to see us add support for specifying CSS style mods via the same constructor (so both script and style mods can share an "includes" declaration). Thus "script" is too specific.

The point about there being uses that don't modify the page is well-taken, and I'm open to suggestions of a name that is equally descriptive while being more accurate, but in its absence, especially given that the most common use case will be to modify pages, I think we should call the module "page-mod" and the constructor PageMod.

-myk

Benjamin Smedberg

Jun 1, 2010, 10:19:03 AM6/1/10
to mozilla-la...@googlegroups.com, Myk Melez, Nickolay Ponomarev, war...@mozilla.com
On 5/31/10 6:54 PM, Myk Melez wrote:

> There are at least two APIs that I expect to be very commonly used: the
> request API for making network requests (i.e. XHR) and the "self" API
> for accessing bundled resources such as icons.

XHR in a content script would just be the core XMLHttpRequest you get with
the DOM. There is no need to use commonJS modules to get at it. Depending on
how we do the global object, it would either be `new XMLHttpRequest` or `new
contentWindow.XMLHttpRequest`

> Others may be less common, but there is a long tail of built-in
> functionality that page mods might want to access (places, widget, page
> worker), not to mention third-party libraries and modules written by
> addon developers themselves.

I strongly disagree here. Page mods, if they wish to access all these other
bits, should use message-passing to the main addon code. Remember that they
will be running in a separate process and have temporary lifetime compared
with the modules they might be importing, and that most of those modules
won't normally live in the content process, but either in the jetpack
processes or the main chrome process.

>> #4 How does the script communicate with the main jetpack
>>
>> I don't have opinions yet, just some food for thought:
>>
>>  * GreaseMonkey defines several GM_* globals to provide additional
>>    functionality to GM scripts.
>>  * Chrome implements bidirectional asynchronous message passing via
>>    chrome.extension.sendRequest(json, responseCallback) and another
>>    pipe (Port) based API for long-lived connections.
>>  * If we are going to allow exporting APIs (e.g. window.microphone)
>>    via this mechanism, we might need sync content->jetpack messaging.
>>    bsmedberg also mentioned this as a possibility.

I think we should start out using something that is or behaves like DOM
.postMessage. e.g. provide a global `addon` and then allow
addon.postMessage(JSON.stringify(messageobj), '*');

This would re-use existing DOM mindshare, although it is slightly limiting
because it doesn't allow RPC dispatch, only async dispatch.
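A rough model of the proposed pipe, with an in-process queue standing in for real cross-process delivery. The `addon` global and its postMessage shape are Benjamin's proposal, not a shipped API:

```javascript
// Sketch: DOM-postMessage-style async messaging between a content
// script and the main addon. Only JSON strings cross the pipe, and
// dispatch is async-only (no RPC / return values), as proposed.
function makeChannel() {
  const handlers = [];
  const queue = [];
  return {
    postMessage(json /*, targetOrigin */) {
      // Only strings cross the boundary, like window.postMessage + JSON.
      queue.push(String(json));
    },
    onMessage(fn) { handlers.push(fn); },
    // Deliver queued messages (normally done by the event loop).
    flush() {
      while (queue.length) {
        const json = queue.shift();
        handlers.forEach((fn) => fn(JSON.parse(json)));
      }
    },
  };
}

// Main-addon side registers a listener on its end of the pipe.
const addon = makeChannel();
const received = [];
addon.onMessage((msg) => received.push(msg));

// Content-script side: fire-and-forget, no synchronous reply.
addon.postMessage(JSON.stringify({ type: "page-title", title: "Hi" }), "*");
addon.flush();
console.log(received[0].title); // "Hi"
```

The stringify step is what keeps the interface honest about process boundaries: no live objects can sneak across.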

The real decision I think we need to make, though, is what view of the
content objects the pagemod script has. Chrome uses a very restrictive
sandbox where the pagemod has access to only the DOM, but not any JS
objects. This is similar to XPCNativeWrappers for current chrome extensions.
This is a very limiting view, because it makes it impossible to inject DOM
attributes (such as window.microphone). There are three possible solutions:

A. (the Chrome solution) - DOM view only, `window` is the global object
B. (the current mozilla extension model) - DOM view by default, but
.wrappedJSObject gives you a safe way to access the content-visible object.
Scripts run with a separate (jetpack-provided) global.
B2: Like B, except the page `window` is the global object (which you'd have
to unwrap with window.wrappedJSObject to get at the content-visible window)
C. (simplest) - safe JS view at all times. Need a different global object to
avoid name pollution.

I think we should either do B2 or C.

--BDS

Nickolay Ponomarev

Jun 2, 2010, 5:32:24 PM6/2/10
to mozilla-la...@googlegroups.com
To sum up:
  • Minor changes:
    • Rename module to "page-mod" and constructor to PageMod
    • Restrict "*" to only match HTTP(S)+FTP
  • Agreed to:
    • Specify the pagemod script in a separate file and run it in a separate context (in the future, in the same content process as the page the script is modding)
    • The script gets executed on-window-created (before the page starts loading). If a page mod needs to add an API to the page, it should do so at the top level of the script. To modify the DOM, a page mod needs a DOMContentLoaded listener, which should be covered in the docs.
    • The page mod's global is an XPCNativeWrapper of the page's global (|window|), like in GM and Chrome. The pagemod script can access the DOM using the same syntax as a webpage script. Unlike Chrome content scripts, a page mod will be able to access content JS objects by using global.wrappedJSObject.
  • Still not decided:
    • How do we let the script include common libraries / modularize its code
    • Mechanism for communication with the main jetpack
Nickolay

Nickolay Ponomarev

Jun 2, 2010, 5:48:38 PM6/2/10
to mozilla-la...@googlegroups.com
On Tue, Jun 1, 2010 at 6:19 PM, Benjamin Smedberg <benj...@smedbergs.us> wrote:
On 5/31/10 6:54 PM, Myk Melez wrote:

There are at least two APIs that I expect to be very commonly used: the
request API for making network requests (i.e. XHR) and the "self" API
for accessing bundled resources such as icons.

XHR in a content script would just be the core XMLHttpRequest you get with the DOM. There is no need to use commonJS modules to get at it. Depending on how we do the global object, it would either be `new XMLHttpRequest` or `new contentWindow.XMLHttpRequest`

(Not for cross-domain XHR...)

Others may be less common, but there is a long tail of built-in
functionality that page mods might want to access (places, widget, page
worker), not to mention third-party libraries and modules written by
addon developers themselves.

I strongly disagree here. Page mods, if they wish to access all these other bits, should use message-passing to the main addon code. Remember that they will be running in a separate process and have temporary lifetime compared with the modules they might be importing, and that most of those modules won't normally live in the content process, but either in the jetpack processes or the main chrome process.

...but I agree with Benjamin, I think if there's a "long tail" of functionality required by page mods, they're doing it wrong.

The only class of code that should (has to) live in the page mod's context is the code that works with content (i.e. libraries like jQuery). This type of code historically shares the global, so I think there's no win (but clear overhead) from wrapping it into modules.

#4 How does the script communicate with the main jetpack

I don't have opinions yet, just some food for thought:

 * GreaseMonkey defines several GM_* globals to provide additional
   functionality to GM scripts.
 * Chrome implements bidirectional asynchronous message passing via
   chrome.extension.sendRequest(json, responseCallback) and another
   pipe (Port) based API for long-lived connections.
 * If we are going to allow exporting APIs (e.g. window.microphone)
   via this mechanism, we might need sync content->jetpack messaging.
   bsmedberg also mentioned this as a possibility.

I think we should start out using something that is or behaves like DOM .postMessage. e.g. provide a global `addon` and then allow addon.postMessage(JSON.stringify(messageobj), '*');

This would re-use existing DOM mindshare, although it is slightly limiting because it doesn't allow RPC dispatch, only async dispatch.

I like this idea!

Nickolay

Nickolay Ponomarev

Jun 2, 2010, 5:59:17 PM6/2/10
to Myk Melez, mozilla-la...@googlegroups.com
On Tue, Jun 1, 2010 at 2:54 AM, Myk Melez <m...@mozilla.org> wrote:
Chrome lists the scripts to be loaded (in the single content script context) in order in the manifest. Simple and similar to web pages, but different from the jetpack execution model.

I like it. What do you think?
I like the Chrome Extensions approach too. It's a reasonable and simple design, especially for the common case of an addon that only implements a page mod and doesn't need a main module. However, there are a couple of issues with using it in Jetpack:
1. Jetpack's manifest (like Chrome Extensions') is JSON, which doesn't allow function declarations, and that limits how we can implement interfaces like filter functions and the communications pipe between the main module and page mod modules.

2. Jetpack's other APIs are all implemented as modules that addon code accesses via "require", so a manifest declaration approach would be inconsistent with the other APIs.

  (Where appropriate, it would be particularly useful for the page mods API to be consistent with other APIs that also provide access to content, like page worker, widget, and panel. That probably does mean changing those APIs to work more like page worker's option C, but it probably doesn't mean turning them all into manifest declarations.)

Overall, although using "require" and a constructor is a bit more complicated, it is still a relatively simple, declarative interface, and it's consistent with Jetpack's other APIs while giving us more flexibility on the design of page mods, so I think it's the right approach.

I didn't mean to suggest putting the list of scripts in package.json. The point was listing the scripts to be run in the same context in a flat list somewhere. It could be in the PageMod() constructor options (more declarative, but separated from the script itself) or just in the content script itself via runScript(<path relative to the content script>).

Consider jQuery plug-ins -- with this approach you just list jquery, followed by a plugin for jquery; they get loaded in the same context, and you can use the jquery plugin without having to modify it. With the modules model you'd have to write special wrappers for the libraries.

Nickolay

Myk Melez

Jun 3, 2010, 3:59:41 AM6/3/10
to mozilla-la...@googlegroups.com, Benjamin Smedberg
[Accidentally sent this only to Benjamin on Tuesday; resending to the
group.]

    On 06/01/2010 07:19 AM, Benjamin Smedberg wrote:
    > XHR in a content script would just be the core XMLHttpRequest you get
    > with the DOM. There is no need to use commonJS modules to get at it.
    > Depending on how we do the global object, it would either be `new
    > XMLHttpRequest` or `new contentWindow.XMLHttpRequest`

    That XMLHttpRequest wouldn't allow page mods to make cross-domain
    requests, however, which is useful and popular functionality
    (Greasemonkey injects a cross-domain capable XHR into its scripts for
    this reason).

    > I strongly disagree here. Page mods, if they wish to access all these
    > other bits, should use message-passing to the main addon code.

    Can you elaborate on your reasons here? It's not clear why you disagree.

    > Remember that they will be running in a separate process and have
    > temporary lifetime compared with the modules they might be importing,
    > and that most of those modules won't normally live in the content
    > process, but either in the jetpack processes or the main chrome process.

    Yes, I understand that, and I'm not saying we should expose existing
    instances of those modules in the addon/chrome processes to the content
    processes (which would have all the problems we've previously identified
    with cross-process communication).

    Rather, I'm proposing we load new instances of those modules in the
    content processes when they are required by a page mod, instantiating
    them when initially required and destroying them when the page mod is
    unloaded (or when the process dies, if it makes more sense for them to
    be longer lived).

    > I think we should start out using something that is or behaves like
    > DOM .postMessage. e.g. provide a global `addon` and then allow
    > addon.postMessage(JSON.stringify(messageobj), '*');

    I'm in favor of doing this in addition to whatever we do to expose APIs
    directly to the page mod context, but this is a significantly less
    ergonomic approach than giving page mods synchronous object-based
    interfaces with which they can interact, and I don't think it's sufficient.

    > The real decision I think we need to make, though, is what view of the
    > content objects the pagemod script has. Chrome uses a very restrictive
    > sandbox where the pagemod has access to only the DOM, but not any JS
    > objects. This is similar to XPCNativeWrappers for current chrome
    > extensions. This is a very limiting view, because it makes it
    > impossible to inject DOM attributes (such as window.microphone). There
    > are three possible solutions:
    >
    > A. (the Chrome solution) - DOM view only, `window` is the global object
    > B. (the current mozilla extension model) - DOM view by default, but
    > .wrappedJSObject gives you a safe way to access the content-visible
    > object. Scripts run with a separate (jetpack-provided) global.
    > B2: Like B, except the page `window` is the global object (which you'd
    > have to unwrap with window.wrappedJSObject to get at the
    > content-visible window)
    > C. (simplest) - safe JS view at all times. Need a different global
    > object to avoid name pollution.
    >
    > I think we should either do B2 or C.

    B2 or C both seem fine to me as well. C (with a different global object)
    is preferable if the JS view really is safe, because it's the simplest
    model, although I've been under the impression that a wrappedJSObject
    for an XPCNativeWrapper-wrapped DOMWindow is actually not particularly
    safe because of the potential for malicious page code to spoof DOM
    interfaces in order to trick addon code into exposing its context.

    Perhaps I misunderstand though. Is that not the case?

    -myk

    Myk Melez

    unread,
    Jun 3, 2010, 4:14:34 AM6/3/10
    to mozilla-la...@googlegroups.com, Benjamin Smedberg
    On 06/03/2010 12:59 AM, Myk Melez wrote:
    > On 06/01/2010 07:19 AM, Benjamin Smedberg wrote:
    >> I think we should start out using something that is or behaves like
    >> DOM .postMessage. e.g. provide a global `addon` and then allow
    >> addon.postMessage(JSON.stringify(messageobj), '*');
    > I'm in favor of doing this in addition to whatever we do to expose
    > APIs directly to the page mod context, but this is a significantly
    > less ergonomic approach than giving page mods synchronous object-based
    > interfaces with which they can interact, and I don't think it's
    > sufficient.
    After I sent this message, we had a discussion in the weekly meeting
    about this, and Atul suggested we start with an initial implementation
    that supports only a postMessage-like API and then continue to
    investigate how to provide more ergonomic APIs.

    I think this is a reasonable approach, and I'm ok with moving forward
    with such an initial implementation.

    But we will have to circle back around and address this, as the
    requirement to use a postMessage-like API to access most functionality
    is not a reasonable burden to impose on the casual developers Jetpack is
    targeting, and optimizing the developer experience is a higher priority
    than maximizing the extent to which addon code runs in a separate process.

    -myk

    Myk Melez

    unread,
    Jun 3, 2010, 4:19:28 AM6/3/10
    to Nickolay Ponomarev, mozilla-la...@googlegroups.com
    On 06/02/2010 02:59 PM, Nickolay Ponomarev wrote:
    > I didn't mean to suggest putting the list of scripts in package.json.
    > The point was listing the scripts to be run in the same context in a
    > flat list somewhere. It could be in PageMod() constructor options
    > (more declarative, but separated from the script itself) or just in
    > the content script itself via runScript(<path relative to the content
    > script>).
    >
    > Consider jQuery plug-ins -- with this approach you just list jquery,
    > followed by a plugin for jquery, they get loaded in the same context,
    > and you can use the jquery plugin without having to modify it. With
    > the modules model you'd have to write special wrappers for the libraries.
    This would be easy to implement (and simple for developers to
    understand) as a PageMod.script Collection property that accepts an
    array of values and loads each one as a script in the page mod context,
    i.e.:

    PageMod({
      script: [self.data("my-page-mod.js"),
               self.data("jquery.js")],
      ...
    });

    -myk

    Atul Varma

    unread,
    Jun 3, 2010, 3:34:48 PM6/3/10
    to mozilla-la...@googlegroups.com, Myk Melez, Benjamin Smedberg
    On 6/3/10 12:59 AM, Myk Melez wrote:
    > B2 or C both seem fine to me as well. C (with a different global
    > object) is preferable if the JS view really is safe, because it's the
    > simplest model, although I've been under the impression that a
    > wrappedJSObject for an XPCNativeWrapper-wrapped DOMWindow is actually
    > not particularly safe because of the potential for malicious page code
    > to spoof DOM interfaces in order to trick addon code into exposing its
    > context.
    >
    > Perhaps I misunderstand though. Is that not the case?
    As I understand it, it's mostly an issue of usability: the interface was
    designed to be something that could offer a very nice flexibility
    between being really secure/unspoofable and only seeing the "real"
    underlying DOM structures (using XPCNW without the wrappedJSObject,
    a.k.a. "x-ray vision goggles"), versus being able to actually interact
    with JS objects defined by the page's scripts (in which case you access
    the wrappedJSObject property). Needless to say, this is really hard to
    explain to casual developers, so they often would just access the
    wrappedJSObject and use it for both plain-old-DOM access and script
    access, simply because it was the interface that always "just worked",
    even though it was less secure. (It sure is hard to think about security
    when you're spending 99% of your brain power just trying to get
    something to work the way you want it to!)

    mrbkap and I have discussed ways of improving the usability. One is the
    observation that one of the most frustrating parts of using XPCNWs is
    that there isn't much feedback that tells you what you might be doing
    wrong; accessing "window.foo" on a XPCNW'd window object just returns
    "undefined", for instance, and one way to improve the usability of this
    existing interface may be to have the XPCNW check to see if its
    SafeJSWrapped doppelganger possesses the property, and raise a helpful
    error if so, pointing them to e.g. a page on MDC that explains how they
    can get to the script-defined property in the easiest and most secure way.

    With Ubiquity, OTOH, instead of expecting folks to understand
    XPCNW/SafeJSObjectWrapper distinctions, we just had two global utility
    functions, getWindow() and getUnsafeWindow(). Since the word "unsafe"
    was in the actual name of one of the functions, authors would at least
    get some inkling that what they were doing wasn't necessarily safe, and
    that they should eventually find a better way around their problem--in
    contrast, ".wrappedJSObject" just looks like voodoo incantation, so
    folks used it with reckless abandon.

    All that said, my understanding is that .wrappedJSObject wasn't
    originally safe, because back in the days of Firefox 1.0 or something,
    we didn't have XPCSafeJSObjectWrappers--so you only truly wanted to use
    them on pages that you really, really trusted. With XPCSJOWs, however,
    you're effectively wrapping the content script object in an
    object-capability-esque wrapper, so as long as you're pretty careful,
    you can use them in a secure way. I could be totally wrong about this,
    though.

    - Atul

    Benjamin Smedberg

    unread,
    Jun 3, 2010, 3:40:19 PM6/3/10
    to Atul Varma, mozilla-la...@googlegroups.com, Myk Melez
    On 6/3/10 3:34 PM, Atul Varma wrote:

    >> Perhaps I misunderstand though. Is that not the case?
    > As I understand it, it's mostly an issue of usability: the interface was
    > designed to be something that could offer a very nice flexibility
    > between being really secure/unspoofable and only seeing the "real"
    > underlying DOM structures (using XPCNW without the wrappedJSObject,
    > a.k.a. "x-ray vision goggles"), versus being able to actually interact
    > with JS objects defined by the page's scripts (in which case you access
    > the wrappedJSObject property). Needless to say, this is really hard to
    > explain to casual developers, so they often would just access the
    > wrappedJSObject and use it for both plain-old-DOM access and script
    > access, simply because it was the interface that always "just worked",
    > even though it was less secure. (It sure is hard to think about security
    > when you're spending 99% of your brain power just trying to get
    > something to work the way you want it to!)

    Yeah, it's not about privilege-escalation any more. The only real difference
    is when you call something like document.createElement: if you're using
    wrapped JS objects, the page can do window.createElement = function() and
    you won't actually be calling the DOM function you expect.

    I'd say that as long as it's safe in the privilege-escalation way, I don't
    think this is a problem for 99.5% of addon developers.

    --BDS

    Nickolay Ponomarev

    unread,
    Jun 8, 2010, 4:49:02 PM6/8/10
    to mozilla-la...@googlegroups.com
    On Thu, Jun 3, 2010 at 1:32 AM, Nickolay Ponomarev <asqu...@gmail.com> wrote:
    To sum up:
    • Minor changes:
      • Rename module to "page-mod" and constructor to PageMod
      • Restrict "*" to only match HTTP(S)+FTP
    • Agreed to:
      • Specify the pagemod script in a separate file and run it in a separate context (in the future, run it in the same content process as the page it is modding)
      • The script gets executed on-window-created (before the page starts loading). If a page mod needs to add API to the page it should do so at the top level of the script. To modify the DOM a page mod needs a DOMContentLoaded listener, which should be in the docs.
    One unfortunate consequence of running the script so early is that document.documentElement is still null at that point, which will surprise people (it surprises jQuery initialization code for sure). Basically, the only things you can do at on-window-created time are export JS APIs and register a load/DOMContentLoaded listener.
      • The page mod's global is an XPCNativeWrapper of page's global (|window|), like in GM and Chrome. The pagemod script can access the DOM using the same syntax as a webpage script. Unlike Chrome content scripts, a page mod will be able to access content JS objects by using global.wrappedJSObject.
    I liked this idea, since I wanted to let the authors write code as they would write it in a <script> on the page.

    However, I forgot there are a number of limitations <https://developer.mozilla.org/en/XPCNativeWrapper#Limitations_of_XPCNativeWrapper>, which make this abstraction very leaky.

    In particular (though I can't see it on the XPCNW page), apparently there can be multiple wrapper objects for the same host object, leading to global.window !== global.__proto__ and Object.keys(obj) not listing an expando property added on a different wrapper.

    Also, to get deep wrapping (see the MDC page) I have to specify a system principal for the content script code, though I originally planned to specify the page's principal. I'm not sure whether running the content script's code with the page's principal would still leave it protected by SJOWs; maybe not.

    So I'm leaning toward bsmedberg's C (SJOW for window, separate from the global object) now, although it means content libraries will be harder to use in the content script.
    • Still not decided:
      • How do we let the script include common libraries / modularize its code
    Providing a load() function to run additional scripts in a single context per page is literally a couple lines of code.

    I think doing CommonJS modules will be harder, since I'd have to use and modify more complex machinery (cuddlefish module).
      • Mechanism for communication with the main jetpack
    Myk suggested this (talking about a similar Panel API):
    The content script gets an "addon" global with "sendMessage" and "onMessage" properties. "sendMessage" is a function that asynchronously sends a message to the instantiating module. The message can be a boolean, number, string, the null value, or a JSON-serializable array or object. "onMessage" is a callback property to which the content script can add one or more callbacks to handle messages from the instantiating module.

    On the other side, the Panel instance has the same two properties, with "sendMessage" sending a message to the content script, while "onMessage" receives messages from the content script.

    This design is similar to the "simple one-time requests" API in Chrome Extensions, except that the Chrome API also passes a callback with the message. I chatted with bsmedberg on IRC, and he thinks passing a callback is E10S-compatible, so I'm inclined to add that as well.
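    A minimal sketch of the message flow Myk describes, in plain JS. The
    makeEndpoint/makePipe helpers are hypothetical scaffolding, not part of
    any Jetpack API, and delivery here is synchronous where the real API
    would be asynchronous; the point is just the symmetric
    sendMessage/onMessage surface and the JSON-serializability constraint:

```javascript
// Plain-JS simulation of the proposed addon <-> content-script pipe.
// makeEndpoint() and makePipe() are hypothetical helpers for illustration;
// they are not part of any actual Jetpack API.
function makeEndpoint() {
  return {
    _peer: null,
    _callbacks: [],
    onMessage: function (cb) { this._callbacks.push(cb); },
    sendMessage: function (message) {
      // Only JSON-serializable values may cross the (future) process
      // boundary, so round-trip through JSON here to enforce that.
      var wire = JSON.stringify(message);
      this._peer._callbacks.forEach(function (cb) { cb(JSON.parse(wire)); });
    }
  };
}

function makePipe() {
  var addonSide = makeEndpoint();
  var contentSide = makeEndpoint();
  addonSide._peer = contentSide;
  contentSide._peer = addonSide;
  return { addon: addonSide, content: contentSide };
}

var pipe = makePipe();
var received = [];
pipe.addon.onMessage(function (msg) { received.push(msg); });
pipe.content.sendMessage({ found: 3, url: "http://example.com/" });
console.log(received[0].found); // 3
```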

    I think it's OK for some use cases, but not enough for others. Unfortunately, I couldn't sit down long enough to think through this, but the main issue is that the content windows need to have an ID of some sort:
    1) sendMessage on the jetpack side has to know which content script to send the message to (you said "the content script", but there are many of them). A callback to onMessage lets the jetpack "answer" the content script, but still doesn't provide a way for jetpack to initiate a conversation.
    2) there'll probably need to be a way to find which tab a message came from and to send a message to a particular tab's content script.

    Brian talked about this earlier, but there wasn't a final solution.

    --
    One other issue is that we'll need to let the jetpack run dynamically created code in the content context. I encountered this in tests, where I'd have to create dozens of scripts in data/ to replicate my current functionality, but this would also be required for e.g. a GreaseMonkey-like jetpack-based addon. I'm currently thinking of keeping onWindowCreate and giving it the sandbox object to work with.

    Nickolay

    Myk Melez

    unread,
    Jun 9, 2010, 1:28:15 PM6/9/10
    to mozilla-la...@googlegroups.com, Nickolay Ponomarev
    On 06/08/2010 01:49 PM, Nickolay Ponomarev wrote:
    One unfortunate consequence of running the script so early is that document.documentElement is still null at that point, which will surprise people (it surprises jQuery initialization code for sure). Basically, the only things you can do at on-window-created time are export JS APIs and register a load/DOMContentLoaded listener.
    Hmm, indeed. I wonder if we should ask the platform folks to implement a document-element-parse notification that fires after the document element has been parsed (but before any <script> tags in the <head> have been parsed). Then the API could wait until it observes that notification to set up the content script.

    In the meantime, though, I think we should just document this consequence and provide good examples for when and how users should load libraries like jQuery that assume the presence of the document element.
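    To make the timing concrete, here is a stub-based simulation of that
    sequence (the stub `document` is hypothetical scaffolding, since no real
    DOM exists outside the browser): code run at on-window-created sees a
    null documentElement, so DOM work belongs in a DOMContentLoaded
    listener.

```javascript
// Simulated timeline of a page-mod content script. The stub `document`
// below stands in for the real DOM; the point is the ordering, not the
// objects themselves.
var listeners = {};
var document = {
  documentElement: null,
  addEventListener: function (type, cb) {
    (listeners[type] = listeners[type] || []).push(cb);
  }
};

var log = [];

// --- the content script runs at on-window-created ---
log.push("at window-created, documentElement is " + document.documentElement);
// jQuery's init code would fail here; instead, defer DOM work:
document.addEventListener("DOMContentLoaded", function () {
  log.push("at DOMContentLoaded, documentElement is " +
           document.documentElement.tagName);
});

// --- later, the parser builds the document and fires the event ---
document.documentElement = { tagName: "HTML" };
(listeners["DOMContentLoaded"] || []).forEach(function (cb) { cb(); });

console.log(log.join("\n"));
// at window-created, documentElement is null
// at DOMContentLoaded, documentElement is HTML
```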


    So I'm leaning toward bsmedberg's C (SJOW for window, separate from the global object) now, although it means content libraries will be harder to use in the content script.
    Is it possible for global.__proto__ to be a reference to the same SJOW to which global.window points?


    • Still not decided:
      • How do we let the script include common libraries / modularize its code
    Providing a load() function to run additional scripts in a single context per page is literally a couple lines of code.

    I think doing CommonJS modules will be harder, since I'd have to use and modify more complex machinery (cuddlefish module).
    For the initial implementation, I think it's sufficient to let addons specify multiple scripts to load in the PageMod constructor. It doesn't provide modularization, and it isn't as flexible as on-demand loading from within the content script, but it satisfies common use cases such as loading a JS library, and we can circle back around on what else we should do in a second phase of development.


    1) sendMessage on the jetpack side has to know which content script to send the message to (you said "the content script", but there are many of them). A callback to onMessage lets the jetpack "answer" the content script, but still doesn't provide a way for jetpack to initiate a conversation.
    One idea: when a content script is set up, the API could call an onStart handler that the addon defines, passing it a handle to a "content script" object with sendMessage and onMessage properties. Then, if the addon wants to initiate conversations with individual content scripts, it would cache those handles and use them to communicate with the individual scripts.
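    A sketch of this handle-caching pattern. The PageMod stub, the onStart
    name, and the worker object's shape are all assumptions for
    illustration, not a settled API; the stub exists only so the pattern
    can run outside Jetpack.

```javascript
// Stub PageMod: pretend the mod matched two pages, so two content scripts
// started and the addon received one worker handle per script.
function PageMod(options) {
  [1, 2].forEach(function () {
    var worker = {
      _received: [],
      sendMessage: function (msg) { this._received.push(msg); },
      onMessage: null
    };
    options.onStart(worker);
  });
}

var workers = []; // one cached handle per live content script

PageMod({
  include: ["http://example.com/*"],
  onStart: function (worker) {
    // `worker` carries sendMessage/onMessage for this one content script
    workers.push(worker);
  }
});

// The addon can now initiate a conversation with a single page...
workers[0].sendMessage({ action: "highlight" });
// ...or with every page it modded:
workers.forEach(function (w) { w.sendMessage({ action: "refresh" }); });

console.log(workers.length); // 2
```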


    2) there'll probably need to be a way to find which tab a message came from and to send a message to a particular tab's content script.

    Brian talked about this earlier, but there wasn't a final solution.
    A callback parameter to sendMessage calls would provide this capability.


    One other issue is that we'll need to let the jetpack run dynamically created code in the content context. I encountered this in tests, where I'd have to create dozens of scripts in data/ to replicate my current functionality, but this would also be required for e.g. a GreaseMonkey-like jetpack-based addon. I'm currently thinking of keeping onWindowCreate and giving it the sandbox object to work with.
    The problem is that we won't be able to expose that sandbox object to the addon context once the addon context is running in a separate process, so this approach is not E10S compatible.

    How about we instead allow PageMod.script to accept strings of code in addition to references to files in the data directory?

    -myk

    Nickolay Ponomarev

    unread,
    Jun 9, 2010, 2:24:18 PM6/9/10
    to Myk Melez, mozilla-la...@googlegroups.com
    On Wed, Jun 9, 2010 at 9:28 PM, Myk Melez <m...@mozilla.org> wrote:
    On 06/08/2010 01:49 PM, Nickolay Ponomarev wrote:

    So I'm leaning toward bsmedberg's C (SJOW for window, separate from the global object) now, although it means content libraries will be harder to use in the content script.
    Is it possible for global.__proto__ to be a reference to the same SJOW to which global.window points?

    I think that's a bad idea, since it will make it very easy to accidentally access page-defined JS properties.

    • Still not decided:
      • How do we let the script include common libraries / modularize its code
    Providing a load() function to run additional scripts in a single context per page is literally a couple lines of code.

    I think doing CommonJS modules will be harder, since I'd have to use and modify more complex machinery (cuddlefish module).
    For the initial implementation, I think it's sufficient to let addons specify multiple scripts to load in the PageMod constructor. It doesn't provide modularization, and it isn't as flexible as on-demand loading from within the content script, but it satisfies common use cases such as loading a JS library, and we can circle back around on what else we should do in a second phase of development.

    Good. As I said, running jQuery code at window-created doesn't work, which was one motivation for load(). Another one was to provide a content-script equivalent to adding a <script src=..> to the document.

    1) sendMessage on the jetpack side has to know which content script to send the message to (you said "the content script", but there are many of them). A callback to onMessage lets the jetpack "answer" the content script, but still doesn't provide a way for jetpack to initiate a conversation.
    One idea: when a content script is set up, the API could call an onStart handler that the addon defines, passing it a handle to a "content script" object with sendMessage and onMessage properties. Then, if the addon wants to initiate conversations with individual content scripts, it would cache those handles and use them to communicate with the individual scripts.

    Yeah, Brian talked about something similar.

    2) there'll probably need to be a way to find which tab a message came from and to send a message to a particular tab's content script.

    Brian talked about this earlier, but there wasn't a final solution.
    A callback parameter to sendMessage calls would provide this capability.

    I meant getting a pipe to the content script given a browser <tab> and vice versa.
    One other issue is that we'll need to let the jetpack run dynamically created code in the content context. I encountered this in tests, where I'd have to create dozens of scripts in data/ to replicate my current functionality, but this would also be required for e.g. a GreaseMonkey-like jetpack-based addon. I'm currently thinking of keeping onWindowCreate and giving it the sandbox object to work with.
    The problem is that we won't be able to expose that sandbox object to the addon context once the addon context is running in a separate process, so this approach is not E10S compatible.

    How about we instead allow PageMod.script to accept strings of code in addition to references to files in the data directory?

    I meant the sandbox object as returned from the securable-module's SandboxFactory: defineProperty/getProperty/evaluate(string). This should be e10s-compatible.

    Nickolay

    Myk Melez

    unread,
    Jun 10, 2010, 5:41:27 PM6/10/10
    to Nickolay Ponomarev, mozilla-la...@googlegroups.com
    On 06/09/2010 11:24 AM, Nickolay Ponomarev wrote:
    On Wed, Jun 9, 2010 at 9:28 PM, Myk Melez <m...@mozilla.org> wrote:
    On 06/08/2010 01:49 PM, Nickolay Ponomarev wrote:

    So I'm leaning toward bsmedberg's C (SJOW for window, separate from the global object) now, although it means content libraries will be harder to use in the content script.
    Is it possible for global.__proto__ to be a reference to the same SJOW to which global.window points?

    I think that's a bad idea, since it will make it very easy to accidentally access page-defined JS properties.
    My primary concern with "window" being completely separate from the global object is that it complicates integration of JS libraries like jQuery that expect "window" to be the global object and access its properties like "location" as globals. Usage of such libraries will be very common, and it should be simple and easy to do so.

    How would we accommodate such libraries if the global object's prototype wasn't the "window" SJOW?


    • Still not decided:
      • How do we let the script include common libraries / modularize its code
    Providing a load() function to run additional scripts in a single context per page is literally a couple lines of code.

    I think doing CommonJS modules will be harder, since I'd have to use and modify more complex machinery (cuddlefish module).
    For the initial implementation, I think it's sufficient to let addons specify multiple scripts to load in the PageMod constructor. It doesn't provide modularization, and it isn't as flexible as on-demand loading from within the content script, but it satisfies common use cases such as loading a JS library, and we can circle back around on what else we should do in a second phase of development.

    Good. As I said, running jQuery code at window-created doesn't work, which was one motivation for load().
    Hmm, indeed, "load" would be a workaround for jQuery not working on window-created. But we should really fix this issue by loading the content script only after the document element has been created (akin to the way Chrome waits for at least "document_start" before it injects content script JS).

    Perhaps we can register a mutation event listener for that?


    Another one was to provide a content-script equivalent to adding a <script src=..> to the document.
    That seems like a different feature, and one whose use cases and implications we should think through carefully. Let's circle back around to that in a second phase of development.


    2) there'll probably need to be a way to find which tab a message came from and to send a message to a particular tab's content script.

    Brian talked about this earlier, but there wasn't a final solution.
    A callback parameter to sendMessage calls would provide this capability.

    I meant getting a pipe to the content script given a browser <tab> and vice versa.
    Sure, that seems useful. Perhaps there is some way to implement this in conjunction with the tabs API that Dietrich is working on. I wouldn't block the initial implementation on it, though.


    One other issue is that we'll need to let the jetpack run dynamically created code in the content context. I encountered this in tests, where I'd have to create dozens of scripts in data/ to replicate my current functionality, but this would also be required for e.g. a GreaseMonkey-like jetpack-based addon. I'm currently thinking of keeping onWindowCreate and giving it the sandbox object to work with.
    The problem is that we won't be able to expose that sandbox object to the addon context once the addon context is running in a separate process, so this approach is not E10S compatible.

    How about we instead allow PageMod.script to accept strings of code in addition to references to files in the data directory?

    I meant the sandbox object as returned from the securable-module's SandboxFactory: defineProperty/getProperty/evaluate(string). This should be e10s-compatible.
    Hmm, that's not so clear to me.

    defineProperty lets you specify an arbitrary JS value to define in the sandbox, but E10S would require it to be a JSON-serializable value to cross the process boundary. getProperty has the same issue.

    I suppose handles or CPOWs might help. But based on their description and example, it seems like handles can only be used to provide context to the original process, not to be used as a regular JS object by the other process. And CPOWs currently only support obtaining references in one direction (although there has been discussion of "CPOWs in reverse").

    In any case, a Greasemonkey-like addon is a relatively uncommon use case and a very different one than the use case this API is principally trying to accommodate, which is addons that modify web pages. We should focus on supporting the common use case here and deal with support for the Greasemonkey-like addon case in a separate effort.

    Testing is a different story, as we absolutely do want to support testing the API. However, that's not a good reason to add an addon developer-facing API call which we then have to support for addon developers using the API.

    Instead, we should figure out other ways to test the API, even if that means dozens of data/ scripts or loading the module via test.makeSandboxedLoader and then using that to poke its internal state (as a bunch of tests for existing modules already do).

    -myk

    Brian Warner

    unread,
    Jun 11, 2010, 9:08:59 PM6/11/10
    to mozilla-la...@googlegroups.com
    On 6/10/10 2:41 PM, Myk Melez wrote:
    > On 06/09/2010 11:24 AM, Nickolay Ponomarev wrote:

    >>> 2) there'll probably need to be a way to find which tab a message
    >>> came from and to send a message to a particular tab's content script.
    >>>
    >>> Brian talked about this earlier, but there wasn't a final solution.
    >> A callback parameter to sendMessage calls would provide this
    >> capability.
    >>
    >> I meant getting a pipe to the content script given a browser <tab> and
    >> vice versa.
    > Sure, that seems useful. Perhaps there is some way to implement this in
    > conjunction with the tabs API
    > <https://bugzilla.mozilla.org/show_bug.cgi?id=549317> that Dietrich is
    > working on. I wouldn't block the initial implementation on it, though.

    I tend to lean away from providing "from" arguments in message-receiving
    calls. Instead, I'd have the top-level jetpack code get notified each
    time its pagemods are loaded (i.e. when a new tab is opened, or an
    existing one is navigated to a new page), and for that notification call
    to provide an object on which a message-handler callback can be
    registered. That same object can provide information about the place
    where the mods were just loaded (such as the tab object, or the URL
    which was just loaded). Code which cares about those things can stash
    them for later use.

    Something vaguely like:

    mods.ScriptMod({ stuff..
      onNewPage: function (control) {
        // control.location is the URL of the new page
        all_pages.push(control); // to hit all of them later
        var tab = control.tab;
        function gotMessage(event) {
          if (event.message == "close me") {
            tab.close();
          }
          // or compare 'tab' against a global list to see
          // what we want to do with them
        }
        control.addListener(gotMessage);
      }
    })

    My reasoning is that you're never quite sure what the application is
    going to want to switch upon, so it's cleaner to have just one argument
    to gotMessage() and move the other stuff up into the code that decided
    which callback to attach to addListener(). Apps which want to share
    gotMessage() code can just curry in an identifier:

    control.addListener(function (message) {
      gotMessage(message, identifier);
    });

    My other reason is the general obj-cap principle that you don't care
    "who" is sending you a message as much as you care about being selective
    about granting the ability to send messages to a particular endpoint.
    There are composability and delegation reasons, as well as protecting
    against the "confused deputy" attack. I'm not sure they're perfectly
    relevant here, but they lead to a general design principle to avoid the
    "from" argument.

    Also, if we have multiple pagemods which affect a given tab, their
    messages need to get to the right handler, and the onNewPage()
    notification (or whatever it's called) is a good place to make that
    distinction.

    cheers,
    -Brian
