
Hello All - Questions on Plugin migration for e10s


moz-...@rxv.me

Apr 5, 2015, 5:38:08 PM
to dev-tech-e...@lists.mozilla.org
Hello, I'm new to the list and first want to say: great work on getting
this going again. I eagerly await the awesomeness that this will bring.

On the other hand, I'm the author of Snap Links Plus and I have not
moved on e10s compatibility yet because I'm just not sure the best way
to implement it.

Per my understanding of e10s, each document will either be its own
process or served by some pool of processes.

I could implement *nearly* all of SnapLinksPlus in the document, but it
seems as though that would bloat each document unnecessarily. This would
be the case unless I only loaded a stub which would watch for activation
and then load the rest of the code needed. That, in turn, seems likely
to introduce considerable UI sluggishness: at activation it needs to
obtain a list of the elements in the document that are usable by SLP,
and adding code loading on top of that seems a bit too much to stay
immediately responsive.

On the other hand, I could implement it entirely on the XUL side, but
this would seem to defeat the purpose of multiple processes (as I
understand that the XUL side will still be a single process?)

Thoughts on the right direction to go?

Thanks,

--
-Clint

Blake Kaplan

Apr 24, 2015, 6:27:24 PM
to mozilla-dev-te...@lists.mozilla.org
Hello,

moz-...@rxv.me wrote:
> On the other hand, I'm the author of Snap Links Plus and I have not
> moved on e10s compatibility yet because I'm just not sure the best way
> to implement it.

I don't think there's a "best way" to do this. Conversion to multiprocess is
more of an art than a science. Here are a couple of tips that may help.

> Per my understanding of e10s, each document will either be its own
> process or served by some pool of processes.

This is right. At the moment (and for the foreseeable future) the process
boundaries are going to be per-tab at their most granular. We might eventually
want to put things like cross-origin iframes in their own processes, but that
is pretty far out.

> I could implement *nearly* all of SnapLinksPlus in the document, but
> this seems as though it would cause each document to bloat
> un-necessarily. This would be the case unless I only loaded a stub

I'm not sure what bloat you're worried about, exactly. We already load several
megabytes worth of code in both the parent and the child, so a bit of extra
code won't matter too much. Even with several copies, the extra memory used
for your addon should be dwarfed by the memory used by the system.

That being said, it looks like you're going to have to do some splitting
between the parent and child anyway. The context menu has to be opened by the
parent process, while the DOM manipulation should take place in the child. So,
I would expect the code that handles the mousedown/mousemove/mouseup events
and that calculates what links are in the given rectangle to live in the child
process and then to send a message to the parent with the information needed
in order to display the context menu. The context menu would then probably
want to send a message down to the child in order to navigate (or something to
that effect, if you have to open new tabs then you'd probably want to just
pass the URL up).

Does that make sense? Concretely, the code in the parent could live in a
module or overlay and use the message manager to create a delayed frame script
(only one per startup) and then communicate with that script.
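For concreteness, here is a rough sketch of that wiring. Every name in it is
hypothetical (the chrome:// URL, the "SnapLinks:LinksSelected" message, and
the helper functions), and the frame-script half is wrapped in a function
taking its global so both halves fit in one sketch:

```javascript
// --- parent ("chrome") side, e.g. in a module or overlay ---
const FRAME_SCRIPT_URL = "chrome://snaplinks/content/frame-script.js";

function initParent(messageManager) {
  // Second argument true = "delayed": also load into frames created
  // later, so this is set up once per startup as described above.
  messageManager.loadFrameScript(FRAME_SCRIPT_URL, true);
  messageManager.addMessageListener("SnapLinks:LinksSelected", (msg) => {
    // msg.data crossed the process boundary via structured clone
    showContextMenuFor(msg.data.links);
  });
}

function showContextMenuFor(links) {
  // hypothetical: populate and open the XUL popup here, in the parent
}

// --- child side (the frame script itself) ---
// `global` stands in for the frame-script global, which provides
// addEventListener/sendAsyncMessage and the `content` window.
function initFrameScript(global) {
  global.addEventListener("mouseup", () => {
    const links = collectSelectedLinks(global.content.document);
    global.sendAsyncMessage("SnapLinks:LinksSelected", { links });
  }, true);
}

function collectSelectedLinks(doc) {
  // hypothetical placeholder for the real rectangle hit-test
  return Array.from(doc.links, (a) => a.href);
}
```

The parent would call initParent(window.messageManager) once at startup; only
plain data (the link URLs) travels in the message, so nothing blocks.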

> On the other hand, I could implement it entirely on the XUL side, but
> this would seem to defeat the purpose of multiple processes (as I
> understand that the XUL side will still be a single process?)

This is an excellent point. As a matter of fact, we do have CPOWs, which allow
the parent process to talk synchronously to DOM nodes in the child;
however, doing so does, in fact, defeat the purpose of using multiple processes
and causes jank (and the jank would be even worse because now there's
additional IPC overhead).
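To illustrate the trade-off (message name and payload shape hypothetical): a
frame script's sendAsyncMessage takes an optional third argument of objects to
expose to the parent as CPOWs, so the pattern to prefer is copying the data
you actually need into the serialized payload and leaving the CPOW untouched:

```javascript
// Runs in the frame script; `sendAsyncMessage` is passed in so the
// sketch stays self-contained.
function reportTarget(sendAsyncMessage, node) {
  sendAsyncMessage(
    "SnapLinks:Target",   // message name (hypothetical)
    { href: node.href },  // cheap: copied once via structured clone
    { node }              // CPOW: every later parent-side access is a
                          // synchronous round-trip to the child
  );
}
```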

I hope this helps.
--
Blake Kaplan

moz-...@rxv.me

Apr 25, 2015, 9:36:04 AM
to Blake Kaplan, mozilla-dev-te...@lists.mozilla.org
Thanks for the reply, Blake...

On 4/24/2015 5:26 PM, Blake Kaplan wrote:
> Hello,
>
> moz-...@rxv.me wrote:
>> On the other hand, I'm the author of Snap Links Plus and I have not
>> moved on e10s compatibility yet because I'm just not sure the best way
>> to implement it.
> I don't think there's a "best way" to do this. Conversion to multiprocess is
> more of an art than a science. Here are a couple of tips, that may help.
>
>> Per my understanding of e10s, each document will either be its own
>> process or served by some pool of processes.
> This is right. At the moment (and for the foreseeable future) the process
> boundaries are going to be per-tab at their most granular. We might eventually
> want to put things like cross-origin iframes in their own processes, but that
> is pretty far out.
>
>> I could implement *nearly* all of SnapLinksPlus in the document, but
>> this seems as though it would cause each document to bloat
>> un-necessarily. This would be the case unless I only loaded a stub
> I'm not sure what bloat you're worried about, exactly. We already load several
> megabytes worth of code in both the parent and the child, so a bit of extra
> code won't matter too much. Even with several copies, the extra memory used
> for your addon should be dwarfed by the memory used by the system.
Okay, I won't worry about that too much then; my goal is responsiveness,
but without overburdening things for no reason.
> That being said, it looks like you're going to have to do some splitting
> between the parent and child anyway. The context menu has to be opened by the
> parent process, while the DOM manipulation should take place in the child. So,
> I would expect the code that handles the mousedown/mousemove/mouseup events
> and that calculates what links are in the given rectangle to live in the child
> process and then to send a message to the parent with the information needed
> in order to display the context menu. The context menu would then probably
> want to send a message down to the child in order to navigate (or something to
> that effect, if you have to open new tabs then you'd probably want to just
> pass the URL up).
>
> Does that make sense? Concretely, the code in the parent could live in a
> module or overlay and use the message manager to create a delayed frame script
> (only one per startup) and then communicate with that script.
I think it makes the most sense to put it in the document. However, I
thought of another issue I probably have no way to solve: SnapLinks does
quite a bit of work to make iframes transparent to the user, meaning
that if there are one or more iframes in a document and a drag passes
over the main document and two or three iframes, it will highlight and
act on those iframe links as though they were not in an iframe.

This trickery is only accomplished because the XUL side (used to be able
to) traverse the DOM without security restrictions. Will I have to
somehow inject code into each iframe as well and have those segments
communicate to... the outer document? (I don't think that's possible.)
Or to... the XUL code and then back down to the main document?
>> On the other hand, I could implement it entirely on the XUL side, but
>> this would seem to defeat the purpose of multiple processes (as I
>> understand that the XUL side will still be a single process?)
> This is an excellent point. As a matter of fact, we do have CPOWs, which allow
> the parent process to synchronously talk directly to DOM nodes in the child;
> however, doing so does, in fact, defeat the purpose of using multiple processes and
> cause it to jank (and the jank would be even worse because now there's
> additional IPC overhead).
That makes sense. With a UI like this, responsiveness seems to be the
driving goal, so I don't think the async nature should be a problem; in
fact, the multi-process aspect will probably improve responsiveness if I
offload calculations (which I'd been considering doing anyway, to a web
worker). Like many (I hope), I don't have much experience with multiple
threads of execution, but I have done it before with success.
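Something like the following is what I have in mind; the DOM reads stay on the
main thread, and only plain rectangles go to the worker. The worker file name
and message shape are made up, and the worker call itself is left as a comment
so the sketch stays self-contained:

```javascript
// Pure geometry: do two rectangles overlap?
function rectsIntersect(a, b) {
  return a.left < b.right && b.left < a.right &&
         a.top < b.bottom && b.top < a.bottom;
}

// Pure function the worker would run: keep the links whose bounding
// rect intersects the drag rectangle. `links` is an array of plain
// { href, rect } objects extracted from the DOM beforehand.
function linksInRect(links, dragRect) {
  return links.filter((l) => rectsIntersect(l.rect, dragRect));
}

function hitTestAsync(links, dragRect) {
  // In the frame script this would instead post to a worker, e.g.:
  //   const worker = new ChromeWorker("resource://snaplinks/worker.js");
  //   worker.postMessage({ links, dragRect });
  // and resolve from worker.onmessage; run inline here for the sketch.
  return Promise.resolve(linksInRect(links, dragRect));
}
```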
> I hope this helps.
It does, and it gives me some impetus to get my butt in gear soon. Thanks!

Виноградов Сергей

Apr 26, 2015, 3:40:10 PM
to mozilla-dev-te...@lists.mozilla.org
Do I understand correctly that the SDK (require) is not available in a frame script?

Will this be added at some point?

Blake Kaplan

May 4, 2015, 1:39:04 PM
to mozilla-dev-te...@lists.mozilla.org
moz-...@rxv.me wrote:
> This trickery is only accomplished because the XUL side (used to be able
> to) traverse the DOM without security restrictions. Will I have to
> somehow inject code into each iframe as well and have those segments
> communicate to... the outer document? (don't think that's possible),

This should continue to work as expected. Even though your code is running
in the "content" process, you can still run "chrome privileged" code. (It might
be easier to talk about the "parent" or "main" process and "child" processes.)
Therefore, you'll still be able to walk through all of the DOMs of all of the
iframes without worrying about having to inject code into them.
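In other words, a frame script can simply recurse through the frame tree. A
sketch (collectLinks is a made-up helper; `content` is the frame script's
top-level content window):

```javascript
// Walk a window and all of its subframes, gathering link URLs.
// Chrome-privileged code in the child can descend into subframe
// documents directly, so no per-iframe injection is needed.
function collectLinks(win, out = []) {
  for (const link of win.document.links) {
    out.push(link.href);
  }
  // win.frames mirrors the subframe windows; recurse into each one.
  for (let i = 0; i < win.frames.length; i++) {
    collectLinks(win.frames[i], out);
  }
  return out;
}

// inside the frame script: const allLinks = collectLinks(content);
```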
--
Blake Kaplan

moz-...@rxv.me

May 4, 2015, 10:54:39 PM
to Blake Kaplan, mozilla-dev-te...@lists.mozilla.org
Ah interesting, I thought the content side would be unprivileged. This
will hopefully be easier than I thought then.

Is there any hard date when e10s is planned to be released? Any idea of
the time frame?


Gabor Krizsanits

May 5, 2015, 6:41:52 AM
to moz-...@rxv.me, Blake Kaplan, mozilla-dev-te...@lists.mozilla.org
An exact date would be hard to give with absolute certainty. It will hit the
Aurora channel next Monday, and the goal is to release it by the end of
this year (Firefox 42-43). You can find more details about the schedule and
the remaining blocking bugs here: https://wiki.mozilla.org/Electrolysis/Roadmap
(M8 is the last beta-blocker milestone.)
