The core team who will be working on this project is currently myself, bent,
jdrew, jduell, and bz. If you'd like to volunteer to be part of this core
team, please let me know! We'll definitely be bringing in others as needed.
In order to keep the problem well scoped, we have decided to focus first on
the responsiveness and stability aspects of multi-process, and defer
security sandboxing to a future phase.
There are a lot of unknowns in this project, but I'd like to call out
several in particular:
== Chrome content touching content DOM ==
We'd like to get a list of all the places UI code (chrome) touches content
DOM or JS. At this point we want to get a sense of how frequent this kind of
access is, and what kinds of data it actually needs.
This is because we'd like to avoid having to implement wrappers which expose
arbitrary JS objects or XPCOM objects from the content process to the chrome
process. Instead, all accesses would hopefully be asynchronous. But since
this involves rewriting chrome code, we'd like to scope the problem before
making any decisions.
I can think of at least the following places which touch content from chrome:
* context menus
* session store
* find in page
* snapshots of content (tab previews)
>> Does it really matter what we do? I mean, can't an extension touch
>> anything anywhere anyway? Do we plan to break that model then? (I'm not
>> trying to be stop-energyish, just trying to understand)
Yes, I'd *like* to break that model.
Instead, if you want to touch content, you have to do it asynchronously, and
you can only pass in shared-nothing arguments and return values (things that
could be represented as JSON), e.g. an async callback whose body is just:

  alert("Got data back: " + result);
But right now I'm not trying to make that proposal: just get a grasp of how
big the problem is.
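To make the shared-nothing idea concrete, here is a toy sketch of what an asynchronous, JSON-only access might look like. The channel and message names are all invented for illustration; the real cross-process API doesn't exist yet. The one thing the sketch is careful about is the restriction described above: every payload is forced through JSON, so no live object ever crosses the boundary.

```javascript
// Toy "channel" standing in for the cross-process transport.
function makeChannel() {
  const listeners = {};
  return {
    on(name, fn) { (listeners[name] = listeners[name] || []).push(fn); },
    send(name, payload) {
      // Force a copy through JSON: anything non-serializable is lost or
      // rejected, which is exactly the shared-nothing restriction.
      const data = JSON.parse(JSON.stringify(payload));
      (listeners[name] || []).forEach(fn => fn(data));
    },
  };
}

const channel = makeChannel();

// Content-process side: answers data requests with plain data.
channel.on("get-title", msg => {
  channel.send("get-title-reply", { id: msg.id, title: "Example Page" });
});

// Chrome-process side: asks asynchronously, never touches the content DOM.
let result;
channel.on("get-title-reply", msg => {
  result = msg.title;
  // alert("Got data back: " + msg.title);
});
channel.send("get-title", { id: 1 });
```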
For the most part, not at all. But yes, we expect to have to make some
changes to our various tests (and our chrome and some extensions may
also need changes) to get them to work again in the new setup.
* Page Info Dialog
* Feed Processor (the one that puts the little icon in the url bar)
* Possibly the search engine finder
I guess that JS code in the chrome process would have a hard time
knowing if the access was async or not. Firebug will want to prevent
writes on the content, which would create a kind of sync access at some
possibly high overhead, e.g. [async call to halt writes, event says
halted, async call to read, event says value, async call to continue]
would simulate a sync read. This would not be the normal mode; rather, we
would hook mutation events, read data async to set the UI, then update
the UI based on mutation events.
We'd need to be able to prevent update during mutation events though.
That's not asynchronous. Hmm...
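The halt/read/resume round trip described above can be sketched with Promises standing in for the async messages. All the names here are made up; the point is only the shape of the control flow: three async hops that, from the caller's point of view, behave like one synchronous read.

```javascript
// Stub modelling the content process; each method is one async message.
function makeContentStub() {
  let value = "initial";
  let halted = false;
  return {
    haltWrites: () => Promise.resolve().then(() => { halted = true; }),
    read: () => Promise.resolve().then(() => value),
    resume: () => Promise.resolve().then(() => { halted = false; }),
    isHalted: () => halted,
  };
}

// The "simulated sync read": async call to halt writes, async call to
// read, async call to continue -- chained so intervening writes can't run.
async function simulatedSyncRead(content) {
  await content.haltWrites(); // event says: halted
  const value = await content.read(); // event says: value
  await content.resume(); // writes may continue
  return value;
}
```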
I'm not sure what the use case is here. Can you maybe elaborate? What do
you do right now in this situation?
From a technical point of view, this is incredibly interesting. I'd
like to discuss accessibility-related issues. Currently most users of
our accessibility API are in-process, so what this means for
multi-process needs to be worked out. I think part of this
will inevitably look beyond the phase of chrome/content separation.
Questions arise, like: will the chrome process broker the information
from content(s)? Or do we want another model?
I don't know if it makes sense for me to be part of the core team, but
want to be as much in the loop as possible.
> I don't know that this is going to be a problem, but I thought I'd
> mention that I have a 75%-done patch to split the test server to its
> own process. So we have a way out if it does become a problem. (I
> estimate another week of work is needed.)
I don't understand what you're saying here. The test server for Mochitest
already runs in its own xpcshell process. Firefox simply accesses it as it
would any other HTTP server. (Although we do create a proxy PAC to proxy a
number of domains to localhost.)
Oh, sorry. The test server for reftests runs in-process and I assumed
mochitest was the same. The patch I have is about reftests.
When a function is compiled we can stop the browser synchronously with the
function-compilation-completion. That allows us to, for example, set a
breakpoint before the function runs, which is usually when you want the
breakpoint inserted.
For DOM mutation I believe we get sync control in the handler: no more
mutation until we return. That way we can examine the result of mutation
confident that future mutations have not changed the state.
For error messages we sometimes get async notification and it's a pain in
the neck: who wants to learn about an error on an object whose state has
since changed?
> When a function is compiled we can stop the browser synchronously with the
> function-compilation-completion. That allows us to, for example, set a
> breakpoint before the function runs, which is usually when you want the
> breakpoint inserted.
Since the content JS will be running in the content process, all your
debugger hooks will need to be installed there. You'll then have to proxy
the interesting information back to the chrome process, which is presumably
the process that the firebug UI would be in.
As you can probably tell, it's not clear how extensions will be able to do
that kind of thing yet... we'll figure that part out as we go along.
Yeah, this could be proxied (synchronously) to the right process as needed.
> For DOM mutation I believe we get sync control in the handler: no more
> mutation until we return. That way we can examine the result of mutation
> confident that future mutations have not changed the state.
Actually, you can't (by design of DOM events; the future mutations may
have happened by the time the handler fires). But that's separate from
what you're doing.
> For error messages we sometimes get async notification and it's a pain in
> the neck: who wants to learn about an error on an object whose state has
> since changed?
OK. So fundamentally what you need here is debugger functionality: the
ability to stop running script at any point. That makes sense.
Event handling, like the events used for activating a link or activating a
plugin, find, gesture events, "Command" events, etc.
s/Command/AppCommand/, DOMLinkAdded, DOMTitleChanged,
probably key events for find,
apparently there is a listener for "pageshow"...
The docshell tree is going to be disconnected: events which take place in
the content window will not automatically bubble up to chrome the way they
do currently. If there are particular events which should bubble, we'll have
to write code to explicitly bubble them.
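"Explicitly propagating" an event could look roughly like this: a content-side listener serializes the interesting parts of selected events and forwards them as messages. The window stub and the send function are illustrative stand-ins, not a real API; the point is that only events on an explicit list cross the boundary, and only as plain data.

```javascript
// Minimal stand-in for a content window's event target.
function makeFakeWindow() {
  const listeners = {};
  return {
    addEventListener(type, fn) {
      (listeners[type] = listeners[type] || []).push(fn);
    },
    dispatch(type, detail) {
      (listeners[type] || []).forEach(fn => fn({ type, detail }));
    },
  };
}

// Only events on this list get forwarded to the chrome process.
const forwardedTypes = ["DOMLinkAdded", "DOMTitleChanged"];

function forwardEvents(contentWindow, sendToChrome) {
  for (const type of forwardedTypes) {
    contentWindow.addEventListener(type, event => {
      // Only shared-nothing data crosses the process boundary.
      sendToChrome({ type: event.type, detail: event.detail });
    });
  }
}

const received = [];
const win = makeFakeWindow();
forwardEvents(win, msg => received.push(msg));
win.dispatch("DOMTitleChanged", { title: "New Title" });
win.dispatch("click", {}); // not on the list: stays in the content process
```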
This feels like a data-collection thread that belongs on a wiki. ;-)
And btw, I'd use word "propagate", not "bubble".
"bubble" is just one phase of event handling ;)
> I can think of at least the following places which touch content from chrome:
* Password manager
* Form manager (satchel)
We already have code to explicitly bubble to the chrome event handler,
basically. We'd need to change it to proxy cross-process, but it would work.
We want to breakpoint, single step, etc. across JS, CSS, and markup, because
that is the runtime that developers face.
The chrome process won't have access to the original event target (or rather,
it would require cross-process proxies, which I'd still like to avoid if possible).
You're talking past each other.
Zack is talking about the test *harness* running (mostly) in its own process.
Ted is talking about the HTTP server (only) running in its own process.
I don't think we'll be able to avoid them.
>Instead, all accessses would hopefully be asynchronous.
Might it be possible to write a caching proxy object that can
synchronously return values for immutable properties, e.g. localName, and
also for some properties that are not immutable? Those would be
asynchronously updated on the proxy object, so the value might be
out of date.
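The caching-proxy idea might look something like the following sketch (all names invented): immutable facts are captured once when the proxy is created and served synchronously, while mutable ones are served from a cache that async updates refresh, so a read can return a stale value.

```javascript
function makeNodeProxy(snapshot) {
  const immutable = { localName: snapshot.localName }; // fixed at creation
  const cache = { textContent: snapshot.textContent }; // possibly stale
  return {
    get localName() { return immutable.localName; }, // always correct
    get textContent() { return cache.textContent; }, // last known value
    // Called when an async mutation report arrives from the content process.
    _applyUpdate(update) { Object.assign(cache, update); },
  };
}

const proxy = makeNodeProxy({ localName: "div", textContent: "hello" });
// ... later, an async message reports a content-side mutation:
proxy._applyUpdate({ textContent: "goodbye" });
```

The trade-off is exactly the one raised above: chrome code gets convenient synchronous reads, but for mutable properties it must tolerate values that lag behind the content process.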
>I can think of at least the following places which touch content from chrome:
>* context menus
>* session store
>* find in page
>* snapshots of content (tab previews)
Does it make a difference whether these accesses are reads or writes?
>The docshell tree is going to be disconnected: events which take place in the content window will not automatically bubble up to chrome the way they do currently. If there are particular events which should bubble, we'll have to write code to explicitly bubble them.
Supporting DOM Inspector is going to be interesting!
That sounds pretty difficult, actually. Rewriting nsContextMenu.js
would be fun without access to the event target, for example...
I'm not quite sure what it means to "single step" through markup or CSS.
It's not like there's a linear thread of execution here like for JS.
Anything involving CSS can (and in the future might) happen in parallel
on multiple native threads, for example.
For markup it could be via DOM mutation events. The debugger would "break"
on certain (e.g. conditional) or all mutation events. By manipulating the
breakpoint it would create behavior similar to breakpoints and single
stepping.
For css you would want the similar effect for style application, rule
changes, and so on.
As always multi-threading makes the runtime indeterminate, but it does
not affect the breakpoint/single step stuff. When a breakpoint hits, all
the other threads have to stop.
Might be time to look at taking Firebug out-of-process itself, based
on Mark Finkle's or other remoting work? That would also reduce the
pain of making things like ChromeBug work reliably, I imagine, since
it wouldn't have to share an increasingly-interleaved runtime with the
browser.
What I was *hoping* we could do is:
* have a context menu listener in the content process
* that listener would collect all the information necessary to present the
  context menu
* and send an event to the chrome process with the information
* then the chrome process could present the actual menu
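A toy version of that flow (function names and menu strings are all hypothetical): the content side gathers only JSON-able facts about the click target, and the chrome side builds the menu from that serialized info alone, without ever holding a node reference.

```javascript
// Content process: collect plain data about the clicked node.
function collectContextMenuInfo(target) {
  return {
    isLink: target.localName === "a",
    linkURL: target.href || null,
    isImage: target.localName === "img",
  };
}

// Chrome process: decide which items to show from the serialized info.
function menuItemsFor(info) {
  const items = [];
  if (info.isLink) items.push("Open Link in New Tab", "Copy Link Location");
  if (info.isImage) items.push("View Image");
  if (items.length === 0) items.push("Back", "Reload");
  return items;
}
```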
> What I was *hoping* we could do is:
> * have a context menu listener in the content process
> * that listener would collect all the information necessary to present the
> context menu
> * and send an event to the chrome process with the information
> * then the chrome process could present the actual menu
This sounds like refactoring that could happen now, and when you go
multi-process you just move the event source from chrome to the content
process. Do you have a tracking bug for things like this yet? I bet there
are more ways we could massage the Firefox chrome to be more receptive to
the process separation model.
> This sounds like refactoring that could happen now, and when you go
> multi-process you just move the event source from chrome to the content
> process. Do you have a tracking bug for things like this yet? I bet there
No. For the moment we're going to focus on bootstrapping as quickly as
possible. Until that's done I don't think we're going to have a solid idea
of what we want the model to be.
I guess we can make out-of-process work like in-process only slower,
taking more memory, requiring more source code, and development time.
Since Chromebug's number one reliability problem is insufficient
development time, I think out of process would probably kill it.
The current runtime model works very well. There is a single JS thread
of control and lots of smart people making sure that JS correctly
interleaves with other parts of the browser operations. A multi-process
debugger would probably attempt to recreate this by presenting a network
interface per process plus thread-halting mechanisms (assuming the
multi-process browser was also multi-threaded).
One advantage of an out-of-process debugger is encapsulation of the
target API. Differences among browser versions and across browser brands
could be localized on the target side of the network connection.
Other advantages are remote and 'social' (multi-party) debugging. But
these all come at a cost.
I personally believe the engineering costs of multi-threading and
multi-process are too high compared to benefits in a world of fixed
resources. It's not just "is multiprocess better", it's "if we put the
same enormous effort in another direction what could be done". But
there's no stopping this sort of thing once it gets going.
I'm confused by this answer. The chrome process has to have access to
the event target. Surely a new API for chrome is not being proposed?
> I'm confused by this answer. The chrome process has to have access to
> the event target. Surely a new API for chrome is not being proposed?
Yes, I am proposing that chrome will not have access to the content event
target, and will have to use an API which isolates chrome and content.
So do you have some hints on how it might look by the time it gets up to
JS on the chrome side? By then it may be proxy-like?
In our code now we do stuff like
win.addEventListener('load', foo, true);
foo is of course in the chrome process. Blake does a lot of heavy lifting to
make this work: it looks like we have direct access to the window in
both the caller and the callback, but neither is true. How might this
differ in multi-process?
That doesn't help with markup (as opposed to DOM scripted changes),
because parsing doesn't fire mutation events (or even the Gecko-internal
equivalents, in many cases; mutations during parsing are aggressively
coalesced).
> For css you would want the similar effect for style application, rule
> changes, and so on.
I'm not sure what "style application" means here... And more
importantly how you single-step it.
> As always multi-threading makes the runtime indeterminate, but it does
> not affect the breakpoint/single step stuff. When a breakpoint hits, all
> the other threads have to stop.
I don't see that happening for situations like "style application"
(assuming I understand what you're talking about) without unacceptable
performance penalties.
With the listener in the content process under UI control (so that
different Gecko embedders can easily use different listeners)? That
would work, I guess; nsContextMenu already does more or less that now,
in terms of building up all these properties on an object and then
showing the menu based on that object.
No problem, I don't think developers want that anyway.
>> For css you would want the similar effect for style application, rule
>> changes, and so on.
> I'm not sure what "style application" means here... And more
> importantly how you single-step it.
I want to answer questions like "Why is this green?" for, say, a font.
If we did not have dynamics, we could know. But we do.
>> As always multi-threading makes the runtime indeterminate, but it does
>> not affect the breakpoint/single step stuff. When a breakpoint hits,
>> all the other threads have to stop.
> I don't see that happening for situations like "style application"
> (assuming I understand what you're talking about) without unacceptable
> performance penalties.
debuggers: performance penalties-R-us.
So what you really want is to take a point in time, freeze the world,
snapshot the information on rules applying to that element and all
ancestors, and unfreeze the world. You don't actually care about
stopping things partway through style resolution or selector matching,
which is what it sounded like you wanted initially.
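The freeze/snapshot/unfreeze idea reduces to something like this toy: given a frozen world, walk the element and its ancestors and record which rules match each one. The tree shape, rule objects, and `matches` predicate are all stand-ins for whatever the real style system would expose.

```javascript
// Snapshot which rules apply to an element and each of its ancestors.
function snapshotRules(element, rules) {
  const report = [];
  for (let node = element; node; node = node.parent) {
    report.push({
      name: node.name,
      matched: rules.filter(r => r.matches(node)).map(r => r.selector),
    });
  }
  return report;
}

// Toy tree and "rules" for illustration.
const body = { name: "body", parent: null };
const span = { name: "span", parent: body };
const rules = [
  { selector: "span", matches: n => n.name === "span" },
  {
    selector: "body span",
    matches: n => n.name === "span" && n.parent && n.parent.name === "body",
  },
];
```

Since the snapshot is taken while everything is stopped, the report is internally consistent; there is no need to break partway through selector matching itself.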
>> I don't see that happening for situations like "style application"
>> (assuming I understand what you're talking about) without unacceptable
>> performance penalties.
> debuggers: performance penalties-R-us.
I have no problem with a performance penalty while the debugger is
active. I do have a problem with a performance penalty while not debugging.