
Proposed changes to JS_THREADSAFE rules


Jason Orendorff

Feb 8, 2010, 3:19:16 PM
This post is about changing the rules about sharing objects in
JS_THREADSAFE builds of SpiderMonkey. Feedback is desired, especially
from brendan, igor, gwagner, and anyone else interested in taking some
of the work.

As it stands, in a build with JS_THREADSAFE, JSObjects can theoretically
be shared across threads and accessed from any thread. Any locking
necessary to avoid crashes supposedly happens internal to the JSAPI. But
there are bugs:

- Dense arrays are not thread-safe (bug 419537).

- Generators, iterators, and E4X objects are not thread-safe
(bug 349263).

- JIT code is not thread-safe.

- Even if it were, objects shared across threads would inhibit the
JIT.

This is pretty sad: applications can't safely use the JS_THREADSAFE APIs
as they are designed and documented--they're broken in ways that scripts
can expose--and yet we are still paying the locking overhead in every
JS_THREADSAFE build.

The changes we're considering are:

1. Make native objects single-thread-only by default. This means
sharing ordinary objects across threads would be forbidden.

We would add an API to create a multithread-safe object. There are
two possible implementations: (a) create both the desired object and
a wrapper object with non-native ops; (b) create the object and then
push non-native ops down on top of it, using a new "stackable ops"
protocol as discussed in bug 408416. Either way, the locking would
happen in the JSObjectOps implementation of the wrapper object--that
is, underneath the ops layer and outside the interpreter proper. JIT
code would never mistake a wrapper for a plain ST object because the
wrappers would be SHAPELESS. (A rough sketch of the wrapper approach
follows the list.)

We would not support sharing any global objects or closures across
threads.

2. Move to multiple single-threaded GC heaps and property trees.

References across heaps would require wrappers and an
application-provided cross-heap GC, like Gecko's cycle collector.
(This is not something most non-Gecko embeddings can realistically
afford to provide, though we do provide enough APIs if you're
willing to use undocumented stuff.)

All references to MT objects would be cross-heap and additionally
require the synchronization described above.
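
To make option (a) concrete, here is a minimal sketch of the locking
wrapper idea. The names (MTWrapper, mt_get_property) are invented for
illustration; nothing like this exists in the JSAPI today. The only
point is where the lock lives: in the wrapper's ops, outside the
interpreter and the JIT.

#include "jsapi.h"
#include "prlock.h"

/* Hypothetical MT wrapper: pairs an NSPR lock with the ordinary
 * (single-thread-only) native object it protects. */
typedef struct MTWrapper {
    PRLock   *lock;    /* serializes every access through the wrapper */
    JSObject *target;  /* the ordinary native object being protected  */
} MTWrapper;

/* What the wrapper's getProperty op would boil down to: take the lock,
 * delegate to the wrapped object, release the lock. The interpreter and
 * JIT only ever see the SHAPELESS wrapper, never the target directly. */
static JSBool
mt_get_property(JSContext *cx, MTWrapper *w, const char *name, jsval *vp)
{
    JSBool ok;
    PR_Lock(w->lock);
    ok = JS_GetProperty(cx, w->target, name, vp);
    PR_Unlock(w->lock);
    return ok;
}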

As you can tell, the details aren't decided. I haven't even filed bugs
for the work yet (but that's coming soon). Comments welcome.

-j

Boris Zbarsky

Feb 8, 2010, 3:56:08 PM
On 2/8/10 3:19 PM, Jason Orendorff wrote:
> We would not support sharing any global objects or closures across
> threads.

In the new world, would it still be possible for a JS XPCOM component to
spin up a worker thread and dispatch runnables to it, and if so, how?

-Boris

Mike Shaver

Feb 8, 2010, 4:03:30 PM
to Boris Zbarsky, dev-tech-...@lists.mozilla.org

TBQH, I'd rather do that by extending the web worker API such that a
JS XPCOM component can ask for a worker that has Components exposed,
and then use postMessage and such.

Mike

Boris Zbarsky

Feb 8, 2010, 5:13:09 PM
On 2/8/10 4:03 PM, Mike Shaver wrote:
> TBQH, I'd rather do that by extending the web worker API such that a
> JS XPCOM component can ask for a worker that has Components exposed,
> and then use postMessage and such.

Sounds like a reasonable plan to me.

-Boris

Igor Bukanov

Feb 8, 2010, 5:23:31 PM
to dev-tech-...@lists.mozilla.org
On 8 February 2010 23:19, Jason Orendorff <joren...@mozilla.com> wrote:
> But there are bugs:
>
>  - Dense arrays are not thread-safe (bug 419537).
>
>  - Generators, iterators, and E4X objects are not thread-safe
>    (bug 349263).
>
>  - JIT code is not thread-safe.
>
>  - Even if it were, objects shared across threads would inhibit the
>    JIT.

These bugs can be fixed using the existing model; see bug 542826. As
such, the above problems cannot be used to advocate JS_THREADSAFE
changes. Also, the performance impact of JS_THREADSAFE can be made
smaller.

The real problem IMO with JS_THREADSAFE is the extra code complexity
and maintenance burden, which is disproportionate to the usage of the
feature. So indeed we should try to replace JS_THREADSAFE with
something that has a lesser impact on the code while still supporting
the real use-cases.

> The changes we're considering are:
>
> 1.  Make native objects single-thread-only by default. This means
>    sharing ordinary objects across threads would be forbidden.
>
>    We would add an API to create a multithread-safe object. There are
>    two possible implementations: (a) create both the desired object and
>    a wrapper object with non-native ops; (b) create the object and then
>    push non-native ops down on top of it, using a new "stackable ops"
>    protocol as discussed in bug 408416.

Why is the wrapper necessary? I.e., why not simply have a special
thread-safe object?

Jason Orendorff

Feb 8, 2010, 5:28:33 PM

If anyone is doing this currently, I would not be surprised to find it
causing weirdness or crashes. Sharing globals across threads is not
something we support well (or scrupulously avoid mucking up) in the JIT.

-j

Jason Orendorff

Feb 8, 2010, 5:40:53 PM
to Igor Bukanov, dev-tech-...@lists.mozilla.org
On 02/08/2010 04:23 PM, Igor Bukanov wrote:
> On 8 February 2010 23:19, Jason Orendorff<joren...@mozilla.com> wrote:
>> But there are bugs:
>>
>> - Dense arrays are not thread-safe (bug 419537).
>>
>> - Generators, iterators, and E4X objects are not thread-safe
>> (bug 349263).
>>
>> - JIT code is not thread-safe.
>>
>> - Even if it were, objects shared across threads would inhibit the
>> JIT.
>
> These bugs can be fixed using the existing model; see bug 542826. As
> such, the above problems cannot be used to advocate JS_THREADSAFE
> changes.

I think we only disagree a little. All these bugs are longstanding.
Generators have been unsafe for the entire time I've been at Mozilla. So
while it's true that they could all be fixed, it hasn't actually
happened, partly because the existing model is so expensive to maintain,
as you also point out.

> The real problem IMO with JS_THREADSAFE is the extra code complexity
> and maintenance burden, which is disproportionate to the usage of the
> feature. So indeed we should try to replace JS_THREADSAFE with
> something that has a lesser impact on the code while still supporting
> the real use-cases.

I completely agree with that, though I don't consider them to be two
unrelated, or even particularly different, points.

-j

Igor Bukanov

Feb 8, 2010, 6:29:03 PM
to Jason Orendorff, dev-tech-...@lists.mozilla.org
On 9 February 2010 01:40, Jason Orendorff <joren...@mozilla.com> wrote:
> I completely agree with that, though I don't consider them to be two
> unrelated, or even particularly different, points.

Suppose an object that can be shared across threads is provided. The
question is how usable it would be. Surely the operations on the
object would be atomic, but the application would have no way to
control the order of those operations. For that, extra locking is
required.

So perhaps instead of thread-safe objects we should focus on an API
that provides a useful way to communicate between and synchronize
threads? For example, providing an explicit lock API, such as a lock
object that an application can use to implement serialized access, is
more useful IMO than having a thread-safe JSObjectOps implementation.
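
As a rough illustration (the names ScriptLock, script_lock_acquire are
invented; no such API exists today), such a lock object would just wrap
an NSPR lock, with the engine or embedding exposing acquire()/release()
to scripts as methods:

#include "prlock.h"
#include "prthread.h"

typedef struct ScriptLock {
    PRLock   *lock;
    PRThread *owner;   /* debug aid: which thread currently holds the lock */
} ScriptLock;

/* Native backing for a hypothetical lock.acquire() method. */
static void
script_lock_acquire(ScriptLock *sl)
{
    PR_Lock(sl->lock);
    sl->owner = PR_GetCurrentThread();
}

/* Native backing for a hypothetical lock.release() method. */
static void
script_lock_release(ScriptLock *sl)
{
    sl->owner = NULL;
    PR_Unlock(sl->lock);
}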

johnjbarton

Feb 8, 2010, 9:01:55 PM
On 2/8/2010 12:19 PM, Jason Orendorff wrote:
> This post is about changing the rules about sharing objects in
> JS_THREADSAFE builds of SpiderMonkey. Feedback is desired,...

I would like to learn how this relates to:
1) Firefox's use of SpiderMonkey, (ie not at all or ?)
2) The JS 'context'. (ie one thread / context or unrelated)
jjb

itroot

Feb 9, 2010, 5:50:40 AM

I've used SpiderMonkey in three ways:

1. One runtime, and a context per thread, with JS_THREADSAFE on.
There was (and still is) a lot of internal locking, so simultaneous
script execution was extremely slow.

2. A runtime and context per thread, with JS_THREADSAFE on.
There is still some locking inside, but not as much, and it runs
well. However, I now have a garbage collector thread per runtime, and
each runtime keeps its own precompiled scripts, so memory consumption
grows. (See the sketch after this list.)

3. A runtime and context per thread, without JS_THREADSAFE.
It runs fast, but helgrind tells me about a lot of problems, so I
didn't use it for long.
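
Roughly, mode 2 boils down to the sketch below: each thread sets up and
tears down its own runtime and context. This follows the usual JSAPI
embedding pattern of this era; exact signatures differ between
SpiderMonkey releases and error checking is omitted, so treat it as
illustrative only.

#include <string.h>
#include "jsapi.h"

static JSClass global_class = {
    "global", JSCLASS_GLOBAL_FLAGS,
    JS_PropertyStub, JS_PropertyStub, JS_PropertyStub, JS_PropertyStub,
    JS_EnumerateStub, JS_ResolveStub, JS_ConvertStub, JS_FinalizeStub,
    JSCLASS_NO_OPTIONAL_MEMBERS
};

/* Body of one worker thread: a private runtime and context, sharing
 * nothing with other threads, so no cross-thread locking is needed. */
static void
run_script_in_private_runtime(const char *src)
{
    JSRuntime *rt = JS_NewRuntime(8L * 1024L * 1024L);  /* per-thread heap */
    JSContext *cx = JS_NewContext(rt, 8192);
    JSObject *global = JS_NewObject(cx, &global_class, NULL, NULL);
    jsval rval;

    JS_InitStandardClasses(cx, global);
    JS_EvaluateScript(cx, global, src, strlen(src), "worker.js", 1, &rval);

    JS_DestroyContext(cx);
    JS_DestroyRuntime(rt);
}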


Now, if we want to reject JS_THREADSAFE, I would also prefer the third
variant, because if we have one runtime and multiple contexts, there
will be a lot of locking in the runtime during GC runs (as there is
now), unless there is an architectural change adding separate areas in
the runtime for GC, as in malloc.

I want only one thing from SpiderMonkey's multithread support: that it
give me maximum performance when running multithreaded, and have no
internal locking. I can hardly imagine needing to share a JS object
between threads. If I ever want that, I can do it myself without API
support (I think).

Thanks.

Igor Bukanov

Feb 9, 2010, 8:34:23 AM
to dev-tech-...@lists.mozilla.org
On 9 February 2010 01:40, Jason Orendorff <joren...@mozilla.com> wrote:
>> The real problem IMO with JS_THREADSAFE is the extra code complexity
>> and maintenance burden, which is disproportionate to the usage of the
>> feature. So indeed we should try to replace JS_THREADSAFE with
>> something that has a lesser impact on the code while still supporting
>> the real use-cases.
>
> I completely agree with that, though I don't consider them to be two
> unrelated, or even particularly different, points.

The performance impact of JS_THREADSAFE is greatly overstated. I have
compared the difference between multi- and single-threaded builds of
SpiderMonkey in SunSpider with the JIT off. On 64-bit Linux,
!JS_THREADSAFE is just 3-4% faster.

To investigate further what contributes most to the slowdown, I made a
quick hack to disable object locking and GC/runtime locking
separately. The former made things faster by 1-1.5%, while the latter
contributed 2-2.5%. Of course, we do not know how JS_THREADSAFE would
affect the JIT, since we do not have a thread-safe JIT. But I do not
see a technical reason why a thread-safe JIT would be noticeably
slower if done properly.

So for me the problem with JS_THREADSAFE is the extra source
complexity, with easy-to-make and hard-to-detect, potentially
exploitable bugs. Object locking is particularly bad in that regard.

Now, what is the benefit of object locking in practice, assuming
that the thread-safety bugs can be fixed? AFAICS the main advantage is
the ability to run *untrusted* scripts safely on multiple threads with
shared objects.

If we disable object locking, the engine loses that ability. IMO that
is fine, since embeddings that need thread-shared objects control
the scripts they execute and can afford to rely on a special API and,
perhaps, extra debug-only checks from the engine.

Which suggests the following way to proceed. We provide an extra API
that scripts can use to communicate across threads and then simply
replace JS_LOCK_OBJ and friends with debug-only asserts to prevent
single-threaded objects from being accessed from other threads.

Then, with the API in place, we can consider further changes like
thread-local property trees etc.
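
As a rough illustration of what the debug-only asserts could look like
(the names below are invented, this is not a patch): each object would
carry a note of the thread it belongs to, and the places that take
JS_LOCK_OBJ today would instead just check it.

#include <assert.h>
#include "prthread.h"

/* Imagined per-object bookkeeping: the thread the object belongs to. */
typedef struct ObjectThreadTag {
    PRThread *owner;
} ObjectThreadTag;

/* Where JS_LOCK_OBJ(cx, obj) is taken today, a debug build would instead
 * assert; release builds compile the check away entirely. */
static void
assert_single_threaded_access(ObjectThreadTag *tag)
{
#ifdef DEBUG
    assert(tag->owner == PR_GetCurrentThread());
#else
    (void) tag;
#endif
}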

Jason Orendorff

Feb 9, 2010, 12:24:26 PM
to Igor Bukanov
On 02/08/2010 04:23 PM, Igor Bukanov wrote:
> Why is the wrapper necessary? I.e., why not simply have a special
> thread-safe object?

This way we could support creating MT objects of any JSClass.

But the idea of doing less is very appealing to me.

-j

Jason Orendorff

Feb 9, 2010, 2:50:39 PM
On 02/08/2010 08:01 PM, johnjbarton wrote:
> On 2/8/2010 12:19 PM, Jason Orendorff wrote:
>> This post is about changing the rules about sharing objects in
>> JS_THREADSAFE builds of SpiderMonkey. Feedback is desired,...
>
> I would like to learn how this relates to:
> 1) Firefox's use of SpiderMonkey, (ie not at all or ?)

Here's my current best guess.

The move to multiple GC heaps is significant. Code in nsJSEnvironment,
XPConnect, maybe jsd, and definitely the cycle collector will have to be
adjusted. We'll take care of that as we go.

Currently the JSAPI supports passing almost any JSObject* anywhere
within a JSRuntime. This will change--it won't be memory-safe to do that
anymore, across heaps. But Firefox already doesn't do this in many
places, so I think we'll just fix those up as we go.

Currently C++ code passing JSObjects around must worry about punching a
hole in our security membranes. Fortunately, only a few places in the
codebase actually have to worry about it. This proposal would change the
places where you can't pass JSObjects, the reasons why not, and the
workaround techniques -- but I expect the number of danger spots will
remain small. You might not even notice.

> 2) The JS 'context'. (ie one thread / context or unrelated)

This won't be affected.

-j

Jason Orendorff

Feb 9, 2010, 3:16:21 PM
to Igor Bukanov, dev-tech-...@lists.mozilla.org
On 02/09/2010 07:34 AM, Igor Bukanov wrote:
> So for me the problem with JS_THREADSAFE is the extra source
> complexity, with easy-to-make and hard-to-detect, potentially
> exploitable bugs. Object locking is particularly bad in that regard.

I agree.

> If we disable object locking, the engine loses that ability. IMO that
> is fine, since embeddings that need thread-shared objects control
> the scripts they execute and can afford to rely on a special API and,
> perhaps, extra debug-only checks from the engine.
>
> Which suggests the following way to proceed. We provide an extra API
> that scripts can use to communicate across threads and then simply
> replace JS_LOCK_OBJ and friends with debug-only asserts to prevent
> single-threaded objects from being accessed from other threads.
>
> Then, with the API in place, we can consider further changes like
> thread-local property trees etc.

Well - I like the way this is going.

bz and shaver already suggested using postMessage to communicate between
universes. This seems like the way forward.

-j

johnjbarton

Feb 9, 2010, 3:18:24 PM

Sorry, I don't understand. If all of the things above are affected, how
can the relation of the JS context and thread not be affected?

Currently, as I understand it, Firefox and other xpconnect-based apps
have one 'components' context, one or more module contexts, and one
context for every nsIDOMWindow. I think they all share one GC heap (?)
but there are multiple threads (whose relation to contexts is unclear to
me). One thread crosses multiple contexts. So who owns the GC heap: the
thread or the context?


jjb
>
> -j

Igor Bukanov

Feb 9, 2010, 4:37:30 PM
to dev-tech-...@lists.mozilla.org
> bz and shaver already suggested using postMessage to communicate between
> universes. This seems like the way forward.

I like the idea of a pure message-passing API that shares nothing,
like thread workers. It would clearly allow thread-local heaps in the
future.

Jason Orendorff

Feb 9, 2010, 6:36:48 PM
On 02/09/2010 02:16 PM, Jason Orendorff wrote:
> On 02/09/2010 07:34 AM, Igor Bukanov wrote:
>> If we disable object locking, the engine loses that ability. IMO that
>> is fine, since embeddings that need thread-shared objects control
>> the scripts they execute and can afford to rely on a special API and,
>> perhaps, extra debug-only checks from the engine.
>>
>> Which suggests the following way to proceed. We provide an extra API
>> that scripts can use to communicate across threads and then simply
>> replace JS_LOCK_OBJ and friends with debug-only asserts to prevent
>> single-threaded objects from being accessed from other threads.

On IRC, Brendan's response to this idea (a few days ago, before this
thread started) was:

<brendan> first, we have issues even on the main thread with ownercx
handoff
<brendan> we should stop all that, per the plan of record
<brendan> but the plan of record is not just "-UJSTHREADSAFE"
<brendan> because that will break our own code: workers, PAC, add-ons
and plugins

But I think workers will require fixups anyway if we're going to put
them in separate per-thread heaps. Add-ons will require fixups anyway if
we're going to ban sharing functions or global objects across threads.
Plugins will require fixups too, given the changes we're making to the
API (in particular, making objects ST by default).

-j

Igor Bukanov

Feb 10, 2010, 3:11:44 AM
to dev-tech-...@lists.mozilla.org
On 10 February 2010 02:36, Jason Orendorff <joren...@mozilla.com> wrote:
>  <brendan> first, we have issues even on the main thread with ownercx
>            handoff
>  <brendan> we should stop all that, per the plan of record
>  <brendan> but the plan of record is not just "-UJSTHREADSAFE"
>  <brendan> because that will break our own code: workers, PAC, add-ons
>            and plugins

Just to be clear, as a first stage I would like to see not
-UJSTHREADSAFE, but rather just replacing JS_LOCK_OBJ and friends with
asserts that objects are accessed only from a single thread, plus
adding an inter-thread communication API. This way we acknowledge
reality, especially with the JIT on.

As regards further work like thread-local heaps or property trees, the
benefits are not clear. Currently we have a global property tree and
GC heap that work with the JIT and, AFAIK, we do not have
long-standing bugs in that area. Moreover, with the JIT on, SunSpider
shows that the single-threaded shell is just 5% faster on 64-bit
Linux, with most of the slowdown coming from the GC locks in js_GC(),
not the locks in the property tree implementation. The difference can
be reduced.

So the benefits may come from simpler code and potentially better
thread locality. Yet single-threaded heaps would imply a shift of code
complexity into the browser, which would need to manage multiple
heaps. And thread-local property trees would increase memory usage.
That may lose performance.

Jason Orendorff

Feb 10, 2010, 12:49:35 PM
On 02/10/2010 02:11 AM, Igor Bukanov wrote:
> On 10 February 2010 02:36, Jason Orendorff<joren...@mozilla.com> wrote:
>> <brendan> first, we have issues even on the main thread with ownercx
>> handoff
>> <brendan> we should stop all that, per the plan of record
>> <brendan> but the plan of record is not just "-UJSTHREADSAFE"
>> <brendan> because that will break our own code: workers, PAC, add-ons
>> and plugins
>
> Just to be clear, as a first stage I would like to see not
> -UJSTHREADSAFE, but rather just replacing JS_LOCK_OBJ and friends with
> asserts that objects are accessed only from a single thread, plus
> adding an inter-thread communication API. This way we acknowledge
> reality, especially with the JIT on.

This sounds like a plan.

Up to now, there has been no fixed association between an object and
the thread or context in which it was created. Even the association
between a context and its thread is not fixed. We have to change the
rules.

The simplest possible rule is: just restrict every object to the thread
in which it's created. But this is too restrictive. Worker events could
not just be grabbed and processed by the first available thread, the
way they are now. Each worker would be bound to a single thread.

Instead we could partition objects by global (scope chain), with these
rules: Workers can hop between threads, but cannot share objects with
other globals; all other objects are restricted to the thread where
they were created, but can share objects with the other non-worker
globals on that thread.
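
Spelled out as a check, purely for illustration (the struct and field
names are made up), the rule amounts to something like this:

#include <stdbool.h>
#include "prthread.h"

/* Imagined per-global bookkeeping for the partition rule above. */
typedef struct GlobalPartition {
    bool      is_worker;      /* worker globals may hop between threads       */
    PRThread *pinned_thread;  /* non-worker globals stay on their home thread */
} GlobalPartition;

/* May objects belonging to this global be touched on the calling thread?
 * Workers: yes, provided the worker scheduler runs each worker on only
 * one thread at a time. Everything else: only the thread of creation. */
static bool
can_access_on_current_thread(const GlobalPartition *g)
{
    if (g->is_worker)
        return true;
    return g->pinned_thread == PR_GetCurrentThread();
}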

This much can be done with no new wrappers. Once this is in, if we
decide to implement separate heaps per-tab, we can partition further
and add wrappers.

How does this sound?

-j

Jason Orendorff

Feb 10, 2010, 12:57:12 PM
On 02/10/2010 02:11 AM, Igor Bukanov wrote:
> As regards further work like thread-local heaps or property trees, the
> benefits are not clear.

The idea is not to have a thread-local heap, but rather to have separate
heaps for each mostly-isolated collection of objects, perhaps one for
each worker and one for each domain.

Aside from locking overhead, this should reduce GC pause times, e.g.
when one window is creating a lot of garbage while another has a lot of
long-lived objects.

Can the same goal be achieved some other way?

-j

johnjbarton

Feb 10, 2010, 1:19:45 PM
On 2/10/2010 9:49 AM, Jason Orendorff wrote:
...

> Instead we could partition objects by global (scope chain), with these
> rules: Workers can hop between threads, but cannot share objects with
> other globals; all other objects are restricted to the thread where
> they were created, but can share objects with the other non-worker
> globals on that thread.
...

Is this the same as now? Workers can run on non-UI threads because they
don't share objects with the UI. Everything else is single threaded as
far as JS goes.

Extension code by design operates on objects created in other scope
chains. Any thread running extension code will commonly operate on
objects created by another thread unless they are all actually created
by one thread.

I guess with one GC heap per context you can lock per context allowing
all web domains to compete for CPU. But then browser code/ extension
code will block until the web domain thread releases the lock.

Sorry if this is nonsense, I'm seeing this all strictly from the JS side.

jjb

Jason Orendorff

Feb 10, 2010, 2:48:45 PM
On 02/10/2010 12:19 PM, johnjbarton wrote:
> On 2/10/2010 9:49 AM, Jason Orendorff wrote:
> ...
>> Instead we could partition objects by global (scope chain), with these
>> rules: Workers can hop between threads, but cannot share objects with
>> other globals; all other objects are restricted to the thread where
>> they were created, but can share objects with the other non-worker
>> globals on that thread.
> ...
>
> Is this the same as now? Workers can run on non-UI threads because they
> don't share objects with the UI. Everything else is single threaded as
> far as JS goes.

Right. It's only different from now inasmuch as there are corners where
these proposed rules aren't being followed!

> Extension code by design operates on objects created in other scope
> chains. Any thread running extension code will commonly operate on
> objects created by another thread unless they are all actually created
> by one thread.

Right again -- this is one of the corners.

Shaver proposed migrating extensions to a postMessage-based model.
Sounds good to me. Extensions doing this today are playing with fire.
It's pretty easy to get the JS engine to crash that way, via known bugs.

> Sorry if this is nonsense, I'm seeing this all strictly from the JS side.

Not nonsense. The Gecko side of things is what I know least about, so if
you see potential problems, please do point them out.

-j

Jason Orendorff

Feb 10, 2010, 3:44:11 PM
On 02/09/2010 02:18 PM, johnjbarton wrote:
> On 2/9/2010 11:50 AM, Jason Orendorff wrote:
>> On 02/08/2010 08:01 PM, johnjbarton wrote:
>>> 2) The JS 'context'. (ie one thread / context or unrelated)
>>
>> This won't be affected.
>
> Sorry, I don't understand. If all of the things above are affected, how
> can the relation of the JS context and thread not be affected?

The rules I want to impose apply to what objects can be accessed on what
threads. The JSContext is orthogonal to that, as JSContexts can be
handed off among threads and can happily operate on objects regardless
of what thread they came from.

-j

Igor Bukanov

Feb 10, 2010, 3:55:49 PM
to dev-tech-...@lists.mozilla.org
On 10 February 2010 22:48, Jason Orendorff <joren...@mozilla.com> wrote:
> Shaver proposed migrating extensions to a postMessage-based model.
> Sounds good to me.

postMessage is browser-specific. I think it would be nice if
SpiderMonkey provides a version of it that other embeddings (like js
test shell) can use as is. Something like mini-webworkers.
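
For the shell this could be completely shared-nothing: each thread keeps
its own JSRuntime, and the only thing that ever crosses threads is a
copied string (say, JSON). A rough sketch of such a channel, built on
NSPR rather than on any existing JSAPI (everything below is hypothetical
embedding code):

#include <stdlib.h>
#include <string.h>
#include "prlock.h"
#include "prcvar.h"

typedef struct MessageChannel {
    PRLock    *lock;       /* created with PR_NewLock() elsewhere        */
    PRCondVar *nonempty;   /* created with PR_NewCondVar(lock) elsewhere */
    char      *pending;    /* single-slot queue: one serialized message  */
} MessageChannel;

/* postMessage side: copy the payload so neither runtime ever sees the
 * other's memory; each GC heap stays private to its thread. */
static void
channel_post(MessageChannel *ch, const char *payload)
{
    char *copy = strdup(payload);
    PR_Lock(ch->lock);
    free(ch->pending);            /* drop an unread message, keep it simple */
    ch->pending = copy;
    PR_NotifyCondVar(ch->nonempty);
    PR_Unlock(ch->lock);
}

/* Receiving side: block until a message arrives, then parse it into
 * objects owned by this thread's own runtime. The caller frees the string. */
static char *
channel_wait(MessageChannel *ch)
{
    char *msg;
    PR_Lock(ch->lock);
    while (!ch->pending)
        PR_WaitCondVar(ch->nonempty, PR_INTERVAL_NO_TIMEOUT);
    msg = ch->pending;
    ch->pending = NULL;
    PR_Unlock(ch->lock);
    return msg;
}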

Mike Shaver

Feb 10, 2010, 3:58:11 PM
to Igor Bukanov, dev-tech-...@lists.mozilla.org
On Wed, Feb 10, 2010 at 3:55 PM, Igor Bukanov <ig...@mir2.org> wrote:
> On 10 February 2010 22:48, Jason Orendorff <joren...@mozilla.com> wrote:
>> Shaver proposed migrating extensions to a postMessage-based model.
>> Sounds good to me.
>
> postMessage is browser-specific. I think it would be nice if
> SpiderMonkey provides a version of it that other embeddings (like js
> test shell) can use as is. Something like mini-webworkers.

We could replace scatter in the shell with a createWorker/postMessage
sort of thing, except that we don't really have an event loop on the
main thread of the shell. Not sure it's worth it, but it's hard to
argue against "would be nice"!

Mike

Brendan Eich

Feb 10, 2010, 4:56:32 PM
On Feb 9, 5:34 am, Igor Bukanov <i...@mir2.org> wrote:
> The performance impact of JS_THREADSAFE is greatly overstated. I have
> compared the difference between multi- and single-threaded builds of
> SpiderMonkey in SunSpider with the JIT off. On 64-bit Linux,
> !JS_THREADSAFE is just 3-4% faster.

Catching up on the thread. Good to get numbers, including the object-
vs. gc-locking breakdown. My point here is only to say 3-4% is not
trivial -- every perf win counts, we didn't get here by ignoring 3-4%
and we won't scale the rest of the competitive mountain that way
either.

JS_THREADSAFE was a mid-late-90s invention of Jawahar Malholtra and
myself, with Bjorn Carlsen helping on the compare-and-swap Bacon-bit
stuff. It reflected the multi-threaded appserver/webserver
architecture of the day, which had too much sharing by default, but of
course not shared globals or mutable built-ins -- more shared
"server", "session", "database", and "application" objects with
getters and setters. This is still somewhat popular but it definitely
overstates the degree to which sharing, especially of mutables,
matters.

So I've been urging you, Jason, and everyone whose ear I can tug to
move away from the cost (even if "low") imposed on all objects just
for the few shared ones. Good to see no one violently dissenting in
this thread; I will continue catching up.

/be

Brendan Eich

Feb 10, 2010, 5:04:07 PM
On Feb 10, 12:55 pm, Igor Bukanov <i...@mir2.org> wrote:

> On 10 February 2010 22:48, Jason Orendorff <jorendo...@mozilla.com> wrote:
>
> > Shaver proposed migrating extensions to a postMessage-based model.
> > Sounds good to me.
>
> postMessage is browser-specific. I think it would be nice if
> SpiderMonkey provides a version of it that other embeddings (like js
> test shell) can use as is. Something like mini-webworkers.

While extensions may need to use message passing to interact with
content, we should make sure not to overdesign for this. It's quite a
different model from today's one; heavy porting tax for much of
http://addons.mozilla.org.

I do not agree that we should break extensions using the JS API from
other threads and see who complains, then consider fixes. We have an
API that "works" except for some (fairly big) corner cases. But if
those are not issues for an add-on or other consumer of our platform,
they do not perceive the problem (it is a problem, for sure; even for
them, down the road or with one false move).

Platforms mean keeping commitments where possible, using carrots as
well as sticks to move clients to better commitments over time. I'd
rather see a way for cross-thread JS API users to keep "working" if we
can do it. Hence the wrapper idea, which could possibly be automated,
as we automate creation of cross-origin wrappers, XPCNativeWrappers,
and all those other mrbkap wrappers.

/be

Brendan Eich

Feb 10, 2010, 5:14:10 PM

Not as stated!

Property tree sharing could be a trivial amount of memory, have to
measure. It's not obviously a big deal on desktops, and self-limiting
on mobile (how many Fennec tabs do you have open?).

I believe local heaps, one per "window clique", is really the way to
go. All our competition does something like this and they win big on
pause time, also on avoiding GC locking overhead (cf. v8's
generational single-threaded bump allocator).

You can reinvent this wheel with a shared-across-threads heap and lock-
free single-threaded optimizations on top of it, but that's what we're
talking about. We do not want the lock-free optimizations to be code-
only (i.e., CAS or other barrier-based instructions on top of shared
coherent memory). We really do want lock-free thread-local data too.

/be

John J Barton

Feb 10, 2010, 8:20:35 PM
Jason Orendorff wrote:
> On 02/10/2010 12:19 PM, johnjbarton wrote:
...

>> Extension code by design operates on objects created in other scope
>> chains. Any thread running extension code will commonly operate on
>> objects created by another thread unless they are all actually created
>> by one thread.
>
> Right again -- this is one of the corners.
>
> Shaver proposed migrating extensions to a postMessage-based model.

I think this would make more sense if it was expressed as migrating the
runtime to a model of js contexts connected by postMessages and
supporting extensions.

Extensions have some app-level code, some XUL window code, and some Web
page code (mostly operating on the page but some operating in the page).
The major problem with extensions is the mix of the latter two. Most
extensions would be much easier to understand if these three were
separate and connected with message passing.

> Sounds good to me. Extensions doing this today are playing with fire.
> It's pretty easy to get the JS engine to crash that way, via known bugs.

Sorry, this is not a great motivator for your proposal, since as a
practical matter we don't crash. Or when we do, someone bails us out.

A better carrot would be performance and responsiveness. That is why the
design of the boundaries of the message passing solution is important.
Leaving the web-page operating code in the XUL window context and
remoting it with message passing will (I imagine) cause extensions to be
slow and blocked. Otherwise you can't achieve anything with the object
partition.

To try to say this another way, we need to partition the extension
objects also, and group them with the things extended. Does this make sense?

jjb

Mike Moening

Mar 2, 2010, 5:51:24 PM
> bz and shaver already suggested using postMessage to communicate between
> universes. This seems like the way forward.

Please no!!! I'd rather have the ability to explicitly "lock" an object
than something messy like that.
postMessage to whom?
Each thread doesn't even know the other ones exist.

Threads are operating on the same global and have no idea others exist.

Mike Moening

Mar 2, 2010, 6:04:00 PM
> I want only one thing from SpiderMonkey's multithread support: that it
> give me maximum performance when running multithreaded, and have no
> internal locking. I can hardly imagine needing to share a JS object
> between threads. If I ever want that, I can do it myself without API
> support (I think).

We just shared objects all the time... between threads.
Although I'm not against having an explicit "lock" method for an object.


gurkhali

Mar 23, 2010, 5:00:08 PM
Sorry to join this party a little late, but I had a couple of requests
which I *think* are addressed but want to make sure anyway. Does this
change affect designs that *don't* share data/objects across threads?
In other words, for embedders that create one or more self-contained,
'thread-agnostic' RT instances on threads whose scripts execute
separately, will any of these changes pose problems? Generally speaking
I would not expect any issues as long as the RT instances are not using
globals, static variables, TLS storage, or anything else that can lead
to concurrency issues. My understanding is that, with the exception of
some code in NSPR, an RT instance is currently pretty self-contained.
Just want to make sure it will stay that way.

Bhushan
