Out of memory (malloc returns null) handling policy


Brendan Eich

Jun 14, 2009, 6:42:53 PM
Going back 14 years, SpiderMonkey and its "Mocha" progenitor tried to
propagate a null return from malloc, after promptly reporting the
failure, as a hard-stop error. This was required on some of Netscape's
target platforms of the day (probably most; this was in the
Win3.1/Mac OS 8 days). It has generally been necessary on small-device
OSes too. On fat desktop OSes it may win or lose, depending on
allocation size, free VM, and OS VM policy.

With C++ exceptions we could move the (RAII-implemented and
static-analysis-checked) recovery code off the hot control-flow paths,
but that is a long road (I've blogged about it under the Mozilla 2
post). In the meantime, we have a longstanding policy of checking for
OOM, reporting it, and propagating it cleanly to stop the running
script or event handler function in its tracks.
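
For readers who haven't lived in the engine, here is a minimal sketch
of what "check, report, propagate" looks like in practice. The vector
type and helper are hypothetical, not engine code; only the JSBool /
JS_ReportOutOfMemory conventions come from the real API:

#include <stdlib.h>
#include <string.h>
#include "jsapi.h"

/* Hypothetical helper: grow a vector of jsvals to newLength slots. */
typedef struct MyVector { jsval *elems; size_t length; } MyVector;

static JSBool
GrowVector(JSContext *cx, MyVector *vec, size_t newLength)
{
    jsval *elems = (jsval *) malloc(newLength * sizeof(jsval));
    if (!elems) {
        JS_ReportOutOfMemory(cx);   /* report the failure promptly... */
        return JS_FALSE;            /* ...then propagate it to the caller */
    }
    memcpy(elems, vec->elems, vec->length * sizeof(jsval));
    free(vec->elems);
    vec->elems = elems;
    vec->length = newLength;        /* assumes newLength >= old length */
    return JS_TRUE;                 /* callers must check this result too */
}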

This issue has come up in bugs over time:

https://bugzilla.mozilla.org/show_bug.cgi?id=383932 (where I am very
delinquent in reviewing)
https://bugzilla.mozilla.org/show_bug.cgi?id=384809 (fixes to some OOM
handling bugs)
https://bugzilla.mozilla.org/show_bug.cgi?id=471528 (worth a read to
catch up on discussion)

and now in an old bug about optimizing poorly performing Array join/
toString code:

https://bugzilla.mozilla.org/show_bug.cgi?id=200505

where the idea of safely crashing a la WebKit/JavaScriptCore is
suggested. But see this bug for the counter-arguments, and try

javascript:var s=Array(1e6).join();var a=[];for(var i=0;i!=2000;++i)
a.push(s);a.toString();

in your favorite browser (Firefox 3.6pre recovers nicely).

Gavin Reaney, working for picsel.com, and others I'm forgetting now
(possibly the folks at tellme.com), have tested SpiderMonkey with good
code coverage while injecting random malloc failures. We consider
crashes on a null return from malloc or equivalent to be bugs, and fix
them. We can and should IMO continue to do this, and automate the
fault-injector as part of our continuous build/test infrastructure.
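
The injector itself need not be fancy. Something along these lines
(hypothetical names), substituted for malloc in a test build, is
enough to exercise the OOM paths:

#include <stdlib.h>

/* Fail roughly one allocation in fail_rate; 0 disables injection. */
static unsigned int fail_rate = 0;

void *
test_malloc(size_t nbytes)
{
    if (fail_rate != 0 && rand() % fail_rate == 0)
        return NULL;                /* simulated OOM */
    return malloc(nbytes);
}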

C++ exceptions are a fair way out, from what Taras has said. The issue
is whether we can do better than malloc return value null testing and
OOM reporting/propagation, covered by such testing. I have argued that
we can't in the short term.

To take the other side of the argument from the one I took in bug
200505, consider this:

What if we distinguish "large" from "small" allocations, based on the
premise that malloc returning null for a "small" request size means
the OS has thrashed you to death already, so a clean, unexploitable
crash is more merciful.

(Similar large vs. small distinctions inform safe stack growth
algorithms used by compilers: small sp adjustments will hit a real or
virtual redzone and crash or fail-stop; large ones could cross a
segment boundary in process and cause exploitable trouble.)

This could work for reasonably common fixed-size allocations (struct/
class instances, tuples, but not potentially or definitely big
arrays), on OSes that didn't fail "small" size mallocs regularly. It
would be much more pleasant for people coding in the JS VM, but they
already have to be very tall to ride that roller coaster. More
valuably, it would save code footprint and relieve us of some test
coverage.
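
Concretely, the split would amount to something like the following
sketch; the 4 KB cutoff is purely illustrative, not a proposal:

#include <stdio.h>
#include <stdlib.h>

#define SMALL_ALLOC_MAX 4096        /* illustrative cutoff */

void *
engine_malloc(size_t nbytes)
{
    void *p = malloc(nbytes);
    if (!p && nbytes <= SMALL_ALLOC_MAX) {
        /* A small request failed: assume the OS has already thrashed us
           to death, and fail-stop cleanly rather than try to recover. */
        fputs("out of memory (small allocation)\n", stderr);
        abort();
    }
    return p;   /* large requests still return null for callers to check */
}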

But AFAIK, for Firefox on mobile we are targeting OSes that can fail
even "small" mallocs, where recovering instead of crashing is better
for the user than a safe, fast crash. In any case, some SpiderMonkey
embedders such as picsel.com target such systems. I welcome more
information and corrections on this point.

Assuming that the above holds, we'd have to #ifdef, or drop support
for such OSes. It won't do to hope that such crashes would be rare (we
don't get to control rarity; web content authors and even users do).

For now I think we are better off continuing to work on OOM reporting
and propagation. We have taken some regressions from this policy in js/
src/nanojit and js/src/jstracer.cpp in the latest cycle (JS1.8.1), but
that is all fixable. Comments welcome.

/be

Brendan Eich

Jun 14, 2009, 7:25:45 PM
On Jun 14, 3:42 pm, Brendan Eich <bren...@mozilla.org> wrote:
> What if we distinguish "large" from "small" allocations, based on the
> premise that malloc returning null for a "small" request size means
> the OS has thrashed you to death already, so a clean, unexploitable
> crash is more merciful.

Large vs. small is a distinction without a difference if a large JS-
level action such as a big join or array push loop ends up making lots
of small allocations, only ever small ones -- or many small ones
before a large one that could be judged to need null testing of the
malloc or new return value.

> (Similar large vs. small distinctions inform safe stack growth
> algorithms used by compilers: small sp adjustments will hit a real or
> virtual redzone and crash or fail-stop; large ones could cross a
> segment boundary in process and cause exploitable trouble.)

This comparison highlights the problem. We have infallible storage
class "auto" (in C and C++) stack allocation, but not safe alloca for
arbitrary size. We do not get a free lunch even with the stack, which
programmers often assume is allocated infallibly. The heap lunch tax
is even higher.

So if my small vs. large distinction doesn't help (enough, or at all),
we're back to depending on target-OS support and on policies for
denial of service -- or its equivalent, runaway scripts that web
developers are debugging.

/be

Gavin

Jun 14, 2009, 11:48:39 PM
SM's current OOM handling strategy is something that makes it
attractive to potential embedders over rival offerings, so it would be
a shame to lose this. Consider an embedder that runs SM in a fixed
size heap (Picsel do this in a few cases).

I realise that small allocations can be a problem with some OSes
(thrashing, enforced closure of applications etc). I'd be wary of
requiring an embedding application to close on any OOM failure. Of
course, that is a choice for the embedder, but not something that
should be prescribed by SM (I think).

In Picsel's applications that embed SM, we typically use our own
memory manager, which sits on top of the native heap implementation.
This tends to remove many of the problems of thrashing (we can return
a fake OOM if the OS reports free memory below a certain margin). Of
course, this means it isn't safe to assume that small allocations will
succeed irrespective of the host OS.
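
In outline, such a wrapper is just a margin check in front of malloc;
os_free_memory and the margin value below are stand-ins for whatever
the platform actually provides:

#include <stdlib.h>

#define FREE_MEMORY_MARGIN (256 * 1024)   /* illustrative margin */

extern size_t os_free_memory(void);       /* assumed platform query */

void *
managed_malloc(size_t nbytes)
{
    if (os_free_memory() < nbytes + FREE_MEMORY_MARGIN)
        return NULL;   /* fake OOM: fail the request before the OS thrashes */
    return malloc(nbytes);
}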

Having said all that, Picsel targets platforms where SM isn't
supported out of the box, such as Symbian and BREW - these require
many changes to the engine anyway.

Gavin

Brendan Eich

Jun 15, 2009, 1:00:24 AM
On Jun 14, 8:48 pm, Gavin <gavin.rea...@gmail.com> wrote:
> Having said all that, Picsel targets platforms where SM isn't
> supported out of the box, such as Symbian and BREW - these require
> many changes to the engine anyway.

Thanks for the informative reply. The issue for SpiderMonkey is not
single-headed. Here I'll try to write about several of the Hydra's
heads.

Never mind picsel.com (a patch to restore OOM propagation after such
if tests and early returns had been removed would not be pretty, or
easy to carry): even in Firefox there are too many ways to induce an
OOM, and I see more accidental instances in bugs and on the web than
intentional DOS attacks.

It could be that under a pure, Benthamite utilitarian analysis,
SpiderMonkey maintainers and most embedders would be better off if we
removed the malloc null return tests and the consequent JS_ReportOOM
calls and early returns, and instead focused on other things. One
might imagine that in some absolute, apples-to-oranges, God's-eye view
of global economics, this would be worth the marginal pain of crashes
felt by random users and web developers; and (more important, I
think) worth the lost and alienated embedders and any other blowback
from breaking compatibility on this point.

I distrust any such asserted conclusion, since we'll never have the
data to make that call, or to objectively balance subjective values by
pro-rating various parties' costs and benefits.

Mainly I dislike optimizing only some failure cases (hardly all, since
SpiderMonkey even in C++ can't use exceptions, so it must propagate
other errors as returns of false or null, with cx->exception set).
"Optimizing" this way is not uniform with respect to failure handling,
and it will tend to degrade discipline about checking for non-OOM
failures and propagating them (that's human nature). It may be a false
economy, and it is probably a bad precedent.

Really, we want C++ exceptions, if not an even better implementation
language (memory safety, ahem).

Aborting on OOM may work for some systems and workloads. I'm from the
old Unix days when you could compute max users/procs/ttys/blockdevs
etc. based on timesharing machine setup (physical terminals, etc.),
and panic the kernel if a table filled up.

But the Web has huge dynamic range, and JS in particular is used and
abused in ways that make it hard to justify ever aborting the browser
process. I'm skeptical that CSS and HTML are any easier to constrain
to non-aborting malloc request loads, but with JS there are just too
many ways to hit OOM.

/be

Igor Bukanov

Jun 15, 2009, 7:02:24 AM
to dev-tech-...@lists.mozilla.org
2009/6/15 Brendan Eich <bre...@mozilla.org>:

> I distrust any such asserted conclusion, since we'll never have the
> data to make that call, or to objectively balance subjective values by
> pro-rating various parties' costs and benefits.

While it is true that we cannot know the cost of crashing on OOM, we
could at least get a picture of the potential benefits through a
not-particularly-costly experiment. For that, one can make a patch
that converts SpiderMonkey to an OOM-crashing policy. That patch
cannot just remove null checks: for a realistic experiment it should
also refactor the code to take maximum advantage of infallible malloc,
for example by replacing out-parameters with return values.
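
To make the intended simplification concrete, here is a sketch of the
before/after shape; Thing and infallible_malloc are hypothetical:

#include <stdlib.h>
#include "jsapi.h"

typedef struct Thing { int dummy; } Thing;      /* placeholder type */
extern void *infallible_malloc(size_t nbytes);  /* aborts rather than
                                                   returning null */

/* Before: fallible allocation forces an out-parameter plus a JSBool
   result, and every caller must test both. */
static JSBool
NewThingFallible(JSContext *cx, Thing **thingp)
{
    Thing *thing = (Thing *) malloc(sizeof(Thing));
    if (!thing) {
        JS_ReportOutOfMemory(cx);
        return JS_FALSE;
    }
    *thingp = thing;
    return JS_TRUE;
}

/* After: with infallible malloc the value is simply returned, and
   callers need no null check. */
static Thing *
NewThingInfallible(JSContext *cx)
{
    return (Thing *) infallible_malloc(sizeof(Thing));
}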

With such a patch we could get a clear picture of the performance and
simplicity gains. If they turned out to be minuscule, there would be
no point in discussing this further.

Regards, Igor

Wes Garland

Jun 15, 2009, 9:00:09 AM
to Igor Bukanov, dev-tech-...@lists.mozilla.org
Are there any *reasonable* circumstances where SpiderMonkey can report OOM
when libc malloc wouldn't? Including where jemalloc is in the mix?

Single Data Point -- I live in a typical UNIX environment; my busiest
embedding works like this: NewRuntime, NewContext, CompileScript,
listen/accept/fork, ExecuteScript, OOM ? panic : exit -- so clearly I
personally can tolerate SpiderMonkey crashing nicely on OOM. In fact, it
would reduce LOC count on my embeddings if calls like JS_NewObject() could
be made infallible. And fewer LOC generally means fewer bugs.

On the flip side, you know the case for handling OOM nicely. Would it be
practical and sustainable to introduce a #define CRASH_ON_OOM throughout
SpiderMonkey, and behave accordingly?

Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102

Mike Shaver

Jun 15, 2009, 9:47:56 AM
to Wes Garland, dev-tech-...@lists.mozilla.org, Igor Bukanov
On Mon, Jun 15, 2009 at 9:00 AM, Wes Garland<w...@page.ca> wrote:
> In fact, it
> would reduce LOC count on my embeddings if calls like JS_NewObject() could
> be me made infallible.

I suspect you could make your error reporter notice that it was an OOM
being reported, and abort() there. If needed, we could add some
JSErrorReporter flag to make it easier to detect that case, perhaps?

Mike

Brendan Eich

Jun 15, 2009, 1:15:36 PM

The embedding's custom error reporter receives JSMSG_OUT_OF_MEMORY in
reportp->errorNumber already. You'd have to include js.msg with
appropriate MSG_DEF macrology, or just include jscntxt.h, though, to
get JSMSG_OUT_OF_MEMORY.
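
So an embedding that wants the crash-on-OOM behavior could do roughly
this today. This is a sketch only, and the abort policy is the
embedder's choice, not anything SpiderMonkey prescribes:

#include <stdio.h>
#include <stdlib.h>
#include "jsapi.h"
#include "jscntxt.h"    /* for JSMSG_OUT_OF_MEMORY, per the above */

static void
MyErrorReporter(JSContext *cx, const char *message, JSErrorReport *report)
{
    if (report && report->errorNumber == JSMSG_OUT_OF_MEMORY)
        abort();        /* this embedding chooses a safe crash on OOM */
    fprintf(stderr, "js error: %s\n", message ? message : "(no message)");
}

/* At context setup:  JS_SetErrorReporter(cx, MyErrorReporter); */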

/be

Brendan Eich

Jun 15, 2009, 1:17:56 PM
to jones....@gmail.com
On Jun 15, 4:02 am, Igor Bukanov <i...@mir2.org> wrote:
> 2009/6/15 Brendan Eich <bren...@mozilla.org>:

>
> > I distrust any such asserted conclusion, since we'll never have the
> > data to make that call, or to objectively balance subjective values by
> > pro-rating various parties' costs and benefits.
>
> While it is true that we could not know the cost of crashing on OOM,
> we at least could get a picture of potential benefits through a
> not-particularly-costly experiment. For that one can make a patch that
> converts SpiderMonkey to OOM-crashing policy. That patch cannot just
> remove null checks. For a realistic experiment it should also refactor
> the code to take a maximum advantage of infallible malloc like
> replacing out-parameters with return value etc.

While not particularly costly in deep ways, it would take a
significant effort to produce a correct refactoring patch that reduces
the code based on infallible malloc...

> With such patch we can get a clear picture about performance and
> simplicity gains. If they would be miniscule, there would no point to
> discuss this farther.

... so it sounds like a job for our static analysis tooling.

/be

Brendan Eich

Jun 15, 2009, 1:22:14 PM
On Jun 15, 6:00 am, Wes Garland <w...@page.ca> wrote:
> On the flip side, you know the case for handling OOM nicely.  Would it be
> practical and sustainable to introduce a #define CRASH_ON_OOM throughout
> SpiderMonkey, and behave accordingly?

No, that's a recipe for unhappy core VM developers whose eyes bleed
regularly from the ifdef hell, *and* unhappy embedders who see that
ifdefs mean one side is undertested: the branches not taken in the
most tested embedding (Firefox) fill up with bugs that don't get fixed
soon enough, so picsel.com and others see null-deref crashes, or
nearly-null derefs, or possibly worse, due to the lack of prompt null
testing of malloc's return value.

/be

sayrer

Jun 15, 2009, 5:23:01 PM
On Jun 14, 6:42 pm, Brendan Eich <bren...@mozilla.org> wrote:
>
> where the idea of safely crashing a la WebKit/JavaScriptCore is
> suggested. But see this bug for the counter-arguments, and try
>
> javascript:var s=Array(1e6).join();var a=[];for(var i=0;i!=2000;++i)
> a.push(s);a.toString();
>
> in your favoritie browser (Firefox 3.6pre recovers nicely).

Safari mac and windows: Crash
IE8: handled gracefully (out of memory alert dialog)
Chrome: crashes content process (aw snap!)

- Rob

Andreas Gal

Jun 16, 2009, 4:16:10 AM
to Igor Bukanov, dev-tech-...@lists.mozilla.org

When is less checks and simpler code not a maintenance advantage?
Below is some code from the engine (asserts removed for readability).
A pretty large chunk of it is error checking, and this is just a
random example I had open in my editor. I think we need simpler code
in the engine. Operating near the OOM mark is a very rare condition,
and surviving it is mostly a pointless endeavor anyway: in our
embedding, even if the JS engine survives a near-death experience with
OOM, the browser is going to come down on us moments later. I would
like to experiment with alternative approaches to OOM handling, maybe
ballast-based safety nets or some simple setjmp/longjmp-based
exception mechanism.

Andreas

    invokevp = js_AllocStack(cx, 2 + 2, &mark);
    if (!invokevp)                      /* OOM: propagate the failure */
        return JS_FALSE;
    obj = JSVAL_TO_OBJECT(vp[0]);

    invokevp[0] = obj->fslots[JSSLOT_FOUND_FUNCTION];
    invokevp[1] = vp[1];
    invokevp[2] = obj->fslots[JSSLOT_SAVED_ID];
    argsobj = js_NewArrayObject(cx, argc, vp + 2);
    if (!argsobj) {                     /* another fallible allocation */
        ok = JS_FALSE;
    } else {
        invokevp[3] = OBJECT_TO_JSVAL(argsobj);
        ok = (flags & JSINVOKE_CONSTRUCT)
             ? js_InvokeConstructor(cx, 2, JS_TRUE, invokevp)
             : js_Invoke(cx, 2, invokevp, flags);
        vp[0] = invokevp[0];
    }
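
For what it's worth, the setjmp/longjmp idea mentioned above would
look roughly like the sketch below (names hypothetical). It glosses
over the hard parts: releasing resources and keeping the GC heap
consistent while unwinding.

#include <setjmp.h>
#include <stdlib.h>

static jmp_buf oom_escape;

static void *
infallible_malloc(size_t nbytes)
{
    void *p = malloc(nbytes);
    if (!p)
        longjmp(oom_escape, 1);   /* unwind straight to the entry point */
    return p;
}

/* At each engine entry point: */
int
run_script_guarded(void)
{
    if (setjmp(oom_escape))
        return 0;                 /* OOM: the running script was stopped */
    /* ... interpret the script, allocating via infallible_malloc ... */
    return 1;
}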


On Jun 16, 2009, at 8:50 AM, Igor Bukanov wrote:

> 2009/6/15 Brendan Eich <bre...@mozilla.org>:


>> While not particularly costly in deep ways, it would take a
>> significant effort to produce a correct refactoring patch that
>> reduces
>> the code based on infallible malloc...
>

> But what are the alternatives? The worst would happen if checks for
> null malloc were removed, or if new code forgot to add them in the
> belief (which is an unfounded one, as you have pointed out) that this
> is the right thing in the long term. As I wrote in bug 200505, this
> would mean losing all the benefits of the non-aborting OOM promise
> while gaining no performance or maintenance advantage from fewer
> checks and simpler code.
>
> Igor

Igor Bukanov

Jun 16, 2009, 4:59:21 AM
to dev-tech-...@lists.mozilla.org
2009/6/16 Andreas Gal <andre...@gmail.com>:

>
> When is less checks and simpler code not a maintenance advantage?

I am not arguing about that. The point is that we do not know whether
the advantages outweigh the disadvantages of losing the promise of
handling OOM. There is simply no data at this point on which to base
the decision, which is what Brendan wrote a few posts above.

> Below is
> some code from the engine (asserts removed for readability).

That is one example. But we need the whole patch to know the real
advantages. This is what I would like to see.

Igor

Brendan Eich

Jun 16, 2009, 12:42:04 PM
On Jun 16, 1:16 am, Andreas Gal <andreas....@gmail.com> wrote:
> When is less checks and simpler code not a maintenance advantage?

When they directly or indirectly introduce other bugs. Directly, as
when someone fills a large array from the end and so does not
dereference nearly-null memory safely. Indirectly, via the
ignore-error-returns slippery slope. I've slid down it myself, and I
saw others fall down it even in the latest cycle (the JSON code,
jstracer.cpp of course).

Here's some attitude: in the Unix kernel, and in many other code
bases, checking for failure was pretty much required for security. You
could panic in the old days, as I wrote earlier in this thread, but
over time the dynamic range (workstations, servers, not timesharing
minis or mainframes) expanded toward the Web-wide asymptote. This
meant panicking was lame and users would and did complain, not just
grudgingly bump a kernel configurable and rebuild/reboot.

As programmers, we just took this as a cost of doing business. It was
not huge. Sure, it was not fun, and it had costs beyond failing to
test and propagate errors (failing to release resources before an
early return, due to lack of RAII -- assisted now by static analysis).
But it was not that big a deal!

I suggest time would be better spent here not hacking sketches or
subset patches to show the obvious, while failing to justify the cost
to users and web developers. Rather, compile with C++ exceptions and
RTTI turned on, and work with the IRC #static folks on the unwind-
protect analyses and patch generation.

Or, similar idea: self-host more in JS++ -- JS with system-ish
optional types and unsafe unmanaged code features, but still try/catch/
finally exception handling.

We don't know how close we are to these better worlds. Arguing and
mixing null-crashy code into the codebase is just degrading the
quality of SpiderMonkey and wasting our time.

And if we can't get real exceptions in some suitable language, then
grow a pair and write some null tests, RAII helpers, even goto out
with static analysis macrology!
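
For anyone who hasn't seen it, the "goto out" shape referred to above
looks like this; the function is hypothetical, and the point is that
every failure path funnels through a single cleanup label:

#include <stdlib.h>
#include "jsapi.h"

#define BUFSIZE 1024   /* illustrative */

static JSBool
DoSomething(JSContext *cx)
{
    JSBool ok = JS_FALSE;
    char *buf1 = NULL, *buf2 = NULL;

    buf1 = (char *) malloc(BUFSIZE);
    if (!buf1) {
        JS_ReportOutOfMemory(cx);
        goto out;
    }
    buf2 = (char *) malloc(BUFSIZE);
    if (!buf2) {
        JS_ReportOutOfMemory(cx);
        goto out;
    }
    /* ... the real work goes here ... */
    ok = JS_TRUE;
out:
    free(buf2);        /* free(NULL) is harmless, so cleanup stays simple */
    free(buf1);
    return ok;
}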

</rant>

/be

John J. Barton

Jun 17, 2009, 12:20:36 AM
Andreas Gal wrote:
>
> When is less checks and simpler code not a maintenance advantage?

When your code is the base used by millions of developers.

jjb

Godmar Back

Jun 19, 2009, 8:21:42 AM
to Brendan Eich, dev-tech-...@lists.mozilla.org
A short thought about this:

Correct handling of OOM in the core would make it easier to adopt SM
for use in environments where one is interested in limiting the memory
consumption of different domains. For instance, if 'malloc' is
replaced with one that charges memory to a "domain" -- however defined
-- then it may fail when that domain's limit is reached. Clearly,
crashing the entire engine is the wrong thing to do in this case;
instead, just the offending "domain" should be terminated. I'm putting
"domain" in quotation marks to avoid confusion with DNS domains.

There's a long line of research that has looked at what it takes to
adapt a language-based VM to provide resource-controlled environments.
In the past, this work focused on Java; nowadays, more and more
researchers are looking into JavaScript. Robust OOM handling in SM
would make it a good platform for this research and increase the
chance of adoption.
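
The charging wrapper described above could be as simple as this
sketch; all names are hypothetical, and a matching domain_free would
credit the charge back:

#include <stdlib.h>

typedef struct Domain {
    size_t used;       /* bytes currently charged to this domain */
    size_t limit;      /* the domain's memory budget */
} Domain;

void *
domain_malloc(Domain *d, size_t nbytes)
{
    void *p;

    if (d->used + nbytes > d->limit)
        return NULL;   /* over budget: OOM for this domain only */
    p = malloc(nbytes);
    if (p)
        d->used += nbytes;
    return p;
}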

- Godmar

Boris Zbarsky

Jun 19, 2009, 11:39:38 AM
Godmar Back wrote:
> Correct handling of OOM in the core would make it easier to adopt SM to be
> used in environments in which one is interested in limiting the memory
> consumption of different domains. For instance, if 'malloc' is replaced with
> one that charges memory to a "domain" - however defined - then it may fail
> if this limit is reached. Clearly, crashing the entire engine is the wrong
> to do in this case; instead - just the offending "domain" should be
> terminated.

Honestly, the right way to do this to me seems to be
process-per-"domain" with resource limits on the process, in which case
crashing the entire engine is in fact fine...

-Boris

Godmar Back

Jun 19, 2009, 12:18:37 PM
to Boris Zbarsky, dev-tech-...@lists.mozilla.org

That's the approach taken by Google Chromium. The question, in my opinion,
is whether one can do better (and whether support for more fine-grained
isolation can/should be designed into VMs such as Spidermonkey). I admit
that is an open question.

For instance, consider a page such as iGoogle (or any portal or complex RIA
application that uses a multitude of JavaScript "widgets" or UI components
from different non-coordinating sources and libraries). I argue that it's
hard-to-impossible to isolate their resource consumption, as well as
resource-related and other faults, if the only available tool is OS process
separation.

I don't mean to hijack this list, but perhaps the approach my student
Amarjyoti Deka and I are currently pursuing may be of interest to
others (and to the Spidermonkey designers as well). We'd definitely
like to hear thoughts and comments.

We introduced an abstraction we call "script spaces": a script space
is associated with a tab, a frame, or a <div> section of a document.
There's also a system space for chrome JavaScript. All execution is
done on behalf of a script space, and event handlers are associated
with a script space.


Our current prototype is able to support multiple CPU-bound scripts
within a page, across tabs, etc., without making the browser lose
interactivity. To allow multiple events to be processed
simultaneously, we use multiple threads; however, these threads simply
hold the execution context (stack, etc.) -- they don't execute
concurrently, since most of Firefox is not thread-safe. As an event is
propagated along the DOM tree, event handlers belonging to different
script spaces need to be executed. In this case, the dispatching
thread "enters" the space and becomes subject to the script space's
resource consumption constraints. Only one thread is allowed to be
inside a script space at a time, to maintain JS's single-threaded
programming model. We control CPU consumption per script space by
calling into a scheduler from the DOM branch callback, which thus
works like a timer interrupt in a traditional OS. We apply a
traditional proportional-share scheduling scheme (Duda/Cheriton's
Borrowed Virtual Time scheduler).

We are considering extending this idea to also maintain different
memory consumption domains. In such a setup, a domain may run out of
memory inside a call to the SM engine. We'd like the engine to report
such resource exhaustion situations, allowing us to react to them --
terminate the script space, for instance -- in a way that does not
compromise the engine or other script spaces. (Just as you can issue a
kill -9 on a process, or have it die with SIGXCPU, without having to
reboot the system.)

- Godmar

John J. Barton

Jun 19, 2009, 1:00:03 PM
Godmar Back wrote:
> On Fri, Jun 19, 2009 at 11:39 AM, Boris Zbarsky <bzba...@mit.edu> wrote:
>
>> Godmar Back wrote:
>>
>>> Correct handling of OOM in the core would make it easier to adopt SM to be
>>> used in environments in which one is interested in limiting the memory
>>> consumption of different domains. For instance, if 'malloc' is replaced
>>> with
>>> one that charges memory to a "domain" - however defined - then it may fail
>>> if this limit is reached. Clearly, crashing the entire engine is the wrong
>>> to do in this case; instead - just the offending "domain" should be
>>> terminated.
>>>
>> Honestly, the right way to do this to me seems to be process-per-"domain"
>> with resource limits on the process, in which case crashing the entire
>> engine is in fact fine...
>>
>
> That's the approach taken by Google Chromium. The question, in my opinion,
> is whether one can do better

+1 from the debugger! Multi-process with a single thread per page will
have all of the problems of debugging today plus more, because you
have to do everything remotely.

> We introduced an abstraction we call "script spaces" - a script space is
> associated with either a tab, frame, or with a <div> section of a document.
> There's also a system space for chrome JavaScript. All execution is done on
> behalf of a script space. Event handlers are associated with a script space.

Having separate threads for chrome js and page js would benefit
debugging. This is the kind of innovation that Mozilla should investigate.

jjb

Brendan Eich

Jun 19, 2009, 7:01:14 PM
On Jun 19, 9:18 am, Godmar Back <god...@gmail.com> wrote:
> That's the approach taken by Google Chromium.  The question, in my opinion,
> is whether one can do better (and whether support for more fine-grained
> isolation can/should be designed into VMs such as Spidermonkey). I admit
> that is an open question.

Hi Godmar, this matches our thinking (at least that of Andreas Gal and
myself when we discussed the problem space recently).

I spoke at Dagstuhl 09141 and more recently at W2SP about this; slides
for the latter are here:

http://w2spconf.com/2009/presentations/invited-slides.pdf

At W2SP, Adam Barth forwarded to me a draft of this paper:

http://www.adambarth.com/papers/2009/barth-weinberger-song.pdf

which validates the approach that Andreas and I had already been
discussing, implemented as a patch to WebKit.

It turns out that all the modern optimizing JS virtual machines
speculate productively about callsite/callee affinity and other
relations. Including trust labels as well as "hidden classes" or
"shapes" in such polymorphic inline caches makes for a low overhead,
universal inline reference monitor. We are going to implement this in
TraceMonkey this summer.


> We are considering extending this idea to also maintain different memory
> consumption domains. In such a setup, a domain may run out of memory inside
> a call to the SM engine. We'd like the engine to report such resource
> exhaustion situations, allowing us to react to it - terminate the script
> space, for instance - in a way that does not compromise the engine or other
> script spaces. (Just like you can issue a kill -9 on a process, or have it
> die with SIGXCPU without having to reboot the system.)

This work is of great interest to us at Mozilla. I would be happy to
review any patches, whether or not you aim to contribute them back
upstream. Further collaboration is possible; we seem to have a lot in
common among our research agendas. Please feel free to mail me about
any of this. Thanks,

/be

Brendan Eich

Jun 20, 2009, 1:00:20 AM
On Jun 19, 9:18 am, Godmar Back <god...@gmail.com> wrote:
> We
> control CPU consumption per script space by calling into a scheduler from
> the DOMbranchcallback, which thus works like a timer interrupt in a
> traditional OS. We apply a traditional proportional-share ...

Hi Godmar,

FYI, we have ditched the branch callback in Firefox 3.5 / Gecko
1.9.1 / SpiderMonkey JS1.8.1 (not yet released of course!). The new
API was proposed to this group here:

http://groups.google.com/group/mozilla.dev.tech.js-engine/browse_frm/thread/a4d1fe147761aacb/89b9c58829daf24?hl=en&ie=UTF-8&oe=utf-8&q=brendan+eich+operation+callback#089b9c58829daf24

Since we are using a watchdog thread and wall-clock time measurement,
you shouldn't have a problem adapting your code to use this lower-
overhead API. Let me know if I'm wrong,
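
In rough outline, using the new API from a per-script-space scheduler
would look like the sketch below. The over-budget check is the
hypothetical part; see the linked proposal for the authoritative
signatures:

#include "jsapi.h"

extern JSBool script_space_over_budget(JSContext *cx);  /* hypothetical */

/* Runs on the JS thread shortly after the watchdog triggers it. */
static JSBool
ScriptSpaceOperationCallback(JSContext *cx)
{
    if (script_space_over_budget(cx))
        return JS_FALSE;            /* stop the running script */
    return JS_TRUE;
}

/* At context setup:
 *     JS_SetOperationCallback(cx, ScriptSpaceOperationCallback);
 * From the watchdog thread, when a script space's time slice expires:
 *     JS_TriggerOperationCallback(cx);
 */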

/be
