
Rethinking the amount of system JS we use in Gecko on B2G


Justin Lebar

Apr 21, 2013, 7:51:18 PM
to dev-platform
I think we should consider using much less JS in the parts of Gecko that are
used in B2G. I'd like us to consider writing new modules in C++ where
possible, and I'd like us to consider rewriting existing modules in C++.

I'm only proposing a change for modules which are enabled for B2G. For modules
which aren't enabled on B2G, I'm not proposing any change.

What I'd like to come out of this thread is a consensus one way or another as
to whether we continue along our current path of writing many features that are
enabled on B2G in JS, or whether we change course.

Since most of these features implemented in JS seem to be DOM features, I'm
particularly interested in the opinions of the DOM folks. I'm also interested
in the opinions of JS folks, particularly those who know about the memory usage
of our new JITs.

In the remainder of this e-mail I'll first explain where our JS memory is
going. Then I'll address two arguments that might be made against my proposal
to use more C++. Finally, I'll conclude by suggesting a plan of action.

=== Data ===

Right now about 50% (16mb) of the memory used by the B2G main process
immediately after rebooting is JS. It is my hypothesis that we could greatly
reduce this by converting modules to C++.

On our 256mb devices, we have about 120mb available to Gecko, so this 16mb
represents 13% of all memory available to B2G.

To break down the 16mb of JS memory, 8mb is from four workers: ril_worker,
net_worker, wifi_worker (x2). 5mb of the 8mb is under "unused-arenas"; this is
fragmentation in the JS heap. Based on my experience tackling fragmentation in
the jemalloc heap, I suspect reducing this would be difficult. But even if we
eliminated all of the fragmentation, we'd still be spending 3mb on these four
workers, which I think is likely far more than we need.

The other 8mb is everything else in the system compartment (all our JSMs,
XPCOM components, etc). In a default B2G build you don't get a lot of insight
into this, because most of the system compartments are squished together to save
memory (bug 798491). If I set jsloader.reuseGlobal to false, the amount of
memory used increases from 8mb to 15mb, but now we can see where it's going.

The list of worst offenders follows, but because this data was collected with
reuseGlobal turned off, apply generous salt.

0.74 MB modules/Webapps.jsm
0.59 MB anonymous sandbox from devtools/dbg-server.jsm:41
0.53 MB components/SettingsManager.js
0.53 MB chrome://browser/content/shell.xul
0.49 MB components/WifiWorker.js
0.43 MB modules/DOMRequestHelper.jsm
0.38 MB modules/XPCOMUtils.jsm
0.34 MB RadioInterfaceLayer.js
0.31 MB AppsUtils.jsm
0.27 MB Webapps.js
0.22 MB BrowserElementParent.jsm
0.21 MB app://system.gaiamobile.org/index.html

Many (but certainly not all) of these modules could be rewritten in C++.

Beyond this list, it's death by a thousand cuts; there are 100 compartments in
there, and they each cost a small amount.

I've attached two about:memory dumps collected on my hamachi device soon after
reboot, so you can examine the situation more closely, if you like.
merged.json was collected with the default config, and unmerged.json was
collected with jsloader.reuseGlobal set to false.

Download and extract these files and then open them with the button at
the bottom
of about:memory in Nightly.

(Before you ask: Most of the heap-unclassified in these dumps is
graphics memory,
allocated in drivers.)

=== Should we use JS because it's nicer than C++? ===

I recognize that in many ways JS is a more convenient language than C++. But
that's beside the point here. The point is that in the environment we're
targeting, our implementation of JS is too heavyweight. We can either fix our
implementation or use less JS, but we can't continue using as much JS as we
like without doing one of these two things.

=== Why not just make JS slimmer? ===

It's been suggested to me that instead of converting existing and future JS
code to C++, we should focus on making our JS engine slimmer. Such changes
would of course have the advantage of improving our handling of web content on
B2G.

I'm absolutely in favor of reducing JS memory usage, but I see this effort as
orthogonal to the question of rewriting our current code and writing our future
code in C++, for a few reasons.

1. Content JS does not run in the B2G main process, where the impact of high
memory usage is strongest. We can probably tolerate higher memory usage for
content JS than we can for main-process code. I think it makes sense for our
JS team to focus their effort on optimizing for content JS, since that's far
more widespread.

2. We have a large team of B2G engineers, some of whom could work exclusively
on converting components from JS to C++. In contrast, we have a relatively
small team of JS engineers, few of whom can work exclusively on optimizing the
JS engine for B2G's use-cases.

3. I know people get in trouble at Mozilla for suggesting that it's impossible
to do anything in JS, so I won't do that, but it seems to me that the dynamic
semantics of JS make it very difficult to achieve the same degree of memory
density as we do with C++. (We're talking about density of program data as
well as code here.)

At the very least, I'm pretty sure it's straightforward to significantly reduce
our memory usage by rewriting code in C++, while it would probably take
engineering heroics to approach the same level of memory usage by modifying the
JS engine. I don't think it's wise to bet the product on heroics,
given an alternative.

=== Conclusion ===

If we think that 256mb is a fad, then our current trajectory is probably
sustainable. But everything I have heard from management suggests that we are
serious about 256mb for the foreseeable future.

If we anticipate shipping on 256mb devices for some time, I think our rate of
adding features written in JS is unsustainable. I think we should shift the
default language for implementation of DOM APIs from JS to C++, and we should
rewrite the parts of the platform that run on B2G in C++, where possible.

I'd start by converting these four workers. Do we agree this is a place to
start?

-Justin

Justin Lebar

Apr 22, 2013, 10:01:50 AM
to dev-platform
Of course attachments don't work great on newsgroups. I've uploaded
the about:memory dumps I tried to attach to people.m.o:

http://people.mozilla.org/~jlebar/downloads/merged.json.xz
http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz

Andreas Gal

Apr 22, 2013, 10:11:06 AM
to Justin Lebar, dev-platform
JS is a big advantage for rapid implementation of features and it's
easier to avoid exploitable mistakes. Also, in many cases JS code
(bytecode, not data) should be slimmer than C++. Using JS for
infrequently executing code should be a memory win. I think I would
like to hear from the JS team on reducing memory use of JS and
disabling any compilation for infrequently running code before we give
up on it.

Andreas

Sent from Mobile.

Kyle Huey

Apr 22, 2013, 10:22:20 AM
to Andreas Gal, dev-platform, Justin Lebar
The things on Justin's list do not appear to be infrequently running code.
Wifi, RIL, Apps, mozbrowser, etc are going to be running all the time.

- Kyle

Justin Lebar

Apr 22, 2013, 10:28:30 AM
to Andreas Gal, dev-platform
> I think I would like to hear from the JS team on reducing memory use of JS and
> disabling any compilation for infrequently running code before we give
> up on it.

The issue isn't compilation of code; that doesn't stick out in the
memory reports. The issue seems to be mostly the overhead of the JS
engine for each file (even if the file stores very little data, as
BrowserElementParent does) and also the unavoidable inefficiency
associated with assigning each worker its own runtime (in particular,
the fact that this greatly increases fragmentation).

> Using JS for infrequently executing code should be a memory win.

If the JS team wants to own B2G memory usage from here on out and can
commit the necessary resources to re-optimizing the JS engine
specifically for B2G, then that's great, I'd love to see us test this
hypothesis.

Otherwise, we're leaving a huge amount of memory for B2G on the table
when we have a solution in hand that requires zero heroics. I think
that's folly.

Mike Hommey

Apr 22, 2013, 10:31:08 AM
to Justin Lebar, dev-platform
On Sun, Apr 21, 2013 at 07:51:18PM -0400, Justin Lebar wrote:
> I think we should consider using much less JS in the parts of Gecko that are
> used in B2G. I'd like us to consider writing new modules in C++ where
> possible, and I'd like us to consider rewriting existing modules in C++.
> [...]

How about pre-compiling JS in JITed form? That would require the JIT
form to be relocatable if it isn't already, and wouldn't work well on
platforms where we use different instructions depending on the actual
target processor, but I guess that could work on our ARM targets. I
however don't know how much less memory that would take.

Mike

Boris Zbarsky

Apr 22, 2013, 10:40:48 AM
On 4/21/13 7:51 PM, Justin Lebar wrote:
> Since most of these features implemented in JS seem to be DOM features, I'm
> particularly interested in the opinions of the DOM folks.

Opinions differ. ;)

I think for DOM features that are not invoked on most pages,
implementing them in JS seems like a reasonable choice. We are actively
working on making this easier to do than it is now.

This is only a viable implementation strategy for objects you don't
expect to have lots of around, since there is a lot more per-object
overhead for a JS-implemented DOM object. But lots of things that are
currently implemented in JS are per-page singletons, so that's ok.

I'm very sympathetic to Andreas' points about memory-safety (especially
for hard-to-fuzz things) and ease of prototyping and implementation.
But on the flip side, there are security bugs that JS implementations
are subject to (involving content passing in unexpected objects and
whatnot) that a C++ implementation simply never has to deal with. See
https://bugzilla.mozilla.org/show_bug.cgi?id=856042 for those with the
access... I'm hoping that putting a WebIDL layer in front of the JS
implementations will help, but will of course add to the per-object
overhead.

One other thing to keep in mind is that doing an implementation in JS
does not involve having to figure out cycle collection, etc, boilerplate...

So to the extent that b2g is in a "not enough hands" situation,
implementing in JS makes sense.

To the extent that we have code whose performance we want to maximize or
whose memory footprint we need to minimize, a JS implementation right
now is not a good idea.

I'm afraid in practice the right decision should probably be made on a
per-component basis, and the hard part with that is making sure informed
decisions are made. :(

-Boris

Justin Lebar

Apr 22, 2013, 10:53:40 AM
to Mike Hommey, dev-platform
> How about pre-compiling JS in JITed form?

While significant, it seems that memory used for script source isn't
the biggest offender.

Full details are in the about:memory reports,

http://people.mozilla.org/~jlebar/downloads/merged.json.xz
http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz

but here's a teaser.

This is the wifi worker. I think "script-sources" is code. Note that
fragmentation (unused-arenas) is way too high, but even despite this
the worker uses too much memory.

> 2.38 MB (05.13%) -- worker(resource://gre/modules/wifi_worker.js, 0x45584800)
> ├──1.77 MB (03.81%) -- gc-heap
> │ ├──1.74 MB (03.74%) ── unused-arenas
> │ └──0.03 MB (00.07%) -- (2 tiny)
> │ ├──0.03 MB (00.07%) ── chunk-admin
> │ └──0.00 MB (00.00%) ── unused-chunks
> └──0.61 MB (01.32%) -- (3 tiny)
> ├──0.26 MB (00.57%) -- runtime
> │ ├──0.13 MB (00.27%) ── gc-marker
> │ ├──0.06 MB (00.12%) ── jaeger-code
> │ ├──0.04 MB (00.08%) ── runtime-object
> │ ├──0.02 MB (00.03%) ── atoms-table
> │ ├──0.01 MB (00.03%) ── script-sources
> │ ├──0.01 MB (00.01%) ── unused-code
> │ ├──0.00 MB (00.01%) ── dtoa
> │ ├──0.00 MB (00.01%) ── stack
> │ ├──0.00 MB (00.01%) ── temporary
> │ ├──0.00 MB (00.00%) ── contexts
> │ ├──0.00 MB (00.00%) ── script-filenames
> │ ├──0.00 MB (00.00%) ── ion-code
> │ ├──0.00 MB (00.00%) ── math-cache
> │ └──0.00 MB (00.00%) ── regexp-code
> ├──0.25 MB (00.54%) -- compartment(web-worker)
> │ ├──0.13 MB (00.29%) -- gc-heap
> │ │ ├──0.07 MB (00.15%) ── unused-gc-things [2]
> │ │ ├──0.03 MB (00.06%) -- objects
> │ │ │ ├──0.02 MB (00.04%) ── non-function
> │ │ │ └──0.01 MB (00.02%) ── function
> │ │ ├──0.02 MB (00.04%) ── shapes/tree
> │ │ └──0.02 MB (00.03%) ── sundries [2]
> │ ├──0.06 MB (00.13%) ── analysis-temporary
> │ ├──0.03 MB (00.06%) ── other-sundries [2]
> │ ├──0.02 MB (00.04%) ── objects-extra/slots
> │ └──0.01 MB (00.02%) ── script-data
> └──0.10 MB (00.22%) -- compartment(web-worker-atoms)
> ├──0.09 MB (00.19%) -- gc-heap
> │ ├──0.09 MB (00.18%) ── strings
> │ └──0.00 MB (00.01%) ── sundries
> └──0.01 MB (00.02%) ── other-sundries

Here's the worst-offending compartment (webapps.js) with
jsloader.reuseGlobal turned off. Recall that flipping this switch
expands memory usage by a factor of 2, and not all pieces of the
compartment are affected equally, so take this with salt.

> 0.74 MB (01.60%) -- compartment([System Principal], resource://gre/modules/Webapps.jsm)
> ├──0.28 MB (00.61%) -- gc-heap
> │ ├──0.10 MB (00.21%) ── unused-gc-things
> │ ├──0.08 MB (00.17%) -- objects
> │ │ ├──0.06 MB (00.12%) ── non-function
> │ │ └──0.02 MB (00.04%) ── function
> │ ├──0.05 MB (00.11%) ── strings
> │ ├──0.03 MB (00.05%) ── shapes/tree
> │ ├──0.02 MB (00.04%) ── scripts
> │ └──0.01 MB (00.02%) ── sundries
> ├──0.17 MB (00.36%) -- string-chars
> │ ├──0.09 MB (00.20%) ── non-huge
> │ └──0.08 MB (00.17%) -- huge
> │ ├──0.04 MB (00.08%) ── string(length=9114, "data:image//png;base64,iVBORw0KG...") [2]
> │ └──0.04 MB (00.08%) ── string(length=9646, "data:image//png;base64,iVBORw0KG...") [2]
> ├──0.11 MB (00.23%) -- objects-extra
> │ ├──0.09 MB (00.20%) ── slots
> │ └──0.02 MB (00.03%) ── property-iterator-data
> ├──0.07 MB (00.16%) ── script-data
> ├──0.06 MB (00.13%) ── analysis-temporary
> ├──0.02 MB (00.04%) ── shapes-extra/tree-tables
> ├──0.02 MB (00.04%) ── other-sundries
> └──0.02 MB (00.03%) ── cross-compartment-wrappers

To me, this seems like death by a thousand cuts; there are /lots/ of
little things that we'd need to improve.


Mike Hommey

Apr 22, 2013, 11:05:52 AM
to Justin Lebar, dev-platform
On Mon, Apr 22, 2013 at 10:53:40AM -0400, Justin Lebar wrote:
> > How about pre-compiling JS in JITed form?
>
> While significant, it seems that memory used for script source isn't
> the biggest offender.
>
> Full details are in the about:memory reports,
>
> http://people.mozilla.org/~jlebar/downloads/merged.json.xz
> http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz
>
> but here's a teaser.
>
> This is the wifi worker. I think "script-sources" is code. Note that
> fragmentation (unused-arenas) is way too high, but even despite this
> the worker uses too much memory.

How about making the JS engine use jemalloc instead of its own
allocator? Does anything actually rely on the arenas being independent
at the paging level?

Mike

Justin Lebar

Apr 22, 2013, 11:16:01 AM
to Mike Hommey, dev-platform
> How about making the JS engine use jemalloc instead of its own
> allocator? Does anything actually rely on the arenas being independent
> at the paging level?

My understanding, which may be wrong, is that the JS engine needs to
be able to quickly map an object's address to a compartment. They do
this by keeping a map (in the runtime?) of chunk addresses (1mb
aligned) to compartments. Given an address, you can easily find its
chunk (zero the low-order bits), and then it's a quick lookup to find
its compartment.
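For anyone unfamiliar with the trick, the address-to-chunk step can be sketched like so. (This is a hypothetical illustration of the scheme as I described it above, not SpiderMonkey's actual code; the 1mb chunk size is the one mentioned above.)

```cpp
#include <cassert>
#include <cstdint>

// With GC chunks aligned to 1 MiB, clearing the low-order 20 bits of an
// object's address yields the base address of its chunk. That base can
// then key a chunk -> compartment map kept in the runtime.
constexpr uintptr_t kChunkSize = uintptr_t(1) << 20;  // 1 MiB
constexpr uintptr_t kChunkMask = kChunkSize - 1;

constexpr uintptr_t ChunkBase(uintptr_t objAddr) {
    return objAddr & ~kChunkMask;
}
```

Note that shrinking the chunk size only changes the mask; it doesn't help with fragmentation inside a page.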

We could imagine shrinking chunks to 4kb, but that doesn't solve the
fragmentation problem here; the fragmentation in "unused-arenas" is
/within/ a page. (We also have a bunch of decommitted memory in these
workers due to fragmentation between pages, but that's not a problem.)
What you really need, I guess, is a moving GC.

But I emphasize again that reducing the unused-arenas does not solve
all of our problems, only the single largest.

Justin Lebar

Apr 22, 2013, 11:51:51 AM
to dev-pl...@lists.mozilla.org
> So to the extent that b2g is in a "not enough hands" situation, implementing in JS
> makes sense.

I don't feel like b2g is in this position. We don't have a lot of
Gecko hackers with 2+ years of experience, but we do have a lot of
hackers. Whether or not we have a lot of C++ versus JS hackers, I
can't say.

> To the extent that we have code whose performance we want to maximize or whose
> memory footprint we need to minimize, a JS implementation right now is not a good
> idea.

My contention is that /everything that runs on B2G/ is code whose
memory footprint we need to minimize. Fitting inside 120mb leaves
very little room for waste.

But it doesn't sound like you disagree too much with the idea of
re-implementing the worst offenders in C++, as opposed to sitting back
and hoping for heroics from the JS team. (To be clear, if I had to
choose a team to deliver heroics, I'd choose the JS team in a
heartbeat.)

It seems foolhardy to me to do the work of re-implementing these
workers in C++ on the one hand, while on the other continuing to
implement DOM features in JS that eat up tens or hundreds of KB of
memory each. But I suppose that's better than doing nothing, and if
we all can agree on doing that much, I'd be happy.

Terrence Cole

Apr 22, 2013, 1:36:47 PM
to dev-pl...@lists.mozilla.org
On 04/21/2013 04:51 PM, Justin Lebar wrote:
> [...]
> === Data ===
>
> Right now about 50% (16mb) of the memory used by the B2G main process
> immediately after rebooting is JS. It is my hypothesis that we could greatly
> reduce this by converting modules to C++.
>
> On our 256mb devices, we have about 120mb available to Gecko, so this 16mb
> represents 13% of all memory available to B2G.
>
> To break down the 16mb of JS memory, 8mb is from four workers: ril_worker,
> net_worker, wifi_worker (x2). 5mb of the 8mb is under "unused-arenas"; this is
> fragmentation in the JS heap. Based on my experience tackling fragmentation in
> the jemalloc heap, I suspect reducing this would be difficult. But even if we
> eliminated all of the fragmentation, we'd still be spending 3mb on these four
> workers, which I think is likely far more than we need.

Once exact rooting of the browser is complete we can implement heap
defragmentation easily. Generational GC should help as well here.
Our exact rooting work is at a spot right now where we could easily use
more hands to accelerate the process. The main problem is that the work
is easy and tedious: a hard sell for pretty much any hacker at Mozilla.
Once this work is complete, however, we can get some level of
defragmentation working in a matter of weeks and generational in a
slightly longer time frame.

> 3. I know people get in trouble at Mozilla for suggesting that it's impossible
> to do anything in JS, so I won't do that, but it seems to me that the dynamic
> semantics of JS make it very difficult to achieve the same degree of memory
> density as we do with C++. (We're talking about density of program data as
> well as code here.)

Are you saying JS is not perfect in every way? Release the hounds! :-)

More seriously, JS does have all the tools you need to get very close in
memory or execution speed to C++. The question of which of those tools
is easiest to use in any specific situation is more interesting.

> At the very least, I'm pretty sure it's straightforward to significantly reduce
> our memory usage by rewriting code in C++, while it would probably take
> engineering heroics to approach the same level of memory usage by modifying the
> JS engine. I don't think it's wise to bet the product on heroics,
> given an alternative.

Well, I have to disagree a bit. I know Monkey Island is a weird and
special place to work, but the impossible does seem to happen here on a
strangely regular basis. Usually it even happens without any obvious
heroics.

> === Conclusion ===
>
> If we think that 256mb is a fad, then our current trajectory is probably
> sustainable. But everything I have heard from management suggests that we are
> serious about 256mb for the foreseeable future.
>
> If we anticipate shipping on 256mb devices for some time, I think our rate of
> adding features written in JS is unsustainable. I think we should shift the
> default language for implementation of DOM APIs from JS to C++, and we should
> rewrite the parts of the platform that run on B2G in C++, where possible.
>
> I'd start by converting these four workers. Do we agree this is a place to
> start?

I can't really agree or disagree without knowing why they use "too much"
memory. What I can say is that doing a complete rewrite is almost never
the right answer, particularly if we don't understand what we are trying
to solve in more detail.

-Terrence

> -Justin

Justin Lebar

Apr 22, 2013, 2:07:07 PM
to terr...@mozilla.com, dev-pl...@lists.mozilla.org
> I can't really agree or disagree without knowing why they use "too much"
> memory.

At the risk of sounding like a broken record, it's all in the memory
reports. You probably understand this data better than I do. Extract
and load in about:memory (button is at the bottom).

http://people.mozilla.org/~jlebar/downloads/merged.json.xz
http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz

As I said earlier, if the JS team wants to own B2G memory usage and
commit to getting chrome JS memory usage down to C++ levels within a
timeframe that's acceptable to the B2G team, that's fantastic.

If on the other hand the JS team is not ready to commit to getting
this work done on B2G's schedule, then by construction "wait for JS to
get better" is not a solution that works for us.

Given how long some of this prerequisite work (e.g. generational
garbage collection) has been ongoing, I'm highly dubious of the
claim that our JS engine will improve at the requisite rate. Where
we've had success reducing our JS memory in the past (e.g. bug
798491), it's been by working within the current reality of the JS
engine, instead of twiddling our thumbs waiting for the Right Fix
(e.g. bug 759585, which did not come in time to be useful for B2G
1.x).

Please don't take this as a suggestion that I think you guys are doing
a bad job -- I continue to characterize the JS team's work as heroic.
I just think that there's a limit to how much we ought to expect from
the JS folks, particularly given how many other high-priority projects
you have.

Bill McCloskey

Apr 22, 2013, 2:15:54 PM
to dev-pl...@lists.mozilla.org
I can't agree with you more, Justin. I think Boris is right that we
should make these decisions on a case-by-case basis. But in the case of
these workers, it seems clear that converting them to C++ is the way to
go, assuming we have the resources to do so.

-Bill

Taras Glek

Apr 22, 2013, 2:28:56 PM
to Mike Hommey, Justin Lebar
JS source is smaller than the compiled variety (at least on x86),
especially when compressed. It would be very hard to make JITed code
smaller.

Taras

Richard Newman

Apr 22, 2013, 2:44:46 PM
I think there are three parallel efforts here.

The first is to pick obvious targets and rewrite them in C++. This is a short-term goal driven by numbers, which is great. Get module owners on board and ship it!

The second is to continue working on improving JS runtime efficiency all over. We've already wrestled with compartments and zones on desktop (including shipping FHR preprocessed into combined files to avoid overhead), for example. This is clearly more of an ongoing process than a target.

Thirdly, it seems clear that we have a papercut efficiency problem: dozens of little modules that aren't well-suited to rewriting in C++ but nonetheless contribute to bloat.

Perhaps it's time for a little series of brownbags on writing performant JS, or profiling and improving existing code?

There's probably a lot of dead space smeared over the codebase that we could fix with a little time adding lazy eval, better cleanup, reduced allocations, and whatever fancy profiler-informed tricks we have these days.

I know that my knowledge on that topic is rusty from spending too much time in Java-land, and I bet the same is true for a lot of us.

Thoughts, perf team?

Terrence Cole

Apr 22, 2013, 3:04:39 PM
to Justin Lebar, dev-pl...@lists.mozilla.org
On 04/22/2013 11:07 AM, Justin Lebar wrote:
>> I can't really agree or disagree without knowing why they use "too much"
>> memory.
> At the risk of sounding like a broken record, it's all in the memory
> reports. You probably understand this data better than I do. Extract
> and load in about:memory (button is at the bottom).
>
> http://people.mozilla.org/~jlebar/downloads/merged.json.xz
> http://people.mozilla.org/~jlebar/downloads/unmerged.json.xz
>
> As I said earlier, if the JS team wants to own B2G memory usage and
> commit to getting chrome JS memory usage down to C++ levels within a
> timeframe that's acceptable to the B2G team, that's fantastic.
>
> If on the other hand the JS team is not ready to commit to getting
> this work done on B2G's schedule, then by construction "wait for JS to
> get better" is not a solution that works for us.

I agree: I was not suggesting that as a general solution in any way.

>
> Given how long some of this prerequisite work (e.g. generational
> garbage collection) has been ongoing for, I'm highly dubious of the
> claim that our JS engine will improve at the requisite rate.

Generational GC is an extremely ambitious undertaking. We have set
realistic milestones for completion and we are meeting our goal dates
more often than not: the project is on schedule. Whether this means it
will be done soon enough for B2G, I have no idea. What does B2G's
schedule look like?

> Where
> we've had success reducing our JS memory in the past (e.g. bug
> 798491), it's been by working within the current reality of the JS
> engine, instead of twiddling our thumbs waiting for the Right Fix
> (e.g. bug 759585, which did not come in time to be useful for B2G
> 1.x).

Agreed. Last week we finally got an actual physical Unagi posting
numbers to AWFY. Nicolas is now looking into our GC tuning parameters
with the goal of improving our numbers there.

>
> Please don't take this as a suggestion that I think you guys are doing
> a bad job -- I continue to characterize the JS team's work as heroic.
> I just think that there's a limit to how much we ought to expect from
> the JS folks, particularly given how many other high-priority projects
> you have.

I did not want to suggest that rewriting some of your modules to C++ is
the wrong solution, given your requirements. Sorry if my response was a
bit harsh; it is extremely frustrating from our side to be told now that
what we did 9 months ago was not good enough when you needed it 3 months
ago. Please keep in mind that we are also attacking the same problem
from the other direction and we'd very much like it if we could make our
work more helpful to you.

-Terrence


Jeff Muizelaar

Apr 22, 2013, 3:12:40 PM
to Bill McCloskey, dev-pl...@lists.mozilla.org

On 2013-04-22, at 2:15 PM, Bill McCloskey wrote:

> I can't agree with you more, Justin. I think Boris is right that we should make these decisions on a case-by-case basis. But in the case of these workers, it seems clear that converting them to C++ is the way to go, assuming we have the resources to do so.

So a specific case that I ran into during the Performance Workshop is RILContentHelper.js. During the startup of the settings app
we jank for 53ms while initializing RILContentHelper.js:

http://people.mozilla.com/~bgirard/cleopatra/#report=bf7077c6552fe2bc015d7074a338b673911f3ce8&search=Mobile

There doesn't seem to be anything specific taking that much time in the profile, just general JS overhead. In this case RILContentHelper.js is wrapped by C++ code in dom/network/src/MobileConnection.cpp, and so we end up spending a fair amount of time transitioning from JS to C++ to JS to C++.

-Jeff

Terrence Cole

Apr 22, 2013, 3:44:04 PM
to dev-pl...@lists.mozilla.org
On 04/22/2013 12:12 PM, Jeff Muizelaar wrote:
> On 2013-04-22, at 2:15 PM, Bill McCloskey wrote:
>
>> I can't agree with you more, Justin. I think Boris is right that we should make these decisions on a case-by-case basis. But in the case of these workers, it seems clear that converting them to C++ is the way to go, assuming we have the resources to do so.
> So a specific case that I ran into during the Performance Workshop is RILContentHelper.js. During the startup of the settings app
> we jank for 53ms while initializing RILContentHelper.js:
>
> http://people.mozilla.com/~bgirard/cleopatra/#report=bf7077c6552fe2bc015d7074a338b673911f3ce8&search=Mobile

That link gives me this: "Error fetching profile :(. URL:
'http://profile-store.commondatastorage.googleapis.com/bf7077c6552fe2bc015d7074a338b673911f3ce8'.
Did you set the CORS headers?"

>
> There doesn't seem to be anything specific taking that much time in the profile, just general JS overhead. In this case RILContentHelper.js is wrapped by C++ code in dom/network/src/MobileConnection.cpp, and so we end up spending a fair amount of time transitioning from JS to C++ to JS to C++.

That seems like the sort of thing that SpiderMonkey may be able to
address in the short term, depending on what exactly it turns out to be.
Is there a bug on file somewhere to coordinate the investigation?

-Terrence

> -Jeff

Till Schneidereit

Apr 22, 2013, 4:25:36 PM
to terr...@mozilla.com, dev-pl...@lists.mozilla.org
There are a few things we're working on in SpiderMonkey that should improve
this situation quite a bit:

Terrence already mentioned generational GC, which certainly is the largest
piece by far. Getting rid of all or almost all memory lost to fragmentation
makes the tradeoff a considerably different one, I'd say.

Additionally, the work on making bytecode generation lazy[1] should vastly
reduce the memory used for scripts. Based on that, I'm investigating
several more options to reduce script memory:
- re-lazification of JSScripts, reducing memory usage by removing the
parsed representation of scripts that are only run once or very rarely.
- lazy cloning of JSScripts from other compartments that contain the
same script source. Probably to go hand-in-hand with re-lazification.
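
To make the re-lazification idea concrete, here is a toy sketch in plain JS. The class and method names are made up for illustration and bear no relation to SpiderMonkey's internal API; `new Function` stands in for bytecode generation:

```javascript
// Toy sketch of script re-lazification (hypothetical, not SpiderMonkey's
// API): keep only the source for cold scripts, compile on demand, and drop
// the compiled form again for scripts that rarely run.
class LazyScript {
  constructor(source) {
    this.source = source;   // always retained (cheap)
    this.compiled = null;   // bytecode stand-in, built on demand
    this.runCount = 0;
  }
  compile() {
    // Stand-in for bytecode generation.
    this.compiled = new Function("return (" + this.source + ");")();
  }
  run(...args) {
    if (this.compiled === null) this.compile(); // lazy (re)compilation
    this.runCount++;
    return this.compiled(...args);
  }
  relazify() {
    // Under memory pressure, drop the compiled form of rarely-run scripts;
    // the retained source lets us rebuild it transparently on the next call.
    if (this.runCount <= 1) this.compiled = null;
  }
}

const s = new LazyScript("(a, b) => a + b");
console.log(s.run(2, 3)); // 5
s.relazify();             // compiled form dropped; source kept
console.log(s.run(4, 5)); // 9 -- recompiled transparently
```

The memory win in the real engine comes from the compiled form being much larger than the source for most chrome scripts.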

Combined, I'd very vaguely say that these measures should reduce the memory
usage by an additional 20 or 30 percentage points. (Based on the current
usage as 100%, and no guarantees, of course).

Nicholas Nethercote

Apr 22, 2013, 4:32:12 PM
to Justin Lebar, Mike Hommey, dev-platform
Some comments on this whole thread:

- I'm very sympathetic to Justin's concerns. 120 MiB is not much
memory. While it's (somewhat) ok to kill apps that are using too much
memory, that doesn't work with the main B2G process, and I've been
CC'd on enough "the B2G main process is using too much memory" bugs to
understand this is an ongoing problem.

- Given the intense time pressure the B2G folks are under, saying
things like "there's no fundamental reason why JS can't be
memory-efficient" isn't much help when it currently isn't
memory-efficient.

- Justin knows more about B2G memory consumption than anyone; he's
been looking at it closely for months.

- Look at the data -- code size is not the issue for JS code.

- Generational GC's timeline isn't even remotely feasible for B2G.
B2G branched on version 18! Gen GC might be done in a few months.
(Which is fair enough; Gen GC is a gigantic project.)

- I believe Justin identified off-list that the workers (and probably
the main JS runtime) are screaming out for more aggressive
decommitting of the JS heap:

>> 2.38 MB (05.13%) -- worker(resource://gre/modules/wifi_worker.js, 0x45584800)
>> ├──1.77 MB (03.81%) -- gc-heap
>> │ ├──1.74 MB (03.74%) ── unused-arenas

"unused-arenas" are empty 4 KiB JS heap arenas that could be
decommitted. The merged.json data shows almost 5 MiB worth of
unused-arenas in 3 of the 4 workers.
https://bugzilla.mozilla.org/show_bug.cgi?id=829482 is currently open
on this issue. That sounds easier to fix than rewriting modules in
C++.
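
To illustrate what those numbers mean, here is a toy model of a heap carved into 4 KiB arenas. It is illustrative only (not SpiderMonkey's real allocator): arenas whose cells have all died stay committed until a shrinking GC decommits them, and that committed-but-empty space is what shows up as "unused-arenas":

```javascript
// Toy model of a GC heap made of 4 KiB arenas (not the real allocator):
// objects occupy cells in arenas; freeing objects can leave an arena
// partially used (fragmentation) or fully empty (decommittable).
const ARENA_SIZE = 4096;
const CELL_SIZE = 64;
const CELLS_PER_ARENA = ARENA_SIZE / CELL_SIZE; // 64 cells per arena

function makeHeap(arenaCount) {
  return Array.from({ length: arenaCount }, () => ({ live: 0 }));
}

// "unused-arenas" in about:memory terms: committed arenas holding no live
// cells. A shrinking GC could hand these pages back to the OS.
function unusedArenaBytes(heap) {
  return heap.filter(a => a.live === 0).length * ARENA_SIZE;
}

const heap = makeHeap(8);        // 32 KiB committed
heap[0].live = CELLS_PER_ARENA;  // one full arena
heap[1].live = 1;                // one badly fragmented arena
// Arenas 2..7 held objects that have since died: empty, but still committed.
console.log(unusedArenaBytes(heap)); // 24576 bytes reclaimable by decommit
```
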

- Looking at the merged.json data: the system principal compartment
merging is happening on the main process, but doesn't appear to be
happening on all the other processes: Homescreen, Usage,
(Preallocated app). I've included the relevant data below.

Nick


Homescreen (pid 390)
Explicit Allocations

13,642,508 B (100.0%) -- explicit
├───4,403,684 B (32.28%) -- js-non-window
│ ├──3,865,124 B (28.33%) -- compartments
│ │ ├──3,358,912 B (24.62%) -- non-window-global
│ │ │ ├────277,688 B (02.04%) ++ compartment([System Principal])
│ │ │ ├────262,212 B (01.92%) ++ compartment([System Principal], resource://gre/modules/DOMRequestHelper.jsm)
│ │ │ ├────199,328 B (01.46%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/Webapps.js)
│ │ │ ├────187,400 B (01.37%) ++ compartment([System Principal], resource://gre/modules/XPCOMUtils.jsm)
│ │ │ ├────182,632 B (01.34%) ++ compartment([System Principal], resource://gre/modules/CSPUtils.jsm)
│ │ │ ├────134,456 B (00.99%) ++ compartment([System Principal], resource://gre/modules/ObjectWrapper.jsm)
│ │ │ ├────124,216 B (00.91%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/SettingsManager.js)
│ │ │ ├────114,360 B (00.84%) ++ compartment([System Principal], resource://gre/modules/UserAgentOverrides.jsm)
│ │ │ ├────110,008 B (00.81%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/contentSecurityPolicy.js)
│ │ │ ├────107,012 B (00.78%) ++ compartment([System Principal], resource://gre/modules/AppsServiceChild.jsm)
│ │ │ ├────105,264 B (00.77%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/PushService.js)
│ │ │ ├────104,688 B (00.77%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/DirectoryProvider.js)
│ │ │ ├─────98,664 B (00.72%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/nsHandlerService.js)
│ │ │ ├─────92,656 B (00.68%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/nsPrompter.js)
│ │ │ ├─────89,624 B (00.66%) ++ compartment([System Principal], resource://gre/modules/BrowserElementPromptService.jsm)
│ │ │ ├─────83,176 B (00.61%) ++ compartment([System Principal], resource://gre/modules/AppsUtils.jsm)
│ │ │ ├─────82,712 B (00.61%) ++ compartment([System Principal], resource://gre/modules/services-common/preferences.js)
│ │ │ ├─────81,536 B (00.60%) ++ compartment([System Principal], resource://gre/modules/BrowserElementParent.jsm)
│ │ │ ├─────77,520 B (00.57%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/AppProtocolHandler.js)
│ │ │ ├─────70,168 B (00.51%) ++ compartment([System Principal], resource://gre/modules/Geometry.jsm)
│ │ │ ├─────69,648 B (00.51%) ++ compartment([System Principal], resource://gre/modules/SettingsDB.jsm)
│ │ │ ├─────66,392 B (00.49%) ++ compartment([System Principal], resource://gre/modules/NetUtil.jsm)
│ │ │ ├─────65,768 B (00.48%) ++ compartment([System Principal], resource://gre/modules/Services.jsm)
│ │ │ ├─────65,112 B (00.48%) ++ compartment([System Principal], resource://gre/modules/commonjs/promise/core.js)
│ │ │ ├─────64,880 B (00.48%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/BrowserElementParent.js)
│ │ │ ├─────64,864 B (00.48%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/AppsService.js)
│ │ │ ├─────64,424 B (00.47%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/ProcessGlobal.js)
│ │ │ ├─────63,656 B (00.47%) ++ compartment([System Principal], jar:file:///system/b2g/omni.ja!/components/nsIDService.js)
│ │ │ ├─────60,472 B (00.44%) ++ compartment([System Principal], resource://gre/modules/IndexedDBHelper.jsm)
│ │ │ ├─────60,408 B (00.44%) ++ compartment([System Principal], resource://gre/modules/FileUtils.jsm)
│ │ │ ├─────58,360 B (00.43%) ++ compartment([System Principal], resource://gre/modules/SettingsQueue.jsm)
│ │ │ ├─────43,560 B (00.32%) ++ compartment(moz-nullprincipal:{0cf61a20-96af-456a-ba29-ca139251bdd6})
│ │ │ └─────26,048 B (00.19%) ++ compartment(null-principal)

Justin Lebar

Apr 22, 2013, 4:38:03 PM
to Till Schneidereit, terr...@mozilla.com, dev-pl...@lists.mozilla.org
> There are a few things we're working on in SpiderMonkey that should improve
> this situation quite a bit:

Thanks, but I again need to emphasize that these are large, long-term
plans. Terrence tells me that GGC is planned for "sometime this
year". Lazy bytecode generation has been on the roadmap for a long
time as well.

I understand that these are large projects, and I don't mean to
suggest that you guys aren't doing a good job with them, but I do not
think that the right solution for b2g system compartments / workers is
"JS should be able to meet your requirements one day," and "wait until
things get better." In the meantime we continue to dig ourselves in a
hole by implementing new DOM APIs in JS.

We of course still want these engine improvements for reducing the
memory usage of web content.

Again, in the past, we've had the most success dealing with the JS
engine as it is today. Then the engine optimizations, when they come,
are gravy. That's all I'm suggesting we do here.

Nicholas Nethercote

Apr 22, 2013, 4:42:12 PM
to Till Schneidereit, terr...@mozilla.com, dev-pl...@lists.mozilla.org
On Mon, Apr 22, 2013 at 1:25 PM, Till Schneidereit
<tschne...@gmail.com> wrote:
> There are a few things we're working on in SpiderMonkey that should improve
> this situation quite a bit:
>
> ...generational GC...
>
> making bytecode generation lazy...
> ...re-lazyfication of JSScripts...
> ...lazy cloning of JSScripts...

This is all great stuff, but as mentioned elsewhere, B2G branched at
version 18, so they need improvements that can land quickly on
the relevant branches.

Nick

Justin Lebar

Apr 22, 2013, 4:48:56 PM
to Nicholas Nethercote, terr...@mozilla.com, dev-pl...@lists.mozilla.org, Till Schneidereit
> This is all great stuff, but as mentioned elsewhere, B2G branched at
> version 18 and so they need improvements that that can land quickly on
> the relevant branches.

Well, to be clear, it would be great if we could land some
improvements for v1.1 (which is based off version 18), but we're
locking the tree down pretty hard already, so I suspect that e.g. a
wifi worker rewrite is off the table for that version. Hopefully v1.1
is the last b2g version that will be based off b2g18.

v1.1 is also, I found out last week, targeting a 512mb device, so
memory usage probably isn't as critical there as it will be in future
releases which target 256mb devices.

Nicholas Nethercote

Apr 22, 2013, 4:57:14 PM
to Justin Lebar, Mike Hommey, dev-platform
On Mon, Apr 22, 2013 at 1:32 PM, Nicholas Nethercote
<n.neth...@gmail.com> wrote:
>
> - Looking at the merged.json data: the system principal compartment
> merging is happening on the main process, but doesn't appear to be
> happening on all the other processes: Homescreen, Usage,
> (Preallocated app).

I filed https://bugzilla.mozilla.org/show_bug.cgi?id=864494 for this issue.

N

Till Schneidereit

Apr 22, 2013, 5:13:39 PM
to Justin Lebar, terr...@mozilla.com, Nicholas Nethercote, dev-pl...@lists.mozilla.org
On Mon, Apr 22, 2013 at 9:48 PM, Justin Lebar <justin...@gmail.com> wrote:

> > This is all great stuff, but as mentioned elsewhere, B2G branched at
> > version 18 and so they need improvements that that can land quickly on
> > the relevant branches.
>

I understand that (but should certainly have made it more clear).

Maybe I'm underestimating the amount of work required to rebuild enough
(whatever "enough" means, in this case) modules in C++, but isn't that a
somewhat major undertaking, too?


>
> Well, to be clear, it would be great if we could land some
> improvements for v1.1 (which is based off version 18), but we're
> locking the tree down pretty hard already, so I suspect that e.g. a
> wifi worker rewrite is off the table for that version. Hopefully v1.1
> is the last b2g version that will be based off b2g18.
>
> v1.1 is also, I found out last week, targeting a 512mb device, so
> memory usage probably isn't as critical there as it will be in future
> releases which target 256mb devices.
>

We might (should, actually) be able to land the lazy-cloning and
re-lazification parts without the lazy bytecode. I'm not saying it's a tiny
project, but at least we have almost all of the required infrastructure in
place, already.

Mmh, but backporting it to 18 might be a lot harder.

Jeff Muizelaar

Apr 22, 2013, 5:22:41 PM
to terr...@mozilla.com, dev-pl...@lists.mozilla.org

On 2013-04-22, at 3:44 PM, Terrence Cole wrote:

> On 04/22/2013 12:12 PM, Jeff Muizelaar wrote:
>> On 2013-04-22, at 2:15 PM, Bill McCloskey wrote:
>>
>>> I can't agree with you more, Justin. I think Boris is right that we should make these decisions on a case-by-case basis. But in the case of these workers, it seems clear that converting them to C++ is the way to go, assuming we have the resources to do so.
>> So a specific case that I ran into during the Performance Workshop is RILContentHelper.js. During the startup of the settings app
>> we jank for 53ms while initializing RILContentHelper.js:
>>
>> http://people.mozilla.com/~bgirard/cleopatra/#report=bf7077c6552fe2bc015d7074a338b673911f3ce8&search=Mobile
>
> That link gives me this: "Error fetching profile :(. URL:
> 'http://profile-store.commondatastorage.googleapis.com/bf7077c6552fe2bc015d7074a338b673911f3ce8'.
> Did you set the CORS headers?"

That's weird. The link works for others and the CORS headers should be set.

>
>>
>> There doesn't seem to be anything specific taking that much time in the profile, just general JS overhead. In this case RILContentHelper.js is wrapped by C++ code in dom/network/src/MobileConnection.cpp, and so we end up spending a fair amount of time transitioning from JS to C++ to JS to C++.
>
> That seems like the sort of thing that SpiderMonkey may be able to
> address in the short term, depending on what exactly it turns out to be.
> Is there a bug on file somewhere to coordinate the investigation?

I don't know if there's anything surprising here. Calling into JS from C++ goes through xpconnect, which is a long-standing source of slowness.

-Jeff

Boris Zbarsky

Apr 22, 2013, 5:36:06 PM
On 4/22/13 5:22 PM, Jeff Muizelaar wrote:
> I don't know if there's anything surprising here. Calling into JS from C++ goes through xpconnect, which is a long-standing source of slowness.

_That_ we can try to fix by converting to JS-implemented WebIDL. Right
now that performs about the same, but we know how to make it much faster...

-Boris

Nicolas B. Pierron

Apr 22, 2013, 6:18:32 PM
On 04/22/2013 07:53 AM, Justin Lebar wrote:
> This is the wifi worker. I think "script-sources" is code. Note that
> fragmentation (unused-arenas) is way too high, but even despite this
> the worker uses too much memory.
>
>> 2.38 MB (05.13%) -- worker(resource://gre/modules/wifi_worker.js, 0x45584800)
>> ├──1.77 MB (03.81%) -- gc-heap
>> │ ├──1.74 MB (03.74%) ── unused-arenas

We have a parameter which sets a low limit to prevent GC during
start-up. I don't think it applies to workers, but it might be worth
checking that this preference[1] is only used for the main thread.
Currently this preference is set to 3 MB before the first GC.

[1] javascript.options.mem.gc_allocation_threshold_mb
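
For reference, prefs like this one are set in b2g/app/b2g.js with an ordinary pref line; the shape is roughly as below (the 3 MB value is just the figure mentioned above, not a recommendation, and whether workers honor it is exactly the open question):

```javascript
// Hypothetical b2g.js-style pref line; the real default lives in
// b2g/app/b2g.js. Lowering it triggers the first GC sooner, at the cost
// of more GC pauses during start-up.
pref("javascript.options.mem.gc_allocation_threshold_mb", 3);
```
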

--
Nicolas B. Pierron

Gijs Kruitbosch

Apr 22, 2013, 7:07:17 PM
to Nicolas B. Pierron
Knowing nothing about the code, this sure looks suspicious:

http://mxr.mozilla.org/mozilla-central/source/b2g/app/b2g.js#550

I cannot find code where this pref is changed using MXR, but I am
jetlagged and don't know my way around this code, so I could totally
have missed it...


~ Gijs

Justin Dolske

Apr 22, 2013, 9:35:23 PM
On 4/21/13 4:51 PM, Justin Lebar wrote:

> What I'd like to come out of this thread is a consensus one way or another as
> to whether we continue along our current path of writing many features that are
> enabled on B2G in JS, or whether we change course.

First -- B2G should clearly do whatever it needs to in order to get
acceptable memory/perf/whatever in the short term. If B2G flops, it
doesn't matter a damn what it's written in.

That said, I think it's critically important that we're working to make
JS an acceptable -- nay, _excellent_ -- language/runtime for application
development for the long run. We can't tell a credible story about why
people should write HTML5 apps, if we're tearing out swaths of JS in our
own products. Sometimes dogfooding is unpleasant or hard, but that's the
whole point.

There will always be a place for C++ (well, until Rust, amirite? ;).
Either as being the right tool for a specific job, or just because
someone happens to know it better. But if we end up avoiding JS because
it's inherently JS, then that's a big strategic problem.

Justin

Nicholas Nethercote

Apr 22, 2013, 9:46:29 PM
to Justin Dolske, dev-pl...@lists.mozilla.org
On Mon, Apr 22, 2013 at 6:35 PM, Justin Dolske <dol...@mozilla.com> wrote:
>
> That said, I think it's critically important that we're working to make JS an
> acceptable -- nay, _excellent_ -- language/runtime for application
> development for the long run. We can't tell a credible story about why
> people should write HTML5 apps, if we're tearing out swaths of JS in our own
> products. Sometimes dogfooding is unpleasant or hard, but that's the whole
> point.

There's a big difference between apps on Firefox OS, which are likely
to have relatively short lifetimes and can be killed if they take up
too much memory, and the main process. Bad memory behaviour in the
main process is a much bigger deal, and it's something that's
happening right now with some frequency.

Nick

Justin Lebar

Apr 23, 2013, 3:47:42 PM
to Nicholas Nethercote, dev-pl...@lists.mozilla.org, Justin Dolske
To close the loop on this thread, the consensus here seems to be that

1. We should continue to make JS slimmer. This is a high priority for
B2G even setting aside the memory consumption of B2G's chrome JS,
since of course B2G runs plenty of content JS.

The memory profile of B2G is different from desktop -- small overheads
matter more on B2G, and the consequences of using too much memory are
more drastic. We should keep this in mind when working on JS in the
future.

2. We should fix bug 829482 (run more shrinking GCs in workers). This
will get us an easy 4-5mb in the main B2G process. This is a
MemShrink:P1 and has been open for a while; I'd love some assistance
with it.

3. We should rewrite these main-process workers in C++, if and when we
have manpower. Even with bug 829482 fixed, the workers will still be
some of the largest individual consumers of JS memory in B2G, and all
of the JS folks I spoke with said that they thought they'd be unable
to reduce the memory overhead of a worker significantly in the medium
term.

I filed bugs:

https://bugzilla.mozilla.org/show_bug.cgi?id=864927
https://bugzilla.mozilla.org/show_bug.cgi?id=864931
https://bugzilla.mozilla.org/show_bug.cgi?id=864932

4. It's worthwhile to at least look carefully at the biggest B2G
chrome compartments and see whether we can reduce their size one way
or another. I filed a metabug:
https://bugzilla.mozilla.org/show_bug.cgi?id=864943

5. When writing new APIs, we should at least consider writing them in
C++. JS should not be the default. Where things are super-easy in JS
and super-annoying in C++, we should consider investing in our C++
infrastructure to make it more pleasant.

Since not everyone reads this newsgroup, I'd appreciate assistance
disseminating (5) in bugs. At the very least, we should ask patch
authors to consider the alternatives before creating new JS modules
that are enabled on B2G.

I'm also going to post this summary to dev-b2g with a pointer back to
this newsgroup.

Thanks for your thoughts, everyone.

-Justin

On Mon, Apr 22, 2013 at 9:46 PM, Nicholas Nethercote
<n.neth...@gmail.com> wrote:
> On Mon, Apr 22, 2013 at 6:35 PM, Justin Dolske <dol...@mozilla.com> wrote:
>>
>> That said, I think it's critically important that we're working to make JS an
>> acceptable -- nay, _excellent_ -- language/runtime for application
>> development for the long run. We can't tell a credible story about why
>> people should write HTML5 apps, if we're tearing out swaths of JS in our own
>> products. Sometimes dogfooding is unpleasant or hard, but that's the whole
>> point.
>
> There's a big difference between apps on Firefox OS, which are likely
> to have relatively short lifetimes and can be killed if they take up
> too much memory, and the main process. Bad memory behaviour in the
> main process is a much bigger deal, and it's something that's
> happening right now with some frequency.
>
> Nick

azakai

Apr 23, 2013, 11:10:34 PM
On Monday, April 22, 2013 7:28:30 AM UTC-7, Justin Lebar wrote:
>
> The issue isn't compilation of code; that doesn't stick out in the
> memory reports. The issue seems to be mostly the overhead of the JS
> engine for each file (even if the file stores very little data, as
> BrowserElementParent does) and also the unavoidable inefficiency
> associated with assigning each worker its own runtime (in particular,
> the fact that this greatly increases fragmentation).
>

Probably this doesn't make sense, but could some of the code running in JS workers be combined to live inside fewer actual workers? The main downside would be less concurrency obviously, but it sounds like it could avoid some memory overhead?
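
A rough sketch of what that could look like (module names and message shapes here are hypothetical): a single worker pulls the separate scripts in and routes messages by topic, so the four separate runtimes collapse into one:

```javascript
// Sketch of combining several workers into one (hypothetical module names).
// Inside the combined worker one would load everything with, e.g.:
//   importScripts("ril_worker.js", "net_worker.js", "wifi_worker.js");
// The routing itself is plain JS and is shown here:
const handlers = new Map();

function register(topic, handler) {
  handlers.set(topic, handler);
}

function dispatch(message) {
  const handler = handlers.get(message.topic);
  if (!handler) throw new Error("no handler for topic: " + message.topic);
  return handler(message.payload);
}

// Each former worker registers under its own topic (toy handlers):
register("ril", payload => ({ ok: true, cmd: payload.cmd }));
register("wifi", payload => ({ ok: true, scan: [] }));

console.log(dispatch({ topic: "ril", payload: { cmd: "dial" } }).cmd); // "dial"
```

The trade-off is exactly the one noted above: messages for all the modules now queue behind one another on a single worker thread.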

- Alon

Caspy7

Apr 25, 2013, 2:41:50 AM
I'm not a developer and so apologize if my understanding is oversimplified or naive.
I had been under the impression that C++ (previously) was not being used at all, apart from Gecko itself, for security and stability reasons. Perhaps I am mistaken or simply misunderstand the structures discussed here.

Anyway, I just wanted to suggest that in the future, once asm.js has come to greater maturity (and of course GGC & lazy bytecode gen), perhaps these workers could be compiled to asm.js. This would keep the security of JS while having high performance and not require a rewrite.
Again, apologies if my underlying assumptions or understandings are errant.

Till Schneidereit

Apr 25, 2013, 8:44:49 AM
to Caspy7, dev-pl...@lists.mozilla.org
The entire discussion is about which technology to use for implementing
platform, i.e. Gecko, features. Gecko consists of many modules, some of
which are written in C++, some in JS.