
Recent build time improvements due to unified sources


Gregory Szorc

Nov 19, 2013, 2:15:16 AM
to dev-platform
Do builds feel like they've gotten faster in the last few weeks^hours?
It's because they have.

When I did my MacBook Pro comparison [1] with a changeset from Oct 28,
build times on my 2013 2.6 GHz MacBook Pro were as follows:

Wall 11:13 (673)
User 69:55 (4195)
Sys 6:04 (364)
Total 75:59 (4559)

I just built the tip of m-c (e4b59fdbc9c2) and times on that same
machine are now:

Wall 9:23 (563)
User 57:38 (3458)
Sys 4:58 (298)
Total 62:36 (3756)

That's a 17.6% drop in CPU time required to build the tree! If the build
system were able to deliver 100% CPU utilization, my machine would be
able to build the tree in ~7:50 wall time. Not too shabby!
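The arithmetic behind those figures checks out; a quick sketch, assuming the 8 logical cores (4 physical plus hyperthreading) of a 2013 2.6 GHz MacBook Pro:

```shell
# Check the quoted figures: CPU-time drop and ideal wall time at 100%
# utilization, assuming 8 logical cores.
awk 'BEGIN {
  before = 4559; after = 3756; cores = 8
  printf "CPU time drop: %.1f%%\n", 100 * (before - after) / before
  ideal = after / cores
  printf "ideal wall time: %d:%02d\n", int(ideal / 60), int(ideal % 60)
}'
```

This prints a 17.6% drop and an ideal wall time of 7:49, which rounds to the ~7:50 quoted.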

I can say with high confidence that unified sources are mostly
responsible for the CPU efficiency gains. When I built m-c earlier today
just after Australis landed, total CPU time was at 66:28. In between, 5
unified sources bugs landed and ~4 minutes total CPU time was shaved off.

Project unified sources: making builds insanely faster since yesterday.

I can't wait to see what tomorrow brings.

[1]
http://gregoryszorc.com/blog/2013/11/05/macbook-pro-firefox-build-times-comparison/

Mike Hommey

Nov 19, 2013, 5:49:42 PM
to Gregory Szorc, dev-platform
Just because my build machine (older and slower than your mac) was bored
last night:

$ hg up -C -r 'date("2013-08-01")'
$ ./mach clobber && time ./mach build > /dev/null
real 25m36.250s
user 143m19.573s
sys 8m14.883s

$ hg up -C -r 'date("2013-09-01")'
$ ./mach clobber && time ./mach build > /dev/null
real 22m15.830s
user 122m46.584s
sys 6m49.058s

$ hg up -C -r 'date("2013-10-01")'
$ ./mach clobber && time ./mach build > /dev/null
real 17m41.810s
user 116m21.284s
sys 6m8.659s

$ hg up -C -r 'date("2013-11-01")'
$ ./mach clobber && time ./mach build > /dev/null
real 17m36.424s
user 109m38.891s
sys 5m58.714s

$ hg up -C -r tip
$ ./mach clobber && time ./mach build > /dev/null
real 14m23.843s
user 86m57.834s
sys 4m45.118s

(All linux builds with mk_add_options MOZ_MAKE_FLAGS=-j12)

The corresponding sizes are respectively:
objdir libxul (text+data+bss)
6.1G 877M (59.9M)
5.6G 800M (62.6M)
5.3G 752M (62.8M)
5.2G 727M (62.7M)
4.7G 602M (65.0M)

In less than 4 months, clobber build time on my machine has decreased by ~44%,
objdir size by ~22%, and DWARF size (approximated by libxul-(text+data+bss))
by ~34%.
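Recomputing from the first (2013-08-01) and last (tip) runs above roughly reproduces those figures; the objdir reduction comes out nearer 23% than 22%, the rest are within rounding:

```shell
# Recompute the improvements from the raw numbers quoted above.
awk 'BEGIN {
  printf "clobber time: -%.1f%%\n", 100 * (1536.25 - 863.843) / 1536.25  # 25m36s -> 14m24s
  printf "objdir size:  -%.1f%%\n", 100 * (6.1 - 4.7) / 6.1
  printf "DWARF size:   -%.1f%%\n", 100 * ((877 - 59.9) - (602 - 65.0)) / (877 - 59.9)
}'
```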

Mike

Gregory Szorc

Nov 20, 2013, 1:08:35 AM
to dev-platform
On 11/18/13, 11:15 PM, Gregory Szorc wrote:
> Do builds feel like they've gotten faster in the last few weeks^hours?
> It's because they have.
>
> When I did my MacBook Pro comparison [1] with a changeset from Oct 28,
> build times on my 2013 2.6 GHz MacBook Pro were as follows:
>
> Wall 11:13 (673)
> User 69:55 (4195)
> Sys 6:04 (364)
> Total 75:59 (4559)
>
> I just built the tip of m-c (e4b59fdbc9c2) and times on that same
> machine are now:
>
> Wall 9:23 (563)
> User 57:38 (3458)
> Sys 4:58 (298)
> Total 62:36 (3756)

And 24 hours later, m-c (4f993fa378eb) is getting faster:

Wall 8:47 (527)
User 52:41 (3161)
Sys 4:38 (278)
Total 57:19 (3439)

Shavings of 5:17 / 8.4% CPU time.

People working on bug 939583 are earning Friend of the Tree status for life.

Nicholas Nethercote

Nov 20, 2013, 3:38:12 AM
to Gregory Szorc, dev-platform
On September 12, a debug clobber build on my new Linux desktop took
12.7 minutes. Just then it took 7.5 minutes. Woo!

Nick

Chris Peterson

Nov 20, 2013, 12:09:28 PM
On 11/19/13, 10:08 PM, Gregory Szorc wrote:
> And 24 hours later, m-c (4f993fa378eb) is getting faster:
>
> Wall 8:47 (527)
> User 52:41 (3161)
> Sys 4:38 (278)
> Total 57:19 (3439)

Unified builds currently coalesce source files in batches of 16.

It might be useful to add a files_per_unified_file parameter to
mozconfig or mach build. People could benchmark different values of
files_per_unified_file (trading off clobber vs incremental build times).
The same parameter could also be used to disable unified builds with
files_per_unified_file = 1.
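As a sketch, such a knob might look like this in a mozconfig. Everything below is hypothetical: the variable name is made up for illustration and was not an implemented option at the time of this thread.

```shell
# Hypothetical mozconfig sketch; the variable name is illustrative only,
# not an implemented flag.
mk_add_options MOZ_FILES_PER_UNIFIED_FILE=16   # current default batch size
# mk_add_options MOZ_FILES_PER_UNIFIED_FILE=1  # would effectively disable unification
```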

Benoit Jacob

Nov 20, 2013, 12:37:24 PM
to Chris Peterson, dev-platform
Talking about ideas for further extending the impact of UNIFIED_SOURCES, it
seems that the biggest limitation at the moment is that sources can't be
unified between different moz.build's. Because of that, source directories
that consist of many small sub-directories do not benefit much from
UNIFIED_SOURCES at the moment. I would love to have the ability to declare
in a moz.build that UNIFIED_SOURCES from here downwards, including
subdirectories, are to be unified with each other. Does that sound
reasonable?

Benoit



Gregory Szorc

Nov 20, 2013, 12:43:22 PM
to Benoit Jacob, Chris Peterson, dev-platform
On Nov 20, 2013, at 9:37, Benoit Jacob <jacob.b...@gmail.com> wrote:

> Talking about ideas for further extending the impact of UNIFIED_SOURCES, it
> seems that the biggest limitation at the moment is that sources can't be
> unified between different moz.build's. Because of that, source directories
> that consist of many small sub-directories do not benefit much from
> UNIFIED_SOURCES at the moment. I would love to have the ability to declare
> in a moz.build that UNIFIED_SOURCES from here downwards, including
> subdirectories, are to be unified with each other. Does that sound
> reasonable?

You can do this today by having a parent moz.build list sources in child directories.

Keep in mind that some moz.build/Makefile.in files still customize compiler flags on a directory-by-directory basis.

I would love to see a project that consolidated data into fewer moz.build files. The recent work around how libraries are defined should have made that easier. But there are still things you can only do once per directory. Those limitations will disappear eventually.

Patrick McManus

Nov 20, 2013, 12:47:39 PM
to Nicholas Nethercote, dev-platform, Gregory Szorc
I was skeptical of this work - so I need to say now that it is paying
dividends bigger and faster than I thought it could. Very nice!

Benoit Jacob

Nov 20, 2013, 12:49:14 PM
to Gregory Szorc, Chris Peterson, dev-platform
2013/11/20 Gregory Szorc <g...@mozilla.com>

> On Nov 20, 2013, at 9:37, Benoit Jacob <jacob.b...@gmail.com> wrote:
>
> > Talking about ideas for further extending the impact of UNIFIED_SOURCES,
> it
> > seems that the biggest limitation at the moment is that sources can't be
> > unified between different moz.build's. Because of that, source
> directories
> > that consist of many small sub-directories do not benefit much from
> > UNIFIED_SOURCES at the moment. I would love to have the ability to
> declare
> > in a moz.build that UNIFIED_SOURCES from here downwards, including
> > subdirectories, are to be unified with each other. Does that sound
> > reasonable?
>
> You can do this today by having a parent moz.build list sources in child
> directories.
>

From the perspective of someone porting a directory to UNIFIED_SOURCES,
wanting to make minimal changes at this point to see how much compile-time
improvement we can get without overly intrusive changes everywhere,
that is not the same: switching an entire directory to listing all of its
sources in the parent moz.build is a very intrusive change to the build
system. I've been refraining from doing that for now.

Benoit

Ehsan Akhgari

Nov 20, 2013, 1:18:10 PM
to Benoit Jacob, Chris Peterson, dev-platform
On 2013-11-20 12:37 PM, Benoit Jacob wrote:
> Talking about ideas for further extending the impact of UNIFIED_SOURCES, it
> seems that the biggest limitation at the moment is that sources can't be
> unified between different moz.build's. Because of that, source directories
> that consist of many small sub-directories do not benefit much from
> UNIFIED_SOURCES at the moment. I would love to have the ability to declare
> in a moz.build that UNIFIED_SOURCES from here downwards, including
> subdirectories, are to be unified with each other. Does that sound
> reasonable?

I don't think that we should do this for now. One problem with unified
builds is that adding or removing files can shift things into different
translation units and cause build failures. For this reason, I don't
like the idea of unifying .cpp files from multiple
directories into the same translation unit. But further down the road,
opting into this by moving things into one moz.build file may make
sense. I have experimented with this once in bug 936899.

Cheers,
Ehsan

Ehsan Akhgari

Nov 20, 2013, 1:19:18 PM
to Chris Peterson, dev-pl...@lists.mozilla.org
On 2013-11-20 12:09 PM, Chris Peterson wrote:
> It might be useful to add a files_per_unified_file parameter to
> mozconfig or mach build. People could benchmark different values of
> files_per_unified_file (trading off clobber vs incremental build times).
> The same parameter could also be used to disable unified builds with
> files_per_unified_file = 1.

I agree! See bug 941097.

Cheers,
Ehsan

Ehsan Akhgari

Nov 20, 2013, 2:01:29 PM
to Gregory Szorc, dev-platform
While this analysis is interesting, it doesn't measure the impact of the
unified builds project directly. Now that a good chunk of code is being
compiled in unified mode, I decided we should get some specific numbers
on the improvements.

I took inbound as of ab70db6b27c8 and did a clobber build twice. The
first time I manually converted all UNIFIED_SOURCES variables to SOURCES
using [1], and the second time I reverted that change to build a
pristine inbound. Both builds were done with a warm cache of the entire
source tree (I manually put everything into the cache before each build).

And here are the results:

Before:
22:12.74 Overall system resources - Wall time: 1332s; CPU: 92%; Read
bytes: 50761728; Write bytes: 7229292544; Read time: 39540; Write time:
5736192
real 22m13.648s
user 69m52.462s
sys 5m57.270s

After:
16:07.89 Overall system resources - Wall time: 967s; CPU: 89%; Read
bytes: 10317824; Write bytes: 6860136448; Read time: 12604; Write time:
4981152
real 16m8.469s
user 47m39.219s
sys 4m1.699s


This means that unified builds so far have made our builds around 27%
faster in terms of wall clock, and around 32% faster in terms of CPU time.
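Those percentages follow directly from the time output above (CPU time taken as user plus sys):

```shell
# Wall time from the resource monitor; CPU time is user + sys seconds.
awk 'BEGIN {
  printf "wall: -%.0f%%\n", 100 * (1332 - 967) / 1332
  before = 69*60 + 52.462 + 5*60 + 57.270
  after  = 47*60 + 39.219 + 4*60 + 1.699
  printf "cpu:  -%.0f%%\n", 100 * (before - after) / before
}'
```

This prints -27% wall and -32% CPU, matching the figures quoted.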

These measurements were done on Linux on a 4-core machine.


Last but not least, thanks to everybody who is helping with this project!

Cheers,
Ehsan


[1] $ find . -name moz.build | xargs grep -w UNIFIED_SOURCES | awk -F:
'{print $1}' | sort | uniq | xargs sed -i 's/UNIFIED_SOURCES/SOURCES/'
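For what it's worth, `grep -l` can express the same conversion a bit more directly. This is an equivalent sketch, not what was actually run, and assumes GNU grep and sed:

```shell
# Same effect as the pipeline above: rewrite UNIFIED_SOURCES to SOURCES
# in every moz.build that mentions it.
grep -rlw --include=moz.build UNIFIED_SOURCES . \
  | xargs -r sed -i 's/UNIFIED_SOURCES/SOURCES/'
```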

On 2013-11-19 2:15 AM, Gregory Szorc wrote:
> Do builds feel like they've gotten faster in the last few weeks^hours?
> It's because they have.
>
> When I did my MacBook Pro comparison [1] with a changeset from Oct 28,
> build times on my 2013 2.6 GHz MacBook Pro were as follows:
>
> Wall 11:13 (673)
> User 69:55 (4195)
> Sys 6:04 (364)
> Total 75:59 (4559)
>
> I just built the tip of m-c (e4b59fdbc9c2) and times on that same
> machine are now:
>
> Wall 9:23 (563)
> User 57:38 (3458)
> Sys 4:58 (298)
> Total 62:36 (3756)
>
> That's a 17.6% drop in CPU time required to build the tree! If the build
> system were able to deliver 100% CPU utilization, my machine would be
> able to build the tree in ~7:50 wall time. Not too shabby!
>
> I can say with high confidence that unified sources are mostly
> responsible for the CPU efficiency gains. When I built m-c earlier today
> just after Australis landed, total CPU time was at 66:28. In between, 5
> unified sources bugs landed and ~4 minutes total CPU time was shaved off.
>
> Project unified sources: making builds insanely faster since yesterday.
>
> I can't wait to see what tomorrow brings.
>
> [1]
> http://gregoryszorc.com/blog/2013/11/05/macbook-pro-firefox-build-times-comparison/

Gregory Szorc

Nov 20, 2013, 2:50:28 PM
to Benoit Jacob, Chris Peterson, dev-platform
On 11/20/13, 9:49 AM, Benoit Jacob wrote:
>
>
>
> 2013/11/20 Gregory Szorc <g...@mozilla.com>
>
> On Nov 20, 2013, at 9:37, Benoit Jacob <jacob.b...@gmail.com> wrote:
>
> > Talking about ideas for further extending the impact of
> UNIFIED_SOURCES, it
> > seems that the biggest limitation at the moment is that sources
> can't be
> > unified between different moz.build's. Because of that, source
> directories
> > that consist of many small sub-directories do not benefit much from
> > UNIFIED_SOURCES at the moment. I would love to have the ability
> to declare
> > in a moz.build that UNIFIED_SOURCES from here downwards, including
> > subdirectories, are to be unified with each other. Does that sound
> > reasonable?
>
> You can do this today by having a parent moz.build list sources in
> child directories.
>
>
> From the perspective of someone porting a directory to UNIFIED_SOURCES,
> and wanting to make minimal changes at this point to see how much
> compile time improvement we can get without making too intrusive changes
> everywhere, that is not the same: Switching an entire directory to
> listing all sources in the parent moz.build is a very intrusive change
> to make to the build system. I've been refraining from doing that for now.

Having the build automagically do this right now is dangerous. The rules
to derive the flags to pass to the compiler are dependent on the
moz.build/Makefile.in the source definition came from. Since we don't
have things like CFLAGS in moz.build files yet nor do we have the logic
for deriving compiler flags in Python (they live in config.mk), having
variables defined in child directories magically get moved to parent
directories breaks the contract on how compiler flags are derived.

I'd therefore strongly prefer the user-facing lists in moz.build files
closely match what's actually happening.

Long term, we'll have a better way of defining libraries. Right now, you
produce one library/archive per directory. In the future, we'll likely
expose a "class" to represent libraries/translation units and you'll be
able to define multiple instances in one moz.build file. This requires
significantly revamping our make logic/rules first, so it's not a
trivial effort. In time.

Zack Weinberg

Nov 20, 2013, 5:06:01 PM
On 2013-11-20 12:37 PM, Benoit Jacob wrote:
> Talking about ideas for further extending the impact of UNIFIED_SOURCES, it
> seems that the biggest limitation at the moment is that sources can't be
> unified between different moz.build's. Because of that, source directories
> that consist of many small sub-directories do not benefit much from
> UNIFIED_SOURCES at the moment. I would love to have the ability to declare
> in a moz.build that UNIFIED_SOURCES from here downwards, including
> subdirectories, are to be unified with each other. Does that sound
> reasonable?

... Maybe this should be treated as an excuse to reduce directory nesting?

zw

Robert O'Callahan

Nov 20, 2013, 5:27:14 PM
to Zack Weinberg, dev-pl...@lists.mozilla.org
We don't need an excuse!

layout/xul/base/src, and pretty much everything under content/, I'm looking
at you.

Rob

Ehsan Akhgari

Nov 20, 2013, 5:48:27 PM
to rob...@ocallahan.org, Zack Weinberg, dev-pl...@lists.mozilla.org
On 2013-11-20 5:27 PM, Robert O'Callahan wrote:
> On Thu, Nov 21, 2013 at 11:06 AM, Zack Weinberg <za...@panix.com> wrote:
>
> We don't need an excuse!
>
> layout/xul/base/src, and pretty much everything under content/, I'm looking
> at you.

How do you propose that we know which directory contains the "source" then?

</sarcasm>

Ehsan

Benoit Jacob

Nov 20, 2013, 5:59:00 PM
to Ehsan Akhgari, dev-pl...@lists.mozilla.org, Zack Weinberg, Robert O'Callahan
2013/11/20 Ehsan Akhgari <ehsan....@gmail.com>

> On 2013-11-20 5:27 PM, Robert O'Callahan wrote:
>
>> On Thu, Nov 21, 2013 at 11:06 AM, Zack Weinberg <za...@panix.com> wrote:
>>
>> We don't need an excuse!
>>
>> layout/xul/base/src, and pretty much everything under content/, I'm
>> looking
>> at you.
>>
>
> How do you propose that we know which directory contains the "source" then?
>

And I always thought that all public: methods had to go in the public/
directory!

Benoit


>
> </sarcasm>
>
> Ehsan

Gregory Szorc

Nov 21, 2013, 12:57:49 AM
to dev-platform
On 11/19/2013 10:08 PM, Gregory Szorc wrote:
> On 11/18/13, 11:15 PM, Gregory Szorc wrote:
>> Do builds feel like they've gotten faster in the last few weeks^hours?
>> It's because they have.
>>
>> When I did my MacBook Pro comparison [1] with a changeset from Oct 28,
>> build times on my 2013 2.6 GHz MacBook Pro were as follows:
>>
>> Wall 11:13 (673)
>> User 69:55 (4195)
>> Sys 6:04 (364)
>> Total 75:59 (4559)
>>
>> I just built the tip of m-c (e4b59fdbc9c2) and times on that same
>> machine are now:
>>
>> Wall 9:23 (563)
>> User 57:38 (3458)
>> Sys 4:58 (298)
>> Total 62:36 (3756)
>
> And 24 hours later, m-c (4f993fa378eb) is getting faster:
>
> Wall 8:47 (527)
> User 52:41 (3161)
> Sys 4:38 (278)
> Total 57:19 (3439)

And 24 hours later on inbound, on the evening of November 20th in Mountain View:

Wall 8:33 (513)
User 49:48 (2988)
Sys 4:21 (261)
Total 54:09 (3249)

"Only" 3:20 CPU time reduction today.

Is it time to start any betting pools? Between ongoing unification work
and planned build system work, I think 6:30 wall time is achievable by
end of year.

Neil

Nov 21, 2013, 5:04:39 AM
There used to be a limitation that source files had to be in the VPATH.
This limitation obviously does not apply to unified sources (the
compiler will use the -I path to find the source), so you shouldn't have
a problem setting UNIFIED_SOURCES in a parent moz.build file. Indeed, we
can probably avoid setting VPATH altogether (possibly speeding up make).


Michael Shal

Nov 21, 2013, 9:53:24 AM
to Neil, dev-pl...@lists.mozilla.org

> From: "Neil" <ne...@parkwaycc.co.uk>
> There used to be a limitation that source files had to be in the VPATH.
> This limitation obviously does not apply to unified sources (the
> compiler will use the -I path to find the source.) so you shouldn't have
> a problem setting UNIFIED_SOURCES in a parent moz.build file. Indeed, we
> can probably avoid setting VPATH altogether (possibly speeding up make).

Just a heads up - we no longer have the VPATH limitation since bug 888016 landed. There are still many uses of VPATH in the tree, which are slowly being eliminated as things are converted to moz.build (or if I ever get back to bug 875013 :)

In any case, we don't need to use VPATH or -I flags for unified sources - we can just use #include "path/to/source.cpp". It looks like some unified sources do this already: see intl/uconv/src for an example. Once flags are moved over, we could look at combining subdir sources into the parent directory automatically.
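Conceptually, a generated unified file is nothing more than a list of such includes; here is a hypothetical sketch of what the build backend emits (the file and source names are made up for illustration):

```shell
# Sketch: a unified file simply #includes its member sources by path,
# so no VPATH or -I lookup is needed. All names below are illustrative.
cat > UnifiedExample0.cpp <<'EOF'
#include "path/to/First.cpp"
#include "path/to/Second.cpp"
EOF
```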

-Mike

Gregory Szorc

Nov 21, 2013, 10:38:00 AM
to dev-platform
You people are sick. I go to bed, wake up and my builds have gotten
faster (09e33431c543):

Wall 8:14 (494)
User 48:29 (2909)
Sys 4:16 (256)
Total 52:45 (3165)

By disabling WebRTC and ICU, I'm able to achieve:

Wall 7:14 (434)
User 42:22 (2542)
Sys 3:30 (210)
Total 45:52 (2752)

Nicholas Nethercote

Nov 21, 2013, 9:53:08 PM
to Gregory Szorc, dev-platform
On Thu, Nov 21, 2013 at 7:38 AM, Gregory Szorc <g...@mozilla.com> wrote:
>
> You people are sick. I go to bed, wake up and my builds have gotten faster

And I've gone from 7.5 minutes to 6.75 minutes in the past day or two.

Nick

Gregory Szorc

Nov 28, 2013, 12:06:46 AM
to dev-platform
> You people are sick. I go to bed, wake up and my builds have gotten
> faster (09e33431c543):
>
> Wall 8:14 (494)
> User 48:29 (2909)
> Sys 4:16 (256)
> Total 52:45 (3165)
>
> By disabling WebRTC and ICU, I'm able to achieve:
>
> Wall 7:14 (434)
> User 42:22 (2542)
> Sys 3:30 (210)
> Total 45:52 (2752)

On de5aa094b55f, we're now down to:

Wall 7:37 (457)
User 45:38 (2738)
Sys 3:54 (234)
Total 49:32 (2972)

That's with WebRTC and ICU enabled.

Gabriele Svelto

Nov 28, 2013, 12:21:12 AM
to Gregory Szorc, dev-platform
On 28/11/2013 06:06, Gregory Szorc wrote:
> On de5aa094b55f, we're now down to:
>
> Wall 7:37 (457)
> User 45:38 (2738)
> Sys 3:54 (234)
> Total 49:32 (2972)
>
> That's with WebRTC and ICU enabled.

Looking at my own stats while building I was wondering if anybody has
looked at peak memory consumption with unified sources. Since now we're
compiling up to 16 files per compiler instance I would imagine that peak
memory consumption has increased, possibly significantly. This could be
offset by the fact that most of the files will be sharing headers but I
still wonder how much of an impact it has.

Gabriele

Gregory Szorc

Nov 28, 2013, 2:09:56 AM
to Gabriele Svelto, dev-platform
Peak RSS likely has increased significantly (by hundreds of megabytes
to gigabytes). However, you can offset this by building in non-unified mode
(--disable-unified-compilation) or by decreasing the concurrency of
building (make -j).
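In mozconfig terms, those two offsets look roughly like this (a sketch based on the options named above; adjust -j to taste):

```shell
# Trade build speed for lower peak memory:
ac_add_options --disable-unified-compilation   # back to one file per compile
mk_add_options MOZ_MAKE_FLAGS=-j4              # lower build concurrency
```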

Memory is cheap and getting cheaper. Nobody paid by Mozilla to develop
Firefox should have a machine with less than 16 GB. Adding 25%+ to build
times to accommodate people on old hardware is not acceptable. We can,
however, add build system warnings when "inadequate" hardware is
detected. Bug 914431 is related. Long term, we could explore having the
build system adapt to the system's abilities e.g. by dynamically
adjusting concurrency based on detection of swapping. Things like this
are low in priority compared to making builds faster for the majority of
builders. In the interim, patches welcome.

Gabriele Svelto

Nov 28, 2013, 2:29:39 AM
to Gregory Szorc, dev-platform
On 28/11/2013 08:09, Gregory Szorc wrote:
> Peak RSS likely has increased significantly (hundreds to gigabytes
> range).

OK, that's what I was curious about.

> Memory is cheap and getting cheaper. Nobody paid by Mozilla to develop
> Firefox should have a machine with less than 16 GB. Adding 25%+ to build
> times to accommodate people on old hardware is not acceptable.

My desktop machine's got 32 GiB, but there are a number of contributors
and partners' employees who might not be on such beefy hardware. We often
had issues in the past with potential FxOS contributors running into OOM
errors while doing their first build. Another common case of OOMs are
people building inside a VM. Just yesterday I helped another mozillian
figure out why his FxOS build had started to fail. It turned out he was
building inside a VM with too many jobs and too little memory dedicated
to it.

Also I'm not sure how our automated build infrastructure copes with this
but I would assume we should have enough memory there (or if we don't
the memory/CPU time trade-off is probably worth the improvement in build
times).

> We can, however, add build system warnings when "inadequate" hardware is
> detected. [...] In the interim, patches welcome.

Since we compute the number of build jobs ourselves (unless the user
manually overrides that value) we might enforce some simple
rule-of-thumb like no more than 1 build job per GB of memory
irrespective of the number of CPUs, or something like that. I'm willing
to contribute something along the lines but I'd first like to gather
some memory consumption measures with and without unified sources.
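That rule of thumb is easy to sketch. The helper below is hypothetical (name and shape made up for illustration): it caps the job count at one job per GB of RAM, never exceeding the core count.

```shell
# Hypothetical helper: pick a job count capped at one job per GB of RAM
# and at the number of cores, whichever is smaller.
max_jobs() {
  cores=$1
  mem_gb=$2
  [ "$mem_gb" -lt 1 ] && mem_gb=1   # never drop below one job
  if [ "$cores" -lt "$mem_gb" ]; then
    echo "$cores"
  else
    echo "$mem_gb"
  fi
}

# e.g. on Linux:
#   max_jobs "$(nproc)" "$(awk '/MemTotal/ {print int($2/1048576)}' /proc/meminfo)"
```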

Gabriele

Gregory Szorc

Nov 28, 2013, 2:38:16 AM
to Gabriele Svelto, dev-platform
mach already records memory usage during builds if psutil installs
properly during configure. See objdir/.mozbuild/build_resources.json.
This only works with |mach build|.

I'd love to see patches to
python/mozbuild/mozbuild/resources/html-build-viewer/ to render memory
stats.

Neil

Nov 28, 2013, 4:57:55 AM
Gabriele Svelto wrote:

> Another common case of OOMs are people building inside a VM. Just
> yesterday I helped another mozillian figure out why his FxOS build had
> started to fail. It turned out he was building inside a VM with too
> many jobs and too little memory dedicated to it.

I often build in a VM. I allocate 2 CPUs and 2GB of RAM to it, which
seems to be enough even to link xul.dll with debug symbols, although it
takes a few minutes. (Linking it without symbols takes just seconds.)
Given the amount of memory needed to link I haven't considered the case
of running out of RAM during compiling, although I did manage to run out
of virtual disk space for temporary files when I accidentally built with
-j instead of -j3.

Gabriele Svelto

Nov 28, 2013, 6:49:38 AM
to Neil, dev-pl...@lists.mozilla.org
On 28/11/2013 10:57, Neil wrote:
> I often build in a VM. I allocate 2 CPUs and 2GB of RAM to it, which
> seems to be enough even to link xul.dll with debug symbols, although it
> takes a few minutes. (Linking it without symbols takes just seconds.)
> Given the amount of memory needed to link I haven't considered the case
> of running out of RAM during compiling, although I did manage to run out
> of virtual disk space for temporary files when I accidentally built with
> -j instead of -j3.

I assume this was a Firefox for Windows build, correct? This week I
found an issue that causes FxOS's gecko to be built with -j16
irrespective of the number of jobs you specify on the command-line or in
the .config file. That's likely the cause for some of these issues;
fortunately it's FxOS-specific. I'm fixing this problem as part of bug
888698.

Gabriele

Ehsan Akhgari

Nov 28, 2013, 12:14:32 PM
to Gabriele Svelto, Gregory Szorc, dev-platform
On 11/28/2013, 2:29 AM, Gabriele Svelto wrote:
> On 28/11/2013 08:09, Gregory Szorc wrote:
>> Peak RSS likely has increased significantly (hundreds to gigabytes
>> range).
>
> OK, that's what I was curious about.
>
>> Memory is cheap and getting cheaper. Nobody paid by Mozilla to develop
>> Firefox should have a machine with less than 16 GB. Adding 25%+ to build
>> times to accommodate people on old hardware is not acceptable.
>
> My desktop machine's got 32GiB but there's a number of contributors or
> partners' employees that might not be on such beefy hardware. We often
> had issues in the past with potential FxOS contributors running into OOM
> errors while doing their first build. Another common case of OOMs are
> people building inside a VM. Just yesterday I helped another mozillian
> figure out why his FxOS build had started to fail. It turned out he was
> building inside a VM with too many jobs and too little memory dedicated
> to it.

Please file a bug (and CC me) with more details about where the build
fails, how much memory the machine has, and how much memory each
compiler invocation consumes (test with make -j1). We can adjust the
number of files we build in one chunk per directory, so this is easily
fixable. "Buy more RAM" is a terrible answer to this kind of problem.

> Also I'm not sure how our automated build infrastructure copes with this
> but I would assume we should have enough memory there (or if we don't
> the memory/CPU time trade-off is probably worth the improvement in build
> times).

Note that I would expect the peak memory usage in any build to be
dominated by the linker's memory usage. If the compiler invocations
end up consuming an excessive amount of memory in a given directory, we
should fix that.

Cheers,
Ehsan

Mike Hoye

Nov 28, 2013, 1:21:45 PM
to dev-pl...@lists.mozilla.org
On 11/28/2013, 12:14 PM, Ehsan Akhgari wrote:

> Please file a bug (and CC me) with more details about where the build
> fails, how much memory the machine has, and how much memory each
> compiler invocation consumes (test with make -j1). We can adjust the
> number of files we build in one chunk per directory, so this is easily
> fixable. "Buy more RAM" is a terrible answer to this kind of problem.

Particularly when that's not an option at all for many current and
prospective contributors, yeah.

Right now the build page on MDN says "More than 2 GB of RAM for recent
Firefox builds, 4 GB or higher recommended", but it's important for our
community to have a more granular understanding of our requirements
there, and to make sure it stays possible - not "efficient" or
"pleasant" or anything, just "possible at all" - to build Firefox on
low-end machines.

- mhoye

Ehsan Akhgari

Nov 28, 2013, 4:12:31 PM
to Mike Hoye, dev-pl...@lists.mozilla.org
On 11/28/2013, 1:21 PM, Mike Hoye wrote:
> On 11/28/2013, 12:14 PM, Ehsan Akhgari wrote:
>
>> Please file a bug (and CC me) with more details about where the build
>> fails, how much memory the machine has, and how much memory each
>> compiler invocation consumes (test with make -j1). We can adjust the
>> number of files we build in one chunk per directory, so this is easily
>> fixable. "Buy more RAM" is a terrible answer to this kind of problem.
>
> Particularly when that's not an option at all for many current and
> prospective contributors, yeah.

Exactly.

> Right now the build page on MDN says " More than 2 GB of RAM for recent
> Firefox builds, 4 GB or higher recommended", but it's important for our
> community to have a more granular understanding of our requirements
> there, and to make sure it stays possible - not "efficient" or
> "pleasant" or anything, just "possible at all" - to build Firefox on
> low-end machines.

Right. FWIW, this is probably as good a forum as any to voice my
objection to the recent trend of recommending that people buy new
hardware to get faster builds. There is a *lot* that we can still do to
improve our build times, and while those of us who work for MoCo/MoFo
may be able to enjoy the luxury of asking their employer for a new
machine, we should never assume that for everybody. Back when I was a
non-paid contributor, I used a relatively slow machine for all of my
Mozilla development, and I couldn't really afford the latest and
greatest hardware. If our standard answer to this kind of complaint is
"buy better hardware" then we're excluding a very important part of our
community from consideration. We should not do that.

Cheers,
Ehsan

Nicholas Nethercote

Nov 28, 2013, 4:45:27 PM
to Ehsan Akhgari, dev-platform, Mike Hoye
On Thu, Nov 28, 2013 at 1:12 PM, Ehsan Akhgari <ehsan....@gmail.com> wrote:
>
> Right. FWIW, this is probably as good a forum as any to voice my objection
> on the recent trend of recommending people to buy new hardware to get faster
> builds. There is a *lot* that we can still do in order to improve our build
> times, and while those of us who work for MoCo/MoFo may be able to enjoy the
> luxury of asking their employer for a new machine, we should never assume
> that for everybody.

I'm pretty sure that gps was saying "if you're paid to work by
Mozilla, get a faster machine".

More generally, we're all in furious agreement: fast builds are good;
achieving them via multiple means is worthwhile; those with the
option of getting faster hardware should do so.

Nick

Mike Hoye

Nov 28, 2013, 4:55:39 PM
to Nicholas Nethercote, Ehsan Akhgari, dev-platform
On 11/28/2013, 4:45 PM, Nicholas Nethercote wrote:
> I'm pretty sure that gps was saying "if you're paid to work by
> Mozilla, get a faster machine". More generally, we're all in furious
> agreement: fast builds are good; achieving them via multiple means is
> worthwhile; those with the option of getting faster hardware should do so.

That's all true, but if we end up in a place where a box with "only" 2GB
of RAM can't build Firefox at all that will be a serious problem, and a
major barrier to community participation. I get that this can't be true
for phones, sure, but "official support" for Firefox shouldn't just mean
"you can run Firefox on this platform", it should as often as possible
also mean "you can build Firefox on this platform."


- mhoye

Ehsan Akhgari

Nov 28, 2013, 5:12:53 PM11/28/13
to Mike Hoye, Nicholas Nethercote, dev-platform
We won't get there, at least not any time soon. Which is why I'm asking
people to file bugs if the unified build stuff has caused that
limitation to appear. Hardware that could build our code two weeks ago
should not be considered as "inadequate" because of my work here, that's
all I'm saying. :-)

Cheers,
Ehsan

Mike Hommey

Nov 28, 2013, 5:38:29 PM11/28/13
to Ehsan Akhgari, Nicholas Nethercote, Mike Hoye, dev-platform
That being said, I could be wrong, but I doubt a machine with 2 GB,
unless it has many cores, will have much trouble compiling unified
sources.

Mike

Gregory Szorc

Nov 29, 2013, 9:37:59 PM11/29/13
to Ehsan Akhgari, Mike Hoye, Nicholas Nethercote, dev-platform
On 11/29/13, 5:12 AM, Ehsan Akhgari wrote:
> On 11/28/2013, 4:55 PM, Mike Hoye wrote:
>> On 11/28/2013, 4:45 PM, Nicholas Nethercote wrote:
>>> I'm pretty sure that gps was saying "if you're paid to work by
>>> Mozilla, get a faster machine". More generally, we're all in furious
>>> agreement: fast builds are good; achieving them via multiple means is
>>> worthwhile; those with the option of getting faster hardware should do
>>> so.
>>
>> That's all true, but if we end up in a place where a box with "only" 2GB
>> of RAM can't build Firefox at all that will be a serious problem, and a
>> major barrier to community participation. I get that this can't be true
>> for phones, sure, but "official support" for Firefox shouldn't just mean
>> "you can run Firefox on this platform", it should as often as possible
>> also mean "you can build Firefox on this platform."
>
> We won't get there, at least not any time soon. Which is why I'm asking
> people to file bugs if the unified build stuff has caused that
> limitation to appear. Hardware that could build our code two weeks ago
> should not be considered as "inadequate" because of my work here, that's
> all I'm saying. :-)

People with less than 8 GB of RAM have likely had issues building Firefox
for years. Linking libxul in particular is very painful. Depending on
the platform, you have to page in up to a few GB of object files. On top
of that, the linker allocates a few GB. About a year ago, I calculated
that you needed 9-10 GB available memory to build on Linux without
incurring any disk I/O. If you had any less than that and were on a
mechanical disk, your build times were a minute or two slower.

What unified sources have done is increase memory consumption during
compilation, thus increasing the risk of swapping there. Previously,
swapping was most likely during linking libxul.
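For readers unfamiliar with the mechanism: a unified source is just a
generated .cpp file that textually includes several real ones, so a single
compiler invocation parses the shared headers once but must hold all of
the included translation units in memory at the same time. A hypothetical
sketch (real generated files have names like Unified_cpp_<dir><n>.cpp;
the file and source names below are made up):

```shell
# Hypothetical illustration of what the build system generates.
# One compiler process now handles three sources at once: fewer
# process startups and less repeated header parsing (the CPU win),
# at the cost of a larger peak working set per compiler invocation.
cat > Unified_cpp_example0.cpp <<'EOF'
#include "SourceA.cpp"
#include "SourceB.cpp"
#include "SourceC.cpp"
EOF
```

The more sources go into each unified file, the bigger the win from
shared header parsing, and the bigger each compiler process's memory
footprint.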

What I meant in my previous reply was that I'm concerned about users
with modern machines being handicapped by the needs of those with slower
machines. e.g. if we have to decrease the number of sources going into
each unified source to reduce memory and this change makes builds 6
minutes instead of 5 minutes for a modern machine with tons of RAM,
that's not cool. I would much rather we explore different build modes
for people with different hardware configurations. e.g. decrease
concurrency on machines with insufficient memory. The build system can
also be much more proactive about suggesting faster build configurations
(e.g. use Clang over GCC, use a more modern compiler, disable debug
symbols, use gold, etc.). Unfortunately, we have practically zero insight
into what people are using in the wild, so it's difficult to target pain
points and to measure the impact of various changes. This is why I want
reporting of build stats to Mozilla. Maybe in Q1.
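For anyone wanting to try those suggestions today, a mozconfig along
these lines is a starting point. Treat it as a sketch, not a
recommendation: the option names are standard mozconfig syntax, but
their availability and effect vary by platform, toolchain, and tree
revision, so verify each one against your own tree.

```shell
# Sketch of a mozconfig applying the faster-build suggestions above.

# Prefer Clang over GCC.
export CC=clang
export CXX=clang++

# Use the gold linker (GNU toolchains).
export LDFLAGS="-fuse-ld=gold"

# Skip debug symbols to reduce link-time memory and disk I/O.
ac_add_options --disable-debug-symbols

# On a machine with limited RAM, lower make concurrency so fewer
# compiler processes compete for memory at once.
mk_add_options MOZ_MAKE_FLAGS="-j2"
```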