Minimum vs. Comfortable Building Requirements

ISHIKAWA,chiaki

May 2, 2023, 7:49:19 PM
to dev-pl...@lists.mozilla.org, ishikawa, chiaki

Hi,

When I first started to create patches for Thunderbird about a dozen years ago, I read someone's blog post that said we need 12GB of memory for a comfortable build.
I had to make do with 8GB of memory, and it worked.
Well, the C compiler has grown since then, I think. Many other tools, including the linker, have grown too, and their memory requirements have grown with them.

About half a dozen years ago, I allocated 16GB of memory to my guest Linux in VirtualBox, where I have been building TB ever since.

Last week, I had a hardware issue and had to replace the motherboard, and I took the opportunity to upgrade my PC's memory from 32GB to 48GB.
On a hunch, I increased the memory allocated to VirtualBox from 16GB to 24GB.

Yes, I use VirtualBox to run Linux on my Windows PC.
The guest Linux is where I create and test C-C TB patches. I know VirtualBox may not be the best environment, but I have to use MS Office tools for work, so I have to run Windows side by side, and there used to be quite a lot of hardware gadgets that only run under Windows. Using VirtualBox was one of the solutions for this home PC user who creates patches for TB.

Before giving you the performance numbers, here is my current hardware.
CPU: Ryzen 3700X (8C/16T), but I only allocate 7 virtual CPUs to the VirtualBox guest.
Memory: 48GB, but only 24GB allocated to VirtualBox.
Storage: my guest Linux in VirtualBox uses virtual disks both on an NVMe M.2 SSD and on hard disks (relatively slow at 5400 rpm).
The source tree and object tree are in virtual disks on the NVMe M.2 SSD.
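For anyone reproducing a similar setup, the allocation can be set from the Windows host with VBoxManage (the VM name "buildvm" is just a placeholder for your own):

    # give the build guest 24GB of RAM (in MB) and 7 virtual CPUs
    VBoxManage modifyvm "buildvm" --memory 24576 --cpus 7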

I often change only a few C++ source files in the C-C directory and re-build TB.
So to me, relinking C-C should be fast.
ccache helps a lot; the setup is sketched below.
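For reference, enabling it is a one-liner in the mozconfig (the ccache path here is an assumption; adjust it to your installation):

    # mozconfig: route C/C++ compilations through ccache
    ac_add_options --with-ccache=/usr/bin/ccache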

Anyway, just relinking C-C TB takes time.
Before (with 16GB assigned to the Linux guest inside VirtualBox): almost 10 minutes; slightly less, 550 seconds or so.
After (with 24GB assigned to the Linux guest): 4 min 38 seconds.

A 2x speed-up after the memory size increase (!)

The reason I think the speedup is observed is twofold:
- The file buffer cache within the Linux OS has become larger and thus helps file I/O.
- Fewer page faults. I think this counts.
I am not sure what is causing the page faults (the linker?), but the number of page faults during a C-C TB build was rather large.
I have been observing system status visually using xosview, and I have been puzzled by the bursts of page faults observed during a build.
That is why I added 8GB to my VirtualBox setup once I could afford to add more memory to my PC. It now has 8GBx2 + 16GBx2 (ECC memory). Before the hardware change, it had 8GB x 4 (ECC).
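By the way, if anyone wants hard numbers instead of the xosview graphs, GNU time can report the fault counts for a whole build (this assumes /usr/bin/time is the GNU version, not the shell builtin):

    # -v prints, among other statistics, the page fault counts
    /usr/bin/time -v ./mach build
    # look for these lines in the output:
    #   Major (requiring I/O) page faults: ...
    #   Minor (reclaiming a frame) page faults: ...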

Well, please recall that if we need to compile a sizable number of source files,
the speed-up factor is not that dramatic, because that case is CPU-bound (I have yet to verify this).

But the decrease from 10 minutes to 4.5 minutes of linking by adding 8GB of memory IN MY SETUP is huge for repeatedly running the edit/compile/link cycle on a local PC.

There are some existing suggestions under "Requirements".
For example, "Building Firefox on Linux":
https://firefox-source-docs.mozilla.org/setup/linux_build.html

There, the "Requirements" section states:

  • Memory: 4GB RAM minimum, 8GB+ recommended.

  • Disk Space: At least 30GB of free disk space.

  • Operating System: A 64-bit installation of Linux. It is strongly advised that you use a supported distribution; see Supported Build Hosts. We also recommend that your system is fully up-to-date.

Sure, we could build TB with 4GB of RAM. I did it on 32-bit Linux until the debug symbol table became too large for the 32-bit memory space.
I switched to 64-bit Linux around that time.
With virtual memory, we can certainly build C-C TB with 4GB of real memory, but the page faults slow it down.

I think we SHOULD mention in that paragraph that 16GB or even 24GB of memory would be preferable, without discouraging users with only 8GB of memory on their PCs. We should mention the reduced build time with larger memory.

I should have increased my memory to 16GB much sooner, but my old motherboard did not have good memory support and 32GB was the maximum.
And I somehow had the wrong notion that 16GB was good enough. Well, "good" is a very judgemental word.
Obviously, for a "comfortable" build of C-C TB with the current crop of development tools, which seem to hog memory, 24GB is definitely the minimum IMHO, now that I have verified the numbers.
In that sense, I was wrong to think 16GB was good enough.

Someone might want to rewrite the paragraph in the Mozilla web page(s) by adding a suggested comfortable configuration, based on investigating some people's setups and the build times observed with them.

MS states that 4GB of memory is required for Windows 11.
But what we really want to know is the amount of memory with which we can operate Windows 11 and applications comfortably. Right?

The same goes for Mozilla software development.

Chiaki

Gabriele Svelto

May 3, 2023, 3:33:04 AM
to ISHIKAWA,chiaki, dev-pl...@lists.mozilla.org
On 03/05/23 01:49, ISHIKAWA,chiaki wrote:
> Anyway, just relinking C-C TB takes time.
> Before (with 16GB assigned to the Linux guest inside VirtualBox): almost 10
> minutes; slightly less, 550 seconds or so.
> After (with 24GB assigned to the Linux guest): 4 min 38 seconds.

That's a long time just for linking! What linker are you using? I think
either gold or lld should be able to link Thunderbird much faster than that.
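If you want to pick one explicitly, there is a configure option for it; something like this in your mozconfig should work (use whichever linker you actually have installed):

    # mozconfig: select the linker explicitly (gold, lld, or bfd)
    ac_add_options --enable-linker=lld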

Gabriele

ISHIKAWA,chiaki

May 3, 2023, 4:28:45 AM
to Gabriele Svelto, dev-pl...@lists.mozilla.org
I stand corrected.
The number was not the linking time per se.
Let me explain how I obtained the numbers.
BTW, I use GNU gold and mold.

In order to obtain a ballpark figure for the typical build time after
downloading the daily changes, I tried the following.
I clobbered the tree, reconfigured the tree, and then rebuilt.
By this time, the pre-compiled object files from the previous sessions
are in the cache of ccache and sccache (though I am not sure whether I am
still using the latter).

So basically the time is for re-building the C-C TB binary by
traversing the new source file tree,
invoking ccache where necessary to compile (i.e., fetch the precompiled
object files), and then linking the binaries. The concrete command
sequence is sketched below.
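Concretely, the sequence I timed was something like this (the elapsed time comes from wrapping the last step with the shell's time builtin):

    ./mach clobber      # throw away the object directory
    ./mach configure    # re-run configure on the clean tree
    time ./mach build   # rebuild; ccache supplies most of the object files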

It is a bit of a strange number, but it is the baseline figure for the case where many
files remain identical modulo whitespace changes, and thus
ccache can save the recompilation time. This is true for maybe more than
half the files after a daily source tree update.
Of course, depending on the number of header files changed, sometimes I
basically have to recompile almost all the files.

Halving the elapsed time of this workflow was, by itself, the
payoff of adding the extra 8GB to my Linux image (from 16GB to 24GB).

Anyway, the typical link time alone is about a minute (76 seconds,
including various chores of my bash script that sets environment
variables, etc.).
I found this out by running |mach build| again after the above
scenario of mach {clobber, configure, build}.
In this case, the household chores diminish, and the time drops from
about 450 seconds to 76 seconds.
I think the initial build process records some file system update
information (or the previous compilation process updates the object file
timestamps), so that the second build no longer has to invoke ccache
so often, or can perhaps even skip some subdirectories?
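To see how much of the work ccache actually absorbed in such a run, its own statistics can be consulted (these are plain ccache commands, independent of mach):

    ccache -z     # zero the statistics before the build
    ./mach build
    ccache -s     # show the hit/miss counts afterwards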

Like I said, if I need to compile many files, that would be CPU-bound
and the build time would be much longer.

In any case, I reviewed the page fault rates during a build today, and I
think the cargo library processing seems to generate many page faults.
Even with 24GB of memory I see sustained page faults in the xosview
window.
With 16GB of memory I used to see really long periods of sustained page
faults and wondered what they were.
(This is what makes me wonder whether I am in fact not using GNU gold or
mold for the cargo linking.)
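As a text-mode alternative to xosview for catching these bursts, the fault rate can be sampled while the build runs (this assumes the sysstat package is installed):

    # paging statistics every second; watch the fault/s and majflt/s columns
    sar -B 1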

So the number I gave and its description were a bit misleading, but
larger memory, up to 24GB, is still a plus for the C/C++
edit/compile/link cycle in M-C and C-C development.
Actually, I mostly create C-C patches, but the build time includes the M-C
tree recompilation. Thus I believe the merit of added memory holds true
for FF developers, too.

How can I find out whether I am using GNU gold or mold for the rust library
linking (the CARGO processing, that is)?
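The best I have come up with so far is to inspect a linked artifact directly: mold and lld record themselves in the .comment section, while gold leaves a dedicated note section (the binary path below is just an example from my objdir):

    # mold and lld identify themselves here:
    readelf -p .comment obj-x86_64-pc-linux-gnu/dist/bin/thunderbird
    # gold instead creates a .note.gnu.gold-version section:
    readelf -S obj-x86_64-pc-linux-gnu/dist/bin/thunderbird | grep gold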

Chiaki
