
building firefox with tup


Mike Shal

Jul 4, 2012, 5:00:50 PM
to dev-pl...@lists.mozilla.org
Hello,

I was trying to build Firefox from source to try out some changes (I
wanted to see if I could figure out why it was syncing excessively),
and was a bit frustrated with the incremental build performance. From
searching the forums & blog posts I can see that others have concerns
about it as well. It looks like some attempts have been made to fix
this (CMake, non-recursive make, and some talk of gyp). Since I've
written the tup build system (http://gittup.org/tup/), I thought I
would give it a try with that. I've summarized my results here:

http://gittup.org/building-firefox-with-tup.html

In short, it is not a real port, but it is able to build a working
firefox executable (with support files) on my Linux machine. I didn't
try to integrate with the configure layer or get it to work on other
platforms yet - this was just a throw-away test to see if it would
build at all. I think the initial results are promising:

$ touch mozilla-central-tup-1/browser/app/nsBrowserApp.cpp
$ time tup upd
[ tup ] [0.000s] No filesystem scan - monitor is running.
[ tup ] [0.000s] Reading in new environment variables...
[ tup ] [0.001s] No Tupfiles to parse.
[ tup ] [0.001s] No files to delete.
[ tup ] [0.002s] Executing Commands...
1) [1.001s] obj-default/mozilla-central-tup-1/browser/app: C++ nsBrowserApp.cpp
2) [0.012s] obj-default/mozilla-central-tup-1/browser/app: LD -Ur built-in.o
3) [0.022s] obj-default/mozilla-central-tup-1/browser: LD -Ur built-in.o
4) [0.115s] obj-default/mozilla-central-tup-1/dist/bin: Link firefox
[ ] 100%
[ tup ] [1.180s] Updated.

real 0m1.199s
user 0m1.003s
sys 0m0.162s

This shows that changing nsBrowserApp.cpp and rebuilding firefox has a
total turn-around time of 1.2 seconds. Tup actually starts the
compiler in 2 milliseconds, so it is quite fast. Moreover, this was in
a project where I cloned the mozilla-central repository 10 times, so
it is essentially a project that is 10x larger than what you normally
build with make. For comparison, a single mozilla-central no-op build
with make takes over 2 minutes on the same machine.

I think a real port (described in a bit more detail in the results
page) would take a few months to implement, if there is any interest.
This can be done while keeping the existing make infrastructure in
place so that either can be used. I would love to help out with this,
and I think the result would be useful to the developers who like a
fast & accurate incremental build.

Any feedback, questions, or concerns are welcome.

-Mike

Nicolas Silva

Jul 4, 2012, 6:55:54 PM
to Mike Shal, dev-pl...@lists.mozilla.org

If you can make the build system significantly better, it would make me VERY happy.
I am pretty sure I am not the only one that would love shorter incremental build times.

Cheers,

Nical
_______________________________________________
dev-platform mailing list
dev-pl...@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform

Robert O'Callahan

Jul 4, 2012, 7:45:22 PM
to Mike Shal, dev-pl...@lists.mozilla.org
That is impressive. Nice work!

How hard would it be to repeat the experiment on Windows and Mac? Windows
currently has the slowest builds (by quite a bit, even with pymake), so
it would be the most interesting.

Rob
--
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?" [Matthew 5:43-47]

Rafael Ávila de Espíndola

Jul 4, 2012, 11:36:19 PM
to dev-pl...@lists.mozilla.org, mar...@gmail.com
> Any feedback, questions, or concerns are welcome.

First of all congratulations. The paper, tup itself and getting it to
build firefox are all impressive.

Having seen better (but still alpha, in your terminology) build systems
while working on LLVM, I would love to see Firefox get a better one. Tup's
support for deleting old files in particular would be handy for replacing
our "make package".

I guess the first question is how far you want to push this. Do you
want to try to replace the current build system?

On tup itself:

* What is the plan for configuration? Just keep autoconf for now? If
that is possible it should save a lot of work in porting a lot of m4.

* If I understand it correctly, tup "plugs" into the OS to find out which
files are used by a given command execution. It uses this both to report
an error if a dependency is not declared (really neat!) and to collect
extra dependencies like the ones traditionally retrieved from "gcc -MM".
Is that correct?

* Does it handle adding a header file earlier in a search path than
where the current one was found? Just a curiosity; no build system I
know handles that :-)

* How hard is the "plug" to port? Windows, OS X and Linux cover most
users/developers, but Firefox does have a very long tail. Can tup fall
back to running gcc -MM?

* If gcc -MM is not an option, is there a plan to support a backend
targeting tup in cmake, gyp or another generator? Maintaining two build
systems is hard, especially if one of them is our current build system :-(

> -Mike

Thanks,
Rafael

Benoit Jacob

Jul 5, 2012, 12:22:51 AM
to Mike Shal, dev-pl...@lists.mozilla.org
At least Tup has the funniest homepage of any software project I've
seen (except of course for those hosted at sam.zoy.org):

"In a typical build system, the dependency arrows go down. Although
this is the way they would naturally go due to gravity, it is
unfortunately also where the enemy's gate is. This makes it very
inefficient and unfriendly. In tup, the arrows go up. This is
obviously true because it rhymes."

( From http://gittup.org/tup/ )

Benoit

2012/7/4 Mike Shal <mar...@gmail.com>:
> Any feedback, questions, or concerns are welcome.
>

Justin Lebar

Jul 5, 2012, 12:46:55 AM
to Mike Shal, dev-pl...@lists.mozilla.org
This is very cool stuff. I particularly like the fact that this
system relies on monitoring which files the compiler accesses instead
of using gcc -MMD, because the former approach works when your
"compiler" is an arbitrary program -- we have many such programs in
our build.

> [From http://gittup.org/building-firefox-with-tup.html]
> (This my first foray into hg, so please let me know if something is wrong with
> the repository).

If you prefer to use git, we have an unofficial but widely-used git port of m-c:

https://github.com/mozilla/mozilla-central/

-Justin

Mike Hommey

Jul 5, 2012, 1:40:10 AM
to Mike Shal, dev-pl...@lists.mozilla.org
Hi Mike,

On Wed, Jul 04, 2012 at 05:00:50PM -0400, Mike Shal wrote:
> Hello,
>
> I was trying to build Firefox from source to try out some changes (I
> wanted to see if I could figure out why it was syncing excessively),
> and was a bit frustrated with the incremental build performance. From
> searching the forums & blog posts I can see that others have concerns
> about it as well. It looks like some attempts have been made to fix
> this (CMake, non-recursive make, and some talk of gyp). Since I've
> written the tup build system (http://gittup.org/tup/), I thought I
> would give it a try with that. I've summarized my results here:
>
> http://gittup.org/building-firefox-with-tup.html

(snip)

Thanks for the awesome work.

Interestingly, our own Michael Wu has been experimenting with tup and
got something working too. You should get in touch with him (mwu at
mozilla)

That being said, the current plan for our build system is more or less
outlined in [1], at least up to and including part 4.

The path beyond that is not clear yet, but while tup is interesting, there
are other alternatives to make that are attractive too. Although I don't
like its syntax, gyp is the most promising one, because it's a meta
system that allows creating MSVC or Xcode projects, as well as
Makefiles and such. Having a gyp backend for tup would definitely be a
plus.

Cheers,

Mike

1. http://gregoryszorc.com/blog/2012/06/25/improving-mozilla%27s-build-system/

Mike Shal

Jul 5, 2012, 1:53:30 PM
to rob...@ocallahan.org, dev-pl...@lists.mozilla.org
Hi Rob,

On Wed, Jul 4, 2012 at 7:45 PM, Robert O'Callahan <rob...@ocallahan.org> wrote:
> That is impressive. Nice work!
>
> How hard would it be to repeat the experiment on Windows and Mac? Windows
> currently has the slowest builds (by quite a bit, even with pymake), so
> would be the most interesting.

Hmm, off-hand I'm not too sure. As I described briefly in the results
page, everything was hard-coded assuming my current Linux setup. So
for example, netwerk/wifi/Tupfile explicitly lists
nsWifiScannerUnix.cpp, whereas the Makefile.in file lists this in
platform conditionals. To repeat this experiment on other platforms,
it could probably be hard-coded to another platform without too much
effort (ie: replacing that line with nsWifiScannerMac.cpp, or
whatever). This was the first time I looked into the Mozilla build
system, so I don't know off-hand how many instances of these there
are, or if there are other significant differences between the
platforms. The trickiest part I think would be updating the CXXFLAGS
definitions, which I put together using some scripts to scrub the make
output and generate Tuprules.tup files. Long-term of course it would
be better to just include Makefile.in (or the generated Makefile) so
that all of those conditionals propagate to tup and don't have to be
re-implemented.
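
For illustration, the hard-coded rule in that Tupfile has roughly this
shape (the actual command and flags are not exactly these; this is just
the form of a tup rule):

: nsWifiScannerUnix.cpp |> g++ $(CXXFLAGS) -c %f -o %o |> %B.o

Porting it mostly means swapping the listed source file and the flags
for the target platform.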

That said, I don't think there would be too much difference in the
result for other platforms. Although there are some platform-specific
pieces in tup, the incremental build logic is the same on each
platform. Tup doesn't parse the Tupfiles every time an incremental
build is performed, so the build time for a small change should just
be the time for tup to scan the file-system, plus the time to run the
tools (gcc, etc). The Linux version of tup also has a file monitor for
cases where the file-system scan would take a long time (this is the
difference between the 0.728s and 0.005s null build times). This could be
implemented for the other platforms as well, so an incremental build
will be dominated by the sub-process time.
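
For reference, the monitor is something you start explicitly on Linux:

$ tup monitor    # start the inotify-based file monitor (Linux-only today)
$ tup upd        # while the monitor runs, updates skip the filesystem scan

Without the monitor, tup simply falls back to scanning on each update,
which is where the 0.728s figure comes from.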

-Mike

Mike Shal

Jul 5, 2012, 2:40:43 PM
to Rafael Ávila de Espíndola, dev-pl...@lists.mozilla.org
Hi Rafael,

On Wed, Jul 4, 2012 at 11:36 PM, Rafael Ávila de Espíndola
<respi...@mozilla.com> wrote:
>> Any feedback, questions, or concerns are welcome.
>
>
> First of all congratulations. The paper, tup itself and getting it to build
> firefox are all impressive.

Thanks!

>
> Having seem better (but still Alpha, in your terminology) build systems when
> working in llvm I would love to see firefox get a better one. Tup's support
> for deleting old files in particular would be handy to replace our "make
> package".
>
> I guess the first question in how far do you want to push this. Do you want
> to try to replace the current build system?

In the near-term I'd like to provide tup as an alternative. I think it
can be done by adding a separate tup back-end, and re-using the
per-directory Makefile.in files. I don't see any reason to change the
Makefile.in structure (what I called the "front-end" for lack of a
better term) - it already looks to me like it is very explicit and
easy to understand and modify. I'm not a Mozilla dev though so correct
me if I'm wrong here :)

The idea is that even if you are perfectly happy with make, you can
still use it, and go about adding CPPSRCS and such to Makefile.in.
Both systems should still work with those updates - they wouldn't have
to be done in make and then in tup.

If in the long-term people have switched over to tup, then it might be
worth deprecating the make back-end. I don't think that's something
that would happen for a few years, though.

>
> On tup itself:
>
> * What is the plan for configuration? Just keep autoconf for now? If that is
> possible it should save a lot of work in porting a lot of m4.

I wasn't planning to change anything here. Perhaps the configure
script would have a '--enable-tup' flag to generate Tupfiles, or the
Tupfiles could always be generated alongside the Makefiles. Tup does
have support for a kconfig-style configuration file, but I don't see
any reason to use that here, at least not initially.
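
A possible flow, where the '--enable-tup' flag is purely hypothetical at
this point, would look something like:

$ ./configure --enable-tup    # hypothetical flag: also emit Tupfiles next to the Makefiles
$ tup init                    # one-time: create tup's database at the top of the tree
$ tup upd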

>
> * If I understand it correctly, tup "plugs" in the OS to find out which
> files are used by a given command execution. It uses this both for error if
> a dependency is not declared (really neat!) and to collect extra
> dependencies like the ones that is traditionally retrieved from "gcc -MM".
> Is that correct?

Yes, that is correct. It also checks the output files, so if the
Tupfile has something like:

: |> gcc -c foo.c -o bar.o |> foo.o

where the output (foo.o) doesn't match what gcc actually does
(create bar.o), then tup will flag it as an error.

This does occasionally cause issues with some languages, since you
have to explicitly specify the outputs. There is a recent thread in
tup-users about compilers that create different files depending on the
input files' contents, for example. Part of the reason for my
experiment was to see if there were going to be any roadblocks like
this in the mozilla-central tree (in particular with the various
scripts that generate/process files), but I don't believe there are
any.

>
> * Does it handle addition of header files to a search path before the
> current one was found? Just a curiosity, no build system I know handles that
> :-)

Yes, it should handle this. For example, with a rule like:

: |> gcc foo.c -Ia -Ib -o foo |> foo

and foo.c does:
#include "foo.h"

Then tup will create an entry in its database for both a/foo.h and
b/foo.h, even if only b/foo.h exists. This is a placeholder in the
event that a/foo.h is later created, since it would affect the build
(internally in tup, these are called 'ghost' nodes). I think the Vesta
system can also handle cases like this (http://www.vestasys.org/), but
when I first tried it several years ago I wasn't able to get it to
run. I also didn't like how it was so tightly integrated with version
control, since I wanted to try out the new-fangled DVCSs.

I should mention that the reason tup puts so much effort into handling
situations like this is because of my Rule #2 - where doing
incremental builds must give the same result as a full build. With
make I am never sure if I need to do a clean build after a 'git pull',
or if it's safe to just do a 'make' from the top. Consider also a
bisection, where you are bouncing around the revision history. If an
incremental 'make' gives an incorrect build, then it may throw off the
bisection results and point you to the wrong commit. Or, you could try
to be safe and waste hours at each commit doing a full rebuild.
Neither scenario is desirable for me - I'd rather the build system
just do the right thing. For tup's own development tree, I don't think
I've had to clean out the build area in years. Every step is just
incremental from the last.

I should also mention that the above case was broken on Windows if the
"a" directory didn't exist during the first build, but I'm pushing out
a fix now. Hey, nothing's perfect! :)

>
> * How hard is the "plug" to port? Windows, OS X and Linux cover most
> users/developers, but firefox does have a very long tail. Can tup fallback
> to running gcc -MM?

Unfortunately, I don't have a good answer for this. It largely depends
on what facilities are available on the target OS. For Linux and OSX
(and FreeBSD on the way), it uses a temporary FUSE file-system to run
the sub-process in. Windows has a different implementation that uses
DLL injection, which required someone who knew what they were doing on
Windows to implement. These porting issues are why I suggest keeping
the make build as-is for the foreseeable future, so that no
functionality is lost.

I don't know that gcc -MM support would be very desirable. For one, it
only covers that specific sub-process, so things like the python
scripts that are used in the build process would need their own
treatment. Additionally, I believe the -M family of flags in gcc only
report files that it actually found, so it breaks the previous case of
a header not being found earlier in an include path. Although somewhat
rare, it is nice to be able to just type 'tup upd' and not have to
second-guess the results.

>
> * If gcc -MM is not an option, is there a plan to support a backend
> targeting tup in cmake, gyp or another generator? Maintaining two build
> systems is hard, specially if one of them is our current build system :-(

I haven't looked into getting tup supported by cmake or gyp much
myself. There was a cmake thread a while back about it:
http://cmake.3232098.n2.nabble.com/Effort-to-create-a-new-generator-tup-td4946808.html

But I don't know if any real progress was made. I think one of the
issues was that cmake generates the Makefile to re-run cmake if a
CMakeLists.txt changes (thus updating the Makefile), and tup doesn't
allow the Tupfiles to be generated as part of its build process.

Another option in these cases instead of having cmake/gyp generate
Tupfiles, is to have tup read the CMakeLists.txt (or whatever) files
using an external script via the 'run' directive. So instead of
running 'cmake .; tup upd', it would just be 'tup upd', and it reads
most of the configuration information directly from CMakeLists.txt. I
haven't tried it though, so that may not work at all.
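
As a rough sketch of that idea (the script name here is made up), the
top-level Tupfile would contain a line like:

run ./cmake-to-rules.py CMakeLists.txt

where the script reads CMakeLists.txt and prints ordinary
': input |> command |> output' rules on stdout for tup to parse.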

-Mike

Mike Shal

Jul 5, 2012, 2:44:32 PM
to Justin Lebar, dev-pl...@lists.mozilla.org
On Thu, Jul 5, 2012 at 12:46 AM, Justin Lebar <justin...@gmail.com> wrote:
> This is very cool stuff. I particularly like the fact that this
> system relies on monitoring which files the compiler accesses instead
> of using gcc -MMD, because the former approach works when your
> "compiler" is an arbitrary program -- we have many such programs in
> our build.

Yep, it's supposed to be generic enough to work with anything, since
it just looks at file reads and writes. I did have to update one of
the python scripts that was opening its input files as RDWR though so
tup wouldn't complain :)

>
>> [From http://gittup.org/building-firefox-with-tup.html]
>> (This my first foray into hg, so please let me know if something is wrong with
>> the repository).
>
> If you prefer to use git, we have an unofficial but widely-used git port of m-c:
>
> https://github.com/mozilla/mozilla-central/

Ahh, thanks for the heads up! I am more familiar with git, but I
thought most folks here used hg and I didn't want to have any extra
barriers for its accessibility. Plus it was a good excuse to try out a
new (to me) VCS.

-Mike

Mike Shal

Jul 5, 2012, 3:19:23 PM
to Mike Hommey, dev-pl...@lists.mozilla.org
Hi Mike,

On Thu, Jul 5, 2012 at 1:40 AM, Mike Hommey <m...@glandium.org> wrote:
> Hi Mike,
>
> On Wed, Jul 04, 2012 at 05:00:50PM -0400, Mike Shal wrote:
>> Hello,
>>
>> I was trying to build Firefox from source to try out some changes (I
>> wanted to see if I could figure out why it was syncing excessively),
>> and was a bit frustrated with the incremental build performance. From
>> searching the forums & blog posts I can see that others have concerns
>> about it as well. It looks like some attempts have been made to fix
>> this (CMake, non-recursive make, and some talk of gyp). Since I've
>> written the tup build system (http://gittup.org/tup/), I thought I
>> would give it a try with that. I've summarized my results here:
>>
>> http://gittup.org/building-firefox-with-tup.html
>
> (snip)
>
> Thanks for the awesome work.
>
> Interestingly, our own Michael Wu has been experimenting with tup and
> got something working too. You should get in touch with him (mwu at
> mozilla)

Ahh, I wasn't aware of that or I would've chatted with him first. I
didn't see any noise about it in the mozilla or tup mailing lists so I
just dove in. I've emailed him privately to see what he was able to
come up with.

>
> That being said, the current plan for our build system more or less is
> outlined in [1], at least up to and including part 4.
>
> The path beyond that is not clear yet, but while tup is interesting, there
> are other alternatives to make that are attractive too. Although I don't
> like its syntax, gyp is the most promising one, because it's a meta
> system, that allows to create MSVC or Xcode projects, as well as
> Makefiles and such. Having a gyp backend for tup would definitely be a
> plus.

A friend of mine had pointed me to that blog post as well. I
definitely agree with many of the statements, at least Parts 1-3, and
pieces of Part 4. From what I could see I thought most of the
Makefile.in files already adhered to these rules, in that they are
data-driven and don't contain make rules. I guess that's not true in
general? Or many of them have already been fixed as a result of this
work? I haven't looked at all of the Makefiles, to be honest.

However, I strongly disagree with the notion that loading the whole
DAG in memory for every build will fix all dependency issues, and be
the most efficient build possible. In other words, the conclusion from
Recursive Make Considered Harmful is not correct. This is why I wrote
Build System Rules & Algorithms:
http://gittup.org/tup/build_system_rules_and_algorithms.pdf

It is not necessary to load a global DAG to perform a correct
incremental build, and it is possible to construct the smallest
partial DAG necessary to correctly update the project very
efficiently. Additionally, a single DAG by itself does not necessarily
support other changes that can break builds. What happens if a rule is
removed, or a file is renamed? Make isn't able to handle these cases.

As for gyp, I don't have an opinion one way or the other -- switching
to that for the front-end would be beneficial if people want to
generate project files to better integrate with their IDEs, or if the
gyp syntax is preferred to that of Makefile.in. However if the
Makefiles it generates are still processed by make, I don't see how
that would help incremental builds since it is the underlying build
engine that is flawed, not the syntax used to list cpp files and
compiler flags.

Thanks for the feedback,
-Mike

Justin Lebar

Jul 5, 2012, 4:34:47 PM
to Mike Shal, Mike Hommey, dev-pl...@lists.mozilla.org
> However, I strongly disagree with the notion that loading the whole
> DAG in memory for every build will fix all dependency issues, and be
> the most efficient build possible.

We've been working under the assumption that loading our whole DAG
into Make would take a few seconds. I believe the evidence for this
was how long the OOo build takes.

If that's true and the overhead scales roughly linearly with the size
of our tree, then it's likely an acceptable amount of overhead,
compared to our current overhead and the time it takes to link.

> Additionally, a single DAG by itself does not necessarily
> support other changes that can break builds. What happens if a rule is
> removed, or a file is renamed? Make isn't able to handle these cases.

Correct handling of these cases is a big plus for tup, IMO. We often
run into cases where we have to "clobber" (make clean) on our
continuous-integration build machines. And as you may have
discovered, we have small make-clean-like steps built into our build
system right now to get around this problem with header files and, I
believe, some generated files.

-Justin

Rafael Ávila de Espíndola

Jul 5, 2012, 5:23:51 PM
to dev-pl...@lists.mozilla.org, mar...@gmail.com
> If that's true and the overhead scales roughly linearly with the size
> of our tree, then it's likely an acceptable amount of overhead,
> compared to our current overhead and the time it takes to link.

I agree, I tested building chromium and a nop build is 0.74s with ninja
(https://plus.google.com/u/0/108996039294665965197/posts/SfhrFAhRyyd).

>> Additionally, a single DAG by itself does not necessarily
>> support other changes that can break builds. What happens if a rule is
>> removed, or a file is renamed? Make isn't able to handle these cases.
>
> Correct handling of these cases is a big plus for tup, IMO. We often
> run into cases where we have to "clobber" (make clean) on our
> continuous-integration build machines. And as you may have
> discovered, we have small make-clean-like steps built into our build
> system right now to get around this problem with header files and, I
> believe, some generated files.

I agree. To get the 0.74s nop build time we have to fix our build to not
depend on old files being deleted if we stay with Make or Ninja (or
anything that generates them). Tup might actually be easier as we can
just assume that old files are deleted, but if we depend on a Tup-only
feature then we will really be maintaining two build systems or
completely switching to Tup.

> -Justin

Cheers,
Rafael

Anthony jones

Jul 5, 2012, 6:14:07 PM
to dev-pl...@lists.mozilla.org
On 06/07/12 08:34, Justin Lebar wrote:
>> However, I strongly disagree with the notion that loading the whole
>> DAG in memory for every build will fix all dependency issues, and be
>> the most efficient build possible.
>
> We've been working under the assumption that loading our whole DAG
> into Make would take a few seconds. I believe the evidence for this
> was how long the OOo build takes.

Tup is an excellent example of a build system that does what a build
system needs to do and nothing else. It proves that it's not necessary
to settle for waiting "a few seconds". It undermines the assumption.

> If that's true and the overhead scales roughly linearly with the size
> of our tree, then it's likely an acceptable amount of overhead,
> compared to our current overhead and the time it takes to link.

Consider the pattern of build on save then linking explicitly. The build
itself gives immediate feedback on whether it will compile. Link
failures are less frequent [1] so the link time is less important in the
feedback cycle. Time to compile is worth preserving even if the link is
slow.

> Correct handling of these cases is a big plus for tup, IMO. We often
> run into cases where we have to "clobber" (make clean) on our
> continuous-integration build machines. And as you may have
> discovered, we have small make-clean-like steps built into our build
> system right now to get around this problem with header files and, I
> believe, some generated files.

The value of being able to trust your build system is very important.
The thing that tup provides is a mechanism to verify that your rules are
correct. They're not prone to human error in the same way as make.

I have no doubt that a beta build system is where we want to be. We
still need to answer the questions of whether tup is the right answer or
whether we need to find or build something else.

The transition is the tough part. Right now it makes sense to generate
Tupfiles out of something. The lowest barrier to entry would be to
generate them out of Makefiles. Gregory has done some great work on this
and I agree that a future direction of using an existing language such
as Lua makes a lot of sense[1].

I have some remaining questions:

* How could we break this up into stages?
* To what extent can our existing make system co-exist with tup?
* Could we create a tup rule for an entire make invocation?

Anthony

[1] I would choose a pure functional language because we need something
declarative. Anything would work.

Justin Lebar

Jul 5, 2012, 6:25:31 PM
to Anthony jones, dev-pl...@lists.mozilla.org
>> If that's true and the overhead scales roughly linearly with the size
>> of our tree, then it's likely an acceptable amount of overhead,
>> compared to our current overhead and the time it takes to link.
>
> Consider the pattern of build on save then linking explicitly. The build
> itself gives immediate feedback on whether it will compile. Link failures
> are less frequent [1] so the link time is less important in the feedback
> cycle. Time to compile is worth preserving even if the link is slow.
>
> [snip]
>
> I have no doubt that a beta build system is where we want to be.

All things being equal, of course faster is better than slower.

But all things may not be equal, and I'm saying that the advantage of
less-than-100ms build overhead compared to less-than-5s build overhead
may not outweigh other advantages of a non-tup-based solution.

For example, if you told me that I could have a Makefile-based rewrite
of the build system with 90% certainty in three months, and a
tup-based rewrite of the build system with 50% certainty in three
months, I'd take Make in a heartbeat.

-Justin

Anthony jones

Jul 5, 2012, 6:56:04 PM
to Justin Lebar, dev-pl...@lists.mozilla.org
On 06/07/12 10:25, Justin Lebar wrote:
> All things being equal, of course faster is better than slower.
>
> But all things may not be equal, and I'm saying that the advantage of
> less-than-100ms build overhead compared to less-than-5s build overhead
> may not outweigh other advantages of a non-tup-based solution.
>
> For example, if you told me that I could have a Makefile-based rewrite
> of the build system with 90% certainty in three months, and a
> tup-based rewrite of the build system with 50% certainty in three
> months, I'd take Make in a heartbeat.

CPU cores aren't getting any faster, so we can't expect the Makefile
solution to improve over time without us taking any action. The point I
was making was not about cost or trade-offs. The point is that the value
of a <1s build is a lot higher than the value of a 5-second build. It's
hard to see when you're a long way off, which is why I wanted to make
this point for those who aren't used to fast builds.

If you have never experienced build-on-save then you don't know what
you're missing. A 5-second build is too slow. This is the value side of
the argument rather than the cost side.

When it comes down to making the cost trade-off we need to consider the
work and the risk. The transition to another Makefile-based system comes
with considerable risk because you still have to get your make rules right.

The big win you get from tup is that it'll actually tell you if the
compiler's actions don't match what you specified in the Tupfile. You
get fast feedback for compiler errors and at the same time you get fast
feedback for Tupfile errors. That is a double win.

I'm not necessarily saying that tup is the answer. The point I'm making
is that if it does what it says on the box then we get faster builds and
we get better guarantees. My point is that we should set our sights on
both of these.

No matter how good you get it with Makefiles, you still can't trust it
100%, because changes in the Makefiles themselves end up leaving junk
in your filesystem.

Anthony





Joshua Cranmer

Jul 5, 2012, 7:22:04 PM
to
On 7/5/2012 6:56 PM, Anthony jones wrote:
> CPU cores aren't getting any faster so we can't expect the Makefile
> solution to improve over time without us taking any action. The point
> I was making was not about cost or trade-offs. The point is that the
> value of a <1s build is a lot higher than the value of a 5 second
> build. It's hard to see when you're a long way off so that is why I
> wanted to make this point for those who aren't used to fast builds.
>
> If you have never experienced build on save then you don't know what
> you're missing. A 5 second is too slow. This is the value side of the
> argument rather than the cost side.

What you're forgetting is that in the case of actually needing to
compile and link something, the time that the build system takes to
compute dependencies is insignificant compared to the time it takes to
actually do the build steps.

On my normal debug build on my computer, linking libxul.so takes over 2
minutes. In another build I have where I need to do LTO, it takes a
half-hour. Saving five seconds computing dependencies doesn't matter
when the build time is long enough to cause you to context switch while
you wait.

Justin Lebar

Jul 6, 2012, 1:21:58 AM
to Joshua Cranmer, dev-pl...@lists.mozilla.org
> What you're forgetting is that in the case of actually needing to compile
> and link something, the time that the build system takes to compute
> dependencies is insignificant compared to the time it takes to actually do
> the build steps.

Well, his argument is that compiling without linking happens often,
perhaps more often than compiling with linking, because when writing
code, you frequently compile to see and fix errors. Compiling just
one file takes a second or two, so five seconds is significant there.

Anyway, we all agree that faster is better, all things being equal.
But so many people have tried and failed to fix our build system, I'm
happy to take whatever I can get at this point. Beggars can't be
choosers.

Mike Hommey

Jul 6, 2012, 1:48:35 AM
to Justin Lebar, dev-pl...@lists.mozilla.org
On Fri, Jul 06, 2012 at 01:21:58AM -0400, Justin Lebar wrote:
> > What you're forgetting is that in the case of actually needing to compile
> > and link something, the time that the build system takes to compute
> > dependencies is insignificant compared to the time it takes to actually do
> > the build steps.
>
> Well, his argument is that compiling without linking happens often,
> perhaps more often than compiling with linking, because when writing
> code, you frequently compile to see and fix errors. Compiling just
> one file takes a second or two, so five seconds is significant there.

Compiling just one file doesn't need to have an overhead with make.
I also think 5s is exaggerated. For instance, the Android build
system takes more than 20s for a nop build, but most of this time is
spent running find or python scripts selecting Android.mk files.

That being said, I'm not closing the door to the possibility of
replacing make. I'm definitely more attracted to things that can
generate projects for IDEs.

Mike

Benoit Girard

Jul 7, 2012, 2:03:37 PM
to Mike Shal, dev-pl...@lists.mozilla.org, Justin Lebar
On Thu, Jul 5, 2012 at 2:44 PM, Mike Shal <mar...@gmail.com> wrote:

> On Thu, Jul 5, 2012 at 12:46 AM, Justin Lebar <justin...@gmail.com>
> wrote:
> > This is very cool stuff. I particularly like the fact that this
> > system relies on monitoring which files the compiler accesses instead
> > of using gcc -MMD, because the former approach works when your
> > "compiler" is an arbitrary program -- we have many such programs in
> > our build.
>
> Yep, it's supposed to be generic enough to work with anything, since
> it just looks at file reads and writes. I did have to update one of
> the python scripts that was opening its input files as RDWR though so
> tup wouldn't complain :)
>

Which script is that? We should fix that if it makes this kind of effort
easier.

-Benwa

Mike Shal

Jul 7, 2012, 9:46:46 PM
to Anthony jones, dev-pl...@lists.mozilla.org
Hi Anthony,

On Thu, Jul 5, 2012 at 6:14 PM, Anthony jones <ajo...@mozilla.com> wrote:
> The transition is the tough part. Right now it makes sense to generate
> Tupfiles out of something. The lowest barrier to entry would be to generate
> them out of Makefiles. Gregory has done some great work on this and I agree
> that a future direction of using an existing language such as Lua makes a
> lot of sense[1].
>
> I have some remaining questions:
>
> * How could we break this up into stages?

I spoke with Michael Wu on IRC a few days ago - his approach was to
write some conversion scripts to convert the Makefile.in front-end
into Tupfiles. In my proposal I was considering trying to use the
Makefile.in's as-is in tup, though during our discussion he pointed me
to some parts where this would be tricky to do. In either case, I
think this allows a transition to happen wherein both make & tup can
exist in the tree, and developers can choose either to build. Updating
the build description (ie: adding a new file) would still happen by
updating Makefile.in, and then the tup configuration can be
re-generated by Michael's scripts.

> * To what extent can our existing make system co-exist with tup?

I think they can both exist together in the tree - you would just
choose which to build with at configure time.

> * Could we create a tup rule for an entire make invocation?

I'm not really sure what you mean here. Can you elaborate?

-Mike

Mike Shal

Jul 7, 2012, 9:51:21 PM
to Benoit Girard, dev-pl...@lists.mozilla.org
Sorry, I should have mentioned - it is xpt.py (patch attached).

Michael has also fixed it in his tree.

-Mike

Gregory Szorc

Jul 10, 2012, 3:44:31 PM
to Mike Shal, dev-pl...@lists.mozilla.org
On 7/4/12 2:00 PM, Mike Shal wrote:
> Hello,
>
> I was trying to build Firefox from source to try out some changes (I
> wanted to see if I could figure out why it was syncing excessively),
> and was a bit frustrated with the incremental build performance. From
> searching the forums & blog posts I can see that others have concerns
> about it as well. It looks like some attempts have been made to fix
> this (CMake, non-recursive make, and some talk of gyp). Since I've
> written the tup build system (http://gittup.org/tup/), I thought I
> would give it a try with that. I've summarized my results here:
>
> http://gittup.org/building-firefox-with-tup.html

Great work! Tup is a nifty piece of software and it shows real potential
for being an eventual replacement for the build backend for mozilla-central.

We don't have a concrete vision for what the build system will
eventually look like. We have a number of paths we can take, including
de-recursified Makefiles, GYP + {Make, Ninja, etc}, and even Tup. If we
can support multiple ones easily, maybe we even do that!

We believe attempting a flag-day conversion is a fool's errand: the
scope is just too large. So, we're currently focused on paving the road
for a transition by making the make files fully declarative. This means:

* Removing all rules from Makefile.in's (bug 769378)
* Removing all $(shell) invocations from Makefile.in's (bug 769390)
* Removing all filesystem functions from Makefile.in's (bug 769407)
* Migrating non-buildy aspects of make files elsewhere (bug 769394)

Once the build definition files are declarative, we can statically
analyze and transform to whatever solution we choose to go with. What
this means exactly is an open question. I've been advocating for GYP
mainly because you get Visual Studio, Xcode, etc for free. You also get
Tup-like build speeds with Ninja generation. (Although, I don't think
Ninja works on all platforms yet, sadly.) Speed is obviously an
important consideration. But, I think it would be silly for us to
rewrite the build system without seizing the opportunity to obtain
robust Visual Studio, Xcode, etc project generation. This is mainly to
attract new contributors to the project who are most comfortable in
their favorite IDE. But, existing contributors would also see a win with
an integrated debugger, code complete, etc.

Anyway, your Tup integration work is truly awesome. As you said in your
web article, a real port probably involves "configuring" Tup from the
same build frontend files as the existing build system. I agree. And, I
would argue we can't get there until the make files are truly
declarative. So, any work towards fully declarative make files will lead
us one step closer to {Tup, GYP, Ninja, Visual Studio, Xcode, etc}. To
anyone reading: if you want faster build speeds, look at the bugs linked
above and do your part.

I can't guarantee we'll have Tup support in the tree one day. But, there
are certainly compelling reasons why we should heavily consider it.

Gregory

Anthony Jones

Jul 10, 2012, 5:38:00 PM
to dev-pl...@lists.mozilla.org
On 11/07/12 07:44, Gregory Szorc wrote:
> We believe attempting a flag day conversion is a Fool's errand: the
> scope is just too large. So, we're currently focused on paving the road
> for a transition by making the make files fully declarative. This means:
>
> * Removing all rules from Makefile.in's (bug 769378)
> * Removing all $(shell) invocations from Makefile.in's (bug 769390)
> * Removing all filesystem functions from Makefile.in's (bug 769407)
> * Migrating non-buildy aspects of make files elsewhere (bug 769394)
>
> Once the build definition files are declarative, we can statically
> analyze and transform to whatever solution we choose to go with. What
> this means exactly is an open question. I've been advocating for GYP
> mainly because you get Visual Studio, Xcode, etc for free. You also get
> Tup-like build speeds with Ninja generation. (Although, I don't think
> Ninja works on all platforms yet, sadly.) Speed is obviously an
> important consideration. But, I think it would be silly for us to
> rewrite the build system without seizing the opportunity to obtain
> robust Visual Studio, Xcode, etc project generation. This is mainly to
> attract new contributors to the project who are most comfortable in
> their favorite IDE. But, existing contributors would also see a win with
> an integrated debugger, code complete, etc.

I read your blog and I can see that you've done some fantastic work on
the build system. I agree that the value is in providing good developer
tools. From a Linux/Eclipse perspective, the best value for effort is to get a fast
build that properly reveals compiler invocations. Half done in this
context still delivers value. Is there a useful middle ground for Visual
Studio or Xcode?

The end game is incremental builds that are very fast and 100%
trustworthy, along with proper IDE integration on tier 1 platforms.
Individual preferences aside, we just need good tools, especially for mobile.

We need to figure out how we can deliver incremental value along with
incremental effort. This rewards people immediately. Is there a way we
can make parts of the build faster to help motivate people to de-uglify
the Makefile.in files?

I'm prepared to put my own time into this if I can see some return on it.

Anthony

Gregory Szorc

Jul 10, 2012, 8:11:54 PM
to Anthony Jones, dev-pl...@lists.mozilla.org
On 7/10/12 2:38 PM, Anthony Jones wrote:
> We need to figure out how we can deliver incremental value along with
> incremental effort. This rewards people immediately. Is there a way we
> can make parts of the build faster to help motivate people to de-uglify
> the Makefile.in files?

For starters, one of the first things we do during a build is rm -rf
large parts of dist/. This just creates lots of new work for a supposed
no-op build. AFAIK we do this because we don't trust dependencies in
parts of the build system. We should trust our build system and not
create more work for it. Surely we can prevent some of this redundancy.

If you are looking to shave ~20s from no-op builds, I think the lowest
hanging fruit would be to optimize the export tier. We could replace the
entire export tier with a monolithic make file generated from data
statically extracted from individual Makefile.in's. e.g.
https://raw.github.com/gist/1363034/b3d956a66e049e7624fbdb79762551202fe9a465/optimized.mk.
This will allow a no-op export to finish in a second rather than ~20s.
Clobber builds should also be much faster since we have no directory
traversal and optimal parallelism. Alternatively, we could convert the
export tier to be fully derecursified and just include all the
Makefile.in's together. However, this requires all the Makefile.in's to
be sane (e.g. no $(shell)'s that always execute, etc). I think
generating a monolithic make file from extracted data is actually easier
with today's make file state.
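
The generated file would essentially be a flat list of install rules,
something like the following (paths and variable names are illustrative,
not the ones from the gist):

$(DIST)/include/mozilla/Assertions.h: $(topsrcdir)/mfbt/Assertions.h
	$(INSTALL) $< $(@D)

export:: $(DIST)/include/mozilla/Assertions.h

with one such stanza per exported file, so make sees every export rule
at once, with no directory traversal and full parallelism.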

Export is the tier really wasting CPU time. libs does a lot of C++
compilation, so time is usually dwarfed by that. However, we aren't as
optimal here because tier directory traversal is mostly serial. We only
see parallel execution (-jN) in individual Makefile.in's or in the less
common case where we use PARALLEL_DIRS instead of DIRS. We should try
converting things to PARALLEL_DIRS to get us some quick gains. However,
this may introduce race conditions and needs to be done with care.
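
Concretely, that conversion is just changing a directory list in a
Makefile.in from, say,

DIRS += base src public

to

PARALLEL_DIRS += base src public

(the directory names here are only an example), which lets make descend
into those directories concurrently.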

We should also reduce the overall number of Makefile.in's by merging
leaf nodes into their parents. This will require new make rules and
variables to support operating on things in child directories. This
effort will ultimately support de-recursified execution, if we ever want
that. The real gains from reduced number of Makefile's is not the
reduced traversal overhead, but from the increase in parallel execution.

There are also a number of smaller battles that can be fought.
Basically, identify a directory tree that takes a while to execute in
no-op builds and go from there. From
http://gps.pastebin.mozilla.org/1699952, I see that the libs tier in dom
(20s), content (25s), and layout (13s) are the biggest offenders. Most
of this seems to be nsinstall invocations (perhaps due to using double
colon make rules, which force the rule to always be evaluated). It could
be something else - I haven't measured it with scrutiny. Here, if we
merge make files, we should see more parallel execution. Then again,
maybe we're gated on I/O. I dunno.

Anyway, our build system is really death by a thousand cuts. So, every
time a command is eliminated or a Makefile.in is consolidated, it makes
things just a little bit faster.

Justin Lebar

Jul 11, 2012, 12:22:26 AM
to Gregory Szorc, Anthony Jones, dev-pl...@lists.mozilla.org
> We should try converting things to PARALLEL_DIRS to get
> us some quick gains. However, this may introduce race
> conditions and needs to be done with care.

See [1], where I tried and, depending on your point of view, either
mostly failed or mostly succeeded at this. Either way, it didn't make
it into the tree.

Although I totally agree that breaking the build-system-fix into
smaller pieces is the best way to get something done, I'm not
convinced that our time would be well-spent in micro-optimizations
like reducing the number of Makefiles.

What I'd really like to see is us tup-ify one directory of m-c and
grow from there. I don't know if that's feasible with tup
specifically, and it if course wouldn't be as fast as a full top-level
conversion to tup (or whatever your favorite build system is). But I
bet if we had amazingly fast builds in one directory of m-c, we'd
suddenly find the resources to convert the rest of the tree.

-Justin

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=620285

On Tue, Jul 10, 2012 at 8:11 PM, Gregory Szorc <g...@mozilla.com> wrote:
> On 7/10/12 2:38 PM, Anthony Jones wrote:
>>
>> We need to figure out how we can deliver incremental value along with
>> incremental effort. This rewards people immediately. Is there a way we
>> can make parts of the build faster to help motivate people to de-uglify
>> the Makefile.in files?
>
>

Mike Hommey

Jul 11, 2012, 2:28:24 AM
to Gregory Szorc, Anthony Jones, dev-pl...@lists.mozilla.org
On Tue, Jul 10, 2012 at 05:11:54PM -0700, Gregory Szorc wrote:
> On 7/10/12 2:38 PM, Anthony Jones wrote:
> >We need to figure out how we can deliver incremental value along with
> >incremental effort. This rewards people immediately. Is there a way we
> >can make parts of the build faster to help motivate people to de-uglify
> >the Makefile.in files?
>
> For starters, one of the first things we do during a build is rm -rf
> large parts of dist/. This just creates lots of new work for a
> supposed no-op build. AFAIK we do this because we don't trust
> dependencies in parts of the build system.

The main reason we do that is the possibility that files which were
removed or moved in the tree would stay behind in $DIST, and hide or
create new problems.

> We should trust our build
> system and not create more work for it. Surely we can prevent some
> of this redundancy.

There really is no "make" solution for this problem, except for logging
nsinstall actions and removing files that were previously copied from a
given srcdir when they're not in the install list the second time.

> If you are looking to shave ~20s from no-op builds, I think the
> lowest hanging fruit would be to optimize the export tier. We could
> replace the entire export tier with a monolithic make file generated
> from data statically extracted from individual Makefile.in's. e.g. https://raw.github.com/gist/1363034/b3d956a66e049e7624fbdb79762551202fe9a465/optimized.mk.
> This will allow a no-op export to finish in a second rather than
> ~20s. Clobber builds should also be much faster since we have no
> directory traversal and optimal parallelism. Alternatively, we could
> convert the export tier to be fully derecursified and just include
> all the Makefile.in's together. However, this requires all the
> Makefile.in's to be sane (e.g. no $(shell)'s that always execute,
> etc). I think generating a monolithic make file from extracted data
> is actually easier with today's make file state.
>
> Export is the tier really wasting CPU time. libs does a lot of C++
> compilation, so time is usually dwarfed by that. However, we aren't
> as optimal here because tier directory traversal is mostly serial.
> We only see parallel execution (-jN) in individual Makefile.in's or
> in the less common case where we use PARALLEL_DIRS instead of DIRS.
> We should try converting things to PARALLEL_DIRS to get us some
> quick gains. However, this may introduce race conditions and needs
> to be done with care.

Another idea is bug 749122.

Mike

Gregory Szorc

Jul 11, 2012, 3:02:44 AM
to Mike Hommey, Anthony Jones, dev-pl...@lists.mozilla.org
On 7/10/12 11:28 PM, Mike Hommey wrote:
> On Tue, Jul 10, 2012 at 05:11:54PM -0700, Gregory Szorc wrote:
>> On 7/10/12 2:38 PM, Anthony Jones wrote:
>>> We need to figure out how we can deliver incremental value along with
>>> incremental effort. This rewards people immediately. Is there a way we
>>> can make parts of the build faster to help motivate people to de-uglify
>>> the Makefile.in files?
>>
>> For starters, one of the first things we do during a build is rm -rf
>> large parts of dist/. This just creates lots of new work for a
>> supposed no-op build. AFAIK we do this because we don't trust
>> dependencies in parts of the build system.
>
> The main reason we do that is because of the possibility of files that
> were removed in the tree, or moved in $DIST to stay there, and hide
> or create new problems.
>
>> We should trust our build
>> system and not create more work for it. Surely we can prevent some
>> of this redundancy.
>
> There really is no "make" solution for this problem, except for logging
> nsinstall action and remove files that were previously copied from a
> given srcdir when they're not in the install list the second time.

True to both points.

For extra files in $DIST, those will get removed whenever we do a
clobber. The trees on the buildbot hosts are periodically clobbered,
right? So, what's the worst that can happen? We don't detect breakage
until the next clobber and have to bisect over the range from the
previous clobber?

If we really care about what files exist in $DIST, then I'd argue we
should maintain a manifest of what's expected to be in $DIST. At the end
and/or beginning of the build, we prune what shouldn't be there and
error if something is missing. We already maintain manifests for lots of
other pieces of the tree, so I don't think this idea is too crazy. We
just need to take the time to design and implement it.
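
A minimal sketch of that check, assuming a flat, generated manifest file
(the name dist.manifest is made up):

find dist/include -type f | sort > actual.txt
sort dist.manifest > expected.txt
comm -23 actual.txt expected.txt | xargs -r rm -f   # prune files that shouldn't be there
comm -13 actual.txt expected.txt                    # anything printed here is missing -> error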

Mike Hommey

Jul 11, 2012, 3:11:38 AM
to Gregory Szorc, Anthony Jones, dev-pl...@lists.mozilla.org
On Wed, Jul 11, 2012 at 12:02:44AM -0700, Gregory Szorc wrote:
> On 7/10/12 11:28 PM, Mike Hommey wrote:
> >On Tue, Jul 10, 2012 at 05:11:54PM -0700, Gregory Szorc wrote:
> >>On 7/10/12 2:38 PM, Anthony Jones wrote:
> >>>We need to figure out how we can deliver incremental value along with
> >>>incremental effort. This rewards people immediately. Is there a way we
> >>>can make parts of the build faster to help motivate people to de-uglify
> >>>the Makefile.in files?
> >>
> >>For starters, one of the first things we do during a build is rm -rf
> >>large parts of dist/. This just creates lots of new work for a
> >>supposed no-op build. AFAIK we do this because we don't trust
> >>dependencies in parts of the build system.
> >
> >The main reason we do that is because of the possibility of files that
> >were removed in the tree, or moved in $DIST to stay there, and hide
> >or create new problems.
> >
> >>We should trust our build
> >>system and not create more work for it. Surely we can prevent some
> >>of this redundancy.
> >
> >There really is no "make" solution for this problem, except for logging
> >nsinstall action and remove files that were previously copied from a
> >given srcdir when they're not in the install list the second time.
>
> True to both points.
>
> For extra files in $DIST, those will get removed whenever we do a
> clobber. The trees on the buildbot hosts are periodically clobbered,
> right? So, what's the worst that can happen? We don't detect
> breakage until the next clobber and have to bisect over the range
> from the previous clobber?

It's not only a problem for tinderboxes, it's a problem for local
builds. If you have to clobber to ensure your build failure is not due
to some bad interaction between Makefile.in changes and the objdir from
your previous build, we've failed.

> If we really care about what files exist in $DIST, then I'd argue we
> should maintain a manifest of what's expected to be in $DIST. At the
> end and/or beginning of the build, we prune what shouldn't be there
> and error if something is missing. We already maintain manifests for
> lots of other pieces of the tree, so I don't think this idea is too
> crazy. We just need to take the time to design and implement it.

I'm kind of okay with manifests, as long as they are generated.
Seriously, I think the package manifest is one of the worst abominations
in the build system. There are so many possible ways to do something
wrong with it that it's not even funny.

Mike

Brian Smith

Jul 11, 2012, 3:50:35 PM
to Gregory Szorc, Mike Shal, dev-pl...@lists.mozilla.org
Gregory Szorc wrote:
> this means exactly is an open question. I've been advocating for GYP
> mainly because you get Visual Studio, Xcode, etc for free.

In order to have really *usable* IDE support, you need to deal with things like this:

mfbt/Assertions.h => #include "mozilla/Assertions.h"
xpcom/glue/whatever.h => #include "mozilla/whatever.h"

If you add dist/include to the include path of the IDE, then the IDE's navigation features work really poorly, because they (at least VS2010 and VS2012) tend to navigate to the copy in dist/include/ instead of the copy in the source tree, which is the one you really want to edit.

To solve this problem in my local VS project, I avoided adding dist/include to the include path of the project; instead I created symlinks like this:

vs/symlinks/mfbt/mozilla/ => mfbt/
vs/symlinks/xpcom/glue/mozilla => xpcom/glue/
[etc.]

and then added vs/symlinks/mfbt, vs/symlinks/xpcom/glue, etc. to the include path. (I also added every individual $(OBJDIR)/.../_xpidlgen directory to the include path, though this causes its own problems.) This gives me the results I want with respect to navigating the source code, but it isn't a good general-purpose solution for everybody (one reason: you must elevate to admin privileges to create symlinks on Windows).
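
(For anyone wanting to reproduce this, the setup is roughly the following,
from an elevated cmd prompt at the top of the tree; the relative targets
assume the vs/symlinks layout above:

mklink /D vs\symlinks\mfbt\mozilla ..\..\..\mfbt
mklink /D vs\symlinks\xpcom\glue\mozilla ..\..\..\..\xpcom\glue

Creating the symlinks is the part that needs the elevated prompt.)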

IMO, it is MUCH more important to be able to navigate the source code in the IDE efficiently than to be able to build using the IDE's "native" build system; in Visual Studio, makefile projects work well enough. So, I hope VS project generation doesn't get stalled on the work needed to get the generated projects buildable through VS itself.

I also found that VS2010 could NOT handle a project that included everything in mozilla-central; navigating using F12 and Ctrl+, was WAY too slow. Consequently, I ended up creating a VS project for just the parts I usually edit (xpcom/, netwerk/, security/manager, security/nss, nsprpub/).

Cheers,
Brian

Trevor Saunders

Jul 11, 2012, 4:05:38 PM
to Mike Hommey, Anthony Jones, dev-pl...@lists.mozilla.org, Gregory Szorc
On Wed, Jul 11, 2012 at 08:28:24AM +0200, Mike Hommey wrote:
> > We should trust our build
> > system and not create more work for it. Surely we can prevent some
> > of this redundancy.
>
> There really is no "make" solution for this problem, except for logging
> nsinstall action and remove files that were previously copied from a
> given srcdir when they're not in the install list the second time.

so, perhaps a silly question, but what would using symlinks instead of
actually copying break?

Trev

Brian Smith

unread,
Jul 11, 2012, 4:18:34 PM7/11/12
to Mike Hommey, Anthony Jones, dev-pl...@lists.mozilla.org, Gregory Szorc
Trevor Saunders <trev.s...@gmail.com> wrote:
> so, perhaps a silly question, but what would using symlinks instead
> of actually copying break?

Windows, because on Windows you cannot create symlinks without elevating to admin privileges, and you shouldn't need to elevate to admin to build Firefox.

Mike Hommey wrote:
> On Tue, Jul 10, 2012 at 05:11:54PM -0700, Gregory Szorc wrote:
> > For starters, one of the first things we do during a build is rm
> > -rf large parts of dist/. This just creates lots of new work for
> > a supposed no-op build. AFAIK we do this because we don't trust
> > dependencies in parts of the build system.
>
> The main reason we do that is because of the possibility of files
> that were removed in the tree, or moved in $DIST to stay there, and hide
> or create new problems.
>
> There really is no "make" solution for this problem, except for
> logging nsinstall action and remove files that were previously copied
> from a given srcdir when they're not in the install list the second time.

Do we really need to copy those files into dist/include at all for a normal build of libxul?

As far as headers go, is there a lot of value in controlling which headers get exported since the rules for inter-module dependencies were changed? Perhaps a simple change in convention could eliminate the need to do this copying and deleting at all:

* for every directory D that contains header files:
* if the header file is exported, move it to
D/mozilla; otherwise move it to D/private.
* add D to the include path of every compilation

* for every directory D that contains IDL files:
* add $(OBJDIR)/D/_xpidlgen to the include path of every
compilation.

* Change all #includes from #include "X.h" to
#include "mozilla/X.h" or #include "private/X.h"

* Add a code review rule that bans #include "private/X.h" for
any X.h in another module.

* Create a new target "public-api" or whatever that can be used
to create dist/include, dist/idl, etc. for addons API.

Note that this would also solve the #include problem I mentioned in the message "IDE project generation."
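
To make the mechanics concrete, here is a rough sketch of how the include
path could be assembled under that convention (paths and the directory walk
are purely illustrative):

import os

def include_flags(topsrcdir, objdir):
    flags = []
    for root, dirs, files in os.walk(topsrcdir):
        # a directory participates if it has exported (mozilla/) or
        # private (private/) headers
        if "mozilla" in dirs or "private" in dirs:
            flags.append("-I" + root)
    for root, dirs, files in os.walk(objdir):
        if os.path.basename(root) == "_xpidlgen":
            flags.append("-I" + root)  # generated IDL headers
    return flags

print(" ".join(include_flags("mozilla-central", "obj-default")))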

- Brian

Justin Lebar

unread,
Jul 11, 2012, 4:30:13 PM7/11/12
to Brian Smith, Mike Hommey, dev-pl...@lists.mozilla.org, Gregory Szorc, Anthony Jones
> Trevor Saunders <trev.s...@gmail.com> wrote:
>> so, perhaps a silly question, but what would using symlinks instead
>> of actually copying break?
>
> Windows, because on Windows you cannot create symlinks without elevating to admin privileges, and you shouldn't need to elevate to admin to build Firefox.

We actually do use symlinks on Linux. See objdir/dist/include.

The problem isn't the copy, per se, it's that we have to nuke the
whole directory in order to ensure that headers which are deleted from
the Makefile don't stick around.

symlinks /almost/ solve this problem. If you delete the header, then
obviously the symlink is broken. But if you remove the header from
the Makefile but leave the file there, the symlink still works, and
that's the problem.

> * for every directory D that contains header files:
> * if the header file that is exported, move it to
> D/mozilla; otherwise move it to D/private.
> * add D to the include path of every compilation

That would result in /very/ long arguments to GCC, of course. I don't
know if that would cause problems, but I would not be surprised if
Windows was displeased with us.

Anthony Jones

unread,
Jul 11, 2012, 4:47:12 PM7/11/12
to Gregory Szorc, Mike Hommey, dev-pl...@lists.mozilla.org
On 11/07/12 19:02, Gregory Szorc wrote:
> On 7/10/12 11:28 PM, Mike Hommey wrote:
>> On Tue, Jul 10, 2012 at 05:11:54PM -0700, Gregory Szorc wrote:
>> There really is no "make" solution for this problem, except for logging
>> nsinstall action and remove files that were previously copied from a
>> given srcdir when they're not in the install list the second time.
>
> If we really care about what files exist in $DIST, then I'd argue we
> should maintain a manifest of what's expected to be in $DIST. At the end
> and/or beginning of the build, we prune what shouldn't be there and
> error if something is missing. We already maintain manifests for lots of
> other pieces of the tree, so I don't think this idea is too crazy. We
> just need to take the time to design and implement it.

If you read the side of the box, it says tup automatically tracks
changes in Tupfiles (Makefiles) and automatically removes artefacts
where required. I'm assuming everyone has read the contents of the tup
website.

This is also a slow part of the build. We all agree it can't be
reasonably done in make. We're also looking for a place to try out Tup.
I'm going to put these pieces together and suggest that we start by
tupifying the install. I'm hoping I can peel off a chunk that is small
enough that we can make it stick.

I've cloned build-splendid but I don't know whether that would be the
best place to start or whether I should start with what Mike Shal and
Michael Wu have done.

Anthony

Kyle Huey

unread,
Jul 11, 2012, 5:06:28 PM7/11/12
to Brian Smith, Mike Hommey, dev-pl...@lists.mozilla.org, Gregory Szorc, Anthony Jones
On Wed, Jul 11, 2012 at 1:18 PM, Brian Smith <bsm...@mozilla.com> wrote:

> Trevor Saunders <trev.s...@gmail.com> wrote:
> > so, perhaps a silly question, but what would using symlinks instead
> > of actually copying break?
>
> Windows, because on Windows you cannot create symlinks without elevating
> to admin privileges, and you shouldn't need to elevate to admin to build
> Firefox.
>

And even if you do, MSYS chokes on the resulting links.

- Kyle

Gregory Szorc

unread,
Jul 11, 2012, 5:35:52 PM7/11/12
to Anthony Jones, Mike Hommey, dev-pl...@lists.mozilla.org
On 7/11/12 1:47 PM, Anthony Jones wrote:
> On 11/07/12 19:02, Gregory Szorc wrote:
> This is also a slow part of the build. We all agree it can't be
> reasonably done in make. We're also looking for a place to try out Tup.
> I'm going to put these pieces together and suggest that we start by
> tupifying the install. I'm hoping I can peel off a chunk that is small
> enough that we can make it stick.

If you haven't already, please say hi in #pymake, where all the build
peers hang out. I don't want to see people put in a lot of effort only
to find at review time that the approach isn't satisfactory.

> I've cloned build-splendid but I don't know whether that would be the
> best place to start or whether I should start with what Mike Shal and
> Michael Wu have done.

My build-splendid branch is bitrotted and dead at this point. The Python
CLI frontend is being uplifted to mozilla-central (bug 751795). Some
pymake API enhancements are also on their way (see bug 769976). I also
have plans to uplift a make file "style" checker and other misc parts of
the build-splendid branch. The remaining bits can be ported to the
mach/mozbuild platform (bug 751795) if there is any interest (The
MozillaMakefile class - which is essentially an API for extracting data
from Makefile's - would be the most useful, IMO). I may rebase
build-splendid on top of mach/mozbuild some day. But, don't count on it.

Neil

unread,
Jul 12, 2012, 4:37:11 AM7/12/12
to
Trevor Saunders wrote:

>what would using symb links instead of actually copying break?
>
>
Windows. (We already use symlinks on Mac/Linux; on Windows we hard link
in some places, but that only helps for speed purposes.)

--
Warning: May contain traces of nuts.

Mike Shal

unread,
Jul 12, 2012, 4:08:27 PM7/12/12
to Gregory Szorc, dev-pl...@lists.mozilla.org
Hi Gregory,

On Tue, Jul 10, 2012 at 3:44 PM, Gregory Szorc <g...@mozilla.com> wrote:
> On 7/4/12 2:00 PM, Mike Shal wrote:
>>
>> Hello,
>>
>> I was trying to build Firefox from source to try out some changes (I
>> wanted to see if I could figure out why it was syncing excessively),
>> and was a bit frustrated with the incremental build performance. From
>> searching the forums & blog posts I can see that others have concerns
>> about it as well. It looks like some attempts have been made to fix
>> this (CMake, non-recursive make, and some talk of gyp). Since I've
>> written the tup build system (http://gittup.org/tup/), I thought I
>> would give it a try with that. I've summarized my results here:
>>
>> http://gittup.org/building-firefox-with-tup.html
>
>
> Great work! Tup is a nifty piece of software and it shows real potential for
> being an eventual replacement for the build backend for mozilla-central.

Thanks :)

>
> We don't have a concrete vision for what the build system will eventually
> look like. We have a number of paths we can take, including de-recursified
> Makefiles, GYP + {Make, Ninja, etc}, and even Tup. If we can support
> multiple ones easily, maybe we even do that!
>
> We believe attempting a flag day conversion is a Fool's errand: the scope is
> just too large. So, we're currently focused on paving the road for a
> transition by making the make files fully declarative. This means:
>
> * Removing all rules from Makefile.in's (bug 769378)
> * Removing all $(shell) invocations from Makefile.in's (bug 769390)
> * Removing all filesystem functions from Makefile.in's (bug 769407)
> * Migrating non-buildy aspects of make files elsewhere (bug 769394)

I agree a flag day conversion can and should be avoided. All of these
bugs look like good things to fix to me - I wasn't aware of these
issues in the Makefile.in's when I posted about possibly using them
directly as source files in tup. The ones I looked at didn't have any
of these issues, though maybe that's a result of the work you have
already put into it.

>
> Once the build definition files are declarative, we can statically analyze
> and transform to whatever solution we choose to go with. What this means
> exactly is an open question. I've been advocating for GYP mainly because you
> get Visual Studio, Xcode, etc for free. You also get Tup-like build speeds
> with Ninja generation. (Although, I don't think Ninja works on all platforms
> yet, sadly.) Speed is obviously an important consideration. But, I think it
> would be silly for us to rewrite the build system without seizing the
> opportunity to obtain robust Visual Studio, Xcode, etc project generation.
> This is mainly to attract new contributors to the project who are most
> comfortable in their favorite IDE. But, existing contributors would also see
> a win with an integrated debugger, code complete, etc.

Build system generators like gyp or cmake can indeed have other
benefits like native IDE support, though it may be easy to support
those once you have data-driven Makefile.in's. (I can't really speak
to this since I have no experience with either - I'm just presuming it
would be easy to read in the data and convert it to whatever back-end is
necessary). As for Ninja, correct me if I'm wrong, but I believe it is
still a global dependency view system. In other words, it reads in the
.ninja files and all dependency files before doing anything. Tup's
speed is a result of several things: 1) Building only the part of the
DAG necessary for the incremental update; 2) Only parsing the build
description (Tupfiles) when necessary; and 3) Using the (optional,
currently Linux-only) monitor daemon to skip the file-system scan. It
sounds like ninja is a lot faster than make, so maybe it is fast
enough that you don't yet hit the scalability issues of an O(n) build
system and you don't have to worry about it. However, features like
this worry me:

* Builds are always run in parallel, based by default on the number
of CPUs your system has. Underspecified build dependencies will result
in incorrect builds.

In particular, the "Underspecified build dependencies will result in
incorrect builds" part. What is the indication to the user that this
has happened?

Tup's philosophy is that any such errors in the build description must
be reported as errors to the developer. This frees up developers to
make changes to the build description, python scripts, etc, and be
confident that an incremental build will detect any problems the
moment they are introduced, rather than at some later date when a
parallel build happens to schedule things differently.

>
> Anyway, your Tup integration work is truly awesome. As you said in your web
> article, a real port probably involves "configuring" Tup from the same build
> frontend files as the existing build system. I agree. And, I would argue we
> can't get there until the make files are truly declarative. So, any work
> towards fully declarative make files will lead us one step closer to {Tup,
> GYP, Ninja, Visual Studio, Xcode, etc}. To anyone reading: if you want
> faster build speeds, look at the bugs linked above and do your part.
>
> I can't guarantee we'll have Tup support in the tree one day. But, there are
> certainly compelling reasons why we should heavily consider it.

Thanks again - I agree that work here would definitely be helpful
moving forward.

-Mike

Mike Shal

unread,
Jul 12, 2012, 4:25:52 PM7/12/12
to Gregory Szorc, Anthony Jones, dev-pl...@lists.mozilla.org
Hi again,

On Tue, Jul 10, 2012 at 8:11 PM, Gregory Szorc <g...@mozilla.com> wrote:
> We should also reduce the overall number of Makefile.in's by merging leaf
> nodes into their parents. This will require new make rules and variables to
> support operating on things in child directories. This effort will
> ultimately support de-recursified execution, if we ever want that. The real
> gains from reduced number of Makefile's is not the reduced traversal
> overhead, but from the increase in parallel execution.

I don't think merging Makefile.in's is the right approach, in
particular as long as tup is still under consideration. Tup expects
each directory to define its own configuration, which is how it is
able to skip re-parsing every Tupfile on every update (ie: if you add
a new .cpp file in dom/base/, then tup will re-parse dom/base/Tupfile
and a handful of other Tupfiles that depend on outputs in dom/base).
In my example, this results in 9 Tupfiles out of 561 that are parsed.
I believe merging the Makefile.in's would only really benefit the
current recursive-make due to the parallel execution as you mention. A
non-recursive-make wouldn't be affected, since you can have a
non-recursive setup where each directory defines its own
configuration, and is included into one mega Makefile. The only real
difference in merging leaf nodes would be that it could save some
small amount of time by opening a few less files. However, a quick
test on my system shows that reading each Makefile individually takes
~23ms, whereas if they are all combined into one huge Makefile,
reading from it takes 3ms (this is using 'cat', not 'make'). Make
would still have to build and walk the same size DAG in either case,
so you would only save at most 20ms by moving things into fewer
Makefile.in's.
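
Something like the following reproduces that comparison, if anyone wants to
try it on their own machine (file names are placeholders; the original
numbers above were just from 'cat'):

import time

def read_many(paths):
    start = time.time()
    for p in paths:
        with open(p) as f:
            f.read()
    return time.time() - start

def read_one(path):
    start = time.time()
    with open(path) as f:
        f.read()
    return time.time() - start

# ~1000 Makefile.in paths, one per line
makefiles = [line.strip() for line in open("makefile.list") if line.strip()]
print("individual: %.3fs" % read_many(makefiles))
print("combined:   %.3fs" % read_one("all-makefiles.mk"))  # cat of the same files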

-Mike

Gregory Szorc

unread,
Jul 12, 2012, 5:17:30 PM7/12/12
to Mike Shal, Anthony Jones, dev-pl...@lists.mozilla.org
On 7/12/12 1:25 PM, Mike Shal wrote:
> Hi again,
>
> On Tue, Jul 10, 2012 at 8:11 PM, Gregory Szorc <g...@mozilla.com> wrote:
>> We should also reduce the overall number of Makefile.in's by merging leaf
>> nodes into their parents. This will require new make rules and variables to
>> support operating on things in child directories. This effort will
>> ultimately support de-recursified execution, if we ever want that. The real
>> gains from reduced number of Makefile's is not the reduced traversal
>> overhead, but from the increase in parallel execution.
>
> I don't think merging Makefile.in's is the right approach, in
> particular as long as tup is still under consideration. Tup expects
> each directory to define its own configuration, which is how it is
> able to skip re-parsing every Tupfile on every update (ie: if you add
> a new .cpp file in dom/base/, then tup will re-parse dom/base/Tupfile
> and a handful of other Tupfiles that depend on outputs in dom/base).
> In my example, this results in 9 Tupfiles out of 561 that are parsed.

This line of thought assumes we'll have Tupfiles committed into
mozilla-central.

Unless something radical happens, Mozilla will need to support building
with make for an indefinite period of time. This doesn't mean we won't
be able to leverage tup. It just means that the process of converting
all our build systems to support tup (or something else) as well as
ensuring that whatever we switch to is supported on tier 2 platforms
will be long and drawn out. This necessitates a period where multiple
build backends co-exist: make + others.

I'm going to assert that we'll want a single source of truth for the
build definition because we don't want to deal with fragmentation. This
will be make files until somebody presents a better solution (which can
be converted to make files, of course).

If a Tupfile can be converted to a Makefile easily, then maybe we'll
check in Tupfiles. But, I don't see a non-C API in tup's source
repository and I'm going to assert that nobody will want to build a
translator in C or even in something that loosely binds the C API (like
Python's ctypes module). Maybe we can write a standalone Tupfile parser
easily. Is that better than something like YAML, GYP, or even our own
DSL? I just don't know.
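
For what it's worth, the basic ":"-rule lines are simple enough that a
throwaway parser stays short; a rough sketch that ignores variables,
!-macros, bins and the rest of the syntax:

def parse_tupfile(path):
    rules = []
    for line in open(path):
        line = line.strip()
        if not line.startswith(":"):
            continue  # skip includes, variable assignments, comments, etc.
        # basic rule form:  : inputs |> command |> outputs
        parts = [p.strip() for p in line[1:].split("|>")]
        if len(parts) != 3:
            continue
        inputs, command, outputs = parts
        rules.append({"inputs": inputs.split(),
                      "command": command,
                      "outputs": outputs.split()})
    return rules

for rule in parse_tupfile("Tupfile"):
    print("%s -> %s" % (" ".join(rule["inputs"]), " ".join(rule["outputs"])))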

What I'm getting at is that it is most likely that Tupfiles would be
automatically generated from something else. And, at the point you do
that, you can easily have the 1 Tupfile per directory which you say is
desirable.

Automatically generated Tupfiles may interfere with tup's magic since
now tup needs to know if it needs to regenerate the Tupfiles. Does tup
have support for handling systems where the Tupfiles may depend on
something else, or would we need to put a frontend in front of tup that
regenerated the Tupfiles before invoking tup?

> I believe merging the Makefile.in's would only really benefit the
> current recursive-make due to the parallel execution as you mention. A
> non-recursive-make wouldn't be affected, since you can have a
> non-recursive setup where each directory defines its own
> configuration, and is included into one mega Makefile. The only real
> difference in merging leaf nodes would be that it could save some
> small amount of time by opening a few less files. However, a quick
> test on my system shows that reading each Makefile individually takes
> ~23ms, whereas if they are all combined into one huge Makefile,
> reading from it takes 3ms (this is using 'cat', not 'make'). Make
> would still have to build and walk the same size DAG in either case,
> so you would only save at most 20ms by moving things into fewer
> Makefile.in's.

I disagree. Even with a recursive make, fewer, "fatter" Makefile.in's
would likely yield more parallel execution because each make invocation
would be more likely to saturate available cores. Think of a make
invocation as this combine harvesting corn:

====== (make -j6)
XX foo/Makefile
XX foo/a/Makefile
X foo/b/Makefile
XXXX foo/c/Makefile

We have 6 cores here, each represented by an =. We have our targets,
each represented by an X. We "harvest" each row serially, because that
is how recursive make works. For each row, we have wasted harvesting
capacity because we don't have enough targets to process.

Whereas if we merge Makefile.in's:

====== (make -j6)
XXXXXXXXX foo/Makefile

We now fully saturate our harvester. No idle cores. Of course, this
assumes the DAG is such that it allows parallel execution. But, I think
that is fair.

Merging Makefile.in's also has another significant advantage: less files
to manage. There's ~1500 Makefiles in mozilla-central today. That number
is staggering and managing them all is unwieldy. A lot of us would like
to see that reduced by a magnitude and then possibly some, just so
things are easier to mentally grok and manage.

Gregory

Mike Shal

unread,
Jul 12, 2012, 6:13:29 PM7/12/12
to Gregory Szorc, dev-pl...@lists.mozilla.org
Hi Gregory,

On Thu, Jul 12, 2012 at 5:17 PM, Gregory Szorc <g...@mozilla.com> wrote:
> On 7/12/12 1:25 PM, Mike Shal wrote:
>>
>> Hi again,
>>
>> On Tue, Jul 10, 2012 at 8:11 PM, Gregory Szorc <g...@mozilla.com> wrote:
>>>
>>> We should also reduce the overall number of Makefile.in's by merging leaf
>>> nodes into their parents. This will require new make rules and variables
>>> to
>>> support operating on things in child directories. This effort will
>>> ultimately support de-recursified execution, if we ever want that. The
>>> real
>>> gains from reduced number of Makefile's is not the reduced traversal
>>> overhead, but from the increase in parallel execution.
>>
>>
>> I don't think merging Makefile.in's is the right approach, in
>> particular as long as tup is still under consideration. Tup expects
>> each directory to define its own configuration, which is how it is
>> able to skip re-parsing every Tupfile on every update (ie: if you add
>> a new .cpp file in dom/base/, then tup will re-parse dom/base/Tupfile
>> and a handful of other Tupfiles that depend on outputs in dom/base).
>> In my example, this results in 9 Tupfiles out of 561 that are parsed.
>
>
> This line of thought assumes we'll have Tupfiles committed into
> mozilla-central.

I was thinking if there are Tupfiles generated by configure, it is
easier to read the configuration if it is in the current directory,
rather than in a parent directory. For example, if the Makefile.in's
were all fixed according to the bugs you have filed so that they are
data-driven, then a generated Tupfile could fairly easily do:

obj-foo/some/directory/Tupfile:
include_rules
include Makefile.in
include $(MOZ_ROOT)/config/build.tup

Where the tup back-end is defined by config/build.tup. In other words,
the Makefile.in is still the build fragment checked in to
mozilla-central, not the Tupfile. If several Makefile.in's are grouped
into a parent directory, then the configure stage for a tup build
becomes more complex because it needs to find the right Makefile.in to
include, and the build.tup becomes more complex since it needs to
prune out the data for the current directory.

>
> Unless something radical happens, Mozilla will need to support building with
> make for an indefinite period of time. This doesn't mean we won't be able to
> leverage tup. It just means that the process of converting all our build
> systems to support tup (or something else) as well as ensuring that whatever
> we switch to is supported on tier 2 platforms will be long and drawn out.
> This necessitates a period where multiple build backends co-exist: make +
> others.

Yes, I agree.

>
> I'm going to assert that we'll want a single source of truth for the build
> definition because we don't want to deal with fragmentation. This will be
> make files until somebody presents a better solution (which can be converted
> to make files, of course).

I agree here too, so long as we're talking about the data-driven front-end.

>
> If a Tupfile can be converted to a Makefile easily, then maybe we'll check
> in Tupfiles. But, I don't see a non-C API in tup's source repository and I'm
> going to assert that nobody will want to build a translator in C or even in
> something that loosely binds the C API (like Python's ctypes module). Maybe
> we can write a standalone Tupfile parser easily. Is that better than
> something like YAML, GYP, or even our own DSL? I just don't know.
>
> What I'm getting at is that it is most likely that Tupfiles would be
> automatically generated from something else. And, at the point you do that,
> you can easily have the 1 Tupfile per directory which you say is desirable.
>
> Automatically generated Tupfiles may interfere with tup's magic since now
> tup needs to know if it needs to regenerate the Tupfiles. Does tup have
> support for handling systems where the Tupfiles may depend on something
> else, or would we need to put a frontend in front of tup that regenerated
> the Tupfiles before invoking tup?

I don't think tup would need to re-generate Tupfiles - it is likely
that it could read the data-driven build configuration files directly.
I think some stub Tupfiles would be generated by the configure
process, not managed by tup (somewhat similar to how the current
Makefiles are generated). I wasn't thinking of changing anything at
the configure stage.

>
>
>> I believe merging the Makefile.in's would only really benefit the
>> current recursive-make due to the parallel execution as you mention. A
>> non-recursive-make wouldn't be affected, since you can have a
>> non-recursive setup where each directory defines its own
>> configuration, and is included into one mega Makefile. The only real
>> difference in merging leaf nodes would be that it could save some
>> small amount of time by opening a few less files. However, a quick
>> test on my system shows that reading each Makefile individually takes
>> ~23ms, whereas if they are all combined into one huge Makefile,
>> reading from it takes 3ms (this is using 'cat', not 'make'). Make
>> would still have to build and walk the same size DAG in either case,
>> so you would only save at most 20ms by moving things into fewer
>> Makefile.in's.
>
>
> I disagree. Even with a recursive make, fewer, "fatter" Makefile.in's would
> likely yield more parallel execution because each make invocation would be
> more likely to saturate available cores. Think of a make invocation as this
> combine harvesting corn:
>
> ====== (make -j6)
> XX foo/Makefile
> XX foo/a/Makefile
> X foo/b/Makefile
> XXXX foo/c/Makefile
>
> We have 6 cores here, each represented by an =. We have our targets, each
> represented by an X. We "harvest" each row serially, because that is how
> recursive make works. For each row, we have wasted harvesting capacity
> because we don't have enough targets to process.
>
> Whereas if we merge Makefile.in's:
>
> ====== (make -j6)
> XXXXXXXXX foo/Makefile
>
> We now fully saturate our harvester. No idle cores. Of course, this assumes
> the DAG is such that it allows parallel execution. But, I think that is
> fair.

I think we agree here? I was saying that merging these only benefits
recursive make, as you describe with your example. You had also said:
"This effort will ultimately support de-recursified execution, if we
ever want that", which I took to mean a non-recursive make - is that
what you meant? What I was showing was that in a non-recursive setup,
you would only save about 20ms so it isn't really a benefit. The
parallelism in a non-recursive make would be the same whether you have
1000 Makefiles with 10 rules each or 1 Makefile with 10,000 rules - in
either case, a single make process will be viewing a single row of
10,000 X's. The only difference in this non-recursive case is the time
it takes to open 1000 files vs 1 file, which I measured at 20ms.

>
> Merging Makefile.in's also has another significant advantage: less files to
> manage. There's ~1500 Makefiles in mozilla-central today. That number is
> staggering and managing them all is unwieldy. A lot of us would like to see
> that reduced by a magnitude and then possibly some, just so things are
> easier to mentally grok and manage.

Personally I don't see this as an advantage, but that is my opinion. I
think it is easier to look at a build file in the current directory to
see how files in the current directory are built. You don't really
have less content to manage by merging files into the parent
directories, so I don't see how it makes it more understandable just
because it's in a single file.

-Mike

Justin Lebar

unread,
Jul 12, 2012, 6:42:42 PM7/12/12
to Mike Shal, dev-pl...@lists.mozilla.org, Gregory Szorc
>> Merging Makefile.in's also has another significant advantage: less files to
>> manage. There's ~1500 Makefiles in mozilla-central today. That number is
>> staggering and managing them all is unwieldy. A lot of us would like to see
>> that reduced by a magnitude and then possibly some, just so things are
>> easier to mentally grok and manage.
>
> Personally I don't see this as an advantage, but that is my opinion. I
> think it is easier to look at a build file in the current directory to
> see how files in the current directory are built. You don't really
> have less content to manage by merging files into the parent
> directories, so I don't see how it makes it more understandable just
> because it's in a single file.

Isn't the argument for merging Makefile.in's applicable to basically
anything else in the tree? "We have ~7000 header files in
mozilla-central. That number is staggering and managing them all is
unwieldy. A lot of us would like to see that reduced by a magnitude
and then possibly some, just so things are easier to mentally grok
and manage."

But I hope nobody seriously thinks that reducing our header file count
by a factor of 10 via merging would be a net decrease in complexity.

Smaller, more tightly-scoped files are better than fewer, larger
files, particularly with respect to merge conflicts.

If we really want to merge Makefiles for the purposes of recursive
make, I think we should either (a) make our directories larger (that
is, flatten the whole directory structure, rather than just the
makefiles), or (b) include lower-level directories' Makefile.in's into
higher-level directories'.

Our build would not be simpler if we merged 1,500 Makefiles into 150
files. We'd just have fewer files. :shrug:

Gregory Szorc

unread,
Jul 12, 2012, 6:53:40 PM7/12/12
to Justin Lebar, Mike Shal, dev-pl...@lists.mozilla.org
When I say merging Makefile.in's, I'm talking about b). I don't think
anyone is advocating merging Makefile.in's from unrelated components.
Instead, we're talking about things like moving Makefile.in's from the
test directories into the same Makefile.in where the source files are.
Basically, consolidate each tree of related Makefile.in's so related
build definitions are more cohesive.

FWIW, removing Makefile.in's from test directories would eliminate ~400
files. I don't think this would cause more merge conflicts, etc.

Justin Lebar

unread,
Jul 12, 2012, 7:00:04 PM7/12/12
to Gregory Szorc, Mike Shal, dev-pl...@lists.mozilla.org
>> If we really want to merge Makefiles for the purposes of recursive
>> make, I think we should either (a) make our directories larger (that
>> is, flatten the whole directory structure, rather than just the
>> makefiles), or (b) include lower-level directories' Makefile.in's into
>> higher-level directories'.
>
> When I say merging Makefile.in's, I'm talking about b). I don't think anyone
> is advocating merging Makefile.in's from unrelated components.

Sorry, when I said "include", I should have said "#include".

> Instead,
> we're talking about things like moving Makefile.in's from the test
> directories into the same Makefile.in where the source files are. Basically,
> consolidate each tree of related Makefile.in's so related build definitions
> are more cohesive.

Right, this is what I think is not a useful errand, for anything other
than recursive make speed.

> FWIW, removing Makefile.in's from test directories would eliminate ~400
> files. I don't think this would cause more merge conflicts, etc.

I guess it depends on how you structure the resultant files. But I'd
still prefer to see them #included...

Gregory Szorc

unread,
Jul 12, 2012, 7:30:49 PM7/12/12
to Mike Shal, dev-pl...@lists.mozilla.org
On 7/12/12 3:13 PM, Mike Shal wrote:
> I was thinking if there are Tupfiles generated by configure, it is
> easier to read the configuration if it is in the current directory,
> rather than in a parent directory. For example, if the Makefile.in's
> were all fixed according to the bugs you have filed so that they are
> data-driven, then a generated Tupfile could fairly easily do:
>
> obj-foo/some/directory/Tupfile:
> include_rules
> include Makefile.in
> include $(MOZ_ROOT)/config/build.tup
>
> Where the tup back-end is defined by config/build.tup. In other words,
> the Makefile.in is still the build fragment checked in to
> mozilla-central, not the Tupfile. If several Makefile.in's are grouped
> into a parent directory, then the configure stage for a tup build
> becomes more complex because it needs to find the right Makefile.in to
> include, and the build.tup becomes more complex since it needs to
> prune out the data for the current directory.

OK, I think I grok what you are saying.

So, one practical problem we have - even with data-driven Makefile's -
is that we have weird Makefile magic like
https://bugzilla.mozilla.org/show_bug.cgi?id=773202. Instead of:

FILES := a b c d e f

We have magic like:

COMPONENTS := foo bar

FOO_FILES := a b c
FOO_DEST := foo

BAR_FILES := d e f
BAR_DEST := bar

We are performing define..endef, $(eval), and $(call) magic in our
rules.mk to do the right thing.

I'm no expert with tup's syntax, but is it able to reproduce this behavior?

If not, we either need to change how things are defined in our
Makefile.in's or we introduce an abstraction layer that generates
Tupfiles. I was operating on the assumption we'd have to go with the latter.

I also like the abstraction layer because that means we have a data
structure representing our build system definition. And, we can
theoretically convert that into Visual Studio, Xcode, etc files.

> I think we agree here? I was saying that merging these only benefits
> recursive make, as you describe with your example. You had also said:
> "This effort will ultimately support de-recursified execution, if we
> ever want that", which I took to mean a non-recursive make - is that
> what you meant?

Yes, that's what I mean.

> What I was showing was that in a non-recursive setup,
> you would only save about 20ms so it isn't really a benefit. The
> parallelism in a non-recursive make would be the same whether you have
> 1000 Makefiles with 10 rules each or 1 Makefile with 10,000 rules

Yup. I'm also looking at this from a practical transition perspective.
We're not going to get to non-recursive make overnight. So, any
advancements in a recursive make system with more parallelism can be
realized immediately, before we attain non-recursive make. It will be
easier to convince people to help with the effort if they see an
immediate gain.

Nicholas Nethercote

unread,
Jul 12, 2012, 10:10:50 PM7/12/12
to Gregory Szorc, Mike Shal, dev-pl...@lists.mozilla.org
I just want to say what a pleasure it is to read this thread!
Constructive, thoughtful, discussion from multiple people that sounds
like it has a decent chance of working in practice.

Nick

Justin Lebar

unread,
Jul 13, 2012, 2:04:32 AM7/13/12
to Nicholas Nethercote, Mike Shal, dev-pl...@lists.mozilla.org, Gregory Szorc
If I can summarize some of the conclusions I think we've reached in this thread:

1) We would like to be able to experiment with different back-ends for
our build system (Visual Studio projects, tup, non-recursive Make,
etc.).

2) In order to do this, we need to be able to sanely parse our
Makefile.in's. Since many files contain make-isms, we'll have to
rewrite at least some of our Makefile.in's by hand.

3) Whatever format we use as the Makefile.in rewrite target will need
to support branches (for e.g. target platform), custom compile flags
for different files and directories, and other tricky things.

In my view, the first step here should be investigating different
front-end formats and converting a small part of the tree to whatever
format we decide is best, with a back-end targeting recursive make.

It would then be relatively easy to collapse some directories so they
trigger fewer recursive calls to make, either by collapsing the
front-end files (not my preference), or by somehow annotating the
front-end files to indicate that they should generate collapsed
makefiles.

In particular, if we think that the end goal is defined by the three
points above, I don't think that optimizing our existing Makefile.in
(by merging them or whatever) is necessarily all that helpful of a
step, since we're going to rewrite them anyway.

Like I said earlier, if we could demonstrate lightning-fast builds in
[pick your favorite directory], I bet we'd suddenly find lots of
people eager to move this project forward.

-Justin

Nicholas Nethercote

unread,
Jul 13, 2012, 2:25:34 AM7/13/12
to Justin Lebar, Mike Shal, dev-pl...@lists.mozilla.org, Gregory Szorc
On Thu, Jul 12, 2012 at 11:04 PM, Justin Lebar <justin...@gmail.com> wrote:
>
> Like I said earlier, if we could demonstrate lightning-fast builds in
> [pick your favorite directory], I bet we'd suddenly find lots of
> people eager to move this project forward.

[bayou:~/moz/mi4] time make -s -C d64/js/src
Making all in include
Making all in testsuite
Making all in man

real 0m0.486s
user 0m0.288s
sys 0m0.096s


That's for a no-change build. I have a fast machine, and I suspect
Mike wouldn't call half a second "lightning fast", but compared to
overall build times it's nothing. A number of other directories are
fine just like js/src/, it's the ones where we (have to) do stupid
things like forcing -j1 or regenerating files every time that kill us,
AFAICT.

Nick

Gregory Szorc

unread,
Jul 13, 2012, 4:35:45 AM7/13/12
to Justin Lebar, dev-pl...@lists.mozilla.org, Mike Shal, Nicholas Nethercote
On 7/12/12 11:04 PM, Justin Lebar wrote:
> If I can summarize some of the conclusions I think we've reached in this thread:
>
> 1) We would like to be able to experiment with different back-ends for
> our build system (Visual Studio projects, tup, non-recursive Make,
> etc.).
>
> 2) In order to do this, we need to be able to sanely parse our
> Makefile.in's. Since many files contain make-isms, we'll have to
> rewrite at least some of our Makefile.in's by hand.
>
> 3) Whatever format we use as the Makefile.in rewrite target will need
> to support branches (for e.g. target platform), custom compile flags
> for different files and directories, and other tricky things.

I am over 95% in agreement with the above 3 points, especially #1 and
#2. I will split hairs on #3.

I would argue we only need 1 "tier-1" build system that supports the
whole enchilada. Everything else could be a subset. If it can do the
basics (e.g. build Firefox) and can do it faster, that should be good
enough, right? That being said, we should aim for something that can run
on the releng infrastructure and on tier-2 and below build
configurations because that would be splendid.

> In my view, the first step here should be investigating different
> front-end formats and converting a small part of the tree to whatever
> format we decide is best, with a back-end targeting recursive make.

I think we already have that format: static and deterministic Makefile.in's.

Don't get me wrong, I would love to eliminate the make foot gun and
switch to something else. But, it isn't strictly required because we can
use pymake's API to extract data. (I've been working on patches to
pymake that provide APIs for inspecting, modifying, and even rewriting
make files - see [1] and [2].) We can also use pymake's API for auditing
Makefile.in's to keep this foot gun in check.

I would instead propose the first step to be ridding our Makefile.in's
of all the non-deterministic bits. This is the important part: we can't
convert to another (sanely parsed) front-end format if we have
make-isms. As a bonus, converting to something else can be completely
automated if the source is static.

If nothing else, having the source of truth as Makefile.in's does
eliminate a potential point of failure: the conversion to Makefile's. My
experience operating high-availability services tells me we shouldn't
make changes unless we absolutely need to. I don't think we absolutely
need to make this change.

I think switching DSLs is inevitable because make files are suboptimal.
But, it really doesn't matter. I'm just voicing a conservative opinion here.

> In particular, if we think that the end goal is defined by the three
> points above, I don't think that optimizing our existing Makefile.in
> (by merging them or whatever) is necessarily all that helpful of a
> step, since we're going to rewrite them anyway.

I agree with this rationalization. My suggestions for optimizing
existing Makefile.in's are mostly about diversifying our bets. If we
fail epically, it would be nice to have gains to fall back on. And, as
long as we are talking about "experimenting" with "alternative" build
backends, we will still have people on the existing system who will see
gains from work here. Of course, this is dependent on us persisting with
Makefile.in's as the build definition files. If we switch DSLs, all bets
are off because the output format can be whatever we want.

So, I guess the real question is "how bold do we want to be?"

Will the build peers support an effort that requires up-front conversion
of the build definition to something not Makefile.in's? Or, should we
build on top of static Makefile.in's and make a superficial DSL
conversion later once things are proved out? I don't know either.

Gregory

[1]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/buildsplendid/makefile.py
[2]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/buildsplendid/extractor.py

Mike Hommey

unread,
Jul 13, 2012, 4:44:20 AM7/13/12
to Gregory Szorc, Nicholas Nethercote, dev-pl...@lists.mozilla.org, Mike Shal, Justin Lebar
On Fri, Jul 13, 2012 at 01:35:45AM -0700, Gregory Szorc wrote:
> So, I guess the real question is "how bold do we want to be?"
>
> Will the build peers support an effort that requires up-front
> conversion of the build definition to something not Makefile.in's?
> Or, should we build on top of static Makefile.in's and make a
> superficial DSL conversion later once things are proved out? I don't
> know either.

I think a lot of people have very little knowledge of our build system,
and it would be sad to start messing with the little understanding they
have. That is, I think for the time being, we should stick to the common
knowledge that adding a source file needs adding the file to CPPSRCS in
Makefile.in, etc.

We can parse that and do whatever we want from it.

Mike

Justin Lebar

unread,
Jul 13, 2012, 10:26:34 AM7/13/12
to Gregory Szorc, dev-pl...@lists.mozilla.org, Mike Shal, Nicholas Nethercote
> My experience
> operating high-availability services tells me we shouldn't make changes
> unless we absolutely need to. I don't think we absolutely need to make this
> change.

If you think we could make this work without rewriting all the
Makefile's in the tree, that would be great! I've been operating
under the assumption that that's too hard.

Could you be more specific about what you mean by a "static" Makefile?
Is it just a Makefile which doesn't run any external commands?

> So, I guess the real question is "how bold do we want to be?"
>
> Will the build peers support an effort that requires up-front conversion of
> the build definition to something not Makefile.in's? Or, should we build on
> top of static Makefile.in's and make a superficial DSL conversion later once
> things are proved out? I don't know either.
>

Justin Lebar

unread,
Jul 13, 2012, 10:28:26 AM7/13/12
to Mike Hommey, Nicholas Nethercote, Mike Shal, dev-pl...@lists.mozilla.org, Gregory Szorc
> I think a lot of people have very little knowledge of our build system,
> and it would be sad to start messing with the little understanding they
> have. That is, I think for the time being, we should stick to the common
> knowledge that adding a source file needs adding the file to CPPSRCS in
> Makefile.in, etc.
>
> We can parse that and do whatever we want from it.

Inasmuch as people with little knowledge of the build system follow
the procedure of adding a file to the long list of cpp files in their
directory's Makefile.in, I don't think changing to some other format
would make people's lives much harder...

That is, people with little knowledge of the build system have little
to lose by a conversion to some other format.

Gregory Szorc

unread,
Jul 13, 2012, 2:01:44 PM7/13/12
to Justin Lebar, dev-pl...@lists.mozilla.org, Mike Shal, Nicholas Nethercote
On 7/13/12 7:26 AM, Justin Lebar wrote:
>> My experience
>> operating high-availability services tells me we shouldn't make changes
>> unless we absolutely need to. I don't think we absolutely need to make this
>> change.
>
> If you think we could make this work without rewriting all the
> Makefile's in the tree, that would be great! I've been operating
> under the assumption that that's too hard.
>
> Could you be more specific about what you mean by a "static" Makefile?
> Is it just a Makefile which doesn't run any external commands?

Essentially it is a Makefile that consists of purely variable
assignments and has no external dependencies, i.e. no $(shell) or
$(wildcard). It also likely avoids make-isms like define..endef. This
way, we can feed the Makefile into pymake's API and resolve things
without having side-effects.

We also want no explicit rules so implementing a new build backend is
"just" a matter of reimplementing the logic in rules.mk.

Robert Kaiser

unread,
Jul 13, 2012, 2:27:13 PM7/13/12
to
Gregory Szorc schrieb:
> I disagree. Even with a recursive make, fewer, "fatter" Makefile.in's
> would likely yield more parallel execution because each make invocation
> would be more likely to saturate available cores.

I'm not entirely sure about that, as we already support building multiple
directories in parallel with our makefiles (see PARALLEL_DIRS). Of
course this only works if those directories aren't interdependent (which
is why we have been very slow and careful to introduce this in many
cases), but that's the same for a flat Makefile.

Robert Kaiser

Mike Shal

unread,
Jul 16, 2012, 11:14:59 PM7/16/12
to Gregory Szorc, dev-pl...@lists.mozilla.org
Hi Gregory,

On Thu, Jul 12, 2012 at 7:30 PM, Gregory Szorc <g...@mozilla.com> wrote:
> On 7/12/12 3:13 PM, Mike Shal wrote:
>>
>> I was thinking if there are Tupfiles generated by configure, it is
>> easier to read the configuration if it is in the current directory,
>> rather than in a parent directory. For example, if the Makefile.in's
>> were all fixed according to the bugs you have filed so that they are
>> data-driven, then a generated Tupfile could fairly easily do:
>>
>> obj-foo/some/directory/Tupfile:
>> include_rules
>> include Makefile.in
>> include $(MOZ_ROOT)/config/build.tup
>>
>> Where the tup back-end is defined by config/build.tup. In other words,
>> the Makefile.in is still the build fragment checked in to
>> mozilla-central, not the Tupfile. If several Makefile.in's are grouped
>> into a parent directory, then the configure stage for a tup build
>> becomes more complex because it needs to find the right Makefile.in to
>> include, and the build.tup becomes more complex since it needs to
>> prune out the data for the current directory.
>
>
> OK, I think I grok what you are saying.
>
> So, one practical problem we have - even with data-driven Makefile's - is
> that we have weird Makefile magic like
> https://bugzilla.mozilla.org/show_bug.cgi?id=773202. Instead of:
>
> FILES := a b c d e f
>
> We have magic like:
>
> COMPONENTS := foo bar
>
> FOO_FILES := a b c
> FOO_DEST := foo
>
> BAR_FILES := d e f
> BAR_DEST := bar
>
> We are performing define..endef, $(eval), and $(call) magic in our rules.mk
> to do the right thing.
>
> I'm no expert with tup's syntax, but is it able to reproduce this behavior?

Hmm, off-hand it is a little hard for me to say. It is easy to parse
the variable declarations, of course. However, tup's standard parser
doesn't have any string manipulating functions like make, and
certainly nothing as complex as eval/call macros, so the back-end may
not be implementable with the regular Tupfile syntax. For cases like
this where the basic parsing syntax is not sufficient, tup supports
running external scripts to do the heavy-lifting. For example, the
Tupfile could run a python script to read the Makefile.in and generate
the corresponding rules for tup. Note that the python script would
only run when tup decides to re-parse that directory, such as when a
file is added/removed, or the Tupfile/python script changes.
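
Just to illustrate the idea (this is a toy, not something that would handle
the real tree - the compile line is a placeholder and line continuations
etc. are ignored), such a script could be as simple as reading the variable
assignments and printing tup rules on stdout:

import re
import sys

def read_vars(path):
    variables = {}
    for line in open(path):
        m = re.match(r"^(\w+)\s*[:+]?=\s*(.*)$", line)
        if m:
            name, value = m.groups()
            variables.setdefault(name, []).extend(value.split())
    return variables

variables = read_vars(sys.argv[1] if len(sys.argv) > 1 else "Makefile.in")
for src in variables.get("CPPSRCS", []):
    obj = src.rsplit(".", 1)[0] + ".o"
    # emit a tup rule: ": input |> command |> output"
    print(": %s |> $(CXX) $(CXXFLAGS) -c %%f -o %%o |> %s" % (src, obj))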

>
> If not, we either need to change how things are defined in our Makefile.in's
> or we introduce an abstraction layer that generates Tupfiles. I was
> operating on the assumption we'd have to go with the latter.

I don't think it makes sense to change Makefile.in's just to support
tup at this time (aside from those fixes you have already logged as
bugs), unless a small fix to a Makefile.in would save a lot of re-work
in a Tupfile or something.

>
> I also like the abstraction layer because that means we have a data
> structure representing our build system definition. And, we can
> theoretically convert that into Visual Studio, Xcode, etc files.

Do you consider static data-driven Makefile.in's an abstraction layer?
Or are you referring to something else?

Thanks,
-Mike

Gregory Szorc

unread,
Jul 16, 2012, 11:42:30 PM7/16/12
to Mike Shal, dev-pl...@lists.mozilla.org
On 7/16/12 8:14 PM, Mike Shal wrote:

>> If not, we either need to change how things are defined in our Makefile.in's
>> or we introduce an abstraction layer that generates Tupfiles. I was
>> operating on the assumption we'd have to go with the latter.
>
> I don't think it makes sense to change Makefile.in's just to support
> tup at this time (aside from those fixes you have already logged as
> bugs), unless a small fix to a Makefile.in would save a lot of re-work
> in a Tupfile or something.

We are changing our Makefile.in's to support N build systems, not just tup.

The general plan of attack is to provide some autoconf glue that allows
different "generators" to be used. By default, we'll go with the
existing straight conversion (Makefile.in -> Makefile). But, you could
easily swap in a Makefile.in -> Tupfile converter.

As I understand things (and Mike Hommey should confirm this), the set of
build configuration/input files will be saved to config.status along
with whatever build backend generator script was passed into configure.
When config.status runs, it will invoke the generator script, which will
write/update the build backend files appropriately.

We /think/ this is generic enough to work with any build backend. Is
this sufficient for tup? If not, what do we need to change?

>> I also like the abstraction layer because that means we have a data
>> structure representing our build system definition. And, we can
>> theoretically convert that into Visual Studio, Xcode, etc files.
>
> Do you consider static data-driven Makefile.in's an abstraction layer?
> Or are you referring to something else?

I imagine that at some point we'll probably have code in the tree that
parses Makefile.in's (or whatever) into some monolithic data structure
and provides APIs for inspecting that. This will probably be along the
lines of extractor.py from my build-splendid branch. This solution will
abstract away the existence of Makefile.in's and allow build backends to
consume Python rather than having to worry about consuming the source.
Think of it as an API to the build configuration. All build backend
generators would likely use this API because it makes sense to do so (to
avoid wheel reinvention).
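
As a rough, hypothetical sketch of the shape such an API could take (class
and field names are invented; extractor.py is the real starting point):

class SourceDirectory(object):
    def __init__(self, relpath):
        self.relpath = relpath
        self.cpp_sources = []       # e.g. contents of CPPSRCS
        self.exported_headers = []  # e.g. contents of EXPORTS
        self.idl_files = []         # e.g. contents of XPIDLSRCS
        self.defines = {}

class BuildConfig(object):
    def __init__(self):
        self.directories = {}

    def directory(self, relpath):
        return self.directories.setdefault(relpath, SourceDirectory(relpath))

    def all_cpp_sources(self):
        for d in self.directories.values():
            for src in d.cpp_sources:
                yield d.relpath, src

# A backend (recursive make, non-recursive make, tup, Visual Studio, ...)
# would take a populated BuildConfig and emit its own files from it.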

Mike Shal

unread,
Jul 16, 2012, 11:56:53 PM7/16/12
to Gregory Szorc, dev-pl...@lists.mozilla.org
Hi Gregory,

On Mon, Jul 16, 2012 at 11:42 PM, Gregory Szorc <g...@mozilla.com> wrote:
> On 7/16/12 8:14 PM, Mike Shal wrote:
>
>>> If not, we either need to change how things are defined in our
>>> Makefile.in's
>>> or we introduce an abstraction layer that generates Tupfiles. I was
>>> operating on the assumption we'd have to go with the latter.
>>
>>
>> I don't think it makes sense to change Makefile.in's just to support
>> tup at this time (aside from those fixes you have already logged as
>> bugs), unless a small fix to a Makefile.in would save a lot of re-work
>> in a Tupfile or something.
>
>
> We are changing our Makefile.in's to support N build systems, not just tup.

Righto :)

>
> The general plan of attack is to provide some autoconf glue that allows
> different "generators" to be used. By default, we'll go with the existing
> straight conversion (Makefile.in -> Makefile). But, you could easily swap in
> a Makefile.in -> Tupfile converter.
>
> As I understand things (and Mike Hommey should confirm this), the set of
> build configuration/input files will be saved to config.status along with
> whatever build backend generator script was passed into configure. When
> config.status runs, it will invoke the generator script, which will
> write/update the build backend files appropriately.
>
> We /think/ this is generic enough to work with any build backend. Is this
> sufficient for tup? If not, what do we need to change?

That sounds reasonable to me - I don't think there would be a problem
with this for tup.

>
>
>>> I also like the abstraction layer because that means we have a data
>>> structure representing our build system definition. And, we can
>>> theoretically convert that into Visual Studio, Xcode, etc files.
>>
>>
>> Do you consider static data-driven Makefile.in's an abstraction layer?
>> Or are you referring to something else?
>
>
> I imagine that at some point we'll probably have code in the tree that
> parses Makefile.in's (or whatever) into some monolithic data structure and
> provides APIs for inspecting that. This will probably be along the lines of
> extractor.py from my build-splendid branch. This solution will abstract away
> the existence of Makefile.in's and allow build backends to consume Python
> rather than having to worry about consuming the source. Think of it as an
> API to the build configuration. All build backend generators would likely
> use this API because it makes sense to do so (to avoid wheel reinvention).

Ahh, I see now. Thanks for the feedback!

-Mike

Gregory Szorc

unread,
Jul 24, 2012, 2:40:44 PM7/24/12
to Mike Shal, dev-pl...@lists.mozilla.org
Mike, et al,

I wanted to let you know that I have rebased my build-splendid branch on
top of current mozilla-central and have refactored it to work with mach.
The branch is still at [1].

All the code now lives under build/pylib/mozbuild. What's left of the
old code lives in build/pylib/mozbuild/mozbuild/buildconfig. That
directory also holds most new functionality not being tracked in bug
751795 because I'm trying to keep the patches separate so it is easier
to uplift them when the time comes. I've also checked in a custom
version of pymake to build/pylib/pymake which contains necessary patches
that haven't made it into upstream yet.

I've done a lot of refactoring as part of the rebase. frontend.py [2]
contains the bits that assemble the set of input/frontend files for the
build system. mozillamakefile.py [3] contains code for converting
Makefile.in's into generic data structures representing metadata from
them (e.g. the set of IDL files, files to export, etc).

The most important change is the introduction of a new concept:
backends. A backend is effectively an interface that takes a frontend
instance and can generate() backend files/state which can be used to
build() the tree. The LegacyBackend [4] implements Mozilla's existing
build backend: it converts Makefile.in's to Makefile's and runs make.

The HybridMakeBackend [5] is much more exciting. It takes the extracted
data from Makefile.in's (using MozillaMakefile.get_data_objects), writes
out a non-recursive .mk file corresponding to each Makefile.in, and
strips variables from the Makefile related to actions now performed by
the non-recursive make files. During building, it executes both the old
(slow) non-recursive make system and the new (insanely fast) recursive
make system. Let me say that again: *I have non-recursive make working
in mozilla-central in harmony with the existing recursive make backend*
(for some functionality, anyway). The export tier works very well and
already shaves a few minutes off of build times. It also has true no-op
builds that take less than 3s! I'm working on C++ compilation (current
implementation is very hacky, so please don't look at it).

Anyway, you could use HybridMakeBackend as a starting point for
implementing a TupBackend. If you want to do that, you'll need some
plumbing in the CLI [6].
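
For anyone picking that up, here is a rough, hypothetical sketch of the
backend shape described above - the real classes live in the branch linked
below, and every name here is invented for illustration:

import subprocess

class Backend(object):
    def __init__(self, frontend):
        # frontend: the parsed build definition (extracted Makefile.in data)
        self.frontend = frontend

    def generate(self):
        """Write backend-specific build files (Makefiles, Tupfiles, ...)."""
        raise NotImplementedError

    def build(self):
        """Invoke the backend's build tool (make, tup upd, ...)."""
        raise NotImplementedError

class TupBackend(Backend):
    def generate(self):
        for srcdir in self.frontend.directories():  # hypothetical API
            # write obj-dir/<srcdir>/Tupfile from the extracted data
            pass

    def build(self):
        subprocess.check_call(["tup", "upd"], cwd=self.frontend.topobjdir)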

Currently, the workflow is suboptimal:

$ ./mach configure
$ ./mach buildbuild hybridmake

(or |./mach buildconfig hybridmake| to just do generation without building).

Mike Hommey has plans to make configure support customizing the build
backend. For now, we just have configure do its default "output
Makefile's" behavior then patch over it inside the generate() of the
build splendid backend.

There are many parts still very alpha and there is a lot of missing
functionality (like intelligently detecting when a new backend
generation is required). But, I think things are stable enough for you
to play around with. I'm pretty sure you have enough to create a working
Tup backend (for some functionality, anyway).

For people wanting to use this, I recommend not attempting to use it
just yet. It is still broken in a few major areas. I plan to expose a
standalone patch that one can just |hg qimport| or |git apply| once I'm
comfortable with people testing this as part of their day-to-day work.
And, yes, I'm attempting to upstream patches so this can eventually land
in m-c.

Gregory

[1] https://github.com/indygreg/mozilla-central/tree/build-splendid
[2]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/pylib/mozbuild/mozbuild/buildconfig/frontend.py
[3]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/pylib/mozbuild/mozbuild/buildconfig/mozillamakefile.py
[4]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/pylib/mozbuild/mozbuild/buildconfig/backend/legacy.py
[5]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/pylib/mozbuild/mozbuild/buildconfig/backend/hybridmake.py
[6]
https://github.com/indygreg/mozilla-central/blob/build-splendid/build/pylib/mozbuild/mozbuild/cli/buildconfig.py