Hi Rafael,
On Wed, Jul 4, 2012 at 11:36 PM, Rafael Ávila de Espíndola
<respi...@mozilla.com> wrote:
>> Any feedback, questions, or concerns are welcome.
>
>
> First of all congratulations. The paper, tup itself and getting it to build
> firefox are all impressive.
Thanks!
>
> Having seen better (but still Alpha, in your terminology) build systems when
> working in llvm I would love to see firefox get a better one. Tup's support
> for deleting old files in particular would be handy to replace our "make
> package".
>
> I guess the first question is how far do you want to push this. Do you want
> to try to replace the current build system?
In the near-term I'd like to provide tup as an alternative. I think it
can be done by adding a separate tup back-end, and re-using the
per-directory Makefile.in files. I don't see any reason to change the
Makefile.in structure (what I called the "front-end" for lack of a
better term) - it already looks to me like it is very explicit and
easy to understand and modify. I'm not a Mozilla dev though so correct
me if I'm wrong here :)
The idea is that even if you are perfectly happy with make, you can
still use it, and go about adding CPPSRCS and such to Makefile.in.
Both systems should still work with those updates - they wouldn't have
to be done in make and then in tup.
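For illustration, a typical per-directory Makefile.in today looks roughly like this (nsFoo.cpp is a made-up file name, and details vary across the tree):

    DEPTH = ../..
    topsrcdir = @top_srcdir@
    srcdir = @srcdir@
    VPATH = @srcdir@
    include $(DEPTH)/config/autoconf.mk
    CPPSRCS = nsFoo.cpp
    include $(topsrcdir)/config/rules.mk

The variable assignments (CPPSRCS and friends) are what a tup back-end would consume; only the included rules.mk machinery would be swapped out.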
If in the long-term people have switched over to tup, then it might be
worth deprecating the make back-end. I don't think that's something
that would happen for a few years, though.
>
> On tup itself:
>
> * What is the plan for configuration? Just keep autoconf for now? If that is
> possible it should save a lot of work in porting a lot of m4.
I wasn't planning to change anything here. Perhaps the configure
script would have a '--enable-tup' flag to generate Tupfiles, or the
Tupfiles could always be generated alongside the Makefiles. Tup does
have support for a kconfig-style configuration file, but I don't see
any reason to use that here, at least not initially.
>
> * If I understand it correctly, tup "plugs" in the OS to find out which
> files are used by a given command execution. It uses this both to error if
> a dependency is not declared (really neat!) and to collect extra
> dependencies like the ones that are traditionally retrieved from "gcc -MM".
> Is that correct?
Yes, that is correct. It also checks the output files, so if the
Tupfile has something like:
: |> gcc -c foo.c -o bar.o |> foo.o
If the output (foo.o) doesn't match what gcc actually does
(create bar.o), then tup will flag the rule as an error.
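The fix in that case is just to declare the output that gcc actually creates:

    : |> gcc -c foo.c -o bar.o |> bar.o
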
This does occasionally cause issues with some languages, since you
have to explicitly specify the outputs. There is a recent thread in
tup-users about compilers that create different files depending on the
input files' contents, for example. Part of the reason for my
experiment was to see if there were going to be any roadblocks like
this in the mozilla-central tree (in particular with the various
scripts that generate/process files), but I don't believe there are
any.
>
> * Does it handle addition of header files to a search path before the
> current one was found? Just a curiosity, no build system I know handles that
> :-)
Yes, it should handle this. For example, with a rule like:
: |> gcc foo.c -Ia -Ib -o foo |> foo
and foo.c does:
#include "foo.h"
Then tup will create an entry in its database for both a/foo.h and
b/foo.h, even if only b/foo.h exists. This is a placeholder in the
event that a/foo.h is later created, since it would affect the build
(internally in tup, these are called 'ghost' nodes). I think the Vesta
system can also handle cases like this (http://www.vestasys.org/), but
when I first tried it several years ago I wasn't able to get it to
run. I also didn't like how it was so tightly integrated with version
control, since I wanted to try out the new-fangled DVCSs.
I should mention that the reason tup puts so much effort into handling
situations like this is because of my Rule #2 - where doing
incremental builds must give the same result as a full build. With
make I am never sure if I need to do a clean build after a 'git pull',
or if it's safe to just do a 'make' from the top. Consider also a
bisection, where you are bouncing around the revision history. If an
incremental 'make' gives an incorrect build, then it may throw off the
bisection results and point you to the wrong commit. Or, you could try
to be safe and waste hours at each commit doing a full rebuild.
Neither scenario is desirable for me - I'd rather the build system
just do the right thing. For tup's own development tree, I don't think
I've had to clean out the build area in years. Every step is just
incremental from the last.
I should also mention that the above case was broken on Windows if the
"a" directory didn't exist during the first build, but I'm pushing out
a fix now. Hey, nothing's perfect! :)
>
> * How hard is the "plug" to port? Windows, OS X and Linux cover most
> users/developers, but firefox does have a very long tail. Can tup fall back
> to running gcc -MM?
Unfortunately, I don't have a good answer for this. It largely depends
on what facilities are available on the target OS. For Linux and OS X
(with FreeBSD on the way), it uses a temporary FUSE file-system to run
the sub-process in. Windows has a different implementation that uses
DLL injection, which required someone who knew what they were doing on
Windows to implement. These porting issues are why I suggest keeping
the make build as-is for the foreseeable future, so that no
functionality is lost.
I don't know that gcc -MM support would be very desirable. For one, it
only covers that specific sub-process, so things like the python
scripts that are used in the build process would need their own
treatment. Additionally, I believe the -M family of flags in gcc only
report files that it actually found, so it breaks the previous case of
a header not being found earlier in an include path. Although somewhat
rare, it is nice to be able to just type 'tup upd' and not be suspect
about the results.
>
> * If gcc -MM is not an option, is there a plan to support a backend
> targeting tup in cmake, gyp or another generator? Maintaining two build
> systems is hard, especially if one of them is our current build system :-(
I haven't looked into getting tup supported by cmake or gyp much
myself. There was a cmake thread a while back about it:
http://cmake.3232098.n2.nabble.com/Effort-to-create-a-new-generator-tup-td4946808.html
But I don't know if any real progress was made. I think one of the
issues was that cmake generates the Makefile to re-run cmake if a
CMakeLists.txt changes (thus updating the Makefile), and tup doesn't
allow the Tupfiles to be generated as part of its build process.
Another option in these cases, instead of having cmake/gyp generate
Tupfiles, is to have tup read the CMakeLists.txt (or whatever) files
using an external script via the 'run' directive. So instead of
running 'cmake .; tup upd', it would just be 'tup upd', and it reads
most of the configuration information directly from CMakeLists.txt. I
haven't tried it though, so that may not work at all.
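For reference, the 'run' directive executes a script and treats whatever rules it prints on stdout as if they were written in the Tupfile, so the sketch would be something like this (the script name is made up, and writing it is the hard part):

    run ./cmake2tup.py CMakeLists.txt

where cmake2tup.py would parse the CMakeLists.txt and print out
': foo.c |> gcc -c %f -o %o |> %B.o'-style rules.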
-Mike