Is the purpose of a ninja build to give the same result as a full rebuild?


Marc Delorme

Apr 16, 2021, 1:02:25 PM
to ninja-build
I am starting this fundamental discussion because I realized the answer has a real impact on some feature design decisions. It relates to at least 3 PRs: "Provide resiliency against inputs changing during the build" (PR #1943), "Make outputs modified outside of the build system considered dirty" (PR #1951), "make ninja handle dynamic outputs" (PR #1953), and to this other discussion: "How to make ninja aware of dynamic implicit outputs".

Is the purpose of a ninja build to give the same result as a full rebuild? 
In other words, for a given state of the build input files, should running ninja produce the same result regardless of the state of the outputs, the build log, and the deps log?

If the answer is yes, then:
  1. If an output is modified, it should be considered dirty. If not, running ninja will not re-generate it, and the result would differ from a full rebuild (see the sketch after this list).
  2. It should be possible to inform ninja about dynamically generated output files. Otherwise, when a dynamic output file gets modified or deleted, running ninja will not re-generate it (because it is not aware of it), and the result would differ from a full rebuild.
  3. If an input file of a rule command changes while the command is executing but before the output is generated, ninja should either fail or re-run the command. Otherwise, after ninja reports success, the state of the build inputs will not match the state of the outputs (i.e. the result would differ from a full rebuild done right after success was reported).
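To make point 1 concrete, here is a minimal sketch of today's behaviour (the rule and file names are made up):

    rule copy
      command = cp $in $out

    build out.txt: copy in.txt

After one successful build, hand-editing out.txt and running ninja again does nothing, because out.txt is still newer than in.txt; the tree now differs from what a full rebuild would produce, and it stays that way until in.txt changes.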
The points above push ninja toward better reproducibility. But it is possible this better reproducibility goes against some practical scenarios:

About 1., if for some reason you want to modify an intermediate or output file on purpose, ninja will overwrite your work the next time you run it.

About 3., if input files are modified while a rule command is running, it is correct to say the rule execution failed (since the result does not match the current inputs), but in practice the result is still consistent, it just does not match the current inputs. Also, if ninja were to re-run the rule command, it could in theory get stuck in an infinite loop.

Ben Boeckel

Apr 18, 2021, 1:46:25 PM
to Marc Delorme, ninja-build
On Thu, Apr 15, 2021 at 21:04:13 -0700, Marc Delorme wrote:
> *Is the purpose of a ninja build to give the same result as a full
> rebuild? *
> In other words, for a given state of the build input files, should running
> ninja produce the same result regardless of the state of the outputs, the
> build log, and the deps log?

FWIW, I don't think this is a great goal to have *at the level ninja is
working*. Overall, it's a good goal, but I think it is too high-level
for something like ninja to solve.

> If the answer is yes, then:
>
> 1. If an output is modified, it should be considered dirty. If not,
> running ninja will not re-generate it, and the result would differ from
> a full rebuild.
> 2. It should be possible to inform ninja about dynamically generated
> output files. Otherwise, when a dynamic output file gets modified or
> deleted, running ninja will not re-generate it (because it is not aware
> of it), and the result would differ from a full rebuild.
> 3. If an input file of a rule command changes while the command is
> executing but before the output is generated, ninja should either fail
> or re-run the command. Otherwise, after ninja reports success, the state
> of the build inputs will not match the state of the outputs (i.e. the
> result would differ from a full rebuild done right after success was
> reported).

Note that for a truly strong statement about this, the environment must
be controlled, logged, and tracked. Running part of a build under
`LD_PRELOAD` is not likely to agree with a from-scratch build. Likewise
with `CCACHE_DISABLE`, mount namespaces, etc. If this is going to be a
stated goal, ninja will have to commit to being *much* more involved in the
commands it is executing. You'll probably end up with Tup[1] in the end
which I view as having practical limitations to working in the existing
computing spectrum.

Depending on the intended strength of this goal, certain things would
likely become verboten:

- commands which don't declare *all* their outputs
- commands which use files that aren't listed as inputs

But there are commands for which dyndep doesn't work because the set of
output files is only known after running the command (e.g., encapsulating
a "build and install of a dependency" as a ninja command). This is where
I, personally, find Tup lacking: any such "utility" commands are forced
out of the "main build" and require a sidecar process for executing
them.
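For readers unfamiliar with dyndep, a rough sketch (all names are
illustrative): the dyndep file must be produced by an earlier edge, so the
extra outputs have to be discoverable *before* the real command runs.

    rule scan
      command = scanner $in > $out

    build foo.dd: scan foo.src

    rule compile
      command = compiler -c $in -o $out

    build foo.o: compile foo.src || foo.dd
      dyndep = foo.dd

with foo.dd containing something like:

    ninja_dyndep_version = 1
    build foo.o | foo.extra: dyndep

A "build and install a dependency" step has no such earlier point at which
its full output set can be written down.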

> The points above push ninja toward better reproducibility. But it is
> possible this better reproducibility goes against some practical scenarios:

I think reproducibility is likely better handled at a different level.
Tool-specific caching solutions (and their metrics) at the low level and
artifact comparison tools at the high level. Tools can be sensitive to
inode orders, random environment variables, use pointer sorting
internally, etc. IMO, ninja isn't going to solve these issues in general
(at least without ignoring its current "small build tool" status).

> About 1., if for some reason you want to modify an intermediate or output
> file on purpose, ninja will overwrite your work the next time you run it.

FWIW, I have done this myself at times (e.g., to avoid
regeneration or other incidental changes from interfering with my
current debugging task). Being able to "trick" or "short-circuit" the
build in this way is quite handy to avoid expensive turn-around times
when debugging parts of a build.
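A typical version of that trick, with made-up paths:

    $ ninja                      # build everything once
    $ vi build/gen/config.h      # hand-edit a generated intermediate
    $ ninja my_target            # only the consumers of config.h rebuild

Since ninja does not treat the modified output as dirty, the hand edit
survives and only the edges downstream of it re-run, which is exactly the
short-circuit described above.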

> About 3., if input files are modified while a rule command is running, it
> is correct to say the rule execution failed (since the result does not
> match the current inputs), but in practice the result is still consistent,
> it just does not match the current inputs. Also, if ninja were to re-run
> the rule command, it could in theory get stuck in an infinite loop.

This is especially bad in projects without good internal dependency
tracking.

--Ben

[1] http://gittup.org/tup/