Joseph Garvin <joseph....@gmail.com>
> letting the build system know you only depend
> on inner_file_1 is better than depending on files.tar because the build
> system can specifically check if inner_file_1 changed, and avoid rebuilding
> if it hasn't even if other files inside the tarball have changed
I am pretty sure that for granular dependencies you do not want to
depend on files.tar directly, but on inner_file_1, which is extracted
from files.tar by inner_file_1.do and itself depends solely on files.tar.
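For illustration, a minimal sketch of such an inner_file_1.do, assuming
an apenwarr-style redo where the target is captured from stdout (the
member name inner_file_1 is just the example from above):

```shell
# inner_file_1.do -- rebuild inner_file_1 by extracting it from files.tar.
# Record the dependency: this target only needs rebuilding when
# files.tar itself changes.
redo-ifchange files.tar
# Extract the single member to stdout; redo atomically captures
# stdout as the target file.
tar -xOf files.tar inner_file_1
```

Targets that actually consume inner_file_1 then redo-ifchange it, so a
change to an unrelated member of the tarball only re-runs this cheap
extraction, not the downstream build steps.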
I talked with people about multiple outputs at Chaos Communication Camp
recently and some pretty creative approaches exist; also people seem to
reject the tarball solution mainly on aesthetic grounds.
A similar thing occurs when you tell people who want out-of-tree builds
to just mount an overlay filesystem. The majority of people recoil, even
though they would clearly be capable of doing so, i.e. their OS offers
filesystem overlays. People seem to think that it is “weird” and should
not be used, despite accepting Docker and other software that are built
on overlay filesystems.
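To make the suggestion concrete, here is a hypothetical sketch using
Linux overlayfs (requires root or a user namespace; the directory names
src/, build-upper/, build-work/ and build-merged/ are my assumptions,
not anything mandated by redo):

```shell
# Out-of-tree build via an overlay mount: the source tree stays
# read-only, all build outputs land in build-upper/.
mkdir -p build-upper build-work build-merged
mount -t overlay overlay \
  -o lowerdir=src,upperdir=build-upper,workdir=build-work \
  build-merged
# Run the build inside build-merged/: reads fall through to src/,
# writes land in build-upper/, and src/ itself is never touched.
```

Unmounting build-merged/ leaves the clean source tree and a separate
directory of build products, which is exactly what out-of-tree builds
are supposed to give you.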
Anyway, if it is stupid and it works, it is not stupid. And if it feels
clever and does not work, it is not particularly useful or clever, like
the DAG-toposort “optimizations” which can never beat a naive recursive
top-down build implementation on correctness (if you believe otherwise,
read “Build Systems à la Carte” again and meditate on the formalisms) …
That being said, I would very much prefer to see a real-world issue you
encountered that could not be solved with tar and what you did instead.
Since I use redo for real problems, architecture astronautics annoys me.
So, please show and tell your problem & solution. Maybe do a blog post?