On Wed, Dec 19, 2012 at 9:49 PM, Ben Boeckel <
math...@gmail.com> wrote:
> On Wed, Dec 19, 2012 at 21:09:25 +0100, Andrew Wagner wrote:
>> If you're cleaning just because you don't trust the correctness of
>> your own build setup (I have found this to usually be the case with
>> people using CMake)
>
> I'd be interested in seeing these case(s). If it's sloppy CMake writing,
> that's understandable, but if CMake is generating improper Makefiles,
> that's a different story. If it's due to external_project_add, this does
> not surprise me at all.
CMake works well enough for simple cases that, for better or worse,
people start using it and keep using it without ever understanding how
it works. Something like make or tup has a steeper initial learning
curve, but during that time you learn how the tool works and have a
shallow learning curve for the more complex cases.
> I have pretty complex CMake setups and the only problems I hit are when
> I miss dependencies with add_custom_(command|target) calls.
Yes, this is one of the prime examples we've been bitten by. It is
exacerbated by the out-of-source build stuff since it's non-obvious
how to refer to dependencies or build targets in CMake. I'm sure it's
learnable, but I've yet to run into someone IRL who was really
comfortable with it. That said, I mostly hang around with academics
who are by nature inexperienced programmers learning the ropes. My
programming guru is a guy who was a pro programmer in telecom, and
then hacked on the Linux kernel for his PhD, and swears by make.
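A minimal sketch of that failure mode, with hypothetical file names (generate.py producing version.h is my invention, not from the thread): the DEPENDS line is the one people most often forget, and without it edits to the generator never re-trigger the command.

```cmake
# version.h is generated by a script; foo.c includes it.
add_custom_command(
  OUTPUT  ${CMAKE_CURRENT_BINARY_DIR}/version.h
  COMMAND python3 ${CMAKE_CURRENT_SOURCE_DIR}/generate.py
          ${CMAKE_CURRENT_BINARY_DIR}/version.h
  # The often-missed dependency: without this, editing generate.py
  # does not regenerate version.h until you clean.
  DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/generate.py
)
# Listing the generated header among the sources is what hooks the
# custom command into the target's dependency graph.
add_executable(foo foo.c ${CMAKE_CURRENT_BINARY_DIR}/version.h)
```

Note too that out-of-source builds mean the generated file lives under ${CMAKE_CURRENT_BINARY_DIR} while the script lives under ${CMAKE_CURRENT_SOURCE_DIR}, which is exactly the "non-obvious how to refer to dependencies" problem.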
>> it is usually unnecessary with tup, since tup can
>> nag you about several common errors (e.g. unstated dependencies, often
>> exacerbated by make not supporting proper recursive builds).
>
> I clean fairly regularly in some projects to help catch warnings in the
> source code (they're not warning-clean, so -Werror isn't an option).
Ah. Of course, best practice is to fix the causes of the warnings...
but I totally understand if you inherited a pile of non-clean code.
>> It also deletes extra crap when you switch between git trees and
>> rebuild. If you just want to save space or something, I think most
people using tup are probably already using git, which does a
>> great job of cleaning.
>
> I vastly prefer keeping builds (and I'm disappointed that Tup didn't go
> this route rather than variants, but oh well) out of the source tree
> (the source tree should ideally be buildable from a read-only
> directory), so git can't help there.
Well, git can do a local clone of the code, but then you really need a
script or mechanism outside of your code that pulls changes into
your variants. I think it's customary for buildbots to do clean
checkouts of source code, so there is at least precedent for this
sort of thing. Remember, local clones with git by default use hard
links for the objects, so local cloning is faster and doesn't
unnecessarily duplicate files.
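A quick sketch of what that looks like; the /tmp paths are throwaway stand-ins for a real source tree and variant directory.

```shell
# Make a small repo to clone from (throwaway path, demo identity).
git init -q /tmp/clone-src
cd /tmp/clone-src
echo 'int x;' > x.c
git add x.c
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m 'add x.c'

# A local clone on the same filesystem hard-links the object files
# instead of copying them, so it is fast and takes little extra space.
git clone -q /tmp/clone-src /tmp/clone-variant

# Objects with a link count above 1 are shared with the original:
find /tmp/clone-variant/.git/objects -type f -links +1
```

(Pass --no-hardlinks if you actually want independent copies.)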
>> > 'make install' seems another story?
>>
>> Nope, tup doesn't know anything platform specific aside from the
>> filesystem magic it depends on.
>
> There's some projects which might need to relink or modify on install
> (OS X seems to be the common one that I've seen due to its...odd rpath
> support, but using rpath in the build tree and stripping it at install
> is common elsewhere too). Is there a way to have an 'on install' command
> for a target in Tup?
Tup itself doesn't even have an install target, so I wouldn't hold my breath :)
> Here are a few targets I have in my CMake builds which don't fit the
> 'many -> one' target mapping:
>
> - Doxygen (many -> many indeterminate filepaths);
> - A target to generate a tarball+patch for the current tree (tarball
> has HEAD; patch has `git diff HEAD` output iff there's a diff);
This is meta-version control stuff. I would keep it out of your build setup.
> - Updating .git/hooks;
Ah yes. The git devs (Linus himself?) make a good case for custom
hooks not being versioned (or rather, not having them installed by
default). I keep my hooks in shell scripts: run manually, they set up
symlinks to themselves as hooks; run by git, they act as hooks.
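One way to implement that dual behavior (the script and paths here are my own sketch, not the actual hooks from the thread): the script checks whether it was invoked through the .git/hooks symlink and branches accordingly.

```shell
# Stand-in for a real repo's hooks directory (throwaway path).
mkdir -p /tmp/hook-demo/.git/hooks

cat > /tmp/hook-demo/pre-commit.sh <<'EOF'
#!/bin/sh
case "$0" in
  */.git/hooks/*)
    # Invoked by git via the symlink: act as the hook.
    echo "running pre-commit checks" ;;
  *)
    # Invoked by hand: install a symlink to this script as the hook.
    dir=$(cd "$(dirname "$0")" && pwd)
    ln -sf "$dir/pre-commit.sh" "$dir/.git/hooks/pre-commit"
    echo "installed pre-commit hook" ;;
esac
EOF
chmod +x /tmp/hook-demo/pre-commit.sh

/tmp/hook-demo/pre-commit.sh           # manual run: installs the hook
/tmp/hook-demo/.git/hooks/pre-commit   # hook-style run: performs checks
```

Since the hook is a symlink back into the versioned tree, updating the script in the repo updates the installed hook for free.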
> - Dynamic test generation (for each shared library created, make sure
> that it can be dlopen'd without error).
You could keep your tests in a separate subdirectory; then, when you
want, you can build just the main part of your program in its own
subdirectory. For this application I would think you'd want your test
program to spit out reports that are known to tup. Then when you
change something, only the relevant tests get re-run. (running the
tests becomes the same as building the test subdirectory)
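A hypothetical tests/Tupfile sketch of that arrangement (library name, paths, and flags are all assumptions): each test binary writes a report file that tup tracks, so only affected tests re-run.

```
# Build one test binary per source file. The library is listed as an
# order-only input so it is built first; tup's runtime file tracing
# picks up the actual read of libfoo.so as a real dependency.
: foreach *.c | ../lib/libfoo.so |> gcc %f -L../lib -lfoo -o %o |> %B.test

# Run each test, capturing its output as a tracked report file.
# Touch the library and only the tests that load it are re-run.
: foreach *.test |> ./%f > %o |> %B.report
```

For the dlopen case specifically, each test program could simply dlopen the library and exit nonzero on failure, with the report file as the tracked evidence of a pass.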
> I'd be interested to know what Tup can do about these cases.
>
> For the record, I'm interested in build systems as a packager; I don't
> believe Tup can fulfill the use cases I currently require that CMake can
> do for most of my personal projects.
>
> Lacking a proper install framework is going to make any Tup projects a
> nightmare for packagers. The nicest systems I've seen that aren't
> essentially language-specific (e.g., Python's distutils or Haskell's
> cabal) are CMake and autoconf (much as I dislike it, controlling its
> install directories is a breeze). Custom Makefiles span the entire
> spectrum from worst to best mainly because developers have to write
> stuff from nothing. I'd like Tup to help make things easy for packagers
> of Tup-using projects.
>
> --Ben
I think the future for open source software will be to migrate to
something like Nix/NixOS, plus something for much better
automated/distributed monitoring of package dependencies and
conflicts. The current model of having separate communities of people
doing packaging manually (i.e. all the awesome Debian folks) seems
tragically wasteful to me, and results in an extremely brittle
software ecosystem. Hopefully someday soon Linus will get sick of
seeing everyone wasting their time on different Linux distributions,
and create (or bless) a system that will unify them all.