We use stand-alone builds in Fedora. How we build is essentially like this:
$ mkdir build
$ cd build
$ cmake <monorepo-root>/libcxx <options>
$ ninja
$ ninja install
The main advantages of building this way are that you don't need the full
source tarball, just the libcxx sources, and that the default targets
build only the parts we need.
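As a concrete illustration, a minimal stand-alone configure along these lines might look like the following; the generator, build type, and install prefix are placeholders rather than the exact values from our spec files:

```shell
# Hypothetical stand-alone libc++ build; real flag values live in the
# Fedora spec files referenced below.
mkdir build
cd build
cmake -G Ninja \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX=/usr \
  ../llvm-project/libcxx
ninja
ninja install
```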
For reference, the spec files we use for building these projects can be found here:
https://src.fedoraproject.org/rpms/libcxx/blob/master/f/libcxx.spec
https://src.fedoraproject.org/rpms/libcxxabi/blob/master/f/libcxxabi.spec
-Tom
> 2. What is the "Runtimes" build? How does it work, what is it used for, and what does it expect from libc++/libc++abi/libunwind?
> 3. Are there other "hidden" ways to build the runtime libraries?
>
> Cheers,
> Louis
>
>
>
> _______________________________________________
> LLVM Developers mailing list
> llvm...@lists.llvm.org
> https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
>
Earlier this week I converted our toolchain scripts from a monorepo
build of libc++ to a "standalone build", where I build libunwind,
libc++abi and libc++ in succession instead of in one single build. The
only reason I converted is that it's the only sane way to build
libc++ for a system without an existing C++ runtime. The problem
is actually not that libc++ depends on an existing C++ runtime - but
rather that the CMake configure-time tests do.
When running a monorepo build on a system without a C++ runtime, a
lot of the checks from HandleLLVMOptions fail, while the standalone
builds of libunwind, libc++abi and libc++ are much simpler and have
fewer checks I need to force a value for. This way I don't first need
to add a C++ runtime.
My scripts are an adaptation of mstorsjo's llvm-mingw (his script to
build libc++ for mingw can be seen here:
https://github.com/mstorsjo/llvm-mingw/blob/master/build-libcxx.sh).
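The three-stage succession described above can be sketched roughly like this. The option names (LIBCXXABI_USE_LLVM_UNWINDER, LIBCXX_CXX_ABI, LIBCXX_CXX_ABI_INCLUDE_PATHS) come from the libc++/libc++abi CMake options, but the exact flags and paths are illustrative, not taken from the actual scripts, so treat them as an assumption and check them against your LLVM release:

```shell
# Rough sketch: build libunwind, then libc++abi against it, then libc++
# against libc++abi. Paths are placeholders.
cmake -G Ninja -S llvm-project/libunwind -B build-unwind
ninja -C build-unwind

cmake -G Ninja -S llvm-project/libcxxabi -B build-abi \
  -DLIBCXXABI_USE_LLVM_UNWINDER=ON
ninja -C build-abi

cmake -G Ninja -S llvm-project/libcxx -B build-cxx \
  -DLIBCXX_CXX_ABI=libcxxabi \
  -DLIBCXX_CXX_ABI_INCLUDE_PATHS=llvm-project/libcxxabi/include
ninja -C build-cxx
```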
I am all for simplifying this build, on the other hand - building
libc++ and knowing which of the three libraries should be static or
dynamic, and when to use the USE_STATIC_UNWINDER etc., is very complex.
It required a lot of iterations to get working, and I am still not sure
I did the best thing (I ended up with a static libc++abi but a shared
unwinder).
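Roughly, the knobs involved in that combination look like the following. LIBCXXABI_ENABLE_STATIC_UNWINDER and LIBCXX_ENABLE_STATIC_ABI_LIBRARY are my reading of the relevant option names and may differ between releases, so verify them before relying on this:

```shell
# Sketch: libc++abi configured against a shared libunwind
cmake -G Ninja -S llvm-project/libcxxabi -B build-abi \
  -DLIBCXXABI_USE_LLVM_UNWINDER=ON \
  -DLIBCXXABI_ENABLE_STATIC_UNWINDER=OFF
# Sketch: libc++ with libc++abi linked in statically
cmake -G Ninja -S llvm-project/libcxx -B build-cxx \
  -DLIBCXX_CXX_ABI=libcxxabi \
  -DLIBCXX_ENABLE_STATIC_ABI_LIBRARY=ON
```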
Thanks,
Tobias
(CCing Chris and Petr, who’ve done the most work on the runtimes build)
At least for me on Linux, using LLVM_ENABLE_PROJECTS is actually the unusual way of building libc++; I use LLVM_ENABLE_RUNTIMES. The reason is, my host compiler is often gcc, but I want to build, test, and ship libc++ with the clang I just built.
The runtimes build is when you use LLVM_ENABLE_RUNTIMES. It sets up the build of all runtimes (compiler-rt, libc++, libc++abi, libunwind, etc.) as a CMake ExternalProject which depends on the build of clang and other toolchain tools. In other words, if I run the following:
cmake -DLLVM_ENABLE_PROJECTS=clang -DLLVM_ENABLE_RUNTIMES='libcxx;libcxxabi' path/to/my/llvm-project/llvm
ninja cxx
The build system will automatically build clang and other toolchain tools (e.g. llvm-ar), run the ExternalProject configuration with e.g. CMAKE_C_COMPILER and CMAKE_CXX_COMPILER set to the just-built clang, and then build libc++ with that configuration (so with the just-built clang). It’s a pretty convenient workflow for my setup. It also takes care of e.g. automatically rebuilding libc++ if you make changes to clang and then run `ninja cxx` again.
As for why the runtimes build uses the “standalone build” setup, it’s because there’s a separate CMake configuration happening for the runtimes in this setup (which is necessary in order to be able to configure them to use the just-built toolchain), so e.g. clang isn’t available as an in-tree target. See https://reviews.llvm.org/D62410 for more details. Your top-level CMakeLists.txt in the runtimes build is llvm/runtimes/CMakeLists.txt and not libcxx/CMakeLists.txt (as it would be in a fully standalone build), but it’s also not llvm/CMakeLists.txt (as it would be with LLVM_ENABLE_PROJECTS).
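Conceptually, the mechanism looks something like the following. This is a simplified sketch, not the actual contents of llvm/runtimes/CMakeLists.txt; paths and variable names are placeholders:

```cmake
include(ExternalProject)
# Configure the runtimes in a nested CMake run whose compiler is the
# clang we just built, so the runtimes never see the host compiler.
ExternalProject_Add(runtimes
  SOURCE_DIR ${CMAKE_SOURCE_DIR}/../libcxx
  DEPENDS clang
  CMAKE_ARGS
    -DCMAKE_C_COMPILER=${CMAKE_BINARY_DIR}/bin/clang
    -DCMAKE_CXX_COMPILER=${CMAKE_BINARY_DIR}/bin/clang++
  INSTALL_COMMAND ""
)
```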
At the CMake round table at the dev meeting last October, we’d discussed the runtimes builds, and Chris had advanced that there should be two supported ways to build the runtimes.
I agree with that position. In particular, I think LLVM_ENABLE_PROJECTS is definitely the wrong thing to use for compiler-rt, which is strongly tied to your just-built compiler. I think it’s arguably the wrong thing to use for libc++/libc++abi/libunwind as well, where you either want to use your just-built Clang (which LLVM_ENABLE_RUNTIMES was made for), or a different compiler (in which case you’d do a fully standalone build), but silently using your host compiler for them is probably not what you want. (The LLVM_ENABLE_PROJECTS workflow for libc++/libc++abi/libunwind probably works out better on macOS, where your host compiler is a recent Clang anyway, so the difference between it and a just-built Clang isn’t as marked. But I’ve had issues even on macOS in the past where using the host compiler via LLVM_ENABLE_PROJECTS gave me weird libc++ test errors, and LLVM_ENABLE_RUNTIMES just worked.)
What do you think?
This is one of the use cases for the runtimes build. You can invoke CMake as:

$ cmake \
    -DLLVM_ENABLE_PROJECTS=clang \
    -DLLVM_ENABLE_RUNTIMES=libcxx \
    -DLLVM_RUNTIME_TARGETS='aarch64-unknown-linux-gnu;armv7-unknown-linux-gnueabihf;i386-unknown-linux-gnu;x86_64-unknown-linux-gnu' \
    path/to/llvm-project/llvm

This is going to compile Clang, and then use the just-built Clang to cross-compile libc++ for four different targets. There are a lot of other customization options. You can also pass through additional flags to individual targets, but this gets complicated quickly, so we usually put this logic into a cache file (see for example https://github.com/llvm/llvm-project/blob/master/clang/cmake/caches/Fuchsia-stage2.cmake) and then invoke CMake as:

$ cmake \
    -C path/to/llvm-project/clang/cmake/caches/Fuchsia-stage2.cmake \
    path/to/llvm-project/llvm

Aside from the fact that you can do everything in a single CMake build without any additional wrapper scripts, another advantage is that anyone can easily reproduce your build without needing anything other than the LLVM monorepo.
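A cache file is just a CMake script of set(... CACHE ...) commands that gets preloaded via -C before the normal configure. A hypothetical minimal one (not the real Fuchsia-stage2.cmake, whose contents are much larger) might look like:

```cmake
# Hypothetical minimal cache file; variable values are illustrative.
set(LLVM_ENABLE_PROJECTS clang CACHE STRING "")
set(LLVM_ENABLE_RUNTIMES libcxx CACHE STRING "")
set(LLVM_RUNTIME_TARGETS
    "x86_64-unknown-linux-gnu;aarch64-unknown-linux-gnu" CACHE STRING "")
```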