Yes. When you run ninja without any arguments, it builds in parallel by default. These options control the default number of jobs for ninja.
Also, especially when doing LTO, you may want to limit the number of link jobs independently from the number of compile jobs, which is again a facility that ninja has.
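For instance, a configure line along these lines (just a sketch; the job counts are placeholders to tune for your machine) keeps compiles parallel while serializing the expensive LTO links:

cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_PARALLEL_COMPILE_JOBS=8 \
    -DLLVM_PARALLEL_LINK_JOBS=1
ninja    # parallel by default, respecting the job limits above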
> If this is a fact, can you please adjust the information in [1]?
>
> Do I have other options to speedup my build?
Yes: start using ninja ;)
If for any reason (?) you are really stuck with "make", then run "make -j ~ncpus" (with ~ncpus being approximately the number of cores in your machine). It will build in parallel.
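For example (assuming GNU coreutils' nproc is available on your system):

make -j"$(nproc)"    # one job per available CPU core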
--
Mehdi
> ( For testing RCs I tend to use a slow build - one single (c)make job. )
>
> My build-script and configure-log are attached.
>
> Thanks.
>
> Regards,
> - Sedat -
>
>
> [1] http://llvm.org/docs/CMake.html
> <configure-log_llvm-toolchain-3.8.0rc3.txt><build_llvm-toolchain.sh>
So, I like ninjas, but I have not used Ninja (the software) yet.
From my build-script...
[ -d ${BUILD_DIR} ] || mkdir -p ${BUILD_DIR}
cd $BUILD_DIR
# BUILD-VARIANT #1: CONFIGURE AND MAKE (will be DEPRECATED with LLVM v3.9)
##../llvm/configure $CONFIGURE_OPTS 2>&1 | tee $LOGS_DIR/$CONFIGURE_LOG_FILE
##$MAKE $MAKE_OPTS 2>&1 | tee $LOGS_DIR/$BUILD_LOG_FILE
##sudo $MAKE install 2>&1 | tee $LOGS_DIR/$INSTALL_LOG_FILE
# BUILD-VARIANT #2: CMAKE
$CMAKE ../llvm $CMAKE_OPTS 2>&1 | tee $LOGS_DIR/$CONFIGURE_LOG_FILE
$CMAKE --build . 2>&1 | tee $LOGS_DIR/$BUILD_LOG_FILE
##sudo $CMAKE --build . --target install 2>&1 | tee $LOGS_DIR/$INSTALL_LOG_FILE
You mean configuring with cmake and building (and installing) with make?
$CMAKE ../llvm $CMAKE_OPTS 2>&1 | tee $LOGS_DIR/$CONFIGURE_LOG_FILE
$MAKE $MAKE_OPTS 2>&1 | tee $LOGS_DIR/$BUILD_LOG_FILE
sudo $MAKE install 2>&1 | tee $LOGS_DIR/$INSTALL_LOG_FILE
Which combination of cmake/ninja versions are you using (latest are
v3.4.3 and v1.6.0)?
- Sedat -
/*
I think the difference could be more beneficial if you're doing
incremental builds, but I don't think that is what you're doing..
*/
On Thu, Feb 25, 2016 at 1:51 PM, Sedat Dilek via llvm-dev wrote:
What do you mean by "incremental builds"?
/me wanted to test RCs of upcoming LLVM v3.8.0 - not doing daily builds.
I am building a "modern" Linux graphics driver stack (libdrm | mesa |
intel-ddx) and a Linux v4.4.y LTS kernel.
Things will be more modern when I switch from Ubuntu/precise
(12.04-LTS) to Ubuntu/xenial (upcoming 16.04-LTS; beta1 should be
available today according to the release schedule [1]).
( I am a bit sick of backporting software. )
- Sedat -
[1] https://wiki.ubuntu.com/XenialXerus/ReleaseSchedule
By incremental I mean..
build the compiler
change some source file
rebuild
...
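A minimal sketch of that loop with ninja (the touched file is just an arbitrary example):

ninja -C llvm-build                        # initial full build
touch llvm/lib/Support/CommandLine.cpp     # change some source file
ninja -C llvm-build                        # rebuilds only what that change affects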
With this combination I could reduce the build time from approx. 3h
down to 1h20m.
$ egrep -i 'jobs|ninja' llvm-build/CMakeCache.txt
//Program used to build from build.ninja files.
CMAKE_MAKE_PROGRAM:FILEPATH=/opt/cmake/bin/ninja
//Define the maximum number of concurrent compilation jobs.
LLVM_PARALLEL_COMPILE_JOBS:STRING=3
//Define the maximum number of concurrent link jobs.
LLVM_PARALLEL_LINK_JOBS:STRING=1
CMAKE_GENERATOR:INTERNAL=Ninja
$ LC_ALL=C ls -alt logs/3.8.0rc3_clang-3-8-0-rc3_cmake-3-4-3_ninja-1-6-0/
total 360
drwxr-xr-x 2 wearefam wearefam 4096 Feb 25 19:58 .
drwxr-xr-x 6 wearefam wearefam 4096 Feb 25 19:58 ..
-rw-r--r-- 1 wearefam wearefam 130196 Feb 25 19:54
install-log_llvm-toolchain-3.8.0rc3.txt
-rw-r--r-- 1 wearefam wearefam 205762 Feb 25 19:51
build-log_llvm-toolchain-3.8.0rc3.txt
-rw-r--r-- 1 wearefam wearefam 14331 Feb 25 18:30
configure-log_llvm-toolchain-3.8.0rc3.txt
$ LC_ALL=C du -s -m llvm* /opt/llvm-toolchain-3.8.0rc3
315 llvm
941 llvm-build
609 /opt/llvm-toolchain-3.8.0rc3
- Sedat -
[1] https://cmake.org/files/v3.5/cmake-3.5.0-rc3-Linux-x86_64.tar.gz
(1) We have a number of places throughout our CMake build where we use features from newer CMakes gated by version checks. Most of these features are performance or usability related; none of them affect correctness. Using the latest CMake release will often result in faster builds, so I encourage it.
(2) CMake's "install" target will pretty much always be slower from clean than the old autoconf/make "install" target. This is because in CMake "install" depends on "all", and our CMake build puts more stuff in "all" than autoconf did. To help with this, our CMake system has lots of convenient "install-${name}" targets that support component-based installation. Not every component has one of these rules, but if one you need is missing, let me know. I also recently (r261681) added a new option (LLVM_DISTRIBUTION_COMPONENTS) that allows you to specify a list of components that have custom install targets. It then creates a new "install-distribution" target that wraps just the components you want. For Apple this is almost a 40% speedup over "ninja install".
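A sketch of how that might look (the component names here are only illustrative; check which install-${name} targets your tree actually provides):

cmake -G Ninja ../llvm \
    -DLLVM_DISTRIBUTION_COMPONENTS="clang;llvm-ar"
ninja install-distribution    # installs just the listed components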
-Chris
That sounds great, I want to use it!
It would even be more awesome with a description/example in docs/CMake.rst :)
--
Mehdi
> On Mar 1, 2016, at 10:01 AM, Mehdi Amini <mehdi...@apple.com> wrote:
>
>
>> On Mar 1, 2016, at 9:57 AM, Chris Bieneman <cbie...@apple.com> wrote:
>>
>> There are a few notes I'd like to add to this thread.
>>
>> (1) We have a number of places throughout our CMake build where we use features from newer CMakes gated by version checks. Most of these features are performance or usability related; none of them affect correctness. Using the latest CMake release will often result in faster builds, so I encourage it.
>>
>> (2) CMake's "install" target will pretty much always be slower from clean than the old autoconf/make "install" target. This is because in CMake "install" depends on "all", and our CMake build puts more stuff in "all" than autoconf did. To help with this, our CMake system has lots of convenient "install-${name}" targets that support component-based installation. Not every component has one of these rules, but if one you need is missing, let me know. I also recently (r261681) added a new option (LLVM_DISTRIBUTION_COMPONENTS) that allows you to specify a list of components that have custom install targets. It then creates a new "install-distribution" target that wraps just the components you want. For Apple this is almost a 40% speedup over "ninja install".
>
> That sounds great, I want to use it!
> It would even be more awesome with a description/example in docs/CMake.rst :)
Once I get the last of the kinks worked out for our internal adoption I'm going to open source our config files that use it.
I've also made a note to remind myself to document it in docs/CMake.rst. I need to do a pass updating that with a bunch of the cool new things we're doing with CMake. Thanks for the reminder.
-Chris
LTO *will* dramatically slow down the build.
--
Mehdi
> <build_llvm-toolchain_clang-cmake-ninja.sh><install_llvm-toolchain_clang-cmake-ninja.sh>
- Fariborz
It might be that a Clang generated with LTO/PGO speeds up the build.
Can you confirm this?
Can you confirm that binutils-gold speeds up the build?
Does LLVM have its own linker?
Can it be used? Does it speed up the build?
Last night I looked through the available CMake/LLVM variables...
### GOLD
# CMAKE_LINKER:FILEPATH=/usr/bin/ld
# GOLD_EXECUTABLE:FILEPATH=/usr/bin/ld.gold
# LLVM_TOOL_GOLD_BUILD:BOOL=ON
### OPTLEVEL
# CMAKE_ASM_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_CXX_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
# CMAKE_C_FLAGS_RELEASE:STRING=-O3 -DNDEBUG
### LTO
# LLVM_TOOL_LLVM_LTO_BUILD:BOOL=ON
# LLVM_TOOL_LTO_BUILD:BOOL=ON
### PGO
# LLVM_USE_OPROFILE:BOOL=OFF
#### TABLEGEN
# LLVM_OPTIMIZED_TABLEGEN:BOOL=OFF
So '-O3' is the default for a Release build.
Not sure which of the LTO variables are suitable, maybe both.
PGO? Is that the correct variable?
The blog post mentioned using optimized TableGen.
Good? Bad? Ugly?
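My guess at how to try the TableGen one, as a plain cache variable (untested on my side):

$ cmake -G Ninja ../llvm -DLLVM_OPTIMIZED_TABLEGEN=ON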
Thanks in advance for answering my questions.
Best regards,
- Sedat -
On 03/03/2016 08:09 AM, Sedat Dilek via llvm-dev wrote:
> It might be that a Clang generated with LTO/PGO speeds up the build.
> Can you confirm this?
Yes, a Clang host compiler built with LTO or PGO is generally faster
than an -O3 build.
Some things to keep in mind when building the Clang host compiler:
GCC:
- GCC 4.9 gives good results with PGO enabled (1.16x speedup over the
-O3 build), not so much with LTO (actually regresses performance over
the -O3 build, same for PGO vs PGO+LTO)
- GCC 5.1/5.2/5.3 can't build Clang with LTO enabled
(https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66027), that's supposed to
be fixed in GCC 5.4
Clang:
- PGO works and gives a good 1.12x speedup over the -O3 build
(produced about 270GB of profiling data when I tried this in December
last year, this should be addressed soon once the in-process profiling
data merging lands)
- LTO provides a 1.03x speedup over the -O3 build
- I have not tried LTO+PGO with full Clang bootstrap profiling data
but I would expect that it helps to increase the performance even further
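For reference, a rough sketch of the manual two-stage PGO flow (paths, flags, and the training workload are placeholders, not a tested recipe):

# Stage 1: build an instrumented clang
CC=clang CXX=clang++ cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_FLAGS=-fprofile-instr-generate \
    -DCMAKE_CXX_FLAGS=-fprofile-instr-generate
ninja clang
# Run a training workload with the instrumented clang (e.g. build LLVM once),
# then merge the raw profiles and rebuild using the merged profile:
llvm-profdata merge -output=clang.profdata *.profraw
CC=stage1/bin/clang CXX=stage1/bin/clang++ cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_C_FLAGS=-fprofile-instr-use=clang.profdata \
    -DCMAKE_CXX_FLAGS=-fprofile-instr-use=clang.profdata
ninja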
> Can you confirm that binutils-gold speeds up the build?
Yes, gold is definitely faster than ld when building Clang/LLVM.
> Does LLVM have its own linker?
> Can it be used? Does it speed up the build?
I haven't tried it but lld can definitely link Clang/LLVM on x86-64 Linux.
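One way to try gold (or lld) without changing the system default linker, assuming your host compiler understands -fuse-ld:

cmake -G Ninja ../llvm \
    -DCMAKE_EXE_LINKER_FLAGS=-fuse-ld=gold \
    -DCMAKE_SHARED_LINKER_FLAGS=-fuse-ld=gold    # or -fuse-ld=lld to try lld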
> The blog post mentioned using optimized TableGen.
> Good? Bad? Ugly?
Good, it helps to speed up debug builds.
Regards,
Tilmann
How do I enable PGO via CMake?
<build_llvm-toolchain_clang-cmake-ninja.sh>
I found the following in [1]:
[ llvm.src/cmake/modules/HandleLLVMOptions.cmake ]
option(LLVM_ENABLE_LTO "Enable link-time optimization" OFF)
append_if(LLVM_ENABLE_LTO "-flto"
          CMAKE_CXX_FLAGS
          CMAKE_C_FLAGS
          CMAKE_EXE_LINKER_FLAGS
          CMAKE_SHARED_LINKER_FLAGS)
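So with a tree that has this option, enabling it should just be a matter of passing the cache variable (a sketch):

$ cmake -G Ninja ../llvm -DLLVM_ENABLE_LTO=ON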
Searching for...
$ grep LLVM_ENABLE_LTO llvm/cmake/modules/HandleLLVMOptions.cmake
[ OUTPUT EMPTY ]
...gives no output.
So, LLVM_ENABLE_LTO is not available in LLVM/Clang v3.8.0?!
Hmm, maybe I can backport (or git cherry-pick) SVN r259766.
- Sedat -
[1] http://llvmweekly.org/issue/110
[2] http://reviews.llvm.org/rL259766
With the backported patch I get...
[ build-log ]
...
[164/1813] Linking CXX executable bin/llvm-tblgen
[165/1813] Building CXX object lib/MC/CMakeFiles/LLVMMC.dir/MCAsmInfoELF.cpp.o
[166/1813] Building CXX object lib/MC/CMakeFiles/LLVMMC.dir/MCAsmStreamer.cpp.o
FAILED: : && /opt/llvm/bin/clang++-3.8 -fPIC
-fvisibility-inlines-hidden -Wall -W -Wno-unused-parameter
-Wwrite-strings -Wcast-qual -Wmissing-field-initializers -pedantic
-Wno-long-long -Wcovered-switch-default -Wnon-virtual-dtor
-Wdelete-non-virtual-dtor -std=c++11 -fcolor-diagnostics
-ffunction-sections -fdata-sections -flto -O3 -flto
-Wl,-allow-shlib-undefined -Wl,-O3 -Wl,--gc-sections
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/AsmMatcherEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/AsmWriterEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/AsmWriterInst.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/Attributes.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CallingConvEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeEmitterGen.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenDAGPatterns.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenInstruction.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenMapTable.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenRegisters.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenSchedule.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CodeGenTarget.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DAGISelEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DAGISelMatcherEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DAGISelMatcherGen.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DAGISelMatcherOpt.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DAGISelMatcher.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DFAPacketizerEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/DisassemblerEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/FastISelEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/FixedLenDecoderEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/InstrInfoEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/IntrinsicEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/OptParserEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/PseudoLoweringEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/RegisterInfoEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/SubtargetEmitter.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/TableGen.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/X86DisassemblerTables.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/X86ModRMFilters.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/X86RecognizableInstr.cpp.o
utils/TableGen/CMakeFiles/obj.llvm-tblgen.dir/CTagsEmitter.cpp.o -o
bin/llvm-tblgen lib/libLLVMSupport.a lib/libLLVMTableGen.a
lib/libLLVMSupport.a -lrt -ldl -ltinfo -lpthread -lz -lm
-Wl,-rpath,"\$ORIGIN/../lib" && :
/usr/bin/ld: /opt/llvm-toolchain-3.8.0/bin/../lib/LLVMgold.so: error
loading plugin
/usr/bin/ld: /opt/llvm-toolchain-3.8.0/bin/../lib/LLVMgold.so: error
in plugin cleanup (ignored)
clang-3.8: error: linker command failed with exit code 1 (use -v to
see invocation)
ninja: build stopped: subcommand failed.
Not sure what's going on or what's missing.
Any help appreciated.
Thanks.
- sed@ -