--
You received this message because you are subscribed to the Google Groups "bazel-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bazel-discus...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bazel-discuss/17497b25-ab34-40ac-8832-6cd5c8c7cb78n%40googlegroups.com.
David, thank you for your response!

Although our build supports multiple platforms (Mac, Linux, Windows, Emscripten), my observations about Bazel's high CPU consumption were made on Windows, where to my understanding the sandbox is not supported (and therefore not created/destroyed) regardless of any settings. So I guess the high CPU usage is not related to sandboxing.

I wonder if there is a way to make an educated guess about what Bazel is doing by looking at the build profile or some other kind of instrumentation. I'd love to find out that the cause is some inefficiency in our Starlark code and fix it.

We also use Python and care about its hermeticity, mostly because devs may have random versions and configurations of Python on their machines and we don't want the build to be affected by those variations. So we have our own Python package (for each platform) which we download with http_archive, and we then set up a Python toolchain pointing at that downloaded package. Unfortunately, we learned that at least on some platforms (like Linux) the standard rules_python still uses the system Python to unpack the executable, which IMHO goes against the goal of Python hermeticity. For that reason we use our own very limited Python ruleset, which does not require creating py_binary packages and just runs build scripts with the provided Python toolchain. We'd love to switch to the standard rules_python, but the system-Python requirement is a real bummer.
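On the instrumentation question raised here: Bazel can record a per-action trace profile of a build, which often reveals where the host-side CPU time goes (Starlark evaluation, dependency checking, action setup, and so on). A rough sketch of that workflow is below; the output path is a placeholder and the exact profiling flags vary somewhat across Bazel versions, so treat this as a starting point rather than an exact recipe:

```shell
# Ask Bazel to record a profile of the build (output path is a placeholder).
bazel build --profile=/tmp/bazel_prof.gz //your:target

# Summarize the recorded profile on the command line...
bazel analyze-profile /tmp/bazel_prof.gz

# ...or load the JSON trace profile in a Chromium-based browser at
# chrome://tracing for a timeline view of what each Bazel thread was doing.
```

The timeline view is usually the quickest way to spot whether time is going into Starlark loading/analysis versus action execution overhead.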
Konstantin

On Sunday, December 4, 2022 at 9:03:47 AM UTC-8 di...@google.com wrote:

Bazel does a number of things, like setting up a separate sandbox for each build command and hashing the contents of build inputs and outputs, that take a non-trivial amount of CPU and I/O. None of this has to be performed with CMake + Ninja, so a "clean" Bazel build will always be significantly slower in the absence of prebuilt artifacts in the cache.

Now, half of your CPU cores does seem really high, but that may depend on your project. For example, in our experience, when using a custom Python distribution (i.e. interpreter + module files, about 5000 files or 120 MiB), setting up the sandbox for a command that invokes a single py_binary() takes several hundred milliseconds, which is considerably slower than loading and running the script itself. None of that happens when using the system Python, which we avoid for hermeticity / reproducibility reasons.

On Fri, Dec 2, 2022 at 7:04 PM Konstantin <kon...@ermank.com> wrote:

Our current build system is not pure CMake -- it is CMake + Ninja, and I believe Ninja's primary purpose is exactly maximum utilization of the cores. It just consumes far less CPU for itself than Bazel does. My observations are on Windows, and it could be that on Windows the Bazel scheduler is not as resource-efficient as Ninja.

On Friday, December 2, 2022 at 4:43:09 AM UTC-8 Zhuo Chen wrote:

Can you paste your build metrics, including the build times of both CMake and Bazel? Your observation about CPU utilization is consistent with my understanding, but the result is not. Benefiting from its parallelization algorithm, Bazel is able to fully utilize your CPU cores. By contrast, CMake's parallelism can only submit build jobs "in batch" without considering their dependencies. So although CPU utilization is high when using Bazel, the build time should still be shorter than with CMake; in my experience Bazel usually takes only a third of the time CMake does.

On Sunday, November 27, 2022 at 03:32:45 UTC+8, <kon...@ermank.com> wrote:

For our huge C++ build from a "clean" state (no caches populated), Bazel is consistently and substantially slower than CMake. One thing I could not help noticing is that during the execution phase CMake consumes very few resources, leaving most of them to compilers and such, while Bazel continuously pegs about half of the available CPU cores for itself, which means compilation goes twice as slow! I wonder what is causing this and whether something can be done to improve it.

Thank you!
Konstantin
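For readers following along, the hermetic-Python setup Konstantin describes (download a self-contained Python per platform with http_archive, then point a toolchain at it) can be sketched roughly as below. All names here -- the archive URL, the sha256, the repository and target names, the interpreter path -- are placeholders, and the exact load paths for the toolchain rules vary across Bazel and rules_python versions, so treat this as an illustration of the shape of the config, not a drop-in snippet:

```starlark
# WORKSPACE (sketch): fetch a self-contained Python distribution.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "hermetic_python_linux",  # placeholder name
    urls = ["https://example.com/python-3.10-linux.tar.gz"],  # placeholder URL
    sha256 = "...",  # pin the archive for reproducibility
    build_file_content = 'exports_files(["bin/python3"])',
)

register_toolchains("//toolchains:hermetic_py_toolchain")

# toolchains/BUILD (sketch): point a py_runtime at the downloaded
# interpreter and expose it through the standard Python toolchain type.
load("@bazel_tools//tools/python:toolchain.bzl", "py_runtime_pair")

py_runtime(
    name = "hermetic_py3_runtime",
    interpreter = "@hermetic_python_linux//:bin/python3",
    python_version = "PY3",
)

py_runtime_pair(
    name = "hermetic_py_runtime_pair",
    py3_runtime = ":hermetic_py3_runtime",
)

toolchain(
    name = "hermetic_py_toolchain",
    toolchain = ":hermetic_py_runtime_pair",
    toolchain_type = "@bazel_tools//tools/python:toolchain_type",
)
```

One such block is typically needed per platform, with `exec_compatible_with` / `target_compatible_with` constraints selecting the right one.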
-- it will still scan output directories to remove files that are not listed in a build action's manifest.

That part confused me a lot! To the best of my knowledge (and my experience confirms it), Bazel does not clean up a package's output folder of whatever garbage may be there from previous builds. Your link points at code which seemingly does just that, so-o... what actually happens? I will try to research it, but I'd appreciate some leads. A package may have many targets, and targets may have many actions... How would Bazel even know that everything we wanted to build inside the package is done and it is time to "prune the tree"?

Also, the link to the "really ugly workaround" seems to be Google-internal, but I very much want to know what it is.
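To make the "prune the tree" question concrete, here is a minimal sketch of what manifest-based pruning means in principle -- compare the files actually present in an output directory against a declared manifest and delete the extras. This is purely illustrative and is not Bazel's actual implementation:

```shell
#!/bin/sh
# Sketch: remove files in OUT_DIR that are not listed in MANIFEST.
OUT_DIR="out"
MANIFEST="manifest.txt"

mkdir -p "$OUT_DIR"
printf 'a.o\nb.o\n' > "$MANIFEST"                        # declared outputs
touch "$OUT_DIR/a.o" "$OUT_DIR/b.o" "$OUT_DIR/stale.o"   # stale.o is leftover

# Delete anything present on disk but absent from the manifest.
for f in "$OUT_DIR"/*; do
    base=$(basename "$f")
    grep -qx "$base" "$MANIFEST" || rm -- "$f"
done

ls "$OUT_DIR"   # a.o and b.o remain; stale.o is gone
```

The open question in the thread -- *when* such pruning could safely run, given that many actions write into the same tree -- is exactly what this toy version sidesteps by assuming a single manifest for the whole directory.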
stub_shebang = '#!/usr/bin/env -S /bin/bash -c \'"$0".runfiles/main/%s "$0" "$@"\'' % _python3_interpreter_path,
I understand the workaround is not suitable for Windows, but, funny enough, there seems to be no problem with the system Python on Windows in the first place -- in my experiments, py_binary on Windows unpacks and works without a system Python!
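Some context on the `env -S` trick in that stub_shebang, for anyone puzzled by it: on Linux, the kernel passes everything after the interpreter path in a shebang line as a *single* argument, so a plain shebang cannot carry a multi-word command line. GNU env's `-S` (split-string) option, available in coreutils 8.30 and later, works around this by splitting one string into separate words before exec'ing. A minimal demonstration, assuming GNU coreutils:

```shell
# The shell passes 'echo hello world' to env as one argument;
# -S splits it into words, so env runs echo with two arguments.
/usr/bin/env -S 'echo hello world'
# prints: hello world
```

In the workaround above, this lets the shebang expand into a full `bash -c '...'` invocation that redirects execution into the runfiles tree, something a bare shebang could not express.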
-Lars