Worth noting is that I have distcc hacked into the toolchain for lack of a better remote-execution solution. To make effective use of it I need to lie to Bazel about the local resources (i.e., I tell it there are something like 80 cores instead of the 8 I actually have locally).
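For concreteness, the overcommit I mentioned looks roughly like this in my .bazelrc (the core count here is illustrative and tuned by hand; on older Bazel releases the same knob is part of the combined --local_resources flag):

```
# Tell Bazel there are far more CPUs than physically present, so it
# schedules enough parallel compiles to keep distcc's remote hosts busy.
build --jobs=80
build --local_cpu_resources=80
```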
I'm happy to investigate and do some profiling--I'm just not sure what the most effective approach is for digging into this.
Thanks,
Chris
--
You received this message because you are subscribed to the Google Groups "bazel-discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bazel-discus...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/bazel-discuss/ade7ac13-cdbf-44f1-8f71-0db60f9bebdc%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
I will try that out. The problem is that the wall clock times for builds are significantly longer when leaving local resources at their default values, so it is not really an effective solution. At least that has been my experience; I have not tried this recently.
Is Bazel remote execution a real thing for C++ yet? Whenever I look into it I'm left somewhat confused about the state of things.
Letting Bazel decide what resources are available does seem to reduce the load of Bazel itself, but it is still surprisingly busy. For example, during a one-minute window of the build Bazel ran about 190 subcommands, roughly 90% of them compiling C++ and the rest linking shared objects. ps(1) shows that Bazel consumed 28 seconds of CPU time during this period, so it is burning about half a core. That seems quite high.
The above was still distributing to distcc, so the local cores were otherwise not very busy while running this test. I wondered whether Bazel would consume less CPU when running GCC locally. I tried this and got similar results: 136 subcommands (all but a handful of them GCC) in 60 seconds, and ps(1) shows Bazel used 25 seconds of CPU.
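In case it's useful, the measurement above amounts to sampling the server's cumulative CPU time before and after a window. A sketch (for the real measurement the PID came from the Bazel server; here `$$`, this shell, is a stand-in so the snippet runs anywhere, and the one-second window is illustrative rather than the 60 seconds I actually used):

```shell
# Sample a process's cumulative CPU time twice and compare. For Bazel,
# substitute the output of `bazel info server_pid` for the PID below.
pid=$$
before=$(ps -o cputime= -p "$pid" | tr -d ' ')
sleep 1
after=$(ps -o cputime= -p "$pid" | tr -d ' ')
echo "cputime went from $before to $after"
```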
Is 150 ms of CPU time per subcommand typical of Bazel? I'm currently running 0.23.2, but I think this is characteristic of my Bazel builds over many releases (i.e., it is not a recent regression, either in Bazel or in my build configuration).
Thanks,
Chris
On Saturday, May 4, 2019 at 1:56:58 PM UTC-5, Austin Schuh wrote:
> Bazel checksums all the outputs. This takes a surprising amount of CPU.
That's a good point, though it does not seem like it should account for much of the overhead I'm seeing. I just pointed find(1) at a portion of my Bazel cache to look for object files and ran them all through sha1sum(1). The input was 9500+ files totaling more than 8GB. Computing SHA1 sums for all of them took about 1.5 seconds of CPU time. I would expect Bazel was computing checksums for only a few hundred files when I was seeing it use 20+ seconds of CPU.
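To sanity-check that 1.5-second figure, here is a small self-contained sketch of the same experiment using synthetic data instead of my cache contents (the 100 MiB size is an assumption to keep it quick; the extrapolation to 8 GiB assumes throughput scales linearly):

```python
import hashlib
import os
import time

# Hash a chunk of synthetic data and measure CPU time, as a rough
# stand-in for running sha1sum over the object files in the cache.
data = os.urandom(100 * 1024 * 1024)

start = time.process_time()
digest = hashlib.sha1(data).hexdigest()
cpu_seconds = time.process_time() - start

print(f"SHA-1 over {len(data) / 2**20:.0f} MiB took {cpu_seconds:.3f}s CPU")
print(f"extrapolated to 8 GiB: {cpu_seconds * 8 * 1024 / 100:.2f}s CPU")
```

On anything resembling modern hardware this comes out at a few seconds of CPU for the full 8 GB, consistent with checksumming being a small fraction of the 20+ seconds I measured.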