Hey Lars, thank you for your response! It is starting to make sense now.
I know about dynamic execution, but I was under the impression that it had to be explicitly turned on. I did not know it is now ON by default.
While, judging by the description, dynamic execution looks like the best of both worlds (or rather, both strategies), I wonder whether it may have negative performance consequences.
Here is our specific scenario: a large C++ build with thousands of targets, a single build machine, and a "local remote" cache (AKA --disk_cache) on the local SSD.
From a clean state it takes 40 minutes to finish. At the end it reports:
INFO: 11477 processes: 1586 internal, 9891 local.
Then I run "bazel clean" to wipe the output folder and build again without any changes, expecting it to just copy everything over from the --disk_cache.
At the end it reports:
INFO: 11744 processes: 10032 remote cache hit, 1712 internal.
(The numbers are approximate.) This seems to confirm that it did indeed pull everything from the cache, except for the "internal" actions, which I don't know what they are.
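In case it helps, here is roughly the sequence I am running; the cache path and the //... target pattern below are just placeholders for illustration, our real invocation has more flags:

  # first build from a clean state (~40 minutes, everything executes locally)
  bazel build //... --disk_cache=/ssd/bazel-disk-cache

  # wipe the output tree but keep the disk cache
  bazel clean

  # second build, no source changes; expected to be served entirely from the disk cache
  bazel build //... --disk_cache=/ssd/bazel-disk-cache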
THE PROBLEM: the second build, which only copies files from one SSD to another, still takes about 10 minutes! I tried copying the same volume of data manually and it took 30 seconds. Also, during the fully cached build the CPU is at 100% the whole time, which could perhaps be explained by dynamic execution, but I don't see any compiler processes running.
It does not feel right that a fully cached build takes that long, and I am looking for the culprit.
Could dynamic execution be at fault? How do I turn it off for the experiment?
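From skimming the docs my guess would be to force a single local strategy, e.g. something like:

  bazel build //... --disk_cache=/ssd/bazel-disk-cache --spawn_strategy=local

but that is just a guess on my part (whether that flag value actually disables the dynamic scheduler is an assumption), so please correct me if there is a proper switch for this experiment.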
Also, it is kind of a pity that Bazel treats --disk_cache as a remote cache, while physically it is local. Dynamic execution makes a lot of sense when distributed execution is enabled, and probably when a REAL remote cache is in play, but a local cache would be faster in 99.9% of cases, so there is no point in incurring the dynamic execution overhead.
Could you shed some light on this, please?
Konstantin