I am applying for GSoC 2017 and have some questions about "Dynamic thread pool sizing".
1. Multithreading is always inspiring, but I am not sure how much it helps compilation. When compiling Bazel with default parameters, my laptop reports `Elapsed time: 183.156s, Critical Path: 178.70s` and my PC server reports `Elapsed time: 77.687s, Critical Path: 62.96s`. My understanding is that the critical path cannot be shortened by multithreading, and it already accounts for most of the elapsed time, so I wonder how much compile time "Dynamic thread pool sizing" can actually save, and whether that saving is the goal of the project.
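To make the question concrete, here is the back-of-the-envelope bound I am assuming (my own reasoning, not anything from the project page): since elapsed time can never drop below the critical path, the gap between the two is the most that smarter scheduling could save on these particular builds.

```java
// My assumption: elapsed >= critical path regardless of thread count, so the
// maximum theoretical savings is simply the gap between them.
public class CriticalPathBound {
    static void report(String machine, double elapsedSec, double criticalPathSec) {
        double maxSavings = elapsedSec - criticalPathSec;
        System.out.printf("%s: elapsed %.2fs, critical path %.2fs, max savings %.2fs (%.1f%%)%n",
                machine, elapsedSec, criticalPathSec, maxSavings, 100.0 * maxSavings / elapsedSec);
    }

    public static void main(String[] args) {
        report("laptop", 183.156, 178.70);  // numbers from my laptop build of Bazel
        report("server", 77.687, 62.96);    // numbers from my PC server build of Bazel
    }
}
```

On the laptop that is only about 4.5s of headroom, while on the server it is roughly 14.7s, which is why I am asking whether the project targets something beyond these particular builds.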
2. I thought compilation was CPU-intensive work, but you mentioned that other resources such as RAM and I/O also matter, and that modeling these resource requirements is the core problem. You suggest two approaches: make the model configurable, or design an algorithm that predicts the model intelligently. The latter seems challenging and is the one I prefer; do you have any suggestions on it? Also, is there any previous work on resource management in other build tools?
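For the configurable option, here is a minimal sketch of what I picture, just to check that I understand the problem; none of these names (ResourceEstimate, SimpleScheduler, tryAdmit) come from Bazel, they are purely hypothetical:

```java
// Hypothetical sketch (not Bazel's API): each action carries an estimated
// resource cost, and the scheduler admits actions only while they fit into
// the host's budget, instead of using a fixed thread count.
class ResourceEstimate {
    final double cpuCores;  // estimated CPU cores used
    final int ramMb;        // estimated peak RAM in MB

    ResourceEstimate(double cpuCores, int ramMb) {
        this.cpuCores = cpuCores;
        this.ramMb = ramMb;
    }
}

class SimpleScheduler {
    private final double cpuBudget;
    private final int ramBudgetMb;
    private double cpuInUse;
    private int ramInUseMb;

    SimpleScheduler(double cpuBudget, int ramBudgetMb) {
        this.cpuBudget = cpuBudget;
        this.ramBudgetMb = ramBudgetMb;
    }

    // Admit an action only while the remaining budget covers its estimate.
    synchronized boolean tryAdmit(ResourceEstimate e) {
        if (cpuInUse + e.cpuCores <= cpuBudget && ramInUseMb + e.ramMb <= ramBudgetMb) {
            cpuInUse += e.cpuCores;
            ramInUseMb += e.ramMb;
            return true;
        }
        return false;
    }

    synchronized void release(ResourceEstimate e) {
        cpuInUse -= e.cpuCores;
        ramInUseMb -= e.ramMb;
    }
}
```

The interesting part is obviously where the per-action estimates come from, which is exactly what the predictive approach would have to learn.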
3. You discussed whether Bazel should be conservative when acquiring resources, but in my opinion the compiler should always be aggressive in competing for resources. What is your reasoning here?
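To clarify what I mean by aggressive versus conservative, here is a toy contrast using a semaphore as a stand-in for available RAM; again, these names are just mine, not anything from Bazel:

```java
import java.util.concurrent.Semaphore;

// Toy illustration of the two policies, with a semaphore standing in for RAM.
class Policies {
    // Conservative: reserve the full worst-case estimate before starting,
    // never overcommitting but possibly leaving the machine idle.
    static void runConservatively(Semaphore ram, int worstCase, Runnable action)
            throws InterruptedException {
        ram.acquire(worstCase);
        try {
            action.run();
        } finally {
            ram.release(worstCase);
        }
    }

    // Aggressive: take the worst case only if it is free right now; otherwise
    // run with a minimal reservation and accept the risk of contention/swapping.
    static void runAggressively(Semaphore ram, int worstCase, int minimum, Runnable action)
            throws InterruptedException {
        int reserved;
        if (ram.tryAcquire(worstCase)) {
            reserved = worstCase;
        } else {
            ram.acquire(minimum);
            reserved = minimum;
        }
        try {
            action.run();
        } finally {
            ram.release(reserved);
        }
    }
}
```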
4. Finally, do you have real-world scenarios for measuring Bazel's performance, or is the existing benchmark sufficient?