On Tue, Mar 05, 2019 at 07:08:59AM -0500, Henry Miller wrote:
> Not exactly. First, I have 60 machines, each with 8 cores. I'm saying
> that none of the above machines can build as fast with icecc as a 32
> core AWS machine that our CI system uses. This is not an equal
> comparison. Not only are the CPUs different, these are all users'
> machines that someone is doing work on - some of the 60 machines are
> doing builds of their own in all tests. Our network also has other
> traffic, and not all machines are on the same subnet.
>
the upshot is that single machines scale better than clusters, esp. when
the nodes are not allocated exclusively.
> If you have no other machines in the network, icecc is always slower
> because it breaks apart build steps that the compiler is more
> efficient doing as one.
>
that can't be it, because that's a linear factor, so (say) 2*x icecc
cores would still beat x local ones.
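to make the arithmetic explicit, here's a toy model (my own sketch, not
icecc's actual scheduling, with made-up numbers): a constant per-job
overhead factor just rescales the total, so it can't flip the ordering
between a bigger and a smaller cluster.

```python
# Toy model: an embarrassingly parallel build where icecc's overhead is
# a constant multiplicative factor c on every job. Then
#   time = c * jobs * unit_cost / cores
# and doubling the cores still halves the time, whatever c is.

def build_time(jobs, cores, overhead_factor=1.0, unit_cost=1.0):
    """Idealized wall-clock time under a purely linear overhead."""
    return overhead_factor * jobs * unit_cost / cores

# hypothetical numbers: 1000 jobs, 8 local cores vs. 16 icecc cores
# carrying a hefty 50% per-job overhead
local = build_time(jobs=1000, cores=8)                         # 125.0
remote = build_time(jobs=1000, cores=16, overhead_factor=1.5)  # 93.75
assert remote < local  # 2x cores win despite the linear overhead
```

so if icecc with far more cores still loses, the overhead must grow
with cluster size rather than stay a fixed per-job factor.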
> Icecc does have overhead, this is documented.
> [...]
> When you only have a few cores adding icecc in helps. When you already
> have a lot of cores locally the overhead of icecc is not worth it.
>
non-linear overhead is created only when disproportionately much work is
done locally before distributing, as is the case with the preprocessing.
that's an architectural limitation of icecc's current implementation.
this can be addressed to a high degree by full distribution (see
distcc's pump mode) built on top of efficient distributed caching.
lubos did an experiment in that direction, but i'm convinced he didn't
go far enough to make it pay off; see
https://github.com/icecc/icecream/issues/138 for the discussion (i
would actually have more to add there, but it's kinda pointless when
nobody is going to implement it anyway).
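the saturation effect of local preprocessing can be sketched with an
Amdahl-style toy model (again my own illustration with invented
numbers, not a measurement of icecc): if a fraction of every job must
run on the submitting machine, remote cores only accelerate the rest,
and past some cluster size adding nodes buys nothing.

```python
# Toy model: each job has a preprocessing fraction p that must run on
# the local machine's cores; only the remaining 1 - p can be farmed
# out. The two stages pipeline, so the slower stage bounds throughput.

def cluster_build_time(jobs, local_cores, remote_cores, preprocess_frac):
    local_stage = jobs * preprocess_frac / local_cores
    remote_stage = jobs * (1 - preprocess_frac) / (local_cores + remote_cores)
    return max(local_stage, remote_stage)

# hypothetical: 1000 jobs, 8 local cores, 20% of each job spent on
# local preprocessing -- the local stage fixes a floor of 25.0, so
# growing the cluster from 24 to 504 remote cores changes nothing
small = cluster_build_time(1000, 8, 24, 0.2)    # 25.0
huge = cluster_build_time(1000, 8, 504, 0.2)    # still 25.0
assert small == huge  # non-linear: extra nodes stop paying off
```

full distribution of the preprocessing (pump-mode style) attacks
exactly that local-stage floor.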