CCache is a compiler cache that speeds up recompilations of
[Objective-]C/C++ code by caching previous compilations and detecting
when the same compilation is being done again. The latest bleeding
edge version can also cope with the various quirks of xcodebuild and
Apple's distcc fork. Especially when switching branches or making GRD
changes, this can really speed up builds.
A description of how to use CCache for your Mac builds can be found at
http://code.google.com/p/chromium/wiki/CCacheMac . Let me know if you
run into problems with it.
Bernhard.
My machine has two quad-core Xeons with hyperthreading, so it's
entirely possible that I'm seeing a lock-contention issue. But
reading the ccache code doesn't make me hopeful (the locking seems
reasonable to me). I might try with stats disabled, but that doesn't
seem like it should be a real point of contention.
-scott
On Thu, Jun 17, 2010 at 9:55 AM, Bernhard Bauer <bau...@chromium.org> wrote:
:-(
I've also noticed that behavior: phases with lots of disk access and
no compiles running, and phases with high CPU usage where compiles are
running. But I was under the assumption that low CPU usage is a good
thing, as it means that I'm saving compilations there.
Remember, the point of ccache is not to increase compilation
throughput, but to reduce the actual need for compilations.
Did you check if Xcode actually made progress during the low CPU
phases? For me it ran through a bunch of compile jobs there without
actually starting a compiler (and presumably faster than doing so).
Bernhard.
As I understand it, the problem is that ccache operates by mapping
hash(preprocess(source)) to compile(source). This implies running
preprocess locally.
Pump-mode distcc is able to achieve additional speedups by performing
the preprocess step remotely.
If you put ccache first, you force local preprocessing, which undoes
all of pump mode’s performance gains.
Effectively, you’re choosing between optimizing for the “my files
change a lot” and “I compulsively clean my object tree but rarely
sync” cases.
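Roughly, the local-ccache arrangement does something like this (Python
pseudocode, just to illustrate the idea; the cache layout, hash choice
and helper names are made up, this is not ccache's actual code):

  # Sketch of a ccache-style "preprocessor mode" lookup. Everything here
  # is illustrative: the cache layout and helper names are invented.
  import hashlib, os, shutil, subprocess

  CACHE_DIR = os.path.expanduser("~/.objcache")

  def preprocess(source, flags):
      # The preprocessor runs on the *local* machine; this is the step
      # pump-mode distcc would otherwise push out to the compile servers.
      return subprocess.run(["gcc", "-E", source] + flags,
                            check=True, capture_output=True).stdout

  def cached_compile(source, flags, output):
      key = hashlib.sha1(preprocess(source, flags) +
                         " ".join(flags).encode()).hexdigest()
      cached = os.path.join(CACHE_DIR, key + ".o")
      if os.path.exists(cached):
          shutil.copy(cached, output)      # hit: no compiler (or distcc) run
          return
      # Miss: this is where "distcc gcc -c ..." would run in the real setup.
      subprocess.run(["gcc", "-c", source, "-o", output] + flags, check=True)
      os.makedirs(CACHE_DIR, exist_ok=True)
      shutil.copy(output, cached)

Whatever comes after ccache in the chain, preprocess() has already run
on the local machine by the time the cache is consulted, which is why
pump mode's remote preprocessing is lost.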
In my (copious!) spare time, I’m working on this problem by moving
ccache to the remote side. (I’m talking in terms of principle now; I’m
not actually using ccache or distcc.) The idea is that instead of
running “ccache distcc gcc” (local ccache), it’ll be “distcc ccache
gcc” (remote ccache). In order for this to work effectively, you need
to be either talking to a single server that manages the cache, or to
a pool of servers that share a cache. I don’t think ccache is really
designed for that, which is why (when I looked) they recommended you
use “ccache distcc gcc” (local ccache) instead.
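The remote-side version would look more like the following (again just
an illustrative Python sketch, not real distcc or ccache code; the
shared-cache path, and faking the shared store as a directory that all
servers can see, are assumptions):

  # Sketch of the remote-side idea: the server preprocesses, hashes and
  # compiles, and every server in the pool consults the same shared store.
  import hashlib, os, subprocess

  SHARED_CACHE = "/net/objcache"   # hypothetical pool-wide location

  def serve_compile(workdir, source, flags):
      # 'workdir' holds the source and headers the client shipped over.
      preprocessed = subprocess.run(["gcc", "-E", source] + flags,
                                    cwd=workdir, check=True,
                                    capture_output=True).stdout
      key = hashlib.sha1(preprocessed + " ".join(flags).encode()).hexdigest()
      path = os.path.join(SHARED_CACHE, key + ".o")
      if os.path.exists(path):             # a hit from any server helps all
          return open(path, "rb").read()
      out = os.path.join(workdir, "out.o")
      subprocess.run(["gcc", "-c", source, "-o", out] + flags,
                     cwd=workdir, check=True)
      obj = open(out, "rb").read()
      with open(path, "wb") as f:
          f.write(obj)
      return obj

The important part is that every server in the pool consults the same
store, so a hit produced by one server helps all of them; per-server
private caches would mostly miss.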
(Xcode 3.1 and earlier, as used on Mac OS X 10.5, don’t include a
distcc capable of pump-mode. I got it working anyway, with TVL’s help,
and we use this on our 10.5 buildbots. Xcode 3.2.2 and up, which
require 10.6, include our work and are pump-mode-enabled.)
Mark
CCache 3.0 includes a "direct" mode, which can get by without running
the preprocessor. It effectively keeps an additional map from (subset
of compiler options + hashes of all input files) to
hash(preprocess(source)), which then can be mapped to
(compile(source)). In my tests, I've had very few cases where that
didn't work and ccache had to fall back to running the preprocessor.
So, in theory, this should work quite nicely with distcc-pump. I'll
try it out as soon as I upgrade my Mac to 10.6.
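In sketch form, the direct-mode lookup is a two-level map, something
like this (Python for illustration only; the manifest format and file
layout are invented, not ccache 3.0's actual data structures):

  # Sketch of the two-level "direct mode" lookup: (options + hashes of
  # all input files) -> hash of the preprocessed source -> cached object.
  import hashlib, json, os

  CACHE_DIR = os.path.expanduser("~/.objcache")

  def file_hash(path):
      with open(path, "rb") as f:
          return hashlib.sha1(f.read()).hexdigest()

  def direct_key(source, included_files, flags):
      parts = [" ".join(flags), file_hash(source)]
      parts += [file_hash(p) for p in sorted(included_files)]
      return hashlib.sha1("\n".join(parts).encode()).hexdigest()

  def direct_lookup(source, included_files, flags):
      # 'included_files' is the header list recorded the last time this
      # translation unit was actually compiled.
      manifest = os.path.join(
          CACHE_DIR, direct_key(source, included_files, flags) + ".manifest")
      if not os.path.exists(manifest):
          return None     # fall back to preprocessor mode, as sketched above
      with open(manifest) as f:
          preprocessed_hash = json.load(f)["preprocessed_hash"]
      return os.path.join(CACHE_DIR, preprocessed_hash + ".o")

On a hit nothing runs the preprocessor at all, which is why this
shouldn't conflict with distcc-pump the way preprocessor mode does.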
> Effectively, you’re choosing between optimizing for the “my files
> change a lot” and “I compulsively clean my object tree but rarely
> sync” cases.
Using ccache is also helpful if you're often switching between
different branches, or if you make XIB changes, which cause some
headers to be regenerated without their contents changing.
> In my (copious!) spare time, I’m working on this problem by moving
> ccache to the remote side. (I’m talking in terms of principle now; I’m
> not actually using ccache or distcc.) The idea is that instead of
> running “ccache distcc gcc” (local ccache), it’ll be “distcc ccache
> gcc” (remote ccache). In order for this to work effectively, you need
> to be either talking to a single server that manages the cache, or to
> a pool of servers that share a cache. I don’t think ccache is really
> designed for that, which is why (when I looked) they recommended you
> use “ccache distcc gcc” (local ccache) instead.
I thought about doing something similar, but I'm trying to create a
shared object file cache for ccache (using a local cache with
transparent fallback to a remote cache, and asynchronously uploading
object files from the local one to the remote one). This way, you can
keep the problem of distributing compilation work separate from the
problem of reducing the amount of work that needs to be done.
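In outline it would be something like this (a Python sketch of the idea
only; the fetch/upload interface and the naming are invented for the
example, nothing like this exists in ccache today):

  # Sketch of an object-file cache with a local tier, transparent
  # fallback to a remote tier, and asynchronous upload.
  import os, shutil, threading

  class TwoTierCache:
      def __init__(self, local_dir, remote):
          self.local_dir = local_dir
          self.remote = remote   # needs fetch(key) -> bytes or None, upload(key, path)
          os.makedirs(local_dir, exist_ok=True)

      def get(self, key, dest):
          local = os.path.join(self.local_dir, key + ".o")
          if not os.path.exists(local):
              blob = self.remote.fetch(key)     # transparent remote fallback
              if blob is None:
                  return False                  # full miss: caller compiles
              with open(local, "wb") as f:      # populate the local tier too
                  f.write(blob)
          shutil.copy(local, dest)
          return True

      def put(self, key, obj_path):
          local = os.path.join(self.local_dir, key + ".o")
          shutil.copy(obj_path, local)
          # Upload in the background so the build never waits on the network.
          threading.Thread(target=self.remote.upload, args=(key, local),
                           daemon=True).start()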
Bernhard.