How to ccache kernel compilations


Marcus Linsner

Aug 23, 2018, 10:27:13 AM
to qubes-users
I'm trying to use ccache to compile kernel(s), but after about 1k cache misses I see only an 8% hit rate, even though I keep the compilation dir ('linux-obj'), run a clean (the kernel Makefile's clean, not rpm's clean) and re-issue the just-build step after a `ccache -z` (to zero the ccache stats).

So, just installing ccache via `sudo dnf install ccache` and opening a new terminal should put ccache's compiler wrappers first in PATH, like:
$ which gcc
/usr/lib64/ccache/gcc
$ which cc
/usr/lib64/ccache/cc
etc.
This makes the kernel compilation always go through ccache (the cache miss counter keeps increasing), yet it's almost always a cache miss, even when, as I said, I keep the obj dir and just run `make clean` inside it.
What am I missing here?

I'm using https://github.com/QubesOS/qubes-linux-kernel
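For reference, the stats below are just `ccache -s` output; the rough loop I'm timing (commands as used elsewhere in this thread) is:

$ ccache -z          # zero the statistics
$ make clean         # the kernel Makefile's clean, inside linux-obj
$ time make -j18     # rebuild
$ ccache -s          # print the counters shown below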

cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf
stats zero time Thu Aug 23 16:10:18 2018
cache hit (direct) 88
cache hit (preprocessed) 2
cache miss 1026
cache hit rate 8.06 %
called for link 22
called for preprocessing 2052
unsupported code directive 2
no input file 177
cleanups performed 0
files in cache 6730
cache size 148.4 MB
max cache size 20.0 GB


Next compilation, after a `ccache -z` and the kernel Makefile's `make clean`:
cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf
stats zero time Thu Aug 23 16:21:17 2018
cache hit (direct) 105
cache hit (preprocessed) 2
cache miss 1015
cache hit rate 9.54 %
called for link 22
called for preprocessing 2047
unsupported code directive 3
no input file 295
cleanups performed 0
files in cache 9859
cache size 217.9 MB
max cache size 20.0 GB

Marcus Linsner

Aug 23, 2018, 10:36:51 AM
to qubes-users
Quick question:
If kernel-4.14.57/linux-obj/Makefile is being regenerated on every build (even if the `linux-obj` dir is kept between successive builds), does that automatically cause `make` to rebuild everything and somehow invalidate ccache?

Marcus Linsner

Aug 23, 2018, 10:57:47 AM
to qubes-users
On Thursday, August 23, 2018 at 4:36:51 PM UTC+2, Marcus Linsner wrote:
> Quick question:
> If kernel-4.14.57/linux-obj/Makefile is being regenerated on every build (even if the `linux-obj` dir is kept between successive builds), does that automatically cause `make` to rebuild everything and somehow invalidate ccache?

To answer my own question: no effect, even if I stop it from being regenerated (e.g. by touching it).
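One way to dig into why each compile misses (not something I tried above, just a sketch) is ccache's log file:

$ CCACHE_LOGFILE=/tmp/ccache.log make -j18
# then read /tmp/ccache.log to see, per invocation, what ccache hashed and why it decided on a miss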

Marcus Linsner

Aug 23, 2018, 11:21:48 AM
to qubes-users
On Thursday, August 23, 2018 at 4:27:13 PM UTC+2, Marcus Linsner wrote:
> I'm trying to use ccache to compile kernel(s), but after about 1k cache misses I see only an 8% hit rate, even though I keep the compilation dir ('linux-obj'), run a clean (the kernel Makefile's clean, not rpm's clean) and re-issue the just-build step after a `ccache -z` (to zero the ccache stats).
OK, I'm on to something: it's the .config!
If I use the default .config (i.e. in the source folder run `make mrproper; make menuconfig`, then Save and Exit), copy that .config into ../linux-obj/, and then execute this twice:
$ time make clean; ccache -z; time make -j18
I get a ccache direct hit rate over 90%, which is how it should be.

I'll post again if I find out exactly which options in the .config are the ccache-busting culprits.
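Putting that check together in one go (directory names taken from my layout above):

$ cd kernel-4.14.57/linux-4.14.57
$ make mrproper && make menuconfig             # take the defaults, Save, Exit
$ cp .config ../linux-obj/
$ cd ../linux-obj
$ time make clean; ccache -z; time make -j18   # run this twice; the second run gets the >90% direct hits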

Marcus Linsner

Aug 23, 2018, 12:01:17 PM
to qubes-users

Well, it's CONFIG_GCC_PLUGINS: it has to be unset for ccache to work (otherwise you get well under an 8% hit rate instead of almost 100%).
It's being set in the file `config-qubes` like so:
CONFIG_GCC_PLUGINS=y
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
CONFIG_GCC_PLUGIN_STRUCTLEAK=y

It was also the reason these two patches were needed here: https://groups.google.com/forum/#!topic/qubes-devel/Q3cdQKQS4Tk
(to avoid compilation failures when compiling kernel 4.14.57)

Great, now ccache works even with `make rpms`!
If anyone knows another way, or why I should keep those GCC plugins, lemme know? :D
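For reference, instead of editing config-qubes by hand, the same options could presumably be unset in the generated .config with the kernel's own scripts/config helper (just a sketch; paths are from my layout, and the build may well re-add them from config-qubes):

$ cd kernel-4.14.57/linux-obj
$ ../linux-4.14.57/scripts/config --file .config \
      --disable GCC_PLUGINS \
      --disable GCC_PLUGIN_LATENT_ENTROPY \
      --disable GCC_PLUGIN_STRUCTLEAK
$ make olddefconfig    # re-resolve any options that depended on the plugins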

cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf

stats zero time Thu Aug 23 17:57:29 2018
cache hit (direct) 1029
cache hit (preprocessed) 8
cache miss 32
cache hit rate 97.01 %
called for link 37
called for preprocessing 1900
cache file missing 1
unsupported code directive 3
no input file 704
cleanups performed 0
files in cache 73615
cache size 1.8 GB

Marcus Linsner

Aug 24, 2018, 5:24:27 AM
to qubes-users
This is what a full (well, slightly modified) kernel compilation looks like now, with ccache working, i.e. `time make rpms`:
real 7m47.483s
user 9m2.507s
sys 6m47.245s

cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf

stats zero time Fri Aug 24 11:09:03 2018
cache hit (direct) 14047
cache hit (preprocessed) 1
cache miss 8
cache hit rate 99.94 %
called for link 47
called for preprocessing 21125
unsupported code directive 4
no input file 1092
cleanups performed 0
files in cache 42606
cache size 865.4 MB


max cache size 20.0 GB

The build phase itself actually takes only 2 min (for ~14k files):
real 2m1.674s
user 5m28.075s
sys 4m50.768s


cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf

stats zero time Fri Aug 24 11:17:37 2018
cache hit (direct) 14011
cache hit (preprocessed) 0
cache miss 5
cache hit rate 99.96 %
called for link 28
called for preprocessing 21069
unsupported code directive 4
no input file 342
cleanups performed 0
files in cache 42616
cache size 865.6 MB

awokd

Aug 24, 2018, 5:51:45 AM
to Marcus Linsner, qubes-users
On Fri, August 24, 2018 9:24 am, Marcus Linsner wrote:
> This is what a full (well, slightly modified) kernel compilation looks like
> now, with ccache working, i.e. `time make rpms`:
> real 7m47.483s
> user 9m2.507s
> sys 6m47.245s

Any idea what those GCC plugins are for? Seems like it's usually a hassle
to track them down on distro version updates too.


Marcus Linsner

Aug 24, 2018, 6:13:01 AM
to qubes-users

And for comparison, a full %build phase when CONFIG_GCC_PLUGINS is left untouched (i.e. set):
real 17m19.746s
user 125m44.920s
sys 17m9.877s

cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf

stats zero time Fri Aug 24 11:27:18 2018
cache hit (direct) 28
cache hit (preprocessed) 133
cache miss 13857
cache hit rate 1.15 %
called for link 30
called for preprocessing 21075
unsupported code directive 4
no input file 348
cleanups performed 0
files in cache 84685
cache size 1.7 GB


max cache size 20.0 GB

So you see: 15 more minutes than with ccache working. OK, maybe let's say that was the first compilation with CONFIG_GCC_PLUGINS set (i.e. a cold cache), so redoing it (`make prep; ccache -z; time make rpms-just-build`) should make use of the now-primed ccache (i.e. a hot cache):
real 18m34.318s
user 122m23.001s
sys 17m7.478s

cache directory /home/user/.ccache
primary config /home/user/.ccache/ccache.conf
secondary config (readonly) /etc/ccache.conf

stats zero time Fri Aug 24 11:46:30 2018
cache hit (direct) 160
cache hit (preprocessed) 2
cache miss 13856
cache hit rate 1.16 %
called for link 30
called for preprocessing 21075
unsupported code directive 4
no input file 348
cleanups performed 0
files in cache 126746
cache size 2.6 GB


max cache size 20.0 GB

It probably took one minute longer than before because I was using other VMs for browsing (and also started a few).
But you get the point: a 1.2% ccache hit rate. Appalling! :D

On Friday, August 24, 2018 at 11:51:45 AM UTC+2, awokd wrote:
> Any idea what those GCC plugins are for? Seems like it's usually a hassle
> to track them down on distro version updates too.

According to the 'config-qubes' file [1], they "Enable some more hardening options".

According to '/home/user/qubes-linux-kernel/kernel-4.14.57/linux-4.14.57/arch/Kconfig' [2]:

menuconfig GCC_PLUGINS
        bool "GCC plugins"
        depends on HAVE_GCC_PLUGINS
        depends on !COMPILE_TEST
        help
          GCC plugins are loadable modules that provide extra features to the
          compiler. They are useful for runtime instrumentation and static analysis.

          See Documentation/gcc-plugins.txt for details.

(see URL [3] at the end for this gcc-plugins.txt)

config GCC_PLUGIN_LATENT_ENTROPY
        bool "Generate some entropy during boot and runtime"
        depends on GCC_PLUGINS
        help
          By saying Y here the kernel will instrument some kernel code to
          extract some entropy from both original and artificially created
          program state. This will help especially embedded systems where
          there is little 'natural' source of entropy normally. The cost
          is some slowdown of the boot process (about 0.5%) and fork and
          irq processing.

          Note that entropy extracted this way is not cryptographically
          secure!

          This plugin was ported from grsecurity/PaX. More information at:
           * https://grsecurity.net/
           * https://pax.grsecurity.net/

config GCC_PLUGIN_STRUCTLEAK
        bool "Force initialization of variables containing userspace addresses"
        depends on GCC_PLUGINS
        help
          This plugin zero-initializes any structures containing a
          __user attribute. This can prevent some classes of information
          exposures.

          This plugin was ported from grsecurity/PaX. More information at:
           * https://grsecurity.net/
           * https://pax.grsecurity.net/

[1] https://github.com/QubesOS/qubes-linux-kernel/blob/d382499510ba3f6e69cd888e2f6e59ce41aa8550/config-qubes#L27-L32
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/Kconfig?h=v4.14#n432
[3] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/gcc-plugins.txt?h=v4.14

Marcus Linsner

Aug 24, 2018, 7:14:09 AM
to qubes-users

For posterity, the modifications (applied on top of the 'qubes-linux-kernel' repo's tag 'v4.14.57-2') that I used to achieve the above are here:
https://github.com/constantoverride/qubes-linux-kernel/commit/ac9a975512bdc67dc12c948355b14dfdcc229b1a
(also attached just in case github goes away, somehow)

ac9a975512bdc67dc12c948355b14dfdcc229b1a.patch

Marcus Linsner

Aug 25, 2018, 9:44:10 AM
to qubes-users
On Friday, August 24, 2018 at 1:14:09 PM UTC+2, Marcus Linsner wrote:
> For posterity, the modifications (applied on top of 'qubes-linux-kernel' repo's tag 'v4.14.57-2') that I used to achieve the above, are here:
> https://github.com/constantoverride/qubes-linux-kernel/commit/ac9a975512bdc67dc12c948355b14dfdcc229b1a
> (also attached just in case github goes away, somehow)

The way I tried to compile the kernel in this thread was wrong: installing the result in dom0 would fail because the compilation VM was Fedora 28 instead of 25, so dom0 would be missing some newer libs; and on a Fedora 25 VM the compilation itself would fail.

The right way to compile a VM (and dom0?) kernel is to use qubes-builder, which chroots into fc25 (i.e. Fedora 25, which is what dom0 runs), even though we're running inside a Fedora 28 VM. Thanks to fepitre for telling me the steps here: https://github.com/QubesOS/qubes-linux-kernel/pull/22#issuecomment-415453140
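From memory, the qubes-builder flow looks roughly like the sketch below; fepitre's linked comment is the authoritative version, and the builder.conf variable names here are my recollection rather than something I verified:

$ git clone https://github.com/QubesOS/qubes-builder
$ cd qubes-builder
$ # in builder.conf, point DIST_DOM0 at fc25 and put linux-kernel in COMPONENTS (names assumed)
$ make get-sources      # fetch the configured component sources
$ make linux-kernel     # build the kernel component inside the fc25 chroot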

I'll keep track of my kernel compilation progress here: https://gist.github.com/constantoverride/825717e0136f804aa6ebf66293234b57
(e.g. making ccache work with this version of the compilation steps)
