Efforts to lower build times? Build time metrics show 2x increase since July 2022


Alesandro Ortiz

Jun 28, 2024, 2:21:58 AM
to Chromium-dev
Hello,

I hope I'm not adding to the "my build is too slow" pile of emails, but this seems like an actionable concern.

I saw this graph [1] and noticed that build times (measured in CPU hours) have more than doubled since July 2022. Between July 2018 and July 2022, they held relatively steady at ~30-40 CPU hours (~40-50 hours between June 2020 and April 2021). Since then, they have steadily increased, now reaching ~85-90 CPU hours.

The 5-year graph [2] shows the unusual trend more clearly.

Assuming the data in the graph is accurate, this is concerning for external developers like myself, who have limited resources to build Chromium on a regular basis. Based on the graph, developers likely have to pay twice as much to build Chromium in cloud environments, or spend twice as long sword fighting [3]. Anecdotally, this seems true.

Is there active work to analyze and lower, or at least stabilize, build times?

Any work to stabilize build times is very much appreciated. If there's a big Good Reason™ that justifies the increasing build times, or I'm misinterpreting the graph, please disregard this email. :)


Regards,
Alesandro

danakj

Jun 28, 2024, 12:20:03 PM
to ales...@alesandroortiz.com, Chromium-dev
On Fri, Jun 28, 2024 at 2:21 AM Alesandro Ortiz <ales...@alesandroortiz.com> wrote:
> I saw this graph [1] and noticed that build times (measured in CPU hours) have more than doubled since July 2022. [...]
> Is there active work to analyze and lower, or at least stabilize, build times?

I think it would be a better graph if it were normalized for the number of files or lines of code. The codebase (and binary) keeps increasing in size. And FWIW, C++ continues to get even more expensive (read: slow) to compile as it evolves.
 

David Benjamin

Jun 28, 2024, 12:37:09 PM
to dan...@chromium.org, ales...@alesandroortiz.com, Chromium-dev
On Fri, Jun 28, 2024 at 12:18 PM danakj <dan...@chromium.org> wrote:
> I think it would be a better graph if it were normalized for the number of files or lines of code. The codebase (and binary) keeps increasing in size. And FWIW, C++ continues to get even more expensive (read: slow) to compile as it evolves.

There's also somewhat inherently a conflict between build times and other (more important) properties like safety and correctness. Safer and less error-prone constructs are usually higher-level and rely on more compiler power to compensate for them. For example, any system that bounds-checks array accesses will take longer to compile than an equivalent system that doesn't. It takes more code to emit the bounds checks, and more compiler machinery to reason about the bounds checks and optimize out the ones that are redundant.
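
As a minimal sketch of the extra work involved (illustrative only; this is not Chromium's actual checked-container code):

// Hypothetical example; not Chromium's real bounds-checking machinery.
#include <cstddef>
#include <cstdlib>

int raw_get(const int* data, std::size_t i) {
  return data[i];  // One load; nothing extra to emit or optimize.
}

int checked_get(const int* data, std::size_t size, std::size_t i) {
  if (i >= size) std::abort();  // Extra branch the compiler must emit,
  return data[i];               // then try to prove redundant and delete
                                // wherever i is statically in range.
}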

That's not to say we shouldn't improve it. There are plenty of non-safety and non-correctness issues that contribute to build times. C++'s textual header inclusion model is horrible for compile times. More modern module systems do much better, and I hope someday we will finally get C++20 modules and avoid that mess. But the inherent conflict means it is also natural for build times to trend up over time. Where build times do conflict with our ability to efficiently ship a secure, correct, and performant browser, I think it's correct for build times to take a back seat.
 


Jayson Adams

Jun 28, 2024, 1:04:11 PM
to Chromium-dev, David Benjamin, ales...@alesandroortiz.com, dan...@chromium.org
More CPU per file, or safety and correctness - both are speculation. We can guess at the root cause, but Chrome taking double the CPU to build today compared to 2022 seems like an alarming trend that we'd want to understand.


danakj

Jun 28, 2024, 1:27:54 PM
to Jayson Adams, Chromium-dev, David Benjamin, ales...@alesandroortiz.com
On Fri, Jun 28, 2024 at 1:04 PM Jayson Adams <shr...@chromium.org> wrote:
> More CPU per file, or safety and correctness - both are speculation. We can guess at the root cause, but Chrome taking double the CPU to build today compared to 2022 seems like an alarming trend that we'd want to understand.

This is well known to the folks who maintain the compilers and the bot infrastructure, so I guess it is prioritized among many other things there. I suspect any effort to move this will need to be volunteer-driven/self-directed. If you care about this, I encourage you to dig in. :) I too want things to be faster.

Also, FWIW, that graph measures a clean build, which is not really representative of what developers are doing most of the time. But it's still data.


(And +1 to David's comments)
 


Demetrios Papadopoulos

Jun 28, 2024, 2:22:45 PM
to dan...@chromium.org, Jayson Adams, Chromium-dev, David Benjamin, ales...@alesandroortiz.com
> There's also somewhat inherently a conflict between build times and other (more important) properties like safety and correctness.

Indeed. It is also worth noting that:
 a) The WebUI codebase has migrated from JavaScript to TypeScript over the last few years.
 b) More TS (and WebUI in general) code is being added at a high pace.

The above means there is more work to be done during the build that is unrelated to C++ (more files thrown at the TS compiler for type checking, more code thrown at other tools like Terser, Rollup, and others when optimize_webui=true, more NodeJS invocations).

For example, the following command
gn ls -C out/gchrome | grep ":build_ts" | wc -l

reveals that there are 247 TypeScript targets (ts_library.gni) across the build at this point.

Thanks,
Demetrios

Demetrios Papadopoulos

Jun 28, 2024, 2:25:36 PM
to dan...@chromium.org, Jayson Adams, Chromium-dev, David Benjamin, ales...@alesandroortiz.com
On Fri, Jun 28, 2024 at 11:20 AM Demetrios Papadopoulos <dpa...@chromium.org> wrote:
> For example, the following command
>
> gn ls -C out/gchrome | grep ":build_ts" | wc -l

The command above should have been

gn ls -C out/gchrome | grep ":build_ts$" | wc -l

which yields 186 targets instead, but the main point of having more work to do during the build compared to previous years is still valid.

Lei Zhang

Jun 28, 2024, 2:30:29 PM
to shr...@chromium.org, Chromium-dev, David Benjamin, ales...@alesandroortiz.com, dan...@chromium.org
V8 also takes a long time to build. For instance, the V8 bots here [1] do not do distributed builds, and they take 2 hours to build.



Ho Cheung

Jun 28, 2024, 9:56:44 PM
to Chromium-dev, Lei Zhang, David Benjamin, ales...@alesandroortiz.com, dan...@chromium.org, shr...@chromium.org
In my local environment, a full compilation of Chromium takes longer than before and takes up more hardware resources. The V8, //content/browser, and blink targets take up a lot of time and resources during the entire compilation process.

Alesandro Ortiz

Jun 29, 2024, 1:22:44 PM
to Chromium-dev, Lei Zhang, David Benjamin, ales...@alesandroortiz.com, dan...@chromium.org, shr...@chromium.org
Thank you all for the responses.

> I think it would be a better graph if it were normalized for the number of files or lines of code.

Agreed. That said, even if a normalized graph doesn't show an outsized trend, the pain persists. I also agree that safety is a priority, and it likely contributes to some of the increase.

I don't know how, but I'd like to confirm whether LOC is indeed the primary driver of the increases. If it isn't, then there's some hope for "easier" improvements.

> Also, FWIW, that graph measures a clean build, which is not really representative of what developers are doing most of the time. But it's still data.

I want to clarify that my main concern is with fresh builds, given how frequently I unfortunately have to do them (at least once a month; sometimes weekly). Sometimes I need to rebuild for 2-3 different OS targets, a separate ASan target, or different args.gn based on current needs, which can add up to a full day or more. Maybe there are workflow improvements I could implement to reuse parts of the builds, but either way, it's quite a bit of time to rebuild multiple targets.

Maybe I'm an outlier of an outlier, but sometimes even a week's worth of changes results in effectively a fresh build, whether due to the large number of changes, a widely-used file being updated, or a small change that causes compiler errors with the existing output dir, forcing me to nuke the output dir to resolve them. (Maybe the stale-files issue is not quite WAI, and something I should ask about in a separate thread/crbug, but based on what I've read, it's somewhat expected.)

The discussion in https://crbug.com/40280306 is quite insightful, even if focused on incremental component builds (I'm doing non-component builds, often fresh builds).

Regards,
Alesandro

Peter Kasting

Jun 29, 2024, 3:55:40 PM
to ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
I'm on my phone, excuse typos.

Build times can roughly be broken into "compile" and "link". Overall time is a function of both + parallelism.

Compile times are mostly a function of input tokens, which you can sort of upper bound as O(n^2) in lines of code. (That's a terrible approximation, but whatever.) This is because compiling a single file requires compiling the transitive closure of the #include graph. Link times are closer to being linear in lines of code, but depend greatly on your build config (LTO and linking debug info are both slow) as well as things like your disk and memory bandwidth and free memory (linking a large target is extremely memory-hungry).
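
(Concretely, with hypothetical files: if foo.cc includes foo.h, foo.h includes bar.h, and bar.h includes baz.h, then compiling foo.cc parses all three headers, and so does every other .cc whose include graph reaches foo.h. That multiplication is where the superlinear growth comes from.)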

Build times held steady for a while because a bunch of people made a focused effort to remove unnecessary #includes and to split up commonly-#included files into smaller pieces. Think of this work as driving the n^2 compile time closer towards O(n). For the couple of years that work went on, we basically kept pace with the increasing codebase size. Unfortunately, we have basically picked most of the low-hanging fruit there, which is one reason things have been getting slower again. C++20 didn't help, as it increased compile times by about 10%. (It does provide a lot of tools we could use to reduce compile times in the future, but you only benefit from those where you actually use them.)
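
As an example of the kind of change involved (a generic sketch, not any specific Chromium header):

// Before: every file that includes controller.h also parses widget.h's
// entire transitive include closure.
#include "widget.h"
class Controller {
  Widget* widget_;
};

// After: a forward declaration suffices for a pointer member; only
// controller.cc needs the full widget.h.
class Widget;
class Controller {
  Widget* widget_;
};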

There are two primary, not-mutually-exclusive paths forward. One is to resurrect the jumbo build from 2017. This concatenates source files into groups so you only pay the #include costs for one group at once, not every file. I am experimenting with this locally. It has promising effects on compile times but there are meaningful maintenance costs we have to pay if we go this route, so do not count on it happening.
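
Roughly, a jumbo build compiles generated translation units along these lines (a sketch of the idea with made-up file names, not actual generated output):

// gen/jumbo_group_1.cc: several source files' worth of code, parsing
// their shared #includes only once instead of once per file.
#include "browser/foo.cc"
#include "browser/bar.cc"
#include "browser/baz.cc"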

The second is modules, either the pre-C++20 "Clang modules" or true C++20 modules. These can be thought of a bit like "precompiled headers on steroids". The downside is that support in LLVM is still buggy, we need build system support, and to take full advantage of C++20 modules we need to gradually rewrite ~all our current usage of header files. "Clang modules" provide less win but require far less effort. Unfortunately there is no current staffing on any of this work.
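
For anyone who hasn't seen them, a minimal C++20 modules sketch (illustrative only; Chromium does not build this way today):

// math.cppm: a module interface, compiled once into a binary form that
// importers load instead of re-parsing text.
export module math;
export int square(int x) { return x * x; }

// user.cc: importing does not textually re-expand the interface.
import math;
int nine() { return square(3); }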

What work has happened to date is primarily focused on the siso and reclient tools, which (respectively) replace ninja and goma. Depending on platform, they can improve build throughput, primarily for remote builds that are massively parallel. This way, in the limit, Google can "solve" high build times (at least for itself) by throwing more hardware at the problem. Reclient uses RBE, which to my knowledge is open and non-proprietary, and I believe the goal is to allow broad access to it (but not necessarily to Google's server clusters, which would probably only be accessible to Googlers; open source contributors would presumably need to put together a server pool on AWS or Google Cloud or something. Please assume I don't know anything here and everything I say is wrong).

So if you're external and doing fully local builds, your best near-term hope is that I am able to get enough promising data to convince leadership that resurrecting jumbo makes sense. In the absolute best case, that would likely be about six months out. In the meantime, you can try to use the #include size page (too lazy to link sorry) to drive down more unnecessary #includes, or contribute on the gn and llvm sides to accelerate module adoption.

PK

Yoav Weiss (@Shopify)

Jun 30, 2024, 11:36:25 PM
to pkas...@chromium.org, Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
Thanks Alesandro for bringing this up!

The creeping build times are indeed a significant issue when trying to convince folks to contribute to Chromium. Regardless of the (valid) reasons why we have expensive builds, it should be noted that both WebKit and Gecko are ~an order of magnitude faster to build.


(personally, I have RBE access, which is *extremely* appreciated. But folks new to the project don't benefit from that luxury)

On Sat, Jun 29, 2024 at 9:54 PM Peter Kasting <pkas...@chromium.org> wrote:
> Build times held steady for a while because a bunch of people made a focused effort to remove unnecessary #includes and to split up commonly-#included files into smaller pieces. [...] Unfortunately, we have basically picked most of the low-hanging fruit there, which is one reason things have been getting slower again. C++20 didn't help, as it increased compile times by about 10%. [...]

I suspect that tools to detect unnecessary includes and/or cases where includes could be turned into forward declarations could go a long way toward reducing that non-linearity.
Also, in the past, I've seen the following pattern used to avoid including the same header file multiple times:

#ifndef WHATEVER_H_
#include "whatever.h"
#endif

That helped avoid I/O operations in cases where multiple included .h files all depend on the same .h file. It might be worthwhile to investigate.
While it's an extremely ugly pattern, because all of our header file guards are predictable, maybe we could macro it away? (If this proves to be actually useful.)
 

> There are two primary, not-mutually-exclusive paths forward. One is to resurrect the jumbo build from 2017. This concatenates source files into groups so you only pay the #include costs for one group at once, not every file. [...]

AFAICT, that's the approach WebKit took, but it should be noted that jumbo builds are no panacea.
They optimize the "fresh build"/rebase case, but significantly increase the cost of small, iterative changes.
Building WebKit from scratch takes ~an hour (compared to 8+ hours for Chromium), but a simple change takes ~90 seconds, compared to a few seconds.
 

> The second is modules, either the pre-C++20 "Clang modules" or true C++20 modules. These can be thought of a bit like "precompiled headers on steroids". [...] Unfortunately there is no current staffing on any of this work.

> What work has happened to date is primarily focused on the siso and reclient tools, which (respectively) replace ninja and goma. Depending on platform, they can improve build throughput, primarily for remote builds that are massively parallel. This way, in the limit, Google can "solve" high build times (at least for itself) by throwing more hardware at the problem. [...]

While that (presumably) works well for orgs with a large number of contributors, I don't think it helps new contributors, as it requires a hefty upfront investment.

An interesting angle to approach this may be from a cost/energy-savings perspective. Presumably, Google pays a non-trivial amount for these 90 CPU hours per build for folks on the Chrome team (and folks with RBE access, like myself), and reducing that time could pay for itself in the long run.

K. Moon

Jul 1, 2024, 3:26:03 AM
to Yoav Weiss, Peter Kasting, Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
I think these are good points, but regarding redundant includes specifically, I don't think moving the header guards out of line would help.

My recollection from previous conversations about this topic is that major C/C++ compilers are very efficient at checking the kind of predictable header guards we use in Chromium. Any I/O involved is in the noise of a typical compilation.
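
For reference, this is the idiomatic internal guard that compilers already optimize (they remember that re-including the file is a no-op and skip reopening it entirely; sketch with a placeholder name):

// whatever.h
#ifndef WHATEVER_H_
#define WHATEVER_H_
// ...declarations...
#endif  // WHATEVER_H_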

Peter Kasting

Jul 1, 2024, 11:17:47 AM
to Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
On Sun, Jun 30, 2024 at 8:34 PM Yoav Weiss (@Shopify) <yoav...@chromium.org> wrote:
> On Sat, Jun 29, 2024 at 9:54 PM Peter Kasting <pkas...@chromium.org> wrote:
> > Build times held steady for a while because a bunch of people made a focused effort to remove unnecessary #includes and to split up commonly-#included files into smaller pieces. [...]
>
> I suspect that tools to detect unnecessary includes and/or cases where includes could be turned into forward declarations could go a long way toward reducing that non-linearity.

The closest thing to the former is IWYU/clang-include-cleaner. There is some ongoing effort to run the first of those. However, it's unlikely this will have much effect on compile times; the primary practical effect is to aid large-scale refactoring by turning transient #includes into direct ones.

The majority of unnecessary #includes end up transiently #included anyway, from places where they are indeed necessary. It is possible to strip some cruft this way, but experience suggests the available gains here are on the order of a couple percent.

Especially in Blink, there are a small number of headers that are both complex and "transitively included by everything". Aggressively trimming those could help. We also spend a big chunk of CPU time in Python for Mojo; making that faster might help. Making Mojo-generated headers "significantly lighter-weight" somehow might help. I've not looked into any of these to figure out how realistic they are or how much win is available.
 
> Also, in the past, I've seen the following pattern used to avoid including the same header file multiple times:
>
> #ifndef WHATEVER_H_
> #include "whatever.h"
> #endif
>
> That helped avoid I/O operations in cases where multiple included .h files all depend on the same .h file. It might be worthwhile to investigate.

I would be shocked if this had an effect, for the reason kmoon@ mentioned, as well as file-system caching.

> > There are two primary, not-mutually-exclusive paths forward. One is to resurrect the jumbo build from 2017. This concatenates source files into groups so you only pay the #include costs for one group at once, not every file. [...]
>
> AFAICT, that's the approach WebKit took, but it should be noted that jumbo builds are no panacea.
> They optimize the "fresh build"/rebase case, but significantly increase the cost of small, iterative changes.
> Building WebKit from scratch takes ~an hour (compared to 8+ hours for Chromium), but a simple change takes ~90 seconds, compared to a few seconds.

In local testing, I (surprisingly) have not found jumbo builds to have much effect on incremental build times in most configs. Under memory pressure, they performed measurably better or worse than normal builds; when I increased my memory pool they ended up performing about the same as normal builds, for both component and non-component builds. I did less testing here than for full builds, though.

The primary cost of jumbo builds is that by concatenating source files, they make file-scope symbols become in-scope across groups of files (and the group boundaries shift over time). This can lead to surprising errors. From having spent several months fixing such errors, I claim that for non-test code, about 80% of the errors actually point out problematic copy-and-pasting and other patterns that should be refactored instead, and while surprising, these issues are a win for codebase quality. For test code, the errors are mostly an annoyance. The Opera folks had a Clang patch that would basically treat file-scope symbols as "truly file-scope" in jumbo builds, eliminating this downside, and if we do try to resurrect jumbo, I will look into whether LLVM would take it and we could turn it on for test targets, as that's by far the largest "true cost" of jumbo. (At the time, LLVM upstream was skeptical and suggested that "modules should solve this soon". It is now seven years later and we are a long way from having modules in Chromium.)
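
A contrived sketch of that failure mode, with made-up names:

// a.cc
namespace { bool IsValid(int x) { return x > 0; } }

// b.cc
namespace { bool IsValid(int x) { return x >= 0; } }

// jumbo_group.cc (generated): #including a.cc and then b.cc merges both
// unnamed namespaces into a single translation unit, so the second
// IsValid is a redefinition error.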

For incremental builds, the build time story is complex; improving linker speed becomes important, and for both good and bad reasons, the component build config is often dramatically (10x) faster. On the former front, some folks have been testing the "mold" linker (authored by a Xoogler who, among other things, wanted to eliminate the need to have a component build config in Chromium!); it can successfully link chrome on Linux, and shows some meaningful speedups, but less than its author expects, and it's not available for non-Linux hosts. The component build, meanwhile, is faster in part due to two different bugs, one of which lets it incorrectly skip rebuilding certain necessary dependencies, and one of which makes non-component builds incorrectly rebuild certain unnecessary dependencies. It also results in a slew of subtle bugs, gotchas, and limitations. I would love to get to the point where its costs outweigh its benefits and we can remove it.

PK

Junji Watanabe

Jul 2, 2024, 1:23:40 AM
to pkas...@chromium.org, Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
To give a data point, I tried measuring user CPU time grouped by build action type.
(GN args: use_siso=true, target = chrome)

The mojom bindings generator, Blink bindings generation, and TypeScript all take time, but clang C++ compilation is still the dominant part.

[Attached chart: user CPU time by build action type]


K. Moon

Jul 2, 2024, 2:54:04 AM
to jw...@google.com, Peter Kasting, Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
Looking at time/unit, it's clear to me that we need to switch back to C. 😛

This is some really interesting data. The Blink binding generation stands out, for example; that seems painful to regenerate any time it changes, but maybe it doesn't change much?


Junji Watanabe

Jul 2, 2024, 3:09:15 AM
to K. Moon, Peter Kasting, Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Chromium-dev, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org
On Tue, Jul 2, 2024 at 3:51 PM K. Moon <km...@chromium.org> wrote:
> Looking at time/unit, it's clear to me that we need to switch back to C. 😛
>
> This is some really interesting data. The Blink binding generation stands out, for example; that seems painful to regenerate any time it changes, but maybe it doesn't change much?

The slow Blink binding generation does matter because it's on the critical path and a blocker for many other actions.
On the other hand, many C++ compiles can run in parallel, so the "weighted" time (= time / parallelism) is much lower, especially with remote execution where you use thousands of -j.
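
To illustrate with made-up numbers: 60 CPU hours of C++ compiles running at 1,000-way parallelism contribute only ~3.6 minutes of weighted time, while a 10-minute bindings-generation step that everything downstream waits on contributes its full 10 minutes.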

Takuto Ikuta

Jul 2, 2024, 10:08:00 PM
to Chromium-dev, Junji Watanabe, Peter Kasting, Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org, K. Moon

The Clang plugin also makes builds slower: https://crbug.com/333005905#comment13.
It might help if you set clang_use_chrome_plugins=false in args.gn.

Peter Kasting

Jul 3, 2024, 12:32:45 PM
to Takuto Ikuta, Chromium-dev, Junji Watanabe, Yoav Weiss (@Shopify), Shruthi Sreekanta, ales...@alesandroortiz.com, Lei Zhang, David Benjamin, dan...@chromium.org, shr...@chromium.org, K. Moon
On Tue, Jul 2, 2024 at 7:08 PM Takuto Ikuta <tik...@google.com> wrote:
> The Clang plugin also makes builds slower: https://crbug.com/333005905#comment13.
> It might help if you set clang_use_chrome_plugins=false in args.gn.

This is undesirable in most situations, though, as the point of these plugins is to ban various constructs; disabling them locally can lead to unexpected failures after upload, after landing, or (worst case) checking in something we intended to ban.

PK 