The AOSP build being so resource-demanding is frustrating - can we optimize?


asquator

Oct 13, 2025, 2:14:21 PM
to android-...@googlegroups.com
Hello,

I'm new to AOSP and have just completed my first build. It was not easy even on a fairly powerful PC that handles all the [heavy] development tasks I throw at it. For the first time ever I had to create a 32 GB swapfile on top of my 16 GB of RAM so the build process wouldn't crash. And I'm not even talking about the 250 GB+ (!!!) it now takes on my disk. The compilation itself was pretty quick, but the Soong scanning is a real serial RAM killer. Does it load the entire dependency graph into memory? Why else would the consumption be so high?
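
(For anyone hitting the same out-of-memory kills: what I ran was along these lines - the sizes and paths are just what worked for me:)

# create a 32 GB swapfile so the build isn't OOM-killed mid-analysis
sudo fallocate -l 32G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# append to /etc/fstab to survive reboots:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
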
I've seen the minimum resource requirements on the docs page, and they make me sad. Is this a technological limit we've bumped into, one that can't be shifted, or just a lack of optimization? There is no way to tell Soong to consume less memory and cache to disk, other than swap or memory compression. And it's still not clear to me why the build eats so much disk space - there's no way Android itself is that heavy.
I'm probably missing a lot of things as a newcomer, and I'd be happy to get some pointers to relevant sources that explain why the build has to be such a pain (does it?). Are there any plans to optimize it in the future? Again, the very idea of such high requirements just looks wrong to me... I've seen many similar posts in the past, but I don't think any of them got much community attention.

Thank you!

enh

Oct 14, 2025, 2:32:33 PM
to android-...@googlegroups.com
there's probably some reduction in memory usage possible, but likely at the cost of increasing the [already long] build time. so it's just simple economics --- no-one's going to do that work, because it would be useless to the folks doing lots of builds, and a net loss in terms of opportunity cost.

asquator

Oct 17, 2025, 12:13:09 PM
to android-...@googlegroups.com

I fear it's not about opportunity cost, but a lack of optimization. For example, I recently synced my AOSP tree and it took just one minute, meaning there was nothing to do. Then, when I added a *single* binary module and triggered a build, I had to wait *65 minutes* for it to finish. The build process was constantly consuming 12 GB + 32 GB of memory (the latter being swap). Something is definitely wrong here, as incremental builds should be quick. Why is so much memory used when adding just one module? Is the entire source tree re-scanned in memory? What's the chance I'm doing something wrong? All that time the build hung on the lines:

[100% 1/1] bootstrap blueprint
Running globs...

I'm syncing in ASfP (Android Studio for Platform).

This makes developing on commodity machines somewhat impossible, as it can take hours to add anything to the module graph, and the PC becomes unusable in the meantime.

enh

Oct 17, 2025, 12:56:11 PM
to android-...@googlegroups.com
On Fri, Oct 17, 2025 at 12:13 PM 'asquator' via Android Building
<android-...@googlegroups.com> wrote:
>
>
> I fear it's not about opportunity cost,

i fear you need to read https://en.wikipedia.org/wiki/Opportunity_cost :-)

in this specific case: imagine you have 40 hours for fixing bugs this
week. do you spend that on reducing memory usage for the tens of
hobbyists/students who're trying to build an entire OS on a laptop ...
or do you spend that time on fixing something that the billions of
actual users will notice? because you only get to spend those 40 hours
once.

> but a lack of optimization. For example, I recently synced my AOSP tree and it took just one minute, meaning there was nothing to do. Then, when I added a single binary module and triggered a build, I had to wait 65 minutes for it to finish. The build process was constantly consuming 12 GB + 32 GB of memory (the latter being swap). Something is definitely wrong here, as incremental builds should be quick. Why is so much memory used when adding just one module? Is the entire source tree re-scanned in memory? What's the chance I'm doing something wrong? All that time the build hung on the lines:
>
> [100% 1/1] bootstrap blueprint
> Running globs...
>
> I'm syncing in ASfP.
>
> This makes developing on commodity machines somewhat impossible, as it can take hours to add anything to the module graph, and the PC becomes unusable in the meantime.

sure, but it's already "somewhat impossible" because there's an entire operating system's worth of code to build, so anyone trying to actually get anything done is going to get a huge return on investment from buying more ram and more/faster cpu cores. because again, you only get to spend your 40 hours one way, and hours spent "waiting for an entire operating system to build on a laptop" aren't going to help any of those billions of users...

16GiB wasn't enough to build comfortably a decade ago. and while, yes, if you were their boss you could have an engineer try to reduce those requirements, you'd have a hard time justifying to _your_ boss why that engineer wasn't fixing something that would improve the product. especially because 128GiB of ram costs less than that engineer's wages (even if they're on minimum wage!) for one whole 40 hour week.
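
(back of the envelope: at the US federal minimum wage of $7.25/hour, 40 hours is 40 x $7.25 = $290, and 128GiB of commodity ram has generally cost somewhere in that same few-hundred-dollar range.)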

https://en.wikipedia.org/wiki/Opportunity_cost

don't get me wrong: obviously it sucks to be a student, or to be in a country where minimum wage is a lot lower than the local figure i used and where "buy a build machine that meets the suggested specifications on https://source.android.com/docs/setup/start#hardware-requirements" isn't a realistic answer. but that's opportunity cost in action too --- if that's the most valuable bug for you to fix, go for it!

asquator

Oct 17, 2025, 6:13:20 PM
to android-...@googlegroups.com
Thank you, I do know what opportunity cost means; in this case it's technical debt that has resources thrown at it because hardware is (so far) cheaper than developers' time.

I perfectly understand that a laptop will always have a hard time building big projects, but I want to single out AOSP in two respects:
1. It's not just slow builds, but some "peculiarity" in the build system that causes enormous memory allocations. Even when building on powerful cloud machines or corporate PCs loaded with RAM, I'd guess there is a significant slowdown when frequently rebuilding the images or doing "will it compile" checks. For comparison, building the kernel is much faster and can be done with only about 4 GB of memory.
2. Incremental syncs should be fast, but it seems they aren't. On a "student's PC" an incremental build after updating a single Android.bp file takes almost as long as a full build, although it's expected to build only "the things that changed and their dependencies". Here I suspect I'm doing something wrong, because incremental builds should always be faster. I've never encountered this issue with make/cmake projects.

> if that's the most valuable bug for you to fix, go for it!

Before leaping in with fixes, I think it's reasonable to consult the Android devs. Maybe the resource consumption is justified and there's nothing to fix? Or maybe there's already an open issue with people working on it? That's why I opened this topic - to get at least a minimal sense of direction.

To be transparent about my use case: I'm adding a HAL implementation for a custom device, and the initial checks take a lot of time. Maybe there's a way to isolate this development completely from the main tree and load the library files onto an existing image... Again, I had hoped the build system could do that for me, in an incremental fashion.
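
Something along these lines is what I'm hoping for (the module name here is made up, and I'm assuming the usual envsetup.sh/lunch flow):

# build just the HAL module and its dependencies, nothing else
source build/envsetup.sh
lunch <your_target>
m android.hardware.widget-service.custom   # hypothetical module name
# then push only the changed files onto a device already running a full image
adb root && adb remount
adb sync vendor
adb reboot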

Thank you for the guidance anyway.


Dan Willemsen

Oct 17, 2025, 7:00:32 PM
to Android Building
> 2. Incremental syncs should be fast, but it seems they aren't. On a "student's PC" an incremental build after updating a single Android.bp file takes almost as long as a full build, although it's expected to build only "the things that changed and their dependencies". Here I suspect I'm doing something wrong, because incremental builds should always be faster. I've never encountered this issue with make/cmake projects.

When we were running under make in 2015, it would take a couple of minutes to start every single build, and builds have gotten significantly more complex and much larger since then. As we've been rewriting away from make, one of the optimizations we've made has been to split the build into an "analysis" phase and an "execution" phase. The analysis results are persisted to disk, so we only need to rerun analysis when one of its inputs changes (an Android.bp/mk file, a glob in an Android.bp, the build system itself, etc.).

This means that most iterations are fairly fast when you're iterating on your code, only going through the execution phase. But when you do touch an Android.bp it'll take a few minutes (on a sufficient machine) to re-analyze before switching to running the compilers/etc.
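
(If you want to measure how long analysis alone takes on your machine - assuming the usual envsetup.sh/lunch setup - you can build the empty "nothing" target:)

touch some/module/Android.bp   # invalidate the cached analysis
time m nothing                 # re-runs analysis, then has no outputs to build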

> 1. It's not just slow builds, but some "peculiarity" in the build system that causes enormous memory allocations. Even when building on powerful cloud machines or corporate PCs loaded with RAM, I'd guess there is a significant slowdown when frequently rebuilding the images or doing "will it compile" checks. For comparison, building the kernel is much faster and can be done with only about 4 GB of memory.

Yes, iterating on an Android.bp takes several minutes at minimum, which is rather annoying, and there are people working on optimizations in this area, but they're primarily focused on time rather than memory use. Often optimizations to one will help the other, but not always.

> For comparison, building the kernel is much faster and can be done with only about 4 GB of memory.

The Android build is substantially larger and more complicated than the kernel build. I know some groups include a kernel build within their Android build (in AOSP, and more generally at Google, we use prebuilt kernels, though build speed isn't the primary reason for that).

> When I added a single binary module and triggered a build, I had to wait 65 minutes for it to finish. The build process was constantly consuming 12 GB + 32 GB of memory (the latter being swap).

Going into swap, especially that far, is likely increasing your build times substantially. Swap usage is expensive, especially if we're constantly swapping things in and out. Using all of your memory also evicts a lot of the filesystem caches, which makes filesystem operations slower as well. And builds do a lot of filesystem operations (on larger machines, we've often been limited by disk performance over the years).
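
(A quick way to confirm whether this is happening - plain Linux tooling, nothing AOSP-specific - is to watch swap traffic while the build runs:)

vmstat 5   # sustained non-zero si/so columns mean the working set doesn't fit in RAM
# per-process swap usage of the analysis step:
grep VmSwap /proc/"$(pgrep -of soong_build)"/status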

Soong works roughly like this: it loads all of the Android.bp files, then iteratively builds up and mutates graphs from those inputs until all of the different build logic has been applied and the final graph / commands are generated. If the current working set (the graph, etc.) doesn't fit in memory, it's going to be constantly swapping things in and out.

The other half of analysis is Kati reading the makefiles. For many years we've been using the two together, but there's also work being done to get to Soong-only builds, finally moving the rest of the Make logic into Soong. That shrinks the analysis time rather dramatically as well, but likely doesn't reduce peak memory use much (Soong has used more peak memory than Kati most of the time).

- Dan

asquator

Oct 20, 2025, 1:00:25 PM
to android-...@googlegroups.com
> The analysis results are persisted to disk, so we only need to rerun analysis when one of its inputs changes (an Android.bp/mk file, a glob in an Android.bp, the build system itself, etc.).

> This means that most iterations are fairly fast when you're iterating on your code, only going through the execution phase. But when you do touch an Android.bp it'll take a few minutes (on a sufficient machine) to re-analyze before switching to running the compilers/etc.

Yes, I've noticed that, but isn't there a way to make the analysis stage incremental too? If a single module file is modified while everything else is unchanged, we'd want to re-analyze only that module and all of its recursive dependents. Since we already have a cached graph, it seems we could rely on it to find those dependents. Is there something I'm missing that requires rebuilding the graph from scratch?


> When we were running under make in 2015, it would take a couple of minutes to start every single build, and builds have gotten significantly more complex and much larger since then.

Exactly, so being able to load the entire graph into RAM is a bold assumption, isn't it? As the system keeps growing, more modules will be added and the memory footprint will keep increasing, until we need 128 GB for a comfortable build. Every build system strives to avoid touching files that haven't changed; is there an inherent technological limitation in Android that prevents us from doing that?

chris simmonds

Oct 27, 2025, 11:12:01 AM
to android-...@googlegroups.com
Hi

This is a limitation of the Android build system, which as we all know is a mess. There are three stages:

1. Soong parses *all* Android.bp files and creates a ninja manifest file, out/soong/build.ninja. This file is typically 4 to 8 GB.
2. Kati parses all the Android.mk files and other .mk fragments and creates more ninja manifests.
3. Ninja takes the manifest files and the target device name, calculates the dependency tree, and then schedules the jobs that generate the final target.

The problem is that the dependency tree is not known until stage 3. Also, touching any Android.bp file will trigger stage 1.
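
You can see the scale for yourself on any synced tree (default output paths; the exact manifest names vary by product and release):

ls -lh out/soong/build.ninja   # the Soong-generated manifest
ls -lh out/build-*.ninja       # the Kati-generated manifests
du -sh out/                    # total build output on disk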


Could this be fixed? Yes, but only by making large changes throughout the build system. The Android team started down this road by migrating the whole thing to Bazel, but the project was cancelled, presumably because there was no economic reason for them to finish it. Another option, which I have written about in the past, is to migrate to a mature build system such as BitBake: https://www.linkedin.com/pulse/using-yocto-build-aosp-chris-simmonds-ymhof/

So: it's a problem, and there are solutions, but it needs someone with sufficient resources to fix it.

HTH

Chris Simmonds
 
