Looking for Linux projects with sandboxing overhead


Pedro Liberal Fernandez

Oct 18, 2023, 12:31:14 PM
to bazel-discuss
Hello everyone,

I'm looking for Linux projects that show a clear overhead when building with the linux-sandbox.

I have tried a few, but the overhead in those (including TensorFlow) seems small enough that it doesn't look worth it at this point to optimize sandboxing on Linux any further or to add more complexity.

If you are interested in us considering your project, please invoke your build command twice, once with each of the following sets of flags (example invocations below):
1. --spawn_strategy=local
2. --spawn_strategy=sandboxed --reuse_sandbox_directories --experimental_sandbox_async_tree_delete_idle_threads=4

If going from 1 to 2 you see an increase in build time of at least 5%, then we'd consider that significant enough.
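
For instance, something like the following, with //... standing in for your own targets and a clean in between so the two timings are comparable:

  bazel clean
  bazel build --spawn_strategy=local //...
  bazel clean
  bazel build --spawn_strategy=sandboxed --reuse_sandbox_directories --experimental_sandbox_async_tree_delete_idle_threads=4 //...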

Looking forward to any feedback you can provide.

PS: We are looking into macOS sandboxing separately.

Pedro
Bazel Team

Fabian Meumertzheim

Oct 18, 2023, 1:54:59 PM
to Pedro Liberal Fernandez, bazel-discuss
Java projects are currently mostly unaffected by the overhead since the default strategy for Java compilation is an unsandboxed multiplex-worker, but that would change with https://bazel-review.googlesource.com/c/bazel/+/179090. It will be interesting to test some Java projects with a build of Bazel including that PR.
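
To get a rough preview of that today, one could also force sandboxing for Java compilation explicitly, e.g. something like the following (Javac being the mnemonic of the Java compile actions; //... is a placeholder for real targets):

  bazel build --strategy=Javac=sandboxed //...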

Best,
Fabian


David Turner

Oct 18, 2023, 6:08:12 PM
to Pedro Liberal Fernandez, bazel-discuss
The Fuchsia project had a bad time using a vendored Python toolchain, where the `files` attribute of our py_runtime() target listed about 12,000 files (mostly standard library module files). Setting up and tearing down the sandbox every time took between 500ms and 800ms on Linux before running _any_ script in a Bazel action (often far longer than the script's execution time). We found a workaround: we put all library modules in a zip file and use PYTHONPATH to point to it (all wrapped in a custom repository rule), so that our py_runtime() target's `files` attribute now only lists about 3 files. With that, performance is similar to --spawn_strategy=local while keeping proper hermeticity.
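
For anyone curious about the mechanism: Python can import modules directly from a zip archive placed on PYTHONPATH (zipimport), so the trick is roughly the following, with made-up paths:

  # Pack the stdlib .py files at the root of an archive (paths are hypothetical).
  (cd /path/to/python/Lib && zip -qr /tmp/python_stdlib.zip .)
  # The interpreter then loads those modules straight from the zip.
  PYTHONPATH=/tmp/python_stdlib.zip python3 -c "import json; print('imported from zip')"

Our repository rule essentially automates that packing step and points the runtime at the resulting archive.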

We didn't find a similar workaround for our Linux C/C++ sysroot, which contains around 4,000 files (headers plus some link-time libraries), though we try to subset it based on the target CPU architecture. But compilation or linking usually takes noticeably more time than setting up the sandbox, so that cost matters less. Anything that makes this faster would still be appreciated, though.


p...@google.com

Oct 19, 2023, 4:25:29 AM
to bazel-discuss
Thank you for your answers.
 
> Setting up and tearing down the sandbox every time took between 500ms and 800ms on Linux before running _any_ script in a Bazel action (often far longer than the script's execution time). [...]

Was this with --reuse_sandbox_directories? I believe most of the overhead should go away with the two flags I listed. We are struggling to find projects where that is not the case, which is why we were hoping to get some concrete examples with significant sandboxing overhead that we could clone and iterate on.

p...@google.com

Oct 20, 2023, 12:51:47 PM
to bazel-discuss
If you are interested in the topic of sandboxing, please have a look at this document, where I summarize my findings so far and propose a new idea at the end for optimizing performance.

Manuel

Oct 20, 2023, 3:10:41 PM
to Pedro Liberal Fernandez, bazel-discuss
At Booking.com, in our Perl ecosystem, we have cases of runfiles reaching 70,000 entries in total, which makes RBE really painful, and I've noticed sandboxing being one of the reasons.

Similar to Fuchsia, we're working on a way to tar our libraries (composed of multiple files) and our toolchain, and that makes the sandbox setup somewhat manageable.

I don't remember the exact numbers, but our setup time in RBE was on the order of 10 seconds, cleanup was another 7, and the test itself another 10; once we started tarring, setup/cleanup went down to under 2 seconds.

I think the old JavaScript rules would be a good candidate, though.

Unfortunately I can't give you access to it, as it's not available publicly, but after BazelCon I can run with the flags you mention. If you're also going to Munich, I would love to meet and talk about it.

Also, we don't have just a single 70k-runfiles target; we have thousands of them, so the accumulated time is considerable. We tried several options for local execution: sandbox reuse was one that gave us good results, and mounting the sandbox in tmpfs also helped considerably (rough example below).
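
Something along these lines, assuming /dev/shm is a tmpfs on the machine and //... stands in for our real targets:

  bazel build --spawn_strategy=sandboxed --reuse_sandbox_directories --sandbox_base=/dev/shm //...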


Pedro Liberal Fernandez

Oct 23, 2023, 4:10:12 AM
to Manuel, bazel-discuss
> If you're also going to Munich, I would love to meet and talk about it.

I will see you there!