[llvm-dev] Compiler support in libc++


Louis Dionne via llvm-dev

unread,
Mar 1, 2021, 12:40:53 PM3/1/21
to Libc++ Dev, llvm...@lists.llvm.org

Dear LLVM community,


I’ve wanted to address the topic of which compilers are supported by libc++ for a long time. LLVM documents that it supports GCC >= 5, Clang >= 3.5 and other fairly old compilers. I think this makes a lot of sense for codebases like LLVM and Clang, since it means you can bootstrap a compiler with your system compiler in many cases. It’s also fairly easy to enforce that, since you just have to code in a supported subset of C++.


However, for a library like libc++, things are a bit different. By its very nature, libc++ needs to rely on a recent compiler in order to implement most recent library features. Not being able to rely on a recent compiler leads to problems:

  • Adding new features is significantly more complicated because we need to implement them conditionally on compiler support, not just on support for a C++ Standard. There can also be interactions between what compiler the library is built with and what compiler the headers are used with.

  • We accumulate technical debt around the code base. Some of these #ifdef code paths are not in use anymore; others no longer compile or contain bugs (a sketch of this pattern follows this list).

  • It creates a false sense of support: people think they can use a libc++ built with e.g. Clang 3.5, but in reality doing so is a terrible idea. The library might not contain runtime support for features that will be advertised as available by the headers (the char8_t RTTI and the upcoming support for <format> come to mind). Those are serious ABI issues that you’ll only notice when trying to use the feature.
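To make the first two points concrete, here is a minimal sketch of the conditional-compilation pattern described above. The alias names are hypothetical, invented for illustration; `__cpp_char8_t` is the standard feature-test macro a compiler defines when it supports char8_t:

    // Hypothetical example of a library feature gated on compiler support
    // rather than only on the selected C++ Standard. The fallback branch is
    // exactly the kind of path that silently rots once no tested compiler
    // takes it anymore.
    #if defined(__cpp_char8_t)
    using __libcpp_char8 = char8_t;        // modern compilers
    #else
    using __libcpp_char8 = unsigned char;  // fallback for old compilers
    #endif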


I think it’s important to stress that the current state of things is that we don’t *actually* support much older compilers - the documentation claims we do, but that is misleading. While things may happen to work on older compilers, I wouldn’t recommend relying on that for anything serious, since it’s mostly untested.


Furthermore, the actual value of supporting old compilers isn’t obvious. Indeed, the best way of building libc++ is to bootstrap Clang and then build libc++ with it, which is easily achieved with the LLVM Runtimes build. Of course, we also support different shipping mechanisms (including non-Clang compilers), but in all cases it should be reasonable to expect that someone building libc++ at the tip is able to do so using a recent compiler.
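For illustration, such a bootstrap boils down to something like the following sketch (paths are placeholders; the full Runtimes-build invocation appears later in this thread):

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B "${BUILD_DIR}" -G Ninja \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"
    $ ninja -C "${BUILD_DIR}" cxx   # bootstraps Clang, then builds libc++ with it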


For all these reasons, I think we must adjust the official support policy we currently document. Concretely, the following modified policy solves the issues I mentioned above and makes it so that the stated support reflects the reality of what we truly support:

  • At any given point in time, libc++ supports back to the latest released version of Clang. For example, if the latest major release of Clang is 14, libc++ (on main) supports Clang 14. When Clang 15 is released (and libc++ 15 with it), libc++ (on main) is free to assume Clang 15. As a result, any released libc++ will always support the previously (and the currently) released Clang, with the support window moving as newer Clangs are released. (A sketch of the version guard this enables follows this list.)

  • We support the latest major release of GCC, as advertised on https://gcc.gnu.org/releases.html.

  • We support the latest major release of AppleClang.
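As a sketch of what "assuming" a compiler means in the code, with hypothetical version numbers (`__clang__`, `__clang_major__`, and `__apple_build_version__` are real Clang predefined macros):

    // Hypothetical guard: once libc++ 15 development starts, the headers
    // could reject unsupported Clangs outright instead of carrying
    // conditional workarounds for them. AppleClang is excluded because it
    // uses a different version numbering scheme.
    #if defined(__clang__) && !defined(__apple_build_version__)
    #  if __clang_major__ < 14
    #    error "libc++ requires Clang 14 or newer; see the support policy"
    #  endif
    #endif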


The above policy is reasonable from libc++’s perspective, and it also reflects what we test on a regular basis with the CI. Furthermore, supporting up to the last release instead of requiring a trunk compiler (like MSVC’s STL and libstdc++) gives vendors with alternate delivery vehicles approximately 6 months to update their compiler if they want to jump on the next release of libc++, which I think is an important property to retain.


This message is a heads-up about the current state of things, an explanation of where we (the libc++ contributors) want to end up, and an invitation to have a discussion with the rest of the community.


I propose that we maintain our current level of support for older compilers (i.e. keep things roughly building) until the next LLVM release, after which the above policy would become official and libc++ development would be allowed to assume a compiler as documented above. That would give approximately 6 months (from now to the next release) for people managing build bots to migrate to the Runtimes build, and approximately 6 months (from the next release to the next-next release) for external users to adjust to this policy if needed.


Thanks,

Louis


P.S.: There is no mention of other compilers besides Clang, AppleClang and GCC above. That’s because no other compiler is tested on a regular basis, so the status of support for other compilers is unknown. If you’d like to add official support for a new compiler, I’ll be happy to help you set up the required testing.


Mark de Wever via llvm-dev

unread,
Mar 1, 2021, 12:52:27 PM3/1/21
to Louis Dionne, llvm...@lists.llvm.org, Libc++ Dev
As a libc++ contributor, a +1 from me.

Cheers,
Mark de Wever


Michael Schellenberger Costa via llvm-dev

unread,
Mar 1, 2021, 1:50:31 PM3/1/21
to Louis Dionne, llvm...@lists.llvm.org, Libc++ Dev
As a (rare) STL contributor, I am also strongly in favor of the proposal.

It greatly reduces the maintenance burden for us.

--Michael 


Ben Craig via llvm-dev

unread,
Mar 1, 2021, 2:01:41 PM3/1/21
to Michael Schellenberger Costa, Louis Dionne, llvm...@lists.llvm.org

+1 on the compiler support.

 

I’d love to see a more clearly defined policy for other aspects as well, like supported C libraries and supported OSes.

Mehdi AMINI via llvm-dev

unread,
Mar 1, 2021, 2:40:15 PM3/1/21
to Ben Craig, llvm...@lists.llvm.org, Louis Dionne, Michael Schellenberger Costa
Hi,

What isn't clear to me is the difference between "building libc++" and "using the installed library in client code".
If we ship libc++ on a system, what is the restriction on the system for someone to build a C++ application?

Thanks,

-- 
Mehdi



Louis Dionne via llvm-dev

unread,
Mar 1, 2021, 3:23:55 PM3/1/21
to Mehdi AMINI, llvm...@lists.llvm.org, Michael Schellenberger Costa
On Mon, Mar 1, 2021 at 2:40 PM Mehdi AMINI <joke...@gmail.com> wrote:
Hi,

What isn't clear to me is the difference between "building libc++" and "using the installed library in client code".
If we ship libc++ on a system, what is the restriction on the system for someone to build a C++ application?
The compiler requirements would be the same for building libc++ and for using its headers to build a client application. So basically, you'd be required to use a recent compiler when building an application against recent libc++ headers.

The basic idea is that someone shipping libc++ as part of a toolchain would update Clang at the same time as they update libc++, and any application would be built against a combination of that Clang and the matching libc++. As I said, we'd actually support something more lenient than that, i.e. libc++ would support up to the last stable release of Clang. That way, people who don't ship libc++ as part of an LLVM-based toolchain would have a 6-month grace period to update their compiler at each release of libc++.

Louis

Ben Craig via llvm-dev

unread,
Mar 1, 2021, 3:29:29 PM3/1/21
to Louis Dionne, Mehdi AMINI, llvm...@lists.llvm.org, Michael Schellenberger Costa

Presumably though, someone building against old headers, but running against a new libc++ would still be supported, right?  In other words, you’re still going to maintain binary compatibility?

Joerg Sonnenberger via llvm-dev

unread,
Mar 1, 2021, 3:41:19 PM3/1/21
to llvm...@lists.llvm.org
On Mon, Mar 01, 2021 at 12:40:36PM -0500, Louis Dionne via llvm-dev wrote:
> However, for a library like libc++, things are a bit different.

So how does this avoid the libstdc++ mess where you need to lockstep
the RTL with the compiler and, more importantly, get constantly screwed
over when you need to upgrade or downgrade the compiler in a complex
environment like an actual Operating System?

I consider this proposal a major step backwards...

Joerg

Louis Dionne via llvm-dev

unread,
Mar 1, 2021, 3:44:49 PM3/1/21
to Ben Craig, llvm...@lists.llvm.org, Michael Schellenberger Costa
On Mon, Mar 1, 2021 at 3:29 PM Ben Craig <ben....@ni.com> wrote:

Presumably though, someone building against old headers, but running against a new libc++ would still be supported, right?  In other words, you’re still going to maintain binary compatibility?


Yes, of course. You can always build your application against a version of libc++ and then link/run it against a newer version of the library (.so or .dylib). If you specify the right deployment target, you can also build against a newer version of libc++ (headers and .so/.dylib), and then actually run it against an older dylib provided your application doesn't use symbols that didn't exist in the old dylib you're running against.

Those guarantees don't change.
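As a hedged illustration of the deployment-target side of this on Apple platforms (the flags are real; the version is only an example):

    # Build against current libc++ headers but target an older OS; libc++'s
    # availability annotations then flag uses of symbols that the older
    # system dylib does not provide.
    $ clang++ -std=c++17 -stdlib=libc++ -mmacosx-version-min=10.13 app.cpp -o app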

Louis

Zoe Carver via llvm-dev

unread,
Mar 1, 2021, 8:54:03 PM3/1/21
to Louis Dionne, llvm-dev, Libc++ Dev

As a libc++ contributor, I am strongly in favor of this. I'd like to re-iterate three main points:

  1. Currently, we are telling users that libc++ supports Clang 3.5 (for example) when there is no proof of that. We are basically guessing (actually, we're not even guessing; if I had to guess, I'd say that Clang 3.5 probably won't work).
  2. This will make the QoI way better. Bugs are hidden in macros when we have to support many/old compilers. We can also remove a lot of dead code, which will make it easier to reason about the implementation logic. 
  3. Users of old compilers can download old versions of libc++ (in the uncommon case when this is required) by simply heading to https://releases.llvm.org/download.html.


Thanks for pushing this forward, Louis!


Curdeius Curdeius via llvm-dev

unread,
Mar 2, 2021, 5:50:49 AM3/2/21
to libcx...@lists.llvm.org, llvm...@lists.llvm.org, Louis Dionne
A strong +1 on this proposal.
As a contributor, I see the benefits of removing the technical debt and reducing the maintenance costs we have.

Regards,
Marek

Louis Dionne via llvm-dev

unread,
Mar 2, 2021, 10:10:15 AM3/2/21
to Joerg Sonnenberger, llvm...@lists.llvm.org

> On Mar 1, 2021, at 15:41, Joerg Sonnenberger via llvm-dev <llvm...@lists.llvm.org> wrote:
>
> On Mon, Mar 01, 2021 at 12:40:36PM -0500, Louis Dionne via llvm-dev wrote:
>> However, for a library like libc++, things are a bit different.
>
> So how does this avoid the libstdc++ mess where you need to lockstep
> the RTL with the compiler and, more importantly, get constantly screwed
> over when you need to upgrade or downgrade the compiler in a complex
> environment like an actual Operating System?

Could you please elaborate on what issue you’re thinking about here? As someone who ships libc++ as part of an operating system and SDK (which isn’t necessarily in perfect lockstep with the compiler), I don’t see any issues. The guarantee that you can still use a ~6 months old Clang is specifically intended to allow for that use case, i.e. shipping libc++ as part of an OS instead of a toolchain.


> I consider this proposal a major step backwards...

To be clear, we only want to make official the level of support that we already provide in reality. As I explained in my original email, if you’ve been relying on libc++ working on much older compilers, I would suggest that you stop doing so because nobody is testing that and we don’t really support it, despite what the documentation says. So IMO this can’t be a step backwards, since we already don’t support these compilers, we just pretend that we do.

Louis

Mehdi AMINI via llvm-dev

unread,
Mar 3, 2021, 12:31:30 PM3/3/21
to Louis Dionne, llvm-dev
Hi,

It seems to me that this would require one extra stage of bootstrap in CI for many buildbots.
For example, today I have a Linux bot with a clang-8 host compiler and libstdc++. The goal is to ensure that MLIR (but it is applicable to any project) builds with clang and libc++ at the top of the main branch.
So the setup is:
- stage1: build clang/libc++ with host clang-8/libstdc++
- stage2: build test "anything" using stage1 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

With this proposal, the setup would be:

- stage1: build just clang with host clang-8/libstdc++
- stage2: build clang/libc++ with stage1 clang and host libstdc++
- stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

The only way to avoid adding a stage in the bootstrap is to keep updating the bots with a very recent host clang (I'm not convinced that increasing the cost of maintenance for CI / infra is good in general).

We should aim for a better balance: it is possible that clang-5 is too old (I don't know?), but there are people (like me, and possibly others) who are testing HEAD with an older compiler (clang-8 here), and it does not seem broken at the moment (or in recent years); I feel there should be a strong motivation to break it.
Could we find something more intermediate here? Like time-based support (2 years?), something based on the latest Ubuntu release, or something like that. That would at least keep the cost of upgrading bots a bit more controlled (and avoid a costly extra stage of bootstrap).

Thanks,

-- 
Mehdi


David Blaikie via llvm-dev

unread,
Mar 3, 2021, 1:32:53 PM3/3/21
to Mehdi AMINI, llvm-dev
On Wed, Mar 3, 2021 at 9:31 AM Mehdi AMINI via llvm-dev <llvm...@lists.llvm.org> wrote:
Hi,

It seems to me that this would require one extra stage of bootstrap in CI for many buildbots.
For example, today I have a Linux bot with a clang-8 host compiler and libstdc++. The goal is to ensure that MLIR (but it is applicable to any project) builds with clang and libc++ at the top of the main branch.
So the setup is:
- stage1: build clang/libc++ with host clang-8/libstdc++
- stage2: build test "anything" using stage1 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

With this proposal, the setup would be:

- stage1: build just clang with host clang-8/libstdc++
- stage2: build clang/libc++ with stage1 clang and host libstdc++
- stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage? (though it would bottleneck the usual build a bit - not being able to start the libc++ build until after clang build)

& again, this isn't so much a proposal of change, but one of documenting the current state of things - which reveals the current situations are sort of unsupported? (though it also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.
 

Mehdi AMINI via llvm-dev

unread,
Mar 3, 2021, 2:18:17 PM3/3/21
to David Blaikie, llvm-dev
On Wed, Mar 3, 2021 at 10:32 AM David Blaikie <dbla...@gmail.com> wrote:

Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage? (though it would bottleneck the usual build a bit - not being able to start the libc++ build until after clang build)

That's a good point:
 - stage1: build just clang with host clang-8/libstdc++
- stage1.5: build libc++ with stage1 clang
- stage 2: assemble toolchain with clang from stage1 and libc++ from stage1.5
- stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Since this "stage 2" is the new "stage1", I believe that this should be made completely straightforward to achieve. Ideally it should boil down to a single standard CMake invocation to produce this configuration.
 

& again, this isn't so much a proposal of change, but one of documenting the current state of things - which reveals the current situations are sort of unsupported? (though it also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.

Right: I'm absolutely not convinced by the "we're documenting the current state of things" actually. 
In particular my take in general on what we call "supported" is a policy that "we revert if we break a supported configuration" and "we accept patches to fix a supported configuration". So the change here is that libc++ would not accept to revert when they break an older toolchain, and we wouldn't accept patches to libc++ to fix it.
We don't necessarily have buildbots for every configuration that we claim LLVM is supporting, yet this is the policy, and I'm quite wary of defining the "current state of things" based exclusively on the current public buildbots setup.

Michael Kruse via llvm-dev

unread,
Mar 3, 2021, 8:34:34 PM3/3/21
to Mehdi AMINI, llvm-dev
On Wed, Mar 3, 2021 at 13:18, Mehdi AMINI via llvm-dev
<llvm...@lists.llvm.org> wrote:

> That's a good point:
> - stage1: build just clang with host clang-8/libstdc++
> - stage1.5: build libc++ with stage1 clang
> - stage 2: assemble toolchain with clang from stage1 and libc++ from stage1.5
> - stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Stage 1.5 is exactly what `cmake -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"` should do.

When I last looked at it, it did not work, and I hadn't noticed any work
on that front. However, I just re-tried and it actually does work.
Thanks to anyone who fixed it ;-)
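Spelled out, that setup is roughly the following sketch (generator and build directory are placeholders; the `runtimes` target and the need for `LLVM_ENABLE_PROJECTS=clang` come up later in this thread):

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B build -G Ninja \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"
    $ ninja -C build runtimes   # builds the runtimes with the just-built clang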

Michael

Louis Dionne via llvm-dev

unread,
Mar 3, 2021, 8:52:07 PM3/3/21
to Mehdi AMINI, llvm-dev

On Mar 3, 2021, at 14:17, Mehdi AMINI <joke...@gmail.com> wrote:



On Wed, Mar 3, 2021 at 10:32 AM David Blaikie <dbla...@gmail.com> wrote:

Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage? (though it would bottleneck the usual build a bit - not being able to start the libc++ build until after clang build)

That's a good point:
 - stage1: build just clang with host clang-8/libstdc++
- stage1.5: build libc++ with stage1 clang
- stage 2: assemble toolchain with clang from stage1 and libc++ from stage1.5
- stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Since this "stage 2" is the new "stage1", I believe that this should be made completely straightforward to achieve. Ideally it should boil down to a single standard CMake invocation to produce this configuration.

I think the Runtimes build is exactly what you’re looking for. With the runtimes build, you say:

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B "${BUILD_DIR}” \
        -DLLVM_ENABLE_PROJECTS="clang” \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi” \
        -DLLVM_RUNTIME_TARGETS="x86_64-unknown-linux-gnu”

And then you can just do:

    $ make -C $BUILD_DIR cxx

That will bootstrap Clang and then build libc++ with the just-built Clang. I don’t know whether you consider that to be one or two stages, but it happens automatically in that single CMake invocation. And since building libc++ is basically trivial, this takes approximately the same time as building Clang only.

 

& again, this isn't so much a proposal of change, but one of documenting the current state of things - which reveals the current situations are sort of unsupported? (though it also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.

Right: I'm absolutely not convinced by the "we're documenting the current state of things" actually. 
In particular my take in general on what we call "supported" is a policy that "we revert if we break a supported configuration" and "we accept patches to fix a supported configuration". So the change here is that libc++ would not accept to revert when they break an older toolchain, and we wouldn't accept patches to libc++ to fix it.
We don't necessarily have buildbots for every configuration that we claim LLVM is supporting, yet this is the policy, and I'm quite wary of defining the "current state of things" based exclusively on the current public buildbots setup.

To be clear, what we do today to “fix” older compilers is usually to mark failing tests in the test suite with XFAIL or UNSUPPORTED annotations. We don’t actually provide a good level of support for those compilers. There are also things we simply can’t fix, like the fact that a libc++ built with a compiler that doesn’t know about char8_t (for example) won’t produce the RTTI for char8_t in the dylib, and hence will produce a dylib where some random uses of char8_t will break down. This is just an example, but my point is that it’s far better to clarify the support policy to something that *we know* will work, and that we can commit to supporting. There's a small upfront cost for people running build bots right now, but once things are set up it’ll just be better for everyone.
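To sketch how that char8_t example would surface for a user (an illustrative C++20 snippet; the failure mode rather than the exact diagnostic is the point):

    #include <typeinfo>

    // Compile as C++20. typeid of a fundamental type refers to a
    // std::type_info object that the C++ runtime library is expected to
    // export. A libc++ dylib built by a compiler that predates char8_t
    // never emitted that object, so linking or loading this fails even
    // though the headers accept the code.
    int main() {
        const std::type_info& info = typeid(char8_t);
        return info == typeid(char8_t) ? 0 : 1;
    }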

 
 

The only way to avoid adding a stage in the bootstrap is to keep updating the bots with a very recent host clang (I'm not convinced that increasing the cost of maintenance for CI / infra is good in general).

We should aim for a better balance: it is possible that clang-5 is too old (I don't know?), but there are people (like me, and possibly others) who are testing HEAD with an older compiler (clang-8 here), and it does not seem broken at the moment (or in recent years); I feel there should be a strong motivation to break it.

Libc++ on Clang 8 doesn’t look broken because it builds. And it builds because you’ve been pinging us on Phabricator when we break you with a change, and we add a “workaround” that makes it build. But there’s no guarantee about the “quality” of the libc++ you get in that case. That’s exactly what we want to avoid - you get something that “kinda works”, yet we still have to insert random workarounds in the code. It’s a lose/lose situation.

Could we find something more intermediate here? Like time-based support (2 years?), something based on the latest Ubuntu release, or something like that. That would at least keep the cost of upgrading bots a bit more controlled (and avoid a costly extra stage of bootstrap).

As I said above, I don’t think there’s any extra stage of bootstrap. The only difference is that you build your libc++ using the Clang you just built, instead of against the system compiler. In both cases you need to build both Clang and libc++ anyway.

Furthermore, we specifically support the last released Clang. If you were in a situation where you didn’t want to build Clang but wanted to build libc++, you’d just have to download a sufficiently recent Clang release and use that.
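A hedged sketch of that workflow, assuming the standalone libc++ build and a hypothetical toolchain path:

    # Build libc++ with a downloaded Clang release instead of a bootstrapped one.
    $ cmake -S "${MONOREPO_ROOT}/libcxx" -B build -G Ninja \
        -DCMAKE_C_COMPILER=/opt/clang-12.0.0/bin/clang \
        -DCMAKE_CXX_COMPILER=/opt/clang-12.0.0/bin/clang++
    $ ninja -C build cxx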

Louis

Mehdi AMINI via llvm-dev

unread,
Mar 4, 2021, 12:06:02 AM3/4/21
to Michael Kruse, llvm-dev
On Wed, Mar 3, 2021 at 5:34 PM Michael Kruse <llv...@meinersbur.de> wrote:
On Wed, Mar 3, 2021 at 13:18, Mehdi AMINI via llvm-dev
<llvm...@lists.llvm.org> wrote:
> That's a good point:
>  - stage1: build just clang with host clang-8/libstdc++
> - stage1.5: build libc++ with stage1 clang
> - stage 2: assemble toolchain with clang from stage1 and libc++ from stage1.5
> - stage3: build test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Stage 1.5 is exactly what `cmake -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"` should do.

Fantastic! It looks like the cmake option I was suggesting already exists actually :)

I had to read our CMake scripts to figure out how to use it though; looking at https://llvm.org/docs/BuildingADistribution.html#relevant-cmake-options, it mentions the CMake option, but just running `ninja` locally does not get them built - I had to explicitly call `ninja runtimes` to get libc++ built. I don't know if this is intended?
I also need to explicitly pass `-DLLVM_ENABLE_PROJECTS=clang`, by the way; otherwise `ninja runtimes` will error out obscurely at some point - the cmake handling of this option isn't defensive about it at the moment.

Anyway, I need to update the bot config to see if this "just works", but locally it seems promising!

Thanks Michael!

-- 
Mehdi

Mehdi AMINI via llvm-dev

unread,
Mar 4, 2021, 12:09:56 AM3/4/21
to Louis Dionne, llvm-dev
Yes, thanks, this config is just perfectly fitting here!
 

 

& again, this isn't so much a proposal of change, but one of documenting the current state of things - which reveals the current situations are sort of unsupported? (though it also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.

Right: I'm absolutely not convinced by the "we're documenting the current state of things" actually. 
In particular my take in general on what we call "supported" is a policy that "we revert if we break a supported configuration" and "we accept patches to fix a supported configuration". So the change here is that libc++ would not accept to revert when they break an older toolchain, and we wouldn't accept patches to libc++ to fix it.
We don't necessarily have buildbots for every configuration that we claim LLVM is supporting, yet this is the policy, and I'm quite wary of defining the "current state of things" based exclusively on the current public buildbots setup.

To be clear, what we do today to “fix” older compilers is usually to mark failing tests in the test suite with XFAIL or UNSUPPORTED annotations. We don’t actually provide a good level of support for those compilers. There’s also other things that we simply can’t fix, like the fact that a libc++ built with a compiler that doesn’t know about char8_t (for example) won’t produce the RTTI for char8_t in the dylib, and hence will produce a dylib where some random uses of char8_t will break down. This is just an example, but my point is that it’s far better to clarify the support policy to something that *we know* will work, and that we can commit to supporting. There's a small upfront cost for people running build bots right now, but once things are setup it’ll just be better for everyone.

 
 

The only way to avoid adding a stage in the bootstrap is to keep updating the bots with a very recent host clang (I'm not convinced that increasing the cost of maintenance for CI / infra is good in general).

We should aim for a better balance: it is possible that clang-5 is too old (I don't know?), but there are people (like me, and possibly others) who are testing HEAD with older compiler (clang-8 here) and it does not seem broken at the moment (or the recent years), I feel there should be a strong motivation to break it.

> Libc++ on Clang 8 doesn’t look broken because it builds. And it builds because you’ve been pinging us on Phabricator when we break you with a change, and we add a “workaround” that makes it build. But there’s no guarantee about the “quality” of the libc++ you get in that case. That’s exactly what we want to avoid - you get something that “kinda works”, yet we still have to insert random workarounds in the code. It’s a lose/lose situation.

To be fair, there has been exactly *one* breakage caused by libc++ over the last 2 years: while there may be issues in corner cases like the ones you mention, it seems to work fine for many projects (including the clang/llvm/mlir bootstrap, since this is what I've been testing).

Petr Hosek via llvm-dev

unread,
Mar 4, 2021, 1:19:37 AM3/4/21
to Mehdi AMINI, llvm-dev
On Wed, Mar 3, 2021 at 9:06 PM Mehdi AMINI via llvm-dev <llvm...@lists.llvm.org> wrote:


On Wed, Mar 3, 2021 at 5:34 PM Michael Kruse <llv...@meinersbur.de> wrote:

Stage 1.5 is exactly what `cmake -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi"` should do.

Fantastic! It looks like the cmake option I was suggesting already exists actually :)

I had to read our CMake scripts to figure out how to use it though,

Documentation is definitely something that needs improving since right now it's basically completely absent.
 
looking at https://llvm.org/docs/BuildingADistribution.html#relevant-cmake-options, it mentions the CMake option, but just running `ninja` locally does not get them built - I had to explicitly call `ninja runtimes` to get libc++ built. I don't know if this is intended?

In all our examples, we always use `ninja distribution` and include `runtimes` in `LLVM_DISTRIBUTION_COMPONENTS`; I hadn't thought of also including `runtimes` in the default target, but it makes sense and should be easy to fix.
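For illustration, a distribution configuration along those lines might look like this sketch (the component list is an assumption based on the description above, not a verified configuration):

    $ cmake -S "${MONOREPO_ROOT}/llvm" -B build -G Ninja \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxx;libcxxabi" \
        -DLLVM_DISTRIBUTION_COMPONENTS="clang;runtimes"
    $ ninja -C build distribution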
 
I also need to explicitly pass `-DLLVM_ENABLE_PROJECTS=clang`, by the way; otherwise `ninja runtimes` will error out obscurely at some point - the cmake handling of this option isn't defensive about it at the moment.

That's also something we could improve.
 
Anyway, I need to update the bot config to see if this "just works", but locally it seems promising!

The runtimes build is under active development and we're interested in hearing about issues so please let me know if you run into problems.
 

Louis Dionne via llvm-dev

unread,
Mar 4, 2021, 10:18:40 AM3/4/21
to Mehdi AMINI, llvm-dev

On Mar 4, 2021, at 01:19, Petr Hosek via llvm-dev <llvm...@lists.llvm.org> wrote:

> The runtimes build is under active development and we're interested in hearing about issues, so please let me know if you run into problems.

I will be changing the default way of building libc++ to the runtimes build (in the documentation). I am also in the process of adding a CI job to build and test libc++ with a bootstrapped Clang. Once those configurations are the default ones, we should be able to fix issues more easily as the usage of the runtimes build will increase. I do agree that it’s a bit difficult to find documentation on it right now though :-).

Louis

Christopher Di Bella via llvm-dev

unread,
Mar 4, 2021, 1:07:17 PM3/4/21
to Zoe Carver, llvm-dev, Louis Dionne, Libc++ Dev
Strongly in favour of this :-)

Reid Kleckner via llvm-dev

unread,
Mar 8, 2021, 1:59:17 PM3/8/21
to Louis Dionne, llvm-dev, Libc++ Dev
I think it's reasonable to raise the compiler version floor for libc++, but I think I would like to see a more relaxed policy with respect to clang. Maybe the last two releases of clang, so that a user of ToT libc++ with stable clang doesn't have to rush to upgrade clang as soon as it is released. If you support the last two releases, the user always has six months of lead time before updating, and libc++ never supports a compiler older than a year.

I'll also point out that while I see a lot of support on this thread, it is mostly developer representation, and not much user representation. I have no idea how to effectively survey users of libc++, though.

Lastly, from Chromium's PoV: we have an ancient NaCl toolchain, and we believe we may be using ToT libc++ with it. We have other reasons (C++17 for one) to want to either remove or update this compiler, so please don't consider this a blocker for libc++. I only mention it to show that users do sometimes inadvertently develop dependencies on old compilers.

Roman Lebedev via llvm-dev

unread,
Mar 8, 2021, 2:10:19 PM3/8/21
to Reid Kleckner, llvm-dev, Louis Dionne, Libc++ Dev
On Mon, Mar 8, 2021 at 9:59 PM Reid Kleckner via llvm-dev
<llvm...@lists.llvm.org> wrote:
>
> I think it's reasonable to raise the compiler version floor for libc++, but I think I would like to see a more relaxed policy with respect to clang. Maybe the last two releases of clang, so that a user of ToT libc++ with stable clang doesn't have to rush to upgrade clang as soon as it is released. If you support the last two releases, the user always has six months of lead time before updating, and libc++ never supports a compiler older than a year.
>
> I'll also point out that while I see a lot of support on this thread, it is mostly developer representation, and not much user representation. I have no idea how to effectively survey users of libc++, though.
+1.
From a user's POV, supporting only the last two stable clang releases
is *the smallest reasonable guarantee*.


Roman

Ken Cunningham via llvm-dev

unread,
Mar 8, 2021, 2:30:38 PM3/8/21
to Roman Lebedev, llvm-dev, Louis Dionne, Libc++ Dev
On MacPorts, I maintain the llvm/clang/flang/libc++ ports for darwin systems.

We support thousands of users, all of whom open tickets for any issues, which we resolve as we go along.

The current tip of trunk runs on all darwin systems back to and including 10.6.8.

We bootstrap from system roots to trunk on all of them, using stepping-stone clang versions along the way.

If there is anything you would like to know about this process, or if it raises any questions, please ask.

Best,

Ken

James Y Knight via llvm-dev

unread,
Mar 8, 2021, 2:37:24 PM3/8/21
to Roman Lebedev, llvm-dev, Louis Dionne, Libc++ Dev
"Users" are just going to use a toolchain distribution put together by someone else, right?

I'd expect the person putting together such a toolchain distribution to download a release of llvm, and build all of the components from that same revision. It's historically been annoying to ensure that you actually build the runtime libraries using the just-built clang, when you're building a set of llvm+clang+compiler-rt+libcxxabi+libcxx all together, instead of whatever compiler you had lying around... but once the documentation and process are updated to make the right thing happen in the "obvious" path, ISTM that solves 99% of the problem here.

For developers of libc++ who are making changes against devhead, they may want to avoid rebuilding clang every time they want to test a new revision of libc++. For that, the "last stable" promise seems useful. But that seems like it shouldn't really affect users?

The question is whether there are circumstances in which someone who is putting together a toolchain distribution needs to upgrade to a newer version of libc++, yet remain on an older release of clang (...but only up to 1 year old). If that's what folks are saying is necessary: maybe someone can help explain why? It doesn't seem like it should be needed, to me.

Louis Dionne via llvm-dev

unread,
Mar 16, 2021, 7:29:34 PM3/16/21
to James Y Knight, llvm-dev, Libc++ Dev
Sorry for the late reply, I'm on vacation right now and until the end of March. See my answers below.

On Mon, Mar 8, 2021 at 11:37 AM James Y Knight <jykn...@google.com> wrote:
"Users" are just going to use a toolchain distribution put together by someone else, right?

Yes, that would be my expectation too. I guess that depends how you define users.


I'd expect the person putting together such a toolchain distribution to download a release of llvm, and build all of the components from that same revision. It's historically been annoying to ensure that you actually build the runtime libraries using the just-built clang, when you're building a set of llvm+clang+compiler-rt+libcxxabi+libcxx all together, instead of whatever compiler you had lying around... but once the documentation and process are updated to make the right thing happen in the "obvious" path, ISTM that solves 99% of the problem here.

I agree, I think this solves 99% of problems.
 

For developers of libc++ who are making changes against devhead, they may want to avoid rebuilding clang every time they want to test a new revision of libc++. For that, the "last stable" promise seems useful. But that seems like it shouldn't really affect users?

The question is whether there are circumstances in which someone who is putting together a toolchain distribution needs to upgrade to a newer version of libc++, yet remain on an older release of clang (...but only up to 1 year old). If that's what folks are saying is necessary: maybe someone can help explain why? It doesn't seem like it should be needed, to me.

This is useful if you ship libc++ as part of a different product than the product you ship Clang in, yet both need to interoperate. For example, we do this on Apple platforms by means of shipping Clang in Xcode, but shipping libc++ as part of the operating system and corresponding SDK. In this case, the "last stable release" guarantee means that everything will work as long as Clang keeps getting updated at a reasonable rate.
 

On Mon, Mar 8, 2021 at 2:10 PM Roman Lebedev via libcxx-dev <libcx...@lists.llvm.org> wrote:
On Mon, Mar 8, 2021 at 9:59 PM Reid Kleckner via llvm-dev
<llvm...@lists.llvm.org> wrote:
>
> I think it's reasonable to raise the compiler version floor for libc++, but I think I would like to see a more relaxed policy with respect to clang. Maybe the last two releases of clang, so that a user of ToT libc++ with stable clang doesn't have to rush to upgrade clang as soon as it is released. If you support the last two releases, the user always has six months of lead time before updating, and libc++ never supports a compiler older than a year.

Hmm, yes, I think that would make sense. This would indeed give more leeway to folks testing ToT libc++ with a released Clang, so that they wouldn't need to update the Clang on their CI fleet the very second a new Clang gets released (if they don't, the next libc++ commit could technically break them). I think that makes sense - at least we can start with that and see if we want to make it more stringent in the future.

I'll be creating a Phabricator review to enshrine this updated policy when I come back from vacation, and it will start being enforced at the next release.

Louis
