Dear LLVM community,
I’ve wanted to address the topic of which compilers are supported by libc++ for a long time. LLVM documents that it supports GCC >= 5, Clang >= 3.5 and other fairly old compilers. I think this makes a lot of sense for codebases like LLVM and Clang, since it means you can bootstrap a compiler with your system compiler in many cases. It’s also fairly easy to enforce that, since you just have to code in a supported subset of C++.
However, for a library like libc++, things are a bit different. By its very nature, libc++ needs to rely on a recent compiler in order to implement most recent library features. Not being able to rely on a recent compiler leads to problems:
- Adding new features is significantly more complicated, because we need to implement them conditionally on compiler support, not just on support for a C++ Standard. There can also be interactions between what compiler the library is built with and what compiler the headers are used with.

- We accumulate technical debt around the code base. Some of these #ifdef code paths are not in use anymore; others don’t compile anymore or contain bugs.

- It creates a false sense of support: people think they can use a libc++ built with e.g. Clang 3.5, but in reality doing so is a terrible idea. The library might not contain runtime support for features that the headers advertise as available (the char8_t RTTI and the upcoming support for <format> come to mind). Those are serious ABI issues that you’ll only notice when trying to use the feature.
I think it’s important to stress that the current state of things is that we don’t *actually* support much older compilers - the documentation claims we do, but that is misleading. While things may happen to work on older compilers, I wouldn’t recommend relying on that for anything serious, since it’s mostly untested.
Furthermore, the actual value of supporting old compilers isn’t obvious. Indeed, the best way of building libc++ is to bootstrap Clang and then build libc++ with it, which is easily achieved with the LLVM Runtimes build. Of course, we also support different shipping mechanisms (including non-Clang compilers), but in all cases it should be reasonable to expect that someone building libc++ at the tip is able to do so using a recent compiler.
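For reference, a minimal sketch of what such a bootstrapping build can look like with the Runtimes build, from a monorepo checkout (the directory name and the exact set of flags are illustrative):

    # Build clang with the host toolchain; the runtimes are then built
    # with that just-built clang.
    cmake -G Ninja -S llvm -B build \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxxabi;libcxx"
    ninja -C build runtimes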
For all these reasons, I think we must adjust the official support policy we currently document. Concretely, the following modified policy solves the issues I mentioned above and makes it so that the stated support reflects the reality of what we truly support:
- At any given point in time, libc++ supports back to the latest released version of Clang. For example, if the latest major release of Clang is 14, libc++ (on main) supports Clang 14. When Clang 15 is released (and libc++ 15 with it), libc++ (on main) is free to assume Clang 15. As a result, any released libc++ will always support the previously (and the currently) released Clang, with the support window moving as newer Clangs are released.

- We support the latest major release of GCC, as advertised on https://gcc.gnu.org/releases.html.

- We support the latest major release of AppleClang.
The above policy is reasonable from libc++’s perspective, and it also reflects what we test on a regular basis with the CI. Furthermore, supporting up to the last release instead of requiring a trunk compiler (as MSVC’s STL and libstdc++ do) gives vendors with alternate delivery vehicles approximately 6 months to update their compiler if they want to jump on the next release of libc++, which I think is an important property to retain.
This message is at once a heads-up about the current state of things, an explanation of where we (the libc++ contributors) want to end up, and an invitation to have a discussion with the rest of the community.
I propose that we maintain our current level of support for older compilers (i.e. keep things roughly building) until the next LLVM release, after which the above policy would become official and libc++ development would be allowed to assume a compiler as documented above. That would give approximately 6 months (from now to the next release) for people managing build bots to migrate to the Runtimes build, and approximately 6 months (from the next release to the next-next release) for external users to adjust to this policy if needed.
Thanks,
Louis
P.S.: There is no mention of other compilers besides Clang, AppleClang and GCC above. That’s because no other compiler is tested on a regular basis, so the status of support for other compilers is unknown. If you’d like to add official support for a new compiler, I’ll be happy to help you set up the required testing.
+1 on the compiler support.

I’d love to see a more clearly defined policy for other aspects as well, like supported C libraries and supported OSes.

Cheers,
Mark de Wever
Hi,

What isn’t clear to me is the difference between "building libc++" and "using the installed library in client code". If we ship libc++ on a system, what is the restriction on the system for someone to build a C++ application?
Presumably though, someone building against old headers, but running against a new libc++ would still be supported, right? In other words, you’re still going to maintain binary compatibility?
So how does this prevent the libstdc++ mess where you need to lockstep
the RTL with the compiler and, more importantly, get constantly screwed
over when you need to upgrade or downgrade the compiler in a complex
environment like an actual Operating System?
I consider this proposal a major step backwards...
Joerg
As a libc++ contributor, I am strongly in favor of this. I’d like to reiterate three main points:

[...]

Thanks for pushing this forward, Louis!
> On Mar 1, 2021, at 15:41, Joerg Sonnenberger via llvm-dev <llvm...@lists.llvm.org> wrote:
>
> On Mon, Mar 01, 2021 at 12:40:36PM -0500, Louis Dionne via llvm-dev wrote:
>> However, for a library like libc++, things are a bit different.
>
> So how does this prevent the libstdc++ mess that you need to lock step
> the RTL with the compiler and more importantly, get constantly screwed
> over when you need to upgrade or downgrade the compiler in a complex
> environment like an actual Operating System?
Could you please elaborate on what issue you’re thinking about here? As someone who ships libc++ as part of an operating system and SDK (which isn’t necessarily in perfect lockstep with the compiler), I don’t see any issues. The guarantee that you can still use a ~6 months old Clang is specifically intended to allow for that use case, i.e. shipping libc++ as part of an OS instead of a toolchain.
> I consider this proposal a major step backwards...
To be clear, we only want to make official the level of support that we already provide in reality. As I explained in my original email, if you’ve been relying on libc++ working on much older compilers, I would suggest that you stop doing so, because nobody is testing that and we don’t really support it, despite what the documentation says. So IMO this can’t be a step backwards: we already don’t support these compilers, we just pretend that we do.
Louis
Hi,

It seems to me that this would require one extra stage of bootstrap in CI for many buildbots.

For example, today I have a Linux bot with a clang-8 host compiler and libstdc++. The goal is to ensure that MLIR (but it is applicable to any project) builds with clang and libc++ at the top of the main branch. So the setup is:
- stage1: build clang/libc++ with host clang-8/libstdc++
- stage2: build/test "anything" using stage1 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

With this proposal, the setup would be:
- stage1: build just clang with host clang-8/libstdc++
- stage2: build clang/libc++ with stage1 clang and host libstdc++
- stage3: build/test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)
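(For concreteness, a rough sketch of what the current stage1 invocation could look like from a monorepo checkout - the compiler paths and project list are illustrative, not the bot’s actual configuration:)

    # stage1: build clang and libc++ with the host clang-8/libstdc++
    cmake -G Ninja -S llvm -B stage1 \
        -DCMAKE_C_COMPILER=clang-8 \
        -DCMAKE_CXX_COMPILER=clang++-8 \
        -DLLVM_ENABLE_PROJECTS="clang;libcxx;libcxxabi"
    ninja -C stage1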
On Wed, Mar 3, 2021 at 9:31 AM Mehdi AMINI via llvm-dev <llvm...@lists.llvm.org> wrote:

> It seems to me that this would require one extra stage of bootstrap in CI for many buildbots. [...]
Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage? (though it would bottleneck the usual build a bit - not being able to start the libc++ build until after clang build)
& again, this isn't so much a proposal of change as one of documenting the current state of things - which reveals that the current situations are sort of unsupported? (though it also goes against the claim that they're untested) - so I'll be curious to hear from the libc++ folks about this for sure.
On Mar 3, 2021, at 14:17, Mehdi AMINI <joke...@gmail.com> wrote:

> Would it be possible to change the build system so that libc++ can be built like compiler-rt, using the just-built clang? That would then avoid the need for the extra stage?

That's a good point:
- stage1: build just clang with host clang-8/libstdc++
- stage1.5: build libc++ with stage1 clang
- stage2: assemble toolchain with clang from stage1 and libc++ from stage1.5
- stage3: build/test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)

Since this "stage2" is the new "stage1", I believe that this should be made completely straightforward to achieve. Ideally it should boil down to a single standard CMake invocation to produce this configuration.

> & again, this isn't so much a proposal of change as one of documenting the current state of things.

Right: I'm absolutely not convinced by the "we're documenting the current state of things" actually. In particular, my take in general on what we call "supported" is a policy of "we revert if we break a supported configuration" and "we accept patches to fix a supported configuration". So the change here is that libc++ would not revert when they break an older toolchain, and we wouldn't accept patches to libc++ to fix it. We don't necessarily have buildbots for every configuration that we claim LLVM is supporting, yet this is the policy, and I'm quite wary of defining the "current state of things" based exclusively on the current public buildbot setup.

The only way to avoid adding a stage in the bootstrap is to keep updating the bots with a very recent host clang (and I'm not convinced that increasing the cost of maintenance for CI / infra is good in general). We should aim for a better balance: it is possible that clang-5 is too old (I don't know?), but there are people (like me, and possibly others) who are testing HEAD with an older compiler (clang-8 here) and it does not seem broken at the moment (or in recent years); I feel there should be a strong motivation to break it.

Could we find something more intermediate here? Like time-based support (2 years?), or something based on the latest Ubuntu release. That would at least keep the cost of upgrading bots a bit more controlled (and avoid a costly extra stage of bootstrap).
On Wed, Mar 3, 2021 at 1:18 PM Mehdi AMINI via llvm-dev <llvm...@lists.llvm.org> wrote:
> That's a good point:
> - stage1: build just clang with host clang-8/libstdc++
> - stage1.5: build libc++ with stage1 clang
> - stage2: assemble toolchain with clang from stage1 and libc++ from stage1.5
> - stage3: build/test "anything" using stage2 (`ninja check-all` in the monorepo for example, but applicable to any other external project)
Stage 1.5 is exactly what `cmake -DLLVM_ENABLE_RUNTIMES="libcxxabi;libcxx"` should do.

When I last looked at it, it didn't work and I hadn't noticed work on that front. However, I just re-tried and it actually does work. Thanks to anyone who fixed it ;-)

Michael
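(Putting Michael's suggestion together with the `LLVM_ENABLE_PROJECTS` flag that comes up later in the thread, a minimal sketch of that single CMake invocation might look like this - the exact flags are illustrative:)

    # Build clang with the host toolchain, then libc++/libc++abi with the
    # just-built clang (Mehdi's stage1 + stage1.5 in one configuration).
    cmake -G Ninja -S llvm -B build \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxxabi;libcxx"
    ninja -C build runtimes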
> Right: I'm absolutely not convinced by the "we're documenting the current state of things" actually. [...] I'm quite wary of defining the "current state of things" based exclusively on the current public buildbot setup.

To be clear, what we do today to "fix" older compilers is usually to mark failing tests in the test suite with XFAIL or UNSUPPORTED annotations. We don't actually provide a good level of support for those compilers. There are also other things that we simply can't fix, like the fact that a libc++ built with a compiler that doesn't know about char8_t (for example) won't produce the RTTI for char8_t in the dylib, and hence will produce a dylib where some random uses of char8_t will break down. This is just an example, but my point is that it's far better to clarify the support policy to something that *we know* will work, and that we can commit to supporting. There's a small upfront cost for people running build bots right now, but once things are set up it'll just be better for everyone.

> The only way to avoid adding a stage in the bootstrap is to keep updating the bots with a very recent host clang [...] I feel there should be a strong motivation to break it.

Libc++ on Clang 8 doesn't look broken because it builds. And it builds because you've been pinging us on Phabricator when we break you with a change, and we add a "workaround" that makes it build. But there's no guarantee about the "quality" of the libc++ that you get in that case. That's exactly what we want to avoid - you get something that "kinda works", yet we still have to insert random workarounds in the code. It's a lose/lose situation.
On Wed, Mar 3, 2021 at 5:34 PM Michael Kruse <llv...@meinersbur.de> wrote:

> Stage 1.5 is exactly what `cmake -DLLVM_ENABLE_RUNTIMES="libcxxabi;libcxx"` should do.

Fantastic! It looks like the cmake option I was suggesting already exists actually :)

I had to read our CMake scripts to figure out how to use it though: https://llvm.org/docs/BuildingADistribution.html#relevant-cmake-options mentions the CMake option, but just running `ninja` locally does not get the runtimes built; I had to explicitly call `ninja runtimes` to get libc++ to be built. I don't know if this is intended?
I also need to explicitly have `-DLLVM_ENABLE_PROJECTS=clang`, by the way; otherwise `ninja runtimes` will error out obscurely at some point, as the CMake handling of this option isn't defensive about it at the moment.
Anyway, I need to update the bot config to see if this "just works", but locally it seems promising!
Thanks Michael!

--
Mehdi
On Mar 4, 2021, at 01:19, Petr Hosek via llvm-dev <llvm...@lists.llvm.org> wrote:

On Wed, Mar 3, 2021 at 9:06 PM Mehdi AMINI via llvm-dev <llvm...@lists.llvm.org> wrote:

> I had to read our CMake scripts to figure out how to use it though,

Documentation is definitely something that needs improving since right now it's basically completely absent.

> looking at https://llvm.org/docs/BuildingADistribution.html#relevant-cmake-options it mentions the CMake option, but just running `ninja` locally does not get the runtimes built; I had to explicitly call `ninja runtimes` to get libc++ to be built. I don't know if this is intended?

In all our examples, we always use `ninja distribution` and include `runtimes` in `LLVM_DISTRIBUTION_COMPONENTS`, and I hadn't thought of also including `runtimes` in the default target, but it makes sense and should be easy to fix.

> I also need to explicitly have `-DLLVM_ENABLE_PROJECTS=clang`, otherwise `ninja runtimes` will error out obscurely at some point; the CMake handling of this option isn't defensive about it at the moment.

That's also something we could improve.

> Anyway, I need to update the bot config to see if this "just works", but locally it seems promising!

The runtimes build is under active development and we're interested in hearing about issues, so please let me know if you run into problems.
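(For reference, a sketch of the distribution-style setup Petr describes - the component list and install prefix are illustrative:)

    # Configure a distribution that includes clang and the runtimes, then
    # build and install just those components.
    cmake -G Ninja -S llvm -B build \
        -DLLVM_ENABLE_PROJECTS="clang" \
        -DLLVM_ENABLE_RUNTIMES="libcxxabi;libcxx" \
        -DLLVM_DISTRIBUTION_COMPONENTS="clang;runtimes" \
        -DCMAKE_INSTALL_PREFIX=/path/to/install
    ninja -C build distribution
    ninja -C build install-distribution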
"Users" are just going to use a toolchain distribution put together by someone else, right?
I'd expect the person putting together such a toolchain distribution to download a release of llvm, and build all of the components from that same revision. It's historically been annoying to ensure that you actually build the runtime libraries using the just-built clang, when you're building a set of llvm+clang+compiler-rt+libcxxabi+libcxx all together, instead of whatever compiler you had lying around...but once the documentation and process is updated to make the right thing happen in the "obvious" path, ISTM that solves 99% of the problem here.
Developers of libc++ who are making changes against dev head may want to avoid rebuilding clang every time they want to test a new revision of libc++. For that, the "last stable" promise seems useful. But that seems like it shouldn't really affect users?

The question is whether there are circumstances in which someone who is putting together a toolchain distribution needs to upgrade to a newer version of libc++, yet remain on an older release of clang (...but only up to 1 year old). If that's what folks are saying is necessary, maybe someone can help explain why? It doesn't seem like it should be needed, to me.
On Mon, Mar 8, 2021 at 2:10 PM Roman Lebedev via libcxx-dev <libcx...@lists.llvm.org> wrote:

On Mon, Mar 8, 2021 at 9:59 PM Reid Kleckner via llvm-dev <llvm...@lists.llvm.org> wrote:
>
> I think it's reasonable to raise the compiler version floor for libc++, but I think I would like to see a more relaxed policy with respect to clang. Maybe the last two releases of clang, so that a user of ToT libc++ with stable clang doesn't have to rush to upgrade clang as soon as it is released. If you support the last two releases, the user always has six months of lead time before updating, and libc++ never supports a compiler older than a year.