std::launder is not a change to the language - it is a way to write
code that is correct according to the C++ memory model as it has worked
since it was defined for C++03. The mistake was not adding std::launder
in C++17 - the mistake was not including it in C++03, or not finding a
memory model that avoided the need for such a feature. So it is /not/
the language that has changed - this is a feature that has been needed
(if only in rare situations) since C++03, and that has finally been
added.
Why is std::launder needed? Let's take an example (assuming I have
understood the details correctly):
#include <new>
struct X {
    int a = 1;
    virtual int foo() { return a + 2; }
};
void foof(X& x);
int foobar1() {
    X x;
    int f = x.foo();
    foof(x);
    int g = x.foo();
    return f + g;
}
int foobar2() {
    X x;
    int f = x.foo();
    foof(x);
    X& y = *std::launder(&x);
    int g = y.foo();
    return f + g;
}
When compiling foobar1(), the compiler knows that the function "foof"
cannot replace "x" with a new object that is then accessed through "x" -
that would break the C++ memory model. That means it can be sure that
any "const" fields, including the vtable, are unchanged by "foof".
This lets it generate significantly more efficient code by
devirtualizing and then inlining the call. It can compile foobar1() as
though it were:
int foobar1() {
    X x;
    foof(x);
    return x.a + 5;
}
If foof() used placement new to put a new object in x (such as a type
that inherits from X but is the same size, with a new implementation of
foo), then the programmer would want that new foo() to be called. The
best way to do this would be for "foof" to return the result of the
placement new - which is a pointer to the same memory, but known to
point to a different object. However, that's not always convenient. So
std::launder tells the compiler that the object may have changed. The
compiler thus cannot do the same kind of optimisation. gcc implements
it roughly as though the programmer had written:
int foobar2() {
    X x;
    foof(x);
    auto p = /* x's vtable entry for foo */;
    if (p == &X::foo) {
        return x.a + 5;      // speculated path: X::foo inlined
    } else {
        return x.foo() + 3;  // real virtual call through the new vtable
    }
}
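For concreteness, here is a sketch of what such a foof() might look like
(Y is a hypothetical type of my own invention - a same-size class derived
from X - and the struct definitions are repeated to keep the example
self-contained):

```cpp
#include <new>

struct X {
    int a = 1;
    virtual int foo() { return a + 2; }
};

// Hypothetical derived type: same size as X, different foo().
struct Y : X {
    int foo() override { return a + 10; }
};

void foof(X& x) {
    static_assert(sizeof(Y) == sizeof(X), "Y must fit in X's storage");
    x.~X();        // end the lifetime of the old object
    ::new (&x) Y;  // construct a new object in the same storage
}

int foobar2() {
    X x;
    int f = x.foo();           // 3, via X::foo
    foof(x);                   // x's storage now holds a Y
    X& y = *std::launder(&x);  // needed to reach the new object
    int g = y.foo();           // 11, via Y::foo
    return f + g;
}
```

Without the std::launder line, the compiler would be entitled to
devirtualize y.foo() to X::foo and return the wrong answer.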
Devirtualization, full or partial, is a /big/ optimisation for C++
classes with virtual functions. It only works well because the C++
memory model (from C++03, not C++17) limits what you can do.
std::launder gives you a way to be more flexible in unusual cases.
>
>>>
>>> Yes, but by the same argument should you make the language incrementally
>>> more difficult to use with every new memory feature:
>>
>> A language needs to figure out where it stands on this kind of thing at
>> an early stage, and when stabilising. Then it should stick to the
>> decisions it makes unless there is a very good reason to do otherwise.
>> Once you have a base of existing code in use, it's hard to make changes
>> at this level. If you remove semantics to allow more optimisation, you
>> break code that used to be correct according to the language
>> specification when it was written. If you add semantics and reduce
>> optimisation, code that was fine before now runs less efficiently.
>> Neither is good.
>
> This is why I believe the std::launder change is an example of bad design:
> In the beginning there was type punning, unions, and the "effective
> type" rules of C, i.e. the rules of pointer casts.
> Then C++ addressed this by adding a number of cast operators designed
> for the very purpose of making the semantics of these rules more explicit.
No, it did not. The cast operators make casts clearer and make it more
obvious what that cast does or does not do. But they don't make the
type aliasing rules clearer or more explicit, and they don't change them
significantly compared to C. In particular, if you have a "float*" that
points to a float, there is no cast operator that will turn that into an
"int*" that can be used (in a fully defined manner) to read the float
object's memory as though it were an int.
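The fully defined route is to go through the object representation - for
example with memcpy (or std::bit_cast in C++20). A minimal sketch:

```cpp
#include <cstdint>
#include <cstring>

// Well-defined: copy the float's object representation into an int.
std::uint32_t float_bits(float f) {
    static_assert(sizeof f == sizeof(std::uint32_t), "size mismatch");
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

// Undefined behaviour: no cast operator makes this a valid read of a
// float object through an int lvalue.
// std::uint32_t bad(float f) { return *reinterpret_cast<std::uint32_t*>(&f); }
```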
>
> Baseline is that pointer conversion is strictly coupled with dynamic
> memory allocation - and I think you agree that dynamic memory is a core
> feature of C++. malloc /is/ useful in C++ too;
malloc can be used in C++ in the same way as in C. And as in C, the
memory returned by malloc has (AFAIUI) no type until you use it. So
there is no problem.
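A sketch of the strictly conforming pattern: placement new explicitly
starts the object's lifetime in the malloc'd storage (as I understand it,
C++20's implicit object creation rules also bless a plain assignment
through the cast pointer for a type like int, but the explicit form works
in every standard):

```cpp
#include <cstdlib>
#include <new>

int malloc_demo() {
    void* mem = std::malloc(sizeof(int));
    if (!mem) return -1;
    // Placement new creates an int in the malloc'd storage; the
    // returned pointer points to that object and is safe to use.
    int* p = ::new (mem) int(42);
    int v = *p;
    std::free(mem);
    return v;
}
```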
> And C++ did "figure out where it stands on this kind of thing at an
> early stage".
> The fact is that the C++ committee decided, only 30+ years after this
> early stage, to turn the whole thing upside down. Not smart, IMO.
>
They have clarified it, and added flexibility that was missing. The
language hasn't changed here since C++03.
What /has/ changed, is that compilers have gained optimisations that
take advantage of decisions made 18 years ago (or more), and which
perhaps people have misunderstood in the meantime.
The question of whether compilers should continue to do what some people
thought they were supposed to do, or whether they optimise more based on
what the /standards/ say they should do, is a difficult one. But that
is the question to be asking here - not whether the committee should
have added std::launder or not.
(You can, of course, ask whether std::launder was the best way to
implement the additional flexibility.)
>
>>>
>>>> Casting pointers is /dangerous/. It is lying to the compiler - it is
>>>> saying that an object has one type, but you want to pretend it is a
>>>> different type.
> /Careless/ casting is dangerous. Bjarne knew about it and he addressed
> the problem.
> It's not lying unless you misuse it, and in C++ you need to exercise
> some gymnastics to achieve that.
> On the other hand, if you want to do anything with a piece of memory
> returned by malloc you /need/ to cast the pointer you get (the same is
> substantially true for new char[]).
>
You can cast the return value from malloc() and use it - that is an
entirely reasonable (indeed, essential) use of casting pointer types.
You can't do the same with memory returned by "new char[]".
You can't do it in C either. I'd like a std::launder equivalent in C to
be able to handle this.
(When I say "you can't do this", I mean the standards don't define a
particular behaviour for it. Particular compilers might, perhaps using
extensions, or they might simply give you the code you expect even
though it is not guaranteed by design of the language or specification
of the compiler.)
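For illustration, here is a sketch of the defined way to reuse a char
buffer's storage in C++, and of where std::launder (or the C equivalent
I'd like) earns its keep - when the new object is reached through a
pointer derived from the array rather than from the placement new itself:

```cpp
#include <new>

alignas(int) char buf[sizeof(int)];

int reuse_buffer() {
    ::new (buf) int(7);  // an int now lives in buf's storage
    // buf still names the char array; a plain reinterpret_cast does
    // not yield a pointer to the int object, but std::launder does:
    return *std::launder(reinterpret_cast<int*>(buf));
}
```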
>>>> Many other programming languages don't allow anything
>>>> equivalent to such conversions.
> And these programming languages are far less powerful than C++ (and C),
> to the point that the most popular of them would just not work without C
> or C++, e.g. Java and C#.
>
>>>> However, it can be useful on occasion
>>>> in low-level code, which can usually be left to a few programmers who
>>>> understand the issues. The same applies in C++ - std::launder is
>>>> likely
>>>> to find use in implementing memory pools and specialist allocators, not
>>>> in normal application code.
>
> I'm not that convinced about this ivory tower argument.
> I do get annoyed by broken code when written by some incompetent
> keypusher when I see it, and I dislike when some major software vendor
> advertises their new programming language being "easy to use" as its
> primary selling point, but I don't think that making C++ more convoluted
> or fragmented is a solution to that.
I'm not going to argue that the C++ memory model here is the best
choice, or that the ideal balance has been found between a compiler's
opportunities for optimisation and the programmer's expectation that
code works the way it looks like it works. I'm always happier if
mistakes in the code lead to compile-time failures, or at least compiler
warnings - having to remember subtle things like std::launder in certain
types of low-level code is not great.
But complaints and blame should be appropriate. std::launder was not
added so that previously safe code would now be unsafe without it - it
was added so that previously unsafe code could now be written safely.