"C++20: The Unspoken Features" By Michele Caini

Lynn McGuire
Jun 18, 2020, 10:02:31 PM

"C++20: The Unspoken Features" By Michele Caini

https://humanreadablemag.com/issues/3/articles/cpp20-the-unspoken-features

"C++20 is coming. As of this writing, the new revision of the language
isn't a thing yet, but by the time you are reading this, it may be
published."

"The C++ community is really excited about C++20. It looks like a
groundbreaking update similar to C++11. Some major features are finding
their way in the standard: modules, coroutines, concepts, and ranges.
These are the big four, and almost everybody is giving a talk or a
presentation on these topics."

"But let's be honest, modules will take a while before they are widely
adopted, coroutines aren't something you use daily, and concepts are
mainly dedicated to those who write libraries rather than end-users;
it's unlikely they'll be used extensively in a medium-sized application."

"Of the big four, ranges seem to be the one candidate that most people
will be using daily; but C++20 is not just that. This revision of the
standard also contains many small changes that will help us as developers."

"We'll be taking a look at some of these unspoken features in this article."

Lynn

Öö Tiib
Jun 19, 2020, 4:28:25 AM

Thanks for sharing what people think.

Basically spaceship and designated initialisers are likely useful.
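
A minimal sketch of both, in case someone has not tried them yet
(the struct and its members are invented just for illustration):

#include <compare>

struct point {
    int x;
    int y;
    // A defaulted spaceship gives ==, !=, <, <=, >, >= in one line.
    auto operator<=>(const point&) const = default;
};

int main() {
    // Designated initializers: members must appear in declaration order.
    point a{.x = 1, .y = 2};
    point b{.x = 1, .y = 3};
    return a < b ? 0 : 1;   // uses the synthesized comparison
}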

I would also likely love constexpr vector and string. Since constexpr
is IMHO defective after C++17, implementations perhaps have to add
some non-standard magic (that I would possibly like to use). Also,
standard algorithms becoming constexpr means I can perhaps retire my
constexpr raw arrays and some hand-rolled algorithms that process
them, around 2022 or so. :D
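
On paper that looks roughly like this (a sketch; compiler and library
support was still catching up at the time of writing):

#include <algorithm>
#include <vector>

// C++20: std::vector and many <algorithm> functions are usable in constant
// expressions, as long as no allocation outlives the constant evaluation.
constexpr int sum_of_squares(int n) {
    std::vector<int> v;
    for (int i = 1; i <= n; ++i) v.push_back(i * i);
    std::sort(v.begin(), v.end());   // constexpr since C++20
    int total = 0;
    for (int x : v) total += x;
    return total;                    // the vector is freed inside the evaluation
}

static_assert(sum_of_squares(3) == 14);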

The rest seems to be syntax sugar that I do not need (like variadic-argument
lambda templates) and that can make the language even worse IMHO
(like more ways to add obfuscating garbage into if, for and switch
statements).

Juha Nieminen
Jun 19, 2020, 5:42:08 AM

Öö Tiib <oot...@hot.ee> wrote:
> Basically spaceship and designated initialisers are likely useful.

I just wish you could specify designated initializers in any order
you wanted, just like in C. (There are very good reasons why I would
want this.)

I don't care that initializing members is more complicated in C++,
especially if the initialization has side effects. They could just
state something like "if the designated initializers are not listed
in the same order as the member variables, the order in which they
are evaluated is implementation-defined". I would be fully content
with that.
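
For reference, a small sketch of the restriction (the struct is invented
just for illustration):

struct config { int width = 0; int height = 0; };

int main() {
    config ok{.width = 640, .height = 480};     // OK: declaration order
    // config no{.height = 480, .width = 640};  // ill-formed in C++20,
    //                                          // though C99 accepts this order
    return ok.width;
}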

Daniel P
Jun 19, 2020, 9:23:03 AM

On Friday, June 19, 2020 at 4:28:25 AM UTC-4, Öö Tiib wrote:
> On Friday, 19 June 2020 05:02:31 UTC+3, Lynn McGuire wrote:
> >
> > "The C++ community is really excited about C++20.
>
> Thanks for sharing what people think.
>
I'm trying to picture a C++ programmer being "really excited" over a feature of
the language. I've never seen that.

Daniel

Öö Tiib
Jun 19, 2020, 10:14:36 AM

I think that is an issue for the programmer to solve which is new with C++.
Basically, the optimal order for minimal padding can be different from
the optimal order for initialising the members, and C syntax is not
designed with that issue in mind.
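
A sketch of the kind of conflict I mean (the struct and values are
invented just for illustration):

#include <cstdio>

// Declaration order chosen to minimize padding (8-byte, 4-byte, 1-byte:
// typically 16 bytes instead of 24 the other way around)...
struct node {
    void* payload;
    int   id;
    char  tag;
};

int main() {
    // ...but the designated initializers, and their evaluation, must follow
    // that same declaration order, even if you would rather think of the
    // members in a different order.
    node n{.payload = nullptr, .id = 7, .tag = 'x'};
    std::printf("%zu\n", sizeof(node));   // typically 16 on a 64-bit platform
    return n.id;
}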

Paavo Helde
Jun 19, 2020, 2:09:36 PM

I feel "really excited" over the RAII feature, especially because many
younger languages have (IMO foolishly) failed to adopt the idea. In
addition to making the object lifetime self-contained it also makes
object lifetime explicit in the source code so that any violations can
be detected as compile errors.
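
The classic illustration, as a rough sketch (the wrapper and the file
path are just placeholders for illustration):

#include <cstdio>
#include <exception>
#include <stdexcept>

// RAII wrapper around a C FILE*: the resource is released whenever the
// object goes out of scope, including on early returns and exceptions.
class file {
public:
    explicit file(const char* path) : f_(std::fopen(path, "r")) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~file() { std::fclose(f_); }
    file(const file&) = delete;
    file& operator=(const file&) = delete;
    std::FILE* get() const { return f_; }

private:
    std::FILE* f_;
};

int main() {
    try {
        file f("example.txt");          // acquired here (placeholder path)
        char buf[256];
        if (std::fgets(buf, sizeof buf, f.get()))
            std::printf("%s", buf);
    }                                   // released here, no matter how we leave
    catch (const std::exception& e) {
        std::puts(e.what());
    }
}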

A new language feature which I would be excited over would be similar,
but for multithreading. It would produce a compile-time error for a race
condition (missing MT locking) or for invalid duplicate/recursive
locking. Alexandrescu tried to develop something like that a long time
ago, but this did not quite work out.

As a bonus it could detect at compile time if there is a danger of a
deadlock (yeah, this would be NP-hard I guess, so not much hope in this
regard).

woodb...@gmail.com
Jun 20, 2020, 7:56:56 PM

On Thursday, June 18, 2020 at 9:02:31 PM UTC-5, Lynn McGuire wrote:
> "C++20: The Unspoken Features" By Michele Caini
>
> https://humanreadablemag.com/issues/3/articles/cpp20-the-unspoken-features
>
> "C++20 is coming. As of this writing, the new revision of the language
> isn't a thing yet, but by the time you are reading this, it may be
> published."
>
> "The C++ community is really excited about C++20. It looks like a
> groundbreaking update similar to C++11.

I like span, but have yet to determine how coroutines
would be helpful to me. I'm going to stick with 2017
C++ for my open source code for now.
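
For what it's worth, the appeal of span in a nutshell (a small sketch):

#include <cstdio>
#include <span>
#include <vector>

// One non-template signature that views a C array, a std::vector, etc.,
// without copying the elements.
int sum(std::span<const int> values) {
    int total = 0;
    for (int v : values) total += v;
    return total;
}

int main() {
    int raw[] = {1, 2, 3};
    std::vector<int> vec{4, 5, 6};
    std::printf("%d %d\n", sum(raw), sum(vec));   // prints "6 15"
}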


Brian
Ebenezer Enterprises - Enjoying programming again.
https://webEbenezer.net

Juha Nieminen
Jun 21, 2020, 10:08:49 AM

woodb...@gmail.com wrote:
> I like span, but have yet to determine how coroutines
> would be helpful to me. I'm going to stick with 2017
> C++ for my open source code for now.

Coroutines are a very hard beast to understand, even for very
experienced programmers. How they work, how they are used,
and what they are useful for can all be really obscure (and sometimes
hard to explain).

Here's one practical real-life example of where coroutines can be
useful:

Suppose you have a compressed file format decompressor. Quite often
you don't want to decompress the entire input file at once into RAM,
because the decompressed data can be enormous in size (too large to
even fit in RAM at once), and thus you want to decompress and process
it in chunks.

For example, you might want to decompress and handle 1 MB of (decompressed)
data at a time. You might even want, for efficiency, to have a 1 MB
static C-style array into which you decompress the data to be
handled. In other words, decompress data from the input file into
the 1 MB array, and when the array gets full, handle that data,
and then continue decompressing more data into the same array,
starting from the beginning.

But this poses a problem: Most likely when you are decompressing
the data from the input file, it won't come in nice 1 MB chunks.
At some point something in the compressed input will probably
produce a chunk of data that would overflow the array. It won't
fit in the remaining space in the array.

In other words, you would need to stop decompressing the current
compressed block at the exact moment when the 1 MB array gets full,
call the routine that consumes the array, and then continue from
where you left off, writing the rest of the decompressed block to
the beginning of the array.

In order to be able to do this, you need to store the state of the
decompressor in full. It needs to be able to stop, jump somewhere
else, and then continue exactly from where it stopped.

This is certainly possible with current C++, but it can become
complicated. You need to not only store all the state of the
decompressor inside something (usually a class or struct), but to
also design your decompressor code so that it can continue exactly
from where it stopped (which, depending on the complexity of the
compressed file format, can become quite complicated).

This is where coroutines step in to help, as they automate the
vast majority of that: at any point in your decompressor code you
can "yield", which jumps back to whatever called the decompressor,
and when that's done, it can return to the exact point where you
yielded, and the decompressor can continue as if nothing had happened.
The entire state at that point in the code is restored automatically.

This simplifies this kind of situation *significantly*, requiring the
programmer to do much less work to achieve it. The state is
stored automatically, and the routine can easily continue from wherever
there's a "yield", no matter where it is in the code, or how many
yield points there may be. Even if there are a dozen points in the
decompressor where the target buffer may get full, it doesn't matter.
The next time the execution returns to it, it will continue from that
point.
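
To make that concrete, here is a rough sketch. C++20 provides the
coroutine machinery but no std::generator (that only arrives later), so
the generator type below is hand-rolled, and the "decompressor" is just
a toy run-length decoder standing in for a real codec:

#include <coroutine>
#include <cstddef>
#include <cstdio>
#include <exception>
#include <utility>
#include <vector>

// Minimal generator type: resumes the coroutine on next() and exposes
// the most recently yielded value through current().
template <typename T>
class generator {
public:
    struct promise_type {
        T value{};
        generator get_return_object() {
            return generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T v) { value = std::move(v); return {}; }
        void return_void() {}
        void unhandled_exception() { std::terminate(); }
    };

    explicit generator(std::coroutine_handle<promise_type> h) : handle_(h) {}
    generator(generator&& other) noexcept : handle_(std::exchange(other.handle_, {})) {}
    generator(const generator&) = delete;
    ~generator() { if (handle_) handle_.destroy(); }

    bool next() { handle_.resume(); return !handle_.done(); }
    const T& current() const { return handle_.promise().value; }

private:
    std::coroutine_handle<promise_type> handle_;
};

// Toy "decompressor": expands (byte, count) pairs and yields a chunk every
// time the fixed-size buffer fills, plus one final partial chunk. A real
// codec is far more involved, but the shape of the problem is the same.
generator<std::vector<unsigned char>>
decompress(std::vector<std::pair<unsigned char, std::size_t>> input,
           std::size_t buffer_size) {
    std::vector<unsigned char> buffer;
    buffer.reserve(buffer_size);
    for (auto [b, count] : input) {
        for (std::size_t i = 0; i < count; ++i) {
            buffer.push_back(b);
            if (buffer.size() == buffer_size) {
                co_yield buffer;   // hand the full buffer to the caller...
                buffer.clear();    // ...then keep decoding as if nothing happened
            }
        }
    }
    if (!buffer.empty()) co_yield buffer;   // flush the last partial chunk
}

int main() {
    auto gen = decompress({{'a', 5}, {'b', 7}, {'c', 2}}, 4);
    while (gen.next()) {
        const auto& chunk = gen.current();
        std::printf("chunk of %zu bytes\n", chunk.size());
    }
}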