
“Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”


Lynn McGuire

Sep 21, 2022, 3:05:02 PM
“Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/

““It’s time to halt starting any new projects in C/C++ and use Rust for
those scenarios where a non-GC language is required. For the sake of
security and reliability, the industry should declare those languages as
deprecated,” he said on Twitter, expressing a personal opinion rather
than a fresh Microsoft policy.”

Wow ! Bold.

Lynn

Chris M. Thomasson

Sep 21, 2022, 3:28:22 PM
On 9/21/2022 12:04 PM, Lynn McGuire wrote:
> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
>    https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>
> ““It’s time to halt starting any new projects in C/C++ and use Rust for
> those scenarios where a non-GC language is required.

Ohhh boy. Humm...


> For the sake of
> security and reliability, the industry should declare those languages as
> deprecated,” he said on Twitter, expressing a personal opinion rather
> than a fresh Microsoft policy.”
>
> Wow !

Holy... MOLY!

> Bold.

Wow!!

Where a non-GC lang is required? Huh? I personally have some issues with
GC. I don't really like it. Security and reliability? Is it possible to
write buggy code in Rust? I have to admit that I have never used it.

Shit.

Juha Nieminen

Sep 22, 2022, 2:17:10 AM
In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>
> ““It’s time to halt starting any new projects in C/C++ and use Rust for
> those scenarios where a non-GC language is required. For the sake of
> security and reliability, the industry should declare those languages as
> deprecated,” he said on Twitter, expressing a personal opinion rather
> than a fresh Microsoft policy.”
>
> Wow ! Bold.

Is a "Rust religion" forming in the industry?

I remember when Java was supposed to be the Holy Grail and answer to
everything.

(Nowadays I get the feeling that Java survives solely because it's the
main/only programming language available for developing for Android
devices. Haven't exactly researched if it's the only possible, or just
the main one.)

David Brown

Sep 22, 2022, 3:46:07 AM
On 22/09/2022 08:16, Juha Nieminen wrote:
> In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
>> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
>> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>>
>> ““It’s time to halt starting any new projects in C/C++ and use Rust for
>> those scenarios where a non-GC language is required. For the sake of
>> security and reliability, the industry should declare those languages as
>> deprecated,” he said on Twitter, expressing a personal opinion rather
>> than a fresh Microsoft policy.”
>>
>> Wow ! Bold.
>
> Is a "Rust religion" forming in the industry?
>

Rust is, IMHO, far too immature to be the right choice for major
projects. It has some good points, but it needs to stabilise as a
language and get more and better tools before it can be an alternative
to C or C++ for a lot of work.

An alternative future solution is for the C++ folks to copy the useful
parts from Rust. Rust's memory safety (AFAIUI) relies partly on the use
of static analysis tools run on the code. There are a variety of static
and dynamic analysis tools available for C++, but they are often tied
tightly to specific compilers or are expensive third-party tools. If a
common standard could be formed and integrated with the standard library
and with open source tools, then C++ would be as "safe" as Rust for
those who choose to use these tools.

> I remember when Java was supposed to be the Holy Grail and answer to
> everything.
>
> (Nowadays I get the feeling that Java survives solely because it's the
> main/only programming language available for developing for Android
> devices. Haven't exactly researched if it's the only possible, or just
> the main one.)

The preferred language for Android is now Kotlin (which runs on the same
JVM).

Juha Nieminen

Sep 22, 2022, 5:42:03 AM
In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>
> ““It’s time to halt starting any new projects in C/C++ and use Rust for
> those scenarios where a non-GC language is required. For the sake of
> security and reliability, the industry should declare those languages as
> deprecated,” he said on Twitter, expressing a personal opinion rather
> than a fresh Microsoft policy.”

I took a cursory look at Rust, and it really seems to embrace the
"brevity-over-clarity" style of programming.

What I call "brevity-over-clarity style of programming" is something
I have opposed and learned to hate over the years. For some reason it
appears to be what a good majority of beginner programmers naturally
gravitate towards, and some of them never unlearn. It's the general
use of names that are as short as possible, usually with complete
disregard to readability and understandability. (For example,
using the variable name "err" instead of "errorCode". Or "ret"
instead of "returnValue". Or "i" instead of "index". Sometimes the
word may be spelled out rather than contracted, yet in itself be
very nondescript without any further qualifiers. For example a
function named "convert()". Convert what? And into what?)

It appears that functions in Rust are declared with a keyword. What is
this keyword? Perhaps "function"? Of course not. It's "fn". Because
why wouldn't it be? (It couldn't get much shorter than that, unless
you want it to be just "f". Which actually wouldn't even surprise me.)

In order to declare a variable as mutable you have to specify that with
a keyword. And what is this keyword? Perhaps "mutable" (like in C++)?
Of course not. It's "mut", because why wouldn't it be? Who cares about
readability? You are saving typing a whopping 4 characters!

Obviously type names are as short as possible, such as "i32" and "u64".

Rust seems to also support a "using" keyword similar to the one used
in C++ and some other languages. Except that, obviously, "using" is
too long, so it's "use". (At least it is an entire English word.)

The standard library (or at least the little I skimmed over) seems to be
a bit of a mixed bag. Sometimes words might be spelled out in their
entirety, but not even nearly always. (For example, what possible reason
is there to call a function "gen_range" instead of "generate_range"? Is
that honestly too much to type?)

At least their string type is named "String". I wouldn't have been
surprised if it had been "Str". Must have been a lapsus.
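For readers who have not seen Rust, here is a minimal sketch of the keywords being complained about: "fn", "mut", "use", and short type names like "i32". (The function itself is an invented example, not from any real codebase.)

```rust
// Invented example demonstrating Rust's short keywords and type names.
use std::cmp::max; // `use` rather than C++'s `using`

// `fn` rather than `function`; `i32` is a 32-bit signed integer
fn largest_doubled(values: &[i32]) -> i32 {
    // `mut` marks the binding as mutable; bindings are immutable by default
    let mut best: i32 = i32::MIN;
    for &v in values {
        best = max(best, v * 2);
    }
    best
}

fn main() {
    assert_eq!(largest_doubled(&[3, -1, 7]), 14);
}
```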

Jens Stuckelberger

Sep 22, 2022, 10:02:13 AM
A personal opinion is like an asshole: everybody's got one.

Ben Bacarisse

Sep 22, 2022, 11:47:10 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
>> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
>> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>>
>> ““It’s time to halt starting any new projects in C/C++ and use Rust for
>> those scenarios where a non-GC language is required. For the sake of
>> security and reliability, the industry should declare those languages as
>> deprecated,” he said on Twitter, expressing a personal opinion rather
>> than a fresh Microsoft policy.”
>
> I took a cursory look at Rust, and it really seems to embrace the
> "brevity-over-clarity" style of programming.
>
> What I call "brevity-over-clarity style of programming" is something
> I have opposed and learned to hate over the years. For some reason it
> appears to be what a good majority of beginner programmers naturally
> gravitate towards, and some of them never unlearn. It's the general
> use of names that are as short as possible, usually with complete
> disregard to readability and understandability. (For example,
> using the variable name "err" instead of "errorCode". Or "ret"
> instead of "returnValue". Or "i" instead of "index". Sometimes the
> word may be spelled-out rather than contracted, but in itself is
> very non-descript without any further qualifiers. For example a
> function named "convert()". Convert what? And into what?)

I've seen all of these in C, so why single out Rust? Does the language
itself somehow promote this style?

> It appears that functions in Rust are declared with a keyword. What is
> this keyword? Perhaps "function"? Of course not. It's "fn". Because
> why wouldn't it be? (It couldn't get much shorter than that, unless
> you want it to be just "f". Which actually wouldn't even surprise me.)

Of course it could get shorter. In C it's nothing at all and you don't
complain about that degree of brevity!

> In order to declare a variable as mutable you have to specify that with
> a keyword. And what is this keyword? Perhaps "mutable" (like in C++)?
> Of course not. It's "mut", because why wouldn't it be? Who cares about
> readability? You are saving typing a whopping 4 characters!

And C has int, struct, enum, char, const, extern... I think there is a
bit of a double standard here.

--
Ben.

Kenny McCormack

Sep 22, 2022, 12:16:08 PM
In article <871qs3z...@bsb.me.uk>,
Ben Bacarisse <ben.u...@bsb.me.uk> wrote:
...
>> In order to declare a variable as mutable you have to specify that with
>> a keyword. And what is this keyword? Perhaps "mutable" (like in C++)?
>> Of course not. It's "mut", because why wouldn't it be? Who cares about
>> readability? You are saving typing a whopping 4 characters!
>
>And C has int, struct, enum, char, const, extern... I think there is a
>bit of a double standard here.

I think the implication is that, as a new language, Rust could and should
do better. We don't NEED to keep re-implementing the mistakes of the past.

C and C++ are, of course, constrained by those mistakes, because of their
long history, but a new language isn't.

That said, I *would* like to see in what ways the language itself reinforces
these habits.

--
The randomly chosen signature file that would have appeared here is more than 4
lines long. As such, it violates one or more Usenet RFCs. In order to remain
in compliance with said RFCs, the actual sig can be found at the following URL:
http://user.xmission.com/~gazelle/Sigs/Snicker

David Brown

Sep 22, 2022, 2:43:23 PM
Rust is supposed to be /better/ than C. It is supposed to look at how C
has evolved over the decades, and pick a better starting point. So
saying C is as bad, or worse, is not an excuse for poor design choices
in Rust.

I don't know if I'd agree entirely with Juha here, but I'd certainly
agree to some degree. But the point is that /if/ you think it is better
to have clearer names, keywords, and language syntax in general, then
Rust as a modern language should have done better. Whataboutism is not
an acceptable defence, nor is it double standards to use a different
language that is worse.

Rust definitely scores points (IMHO) for having "mut" and "fn", even if
I too would have preferred something a little longer.

Ben Bacarisse

Sep 22, 2022, 3:15:59 PM
Well all I can say is that's not how I read the remarks. I didn't get
any sense of it being a disappointment, just that these things are
worthy of criticism.

Anyway, short keywords are, to my mind, a complete non-issue. I don't
mind them in C and I don't mind them in Rust.

> I don't know if I'd agree entirely with Juha here, but I'd certainly
> agree to some degree. But the point is that /if/ you think it is
> better to have clearer names, keywords, and language syntax in
> general, then Rust as a modern language should have done better.

It's odd, but I'd never go that far. Maybe I am just not assertive
enough for the modern world! If I had this view, I'd express some
disappointment in a wasted opportunity, but I could not bring myself to
say that the designers should have done better.

> Whataboutism is not an acceptable defence,

Sure. I was not defending Rust's choice.

> nor is it double standards
> to use a different language that is worse.

Nor was I suggesting the double standard came from making such a
multi-faceted choice as which language to use. I was remarking on
never having seen this objection raised about C or C++.

--
Ben.

David Brown

Sep 22, 2022, 4:57:06 PM
It's very much a subjective thing. Unless someone can show real,
relevant statistical evidence that one style of keyword length is better
than the other in terms of fewer mistakes in code, then it will always
be a matter of personal preference and familiarity. (Perhaps people
will agree that extremes such as APL and Cobol are not ideal for most
programmers.)

So I am certainly not saying there's anything wrong with thinking Rust's
"mut" and "fn" are fine. I am merely pointing out that it is also fine
to think Rust missed an opportunity here and would have been better if
the names were more descriptive. And it is fine to think that and also
enjoy programming in C and C++ despite their sometimes short (or even
non-existent) keywords.

(As I have not used Rust for more than a brief "hello, world" style
test, it would be unfair for me to express a strong judgement about the
lengths of its keywords or library identifiers. But my knee-jerk
reaction is that I'd prefer them to be longer.)

>
>> I don't know if I'd agree entirely with Juha here, but I'd certainly
>> agree to some degree. But the point is that /if/ you think it is
>> better to have clearer names, keywords, and language syntax in
>> general, then Rust as a modern language should have done better.
>
> It's odd, but I'd never go that far. Maybe I am just not assertive
> enough for the modern world! If I had this view, I'd express some
> disappointment in a wasted opportunity, but I could not bring myself to
> say that the designers should have done better.
>

For those who think the language could easily have been improved with
longer keywords, or in other ways be "better" than C (for whatever
subjective meaning of "better" they choose) it is perhaps wrong to blame
Rust's designers. Clearly they /could/ have done better, but it is a
lot less clear that they /should/ have done better. One could perhaps
blame those promoting Rust as a safer and better alternative to C. Many
people would like to see a language that has the efficiency of C but
lower risks of code errors. The people, projects and companies of
influence who promote a candidate for such a language have a
responsibility to ensure it really fits the bill.

>> Whataboutism is not an acceptable defence,
>
> Sure. I was not defending Rust's choice.
>
>> nor is it double standards
>> to use a different language that is worse.
>
> Nor was I suggesting the double standard came from making such a
> multi-faceted choice as which language to use. I was remarking on
> never having seen this objection raised about C or C++.
>

I interpreted your comment as a more direct response to Juha - accusing
him personally of double standards. But it looks like I was reading too
much into a general comment.

Perhaps people are just so used to C and C++ programming that its
keywords are considered "normal" or "standard", and everything else is
judged in relation to them. "mut" is seen as short because it is shorter
than "mutable", rather than being judged on how well it fits in Rust
programming.

Bart

Sep 22, 2022, 6:16:07 PM
I've done a bit more than you then in porting a 70-line benchmark to it:

https://github.com/sal55/langs/blob/master/fannkuch.txt

(From line 670 and done during a brief period when I had a fully working
implementation.)

But this apparently is not idiomatic Rust. 'Proper' Rust code has most
lines starting with "." (chained method calls I believe, split across
lines) and is generally indecipherable to me. Having short keywords
(they couldn't even stretch 'fn' to 'fun') is the least of it.

(WTH is a borrow checker, and why do I have to concern myself with it
instead of writing my app? Either the language should take care of it, or
give me full control.)

It also seems to be one of the slowest languages to compile on the planet.
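For what it's worth, the borrow checker is the compile-time part of Rust that enforces its ownership rules: a value may have any number of shared (&) references, or exactly one mutable (&mut) reference, but never both at once. A rough sketch of what it permits (the helper functions are invented for illustration):

```rust
// Shared borrow: any number may coexist, none may mutate the data.
fn sum_first_two(scores: &[i32]) -> i32 {
    scores[0] + scores[1]
}

// Mutable borrow: exclusive access for the duration of the call.
// Holding a shared borrow across this call would be a compile error.
fn append(scores: &mut Vec<i32>, v: i32) {
    scores.push(v);
}

fn main() {
    let mut scores = vec![10, 20, 30];
    assert_eq!(sum_first_two(&scores), 30); // shared borrow ends here
    append(&mut scores, 40); // so the mutable borrow is allowed
    assert_eq!(scores.len(), 4);
}
```

These checks happen entirely at compile time, which is why there is no runtime cost, but also why code that a GC language would happily accept can be rejected.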


Ben Bacarisse

Sep 22, 2022, 7:08:40 PM
David Brown <david...@hesbynett.no> writes:

> On 22/09/2022 21:15, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
<cut>
>>> Whataboutism is not an acceptable defence,
>>
>> Sure. I was not defending Rust's choice.
>>
>>> nor is it double standards
>>> to use a different language that is worse.
>>
>> Nor was I suggesting the double standard came from making such a
>> multi-faceted choice as which language to use. I was remarking on
>> never having seen this objection raised about C or C++.
>
> I interpreted your comment as a more direct response to Juha -
> accusing him personally of double standards. But it looks like I was
> reading too much into a general comment.

Kind of. I was not specifically accusing Juha but saying that his
remarks were part of a larger double standard.

--
Ben.

Kaz Kylheku

Sep 22, 2022, 11:17:53 PM
["Followup-To:" header set to comp.lang.c.]
On 2022-09-21, Lynn McGuire <lynnmc...@gmail.com> wrote:
> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>
> ““It’s time to halt starting any new projects in C/C++ and use Rust for
> those scenarios where a non-GC language is required.

Those scenarios are almost never, though.

The niche is small, and nothing needs replacing in it.

> For the sake of
> security and reliability

... write in an easy-to-use GC language that doesn't have a
fit if you get objects in a cycle, and in which any reference
to any object can go anywhere in the program you want
without a hassle.
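(For context: the jab about cycles applies to naive reference counting, which Rust's Rc does use; two Rc values pointing at each other never reach a zero count and therefore leak. The standard workaround is a Weak back-reference, sketched here with an invented parent/child pair:)

```rust
use std::rc::{Rc, Weak};

struct Node {
    value: i32,
    // A Weak back-reference does not keep the parent alive, so no cycle leak.
    parent: Weak<Node>,
}

fn main() {
    let parent = Rc::new(Node { value: 1, parent: Weak::new() });
    let child = Rc::new(Node { value: 2, parent: Rc::downgrade(&parent) });

    // The child can still reach the parent while it exists...
    assert_eq!(child.parent.upgrade().unwrap().value, 1);

    drop(parent);
    // ...but the weak link does not prevent the parent from being freed.
    assert!(child.parent.upgrade().is_none());
    assert_eq!(child.value, 2);
}
```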

--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal

Juha Nieminen

Sep 23, 2022, 3:27:06 AM
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
> Rust is supposed to be /better/ than C. It is supposed to look at how C
> has evolved over the decades, and pick a better starting point. So
> saying C is as bad, or worse, is not an excuse for poor design choices
> in Rust.

That's exactly what I meant. C is something like 45 years old, Rust is
something like 10 years old and is supposed to be a more modern, better
designed language taking advantage of 50+ years of experience of what a
good programming language should be like.

Answering the criticism "this 10-year-old language has this design problem"
with "oh yeah? Well, this 45-year-old language also has the same problem!"
makes no sense. Newer, "better" languages are not supposed to copy the
mistakes of the past. They're supposed to fix or improve on them.

I'm not saying that eg. C++ is significantly better. The
"brevity-over-clarity" mentality can oftentimes be seen throughout
the standard library.

For example, there's literally no reason why it should be called
'shared_ptr' instead of 'shared_pointer'. I can't think of any advantage
in shortening "pointer" like that.

But not to be bested, Rust makes it better: 'Rc'.

Because why not.
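For reference, Rc is Rust's standard-library reference-counted smart pointer, roughly the single-threaded analogue of shared_ptr (the atomically counted variant is called Arc). A tiny sketch of how it is used:

```rust
use std::rc::Rc;

fn main() {
    let shared = Rc::new(String::from("hello"));
    let alias = Rc::clone(&shared); // bumps the reference count, no deep copy
    assert_eq!(Rc::strong_count(&shared), 2);
    assert_eq!(*alias, "hello");
    drop(alias);
    assert_eq!(Rc::strong_count(&shared), 1);
}
```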

Juha Nieminen

Sep 23, 2022, 3:46:10 AM
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
> It's very much a subjective thing. Unless someone can show real,
> relevant statistical evidence that one style of keyword length is better
> than the other in terms of fewer mistakes in code, then it will always
> be a matter of personal preference and familiarity.

It's not so much about making mistakes, but about readability and
understandability of the code (from the perspective of others than the
author).

In general, programmers are blind to the illegibility of their own code.
After all, they are thinking what to write, and writing it, so rather
obviously they understand perfectly what the code is doing. However,
someone else reading the code doesn't know in advance what the code is
doing and thus has to decipher that from reading the source code.

The problem with many programmers is that they get some kind of fuzzy
feeling when they create code that does a lot with as little code as
possible. Advocates of some programming languages (such as Haskell) even
sometimes boast how their pet language can do so much with a one-liner!
Things that require a dozen lines in other languages can be done with
a one-liner in theirs!

As if brevity were somehow a desirable goal.

Of course the other extreme isn't good either: Excessive verbosity can
also make the code less readable and understandable.

In general, the more condensed or the more sparse the code, the less
readable it is. The perfect middle should be the goal.

This is where even the length of keywords steps in: The shorter the
keywords, the more condensed the code becomes. The more conceptual units
are packed into a small space, the more effort it requires the reader to
dig out the meaning. This isn't exactly helped if the code does not
consist of full English words, but some obscure abbreviations.

Imagine if this post consisted of nothing but very short abbreviations
of every word. It would become illegible.

Juha Nieminen

Sep 23, 2022, 3:50:53 AM
In comp.lang.c++ Kaz Kylheku <864-11...@kylheku.com> wrote:
> ["Followup-To:" header set to comp.lang.c.]
> On 2022-09-21, Lynn McGuire <lynnmc...@gmail.com> wrote:
>> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
>> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>>
>> ““It’s time to halt starting any new projects in C/C++ and use Rust for
>> those scenarios where a non-GC language is required.
>
> Those scenarios are almost never, though.
>
> The niche is small, and nothing needs replacing in it.

I wouldn't call embedded programming a "small niche".

(Well, the subset of embedded programming that happens on processors
so small that they can't run Linux, at least.)

Paavo Helde

Sep 23, 2022, 5:14:12 AM
On 23.09.2022 10:45, Juha Nieminen wrote:
>
> Imagine if this post consisted of nothing but very short abbreviations
> of every word. It would become illegible.

In general, the length of a word ought to be inversely correlated with
the frequency of its use. Imagine what English would look like if the
words "is", "it", "me", "you", etc. were all 12 letters long.

Juha Nieminen

Sep 23, 2022, 6:02:17 AM
I think it's more important that names are entire words, not contracted
(with very few exceptions).

Secondly, the names should express as clearly as possible what the thing
is representing. Using an entire English word is not good enough if it
still doesn't express clearly what the thing is representing. In general
one shouldn't be afraid of creating even quite long names that express
clearly what the thing is.

This applies especially if you can't give a rational argument for why
the shorter version has some concrete benefit (other than "it makes code
lines shorter").

For example, I think it's quite hard to argue why "ret" is a better
variable name than "returnValue" (or "return_value", depending on
your coding style/guideline). If you can't give a rational argument
for using the shorter name, just use the longer one.

As another example: "convert(...)" might feel more convenient to
write than something like "convert_to_utf8_from_utf16(...)", but
you have to think about the code from the perspective of someone
who is reading it and doesn't already know what it's doing.
The latter may be significantly longer but it expresses infinitely
more clearly what it's doing (even without seeing what the parameters
are). Someone seeing just "convert(...)" in the code can't have any
idea what it's doing.
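To make the contrast concrete, here is a sketch of the descriptive version. The function name is the hypothetical one from the example above; the body simply wraps the standard library's String::from_utf16 (Rust strings are UTF-8 internally, so the conversion target is implied by the return type):

```rust
// Hypothetical descriptive name; the body is a thin wrapper over the
// standard library. A reader of the call site needs no further context.
fn convert_to_utf8_from_utf16(input: &[u16]) -> Result<String, std::string::FromUtf16Error> {
    String::from_utf16(input)
}

fn main() {
    let utf16: Vec<u16> = "héllo".encode_utf16().collect();
    let back = convert_to_utf8_from_utf16(&utf16).unwrap();
    assert_eq!(back, "héllo");
}
```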

Michael S

Sep 23, 2022, 6:14:41 AM
On Friday, September 23, 2022 at 1:02:17 PM UTC+3, Juha Nieminen wrote:
>
> As another example: "convert(...)" might feel more convenient to
> write than something like "convert_to_utf8_from_utf16(...)", but
> you have to think about the code from the perspective of someone
> who is reading it and doesn't already know what it's doing.
> The latter may be significantly longer but it expresses infinitely
> more clearly what it's doing (even without seeing what the parameters
> are). Someone seeing just "convert(...)" in the code can't have any
> idea what it's doing.

It sounds like an argument against C++-style static polymorphism.
Personally, I have been against static polymorphism since I started to
think independently about that sort of matter (which didn't happen until
I was 35+). But I was not expecting to hear it from an aficionado of
"modern C++". Maturing?

Ben Bacarisse

Sep 23, 2022, 6:15:29 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>> Rust is supposed to be /better/ than C. It is supposed to look at how C
>> has evolved over the decades, and pick a better starting point. So
>> saying C is as bad, or worse, is not an excuse for poor design choices
>> in Rust.
>
> That's exactly what I meant. C is something like 45 years old, Rust is
> something like 10 years old, and supposed to be a more modern and better
> designed language taking advantage of 50+ years of experience on how a
> good programming language should be like.
>
> Answering the criticism "this 10yo language has this design problem" with
> "oh yeah? Well, this 45yo language also has the same problem!" makes
> no sense. Newer "better" language are not supposed to copy the mistakes
> of the past. It's supposed to fix/improve on them.

I took too much from the context. The context was Rust suggested as a
replacement for C (and maybe C++, I don't recall), but your remarks
seemed to be all about things you thought bad about Rust that C also
has. So in that context, all you seemed to be saying is that Rust is no
worse than C. Maybe everyone, even the Rust champions, agrees, but they
will then point to the things that /are/ better in Rust.

--
Ben.

Ben Bacarisse

Sep 23, 2022, 6:58:37 AM
Juha Nieminen <nos...@thanks.invalid> writes:

> As another example: "convert(...)" might feel more convenient to
> write than something like "convert_to_utf8_from_utf16(...)", but
> you have to think about the code from the perspective of someone
> who is reading it and doesn't already know what it's doing.

Is convert(...) a Rust function that converts strings or are you just
giving an example of someone who chose a bad name in a particular
program? (I know about std::convert in Rust, but that does not seem to
be what you are talking about.)

--
Ben.

David Brown

Sep 23, 2022, 7:32:24 AM
On 23/09/2022 09:45, Juha Nieminen wrote:
> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>> It's very much a subjective thing. Unless someone can show real,
>> relevant statistical evidence that one style of keyword length is better
>> than the other in terms of fewer mistakes in code, then it will always
>> be a matter of personal preference and familiarity.
>
> It's not so much about making mistakes, but about readability and
> understandability of the code (from the perspective of others than the
> author).

The prime purpose of making code readable is to avoid mistakes. Either
you see the mistakes in your own code, or someone else sees them.
(Readable code is also easier to use and re-use, and less effort to read
- but I think the avoidance of errors is key in this discussion.
Certainly it is foremost in the minds of almost everyone who promotes
Rust - every pro-Rust article starts by saying how many memory-related
errors in C could have been prevented by using Rust.)

>
> In general, programmers are blind to the illegibility of their own code.
> After all, they are thinking what to write, and writing it, so rather
> obviously they understand perfectly what the code is doing. However,
> someone else reading the code doesn't know in advance what the code is
> doing and thus has to decipher that from reading the source code.
>
> The problem with many programmers is that they get some kind of fuzzy
> feeling when they create code that does a lot with as little code as
> possible. Some programming language (such as Haskell) advocates even
> sometimes boast how their pet language can do so much with a one-liner!
> Things that require a dozen lines in other languages can be done with
> a one-liner in their language!
>
> As if brevity were somehow a desirable goal.
>
> Of course the other extreme isn't good either: Excessive verbosity can
> also make the code less readable and understandable.
>
> In general, the more condensed or the more sparse the code, the less
> readable it is. The perfect middle should be the goal.
>

You should aim to be like a successful psychic - a happy medium :-)

> This is where even the length of keywords steps in: The shorter the
> keywords, the more condensed the code becomes. The more conceptual units
> are packed into a small space, the more effort it requires the reader to
> dig out the meaning. This isn't exactly helped if the code does not
> consist of full English words, but some obscure abbreviations.
>
> Imagine if this post consisted of nothing but very short abbreviations
> of every word. It would become illegible.

I agree that there is a balance to be struck. But I think the "ideal"
balance has a lot of variation, and thus is always going to be somewhat
subjective.

David Brown

Sep 23, 2022, 7:38:05 AM
And it will be a /long/ time before Rust has significant market share in
these devices. Despite wanting the newest microcontrollers, embedded
programmers are a conservative bunch - C90 is, I think, the most popular
choice of language. (Yes, C90 - not C99.) C++ is gaining, assembly is
dwindling (but not gone), and there is a small subset that like Ada,
Forth, MicroPython and a few others.

It may well be that the use of Rust in such systems would be a good idea
- but that does not mean it will happen to any serious extent.

Ralf Goertz

Sep 23, 2022, 7:45:13 AM
On Thu, 22 Sep 2022 23:15:44 +0100, Bart <b...@freeuk.com> wrote:

> I've done a bit more than you then in porting a 70-line benchmark to
> it:
>
> https://github.com/sal55/langs/blob/master/fannkuch.txt

I just implemented the benchmark myself (it should really be called
„Pfannkuchen“) quite straightforwardly (see below). There is a 1995
article about the benchmark where the authors listed execution times
of (among others) C-versions compiled with "gcc -O2" and just "gcc". The
latter needed almost 4 times as long as the former. I wondered how "-O6"
would compare. So I compiled my program with that (using n=12 instead of
10 as in the article). To my big surprise "-O6":

~/c> time ./fannkuch
fannkuch(12)=65 equals 65 from oeis

real 0m36.325s
user 0m36.242s
sys 0m0.045s

was significantly slower than "-O2":

~/c> time ./fannkuch
fannkuch(12)=65 equals 65 from oeis

real 0m30.976s
user 0m30.913s
sys 0m0.021s

What's the reason for that?




#include <iostream>
#include <algorithm>
#include <array>
#include <numeric>

//from oeis.org
int results[]={0, 0, 1, 2, 4, 7, 10, 16, 22, 30, 38, 51, 65, 80, 101,
113, 139, 159, 191, 221};

using namespace std; // I know…

const int n=12;

int flip(array<int, n> t) {
    int res = 0;
    do {
        ++res;
        int top = t[0];
        reverse(t.begin(), t.begin() + top);
    } while (t[0] != 1);
    return res;
}

int main() {
    array<int, n> a;
    iota(a.begin(), a.end(), 1);
    if (n > 1) swap(a[0], a[1]); // don't need the perms with 1 at the beginning
    int m = 0; // max
    do {
        int i = flip(a);
        if (i > m) m = i;
    } while (next_permutation(a.begin(), a.end()));
    cout << "fannkuch(" << n << ")=" << m << " equals " << results[n] << " from oeis" << endl;
    return 0;
}

Bart

unread,
Sep 23, 2022, 8:53:46 AM9/23/22
to
On 23/09/2022 12:44, Ralf Goertz wrote:
> Am Thu, 22 Sep 2022 23:15:44 +0100
> schrieb Bart <b...@freeuk.com>:
>
>> I've done a bit more than you then in porting a 70-line benchmark to
>> it:
>>
>> https://github.com/sal55/langs/blob/master/fannkuch.txt
>
> I just implemented the benchmark myself (it should really be called
> „Pfannkuchen“) quite straightforwardly (see below). There is a 1995
> article about the the benchmark where the authors listed execution times
> of (among others) C-versions compiled with "gcc -O2" and just "gcc". The
> latter needed almost 4 times as long as the former. I wondered how "-O6"
> would compare. So I compiled my program with that (using n=12 instead of
> 10 as in the article). To my big surprise "-O6":

I didn't know it went up to -O6, I thought that -O3 was the highest level.

My own tests shows -O3 was a bit slower than -O2, but -O4 and up were
about the same as -O3.

Apparently there's no upper limit to optimisation level, as -O1000000
also works (with the same result as -O3), as does -O1000000000000.

So the real mystery is what the deal is with those silly optimisation
numbers.

> I just implemented the benchmark myself (it should really be called

(Actually the reason for all those different (p)fannkuch versions was to
use it as the basis of a bigger benchmark for comparing compilation
speeds for large input files:

https://github.com/sal55/langs/blob/master/Compilertest3.md)

David Brown

unread,
Sep 23, 2022, 9:31:41 AM9/23/22
to
On 23/09/2022 14:53, Bart wrote:
> On 23/09/2022 12:44, Ralf Goertz wrote:
>> Am Thu, 22 Sep 2022 23:15:44 +0100
>> schrieb Bart <b...@freeuk.com>:
>>
>>> I've done a bit more than you then in porting a 70-line benchmark to
>>> it:
>>>
>>> https://github.com/sal55/langs/blob/master/fannkuch.txt
>>
>> I just implemented the benchmark myself (it should really be called
>> „Pfannkuchen“) quite straightforwardly (see below). There is a 1995
>> article about the the benchmark where the authors listed execution times
>> of (among others) C-versions compiled with "gcc -O2" and just "gcc". The
>> latter needed almost 4 times as long as the former. I wondered how "-O6"
>> would compare. So I compiled my program with that (using n=12 instead of
>> 10 as in the article). To my big surprise "-O6":
>
> I didn't know it went up to -O6, I thought that -O3 was the highest level.
>

It is. But gcc (in common with many compilers) accepts higher numbers
and treats it all as highest level.

> My own tests shows -O3 was a bit slower than -O2, but -O4 and up were
> about the same as -O3.
>
> Apparently there's no upper limit to optimisation level, as -O1000000
> also works (with the same result as -O3), as does -O1000000000000.
>
> So the real mystery is what the deal is with those silly optimisation
> numbers.
>

At a guess, someone (long, long ago) thought it was a convenient
compatibility with some other compiler that differentiated higher
numbers - thus people could move from "some_other_compiler -O6" to "gcc
-O6" without changing command line options. (Many command line options
in gcc are compatible with other *nix style compilers, new and old.)

David Brown

unread,
Sep 23, 2022, 9:53:53 AM9/23/22
to
On 23/09/2022 13:44, Ralf Goertz wrote:
> Am Thu, 22 Sep 2022 23:15:44 +0100
> schrieb Bart <b...@freeuk.com>:
>
>> I've done a bit more than you then in porting a 70-line benchmark to
>> it:
>>
>> https://github.com/sal55/langs/blob/master/fannkuch.txt
>
> I just implemented the benchmark myself (it should really be called
> „Pfannkuchen“) quite straightforwardly (see below). There is a 1995
> article about the the benchmark where the authors listed execution times
> of (among others) C-versions compiled with "gcc -O2" and just "gcc". The
> latter needed almost 4 times as long as the former. I wondered how "-O6"
> would compare. So I compiled my program with that (using n=12 instead of
> 10 as in the article). To my big surprise "-O6":
>

Anything higher than -O3 is the same as -O3 in gcc.

> ~/c> time ./fannkuch
> fannkuch(12)=65 equals 65 from oeis
>
> real 0m36.325s
> user 0m36.242s
> sys 0m0.045s
>
> was significantly slower than "-O2):
>
> ~/c> time ./fannkuch
> fannkuch(12)=65 equals 65 from oeis
>
> real 0m30.976s
> user 0m30.913s
> sys 0m0.021s
>
> What's the reason for that?
>

gcc -O3 uses a lot of effort trying to get the last few fractions of a
percent out of the code - it is rarely used, as it is rarely worth the
effort. And sometimes it backfires. Typically, this is caused by
enthusiastic loop unrolling or inlining that gives code that is
/sometimes/ faster, but also sometimes slower due to more misses for
instruction caches, return address buffers, branch prediction tables,
etc. Trying to squeeze the last drops of speed out of a compiler is an
art - you should expect to work hard at benchmarking, profiling, and
testing different compiler options in different parts of the code.
Compilers are not omniscient, and don't know everything about your code
and how it is used, your processor, or what else might be running on the
same system.

So generally, gcc -O2 is the normal choice until you are ready to work hard.

There are, however, a few flags that can make a very significant
difference, depending on source code.

"-march" tells the compiler details of the processor you are using,
rather than picking the lowest common denominator for the processor
family. "-march=native" tells it that the target is the same as the
compiler is running on. The "-march" flag gives the compiler access to
more SIMD and advanced instructions, as well as a more accurate model
for scheduling, pipelining, and other processor-specific fine tuning
options.

"-ffast-math" enables a lot of floating point optimisations that are not
allowed by strict IEEE rules, but are often fine for real-world floating
point code. If your source has a lot of floating point calculations and
you are okay with the kinds of re-arrangements enabled by this flag, it
can speed up the floating point code significantly.

Ralf Goertz

unread,
Sep 23, 2022, 10:21:57 AM9/23/22
to
On Fri, 23 Sep 2022 15:53:36 +0200,
David Brown <david...@hesbynett.no> wrote:

> On 23/09/2022 13:44, Ralf Goertz wrote:
> > Am Thu, 22 Sep 2022 23:15:44 +0100
> > schrieb Bart <b...@freeuk.com>:
> >
> >> I've done a bit more than you then in porting a 70-line benchmark
> >> to it:
> >>
> >> https://github.com/sal55/langs/blob/master/fannkuch.txt
> >
> > I just implemented the benchmark myself (it should really be called
> > „Pfannkuchen“) quite straightforwardly (see below). There is a 1995
> > article about the the benchmark where the authors listed execution
> > times of (among others) C-versions compiled with "gcc -O2" and just
> > "gcc". The latter needed almost 4 times as long as the former. I
> > wondered how "-O6" would compare. So I compiled my program with
> > that (using n=12 instead of 10 as in the article). To my big
> > surprise "-O6":
>
> Anything higher than -O3 is the same as -O3 in gcc.

Oh okay. I don't know where I got the idea that 6 was the maximum.

> > ~/c> time ./fannkuch
> > fannkuch(12)=65 equals 65 from oeis
> >
> > real 0m36.325s
> > user 0m36.242s
> > sys 0m0.045s
> >
> > was significantly slower than "-O2):
> >
> > ~/c> time ./fannkuch
> > fannkuch(12)=65 equals 65 from oeis
> >
> > real 0m30.976s
> > user 0m30.913s
> > sys 0m0.021s
> >
> > What's the reason for that?
> >
>
> gcc -O3 uses a lot effort trying to get the last few fractions of a
> percent out of the code - it is rarely used, as it is rarely worth the
> effort. And sometimes it backfires.

As it obviously does with my program. Then again, it still surprises me:
it's 17% slower! I guess 99% of the run time consists of swapping elements
in a std::array<int, 12> - at least in the part I have written, but also
in next_permutation(), if I remember the algorithm in TAOCP correctly.

> Typically, this is caused by enthusiastic loop unrolling or inlining
> that gives code that is /sometimes/ faster, but also sometimes slower
> due to more misses for instruction caches, return address buffers,
> branch prediction tables, etc. Trying to squeeze the last drops of
> speed out of a compiler is an art - you should expect to work hard at
> benchmarking, profiling, and testing different compiler options in
> different parts of the code. Compilers are not omniscient, and don't
> know everything about your code and how it is used, your processor, or
> what else might be running on the same system.

> So generally, gcc -O2 is the normal choice until you are ready to work
> hard.
>
> There are, however, a few flags that can make a very significant
> difference, depending on source code.
>
> "-march" tells the compiler details of the processor you are using,
> rather than picking the lowest common denominator for the processor
> family. "-march=native" tells it that the target is the same as the
> compiler is running on. The "-march" flag gives the compiler access
> to more SIMD and advanced instructions, as well as a more accurate
> model for scheduling, pipelining, and other processor-specific fine
> tuning options.

-march=native makes "-O2" a little bit slower and "-O3" a little bit
faster but the former is still 11% faster.

Thanks for your explanations.

Opus

unread,
Sep 23, 2022, 12:32:32 PM9/23/22
to
On 21/09/2022 at 21:04, Lynn McGuire wrote:
> “Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated”
>    https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>
> ““It’s time to halt starting any new projects in C/C++ and use Rust for
> those scenarios where a non-GC language is required. For the sake of
> security and reliability, the industry should declare those languages as
> deprecated,” he said on Twitter, expressing a personal opinion rather
> than a fresh Microsoft policy.”
>
> Wow !  Bold.

It's not bold. It's just stupid.



Keith Thompson

unread,
Sep 23, 2022, 3:36:00 PM9/23/22
to
"Rc" stands for "Reference Counted Smart Pointer". It's not just a
random 2-letter sequence.

--
Keith Thompson (The_Other_Keith) Keith.S.T...@gmail.com
Working, but not speaking, for Philips
void Void(void) { Void(); } /* The recursive call of the void */

Chris M. Thomasson

unread,
Sep 23, 2022, 3:44:58 PM9/23/22
to
Humm, Rc, I thought it might have been a "Reference Collected" attribute
applied to an object's declaration.

red floyd

unread,
Sep 23, 2022, 5:43:26 PM9/23/22
to
On 9/23/2022 12:45 AM, Juha Nieminen wrote:

>
> In general, the more condensed or the more sparse the code, the less
> readable it is. The perfect middle should be the goal.

Or, as Einstein once put it, "Things should be as simple as possible,
but no simpler".



Bonita Montero

unread,
Sep 24, 2022, 7:25:12 AM9/24/22
to
On 22.09.2022 at 11:41, Juha Nieminen wrote:
> In comp.lang.c++ Lynn McGuire <lynnmc...@gmail.com> wrote:
>> ???Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated???
>> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>>
>> ??????It???s time to halt starting any new projects in C/C++ and use Rust for
>> those scenarios where a non-GC language is required. For the sake of
>> security and reliability, the industry should declare those languages as
>> deprecated,??? he said on Twitter, expressing a personal opinion rather
>> than a fresh Microsoft policy.???
>
> I took a cursory look at Rust, and it really seems to embrace the
> "brevity-over-clarity" style of programming.
>
> What I call "brevity-over-clarity style of programming" is something
> I have opposed and learned to hate over the years. For some reason it
> appears to be what a good majority of beginner programmers naturally
> gravitate towards, and some of them never unlearn. It's the general
> use of names that are as short as possible, usually with complete
> disregard to readability and understandability. (For example,
> using the variable name "err" instead of "errorCode". Or "ret"
> instead of "returnValue". Or "i" instead of "index". Sometimes the
> word may be spelled-out rather than contracted, but in itself is
> very non-descript without any further qualifiers. For example a
> function named "convert()". Convert what? And into what?)
>
> It appears that functions in Rust are declared with a keyword. What is
> this keyword? Perhaps "function"? Of course not. It's "fn". Because
> why wouldn't it be? (It couldn't get much shorter than that, unless
> you want it to be just "f". Which actually wouldn't even surprise me.)
>
> In order to declare a varible as mutable you have to specify that with
> a keyword. And what is this keyword? Perhaps "mutable" (like in C++)?
> Of course not. It's "mut", because why wouldn't it be? Who cares about
> readability? You are saving typing a whopping 4 characters!
>
> Obviously type names are as short as possible, such as "i32" and "u64".
>
> Rust seems to also support a "using" keyword similar to the one used
> in C++ and some other languages. Except that, obviously, "using" is
> too long, so it's "use". (At least it is an entire English word.)
>
> The standard library (or at least the little I skimmed over) seems to be
> a bit of a mixed bag. Sometimes words might be spelled out in their
> entirety, but not even nearly always. (For example, what possible reason
> is there to call a function "gen_range" instead of "generate_range"? Is
> that honestly too much to type?)
>
> At least their string type is named "String". I wouldn't have been
> surprised if it had been "Str". Must have been a lapsus.

If those are your arguments against a language - don't program at all.


David Brown

unread,
Sep 24, 2022, 11:05:31 AM9/24/22
to
On 23/09/2022 16:21, Ralf Goertz wrote:
> Am Fri, 23 Sep 2022 15:53:36 +0200
> schrieb David Brown <david...@hesbynett.no>:
>
>> On 23/09/2022 13:44, Ralf Goertz wrote:
>>> Am Thu, 22 Sep 2022 23:15:44 +0100
>>> schrieb Bart <b...@freeuk.com>:
>>>
>>>> I've done a bit more than you then in porting a 70-line benchmark
>>>> to it:
>>>>
>>>> https://github.com/sal55/langs/blob/master/fannkuch.txt
>>>
>>> I just implemented the benchmark myself (it should really be called
>>> „Pfannkuchen“) quite straightforwardly (see below). There is a 1995
>>> article about the the benchmark where the authors listed execution
>>> times of (among others) C-versions compiled with "gcc -O2" and just
>>> "gcc". The latter needed almost 4 times as long as the former. I
>>> wondered how "-O6" would compare. So I compiled my program with
>>> that (using n=12 instead of 10 as in the article). To my big
>>> surprise "-O6":
>>
>> Anything higher than -O3 is the same as -O3 in gcc.
>
> Oh okay. I don't know where I got the idea that 6 was the maximum.

I've seen other compilers that go up to -O6. gcc has taken the path of
having explicit control flags for different optimisations, for those
that want more control, rather than having many levels. After all, as
you have seen, a higher numerical level does not always relate well to
higher speeds.

Interesting. "-march=native" usually makes things faster (or smaller) -
rarely slower. Maybe your processor is newer than your compiler, so it
doesn't really know about the details. Or maybe you were just unlucky.

If you see particular slowdown effects and poor code, and it is the same
even with relatively current gcc versions, you can always report it.
Missed or poor optimisation issues are not often the highest prioritised
issues for gcc development (unlike incorrect code bugs), but the
developers like to have them in the bugzilla list. It makes it easier
to see when an issue is affecting many people. (If you want to look at
the generated code with different gcc versions and different compiler
options, <https://godbolt.org> is your website of choice.)

> Thanks for your explanations.
>

No problem.

red floyd

unread,
Sep 24, 2022, 3:33:20 PM9/24/22
to
On 9/24/2022 8:05 AM, David Brown wrote:

>
> I've seen other compilers that go up to -O6.  gcc has taken the path of
> having explicit control flags for different optimisations, for those
> that want more control, rather than having many levels.  After all, as
> you have seen, a higher numerical level does not always relate well to
> higher speeds.
>

I understand that the Spinal Tap compiler, written by Nigel Tufnel, goes
up to 11.



Mut...@dastardlyhq.com

unread,
Sep 25, 2022, 4:45:31 AM9/25/22
to
+1

Though I doubt many on here will get it.

Manfred

unread,
Sep 25, 2022, 1:31:53 PM9/25/22
to
Non-GC is definitely not for embedded programming only.
I have seen a pretty large project for digital imaging workstations fail
because of the non-deterministic nature of GC.


Kaz Kylheku

unread,
Sep 26, 2022, 12:53:24 AM9/26/22
to
On 2022-09-25, Manfred <non...@add.invalid> wrote:
> Non-GC is definitely not for embedded programming only.
> I have seen a pretty large project for digital imaging workstations fail
> because of the non deterministic nature of GC.

And are you sure that you didn't see a large project fail for various
reasons, whereby some people tried to shift the blame to garbage
collection?

Today someone will pull it off ... in the browser.

(How many decades before the web was this?)

David Brown

unread,
Sep 26, 2022, 2:43:02 AM9/26/22
to
On 26/09/2022 08:07, Blue-Maned_Hawk wrote:
> On 9/22/22 02:16, Juha Nieminen wrote:
>> [snip]
> >
>> I remember when Java was supposed to be the Holy Grail and answer to
>> everything.
>>
>> (Nowadays I get the feeling that Java survives solely because it's the
>> main/only programming language available for developing for Android
>> devices. Haven't exactly researched if it's the only possible, or just
>> the main one.)
>
> I think it's possible to use other languages instead of Java, but i
> don't have any sort of citation for that.

Some easy references:

<https://en.wikipedia.org/wiki/Kotlin_(programming_language)>
<https://en.wikipedia.org/wiki/Android_software_development>

Kotlin is now the preferred choice (I have little experience of Java,
and none of Kotlin, so I have no idea of its pros and cons). But you
can use all sorts of other languages too.

In practice, a great many apps for Android are basically webpages - so
HTML5 and JavaScript are all you need.

> Personally, it seems to me
> like another big reason Java continues to limp on is because Minecraft,
> a terrible game that's somehow one of the most popular in the world, is
> based upon Java for its primary edition.

Minecraft is probably the Java application with most users. But Java
has been hugely popular for in-house and dedicated programs for
businesses, so there is a vast investment in Java code that most people
never see. It may go out of fashion, but like Cobol, it will never die.

Juha Nieminen

unread,
Sep 26, 2022, 4:01:29 AM9/26/22
to
Michael S <already...@yahoo.com> wrote:
>> As another example: "convert(...)" might feel more convenient to
>> write than something like "convert_to_utf8_from_utf16(...)", but
>> you have to think about the code from the perspective of someone
>> who is reading it and doesn't already know what it's doing.
>> The latter may be significantly longer but it expresses infinitely
>> more clearly what it's doing (even without seeing what the parameters
>> are). Someone seeing just "convert(...)" in the code can't have any
>> idea what it's doing.
>
> It sounds like an argument against C++-style static polymorphism.

Polymorphism has its uses (especially in templates), but also its abuses.

Every good feature can be abused to make it a bad feature.

And in situations where the shorter and more generic name is needed because
of polymorphism (eg. in templates), my answer is almost always the same:

"Implement convert_to_utf8_from_utf16() anyway, and make the equivalent
convert() function call it. Use the former when you don't need the
polymorphism, restrict the use of the latter only to those situations
where it's actually needed."

Juha Nieminen

unread,
Sep 26, 2022, 4:02:12 AM9/26/22
to
It's just a hypothetical example (which is actually based on real
production code).

Juha Nieminen

unread,
Sep 26, 2022, 4:09:51 AM9/26/22
to
In comp.lang.c++ Keith Thompson <Keith.S.T...@gmail.com> wrote:
>> But not to be bested, Rust makes it better: 'Rc'.
>>
>> Because why not.
>
> "Rc" stands for "Reference Counted Smart Pointer". It's not just a
> random 2-letter sequence.

Of course it's not random, but it's needlessly short, for no reason or
advantage.

Juha Nieminen

unread,
Sep 26, 2022, 4:13:24 AM9/26/22
to
In comp.lang.c++ Bonita Montero <Bonita....@gmail.com> wrote:
> If that are your arguments against a lanuage - don't program at all.

I thought you weren't reading any of my "nonsense". So why are you?

Just go away, asshole.

Juha Nieminen

unread,
Sep 26, 2022, 4:19:55 AM9/26/22
to
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>> I wouldn't call embedded programming to be a "small niche".
>>
>> (Well, the subset of embedded programming that happens on processors
>> so small that they can't run Linux, at least.)
>
> And it will be a /long/ time before Rust has significant market share in
> these devices. Despite wanting the newest microcontrollers, embedded
> programmers are a conservative bunch - C90 is, I think, the most popular
> choice of language. (Yes, C90 - not C99.)

In my experience working in the field I think C99 has gained popularity
even among many (although not all) old-school "as bare metal as possible"
embedded C programmers.

Designated initializers are perhaps the best thing that has happened to C
during its entire existence (which is why they have been widely adopted
in the Linux kernel, and many other major C projects).

David Brown

unread,
Sep 26, 2022, 7:55:22 AM9/26/22
to
On 26/09/2022 10:19, Juha Nieminen wrote:
> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>>> I wouldn't call embedded programming to be a "small niche".
>>>
>>> (Well, the subset of embedded programming that happens on processors
>>> so small that they can't run Linux, at least.)
>>
>> And it will be a /long/ time before Rust has significant market share in
>> these devices. Despite wanting the newest microcontrollers, embedded
>> programmers are a conservative bunch - C90 is, I think, the most popular
>> choice of language. (Yes, C90 - not C99.)
>
> In my experience working in the field I think C99 has gained popularity
> even among many (although not all) old-school "as bare metal as possible"
> embedded C programmers.
>

Certainly C99 has gained popularity, but I think many C programmers
would be surprised to see how common C90 still is. It's also common to
have a kind of mixture of C90 with bits of C99 - people might use single
line comments but define all their local variables uninitialised at the
top of a function and use "int" when they should use "bool". And the
use of compiler extensions is also common - for many microcontrollers,
it is unavoidable.

> Designated initializers are perhaps the best thing that has happened to C
> during its entire existence (which is why they have been widely adopted
> in the Linux kernel, and many other major C projects).

Designated initialisers are certainly nice, but I would not call them
the "best" feature of C99. If I were to pick one favourite C99 feature,
it would be mixing declarations and statements - but such choices are
highly subjective. However, it's taken until C++20 to get designated
initialisers in C++, which suggests that they were not viewed as the
most important feature (though there has certainly been plenty of call
for them in C++).

Juha Nieminen

unread,
Sep 26, 2022, 10:32:12 AM9/26/22
to
In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>> Designated initializers are perhaps the best thing that has happened to C
>> during its entire existence (which is why they have been widely adopted
>> in the Linux kernel, and many other major C projects).
>
> Designated initialisers are certainly nice, but I would not call them
> the "best" feature of C99. If I were to pick one favourite C99 feature,
> it would be mixing declarations and statements - but such choices are
> highly subjective.

It is indeed highly subjective. Many C programmers are of the opinion
that declaring variables within the code implementation is actually a
bad thing, and they still prefer declaring all the variables of the
function at the beginning. (If I'm not mistaken, this is actually one
of the style requirements of the Linux kernel code. Although I'm not
sure if it's outright required or just recommended.)

When you have seen and programmed with designated initializers, however,
you really start to appreciate them. If you are writing eg. a Linux
kernel module, you essentially "fill out" some particular structs with
the data required for your module to work. The great thing about
designated initializers is that not only are these struct initializations
much more readable, but moreover the code doesn't need to care what the
order of the member variables is, or if there are more member variables
before, after, or in-between the ones being initialized. It really makes
things a lot easier. (It also allows for those structs to be changed by eg.
adding new member variables without breaking tons of existing code.)

> However, it's taken until C++20 to get designated
> initialisers in C++, which suggests that they were not viewed as the
> most important feature (though there has certainly been plenty of call
> for them in C++).

And C++20 ruined one of the best aspects of them: The fact that you don't
need to know the order in which the member variables have been defined
in the struct.

(Not having to care about the order allows for refactoring of the struct
by swapping things around. Also, the order of the member variables in
the struct might have been chosen for space efficiency, while in the
initialization you can use a more logical order of initialization by
grouping related values together.)

rbowman

unread,
Sep 26, 2022, 11:04:51 AM9/26/22
to
On 9/26/22 05:55, David Brown wrote:
> Designated initialisers are certainly nice, but I would not call them
> the "best" feature of C99.  If I were to pick one favourite C99 feature,
> it would be mixing declarations and statements - but such choices are
> highly subjective.  However, it's taken until C++20 to get designated
> initialisers in C++, which suggests that they were not viewed as the
> most important feature (though there has certainly been plenty of call
> for them in C++).

I'd go with the ability to place declarations close to the code using
them also. Throwing in curly braces worked but it wasn't too elegant.

Bart

unread,
Sep 26, 2022, 11:05:42 AM9/26/22
to
I don't like designated initialisers.

First because my private C compiler doesn't support them, so I have to
switch to another when some code uses them, or edit it to remove the
dependency.

But also, they tend to be used gratuitously; I've actually seen examples
like this:

typedef struct {int x, y;} Point;

Point p = {100, 200}; // without designated initialisers
Point q = {110, 220};

Point r = {.x = 100, .y = 200};
Point s = {.y = 220, .x = 110};

The designated ones introduce too much clutter for a simple struct like
this, but also people feel free to mix up the order, just because they can.
So you can't as easily see the patterns like those in the first example,
or they can be misleading unless you keep an eye on those names.

Another problem is when you decide to change the member names,
especially when the names weren't particularly important. Just changing
the order can mean data within the struct will get mixed up.

Mut...@dastardlyhq.com

unread,
Sep 26, 2022, 11:24:51 AM9/26/22
to
Java looks great at first - no pointer issues, fully (almost) OO and garbage
collected. Then a team writes a huge application in it which sucks up 100s of
megs of memory just to boot, hangs at inappropriate moments while it GCs and
generally runs multiple times slower than something similar written in C++.
Then they find the write once run anywhere mantra only works if you don't
do anything OS specific, which for any given business app is unlikely, and
the 101 libraries that all do the same thing become unmanageable.

>businesses, so there is a vast investment in Java code that most people
>never see. It may go out of fashion, but like Cobol, it will never die.

True, but it's doubtful much new greenfield code, relatively speaking, will
be written in it in the future.

daniel...@gmail.com

unread,
Sep 26, 2022, 12:00:48 PM9/26/22
to
On Monday, September 26, 2022 at 11:24:51 AM UTC-4, Mut...@dastardlyhq.com wrote:

> True, but its doubtful much new greenfield code relatively speaking will be
> written in [Java] in the future.

I think you've been misinformed :-)

Sometime in the 1990s, Java largely replaced C++ in middleware tooling for
business-to-business messaging, data transformation, application servers for enterprises,
and IT infrastructure software generally as vendors including IBM, Oracle, and Tibco all moved
to Java. That's a huge code base. There's some competition in this space from
.NET ASP, which is now cross platform across Linux, macOS, and Windows, but most of
the major vendors in this space are firmly in the Java camp. There's also a massive
amount of Java open source code in this space, including a JVM, particularly with the
Apache Software Foundation.

Daniel



Mut...@dastardlyhq.com

unread,
Sep 26, 2022, 12:11:54 PM9/26/22
to
On Mon, 26 Sep 2022 09:00:34 -0700 (PDT)
"daniel...@gmail.com" <daniel...@gmail.com> wrote:
>On Monday, September 26, 2022 at 11:24:51 AM UTC-4, Mut...@dastardlyhq.com
>wrote:
>
>> True, but its doubtful much new greenfield code relatively speaking will be
>> written in [Java] in the future.
>
>I think you've been misinformed :-)
>
>Sometime in the 1990's, Java largely replaced C++ in middleware tooling for
>business-to-business messaging, data transformation, application servers for
>enterprises,

It's not the 1990s anymore.

daniel...@gmail.com

unread,
Sep 26, 2022, 12:22:16 PM9/26/22
to
Indeed :-) But the domination of Java in middleware tooling for
business-to-business messaging, data transformation, application servers for
enterprises, and IT infrastructure software generally remains, and major
vendors like IBM, Oracle and Tibco aren't moving anywhere, from the looks of
it. I realize this isn't your space.

Daniel

Keith Thompson

unread,
Sep 26, 2022, 1:45:16 PM9/26/22
to
Juha, if you do not stop feeding the trolls, I will treat you
as a troll yourself and add you to my killfile. You do not have
to respond to everything. I have several users filtered out by
the equivalent of a killfile. You are bypassing my killfile by
posting responses. Please stop.

Juha Nieminen

unread,
Sep 27, 2022, 2:27:20 AM9/27/22
to
In comp.lang.c++ Keith Thompson <Keith.S.T...@gmail.com> wrote:
> Juha, if you do not stop feeding the trolls, I will treat you
> as a troll yourself and add you to my killfile. You do not have
> to respond to everything. I have several users filtered out by
> the equivalent of a killfile. You are bypassing my killfile by
> posting responses. Please stop.

Which pretty much makes killfiles useless. I have never understood,
and will never understand, why people use them. They are useless.
The only thing they do is that they break threads and make them
disjointed. And if you just keep adding people to your killfile
because they keep responding to people in your killfile, or to
people responding to those, or to people responding to those...
sooner or later you'll have almost everybody in your killfile
and see nothing.

I honestly cannot see the point.

Juha Nieminen

unread,
Sep 27, 2022, 2:36:23 AM9/27/22
to
In comp.lang.c++ Bart <b...@freeuk.com> wrote:
> I don't like designated initialisers.
>
> First because my private C compiler doesn't support them, so I have to
> switch to another when some code uses them, or edit it to remove the
> dependency.

That's a rather strange reason not to like them.

> But also, they tend to be used gratuitously; I've actually seen examples
> like this:
>
> typedef struct {int x, y;} Point;
>
> Point p = {100, 200}; // without designated initialisers
> Point q = {110, 220};
>
> Point r = {.x = 100, .y = 200};
> Point s = {.y = 220, .x = 110};
>
> The designated ones introduce too much clutter for a simple struct like
> this, but also people feel free mix up the order, just because they can.
> So you can't as easily see the patterns like those in the first example,
> or they can be misleading unless you keep an eye on those names.

You are assuming that the struct is directly visible there and extremely
obvious what it contains. It's not always that obvious. For example, this
is a very typical way to register a device driver in the Linux kernel
(directly from the source code):

//----------------------------------------------------------
static struct platform_driver ftgpio_gpio_driver = {
.driver = {
.name = "ftgpio010-gpio",
.of_match_table = of_match_ptr(ftgpio_gpio_of_match),
},
.probe = ftgpio_gpio_probe,
.remove = ftgpio_gpio_remove,
};

builtin_platform_driver(ftgpio_gpio_driver);
//----------------------------------------------------------

What does that 'platform_driver' struct look like? Does it contain member
variables other than those being initialized here? Are they in this
particular order? I have no idea. And I don't have to have an idea.

And consider how much more readable the above initialization is.
It pretty much documents itself.

David Brown

unread,
Sep 27, 2022, 5:22:04 AM9/27/22
to
On 26/09/2022 16:31, Juha Nieminen wrote:
> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>>> Designated initializers are perhaps the best thing that has happened to C
>>> during its entire existence (which is why they have been widely adopted
>>> in the Linux kernel, and many other major C projects).
>>
>> Designated initialisers are certainly nice, but I would not call them
>> the "best" feature of C99. If I were to pick one favourite C99 feature,
>> it would be mixing declarations and statements - but such choices are
>> highly subjective.
>
> It is indeed highly subjective. Many C programmers are of the opinion
> that declaring variables within the code implementation is actually a
> bad thing, and they still prefer declaring all the variables of the
> function at the beginning. (If I'm not mistaken, this is actually one
> of the style requirements of the Linux kernel code. Although I'm not
> sure if it's outright required or just recommended.)

Somewhat bizarrely, the coding style for the Linux kernel allows
(AFAIUI) very complex initialisation of the variables at the start of a
function, but no mixing of declarations and statements. So it partially
encourages the "don't define your variables until you have good data for
initialisation" style, but by disallowing statements between the
declarations it pushes people towards unwieldy and overly complex
expressions.

I assume this is because until recently, the kernel was based on C90 -
but it was C90 with gcc extensions. The gcc "gnu90" pseudo-standard
supports mixed declarations and statements, just like C99.

I understand that some people like a clean list of uninitialised
variable declarations at the start of a function, much like in Pascal.
I don't think it is a good way to code, but people have their
preferences. The Linux kernel style looks to me like a mongrel hybrid
that does not lead to clean and clear code.

>
> When you have seen and programmed with designated initializers, however,
> you really start to appreciate them.

I /do/ use them, and appreciate them - but I don't find them to be such
a revolution that you apparently do. Maybe that's just a matter of the
kind of code I write compared to the kind of code you write.

> If you are writing eg. a Linux
> kernel module, you essentially "fill out" some particular structs with
> the data required for your module to work. The great thing about
> designated initializers is that not only are these struct initializations
> much more readable, but moreover the code doesn't need to care what the
> order of the member variables is, or if there are more member variables
> before, after, or in-between the ones being initialized. It really makes
> a lot easier. (It also allows for those structs to be changed by eg.
> adding new member variables without breaking tons of existing code.)
>
>> However, it's taken until C++20 to get designated
>> initialisers in C++, which suggests that they were not viewed as the
>> most important feature (though there has certainly been plenty of call
>> for them in C++).
>
> And C++20 ruined one of the best aspects of them: The fact that you don't
> need to know the order in which the member variables have been defined
> in the struct.

I agree with that. It strikes me as a strange requirement - just like
the insistence that initialisers in a constructor match the declaration
order. Perhaps there is some clear, logical reason for that restriction
(and perhaps someone could tell me what it is?).

>
> (Not having to care about the order allows for refactoring of the struct
> by swapping things around. Also, the order of the member variables in
> the struct might have been chosen for space efficiency, while in the
> initialization you can use a more logical order of initialization by
> grouping related values together.)

Yes.

David Brown

unread,
Sep 27, 2022, 5:27:24 AM9/27/22
to
That is one of the important benefits of being able to mix declarations
and statements, so I think we agree on our favourite feature.

It is not the only benefit, however. Declaring your variable only when
you have something useful to put in it lets you make many such variables
"const" - then both the programmer and the compiler know it is not going
to change. It makes adding new local variables "cheap" - so instead of
re-using a single "int temp;" variable in a dozen places in the
function, you have many const variables that do one thing, with one
name, and you always know exactly what they are.

Juha Nieminen

unread,
Sep 27, 2022, 9:26:51 AM9/27/22
to
I understand that C++ wants member variables to be initialized in strict
order of declaration (which in the case of C++ in particular can make
quite a difference if those initializations have side effects), but when
it comes to designated initializers I really think that the C++20 standard
should have allowed any order in the initialization list, and simply
declared that if the initializers are not specified in the same order
as the members have been declared, the behavior is "implementation-defined".
In other words, if the initializers are listed in the correct order, the
behavior is well-defined, but if they are in another order it's up to the
compiler whether it will reorder them to match the declaration order, or
initialize the members in the order specified in the list. (In other
words, if the initializers have side effects, you can't rely on them
being done in any particular order.)

(Another possibility is that it could have made an exception with POD
types, and allow out-of-order initialization with those.)

rbowman

unread,
Sep 27, 2022, 10:11:58 AM9/27/22
to
I'm not fond of 'const'. We have a lot of legacy code where the
programmers used it inappropriately so you wind up casting it away. I
think they saw it in one declaration and decided it was always needed.
Used correctly, it's valuable.

Bo Persson

unread,
Sep 27, 2022, 10:54:04 AM9/27/22
to
The thing is that destructors are supposed to be called in the reverse
order of construction (to allow "unchaining" any dependencies again).
This becomes really messy if we allow different construction order in
different places of the program.

I believe the strictness here is a do-the-right-thing attempt, avoiding
the constructor initializer list reordering that we have always had.
When adding a new feature, try to avoid the old mistakes!




Joe Pfeiffer

unread,
Sep 27, 2022, 11:57:02 AM9/27/22
to
Typically only a few people ever respond to the people who have, by dint
of extraordinary effort, managed to land in my killfile. Typically
those few people who do respond to them aren't much loss either.

The killfile propagation you are describing is not my experience.

David Brown

unread,
Sep 27, 2022, 12:10:26 PM9/27/22
to
Is it not better just to say that you are not fond of incorrect and
badly written code? If you have code written by people who don't know
how to use "const" properly, you should not let that stop you using it
when it is useful.

Keith Thompson

unread,
Sep 27, 2022, 1:51:18 PM9/27/22
to
I'm going to assume that you actually want an explanation. That might
be a bad assumption.

My killfile is extremely useful. It filters out posts by a handful of
users who would otherwise seriously damage the signal-to-noise ratio of
my view of this newsgroup. It makes this newsgroup a much more pleasant
place.

Most other users here have enough sense to refrain from posting
followups to noise.

My killfile does not generally make threads disjointed, because
typically none of the participants in a meaningful thread are in my
killfile.

You are one of the few users here who both posts relevent content *and*
posts annoying responses to trolls. Most recently, you posted:

> I thought you weren't reading any of my "nonsense". So why are you?
>
> Just go away, asshole.

because, apparently, you thought that *all* readers of this newsgroup
needed to see you say that. We really really didn't.

I'll note that the particular troll you were replying to posts under
what appears to be a valid email address, so you could have contacted
them directly.

Honestly, I understand the urge to feed the trolls. I've done it
myself. But I strongly suggest that you learn to let them have the last
word -- or it will never end.

Perhaps you now understand a bit better. Whether you do or not, my
previous statement stands. Your positive contributions here do not
outweigh the noise you produce with posts like your recent one.

Kaz Kylheku

unread,
Sep 27, 2022, 2:18:51 PM9/27/22
to
["Followup-To:" header set to comp.lang.c.]
On 2022-09-27, Juha Nieminen <nos...@thanks.invalid> wrote:
> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>>>> However, it's taken until C++20 to get designated
>>>> initialisers in C++, which suggests that they were not viewed as the
>>>> most important feature (though there has certainly been plenty of call
>>>> for them in C++).
>>>
>>> And C++20 ruined one of the best aspects of them: The fact that you don't
>>> need to know the order in which the member variables have been defined
>>> in the struct.
>>
>> I agree with that. It strikes me as a strange requirement - just like
>> the insistence that initialisers in a constructor match the declaration
>> order. Perhaps there is some clear, logical reason for that restriction
>> (and perhaps someone could tell me what it is?).
>
> I understand that C++ wants member variables to be initialized in strict
> order of declaration (which in the case of C++ in particular can make

That's a pretty stupid thing to fuss about in a language in which
the very *functions* have unspecified argument evaluation order.

If you create an object using this kind of paradigm:

obj *make_object(obj *arg1, obj *arg2, obj *arg3);

make_object(complex_expr, another_complex_expr, third_complex_expr);

the arguments expressions in your constructor call will be evaluated in
many possible orders, including interleaved with each other.

*THAT* is the fucking place where you want to introduce order:
in an imperative, effect-full language, you want function arguments
to be strictly evaluated.

But no, we can't have that; let's obsess over order in the initializer
syntax.

C++ is a schizophrenic language designed by a disingenuous committee
whose main aim is self-preservation and not the betterment of a
programming language.

rbowman

unread,
Sep 27, 2022, 7:30:24 PM9/27/22
to
That is true but dealing with badly written code is part of my life.


Kaz Kylheku

unread,
Sep 27, 2022, 8:26:37 PM9/27/22
to
["Followup-To:" header set to comp.lang.c.]
const requires a modicum of generic programming to use in a satisfactory
manner in all circumstances. The problem is that functions with
const-qualified pointer arguments accept qualified and unqualified
pointers.

If they return a pointer into the same object, if they make it
const-qualified, it's inconvenient. If they return unqualified,
an unsafe cast is required inside the function.

The C++ versions of certain C library headers fix this. You can
have code like this:

const char *ptr = strchr(const_str, 'a');

as well as

char *ptr = strchr(non_const_str, 'a');

thanks to there being two C++ overloads for the function.

(I'm guessing that the C11 _Generic thing could be used for this,
to create a strchr-like macro that has cases for const
and non-const.)

--
TXR Programming Language: http://nongnu.org/txr
Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal

James Kuyper

unread,
Sep 28, 2022, 1:31:03 AM9/28/22
to
On 9/27/22 20:03, Kaz Kylheku wrote:
> On 2022-09-27, Scott Lurndal <sc...@slp53.sl.home> wrote:
...
>> To be fair, the language allows you to explicitly order them
>> rather easily:
>>
>> temp1 = complex_expr;
>> temp2 = another_complex_expr;
>> temp3 = yet_another_complex_expr;
>>
>> make_object(temp1, temp2, temp3).
>
> That transformation is the job of the compiler, though.
>
> It's not practical to do this even in small programs, let alone
> ones in which there are millions of function call expressions.

You only do it when the order actually matters, not automatically for
all function calls. In every case where the order doesn't matter, that
leaves the compiler free to reorder the evaluations in whichever order
is most convenient/fastest.

David Brown

unread,
Sep 28, 2022, 3:40:28 AM9/28/22
to
On 27/09/2022 15:26, Juha Nieminen wrote:
> In comp.lang.c++ David Brown <david...@hesbynett.no> wrote:
>>>> However, it's taken until C++20 to get designated
>>>> initialisers in C++, which suggests that they were not viewed as the
>>>> most important feature (though there has certainly been plenty of call
>>>> for them in C++).
>>>
>>> And C++20 ruined one of the best aspects of them: The fact that you don't
>>> need to know the order in which the member variables have been defined
>>> in the struct.
>>
>> I agree with that. It strikes me as a strange requirement - just like
>> the insistence that initialisers in a constructor match the declaration
>> order. Perhaps there is some clear, logical reason for that restriction
>> (and perhaps someone could tell me what it is?).
>
> I understand that C++ wants member variables to be initialized in strict
> order of declaration (which in the case of C++ in particular can make
> quite a difference if those initializations have side effects),

I understand that C++ /does/ insist on following the order of
declaration - I don't understand /why/. I appreciate that the
initialisations could have side effects, so it makes sense that the
initialisations are always carried out in the order of declaration - but
not that the order must be followed syntactically when writing the
constructor. (Alternatively, the language could have said that the
initialisations are carried out in the order given in the list in the
constructor - either would have worked, as long as it was clear and
consistent.)

For example, if I have a class containing some ints and doubles, and
some flags, then I might want to order the member declarations to pack
nicely and match a cache line in total size. But I might prefer the
initialisations in constructors to follow a logical order that is
different. I fail to see a good reason why that is not allowed.
(Again, this is quite possibly just that I am ignorant of the good reason.)

> but when
> it comes to designated initializers I really think that the C++20 standard
> should have allowed any order in the initialization list, and simply
> declared that if the initializers are not specified in the same order
> as the members have been declared, the behavior is "implementation-defined".

Agreed.

> In other words, if the initializers are listed in the correct order, the
> behavior is well-defined, but if they are in another order it's up to the
> compiler whether it will reorder them to match the declaration order, or
> initialize the members in the order specified in the list. (In other
> words, if the initializers have side effects, you can't rely on them
> being done in any particular order.)
>
> (Another possibility is that it could have made an exception with POD
> types, and allow out-of-order initialization with those.)

Also possible.

Kaz Kylheku

unread,
Sep 28, 2022, 4:26:47 AM9/28/22
to
["Followup-To:" header set to comp.lang.c.]
On 2022-09-28, James Kuyper <james...@alumni.caltech.edu> wrote:
> On 9/27/22 20:03, Kaz Kylheku wrote:
>> On 2022-09-27, Scott Lurndal <sc...@slp53.sl.home> wrote:
> ...
>>> To be fair, the language allows you to explicitly order them
>>> rather easily:
>>>
>>> temp1 = complex_expr;
>>> temp2 = another_complex_expr;
>>> temp3 = yet_another_complex_expr;
>>>
>>> make_object(temp1, temp2, temp3).
>>
>> That transformation is the job of the compiler, though.
>>
>> It's not practical to do this even in small programs, let alone
>> ones in which there are millions of function call expressions.
>
> You only do it when the order actually matters, not automatically for
> all function calls.

OK; I still need the compiler to tell me where those places are;
I'm not looking at thousands of function calls to classify them
into whether they are stuffed with side effects whose order
matters or not.

> In every case where the order doesn't matter, that
> leaves the compiler free to reorder the evaluations in whichever order
> is most convenient/fastest.

In cases where the order doesn't matter, the compiler
can still reorder the evaluations even if they are expressed in a
pattern which constrains the abstract order.

The business of unspecified evaluation orders is mostly geared toward
helping compilers from 1982 generate better code with less analysis.

David Brown

unread,
Sep 28, 2022, 4:32:50 AM9/28/22
to
The old mistake (IMHO) is that re-ordering was not allowed, and they are
re-making it here.

Aggregate initialisation in C++ is already limited to classes without
user-declared or inherited constructors. I think you'd have to try hard
to make a class for which designated initialisers are allowed, but the
order of initialisation or destruction matters.

Bo Persson

unread,
Sep 28, 2022, 4:43:31 AM9/28/22
to
On 2022-09-28 at 10:26, Kaz Kylheku wrote:
> ["Followup-To:" header set to comp.lang.c.]
> On 2022-09-28, James Kuyper <james...@alumni.caltech.edu> wrote:
>> On 9/27/22 20:03, Kaz Kylheku wrote:
>>> On 2022-09-27, Scott Lurndal <sc...@slp53.sl.home> wrote:
>> ...
>>>> To be fair, the language allows you to explicitly order them
>>>> rather easily:
>>>>
>>>> temp1 = complex_expr;
>>>> temp2 = another_complex_expr;
>>>> temp3 = yet_another_complex_expr;
>>>>
>>>> make_object(temp1, temp2, temp3).
>>>
>>> That transformation is the job of the compiler, though.
>>>
>>> It's not practical to do this even in small programs, let alone
>>> ones in which there are millions of function call expressions.
>>
>> You only do it when the order actually matters, not automatically for
>> all function calls.
>
> OK; I still need the compiler to tell me where those places are;
> I'm not looking at thousands of function calls to classify them
> into whether they are stuffed with side effects whose order
> matters or not.
>

So you write complicated expressions with lots of side effects, without
considering the order of those side effects. Doesn't sound like a good
way to write code.




David Brown

unread,
Sep 28, 2022, 5:00:40 AM9/28/22
to
On 28/09/2022 02:26, Kaz Kylheku wrote:
> ["Followup-To:" header set to comp.lang.c.]

(Please don't set follow-ups that exclude groups that are clearly
relevant and part of the active thread - it just breaks up conversations.)

> On 2022-09-27, David Brown <david...@hesbynett.no> wrote:
>> On 27/09/2022 16:11, rbowman wrote:

<snip>

>>> I'm not fond of 'const'. We have a lot of legacy code where the
>>> programmers used it inappropriately so you wind up casting it away. I
>>> think they saw it in one declaration and decided it was always needed.
>>> Used correctly it's valuable
>>
>> Is it not better just to say that you are not fond of incorrect and
>> badly written code? If you have code written by people who don't know
>> how to use "const" properly, you should not let that stop you using it
>> when it is useful.
>
> const requires a modicum of generic programming to use in a satisfactory
> manner in all circumstances. The problem is that functions with
> const-qualified pointer arguments accept qualified and unqualified
> pointers.
>
> If they return a pointer into the same object, if they make it
> const-qualified, it's inconvenient. If they return unqualified,
> an unsafe cast is required inside the function.

Yes, being "const consistent" can sometimes be awkward. That does not
mean avoiding "const" is a good idea - it just means "const" is not
quite as simple as one might like.

>
> The C++ versions of certain C library headers fix this. You can
> have code like this:
>
> const char *ptr = strchr(const_str, 'a');
>
> as well as
>
> char *ptr = strchr(non_const_str, 'a');
>
> thanks to there being two C++ overloads for the function.

Yes.

>
> (I'm guessing that the C11 _Generic thing could be used for this,
> to create a strchr-like macro that has cases for const
> and non-const.)
>

Yes, _Generic certainly could handle that.


Richard Damon

unread,
Sep 28, 2022, 7:56:56 AM9/28/22
to
The problem with the order listed in the constructor is that there might
not be just one constructor, but several, that list the order
differently. That gives the destructor problems, as it doesn't know in
what order to destroy the members, and the constructor's implementation might
not even be visible at the point of generating the code for the destructor.

That forces ALL classes to add hidden members that record the order of
construction for this instance of the class, which violates the
principle of trying to not pay for things you don't need.

Using the order from the class definition is the simple solution.

Note, for your class with ints and doubles, it doesn't matter, as the
destructor is trivial.

Ben Bacarisse

unread,
Sep 28, 2022, 8:33:52 AM9/28/22
to
David Brown <david...@hesbynett.no> writes:

> On 28/09/2022 02:26, Kaz Kylheku wrote:
<cut>
>> The C++ versions of certain C library headers fix this. You can
>> have code like this:
>> const char *ptr = strchr(const_str, 'a');
>> as well as
>> char *ptr = strchr(non_const_str, 'a');
>> thanks to there being two C++ overloads for the function.
>
> Yes.
>
>> (I'm guessing that the C11 _Generic thing could be used for this,
>> to create a strchr-like macro that has cases for const
>> and non-const.)
>
> Yes, _Generic certainly could handle that.

And now C23 proposes type-generic versions of strchr etc.

--
Ben.

David Brown

unread,
Sep 28, 2022, 9:51:06 AM9/28/22
to
That all suggests it is a good idea for initialisers to be executed in
the order given in the class declaration (since practicality dictates
having one fixed order, that seems the sensible choice). It does /not/
suggest that the same order should be required when listing the
initialisers.

>
> Using the order from the class definition is the simple solution.
>
> Note, for your class with ints and doubles, it doesn't matter, as the
> destructor is trivial.
>
And yet C++ won't let me re-order them in the initialiser list. It will
let me use whatever order I want inside the body of the constructor
using assignment, and then the compiler will re-order to match exactly
the same code that I'd get with initialisers - whatever is the most
efficient code.

Scott Lurndal

unread,
Sep 28, 2022, 10:21:40 AM9/28/22
to
I suppose that when BS was writing cfront, it wasn't feasible to
support different orders for declaration and initialization; with
subsequent code being sensitive to the fixed order, relaxing the rule as
compilers became more sophisticated was contraindicated.

Scott Lurndal

unread,
Sep 28, 2022, 12:17:27 PM9/28/22
to
Moreover, since function call arguments were passed on a stack,
were pushed in reverse order and the target architectures of the day
had limited registers, it follows that compiler writers in the day
would process arguments in reverse order when generating code.

Kaz Kylheku

unread,
Sep 28, 2022, 12:56:47 PM9/28/22
to
On 2022-09-28, Scott Lurndal <sc...@slp53.sl.home> wrote:
> Moreover, since function call arguments were passed on a stack,
> were pushed in reverse order and the target architectures of the day
> had limited registers, it follows that compiler writers in the day
> would process arguments in reverse order when generating code.

Not just limited registers; but limited analysis. Because you can
reorder the evaluation to go in the stack-convenient order, in cases
where you can confirm that it makes no difference.

James Kuyper

unread,
Sep 28, 2022, 1:15:33 PM9/28/22
to
On 9/28/22 04:26, Kaz Kylheku wrote:
> ["Followup-To:" header set to comp.lang.c.]
> On 2022-09-28, James Kuyper <james...@alumni.caltech.edu> wrote:
>> On 9/27/22 20:03, Kaz Kylheku wrote:
>>> On 2022-09-27, Scott Lurndal <sc...@slp53.sl.home> wrote:
>> ...
>>>> temp1 = complex_expr;
>>>> temp2 = another_complex_expr;
>>>> temp3 = yet_another_complex_expr;
>>>>
>>>> make_object(temp1, temp2, temp3).
>>>
>>> That transformation is the job of the compiler, though.
>>>
>>> It's not practical to do this even in small programs, let alone
>>> ones in which there are millions of function call expressions.
>>
>> You only do it when the order actually matters, not automatically for
>> all function calls.
>
> OK; I still need the compiler to tell me where those places are;

You've got the communications direction wrong. Writing code that way is
how you tell the compiler that the order matters. Writing it as a simple
function call without explicit temporaries is how you tell the compiler
that you think the order doesn't matter. A good compiler should inform
you if it realizes that the assumption is incorrect.

Bo Persson

unread,
Sep 28, 2022, 2:31:36 PM9/28/22
to
The "stack convenient" order is very important for some common
functions, like printf. Evaluating that function call left-to-right and
pushing the parameters so that the format string is at the bottom of an
unknown-sized parameter pack is - Extremely Inconvenient(TM).

So the evaluation is ordered so that the format string is pushed last
and easy to find for printf. Kind of important for it to figure out the
types and numbers of the other parameters.



The Doctor

unread,
Sep 28, 2022, 5:45:58 PM9/28/22
to
In article <jpji7n...@mid.individual.net>,
C/C++ being deprecated? How would computing work?
--
Member - Liberal International This is doc...@nk.ca Ici doc...@nk.ca
Yahweh, King & country!Never Satan President Republic!Beware AntiChrist rising!
Look at Psalms 14 and 53 on Atheism https://www.empire.kred/ROOTNK?t=94a1f39b
Quebec oubliez les extremes et votez PLQ Beware https://mindspring.com

Richard Damon

unread,
Sep 28, 2022, 7:33:36 PM9/28/22
to
Right, because to allow otherwise adds a LOT of overhead. That is what I
was implying. That IS the rule in C++: the initializers are processed in
the order the members are listed in the class definition, regardless of
the order specified in the constructor.

>
>>
>> Using the order from the class definition is the simple solution.
>>
>> Note, for your class with ints and doubles, it doesn't matter, as the
>> destructor is trivial.
>>
> And yet C++ won't let me re-order them in the initialiser list.  It will
> let me use whatever order I want inside the body of the constructor
> using assignment, and then the compiler will re-order to match exactly
> the same code that I'd get with initialisers - whatever is the most
> efficient code.

You can specify the initializer list in whatever order you want, though
a good compiler will have the option to give you a warning if the
listed order isn't the order that it will do them in.

In the body of the constructor, it is required to do them in the order
listed, or at least act "as if" it did them in the order listed.

Because of the "as if" rule, if you can't tell the difference, it
doesn't matter what order they are done in.

Kaz Kylheku

unread,
Sep 28, 2022, 7:39:13 PM9/28/22
to
Of course, I can tell which is the case by inspection. Problem is,
suppose I didn't write the code myself and there are tens of thousands
of function calls in it. I can't inspect them all, and so that's why
I would need the compiler to tell me where the places are.

I suspect that practically achievable diagnostics in this area will
yield lots of false positives.

For instance, the order probably doesn't matter in code like this:

debug_pf("device %s: status %x\n", dev_name(d), dev_status(d));

yet, this code potentially has behavior dependent on the order
of the function calls. The functions are external, defined in another
translation unit; the compiler has no idea whether they mutate the d
object. A diagnostic feature warning about potential side effects in
function arguments would have to flag this.

Really, the best solution is not to have this class of bug at all,
by fixing the order of evaluation.

David Brown

unread,
Sep 29, 2022, 5:47:18 AM9/29/22
to
The logical extension of this is saying that when some programmers don't
understand the language properly, or make unwarranted assumptions, or
write poor code, you want the language to define away the bugs. Reality
can't work that way. Programmers have to take responsibility for
writing correct code.

/Everyone/ has their own personal choice of things they think are flaws
in the language, or common coding errors that could be reduced by a
change to the language. Equally, there will be people who prefer that
aspect of the language remains as it is. This applies to all
programming languages - it is not a C or C++ thing.

We can all sympathise with having to deal with code written by others
that is not as good as we'd like.

But remember that whenever one person says "eliminates a class of bugs",
others see "eliminates a class of optimisations" or "eliminates a type
of coding flexibility" or "eliminates scope for static error detection
or run-time sanitizer tracking".


David Brown

unread,
Sep 29, 2022, 5:50:44 AM9/29/22
to
I can quite believe that being flexible about the ordering would have
involved more compiler effort at the time, and that's a solid argument.
But we have seen many cases where features are initially very
restricted, but get freer as compilers get more sophisticated and can
handle them better - constexpr functions is a prime case.

I can't see the challenges of writing cfront as a reason not to allow
freely ordered designated initialisers.

Kaz Kylheku

unread,
Sep 29, 2022, 1:59:15 PM9/29/22
to
["Followup-To:" header set to comp.lang.c.]
Well, it does a fairly good job of that already, right? For instance, it
tells you if you're passing a "foo *" pointer to a function that expects
"bar *", or accessing a nonexistent member in a structure instead of just
blindly fetching from that offset, or that you're passing too few or
many arguments and so on.

Every time you get a diagnostic in any such area, that's a situation in
which you didn't write correct code - one which might otherwise not have
been caught at all, or only very late, perhaps years later in the field
somewhere.

> Programmers have to take responsibility for
> writing correct code.

When was the last time you took responsibility for verifying that a C
function call correctly pops off the stack all the arguments it pushed?

One way that programmers, as a group, take responsibility for writing
correct code is by developing tools, and using them.

David Brown
Sep 30, 2022, 3:17:20 AM
On 29/09/2022 19:58, Kaz Kylheku wrote:
> ["Followup-To:" header set to comp.lang.c.]

Would you /please/ stop doing that? It messes up the threads, giving a
disjointed view in both groups with some posts missing and others
appearing from nowhere as people correct your inappropriate follow-up
groups.

If you want to talk about C specifically, start a thread in comp.lang.c.

If you want to talk about C++ specifically, start a thread in comp.lang.c++.

When a thread is discussing both languages, such as this one (look at
the subject), and everyone else is keeping both newsgroups in the
followups, then you should do that too.


There are only two good reasons why you should consider setting
followups. One is if a branch is clearly diverging to a side-topic that
is applicable to one group only, and is of no interest to people in the
other group. This will normally be accompanied by a subject change.

The other is when someone posts to the wrong group, and you are
informing them of where they ought to be.

Neither applies here.

Note that "I only follow the one group comp.lang.c" is /not/ a relevant
reason. Your follow-up settings mess up things particularly badly for
those that happen to follow only comp.lang.c++.


Your posts in this thread are equally topical, relevant and interesting
in both groups.

Kaz Kylheku
Sep 30, 2022, 12:40:01 PM
On 2022-09-30, David Brown <david...@hesbynett.no> wrote:
> On 29/09/2022 19:58, Kaz Kylheku wrote:
>> ["Followup-To:" header set to comp.lang.c.]
>
> Would you /please/ stop doing that? It messes up the threads, giving a
> disjointed view in both groups with some posts missing and others
> appearing from nowhere as people correct your inappropriate follow-up
> groups.

The question is, why do I do that? It's a bad idea in most cases.

The workflow is that SLRN, in the cross-posting situation, brings up a
prompt like this:

Crosspost using: (F)ollowup-To, (A)ll groups, (T)his group, (C)ancel

There are times when I hit 'f' by mistake. (Could it be because 'f' is also
the command for initiating the follow-up in the first place?)
Sometimes I then also neglect to edit out the header-associated body
line. Maybe I jump to the bottom too fast,
that being the fundamental race in Usenet, and all.

Drilling into it now, I see that if the configuration option
netiquette_warnings is set to 0, the prompt goes away, and the (A)ll
behavior prevails.

It's unfortunate that the warnings are lumped together under one
option like that. I'd like to be warned if there is too much quoted
material, or long lines and such.

It can be solved in another way: Vim can easily be programmed to
dispatch an action when a file named .followup is opened; that
can be used to make some automatic edits.

Kaz Kylheku
Sep 30, 2022, 1:20:35 PM
[ Apologies for having diverted this subthread to comp.lang.c ]
On 2022-09-29, David Brown <david...@hesbynett.no> wrote:
> On 29/09/2022 19:58, Kaz Kylheku wrote:
> Every time I use a function? I take responsibility for picking a good
> compiler that I am confident I can rely on, for testing my code, and for
> writing code correctly in the first place - that's a programmer's job.
> And when I accidentally put bugs in the code, that is my responsibility too.
>
>> One way that programmers, as a group, take responsibility for writing
>> correct code is by by developing tools, and using them.
>
> Sure. In particular, they are responsible for learning the tools and
> using them properly to maximal effect for the task in hand.

This is a basic principle in the field and pretty much any other
professional field; as such, it doesn't really inform.

It's true whether you're toggling switches on a panel to produce
a binary program directly in memory, or whether you're using OCaml
or Prolog.

Yet it is empirically valid that those diverse alternatives
don't have the same safety, and we evolve things accordingly.

Speaking of returning the cross-posting to comp.lang.c++, that language
as of C++17 has chosen to define some evaluation orders.

The evaluation order of function arguments is still unspecified, but
each argument expression is now indeterminately sequenced with respect
to the others.

That matters more than it might seem: a call like

f(i++, i++);

which used to be outright undefined behaviour, now merely has an
unspecified evaluation order. It doesn't change the situation

f(g(), h())

because those calls were already indeterminately sequenced with respect
to each other. However, it helps in the argument space: with only the
classic rule that function executions don't interleave, this used to be
undefined:

f(g(i++), h(i++))

whereas by my understanding C++17 leaves it unspecified in which order g
and h are called, and which one receives the original value of i. But we
can infer that whichever function is called first receives the original
value of i, and that i is reliably incremented twice. Baby steps in the
right direction, in any case.

I don't understand the rationale for sequencing A before B in A << B
(what is so special about shifting); there must be some motivating
example where you have a side effect in A that B depends on. What I'm
likely missing is that it's probably not the built-in arithmetic << that
is of concern, but overloads.

> But you don't get to pick different rules that you'd prefer to have for
> the language, and then blame the language when your code doesn't work.
> It is not the language that is at fault if someone writes code that
> relies on execution order that is left unspecified by the standards -
> it's the programmers' fault.

It's not a game of blame, but of reducing the unfortunate situations in
which someone has a reason to look for something to blame.

I can't look at a large body of code (that I perhaps didn't write: so no
responsibility of mine) and easily know whether there is a problem due
to eval order. If I'm charged with the responsibility of finding out,
I could just give a quick answer "almost certainly no, modulo compiler
bugs" if eval order is defined by the language, without any effort. (On
the other hand, that leaves money on the table, doesn't it! Now I have
to find alternative ways to get paid for the saved hours.)

Scrambled eval orders in a language in which side effects are the
principal means of getting anything done is a poor technical situation.

Paavo Helde
Sep 30, 2022, 2:00:06 PM
On 30.09.2022 20:20, Kaz Kylheku wrote:

> Scrambled eval orders in a language in which side effects are the
> principal means of getting anything done is a poor technical situation.
>

That must be some other language. I have never written any C++ code
where side effects were the principal means of getting something done.

If anything, C++ in general has moved towards more functional style over
the years, with fewer mutable objects, not to speak about objects
mutated by side effects.


David Brown
Oct 1, 2022, 6:47:24 AM
No software is perfect - there's always something that is inconvenient
or awkward. And there are always some cases where you want to do
something different from the norm.

Anyway, thanks for the explanation - and now back to the interesting C
and C++ stuff!

David Brown
Oct 2, 2022, 7:38:53 AM
Yes. And it is not uncommon in many fields to blame the tools for
mistakes of the users. Sometimes such blame is fair, of course - but
usually not.

> Yet it is empirically valid that those diverse alternatives
> don't have the same safety, and we evolve things accordingly.
>
> Speaking of returning the cross-posting to comp.lang.c++, that language
> as of C++17 has chosen to define some evaluation orders.
>

Yes. Primarily it is for the case of "cout << a() << b();". This is a
situation where specific ordering really does help the programmer.

Language changes are sometimes a good idea, but there are almost always
trade-offs to consider. Things that one person sees as clearly a good
idea, will sound crazy and be a pain for someone else.

> The evaluation order of function arguments is not yet defined, but
> now there is a sequence point between the argument expressions.
>
> That is obviouslly super important, because now the unspecified behavior
> stays unspecified when there are side effects like:
>
> f(i++, i++);
>
> it doesn't help with the situation
>
> f(g(), h())
>
> because those calls are sequenced. However, it helps in the
> argument space: with just the classic sequencing of function calls not
> being interleaved, this is still undefined:
>
> f(g(i++), h(i++))
>

There are always going to be things that are undefined or unspecified.
There are always going to be ways to write things that make no sense, or
at least no well-defined and consistent sense. A language can aim to
reduce these possibilities, but it comes at a big cost. There are three
disadvantages, as I see them. One is that it tells programmers that
even though a piece of code is unclear to humans, it is valid code - and
then some people write code like "f(i++, i++)" that is hard for others
to comprehend. Another is that it reduces the ability of compilers and
tools to help you write good, clear code (compilers can warn on
"f(i++, i++)" when it is not defined behaviour). The third is that it
reduces optimisation possibilities for perfectly good code.


> whereas by my understanding C++17 leaves it unspecified in which order g
> and h will be called, and which one will have receive the original value
> of i. But, I think, we can infer that that function which is called
> first receives the original value of i, and i is reliably incremented
> twice. Baby steps in the right direction, in any case.
>

No - that is the /wrong/ message to take from the C++17 changes. It is
not "baby steps in the right direction" - it is a case of minimal
changes needed to give defined behaviour to common incorrect code that
has been used in practice for years. The key motivation is that a lot
of code has been written that assumes "cout << f() << g();" calls "f()"
first, then "g()".

It is not a step in a direction towards more general changes, because
that would mean making at least some existing correct code less efficient.

> I don't understand the rationale for sequencing A before B in A << B
> (what is so special about shifting); there must be some motivating
> example where you have a side effect in A that B depends on. What I'm
> likely missing that it's probably not the built-in arithmetic << that is
> of concern, but overloads.
>

Correct.

>> But you don't get to pick different rules that you'd prefer to have for
>> the language, and then blame the language when your code doesn't work.
>> It is not the language that is at fault if someone writes code that
>> relies on execution order that is left unspecified by the standards -
>> it's the programmers' fault.
>
> It's not a game of blame, but of reducing the unfortunate situations in
> which someone has a reason to look for something to blame.
>

Fair enough.

> I can't look at a large body of code (that I perhaps didn't write: so no
> responsibility of mine) and easily know whether there is a problem due
> to eval order.

Analysing the correctness of other people's code is never an easy job!

However, I don't think a specified evaluation order would help. If you
see "f(g(), h());" in the code, you are concerned that the programmer
might be assuming that "g()" is evaluated before "h()". But if you know
the programmer was good at his/her job, you will know that he/she would
have used temporaries and run g() and h() before f() if the order
mattered. So you know the code is correct, and the order does not matter.

However, if the order were specified by the language, then you would
still know the code is correct, but you would not know whether the order
of the calls matters (with the programmer relying on the evaluation
order) or not.

The lack of specification in the language gives you /more/ information,
not less.

And if you can't rely on the original programmer's competence, you can't
rely on anything in the code without more detailed checking anyway.


Sometimes tighter specification for things like evaluation order gives
you benefits, but not always - it can just as well be a disadvantage.

Ben Bacarisse
Oct 2, 2022, 8:01:10 AM
David Brown <david...@hesbynett.no> writes:

> On 30/09/2022 19:20, Kaz Kylheku wrote:
<cut>
>> Speaking of returning the cross-posting to comp.lang.c++, that language
>> as of C++17 has chosen to define some evaluation orders.
>
> Yes. Primarily it is for the case of "cout << a() << b();". This is
> a situation where specific ordering really does help the programmer.

I'm curious. How does a specific ordering help the programmer here?
Even with no specified order of evaluation, the output must look as if
the result of b() was appended to the stream resulting from the
left-most << operator. All the ordering does is cause any side effects
in a() and b() to be predictably ordered, and that might look more like
helping the programmer to write dodgy code.

--
Ben.

David Brown
Oct 2, 2022, 10:20:51 AM
Yes, it is a matter of side-effects. Without side-effects in "a()" or
"b()", the result is clear regardless of the order of evaluation.

And if there are side-effects, and order matters, then programmers can
(and should, at least until C++17) write along the lines of :

const auto res_a = a();
const auto res_b = b();
cout << res_a << res_b;

But it seems that many programmers have been assuming that using
iostreams for output like this has a specified order - even when they
don't make such assumptions about other kinds of expressions. I suppose
it is something about the appearance of the code that makes it look like
there is an order to the evaluation.

Maybe it is not accurate to say this "helps the programmer", and it is
more correct to say that it makes what used to be dodgy code into
correct code. But if it means their old dodgy code continues to work as
they intended even when they use newer compilers that do more
re-ordering optimisations, then it helps them.

A disadvantage of this change is that it might make some programmers
think ordering is more tightly specified for other kinds of expression
as well, and rely on such incorrect assumptions.

(I'm not convinced that this evaluation order change in C++17 is a good
idea, or worth the complications of having this special case. I am just
trying to say why it is different from a more general evaluation order
specification, and why the C++ committee thought it was worth changing.)

Tim Rentsch
Oct 2, 2022, 10:24:47 AM
For this sub-thread I think the posting should have been limited
to comp.lang.c++, and not included comp.lang.c.

Tim Rentsch
Oct 2, 2022, 10:31:20 AM
Paavo Helde <ees...@osa.pri.ee> writes:

> On 30.09.2022 20:20, Kaz Kylheku wrote:
>
>> Scrambled eval orders in a language in which side effects are the
>> principal means of getting anything done is a poor technical situation.
>
> That must be some other language. I have never written any C++ code
> where side effects were the principal means of getting something done.

That claim is very hard to believe.

> If anything, C++ in general has moved towards more functional style
> over the years, with fewer mutable objects, not to speak about objects
> mutated by side effects.

I feel obliged to mention that comments about C++ are not topical
in comp.lang.c, and the Newsgroups: line should have been
adjusted accordingly.

Tim Rentsch
Oct 2, 2022, 3:57:21 PM
Michael S <already...@yahoo.com> writes:

> On Friday, September 23, 2022 at 1:02:17 PM UTC+3, Juha Nieminen wrote:
>
>> As another example: "convert(...)" might feel more convenient to
>> write than something like "convert_to_utf8_from_utf16(...)", but
>> you have to think about the code from the perspective of someone
>> who is reading it and doesn't already know what it's doing.
>> The latter may be significantly longer but it expresses infinitely
>> more clearly what it's doing (even without seeing what the parameters
>> are). Someone seeing just "convert(...)" in the code can't have any
>> idea what it's doing.
>
> It sounds like an argument against C++-style static polymorphism.

Do you mean ad hoc polymorphism, aka function overloading? Or do
you mean something else, e.g., template-related?

Tim Rentsch
Oct 2, 2022, 3:59:29 PM
Manfred <non...@add.invalid> writes:

> On 9/23/2022 9:50 AM, Juha Nieminen wrote:
>
>> In comp.lang.c++ Kaz Kylheku <864-11...@kylheku.com> wrote:
>>
>>> ["Followup-To:" header set to comp.lang.c.]
>>> On 2022-09-21, Lynn McGuire <lynnmc...@gmail.com> wrote:
>>>
>>>> Microsoft Azure CTO Mark Russinovich: C/C++ should be deprecated
>>>> https://devclass.com/2022/09/20/microsoft-azure-cto-on-c-c/
>>>>
>>>> It's time to halt starting any new projects in C/C++ and use Rust
>>>> for those scenarios where a non-GC language is required.
>>>
>>> Those scenarios are almost never, though.
>>>
>>> The niche is small, and nothing needs replacing in it.
>>
>> I wouldn't call embedded programming to be a "small niche".
>>
>> (Well, the subset of embedded programming that happens on processors
>> so small that they can't run Linux, at least.)
>
> Non-GC is definitely not for embedded programming only.
> I have seen a pretty large project for digital imaging workstations
> fail because of the non deterministic nature of GC.

Garbage collection is not inherently non-deterministic.

Manfred
Oct 2, 2022, 4:05:56 PM
On 10/2/2022 4:20 PM, David Brown wrote:
> On 02/10/2022 14:00, Ben Bacarisse wrote:
>> David Brown <david...@hesbynett.no> writes:
>>
>>> On 30/09/2022 19:20, Kaz Kylheku wrote:
>> <cut>
>>>> Speaking of returning the cross-posting to comp.lang.c++, that language
>>>> as of C++17 has chosen to define some evaluation orders.
>>>
>>> Yes.  Primarily it is for the case of "cout << a() << b();".  This is
>>> a situation where specific ordering really does help the programmer.
>>
>> I'm curious.  How does a specific ordering help the programmer here?
>> Even with no specified order of evaluation, the output must look as if
>> the result of b() was appended to the stream resulting from the
>> left-most << operator.  All the ordering does is cause any side effects
>> in a() and b() to be predictably ordered, and that might look more like
>> helping the programmer to write dodgy code.
>>
>
> Yes, it is a matter of side-effects.  Without side-effects in "a()" or
> "b()", the result is clear regardless of the order of evaluation.
>
> And if there are side-effects, and order matters, then programmers can
> (and should, at least until C++17) write along the lines of :
>
>     const auto res_a = a();
>     const auto res_b = b();
>     cout << res_a << res_b;
>

I disagree that this change is to be interpreted as "giving some help to
bad programmers".

The key point is that this is about the insertion operator, not binary
shift (here the reuse of C++ syntax admittedly does not help).
Insertion operators, specifically in combination with streams, have
been explicitly designed /from day one/ to allow for:

cout <<
value_1 <<
value_2 <<
value_3 <<
value_4 <<
value_5 <<
value_6;

Along the same lines:

cout <<
f_1() <<
f_2() <<
f_3() <<
f_4() <<
f_5() <<
f_6();

That is just the expected semantics, which good language design should account for.

I am pretty confident that the construct that you suggest as "the right
way" was not the intention of the language designers - think e.g. IO
manipulators.