Understanding the evaluation order compromise


Nicol Bolas

Jul 9, 2016, 2:00:30 PM
to ISO C++ Standard - Future Proposals
C++17's CD has apparently voted in the expression evaluation order proposal, but as I understand it, they used the "Alternate Evaluation Order for Function Calls", which says:

the expression in the function position is evaluated before all the arguments and the evaluations of the arguments are indeterminately sequenced, but with no interleaving.

So, I want to make sure I understand something.

This has well defined behavior:

template<typename... Ts>
void func(Ts&&... args)
{
    vector<int> v;
    v.reserve(sizeof...(Ts));

    (v.insert(v.end(), std::forward<Ts>(args)), ...);
}

This will insert to the end of `v` every element in `args`, in the order of the arguments passed to `func`.

But this will not:

template<typename... Ts>
void func(Ts&&... args)
{
    vector<int> v;
    v.reserve(sizeof...(Ts));

    auto inserts = make_array(v.insert(v.end(), std::forward<Ts>(args))...);
}

So, by the mere act of calling a function on the return values, the state of `v` is unspecified.

Now, the new sequencing rules will at least ensure that this code results in a valid `v`. That is, each `v.end` will be called, followed by its corresponding `v.insert`. There won't be any interleaving, so `v` will have actual data in it in some order. And the returned iterators will all be equally valid.

But there also won't be a guaranteed order. All because it was wrapped in a function call.

Is my understanding of things correct? Would people not consider this highly surprising behavior?

Edward Catmur

Jul 10, 2016, 5:52:14 PM
to ISO C++ Standard - Future Proposals
Yes, your understanding is correct to the best of my knowledge. It's mildly surprising, certainly; but C++ has plenty of mildly surprising corner cases. In mitigation, the syntax is different (no comma), so you can hardly say that it's the same code wrapped in a function call, and the code as written is somewhat artificial (why not use array directly with a braced-init-list via template argument deduction?).

For the benefit of those reading up on this for the first time, and who did not have the pleasure of participating in previous discussions on the topic, there are several good reasons for preserving the current unspecified ordering. Among those I find most compelling are refactoring, optimization, and backwards compatibility. Also, of course, with the ordering unspecified, implementations are at liberty to fix on an LTR or RTL ordering, and there is nothing preventing a further version of the standard from making that change; but to do so now would be nearly irrevocable if it proved unsatisfactory.

Hyman Rosen

Jul 11, 2016, 12:17:31 PM
to std-pr...@isocpp.org
The irrevocable wrong decision that was made was that in assignments, the RHS is evaluated before the LHS.

On Sun, Jul 10, 2016 at 5:52 PM, Edward Catmur <e...@catmur.co.uk> wrote:
Yes, your understanding is correct to the best of my knowledge. It's mildly surprising, certainly; but C++ has plenty of mildly surprising corner cases. In mitigation, the syntax is different (no comma) so you can hardly say that it's the same code wrapped in a function call, and the code as written is somewhat artificial (why not use array directly with a braced-init-list via template argument deduction)?

For the benefit of those reading up on this for the first time, and who did not have the pleasure of participating in previous discussions on the topic, there are several good reasons for preserving the current unspecified ordering. Among those I find most compelling are  refactoring, optimization, backwards compatibility. Also, of course, with the ordering unspecified implementations are at liberty to fix on an LTR or RTL ordering, and there is nothing preventing a further version of the standard from making that change, but to do so now would be nearly irrevocable if it proved dissatisfactory.

--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/9f9be10e-ea35-4fc8-84c5-e71c770063dd%40isocpp.org.

Greg Marr

Jul 11, 2016, 3:16:23 PM
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 12:17:31 PM UTC-4, Hyman Rosen wrote:
The irrevocable wrong decision that was made was that in assignments, the RHS is evaluated before the LHS.

Why do you consider that the wrong decision, and what would have been the right decision?

Nicol Bolas

Jul 11, 2016, 4:35:24 PM
to ISO C++ Standard - Future Proposals
On Sunday, July 10, 2016 at 5:52:14 PM UTC-4, Edward Catmur wrote:
Yes, your understanding is correct to the best of my knowledge. It's mildly surprising, certainly; but C++ has plenty of mildly surprising corner cases.

Yes. And the point of specifying the order of evaluation was to eliminate a big one of those. And this doesn't.

Of course, the point of braced-init-lists was to eliminate initialization-based corner cases too. But it didn't. And the point of perfect forwarding was to eliminate forwarding-based corner cases. But it didn't.

So I guess this one fits right in: evaluation order is perfectly well-specified... except when it isn't.
 
In mitigation, the syntax is different (no comma) so you can hardly say that it's the same code wrapped in a function call,

It is as far as the user is concerned. Capturing the return value of expressions should not suddenly cause the code's behavior to change. Even if you had to give it a different spelling to make it work right.

and the code as written is somewhat artificial (why not use array directly with a braced-init-list via template argument deduction)?

... because that's not possible? Maybe I'm misunderstanding what you're suggesting here. But I don't know of a way to turn a braced-init-list into a std::array that involves template argument deduction.

Sure, you could do this:

auto inserts = array<vector<int>::iterator, sizeof...(args)>{v.insert(v.end(), std::forward<Ts>(args))...};

But the whole point of `make_array` is that you don't have to specify the type or the count of values; they can be deduced. Since `std::array` is an aggregate, I see no way for it to deduce the number of arguments or their types. Even the new constructor deduction scheme introduced in C++17 only applies to actual constructors, and `array` doesn't have any.

Hyman Rosen

Jul 11, 2016, 4:36:14 PM
to std-pr...@isocpp.org
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.


Nicol Bolas

Jul 11, 2016, 5:16:42 PM
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 4:36:14 PM UTC-4, Hyman Rosen wrote:
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.

I think saying that it's irrevocably broken is a bit hyperbolic. Certainly, it cannot be changed. But even if the order is not what is expected, it is still some order.

Also, the CD is not the final word on C++17. National body comments and so forth can change things.

Greg Marr

Jul 11, 2016, 5:22:17 PM
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 4:36:14 PM UTC-4, Hyman Rosen wrote:
The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

Agree 100%.
 
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.

I agree with the first part, was only asking about the second part.  Thanks for the information.

I was actually thinking of something completely different than what came up when I was looking things up based on your answer.
The reference I found was this:

int a[2] = {0, 1};
int b = 1;
a[b] = b = 0;

(adjust syntax as appropriate for language).

The question was whether this changes a[0] or a[1].
The best answer is, of course, "don't do that".
It seems that the answer to this is a[1] in Java and a[0] in Python.

What I was thing of was

a() = b() + c();

To me it made sense to order it like this:

auto temp_b = b();
auto temp_c = c();
auto temp_sum = temp_b + temp_c;
a() = temp_sum;

This is, I believe, based on how I might write this in English:
Compute b() and c(), add them together, and assign that value to a().

Are you suggesting this instead?

auto &&temp_a = a();
auto &&temp_b = b();
auto &&temp_c = c();
temp_a = temp_b + temp_c;

Greg Marr

Jul 11, 2016, 5:24:34 PM
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 5:22:17 PM UTC-4, Greg Marr wrote:
What I was thing of was

What I was *thinking* of was
 
To me it made sense to order it like this:

auto temp_b = b();
auto temp_c = c();
auto temp_sum = temp_b + temp_c;
a() = temp_sum;

This is, I believe, based on how I might write this in English:
Compute b() and c(), add them together, and assign that value to a().

Are you suggesting this instead?

auto &&temp_a = a();
auto &&temp_b = b();
auto &&temp_c = c();
temp_a = temp_b + temp_c;

Ignore the differences between auto and auto && in those two examples.

Richard Smith

Jul 11, 2016, 6:04:14 PM
to std-pr...@isocpp.org
On Mon, Jul 11, 2016 at 1:35 PM, Hyman Rosen <hyman...@gmail.com> wrote:
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.

As far as I recall, you're right that the choice to evaluate the RHS of an assignment before the LHS was made in order to match the associativity rules. This is not completely disconnected from order of evaluation; given:

  a = b = c = d

... the operator= calls are ordered from right to left, so there's at least some symmetry in ordering the subexpressions from right to left also. (That order also minimizes the amount of temporary state that must be retained for an unparenthesized expression.)

That said, I do agree that this choice still seems a little arbitrary. But I don't think it's necessarily outright wrong, as you claim.

On Mon, Jul 11, 2016 at 3:16 PM, Greg Marr <greg...@gmail.com> wrote:
On Monday, July 11, 2016 at 12:17:31 PM UTC-4, Hyman Rosen wrote:
The irrevocable wrong decision that was made was that in assignments, the RHS is evaluated before the LHS.

Why do you consider that the wrong decision, and what would have been the right decision?


Edward Catmur

Jul 11, 2016, 8:07:48 PM
to std-pr...@isocpp.org


On 11 Jul 2016 9:35 p.m., "Nicol Bolas" <jmck...@gmail.com> wrote:
>
> On Sunday, July 10, 2016 at 5:52:14 PM UTC-4, Edward Catmur wrote:
>>
>> In mitigation, the syntax is different (no comma) so you can hardly say that it's the same code wrapped in a function call,
>
>
> It is as far as the user is concerned. Capturing the return value of expressions should not suddenly cause the code's behavior to change. Even if you had to give it a different spelling to make it work right.

If you don't understand the reason behind the different spelling, you're programming by permutation. Which is a fine enough technique, but it will bite you sooner or later.

>> and the code as written is somewhat artificial (why not use array directly with a braced-init-list via template argument deduction)?
>
>
> ... because that's not possible? Maybe I'm misunderstanding what you're suggesting here. But I don't know of a way to turn a braced-init-list into a std::array that involves template argument deduction.
>
> Sure, you could do this:
>
> auto inserts = array<vector<int>::iterator, sizeof...(args)>{v.insert(v.end(), std::forward<Ts>(args))...}
>
> But the whole point of `make_array` is that you don't have to specify the type or the count of values; they can be deduced. Since `std::array` is an aggregate, I see no way for it to deduce the number of arguments or their types. Even the new constructor deduction scheme introduced in C++17 only applies to actual constructors, and `array` doesn't have any.
>

No, you're right, I was expecting that template deduction guides should work for array. It's a pity if they won't.

Edward Catmur

Jul 11, 2016, 8:27:41 PM
to std-pr...@isocpp.org


On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.

> The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

There will always be implementation variance especially between platforms. A blanket statement that it is always bad ignores the multitude of reasons to permit and encourage implementation variance. And of course the implementation variance already exists, so you won't be able to change the behavior of existing platforms.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.

And is making C++ more similar to Java supposed to be a *good* thing?

> The C++17 change is wrong in two parts.
>
> First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.
>
> Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.

Quite.

> On Mon, Jul 11, 2016 at 3:16 PM, Greg Marr <greg...@gmail.com> wrote:
>>
>> On Monday, July 11, 2016 at 12:17:31 PM UTC-4, Hyman Rosen wrote:
>>>
>>> The irrevocable wrong decision that was made was that in assignments, the RHS is evaluated before the LHS.
>>
>>
>> Why do you consider that the wrong decision, and what would have been the right decision?
>>

Jeffrey Yasskin

Jul 11, 2016, 8:28:07 PM
to std-pr...@isocpp.org
On Sat, Jul 9, 2016 at 11:00 AM, Nicol Bolas <jmck...@gmail.com> wrote:
C++17's CD has apparently voted in the expression evaluation order proposal, but as I understand it, they used the "Alternate Evaluation Order for Function Calls", which says:

the expression in the function position is evaluated before all the arguments and the evaluations of the arguments are indeterminately sequenced, but with no interleaving.

So, I want to make sure I understand something.

This has well defined behavior:

template<typename... Ts>
void func(Ts&&... args)
{
    vector<int> v;
    v.reserve(sizeof...(Ts));

    (v.insert(v.end(), std::forward<Ts>(args)), ...);
}

This will insert to the end of `v` every element in `args`, in the order of the arguments passed to `func`.

But this will not:

template<typename... Ts>
void func(Ts&&... args)
{
    vector<int> v;
    v.reserve(sizeof...(Ts));

    auto inserts = make_array(v.insert(v.end(), std::forward<Ts>(args))...);
}

So, by the mere act of calling a function on the return values, the state of `v` is unspecified.

I don't expect this to convince you, but note that "calling a function on the return value" of the (v.insert(v.end(), std::forward<Ts>(args)), ...) expression would look like:

make_array((v.insert(v.end(), std::forward<Ts>(args)), ...));

which will continue to evaluate in order, but of course won't give an array of the results of the insert() calls.
 
Now, the new sequencing rules will at least ensure that this code results in a valid `v`. That is, each `v.end` will be called, followed by its corresponding `v.insert`. There won't be any interleaving, so `v` will have actual data in it in some order. And the returned iterators will all be equally valid.

But there also won't be a guaranteed order. All because it was wrapped in a function call.

Is my understanding of things correct? Would people not consider this highly surprising behavior?

It's definitely surprising that different uses of commas behave differently, but it may be necessarily surprising. I think we don't *know* that the surprise is necessary yet, but there wasn't enough confidence to lock in the change. We made it somewhat better in the C++17 CD, but it's certainly not perfect. Hopefully we'll get papers proposing and justifying further improvements for C++20.

Jeffrey

FrankHB1989

Jul 11, 2016, 9:57:54 PM
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 4:36:14 AM UTC+8, Hyman Rosen wrote:
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Second, it specifies that in assignments, the RHS is evaluated before the LHS (I believe because of a mistaken notion of what associativity means).  This makes the Standard irretrievably broken, because it becomes impossible to specify the correct left-to-right behavior in the future.

Stop spreading the rumor that there is a single "correct" evaluation order. Left-to-right is not always the "correct" one. In fact, for assignment it can be very unintuitive to force people to determine the value of the left operand before the value of the right operand: in most cases the right operand needs the more complicated computation, which raises problems when evaluating the assignment expression in one's head. It is tedious to have to hold the result of the left operand while evaluating the right operand, so people should have the freedom to evaluate the right operand first. So, unless you change the semantics of assignment expressions (or simply enforce a different operand order for assignment, as with Intel vs. AT&T assembly syntax), your claim is not valid.

 

FrankHB1989

Jul 11, 2016, 10:42:10 PM
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 4:36:14 AM UTC+8, Hyman Rosen wrote:
The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.


The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

On the contrary, a defined order keeps code that "accidentally works" working only by accident, with far less opportunity for the problem to be found early. It can hide bugs that the implementation cannot easily detect automatically, i.e. bugs that come from not considering which evaluation order (if any) is really needed, something a compiler or code checker can almost never know. For example, "a = b = c;" is strictly less clear than "a = c; b = c;" even when both of them "accidentally work". The reason they "accidentally work" is that they happen to be semantically equivalent under the language rules, not by nature or some magic power. If volatile-qualified variables are involved, it can be fatal to assume they are still equivalent.

You have not fixed the real bugs; you have only worked around the visible risks. It is hard to foresee the other risks, and you cannot guarantee the change will never make some other bug more subtle to fix. You can measure and ensure it is feasible for you or your company, but not for others. So this is hardly an approach worth encouraging in general.
 
The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Why should code be read left-to-right? It is usually only parsed left-to-right. As for "trivial"... are you assuming there is only one "correct" way for other people's brains to process the text?

And why should C++ match Java? Further, note that the Java Language Specification itself recommends against code that relies crucially on left-to-right evaluation order. Is that "trivial"?
 
The C++17 change is wrong in two parts.

First, it does not specify order of evaluation completely, so accidentally working code will continue to be a problem.  This can be fixed in a later version of the standard.

Forbidding users from leaving the order intentionally unspecified is a real defect, because it sets portability against expressiveness (which implies maintainability, performance, etc.).
 

FrankHB1989

Jul 11, 2016, 10:46:43 PM
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 8:27:41 AM UTC+8, Edward Catmur wrote:


On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


This could be resolved by introducing specific primitives for specifying an evaluation order that does not obey this rule. However, they would be lengthy, and users would be reluctant to use them; the default rule would hand users an excuse.

The true harm of such a decision is that it encourages stupidity and laziness as the default case. Freedom of expressiveness is defeated by the freedom not to think things through. This is offensive to people who want to write "more correct" (more precise, easier to read, easier to maintain) code, not merely "runnable" code: it makes it harder for them to achieve their goals while giving them nothing.
 

> The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

There will always be implementation variance especially between platforms. A blanket statement that it is always bad ignores the multitude of reasons to permit and encourage implementation variance. And of course the implementation variance already exists, so you won't be able to change the behavior of existing platforms.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Yes. This exactly matches my experience, and not only with C++. If a user can only work through sequenced material in a single strict order, I would suspect something is wrong. Moreover, assuming left-to-right to be the natural order may be offensive to people whose natural languages are laid out right-to-left.
 

And is making C++ more similar to Java supposed to be a *good* thing?


And I can imagine that making C++ less similar to C here would have more impact in reality.
 

Nicol Bolas

Jul 12, 2016, 12:34:57 AM
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 10:46:43 PM UTC-4, FrankHB1989 wrote:
On Tuesday, July 12, 2016 at 8:27:41 AM UTC+8, Edward Catmur wrote:


On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


This could be resolved by introducing specific primitives for specifying an evaluation order that does not obey this rule. However, they would be lengthy, and users would be reluctant to use them; the default rule would hand users an excuse.

The true harm of such a decision is that it encourages stupidity and laziness as the default case. Freedom of expressiveness is defeated by the freedom to lack consideration.

Accepting reality should supersede all else. Reality has shown us that, 99% of the time, programmers assume things happen in a left-to-right order. Which means that your vaunted "freedom of expressiveness" is only being used 1% of the time.

C++ is a practical language, not some ivory-tower experiment. We must bow to reality when it is presented to us. And reality tells us, time and again, that we lose far more from undefined evaluation order than we gain from it. We lose time in debugging. We lose time in development. We lose mental time in having to think about it.

The only thing we gain from it is making it harder to become a proficient C++ programmer. And that's not a benefit of the language.

This is offensive to people who want to write "more correct" (more precise, easier to read, easier to maintain) code - not merely "runnable" code: it makes it harder for them to achieve their goals while giving them nothing.

I don't know; I find this to be quite "correct" code:


auto inserts = make_array(v.insert(v.end(), std::forward<Ts>(args))...);

This code is very clear: for each item in the pack, insert it at the end, capturing those iterators in an array. I see no reason to express this code in a more verbose way. Well, besides the fact that it doesn't work, but that's only due to a silly language rule.

What is the "more correct" alternative? Show me how to write that same code in a way that is as clear and concise as this, which doesn't provoke undefined behavior.

And if `make_array` offends you, feel free to change it into any function that would take a sequence of iterators.

> The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

There will always be implementation variance especially between platforms. A blanket statement that it is always bad ignores the multitude of reasons to permit and encourage implementation variance. And of course the implementation variance already exists, so you won't be able to change the behavior of existing platforms.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Yes. This is exactly my experience, and not only in C++. If a user can work with sequenced material only under a single strict order, I would suspect something is wrong with his/her brain. Moreover, assuming left-to-right to be the natural order may be offensive to people who use natural languages with a right-to-left layout.

Wait a minute. Having a left-to-right order is offensive to people who use right-to-left languages... but having all our keywords being English is perfectly fine? What about the fact that the parser goes left-to-right; are they offended by that too? Maybe my brain is too damaged from not being able to read paragraphs at a single glance, but I really can't follow your logic here.

Also, programming isn't a sport; you don't get extra points for degree of difficulty.

And is making C++ more similar to Java supposed to be a *good* thing?


And I can imagine making C++ less similar to C here has more impact in reality.

C is not a subset of C++, and it hasn't been since... C++98. While there are people who write in the subset between C and C++, this is usually code intended to be compiled as either. And such people are generally writing C which can be compiled as C++, not writing C++ that can be compiled as C. So they will write it under C's rules first and foremost. Porting the other way is much rarer.

Also, let's not forget that, when it comes to complex expressions, C has nothing on C++. We have so many more tools to write complicated expressions than C does. That's why my example involved parameter packs and fold expressions; to me, they're the poster-child for making sequences of complex expressions that the user will expect to evaluate in order.

C++ needs it far more than C.

Nicol Bolas

unread,
Jul 12, 2016, 1:12:21 AM7/12/16
to ISO C++ Standard - Future Proposals
On Monday, July 11, 2016 at 8:27:41 PM UTC-4, Edward Catmur wrote:

On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


I've heard a lot of "sounds like" out of people defending the undefined order rules. I've heard far less "certainly is". Is there any genuine evidence of this "performance nightmare", or is it just fear of the unknown?

> The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

There will always be implementation variance especially between platforms. A blanket statement that it is always bad ignores the multitude of reasons to permit and encourage implementation variance. And of course the implementation variance already exists, so you won't be able to change the behavior of existing platforms.


Sure, there will always be implementation variance.

Now explain why we should allow this one.

As far as I'm concerned, removing needless variance is its own reward. So unless you can prove that the variance actually accomplishes something, it should be done away with.

It's not like C++ hasn't removed variance in the past once it was shown to be pointless. Consider, for example, C++11's division of the old POD concept into standard layout and trivial copyability. It also expanded the range of types that could be standard-layout or trivially copyable, compared to C++98. Why was standard layout expanded to allow empty base classes?

Because in virtually all implementations, empty base classes didn't interfere with layout. And thus the variance was proven to be pointless. The same goes for expanding trivial copyability.

How many compilers actually use the undefined evaluation order rules to optimize code? Do they re-order expressions based on what is best for that specific expression? Or do they always evaluate expressions in an arbitrary order? Because if such flexibility is not actually making code faster, then the variance is pointless and should be done away with.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Let's pretend this was true (despite the undeniable fact that "experienced users" make these mistakes too).

So what? Are we supposed to restrict C++ to "experienced users" as you define it?

And is making C++ more similar to Java supposed to be a *good* thing?


In and of itself? No. But it's not a bad thing either. A language feature should not be justified merely because "language X does it". But neither should it be dismissed for that reason.

FrankHB1989

unread,
Jul 12, 2016, 4:02:45 AM7/12/16
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 12:34:57 PM UTC+8, Nicol Bolas wrote:
On Monday, July 11, 2016 at 10:46:43 PM UTC-4, FrankHB1989 wrote:
On Tuesday, July 12, 2016 at 8:27:41 AM UTC+8, Edward Catmur wrote:


On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


This could be resolved by introducing specific primitives to request an evaluation order that does not obey this rule. However, they would be verbose, and users would be reluctant to use them; users would have an excuse granted by the default rule.

The true harm of such a decision is that it encourages stupidity and laziness as the default case. Freedom of expressiveness is defeated by the freedom to lack consideration.

Accepting reality should supersede all else. Reality has shown us that, 99% of the time, programmers assume things happen in a left-to-right order. Which means that your vaunted "freedom of expressiveness" is only being used 1% of the time.

What should be there is what is already there. As a formal encouragement, the rules should be long-lived for the foreseeable future, so they had better avoid compromising merely to the status quo. (Actually, I don't think the compromise is needed. Note that we already have ways to make 99% of cases work perfectly. The examples illustrated in the original proposal, however, are disputable.) And whether it is reality or not, if the assumption does not serve the need (of reality, of course) well, it should be thrown away. In my experience, the real need now is to do "the right thing", not "worse is better", because we already have enough of the latter, if not too much. Your idea seems to go in the wrong direction.

One more reality is that people get bitten when they think too little. No well-educated programmer around me assumes left-to-right evaluation, because the rules of the languages (not only C++) told them it should not be relied on, or because coding conventions forbid such risky styles. Unless you eliminate every non-left-to-right evaluation-order rule in the world (again, not only in C++), programmers who have to deal with multiple languages will learn this lesson sooner or later. Again, in reality, very few programmers learn and use only C++. (Notably, C has no such guarantee, so they must care about it.) After the rules change in C++, they have to care even more, unless they ignore these changes entirely - which is not a good attitude, anyway.
 
C++ is a practical language, not some ivory-tower experiment. We must bow to reality when it is presented to us. And reality tells us, time and again, that we lose far more from undefined evaluation order than we gain from it. We lose time in debugging. We lose time in development. We lose mental time in having to think about it.

The only thing we gain from it is making it harder to become a proficient C++ programmer. And that's not a benefit of the language.

No. I have no such experience of wasting time on these things. Just keep away from those suspicious styles of code, and nothing is lost. You eventually gain more (spare time, for example) because you don't have to learn such strange code patterns once they are identified as smelly. Only a few proficient reviewers need to know how to find them precisely and prevent them from harming the "practical" people.
 
This is offensive to people who want to write "more correct" (more precise, easier to read, easier to maintain) code - not merely "runnable" code: it makes it harder for them to achieve their goals while giving them nothing.

I don't know; I find this to be quite "correct" code:

auto inserts = make_array(v.insert(v.end(), std::forward<Ts>(args))...);

This code is very clear: for each item in the pack, insert it at the end, capturing those iterators in an array. I see no reason to express this code in a more verbose way. Well, besides the fact that it doesn't work, but that's only due to a silly language rule.

What is the "more correct" alternative? Show me how to write that same code in a way that is as clear and concise as this, which doesn't provoke undefined behavior.

I admit the language is silly here. The point is that there is no way to directly express the specific order you need here. But if the language were changed to meet your requirements, it would be equally silly for me when I don't want to depend on the order explicitly. It would be even sillier, because when you have no simple way to express a specified order, you can work around it with verbosity; whereas once the rules of specified order are settled, I cannot work around them without breaking portability. That is the net loss.

By "more correct" I mean the logical correctness of the code. Whenever you make the code work precisely as you need through specific rules you know, it is more correct than code relying on unconsciously, randomly selected and accidentally effective rules, though it may be wordier.

The root of the evil in your case is that the comma token does not play one role well. This insane design can be attributed to C, only because it needed to specify order in several uncommon contexts (e.g. in the condition clause of a loop statement). It couldn't consistently use a semicolon because of the design of the syntax. Your case is one of the victims of that silly design.

It is unfortunate that the comma operator is even messier in C++ as a result of overloadable operator comma. But actually, the whole silly thing is still that simple - the ability to specify order via the (built-in) comma operator, or more precisely, making the comma both an operator and a non-operator in similar contexts, for no reason other than to work around the silly syntax design. Why not a semicolon? The rules would be a lot simpler.

The answer may be compatibility. But then why throw away compatibility just now, in a hurry and in a far more disputable manner?

Despite the existing messy rules, a better solution would be to add new core rules to express your need clearly without such verbosity, e.g. allowing semicolons as separators of function arguments.

 
And if `make_array` offends you, feel free to change it into any function that would take a sequence of iterators.

> The reason for defining order is to avoid code that "accidentally works" - that is, code that has order dependencies such that its behavior is undefined or unspecified, but happens to be built with a compiler whose behavior matches the programmer's intent. Such code can mysteriously break when built on a different platform, long after it was written and has appeared to work correctly. We have personally encountered this situation in our company.

There will always be implementation variance especially between platforms. A blanket statement that it is always bad ignores the multitude of reasons to permit and encourage implementation variance. And of course the implementation variance already exists, so you won't be able to change the behavior of existing platforms.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Yes. This is exactly my experience, and not only in C++. If a user can work with sequenced material only under a single strict order, I would suspect something is wrong with his/her brain. Moreover, assuming left-to-right to be the natural order may be offensive to people who use natural languages with a right-to-left layout.

Wait a minute. Having a left-to-right order is offensive to people who use right-to-left languages... but having all our keywords being English is perfectly fine? What about the fact that the parser goes left-to-right; are they offended by that too? Maybe my brain is too damaged from not being able to read paragraphs at a single glance, but I really can't follow your logic here.
No, it does not. ... I mean, it is not English. In reality, some people ignorant of programming have already protested the "discrimination" of English-centric culture in programming languages. As a rebuttal, we told them that the words in common programming languages are never "English", just occasionally spelled in the Latin alphabet similarly to English words, for historical reasons.
I was illustrating that the difference between processing text in natural languages and in programming languages should not be ignored. Because I don't think they can effectively use the same path of logic in one's brain, there is no "natural" bonus for RTL evaluation. Further, I am sure different natural languages share little processing logic in my brain unless they are similar enough (which is already rare between arbitrary natural languages), though they may share similar parsing logic in typical cases (extraordinary cases are rare, e.g. Dongba symbols). Since I don't use RTL languages daily, I used "may" to indicate the possibility in case the points above do not hold.
As for a parser... it can go from left to right, or vice versa. It can even be neither LTR nor RTL, but I guess that would be very inefficient.

Also, programming isn't a sport; you don't get extra points for degree of difficulty.

And is making C++ more similar to Java supposed to be a *good* thing?


And I can imagine making C++ less similar to C here has more impact in reality.

C is not a subset of C++, and it hasn't been since... C++98. While there are people who write in the subset between C and C++, this is usually code intended to be compiled as either. And such people are generally writing C which can be compiled as C++, not writing C++ that can be compiled as C. So they will write it under C's rules first and foremost. Porting the other way is much rarer.

Yes, C++ should not be conflated with C. But again, in reality, there is plenty of "C/C++" code to maintain. People write code that is compiled by both C and C++ compilers (e.g. header files), or port C code to C++ with quite a few modifications (e.g. the GCC source tree). The "context switch" between working on C and C++ code bases is often a pain, so those people will likely continue to use only the common subset of C and C++ features. So the changes to the rules have almost no effect on them, if they are not confused.

The worse thing is that the changes also make it more complicated to move code between different versions of C++, since they are not bidirectionally compatible. Note that the affected area is so broad that a full review is always needed to port valid code in the new dialect back to a code base in an old dialect (very likely in reality). Tools may help, but not enough.

Is the extra effort worth it? Or is it insane to switch to the new C++ dialects?
 
Also, let's not forget that, when it comes to complex expressions, C has nothing on C++. We have so many more tools to write complicated expressions than C does. That's why my example involved parameter packs and fold expressions; to me, they're the poster-child for making sequences of complex expressions that the user will expect to evaluate in order.

This expectation is based on the vague and non-modular design of specific syntax (mostly borrowed from C), rather than on intuition.

Briefly, there is no problem with being able to specify the order. But if the unavoidable cost is to limit how the order may be specified in other cases, it is problematic and unacceptable.
 

Hyman Rosen

unread,
Jul 12, 2016, 12:00:55 PM7/12/16
to std-pr...@isocpp.org
We have been around and around this discussion many times before.  I know that there are people who do not accept that C++ should have strict left-to-right evaluation.  I believe that those people are utterly wrong, but that there is no way to convince them of that, not unlike other political situations.  To me, the notion that (a() << (b() << c())) and (a() = (b() = c())) should call a(), b(), and c() in different specified orders, while (a() + (b() + c())) should call them in unspecified order is so irrational that it beggars belief.

--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.

Jeffrey Yasskin

unread,
Jul 12, 2016, 12:02:12 PM7/12/16
to std-pr...@isocpp.org
On Mon, Jul 11, 2016 at 7:46 PM, FrankHB1989 <frank...@gmail.com> wrote:
The true harm of such decision is, it encourages stupidity and laziness as the default case.

Please try to stick to technical arguments on this mailing list, instead of personally attacking ("stupidity") the people who prefer a different outcome than you do.

Jeffrey

Hyman Rosen

unread,
Jul 12, 2016, 12:12:08 PM7/12/16
to std-pr...@isocpp.org
The "stupidity and laziness" argument demands that the programming language should contain constructs that must not be used, and that will act as traps for the unwary.  The notion that this is a desirable feature again seems to me so irrational that it beggars belief.


Edward Catmur

unread,
Jul 12, 2016, 12:15:46 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 5:34 AM, Nicol Bolas <jmck...@gmail.com> wrote:
Accepting reality should supersede all else. Reality has shown us that, 99% of the time, programmers assume things happen in a left-to-right order. Which means that your vaunted "freedom of expressiveness" is only being used 1% of the time.
 
If that were the case then query languages like SQL, Linq or Python comprehensions would be an abject failure.
 
C++ is a practical language, not some ivory-tower experiment. We must bow to reality when it is presented to us. And reality tells us, time and again, that we lose far more from undefined evaluation order than we gain from it. We lose time in debugging. We lose time in development. We lose mental time in having to think about it.

Reality tells us that order-dependent code is the exception; that 99% of the time code is non-order-dependent. Any time lost dealing with unexpected errors arising from undefined evaluation order would be lost anyway trying to comprehend the intent of the code.
 
I don't know; I find this to be quite "correct" code:

auto inserts = make_array(v.insert(v.end(), std::forward<Ts>(args))...);

This code is very clear: for each item in the pack, insert it at the end, capturing those iterators in an array. I see no reason to express this code in a more verbose way. Well, besides the fact that it doesn't work, but that's only due to a silly language rule.

And there is nothing in your problem specification to say that the inserts should happen in any particular order.
 
What is the "more correct" alternative? Show me how to write that same code in a way that is as clear and concise as this, which doesn't provoke undefined behavior.

auto inserts = {v.insert(v.end(), args)...};
 
And if `make_array` offends you, feel free to change it into any function that would take a sequence of iterators.

I have never *seen* a function that takes a sequence of iterators. When is such a beast ever encountered in the wild?

Also, let's not forget that, when it comes to complex expressions, C has nothing on C++. We have so many more tools to write complicated expressions than C does. That's why my example involved parameter packs and fold expressions; to me, they're the poster-child for making sequences of complex expressions that the user will expect to evaluate in order. 

If you're writing already complicated expressions, why complicate them further by making them order-dependent?

Hyman Rosen

unread,
Jul 12, 2016, 12:24:01 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 12:15 PM, 'Edward Catmur' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
If you're writing already complicated expressions, why complicate them further by making them order-dependent?

Because the order should be definite, so that this is as straightforward as depending on the ordering of statements.  The notion that ambiguous order of expression evaluation is a benefit is wrong, no matter how hard you believe otherwise.

Edward Catmur

unread,
Jul 12, 2016, 12:29:04 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 6:12 AM, Nicol Bolas <jmck...@gmail.com> wrote:
On Monday, July 11, 2016 at 8:27:41 PM UTC-4, Edward Catmur wrote:

On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


I've heard a lot of "sounds like" out of people defending the undefined order rules. I've heard far less "certainly is". Is there any genuine evidence of this "performance nightmare", or is it just fear of the unknown?

I can't speak for anyone else. In my case it is induction from observations that many existing optimizations depend to some extent on breaking naive intuitions of ordering.
 
How many compilers actually use the undefined evaluation order rules to optimize code? Do they re-order expressions based on what is best for that specific expression? Or do they always evaluate expressions in an arbitrary order? Because if such flexibility is not actually making code faster, then the variance is pointless and should be done away with.
 
It does not matter whether compilers are currently able to exploit their freedom. The free lunch is over. Clock speeds are not getting any faster. If there is any prospect of the freedom being useful in future then it should be retained until we know for sure one way or the other, especially if new instructions could help compilers exploit it.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Let's pretend this was true (despite the undeniable fact that "experienced users" make these mistakes too).

So what? Are we supposed to restrict C++ to "experienced users" as you define it?

It would be better to restrict C++ to experienced users than to make it useless to them. C++ is not in competition with Java; it is in competition with VHDL and Verilog.

Edward Catmur

unread,
Jul 12, 2016, 12:35:26 PM7/12/16
to std-pr...@isocpp.org
Ordering of statements is straightforward, and it is that imperative subset of the language that should be used when writing order-dependent code, either directly or via the library. That's what it's for!

Ren Industries

unread,
Jul 12, 2016, 1:03:38 PM7/12/16
to std-pr...@isocpp.org
The fact you continually assert this without evidence as if it is fact is what truly "beggars belief".

It is currently trivial to force a left-to-right ordering; simply make each subexpression its own statement, in the order you like. There is no syntax to enforce an "unordered" ordering. No one is forcing you to nest expressions; you can easily ensure your own ordering with the current syntax.

The notion that removing expressiveness from a language is a benefit is wrong, no matter how hard you believe otherwise...is equally valid a statement, in that it asserts something with no actual evidence, and may therefore be dismissed with no evidence as well.


Jeffrey Yasskin

unread,
Jul 12, 2016, 1:12:44 PM7/12/16
to std-pr...@isocpp.org
You also should moderate your language. :) Most people here are rational and not stupid, but they may value different things than you do. For example, some people value the 4% speed improvement that http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf demonstrated a compiler could achieve on some programs by fiddling intelligently with the evaluation order. It's not obvious to me that this 4% is worth the bugs it implies, but it's also not obvious to me that it's not.

Please look at the technical arguments instead of calling people who disagree with you "irrational".

Hyman Rosen

unread,
Jul 12, 2016, 1:47:45 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 1:12 PM, 'Jeffrey Yasskin' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
Please look at the technical arguments instead of calling people who disagree with you "irrational".

It's not a matter of technical arguments, it's a matter of values (as you say).  Science can tell you how to accomplish what you want, but it can't tell you what to want.  For me, the values involved in having a programming language that is simply specified and deterministic and consistent far outweigh the possible optimization benefit in letting the compiler pick evaluation order.  Similarly, I find negative value in having unspecified behavior in order to force programmers to be hyper-vigilant to avoid error.  I find negative value in the notion that unspecified order adds expressiveness.

C++ can do only one thing.  It cannot have evaluation order both specified and unspecified.  It cannot evaluate both the LHS and the RHS of an assignment first.  So values must necessarily come into conflict, and battles over values are political, personal, and frequently nasty.  Besides, what constitutes a technical argument?  I say that Java has strict left-to-right evaluation, and I find that to be a technical argument - clearly the Java designers made this a conscious choice.  But others dismiss this with "why would we want C++ to be like Java?"  Or conversely, people claim to find that some code can run 4% faster when order is not specified, and I say that I don't care.

Ultimately, my goal is to not be silent, so that when C++ goes the wrong (IMO) way, no one can say that this was done with the acquiescence of the entire C++ community.

And for your amusement, or horror:
    struct A { }; A f(); A g();
    void operator<<(A, A);
    auto shfpointer = (void(*)(A, A))&operator<<;
    operator<<(f(), g());  // must evaluate f() then g()
    shfpointer(f(), g());  // can evaluate f() and g() in either order

Edward Catmur

unread,
Jul 12, 2016, 2:27:53 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 6:47 PM, Hyman Rosen <hyman...@gmail.com> wrote:
On Tue, Jul 12, 2016 at 1:12 PM, 'Jeffrey Yasskin' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
Please look at the technical arguments instead of calling people who disagree with you "irrational".

It's not a matter of technical arguments, it's a matter of values (as you say).  Science can tell you how to accomplish what you want, but it can't tell you what to want.  For me, the values involved in having a programming language that is simply specified and deterministic and consistent far outweigh the possible optimization benefit in letting the compiler pick evaluation order.

We have programming languages that are simply specified and deterministic and consistent; they just aren't called C++.
 
C++ can do only one thing.  It cannot have evaluation order both specified and unspecified.  It cannot evaluate both the LHS and the RHS of an assignment first.

Absolutely it can; ask your vendor for a conforming extension.
 
  So values must necessarily come into conflict, and battles over values are political, personal, and frequently nasty.  Besides, what constitutes a technical argument?  I say that Java has strict left-to-right evaluation, and I find that to be a technical argument - clearly the Java designers made this a conscious choice.

What is appropriate for Java is not necessarily appropriate for C++.
 
 But others dismiss this with "why would we want C++ to be like Java?"  Or conversely, people claim to find that some code can run 4% faster when order is not specified, and I say that I don't care.

But you do understand that for some people a 4% performance difference is huge? 
 

Miro Knejp

unread,
Jul 12, 2016, 3:05:37 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 18:15 schrieb 'Edward Catmur' via ISO C++ Standard - Future Proposals:
And there is nothing in your problem specification to say that the inserts should happen in any particular order.
I shouldn't have to look up the specification to know what the code is supposed to do. This is about expectations. You, with thousands of hours of experience in C++, realized that the order is unspecified. Other people with thousands of hours of experience in C++ did not. What matters is what the intention of the author was and whether it matches what we read into it. Whichever the intention was, it's obviously not expressed unambiguously. As long as the code leaves the author's intentions ambiguous it is not a win for code clarity.


And there is nothing in your problem specification to say that the inserts should happen in any particular order.
 
What is the "more correct" alternative? Show me how to write that same code in a way that is as clear and concise as this, which doesn't provoke undefined behavior.

auto inserts = {v.insert(v.end(), args)...};
That makes it an initializer_list, meaning the values are const, meaning you cannot move them out.
Just saying.


If you're writing already complicated expressions, why complicate them further by making them order-dependent?
No system becomes more complicated or harder to understand by removing degrees of freedom.

Miro Knejp

unread,
Jul 12, 2016, 3:20:01 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 19:03 schrieb Ren Industries:
> The fact you continually assert this without evidence as if it is fact
> is what truly "beggars belief".
People also constantly assert without evidence that this is somehow
"removing expressiveness" from the language. I have asked in the past and
now again for code examples that clearly, without doubt, expresses that
the author's intention was *deliberately* for code to be
order-independent and not just a lucky side effect of how the language
works. And by example I do not mean "I just wrote an expression and it
happens to not depend on order of evaluation". No, I mean "I wrote this
expression and I *intentionally* require it to be order independent
because the future of the universe depends on it".
>
> It is currently trivial to force a left to right ordering; simply make
> each substatement its own statement, in the order you like. There is
> no syntax to enforce "unordered" ordering. No one is forcing you to
> nest statements; you can easily ensure your own ordering by using the
> current syntax.
>
> The notion that removing expressiveness from a language is a benefit
> is wrong, no matter how hard you believe otherwise...is equally valid
> a statement, in that it asserts something with no actual evidence, and
> may therefore be dismissed with no evidence as well.
How many people use this expressiveness to express their intention in
such a way that every reader of the code unambiguously understands their
intention?

Fact is human brains are very bad at doing (or imagining) multiple
things happening at the same time which is why multithreading is such a
hard problem. It is easier to go through something step by step than
jumping around wildly or even doing things in parallel. Having a fixed
order makes it easier to understand the solution because we can begin at
one determined side of the expression and go through it one step at a time.

Edward Catmur

unread,
Jul 12, 2016, 3:23:24 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 8:06 PM, Miro Knejp <miro....@gmail.com> wrote:
Am 12.07.2016 um 18:15 schrieb 'Edward Catmur' via ISO C++ Standard - Future Proposals:
And there is nothing in your problem specification to say that the inserts should happen in any particular order.
I shouldn't have to look up the specification to know what the code is supposed to do.

Not the specification of the C++ Standard; the specification of the problem. As stated by Nicol Bolas, the problem did not specify that the inserts should happen in the order of increasing indices into the parameter pack.
 
What matters is what the intention of the author was and whether it matches what we read into it. Whichever the intention was it's obviously not expressed unambiguously. As long as the code makes the author's intentions ambiguous it is not a win for code clarity.

If the intent of the code is not obvious to a reader, it will also not be obvious to a maintainer; which is why this touches on refactoring and other maintenance tasks as well as performance and compatibility.
 
auto inserts = {v.insert(v.end(), args)...};
That makes it an initializer_list, meaning the values are const, meaning you cannot move them out.
Just saying.

 Moving iterators?
If you're writing already complicated expressions, why complicate them further by making them order-dependent?
No system becomes more complicated or harder to understand by removing degrees of freedom.

 Try tying a knot in 4 dimensions.

Ren Industries

unread,
Jul 12, 2016, 3:31:15 PM7/12/16
to std-pr...@isocpp.org
I have not asserted without evidence that it removes expressiveness; I have stated, below that very assertion, the evidence. It is indisputable that you are removing the ability to do something; that's the very point of what you wish to do.

The fact you have asked for a higher burden of proof is meaningless; you are moving goal posts, which is intellectually dishonest.

You may consider the higher goal post more meaningful; that's fine. But I've clearly met the lower goal post that you then claimed I have not met. 

--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.

Ren Industries

unread,
Jul 12, 2016, 3:32:01 PM7/12/16
to std-pr...@isocpp.org
Complex numbers disagree; removing the degree of freedom there strictly makes it more difficult to comprehend.


Edward Catmur

unread,
Jul 12, 2016, 3:33:36 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 8:21 PM, Miro Knejp <miro....@gmail.com> wrote:
Am 12.07.2016 um 19:03 schrieb Ren Industries:
The fact you continually assert this without evidence as if it is fact is what truly "beggars belief".
People also constantly assert without evidence that this is somehow "removing expressiveness" from the language. I have asked in the past and now again for code examples that clearly, without doubt, express that the author's intention was *deliberately* for code to be order-independent and not just a lucky side effect of how the language works. And by example I do not mean "I just wrote an expression and it happens to not depend on order of evaluation". No, I mean "I wrote this expression and I *intentionally* require it to be order independent because the future of the universe depends on it".

Any time anyone has written code to use the parallel mode extensions of libstdc++.
 
Fact is human brains are very bad at doing (or imagining) multiple things happening at the same time which is why multithreading is such a hard problem. It is easier to go through something step by step than jumping around wildly or even doing things in parallel. Having a fixed order makes it easier to understand the solution because we can begin at one determined side of the expression and go through it one step at a time.

We're going to have to get better at it, because the 4GHz clock limit isn't going away any time soon. I don't want to see C++ lose a natural way to express soft parallelism because of poor choices made by programmers who still remember the era of the free lunch.

Miro Knejp

unread,
Jul 12, 2016, 3:42:53 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 21:33 schrieb 'Edward Catmur' via ISO C++ Standard - Future Proposals:
On Tue, Jul 12, 2016 at 8:21 PM, Miro Knejp <miro....@gmail.com> wrote:
Am 12.07.2016 um 19:03 schrieb Ren Industries:
The fact you continually assert this without evidence as if it is fact is what truly "beggars belief".
People also constantly assert without evidence that this is somehow "removing expressiveness" from the language. I have asked in the past and now again for code examples that clearly, without doubt, express that the author's intention was *deliberately* for code to be order-independent and not just a lucky side effect of how the language works. And by example I do not mean "I just wrote an expression and it happens to not depend on order of evaluation". No, I mean "I wrote this expression and I *intentionally* require it to be order independent because the future of the universe depends on it".

Any time anyone has written code to use the parallel mode extensions of libstdc++.
Where you don't parallelize the evaluation of arguments to an algorithm but the algorithm itself, after the arguments have been evaluated, so I don't see how this relates to the evaluation order of subexpressions unless there is a part of the extension I'm missing.

 
Fact is human brains are very bad at doing (or imagining) multiple things happening at the same time which is why multithreading is such a hard problem. It is easier to go through something step by step than jumping around wildly or even doing things in parallel. Having a fixed order makes it easier to understand the solution because we can begin at one determined side of the expression and go through it one step at a time.

We're going to have to get better at it, because the 4GHz clock limit isn't going away any time soon. I don't want to see C++ lose a natural way to express soft parallelism because of poor choices made by programmers who still remember the era of the free lunch.
I certainly do wish evolution would pick up the pace.

Miro Knejp

unread,
Jul 12, 2016, 3:47:35 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 21:31 schrieb Ren Industries:
> I have not asserted without evidence that it removes expressiveness; I
> have stated, below that very assertion, the evidence. It is
> indisputable that you are removing the ability to do something; that's
> the very point of what you wish to do.
>
> The fact you have asked for a higher burden of proof is meaningless;
> you are moving goal posts, which is intellectually dishonest.
>
> You may consider the higher goal post more meaningful; that's fine.
> But I've clearly met the lower goal post that you then claimed I have
> not met.
I do not deny that the change is removing a degree of freedom. We are
assessing its value here and whether it's worth keeping compared to the
problems it makes people run into repeatedly. What I am asking for is an
example where that expressiveness is used in such a way that it is
intentional by the author, necessary for the correctness of the code, and
*unambiguous* to the reader what the author's intention was.

Miro Knejp

unread,
Jul 12, 2016, 3:51:38 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 18:29 schrieb 'Edward Catmur' via ISO C++ Standard - Future Proposals:
On Tue, Jul 12, 2016 at 6:12 AM, Nicol Bolas <jmck...@gmail.com> wrote:
On Monday, July 11, 2016 at 8:27:41 PM UTC-4, Edward Catmur wrote:

On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


I've heard a lot of "sounds like" out of people defending the undefined order rules. I've heard far less "certainly is". Is there any genuine evidence of this "performance nightmare", or is it just fear of the unknown?

I can't speak for anyone else. In my case it is induction from observations that many existing optimizations depend to some extent on breaking naive intuitions of ordering.
 
How many compilers actually use the undefined evaluation order rules to optimize code? Do they re-order expressions based on what is best for that specific expression? Or do they always evaluate expressions in an arbitrary order? Because if such flexibility is not actually making code faster, then the variance is pointless and should be done away with.
 
It does not matter whether compilers are currently able to exploit their freedom. The free lunch is over. Clock speeds are not getting any faster. If there is any prospect of the freedom being useful in future then it should be retained until we know for sure one way or the other, especially if new instructions could help compilers exploit it.
And when would that be? When will people stop saying "but in X years compilers might exploit it" when they still don't do it?
This discussion is all about weighing benefits versus drawbacks. If it was purely objective, and our brains more reliable, this exchange wouldn't exist.

Jeffrey Yasskin

unread,
Jul 12, 2016, 4:03:56 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 10:47 AM, Hyman Rosen <hyman...@gmail.com> wrote:
On Tue, Jul 12, 2016 at 1:12 PM, 'Jeffrey Yasskin' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
Please look at the technical arguments instead of calling people who disagree with you "irrational".

It's not a matter of technical arguments, it's a matter of values (as you say).  Science can tell you how to accomplish what you want, but it can't tell you what to want.  For me, the values involved in having a programming language that is simply specified and deterministic and consistent far outweigh the possible optimization benefit in letting the compiler pick evaluation order.  Similarly, I find negative value in having unspecified behavior in order to force programmers to be hyper-vigilant to avoid error.  I find negative value in the notion that unspecified order adds expressiveness.

C++ can do only one thing.  It cannot have evaluation order both specified and unspecified.  It cannot evaluate both the LHS and the RHS of an assignment first.  So values must necessarily come into conflict, and battles over values are political, personal, and frequently nasty. 

Political's fine (although it's nice to avoid it when possible), but personal and nasty aren't. Thanks for this last email which is totally respectful and makes good points.
 
Besides, what constitutes a technical argument?  I say that Java has strict left-to-right evaluation, and I find that to be a technical argument - clearly the Java designers made this a conscious choice.  But others dismiss this with "why would we want C++ to be like Java?"  Or conversely, people claim to find that some code can run 4% faster when order is not specified, and I say that I don't care.

I'd call them both good technical arguments, which folks are valuing differently.

Ultimately, my goal is to not be silent, so that when C++ goes the wrong (IMO) way, no one can say that this was done with the acquiescence of the entire C++ community.

And for your amusement, or horror:
    struct A { }; A f(); A g();
    void operator<<(A, A);
    auto shfpointer = (void(*)(A, A))&operator<<;
    operator<<(f(), g());  // must evaluate f() then g()
    shfpointer(f(), g());  // can evaluate f() and g() in either order

If I'm reading http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf correctly (§6.4), it's only "f() << g()" that gets the sequenced-before edge. "operator<<(f(), g())" is written like a function call and so doesn't. You're welcome to retarget your horror to that difference. ;-)

Jeffrey

Nevin Liber

unread,
Jul 12, 2016, 4:10:45 PM7/12/16
to std-pr...@isocpp.org
On 12 July 2016 at 14:21, Miro Knejp <miro....@gmail.com> wrote:
People also constantly assert without evidence that this is somehow "removing expressiveness" from the language. I have asked in the past and now again for code examples that clearly, without doubt, express that the author's intention was *deliberately* for code to be order-independent and not just a lucky side effect of how the language works. And by example I do not mean "I just wrote an expression and it happens to not depend on order of evaluation". No, I mean "I wrote this expression and I *intentionally* require it to be order independent because the future of the universe depends on it".

When you compile, do you ever turn on optimizations?  If so, why, as they are known to make buggy code less deterministic.

I want the correct code I write to be as fast as possible.

I call code that is accidentally dependent on the order of evaluation a bug.  You want to call it a feature.   If it is a feature people will write both deliberate code and accidental code that takes advantage of it, since it is impossible to tell the difference between the two.


Take the expression int a = b() + c() + d();

Where b(), c() and d() return ints.  When I see an expression like that, I as the reader expect it to be both commutative and associative.  And in C++ today, it would be buggy code if the result were dependent on the order of evaluation.  I could, for instance, rewrite it as:

int e = c() + d();
int a = b() + e;

If the evaluation order is defined, I as a code maintainer can no longer do this transformation without looking at b(), c(), d() to make sure they are completely independent, because it is impossible to tell just from the original expression whether it is dependent on the order of evaluation or not.  In an evaluation-order-dependent world, this refactoring is fragile and may break code.
-- 
 Nevin ":-)" Liber  <mailto:ne...@eviloverlord.com>  +1-847-691-1404

Hyman Rosen

unread,
Jul 12, 2016, 4:15:51 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:03 PM, 'Jeffrey Yasskin' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
If I'm reading http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf correctly (§6.4), it's only "f() << g()" that gets the sequenced-before edge. "operator<<(f(), g())" is written like a function call and so doesn't. You're welcome to retarget your horror to that difference. ;-)

Ah, I missed that.  OK, retargeted :-)

Seriously, though, how can having different evaluation order requirements for each of
    a() << (b() << c())
    a() << (b() =  c())
    a() +  (b() +  c())

strike anyone as good programming language design?  What do you say when teaching this to someone?
Not to mention the difference between these:
    a() <<  (b() <<  c())
    a() <<= (b() <<= c())

Hyman Rosen

unread,
Jul 12, 2016, 4:28:36 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:10 PM, Nevin Liber <ne...@eviloverlord.com> wrote:
Take the expression int a = b() + c() + d();

Where b(), c() and d() return ints.  When I see an expression like that, I as the reader expect it to be both commutative and associative.  And in C++ today, it would be buggy code if the result were dependent on the order of evaluation.  I could, for instance, rewrite it as:

int e = c() + d();
int a = b() + e;

If the evaluation order is defined, I as a code maintainer can no longer do this transformation without looking at b(), c(), d() to make sure they are completely independent, because it is impossible to tell just from the original expression whether it is dependent on the order of evaluation or not.  In an evaluation-order-dependent world, this refactoring is fragile and may break code.

In C++ today this code is not associative.  For example, even if all functions return int values, if b() returns 1, c() returns -1, and d() returns INT_MIN, reordering the computations (at the source level) to add c() and d() before adding b() would result in undefined behavior.  If the functions return float values, reordering can lead to utterly wrong results.

Your incorrect intuition about how to rewrite this code could lead you to insert undefined behavior into the program.  That's not surprising; programmers have been led astray by incorrect beliefs about order of evaluation for literally decades.  That's why strict left-to-right order is the only correct way to fix things.  One simple rule that's trivial to learn and impossible to forget.

Ren Industries

unread,
Jul 12, 2016, 4:31:31 PM7/12/16
to std-pr...@isocpp.org
This applies equally well to right-to-left order (which is currently Clang's ordering, I believe). It in no way proves that left-to-right is somehow the only correct way.


Jeffrey Yasskin

unread,
Jul 12, 2016, 4:32:39 PM7/12/16
to std-pr...@isocpp.org
I'd love to see some real code that looks like those examples. It's hard to figure out what I expect when the functions are just a(), b(), and c(), but seeing real code could help folks figure out if the weird bits are "don't do that" or "we need to change the sequencing of assignments."

I do think it's on the table to make the sequencing stricter for C++20, once folks have had time to investigate more of the implications. The change for C++17 was just cautious.

Jeffrey

Hyman Rosen

unread,
Jul 12, 2016, 4:35:40 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:31 PM, Ren Industries <renind...@gmail.com> wrote:
This applies equally well to right-to-left order (which is currently Clang's ordering, I believe). It in no way proves that left-to-right is somehow the only correct way.

Having more than one way is wrong.  C++ is written and read from left to right.  Evaluation should mirror that.  (I believe that g++ does right-to-left for function arguments and clang does them left-to-right.)

Hyman Rosen

unread,
Jul 12, 2016, 4:37:53 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:32 PM, 'Jeffrey Yasskin' via ISO C++ Standard - Future Proposals <std-pr...@isocpp.org> wrote:
I do think it's on the table to make the sequencing stricter for C++20, once folks have had time to investigate more of the implications. The change for C++17 was just cautious.

The problem is that the ordering for assignment, once specified as right-to-left in C++17, can never be fixed.  That's heinous.

Nevin Liber

unread,
Jul 12, 2016, 4:39:01 PM7/12/16
to std-pr...@isocpp.org
On 12 July 2016 at 15:28, Hyman Rosen <hyman...@gmail.com> wrote:
In C++ today this code is not associative.  For example, even if all functions return int values, if b() returns 1, c() returns -1, and d() returns INT_MIN, reordering the computations (at the source level) to add c() and d() before adding b() would result in undefined behavior.  If the functions return float values, reordering can lead to utterly wrong results.

In other words, when you write code that doesn't obey the rules of commutativity and associativity, you make it harder for humans to understand.  Great point!

Jeffrey Yasskin

unread,
Jul 12, 2016, 4:39:16 PM7/12/16
to std-pr...@isocpp.org
At this point (after the mailing comes out with the newest working draft), you should probably write a paper proposing to switch the sequencing of assignment operations to left-to-right, and try to find a National Body to write a comment on the CD endorsing your change. Arguing more on this list won't get you clear agreement one way or the other, but it's possible you've convinced enough people.

Jeffrey

Patrice Roy

unread,
Jul 12, 2016, 4:45:46 PM7/12/16
to std-pr...@isocpp.org
I've made this comment in the past, and did not expect the discussion to come back two weeks after we voted on the issue, but as a complement to Nevin's comment, I'm glad, from a code review perspective, that code written as x = h(f(),g()); where there are side-effects to f() and g() can be considered as something that's buggy and requires rewriting, not as something potentially correct that needs to be investigated. Said investigation is sometimes painful, costly, and requires expertise and access to source code which we do not always have.

Guaranteeing the relative evaluation ordering of f() and g() in h(f(),g()) impacts code in many ways, and the belief that simplification is one of them is not something I share. Some strong proponents of imposing an ordering in such situations consider it a simplification for coding, particularly for beginners; I spend a significant amount of time with undergrads every semester, have done so for close to twenty years now, and again, this argument does not correspond to my experience (this is an observation, not a statistic, but it does explain my position). Stating that such code is error-prone and a bad idea is in my view much, much simpler than considering it correct and investigating it, particularly when beginners are involved. It's never been difficult to understand so far. I sympathize with those who would impose ordering even then, but am glad it was not accepted. Other languages (e.g. C#) make complex nested expressions with lots of subexpressions meaningful, and I have had the displeasure of debugging people using that «feature» in the last few years. I'm glad we escaped these difficulties with C++; it has made my personal life and work significantly simpler.

I'd be curious to know how people from SG14 (low-latency, embedded systems) would see a possible performance loss of 2-4% in practice.

On the positive side, I'm glad the other parts of the ordering proposal passed, as they seemed to me to be beneficial. I'm happy with the conclusion we collectively reached. If there's a problem with pack expansion ordering, it's probably a better idea to address it separately. Papers would be welcome.

Thanks, Jeffrey, for the efforts invested in keeping the overall tone civil.



Hyman Rosen

unread,
Jul 12, 2016, 4:52:25 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:38 PM, Nevin Liber <ne...@eviloverlord.com> wrote:
In other words, when you write code that doesn't obey the rules of commutativity and associativity, you make it harder for humans to understand.  Great point!

When you apply the incorrect mental model to a programming language, you will fail to understand the program.  When you apply transformations to a program while failing to understand it, you will break it.  When you break the program and it fails, the excuse that the program failed to fit your model is not going to make the program work again.

Programming is already hard enough without deliberately throwing unspecified behavior into the mix.

Miro Knejp

unread,
Jul 12, 2016, 4:58:05 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 22:10 schrieb Nevin Liber:
On 12 July 2016 at 14:21, Miro Knejp <miro....@gmail.com> wrote:
People also constantly assert without evidence that this is somehow "removing expressiveness" from the language. I have asked in the past and now again for code examples that clearly, without doubt, express that the author's intention was *deliberately* for code to be order-independent and not just a lucky side effect of how the language works. And by example I do not mean "I just wrote an expression and it happens to not depend on order of evaluation". No, I mean "I wrote this expression and I *intentionally* require it to be order independent because the future of the universe depends on it".

When you compile, do you ever turn on optimizations?  If so, why, as they are known to make buggy code less deterministic.

I want the correct code I write to be as fast as possible.
Obviously if code is broken, it's broken. It depends on UB, so there is no helping it. It has no correct answer. It is only "buggy" because the language allows it to be, due to ambiguities.


I call code that is accidentally dependent on the order of evaluation a bug.  You want to call it a feature.   If it is a feature people will write both deliberate code and accidental code that takes advantage of it, since it is impossible to tell the difference between the two.
I want the buggy code to *always* fail, not *sometimes* depending on the compiler's mood that morning.



Take the expression int a = b() + c() + d();

Where b(), c() and d() return ints.  When I see an expression like that, I as the reader expect it to be both commutative and associative.  And in C++ today, it would be buggy code if the result were dependent on the order of evaluation.  I could, for instance, rewrite it as:

int e = c() + d()
int a = b() + e;

If the evaluation order is defined, I as a code maintainer can no longer do this transformation without looking at b(), c(), d() to make sure they are completely independent, because it is impossible to tell just from the original expression if is dependent on the order of evaluation or not.  In the evaluation order dependent world, this refactoring is fragile and may break code.
In a world without fixed evaluation order this change might also break code if the compiler decides to evaluate b(), c(), d() in different orders before and after the change and the author accidentally relied on it. You can only do this change if you trust the code not to be buggy.

It is certainly a better argument than the philosophical stuff others have presented in past discussions, and I thank you for that. It made me think. Now I wonder whether the expectation about commutativity and associativity is not a naive intuition. These properties only hold for the operations on the values of the subexpressions; why should they transitively establish the same relationships between the subexpressions themselves? Commutativity certainly holds for ints (in theory), but not for much else. So as soon as other types are involved with operator+, implying commutativity or associativity can be just as fragile. You also state "in C++ today, it would be buggy code". Well yes, because the language rules allow it to be. If the order is fixed, the code is either always correct or always incorrect. It is primarily this bug trap that concerns me the most, and one that even experts fall into occasionally.

So I do accept the maintainer's dilemma. I feel like the real issue is the absence of a way for the author to convey whether the order mattered in the first place, regardless of whether evaluation order is fixed or not. In a world without fixed order the author might have accidentally relied on one, in which case the transformation is a bugfix. In a world with fixed evaluation order the author might not have cared about it, in which case it's an improvement to readability/efficiency. In both cases it is the maintainer's problem to figure it out. And in both cases they have to consult the source or documentation of b(), c(), d() to judge the consequences of the transformation. (A function purity annotation would really help with that.)

Hyman Rosen

unread,
Jul 12, 2016, 5:09:47 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:45 PM, Patrice Roy <patr...@gmail.com> wrote:
I've made this comment in the past, and did not expect the discussion to come back two weeks after we voted on the issue, but as a complement to Nevin's comment, I'm glad, from a code review perspective, that code written as x = h(f(),g()); where there are side-effects to f() and g() can be considered as something that's buggy and requires rewriting, not as something potentially correct and that needs to be investigated.

In a language with a properly defined evaluation order, the following code works as obviously as it reads.
    template <typename T> T get(istream &in) { T t; in >> t; return t; }
    auto point = make_pair(get<int>(cin), get<int>(cin));

It's the way the code works in C++ today if I write
    int xy[2] = { get<int>(cin), get<int>(cin) };
so the language is inconsistent with itself with respect to when it defines order and when it doesn't.

I've said it before - C and C++ programmers have developed Stockholm Syndrome with respect to evaluation order.  The rules have been wrong for so long that programmers have come to believe that this is the way they must be, and then accuse reasonable-looking code of being broken, stupid, error-prone, and lazy when it's really the language that's that way.

Miro Knejp

unread,
Jul 12, 2016, 5:14:42 PM7/12/16
to std-pr...@isocpp.org
Am 12.07.2016 um 21:23 schrieb 'Edward Catmur' via ISO C++ Standard - Future Proposals:
 
auto inserts = {v.insert(v.end(), args)...};
That makes it an initializer_list, meaning the values are const, meaning you cannot move them out.
Just saying.

 Moving iterators?
The values in an initializer_list are const, so there is no moving the values out of it without const_cast and a deep sip of UB.

Hyman Rosen

unread,
Jul 12, 2016, 5:15:09 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 4:59 PM, Miro Knejp <miro....@gmail.com> wrote:
I feel like the real issue is the absence of a way for the author to convey whether the order mattered in the first place or not, regardless of whether evaluation order is fixed or not.

Why?  If you write a = f(); b = g(); do you worry that there's no way to tell the compiler that you don't care about the order of the two statements?

Miro Knejp

unread,
Jul 12, 2016, 5:19:10 PM7/12/16
to std-pr...@isocpp.org
Within an expression. That's what this is all about. And I didn't target the compiler but the human maintainer. There is explicit syntax to tell the compiler and people that code is *intentionally ordered*: semicolons. And commas, sometimes, but not always; you'd better remember the rules. There is no explicit syntax to tell them that code is *intentionally unordered*.

Ren Industries

unread,
Jul 12, 2016, 5:19:11 PM7/12/16
to std-pr...@isocpp.org
Having more than one way is no more wrong than anything else; as you said, it is a value judgement. Your "should" is not justified merely because you assert it. 

I don't have Stockholm syndrome; I wish to have a way to express that I don't care about the order of evaluation and the compiler may evaluate as it likes. You wish to take that away without replacing it. That is fundamentally the problem; I value speed as much as you value correctness. In my field (game design), correctness is an unnecessary and burdensome requirement; you cannot have real-time games and correct lighting, for example. That results in a difference of opinion on where the chips should fall, but at least I don't assert my way is the absolute correct way without evidence like you do.

--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.

Ville Voutilainen

unread,
Jul 12, 2016, 5:22:13 PM7/12/16
to ISO C++ Standard - Future Proposals
On 13 July 2016 at 00:09, Hyman Rosen <hyman...@gmail.com> wrote:
> I've said it before - C and C++ programmers have developed Stockholm
> Syndrome with respect to evaluation order. The rules have been wrong for so
> long that programmers have come to believe that this is the way they must
> be, and then accuse reasonable-looking code of being broken, stupid,
> error-prone, and lazy when it's really the language that's that way.


Well, I don't know what evidence makes you think programmers have such
a syndrome, but the committee
was convinced by implementation vendors and high-performance
programming audiences reporting that
the optimization opportunities in unspecified function argument
evaluation are of significant importance
to them.

Matt Calabrese

unread,
Jul 12, 2016, 5:27:15 PM7/12/16
to ISO C++ Standard - Future Proposals
On Tue, Jul 12, 2016 at 2:09 PM, Hyman Rosen <hyman...@gmail.com> wrote:
In a language with a properly defined evaluation order, the following code works as obviously as it reads.

No, not "properly" nor "obviously". You keep asserting your own position as "proper" because it is the position you hold and you do not counter Nevin's or Patrice's (frankly compelling) points with anything other than more blanket assertions. I realize that you are frustrated, but you appear to be simply writing off the other view point and that is not fair. This is not as simple a decision as you make it out to be.

On Tue, Jul 12, 2016 at 2:09 PM, Hyman Rosen <hyman...@gmail.com> wrote:
I've said it before - C and C++ programmers have developed Stockholm Syndrome with respect to evaluation order.  The rules have been wrong for so long that programmers have come to believe that this is the way they must be, and then accuse reasonable-looking code of being broken, stupid, error-prone, and lazy when it's really the language that's that way.

You aren't giving the opposing position the credit it deserves. If you think this is somehow purely "Stockholm Syndrome" then please try to better understand the arguments that were made and address them appropriately, otherwise you aren't going to convince anyone of anything.

Patrice Roy

unread,
Jul 12, 2016, 5:36:28 PM7/12/16
to std-pr...@isocpp.org
I'm sorry to say I don't share your belief as to the obviousness of these examples. The relative obviousness-or-not of such statements is probably not where we want the discussion to go in order to be productive. I'll let the rest of the rant lie, as this does not seem to be an appropriate forum for those words.

I think anyone who feels as strongly as you seem to do about this topic should write a proposal for it, or push for a NB comment. Scientific arguments would be more convincing than arguments based on what people seem to think is obvious, at least in my view.

I wish you good luck; as previously stated, I think the agreement we reached in Oulu was satisfactory. Should there be convincing arguments to the contrary, we'll see, but it will be a tough sell given there are measurable and measured performance losses, and given that for some of us it increases the maintenance burden in a noticeable way. I'll gladly read your proposal should you decide to write it.


Nicol Bolas

unread,
Jul 12, 2016, 5:41:30 PM7/12/16
to ISO C++ Standard - Future Proposals
On Tuesday, July 12, 2016 at 4:52:25 PM UTC-4, Hyman Rosen wrote:
On Tue, Jul 12, 2016 at 4:38 PM, Nevin Liber <ne...@eviloverlord.com> wrote:
In other words, when you write code that doesn't obey the rules of commutativity and associativity, you make it harder for humans to understand.  Great point!

When you apply the incorrect mental model to a programming language, you will fail to understand the program.

People should not conform to their programming languages; programming languages should conform to people.

Ren Industries

unread,
Jul 12, 2016, 5:43:00 PM7/12/16
to std-pr...@isocpp.org
Tell that to those who espouse functional programming languages, which have a decidedly anti-human mind model bent.
To some degree, we're doing something quite unnatural; people and programming languages need to meet in the middle.


Nicol Bolas

unread,
Jul 12, 2016, 5:47:13 PM7/12/16
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 1:12:44 PM UTC-4, Jeffrey Yasskin wrote:
You also should moderate your language. :) Most people here are rational and not stupid, but they may value different things than you do. For example, some people value the 4% speed improvement that http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf demonstrated a compiler could achieve on some programs by fiddling intelligently with the evaluation order. It's not obvious to me that this 4% is worth the bugs it implies, but it's also not obvious to me that it's not.

What 4% improvement? The paper discussed a 4% variance. Performance changed plus or minus 4%, depending on the various conditions.

Nowhere in that paper was it stated that undefined order caused a flat 4% speed improvement.

Jeffrey Yasskin

unread,
Jul 12, 2016, 5:56:04 PM7/12/16
to std-pr...@isocpp.org
Right, the paper shows a +/-4% variance, meaning there's at least 4%
performance on the table for some programs if the optimizers can be
made smarter. In particular, if the optimizer can identify the
programs for which reversing the argument evaluation order causes a 4%
improvement, it could do that and get the 4% improvement.

Folks are worried about closing off that possibility. It's true that
optimizers haven't been taught to do this in the many years they've
been allowed to, but now there's evidence people should be looking
harder in this direction. If folks look over the next 3 years and
nothing materializes, that becomes a stronger argument to nail down
the left-to-right order in C++20.

Jeffrey

Edward Catmur

unread,
Jul 12, 2016, 5:56:53 PM7/12/16
to std-pr...@isocpp.org

So an oracle or perfect PGO would get that 4% in its entirety. And that's just a binary choice between LTR and RTL, not full permutation of ordering.

Edward Catmur

unread,
Jul 12, 2016, 5:59:34 PM7/12/16
to std-pr...@isocpp.org

In the example, the values of the initializer_list are iterators. I'm questioning whether moving an iterator (at least one to a normal container) is ever something one would wish to do.

Nicol Bolas

unread,
Jul 12, 2016, 6:00:41 PM7/12/16
to ISO C++ Standard - Future Proposals
On Tuesday, July 12, 2016 at 12:29:04 PM UTC-4, Edward Catmur wrote:
On Tue, Jul 12, 2016 at 6:12 AM, Nicol Bolas <jmck...@gmail.com> wrote:
On Monday, July 11, 2016 at 8:27:41 PM UTC-4, Edward Catmur wrote:

On 11 Jul 2016 9:36 p.m., "Hyman Rosen" <hyman...@gmail.com> wrote:
>
> The right decision is strict left-to-right order of evaluation for all expressions, with side-effects fully completed as they are evaluated.

That sounds like a performance nightmare.


I've heard a lot of "sounds like" out of people defending the undefined order rules. I've heard far less "certainly is". Is there any genuine evidence of this "performance nightmare", or is it just fear of the unknown?

I can't speak for anyone else. In my case it is induction from observations that many existing optimizations depend to some extent on breaking naive intuitions of ordering.
 
How many compilers actually use the undefined evaluation order rules to optimize code? Do they re-order expressions based on what is best for that specific expression? Or do they always evaluate expressions in an arbitrary order? Because if such flexibility is not actually making code faster, then the variance is pointless and should be done away with.
 
It does not matter whether compilers are currently able to exploit their freedom.

Yes it does matter.

On one side, we have a very clear fact: this "feature" of C++ is very difficult to use correctly. The evidence for this is the fact that people keep using it wrong.

On the other side, you put forth not a factual performance loss, but a prediction that future compilers won't be able to compile code as well as they might.

Why should we value a possible future which may never come to pass more than a certain present that we have to live with daily?

The free lunch is over. Clock speeds are not getting any faster. If there is any prospect of the freedom being useful in future then it should be retained until we know for sure one way or the other, especially if new instructions could help compilers exploit it.

If there is any genuine prospect of such freedom being useful in the future, then we can add specific syntax to loosen the rules in those places where you could see a performance gain.

> The reason for choosing strict left-to-right order is that it's trivial to explain, matches the order in which code is read, and matches other languages such as Java.

Any experienced user reads code blocks as a unit, just as any proficient reader of natural languages reads sentences and paragraphs in a single glance.


Let's pretend this was true (despite the undeniable fact that "experienced users" make these mistakes too).

So what? Are we supposed to restrict C++ to "experienced users" as you define it?

It would be better to restrict C++ to experienced users than to make it useless to them.

Hyperbole is not helpful in proving your point. The largest variance measured for this feature was 4%. A 4% performance drop is not going to make C++ "useless".

Miro Knejp

unread,
Jul 12, 2016, 6:02:02 PM7/12/16
to std-pr...@isocpp.org
Ah right I missed that part. Nevermind.

Nicol Bolas

unread,
Jul 12, 2016, 6:08:04 PM7/12/16
to ISO C++ Standard - Future Proposals


On Tuesday, July 12, 2016 at 5:56:04 PM UTC-4, Jeffrey Yasskin wrote:
On Tue, Jul 12, 2016 at 2:47 PM, Nicol Bolas <jmck...@gmail.com> wrote:
>
>
> On Tuesday, July 12, 2016 at 1:12:44 PM UTC-4, Jeffrey Yasskin wrote:
>>
>> You also should moderate your language. :) Most people here are rational
>> and not stupid, but they may value different things than you do. For
>> example, some people value the 4% speed improvement that
>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf
>> demonstrated a compiler could achieve on some programs by fiddling
>> intelligently with the evaluation order. It's not obvious to me that this 4%
>> is worth the bugs it implies, but it's also not obvious to me that it's not.
>
>
> What 4% improvement? The paper discussed a 4% variance. Performance changed
> plus or minus 4%, depending on the various conditions.
>
> Nowhere in that paper was it stated that undefined order caused a flat 4%
> speed improvement.

Right, the paper shows a +/-4% variance, meaning there's at least 4%
performance on the table for some programs if the optimizers can be
made smarter. In particular, if the optimizer can identify the
programs for which reversing the argument evaluation order causes a 4%
improvement, it could do that and get the 4% improvement.

My point is that there are also cases where there is a 4% performance loss compared to LTR. That is, LTR gives up to 4% better performance in some cases. If performance is the primary opposing argument, then that fact needs to be weighed as well.
 
Folks are worried about closing off that possibility. It's true that
optimizers haven't been taught to do this in the many years they've
been allowed to, but now there's evidence people should be looking
harder in this direction. If folks look over the next 3 years and
nothing materializes, that becomes a stronger argument to nail down
the left-to-right order in C++20.

If this was a firm promise that, if nobody has proven otherwise, then we'd close things up in C++20, I'd be fine with it. But it isn't.

The problem with this reasoning is that this argument will be no less valid in 2019 than it is in 2016. You can always say that in "3-5 years", compilers will take greater advantage of this.

What arguments can you use against this in 2019 that you couldn't in 2016? The only argument you can use is that it's 3 years later, but that's not a strong argument. After all, it's been 18 years since C++98, and these things have yet to materialize.

There's always going to be some excuse to keep kicking the can along. To put it off for another standard cycle. To say that we'll get those optimizations any year now.

What is the end-game here? What is the critical point where we can make this sort of things stop?

Nicol Bolas

unread,
Jul 12, 2016, 6:10:55 PM7/12/16
to ISO C++ Standard - Future Proposals

"Could", not "would". After all, there are plenty of times where order is entirely irrelevant to performance.

Hyman Rosen

unread,
Jul 12, 2016, 6:39:08 PM7/12/16
to std-pr...@isocpp.org
How can defining the order of evaluation increase the maintenance burden?  When you tell people not to write h(f(), g()) because that can cause problems, you are doing so because you are using a language that causes the problems and you have internalized this knowledge.  (That's what I mean by Stockholm Syndrome.)  But it's a fact (that we have encountered in our company, not just in theory) that such code can easily enter production and work "correctly" for years before an environment change alters the evaluation order and breaks it.  With a defined order, code either works or doesn't, and it's immediately apparent.

I accept that there are people who believe that not specifying function argument evaluation order is important for optimization.

Nevin's point, that a programmer should feel free to reorder a = b() + c() + d() as a = b() + (c() + d()) at the source level is wrong and can lead to undefined behavior, so that is not evidence against defined evaluation order.  Quite the contrary.

I do not agree at all that C++ should treat a() << b(), a() = b(), and a() + b() differently.

As to the obviousness of the examples, find naifs and ask what they think something like
    Point p = Point(read<int>(), read<int>());
does.

Jeffrey Yasskin

unread,
Jul 12, 2016, 7:13:20 PM7/12/16
to std-pr...@isocpp.org
On Tue, Jul 12, 2016 at 3:08 PM, Nicol Bolas <jmck...@gmail.com> wrote:


On Tuesday, July 12, 2016 at 5:56:04 PM UTC-4, Jeffrey Yasskin wrote:
On Tue, Jul 12, 2016 at 2:47 PM, Nicol Bolas <jmck...@gmail.com> wrote:
>
>
> On Tuesday, July 12, 2016 at 1:12:44 PM UTC-4, Jeffrey Yasskin wrote:
>>
>> You also should moderate your language. :) Most people here are rational
>> and not stupid, but they may value different things than you do. For
>> example, some people value the 4% speed improvement that
>> http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r2.pdf
>> demonstrated a compiler could achieve on some programs by fiddling
>> intelligently with the evaluation order. It's not obvious to me that this 4%
>> is worth the bugs it implies, but it's also not obvious to me that it's not.
>
>
> What 4% improvement? The paper discussed a 4% variance. Performance changed
> plus or minus 4%, depending on the various conditions.
>
> Nowhere in that paper was it stated that undefined order caused a flat 4%
> speed improvement.

Right, the paper shows a +/-4% variance, meaning there's at least 4%
performance on the table for some programs if the optimizers can be
made smarter. In particular, if the optimizer can identify the
programs for which reversing the argument evaluation order causes a 4%
improvement, it could do that and get the 4% improvement.

My point is that there are also cases where there is a 4% performance loss compared to LTR. That LTR gives up-to-4% better performance in some cases. If performance is the primary opposing argument, then that fact needs to be weighed as well.

That's why I said "some programs" and "if the optimizer can identify the programs".

Folks are worried about closing off that possibility. It's true that
optimizers haven't been taught to do this in the many years they've
been allowed to, but now there's evidence people should be looking
harder in this direction. If folks look over the next 3 years and
nothing materializes, that becomes a stronger argument to nail down
the left-to-right order in C++20.

If this was a firm promise that, if nobody has proven otherwise, then we'd close things up in C++20, I'd be fine with it. But it isn't.

The problem with this reasoning is that this argument will be no less valid in 2019 than it is in 2016. You can always say that in "3-5 years", compilers will take greater advantage of this.

What arguments can you use against this in 2019 that you couldn't in 2016? The only argument you can use is that it's 3 years later, but that's not a strong argument. After all, it's been 18 years since C++98, and these things have yet to materialize.

There's always going to be some excuse to keep kicking the can along. To put it off for another standard cycle. To say that we'll get those optimizations any year now.

What is the end-game here? What is the critical point where we can make this sort of things stop?
 
There is no guaranteed end-game. It stops when the committee becomes convinced to make a change. That's kinda the nature of standardization work.

Jeffrey

Patrice Roy

unread,
Jul 12, 2016, 7:27:40 PM7/12/16
to std-pr...@isocpp.org
I don't really want to do the rounds over these arguments again, having done so before. The reason I expressed myself earlier today was to leave a trace since we are re-doing a thread that I thought had been processed.

I accept that in your perspective, Point p = Point(read<int>(), read<int>()); should have definite meaning even if read<int>() has side-effects, and even replacing read<int>() with something more abstract to make the situation less evidently suspicious given the existing language. I take it that should a problem arise due to someone writing such code, you and your colleagues find it worthwhile to open up the side-effecting functions and investigate their sources to make sense of the then-no-longer-unspecified code. I understand that it seems obvious to you that such code is easier to maintain. It's not a willingness I share, the obviousness does not strike me, and it does not seem beneficial to me. I also use languages where these expressions are fully ordered, do debugging for beginners who make such dependencies idiomatic, and appreciate being able to tell them «just no» with C++.

I look forward to reading your proposal.


Thiago Macieira

unread,
Jul 12, 2016, 7:43:16 PM7/12/16
to std-pr...@isocpp.org
Em terça-feira, 12 de julho de 2016, às 18:38:47 PDT, Hyman Rosen escreveu:
> As to the obviousness of the examples, find naifs and ask what they think
> something like
> Point p = Point(read<int>(), read<int>());
> does.

Well, naifs won't be able to read it because they don't know the language in
the first place. They have to learn the language before they can express an
opinion.

As someone who programmed in the early 1990s, it's "obvious" to me that
arguments are pushed right to left, because it's the C calling convention. The
point being: what's obvious to me is not obvious to you.

--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center

Hyman Rosen

unread,
Jul 12, 2016, 9:24:40 PM7/12/16
to std-pr...@isocpp.org


On Jul 12, 2016 7:27 PM, "Patrice Roy" <patr...@gmail.com> wrote:

> I look forward to reading your proposal.

Don't be disingenuous.  You know very well that such a proposal would not pass.  It is impossible to convince enough people that l-to-r should be adopted.  The "but optimization!" folks hold too much sway.  The fact that r-to-l for assignments passed in C++17 is further evidence that there is no hope.  And then there are the people who just like the status quo.

I am just the lone crackpot shouting in the wilderness.

Patrice Roy

unread,
Jul 12, 2016, 11:05:52 PM7/12/16
to std-pr...@isocpp.org
There were debates in Oulu, and there were people for and against what you care so much about. Maybe you'll be able to find convincing arguments; I don't pretend to know what the outcome will be should a strong proposal be presented, and I was being sincere when I wrote that I'm looking forward to it. I, and others, are reasonable beings, given arguments we find convincing. I don't doubt you're convinced; I think that shows strongly.

Good luck, should you decide to go forward with this. If there is a proposal, particularly if the proposal's a strong one, I'm sure it will be examined and discussed rigorously.


Greg Marr

unread,
Jul 12, 2016, 11:19:10 PM7/12/16
to ISO C++ Standard - Future Proposals
On Tuesday, July 12, 2016 at 11:05:52 PM UTC-4, Patrice Roy wrote:
There were debates in Oulu, and there were people for and against what you care so much about.

We don't have access to the discussions, so we only know what is reported here.
Was there discussion about whether assignment should be RTL vs LTR, or only about whether or not to provide an order for function arguments?
If there were reasons brought up to have it be RTL instead of LTR, then any paper suggesting to change that would need to address those reasons.

Patrice Roy

unread,
Jul 12, 2016, 11:59:52 PM7/12/16
to std-pr...@isocpp.org
Discussions are not public (to leave people maximal freedom of expression), and they are held in subgroups as well as in plenary if needed (sometimes, it's just something simple or something that everyone agrees on).

In this specific case, the paper on expression ordering suggested fixing seven orderings (if my memory's correct) and only one was (highly) problematic in the eyes of many, including myself. The proposal was split, the other six (again, from memory) were accepted, and this one was not. The arguments for and against are well-known to those who read this list.

Committee members represent various application domains and do not hold a uniform or homogeneous view of what «good programming» is, but they all care for the language and its users (being users themselves), so even if some might not agree with their decisions (which is fair), know that the debates are held and when subjects are debated like this one, it's reasonable to expect debates are held by committee members too.

In the end, it's a consensus decision, and consensus was not reached for this part of the ordering proposal. As Jeffrey and others stated, should people care enough to try to convince the committee to revisit its position on this specific issue, there are mechanisms that allow this (NB comments, for example, or a proposal targeted towards C++20).


Thiago Macieira

unread,
Jul 13, 2016, 1:09:55 AM7/13/16
to std-pr...@isocpp.org
Em terça-feira, 12 de julho de 2016, às 21:24:38 PDT, Hyman Rosen escreveu:
> The fact that
> r-to-l for assignments passed in C++17 is further evidence that there is no
> hope.

Note that this RTL for assignments was done like that to match the
associativity of the operator in question:

man 7 operator:

Operator                                  Associativity
() [] -> .                                left to right
! ~ ++ -- + - (type) * & sizeof           right to left
* / %                                     left to right
+ -                                       left to right
<< >>                                     left to right
< <= > >=                                 left to right
== !=                                     left to right
&                                         left to right
^                                         left to right
|                                         left to right
&&                                        left to right
||                                        left to right
?:                                        right to left
= += -= *= /= %= <<= >>= &= ^= |=         right to left
,                                         left to right

Domen Vrankar

unread,
Jul 13, 2016, 4:12:30 AM7/13/16
to std-pr...@isocpp.org
> I do not agree at all that the C++ should treat a() << b(), a() = b(), and
> a() + b() differently.

Hi,

I've been following this thread on and off for a while now and am
still surprised at the above statement and that LTR for everything is
the only logical thing to do.

As Thiago already noted that RTL for assignments matches the
associativity of the operator in question.

Also one other thing why I would find LTR for everything counter
intuitive is because of my mental model:

- you take a stone (variable)
- you carve something into it (LTR operation on that variable)
- you carve some more (LTR operation on that variable)
- you decide to box the resulting statue (RTL: only now do I create the
box, i.e. get a variable from e.g. a function)
- you box the statue

First part is LTR and the assignment switches to RTL since that's the
order you would use in real life.

For everything LTR it would be:

- you create a box
- you create a statue
- you want to put the statue into the box and find out that you have
to remove the hands from the statue because the box is not of the
right size/type

Creating a box before I know what I'll put in is not what I'd do in
real life so I never read code from left to right for assignment but
first glance the variable and move straight to the right part and then
back the left one and I'd expect the language to behave the same way
if the order is defined.

On the other hand, because most (all?) of the time I don't like code
that depends on the order anyway, it would take a while before the all-LTR
surprise would hit me... It's possible that I'd never even notice
that the order exists...

Teaching this would for me be more intuitive (since it's from real
life) than saying that everything is defined LTR (hey, grab me a box,
it doesn't matter that you don't know what I'll use it for) and ignoring
my natural mental model. (And yes, I get left and right confused all
the time - there's a reason why people don't like me to navigate while
they're driving - so using left and right to describe something is not
as natural for all of us as you'd think, and unifying it for the sake of
unification doesn't help me much.)

Regards,
Domen

Hyman Rosen

unread,
Jul 13, 2016, 11:15:04 AM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 1:09 AM, Thiago Macieira <thi...@macieira.org> wrote:
Note that this RTL for assignments was done like that to match the
associativity of the operator in question:

But why?  Associativity (and precedence) tell you how to fully parenthesize an expression.  Why should that inform the order in which subexpressions are evaluated?  If I begin with the fully parenthesized expressions a() = (b() = c()) and ((a() = b()) = c()), the first parenthesized as for right associativity and the second for left, the new standard requires that the order of evaluation for both is c(), b(), a(), so how does associativity even matter?  And why should a() << b() require a different order than a() <<= b() while a() + b() remains unspecified?  How does that help people to read and write C++?

DV's explanation that intuition suggests that the RHS be evaluated first for assignments seems more reasonable, but I don't find it sufficiently compelling to overcome the simplicity of teaching that evaluation is left-to-right, always.  Even if it runs counter to intuition, it's a mistake that you can only make once because when you learn the rule, it's so sharp and easy that you can't forget it.
 

Ren Industries

unread,
Jul 13, 2016, 11:36:56 AM7/13/16
to std-pr...@isocpp.org
Why would the simplicity of teaching anything matter more than anything else? IEEE 754 is incredibly hard to teach, with many odd nuances to it, yet it is used because those nuances lead to better results. Modeling after the mathematical concepts underlying the operations may make the language more difficult to teach, but it also has clear benefits, as DV explained.


--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposal...@isocpp.org.
To post to this group, send email to std-pr...@isocpp.org.

Patrice Roy

unread,
Jul 13, 2016, 11:51:29 AM7/13/16
to std-pr...@isocpp.org
Not to get everything off-topic; I know everything can be taught, as was pointed out to me on another occasion, but caring about the «teachability» of the language is not without interest. That being said, what is more «teachable» is highly subjective in some cases (this one included, as both Hyman and myself care for it, but don't agree as to what is easier to explain in some aspects).

Instructors and teachers have a responsibility here, and the current rules can be taught (they have been for a while), just as new ones could. I think we should look for more objective arguments should this discussion continue. And I really would prefer to discuss an actual proposal (it helps organize and focus discussions).

Cheers!

Greg Marr

unread,
Jul 13, 2016, 11:59:43 AM7/13/16
to ISO C++ Standard - Future Proposals
On Tuesday, July 12, 2016 at 11:59:52 PM UTC-4, Patrice Roy wrote:
Discussions are not public (to leave people maximal freedom of expression), and they are held in subgroups as well as in plenary if needed (sometimes, it's just something simple or something that everyone agrees on).

I understand, and understand why.  I was just reminding people that many people here don't have access.
 
In this specific case, the paper on expression ordering suggested fixing seven orderings (if my memory's correct) and only one was (highly) problematic in the eyes of many, including myself. The proposal was split, the other six (again, from memory) were accepted, and this one was not. The arguments for and against are well-known to those who read this list.

I think you might have missed the point of my question.  I know that function argument evaluation order is controversial and was discussed.  I'm not asking about this, as no order was chosen at this point, and as has been mentioned in this thread, passing C++17 as-is won't prevent this from being changed in the future.

I asked whether anyone knows whether RTL vs LTR for assignment was ever controversial and discussed, or just accepted as it was specified in the initial proposal.  This is because as has been pointed out, once C++17 is passed with this order, then it's fixed forever.

I'm not sure at this point whether one way is "better" than the other, just asking whether it was discussed, and if so, if there were particular reasons why RTL was chosen over LTR.  It has always made sense to me for the reasons given here, but the reasons given by Hyman Rosen for why LTR is better are definitely compelling.

In the end, it's a consensus decision, and consensus was not reached for this part of the ordering proposal. As Jeffrey and others stated, should people care enough to try to convince the committee to revisit its position on this specific issue, there are mechanisms that allow this (NB Comments, for example, or a proposal targeted towards C++20).

The RTL vs LTR for assignment would have to be a NB Comment at this point.  In order for a change to be accepted, I would imagine that it would need to address any reasons that RTL vs LTR was chosen.

Patrice Roy

unread,
Jul 13, 2016, 1:13:57 PM7/13/16
to std-pr...@isocpp.org
The orderings that were discussed were those proposed in wg21.link/p0145

The eight forms found on pages 3 and 4 were what was proposed. As far as I remember, the only one that did not reach consensus was form 4, for reasons discussed here many times (should I have missed something, I hope someone else who was in Oulu will correct me); in form 4, the ordering of b1, b2 and b3 remains unspecified, but they cannot interleave (so it's not a complete rejection).

I understand from this thread that there are some who disagree with form 5, but that form did not seem to be surprising to anyone in Oulu, again as far as I remember (my memory goes as far as Core work and Plenary sessions go; I was not in Evolution when it was discussed there), and it has been accepted as is. I'm not an expert on ISO procedure, but I'd wager (again, correct me if I'm wrong here, as it's quite possible) that since it's been accepted, people who think it's bad should probably push for a NB comment at this stage.

As in committee meetings, we discuss what is brought up in the proposals, that's pretty much the answer I can give to your questions, Greg. I hope it's satisfactory.

Hyman Rosen

unread,
Jul 13, 2016, 1:15:01 PM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 11:51 AM, Patrice Roy <patr...@gmail.com> wrote:
That being said, instructor and teachers have a responsibility here, and the current rules can be taught (they have been for a while), just as new ones could. I think we should go for more objective arguments should this discussion continue. And I really would prefer to discuss over an actual proposal (it helps organize and focus discussions).

The f(new T, new T) problem didn't emerge into popular notice until 2009, when Sutter presented it as GotW #56.  (I recall being incredulous during the Usenet discussion that C++ actually operated this way.)  We've seen several other cases on this list where even C++ experts are mistaken about order of evaluation.  I doubt that a meaningful number of C++ programmers would recognize the problem with this code even now.

There's a difference between what is taught and what is remembered.  When the rules are complex and indeterminate, people will forget what they are.  (Overloading rules, anyone?)  Further, the physical form of code exerts a strong influence on what its readers believe it does.  Seeing f declared as taking a pair of smart pointers and being called as above tells the reader with definiteness that memory is being handled correctly, whether or not the language is really defined that way.

When you must teach someone that in a() << b(), a() is called before b(), in a() <<= b(), b() is called before a(), and in a() + b(), a() and b() can be called in either order, you are imposing a cognitive load of meaningless difference, and that will make the student quickly forget the rules.  When you teach someone that in an expression, subexpressions are evaluated left-to-right, you impart a simple and universal rule that no one will forget.

Jeffrey Yasskin

unread,
Jul 13, 2016, 1:27:00 PM7/13/16
to std-pr...@isocpp.org
The sequencing of assignment was mentioned in Oulu, Kona, and Urbana, with << vs <<= being an example. I think BSI and some other folks will be sympathetic to a paper trying to change that, although I don't know which side they'll actually come down on. I don't see any arguments that assignment should be RTL in the notes except maybe something in Kona about developers tending to refactor the RHS. Microsoft may have some evidence one way or the other that didn't come out in discussion. Maybe they put it in one of the series of papers? Start at P0145 in http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/, and go back through the papers it revises.

Please write a dedicated paper about changing the assignment sequencing, rather than mixing it with adding more ordering to other operations. Also write a paper about adding more ordering, but strategically, you should probably wait a meeting for that one, so you don't confuse us about the actually-time-sensitive paper.

Thanks,
Jeffrey

Patrice Roy

unread,
Jul 13, 2016, 1:38:13 PM7/13/16
to std-pr...@isocpp.org
From the paper, a precedes b in the following cases:

  1. a.b
  2. a->b
  3. a->*b
  4. a(b1, b2, b3) // the one that did not reach consensus: the paper proposed imposing b1, then b2, then b3; what got consensus is that evaluation order of b1,b2,b3 is unspecified but the evaluations do not interleave
  5. b @= a // +=, -=, *=, etc.
  6. a[b]
  7. a << b
  8. a >> b
That's what the authors proposed, and that's what was under discussion. I gather you would have preferred to swap a and b in 5., which to me would be surprising (I don't see a << b and a <<= b as leading to the same behavior pattern and I find the difference in expression ordering to match intuition in this case, but we can disagree on this, it's fair). The fact that a() + b() should be ordered for you is something I understand, although I remain happy that we avoided this; note that it was not even proposed, thus it was not discussed, so maybe you can make a push for this and try to convince a sufficient number of individuals for C++20.

As stated before, the best way to make yourself heard at this point for 4. and 5. is probably a NB comment (for modifications in the very near future) or a proposal (for modifications later, although that might be more difficult), with convincing examples and use cases.








Ville Voutilainen

unread,
Jul 13, 2016, 1:42:46 PM7/13/16
to ISO C++ Standard - Future Proposals
On 13 July 2016 at 20:26, 'Jeffrey Yasskin' via ISO C++ Standard -
Future Proposals <std-pr...@isocpp.org> wrote:
> The sequencing of assignment was mentioned in Oulu, Kona, and Urbana, with
> << vs <<= being an example. I think BSI and some other folks will be
> sympathetic to a paper trying to change that, although I don't know which
> side they'll actually come down on. I don't see any arguments that
> assignment should be RTL in the notes except maybe something in Kona about
> developers tending to refactor the RHS. Microsoft may have some evidence one


There was a discussion about that on the reflector.

Quoth 1:
"More specifically, consider
map<int, size_t> m;
m[0] = m.size();
"
Quoth 2:
"We've seen quite a few cases of this:
std::unique_ptr<T> p;
std::map<T*, U> m;
m[p.release()] = f(p.get());
"

Hyman Rosen

unread,
Jul 13, 2016, 1:58:49 PM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 11:36 AM, Ren Industries <renind...@gmail.com> wrote:
Why would the simplicity of teaching anything matter more than anything else? IEEE 754 is incredibly hard to teach, with many odd nuances to it, yet it is used because those nuances lead to better results. Modeling after the mathematical concepts underlying the operations may lead to more difficult to teach, but also has clear benefits, as DV explained.

The nuances of C++ order of evaluation lead to worse results and dubious benefits.

And of course, C++'s implementation of IEEE 754 is broken too, so maybe your analogy is more apt than you realize.
(Broken how?  For example, C++ does not require that the following program succeed:
    #include <assert.h>
    int main() { volatile float i = 1.f; assert(i / 3.f == 1.f / 3.f); }
On my x86 Linux box, it fails when built with -mno-sse but works when SSE instructions are used.)

Edward Catmur

unread,
Jul 13, 2016, 4:03:29 PM7/13/16
to std-pr...@isocpp.org
Huge efforts are put in to get performance increases of far less than 4%. C++ might still be usable at 4% slower, but if it loses enough performance to fall behind its competitors that would be a tragedy. Impressions matter as well as raw numbers; who in a performance-sensitive situation would want to use a language that leaves 4% on the table in favor of being easier to teach and making it easier to write code that no-one should be writing anyway?

Edward Catmur

unread,
Jul 13, 2016, 4:12:23 PM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 6:14 PM, Hyman Rosen <hyman...@gmail.com> wrote:
When you must teach someone that in a() << b(), a() is called before b(), in a() <<= b(), b() is called before a(), and in a() + b(), a() and b() can be called in either order, you are imposing a cognitive load of meaningless difference, and that will make the student quickly forget the rules.  When you teach someone that in an expression, subexpressions are evaluated left-to-right, you impart a simple and universal rule that no one will forget.

What was it H. L. Mencken said? "There is always a well-known solution to every human problem - neat, plausible, and wrong."

Edward Catmur

unread,
Jul 13, 2016, 5:31:24 PM7/13/16
to std-pr...@isocpp.org
I've just seen Jason Merrill's report on Oulu[1]. They found 2% penalty for LTR over RTL on one of the SPEC2006 tests. Again, I consider that more than sufficient reason to keep order of function arguments unspecified.

ezmag...@gmail.com

unread,
Jul 13, 2016, 5:39:57 PM7/13/16
to ISO C++ Standard - Future Proposals
They found 2% penalty for LTR over RTL on one of the SPEC2006 tests.

That's interesting. But why not have the arguments evaluated right to left, then?
Is it because everything else is left to right, and having this particular case be right-to-left might cause confusion?

T. C.

unread,
Jul 13, 2016, 5:48:43 PM7/13/16
to ISO C++ Standard - Future Proposals, ezmag...@gmail.com
On Wednesday, July 13, 2016 at 5:39:57 PM UTC-4, ezmag...@gmail.com wrote:
They found 2% penalty for LTR over RTL on one of the SPEC2006 tests.

That's interesting. But why not have the arguments evaluated right to left, then?
Is it because everything else is left to right, and having this particular case be right-to-left might cause confusion?


This depends on the psABI. Some work better with LTR, some RTL.

D. B.

unread,
Jul 13, 2016, 5:50:25 PM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 10:39 PM, <ezmag...@gmail.com> wrote:
They found 2% penalty for LTR over RTL on one of the SPEC2006 tests.

That's interesting. But why not have the arguments evaluated right to left, then?
Is it because everything else is left to right, and having this particular case be right-to-left might cause confusion?

But "everything else" is not evaluated LTR. This thread is basically all about some people's dislike of the newly specified RTL evaluation of assignment chains.
 

Thiago Macieira

unread,
Jul 13, 2016, 5:53:44 PM7/13/16
to std-pr...@isocpp.org
On Wednesday, July 13, 2016, at 11:14:41 PDT, Hyman Rosen wrote:
> But why? Associativity (and precedence) tell you how to fully parenthesize
> an expression.

Because it's the "happens before" order. When you write:

a() = b() = c() = d();

The c() = d() assignment happens first. So regardless of the order in which c()
and d() are called, I'd expect that the pair gets called before b().

This is especially important if one of them throws. If the call to c() or d()
or the assignment throws, then the rest of the expression is abandoned. Should
a() and b() have been called? I'd argue that, due to the associativity rules,
they shouldn't have.

Think also of how many temporaries need to be kept in memory (let's think
trivial types, for a moment). If they're evaluated right to left, then there's
always one single temporary:

T tmp = d();
tmp = (c() = tmp);
tmp = (b() = tmp);
a() = tmp;

Whereas a left-to-right order would imply keeping more temporaries:

T &tmp_a = a();
T &tmp_b = b();
T &tmp_c = c();
tmp_c = d();
tmp_b = tmp_c;
tmp_a = tmp_b;

This directly leads to code optimisation: how many temporaries need to be kept
in the stack during the expression?

Finally, as a consequence of b() being called after (c(), d()) pair, it
follows that d() should be called before c().

Thiago Macieira

unread,
Jul 13, 2016, 5:58:14 PM7/13/16
to std-pr...@isocpp.org
On Wednesday, July 13, 2016, at 14:39:56 PDT, ezmag...@gmail.com
wrote:
> > They found 2% penalty for LTR over RTL on one of the SPEC2006 tests.
>
> That's interesting. But then why not make the arguments evalued right to
> left then?
> Is it because everything else is left to right and having this particular
> case be right-to-left may cause confusion?

Benchmarks show variance of up to 4%. That means some cases would better off
with LTR, some would be better off with RTL.

The conclusion is that specifying one order would make some cases (half?) worse
and some cases better. So instead of choosing "the lesser of two evils"
(whichever that may be), the committee opted not to choose and to allow
sufficiently smart compilers to order however they see fit, so they could get
the best of both worlds.

ezmag...@gmail.com

unread,
Jul 13, 2016, 6:05:35 PM7/13/16
to ISO C++ Standard - Future Proposals
but "everything else" is not evaluated LTR.

Forgive me, that remark was said thoughtlessly.

I for one would like for there to be a specified order to things, if only to lessen the number of "unspecified" or "implementation defined" areas of C++.
Leaving the order unspecified may be helpful for compiler writers, but it gives the programmer another "I cannot do this" to worry about.

However, if it is a situation where the lack of specification provides significant optimization opportunities, I can understand the decision to keep it unspecified.
It will be good to have measurements such as Jason Merrill's report made in more detail before either decision is made.

Hyman Rosen

unread,
Jul 13, 2016, 6:26:26 PM7/13/16
to std-pr...@isocpp.org
On Wed, Jul 13, 2016 at 5:53 PM, Thiago Macieira <thi...@macieira.org> wrote:
Because it's the "happens before" order. When you write:

        a() = b() = c() = d();

The c() = d() assignment happens first. So regardless of the order in which c()
and d() are called, I'd expect that the pair gets called before b().

When you write

    ((a() = b()) = c()) = d()

the a() = b() assignment happens first.  But the standard (now) still requires the evaluation order d(), c(), b(), a().
It doesn't seem to me that you've made your case.

As for temporaries, 64-bit x86 has 16 64-bit general-purpose registers and 16 128-bit SSE registers, so there's plenty of room to hold them, and they don't need to go onto the stack.  But that's not even important; the amount of code that consists of such multiple assignments is minuscule.  The important thing is to make the language clear and let the compilers do the worrying about generating good translations.

There's this video, Don't Help the Compiler, by Stephan T. Lavavej.  It's directed at programmers.  How much sadder when the language designers are making the same mistake.