I believe i = i++; has undefined behavior, but how do I prove this using [intro.execution]/15?


John Kalane

unread,
Jul 28, 2016, 2:43:03 PM7/28/16
to ISO C++ Standard - Discussion
I believe that

i = i++;

is undefined, given the example (i = i++ + 1;) in [intro.execution]/15. But how do I prove this using that paragraph?


Hyman Rosen

unread,
Jul 28, 2016, 3:55:12 PM7/28/16
to ISO C++ Standard - Discussion
Instead of doing that, you should be campaigning for the C++ Committee to define the order of evaluation of expressions as strict left-to-right, the way Java does. Then you would not need to join the legions of programmers who have uselessly wondered the same thing, or erroneously used such constructs in innocence.

Thiago Macieira

unread,
Jul 28, 2016, 5:07:57 PM7/28/16
to std-dis...@isocpp.org, Hyman Rosen
On quinta-feira, 28 de julho de 2016 12:55:12 PDT Hyman Rosen wrote:
> On Thursday, July 28, 2016 at 2:43:03 PM UTC-4, John Kalane wrote:
> > I believe that
> >
> > i = i++;
> >
> > is undefined given the Example (i = i++ + 1;) in [intro.execution]/15
> > <http://eel.is/c++draft/intro.execution#15>. But I'd like to know how to
> > prove this using this paragraph?
>
> Instead of doing that, you should be campaigning the C++ Committee to
> define order of evaluation of expressions as strict left-to-right, the way
> Java does it, and then you would not need to join the legions of
> programmers who have uselessly wondered the same thing, or erroneously used
> such constructs in innocence.

This has nothing to do with order of evaluation. The problem is that, in this
case, there are two assignments to i in the same statement, and one of them is
implicit.

--
Thiago Macieira - thiago (AT) macieira.info - thiago (AT) kde.org
Software Architect - Intel Open Source Technology Center

Thiago Macieira

unread,
Jul 28, 2016, 5:12:37 PM7/28/16
to std-dis...@isocpp.org, John Kalane
On quinta-feira, 28 de julho de 2016 11:43:03 PDT John Kalane wrote:
> I believe that
>
> i = i++;
>
> is undefined given the Example (i = i++ + 1;) in [intro.execution]/15
> <http://eel.is/c++draft/intro.execution#15>. But I'd like to know how to
> prove this using this paragraph?

Doesn't "If a side effect on a memory location ([intro.memory]) is unsequenced
relative to either another side effect on the same memory location" say
exactly that? In your example, there are two side-effects on the same memory
location, one from the assignment operator and the other from the increment.

Note that this doesn't happen with atomics:

i = 0;
i = i.fetch_add(1) + 1; // i is 1 now and this wasn't atomic
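
(For reference, a self-contained version of that fragment might look like the sketch below; the std::atomic<int> declaration and the assert are assumptions added purely for illustration.)

#include <atomic>
#include <cassert>

int main()
{
    std::atomic<int> i{0};
    // fetch_add returns the old value (0) and atomically stores 1;
    // the assignment then stores 0 + 1 = 1 as a separate atomic operation,
    // so the statement is well defined but is not a single atomic update.
    i = i.fetch_add(1) + 1;
    assert(i == 1);
}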

John Kalane

unread,
Jul 28, 2016, 6:56:47 PM7/28/16
to ISO C++ Standard - Discussion, johnk...@gmail.com


On Thursday, July 28, 2016 at 6:12:37 PM UTC-3, Thiago Macieira wrote:

Doesn't "If a side effect on a memory location ([intro.memory]) is unsequenced
relative to either another side effect on the same memory location" say
exactly that? In your example, there are two side-effects on the same memory
location, one from the assignment operator and the other from the increment.

Note that this doesn't happen with atomics:

        i = 0;
        i = i.fetch_add(1) + 1;        // i is 1 now and this wasn't atomic

But how do you show that the two side effects are unsequenced? 

Thiago Macieira

unread,
Jul 28, 2016, 10:01:34 PM7/28/16
to std-dis...@isocpp.org
Because nothing says that they are sequenced. So they aren't.

bogdan

unread,
Jul 29, 2016, 7:41:47 AM7/29/16
to ISO C++ Standard - Discussion

But after the adoption of P0145R3, this is no longer undefined behaviour, right? i++ is sequenced before the i on the left, and the assignment is sequenced after the value computation of that i; so the side effect of i++ is sequenced before the assignment.

John Kalane

unread,
Jul 29, 2016, 8:11:34 AM7/29/16
to ISO C++ Standard - Discussion


On Thursday, July 28, 2016 at 11:01:34 PM UTC-3, Thiago Macieira wrote:
On quinta-feira, 28 de julho de 2016 15:56:47 PDT John Kalane wrote:
> On Thursday, July 28, 2016 at 6:12:37 PM UTC-3, Thiago Macieira wrote:
> > Doesn't "If a side effect on a memory location ([intro.memory]) is
> > unsequenced
> > relative to either another side effect on the same memory location" say
> > exactly that? In your example, there are two side-effects on the same
> > memory
> > location, one from the assignment operator and the other from the
> > increment.
> >
> > Note that this doesn't happen with atomics:
> >         i = 0;
> >         i = i.fetch_add(1) + 1;        // i is 1 now and this wasn't atomic
>
> But how do you show that the two side effects are unsequenced?

Because nothing says that they are sequenced. So they aren't.

By itself, this statement defies basic principles of mathematical logic, unless it is itself part of the Standard. I may be wrong, but to my knowledge the Standard contains no such statement.

John Kalane

unread,
Jul 29, 2016, 9:07:23 AM7/29/16
to ISO C++ Standard - Discussion
I've just realized that in [intro.execution]/15 we have:

"Except where noted, evaluations of operands of individual operators and of subexpressions of individual
expressions are unsequenced. "

Therefore, Thiago, your answer is correct, unless the expression i = i++; is now well-defined, as bogdan pointed out above.

Thiago Macieira

unread,
Jul 29, 2016, 11:49:22 AM7/29/16
to std-dis...@isocpp.org
On sexta-feira, 29 de julho de 2016 04:41:47 PDT bogdan wrote:
> But after the adoption of P0145R3
> <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0145r3.pdf>, this
> is no longer undefined behaviour, right? i++ is sequenced before the i on
> the left, and the assignment is sequenced after the value computation of
> that i; so the side effect of i++ is sequenced before the assignment.

That paper ensures the evaluation order, but that doesn't have an effect on
the assignment. "Evaluate" means "get the value of", and the value of i++ is i.
You're also right that the increment happens after the value is obtained.

However, the increment and the assignment of the value of i continue to be
unsequenced with respect to each other.

bogdan

unread,
Jul 29, 2016, 1:31:07 PM7/29/16
to ISO C++ Standard - Discussion

On Friday, July 29, 2016 at 6:49:22 PM UTC+3, Thiago Macieira wrote:
That paper ensures the evaluation order, but that doesn't have an effect on
the assignment. "Evaluate" means get the value of and the value of i++ is i.
You're also right that the increment happens after the obtention of the value.

However, the incrementing and the assignment of the value of i continue to be
unsequenced with regards to each other.


[intro.execution]/12: [...] Evaluation of an expression (or a sub-expression) in general includes both value computations (including determining the identity of an object for glvalue evaluation and fetching a value previously assigned to an object for prvalue evaluation) and initiation of side effects. [...]

[intro.execution]/13: [...] An expression X is said to be sequenced before an expression Y if every value computation and every side effect associated with the expression X is sequenced before every value computation and every side effect associated with the expression Y.

[expr.ass]/1: [...] In all cases, the assignment is sequenced after the value computation of the right and left operands, and before the value computation of the assignment expression. The right operand is sequenced before the left operand. [...]


Reading all of the above, it seems to me that the only part that is unsequenced with the assignment is a possible side effect of the lhs expression (only its value computation is guaranteed to be sequenced before the assignment), but we don't have any of that in this case, so I can't see any undefined behaviour. Am I missing something?

John Kalane

unread,
Jul 29, 2016, 1:34:07 PM7/29/16
to ISO C++ Standard - Discussion

On Friday, July 29, 2016 at 12:49:22 PM UTC-3, Thiago Macieira wrote:

That paper ensures the evaluation order, but that doesn't have an effect on
the assignment. "Evaluate" means get the value of and the value of i++ is i.
You're also right that the increment happens after the obtention of the value.

However, the incrementing and the assignment of the value of i continue to be
unsequenced with regards to each other.

But the sentence below is now present in [expr.ass]/1, and it wasn't there before N4606:

"The right operand is sequenced before the left operand."

Thiago Macieira

unread,
Jul 29, 2016, 2:01:04 PM7/29/16
to std-dis...@isocpp.org
On sexta-feira, 29 de julho de 2016 10:31:07 PDT bogdan wrote:
> [intro.execution]/12: [...] *Evaluation* of an expression (or a
> sub-expression) in general includes both value computations (including
> determining the identity of an object for glvalue evaluation and fetching a
> value previously assigned to an object for prvalue evaluation) and
> initiation of side effects. [...]

"initiation of side-effects" does not imply conclusion of said side-effects.

> [intro.execution]/13: [...] An expression *X* is said to be sequenced
> before an expression *Y* if every value computation and every side effect
> associated with the expression *X* is sequenced before every value
> computation and every side effect associated with the expression *Y*.
>
> [expr.ass]/1: [...] In all cases, the assignment is sequenced after the
> value computation of the right and left operands, and before the value
> computation of the assignment expression. The right operand is sequenced
> before the left operand. [...]
>
>
> Reading all of the above, it seems to me that the only part that is
> unsequenced with the assignment is a possible side effect of the lhs
> expression (only its value computation is guaranteed to be sequenced before
> the assignment), but we don't have any of that in this case, so I can't see
> any undefined behaviour. Am I missing something?

We do have a side-effect: the actual saving back to memory of the incremented
value. That is still unsequenced compared to the assignment operator.

To illustrate, think of it as this pseudo code:

tmp1 = load i ; #1, "i"
tmp2 = add i, 1 ; #2, "i++"
save i, tmp1 ; #3a, "i = evaluated #1"
save i, tmp2 ; #3b, "side effect of #2"

#1 and #2 are sequenced among each other, clearly. Also, both #1 and #2 need
to be sequenced before either #3 instruction. The problem is that the two #3
aren't sequenced to each other. The pseudo-code above results in i being
incremented, but it's not the only possibility. The compiler could also
legitimately think:

* #3b before #3a, in which case the value changes from 0 to 1 and then back to
0, non-atomically
* combine #2 and #3b with a memory increment instruction like x86 has, so that
the value changes from 0 to 1 and then to 0, but the first one is atomic
* decide that i = i++ is self-assignment and drop #3a, so it always goes from
0 to 1, atomically

etc.

Thiago Macieira

unread,
Jul 29, 2016, 2:05:03 PM7/29/16
to std-dis...@isocpp.org
On sexta-feira, 29 de julho de 2016 10:34:06 PDT John Kalane wrote:
> But the sentence below is now present in [expr.ass] /1
> <http://eel.is/c++draft/expr.ass#1>, and it wasn't there before N4606:
>
> "The right operand is sequenced before the left operand"

If we take that to mean that the entire right operand, including the
conclusion of the side-effects, is sequenced before the assignment, then

i = i++;

means that i needs to be loaded, incremented, saved to memory, then the old
value should be saved back, overwriting the incremented value. In other words,
the above is self-assignment, except it has visible side-effects for other
threads.

Is that what you were expecting?

John Kalane

unread,
Jul 29, 2016, 4:52:45 PM7/29/16
to ISO C++ Standard - Discussion
On Friday, July 29, 2016 at 3:05:03 PM UTC-3, Thiago Macieira wrote:

If we take that to mean that the entire right operand, including the
conclusion of the side-effects, is sequenced before the assignment, then

        i = i++;

means that i needs to be loaded, incremented, saved to memory, then the old
value should be saved back, overwriting the incremented value. In other words,
the above is self-assignment, except it has visible side-effects for other
threads.

Is that what you were expecting?

Yes. Considering the new wording of [expr.ass]/1, that's what I would expect, at least for the two non-potentially-concurrent actions (side effects) we are observing in this example. I agree with you that the fact that the final value of i doesn't change is weird, but nevertheless this is better than undefined behavior.

bogdan

unread,
Jul 29, 2016, 4:56:04 PM7/29/16
to ISO C++ Standard - Discussion

On Friday, July 29, 2016 at 9:01:04 PM UTC+3, Thiago Macieira wrote:
"initiation of side-effects" does not imply conclusion of said side-effects.

Yes, of course. I included that quote because you said that "'Evaluate' means get the value of" and I wanted to make sure we're on the same page using Standard terminology.

 
We do have a side-effect: the actual saving back to memory of the incremented
value. That is still unsequenced compared to the assignment operator.

I specifically mentioned possible side effects generated by the lhs expression, and there are none of those.
 

To illustrate, think of it as this pseudo code:

        tmp1 = load i          ; #1, "i"
        tmp2 = add i, 1        ; #2, "i++"
        save i, tmp1           ; #3a, "i = evaluated #1"
        save i, tmp2           ; #3b, "side effect of #2"

#1 and #2 are sequenced among each other, clearly. Also, both #1 and #2 need
to be sequenced before either #3 instruction. The problem is that the two #3
aren't sequenced to each other. The pseudo-code above results in i being
incremented, but it's not the only possibility. The compiler could also
legitimately think:

* #3b before #3a, in which case the value changes from 0 to 1 and then back to
   0, non-atomically
* combine #2 and #3b with a memory increment instruction like x86 has, so that
   the value changes from 0 to 1 and then to 0, but the first one is atomic
* decide that i = i++ is self-assignment and drop #3a, so it always goes from
   0 to 1, atomically


Actually, I think there are a couple of steps missing in your pseudo code, and they are important because they correspond to concepts used by the paragraphs I quoted:

a1 = address of i      ; #0  glvalue value computation of the "i" in "i++"
tmp1 = load from (a1)  ; #1  lvalue-to-rvalue conversion on the "i" in "i++"
                       ;     also prvalue value computation of "i++"
tmp2 = add tmp1, 1     ; #2  internal processing of "i++"
save (a1), tmp2        ; #3b side effect of "i++"
a2 = address of i      ; #4  glvalue value computation of the lhs "i"
save (a2), tmp1        ; #3a i = evaluated #1

[expr.ass]/1 tells us that the right operand is sequenced before the left operand, and [intro.execution]/13 tells us what that means: that every side effect of the right operand is sequenced before the value computation of the left operand; this means that #3b must be before #4.

[expr.ass]/1 also tells us that the value computations of the operands are sequenced before the assignment; this means that #4 must be before #3a.

Transitively, #3b must be before #3a.

Which also means that, of the three alternatives you listed, the first two conform to the Standard and the third one does not.
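
To make the consequence concrete, a minimal sketch (assuming i starts at 0):

int i = 0;
i = i++;   // the store of 1 from "++" (#3b) is sequenced before the store of
           // the old value 0 from "=" (#3a), so i is 0 again afterwards;
           // well defined under the new rules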

John Kalane

unread,
Jul 30, 2016, 9:27:21 AM7/30/16
to ISO C++ Standard - Discussion
I agree with your conclusions. But then, the example in [intro.execution]/15 is messed up when it says that

i = v[i++];
i = i++ + 1;

are undefined. Am I correct?

Also, it is not clear to me why

f(i = -1, i = -1);

is undefined. I can understand that

f(i = -1, i = 0);

for example, is undefined, but I cannot extend this conclusion to the prior example, as i will always be equal to -1 at the end of the full expression f(i = -1, i = -1); 

Bo Persson

unread,
Jul 30, 2016, 11:16:32 AM7/30/16
to std-dis...@isocpp.org
On 2016-07-30 15:27, John Kalane wrote:
>
>
> Also, it is not clear to me why
>
> f(i = -1, i = -1);
>
> is undefined. I can understand that
>
>
> f(i = -1, i = 0);
>
> for example, is undefined, but I cannot extend this conclusion to the
> prior example, as i will always be equal to -1 at the end of the full
> expression f(i = -1, i = -1);
>


It is the fact that you write to the same location twice that is the
problem, not whether the two values are different.

Otherwise the result could just be unspecified: one or the other.

A long time ago I saw a reference on one of the C++ news groups to
hardware that would lock up if a memory read met a memory write to the
same address. The C89 committee was aware of such problems and didn't
just invent the undefined behavior.


Bo Persson


bogdan

unread,
Jul 30, 2016, 5:06:18 PM7/30/16
to ISO C++ Standard - Discussion

On Saturday, July 30, 2016 at 4:27:21 PM UTC+3, John Kalane wrote:
I agree with your conclusions. But then, the example in [intro.execution]/15 is messed up, when it says that

i = v[i++];
i = i++ + 1;

are undefined. Am I correct?


Yes, you are correct as far as I can tell. The new rules guarantee that the incremented value is overwritten by the assignment in both cases. However, something like

j = i++ + i++;

is still undefined behaviour, because the side effects of the two increment operations are unsequenced.
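
A side-by-side sketch of the distinction (the initial values are assumptions of mine):

int v[] = {0, 0};
int i = 0, j = 0;
i = v[i++];     // now well defined: the incremented value stored by "++" is
                // overwritten by the assignment
i = i++ + 1;    // likewise well defined, for the same reason
j = i++ + i++;  // still undefined: the two increments of i are unsequenced
                // relative to each other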


Also, it is not clear to me why

f(i = -1, i = -1);

is undefined. I can understand that

f(i = -1, i = 0);

for example, is undefined, but I cannot extend this conclusion to the prior example, as i will always be equal to -1 at the end of the full expression f(i = -1, i = -1); 


If I understand the new rules correctly, neither of these two examples exhibits undefined behaviour any more; the behaviour is merely unspecified. [expr.call]/5 says that the evaluations of the two arguments are indeterminately sequenced, so it is unspecified which is executed first, but the evaluation of whichever goes first is completed before moving on to the other one. In your second example, it is unspecified whether the value of i upon entry to the function is -1 or 0, but the value is guaranteed to be one of those two.
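
As a sketch of the second example (the printing body of f is an assumption of mine, just to make the unspecified value observable):

#include <cstdio>

int i = 0;

void f(int, int)
{
    // C++17: both argument evaluations, including their side effects, are
    // complete before the body runs, but which assignment ran last is
    // unspecified, so this prints either -1 or 0.
    std::printf("%d\n", i);
}

int main()
{
    f(i = -1, i = 0);
}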

Under the old rules, the two evaluations were unsequenced, which meant they could be interleaved. Even for assignment this can be a problem: on some architectures, some stores are implemented using two assembly instructions (a long long store on a purely 32-bit machine, for example). It is not safe for the Standard to make assumptions about what happens when such pairs of instructions are interleaved while writing to the same memory location, even if they store the same value, hence the undefined behaviour.

John Kalane

unread,
Jul 30, 2016, 8:12:00 PM7/30/16
to ISO C++ Standard - Discussion
Many thanks for the comprehensive answer.

bogdan

unread,
Jul 31, 2016, 5:36:57 PM7/31/16
to ISO C++ Standard - Discussion

On Sunday, July 31, 2016 at 3:12:00 AM UTC+3, John Kalane wrote:
Many thanks for the comprehensive answer.

Cheers! And now, to muddy the waters and show what I meant by "possible side effects of the lhs expression" above:

int main()
{
    int v[] = {0, 0};
    int& i = v[0];
    v[i++] = 7; // undefined behaviour
}

Even under the new rules, the side effect of the increment expression is unsequenced relative to the assignment in this case.
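
For contrast, a sketch of one way to express the same steps with defined behaviour (whether this matches the original intent is, of course, a separate question):

int main()
{
    int v[] = {0, 0};
    int& i = v[0];
    int index = i++;  // the increment's store to v[0] is complete here
    v[index] = 7;     // well defined: the write of 7 to v[0] is sequenced
                      // after the increment
}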

John Kalane

unread,
Jul 31, 2016, 6:28:47 PM7/31/16
to ISO C++ Standard - Discussion
No muddy waters. The example was perfect. Thanks again! 

Hyman Rosen

unread,
Aug 8, 2016, 5:29:56 PM8/8/16
to std-dis...@isocpp.org
As always, these lengthy discussions about what is defined behavior and what is not illustrate the necessity of having the language define order of evaluation strictly (and this order should be left-to-right, for simplicity and teachability).  In that case, the muddy water example would evaluate as

    int &ri = i;       // ref to v[0]
    int ti = i;        // 0
    ++ri;              // v[0] set to 1
    int &vi = v[ti];   // ref to v[0]
    vi = 7;            // v[0] set to 7
  


Bo Persson

unread,
Aug 8, 2016, 10:30:53 PM8/8/16
to std-dis...@isocpp.org
On 2016-08-08 23:29, Hyman Rosen wrote:
> As always, these lengthy discussions about what is defined behavior and
> what is not illustrate the necessity for having the language define
> order of evaluation strictly (and this order should be left-to-right,
> for simplicity and teachability). In which case the muddy water example
> would evaluate as
>
> int &ri = i; // ref to v[0]
> int ti = i; // 0
> ++ri; // v[0] set to 1
> int &vi = v[ti]; // ref to v[0]
> vi = 7; // v[0] set to 7
>

This is just extremely bad coding.

We don't want code that does the equivalent of

v[v[0]++] = 7;

because it is useless. Why make it defined?




Bo Persson

unread,
Aug 8, 2016, 10:35:07 PM8/8/16
to std-dis...@isocpp.org
On 2016-08-09 04:30, Bo Persson wrote:
> On 2016-08-08 23:29, Hyman Rosen wrote:
>> As always, these lengthy discussions about what is defined behavior and
>> what is not illustrate the necessity for having the language define
>> order of evaluation strictly (and this order should be left-to-right,
>> for simplicity and teachability). In which case the muddy water example
>> would evaluate as
>>
>> int &ri = i; // ref to v[0]
>> int ti = i; // 0
>> ++ri; // v[0] set to 1
>> int &vi = v[ti]; // ref to v[0]
>> vi = 7; // v[0] set to 7
>>
>
> This is just extremely bad coding.
>
> We don't want code that does the equivalent of
>
> v[v[0]++] = 7;
>
> because it is useless. Why make it defined?

Or rather, why encourage people to write crappy code by changing the
language to make this possible?

Patrice Roy

unread,
Aug 8, 2016, 11:27:59 PM8/8/16
to std-dis...@isocpp.org
Hyman, please write a proposal. I'm sure you have a lot to contribute.


Demi Obenour

unread,
Aug 9, 2016, 1:59:00 AM8/9/16
to std-dis...@isocpp.org

One thing that I think is certain: it should be possible to prove that programs following the C++ Core Guidelines are free of this undefined behavior.

Patrice Roy

unread,
Aug 9, 2016, 10:03:11 PM8/9/16
to std-dis...@isocpp.org
The Core Guidelines are huge. It might take some time to reach this ideal (although I understand the wish).

FrankHB1989

unread,
Aug 11, 2016, 1:55:05 AM8/11/16
to ISO C++ Standard - Discussion
A defined order has nothing to do with teachability, because that belief often comes out of thin air and needs no teaching at all. However, neither physical computers nor abstract machines are, in reality, guaranteed to work that way, for good reasons. Teaching the illusion is in vain, and it makes teaching the truth afterwards even harder. On the contrary, avoiding such naive thinking is exactly what should be taught here. So, as always, you have made things over-complicated again.

On Tuesday, August 9, 2016 at 5:29:56 AM UTC+8, Hyman Rosen wrote:
As always, these lengthy discussions about what is defined behavior and what is not illustrate the necessity for having the language define order of evaluation strictly (and this order should be left-to-right, for simplicity and teachability).  In which case the muddy water example would evaluate as

    int &ri = i;       // ref to v[0]
    int ti = i;        // 0
    ++ri;              // v[0] set to 1
    int &vi = v[ti];   // ref to v[0]
    vi = 7;            // v[0] set to 7
  

FrankHB1989

unread,
Aug 11, 2016, 2:05:41 AM8/11/16
to ISO C++ Standard - Discussion
For this particular case, even if the code itself does not follow any guideline, -Wsequence-point and UBSan may already help a lot.

The skill of spotting smelly code and keeping away from it is still needed, though.

On Wednesday, August 10, 2016 at 10:03:11 AM UTC+8, Patrice Roy wrote:
The Core Guidelines are huge. It might take some time to reach this ideal (although I understand the wish).

Hyman Rosen

unread,
Aug 16, 2016, 2:58:04 PM8/16/16
to std-dis...@isocpp.org
On Mon, Aug 8, 2016 at 11:27 PM, Patrice Roy <patr...@gmail.com> wrote:
Hyman, please write a proposal. I'm sure you have a lot to contribute.

OK, I've made an attempt to write something proposal-like.  It doesn't say anything that I haven't already said in my posts, but I've tried to locate the places in the draft standard (N4606) that need to be updated.  I doubt that this will change anyone's mind.
ooe.txt

Patrice Roy

unread,
Aug 16, 2016, 3:42:59 PM8/16/16
to std-dis...@isocpp.org
Thank you. I'll read this; it's the best way to make the discussion go forward. I appreciate it.

--

Edward Catmur

unread,
Aug 17, 2016, 7:46:35 PM8/17/16
to ISO C++ Standard - Discussion
I think this could be improved by covering more of the main counterarguments, if only to dismiss them as you have for performance. Without consulting the previous discussion, the main points I would suggest you mention are:

* performance (fine, you don't care);
* backwards compatibility;
* divergence from C;
* maintainability;
* style (your proposal would amount to condoning violation of command-query separation, one of the pillars of object-oriented programming).

Demi Obenour

unread,
Aug 17, 2016, 9:53:00 PM8/17/16
to std-dis...@isocpp.org

I agree that it will take time.  Part of my desire comes from my experience with Rust, which has a rule that is basically "there should be no way to cause undefined behavior in safe code — if you do, there is a bug elsewhere".  A larger part, however, comes from a security perspective: undefined behavior often leads to exploits, and conversely a program that is mechanically verified to be free of undefined behavior is also very likely to have far fewer vulnerabilities.

Hyman Rosen

unread,
Aug 23, 2016, 5:27:18 PM8/23/16
to std-dis...@isocpp.org
On Wed, Aug 17, 2016 at 7:46 PM, Edward Catmur <e...@catmur.co.uk> wrote:
I think this could be improved by covering more of the main counterarguments, if only to dismiss them as you have for performance. Without consulting the previous discussion, the main points I would suggest you mention are:

* performance (fine, you don't care);
* backwards compatibility;
* divergence from C;
* maintainability;
* style (your proposal would amount to condoning violation of command-query separation, one of the pillars of object-oriented programming).

Except for performance, P0145 does not mention any of these things and yet was accepted.

AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave); C++ is already divergent from C in order of evaluation (as I described, for braced initializers); and maintainability is unaffected or improved (because removing unspecified behavior can only be a net improvement).

C++ is a multi-paradigm language.  It explicitly makes no attempt to force programmers to adhere to pillars of anything when it comes to style.  It certainly does not force people to use object-oriented programming or to do so in specific ways.

In any case, I have added a section on compatibility with C++ and C (saying that it's not a problem) and revised the section on associativity to stop claiming that P0145 relies on it (in fact, P0145 offers no reason at all why assignments are to be evaluated right-to-left).  The "motivation" section should be sufficient to address maintainability.  I understand that you may not believe that removing unspecified behavior increases maintainability, but there's nothing I can do about that.
ooe.txt

Edward Catmur

unread,
Aug 24, 2016, 5:29:25 PM8/24/16
to std-dis...@isocpp.org
On Tue, Aug 23, 2016 at 10:26 PM, Hyman Rosen <hyman...@gmail.com> wrote:
On Wed, Aug 17, 2016 at 7:46 PM, Edward Catmur <e...@catmur.co.uk> wrote:
I think this could be improved by covering more of the main counterarguments, if only to dismiss them as you have for performance. Without consulting the previous discussion, the main points I would suggest you mention are:

* performance (fine, you don't care);
* backwards compatibility;
* divergence from C;
* maintainability;
* style (your proposal would amount to condoning violation of command-query separation, one of the pillars of object-oriented programming).

Except for performance, P0145 does not mention any of these things and yet was accepted.

It was accepted with the exception of its most controversial part, which is what you are trying (in part) to have a second go at pushing through.

AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this); secondly and more importantly, your proposal would make currently conforming extensions illegal.

You touch on this when you say "vendors may choose to provide options for specifying non-Standard orders"; these options would be non-conforming extensions and would be near-impossible to use safely with compilation units including library or third-party code i.e. pretty much all programs.

C++ is already divergent from C in order of evaluation (as I described, for braced initializers),

Yes, this is unfortunate; I don't see that as a reason to make it worse, but I can also see your point of view.

maintainability is unaffected or improved (because removing unspecified behavior can only be a net improvement).

There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

C++ is a multi-paradigm language.  It explicitly makes no attempt to force programmers to adhere to pillars of anything when it comes to style.  It certainly does not force people to use object-oriented programming or to do so in specific ways.

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.
 
In any case, I have added a section on compatibility with C++ and C (saying that it's not a problem) and revised the section on associativity to stop accusing that P0145 relies on it (in fact, P0145 offers no reason at all why assignments are to evaluated right-to-left). 

Looks good!

The "motivation" section should be sufficient to address maintainability.  I understand that you may not believe that removing unspecified behavior increases maintainability, but there's nothing I can do about that.

That's a mischaracterization of my position, or maybe I've been unclear or hyperbolic. I can accept that there are some benefits to removing unspecified behavior (for example, it simplifies porting between platforms as one does not need to check that the code does not rely on a particular unspecified behavior), but I believe that the overall result is negative.


Hyman Rosen

unread,
Aug 24, 2016, 5:59:05 PM8/24/16
to std-dis...@isocpp.org
I've modified the proposal to add a new section on another problem with P0145 which just occurred to me.  With the adoption of right-to-left requirements on assignments, it is no longer possible to rewrite infix assignments as function calls.  That is, given

    Klass &a();
    Klass &b();

    a() = b();


previously we could mechanically rewrite this assignment as

    a().operator=(b())

and vice versa.  Now we cannot.  The infix form requires that b() be called before a() and the function form requires the opposite.
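
A minimal sketch of the difference, with tracing added (the Klass definition and the function bodies are mine, purely for illustration):

#include <iostream>

struct Klass { int value = 0; };

Klass& a() { std::cout << "a() called\n"; static Klass ka; return ka; }
Klass& b() { std::cout << "b() called\n"; static Klass kb; return kb; }

int main()
{
    a() = b();            // C++17 operator notation: prints "b() called"
                          // then "a() called"
    a().operator=(b());   // function-call notation: prints "a() called"
                          // then "b() called"
}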
ooe.txt

Hyman Rosen

unread,
Aug 24, 2016, 6:55:24 PM8/24/16
to std-dis...@isocpp.org
On Wed, Aug 24, 2016 at 5:29 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
It was accepted with the exception of its most controversial part, which is what you are trying (in part) to have a second go at pushing through.

I will be happy if I can just kill right-to-left assignments.  Everything else is a step towards my ultimate goal of left-to-right everywhere, but right-to-left assignments would kill my dreams forever.
 

AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);

How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.

secondly and more importantly, your proposal would make currently conforming extensions illegal.

I am not aware of any compilers that offer their users options to control order of evaluation.  If there are such, then their vendors should offer objections to the committee and the objections can be considered on the merits.

You touch on this when you say "vendors may choose to provide options for specifying non-Standard orders"; these options would be non-conforming extensions and would be near-impossible to use safely with compilation units including library or third-party code i.e. pretty much all programs.

No, that's false.  Order of evaluation is strictly local and does not affect or carry across interface boundaries.

What could be a problem is code in header files, from inline functions and templates.  This means that vendor extensions to control order of evaluation should appear in the form of pragmas within the code, surrounding sections that need a particular order, rather than as command-line options to the compilation process as a whole.  (In practice none of this is going to happen because no one knowingly has so much order-dependent code that they'll want this.  The order-dependent code they do have is erroneous.)

There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.

My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.

Edward Catmur

unread,
Aug 25, 2016, 11:14:54 AM8/25/16
to std-dis...@isocpp.org
On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);

How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.

I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.

secondly and more importantly, your proposal would make currently conforming extensions illegal.

I am not aware of any compilers that offer their users options to control order of evaluation.  If there are such, then their vendors should offer objections to the committee and the objections can be considered on the merits.

Such an extension does not have to be option-controlled; it can take the form of an explicit or implicit guarantee. I believe this was one of the arguments raised against the rejected part of P0145.

You touch on this when you say "vendors may choose to provide options for specifying non-Standard orders"; these options would be non-conforming extensions and would be near-impossible to use safely with compilation units including library or third-party code i.e. pretty much all programs.

No, that's false.  Order of evaluation is strictly local and does not affect or carry across interface boundaries.

That's why I said "compilation units including library or third-party code". Also, order of evaluation can become non-local in the face of whole-program optimization.

What could be a problem is code in header files, from inline functions and templates.  This means that vendor extensions to control order of evaluation should appear in the form of pragmas within the code, surrounding sections that need a particular order, rather than as command-line options to the compilation process as a whole.

And with lexical rather than semantic scope; that's a pretty serious obstacle to providing such extensions.

 (In practice none of this is going to happen because no one knowingly has so much order-dependant code that they'll want this.  The order-dependant code they do have are errors.)

So why is making the language enforce order such a priority?

There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?

If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.
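
A hypothetical illustration of that point (the function and the call are my own):

void f(int a, double b) {}

int main()
{
    int i = 0;
    f(i++, i);  // Suppose f's parameters are later swapped to
                // void f(double b, int a) and this call is mechanically
                // rewritten to f(i, i++).  With unspecified evaluation order,
                // both forms admit the same set of argument values; with
                // mandated left-to-right evaluation, the original passes
                // a == 0, b == 1 while the rewrite passes b == 0, a == 0,
                // so the rewrite changes the meaning of the program.
}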

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.

My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.

They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.

Greg Marr

unread,
Aug 25, 2016, 5:27:04 PM8/25/16
to ISO C++ Standard - Discussion
On Thursday, August 25, 2016 at 11:14:54 AM UTC-4, Edward Catmur wrote:
On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);

How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.

I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.

Any proposal that changes something from undefined behavior to defined behavior, such as all of P0145, will exhibit this.  If it's a reason to reject this change, then it's also a reason to reject P0145 entirely.
 

 (In practice none of this is going to happen because no one knowingly has so much order-dependant code that they'll want this.  The order-dependant code they do have are errors.)

So why is making the language enforce order such a priority?

To eliminate latent/random errors caused by undefined order.  If code is always order dependent, and the order can't change, then it is not essentially random whether or not the code works.  It either works or it doesn't.
 

There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?

If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.

You don't know whether or not the meaning of the program will change.  For that matter, changing to a different version of the compiler, or even changing optimization switches, can change the behavior of that program.

For any program where that is already not the case, because it truly is order independent, changing the order of the arguments also does not change the meaning of the program.
 

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.

My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.

They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.

As opposed to code that currently silently depends on the ordering provided by...absolutely nothing?  How is that user worse off than before?

Edward Catmur

unread,
Aug 25, 2016, 6:55:06 PM8/25/16
to std-dis...@isocpp.org

On 25 Aug 2016 10:27 p.m., "Greg Marr" <greg...@gmail.com> wrote:
>
> On Thursday, August 25, 2016 at 11:14:54 AM UTC-4, Edward Catmur wrote:
>>
>> On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
>>>>>
>>>>> AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),
>>>>
>>>>
>>>> That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);
>>>
>>>
>>> How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.
>>
>>
>> I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.
>
>
> Any proposal that changes something from undefined behavior to defined behavior, such as all of P0145, will exhibit this.  If it's a reason to reject this change, then it's also a reason to reject P0145 entirely.

No, only any change from unspecified behavior to specified behavior.

>>
>>
>>>  (In practice none of this is going to happen because no one knowingly has so much order-dependant code that they'll want this.  The order-dependant code they do have are errors.)
>>
>>
>> So why is making the language enforce order such a priority?
>
>
> To eliminate latent/random errors caused by undefined order.  If code is always order dependent, and the order can't change, then it is not essentially random whether or not the code works.  It either works or it doesn't.

It works until the code changes or is ported to an older compiler.

>>
>>
>>>> There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.
>>>
>>>
>>> I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?
>>
>>
>> If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.
>
>
> You don't know whether or not the meaning of the program will change.  For that matter, changing to a different version of the compiler, or even changing optimization switches, can change the behavior of that program.

The behavior of a program is not the same as its meaning. A program that prints sizeof(long) behaves differently on Linux than on Windows, but it has the same meaning on both.

> For any program where that is already not the case, because it truly is order independent, then changing the order of the arguments also does not change the meaning of the program.

Which means that you're restricting the class of programs that can easily be modified.

>>
>>
>>>> A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.
>>>
>>>
>>> My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.
>>
>>
>> They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.
>
>
> As opposed to code that currently silently depends on the ordering provided by...absolutely nothing?  How is that user worse off than before?

Currently that dependency is incorrect and presumptively accidental. Under the proposal it would be correct and could be intentional. That's worse, because it would happen more often, and breaking it would mean breaking working code rather than already broken code.

Greg Marr

unread,
Aug 25, 2016, 10:06:47 PM8/25/16
to ISO C++ Standard - Discussion
On Thursday, August 25, 2016 at 6:55:06 PM UTC-4, Edward Catmur wrote:

On 25 Aug 2016 10:27 p.m., "Greg Marr" <greg...@gmail.com> wrote:
>
> On Thursday, August 25, 2016 at 11:14:54 AM UTC-4, Edward Catmur wrote:
>>
>> On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
>>>>>
>>>>> AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),
>>>>
>>>>
>>>> That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);
>>>
>>>
>>> How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.
>>
>>
>> I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.
>
>
> Any proposal that changes something from undefined behavior to defined behavior, such as all of P0145, will exhibit this.  If it's a reason to reject this change, then it's also a reason to reject P0145 entirely.

 

No, only any change from unspecified behavior to specified behavior.


Do you mean that I should have said unspecified instead of undefined, or are you implying something else?

In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done?
 

>>
>>
>>>  (In practice none of this is going to happen because no one knowingly has so much order-dependant code that they'll want this.  The order-dependant code they do have are errors.)
>>
>>
>> So why is making the language enforce order such a priority?
>
>
> To eliminate latent/random errors caused by undefined order.  If code is always order dependent, and the order can't change, then it is not essentially random whether or not the code works.  It either works or it doesn't.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145.
 

>>
>>
>>>> There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.
>>>
>>>
>>> I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?
>>
>>
>> If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.
>
>
> You don't know whether or not the meaning of the program will change.  For that matter, changing to a different version of the compiler, or even changing optimization switches, can change the behavior of that program.

The behavior of a program is not the same as its meaning. A program that prints sizeof(long) behaves differently on Linux to Windows, but it has the same meaning on both.

> For any program where that is already not the case, because it truly is order independent, then changing the order of the arguments also does not change the meaning of the program.

Which means that you're restricting the class of programs that can easily be modified.


In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

 

>>
>>
>>>> A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.
>>>
>>>
>>> My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.
>>
>>
>> They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.
>
>
> As opposed to code that currently silently depends on the ordering provided by...absolutely nothing?  How is that user worse off than before?

Currently that dependency is incorrect and presumptively accidental. Under the proposal it would be correct and could be intentional. That's worse because it would happen more often, and breaking it would be breaking working code rather than breaking already broken code.


I don't follow.  It's okay if someone doesn't want to know what the order is when they can't know what the order is anyway, but not okay when they don't want to know what the order is when it's always one particular order?


I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.
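
For concreteness, here's a minimal sketch of what I mean by needing constant vigilance (the helper functions are made up for illustration):

#include <iostream>

int counter = 0;
int next_id()       { return ++counter; }      // side effect on counter
int twice_next_id() { return 2 * ++counter; }  // side effect on counter

void report(int a, int b) { std::cout << a << ' ' << b << '\n'; }

int main() {
    // The order in which the two arguments are evaluated is unspecified,
    // so this innocent-looking call prints "1 4" with one compiler and
    // "2 2" with another. Nothing in the source warns the reader.
    report(next_id(), twice_next_id());
}

With a defined left-to-right order it would always print "1 4", and the reader would not have to inspect the helpers to predict that.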


Patrice Roy

unread,
Aug 25, 2016, 10:36:48 PM8/25/16
to std-dis...@isocpp.org
The key point, to me, is «Currently that dependency is incorrect and presumptively accidental. Under the proposal it would be correct and could be intentional. That's worse because it would happen more often, and breaking it would be breaking working code rather than breaking already broken code». We accept this proposal and we're moving from «no, this code is broken» to «oh, you're depending on the ordering of argument evaluation, but it's now legal, so we'll have to open up Pandora's box and look into all of the implementations of all the functions you're calling to make sure your dependencies make sense... after all, this code's "right" from now on, so it can't just be discarded». It's the C# mess, really, for those who have had the ... pleasure? Opportunity? Of debugging such code.

That being said, it's not a black and white issue; Hyman's preoccupations are reasonable under his perspective, and I'm grateful he took the time to write his proposal. It still looks to me like the «cure», by making correct a host of code we really, honestly don't want to support, is much worse than the status quo. I'm writing this because I support such code in languages where they thought it was a good idea to give meaning to the ordering of expressions in situations such as those we are debating here. The obligation to support the side-effecting expressions and the... craftiness of people who depend on this ordering is unpleasant, to say the least.

I will re-read the proposal a few times; I said I would, and I respect the effort involved therein. It still looks like a bad idea to me, but I'm glad it's something we can debate on. I know for a fact the committee's not single-minded on this issue, so the fact that someone who believes strongly about this topic is willing to push it is, I think, useful. Whatever happens, this will have been beneficial.

I think we underestimate the impact of supporting the code that would result from this proposal. I really hope those who push for it also push for open source code.

Edward Catmur

unread,
Aug 26, 2016, 6:51:18 AM8/26/16
to std-dis...@isocpp.org
On Fri, Aug 26, 2016 at 3:06 AM, Greg Marr <greg...@gmail.com> wrote:
On Thursday, August 25, 2016 at 6:55:06 PM UTC-4, Edward Catmur wrote:

On 25 Aug 2016 10:27 p.m., "Greg Marr" <greg...@gmail.com> wrote:
>
> On Thursday, August 25, 2016 at 11:14:54 AM UTC-4, Edward Catmur wrote:
>>
>> On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
>>>>>
>>>>> AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),
>>>>
>>>>
>>>> That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);
>>>
>>>
>>> How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.
>>
>>
>> I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.
>
>
> Any proposal that changes something from undefined behavior to defined behavior, such as all of P0145, will exhibit this.  If it's a reason to reject this change, then it's also a reason to reject P0145 entirely.

No, only any change from unspecified behavior to specified behavior.


Do you mean that I should have said unspecified instead of undefined, or are you implying something else?

I can't make your argument for you. If you intended to say unspecified instead of undefined, that undermines your argument as parts of P0145 deal with undefined behavior. If instead you are saying that there is no difference between making unspecified behavior specified and making undefined behavior defined, then I would disagree on that point; tools and procedures can easily deal with undefined behavior (since it is always erroneous) but have to be far more careful around unspecified behavior (since it could be legitimate).
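
To spell out the distinction with a minimal sketch (pre-C++17 rules; the helper names are made up):

int g() { return 1; }
int h() { return 2; }
int f(int a, int b) { return a - b; }

void demo() {
    int i = 0;

    // Undefined behavior before C++17: two unsequenced side effects on i.
    // A tool can flag this unconditionally, because it is always an error.
    i = i++;

    // Unspecified but well-formed: g() and h() may be called in either
    // order. A tool cannot reject this outright, because the code is
    // legitimate whenever the result does not depend on that order.
    int r = f(g(), h());
    (void)r;
}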

In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done? 

Firstly, parts of P0145 deal with undefined behavior; this is safer as there is no danger of a backported program being conformant but having a different (wider) meaning. Secondly, where other parts of P0145 deal with unspecified behavior the vast majority of compilers already happen to behave the same way, and this way is the natural, idiomatic interpretation. There may be yet other parts of P0145 where this does not hold, but they are still not as egregious as the rejected function argument order part.

>>>  (In practice none of this is going to happen because no one knowingly has so much order-dependant code that they'll want this.  The order-dependant code they do have are errors.)
>>
>>
>> So why is making the language enforce order such a priority?
>
>
> To eliminate latent/random errors caused by undefined order.  If code is always order dependent, and the order can't change, then it is not essentially random whether or not the code works.  It either works or it doesn't.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145. 

Not the case: firstly, such code changes are far less likely as they are not idiomatic results of normal code maintenance (such as permuting function arguments or rearranging arithmetic expressions); secondly, older compilers for the most part already happen to behave as expected.

>>>> There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.
>>>
>>>
>>> I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?
>>
>>
>> If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.
>
>
> You don't know whether or not the meaning of the program will change.  For that matter, changing to a different version of the compiler, or even changing optimization switches, can change the behavior of that program.

The behavior of a program is not the same as its meaning. A program that prints sizeof(long) behaves differently on Linux to Windows, but it has the same meaning on both.

> For any program where that is already not the case, because it truly is order independent, then changing the order of the arguments also does not change the meaning of the program.

Which means that you're restricting the class of programs that can easily be modified.


In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

At present there are no correct order-dependent programs (other than those explicitly documenting order dependency as a reliance on conforming extensions), so we do not need to worry about encountering such programs.

>>>> A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.
>>>
>>>
>>> My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.
>>
>>
>> They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.
>
>
> As opposed to code that currently silently depends on the ordering provided by...absolutely nothing?  How is that user worse off than before?

Currently that dependency is incorrect and presumptively accidental. Under the proposal it would be correct and could be intentional. That's worse because it would happen more often, and breaking it would be breaking working code rather than breaking already broken code.


I don't follow.  It's okay if someone doesn't want to know what the order is when they can't know what the order is anyway, but not okay when they don't want to know what the order is when it's always one particular order?

Sure, that makes sense. See also Patrice Roy's reply; he's expressing the core philosophical and pragmatic issue far better than I could.

I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.

In that thread the proposal is to provide an additional API, not to invisibly change the meaning of an existing one that already works perfectly well for its use cases. If you were proposing a new syntax giving a defined order of evaluation I would be perfectly happy to support that proposal.

Hyman Rosen

unread,
Aug 26, 2016, 1:15:33 PM8/26/16
to std-dis...@isocpp.org
I agree that it becomes unwise to compile new order-dependent code on older compilers that don't support the defined order, and that indeed this can cause silent failures.  Nevertheless I believe that it is more beneficial to fix the order on old accidentally order-dependent code that is perhaps causing silent failures right now.  The problem has limited range; it would occur on new programs that deliberately used the newly defined order and failed to use any other features of C++17.  I don't think there will be many such programs.

C++ already did this when, in going from C++03 to C++11, it defined the order of evaluation within braced initializers.  Do you know if similar worries about compiling new code with old compilers were expressed for this change?
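
For concreteness, a minimal sketch of the C++11 rule I am referring to (made-up helpers with visible side effects):

#include <iostream>
#include <vector>

int n = 0;
int g() { std::cout << "g "; return ++n; }
int h() { std::cout << "h "; return ++n; }

void f(int, int) {}

int main() {
    f(g(), h());                    // order of g() and h() is unspecified
    std::vector<int> v{g(), h()};   // C++11: strictly left to right, g() then h()
}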

I agree that it no longer preserves the meaning of code to rearrange the parameter order of a function and then mechanically change calls to rearrange the arguments to match.

I believe that programmers who wish that everyone would code in some particular style and are dismayed when forced to review code that does not match that style have to suck it up, now and in the future.  C++ is a multiparadigm language, and that means that other people will make other choices.  Hell is other people...


"a host of code we really, honestly don't want to support"
But you don't have that choice - people are writing that code right now, despite the fact that it has unspecified or undefined behavior.  The language translators aren't telling people that their code is bad.  They're making an arbitrary choice of one of the possible interpretations of the program and using it, without notifying the programmer of what choice was made or even that a choice was made.  It's better to define the meaning of their code, and then your review can focus on telling them not to do that because you find it to be obscure, rather than not to do that because some compiler years from now will make a different choice and cause systems to behave erroneously.

Patrice Roy

unread,
Aug 26, 2016, 4:15:19 PM8/26/16
to std-dis...@isocpp.org
Regarding this :

"a host of code we really, honestly don't want to support"
But you don't have that choice - people are writing that code right now, despite the fact that it has unspecified or undefined behavior. 

I disagree. When people write code that relies on UB, they're on their own. There are other UB areas in the language, and we don't have to support code that relies on this. I understand the wish to support code that relies on interdependent side-effecting expressions in function call arguments, but I remain glad that we currently do not have to support it. Of course, we are not on the same page here, but that's Ok :)

Obviously, should the situation change, so will our responsibility, and that might help those who hold the belief that these function calls are important to support. On the bright side, since we will have to look deeper in source code to fix bugs we previously could say were simply due to broken assumptions, it will open up opportunities for tooling development (and I'm not being sarcastic, to be honest, as such software is actually fun to write).

Cheers!



--

Hyman Rosen

unread,
Aug 26, 2016, 5:42:34 PM8/26/16
to std-dis...@isocpp.org
On Fri, Aug 26, 2016 at 4:15 PM, Patrice Roy <patr...@gmail.com> wrote:
I disagree. When people write code that relies on UB, they're on their own.

Which I'm sure is of enormous comfort when the self-driving car smashes into a wall.

To repeat the dead-horse flogging, computer programs are written to control computers.  When the programming language allows for multiple legal translations of a program, and the programming environment chooses one arbitrarily without notifying the programmer that this has happened, there is a latent error waiting to manifest.

You are suggesting that it is a good quality of a programming language to contain subtle traps, and that it is satisfying to you to absolve yourself of blame for the presence of those traps when someone falls into them.  I suggest that it is better to remove those traps from the language.

Patrice Roy

unread,
Aug 26, 2016, 8:27:19 PM8/26/16
to std-dis...@isocpp.org
I sincerely hope self-driving cars have better quality control than this :) And to be honest, if self-driving cars have code that would be made better by this proposal, we're already in big trouble. In the original discussion thread, I remember (maybe I'm wrong, you can correct me) that your argument was that some companies can't afford quality control, and that predictable behavior would be better in that case. If a company relies on interdependence between side-effecting expressions in practice, there's a quality control problem even in the face of predictable behavior; the question is which of the two situations is worse, and the fact that the debuggability issue worries me enough to speak against it probably says where I'm coming from.

I understand your point of view, even if I don't agree. Thanks for bringing the debate to a more formal level.

--

Demi Obenour

unread,
Aug 26, 2016, 8:52:21 PM8/26/16
to std-dis...@isocpp.org
On Fri, 2016-08-26 at 20:27 -0400, Patrice Roy wrote:
I sincerely hope self-driving cars have better quality control than this :) And to be honest, if self-driving cars have code that would be made better by this proposal, we're already in big trouble. In the original discussion thread, I remember (maybe I'm wrong, you can correct me) that your argument was that some companies can't afford quality control, and that predictable behavior would be better in that case. If a company relies on interdependence between side-effecting expressions in practice, there's a quality control problem even in the face of predictable behavior; the question is which of the two situations is worse, and the fact that the debuggability issue worries me enough to speak against it probably says where I'm coming from.

I understand your point of view, even if I don't agree. Thanks for bringing the debate to a more formal level.

2016-08-26 17:42 GMT-04:00 Hyman Rosen <hyman...@gmail.com>:
On Fri, Aug 26, 2016 at 4:15 PM, Patrice Roy <patr...@gmail.com> wrote:
I disagree. When people write code that relies on UB, they're on their own.


Which I'm sure is of enormous comfort when the self-driving car smashes into a wall.

To repeat the dead-horse flogging, computer programs are written to control computers.  When the programming language allows for multiple legal translations of a program, and the programming environment chooses one arbitrarily without notifying the programmer that this has happened, there is a latent error waiting to manifest.

You are suggesting that it is a good quality of a programming language to contain subtle traps, and that it is satisfying to you to absolve yourself of blame for the presence of those traps when someone falls into them.  I suggest that it is better to remove those traps from the language.


I honestly hope that self-driving cars are not being programmed in C++, due to traps such as these.  I would rather they be programmed in the SPARK subset of Ada, with formal verification.

I also think that one of the jobs of C++ should be to make it better-suited for life critical software.

Greg Marr

unread,
Aug 26, 2016, 10:39:54 PM8/26/16
to ISO C++ Standard - Discussion
On Friday, August 26, 2016 at 6:51:18 AM UTC-4, Edward Catmur wrote:
On Fri, Aug 26, 2016 at 3:06 AM, Greg Marr <greg...@gmail.com> wrote:
Do you mean that I should have said unspecified instead of undefined, or are you implying something else?

I can't make your argument for you. If you intended to say unspecified instead of undefined, that undermines your argument as parts of P0145 deal with undefined behavior.

I said undefined behavior.  You said unspecified behavior.  I was asking why you said unspecified behavior instead.
 
If instead you are saying that there is no difference between making unspecified behavior specified and making undefined behavior defined, then I would disagree on that point;

I said no such thing.  I'm not the one that brought in unspecified behavior.
 
tools and procedures can easily deal with undefined behavior (since it is always erroneous) but have to be far more careful around unspecified behavior (since it could be legitimate).

If they can easily deal with it, why haven't they thus far?
 

In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done? 

Firstly, parts of P0145 deal with undefined behavior; this is safer as there is no danger of a backported program being conformant but having a different (wider) meaning.

Exactly.  A well defined program backported to an earlier version of the compiler will be silently non-conformant.  There's nothing safe about this at all.
 
Secondly, where other parts of P0145 deal with unspecified behavior the vast majority of compilers already happen to behave the same way, and this way is the natural, idiomatic interpretation. There may be yet other parts of P0145 where this does not hold, but they are still not as egregious as the rejected function argument order part.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145. 

Not the case: firstly, such code changes are far less likely as they are not idiomatic results of normal code maintenance (such as permuting function arguments or rearranging arithmetic expressions); secondly, older compilers for the most part already happen to behave as expected.

You just said above that P0145 makes previously undefined behavior well defined.
Which is it?
If you port it to an older compiler, and it becomes undefined behavior, then it doesn't necessarily work.
 
In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

At present there are no correct order-dependent programs (other than those explicitly documenting order dependency as a reliance on conforming extensions), so we do not need to worry about encountering such programs.

Why do we not need to worry about them?  They exist.  Silently.  They exist for years without people finding them.
 
I don't follow.  It's okay if someone doesn't want to know what the order is when they can't know what the order is anyway, but not okay when they don't want to know what the order is when it's always one particular order?

Sure, that makes sense. See also Patrice Roy's reply; he's expressing the core philosophical and pragmatic issue far better than I could.

Okay.  I disagree completely, so I guess we'll just have to agree to disagree.
 

I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.

In that thread the proposal is to provide an additional API, not to invisibly change the meaning of an existing one that already works perfectly well for its use cases. If you were proposing a new syntax giving a defined order of evaluation I would be perfectly happy to support that proposal.

How is it invisibly changing the meaning of an existing one when the existing one has no defined meaning?  It can be any of the possible meanings.

Greg Marr

unread,
Aug 26, 2016, 10:44:35 PM8/26/16
to ISO C++ Standard - Discussion
On Friday, August 26, 2016 at 1:15:33 PM UTC-4, Hyman Rosen wrote:
I agree that it no longer preserves the meaning of code to rearrange the parameter order of a function and then mechanically change calls to rearrange the arguments to match.

This has never preserved the meaning, unless there was no order dependency among the arguments.
If there was no order dependency among the arguments, then it still preserves the meaning of the code.
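
A minimal sketch of the kind of call I have in mind (made-up names):

#include <cstdio>

int next(int& counter) { return ++counter; }      // side effect: increments
int peek(const int& counter) { return counter; }  // pure read

void show(int a, int b) { std::printf("a=%d b=%d\n", a, b); }

int main() {
    int c = 0;
    // Today the order of evaluation of the two arguments is unspecified,
    // so this prints "a=1 b=0" or "a=1 b=1" depending on the compiler:
    // order-dependent, hence presumptively a bug.
    //
    // Under a defined left-to-right order it would always print "a=1 b=1",
    // and mechanically permuting the call to show(peek(c), next(c)) (say,
    // because the API swapped its parameters) would evaluate peek(c) before
    // next(c) and pass 0 and 1 instead of 1 and 1 -- the mechanical rewrite
    // stops being meaning-preserving.
    show(next(c), peek(c));
}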

"a host of code we really, honestly don't want to support"
But you don't have that choice - people are writing that code right now, despite the fact that it has unspecified or undefined behavior.  The language translators aren't telling people that their code is bad.  They're making an arbitrary choice of one of the possible interpretations of the program and using it, without notifying the programmer of what choice was made or even that a choice was made.  It's better to define the meaning of their code, and then your review can focus on telling them not to do that because you find it to be obscure, rather than not to do that because some compiler years from now will make a different choice and cause systems to behave erroneously.

It's not even 'years from now'.  Different compilers make different choices NOW.

Greg Marr

unread,
Aug 26, 2016, 10:52:25 PM8/26/16
to ISO C++ Standard - Discussion
On Friday, August 26, 2016 at 4:15:19 PM UTC-4, Patrice Roy wrote:
Regarding this :

"a host of code we really, honestly don't want to support"
But you don't have that choice - people are writing that code right now, despite the fact that it has unspecified or undefined behavior. 

I disagree. When people write code that relies on UB, they're on their own. There are other UB areas in the language, and we don't have to support code that relies on this.

I'm curious, who is 'we'?  The standards committee?  The C++ community?  Someone else?

Do you think that P0145 should not have been accepted?
After all, it takes things that were UB and makes them well defined.
If you agree with it, why are those particular parts of the UB worth changing, and not the rest?

Obviously, should the situation change, so will our responsibility, and that might help those who hold the belief that these function calls are important to support. On the bright side, since we will have to look deeper in source code to fix bugs we previously could say were simply due to broken assumptions, it will open up opportunities for tooling development (and I'm not being sarcastic, to be honest, as such software is actually fun to write).

Where can we find the tooling that warns people about this UB?

Patrice Roy

unread,
Aug 26, 2016, 11:02:33 PM8/26/16
to std-dis...@isocpp.org
The «we» I had in mind is anyone who has to support the code. The current situation does not require support: just don't write code that has interdependent side-effecting arguments, as it's not correct C++ code. The proposed change makes such code valid, which means (to me, at least) that it has to be supported, which implies investigation when it's written (it did not make sense before and could be rejected on that ground; if it makes sense, it can be unpleasant and bad but it's supported and we're stuck with it).

That's why I stated I hope people who push for this change (and I don't see it as a breaking change; I'm with Hyman on this, as to me, it would only break already broken code, but that's not what worries me) also push for open source code, since we'll have to delve into the reasons behind the behavior of code we could have just rejected outright as broken previously. This is much easier to do with source code. All of it.

I have to do this for C#-using companies today. It's unpleasant to say the least. The reasonable thing to do is not to write such code in the first place, as it's a severe maintenance tarpit, but C# makes it «sensible» so people write code based on it, and expect it to work. Interdependence between expressions can go deep, so it can become a significant debugging party. I personally appreciate being able to say it's ill-formed C++ code and tell people to refactor instead.

That's just me, but I wanted to clarify what I meant by «we».

As for the other main question, I was for the rest of P0145. The problem case was the one we (this «we» is the wg21 members in Oulu) rejected. The others made sense (to me) and the code they support is code that, to me, should be supported.

Greg Marr

unread,
Aug 26, 2016, 11:24:01 PM8/26/16
to ISO C++ Standard - Discussion
On Friday, August 26, 2016 at 11:02:33 PM UTC-4, Patrice Roy wrote:
The «we» I had in mind is anyone who has to support the code.

Thanks for the clarification.
 
The current situation does not require support: just don't write code that has interdependent side-effecting arguments, as it's not correct C++ code.

How do you detect such situations with 100% accuracy?
 
The proposed change makes such code valid, which means (to me, at least) that it has to be supported,

There is plenty of code which is valid according to the standard but should not pass code review.
 
which implies investigation when it's written (it did not make sense before and could be rejected on that ground; if it makes sense, it can be unpleasant and bad but it's supported and we're stuck with it).

I would hope that's not true, and that any group that is smart enough to be able to detect order-dependent code with 100% accuracy is also smart enough to reject code that is valid according to the standard but is simply bad form.
 
That's why I stated I hope people who push for this change (and I don't see it as a breaking change; I'm with Hyman on this, as to me, it would only break already broken code, but that's not what worries me) also push for open source code, since we'll have to delve into the reasons behind the behavior of code we could have just rejected outright as broken previously. This is much easier to do with source code. All of it.

If you could reject it as outright broken, then you should also be able to reject it as bad form, or a maintenance headache, or simply hard to understand.
 
I have to do this for C#-using companies today. It's unpleasant to say the least. The reasonable thing to do is not to write such code in the first place, as it's a severe maintenance tarpit, but C# makes it «sensible» so people write code based on it, and expect it to work. Interdependence between expressions can go deep, so it can become a significant debugging party. I personally appreciate being able to say it's ill-formed C++ code and tell people to refactor instead.

Do you only tell people to refactor code that is ill-formed, or also code that is well formed, but hard to understand, or bug-prone, such as using naked new and delete?

That's just me, but I wanted to clarify what I meant by «we».

Thanks, I appreciate it.
 
As for the other main question, I was for the rest of P0145. The problem case was the one we (this «we» is the wg21 members in Oulu) rejected. The others made sense (to me) and the code they support is code that, to me, should be supported.

So your objection is just that the formerly-ill-formed code that is now well defined is code that you think is good form, but this code is code that you think is not good form.  Since it's not good form, it should never be written, and thus it's okay that it is not well defined, and no diagnostic is required, because people should just know not to write it?  Given that people are still inadvertently writing this code that is not good form, after several decades, how likely do you think it is that people will suddenly stop writing it?

Patrice Roy

unread,
Aug 27, 2016, 8:13:37 AM8/27/16
to std-dis...@isocpp.org
Interesting.

It would be possible to build similar arguments to the one you're making about good form / bad form / people write it anyway so let's support it for a host of other features (you mention memory management, for example). I don't think C++ is the tool for people who don't put the time to manage memory with care, using the modern techniques and tools we have; whenever someone brings up a suggestion for a feature that would protect our users from their own «negligence», there are reactions due to possible runtime costs, C++ being C++ (and C++ users typically paying attention to those factors). When we have good tools to help us write better code at essentially no cost, like unique_ptr, it's a step ahead, but I don't necessarily see us patching things up for people who use naked new / improperly use delete, to take your own example, and making such code leak-free with properly-finalized objects. Correct me if you think I'm wrong, of course, but I think we instead propose (hopefully) better tools to manage memory efficiently, and static analysis / sanitizer tools to detect problematic code patterns.

Some (Chandler? I'm not sure) have suggested adding ways that compilers can shuffle the order of evaluation of sub-expressions in function calls, to help detect side-effect-interdependency in suspect source code. That might be more in line with what we do for fragile memory management today. That would not yield 100% accuracy, but I would not have counted on that anyway; still, it seems to me an interesting approach to the problem of fragile function calls.

There's still fragile memory management in code being written today. I would not push for features that encourage this on the basis that «people do it», as there are unwanted consequences that follow (like, people keeping on doing it, and the code we end up supporting thereafter, not to mention runtime costs). I hold the same approach here: we know there are runtime costs to imposing an ordering in some (not all) cases, even though these costs affect some communities strongly and some other communities see them as negligible (matter of perspective and application domain). We agree that there's code we would not want to get our hands stuck into as it will be a debugging mess. The approach suggested is to turn this code from incorrect to supported and well-defined. I don't think it's how we should approach this problem (with an exception: someone -- was it Arthur? -- suggested an opt-in clause, maybe an attribute, for cases where one willingly states a specific ordering for subexpressions, which could help when using a suspect third-party API; since the impact would be local and explicit, I would not oppose it).



Demi Obenour

unread,
Aug 28, 2016, 10:24:08 PM8/28/16
to std-dis...@isocpp.org
I think that everyone is missing the point: compilers treat undefined
behavior as unreachable code, and one NEVER, EVER wants this to happen
in the case of `*i = i++`. So the behavior should be at worst
unspecified.
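
A minimal sketch of the kind of optimization I mean (whether a particular compiler actually performs it at a given optimization level is, of course, up to the implementation):

#include <cstdio>

void use(int* p) {
    int value = *p;         // undefined behavior if p is null, so the
                            // compiler may assume p != nullptr from here on
    if (p == nullptr) {     // ...and may then treat this branch as
        std::puts("null");  // unreachable and delete the check entirely
        return;
    }
    std::printf("%d\n", value);
}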

I believe that the main problem with C++ is that it has FAR TOO MANY
undefined behaviors. UB in C and C++ is a major source of security
vulnerabilities. We MUST NOT add any new UB, and must strive to
eliminate it wherever we can.

Greg Marr

unread,
Aug 28, 2016, 11:29:05 PM8/28/16
to ISO C++ Standard - Discussion
On Saturday, August 27, 2016 at 6:13:37 AM UTC-6, Patrice Roy wrote:
It would be possible to build similar arguments to the one you're making about good form / bad form / people write it anyway so let's support it for a host of other features (you mention memory management, for example).

We've found that people have a very hard time getting memory management right, so we created tools and added language/library functionality to make it easy to get right.

We've found that people have a very hard time writing code that doesn't run into order of operations issues, so we should create tools and add language/library functionality to make it easy to get right.
 
whenever someone brings up a suggestion for a feature that would protect our users from their own «negligence», there are reactions due to possible runtime costs, C++ being C++ (and C++ users typically paying attention to those factors). When we have good tools to help us write better code at essentially no cost, like unique_ptr, it's a step ahead, but I don't necessarily see patching up for people who use naked new / improperly used delete, to take your own example, and making such code leak-free with properly-finalized objects. Correct me if you think I'm wrong, of course, but I think we instead propose (hopefully) better tools to manage memory efficiently, and static analysis / sanitizer tools to detect problematic code patterns.

Exactly, we made the language better to help users eliminate the memory problems.  So far, there are no tools to help with this particular problem.

Given how widespread this problem is, and the fact that it affects even the experts, I would hardly call it negligence.  True that there are times that it is, but not always.

Some (Chandler? I'm not sure) have suggested adding ways that compilers can shuffle the order of evaluation of sub-expressions in function calls, to help detect side-effect-interdependency in suspect source code. That might be more in line with what we do for fragile memory management today. That would not yield 100% accuracy, but I would not have counted on that anyway, but it seems to me as being an interesting approach to the problem of fragile function calls.

As you said, it's not 100%, so there will still always be the potential that users unknowingly write things that can't be detected.

There's still fragile memory management in code being written today. I would not push for features that encourage this on the basis that «people do it», as there are unwanted consequences that follow (like, people keeping on doing it, and the code we end up supporting thereafter, not to mention runtime costs).

Exactly, we provided ways to detect these errors, and a way to write the code such that these problems don't exist.  We haven't yet done the same thing for this issue.
 
I hold the same approach here: we know there are runtime costs to imposing an ordering in some (not all) cases, even though these costs affect some communities strongly and some other communities see them as negligible (matter of perspective and application domain).

As I recall, so far we've seen one benchmark that showed variance in timing in a simple test in a single compiler that merely reversed the order of evaluation with no attempts to tune the new order.  Do you have better data than that where it shows a significant variance in a significant codebase with a mature compiler implementation?

We agree that there's code we would not want to get our hands stuck into as it will be a debugging mess. The approach suggested is to turn this code from incorrect to supported and well-defined. I don't think it's how we should approach this problem (with an exception: someone -- was it Arthur? -- suggested an opt-in clause, maybe an attribute, for cases where one willingly states a specific ordering for subexpressions, which could help when using a suspect thrid-party API; since the impact would be local and explicit, I would not oppose it).

If you can detect and eliminate all occurrences of the currently incorrect code, you can also detect and eliminate all occurrences of newly well-defined code where the result of a single statement would vary with different orderings, as it can be hard to understand.  As we determined above with memory management, just because it's well-defined doesn't mean that it's supported by the community in general.

I wonder how much of 'writing code that depends on a particular order of operations is bad, period' is because the order isn't currently well-defined.  Would we still think it's 100% bad if C or C++ had defined order of operations completely from the beginning?

Patrice Roy

unread,
Aug 29, 2016, 7:14:26 AM8/29/16
to std-dis...@isocpp.org
If I may : most of the even-for-experts problems you are (I think) referring to were solved in Oulu and did not meet resistance from the committee. The remaining case, discussed in this paper, meets resistance because it's quite different, has noticeable costs, and «solutions» make people less enthusiastic (to say the least).

To pursue the memory-management analogy you brought up, there would probably be less resistance if the «solution» was opt-in, as the memory management solutions are, such that it does not impact code that does not depend on interdependent subexpressions in function calls. It would also have the upside of identifying such code, the way C++ casts help identify manoeuvres that require explicit and visible programmer assent, which could be helpful in cleaning up such code afterward. :)
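
Purely as an illustration of what such an opt-in marker might look like -- the attribute below is invented for this sketch, it is not part of any proposal or compiler:

// Hypothetical: [[ordered]] would request strict left-to-right evaluation
// of the arguments of this one call, making the dependency explicit and
// visible, much like a named cast does.
int next_token();
void emit(int first, int second);

void demo() {
    [[ordered]] emit(next_token(), next_token());
}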

Cheers!

Hyman Rosen

unread,
Aug 29, 2016, 4:34:13 PM8/29/16
to std-dis...@isocpp.org
On Fri, Aug 26, 2016 at 8:27 PM, Patrice Roy <patr...@gmail.com> wrote:
I remember (maybe I'm wrong, you can correct me) that your argument was that some companies can't afford quality control, and that predictable behavior would be better in that case.

As far as I can recall, my argument has always been that it is very easy to write order-dependent code without realizing that you have done so, when the programming environment chooses to translate your program in the way you expected.  Then a different environment can choose a different translation, possibly years later, and the program will mysteriously misbehave.

--
std::cout << begin_simulation() << robots.kill(humans) << "\n";

Hyman Rosen

unread,
Aug 29, 2016, 4:37:25 PM8/29/16
to std-dis...@isocpp.org
On Fri, Aug 26, 2016 at 10:44 PM, Greg Marr <greg...@gmail.com> wrote:
On Friday, August 26, 2016 at 1:15:33 PM UTC-4, Hyman Rosen wrote:
I agree that it no longer preserves the meaning of code to rearrange the parameter order of a function and then mechanically change calls to rearrange the arguments to match.

This has never preserved the meaning, unless there was no order dependency among the arguments.
If there was no order dependency among the arguments, then it still preserves the meaning of the code.

Absent my proposal, it is legal to mechanically rearrange the order of function parameters and arguments.  If my proposal is adopted, it will no longer be possible to do so.

Patrice Roy

unread,
Aug 29, 2016, 8:27:01 PM8/29/16
to std-dis...@isocpp.org
I admit that the mixing of << operators and function calls is a dirty gotcha :) I would not hire the guy (or girl) who orders robots.kill(humans) in a sequence of output commands ;) but good point for you. It's minor, but it's a tricky one.
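
To spell the gotcha out as a minimal sketch (made-up helpers standing in for the signature's begin_simulation / robots.kill):

#include <iostream>

bool started = false;
int begin_simulation() { started = true; return 0; }  // must run first
int robots_killed()    { return started ? 3 : -1; }   // assumes setup already ran

int main() {
    // Before C++17 the two calls may be evaluated in either order, even
    // though the results are inserted into the stream left to right, so
    // robots_killed() can observe started == false. (The shift-operator
    // sequencing adopted from P0145 defines this particular case in C++17.)
    std::cout << begin_simulation() << " " << robots_killed() << "\n";
}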

--

Greg Marr

unread,
Aug 29, 2016, 10:02:05 PM8/29/16
to ISO C++ Standard - Discussion
Since, as I understand it, we weren't talking about a compiler doing it, but a refactoring tool, there's nothing really legal or not about it. There is only "does this preserve the meaning of the program?"

My point was that mechanically changing f(a, b, c) to f(c, b, a) before now would only preserve the meaning of the program if there was no order dependency.
If there is no order dependency, then someone can continue to refactor the program without changing the meaning, no matter what the standard says about the order of evaluation.

Edward Catmur

unread,
Aug 30, 2016, 6:05:40 AM8/30/16
to std-dis...@isocpp.org
On Sat, Aug 27, 2016 at 3:39 AM, Greg Marr <greg...@gmail.com> wrote:
On Friday, August 26, 2016 at 6:51:18 AM UTC-4, Edward Catmur wrote:
On Fri, Aug 26, 2016 at 3:06 AM, Greg Marr <greg...@gmail.com> wrote:
Do you mean that I should have said unspecified instead of undefined, or are you implying something else?

I can't make your argument for you. If you intended to say unspecified instead of undefined, that undermines your argument as parts of P0145 deal with undefined behavior.

I said undefined behavior.  You said unspecified behavior.  I was asking why you said unspecified behavior instead.

If you'd said that in the first place, maybe there wouldn't have been this confusion. I said unspecified behavior instead because making undefined behavior defined is less dangerous than making unspecified behavior specified.

If instead you are saying that there is no difference between making unspecified behavior specified and making undefined behavior defined, then I would disagree on that point;

I said no such thing.  I'm not the one that brought in unspecified behavior.

Yes. I brought in unspecified behavior because function argument evaluation order is currently unspecified, so that's more relevant to this discussion.

tools and procedures can easily deal with undefined behavior (since it is always erroneous) but have to be far more careful around unspecified behavior (since it could be legitimate).

If they can easily deal with it, why haven't they thus far? 

You mean yours don't? You don't use -Wreorder, -Wsequence-point, or code reviews?
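
For instance, a minimal sketch of what -Wsequence-point catches (exact wording and coverage vary by compiler and version; clang's closest equivalent is -Wunsequenced):

// g++ -Wall -Wsequence-point -c demo.cpp
void demo(int i, int a[]) {
    i = i++ + 1;   // warning: operation on 'i' may be undefined
    a[i] = i++;    // warning: operation on 'i' may be undefined
}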


In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done? 

Firstly, parts of P0145 deal with undefined behavior; this is safer as there is no danger of a backported program being conformant but having a different (wider) meaning.

Exactly.  A well defined program backported to an earlier version of the compiler will be silently non-conformant.  There's nothing safe about this at all.

That depends on the compiler. No diagnostic required does not mean no diagnostic allowed.

Secondly, where other parts of P0145 deal with unspecified behavior the vast majority of compilers already happen to behave the same way, and this way is the natural, idiomatic interpretation. There may be yet other parts of P0145 where this does not hold, but they are still not as egregious as the rejected function argument order part.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145. 

Not the case: firstly, such code changes are far less likely as they are not idiomatic results of normal code maintenance (such as permuting function arguments or rearranging arithmetic expressions); secondly, older compilers already for the most part already happen to behave as expected.

You just said above that P0145 makes previously undefined behavior well defined.
Which is it?
If you port it to an older compiler, and it becomes undefined behavior, then it doesn't necessarily work. 

It does if the older compiler either rejects the code or happens, by accident, to give it the intended semantics.

In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

At present there are no correct order-dependent programs (other than those explicitly documenting order dependency as a reliance on conforming extensions), so we do not need to worry about encountering such programs.

Why do we not need to worry about them?  They exist.  Silently.  They exist for years without people finding them.

If they really exist in such numbers, their existence will prevent updating to whatever version of C++ implements your proposal. I don't want that to happen.

I don't follow.  It's okay if someone doesn't want to know what the order is when they can't know what the order is anyway, but not okay when they don't want to know what the order is when it's always one particular order?

Sure, that makes sense. See also Patrice Roy's reply; he's expressing the core philosophical and pragmatic issue far better than I could.

Okay.  I disagree completely, so i guess we'll just have to agree to disagree.
 

I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.

In that thread the proposal is to provide an additional API, not to invisibly change the meaning of an existing one that already works perfectly well for its use cases. If you were proposing a new syntax giving a defined order of evaluation I would be perfectly happy to support that proposal.

How is it invisibly changing the meaning of an existing one when the existing one has no defined meaning?  It can be any of the possible meanings.

It has a perfectly well defined meaning. Unspecified behavior is not the same as undefined behavior.

Edward Catmur

unread,
Aug 30, 2016, 6:17:32 AM8/30/16
to std-dis...@isocpp.org
On Fri, Aug 26, 2016 at 10:42 PM, Hyman Rosen <hyman...@gmail.com> wrote:
On Fri, Aug 26, 2016 at 4:15 PM, Patrice Roy <patr...@gmail.com> wrote:
I disagree. When people write code that relies on UB, they're on their own.

Which I'm sure is of enormous comfort when the self-driving car smashes into a wall.

To repeat the dead-horse flogging, computer programs are written to control computers.  When the programming language allows for multiple legal translations of a program, and the programming environment chooses one arbitrarily without notifying the programmer that this has happened, there is a latent error waiting to manifest.

And to continue along the same vein: high-level computer programs are not written to control computers, they are written to guide them. If you want to control a computer, learn to write ARM assembly, because x86 machine code probably has too many legal interpretations for you to feel comfortable.

Edward Catmur

unread,
Aug 30, 2016, 6:23:48 AM8/30/16
to std-dis...@isocpp.org
On Mon, Aug 29, 2016 at 3:24 AM, Demi Obenour <demio...@gmail.com> wrote:
I think that everyone is missing the point: compilers treat undefined
behavior as unreachable code, and one NEVER, EVER wants this to happen
in the case of `*i = i++`.  So the behavior should be at worst
unspecified.

I'm not too sure about that. Perhaps if that is written verbatim in a source file then treating it as unreachable is undesired, but it could still have been generated by a higher-level tool in which case that would be OK. If that code arises as a result of the preprocessor or template machinery it could be perfectly legitimate. And when one or both of the variables is a reference, I actively want the compiler to conclude that the reference does not alias.

I believe that the main problem with C++ is that it has FAR TOO MANY
undefined behaviors.  UB in C and C++ is a major source of security
vulnerabilities.  We MUST NOT add any new UB, and must strive to
eliminate it wherever we can.

Plenty of code does not need to run in hostile environments. Eliminating security vulnerabilities is a worthy goal, but it is only one of many and should not be pursued at the expense of equally important use cases.

Edward Catmur

unread,
Aug 30, 2016, 6:29:49 AM8/30/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 3:02 AM, Greg Marr <greg...@gmail.com> wrote:
On Monday, August 29, 2016 at 2:37:25 PM UTC-6, Hyman Rosen wrote:
On Fri, Aug 26, 2016 at 10:44 PM, Greg Marr <greg...@gmail.com> wrote:
On Friday, August 26, 2016 at 1:15:33 PM UTC-4, Hyman Rosen wrote:
I agree that it no longer preserves the meaning of code to rearrange the parameter order of a function and then mechanically change calls to rearrange the arguments to match.

This has never preserved the meaning, unless there was no order dependency among the arguments.
If there was no order dependency among the arguments, then it still preserves the meaning of the code.

Absent my proposal, it is legal to mechanically rearrange the order of function parameters and arguments.  If my proposal is adopted, it will no longer be possible to do so.

Since, as I understand it, we weren't talking about a compiler doing it, but a refactoring tool, there's nothing really legal or not about it. There is only "does this preserve the meaning of the program?"

My point was that mechanically changing f(a, b, c) to f(c, b, a) before now would only preserve the meaning of the program if there was no order dependency.

It preserves the meaning of the program regardless. It is possible for the original author of the program to be mistaken about its meaning.

If there is no order dependency, then someone can continue to refactor the program without changing the meaning, no matter what the standard says about the order of evaluation.

That requires excessive work for the maintainer to ascertain that the meaning is preserved. Programming languages should be designed for maintainability over writeability.

FrankHB1989

unread,
Aug 30, 2016, 9:07:49 AM8/30/16
to ISO C++ Standard - Discussion


On Thursday, August 25, 2016 at 6:55:24 AM UTC+8, Hyman Rosen wrote:
On Wed, Aug 24, 2016 at 5:29 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
It was accepted with the exception of its most controversial part, which is what you are trying (in part) to have a second go at pushing through.

I will be happy if I can just kill right-to-left assignments.  Everything else is a step towards my ultimate goal of left-to-right everywhere, but right-to-left assignments would kill my dreams forever.
 
A dream based on disciplining others without giving reasons will never successfully become the public consensus.

And whatever you have believed, you have met the wrong enemy. Disorder is the truth of the universe. Bounded determinism (i.e. so-called "concurrency") is one weak form of it. As a matter of fact, most users expect a general-purpose language to be expressive about such things, and even the old rules of the language did not handle it in a neat way -- otherwise, why would `kill_dependency` be needed?

You just can't fight them with your dream, which is a trivial special case (though maybe the most well-known one). That dream may come true more easily in a language that is not expected to be so general-purpose.
 

AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);

How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.

secondly and more importantly, your proposal would make currently conforming extensions illegal.

I am not aware of any compilers that offer their users options to control order of evaluation.  If there are such, then their vendors should offer objections to the committee and the objections can be considered on the merits.

You touch on this when you say "vendors may choose to provide options for specifying non-Standard orders"; these options would be non-conforming extensions and would be near-impossible to use safely with compilation units including library or third-party code i.e. pretty much all programs.

No, that's false.  Order of evaluation is strictly local and does not affect or carry across interface boundaries.

What could be a problem is code in header files, from inline functions and templates.  This means that vendor extensions to control order of evaluation should appear in the form of pragmas within the code, surrounding sections that need a particular order, rather than as command-line options to the compilation process as a whole.  (In practice none of this is going to happen because no one knowingly has so much order-dependent code that they'll want this.  The order-dependent code they do have is erroneous.)
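
(For illustration only, a minimal sketch of what such an in-code escape hatch could look like.  The pragma spelling is hypothetical, not an existing vendor extension; compilers typically ignore unknown pragmas, usually with a warning, so the snippet still compiles today.)

    int g();
    int h();
    int f(int, int);

    int use()
    {
        #pragma evaluation_order unspecified    // hypothetical: let the optimizer pick any order here
        int fast = f(g(), h());
        #pragma evaluation_order left_to_right  // hypothetical: order-sensitive section wants the fixed rule
        int strict = f(g(), h());
        return fast + strict;
    }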

There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.

My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.
Forcing ordered evaluation does go against several kinds of paradigms that rely on explicit nondeterminism in evaluation, by forcing on users a dilemma between nonconformance and bad implementations (in general).
Note that the order does not need to be determined during translation, even in a static language like C++. The order can therefore vary between runs of the same program, so the claim that programs "always used some order of evaluation" can be nonsense.

 

FrankHB1989

unread,
Aug 30, 2016, 9:35:25 AM8/30/16
to ISO C++ Standard - Discussion


On Friday, August 26, 2016 at 5:27:04 AM UTC+8, Greg Marr wrote:
On Thursday, August 25, 2016 at 11:14:54 AM UTC-4, Edward Catmur wrote:
On Wed, Aug 24, 2016 at 11:55 PM, Hyman Rosen <hyman...@gmail.com> wrote:
AFAIK, my proposal is backwards-compatible with Standard C++ (in that a conforming compiler is already allowed to behave in the way I want all compilers to behave),

That is only one aspect of backwards compatibility. Other aspects that your proposal violates are firstly, that a conforming program accepted by a conforming compiler should not change meaning when accepted by a compiler conforming to an older version of the Standard (admittedly, P0145 already breaks this);

How does my proposal violate this?  A program that is conforming in both my proposal and in Standard C++ (before P0145) does not lose meaning in going from the latter to the former, unless you believe that programmers are deliberately expressing lack of ordering and are distressed to have that go away.  I do not believe that any but a vanishingly small number of programmers do this, nor that it is important to be able to do so.

I'm talking about moving in the other direction; from the former to the latter. Programs developed using a later compiler are often processed by an earlier compiler, and your proposal would silently break such programs.

Any proposal that changes something from undefined behavior to defined behavior, such as all of P0145, will exhibit this.  If it's a reason to reject this change, then it's also a reason to reject P0145 entirely.
 
True.

It would have been better if P0145 had not been accepted, to eliminate the excuse to change `p->q` to `(*p).q` everywhere without the aid of some nonconforming extensions.

And even in the other direction you get things wrong: the source program may be semantically equivalent under the two sets of abstract machine rules when there is no UB, but not necessarily equivalent in meaning, which depends on things encoded in the code that cannot be governed by the language specification or by the concrete behavior of any execution.


 (In practice none of this is going to happen because no one knowingly has so much order-dependent code that they'll want this.  The order-dependent code they do have is erroneous.)

So why is making the language enforce order such a priority?

To eliminate latent/random errors caused by undefined order.  If code is order dependent but the order can't change, then it is not essentially random whether or not the code works.  It either works or it doesn't.
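
(As an illustration of the latent order dependence in question, a minimal sketch using a made-up step() function.  The relative order of the two argument evaluations is unspecified under the current rules, so conforming compilers may legitimately disagree.)

    #include <cstdio>

    static int counter = 0;
    static int step() { return ++counter; }

    int main()
    {
        // May print "1 2" or "2 1" depending on which argument is evaluated first.
        std::printf("%d %d\n", step(), step());
    }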
 
Until you have proved that the program is UB-free and that the implementation being used is bug-free, that expectation is unreliable. You can't even assume which parts of the program (or of the implementation) constitute the "it" you mentioned.

"However, if any such execution contains an undefined operation, this International Standard places no requirement on the implementation executing that program with that input (not even with regard to operations preceding the first undefined operation)."

Since it is not reliable, in the best case you will eventually end up working in the same manner as before.


There are circumstances where it is not an improvement; the most obvious is changing the order of arguments in an API. If the order of evaluation of arguments is unspecified then permuting arguments does not change the meaning of the program so can be done safely.

I don't understand what you mean.  It has always been the case that all arguments are completely evaluated and parameters completely initialized before a function is called.  What is "permuting the arguments of an API"?  How can the meaning change when the present standard does not define an ordering for the argument evaluation?

If you have a function void f(int a, double b) but decide that the arguments should be in the other order you currently can swap the arguments and use a code transforming tool to amend calls to f accordingly. Since the present standard does not define an ordering for the argument evaluation, the meaning of the program does not change.

You don't know whether or not the meaning of the program will change.  For that matter, changing to a different version of the compiler, or even changing optimization switches, can change the behavior of that program.
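
(A minimal sketch of the permutation scenario, with hypothetical functions.  Under the current unspecified order the mechanical swap leaves the set of possible behaviors unchanged, whereas under a fixed left-to-right rule it reverses which side effect happens first.)

    #include <cstdio>

    static int calls = 0;
    static int first()  { return ++calls; }   // returns 1 or 2 depending on when it runs
    static int second() { return ++calls; }

    static void f(int a, double b)         { std::printf("%d %.0f\n", a, b); }
    static void f_swapped(double b, int a) { std::printf("%d %.0f\n", a, b); }

    int main()
    {
        f(first(), second());          // today: either call may run first
        calls = 0;
        f_swapped(second(), first());  // mechanically swapped call; with a left-to-right
                                       // rule, second() would now always run before first()
    }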

Switching the compiler to an older standard mode can effectively break code that relies on the ordering rules. Careless use of such rules in an old codebase cannot be checked easily, either.
 
For any program where that is already not the case, because it truly is order independent, then changing the order of the arguments also does not change the meaning of the program.
 
That's plainly wrong, because the meaning consists of the original intent of the author of the code, not only of the expected behavior of the lossily translated form of the program.
 

A multi-paradigm language should treat paradigms equitably. Your proposal would benefit the procedural paradigm at the expense of the functional and object-oriented paradigms.

My proposal is agnostic with respect to paradigm.  It does not impose any expense on functional or object-oriented paradigms.  If programmers do not care about order of evaluation, this proposal does nothing to force them to care.  Programmers are still free to program in command-query-separation style or functional style or any other style if that's what they want to do.  Their programs always used some order of evaluation; just because that order is now specified doesn't mean that they have to change how they program.  They can just stick their fingers in their ears and go la-la-la when someone tries to tell them what the order is, and they'll be no worse off than before.

They can't, because they have to read and maintain code that might silently depend on the ordering introduced by your proposal.

As opposed to code that currently silently depends on the ordering provided by...absolutely nothing?  How is that user worse off than before?

If that is the case, there is a bug. If order is needed, it should be made clear by explicit language constructs such as statements or function calls.
 

FrankHB1989

unread,
Aug 30, 2016, 9:53:32 AM8/30/16
to ISO C++ Standard - Discussion


On Thursday, August 18, 2016 at 9:53:00 AM UTC+8, Demi Obenour wrote:

I agree that it will take time.  Part of my desire comes from my experience with Rust, which has a rule that is basically "there should be no way to cause undefined behavior in safe code — if you do, there is a bug elsewhere".  A larger part, however, comes from a security perspective: undefined behavior often leads to exploits, and conversely a program that is mechanically verified to be free of undefined behavior is also very likely to have far fewer vulnerabilities.


I don't think an intrusive dichotomy of "safe" and "unsafe" code is a clean design for a general-purpose language. By specifying certain kinds of undefined behavior, a derived specification can turn formerly undefined behavior into defined behavior, at the cost of being less portable. No such trade-off can be made, without loss of conformance, if the base language has no undefined behavior; and nonconformance usually carries more risk and is harder for ordinary users to deal with.

Unspecified behavior is also relevant here, in a weaker form, though it has nothing to do with conformance.

BTW, as the author of an implementation of a doubly linked list, I really do not appreciate coders who can't differentiate (((a) b) c) from (a b c). Several Rust styles (around ownership management aiming to be "safe") seem to encourage this. OTOH, the sequence containers of standard C++ expose the "correct" style by leaving the destruction order of elements unspecified.

 
On Aug 9, 2016 22:03, "Patrice Roy" <patr...@gmail.com> wrote:
The Core Guidelines are huge. It might take some time to reach this ideal (although I understand the wish).

2016-08-09 1:58 GMT-04:00 Demi Obenour <demio...@gmail.com>:

One thing that I think is certain: it should be possible to prove that programs following the C++ Core Guidelines are free of this undefined behavior.


On Aug 8, 2016 11:27 PM, "Patrice Roy" <patr...@gmail.com> wrote:
Hyman, please write a proposal. I'm sure you have a lot to contribute.


2016-08-08 22:32 GMT-04:00 Bo Persson <b...@gmb.dk>:
On 2016-08-09 04:30, Bo Persson wrote:
On 2016-08-08 23:29, Hyman Rosen wrote:
As always, these lengthy discussions about what is defined behavior and
what is not illustrate the necessity for having the language define
order of evaluation strictly (and this order should be left-to-right,
for simplicity and teachability).  In which case the muddy water example
would evaluate as

    int &ri = i;       // ref to v[0]
    int ti = i;        // 0
    ++ri;              // v[0] set to 1
    int &vi = v[ti];   // ref to v[0]
    vi = 7;            // v[0] set to 7


This is just extremely bad coding.

We don't want code that does the equivalent of

v[v[0]++] = 7;

because it is useless. Why make it defined?

Or rather, why encourage people to write crappy code by changing the language to make this possible?







On Sun, Jul 31, 2016 at 6:28 PM, John Kalane <johnk...@gmail.com
<mailto:johnk...@gmail.com>> wrote:


    On Sunday, July 31, 2016 at 6:36:57 PM UTC-3, bogdan wrote:


        On Sunday, July 31, 2016 at 3:12:00 AM UTC+3, John Kalane wrote:

            Many thanks for the comprehensive answer.


        Cheers! And now, to muddy the waters and show what I meant with
        "possible side effects of the lhs expression" above:

        int main()
        {
            int v[] = {0, 0};
            int& i = v[0];
            v[i++] = 7;  // undefined behaviour
        }

        Even under the new rules, the side effect of the increment
        expression is unsequenced with the assignment in this case.

    No muddy waters. The example was perfect. Thanks again!
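
(A sketch of the sequencing behind the quoted explanation, as I read the revised rules; the comments walk through the example above.)

    // v[i++] = 7;   with   int& i = v[0];
    //
    //   1. value computation of the right operand (7)
    //   2. value computation of the left operand v[i++]; i++ yields the old
    //      value 0, so the assignment target is v[0]
    //   3. side effect of i++: a write to v[0] (through i)
    //   4. side effect of the assignment: another write to v[0]
    //
    // The assignment is sequenced after the *value computations* of its operands
    // but not after their side effects, so steps 3 and 4 are unsequenced writes
    // to the same object -- undefined behaviour even under the new rules.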






Demi Obenour

unread,
Aug 30, 2016, 3:25:58 PM8/30/16
to std-dis...@isocpp.org

Can you give an example of where you want the compiler to assume that code which depends on evaluation order is unreachable?

If the programmer wanted to do that, they would have written *nullptr or the like or used an extension provided by their compiler (GCC, Clang, and MSVC all provide such).



Hyman Rosen

unread,
Aug 30, 2016, 4:16:27 PM8/30/16
to std-dis...@isocpp.org
On Mon, Aug 29, 2016 at 10:02 PM, Greg Marr <greg...@gmail.com> wrote:
My point was that mechanically changing f(a, b, c) to f(c, b, a) before now would only preserve the meaning of the program if there was no order dependency.
If there is no order dependency, then someone can continue to refactor the program without changing the meaning, no matter what the standard says about the order of evaluation.

You're missing the point.  Without my proposal, programs cannot depend on the order of evaluation of parameters, so blindly rearranging them is fine; if there happens to have been a dependency, the program was broken anyway so rearranging it cannot make it more broken.

With my proposal, blind rearrangement is no longer permitted.

Hyman Rosen

unread,
Aug 30, 2016, 4:23:54 PM8/30/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 6:17 AM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
And to continue along the same vein: high-level computer programs are not written to control computers, they are written to guide them. If you want to control a computer, learn to write ARM assembly, because x86 machine code probably has too many legal interpretations for you to feel comfortable.

The endless number of buffer overflows and other pointer errors found in C++ code gives the lie to your claim.  Translated computer programs control computers in exact, precise detail; they do not "guide" them.  When the source program is ambiguous, translators choose an arbitrary but precise single meaning, generally without informing the programmer that they have done so or what choices they have made.

Hyman Rosen

unread,
Aug 30, 2016, 4:25:06 PM8/30/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 6:23 AM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
Plenty of code does not need to run in hostile environments. Eliminating security vulnerabilities is a worthy goal, but it is only one of many and should not be pursued at the expense of equally important use cases.

There are no use cases that require ambiguity. 

Hyman Rosen

unread,
Aug 30, 2016, 4:33:24 PM8/30/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 9:07 AM, FrankHB1989 <frank...@gmail.com> wrote: 
A dream based on disciplining others without reasons will never succeed in becoming the public consensus.

Specifying order of evaluation does not require disciplining anyone (except perhaps C++ compiler writers, but one would suppose that such people are anyway masochists :-)  I have given my reasons.

And whatever you believe, you have picked the wrong enemy. Disorder is the truth of the universe.

The discipline of programming is to reduce disorder.  Forcing error on programmers by vague references to the "truth of the universe" is doing them a major disservice.  I would hope no one will listen to you.

Forcing ordered evaluation does go against several kinds of paradigms that rely on explicit nondeterminism in evaluation, by forcing on users a dilemma between nonconformance and bad implementations (in general).

No one but a vanishingly small number of programmers seek nondeterminism from their C++ expressions.  Those programmers should not be permitted to force error on their colleagues for a useless expressiveness.

Note that the order does not need to be determined during translation, even in a static language like C++. The order can therefore vary between runs of the same program, so the claim that programs "always used some order of evaluation" can be nonsense.

But it never is. 

Edward Catmur

unread,
Aug 30, 2016, 4:34:48 PM8/30/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 8:25 PM, Demi Obenour <demio...@gmail.com> wrote:

Can you give an example of where you want the compiler to assume that code which depends on evaluation order is unreachable?

I thought I just did; in `*r = s++` I want the compiler to assume that r and s do not alias.

If the programmer wanted to do that, they would have written *nullptr or the like or used an extension provided by their compiler (GCC, Clang, and MSVC all provide such).

Sure, I can write __builtin_assume(&r != &s). But why repeat myself?
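
(A minimal sketch of the aliasing point, with a hypothetical helper; __builtin_assume is the Clang spelling and __assume the MSVC one, left in a comment so the snippet stays portable.)

    // When the write through r and the increment of s are unsequenced (the
    // pre-P0145 rules), r aliasing s makes this undefined behaviour, so an
    // optimizer is entitled to assume the two do not alias.
    void store_and_bump(int* r, int& s)
    {
        // __builtin_assume(r != &s);  // the same assumption spelled out explicitly
        *r = s++;
    }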

On Aug 30, 2016 6:23 AM, "'Edward Catmur' via ISO C++ Standard - Discussion" <std-dis...@isocpp.org> wrote:
On Mon, Aug 29, 2016 at 3:24 AM, Demi Obenour <demio...@gmail.com> wrote:
I think that everyone is missing the point: compilers treat undefined
behavior as unreachable code, and one NEVER, EVER wants this to happen
in the case of `*i = i++`.  So the behavior should be at worst
unspecified.

I'm not too sure about that. Perhaps if that is written verbatim in a source file then treating it as unreachable is undesired, but it could still have been generated by a higher-level tool in which case that would be OK. If that code arises as a result of the preprocessor or template machinery it could be perfectly legitimate. And when one of or both of the variables is a reference, I actively want the compiler to conclude that the reference does not alias.

I believe that the main problem with C++ is that it has FAR TOO MANY
undefined behaviors.  UB in C and C++ is a major source of security
vulnerabilities.  We MUST NOT add any new UB, and must strive to
eliminate it wherever we can.

Plenty of code does not need to run in hostile environments. Eliminating security vulnerabilities is a worthy goal, but it is only one of many and should not be pursued at the expense of equally important use cases.


Edward Catmur

unread,
Aug 30, 2016, 4:46:10 PM8/30/16
to std-dis...@isocpp.org
That's not remotely true of any non-trivial system running at any time in the last thirty years. ASLR, cache effects, paging, jitter... Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same.

Edward Catmur

unread,
Aug 30, 2016, 4:53:01 PM8/30/16
to std-dis...@isocpp.org
Of course there are. Nondeterministic computation is inherently more powerful than deterministic computation in a nondeterministic universe.

Demi Obenour

unread,
Aug 30, 2016, 9:50:27 PM8/30/16
to std-dis...@isocpp.org

On Aug 30, 2016 4:35 PM, "'Edward Catmur' via ISO C++ Standard - Discussion" <std-dis...@isocpp.org> wrote:
>
> On Tue, Aug 30, 2016 at 8:25 PM, Demi Obenour <demio...@gmail.com> wrote:
>>
>> Can you give an example of where you want the compiler to assume that code which depends on evaluation order is unreachable?
>
> I thought I just did; in `*r = s++` I want the compiler to assume that r and s do not alias.
>>
>> If the programmer wanted to do that, they would have written *nullptr or the like or used an extension provided by their compiler (GCC, Clang, and MSVC all provide such).
>
> Sure, I can write __builtin_assume(&r != &s). But why repeat myself?

If your program is too slow, you will notice it, and (if performance matters) fix it.  Most likely by means of algorithmic and/or memory access optimisations that are the most profitable on modern systems.  You can always tell the compiler yourself  about the assumption.

If the compiler makes an assumption that turns out to be occasionally false, but true on all reasonable inputs, your program will not do poorly in benchmarks.  Nor will it fail in tests.  But hackers may discover your mistake and use it to cause memory corruption and arbitrary code execution.

In short: Not all code is performance critical.  Nearly all code is security critical.  Worse, a performance problem is likely to be discovered and fixed, while your first indication that the compiler has made a wrong assumption could be when you hear about your code in the headlines for the worst possible reason.



Patrice Roy

unread,
Aug 30, 2016, 10:19:55 PM8/30/16
to std-dis...@isocpp.org
Don't want to spend too much time on this, but the performance issue is that the optimal evaluation order is influenced by the underlying architecture, and compilers can (and do) generate code based on this. If we push for a specific order in this particular case, some platforms will end up with slightly slower code. If it does not matter to some of us, which is understandable and fair, just remember that others care, a lot, and use C++ because they care.

It's OK that some of us don't care, of course, but don't forget those who do. If someone needs that speed, that someone is probably less sensitive to arguments about badly written code. They are using C++ because they need this edge, and would probably suggest to those who don't manage to write unbroken code today that they should fix their own act, not break the code of others. The fact that not all code needs the speed edge is totally true, but C++ users include those who have this need. If they take the care required to get their code right, I think we should understand their being a bit... lukewarm, say, when faced with proposals that impact them based on the difficulty of writing correct code.

That's one of the reasons why opt-in could be a good idea: it would impact only those who want or need it, and would not penalize those who need the edge a contemporary C++ compiler can provide on a given platform. I'm not convinced there are that many interdependent, side-effecting function arguments running around that cannot be caught by today's compilers, given the possibility of randomizing the order of evaluation. Most of the code that's been proposed in this thread to support the view of an imposed function argument evaluation ordering is at best debatable, but I can deal with the fact that this code exists in practice. If a company cannot perform code reviews or use (free?) tools that would suggest something's wrong, including compilers that randomize the order of argument evaluation, and I truly believe this problem exists (particularly when using third-party libraries), then opt-in enforcement of a specific evaluation order would let these companies pay a price they are explicitly willing to pay. It would also let the other companies, who don't see writing code today as a problem, continue without incurring unwanted (and for them, unwarranted) costs.




Demi Obenour

unread,
Aug 30, 2016, 11:39:29 PM8/30/16
to std-dis...@isocpp.org

I think a specified evaluation order should be the default, with a pragma to turn it off where desired.  Everyone writes security-critical code, often without realizing it, while people who need that last bit of speed usually know it.

Patrice Roy

unread,
Aug 31, 2016, 8:11:06 AM8/31/16
to std-dis...@isocpp.org
I understand that point of view, but it would require existing, working code, where attention has already been paid to the details, to be revisited in order not to pay the price for code where such care has not been invested. I suspect there will be resistance to that position.

That being said, these are options that are worthy of debate. It will be interesting to hear everyone's point of view.

Edward Catmur

unread,
Aug 31, 2016, 9:48:04 AM8/31/16
to std-dis...@isocpp.org
On Wed, Aug 31, 2016 at 2:50 AM, Demi Obenour <demio...@gmail.com> wrote:

On Aug 30, 2016 4:35 PM, "'Edward Catmur' via ISO C++ Standard - Discussion" <std-dis...@isocpp.org> wrote:
>
> On Tue, Aug 30, 2016 at 8:25 PM, Demi Obenour <demio...@gmail.com> wrote:
>>
>> Can you give an example of where you want the compiler to assume that code which depends on evaluation order is unreachable?
>
> I thought I just did; in `*r = s++` I want the compiler to assume that r and s do not alias.
>>
>> If the programmer wanted to do that, they would have written *nullptr or the like or used an extension provided by their compiler (GCC, Clang, and MSVC all provide such).
>
> Sure, I can write __builtin_assume(&r != &s). But why repeat myself?
If your program is too slow, you will notice it, and (if performance matters) fix it. 

Any program that is not the fastest possible is too slow. A single wasted cycle is too many. That means it's impossible in general to tell whether a program is too slow, because superoptimization is a hard problem.

 Most likely by means of algorithmic and/or memory access optimisations that are the most profitable on modern systems.  You can always tell the compiler yourself  about the assumption.

And as I said, that violates DRY. The assumptions should be implicit. 

If the compiler makes an assumption that turns out to be occasionally false, but true on all reasonable inputs, your program will not do poorly in benchmarks.  Nor will it fail in tests.  But hackers may discover your mistake and use it to cause memory corruption and arbitrary code execution.

Not if the program is not subject to untrusted input. 

In short: Not all code is performance critical.  Nearly all code is security critical.  Worse, a performance problem is likely to be discovered and fixed, while your first indication that the compiler has made a wrong assumption could be when you hear about your code in the headlines for the worst possible reason.

That's completely wrong. Performance is important to all code (every wasted cycle has a cost; and if it doesn't, why use C++?), while security is only important to code running in hostile environments.

Writing secure code is easy, if dull: use safe (fat) APIs, ensure everything is checked, and don't make mistakes; it's also easy to test via fuzzing, and it has a single verifiable goal: code is either secure or it's not. Writing performant code is much harder since there are no guaranteed techniques to provide performance, profiling can only get you so far, and you can never be sure that there isn't some other way of writing your program that runs faster on some inputs. If there is a tradeoff to be made between performance and security, performance wins every time.

FrankHB1989

unread,
Aug 31, 2016, 10:59:20 AM8/31/16
to ISO C++ Standard - Discussion
 
On Wednesday, August 31, 2016 at 4:33:24 AM UTC+8, Hyman Rosen wrote:
On Tue, Aug 30, 2016 at 9:07 AM, FrankHB1989 <frank...@gmail.com> wrote: 
A dream based on disciplining others without reasons will never succeed in becoming the public consensus.

Specifying order of evaluation does not require disciplining anyone (except perhaps C++ compiler writers, but one would suppose that such people are anyway masochists :-)  I have given my reasons.

I actually agree with this conclusion literally, just from a different premise: sane programmers will use the proper features to implement their needs. There are language constructs for exactly this purpose: statements. But your dream goes against that and leads to divergent ways of achieving the same purpose. I still don't see any reasons beyond purely local/historical ones (rather than a global need in the language specification), such as the habits of some groups of users, that make your dream necessary for most people.
 
And whatever you believe, you have picked the wrong enemy. Disorder is the truth of the universe.

The discipline of programming is to reduce disorder.  Forcing error on programmers by vague references to the "truth of the universe" is doing them a major disservice.  I would hope no one will listen to you.

That means you can't eliminate disorder totally. Surely you can reduce it, but how? Based solely on your belief, your first step is to reject something that is very general in reality and simply to attribute some similar needs to errors; that will eventually become a disservice more like the thing you want to reduce.

Forcing ordered evaluation does go against several kinds of paradigms that rely on explicit nondeterminism in evaluation, by forcing on users a dilemma between nonconformance and bad implementations (in general).

No one but a vanishingly small number of programmers seek nondeterminism from their C++ expressions.  Those programmers should not be permitted to force error on their colleagues for a useless expressiveness.

That has nothing to do with the number of programmers; it relates to the range of solutions available to users. C++ is still not general enough to handle such problems well, but it can be hoped that it will be in the future. On the other hand, there is also no candidate apparently better than C++ for providing such solutions. So why reduce the solution space in a hurry, still with no obvious and sufficient benefit for every targeted user?
 
Note that the order does not need to be determined during translation, even in a static language like C++. The order can therefore vary between runs of the same program, so the claim that programs "always used some order of evaluation" can be nonsense.

But it never is.
No, it has been, since the language specification has never effectively forbidden this implementation strategy.
 

Greg Marr

unread,
Aug 31, 2016, 11:27:19 AM8/31/16
to ISO C++ Standard - Discussion
On Tuesday, August 30, 2016 at 4:05:40 AM UTC-6, Edward Catmur wrote:
On Sat, Aug 27, 2016 at 3:39 AM, Greg Marr <greg...@gmail.com> wrote:
On Friday, August 26, 2016 at 6:51:18 AM UTC-4, Edward Catmur wrote:
On Fri, Aug 26, 2016 at 3:06 AM, Greg Marr <greg...@gmail.com> wrote:
Do you mean that I should have said unspecified instead of undefined, or are you implying something else?

I can't make your argument for you. If you intended to say unspecified instead of undefined, that undermines your argument as parts of P0145 deal with undefined behavior.

I said undefined behavior.  You said unspecified behavior.  I was asking why you said unspecified behavior instead.

If you'd said that in the first place, maybe there wouldn't have been this confusion. I said unspecified behavior instead because making undefined behavior defined is less dangerous than making unspecified behavior specified.

I thought that was what I was saying.  Obviously my chosen words did not convey to you the meaning that I desired, so I chose a different way to phrase it.
 
Yes. I brought in unspecified behavior because function argument evaluation order is currently unspecified, so that's more relevant to this discussion.

Okay.
 

tools and procedures can easily deal with undefined behavior (since it is always erroneous) but have to be far more careful around unspecified behavior (since it could be legitimate).

If they can easily deal with it, why haven't they thus far? 

You mean yours don't? You don't use -Wreorder, -Wsequence-point

What do these have to do with function parameter evaluation order or unspecified behavior?  The first warns you that the code has a well-defined meaning, but not the one you might expect from the way you wrote it.  The second is a bit closer, but still not the same, unless it warns about cases that I haven't seen and that haven't been mentioned in this thread any of the times the subject of compilers or tools warning you about mistakes has come up.

, or code reviews?

Of course we do code reviews.  They're done by humans, and humans are imperfect, so they don't find all of these issues.
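
(For what it's worth, a sketch of the gap being discussed above: in my experience -Wsequence-point catches the direct unsequenced case but not order dependence hidden behind function calls.)

    int direct(int i)
    {
        return i = i++;       // g++ -Wsequence-point typically warns: operation on 'i' may be undefined
    }

    static int shared = 0;
    static int g() { return shared = 1; }
    static int h() { return shared = 2; }

    int indirect(int (*f)(int, int))
    {
        return f(g(), h());   // order-dependent only through the calls; generally not diagnosed
    }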
 
In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done? 

Firstly, parts of P0145 deal with undefined behavior; this is safer as there is no danger of a backported program being conformant but having a different (wider) meaning.

Exactly.  A well defined program backported to an earlier version of the compiler will be silently non-conformant.  There's nothing safe about this at all.

That depends on the compiler. No diagnostic required does not mean no diagnostic allowed.

True, but we're talking about existing compilers, and we're generally talking about things for which existing compilers do not issue diagnostics.
 
Secondly, where other parts of P0145 deal with unspecified behavior the vast majority of compilers already happen to behave the same way, and this way is the natural, idiomatic interpretation. There may be yet other parts of P0145 where this does not hold, but they are still not as egregious as the rejected function argument order part.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145. 

Not the case: firstly, such code changes are far less likely as they are not idiomatic results of normal code maintenance (such as permuting function arguments or rearranging arithmetic expressions); secondly, older compilers already, for the most part, happen to behave as expected.

You just said above that P0145 makes previously undefined behavior well defined.
Which is it?
If you port it to an older compiler, and it becomes undefined behavior, then it doesn't necessarily work. 

It does if the older compiler either rejects the code or accidentally accepts it with the intended semantics.

Right, "it does if" and "accidentally", that means that it doesn't necessarily work.  It may work, it may not, you don't know.
 

In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

At present there are no correct order-dependent programs (other than those explicitly documenting order dependency as a reliance on conforming extensions), so we do not need to worry about encountering such programs.

Why do we not need to worry about them?  They exist.  Silently.  They exist for years without people finding them.

If they really exist in such numbers, their existence will prevent update of whatever version of C++ implements your proposal. I don't want that to happen.

Why is that?  You think people will fail to update because programs that are currently broken will continue to be broken?
 
I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.

In that thread the proposal is to provide an additional API, not to invisibly change the meaning of an existing one that already works perfectly well for its use cases. If you were proposing a new syntax giving a defined order of evaluation I would be perfectly happy to support that proposal.

How is it invisibly changing the meaning of an existing one when the existing one has no defined meaning?  It can be any of the possible meanings.

It has a perfectly well defined meaning. Unspecified behavior is not the same as undefined behavior.

I didn't say undefined behavior, I said no defined meaning.  Having more than one possible meaning (unspecified behavior) is not the same as a well-defined meaning.

FrankHB1989

unread,
Aug 31, 2016, 11:34:32 AM8/31/16
to ISO C++ Standard - Discussion


On Wednesday, August 31, 2016 at 9:50:27 AM UTC+8, Demi Obenour wrote:

On Aug 30, 2016 4:35 PM, "'Edward Catmur' via ISO C++ Standard - Discussion" <std-dis...@isocpp.org> wrote:
>
> On Tue, Aug 30, 2016 at 8:25 PM, Demi Obenour <demio...@gmail.com> wrote:
>>
>> Can you give an example of where you want the compiler to assume that code which depends on evaluation order is unreachable?
>
> I thought I just did; in `*r = s++` I want the compiler to assume that r and s do not alias.
>>
>> If the programmer wanted to do that, they would have written *nullptr or the like or used an extension provided by their compiler (GCC, Clang, and MSVC all provide such).
>
> Sure, I can write __builtin_assume(&r != &s). But why repeat myself?

If your program is too slow, you will notice it, and (if performance matters) fix it.  Most likely by means of algorithmic and/or memory access optimisations that are the most profitable on modern systems.  You can always tell the compiler yourself about the assumption.

If the compiler makes an assumption that turns out to be occasionally false, but true on all reasonable inputs, your program will not do poorly in benchmarks.  Nor will it fail in tests.  But hackers may discover your mistake and use it to cause memory corruption and arbitrary code execution.

None of this QoI stuff makes it necessary to modify the language specification. Neither performance issues nor the difficulty of hacks is directly a concern of the language specification. Only when a performance regression cannot be avoided effectively by any means other than changing the language rules should the rules undoubtedly be changed.
The proposed restrictions on evaluation order undoubtedly make it significantly harder for the implementation to obtain possibly better performance in every case, so they are not good for everyone; but the more important point is that they gain almost nothing. You can't prevent hackers from exploiting other related mistakes that are based on wrong assumptions about order and accidentally hidden by the naive change of the rules. You should prevent hackers from exploiting such mistakes by other, more effective means.

You can have a separate specification that does not invalidate the conformance of particular implementations, and filter them out of your choices, if you really want to shape the dialects you will use. But I'm afraid you can't force others, who do not need or even do not want to avoid such properties in the standardized language, to agree with you.

 

In short: Not all code is performance critical.  Nearly all code is security critical.


Define security, please. From my point of view, you are proposing to make it easier for novices to create security holes, before you manage to force current programs to behave more securely as you expect.
 

Worse, a performance problem is likely to be discovered and fixed, while your first indication that the compiler has made a wrong assumption could be when you hear about your code in the headlines for the worst possible reason.


I'm curious where you got these points. Like all other kinds of software bugs, a performance regression is not always easy to discover before it suddenly causes serious problems. It can also be hard to fix without rewriting major parts of the program, especially when the program was originally written by programmers who could not anticipate the case. In reality, it is very difficult to assert reliably how much performance is not needed without some complicated profiling techniques. "Write code as fast as possible unless you can't guarantee it is correct" is a simpler choice.

Edward Catmur

unread,
Aug 31, 2016, 11:41:43 AM8/31/16
to std-dis...@isocpp.org
On Wed, Aug 31, 2016 at 4:27 PM, Greg Marr <greg...@gmail.com> wrote:

tools and procedures can easily deal with undefined behavior (since it is always erroneous) but have to be far more careful around unspecified behavior (since it could be legitimate).

If they can easily deal with it, why haven't they thus far? 

You mean yours don't? You don't use -Wreorder, -Wsequence-point
 
What do these have to do with function parameter evaluation order or unspecified behavior?  The first warns you that the code has a well-defined meaning, but not the one you might expect from the way you wrote it.  The second is a bit closer, but still not the same, unless it warns about cases that I haven't seen and that haven't been mentioned in this thread any of the times the subject of compilers or tools warning you about mistakes has come up.

Well, I'll take your word for it.

, or code reviews?

Of course we do code reviews.  They're done by humans, and humans are imperfect, so they don't find all of these issues.

Which is precisely the reason to not make life harder for code reviewers.

In either case, I don't understand what you mean.  How is this any different than any of the other things that P0145 has already done? 

Firstly, parts of P0145 deal with undefined behavior; this is safer as there is no danger of a backported program being conformant but having a different (wider) meaning.

Exactly.  A well defined program backported to an earlier version of the compiler will be silently non-conformant.  There's nothing safe about this at all.

That depends on the compiler. No diagnostic required does not mean no diagnostic allowed.

True, but we're talking about existing compilers, and we're generally talking about things for which existing compilers do not issue diagnostics. 

What about -Wsequence-point?
 
Secondly, where other parts of P0145 deal with unspecified behavior the vast majority of compilers already happen to behave the same way, and this way is the natural, idiomatic interpretation. There may be yet other parts of P0145 where this does not hold, but they are still not as egregious as the rejected function argument order part.

It works until the code changes or is ported to an older compiler.


Just like with all of the other changes in P0145. 

Not the case: firstly, such code changes are far less likely as they are not idiomatic results of normal code maintenance (such as permuting function arguments or rearranging arithmetic expressions); secondly, older compilers already, for the most part, happen to behave as expected.

You just said above that P0145 makes previously undefined behavior well defined.
Which is it?
If you port it to an older compiler, and it becomes undefined behavior, then it doesn't necessarily work. 

It does if the older compiler either rejects the code or accidentally accepts it with the intended semantics.

Right, "it does if" and "accidentally", that means that it doesn't necessarily work.  It may work, it may not, you don't know. 

You can know by surveying existing implementations. My understanding of the justification behind P0145 was that existing implementations already behaved consistently.


In what way?
Either it's order independent, so you can safely modify it, or it's order dependent, and you can't safely modify it because you don't even know what its behavior is now.

At present there are no correct order-dependent programs (other than those explicitly documenting order dependency as a reliance on conforming extensions), so we do not need to worry about encountering such programs.

Why do we not need to worry about them?  They exist.  Silently.  They exist for years without people finding them.

If they really exist in such numbers, their existence will prevent update of whatever version of C++ implements your proposal. I don't want that to happen.

Why is that?  You think people will fail to update because programs that are currently broken will continue to be broken? 

Yes, if that breakage is exposed as a result.
 
I'm going to borrow a quote from you from another thread:

"Still, an API that promotes correctness but can be subverted is still better than an API that requires constant vigilance to use safely."

Take "an API" to mean "order of operations in C++", right now it requires constant vigilance to use safely.

With a defined order of operations, that API promotes correctness, so that means it is still better than the status quo.

In that thread the proposal is to provide an additional API, not to invisibly change the meaning of an existing one that already works perfectly well for its use cases. If you were proposing a new syntax giving a defined order of evaluation I would be perfectly happy to support that proposal.

How is it invisibly changing the meaning of an existing one when the existing one has no defined meaning?  It can be any of the possible meanings.

It has a perfectly well defined meaning. Unspecified behavior is not the same as undefined behavior.

I didn't say undefined behavior, I said no defined meaning.  Having more than one possible meaning (unspecified behavior) is not the same as a well-defined meaning.

Most non-trivial programs have at least some unspecified behavior. How many programs have you written that have defined meaning by your standard? 

Greg Marr

unread,
Aug 31, 2016, 12:05:10 PM8/31/16
to ISO C++ Standard - Discussion
I'm not missing the point at all.  I'm saying that's a separate case from what I was discussing.

I understand that the compiler is not free to blindly rearrange, per the standard.
I understand that blindly rearranging a broken program could break it in a different way.

The question was whether this change would take away freedom, namely to cause something which was previously a well-defined operation to become a breaking operation.

Since the operation was only well-defined in the past under very narrow circumstances, and it's still well-defined under those same very narrow circumstances, then the answer is no.

I don't consider losing the ability to change a broken program to another broken program to be taking away that freedom.

Others may feel differently.  That's fine.

Demi Obenour

unread,
Aug 31, 2016, 2:17:41 PM8/31/16
to std-dis...@isocpp.org
On Wed, Aug 31, 2016 at 08:34:32AM -0700, FrankHB1989 wrote:
>
>
> > On Wednesday, August 31, 2016 at 9:50:27 AM UTC+8, Demi Obenour wrote:
> >
> > On Aug 30, 2016 4:35 PM, "'Edward Catmur' via ISO C++ Standard -
> implementation significantly harder to get possible better performance *in
> any cases*, so it not good for everyone; but the more important point is:
> it gains almost nothing. You can't prevent hackers exploiting other related
> mistakes based on wrong assumptions of order and accidentally hidden by the
> naive change of the rules. You should prevent hackers of such mistakes by
> other means more effectively.
>
> You can have a separated specification that will not invalidate the
> conformance of particular implementation and filter them out of your
> choices, if you really want to shape the *dialects *you will use. But I'm
> afraid you can't force others does not need or even do want to avoid such
> properties on the *standardized *language to agree with you.
>
>
>
> > In short: Not all code is performance critical. Nearly all code is
> > security critical.
> >
>
> Define security, please. In my point of view you are propose to make
> creation of security hole more easily by novice, before force the current
> programs to behave more secure as you expected.
>
>
> > Worse, a performance problem is likely to be discovered and fixed, while
> > your first inclination that the compiler has made a wrong assumption could
> > be when you hear about your code in the headlines for the worst possible
> > reason.
> >
>
> I'm curious that where do you find these points. Like all kinds of other
> software bugs, a performance regression is not always easy to be discovered
> before it suddenly causes seriously problems. It can also be hard to fix
> without rewriting the major parts of the program, esp. when the program was
> originally written by programmers who can't expect the case. In reality, it
> is very difficult to assert how much performance is not needed reliably
> without some complicated profiling techniques. Write code as fast as
> possible unless you can't guarantee it is correct -- is a simpler choice.
This assumes that you can guarantee that your code is correct.

This is a horrible assumption. The long list of memory safety
vulnerabilities in C and C++ programs proves this. Nearly every
language EXCEPT C and C++ is memory-safe (which includes being free of
undefined behavior) BY DEFAULT.

Unless you have a machine-verified proof of correctness, or have spent
as much time on test cases than you have on the code itself (and maybe
not even then), you MUST always assume that (in non-trivial cases) YOUR
CODE HAS BUGS. C++ should not make it so easy for those bugs to become
exploits.

C++'s memory safety problems are so bad that Mozilla created an entire
programming language (Rust) to mitigate them.

Edward Catmur

unread,
Aug 31, 2016, 3:16:48 PM8/31/16
to std-dis...@isocpp.org
And yet here we are, using C++.

Unless you have a machine-verified proof of correctness, or have spent
as much time on test cases as you have on the code itself (and maybe
not even then), you MUST always assume that (in non-trivial cases) YOUR
CODE HAS BUGS.  C++ should not make it so easy for those bugs to become
exploits.

C++'s memory safety problems are so bad that Mozilla created an entire
programming language (Rust) to mitigate them.

How's that working out for them?
 

Hyman Rosen

unread,
Aug 31, 2016, 6:58:36 PM8/31/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 4:46 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:

That's not remotely true of any non-trivial system running at any time in the last thirty years. ASLR, cache effects, paging, jitter... Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same.

Then what is it that you think you are doing when writing a computer program?

Do you believe that you cannot get a computer program to, say, generate the exact same PDF file from a word processing document, each time you ask it to?

Do you want to dispense with the rules for order of evaluation of statements?  After all, "Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same."

Hyman Rosen

unread,
Aug 31, 2016, 7:01:00 PM8/31/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 4:52 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
Of course there are. Nondeterministic computation is inherently more powerful than deterministic computation in a nondeterministic universe.

C++ does not provide nondeterminstic programming in any sense that renders it more powerful than deterministic programming.  C++ translators choose a single deterministic meaning from a set of deterministic choices permitted by the language definition.  And then they don't say what they chose.

Hyman Rosen

unread,
Aug 31, 2016, 7:04:16 PM8/31/16
to std-dis...@isocpp.org
On Tue, Aug 30, 2016 at 4:34 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
I thought I just did; in `*r = s++` I want the compiler to assume that r and s do not alias.
Sure, I can write __builtin_assume(&r != &s). But why repeat myself?

Because one day someone will change the code to '*r = s', and by your belief (though not in actual fact) this will cause the code to slow down dramatically, because now the compiler won't be able to infer the no-aliasing information (not that it does so anyway).
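
To make the aliasing question concrete, here is a minimal sketch (function names invented for illustration) of the difference between leaving the compiler to guess and stating the no-alias assumption explicitly via the non-standard but widely supported __restrict qualifier:

    // Without an aliasing guarantee the compiler must assume r and s may
    // point to the same int, so *s has to be reloaded after the store
    // through r.
    void bump_twice(int* r, int* s) {
        *r = *s + 1;
        *r = *s + 1;   // *s is reloaded: the first store may have changed it
    }

    // With __restrict the programmer promises that r and s do not alias,
    // so the second load of *s can be elided.
    void bump_twice_restrict(int* __restrict r, int* __restrict s) {
        *r = *s + 1;
        *r = *s + 1;   // *s may be assumed unchanged by the store through r
    }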

Demi Obenour

unread,
Aug 31, 2016, 10:14:45 PM8/31/16
to 'Edward Catmur' via ISO C++ Standard - Discussion
Quite well, actually. See Servo.

But that is off-topic. As far as C++ is concerned, my point is that any
undefined behavior that can reasonably be eliminated should be
eliminated. Especially when (as here) automated checking is infeasible,
both at compile time and at run time.

Demi Obenour

unread,
Aug 31, 2016, 10:40:41 PM8/31/16
to 'Edward Catmur' via ISO C++ Standard - Discussion
Nearly all code must be considered to run in hostile environments. That
includes all code that will ever be exposed to the Internet, as well as
all code that will be used by enough people to be a serious target for
hackers. The only code that is not a target is the code that hardly
anyone uses.
> Writing secure code is easy, if dull: use safe (fat) APIs, ensure
> everything is checked, and don't make mistakes; it's also easy to test via
> fuzzing, and it has a single verifiable goal: code is either secure or it's
> not. Writing performant code is much harder since there are no guaranteed
> techniques to provide performance, profiling can only get you so far, and
> you can never be sure that there isn't some other way of writing your
> program that runs faster on some inputs. If there is a tradeoff to be made
> between performance and security, performance wins every time.
>
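
To make the "safe (fat) API" point concrete, here is a minimal sketch (the type and function names are invented for illustration) of an interface that carries its own size and checks every access, rather than a raw pointer whose length the callee must take on faith:

    #include <cstddef>
    #include <stdexcept>

    // Hypothetical "fat" buffer view: the pointer travels with its length.
    struct buffer_view {
        const int* data;
        std::size_t size;
    };

    // Every access is checked; an out-of-range request fails loudly instead
    // of reading past the end of the allocation.
    int checked_at(buffer_view buf, std::size_t index) {
        if (index >= buf.size)
            throw std::out_of_range("index out of range");
        return buf.data[index];
    }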

FrankHB1989

unread,
Sep 1, 2016, 8:19:54 AM9/1/16
to ISO C++ Standard - Discussion


On Thursday, September 1, 2016 at 2:17:41 AM UTC+8, Demi Obenour wrote:

You were saying "correct", not "safe". The former is quite far from being formal, which makes it possible for humans to beat machines, in particular to make sure the code properly reflects the intent while it is being written.
Yes, I actually can guarantee such things, as long as the code is really written by me with sufficient resources, by running a specialized algorithm in my brain at an amortized cost of O(1) time and O(1) space. The rationale is that I simply don't write code with such a bad smell. For the problems in this topic specifically: I can already specify the required order clearly and gracefully using other language constructs designed for such cases (see the sketch below), so why would I need more dirty, disputable assumptions?
For the problems you are concerned about in other people's code, I can still find the naive bugs in O(1) by applying a separate set of rules manually, though correcting them is not O(1) (and is boring). In most cases I can't replace manual code review with automatic checks, so this is acceptable.
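
A minimal sketch of what I mean, with purely illustrative names: rather than asking the standard to give i = i++; a defined meaning, spell out whichever meaning was intended using statements whose sequencing is already guaranteed.

    int i = 0;

    // Intended meaning A: the assignment wins, i.e. i keeps its old value.
    int old = i;   // read i first
    ++i;           // then apply the increment
    i = old;       // then assign; each step is sequenced by the statement
                   // boundary

    // Intended meaning B: the increment wins, i.e. i ends up one larger,
    // which is simply ++i; on its own.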

And I'm sure that if I followed a coding style that relies on the proposed change, the process above would be effectively disrupted, because that style folds different intents into semantically poor representations.

Usually there will be more subtle cases that invalidate this effort. But those have nothing to do with coding.

This is a horrible assumption.  The long list of memory safety
vulnerabilities in C and C++ programs proves this.  Nearly every
language EXCEPT C and C++ is memory-safe (which includes being free of
undefined behavior) BY DEFAULT.

I'm afraid that you seem to be making even more horrible assumptions -- you don't quite know what the real needs are, or how to determine their priorities, when information about the background of concrete projects is lacking. The principles you insist on are biased toward a particular subset of those needs.

In general, both security and performance are vital aspects of the things discussed here. However, you have not shown that you have considered both of them carefully.

In reality, there is something called "fault tolerance", which includes tolerating bad coders and bad code. You can't trust that they will work as expected, but you can't dismiss them without limit either. This is the main reason why you should not make assumptions about their reliability. I agree that assuming something arbitrary is "safe" BY DEFAULT is really dangerous without extraordinary care. But are you sure who gets to decide the DEFAULT? Even if I can't prove that something is credible, if I am sure I will be responsible for all the faults caused by trusting it, why can't I assume it is true by default when there is no proof showing me it is wrong?

I will illustrate the point above by analyzing several related topics. First, the concept of undefined behavior works similarly here. The occurrence of UB means it is outside the scope of the standard to guarantee that the code works, not that something will go wrong unconditionally. It means that you must provide the guarantee yourself, in place of the specification, whenever it is needed. So keeping undefined behavior out of every program is not the true need. You usually don't want undefined behavior in applications simply because it costs far too much to make yourself a replacement for the text of the standard. That would be a great waste -- it is almost like inventing half of the language yourself, and, probably more importantly, you could no longer tell others that you are using the language they are familiar with; you would only be using a highly customized dialect. So avoid UB by default, to keep things simply workable. (There are also cases where you have to deal with predictable undefined behavior seriously, e.g. when implementing the language.)

Second is the so-called memory safety. In short, it is conceptually helpful, but in practice often useless. For the domain of a general-purpose language, it is not as generally useful as you assume, because there are tasks that cannot be done within its constraints. In several languages, such as C# and Rust, such cases can or must fall into code marked as "unsafe". This is somewhat misleading. It only guarantees some limited kinds of "memory safety", which is certainly not equal to "safety". Real safety is based on the fact that the coders know how the code works (as they expect); in most cases such features only provide users with illusions on top of what the coders should already know. And the unsafety that remains is still hard to fix, if not harder. (How many Rust users would find bugs like this in "unsafe" code by themselves?) Only naive users will believe it is exactly suited to the true needs they have, without further analysis. So the result is that it only provides "safety" that guarantees almost nothing to coders who are already able to guarantee that the code really works, and illusions to users who are unable to do so -- if you really don't care about the possible pessimization introduced by implementing such features.

A far better design strategy for a general-purpose language to help (but not guarantee -- a language implementation is not a strong AI) programming safely is to provide elementary building blocks and let coders combine them to implement the safety they need, rather than limiting what users can do directly through core language features. (In general: provide mechanism, not policy, when designing a system that aims to be secure.) In any case, it is the users' right and responsibility to identify which security properties are really needed, so that they can assume those properties hold under conditions that the designer of the language cannot control.


Unless you have a machine-verified proof of correctness, or have spent
as much time on test cases than you have on the code itself (and maybe
not even then), you MUST always assume that (in non-trivial cases) YOUR
CODE HAS BUGS.  C++ should not make it so easy for those bugs to become
exploits.

As I have said, even if you have such a proof, it is still not enough in most cases. Concretely, how do you avoid bugs in the specifications used by the prover?
The serious part is that it is not only code written in the object language that can be buggy. Do you want to rely on modifying C++ features to guarantee any kind of safety? THAT MECHANISM ITSELF IS FULL OF BUGS.
 

FrankHB1989

unread,
Sep 1, 2016, 8:43:38 AM9/1/16
to ISO C++ Standard - Discussion


On Thursday, September 1, 2016 at 6:58:36 AM UTC+8, Hyman Rosen wrote:
On Tue, Aug 30, 2016 at 4:46 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:

That's not remotely true of any non-trivial system running at any time in the last thirty years. ASLR, cache effects, paging, jitter... Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same.

Then what is it that you think you are doing when writing a computer program?

Do you believe that you cannot get a computer program to, say, generate the exact same PDF file from a word processing document, each time you ask it to?

A computer system is not guaranteed to be able to process documents. The concrete set of functionality is entirely up to the developers... and full of man-made choices.

And... as far as the physical laws accepted by most mainstream scientists go, you can never do it exactly. You can make it reliable in practice because the wave function collapses and the probability of success can be very, very... very close to 1. That is a man-made choice in the physical implementation of computer systems. No rule forbids someone from building something without the properties you expect and still calling it a computer system. Such a system is full of nondeterminism by nature.

 

FrankHB1989

unread,
Sep 1, 2016, 8:58:35 AM9/1/16
to ISO C++ Standard - Discussion


On Thursday, September 1, 2016 at 7:01:00 AM UTC+8, Hyman Rosen wrote:
On Tue, Aug 30, 2016 at 4:52 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:
Of course there are. Nondeterministic computation is inherently more powerful than deterministic computation in a nondeterministic universe.

C++ does not provide nondeterminstic programming in any sense that renders it more powerful than deterministic programming.
Have you heard of multithreading?
 
C++ translators choose a single deterministic meaning from a set of deterministic choices permitted by the language definition.  And then they don't say what they chose.

That's simply not true.

1.9 Program execution [intro.execution]
1 The semantic descriptions in this International Standard define a parameterized nondeterministic abstract
machine. This International Standard places no requirement on the structure of conforming implementations.
In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming
implementations are required to emulate (only) the observable behavior of the abstract machine as explained
below.
2 Certain aspects and operations of the abstract machine are described in this International Standard as
implementation-defined (for example, sizeof(int)). These constitute the parameters of the abstract
machine. Each implementation shall include documentation describing its characteristics and behavior in
these respects.6 Such documentation shall define the instance of the abstract machine that corresponds to
that implementation (referred to as the “corresponding instance” below).
3 Certain other aspects and operations of the abstract machine are described in this International Standard
as unspecified (for example, evaluation of expressions in a new-initializer if the allocation function fails to
allocate memory (5.3.4)). Where possible, this International Standard defines a set of allowable behaviors.
These define the nondeterministic aspects of the abstract machine. An instance of the abstract machine can
thus have more than one possible execution for a given program and a given input.
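
A minimal sketch of that last sentence (function names invented for illustration): the same program and input admit more than one conforming execution.

    #include <iostream>

    int g() { std::cout << "g "; return 1; }
    int h() { std::cout << "h "; return 2; }

    int main() {
        // The order in which g() and h() are called is unspecified (the two
        // calls are indeterminately sequenced), so a conforming
        // implementation may print "g h 3" or "h g 3".
        int x = g() + h();
        std::cout << x << '\n';
    }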

Edward Catmur

unread,
Sep 1, 2016, 11:08:57 AM9/1/16
to std-dis...@isocpp.org
On Wed, Aug 31, 2016 at 11:58 PM, Hyman Rosen <hyman...@gmail.com> wrote:
On Tue, Aug 30, 2016 at 4:46 PM, 'Edward Catmur' via ISO C++ Standard - Discussion <std-dis...@isocpp.org> wrote:

That's not remotely true of any non-trivial system running at any time in the last thirty years. ASLR, cache effects, paging, jitter... Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same.

Then what is it that you think you are doing when writing a computer program?

I am describing a relation between input sequences and output sequences, while optimizing for resource (time and, to a lesser extent, space) usage.

Do you believe that you cannot get a computer program to, say, generate the exact same PDF file from a word processing document, each time you ask it to?

Obviously not. That's observable behavior, which is distinct from behavior in general.

Do you want to dispense with the rules for order of evaluation of statements?  After all, "Even if an implementation translates a program the same way twice, which is by no means guaranteed, consecutive runs of that translated program have a vanishingly tiny chance of actually behaving the same."

To some extent, yes. Any such rules should be as lax as possible while making it possible for users to describe the desired observable behavior. After all, the only clause that really matters is [intro.execution]/8; all else is detail and optimization.
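
A minimal sketch of that distinction (the program is illustrative only): only the printed output below is observable behavior, so an implementation that folds the loop to a constant and one that executes it step by step are equally conforming and indistinguishable.

    #include <cstdio>

    int main() {
        int sum = 0;
        for (int i = 0; i < 1000; ++i)
            sum += i;                 // may be folded to a constant, unrolled,
                                      // or run as written; none of that is
                                      // observable
        std::printf("%d\n", sum);     // the only observable behavior:
                                      // printing 499500
    }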

Edward Catmur

unread,
Sep 1, 2016, 11:26:17 AM9/1/16
to std-dis...@isocpp.org
Possibly true. Of course that same person may have also changed the call site such that aliasing can occur now that it is legitimate. 

Edward Catmur

unread,
Sep 1, 2016, 11:28:02 AM9/1/16
to std-dis...@isocpp.org
7 years, and they've got to a version 0.0.1 developer preview.

But that is off-topic.  As far as C++ is concerned, my point is that any
undefined behavior that can reasonably be eliminated should be
eliminated.  Especially when (as here) automated checking is infeasible,
both at compile time and at run time.

Sure. This is not a case where the UB can reasonably be eliminated; the costs are too high.
