But I think the above-mentioned prohibition of transformation is a general
issue, not one limited to older machine architectures, and it can make a
noticeable performance difference.
Has this kind of concern already been discussed?
P0145R0: Refining Expression Evaluation Order for Idiomatic C++ (Revision 1)
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0145r0.pdf
> In summary, the following expressions are evaluated in the order
> a, then b, then c, then d:
...
> 3. a(b, c, d)
I have thought of a pattern where specifying the evaluation order of
function arguments can cause a general and noticeable loss of some
compiler optimizations.
(I'm afraid that this has already been discussed, or that I'm just writing
something silly. I'm not a compiler writer, and the paper has presumably
been reviewed by some compiler writers...)
We shouldn’t assume that the authors of the proposal are insensitive to performance reasons.
We are longtime C++ designers and implementers, very much attuned to the essence of C++.
However, when writing a correct program has become difficult, and common idioms have become traps not supported by the language rules, we must reassess our design criteria and revise the rules as necessary. Please see the statement of the problem in the paper. A programming language is a set of responses to the problems of its time.
As has been pointed out repeatedly, the example code
void f()
{
    std::string s = "but I have heard it works even if you don't believe in it";
    s.replace(0, 4, "").replace(s.find("even"), 4, "only").replace(s.find(" don't"), 6, "");
    assert(s == "I have heard it works only if you believe in it");
}
was published in Bjarne's book TC++PL4. Before publication, that particular code was reviewed by world-leading C++ experts (at least a dozen, if my memory serves me correctly, the majority of whom attend C++ standards meetings and make contributions).
Interestingly enough, the week just before the Kona meeting, an esteemed colleague of mine and I spent a day and a half chasing an obscure bug (in a real product) that turned out to be an order-of-evaluation issue in very simple-looking code. Worse: the offending code was authored by myself.
-- Gaby
For this particular case, I personally don't think that we have to
make it work. In many languages, it works because the string
is immutable and `.replace` creates new objects. IMHO,
referring to a mutated object more than once in a sequence of
operations within a statement is a bad idea in general, not just in
C++.
Here's the problem. We have three choices:
1) The status quo. That is, we encourage the writing of such code (having functions that return references to `this` encourages it) while simultaneously making it very difficult for a user to know that they've written something incorrect relative to the standard. At best, we can hope for static analysis tools to step in and say "Hey, you did something possibly wrong here." At worst, users write subtle bugs into their programs that only appear on platforms that happen to use a different evaluation order.
2) Change std::string and various other modifiable types so that they're non-modifiable, thus breaking tons of perfectly valid (and reasonably performing) code that has already been written. Note that it will only fix this particular kind of issue, not the other problems that come from order of evaluation issues.
3) Change the rules of evaluation so that users can know that what they've written is actually valid C++ code.
On Thursday, 19 November 2015 23:52:54 UTC, Nicol Bolas wrote:
> Here's the problem. We have three choices:
> 1) The status quo. That is, we encourage the writing of such code (having functions that return references to `this` encourages it) while simultaneously making it very difficult for a user to know that they've written something incorrect relative to the standard. At best, we can hope for static analysis tools to step in and say "Hey, you did something possibly wrong here." At worst, users write subtle bugs into their programs that only appear on platforms that happen to use a different evaluation order.
> 2) Change std::string and various other modifiable types so that they're non-modifiable, thus breaking tons of perfectly valid (and reasonably performing) code that has already been written. Note that it will only fix this particular kind of issue, not the other problems that come from order of evaluation issues.
> 3) Change the rules of evaluation so that users can know that what they've written is actually valid C++ code.

Yes, let's change the rules to more closely match what users expect, but I don't believe that expectation exists for function arguments specifically; that is, "everyone knows" that function arguments are evaluated in indeterminate order. This feels intuitive because (as with, say, addition) the syntax does not imply any dependency between arguments separated by a syntactic comma.
On Thursday, November 19, 2015 at 5:59:07 PM UTC-5, Matt Calabrese wrote:
> On Thu, Nov 19, 2015 at 2:40 PM, Zhihao Yuan <z...@miator.net> wrote:
>> For this particular case, I personally don't think that we have to
>> make it work.
>
> I think this is what bothers me most. That we'd be potentially preventing certain optimizations is one thing, but separate from that is that by defining the code and its ordering, we'd be effectively sanctioning the dependence on it by users. This is at least a little bit scary. We'd be changing incorrect code to "correct" but subtle code (of course, the fact that it is currently invalid is also sometimes subtle).

I don't have a strong opinion either way, but I do agree that this is an important point that shouldn't be overlooked.
> Here's the problem. We have three choices:
> 1) The status quo. That is, we encourage the writing of such code (having functions that return references to `this` encourages it) while simultaneously making it very difficult for a user to know that they've written something incorrect relative to the standard. At best, we can hope for static analysis tools to step in and say "Hey, you did something possibly wrong here." At worst, users write subtle bugs into their programs that only appear on platforms that happen to use a different evaluation order.
So it fails to fix the operator+ example above, and it also misses the opportunity to define OOE for operator| "because ranges" or operator/ "because filesystem" or any other soon-to-be-common chaining cases that I might have missed.
My concern can be summarized as a reduction in the opportunities for
"common subexpression elimination (CSE)"
<https://en.wikipedia.org/wiki/Common_subexpression_elimination>.
The impact may grow in the future as compilers become smarter.
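A minimal sketch of the CSE concern (the function and variable names below are my own illustrations, not from the thread):

```cpp
// Sketch of the CSE concern under mandated argument evaluation order.
extern int g;      // non-local state, defined elsewhere
int opaque();      // defined in another TU; might modify g

int caller(int (*h)(int, int, int)) {
    // Today, argument evaluation order is unspecified: the compiler
    // may evaluate the first and third arguments back to back and
    // compute g + 1 only once.
    // With a mandated left-to-right order, opaque() must run in
    // between, and unless the compiler can prove opaque() leaves g
    // alone, g + 1 has to be recomputed after the call.
    return h(g + 1, opaque(), g + 1);
}
```

Whether the lost optimization matters in practice is exactly the question being debated in this thread.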
On 2015/11/22 12:37 +0900, Nicol Bolas wrote:
> On Saturday, November 21, 2015 at 3:55:51 PM UTC-5, Kazutoshi SATODA wrote:
>> On 2015/11/20 6:19 +0900, Gabriel Dos Reis wrote:
>>> I expect compilers to also offer switches for non-standard evaluation
>>> order, just like some have -funsafe-math.
>>
>> I doubt the usefulness of such switches.
>>
>> Once the order is specified and people have been allowed to write code
>> depending on that order, use of such compiler switches can result in
>> completely different behavior with far higher probability than
>> -funsafe-math. AFAIK, the effect of -funsafe-math is often limited to
>> the accuracy of floating-point arithmetic, which is
>> implementation-defined in the first place.
>
> What is the difference between this switch and the switch to turn off
> strict aliasing? Strict aliasing and, let's call it, strict ordering both
> are intended to improve safety by keeping the compiler from doing something
> that's probably unsafe. And yet, you have adherents who swear that strict
> aliasing kills valid compiler optimizations. Just as you believe that
> strict ordering kills valid compiler optimizations.
>
> So what is the difference?
Turning off "strict aliasing" will disable some optimizations and may
save some non-portable code. Those optimizations are standard-conforming.
On 2015/11/22 12:53 +0900, Nicol Bolas wrote:
> On Saturday, November 21, 2015 at 3:56:00 PM UTC-5, Kazutoshi SATODA wrote:
>> On 2015/11/20 5:24 +0900, Kazutoshi Satoda wrote:
...
>>> Has this kind of concern already been discussed?
>>
>> Now I found another trip report with a comment which includes what has
>> been mentioned as the performance impact of P0145R0.
...
>> It seems that this kind of concern has not been considered as a
>> consequence of P0145R0 so far.
>
> So you see one comment on a trip report, and immediately assume that this
> was the totality of the discussion at the meeting? That's a pretty huge
> leap to be making.
I said "it seems" based on some supporting circumstances.
If you know otherwise, please let me know. That is the question of my
OP, which has not been answered yet.
>> The impact may increase in the future as compilers become wiser.
>
> If "compilers become wiser" about this, it would only be through knowing
> more about what is being compiled. And if compilers know more about what is
> being compiled, they will be better able to know when code is not
> order-dependent.
>
> Indeed, the smarter the compiler gets, the more it knows about the code,
> the *lower* the chance of needing to rely on non-strict ordering to get
> appropriate optimizations. After all, if your code is not order-dependent,
> and the compiler knows it, then there's no problem; the compiler can
> reorder things because you can't tell the difference. The problem only
> happens when your code is not order-dependent, but the compiler can only
> guess that it is.
My earlier reply to Bo Persson seems applicable here:
https://groups.google.com/a/isocpp.org/d/msg/std-proposals/Ew_0zQl_yBg/IJDqDAamBQAJ
> The difference is, "the parameters are independent" can be assumed (now)
> or must be proven (P0145R0). And it can't be proven if a (sub)expression
> contains a call to unknown function via a pointer or virtual call.
And, AFAIK, present-day compilers still can't prove much independence
between a call to a function in a different translation unit and a
non-local variable:
func1(func2(), func3(func2()));
That is why GCC has the attributes "pure" and "const", and why they
have been proposed for the standard.
("The [[pure]] attribute" <http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2015/p0078r0.pdf>)
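A sketch of what those attributes buy (the functions here are my own illustrations, using GCC's spelling rather than the proposed [[pure]]):

```cpp
// `pure`: the result depends only on the arguments and on reads of
// global memory, with no side effects -- so identical calls can be
// merged even under a mandated left-to-right evaluation order.
__attribute__((pure)) int table_lookup(const int* table, int i) {
    return table[i];
}

// `const` is stricter: the result depends on the argument values
// alone, so calls can also be hoisted past stores to memory.
__attribute__((const)) int square(int x) {
    return x * x;
}

int combine(const int* table, int i) {
    // The optimizer may fold the two table_lookup calls into one.
    return table_lookup(table, i) + table_lookup(table, i) + square(i);
}
```

Without such annotations, the compiler must assume the worst about any call whose definition it cannot see.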
Do you have any reason to foresee such an improvement in analyzability
within the time frame of C++17 (the target of the proposal)?
I don't.
> So exactly what cases would common subexpression elimination be allowed
> under the current rules but forbidden under P0145?
I think it is shown in the example at the very beginning of this thread.
https://groups.google.com/a/isocpp.org/d/msg/std-proposals/Ew_0zQl_yBg/el_sLD4GBQAJ
If you think the example does not show such cases, please point out
where and how the example is wrong.
The compiler knows it can invoke the "as-if" rule and reorder as it pleases -- there is no way to tell.
// Not portable!
#pragma GCC diagnostic push
#pragma GCC diagnostic warning "-std=c++11" // Invalid: GCC diagnostic pragmas accept only -W warning options, so a -std option cannot be applied this way
// ...
#pragma GCC diagnostic pop