On Tue, Mar 22, 2016 at 5:15 PM, Andrea Arteaga <andys...@gmail.com> wrote:
> The C++ standard defines a strict operator precedence and associativity, so
> for instance a+b+c must be interpreted by the compiler as (a+b)+c and not as
> a+(b+c); a compiler which interprets the expression in the second way is not
> strictly standard compliant (a few compilers, like GCC and ICC, provide a
> flag which disables those optimizations that can cause reassociation like in
> this case).
The compiler is permitted to interpret the expression the second way
if it doesn't change the observable behavior of the program.
> With C++11 the new mathematical function FMA is introduced, which allows
> operations like a*b+c to be performed in a single hardware instruction. The
> proper language construct to use an FMA is the function std::fma(a, b, c),
> but, to my knowledge, the compiler can also interpret the expression a*b+c
> as an FMA and generate the same instruction. In this case the group */+ used
> in that expression can be considered a single ternary operator instead of a
> pair of binary operators.
>
> The question is whether such an interpretation is allowed, prescribed or
> regulated by the standard. What happens in the case of the expression a*b +
> c*d? Is the compiler allowed to reformulate it as fma(a, b, c*d) or as
> fma(c, d, a*b)?
Yes. Per [expr]/12:
"The values of the floating operands and the results of floating
expressions may be represented in greater precision and range than
that required by the type"
... and since fma differs from a*b+c by providing extra precision, it
falls under this rule (note that this rule also allows use of 80-bit
x87 floating point computations for storing the intermediate values of
64-bit double computations, etc).
> I'm particularly interested because suddenly, with the introduction of FMAs,
> the very same C++ code can be interpreted in different ways by compilers,
> leading to potentially different results depending on which instruction set
> is selected at compile time. Before, the strict operator precedence and
> associativity rules prevented such differences due to floating point
> arithmetic.
>
> Please let me know if some of my assumptions are wrong or if my
> understanding of the standard is mistaken.
>
> --
>
> ---
> You received this message because you are subscribed to the Google Groups
> "ISO C++ Standard - Discussion" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to std-discussio...@isocpp.org.
> To post to this group, send email to std-dis...@isocpp.org.
> Visit this group at
> https://groups.google.com/a/isocpp.org/group/std-discussion/.
On Wed, Mar 23, 2016 at 3:03 AM, Richard Smith <ric...@metafoo.co.uk> wrote:
> On Tue, Mar 22, 2016 at 5:15 PM, Andrea Arteaga <andys...@gmail.com> wrote:
>> The C++ standard defines a strict operator precedence and associativity, so
>> for instance a+b+c must be interpreted by the compiler as (a+b)+c and not as
>> a+(b+c); a compiler which interprets the expression in the second way is not
>> strictly standard compliant (a few compilers, like GCC and ICC, provide a
>> flag which disables those optimizations that can cause reassociation like in
>> this case).
> The compiler is permitted to interpret the expression the second way
> if it doesn't change the observable behavior of the program.

This sentence is crucial for my research project. Could you please link to the specific parts of the standard which express this concept? I know Fortran allows this kind of reinterpretation, which does not change the mathematical meaning of the expression, but I had assumed that the strict operator precedence of C++ prevented such behavior.
See [intro.execution]/1:

"This International Standard places no requirement on the structure of conforming implementations. In particular, they need not copy or emulate the structure of the abstract machine. Rather, conforming implementations are required to emulate (only) the observable behavior of the abstract machine as explained below. [Footnote 5]"

Footnote 5: "This provision is sometimes called the 'as-if' rule, because an implementation is free to disregard any requirement of this International Standard as long as the result is as if the requirement had been obeyed, as far as can be determined from the observable behavior of the program. For instance, an actual implementation need not evaluate part of an expression if it can deduce that its value is not used and that no side effects affecting the observable behavior of the program are produced."
On Wed, Mar 23, 2016 at 10:11 AM, Columbo <r....@gmx.net> wrote:
On Wednesday, March 23, 2016 at 10:02:48 AM UTC+1, Andrea Arteaga wrote:
> [earlier exchange quoted above snipped]

Thanks. From this I understand that the standard defines an abstract machine which follows a well-defined set of rules. Implementations do not have to mimic that exact behavior as long as the actual outcome is the same ("conforming implementations are required to emulate (only) the observable behavior of the abstract machine"). Since (a+b)+c and a+(b+c) can produce two different observable behaviors, I would assume this sentence prohibits this kind of reassociation.
This mode enables optimizations that allow arbitrary reassociations and transformations with no accuracy guarantees. It also does not try to preserve the sign of zeros. […] Due to roundoff errors the associative law of algebra do not necessary hold for floating point numbers and thus expressions like (x + y) + z are not necessary equal to x + (y + z).
Operators can be regrouped according to the usual mathematical rules only where the operators really are associative or commutative.
I still can't find any mention of FMA instructions. From what I read, generating FMA instructions for expressions such as a*b+c is nonconforming, since this would give results different from the canonical interpretation of the expression, which is (a*b)+c.
> [original question about FMA and a*b + c*d snipped; quoted in full above]
Yes. Per [expr]/12:
"The values of the floating operands and the results of floating
expressions may be represented in greater precision and range than
that required by the type"
... and since fma differs from a*b+c by providing extra precision, it
falls under this rule (note that this rule also allows use of 80-bit
x87 floating point computations for storing the intermediate values of
64-bit double computations, etc).

Well, my question is actually different. (a*b)+c*d is a different application of the operators than a+b*(c*d). Does the standard define which one is the "correct" one, i.e. (assuming your statement above is correct, i.e. that the compiler is allowed to reassociate without changing the mathematical meaning) the one from which the abstract meaning is deduced?
On 23 March 2016 at 09:02, Andrea Arteaga <andys...@gmail.com> wrote:
> [quoted exchange snipped; see above]
> Well, my question is actually different. (a*b)+c*d is a different application of the operators than a+b*(c*d). Does the standard define which one is the "correct" one, i.e. (assuming your statement above is correct, i.e. that the compiler is allowed to reassociate without changing the mathematical meaning) the one from which the abstract meaning is deduced?

(a*b)+c*d is not the same as a+b*(c*d); I assume you meant something different?
In any case, for a*b+c*d, both fma(a, b, c*d) and fma(c, d, a*b) are valid transformations since + is commutative.
As was said earlier, the language allows any sequence of operations to be evaluated with extra precision, and you cannot predict which ones. (you can however prevent it by going through volatile memory or using compiler pragmas).
> In any case, for a*b+c*d, both fma(a, b, c*d) and fma(c, d, a*b) are valid transformations since + is commutative.

+ is commutative even in floating point algebra, but + and * are not associative, so the result changes (or may change).

> As was said earlier, the language allows any sequence of operations to be evaluated with extra precision, and you cannot predict which ones. (You can however prevent it by going through volatile memory or using compiler pragmas.)

Can you point to the place in the standard where this is stated?
On 23 March 2016 at 14:17, Andrea Arteaga <andys...@gmail.com> wrote:
>> In any case, for a*b+c*d, both fma(a, b, c*d) and fma(c, d, a*b) are valid transformations since + is commutative.
> + is commutative even in floating point algebra, but + and * are not associative, so the result changes (or may change).

Well, you have the two following possible chains of transformations:
- a*b+c*d -> fma(a, b, c*d)
- a*b+c*d -> c*d+a*b -> fma(c, d, a*b)
As you can see, it doesn't matter how you construct the fma.
> The C++ standard defines a strict operator precedence and associativity, so for instance a+b+c must be interpreted by the compiler as (a+b)+c and not as a+(b+c); a compiler which interprets the expression in the second way is not strictly standard compliant
The ANSI C and C++ language standards do not permit reassociation by the compiler; even in the absence of parentheses, floating point expressions are to be evaluated from left to right.
Wikipedia gives an example in which fused multiply-add would break the observable behavior compared to the abstract machine: rewriting (x*x) - (y*y) as std::fma(x, x, -(y * y)) can produce a negative result even if x == y under certain conditions, possibly breaking a subsequent square root.
https://en.wikipedia.org/wiki/Multiply%E2%80%93accumulate_operation
Just one more relevant question (in spite of the many spammers here). The Fortran standard explicitly specifies that reassociation and other reinterpretations of the code due to abstract arithmetic are allowed. In the F2008 draft [1] you can find this on page 141, section 7.1.5.2.4-2:

"Once the interpretation of a numeric intrinsic operation is established, the processor may evaluate any mathematically equivalent expression, provided that the integrity of parentheses is not violated."

Does the C++ standard have anything as explicit, either in favor of reinterpretation or against?