
Something funny about numbers


Dann Corbit

Dec 7, 1999, 3:00:00 AM
Consider grouping of operations...
In particular, let's consider a very naughty equation:

float f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX; /* (or double or long double or
whatever) */

If the first multiplication happens first, you will get underflow and the
result will be zero or an underflow exception.

If the last multiplication happens first, you will get overflow and the
result will be +INF or an overflow exception.

If the FLT_MIN*FLT_MAX middle operations are grouped together, then you may
get an answer somewhere close to one (within an order of magnitude or so).

0. f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX;
1. f = FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX;
2. f = FLT_MIN*(tmp0)*FLT_MAX;
3. f = (tmp1)*FLT_MAX;
4. f = tmp2;

So here you have a range from zero to infinity {with stuff in-between}, all
a function of how operations are grouped.

My question is, since this sort of thing can certainly be approximated by
real operations on actual data, does the C Standard even hint at which
outcome is to be preferred or expected? Does it allow any mechanism by
which we might force the order of operations? Can the "as if" rule be used
here, and if so, how can we disambiguate what is intended? The C89 standard
was pretty lax about floating point (as far as I can tell -- I'm no expert
on the standard) and I am wondering if the new standard makes an attempt to
clarify.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm


Diophantus

Dec 7, 1999, 3:00:00 AM
On Tue, 7 Dec 1999 12:48:52 -0800, Dann Corbit <dco...@solutionsiq.com> wrote:
>0. f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX;
>1. f = FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX;
>2. f = FLT_MIN*(tmp0)*FLT_MAX;
>3. f = (tmp1)*FLT_MAX;
>4. f = tmp2;
>
>So here you have a range from zero to infinity {with stuff in-between}, all
>a function of how operations are grouped.
>
>My question is, since this sort of thing can certainly be approximated by
>real operations on actual data, does the C Standard even hint at which
>outcome is to be preferred or expected?

The C standard requires the multiplications to be done in left to right order.
Only the order of determination of the values of the operands themselves is
unspecified.

> Does it allow any mechanism by
>which we might force the order of operations? Can the "as if" rule be used

Yes, parentheses. ;)

>here, and if so, how can we disambiguate what is intended? The C89 standard

The as if rule here means that the implementation can do whatever it takes
as long as the result is the same as what would be done if the
multiplications were done in left to right order, as directed by the
grammar.

There is that example in the standard, similar to

a - 32765 + b

where a is 32765 and b is 7. If INT_MAX is 32767, overflow results if the
addition is done before the subtraction (a + b is 32772). However, if the
implementation follows the required left to right order, a - 32765 is 0, and
no overflow results.

Must the implementation, therefore, follow left to right order? Only if the
overflow leads to an actual problem such as an exception, or if the overflow
arithmetic is irreversible, because, for instance, it truncates or clamps. If
the target machine doesn't fail on overflow, and the overflow arithmetic is
reversible (e.g. two's complement) then the implementation may well do the
addition first. According to the as if rule, the result is the same both ways.

>was pretty lax about floating point (as far as I can tell -- I'm no expert
>on the standard) and I am wondering if the new standard makes an attempt to
>clarify.

From what I recall, the lenience permits floating operations to be done in a
precision greater than the type of the operands. So, for example, two floats
multiplied together may undergo some intermediate conversion to, say, the
precision of double (if indeed these differ) or perhaps even some precision
that is not available in the type system. Thus implementations for machines
whose FPU's have extra internal bits of precision can be conforming.

Dann Corbit

Dec 7, 1999, 3:00:00 AM
Interesting. *ALL* my compilers appear to be broken.

#include <float.h>
#include <stdio.h>
int main(void)
{
    float f;
    f = (((FLT_MIN * FLT_MIN) * FLT_MAX) * FLT_MAX);
    printf("(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=%f\n", f);
    f = FLT_MIN * FLT_MIN * FLT_MAX * FLT_MAX;
    printf("FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=%f\n", f);
    f = FLT_MIN * (FLT_MIN * (FLT_MAX * FLT_MAX));
    printf("FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=%f\n", f);

    return 0;
}

Does anyone get the "expected" results from this?

Ben Pfaff

Dec 7, 1999, 3:00:00 AM
"Dann Corbit" <dco...@solutionsiq.com> writes:

> int main(void)
> {
> float f;
> f = (((FLT_MIN * FLT_MIN) * FLT_MAX) * FLT_MAX);
> printf("(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=%f\n", f);
> f = FLT_MIN * FLT_MIN * FLT_MAX * FLT_MAX;
> printf("FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=%f\n", f);
> f = FLT_MIN * (FLT_MIN * (FLT_MAX * FLT_MAX));
> printf("FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=%f\n", f);
>
> return 0;
> }
>
> Does anyone get the "expected" results from this?

With gcc 2.95 on a PII I get 0, 0, inf. That's what I expected.
What were you expecting?
--
"I should killfile you where you stand, worthless human." --Kaz

Dann Corbit

Dec 7, 1999, 3:00:00 AM
The GCC result is actually what I would expect:

bash-2.02$ cat foo.c
#include <float.h>
#include <stdio.h>


int main(void)
{
    float f;
    f = (((FLT_MIN * FLT_MIN) * FLT_MAX) * FLT_MAX);
    printf("(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=%f\n", f);

    f = FLT_MIN * FLT_MIN * FLT_MAX * FLT_MAX;
    printf("FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=%f\n", f);

    f = FLT_MIN * (FLT_MIN * FLT_MAX) * FLT_MAX;
    printf("FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=%f\n", f);

    f = FLT_MIN * (FLT_MIN * (FLT_MAX * FLT_MAX));
    printf("FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=%f\n", f);

    f = (((DBL_MIN * DBL_MIN) * DBL_MAX) * DBL_MAX);
    printf("(((DBL_MIN*DBL_MIN)*DBL_MAX)*DBL_MAX)=%f\n", f);

    f = DBL_MIN * DBL_MIN * DBL_MAX * DBL_MAX;
    printf("DBL_MIN*DBL_MIN*DBL_MAX*DBL_MAX=%f\n", f);

    f = DBL_MIN * (DBL_MIN * DBL_MAX) * DBL_MAX;
    printf("DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=%f\n", f);

    f = DBL_MIN * (DBL_MIN * (DBL_MAX * DBL_MAX));
    printf("DBL_MIN*(DBL_MIN*(DBL_MAX*DBL_MAX))=%f\n", f);

    return 0;
}
bash-2.02$ gcc -Wall -ansi -pedantic foo.c
bash-2.02$ a
(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=0.000000
FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=0.000000
FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=15.999998
FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=Inf
(((DBL_MIN*DBL_MIN)*DBL_MAX)*DBL_MAX)=0.000000
DBL_MIN*DBL_MIN*DBL_MAX*DBL_MAX=0.000000
DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=16.000000
DBL_MIN*(DBL_MIN*(DBL_MAX*DBL_MAX))=Inf
bash-2.02$

Here is something strange:
C:\tmp>cl /W4 /Ox foo.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8168 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.

foo.c
foo.c(15) : warning C4056: overflow in floating-point constant arithmetic
foo.c(15) : warning C4056: overflow in floating-point constant arithmetic
foo.c(18) : warning C4305: '=' : truncation from 'const double ' to 'float '
foo.c(21) : warning C4305: '=' : truncation from 'const double ' to 'float '
foo.c(24) : warning C4305: '=' : truncation from 'const double ' to 'float '
foo.c(27) : warning C4056: overflow in floating-point constant arithmetic
foo.c(27) : warning C4056: overflow in floating-point constant arithmetic
foo.c(27) : warning C4305: '=' : truncation from 'const double ' to 'float '
Microsoft (R) Incremental Linker Version 6.00.8168
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.

/out:foo.exe
foo.obj

C:\tmp>foo
(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=15.999998
FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=15.999998
FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=15.999998
FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=15.999998
(((DBL_MIN*DBL_MIN)*DBL_MAX)*DBL_MAX)=16.000000
DBL_MIN*DBL_MIN*DBL_MAX*DBL_MAX=16.000000
DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=16.000000
DBL_MIN*(DBL_MIN*(DBL_MAX*DBL_MAX))=16.000000

Same compiler, set to "improve floating point consistency":
C:\tmp>cl /Op foo.c
Microsoft (R) 32-bit C/C++ Optimizing Compiler Version 12.00.8168 for 80x86
Copyright (C) Microsoft Corp 1984-1998. All rights reserved.

foo.c
Microsoft (R) Incremental Linker Version 6.00.8168
Copyright (C) Microsoft Corp 1992-1998. All rights reserved.

/out:foo.exe
foo.obj

C:\tmp>foo
(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=0.000000
FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=0.000000
FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=15.999998
FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=0.000000
(((DBL_MIN*DBL_MIN)*DBL_MAX)*DBL_MAX)=0.000000
DBL_MIN*DBL_MIN*DBL_MAX*DBL_MAX=0.000000
DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=16.000000
DBL_MIN*(DBL_MIN*(DBL_MAX*DBL_MAX))=0.000000

Here is what I get on our Alpha:
$ run foo
(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=0.000000
FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=0.000000
FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=0.000000
FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=0.000000
(((DBL_MIN*DBL_MIN)*DBL_MAX)*DBL_MAX)=0.000000
DBL_MIN*DBL_MIN*DBL_MAX*DBL_MAX=0.000000
DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=0.000000
DBL_MIN*(DBL_MIN*(DBL_MAX*DBL_MAX))=0.000000
$

James Kuyper

Dec 7, 1999, 3:00:00 AM
Geoff Keating wrote:
...
> f = (((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX);
...
> Oh, that was assuming that you have arithmetic traps switched off. If
> you switch them on, I think you get 1 in all cases, under the IEEE
> rules for underflow when traps are enabled.

What are those rules, and how do they achieve that result?

Robert Corbett

Dec 7, 1999, 3:00:00 AM
In article <_oe34.3400$uM4.2204@client>,

Dann Corbit <dco...@solutionsiq.com> wrote:
>Consider grouping of operations...
>In particular, let's consider a very naughty equation:
>
>float f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX; /* (or double or long double or
>whatever) */
>
>If the first multiplication happens first, you will get underflow and the
>result will be zero or an underflow exception.
>
>If the last multiplication happens first, you will get overflow and the
>result will be +INF or an overflow exception.
>
>If the FLT_MIN*FLT_MAX middle operations are grouped together, then you may
>get an answer somewhere close to one (within an order of magnitude or so).
>
>0. f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX;
>1. f = FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX;
>2. f = FLT_MIN*(tmp0)*FLT_MAX;
>3. f = (tmp1)*FLT_MAX;
>4. f = tmp2;
>
>So here you have a range from zero to infinity {with stuff in-between}, all
>a function of how operations are grouped.
>
>My question is, since this sort of thing can certainly be approximated by
>real operations on actual data, does the C Standard even hint at which
>outcome is to be preferred or expected? Does it allow any mechanism by

>which we might force the order of operations? Can the "as if" rule be used
>here, and if so, how can we disambiguate what is intended? The C89 standard
>was pretty lax about floating point (as far as I can tell -- I'm no expert
>on the standard) and I am wondering if the new standard makes an attempt to
>clarify.

The C89 standard explicitly defines the order of operations.
Section 5.1.2.3 makes it clear that in the case you have given,
the grouping must be

float f = ((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX;

Sincerely,
Bob Corbett

Peter Seebach

Dec 7, 1999, 3:00:00 AM
In article <slrn84r28l.e...@ashi.FootPrints.net>,

Diophantus <dioph...@nospam.org> wrote:
>The as if rule here means that the implementation can do whatever it takes
>as long as the result is the same as what would be done if the
>multiplications were done in left to right order, as directed by the
>grammar.

Uhm, no.

The standard does *NOT* direct that multiplications occur in left-to-right
order, nor anything else. The order is *not specified*.

-s
--
Copyright 1999, All rights reserved. Peter Seebach / se...@plethora.net
C/Unix wizard, Pro-commerce radical, Spam fighter. Boycott Spamazon!
Consulting & Computers: http://www.plethora.net/
Get paid to surf! No spam. http://www.alladvantage.com/go.asp?refid=GZX636

Peter Seebach

Dec 7, 1999, 3:00:00 AM
In article <82k3b9$7ue$1...@engnews1.eng.sun.com>,

Robert Corbett <cor...@lupa.Sun.COM> wrote:
>The C89 standard explicitly defines the order of operations.

It *what*!?

>Section 5.1.2.3 makes it clear that in the case you have given,
>the grouping must be

> float f = ((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX;

!!!

Oh, I get it. Associativity, thus, it's equivalent to ()'s, thus, you
have to multiply the result of the inner expression...

I get it. I wouldn't have called this order of evaluation, I'd have called
it grouping.

James Kuyper

Dec 7, 1999, 3:00:00 AM
Peter Seebach wrote:
>
> In article <82k3b9$7ue$1...@engnews1.eng.sun.com>,
> Robert Corbett <cor...@lupa.Sun.COM> wrote:
> >The C89 standard explicitly defines the order of operations.
>
> It *what*!?

Not in general; just for this expression.

> >Section 5.1.2.3 makes it clear that in the case you have given,
> >the grouping must be
>
> > float f = ((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX;
>
> !!!
>
> Oh, I get it. Associativity, thus, it's equivalent to ()'s, thus, you
> have to multiply the result of the inner expression...
>
> I get it. I wouldn't have called this order of evaluation, I'd have called
> it grouping.

Why? Example 7 in section 5.1.2.3 makes it clear that "The grouping of
an expression does not completely determine its evaluation.", but this
particular expression is not such a case. The grouping does completely
determine the order of evaluation for this particular expression; if it
didn't, how do you explain what the standard says about examples 5 and 6
in that same section?

Geoff Keating

Dec 8, 1999, 3:00:00 AM
"Dann Corbit" <dco...@solutionsiq.com> writes:

> Consider grouping of operations...
> In particular, let's consider a very naughty equation:
>
> float f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX; /* (or double or long double or
> whatever) */
>
> If the first multiplication happens first, you will get underflow and the
> result will be zero or an underflow exception.
>
> If the last multiplication happens first, you will get overflow and the
> result will be +INF or an overflow exception.
>
> If the FLT_MIN*FLT_MAX middle operations are grouped together, then you may
> get an answer somewhere close to one (within an order of magnitude or so).
>
> 0. f = FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX;
> 1. f = FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX;
> 2. f = FLT_MIN*(tmp0)*FLT_MAX;
> 3. f = (tmp1)*FLT_MAX;
> 4. f = tmp2;
>
> So here you have a range from zero to infinity {with stuff in-between}, all
> a function of how operations are grouped.
>
> My question is, since this sort of thing can certainly be approximated by
> real operations on actual data, does the C Standard even hint at which
> outcome is to be preferred or expected?

The C standard says that this means

f = (((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX);

because that's the way it is parsed. For instance, in the code

unsigned x, y;
x = y * 65536 / 8;

if 'y' is 65536 and 'unsigned' is 32 bits, x is set to 0 (the product
wraps modulo 2^32), even though 65536 * 8192 is not zero, because the
expression is (y * 65536) / 8.

So f should be set to 0 if the implementation is trying to be
IEEE-compliant, _AND_ it does not promote 'float' to 'double' or 'long
double' in expressions. If it does do promotion, I would expect the
value 1. So this is a too-clever way of finding out if your compiler
promotes 'float' values to 'double' in expressions.

Oh, that was assuming that you have arithmetic traps switched off. If
you switch them on, I think you get 1 in all cases, under the IEEE
rules for underflow when traps are enabled.

--
- Geoffrey Keating <geo...@cygnus.com>

fm...@my-deja.com

Dec 8, 1999, 3:00:00 AM
In article <08g34.4795$uM4.2955@client>,
"Dann Corbit" <dco...@solutionsiq.com> wrote:
<snip>

> Here is what I get on our Alpha:
> $ run foo
> FLT_MIN*(FLT_MIN*FLT_MAX)*FLT_MAX=0.000000
That's odd 'coz I got 15.999998 on my DEC Alpha...

> DBL_MIN*(DBL_MIN*DBL_MAX)*DBL_MAX=0.000000
And I got 16.000000 for this one.

Was compiled with the DEC compiler with the -newc -std1 -fast flags.

Regards
Farid


Sent via Deja.com http://www.deja.com/
Before you buy.

Peter Seebach

Dec 8, 1999, 3:00:00 AM
In article <384DA113...@wizard.net>,
James Kuyper <kuy...@wizard.net> wrote:
>> > float f = ((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX;

>> I get it. I wouldn't have called this order of evaluation, I'd have called
>> it grouping.

>Why?

Because I don't know how the execution happens.

>Example 7 in section 5.1.2.3 makes it clear that "The grouping of
>an expression does not completely determine its evaluation.", but this
>particular expression is not such a case. The grouping does completely
>determine the order of evaluation for this particular expression; if it
>didn't, how do you explain what the standard says about examples 5 and 6
>in that same section?

Okay, let's pretend we write it out a little differently.
float fmin1 = FLT_MIN;
float fmin2 = FLT_MIN;
float fmax1 = FLT_MAX;
float fmax2 = FLT_MAX;
float f = ((fmin1*fmin2)*fmax1)*fmax2;

I believe you're allowed to evaluate "fmax2" long before you evaluate "fmin1".

Now, the *multiplications* have to happen in a certain order - but the
underlying evaluation can happen in any order, as long as the results of the
right things are used.

Thus, grouping, not order of evaluation, even though the grouping happens to
be pretty unambiguous about which operator happens first.

John Bode

Dec 8, 1999, 3:00:00 AM
In article <_oe34.3400$uM4.2204@client>,
> outcome is to be preferred or expected? Does it allow any mechanism by
> which we might force the order of operations? Can the "as if" rule be used
> here, and if so, how can we disambiguate what is intended? The C89 standard
> was pretty lax about floating point (as far as I can tell -- I'm no expert
> on the standard) and I am wondering if the new standard makes an attempt to
> clarify.

Excuse me while I dive in over my head here.

In the case of a completely commutative and associative operator like *,
isn't the compiler free to evaluate things in any order it damn well
feels like? In other words, can't a*b*c*d be evaluated as any of the
following?

a*b*c*d
a*d*b*c
d*c*b*a

So you might wind up with FLT_MIN*FLT_MAX*FLT_MIN*FLT_MAX.

My understanding is that the compiler is allowed to ignore any explicit
grouping like ((a*b)*c)*d, since (mathematically, at least) it doesn't
matter which factor is multiplied by which other factor.

So, for this particular case, there *isn't* a preferred answer, is
there?

--
www.biggerhammer.com | Low Technology Solutions for
| High Technology Problems

Get the FAQ's at http://www.eskimo.com/~scs/C-faq/top.html

Diophantus

Dec 8, 1999, 3:00:00 AM
On Tue, 07 Dec 1999 23:28:12 GMT, Peter Seebach <se...@plethora.net> wrote:
>In article <slrn84r28l.e...@ashi.FootPrints.net>,
>Diophantus <dioph...@nospam.org> wrote:
>>The as if rule here means that the implementation can do whatever it takes
>>as long as the result is the same as what would be done if the
>>multiplications were done in left to right order, as directed by the
>>grammar.
>
>Uhm, no.
>
>The standard does *NOT* direct that multiplications occur in left-to-right
>order, nor anything else. The order is *not specified*.

Yes it is. The associativity requires it. Of course, if the same answer can be
derived by some other order, then that is permitted by the "as if rule".

But a*b*c*d clearly means ((a*b)*c)*d

In the abstract semantics, you can't multiply by c until you have the result of
a*b, and you can't multiply by d until you have ((a*b)*c) solved.

In other words, there is a partial constraint over the order of evaluation.

Now if multiplication over the underlying types is associative and commutative
then the language implementation can rearrange things algebraically and nobody
is the wiser.

In general, floating types do not have associative multiplication (IEEE
multiplication is commutative, but regrouping changes results), so such a
rearrangement is illegal. The multiplications must be done in the order
implied by the associativity.

Diophantus

Dec 8, 1999, 3:00:00 AM
On Wed, 08 Dec 1999 05:17:39 GMT, Peter Seebach <se...@plethora.net> wrote:
>In article <384DA113...@wizard.net>,
>James Kuyper <kuy...@wizard.net> wrote:
>>> > float f = ((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX;
>
>>> I get it. I wouldn't have called this order of evaluation, I'd have called
>>> it grouping.
>
>>Why?
>
>Because I don't know how the execution happens.

I would call it a partial constraint over evaluation. In computing a*b*c*d, a
number of things have to happen: the operands a, b, c, and d must have their
values determined. And these values must be multiplied together.

A partial order is imposed because an operation cannot proceed until all of its
inputs are available.

You can draw a directed acyclic graph (dag) representing the expression. Any
topological sort of the dag gives you a possible evaluation order (but also
parallel evaluations are possible).

The dag tells you that some things must happen before other things, but also
that some things can be done independently of other things. Those things that
are done independently of each other may be performed in an unspecified order
relative to one another.

Christian Bau

Dec 8, 1999, 3:00:00 AM

Actually, by the syntax of the C language a*b*c*d has to be evaluated as
((a*b)*c)*d. For unsigned numbers, the result would always be the same no
matter in what order you do the multiplications, so a compiler could
instead calculate ((d*c)*b)*a. The same is true for integer numbers in
many implementations (in all implementations where 0 * overflow has a
result of 0 with no side effect).

But with floating point maths, different order of multiplication would
give different results, so the as-if rule cannot be applied except in rare
cases. For example, if you take x * 2.0 * 3.14 in IEEE arithmetic, then (x
* 2.0) * 3.14 and x * (2.0 * 3.14) will always give the same result and
the as-if rule could be applied, but (x * 0.5) * 3.14 and x * (0.5 * 3.14)
could give different results if x is very small, so the as-if rule must
not be used.

Ari Lukumies

Dec 8, 1999, 3:00:00 AM
Diophantus wrote:
>
> I would call it a partial constraint over evaluation. In computing a*b*c*d, a
> number of things have to happen: the operands a, b, c, and d must have their
> values determined. And these values must be multiplied together.

>
> A partial order is imposed because an operation cannot proceed until all of its
> inputs are available.
>
> You can draw a directed acyclic graph (dag) representing the expression. Any
> topological sort of the dag gives you a possible evaluation order (but also
> parallel evaluations are possible).
>
> The dag tells you that some things must happen before other things, but also
> that some things can be done independently of other things. Those things that
> are done independently of each other may be performed in an unspecified order
> relative to one another.

How about:

    e = c * d;
    while (some_condition) {
        f += (a * b * c * d);
        f = function_call(f);
    }
    g = 2 * c * d;
...

Now, is the optimizer free to arrange this to (ok, maybe this is not a
"complete" optimization, but...):


    e = c * d;
    while (some_condition) {
        f += (a * b * e);
        f = function_call(f);
    }
    g = 2 * e;
...

Or does it have to evaluate each of the c and d in each iteration and at
the end and redo the multiplication?

AriL
--
A multiverse is figments of its own imaginations.
Homepaged at http://www.angelfire.com/or/lukumies

James Kuyper Jr.

Dec 8, 1999, 3:00:00 AM
Peter Seebach wrote:
...

> Okay, let's pretend we write it out a little differently.
> float fmin1 = FLT_MIN;
> float fmin2 = FLT_MIN;
> float fmax1 = FLT_MAX;
> float fmax2 = FLT_MAX;
> float f = ((fmin1*fmin2)*fmax1)*fmax2;
>
> I believe you're allowed to evaluate "fmax2" long before you evaluate "fmin1".
>
> Now, the *multiplications* have to happen in a certain order - but the
> underlying evaluation can happen in any order, as long as the results of the
> right things are used.

Yes, identifiers are id-expressions, and the grouping says nothing about
the order of evaluation of those expressions relative to each other, and
only partially constrains their order relative to the evaluation of the
multiplicative expressions. However, Bob Corbett's original statement
was that "The C89 standard explicitly defines the order of operations".
You changed that to "order of evaluation", and I didn't notice the
significance of that change until now. The only operators here are =,
(), and *, and the order in which those operations occurs is completely
determined by the syntax.

James Kuyper Jr.

Dec 8, 1999, 3:00:00 AM
"James Kuyper Jr." wrote:
...

> Yes, identifiers are id-expressions, and the grouping says nothing about

My apologies - I was looking in the wrong standard when I wrote that.
'id-expression' is part of the C++ grammar, not the C grammar. I should
have said primary-expression, which would have worked with either
language. This doesn't affect the validity of my statement, however.

Lawrence Kirby

Dec 8, 1999, 3:00:00 AM
In article <mVf34.4642$uM4.2853@client>
dco...@solutionsiq.com "Dann Corbit" writes:

>Interesting. *ALL* my compilers appear to be broken.
>

>#include <float.h>
>#include <stdio.h>
>int main(void)
>{
> float f;
> f = (((FLT_MIN * FLT_MIN) * FLT_MAX) * FLT_MAX);
> printf("(((FLT_MIN*FLT_MIN)*FLT_MAX)*FLT_MAX)=%f\n", f);
> f = FLT_MIN * FLT_MIN * FLT_MAX * FLT_MAX;
> printf("FLT_MIN*FLT_MIN*FLT_MAX*FLT_MAX=%f\n", f);
> f = FLT_MIN * (FLT_MIN * (FLT_MAX * FLT_MAX));
> printf("FLT_MIN*(FLT_MIN*(FLT_MAX*FLT_MAX))=%f\n", f);
>
> return 0;
>}
>
>Does anyone get the "expected" results from this?

Intermediate results in a floating point expression can be stored
in a wider format than implied by the underlying type. Only values
stored in objects and the results of conversion to a narrower floating
point type must be represented in the precise format of the
indicated type (*). So, for example, the intermediate result of
FLT_MIN * FLT_MIN may be represented internally in double, long double
or some other internal format that is wider than float, and that
format may be able to represent the result. Only when the final
result is assigned to f above will the limitations of the float format
be strictly imposed.

* - many compilers deliberately don't honour this because it reduces
precision and increases overhead. Some provide switches to make
them conform but others even with the switches don't implement
strict C semantics.

--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------


Lawrence Kirby

Dec 8, 1999, 3:00:00 AM
In article <gKg34.2773$Sz5.4...@ptah.visi.com>
se...@plethora.net "Peter Seebach" writes:

>In article <slrn84r28l.e...@ashi.FootPrints.net>,
>Diophantus <dioph...@nospam.org> wrote:
>>The as if rule here means that the implementation can do whatever it takes
>>as long as the result is the same as what would be done if the
>>multiplications were done in left to right order, as directed by the
>>grammar.
>
>Uhm, no.
>
>The standard does *NOT* direct that multiplications occur in left-to-right
>order,

Yes it does, it is inherent in the syntax.

>nor anything else. The order is *not specified*.

The order is partially specified. In a*b*c*d the order in which
a,b,c,d are individually evaluated is not specified. What is specified
is that the result of a*b is multiplied by c and the result of
a*b*c is multiplied by d. That is a degree of ordering.

Barry Margolin

Dec 8, 1999, 3:00:00 AM
In article <384E49C3...@elmont.fi>,

Ari Lukumies <ari.lu...@elmont.fi> wrote:
>How about:
>
> e = c * d;
> while (some_condition) {
> f += (a * b * c * d);
> f = function_call(f);
> }
> g = 2 * c * d;
> ...
>
>Now, is the optimizer free to arrange this to (ok, maybe this is not a
>"complete" optimization, but...):
>
>
> e = c * d;
> while(some_condition) {
> f += (a * b * e);
> f = function_call(f);
> }
> g = 2 * e;
> ...
>
>Or does it have to evaluate each of the c and d in each iteration and at
>the end and redo the multiplication?

The optimizer is only allowed to rearrange things if they meet the "as if"
rule. Since floating point arithmetic is not generally associative, the
rearranged version of your code doesn't meet this criterion, so it would
not be allowed.

Some other languages have less strict specifications. They allow
implementations to evaluate expressions in any way that conforms to normal
arithmetic's equivalence rules (e.g. associativity, commutativity,
distributivity); if the programmer knows that their data would be sensitive
to such rearrangement, they must usually break it up into separate
statements.

--
Barry Margolin, bar...@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.

Dann Corbit

Dec 8, 1999, 3:00:00 AM
John Bode <john...@my-deja.com> wrote in message
news:82maii$k8t$1...@nnrp1.deja.com...
> Excuse me while I dive in over my head here.
>
> In the case of a completely commutative and associative operator like *,
> isn't the compiler free to evaluate things in any order it damn well
> feels like? In other words, can't a*b*c*d be evaluated as any of the
> following?
>
> a*b*c*d
> a*d*b*c
> d*c*b*a
>
> So you might wind up with FLT_MIN*FLT_MAX*FLT_MIN*FLT_MAX.
>
> My understanding is that the compiler is allowed to ignore any explicit
> grouping like ((a*b)*c)*d, since (mathematically, at least) it doesn't
> matter which factor is multiplied by which other factor.
>
> So, for this particular case, there *isn't* a preferred answer, is
> there?

It seems that the compiler must order the operations from left to right.
Mathematically, they do commute, but it is certainly *not* true numerically.
However, since the compiler is free to use extended precision if it so
desires, it seems that there is no *expected* answer to the question. In
other words, it has to multiply a*b first, then multiply by c and finally by
d. But any of a large number of answers are possible, depending upon the
actual precision used in the calculation.

As always, with floating point, you just need to be extra careful if you get
anywhere near to the ragged end of precision. The question is not far
fetched at all, since we may have a variable be the result of subtraction of
two similar numbers and suddenly precision has vanished. So it is a real
issue to think about carefully.

Diophantus

Dec 8, 1999, 3:00:00 AM
On Wed, 08 Dec 1999 12:08:10 GMT, Ari Lukumies <ari.lu...@elmont.fi> wrote:
>Diophantus wrote:
>>
>> I would call it a partial constraint over evaluation. In computing a*b*c*d, a
>> number of things have to happen: the operands a, b, c, and d must have their
>> values determined. And these values must be multiplied together.
>>
>> A partial order is imposed because an operation cannot proceed until all of its
>> inputs are available.
>>
>> You can draw a directed acyclic graph (dag) representing the expression. Any
>> topological sort of the dag gives you a possible evaluation order (but also
>> parallel evaluations are possible).
>>
>> The dag tells you that some things must happen before other things, but also
>> that some things can be done independently of other things. Those things that
>> are done independently of each other may be performed in an unspecified order
>> relative to one another.
>
>How about:
>
> e = c * d;
> while (some_condition) {
> f += (a * b * c * d);
> f = function_call(f);
> }
> g = 2 * c * d;
> ...
>
>Now, is the optimizer free to arrange this to (ok, maybe this is not a
>"complete" optimization, but...):
>
>
> e = c * d;
> while(some_condition) {
> f += (a * b * e);
> f = function_call(f);
> }
> g = 2 * e;

Yes, because e has the same value as c * d. So the transformation does not
change what is computed. It is a simple algebraic substitution which does
not depend on associativity or commutativity of the multiplication operation.

In fact, a decent compiler might move all multiplications out of the loop by
pre-computing a*b*c*d.

If you don't know whether the compilers you are going to be porting to are
decent or not, you can explicitly do these transformations yourself.
In some cases, it makes the code more readable and maintainable.

Mathematicians sometimes condense the terms in a long derivation that are
unaffected by consecutive steps by representing them with a single letter. This
condenses the notation and makes the steps easier to follow.

>Or does it have to evaluate each of the c and d in each iteration and at
>the end and redo the multiplication?

Only if it cannot prove that c, d and e are not modified by the function call.

This is trivially easy to prove in the case that these objects are local auto
variables and their address is not taken.

James Kuyper

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
John Bode wrote:
...

> In the case of a completely commutative and associative operator like *,

For floating point numbers, '*' is neither commutative nor associative.

Paul Jarc

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
"James Kuyper Jr." <kuy...@wizard.net> writes:
> > float f = ((fmin1*fmin2)*fmax1)*fmax2;

>
> The only operators here are =, (), and *,

Just a nit: these sets of parentheses aren't operators; they're just
punctuators. (But those in function calls are operators.)


paul

Paul Jarc

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
dioph...@nospam.org (Diophantus) writes:
> On Wed, 08 Dec 1999 12:08:10 GMT, Ari Lukumies <ari.lu...@elmont.fi> wrote:
> >How about:
> >
> > e = c * d;
> > while (some_condition) {
> > f += (a * b * c * d);
> > f = function_call(f);
> > }
> > g = 2 * c * d;
> > ...
> >
> >Now, is the optimizer free to arrange this to (ok, maybe this is not a
> >"complete" optimization, but...):
> >
> >
> > e = c * d;
> > while(some_condition) {
> > f += (a * b * e);
> > f = function_call(f);
> > }
> > g = 2 * e;
>
> Yes, because e has the same value as c * d. So the transformation
> does not change what is computed. It is a simple algebraic
> substitution which does not depend on associativity or
> commutativity of the multiplication operation.

It does depend on associativity - it changes ((a*b)*c)*d to
(a*b)*(c*d), and (2*c)*d to 2*(c*d).


paul

Peter Seebach

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
In article <944655...@genesis.demon.co.uk>,

Lawrence Kirby <fr...@genesis.demon.co.uk> wrote:
>>The standard does *NOT* direct that multiplications occur in left-to-right
>>order,

>Yes it does, it is inherent in the syntax.

I think we're agreeing but using different words.

(a*b)*(c*d)
specifies nothing except that the "inner" multiplications happen before the
"outer" one.

It's not left-to-right; it's "inner" to "outer", which happens to be the
same as left-to-right in the absence of other grouping.

>The order is partially specified. In a*b*c*d the order in which
>a,b,c,d are individually evaluated is not specified. What is specified
>is that the result of a*b is multiplied by c and the result of
>a*b*c is multiplied by d. That is a degree of ordering.

Yes, but the specification isn't that the *ordering* is left-to-right, just
that grouping is.

Left-to-right multiplication is a side-effect of two other specifications,
it is not itself a rule.

Lawrence Kirby

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
In article <slrn84te1a.f...@ashi.FootPrints.net>
dioph...@nospam.org "Diophantus" writes:

...

>>How about:
>>
>> e = c * d;
>> while (some_condition) {
>> f += (a * b * c * d);
>> f = function_call(f);
>> }
>> g = 2 * c * d;
>> ...
>>
>>Now, is the optimizer free to arrange this to (ok, maybe this is not a
>>"complete" optimization, but...):
>>
>>
>> e = c * d;
>> while(some_condition) {
>> f += (a * b * e);
>> f = function_call(f);
>> }
>> g = 2 * e;
>
>Yes, because e has the same value as c * d. So the transformation does not
>change what is computed.

It does because c*d is never calculated within the loop in the first case,
a*b*c is calculated and the result is multiplied by d. It is the difference
between (a*b*c)*d and (a*b)*(c*d). They can and often will produce
different results. Perhaps more significantly they can overflow
or underflow under different circumstances, e.g. c*d may overflow or
underflow in cases where the evaluation of (a*b*c)*d does not.

>It is a simple algebraic substitution which does
>not depend on associativity or commutativity of the multiplication operation.

But it does depend on the order of evaluation in practice.

>In fact, a decent compiler might move all multiplications out of the loop by
>pre-computing a*b*c*d.

Only if the compiler can prove that function_call() doesn't legitimately
modify any of a,b,c or d; we don't know how they are defined here. In fact
this is another reason the compiler may not be able to use e above.

>If you don't know whether the compilers you are going to be porting to are
>decent or not, you can explicitly do these transformations yourself.
>In some cases, it makes the code more readable and maintainable.

True, it can simplify complex expressions by breaking them down into
smaller parts.

>Mathematicians sometimes condense the terms in a long derivation that are
>unaffected by consecutive steps by representing them with a single letter. This
>condenses the notation and makes the steps easier to follow.
>
>>Or does it have to evaluate each of the c and d in each iteration and at
>>the end and redo the multiplication?
>
>Only if it cannot prove that c, d and e are not modified by the function call.
>This is trivially easy to prove in the case that these objects are local auto
>variables and their address is not taken.

But without seeing the definitions and any other code in the function
we can't assume this.

John Hauser

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to

James Kuyper:

> For floating point numbers, '*' is neither commutative nor associative.

It certainly is commutative on the vast majority of implementations.

- John Hauser

James Kuyper

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to

You're right, it's associativity which is the big problem, not
commutativity. I didn't think long enough before sending that message.

David R Tribble

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
John Bode wrote:
> Excuse me while I dive in over my head here.
>
> In the case of a completely commutative and associative operator like
> *, isn't the compiler free to evaluate things in any order it damn

> well feels like? In other words, can't a*b*c*d be evaluated as any
> of the following?
>
> a*b*c*d
> a*d*b*c
> d*c*b*a
>
> So you might wind up with FLT_MIN*FLT_MAX*FLT_MIN*FLT_MAX.

The compiler is free to rearrange the order of evaluations of
subexpressions PROVIDED that the result is always the same AS IF
it were evaluated in the strict ordering prescribed by the virtual
machine of the C standard.

Thus for your example above, the compiler is free to rearrange the
evaluation order of the subexpressions within a*b*c*d as long as all
possible results would be the same as the evaluation of ((a*b)*c)*d.
This is possible for a limited set of types and/or values (such as
unsigned short), but is generally not possible for floating-point
types (because of the possibility that a subexpression can overflow,
underflow, or otherwise cause a trap).

-- David R. Tribble, da...@tribble.com, http://david.tribble.com --

David R Tribble

unread,
Dec 8, 1999, 3:00:00 AM12/8/99
to
James Kuyper wrote:
>
> John Bode wrote:
> ...

>> In the case of a completely commutative and associative operator
>> like *, ...

>
> For floating point numbers, '*' is neither commutative nor
> associative.

Across the full range of floating point values, that's true.

On the other hand, the compiler might take advantage of the known
types and values of intermediate subexpressions to allow them to
act as if they are commutative or associative.

unsigned short a;
float b;
unsigned short c;
double d;

result = a * b * c * d;
// same as: (((double)a * (double)b) * (double)c) * d

This could be rearranged safely as

result = (((double)a * (double)c) * (double)b) * d;

provided that the range of (float * unsigned short), i.e., (a*c*b),
does not exceed the range of double; i.e., provided that all
possible values of a, b, and c would not lead to over/underflow
in either (a*c*b) or (a*b*c).

Barry Margolin

unread,
Dec 9, 1999, 3:00:00 AM12/9/99
to
In article <384EEB5D...@wizard.net>,

James Kuyper <kuy...@wizard.net> wrote:
>John Hauser wrote:
>>
>> James Kuyper:
>> > For floating point numbers, '*' is neither commutative nor associative.
>>
>> It certainly is commutative on the vast majority of implementations.
>
>You're right, it's associativity which is the big problem, not
>commmutativity. I didn't think long enough before sending that message.

In fact, I think it's kind of difficult to talk about commutativity in this
context. Computers don't have "left" and "right" operands when performing
arithmetic operations. The generated code for a*b might load a into
Register 1 and b into Register 2, then invoke a multiply opcode; but it
could alternatively load them into the opposite registers, or load either
of them into a register and then multiply it by the other from memory.
Whether or not these produce the same result, they're all valid
implementations, since the language specification can't go into that level
of detail and still be hardware-independent. If the hardware doesn't
implement a commutative multiply (perhaps there's some difference in how
round-off errors show up -- I presume IEEE FP doesn't permit this, but
non-IEEE FP could), then it just means that the precise result of the
multiply isn't predictable on that hardware because of it. In this case,
it would be possible for the optimizer to change the results of a
multiplication (the as-if rule applies to the virtual machine, not the
actual hardware).

Marc van Leeuwen

unread,
Dec 9, 1999, 3:00:00 AM12/9/99
to
James Kuyper wrote:
> For floating point numbers, '*' is neither commutative nor associative.

Just a small point here. About associativity, everybody agrees. But as for
commutativity, I cannot see why for any a,b there would be a reasonable
explanation for having a*b!=b*a. I'm not saying that floating multiplication
must always be commutative, just that I think it might well be (while it most
certainly cannot be always associative). In fact I wouldn't be surprised if a
certain specification (e.g. IEEE) would explicitly or implicitly require it to
be commutative.

--
Marc van Leeuwen
Universite de Poitiers
http://wwwmathlabo.univ-poitiers.fr/~maavl/

Lawrence Kirby

unread,
Dec 9, 1999, 3:00:00 AM12/9/99
to
In article <384EFB40...@tribble.com>

da...@tribble.com "David R Tribble" writes:

>James Kuyper wrote:
>>
>> John Bode wrote:
>> ...
>>> In the case of a completely commutative and associative operator
>>> like *, ...
>>

>> For floating point numbers, '*' is neither commutative nor
>> associative.
>

>Across the full range of floating point values, that's true.
>
>On the other hand, the compiler might take advantage of the known
>types and values of intermediate subexpressions to allow them to
>act as if they are commutative or associative.
>
> unsigned short a;
> float b;
> unsigned short c;
> double d;
>
> result = a * b * c * d;
> // same as: (((double)a * (double)b) * (double)c) * d
>
>This could be rearranged safely as
>
> result = (((double)a * (double)c) * (double)b) * d;
>
>provided that the range of (float * unsigned short), i.e., (a*c*b),
>does not exceed the range of double; i.e., provided that all
>possible values of a, b, and c would not lead to over/underflow
>in either (a*c*b) or (a*b*c).

But is that true? The problem is that the rearrangement might
cause the result to differ in the lower order bits. The question is whether
that is enough to disallow the optimisation. The standard explicitly
disallows optimisations such as converting /10.0 to *0.1 because they
can produce slightly different results. The problem is that the
standard is loose enough with regards to how intermediate results in
an expression are stored that a strictly conforming program might
not be able to tell that the result is wrong even when something as
well defined as IEEE arithmetic is assumed. So I'm not at all clear
on how strictly the non-rearrangement rules really apply. Where
overflow and underflow are possible it is clear but accuracy issues
are something else. AFAICS in general there is nothing that requires
results to be consistent even if no rearrangement occurs, e.g.

result1 = a * b * c * d;
result2 = a * b * c * d;

I don't see anything that requires result1 and result2 to compare equal.
Is there anything that prevents, say, intermediate results being
held in registers at long double precision while one intermediate result in
the calculation of result2 gets spilled to memory at double precision?
It is possible for a floating point implementation to have floating
point registers that hold values at higher precision than any memory
format it supports.

Dennis Ritchie

unread,
Dec 9, 1999, 3:00:00 AM12/9/99
to
Marc van Leeuwen wrote:
...

> Just a small point here. About associativity, everybody agrees. But as for
> commutativity, I cannot see why for any a,b there would be a reasonable
> explanation for having a*b!=b*a. I'm not saying that floating multiplication
> must always be commutative, just that I think it might well be (while it most
> certainly cannot be always associative). In fact I wouldn't be surprised if a
> certain specification (eg IEEE) would explicitly or implicitly require it to
> be commutative.

On early versions of the Cray 1, * was indeed not
commutative. I think this was fixed later, however.

Dennis

Michael Rubenstein

unread,
Dec 9, 1999, 3:00:00 AM12/9/99
to

If memory serves this was also true of the earlier CDC 6600 which
was also designed by Cray. Arithmetic on the 6600 was notorious
for being quirky.

Douglas A. Gwyn

unread,
Dec 10, 1999, 3:00:00 AM12/10/99
to

CDC (Cray) always opted for speed over accuracy.

It's easy to see how * might not be commutative,
if the LHS was an intermediate result in an extended register
and the RHS was in memory, for example.

Dik T. Winter

unread,
Dec 10, 1999, 3:00:00 AM12/10/99
to
In article <384FAE35...@mathlabo.univ-poitiers.fr> Marc van Leeuwen <ma...@mathlabo.univ-poitiers.fr> writes:

> James Kuyper wrote:
> > For floating point numbers, '*' is neither commutative nor associative.
>
> Just a small point here. About associativity, everybody agrees. But as for
> commutativity, I cannot see why for any a,b there would be a reasonable
> explanation for having a*b!=b*a.

Think about machines built for speed rather than correctness. CDC Cyber 205
for example has in the hardware reference manual an explicit case where
a == b is not equal to b == a. I wouldn't be too surprised if there are
machines where the handling of multiplier and multiplicand is so asymmetric
that the operator becomes noncommutative.
--
dik t. winter, cwi, kruislaan 413, 1098 sj amsterdam, nederland, +31205924131
home: bovenover 215, 1025 jn amsterdam, nederland; http://www.cwi.nl/~dik/

Dik T. Winter

unread,
Dec 10, 1999, 3:00:00 AM12/10/99
to
In article <384FD003...@bell-labs.com> Dennis Ritchie <d...@bell-labs.com> writes:
> On early versions of the Cray 1, * was indeed not
> commutative. I think this was fixed later, however.

Indeed. I have a hardware manual of the Cray 1 and it describes the
multiplication in extreme detail (about two pages). From it you can
deduce that the result might be wrong in the low order three bits, but
also that it commutes ;-).

David R Tribble

unread,
Dec 16, 1999, 3:00:00 AM12/16/99
to
Lawrence Kirby wrote:
> ... The problem is that the

> standard is loose enough with regards to how intermediate results in
> an expression are stored that a strictly conforming program might
> not be able to tell that the result is wrong even when something as
> well defined as IEEE arithmetic is assumed. So I'm not at all clear
> on how strictly the non-rearrangement rules really apply. Where
> overflow and underflow are possible it is clear but accuracy issues
> are something else. AFAICS in general there is nothing that requires
> results to be consistent even if no rearrangement occurs, e.g.
>
> result1 = a * b * c * d;
> result2 = a * b * c * d;
>
> I don't see anything that requires result1 and result2 to compare
> equal.

You don't think that the "as if" rules of expression evaluation
necessarily require both results to be the same?

Paul Jarc

unread,
Dec 16, 1999, 3:00:00 AM12/16/99
to
David R Tribble <da...@tribble.com> writes:

> Lawrence Kirby wrote:
> > result1 = a * b * c * d;
> > result2 = a * b * c * d;
> >
> > I don't see anything that requires result1 and result2 to compare
> > equal.
>
> You don't think that the "as if" rules of expression evaluation
> necessarily require both results to be the same?

result1's computation could be done in large registers, and result2's
could be done in memory, or smaller registers. The standard allows
use of extra precision for intermediate values, but doesn't require
that extra precision be used consistently.
If implementations were required to produce the same results "as if"
the extra-precision optimization were never used, then result1 would
have to compare equal to result2; but then why would the standard
allow extra precision at all, if its benefits must be discarded?


paul

Geoff Keating

unread,
Dec 16, 1999, 3:00:00 AM12/16/99
to
David R Tribble <da...@tribble.com> writes:

> Lawrence Kirby wrote:
> > ... The problem is that the
> > standard is loose enough with regards to how intermediate results in
> > an expression are stored that a strictly conforming program might
> > not be able to tell that the result is wrong even when something as
> > well defined as IEEE arithmetic is assumed. So I'm not at all clear
> > on how strictly the non-rearrangement rules really apply. Where
> > overflow and underflow are possible it is clear but accuracy issues
> > are something else. AFAICS in general there is nothing that requires
> > results to be consistent even if no rearrangement occurs, e.g.
> >

> > result1 = a * b * c * d;
> > result2 = a * b * c * d;
> >
> > I don't see anything that requires result1 and result2 to compare
> > equal.

You mean with ==?

There are a whole bunch of cases:

- If the values are all signed integers, and no overflow occurs,
then they do compare equal with ==, but I think memcmp()
may say they differ (due to padding bits, or negative zero).
- If the values are all signed integers and overflow occurs,
anything may happen (undefined behaviour).
- If the values are all unsigned integers, then they will always
compare equal with both == and memcmp(), at least that's the
intent of the new C99 rules.
- If the values are floating-point, then they need not compare
equal with either.
- If the values are floating-point and the implementation claims
conformance to the new C99 IEEE floating-point rules,
then they will compare equal, I think with both == and memcmp,
so long as the results are not NaN in which case they will
not compare equal with == and may or may not compare equal
with memcmp().

> You don't think that the "as if" rules of expression evaluation
> necessarily require both results to be the same?

`as if' does not affect the visible behaviour of the implementation,
by definition, so I don't see how it applies here.

--
- Geoffrey Keating <geo...@cygnus.com>

Lawrence Kirby

unread,
Dec 16, 1999, 3:00:00 AM12/16/99
to
In article <385929C7...@tribble.com>

da...@tribble.com "David R Tribble" writes:

>Lawrence Kirby wrote:
>> ... The problem is that the
>> standard is loose enough with regards to how intermediate results in
>> an expression are stored that a strictly conforming program might
>> not be able to tell that the result is wrong even when something as
>> well defined as IEEE arithmetic is assumed. So I'm not at all clear
>> on how strictly the non-rearrangement rules really apply. Where
>> overflow and underflow are possible it is clear but accuracy issues
>> are something else. AFAICS in general there is nothing that requires
>> results to be consistent even if no rearrangement occurs, e.g.
>>
>> result1 = a * b * c * d;
>> result2 = a * b * c * d;
>>
>> I don't see anything that requires result1 and result2 to compare
>> equal.
>

>You don't think that the "as if" rules of expression evaluation
>necessarily require both results to be the same?

No, it doesn't.

The "as if" rule is nothing more than an informal statement of the fact
that a conforming C implementation must accurately simulate the abstract
machine defined by the standard: it must produce output that is compatible
with the abstract machine but how it gets there is of no concern. The problem
is that in this area the behaviour of the abstract machine isn't precisely
defined. The standard gives leeway for intermediate floating point
results in an expression (other than values stored in objects or the
results of some conversions) to be held at an arbitrarily higher precision
than the underlying type requires. There is no requirement for
consistency here, different intermediate results may be held at different
precisions. That means that result1 and result2 can be assigned different
values. Now AFAIK C99 introduces the FLT_EVAL_METHOD macro in <float.h>
which will help characterize this but it still has implementation-defined
values. Also unless you assume something like IEEE 754 arithmetic
the results of floating point operations aren't exactly defined anyway.

Stephen Baynes

unread,
Dec 17, 1999, 3:00:00 AM12/17/99
to
Geoff Keating wrote:
>
> David R Tribble <da...@tribble.com> writes:
>
> > Lawrence Kirby wrote:
> > >
> > > result1 = a * b * c * d;
> > > result2 = a * b * c * d;
> > >
> > > I don't see anything that requires result1 and result2 to compare
> > > equal.
>
> You mean with ==?
>
> There are a whole bunch of cases:
>
> - If the values are all signed integers, and no overflow occurs,
> then then they do compare equal with ==, but I think memcmp()
> may say they differ (due to padding bits, or negative zero).
> - If the values are all signed integers and overflow occurs,
> anything may happen (undefined behaviour).
> - If the values are all unsigned integers, then they will always
> compare equal with both == and memcmp(), at least that's the
> intent of the new C99 rules.

Where does the requirement come from that memcmp() must compare them
as equal when they are unsigned integers (excluding unsigned char)?
Why can't unsigned integers have padding bits like signed integers can?

<snip>

--
Stephen Baynes CEng MBCS Stephen...@soton.sc.philips.com
Philips Semiconductors Ltd
Southampton SO15 0DJ +44 (0)23 80316431 *** NEW ***
United Kingdom My views are my own.
Do you use ISO8859-1? Yes if you see © as copyright, ÷ as division and ½ as 1/2.

James Kuyper

unread,
Dec 17, 1999, 3:00:00 AM12/17/99
to
Stephen Baynes wrote:
...

> Why can't unsigned integers have padding bits like integers?

As you imply, there is no such requirement. Quite the contrary. Section
6.2.6.2 p1 says: "For unsigned integer types other than unsigned char,
the bits of the object representation shall be divided into two groups:
value bits and padding bits (there need not be any of the latter)."

Kevin D. Quitt

unread,
Dec 17, 1999, 3:00:00 AM12/17/99
to
On 16 Dec 1999 15:33:27 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
>The standard allows
>use of extra precision for intermediate values, but doesn't require
>that extra precision be used consistently.

Just out of curiosity, how do other languages handle this sort of thing?
Do they have everything defined precisely, or do they not bother with the
detail the C spec goes into (and leaves off)?

--
#include <standard.disclaimer>
_
Kevin D Quitt USA 91351-4454 96.37% of all statistics are made up
Per the FCA, this email address may not be added to any commercial mail list

Dik T. Winter

unread,
Dec 19, 1999, 3:00:00 AM12/19/99
to
In article <23B9705CCFA23549.F364E583...@lp.airnews.net> KQu...@IEEInc.com writes:
> On 16 Dec 1999 15:33:27 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
> >The standard allows
> >use of extra precision for intermediate values, but doesn't require
> >that extra precision be used consistently.
>
> Just out of curiosity, how do other languages handle this sort of thing?
> >Do they have everything defined precisely, or do they not bother with the
> detail the C spec goes into (and leaves off)?

The only language I know that specifies something about floating-point
arithmetic is Ada, and it goes much further than the C specs. It
requires the implementation to define the set of "model numbers" (based
on radix, exponent range and mantissa range) and has specific requirements
on what the result of each operation should be. But even there, there
is still some freedom. E.g. if the implementation specifies FLOAT with
only 22 bits in the mantissa, while working on an IEEE processor, this
is OK, as long as each operation whose exact result is a model number
delivers that number, and each operation whose exact result is not a
model number delivers one of the model numbers at the ends of the interval
enclosing the exact result, or a number between those two.

The C specs do not even come close to this.

Christian Bau

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
In article
<23B9705CCFA23549.F364E583...@lp.airnews.net>,
KQu...@IEEInc.com wrote:

> On 16 Dec 1999 15:33:27 -0500, Paul Jarc <p...@po.cwru.edu> wrote:
> >The standard allows
> >use of extra precision for intermediate values, but doesn't require
> >that extra precision be used consistently.
>
> Just out of curiosity, how do other languages handle this sort of thing?
> > Do they have everything defined precisely, or do they not bother with the
> detail the C spec goes into (and leaves off)?

There is Java...

In principle, all floating point operations are done exactly with the
precision that is specified. If you have double a, b, c; and calculate a *
b + c then the compiler will calculate a*b, correctly rounded to double
precision, and add c, again correctly rounded to double precision.

Then came complaints that x86 processors don't like this at all. It seems
that implementing floating point exactly according to the Java language
spec is quite difficult on an x86 chip, and therefore the code runs
significantly slower. So the Java language spec was changed. The
programmer can now specify whether they want the original strict behavior
(via the strictfp modifier) or the looser default, similar to C + IEEE.

Dann Corbit

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to
"Christian Bau" <christ...@isltd.insignia.com> wrote in message
news:christian.bau-2...@christian-mac.isltd.insignia.com...

Java's floating point specification is an unmitigated disaster. See
Professor Kahan's railings against it for reference.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm
