I've read several of Kahan's (http://www.cs.berkeley.edu/~wkahan) online
papers, and understand that the primary reason for defining positive
and negative zero involves branch cuts in the complex plane. Kahan
writes that he would like to preserve the complex identities
sqrt(conjugate(z)) = conjugate(sqrt(z))
log(conjugate(z)) = conjugate(log(z))
(where conjugate(a+bi) is a-bi). This pays off in a case where you
wish to find that
sqrt(-1 + (+0)i) = i
sqrt(-1 + (-0)i) = -i
or
log(-1 + (+0)i) = pi * i
log(-1 + (-0)i) = -pi * i
and Kahan gives an example where these distinctions matter.
I disagree with the decision to have a negative zero distinct from
some other zero on two grounds. First, while preserving some
identities in the complex plane, this decision invalidates as many
common sense identities - for example,
x+a = y+a => x = y
is false: x and y may be negative and positive zero. The
disappearance of these and other identities makes life more difficult
for compiler writers than it already is, on top of the failure of
associativity in floating point. It also confounds all but the most
expert programmer, and makes safe-seeming code like
if (x == 0.)
exit(-1); /* divide by zero error! */
else
z = y/x;
act other than expected.
Second, those who are worried about branch cuts do not need negative
zero - they need something stronger that they can develop themselves -
infinitesimals. Most of these applications rely on a
not-machine-supported complex number representation - that is, the
addition of an element "i" defined by i*i=-1. So, why not add another
"programmed in" element "w" defined by w*w=0, representing a quantity
less than any real number. Then the rules
sqrt(-1 + wi) = i
sqrt(-1 - wi) = -i
and
log(-1 + wi) = pi * i
log(-1 - wi) = -pi * i
can be programmed into your complex-infinitesimal math library.
Besides offering a less subtle representation of quantities a little
greater than zero and a little less than zero, you can preserve common
sense identities.
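A hypothetical sketch of what such a "programmed in" w might look like; the type and function names here are invented for illustration, and this is essentially dual-number arithmetic:

```c
/* A value a + b*w, where w*w = 0 and 0 < w < every positive real.
   Representation and names are illustrative only. */
typedef struct { double real; double w; } wnum;   /* a + b*w */

wnum wnum_add(wnum x, wnum y) {
    wnum r = { x.real + y.real, x.w + y.w };
    return r;
}

wnum wnum_mul(wnum x, wnum y) {
    /* (a + bw)(c + dw) = ac + (ad + bc)w, because w*w = 0 */
    wnum r = { x.real * y.real, x.real * y.w + x.w * y.real };
    return r;
}
```

A library along these lines could then implement sqrt and log on wnum-valued complex numbers with the branch-cut rules above.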
But what about the infinities, you ask? In IEEE754, 1/(+0) =
+infinity, and 1/(-0) = -infinity. So, if 0 has no sign, what is 1/0?
How about just "infinity"? Do we need two of them? If you do, you
can get them with infinitesimals.
1/w = infinity - w
-1/w = infinity + w
But if we defined just one infinity, and asserted that x op infinity
is true for op in {<, >, <=, >=}, wouldn't most people get along just
fine? Kahan has written that the single infinity/single zero model is
consistent, just unsuited to some needs.
I'd just like someone to help me appreciate the negative zero more
than I do now. Certainly, my proposals above notwithstanding, it's a
convention that isn't going away anytime soon.
Doug Moore
(do...@caam.rice.edu)
>I've always been bothered by the extramathematical elements added to
>the real numbers in IEEE754 floating point, and in particular the
>distinction between positive and negative zero.
Infinities and NaNs were included in IEEE 754 to provide closure under
all arithmetic operations; given any two operands an IEEE 754
operation produces a floating point value. There are two zeros
because there are two infinities.
>I disagree with the decision to have a negative zero distinct from
>some other zero on two grounds.
The distinction between +0 and -0 is rarely observable. For example,
+0 == -0 is true. Mostly, the sign of zero determines the sign of
the result if the result is zero. No non-exceptional IEEE arithmetic
operation can be used to distinguish between +0 and -0. 1.0/+0.0 is
+infinity and 1.0/-0.0 is -infinity, but either division raises the
divide by zero flag, so that route is exceptional. The IEEE
recommended function copysign can also be used to find the sign of
zero.
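A sketch of the three routes just described, assuming a C99 compiler on IEEE 754 hardware (function names invented here):

```c
#include <math.h>

/* +0.0 == -0.0 under the IEEE equality relation. */
int zeros_compare_equal(void) {
    return 0.0 == -0.0;                      /* 1 */
}

/* copysign(1.0, zero) recovers the sign bit non-exceptionally. */
int sign_of(double zero) {
    return copysign(1.0, zero) < 0.0 ? -1 : +1;
}

/* 1/+0 = +infinity, 1/-0 = -infinity; either division raises the
   divide-by-zero flag, so this route is exceptional. */
int reciprocal_sign(double zero) {
    return 1.0 / zero < 0.0 ? -1 : +1;
}
```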
> First, while preserving some
>identities in the complex plane, this decision invalidates as many
>common sense identities - for example,
>x+a = y+a => x = y
This identity is violated in floating point arithmetic in many other
ways, such as by the addition overflowing or the term "a" being
infinity. Even in non-IEEE floating point this identity fails due to
roundoff:
(1.0 + MAX_FLOAT) == (2.0 + MAX_FLOAT) but 1.0 != 2.0.
If flush to zero is used instead of gradual underflow, the identity
fails as well. Whether or not this identity is violated by +/-0
depends on how "equality" is defined. If "=" is defined by bit-wise
equality then +/-0 will make a difference. If "=" is the IEEE 754
equality relation, then the signed zero does not cause this identity
to fail.
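The two notions of "=" are easy to set side by side (a sketch; function names invented here):

```c
#include <string.h>

/* The IEEE 754 equality relation says the two zeros are equal;
   a bit-wise comparison sees their different sign bits. */
int ieee_equal(double a, double b)    { return a == b; }
int bitwise_equal(double a, double b) { return memcmp(&a, &b, sizeof a) == 0; }
```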
>disappearance of these and other identities make life more difficult
>for compiler writers than it already is, with the failure of
>associativity in floating point.
On the contrary, some would argue that compiler writers (perhaps
inadvertently) make life difficult for programmers wishing to rely on
floating point semantics. Floating point arithmetic is inherently an
approximation; it does not follow the same rules as infinitely precise
real arithmetic. Ignoring signaling NaNs and treating all NaN values
as equivalent, IEEE floating point addition and multiplication are
commutative and x*1.0 is the same as x. None of the other field
axioms hold for floating point; they fail to hold due to the limited
range and precision of floating point computation and/or operating on
IEEE special values. Optimizing compilers will often change floating
point expressions in non-equivalence preserving ways. Some languages,
like Fortran 77, explicitly license such behavior. Other times the
user specifically sanctions that certain rules can be broken. For
example, when Sun's cc compiler is run with the flag "-fsimple=2", x/y
in a loop can be replaced by x*z where z = 1/y and y is loop
invariant. While faster, x*z will in general not give the same result
as x/y since computing x*z has two rounding errors and x/y only has
one.
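The extra rounding error is observable. This sketch (names invented here) counts, for a loop-invariant divisor y, how often multiplying by the precomputed reciprocal rounds differently from dividing:

```c
/* Count x in 1..1000 where x*(1.0/y) rounds differently from x/y.
   For y = 3.0 the reciprocal is inexact and some quotients differ;
   for y = 2.0 the reciprocal is exact and none do. */
int reciprocal_mismatches(double y) {
    volatile double vy = y;        /* discourage re-fusing the division */
    double z = 1.0 / vy;           /* first rounding error */
    int count = 0;
    for (int x = 1; x <= 1000; x++)
        if ((double)x * z != (double)x / vy)  /* two roundings vs. one */
            count++;
    return count;
}
```

For example, x = 5, y = 3 already differs: 5*(1.0/3.0) rounds below the correctly rounded 5.0/3.0.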
The essence of programming is control. Seemingly minor floating point
details matter to some programs and some programmers. These
programmers need a way for their wishes to be respected without their
efforts being thwarted by an over-zealous (or buggy) compiler.
Programming languages exacerbate the problem by not including IEEE 754
semantics.
> It also confounds all but the most
>expert programmer, and makes safe-seeming code like
>if (x == 0.)
> exit(-1); /* divide by zero error! */
>else
> z = y/x;
Since +0 == -0, this code works as expected. However, it is true that
floating point can often surprise many programmers. The frequency of
surprise can often be reduced by calculating with more precision than
the input data; see http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf
for further discussion.
[snip]
>But if we defined just one infinity, and asserted that x op infinity
>is true for op in {<, >, <=, >=}, wouldn't most people get along just
>fine? Kahan has written that the single infinity/single zero model is
>consistent, just unsuited to some needs.
As you imply, there are two ways to add infinities to the real
numbers; the affine way with positive and negative infinity and the
projective way with a single infinity. IEEE 754 supports affine
infinities. The floating point of the 8087, a precursor to IEEE 754,
has modes supporting both affine and projective infinity. In
projective mode, the 8087 ignores the sign of infinity and the sign of
zero. (Over Kahan's objections, the IEEE 754 committee voted to drop
projective infinity to simplify the standard. The 8087 documentation
recommends projective infinity as the default mode.) A projective
infinity is "unordered" with respect to real floating point values;
that is (infinity {<, >, <=, >=, ==} x) is false (except for infinity
== infinity which is true). The comparisons are unordered precisely
because the infinity is unsigned.
For some computations, using affine or projective infinity is more
convenient.
-Joe Darcy
da...@cs.berkeley.edu
> I disagree with the decision to have a negative zero distinct from
> some other zero on two grounds. First, while preserving some
> identities in the complex plane, this decision invalidates as many
> common sense identities - for example,
>
> x+a = y+a => x = y
>
> is false. x and y may be negative and positive zero. The
If x and y were negative and positive zero, the identity you allude to
was never there, anyway. A positive zero is some very small number
that you arrived at from above and the negative zero is some very
small number that you arrived at from below. Just how small you can't
determine, because you ran out of exponent and mantissa bits and an
exact zero is nonexistent in physical reality. The identity above
doesn't hold for denormals in general, of which zeros are a special
case. It can also be falsely true when x+a, y+a and x, y have
different exponents, because some bits drop off during normalization.
x + max_normal = y + max_normal
is true for all x, y with |x|, |y| < max_normal/1e17, but that doesn't
imply x = y at all. Real numbers are distributed equally densely
everywhere; FP numbers are not. No equality that assumes continuous
space is ever going to work in FP.
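The absorption effect described above can be sketched in a few lines (a toy demonstration; the function name is invented here):

```c
#include <float.h>

/* Adding a huge a absorbs small x and y completely, so
   x + a == y + a holds without x == y. */
int absorption_example(void) {
    double a = DBL_MAX;           /* ulp(a) is astronomically large */
    double x = 1.0, y = 2.0;
    return (x + a == y + a) && (x != y);   /* 1 */
}
```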
> disappearance of these and other identities make life more difficult
> for compiler writers than it already is, with the failure of
> associativity in floating point. It also confounds all but the most
> expert programmer, and makes safe-seeming code like
>
> if (x == 0.)
> exit(-1); /* divide by zero error! */
> else
> z = y/x;
>
> act other than expected.
This code should be a reason to fire any programmer for incompetence.
Never ever test a floating point number for exact equality because the
test becomes nearly useless (this is another flaw in your first
example as well). Do your error analysis and find out what the
epsilon is supposed to be. My other comment is that you make the
common case slow while not helping with the divide by zero at all.
There are other ways the divide could fail (producing a denormal or
infinity) that you don't check, so all you get is a false sense of
safety.
> I'd just like someone to help me appreciate the negative zero more
> than I do now. Certainly, my proposals above notwithstanding, it's a
> convention that isn't going away anytime soon.
Well, you have made your stance very clear: you simply don't like them
because you don't see a need for them. Kahan's examples are as good as
any to explain why some people differ from your opinion.
A good introduction can be found on the web:
Go to Programming Languages -> Floating Point and Common Tools
Answerbook -> Numerical Computation Guide. The URL itself is rather
lengthy, so it may break before you get to see it:
Achim Gratz.
--+<[ It's the small pleasures that make life so miserable. ]>+--
WWW: http://www.inf.tu-dresden.de/~ag7/{english/}
E-Mail: gr...@ite.inf.tu-dresden.de
Phone: +49 351 463 - 8325
However, it IS an error. The mistake was to use +0 for both positive
zero and sign-indeterminate zero. This causes a huge number of very
subtle problems, such as -(x-y) and y-x not being equivalent.
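The -(x-y) versus y-x discrepancy can be made concrete (a sketch assuming IEEE 754 round-to-nearest; the function name is invented here):

```c
#include <math.h>

/* When x == y, x - y is +0 in round-to-nearest, so -(x - y) is -0
   while y - x is +0: equal under ==, but with different sign bits. */
int negation_flips_zero_sign(double x, double y) {
    double a = -(x - y);
    double b = y - x;
    return (a == b) && (signbit(a) != signbit(b));
}
```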
>>I disagree with the decision to have a negative zero distinct from
>>some other zero on two grounds.
>
>The distinction between +0 and -0 is rarely observable. ...
You haven't been following the C9X draft standard, have you? The
difference not merely becomes observable - it becomes unavoidable.
And this is PRECISELY because C9X is using signed zeroes to distinguish
between sides of branch cuts.
Nick Maclaren,
University of Cambridge Computing Service,
New Museums Site, Pembroke Street, Cambridge CB2 3QG, England.
Email: nm...@cam.ac.uk
Tel.: +44 1223 334761 Fax: +44 1223 334679
Nonsense. You have misunderstood the purpose of that test. It is
simply to shield the operation from the result of an extreme value.
Provided that the code is designed to work for any value, subject only
to there not actually being a division error, then that test is fine.
Your remark WOULD be justifiable if the code had been:
if (x == y)
exit(-1);
else
z = 1/(x-y);
Because x != y does not imply that x-y is non-zero, in any floating-point
arithmetic in normal use.
>do...@caam.rice.edu (Doug Moore) writes:
>was never there, anyway. A positive zero is some very small number
>that you arrived at from above and the negative zero is some very
>small number that you arrived at from below. Just how small you can't
>determine because you ran out of exponent and mantissa bits and an
Knowing the value of a number doesn't tell you anything about its
accuracy or its history, how it was calculated. For example,
1/infinity and 1.0 - 1.0 equal 0 exactly. Other floating point
computations can also be exact (sometimes by design).
>> if (x == 0.)
>> exit(-1); /* divide by zero error! */
>> else
>> z = y/x;
>>
>> act other than expected.
>This code should be a reason to fire any programmer for incompetence.
>Never ever test a floating point number for exact equality because the
>test becomes nearly useless (this is another flaw in your first
>example as well).
Occasionally, testing floating point numbers for exact equality is
appropriate. For example, the sinc function:
if(x == 0)
return 1.0;
else
return sin(x)/x;
Finite non-zero values of x shouldn't cause a problem.
> My other comment is that you make the
>common case slow while not helping with the divide by zero at all.
The IEEE 754 exception handling features (flags and traps) are
designed help deal with these sorts of concerns. Unfortunately these
features are not well supported in current programming languages and
compilers.
-Joe Darcy
da...@cs.berkeley.edu
>In article <darcy.900653823@usul>,
>Joseph D. Darcy <da...@usul.CS.Berkeley.EDU> wrote:
>>do...@caam.rice.edu (Doug Moore) writes:
>>The distinction between +0 and -0 is rarely observable. ...
>You haven't been following the C9X draft standard, have you? The
>difference not merely becomes observable - it becomes unavoidable.
>And this is PRECISELY because C9X is using signed zeroes to distinguish
>between sides of branch cuts.
C9X uses Kahan's scheme for complex numbers and complex trigonometric
functions. This scheme knowingly exploits signed zero to deal with
branch cuts. Obviously, since this handling of branch cuts by design
relies on the sign of zero, proper language/compiler preservation of
the sign of zero matters and is quite visible. However, if
programmers aren't aware of or aren't trying to use signed zero, they
should be able to remain oblivious to it.
-Joe Darcy
da...@cs.berkeley.edu
I don't think that you have thought the matter through. The changes
DON'T affect just complex numbers, and have some really horrible
ramifications. See a large number of UK comments for some of them.
There are others that I have noticed since, but too late to comment
on for the previous draft.
Look, the point isn't whether a scheme works in the obvious cases;
it is whether it contains any serious traps in the unobvious ones.
For example, in the current C9X draft, atan2(+-0.0,+-0.0) returns
FOUR possible values, depending solely on the signs - and, far, far
worse is FORBIDDEN from flagging an exception.
>> if (x == 0.)
>> exit(-1); /* divide by zero error! */
>> else
>> z = y/x;
>>
>> act other than expected.
> This code should be a reason to fire any programmer for incompetence.
> Never ever test a floating point number for exact equality because the
I disagree. For instance, you may use floating-point to do exact integer
operations (because on some (many?) processors, the integer unit isn't
fast enough, in particular concerning the division). BTW, in this case
(integer operations), a negative zero has no meaning.
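The exact-integer use of floating point rests on the fact that doubles represent every integer of magnitude up to 2^53 exactly; a quick sketch (function name invented here):

```c
/* Integer add/subtract/multiply, and quotients that are exact,
   incur no rounding error in double -- until 2^53 is exceeded. */
int fp_integer_arithmetic_is_exact(void) {
    double two53 = 9007199254740992.0;          /* 2^53 */
    return (two53 - 1.0) + 1.0 == two53         /* exact below 2^53 */
        && 3.0 * 7.0 == 21.0
        && 21.0 / 7.0 == 3.0
        && two53 + 1.0 == two53;                /* 2^53 + 1 is NOT exact */
}
```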
--
Vincent Lefevre <Vincent...@ens-lyon.fr> - PhD stud. in Computer Science
Web: http://www.ens-lyon.fr/~vlefevre/ - 100% validated HTML - Acorn Risc PC,
Yellow Pig 17, Championnat International des Jeux Mathematiques et Logiques,
TETRHEX, Faits divers insolites, etc...
[snip]
)I disagree with the decision to have a negative zero distinct from
)some other zero on two grounds. First, while preserving some
)identities in the complex plane, this decision invalidates as many
)common sense identities - for example,
)
)x+a = y+a => x = y
)
)is false. x and y may be negative and positive zero. The
This is false, anyway, for any x and y which, when added to a, do not
change it.
)disappearance of these and other identities make life more difficult
)for compiler writers than it already is, with the failure of
)associativity in floating point. It also confounds all but the most
)expert programmer, and makes safe-seeming code like
)
)if (x == 0.)
) exit(-1); /* divide by zero error! */
)else
) z = y/x;
)
)act other than expected.
This is true anyway. One can't just test for x == 0.0 and hope to
accomplish anything. One needs code like
if (fabs(X) < Epsilon*fabs(Y))
exit(-1);
I once had a problem with (simple code to illustrate problem which
occurred in FORTRAN)
if (X-Y == 0.0)
Z = 1.0;
else
Z = P/(X-Y);
This resulted in division by zero. What happened was that when Y was
subtracted from X, the (denormalized) result was non-zero; but when P
was loaded, it forced normalization before the division took place,
and the denormalized divisor flushed to zero.
)But what about the infinities, you ask? In IEEE754, 1/(+0) =
)+infinity, and 1/(-0) = -infinity. So, if 0 has no sign, what is 1/0?
)How about just "infinity"? Do we need two of them? If you do, you
)can get them with infinitesimals.
You're just complaining because they used the two point compactification
of the reals rather than the one point compactification.
)1/w = infinity - w
)-1/w = infinity + w
)
)But if we defined just one infinity, and asserted that x op infinity
)is true for op in {<, >, <=, >=}, wouldn't most people get along just
)fine? Kahan has written that the single infinity/single zero model is
)consistent, just unsuited to some needs.
)
)I'd just like someone to help me appreciate the negative zero more
)than I do now. Certainly, my proposals above notwithstanding, it's a
)convention that isn't going away anytime soon.
You ought to try doing arithmetic on a few other (older) architectures
to appreciate the fact that we have a reasonable standard at all. Try
using the weirdo BCD used on some of the old IBM machines, with digits
stored in 4 bit nybbles, and the sign stored in the MSb of the LSN.
Mike
--
----
char *p="char *p=%c%s%c;main(){printf(p,34,p,34);}";main(){printf(p,34,p,34);}
This message made from 100% recycled bits.
I don't speak for DSC. <- They make me say that.
> Nonsense. You have misunderstood the purpose of that test. It is
> simply to shield the operation from the result of an extreme value.
> Provided that the code is design to work for any value, subject only
> to there not actually being a division error, then that test is fine.
No it's not. The division still can give any number of exceptional
results including the infinity that the programmer apparently was
trying to guard against. And then again, if the intention is to exit
the program on such an error, enable traps and don't slow down the
common case. I have to admit that I have a chip on my shoulder with
regards to such code, having once waded through thousands of LOC to
remove bogus tests and correcting unjustifiable assumptions. Once I
did this, the program was three times faster and still didn't produce
any more nonsense results than before. After introduction of the
proper bias, scaling and epsilon it actually started to behave.
> Occasionally, testing floating point numbers for exact equality is
> appropriate. For example, the sinc function:
>
> if(x == 0)
> return 1.0;
> else
> return sin(x)/x;
>
> Finite non-zero values of x shouldn't cause a problem.
While I agree with the sentiment that there are exceptions to the
rules, the particular example is contrived. Computing the function
as shown has rather undesirable error properties, not to mention that
it's utterly inefficient. So yes, the removal of discontinuities can
justify testing for exact equality, but that is still somewhat
dangerous. Here's an example to think about:
       { 2   if x = pi
f(x) = {
       { 1   elsewhere
Would you implement it as:
#include <math.h>
double f (double x) {
return (x == M_PI) ? 2.0 : 1.0;
}
or rather
double f (double x) {
return 1.0;
}
(Hint: I could have used a value that was less obviously not
representable as an FP number.)
So, to go to the full formulation of said rule: never ever test FP
numbers for exact equality, unless you can prove that both numbers
being compared a) are representable and b) didn't accumulate any
rounding errors. Condition b) rules out a large percentage of computed
results and condition a) needs careful consideration even for
constants. Given that the finer points of rules tend to get ignored
anyway and exceptions created at will, I'm perfectly content with the
shortened (and more pointy) formulation I gave earlier, if only
because it can be remembered more easily.
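Both conditions of the rule can be illustrated in two lines (a sketch; function names invented here):

```c
/* Dyadic (binary) fractions are representable and these sums are
   exact, so the equality is safe; decimal 0.1, 0.2, 0.3 are not
   representable, so the "obvious" equality fails. */
int representable_sum_exact(void) { return 0.5 + 0.25 == 0.75; }
int decimal_sum_exact(void)       { return 0.1 + 0.2 == 0.3; }
```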
Yes, doing the checking thoroughly is better, but code like that is
NOT grounds for sacking for incompetence (as was claimed), nor even
necessarily bad practice. You may have had trouble with one small
program - I have used, debugged and maintained hundreds of thousands
(and handled millions) of lines of code with that construction in.
Your remark about traps is particularly off-beam, as they are not
available in most standard languages.
Doug> x+a = y+a => x = y
Doug> is false. x and y may be negative and positive zero. The
Doug> disappearance of these and other identities make life more difficult
Doug> for compiler writers than it already is, with the failure of
Doug> associativity in floating point. It also confounds all but the most
Doug> expert programmer, and makes safe-seeming code like
Doug> if (x == 0.)
Doug> exit(-1); /* divide by zero error! */
Doug> else
Doug> z = y/x;
Doug> act other than expected.
I believe the IEEE 754 standard also requires that -0.0 == 0.0, so both
of your examples above work as expected: x = y, and the if statement
does as intended. If you need to distinguish between the two signed
zeroes, you're supposed to check the sign bits.
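The sign-bit check mentioned here can be sketched with the C99 signbit macro (the function name is invented for illustration):

```c
#include <math.h>

/* -0.0 == 0.0 holds, so distinguishing the zeros requires looking
   at the sign bit via signbit (or copysign). */
int is_negative_zero(double x) {
    return x == 0.0 && signbit(x) != 0;
}
```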
Ray