if (x == y) {...}
The smart engineer's way is
if (fabs(x - y) < eps) {...}
where eps is a small number, carefully chosen by the aforementioned smart
engineer based on his/her accuracy requirements and error-accumulation
estimates.
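In C++, that idea might be sketched like this (the function name and the default eps are illustrative only, not something the thread prescribes):

```cpp
#include <cmath>

// Absolute-tolerance comparison. The default eps here is purely
// illustrative; a real program must derive eps from its own
// accuracy requirements and error-accumulation estimates.
inline bool nearlyEqual(double x, double y, double eps = 1e-9)
{
    return std::fabs(x - y) < eps;
}
```

For values far from 1.0, a relative tolerance such as eps * std::max(std::fabs(x), std::fabs(y)) usually serves better than a fixed absolute one.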
--
With best wishes,
Igor Tandetnik
"ming lu" <m...@argus.ca> wrote in message
news:#CZiFwFSBHA.2132@tkmsftngp05...
>
> What is the C++ way to determine whether two doubles are equal?
>
--
Truth,
James Curran
www.NJTheater.com (Professional)
www.NovelTheory.com (Personal)
www.BrandsForLess.Com (Day Job)
"Igor Tandetnik" <itand...@whenu.com> wrote in message
news:OQZlm4FSBHA.1892@tkmsftngp05...
Which determines whether they are close enough, not whether they are
equal.
--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)
That's exactly the difference between an engineer and a mathematician. A
mathematician wants a mathematically exact answer, an engineer is perfectly
content with one that is "close enough". Floating point calculations tend to
force one into "engineer mode".
The problem is that floating point arithmetic on a computer is NOT
mathematics. It is the result of engineering. These numbers are
subject to truncation of significant digits and roundoff errors. As a
result, it is difficult to imagine a valid and realistic situation
where you want to compare two doubles (or floats) to see whether they
match exactly.
Not necessarily. The reason I'm making such a fuss over this is that
it's too easy for people to decide that floating point results are
somewhat random, and to not bother learning how they actually work. Of
course, that's neither mathematics nor engineering.
Exactly my point.
You don't know how they actually work, not in a portable way. The standard
does not mandate the internal representation of the floating point
types, nor does it specify the exact semantics of math operations. I mean,
all it has to say about multiplication is "The binary * operator indicates
multiplication." That's it. I would have expected something like "operator *
produces the value representable in the target floating point type that is
closest to the mathematically exact product of the two operands," or something
like that.
Java mandates that floating point operations are performed according to
IEEE-whatever standard (don't remember the number off the top of my head).
There the results are somewhat predictable. This IEEE standard unambiguously
describes the result of any operation on floating point numbers. The C++
standard has no such thing. I don't say that is bad - it allows non-IEEE
hardware to be used efficiently. But the only thing you know is that x*y gives
a result that is somewhere close to the product of x and y. You can only
hope that it is close enough for your algorithms to converge.
Of course, if you program for a particular platform, you are free to use
your knowledge of exactly how floating point math is performed on that
platform and C++ implementation.
True, but it has nothing to do with what I said, which is that treating
floating point results as somewhat random is neither mathematics nor
engineering.
> Java mandates that floating point operations are performed according to
> IEEE-whatever standard (don't remember the number off the top of my head).
IEEE-754. And, of course, since Java doesn't permit 80-bit math, there's
a significant performance penalty implementing Java on Intel hardware.
I can but point out that MSVC doesn't support 80-bit math either (long
double is the same as double, for reasons that escape me). So you incur the
same penalty doing floating point math on Intel hardware using the most
popular C++ compiler for the Windows platform.
I'm no expert in this field. But I think the standard says quite a lot about how
a type is supposed to be implemented. It does so with numeric_limits and
explicit references to various standards and other documents. Especially the C
standard and ISO/IEC 10967-1, Language Independent Arithmetic - Part 1 (aka
LIA-1). Also, IEC-559/IEEE-754 compliance or lack of compliance is explicit.
> result that is somewhere close to the product of x and y. You can only
> hope that it is close enough for your algorithms to converge.
I think you can actually check it.
> Of course if you program for a particular platform, you are free to utilize
> your knowledge of the details of how exactly the floating point math is
> performed on this platform and C++ implementation.
I see it differently: if your program needs a given floating point behaviour you
can check and make sure it is provided. If it is not, you can take whatever
action you see fit (including refusing to compile). If a platform cannot provide
IEEE-754 then it cannot provide it. It can still support C++. It may not be able
to support your program.
--
Andrea Ferro
---------
Brainbench C++ Master. Scored higher than 97% of previous takers
Scores: Overall 4.46, Conceptual 5.0, Problem-Solving 5.0
More info http://www.brainbench.com/transcript.jsp?pid=2522556
The only time IEC-559/IEEE-754 is mentioned in the C++ standard is in the
description of numeric_limits::is_iec559, which suggests non-IEEE floating
point types are allowed. ISO/IEC 10967-1 is mentioned in the description of
numeric_limits::round_error(). The C++ standard defers to the C standard only
in the standard library section. The behavior of fundamental types is described
independently. So the C standard is irrelevant to the discussion.
Well, I agree with you that the program can check is_iec559 and
round_error() and refuse to work if they are not satisfactory.
No. It is referred to for numeric_limits::is_iec559,
numeric_limits::has_denorm_loss and numeric_limits::tinyness_before. However, if
you need that standard then numeric_limits::is_iec559 is the most important.
> ISO/IEC 10967-1 is mentioned in the description of
> numeric_limits::round_error(). The C++ standard defers to the C standard only
> in the standard library section. The behavior of fundamental types is described
> independently. So the C standard is irrelevant to the discussion.
Nope. See the notes to numeric_limits. Practically all members of numeric_limits
have a note that refers to a macro that is in turn defined in the C standard, or
that refers to ISO/IEC 10967-1 (called LIA-1 in the notes). Counting those
whose notes refer to IEC-559, there are only four members of numeric_limits that
have no notes referring to any other standard: is_integer, is_exact, is_signed
and has_infinity. They are defined entirely within the C++ standard.
> Well, I agree with you that the program can check is_iec559 and
> round_error() and refuse to work if they are not satisfactory.
And it could check things like float_round_style or float_denorm_style. And any
other support it needs (signaling or silent NaNs, range of values, precisions
etc.).
Yes. The standard does not FORCE a given level of support (a la Java). But it
lets you check whether it is there. It is assumed that if iec559 is supportable,
it is supported. It is assumed that if a given precision is obtainable, it is
available.
This makes C++ portable as a language, and C++ programs portable (among
compliant platforms) as far as the platform can support them.
I may like to program in C++ targeting embedded platforms. I'd be rather
disappointed if the C++ compiler for my smart toaster had to support advanced
FP math. And I would not be at all surprised if OpenGL did not compile for it.
There's a subtle distinction here: C++ permits higher precision to be
used internally, so compilers that use 64 bits to represent double
values still use 80-bit math for the actual computations (since that's
what the chip does), then chop the result back to 64 bits for storage.
Java doesn't permit higher precision, but insists that 64-bit math be
done with exactly 64 bits (this may have been changed recently, I
haven't kept up on the details). You can set the Intel processors to do
that, but it slows the chip way down.
No, there's very little that the standard requires with respect to
floating point math.
> It does so with numeric_limits
numeric_limits can be used to ask the implementation what it has done. It
imposes almost no requirements, however.
> and an
> explicit reference to various standards and other documents. Especially the C
> standard and ISO/IEC 10967-1 Language independent arithmetic- part 1 (aka
> LIA-1). Also IEC-559/IEEE-754 compliance or lack of compliance is explicit.
Yes, you can ask if IEEE-754 is supported. But there's very little you
can do with that information, since there's no way in C++ to actually
get at the innards that IEEE-754 requires. You can't set rounding modes,
query sticky bits, set trap handlers, etc. So what you see is what you
get.
> I see it differently: if your program needs a given floating point behaviour you
> can check and make sure it is provided. If it is not, you can take whatever
> action you see fit (including refusing to compile). If a platform cannot provide
> IEEE-754 then it cannot provide it. It can still support C++. It may not be able
> to support your program.
Right: you can check whether some assumptions that you've made are valid
on a particular implementation.
Yep, you are right on all counts. Touché! Thank you, I learned something new
today.
But just as a cautionary note, while you can get a value that's a quiet
NaN, you can't portably test whether the result of some computation is a
quiet NaN. There's no requirement in IEEE-754 that there be a unique
value for a quiet NaN (and on Intel architecture there's a whole bunch
of them), so you can't compare bits with a known NaN value, and no
requirement in C++ that a compiler can't rearrange a test like a!=a into
something like !(a==a). The latter is true even if the implementation
tells you that it implements IEEE-754, because IEEE-754 doesn't (and
can't) say anything about what != does in C++.
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BB9F455...@acm.org...
I'm no expert, but I've always been convinced that IEEE-754 does say something
about what relational operators do on compliant platforms (note, however, that
!= is not an issue here).
Specifically, I think it says that on compliant platforms (I mean IEEE-754
compliant, of course) the following table holds if either a or b is a NaN (not
sure what it says if they are both NaN but different, but I think the same table
should hold):
a == b   false
a != b   true
a < b    false
a > b    false
Therefore it says, for example, that a<b cannot be transformed to !(a>b ||
a==b).
This has a strange implication: support for IEEE-754 cannot be added to a
compiler merely by redefining the floating point library, because the
optimizer must be aware of this behaviour of NaNs. Also, a language cannot
support IEEE-754 if it explicitly says that a<b can be transformed to !(a>b ||
a==b). However, the C++ standard does not say that.
In other words, at least as long as IEEE-754 is fully supported, the following
should be portable:
inline bool isNaN( double v )
{
    return v != 0. && !(v < 0.) && !(v > 0.);
}
Of course many implementations will have some sort of extension, like MS's
_isnan, perhaps even an intrinsic. But extensions, by definition, are platform
dependent.
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BB9F739...@acm.org...
The point is that there are few situations where you actually need some specific
FP behaviour. And you can check with a certain degree of granularity whether it
is there. Therefore you would not get unexpected behaviour. You may be unable to
compile your code on a specific platform (or implementation). But even if it is
not imposed by the standard, it is reasonable to expect that an implementation
(C++ compiler) for a platform where such code makes sense will support what you
need.
> > and an
> > explicit reference to various standards and other documents. Especially the C
> > standard and ISO/IEC 10967-1 Language Independent Arithmetic - Part 1 (aka
> > LIA-1). Also IEC-559/IEEE-754 compliance or lack of compliance is explicit.
>
> Yes, you can ask if IEEE-754 is supported. But there's very little you
> can do with that information, since there's no way in C++ to actually
> get at the innards that IEEE-754 requires. You can't set rounding modes,
> query sticky bits, set trap handlers, etc. So what you see is what you
> get.
I never intended it to mean you can support IEEE-754 completely. But I do not
even think that such extreme support and manipulation is needed in a program
that is not platform-dependent for some other reason. A good percentage of
programs need not check FP support at all. And when it is needed, precision and
error propagation are the most important issues. Being able to predict the
stability of a numeric algorithm, or to compute and minimize rounding errors,
is the big issue. Very seldom do you actually need to SET the rounding mode and
cannot get away with just checking it and behaving accordingly.
After all, I do not think C++ was intended to be comparable to, say, Fortran
for extreme math.
And more support for math (as well as many other aspects of programming) will
probably come in when the language standard reaches its next revision (as has
happened for C).
Yes, but it doesn't say what the spellings of those relational operators
are in any particular language. In addition to knowing that a C++
implementation supports IEEE-754 math, you have to know what != means in
terms of the IEEE-754 relational operators. That's what's known as a
"binding" to the IEEE-754 standard. C++ doesn't have a standard binding.
Most folks guess at what the binding ought to be, and their guess is
usually what the compiler actually does. But nothing in either standard
imposes the "obvious" binding. Indeed, in some cases the obvious binding
isn't so obvious.
To put it a little less formally, there is nothing in C++ that tells you
whether '!=' in C++ is true when one or both of its operands is NaN.
Both are legitimate; IEEE-754 (if I remember the notation correctly)
uses 'a <> b' to represent 'true if a is less than b or if a is greater
than b' and 'a <>? b' to represent 'true if a is less than b or if a is
greater than b or if a and b are unordered' (two values are unordered if
either one is NaN). So, does '!=' in C++ mean IEEE-754's '<>' or '<>?'?
Nothing in either standard gives an answer.
Error propagation is done primarily through NaN values. You cannot
portably recognize a NaN in C++, so error propagation is inherently
non-portable.
> Being able to predict stability of a
> numeric algorithm or to compute and optimize rounding errors is the big issue.
> Very seldom you actually need to SET the rounding mode and cannot get away with
> just checking it and behave accordingly.
Writing code that is robust under all four IEEE-754 rounding modes is
far more difficult than writing code that is robust under a single mode.
The rounding mode affects, for example, whether some computations
overflow. And the default mode (round to nearest) may not be the "right"
one for some computation; minimizing rounding errors often requires
changing rounding modes for particular operations.
>
> After all i do not think C++ was intended to be comparable to, say, fortran for
> extreme math.
Of course not. It pretty much adopted what C89 had, which was largely
unspecified, in order to leave flexibility for implementors.
Now that I've checked IEEE-754, the correct notation is 'a ?<> b'.
But despite the wording, would it be valid to assume that the obvious binding
holds for < and for > ?
I mean, for non-NaN values I think that < and > must relate, in both IEEE-754
and C++, to the check for the order condition. Therefore I do not think it
would be valid for a C++ implementation that supports NaNs (and specifically
IEEE-754) to return anything but false if either operand is a NaN.
This should make the "!(a<b) && !(a>b)" subexpression work as expected.
Now, it is true that C++ != may not bind to either <> or <>? specifically.
Therefore there's no portable way to completely relate two values if one is (or
may be) a NaN. However, I think that operator == should work as expected if we
know that at least one of the two values is NOT a NaN.
Therefore I think the C++ standard should suffice to ensure:
val == val -> value dependent
val == NaN -> false
NaN == NaN -> unspecified
val < val -> value dependent
val < NaN -> false
NaN < val -> false
NaN < NaN -> unspecified
val > val -> value dependent
val > NaN -> false
NaN > val -> false
NaN > NaN -> unspecified
Therefore this revised version of my check should work portably:
inline bool isNaN( double v )
{
    return !(v == 0. || v < 0. || v > 0.);
}
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BBB1028...@acm.org...
You are probably right. In the other subthread you convinced me that != is
unspecified. But see my other post there, and please tell me why the behavior
is unspecified for == and/or < and > too. If it is not, then there is a
portable way of checking this.
> > Being able to predict stability of a
> > numeric algorithm or to compute and optimize rounding errors is the big issue.
> > Very seldom you actually need to SET the rounding mode and cannot get away with
> > just checking it and behave accordingly.
>
> Writing code that is robust under all four IEEE-754 rounding modes is
> far more difficult than writing code that is robust under a single mode.
> The rounding mode affects, for example, whether some computations
> overflow. And the default mode (round to nearest) may not be the "right"
> one for some computation; minimizing rounding errors often requires
> changing rounding modes for particular operations.
Yes. I can see your point. As I told you, I'm no expert. My only previous
experience was in making some stability corrections to expressions that made
sense mathematically but not computationally, because they were too cavalier in
mixing very large and very small values. You know what I mean. The actual
precision needed at the end was on the order of 9 digits of base-10 mantissa
(values in the -10e5 ~ 10e5 range with precisions of 10e-4). Therefore most
times it was no big deal.
> > After all i do not think C++ was intended to be comparable to, say, fortran for
> > extreme math.
>
> Of course not. It pretty much adopted what C89 had, which was largely
> unspecified, in order to leave flexibility for implementors.
I think it will be better in the next standard round (hoping we don't have to
wait 10 years, as happened for C).
The issue isn't what you may or may not assume. It's what's required by
the C++ Standard. There is no requirement that < have any particular
behavior when applied to a NaN value.
> I mean, for non NaN values I think that < and > must relate in both IEEE-754 and
> C++ to the check for order condition. Therefore I do not think it would be valid
> for a C++ implementation that supports NaNs (and specifically IEEE-754) to
> return anything but false if either operand is a NaN.
Where in the C++ standard do you find this requirement? <g> IEEE-754
describes two predicates that could reasonably be called '<' in C++:
a .LT. b (true only if a is less than b)
a .UL. b (true if a is less than b or if a and b are unordered)
There is nothing in either standard that tells you what '<' should be in
an IEEE-754 conforming implementation.
>
> This should make the "!(a<b) && !(a>b)" subexpression work as expected.
>
> Now, it is true that C++ != may not bind either <> or <>? specifically.
> Therefore there's no portable way to completely relate two values if one is (or
> may be) a NaN. However I think that operator == should work as expected if we
> know that at least one of the two values is NOT a NaN.
First, you can't determine whether at least one of the two values is not
a NaN in portable C++. Second, IEEE-754 describes two predicates that
could reasonably be called '==' in C++:
a .EQ. b (true only if a equals b [implicitly, neither a nor b is NaN])
a .UE. b (true if a equals b or if a and b are unordered)
>
> Therefore I think the C++ standard should suffice to ensure:
>
> val == val -> value dependent
> val == NaN -> false
> NaN == NaN -> unspecified
>
> val < val -> value dependent
> val < NaN -> false
> NaN < val -> false
> NaN < NaN -> unspecified
>
> val > val -> value dependent
> val > NaN -> false
> NaN > val -> false
> NaN > NaN -> unspecified
>
> therefore this revised version of my check should work portably:
>
> inline bool isNaN( double v )
> {
>     return !(v==0. || v<0. || v>0.);
> }
>
You've assumed a mapping from C++ symbols to IEEE-754 predicates that is
not specified by either standard. As I said before, it's the one that's
most likely to be used, but that doesn't mean that it's actually used
everywhere, and it doesn't mean that a compiler that does something
different doesn't conform to the two standards.
Neither am I, but I've dug into it a fair amount. It's easy to make
assumptions about how floating point works, based on implicit analogies
with integer math, real number math as we know it, and the "common
wisdom" that all comparisons that involve NaNs are false. All of these
can be misleading.
By the way, I changed notations to the FORTRAN-like notation that
IEEE-754 also uses, in order to make the distinction between these
operations and the symbols that represent them clearer.
The only argument for saying that the obvious mapping is the correct mapping is
that the operator descriptions match.
For example:
> a .LT. b (true only if a is less than b)
> a .UL. b (true if a is less than b or if a and b are unordered)
would match the C++ wording:
"The operators < (less than), > (greater than) ..."
but this is probably a pretty "weak" argument.
I think this issue should have been considered at standardization time, because
it would not invalidate anything at all. The standard does consider IEEE-754
support in the specification of numeric_limits. Defining an operator mapping
required of implementations claiming IEEE-754 compliance would break no other
case. And at worst, adding numeric_limits members to check the actual mapping
would have been pretty useful.
One more question:
is it your opinion that, whatever the mapping is, one mapping must exist? I
mean, if an implementation claims compliance with IEEE-754, then is it safe to
assume that operator < for that floating point type is either .LT. or .UL. and
cannot be anything else (after all, those are the two definitions IEEE-754 has
for what C++ does with operator <, right?).
Perhaps we cannot assume that a mapping between < and .LT. implies a mapping
between > and .GT., but still < is either .LT. or .UL.
What do you think?
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BBB2A72...@acm.org...
Yes, I think there probably ought to be a C++ binding to IEEE-754 or its
progeny. But that gets into a very large subject area, because there are
a bunch of other standards that could use C++ bindings as well: POSIX,
SQL, etc.
Of course it makes sense that an explicit binding be defined.
I meant: do you think that the C++ standard as is and the IEEE standard as is
are such that C++ operator < for a type that claims IEEE support in
numeric_limits maps either to the .LT. definition of the less-than operator in
IEEE or to the .UL. definition of the less-than-or-unordered operator?
In standardese: for a T such that numeric_limits<T>::is_iec559 is true, should
T(0.) < numeric_limits<T>::quiet_NaN()
be totally undefined (or better, unspecified: it returns either true or false,
and the returned value could be random), or just implementation-defined as one
of the two possibilities described in the other standard?
Since the C++ standard specifies that numeric_limits<T>::is_iec559 is true only
if the type adheres to the other standard, even if the other standard does not
say how the two possible operators should bind, maybe there's some wording that
says that < should bind to one or the other.
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BBB3E76...@acm.org...
I think that that's implied. 1.0 < 2.0 has to be true (even though the
C++ standard doesn't say it anywhere <g>), and the two IEEE-754
less-than operations are the only ones that impose the right ordering in
the absence of NaNs. The issue revolves around the meaning of that
operator when a NaN is present, and to conform to IEEE-754 that choice
must be one of the two less-than operations.
(k < v) == (v < k)
with k an FP constant, is true only if v is NaN or has the same value as the
constant. This is independent of which binding < has (though it must be one of
the two defined by IEEE; it does not work if < applied to an FP value and a NaN
is totally undefined).
Then what would make this non-portable?
template <class T>
bool isNaN( T v ) {
    return std::numeric_limits<T>::has_quiet_NaN &&
           (0. < v) == (v < 0.) &&
           (1. < v) == (v < 1.);
}
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BBBB4F4...@acm.org...
Unfortunately, I've unconvinced myself. Now let me see if I can
unconvince you. <g> There's a clause in the IEEE-754 standard that I'd
overlooked. It requires a conforming implementation that supports
comparison predicates (as opposed to condition codes, which is the other
way that implementations are allowed to support comparisons) to provide
.LT., .LE., .EQ., .NE., .GT., and .GE. Since C++ provides exactly six
comparison operators, <, <=, ==, !=, >, and >=, in an implementation
that also conforms to IEEE-754 these have to correspond to the six
required predicates. So in a conforming C++ implementation that also
conforms to IEEE-754, a == a must be false if a is a NaN.
In which case MSVC plus the Dinkumware STL is broken in this respect, too. It
claims IEEE-754 compliance but fails to implement == properly.
#include <cmath>
#include <cfloat>
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    cout << "Is double IEEE-754 compliant: "
         << (numeric_limits<double>::is_iec559 ? "yes" : "no") << endl;
    cout << "Does double have quiet NaN: "
         << (numeric_limits<double>::has_quiet_NaN ? "yes" : "no") << endl;
    double d = log(-1.0);
    cout << "log(-1)=" << d << endl;
    cout << "IsNaN according to _isnan: " << (_isnan(d) ? "yes" : "no") << endl;
    bool b = (d == d);
    cout << "IsNaN according to d==d: " << (!b ? "yes" : "no") << endl;
    return 0;
}
On my MSVC SP5 with the Dinkumware STL as shipped with MSVC, this prints:
Is double IEEE-754 compliant: yes
Does double have quiet NaN: yes
log(-1)=-1.#IND
IsNaN according to _isnan: yes
IsNaN according to d==d: no
I have checked the generated assembly code. MSVC generates an FCOMP instruction
for ==, and then only checks the C3 status flag, which is set when the arguments
are equal or unordered. It never checks the C2 flag that distinguishes between
those two cases. So in essence MSVC binds == to .UE. rather than .EQ. I guess
this means that either the compiler should be corrected, or numeric_limits
should not claim IEEE-754 compliance, or you have misinterpreted the standard's
requirements after all.
Of course, assuming that the implementation does not lie when
numeric_limits<T>::is_iec559 is true <g>
Also, many more people than expected are making the correct assumptions about
FP operators.
The only assumption that may not be immediately obvious is operator== applied
to two NaNs.
However, operator== (and operator!=) are already on the "be careful" list of an
FP algorithm designer. This very thread started by discussing operator== in the
context of FP algorithms. And the check for NaN that some would do by comparing
x to a NaN constant would not work (interestingly, not even if you compare the
constant to itself), and should in any case be replaced with a more appropriate
check.
Now, if I understand the implications of your unconvincement <g>, this should
be it:
template <class T>
bool isNaN( T v ) {
    if ( !std::numeric_limits<T>::has_quiet_NaN )
        return false;
    assert( std::numeric_limits<T>::is_iec559 );  // assert is a macro from <cassert>, not std::assert
    return !(0. <= v) && !(v <= 0.);
}
--
Andrea Ferro
"Pete Becker" <peteb...@acm.org> wrote in message
news:3BBC6B17...@acm.org...
That's okay too: it should be false. Comparisons with NaN values are
always unordered.
>
> However operator== (and operator!=) are already in the "be carefull" list of a
> FP algorithm designer. This very thread started discussing operator== in the
> context of FP algorithms. And che check for NaN that some would do by comparing
> x to a NaN constant would not work (interestingly not even if you compare the
> constant to itself) but should in any case be changed with a more appropriate
> check.
>
> Now, if I understand the implications of your unconvincement <g>, this should be
> it:
>
> template <class T>
> bool isNaN( T v ) {
>
> if ( ! std::numeric_limits<T>::has_quiet_NaN )
> return false;
>
> assert( std::numeric_limits<T>::is_iec559 );
>
> return !(0. <= v) && !(v <= 0.);
return !(v == v);
> In which case MSVC plus Dinkumware STL is broken in this respect, too. It
> claims IEEE-754 compliance but fails to implement == properly.
> .....
> Is double IEEE-754 compliant: yes
> Does double have quiet NaN: yes
> log(-1)=-1.#IND
> IsNaN according to _isnan: yes
> IsNaN according to d==d: no
And that's why you should call our isnan function instead of relying on
the trickier x == x. VC++, like many compilers, thinks it's okay to make
the substitution !(x != x), which is true for sane arithmetic but not
IEEE floating-point comparisons.
P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com
--
Andrea Ferro
"Igor Tandetnik" <itand...@whenu.com> wrote in message
news:#7gnmTOTBHA.1500@tkmsftngp05...
No more in VC7.
As a closing remark: this discussion was pretty interesting, I think. We all
started more or less thinking that C++ is bad at numerics. We all learned
something. We can now relax about concerns over the portability of FP code (at
least for NaN management, and therefore error propagation).
Besides hoping for better explicit support in the next standard release, the
most important open issue for enabling serious, portable FP work in C++ should
be (Pete convinced me of this in a previous post, and I doubt he can unconvince
himself this time <g>) control of the rounding mode. We can propagate and check
errors portably, but not yet minimize them.
Well. I suppose I'm oversimplifying <g>
Any discussion of floating point math that's comprehensible to ordinary
programmers is oversimplified.
That is the correct expansion of the macro between angle brackets at the end of
my sentence!
> > And that's why you should call our isnan function instead of relying on
> > the trickier x == x. VC++, like many compilers, thinks it's okay to make
> > the substitution !(x != x), which is true for sane arithmetic but not
> > IEEE floating-point comparisons.
>
> No more in VC7.
Are you sure that comparisons are now reliable, or did you just get a different (random) result?
Is there energy enough in the universe to fuel such a discussion?
--
Mårten
--
"Buxbomshäck [boxwood hedge] is a funny word. If you see someone laughing, it may be at that." - Jan Stenmark
Same size, but multiple discussions. This is just a typical thread size for a given topic of interest. With a complete app, many threads on different aspects would probably spin off.
> Is there energy enough in the universe to fuel such a discussion?
It would use up spare energy, actually. You underestimate the value of this thread and overestimate its depth. You may think we went into detail, but if you have ever done anything serious with FP, then we just scratched the surface of a tiny spot! And the value of concluding that NaN checking is portable whenever the implementation claims IEEE compliance (and does not lie about it) is probably higher than you think.
Actually, I got a different result. The docs just say FP is IEEE compatible; they do not explicitly say that comparisons are. I tried compiling this:
int main()
{
    float x, y;
    if ( x==y )
        x=y;
    if ( x!=y )
        x=y;
    if ( x<y )
        x=y;
    if ( x<=y )
        x=y;
    if ( x>y )
        x=y;
    if ( x>=y )
        x=y;
}
and checked the assembly. It looks like some comparisons test the unordered flag and others do not. I do not have an x87 reference here; maybe someone could check and post what the actual mapping is. Here's the assembly generated by beta 1 of VC7.
int main()
{
00411BC0 push ebp
00411BC1 mov ebp,esp
00411BC3 sub esp,48h
00411BC6 push ebx
00411BC7 push esi
00411BC8 push edi
float x,y;
if ( x==y )
00411BC9 fld dword ptr [x]
00411BCC fcomp dword ptr [y]
00411BCF fnstsw ax
00411BD1 test ah,44h
00411BD4 jp main+1Ch (411BDCh)
x=y;
00411BD6 mov eax,dword ptr [y]
00411BD9 mov dword ptr [x],eax
if ( x!=y )
00411BDC fld dword ptr [x]
00411BDF fcomp dword ptr [y]
00411BE2 fnstsw ax
00411BE4 test ah,44h
00411BE7 jnp main+2Fh (411BEFh)
x=y;
00411BE9 mov eax,dword ptr [y]
00411BEC mov dword ptr [x],eax
if ( x<y )
00411BEF fld dword ptr [x]
00411BF2 fcomp dword ptr [y]
00411BF5 fnstsw ax
00411BF7 test ah,5
00411BFA jp main+42h (411C02h)
x=y;
00411BFC mov eax,dword ptr [y]
00411BFF mov dword ptr [x],eax
if ( x<=y )
00411C02 fld dword ptr [x]
00411C05 fcomp dword ptr [y]
00411C08 fnstsw ax
00411C0A test ah,41h
00411C0D jp main+55h (411C15h)
x=y;
00411C0F mov eax,dword ptr [y]
00411C12 mov dword ptr [x],eax
if ( x>y )
00411C15 fld dword ptr [x]
00411C18 fcomp dword ptr [y]
00411C1B fnstsw ax
00411C1D test ah,41h
00411C20 jne main+68h (411C28h)
x=y;
00411C22 mov eax,dword ptr [y]
00411C25 mov dword ptr [x],eax
if ( x>=y )
00411C28 fld dword ptr [x]
00411C2B fcomp dword ptr [y]
00411C2E fnstsw ax
00411C30 test ah,1
00411C33 jne main+7Bh (411C3Bh)
x=y;
00411C35 mov eax,dword ptr [y]
00411C38 mov dword ptr [x],eax
}
00411C3B xor eax,eax
00411C3D pop edi
00411C3E pop esi
00411C3F pop ebx
00411C40 mov esp,ebp
00411C42 pop ebp
00411C43 ret
The flags set by fcomp are as follows:

Condition      C3  C2  C0
ST(0) > SRC     0   0   0
ST(0) < SRC     0   0   1
ST(0) = SRC     1   0   0
Unordered       1   1   1

Unordered means either or both operands are NaN.
After fnstsw, the flags land in AH as follows: C3 becomes 0x40, C2 becomes 0x04, and C0 becomes 0x01.
So, the logic for x == y is that the condition is true only when C3 != C2, which holds only when x equals y and neither is NaN: a correct IEEE .EQ.
x != y is the exact inverse of x == y (it uses jnp, "jump if not parity", in place of jp). In other words, x != y gets rewritten as !(x == y). That is a correct .NE., right?
(x < y) is true when C2 != C0, that is x < y and neither is NaN. Correct
.LT.
(x <= y) is true when C3 != C0, that is x <= y and neither is NaN. Correct
.LE.
(x > y) is true when C3 == C0 == 0, that is x > y and neither is NaN.
Correct .GT.
(x >= y) is true when C0 == 0, that is x >= y and neither is NaN. Correct
.GE.
Looks like VC got it right this time.
Thanks a lot Igor.
This is good news for everybody.