On 01/04/2014 06:07 AM, ardi wrote:
> Hi,
>
> Am I right in supposing that if a floating point variable x is normal
> (not denormal/subnormal), then for any non-NaN and non-Inf variable
> called y, the result y/x is guaranteed to be non-NaN and non-Inf?
How could that be true? If the mathematical value of y/x were greater
than DBL_MAX, or smaller than -DBL_MAX, what do you expect the floating
point value of y/x to be? What you're really trying to do is prevent
floating point overflow, and a test for isnormal() is not sufficient.
You must also check whether fabs(x) > fabs(y)/DBL_MAX (assuming that x
and y are both doubles).
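For concreteness, here's a rough sketch of that combined test (my code,
with an invented name, not anything blessed by the standard); it
assumes C99's <math.h> classification macros and double operands:

    #include <math.h>
    #include <float.h>

    /* Nonzero if y/x cannot produce NaN or Inf. The overflow test
       follows the fabs(x) > fabs(y)/DBL_MAX idea above; the boundary
       case is rounding-sensitive, so treat results very close to
       DBL_MAX with suspicion. */
    int safe_to_divide(double y, double x)
    {
        if (isnan(y) || isinf(y))
            return 0;               /* y must be finite */
        if (!isnormal(x))
            return 0;               /* excludes zero, subnormal, Inf, NaN x */
        return fabs(x) > fabs(y) / DBL_MAX;   /* so |y/x| < DBL_MAX */
    }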
As far as the C standard is concerned, the accuracy of floating point
math is entirely implementation-defined, and it explicitly allows the
implementation-provided definition to be "the accuracy is unknown"
(5.2.4.2.2p6). Therefore, a fully conforming implementation of C is
allowed to implement math so inaccurate that the computed value of
DBL_MIN/DBL_MAX comes out greater than DBL_MAX. In practice, you
wouldn't be able to sell such an implementation to anyone who actually
needed to perform floating point math, but that issue is outside the
scope of the standard.
However, if an implementation pre-#defines __STDC_IEC_559__, it is
required to conform to the requirements of Annex F (6.10.8.3p1), which
are based upon but not completely identical to the requirements of IEC
60559:1989, which in turn is essentially equivalent to IEEE 754:1985.
That implies fairly strict requirements on the accuracy; for the most
part, those requirements are as strict as they reasonably could be.
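If you want to know whether a given implementation makes that promise,
the check is as simple as it sounds:

    #include <stdio.h>

    int main(void)
    {
    #ifdef __STDC_IEC_559__
        puts("Annex F in effect: IEC 60559 semantics promised");
    #else
        puts("no Annex F guarantee: accuracy is implementation-defined");
    #endif
        return 0;
    }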
> If affirmative, I've two doubts about this. First, how efficient can
> one expect the isnormal() macro to be? I mean, should one expect it
> to be much slower than doing an equality comparison to zero
> (x==0.0)? Or should the performance be similar?
The performance is inherently system-specific; for all I know, there
might be floating point chips where isnormal() can be implemented as a
single instruction. Even at the very worst, it shouldn't be much more
complicated than a few mask-and-shift operations on the bytes of a copy
of the argument.
> Second, how could I "emulate" isnormal() on older systems that lack
> it? For example, if I compile on IRIX 6.2, which AFAIK lacks
> isnormal(), is there some workaround which would also guarantee me
> that the division doesn't generate NaN nor Inf?
Find a precise definition of the floating point format implemented on
that machine (which might not fully conform to IEEE requirements), and
you can then implement isnormal() by performing a few mask and shift
operations on the individual bytes of the argument.
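As an illustration only (my sketch, with an invented name, and assuming
the machine uses big-endian IEEE 754 binary64 doubles, which is what
you'd expect on MIPS/IRIX but should verify), the mask-and-shift
version looks like this:

    #include <string.h>

    /* isnormal() for IEEE 754 binary64: a double is normal iff its
       11-bit biased exponent is neither all zeros (zero/subnormal)
       nor all ones (Inf/NaN). Uses unsigned char rather than a
       64-bit type, since pre-C99 compilers may lack <stdint.h>. */
    static int my_isnormal(double x)
    {
        unsigned char b[sizeof(double)];
        unsigned exp;
        memcpy(b, &x, sizeof b);
        /* big-endian layout assumed; on a little-endian machine
           read b[7] and b[6] instead */
        exp = ((b[0] & 0x7Fu) << 4) | (b[1] >> 4);
        return exp != 0 && exp != 0x7FF;
    }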
> Also, if the isnormal() macro can be slow, is there any other
> approach which would also give me the guarantee I'm asking for? ...
If you can find an alternative way of implementing the equivalent of
isnormal() that is significantly faster than calling the macro provided
by a given version of the C standard library, then you should NOT use
that alternative; what you should do is drop that version of the C
standard library and replace it with one that's better implemented.
> ... Maybe
> comparing to some standard definition which holds the smallest normal
> value available for each data type?
Yes, that's what FLT_MIN, DBL_MIN, and LDBL_MIN are for.
> ... Are such definitions standardized
> in some way such that I can expect to find them in some standard
> header on most OSs/compilers? ...
Yes - the standard header is <float.h>.
> ... Would I be safe to test it this way
> rather than with the isnormal() macro?
It could be safe, if you correctly handle the possibility that the
value is a NaN. Keep in mind that the ordered comparisons (<, <=, >,
>=) are always false when either operand is a NaN, so x>=DBL_MIN is
not the same as !(x<DBL_MIN). If x is a NaN, the first expression is
false, while the second is true.
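Putting that together, a comparison-only test (again my sketch, name
invented, not the only way to write it) that comes out false for NaN,
zero, subnormals, and infinities might be:

    #include <float.h>

    /* Nonzero iff x is a finite normal double. Every comparison
       below is false when x is NaN, so a NaN falls through to 0. */
    static int normal_by_compare(double x)
    {
        return (x >=  DBL_MIN && x <=  DBL_MAX) ||
               (x <= -DBL_MIN && x >= -DBL_MAX);
    }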
--
James Kuyper