
Portability and floating point exceptions


Andrew

Jan 28, 2010, 8:53:04 AM
I have some questions about how floating point errors should be
handled portably in C++ programs. The errors I am talking about are
associated with C math functions such as exp.

Documentation such as http://docs.sun.com/app/docs/doc/806-3332/6jcg55o9k?a=view
talks about 'exceptions' but the math library is a C library.
Obviously when it uses the word 'exception' it means in the IEEE
sense, not in the C++ sense. Here is an excerpt:

---
some NaN or Inf results may be issued by the floating-point unit and be
returned as such to the application without any warning better than the
value of the result. Detected errors are reported by setting errno to
either ERANGE or EDOM, performing a system-dependent notification, and
returning either +Inf, -Inf or NaN, whichever best suits the nature of
the error.
---
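
To illustrate, a minimal sketch of the errno-style reporting that the
excerpt describes (this assumes an implementation that actually reports
math errors through errno, i.e. math_errhandling & MATH_ERRNO is set):

    #include <cerrno>
    #include <cmath>
    #include <cstdio>
    #include <cstring>

    int main() {
        errno = 0;                    // clear any stale error first
        volatile double x = 1.0e6;    // volatile so the call isn't folded away
        double y = std::exp(x);       // overflows an IEEE double
        if (errno == ERANGE)
            std::printf("exp overflowed: returned %g (%s)\n",
                        y, std::strerror(errno));
        return 0;
    }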

So, what's a C++ programmer supposed to do? I am working on a project
that is doing lots of number crunching in a Microsoft environment,
using the Visual Studio compilers. Apparently there has been a change
between VC6 and VC8 regarding the way floating point 'exceptions' are
handled. In VC6 the IEEE exception is converted to a C++ exception but
in VC8 it is not. I am looking for some advice on how people handle
these issues in a portable way. And I don't just mean between
different versions of Visual Studio. I am thinking of POSIX platforms
too, including Linux.

Regards,

Andrew Marlow

--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Pete Becker

Jan 28, 2010, 1:06:33 PM
Andrew wrote:
> some NaN or Inf results may be issued by the floating-point unit and be
> returned as such to the application without any warning better than the
> value of the result.

Yes, that's how IEEE floats are designed: don't check for errors until
the end. That way your code runs flat out in normal execution, and code
that runs into errors perhaps runs further than it otherwise would. But
if you read carefully about how NaN values and infinities propagate,
you'll see that you don't lose them, so checking at the end is safe.
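
As a sketch of that check-at-the-end style (the little pipeline below is
made up purely for illustration):

    #include <cmath>
    #include <stdexcept>
    #include <vector>

    // Run the whole calculation flat out, then look at the result once.
    // A NaN or infinity produced anywhere along the way propagates into
    // the sum, so one check at the end catches it.
    double score(const std::vector<double>& xs) {
        double acc = 0.0;
        for (double x : xs)
            acc += std::log(x) * std::exp(-x);    // no per-step checks
        if (!std::isfinite(acc))
            throw std::runtime_error("floating-point error somewhere in score()");
        return acc;
    }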

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

George Neuner

Jan 28, 2010, 8:17:06 PM
On Thu, 28 Jan 2010 12:06:33 CST, Pete Becker
<pe...@versatilecoding.com> wrote:

>Andrew wrote:
>> some NaN or Inf results may be issued by the floating-point unit and be
>> returned as such to the application without any warning better than
>> the value of the result.
>
>Yes, that's how IEEE floats are designed: don't check for errors until
>the end. That way your code runs flat out in normal execution, and code
>that runs into errors perhaps runs further than it otherwise would. But
>if you read carefully about how NaN values and infinities propagate,
>you'll see that you don't lose them, so checking at the end is safe.

The problem with quiet NaNs (and also with INFs) is that it isn't easy to
identify the particular operation or data that caused your complex
computation to fail unless you check the results at every step. It
only gets worse using SIMD. It doesn't help that most FP hardware
doesn't support signaling NaNs with an interrupt and so, in most
cases, implementing language level exceptions requires slowing
calculations by inserting extra check code.

George


Pete Becker

Jan 29, 2010, 6:46:54 PM
George Neuner wrote:
> On Thu, 28 Jan 2010 12:06:33 CST, Pete Becker
> <pe...@versatilecoding.com> wrote:
>
>> Andrew wrote:
>>> some NaN or Inf results may be issued by the floating-point unit and be
>>> returned as such to the application without any warning better than
>>> the value of the result.
>> Yes, that's how IEEE floats are designed: don't check for errors until
>> the end. That way your code runs flat out in normal execution, and code
>> that runs into errors perhaps runs further than it otherwise would. But
>> if you read carefully about how NaN values and infinities propagate,
>> you'll see that you don't lose them, so checking at the end is safe.
>
> The problem with quiet NaNs (and also with INFs) is that it isn't easy to
> identify the particular operation or data that caused your complex
> computation to fail unless you check the results at every step. It
> only gets worse using SIMD. It doesn't help that most FP hardware
> doesn't support signaling NaNs with an interrupt and so, in most
> cases, implementing language level exceptions requires slowing
> calculations by inserting extra check code.
>

The goal isn't making it easy to debug your code and validate input,
it's to make math operations run as fast as possible on valid values.
For debugging, enable floating-point exceptions and add your own
exception handlers. (Note: this has nothing to do with C++ exceptions;
floating-point math has its own idea of what constitutes an exception).
I don't know how intrusive that is in the real world; I'm not an expert
on floating-point math.
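
For example, a debug-build sketch of doing exactly that on glibc
(feenableexcept is a GNU extension, not standard C++; on Windows
_controlfp_s plays the equivalent role):

    #define _GNU_SOURCE        // feenableexcept is a glibc extension
    #include <fenv.h>
    #include <cmath>
    #include <csignal>
    #include <cstdio>
    #include <cstdlib>

    extern "C" void on_fpe(int) {
        // Keep it simple: report and die, so a debugger or core dump
        // shows exactly which operation trapped.
        std::fputs("trapped a floating-point exception\n", stderr);
        std::abort();
    }

    int main() {
        std::signal(SIGFPE, on_fpe);
        feenableexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW);
        volatile double x = -1.0;
        double bad = std::log(x);      // FE_INVALID -> SIGFPE raised here
        std::printf("%f\n", bad);      // never reached
    }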

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)


Goran

Jan 29, 2010, 6:45:05 PM
On Jan 28, 2:53 pm, Andrew <marlow.and...@googlemail.com> wrote:
> I have some questions about how floating point errors should be
> handled portably in C++ programs. The errors I am talking about are
> associated with C math functions such as exp.
>
> Documentation such as http://docs.sun.com/app/docs/doc/806-3332/6jcg55o9k?a=view

> talks about 'exceptions' but the math library is a C library.
> Obviously when it uses the word 'exception' it means in the IEEE
> sense, not in the C++ sense. Here is an excerpt:
>
> ---
> some NaN or Inf results may be issued by the floating-point unit and be
> returned as such to the application without any warning better than the
> value of the result. Detected errors are reported by setting errno to
> either ERANGE or EDOM, performing a system-dependent notification, and
> returning either +Inf, -Inf or NaN, whichever best suits the nature of
> the error.
> ---
>
> So, what's a C++ programmer supposed to do? I am working on a project
> that is doing lots of number crunching in a Microsoft environment,
> using the Visual Studio compilers. Apparently there has been a change
> between VC6 and VC8 regarding the way floating point 'exceptions' are
> handled. In VC6 the IEEE exception is converted to a C++ exception but
> in VC8 it is not. I am looking for some advice on how people handle
> these issues in a portable way. And I don't just mean between
> different versions of Visual Studio. I am thinking of POSIX platforms
> too, including Linux.

I would guess that the difference you see between VC6 and VC8 is in
/EH (project properties->C/C++->Code Generation->Enable C++
Exceptions, where the default is now /EHsc). VC6 used to generate
code that caught MS's "structured" exceptions (OS exceptions) as C++
exceptions (that was IMO a __bad__ idea). But, MS being what it is,
you should be able to get back to the original behaviour by using
/EHa (if you ask me, don't - at least not at the whole-project level).
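
For anyone who does want the old behaviour back, a Microsoft-specific
sketch (assumes MSVC with /EHa; _set_se_translator and _controlfp_s are
the documented MSVC runtime hooks, but this is illustrative rather than
a recommendation):

    // Build with: cl /EHa example.cpp   (the translator needs /EHa)
    #include <windows.h>
    #include <eh.h>          // _set_se_translator
    #include <float.h>       // _controlfp_s, _clearfp, _EM_* masks
    #include <cstdio>
    #include <stdexcept>
    #include <string>

    void se_to_cpp(unsigned int code, EXCEPTION_POINTERS*) {
        // A real translator would switch on 'code'
        // (EXCEPTION_FLT_DIVIDE_BY_ZERO, EXCEPTION_FLT_INVALID_OPERATION,
        // ...); here everything becomes one exception type.
        throw std::runtime_error("structured exception " + std::to_string(code));
    }

    int main() {
        _set_se_translator(se_to_cpp);
        _clearfp();                              // discard stale status flags
        unsigned int cw;
        _controlfp_s(&cw, 0, _EM_INVALID | _EM_ZERODIVIDE | _EM_OVERFLOW);
        try {
            volatile double zero = 0.0;
            double d = 1.0 / zero;               // traps instead of giving Inf
            std::printf("%f\n", d);
        } catch (const std::exception& e) {
            std::printf("caught: %s\n", e.what());
        }
    }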

Goran.

Pete Becker

Jan 31, 2010, 1:20:21 PM

Whoops, I forgot to mention the key behind that paragraph: NaNs and
infinities are inserted as the default behavior for various
floating-point exceptions. Replacing the appropriate exception handlers
with your own will give you hooks to see which operations are creating
those values.
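
As a sketch of one such hook that needs no hardware trap support at all,
using the C99/C++11 <cfenv> status flags (the stage names are
hypothetical):

    #include <cfenv>
    #include <cstdio>
    // #pragma STDC FENV_ACCESS ON   // required by the standard for this to
    //                               // be well defined; many compilers ignore it

    // Report and clear the sticky IEEE status flags after one stage of a
    // larger computation, so the offending stage can be narrowed down.
    void check_stage(const char* name) {
        if (std::fetestexcept(FE_INVALID | FE_DIVBYZERO | FE_OVERFLOW))
            std::fprintf(stderr, "FP exception flags set during %s\n", name);
        std::feclearexcept(FE_ALL_EXCEPT);
    }

    // Usage (stage functions are invented):
    //   std::feclearexcept(FE_ALL_EXCEPT);
    //   build_curves(data);   check_stage("curve building");
    //   price_trades(data);   check_stage("pricing");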

George Neuner

Feb 1, 2010, 8:56:25 AM
On Sun, 31 Jan 2010 12:20:21 CST, Pete Becker
<pe...@versatilecoding.com> wrote:

>Pete Becker wrote:
>> George Neuner wrote:
>>
>>> The problem with quiet NaNs (and also with INFs) is that it isn't easy to
>>> identify the particular operation or data that caused your complex
>>> computation to fail unless you check the results at every step. It
>>> only gets worse using SIMD. It doesn't help that most FP hardware
>>> doesn't support signaling NaNs with an interrupt and so, in most
>>> cases, implementing language level exceptions requires slowing
>>> calculations by inserting extra check code.
>>
>> The goal isn't making it easy to debug your code and validate input,
>> it's to make math operations run as fast as possible on valid values.
>> For debugging, enable floating-point exceptions and add your own
>> exception handlers. (Note: this has nothing to do with C++ exceptions;
>> floating-point math has its own idea of what constitutes an exception).
>> I don't know how intrusive that is in the real world; I'm not an expert
>> on floating-point math.
>
>Whoops, I forgot to mention the key behind that paragraph: NaNs and
>infinities are inserted as the default behavior for various
>floating-point exceptions. Replacing the appropriate exception handlers
>with your own will give you hooks to see which operations are creating
>those values.

That's true if the language implementation has provided the hooks ...
else you need to be able to write interrupt handlers 8) Some hardware
- notably Intel/AMD - is reasonably good at reporting problems (though
SIMD error reporting could be better and there's no good way to
recover), but not all FP hardware is so accommodating.

There are chips that don't signal any numeric errors at all and others
that can only signal a generic failure and can't tell you why. There
are a number of chips that detect but don't signal underflow and just
pin underflowing results to zero (which extends the set of numeric
comparisons that will return equal). And many chips don't support
denormals (which feeds back into underflow handling).

The point is that portability of floating point code - including
exceptions - is not terribly good unless you remain with the same
hardware. Even then, if you want to stay with language level
exceptions, you are at the mercy of the implementation.

George


Pete Becker

Feb 1, 2010, 4:04:41 PM

That's true for languages that implement IEEE-754, and that includes C90
and C++0x. And that's the context that I explicitly referred to in an
earlier message that's now been snipped from the attribution chain.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)


A. McKenney

Feb 2, 2010, 3:26:25 PM
On Feb 1, 4:04 pm, Pete Becker <p...@versatilecoding.com> wrote:
> George Neuner wrote:
> > On Sun, 31 Jan 2010 12:20:21 CST, Pete Becker
...

> > That's true if the language implementation has provided the hooks ...
>
> That's true for languages that implement IEEE-754, and that includes C90
> and C++0x. And that's the context that I explicitly referred to in an
> earlier message that's now been snipped from the attribution chain.

Minor nit: IEEE-754 specifies floating-point behavior, but not
language bindings. Since C and C++ do not, AFAIK, mandate IEEE-754,
to speak of C and C++ as "languages that implement IEEE-754" is a bit
misleading.

I assume you mean that some revisions of the C and C++ standards
specify functions intended to manage certain IEEE-754 features, and
that these functions will have the desired effect if the
implementation uses IEEE-754 arithmetic.

However, since most implementations use whatever floating-point
hardware is available (if any), you are generally limited by any
limitations in that hardware. If the hardware doesn't allow you to
trap on NaN but not Inf, or doesn't have all rounding modes
implemented, the C++ standard won't change that.

All the more so if the hardware has non-IEEE-754 arithmetic. My
(possibly flawed) understanding is that these functions are still
defined in that case, but I don't know what they do.
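
For what it's worth, a small sketch of how a program can at least ask
whether it is getting IEEE-754 behaviour before relying on any of this:

    #include <cstdio>
    #include <limits>

    int main() {
    #ifdef __STDC_IEC_559__
        std::puts("__STDC_IEC_559__ is defined (C Annex F support claimed)");
    #endif
        if (std::numeric_limits<double>::is_iec559)
            std::puts("double follows IEC 559 / IEEE-754 here");
        else
            std::puts("double is NOT IEC 559 here - the fenv facilities may do little");
    }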



Andrew

Feb 2, 2010, 3:26:57 PM
On 29 Jan, 23:46, Pete Becker <p...@versatilecoding.com> wrote:
> George Neuner wrote:
> > The problem with quiet NaNs (and also with INFs) is that isn't easy to
> > identify the particular operation or data that caused your complex
> > computation to fail unless you check the results at every step.
>
> The goal isn't making it easy to debug your code and validate input,
> it's to make math operations run as fast as possible on valid values.
> For debugging, enable floating-point exceptions and add your own
> exception handlers. (Note: this has nothing to do with C++ exceptions;
> floating-point math has its own idea of what constitutes an exception).
> I don't know how intrusive that is in the real world; I'm not an expert
> on floating-point math.

I realise that. Normally it would be a dubious benefit to be so
tolerant of errors. Garbage-in, Garbage-out and all that. But there
are some problems that use complex models and these models sometimes
fail in ways that are hard to predict.

The models I am thinking of are to do with quantitative finance where
there are zillions of parameters and permutations of input and it is
impossible to test them all. Some combinations will cause the model to
fail for certain inputs. The number of cases is small, but they are
there. These calculations are extremely CPU intensive and there are
lots of them. So in the normal case they need to run really fast. We
can't afford to always call exp_check instead of exp, log_check
instead of log etc. It would slow down the normal case too much.
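
A sketch of one compromise (illustrative only; the loop body is
invented): clear the sticky IEEE status flags, run the whole batch with
plain exp/log, and test the flags once at the end, falling back to
per-step checks only for a batch that reports a problem.

    #include <cfenv>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Run one batch of valuations flat out, then test the sticky status
    // flags once. Returns false if any invalid/overflow/divide-by-zero
    // occurred anywhere in the batch, so only that batch needs re-running
    // with expensive per-step checks.
    bool run_batch(const std::vector<double>& in, std::vector<double>& out) {
        std::feclearexcept(FE_ALL_EXCEPT);
        out.resize(in.size());
        for (std::size_t i = 0; i < in.size(); ++i)
            out[i] = std::exp(in[i]) * std::log1p(in[i]);   // hot loop, no checks
        return !std::fetestexcept(FE_INVALID | FE_OVERFLOW | FE_DIVBYZERO);
    }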

From what other people have said on this thread it looks to me like
the problem is inherently platform- and compiler-specific. I wonder
how Java would cope with this. True, one doesn't usually find
intensive number crunching apps written in Java. But suppose one was
written. Presumably, it would be vulnerable to the same issue. After
all, there are some aspects of Java where implementation-specific
behaviour leaks out. For example, the way you do wait/notify involves
putting wait in a loop due to the spurious wakeup problem. I reckon
that's a POSIX-specific wait primitive problem surfacing (I'm sure
someone will correct me if I am wrong about that).

Regards,

Andrew Marlow


Pete Becker

Feb 2, 2010, 6:08:36 PM
A. McKenney wrote:
> On Feb 1, 4:04 pm, Pete Becker <p...@versatilecoding.com> wrote:
>> George Neuner wrote:
>>> On Sun, 31 Jan 2010 12:20:21 CST, Pete Becker
> ...
>>> That's true if the language implementation has provided the hooks ...
>> That's true for languages that implement IEEE-754, and that includes C90
>> and C++0x. And that's the context that I explicitly referred to in an
>> earlier message that's now been snipped from the attribution chain.
>
> Minor nit: IEEE-754 specifies floating-point
> behavior, but not language bindings. Since
> C and C++ do not, AFAIK, mandate IEEE-754, to
> speak of C and C++ as "languages that
> implement IEEE-754" is a bit misleading.
>

Sorry, C90 was a typo. Should have been C99, which gives special status
to IEEE-754 and provides functions for managing a 754-compliant
environment. See <fenv.h>.
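
For reference, a small sketch of the <fenv.h>/<cfenv> facilities in
question (their effect still depends on the implementation honouring
FENV_ACCESS):

    #include <cfenv>
    #include <cstdio>

    int main() {
        std::fenv_t saved;
        std::fegetenv(&saved);             // save rounding mode + status flags

        volatile double a = 1.0, b = 3.0;  // volatile: keep the divisions at runtime
        std::fesetround(FE_UPWARD);
        std::printf("1/3 rounded up:   %.17g\n", a / b);
        std::fesetround(FE_DOWNWARD);
        std::printf("1/3 rounded down: %.17g\n", a / b);

        std::feraiseexcept(FE_OVERFLOW);   // flags can also be raised by hand
        if (std::fetestexcept(FE_OVERFLOW))
            std::puts("overflow flag is set");

        std::fesetenv(&saved);             // restore the saved environment
    }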

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

