
Exceptions are good...but to throw or not to throw is the problem


Apache Beta

Feb 13, 2003, 5:07:21 AM
All Exceptional people,

There has been a lot said about exceptions... all that good stuff.

All that bad stuff on another thread :-)

However one issue has typically been side-stepped. When is it a
good idea to throw and when to return an error code? Let me explain...

Primarily, I feel the following are the main issues to understand to
make effective use of exceptions:

1) How to write (exception-safe/neutral) code in the face of exceptions
2) Catching exceptions and doing something about them before continuing
3) When your function/method should resort to throwing an exception
4) Changing something and retrying the operation (maybe change several
   things, but one at a time, retrying after each change)

The "Exceptional C++" series does a great job, especially with issues 1
and 2, and says almost nothing about issues 3 and 4.

But issue #3 is really the first problem that needs to be understood to
use exceptions for doing something about runtime errors in code. I
haven't seen this discussed as much as the other related issues, even
though I have seen an article or two titled something like
"to throw or not to throw".

If you look at the standard library in this matter... the strategies
are:
- Have two versions, one that throws and one that doesn't (e.g. at(),
  operator[])
- Set some flag to enable or disable exceptions being thrown (iostreams
  library)
- Perhaps others that I haven't noticed
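The first strategy can be seen in std::vector itself. Here is a minimal sketch (the helper name is mine, not the library's) of using the throwing at() where the non-throwing operator[] would be undefined behavior on a bad index:

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Illustrative helper: reads v[i] via at(), mapping the out-of-range
// exception to a fallback value. With operator[] the same bad index
// would be undefined behavior instead of a catchable error.
int checked_get(const std::vector<int>& v, std::size_t i, int fallback)
{
    try {
        return v.at(i);   // throws std::out_of_range on a bad index
    } catch (const std::out_of_range&) {
        return fallback;
    }
}
```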

It doesn't seem feasible to follow the first mechanism too often, even
though it gives the calling code the flexibility to choose or not choose
exceptions.

Given that you follow one of these strategies, does it imply it may be a
good idea to throw exceptions anywhere you would like to return an error
code? Uh... well, that would create havoc, since we often can/like to
ignore error codes but not exceptions... (ever checked the return code
from printf?)

Sidenote:
If you are an "exceptions are a bitch" type of person because you think
they create more complex code, look at this nice little experiment...
http://www.shindich.com/sources/c++_topics/exceptions.html
...which contrasts code with and without exceptions. Unfortunately its
title seems to suggest that it addresses the specific question I ask
here, but it actually talks about the more general issue of "are
exceptions a good idea".

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]
[ about comp.lang.c++.moderated. First time posters: do this! ]

Dave Steffen

Feb 13, 2003, 10:51:48 AM
Apache Beta <apach...@cup.hp.com> writes:

[...]

> 3) When your function/method should resort to throwing an exception
> 4) Changing something and retrying the operation (maybe change several
>    things, but one at a time, retrying after each change)

[...]


>
> But issue #3 is really the first problem that needs to be understood
> to use exceptions for doing something about runtime errors in code. I
> haven't seen this discussed as much as the other related issues, even
> though I have seen an article or two titled something like "to throw
> or not to throw".
>
> If you look at the standard library in this matter... the strategies
> are:
> - Have two versions, one that throws and one that doesn't (e.g. at(),
>   operator[])
> - Set some flag to enable or disable exceptions being thrown
>   (iostreams library)
> - Perhaps others that I haven't noticed

I suspect this was done because a lot of pre-exception code would
have been broken, or at least bent, by the change to exceptions.
For example, pre-exception code would do this:

    int* i = new int[1000];
    if (i == NULL) { /* ... handle error ... */ }

but post-exception code should do this:

    try {
        int* i = new int[1000];
        // ... use i ...
    }
    catch (std::bad_alloc&) { // I think that's how it's spelled!
        // ... handle error ...
    }

so having two versions of parts of the standard library (one that
throws, one that doesn't) was _probably_ an attempt to keep everyone
happy, and not break too much code.

> Given that you follow one of these strategies, does it imply it may
> be a good idea to throw exceptions anywhere you would like to return
> an error code? Uh... well, that would create havoc, since we often
> can/like to ignore error codes but not exceptions... (ever checked
> the return code from printf?)

Well, IMHO we have several data points in favor of using exceptions:

1) Important parts of the standard library (i.e. 'new') throw
exceptions by default

2) The C++ Gods (i.e. Stroustrup) think that they're absolutely
necessary for correct error handling

3) Some kinds of errors can't be reported in any other way. For
example, how do you return an error code if a constructor
fails?

and we have IMHO one data point against them:

1) Writing exception-correct code is not trivial, and takes a
certain amount of education.

I suppose we could add a second point here, that it's extremely
difficult to graft exception safety onto code that wasn't built with
it in mind.

My personal opinion is that I agree with the experts; I think it was
Scott Meyers who wrote something to the effect of "Exceptions are
difficult simply because correct error handling is difficult, and
exceptions make it impossible to ignore the issue".

However, I also tend to not throw an exception if I can possibly
help it. Sometimes there are simpler ways of signalling that a
procedure failed.
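The constructor point (3) above can be sketched like this; the class, its invariant, and the message are my own illustrative inventions, not from the standard library:

```cpp
#include <stdexcept>

// A constructor has no return value, so a broken invariant can only be
// reported by throwing -- the caller never sees a half-built object.
class Temperature {
public:
    explicit Temperature(double kelvin) : kelvin_(kelvin) {
        if (kelvin < 0.0)
            throw std::invalid_argument("temperature below absolute zero");
    }
    double kelvin() const { return kelvin_; }
private:
    double kelvin_;
};

// How a caller observes success or failure of construction:
bool construct_ok(double k)
{
    try {
        Temperature t(k);
        return t.kelvin() == k;
    } catch (const std::invalid_argument&) {
        return false;
    }
}
```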

--------------------------------------------------------------------------
Dave Steffen Wave after wave will flow with the tide
Dept. of Physics And bury the world as it does
Colorado State University Tide after tide will flow and recede
stef...@lamar.colostate.edu Leaving life to go on as it was...
- Peart / RUSH
"The reason that our people suffer in this way....
is that our ancestors failed to rule wisely". -General Choi, Hong Hi

Daniel T.

Feb 13, 2003, 9:34:05 PM
Apache Beta <apach...@cup.hp.com> wrote:

> [Edited for brevity]
> When should your function/method resorrt to throwing an exception?
>
> This is really the first problem that needs to be understood to use
> exceptions for doing something about runtime errors in code.

You just answered your own question. Use exceptions for doing something
about runtime errors in code. If it isn't an error, then don't throw an
exception. If it is an error, then throw an exception. It really is that
simple. The hard part is deciding what is, and isn't, an error, and no
one can help you with that, because only you know what the function in
question is supposed to do, and what results are acceptable.

Andreas Huber

Feb 13, 2003, 9:55:49 PM
> However one issue has typically been side-stepped. When is it a
> good idea to throw and when to return an error code? Let me explain...

Never ever report any errors with return-values. It's that simple!

> If you look at the standard library for in this matter... the strategies
> are like
> - Have two versions , one that throws and one that doesnt ( eg. at()
> , operator[] )

To understand this design decision it's important to see that passing an
out of range index to operator[] is conceptually quite different from,
e.g., trying to open a file on a full disk. The former is a so-called
program logic error, which could have been detected _before_ running the
program. The latter is a problem that can only be detected at runtime.
There is not much agreement on what should be done upon detection of a
program logic error, mainly because of different safety requirements. For
some programs (e.g. a DVD player) it's perfectly legitimate to simply not
detect the problem and continue with undefined behavior (and probably
sooner or later fail with an access violation or the like), because such
a failure will most probably not destroy data or harm anybody. Others
should perform a _graceful_ emergency shutdown followed by a system
restart (e.g. a life-supporting medical appliance), for obvious reasons.

> - Set some flag to enable or disable exceptions being thrown (iostreams
> library)

Stream libraries were introduced before the dawn of exceptions. The
non-EH behavior is only there for backward compatibility. Since most
people agree that runtime errors should be reported with exceptions, I
recommend switching on exceptions right after creating the stream object.
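A minimal sketch of that recommendation (the function name and file names are placeholders of mine):

```cpp
#include <fstream>
#include <string>

// Enable stream exceptions right after constructing the stream, so a
// failed open throws instead of silently setting failbit.
bool open_with_exceptions(const std::string& name)
{
    std::ifstream in;
    in.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        in.open(name);     // a failed open now throws...
        return true;
    } catch (const std::ios_base::failure&) {
        return false;      // ...instead of leaving a flag to be checked
    }
}
```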

> Even though you give the calling code flexibility to choose or not
> choose exceptions.

See above, don't do that. One problem with error-codes is that you can
ignore them _without_a_trace_in_your_code_. To ignore exceptions you have to
write a catch( /**/ ) {} handler which can be detected (and reviewed) easily
with tools.

> Given that you follow one of these strategies, does it imply it may be
> good idea to
> throw exceptions anywhere you would like to return an error code ?

YES! That's exactly what exceptions were invented for! The huge benefit
is that you don't have to write all that tedious
passing-the-error-up-the-call-chain code anymore. Simply throw the
exception, document that fact, and you're done. As errors tend to happen
deep down, buried at the 47th level of the call stack, and are typically
not resolved (!= handled) until the stack has unwound to the 10th level,
you can easily see how much boring "if (error) return error;" coding
exceptions save you from doing.
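The call-chain point can be sketched as follows (all names are invented for illustration; the real chain would of course be much deeper):

```cpp
#include <stdexcept>
#include <string>

// The throw happens deep in the call chain and is resolved several
// frames up, with no error-forwarding code in the layers between.
void parse_field()  { throw std::runtime_error("bad field"); }
void parse_record() { parse_field(); }   // no "if (error) return error;"
void parse_file()   { parse_record(); }  // ...needed at any level here

std::string load()
{
    try {
        parse_file();
        return "ok";
    } catch (const std::runtime_error& e) {
        return e.what();   // resolved far above the throw site
    }
}
```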

> Uh... well, that would create havoc, since we often can/like to ignore
> error codes but not exceptions... (ever checked the return code from
> printf?)

It's always better to leave the decision to actively ignore an error to
your clients.

HTH,

Andreas

Wolfram Roesler

Feb 14, 2003, 5:35:48 PM
"Andreas Huber" <spam...@gmx.net> wrote in
news:3e4c3a28$1...@news.swissonline.ch:

> It's always better to leave the decision to actively ignore an error
> to your clients.

Isn't that a point AGAINST using exceptions, since, when flagged
by an exception, an error can't be ignored by the client?

Regards
W. Roesler

James Kanze

Feb 15, 2003, 5:03:00 AM
"Andreas Huber" <spam...@gmx.net> wrote in message
news:<3e4c3a28$1...@news.swissonline.ch>...

> > However one issue has typically been side-stepped. When is it a good
> > idea to throw and when to return an error code? Let me explain...

> Never ever report any errors with return-values. It's that simple!

So simple that no respected expert has made this recommendation to
date. Exceptions aren't a silver bullet. Experts disagree as to when
and where they should be used. But no one says that they are the only
reasonable solution, regardless of the case.

> > If you look at the standard library for in this matter... the
> > strategies are like
> > - Have two versions , one that throws and one that doesnt ( eg. at()
> > , operator[] )

> To understand this design decision it's important to see that passing
> an out of range index to operator[] is conceptually quite different
> from, e.g., trying to open a file on a full disk. The former is a
> so-called program logic error, which could have been detected _before_
> running the program. The latter is a problem that can only be
> detected at runtime. There is not much agreement on what should be
> done upon detection of a program logic error, mainly because of
> different safety requirements. For some programs (e.g. a DVD player)
> it's perfectly legitimate to simply not detect the problem and continue
> with undefined behavior (and probably sooner or later fail with an
> access violation or the like), because such a failure will most
> probably not destroy data or harm anybody. Others should perform a
> _graceful_ emergency shutdown followed by a system restart (e.g. a
> life-supporting medical appliance), for obvious reasons.

Letting a program stumble on after undefined behavior is never really an
appropriate solution. If it happens, it is because we are human, and we
make errors. Most of the time, the most appropriate action to take in
case of an error is to abort with an error message -- this would be a
very good implementation-specific behavior for the undefined behavior in
operator[]. A few, rare applications should try to recover. These
should use at().

> > - Set some flag to enable or disable exceptions being thrown
> >   (iostreams library)

> Stream libraries were introduced before the dawn of exceptions. The
> non-EH behavior is only there for backward compatibility. Since most
> people agree that runtime errors should be reported with exceptions, I
> recommend to switch on the exceptions right after creating the stream
> object.

Operator new was also introduced before the dawn of exceptions. There
is no non-EH behavior for backward compatibility. IO is a funny case;
the default error reporting is probably the best solution most of the
time, although there aren't very many other cases when such a strategy
would be appropriate.

Exceptions might be appropriate for bad() in iostream. They might also
be appropriate when reading temporary files which were written by the
same program just before -- if you write 100 bytes, seek to the start,
and a read of 100 bytes fails, there is probably something seriously
wrong. But such cases aren't the rule.

> > Eventhough you give the calling code flexibility to choose or not
> > choose exceptions.

> See above, don't do that. One problem with error-codes is that you can
> ignore them _without_a_trace_in_your_code_.

There are ways of avoiding that.

> To ignore exceptions you have to write a catch( /**/ ) {} handler
> which can be detected (and reviewed) easily with tools.

> > Given that you follow one of these strategies, does it imply it may
> > be good idea to throw exceptions anywhere you would like to return
> > an error code ?

> YES! That's exactly what exceptions were invented for!

According to the author, exceptions were invented for exceptional
cases. In practice, the general rule is that they are a good solution
for errors in non critical code which almost certainly cannot be handled
locally. Insufficient memory in new being the classical example:

 - In short running programs, like a compiler, the best solution for
   insufficient memory is just to abort. If you can't compile the
   program, you can't compile it, and throwing an exception rather than
   aborting isn't going to help much here.

 - In long running programs, like servers, insufficient memory can
   occur for two reasons: a memory leak (a programming error), or a
   request which was too complicated. In the second case, an
   exception, caught at the highest level, is an excellent way to abort
   the request. In the first case, the only way out is to abort and
   restart the program; since it is generally difficult to distinguish
   the two, if requests can have arbitrary complexity, you should
   probably go for the exception, and implement some sort of counter at
   the highest level -- if, say, 5 successive different requests fail
   because of lack of memory, you abort.

 - In critical applications, you don't use dynamically allocated
   memory, so the problem doesn't occur :-).

 - In some particular cases, it may be possible to recover locally from
   insufficient memory, say by reorganizing some memory or spilling
   data to disk. In such cases, you use new (nothrow), checking the
   return value for NULL.
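That last case can be sketched as follows; the helper name and the halve-the-request fallback are illustrative inventions, not from the post:

```cpp
#include <cstddef>
#include <new>

// new (nothrow) reports failure with a null pointer instead of a
// std::bad_alloc, so the caller can attempt local recovery.
int* allocate_buffer(std::size_t n)
{
    int* p = new (std::nothrow) int[n];    // NULL on failure, no throw
    if (p == NULL && n > 1) {
        // e.g. spill data to disk or free caches first; here we just
        // retry with a smaller request as a stand-in for recovery:
        p = new (std::nothrow) int[n / 2];
    }
    return p;
}
```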

The fact that the experts felt it necessary to provide a new which
reports errors by return code says a lot.

> The huge benefit is that you don't have to write all that tedious
> passing-the-error-up-the-callchain code anymore.

That is the only benefit. It's not a negligible benefit in cases where
the error will be passed up a long chain. It's not a real benefit at
all where the error will be treated immediately by the caller.

You use exceptions for exceptional cases, where there is no chance of
local recovery. You use return codes in all other cases.

> Simply throw the exception, document that fact and you're done. As
> errors tend to happen deep down burried at the 47th level of the
> call-stack and are typically not resolved (!=handled) until the stack
> has unwound to the 10th level, you can easily see how much boring "if
> (error) return error;"-coding exceptions save you from doing.

We must encounter different types of errors. For the most part, with
the exception of things like insufficient memory, I find myself handling
errors one or two levels above where they occur.

> > > Uh... well, that would create havoc, since we often can/like to
> > > ignore error codes but not exceptions... (ever checked the return
> > > code from printf?)

> It's always better to leave the decision to actively ignore an error
> to your clients.

You don't have the choice. If the client wants to ignore the error, he
will. If he wants to treat the error, he will.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

Bronek Kozicki

Feb 15, 2003, 5:23:35 AM
Andreas Huber <spam...@gmx.net> wrote:
>> Uh... well, that would create havoc, since we often can/like to
>> ignore error codes but not exceptions... (ever checked the return
>> code from printf?)
>
> It's always better to leave the decision to actively ignore an
> error to your clients.

which means "do not catch exceptions thrown in your code; instead, make
the client aware of the exception and let him handle the situation", and
for this to work your code *must* be exception safe. Right?


B.

Roshan Naik

Feb 15, 2003, 11:47:36 AM
> To understand this design decision it's important to see that passing an out
> of range index to operator[] is conceptually quite different from, e.g.,
> trying to open a file on a full disk. The former is a so-called program
> logic error, which could have been detected _before_ running the program.
> The latter is a problem that can only be detected at runtime.

Uh... what makes you think operator[] for vector can be range-checked at
compile time? That's really an outlandish expectation of any compiler,
given a dynamically resizable container. Do you know of any compiler
that can do that?

> Never ever report any errors with return-values. It's that simple!

..snip..

>>Given that you follow one of these strategies, does it imply it may be
>>a good idea to throw exceptions anywhere you would like to return an
>>error code?

>
> YES! That's exactly what exceptions were invented for! The huge benefit is
> that you don't have to write all that tedious
> passing-the-error-up-the-callchain code anymore.

Your emphatic implication that exceptions were invented to totally
replace traditional error handling techniques seems like your personal
opinion, unless Bjarne told you so.

At least in his book (TCPL), Bjarne doesn't seem to think the same. In
Chapter 11 (special edition) he implies that the exception handling
mechanism is more of an alternative than a replacement for traditional
error handling techniques...

To quote the text...
"The exception handling mechanism provides an alternative to the
traditional techniques when they are insufficient, inelegant and error
prone"


--Roshan

Neil Butterworth

Feb 15, 2003, 11:58:14 AM

"Wolfram Roesler" <w...@grp.de> wrote in message
news:Xns93227C70...@130.133.1.4...

> "Andreas Huber" <spam...@gmx.net> wrote in
> news:3e4c3a28$1...@news.swissonline.ch:
>
> > It's always better to leave the decisition to actively ignore an error
> > to your clients.
>
> Isn't that a point AGAINST using exceptions, since, when flagged
> by an exception, an error can't be ignored by the client?
>

Note the use of the phrase "actively ignore". The following code actively
ignores any exceptions thrown by foo():

    try {
        foo();
    }
    catch( ... ) {
        // ignore it
    }


NeilB

Andreas Huber

Feb 15, 2003, 2:46:48 PM
Wolfram,

> Isn't that a point AGAINST using exceptions, since, when flagged
> by an exception, an error can't be ignored by the client?

I'm not sure I understand what you are saying, since ignoring an
exception is quite easy:

    try
    {
        // ...
    }
    catch ( ... )
    {
    }

The huge difference from error codes is that the programmer must actively
write this code, and tools can easily detect that someone chose to ignore
an exception.

Regards,

Andreas

Andreas Huber

Feb 15, 2003, 9:06:24 PM
James,

> > Never ever report any errors with return-values. It's that simple!
>
> So simple that no respected expert has made this recommendation to
> date. Exceptions aren't a silver bullet. Experts disagree as to when
> and where they should be used. But no one says that they are the only
> reasonable solution, regardless of the case.

Ok, I knew this point would be coming from one of the list regulars, and
you are of course right. After a few projects where I have seen people
fall back into the error-code return scheme for no apparent reason, I
tend to press this beyond what is reasonable -- simply because it's
easier to fix cases where people throw exceptions and should not than
the other way round. BTW, in my last project (~0.25 Mloc) there was not
a single case where people should have used error-code returns rather
than exceptions! I believe exceptions are the right choice for 99.9% of
your everyday error-reporting needs.

> Letting a program stumble on after undefined behavior is never really an
> appropriate solution. If it happens, it is because we are human, and we
> make errors. Most of the time, the most appropriate action to take in
> case of an error is to abort with an error message -- this would be a
> very good implementation specific behavior for the undefined behavior in
> operator[]. A few, rare applications should try and recover. These
> should use at.

I don't see your point, as some applications simply cannot _afford_ to
detect the argument to operator[] being out of bounds (in release mode,
of course).

> Operator new was also introduced before the dawn of exceptions. There
> is no non-EH behavior for backward compatibility. IO is a funny case;

Yes there is: new (nothrow)

> the default error reporting is probably the best solution most of the
> time, although there aren't very many other cases when such a strategy
> would be appropriate.
>
> Exceptions might be appropriate for bad() in iostream. They might also
> be appropriate when reading temporary files which were written by the
> same program just before -- if you write 100 bytes, seek to the start,
> and a read of 100 bytes fails, there is probably something seriously
> wrong. But such cases aren't the rule.

Ok, I failed to say that I only turn on exceptions for eof in about half the
cases. However, I use exceptions for fail and bad.

> > See above, don't do that. One problem with error-codes is that you can
> > ignore them _without_a_trace_in_your_code_.
>
> There are ways of avoiding that.

Yep, there sure are ways. However, you still have to use exceptions for
constructor failures, which leaves you with two different approaches to
error reporting. Unless there are very strong reasons not to do so, I
tend to go for the KISS approach in such cases. That's why I recommend
using exceptions for all runtime-error reporting.

> According to the author, exceptions were invented for exceptional
> cases. In practice, the general rule is that they are a good solution
> for errors in non critical code which almost certainly cannot be handled
> locally. Insufficient memory in new being the classical example:
>
> - In short running programs, like a compiler, the best solution for
> insufficient memory is just to abort. If you can't compile his
> program, you can't compile it, and throwing an exception rather than
> aborting isn't going to help much here.
>
> - In long running programs, like servers, insufficient memory can
> occur for two reasons: a memory leak (a programming error), or a
> request which was too complicated. In the second case, an
> exception, caught at the highest level, is an excellent way to abort
> the request. In the first case, the only way out is to abort and
> restart the program; since it is generally difficult to distinguish,
> if requests can have arbitrary complexity, you should probably go
> for the exception, and implement some sort of counter at the highest
> level -- if say 5 successive different requests fail because of lack
> of memory, you abort.
>
> - In critical applications, you don't use dynamically allocated
> memory, so the problem doesn't occur:-).
>
> - In some particular cases, it may be possible to recover locally from
> insufficient memory, say by reorganizing some memory or spilling
> data to disk. In such cases, you use new (nothrow), checking the
> return value for NULL.

I don't know the "official" rationale behind new (nothrow), and honestly
I don't care that much, because I can't think of any use for it (unless
your platform or coding standard forbids you to use exceptions). In
fact, all the cases you mentioned can just as well be handled with
exceptions, without any major disadvantages.
1) Are you saying that you would use new (nothrow) in short running
programs? Why not use new and not handle the exception? This will
automatically lead to abort() being called.
2) We agree ;-)
3) What do you mean by critical? Realtime?
4) I might not have as much experience as you do, but I have so far (8
years of C++ programming) not come across a single case where you could
have handled an out-of-memory situation locally (right after calling
new). Even if you could, why not use normal new and put a try-catch
around it?

> The fact that the experts felt it necessary to provide a new which
> reports errors by return code says a lot.

As mentioned above, I don't know the rationale, and I couldn't find one
either, but there are platforms that until very recently have not
supported exception handling (WinCE). To be standards conformant, such
platforms couldn't possibly support normal new, but only new (nothrow).
To me this is a far stronger case for having new (nothrow).

> That is the only benefit. It's not a negligible benefit in cases where
> the error will be passed up a long chain. It's not a real benefit at
> all where the error will be treated immediately by the caller.

How do you know that your immediate caller will be able to handle the
error? IMO there's no way to tell, but I'd be very interested if you
have come up with an _easy_ scheme that allows you to do so.

> You use exceptions for exceptional cases, where there is no chance of
> local recovery. You use return codes in all other cases.

Again, how do you know who your caller is and what he does? Please give a
simple rule to find out whether an error can be handled by your immediate
caller or not. Honestly, even if such a rule existed I would still opt for
the exception-only approach, for KISS reasons.

> We must encounter different types of errors. For the most part, with
> the exception of things like insufficient memory, I find myself handling
> errors one or two levels above where they occur.

This is contrary to my experience, please give an example.

> > > Uh... well that would create a a havoc since we often can/like
> > > ignore error codes but not exceptions....(ever checked returned code
> > > from printf ?)
> > It's always better to leave the decisition to actively ignore an error
> > to your clients.
>
> You don't have the choice. If the client wants to ignore the error, he
> will. If he wants to treat the error, he will.

I was referring to the following: as long as you can't handle the
runtime error locally, you had better inform your client instead of
ignoring it and grinding on.

Regards,

Andreas

Andreas Huber

Feb 15, 2003, 9:06:58 PM
Bronek,

> > It's always better to leave the decisition to actively ignore an
> > error to your clients.
>
> which means "do not catch exceptions thrown in your code; instead,
> make the client aware of the exception and let him handle the
> situation", and for this to work your code *must* be exception safe.
> Right?

Yes, as long as there is no way for you to handle the error locally. Do not
write any code that is not exception-safe!

Regards,

Andreas

Francis Glassborow

Feb 16, 2003, 6:12:31 AM
In message <3e4e743c$1...@news.swissonline.ch>, Andreas Huber
<spam...@gmx.net> writes

>I don't know the "official" rationale behind new (nothrow), and honestly
>I don't care that much, because I can't think of any use for it (unless
>your platform or coding standard forbids you to use exceptions). In
>fact, all the cases you mentioned can just as well be handled with
>exceptions, without any major disadvantages.
>1) Are you saying that you would use new (nothrow) in short running
>programs? Why not use new and not handle the exception? This will
>automatically lead to abort() being called.
>2) We agree ;-)
>3) What do you mean by critical? Realtime?
>4) I might not have as much experience as you do, but I have so far (8
>years of C++ programming) not come across a single case where you could
>have handled an out-of-memory situation locally (right after calling
>new). Even if you could, why not use normal new and put a try-catch
>around it?

The problem with using any form of new other than new(nothrow) is that
the implementation just about has to put in all the exception handling
machinery (including calling abort for an unhandled exception). The
problem is not with short running programs but with very long running
programs in highly constrained resources, where the very existence of
exception handling mechanisms will make the program exceed the available
resources. Typically this is in some forms of embedded programming. In
highly competitive markets even pennies count, and moving to larger
resources is not a commercially acceptable solution.

No exception handling mechanism is probably irrelevant on PCs of all
forms, but it may be essential for programming the far commoner
microcontrollers that pervade our lives, unseen and unconsidered even by
many programmers.


--
ACCU Spring Conference 2003 April 2-5
The Conference you cannot afford to miss
Check the details: http://www.accuconference.co.uk/
Francis Glassborow ACCU

Francis Glassborow

Feb 16, 2003, 6:13:55 AM
In message <3e4e743c$1...@news.swissonline.ch>, Andreas Huber
<spam...@gmx.net> writes
>Ok, I knew this point would be coming from one of the list regulars, and
>you are of course right. After a few projects where I have seen people
>fall back into the error-code return scheme for no apparent reason, I
>tend to press this beyond what is reasonable -- simply because it's
>easier to fix cases where people throw exceptions and should not than
>the other way round. BTW, in my last project (~0.25 Mloc) there was not
>a single case where people should have used error-code returns rather
>than exceptions! I believe exceptions are the right choice for 99.9% of
>your everyday error-reporting needs.

If that is true, you are programming in a very special problem domain.
The choice is not just between throwing an exception and returning an
error code. There are other solutions that are relevant in other cases;
even new offers a more sophisticated set of choices.



Andreas Huber

Feb 16, 2003, 6:30:17 AM
Roshan,

"Roshan Naik" <rosha...@yahoo.com> wrote in message
news:EFh3a.285$l_5...@news.cpqcorp.net...


> > To understand this design decision it's important to see that passing
> > an out of range index to operator[] is conceptually quite different
> > from, e.g., trying to open a file on a full disk. The former is a
> > so-called program logic error, which could have been detected _before_
> > running the program. The latter is a problem that can only be detected
> > at runtime.
>
> Uh... what makes you think operator[] for vector can be range-checked
> at compile time? That's really an outlandish expectation of any
> compiler, given a dynamically resizable container. Do you know of any
> compiler that can do that?

I didn't say that the compiler could have checked this. There are really two
cases here:
1) The index depends on user input. In this case you'd better check for
out of range situations before passing it to operator[].
2) The index only depends on calculations your program makes. Believe it
or not, in this case a sufficiently sophisticated static program analysis
tool could have told you that the index will be out of range in certain
cases. Such tools take your program sources as input and do _not_ run the
program while analysing it. They come to that conclusion simply by
analysing the outcome of _every_ branch your program makes. A human
reviewer reading your program could do the same.

> Your emphatic implication that exceptions were invented to totally
> replace traditional error handling techniques seems like your personal
> opinion unless Bjarne told you so.

Ok, the inventor of exceptions (AFAIK _not_ Bjarne Stroustrup) probably
really did not want to completely replace the traditional way of error
handling. But, see below...

> At least in his book (TCPL), Bjarne doesn't seem to think the same. In
> Chapter 11 (special edition) he implies that the exception handling
> mechanism is more like an alternative than a replacement to traditional
> error handling techniques.....
>
> To quote the text...
> "The exception handling mechanism provides an alternative to the
> traditional techniques when they are insufficient, inelegant and error
> prone"

I know that Bjarne has written stuff that is contrary to my views. However,
a long time has passed since that book came out and compilers are much
better at exception handling today than they used to be. Moreover, I claim
that it is much simpler and safer to have one simple rule _everyone_
(rookies and seniors) in your project can follow rather than having to
establish IMO very difficult to comprehend heuristics when to use exceptions
and when not. In fact, I believe it's almost always a bad decision to return
an error code. Please see my answer to James Kanze.

Regards,

Andreas

Amir Yantimirov

unread,
Feb 16, 2003, 6:38:15 AM2/16/03
to
I have a few arguments against using exceptions.

Usually I have a class that completely covers a certain problem domain
and handles all errors within it. It exposes an error flag or state, a
class-specific error code that signals whether some special action
should or can be taken to recover, a general error code as returned by
the system, and a routine to generate an error message that can use
class information (the name of the file, for a file class, for example).
Methods of the class return only bool: did everything go all right or
not.

Using that methodology with error codes is straightforward; with
exceptions it is cumbersome.

And next: usually the error is spotted by a service thread, which can
neither handle it nor has a caller to propagate it to.

Amir Yantimirov
http://www174.pair.com/yamir/programming/

LLeweLLyn

unread,
Feb 16, 2003, 6:40:25 AM2/16/03
to
"Andreas Huber" <spam...@gmx.net> writes:

> Bronek,
>
> > > It's always better to leave the decisition to actively ignore an
> > > error to your clients.
> >
> > which means "do not catch exceptions thrown in your code, instead make
> > the client aware of the exception and let him handle the situation",
> > and for this to work your code *must* be exception safe. Right?
>
> Yes, as long as there is no way for you to handle the error locally. Do not
> write any code that is not exception-safe!

[snip]

I hope you are thinking of the basic guarantee and not the
strong. IMO, especially when dealing with 3rd party libs, the
strong guarantee is too hard to provide in all but a few places.

Julián Albo

unread,
Feb 16, 2003, 6:54:03 PM2/16/03
to
Andreas Huber wrote:

> I didn't say that the compiler could have checked this. There are really
> two cases here:
> 1) The index depends on user input. In this case you'd better check for
> out of range situations before passing to operator[].

Why? Using at is simpler. In general, the class can detect error
conditions much better than the code that invokes one of its functions.

Regards.

Andreas Huber

unread,
Feb 16, 2003, 6:55:32 PM2/16/03
to
Francis,

> The problem with using any form of new other than new(nothrow) is that
> the implementation just about has to put in all the exception handling
> mechanism (including calling abort for an unhandled exception). The
> problem is not with short running programs but in very long running
> programs in highly constrained resources where the very existence of
> exception handling mechanisms will make the program exceed the available
> resources. Typically this is in some forms of embedded programming. In
> highly competitive markets even pennies count and moving to larger
> resources is not a commercially acceptable solution.

I'm well aware of this and, as I have already written in the answer to
James' post, your environment could prohibit the use of exceptions. As
outlined below, I believe this is only true for a very special type of
application.

> No exception handling mechanism is probably irrelevant on PCs of all
> forms but it may be essential for programming the far commoner micro
> controllers that pervade our lives unseen and unconsidered even by many
> programmers.

I agree and I'm aware that there are programs which have to handle so few
exceptional situations that only a small percentage of all functions could
possibly fail with runtime errors (e.g. the software in a wristwatch that
has all the bells and whistles but has no I/O apart from the buttons and
the display). For such an application it could indeed be overkill to
employ exceptions.
However, I consider this a _very_ special type of programming environment,
where a few other well-established programming rules could also be
rendered impractical or even invalid.

For the rest (I believe the vast majority of all written lines of C++
code), the following thought experiment explains why exceptions are
superior to error-code returns: Two programs are written for the same
hardware. Both programs are _absolutely_identical_ in functionality, but
one makes full use of exceptions while the other has exceptions disabled
and works with error-code returns only. The type of functionality does
not matter much, but let's assume that the programs must deal with quite
a few different types of runtime errors (out-of-memory, I/O problems,
etc.). Both implement "perfect" error handling, i.e. for all runtime
errors that could possibly happen both programs have to try a remedy.
I believe both programs should be roughly in the same league for
executable size and runtime performance (on some platforms the program
employing exceptions could even be _faster_ and _smaller_, but I won't
go into details for now).
Why? Well, consider how you have to write the program that does not use
exception handling: after almost each and every function call you have
to insert "if (error) return error;". Because in C++ one line of code
often results in more than one function call (copy constructors,
overloaded operators, etc.) you are forced to tear apart a lot of
expressions. For example, the following results in 3 function calls for
the expression z = ... alone, and every single one could result in an
exception being thrown (because all may allocate memory)!

matrix z, a, b, c;
// fill a, b, c ....
z = a * b * c;

Tearing apart all the expressions in your program in such a way and
inserting all the necessary "if (error) return error;" statements is not
only extremely tedious but also insanely error-prone. So error-prone,
that it's almost impossible that the two programs could behave
identically in all situations (given that the programs are not toy
examples but real-world applications).
Moreover, as the return type is now occupied by the error code, you have
to use reference parameters for the returned results. This leads to
badly readable code for mathematical expressions. Last but not least,
you always have to ask newly created objects whether they are usable, as
constructors cannot return error codes.

To cut a long story short, I believe in a lot of cases programs employing
traditional error handling are only faster and smaller because they
almost never reach the level of correctness of programs employing
exceptions.

Regards,

Andreas

Andreas Huber

unread,
Feb 16, 2003, 6:55:59 PM2/16/03
to
> >Ok, I knew this point will be coming from someone of the list regulars, and
> >you are of course right. After a few projects where I have seen people fall
> >back into the error-code return scheme for no apparent reason I tend to
> >press this beyond of what is reasonable. Simply because it's easier to fix
> >cases where people throw exceptions and should not than the other way round.
> >BTW, in my last project (~0.25Mloc) there was not a single case where people
> >should have used error-code returns rather than exceptions! I believe
> >exceptions are the right choice for 99.9% of your everyday error-reporting
> >needs.
>
> If that is true you are programming in a very special problem domain.
> The choice is not just between throwing an exception and an error code.
> There are other solutions that are relevant in other cases; even new
> offers a more sophisticated set of choices.

I don't follow. BTW, the project was the software for an ATM running on
Win2000...

Regards,

Andreas

Andreas Huber

unread,
Feb 16, 2003, 6:56:26 PM2/16/03
to
Amir,

> Usually I have a class that completely covers a certain problem domain
> and handles all errors within it. It exposes an error flag or state, a
> class-specific error code that signals whether some special action
> should or can be taken to recover, a general error code as returned by
> the system, and a routine to generate an error message that can use
> class information (the name of the file, for a file class, for
> example). Methods of the class return only bool: did everything go all
> right or not.
>
> Using that methodology with error codes is straightforward; with
> exceptions it is cumbersome.

Well, if you are absolutely sure that your immediate caller can handle
_all_ runtime errors then returning an error code can indeed be an
option. However, as I have explained in other posts, I believe this is a
_very_ special situation and for the rest of your program you would most
probably want to employ exceptions, so why complicate things with two
types of error handling?

> And next: usually the error is spotted by a service thread, which can
> neither handle it nor has a caller to propagate it to.

A thread has a caller of sorts, i.e. the other thread by which it was
started. The other thread will at some point want to collect the results
calculated by this thread, and that's also where you could transmit your
exception. I know this is not possible with normal (std) exceptions, but
when you've had the foresight to give your exceptions a clone method,
transmitting to and rethrowing in the other thread works just fine.

Regards,

Andreas

Andreas Huber

unread,
Feb 16, 2003, 6:56:54 PM2/16/03
to
> I hope you are thinking of the basic guarantee and not the
> strong. IMO, especially when dealing with 3rd party libs, the
> strong guarantee is too hard to provide in all but a few places.

Yep, I had in mind _at_least_ the basic guarantee. Depending on your
needs you might want to implement the strong guarantee, which is indeed
harder in some cases...

Regards,

Andreas

Andreas Huber

unread,
Feb 17, 2003, 12:25:17 PM2/17/03
to
Julian,

> > I didn't say that the compiler could have checked this. There are
> > really two cases here:
> > 1) The index depends on user input. In this case you'd better check
> > for out of range situations before passing to operator[].
>
> Why? Using at is simpler. In general, the class can detect error
> conditions much better than the code that invokes one of its functions.

You are of course right. What was I thinking? ;-)

Regards,

Andreas

James Kanze

unread,
Feb 17, 2003, 3:01:05 PM2/17/03
to
"Andreas Huber" <spam...@gmx.net> wrote in message
news:<3e4fd3e7$1...@news.swissonline.ch>...

> For the rest (I believe the vast majority of all written lines of C++
> code), the following thought experiment explains why exceptions are
> superior to error-code returns: Two programs are written for the same
> hardware. Both programs are _absolutely_identical_ in functionality
> but one makes full use of exceptions while the other has exceptions
> disabled and works with error code returns only. The type of
> functionality does not matter much, but let's assume that the programs
> must deal with quite a few different types of runtime errors
> (out-of-memory, I/O problems, etc.)

What's in the etc.? What about a compiler, for example? Would you
consider an error in the program being compiled an error? (I wouldn't;
it's an expected situation. And I certainly wouldn't use exceptions to
handle it.)

> Both implement "perfect" error handling, i.e. for all runtime errors
> that could possibly happen both programs have to try a remedy. I
> believe both programs should roughly be in the same league for
> executable size and runtime performance (on some platforms the program
> employing exceptions could even be _faster_ and _smaller_ but I won't
> go into details for now). Why? Well, consider how you have to write
> the program that does not use exception handling: After almost each
> and every function call you have to insert "if (error) return
> error;".

Nonsense. Only after function calls that can fail.

What do you do if you run out of memory. A lot of programs (compilers,
etc.) should simple fail in such cases. For a lot of programs, the
probability is so low, and the possibilities for recovery so poor, that
failing is acceptable as well. Such programs replace the new handler,
and so never see an std::bad_alloc. (With a lot of compilers, this has
already been done for you:-(. And on some systems, like Linux, your
program, or some program, will just crash.)

What is the probability of a write failing? It's pretty low on my
machines, and it's perfectly acceptable to only test the results after
the final close (which you have to do anyway, and you can't use an
exception here anyway without wrapping it, since in case of other
exceptions, it's likely to be called when unwinding the stack). And
that's only one if in the entire program.

Things like opening a file should be tested and handled immediately, and
the actual processing will never begin. No problem of propagating
deeply here.

What types of errors are you thinking of that require if's all over the
place?

> Because in C++ one line of code often results in more than one
> function call (copy constructors, overloaded operators, etc.) you are
> forced to tear apart a lot of expressions. For example, the following
> results in 3 function calls for the expression z = .... alone and
> every single one could result in an exception being thrown (because
> all may allocate memory)!

> matrix z, a, b, c;
> // fill a, b, c ....
> z = a * b * c;

I think that everyone is in agreement that in such trivial cases, new
should throw if it doesn't abort the program. There are other
solutions, however, and they don't necessarily result in more if
statements being written. (We used them before exceptions existed.)
The most usual is simply to mark the object as bad, and continue. A bit
like NaN in IEEE floating point. A lot more if's get executed and the
run-time is probably a little slower than with exceptions, but the code
size is probably smaller. With exceptions, I need table entries for all
of the call spots where an exception may propagate. If I use deferred
error checking, like with non-signaling NaN's, the only place I need to
check for an error is at the end of the expression.

> Tearing appart all the expressions in your program in such a way and
> inserting all the necessary if (error) return error; statements is not
> only extremely tedious but also insanely error-prone. So error-prone,
> that it's almost impossible that the two programs could possibly
> behave identical in all situations (given that the programs are not
> toy examples but real-world applications). Moreover, as the return
> type is now occupied by the error-code you have to use reference
> parameters for the returned results. This leads to badly readable code
> for mathematical expressions. Last but not least, you always have to
> ask newly created objects whether they are useable as constructors
> cannot return error-codes.

Interestingly, I've rarely seen mathematical code which uses signaling
NaN's. I don't know whether it's because the mathematicians consider
the code less clean with the asynchronous interuptions, or for some
other reasons.

> To cut a long story short, I believe in a lot of cases programs
> employing traditional error-handling are only faster and smaller
> because they almost never reach the level of correctness of programs
> employing exceptions.

My experience to date has been that programs employing exceptions are
almost never correct, where as programs with return values often are.
People like David Abrahams have been making an enormous effort, both in
developing new programming idioms and in educating people about them;
it's largely a result of such efforts that I even consider exceptions.
But if I look at most of the existing code at my client sites, there's
still a lot to be done, especially with regards to education.

If you can afford to punt on insufficient memory, and abort, you can
probably use all of the standard library without seeing a single
exception. If you have a large existing code base that you have to live
with, that's probably your only choice; code written five or more years
ago is NOT exception safe. If you're doing a green fields project,
exceptions are definitely worth considering for some types of errors,
provided you can be sure that the people working on the project are up
to date, and know how to program with them. I'd still avoid them for
most run-of-the-mill errors; exceptions are best when there aren't any
try blocks, and if you have to handle the error at that call site, that
means a try block for each call site.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

James Kanze

unread,
Feb 17, 2003, 3:10:18 PM2/17/03
to
"Andreas Huber" <spam...@gmx.net> wrote in message
news:<3e4e743c$1...@news.swissonline.ch>...

> > > Never ever report any errors with return-values. It's that simple!

> > So simple that no respeceted expert has made this recommendation to
> > date. Exceptions aren't a silver bullet. Experts disagree as to
> > when and where they should be used. But no one says that they are
> > the only reasonable solution, regardless of the case.

> Ok, I knew this point will be coming from someone of the list
> regulars, and you are of course right. After a few projects where I
> have seen people fall back into the error-code return scheme for no
> apparent reason I tend to press this beyond of what is reasonable.

I've seen just the opposite. I've seen a lot of code written with that
attitude that if you don't know what to do, throw an exception. There's
obviously a middle road, but I've found that about 90% of the time, a
return code is more appropriate than an exception.

Maybe that's just because I expect most errors, and don't consider them
exceptional. Or that I'm used to thinking of error processing
(detection, propagation and handling) as part of the algorithm. Or that
I've noticed that most of the people throwing exceptions like wild don't
have the foggiest notion as to what exception safety means (and have
never heard of smart pointers, or RAII). This last problem is, of
course, one of education. But it's one many of us still have to deal
with.

> Simply because it's easier to fix cases where people throw exceptions
> and should not than the other way round.

I'm not sure what you mean here. If the client code was written to use
exceptions, it's going to silently ignore return values. If it was
written to use return values, it likely won't compile with exceptions.

> BTW, in my last project (~0.25Mloc) there was not a single case where
> people should have used error-code returns rather than exceptions!

You mean you never opened a file whose name was provided by the user.

> I believe exceptions are the right choice for 99.9% of your everyday
> error-reporting needs.

I think it depends on the type of application. Most of my work is on
large servers; exceptions are useful there for aborting requests without
bringing the system down. But I can't think of anywhere in a compiler
where they would be appropriate. And I'm sceptical about graphic
clients, although I'll admit that my scepticism may be partially based
on my negative experience with exceptions in Java.

> > Letting a program stumble on after undefined behavior is never
> > really an appropriate solution. If it happens, it is because we are
> > human, and we make errors. Most of the time, the most appropriate
> > action to take in case of an error is to abort with an error message
> > -- this would be a very good implementation specific behavior for
> > the undefined behavior in operator[]. A few, rare applications
> > should try and recover. These should use at.

> I don't see your point, as some applications simply cannot _afford_ to
> detect the argument to operator[] being out of bounds (in release mode
> of course).

My point is that you shouldn't release code with an out of bounds
operator[]. And if you are testing it (which is nice whenever you can
afford it), the correct response is probably to abort, rather than to
throw at an unexpected moment.

> > Operator new was also introduced before the dawn of exceptions.
> > There is no non-EH behavior for backward compatibility. IO is a
> > funny case;

> Yes there is: new (nothrow)

How does that solve a backward compatibility problem?

> > the default error reporting is probably the best solution most of
> > the time, although there aren't very many other cases when such a
> > strategy would be appropriate.

> > Exceptions might be appropriate for bad() in iostream. They might
> > also be appropriate when reading temporary files which were written
> > by the same program just before -- if you write 100 bytes, seek to
> > the start, and a read of 100 bytes fails, there is probably
> > something seriously wrong. But such cases aren't the rule.

> Ok, I failed to say that I only turn on exceptions for eof in about
> half the cases. However, I use exceptions for fail and bad.

I've never used an exception for eof. Nor for fail. I can see it for
bad, in some cases. Generally speaking, however, it just doesn't seem
worth the hassle.

> > > See above, don't do that. One problem with error-codes is that you
> > > can ignore them _without_a_trace_in_your_code_.

> > There are ways of avoiding that.

> Yep, there sure are ways. However, you still have to use exceptions
> for constructor failures which leaves you with two different
> approaches of error reporting.

You always have the alternative of making constructors which can't fail.
Generally, if something can fail, and the failure can be handled
locally, I'd avoid doing it in a constructor.

> Unless there are very strong reasons not to do so, I tend to go for
> the KISS approach in such cases. That's why I recommend to use
> exceptions for all runtime-error reporting.

I'd say that KISS argues for using return codes everywhere:-).

The official rational is that some people had code which was prepared to
handle out of memory locally, at the point of the new, and that
exceptions weren't appropriate for these cases. Since the cases are
pretty much the exceptions, the default new throws, and you need a
special form for the no throw version.

There are, of course, also cases where you simply cannot afford to
throw. You certainly wouldn't want to use the normal new in a tracing
mechanism (where it might be called during stack walkback due to another
exception); you use new (nothrow), and if it fails, you fall back to
unbuffered output, but you don't throw, whatever happens.

> In fact, all the cases you mentioned can just as well be handled with
> exceptions without any major disadvantages.

> 1) Are you saying that you would use new (nothrow) in short running
> programs? Why not use new and not handle the exception, this will
> automatically lead to abort() being called.

You don't want abort. You want to display a reasonable error message,
and you don't want the core dump. The easiest solution is the one
designed for this case: replace the new handler with one which does what
you want.

> 2) We agree ;-)

> 3) What do you mean with critical? Realtime?

Critical, like if the program fails, something bad happens. I've worked
on things like locomotive brake systems, for example. Almost by
definition, you can run out of a dynamic resource; whether you throw an
exception, return NULL or abort in the new handler, the train is going
to crash. And the railroads don't like that.

On large systems, the critical parts will usually be isolated on low
level controls today, so that the main part of the control software can
be written "normally".

> 4) I might not have as much experience as you do, but I have so far (8
> years of C++ programming) not come across a single case where you
> could have handled an out of memory situation locally (right after
> calling new). Even if you could, why not use normal new and put a
> try-catch around it?

I've not run into them in my personal experience, but I know that they
exist. An obvious example might be an editor -- when the call to new
fails, it spills part of the file it is editing to disk, frees up the
associated memory, and tries again. In general, I'd expect the case to
occur in any system which has its own memory management, and only uses
operator new to get the blocks it manages.

> > The fact that the experts felt it necessary to provide a new which
> > reports errors by return code says a lot.

> As mentioned above I don't know the rationale and I couldn't find one
> either but there are platforms that have until very recently not
> supported exception handling (WinCE). To be standards conformant, such
> platforms couldn't possibly support normal new but only new
> (nothrow). To me this is a far stronger case for having new (nothrow).

Most such platforms probably use some sort of embedded C++, in which the
standard new returns NULL, rather than throwing. It was never the
intent that a platform should be allowed to support new (nothrow)
without supporting the normal new.

> > That is the only benefit. It's not a negligible benefit in cases
> > where the error will be passed up a long chain. It's not a real
> > benefit at all where the error will be treated immediately by the
> > caller.

> How do you know that your immediate caller will be able to handle the
> error? IMO, there's no way to tell but I'd be very interested if you
> have come up with an _easy_ scheme that allows you to do so.

When in doubt, suppose that he does. It's easier (and a lot cheaper) to
convert a return code into an exception than it is to convert an
exception into a return code.

In the end, of course, it is an educated guess. It's highly unlikely
that you can handle running out of memory locally, unless you are using
some other memory management above operator new. On the other hand,
it's really rare that you don't handle file not found locally; a system
which forces a throw for a failure to open a file is just going to cause
extra work for its users.

> > You use exceptions for exceptional cases, where there is no chance
> > of local recovery. You use return codes in all other cases.

> Again, how do you know who your caller is and what he does? Please
> give a simple rule to find out whether an error can be handled by your
> immediate caller or not. Honestly, even if such a rule existed I would
> still opt for the exception-only approach, for KISS reasons.

> > We must encounter different types of errors. For the most part,
> > with the exception of things like insufficient memory, I find myself
> > handling errors one or two levels above where they occur.

> This is contrary to my experience, please give an example.

> > > > Uh... well that would create a a havoc since we often can/like
> > > > ignore error codes but not exceptions....(ever checked returned
> > > > code from printf ?)

> > > It's always better to leave the decisition to actively ignore an error
> > > to your clients.

> > You don't have the choice. If the client wants to ignore the error, he
> > will. If he wants to treat the error, he will.

> I was referring to the following: As long as you can't handle the
> runtime error locally you better inform your client instead of
> ignoring it and grinding on.

Totally agreed. Before exceptions, we had the case where most
programmers would do everything possible to handle the error locally,
and ignore it otherwise. With exceptions, I see most programmers making
no effort to handle an error locally, but just throwing an exception
anytime they aren't sure what to do. Neither situation is acceptable.

Other than that, the only errors I've not been able to handle locally
are those related to insufficient resources (mainly memory), and those
due to failure of the underlying model. If carefully planned for, some
of the insufficient resources can be handled by backing out of the
transaction, request or whatever, and just aborting it, instead of the
entire program. Such errors are candidates for exceptions. (As an
example of what I mean by "carefully planned for", you know when you
invoke operator new, so you can plan for it. No such luck with stack
overflow, however.) If carefully planned for, some of the failures in
the underlying model can be planned for and handled by backing out as
well. This is often the case of ios::bad, at least as long as the
actual writing is in some way synchronized with output requests -- not
necessarily one write per request, but at least no error except during a
request. Others, like say a parity failure on memory read, can't be.

While I'm at it, I might mention that istream is an awkward case,
because the standard makes no provision for distinguishing the type of
an error. Throwing on eof can only be a programming error, since it
will cause some reads to fail that would otherwise succeed. Throwing on
fail means that you get an exception on every possible condition,
without the slightest means of knowing whether it was due to end of
file, a format error, or a hardware read error. And throwing on bad is
useless, because istream never sets bad. In practice, the possibility
of throwing in an istream is worthless. But this isn't an argument
against exceptions in general; it is just due to a faulty interface.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Bronek Kozicki

unread,
Feb 18, 2003, 12:40:45 PM2/18/03
to
LLeweLLyn <llewe...@xmission.dot.com> wrote:
> I hope you are thinking of the basic guarantee and not the
> strong. IMO, especially when dealing with 3rd party libs, the
> strong guarantee is too hard to provide in all but a few places.

AFAIR the basic guarantee says that an object (when an exception happens)
must be in a state consistent enough to be destroyed without leaking
resources. Nothing else is guaranteed, am I right?


B.

Rob

unread,
Feb 19, 2003, 5:56:33 AM2/19/03
to

"Bronek Kozicki" <br...@rubikon.pl> wrote in message
news:10454198...@cache1.news-service.com...

> LLeweLLyn <llewe...@xmission.dot.com> wrote:
> > I hope you are thinking of the basic guarantee and not the
> > strong. IMO, especially when dealing with 3rd party libs, the
> > strong guarantee is too hard to provide in all but a few places.
>
> AFAIR the basic guarantee says that an object (when an exception
> happens) must be in a state consistent enough to be destroyed without
> leaking resources. Nothing else is guaranteed, am I right?
>

Not quite. The basic guarantee requires that there will be no resource
leaks, and the object will be both usable and destructible (but it is not
required to be in a predictable state).
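[Editorial sketch, not part of the original post; the class and its failure condition are made up.] The difference can be illustrated in a few lines: append() gives only the basic guarantee -- nothing leaks and the object stays usable and destructible, but its two vectors can end up out of step -- while append_strong() gives the strong guarantee by doing all throwing work on a copy and committing with a non-throwing swap.

```cpp
#include <cassert>
#include <stdexcept>
#include <utility>
#include <vector>

// Hypothetical invariant: vectors a and b should stay the same length.
struct Pair {
    std::vector<int> a, b;

    void append(int x) {                  // basic guarantee only
        a.push_back(x);
        if (x < 0) throw std::runtime_error("simulated failure");
        b.push_back(x);                   // skipped on throw: a and b differ
    }

    void append_strong(int x) {           // strong guarantee
        Pair tmp(*this);                  // do all throwing work on a copy
        tmp.a.push_back(x);
        if (x < 0) throw std::runtime_error("simulated failure");
        tmp.b.push_back(x);
        std::swap(*this, tmp);            // commit: swapping cannot throw
    }
};
```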

Andreas Huber

unread,
Feb 19, 2003, 5:58:44 AM2/19/03
to
[ ... ]

> > disabled and works with error code returns only. The type of
> > functionality does not matter much, but let's assume that the programs
> > must deal with quite a few different types of runtime errors
> > (out-of-memory, I/O problems, etc.)
>
> What's in the etc.?

Etc. could stand for failures of whatever hardware you have to control.
Another thing is thread cancellation (wait functions fail with an exception,
so that a thread can be interrupted without leaking resources).

> What about a compiler, for example? Would you
> consider an error in the program being compiled an error? (I wouldn't;
> it's an expected situation. And I certainly wouldn't use exceptions to
> handle it.)

Haven't got any non-toy experience with parsers but from what I've seen so
far I'd probably agree.

> > Both implement "perfect" error handling, i.e. for all runtime errors
> > that could possibly happen both programs have to try a remedy. I
> > believe both programs should roughly be in the same league for
> > executable size and runtime performance (on some platforms the program
> > employing exceptions could even be _faster_ and _smaller_ but I won't
> > go into details for now). Why? Well, consider how you have to write
> > the program that does not use exception handling: After almost each
> > and every function call you have to insert "if (error) return
> > error;".
>
> Nonsense. Only after function calls that can fail.

As pointed out above, with thread cancellation a lot of functions can
fail. However, even if you don't need thread cancellation, I think it
is better to program as if almost all functions can fail with runtime
errors. Why? Before I started working on projects that had such a
coding policy, the following scenario was all too familiar:

1. Programmer A writes a function that does not return an error-code.
2. Programmers B, C, D use the function in their code.
3. Some time later (days, weeks, months or even years), programmer A decides
that the function now can fail and henceforth returns an error-code. For
some reason, programmer A fails to ensure that all the necessary changes are
made. I don't really blame him, as the necessary changes could be quite
extensive and even the best programmers cut corners during a schedule
crunch.
4. The program now exhibits strange behavior (crashes under load,
functionality is different under certain situations, etc.)
5. Programmer C starts debugging and after a really long session finally
finds the bug and fixes it. It took that long because the bug surfaced in a
completely different corner of the program than where the problem was.

Of course points 1-3 also happen in programs making full use of exceptions.
However, the resulting bugs are so much easier to diagnose as the newly
thrown exception is almost always unexpected and leads to abort() (or
graceful shutdown, etc.). The fix usually also takes a lot less time as you
don't have to change the interfaces for code that only propagates the
exceptions.

As you pointed out, it's of course possible to enforce checking of the
return value, which improves diagnosis but _not_ fixing, as you still
have to change the interfaces of functions that only propagate the
error. It was this last point, combined with all the other advantages
(ONE mechanism for all runtime error-reporting, no tedious IsOk() or
Init() after construction, most third-party libraries throw exceptions,
etc.), that really convinced me to switch to exceptions.
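[Editorial sketch, not part of the original post; the three-layer call chain is hypothetical.] The propagation cost described above looks like this in code: with error codes, every layer that merely forwards a failure needs an explicit check, and its signature changes the day a callee gains a new failure mode; with exceptions, the middle layers stay untouched.

```cpp
#include <cassert>
#include <stdexcept>

// Error-code style: each intermediate layer checks and forwards by hand.
enum class Err { ok, io_failure };

Err read_sector(bool fail) { return fail ? Err::io_failure : Err::ok; }

Err load_record(bool fail) {
    Err e = read_sector(fail);
    if (e != Err::ok) return e;        // "if (error) return error;"
    return Err::ok;
}

Err run_codes(bool fail) {
    Err e = load_record(fail);
    if (e != Err::ok) return e;        // ...repeated at every call site
    return Err::ok;
}

// The same chain with exceptions: layers that only propagate contain
// no error-handling code, and their interfaces never need to change.
int read_sector_x(bool fail) {
    if (fail) throw std::runtime_error("io failure");
    return 0;
}
int load_record_x(bool fail) { return read_sector_x(fail); }
int run_exceptions(bool fail) { return load_record_x(fail); }
```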

[...]


> Things like opening a file should be tested and handled immediately, and
> the actual processing will never begin. No problem of propagating
> deeply here.
>
> What types of errors are you thinking of that require if's all over the
> place?

My last project was an ATM where one of the biggest challenges was the
correct handling of hardware failures. Since you typically have about 8
external devices to control, and at the same time also have to
communicate over a network and log every step the program makes on two
different media (disk and paper), you pretty soon come to the
conclusion that about 80% of all functions can possibly fail (this is
just a very rough guess and does not include thread cancellation).
Moreover, there are pretty clear requirements for how the program must
behave when any hardware fails. In the end the project was 10% over
budget, and I'm quite sure that we wouldn't have made it even within
20% without exception handling.

[ ... ]


> I think that everyone is in agreement that in such trivial cases, new
> should throw if it doesn't abort the program. There are other
> solutions, however, and they don't necessarily result in more if
> statements being written. (We used them before exceptions existed.)
> The most usual is simply to mark the object as bad, and continue. A bit
> like NaN in IEEE floating point. A lot more if's get executed and the
> run-time is probably a little slower than with exceptions, but the code
> size is probably smaller. With exceptions, I need table entries for all
> of the call spots where an exception may propagate. If I use deferred
> error checking, like with non-signaling NaN's, the only place I need to
> check for an error is at the end of the expression.

So it could be that a program uses three different ways of reporting
failures: error-codes, exceptions and NaNs?

> My experience to date has been that programs employing exceptions are
> almost never correct, where as programs with return values often are.

I've seen really bad abuses of exception handling as well but I have yet to
see a system with traditional error handling which comes close to being
correct. Maybe I'm just working in the wrong field?

[ ... ]


> If you're doing a green fields project,
> exceptions are definitly worth considering for some times of errors,
> provided you can be sure that the people working on the project are up
> to date, and know how to program with them.

Absolutely, proper education is crucial.

> exceptions are best when there aren't any
> try blocks,

Wild agreement! I don't have the code of my last project handy, but I'd
guess we have no more than about 100 try blocks in our program (~.25Mloc).

Regards,

Andreas

LLeweLLyn

unread,
Feb 19, 2003, 5:59:28 AM2/19/03
to
"Bronek Kozicki" <br...@rubikon.pl> writes:

> LLeweLLyn <llewe...@xmission.dot.com> wrote:
> > I hope you are thinking of the basic guarantee and not the
> > strong. IMO, especially when dealing with 3rd party libs, the
> > strong guarantee is too hard to provide in all but a few places.
>
> AFAIR the basic guarantee says that an object (when an exception
> happens) must be in a state consistent enough to be destroyed without
> leaking resources. Nothing else is guaranteed, am I right?

[snip]

Yes. Here are the definitions I use:

http://www.boost.org/more/generic_exception_safety.html

and scroll down to item 3.

Andreas Huber

unread,
Feb 19, 2003, 6:10:27 AM2/19/03
to
[...]

> > Ok, I knew this point will be coming from someone of the list
> > regulars, and you are of course right. After a few projects where I
> > have seen people fall back into the error-code return scheme for no
> > apparent reason I tend to press this beyond of what is reasonable.
>
> I've seen just the opposite. I've seen a lot of code written with that
> attitude that if you don't know what to do, throw an exception. There's
> obviously a middle road, but I've found that about 90% of the time, a
> return code is more appropriate than an exception.

I know what you mean, I've seen it as well. However, I would expect that the
same people who throw like wild would also return error-codes like wild ;-).

[ ... ]


> Or that
> I've noticed that most of the people throwing exceptions like wild don't
> have the foggiest notion as to what exception safety means (and have
> never heard of smart pointers, or RAII).

Yeah, and not surprisingly those people are often wild supporters of SESE
(single entry, single exit) as they would otherwise face the same resource
leaking problems as with exceptions.

> > Simply because it's easier to fix cases where people throw exceptions
> > and should not than the other way round.
>
> I'm not sure what you mean here. If the client code was written to use
> exceptions, it's going to silently ignore return values. If it was
> written to use return values, it likely won't compile with exceptions.

If someone chooses to throw an exception but should have returned an error
code then this problem is easier to detect than if someone chooses to return
an error-code but should have thrown an exception.

> > BTW, in my last project (~0.25Mloc) there was not a single case where
> > people should have used error-code returns rather than exceptions!
>
> You mean you never opened a file whose name was provided by the user.

Luckily ATMs don't let you do that ;-)

> > I believe exceptions are the right choice for 99.9% of your everyday
> > error-reporting needs.
>
> I think it depends on the type of application. Most of my work is on
> large servers; exceptions are useful there for aborting requests without
> bringing the system down. But I can't think of anywhere in a compiler
> where they would be appropriate.

Agreed.

> And I'm sceptical about graphic
> clients, although I'll admit that my scepticism may be partially based
> on my negative experience with exceptions in Java.

Out of curiosity: What's so bad about exceptions in Java?

[ ... ]


> > I don't see your point, as some applications simply cannot _afford_ to
> > detect the argument to operator[] being out of bounds (in release mode
> > of course).
>
> My point is that you shouldn't release code with an out of bounds

I totally agree. However, unless you use some kind of static analysis or
really skeptical human reviewers with a lot of time, I bet there are a few
program logic errors in just about any release of any real-world program. As
some programs simply cannot afford to detect such errors in a release, they
will thus run into undefined behavior.

> operator[]. And if you are testing it (which is nice whenever you can
> afford it), the correct response is probably to abort, rather than to
> throw at an unexpected moment.

As I write code with the assumption that almost any function can fail,
an exception is never unexpected. Moreover, because an ATM should avoid
running into undefined behavior at all costs, we throw program logic
exceptions even in release mode. They are never caught without being
rethrown and will thus lead to a graceful termination of the program. I
know this is a policy that a lot of experts would disagree with (they
would just call abort(), which then brings the program down) but it
works quite nicely and allows us to keep the watchdog simple.

> > > Operator new was also introduced before the dawn of exceptions.
> > > There is no non-EH behavior for backward compatibility. IO is a
> > > funny case;
>
> > Yes there is: new (nothrow)
>
> How does that solve a backward compatibility problem?

Agreed, I guess it was a bit late :-).

> I've never used an exception for eof. Nor for fail. I can see it for
> bad, in some cases. Generally speaking, however, it just doesn't seem
> worth the hassle.

[ ... ]


> > Yep, there sure are ways. However, you still have to use exceptions
> > for constructor failures which leaves you with two different
> > approaches of error reporting.
>
> You always have the alternative of making constructors which can't fail.
> Generally, if something can fail, and the failure can be handled
> locally, I'd avoid doing it in a constructor.

What do you do when functions are called on a half-constructed object (the
caller forgot to call Init(), Open() or whatever)?
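[Editorial sketch, not part of the original post; both classes are made up.] The half-constructed-object problem can be shown in a few lines: the two-phase class must guard every member function against a forgotten Init(), and the guard itself needs a failure path, while a throwing constructor guarantees that any object you can name is usable.

```cpp
#include <cassert>
#include <stdexcept>

// Hypothetical two-phase class: every member must defend against the
// caller forgetting Init().
class TwoPhase {
    bool initialized_ = false;
public:
    bool Init() { initialized_ = true; return true; }
    int Value() const {
        if (!initialized_) throw std::logic_error("Init() not called");
        return 42;
    }
};

// With a throwing constructor there is no half-constructed state to
// defend against: construction either succeeds fully or throws.
class OnePhase {
    int value_;
public:
    explicit OnePhase(bool fail) : value_(42) {
        if (fail) throw std::runtime_error("construction failed");
    }
    int Value() const { return value_; }
};
```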

[ ... ]

> There are, of course, also cases where you simply cannot afford to
> throw. You certainly wouldn't want to use the normal new in a tracing
> mechanism (where it might be called during stack walkback due to another
> exception); you use new (nothrow), and if it fails, you fall back to
> unbuffered output, but you don't throw, whatever happens.

You mean you don't want to let an exception propagate out of a destructor. I
can't see why you are not allowed to throw or use normal new.
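[Editorial sketch, not part of the original post; the trace routine is hypothetical.] The nothrow-fallback idea from the quoted paragraph looks roughly like this: a tracing facility that may run during stack unwinding must never throw, so it allocates with new(nothrow) and degrades to unbuffered output when that fails.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdio>
#include <new>

// Hypothetical tracing routine: may run during stack unwinding, so it
// must never throw.  Returns false when it had to use the fallback path.
bool trace(const char* msg, std::size_t len) {
    char* buf = new (std::nothrow) char[len + 1];
    if (buf == nullptr) {
        std::fwrite(msg, 1, len, stderr);   // fallback: no allocation at all
        return false;                       // degraded mode, but no throw
    }
    for (std::size_t i = 0; i < len; ++i) buf[i] = msg[i];
    buf[len] = '\0';
    std::fputs(buf, stderr);                // normal (buffered) path
    delete[] buf;
    return true;
}
```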

[ ... ]


> Critical, like if the program fails, something bad happens. I've worked
> on things like locomotive brake systems, for example. Almost by
> definition, you can run out of a dynamic resource; whether you throw an
> exception, return NULL or abort in the new handler, the train is going
> to crash. And the railroads don't like that.

Agreed.

> In general, I'd expect the case to
> occur in any system which has its own memory management, and only uses
> operator new to get the blocks it manages.

Good point.

> > How do you know that your immediate caller will be able to handle the
> > error? IMO, there's no way to tell but I'd be very interested if you
> > have come up with an _easy_ scheme that allows you to do so.
>
> When in doubt, suppose that he does. It's easier (and a lot cheaper) to
> convert a return code into an exception than it is to convert an
> exception into a return code.

As the thrown exception is not part of the interface, this may lead to a few
different exceptions being thrown to report exactly the same error. It
works, but it will make it more difficult to understand the error handling.

> On the other hand,
> it's really rare that you don't handle file not found locally; a system
> which forces a throw for a failure to open a file is just going to cause
> extra work for its users.

On the ATM we open a file the name of which is given in a configuration
file. If the file cannot be opened, the configuration is corrupt and the
only thing we can do is shutdown. So I simply let the stream throw.

[ ... ]


> Totally agreed. Before exceptions, we had the case where most
> programmers would do everything possible to handle the error locally,
> and ignore it otherwise. With exceptions, I see most programmers making
> no effort to handle an error locally, but just throwing an exception
> anytime they aren't sure what to do. Neither situation is acceptable.

Agreed.

> Other than that, the only errors I've not been able to handle locally
> are those related to insufficient resources (mainly memory), and those
> due to failure of the underlying model.

I'd add external hardware to the list.

[ ... ]


> While I'm at it, I might mention that istream is an awkward case,
> because the standard makes no provision for distinguishing the type of
> an error. Throwing on eof can only be a programming error, since it
> will cause some reads to fail that would otherwise succeed.

True for some cases, but I'd claim that it makes perfect sense in others.
I'll dig out an example...

> Throwing on
> fail means that you get an exception on every possible condition,
> without the slightest means of knowing whether it was due to end of
> file, a format error, or a hardware read error.

Sometimes you don't care, see above.

> And throwing on bad is
> useless, because istream never sets bad.

Interesting, I didn't know.

Regards,

Andreas

LLeweLLyn

unread,
Feb 19, 2003, 10:17:55 AM2/19/03
to
"Andreas Huber" <spam...@gmx.net> writes:

> Francis,
>
> > The problem with using any form of new other than new(nothrow) is that
> > the implementation just about has to put in all the exception handling

> After almost each and every function call you have to
> insert "if (error) return error;".

'almost each and every'? Nonsense. In most projects I've worked on,
the majority of functions can't fail. Now I can imagine projects
where functions that can't fail are the minority, but I believe
they remain an important minority (as opposed to the insignificant
minority you imply).

> Because in C++ one line of code often
> results in more than one function call (copy constructors, overloaded
> operators, etc.) you are forced to tear apart a lot of expressions. For
> example, the following results in 3 function calls for the expression
> z = .... alone and every single one could result in an exception being
> thrown (because all may allocate memory)!
>
> matrix z, a, b, c;
> // fill a, b, c ....
> z = a * b * c;

Math functions that use return codes to signal out of memory errors?
*shiver*. Have you ever encountered a matrix, vector, tensor, or
bigint class whose binary operator* returns a special value for an
out of memory error? Would you dare to use it if you did? (Of
course *you* wouldn't; you would want exceptions to be thrown. But
even where exceptions are unavailable, I've never seen math
functions use return codes to signal non-math errors.)

Maybe I'm biased. I've mostly used matrix and vector classes in games,
where only 2d, 3d and 4d matrices and vectors are used, and they're
such a big performance bottleneck no-one dares allocate memory in
them, or check them for errors. And important enough that somebody has
rewritten them in assembler, for each of 3 different platforms.

In some of the projects I've worked on, the best way to handle out of
memory errors has been 'dump_memory_control_blocks();abort();' ;
there was only one program running (which had to push 100s of
thousands of textured polys per frame), no OS, no VM MMU, and the
smallest amount of RAM the hardware vendor thought they could get
away with. Exceptions were disabled by default, and return codes
were used for things like forcing the CD/DVD reading functions to
retry on failure.

[snip]


> To cut a long story short, I believe in a lot of cases programs
> employing traditional error-handling are only faster and smaller
> because they almost never reach the level of correctness of programs
> employing exceptions.

[snip]

Given the reliability of certain programs which never use exceptions
and the bugginess of certain competitors which use exceptions
throughout, I believe there are factors (such as good
design, good testing, etc) which are far more important to
correctness than exceptions vs error-return codes. I'll believe
that exceptions can express error-handling relationships return
codes cannot, and I'll believe judicious application of those
relationships can improve program correctness (and maybe even
performance if one has a range-table implementation of
exceptions), but they don't cure cancer, the common cold, or world
hunger.

Furthermore I still believe that there are times when one can expect
one's immediate caller to handle the error, and in such cases,
throwing an exception is inefficient, unnecessary, *and* often
no less error-prone.

Igor Ivanov

unread,
Feb 19, 2003, 12:26:25 PM2/19/03
to
ka...@gabi-soft.de (James Kanze) wrote in message
news:<d6651fb6.03021...@posting.google.com>...
[snip]

> My point is that you shouldn't release code with an out of bounds
> operator[]. And if you are testing it (which is nice whenever you can
> afford it), the correct response is probably to abort, rather than to
> throw at an unexpected moment.
[snip]

It's quite acceptable in many interactive programs to *not abort* on
out of bounds. Imagine something like MS Office which can do a
thousand different things. When one single (or even several) piece of
functionality fails because of an index out of bounds, the exception
is caught by the outermost handler, it displays a message, and the
program goes on waiting for further user's input. You can then do any
of the other 999 things which work.
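[Editorial sketch, not part of the original post; the command loop is hypothetical.] The policy described above can be sketched as an event loop in which each action runs under an outermost handler, so one action failing with out_of_range leaves the others available. Whether continuing after a logic error is wise is a separate question, disputed in the follow-up posts.

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <vector>

// Hypothetical event loop: each command runs under an outermost handler;
// one command failing does not stop the others.  Returns the number of
// commands that completed successfully.
int run_commands(const std::vector<std::function<void()>>& cmds) {
    int succeeded = 0;
    for (const auto& cmd : cmds) {
        try {
            cmd();
            ++succeeded;
        } catch (const std::exception& e) {
            // In a GUI this would be the "operation failed" dialog.
            (void)e;
        }
    }
    return succeeded;
}
```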

Regards
Igor

Andreas Huber

unread,
Feb 19, 2003, 3:47:19 PM2/19/03
to
[ ... ]

> > After almost each and every function call you have to
> > insert "if (error) return error;".
>
> 'almost each and every'? Nonsense. In most projects I've worked on,
> the majority of functions can't fail. Now I can imagine projects
> where functions that can't fail are the minority, but I believe
> they remain an important minority (as opposed to the insignificant
> minority you imply).

Well for me it was the opposite. My very first project was a green-fields
CAD-like project back in 1994, which - now that I'm thinking about it -
probably wouldn't have benefitted much from the use of exceptions (exception
handling wasn't yet in a mature enough state that we considered using it).
All other projects involved some form of external hardware, the control
of which was a central point in the application. The hardware could fail in
various ways (jammed paper in the printer, no paper in the printer, printer
out of order, jammed cash dispenser, jammed ATM card, unreadable ATM card,
etc. the list is virtually endless). The hardware was controlled through a
series of layers mostly implemented with state-machines. I very much believe
that it would have killed us, had error propagation not been automatic.
Last but not least, we also had the problem of thread cancellation,
which I believe can only be tackled with exceptions or some form of OS support.

> Math functions that use return codes to signal out of memory errors?
> *shiver*. Have you ever encountered a matrix, vector, tensor, or
> bigint class whose binary operator* returns a special value for an
> out of memory error? Would you dare to use it if you did? (Of
> course *you* wouldn't; you would want exceptions to be thrown. But
> even where exceptions are unavailible, I've never seen math
> functions use return codes to signal non-math errors.)

Of course I've never seen anything like this. I just wanted to point out
what you *would* have to do with _error-code_returns_ in case you wanted
to achieve the same level of correctness as with exceptions. Were I to
implement such a library without exceptions, I'd probably do what James
recommended: mark the result as invalid.
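[Editorial sketch, not part of the original post; the toy matrix is hypothetical, and a sentinel value stands in for a real allocation failure.] The mark-as-invalid approach works like a non-signaling NaN: a failed operation poisons the result, the poison propagates through the expression, and there is a single check at the end.

```cpp
#include <cassert>

// Hypothetical NaN-style matrix: failure marks the result invalid
// instead of throwing; the check is deferred to the end of the
// whole expression.
struct Mat {
    double v = 0.0;        // stand-in for the real element storage
    bool ok = true;

    friend Mat operator*(const Mat& a, const Mat& b) {
        Mat r;
        if (!a.ok || !b.ok) { r.ok = false; return r; }  // poison propagates
        // A real implementation would allocate here; we simulate an
        // allocation failure with a negative-value sentinel instead.
        if (a.v < 0 || b.v < 0) { r.ok = false; return r; }
        r.v = a.v * b.v;
        return r;
    }
};
```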

[ ... ]


> In some of the projects I've worked on, the best way to handle out of
> memory errors has been 'dump_memory_control_blocks();abort();' ;
> there was only one program running (which had to push 100s of
> thousands of textured polys per frame), no OS, no VM MMU, and the
> smallest amount of RAM the hardware vendor thought they could get
> away with. Exceptions were disabled by default, and return codes
> were used for things like forcing the CD/DVD reading functions to
> retry on failure.

James has already convinced me, I guess I wouldn't use exceptions here
anymore either.

[ ... ]


> Given the reliability of certian programs which never use exceptions
> and the bugginess of certain competitors which use exceptions
> throughout, I believe there are factors (such as good
> design, good testing, etc) which are far more important to
> correctness than exceptions vs error-return codes.

Definitely. But I'd claim that exceptions are _never_ the culprit, given
that your programmers know what they are doing!

[ ... ]


> Furthermore I still believe that there are times when one can expect
> one's immediate caller to handle the error, and in such cases,
> throwing an exception is inneficient, unecessary, *and* often
> no-less error-prone.

This I'll probably never agree with. To me it's still extremely difficult
to know whether your _immediate_ caller is able to handle an error. OK,
file open is probably a good example of an almost-always locally
handleable error. However, I sometimes find myself introducing another
abstraction layer to reduce complexity during iterative development,
which could of course move the handler one more level away from the
throw point. With exceptions this is much easier than with error-codes.
In general, a function cannot normally "know" how many abstraction
layers its clients use. Moreover, if you are working on a platform where
exceptions are enabled anyway, why not use exceptions throughout? This
saves you and your team from making a bunch of decisions most
programmers have problems with, and comes at a very low price (assuming
that your programmers are proficient in exception-safety issues).

Regards,

Andreas

Andrea Griffini

unread,
Feb 19, 2003, 3:49:36 PM2/19/03
to
On 19 Feb 2003 12:26:25 -0500, igiv...@yahoo.com (Igor Ivanov) wrote:

>It's quite acceptable in many interactive programs to *not abort* on
>out of bounds. Imagine something like MS Office which can do a
>thousand different things. When one single (or even several) piece of
>functionality fails because of an index out of bounds, the exception
>is caught by the outermost handler, it displays a message, and the
>program goes on waiting for further user's input. You can then do any
>of the other 999 things which work.

The big error in this reasoning is IMO that after a serious
logic error is found you can't assume that everything else
is in good shape and in a state foreseen by the programmer.
Why should the system be ok ? Surely not because you programmed
it correctly, because the presence of that logic error is
screaming that your logic is broken.

Common experience in programming is that the bug's manifestation is
quite often far (millions of instructions executed) from the bug
itself, and that an out-of-bounds access is in the normal case just a
bug manifestation that happens in a perfectly safe code section. Often
it's just the *victim* of the bug that goes crazy.

What does this mean? That if you get an out-of-bounds exception during
a recomputation of an embedded spreadsheet, the *worst* thing you can
do is allow the user to save the document: that is probably not really
the document any more, but just a pile of junk bytes without any
trustable meaning. Save that over the old copy and you have just
multiplied the damage: instead of losing half an hour, your user lost
a month (and s/he'll notice that two weeks from now).

If there are no pseudo-physical "barriers" that you can feel confident
about, then assuming that an out-of-bounds error in the spreadsheet
recomputation is a bug in the spreadsheet recomputation is quite naive.

That's why I think that allowing a process to continue after e.g. a
SIGSEGV is detected is pure nonsense in 99.999% of situations. And
under Windows it is IMO nonsense in 100% of them, because if you happen
to work in a field where trapping SIGSEGV and continuing is a must,
then your choice of Windows as OS is highly questionable.

Andrea

Hillel Y. Sims

unread,
Feb 20, 2003, 5:37:01 AM2/20/03
to
"Andrea Griffini" <agr...@tin.it> wrote in message
news:3e53d484...@news.tin.it...

> On 19 Feb 2003 12:26:25 -0500, igiv...@yahoo.com (Igor Ivanov) wrote:
>
> >It's quite acceptable in many interactive programs to *not abort* on
> >out of bounds. Imagine something like MS Office which can do a
> >thousand different things. When one single (or even several) piece of
> >functionality fails because of an index out of bounds, the exception
> >is caught by the outermost handler, it displays a message, and the
> >program goes on waiting for further user's input. You can then do any
> >of the other 999 things which work.
>
> The big error in this reasoning is IMO that after a serious
> logic error is found you can't assume that everything else
> is in good shape and in a state foreseen by the programmer.
> Why should the system be ok ? Surely not because you programmed
> it correctly, because the presence of that logic error is
> screaming that your logic is broken.

But std::logic_error is still part of well-defined behavior. If
some code can detect an attempt to perform undefined behavior
*before* the undefined behavior actually happens (eg attempt to
deref null ptr or out of bounds), maybe why not throw the
exception; there's no need to necessarily abort() immediately
always, because undefined beh