There has been a lot said about exceptions... all the good stuff.
All the bad stuff is on another thread :-)
However, one issue has typically been side-stepped: when is it a
good idea to throw, and when to return a code? Let me explain...
Primarily, I feel the following are the main issues to understand
in order to make effective use of exceptions:
1) How to write (exception-safe/neutral) code in the face of
exceptions
2) Catching exceptions and doing something about them before continuing
3) When your function/method should resort to throwing an exception
4) Changing something and retrying the operation
( maybe change several things, but one at a time, retrying after
each change )
The "Exceptional C++" series does a great job especially with issues 1
and 2.
And almost nothing on part 3 or 4.
But the issue # 3 is really the first problem that needs to be
understood
to use exception for doing something about runtime errors in code.
I havent seen this being discussed just as much as the other related
issues.
even though I have seen an article or two titled something like
"to throw or not to throw".
If you look at the standard library in this matter... the strategies
are:
- Have two versions, one that throws and one that doesn't (e.g. at(),
operator[] -- sketched below)
- Set a flag to enable or disable exceptions being thrown (the iostream
library)
- Perhaps others that I haven't noticed
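To make the first strategy concrete, here is a minimal sketch of the
two vector accessors side by side (nothing here beyond standard
vector::at and vector::operator[]):

    #include <iostream>
    #include <vector>
    #include <stdexcept>

    int main()
    {
        std::vector<int> v(10);

        v[5] = 1;            // unchecked: an out-of-range index here
                             // would be undefined behavior, no throw
        try {
            v.at(42) = 1;    // checked: out of range throws
        }
        catch (const std::out_of_range& e) {
            std::cerr << e.what() << '\n';
        }
        return 0;
    }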
It doesn't seem feasible to follow the first mechanism very often,
even though it gives the calling code the flexibility to choose or
not choose exceptions.
Given that you follow one of these strategies, does it imply it may be
a good idea to throw exceptions anywhere you would otherwise return an
error code? Uh... well, that would create havoc, since we often can
(and like to) ignore error codes but not exceptions... (ever checked
the return code from printf?)
Sidenote:
If you are the "exceptions are a bitch" type of person because you think
they create more complex code, look at this nice little experiment...
http://www.shindich.com/sources/c++_topics/exceptions.html
which contrasts code with and without exceptions.
Unfortunately, its title seems to suggest that it addresses the specific
question I ask here... but it actually talks about the more general
issue of "are exceptions a good idea".
[...]
> 3) When your function/method should resort to throwing an exception
> 4) Changing something and retrying the operation
> ( maybe change several things, but one at a time, retrying after
> each change )
[...]
>
> But issue 3 is really the first problem that needs to be
> understood in order to use exceptions to do something about runtime
> errors in code. I haven't seen it discussed as much as the
> other related issues, even though I have seen an article or two
> titled something like "to throw or not to throw".
>
> If you look at the standard library in this matter... the
> strategies are: - Have two versions, one that throws and one
> that doesn't (e.g. at(), operator[]) - Set a flag to enable or
> disable exceptions being thrown (the iostream library) - Perhaps
> others that I haven't noticed
I suspect this was done because a lot of pre-exception code would
have been broken, or at least bent, by the change to exceptions.
For example, pre-exception code should do this:
int* i = new int[1000];
if (i == NULL) { ... handle error ...}
but post-exception code should do this:
try {
    int* i = new int[1000];
}
catch (const std::bad_alloc&) { // I think that's how it's spelled!
    ... handle error ...
}
so having two versions of parts of the standard library (one that
throws, one that doesn't) was _probably_ an attempt to keep everyone
happy, and not break too much code.
> Given that you follow one of these strategies, does it imply it may
> be a good idea to throw exceptions anywhere you would otherwise return
> an error code? Uh... well, that would create havoc, since we
> often can (and like to) ignore error codes but not exceptions... (ever
> checked the return code from printf?)
Well, IMHO we have several data points in favor of using exceptions:
1) Important parts of the standard library (e.g. 'new') throw
exceptions by default
2) The C++ Gods (e.g. Stroustrup) think that they're absolutely
necessary for correct error handling
3) Some kinds of errors can't be reported in any other way. For
example, how do you return an error code if a constructor
fails? (See the sketch below.)
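Point 3 is easy to illustrate: a constructor has no return value, so
throwing is the only direct way for it to report failure. A minimal
sketch (the class and its behavior are made up for illustration):

    #include <cstdio>
    #include <stdexcept>

    class File {
    public:
        explicit File(const char* name)
            : fp_(std::fopen(name, "r"))
        {
            if (fp_ == NULL)              // nowhere to put an error code,
                throw std::runtime_error( // so we throw instead
                    "cannot open file");
        }
        ~File() { if (fp_) std::fclose(fp_); }
    private:
        std::FILE* fp_;
        File(const File&);                // not copyable
        File& operator=(const File&);
    };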
and we have IMHO one data point against them:
1) Writing exception-correct code is not trivial, and takes a
certain amount of education.
I suppose we could add a second point here, that it's extremely
difficult to graft exception safety onto code that wasn't built with
it in mind.
My personal opinion is that I agree with the experts; I think it was
Scott Meyers who wrote something to the effect of "Exceptions are
difficult simply because correct error handling is difficult, and
exceptions make it impossible to ignore the issue".
However, I also tend not to throw an exception if I can possibly
help it. Sometimes there are simpler ways of signalling that a
procedure failed.
--------------------------------------------------------------------------
Dave Steffen Wave after wave will flow with the tide
Dept. of Physics And bury the world as it does
Colorado State University Tide after tide will flow and recede
stef...@lamar.colostate.edu Leaving life to go on as it was...
- Peart / RUSH
"The reason that our people suffer in this way....
is that our ancestors failed to rule wisely". -General Choi, Hong Hi
> [Edited for brevity]
> When should your function/method resort to throwing an exception?
>
> This is really the first problem that needs to be understood to use
> exceptions for doing something about runtime errors in code.
You just answered your own question. Use exceptions for doing something
about runtime errors in code. If it isn't an error, then don't throw an
exception. If it is an error, then throw an exception. It really is that
simple. The hard part is deciding what is, and isn't, an error, and no one
can help you with that, because only you know what the function in
question is supposed to do, and what results are acceptable.
Never ever report any errors with return-values. It's that simple!
> If you look at the standard library in this matter... the strategies
> are:
> - Have two versions, one that throws and one that doesn't (e.g. at(),
> operator[] )
To understand this design decision it's important to see that passing an
out-of-range index to operator[] is conceptually quite different from, e.g.,
trying to open a file on a full disk. The former is a so-called program
logic error, which could have been detected _before_ running the program.
The latter is a problem that can only be detected at runtime.
There is not much agreement on what should be done upon detection of a
program logic error, mainly because of different safety requirements. For
some programs (e.g. a DVD player) it's perfectly legitimate to simply not
detect the problem and continue with undefined behavior (and probably sooner
or later fail with an access violation or the like), because such a failure
will most probably not destroy data or harm anybody. Others (e.g. a
life-supporting medical appliance) should perform a _graceful_ emergency
shutdown followed by a system restart, for obvious reasons.
> - Set a flag to enable or disable exceptions being thrown (the iostream
> library)
Stream libraries were introduced before the dawn of exceptions. The non-EH
behavior is only there for backward compatibility. Since most people agree
that runtime errors should be reported with exceptions, I recommend
switching on exceptions right after creating the stream object.
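Concretely, "switching on exceptions" amounts to one call on the freshly
created stream (a minimal sketch; the file name is made up):

    #include <fstream>

    void read_config()
    {
        std::ifstream in("config.txt");
        in.exceptions(std::ios::failbit | std::ios::badbit);
        // from here on, stream failures throw std::ios_base::failure
        // instead of silently setting the state flags
    }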
> Even though you give the calling code flexibility to choose or not choose
> exceptions.
See above; don't do that. One problem with error codes is that you can
ignore them _without_a_trace_in_your_code_. To ignore exceptions you have to
write a catch( /**/ ) {} handler, which can be detected (and reviewed) easily
with tools.
> Given that you follow one of these strategies, does it imply it may be
> a good idea to
> throw exceptions anywhere you would otherwise return an error code?
YES! That's exactly what exceptions were invented for! The huge benefit is
that you don't have to write all that tedious
passing-the-error-up-the-call-chain code anymore. Simply throw the exception,
document that fact, and you're done. As errors tend to happen deep down,
buried at the 47th level of the call stack, and are typically not resolved
(!= handled) until the stack has unwound to the 10th level, you can easily
see how much boring "if (error) return error;" coding exceptions save you
from doing.
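For illustration, here is the same buried error in both styles, with a
made-up three-level call chain:

    #include <stdexcept>

    // error-code style: every intermediate level must forward the code
    int low()    { /* detects the error */ return -1; }
    int middle() { int rc = low();    if (rc != 0) return rc; return 0; }
    int high()   { int rc = middle(); if (rc != 0) return rc; return 0; }

    // exception style: the intermediate levels carry no error plumbing
    void low2()    { throw std::runtime_error("failed"); }
    void middle2() { low2(); }     // just propagates
    void high2()   { middle2(); }  // just propagates
    // whoever calls high2() catches once, wherever it can actually react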
> Uh... well, that would create havoc, since we often can (and like to)
> ignore error codes
> but not exceptions... (ever checked the return code from printf?)
It's always better to leave the decision to actively ignore an error to
your clients.
HTH,
Andreas
> It's always better to leave the decision to actively ignore an error
> to your clients.
Isn't that a point AGAINST using exceptions, since, when flagged
by an exception, an error can't be ignored by the client?
Regards
W. Roesler
> Never ever report any errors with return-values. It's that simple!
So simple that no respected expert has made this recommendation to
date. Exceptions aren't a silver bullet. Experts disagree as to when
and where they should be used. But no one says that they are the only
reasonable solution, regardless of the case.
> > If you look at the standard library in this matter... the
> > strategies are:
> > - Have two versions, one that throws and one that doesn't (e.g. at(),
> > operator[] )
> To understand this design decision it's important to see that passing
> an out-of-range index to operator[] is conceptually quite different
> from, e.g., trying to open a file on a full disk. The former is a
> so-called program logic error, which could have been detected _before_
> running the program. The latter is a problem that can only be
> detected at runtime. There is not much agreement on what should be
> done upon detection of a program logic error, mainly because of
> different safety requirements. For some programs (e.g. a DVD player)
> it's perfectly legitimate to simply not detect the problem and continue
> with undefined behavior (and probably sooner or later fail with an
> access violation or the like), because such a failure will most
> probably not destroy data or harm anybody. Others (e.g. a
> life-supporting medical appliance) should perform a _graceful_
> emergency shutdown followed by a system restart, for obvious reasons.
Letting a program stumble on after undefined behavior is never really an
appropriate solution. If it happens, it is because we are human, and we
make errors. Most of the time, the most appropriate action to take in
case of an error is to abort with an error message -- this would be a
very good implementation-specific behavior for the undefined behavior in
operator[]. A few, rare applications should try to recover. These
should use at().
> > - Set a flag to enable or disable exceptions being thrown (the iostream
> > library)
> Stream libraries were introduced before the dawn of exceptions. The
> non-EH behavior is only there for backward compatibility. Since most
> people agree that runtime errors should be reported with exceptions, I
> recommend switching on exceptions right after creating the stream
> object.
Operator new was also introduced before the dawn of exceptions. There
is no non-EH behavior for backward compatibility. IO is a funny case;
the default error reporting is probably the best solution most of the
time, although there aren't very many other cases when such a strategy
would be appropriate.
Exceptions might be appropriate for bad() in iostream. They might also
be appropriate when reading temporary files which were written by the
same program just before -- if you write 100 bytes, seek to the start,
and a read of 100 bytes fails, there is probably something seriously
wrong. But such cases aren't the rule.
> > Even though you give the calling code flexibility to choose or not
> > choose exceptions.
> See above, don't do that. One problem with error-codes is that you can
> ignore them _without_a_trace_in_your_code_.
There are ways of avoiding that.
> To ignore exceptions you have to write a catch( /**/ ) {} handler
> which can be detected (and reviewed) easily with tools.
> > Given that you follow one of these strategies, does it imply it may
> > be a good idea to throw exceptions anywhere you would otherwise return
> > an error code?
> YES! That's exactly what exceptions were invented for!
According to the author, exceptions were invented for exceptional
cases. In practice, the general rule is that they are a good solution
for errors in non critical code which almost certainly cannot be handled
locally. Insufficient memory in new being the classical example:
- In short running programs, like a compiler, the best solution for
insufficient memory is just to abort. If you can't compile his
program, you can't compile it, and throwing an exception rather than
aborting isn't going to help much here.
- In long running programs, like servers, insufficient memory can
occur for two reasons: a memory leak (a programming error), or a
request which was too complicated. In the second case, an
exception, caught at the highest level, is an excellent way to abort
the request. In the first case, the only way out is to abort and
restart the program; since it is generally difficult to distinguish,
if requests can have arbitrary complexity, you should probably go
for the exception, and implement some sort of counter at the highest
level -- if say 5 successive different requests fail because of lack
of memory, you abort.
- In critical applications, you don't use dynamically allocated
memory, so the problem doesn't occur:-).
- In some particular cases, it may be possible to recover locally from
insufficient memory, say by reorganizing some memory or spilling
data to disk. In such cases, you use new (nothrow), checking the
return value for NULL (sketched below).
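That last case looks roughly like this (a minimal sketch; the function
name is invented):

    #include <new>
    #include <cstddef>

    void grow_cache()
    {
        char* block = new (std::nothrow) char[1024 * 1024];
        if (block == NULL) {
            // recover locally: e.g. spill cached data to disk,
            // free some memory and retry, instead of letting
            // std::bad_alloc propagate
            return;
        }
        // ... use the block ...
        delete[] block;
    }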
The fact that the experts felt it necessary to provide a new which
reports errors by return code says a lot.
> The huge benefit is that you don't have to write all that tedious
> passing-the-error-up-the-callchain code anymore.
That is the only benefit. It's not a negligible benefit in cases where
the error will be passed up a long chain. It's not a real benefit at
all where the error will be treated immediately by the caller.
You use exceptions for exceptional cases, where there is no chance of
local recovery. You use return codes in all other cases.
> Simply throw the exception, document that fact, and you're done. As
> errors tend to happen deep down, buried at the 47th level of the
> call stack, and are typically not resolved (!= handled) until the stack
> has unwound to the 10th level, you can easily see how much boring "if
> (error) return error;" coding exceptions save you from doing.
We must encounter different types of errors. For the most part, with
the exception of things like insufficient memory, I find myself handling
errors one or two levels above where they occur.
> > Uh... well, that would create havoc, since we often can (and like to)
> > ignore error codes but not exceptions... (ever checked the return code
> > from printf?)
> It's always better to leave the decision to actively ignore an error
> to your clients.
You don't have the choice. If the client wants to ignore the error, he
will. If he wants to treat the error, he will.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
which means "do not catch exceptions thrown in your code; instead make the
client aware of the exception and let him handle the situation", and for
this to work your code *must* be exception-safe. Right?
B.
Uh... what makes you think operator[] for vector can be range-checked at
compile time? That's really an outlandish expectation of any compiler,
given a dynamically resizable container. Do you know of any compiler
that can do that?
> Never ever report any errors with return-values. It's that simple!
..snip..
>>Given that you follow one of these strategies, does it imply it may be
>>a good idea to
>>throw exceptions anywhere you would otherwise return an error code?
>
> YES! That's exactly what exceptions were invented for! The huge benefit is
> that you don't have to write all that tedious
> passing-the-error-up-the-callchain code anymore.
Your emphatic implication that exceptions were invented to totally
replace traditional error-handling techniques seems like your personal
opinion, unless Bjarne told you so.
At least in his book (TCPL), Bjarne doesn't seem to think the same. In
Chapter 14 (special edition) he implies that the exception-handling
mechanism is more of an alternative than a replacement for traditional
error-handling techniques.....
To quote the text...
"The exception handling mechanism provides an alternative to the
traditional techniques when they are insufficient, inelegant and error
prone"
--Roshan
Note the use of the phrase "actively ignore". The following code actively
ignores any exceptions thrown by foo():
try {
foo();
}
catch( ... ) {
// ignore it
}
NeilB
> Isn't that a point AGAINST using exceptions, since, when flagged
> by an exception, an error can't be ignored by the client?
I'm not sure I understand what you are saying, since ignoring an exception
is quite easy:
try
{
// ...
}
catch ( ... )
{
}
The huge difference from error codes is that the programmer must actively
write this code, and tools can easily detect that someone chose to ignore
an exception.
Regards,
Andreas
> > Never ever report any errors with return-values. It's that simple!
>
> So simple that no respected expert has made this recommendation to
> date. Exceptions aren't a silver bullet. Experts disagree as to when
> and where they should be used. But no one says that they are the only
> reasonable solution, regardless of the case.
Ok, I knew this point would come from one of the list regulars, and
you are of course right. After a few projects where I have seen people fall
back into the error-code return scheme for no apparent reason, I tend to
press this beyond what is reasonable. Simply because it's easier to fix
cases where people throw exceptions and should not than the other way round.
BTW, in my last project (~0.25 Mloc) there was not a single case where people
should have used error-code returns rather than exceptions! I believe
exceptions are the right choice for 99.9% of your everyday error-reporting
needs.
> Letting a program stumble on after undefined behavior is never really an
> appropriate solution. If it happens, it is because we are human, and we
> make errors. Most of the time, the most appropriate action to take in
> case of an error is to abort with an error message -- this would be a
> very good implementation-specific behavior for the undefined behavior in
> operator[]. A few, rare applications should try to recover. These
> should use at().
I don't see your point, as some applications simply cannot _afford_ to
detect the argument to operator[] being out of bounds (in release mode,
of course).
> Operator new was also introduced before the dawn of exceptions. There
> is no non-EH behavior for backward compatibility. IO is a funny case;
Yes there is: new (nothrow)
> the default error reporting is probably the best solution most of the
> time, although there aren't very many other cases when such a strategy
> would be appropriate.
>
> Exceptions might be appropriate for bad() in iostream. They might also
> be appropriate when reading temporary files which were written by the
> same program just before -- if you write 100 bytes, seek to the start,
> and a read of 100 bytes fails, there is probably something seriously
> wrong. But such cases aren't the rule.
Ok, I failed to say that I only turn on exceptions for eof in about half the
cases. However, I use exceptions for fail and bad.
> > See above, don't do that. One problem with error-codes is that you can
> > ignore them _without_a_trace_in_your_code_.
>
> There are ways of avoiding that.
Yep, there sure are ways. However, you still have to use exceptions for
constructor failures, which leaves you with two different approaches to
error reporting. Unless there are very strong reasons not to, I tend to go
for the KISS approach in such cases. That's why I recommend using
exceptions for all runtime-error reporting.
> According to the author, exceptions were invented for exceptional
> cases. In practice, the general rule is that they are a good solution
> for errors in non critical code which almost certainly cannot be handled
> locally. Insufficient memory in new being the classical example:
>
> - In short running programs, like a compiler, the best solution for
> insufficient memory is just to abort. If you can't compile his
> program, you can't compile it, and throwing an exception rather than
> aborting isn't going to help much here.
>
> - In long running programs, like servers, insufficient memory can
> occur for two reasons: a memory leak (a programming error), or a
> request which was too complicated. In the second case, an
> exception, caught at the highest level, is an excellent way to abort
> the request. In the first case, the only way out is to abort and
> restart the program; since it is generally difficult to distinguish,
> if requests can have arbitrary complexity, you should probably go
> for the exception, and implement some sort of counter at the highest
> level -- if say 5 successive different requests fail because of lack
> of memory, you abort.
>
> - In critical applications, you don't use dynamically allocated
> memory, so the problem doesn't occur:-).
>
> - In some particular cases, it may be possible to recover locally from
> insufficient memory, say by reorganizing some memory or spilling
> data to disk. In such cases, you use new (nothrow), checking the
> return value for NULL.
I don't know the "official" rationale behind new (nothrow), and honestly I
don't care that much, because I can't think of any use for it (unless your
platform or coding standard forbids you to use exceptions). In fact, all the
cases you mentioned can just as well be handled with exceptions without any
major disadvantages.
1) Are you saying that you would use new (nothrow) in short-running
programs? Why not use new and not handle the exception; this will
automatically lead to abort() being called.
2) We agree ;-)
3) What do you mean by critical? Realtime?
4) I might not have as much experience as you do, but I have so far (8 years
of C++ programming) not come across a single case where you could have
handled an out-of-memory situation locally (right after calling new). Even
if you could, why not use normal new and put a try-catch around it?
> The fact that the experts felt it necessary to provide a new which
> reports errors by return code says a lot.
As mentioned above, I don't know the rationale, and I couldn't find one
either, but there are platforms that until very recently did not support
exception handling (WinCE). To be standards-conformant, such platforms
couldn't possibly support normal new, but only new (nothrow). To me this is
a far stronger case for having new (nothrow).
> That is the only benefit. It's not a negligible benefit in cases where
> the error will be passed up a long chain. It's not a real benefit at
> all where the error will be treated immediately by the caller.
How do you know that your immediate caller will be able to handle the error?
IMO, there's no way to tell but I'd be very interested if you have come up
with an _easy_ scheme that allows you to do so.
> You use exceptions for exceptional cases, where there is no chance of
> local recovery. You use return codes in all other cases.
Again, how do you know who your caller is and what he does? Please give a
simple rule to find out whether an error can be handled by your immediate
caller or not. Honestly, even if such a rule existed I would still opt for
the exception-only approach, for KISS reasons.
> We must encounter different types of errors. For the most part, with
> the exception of things like insufficient memory, I find myself handling
> errors one or two levels above where they occur.
This is contrary to my experience, please give an example.
> > > Uh... well, that would create havoc, since we often can (and like to)
> > > ignore error codes but not exceptions... (ever checked the return code
> > > from printf?)
> > It's always better to leave the decision to actively ignore an error
> > to your clients.
>
> You don't have the choice. If the client wants to ignore the error, he
> will. If he wants to treat the error, he will.
I was referring to the following: as long as you can't handle the runtime
error locally, you had better inform your client instead of ignoring it and
grinding on.
Regards,
Andreas
> > It's always better to leave the decision to actively ignore an
> > error to your clients.
>
> which means "do not catch exceptions thrown in your code; instead make the
> client aware of the exception and let him handle the situation", and for
> this to work your code *must* be exception-safe. Right?
Yes, as long as there is no way for you to handle the error locally. Do not
write any code that is not exception-safe!
Regards,
Andreas
The problem with using any form of new other than new(nothrow) is that
the implementation just about has to put in the whole exception-handling
mechanism (including calling abort for an unhandled exception). The
problem is not with short-running programs but with very long-running
programs under highly constrained resources, where the very existence of
the exception-handling mechanism will make the program exceed the available
resources. Typically this is in some forms of embedded programming. In
highly competitive markets even pennies count, and moving to larger
resources is not a commercially acceptable solution.
Having no exception-handling mechanism is probably irrelevant on PCs of all
forms, but it may be essential for programming the far commoner
microcontrollers that pervade our lives, unseen and unconsidered even by
many programmers.
--
ACCU Spring Conference 2003 April 2-5
The Conference you cannot afford to miss
Check the details: http://www.accuconference.co.uk/
Francis Glassborow ACCU
If that is true, you are programming in a very special problem domain.
The choice is not just between throwing an exception and an error code.
There are other solutions that are relevant in other cases; even new
offers a more sophisticated set of choices.
--
ACCU Spring Conference 2003 April 2-5
The Conference you cannot afford to miss
Check the details: http://www.accuconference.co.uk/
Francis Glassborow ACCU
"Roshan Naik" <rosha...@yahoo.com> wrote in message
news:EFh3a.285$l_5...@news.cpqcorp.net...
> > To understand this design decision it's important to see that passing
> > an out-of-range index to operator[] is conceptually quite different
> > from, e.g., trying to open a file on a full disk. The former is a
> > so-called program logic error, which could have been detected _before_
> > running the program. The latter is a problem that can only be detected
> > at runtime.
>
> Uh... what makes you think operator[] for vector can be range-checked at
> compile time? That's really an outlandish expectation of any compiler,
> given a dynamically resizable container. Do you know of any compiler
> that can do that?
I didn't say that the compiler could have checked this. There are really two
cases here:
1) The index depends on user input. In this case you'd better check for
out-of-range situations before passing it to operator[].
2) The index only depends on calculations your program makes. Believe it or
not, in this case a sufficiently sophisticated static program analysis
tool could have told you that the index will be out of range in certain
cases. Such tools take your program sources as input and do _not_ run them
while analysing. They come to that conclusion simply by analysing the
outcome of _every_ branch your program makes. A human reviewer only reading
your program could do the same.
> Your emphatic implication that exceptions were invented to totally
> replace traditional error-handling techniques seems like your personal
> opinion, unless Bjarne told you so.
Ok, the inventor of exceptions (AFAIK _not_ Bjarne Stroustrup) probably
really did not want to completely replace the traditional way of error
handling. But, see below...
> At least in his book (TCPL), Bjarne doesn't seem to think the same. In
> Chapter 14 (special edition) he implies that the exception-handling
> mechanism is more of an alternative than a replacement for traditional
> error-handling techniques.....
>
> To quote the text...
> "The exception handling mechanism provides an alternative to the
> traditional techniques when they are insufficient, inelegant and error
> prone"
I know that Bjarne has written stuff that is contrary to my views. However,
a long time has passed since that book came out, and compilers are much
better at exception handling today than they used to be. Moreover, I claim
that it is much simpler and safer to have one simple rule that _everyone_
(rookies and seniors) on your project can follow, rather than having to
establish IMO very difficult-to-comprehend heuristics about when to use
exceptions and when not. In fact, I believe it's almost always a bad
decision to return an error code. Please see my answer to James Kanze.
Regards,
Andreas
Usually I have a class that completely covers a certain problem domain
and handles all errors within. It exposes some error flag or state,
a class-specific error code that signals whether some special action
should/can be taken to recover, a general error code as returned by the
system, and a routine to generate an error message that can use class
information (the name of the file for a file class, for example). Methods
of the class return only bool: is everything all right or not.
Using that methodology with error codes is straightforward; with
exceptions it is cumbersome.
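As I read the described methodology, a sketch of such a class might look
like this (all names invented for illustration):

    #include <cerrno>
    #include <cstdio>
    #include <cstring>
    #include <string>

    class FileReader {
    public:
        FileReader() : fp_(NULL), sysError_(0) {}
        ~FileReader() { if (fp_) std::fclose(fp_); }

        bool open(const char* name) {        // bool: all right or not
            name_ = name;
            fp_ = std::fopen(name, "r");
            if (fp_ == NULL) { sysError_ = errno; return false; }
            return true;
        }
        bool ok() const { return sysError_ == 0; }     // error flag/state
        int systemError() const { return sysError_; }  // code from the system
        std::string errorMessage() const {             // uses class info
            return name_ + ": " + std::strerror(sysError_);
        }
    private:
        std::FILE* fp_;
        std::string name_;
        int sysError_;
    };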
And next, usually the error is spotted by a service thread which can
neither handle it nor has a caller to propagate it to.
Amir Yantimirov
http://www174.pair.com/yamir/programming/
> Bronek,
>
> > > It's always better to leave the decision to actively ignore an
> > > error to your clients.
> >
> > which means "do not catch exceptions thrown in your code; instead make
> > the client aware of the exception and let him handle the situation",
> > and for this to work your code *must* be exception-safe. Right?
>
> Yes, as long as there is no way for you to handle the error locally. Do not
> write any code that is not exception-safe!
[snip]
I hope you are thinking of the basic guarantee and not the strong one.
IMO, especially when dealing with 3rd-party libs, the strong guarantee
is too hard to provide in all but a few places.
> I didn't say that the compiler could have checked this. There are really
> two cases here:
> 1) The index depends on user input. In this case you'd better check for
> out-of-range situations before passing it to operator[].
Why? Using at() is simpler. In general, the class can detect error
conditions much better than the code that invokes one of its functions.
Regards.
> The problem with using any form of new other than new(nothrow) is that
> the implementation just about has to put in the whole exception-handling
> mechanism (including calling abort for an unhandled exception). The
> problem is not with short-running programs but with very long-running
> programs under highly constrained resources, where the very existence of
> the exception-handling mechanism will make the program exceed the
> available resources. Typically this is in some forms of embedded
> programming. In highly competitive markets even pennies count, and
> moving to larger resources is not a commercially acceptable solution.
I'm well aware of this and, as I have already written in the answer to
James' post, your environment could prohibit the use of exceptions. As
outlined below, I believe this is only true for a very special type of
application.
> Having no exception-handling mechanism is probably irrelevant on PCs of
> all forms, but it may be essential for programming the far commoner
> microcontrollers that pervade our lives, unseen and unconsidered even by
> many programmers.
I agree, and I'm aware that there are programs which have to handle so few
exceptional situations that only a small percentage of all functions could
possibly fail with runtime errors (e.g. the software in a wristwatch that
has all the bells and whistles but no I/O apart from the buttons and the
display). For such an application it could indeed be overkill to employ
exceptions.
However, I consider this a _very_ special type of programming environment,
where a few other well-established programming rules could also be rendered
impractical or even invalid.
For the rest (I believe the vast majority of all written lines of C++
code), the following thought experiment explains why exceptions are superior
to error-code returns: Two programs are written for the same hardware. Both
programs are _absolutely_identical_ in functionality, but one makes full use
of exceptions while the other has exceptions disabled and works with error
code returns only. The type of functionality does not matter much, but let's
assume that the programs must deal with quite a few different types of
runtime errors (out-of-memory, I/O problems, etc.). Both implement "perfect"
error handling, i.e. for all runtime errors that could possibly happen, both
programs have to try a remedy.
I believe both programs should be roughly in the same league for executable
size and runtime performance (on some platforms the program employing
exceptions could even be _faster_ and _smaller_, but I won't go into details
for now).
Why? Well, consider how you have to write the program that does not use
exception handling: after almost each and every function call you have to
insert "if (error) return error;". Because in C++ one line of code often
results in more than one function call (copy constructors, overloaded
operators, etc.), you are forced to tear apart a lot of expressions. For
example, the following results in 3 function calls for the expression
z = ... alone, and every single one could result in an exception being
thrown (because all may allocate memory)!
matrix z, a, b, c;
// fill a, b, c ....
z = a * b * c;
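Spelled out, the error-code version of that one line might have to become
something like this (a sketch; the out-parameter API is invented):

    struct matrix { /* ... */ };

    // the return value now carries the error code, so results must
    // travel through reference parameters
    int multiply(matrix& out, const matrix& lhs, const matrix& rhs);
    int assign(matrix& out, const matrix& in);

    int compute(matrix& z, const matrix& a,
                const matrix& b, const matrix& c)
    {
        matrix t1, t2;
        int err;
        err = multiply(t1, a, b);    // t1 = a * b
        if (err != 0) return err;
        err = multiply(t2, t1, c);   // t2 = t1 * c
        if (err != 0) return err;
        err = assign(z, t2);         // z = t2 (copying may fail too)
        if (err != 0) return err;
        return 0;
    }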
Tearing apart all the expressions in your program in such a way and
inserting all the necessary "if (error) return error;" statements is not
only extremely tedious but also insanely error-prone. So error-prone that
it's almost impossible that the two programs could behave identically in
all situations (given that the programs are not toy examples but real-world
applications).
Moreover, as the return type is now occupied by the error code, you have to
use reference parameters for the returned results. This leads to badly
readable code for mathematical expressions. Last but not least, you always
have to ask newly created objects whether they are usable, as constructors
cannot return error codes.
To cut a long story short, I believe that in a lot of cases programs
employing traditional error handling are only faster and smaller because
they almost never reach the level of correctness of programs employing
exceptions.
Regards,
Andreas
I don't follow. BTW, the project was the software for an ATM running on
Win2000...
Regards,
Andreas
> Usually I have a class that completely covers a certain problem domain
> and handles all errors within. It exposes some error flag or state,
> a class-specific error code that signals whether some special action
> should/can be taken to recover, a general error code as returned by the
> system, and a routine to generate an error message that can use class
> information (the name of the file for a file class, for example).
> Methods of the class return only bool: is everything all right or not.
>
> Using that methodology with error codes is straightforward; with
> exceptions it is cumbersome.
Well, if you are absolutely sure that your immediate caller can handle
_all_ runtime errors, then returning an error code can indeed be an option.
However, as I have explained in other posts, I believe this is a _very_
special situation, and for the rest of your program you would most probably
want to employ exceptions, so why complicate things with two types of error
handling?
> And next, usually the error is spotted by a service thread which can
> neither handle it nor has a caller to propagate it to.
A thread has a sort of a caller, i.e. the other thread by which it was
started. The other thread will at some point want to collect the results
calculated by this thread, and that's also where you could transmit your
exception. I know this is not possible with normal (std) exceptions, but
when you've had the foresight to give your exceptions a clone method, then
transmitting to and rethrowing in the other thread works just fine.
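Since standard exceptions can't be copied polymorphically, the clone trick
amounts to something like this (a rough sketch with invented names; the
thread plumbing and synchronization are omitted):

    #include <memory>
    #include <stdexcept>
    #include <string>

    class Cloneable : public std::runtime_error {
    public:
        explicit Cloneable(const std::string& what)
            : std::runtime_error(what) {}
        virtual ~Cloneable() throw() {}
        virtual Cloneable* clone() const { return new Cloneable(*this); }
        virtual void raise() const { throw *this; } // preserves dynamic type
    };

    Cloneable* pending = 0;   // real code would guard this with a mutex

    void worker() {           // runs in the service thread
        try {
            // ... do the work ...
        }
        catch (const Cloneable& e) {
            pending = e.clone();   // transport the exception
        }
    }

    void collect_results() {  // runs in the "calling" thread
        if (pending) {
            std::auto_ptr<Cloneable> e(pending);
            pending = 0;
            e->raise();       // rethrow here; auto_ptr frees the clone
        }
    }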
Regards,
Andreas
Yep, I had in mind _at_least_ the basic guarantee. Depending on your needs
you might want to implement the strong guarantee, which is indeed harder in
some cases...
Regards,
Andreas
> > I didn't say that the compiler could have checked this. There are
> > really two cases here:
> > 1) The index depends on user input. In this case you'd better check
> > for out-of-range situations before passing it to operator[].
>
> Why? Using at() is simpler. In general, the class can detect error
> conditions much better than the code that invokes one of its functions.
You are of course right. What was I thinking? ;-)
Regards,
Andreas
> For the rest (I believe the vast majority of all written lines of C++
> code), the following thought experiment explains why exceptions are
> superior to error-code returns: Two programs are written for the same
> hardware. Both programs are _absolutely_identical_ in functionality
> but one makes full use of exceptions while the other has exceptions
> disabled and works with error code returns only. The type of
> functionality does not matter much, but let's assume that the programs
> must deal with quite a few different types of runtime errors
> (out-of-memory, I/O problems, etc.)
What's in the etc.? What about a compiler, for example? Would you
consider an error in the program being compiled an error? (I wouldn't;
it's an expected situation. And I certainly wouldn't use exceptions to
handle it.)
> Both implement "perfect" error handling, i.e. for all runtime errors
> that could possibly happen both programs have to try a remedy. I
> believe both programs should roughly be in the same league for
> executable size and runtime performance (on some platforms the program
> employing exceptions could even be _faster_ and _smaller_ but I won't
> go into details for now). Why? Well, consider how you have to write
> the program that does not use exception handling: After almost each
> and every function call you have to insert "if (error) return
> error;".
Nonsense. Only after function calls that can fail.
What do you do if you run out of memory? A lot of programs (compilers,
etc.) should simply fail in such cases. For a lot of programs, the
probability is so low, and the possibilities for recovery so poor, that
failing is acceptable as well. Such programs replace the new handler,
and so never see a std::bad_alloc. (With a lot of compilers, this has
already been done for you :-(. And on some systems, like Linux, your
program, or some program, will just crash.)
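Replacing the new handler as described is a couple of lines (a minimal
sketch):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    void out_of_memory()
    {
        std::fputs("fatal: out of memory\n", stderr);
        std::exit(EXIT_FAILURE);   // message instead of bad_alloc or a core
    }

    int main()
    {
        std::set_new_handler(out_of_memory);
        // ... every failing new now ends in the message above ...
    }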
What is the probability of a write failing? It's pretty low on my
machines, and it's perfectly acceptable to only test the results after
the final close (which you have to do anyway, and where you can't use an
exception anyway without wrapping it, since in the case of other
exceptions it's likely to be called while unwinding the stack). And
that's only one if in the entire program.
Things like opening a file should be tested and handled immediately, and
the actual processing will never begin. No problem of propagating
deeply here.
What types of errors are you thinking of that require if's all over the
place?
> Because in C++ one line of code often results in more than one
> function call (copy constructors, overloaded operators, etc.) you are
> forced to tear apart a lot of expressions. For example, the following
> results in 3 function calls for the expression z = .... alone and
> every single one could result in an exception being thrown (because
> all may allocate memory)!
> matrix z, a, b, c;
> // fill a, b, c ....
> z = a * b * c;
I think that everyone is in agreement that in such trivial cases, new
should throw if it doesn't abort the program. There are other
solutions, however, and they don't necessarily result in more if
statements being written. (We used them before exceptions existed.)
The most usual is simply to mark the object as bad, and continue, a bit
like NaN in IEEE floating point. A lot more ifs get executed, and the
run-time is probably a little slower than with exceptions, but the code
size is probably smaller. With exceptions, I need table entries for all
of the call spots where an exception may propagate. If I use deferred
error checking, like with non-signaling NaNs, the only place I need to
check for an error is at the end of the expression.
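A hedged sketch of that deferred-checking idiom (the matrix type and its
internals are invented):

    class Matrix {
    public:
        Matrix() : bad_(false) { /* ... */ }
        bool bad() const { return bad_; }

        Matrix operator*(const Matrix& rhs) const {
            Matrix result;
            if (bad_ || rhs.bad_) {    // the error state propagates,
                result.bad_ = true;    // like a quiet NaN
                return result;
            }
            // ... real multiplication; on allocation failure set
            // result.bad_ = true instead of throwing ...
            return result;
        }
    private:
        bool bad_;
    };

    // usage: a single check at the end of the whole expression
    //     Matrix z = a * b * c;
    //     if (z.bad()) { /* handle the error once, here */ }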
> Tearing apart all the expressions in your program in such a way and
> inserting all the necessary "if (error) return error;" statements is not
> only extremely tedious but also insanely error-prone. So error-prone
> that it's almost impossible that the two programs could behave
> identically in all situations (given that the programs are not toy
> examples but real-world applications). Moreover, as the return type is
> now occupied by the error code, you have to use reference parameters for
> the returned results. This leads to badly readable code for mathematical
> expressions. Last but not least, you always have to ask newly created
> objects whether they are usable, as constructors cannot return error
> codes.
Interestingly, I've rarely seen mathematical code which uses signaling
NaNs. I don't know whether it's because the mathematicians consider
the code less clean with the asynchronous interruptions, or for some
other reason.
> To cut a long story short, I believe in a lot of cases programs
> employing traditional error-handling are only faster and smaller
> because they almost never reach the level of correctness of programs
> employing exceptions.
My experience to date has been that programs employing exceptions are
almost never correct, whereas programs with return values often are.
People like David Abrahams have been making an enormous effort, both in
developing new programming idioms and in educating people about them;
it's largely as a result of such efforts that I even consider exceptions.
But if I look at most of the existing code at my client sites, there's
still a lot to be done, especially with regard to education.
If you can afford to punt on insufficient memory, and abort, you can
probably use all of the standard library without seeing a single
exception. If you have a large existing code base that you have to live
with, that's probably your only choice; code written five or more years
ago is NOT exception-safe. If you're doing a green-fields project,
exceptions are definitely worth considering for some types of errors,
provided you can be sure that the people working on the project are up
to date, and know how to program with them. I'd still avoid them for
most run-of-the-mill errors; exceptions are best when there aren't any
try blocks, and if you have to handle the error at the call site, that
means a try block for each call site.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
> > > Never ever report any errors with return-values. It's that simple!
> > So simple that no respected expert has made this recommendation to
> > date. Exceptions aren't a silver bullet. Experts disagree as to
> > when and where they should be used. But no one says that they are
> > the only reasonable solution, regardless of the case.
> Ok, I knew this point would come from one of the list
> regulars, and you are of course right. After a few projects where I
> have seen people fall back into the error-code return scheme for no
> apparent reason, I tend to press this beyond what is reasonable.
I've seen just the opposite. I've seen a lot of code written with the
attitude that if you don't know what to do, throw an exception. There's
obviously a middle road, but I've found that about 90% of the time a
return code is more appropriate than an exception.
Maybe that's just because I expect most errors, and don't consider them
exceptional. Or that I'm used to thinking of error processing
(detection, propagation and handling) as part of the algorithm. Or that
I've noticed that most of the people throwing exceptions like wild don't
have the foggiest notion of what exception safety means (and have
never heard of smart pointers, or RAII). This last problem is, of
course, one of education. But it's one many of us still have to deal
with.
> Simply because it's easier to fix cases where people throw exceptions
> and should not than the other way round.
I'm not sure what you mean here. If the client code was written to use
exceptions, it's going to silently ignore return values. If it was
written to use return values, it likely won't compile with exceptions.
> BTW, in my last project (~0.25Mloc) there was not a single case where
> people should have used error-code returns rather than exceptions!
You mean you never opened a file whose name was provided by the user.
> I believe exceptions are the right choice for 99.9% of your everyday
> error-reporting needs.
I think it depends on the type of application. Most of my work is on
large servers; exceptions are useful there for aborting requests without
bringing the system down. But I can't think of anywhere in a compiler
where they would be appropriate. And I'm sceptical about graphic
clients, although I'll admit that my scepticism may be partially based
on my negative experience with exceptions in Java.
> > Letting a program stumble on after undefined behavior is never
> > really an appropriate solution. If it happens, it is because we are
> > human, and we make errors. Most of the time, the most appropriate
> > action to take in case of an error is to abort with an error message
> > -- this would be a very good implementation-specific behavior for
> > the undefined behavior in operator[]. A few, rare applications
> > should try to recover. These should use at().
> I don't see your point, as some applications simply cannot _afford_ to
> detect the argument to operator[] being out of bounds (in release mode
> of course).
My point is that you shouldn't release code with an out of bounds
operator[]. And if you are testing it (which is nice whenever you can
afford it), the correct response is probably to abort, rather than to
throw at an unexpected moment.
> > Operator new was also introduced before the dawn of exceptions.
> > There is no non-EH behavior for backward compatibility. IO is a
> > funny case;
> Yes there is: new (nothrow)
How does that solve a backward compatibility problem?
> > the default error reporting is probably the best solution most of
> > the time, although there aren't very many other cases when such a
> > strategy would be appropriate.
> > Exceptions might be appropriate for bad() in iostream. They might
> > also be appropriate when reading temporary files which were written
> > by the same program just before -- if you write 100 bytes, seek to
> > the start, and a read of 100 bytes fails, there is probably
> > something seriously wrong. But such cases aren't the rule.
> Ok, I failed to say that I only turn on exceptions for eof in about
> half the cases. However, I use exceptions for fail and bad.
I've never used an exception for eof. Nor for fail. I can see it for
bad, in some cases. Generally speaking, however, it just doesn't seem
worth the hassle.
> > > See above, don't do that. One problem with error-codes is that you
> > > can ignore them _without_a_trace_in_your_code_.
> > There are ways of avoiding that.
> Yep, there sure are ways. However, you still have to use exceptions
> for constructor failures which leaves you with two different
> approaches of error reporting.
You always have the alternative of making constructors which can't fail.
Generally, if something can fail, and the failure can be handled
locally, I'd avoid doing it in a constructor.
> Unless there are very strong reasons not to do so, I tend to go for
> the KISS approach in such cases. That's why I recommend to use
> exceptions for all runtime-error reporting.
I'd say that KISS argues for using return codes everywhere:-).
The official rationale is that some people had code which was prepared to
handle out of memory locally, at the point of the new, and that
exceptions weren't appropriate for those cases. Since such cases are
pretty much the exception, the default new throws, and you need a
special form for the no-throw version.
There are, of course, also cases where you simply cannot afford to
throw. You certainly wouldn't want to use the normal new in a tracing
mechanism (where it might be called during stack walkback due to another
exception); you use new (nothrow), and if it fails, you fall back to
unbuffered output, but you don't throw, whatever happens.
> In fact, all the cases you mentioned can just as well be handled with
> exceptions without any major disadvantages.
> 1) Are you saying that you would use new (nothrow) in short running
> programs? Why not use new and not handle the exception, this will
> automatically lead to abort() being called.
You don't want abort. You want to display a reasonable error message,
and you don't want the core dump. The easiest solution is the one
designed for this case: replace the new handler with one which does what
you want.
> 2) We agree ;-)
> 3) What do you mean with critical? Realtime?
Critical, like if the program fails, something bad happens. I've worked
on things like locomotive brake systems, for example. Almost by
definition, you can run out of a dynamic resource; whether you throw an
exception, return NULL or abort in the new handler, the train is going
to crash. And the railroads don't like that.
On large systems, the critical parts will usually be isolated on low
level controls today, so that the main part of the control software can
be written "normally".
> 4) I might not have as much experience as you do, but I have so far (8
> years of C++ programming) not come across a single case where you
> could have handled an out of memory situation locally (right after
> calling new). Even if you could, why not use normal new and put a
> try-catch around it?
I've not run into them in my personal experience, but I know that they
exist. An obvious example might be an editor -- when the call to new
fails, it spills part of the file it is editing to disk, frees up the
associated memory, and tries again. In general, I'd expect the case to
occur in any system which has its own memory management, and only uses
operator new to get the blocks it manages.
> > The fact that the experts felt it necessary to provide a new which
> > reports errors by return code says a lot.
> As mentioned above I don't know the rationale and I couldn't find one
> either but there are platforms that have until very recently not
> supported exception handling (WinCE). To be standards conformant, such
> platforms couldn't possibly support normal new but only new
> (nothrow). To me this is a far stronger case for having new (nothrow).
Most such platforms probably use some sort of embedded C++, in which the
standard new returns NULL, rather than throwing. It was never the
intent that a platform should be allowed to support new (nothrow)
without supporting the normal new.
> > That is the only benefit. It's not a negligible benefit in cases
> > where the error will be passed up a long chain. It's not a real
> > benefit at all where the error will be treated immediately by the
> > caller.
> How do you know that your immediate caller will be able to handle the
> error? IMO, there's no way to tell but I'd be very interested if you
> have come up with an _easy_ scheme that allows you to do so.
When in doubt, suppose that he does. It's easier (and a lot cheaper) to
convert a return code into an exception than it is to convert an
exception into a return code.
In the end, of course, it is an educated guess. It's highly unlikely
that you can handle running out of memory locally, unless you are using
some other memory management above operator new. On the other hand,
it's really rare that you don't handle file not found locally; a system
which forces a throw for a failure to open a file is just going to cause
extra work for its users.
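Converting a return code into an exception is indeed just a small wrapper
(a sketch; open_file is an invented error-code API):

    #include <stdexcept>

    int open_file(const char* name);   // invented: returns 0 on success

    void open_file_or_throw(const char* name)
    {
        int rc = open_file(name);
        if (rc != 0)                   // turn the code into a throw
            throw std::runtime_error("open failed");
    }

Going the other way means wrapping every call in a try block and mapping
each exception type back onto a code, which is considerably more work.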
> > You use exceptions for exceptional cases, where there is no chance
> > of local recovery. You use return codes in all other cases.
> Again, how do you know who your caller is and what he does? Please
> give a simple rule to find out whether an error can be handled by your
> immediate caller or not. Honestly, even if such a rule existed I would
> still opt for the exception-only approach, for KISS reasons.
> > We must encounter different types of errors. For the most part,
> > with the exception of things like insufficient memory, I find myself
> > handling errors one or two levels above where they occur.
> This is contrary to my experience, please give an example.
> > > > Uh... well, that would create havoc, since we often can (and
> > > > like to) ignore error codes but not exceptions... (ever checked
> > > > the return code from printf?)
> > > It's always better to leave the decision to actively ignore an error
> > > to your clients.
> > You don't have the choice. If the client wants to ignore the error, he
> > will. If he wants to treat the error, he will.
> I was referring to the following: as long as you can't handle the
> runtime error locally, you had better inform your client instead of
> ignoring it and grinding on.
Totally agreed. Before exceptions, we had the case where most
programmers would do everything possible to handle the error locally,
and ignore it otherwise. With exceptions, I see most programmers making
no effort to handle an error locally, but just throwing an exception
anytime they aren't sure what to do. Neither situation is acceptable.
Other than that, the only errors I've not been able to handle locally
are those related to insufficient resources (mainly memory), and those
due to failure of the underlying model. If carefully planned for, some
of the insufficient resources can be handled by backing out of the
transaction, request or whatever, and just aborting it, instead of the
entire program. Such errors are candidates for exceptions. (As an
example of what I mean by "carefully planned for", you know when you
invoke operator new, so you can plan for it. No such luck with stack
overflow, however.) If carefully planned for, some of the failures in
the underlying model can be planned for and handled by backing out as
well. This is often the case of ios::bad, at least as long as the
actual writing is in some way synchronized with output requests -- not
necessarily one write per request, but at least no error except during a
request. Others, like say a parity failure on memory read, can't be.
While I'm at it, I might mention that istream is an awkward case,
because the standard makes no provision for distinguishing the type of
an error. Throwing on eof can only be a programming error, since it
will cause some reads to fail that would otherwise succeed. Throwing on
fail means that you get an exception on every possible condition,
without the slightest means of knowing whether it was due to end of
file, a format error, or a hardware read error. And throwing on bad is
useless, because istream never sets bad. In practice, the possibility
of throwing in an istream is worthless. But this isn't an argument
against exceptions in general; it is just due to a faulty interface.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
AFAIR the basic guarantee says that an object (when an exception happens)
must be in a state consistent enough to be destroyed without leaking
resources. Nothing else is guaranteed, am I right?
B.
Not quite. The basic guarantee requires that there will be no resource
leaks, and the object will be both usable and destructible (but it is not
required to be in a predictable state).
Etc. could stand for failures of whatever hardware you have to control.
Another thing is thread cancellation (wait functions fail with an exception,
so that a thread can be interrupted without leaking resources).
> What about a compiler, for example? Would you
> consider an error in the program being compiled an error? (I wouldn't;
> it's an expected situation. And I certainly wouldn't use exceptions to
> handle it.)
Haven't got any non-toy experience with parsers but from what I've seen so
far I'd probably agree.
> > Both implement "perfect" error handling, i.e. for all runtime errors
> > that could possibly happen both programs have to try a remedy. I
> > believe both programs should roughly be in the same league for
> > executable size and runtime performance (on some platforms the program
> > employing exceptions could even be _faster_ and _smaller_ but I won't
> > go into details for now). Why? Well, consider how you have to write
> > the program that does not use exception handling: After almost each
> > and every function call you have to insert "if (error) return
> > error;".
>
> Nonsense. Only after function calls that can fail.
As pointed out above with thread cancellation a lot of functions can fail.
However, even if you don't need thread cancellation I think it is better to
program as if almost all functions can fail with runtime errors. Why?
Before I started working on projects that had such a coding policy, the
following scenario was all too familiar:
1. Programmer A writes a function that does not return an error-code.
2. Programmers B, C, D use the function in their code.
3. Some time later (days, weeks, months or even years), programmer A decides
that the function now can fail and henceforth returns an error-code. For
some reason, programmer A fails to ensure that all the necessary changes are
made. I don't really blame him, as the necessary changes could be quite
extensive and even the best programmers cut corners during a schedule
crunch.
4. The program now exhibits strange behavior (crashes under load,
functionality is different under certain situations, etc.)
5. Programmer C starts debugging and after a really long session finally
finds the bug and fixes it. It took that long because the bug surfaced in a
completely different corner of the program than where the problem was.
Of course points 1-3 also happen in programs making full use of exceptions.
However, the resulting bugs are so much easier to diagnose as the newly
thrown exception is almost always unexpected and leads to abort() (or
graceful shutdown, etc.). The fix usually also takes a lot less time as you
don't have to change the interfaces for code that only propagates the
exceptions.
As you pointed out, it's of course possible to enforce checking of the
return value which improves diagnosis but _not_ fixing as you still have to
change the interfaces of functions that only propagate the error. It was
this last point, combined with all the other advantages (ONE mechanism
for all runtime error-reporting, no tedious IsOk() or Init() after
construction, most third-party libraries throw exceptions, etc.), that
really convinced me to switch to exceptions.
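A deliberately simplified sketch of the cost difference (all names
invented): with error codes the fix ripples through every propagating
function, while with exceptions the intermediate function is untouched:

    #include <stdexcept>

    enum Err { Ok, DeviceFault };

    // Error-code style: once leaf() can fail, every function between
    // it and the handler must grow a check (and a changed return type).
    Err leaf() { return DeviceFault; }

    Err middle()
    {
        Err e = leaf();
        if (e != Ok) return e;     // the "if (error) return error;" tax
        return Ok;
    }

    // Exception style: middle_throwing() is untouched when
    // leaf_throwing() learns to fail; only the throw site and the
    // final handler change.
    void leaf_throwing() { throw std::runtime_error("device fault"); }

    void middle_throwing()
    {
        leaf_throwing();           // propagates automatically
    }

    int main()
    {
        if (middle() != Ok) { /* handle */ }
        try { middle_throwing(); }
        catch (std::runtime_error&) { /* handle */ }
        return 0;
    }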
[...]
> Things like opening a file should be tested and handled immediately, and
> the actual processing will never begin. No problem of propagating
> deeply here.
>
> What types of errors are you thinking of that require if's all over the
> place?
My last project was an ATM where one of the biggest challenges was the
correct handling of hardware failures. Since you typically have about 8
external devices to control and at the same time also have to communicate
over a network and log every step the program makes on two different media
(disk and paper) you pretty soon come to the conclusion that about 80% of
all functions can possibly fail (this is just a very rough guess and does
not include thread cancellation). Moreover, there are pretty clear
requirements for how the program must behave when any hardware fails. In the end
the project was 10% over budget and I'm quite sure that we wouldn't have
made it even within 20% without exception handling.
[ ... ]
> I think that everyone is in agreement that in such trivial cases, new
> should throw if it doesn't abort the program. There are other
> solutions, however, and they don't necessarily result in more if
> statements being written. (We used them before exceptions existed.)
> The most usual is simply to mark the object as bad, and continue. A bit
> like NaN in IEEE floating point. A lot more if's get executed and the
> run-time is probably a little slower than with exceptions, but the code
> size is probably smaller. With exceptions, I need table entries for all
> of the call spots where an exception may propagate. If I use deferred
> error checking, like with non-signaling NaN's, the only place I need to
> check for an error is at the end of the expression.
So it could be that a program uses three different ways of reporting
failures: error-codes, exceptions and NaNs?
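(For concreteness, the deferred scheme quoted above might look something
like the following sketch: a hypothetical matrix class that allocates
with new (std::nothrow), propagates badness like NaN, and is checked
once at the end of the expression.)

    #include <algorithm>
    #include <cstddef>
    #include <new>

    class matrix {
        std::size_t n_;
        double*     data_;
        bool        bad_;
    public:
        explicit matrix(std::size_t n)
            : n_(n), data_(new (std::nothrow) double[n]), bad_(data_ == 0) {}
        matrix(const matrix& o)
            : n_(o.n_), data_(new (std::nothrow) double[o.n_]),
              bad_(o.bad_ || data_ == 0)
        {
            if (!bad_) std::copy(o.data_, o.data_ + n_, data_);
        }
        ~matrix() { delete[] data_; }
        matrix& operator=(const matrix& o)
        {
            matrix tmp(o);
            std::swap(n_, tmp.n_);
            std::swap(data_, tmp.data_);
            std::swap(bad_, tmp.bad_);
            return *this;
        }
        bool bad() const { return bad_; }

        friend matrix operator*(const matrix& a, const matrix& b)
        {
            matrix r(a.n_);
            r.bad_ = r.bad_ || a.bad_ || b.bad_;  // propagate like NaN
            if (!r.bad_) { /* real multiplication elided */ }
            return r;
        }
    };

    // usage: one check at the end of the whole expression
    //     matrix z(16), a(16), b(16), c(16);
    //     z = a * b * c;
    //     if (z.bad()) { /* handle the allocation failure once */ }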
> My experience to date has been that programs employing exceptions are
> almost never correct, whereas programs with return values often are.
I've seen really bad abuses of exception handling as well but I have yet to
see a system with traditional error handling which comes close to being
correct. Maybe I'm just working in the wrong field?
[ ... ]
> If you're doing a green fields project,
> exceptions are definitely worth considering for some types of errors,
> provided you can be sure that the people working on the project are up
> to date, and know how to program with them.
Absolutely, proper education is crucial.
> exceptions are best when there aren't any
> try blocks,
Wild agreement! I haven't the code of my last project handy, but I'd guess
we don't have more than about 100 try blocks in our program (~0.25Mloc).
Regards,
Andreas
> LLeweLLyn <llewe...@xmission.dot.com> wrote:
> > I hope you are thinking of the basic guarantee and not the
> > strong. IMO, especially when dealing with 3rd party libs, the
> > strong guarantee is too hard to provide in all but a few places.
>
> AFAIR basic guarantee says that object (when exception happens) must be
> in state consistent enough to be destroyed without leaking resources.
> Nothing else is guaranteed, am I right ?
[snip]
Yes. Here are the definitions I use:
http://www.boost.org/more/generic_exception_safety.html
and scroll down to item 3.
I know what you mean, I've seen it as well. However, I would expect that the
same people who throw like wild would also return error-codes like wild ;-).
[ ... ]
> Or that
> I've noticed that most of the people throwing exceptions like wild don't
> have the foggiest notion as to what exception safety means (and have
> never heard of smart pointers, or RAII).
Yeah, and not surprisingly those people are often wild supporters of SESE
(single entry, single exit) as they would otherwise face the same resource
leaking problems as with exceptions.
> > Simply because it's easier to fix cases where people throw exceptions
> > and should not than the other way round.
>
> I'm not sure what you mean here. If the client code was written to use
> exceptions, it's going to silently ignore return values. If it was
> written to use return values, it likely won't compile with exceptions.
If someone chooses to throw an exception but should have returned an error
code then this problem is easier to detect than if someone chooses to return
an error-code but should have thrown an exception.
> > BTW, in my last project (~0.25Mloc) there was not a single case where
> > people should have used error-code returns rather than exceptions!
>
> You mean you never opened a file whose name was provided by the user.
Luckily ATMs don't let you do that ;-)
> > I believe exceptions are the right choice for 99.9% of your everyday
> > error-reporting needs.
>
> I think it depends on the type of application. Most of my work is on
> large servers; exceptions are useful there for aborting requests without
> bringing the system down. But I can't think of anywhere in a compiler
> where they would be appropriate.
Agreed.
> And I'm sceptical about graphic
> clients, although I'll admit that my scepticism may be partially based
> on my negative experience with exceptions in Java.
Out of curiosity: What's so bad about exceptions in Java?
[ ... ]
> > I don't see your point, as some applications simply cannot _afford_ to
> > detect the argument to operator[] being out of bounds (in release mode
> > of course).
>
> My point is that you shouldn't release code with an out of bounds
I totally agree. However, unless you use some kind of static analysis or
really skeptical human reviewers with a lot of time, I bet there are a few
program logic errors in just about any release of any real-world program. As
some programs simply cannot afford to detect such errors in a release, they
will thus run into undefined behavior.
> operator[]. And if you are testing it (which is nice whenever you can
> afford it), the correct response is probably to abort, rather than to
> throw at an unexpected moment.
As I write code with the assumption that almost any function can fail, an
exception is never unexpected. Moreover, because an ATM should avoid
running into undefined behavior at all costs, we throw program logic exceptions even
in release mode. They are never caught without being rethrown and will thus
lead to a graceful termination of the program. I know this is a policy that
a lot of experts would disagree with (they would just call abort(), which
then brings the program down) but it works quite nicely and allows us to
keep the watchdog simple.
> > > Operator new was also introduced before the dawn of exceptions.
> > > There is no non-EH behavior for backward compatibility. IO is a
> > > funny case;
>
> > Yes there is: new (nothrow)
>
> How does that solve a backward compatibility problem?
Agreed, I guess it was a bit late :-).
> I've never used an exception for eof. Nor for fail. I can see it for
> bad, in some cases. Generally speaking, however, it just doesn't seem
> worth the hassle.
[ ... ]
> > Yep, there sure are ways. However, you still have to use exceptions
> > for constructor failures which leaves you with two different
> > approaches of error reporting.
>
> You always have the alternative of making constructors which can't fail.
> Generally, if something can fail, and the failure can be handled
> locally, I'd avoid doing it in a constructor.
What do you do when functions are called on a half-constructed object (the
caller forgot to call Init(), Open() or whatever)?
[ ... ]
> There are, of course, also cases where you simply cannot afford to
> throw. You certainly wouldn't want to use the normal new in a tracing
> mechanism (where it might be called during stack walkback due to another
> exception); you use new (nothrow), and if it fails, you fall back to
> unbuffered output, but you don't throw, whatever happens.
You mean you don't want to let an exception propagate out of a destructor. I
can't see why you are not allowed to throw or use normal new.
[ ... ]
> Critical, like if the program fails, something bad happens. I've worked
> on things like locomotive brake systems, for example. Almost by
> definition, you can run out of a dynamic resource; whether you throw an
> exception, return NULL or abort in the new handler, the train is going
> to crash. And the railroads don't like that.
Agreed.
> In general, I'd expect the case to
> occur in any system which has its own memory management, and only uses
> operator new to get the blocks it manages.
Good point.
> > How do you know that your immediate caller will be able to handle the
> > error? IMO, there's no way to tell but I'd be very interested if you
> > have come up with an _easy_ scheme that allows you to do so.
>
> When in doubt, suppose that he does. It's easier (and a lot cheaper) to
> convert a return code into an exception than it is to convert an
> exception into a return code.
As the thrown exception is not part of the interface, this may lead to a few
different exceptions being thrown to report exactly the same error. It
works, but it will make it more difficult to understand the error handling.
> On the other hand,
> it's really rare that you don't handle file not found locally; a system
> which forces a throw for a failure to open a file is just going to cause
> extra work for its users.
On the ATM we open a file the name of which is given in a configuration
file. If the file cannot be opened, the configuration is corrupt and the
only thing we can do is shut down. So I simply let the stream throw.
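In stream terms this is just the following sketch (the helper is
invented): set the exception mask before the open, and a corrupt
configuration surfaces as an exception wherever the shutdown handler
sits:

    #include <fstream>
    #include <string>

    std::string read_config_line(const std::string& path)
    {
        std::ifstream in;
        // Any failure from here on -- including a failed open --
        // becomes a std::ios_base::failure and propagates out.
        in.exceptions(std::ios::failbit | std::ios::badbit);
        in.open(path.c_str());
        std::string line;
        std::getline(in, line);
        return line;
    }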
[ ... ]
> Totally agreed. Before exceptions, we had the case where most
> programmers would do everything possible to handle the error locally,
> and ignore it otherwise. With exceptions, I see most programmers making
> no effort to handle an error locally, but just throwing an exception
> anytime they aren't sure what to do. Neither situation is acceptable.
Agreed.
> Other than that, the only errors I've not been able to handle locally
> are those related to insufficient resources (mainly memory), and those
> due to failure of the underlying model.
I'd add external hardware to the list.
[ ... ]
> While I'm at it, I might mention that istream is an awkward case,
> because the standard makes no provision for distinguishing the type of
> an error. Throwing on eof can only be a programming error, since it
> will cause some reads to fail that would otherwise succeed.
True for some cases, but I'd claim that it makes perfect sense in others.
I'll dig out an example...
> Throwing on
> fail means that you get an exception on every possible condition,
> without the slightest means of knowing whether it was due to end of
> file, a format error, or a hardware read error.
Sometimes you don't care, see above.
> And throwing on bad is
> useless, because istream never sets bad.
Interesting, I didn't know.
Regards,
Andreas
> Francis,
>
> > The problem with using any form of new other than new(nothrow) is that
> > the implementation just about has to put in all the exception handling
> After almost each and every function call you have to
> insert "if (error) return error;".
'almost each and every'? Nonsense. In most projects I've worked on,
the majority of functions can't fail. Now I can imagine projects
where functions that can't fail are the minority, but I believe
they remain an important minority (as opposed to the insignificant
minority you imply).
> Because in C++ one line of code often results in more than one function
> call (copy constructors, overloaded operators, etc.) you are forced to
> tear apart a lot of expressions. For example, the following results in 3
> function calls for the expression z = ... alone, and every single one
> could result in an exception being thrown (because all may allocate
> memory)!
>
> matrix z, a, b, c;
> // fill a, b, c ....
> z = a * b * c;
Math functions that use return codes to signal out of memory errors?
*shiver*. Have you ever encountered a matrix, vector, tensor, or
bigint class whose binary operator* returns a special value for an
out of memory error? Would you dare to use it if you did? (Of
course *you* wouldn't; you would want exceptions to be thrown. But
even where exceptions are unavailable, I've never seen math
functions use return codes to signal non-math errors.)
Maybe I'm biased. I've mostly used matrix and vector classes in games,
where only 2d, 3d and 4d matrices and vectors are used, and they're
such a big performance bottleneck that no-one dares allocate memory in
them, or check them for errors. And they're important enough that
somebody has re-written them in assembler, for each of 3 different
platforms.
In some of the projects I've worked on, the best way to handle out of
memory errors has been 'dump_memory_control_blocks();abort();' ;
there was only one program running (which had to push 100s of
thousands of textured polys per frame), no OS, no VM MMU, and the
smallest amount of RAM the hardware vendor thought they could get
away with. Exceptions were disabled by default, and return codes
were used for things like forcing the CD/DVD reading functions to
retry on failure.
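The shape of that return-code retry style, as a sketch with invented
names (a stub stands in for the real driver call):

    enum ReadStatus { READ_OK, READ_RETRYABLE, READ_FATAL };

    // Stub standing in for the real (hypothetical) driver call.
    ReadStatus read_sector(unsigned /*sector*/, unsigned char* /*buf*/)
    {
        return READ_OK;
    }

    ReadStatus read_with_retry(unsigned sector, unsigned char* buf)
    {
        for (int attempt = 0; attempt < 5; ++attempt) {
            ReadStatus s = read_sector(sector, buf);
            if (s != READ_RETRYABLE)
                return s;      // success, or a genuinely fatal error
            // otherwise: reseek / respin and try again
        }
        return READ_FATAL;
    }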
[snip]
> To cut a long story short, I believe in a lot of cases programs
> employing traditional error-handling are only faster and smaller
> because they almost never reach the level of correctness of programs
> employing exceptions.
[snip]
Given the reliability of certain programs which never use exceptions
and the bugginess of certain competitors which use exceptions
throughout, I believe there are factors (such as good
design, good testing, etc) which are far more important to
correctness than exceptions vs error-return codes. I'll believe
that exceptions can express error-handling relationships return
codes cannot, and I'll believe judicious application of those
relationships can improve program correctness (and maybe even
performance if one has a range-table implementation of
exceptions), but they don't cure cancer, the common cold, or world
hunger.
Furthermore I still believe that there are times when one can expect
one's immediate caller to handle the error, and in such cases,
throwing an exception is inefficient, unnecessary, *and* often
no less error-prone.
It's quite acceptable in many interactive programs to *not abort* on
out of bounds. Imagine something like MS Office which can do a
thousand different things. When one single (or even several) piece of
functionality fails because of an index out of bounds, the exception
is caught by the outermost handler, it displays a message, and the
program goes on waiting for further user's input. You can then do any
of the other 999 things which work.
Regards
Igor
Well for me it was the opposite. My very first project was a green-fields
CAD-like project back in 1994, which - now that I'm thinking about it -
probably wouldn't have benefitted much from the use of exceptions (exception
handling wasn't yet in a mature enough state that we considered using it).
All other projects involved some form of external hardware the control of which
was a very central point in the application. The hardware could fail in
various ways (jammed paper in the printer, no paper in the printer, printer
out of order, jammed cash dispenser, jammed ATM card, unreadable ATM card,
etc. the list is virtually endless). The hardware was controlled through a
series of layers mostly implemented with state-machines. I very much believe
that it would have killed us, had error propagation not been automatic.
Last but not least we also had the problem of thread cancellation, which
I believe can only be tackled with exceptions or some form of OS support.
> Math functions that use return codes to signal out of memory errors?
> *shiver*. Have you ever encountered a matrix, vector, tensor, or
> bigint class whose binary operator* returns a special value for an
> out of memory error? Would you dare to use it if you did? (Of
> course *you* wouldn't; you would want exceptions to be thrown. But
> even where exceptions are unavailible, I've never seen math
> functions use return codes to signal non-math errors.)
Of course I've never seen anything like this. I just wanted to point out
what you *would* have to do with _error-code_returns_ in case you wanted to
achieve the same level of correctness as with exceptions. Were I to
implement such a library without exceptions, I'd probably do what James
recommended: mark the result as invalid.
[ ... ]
> In some of the projects I've worked on, the best way to handle out of
> memory errors has been 'dump_memory_control_blocks();abort();' ;
> there was only one program running (which had to push 100s of
> thousands of textured polys per frame), no OS, no VM MMU, and the
> smallest amount of RAM the hardware vendor thought they could get
> away with. Exceptions were disabled by default, and return codes
> were used for things like forcing the CD/DVD reading functions to
> retry on failure.
James has already convinced me, I guess I wouldn't use exceptions here
anymore either.
[ ... ]
> Given the reliability of certain programs which never use exceptions
> and the bugginess of certain competitors which use exceptions
> throughout, I believe there are factors (such as good
> design, good testing, etc) which are far more important to
> correctness than exceptions vs error-return codes.
Definitely. But I'd claim that exceptions are _never_ the culprit, given
that your programmers know what they are doing!
[ ... ]
> Furthermore I still believe that there are times when one can expect
> one's immediate caller to handle the error, and in such cases,
> throwing an exception is inefficient, unnecessary, *and* often
> no less error-prone.
This I'll probably never agree with. To me it's still extremely difficult to
know whether your _immediate_ caller is able to handle an error. Ok, file
open is probably a good example of an almost-always locally handleable
error. However, I sometimes find myself introducing another abstraction
layer to reduce complexity during iterative development which could of
course move the handler one more level away from the throw point. With
exceptions this is much easier than with error-codes. In general, a
function cannot normally "know" how many abstraction layers its clients
use.
Moreover, if you are working on a platform where exceptions are enabled
anyway, why not use exceptions throughout? This saves you and your team from
making a bunch of decisions most programmers have their problems with and
comes at a very low price (assuming that your programmers are proficient in
exception safety issues).
Regards,
Andreas
>It's quite acceptable in many interactive programs to *not abort* on
>out of bounds. Imagine something like MS Office which can do a
>thousand different things. When one single (or even several) piece of
>functionality fails because of an index out of bounds, the exception
>is caught by the outermost handler, it displays a message, and the
>program goes on waiting for further user's input. You can then do any
>of the other 999 things which work.
The big error in this reasoning is IMO that after a serious
logic error is found you can't assume that everything else
is in good shape and in a state foreseen by the programmer.
Why should the system be ok ? Surely not because you programmed
it correctly, because the presence of that logic error is
screaming that your logic is broken.
Common experience in programming is that the bug manifestation
is quite often far (millions of instructions executed) from
the bug itself, and that an out-of-bounds access is in the normal
case just a bug manifestation that happens in a perfectly safe
code section. Often it's just the *victim* of the bug that
goes crazy.
What does this mean ? That if you get an out-of-bounds
exception during a recomputation of an embedded spreadsheet
the *worst* thing you can do is allow the user to save
the document: that is probably not really the document any
more, but just a pile of junk bytes without any trustable
meaning. Save that over the old copy and you just multiplied
the damage: instead of losing half an hour your user lost
a month (and s/he'll notice that two weeks from now).
If there are no pseudo-physical "barriers" that you can
feel confident about then assuming that an out-of-bounds
error in the spreadsheet recomputation is a bug in the
spreadsheet recomputation is quite naive.
That's why I think that allowing a process to continue
after e.g. a sigsegv is detected is pure nonsense in 99.999%
of the situations. And under Windows it is IMO nonsense in
100% of them, because if you happen to work in a field
where trapping sigsegv and continuing is a must then your
choice of Windows as OS is highly questionable.
Andrea
But std::logic_error is still part of well-defined behavior. If
some code can detect an attempt to perform undefined behavior
*before* the undefined behavior actually happens (e.g. an attempt
to deref a null ptr or go out of bounds), why not throw the
exception; there's no need to abort() immediately,
because undefined behavior has *not* yet been
encountered (as far as you know...).
> What does this mean ? That if you get an out-of-bounds
> exception during a recomputation of an embedded spreadsheet
> the *worst* thing you can do is allow the user to save
> the document: that is probably not really the document any
> more, but just a pile of junk bytes without any trustable
> meaning. Save that over the old copy and you just multiplied
> the damage: instead of losing half an hour your user lost
> a month (and s/he'll notice that two weeks from now).
How do you figure this? std::logic_error means that C++ code
detected an attempt to cause undefined behavior *before* it
actually happened. This means the program state is *not* yet
corrupt, and if a high-level controller can isolate the
offending module and all the intervening code is exception-safe
(well maybe that's a long shot, but just because it tried to do
something bad doesn't necessarily mean it can't clean up
properly behind itself) then saving the document should still be
fairly safe.
On the other hand...
> That's why I think that allowing a process to continue
> after e.g. a sigsegv is detected is pure nonsense in 99.999%
> of the situations.
Absolutely agreed. But I think that this is a completely
different situation from std::logic_error. In this
case, it would indeed be a horrible idea to try to save the user
data, as corruption is a strong possibility. Also note that
catch(...) cleanup/swallow blocks can interfere with sigsegv-type
stuff on many OSs (including Windows) and are therefore also
"pure nonsense" in 99.999% of situations. (Not to mention
unnecessary, anyhow -- just use ScopeGuard or RAII cleanup and
avoid the headache :-)
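(To spell the RAII point out with a minimal sketch: the cleanup lives in
a destructor, so it runs on both the normal and the exceptional path,
and no catch(...) block is needed.)

    #include <cstdio>

    class FileCloser {
        std::FILE* f_;
    public:
        explicit FileCloser(std::FILE* f) : f_(f) {}
        ~FileCloser() { if (f_) std::fclose(f_); }
    private:
        FileCloser(const FileCloser&);             // not copyable
        FileCloser& operator=(const FileCloser&);
    };

    void write_report()
    {
        std::FILE* f = std::fopen("report.txt", "w");
        if (!f) return;
        FileCloser guard(f);   // closes f however we leave this scope
        // ... code that may throw ...
    }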
hys
--
(c) 2003 Hillel Y. Sims
hsims AT factset.com
So now I have a question: How common is it to use functions which can
fail in complex expressions?
[snip]
> > Given the reliability of certain programs which never use exceptions
> > and the bugginess of certain competitors which use exceptions
> > throughout, I believe there are factors (such as good
> > design, good testing, etc) which are far more important to
> > correctness than exceptions vs error-return codes.
>
> Definitely. But I'd claim that exceptions are _never_ the culprit, given
> your programmers know what they do!
[snip]
I would guess exceptions *are* an important factor(1) in a minority of
cases - even given programmers who are familiar with exception
safety. (I do wonder how common such programmers are; I've never
worked anywhere where more than 1 programmer in 10 could describe
even 1 exception safety idiom. However, certain newsgroups and
mailing lists seem to be stuffed with such programmers. And I've done
a fair amount of work in environments where exceptions were
unavailable.)
(1) Not necessarily the 'culprit'; in real projects, there are always
more than 3 places to put the blame when something goes wrong. And
however error-prone a feature, one can always use a combination of
prior planning, design, and education to reduce error.
Neither I nor you can strictly prove anything here. My real-life
experience shows that if the program and the data it operates on are
not too monolithic it makes sense not to kill it on such errors
because most data is not corrupted. For example, I have a reporting
system which copes fine with many types of reports but chokes in some
cases (some types of SQL queries interacting with proprietary data in
a not-yet-well defined manner). I know that if it fails on a report,
only this report would be inconsistent, so it makes sense to warn the
user and let the program live on. On the other hand, if I become
pedantic then I should assume that my results may not be correct even
if no errors were reported - simply because, as we all know, every
program has at least one bug.
It may be different in different areas but my approach works in mine.
I also know that no experienced person would blindly trust a
spreadsheet program even if it doesn't manifest any errors - after all
it's the user's responsibility to give correct results to the
customer, and also that same experienced person would find its way
with the program even if some functions don't work as expected.
>
> Common experience in programming is that the bug manifestation
> is quite often far (millions of instructions executed) from
> the bug itself, and that an out-of-bounds access is in the normal
> case just a bug manifestation that happens in a perfectly safe
> code section. Often it's just the *victim* of the bug that
> goes crazy.
I agree about bug manifestation in general, but again my experience
with array index range checking in particular is that is in 99% of the
cases a direct local error which results in an unexpected index value.
>
> What does this mean ? That if you get an out-of-bounds
> exception during a recomputation of an embedded spreadsheet
> the *worst* thing you can do is allow the user to save
> the document: that is probably not really the document any
> more, but just a pile of junk bytes without any trustable
> meaning. Save that over the old copy and you just multiplied
> the damage: instead of losing half an hour your user lost
> a month (and s/he'll notice that two weeks from now).
>
> If there are no pseudo-physical "barriers" that you can
> feel confident about then assuming that an out-of-bounds
> error in the spreadsheet recomputation is a bug in the
> spreadsheet recomputation is quite naive.
> That's why I think that allowing a process to continue
> after e.g. a sigsegv is detected is pure nonsense in 99.999%
> of the situations. And under Windows it is IMO nonsense in
> 100% of them, because if you happen to work in a field
> where trapping sigsegv and continuing is a must then your
> choice of Windows as OS is highly questionable.
I don't know what sigsegv is, so I cannot comment. I am relying on my own
experience with certain types of programs in certain areas. I am not
saying that letting the program go on after an out-of-bounds error is
always acceptable.
Igor
> "Bronek Kozicki" <br...@rubikon.pl> writes:
>
> > LLeweLLyn <llewe...@xmission.dot.com> wrote:
> > > I hope you are thinking of the basic guarantee and not the
> > > strong. IMO, especially when dealing with 3rd party libs, the
> > > strong guarantee is too hard to provide in all but a few places.
> >
> > AFAIR the basic guarantee says that an object (when an exception
> > happens) must be in a state consistent enough to be destroyed
> > without leaking resources. Nothing else is guaranteed, am I right ?
> [snip]
>
> Yes. Here are the definitions I use:
>
> http://www.boost.org/more/generic_exception_safety.html
>
> and scroll down to item 3.
Ah, but you contradict yourself. Item 3 very clearly says that more
is guaranteed: "the invariants of the component are preserved."
Cheers,
Dave
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
I think there are cases (of which opening a file is one) of procedures
(i.e. the return value has no computational significance) that naturally
return a bool to indicate success/failure. Changing those to return void
and throw an exception means that the caller who would naturally have
written:
if( !procedure(<arguments>)) { /* whatever */ }
Must write:
try {
procedure(<arguments>);
....
}
catch(problem &){ }
What makes this worse is where there is a natural alternative at the
call site in the event of failure which will allow the rest of the code
to flow naturally. Exceptions make that awkward exactly because one of
the advantages of exceptions is the separation of detection from
handling.
Not all failures are exceptional and we should not pre-empt other
programmers' decisions.
--
ACCU Spring Conference 2003 April 2-5
The Conference you cannot afford to miss
Check the details: http://www.accuconference.co.uk/
Francis Glassborow ACCU
> LLeweLLyn <llewe...@xmission.dot.com> writes:
>
> > "Bronek Kozicki" <br...@rubikon.pl> writes:
> >
> > > LLeweLLyn <llewe...@xmission.dot.com> wrote:
> > > > I hope you are thinking of the basic guarantee and not the
> > > > strong. IMO, especially when dealing with 3rd party libs, the
> > > > strong guarantee is too hard to provide in all but a few places.
> > >
> > > AFAIR the basic guarantee says that an object (when an exception
> > > happens) must be in a state consistent enough to be destroyed
> > > without leaking resources. Nothing else is guaranteed, am I right ?
> > [snip]
> > Yes. Here are the definitions I use:
> > http://www.boost.org/more/generic_exception_safety.html
> >
> > and scroll down to item 3.
> Ah, but you contradict yourself. Item 3 very clearly says that more
> is guaranteed: "the invariants of the component are preserved."
Correct; I misread what Bronek wrote.
Using exceptions throughout is of course kind of foolproof in one
respect, but it sure can make certain fragments of the program more
complicated and difficult to maintain, which can translate into more,
not fewer, bugs. I think the right approach is for some functionality
to expose "dual interface" - one throwing and one non-throwing. For
example I find it is often useful to have something like this:
int string2int(const string& s, int& error_code) - doesn't throw,
return value may be undefined depending on error_code;
int string2int(const string& s) - throws, uses the previous one
internally;
The client can then choose the right version for the circumstances. If
I know how to deal with an invalid string immediately (which may or
may not result in throwing an exception later), I use the first
one, no need for try/catch hassle. If I don't know what to do with it,
I use the second one.
If only one version is exposed to clients, then it would normally be
the throwing one.
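As a sketch (strtol does the parse; the error enum is invented), the
pair might look like:

    #include <cerrno>
    #include <climits>
    #include <cstdlib>
    #include <stdexcept>
    #include <string>

    enum ParseError { PARSE_OK, PARSE_BAD_FORMAT, PARSE_OUT_OF_RANGE };

    // Non-throwing version: the caller inspects error_code.
    int string2int(const std::string& s, ParseError& error_code)
    {
        errno = 0;
        char* end = 0;
        long v = std::strtol(s.c_str(), &end, 10);
        if (end == s.c_str() || *end != '\0') {
            error_code = PARSE_BAD_FORMAT;
            return 0;
        }
        if (errno == ERANGE || v > INT_MAX || v < INT_MIN) {
            error_code = PARSE_OUT_OF_RANGE;
            return 0;
        }
        error_code = PARSE_OK;
        return static_cast<int>(v);
    }

    // Throwing version, implemented with the non-throwing one.
    int string2int(const std::string& s)
    {
        ParseError e;
        int v = string2int(s, e);
        if (e != PARSE_OK)
            throw std::invalid_argument("string2int: \"" + s + "\"");
        return v;
    }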
Exception safety, however (as defined by Abrahams), is not related
directly to whether you use exceptions or not, so I find the name
somewhat misleading. It's rather about error safety, and you have to
deal with much the same issues whether or not you use exceptions for
error reporting and handling.
Regards
Igor
> > >It's quite acceptable in many interactive programs to *not abort*
> > >on out of bounds. Imagine something like MS Office which can do a
> > >thousand different things. When one single (or even several) piece
> > >of functionality fails because of an index out of bounds, the
> > >exception is caught by the outermost handler, it displays a
> > >message, and the program goes on waiting for further user's
> > >input. You can then do any of the other 999 things which work.
> > The big error in this reasoning is IMO that after a serious logic
> > error is found you can't assume that everything else is in good
> > shape and in a state foreseen by the programmer. Why should the
> > system be ok ? Surely not because you programmed it correctly,
> > because the presence of that logic error is screaming that your
> > logic is broken.
> But std::logic_error is still part of well-defined behavior. If some
> code can detect an attempt to perform undefined behavior *before* the
> undefined behavior actually happens (e.g. an attempt to deref a null
> ptr or go out of bounds), why not throw the exception; there's no need
> to abort() immediately, because undefined behavior has *not* yet been
> encountered (as far as you know...).
You seem to have missed the point. The question is: what caused the
index to be out of bounds? There are two possible answers:
- The user has entered some invalid data. In this case, of course,
you would expect the data to have been validated, and the user to
have gotten an error message, before you get to the point of using a
bounds check error to detect the invalid data. It's a very rare
case indeed where std::vector<>::at should be used for validating
user input.
- There is a program error somewhere. It may just be that the
routine which calculates the index has an error in its algorithm,
which results in an invalid index; in this case, all you say is
valid. It is at least equally likely, however, that code somewhere
else has screwed up, and has written data where it shouldn't; the
most frequent errors of this sort I see is code writing to memory
that it has already freed (and which has, in turn, been allocated as
the object which is managing the index). In this case, I don't
think that there is much that you can do. Any work in progress that
hasn't been saved recently should be dumped to a special file, so
that the user can look at it when he restarts the program, and
decide whether it is what he wants, or whether it was corrupted
before the error was detected.
> > What does this mean ? That if you get an out-of-bounds exception
> > during a recomputation of an embedded spreadsheet the *worst* thing
> > you can do is allow the user to save the document: that is
> > probably not really the document any more, but just a pile of junk
> > bytes without any trustable meaning. Save that over the old copy
> > and you just multiplied the damage: instead of losing half an hour
> > your user lost a month (and s/he'll notice that two weeks from
> > now).
> How do you figure this? std::logic_error means that C++ code detected
> an attempt to cause undefined behavior *before* it actually happened.
> This means the program state is *not* yet corrupt, and if a high-level
> controller can isolate the offending module and all the intervening
> code is exception-safe (well maybe that's a long shot, but just
> because it tried to do something bad doesn't necessarily mean it can't
> clean up properly behind itself) then saving the document should still
> be fairly safe.
> On the other hand...
> > That's why I think that allowing a process to continue after e.g. a
> > sigsegv is detected is pure nonsense in 99.999% of the situations.
> Absolutely agreed. But I think that this is a completely different
> situation than std::logic_error, however.
Why? It is, after all, just a bounds check error on an array -- in this
case, the array of bytes allocated to the program by the system.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
Quite common, at least in real-number maths, e.g. acos(2). However, such
failures are only a very small issue when your processor supports stuff
like NaN and Inf.
> (1) Not necessarily the 'culprit'; in real projects, there are always
> more than 3 places to put the blame when something goes wrong. And
> however error-prone a feature, one can always use a combination of
> prior planning, design, and education to reduce error.
I wouldn't call C++ exception handling exactly error-prone. Rather, a
complex issue (error handling) is addressed by a language feature which can
be abused so easily simply because the issue itself (and not the feature) is
difficult (error handling with error-code returns isn't a piece of cake
either). As a result, the feature cannot be made fool-proof. This is true
for almost any other "advanced" C++ feature. For example, there was a time
when inheritance was praised as _the_ mechanism for code reuse. Everyone
then went to design their really deep hierarchies where every class
inherited from all other classes (;-)) and everyone was happy because it
felt good to reuse code and finally do something about the software crisis
(I'm guilty as well). Then word got around that this is probably not such a
good idea (Liskov principle) and a few years later a lot of programmers seem
to know this. At least I have seen a lot fewer abuses in my last project
than I used to.
I would claim the same about RTTI and templates. However, most abuses of
templates or RTTI do not lead to _incorrect_ code but rather to badly
maintainable and/or slow and/or bloated code.
Regards,
Andreas
> > What's in the etc.?
> Etc. could stand for failures of whatever hardware you have to
> control.
I see three issues, which probably should be treated distinctly (if only
to be sure we're talking about the same thing):
- A failure in the underlying abstract model. Various IO problems
come in here -- the abstract model of reading from a disk supposes
that you read what was written, a hardware error is not part of the
model. Most people tend to treat writing to disk or allocating
memory in this category; it's a lot simpler.
In some cases, such errors are simply unrecoverable; if your CPU
goes up in smoke, there's not much you can do. Others, like stack
overflow, can occur at practically any moment; handling them with an
exception means that exception safe code is impossible. Still, that
leaves quite a few others. These are really the only place I can
see exceptions as being reasonable.
- A software failure. A pre- or a post-condition check fails, for
example. I'm dubious about these -- if the code is wrong, how can
you recover? You can't rewrite it from within the program. I
suppose that in some special cases, you might be able to disable
certain functions, and stumble on, hoping that the error didn't
destroy data needed by the other functions. I've never seen such a
case, however.
- An error in the input data. That's something that you should
expect, and that you should be able to handle locally. Not a job
for exceptions. Unless, of course, that input data is somehow
guaranteed by your underlying abstract model -- a temporary file
that you write and then reread, for example.
> Another thing is thread cancellation (wait functions fail with an
> exception, so that a thread can be interrupted without leaking
> resources).
Thread cancellation is a delicate subject. Generally speaking, it's
something best avoided, at least in any asynchronous fashion. In all
cases I've seen, we have in fact used other mechanisms to terminate the
thread, all of which intervened at the level of the main loop of the
thread, i.e. the top-most level.
How is this any different from the programmer changing some of the
preconditions of a function? The ways a function can fail are part of
its contract. If it fails in a new way, then the contract has changed.
You should probably change its name, so that all callers are required to
review their code.
Note that exceptions don't change anything here. Code isn't
automatically exception safe. At the lowest level, in fact,
implementing exception safety requires that some functions cannot
throw.
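The classic illustration is copy-and-swap -- a sketch, not anyone's
production code: the copying may throw, but the commit step is a swap
that must not:

    #include <algorithm>
    #include <cstddef>

    class Buffer {
        char*       data_;
        std::size_t size_;
    public:
        explicit Buffer(std::size_t n) : data_(new char[n]), size_(n) {}
        Buffer(const Buffer& o) : data_(new char[o.size_]), size_(o.size_)
        {
            std::copy(o.data_, o.data_ + size_, data_);
        }
        ~Buffer() { delete[] data_; }

        void swap(Buffer& o)                // must not throw
        {
            std::swap(data_, o.data_);
            std::swap(size_, o.size_);
        }

        Buffer& operator=(const Buffer& o)  // strong guarantee
        {
            Buffer tmp(o);                  // may throw; *this untouched
            swap(tmp);                      // nothrow commit step
            return *this;
        }
    };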
> Of course points 1-3 also happen in programs making full use of
> exceptions. However, the resulting bugs are so much easier to
> diagnose as the newly thrown exception is almost always unexpected and
> leads to abort() (or graceful shutdown, etc.).
Unless exceptions are treated at some higher level, which is usually the
case. My function calls a function that cannot throw, so it is not
written to be exception safe. But it is called from a function (probably
several levels up) in a try block, because most of the code is called
from this try block.
The whole justification for exceptions lies in the intermediate
functions. They have to know whether the functions they call can throw
or not (in order to ensure that they are exception safe), but they don't
have to know any of the details about what can or cannot be thrown.
> The fix usually also takes a lot less time as you don't have to change
> the interfaces for code that only propagates the exceptions.
If a function propagates an exception, and it didn't before, you have
changed its interface. Just not in a way that the compiler can detect.
> As you pointed out, it's of course possible to enforce checking of the
> return value which improves diagnosis but _not_ fixing as you still
> have to change the interfaces of functions that only propagate the
> error.
If you've changed the contract, you've changed the interface. The only
remaining question is whether this change is detected by the compiler or
not.
> It was this last point, combined with all the other advantages (ONE
> mechanism for all runtime error-reporting, no tedious IsOk() or Init()
> after construction, most third-party libraries throw exceptions,
> etc.), that really convinced me to switch to exceptions.
> [...]
> > Things like opening a file should be tested and handled
> > immediately, and the actual processing will never begin. No
> > problem of propagating deeply here.
> > What types of errors are you thinking of that require if's all over
> > the place?
> My last project was an ATM where one of the biggest challenges was the
> correct handling of hardware failures. Since you typically have about
> 8 external devices to control and at the same time also have to
> communicate over a network and log every step the program makes on two
> different media (disk and paper) you pretty soon come to the
> conclusion that about 80% of all functions can possibly fail (this is
> just a very rough guess and does not include thread cancellation).
> Moreover, there are pretty clear requirements for how the program must
> behave when any hardware fails. In the end the project was 10% over
> budget and I'm quite sure that we wouldn't have made it even within
> 20% without exception handling.
It sounds to me as though "error handling" in this case is rigorously part of
the application, and should be integrated into the algorithms used.
Obviously, not seeing the exact specifications, and not actually working
on the project, it is difficult to make global comments -- there could
be any number of problems that you faced that I am unaware of. But a
priori, this sounds like the type of application where I wouldn't use
exceptions at all. Recoverable errors are handled locally, and
unrecoverable errors are handled by passing control to a separate
routine, which logs the appropriate information, outputs the appropriate
message to the screen, and takes the machine out of service.
> [ ... ]
> > I think that everyone is in agreement that in such trivial cases,
> > new should throw if it doesn't abort the program. There are other
> > solutions, however, and they don't necessarily result in more if
> > statements being written. (We used them before exceptions
> > existed.) The most usual is simply to mark the object as bad, and
> > continue. A bit like NaN in IEEE floating point. A lot more if's
> > get executed and the run-time is probably a little slower than with
> > exceptions, but the code size is probably smaller. With
> > exceptions, I need table entries for all of the call spots where an
> > exception may propagate. If I use deferred error checking, like
> > with non-signaling NaN's, the only place I need to check for an
> > error is at the end of the expression.
> So it could be that a program uses three different ways of reporting
> failures: error-codes, exceptions and NaNs?
All C++ programs I know do. I've never seen one without error codes,
and I've never seen one that didn't use iostream (or at least ostream)
in a way that resembled NaN.
Like it or not, one size doesn't fit all. It would be nice if it did,
but that isn't the reality that I know.
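What I mean, in a sketch: the stream latches its failed state,
intermediate writes become quiet no-ops, and a single check at the end
suffices -- deferred checking, much like a propagating NaN:

    #include <iostream>

    bool write_all(std::ostream& out)
    {
        out << "header\n";   // if this fails, the stream goes bad ...
        out << "body\n";     // ... and these become quiet no-ops
        out << "trailer\n";
        return !out.fail();  // a single check at the end suffices
    }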
> > My experience to date has been that programs employing exceptions
> > are almost never correct, whereas programs with return values
> > often are.
> I've seen really bad abuses of exception handling as well but I have
> yet to see a system with traditional error handling which comes close
> to being correct. Maybe I'm just working in the wrong field?
I don't know. I'll admit that I've seen radically different quality
levels practiced in the different application domains where I've worked.
Which is to be expected: the results of a failure in a railway brake
system are a bit more drastic than in a Bank's information system (what
the employee uses to check whether you've any money in your account
before cashing your check). In the larger telephone systems, I've seen
programs run for years without a problem, without exceptions. Maybe I'm
just biased; most of my work on critical systems was done before
exceptions were available, and the systems worked. I've only seen one
more or less critical system which used exceptions, and frankly, they
didn't bring it much (but it was a very small system).
> [ ... ]
> > If you're doing a green fields project, exceptions are definitely
> > worth considering for some types of errors, provided you can be
> > sure that the people working on the project are up to date, and
> > know how to program with them.
> Absolutely, proper education is crucial.
For return codes as well. In the end, a lot depends on psychological
issues too. If your people are trained in using return codes, and doing
so correctly, I'd hesitate to change. If you're having problems,
introducing a new technology is a good justification for the extra
training that is needed, regardless of the relative merits of the
technology. In the end, you can't forget the people factor, since it is
people who are actually writing the code.
> > exceptions are best when there aren't any try blocks,
> Wild agreement! I haven't the code of my last project handy, but I'd
> guess we don't have more than about 100 try blocks in our program
> (~.25Mloc).
Well, I was thinking along the lines of maybe one try block per thread.
We have less than 10 types of threads, so that would be less than 100
try blocks. In some of the code I'm maintaining, it's running to
something like three per function. Of course, if it weren't try blocks,
it would be something else -- a function physically big enough to
contain three try blocks is too big, and too complex, regardless of what
you are doing.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
> I would guess exceptions *are* an important factor(1) in a minority of
> cases - even given programmers who are familiar with exception
> safety. (I do wonder how common such programmers are; I've never
> worked anywhere where more than 1 programmer in 10 could describe
> even 1 exception safety idiom. However, certain newsgroups and
> mailing lists seem to be stuffed with such programmers. And I've done
> a fair amount of work in environments where exceptions were
> unavailable.)
Just curious: could you point out some references to exception safety
idioms (especially on-line ones)?
Sincerely yours,
Anton.
I would suggest you need to think the other way round to decide
whether to throw an exception or set an error code when an error
occurs. There are two questions to ask
1) Are there circumstances in which an immediate caller may wish
to ignore the error?
2) Does ignoring the error have no adverse effects on the calling
program?
If the answer is yes to both questions, it's a fair call that throwing
an exception is undesirable.
File opening is one example that passes this screen.
>
> I think there are cases (of which opening a file is one) of procedures
> (i.e. the return value has no computational significance) that naturally
> return a bool to indicate success/failure. Changing those to return void
> and throw an exception means that the caller who would naturally have
> written:
>
> if( !procedure(<arguments>)) { /* whatever */ }
>
> Must write:
> try {
>     procedure(<arguments>);
>     ....
> }
> catch(problem &){ }
>
> What makes this worse is where there is a natural alternative at the
> call site in the event of failure which will allow the rest of the code
> to flow naturally. Exceptions make that awkward exactly because one of
> the advantages of exceptions is the separation of detection from
> handling.
Precisely.
>
> Not all failures are exceptional and we should not pre-empt other
> programmers' decisions.
>
Sort of agreed. The whole point of writing a library is to anticipate
a reasonable set of use cases. If another programmer can get into
trouble by ignoring an error, then we should not allow the error to be
ignored: i.e. throw an exception.
How often do you write code that opens a file, and does *nothing* on
failure? Surely in most cases, the failure is an exceptional
circumstance, and though it needs to be planned for, the behaviour in
such a case should not be part of the normal flow of code. There is not
really a lot of difference between
if( procedure(<arguments>))
{
    // do something useful
}
else
{
    if(weCanHandle(theErrorInTheProcedure()))
    {
        // handle the problems we know about
    }
    else
    {
        // we can't handle the error, pass it up the call chain,
        // or abort, or something
    }
}
and
try
{
    procedure(<arguments>);
    // do something useful
}
catch(problemsWeCanHandle &)
{
    // handle the problems we know about
}
// if we can't handle the error, the exception propagates automatically
Indeed, I think the code with exceptions is cleaner than that without.
> What makes this worse is where there is a natural alternative at the
> call site in the event of failure which will allow the rest of the code
> to flow naturally. Exceptions make that awkward exactly because one of
> the advantages of exceptions is the separation of detection from
> handling.
>
> Not all failures are exceptional and we should not pre-empt other
> programmers' decisions.
It is true that not all failures are exceptional; indeed a parser might
well just shove it on the error list and keep going. However, IMHO it is
preemptive to assume that other programmers *are* going to want to handle
the error locally, unless you are that other programmer, and this is
application-specific code rather than library code.
Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.
Remove NOSPAM when replying, for timely response.
> So now I have a question: How common is it to use functions which can
> fail in complex expressions?
At least as common as the use of a finite representation (double) for an
infinite concept (real numbers). That isn't the question. The question
is what to do when they fail. And in this case, return codes *aren't*
one of the available options.
In the case of the basic arithmetic types, the C/C++ standards quietly
sidestep the issue -- it's undefined behavior. In practice, on most
modern machines, integral expressions will silently give wrong answers,
and floating point will propagate an Infinity which can be tested for at
the end of the calculations. Historically, most of the math routines in
the C math library have just set errno.
We're not lacking precedents:-). Exceptions are yet another
alternative. My tendency would be to favor either something along the
IEEE model, propagating the error until the end of the calculation, or
else an outright termination of the program -- ranges should be
validated before starting the calculations. But I'm not sure that the
latter is always feasible, and the former has definite run-time
overhead.
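On an IEEE platform the propagating model costs one test at the very end
of the calculation, as in this sketch:

    #include <cmath>
    #include <cstdio>

    double risky_calculation(double x)
    {
        double a = std::acos(x);   // NaN if x is outside [-1, 1]
        double b = std::sqrt(a);   // NaN propagates through sqrt
        return b * 2.0 + 1.0;      // ... and through arithmetic
    }

    int main()
    {
        double r = risky_calculation(2.0);
        if (r != r)                // only NaN is unequal to itself
            std::printf("calculation failed\n");
        return 0;
    }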
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
> > > Of course I've never seen anything like this. I just wanted to point
> > > out what you *would* have to do with _error-code_returns_ in case
> > > you wanted to achieve the same level of correctness as with
> > > exceptions.
> > [snip]
> >
> > So now I have a question: How common is it to use functions which can
> > fail in complex expressions?
>
> Quite common, at least in real-number maths, e.g. acos(2). However,
> such failures are only a very small issue when your processor supports
> stuff like NaN and Inf.
Yes, I've done that, but I don't think about it much; most of my
experience is in applications where one can't afford to check
real-number math functions for failure. And of course (at least in
C++) real-number math function errors aren't handled by
exceptions. (As I'm sure you're aware.)
>
> > (1) Not necessarily the 'culprit'; in real projects, there are always
> > more than 3 places to put the blame when something goes wrong. And
> > however error-prone a feature, one can always use a combination of
> > prior planning, design, and education to reduce error.
>
> I wouldn't call C++ exception handling exactly error-prone. Rather, a
> complex issue (error handling) is addressed by a language feature which can
> be abused so easily simply because the issue itself (and not the feature) is
> difficult (error handling with error-code returns isn't a piece of cake
> either). As a result, the feature cannot be made fool-proof.
Of course. Exactly why my footnote begins with 'Not necessarily the
'culprit' ... '.
I think we can agree that:
(a) One can't use exceptions for their intended purpose without
    encountering a number of error opportunities which require
    some thought to avoid. (True of many other c++ features, as
    you say.)
(b) Some (most?) of the error opportunities mentioned in (a) have direct
    (and not-so-direct) equivalents in non-exception-using
    error-handling strategies.
(c) Despite the aforementioned error opportunities, exceptions remain
    quite valuable for error-handling.
Of course I still don't care for the idea of using exceptions for
every conceivable kind of error.
> This is true
> for almost any other "advanced" C++ feature. For example, there was a time
> when inheritance was praised as _the_ mechanism for code reuse. Everyone
> then went to design their really deep hierarchies where every class
> inherited from all other classes (;-)) and everyone was happy because it
> felt good to reuse code and finally do something about the software crisis
> (I'm guilty as well).
I started learning C++ in 1995, when books preaching that hearsay were
still in vogue. I too went down the same path - until I got some
experience with some container libs based around inheritance, and
learned how awful they were to use. Then I spent a similar amount
of time (ab)using templates for every conceivable task. Now I'm
wondering what language feature(s) I've been overusing for the
past few years. :-)
[snip]
> I think there are cases (of which open a file is one) of procedures
> (i.e. the return value has no computational significance) that naturally
> return a bool to indicate success/failure. Changing those to return void
> and throw an exception means that the caller who would naturally have
> written:
>
> if( !procedure(<arguments>)) { /* whatever */ }
>
> Must write:
> try {
> procedure(<arguments>);
> ....
> }
> catch(problem &){ }
>
This is - agreed - slightly inconvenient.
> What makes this worse is where there is a natural alternative at the
> call site in the event of failure which will allow the rest of the code
> to flow naturally. Exceptions make that awkward exactly because one of
> the advantages of exceptions is the separation of detection from
> handling.
>
> Not all failures are exceptional and we should not pre-empt the other
> programmers decisions.
I agree, but I find the "nothrow" approach from new very nice and
convenient in situations like the above: provide an extra function with
an extra std::nothrow parameter and allow that function to return an
error code whenever that is deemed reasonable.
I personally do not use the signature as an indication that every
failure is returned: rather, the nothrow version protects from a throw
in common scenarios. For a file open, this could be when the file does
not exist or when the file is already open with an exclusive lock; if
there is a hard disk error or some other atypical error I let the
function throw anyway.
This lets the user call the standard function in the "normal" throwing
version, in the easy nothrow version, or with full error handling
(try/catch) in the abnormal case.
While the selection of "removed" throws might seem arbitrary, I have
found that practical use does not pose any problems. Do others have
experiences similar to mine?
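As a concrete (and purely hypothetical) sketch of such a signature
pair, following the style of operator new(std::nothrow):

#include <new>    // std::nothrow_t, std::nothrow
#include <string>

class File {
public:
    // Throwing version: any failure becomes an exception.
    void open(const std::string& name);

    // Nothrow version: common failures (file missing, exclusive
    // lock) are reported via the return value; rare hard errors
    // (disk failure) may still throw.
    bool open(const std::string& name, std::nothrow_t);
};

// Usage:
//     File f;
//     if (!f.open("data.txt", std::nothrow)) { /* common case */ }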
Kind regards
Peter Koch Larsen
I am indeed guilty of missing the point quite often, but I'm not
entirely convinced yet in this particular case...
> The question is: what caused the index to be out of bounds? There are
> two possible answers:
>
> - The user has entered some invalid data. In this case, of course,
> you would expect the data to have been validated, and the user to
> have gotten an error message, before you get to the point of using a
> bounds check error to detect the invalid data. It's a very rare
> case indeed where std::vector<>::at should be used for validating
> user input.
>
Ok, so then if we're not even talking about logic_error coming
from this case anyhow then we can ignore this piece. If it does
cause logic_error to be raised somehow, then my point is still
valid - we have detected an attempt to perform an illegal
operation *before* it was actually performed, and I think this
makes a great deal of difference with respect to
unwinding/cleanup procedures.
> - There is a program error somewhere. It may just be that the
> routine which calculates the index has an error in its algorithm,
> which results in an invalid index; in this case, all you say is
> valid.
Well ok, if what I have said is valid, then can you please
explain how I missed the point? :-)
> It is at least equally likely, however, that code somewhere
> else has screwed up, and has written data where it shouldn't; the
> most frequent errors of this sort I see is code writing to memory
> that it has already freed (and which has, in turn, been allocated as
> the object which is managing the index). In this case, I don't
> think that there is much that you can do. Any work in progress that
> hasn't been saved recently should be dumped to a special file, so
> that the user can look at it when he restarts the program, and
> decide whether it is what he wants, or whether it was corrupted
> before the error was detected.
It is true, if we have entered the magical realm of "undefined
behavior" then there is not much we can do about it anyhow but
hope that the OS will detect it for us and shut down the app
before any more damage occurs, and it would probably be a bad
idea to overwrite an existing file as the data could be corrupt.
I do not buy your "equally likely" assumption though. I kind of
feel like the whole point of logic_error might be to detect
attempts to perform illegal behavior before it actually happens,
so that we can avoid this kind of scenario. It has been said
that it's not particularly valid to assume any sort of behavior
that could occur after illegal behavior has been encountered. If
you're going to assume that logic_error means that illegal
behavior has been performed, then you might as well assume that
illegal behavior has been performed at any other arbitrary point
in your code (and I think that would sort of lead us back down
the same path of troubles as the stack overflow exception
discussion).
>
> > > That's why I think that allowing a process to continue after e.g. a
> > > sigsegv is detected is pure nonsense in 99.999% of the situations.
>
> > Absolutely agreed. But I think that this is a completely different
> > situation than std::logic_error, however.
>
> Why? It is, after all, just a bounds check error on an array -- in this
> case, the array of bytes allocated to the program by the system.
>
But it is a bounds check performed within the context of
well-defined C++ behavior. If C++ code can detect that an
out-of-bounds error is inherent in a requested operation, before
actually performing the illegal operation, then it seems valid to
assume the program state is stable, and in some cases we might
clean up and continue execution (maybe we can isolate a
particular DLL-type module that did something wrong, roll back
its operation and shut it down; the overall program state is
not corrupt, so we can continue running without shutdown).
However, if the illegal access has actually been performed, then
we are now into the undefined-behavior realm, which on good OSs
will usually lead to stuff like sigsegv/accvio/gpf, and it's
usually best if the entire application is aborted immediately;
in some cases, if the C++ code continues to execute in a context
where it becomes aware of this (such as in signal handlers),
we should not try to do normal cleanup because the program
state (and data) may be corrupted.
hys
--
(c) 2003 Hillel Y. Sims
hsims AT factset.com
> alternative. My tendency would be to favor either something along the
> IEEE model, propagating the error until the end of the calculation, or
> else an outright termination of the program -- ranges should be
> validated before starting the calculations. But I'm not sure that the
> latter is always feasible, and the former has definite run-time
> overhead.
Propagating the error until the end has a problem: perhaps, due to the
failed state, one never reaches the end. It is a common mistake of
beginners to write a loop like this:
cout << "Enter numbers, 0 to end" << endl;
do
{
    cin >> num;        // if the extraction fails, num is not updated...
} while (num != 0);    // ...so this loop may never terminate
And test it with input such as "1 no_number".
Same situation can occur in more sophisticated code. You need to
specify the behaviour of all functions when called in a failed
state, and check carefully that all programmers take that into account.
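One way to repair the loop is to test the stream itself, since after a
failed extraction the stream is in a failed state and num holds a stale
value (a minimal sketch):

#include <iostream>

int main()
{
    int num = 0;
    std::cout << "Enter numbers, 0 to end\n";
    do
    {
        if (!(std::cin >> num)) // extraction failed: bail out instead
        {                       // of looping on the stale value
            std::cerr << "bad input\n";
            break;              // or clear() and ignore() to retry
        }
    } while (num != 0);
    return 0;
}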
Regards.
I don't think so. Once your team has learned how to write exception-safe
code (and it's really not that difficult), the code almost invariably
becomes more compact and more readable. Unlike in e.g. Java, you don't
have to litter your code with try { ... } catch ( ... ) {} blocks just to
clean up after yourself. E.g. in my last project (.25 Mloc) we don't have
more than about 100 try blocks. Granted, in cases where the error is
almost always handled by the immediate caller (e.g. file open), the code
becomes slightly less readable. But in my experience these cases are so
rare that it does not really hurt to have an exceptions-only policy.
> to expose "dual interface" - one throwing and one non-throwing. For
> example I find it is often useful to have something like this:
>
> int string2int(const string& s, int& error_code) - doesn't throw,
> return value may be undefined depending on error_code;
>
> int string2int(const string& s) - throws, uses the previous one
> internally;
>
> The client can then choose the right version for the circumstances. If
> I know how to deal with an invalid string immediately (which may or
> may not result in throwing an exception later), I use the first
> one, no need for try/catch hassle. If I don't know what to do with it,
> I use the second one.
I would hate to do that, as it doubles the number of functions in an
interface and often does more harm than good. In this particular
situation, if the programmer is given the choice, I believe that the
choice is based more on what he's comfortable with than on what the
technically best solution is. This is especially true for people who
have so far not had a lot of exposure to exceptions.
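For concreteness, the dual interface under discussion might be
sketched like this (the error-code convention is invented and
overflow handling is omitted):

#include <cstdlib>
#include <stdexcept>
#include <string>

// Non-throwing version: reports failure through error_code.
int string2int(const std::string& s, int& error_code)
{
    char* end = 0;
    long value = std::strtol(s.c_str(), &end, 10);
    error_code = (end == s.c_str() || *end != '\0') ? 1 : 0;
    return static_cast<int>(value); // meaningless if error_code != 0
}

// Throwing version, layered on the non-throwing one.
int string2int(const std::string& s)
{
    int error_code = 0;
    int value = string2int(s, error_code);
    if (error_code != 0)
        throw std::invalid_argument("string2int: bad input: " + s);
    return value;
}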
> Exception safety however (by Abrahams) is not related directly to
> whether you use exceptions or not, so I find its name somewhat
> misleading. It's rather about error safety and you have to deal with
> much the same issues whether you use exceptions or not for error
> reporting and handling.
I mainly agree, but: Without exceptions, you can get away with SESE (single
entry, single exit), as follows:
int f()
{
int error = 0;
// acquire resources here
if ( !( error = g() ) )
{
// ...
if ( !( error = h() ) )
{
// ...
if ( !( error = i() ) )
{
/* etc */
}
// ...
}
// ...
}
// release resources here
return error;
}
With exceptions there is no clean way to do that (release resources at the
end of a method), mainly because C++ lacks a finally statement. And I'm
really grateful for that, so even lazy people are forced to do it the right
way, i.e. by RAII.
To put it another way: Without exceptions, you have to write a return
statement to prematurely end your function, i.e. to the reader it is
clearly visible that there could be a problem with previously allocated
resources. With exceptions however, you don't have such a "warning", so
you need a whole new way of thinking, reading and writing code. Even
otherwise excellent programmers tend to simply not see that it's almost
always a mistake to acquire resources without using RAII if they weren't
made aware of the problems with exceptions and the necessary paradigm
shift.
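A minimal sketch of the RAII alternative to the SESE version above
(the wrapper is illustrative, not from any library):

#include <cstdio>

// The destructor releases the resource on every path out of the
// scope, whether by return or by exception.
class FileHandle {
    std::FILE* f_;
public:
    explicit FileHandle(const char* name) : f_(std::fopen(name, "r")) {}
    ~FileHandle() { if (f_) std::fclose(f_); }
    std::FILE* get() const { return f_; }
private:
    FileHandle(const FileHandle&);            // non-copyable
    FileHandle& operator=(const FileHandle&);
};

int f()
{
    FileHandle file("data.txt"); // acquired here...
    // ... calls that may throw; no cleanup code needed ...
    return 0;
}                                // ...released here, on any exit path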
Have you ever seen people using RAII in projects with exceptions disabled? I
haven't.
Regards,
Andreas
> How often do you write code that opens a file, and does *nothing* on
> failure?
Never.
Sorry, I missed that.
> I think we can agree that:
>
> (a) One can't use exceptions for their intended purpose without
> encountering a number of error opportunities which require
> some thought to avoid. (True of many other c++ features, as
> you say.)
>
> (b) Some (most?) of the error opportunities mentioned in (a) have direct
> (and not-so-direct) equivalents in non-exception-using
> error-handling strategies.
>
> (c) Despite the aforementioned error opportunities, exceptions remain
> quite valuable for error-handling.
Absolutely.
> I started learning C++ in 1995, when books preaching that hearsay were
> still in vogue. I too went down the same path - until I got some
> experience with some container libs based around inheritance, and
> learned how awful they were to use. Then I spent a similar amount
> of time (ab)using templates for every conceivable task. Now I'm
> wondering what language feature(s) I've been overusing for the
> past few years. :-)
Yeah, but I do hope that the percentage of abusive code I write is
decreasing ;-).
Agreed. If we detect a program logic error, the thrown exception is never
caught without being rethrown (except at the topmost level, of course), so
the system will shut down as gracefully as possible. We do this because we
have to undo certain actions when this happens. I know, a program logic
error could also mean that the data needed for the undo is corrupt, so
from a pedantic point of view this is probably not a good idea. However, it
worked quite nicely so far and it allowed us to keep the watchdog simple.
> - An error in the input data. That's something that you should
> expect, and that you should be able to handle locally. Not a job
> for exceptions. Unless, of course, that input data is somehow
> guaranteed by your underlying abstract model -- a temporary file
> that you write and then reread, for example.
These were expected errors in the hardware (jammed paper in the receipt
printer, etc.), so I guess from your point of view such errors would fall
under this category.
> > Another thing is thread cancellation (wait functions fail with an
> > exception, so that a thread can be interrupted without leaking
> > resources).
>
> Thread cancellation is a delicate subject. Generally speaking, it's
> something best avoided, at least in any asynchronous fashion. In all
Agreed, but sometimes you simply have to.
> cases I've seen, we have in fact used other mechanisms to terminate the
> thread, all of which intervened at the level of the main loop of the
> thread, i.e. the top-most level.
Not good enough for us; threads often had to wait for callbacks from the
hardware (which was wrapped in a software layer not implemented by us).
*Sometimes*, the hardware could fail in expected ways (paper jam) and never
call back (which is a bug in the underlying hard- or software, but that's
just the way it was :-(). In this case we had to *reboot* to correct the
problem. Thread cancellation really helped us here because you don't have
to implement timeouts for every call to the hardware at the lowest level.
You only do so where timeouts are known and specified, i.e. at a very high
level.
[ ... ]
Very true.
> Note that exceptions don't change anything here. Code isn't
> automatically exception safe. At the lowest level, in fact,
> implementing exception safety requires that some functions cannot
> throw.
I agree. However, we do not live in a perfect world. Programmers do cut
corners, especially during a schedule crunch. What this really boils down to
is the following:
- I believe I can educate almost any programmer how to program
exception-safely (I'm talking about the basic guarantee here). All it takes
are a smart pointer, a mutex a la boost and something like Alexandrescu's
scope guard (a minimal sketch follows below). The only functions guaranteed
not to throw are members of these three classes. All other functions may
explicitly throw exceptions, even if they are not documented in the
interface. Most people need some practice, but after a few weeks everybody
I ever had to deal with felt quite comfortable about this. What's most
important, writing exception-safe code does not take more time; sometimes
it takes even less!
- I can also educate anybody that they have to change the function name as
soon as they decide to return an error code from a function previously
returning void. The problem here is that most programmers will _not_ do so
during a schedule crunch, especially when the error does not happen all
that often. This is admittedly bad, but it's just human. Nobody really
wants to "slow" the project down. Please note that I wrote "slow" because
the problem will eventually surface and cause much more wasted time than
the programmer saved in the first place. People invariably do this kind of
stuff because they hope that the problem will not surface before they have
the time to correct it. Sometimes it works, but in most cases it doesn't.
We agree that in both cases (the newly thrown exception and the newly
returned error code) the programmer has introduced a bug in the software. My
point is that with exceptions this bug will surface much more predictably
and diagnosably than with error-code returns.
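As a hedged sketch of how little machinery the basic guarantee needs
(init and commit are hypothetical and may throw; std::unique_ptr is the
modern spelling of what would have been boost::scoped_ptr or a
hand-written equivalent at the time):

#include <memory>

struct Widget { /* ... */ };

void init(Widget&);   // hypothetical; may throw
void commit(Widget&); // hypothetical; may throw

// Basic guarantee with no explicit cleanup code: whichever call
// throws, the smart pointer's destructor frees the Widget.
void setup()
{
    std::unique_ptr<Widget> w(new Widget);
    init(*w);
    commit(*w);
}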
> > Of course points 1-3 also happen in programs making full use of
> > exceptions. However, the resulting bugs are so much easier to
> > diagnose as the newly thrown exception is almost always unexpected and
> > leads to abort() (or graceful shutdown, etc.).
>
> Unless exceptions are treated at some higher level, which is usually the
> case. My function calls a function that cannot throw, so it is not
> exception safe. But it is called from a function (probably several
> levels up) in a try block, because most of the code is called from this
> try block.
I'm not sure whether I understand. As pointed out above, it's entirely
possible _and_ feasible to implement _all_ the code of a project so that
it is exception-safe under all circumstances (except in some really
esoteric cases like stack overflow).
> The whole justification for exceptions lies in the intermediate
> functions. They have to know whether the functions they call can throw
> or not (in order to ensure that they are exception safe),
If you have the coding policy that virtually every function (except for the
ones mentioned above) can throw then this is not a problem.
> If a function propagates an exception, and it didn't before, you have
> changed its interface. Just not in a way that the compiler can detect.
I fully agree. Again, in a not so perfect world exceptions are superior to
error-code returns because you can handle the newly reported error at a very
high level without touching intermediate levels. I agree that from a
pedantic point of view this is not true (you have to _document_ the newly
thrown exception at every level).
[ ... ]
> > My last project was an ATM where one of the biggest challenges was the
> > correct handling of hardware failures. Since you typically have about
> > 8 external devices to control and at the same time also have to
> > communicate over a network and log every step the program makes on two
> > different media (disk and paper) you pretty soon come to the
> > conclusion that about 80% of all functions can possibly fail (this is
> > just a very rough guess and does not include thread cancellation).
> > Moreover, there are pretty clear requirements how the program must
> > behave when any hardware fails. In the end the project was 10% over
> > budget and I'm quite sure that we wouldn't have made it even within
> > 20% without exception handling.
>
> It sounds to me that "error handling" in this case is rigorously part of
> the application, and should be integrated into the algorithms used.
> Obviously, not seeing the exact specifications, and not actually working
> on the project, it is difficult to make global comments -- there could
> be any number of problems that you faced that I am unaware of. But a
> priori, this sounds like the type of application where I wouldn't use
> exceptions at all. Recoverable errors are handled locally, and
I don't see how we could have handled some recoverable errors locally. E.g.
for some ATM cards, we had to try to get permission from a special server.
If for some reason this failed (and this could fail on about three different
levels) we had to try to get permission from the normal server.
> unrecoverable errors are handled by passing control to a separate
> routine, which logs the appropriate information, outputs the appropriate
> message to the screen, and takes the machine out of service.
For all errors (recoverable or not) we had to undo a few things on different
levels before we could recover or take the machine out of service. You can
definitely do this without exceptions but I'd claim that it's easier with
exceptions. We simply let the stack unwind to the point where the problem
could be handled and this also ensured that all the things we had to undo
were undone.
[ ... ]
> before cashing your check). In the larger telephone systems, I've seen
> programs run for years without a problem, without exceptions. Maybe I'm
> just biased; most of my work on critical systems was done before
> exceptions were available, and the systems worked. I've only seen one
> more or less critical system which used exceptions, and frankly, they
> didn't bring it much (but it was a very small system).
As I have never done any work on critical systems I'm maybe biased also.
[ ... ]
> > Wild agreement! I haven't the code of my last project handy, but I'd
> > guess we don't have more than about 100 try blocks in our program
> > (~.25Mloc).
>
> Well, I was thinking along the lines of maybe one try block per thread.
> We have less than 10 types of threads, so that would be less than 100
> try blocks. In some of the code I'm maintaining, it's running to
I have just counted them. At first I was shocked to find 253 but then I saw
that most are only used to log some information (which is required by the
specs) and then rethrow.
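That log-and-rethrow pattern might look like this (process_transaction
and log are hypothetical stand-ins):

#include <exception>

void process_transaction();    // may throw
void log(const char* message); // hypothetical logger

void run()
{
    try {
        process_transaction();
    }
    catch (const std::exception& e) {
        log(e.what()); // satisfy the logging requirement...
        throw;         // ...then rethrow for the real handler upstream
    }
}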
> LLeweLLyn wrote:
>
> > I would guess exceptions *are* an important factor(1) in a minority of
> > cases - even given programmers which are familiar with exception
> > safety. (I do wonder how common such programmers are; I've never
> > worked anywhere where more than 1 programmer in 10 could describe
> > even 1 exception safety idiom. However certain newsgroups and
> > mailing lists seem to be stuffed with such programmers. And I've done
> > a fair amount of work in environments where exceptions were
> > unavailable.)
>
> Just curious: may you point out some references to safety idioms
> (especially on-line ones)?
The first few that come to mind are:
RAII:
www.research.att.com/~bs/glossary.html#Gresource-acquisition-is-initialization
(Of course this predates, and is good for more than, exception
safety, but its role in exception safety is important.)
Implementing assignment with swap:
http://www.gotw.ca/gotw/059.htm
Not throwing exceptions from destructors:
http://www.parashift.com/c++-faq-lite/exceptions.html#faq-17.3
I already dropped a link to the boost page on exception safety, though
I'm only mostly confident the 3 guarantees qualify as idioms. Note
that gotw.ca has items on not throwing from destructors and RAII.
>But std::logic_error is still part of well-defined behavior. If
>some code can detect an attempt to perform undefined behavior
>*before* the undefined behavior actually happens (e.g. an attempt
>to deref a null ptr or go out of bounds), why not throw the
>exception; there's no need to necessarily abort() immediately,
>because undefined behavior has *not* yet been
>encountered (as far as you know..).
A check that throws a logic_error exception, and even an
"assert", is just a single *point* at which you detect
that something isn't going the way you supposed it would.
Once you know that something has gone differently from what
you expected (once the "impossible" happened), how can you
be confident that everything that happened "before" was
logically correct?
Sure... you got an out-of-bounds access and the vector::at
member function prevented that access from doing damage... but
who can assure you that the other accesses done before that,
inside the bounds of the array, were indeed logically
correct? The fact that you wrote the code so that accesses
were meant to be correct doesn't help now... because an
out-of-bounds access is dancing in front of you, screaming
that your logic is faulty at that point.
It's even debatable whether you should trust yourself about that
correctness *before* seeing the error, but surely you cannot
trust your reasoning after an error becomes *manifest*.
If you get a logic error then your program is not doing what
you think it does... just assuming that everything else must
be correct is IMO way too optimistic. A counter-example shows
that there is a problem in the logic of a theorem; just adding
the counter-example as a special case doesn't fix the theorem.
Until you're able to understand what happened, and why you
got that out-of-bounds access (that is to say, until you KNOW
what the logical mistake was, the missed possibility, the
unchecked contract or even the typo, and you understand
clearly WHY it resulted in the observed behaviour),
your program is broken.
To be able to continue execution I think there must be thick,
trustable WALLS, and the ability to re-init the offending part
from scratch into a known safe state.
That's why I think that just catching an error that shouldn't
have happened, and that we don't even completely understand,
and continuing anyway without those guarantees, is a really
quite questionable position.
I think that for programming in general, but especially in C++
and C (and in other languages where instead of the friendly
runtime error angels you meet the undefined behaviour daemons),
the confusion between what is a crash and what is an error
is a very dangerous line of thinking.
I've seen debugging approached way too many times as if
it's the art of "shaking" here and there until just the bug's
manifestation disappears. That is IMO definitely NOT the
way to robust programs. Crashes are our *friends*; the BUGS
are the enemies!
An out-of-bounds check (like assert, and like the segv error)
is just a quite useful crash-on-demand feature; it's not
in any way a magic tool able to make errors disappear.
Using logic_error and the ability to swallow exceptions
is sometimes the temptation to just HIDE crashes, to
avoid fixing errors. It is hiding symptoms instead of curing
the disease. It's something that can help you get to market
quicker, so I can buy and use your program to print wrong
invoices.
>How do you figure this? std::logic_error means that C++ code
>detected an attempt to cause undefined behavior *before* it
>actually happened.
That would be true only if every portion of your program
had *all the possible* logic consistency checks. The fact
that you got an out-of-bounds access is NOT a proof that there
have been no errors before... in my experience it is indeed
quite a sign of the OPPOSITE. Something, somewhere, at some
point in time before that out-of-bounds access, went wrong.
In my experience (and I'm not alone in this, I'm sure) the
possibility that I as the programmer was *SO* lucky that my
trap check caught the error *immediately* is indeed quite
remote.
>This means the program state is *not* yet
>corrupt, and if a high-level controller can isolate the
>offending module and all the intervening code is exception-safe
>(well maybe that's a long shot, but just because it tried to do
>something bad doesn't necessarily mean it can't clean up
>properly behind itself) then saving the document should still be
>fairly safe.
I think chances are against you... hmmm... well,
against the poor user.
>On the other hand...
>
> > That's why I think that allowing a process to continue
> > after e.g. a sigsegv is detected is pure nonsense in 99.999%
> > of the situations.
>
>Absolutely agreed. But I think that this is a completely
>different situation than std::logic_error, however.
Sigsegv is just a bounds check, if you put the OS hat on.
However the OS, differently from most application environments,
does have the "safety walls" and a reasonable certainty
that the system is still in good condition. So it can just
shut down the process and continue.
Drop the safety walls, drop the reasonable certainty
that the system is still in a good condition, and you
get an OS in which, after a sigsegv, a reboot is not a
bad idea after all.
Andrea
> James Kanze escribió:
>
>> alternative. My tendency would be to favor either something along the
>> IEEE model, propagating the error until the end of the calculation, or
>> else an outright termination of the program -- ranges should be
>> validated before starting the calculations. But I'm not sure that the
>> latter is always feasible, and the former has definite run-time
>> overhead.
>
> Propagating the error until the end has a problem: perhaps, due to the
> failed state, one never reaches the end. It is a common mistake of
> beginners to write a loop like this:
>
> cout << "Enter numbers, 0 to end" << endl;
> do
> {
> cin >> num;
> } while (num != 0);
>
> And test it with input such as "1 no_number".
>
> Same situation can occur in more sophisticated code. You need to
> specify the behaviour of all functions when called in a failed
> state, and check carefully that all programmers take that into account.
That's a very good point. For some reason I never saw the standard
streams as an example of the way exceptions simplify invariants,
because their error states are not "broken" in some sense: all the
stream operations still have well-defined semantics. This is,
however, a classic example of exceptions simplifying invariants. When
an object can be put into an error state where it is usable but its
behavior is different from its usual behavior, it can induce the same
need to "look over your shoulder" while using it that occurs when it
has an unusable-but-destructible state.
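A minimal sketch of the opt-in mechanism being alluded to: with
std::basic_ios::exceptions, a failed extraction throws instead of
silently putting the stream into an error state that every later
operation must be written to tolerate.

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream in("1 no_number");
    in.exceptions(std::ios::failbit | std::ios::badbit);
    try {
        int a, b;
        in >> a; // succeeds
        in >> b; // fails and throws std::ios_base::failure
    }
    catch (const std::ios_base::failure&) {
        std::cerr << "input failed\n";
    }
    return 0;
}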
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
>> - An error in the input data. That's something that you should
>> expect, and that you should be able to handle locally. Not a job
>> for exceptions. Unless, of course, that input data is somehow
>> guaranteed by your underlying abstract model -- a temporary file
>> that you write and then reread, for example.
>
> These were expected errors in the hardware (jammed paper in the receipt
> printer, etc.), so I guess from your point of view such errors would fall
> under this category.
This is where the "exceptions should be unexpected" argument really
breaks down. Whether or not these things should be handled locally
depends on whether they *can* be handled locally. It's entirely
reasonable in a GUI app to throw an exception and return to the main
loop when you detect that no printer is connected.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
> Implementing assignment with swap:
>
> http://www.gotw.ca/gotw/059.htm
Watch out for this one; doing that automatically often gets you safety
at the wrong level of granularity and doesn't scale well to composite
systems.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
[ ... ]
> Have you ever seen people using RAII in projects with exceptions disabled? I
> haven't.
I have. It can still make for simpler, and hence more
solid, code. Exception support in some embedded compilers
is largely unusable, but they can still handle RAII (which
is all about resource release in destructors, after all).
It's a pain not being able to use constructors to acquire
resources, but it would be a bigger pain not being able to
rely on destructors to release them.
-- James
> LLeweLLyn <llewe...@xmission.dot.com> writes:
>
> > Implementing assignment with swap:
> >
> > http://www.gotw.ca/gotw/059.htm
>
> Watch out for this one; doing that automatically often gets you safety
> at the wrong level of granularity and doesn't scale well to composite
> systems.
[snip]
What do you mean by this? Could you post some examples? (I've only
used assignment implemented with swap for envelope-letter type
classes.)
> David Abrahams <da...@boost-consulting.com> writes:
>
>> LLeweLLyn <llewe...@xmission.dot.com> writes:
>>
>> > Implementing assignment with swap:
>> >
>> > http://www.gotw.ca/gotw/059.htm
>>
>> Watch out for this one; doing that automatically often gets you safety
>> at the wrong level of granularity and doesn't scale well to composite
>> systems.
> [snip]
>
> What do you mean by this? Could you post some examples? (I've only
> used assignment implemented with swap for envelope-letter type
> classes.)
Suppose you do the copy/swap thing for some simple handle/body class
X, without thinking about it, just because it's easy to get right, but
not because you know you need the strong guarantee. Now suppose you
have two other classes, Y and Z. Y and Z both contain a collection of
Xs, and you need the strong guarantee for the assignment of Y, but
only the basic guarantee in Z...
What is the effect of your copy/swap? Does it help with Y's exception
safety? Not a bit. Does it hurt efficiency? If Z only needs a
simple basic guarantee assignment, you may be doing a whole bunch of
unnecessary allocations.
The importance of analyzing exception safety in compound systems is
the basic point most people who teach exception-safety have missed.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Anton.
and as soon as you do that, RAII will save you from duplicating
code. In fact, if you acquire the same kind of resource in two
different scopes RAII will save you from duplicating code.
From a design standpoint, any type which must both acquire dynamic
resources for its implementation and act like a builtin must use
RAII. (I imagine some might claim this is not RAII, but IMO it is.)
RAII also simplifies building UDTs which have UDTs as members.
> i.e. to the reader it is clearly visible that there could be a problem
> with previously allocated resources. With exceptions however, you don't
> have such a "warning", so you need a whole new way of thinking, reading
> and writing code. Even otherwise excellent programmers tend to simply
> not see that it's almost always a mistake to acquire resources without
> using RAII if they weren't made aware of the problems with exceptions
> and the necessary paradigm shift.
>
> Have you ever seen people using RAII in projects with exceptions
> disabled? I haven't.
Speaking as someone who's written a fair amount of C++ with exceptions
disabled, I would guess nearly all such projects use some amount
of RAII. I would not be surprised to learn that RAII predated
exceptions.
What do you think about the Eiffel idiom of treating exceptions (at
least as I perceive it):
Errors of program logic (like bad parameters, etc.) raise exceptions of
pre- and postcondition. However, these exceptions are treated as really
severe ones and ideally should be caught in testing (e.g. unit testing).
The method caller should verify the validity of parameters (as an
example) on call.
It seems nice: the caller first verifies the validity of the parameters
passed. If even with valid parameters something goes wrong, an exception
is raised.
Anton.
[...]
> > - An error in the input data. That's something that you should
> > expect, and that you should be able to handle locally. Not a job
> > for exceptions. Unless, of course, that input data is somehow
> > guaranteed by your underlying abstract model -- a temporary file
> > that you write and then reread, for example.
> These were expected errors in the hardware (jammed paper in the
> receipt printer, etc.), so I guess from your point of view such errors
> would fall under this category.
This is an interesting problem in itself. In the case of an ATM, I
suppose you take the machine off-line if you cannot log, aborting in
some way. In a more general case, however, you want to display the
message at the highest level, so that the user can intervene, but you
also want to maintain the low level context, in order to continue once
the problem has been resolved. If you throw an exception OR return a
return code, you've lost the low-level context.
> [ ... ]
> I agree. However, we do not live in a perfect world. Programmers do
> cut corners, especially during a schedule crunch. What this really
> boils down to is the following:
> - I believe I can educate almost any programmer how to program
> exception-safely (I'm talking about the basic guarantee here). All
> it takes are a smart pointer, a mutex a la boost and something like
> Alexandrescus scope guard. The only functions guaranteed not to
> throw are members of these three classes.
It takes a lot more than that. See, for example, the issues presented
in http://www.gotw.ca/gotw/056.htm. You can educate programmers how to
program exception-safely, but it isn't trivial. It's a new idiom, and
it requires a new way of thinking.
> All other functions may explicitly throw exceptions, even if they
> are not documented in the interface. Most people need some practice
> but after a few weeks everybody I ever had to deal with felt quite
> comfortable about this. What's most important, writing
> exception-safe code does not take more time, sometimes it takes even
> less!
> - I can also educate anybody that they have to change the function
> name, as soon as they decide to return an error code from a function
> previously returning void. The problem here is that most programmers
> will _not_ do so during a schedule crunch, especially when the error
> does not happen all that often.
I guess I've just worked on larger, or better run projects. First, of
course, interfaces haven't been in the hands of individual programmers.
A programmer just doesn't add an error condition to a function like
that; he needs to get it approved by the architecture team, or whoever
else is in charge of ensuring that one person's changes don't screw up
everyone else.
This is actually one of the weaknesses with exceptions, at least the way
they are implemented in C++. If a function returns void, a programmer
will have to check out the header file in order to make it return a
value. If a function doesn't throw an exception, this fact is NOT
verified anywhere by the compiler. If the programmer encounters a
situation he doesn't know how to deal with, there's nothing simpler than
inserting a throw.
Not that I really consider this an argument against exceptions. Even
without exceptions, there are many ways to change an interface which the
compiler cannot verify. That's why we have code reviews.
> This is admittedly bad but it's just human. Nobody really wants to
> "slow" the project down.
And trying to check in code which doesn't pass code review doesn't slow
the project down?
> Please note that I wrote "slow" because the problem will eventually
> surface and cause much more wasted time than the programmer saved in
> the first place. People are doing this kind of stuff invariably
> because they hope that the problem will not surface before they have
> the time to correct it. Sometimes it works but in most cases it
> doesn't. We agree that in both cases (the newly thrown exception
> and the newly returned error-code) the programmer has introduced a
> bug in the software. My point is that with exceptions this bug will
> surface much more predictably and diagnoseably than with error-code
> returns.
Curious. My experience is the contrary. With exceptions, the bug will
not surface until run-time, and there is no trace in the header file
that the contract has been modified. On really big projects, of course,
ordinary programmers can't even check out the header files. But even on
smaller ones, it doesn't take much work to train the programmers to be
hesitant about checking them out.
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
The problem here is that if you have a compound system, you often *have*
to provide the strong guarantee, in order to ensure that you meet the
basic guarantee.
If I have a class that contains two vectors, and the invariant states
that they have the same number of entries, I have a problem if I use the
assignment operator for the vectors in the assignment operator of my
class --- if the second assignment throws I need to roll back the first
one to ensure my invariant holds. In this case, if the vectors provide
the strong guarantee for assignment, then I can make do with only
copying one, and then relying on the strong guarantee for the assignment
of the second; if they only provide the basic guarantee, then I have to
copy both to ensure that my invariant holds, because I cannot rely on
the number of entries in the assigned vector being correct.
This is true in general --- if you have a compound object and you are
performing an operation that may affect more than one of the parts, and
the operations on the affected parts may themselves throw, then you need
to do copy/swap at least for some of the parts to ensure even the basic
guarantee for the whole, regardless of the guarantees provided by the
operations on the parts, unless the operations themselves are nothrow or
there is a nothrow means of rolling back the operation.
Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.
Remove NOSPAM when replying, for timely response.
Well, it doesn't double overall because you only do it for those
functions for which you need it. If you find yourself try/catching
something again and again it means the code likely begs for
refactoring. You either write a wrapper containing try/catch and
returning an error code, or, if you have access to the lower level
code, write a version which doesn't throw in the first place. (The
wrapper as it happens can introduce a huge performance overhead.)
Basically it all means that the situation in which the original
version throws is often not "exceptional" or an "error" and there is
no point in using exceptions for flow control.
Well, I am of a slightly different opinion. I think that finally has
its place along with RAII. You may find in a guru's article about
writing exception-safe code http://cuj.com/experts/1812/alexandr.htm
that "Defining classes is hard; that's another reason for avoiding
writing lots of them". If it's hard for an expert, it's even harder
for a rank-and-file programmer. While finally has its drawbacks like
everything else, it at least would allow you to write completely
transparent and understandable code where you don't need to watch your
back at every step. I think it's a real pity that such a multi-paradigm
language as C++ doesn't have built-in try/finally support. (I must
add that the mentioned expert doesn't like try/finally, but here I
don't agree with him.)
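A destructor-based stand-in for try/finally can be sketched in a few
lines (this assumes C++17 class template argument deduction;
Alexandrescu's ScopeGuard achieved the same effect in 2000 without it):

#include <cstdio>

// The action runs on every exit path, by return or by exception --
// the RAII equivalent of a finally block.
template <typename F>
class Finally {
    F action_;
public:
    explicit Finally(F action) : action_(action) {}
    ~Finally() { action_(); }
};

void example()
{
    std::FILE* f = std::fopen("data.txt", "r");
    if (!f) return;
    Finally cleanup([f] { std::fclose(f); }); // the "finally" block
    // ... work that may throw or return early; fclose runs regardless ...
}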
> To put it another way: Without exceptions, you have to write a return
> statement to prematurely end your function, i.e. to the reader it is
> clearly visible that there could be a problem with previously allocated
> resources. With exceptions however, you don't have such a "warning", so
> you need a whole new way of thinking, reading and writing code. Even
> otherwise excellent programmers tend to simply not see that it's almost
> always a mistake to acquire resources without using RAII if they weren't
> made aware of the problems with exceptions and the necessary paradigm
> shift.
>
> Have you ever seen people using RAII in projects with exceptions
> disabled? I haven't.
Of course, myself :-). E.g. I have a file class wrapping the standard
fopen() which automatically closes the file in the dtor (a la an
example in TC++PL). Incidentally, the class has a throwing and a
non-throwing version of open :-).
Regards
Igor
Lucky you. I know that such projects exist (IIRC, Dave Abrahams once
mentioned something in this direction), but I've yet to encounter such
code myself. Most no-exceptions code I've seen does not even come close
to being correct (not leaking resources and not ignoring errors).
> It can still make for simpler, and hence more solid, code.
Yeah, I would definitely use RAII in a no-exceptions project.
Regards,
Andreas
> David Abrahams <da...@boost-consulting.com> writes:
>
>> Suppose you do the copy/swap thing for some simple handle/body
>> class X, without thinking about it, just because it's easy to get
>> right, but not because you know you need the strong guarantee. Now
>> suppose you have two other classes, Y and Z. Y and Z both contain a
>> collection of Xs, and you need the strong guarantee for the
>> assignment of Y, but only the basic guarantee in Z...
>>
>> What is the effect of your copy/swap? Does it help with Y's
>> exception safety? Not a bit. Does it hurt efficiency? If Z only
>> needs a simple basic guarantee assignment, you may be doing a whole
>> bunch of unnecessary allocations.
>>
>> The importance of analyzing exception safety in compound systems is
>> the basic point most people who teach exception-safety have missed.
>
> The problem here is that if you have a compound system, you often
> *have* to provide the strong guarantee, in order to ensure that you
> meet the basic guarantee.
That may be true, but it is not "the problem here."
> If I have a class that contains two vectors, and the invariant
> states that they have the same number of entries, I have a problem if
> I use the assignment operator for the vectors in the assignment
> operator of my class --- if the second assignment throws I need to
> roll back the first one to ensure my invariant holds. In this case,
> if the vectors provide the strong guarantee for assignment, then I
> can make do with only copying one, and then relying on the strong
> guarantee for the assignment of the second;
You mean:
// #1
myclass& myclass::operator=(myclass const& rhs)
{
    vec tmp = rhs.v1; // copy one in case of failure
    v1 = rhs.v1;
    try {
        v2 = rhs.v2; // rely on strong guarantee
    }
    catch(...) {
        v1 = rhs.v1; // rollback
        throw;
    }
    return *this;
}
??
This is flawed in at least two ways, the first being that the rollback
might itself throw.
> if they only provide the basic guarantee, then I have to copy both
> to ensure that my invariant holds, because I cannot rely on the
> number of entries in the assigned vector being correct.
You mean:
// #2
myclass& myclass::operator=(myclass const& rhs)
{
    vec t1 = rhs.v1; // copy in case of failure
    try {
        v1 = rhs.v1;
        vec t2 = rhs.v2;
        try {
            v2 = rhs.v2;
        }
        catch(...) {
            v2 = t2;
            throw;
        }
    }
    catch(...) {
        v1 = rhs.v1; // rollback
        throw;
    }
    return *this;
}
This has the same flaw, only more so. In any case, even ignoring that
flaw in #1, the point is that if the vectors provide the strong
guarantee for assignment by doing copy/swap themselves, the program
ends up making 3 copies in #1 instead of 2. That's the other flaw.
The superior solution is:
myclass& myclass::operator=(myclass const& rhs)
{
    vec t1 = rhs.v1;
    vec t2 = rhs.v2;
    swap(v1, t1);
    swap(v2, t2);
    return *this;
}
or, if a copy ctor and nothrow swap is defined for myclass:
myclass& myclass::operator=(myclass rhs)
{
    swap(*this, rhs);
    return *this;
}
On the other hand, if yourclass does not need the strong guarantee in
its assignment:
yourclass& yourclass::operator=(yourclass const& rhs)
{
    v1 = rhs.v1;
    v2 = rhs.v2;
    return *this;
}
is fine. The performance of that could be a *lot* worse if vector
assignment always did copy/swap, since in a typical application the
LHS may already have sufficient capacity to hold the rhs. Since
elementwise assignment within the vector follows the same logic,
mindless copy/swap can have a nasty ripple effect on efficiency.
> This is true in general --- if you have a compound object and you are
> performing an operation that may affect more than one of the parts,
> and the operations on the affected parts may themselves throw, then
> you need to do copy/swap at least for some of the parts to ensure even
> the basic guarantee for the whole, regardless of the guarantees
> provided by the operations on the parts, unless the operations
> themselves are nothrow or there is a nothrow means of rolling back the
> operation.
That's only true if a particular relationship between the parts is
part of the component's invariant... which is often but not always the
case. In that case giving the components a strong assignment operator
is no help anyway, since you need a nothrow assignment in order to do
rollback. It ultimately means that the outer component has to
manage the sequence of copies and swaps. This should always be done
at the outer level where it is needed, to avoid the ripple effect of
unnecessary copies.
This is exactly the same argument that applies to mutex locking.
Adding locks on all your components doesn't automatically make the
system threadsafe, and it can hurt efficiency, because it does the
locking at the wrong level of granularity.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Are you serious? This issue is easily avoided by the very simple rule
mentioned at the end of the gotw article. Or by the even simpler rule given
by Peter Dimov in the boost::smart_ptr docs (we used this one).
> program exception-safely, but it isn't trivial. It's a new idiom, and
> it requires a new way of thinking.
I never said it's trivial. It definitely takes a few lessons and a few
exercises, but it is no longer a big deal once you have rearranged your
thinking. BTW, I would bet that you need about the same amount of time
to educate programmers how to safely program with error codes!
> > All other functions may explicitly throw exceptions, even if they
> > are not documented in the interface. Most people need some practice
> > but after a few weeks everybody I ever had to deal with felt quite
> > comfortable about this. What's most important, writing
> > exception-safe code does not take more time, sometimes it takes even
> > less!
>
> > - I can also educate anybody that they have to change the function
> > name, as soon as they decide to return an error code from a function
> > previously returning void. The problem here is that most programmers
> > will _not_ do so during a schedule crunch, especially when the error
> > does not happen all that often.
>
> I guess I've just worked on larger, or better run projects. First, of
> course, interfaces haven't been in the hands of individual programmers.
> A programmer just doesn't add an error condition to a function like
> that; he needs to get it approved by the architecture team, or whoever
> else is in charge of ensuring that one person's changes don't screw up
> everyone else.
This may be necessary for 20+ programmer teams. The largest project (the
ATM) I've ever worked on was 9 programmers. We definitely had our share of
communication problems and screw-ups in the beginning but I very much
believe that negotiating every little interface-change with the architecture
team (which was notoriously understaffed anyway) would have been overkill.
What we did was interface-reviews, i.e. no interface-change was done without
having been discussed by at least two programmers, normally there were
three.
> This is actually one of the weaknesses with exceptions, at least the way
> they are implemented in C++. If a function returns void, a programmer
> will have to check out the header file in order to make it return a
> value. If a function doesn't throw an exception, this fact is NOT
> verified anywhere by the compiler. If the programmer encounters a
This argument breaks down as soon as the programmer has to add another
failure type, i.e. the function reported one type of error before
(exactly one value != 0) and now must report two different types of
errors (two different values != 0). Then the interface doesn't change at
all and we're in the same situation as with exceptions.
> And trying to check in code which doesn't pass code review doesn't slow
> the project down?
Again, for smaller projects it's probably overkill to review _all_
implementation code (I agree for interface code). Moreover, reviewers are
human too. Even more so during schedule crunches.
> > Please note that I wrote "slow" because the problem will eventually
> > surface and cause much more wasted time than the programmer saved in
> > the first place. People are doing this kind of stuff invariably
> > because they hope that the problem will not surface before they have
> > the time to correct it. Sometimes it works but in most cases it
> > doesn't. We agree that in both cases (the newly thrown exception
> > and the newly returned error-code) the programmer has introduced a
> > bug in the software. My point is that with exceptions this bug will
> > surface much more predictably and diagnosably than with error-code
> > returns.
>
> Curious. My experience is the contrary. With exceptions, the bug will
> not surface until run-time, and there is no trace in the header file
As mentioned above, the same is true for error-codes when the function wants
to report a new type of error.
Regards,
Andreas
Exactly! (I assume this is an answer to James' post)
Regards,
Andreas
What if the function is a member function of some class, and the
postconditions are an encapsulated detail? If the postconditions of the
member function are not met, does this imply that the class invariants
are invalidated? If so, what can a caller do to fix the situation?
Bob
> What do you think about the Eiffel idiom of treating exceptions (at
> least as I perceive it):
>
> Errors of program logic (like bad parameters, etc.) raise exceptions of
> pre- and postcondition. However, these exceptions are treated as really
> severe ones and ideally should be caught in testing (e.g. unit testing).
> The method caller should verify the validity of parameters (as an
> example) on call.
>
> It seems nice: the caller first verifies the validity of the parameters
> passed. If even with valid parameters something goes wrong, an exception
> is raised.
[snip]
IMO, the ideal reaction to a violated pre- or postcondition would be
to start the debugger, with the execution point at the point the
violation was detected. Core dumps and stack traces are the next
best things. Don't throw an exception unless doing so provides you
with useful debug info. (In most C++ environments, throwing an
exception does quite the opposite; the stack is unwound, detection
context is lost, and if you get a core file after std::terminate()
is called, it is of no use.)
The best reaction to a programming error is to track it down and fix
it. If throwing an exception helps you do that (in Java it does,
as you get a stack trace as a side effect, but in C++ it makes
things worse), throw away.
Now I know other people here will be able to point out unusual cases,
where the above is not The Right Thing, but I am convinced it is
the best general rule.
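One hedged sketch of that policy: a check that reports the violated
condition and calls abort(), preserving the full stack for the
debugger or the core dump instead of unwinding it.

#include <cstdio>
#include <cstdlib>

#define REQUIRE(cond)                                                 \
    do {                                                              \
        if (!(cond)) {                                                \
            std::fprintf(stderr, "precondition failed: %s (%s:%d)\n", \
                         #cond, __FILE__, __LINE__);                  \
            std::abort(); /* keep the stack intact for debugging */   \
        }                                                             \
    } while (0)

double safe_divide(double a, double b)
{
    REQUIRE(b != 0.0); // a programming error if violated
    return a / b;
}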
> Errors of program logic (like bad parameters, etc.) raise exceptions of
> pre- and postcondition. However, these exceptions are treated as really
> severe ones and ideally should be caught in testing (e.g. unit testing).
> The method caller should verify the validity of parameters (as an
> example) on call.
> It seems nice: the caller first verifies the validity of the parameters
> passed. If even with valid parameters something goes wrong, an exception
> is raised.
This sounds like something on the same order as converting a segment
violation into an exception.
In such cases, you have a program error. Many languages, like Java, do
treat these as exceptions. IMHO, this is NOT an alternative in C++. In
C++, exceptions cause the stack to be cleaned up, and call destructors.
If the program error has also corrupted the state of an object on the
stack, you may get a further exception (from the same or a similar
source) from a destructor, which will abort the program. (Note that
this is not a problem in languages without destructors, although I still
don't see any real reason why you would want an exception in such
cases.) And you don't want to call terminate if you can avoid it; if
you have half finished work, it may be a good idea to save it somewhere,
but you most certainly shouldn't overwrite the previous state with it,
and you may have resources which you want to clean up even if you are
going to abort (deleting temporary files, etc.).
--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
You contradict yourself. You say you "introduce another abstraction
layer", but obviously this holds only for normal processing; your error
reporting does _not_ "introduce another abstraction layer" but stays at
the same level. Sounds like a mishmash. "To reduce complexity during
iterative development", why not shift error reporting to "another
abstraction layer" too? To do so you have to use clumsy try/catch
where you would have been better off using return/output values.
Regards,
Volker
> Anthony Williams <anthony.wil...@anthonyw.cjb.net> writes:
>
> > David Abrahams <da...@boost-consulting.com> writes:
> >
> >> Suppose you do the copy/swap thing for some simple handle/body
> >> class X, without thinking about it, just because it's easy to get
> >> right, but not because you know you need the strong guarantee. Now
> >> suppose you have two other classes, Y and Z. Y and Z both contain a
> >> collection of Xs, and you need the strong guarantee for the
> >> assignment of Y, but only the basic guarantee in Z...
> >>
> >> What is the effect of your copy/swap? Does it help with Y's
> >> exception safety? Not a bit. Does it hurt efficiency? If Z only
> >> needs a simple basic guarantee assignment, you may be doing a whole
> >> bunch of unnecessary allocations.
> >>
> >> The importance of analyzing exception safety in compound systems is
> >> the basic point most people who teach exception-safety have missed.
> >
> > The problem here is that if you have a compound system, you often
> > *have* to provide the strong guarantee, in order to ensure that you
> > meet the basic guarantee.
>
> That may be true, but it is not "the problem here."
OK, badly chosen phrase. This whole discussion is demonstrating your point
that exception safety of compound systems is indeed complex.
> > If I have a class that contains two vectors, and the invariant
> > states that they have the same number of entries, I have a problem if
> > I use the assignment operator for the vectors in the assignment
> > operator of my class --- if the second assignment throws I need to
> > rollback the first one to ensure my invariant holds. In this case,
> > if the vectors provide the strong guarantee for assignment, then I
> > can make do with only copying one, and then relying on the strong
> > guarantee for the assignment of the second;
>
> You mean:
>
> // #1
> myclass& myclass::operator=(myclass const& rhs)
> {
> vec tmp = rhs.v1; // copy one in case of failure
> v1 = rhs.v1;
> try {
> v2 = rhs.v2; // rely on strong guarantee
> }
> catch(...) {
> v1 = rhs.v1; // rollback
> throw;
> }
> return *this;
> }
No, I mean
myclass& myclass::operator=(myclass const& rhs)
{
    vec tmp=rhs.v1; // take copy in case of failure
    v1=rhs.v1;
    try
    {
        v2=rhs.v2; // strong guarantee
    }
    catch(...)
    {
        v1.swap(tmp); // rollback, nothrow
        throw;
    }
    return *this;
}
> > if they only provide the basic guarantee, then I have to copy both
> > to ensure that my invariant holds, because I cannot rely on the
> > number of entries in the assigned vector being correct.
>
>
> You mean:
[Snipped an extended version of above, which I didn't mean]
> The superior solution is:
>
> myclass& myclass::operator=(myclass const& rhs)
> {
> vec t1 = rhs.v1;
> vec t2 = rhs.v2;
> swap(v1, t1);
> swap(v2, t2);
> return *this;
> }
Which is what I meant, and which doesn't use vec::operator= at all.
> or, if a copy ctor and nothrow swap is defined for myclass:
>
> myclass& myclass::operator=(myclass rhs)
> {
> swap(*this, rhs);
> return *this;
> }
Which is essentially shorthand for the above.
> On the other hand, if yourclass does not need the strong guarantee in
> its assignment:
>
> yourclass& yourclass::operator=(yourclass const& rhs)
> {
> v1 = rhs.v1;
> v2 = rhs.v2;
> return *this;
> }
>
> is fine. The performance of that could be a *lot* worse if vector
> assignment always did copy/swap, since in a typical application the
> LHS may already have sufficient capacity to hold the rhs. Since
> elementwise assignment within the vector follows the same logic,
> mindless copy/swap can have a nasty ripple effect on efficiency.
Point conceded, but the condition is wrong. It is not that myclass needs the
strong guarantee per se, just that it needs it for the combined assignment to
the pair of vectors --- if there is a relationship between different data
members in a class, then you often need the strong guarantee for operations on
the related data members, to ensure that the relationship is maintained --- as
below.
> > This is true in general --- if you have a compound object and you are
> > performing an operation that may affect more than one of the parts,
> > and the operations on the affected parts may themselves throw, then
> > you need to do copy/swap at least for some of the parts to ensure even
> > the basic guarantee for the whole, regardless of the guarantees
> > provided by the operations on the parts, unless the operations
> > themselves are nothrow or there is a nothrow means of rolling back the
> > operation.
>
> That's only true if a particular relationship between the parts is
> part of the component's invariant... which is often but not always.
> In that case giving the components a strong assignment operator is no
> help anyway, since you need a nothrow assignment in order to do
> rollback.
Or a nothrow swap.
> It ultimately means that the outer component has to
> manage the sequence of copies and swaps. This should always be done
> at the outer level where it is needed to avoid the ripple effect of
> unnecessary copies.
>
> This is exactly the same argument that applies to mutex locking.
> Adding locks on all your components doesn't automatically make the
> system threadsafe, and it can hurt efficiency, because it does the
> locking at the wrong level of granularity.
Agreed. Sometimes you just have to do the copy/swap to get the basic
guarantee, but if you restrict yourself to those circumstances where it is
*necessary*, you can avoid a lot of inefficiencies.
I am not sure why I decided to argue the point, given that it appears we agree
on the major issues --- I just succeeded in demonstrating that exception
safety of compound types is not trivial.
As a final thought, you can always provide the basic guarantee for myclass (the
one with the invariant on the vectors) as follows if the vectors give the
basic guarantee for assignment:
myclass& myclass::operator=(myclass const& rhs)
{
    try
    {
        v1=rhs.v1;
        v2=rhs.v2;
    }
    catch(...)
    {
        v1.clear();
        v2.clear();
        throw;
    }
    return *this;
}
It does at least guarantee that the invariant holds, even if the resulting
object isn't much use ;-)
Anthony
--
Anthony Williams
Senior Software Engineer, Beran Instruments Ltd.
Remove NOSPAM when replying, for timely response.
Maybe the caller shouldn't fix anything in the case of broken
postconditions? You see, if the preconditions are met, shouldn't the
callee guarantee the postconditions and invariants?
Anton.
I don't know Eiffel at all, so I can't comment about that. In C++ I think
it's a very good idea to do pre- and postcondition testing, at least in
debug mode. Whether you do it in release mode or not depends on your safety
requirements. For some programs (e.g. a DVD player) it's perfectly
legitimate to simply not detect the problem and grind on (and probably
sooner or later fail with an access violation or the like), because such a
failure will most probably not destroy data or harm anybody. Others should
perform a _graceful_ emergency shutdown followed by a system restart (e.g.
a life-supporting medical appliance), for obvious reasons. I think it
almost never makes sense to keep the program running after the detection of
such an error (there is a discussion going on about this issue in this very
thread).
How exactly the program shuts down after the detection of a program logic
error is also often debated. If you want maximum confidence that your
emergency shutdown works under all circumstances, you have to delegate
shutdown to a separate process and call abort() after doing so. This is
necessary because the detection of a program logic error could mean that
some of the program's data was corrupted. Continuing to operate on this
data is potentially dangerous and could lead to undefined behavior.
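A rough sketch of that delegation (my illustration; POSIX-specific, heavily simplified, and all names invented): the cleanup logic lives in a watchdog process whose address space cannot be touched by the parent's corruption, and the failing program just calls abort():

#include <cstdlib>
#include <sys/types.h>
#include <unistd.h>

static pid_t watchdog = -1;

// Call once at startup. The child outlives the parent and performs the
// emergency cleanup in its own, uncorrupted address space.
void start_watchdog()
{
    watchdog = fork();
    if (watchdog == 0) {
        // Wait until the parent dies (on a classic system we then get
        // reparented to init, pid 1).
        while (getppid() != 1)
            sleep(1);
        // ... delete temporary files, notify the operator, etc. ...
        _exit(0);
    }
}

[[noreturn]] void report_logic_error()
{
    // Deliberately no unwinding: the watchdog does the cleanup.
    std::abort();
}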
Now, if you throw an exception after the detection of a program logic
error, the resulting stack unwind also operates on the maybe now corrupted
data (it calls the destructors of all stack objects). So there is the
potential risk that the stack unwind will run into undefined behavior.
In my last project (an ATM) we assessed this risk to be very small and used
exceptions for program logic errors anyway. However, I would never do that
if somebody could be injured or killed (or any other potentially
problematic damage could result) as a consequence of my program's screw-up.
Regards,
Andreas
What? Sorry, but I can only guess what you mean, so let me clarify what I
meant by "introducing another abstraction layer":
Situation before:

void f()
{
    // ...
    try
    {
        g();
    }
    catch ( const std::runtime_error & )
    {
        // ...
    }
    // ...
}

void g()
{
    // ...
    throw std::runtime_error( "Houston, we have a problem" );
    // ...
}
Situation after the layer was introduced (h() is the layer):

void f()
{
    // ...
    try
    {
        h();
    }
    catch ( const std::runtime_error & )
    {
        // ...
    }
    // ...
}

void h()
{
    // ...
    g();
    // ...
}

void g()
{
    // ...
    throw std::runtime_error( "Houston, we have a problem" );
    // ...
}
In the real world the additional abstraction layer consists not only of
functions but also of classes, but that is of no importance here. The point
is that the code reporting the error was moved farther away from the code
handling the error. As soon as you have to propagate an error through more
than one layer, it is easier with exceptions, as the intermediate layers
don't have to do anything to make the error known to their caller (contrast
the error-code sketch below).
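For contrast, here is my sketch of the same layering with error codes instead of exceptions (reusing the names g() and h() from the example above, and using std::error_code from modern C++ purely for concreteness): now h() must explicitly thread the failure through, even though it has no handling of its own:

#include <iostream>
#include <system_error>

std::error_code g()
{
    // "Houston, we have a problem", reported as a code instead of a throw:
    return std::make_error_code(std::errc::io_error);
}

std::error_code h()
{
    // The intermediate layer adds nothing, yet must still forward the code.
    if (std::error_code ec = g())
        return ec;
    // ...
    return {};
}

int main()
{
    // main() plays the role of f(): the only place that handles the error.
    if (std::error_code ec = h())
        std::cerr << "error: " << ec.message() << '\n';
}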
My main concern is that for the implementer of g() it is difficult to know
through how many function calls the error has to be propagated, as he
normally does not know a lot about its callers. After all, that is the main
point of making a function, isn't it?
Regards,
Andreas
> David Abrahams <da...@boost-consulting.com> writes:
>
>> > If I have a class that contains two vectors, and the invariant
>> > states that they have the same number of entries, I have a problem if
>> > I use the assignment operator for the vectors in the assignment
>> > operator of my class --- if the second assignment throws I need to
>> > rollback the first one to ensure my invariant holds. In this case,
>> > if the vectors provide the strong guarantee for assignment, then I
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[emphasis mine - dwa]
>> > can make do with only copying one, and then relying on the strong
>> > guarantee for the assignment of the second;
>>
>> You mean:
<snip my wrong guess>
>
>> The superior solution is:
>>
>> myclass& myclass::operator=(myclass const& rhs)
>> {
>> vec t1 = rhs.v1;
>> vec t2 = rhs.v2;
>> swap(v1, t1);
>> swap(v2, t2);
>> return *this;
>> }
>
> Which is what I meant, and which doesn't use vec::operator= at all.
Well it clearly does (in the first 2 lines), but it doesn't rely on
the strong guarantee for the vector's assignment... I don't see how it
can be what you meant given that you're arguing that having the strong
guarantee for vector assignment is important. Or did I misunderstand
your previous paragraph?
>> On the other hand, if yourclass does not need the strong guarantee in
>> its assignment:
>>
>> yourclass& yourclass::operator=(yourclass const& rhs)
>> {
>> v1 = rhs.v1;
>> v2 = rhs.v2;
>> return *this;
>> }
>>
>> is fine. The performance of that could be a *lot* worse if vector
>> assignment always did copy/swap, since in a typical application the
>> LHS may already have sufficient capacity to hold the rhs. Since
>> elementwise assignment within the vector follows the same logic,
>> mindless copy/swap can have a nasty ripple effect on efficiency.
>
> Point conceded, but the condition is wrong. It is not that myclass needs the
> strong guarantee
You must mean yourclass.
> per se, just that it needs it for the combined assignment to the
> pair of vectors --- if there is a relationship between different
> data members in a class, then you often need the strong guarantee
> for operations on the related data members, to ensure that the
> relationship is maintained --- as below.
No, having the strong guarantee for operations on the individual data
members (except for the last one you modify) is no help at all... as
below.
Moreover, having the strong guarantee on the last operation should
never be purchased at the expense of copy/swap inside the operation,
because that operation might not always be the last in any sequence
requiring a strong guarantee in the composite system.
And it's worth pointing out that copy/swap is not even always required
at the outer level to get the strong guarantee in composite systems
(see http://www.boost.org/more/generic_exception_safety.html for an
example).
>> > This is true in general --- if you have a compound object and you are
>> > performing an operation that may affect more than one of the parts,
>> > and the operations on the affected parts may themselves throw, then
>> > you need to do copy/swap at least for some of the parts to ensure even
>> > the basic guarantee for the whole, regardless of the guarantees
>> > provided by the operations on the parts, unless the operations
>> > themselves are nothrow or there is a nothrow means of rolling back the
>> > operation.
>>
>> That's only true if a particular relationship between the parts is
>> part of the component's invariant... which is often but not always.
>> In that case giving the components a strong assignment operator is no
>> help anyway, since you need a nothrow assignment in order to do
>> rollback.
>
> Or a nothrow swap.
Sure. Since you were asserting the value of preemptive copy/swap to
get strong assignment, I was trying to involve the guarantee offered
by assignment in the discussion. If you just use your nothrow swap
the vector assignment guarantee (beyond basic) is irrelevant.
>> It ultimately means that the outer component has to manage the
>> sequence of copies and swaps. This should always be done at the
>> outer level where it is needed to avoid the ripple effect of
>> unnecessary copies.
>>
>> This is exactly the same argument that applies to mutex locking.
>> Adding locks on all your components doesn't automatically make the
>> system threadsafe, and it can hurt efficiency, because it does the
>> locking at the wrong level of granularity.
>
> Agreed. Sometimes you just have to do the copy/swap to get the basic
> guarantee,
Yes, but you never have to do that with the whole object in the
assignment operator. IOW, the "classic" copy/swap assignment is
almost always a mistake:
X& operator=(X other)
{
    swap(*this, other);
    return *this;
}
Any copy/swap stuff you find yourself having to do is going to happen
on subobjects.
> but if you restrict yourself to those circumstances where it is
> *necessary*, you can avoid a lot of inefficiencies.
Yes, that's what I'm saying.
> I am not sure why I decided to argue the point, given that it
> appears we agree on the major issues --- I just succeeded in
> demonstrating that exception safety of compound types is not
> trivial.
Me neither; I think that was my point all along.
> As a final thought, you can always provide the basic guarantee for
> myclass (the one with the invariant on the vectors) as follows if
> the vectors give the basic guarantee for assignment:
>
> myclass& myclass::operator=(myclass const& rhs)
> {
> try
> {
> v1=rhs.v1;
> v2=rhs.v2;
> }
> catch(...)
> {
> v1.clear();
> v2.clear();
> throw;
> }
> return *this;
> }
>
> It does at least guarantee that the invariant holds, even if the
> resulting object isn't much use ;-)
I think you're assuming that "both vectors are empty" is in the
invariant of myclass, which wasn't in the spec ;-)
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
> Anthony Williams <anthony.wil...@anthonyw.cjb.net> writes:
>
> > If I have a class that contains two vectors, and the invariant
> > states that they have the same number of entries, I have a problem if
> > I use the assignment operator for the vectors in the assignment
> > operator of my class --- if the second assignment throws I need to
> > rollback the first one to ensure my invariant holds. In this case,
> > if the vectors provide the strong guarantee for assignment, then I
> > can make do with only copying one, and then relying on the strong
> > guarantee for the assignment of the second;
>
> You mean:
>
> // #1
> myclass& myclass::operator=(myclass const& rhs)
> {
> vec tmp = rhs.v1; // copy one in case of failure
> v1 = rhs.v1;
> try {
> v2 = rhs.v2; // rely on strong guarantee
> }
> catch(...) {
> v1 = rhs.v1; // rollback
> throw;
> }
> return *this;
> }
>
> ??
No. I'm pretty sure he meant:
// #2
myclass& myclass::operator= (myclass const& rhs) {
    vec tmp (v1);      // copy the old! value in case of failure
    v1 = rhs.v1;       // rely on strong guarantee
    try {
        v2 = rhs.v2;   // rely again on strong guarantee
    }
    catch (...) {
        v1.swap (tmp); // rollback safely
        throw;
    }
    return *this;
}
or better yet:
// #3
myclass& myclass::operator= (myclass const& rhs) {
    vec tmp (rhs.v1);
    v2 = rhs.v2;   // needs strong guarantee
    v1.swap (tmp);
    return *this;
}
either of which needs one less temporary vec than:
// #4
myclass& myclass::operator= (myclass const& rhs) {
    myclass tmp (rhs); // contains *two* temporary vecs
    swap (tmp);
    return *this;
}
but requires the strong guarantee in vec::operator=().
Of course, if vec::operator=() achieves its strong guarantee by making
a temporary copy internally, you haven't really gained anything.
Fortunately, there are many interesting cases where the strong
guarantee is obtained more cheaply. For example, std::vector<T> almost
certainly has a strong-guarantee assignment as long as T has nothrow
copy and nothrow assignment.
More generally, even though the idiom of #4 is easy to write, it may
not be the most efficient way to provide the strong guarantee for
assignment. If you have a class 'myclass' that contains many members
that provide nothrow assignment, and/or at least one member that
provides a strong assignment that is (usually) cheaper than copy/swap,
then you may be better off with something like:
// #5
myclass& myclass::operator= (myclass const& rhs) {
    // copy the members whose assignments might throw
    type1 tmp1 (rhs.m1);
    type2 tmp2 (rhs.m2);
    // ...
    // except possibly for the largest with strong assignment
    mBig = rhs.mBig;
    // assign all the members with nothrow assignment
    mEz1 = rhs.mEz1;
    mEz2 = rhs.mEz2;
    // ...
    // now swap in all the temporaries
    m1.swap (tmp1);
    m2.swap (tmp2);
    // ...
    return *this;
}
Admittedly, #5 has some things going against it: it's harder to write,
much more error-prone (too easy to leave out a field, or add a new
field in the wrong place, or forget to modify the way you deal with a
field if its exception guarantee changes), and may not be all that much
more efficient (because types with a nothrow assignment are often cheap
to copy/swap anyway). Still, it's an opportunity to watch for.
-Ron Hunsinger
> You just answered your own question. Use exceptions for doing something
> about runtime errors in code. If it isn't an error, then don't throw an
> exception. If it is an error, then throw an exception. It really is that
> simple. The hard part is deciding what is, and isn't an error and no one
> can help you with that because only you know what the function in
> question is supposed to do, and what results are acceptable.
I don't fully agree with this; I think it is also useful to throw
an exception in cases where the result is not an "error" per se,
but you want to transfer processing way back up the call chain.
My MCMC simulator occasionally finds out, partway through a very
elaborate computation, that the computation cannot be completed.
This is not an "error" since I fully expect it to happen even when
everything is working perfectly. But it requires me to abandon
the entire computation and start over with a different approach.
The error-code version of this passes an error code through about
nine levels of function calls. The exception version just throws,
and upper-level code catches. Everything to do with that computation
is neatly unwound.
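As a sketch of that pattern (names and structure invented, not Mary's actual code): a dedicated exception type marks the expected "cannot complete" event, and a single catch at the top restarts with a different approach:

#include <iostream>

struct RestartComputation {}; // an expected event, not an error

void innermost(int attempt)
{
    if (attempt < 2)             // pretend the first tries cannot complete
        throw RestartComputation{};
}

// stand-ins for the ~nine intervening call levels
void level3(int a) { innermost(a); }
void level2(int a) { level3(a); }
void level1(int a) { level2(a); }

int main()
{
    for (int attempt = 0;; ++attempt) {
        try {
            level1(attempt);
            std::cout << "completed on attempt " << attempt << '\n';
            break;
        } catch (RestartComputation const&) {
            // everything below is already unwound; pick a new approach
        }
    }
}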
Mary Kuhner mkku...@gs.washington.edu
>
> However one issue has typically been side-stepped. When is it a
> good idea to throw and when to return code ? Let me explain...
>
Hi!
A little bit of self-promotion ... :)
> Of course, if vec::operator=() achieves its strong guarantee by making
> a temporary copy internally, you haven't really gained anything.
No, in fact you've lost something: the ability to use the much cheaper
basic guarantee when that's all you need. But thank you for making my
point ;-)
> Fortunately, there are many interesting cases where the strong
> guarantee is obtained more cheaply. For example, std::vector<T> almost
> certainly has a strong-guarantee assignment as long as T has nothrow
> copy and nothrow assignment.
And there are lots more of those with no restriction on T,
e.g. list<T>::insert. Have you seen
http://www.stlport.org/doc/exception_safety.html? I think that was
written in '96 ;-)
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
It is for all of these reasons that I believe that program logic
errors should _not_ be handled with exceptions. Exceptions are for
anticipated conditions to which your program can meaningfully respond.
Program logic errors do not fall into this category, and should be
dealt with differently. One problem with handling bugs with exceptions
is that it adds twists and turns to your program logic that add no
real value, and if the bug is fixed in the future, it may be difficult
to remove those twists and turns. In other words, adding code to deal
with logic errors increases the overall entropy of the program.
On the other hand, I've never seen a completely convincing general
description of an alternative mechanism for dealing with program logic
errors. Perhaps this is because the nature of program logic errors is
that in general nothing meaningful can be said. The best general
solution I know of is lots of assertions, design for testability, and
lots of testing. Even then, in a sufficiently complex system this may
not catch everything, and other release-time means (I suppose
including exceptions) may be needed.
Wow, I guess I'm not sure what my position is. Let me try to sum up:
1) Exceptions _shouldn't_ be used for handling logic errors.
2) Handling logic errors is hard, so as a last resort, when nothing
else works, maybe I'd consider using exceptions, even though it's
something I shouldn't have to do.
Yeah. That sounds pretty clear. :-)
Bob
I don't think so. You only catch program logic errors at the highest level
to initiate emergency shutdown. The whole rest of your program code is
virtually free of catch clauses for logic exceptions. So there are no twists
and turns and nothing to remove when the bug is fixed. Moreover, no
real-world program can really claim to be bug free. It would therefore be
quite silly to remove the mechanisms that try to keep the damage as low as
possible.
The value in handling bugs with exceptions is that you can keep your program
very simple in terms of rollbacks. Whenever an operation failed (no matter
whether it was due to a runtime error or a program logic error) in the ATM,
we had to undo certain things. Because the undo mechanism relied on stack
unwind, the code was exactly the same for runtime and program logic errors.
Most importantly, rollback failure could _not_ cause major damage.
Because of a number of programming safety rules (e.g. dumb pointers and
dumb arrays were forbidden, programmers had to use at() instead of
operator[] on vector, etc.) the probability of data corruption was quite
low. As a result, a detected program logic error almost always meant that a
programmer had made false assumptions, forgotten to test for null pointers,
etc. We therefore almost never had to fear that the following stack unwind
would operate on corrupted data.
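To illustrate the at() rule mentioned above (the behavior is standard; the example itself is mine): at() turns an out-of-range index into a well-defined std::out_of_range exception, while operator[] with a bad index is undefined behavior and may silently corrupt data:

#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v{1, 2, 3};
    try {
        std::cout << v.at(10) << '\n';   // throws std::out_of_range
    } catch (std::out_of_range const& e) {
        std::cerr << "caught: " << e.what() << '\n';
    }
    // v[10] would compile, but is undefined behavior: no exception and
    // no guarantee of any diagnostic.
}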
> 2) Handling logic errors is hard, so as a last resort, when nothing
> else works, maybe I'd consider using exceptions, even though it's
I wouldn't say "when nothing else works". It is quite straightforward to
move rollback functionality to a watchdog-like process; it "only"
complicates error handling.
> something I shouldn't have to do.
Again, I don't think "shouldn't" is the right word. Whether you do use
exceptions or not for logic errors should IMO depend entirely on an educated
assessment of the attached risks and costs.
For example, although it is highly unlikely, maybe our ATM screws up badly
(e.g. someone gets US$600 although he shouldn't have) once in a hundred
years because we used exceptions to report program logic errors. So what?
We probably saved US$60000 in development costs because we didn't aim to
prevent this case. Of course, you can't really make such an assessment if
your software could harm or kill someone.
All engineering is about "as good as necessary"!
Regards,
Andreas
Mary K. Kuhner wrote:
> In article <postmaster-8A2E4...@nnrp02.earthlink.net>,
> Daniel T. <postm...@earthlink.net> wrote:
>> You just answered your own question. Use exceptions for doing something
>> about runtime errors in code. If it isn't an error, then don't throw an
>> exception. If it is an error, then throw an exception. It really is that
>> simple. The hard part is deciding what is, and isn't an error and no one
>> can help you with that because only you know what the function in
>> question is supposed to do, and what results are acceptable.
>
> I don't fully agree with this; I think it is also useful to throw
> an exception in cases where the result is not an "error" per se,
> but you want to transfer processing way back up the call chain.
>
> My MCMC simulator occasionally finds out, partway through a very
> elaborate computation, that the computation cannot be completed.
> This is not an "error" since I fully expect it to happen even when
> everything is working perfectly. But it requires me to abandon
> the entire computation and start over with a different approach.
>
> The error-code version of this passes an error code through about
> nine levels of function calls. The exception version just throws,
> and upper-level code catches. Everything to do with that computation
> is neatly unwound.
Question. Do exceptions unwind the stack or do you lose?

void a()
{
    try
    {
        b();
    }
    catch( SomeException* ptr )
    {
        // function b's state still on stack or no?
        ptr->CurseSombodyOut();
    }
}

void b()
{
    c();
}

void c()
{
    throw new SomeException;
}
Hi,
> Question. Do exceptions unwind the stack or do you lose?
Lose what? Stack unwinding is no contradiction to losing the stack
context, i.e. the local variables.
> void a()
> {
>     try
>     {
>         b();
>     }
>     catch( SomeException* ptr )
>     {
>         // function b's state still on stack or no?
>         ptr->CurseSombodyOut();
>     }
> }
All information local to b() is lost, of course. How should you
access it here anyhow? If you *need* to preserve part of this information,
make it part of the exception class you threw.
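A sketch of that advice (all types and names invented): copy whatever local context you will need into the exception object before the unwind destroys it:

#include <iostream>
#include <stdexcept>
#include <string>

struct ComputationError : std::runtime_error {
    int step; // local state of the thrower, preserved by copy
    ComputationError(const std::string& what, int s)
        : std::runtime_error(what), step(s) {}
};

void b()
{
    int current_step = 42;  // local to b(), gone once the stack unwinds...
    throw ComputationError("b() failed", current_step); // ...unless copied in
}

int main()
{
    try { b(); }
    catch (ComputationError const& e) {
        std::cerr << e.what() << " at step " << e.step << '\n';
    }
}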
So long,
Thomas
I'm talking about stack space... storage, not data. In other words, could
enough thrown exceptions theoretically blow the stack (stack overflow)
because of the stack NOT unwinding? Sorry, my question was not clear about
that. In my own work I use exceptions for fatal error handling, such as out
of memory or a crucial file or library missing, so I can notify and bail
and do as much housecleaning as possible before exiting.
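A minimal sketch of that "notify and bail" structure (the layout is mine, not the poster's code): one top-level handler reports the fatal condition after RAII cleanup has already run during unwinding:

#include <cstdlib>
#include <exception>
#include <iostream>
#include <new>

void run_application()
{
    // Stand-in for the real program; pretend a fatal condition occurs.
    throw std::bad_alloc();
}

int main()
{
    try {
        run_application();
    } catch (std::bad_alloc const&) {
        std::cerr << "fatal: out of memory\n";  // notify...
        return EXIT_FAILURE;                    // ...and bail; locals were
                                                // unwound on the way here
    } catch (std::exception const& e) {
        std::cerr << "fatal: " << e.what() << '\n';
        return EXIT_FAILURE;
    }
}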
Hi,
> I'm talking about stack space... storage, not data. In other words, could
> enough thrown exceptions theoretically blow the stack (stack overflow)
> because of the stack NOT unwinding? Sorry, my question was not clear about
> that. In my own work I use exceptions for fatal error handling, such as out
> of memory or a crucial file or library missing, so I can notify and bail
> and do as much housecleaning as possible before exiting.
Well, I don't think the C++ standard says a word about how stack space is
allocated, but for the time being let's suppose you're working on a rather
standard architecture that organizes the stack by means of a stack pointer
and keeps local variables relative to this pointer. On such architectures,
you *must* unwind the stack, as otherwise you never reach a point where the
data of the catching function is visible; the stack pointer must be correct
at this point, in the sense that it must point to the same local data that
was placed on the stack when the catching function was called in the first
place. Since the "catcher" called the "thrower", the stack contains less
data when catching, and hence the stack must have been unwound.
Furthermore, the stack contents must remain correct, i.e. if you leave the
catching function, the contents of the caller of this function must again
become active, etc.
However, note that throwing itself might require stack space, be it for the
constructor of the exception class you throw, or for internal housekeeping
of the library that implements the exception mechanism. The conclusion is
that it is not a very wise idea to throw an exception to signal a stack
overflow. (-; (As a side remark, this even *might* work if proper
precautions are taken, but that's getting too complicated and OT here...)
So long,
Thomas
Actually, out of memory is potentially one of the cases where throwing may
not actually help. Yes, I can construct scenarios in which locally thrown
and caught exceptions might blow the stack (cause overflow). I can even
construct really bizarre cases where a non-local catch might cause the same
problem, but I would have to work very hard to do that, and I think the
chance of it happening in non-pathological code is of the same order of
magnitude as a hardware failure, so I would not worry about it.
--
ACCU Spring Conference 2003 April 2-5
The Conference you cannot afford to miss
Check the details: http://www.accuconference.co.uk/
Francis Glassborow ACCU
You seem to believe that when introducing "additional layers" it is normal
(and actually even common) that errors in the now-layered parts can,
should, or are allowed to propagate _untouched_ to the parts that are using
the additional layers. I simply deny that this is often or even normally
the case.
http://c2.com/cgi/wiki?ExceptionPerContext
Volker