
Exception-Free Programming


Andy Robinson
Jan 12, 2005, 4:05:52 PM
Does anyone here argue that exceptions are usually a bad idea?

I apologise if this subject has been done to death already.

I have placed a rant called "Exception-Free Programming" at
http://www.seventhstring.com/resources/exceptionfree.html
and I'll be interested to know what you think.

I realise that I'm putting my head into the lion's mouth. But I
also think there must be others out there who share my views.

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

David B. Held
Jan 13, 2005, 7:07:22 AM
Andy Robinson wrote:
> Does anyone here argue that exceptions are usually a bad idea?

Oh, you'll always find people who argue that. Especially look at
the embedded camp.

> I apologise if this subject has been done to death already.

It has.

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

I'll bite.

> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.

There are, but as you will see, I'm not one of them. ;>

"I think that future generations will regard exceptions as an
uninteresting side-show, explored with enthusiasm for a while, only
to be abandoned."

And what evidence do you have that this is or will occur?

"It is said that checking the returned status code after calling
a function is a burden to the programmer."

I wouldn't call it a 'burden' so much as 'a potential source of
silent errors'. When are silent errors good? Would you like your
compiler to silently pass over errors because the compiler writer
didn't feel like checking all the return codes? For many cases,
your program would compile and run, but produce output that is only
transiently wrong. And this is somehow better than having the
program crash at the earliest possible point?

"But it isn't half as much of a burden as using an exception,"

Using exceptions is about as much of a burden as using classes (that
is to say, hiding your data, rather than making everything a global
variable). In fact, many of the arguments you make for "exception-
free programming" could be very analogously made for "access-free
programming" where all data is available to anyone who can get a
pointer to it.

"with the result that the function may never return at all, so
leading to the whole can of worms known as "exception-safe
programming". "

Which, as some point out, is a bit of a misnomer. The more accurate
term might be "error-safe programming", which means that the
complication has nothing to do with the exception mechanism and
everything to do with writing correct code.

"At least when you check a return value and deal with it, it is
clear what is happening. When exceptions are being used you cannot
tell just by looking at a function, which of the functions it calls
might throw an exception and never return."

Interesting. Are you also suggesting that when return codes are used
that they are always checked, and that any function with an unchecked
return value is necessarily a no-fail function? That's the only way
that you could see what is happening in return-value code without
seeing the documentation of the called functions. And that's the only
way that return-value coding would be intrinsically better than
exception handling for this point. I think reality disagrees with you
here.

"Even a humble "+" operator might cause function execution to abort,
if the operator has been overloaded."

And somehow that is worse than if it always returns, but does not always
succeed. You are advocating a paradigm in which the path of execution
follows intent, instead of correctness. You intend for operator+ to
succeed, so you demand that it always does. What happens when your
intent diverges from correctness? You are allowed to proceed anyway,
with a flaw in your program.

"It is claimed that exceptions allow the separation of normal code
from error handling code. But it's no problem to get the same effect
with status codes. If you don't want to deal with a potential error
locally then after a function call returns a status code, just write
"if (status != NO_ERROR) return status;". This is one line longer than
the exception-based version but I argue that this is good : it makes
it clear what is happening, and where the potential return points
are."

And it's also extremely error-prone. You are at the mercy of other
coders to propagate status codes to your function. If you call a
library function written by somebody else, and you want to know why
the function failed, you will only get the reasons that the library
author wants to pass on to you, which may mask all sorts of status
codes that he does not propagate, no matter how important those may
be. For instance, suppose you call a function foo(), written by
someone else, and foo() calls another function bar(), which allocates
memory. Now, if foo() decides not to pass on any status codes from
bar(), but just tells you: "operation failed", you will never know
if foo() failed because of something foo() did, or because bar() ran
out of memory. Clearly, you can take a different course of action if
bar() is able to throw you a std::bad_alloc. But without knowing
how foo()'s callees fail, you have no idea how to respond to foo()'s
status codes, except within the margins given to you by the author of
foo(). Tell me how that is better.
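
To make that concrete, here is a hypothetical sketch (foo, bar and the
status codes are invented for illustration, not taken from any real
library):

    #include <new>
    #include <cstdio>

    enum Status { STATUS_OK, OPERATION_FAILED };

    Status bar()                         // allocates memory
    {
        int* p = new (std::nothrow) int[1000];
        if (!p) return OPERATION_FAILED; // out-of-memory collapses into a generic code
        delete[] p;
        return STATUS_OK;
    }

    Status foo()                         // written by somebody else
    {
        if (bar() != STATUS_OK)
            return OPERATION_FAILED;     // bar()'s reason is lost right here
        return STATUS_OK;
    }

    // With exceptions, a std::bad_alloc thrown inside bar() would pass
    // straight through foo() to whichever caller knows how to respond to
    // memory pressure, without foo() having to anticipate or forward it.
    int main()
    {
        if (foo() != STATUS_OK)
            std::puts("operation failed - but was it foo()'s logic, or memory?");
    }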

"It is claimed that programmers sometimes ignore status codes whereas
exceptions can't be silently ignored."

Hahahahaha!!! "sometimes"!!! Surely you're joking! "Rampant disregard
for status codes" is the phrase that comes to my mind.

"In fact though, if you neglect to consider the possibility of an
error - whether by ignoring a status code or by failing to consider
the possibility of an exception - then you're headed for trouble
either way. Neglecting the possibility of an exception at some point
in your function means that if it happens, your function may well
leave things in an inconsistent state."

However, it's much more likely that you won't catch the exception, or
that the exception will be caught in a place that causes an obvious
program failure. On the other hand, silently ignoring error codes
can lead to program execution that appears correct for quite a while,
and only manifests itself after it is too late to diagnose the exact
cause of the problem.

"The exception itself gets caught further up the call stack, by a
catch block that doesn't know what to do with it. Or if you are in
a callback then it propagates back into the OS and crashes."

I consider that a good thing. If my program is not correctly handling
errors, I want it to crash as soon as possible. Please tell me why
I would rather have it continue to run as if things were correct but
in fact, are not.

"The fact is that whenever you do something which might fail, you
must take care to deal with this failure appropriately if it happens.
Exceptions do not make it any more likely that a sloppy programmer
will remember this responsibility, nor do they make it any easier to
take care of it when you do remember it."

Au contraire! Exceptions won't make the lazy programmer more
responsible, but they will sure make him more obvious! If a sloppy
programmer does not catch exceptions, then they will rear their ugly
heads when a problem occurs, rather than when a problem finally
causes a program to crash. Anyone who has tried to debug a C program
written by someone else without documentation should know what I mean.

"It is pointed out that exceptions propagate automatically. I argue
below that this is really a form of obfuscation and is not good."

But you don't do a very good job of it. ;>

"Apparently, reasearch shows that the "if" statement is a common
source of bugs. Therefore, it is argued (in the C++ FAQ Lite), we
should use exceptions because they can propagate without "if"
statements. This is another bizarre one, because it completely ignores
the fact that exceptions are far harder to use correctly than "if"
statements. I am reminded of an old gag: "Research shows that most
traffic accidents happen on roads. So for safety's sake, drive on the
sidewalk"."

Well, it seems bizarre to you because you assume that everyone always
checks status codes. But I don't think it takes any research at all to
see that this is simply not the case. Not for a majority of
programmers, and not for a majority of functions. It is especially
difficult to correctly use "if" statements in code that you have not
written and to which you do not have access.

"It is pointed out that a C++ constructor cannot indicate an error
by returning a status code. That just means you shouldn't do anything
in the constructor which might fail - move such things into a separate
initialization function."

Hahaha!!! This is where your argument gets particularly weak. What if
the constructor has invariants which would be broken by having a
separate initialization function? Also, with a two-step construction,
you need to maintain the construction state of *each* object
internally!!! You need to make sure that initialization doesn't occur
twice, and you need to make sure that it has occurred at least once.
Which means that you need to check the initialization state in *EVERY*
member function! I'm sorry, but I don't see how anyone can suggest
such a "solution" with a straight face. What's the point in having a
constructor at all if you don't do construction in it?

"It is pointed out that overloaded operators cannot return status
codes if they fail. This is true but exceptions are not the answer. If
an operator needs a way of reporting that it has failed then you
should implement it as a function returning a status code instead."

Hahahaha!!! The whole point of operator overloading is to *write the
solution in the language of the problem domain*. Rewriting everything
as named functions completely defeats the purpose of overloaded
operators.

"Yes, this is more verbose."

No, it's not *just* "more verbose." It's a huge step backwards.

"But it's a good kind of verbosity which makes the possible behaviours
of the program explicitly clear. Comments, for the same reason, are
another example of good verbosity."

Comments should not be verbose. They should only document things which
are unclear from the code itself. Verbosity is never a good thing in
code. I consider it a good day when I can write a net negative number
of lines of code. If you want verbosity, advocate COBOL or Ada. Does
a return code make erroneous functions "explicitly clear"? Well, when
I look at a function being called, I don't see the return code at all.
How that is "explicitly clear" baffles me. What I *do* see is either
the status code being saved or the status code being compared to
something else, usually the "no error" code, which doesn't tell me
anything at all, except that the programmer conscientiously checked
the return code. If the code is handled locally, I might see a switch
statement checking the various possibilities, but there is no guarantee
that all of the possible status codes will be checked. Really, I don't
see how status codes are any more self-documenting than exceptions.
You ultimately have to know what codes the callee might return, and why.

"First, think of all those books and magazine articles that you won't
have to read!"

You mean the ones that tell you how to write code that behaves
gracefully in the face of errors? Guess what? The copy/swap idiom for
class assignment is useful whether you deal with exceptions or return
codes. But I don't think anyone invented it until we had exceptions.
Consider that.
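
For anyone who hasn't seen it, a minimal sketch of copy/swap (the
Buffer class is invented purely for illustration):

    #include <algorithm>
    #include <cstddef>

    class Buffer {
    public:
        explicit Buffer(std::size_t n) : size_(n), data_(new char[n]) {}
        Buffer(const Buffer& other)
            : size_(other.size_), data_(new char[other.size_]) // may throw bad_alloc
        {
            std::copy(other.data_, other.data_ + size_, data_);
        }
        ~Buffer() { delete[] data_; }

        // All the work that can fail happens while constructing 'tmp'.
        // If that throws, *this is untouched; the swaps themselves cannot fail.
        Buffer& operator=(const Buffer& other)
        {
            Buffer tmp(other);
            std::swap(size_, tmp.size_);
            std::swap(data_, tmp.data_);
            return *this;
        }

    private:
        std::size_t size_;
        char*       data_;
    };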

"I'm taking about the articles which tell you how to decide when to
use an exception, followed by the article next month telling you how
to use "exception-safe programming" to cope with the resulting mess."

It's only a mess if your code was already written to ignore errors,
which is symptomatic of idioms that allow and perhaps even encourage
such coding (hint: status codes). Code that is "exception-safe" is
"error-safe", which means that you could replace the exceptions with
error codes and the calling code would still be correct (and perhaps
more so than if it were designed with error codes in mind). If writing
error-safe code is a hassle to be avoided, then let's go ahead and
ditch exceptions.

"I'm sure I can't be the only person to have reflected that if
that's how difficult it is, then why are we doing it?"

My guess is that we've gotten used to ignoring errors, and exceptions
force us to think about them like we never have before. If you are
advocating the status quo, which says don't handle errors until they
cause your program to crash, then by all means come out and say it.

"The title of this article is intended to suggest the idea that the
best way of making your programs exception-safe is to make them
exception-free."

And I would paraphrase that as: "the best way of making your programs
error-safe is to ignore the errors".

"The idea of functions returning values to indicate what they did,
will always be present in your programs even if you also use
exceptions."

But in general, I write functions to return values indicating what
they *did*, not what they *didn't*. If you prefer checking what the
function *didn't* do, that's fine, but don't be surprised if you don't
have a large following.

"This makes exceptions redundant, causing problems while solving
none."

They are only redundant if you think two-phase construction is a good
idea and operator overloading is a bad one. But if you think that, why
are you using C++?

"Programming safely in the presence of exceptions is difficult"

Correction: "Programming safely in the presence of errors is difficult."

"- but where is the advantage gained from this difficulty? There"
isn't one."

Notice how the rest of the sentence doesn't make any sense when you
state the first part correctly.

"When we use status codes as function values and test them, then it's
clear where the possible return points of the calling function are."

When we use status codes as function values when the function has a
natural return value that is not a status code, we are forcing an
unnatural paradigm onto the design of functions that makes code *less*
readable and usable. A function is a mapping from a domain to a
codomain. The codomain defines the return values. Functions are useful
precisely because you can define the codomain to be relevant to the
problem at hand. If you always define the codomain to be the space of
error return codes, then you have eliminated a great deal of the
usefulness of true functions (not just procedure-like functions).

Furthermore, returning to the same point is very much like demanding
Single-Entry, Single-Exit. Sounds good in principle, not so useful in
practice. Talk about obfuscation...look at a function with nested
loops and non-trivial exit conditions coded for SESE.
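
A tiny invented example of what I mean:

    // Single-Entry, Single-Exit: the exit condition has to be threaded
    // through both loops in a flag.
    int find_sese(const int grid[][8], int rows, int value)
    {
        int result = -1;
        bool found = false;
        for (int r = 0; r < rows && !found; ++r)
            for (int c = 0; c < 8 && !found; ++c)
                if (grid[r][c] == value) {
                    result = r * 8 + c;
                    found = true;
                }
        return result;                   // the one and only return
    }

    // Early return: the control flow simply follows the logic.
    int find_early(const int grid[][8], int rows, int value)
    {
        for (int r = 0; r < rows; ++r)
            for (int c = 0; c < 8; ++c)
                if (grid[r][c] == value)
                    return r * 8 + c;
        return -1;
    }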

"In the old days a function could only relinquish control by saying
"return" or reaching the end. This is simple and good, let's not spoil
it for no reason."

In the old days we wrote code in assembler and had control over which
opcodes ended up in our executable. This is simple and good, let's not
spoil it for no reason.

"Using exceptions forces us to regard "exceptional" failures as an
entirely different category of event than "expected" (not exceptional)
failures."

Uhh...how so? You can choose to throw or not throw in either case.

"This is supposed to justify handling them by a different mechanism.
But in my experience there is no clear dividing line between these
categories."

That's because there is no *intrinsic* dividing line...

"When you try to open a file and fail, is that exceptional? It all
depends on whether the caller checked for its existence and access
rights beforehand."

No, it all depends on what the function promised to do, and what it
required as a precondition. If the function documents that there are
no preconditions on the file to be opened, then it should probably
report any problems as an exception. If the function requires you to
do all the checking yourself, then it should probably assert on any
problems it encounters.

"Lengthy articles have been written about this issue, which supports
my point that the distinction isn't obvious."

No, it just means that the distinction isn't mature, and that the
programming community is learning because we weren't all born with a
programming gene and stone tablets from above telling us how to write
error-safe programs.

"Anyway what is the advantage of using an exception rather than a
returned status-code? If the caller catches the exception immediately
it occurs then clearly there is no advantage, in fact the status-code
version is likely to be cleaner."

All unsubstantiated points. If the caller catches the exception
immediately, it better be because it is going to do something useful
with it, like log it or respond to it. I don't see how the status-code
version is likely to be cleaner. In fact, odds are, the function
could return multiple exceptions. But the caller doesn't need to
know all of them. It can choose to handle just a subset of possible
errors. The status-code version could do the same, but it would also
have to remember to propagate the error up the call stack. It's
entirely possible that the caller thinks he knows all the codes
returned by the function and never propagates the status code. But
if the called function changes, then that assumption is broken. But
if the thrown exceptions change, the exception-safe code is still
exception-safe (most likely).

"If the exception is allowed to propagate up through several nested
function calls before being caught then this might involve less
visible code as some of the functions involved do not need to contain
any code for this particular situation. But in fact this doesn't
really simplify things in a useful way at all. The compiler still has
to insert hidden return points wherever you call a function or
overloaded operator which might throw an exception, and as a
programmer you have to know where these return points are and always
bear in mind the possibility that they might be taken."

Which amounts to saying that you have to know how your program can fail
and keep in mind the alternative execution paths that will result. This
is true whether you are writing exception-safe code or status-code-safe
code. Exceptions, however, automate most of this process for you,
whereas status codes require a lot of manual boilerplate coding (and
there is the redundancy you were complaining about with exceptions).

"To hide things from view when you don't need to know about them can
be good. But to hide things which are crucially important and must not
be ignored is obfuscation, and the apparent tidiness of the resulting
code is a dangerous illusion."

You can write exception-safe code without knowing what exceptions pass
through it. In fact, that's the best way to write it. And the
resulting code will be just as robust under status-code
paradigms.

Now, let's review your "poor man's throw":

if (status != NO_ERROR) return status;

First, I must restate that this idiom essentially throws away the
functional nature of functions in C++, which is a terrible waste. In
order for this idiom to work, every function that can possibly fail
must use this protocol. This means that you basically cannot return
data from a function. You have to pass out parameters instead, which
is inefficient, because often times, the return value is a temporary
that can be elided. Considering how often objects get copied in C++,
forcing programmers to use out parameters everywhere is a serious
performance burden. Second, let's return to your idea of using a
"fat status". You correctly observe that integral error codes might
not be sufficient (especially since they cannot emulate exceptions'
capability of passing pertinent instance-specific messages up the
call stack). Using your idiom, every caller must allocate a status
code whether it is used or not! And they must do so *on every
function invocation*. This makes your program *extremely* sensitive
to the performance of the status object. Alternatively, you could
use a global object, but this would make multi-threading all that
more complicated. Then you could use thread-local storage, but
then you wouldn't need to return the code anyway...it would just
always be visible. In that case, you wouldn't need to use the
return mechanism at all. You could just check an error flag.
But clearly, you intend for the status code to be passed through
the return mechanism, which is a performance nightmare once you
get past simplistic error numbers.
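
To make the cost concrete, here is roughly what the protocol looks like
once real data has to come back through out parameters (a hypothetical
sketch, not code from your article):

    #include <cstdio>
    #include <string>

    enum Status { STATUS_OK, FILE_NOT_FOUND };

    // The natural return value (the file's contents) is displaced into an
    // out parameter, and every caller repeats the same test-and-return line.
    Status load_text(const std::string& path, std::string* out)
    {
        std::FILE* f = std::fopen(path.c_str(), "rb");
        if (!f) return FILE_NOT_FOUND;
        char buf[256];
        std::size_t n;
        while ((n = std::fread(buf, 1, sizeof buf, f)) > 0)
            out->append(buf, n);
        std::fclose(f);
        return STATUS_OK;
    }

    Status count_bytes(const std::string& path, std::size_t* out)
    {
        std::string text;
        Status status = load_text(path, &text);
        if (status != STATUS_OK) return status;   // the "poor man's throw"
        *out = text.size();
        return STATUS_OK;
    }

    int main()
    {
        std::size_t n = 0;
        if (count_bytes("example.txt", &n) == STATUS_OK)
            std::printf("%lu bytes\n", (unsigned long)n);
    }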

I find your arguments wholly unconvincing, glossing over important
points and disregarding the reality of modern programming. I don't
think exceptions are perfect, and they will certainly continue to
mature. But really, the problem of exception complexity is not an
issue with exceptions as much as an issue of writing error-safe code,
which has clearly not been a priority for far too long. You would
do much better to write an article on error-safe idioms using error
codes, for those poor souls who cannot use a language featuring
exceptions.

Dave

ntrif...@hotmail.com
Jan 13, 2005, 7:10:47 AM
The only reason for not using exceptions I can think of is that it
takes time to learn exception-safe coding. But again, the same can be
said for C++ in general.

jakacki
Jan 13, 2005, 7:09:42 AM
> Does anyone here argue that exceptions are usually a bad
> idea?
>
> I apologise if this subject has been done to death
> already.
>
> I have placed a rant called "Exception-Free Programming"
> at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

You challenge arguments of exceptions proponents, but you
fail to cite any articles, books or postings except C++ FAQ
Lite. I think you should invest more work in justifying
claims you make. In many cases they contradict existing
literature. To be taken seriously you have to engage directly with
the claims made in that literature, pointing out particular
publications and revisiting the arguments presented there.

BR
Grzegorz

--
Free C++ frontend library: http://opencxx.sourceforge.net
China from the inside: http://www.staryhutong.com
Myself: http://www.dziupla.net/gj/cv

Emil Kirichev
Jan 13, 2005, 7:17:08 AM
I can't argue that exceptions are a bad idea. There are places where
some error handling mechanism is needed, and exceptions are the best
choice. Of course I think that too many exceptions may cause undue
complexity and the code may become difficult to support, but
generally, exceptions are always needed, when you use them correctly.

L.Suresh
Jan 13, 2005, 11:38:52 AM
Here's a list of my views.

> I speak here of C++ though of course the issues apply to all
> languages supporting exceptions

So you will find me referring to JAVA occasionally.

a) I find JAVA's enforcement of checked exceptions wonderful. It forces
you to handle all the checked exceptions, so you can be sure that you
have handled all exceptional paths. Sure, you can subvert these
mechanisms and write catch-all blocks to suppress the exceptions. But
that's bad programming.

An example would be the checkError() method of java.io.PrintStream. A
common pitfall for the novice is to forget to call this method to
check whether the underlying stream has thrown an IOException.
(PrintStream is known not to emit any exceptions; it sets the error
state internally instead.)

OTOH, if the exception had been thrown, it would have forced the caller
to handle it as he sees fit. Here you get all the help you need from
the compiler. Some say that this will lull you into a false sense of
security. But when properly used you can be sure (from the help you get
from the compiler) that all exceptional paths are taken care of.

b) C++ and JAVA differ in how they treat exception specifications.

    int f(); // #1

In C++, #1 can throw any exception, whereas in JAVA it is a no-throw
guarantee. I feel that JAVA has an edge over C++ in the enforcement of
exception specifications. In C++ you can write code such as:

    int f() throw() {
        throw 1; // Flagrant violation, but the compiler lets it go...
    }

The reason Stroustrup gives for letting #1 throw any exception is that
the alternative would require an exception specification on virtually
every function. That would be a significant cause of recompilation and
would inhibit cooperation with software written in other languages.

c) Exceptions are sometimes good way of returning from a recursive
function.

d) Exceptions are a nice way to signal errors in constructors. True,
you can move the error-causing part to a separate initialization
function, but this places a burden on the client to call the
initialization function as well. If the initialization function fails,
then there was no use for the object in the first place; you need not
have constructed it. Throwing an exception from the constructor
achieves this nicely: when an exception is thrown from the constructor
the object is never constructed in the first place, and C++ ensures
proper cleanup.
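
A small sketch of (d), with invented class names: if the constructor
throws, any members already constructed are destroyed and the object
never comes into existence for the caller to misuse.

    #include <stdexcept>
    #include <string>
    #include <cstdio>

    class Connection {
    public:
        explicit Connection(const std::string& host)
            : log_("connecting to " + host)        // already constructed...
        {
            if (host.empty())
                throw std::invalid_argument("empty host name");
            // ...so if we throw here, log_'s destructor still runs, and no
            // Connection object ever exists.
        }
    private:
        std::string log_;
    };

    int main()
    {
        try {
            Connection c("");                      // never gets constructed
        } catch (const std::invalid_argument& e) {
            std::printf("construction failed: %s\n", e.what());
        }
        return 0;
    }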

e) An exception specification signals a contract of the function. By
looking at the signature you can get an idea of how the function may
signal an error when it is not able to fulfil its job. Again, in C++,
since the enforcement by the compiler is not great, you cannot be
exactly sure which particular exceptions you may receive when you call
a function.

f) Throwing exceptions helps you to map different status codes to
different types of exceptions. Instead of code that does,

    if (status_code == ...) {
        // do this
    } else if (status_code == ...) {
        // do that
    }

you can write the error-handling code in different handlers. This lends
clarity to the program. And the handlers can catch even derived classes
thrown as exceptions.

Also, several status codes can be grouped into a single exception
if the caller cannot differentiate between status_code_1 and
status_code_2.
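
Roughly what (f) looks like in code (the exception types here are
invented for illustration):

    #include <stdexcept>
    #include <string>
    #include <cstdio>

    // A small hierarchy lets callers catch at whatever granularity they need.
    struct parse_error : std::runtime_error {
        explicit parse_error(const std::string& m) : std::runtime_error(m) {}
    };
    struct syntax_error : parse_error {
        explicit syntax_error(const std::string& m) : parse_error(m) {}
    };
    struct eof_error : parse_error {
        explicit eof_error(const std::string& m) : parse_error(m) {}
    };

    void parse(const char* text)
    {
        if (!*text)        throw eof_error("unexpected end of input");
        if (*text == '?')  throw syntax_error("unexpected '?'");
        // ... normal parsing ...
    }

    int main()
    {
        try {
            parse("?");
        } catch (const syntax_error& e) {   // handle one case specially
            std::printf("syntax: %s\n", e.what());
        } catch (const parse_error& e) {    // everything else in the group
            std::printf("parse: %s\n", e.what());
        }
        return 0;
    }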

g) Exception handlers give clarity to the code, without cluttering the
usual path with the exceptional path. As you said, the line between the
usual and exceptional paths may be thin. But then it calls for a
judgement on the part of the programmer to decide.

h) Let exceptions propagate and handle them where it is appropriate.
Sometimes the low-level code cannot recover from an error which can
supposedly be recovered from by code at a higher level.

> The compiler still has to insert hidden return points wherever you
> call a function or overloaded operator which might throw an
> exception, and as a programmer you have to know where these return
> points are and always bear in mind the possibility that they might
> be taken. To hide things from view when you don't need to know about
> them can be good. But to hide things which are crucially important
> and must not be ignored is obfuscation, and the apparent tidiness of
> the resulting code is a dangerous illusion.

Yes, you have to know how the mechanism works! If a function doesn't
handle an exception thrown from a function it calls, it means that
the function cannot take recovery measures and gives its caller a
chance to handle it.

All these things must be used judiciously, of course, with an
understanding of the exact costs.

-lsu

Thomas Richter
Jan 13, 2005, 12:10:23 PM
Hi,

> Does anyone here argue that exceptions are usually a bad idea?

> I apologise if this subject has been done to death already.

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

Very provocative (by intent, I believe).

What can be said about it? "Exceptions are a bad idea if you don't
know how to deal with them". Using exceptions requires a rigorous
change in your programming paradigm, to one that is very different
from "classical" programming with return codes in several ways, more
than what could be expected from a tiny, innocent-looking
construct. Using traditional programming plus exceptions is indeed a
pretty bad idea. Thus, if you feel insecure with exceptions, then
indeed you shouldn't use them.

Short cut: Use the programming language and programming style that
suits you best.

I, personally, can do several worlds. (-;

Greetings,
Thomas

(who has already used coroutines and would like to see them in a
higher-level programming language because they are handy at times ;-)

msalters
Jan 13, 2005, 3:02:18 PM

Andy Robinson wrote:
> Does anyone here argue that exceptions are usually a bad idea?

We sometimes see that opinion. The mods keep out most of the
trolls, so it's much more common in clc++.

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

I think you managed to miss quite a few important points, are
not familiar with all the possible ways to implement exceptions,
and managed to get some facts wrong on other points as well.

In detail:
"We all know you can do everything which exceptions do by using
status codes as return values of functions". Wrong; an int has
a limited domain whereas there is an infinite set of objects,
with an infinite hierarchy. When passing an exception through a
callback, you have more control over which exceptions are
filtered (from both sides); filtering int error codes is a
nightmare.

"It is claimed that exceptions allow the separation of normal code
from error handling code. But it's no problem to get the same effect
with status codes."

Wrong. /You/ may be able to separate them. The maintenance
programmer, perhaps. The compiler cannot. There are already
implementations that won't even load exception handlers
in RAM until needed. Most implementations keep them out of
the cache. Your if( ) branch is just that, and often both
branches will end up in cache. How should the compiler know
0 is an error? Or 1 is the error, and 0 is OK? You need a
lot of work, and PGO, and some luck to get an approximation.

"Neglecting the possibility of an exception at some point in
your function means that if it happens, your function may well
leave things in an inconsistent state."

This is technically true, since you write "may". Of course,
if your function uses RAII style objects, the dtors will
clean things up. But, for that to work, you need matching
ctors, and those might need exceptions.

"I have seen it claimed (in the C++ FAQ Lite) that the
if-statement which tests a return code increases the
software development burden, because both branches of
the if need to be tested. I find this utterly bizarre.
Clearly we need to test error handling whether by status
codes or by exceptions."
Again, an obvious truth which is false once you look at the
/real/ problem. Both if()s and exceptions add paths through
a function. However, with exceptions these paths add up
(either nothing, or A, or B, or C is thrown) while if()s
multiply (if(A), if(B) and if(C) can each be true or false).
Clearly, shorter functions suffer less. But as you admit,
with exceptions functions are shorter to start with.

"If an operator needs a way of reporting that it has failed
then you should implement it as a function returning a status

code instead". Tricky, how do you think that should work with
all the STL algorithms? Exceptions propagate automatically,
whether from operators or functions, but how do you propagate
an unknown error code ?

"The title of this article is intended to suggest the idea
that the best way of making your programs exception-safe

is to make them exception-free." But you don't tell us how
to make programs return-value-safe.

"The idea of functions returning values to indicate what they
did, will always be present in your programs even if you also

use exceptions. This makes exceptions redundant"
No, it doesn't. My functions return values from the operation
result domain, not what or how they did it. If the operation
does not happen, no value from the result domain exists. It's
not uncommon for the return type to have no default constructor,
so what do I return? And even if it does, how can you
distinguish between say an empty string or a failed operation?

"When exceptions are being propagated then [a dtor] is the only
way of doing such tidying up. This can be a pain, forcing you
to create special classes whose only reason for existence is to
accomplish some specialised tidy-up in their destructors."
No. Google for ScopeGuard. You need only one class, and it exists
today.
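
A stripped-down version of the idea (not the actual ScopeGuard code,
just its shape):

    #include <cstdio>

    // Minimal scope guard: runs a cleanup action on scope exit unless dismissed.
    template <typename F>
    class ScopeGuard {
    public:
        explicit ScopeGuard(F f) : f_(f), active_(true) {}
        ~ScopeGuard() { if (active_) f_(); }
        void dismiss() { active_ = false; }
    private:
        F    f_;
        bool active_;
    };

    struct CloseFile {
        std::FILE* f;
        void operator()() const { std::fclose(f); }
    };

    int copy_header(const char* path, char* out, std::size_t n)
    {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return -1;
        CloseFile closer = { f };
        ScopeGuard<CloseFile> guard(closer);   // fclose runs on *every* exit path,
                                               // whether we return early or throw
        if (std::fread(out, 1, n, f) != n)
            return -1;                         // no cleanup code needed here
        return 0;
    }

    int main()
    {
        char buf[16];
        return copy_header("example.bin", buf, sizeof buf) == 0 ? 0 : 1;
    }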

"One solution is to define your own set of status codes and map
all the others onto these. But this is a never-ending task
(for instance Microsoft's file nserror.h specifying HRESULTs
is 9200 lines long), and any status code you have omitted to
specify a mapping for will not be reported in a useful way.

My solution is to use 64 bit integers where the top 32 bits
specify the category of status code and the lower 32 bits have
the original raw status code."

You do know why the HRESULT can be so big? That's because the
top 16 bits of a HRESULT specify the category and the lower
16 bits have the original raw status code. Of course, the
lower 16 bits might be 8 bits category and 8 bits original
original original error code. Now I understand why we'll
"need" 128 bits integers, it's because someone will want to
wrap your error code.

"The internal mechanics of throwing and catching exceptions
are compiler-specific. This is fortunate as it has prevented
the rot from spreading further - into the OS API itself for
instance. That really would be horrible."

The irony. Using Windows as an example, without being aware
that in Windows exceptions /are/ in the OS, and are not
actually compiler-specific (the binding to e.g. VC++ is, though).
Just google for SEH.

Regards,
Michiel Salters

"Philipp Bachmann" <"reverse email address"

unread,
Jan 13, 2005, 3:10:29 PM1/13/05
to
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.
>
> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.

Just three quick comments:
- You say "But it isn't half as much of a burden as using an exception,
with the result that the function may never return at all, so leading to
the whole can of worms known as "exception-safe programming"."

In my opinion, a difference should be made between "exceptional
conditions" in an application and "exceptions" as one possible
implementation technique provided as a feature of the programming
language to signal such conditions. The topic of "exception-safe
programming" has to do with "exceptional conditions" or errors and
not with "exceptions" in particular. So even if you use status codes
to signal success or failure of a call to a (member) function, you
must decide which level of "exception safety" you want.
- You say "Even a humble '+' operator might cause function execution
to abort, if the operator has been overloaded." Yes. But the convention
of returning a single value (rather than "std::pair< T,int >" for the
value and a status code) otherwise forbids overloading such an operator
unless it is possible to ensure that it cannot fail at all. Of course you
could alternatively say, "o.k., let's return 'std::numeric_limits< T >::max()'
in case of failure", but this makes it impossible to define a standard
set of status codes which return values could easily be checked
against (e.g. enum errorCode_t { runtimeError = -1,
outOfMemory = -2 ... }). See the sketch after this list.
- The comparison of exceptions with "setjmp()" / "longjmp()" misses
the point that exceptions at least provide the advantage of stack
unwinding.
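
A sketch of the two conventions from the second point (the function is
invented, for illustration only):

    #include <utility>
    #include <limits>
    #include <cstdio>

    enum errorCode_t { noError = 0, runtimeError = -1, outOfMemory = -2 };

    // Convention 1: return the value and a status code together.
    std::pair< double, int > checked_divide(double a, double b)
    {
        if (b == 0.0)
            return std::make_pair(0.0, static_cast<int>(runtimeError));
        return std::make_pair(a / b, static_cast<int>(noError));
    }

    // Convention 2: reserve a sentinel value in the result domain itself,
    // which leaves no room for a standard set of status codes.
    double checked_divide_sentinel(double a, double b)
    {
        if (b == 0.0)
            return std::numeric_limits< double >::max();
        return a / b;
    }

    int main()
    {
        std::pair< double, int > r = checked_divide(1.0, 0.0);
        if (r.second != noError)
            std::printf("divide failed with code %d\n", r.second);
        return 0;
    }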

Cheers,
Philipp.

Dave Moore
Jan 13, 2005, 3:09:39 PM

"Andy Robinson" <an...@seventhstring.com> wrote in message
news:cs3bs5$dqp$1$8300...@news.demon.co.uk...

> Does anyone here argue that exceptions are usually a bad idea?

Probably, but I have become pretty inured to it due to the colossal heap of
evidence to the contrary 8*).

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

Well I read your posting, and I found it completely unconvincing, and in
fact I think it demonstrates that you simply don't understand how exceptions
are supposed to work (at least in C++). I am glad you called it a rant
rather than an article or dissertation, because either of those would imply
you were presenting justifiable claims based on research, rather than just
flying off the handle.

I imagine others who are more qualified will set you straight on some of the
specific points you mentioned, but I can chime in on some general issues.
With regard to C++, since that is the only language with exceptions I am
familiar with, here are 3 major points about exceptions that you totally
missed, glossed over, or at the very least failed to refute in your
diatribe.

1) Exceptions are BUILT-IN.
For developers interested in writing error-safe code, this makes them
(eventually) oh-so-much-easier to use than the error-code based methods you
advocate in your rant. The latter force one to write gobs of boring,
repetitive, and basically irrelevant boilerplate code in order to obtain an
end result that is, quite likely, not nearly as robust as the equivalent
treatment using exceptions(!!) Exceptions may require a significant
investment of time to learn on the front end, but thereafter one can write
error-safe code that is MUCH cleaner than the equivalent using error-codes.
If you don't believe me, see (for example) Herb Sutter's book "Exceptional
C++" ... with a set of 10 worked examples, he shows how simple, clear and
robust error-safe programming can be when C++ exceptions are used properly.
Ok, you have to think carefully about your design to achieve such success,
but how can that possibly be a bad thing?

2) Exceptions are STANDARD.
This is almost the same as the previous point, but it is worth
mentioning again because it is IMHO a HUGE advantage. C++ exceptions
provide developers with a standardized mechanism to address the problem of
writing error safe code. This has in turn facilitated the development of a
consistent set of criteria that we can all use to evaluate the potential
pitfalls associated with a given piece of code. As stated above, using
exceptions correctly requires time and effort, but you only have to do it
once. It is unlikely that a similar unity of approach could be achieved by
language developers using simple error codes, and by the time they had
developed something as robust and flexible as C++ exceptions, they would
likely end up requiring the same effort to learn and use properly. Why
reinvent the wheel? Additionally, I would argue that it was the
standardization of C++ exceptions that brought the issue of writing
error-safe code into the spotlight, generating all those magazine articles
that you mentioned in your rant. Why is having so many smart people
focusing on a single (long-ignored) issue, using a common framework and
grammar a bad thing for the programming community?

3) Exceptions are OPTIONAL.
Programmers who don't need error-safe code (single developers writing
in-house, throw-away code), or don't care about it (by all accounts, a
large majority), can pretty much ignore exceptions. Ironically, the C++
exception mechanism still works invisibly in the background to make their
resulting programs more error-safe than if it were not there. For example,
if an attempt to allocate memory fails using new, the resulting bad_alloc
exception will still propagate to the top of the stack and kill the program
right away, even if the programmer has never heard of exceptions or
error-safe programming. This benefit is totally FREE, requiring no lines of
code, not even to check error-codes. Furthermore, if you want to implement
something as ill-advised as error-codes, you are free to do so and
exceptions will stay out of your way, while still providing a (very) basic
level of error safety as I mentioned above.

Whew ... I got a bit worked up and wrote more than I initially intended.
Anyway, in order to successfully argue against C++ exceptions, I think you
will have to (at least) come up with strong points that outweigh the 3
significant benefits described above. I'm not saying that it can't be done
(although I expect it will be very hard), just that you haven't done it yet.

Dave Moore

Andy Robinson
Jan 13, 2005, 4:44:10 PM
jakacki wrote:

>> Does anyone here argue that exceptions are usually a bad
>> idea?
>>
>> I apologise if this subject has been done to death
>> already.
>>
>> I have placed a rant called "Exception-Free Programming"
>> at
>> http://www.seventhstring.com/resources/exceptionfree.html
>> and I'll be interested to know what you think.
>
> You challenge arguments of exceptions proponents, but you
> fail to cite any articles, books or postings except C++ FAQ
> Lite. I think you should invest more work in justifying
> claims you make. In many cases they contradict existing
> literature. To be taken seriously you have to engage directly with
> the claims made in that literature, pointing out particular
> publications and revisiting the arguments presented there.

Surely what matters is whether an argument is valid, not
the credentials of the person making it. The only reason
I gave a source for some of the arguments I mentioned,
was because I thought those arguments were so strange that
I was worried people might think I had made them up.

If there are arguments in favour of using exceptions
which I have failed to consider, I'll be grateful if
you would mention them.

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Walter
Jan 13, 2005, 4:49:21 PM

"Andy Robinson" <an...@seventhstring.com> wrote in message
news:cs3bs5$dqp$1$8300...@news.demon.co.uk...
> Does anyone here argue that exceptions are usually a bad idea?
>
> I apologise if this subject has been done to death already.
>
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.
>
> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.

I used to agree with your point of view on this. But with time and
experience, I changed my mind for two reasons:

1) Programs that don't use exceptions to report errors tend to have serious
bugs in them for the following reason - programmers forget to check the
error codes, or check them incompletely. The resulting bugs rarely show up
in testing; they show up on the customer's machine when they are the most
expensive to fix. Exceptions cannot be ignored by omission; the programmer
has to deliberately write code to catch and ignore them. There's no blithely
going on assuming that the previous operation succeeded.

2) Most (nearly all) of the problems associated with writing exception safe
code revolve around memory leaks. But using automatic memory management
(garbage collection) avoids this problem. Other resources, such as file
handles, will still have to be managed in an exception-safe manner, but with
memory management automatically taken care of, this problem is greatly
reduced in scope.

I've explored both of these ideas with C++ (I use a garbage collector with
C++), and the combination has been so successful that they are major themes
in the D programming language.

P.S. The question arises in your article and from many people: just when should
something be considered a natural return value and when should it be
considered an error? The answer is that it depends on the purpose of the function
being written. If its stated purpose is to open a file for reading, such as
ReadFile(), then it should throw an exception if the file cannot be opened
because the file doesn't exist. Correspondingly, a function named
DoesFileExist() should not throw if the file doesn't exist, because asking a
question is part of the natural flow of the program, and not an error.
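
In code, the distinction looks roughly like this (ReadFile and
DoesFileExist are the names from above; the bodies are just a sketch):

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Asking a question: "no" is a normal answer, so no exception.
    bool DoesFileExist(const std::string& path)
    {
        if (std::FILE* f = std::fopen(path.c_str(), "rb")) {
            std::fclose(f);
            return true;
        }
        return false;
    }

    // Stating a purpose: failing to fulfil it is an error, so throw.
    std::FILE* ReadFile(const std::string& path)
    {
        std::FILE* f = std::fopen(path.c_str(), "rb");
        if (!f)
            throw std::runtime_error("cannot open " + path);
        return f;
    }

    int main()
    {
        if (DoesFileExist("missing.txt"))
            std::puts("found it");
        try {
            std::FILE* f = ReadFile("missing.txt");
            std::fclose(f);
        } catch (const std::runtime_error& e) {
            std::puts(e.what());
        }
        return 0;
    }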

-Walter
www.digitalmars.com free C, C++ and D compilers
"code of the nerds"

Andy Robinson
Jan 13, 2005, 5:21:58 PM
David B. Held wrote:

> I'll bite.

Thanks for the reply.

I hope you won't mind if I first summarise some of your points as:
"Error handling is tricky to get right. In reality, there are lots of
careless programmers who don't check return codes. Not checking
return codes leads to silent errors, whereas failing to handle an
exception makes visible errors."

I pretty much agree with this, except that you place a lot of value on
the "noise" made by an uncaught exception in bringing attention to
problems. In fact exceptions, by definition, don't happen very often.
So the noise made by an uncaught exception is not really a good way
of tracking down problems, since the exception probably won't happen
(except on the customer's desk of course). If we really want a way of
somehow enforcing better discipline in error handling then it has to
be something which operates at compile time and which does not depend
on the error actually happening before we can discover the problem.


> Using exceptions is about as much of a burden as using classes (that
> is to say, hiding your data, rather than making everything a global
> variable). In fact, many of the arguments you make for "exception-
> free programming" could be very analogously made for "access-free
> programming" where all data is available to anyone who can get a
> pointer to it.

This has not been my experience. I think almost everyone who has used
them would agree that classes have enormous value in making programs
easier to read, write and understand, and thus more reliable. My
experiences with exceptions have not been so pleasant.


> "Even a humble "+" operator might cause function execution to
> abort, if the operator has been overloaded."
>
> And somehow that is worse than if it always returns, but does
> not always succeed. You are advocating a paradigm in which the
> path of execution follows intent, instead of correctness.
> You intend for operator+ to succeed, so you demand that it
> always does. What happens when your intent diverges
> from correctness? You are allowed to proceed anyway,
> with a flaw in your program.

No, in my view an operator should not be used to implement something
which may need to indicate that it has failed (unless it can indicate
this by its result, such as the "not-a-number" convention for
doubles).


You argue that with status codes, we are at the mercy of other people
(e.g. 3rd party libraries) to propagate them and generally use them
correctly. This is certainly true. And if we use exceptions then we
depend on 3rd party libraries to handle them correctly too (to be
"exception safe"). Which is considerably harder, in my view.


> "It is pointed out that a C++ constructor cannot indicate
> an error by returning a status code. That just means you
> shouldn't do anything in the constructor which might fail
> - move such things into a separate initialization function."
>
> Hahaha!!! This is where your argument gets particularly weak.
> What if the constructor has invariants which would be broken
> by having a separate initialization function? Also, with a
> two-step construction, you need to maintain the construction
> state of *each* object internally!!! You need to make sure
> that initialization doesn't occur twice, and you need to make
> sure that it has occurred at least once.
> Which means that you need to check the initialization state
> in *EVERY* member function! I'm sorry, but I don't see how
> anyone can suggest such a "solution" with a straight face.
> What's the point in having a constructor at all if you don't
> do construction in it?

For one thing this is a bit of a side-show, a technical difficulty
caused by C++ embracing exceptions a bit more closely than I would
wish. But there are simple ways around it. What I in fact do is have
a member variable to say whether initialization is complete, and then
begin each member function with "ASSERT(m_initialised);". Problem
solved. Or we could get a constructor to return a status code by
passing a "StatusCode *" parameter to it.
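
In case it's not clear, the two workarounds I mean look roughly like
this (a sketch, not code lifted from my programs):

    #include <cassert>
    #include <cstdio>

    #define ASSERT assert      // stand-in for whatever ASSERT macro you use

    typedef int StatusCode;
    const StatusCode STATUS_OK = 0;
    const StatusCode STATUS_OPEN_FAILED = 1;

    class Document {
    public:
        Document() : m_initialised(false), m_file(0) {}
        ~Document() { if (m_file) std::fclose(m_file); }

        StatusCode Init(const char* path)        // the step that can fail
        {
            m_file = std::fopen(path, "rb");
            m_initialised = (m_file != 0);
            return m_initialised ? STATUS_OK : STATUS_OPEN_FAILED;
        }

        long Length()
        {
            ASSERT(m_initialised);               // guard at the top of each member
            std::fseek(m_file, 0, SEEK_END);
            return std::ftell(m_file);
        }

    private:
        bool       m_initialised;
        std::FILE* m_file;
    };

    // Alternative: let the constructor report through a pointer parameter.
    class Document2 {
    public:
        Document2(const char* path, StatusCode* status)
            : m_file(std::fopen(path, "rb"))
        {
            *status = m_file ? STATUS_OK : STATUS_OPEN_FAILED;
        }
        ~Document2() { if (m_file) std::fclose(m_file); }
    private:
        std::FILE* m_file;
    };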


> Does a return code make erroneous functions "explicitly clear"?
> Well, when I look at a function being called, I don't see the
> return code at all.
> How that is "explicitly clear" baffles me. What I *do* see
> is either the status code being saved or the status code
> being compared to something else, usually the "no error" code,
> which doesn't tell me anything at all, except that the
> programmer conscientiously checked the return code.

My point about this is that it makes it clear that an error might
cause this function to abort at this point. This is important
information when writing the function, but is frequently invisible
when exceptions are being used.


> "I'm talking about the articles which tell you how to decide
> when to use an exception, followed by the article next
> month telling you how to use "exception-safe programming"
> to cope with the resulting mess."
>
> It's only a mess if your code was already written to ignore errors,
> which is symptomatic of idioms that allow and perhaps even encourage
> such coding (hint: status codes). Code that is "exception-safe" is
> "error-safe", which means that you could replace the exceptions with
> error codes and the calling code would still be correct (and perhaps
> more so than if it were designed with error codes in mind).

I agree with you about the interchangeability of the two techniques.
My point is that it is easier to achieve safe code with status codes,
for the reasons originally stated. After all, you can convert between
block-structured programs and goto-based programs too. But it's harder
to write correctly with gotos.


> If you are advocating the status quo, which says don't handle
> errors until they cause your program to crash, then by all
> means come out and say it.

That's never been my status quo.


> "The idea of functions returning values to indicate what
> they did, will always be present in your programs even if
> you also use exceptions."
>
> But in general, I write functions to return values indicating what
> they *did*, not what they *didn't*.

I agree with you about this, and sometimes find myself wishing for a
clean syntax by which a function could return multiple values.
However it's a very minor technicality. If a function wants to return
other values apart from its StatusCode then use additional pointer or
reference parameters.


> "In the old days a function could only relinquish control by
> saying "return" or reaching the end. This is simple and
> good, let's not spoil it for no reason."
>
> In the old days we wrote code in assembler and had control
> over which opcodes ended up in our executable. This is simple
> and good, let's not spoil it for no reason.

It wasn't simple at all; it was horrendous for anything bigger than a
toy program. We had very good reasons for moving on.


> "Using exceptions forces us to regard "exceptional" failures
> as an entirely different category of event than "expected"
> (not exceptional) failures."
>
> Uhh...how so? You can choose to throw or not throw in either case.
>
> "This is supposed to justify handling them by a different
> mechanism. But in my experience there is no clear dividing
> line between these categories."
>
> That's because there is no *intrinsic* dividing line...
>
> "When you try to open a file and fail, is that exceptional?
> It all depends on whether the caller checked for its
> existence and access rights beforehand."
>
> No, it all depends on what the function promised to do, and what it
> required as a precondition. If the function documents that there
> are no preconditions on the file to be opened, then it should
> probably report any problems as an exception. If the function
> requires you to do all the checking yourself, then it should
> probably assert on any problems it encounters.

But I don't see any advantage to doing things this way. If the
File::Open(...) function returns a status code then this is easily
documented, written, and used. No judgement decisions (about whether
and when to use exceptions) have to be made, implemented, or
documented. It's easier for everyone.


> "Anyway what is the advantage of using an exception rather
> than a returned status-code? If the caller catches the
> exception immediately it occurs then clearly there is no
> advantage, in fact the status-code version is likely to
> be cleaner."
>
> All unsubstantiated points. If the caller catches the exception
> immediately, it better be because it is going to do something useful
> with it, like log it or respond to it. I don't see how the status-code
> version is likely to be cleaner. In fact, odds are, the function
> could return multiple exceptions. But the caller doesn't need to
> know all of them. It can choose to handle just a subset of possible
> errors. The status-code version could do the same, but it would also
> have to remember to propagate the error up the call stack. It's
> entirely possible that the caller thinks he knows all the codes
> returned by the function and never propagates the status code. But
> if the called function changes, then that assumption is broken. But
> if the thrown exceptions change, the exception-safe code is still
> exception-safe (most likely).

This is an interesting point. Personally I can't recall ever wanting
to do this, but I agree it could happen. (I pretty much always either
deal with all errors at a certain point, or propagate all errors).
The essential point here is the "categorisation" of errors which,
with exceptions, happens because they can belong to different
exception subclasses. Clearly there are any number of ways we could
categorise status codes if we wanted the same capability.


> "If the exception is allowed to propagate up through several
> nested function calls before being caught then this might
> involve less visible code as some of the functions involved
> do not need to contain any code for this particular
> situation. But in fact this doesn't really simplify things
> in a useful way at all. The compiler still has to insert
> hidden return points wherever you call a function or
> overloaded operator which might throw an exception, and as a
> programmer you have to know where these return points are
> and always bear in mind the possibility that they might
> be taken."
>
> Which amounts to saying that you have to know how your program
> can fail and keep in mind the alternative execution paths that
> will result. This is true whether you are writing
> exception-safe code or status-code-safe code.

Yes indeed.

> Exceptions, however, automate most of this process for you,
> whereas status codes require a lot of manual boilerplate coding (and
> there is your redundancy that you were complaining about
> exceptions).

Here we disagree. It's not a lot of code; it's just a one-line
test-and-return. And the big advantage of this, as I keep saying, is
that it makes the possible return points visible, which makes it
vastly easier to keep in mind the alternative execution paths.

> "To hide things from view when you don't need to know about
> them can be good. But to hide things which are crucially
> important and must not be ignored is obfuscation, and the
> apparent tidiness of the resulting code is a dangerous
> illusion."
>
> You can write exception-safe code without knowing what
> exceptions pass through it. In fact, that's the best way to
> write it. And the robustness of the resulting code will be just
> as correct for status-code paradigms.

Here you make it sound as if it was reasonably easy. But in the Dec
2003 issue of the C/C++ Users Journal you co-wrote an article with
Andrei Alexandrescu called "Exception Safety Analysis" which doesn't
make it sound so easy. You say "Whenever you write a function, you
need to have an understanding of its behavior on exceptional paths,
and make a statement about that function's behaviour in the presence
of exceptions". In the later section entitled "Exception-safety
analysis" you give a algorithm by which we can determine the
exception safety of a function. This algorithm occupies almost two
whole columns and makes truly scary reading, given that it is
intended to be executed by humans. You apologise for its
laboriousness, but also comment that "there would be more to add if
the algorithm were to be made rigorous"! (Anyone else reading this
thread, I really recommend you to read this article).

The algorithm depends recursively, as we would expect, on the
exception safety of each function it calls. In practice we would need
to document the exception safety of each function we analyse, and
then use that documentation when we analyse that function's callers.
This means that we depend on the conscientiousness of the person who
wrote that documentation for each already-existing function, unless we
are prepared to re-analyse them all for ourselves.

Given the difficulty of performing such analysis, this scenario begins
to seem quite unreal. You speak several times of careless programmers
who don't check status codes. How likely is it that they will perform
exception-safety analysis?


> Now, let's review your "poor man's throw":
>
> if (status != NO_ERROR) return status;
>
> First, I must restate that this idiom essentially throws away the
> functional nature of functions in C++, which is a terrible waste.
> In
> order for this idiom to work, every function that can possibly fail
> must use this protocol. This means that you basically cannot return
> data from a function. You have to pass out parameters instead,
> which
> is inefficient, because often times, the return value is a temporary
> that can be elided. Considering how often objects get copied in
> C++,
> forcing programmers to use out parameters everywhere is a serious
> performance burden.

I kind of agree, but I find in reality it's not much of a problem. For
small utility classes (e.g. a complex number class) operator
overloading or functions which return values often work fine without
needing exceptions or status codes. Functions which can fail are
typically larger so a bit of inefficiency in the call mechanism
is insignificant in context. Also let's remember that exception
handling also has overheads, even when no exception is thrown.

> Second, let's return to your idea of using a
> "fat status". You correctly observe that integral error codes might
> not be sufficient (especially since they cannot emulate exceptions'
> capability of passing pertinent instance-specific messages up the
> call stack). Using your idiom, every caller must allocate a status
> code whether it is used or not! And they must do so *on every
> function invocation*. This makes your program *extremely* sensitive
> to the performance of the status object.

No, I did say that you can "return 0" if all is well, or "return new
StatusCode(...)" if there is an error. In practice I've never found
this scheme necessary. It seems to me that an error message has two
parts. The first is "what were we trying to do from the user's
perspective" (open a document? calculate a formula in a spreadsheet?)
- this is known in the top level function which supervises the
operation. And the second is "why did it fail" (can't find file?
formula tries to divide by 0?) - this is known at a lower level where
it happens, and is propagated back as a status code (a number is
sufficient). The top level function looks up the status code and
produces "Can't open document - file <whatever> not found". But there
would be no efficiency problem with "fat status" either, if we wanted
it.
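
Roughly, what I have in mind looks like this (a sketch with invented
names, not production code) - the low level propagates a bare number and
only the top level turns it into a user-visible message:

#include <string>

enum StatusCode { NO_ERROR = 0, FILE_NOT_FOUND, DIVIDE_BY_ZERO };

// "Why did it fail" - known at the low level, looked up at the top.
std::string DescribeStatus(StatusCode status)
{
    switch (status)
    {
    case FILE_NOT_FOUND: return "file not found";
    case DIVIDE_BY_ZERO: return "formula tries to divide by zero";
    default:             return "unknown error";
    }
}

// "What were we trying to do" - known only at the top level.
std::string OpenDocumentError(const std::string& filename, StatusCode status)
{
    return "Can't open document '" + filename + "' - " + DescribeStatus(status);
}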


> I find your arguments wholly unconvincing, glossing over important
> points and disregarding the reality of modern programming. I don't
> think exceptions are perfect, and they will certainly continue to
> mature. But really, the problem of exception complexity is not an
> issue with exceptions as much as an issue of writing error-safe
> code,
> which has clearly not been a priority for far too long. You would
> do much better to write an article on error-safe idioms using error
> codes, for those poor souls who cannot use a language featuring
> exceptions.

Sorry you didn't like it - but I don't think that you've refuted my
points.


Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Jorgen Grahn

unread,
Jan 13, 2005, 5:22:22 PM1/13/05
to
On 12 Jan 2005 16:05:52 -0500, Andy Robinson <an...@seventhstring.com> wrote:
> Does anyone here argue that exceptions are usually a bad idea?
>
> I apologise if this subject has been done to death already.
>
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.
>
> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.

Speaking as an ordinary user, I'd love to see some discussion on this topic.
Error handling sucks, and it's just a matter of deciding which technique
sucks worse in what situation...

But for that to happen, someone had better come up with something less
inflammatory than your paper ...

One direct comment though:

[the article]


> It is pointed out that a C++ constructor cannot indicate an error by
> returning a status code. That just means you shouldn't do anything in the
> constructor which might fail - move such things into a separate
> initialization function.

That can be a terrible price to pay for avoiding exceptions -- you miss out
on RAII. Let's say I have a class Foo with some complex state and good
invariants (or whatever the name is for those predicates which hold for all
objects of a certain kind).

If the constructor can fail to bring my object to this well-defined state
and I'm not allowed to throw an exception, I have to tell myself "this is
either a good Foo, or a broken one" every time these objects appear in my
code. Or I have to add an Init() method, and tell myself "this is either a
good Foo, or a broken Foo, or a Foo I haven't tried initializing yet".
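
In code, the difference looks something like this (a deliberately trivial
Foo, invented just to show the two shapes):

#include <stdexcept>

// With an exception, the constructor either establishes the invariant or
// no object ever exists - a Foo you can touch is always a good Foo.
class Foo
{
public:
    explicit Foo(int size)
    {
        if (size <= 0)
            throw std::invalid_argument("Foo: size must be positive");
        // ... acquire resources, establish invariants ...
    }
};

// Without exceptions there is a third state - "not initialised yet" -
// and every user of the class has to remember to ask about it.
class Foo2
{
public:
    Foo2() : m_ok(false) {}
    bool Init(int size)                    // returns false on failure
    {
        if (size <= 0) return false;
        // ... acquire resources ...
        m_ok = true;
        return true;
    }
    bool IsOk() const { return m_ok; }
private:
    bool m_ok;
};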

/Jorgen

--
// Jorgen Grahn <jgrahn@ Ph'nglui mglw'nafh Cthulhu
\X/ algonet.se> R'lyeh wgah'nagl fhtagn!

Andy Robinson

unread,
Jan 13, 2005, 5:22:47 PM1/13/05
to
msalters wrote:

> In detail:
> "We all know you can do everything which exceptions do by using
> status codes as return values of functions". Wrong; an int has
> a limited domain whereas there is an infinite set of objects,

I did also mention using "status objects" if required.

> "I have seen it claimed (in the C++ FAQ Lite) that the
> if-statement which tests a return code increases the
> software development burden, because both branches of
> the if need to be tested. I find this utterly bizarre.
> Clearly we need to test error handling whether by status
> codes or by exceptions."
> Again, an obvious truth which is false once you look at the
> /real/ problem. Both if()s and exceptions add paths through
> a function. However, with exceptions these paths add up
> (either nothing, or A, or B, or C is thrown) while if()s
> multiply (if(A), if(B) and if(C) can each be true or false).
> Clearly, shorter functions suffer less. But as you admit,
> with exceptions functions are shorter to start with.

I'm not sure I get this.


> "If an operator needs a way of reporting that it has failed
> then you should implement it as a function returning a status
> code instead". Tricky, how do you think that should work with
> all the STL algorithms? Exceptions propagate automatically,
> whether from operators or functions, but how do you propagate
> an unknown error code ?

if (status != NO_ERROR) return status;

> "The title of this article is intended to suggest the idea


> that the best way of making your programs exception-safe
> is to make them exception-free." But you don't tell us how
> to make programs return-value-safe.

Most of the techniques - e.g. RAII - are the same, it's just easier
because you can see what is happening.

> "The idea of functions returning values to indicate what they
> did, will always be present in your programs even if you also
> use exceptions. This makes exceptions redundant"
> No, it doesn't. My functions return values from the operation
> result domain, not what or how they did it. If the operation
> does not happen, no value from the result domain exists. It's
> not uncommon for the return type to have no default constructor,
> so what do I return? And even if it does, how can you
> distinguish between say an empty string or a failed operation?

I talked in another post about returning multiple values.

> "When exceptions are being propagated then [a dtor] is the only
> way of doing such tidying up. This can be a pain, forcing you
> to create special classes whose only reason for existence is to
> accomplish some specialised tidy-up in their destructors."
> No. Google for ScopeGuard. You need only one class, and it exists
> today.

I did - and the first thing I found was a thread from this very group,
last month, discussing the pitfalls of using it.
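
(For anyone following the thread who hasn't seen it, the idea is roughly
this - a much-simplified sketch, not the real ScopeGuard: a single
template whose destructor runs a cleanup action on any exit path, normal
or exceptional, unless it is dismissed first.)

template <class Cleanup>
class GuardSketch
{
public:
    explicit GuardSketch(Cleanup cleanup) : m_cleanup(cleanup), m_active(true) {}
    ~GuardSketch() { if (m_active) m_cleanup(); }  // a real guard would also
                                                   // swallow exceptions here
    void Dismiss() { m_active = false; }           // call once we have succeeded
private:
    Cleanup m_cleanup;
    bool    m_active;
};

// Usage sketch: 'closer' would be some functor that releases a resource.
//   GuardSketch<Closer> guard(closer);
//   ... code that may return early or throw ...
//   guard.Dismiss();   // everything worked, keep the resource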


> "The internal mechanics of throwing and catching exceptions
> are compiler-specific. This is fortunate as it has prevented
> the rot from spreading further - into the OS API itself for
> instance. That really would be horrible."
>
> The irony. Using Windows as an example, without being aware
> that in Windows exceptions /are/ in the OS, and are not
> actually compiler-specific (the binding to e.g. VC++ is, though)
> Just google for SEH.

Horrors!

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Andy Robinson

unread,
Jan 13, 2005, 5:55:32 PM1/13/05
to
"Philipp Bachmann" wrote:

> In my opinion, a difference should be made between
> "exceptional conditions" in an application and "exceptions" as one
> possible implementation technique provided as a feature of the
> programming language to signal such conditions. The topic of
> "exception-safe programming" has to do with "exceptional
> conditions" or errors and not with "exceptions" in particular. So
> even if you use status codes to signal success or failure of a
> call to a (member) function, you must decide which level of
> "exception safety" you want.

I'm sure there's some truth in this. But my point is that whatever
you're trying to achieve, it's easier if you can see what's happening
(that is, if you can see the return points of a function, visible in
the source).

> - You say "Even a humble '+' operator might cause function execution
> to abort, if the operator has been overloaded." Yes. But the

I've talked more about this in a different post.

> - The comparison of exceptions with "setjmp()" / "longjmp()" misses
> the point, that exceptions at least provide the advantage of stack
> unwinding.

I agree that exceptions are cleaner than longjmp. But both remain
essentially a long-distance "goto", which is undesirable for the same
reason that goto's are.

Thanks for the reply - I apologise if mine are getting briefer - I
expected controversy so I shouldn't complain about having several
replies to make!

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Emil

unread,
Jan 13, 2005, 6:48:03 PM1/13/05
to
This argument is a lot like arguing whether C++ is a good idea and
whether it provides any _real_ benefits over using plain old C.

The answer of course is that C++ simply provides tools for higher level
of abstraction. This _can_ be beneficial, yet someone in particular may
or may not benefit from it. Data abstraction can even be disastrous
for some; that's why they stick with C.

Additionally, avoiding exceptions leaves you with no practical options
for reporting failures from constructors. This essentially disables one
of the most important features of C++, namely the guarantee that no
object of user defined type can be used before it has been properly
initialized.

--Emil

David Abrahams

unread,
Jan 13, 2005, 6:47:13 PM1/13/05
to
msalters wrote:
> There are already
> implementations that won't even load exception handlers
> in RAM until needed.

Really? Which ones? I've been talking about that optimization as a
possibility for years, but I have yet to see it anywhere.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

David Abrahams

unread,
Jan 13, 2005, 6:52:56 PM1/13/05
to
Andy Robinson wrote:

> Here you make it sound as if it was reasonably easy.

It actually is.

> But in the Dec
> 2003 issue of the C/C++ Users Journal you co-wrote an article with
> Andrei Alexandrescu called "Exception Safety Analysis" which doesn't
> make it sound so easy. You say "Whenever you write a function, you
> need to have an understanding of its behavior on exceptional paths,
> and make a statement about that function's behaviour in the presence
> of exceptions". In the later section entitled "Exception-safety
> analysis" you give a algorithm by which we can determine the
> exception safety of a function. This algorithm occupies almost two
> whole columns and makes truly scary reading, given that it is
> intended to be executed by humans. You apologise for its
> laboriousness, but also comment that "there would be more to add if
> the algorithm were to be made rigorous"! (Anyone else reading this
> thread, I really recommend you to read this article).

I highly recommend it too. The introduction of the concept of
functional purity to error-handling analysis is interesting.

> The algorithm depends recursively, as we would expect, on the
> exception safety of each function it calls. In practice we would need
> to document the exception safety of each function we analyse, and
> then use that documentation when we analyse that function's callers.
> This means that we depend on the conscientiousness of the person who
> wrote that documentation for each already-existing function, unless we
> are prepared to re-analyse them all for ourselves.
>
> Given the difficulty of performing such analysis, this scenario begins
> to seem quite unreal. You speak several times of careless programmers
> who don't check status codes. How likely is it that they will perform
> exception-safety analysis?

I have two points:

1. The required analysis is the same one you'd have to perform for any
code that can fail, whether with exceptions or status codes or some
other mechanism, to understand its behavior in the presence of error
conditions. You still need to know whether a function can fail, and
whether, if it fails, it may have disturbed the program state.

2. I think the procedure Dave and Andrei gave for doing the analysis
looks scarier than it needs to. At least for me, understanding the
behavior of functions in the presence of errors is much easier than
walking through their procedure. I think the procedure is primarily
interesting as a clue to how one might automate the analysis.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

David B. Held

unread,
Jan 14, 2005, 6:16:59 AM1/14/05
to
{Would participants in this thread please watch the level of heat in
their responses. -mod}

Andy Robinson wrote:

> David B. Held wrote:
> [...]


> I hope you won't mind if I first summarise some of your points as:
> "Error handling is tricky to get right. In reality, there are lots of
> careless programmers who don't check return codes. Not checking
> return codes leads to silent errors, whereas failing to handle an
> exception makes visible errors."

Yes. And it's such a good point that it was worth repeating every
time I said it. ;)

> I pretty much agree with this, except that you place a lot of value on
> the "noise" made by an uncaught exception in bringing attention to
> problems. In fact exceptions, by definition, don't happen very often.
> So the noise made by an uncaught exception is not really a good way
> of tracking down problems, since the exception probably won't happen
> (except on the customer's desk of course). If we really want a way of
> somehow enforcing better discipline in error handling then it has to
> be something which operates at compile time and which does not depend
> on the error actually happening before we can discover the problem.

Well, I agree that static assertions are definitely better than
dynamic ones. But if you create a program that allows the user to
specify the size of an array, how is it possible to determine at
compile time whether that array is going to be too big? If you
can answer that question, writing exception-safe code should be no
problem for you!

As far as exceptions not happening any sooner than errors reported
by return codes, consider this scenario: Code allocates a buffer.
Allocator fails, but because the buffer isn't needed right away,
no attempt is made to write to it. In the exceptional case, the
allocator throws as soon as the program can know that there is a
failure. In the status case, the program does not fail until there
is an attempt to write to the buffer. Consider another scenario:
Program opens a file that will be read from/written to frequently
later on (so opening/closing it is not efficient). If the file
stream c'tor were to throw an exception on a file-does-not-exist
situation, then the program would know right away that there is a
problem. With status codes, the program could run for quite a while
before needing to access the file and finding out that it isn't
available. In fact, one could argue that it is a shortcoming of
the C++ iostreams library that streams do *not* throw exceptions.
Instead, stream operations silently fail, and it can be frustrating
to determine whether your program is failing or not if you forgot
to check the stream state (status code) somewhere.
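
(To be fair, the library does let you opt in to the throwing behaviour by
setting the stream's exception mask; the silent-failure mode is merely the
default:)

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream file;
    // Ask the stream to throw std::ios_base::failure instead of silently
    // setting failbit/badbit.
    file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try
    {
        file.open("does-not-exist.txt");   // failure is reported immediately
    }
    catch (const std::ios_base::failure& e)
    {
        std::cerr << "open failed: " << e.what() << '\n';
    }
}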

> [...]


> This has not been my experience. I think almost everyone who has used
> them would agree that classes have enormous value in making programs
> easier to read, write and understand, and thus more reliable. My
> experiences with exceptions have not been so pleasant.

Perhaps that's because when you started to use classes they were
well-established and had a common set of idioms; but when you started
to use exceptions, the state-of-the-art was not so mature. The fact
is, there are thousands of programmers who have learned to find the
utility in exceptions that you have found in classes.

> [...]


> No, in my view an operator should not be used to implement something
> which may need to indicate that it has failed. (unless it can
> indicate this by its result, such as the "not-a-number" convention
> for doubles).

Which means that you pretty much think the C standard library is an
abomination. I always thought it was ridiculous that atoi(), strtol(),
etc. returned "special" values on error. In fact, they are only
special if you remembered to check errno - and to clear it beforehand,
because it's entirely possible that errno is reflecting the result of
a previous call.

I agree that the error-handling in the C library is primitive and
suboptimal, but not because it uses functions the way functions
were designed to be used. It's primitive because it does not have
the luxury of throwing exceptions.
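
For comparison, getting the strtol() protocol right takes something like
this - note the errno bookkeeping before and after the call:

#include <cerrno>
#include <climits>
#include <cstdlib>
#include <iostream>

int main()
{
    const char* text = "123456789012345678901234567890";
    char* end = 0;

    errno = 0;                                   // clear any leftover value
    long value = std::strtol(text, &end, 10);

    if (end == text)
        std::cerr << "no digits were found\n";
    else if (errno == ERANGE && (value == LONG_MAX || value == LONG_MIN))
        std::cerr << "value out of range for long\n";
    else
        std::cout << "parsed " << value << '\n';
}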

> You argue that with status codes, we are at the mercy of other people
> (e.g. 3rd party libraries) to propagate them and generally use them
> correctly. This is certainly true. And if we use exceptions then we
> depend on 3rd party libraries to handle them correctly too (to be
> "exception safe"). Which is considerably harder, in my view.

Well, we certainly want people to write libraries correctly either way.
But if a library author fails to handle a particular error that is
signaled by a status code, there is *no* way for us to know that.
Whereas, if a library author fails to handle a particular exception,
we will eventually find out, and can make sure that our code behaves
gracefully anyway. Remember that writing exception-safe code just
means writing error-safe code. If it is difficult to write exception-
safe code, then it is EQUALLY difficult to write error-safe code that
uses status codes, because *it is the SAME TASK*. When you say that
it is "considerably harder" to write exception-safe code, you are
really just saying: "Coding was so much easier when I could ignore
status codes."

If I challenged you to prove that all of your code provides at least
the basic guarantee using the status code idiom, you would have just
as many headaches as if the code used exceptions. In fact, you would
have more, because you would have to write by hand all of the error
propagation that is automated by exceptions.

> [...]


> For one thing this is a bit of a side-show, a technical difficulty
> caused by C++ embracing exceptions a bit more closely than I would
> wish.

Hahaha!!! It has nothing to do with exceptions and everything to do
with idiomatic C++. I don't believe exceptions existed when c'tors
were designed.

> But there are simple ways around it. What I in fact do is have
> a member variable to say whether initialization is complete, and then
> begin each member function with "ASSERT(m_initialised);". Problem
> solved.

You call it a "solution", I call it a "hack". And there's no way you
can say that is "natural C++". I certainly would be wary to use any
library you've written, given the luxury you take with needless
status codes. Such hackery would be acceptable in a language like C,
but I find it completely distasteful in C++.

> Or we could get a constructor to return a status code by
> passing a "StatusCode *" parameter to it.

Indeed. If you don't mind polluting every c'tor in your source code
with an exception interface. Why even bother with c'tors and d'tors
if you want to throw away automation that is given to you on a silver
platter?

> [...]


> My point about this is that it makes it clear that an error might
> cause this function to abort at this point. This is important
> information when writing the function, but is frequently invisible
> when exceptions are being used.

If you wrote your code to be error-safe, then it would work properly
whether you used exceptions or not. The whole point of exception-
safe code is that you don't *need* to know every exception that can
occur. Just whether a given operation can fail, and whether it has
effects. That is enough to write safe code in either idiom. Let's
look at a somewhat contrived (but not *too* contrived) example:

int TransferMoney(Account& From, Account& To)
{
    int status;
    if ((status = From.debit(100)) != SUCCESS) return status;
    if ((status = To.credit(100)) != SUCCESS)
    {
        From.rollback();
        return status;
    }
    return SUCCESS;
}

This is more or less idiomatic C style error handling. Note that we
require some kind of no-fail rollback() operation (or credit()
operation). Now, if we instead assume that debit() and credit()
can throw, we are led to consider other ways of making our code
error-safe. An obvious technique is the copy/swap idiom:

void TransferMoney(Account& From, Account& To)
{
    Account Temp(From);
    Temp.debit(100);
    To.credit(100);
    swap(Temp, From);
}

Look, ma! No status checking! The copy/swap idiom uses the principle
of: "Do the dangerous stuff in a disposable bomb shelter. That way, if
something blows up, just walk away like nothing happened." Here, we
do the dangerous debit() operation on a temporary that we've copied
from From. If the copy operation throws, we don't care, because we
didn't modify anything, so TransferMoney() has no effect in that case.
If debit() throws, we also don't care, because our parameters were
again not modified (assuming Account's copy c'tor actually copies
and doesn't do something stupid). If credit() throws, we again don't
care because we have caused no permanent effects. All we require is
that swap() not fail. Then we are guaranteed that by this time, the
entire function will succeed. Hence, the strong guarantee.

In this case, Account may or may not be easy to copy. If it is
expensive to copy or non-copyable, we can always resort to
try/catch/rollback(). Making swap() no-fail should always be trivial.
If it is not trivial, then you should take a good hard look at your
class design and ask yourself why it is not. On the other hand,
rollback() is almost certainly non-trivial, and is much more likely
to be expensive than copy. So why not take advantage of this gift
from the exception world, and try the copy/swap idiom with the
status code idiom? Here it is:

int TransferMoney(Account& From, Account& To)
{
    Account Temp(From);
    if (!Temp.initialized()) return COPY_FAILED;
    int status;
    if ((status = Temp.debit(100)) != SUCCESS) return status;
    if ((status = To.credit(100)) != SUCCESS) return status;
    swap(Temp, From);
    return SUCCESS;
}

Hmm...our function is still correct, but is it so obvious as the
previous example? Our function has mysteriously bloated by nearly
100%! In fact, there's so much boilerplate in this function that
the only reason I am confident that it's correct is that I wrote
it! If you had written it, I would take much more time to examine
it and ensure that the execution paths are correct. And the only
reason I am *really* confident that it's correct is that I adapted
it from an exception safety idiom that I know produces correct
results!

Now you go ahead and sit there and tell me with a straight face
that the third version is better than the second. I dare you.

> [...]


> I agree with you about the interchangeability of the two techniques.
> My point is that it is easier to achieve safe code with status codes,
> for the reasons originally stated. After all, you can convert between
> block-structured programs and goto-based programs too. But it's harder
> to write correctly with gotos.

I'm glad you made that point. You may think that exceptions are like
goto's, and status codes are like blocks. In the physical sense, there
may be some truth to that. However, I would argue that with respect to
maturity, status codes are like gotos and exceptions are like blocks.
Status codes are primitive, unstructured, and weak. Exceptions are
mature, rich, and powerful. In fact, it is fairly difficult to
write error-safe code with status codes, because the status codes
themselves introduce a lot of clutter that obscure the intent of the
code. Returning to our example:

Account Temp(From);
if (!Temp.initialized()) return COPY_FAILED;

This line should be required after every copy that could fail, which
means that it should be automated. The fact that you must write it
manually means that it is a potential source of omission errors. A
vast, deep source of such errors.

int status;

Here's that status object that we always have to allocate somewhere.
Here it's easy to allocate because it's small and stack-based. But
what if we wanted to return an error message saying how the operation
failed? Well then we'd be talking about a lot more complexity,
wouldn't we?

if ((status = Temp.debit(100)) != SUCCESS) return status;

The original line here was a mere 16 characters long. It has now
blossomed to over 50 characters!! It's way over twice the size!
The intent of this code is quite literally dwarfed by the surrounding
error checking mechanism (and it should be clear that I did not
contrive the code to be small or the error handling to be large)!
Just looking at this line of code makes you wonder if it's about
a bank transaction or a status code. A democratic vote of the
text would certainly lead a casual reader to conclude the latter.

if ((status = To.credit(100)) != SUCCESS) return status;

Once again, the action is obscured by error handling.

swap(Temp, From);

This is one of the few lines that was not obfuscated by error
checking.

return SUCCESS;

This line shouldn't even be necessary, because logically speaking,
the function doesn't have a natural return value. In fact, in other
languages (like Ada, say), it wouldn't be a function at all.

> [...]


> I agree with you about this, and sometimes find myself wishing for a
> clean syntax by which a function could return multiple values.
> However it's a very minor technicality.

Ha! That's what you said about constructor exceptions!


> If a function wants to return
> other values apart from its StatusCode then use additional pointer or
> reference parameters.

So basically, write C++ as a quite literal "procedural language".
You really should take a look at Ada.

> [...]


> But I don't see any advantage to doing things this way. If the
> File::Open(...) function returns a status code then this is easily
> documented, written, and used. No judgement decisions (about whether
> and when to use exceptions) have to be made, implemented, or
> documented. It's easier for everyone.

Is it? In fact, this is exactly why error codes are ignored. "Well,
in this case, it's not an error for File::Open() to fail, so I don't
need to check the return value..." Uh...right. More often than not,
that's merely a cover-up for laziness. That, or: "But this call will
never fail!" File::Open() can make a contract with its callees, but
it can't enforce it. Status codes lead to the U.N. of error-handling:
plenty of documentation, but no power to do anything about it.

> [...]


>>Exceptions, however, automate most of this process for you,
>>whereas status codes require a lot of manual boilerplate coding (and
>>there is your redundancy that you were complaining about
>>exceptions).
>
> Here we disagree. It's not a lot of code, it's just a one line
> test-and-return. And the big advantage of this, as I keep saying, is
> that it makes the possible return points visible, which makes it
> vastly easier to keep in mind the alternative execution paths.

The fact is, you can't see all the possible execution paths through a
program with only a quick glance. The compiler is quite free to
reorder instructions at several levels, and remove some code entirely.
So this illusion of being able to precisely trace the flow of execution
is something of a quaint but outdated fairy tale. Anyone who has
tried to write multi-threaded code is even more acutely aware of
the fragility of this illusion.

I think my illustration above gives some idea of the clarity cost of
status code handling and the obvious benefit of exception mechanisms.
The fact is, error-safe code self-documents the possible return paths
for you. Irreversible operations are only performed on temporaries
or at the end of a sequence (to spell it out: irreversible operations
may be an exit point). Sequential operations on parameters or globals
must be no-fail (they are not exit points) or no-effect (possible
exit point). Those two rules alone tell you almost everything you
need to know about both the error safety of the code and the possible
execution paths.

Destructor invocation is not explicit, yet you insist that's a good
thing. However, all kinds of nasty things can happen if you don't use
d'tors correctly. Writing correct d'tors is no more difficult than
writing correct error-safe code. You just follow a few idioms and
don't do anything dangerous.

> [...]


> Here you make it sound as if it was reasonably easy. But in the Dec
> 2003 issue of the C/C++ Users Journal you co-wrote an article with
> Andrei Alexandrescu called "Exception Safety Analysis" which doesn't
> make it sound so easy.

Heh. I'm surprised anyone read that. Anyway, my reply is that
writing exception-safe code is easy, but proving it is hard. ;)

> You say "Whenever you write a function, you
> need to have an understanding of its behavior on exceptional paths,
> and make a statement about that function's behaviour in the presence
> of exceptions". In the later section entitled "Exception-safety
> analysis" you give a algorithm by which we can determine the
> exception safety of a function. This algorithm occupies almost two
> whole columns and makes truly scary reading, given that it is
> intended to be executed by humans. You apologise for its
> laboriousness, but also comment that "there would be more to add if
> the algorithm were to be made rigorous"! (Anyone else reading this
> thread, I really recommend you to read this article).

Heh. And, in fact, I have a more elaborate version of the algorithm
that attempts to encapsulate the basic guarantee as well. Lucky for
you, Andrei insisted on the simpler version. ;> Anyway, if you were
to write an article on how to eat a bowl of cereal, and your target
audience had no concept of what a spoon or bowl was or how to operate
them, I dare say you would end up with a similarly daunting algorithm.
You can follow the algorithm as a complete novice to exception safety,
and fairly confidently conclude whether a given function provides the
strong or nothrow guarantee (the dirty secret is that it more or less
completely omits the basic guarantee, because of the messiness of
defining "invariants"). And the resulting trace should prove to be a
more or less convincing proof of your conclusion to anyone who should
challenge you. On the other hand, most programmers will be able to
do many of the steps in their head without really thinking about them,
and quite a few already do.

> The algorithm depends recursively, as we would expect, on the
> exception safety of each function it calls. In practice we would need
> to document the exception safety of each function we analyse, and
> then use that documentation when we analyse that function's callers.
> This means that we depend on the conscientiousness of the person who
> wrote that documentation for each already-existing function, unless we
> are prepared to re-analyse them all for ourselves.

Yes, this is true in the strictest sense (and should make perfect
sense, with a little thought...how can you prove that your code is
error-safe if the code it calls is not?). However, you can get by
with quite a lot if you just know whether your callees can fail or
whether they have effects. Usually libraries will at least tell
you whether a given function throws, even if it does not exhaust the
list of possible exceptions. And it's usually fairly obvious whether
a function is intended to have effects (but you'd better hope it
is documented when the reality does not match the intuition).

> Given the difficulty of performing such analysis, this scenario begins
> to seem quite unreal. You speak several times of careless programmers
> who don't check status codes. How likely is it that they will perform
> exception-safety analysis?

> [...]

If their program crashes from numerous uncaught exceptions, I think
they will become quite motivated to look into the issue. Whereas, it
doesn't seem nearly as likely that mysterious crashes will incite a
rash of status code checking. I've certainly never observed that
pattern myself.

Dave

Stephen Howe

unread,
Jan 14, 2005, 6:23:20 AM1/14/05
to
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

I religiously check the return values of functions that use status codes.
But that is just it, my colleagues frequently do not.

I jumped on one of my colleagues recently for failing to check the return
value of fopen() (and yes, it had failed) and failing to check the return
values of fread(), fwrite() and even fclose() (a flush to disk could fail).
Your case for status codes is weaker than it appears. If all these C
functions threw exceptions on failure, my colleagues would _have_ to write
code that dealt with the exceptions.
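
For anyone who doubts how much checking that really is, the "correct" C
version of even a trivial save routine looks roughly like this (SaveBuffer
is an invented example):

#include <cstddef>
#include <cstdio>

// Every one of these calls can fail, and every failure needs its own check.
bool SaveBuffer(const char* path, const char* data, std::size_t len)
{
    std::FILE* f = std::fopen(path, "wb");
    if (!f)
        return false;                            // fopen can fail

    if (std::fwrite(data, 1, len, f) != len)     // fwrite can come up short
    {
        std::fclose(f);
        return false;
    }
    if (std::fclose(f) != 0)                     // the final flush can fail too
        return false;

    return true;
}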

Stephen Howe

David B. Held

unread,
Jan 14, 2005, 6:19:10 AM1/14/05
to
David Abrahams wrote:
> [...]

> 1. The required analysis is the same one you'd have to perform for any
> code that can fail, whether with exceptions or status codes or some
> other mechanism, to understand its behavior in the presence of error
> conditions. You still need to know whether a function can fail, and
> whether, if it fails, it may have disturbed the program state.

Exactly right.

> 2. I think the procedure Dave and Andrei gave for doing the analysis
> looks scarier than it needs to. At least for me, understanding the
> behavior of functions in the presence of errors is much easier than
> walking through their procedure. I think the procedure is primarily
> interesting as a clue to how one might automate the analysis.

And right again. The pipe dream was, in fact, to create a tool
that automated exception analysis. Of course, C++ is a little
weak in certain areas (doesn't allow you to explicitly mark an
operation as pure), and even if we could throw in all the non-magical
features to help us out, I'm still not convinced that an automated
tool would be possible, after having spent a fair number of hours
thinking about it.

The algorithm was more of a tool for helping you to prove formally
that a given function is "exception-safe". However, it would probably
not be a bad idea for someone not entirely familiar with exception-safe
programming in C++ to step through a few non-trivial functions with
the algorithm to see places where they might be making assumptions
that lead to traps. But anyone who has had to write exception-safe
code in the real world has probably already internalized the essential
points of the algorithm.

Dave

wka...@yahoo.com

unread,
Jan 14, 2005, 6:21:56 AM1/14/05
to
If you're one of the lucky people who can count on the fingers of one
hand the number of times you've forgotten to free a resource at an
early return, I can see where you might feel that freeing resources in
destructors only is unnecessary overhead. For the rest of us, it's a
good defensive programming habit, even if we don't use exceptions. Is
it really a big burden to use iostreams instead of cstdio streams,
auto_ptr for individual heap objects, and the vector template for heap
arrays? For me, these substitutions cover most of the cases where I
need to worry about exception or multiple-return-point safety.
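
A small sketch of what those substitutions buy (ReadHeader is an invented
example): the clean-up happens on every return path, early or late, without
a single explicit fclose() or delete[]:

#include <fstream>
#include <string>
#include <vector>

bool ReadHeader(const std::string& path, std::vector<char>& header)
{
    std::ifstream in(path.c_str(), std::ios::binary);  // replaces FILE*/fclose
    if (!in)
        return false;                                  // stream closes itself

    std::vector<char> buffer(64);                      // replaces new[]/delete[]
    in.read(&buffer[0], 64);
    if (in.gcount() != 64)
        return false;                                  // buffer frees itself

    header.swap(buffer);
    return true;
}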

Would you require the types passed to STL containers to all have the member
function 'int init(void)'? That kills the idea of using primitive
types in container templates, or minimally forces the use of a traits
template.

Suppose you had a protected member function that called a virtual
member function. Suppose further that, in some derived class, the
override of the virtual function set a derived class member variable,
and this value was used by the derived class member function that
called the base class protected function. Would you see this as bad
style or an example of the flexibility and power of the virtual member
function capability? To me, this is analogous to using an exception.
Sometimes it's desirable for "non-adjacent" layers in the code to
interact in ways that are hidden from the intermediate layers.

Exceptions are one of several features in C++ that have suffered from
our perverted need to be able to link C++ programs with linkers that
were written in 1973. A C++-capable linker should generate
return-point tables for exception handling with unneeded entries
removed, even if we choose to forgo the tedium of including accurate
throw specifications. To me, throw specifications go against the
spirit of exceptions, since they force intermediate functions to have
unnecessary knowledge of the interaction between the thrower and the
throwee.

jakacki

unread,
Jan 14, 2005, 6:22:33 AM1/14/05
to
> > You challenge arguments of exceptions proponents, but
> > you fail to cite any articles, books or postings except
> > C++ FAQ Lite. I think you should invest more work in
> > justifying claims you make. In many cases they
> > contradict existing literature. To be taken seriously
> > you have to polemize directly with claims made in this
> > literature, pointing out particular publications and
> > revisiting arguments presented there.
>
> Surely what matters is whether an argument is valid, not
> the credentials of the person making it. The only reason
> I gave a source for some of the arguments I mentioned,
> was because I thought those arguments were so strange
> that I was worried people might think I had made them up.

In your article you write

It is said ...

It is claimed ...

It is pointed out ...

This means that you are arguing against statements that you have
read somewhere. Why don't you cite the source? If it is a
journal article or a conference paper, then it is likely
that the author presents analysis to prove his/her
conclusions. Without showing exactly where these analyses
are wrong, you are only glossing over details.

> If there are arguments in favour of using exceptions
> which I have failed to consider, I'll be grateful if
> you would mention them.

Well, go to all these papers that you have hidden behind
"It is said...", "It is claimed...", "It is pointed out..."
and address concrete claims made there. If you cite them
explicitly, it would be verifiable if and how you
misinterpret them. Moreover, you would give a chance to
their authors to defend the views and/or explain them
better.

BR
Grzegorz

--
Free C++ frontend library: http://opencxx.sourceforge.net
China from the inside: http://www.staryhutong.com
Myself: http://www.dziupla.net/gj/cv

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Torsten Robitzki

unread,
Jan 14, 2005, 6:27:13 AM1/14/05
to
Andy Robinson wrote:

> Does anyone here argue that exceptions are usually a bad idea?
>
> I apologise if this subject has been done to death already.
>
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

Your article makes very clear that you know that return codes and
exceptions are only two different ways of reporting an error from the
place the error / exceptional condition occurred to a place where that
error could be reported or, even better, be handled. Thus writing
exception-safe code is just as complicated as writing error-aware and
"return-safe" code.

One obvious difference is (as you've mentioned) that you can't see the
direct error-reporting path in the code (AKA int rc = f(); if (rc)
return rc; ;-). I think it is a matter of taste whether code is more
understandable when every second line is an if (rc) return rc; or when
nearly every line of code might throw an exception.

 From discussions I have followed before, I get the feeling that the
preference for exceptions grows with the complexity of a project (and
thus with the maximum depth of the call stack at the time a very
rare error occurs). On the other side, most complaints about exceptions
came from GUI writers. Maybe that's because invalid input from users is
treated as an exception by some GUI frameworks.

regards
Torsten

Bob Bell

unread,
Jan 14, 2005, 6:26:33 AM1/14/05
to
Andy Robinson wrote:
> David B. Held wrote:
> > "When you try to open a file and fail, is that exceptional?
> > It all depends on whether the caller checked for its
> > existence and access rights beforehand."
> >
> > No, it all depends on what the function promised to do, and what it
> > required as a precondition. If the function documents that there
> > are no preconditions on the file to be opened, then it should
> > probably report any problems as an exception. If the function
> > requires you to do all the checking yourself, then it should
> > probably assert on any problems it encounters.
>
> But I don't see any advantage to doing things this way. If the
> File::Open(...) function returns a status code then this is easily
> documented, written, and used. No judgement decisions (about whether
> and when to use exceptions) have to be made, implemented, or
> documented. It's easier for everyone.

What about opening a file in the constructor of an object? It is very
common (in my code, at least) to use constructors and destructors to
manage resources like files (RAII). Constructors cannot return status
codes. How do you propose to report failures in constructors without
exceptions?

> > Exceptions, however, automate most of this process for you,
> > whereas status codes require a lot of manual boilerplate coding
> > (and
> > there is your redundancy that you were complaining about
> > exceptions).
>
> Here we disagree. It's not a lot of code, it's just a one line
> test-and-return.

Even if it were only a "one line test-and-return", it would be for
every function call that can fail. That sounds like a lot of code to
me.

But of course it isn't one line; many functions, when detecting an
error, must back out some work (freeing temporary allocations, closing
files, etc.), which complicates the test-and-return code.

> > Second, let's return to your idea of using a
> > "fat status". You correctly observe that integral error codes
> > might
> > not be sufficient (especially since they cannot emulate exceptions'
> > capability of passing pertinent instance-specific messages up the
> > call stack). Using your idiom, every caller must allocate a status
> > code whether it is used or not! And they must do so *on every
> > function invocation*. This makes your program *extremely*
> > sensitive
> > to the performance of the status object.
>
> No, I did say that you can "return 0" if all is well, or "return new
> StatusCode(...)" if there is an error. In practice I've never found
> this scheme necessary. It seems to me that an error message has two
> parts. The first is "what were we trying to do from the user's
> perspective" (open a document? calculate a formula in a spreadsheet?)
> - this is known in the top level function which supervises the
> operation. And the second is "why did it fail" (can't find file?
> formula tries to divide by 0?) this is known at a lower level where
> it happens, and is propagated back as a status code (a number is
> sufficient). The top level function looks up the status code and
> produces "Can't open document - file <whatever> not found". But there
> would be no efficiency problem with "fat status" either, if we wanted
> it.

Here you illustrate one of the most important points in favor of
exceptions. The point at which errors are detected can be quite distant
(in terms of function call depth) from the point at which they can be
handled. With exceptions, no matter how distant that is, the error
information is guaranteed to be transmitted. With status codes, the
programmer must build the entire transmission mechanism, a tedious,
repetitive, error-prone process.

Bob

Bob Bell

unread,
Jan 14, 2005, 6:25:53 AM1/14/05
to
Andy Robinson wrote:

> "Philipp Bachmann" wrote:
> > - The comparison of exceptions with "setjmp()" / "longjmp()" misses
> > the point, that exceptions at least provide the advantage of
> > stack
> > unwinding.
>
> I agree that exceptions are cleaner than longjmp. But both remain
> essentially a long-distance "goto", which is undesirable for the same
> reason that goto's are.

No; exceptions compare with longjmp about the same way that structured
branching compares with goto. The reason goto and longjmp are
undesirable is that they are unconstrained; they allow control to
transfer anywhere, producing spaghetti. Exceptions, like if, for,
switch, etc., are constrained to transfer control in very well-defined
ways.

Bob

dietma...@yahoo.com

unread,
Jan 14, 2005, 6:29:04 AM1/14/05
to
L.Suresh wrote:
> a) I find JAVA's enforcement of checked exceptions wonderful.

Thanks for pointing this out in this context! Although I whole-heartedly
disagree with your statement, it provides another insight to me why
exception specifications are a bad idea: it is in some sense nothing
else than enforced checking of return codes and return codes are a
suboptimal approach to handle exceptional error conditions. ... and
both actually fail with the same kind of problem: in code which is
generic in some form (i.e. a user may parameterize it with some user
defined functionality through some mechanism like callbacks, virtual
functions, or template parameters) neither return codes nor exception
specifications can seamlessly cope with errors unknown to the author
of the generic code. The simple issue for the exception specification
enthusiast: what good is it that some function only propagates
"IO Exception" if the code used at some point actually wants to channel
a "DB Exception" through? Sure enough, I can wrap my exception but what
good is that? ... and for the return code people: where do I put my
database specific information to recover from the problem?
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.contendix.com> - Software Development & Consulting

dietma...@yahoo.com

unread,
Jan 14, 2005, 6:33:50 AM1/14/05
to
Andy Robinson wrote:
> Does anyone here argue that exceptions are usually a bad idea?

No, because exceptions are a good idea. Essentially, I claim
that your whole analysis is ignorant of several major aspects
and wrong on many counts.

Let me just note some major aspects you entirely ignored or are
plain wrong about:

Exception-safety vs. clean-up in "Exception-Free" Programming:

The key to exception-safety is cleaning up objects when leaving
their local context. This is in no way different for any other
form of error handling. If your code is correct when no
exception is thrown, it is a trivial transformation to make the
code exception-safe as well: all you need to do is wrap the
explicit clean-up code in a try/catch block. Of
course, such clean-up tends to be error-prone
(with or without exceptions) and is best handled with
RAII idioms anyway.
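
Spelled out with an invented Resource type, the transformation (and the
RAII alternative) looks roughly like this:

// Invented type, purely for illustration.
struct Resource
{
    static Resource* Acquire() { return new Resource; }
    void Release()             { delete this; }
};

void UseResourceManually()
{
    Resource* r = Resource::Acquire();
    try
    {
        // ... work that may throw ...
    }
    catch (...)
    {
        r->Release();    // explicit clean-up, duplicated for the error path
        throw;           // re-throw; the caller decides what to do with it
    }
    r->Release();        // ... and again for the normal path
}

// The RAII version needs no try/catch at all.
struct ResourceHolder
{
    Resource* p;
    explicit ResourceHolder(Resource* r) : p(r) {}
    ~ResourceHolder() { if (p) p->Release(); }
};

void UseResourceWithRaii()
{
    ResourceHolder holder(Resource::Acquire());
    // ... work that may throw; clean-up runs on every path ...
}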

Generic code vs. error codes

In generic code (i.e. code that uses operations provided by a user,
e.g. in the form of virtual functions or operations on template
parameters) you have to assume that essentially any operation
can fail due to reasons unknown to you. This results in code
which primarily handles error recovery if you don't have
exceptions. This is somewhat related to the next issue:

Code clarity

Often enough the code is sufficiently complex without the
handling of errors which are not related to the local context
(i.e. errors which are propagated from some called function).
Note that I don't mind handling "expected" problems, e.g. due
to expected data quality problems. However, in the logic of
complex functions I don't want to be bothered with potential
memory allocation problems somewhere else.

Return values vs. error codes

The error codes being returned from functions use up a rather
precious resource, namely the return value of a function: if a
function computes a result, its only option for storing it is in
an out parameter. This complicates the code (e.g. by
separating a variable declaration from its initialization) and
also reduces code clarity. It also inhibits function chaining
(i.e. calling a function with the result of another function)
for two reasons: for one, each function has to be checked
individually for errors, and second, function chaining only works
with return values. Inhibiting function chaining reduces code
clarity but the impact is actually even worse: especially in
template code where you have to expect each operation to
possibly fail (see above) this is actually a non-starter:
frequently it is impossible to deduce the return type from the
context, but it is possible to hand off the return value to some
function template which appropriately processes this value.
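
To illustrate the chaining point with invented functions:

#include <string>

// With return values the result of one call feeds the next directly ...
std::string Trim(const std::string& s)      { return s; }   // stub
std::string ToUpper(const std::string& s)   { return s; }   // stub
std::string Normalise(const std::string& s) { return ToUpper(Trim(s)); }

// ... whereas with status codes every result travels through an out
// parameter and needs its own check, so the chain falls apart.
int TrimRc(const std::string& s, std::string& out)    { out = s; return 0; }
int ToUpperRc(const std::string& s, std::string& out) { out = s; return 0; }

int NormaliseRc(const std::string& s, std::string& out)
{
    std::string trimmed;
    int rc = TrimRc(s, trimmed);
    if (rc != 0) return rc;
    return ToUpperRc(trimmed, out);
}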

Execution cost

You seem to imply in your document that exception handling
effectively does the same as if-statements coming with each
function call. This is actually incorrect: good exception
handling approaches don't burden the normal program flow at
all. The exception handling code is entirely separate and
is used only if an exception is actually thrown. Explicit
error checking can result in considerably slower code if the
erroneous situation is indeed rather rare (again, out of
memory comes to mind). The issue of costs of exception
handling is discussed in more detail in the PDTR
(<http://www.open-std.org/jtc1/sc22/wg21/docs/PDTR18015.pdf>;
see section 2.4).

Encapsulation vs. return codes

In your document you claim that explicit checking makes it
visible which operations can fail. This is, however, the
wrong place to document this knowledge: the client should be
as ignorant about implementation details of called functions
as possible since a change in the function's implementation
might cause the "knowledge" about the function to become
wrong. This effectively means that the caller of a function
should always assume that the function may fail. Thus, each
and every use of a function is already a visible indication
of a point of failure.

Effectively, the assumed cost of exception handling is
non-existant: the correctness constraints imposed for
exception-safety are present without exceptions, too. The only
real cost is the decision on how to deal with a particular
situation to recover from a problem. However, even this
decision has to be made anyway: you need to decide whether a
problem is expected in a given context and dealt with in the
current context. On the other hand, the effect on code clarity
is tremendous.

> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.

Probably there are people who share your view but I think all
of these people had limited exposure to complex and generic
systems. Also, they may come to the same conclusions due to the
same erroneous assumptions.

Francis Glassborow

unread,
Jan 14, 2005, 9:49:27 PM1/14/05
to
In article <1105655860....@z14g2000cwz.googlegroups.com>, Emil
<em...@collectivestudios.com> writes

>Additionally, avoiding exceptions leaves you with no practical options
>for reporting failures from constructors. This essentially disables one
>of the most important features of C++, namely the guarantee that no
>object of user defined type can be used before it has been properly
>initialized.

Not only from ctors (and the two-stage process using an init function is
ill-suited for use with such things as the STL) but also with overloaded
operators. If you do not want to use exceptions you are left with a
limited subset of C++ which might be useful in some circumstances but
only where sacrificing a large part of the power of C++ is acceptable.


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Andy Robinson

unread,
Jan 14, 2005, 9:58:11 PM1/14/05
to
David Abrahams wrote:

> 1. The required analysis is the same one you'd have to perform for
> any code that can fail, whether with exceptions or status codes or
> some other mechanism, to understand its behavior in the presence of
> error
> conditions. You still need to know whether a function can fail, and
> whether, if it fails, it may have disturbed the program state.

Yes indeed. And my point is that tricky things like this are hard to
get right with exceptions because many effective return points are
hidden (the functions we call, which might throw an exception) and we
can only discover their existence by reading documentation about the
functions we call, which fallible programmers may or may not have
bothered to get right, or which may not even exist. (Or by
recursively analyzing every function called, which is not practical
on a regular basis).

If we assume that every programmer is a conscientious genius then it
doesn't matter whether we use exceptions or status codes. But if we
relax this requirement then my view is that exceptions are harder to
use correctly.


Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Francis Glassborow

unread,
Jan 14, 2005, 9:56:47 PM1/14/05
to
In article <34ng7uF...@individual.net>, Dave Moore
<dtm...@email.unc.edu> writes

>Whew ... I got a bit worked up and wrote more than I initially intended.
>Anyway, in order to successfully argue against C++ exceptions, I think you
>will have to (at least) come up with strong points that outweigh the 3
>significant benefits described above. I'm not saying that it can't be done
>(although I expect it will be very hard), just that you haven't done it yet.

Thanks for your well stated points. In addition I would add:

Exceptions separate concerns
By this I mean that the programmer can deal with normal conditions quite
separately from abnormal ones. One place where this surfaces in my work
is in teaching novices. They can learn to write code that, for example,
validates input, pre-conditions and post-conditions without creating
highly complicated source code. They can put the handling of problems to
one side to be dealt with at a more appropriate point.
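
A toy example of what I mean (ParseAge is invented; the normal-case logic
reads straight through and the handling of problems sits in one place):

#include <iostream>
#include <sstream>
#include <stdexcept>
#include <string>

int ParseAge(const std::string& text)
{
    std::istringstream in(text);
    int age;
    char extra;
    // Reject non-numeric input, trailing junk, and implausible values.
    if (!(in >> age) || (in >> extra) || age < 0 || age > 150)
        throw std::invalid_argument("not a plausible age: " + text);
    return age;
}

int main()
{
    try
    {
        int age = ParseAge("142x");
        std::cout << "age is " << age << '\n';
    }
    catch (const std::exception& e)
    {
        std::cerr << "bad input: " << e.what() << '\n';
    }
}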

This is not only a benefit to the novice but also to those using C++ for
commercial code. Reducing complexity by removing the entanglement of
normal and special case code has large pay-offs. As many of us
discovered in the 1990s, you cannot simply bolt exceptions onto existing
code; you have to design and implement from scratch, but the results are
usually much clearer and easier to maintain.

During almost a decade of presenting advanced C++ training courses I
have never met a programmer who did not find exceptions a positive
benefit. The only reservations were in areas of highly constrained
resources, but such special cases will always exist and it is part of
the skill of a programmer to know when a general technique is
inappropriate.

BTW any good programmer will have a substantial toolkit of techniques to
deal with problem situations, and exceptions is only one of those
techniques.

--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

Andy Robinson

unread,
Jan 14, 2005, 10:00:40 PM1/14/05
to
Emil wrote:

> This argument is a lot like arguing whether C++ is a good idea and
> whether it provides any _real_ benefits over using plain old C.
>
> The answer of course is that C++ simply provides tools for higher
> level of abstraction. This _can_ be beneficial, yet someone in
> particular may or may not benefit from it. Data abstraction can even
> be disastrous for some; that's why they stick with C.

A few people have implied, as you seem to be doing, that if I am
opposed to one innovation I must be opposed to all innovations. This
is not the case, I think each should be judged on its merits. I think
C++ is wonderful, I could never go back to a classless language for
anything bigger than a toy program.

> Additionally, avoiding exceptions leaves you with no practical
> options for reporting failures from constructors. This essentially
> disables one of the most important features of C++, namely the
> guarantee that no object of user defined type can be used before it
> has been properly initialized.

This I have discussed further in a couple of other posts.

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Andy Robinson

unread,
Jan 14, 2005, 10:02:15 PM1/14/05
to

>> It is pointed out that a C++ constructor cannot indicate an error
>> by returning a status code. That just means you shouldn't do
>> anything in the constructor which might fail - move such things
>> into a separate initialization function.

Jorgen Grahn wrote:
> That can be a terrible price to pay for avoiding exceptions -- you
> miss out on RAII. Let's say I have a class Foo with some complex
> state and good invariants (or whatever the name is for those
> predicates which hold for all objects of a certain kind).
>
> If the constructor can fail to bring my object to this well-defined
> state and I'm not allowed to throw an exception, I have to tell
> myself "this is either a good Foo, or a broken one" every time these
> objects appear in my
> code. Or I have to add an Init() method, and tell myself "this is
> either a good Foo, or a broken Foo, or a Foo I haven't tried
> initializing yet".

(If you're using status codes then you won't *want* to throw an
exception).

Is this really so tough? The class merely needs a member to say
whether it's been initialised or not, and the destructor will look at
it in order to know how to destroy it.

In practice you would initialise immediately after construction, and
delete it immediately if initialisation fails. So you wouldn't have a
mixture of good and broken Foos.
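
A minimal sketch of what I mean (the class and names are made up for
illustration):

#include <cstdio>

enum StatusCode { SUCCESS, OPEN_FAILED };

class Logger
{
public:
    Logger() : myFile( 0 ), myInitialised( false ) {}
    StatusCode Init( const char* name )
    {
        myFile = std::fopen( name, "a" ) ;
        if ( myFile == 0 ) return OPEN_FAILED ;
        myInitialised = true ;
        return SUCCESS ;
    }
    ~Logger()
    {
        if ( myInitialised ) std::fclose( myFile ) ;  // only tidy up a good Logger
    }
private:
    std::FILE* myFile ;
    bool       myInitialised ;
};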

Anyway this is not a fundamental problem with status codes, it's just
a question of a minor technical inelegance caused by the fact that
this particular part of C++ is designed to work best with exceptions.

--

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Daniel James

unread,
Jan 14, 2005, 10:11:30 PM1/14/05
to
In article news:<cs4nsc$274$1...@news.astound.net>, David B. Held wrote:
> Andy Robinson wrote:
> > Does anyone here argue that exceptions are usually a bad idea?
>
> Oh, you'll always find people who argue that. Especially look at
> the embedded camp.

The "embedded camp" take a lot of the blame for arguing against the use of
exceptions (and templates) ... but in my experience that blame is usually
misplaced, and where exceptions are not used it is usually because they are
not well supported (or are, perhaps mistakenly, believed not to be well
supported) by the tools being used.

In the last embedded C++ project I worked on I was told that the client had
their own application framework for development targeting their custom
hardware, and that it used neither templates nor exceptions. When I queried
this I was told that they had decided not to use these language features
for fear that the compiler they proposed to use would not implement them
well.

Some time later I spoke to one of the compiler's developers on a support
matter and chanced to ask, out of interest, whether they thought exception
support was important on embedded platforms. The reply, ironically enough,
was "Oh, yes. We think exceptions are crucial to writing robust
error-handling, which is especially important in an embedded system that
may have to run for months/years unattended. We believe our compiler
currently generates the best exception code on an embedded platform."

I agree with most of the rest of what you write (apart from the suggestion
that Ada necessarily leads to verbosity of the same order of magnitude as
COBOL's <smile>).

Daniel James | djng
Sonadata Limited, UK | at sonadata
| dot co dot uk

ka...@gabi-soft.fr

unread,
Jan 14, 2005, 10:19:28 PM1/14/05
to

I think you've chosen a very bad example. Exceptionally, you
have a case where an internal error state is necessary. Even if
you succeed in opening the file, an error can occur later which
makes the object unusable. So you need the internal state, and
you need to check it before each function anyway. Given that,
there's really no reason not to use it for the constructor as
well. There's no way that you can guarantee that all existing
instances of the class are usable objects.

And of course, typically, not being able to open a file is an
expected "error", something that should be treated locally. So
exceptions wouldn't normally be called for.

This doesn't mean that they aren't appropriate elsewhere. I
tend to group errors in three categories:

Expected errors:
This includes things like file not found for a filename
given to me by the user. Nothing exceptional, and most of
the time, something that can and should be handled
immediately.

Return codes are the preferred solution (but see below).

Critical errors:
These are things that mean I have to abandon some large
treatment. Not necessarily the entire process, but a
request on a server, or the parsing of a statement in a
compiler. A typical example would be insufficient memory or
stack because a request is too complicated (e.g. the filter
in an LDAP request nests too deeply).

This is where exceptions shine. Even if I'm abandoning the
entire process, I'll still want to clean up by
executing the destructors on the stack.

Internal errors:
These are things that simply can't happen:-). If they do,
it means that I've messed up somewhere, that there is a
mistake in my reasoning on the program. And if my reasoning
about the program was wrong, it means that I don't know what
state I'm in. Almost anything I might do (including
executing destructors when walking back that stack) becomes
dangerous.

In such cases, I get out of there as soon as possible, doing
just the vital minimum of cleaning up necessary to avoid
corrupting the system (which should be nothing on a good
system) and providing a maximum of information (core dump,
etc.) for a post mortem.

Those are my general rules. The first one, however, requires
some clarification, especially with regards to errors in
constructors. Because, of course, a constructor cannot return
an error code. Ever. Which means that you must do something
else. And all of the alternatives I can think of result in an
unusable existing object. In practice, I find that the price of
having to take unusable objects into account is a lot higher
than the price of having to write a local try/catch, and handle
the exception locally. Except, of course, when the nature of
the object is such that it can become unusable after
construction, even if the constructor succeeds, and I have to
take unusable objects into account regardless.

Similar considerations may hold for overloaded operators; I've
not got enough experience with them, however, to pontificate
about them.

And of course, these are just general considerations, and not
hard and fast rules.
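
To make the three categories concrete, a rough sketch (all the names
here are invented for the illustration):

#include <cassert>
#include <string>

enum StatusCode { SUCCESS, FILE_NOT_FOUND };

class RequestTooComplex {};     // critical: abandon the current request

StatusCode loadConfig( std::string const& name )
{
    // expected error: report it with a return code, handle it locally
    return name.empty() ? FILE_NOT_FOUND : SUCCESS ;
}

void handleRequest( int nestingDepth )
{
    assert( nestingDepth >= 0 ) ;     // internal error: can't happen, so abort
    if ( nestingDepth > 100 ) {
        throw RequestTooComplex() ;   // critical error: unwind, clean up, carry on
    }
    // ...
}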

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Dave Moore

unread,
Jan 14, 2005, 10:28:00 PM1/14/05
to

<dietma...@yahoo.com> wrote in message
news:1105640434.5...@f14g2000cwb.googlegroups.com...

> L.Suresh wrote:
> > a) I find JAVA's enforcement of checked exceptions wonderful.
>
> Thanks for pointing this out in this context! Although I whole-heartedly
> disagree with your statement, it provides another insight to me why
> exception specifications are a bad idea: it is in some sense nothing
> else than enforced checking of return codes ...

Not at all. If a function abrogates its exception specification (say
because some client code threw an exception a library wasn't designed to
deal with), this results in a call of std::unexpected(), which normally
calls std::terminate(). However, C++ also allows you to change this
behavior by using set_unexpected() to define another "handler" to be called
by std::unexpected. As with the rest of the C++ exception mechanism, this
requires some work to use properly (see TC++PL 3rd ed., section 14.6.3), but
in the end it allows you to deal flexibly with problem cases like the one
given above.
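
A rough sketch of how that plays out (simplified, and the names here are
mine rather than TC++PL's):

#include <exception>
#include <stdexcept>

void translateUnexpected()
{
    // Convert whatever was thrown into something the specification allows.
    throw std::bad_exception() ;
}

void libraryCall() throw ( std::runtime_error, std::bad_exception )
{
    throw 42 ;    // stands in for client code throwing outside the specification
}

int main()
{
    std::set_unexpected( translateUnexpected ) ;
    try {
        libraryCall() ;
    } catch ( std::bad_exception const& ) {
        // the "unknown" exception arrives here instead of terminating the program
    }
}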

> ... and return codes are a
> suboptimal approach to handle exceptional error conditions.

Agreed 8*).

>... and
> both actually fail with the same kind of problem: in code which is
> generic in some form (i.e. a user may parameterize it with some user
> defined functionality through some mechanism like callbacks, virtual
> functions, or template parameters) neither return codes nor exception
> specification can seamlessly cope with errors unknown to the author
> of the generic code. The simple issue for the exception specification
> enthusiast: what good is it that some function only propagates
> "IO Exception" if the code used at some point actually wants to channel
> a "DB Exception" through?

This case is explicitly handled in section 14.6.3 of Stroustrup's TC++PL,
3rd ed. The examples given are a bit too involved to rehash here, but they
show how one can portably and flexibly pass the "DB Exception" through so
that it reaches a point where it can be dealt with properly.

Dave Moore

Don Waugaman

unread,
Jan 14, 2005, 10:24:31 PM1/14/05
to
In article <1105661323.0...@c13g2000cwb.googlegroups.com>,
Bob Bell <bel...@pacbell.net> wrote:
>Andy Robinson wrote:

>> > Exceptions, however, automate most of this process for you,
>> > whereas status codes require a lot of manual boilerplate coding (and
>> > there is your redundancy that you were complaining about
>> > exceptions).

>> Here we disagree. It's not a lot of code, it's just a one line
>> test-and-return.

>Even if it were only a "one line test-and-return", it would be for
>every function call that can fail. That sounds like a lot of code to
>me.

>But of course it isn't one line; many functions, when detecting an
>error, must back out some work (freeing temporary allocations, closing
>files, etc.), which complicates the test-and-return code.

And, to amplify Bob's comment here, this means that to use the "one line
test-and-return" idiom, locally scoped objects must be used to manage
resources[1]. This is exactly what would be done to make a function
uphold one of the exception guarantees - which means that one of the
advantages previously cited (not having to write "exception-safe" code)
is gone.

These two points that the original web discussion made are therefore
inconsistent - they work against each other, and tend to strengthen
the argument for using exceptions rather than writing exception-free
code. After all, once you've gone through the trouble of changing
your code to write in this style, you've gone through most of the pain
involved in changing to exceptions. Why not go the rest of the way and
get the benefits?

[1] Or, of course, the resources must all be freed on one line. I don't
think anyone could possibly argue that this could be done coherently,
readably, or consistently as a one-liner (by a human) - but compilers
with assistance from RAII can do it quite well automatically.
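
To illustrate [1], a sketch of such a locally scoped resource manager in
the test-and-return style (the class and names are invented here):

#include <cstdio>

enum StatusCode { SUCCESS, OPEN_FAILED };

class FileCloser                  // closes the file when the scope is left
{
public:
    explicit FileCloser( std::FILE* f ) : myFile( f ) {}
    ~FileCloser() { if ( myFile ) std::fclose( myFile ) ; }
private:
    std::FILE* myFile ;
    FileCloser( FileCloser const& ) ;               // not copyable
    FileCloser& operator=( FileCloser const& ) ;
};

StatusCode process( const char* name )
{
    std::FILE* f = std::fopen( name, "r" ) ;
    if ( f == 0 ) return OPEN_FAILED ;              // one-line test-and-return
    FileCloser closer( f ) ;
    // ... every early return below now closes the file automatically
    return SUCCESS ;
}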
--
- Don Waugaman (d...@cs.arizona.edu) O- _|_ Will pun
Web Page: http://www.cs.arizona.edu/people/dpw/ | for food
In the Sonoran Desert, where we say: "It's a dry heat..." | <><
"No, Who's on first." "I don't know - THIRD BASE!!" -- Abbott &
Costello

Don Waugaman

unread,
Jan 14, 2005, 10:25:17 PM1/14/05
to
David Abrahams wrote:
> msalters wrote:
> > There are already
> > implementations that won't even load exception handlers
> > in RAM until needed.

> Really? Which ones? I've been talking about that optimization as a
> possibility for years, but I have yet to see it anywhere.

Exception handlers (which I take to mean code in catch blocks), I would
agree.

However, for the tables used to look up exception handling information,
and the code used to interpret those tables, you get automatic loading
"for free" when the information is stored in a different object file
section from the main program code in a system with demand-paged
executables. I believe the IA64 ABI for C++ supports this.

Even better, the exception information in that situation can be paged
out, thus such a system "automatically" allows for unloading as well
once it is no longer needed.

Andy Robinson

unread,
Jan 14, 2005, 10:30:18 PM1/14/05
to
jakacki wrote:

> In your article you write
>
> It is said ...
>
> It is claimed ...
>
> It is pointed out ...
>
> This means that you polemize with statements that you have
> read somewhere. Why don't you cite the source? If it is a
> journal article or a conference paper, then it is likely
> that the author presents analysis to prove his/her
> conclusions. Without showing exactly where these analyses
> are wrong, you are only glazing over details.
>
>> If there are arguments in favour of using exceptions
>> which I have failed to consider, I'll be grateful if
>> you would mention them.
>
> Well, go to all these papers that you have hidden behind
> "It is said...", "It is claimed...", "It is pointed out..."
> and address concrete claims made there. If you cite them
> explicitly, it would be verifiable if and how you
> misinterpret them. Moreover, you would give a chance to
> their authors to defend the views and/or explain them
> better.

Well, we have plenty of people defending the pro-exception view
already. I don't think this is the time for me to look for more!


Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Andy Robinson

unread,
Jan 14, 2005, 10:28:27 PM1/14/05
to
David B. Held wrote:

> As far as exceptions not happening any sooner than errors reported
> by return codes, consider this scenario: Code allocates a buffer.
> Allocator fails, but because the buffer isn't needed right away,
> no attempt is made to write to it. In the exceptional case, the
> allocator throws as soon as the program can know that there is a
> failure. In the status case, the program does not fail until there
> is an attempt to write to the buffer. Consider another scenario:
> Program opens a file that will be read to/written from frequently
> later on (so opening/closing it is not efficient). If the file
> stream c'tor were to throw an exception on a file-does-not-exist
> situation, then the program would know right away that there is a
> problem. With status codes, the program could run for quite a while
> before needing to access the file and finding out that it isn't
> available. In fact, one could argue that it is a shortcoming of
> the C++ iostreams library that streams do *not* throw exceptions.
> Instead, stream operations silently fail, and it can be frustrating
> to determine whether your program is failing or not if you forgot
> to check the stream state (status code) somewhere.

I have no argument with this, I just think it's a bit peripheral. It's
all based on the idea that we wait until a bug happens before we fix
it, which I think is an approach that is guaranteed to produce buggy
programs. What we really need are programming practices that help us
not to write bugs in the first place, and debugging strategies that
are based not on using a debugger, but on examining the source code.


>> This has not been my experience. I think almost everyone who has
>> used them would agree that classes have enormous value in making
>> programs easier to read, write and understand, and thus more
>> reliable. My experiences with exceptions have not been so pleasant.
>
> Perhaps that's because when you started to use classes they were
> well-established and had a common set of idioms; but when you started
> to use exceptions, the state-of-the-art was not so mature. The fact
> is, there are thousands of programmers who have learned to find the
> utility in exceptions that you have found in classes.

Who knows, maybe this will happen to me too. I don't see any sign of
it yet!


>> No, in my view an operator should not be used to implement
>> something which may need to indicate that it has failed. (unless it
>> can indicate this by its result, such as the "not-a-number"
>> convention for doubles).
>
> Which means that you pretty much think the C standard library is an
> abomination. I always thought it was ridiculous that atoi(),
> strtol(),
> etc. returned "special" values on error. In fact, they are only
> special if you remembered to check errno; and, in fact, check it
> twice. Because it's entirely possible that errno is reflecting the
> result of a previous call.
>
> I agree that the error-handling in the C library is primitive and
> suboptimal, but not because it uses functions the way functions
> were designed to be used. It's primitive because it does not have
> the luxury of throwing exceptions.

I would say that if we want atoi to detect errors (e.g. overflow) then
its prototype in an ideal world would be:
StatusCode atoi(int *retval, const char *nptr);
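
For what it's worth, a sketch of how such a function might look (the
names are illustrative only, not a proposal for the library):

#include <cerrno>
#include <climits>
#include <cstdlib>

enum StatusCode { SUCCESS, BAD_INPUT, OUT_OF_RANGE };

StatusCode atoiChecked( int* retval, const char* nptr )
{
    char* end = 0 ;
    errno = 0 ;
    long value = std::strtol( nptr, &end, 10 ) ;
    if ( end == nptr ) return BAD_INPUT ;                    // no digits found
    if ( errno == ERANGE || value > INT_MAX || value < INT_MIN )
        return OUT_OF_RANGE ;                                // won't fit in an int
    *retval = static_cast<int>( value ) ;
    return SUCCESS ;
}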

I didn't mean that functions/operators *should* return special values
for failure. Only that it can sometimes be useful, such as the NaN
convention.


>> You argue that with status codes, we are at the mercy of other
>> people (e.g. 3rd party libraries) to propagate them and generally
>> use them correctly. This is certainly true. And if we use
>> exceptions then we depend on 3rd party libraries to handle them
>> correctly too (to be "exception safe"). Which is considerably
>> harder, in my view.
>
> Well, we certainly want people to write libraries correctly either
> way. But if a library author fails to handle a particular error that
> is signaled by a status code, there is *no* way for us to know that.
> Whereas, if a library author fails to handle a particular exception,
> we will eventually find out, and can make sure that our code behaves
> gracefully anyway. Remember that writing exception-safe code just
> means writing error-safe code. If it is difficult to write
> exception-safe code, then it is EQUALLY difficult to write
> error-safe code that uses status codes, because *it is the SAME TASK*.
> When you say that
> it is "considerably harder" to write exception-safe code, you are
> really just saying: "Coding was so much easier when I could ignore
> status codes."

No, I never ignored status codes unless I had considered the meaning
and decided that there were good reasons for not caring, in
which case I comment the reason.

> If I challenged you to prove that all of your code provides at least
> the basic guarantee using the status code idiom, you would have just
> as many headaches as if the code used exceptions. In fact, you
> would have more, because you would have to write by hand all of the
> error propagation that is automated by exceptions.

I do agree about it being the same task, and my point is that it is a
task more easily accomplished when we can see the potential return
points all identified with "return" statements, and can see from a
function prototype whether it returns a status code.

I don't think many people attempt formal proof of such things. What we
do is examine the code and think "what if..." at each line. My claim
is that exceptions make this harder.


You make various points about the problem of handling errors in ctors.
My view is that it would be nice if there was a clean way to return
multiple values from a function, in which case ctors would return a
status code, and "new" would return a valid pointer and a NO_ERROR
status code, or a null pointer and error status. As it is, if we
prefer not to use exceptions then we must work around this. But it's
hardly a major problem.
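
One workable workaround, for what it's worth (the names are invented,
and this is just one way of doing it):

#include <new>

enum StatusCode { SUCCESS, OUT_OF_MEMORY };

class Widget
{
public:
    static StatusCode create( Widget** result )
    {
        Widget* p = new ( std::nothrow ) Widget ;
        if ( p == 0 ) return OUT_OF_MEMORY ;
        StatusCode status = p->init() ;
        if ( status != SUCCESS ) { delete p ; return status ; }
        *result = p ;
        return SUCCESS ;
    }
private:
    Widget() {}
    StatusCode init() { return SUCCESS ; }    // the part that might fail
};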


> void TransferMoney(Account& From, Account& To)
> {
>     Account Temp(From);
>     Temp.debit(100);
>     To.credit(100);
>     swap(Temp, From);
> }

I agree that this is gorgeous. But that's partly because it's a
carefully-chosen example. You observe that it may not be any good if
copying an Account is too expensive. It also depends on the idea that
the correct response to an exception is to put everything back the
way it was, which is correct in this case, but not always. Member
functions need to manipulate their own state and the response to an
error may not be to restore the original state. So we end up either
using lots of try/catch blocks, or writing special classes to set
state appropriately in their dtor, if an exception is thrown. And the
thing is, we can't (easily) write a program which only uses
exceptions in places where they shine: as soon as we start using
them, they start throwing themselves all over the place!

> int TransferMoney(Account& From, Account& To)
> {
>     Account Temp(From);
>     if (!Temp.initialized()) return COPY_FAILED;
>     int status;
>     if ((status = Temp.debit(100)) != SUCCESS) return status;
>     if ((status = To.credit(100)) != SUCCESS) return status;
>     swap(Temp, From);
>     return SUCCESS;
> }

I actually like this too. It is longer, but it makes it clear where
the return points are and it makes it clear that the programmer is
aware of them. In the exception-based version, a casual reader might
wonder whether the programmer had really thought about these things.
This version makes it clear that "swap" is expected to succeed
unconditionally, which is not clear in the exception-based version.
And we can easily confirm that this assumption is correct (or at
least, that it agrees with what the writer of "swap" thought) by
looking at the prototype to confirm that its return type is not
"StatusCode".


> I'm glad you made that point. You may think that exceptions are like
> goto's, and status codes are like blocks. In the physical sense, there
> may be some truth to that. However, I would argue that with respect
> to maturity, status codes are like gotos and exceptions are like
> blocks.
> Status codes are primitive, unstructured, and weak. Exceptions are
> mature, rich, and powerful. In fact, it is fairly difficult to
> write error-safe code with status codes, because the status codes
> themselves introduce a lot of clutter that obscure the intent of the
> code.

I do think exceptions are very like long-distance goto's. And I agree
that status codes are immature. But this is just a consequence of
more work having been put into exceptions.


> Returning to our example:
>
> Account Temp(From);
> if (!Temp->initialized()) return COPY_FAILED;
>
> This line should be required after every copy that could fail, which
> means that it should be automated. The fact that you must write it
> manually means that it is a potential source of omission errors. A
> vast, deep source of such errors.

Here we disagree. I would say that using status codes means always
checking them, unless occasionally writing a comment instead to say
why it's correct not to. And the fact that you must write it manually
is a valuable and visible reminder that the called function may fail.
This helps to write bug-free code.


> int status;
>
> Here's that status object that we always have to allocate somewhere.
> Here it's easy to allocate because it's small and stack-based. But
> what if we wanted to return an error message saying how the operation
> failed? Well then we'd be talking about a lot more complexity,
> wouldn't we?

No more complicated than an exception object.


> if ((status = Temp->debit(100)) != SUCCESS) return status;
>
> The original line here was a mere 17 characters long. It has now
> blossomed to over 50 characters!! It's way over twice the size!

I agree that the error checking adds to the code. But the thing is,
you can't write correct code without thinking about the possible
errors, so it is good that the code should show these thoughts
instead of hiding them.


>> But I don't see any advantage to doing things this way. If the
>> File::Open(...) function returns a status code then this is easily
>> documented, written, and used. No judgement decisions (about
>> whether and when to use exceptions) have to be made, implemented,
>> or documented. It's easier for everyone.
>
> Is it? In fact, this is exactly why error codes are ignored.
> "Well, in this case, it's not an error for File::Open() to fail, so
> I don't need to check the return value..." Uh...right. More
> often than not, that's merely a cover-up for laziness.

A wilfully careless programmer is not going to write good code no
matter what tools they use.


>>>Exceptions, however, automate most of this process for you,
>>>whereas status codes require a lot of manual boilerplate coding
>>>(and there is your redundancy that you were complaining about
>>>exceptions).
>>
>> Here we disagree. It's not a lot of code, it's just a one line
>> test-and-return. And the big advantage of this, as I keep saying,
>> is that it makes the posssible return points visible, which makes
>> it vastly easier to keep in mind the alternative execution paths.
>
> The fact is, you can't see all the possible execution paths through a
> program with only a quick glance. The compiler is quite free to
> reorder instructions at several levels, and remove some code
> entirely. So this illusion of being able to precisely trace the flow
> of execution is something of a quaint but outdated fairy tale. Anyone
> who has tried to write multi-threaded code is even more acutely aware
> of the fragility of this illusion.

I don't think you're being terribly serious here. AFAIK there are some
places where the order is undefined, and in other places the compiler
can only reorder things if the effect is "as if" the original order
was used. I don't see the relevance.


> I think my illustration above gives some idea of the clarity cost of
> status code handling and the obvious benefit of exception
> mechanisms. The fact is, error-safe code self-documents the possible
> return paths for you.

So, where in the exception based TransferMoney is this documentation?

> Irreversible operations are only performed on temporaries
> or at the end of a sequence (to spell it out: irreversible operations
> may be an exit point). Sequential operations on parameters or
> globals must be no-fail (they are not exit points) or no-effect
> (possible exit point). Those two rules alone tell you almost everything
> you need to know about both the error safety of the code and the
> possible execution paths.

These are perfectly good rules at least for some types of functions.
But exceptions make it harder to tell, just by looking at the code,
whether the rules are in fact being obeyed by the code.


> Destructor invocation is not explicit, yet you insist that's a good
> thing. However, all kinds of nasty things can happen if you don't use
> d'tors correctly. Writing correct d'tors is no more difficult than
> writing correct error-safe code. You just follow a few idioms and
> don't do anything dangerous.

I mentioned earlier that one problem with exceptions is that as soon
as you start using them, they get everywhere. It's their non-local
nature that I don't like. On the other hand I like the idiom of using
stack-based objects so their dtors will called when the block is
exited. It's useful and does not have implications outside the scope
of the enclosing block, which makes it far safer.


Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Andy Robinson

unread,
Jan 14, 2005, 10:29:33 PM1/14/05
to
Bob Bell wrote:

> What about opening a file in the constructor of an object? It is
> very common (in my code, at least) to use constructors and
> destructors to manage resources like files (RAII). Constructors
> cannot return status codes. How do you propose to report failures in
> constructors without exceptions?

> Even if it were only a "one line test-and-return", it would be for
> every function call that can fail. That sounds like a lot of code to
> me.

These two, I've talked about in an earlier post.


> But of course it isn't one line; many functions, when detecting an
> error, must back out some work (freeing temporary allocations,
> closing files, etc.), which complicates the test-and-return code.

In which case the exception-based version would have to catch the
exception and perform the same back-out. No simpler.


> Here you illustrate one of the most important points in favor of
> exceptions. The point at which errors are detected can be quite
> distant (in terms of function call depth) from the point at which it
> can be handled. With exceptions no matter how distant it is, the
> error information is guaranteed to be transmitted. With status
> codes, the programmer must build the entire transmission mechanism,
> a tedious, repetitive, error-prone process.

My argument is that it is good that such transmission should be
visible.

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

ka...@gabi-soft.fr

unread,
Jan 14, 2005, 10:34:19 PM1/14/05
to
L.Suresh wrote:
> Here's a list of my views.

> > I speak here of C++ though of course the issues apply to all
> > languages supporting exceptions

> So you will find me referring to JAVA occasionally.

> a) I find JAVA's enforcement of checked exceptions
> wonderful. It forces you to handle all the checked exceptions,
> that way you can be sure that you have handled all exceptional
> paths. Sure, you can subvert these mechanisms and write
> catch-all blocks to suppress the exceptions. But, that's bad
> programming.

Then, presumably, you find JAVA's lack of enforcement of
unchecked exceptions bad.

I just posted an article in which I explained the cases where I
think exceptions are justified. In practice, with the exception
of constructors (and probably overloaded operators -- I've not
enough experience with them to be sure), exceptions mean that
you are about to abort some large functional block. What is
being thrown is completely irrelevant to the calling function,
and to any number of functions above it. (In many cases, it is
completely irrelevant, period. Except maybe for logging error
messages.) In such cases, all that is needed is to know that
the function might throw. What is thrown isn't important.

In Java, of course, most of the checked exceptions are really
things that should be return codes. Checked exceptions are
an important means of working around this defect in the
language.

(IMHO, both Java and C++ suffer from the fact that you can
silently ignore a return value. But that's an entirely
different issue. And of course, in C++, if the return value is
in fact a return code, it is usual to use a class which triggers
a failed assertion if the destructor is called without the value
having been read, so that in the most important case, you can
insist.)
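
For the record, a bare-bones sketch of the sort of class I mean (this
is just an illustration, not code from any particular project):

#include <cassert>

class ReturnCode
{
public:
    explicit ReturnCode( int code ) : myCode( code ), myRead( false ) {}
    ReturnCode( ReturnCode const& other )
        : myCode( other.myCode ), myRead( false )
    {
        other.myRead = true ;              // responsibility moves to the copy
    }
    ~ReturnCode() { assert( myRead ) ; }   // blow up if nobody ever looked
    int code() const { myRead = true ; return myCode ; }
private:
    int          myCode ;
    mutable bool myRead ;
};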

And of course, Java also uses exceptions for internal errors.
I'm not quite sure how you are supposed to handle and continue
from something like VirtualMachineError. (To be fair to Java,
the documentation does say that it "that indicates serious
problems that a reasonable application should not try to catch."
Hard to figure out why they made it an exception, then. And of
course, even if you do not catch it, finally clauses will still
be executed. With who knows what results, since the VM isn't
working correctly.)

> An example would be, the checkError() method of
> java.io.PrintStream. A common pitfall for the novice would be
> to ignore to call the method to check if the underlying stream
> has thrown an IOException. (PrintStream is known not to emit
> any exceptions and set the error state internally.)

But this is a purely Java problem, since you can't design a
return code which asserts that it has been read in the absence
of deterministic destructors.

> OTOH, if the exception had been thrown, it would have forced
> the caller to handle it as he sees fit. Here you get all the
> help you from the compiler. Some say that this will lull you
> into a false sense of security. But when properly used you
> can be sure (from the help you get from the compiler) that all
> exceptional paths are taken care.

? I don't see how. How can using checked exceptions ensure
that you've actually done the right thing and tested it in the
error cases? Probably THE most common error in program
development is to fail to test the error cases. (The argument
is irrelevant to the question of exceptions or not. Regardless
of how errors are reported, not testing the error cases is an
error in the development process.)

> b) C++ and JAVA differ in how they treat exception
> specifications.

> int f(); //#1

> in C++ can throw any exception , whereas in JAVA its a
> no-throw guarantee.

No it's not. In Java, it's a guarantee that the program won't
throw one of a small set of exceptions. But it can still throw
most of the exceptions one sees in practice.

> I feel that JAVA has an edge over C++ in
> enforcement of exceptions. In C++ you can write code such as,

> int f() throw() {
> throw 1; // Flagrant violation, but the compiler lets it go...
> }

> The reason given by Stroustrup for #1 throwing any exceptions
> is that it would require exception specification for virtually
> every function.

Which is also why Java only does it for the error conditions
which you normally have to treat immediately. Where the
alternative in the calling code is really:

if ( someFunction() ) {
    // handle error...
}

vs.:

try {
    someFunction() ;
} catch ( ErrorType error ) {
    // ...
}

The Java solution has the advantage of compile time verification
that the test is made, rather than runtime verification, and the
possibility of transmitting a lot of information in the error
case without adding to the run-time case of the non-error case.
The C++ solution has the advantage that it is a lot more
readable.

(Let me clarify that last statement. IMHO, an if which
immediately handles the error is a lot more readable than a try
catch block. In the cases where I think exceptions are
appropriate, the comparison is between ONE try/catch block and
tens, if not hundreds of if's, most of which only serve to
propagate an error which the caller isn't interested in. Which
is, of course, a different issue.)

> f) Throwing exceptions help you to map different status codes to
> different types of exceptions. Instead of code that does,

> if (status_code == ...) {
>     // do this
> } else if (status_code == ...) {
>     // do that
> }

> You can write the error-handling code in different handlers.

Do you really see that much of a difference between:

// ...
} catch ( Type1 const& x1 ) {
    // ...
} catch ( Type2 const& x2 ) {
    // ...
} // ...

and:

switch ( error.typeCode() ) {
case Error1:
    // ...
    break ;

case Error2:
    // ...
    break ;
    // ...
}

(OK. Switch is broken in C/C++. Catch isn't. But we're talking
here about code which doesn't occur but in a very few places in
the system.)

> This lends clarity to the program. And the handlers
> can handle even derived class thrown as exceptions.

> Also several exception status can be grouped into a single
> exceptions if the caller cannot differentiate between
> status_code_1 and status_code_2.

If they don't have a common base class, that is easier handled
with the switch.

(Just for the record, I prefer the way catch works to the
switch. But I don't think the difference that important.)

> g) Exception handler gives clarity to the code, without
> messing the usual path with exceptional path. As you had said
> the line between usual and exceptional path may be thin. But
> then it calls for a judgement on the part of the programmer to
> decide.

The clarity is relative, since it comes at the cost of adding
invisible control flow to the code. In the end, I think it
depends on how you reason about the program. If you reason
procedurally, the invisible control flow can be very expensive
in terms of clarity. If you reason in terms of state, it plays
a much smaller role.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

unread,
Jan 14, 2005, 10:43:42 PM1/14/05
to
Walter wrote:
> "Andy Robinson" <an...@seventhstring.com> wrote in message
> news:cs3bs5$dqp$1$8300...@news.demon.co.uk...

> > Does anyone here argue that exceptions are usually a bad idea?

> > I apologise if this subject has been done to death already.

> > I have placed a rant called "Exception-Free Programming" at
> > http://www.seventhstring.com/resources/exceptionfree.html
> > and I'll be interested to know what you think.

> > I realise that I'm putting my head into the lion's
> > mouth. But I also think there must be others out there who
> > share my views.

> I used to agree with your point of view on this. But with
> time and experience, I changed my mind for two reasons:

> 1) Programs that don't use exceptions to report errors tend to
> have serious bugs in them for the following reason -
> programmers forget to check the error codes, or check them
> incompletely. The resulting bugs rarely show up in testing;
> they show up on the customer's machine when they are the most
> expensive to fix. Exceptions cannot be ignored by omission,
> the programmer has to deliberately write code to catch and
> ignore it. There's no blithely going on assuming that the
> previous operation succeeded.

I have a hard time understanding this. If you test the error
case, your test case fails if you forgot to check a return
code. And if you don't test the error case, you won't notice
that you've forgotten the catch block either.

In most of the applications I worked on before exceptions, our
ReturnCode type would trigger an assertion violation if it
hadn't been read before destruction, so you'd catch the error
even if you didn't test the error cases. (Of course, companies
which insist on such behavior in the return codes typically do
test error cases:-).)

> 2) Most (nearly all) of the problems associated with writing
> exception safe code revolve around memory leaks.

I've not found that to be true either. At least not
exclusively. Exception safety means maintaining transactional
integrity, at least at the highest level.

A more accurate argument would be that most of the problems
associated with writing exception safe code are also there if
you use return codes.

[...]

> P.S. The question arises in your article and by many people
> just when should something be considered a natural return
> value and when should it be considered an error? The answer
> is it depends on the purpose of the function being written.
> If its stated purpose is to open a file for reading, such as
> ReadFile(), then it should throw an exception if the file
> cannot be opened because the file doesn't exist.

That's debatable. There's nothing exceptional about a user
making a mistake when entering a filename, and many programs
will be able to recover from it locally. A lot depends on the
context, but when writing a function where the context isn't
necessarily clear, I prefer a return code, on the grounds that
something like:

ifstream f( filename ) ;
if ( ! f ) {
    throw "We really need that file" ;
}

is a lot simpler than:

ifstream f ;
try {
    f.open( filename ) ;
} catch ( runtime_error& error ) {
    // set things up to retry and loop on the problem...
}

That is, it is simpler to map a return code into an exception,
than it is to use an exception when what you really need is a
return code.

> Correspondingly, a function named DoesFileExist() should not
> throw if the file doesn't exist, because asking a question is
> part of the natural flow of the program, and not an error.

And isn't trying to open a non-existent file part of the natural
flow of the program as well? Something that you sort of expect
to happen from time to time, even with reasonable users.

Andy Robinson

unread,
Jan 14, 2005, 10:42:55 PM1/14/05
to
wka...@yahoo.com wrote:

> If you're one of the lucky people who can count on the fingers of
> one hand the number of times you've forgotten to free a resource at
> an early return, I can see where you might feel that freeing
> resources in destructors only is unnecessary overhead.

I don't think I said that did I? I hope not.

> For the rest of us, it's a
> good defensive programming habit, even if we don't use exceptions.

Me too, it's very useful.

> Is it really a big burden to use iostreams instead of cstdio
> streams, auto_ptr for individual heap objects, and the vector
> template for heap
> arrays? For me, these substitutions cover most of the cases where I
> need to worry about exception or multiple-return-point safety.

Yes, I too use these things (well, I don't use iostreams or cstdio
much).


> Would you require the types passed to STL containers all have the
> member function 'int init(void)'? That kills the idea of using primitive
> types in container templates, or minimally forces the use of a
> traits template.

I use STL containers but only in fairly simple ways. I regret I don't
know enough to discuss this.


> Suppose you had a protected member function that called a virtual
> member function. Suppose further that, in some derived class, the
> override of the virtual function set a derived class member
> variable, and this value was used by the derived class member
> function that
> called the base class protected function. Would you see this as bad
> style or an example of the flexibility and power of the virtual
> member
> function capability? To me, this is analogous to using an
> exception. Sometimes it's desirable for "non-adjacent" layers in the
> code to interact in ways that are hidden from the intermediate
> layers.

I would judge it on its merits in the circumstances. I'm a practical
person. Any piece of code which is easy to understand and easy to
ensure that it is correct, is fine with me.


Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

L.Suresh

unread,
Jan 14, 2005, 10:53:44 PM1/14/05
to
I look at exception specification as a contract.

If funcA() calls funcB(), the caller needs to know the following when
funcB() encounters exceptional situations.

a) How will funcB() signal what has gone wrong?

By looking at the exception specification it can be understood that an
exceptional condition may be reported through the list of exceptions
specified. (Let's not discuss returning error codes as a means of
signalling what has gone wrong with the callee)
So, exception specification is a contract that guarantees to inform the
caller in a specific way. It's the language that has to enforce that
this is done properly.

b) In what ways can funcB() go wrong?

The author of funcB() has to consider these:
i) Is this situation an exceptional one or is it a normal one?
ii) If exceptional situation A has occurred, does the actual
information need to be passed on to the caller? If the actual
information is passed, will it be of any use to the
caller?

For example: funcB() might be a remote proxy, and let's say it
encountered a network error. funcA() will be baffled to hear about
network exceptions when it thinks that funcB() is doing
local work. In this case funcB() had better translate it into
some other exception so that the caller can understand that some
irrecoverable exceptional condition has occurred that
cannot be handled. funcA() then proceeds to do some other meaningful
work.

Without specifying the exceptions in the interface, how does
funcA() know how funcB() will report errors? When I moved
from JAVA to C++ I was scared to look at functions without
any specifications. They might throw anything. What
exceptions can be thrown? How am I supposed to handle them? Can
I recover from the exception? Nothing is known
by looking at the function.

iii) If funcB() is a template it can advertise the exception using
a template parameter.

template <typename T, typename E>
void func() throw (E)
{
}

Here, the instantiator of the function tunes this function to
his requirements and advertises the exception based on
the behaviour of T. Since the instantiator of this function
knows about T, he can provide E as well.

iv) If funcB() is a virtual function, I'm not sure what the
problem you mention is. If funcB() throws IOException then
the method that overrides it should have an exception
specification that is as restrictive as funcB()'s or more restrictive.
funcB_overridden() shoud throw IOException / SocketException
(which is derived from IOException)
If the caller knows about the dynamic-type of the object that
has funcB() he can handle the SocketException, or if he is
manipulating the object through a base-class pointer/ref, then
he should be contented with handling IOException.

Now, if the function wants to channel a DBException, derive it
from IOException. The caller can handle DBException /
IOException as appropriate. Or as you said, you can use
exception chaining. Exception chaining is a remarkable
tool for printing stack traces in a remote call. Exception
chaining can be used as a debug tool or to get the underlying
error while writing generic code. Well, if you can't do both
then there is a problem in deciding an exception specification as
IOException.
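
A tiny sketch of the kind of hierarchy I mean (the names are
illustrative only):

#include <exception>

class IOException     : public std::exception {} ;
class SocketException : public IOException {} ;
class DBException     : public IOException {} ;   // channelled as an IOException

struct Reader
{
    virtual ~Reader() {}
    virtual int read() throw ( IOException ) = 0 ;
};

struct SocketReader : Reader
{
    // The override's specification is at least as restrictive as the base's.
    virtual int read() throw ( SocketException ) { return 0 ; }
};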

[ -- Digression on how JAVA handles generic stuff

For example, java.lang.reflect.Proxy acts as a proxy for
interfaces. It provides the same interface as the object it proxies
and dispatches request to the actual object. The invoke method
in Proxy has this signature.

public Object invoke(Object proxy, Method method, Object[] args)
throws Throwable;

As an author of Proxy, the exceptions that will be raised on
calling the method cannot be known. The exception that is
thrown is what is advertised in the "Method". Here, JAVA has an
advantage that all exceptions must be derived from
Throwable.
]

c) What is the state of the system/object in the presence of an exception?
That is, its exception guarantee: it should fail fast or be in a stable
state after the function has completed.

d) Can the caller do something useful with the error information?

My experience with JAVA and C++ is that JAVA enforces exceptions in a
way that forces you to handle them, while that doesn't happen in C++.
Yes, you may say that it's bad programming in C++. As a beginner in
JAVA I learnt to design for and handle exceptions much earlier than I
did as a beginner in C++ :)

--lsu

Andy Robinson

unread,
Jan 14, 2005, 10:50:47 PM1/14/05
to
dietma...@yahoo.com wrote:

> Exception-safety vs. clean-up in "Exception-Free" Programming:
>
> The key to exception-safety is cleaning up objects when leaving
> their local context. This is in no way different for any other
> form of error handling. If you have correct code if no
> exception is thrown it is a trivial transformation to make the
> code also exception-safe: all you need to do is wrapping up
> explicit clean-up code in function with a try/catch block. Of
> course, such clean-up tends to be error-prone anyway
> (independent of use of exceptions) and is best handled with
> RAII idioms anyway.

You really feel that exception-safety can be achieved with a trivial
transformation?

I'm skipping some things which are already being discussed in other
branches (fibers?) of this thread.


> Return values vs. error codes

...


> It also inhibits function chaining
> (i.e. calling a function with the result of another function)

Yes, but on the other hand I'm not sure that function chaining where
exceptions may be thrown is such a good idea. Suppose a function call has
multiple arguments, one of which is a call to a function which may throw an
exception. Because the order of evaluation of function arguments is
undefined, we can't tell which other arguments get evaluated if an
exception is thrown. Yes, it may not matter. But I don't see any harm
in separating the calls so we can see the ordering.
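
For example (g and h here are just stand-ins for functions which might
throw):

int g() { return 1 ; }      // imagine this might throw
int h() { return 2 ; }      // and this too
void f( int, int ) {}

void chained()
{
    f( g(), h() ) ;   // the order in which g() and h() run is unspecified
}

void separated()
{
    int a = g() ;     // explicit ordering; each line is a visible
    int b = h() ;     // potential exit point if it throws
    f( a, b ) ;
}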


> You seem to imply in your document that exception handling
> effectively does the same as if-statements coming with each
> function call. This is actually incorrect: good exception
> handling approaches don't burden the normal program flow at
> all.

That's good to know, thank you. Clearly this could be an argument for
using them in, say, a heavy number-crunching loop, though there would
likely be other solutions too.


> Encapsulation vs. return codes
>
> In your document you claim that explicit checking makes it
> visible which operations can fail. This is, however, the
> wrong place to document this knowledge: the client should be
> as ignorant about implementation details of called functions
> as possible since a change in the function's implementation
> might cause the "knowledge" about the function to become
> wrong. This effectively means that the caller of a function
> should always assume that the function may fail. Thus, each
> and every use of a function is already a visible indication
> of a point of failure.

That's brave! It also disagrees with what David B. Held says about the
"swap" function, for instance.

Andy Robinson, Seventh String Software, www.seventhstring.com

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

ka...@gabi-soft.fr

unread,
Jan 14, 2005, 11:09:13 PM1/14/05
to
msalters wrote:

> Andy Robinson wrote:
> > Does anyone here argue that exceptions are usually a bad
> > idea?

> We sometimes see that opinion. The mods keep out most of the
> trolls, so it's much more common in clc++.

The mods have never rejected a posting because it presented a
similar argument.

> > I have placed a rant called "Exception-Free Programming" at
> > http://www.seventhstring.com/resources/exceptionfree.html
> > and I'll be interested to know what you think.

> I think you managed to miss quite a few important points, are
> not familiar with all the possible ways to implement
> exceptions, and manage to get some facts wrong on other points
> as well.

> In detail:

> "We all know you can do everything which exceptions do by
> using status codes as return values of functions". Wrong; an
> int has a limited domain whereas there is an infinite set of
> objects, with an infinite hierarchy. When passing an exception
> through a callback, you have more control about which
> exceptions are filtered (from both sides); filtering int error
> code is a nightmare.

It's pretty rare for return codes to be an int. On every
project I've worked on where we used return codes, they were a
class type. How else do you get an assertion failure if they
aren't read?

> "It is claimed that exceptions allow the separation of normal
> code from error handling code. But it's no problem to get the
> same effect with status codes."

> Wrong. /You/ may be able to separate them. The maintenance
> programmer, perhaps. The compiler, not. There are already
> implementations that won't even load exception handlers in RAM
> until needed.

I think you're confusing things. There are (a very few)
compilers which split the tables needed for stack walkback into
a separate segment, so that it is isolated addresswise from the
rest of the function. I know of none that put the catch block
elsewhere but in the function where it occurs.

> Most implementations keep them out of the cache.; Your if( )
> branch is just that, and often both branches will end up in
> cache. How should the compiler know 0 is an error? Or 1 is
> the error, and 0 is OK?

Why should it care? It optimizes for whatever branch is taken
the most often. (Obviously, if you feed it profiler data where
the error branch occurs more often than the usual case, the
compiler will optimize the error case. But that's your fault,
not the compilers, and that has nothing to do with exceptions or
not.)

There are aspects of exceptions which help optimizations, and
there are a few which hurt. There is also the fact that
optimization of return codes is a more mature technology. But
frankly, I've yet to see a single concrete case where the
difference was worth modifying a basic design decision (either
way) to get.

With regards to the comment you were responding to, of course,
the question is sometimes just how much you want to separate
them. Which in turn depends on the type of error -- sometimes,
too much separation is bad (you've lost context needed to
react), and other times, not enough separation forces the
programmer to consider details that are better ignored for the
moment.

> You need a lot of work, and PGO, and some luck to get an
> approximation.

> "Neglecting the possibility of an exception at some point in
> your function means that if it happens, your function may well
> leave things in an inconsistent state." This is technically
> true, since you write "may". Of course, if your function uses
> RAII style objects, the dtors will clean things up.

Of course, the only difference is that with exceptions, you
must use RAII, appropriate or not. With return codes, it's one
of your options, but not the only one. (In practice, it's
exceedingly rare that RAII isn't the preferred option,
regardless of the error handling mechanism, so I don't really
consider this an argument against exceptions.)

> But, for that to work, you need matching ctors, and those
> might need exceptions.

> "I have seen it claimed (in the C++ FAQ Lite) that the
> if-statement which tests a return code increases the software
> development burden, because both branches of the if need to be
> tested. I find this utterly bizarre. Clearly we need to test
> error handling whether by status codes or by exceptions."

> Again, an obvious truth which is false once you look at the
> /real/ problem. Both if()s and exceptions add paths through a
> function. However, with exceptions these paths add up (either
> nothing, or A, or B, or C is thrown) while if()s multiply (
> if(A),if(B) and if(C) can each be true or false). Clearly,
> shorter functions suffer less. But as you admit, with
> exceptions functions are shorter to start with.

Regardless. You have to test all possible paths. But only the
possible ones. And in practice, exceptions or return codes make
no difference here -- the bounding factor is the number of
different error conditions you have to check.

> "If an operator needs a way of reporting that it has failed
> then you should implement it as a function returning a status
> code instead". Tricky, how do you think that should work with
> all the STL algorithms?

They shouldn't use overloaded operators:-)?

> Exceptions propagate automatically, whether from operators or
> functions, but how do you propagate an unknown error code?

Seriously, this is a strong argument. For a very small subset
of error conditions -- I don't think I've ever thrown anything
but a bad_alloc from an object which would go into an STL
container.

> "The title of this article is intended to suggest the idea
> that the best way of making your programs exception-safe is to
> make them exception-free." But you don't tell us how to make
> programs return-value-safe.

> "The idea of functions returning values to indicate what they
> did, will always be present in your programs even if you also
> use exceptions. This makes exceptions redundant"

> No, it doesn't. My functions return values from the operation
> result domain, not what or how they did it.

So you don't use iostream? Or Fallible objects?

> If the operation does not happen, no value from the result
> domain exists. It's not uncommon for the return type to have
> no default constructor, so what do I return? And even if it
> does, how can you distinguish between say an empty string or a
> failed operation?

By using Fallible, obviously. Nothing new here -- we were doing
it long before there were exceptions, and it is still the
appropriate solution for some things.

For better or for worse, all possible errors don't fit under the
same hat.
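
For readers who haven't met it, a rough sketch of the Fallible idea
being referred to (hypothetical code, not any particular project's
class): the return type itself carries an "is there a value at all?"
flag, so an empty string and a failed operation stay distinguishable.
Note that this simple version still needs T to be default
constructible; real implementations relax that.

#include <cassert>

template <typename T>
class Fallible
{
public:
    Fallible() : valid_(false), value_() {}                     // failure
    explicit Fallible(T const& v) : valid_(true), value_(v) {}  // success

    bool isValid() const { return valid_; }

    // Reading the value of a failed result is a programming error.
    T const& value() const { assert(valid_); return value_; }

private:
    bool valid_;
    T    value_;
};

// e.g.  Fallible<long> toNumber(char const* s);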

> "When exceptions are being propagated then [a dtor] is the
> only way of doing such tidying up. This can be a pain,
> forcing you to create special classes whose only reason for
> existence is to accomplish some specialised tidy-up in their
> destructors." No. Google for ScopeGuard. You need only one
> class, and it exists today.

Bof. Actually, he has a sort of a point, and it is one of the
reasons why I'm interested in the lambda classes. I do use a
lot of local classes for this sort of thing, and lambda classes
would make them easier to write.
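
For concreteness, a sketch of the kind of special-purpose local class
being talked about (Database and its members are hypothetical names):
its only reason for existence is to undo something in its destructor
if the scope is left early, however it is left.

struct Database
{
    void begin() {}
    void commit() {}
    void rollback() {}
};

void processRequest(Database& db)
{
    db.begin();

    struct Rollback
    {
        Database* db;
        ~Rollback() { if (db) db->rollback(); }
    } guard = { &db };

    // ... work that may return early or throw ...

    db.commit();
    guard.db = 0;    // success: disarm the cleanup
}

A single reusable ScopeGuard (or, one day, a lambda class) replaces
having to write that little struct by hand each time.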

Everything considered, I don't agree with Andy Robinson's
position. But I did once, and it is a reasonable position, as
far as it goes. If I don't agree with it today, it is because
people like Dave Abrahams didn't dismiss my issues, but addressed
them. Exceptions aren't a silver bullet, which will make good
programmers out of bad. They're one of a number of tools
available. And I'd be very sceptical of someone who tried to
say that you should react the same way to file not found as to a
precondition failure.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Andy Robinson

unread,
Jan 14, 2005, 11:11:41 PM1/14/05
to
Torsten Robitzki wrote:

> From discussions I followed before I got the feeling that the
> preferences for exceptions grows with the complexity of a project
> (and thus with the maximum of functions on the call stack at the
> time a very rare error occurred). On the other hand, most complaints
> about exceptions came from GUI writers. Maybe it's because invalid
> input from users are treated as exceptions from some GUI frameworks.

I do think that there is a divide according to what kind of
programming people do. GUI applications like the one I mostly am
working on, live in a very impure world dealing with a lot of user
interface, a lot of differing APIs and so on. I think it's harder to
make exceptions work cleanly in that situation.


Andy Robinson, Seventh String Software, www.seventhstring.com


Andy Robinson

unread,
Jan 14, 2005, 11:10:51 PM1/14/05
to
Stephen Howe wrote:

>> I have placed a rant called "Exception-Free Programming" at
>> http://www.seventhstring.com/resources/exceptionfree.html
>> and I'll be interested to know what you think.
>

> I religiously check the return values of functions that use status
> codes. But that is just it, my colleagues frequently do not.
>
> I jumped on one of my colleagues recently for failing to check the
> return value of fopen() (and yes, it had failed) and failing to
> check the return values of fread(), fwrite() and even fclose() (a
> flush to disk could fail). Your case for status code is weaker than
> it appears. If all these C functions threw exceptions on failure, my
> colleagues would _have_ to write code that dealt with the exceptions.
>
> Stephen Howe

I don't see why. Why wouldn't they just ignore the issue, if that's
their nature?

I've talked in a different post about the undesirability of waiting
until a bug happens, before we fix it. The right way that this bug
should have been found is by a conscientious person reading the code
carefully and fixing bugs like this before they ever happen. This is
true regardless of whether we are using status codes or exceptions.
If we only fix the bugs that happen, then the bugs that haven't
happened yet don't get fixed. Oops.

ka...@gabi-soft.fr

unread,
Jan 14, 2005, 11:11:13 PM1/14/05
to
Andy Robinson wrote:
> "Philipp Bachmann" wrote:

> > In my opinion, a difference should be made between
> > "exceptional conditions" in an application and
> > "exceptions" as one possible implementation technique
> > provided as a feature of the programming language to
> > signal such conditions. The topic of "exception-safe
> > programming" has to do with "exceptional conditions" or
> > errors and not with "exceptions" in particular. So even if
> > you use status codes to signal success or failure of a
> > call to a (member) function, you must decide which level
> > of "exception safety" you want.

> I'm sure there's some truth in this. But my point is that
> whatever you're trying to achieve, it's easier if you can see
> what's happening (that is, if you can see the return points of
> a function, visible in the source).

And what about information hiding:-) ?

I understand the point you are trying to make. I agree in part.
But everything has its price. Is the price of the invisible
path more or less than the price of the alternatives?

[...]


> > - The comparison of exceptions with "setjmp()" / "longjmp()"
> > misses the point, that exceptions at least provide the
> > advantage of stack unwinding.

> I agree that exceptions are cleaner than longjmp. But both
> remain essentially a long-distance "goto", which is
> undesirable for the same reason that goto's are.

Exceptions are just glorified goto's. I'd agree, up to a point.
So is calling abort(), but my code is full of assert's.

I've come to the point where as soon as someone says exceptions
are good, or are bad, I become sceptical. Like everything else,
exceptions have a price. If that price is less than the price
of the alternative, then I use them. If one of the alternatives
has a lower price, I use it. The hidden control flow in
exceptions is part of that price. But the explicit control flow
in return codes (which can complicate some functions
considerably) isn't free either, and it is a price that can add
up quickly if the place where you handle the error is far from
where you detect it.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

David Abrahams

unread,
Jan 15, 2005, 5:28:13 AM1/15/05
to
Andy Robinson wrote:
> David Abrahams wrote:
>
>> 1. The required analysis is the same one you'd have to perform for
>> any code that can fail, whether with exceptions or status codes or
>> some other mechanism, to understand its behavior in the presence of
>> error
>> conditions. You still need to know whether a function can fail, and
>> whether, if it fails, it may have disturbed the program state.
>
> Yes indeed. And my point is that tricky things like this are hard to
> get right with exceptions because many effective return points are
> hidden (the functions we call, which might throw an exception) and we
> can only discover their existence by reading documentation about the
> functions we call,

That's a silly argument. If you aren't going to read the documentation
about the functions you call, the game is over. Your program is broken.

> which fallible programmers may or may not have
> bothered to get right, or which may not even exist. (Or by
> recursively analyzing every function called, which is not practical
> on a regular basis).
>
> If we assume that every programmer is a conscientious genius then it
> doesn't matter whether we use exceptions or status codes. But if we
> relax this requirement then my view is that exceptions are harder to
> use correctly.

No genius is required, and you don't need to consider lots of "effective
return points" individually. You just need to be aware of what's
happening to the program state at each point in the program... which is
required if you're going to write correct programs in the first place.
For the most part, you can write code as though anything at all might
throw, and it becomes very uniform and easy to deal with. The regions
that must not throw in order to maintain correctness are actually much
rarer, and those are where you need to focus a great deal of attention.
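
For what it's worth, a minimal sketch of that style (the names are
mine, not from this thread): everything that can throw happens off to
the side, and the visible state only changes in a final step that is
known not to throw.

#include <vector>

void replaceContents(std::vector<int>& target, std::vector<int> const& source)
{
    std::vector<int> tmp(source);   // may throw; target untouched if it does
    tmp.swap(target);               // no-throw commit: all or nothing
}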

Exceptions are much easier to use correctly once you understand them,
because in most cases they remove opportunities for decision- (and thus
error) making. And much more importantly, they can prevent a brutal
dumbing-down of the program's overall abstraction level that eventually
makes the entire thing (not just error-handling) harder to write correctly.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

David B. Held

unread,
Jan 15, 2005, 5:24:03 AM1/15/05
to
Andy Robinson wrote:

> dietma...@yahoo.com wrote:
>
>>If you have correct code if no exception is thrown it is a trivial
>>transformation to make the code also exception-safe: all you need to
>>do is wrapping up explicit clean-up code in function with a try/catch
>>block.
> [...]

> You really feel that exception-safety can be achieved with a trivial
> transformation?

Look carefully again at what he said. "*If you have correct code if
no exception is thrown...*".

> [...]


> Yes, but on the other hand I'm not sure that function chaining where
> exceptions may be thrown, is such a good idea. Suppose a function has
> multiple arguments, one of which is a function which may throw an
> exception. Because the order of evaluation of function arguments is
> undefined, we can't tell which other arguments get evaluated if an
> exception is thrown. Yes, it may not matter. But I don't see any harm
> in separating the calls so we can see the ordering.

Straw man. Dietmar never said that function chaining on multiple
arguments was a good idea. Function chaining on single arguments is
not only perfectly safe, but fairly common.

> [...]


>>In your document you claim that explicit checking makes it
>>visible which operations can fail. This is, however, the
>>wrong place to document this knowledge: the client should be
>>as ignorant about implementation details of called functions
>>as possible since a change in the function's implementation
>>might cause the "knowledge" about the function to become
>>wrong. This effectively means that the caller of a function
>>should always assume that the function may fail. Thus, each
>>and every use of a function is already a visible indication
>>of a point of failure.
>
> That's brave! It also disagrees with what David B. Held says
> about the "swap" function, for instance.

I think it's consistent with what I said, given a caveat. I think
Dietmar would agree that if a function is explicitly documented to
be no-fail, then you need not make the "default assumption", which
is a perfectly good assumption to make, and one with which I would
agree. In the absence of documentation that states otherwise, you
*should* assume that a function can fail, and you should *not*
assume that you know how. Or rather, *how* it fails should not be
relevant to making your calling code error-safe. If it is, then
your safety is fragile and likely to break. This is a lesson you
learn very quickly once you start writing generic (read: template-
heavy) code.

Dave

L.Suresh

unread,
Jan 15, 2005, 5:26:18 AM1/15/05
to
> Then, presumable, you find JAVA's lack of enforcement of
> unchecked exceptions bad.

JAVA's unchecked exceptions, the derivatives of Error and
RuntimeException, have different reasons to be there. Unchecked
exceptions are used when enforcing handling on the caller is a burden
and the caller cannot / need not handle the exception. In pre-assert
JDK days, RuntimeExceptions were thrown to indicate pre-condition
failures. RuntimeExceptions are usually not meant to be caught by the
immediate caller; they need to be bubbled up to the top, where they
are reported and the program stopped. They indicate a programming
error / bad state, as opposed to erroneous data read from the outside.

> In Java, of course, most of the checked exceptions are really
> things that should be return codes. Checked exceptions are
> an important means of working around this defect in the
> language.

Are you saying that functions should return error codes instead of
throwing checked exceptions? Firstly, it cripples a function to return
an error code instead of its other return values. Secondly, returning
immutable values like java.lang.String will be a big pain.

I tend to view exceptional conditions as "really exceptional"; they
occur about 20% of the time. For those 20%, using return codes seems
to me to clutter up the remaining flow. For that 20% of the flow I
don't mind if the exception handling mechanism is a tad slower, as
long as it gives me separation of concerns and code clarity, and
enforces handling of the relevant exceptional situations.

> And of course, in C++, if the return value is
> in fact a return code, it is usual to use a class which triggers
> a failed assertion if the destructor is called without the value
> having been read, so that in the most important case, you can
> insist.)

How do you make sure that a return value is read? Presumably, you
wrap the return code in a class, and require that the code be read
from that class. Why should this be done when it can be enforced by
the compiler? In my view one of the advantages of exceptions is that
return codes become typed, which makes it easier to enforce that they
are handled.

> And of course, Java also uses exceptions for internal errors.
> I'm not quite sure how you are supposed to handle and continue
> from something like VirtualMachineError.

That's true :). But one of the derivatives of VirtualMachineError,
namely OutOfMemoryError, can be handled to some extent. It is possible
to salvage something by releasing strong references to some resources
and ensuring that we at least exit gracefully. For example, when the
Eclipse IDE encounters OutOfMemoryError, it saves all my files and
runs in a crippled mode. Eventually I have to restart it, but I can
finish off some crucial things before restarting.

> ? I don't see how. How can using checked assertions ensure
> that you've actually done the right thing and tested it in the
> error cases.

It doesn't ensure that I have done the right thing. It forces me to
do the right thing. What I do by catching the exception is up to me,
but the idea is that the language forced me to consider the
exceptional situation.

>> b) C++ , JAVA differ how they treat the exception specifications.
>> int f(); //#1
>> in C++ can throw any exception , whereas in JAVA its a
>> no-throw guarantee.

> No it's not. In Java, it's a guarantee that the program won't
> throw one of a small set of exceptions. But it can still throw
> most of the exceptions one sees in practice.

Yeah, but they are the unchecked exceptions that the immediate caller
cannot / need not handle.

> I feel that JAVA has an edge over C++ in
> enforcement of exceptions. In C++ you can write code such as,
> int f() throw() {
> throw 1; // Flagrant violation, but the compiler lets it go...
> }
> The reason given by Stroustrup for #1 throwing any exceptions
> is that it would require exception specification for virtually
> every function.

> if ( someFunction() ) {
> // handle error...
> }

> vs.:

> try {
> someFunction() ;
> } catch ( ErrorType error ) {
> // ...
> }

> (Let me clarify that last statement. IMHO, an if which
> immediately handles the error is a lot more readable than a try
> catch block.

If there were five different functions, all returning 3 error codes,
then the "if" version becomes more cluttered. If you look at the "if"
version, the emphasis seems to be on error handling. But in the
try/catch version the emphasis is on calling the function, which is
really the main flow. A simple client which makes socket, bind and
connect calls will have them wrapped in "if" statements, which drowns
out the emphasis on the actual calls and is actually a distraction
when reading the normal code.


>> Do you really see that much of a difference between:

> } catch ( Type1 const& x1 ) {


> // ...
> } catch ( Type2 const& x2 ) {
> // ...
> } // ...


> and:

> switch ( error.typeCode() ) {
> case Error1:
> // ...
> break ;
> case Error2:
> // ...
> break ;
> // ...
> }

Yes. Here you have got the "error" variable from the called function
somehow, which disrupts the normal semantics of the function. Probably
the function returned the error when it could have given you the
result of its calculation. So the semantics of the function have been
convoluted to incorporate return codes. You prevent function chaining
with this. The code is cluttered because this is not the main flow.
You do not have the type safety advantage. If you drop "case Error2"
it's okay, but it's not okay if the compiler forced you to catch Type1.
--lsu

David Abrahams

unread,
Jan 15, 2005, 5:30:19 AM1/15/05
to
Don Waugaman wrote:
> David Abrahams wrote:
>> msalters wrote:
>> > There are already
>> > implementations that won't even load exception handlers
>> > in RAM until needed.
>
>> Really? Which ones? I've been talking about that optimization as a
>> possibility for years, but I have yet to see it anywhere.
>
> Exception handlers (which I take to mean code in catch blocks), I would
> agree.
>
> However, for the tables used to look up exception handling information,
> and the code used to interpret those tables, you get automatic loading
> "for free" when the information is stored in a different object file
> section from the main program code in a system with demand-paged
> executables. I believe the IA64 ABI for C++ supports this.

Do you have a reference? Do any compilers you know of take advantage of
that support?


> Even better, the exception information in that situation can be paged
> out, thus such a system "automatically" allows for unloading as well
> once it is no longer needed.

Like I said, I've been talking about that optimization as a possibility
for years, so I understand the implications. But so far I haven't seen
an implementation that implements it.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com


dietma...@yahoo.com

unread,
Jan 15, 2005, 5:31:49 AM1/15/05
to
Just a nit:

David B. Held wrote:
> In fact, one could argue that it is a shortcoming of
> the C++ iostreams library that streams do *not* throw exceptions.

... by default: you can set up streams to throw an exception by
setting the appropriate bits with the 'exceptions()' member functions.
If the stream is already in a bad state it throws when turning
exceptions on. Of course, the primary reason why exceptions are not
thrown by default is that any I/O operation should be expected to
fail, i.e. failing is the normal case, not the exceptional one.
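
For instance, a minimal illustration of that mechanism ("data.txt" is
just an assumed file name):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream in;
    // Ask the stream to throw std::ios_base::failure instead of merely
    // setting its state bits when an operation fails.
    in.exceptions(std::ifstream::failbit | std::ifstream::badbit);

    try
    {
        in.open("data.txt");         // throws if the open fails
        std::string line;
        std::getline(in, line);      // throws if the read fails
        std::cout << line << '\n';
    }
    catch (std::ios_base::failure const& e)
    {
        std::cerr << "I/O failure: " << e.what() << '\n';
        return 1;
    }
    return 0;
}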

BTW, I don't think the particular argument that exceptions abort
execution earlier holds at all: if you religiously check the status
and return a failure code as soon as you detect a failed operation,
processing is aborted at effectively the same point as with the
exception - the behavior would be essentially identical to the
behavior when throwing an exception. Of course, all the status code
checking could be hidden by using exceptions (and the compiler can do
much better, as it can avoid the checking and manipulate the stack
directly).
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.contendix.com> - Software Development & Consulting

Andy Robinson

unread,
Jan 15, 2005, 11:19:19 PM1/15/05
to
ka...@gabi-soft.fr wrote:

> It's pretty rare for return codes to be an int. On every
> project I've worked on where we used return codes, they were a
> class type. How else do you get an assertion failure if they
> aren't read?

I like this idea. I will definitely try it next time I'm working on
some new project that isn't already committed to a strategy.

>> "When exceptions are being propagated then [a dtor] is the
>> only way of doing such tidying up. This can be a pain,
>> forcing you to create special classes whose only reason for
>> existence is to accomplish some specialised tidy-up in their
>> destructors." No. Google for ScopeGuard. You need only one
>> class, and it exists today.
>
> Bof. Actually, he has a sort of a point, and it is one of the
> reasons why I'm interested in the lambda classes. I do use a
> lot of local classes for this sort of thing, and lambda classes
> would make them easier to write.

Thank you, it's good to know that not everyone thinks I'm totally mad.

Andy Robinson, Seventh String Software, www.seventhstring.com


Francis Glassborow

unread,
Jan 15, 2005, 11:22:50 PM1/15/05
to
In article <1105714867.2...@f14g2000cwb.googlegroups.com>,
ka...@gabi-soft.fr writes

>Everything considered, I don't agree with Andy Robinson's
>position. But I did once, and it is a reasonable position, as
>far as it goes. If I don't agree with it today, it is because
>people like Dave Abrahams didn't dismiss my issues, but addressed
>them. Exceptions aren't a silver bullet, which will make good
>programmers out of bad. They're one of a number of tools
>available. And I'd be very sceptical of someone who tried to
>say that you should react the same way to file not found as to a
>precondition failure.

Which reminds me of my first reaction to using exceptions, I thought I'd
gone to hell. They made my carefully written elegant code into a chaotic
mess of try/catch blocks, often nested to several levels. Then some time
along the way I made the necessary shift of viewpoint and found that
code that was written with an understanding of exceptions was actually
cleaner and more elegant. Among other things it hid distracting details
from high level code, it avoided horrible ripples when a low level
function needed modification to deal with a hitherto unrecognised
problem and it allowed me to encourage even the rawest novices to
validate data and at least mark problems for later handling.

There then had to be a maturing period whilst I got exceptions into a
proper perspective as just one tool for handling problems. By the way, I
have strong objections to any argument that seems to be based on the
premise that exceptions and return codes are the principal mechanisms for
reporting/dealing with problems. They are just two options and there are
others. It is the task of a professional to understand his/her tools and
use the right one for the job. Yes, it is sometimes possible to use a
classical screwdriver on a cross-headed screw, but you would not expect
to hear a professional engineer advocate it.


--
Francis Glassborow ACCU
Author of 'You Can Do It!' see http://www.spellen.org/youcandoit
For project ideas and contributions: http://www.spellen.org/youcandoit/projects

James Kanze

unread,
Jan 15, 2005, 11:20:59 PM1/15/05
to
Francis Glassborow wrote:

[...]
> BTW any good programmer will have a substantial toolkit of
> techniques to deal with problem situations, and exceptions is
> only one of those techniques.

I think that this is really the key. Exceptions do have a
price; his article correctly points out some of the prices. The
problem is that the problem they address has to be solved (at
least in correct programs), and that the other solutions also
have a price; he didn't say much about the price of the
alternatives in his paper. A good programmer will understand
the alternatives, and choose the most appropriate solution.

I have written, and continue to write, programs in which I don't
use exceptions. For a lot of little tools, for example, if I
can't just ignore the error (with a message) and get on with it,
exiting with an error code is the appropriate action, and I
don't need a C++ exception for that. And even in something as
complex as a compiler, trying to recover from bad_alloc is
probably not appropriate. (Whether exceptions are
appropriate for recovering from other types of errors probably
depends on the compiler architecture. And of course, other
considerations may come into play -- if part of the compiler is
tool generated, and the tools generate C, there may be issues of
exceptions propagating through C functions which prevent their
use even if they would otherwise be appropriate.)

Such situations are rarer in larger applications; exceptions are
ideal for aborting a request in a typical server.

--
James Kanze home: www.gabi-soft.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

James Kanze

unread,
Jan 15, 2005, 11:20:15 PM1/15/05
to
L.Suresh wrote:
>>Then, presumable, you find JAVA's lack of enforcement of
>>unchecked exceptions bad.

> JAVA's unchecked exceptions, the derivatives of Error,
> RuntimeExceptions have different reasons to be there.
> Unchecked exceptions are used when enforcing it on the caller
> is a burden and the caller cannot / need not handle the
> exception. In pre assert JDK days Runtime Exceptions were
> thrown to indicate pre-condition failures.

I hope not. A pre-condition failure is an internal error. The
program should stop immediately.

> RuntimeExceptions are usually not meant to be caught by the
> immediate caller; they need to be bubbled up to the top where
> they are reported, and the program stopped. They indicate a
> programming error / bad state apart from erroneous data read
> from the outside.

In sum, RuntimeExceptions are for cases when an exception is
appropriate, Errors are for cases when you really should stop
the program immediately, rather than raise an exception, and
other exceptions are for cases when a return code would be
preferable.

>>In Java, of course, most of the checked exceptions are really
>>things that should be return codes. Checked exceptions are an
>>important means of working around this defect in the language.

> Are you telling that the functions should return error code
> instead of throwing checked exceptions? Firstly, it will
> cripple the functions to return error code instead of other
> return values. Secondly, returning back immutable values like
> java.lang.String will be a big pain.

Sounds like your checked exceptions are a struggling work-around
for a language defect. I've used return codes in such cases in
C++, and never had any problems with it. (But I'll be honest: I
don't understand the last sentence at all. First, of course,
you don't actually return a String, you return a pointer. And
pointers -- both in Java and in C++ -- have a sentinal value for
indicating error conditions: null. And secondly, because I
don't quite see what the immutable has to so with it: Java has
no out parameters at all, so even if the object itself is
mutable, you can't modify anything at the call site.)

Of course, even in Java, something like Fallible is possible
(now that Java has generic classes, of course). Sometimes, it's
a good solution. Other times, out (or inout) parameters are
preferable. Java's lack of out and inout parameters is a major
design flaw.

> I tend to view exceptional conditions as "really exceptional",
> they occur about 20% of the time. For those 20% using return
> codes to me seems to clutter up the remaining flow.

IMHO, an if seems to clutter less than a try/catch. We are
talking here about error conditions that will be handled by the
immediate caller.

And of course, I'd hardly call something that occurs 20% of the
time exceptional. Something that frequent is part of the main
logic. (But I'm not sure that frequency is really the
criterion. I think the real issue is whether there is a chance
to continue, or whether you will have to abort the current
action -- for some sufficiently high level definition of action,
e.g. a request in a server, parsing a statement in a compiler,
etc.)

> To me, for those 20% of the flow i dont mind if the exception
> handling mechanism is a tad slower as long as it allows me to
> specify separation of concerns, code clarity and enforces me
> to handle relevant exceptional situations.

>>And of course, in C++, if the return value is in fact a return
>>code, it is usual to use a class which triggers a failed
>>assertion if the destructor is called without the value having
>>been read, so that in the most important case, you can
>>insist.)

> How do you make sure that a return value is read?

By tracking when it is read, and asserting in the destructor.
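
A minimal sketch of that technique (a hypothetical class, not any
particular project's code): the destructor asserts if nobody ever
inspected the code, so an ignored error fails loudly the first time
that path is exercised.

#include <cassert>

class ReturnCode
{
public:
    explicit ReturnCode(int code) : code_(code), read_(false) {}

    // Copying hands the obligation to check over to the new object.
    ReturnCode(ReturnCode const& other)
        : code_(other.code_), read_(false) { other.read_ = true; }

    ~ReturnCode() { assert(read_); }   // ignored result => assertion failure

    bool ok() const   { read_ = true; return code_ == 0; }
    int  code() const { read_ = true; return code_; }

private:
    int          code_;
    mutable bool read_;
};

// ReturnCode status = doSomething();
// if (!status.ok()) { /* handle the failure */ }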

> Presumably, you pass the return code to a class, and make that
> return code to be read from the class. Why should this be
> done when this can be enforced using the compiler?

Now THAT is a real question. Why do C and its derivatives allow
us to ignore return values? (I can actually think of reasons in
the case of C++, but in C, I think it can only be considered as
a major design flaw.)

> In my view one of the advantages is that return codes are
> typified using exceptions, because of which it is easier to
> enforce that exceptions are handled.

>>And of course, Java also uses exceptions for internal errors.
>>I'm not quite sure how you are supposed to handle and continue
>>from something like VirtualMachineError.

> Thats true :). But one of the derivatives of
> VirtualMachineError, namely the OutOfMemoryError can be
> handled to some extent. It is possible to salvage something
> by releasing strong references to some resources and ensuring
> that we atleast exit gracefully. For example, Eclipse IDE
> when it encounters OutOfMemoryError, saves all my files and
> runs in a crippled mode. Eventually I have to restart it, but
> I can finish off some crucial things before restarting.

I'm not saying that it should be impossible to do anything in
such cases. But you don't want to risk doing everything that a
stack walkback will normally entail.

And of course, OutOfMemoryError really shouldn't be an Error --
it's really the sort of thing that I would use a
RuntimeException for (e.g. an exception in C++).

>>? I don't see how. How can using checked assertions ensure
>>that you've actually done the right thing and tested it in the
>>error cases.

> It doesn't ensure that I have done the right thing. It forces me to
> do the right thing. What I do by catching the exception is up to me,
> but the idea is that the language forced me to consider the
> exceptional situation.

Except that Java's checked assertions aren't used in exceptional
situations. They're used in very everyday situations when only
the most incompetent programmer would not consider the error
situation, because it occurs so often, and because handling it
is part of the basic algorithm.

>>>b) C++ , JAVA differ how they treat the exception
>>>specifications. int f(); //#1 in C++ can throw any exception
>>>, whereas in JAVA its a no-throw guarantee.

>>No it's not. In Java, it's a guarantee that the program won't
>>throw one of a small set of exceptions. But it can still
>>throw most of the exceptions one sees in practice.

> Yeah, but they are the unchecked exceptions that the immediate
> caller cannot / need not handle.

But that the immediate caller still needs to know about, because
it is necessary to know whether a function is no throw or not in
order to write exception safe code. (More correctly, it is
important that a certain number of primitive operations be
guaranteed no throw in order to write exception safe code. In
Java, of course, the fact that every operation can potentially
throw VirtualMachineError means that formally, exception safety
isn't possible. In practice, of course, the best Java
programmers will simply assume that nothing derived from Error
will actually occur, accepting the fact that their program is
incorrect if it does; and the others are simply blissfully
unaware that exception safety exists.)

>>I feel that JAVA has an edge over C++ in enforcement of
>>exceptions. In C++ you can write code such as,
>>int f() throw() {
>>throw 1; // Flagrant violation, but the compiler lets it go...
>>}
>>The reason given by Stroustrup for #1 throwing any exceptions
>>is that it would require exception specification for virtually
>>every function.

>>if ( someFunction() ) {
>>// handle error...
>>}

>
>>vs.:
>
>
>>try {
>> someFunction() ;
>>} catch ( ErrorType error ) {
>>// ...
>>}

>>(Let me clarify that last statement. IMHO, an if which
>>immediately handles the error is a lot more readable than a
>>try catch block.

> If there were five different functions, all returning 3 error
> codes then the "if" version becomes more cluttered. If you
> look at the "if" version, the emphasis seems to be in error
> handling.

Funny thing, but in correct code, the emphasis often is on error
handling. Trying to push errors off to the side is not going to
improve program quality.

> But in the try catch version the emphasis is on calling the
> function which is really the main flow. A simple client which
> does, socket, bind, connect calls will be wrapped in "if"
> statements which drowns away the emphasis on the actual calls,
> and actually a distraction in reading the normal code.

You mean that correct code is a distraction. A simple client
doing bind and connect *will* have to check each function. And
should have to, because at that level, there are almost certainly
some things that will have to be done if the calls fail.

In an application, of course, all of this will be hidden in some
sort of Connection class. And there is probably no real reason
for this class to throw either -- the argument for throwing in
the constructor: that you don't want invalid instances to be
able to exist, doesn't hold, because instances can also become
invalid later. Just because bind and connect don't have errors
doesn't mean that the connection cannot fail.
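
A sketch of the shape such a class might take (POSIX sockets assumed;
the class itself is hypothetical): open() reports failure through its
return value, and a failed or dropped connection just leaves the
object in its "not connected" state.

#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

class Connection
{
public:
    Connection() : fd_(-1) {}
    ~Connection() { if (fd_ >= 0) ::close(fd_); }

    bool open(sockaddr_in const& addr)
    {
        fd_ = ::socket(AF_INET, SOCK_STREAM, 0);
        if (fd_ < 0)
            return false;
        if (::connect(fd_, reinterpret_cast<sockaddr const*>(&addr),
                      sizeof addr) != 0)
        {
            ::close(fd_);
            fd_ = -1;
            return false;
        }
        return true;
    }

    bool isOpen() const { return fd_ >= 0; }

private:
    int fd_;
    Connection(Connection const&);              // not copyable
    Connection& operator=(Connection const&);
};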

>>>Do you really see that much of a difference between:

>>} catch ( Type1 const& x1 ) {
>>// ...
>>} catch ( Type2 const& x2 ) {
>>// ...
>>} // ...

>>and:

>>switch ( error.typeCode() ) {
>>case Error1:
>>// ...
>>break ;
>>case Error2:
>>// ...
>>break ;
>>// ...
>>}

> Yes. Here you have got the "error" variable from the called
> function somehow which disrupts the normal semantics of the
> function.

Let's remember what we are talking about. In the cases where
Java uses checked exceptions, the normal semantics of the
function include the possible error.

> Probably the function returned the Error when it could have
> given you the result of its calculation. So the semantics of
> the function has been convoluted to incorporate return codes.
> You prevent function chaining with this.

Could you give some concrete examples where this is relevant?
Do you really think that long chains of operations, which might
even under normal conditions interrupt anywhere in the chain,
lead to readable code? That something like:

OutputFile( filename ).write( ... ).flush().close() ;

is good programming style?

> The code is cluttered because this is not the main flow. You
> do not have the type safety advantage. If you drop "case
> Error2" its okay, but its not okay if the compiler enforced
> you to catch Type1.

That's a possible argument. On the other hand, you really have
to design your exception hierarchy well. Because the types of
errors you want to treat, as opposed to those you want to map
and pass up, may vary greatly from one call site to the next.

But it's only an argument with unchecked exceptions, because
otherwise the compiler won't let me ignore it.

There's also a problem of syntax, which could be (and maybe
should be) fixed: it's a lot easier with the switch if I want
the same treatment for two different errors.

--
James Kanze home: www.gabi-soft.fr

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 pl. Pierre Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

James Kanze

unread,
Jan 15, 2005, 11:30:28 PM1/15/05
to
Stephen Howe wrote:
>>I have placed a rant called "Exception-Free Programming" at
>>http://www.seventhstring.com/resources/exceptionfree.html
>>and I'll be interested to know what you think.

> I religiously check the return values of functions that use
> status codes. But that is just it, my colleagues frequently
> do not.

Then you have a problem which exceptions will not solve.
Because it isn't a problem of language, or programming; it is a
problem of basic attitude.

> I jumped on one of my colleagues recently for failing to check
> the return value of fopen() (and yes, it had failed) and
> failing to check the return values of fread(), fwrite() and
> even fclose() (a flush to disk could fail). Your case for
> status code is weaker than it appears. If all these C
> functions threw exceptions on failure, my colleagues would
> _have_ to write code that dealt with the exceptions.

Not at all. I'm sure that anyone failing to check the return
value of fopen will also fail to have tests where it should fail
in the test suite. In which case, it's really a question of
what happens when the function does fail at the client site.
With exceptions, you do get a guaranteed core dump, which is
nice; without exceptions, it's hard to say what you get, but I
doubt that the program will work either. But the companies I've
worked for don't consider a core dump at the customer site an
acceptable behavior either. Thus, the code will be tested with
wrong filenames, etc., before delivery.

David B. Held

unread,
Jan 15, 2005, 11:48:59 PM1/15/05
to
dietma...@yahoo.com wrote:
> Just a nit:
> David B. Held wrote:
>
>>In fact, one could argue that it is a shortcoming of
>>the C++ iostreams library that streams do *not* throw exceptions.
>
> ... by default: you can setup streams to throw an exception by
> setting the appropriate bits with the 'exceptions()' member functions.
> If the stream is already in a bad state it throws when turning
> exceptions on. Of course, the primary reason why exceptions are not
> thrown by default is that any I/O operation should be expected to
> fail, i.e. it is the normal case to fail not the exceptional one.

I disagree. If I have a known file with known data in it, I expect to
be able to open the file and read the data in a particular format in
which all the calls succeed. Just because any of the calls might not
succeed does not mean that I expect it. In the sense that I/O depends
on hardware that is volatile and subject to change by other
users/programs, I agree that one can "expect I/O to fail." But in an
ideal world, I think there are plenty of cases where you expect I/O to
be as reliable as integer arithmetic, and you have to handle the
failures in a special way.

> BTW, I don't think the particular argument that exceptions abort
> execution earlier holds at all: if you religiously check the
> status and return a failure code as soon as you detect a failed
> operation, processing is aborted at effectively the same point as
> with the exception - the behavior would be essentially identical
> to the behavior when throwing an exception. Of course, all the
> status code checking could be hidden by using exceptions (and the
> compiler can do much better, as it can avoid the checking and
> manipulate the stack directly).

The point is that you don't have control over all the code that might
fail. So if you call into a library that does not religiously check
status codes, it is literally impossible to know every event that has
failed without duplicating the library's functionality some way. If
the library throws exceptions, on the other hand, or functions that
the library itself calls throw exceptions, you are not at all dependent
on the piety of the library authors (except for them to write correct
code). I wouldn't say that this is a primary reason for a language to
use exceptions, but I think it is a mentionable benefit.

Dave

David B. Held

unread,
Jan 16, 2005, 12:03:00 AM1/16/05
to
Andy Robinson wrote:

> David B. Held wrote:
>>[...]


>>Perhaps that's because when you started to use classes they were
>>well-established and had a common set of idioms; but when you
>>started to use exceptions, the state-of-the-art was not so mature.
>>The fact is, there are thousands of programmers who have learned to
>>find the utility in exceptions that you have found in classes.
>
> Who knows, maybe this will happen to me too. I don't see any sign of
> it yet!

I would say that you cannot learn the full value of exceptions until
you have used them. You need to build up an intuition of how they
work, and until you do, exceptions will seem strange and baffling.
There are plenty of C coders who have yet to build up an intuition
about OOP to see where it can bring value to their projects. So if
you insist on continuing to use return codes, you may never see the
value of exceptions.

> [...]


> I would say that if we want atoi to detect errors (e.g. overflow) then
> its prototype in an ideal world would be:
>
> StatusCode atoi(int *retval, const char *nptr);

Out parameters are evil. ;>

> [...]


> I do agree about it being the same task, and my point is that it is a
> task more easily accomplished when we can see the potential return
> points all identified with "return" statements, and can see from a
> function prototype whether it returns a status code.

Cleaning up resources is a tedious task as well, and one could argue
that it is easier to confirm that your code doesn't leak when the
cleanup functions are all called explicitly. So why don't we make
d'tor calls manual instead of automatic? And instead of reference-
counting, let's go back to manual cleanup. I mean, we don't *really*
know that a d'tor cleans up a resource when it goes out of scope
unless we can *see* the delete call. So let's make that explicit
and maybe just ban d'tors altogether.

> I don't think many people attempt formal proof of such things. What we
> do is examine the code and think "what if..." at each line. My claim
> is that exceptions make this harder.

And my claim is that it is only harder because you have lots of
experience with return codes and not nearly as much with exceptions.
Once you learn to use exceptions, it's much easier to look at a
function and quickly spot the trouble areas. Especially when it's
your own code and you already know which functions are pure and which
ones throw. And once you think about things in terms of fail/no-fail
and pure/effect, it quickly becomes clear that you don't need to
to see a return statement to follow the return paths. All fallible
operations are a return point.

> [...]


> You make various points about the problem of handling errors in ctors.
> My view is that it would be nice if there was a clean way to return
> multiple values from a function,

It's called boost::tuple. ;>

> in which case ctors would return a status code,

Ugly and unnatural.

> and "new" would return a valid pointer and a NO_ERROR status code,
> or a null pointer and error status.

Seriously, if new fails, what are you going to do with the status
code??? Especially in code buried deep in a library? You're going
to propagate it right up the stack, of course. Exactly the same
thing an exception would do for you, but automagically.

> [...]


>>void TransferMoney(Account& From, Account& To)
>>{
>> Account Temp(From);
>> Temp->debit(100);
>> To->credit(100);
>> swap(Temp, From);
>>}
>
> I agree that this is gorgeous. But that's partly because it's a
> carefully-chosen example.

Hardly. The only times I write a try/catch block are in main() and
Java. ;>> Granted, you can't always afford to provide the strong
guarantee. Granted, the basic guarantee can sometimes be messier.
But often times, you *can* provide the strong guarantee at minimal
to no extra cost, and when you do, it is often as beautiful as this
example illustrates (which should be a pretty good clue that you're
doing something very right).

Even when you need to roll back or undo operations just to get the
basic guarantee, you have things like ScopeGuard to help you make
it easy.

> It also depends on the idea that the correct response to an exception
> is to put everything back the way it was, which is correct in this case,
> but not always.

Providing the strong guarantee *if you can* is almost always the right
thing to do. I can hardly think of an example where it isn't. The only
times I don't are when it is too expensive or awkward. For functions
that execute irreversible operations, like non-persistent I/O (console
or network I/O), you simply can't provide the strong guarantee, of
course.

> Member functions need to manipulate their own state and the response
> to an error may not be to restore the original state. So we end up
> either using lots of try/catch blocks, or writing special classes to
> set state appropriately in their dtor, if an exception is thrown. And
> the thing is, we can't (easily) write a program which only uses
> exceptions in places where they shine : as soon as we start using
> them, they start throwing themselves all over the place!

In my experience, most try/catch blocks are an optimization, not a
requirement. That is, I usually only write try/catch blocks when I
think that the RAII equivalent will be more expensive somehow. But
the fact is, an RAII class is essentially a nicely encapsulated
try/finally pair. And even when try/catch is more natural than an
RAII encapsulation, you're not doing anything differently than you
would have to with return codes.

Suppose we have a function that performs 2 fallible reversible
operations which require multiple undo operations:

op1() is reversible by: undo_op1() followed by finish_op1()
op2() is reversible by: undo_op2() followed by finish_op2()

Suppose that you didn't want to encapsulate these undo operations
and use a ScopeGuard:

void foo(T arg1)
{
    try
    {
        op1(arg1);
        finish_op1();

        try
        {
            op2(arg1);
            finish_op2();
        }
        catch (...)
        {
            undo_op2();
            finish_op2();
            throw;
        }
    }
    catch (...)
    {
        undo_op1();
        finish_op1();
        throw;
    }
}

This code may look contrived, but is actually abstracted from some
real application code I have had to write. Now let's see what the
status code version looks like:

int foo(T arg1)
{
    int status;
    if ((status = op1(arg1)) == SUCCESS)
    {
        finish_op1();

        if ((status = op2(arg1)) == SUCCESS)
        {
            finish_op2();
            return SUCCESS;
        }
        undo_op2();
        finish_op2();
        return status;
    }
    undo_op1();
    finish_op1();
    return status;
}

So is the return code version easier to read than the try/catch
block version? Is it more concise? Is it more clear? Well, we
see that the exception version contains 6 lines that the return
code version does not: 2 try, 2 catch, and 2 throw. And we see
that the return code version contains 6 lines that the other
version does not: 2 if, 1 declaration, and 3 returns. So as far
as conciseness goes, it's about a tie. If you scale it up, the
return code version will have a very marginal advantage, because
each additional operation will add 1 if and 1 return, whereas
it will add 1 try, 1 catch, and 1 throw.

How about clarity? Overall, I don't find a significant difference
in clarity, because both functions have been obfuscated somewhat
by error handling. However, I think it is clear that the try/catch
version "looks cleaner". The lines that actually do something are
not masked by a bunch of return code checking. However, I find
the manual error propagation as repugnant as if I had to call
d'tors explicitly at the end of a function. That is a menial task
that involves boilerplate code (you said it yourself, you only
need to add 1 line to code, and it's *always* the same line),
which means that I should not have to be bothered with it. That
is the compiler's task (just like I shouldn't have to tell the
compiler to always allocate sizeof(int) bytes when I declare an
int).

The tasks that should be left to the programmer are ones that
require an informed decision, and whether or not to propagate
unhandled errors should never be a question. Therefore, it can
and should be automated.
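
For comparison, roughly what the same structure looks like when the
undo steps are wrapped in a small RAII helper instead of written out
by hand (the helper and the stubs are mine, and the guards are armed
only after each step succeeds, which simplifies the versions above
slightly):

// Stubs so the sketch is self-contained; in real code these are the
// fallible operations and their undo pairs described above.
typedef int T;
void op1(T) {}  void finish_op1() {}  void undo_op1() {}
void op2(T) {}  void finish_op2() {}  void undo_op2() {}

// Hypothetical helper: runs "undo then finish" in its destructor
// unless dismissed once the whole sequence has succeeded.
struct UndoOnFailure
{
    void (*undo)();
    void (*finish)();
    bool  armed;

    UndoOnFailure(void (*u)(), void (*f)()) : undo(u), finish(f), armed(true) {}
    ~UndoOnFailure() { if (armed) { undo(); finish(); } }
    void dismiss() { armed = false; }
};

void foo(T arg1)
{
    op1(arg1);
    finish_op1();
    UndoOnFailure g1(undo_op1, finish_op1);

    op2(arg1);
    finish_op2();
    UndoOnFailure g2(undo_op2, finish_op2);

    g2.dismiss();
    g1.dismiss();
}

The propagation itself costs nothing to write: any exception simply
leaves foo(), running whichever guards are still armed on the way out.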

> [...]


> I actually like this too. It is longer, but it makes it clear where
> the return points are and it makes it clear that the programmer is
> aware of them.

Explicit d'tor calls make it clear where resource cleanup occurs,
and make it clear that the programmer is aware of it. Still, I don't
see why they are better than implicit ones.

> In the exception-based version, a casual reader might
> wonder whether the programmer had really thought about these things.

A casual reader that is not familiar with error-safe idioms using
exceptions, perhaps.

> This version makes it clear that "swap" is expected to succeed
> unconditionally, which is not clear in the exception-based version.
> And we can easily confirm that this assumption is correct (or at
> least, that it agrees with what the writer of "swap" thought) by
> looking at the prototype to confirm that its return type is not
> "StatusCode".

And yet, error handling information belongs in the documentation,
not the function signature. Or are you advocating a "returns" clause
that labels which error return codes a function can return? ;>
Functions should tell you what they do, not what they don't do.
Documentation should tell you how they fail. Besides, a good
programmer should already know that swap() should never fail. This
is idiomatic C++.

> [...]


> I do think exceptions are very like long-distance goto's. And I agree
> that status codes are immature. But this is just a consequence or
> more work having been put into exceptions.

Status codes are not so much immature as primitive. After all, we
have thousands of entrenched status codes that we must check everywhere.
The reason that exceptions are better than status codes is that they
abstract the notion of error propagation. As Dietmar points out, this
is critical in generic code. Part of my point about not being able to
trace execution paths fully is that when you write generic code, you
can literally only see half of what goes on, and you can't even control
the other half. Suppose I write a generic function in which an
operation on T may fail:

template <typename T>
void foo(T arg)
{
    bar(arg);
}

Note that bar() may be overloaded in many different ways, and that
some of those overloads probably won't even exist when you write foo().
The question is, how should you handle a failure of bar()? If you
do this:

template <typename T>
int foo(T arg)
{
    return bar(arg);
}

You are forcing everyone who calls foo() to overload bar() using an
int return code. But if someone wants to call foo() from a project
in which status codes are fat, this will simply not do. So you could
parameterize the status code type:

template <typename S, typename T>
S foo(T arg)
{
    return bar(arg);
}

Unfortunately, S cannot be deduced, so you will have to specify it
manually for each call to foo(). Hardly an ideal situation. Of
course, you could pass-by-ref:

template <typename S, typename T>
void foo(S& status, T arg)
{
    bar(status, arg);
}

But then you uglify every interface that must use this convention.
Hardly what I would call "elegant code." Note that to use exceptions,
the first version works just fine and is also the most concise. Error
propagation occurs without any mental gymnastics and contrived coding
contracts.

So you see, the problem with status codes is that *they hardwire the
error propagation mechanism*, which makes them fragile and brittle
(my whole point about being at the mercy of others who must propagate
errors). What you see as the benefit of return codes (that they
document return points) is exactly the problem with them (they
needlessly impose a fixed structure on error propagation). Exceptions
abstract the propagation process so that errors can pass right through
generic functions without any programmer intervention. And this is
only possible because the return paths are *not* explicit.

You say that classes and d'tors are a useful abstraction, and I am
telling you that exceptions are merely the logical abstraction of
error propagation. And if you give them an honest chance, I think
you will find that they deliver. That is not to say that they are
always used in an optimal way. Clearly they aren't. But that is not
a flaw in the mechanism any more than overzealous operator overloading
is a flaw in the ability to overload operators.

> [...]


> Here we disagree. I would say that using status codes means always
> checking them, unless occasionally writing a comment instead to say
> why it's correct not to. And the fact that you must write it manually
> is a valuable and visible reminder that the called function may fail.
> This helps to write bug-free code.

Only if you believe that error semantics should be documented in
code, rather than in documentation. I think it's an evolution of
programming design to move error documentation to the documentation,
and let error-safe code speak for itself (to those who know how to
listen).

> [...]


> No more complicated than an exception object.

Except that exception objects are not built if there is no
exception. But a fat status code has to be handled specially in
order to avoid serious code bloat.

> [...]


> I agree that the error checking adds to the code. But the thing is,
> you can't write correct code without thinking about the possible
> errors, so it is good that the code should show these thoughts
> instead of hiding them.

The code should demonstrate error correctness in a more subtle way.
Suppose we demanded documentation of resource cleanup by requiring
d'tors to be called explicitly. Is that reasonable? Why is manual
error propagation more reasonable?

>[...]


> These are perfectly good rules at least for some types of functions.
> But exceptions make it harder to tell, just by looking at the code,
> whether the rules are in fact being obeyed by the code.

The whole point is that you can't write error-safe code just by looking
at the code. You either need to know what error contracts the called
code is fulfilling or you need to write very defensively. In no case
can you divine how a given called function responds to errors just by
looking at an invocation. Looking at status code handling might
seduce you into thinking you know, but you really don't. You just
know that the function might return *some* status code. You still have
to know when and why.

> [...]


> I mentioned earlier that one problem with exceptions is that as soon
> as you start using them, they get everywhere. It's their non-local
> nature that I don't like. On the other hand I like the idiom of using
> stack-based objects so their dtors will called when the block is
> exited. It's useful and does not have implications outside the scope
> of the enclosing block, which makes it far safer.

But it is exactly their non-local nature that makes them useful. It is
what allows them to propagate through functions that can't handle them,
and thus automates boilerplate propagation code. You shouldn't have to
say: "Look, ma! I propagated an error code!" any more than you should
have to say: "Look, ma! I called a d'tor!" Exceptions are an
abstraction, and generic code simply couldn't handle errors in a
rational and consistent way without them. The growing popularity of
generic libraries tells me that exception usage and awareness will only
increase, not decrease in the future.

Dave

David Abrahams

unread,
Jan 16, 2005, 12:02:05 AM1/16/05
to
Andy Robinson wrote:

> I do think that there is a divide according to what kind of
> programming people do. GUI applications like the one I mostly am
> working on, live in a very impure world dealing with a lot of user
> interface, a lot of differing APIs and so on. I think it's harder to
> make exceptions work cleanly in that situation.

Actually, most GUI apps have a huge advantage: even if you don't
consider exceptions, they need an undo history. That makes clean
recovery from errors very practical and elegant.

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com


Walter

unread,
Jan 16, 2005, 12:27:44 AM1/16/05
to

<ka...@gabi-soft.fr> wrote in message
news:1105716747....@f14g2000cwb.googlegroups.com...

> Walter wrote:
> > 1) Programs that don't use exceptions to report errors tend to
> > have serious bugs in them for the following reason -
> > programmers forget to check the error codes, or check them
> > incompletely. The resulting bugs rarely show up in testing;
> > they show up on the customer's machine when they are the most
> > expensive to fix. Exceptions cannot be ignored by omission,
> > the programmer has to deliberately write code to catch and
> > ignore it. There's no blithely going on assuming that the
> > previous operation succeeded.
> I have a hard time understanding this. If you test the error
> case, your test case fails if you forgot to check a return
> code. And if you don't test the error case, you won't notice
> that you've forgotten the catch block either.

David Held posted a much more lucid explanation of this than I did, so I'll
defer to his version!

> In most of the applications I worked on before exceptions, our
> ReturnCode type would trigger an assertion violation if it
> hadn't been read before destruction, so you'd catch the error
> even if you didn't test the error cases. (Of course, companies
> which insist on such behavior in the return codes typically do
> test error cases:-).)

That sounds like an intriguing approach I've never thought of. One thing I
do like with error handling via exceptions is there is (at least in D,
anyway) a default exception handler wrapping main() that will catch any
uncaught exceptions, print their associated error message, and terminate the
program. For a lot of batch style programs, this completely suffices,
meaning I simply don't need to write a lot of error handling code and
pretty-printing the results.

> > 2) Most (nearly all) of the problems associated with writing
> > exception safe code revolve around memory leaks.
>
> I've not found that to be true either. At least not
> exclusively. Exception safety means maintaining transactional
> integrity, at least at the highest level.

I'm surprised. I find that in my C++ code, a major focus in writing classes
and routines is managing memory. Transactions happen now and then, but
nowhere near as often.

> A more accurate argument would be that most of the problems
> associated with writing exception safe code are also there if
> you use return codes.

That's true, but what I was referring to is the care one must take in
allocating resources (primarily memory) in a manner that leaves each
reference in some exception safe container so that any exit via exception
will clean it up. With error return codes, it's just more obvious in the
control flow that there are paths (all the if(error)goto ...) along which
there can be leaks.

I'm going to disagree with you here (though I'll note it's a stylistic
disagreement, meaning I can't prove you're wrong <g>). For example, look at
the C++ version of the benchmark in www.digitalmars.com/d/cppstreams.html.
No check is made if opening the file failed, despite it being written by a
very competent C++ programmer. This mistake is just common as dirt. I make
those mistakes too. Contrast with the D version, which doesn't do the check
either, but the D version works properly anyway because the file reading
function throws an exception on error which is handled nicely by the default
handler. The programmer clearly expected the purpose of the file read was to
read the file, not to check to see if the file existed or if it was a bad
file name, etc.


> > Correspondingly, a function named DoesFileExist() should not
> > throw if the file doesn't exist, because asking a question is
> > part of the natural flow of the program, and not an error.
>
> And isn't trying to open a non-existant file part of the natural
> flow of the program as well? Something that you sort of expect
> to happen from time to time, even with reasonable users.

Validation of user input with a function like, say, isUserInputValid(char*
string), should return an error code. But ReadFile() should read the file,
and it's an error (and should throw an exception) if it failed to read the
file.

And lastly, I just don't want to write error status checking code, as it
makes the code look ugly and distracts from the logic and natural flow of
the algorithm. I'd rather handle them with exception handlers, usually
allowing the default one to do its thing, or I can just wrap the whole user
input processing code with one exception handler and take care of the "loop
and retry" all in one place.

P.S. Another reason I like the exception handling route is I get sensible
error messages for free, rather than having to roll my own
translate-the-error-code-into-something-user-friendly each time. How many C
programs just give you some generic message if the file open fails, rather
than decoding errno and giving a targeted message?

-Walter
www.digitalmars.com free C, C++, D compilers

Andrew Peter Marlow

unread,
Jan 16, 2005, 6:16:21 AM1/16/05
to
On Sun, 16 Jan 2005 00:02:05 -0500, David Abrahams wrote:
>> I do think that there is a divide according to what kind of
>> programming people do. GUI applications like the one I mostly am
>> working on, live in a very impure world dealing with a lot of user
>> interface, a lot of differing API's and so on. I think it's harder
>> to make exceptions work cleanly in that situation.
>
> Actually, most GUI apps have a huge advantage: even if you don't
> consider exceptions, they need an undo history. That makes clean
> recovery from errors very practical and elegant.

Indeed. The GUI s/w I am working on at the moment
makes extensive use of exceptions to handle
post-condition violations such as resource
problems, model errors or bad database data.
These exceptions are thrown from fairly low down
and caught right near the top where the calculation
is aborted but a user-friendly error message needs
to be displayed.

Andrew Peter Marlow

unread,
Jan 16, 2005, 6:15:34 AM1/16/05
to
On Sat, 15 Jan 2005 23:20:15 -0500, James Kanze wrote:
>> Unchecked exceptions are used when enforcing it on the caller
>> is a burden and the caller cannot / need not handle the
>> exception. In pre assert JDK days Runtime Exceptions were
>> thrown to indicate pre-condition failures.
>
> I hope not. A pre-condition failure is an internal error. The
> program should stop immediately.

If I could just throw in my two pennyworth at this point...
A pre-condition violation is indeed an internal error
but the program should not necessarily terminate immediately.
For some programs that is appropriate, but not for mine.
I work on a server app that can invoke many functions on behalf
of many users. If one of those users invokes a function that
encounters an internal error we do not want to abort the server.
This would ruin things for the other users. We throw a logic
exception which is caught at the top in our function entrypoint
and reports the error, aborting the function. But the server
continues to run.

Peter Dimov

unread,
Jan 16, 2005, 7:32:18 PM1/16/05
to
Andy Robinson wrote:
> Jorgen Grahn wrote:
> > That can be a terrible price to pay for avoiding exceptions -- you
> > miss out on RAII. Let's say I have a class Foo with some complex
> > state and good invariants (or whatever the name is for those
> > predicates which hold for all objects of a certain kind).
> >
> > If the constructor can fail to bring my object to this well-defined
> > state and I'm not allowed to throw an exception, I have to tell
> > myself "this is either a good Foo, or a broken one" every time
> > these
> > objects appear in my
> > code. Or I have to add an Init() method, and tell myself "this is
> > either a good Foo, or a broken Foo, or a Foo I haven't tried
> > initializing yet".
>
> (If you're using status codes then you won't *want* to throw an
> exception).
>
> Is this really so tough? The class merely needs a member to say
> whether it's been initialised or not, and the destructor will look at
> it in order to know how to destroy it.

It is not really a question of "tough". Exception safety isn't so
tough, but you argue against it.

> In practice you would initialise immediately after construction, and
> delete it immediately if initialisation fails. So you wouldn't have a
> mixture of good and broken Foos.
>
> Anyway this is not a fundamental problem with status codes, it's just
> a question of a minor technical inelegance caused by the fact that
> this particular part of C++ is designed to work best with exceptions.

I think that you are missing the big picture. Just try to redesign this
particular part of C++ based on return codes.

Exceptions enable designs that enforce postconditions. To take a simple
example:

int sqrt( int x );
// post: r * r == x

Now you can write:

int r = sqrt( x );
assert( r * r == x );

and the assert will never fail. If sqrt() is unable to satisfy its
postcondition, it will never return.

This increases the expressive power of the language and allows C++ to
support an implicit postcondition on constructors: the created object
is valid.

Of course, coming up with the right postcondition is still up to you.
Exceptions just increase the set of possible postconditions.
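
A sketch of such a sqrt(), under the artificial rule that only perfect
squares are acceptable input; the int_sqrt name and the choice of
std::domain_error are mine, not part of Peter's example:

#include <stdexcept>

int int_sqrt(int x)
{
    if (x < 0)
        throw std::domain_error("negative argument");
    int r = 0;
    while ((r + 1) * (r + 1) <= x)   // ignores overflow for very large x
        ++r;
    if (r * r != x)
        throw std::domain_error("not a perfect square");
    return r;   // post: r * r == x holds whenever we return at all
}

// The caller can now rely on the postcondition unconditionally:
//     int r = int_sqrt(x);
//     assert(r * r == x);   // never fires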

L.Suresh

unread,
Jan 16, 2005, 7:39:09 PM1/16/05
to
> I hope not. A pre-condition failure is an internal error. The
> program should stop immediately.

Yes, that's what I had mentioned :)

> In sum, RuntimeExceptions are for cases when an exception is
> appropriate, Errors are for cases when you really should stop
> the program immediately, rather than raise an exception, and
> other exceptions are for cases when a return code would be
> preferable.

Yes, except the last thing about returning codes for other exceptions
gets us back to the starting point!

> (But I'll be honest: I
> don't understand the last sentence at all. First, of course,
> you don't actually return a String, you return a pointer. And
> pointers -- both in Java and in C++ -- have a sentinel value for
> indicating error conditions: null. And secondly, because I
> don't quite see what the immutable has to so with it: Java has
> no out parameters at all, so even if the object itself is
> mutable, you can't modify anything at the call site.)
>

Immutability has nothing to do with it; I wanted to say that returning
values through the return value is going to be a pain. Sometimes you
can pass objects like Object[] or AnyObject so that their values can
be changed at the call site. But if it's an unwrapped object that is
returned then there are problems.

> But that the immediate caller still needs to know about, because
> it is necessary to know whether a function is no throw or not in
> order to write exception safe code. (More correctly, it is
> important that a certain number of primitive operations be
> guaranteed no throw in order to write exception safe code. In
> Java, of course, the fact that every operation can potentially
> throw VirtualMachineError means that formally, exception safety
> isn't possible. In practice, of course, the best Java
> programmers will simply assume that nothing derived from Error
> will actually occur, accepting the fact that their program is
> incorrect if it does; and the others are simply blissfully
> unaware that exception safety exists.)

Unchecked exceptions like Errors and RuntimeExceptions aren't meant to
be abused. Mostly an irrecoverable error has happened and the program
is stopped immediately. Are you saying that the immediate caller needs
to know what unchecked exceptions a method throws? That's the whole
idea of unchecked exceptions: the immediate caller need not know about
them.

> Funny thing, but in correct code, the emphasis often is in error
> handling. Trying to push errors off to the side is not going to
> improve program quality.

Error handling is an inherent part of programming. It's not pushing
errors off to the side, but handling them in a different place from
the main flow. That's what I meant when I said I have separation of
concerns. My concern in the main flow is not exceptions, it is the
normal unhindered flow, while I handle exceptional flows in another
place. Here a distinction must be made about what "exceptional" means,
and it calls for judgement from the programmer.

> You mean that correct code is a distraction. A simple client
> doing bind and connect *will* have to check each function. And
> should have to, because at that level, there is almost certainly
> some things that will have to be done if the calls fail.

It is a distraction to the main logic of the program. I say it is a
distraction because of the mixing of concerns here. A simple network
client has different flows, and I feel it is essential to capture
those different flows separately.

> In an application, of course, all of this will be hidden in some
> sort of Connection class. And there is probably no real reason
> for this class to throw either -- the argument for throwing in
> the constructor: that you don't want invalid instances to be
> able to exist, doesn't hold, because instances can also become
> invalid later. Just because bind and connect don't have errors
> doesn't mean that the connection cannot fall.

The success of bind and connect leads to a valid Connection instance
and it doesn't have any bearing on the future behaviour of that
Connection.

> Could you give some concrete examples where this is relevant?
> Do you really think that long chains of operations which might
> even under normal conditions interrupt anywhere in the chain
> leads to readable code? That something like:
>
> OutputFile( filename ).write( ... ).flush().close() ;
>
> is good programming style?

I personally don't advocate long function chains. In some places, small
function chains like f.flush().close(); are readable.

Also, you prevent calling functions like,
open(file.getFilename());

> But it's only an argument with unchecked exceptions, because
> otherwise the compiler won't let me ignore it.

Yes.

--lsu

Tim Rowe

unread,
Jan 17, 2005, 4:10:41 AM1/17/05
to
On 15 Jan 2005 23:20:15 -0500, James Kanze <ka...@none.news.free.fr>
wrote:

>I hope not. A pre-condition failure is an internal error. The
>program should stop immediately.

Eek! No!

The program should respond in whatever way is appropriate when it
discovers it has an internal error. /Maybe/ shut down, maybe find
another way to accomplish the task, maybe flag the error and try to
carry on.

It would be a bad thing if the fly-by-wire software keeping a Boeing
777 in the air were to generate a precondition failure. It would be
far worse if the software were to respond by saying "that's it; I'm giving
up. You're on your own now."
Replies to tim at digitig dot co dot uk

L.Suresh

unread,
Jan 17, 2005, 3:04:25 PM1/17/05
to
The program should stop ASAP! An internal error is a bad thing: the
state machine of the system is mangled, it could be worse, and it can
manifest in the worst possible way if you try to carry on. The best
thing is to report the error and shut down. (Precondition failures
caused by user / client input are trivial and do not constitute
internal errors.)

--lsu

Gerhard Menzl

unread,
Jan 18, 2005, 5:59:09 PM1/18/05
to
Andy Robinson wrote:

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

Just one aspect I would like to add to the discussion: what data type
would you propose for your status code? Typically, int. Now suppose the
need to integrate third party software arises. Unfortunately, their
return codes are long. Or unsigned short. Or HRESULT. Or

struct result
{
    long errorcode;
    char const* reason;
};

Or they do use int, but the return values conflict with your own. How
does this affect your code? And in how many places?

Now think exceptions. Then all it takes to integrate third party
software which throws its own exception objects is to add another catch
block to your top exception handlers (of which there should be few).
Ideally, if exception classes derived from std::exception have become
the norm, you don't have to change anything.

Consider what this means for the maintenance of your system.
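
A small sketch of what such a top-level handler might look like; the
third_party::Error type and run_application() are of course made up:

#include <exception>
#include <iostream>

namespace third_party {
    struct Error {
        explicit Error(const char* r) : reason(r) {}
        const char* reason;
    };
}

int run_application()
{
    // Stand-in for the real program; pretend the integrated library failed.
    throw third_party::Error("database unavailable");
}

int main()
{
    try {
        return run_application();
    }
    catch (const third_party::Error& e) {    // the one catch block added
        std::cerr << "third-party failure: " << e.reason << '\n';
    }
    catch (const std::exception& e) {        // everything derived from std::exception
        std::cerr << "error: " << e.what() << '\n';
    }
    return 1;
}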

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail
address with "kapsch", and the top level domain part with "net".

Sergey P. Derevyago

unread,
Jan 18, 2005, 6:00:24 PM1/18/05
to
ka...@gabi-soft.fr wrote:
> I just posted an article in which I explained the cases where I
> think exceptions are justified. In practice, with the exception
> of constructors (and probably overloaded operators -- I've not
> enough experience with them to be sure), exceptions mean that
> you are about to abort some large functional block.
>
BTW this means that the Strong guarantee is almost needless: the affected
objects get destructed ASAP and therefore we aren't interested in their state
after the exception.
I.e. the efforts to ensure the Strong guarantee where it isn't equal to
the Basic guarantee are pointless.
--
With all respect, Sergey. http://ders.angen.net/
mailto : ders at skeptik.net

Anthony Williams

unread,
Jan 18, 2005, 6:02:39 PM1/18/05
to
Andy Robinson <an...@seventhstring.com> writes:

> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.

Your rant basically comes down to "prefer returning error codes to using
exceptions", which I thoroughly disagree with. You have the same set of issues
to deal with, and using error return codes scatters error handling all across
your code, so it obscures the natural flow. I won't go into these issues more,
as they have been covered quite well by other posts on this topic.

An area that doesn't seem to be getting the same coverage is the idea of
writing code so that exceptions are unnecessary. This doesn't mean using error
codes as you suggest, but rather writing code that can't fail. This is a
variation of the idea behind Null Objects --- if you try and open a file, then
you don't get an exception, and you don't have an error code to handle; rather
you have a file object that just doesn't have any data. If it really matters
whether the file existed, then you write code to explicitly check for
that. Likewise, if you're querying a database, and the database can't be
contacted, just return a result set with no rows. If you're reading a
configuration file, and there is no configuration data, just use defaults,
etc.
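
A rough sketch of that style for the file case; the NullableFile name
and interface are invented here, not Anthony's:

#include <fstream>
#include <sstream>
#include <string>

class NullableFile
{
    bool ok_;
    std::string text_;
public:
    explicit NullableFile(const std::string& path) : ok_(false)
    {
        std::ifstream in(path.c_str());
        if (in) {
            std::ostringstream buf;
            buf << in.rdbuf();
            text_ = buf.str();
            ok_ = true;
        }
    }
    // A missing file is not an error: you simply get no data back.
    const std::string& text() const { return text_; }
    // ...and when it really matters, you ask explicitly.
    bool exists() const { return ok_; }
};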

At some point, you might find that you really have to handle an error, in
which case an exception might be appropriate, in order to terminate the
current process or abort the current request. Alternatively, if it is a
scenario that can be planned for, then a direct query can be made ---
e.g. does the file exist? can we authenticate with the server? etc.

Anthony
--
Anthony Williams
Software Developer

ka...@gabi-soft.fr

unread,
Jan 18, 2005, 6:05:23 PM1/18/05
to
Tim Rowe wrote:
> On 15 Jan 2005 23:20:15 -0500, James Kanze
> <ka...@none.news.free.fr> wrote:

> >I hope not. A pre-condition failure is an internal error.
> >The program should stop immediately.

> Eek! No!

> The program should respond in whatever way is appropriate when
> it discovers it has an internal error.

Yes. I was making a general statement. I know that there can
be exceptions in special cases.

> /Maybe/ shut down, maybe find another way to accomplish the
> task, maybe flag the error and try to carry on.

> It would be a bad thing if the fly-by-wire software keeping a
> Boeing 777 in the air were to generate a precondition failure.
> It would be far worse if the software to respond by saying
> "that's it; I'm giving up. You're on your own now."

And yet, that's exactly what happens. The standard behavior on
detecting any internal failure in a critical system is to shut
down. Immediately.

Don't forget that critical systems have backups. Shut-down, and
the backup will recognize it, and take over. Continue, and who
knows what you might do.

--
James Kanze GABI Software http://www.gabi-soft.fr

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

unread,
Jan 18, 2005, 6:06:52 PM1/18/05
to

Yes, but isn't this a Java problem, due to the lack of out
parameters?

There are two reasons which can lead you to use exceptions. One
is a conceptual reason, linked to the idea that you want to
abort a large functional block (e.g. a request in a server),
without aborting the process. The other is a technical reason.
In C++, this is usually because you are in a constructor, and
don't want to leave an invalid object lying around -- there is
also an argument concerning handling return codes of unknown
types in generic code. (Note that this latter reason also
argues against checked exceptions.) In Java, this is often,
too, because you don't have the out parameters you need. Most
of the checked exceptions in Java would be much more naturally
handled by return codes.

> > But that the immediate caller still needs to know about,
> > because it is necessary to know whether a function is no
> > throw or not in order to write exception safe code. (More
> > correctly, it is important that a certain number of
> > primitive operations be guaranteed no throw in order to
> > write exception safe code. In Java, of course, the fact
> > that every operation can potentially throw
> > VirtualMachineError means that formally, exception safety
> > isn't possible. In practice, of course, the best Java
> > programmers will simply assume that nothing derived from
> > Error will actually occur, accepting the fact that their
> > program is incorrect if it does; and the others are simply
> > blissfully unaware that exception safety exists.)

> Unchecked exceptions like Errors, RuntimeExceptions aren't
> meant to be abused. Mostly an irrecoverable error has
> happened and the program stopped immediately.

Totally agreed. So why do they raise an exception, rather than
stopping the program immediately?

> Are you saying that the immediate caller needs to know about
> what unchecked exceptions a method throws?

I'm saying that just about any time you should be using
exceptions, the immediate caller doesn't need to know a thing
about them, other than the fact that the function is not no
throw. Unless I've misunderstood something, you were the one
arguing for checked exceptions.

> That's the whole idea about unchecked exceptions, the immediate
> caller need not know about it.

Roughly speaking, Java categorizes exceptions into three
categories: Errors, RuntimeExceptions and checked exceptions.
IMHO, for the most part, what they categorize as Errors should
immediately stop the program, and not be exceptions. And the
checked exceptions should be return codes.

This sub-thread started because someone (I think it was you)
praised Java's policy of checking exceptions. My point is 1)
they (correctly) don't do it in a lot of cases, and 2) the cases
they do check shouldn't be exceptions in C++. Roughly speaking, of
course -- one can argue about any given Java exception, as to
whether the language designers put it in the correct category,
and there are probably cases in C++ where exceptions are used
for purely technical reasons, should be handled nearby, and
where checking would be nice. But I think that while Java is
correct in having the three categories, I don't think all three
should be handled by exceptions, and while I recognize that
there are some cases where checked exceptions would be nice in
C++, I rather think that they are the exception -- that most of
the time, the C++ model does exactly what it should do.

> > Funny thing, but in correct code, the emphasis often is in
> > error handling. Trying to push errors off to the side is
> > not going to improve program quality.

> Error handling is inherent part of programming. It's not
> pushing off to the sides, but handling it in a different place
> from the main flow. That's what i meant when i said i have
> separation of concerns. My concern in the main flow is not
> exceptions, it is normal unhindered flow. While i handle
> exceptional flows in another place. Here, distinction must be
> made about "exceptional" and what it means, and it calls for a
> judgement from the programmer.

I've been thinking about this in response to this thread. How
to explain the difference that I feel. In the end, I think it
is simply that I feel that something like handling illegal input
from a user is, or should be, a fundamental part of the
algorithm, and not something to be "isolated".

I do think that where you draw the line is rather arbitrary, and
will depend on the application, but on the whole, I'm not eager
to use exceptions for "errors" which I expect to occur on a
regular basis. Like, for example, file not found when the
filename comes from user input.

> > You mean that correct code is a distraction. A simple
> > client doing bind and connect *will* have to check each
> > function. And should have to, because at that level, there
> > is almost certainly some things that will have to be done if
> > the calls fail.

> It is a distraction to the main logic of the program. I say
> it is a distraction because of mixing of concerns here. A
> simple network client has different flows and i feel it is
> essential to capture those different flows separately.

> > In an application, of course, all of this will be hidden in
> > some sort of Connection class. And there is probably no
> > real reason for this class to throw either -- the argument
> > for throwing in the constructor: that you don't want invalid
> > instances to be able to exist, doesn't hold, because
> > instances can also become invalid later. Just because bind
> > and connect don't have errors doesn't mean that the
> > connection cannot fall.

> The success of bind and connect leads to a valid Connection
> instance and it doesnt have any bearing on the future
> behaviour of that Connection.

That wasn't my point. My point was that even if bind and
connect succeed, you still have to check the status of the
object before or after each use, because even if bind and
connect succeed, the connection can fail later. This is much
like the various classes in iostream.

And that given the fact that you have to check the status
everywhere -- that you can never guarantee a valid Connection
(object), one of the strongest technical arguments in favor of
exceptions falls by the wayside.

> > Could you give some concrete examples where this is
> > relevant? Do you really thing that long chains of
> > operations which might even under normal conditions
> > interrupt anywhere in the chain leads to readable code?
> > That something like:

> > OutputFile( filename ).write( ... ).flush().close() ;

> > is good programming style?

> I personally don't advocate long function chains. In some
> places, small function chains like f.flush().close(); is
> readable.

(Except, of course, that 1) close() subsumes flush():-).)

The aspect of chaining is interesting. I use chaining in a
number of specific, mostly limited cases, and in those cases, I
like it. To date, the only possible "errors" in those cases
have been critical errors, like bad_alloc, which are correctly
reported by an exception. I suspect that there are cases where
the desire to support chaining represents a valid technical
argument for exceptions; I've yet to encounter one, however.

> Also, you prevent calling functions like,
> open(file.getFilename());

The question is: if file.getFilename() can fail in a way that
might reasonably require local handling, do you want to do this?

--
James Kanze GABI Software http://www.gabi-soft.fr

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

unread,
Jan 18, 2005, 6:07:18 PM1/18/05
to
Walter wrote:
> <ka...@gabi-soft.fr> wrote in message
> news:1105716747....@f14g2000cwb.googlegroups.com...
> > Walter wrote:
> > > 1) Programs that don't use exceptions to report errors
> > > tend to have serious bugs in them for the following reason
> > > - programmers forget to check the error codes, or check
> > > them incompletely. The resulting bugs rarely show up in
> > > testing; they show up on the customer's machine when they
> > > are the most expensive to fix. Exceptions cannot be
> > > ignored by omission, the programmer has to deliberately
> > > write code to catch and ignore it. There's no blithely
> > > going on assuming that the previous operation succeeded.

> > I have a hard time understanding this. If you test the
> > error case, your test case fails if you forgot to check a
> > return code. And if you don't test the error case, you
> > won't notice that you've forgotten the catch block either.

> David Held posted a much more lucid explanation of this than I
> did, so I'll defer to his version!

I haven't seen it yet (or I missed it), so I'll look for it.
Still, I do suppose you test error conditions, and not just the
everything is fine situations. So you should notice a missing
return value.

> > In most of the applications I worked on before exceptions,
> > our ReturnCode type would trigger an assertion violation if
> > it hadn't been read before destruction, so you'd catch the
> > error even if you didn't test the error cases. (Of course,
> > companies which insist on such behavior in the return codes
> > typically do test error cases:-).)

> That sounds like an intriguing approach I've never thought
> of.

I've been using it for close to fifteen years. I didn't invent
it; I learned it because it was being used by a customer, so it
is a lot older. (At that time, we didn't have exceptions.) It
has the advantage of signaling the missing check even when the
error condition can't be easily simulated.

> One thing I do like with error handling via exceptions is
> there is (at least in D, anyway) a default exception handler
> wrapping main() that will catch any uncaught exceptions, print
> their associated error message, and terminate the program.
> For a lot of batch style programs, this completely suffices,
> meaning I simply don't need to write a lot of error handling
> code and pretty-printing the results.

I often do this in application specific classes in prototypes
and such. As you say, if just handling any possible error at
the top level is fine, it is by far the simplest solution.

It's not something I do in robust applications.

> > > 2) Most (nearly all) of the problems associated with
> > > writing exception safe code revolve around memory leaks.

> > I've not found that to be true either. At least not
> > exclusively. Exception safety means maintaining
> > transactional itegrity, at least at the highest level.

> I'm surprised. I find that in my C++ code, a major focus in
> writing classes and routines is managing memory. Transactions
> happen now and then, but nowhere near as often.

I find that I spend too much time developing memory management
strategies, exceptions or return codes. I've not found the
problem to be more difficult because of exceptions. This may
be simply because the strategies I use for managing memory in
exception free code also revolve around ensuring that the memory
is always "owned" by one or more objects, whose destructors are
responsible for freeing it. So my exception free code is
intrinsically exception safe with regards to memory management.
On the other hand, my programs usually have extensive state, and
while carrying out various operations, there are almost always
moments when the state is incoherent.

[...]

I'm surprised that some default handler is able to handle this
correctly.

There are obviously cases where failing to open a file is a
critical error. The most obvious one is not being able to open
a temporary file which I just wrote. But most of the time,
failure to open a file can be handled immediately. At least in
the programs I write.

But my point is that in a library where different uses may be
reasonable, the preferable tactic is probably to use a return
code, simply because it is very easy, and has little impact on
my code or on runtime, to convert the return code to an
exception, whereas the reverse isn't true.

> > > Correspondingly, a function named DoesFileExist() should
> > > not throw if the file doesn't exist, because asking a
> > > question is part of the natural flow of the program, and
> > > not an error.

> > And isn't trying to open a non-existant file part of the
> > natural flow of the program as well? Something that you
> > sort of expect to happen from time to time, even with
> > reasonable users.

> Validation of user input with a function like, say,
> isUserInputValid(char* string), should return an error
> code. But ReadFile() should read the file, and it's an error
> (and should throw an exception) if it failed to read the file.

Are you saying that I should probably systematically call
something like isFileReadable() before calling ReadFile()? It's
a possible programming style. I've just not used it much.

> And lastly, I just don't want to write error status checking
> code, as it makes the code look ugly and distracts from the
> logic and natural flow of the algorithm. I'd rather handle
> them with exception handlers, usually allowing the default one
> to do its thing, or I can just wrap the whole user input
> processing code with one exception handler and take care of
> the "loop and retry" all in one place.

> P.S. Another reason I like the exception handling route is I
> get sensible error messages for free, rather than having to
> roll my own
> translate-the-error-code-into-something-user-friendly each
> time. How many C programs just give you some generic message
> if the file open fails, rather than decoding errno and giving
> a targetted message?

And how many systems using exceptions don't think to pass the
errno up with the error:-)? Reporting errors should always go
through some central function, which takes care of all this.

--
James Kanze GABI Software http://www.gabi-soft.fr
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

wka...@yahoo.com

unread,
Jan 18, 2005, 6:10:46 PM1/18/05
to
Andy Robinson wrote:
> wka...@yahoo.com wrote:
>
> > If you're one of the lucky people who can count on the fingers of
> > one hand the number of times you've forgotten to free a resource at
> > an early return, I can see where you might feel that freeing
> > resources in destructors only is unnecessary overhead.
>
> I don't think I said that did I? I hope not.
>
> > For the rest of us, it's a
> > good defensive programming habit, even if we don't use exceptions.
>
> Me too, it's very useful.
>
> > Is it really a big burden to use iostreams instead of cstdio
> > streams, auto_ptr for individual heap objects, and the vector
> > template for heap
> > arrays? For me, these substitutions cover most of the cases where I
> > need to worry about exception or multiple-return-point safety.
>
> Yes, I too use these things (well, I don't use iostreams or cstdio
> much).

I guess I need to 'fess up that I've mostly worked in environments
with lots of exception-unsafe legacy C code, making it impractical
to use exceptions. My understanding is that, in almost all cases, the
only impact of exceptions on a function that does not catch exceptions
is that resources taken in the function need to be freed by destructors
of objects in the function's stack frame (activation record). I
thought that the need to free resources in destructors was one of
your complaints against exceptions? But here you seem to agree that
this is a good idea with or without exceptions.

>
> > Would you require the types passed to STL containers all have the
> > member
> > function 'int init(void)' ? That kills the idea of using primitive
> > types in container templates, or minimally forces the use of a
> > traits template.
>
> I use STL containers but only in fairly simple ways. I regret I don't
> know enough to discuss this.
>
> > Suppose you had a protected member function that called a virtual
> > member function. Suppose further that, in some derived class, the
> > override of the virtual function set a derived class member
> > variable, and this value was used by the derived class member
> > function that called the base class protected function. Would you
> > see this as bad style or an example of the flexibility and power of
> > the virtual member function capability? To me, this is analogous to
> > using an exception. Sometimes it's desirable for "non-adjacent"
> > layers in the code to interact in ways that are hidden from the
> > intermediate layers.
>
> I would judge it on its merits in the circumstances. I'm a practical
> person. Any piece of code which is easy to understand and easy to
> ensure that it is correct, is fine with me.

The other issue is whether you accept that the analogy I'm putting
forward is a meaningful one. If you do, you might want to accept that
it makes sense to evaluate the use of exceptions case-by-case as well.

Stephen Howe

unread,
Jan 18, 2005, 6:16:48 PM1/18/05
to
> I don't see why. Why wouldn't they just ignore the issue, if that's
> their nature?

Because exceptions not dealt with will terminate their program.
Exceptions have the habit of rubbing their nose in the fact that "there is a
problem".

Ignoring return codes may not do this. Depending on the nature of the
program, ignoring a return code, may have no noticeable effect, to side
effects, to terminating the program due to all types of undefined behaviour.
But it is perfectly possible for the program to silently run.

I can think of one case where one of my colleagues' programs _appeared_
to work, but it was only by checking the content of the files generated
on output that we realised something was badly wrong - which was
directly traceable to ignoring return codes.

Stephen Howe

Emil

unread,
Jan 18, 2005, 6:17:32 PM1/18/05
to
> A few people have implied, as you seem to be doing, that if I am
> opposed to one innovation I must be opposed to all innovations. This
> is not the case, I think each should be judged on its merits.

In my post I did not mention innovation, I was specifically talking
about abstraction. Exceptions provide higher level of abstraction when
handling failures in your code.

I propose you do this simple experiment. Examine the source code of one
of your projects, and see what action you take in each function in
response to a failure in a function it calls. Please consider the ones
that simply report the problem to their caller, after possibly
releasing the local resources they've acquired. This will probably
cover most of the cases.

Then consider this: if you use exceptions to report failures, those
cases are automatically taken care of. This is A Good Thing.

> > Additionally, avoiding exceptions leaves you with no practical
> > options for reporting failures from constructors. This essentially
> > disables one of the most important features of C++, namely the
> > guarantee that no object of user defined type can be used before it
> > has been properly initialized.
>
> This I have discussed further in a couple of other posts.

I suggest you read The Design and Evolution of C++, where Stroustrup
explains why exceptions are necessary much better than I ever could.

But let me give you an example. The older generation remembers that
some time ago Borland Pascal was a popular object oriented language. In
Borland Pascal, there was a special keyword 'constructor' which was
used to designate constructors from other member functions. This
allowed programmers to use the return value of a constructor to report
failures.

Now, in C++ constructors do not return anything. Was this an oversight?
Wouldn't this be a good feature, after all if you don't want to return
something you can always define a constructor as void, right?

You don't suppose the designers of C++ didn't think about this, do you?
Of course they did, but they were smarter than that.

If constructors were allowed to report failures through return codes,
then you open the possibility for the caller to ignore them. This means
that a careful implementer would have to check whether the object is
properly initialized in every member functions. Of course this would be
impossible to enforce.

Exceptions, coupled with the fact that they are the only way of
reporting failures from constructors, take care of this entire class of
bugs. The compiler guarantees that if a member function ends up being
called, one of the constructors of its object successfully finished its
execution.
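
A bare-bones illustration of that guarantee; the Socket class and the
os_open_socket stand-in are invented, not from Emil's post:

#include <stdexcept>

// Stand-in for a real OS call; in this sketch it always fails.
int os_open_socket(const char* /*host*/) { return -1; }

class Socket
{
    int fd_;
public:
    explicit Socket(const char* host) : fd_(os_open_socket(host))
    {
        if (fd_ < 0)
            throw std::runtime_error("connect failed");
    }
    void send(const char* /*data*/)
    {
        // No "am I initialised?" check is needed here: if the
        // constructor had failed, this object would not exist.
    }
};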

Yes, you can live without this feature. But embracing it will make your
programs simpler, more robust, and easier to maintain.

--Emil

Stephen Howe

unread,
Jan 18, 2005, 6:16:27 PM1/18/05
to
>> I religiously check the return values of functions that use
>> status codes. But that is just it, my colleagues frequently
>> do not.
>
> Then you have a problem which exceptions will not solve.

I know that :-). They have improved.

> Because it isn't a problem of language, or programming; it is a
> problem of basic attitude.
>
>> I jumped on one of my colleagues recently for failing to check
>> the return value of fopen() (and yes, it had failed) and
>> failing to check the return values of fread(), fwrite() and
>> even fclose() (a flush to disk could fail). Your case for
>> status code is weaker than it appears. If all these C
>> functions threw exceptions on failure, my colleagues would
>> _have_ to write code the dealt with the exceptions.
>
> Not at all. I'm sure that anyone failing to check the return
> value of fopen will also fail to have tests where it should fail
> in the test suite. In which case, it's really a question of
> what happens when the function does fail at the client site.
> With exceptions, you do get a guaranteed core dump, which is

> nice;...

Right. That is why I prefer exceptions.
Programmers cannot just ignore them.

>... without exceptions, it's hard to say what you get, but I


> doubt that the program will work either.

You're very likely right. It's just that occasionally a program may
silently appear to work and it may take quite some time to realise it
does not.

Stephen Howe

Don Waugaman

unread,
Jan 18, 2005, 6:17:54 PM1/18/05
to
David Abrahams wrote:
>Don Waugaman wrote:
>> David Abrahams wrote:
>>> msalters wrote:
>>> > There are already
>>> > implementations that won't even load exception handlers
>>> > in RAM until needed.
>>
>>> Really? Which ones? I've been talking about that optimization as a
>>> possibility for years, but I have yet to see it anywhere.
>>
>> Exception handlers (which I take to mean code in catch blocks), I
>> would agree.
>>
>> However, for the tables used to look up exception handling
>> information, and the code used to interpret those tables, you get
>> automatic loading "for free" when the information is stored in a
>> different object file section from the main program code in a system
>> with demand-paged executables. I believe the IA64 ABI for C++
>> supports this.
>
>Do you have a reference? Do any compilers you know of take advantage
>of that support?

Let me guess: you're from Missouri? :-)

Seriously, though, check

http://www.codesourcery.com/cxx-abi/abi.html#unwind

which also refers to

http://developer.intel.com/design/ia-64/devinfo.htm

for the details on how this apparently is to be done in the IA64 ABI.
'g++' implements this, or at least it appears so on my experiments with
a cross-compiler hosted on i686-linux. I can send you the readelf
output and the program I checked this with, if you'd like, but basically
the exception information goes into an elf section called
'.IA_64.unwind', which (again, as near as I can tell) will be loaded
into the final executable out-of-line from the regular program text.

dietma...@yahoo.com

unread,
Jan 18, 2005, 6:22:49 PM1/18/05
to
Dave Moore wrote:
> <dietma...@yahoo.com> wrote in message
> news:1105640434.5...@f14g2000cwb.googlegroups.com...
> > L.Suresh wrote:
> > > a) I find JAVA's enforcement of checked exceptions wonderful.
> >
> > Thanks for pointing this out in this context! Although I
> > whole-heartedly disagree with your statement, it provides another
> > insight to me why exception specifications are a bad idea: it is in
> > some sense nothing else than enforced checking of return codes ...
>
> Not at all. If a function abrogates its exception specification (say
> because some client code threw an exception a library wasn't designed
> to deal with), this results in a call of std::unexpected(), which
> normally calls std::terminate(). However, C++ also allows you to
> change this behavior by using set_unexpected() to define another
> "handler" to be called by std::unexpected.

Are you saying that each library can install its own exception handler?
If not, i.e. if it is an application-wide setting, I don't see how
setting 'std::unexpected()' can be of any reasonable help. Well, of
course, I could set up the unexpected handler on each entry into my
library. On the other hand, what value is gained from that anyway? The
functions are explicitly declared to throw only a certain set of
exceptions, I know in advance that this is really not at all the case,
and I deal with it. What is the advantage of exception specifications?
To me this just causes trouble without any value in return.
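
For reference, the mechanism being discussed looks roughly like this
under C++98 rules; the function names are invented, and dynamic
exception specifications were later deprecated and removed from the
language:

#include <exception>
#include <iostream>
#include <stdexcept>

void translate_unexpected()
{
    // Re-throw as something the violated specification does allow.
    throw std::bad_exception();
}

void library_call() throw(std::runtime_error, std::bad_exception)
{
    throw 42;   // violates the specification -> std::unexpected() is called
}

int main()
{
    std::set_unexpected(translate_unexpected);
    try {
        library_call();
    }
    catch (const std::bad_exception&) {
        std::cerr << "the exception specification was violated\n";
    }
}
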
--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.contendix.com> - Software Development & Consulting

dietma...@yahoo.com

unread,
Jan 18, 2005, 6:22:27 PM1/18/05
to
L.Suresh wrote:
> I look at exception specification as a contract.

So do I. Unfortunately, it is a contract which is too
restrictive by design as soon as the contract involves any form
of generic function (where genericity involves at least
template functions or functions using virtual functions): the
user of the function may pass in things which throw rather
different exceptions than those known to the function.

That is, each generic function with an exception specification
makes a contract it cannot really guarantee.

>
> If funcA() calls funcB(), the caller needs to know the following when
> funcB() encounters exceptional situations.

Your analysis is wrong! It needs to know effectively only which
class of guarantee is made with respect to exceptions, i.e.
whether the function makes the basic, the strong, or the
nothrow guarantee. It does not need to care about exceptions it
does not know: the whole idea of exceptions is that a function
not knowing about an exception better ignores it, assuming
that it is handled appropriately somewhere up the call chain.

> a) How will funcB() signal what has gone wrong?
>
> By looking at the exception specification it can be understood that
> an
> exceptional condition may be reported through the list of exceptions
> specified.

The only essential information which can be derived from
exception specifications is whether the function throws or does
not throw. That is, either the exception specification shall
be absent (the function might throw) or it shall be empty (the
function will not throw).

> So, exception specification is a contract that guarantees to inform
> the
> caller in a specific way. It's the language that has to enforce that
> this is done properly.

As mentioned above, exception specifications agree on a
contract they cannot really hold. It was an error during the
design of C++ (and various other languages which share the same
problem) to assume that functions can know in advance what kind
of exceptions they can throw: it is actually dependant on the
use of these functions.

> b) In what ways can funcB() go wrong?

Functions typically have their own reasons why they might go
wrong (this set can be empty, of course) but if they involve
any form of genericity, they will also inherit all
opportunities to go wrong of each point of customization. That
is, they have no idea at all what else can go wrong! An
exception specification could advertise that a function knows
that it can go wrong with at least a certain set of known
problems but this is not how exception specifications in C++
(or in several other languages) are designed to work.
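
A tiny example of why such a contract cannot be kept by a generic
function; all names are invented, and the dynamic throw() specification
shown is the C++98 feature under discussion:

#include <exception>

struct IOError {};         // the only thing the author planned for
struct DatabaseError {};   // some third-party exception type

// The author of for_each_record decided it "only throws IOError"...
template <typename Action>
void for_each_record(Action act) throw(IOError)
{
    act(1);   // ...but it cannot know what the user's Action throws.
}

void audit(int /*record*/)
{
    throw DatabaseError();   // perfectly reasonable for this caller, yet
                             // it triggers std::unexpected() -> terminate
}

int main()
{
    for_each_record(audit);
}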

> Without specifying the exceptions in the interface how does the
> funcA() know how funcB() will report errors?

That's simple: from the documentation! It is simply something
which is not reasonably handled by the language. We can have
exception specification as some form of documentation but this
would require that we change the language first.

> When i moved
> in from JAVA to C++ i was scared to look at functions without
> any specifications.

It should be the other way around: functions with exception
specifications are scary. They display that the author of the
function assumed he knew what might get wrong in functions
called e.g. on a parameter! If I pass in an object of my class
*I* know what might go wrong! The author of the function might
know what may go wrong in addition. The only reasonable
restriction is that a function might require that whatever I
pass in might not throw to provide the nothrow guarantee.
However, even this is a major restriction and one which is
rarely imposed.

> It might throw anything.

Right. This is actually the basic idea behind exceptions: A
function might transport information about a problem the author
of the function can be completely ignorant of.

> What exceptions can be thrown,

Who cares? It may be interesting which exceptions the function
conjures up itself such that you know which exceptions you
should also expect in addition to exceptions thrown e.g. by
members of parameters. This is what documentation is good for.

> How am i supposed to handle them?

That's simple to: you only handle exceptions you know of!
Other exceptions will be handled somewhere up the call chain
and are best ignored in the first place.

> Can i recover from the exception?

Actually, exception specifications do not tell you anything
about whether you can recover from it. I would claim that the
program can recover from any exception because any other
failures should not throw but abort the program immediately:
It it is known that the program cannot recover anyway, it is
most reasonable to avoid any chances of damages.

> Nothing is known by looking at the function.

Right. Actually, nothing *CAN* be known by looking at the
function except what exceptions the function throws by itself.

> iii) If funcB() is a template it can advertise the exception using
> a template parameter.
>
> template <typename T, typename E>
> void func() throw (E)
> {
> }

Actually, this does not really work at all:
- It is unusual that a function which might throw just throws
one kind of exception. Of course, we could use
'std::exception' but what would be advantage of the
specification if we actually state that it can throw
anything? ... and even this is really impossible in C++
because nothing requires that an exception derives from
'std::exception'.
- It is clumsy and not used universally anyway.

> Here, the instantiator of the function tunes this function to
> his requirements and advertises the exception based on
> the behaviour of T. Since the instantiator of this function
> knows about T, he can provide E as well.

Actually, the instantiator of this function does not really
know what 'T' may throw since 'T' may be a parameter of the
instantiator itself.

> iv) If funcB() is a virtual function, im not sure what is the
> problem you mention.

It is the same as with template parameters: any enforcement of
exception specifications restricts the function in its
reporting capabilities.

> If funcB() throws IOException then
> the method that overrides it should have an exception
> specification that is as restrictive as funcB() or more restrictive.

Right. And this is too restrictive.

> funcB_overridden() shoud throw IOException / SocketException
> (which is derived from IOException)

Why should it? If I implement my overridden function e.g. in
terms of a database library (which is really not that unusual)
I want the database exceptions to propagate to some level at
which they can be reasonably handled. Of course, the database
library is a third party product and thus its exceptions are
*NOT* derived from whatever exception class I create.

> Now, if the function wants to channel a DBException, derive it
> from IOException.

DBException is provided by a third party. Actually, it is
common to have libraries from different vendors being used
together. If these used exception specifications it would
effectively result in the necessity to wrap and unwrap
exceptions all the time. For no value at all: the exception
specifications become a hinderance because you cannot propagate
the actual exception across them but you still have to handle
the wrapped up exceptions which are not declared anyway.

> The caller can handle DBException /
> IOException as appropriate. Or as you said, you can use
> exception chaining. Exception chaining is a remarkable
> tool for printing stack traces in a remote call.

Are you implying that each function shall wrap up all
exceptions? That would be really stup^H^H^H^Hcumbersome! The
idea of exceptions is to transparently communicate problems to
the function where they can be appropriately handled.

> Exception
> chaining can be used as a debug tool or to get the underlying
> error while writing generic code. Well, if you can't do both
> then there is a problem in deciding an exception specification as
> IOException.

As mentioned before, I think each exception specification (with
the exception of an empty throw specification which indicates
the nothrow guarantee) is a problem. ... and I have yet to see
any value at all!

> c) What is the state of the system/object in presence of an
> exception. Its exception guarantee. It should fail-fast or be in a
> stable state after the function has completed.

I wouldn't object to an exception specification which states
the exception guarantee class (basic, strong, nothrow). On the other
hand, even this is hard to guarantee since generic functions
have a hard time to infer whether they conform to the basic or
strong guarantee such that the exception specification would
typically be too pessimistic (i.e. it would be basic even
though the function could typically be strong).

> d) Can the caller do something useful with the error information?

Who cares? What is important is that someone up the call chain
can do something useful!

> My experience with JAVA and C++ is that JAVA enforces exceptions in a
> way that forces you to handle exceptions, while that doesn't happen
> in C++. Yes, you may say that it's bad programming in C++. As a
> beginner in JAVA I learnt and designed to handle exceptions much
> earlier than I did as a beginner in C++ :)

My experience is that there are natural places for handling
errors. Depending on what you are doing these may be more or
less close to the location where the problem arises. In my
experience exception specifications lead to too early handling
(i.e. it is handled somewhere where actually insufficient
information about the context is available).

My analysis of exception specification (except empty throw
specifications) still is that they impose a rather high price
for no value in return. This means you are better off turning
all exception specifications into comments: as a documentation
tool stating which *additional* exceptions a function may throw
they might be reasonable.

I know that [former] Java programmers tend to be really proud
of Java's exception specification mechanism. Of course, this is
really something to be proud of: it is messed up much worse
than in C++ (actually enforcing exception specifications
increases the maintenance cost dramatically, still without any
benefit at all)! Normally, Java didn't mess up things that bad
:-)

Ulrich Achleitner

unread,
Jan 18, 2005, 7:01:05 PM1/18/05
to
On 12 Jan 2005 16:05:52 -0500, Andy Robinson <an...@seventhstring.com>
wrote:

> Does anyone here argue that exceptions are usually a bad idea?
>
> I apologise if this subject has been done to death already.


>
> I have placed a rant called "Exception-Free Programming" at
> http://www.seventhstring.com/resources/exceptionfree.html
> and I'll be interested to know what you think.
>

> I realise that I'm putting my head into the lion's mouth. But I
> also think there must be others out there who share my views.
[excessive quoting deleted --mod]

Certainly, there are various ways to write reliable programs, even in
assembler... program quality merely depends on the quality and self
discipline (code organization, documentation) of the developer.

However, in a high level language...

I have read through some of your arguments, and I must say they do not
convince me.
I am in favour of exceptions for mainly two reasons:

1) try/catch blocks clearly separate the desired way of execution from the
error handling stuff. with return codes you may end up with code segments
having an if(returncode==ERROR_CODE){...} every second line, making them
hardly readable (see the sketch after point 2).

2) exceptions cannot be silently ignored: either they are handled, or they
crash the program. the worst thing about error codes is that some
developer may ignore them (because of laziness or also because of what he
might consider a good reason), and the program may go on with wrong
results without anybody ever noticing it.
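
A rough sketch of the contrast (Store, StoreError, parse_header,
read_block and the error codes are made-up placeholder names):

// return-code style: the real work is buried between the checks
int load(const char* path) {
    int rc = open_store(path);
    if (rc != OK) return rc;
    rc = parse_header();
    if (rc != OK) { close_store(); return rc; }
    rc = read_block();
    if (rc != OK) { close_store(); return rc; }
    close_store();
    return OK;
}

// exception style: the desired path reads straight through, and the
// error handling sits in one place
void load(const char* path) {
    Store store(path);          // RAII: closes in its destructor
    try {
        store.parse_header();
        store.read_block();
    } catch (const StoreError& e) {
        report(e);
        throw;
    }
}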

just as an example of a non-convincing argument:
i do not see a problem with this: "Exceptions are the same dangerous hack
dressed up in fancy new clothes". if so, the hack is no longer dangerous,
because it is done automatically by the compiler, and it virtually
never goes wrong.


--
have a nice day
ulrich

Casey Hawthorne

unread,
Jan 18, 2005, 7:03:36 PM1/18/05
to
From the first few lines of your article:

"For instance, who today uses co-routines?"

See:

Coroutines are well suited for implementing more familiar program
components such as cooperative tasks, iterators, infinite lists, and
pipes.

http://en.wikipedia.org/wiki/Coroutine


From your article:

"It is pointed out that exceptions propagate automatically. I argue
below that this is really a form of obfuscation and is not good."


I was at an AspectJ seminar Wednesday night, when an audience member
pointed out that all the Aspect Oriented Programming (AOP) AspectJ
code looked like invisible code!

The speaker, Gregor Kiczales, quite rightly pointed out that the
software industry depends on invisible code. He then started out with
the following list:
from machine code to:
assemblers
Fortran
C
C++
Java


In any case: most computing questions are not of a yes/no variety;
if most computing questions were, then most people could do
programming.

The use of Status Flags and/or Exceptions (or a mixture) is
application dependent.


Your article attempts to paint a yes/no picture on a complex issue of
when to use Status Flags and/or Exceptions.


As for BCPL being typeless -- Python is almost the same these days:
you can stick almost anything into a variable and it is treated as an
object.

--
Regards,
Casey

Ralf Fassel

unread,
Jan 19, 2005, 4:14:28 PM1/19/05
to
* ka...@gabi-soft.fr

| > > In most of the applications I worked on before exceptions, our
| > > ReturnCode type would trigger an assertion violation if it
| > > hadn't been read before destruction, ...

|
| > That sounds like an intriguing approach I've never thought of.
|
| I've been using it for close to fifteen years.

Could you show a short sketch of this idea? A quick scan of
Boost did not reveal anything...

I would guess it is something along the lines:

class ReturnCode;
ReturnCode foo();

...
foo(); // error, since the ReturnCode is destroyed immediately?

...
{ ReturnCode bar = foo(); } // error, since `bar' is destroyed immediately?

...
ReturnCode bar = foo();
if (bar == xyz) // ok, since bar is read?
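
For concreteness, here is roughly how I imagine such a class could
look (purely my guess, not the actual implementation):

#include <cassert>

class ReturnCode {
public:
    explicit ReturnCode(int code) : code_(code), read_(false) {}

    // Copying hands the obligation to check over to the copy, so
    // returning by value does not trip the assertion.
    ReturnCode(const ReturnCode& other)
        : code_(other.code_), read_(false) { other.read_ = true; }

    ~ReturnCode() { assert(read_); }   // fires if nobody looked at it

    bool operator==(int value) const { read_ = true; return code_ == value; }
    bool operator!=(int value) const { return !(*this == value); }

private:
    int code_;
    mutable bool read_;
};

With something like this, a bare `foo();' asserts as soon as the
temporary is destroyed, while the checked uses above pass.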


| Are you saying that I should probably systematically call something
| like isFileReadable() before calling ReadFile(). It's a possible
| programming style. I've just not used it much.

Even if isFileReadable() returns true, the code would have to deal
with ReadFile() signaling a read-failed error due to the time window
between the two calls (same problem as stat()/open()).

R'

Andy Robinson

unread,
Jan 19, 2005, 4:22:31 PM1/19/05
to
Gerhard Menzl wrote:

> Andy Robinson wrote:
>
>> I have placed a rant called "Exception-Free Programming" at
>> http://www.seventhstring.com/resources/exceptionfree.html
>> and I'll be interested to know what you think.
>
> Just one aspect I would like to add to the discussion: what data
> type would you propose for your status code? Typically, int. Now
> suppose the need to integrate third party software arises.
> Unfortunately, their return codes are long. Or unsigned short. Or
> HRESULT. Or
>
> struct result
> {
> long errorcode;
> char const* reason;
> };

Perhaps something like
struct StatusCode
{int category;
union
{int status;
StatusCodeData *data;
};
};
(it would need assignment operator)

The category would tell you what to look for in the union. It's small
enough (64 bits on a 32 bit machine) to return by value so no
allocation problems. And most of the time - all the time for me - we
wouldn't need to use the StatusCodeData but it's there if we want to
attach more data. (of course, we would subclass StatusCodeData
however we want).
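
To make that concrete, a caller might use it along these lines
(CAT_PLAIN, CAT_EXTENDED, Describe() and so on are only invented
names for the example):

StatusCode sc = file.Open(path);
if (sc.category == CAT_PLAIN)
{
    if (sc.status != STATUS_OK)
        ReportError(sc.status);
}
else    // CAT_EXTENDED: data points at some StatusCodeData subclass
{
    ReportError(sc.data->Describe());
    // (clean-up of sc.data follows whatever ownership rule you pick)
}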


Andy Robinson, Seventh String Software, www.seventhstring.com

David Abrahams

unread,
Jan 19, 2005, 6:04:01 PM1/19/05
to
Don Waugaman <d...@email.cs.arizona.edu> writes:

> David Abrahams wrote:
>
>> Do you have a reference? Do any compilers you know of take
>> advantage of that support?
>
> Let me guess: you're from Missouri? :-)

?? I don't get it.

Oh, I didn't realize that the separation was mandated down to that
level of detail. That's great!

That link is dead for me.

> for the details on how this apparently is to be done in the IA64 ABI.
> 'g++' implements this, or at least it appears so on my experiments with
> a cross-compiler hosted on i686-linux. I can send you the readelf
> output and the program I checked this with, if you'd like, but basically
> the exception information goes into an elf section called
> '.IA_64.unwind', which (again, as near as I can tell) will be loaded
> into the final executable out-of-line from the regular program text.

Rockin'. Now we just need to get them to move catch blocks and it
will be perfect. Oh, and optimize the tables for space using some
kind of compression for embedded platforms. Gee, I could probably
think of something else too ;-)

--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com


Peter Dimov

unread,
Jan 19, 2005, 6:05:30 PM1/19/05
to
ka...@gabi-soft.fr wrote:
> Walter wrote:

[...]

> > Validation of user input with a function like, say,
> > isUserInputValid(char* string), should return an error
> > code. But ReadFile() should read the file, and it's an error
> > (and should throw an exception) if it failed to read the file.
>
> Are you saying that I should probably systematically call
> something like isFileReadable() before calling ReadFile(). It's
> a possible programming style. I've just not used it much.

It's only possible if there are no other processes or threads in the
system, because it introduces a race condition.

Bob Bell

unread,
Jan 19, 2005, 6:22:41 PM1/19/05
to
ka...@gabi-soft.fr wrote:
> Bob Bell wrote:
> > Andy Robinson wrote:
> > > But I don't see any advantage to doing things this way. If
> > > the File::Open(...) function returns a status code then this
> > > is easily documented, written, and used. No judgement
> > > decisions (about whether and when to use exceptions) have to
> > > be made, implemented, or documented. It's easier for
> > > everyone.
>
> > What about opening a file in the constructor of an object? It
> > is very common (in my code, at least) to use constructors and
> > destructors to manage resources like files (RAII).
> > Constructors cannot return status codes. How do you propose to
> > report failures in constructors without exceptions?
>
> I think you've chosen a very bad example. Exceptionally, you
> have a case where an internal error state is necessary. Even if
> you succeed in opening the file, an error can occur later which
> makes the object unusable. So you need the internal state, and
> you need to check it before each function anyway. Given that,
> there's really no reason not to use it for the constructor as
> well. There's no way that you can guarantee that all existing
> instances of the class are usable objects.

An internal error state may be necessary, but I don't have to maintain
it.

class File {
public:
    File(const char* iPath);
    ~File();
    void read(void* oData, size_t iSize);
    void write(const void* iData, size_t iSize);
    // ...
private:
    // ...
};

The constructor opens the file; if any error is returned by the
underlying (OS?) functions, the constructor throws and the File never
exists.

Once a File exists, you may call read() and write(). If the underlying
functions called by them fail for any reason, read() and write() throw.
But I'm not maintaining any error state; I just call the underlying
functions, and they either work or they don't.

Also, I'm not attempting to guarantee that, because a File object
exists, it is usable. The existence of a File object implies only that
a file was successfully opened, but that's it; subsequent member
functions may fail, and if they do, they throw (except, of course, for
the destructor).

The intended use for this class is to open a file, read/write it, then
close it. Typically, if something goes wrong while using the File and
it becomes unusable, throwing an exception will unwind the stack past
the point of the File's creation, and it will be destroyed. But if you
wanted to catch exceptions and attempt to fix the problem, that's OK
too. A simple example is catching EndOfFile and rewinding the file to
an earlier position.
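
A usage sketch of what I mean (copyHeader and EndOfFile are invented
names; EndOfFile stands for whatever exception read() throws at end
of file):

void copyHeader(const char* srcPath, const char* dstPath)
{
    File src(srcPath);          // throws if the open fails: no File exists
    File dst(dstPath);

    char buffer[512];
    try {
        src.read(buffer, sizeof buffer);
    }
    catch (const EndOfFile&) {
        return;                 // shorter than expected; recover locally
    }
    dst.write(buffer, sizeof buffer);
}   // both destructors close their files, even if write() throws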

In practice, this seems to work well for me; I have a consistent method
for dealing with errors with this class, and I don't need to worry
about closing the file, since the destructor does it.

My philosophy with exceptions is "if a problem occurs that can't be
dealt with here, throw". I understand the rule "if a problem must be
dealt with by an immediate caller, return an error code, otherwise
throw," and I think it's a good rule. However, it's been my experience
that in general it's difficult to know if a function's immediate caller
can or should handle a problem. There have been many times when I
returned an error code thinking the immediate caller would handle it,
only to find the error code ended up getting propagated up the stack
anyway. Over time I've ended up using error codes very rarely, usually
only in places where they must be used (such as language or callback
boundaries), and with modern, zero-overhead-for-normal-execution
compilers, this hasn't been a problem. YMMV.

Bob

Matt Seitz

unread,
Jan 19, 2005, 6:21:39 PM1/19/05
to
dietma...@yahoo.com wrote:
> It was an error during the
> design of C++ to assume that functions can know in advance what kind
> of exceptions they can throw:

Couldn't a function know what kind of exceptions it can throw by wrapping the
function body in a try/catch?

Example:

void foo() {
    try {
        bar();
    } catch (...) {
        throw foo_exception();
    }
}

In this case, doesn't foo know that the only exception it can throw is
foo_exception?

This may not be a good design, as you point out below. But it seems like it is
possible. Perhaps it would be better to say "It was an error to assume that
functions *should* know what kind of exceptions they can throw."

> Are you implying that each function shall wrap up all
> exeptions? That would be really stup^H^H^H^Hcumbersome! The
> idea of exceptions is to transparently communicate problems to
> function where it can be appropriately handled.


Tim Rowe

unread,
Jan 19, 2005, 6:27:59 PM1/19/05
to
On 18 Jan 2005 18:05:23 -0500, ka...@gabi-soft.fr wrote:

>And yet, that's exactly what happens. The standard behavior on
>detecting any internal failure in a critical system is to shut
>down. Immediately.
>
>Don't forget that critical systems have backups. Shut-down, and
>the backup will recognize it, and take over. Continue, and who
>knows what you might do.

I don't know what the behaviour is on the 777 critical systems, but I
chose that example because I understand that the backup systems run
the same software as the primary systems so if there /were/ a critical
bug it would have the potential to hit all control paths. It's
certainly not a trivial decision how to respond to an internal failure
in a system like that.


Replies to tim at digitig dot co dot uk


Tim Rowe

unread,
Jan 19, 2005, 6:27:37 PM1/19/05
to
On 17 Jan 2005 15:04:25 -0500, "L.Suresh"
<suresh.l...@gmail.com> wrote:

>The program should stop ASAP!!. An internal error is a bad thing, the
>state

And if shutting down kills 300 people for certain, whereas not
shutting down might or might not kill 300 people, you still think it
should shut down and kill the 300 people for sure, rather than risk
your programming purity for the chance of saving those 300 lives? In
other words, what if shutting down /is/ the worst possible way it can
manifest itself? There are worse things in this world than mangled
system state.

Yes, sure, /usually/ shutting down is the right thing to do, but NOT
WITHOUT THINKING OF THE CONSEQUENCES! (Sorry to shout, but this
matters!)


Replies to tim at digitig dot co dot uk


dietma...@yahoo.com

unread,
Jan 19, 2005, 6:51:52 PM1/19/05
to
Andy Robinson wrote:
> dietma...@yahoo.com wrote:
> > Exception-safety vs. clean-up in "Exception-Free" Programming:
> >
> > The key to exception-safety is cleaning up objects when leaving
> > their local context. This is in no way different for any other
> > form of error handling. If you have correct code if no
> > exception is thrown it is a trivial transformation to make the
> > code also exception-safe: all you need to do is wrapping up
> > explicit clean-up code in function with a try/catch block. Of
> > course, such clean-up tends to be error-prone anyway
> > (independent of use of exceptions) and is best handled with
> > RAII idioms anyway.
>
> You really feel that exception-safety can be achieved with a trivial
> transformation?

... starting from a program which correctly maintains its resources
without exceptions: yes! Essentially, what you need to do is wrap up
all clean-up code which is not in a destructor into a catch-block
which rethrows at the end, either duplicating the code or never
returning from the try-block (after all, in exceptional cases you
would throw an exception). This yields code which is exception safe
but does not yet take advantage of exceptions. To take advantage of
exceptions you would also replace all returns of error codes with
thrown exceptions and correspondingly remove all checks for return
codes. To finish the transformation, you would redeclare the now
unused return types as 'void' and/or change "out parameters" into
return values.

This transformation is trivial as it does not change the program's
design and could possibly even be automated. It is, however, a
lot of work which is somewhat error prone if the program has lots
of clean-up code. It would be easier to do if C++ had a "finally"
block which is always executed independent of how the try block is
exited (like in Java or C#). It may be reasonable to start the
transformation by wrapping all clean-up code into destructors, after
which the transformation making the program exception safe would
be the identity operation, followed by the transformations taking
advantage of exceptions.
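
The kind of thing I mean, with invented names (Buffer, acquireBuffer
and releaseBuffer stand for whatever explicit clean-up the original
code does):

// before: explicit clean-up, error codes checked by the caller
int process(const char* path)
{
    Buffer* buf = acquireBuffer();
    int rc = fillFromFile(buf, path);
    if (rc != 0) { releaseBuffer(buf); return rc; }
    rc = transform(buf);
    releaseBuffer(buf);
    return rc;
}

// after the transformation: same design, but the clean-up is
// duplicated in a catch block which rethrows, the error codes have
// become exceptions, and the return type has become void
void process(const char* path)
{
    Buffer* buf = acquireBuffer();
    try {
        fillFromFile(buf, path);    // now throws on failure
        transform(buf);             // now throws on failure
    }
    catch (...) {
        releaseBuffer(buf);
        throw;
    }
    releaseBuffer(buf);
}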

> > Return values vs. error codes
> ...
> > It also inhibits function chaining
> > (i.e. calling a function with the result of another function)
>
> Yes, but on the other hand I'm not sure that function chaining where
> exceptions may be thrown, is such a good idea.

As I mentioned already, you sometimes don't have any choice in
template code because you cannot always deduce the return type of
a function: you have to pass on the returned value. ... and, of
course, you cannot use an "out" parameter because this has the
same problem.
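
A toy example of the constraint (apply_chain is made up, but the
problem is real in current C++, which offers no way to name the type
of f(x) locally):

template <typename F, typename G, typename T>
void apply_chain(F f, G g, T const& x)
{
    // No way to declare a local variable for f(x) here, since its
    // type cannot be deduced and stored; the result has to be passed
    // straight on to g.
    g(f(x));
}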

> Suppose a function has
> multiple arguments, one of which is a function which may throw an
> exception. Because the order of evaluation of function arguments is
> undefined, we can't tell which other arguments get evaluated if an
> exception is thrown. Yes, it may not matter.

I didn't talk about multiple parameters: the special case of passing
just one parameter is quite common. Also, there are many operations
which provide the strong or nothrow guarantee, e.g. because they
don't mutate anything: creating subsequences using 'std::find()'
comes to mind (which may throw if the iterator used might throw).
Although function chaining isn't strictly required in this case, it
is clumsy to do otherwise.

> But I don't see any harm
> in separating the calls so we can see the ordering.

... and function ordering is often irrelevant. Actually, I can
imagine that a compiler could parallelize the evaluation of
expressions passed to a multi-parameter function: while I'm not
sure the current standard allows it, a future version taking
parallelization into account might. With modern processor
architectures this could give an opportunity to take advantage of
things like hyper-threading.

> > Encapsulation vs. return codes
> >
> > In your document you claim that explicit checking makes it
> > visible which operations can fail. This is, however, the
> > wrong place to document this knowledge: the client should be
> > as ignorant about implementation details of called functions
> > as possible since a change in the function's implementation
> > might cause the "knowledge" about the function to become
> > wrong. This effectively means that the caller of a function
> > should always assume that the function may fail. Thus, each
> > and every use of a function is already a visible indication
> > of a point of failure.
>
> That's brave! It also disagrees with what David B. Held says about
> the "swap" function, for instance.

Note that David B. Held essentially talks about a pretty
coarse distinction: he assumes that the function documents which
exception guarantee it makes (basic, strong, nothrow). Maybe my
statement was too strong, but I generally assume each function
may fail - unless it is an essential feature of the function that
it does not fail! ... and actually something like 'swap()' is
bound to fail, at least for some types. It is just guaranteed to
never fail for a very limited set of types (effectively PODs). It
is also worth noting that the mentioned exception guarantees are
needed to lift a function's exception guarantee from "basic" to
"strong". If you just strive for the basic guarantee (which is,
admittedly, not always sufficient) it should be unnecessary to
make any assumptions about functions except that they have the
"basic" guarantee.


--
<mailto:dietma...@yahoo.com> <http://www.dietmar-kuehl.de/>
<http://www.contendix.com> - Software Development & Consulting

"Daniel Krügler (ne Spangenberg)"

unread,
Jan 19, 2005, 6:53:27 PM1/19/05
to
Good morning James Kanze,

ka...@gabi-soft.fr wrote:

>> Validation of user input with a function like, say,
>> isUserInputValid(char* string), should return an error
>> code. But ReadFile() should read the file, and it's an error
>> (and should throw an exception) if it failed to read the file.
>
> Are you saying that I should probably systematically call
> something like isFileReadable() before calling ReadFile(). It's
> a possible programming style. I've just not used it much.

I totally agree with your opinion, James, and I would like to add that
the proposed isFileReadable() actually helps only in a limited way. The
reason is, of course, that global, volatile objects like files,
directories and so on can change their state in unpredictable ways.
Thus I cannot guarantee **in general** that code similar to

if (isFileReadable(filename)) {
    readFile(filename);
}

must succeed.
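
The race-free alternative is simply to attempt the operation and deal
with the failure (ReadError and handleUnreadableFile are only
illustrative names):

try {
    readFile(filename);   // the attempt itself is the only reliable test
}
catch (const ReadError& e) {
    handleUnreadableFile(filename, e);
}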

Greetings from Bremen,

Daniel

Gerhard Menzl

unread,
Jan 20, 2005, 6:52:40 AM1/20/05
to
Andy Robinson wrote:

> Perhaps something like
> struct StatusCode
> {int category;
> union
> {int status;
> StatusCodeData *data;
> };
> };
> (it would need assignment operator)
>
> The category would tell you what to look for in the union. It's small
> enough (64 bits on a 32 bit machine) to return by value so no
> allocation problems. And most of the time - all the time for me - we
> wouldn't need to use the StatusCodeData but it's there if we want to
> attach more data. (of course, we would subclass StatusCodeData
> however we want).

You introduce an int member (category) in order to save the size of an
extra int (status)?

Apart from that, you have carefully avoided the answer to my question
proper: how do you handle return codes and additional error information
from third party software that are of incompatible type? And how does
whatever tedious technique you employ to achieve this compare to an
exception-based approach?

--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the obviously faked part of my e-mail
address with "kapsch".

L.Suresh

unread,
Jan 20, 2005, 6:56:47 AM1/20/05
to
:) I said it should shut down "AS SOON AS POSSIBLE". How soon? That
depends on the program. In this case I would assume that after the
internal error occurs the flying machine should be landed at the
earliest opportunity, while you hope for the best that the internal
error does not manifest in a worse way.

--lsu

Stephen Howe

unread,
Jan 20, 2005, 6:58:48 AM1/20/05
to
> Yes, sure, /usually/ shutting down is the right thing to do, but NOT
> WITHOUT THINKING OF THE CONSEQUENCES! (Sorry to shout, but this
> matters!)

Yes, but if the program is now internally inconsistent, any action could
be wrong. What is there to think about if you are no longer sure what
objects or controlling variables can be trusted? Program invariants
going wrong and program inconsistency is the worst type of error IMO.
Only a bug fix can help here.

Stephen Howe

Hyman Rosen

unread,
Jan 20, 2005, 7:25:33 AM1/20/05
to
Tim Rowe wrote:
> And if shutting down kills 300 people for certain

Then you probably want to use Ada. That's also what you use
when you need to land a probe on one of Saturn's moons. The
Huygens software is written in Ada.
