
Non-throwing STL-like containers


Gil Shafriri

Aug 1, 2002, 8:32:31 PM
Hello,

In my current development group C++ exceptions are not being used. This
is for good reason: we don't have support for C++ exceptions in the
WinCE development tools.
This is a fact that I can't change. On the other hand, we need generic
containers such as hash, map, and vector. The STL containers throw
exceptions, so we can't use them.
I wonder if someone knows about STL-like containers, even with limited
functionality, that return error codes instead of throwing exceptions.

Thanks

Einat



James Kanze

Aug 2, 2002, 7:48:09 AM
"Gil Shafriri" <e_l...@hotmail.com> wrote in message
news:<3d49...@news.microsoft.com>...

> In my current development group C++ exceptions are not being
> used. This is for good reason: we don't have support for C++
> exceptions in the WinCE development tools. This is a fact that I can't
> change. On the other hand, we need generic containers such as hash,
> map, and vector. The STL containers throw exceptions, so we can't use
> them. I wonder if someone knows about STL-like containers, even with
> limited functionality, that return error codes instead of throwing
> exceptions.

What exceptions do they throw?

The obvious one is bad_alloc, so the immediate question is: what do you
do if you run out of memory? If you replace the new_handler, say to
abort with an error message, you won't get this one.
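
For instance, a minimal sketch of the new_handler approach just described
(the handler name is invented):

    #include <cstdlib>
    #include <iostream>
    #include <new>

    // Report the allocation failure and stop, so operator new never gets
    // as far as throwing bad_alloc.
    void out_of_memory()
    {
        std::cerr << "out of memory" << std::endl;
        std::abort();
    }

    int main()
    {
        std::set_new_handler(out_of_memory);
        // ... container code that allocates ...
    }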

All of the other exceptions are for special functions (e.g. at). The
answer there is to not use these functions.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

Peter Koch Larsen

Aug 2, 2002, 7:49:57 AM
"Gil Shafriri" <e_l...@hotmail.com> wrote in message news:<3d49...@news.microsoft.com>...
> Hello,
>
> In my current development group C++ exceptions are not being used.
> This is for good reason: we don't have support for C++ exceptions in
> the WinCE development tools.
No support for exceptions under Windows CE? This sounds most
inadequate.

> This is a fact that I can't change. On the other hand, we need generic
> containers such as hash, map, and vector. The STL containers throw
> exceptions, so we can't use them.

Where do STL containers throw? So far as I know, they do not: what may
throw are the contained objects, whenever their code - e.g. the copy
constructor - is invoked, or some other core function (such as new).


> I wonder if someone knows about STL-like containers, even with limited
> functionality, that return error codes instead of throwing exceptions.

[snip]
You can use the standard containers, although you should realize that
they can't be completely standard in your environment, as standard
C++ does require exceptions. As an example, I do not know how to tell
the user that a memory allocation failed.

Kind regards
Peter

Anthony Yuen

Aug 2, 2002, 7:56:55 AM
Gil Shafriri <e_l...@hotmail.com> wrote:
> Hello,
>
> In my current development group C++ exceptions are not being used.
> This is for good reason: we don't have support for C++ exceptions in
> the WinCE development tools.

I think you're in luck! In the Aug 2002 issue of Dr. Dobb's
Journal, there's a two-part series of articles that talks about adding
Exceptions and RTTI to the Windows CE compiler! You should definitely
check it out. It talks about the TCU library which sounds like what you
need to use exceptions in WinCE.

Peter van Merkerk

Aug 2, 2002, 7:58:13 AM
> In my current development group C++ exceptions are not being
> used. This is for good reason: we don't have support for
> C++ exceptions in the WinCE development tools.
> This is a fact that I can't change. On the other hand, we need generic
> containers such as hash, map, and vector. The STL containers throw
> exceptions, so we can't use them.
> I wonder if someone knows about STL-like containers, even with limited
> functionality, that return error codes instead of throwing exceptions.

You could take a look at STLPort (http://www.stlport.org).

--
Peter van Merkerk
merkerk(at)turnkiek.nl

Pete Becker

Aug 3, 2002, 7:15:20 AM
Gil Shafriri wrote:
>
> Hello,
>
> In my current development group C++ exceptions are not being used.
> This is for good reason: we don't have support for C++ exceptions in
> the WinCE development tools.
> This is a fact that I can't change. On the other hand, we need generic
> containers such as hash, map, and vector. The STL containers throw
> exceptions, so we can't use them.
> I wonder if someone knows about STL-like containers, even with limited
> functionality, that return error codes instead of throwing exceptions.
>

Our library can be compiled without exceptions (which is how the CE
version is built). When an exception would be thrown the code calls a
user-replaceable handler, which by default aborts the program.
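
A rough sketch of the general pattern (the names here are invented for
illustration, not Dinkumware's actual interface): the library calls a
replaceable handler at the point where it would otherwise throw.

    #include <cstdlib>

    typedef void (*failure_handler)(const char* what);

    // Default behavior: abort, as described above.
    void default_failure_handler(const char*) { std::abort(); }

    failure_handler current_handler = default_failure_handler;

    // The user may install a replacement and gets the previous one back.
    failure_handler set_failure_handler(failure_handler h)
    {
        failure_handler old = current_handler;
        current_handler = h;
        return old;
    }

    // Inside the library, where a throw-expression would normally appear:
    //     current_handler("length_error");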

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Pete Becker

Aug 3, 2002, 9:41:11 PM
Anthony Yuen wrote:
>
> I think you're in luck! In the Aug 2002 issue of Dr. Dobb's
> Journal, there's a two-part series of articles that talks about adding
> Exceptions and RTTI to the Windows CE compiler! You should definitely
> check it out. It talks about the TCU library which sounds like what you
> need to use exceptions in WinCE.
>

I looked at that library. It's rather intrusive: it requires
modification of every exception class and every class that might have
been instantiated as an auto object at the time when an exception is
thrown. And it doesn't get the semantics right: it uses a class's
destructor when objects of that type have only been partially
constructed. I.e. it's not a general-purpose solution.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


e_lapid

Aug 4, 2002, 6:58:52 AM
Why did you choose this kind of solution? Wouldn't it be simpler to invent
a non-standard extension allowing the programmer to test a failure flag?

For example:

std::map<int,int> MyMap;

// If this fails because of an out-of-memory condition, a global
// (per-thread!) flag is set.
MyMap[3] = 5;
if (std::GetThreadLastError() == MEMORY_ALLOCATION_FAILURE_FLAG)
{
    // error handling
}

Am I missing anything here?

Thanks,

Gil


"Pete Becker" <peteb...@acm.org> wrote in message
news:3D4A6877...@acm.org...


> Gil Shafriri wrote:
> >
> > Hello,
> >
> > In my current development group C++ exceptions are not being used.
> > This is for good reason: we don't have support for C++ exceptions in
> > the WinCE development tools.
> > This is a fact that I can't change. On the other hand, we need generic
> > containers such as hash, map, and vector. The STL containers throw
> > exceptions, so we can't use them.
> > I wonder if someone knows about STL-like containers, even with limited
> > functionality, that return error codes instead of throwing exceptions.
> >
>
> Our library can be compiled without exceptions (which is how the CE
> version is built). When an exception would be thrown the code calls a
> user-replaceable handler, which by default aborts the program.

James Dennett

Aug 4, 2002, 7:09:34 AM
Pete Becker wrote:
> Gil Shafriri wrote:
> >
> > Hello,
> >
> > In my current development group C++ exceptions are not being used.
> > This is for good reason: we don't have support for C++ exceptions in
> > the WinCE development tools.
> > This is a fact that I can't change. On the other hand, we need generic
> > containers such as hash, map, and vector. The STL containers throw
> > exceptions, so we can't use them.
> > I wonder if someone knows about STL-like containers, even with limited
> > functionality, that return error codes instead of throwing exceptions.
> >
>
> Our library can be compiled without exceptions (which is how the CE
> version is built). When an exception would be thrown the code calls a
> user-replaceable handler, which by default aborts the program.

And what does your code do if the handler returns?
I guess it just terminates the current operation,
and it is then up to the user code to check some
flag set by the handler to know that the operation
failed. Does this failure mode give the same
guarantees (e.g., "strong" guarantee where applicable)
that would be seen if an operation threw an exception?

It does seem valuable to me to have such an alternative
to exceptions for systems where exceptions are not an
option, but really the interface of the STL in particular
is not well-suited to such platforms.

Which reminds me of a core language change which might
help; the ability to disable conversion to void for a
UDT would allow library designers to return error codes
that could not be completely ignored. Yes, I know that
lint-like tools can give warnings where return values
are ignored, but they are less than ideal because it's
part of the C philosophy that functions *often* return
things which are normally ignored (take std::strcpy for
one example of many). If we could do something like

struct udt {
    bool succeeded() const;
private:
    operator void(); // disallow conversion to void
};

to guarantee that an expression of type udt could not
be used to form an expression statement then we could
flag explicitly in code when return values *cannot* be
ignored.

This does involve changes to a few parts of the standard;
I think the definition of expression statements, adding
a compiler-declared conversion to void to types which don't
declare one, and writing the rules to say that operator
void() is actually used when discarding an expression.
Probably we need to cover all other cases where a value
can be discarded, such as when used as the lhs of the
comma operator.

--
James Dennett <jden...@acm.org>

Niklas Matthies

Aug 5, 2002, 6:14:33 AM
On 4 Aug 2002 07:09:34 -0400, James Dennett <jden...@acm.org> wrote:
[]

> Which reminds me of a core language change which might
> help; the ability to disable conversion to void for a
> UDT would allow library designers to return error codes
> that could not be completely ignored. Yes, I know that
> lint-like tools can give warnings where return values
> are ignored, but they are less than ideal because it's
> part of the C philosophy that functions *often* return
> things which are normally ignored (take std::strcpy for
> one example of many). If we could do something like
>
> struct udt {
>     bool succeeded() const;
> private:
>     operator void(); // disallow conversion to void
> };
>
> to guarantee that an expression of type udt could not
> be used to form an expression statement then we could
> flag explicitly in code when return values *cannot* be
> ignored.

You can get close to this by supplying a destructor that terminates the
program (or issues a user-visible warning of some sort) if succeeded()
hasn't been called. Granted, it's just a run-time solution, but once the
programmer has been made aware of this, which should happen pretty soon
while testing his program, he will quickly take care of checking all
return values. (Or switch to another library...)
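
A possible sketch of such a type (purely illustrative names):

    #include <cstdio>
    #include <cstdlib>

    // If the caller never asks whether the operation succeeded, the
    // destructor complains at run time.
    class result
    {
    public:
        explicit result(bool ok) : ok_(ok), checked_(false) {}
        ~result()
        {
            if (!checked_) {
                std::fputs("error: return value ignored\n", stderr);
                std::abort();
            }
        }
        bool succeeded() const { checked_ = true; return ok_; }
    private:
        bool ok_;
        mutable bool checked_;
    };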

-- Niklas Matthies
--
There is no such thing as luck. Luck is nothing but an absence of bad luck.

Pete Becker

Aug 5, 2002, 6:15:52 AM
James Dennett wrote:
>
> And what does your code do if the handler returns?

It continues execution. Ordinarily that won't be a good thing.

> I guess it just terminates the current operation,
> and it is then up to the user code to check some
> flag set by the handler to know that the operation
> failed. Does this failure mode give the same
> guarantees (e.g., "strong" guaratee where applicable)
> that would be seen if an operation threw an exception?
>

Of course not. You can't replace exceptions with anything simple. What
we do provides a hook to let the application clean up sensibly. The
alternative is having to rewrite all of your code. And, of course,
you're still free to do that if it's what you prefer.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Pete Becker

Aug 5, 2002, 6:17:16 AM
e_lapid wrote:
>
> "Pete Becker" <peteb...@acm.org> wrote in message
> > Our library can be compiled without exceptions (which is how the CE
> > version is built). When an exception would be thrown the code calls a
> > user-replaceable handler, which by default aborts the program.
>
> Why did you choose this kind of solution? Wouldn't it be simpler to invent
> a non-standard extension allowing the programmer to test a failure flag?
>

Because it's more general. If you want a flag you put in a handler that
sets a flag.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Andrea Griffini

Aug 5, 2002, 6:17:53 AM
On 4 Aug 2002 07:09:34 -0400, James Dennett <jden...@acm.org> wrote:

>Which reminds me of a core language change which might
>help; the ability to disable conversion to void for a
>UDT would allow library designers to return error codes
>that could not be completely ignored.

I think that the problem with return codes is not that
the caller unintentionally ignores them... but that
handling them clutters the code and the caller doesn't
want to handle that condition.

Having a function that may or may not do what it
has been asked to doesn't solve the problem, but just
drops the burden on the caller; since callers are often
more than one, this complicates the code, especially
when there's nothing the caller can do to prevent
the problem or to fix it afterwards.

In many cases, in my opinion, it's better to have a
function that guarantees that upon return the job required
will have been performed... even if this means that
not being able to comply with the request will terminate
the program *before* returning.
The C++ solution of throwing an exception is better, of course,
but the next best solution, in my opinion, is to have two
versions: one that just stops the program when not able
to fulfill the request and one that uses some form of
return code to indicate the failure.

I think that in most cases programmers who are now ignoring
return codes would be happy to call the "hardened" version
that guarantees success and stops the program if it can't
manage that. This is what happens with the new operator
and its nothrow incarnation. Even in programs that never
use a catch(...), it makes sense to use both forms of
the new operator.
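
The two forms mentioned above, side by side (a small illustrative fragment):

    #include <new>

    void example()
    {
        int *a = new int[100];                  // reports failure by throwing std::bad_alloc
        int *b = new (std::nothrow) int[100];   // reports failure by returning a null pointer
        if (b == 0) {
            // handle the failure locally
        }
        delete [] a;
        delete [] b;
    }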

When working in C, one of the first functions I write is
my replacement for malloc, which aborts the program
in case of failure: I demand the memory, and if it's not
there then there's no point in continuing.
Handling an out-of-memory condition in every function
can make the code truly horrible to read and maintain.
Also, in my personal experience, the probability that in
complex cases all the partial undos and deallocations
are done properly is very low anyway.
Of course there are cases in which failure to satisfy
a memory request doesn't mean the system is unusable
(for example after requesting *huge* amounts of memory,
i.e. when I can expect a failure not to be an
exceptional event), and in that case a return-code
based malloc is IMO the way to go for C.
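
A minimal sketch of that kind of malloc replacement (the name xmalloc is a
common convention, not from this post):

    #include <cstdio>
    #include <cstdlib>

    // Either the request is satisfied or the program stops immediately.
    void* xmalloc(std::size_t size)
    {
        void* p = std::malloc(size);
        if (p == 0) {
            std::fputs("xmalloc: out of memory\n", stderr);
            std::abort();
        }
        return p;
    }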

Providing only return codes and forcing callers to
use funny syntaxes to be able to ignore them is in
my opinion a step in the wrong direction.

I think it's better to have

printf(...);

than

if (!printf(...).succeeded())
    die("Hmmmm... we're in big trouble");

Also the risk is that you will get just

printf(...).succeeded();

because we programmers are lazy :)

One nice thing about C++ exceptions is that I can
write clean code ignoring exceptional events and
counting on the program to quit in such cases (not
completely true, it must be considered that stack
unwinding may be done and so care should be placed
on what is done in destructors).
If I want to intercept and swallow exceptions, then
I can do so by using "catch".

Excluding exceptions, I would like to work toward
the same usability... and this means replacing
exceptions with aborts, not with return codes.

Just my 0.02
Andrea

James Kanze

Aug 5, 2002, 3:32:33 PM
Pete Becker <peteb...@acm.org> wrote in message
news:<3D4A6877...@acm.org>...
> Gil Shafriri wrote:

> > In my current development group C++ exceptions are not being
> > used. This is for good reason: we don't have support for C++
> > exceptions in the WinCE development tools. This is a fact that I
> > can't change. On the other hand, we need generic containers such as
> > hash, map, and vector. The STL containers throw exceptions, so we
> > can't use them. I wonder if someone knows about STL-like containers,
> > even with limited functionality, that return error codes instead
> > of throwing exceptions.

> Our library can be compiled without exceptions (which is how the CE
> version is built). When an exception would be thrown the code calls a
> user-replaceable handler, which by default aborts the program.

Any reason why this wasn't adopted in the standard? It seems like the
obvious solution, far more flexible than what we have. I would have no
problem with the default being to throw an exception, rather than to
abort. But I do think it's the sort of thing a user should be able to
choose. At present, it is practically impossible to upgrade compilers
with existing code: new throws an exception rather than returning null,
the existing code is not exception safe, and exception safety isn't
something you can easily hack in after the fact; like many other things
(const correctness, thread safety, etc.), it takes some advance
planning.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung


P.J. Plauger

Aug 5, 2002, 10:28:35 PM
"James Kanze" <ka...@gabi-soft.de> wrote in message
news:d6651fb6.02080...@posting.google.com...

> Pete Becker <peteb...@acm.org> wrote in message
> news:<3D4A6877...@acm.org>...
>> Gil Shafriri wrote:
>
>>> In my current development group C++ exceptions are not being
>>> used. This is for good reason: we don't have support for C++
>>> exceptions in the WinCE development tools. This is a fact that I
>>> can't change. On the other hand, we need generic containers such as
>>> hash, map, and vector. The STL containers throw exceptions, so we
>>> can't use them. I wonder if someone knows about STL-like containers,
>>> even with limited functionality, that return error codes instead
>>> of throwing exceptions.
>
>> Our library can be compiled without exceptions (which is how the CE
>> version is built). When an exception would be thrown the code calls a
>> user-replaceable handler, which by default aborts the program.
>
> Any reason why this wasn't adopted in the standard? It seems like the
> obvious solution, far more flexible than what we have.

I tried, on more than one occasion. But every time I got a mechanism
like this accepted at one meeting, the committee voted to remove it at
the next meeting.

> I would have no problem with the default being to throw an exception,
> rather than to abort. But I do think it's the sort of thing a user
> should be able to choose.

I agree, and so do many of our customers.

> At
> present, it is practically impossible to upgrade compilers with
> existing
> code, because new throws an exception, rather than returning null, the
> code is not exception safe, and exception safety isn't something
> you can
> easily hack in after the fact; like many other things (const
> correctness, thread safety, etc.), it takes some advance planning.

Indeed.

P.J. Plauger
Dinkumware, Ltd.
http://www.dinkumware.com

Pete Becker

Aug 5, 2002, 10:29:01 PM
James Kanze wrote:
>
> Any reason why this wasn't adopted in the standard?

It's because exceptions are cool.

Seriously, lots of libraries had installable error handlers back in the
olden days. I suspect that exceptions were seen as a simpler solution,
but I don't recall any discussion on the matter.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Gennaro Prota

Aug 13, 2002, 6:01:07 PM

Sorry. I missed your reply (the traffic here on comp.lang.c++.moderated
is very high!)


On 8 Aug 2002 17:51:44 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

>Gennaro Prota <gennar...@yahoo.com> wrote
>> I take the opportunity to ask you a question: I have recently been
>> told (really today) by one of the boost's developers that the use of
>> exceptions to check preconditions is now considered a bad practice. It
>> was in reply to a question concerning range-checking like the one
>> performed by vector<>::at(). This would instantly make the standard
>> library ill-designed in this regard.
>
>It all depends on what you mean by a "precondition". A function's
>behavior is well defined when the precondition is met, and undefined
>otherwise. Therefore, at(i) does not have a precondition. Its behavior
>is well defined for any value of i.
>

Yes. If we adopt a suitable definition, we can say with you that, as a
design choice, at() has no preconditions and throwing an exception is
part of its behavior. But, first of all, this is not how the standard
uses the term 'precondition' (see 17.3.1.3/3, 17.4.3.8 and, for
instance, 21.3.4/2).

Apart from that, what I've been told is that exceptions are simply bad
in such cases (to use your terminology: that at() should have
preconditions) because passing an index out of range is a programming
error. But this approach is pervasive in the library, and there are
special exception classes (std::logic_error and its derivatives
std::invalid_argument, std::out_of_range, etc.) to deal with such
errors. Also, 19.1/2 explicitly states that, in theory, these errors
are preventable, which sounds like "we are aware that we could do
without exceptions here, but we chose to do otherwise". So what? Is
this "mistake" a recent discovery of the C++ community? Is it an old
discovery, and 19.1/2 was written after realizing it, just to say that
the design of the library could have been better? Or is it not a
recognized mistake (the person who talked with me thought so, but that
is an opinion)?


Genny.

Peter Dimov

Aug 14, 2002, 5:13:52 PM
Gennaro Prota <gennar...@yahoo.com> wrote in message
news:<6qmgluoq4dt0h28bo...@4ax.com>...

> Sorry. I missed your reply (the traffic here on comp.lang.c++.moderated
> is very high!)
>
> On 8 Aug 2002 17:51:44 -0400, pdi...@mmltd.net (Peter Dimov) wrote:
>
> >Gennaro Prota <gennar...@yahoo.com> wrote
> >> I take the opportunity to ask you a question: I have recently been
> >> told (really today) by one of the boost's developers that the use of
> >> exceptions to check preconditions is now considered a bad practice. It
> >> was in reply to a question concerning range-checking like the one
> >> performed by vector<>::at(). This would instantly make the standard
> >> library ill-designed in this regard.
> >
> >It all depends on what you mean by a "precondition". A function's
> >behavior is well defined when the precondition is met, and undefined
> >otherwise. Therefore, at(i) does not have a precondition. Its behavior
> >is well defined for any value of i.
>
> Yes. If we adopt a suitable definition, we can say with you that, as a
> design choice, at() has no preconditions and throwing an exception is
> part of its behavior. But, first of all, this is not how the standard
> uses the term 'precondition' (see 17.3.1.3/3, 17.4.3.8 and, for
> instance, 21.3.4/2)

True. In my opinion, the standard is wrong. The language is probably
influenced by Bertrand Meyer's DbC theory that contract violations should
throw exceptions.

This is pretty weak contract enforcement, since the user can depend on
particular, well defined, behavior on contract violations, and
therefore, can deliberately violate contracts to take advantage of that
well defined behavior. In effect, there is a meta-contract present that
silently modifies (and undermines) the original contract.

at() has no precondition, even though the standard says there is one. It
is never a programming error to take advantage of a function's well
defined behavior. That's what programming is all about.

David Abrahams

Aug 15, 2002, 7:23:27 AM
pdi...@mmltd.net (Peter Dimov) wrote in message news:<7dc3b1ea.02081...@posting.google.com>...

> Gennaro Prota <gennar...@yahoo.com> wrote in message
> news:<6qmgluoq4dt0h28bo...@4ax.com>...

> > Yes. If we adopt a suitable definition, we can say with you that, as a
> > design choice, at() has no preconditions and throwing an exception is
> > part of its behavior. But, first of all, this is not how the standard
> > uses the term 'precondition' (see 17.3.1.3/3, 17.4.3.8 and, for
> > instance, 21.3.4/2)
>
> True. In my opinion, the standard is wrong. The language is probably
> influenced by Bertrand Meyer's DbC theory that contract violations should
> throw exceptions.
>
> This is pretty weak contract enforcement, since the user can depend on
> particular, well defined, behavior on contract violations, and
> therefore, can deliberately violate contracts to take advantage of that
> well defined behavior. In effect, there is a meta-contract present that
> silently modifies (and undermines) the original contract.
>
> at() has no precondition, even though the standard says there is one. It
> is never a programming error to take advantage of a function's well
> defined behavior. That's what programming is all about.

I realize that it's not adding all that much to the thread, and the
moderators may reject this for overquoting, but this is beautifully
stated. Peter's said exactly what I would have here, only better.

-Dave

Gennaro Prota

Aug 15, 2002, 7:03:10 PM

Peter Dimov wrote:
>> at() has no precondition, even though the standard says there is one.
>> It is never a programming error to take advantage of a function's
>> well defined behavior. That's what programming is all about.

So? Are you claiming that the fact that at() may throw exceptions can
hide programming errors? If an exception is not what I wanted, I do see
that I made an error, and I'm not 'taking advantage' of the mechanism.
OTOH, not having exceptions doesn't mean that I automatically catch all
the errors, for instance because I can pass an index which is "in range"
but still wrong.


David Abrahams wrote:
>I realize that it's not adding all that much to the thread, and the
>moderators may reject this for overquoting, but this is beautifully
>stated. Peter's said exactly what I would have here, only better.

And nobody questioned the terminology issue, at least not me (if you all
are happy calling a pseudo-precondition what the standard calls a
precondition, I will not protest: choose a term and I will adapt :-)).


Genny.

Andrea Griffini

Aug 15, 2002, 7:03:30 PM
On 14 Aug 2002 17:13:52 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

>True. In my opinion, the standard is wrong. The language is probably
>influenced by Bertrand Meyer's DbC theory that contract violations
>should throw exceptions.
>
>This is pretty weak contract enforcement, since the user can depend on
>particular, well defined, behavior on contract violations, and
>therefore, can deliberately violate contracts to take advantage of that
>well defined behavior.

I don't think this is the view expressed in OOSC.
Violating a precondition means an error, and it's
not something a procedure should do intentionally.
Note also that compiling precondition checks is, in
Eiffel, a build option; code relying on them is
intrinsically broken. Sure, the general suggestion is
to keep precondition checks always on, excluding cases
in which you have measured a very specific performance
problem... but that's another story.

Intentionally breaking preconditions is not too
different from installing a signal handler and
deliberately dereferencing a NULL pointer to
get that handler called.

Andrea

Peter Dimov

Aug 16, 2002, 3:11:33 PM
Gennaro Prota <gennar...@yahoo.com> wrote in message
news:<23cnlukrupavi1hom...@4ax.com>...

> Peter Dimov wrote:
>>> at() has no precondition, even though the standard says there is one.
>
>>> It is never a programming error to take advantage of a function's
>>> well defined behavior. That's what programming is all about.
>
> So? Are you claiming that the fact that at() may throw exceptions can
> hide programming errors?

I can't see how you would come to this conclusion, sorry.

> If an exception is not what I wanted I do see
> that I made an error, and I'm not 'taking advantage' of the mechanism.

If an exception is not what you wanted, you shouldn't be using at().
Use at() when you need the service offered by at(), which is to throw
an exception when the index is out of range. For example, imagine that
you are writing an interpreter for a script language and you are
evaluating the expression "a[n]", where a is represented as a
std::vector. a.at(n) is just what the doctor ordered. Out of range
errors in the user script don't mean that your interpreter is flawed.
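
A hypothetical fragment of such an interpreter, just to make the example
concrete (the function name is invented):

    #include <stdexcept>
    #include <vector>

    // An out-of-range index here is the script's fault, not the
    // interpreter's, so at() plus a catch is exactly the service we want.
    int eval_index(const std::vector<int>& a, std::vector<int>::size_type n)
    {
        try {
            return a.at(n);
        }
        catch (const std::out_of_range&) {
            throw std::runtime_error("script error: index out of range");
        }
    }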

Gennaro Prota

Aug 17, 2002, 6:07:35 AM
On 16 Aug 2002 15:11:33 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

>Gennaro Prota <gennar...@yahoo.com> wrote in message
>news:<23cnlukrupavi1hom...@4ax.com>...
>> Peter Dimov wrote:
>>>> at() has no precondition, even though the standard says there is one.
>>
>>>> It is never a programming error to take advantage of a function's
>>>> well defined behavior. That's what programming is all about.
>>
>> So? Are you claiming that the fact that at() may throw exceptions can
>> hide programming errors?
>
>I can't see how you would come to this conclusion, sorry.

Not a conclusion. I was asking, because I don't see what your point is
when saying: "It is never a programming error to take advantage of a
function's well defined behavior". At this point, I think we should
understand each other on the meaning we give to the word "error":
since 'taking advantage of' implies an intent (i.e. something not
accidental, like an error), if we agree that a program has an 'error'
when it doesn't do what you want it to do, then the sentence above
seems a tautology to me. That's why I don't see your point, especially
considering the original question. What's your position about it?


>> If an exception is not what I wanted I do see
>> that I made an error, and I'm not 'taking advantage' of the mechanism.
>
>If an exception is not what you wanted, you shouldn't be using at().

Maybe I was not clear enough. What I meant is that for a particular
call in my program I expected the index to be 'in range' but it
wasn't. That is an error.


>Use at() when you need the service offered by at(), which is to throw
>an exception when the index is out of range.

But it's not a matter of *need*. I can do with or without exceptions.
I just don't see why using exceptions to avoid error checking code on
function return is a good idea, whereas doing the same for function
arguments is bad.

> For example, imagine that
>you are writing an interpreter for a script language and you are
>evaluating the expression "a[n]", where a is represented as a
>std::vector. a.at(n) is just what the doctor ordered. Out of range
>errors in the user script don't mean that your interpreter is flawed.

And this example shows what I've said above: passing an n out of range
in this circumstance is not a programming error because I've
considered that possibility and decided that an exception is what I
wanted for out of range indexes. This leaves my original question: why
should range checking never involve exceptions?


Genny.

Bob Bell

Aug 17, 2002, 6:08:15 AM
Gennaro Prota <gennar...@yahoo.com> wrote in message news:<23cnlukrupavi1hom...@4ax.com>...

> Peter Dimov wrote:
> >> at() has no precondition, even though the standard says there is one.
>
> >> It is never a programming error to take advantage of a function's
> >> well defined behavior. That's what programming is all about.
>
> So? Are you claiming that the fact that at() may throw exceptions can
> hide programming errors?

Yes:

#include <vector>

void F(std::vector<int>& iVec)
{
    iVec.at(10);    // index 10 is out of range: throws std::out_of_range
}

void F(void)
{
    std::vector<int> vec;
    F(vec);
}

void G(void) throw()
{
    try {
        // "normal" way to perform G's operation

        // ...
        F();
        // ...
    }
    catch (...) {
        // "fallback" way, perhaps slower, to perform G's operation
    }
}

G tries two ways to perform its work. The first way, in the try
block, uses system resources that may not be available. If anything
goes wrong and an exception is thrown, it will fall into the catch
block and try another way, one which doesn't need system resources and
can't fail. The catch(...) suppresses the exception thrown from
std::vector<>::at in F().

> OTOH, not having exceptions doesn't mean that I automatically catch all
> the errors, for instance because I can pass an index which is "in range"
> but still wrong.

If you pass an index that is in range for a vector, then as far as the
vector is concerned, it is not wrong, period. Given that we're talking
about the vector's preconditions, it doesn't make sense to say "an
index which is in range but still wrong."

The only way for the index to be wrong in this fashion is from the
point of view of a higher level entity. But that just means that the
higher level entity's preconditions were violated, and it is the
higher level entity's job to detect that with an assertion (not an
exception).

Bob

Peter Dimov

Aug 19, 2002, 7:26:18 PM
Gennaro Prota <gennar...@yahoo.com> wrote in message news:<r4sqluksdv8vv594s...@4ax.com>...

> On 16 Aug 2002 15:11:33 -0400, pdi...@mmltd.net (Peter Dimov) wrote:
>
> >Gennaro Prota <gennar...@yahoo.com> wrote in message
> >news:<23cnlukrupavi1hom...@4ax.com>...
> >> Peter Dimov wrote:
> >>>> at() has no precondition, even though the standard says there is one.
>
> >>>> It is never a programming error to take advantage of a function's
> >>>> well defined behavior. That's what programming is all about.
> >>
> >> So? Are you claiming that the fact that at() may throw exceptions can
> >> hide programming errors?
> >
> >I can't see how you would come to this conclusion, sorry.
>
> Not a conclusion. I was asking. Because I don't see what is your point
> when saying: "It is never a programming error to take advantage of a
> function's well defined behavior". At this point, I think we should
> understand each other on the meaning we give to the word "error":
> since 'taking advantage of' implies an intent (i.e. something not
> accidental, like an error) if we agree that a program has an 'error'
> when it doesn't do what you want it to do then the sentence above
> seems a tautology to me. That's why I don't see your point, especially
> considering the original question. What's your position about it?

A "programming error" in this context is violating a function
contract. Passing an index that is out of range to at() is not an
error. Passing the same index to operator[] is an error.

> >> If an exception is not what I wanted I do see
> >> that I made an error, and I'm not 'taking advantage' of the mechanism.
> >
> >If an exception is not what you wanted, you shouldn't be using at().
>
> Maybe I was not clear enough. What I meant is that for a particular
> call in my program I expected the index to be 'in range' but it
> wasn't. That is an error.

Again, if you are sure that the index must be in range, at() is not
the correct function. You should assert(index in range) and use
operator[].
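
To illustrate the distinction (a sketch with invented function names):

    #include <cassert>
    #include <vector>

    // An out-of-range index here is a bug in the caller: assert and use
    // operator[].
    int by_contract(const std::vector<int>& v, std::vector<int>::size_type i)
    {
        assert(i < v.size());
        return v[i];
    }

    // An out-of-range index here is an expected, recoverable condition:
    // at() gives it defined behavior.
    int checked(const std::vector<int>& v, std::vector<int>::size_type i)
    {
        return v.at(i);     // throws std::out_of_range
    }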

> >Use at() when you need the service offered by at(), which is to throw
> >an exception when the index is out of range.
>
> But it's not a matter of *need*. I can do with or without exceptions.
> I just don't see why using exceptions to avoid error checking code on
> function return is a good idea, whereas doing the same for function
> arguments is bad.

Imagine that you are writing an at() function. Are you allowed to
assert that the index is in range, to invoke the debugger, to prepare
an error log with a stack trace, and so on? No. You don't know whether
the out of range index is an error, or not. You must throw the
exception, and hope for the best.

On the other hand, within operator[] you are free to do any of the
above. An out of range index passed to operator[] is always an error.

> > For example, imagine that
> >you are writing an interpreter for a script language and you are
> >evaluating the expression "a[n]", where a is represented as a
> >std::vector. a.at(n) is just what the doctor ordered. Out of range
> >errors in the user script don't mean that your interpreter is flawed.
>
> And this example shows what I've said above: passing an n out of range
> in this circumstance is not a programming error because I've
> considered that possibility and decided that an exception is what I
> wanted for out of range indexes. This leaves my original question: why
> range checking should never involve exceptions?

No, your original question was about preconditions, not range checks.
Precondition violations are programming errors, by definition. Out of
range values may or may not be a programming error, depending on how
the function is specified. If you define the behavior of the function
in the case of out of range values (at()), you lose the ability to
distinguish errors from legitimate uses.

John Potter

Aug 20, 2002, 7:54:11 AM
On 19 Aug 2002 19:26:18 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

> A "programming error" in this context is violating a function
> contract. Passing an index that is out of range to at() is not an
> error. Passing the same index to operator[] is an error.

Ok, I understand that abstraction. So what? If operator[] checks for
that error, my program dies. I have coded algorithms where the check
in at() causes a 20+ multiplicative increase in execution time.

What are you saying? Is the standard defective because it allows
operator[] to range check? IMO, the standard is fine and any
implementation which range checks operator[] is defective or at least
non-marketable.

Is the standard defective because it provides a function at() which
does range check and throws rather than aborting? IMO that is a
reasonable thing to do for those who use debuggers rather than writing
correct code.

How does this theory apply to practical C++?

John

Peter Dimov

Aug 21, 2002, 6:32:45 AM
jpo...@falcon.lhup.edu (John Potter) wrote in message news:<3d61c9e2...@news.earthlink.net>...

> On 19 Aug 2002 19:26:18 -0400, pdi...@mmltd.net (Peter Dimov) wrote:
>
> > A "programming error" in this context is violating a function
> > contract. Passing an index that is out of range to at() is not an
> > error. Passing the same index to operator[] is an error.
>
> Ok, I understand that abstraction. So what? If operator[] checks for
> that error, my program dies. I have coded algorithms where the check
> in at() causes a 20+ multiplicative increase in execution time.

"So what?"

> What are you saying? Is the standard defective because it allows
> operator[] to range check? IMO, the standard is fine and any
> implementation which range checks operator[] is defective or at least
> non-marketable.

No, the standard is not defective because it allows operator[] to
range check. No, implementations that provide a checked mode as an
option (in debug builds, for instance) are not defective or at least
non-marketable, quite the opposite, in fact.

> Is the standard defective because it provides a function at() which
> does range check and throws rather than aborting? IMO that is a
> reasonable thing to do for those who use debuggers rather than writing
> correct code.

No, the standard is not defective because it provides at(). What is
defective is the Requires clause of at(), because at() has no
requirements.

BTW throwing an exception is not the best response for debugger fans
because it destroys the context on its way up.

> How does this theory apply to practical C++?

It works for me.

I'm obviously missing the point of your post.

James Kanze

Aug 21, 2002, 9:09:25 AM
jpo...@falcon.lhup.edu (John Potter) wrote in message
news:<3d61c9e2...@news.earthlink.net>...
> On 19 Aug 2002 19:26:18 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

> > A "programming error" in this context is violating a function
> > contract. Passing an index that is out of range to at() is not an
> > error. Passing the same index to operator[] is an error.

> Ok, I understand that abstraction. So what? If operator[] checks for
> that error, my program dies. I have coded algorithms where the check
> in at() causes a 20+ multiplicative increase in execution time.

Are you kidding? Correctly implemented, at() should take about twice
the time of operator[], no more.

> What are you saying? Is the standard defective because it allows
> operator[] to range check? IMO, the standard is fine and any
> implementation which range checks operator[] is defective or at least
> non-marketable.

What about the debugging version of the STLPort? I don't know whether
it checks for bounds in operator[], but it certainly checks for a lot of
other undefined behavior. At significant run-time cost.

And it is so "unmarketable" that other implementations (Dinkumware) have
announced that they intend to follow suit.

> Is the standard defective because it provides a function at() which
> does range check and throws rather than aborting? IMO that is a
> reasonable thing to do for those who use debuggers rather than writing
> correct code.

I'll admit that I don't quite see what you can do with an exception when
the code is manifestly wrong. My own implementations have always used
an assertion failure for such tests.

I do think that some sort of defined behavior is necessary, and that the
user must be able to "catch" this behavior in some way. At least in
embedded systems, you want to somehow signal the backup processor to
take over before dying. For "normal" applications (under Unix or
Windows), assert is the correct behavior in 99.9% of the cases.

Giving random, incorrect results, is never acceptable behavior for a
system.

--
James Kanze mailto:jka...@caicheuvreux.com

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung


John Potter

Aug 21, 2002, 12:16:00 PM
On 21 Aug 2002 06:32:45 -0400, pdi...@mmltd.net (Peter Dimov) wrote:

> No, the standard is not defective because it provides at(). What is
> defective is the Requires clause of at(), because at() has no
> requirements.

Ok, now I see. at() is fine. Sequences, deque, and vector are fine because
there is no Requires clause for at(). String is defective.

> I'm obviously missing the point of your post.

Easy, since I missed the point of yours. Your point is that precondition
violations should be asserted, not thrown. operator[] should assert if it
range checks, not throw. at() should throw because it has no precondition.
Do I have it now?

John

John Potter

Aug 21, 2002, 12:19:05 PM
On 21 Aug 2002 09:09:25 -0400, ka...@gabi-soft.de (James Kanze) wrote:

> jpo...@falcon.lhup.edu (John Potter) wrote in message
> news:<3d61c9e2...@news.earthlink.net>...

> > Ok, I understand that abstraction. So what? If operator[] checks

> > for that error, my program dies. I have coded algorithms where the
> > check in at() causes a 20+ multiplicative increase in execution
> > time.
>
> Are you kidding? Correctly implemented, at() should take about twice
> the time of operator[], no more.

It's been a while, but at the time the test plus throw or an assert
prevented inlining. A killer for a sorting algorithm. A test plus a call
to a function which did the throw or assert only about doubled the time.
That kept the normal path inlined.
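
A sketch of that technique, with invented names: the common path stays a
cheap inline test, and the throw lives in a separate out-of-line function.

    #include <cstddef>
    #include <stdexcept>

    // The rare path: out of line, so it doesn't bloat the caller.
    void throw_out_of_range()
    {
        throw std::out_of_range("index out of range");
    }

    // The common path: a test plus a call, which stays inlinable.
    inline int& checked_get(int* data, std::size_t size, std::size_t i)
    {
        if (i >= size)
            throw_out_of_range();
        return data[i];
    }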

John

Peter Dimov

Aug 22, 2002, 12:02:32 PM
jpo...@falcon.lhup.edu (John Potter) wrote in message news:<3d638f0c...@news.earthlink.net>...

> On 21 Aug 2002 06:32:45 -0400, pdi...@mmltd.net (Peter Dimov) wrote:
>
> > No, the standard is not defective because it provides at(). What is
> > defective is the Requires clause of at(), because at() has no
> > requirements.
>
> Ok, now I see. At is fine. Sequences, deque, vector are fine because
> there is no Requires clause for at. String is defective.
>
> > I'm obviously missing the point of your post.
>
> Easy since I missed the point of yours. Your point is that precondition
> violation should be asserted not thrown. [] should assert if it range
> checks not throw. At should throw because it has no precondition. Do I
> have it now?

Yes. It all started when Gennaro Prota pointed out in

http://groups.google.com/groups?selm=6qmgluoq4dt0h28bo9682droboq0j2p3o9%404ax.com

that string::at has a Requires clause, and that 17.4.3.8 specifically
allows such (defective IMO) Requires clauses.

Going a bit more theoretical, my point is that behavior is defined
when requirements are met, and undefined otherwise; it follows that
conversely, when behavior is always defined (at), requirements are
always met.

Daniel James

Aug 22, 2002, 12:04:56 PM
In article <7dc3b1ea.02082...@posting.google.com>, Peter Dimov wrote:
> BTW throwing an exception is not the best response for debugger
> fans because it destroys the context on its way up.

Good C++ debuggers can be set to break when an exception is thrown;
other debuggers should allow a breakpoint to be placed in the
constructor of the exception class, so that execution can be broken
before the unwinding commences.

Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with sonadata
to reply]

Peter Dimov

Aug 23, 2002, 4:26:57 PM
Daniel James <inte...@nospam.demon.co.uk> wrote in message
news:<VA.0000077...@nospam.demon.co.uk>...

> In article <7dc3b1ea.02082...@posting.google.com>, Peter
> Dimov wrote:
> > BTW throwing an exception is not the best response for debugger
> > fans because it destroys the context on its way up.
>
> Good C++ debuggers can be set to break when an exception is thrown,
> other debuggers should allow a breakpoint to be placed in the
> constructor of the exception class, so that execution can be broken
> before the unwinding commences.

True, and _really good_ C++ debuggers can be set to break only when an
exception that is of interest to you is being thrown. Breaking on
every throw can get annoying when you don't have access to the source
of the exception constructor.

But really, no matter how wonderful contemporary debugging
technologies are, it's often better to invoke the debugger explicitly
via assert (or dump core or equivalent if the assert fires in the
field.)

Bob Bell

Aug 25, 2002, 5:14:42 AM
pdi...@mmltd.net (Peter Dimov) wrote in message news:<7dc3b1ea.02082...@posting.google.com>...

> Daniel James <inte...@nospam.demon.co.uk> wrote in message
> news:<VA.0000077...@nospam.demon.co.uk>...
> > In article <7dc3b1ea.02082...@posting.google.com>, Peter
> > Dimov wrote:
> > > BTW throwing an exception is not the best response for debugger
> > > fans because it destroys the context on its way up.
> >
> > Good C++ debuggers can be set to break when an exception is thrown,
> > other debuggers should allow a breakpoint to be placed in the
> > constructor of the exception class, so that execution can be broken
> > before the unwinding commences.
>
> True, and _really good_ C++ debuggers can be set to break only when an
> exception that is of interest to you is being thrown. Breaking on
> every throw can get annoying when you don't have access to the source
> of the exception constructor.
>
> But really, no matter how wonderful contemporary debugging
> technologies are, it's often better to invoke the debugger explicitly
> via assert (or dump core or equivalent if the assert fires in the
> field.)

Agreed. Exception handling and debugging are two different things, and
don't overlap, IMHO.

Exceptions are used when you detect a condition in your program which
can be handled in a meaningful way. These conditions are anticipated
by the designer of the program, and it is correct behavior for the
program to throw the exception.

Bugs, on the other hand, are conditions which are unanticipated by the
designer. These conditions cannot be handled in a meaningful way -- if
they could, they would be part of the designed behavior of the system
and therefore not a bug.

Using exceptions for debugging blurs this distinction. It makes as
much sense to do this as it does to use other constructs, such as for
loops, for debugging purposes.

Sometimes I write for loops or other constructs to help me find bugs,
but I do so with the intention that once the bug is found, the
debugging code will be removed. If exceptions are used for debugging
purposes then they should be subject to the same guideline -- the code
is temporary, and once the bug is found, the exception handling code
is removed.

In any case, I wouldn't use exceptions to find bugs because it's
overkill -- once you detect the condition which would lead to a throw,
why not just stop in the debugger at that point? Assertions are better
for bug detection, for the following reasons:

1) Assertions represent conditions which must be true in order for the
program to continue (no coincidence that this is pretty much the
inverse of my definition of a bug).

2) They express these conditions directly in the code and serve some
documentation purpose.

3) They are removed in non-debug builds.

4) They allow a debugger to break at the point at which the condition
is detected, which lets me examine the context of the assertion
directly.

Bob

Francis Glassborow

Aug 25, 2002, 10:53:03 AM
In article <c87c1cfb.02082...@posting.google.com>, Bob Bell
<bel...@pacbell.net> writes

>Exceptions are used when you detect a condition in your program which
>can be handled in a meaningful way. These conditions are anticipated
>by the designer of the program, and it is correct behavior for the
>program to throw the exception.

I would modify that by saying that mostly exceptions are to handle
problems where the desired way of handling the problem is not known at
the point of detection. For example, I may be able to detect that an
input stream has failed in my overload of operator>>(), but I would not
usually know what the user of my code will want to do about it. In other
words, exceptions are to transfer a problem from the point of detection
to a potential point of solution.

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

Daniel James

Aug 25, 2002, 10:58:58 AM
In article <c87c1cfb.02082...@posting.google.com>,
Bob Bell wrote:
> Agreed. Exception handling and debugging are two different
> things, and don't overlap, IMHO.

They don't overlap ... but they interact.

> Exceptions are used when you detect a condition in your
> program which can be handled in a meaningful way. These
> conditions are anticipated by the designer of the program,
> and it is correct behavior for the program to throw the
> exception.
>
> Bugs, on the other hand, are conditions which are
> unanticipated by the designer. These conditions cannot be
> handled in a meaningful way -- if they could, they would
> be part of the designed behavior of the system and
> therefore not a bug.

I wrote a long response to your post, but deleted it because I
don't think we're fundamentally in disagreement. I had
basically two points, which actually boil down to the same
thing:

A design (or an implementation) may contain errors in its
exception handling code, and debugging is as useful a
technique for finding these bugs as it is for finding bugs in
other code.

There is one other point, though:

> 1) Assertions represent conditions which must be true in
> order for the program to continue ... [snip]
>
> 3) They are removed in non-debug builds.

[Not disagreeing] ... but some conditions depend on the input
*data* and can't reliably be tested for using assertions
because it's not possible to test an application with all
possible input data sets. One can try to identify boundary
conditions and to check for correct handling of extreme values
of input data, but for any but trivial applications it isn't
possible to be sure that all the boundary conditions have been
identified.

Runtime checking is clearly necessary in some cases to ensure
that an application doesn't just crash when faced with a
combination of data values that can't be handled. I'd say that
such a check comes within your definition of "conditions ...
which can be handled in a meaningful way" and which "are
anticipated by the designer of the program", and that an
exception is a perfectly acceptable mechanism for reporting
the failure of such a check.

Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with
sonadata to reply]

Sungbom Kim

Aug 28, 2002, 5:55:27 PM
Bob Bell wrote:
>
> Agreed. Exception handling and debugging are two different things, and
> don't overlap, IMHO.
>
> Exceptions are used when you detect a condition in your program which
> can be handled in a meaningful way. These conditions are anticipated
> by the designer of the program, and it is correct behavior for the
> program to throw the exception.
>
> Bugs, on the other hand, are conditions which are unanticipated by the
> designer. These conditions cannot be handled in a meaningful way -- if
> they could, they would be part of the designed behavior of the system
> and therefore not a bug.
>
> Using exceptions for debugging blurs this distinction. It makes as
> much sense to do this as it does to use other constructs, such as for
> loops, for debugging purposes.

I have found in my experience that exceptions and assertions
are not clearly distinguished all the time.

Just think of checking preconditions; which would you advocate for it?
It's a natural usage of exceptions as is clearly seen in constructors
or functions that return values other than error codes, but it can
also be regarded as a part of the program logic and thus can also be
handled with assertions.

In large projects, I tend to use assertions for routines that are
relatively application-specific and strongly coupled, and
exceptions for more general-purpose and independent ones.

--
Sungbom Kim <musi...@bawi.org>

Francis Glassborow

Aug 29, 2002, 1:14:29 PM
In article <3D6CDD28...@bawi.org>, Sungbom Kim <musi...@bawi.org>
writes

>In large projects, I tend to use assertions for routines that are
>relatively application-specific and strongly coupled, and
>exceptions for more general-purpose and independent ones.

Surely you use assert for cases where you are absolutely certain that
the precondition can only be violated by bad program logic and not as a
result of bad data. Also note that they should only be used in a context
where an immediate program abort is acceptable.

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation


Bob Bell

Aug 29, 2002, 5:44:01 PM
Sungbom Kim <musi...@bawi.org> wrote in message news:<3D6CDD28...@bawi.org>...

> I have found in my experience that exceptions and assertions
> are not clearly distinguished all the time.
>
> Just think of checking preconditions; which would you advocate for it?
> It's a natural usage of exceptions as is clearly seen in constructors
> or functions that return values other than error codes, but it can
> also be regarded as a part of the program logic and thus can also be
> handled with assertions.
>
> In large projects, I tend to use assertions for routines that are
> relatively application-specific and strongly coupled, and
> exceptions for more general-purpose and independent ones.

This is the way I think about it (forgive me for being somewhat
elementary, but I want to define my terms).

The conditions that hold true once a function has completed are called
its postconditions. If for any reason the function fails in such a way
that the postconditions are not true, nothing meaningful can be said
about the state of the system (if you could say anything meaningful,
it would by definition be part of the postconditions). Therefore, if
the postconditions are not true, you have undefined behavior. Either
the postconditions hold or the system is in an undefined state.

There are only two reasons the postconditions would ever fail to be
true for a function:

-- there is a bug in the function (or a function called by it)
-- the function has no bugs, but the caller failed to establish the
proper preconditions (in other words, there is a bug in the caller, or
its caller etc.)

Note that every bug in the system will fall into one of these two
categories. In other words, if the postconditions fail, there is a
bug.

The preconditions are the conditions a caller must establish in order
to call a function. If the preconditions are not established, then the
function cannot make any guarantees and we again have undefined
behavior.

Now, suppose that a function F could detect that a given precondition
X was not met, and threw an exception as a result. This means that the
postconditions of F are now expanded to include "if (!X), did nothing
and threw an exception." In other words, it is now part of the defined
behavior of F to respond when X is not true.

But if preconditions, when not met, lead to undefined behavior, then X
is not a precondition for F anymore, since if X is not met we still
have well-defined behavior. In other words, F now has well defined
behavior whether X is true or not, and it is no longer necessary (from
a buggy/correct point of view) for a caller to establish X before
calling F.

Conclusion: because exception-throwing doesn't represent a failure of
postconditions, functions throwing exceptions don't detect bugs.

Assertions, on the other hand, are perfect for detecting bugs, since
they stop the program in its tracks whenever the asserted condition is
not true. Also, assertions never detect conditions (like X) that a
function can meaningfully respond to, because an assertion stops the
system from responding at all (by breaking in the debugger).
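
A minimal sketch of the distinction (the names and the "average"
example are only for illustration):

#include <cassert>
#include <stdexcept>
#include <vector>

// Version 1: a non-empty vector is a precondition. Violating it is a
// bug in the caller; the assert documents and traps that bug, and the
// behaviour on an empty vector is simply undefined.
double average_v1(const std::vector<double>& v)
{
    assert(!v.empty());
    double sum = 0;
    for (std::vector<double>::size_type i = 0; i != v.size(); ++i)
        sum += v[i];
    return sum / v.size();
}

// Version 2: an empty vector is part of the defined behaviour. Because
// the function checks and throws, "non-empty" is no longer a
// precondition, and a caller may legitimately rely on the exception.
double average_v2(const std::vector<double>& v)
{
    if (v.empty())
        throw std::invalid_argument("average: empty vector");
    double sum = 0;
    for (std::vector<double>::size_type i = 0; i != v.size(); ++i)
        sum += v[i];
    return sum / v.size();
}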

Bob Bell

Matthew Collett

unread,
Aug 30, 2002, 6:11:41 AM8/30/02
to
In article <c87c1cfb.0208...@posting.google.com>,
bel...@pacbell.net (Bob Bell) wrote:

> There are only two reasons the postconditions would ever fail to be
> true for a function:
>
> -- there is a bug in the function (or a function called by it)
> -- the function has no bugs, but the caller failed to establish the
> proper preconditions (in other words, there is a bug in the caller, or
> its caller etc.)
>

> Bob Bell

What about the case where the function requires some external
resource(s) whose availability can only be determined at runtime?

Best wishes,
Matthew Collett

--
The word "reality" is generally used with the intention of evoking sentiment.
-- Arthur Eddington

Andrea Griffini

unread,
Aug 30, 2002, 6:30:34 AM8/30/02
to
On 29 Aug 2002 13:14:29 -0400, Francis Glassborow
<francis.g...@ntlworld.com> wrote:

>Also note that they should only be used in a context
>where an immediate program abort is acceptable.

Instead of that view can't one consider the other side
of the coin? I mean that exceptions should be used
only when it's known for sure that executing further
code in a logically crippled and untrustable environment
is not going to do any more damage or when it's known
for sure that the environment is safe and coherent.

I'm not sure if considering preconditions to be
the perfect safety net is a good idea. They're just test
cases and not the definition of correctness: if
any condition is not met then you know there is an
error; if all conditions are met you can't be sure
there is no error.
Pretending to continue after we measured something
impossible happening somewhat implies we think that
everything before that point was right, but in my
opinion that is rarely the case.

We know from experience that often the point in
which a bad memory access is performed is not the
actually wrong piece of code, but just a victim
of an error that happened before and in a
completely different context.
That's why I think that intercepting seg fault
and continuing is (if we exclude some very specific
cases) a pretty questionable approach: it's
not software that is more robust unless your definition
of robust is just "harder to stop" and unrelated to
useful work being actually done.

A precondition is in my opinion not too different
from a bad memory access. It's sure done at an
higher logical level, but it's just an indicator
that something is going bad in an unexpected way.

Pretending we know what is happening, after something
that shouldn't have happened is dancing in front
of us, seems to me somewhat silly.

Andrea

Francis Glassborow

unread,
Aug 30, 2002, 12:27:48 PM8/30/02
to
In article <3d6e7ab9...@news.tin.it>, Andrea Griffini
<agr...@tin.it> writes

>On 29 Aug 2002 13:14:29 -0400, Francis Glassborow
><francis.g...@ntlworld.com> wrote:
>
> >Also note that they should only be used in a context
> >where an immediate program abort is acceptable.
>
>Instead of that view can't one consider the other side
>of the medal ? I mean that exceptions should be used
>only when it's known for sure that executing further
>code in a logically crippled and untrustable environment
>is not going to do any more damage or when it's known
>for sure that the environment is safe and coherent.

For me, exceptions are largely tools for use in (potentially) library
code. I do not believe that it is a legitimate task for a library
routine to close down a program. That decision is one for the user
(application programmer). Note that there is an entire class of programs
that must never stop except under the control of a human being. Safety
critical programs must always fail safe, abort is hardly ever a correct
solution for such programs. Of course they should not just continue
running as if there had not been a problem but 'abort' is not a solution
for such cases.

However note that there are a range of tools to deal with error
conditions and a good programmer knows to select one that is
appropriate. As we all know, sometimes the only answer is to hit the big
red switch, but we should have tried everything else first. (Well
actually I can remember a bug in a word processor where my only sane
answer was to switch off because by doing so the program did not discard
the temporary back up files, but that was a result of bad programming
rather than good)

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Sungbom Kim

unread,
Aug 30, 2002, 10:32:46 PM8/30/02
to
Sungbom Kim wrote:
>
> I have found in my experience that exceptions and assertions
> are not clearly distinguished all the time.
>
> Just think of checking preconditions; which would you advocate for it?
> It's a natural usage of exceptions as is clearly seen in constructors
> or functions that return values other than error codes, but it can
> also be regarded as a part of the program logic and thus can also be
> handled with assertions.
>
> In large projects, I tend to use assertions for routines that are
> relatively application-specific and strongly coupled, and
> exceptions for more general-purpose and independent ones.

Thanks for the valuable opinions of those who replied to that article
of mine, and let me elaborate on this a little bit.

It is clear when the choice of exceptions is a must;
failure to acquire required resources is an obvious example.

It is, however, not very clear in other cases which one to choose;
let's take as an example a function which takes year, month, day
as its parameter. If an invalid month or day was given, should the
function throw an exception or give an assertion failure? I cannot say
which one it should choose in this case, unless it's a constructor.

int day_of_week(int year, int month, int day)
{
    assert(month >= 0 && month < 12 && day >= 1 && day <= 31);
    // OR
    if (!(month >= 0 && month < 12 && day >= 1 && day <= 31))
        throw std::domain_error("invalid date");

    // further processing
}

Please enlighten me on this matter.

Bob Bell

unread,
Aug 31, 2002, 7:36:26 AM8/31/02
to
Matthew Collett <m.co...@auckland.ac.nz> wrote in message news:<m.collett-A0248...@lust.ihug.co.nz>...

> In article <c87c1cfb.0208...@posting.google.com>,
> bel...@pacbell.net (Bob Bell) wrote:
>
> > There are only two reasons the postconditions would ever fail to be
> > true for a function:
> >
> > -- there is a bug in the function (or a function called by it)
> > -- the function has no bugs, but the caller failed to establish the
> > proper preconditions (in other words, there is a bug in the caller, or
> > its caller etc.)
> >
> > Bob Bell
>
> What about the case where the function requires some external
> resource(s) whose availability can only be determined at runtime?

What about it? Are you saying that it is a precondition that the
function must be able to acquire the resource, and yet the acquisition
can fail? That would be an error in the specification of the function,
if you ask me.

Remember, if the preconditions of a function fail, then the function
cannot guarantee anything; i.e., its behavior is undefined. If on the
other hand the function handles the acquisition failure gracefully
(e.g., by throwing an exception, returning an error code, or working
without the resource), then it is not a precondition that the resource
be available.

I am using the terms precondition and postcondition very strictly,
which may be different from what many are used to. However, I find
that such strict definitions allow me to reason about function
behavior even when the function is not able to provide its intended
service, as in the example above.

Bob

Alexander Terekhov

unread,
Aug 31, 2002, 7:57:54 AM8/31/02
to

Francis Glassborow wrote:
[...]

> Note that there is an entire class of programs
> that must never stop except under the control of a human being. Safety
> critical programs must always fail safe, abort is hardly ever a correct
> solution for such programs.

Precisely because "safety critical programs must always fail safe",
abort() is the only correct solution for such programs; unless you
are talking about hardware assisted "protection rings" where "wrong
input" is simply bounced back to callers so that they "can decide".

regards,
alexander.

Francis Glassborow

unread,
Aug 31, 2002, 2:10:16 PM8/31/02
to
In article <3D6FC873...@web.de>, Alexander Terekhov
<tere...@web.de> writes

>Francis Glassborow wrote:
>[...]
> > Note that there is an entire class of programs
> > that must never stop except under the control of a human being. Safety
> > critical programs must always fail safe, abort is hardly ever a correct
> > solution for such programs.
>
>Precisely because "safety critical programs must always fail safe",
>abort() is the only correct solution for such programs; unless you
>are talking about hardware assisted "protection rings" where "wrong
>input" is simply bounced back to callers so that they "can decide".

A nuclear power station whose software aborted when faced with a problem
would be a major disaster waiting to happen. At the very least
unforeseen events should require a safe close-down. Of course that would
probably involve non-software elements like ' if you do not receive
control signals execute crash close down.' abort should be the last line
of defence and sometimes even that requires some form of action first.
For example a modern high-performance aircraft relies on its software
for stability. If that software aborts the plane is uncontrollable so
there better be something else...

'abort' is just not a sane option in time critical circumstances.
Wasn't it a program abort that blew up Ariane V? And do nothing would
have been the better option:-(

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Daniel James

unread,
Sep 1, 2002, 6:39:29 AM9/1/02
to
In article <3D6F93D4...@bawi.org>, Sungbom Kim wrote:
> It is clear when the choice of exceptions is a must;
> failure to acquire required resources is an obvious
> example.
>
> It is, however, not very clear in other cases which one
> to choose

I think this is the wrong way to look at the question.

An assertion - in the sense of a C assert that prints a
message and aborts - will abort the program if its condition
is false. I don't think it is ever right to release as
production code any program that delivers such a poor user
experience.

Assertions, then, should be used only during debugging, and
should be removed from the code when building a version for
release (as is done automatically by the C assert when NDEBUG
is defined).

It follows that the proper use of assertions is to check,
during development, for conditions that indicate that the
program logic is incorrect. Once the program is debugged and
ready for release the programmer should be confident that the
program logic is correct, and therefore that none of the
assertions can be triggered, and the assertions can safely be
removed.
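
For example, with the standard C assert (a trivial sketch, nothing
application-specific):

#include <cassert>

int divide(int numerator, int denominator)
{
    // Debug build: aborts with a diagnostic if denominator == 0.
    // Release build (compiled with NDEBUG defined): the check vanishes
    // entirely, so the surrounding program logic must already
    // guarantee the condition.
    assert(denominator != 0);
    return numerator / denominator;
}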

Errors that can be caused by inappropriate data need to be
checked for and trapped at runtime. These errors may be
reported by setting a failure code or by throwing an
exception, and the program must handle the error condition as
gracefully as possible. Such handling may involve aborting
the current operation, and may even involve terminating the
program, but should be more graceful and controlled - and
provide a better user experience - than simply aborting.

The question of whether to use exceptions or return codes is
still somewhat contentious. The general advice is to use
return codes for errors that are "expected" and exceptions
for errors that are "exceptional". However it can be argued
that all errors are (or should be) "exceptional" and that
each case should be argued on its own merits. I tend to use
return codes for errors that can be handled (and perhaps
corrected) "fairly" locally, and exceptions for more serious
errors that must result in abandoning the current operation.

Of course, when writing library code you can't always know
whether the caller will regard a particular error as being
able to be handled or corrected locally. I tend to use
exceptions because they enable the code that handles the
"normal" case to be relatively uncluttered by error-handling code,
but that's a matter of personal style.

As an example of what I'm talking about here, I might specify
a function for checking a digital signature as:

bool VerifySignature( data, cert, signature )

(types of data, cert, and signature omitted as they're not
relevant). Such a function returns a bool (true if the sig
verifies OK, false if it doesn't) because the immediate
caller will probably have different actions to perform
depending on the result of the verification - those are both
"expected" results (even if you don't really expect the
verification to fail). I'd make the function throw an
exception if an actual error was detected during the
verification process - even for an error such as an expired
certificate, which is also something that is "expected" to
occur from time to time.

Another function

bool HasCertExpired( cert )

would not throw an exception if the cert had expired, but
would return a bool result as requested. It would throw if
the cert was in some other way invalid, or didn't contain an
expiry date.
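
In C++ the declarations might look roughly like this (Blob and
Certificate are only placeholders, since the real types aren't
relevant here):

struct Blob { /* raw bytes - placeholder */ };
struct Certificate { /* parsed certificate - placeholder */ };

// Returns true if the signature verifies, false if it does not;
// throws only if the verification process itself fails (malformed
// input, expired certificate, and so on).
bool VerifySignature(const Blob& data, const Certificate& cert,
                     const Blob& signature);

// Returns true if the certificate has expired, false otherwise;
// throws only if the certificate is invalid or carries no expiry date.
bool HasCertExpired(const Certificate& cert);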

Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with
sonadata to reply]

Niklas Matthies

unread,
Sep 1, 2002, 7:15:30 AM9/1/02
to
On 31 Aug 2002 07:36:26 -0400, Bob Bell <bel...@pacbell.net> wrote:
> Matthew Collett <m.co...@auckland.ac.nz> wrote in message news:<m.collett-A0248...@lust.ihug.co.nz>...
> > In article <c87c1cfb.0208...@posting.google.com>,
> > bel...@pacbell.net (Bob Bell) wrote:
> >
> > > There are only two reasons the postconditions would ever fail to be
> > > true for a function:
> > >
> > > -- there is a bug in the function (or a function called by it)
> > > -- the function has no bugs, but the caller failed to establish the
> > > proper preconditions (in other words, there is a bug in the caller, or
> > > its caller etc.)
> >
> > What about the case where the function requires some external
> > resource(s) whose availability can only be determined at runtime?
>
> What about it? Are you saying that it is a precondition that the
> function must be able to acquire the resource, and yet the acquisition
> can fail? That would be an error in the specification of the function,
> if you ask me.

More precisely: Preconditions are part of the contract between caller
and callee. The caller promises to the callee that the preconditions
are met. Hence preconditions need to be something that the caller can
actually ensure, otherwise the caller couldn't promise them to be met.

This provides a simple rule of what to make into preconditions: If it's
something the caller can be expected to ensure, make it a precondition
(and check it with asserts if you like). If it's something that neither the
caller nor the callee can reliably ensure (because it can't be
expected to be under either party's control, such as availability of
external resources, or would require duplicating much of the callee's
functionality on the caller's side), check the condition and throw an
exception (or perform some other appropriate runtime error return) in
case it is not met.
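
A small illustration of that rule (the function and file name are
invented):

#include <cassert>
#include <cstdio>
#include <stdexcept>
#include <string>

// The caller can ensure it passes a non-empty file name, so that is a
// precondition and is merely asserted. Whether the file can actually
// be opened is outside the caller's control, so that is checked at
// runtime and reported with an exception.
std::FILE* open_log(const std::string& filename)
{
    assert(!filename.empty());
    std::FILE* f = std::fopen(filename.c_str(), "r");
    if (!f)
        throw std::runtime_error("cannot open " + filename);
    return f;
}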

-- Niklas Matthies
--
To handle yourself, use your head. To handle others, use your heart.

Francis Glassborow

unread,
Sep 1, 2002, 12:21:28 PM9/1/02
to
In article <VA.000007a...@nospam.demon.co.uk>, Daniel James
<inte...@nospam.demon.co.uk> writes

>It follows that the proper use of assertions is to check,
>during development, for conditions that indicate that the
>program logic is incorrect. Once the program is debugged and
>ready for release the programmer should be confident that the
>program logic is correct, and therefore that none of the
>assertions can be triggered, and the assertions can safely be
>removed.

But if you are that confident you might as well leave them in. If you
are wrong, at least when you are dragged from your bed at 3am by an
irate customer 12 time zones away from you, you will have some clue as
to what it was you missed. Removing asserts is an optimisation which
should be applied if leaving them in damages performance.

--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Bob Bell

unread,
Sep 1, 2002, 6:03:21 PM9/1/02
to
Sungbom Kim <musi...@bawi.org> wrote in message news:<3D6F93D4...@bawi.org>...

> Sungbom Kim wrote:
> It is clear when the choice of exceptions is a must;
> failure to acquire required resources is an obvious example.
>
> It is, however, not very clear in other cases which one to choose;
> let's take as an example a function which takes year, month, day
> as its parameter. If an invalid month or day was given, should the
> function throw an exception or give an assertion failure? I cannot say
> which one it should choose in this case, unless it's a constructor.
>
> int day_of_week(int year, int month, int day)
> {
>     assert(month >= 0 && month < 12 && day >= 1 && day <= 31);
>     // OR
>     if (!(month >= 0 && month < 12 && day >= 1 && day <= 31))
>         throw std::domain_error("invalid date");
>
> // further processing
> }
>
> Please enlighten me on this matter.

It depends on whether or not you consider it a bug to pass invalid
arguments to day_of_week. If it's a bug, use an assertion. If it's not
a bug, use exceptions.

The main point of my post was that exceptions should not be used to
detect and handle bugs. Using exceptions to handle bugs is just
another form of adding "workaround" code -- you're not sure what
causes the bug, but if you add this extra code, you can suppress the
bug. The problems with this approach in general are

-- you don't know what causes the bug, so you can't be sure you've
completely suppressed it (i.e., it may pop up again somewhere else)
-- the additional logic added to suppress the bug will increase the
overall complexity of your program
-- if you do manage to fix the bug later on, it may be difficult to
determine that the extra code is not needed and then remove it, simply
because you've forgotten what it is for; the extra code then becomes a
permanent addition to your source that everyone is afraid to touch

It doesn't matter whether you use exceptions to detect and handle bugs
or error codes or switch statements or while loops; all methods suffer
from the above problems (and possibly others; these are just off the
top of my head).

Whether or not invalid arguments to day_of_week are a bug or an
exception is largely a question of specification and design. Are
callers supposed to call day_of_week with valid arguments only, or are
they allowed to call it with any arguments they want?

I cannot give you a single answer because it depends on the design of
the (sub)system in which day_of_week resides.

Bob

Dave Harris

unread,
Sep 2, 2002, 7:19:25 AM9/2/02
to
bel...@pacbell.net (Bob Bell) wrote (abridged):

> This is the way I think about it (forgive me for being somewhat
> elementary, but I want to define my terms).

Sure. I think your definitions disagree with my usual reference, which is
Meyer's "Object Oriented Software Construction".


> Therefore, if the postconditions are not true, you have
> undefined behavior. Either the postconditions hold or the system is
> in an undefined state.

Meyer says the pre- and post-conditions define a contract, and an
exception is what happens when the contract cannot be fulfilled. In this
view, failing to meet a post-condition is not undefined behaviour, or
necessarily a bug.


> Now, suppose that a function F could detect that a given precondition
> X was not met, and threw an exception as a result. This means that the
> postconditions of F are now expanded to include "if (!X), did nothing
> and threw an exception."

Again, in Meyer's "Design by Contract" view, the exceptions a routine
throws are by definition not part of its post-condition. They are what
happens when the post-condition isn't met.


> If the preconditions are not established, then the function cannot
> make any guarantess and we again have undefined behavior.

OK. Terminology aside, this is where the logic starts to get murky. Who's
to say that an exception can't be thrown under the umbrella of
undefined behaviour?

Always with exceptions it is good to think about who will catch them. For
an assert-style exception, really we are saying that no-one should catch
it. At least, no ordinary application code. It may be caught by the
operating system or language run-time, and also by application code which
is conceptually at a similar level. This kind of exception is, in my view,
a meta-programming facility.

So this kind of exception is not documented as part of routine behaviour
which object-level programmers should rely on, but it is documented as
something which meta-level programmers can rely on. This need for dual
views can make the subject confusing.


> Also, assertions never detect conditions (like X) that a
> function can meaningfully respond to, because an assertion stops the
> system from responding at all (by breaking in the debugger).

In some environments, breaking into the debugger, or halting the program,
may be inappropriate. Then an assertion system might be better off
throwing an exception, which can be caught and turned into an orderly
shut-down and/or restart.

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

Daniel James

unread,
Sep 2, 2002, 10:42:41 AM9/2/02
to
In article <trmO+TDI...@robinton.demon.co.uk>, Francis
Glassborow wrote:
> In article <VA.000007a...@nospam.demon.co.uk>, Daniel
> James <inte...@nospam.demon.co.uk> writes
> >Once the program is debugged and ready for release the
> >programmer should be confident that the program logic is
> >correct, and therefore that none of the assertions can be
> >triggered, and the assertions can safely be removed.
>
> But if you are that confident you might as well leave them in.
> if you are wrong, at least when you are dragged from your bed
> at 3am by an irate customer 12 time zones away from you, you
> will have some clue as to what it was you missed. Removing
> asserts is an optimisation which should be applied if leaving
> them in damages performance.

That's arguably true - certainly true in some cases. Some
assertions are just debugging checks to help find obvious coding
errors before the code is built into a testable application. This
sort of assertion is (should be) redundant by the time the
application is built and ready for testing. It's probably worth
leaving such an assertion enabled during testing, but best to
remove it for the final release (and final pre-release testing,
of course). Other assertions test for conditions that could
possibly arise at runtime even with correct code and need to be
checked for at all times - though I'd argue that these probably
shouldn't have been assertions in the first place.

What I might do in such a case, though, is to use an ASSERT macro
that gives output-and-abort behaviour in debug builds but which
outputs some diagnostics and then throws some sort of exception
in release builds. It depends on the type of application and the
type of test being made, of course, but the aim should be to
maximize the usefulness of the information collected when
something does go wrong while minimizing the amount of nastiness
in the "user experience".

Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with
sonadata to reply]

Alexander Terekhov

unread,
Sep 2, 2002, 10:49:25 AM9/2/02
to

< Forward Inline >

-------- Original Message --------
Message-ID: <3D7340B4...@web.de>
From: Alexander Terekhov <tere...@web.de>
Newsgroups: comp.programming.threads
Subject: Re: pthread program hanging at exit()
Date: Mon, 02 Sep 2002 12:43:00 +0200

< I'm going to post this to c.l.c++.mod too; in response to the following message >

--------
From: Francis Glassborow <francis.g...@ntlworld.com>
Newsgroups: comp.lang.c++.moderated
Subject: Re: Non throwing stl like containers
Date: 31 Aug 2002 14:10:16 -0400

--------

David Schwartz wrote:
>
> Stefan Kuehbauch wrote:
>
> > The exit() function should not be affected by - let's say - normal program
> > errors, what else could be done instead from inside a process to stop
> > itself safely and correctly ? Sending a kill -9 to itself ?

Almost correct. "program errors" should safely call abort().

[...]
> Unfortunately, programs that are to be bug-resistant have to prepare
> for the possibility that a clean shutdown may not be possible and must
> be ready to do an abortive shutdown if needed.

Dead accurate! They just shouldn't send "core-dump" stuff to their clients
as normal output... as Ariane-IV's Ada program did (running on Ariane->>V<<
rocket BTW; due to "can't happen" input and brain-dead failover/takeover
concept failing to just ignore non-critical stuff causing internal error and,
instead of sending garbage, simply continue doing something useful; discarding
"bad" work item after takeover/restart(*))... The client, "main computer" in
this case, simply could not "cope" with such silly garbage and just hit self-
destruct button; in end effect. ;-) ;-)

regards,
alexander.

(*) http://groups.google.com/groups?selm=3C6393BD.E56764E5%40web.de
(Subject: Re: Exceptions - do you use them?)

http://groups.google.com/groups?selm=c29b5e33.0202161915.3cd77b6f%40posting.google.com
(Subject: Re: Guru of the Week #82: Solution)

http://groups.google.com/groups?selm=c29b5e33.0202120915.2161db17%40posting.google.com
(Subject: Re: Java-like exception specifications)

"....
Well, if *I* will ever have to go to the intensive-care
(I have no personal spacecrafts or things like that with
live-support systems), I would really hope that in the
situation of unexpected/unknown failure, my life support
system will perform FAILOVER/RECYCLED-RETRY eventually/soon
getting rid of "bad" input (if error continues to show up
even after "full recycling") -- that is what I mean with
higher level recovery "system", PRESERVING "dump" infos
so that this problem could be easily debugged/solved
and removed in the next release."

--
"[Lower level details: the process was busy calculating pre-launch alignment
figures (irrelevant after lift-off); it was kept running (on an unwarranted
"just in case I'm needed" basis) for about 40 seconds into any flight;
Ariane V is designed to take up an early higher horizontal velocity than
Ariane IV, so the pre-launch process receives larger values to work with,
and the processing involves conversion from 64bit float to 16bit integer;
this conversion operation raised a run-time error exception on the large
value presented 36 seconds into the flight (because this particular
conversion operation was not dynamically checked - other conversions were
checked but it was "known" to be unnecessary to check this one).]

The designed response to such an exception is to shut down the active
primary inertial reference processor (because they have a live back-up
running in parallel) and transfer control to the back-up secondary
processor.

Unfortunately, the secondary contains identical software, and it had
already shut down (on the previous cycle, 72ms before) having hit the
exception fractionally earlier.

So the main on board computer had to stick with the primary inertial
reference processor, which by then was presenting its diagnostic bit
pattern. This bit pattern indicated full nozzle deflection required, so
that's what happened, which ripped off the boosters, which correctly
triggered a full auto-destruct."

Dave Harris

unread,
Sep 2, 2002, 5:11:06 PM9/2/02
to
musi...@bawi.org (Sungbom Kim) wrote (abridged):

> It is, however, not very clear in other cases which one to choose;
> let's take as an example a function which takes year, month, day
> as its parameter. If an invalid month or day was given, should the
> function throw an exception or give an assertion failure?

The way to decide is to think about who will catch it. If the exception
will be caught by the immediate caller, then return a status code. If by
some more distant caller, throw an exception. If no-one, then use an
assert-like macro (which itself might throw an exception instead of
calling abort()).

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Bob Bell

unread,
Sep 3, 2002, 6:22:05 AM9/3/02
to
Daniel James <inte...@nospam.demon.co.uk> wrote in message news:<VA.000007a...@nospam.demon.co.uk>...

> What I might do in such a case, though, is to use an ASSERT macro
> that gives output-and-abort beaviour in debug builds but which
> outputs some diagnostics and then throws some sort of exception
> in release builds.

I think this is a bad idea. This means that your code has very
significant differences between debug time and release time, in that
every ASSERT, which in debug mode simply shuts down the application
when false, now throws an exception, and adds a whole new control
path. Such control paths are dynamic, in the sense that where it goes
depends on the call stack (i.e., which functions called the function
with the ASSERT). If you have a lot of ASSERTs, that's a lot of new
control paths.

It also means that you will not be able to test any of these code
paths during debugging, because they won't exist: during debugging
ASSERT aborts the program rather than throwing. This is a
significant problem, because testing exception handling behavior is
complex enough as it is; now you're adding exceptions that only occur
during release builds.

Unless you can guarantee that at no place in the code is the ASSERT
exception caught, you could potentially have the system continue
execution when it should not, or continue in an unintended location.
But if you guarantee that the ASSERT exception is never caught (except
perhaps in main()), then what are you gaining over abort()? Not much,
IMHO.

If you are using ASSERT to trap conditions that should be true and
want such trapping to work in release and debug builds, then make
ASSERT work the same way in both builds, so that once the behavior of
ASSERT is tested, you know it will continue to work the way you
intended.

> It depends on the type of application and the
> type of test being made, of course, but the aim should be to
> maximize the usefulness of the information collected when
> someting does go wrong while minimizing the amount of nastiness
> in the "user experience".

This I agree with. :-)

Bob Bell

Bob Bell

unread,
Sep 3, 2002, 6:38:50 AM9/3/02
to
bran...@cix.co.uk (Dave Harris) wrote in message news:<memo.20020902...@brangdon.madasafish.com>...

> musi...@bawi.org (Sungbom Kim) wrote (abridged):
> > It is, however, not very clear in other cases which one to choose;
> > let's take as an example a function which takes year, month, day
> > as its parameter. If an invalid month or day was given, should the
> > function throw an exception or give an assertion failure?
>
> The way to decide is to think about who will catch it. If the exception
> will be caught by the immediate caller, then return a status code. If by
> some more distant caller, throw an exception. If no-one, then use an
> assert-like macro (which itself might throw an exception instead of
> calling abort()).

The problem with this strategy is how can a function know if its
immediate caller is going to be able to handle the problem? I don't
think it's enough for a function to decide that its caller "should"
handle the problem; the behavior of the caller is simply outside of
the scope (pun intended :-) of the function, and there's no way to
know if the caller will handle an error condition or pass it on to
its caller.

The other problem with this strategy is that it doesn't really answer
Sungbom's question. If it's a bug, it's a bug; no one should catch it.
If it's not a bug, then the function needs to "work" (where "work"
means either provide the day of the week, or indicate an error
condition by throwing an exception or some other means). So the
question comes back to whether or not it's a bug to pass invalid
arguments to day_of_week. Who's going to catch it only comes in to
play once you decide it's not a bug.

Bob

Dave Harris

unread,
Sep 4, 2002, 5:53:17 AM9/4/02
to
bel...@pacbell.net (Bob Bell) wrote (abridged):
> > The way to decide is to think about who will catch it. If the
> > exception will be caught by the immediate caller, then return a
> > status code. If by some more distant caller, throw an exception.
> > If no-one, then use an assert-like macro (which itself might throw
> > an exception instead of calling abort()).
>
> The problem with this strategy is how can a function know if its
> immediate caller is going to be able to handle the problem?

In order to write a function, you need to know something of what its
clients want it to do. If you don't know this, you probably shouldn't be
writing it yet.

As you said in an earlier message:

I cannot give you a single answer because it depends on the
design of the (sub)system in which day_of_week resides.

> The other problem with this strategy is that it doesn't really answer
> Sungbom's question.

It doesn't try to provide a concrete answer. It tries to provide a
strategy for arriving at an answer. In my experience, switching attention
from the thrower to the catcher helps clarify the role played by
exceptions in particular situations.


> If it's a bug, it's a bug; no one should catch it.

Yes. That's roughly the third case in my list (only I would put it the
other way about).


> Who's going to catch it only comes in to play once you decide it's not
> a bug.

No; who's going to catch it helps define whether it is a bug. If no-one is
going to handle the situation, the situation must not occur. If it occurs
anyway the program has a bug.


> So the question comes back to whether or not it's a bug to pass invalid
> arguments to day_of_week.

I think Sungbom Kim was asking for guidelines about whether to consider
the situation a bug. In other words, about how to arrive at a sensible
specification and design.

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Daniel James

unread,
Sep 4, 2002, 11:25:31 AM9/4/02
to
In article <c87c1cfb.02090...@posting.google.com>, Bob
Bell wrote:
> I think this is a bad idea. This means that your code has very
> significant differences between debug time and release time,
> in that every ASSERT, which in debug mode simply shuts down
> the application when false, now throws an exception, and adds
> a whole new control path.

Ah, yes, /mea culpa/ ...

> It also means that you will not be able to test any of these
> code paths during debugging, because they won't exist because
> during debugging ASSERT aborts the program rather than throwing.

.. I lied. What I actually have done is to use an ASSERT macro
whose DEBUG version breaks into the debugger and allows the user to
abort, thow, or continue. I agree that it would be bad to have
exception paths that cannot be debugged. I should have spelled that
out, but was concentrating on slightly different issues.

> Unless you can guarantee that at no place in the code is the
> ASSERT exception caught, ...

That would be rather strange. The whole *point* of throwing an
exception is to be able to catch it. I *hope* I can guarantee that
*every* exception gets caught (but perhaps not until main()).

> If you are using ASSERT to trap conditions that should be
> true and want such trapping to work in release and debug
> builds, then make ASSERT work the same way in both builds,
> so that once the behavior of ASSERT is tested, you know it
> will continue to work the way you intended.

Yes, as I said above, this is *almost* what I do. My earlier post
was an oversimplification.

Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with sonadata
to reply]

Bob Bell

unread,
Sep 5, 2002, 6:16:36 AM9/5/02
to
bran...@cix.co.uk (Dave Harris) wrote in message news:<memo.2002090...@brangdon.madasafish.com>...

> bel...@pacbell.net (Bob Bell) wrote (abridged):
> > > The way to decide is to think about who will catch it. If the
> > > exception will be caught by the immediate caller, then return a
> > > status code. If by some more distant caller, throw an exception.
> > > If no-one, then use an assert-like macro (which itself might throw
> > > an exception instead of calling abort()).
> >
> > The problem with this strategy is how can a function know if its
> > immediate caller is going to be able to handle the problem?
>
> In order to write a function, you need to know something of what its
> clients want it to do. If you don't know this, you probably shouldn't be
> writing it yet.
>
> As you said in an earlier message:
>
> I cannot give you a single answer because it depends on the
> design of the (sub)system in which day_of_week resides.

I see your point. Yet I do not accept that a function can dictate how
its results will be handled. There is no reason to believe, for
example, that day_of_week is called only by other functions in the
same subsystem. Perhaps it is a library function, and the client
programmer doesn't even work in the same company as the author of
day_of_week. The results (either the day of the week or some kind of
error indication) will be handled in whatever manner the caller sees
fit. The only choice in the matter that day_of_week has is how to
present those results.

From time to time, someone will ask when to throw vs. when to return
an error code. Inevitably, someone will present this guideline: "if
the immediate caller will handle the error, return an error code;
otherwise throw an exception." This has never made sense to me, as it
really just turns into another question: "how do you know when the
immediate caller will be able to handle the error?"

For example, let's say I have a function ReadData which reads raw
bytes from a file. If there are fewer bytes in the file than requested,
the function indicates an "end of file" error. Should it do this with
an exception or an error code? This really turns into "will the
immediate caller handle the error?"

How would you answer this question?

For myself, I don't find this guideline at all helpful. Consider:

There are two choices for indicating the error: returning an error
code, and throwing an exception. There are two places where the error
will be handled: in the immediate caller, or higher up the call stack.
When everything is "correct" (according to the guideline under
discussion), then we always have one of these two situations:

1) the function returns an error code and the error is handled in the
immediate caller
2) the function throws an exception which is handled farther up the
call stack

There are two other possibilities, which can also occur:

3) the function returns an error code and the error is handled farther
up the call stack
4) the function throws an exception which is handled in the immediate
caller

Situations 3 and 4 can occur if the function's author "guesses wrong"
about the way its error results will be used. Note also that 3 is the
case that exceptions were largely invented to avoid, because passing
error codes up the call stack is tedious, error prone, etc. Note also
that 4 does not have any such disadvantages, in the sense that if you
can correctly write code for case 2, then it follows that you can
correctly write code for case 4.

So here's my next problem with the guideline: if you "guess wrong"
about the way the caller will handle an error, you may end up passing
error codes up the call stack. Speaking from personal experience, this
has always been the case whenever I try to use functions that return
error codes. Consider that sooner or later some function that returns
an error code will call a function that returns an error code; as soon
as that happens, you're propagating error codes.
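
To make the propagation problem concrete (ReadData, the types and the
callers here are all hypothetical):

#include <cstddef>

// Placeholder types, just to keep the sketch self-contained.
struct File {};
struct Header { char raw[16]; };
struct Document { Header header; };

enum Status { kOk, kEndOfFile, kIoError };

Status ReadData(File& f, char* buf, std::size_t n);   // hypothetical primitive
Status ParseHeader(Header& h);                        // hypothetical

// Error-code style: every intermediate caller must test and forward
// the code, even though only some distant caller actually handles it.
Status ReadHeader(File& f, Header& h)
{
    Status s = ReadData(f, h.raw, sizeof h.raw);
    if (s != kOk)
        return s;                    // propagate -- not handled here
    return ParseHeader(h);
}

Status LoadDocument(File& f, Document& doc)
{
    Status s = ReadHeader(f, doc.header);
    if (s != kOk)
        return s;                    // propagate again
    // ... and so on for every call that can fail
    return kOk;
}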

Now, suppose instead that the guideline reads: "Throw an exception to
indicate an error." Now there are only two situations:

1) the function throws an exception which is handled by the immediate
caller
2) the function throws an exception which is handled farther up the
call stack

Neither situation seems to have any correctness disadvantages compared
with the previous guideline, so this has always seemed a much better
approach. In fact, the guideline that I always use is:

Prefer to use exceptions to indicate errors unless
-- performance considerations won't allow exceptions (should be rare)
-- the exception must cross language boundaries (for example, don't
throw an exception from a function called from a function written in
C)
-- you don't have a choice and must use another method (for example,
you're overriding a virtual function that returns errors with an error
code)

Bob

Francis Glassborow

unread,
Sep 5, 2002, 9:22:59 AM9/5/02
to
In article <c87c1cfb.0209...@posting.google.com>, Bob Bell
<bel...@pacbell.net> writes

> > As you said in an earlier message:
> >
> > I cannot give you a single answer because it depends on the
> > design of the (sub)system in which day_of_week resides.
>
>I see your point. Yet I do not accept that a function can dictate how
>its results will be handled. There is no reason to believe, for
>example, that day_of_week is called only by other functions in the
>same subsystem.

I can ensure that by any of the following:

1) Place it in an unnamed namespace - it can only be called from within
the TU where it is declared/defined

2) Declare it as a static function -- as above

3) Make it a non-public member function.

Then there is the issue of exactly what an error is. A function that
returns a pointer that is set to NULL if the request could not be met
would be an error in some people's view and a normal response in the
view of others. Actually, I think it depends on what the function is
doing. I.e. to what degree would a NULL return be expected by the
potential user.


--
Francis Glassborow ACCU
64 Southfield Rd
Oxford OX4 1PA +44(0)1865 246490
All opinions are mine and do not represent those of any organisation

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Bob Bell

unread,
Sep 7, 2002, 5:56:13 AM9/7/02
to
bran...@cix.co.uk (Dave Harris) wrote in message news:<memo.20020901...@brangdon.madasafish.com>...

> bel...@pacbell.net (Bob Bell) wrote (abridged):
> > Therefore, if the postconditions are not true, you have
> > undefined behavior. Either the postconditions hold or the system is
> > in an undefined state.
>
> Meyer says the pre- and post-conditions define a contract, and an
> exception is what happens when the contract cannot be fulfilled. In this
> view, failing to meet a post-condition is not undefined behaviour, or
> necessarily a bug.

(Sorry about the long delay in posting, but I just saw your post.)

I just posted a response to the thread "C++0x -- what's really needed?
(LONG)" about this topic today (9/6), including more about why I
disagree with Meyer. The short version is that Meyer's concept
precludes the idea of throwing an exception as a way to fulfill a
contract, which seems unnecessarily limiting. It's like saying that
you can't return an error code as a way to fulfill a contract.

> > If the preconditions are not established, then the function cannot
> > make any guarantees and we again have undefined behavior.
>
> OK. Terminology aside, this is where the logic starts to get murky. Who's
> to say that that an exception can't be thrown under the umbrella of
> undefined behaviour?

It's not that exceptions can't be thrown under the umbrella of
undefined behavior -- of course they can. It's just that if they are,
a caller can't depend on them. If the behavior is undefined, then
maybe it throws A today, B tomorrow, and C next week. What do you
catch?

On the other hand, if the function always throws A, and you tell your
caller that, and he can depend on it, and you're not going to change
the behavior, etc., etc... then throwing starts to sound like "defined
behavior," and it also starts to sound like part of a contract a
caller can depend on. If that's true, what is the point of insisting
that the contract was not fulfilled, or that throwing is not part of
the function's postconditions?

> Always with exceptions it is good to think about who will catch them. For
> an assert-style exception, really we are saying that no-one should catch
> it. At least, no ordinary application code. It may be caught by the
> operating system or language run-time, and also by application code which
> is conceptually at a similar level. This kind of exception is, in my view,
> a meta-programming facility.

I'm not sure what you mean by an "assert-style" exception. I want
assertion behavior that will stop the program in its tracks so I can
examine its state, but if an exception is thrown the stack will be
unwound. If an assertion-style exception allows me to examine the
state of the system (perhaps in a post-mortem like a core dump), then
that's OK with me. The point is that an assertion should help me
figure out why this condition, which should have been true, was false.

> So this kind of exception is not documented as part of routine behaviour
> which object-level programmers should rely on, but it is documented as
> something which meta-level programmers can rely on. This need for dual
> views can make the subject confusing.

Are you suggesting that functions provide two contracts for the
different programmers? A "normal" contract and a "meta" contract? If
I'm understanding you correctly, I don't like the sound of this.

> > Also, assertions never detect conditions (like X) that a
> > function can meaningfully respond to, because an assertion stops the
> > system from responding at all (by breaking in the debugger).
>
> In some environments, breaking into the debugger, or halting the program,
> may be inappropriate. Then an assertion system might be better off
> throwing an exception, which can be caught and turned into an orderly
> shut-down and/or restart.

Yep. See my response above about assertion-style exceptions.

Bob

Dave Harris

unread,
Sep 7, 2002, 11:06:25 AM9/7/02
to
bel...@pacbell.net (Bob Bell) wrote (abridged):
> > In order to write a function, you need to know something of what its
> > clients want it to do.
>
> I see your point. Yet I do not accept that a function can dictate how
> its results will be handled.

Agreed. "Dictating" would be too strong a word. We plan for an expected
pattern of use but the actual pattern of use may be different. I'm not
sure what your point is. We shouldn't plan?


> Now, suppose instead that the guideline reads: "Throw an exception to
> indicate an error."

Fine. In my view this merely replaces the question "Should I throw an
exception?" with "Is this an error?" I don't think the rephrasing takes us
any further forward.

I am not disagreeing with the guideline. Indeed, I am happy to equate
exceptions with errors. If you read back over my last few articles, you'll
notice I talked of returning a "status code" rather than an "error code".
(Incidentally, I'd also equate "error" with "post-condition failure".) But
now the guideline is mere tautology.

I'll say that again. I agree errors should always become exceptions. The
design problem is deciding whether something is an error. My guideline is,
"If we expect the immediate caller to handle the situation, then the
situation is probably not an error."

My reasoning is (1) that ultimate error handling should be simple,
centralised and systematic, and not spread out over every routine in ad
hoc fashion. Catch-clauses which propagate the error are not handlers in
this sense (and ideally should be replaced by RAII objects).
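
For instance (a deliberately small sketch; the file handling is
arbitrary):

#include <cstdio>
#include <stdexcept>

void process(std::FILE* f);   // hypothetical; may throw

// Catch-and-rethrow purely to release the resource: not a real
// handler, just propagation with cleanup bolted on.
void with_catch(const char* name)
{
    std::FILE* f = std::fopen(name, "r");
    if (!f) throw std::runtime_error("open failed");
    try {
        process(f);
    }
    catch (...) {
        std::fclose(f);
        throw;                        // merely propagating
    }
    std::fclose(f);
}

// The RAII version: the destructor releases the resource and the
// exception propagates untouched -- no catch clause needed here.
struct FileHolder {
    std::FILE* f;
    explicit FileHolder(const char* name) : f(std::fopen(name, "r"))
    { if (!f) throw std::runtime_error("open failed"); }
    ~FileHolder() { std::fclose(f); }
};

void with_raii(const char* name)
{
    FileHolder holder(name);
    process(holder.f);
}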

And (2) that exceptions are a verbose and clunky syntactic construct,
which should only be used when the benefits outweigh the costs. And their
main benefit is the ease with which they propagate across routine
boundaries.

One feature of the Java libraries which makes that language less pleasant
to use, is the way they force callers to have try/catch clauses all over
the place. I found that if the caller /wants/ to handle the condition,
then the condition simply isn't an error, but part of business as usual.
If it is an error than it should be propagated up to some central
error-handler.

It is very rare for the error-handler to be the same routine that is the
immediate caller of the thrower. That usually suggests a poor separation
of concerns; the routine is doing too much work. It is even rarer to
design a routine /expecting/ there to be no intermediate routines between
the part responsible for error handling and the immediate caller.

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Dave Harris

unread,
Sep 7, 2002, 11:07:06 AM9/7/02
to
bel...@pacbell.net (Bob Bell) wrote (abridged):
> I just posted a response to the thread "C++0x -- what's really needed?
> (LONG)" about this topic today (9/6), including more about why I
> disagree with Meyer.

I've read it, but I'm posting my reply here.


> On the other hand, if the function always throws A, and you tell your
> caller that, and he can depend on it, and you're not going to change
> the behavior, etc., etc... then throwing starts to sound like "defined
> behavior," and it also starts to sound like part of a contract a
> caller can depend on. If that's true, what is the point of insisting
> that the contract was not fulfilled, or that throwing is not part of
> the function's postconditions?

In Eiffel, exceptions are part of the contract. They are defined behaviour
and can be relied upon. They are just not part of the post-condition. The
contract comprises 3 parts:

(1) Pre-conditions (and class invariants).
(2) Post-conditions (and class invariants).
(3) Exceptions.

A major difference between C++ and Eiffel is that in Eiffel, exceptions
are not objects. In effect, there is only one kind of exception which can
always be thrown (there is no "no-throw guarantee"). This means part (3)
of the contract is always present and must always be the same, so Eiffel
users tend to leave it implicit. If you're not aware, you might overlook
it when reading Eiffel literature. In C++, on the other hand, we would
use (3) to list the exception classes thrown.

Given that, your question becomes, "What is the point of separating (2)
from (3)"? The answer is separation of concerns. Exceptions are wildly
different from the other post-conditions in their default control flow and
hence their implied meaning. All of the reasoning about a routine's
behaviour will be structured with (2) and (3) dealt with separately; the
normal case and the exceptional case. The terminology reflects this way of
thinking.


> I'm not sure what you mean by an "assert-style" exception.

I meant one which should not be caught by ordinary application code,
because it implies a bug and/or illegal states in that code.


> I want assertion behavior that will stop the program in its tracks so
> I can examine its state, but if an exception is thrown the stack will
> be unwound.

I expect what happens when an assertion fails to be configurable. On a
development machine, entering the debugger or halting with a core-dump may
be appropriate. On a live, 24/7 server it may be better to abort just the
current transaction but continue serving other transactions. Sometimes it
is possible to arrange this with a separate "watch-dog" process, and
sometimes exceptions are a reasonable tool.

This configurable variation is why application code should not rely on
getting an exception from an assertion failure. On the other hand,
who-ever configures the assertion mechanism does need to be able to
guarantee they produce exceptions.
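
One possible shape for such a configurable mechanism (purely a sketch;
all the names are invented):

#include <cstdlib>
#include <stdexcept>
#include <string>

// The installed handler decides what an assertion failure means:
// abort on a development machine, throw on a 24/7 server, and so on.
typedef void (*AssertHandler)(const std::string& what);

void abort_handler(const std::string&)      { std::abort(); }
void throwing_handler(const std::string& w) { throw std::logic_error(w); }

AssertHandler current_handler = abort_handler;

void set_assert_handler(AssertHandler h) { current_handler = h; }
void assert_failed(const std::string& what) { current_handler(what); }

#define MY_ASSERT(cond) \
    do { if (!(cond)) assert_failed("assertion failed: " #cond); } while (0)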


> Are you suggesting that functions provide two contracts for the
> different programmers? A "normal" contract and a "meta" contract? If
> I'm understanding you correctly, I don't like the sound of this.

That is what I mean. I appreciate why you don't like it but I feel it is
necessary to capture what is going on.

I think truly undefined behaviour is a bad thing, to be avoided. In this I
agree with Java, which tries to fully define all eventualities. So, for
example, I'd like array access to be range-checked by default. Even when
an out-of-range access is a bug which should never happen, I don't want
that bug to lead to truly undefined behaviour; I want it to be safely
trapped (and by the above argument, with an exception). On the other hand,
I don't want to legitimise code like:

double sum = 0;
try {
    for (int i = 0; true; ++i)
        sum += vec.at(i);
}
catch (std::out_of_range &e) {
    return sum;
}

Currently the standard says this code is reasonable for std::vector::at(),
but for std::vector::operator[]() the exception is not guaranteed.
However, it also permits the exception to be guaranteed by the
implementation. It's a bit muddled. Really it doesn't matter where the
guarantee comes from provided it's there.

I /want/ guaranteed range-checking but I don't want to legitimise code
which relies on the guarantee inappropriately. This leads me to try to
formalise when catching out_of_range is appropriate.

If we do have a separate operating system "watch-dog" process that can
kill the rogue application and restart it seamlessly, then it's pretty
clear this is going on at a whole other level to the normal application.
This use of exceptions is a poor-man's version of the watch-dog, so it
should also be considered not part of the normal application. That is what
I mean by "meta". I am using exceptions to provide an O/S-level service.
Only systems-level code should be catching out_of_range. Systems
programmers are using a different contract to application programmers.

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]
