Exceptions

Thiago R. Adams

Jun 29, 2005, 10:50:59 AM
Hi,
I have a lot of questions about exceptions.

In the sample below:

struct E {
    . . .
    ~E() {}
};

int main()
{
    try {
        throw E();
    }
    catch (const E& e) {
        e.some();
    }
}

When will ~E be called? What is E's scope?

Where will E be created? On the stack?

Are exceptions slow even when they are not thrown?

What makes exceptions slower or faster? Does the number of catch clauses have an influence?

When using exceptions, what influences memory usage?

And what influences program size?


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

tony_i...@yahoo.co.uk

Jun 30, 2005, 7:09:33 AM
Firstly, I won't answer "when ~E will be called", because a basic
programming skill is putting a few cout statements in to find out for
yourself. You can also work out whether E is on the stack or not, but
this is a bit more difficult - the answer is yes. My recollection is
that E() will be created on the normal program stack, and the catch
statement may run in a dedicated exception stack (which allows
exceptions to work when the main stack has been exhausted due to
excessive recursion).

Anyway, entering and leaving a try { } block may have overheads in many
environments, but they can vary massively. Again, a basic programming
skill is to benchmark things in your own environment, and/or look at
the assembly output from your compiler (e.g. g++ -S). On Solaris a
couple of years ago, I found reporting errors via exceptions was about 15
times slower than using integral return codes (like libc), but things
may well have changed. I don't recall the overhead for successful
processing. Anyway, whether this is significant depends on many
factors. It just confirmed my impression that the programmer I had
just interviewed was being a bit silly when he insisted that he hadn't
reported an error using anything other than an exception for several
years.

As for what influences the memory: use of exceptions may require the
compiler to reserve an additional stack, but the standard C++ library
uses exceptions itself, so you should have that overhead anyway. Re
program size, it's just not relevant unless you're in an embedded
environment, in which case I'd say again that you have to write code
and make measurements with your particular compiler and processor.

The general rule is use exceptions to report exceptional circumstances,
not oft-encountered error conditions. Varying from this rule isn't
worth doing unless you have profiling results showing you that you have
to.

A design example: a function written "bool is_odd(int)" probably
shouldn't throw, but "void assert_odd(int)" could, because the caller
is clearly saying it would be exceptional not to succeed. See
Stroustrup's TC++PL3 for some background discussion on the use of
exceptions.

Cheers, Tony

Thiago R. Adams

Jun 30, 2005, 1:30:31 PM
Hi Tony,

My question is about the destructor. When will it be called? Yes, I made
some tests with cout. My guess is that the destructor is called at the
outermost try. And the compiler is smart when you use a rethrow. But
this behavior must be specified somewhere, mustn't it?

>You can also work out whether E is on the stack or not, but
> this is a bit more difficult - the answer is yes. My recollection is
> that E() will be created on the normal program stack, and the catch
> statement may run in a dedicated exception stack (which allows
> exceptions to work when the main stack has been exhausted due to
> excessive recursion).

I made some tests. I created a recursive function with try/catch and
logged the addresses of stack variables. The stack addresses of the
variables don't change with the exceptions. (I used Visual C++ 2005
Express.) So I also guess that it is on another stack.

> The general rule is use exceptions to report exceptional circumstances,
> not oft-encountered error conditions. Varying from this rule isn't
> worth doing unless you have profiling results showing you that you have
> to.

It's an excellent topic! :) What is an exceptional condition?
I tend to use exceptions for errors and for broken preconditions.
For example:

struct X {
    X(AnyInterface* p) {
        if (p == 0)
            throw runtime_error("AnyInterface is mandatory for class X!");
        else
            p->f();
    }
};

I need to know the overhead and behavior of exceptions to convince my
coworkers that they are good.
Today we use a lot of return codes.

TC++PL3 is very good! :)
I have also read:
CUJ, August 2004, "When, for what, and how should you use exceptions"
by Sutter;
Exceptional C++ Style; and C++ Coding Standards.

Thanks!
http://paginas.terra.com.br/informatica/thiago_adams/eng/index.htm

David Abrahams

Jun 30, 2005, 2:27:44 PM
"Thiago R. Adams" <thiago...@gmail.com> writes:

> Hi,
> I have a lot of questions about exceptions.
>
> In the sample below:
>
> struct E {
>     . . .
>     ~E() {}
> };
>
> int main()
> {
>     try {
>         throw E();
>     }
>     catch (const E& e) {
>         e.some();
>     }
> }
>
> When will ~E be called?

It depends on the implementation, because E's copy ctor may be called
an arbitrary number of times, but the last E will definitely be
destroyed when execution reaches the closing brace of the catch block.

> What is E's scope?

I don't understand the question.

> Where will E be created? On the stack?

That's entirely implementation-dependent. From an abstract
point-of-view, there's "magic memory" in which all exceptions are
stored during unwinding.

> Are exceptions slow even when they are not thrown?

That's also implementation dependent. On the "best" implementations,
there's no speed penalty for exceptions until one is thrown.

> What makes exceptions slower or faster? Does the number of catch
> clauses have an influence?

That's also implementation dependent. On the "best" implementations,
there's no speed penalty for a catch block. Generally, those
implementations trade speed in the case where an exception is thrown
for speed in the case where no exception is thrown. In other words,
on implementations I consider inferior, actually throwing an exception
may be faster, but executing the code normally, when there are no
exceptions, will be slower.

> When using exceptions, what influences memory usage?
> And what influences program size?

That's also implementation dependent. There is generally no dynamic
memory associated with exception-handling. As for the program size,
on the "best" implementations, tables are generated that associate
program counter values with the unwinding actions and catch blocks
that must be executed when an exception is thrown with any of those
program counter values in an active stack frame. In some
implementations you can limit the number of tables generated by using
the empty exception specification wherever possible.

HTH,

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com

David Abrahams

Jul 1, 2005, 5:50:27 AM
"Thiago R. Adams" <thiago...@gmail.com> writes:

>> The general rule is use exceptions to report exceptional circumstances,
>> not oft-encountered error conditions. Varying from this rule isn't
>> worth doing unless you have profiling results showing you that you have
>> to.
> It's an excellent topic! :) What is an exceptional condition?
> I tend to use exceptions for errors and for broken preconditions.

Broken preconditions should almost always be handled with an assert
and not an exception. An exception will usually cause a great deal of
code to be executed before you get a chance to diagnose the problem.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Maxim Yegorushkin

Jul 1, 2005, 6:57:53 AM
On Thu, 30 Jun 2005 22:27:44 +0400, David Abrahams
<da...@boost-consulting.com> wrote:

[]

>> Using exceptions, what influences the memory?
>> And what influences the program size?
>
> That's also implementation dependent. There is generally no dynamic
> memory associated with exception-handling.

Not sure if exception throwing is exception-handling, but g++ allocates
exceptions on the heap.

http://savannah.gnu.org/cgi-bin/viewcvs/gcc/gcc/libstdc%2B%2B-v3/libsupc%2B%2B/eh_alloc.cc?rev=HEAD&content-type=text/vnd.viewcvs-markup

--
Maxim Yegorushkin
<firstname...@gmail.com>

Thiago R. Adams

Jul 1, 2005, 1:02:33 PM
> Broken preconditions should almost always be handled with an assert
> and not an exception. An exception will usually cause a great deal of
> code to be executed before you get a chance to diagnose the problem.

In many cases asserts work as commentary only;
for example:

void f(pointer* p) {
    assert(p != 0);
    p->f();
}

With exceptions, the code will respond to the error.

TC++PL has an example:

template <class X, class A> inline void Assert(A assertion) {
    if (!assertion) throw X();
}

I think that the most useful is:

template <class X, class A> inline void Assert(A assertion) {
    DebugBreak(); // stops in the debugger
    if (!assertion) throw X();
}

void f2(int* p) {
    // ARG_CHECK is a constant: true to check args
    Assert<Bad_arg>(ARG_CHECK || p != 0);
}

If the code is correct, all callers will be tested for their arguments.
Then I can remove this test (ARG_CHECK). But is that necessary?

When should I use throw and when asserts?
Two cases:

void func(vector& v, int* p) {
    if (p == 0)
        throw bad_arg(); // why not? because this function already throws
    v.push_back(p); // push can throw, right?
}

void func(vector& v, int* p) throw() {
    // maybe this is better, because this function doesn't throw;
    // it has only one contract with callers in debug and release
    assert(p != 0);

    // yes, I could use a reference, but this is only an example :)
    *p = v.size() + 10;
}

I think this is a long and important topic.
The performance of exceptions is important in deciding whether to use them.
My coworkers say: "If we don't know the behavior and performance
penalties of exceptions, we will use return codes, because return
codes are simpler and better known."


Thanks!
(and sorry, my English is not so good)

David Abrahams

Jul 1, 2005, 9:58:38 PM
"Maxim Yegorushkin" <firstname...@gmail.com> writes:

> On Thu, 30 Jun 2005 22:27:44 +0400, David Abrahams
> <da...@boost-consulting.com> wrote:
>
> []
>
>>> Using exceptions, what influences the memory?
>>> And what influences the program size?
>>
>> That's also implementation dependent. There is generally no dynamic
>> memory associated with exception-handling.
>
> Not sure if exception throwing is exception-handling, but g++ allocates
> exceptions on the heap.
>
> http://savannah.gnu.org/cgi-bin/viewcvs/gcc/gcc/libstdc%2B%2B-v3/libsupc%2B%2B/eh_alloc.cc?rev=HEAD&content-type=text/vnd.viewcvs-markup

AFAICT those exceptions are being "dynamically" allocated out of
static memory, in

static one_buffer emergency_buffer[EMERGENCY_OBJ_COUNT];

That looks like the "magic memory" I was referring to. Am I missing
something?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


David Abrahams

Jul 1, 2005, 10:04:21 PM
"Thiago R. Adams" <thiago...@gmail.com> writes:

>> Broken preconditions should almost always be handled with an assert
>> and not an exception. An exception will usually cause a great deal of
>> code to be executed before you get a chance to diagnose the problem.
>
> In many cases asserts work as commentary only;
> for example:
>
> void f(pointer* p) {
>     assert(p != 0);
>     p->f();
> }

No, they work to stop program execution at the earliest point of
detection of the error.

> With exceptions, the code will respond to the error.

If preconditions are broken, your program state is broken, by
definition. Trying to recover is generally ill-advised.

> TC++PL has an example:

That doesn't make it right :)

> template <class X, class A> inline void Assert(A assertion) {
>     if (!assertion) throw X();
> }
>
> I think that the most useful is:
> template <class X, class A> inline void Assert(A assertion) {
>     DebugBreak(); // stops in the debugger

Stop execution so I can debug the program. Good!

>     if (!assertion) throw X();
> }

If the assertion fails when there is no debugger, how do you expect
the program to recover?

> void f2(int* p) {
>     // ARG_CHECK is a constant: true to check args
>     Assert<Bad_arg>(ARG_CHECK || p != 0);
> }
>
> If the code is correct, all callers will be tested for their arguments.
> Then I can remove this test (ARG_CHECK). But is that necessary?

I don't understand the question.

> When should I use throw and when asserts?

Use asserts to detect that the invariants you have designed into your
program are broken. Use throw to indicate that a function will not be
able to fulfill its usual postconditions, and when the immediate
caller is not very likely to be able to handle the error directly and
continue (otherwise, use error return codes and the like).
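
This guideline might be sketched like this (function names and the size limit are invented for illustration):

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

// Precondition: p must be non-null. A null argument is a bug in the
// caller, so we assert rather than throw.
int first_element(const int* p) {
    assert(p != 0);
    return *p;
}

// This function can legitimately fail to fulfill its postcondition
// (producing a buffer of n elements), and the immediate caller usually
// cannot fix that locally, so it reports the failure with an exception.
std::vector<int> make_buffer(std::size_t n) {
    if (n > 1000000)  // arbitrary illustrative limit
        throw std::length_error("make_buffer: size too large");
    return std::vector<int>(n);
}
```

The assert documents an invariant the program was designed around; the throw reports a runtime failure the design anticipates.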

> Two cases:
> void func(vector& v, int* p) {
>     if (p == 0)
>         throw bad_arg(); // why not? because this function already throws

Why not? Because, I presume, passing 0 is a precondition violation.
It depends on what you put in your documentation. If you say, "you
must pass me a non-null pointer," then use an assert. If you say, "if
you pass a null pointer I'll throw," well, then throw. However, the
former is usually the better course of action.

>     v.push_back(p); // push can throw, right?

Yes. So what?

> }
>
> void func(vector& v, int* p) throw() {
>     // maybe this is better, because this function doesn't throw;
>     // it has only one contract with callers in debug and release
>     assert(p != 0);

Even if it were not nothrow, it would have only one contract.

>     // yes, I could use a reference, but this is only an example :)
>     *p = v.size() + 10;
> }
>
> I think this is a long and important topic.
> The performance of exceptions is important in deciding whether to use them.
> My coworkers say: "If we don't know the behavior and performance
> penalties of exceptions, we will use return codes, because return
> codes are simpler and better known."

Yes, it's a common FUD. It's easy to do some experiments to get a
feeling for the real numbers.

Do they know the cost in correctness and maintainability of using
return codes?

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Alf P. Steinbach

Jul 2, 2005, 6:23:03 AM
* David Abrahams -> Thiago R. Adams:

>
> >
> > I think that the most useful is:
> > template <class X, class A> inline void Assert(A assertion) {
> >     DebugBreak(); // stops in the debugger
>
> Stop execution so I can debug the program. Good!
>
> >     if (!assertion) throw X();
> > }
>
> If the assertion fails when there is no debugger, how do you expect
> the program to recover?

That's actually a good _C++_ question... ;-)

First, the reason why one would like to 'throw' in this case, which is
usually not to recover in the sense of continuing normal execution, but
to recover enough to do useful logging, reporting and graceful exit on
an end-user system with no debugger and other programmer's tools
(including, no programmer's level understanding of what goes on).

Why that is a problem in C++: the standard exception hierarchy is not
designed. Uh, I meant, it's not designed with different degrees of
recoverability in mind. At best you can use std::runtime_error for
"soft" (in principle recover-and-continue-normally'able) exceptions, and
classes derived otherwise from std::exception for "hard" exceptions, but
in practice people tend to not restrict themselves to
std::runtime_error, and the Boost library is an example of this common
practice -- the standard's own exception classes are also examples.

So, if you want a really "hard" exception in C++, one that is likely to
propagate all the way up to the topmost control level, you'll have to
use something else than a standard exception.

And even that non-standard exception might be caught (and not rethrown)
by a catch(...) somewhere. Which is not as unlikely as it might seem.
E.g., as I recall, ScopeGuard does that in its destructor¹.

Well, then, why not use std::terminate instead? After all, that's what
it's for, isn't it? And it's configurable.

But no, that's not what it's for. Calling std::terminate does not
guarantee RAII cleanup as a "hard" exception would. In short, I know of
no good portable solution to this problem in standard C++, and thinking
of how extremely easily it could have been supported in the design of
the standard exception classes (there was even existing practice from
earlier languages indicating how it should be) it's very frustrating.


¹) One might argue that calling std::terminate is the only reasonable
failure handling in a destructor, even for ScopeGuard-like objects.
But the standard already provides that draconian measure for the
situation where it's really needed, where you would otherwise have a
double exception (which does not exist in C++). Doing it explicitly
just removes a measure of control from the client code programmer.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Maxim Yegorushkin

Jul 3, 2005, 6:10:40 AM
David Abrahams wrote:

[]

> > Not sure if exception throwing is exception-handling, but g++ allocates
> > exceptions on the heap.
> >
> > http://savannah.gnu.org/cgi-bin/viewcvs/gcc/gcc/libstdc%2B%2B-v3/libsupc%2B%2B/eh_alloc.cc?rev=HEAD&content-type=text/vnd.viewcvs-markup
>
> AFAICT those exceptions are being "dynamically" allocated out of
> static memory, in
>
> static one_buffer emergency_buffer[EMERGENCY_OBJ_COUNT];
>
> That looks like the "magic memory" I was referring to. Am I missing
> something?

Only when malloc returns 0. That's why the *emergency* buffer.

--
Maxim Yegorushkin
<firstname...@gmail.com>

Peter Dimov

Jul 3, 2005, 6:11:16 AM
Alf P. Steinbach wrote:
> * David Abrahams -> Thiago R. Adams:
> >
> > >
> > > I think that the most useful is:
> > > template <class X, class A> inline void Assert(A assertion) {
> > >     DebugBreak(); // stops in the debugger
> >
> > Stop execution so I can debug the program. Good!
> >
> > >     if (!assertion) throw X();
> > > }
> >
> > If the assertion fails when there is no debugger, how do you expect
> > the program to recover?
>
> That's actually a good _C++_ question... ;-)
>
> First, the reason why one would like to 'throw' in this case, which is
> usually not to recover in the sense of continuing normal execution, but
> to recover enough to do useful logging, reporting and graceful exit on
> an end-user system with no debugger and other programmer's tools
> (including, no programmer's level understanding of what goes on).

No recovery is possible after a failed assert. A failed assert means
that we no longer know what's going on. Generally logging and reporting
should be done at the earliest opportunity; if you attempt to "recover"
you may be terminated and no longer be able to log or report.

In some situations (ATM machine in the middle of a transaction, say) it
might make sense to attempt recovery even when an assertion fails, if
things can't possibly get any worse. This is extremely fragile, of
course. You can't test how well the recovery works because by
definition it is only executed in situations that your tests did not
cover (if they did, you'd have fixed the code to no longer assert).

Maxim Yegorushkin

Jul 3, 2005, 6:19:39 AM
On Sat, 02 Jul 2005 14:23:03 +0400, Alf P. Steinbach <al...@start.no>
wrote:

[]

>> If the assertion fails when there is no debugger, how do you expect
>> the program to recover?
>
> That's actually a good _C++_ question... ;-)
>
> First, the reason why one would like to 'throw' in this case, which is
> usually not to recover in the sense of continuing normal execution, but
> to recover enough to do useful logging, reporting and graceful exit on
> an end-user system with no debugger and other programmer's tools
> (including, no programmer's level understanding of what goes on).

Why would one want a graceful exit when code is broken, rather than dying
as loud as possible leaving a core dump with all state preserved, rather
than unwound? std::abort() is a good tool for that.

--
Maxim Yegorushkin
<firstname...@gmail.com>

Alf P. Steinbach

Jul 3, 2005, 6:25:19 PM
* Peter Dimov:

> Alf P. Steinbach wrote:
> > * David Abrahams -> Thiago R. Adams:
> > >
> > > >
> > > > I think that the most useful is:
> > > > template <class X, class A> inline void Assert(A assertion) {
> > > >     DebugBreak(); // stops in the debugger
> > >
> > > Stop execution so I can debug the program. Good!
> > >
> > > >     if (!assertion) throw X();
> > > > }
> > >
> > > If the assertion fails when there is no debugger, how do you expect
> > > the program to recover?
> >
> > That's actually a good _C++_ question... ;-)
> >
> > First, the reason why one would like to 'throw' in this case, which is
> > usually not to recover in the sense of continuing normal execution, but
> > to recover enough to do useful logging, reporting and graceful exit on
> > an end-user system with no debugger and other programmer's tools
> > (including, no programmer's level understanding of what goes on).
>
> No recovery is possible after a failed assert. A failed assert means
> that we no longer know what's going on. Generally logging and reporting
> should be done at the earliest opportunity; if you attempt to "recover"
> you may be terminated and no longer be able to log or report.

If "recovery" in the above means continuing on with normal execution,
then the above is a sort of extremist version of my position. I would
not go that far, especially in light of contrary facts. For example,
one of the most used programs in the PC world, the Windows Explorer,
does continue normal execution in this situation simply by restarting
itself, so with this meaning of "recovery" the above is false.

If "recovery" in the above means something generally more reasonable,
like, cleaning up, then again that flies in the face of established
fact. E.g., generally open files are closed by the OS whenever a
process terminates, whatever the cause of the termination, and it's no
big deal to define that as part of the application. In this sense of
"recovery" the above therefore says it's impossible to write an OS in
C++, and that's patently false, too.

So what does the above mean, if anything?

Is it, perhaps, a "proof", from principles, that hummingbirds can't fly?

Then I'd suggest the proof technique and/or the principles are to blame.


> In some situations (ATM machine in the middle of a transaction, say) it
> might make sense to attempt recovery even when an assertion fails, if
> things can't possibly get any worse. This is extremely fragile, of
> course. You can't test how well the recovery works because by
> definition it is only executed in situations that your tests did not
> cover (if they did, you'd have fixed the code to no longer assert).

Happily it's not the case that every ATM machine that encounters a
failed assertion has to be discarded. :-)


Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


Alf P. Steinbach

Jul 3, 2005, 6:29:23 PM
* Maxim Yegorushkin:

>
> Why would one want a graceful exit when code is broken, rather than dying
> as loud as possible leaving a core dump with all state preserved, rather
> than unwound? std::abort() is a good tool for that.

Because almost all code in existence, with the possible exception of
TeX, is broken, and (1) end-users really don't like core dumps, (2) the
maintenance team would like some information about what went wrong so
that it can be fixed, and a discarded end-user's core dump doesn't do,
(3) it's generally impolite and inconsiderate to wreck the city when one
knows one's dying, but that can be the effect of a simple std::abort.

Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


Niklas Matthies

Jul 3, 2005, 10:27:08 PM
On 2005-07-03 10:19, Maxim Yegorushkin wrote:

> On Sat, 02 Jul 2005 14:23:03 +0400, Alf P. Steinbach wrote:
:
>>> If the assertion fails when there is no debugger, how do you expect
>>> the program to recover?
>>
>> That's actually a good _C++_ question... ;-)
>>
>> First, the reason why one would like to 'throw' in this case, which is
>> usually not to recover in the sense of continuing normal execution, but
>> to recover enough to do useful logging, reporting and graceful exit on
>> an end-user system with no debugger and other programmer's tools
>> (including, no programmer's level understanding of what goes on).
>
> Why would one want a graceful exit when code is broken, rather than
> dying as loud as possible leaving a core dump with all state
> preserved, rather than unwound?

Because the customer expects and demands it. Actually, more often than
not the customer even demands graceful resumption.

-- Niklas Matthies

Albrecht Fritzsche

Jul 3, 2005, 10:27:31 PM
>Why would one want a graceful exit when code is broken, rather than dying
>as loud as possible leaving a core dump with all state preserved, rather
>than unwound?

Because your code is integrated into another program. This program
might, for instance, give some guarantees while your code is working -
like saving all prior actions, etc.

If all your program can do is "die loudly", then the integrators won't
be exactly happy.

Ali

Nicola Musatti

Jul 4, 2005, 7:08:15 AM

Maxim Yegorushkin wrote:
[...]


> Why would one want a graceful exit when code is broken, rather than dying
> as loud as possible leaving a core dump with all state preserved, rather
> than unwound? std::abort() is a good tool for that.

You and Peter seem to assume that there can be no knowledge about how
and where the code is broken. I believe that this depends on the kind
of application and how modular it is. In many interactive applications
it only takes a little care to ensure that you can always abort the
current operation and go back to the main event loop as if no problem
ever took place.

I work all day long with an IDE that throws all kinds of unexpected
exceptions at me; when that happens I usually restart the #@! thing,
but the few times I did try to carry on, I never encountered any
problems.

Cheers,
Nicola Musatti

Motti Lanzkron

Jul 4, 2005, 7:07:18 AM
On 1 Jul 2005 13:02:33 -0400, "Thiago R. Adams"
<thiago...@gmail.com> wrote:

>> Broken preconditions should almost always be handled with an assert
>> and not an exception. An exception will usually cause a great deal of
>> code to be executed before you get a chance to diagnose the problem.
>
>In many cases asserts work as commentary only;
>for example:
>
>void f(pointer* p) {
>    assert(p != 0);
>    p->f();
>}
>
>With exceptions, the code will respond to the error.
>
>TC++PL has an example:
>
>template <class X, class A> inline void Assert(A assertion) {
>    if (!assertion) throw X();
>}
>
>I think that the most useful is:
>template <class X, class A> inline void Assert(A assertion) {
>    DebugBreak(); // stops in the debugger
>    if (!assertion) throw X();
>}

We have some macros that do something similar:
They're macros so that the original __LINE__ and __FILE__ are kept and
since a lot of code is COM, errors are indicated by return codes.

Something along the lines of:
#define ASSERT_OR_RETURN(cond, ret) \
    if (cond)                       \
        ;                           \
    else {                          \
        assert(false && #cond);     \
        return ret;                 \
    }

And it's used as:
ASSERT_OR_RETURN( pArg != NULL, E_POINTER);

As a side note, this has drawn some criticism claiming that static
code analyzers can't correctly analyze functions that use such
constructs. Is this true? I'm not familiar with such tools, but it
seems that they should have the option of some macro expansion.

Peter Dimov

Jul 4, 2005, 1:43:32 PM
Alf P. Steinbach wrote:
> * Peter Dimov:
>> Alf P. Steinbach wrote:

>>> First, the reason why one would like to 'throw' in this case, which
>>> is usually not to recover in the sense of continuing normal
>>> execution, but to recover enough to do useful logging, reporting
>>> and graceful exit on an end-user system with no debugger and other
>>> programmer's tools (including, no programmer's level understanding
>>> of what goes on).
>>
>> No recovery is possible after a failed assert. A failed assert means
>> that we no longer know what's going on. Generally logging and
>> reporting should be done at the earliest opportunity; if you attempt
>> to "recover" you may be terminated and no longer be able to log or
>> report.
>
> If "recovery" in the above means continuing on with normal execution,

You said: "one would like to 'throw' in this case" ... "to recover enough to
do useful logging, reporting and graceful exit."

So I assumed that by "recovery" you mean "stack unwinding" (with its usual
side effects), because this is what 'throw' does.

> then the above is a sort of extremist version of my position. I would
> not go that far, especially in light of contrary facts. For example,
> one of the most used programs in the PC world, the Windows Explorer,
> does continue normal execution in this situation simply by restarting
> itself, so with this meaning of "recovery" the above is false.

Normal execution doesn't involve killing the host process, even if it's
automatically restarted after that. Exit+restart is recovery, for a
different definition of "recovery", but it's not performed by throwing an
exception.

> If "recovery" in the above means something generally more reasonable,
> like, cleaning up, then again that flies in the face of established
> fact. E.g., generally open files are closed by the OS whenever a
> process terminates, whatever the cause of the termination, and it's no
> big deal to define that as part of the application. In this sense of
> "recovery" the above therefore says it's impossible to write an OS in
> C++, and that's patently false, too.

No, cleaning up open files after a process dies is not recovery, and it's
not performed by throwing an exception.

> So what does the above mean, if anything?

It means that performing stack unwinding after a failed assert is usually a
bad idea.

Peter Dimov

Jul 4, 2005, 1:45:31 PM
Nicola Musatti wrote:
> Maxim Yegorushkin wrote:
> [...]
>> Why would one want a graceful exit when code is broken, rather than
>> dying as loud as possible leaving a core dump with all state
>> preserved, rather than unwound? std::abort() is a good tool for that.
>
> You and Peter seem to assume that there can be no knowledge about how
> and where the code is broken.

Not really.

Either

(a) you go the "correct program" way and use assertions to verify that your
expectations match the observed behavior of the program, or

(b) you go the "resilient program" way and use exceptions in an attempt to
recover from certain situations that may be caused by bugs.

(a) implies that whenever an assert fails, the program no longer behaves as
expected, so everything you do from this point on is based on _hope_ that
things aren't as bad.

(b) implies that whenever stack unwinding might occur, you must assume that
the conditions that you would've ordinarily tested with an assert do not
hold.

Most people do neither. They write incorrect programs and don't care about
the fact that every stack unwinding must assume a broken program. It's all
wishful thinking. We can't make our programs correct, so why even bother?
Just throw an exception.

Alf P. Steinbach

Jul 4, 2005, 10:28:14 PM
* Peter Dimov:

> > >
> > > No recovery is possible after a failed assert.
>
> [The above] means that performing stack unwinding after a failed

> assert is usually a bad idea.

I didn't think of that interpretation, but OK.

The interpretation, or rather, what you _meant_ to say in the first
place, is an opinion, which makes it more difficult to discuss.

After a failed assert it's known that something, which could be anything
(e.g. full corruption of memory), is wrong. Attempting to execute even
one teeny tiny little instruction might do unimaginable damage. Yet you
think it's all right to not only terminate the process but also to log
things, which involves file handling, as long as one doesn't do a stack
rewind up from the point of the failed assert. This leads me to suspect
that you're confusing a failed assert with a corrupted stack, or that
you think that a failure to clean up 100% might be somehow devastating.
Anyway, an explanation of your opinion would be great, and this time,
please write what you mean, not something entirely different.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

David Abrahams

Jul 5, 2005, 6:52:02 AM
"Peter Dimov" <pdi...@gmail.com> writes:

> Nicola Musatti wrote:
>> Maxim Yegorushkin wrote:
>> [...]
>>> Why would one want a graceful exit when code is broken, rather than
>>> dying as loud as possible leaving a core dump with all state
>>> preserved, rather than unwound? std::abort() is a good tool for that.
>>
>> You and Peter seem to assume that there can be no knowledge about how
>> and where the code is broken.
>
> Not really.
>
> Either
>
> (a) you go the "correct program" way and use assertions to verify that your
> expectations match the observed behavior of the program, or
>
> (b) you go the "resilient program" way and use exceptions in an attempt to
> recover from certain situations that may be caused by bugs.
>
> (a) implies that whenever an assert fails, the program no longer behaves as
> expected, so everything you do from this point on is based on _hope_ that
> things aren't as bad.
>
> (b) implies that whenever stack unwinding might occur, you must assume that
> the conditions that you would've ordinarily tested with an assert do not
> hold.

And while it is possible to do (b) in a principled way, it's much more
difficult than (a), because once you unwind and return to "normal"
code with the usual assumptions about program integrity broken, you
have to either:

1. Test every bit of data obsessively to make sure it's still
reasonable, or

2. Come up with a principled way to decide which kinds of
brokenness you're going to look for and try to circumvent, and
which invariants you're going to assume still hold.

In practice, I think doing a complete job of (1) is really impossible,
so you effectively have to do (2). Note also that once you unwind to
"normal" code, information about the particular integrity check that
failed tends to get lost: all the different throw points unwind into
the same instruction stream, so there really is a vast jungle of
potential problems to consider.

Programming with the basic assumption that the program state might be
corrupt is very difficult, and tends to work against the advantages of
exceptions, cluttering the "normal" flow of control with integrity
tests and attempts to work around the problems. And your program gets
bigger, harder to test and to maintain; if your work is correct, these
tests and workarounds will never be executed at all.

> Most people do neither. They write incorrect programs and don't care
> about the fact that every stack unwinding must assume a broken
> program.

I assume you mean that's the assumption you must make when you throw
in response to failed preconditions.

> It's all wishful thinking. We can't make our programs correct, so
> why even bother? Just throw an exception.

...and make it someone else's problem. Code higher up the call stack
might know how to deal with it, right? ;-)

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Alf P. Steinbach

Jul 5, 2005, 9:03:24 AM
* David Abrahams:

> "Peter Dimov" <pdi...@gmail.com> writes:
>
> > Nicola Musatti wrote:
> >> Maxim Yegorushkin wrote:
> >> [...]
> >>> Why would one want a graceful exit when code is broken, rather than
> >>> dying as loud as possible leaving a core dump with all state
> >>> preserved, rather than unwound? std::abort() is a good tool for that.
> >>
> >> You and Peter seem to assume that there can be no knowledge about how
> >> and where the code is broken.
> >
> > Not really.
> >
> > Either
> >
> > (a) you go the "correct program" way and use assertions to verify that your
> > expectations match the observed behavior of the program, or
> >
> > (b) you go the "resilient program" way and use exceptions in an attempt to
> > recover from certain situations that may be caused by bugs.

[Here responding to Peter Dimov's statement:]

Those are extremes, so the "either" is not very meaningful.

AFAIK the techniques of mathematical proof of program correctness are in
general not used in the industry. One reason is simply that the proofs
(and attendant machinery) tend to be more complex than the programs. Apart
from the work involved, that means a possibly higher chance of errors, for
example from over-generalization being employed as a valid proof technique.

When an assertion fails you have proof that the program isn't correct and,
given the way we use asserts, an indication that the process should
terminate; so whether (a) or (b) has been employed (and I agree that if one
had to choose between the extremes, (a) would be a good choice) is no
longer relevant.


> > (a) implies that whenever an assert fails, the program no longer behaves as
> > expected, so everything you do from this point on is based on _hope_ that
> > things aren't as bad.

That is literally correct, but first of all, "the program" is an over-
generalization, because you usually know something much more specific than
that, and secondly, there are degrees of hope, including informed hope.

If you try to execute a halt instruction you're hoping the instruction
space is not corrupted, and further that the OS' handling of illegal
instructions (if any) still works. And I can hear you thinking, those
bad-case scenarios are totally implausible, and even the scenarios
leading someone to try a halt instruction are so implausible that no one
actually does that. But those scenarios are included in "the program" no
longer behaving as expected, that's what that over-generalization and
absolute -- incorrectly applied -- mathematical logic means.

If you try to terminate the process using a call to something, you're
hoping that this isn't a full stack you're up against, and likewise for
evaluation of any expression whatsoever. Whatever you do, you're doing a
gut-feeling potential cost / clear benefit analysis, and this should in my
opinion be a pragmatic decision, a business decision. It should not be a
decision based on absolute black/white principles thinking where every
small K.O. is equated to a nuclear attack because in both cases you're down.

As an example, the program might run out of handles to GUI objects. In
old Windows that meant that what earlier was very nice graphical
displays suddenly started showing as e.g. white, blank areas. If this is
detected (as it should be) then there's generally nothing that can be done
within this process, so the process should terminate, and that implies
detection by something akin to a C++ 'assert'. A normal exception won't
do, because it might be picked up by some general exception handler. On
the other hand, you'd like that program to clean up. E.g., if it's your
ATM example, you'd like it to eject the user's card before terminating.

And, you'd like it to log and/or report this likely bug, e.g. sending a mail.

And, you don't want to compromise your design by making everything global
just so a common pre-termination handler can do the job.


> > (b) implies that whenever stack unwinding might occur, you must assume that
> > the conditions that you would've ordinarily tested with an assert do not
> > hold.

(b) implies that whenever anything might occur, you must assume that anything
can be screwed up. ;-)


> And while it is possible to do (b) in a principled way, it's much more
> difficult than (a), because once you unwind and return to "normal"
> code with the usual assumptions about program integrity broken, you
> have to either:
>
> 1. Test every bit of data obsessively to make sure it's still
> reasonable, or
>
> 2. Come up with a principled way to decide which kinds of
> brokenness you're going to look for and try to circumvent, and
> which invariants you're going to assume still hold.
>
> In practice, I think doing a complete job of (1) is really impossible,
> so you effectively have to do (2).

[Here responding to David Abraham's statement:]

I think your points (1) and (2) summarizes approach (b) well, and show
that it's not a technique one would choose if there was a choice.

But as mentioned above, it's an extreme, although in some other
languages (e.g. Java) you get null-pointer exceptions & the like.


> Note also that once you unwind to
> "normal" code, information about the particular integrity check that
> failed tends to get lost: all the different throw points unwind into
> the same instruction stream, so there really is a vast jungle of
> potential problems to consider.

I agree, for the C++ exceptions we do have.

If we did have some kind of "hard" exception supported by the language,
or even just standardized and supported by convention, then the vast jungle
of potential problems that stands in the way of further normal execution
wouldn't matter so much: catching that hard exception at some uppermost
control level you know that the process has to terminate, not continue
with normal execution (which was the problem), and you know what actions
you've designated for that case (also known by throwers, or at least known
to be irrelevant to them), so that's what the code has to attempt to do.


> [snip]


> ...and make it someone else's problem. Code higher up the call stack
> might know how to deal with it, right? ;-)

In the case of a "hard" exception it's not "might", it's a certainty.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


ka...@gabi-soft.fr

Jul 5, 2005, 9:13:05 AM
Alf P. Steinbach wrote:
> * Peter Dimov:

> > > > No recovery is possible after a failed assert.

> > [The above] means that performing stack unwinding after a
> > failed assert is usually a bad idea.

> I didn't think of that interpretation, but OK.

> The interpretation, or rather, what you _meant_ to say in the
> first place, is an opinion, which makes it more difficult to
> discuss.

It's always difficult to discuss a sentence with "usually".
What percentage is "usually"?

My work is almost exclusively on large-scale servers, usually on
critical systems. In that field, it is always a mistake to do
anything more than necessary when a program invariant fails; you
back out as quickly as possible, and let the watchdog processes
clean up and restart.

At the client level, I agree that the question is less clear,
although the idea of executing tons of destructors when the
program invariants don't hold sort of scares me even there. As
does the idea that some important information not be displayed
because of the error. For a game, on the other hand, it's no big
deal, and in many cases, of course, you can recover enough to
continue, or at least save the game so it could be restarted.

> After a failed assert it's known that something, which could
> be anything (e.g. full corruption of memory), is wrong.
> Attempting to execute even one teeny tiny little instruction
> might do unimaginable damage.

Well, a no-op is probably safe:-).

> Yet you think it's all right to not only terminate the process
> but also to log things, which involves file handling, as long
> as one doesn't do a stack rewind up from the point of the
> failed assert. This leads me to suspect that you're confusing
> a failed assert with a corrupted stack, or that you think that
> a failure to clean up 100% might be somehow devastating.

I think the idea is that basically, you don't know what stack
unwinding may do, or try to do, because it depends on the global
program state. It's not local, and you have no control over
it. Most of the time, it's probably acceptable to do some
formatting (which, admittedly, may overwrite critical memory if
some pointers are corrupted, but you're not going to do anything
with the memory afterwards), and try to output the results
(which does entail a real risk -- if the file descriptor is
corrupted, you may end up overwriting something you shouldn't).
The point is, if you try to do this from the abort routine,
you know at least exactly what you are trying to do, and can
estimate the risk. Whereas stack unwinding leads you into the
unknown, and you can't estimate the risk.

And of course, there are cases where even the risk of trying to
log the data is unacceptable. You core, the watchdog process
picks up the return status (which under Unix tells you that the
process was terminated by an unhandled signal, and has generated
a core dump), and generates the relevant log entries.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

ka...@gabi-soft.fr

Jul 5, 2005, 9:12:21 AM
Niklas Matthies wrote:
> On 2005-07-03 10:19, Maxim Yegorushkin wrote:
> > On Sat, 02 Jul 2005 14:23:03 +0400, Alf P. Steinbach wrote:

> >>> If the assertion fails when there is no debugger, how do
> >>> you expect the program to recover?

> >> That's actually a good _C++_ question... ;-)

> >> First, the reason why one would like to 'throw' in this
> >> case, which is usually not to recover in the sense of
> >> continuing normal execution, but to recover enough to do
> >> useful logging, reporting and graceful exit on an end-user
> >> system with no debugger and other programmer's tools
> >> (including, no programmer's level understanding of what
> >> goes on).

> > Why would one want a graceful exit when code is broken,
> > rather than dying as loud as possible leaving a core dump
> > with all state preserved, rather than unwound?

> Because the customer expects and demands it. Actually, more
> often than not the customer even demands graceful resumption.

I guess it depends on the customer. None of my customers would
ever have accepted anything but "aborting" anytime we were
unsure of the data. Most of the time, trying to continue the
program after an assertion failed would have been qualified as
"grobe Fahrlässigkeit" -- the correct translation is
"gross negligence".

But I'm not sure that that was the question. My impression
wasn't that people were saying, continue, even if you don't know
what you are doing. My impression was that we were discussing
the best way to shut the program down; basically: with or
without stack walkback. Which can be summarized by something
along the lines of: trying to clean up, but risk doing something
bad, or get out as quickly as possible, with as little risk as
possible, and leave the mess.

My experience (for the most part, in systems which are more or
less critical in some way, and under Unix) is that the operating
system will clean up most of the mess anyway, and that any
attempts should be carefully targeted, to minimize the risk.
Throwing an exception means walking back the stack, which in
turn means executing a lot of unnecessary and potentially
dangerous destructors. I don't think that the risk is that
great, typically, but it is very, very difficult, if not
impossible, to really evaluate. For example, I usually have
transaction objects on the stack. Calling the destructor
without having called commit should normally provoke a roll
back. But if I'm unsure of the global invariants of the
process, it's a risk I'd rather not take; maybe the destructor
will misinterpret some data, and cause a commit, although the
transaction didn't finish correctly. Where as if I abort, the
connection to the data base is broken (by the OS), and the data
base automatically does its roll back in this case. Why take
the risk (admittedly very small), when a solution with zero risk
exists?

But this is based on my personal experience. I can imagine that
in the case of a light weight graphical client, for example, the
analysis might be different. About all that can go wrong is
that the display is all messed up, and in this case, the user
will kill the process and restart it manually. And of course,
you might luck out, the user might not even notice, and you can
pull one over on him.

Still, if what the program is doing is important, and not just
cosmetic, you must take into account that if the program
invariants don't hold, it may do something wrong. If doing
something wrong can have bad consequences (and not just cosmetic
effects), then you really should limit what you try to do to a
minimum. Regardless of what some naive user might think. In
such cases, walking back the stack calling destructors
represents a significantly greater risk than explicitly
executing a very limited set of targeted clean-up operations.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

David Abrahams

Jul 5, 2005, 1:53:00 PM
al...@start.no (Alf P. Steinbach) writes:

> * Peter Dimov:
>> > >
>> > > No recovery is possible after a failed assert.
>>
>> [The above] means that performing stack unwinding after a failed
>> assert is usually a bad idea.
>
> I didn't think of that interpretation, but OK.
>
> The interpretation, or rather, what you _meant_ to say in the first
> place,

AFAICT that was a *conclusion* based on what Peter had said before.

> is an opinion, which makes it more difficult to discuss.


> After a failed assert it's known that something, which could be anything
> (e.g. full corruption of memory), is wrong. Attempting to execute even
> one teeny tiny little instruction might do unimaginable damage. Yet you
> think it's all right to not only terminate the process but also to log
> things, which involves file handling, as long as one doesn't do a stack
> rewind up from the point of the failed assert.

There are two problems with stack unwinding at that point:

1. It executes more potentially damaging instructions than necessary,
since none of the destructors or catch blocks involved have
anything to do with process termination or logging.

2. The flow of execution proceeds into logic that is usually involved
with the resumption of normal execution, and it's easy to end up
back in code that executes as though everything is still fine.

> This leads me to suspect that you're confusing a failed assert with
> a corrupted stack, or that you think that a failure to clean up 100%
> might be somehow devastating. Anyway, an explanation of your
> opinion would be great, and this time, please write what you mean,
> not something entirely different.

Ouch.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Peter Dimov

Jul 5, 2005, 1:49:28 PM
David Abrahams wrote:

> "Peter Dimov" <pdi...@gmail.com> writes:
>
> > Either
> >
> > (a) you go the "correct program" way and use assertions to verify that your
> > expectations match the observed behavior of the program, or
> >
> > (b) you go the "resilient program" way and use exceptions in an attempt to
> > recover from certain situations that may be caused by bugs.
> >
> > (a) implies that whenever an assert fails, the program no longer behaves as
> > expected, so everything you do from this point on is based on _hope_ that
> > things aren't as bad.
> >
> > (b) implies that whenever stack unwinding might occur, you must assume that
> > the conditions that you would've ordinarily tested with an assert do not
> > hold.
>
> And while it is possible to do (b) in a principled way, it's much more
> difficult than (a), because once you unwind and return to "normal"
> code with the usual assumptions about program integrity broken, you
> have to either:
>
> 1. Test every bit of data obsessively to make sure it's still
> reasonable, or
>
> 2. Come up with a principled way to decide which kinds of
> brokenness you're going to look for and try to circumvent, and
> which invariants you're going to assume still hold.
>
> In practice, I think doing a complete job of (1) is really impossible,
> so you effectively have to do (2).

It's possible to do (b) when you know that the stack unwinding will
completely destroy the potentially corrupted state, and it seems
possible - in theory - to write programs this way.

That is, instead of:

void menu_item_4()
{
    frobnicate( document );
}

one might write

void menu_item_4()
{
    Document tmp( document );

    try
    {
        frobnicate( tmp );
        document.swap( tmp );
    }
    catch( exception const & x )
    {
        // maybe attempt autosave( document ) here
        report_error( x );
    }
}

Even if there are bugs in frobnicate, if it doesn't leave tmp in an
undestroyable state, it's possible to continue.

I still prefer (a), of course. :-)

David Abrahams

Jul 6, 2005, 5:30:29 AM
al...@start.no (Alf P. Steinbach) writes:

>> And while it is possible to do (b) in a principled way, it's much more
>> difficult than (a), because once you unwind and return to "normal"
>> code with the usual assumptions about program integrity broken, you
>> have to either:
>>
>> 1. Test every bit of data obsessively to make sure it's still
>> reasonable, or
>>
>> 2. Come up with a principled way to decide which kinds of
>> brokenness you're going to look for and try to circumvent, and
>> which invariants you're going to assume still hold.
>>
>> In practice, I think doing a complete job of (1) is really impossible,
>> so you effectively have to do (2).
>
> [Here responding to David Abraham's statement:]
>
> I think your points (1) and (2) summarizes approach (b) well, and show
> that it's not a technique one would choose if there was a choice.

I think that's what Peter meant when he wrote "performing stack
unwinding after a failed assert is *usually* a bad idea."

> But as mentioned above, it's an extreme

What is an extreme?

> although in some other languages (e.g. Java) you get null-pointer
> exceptions & the like.

IMO null-pointer exceptions are a joke; it's a way of claiming that
the language is typesafe and making that sound bulletproof: all you do
is turn programming errors into exceptions with well-defined behavior!
Fantastic! Now my program goes on doing... something... even though
its state might be completely garbled.

>> Note also that once you unwind to "normal" code, information about
>> the particular integrity check that failed tends to get lost: all
>> the different throw points unwind into the same instruction stream,
>> so there really is a vast jungle of potential problems to consider.
>
> I agree, for the C++ exceptions we do have.
>
> If we did have some kind of "hard" exception supported by the
> language, or even just standardized and supported by convention,
> then the vast jungle of potential problems that stands in the way of
> further normal execution wouldn't matter so much: catching that hard
> exception at some uppermost control level you know that the process
> has to terminate, not continue with normal execution (which was the
> problem), and you know what actions you've designated for that case
> (also known by throwers, or at least known to be irrelevant to
> them), so that's what the code has to attempt to do.

And what happens if the corrupted programming state causes a crash
during unwinding (e.g. from a destructor)?

What makes executing the unwinding actions the right thing to do? How
do you know which unwinding actions will get executed at the point
where you detect that your program state is broken?

>> [snip]
> >> ...and make it someone else's problem. Code higher up the call stack
> >> might know how to deal with it, right? ;-)
>
> In the case of a "hard" exception it's not "might", it's a certainty.

?? I don't care how you flavor the exception; the appropriate recovery
action cannot always be known by the outermost layer of the program.
Furthermore, what the outermost layer of the program knows how to do
becomes irrelevant if there is a crash during unwinding.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


David Abrahams

Jul 6, 2005, 5:59:01 AM
"Peter Dimov" <pdi...@gmail.com> writes:

<snip example that copies program state, modifies, and swaps>

What you've just done -- implicitly -- is to decide which kinds of
brokenness you're going to look for and try to circumvent, and which
invariants you're going to assume still hold. For example, your
strategy assumes that whatever broke invariants in the copy of your
document didn't also stomp on the memory in the original document.
Part of what your strategy does is to increase the likelihood that
your assumptions will be correct, but if you're going to go down the
(b)(2) road in a principled way, you have to recognize where the
limits of your program's resilience are.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Alf P. Steinbach

Jul 6, 2005, 6:03:31 AM
* David Abrahams:

> al...@start.no (Alf P. Steinbach) writes:
>
> > * Peter Dimov:
> >> > >
> >> > > No recovery is possible after a failed assert.
> >>
> >> [The above] means that performing stack unwinding after a failed
> >> assert is usually a bad idea.
> >
> > I didn't think of that interpretation, but OK.
> >
> > The interpretation, or rather, what you _meant_ to say in the first
> > place,
>
> AFAICT that was a *conclusion* based on what Peter had said before.

"It's impossible to do stack unwinding, therefore it's usually a bad
idea to do stack unwinding." I didn't think of that. It's, uh...


> > is an opinion, which makes it more difficult to discuss.
>
>
> > After a failed assert it's known that something, which could be anything
> > (e.g. full corruption of memory), is wrong. Attempting to execute even
> > one teeny tiny little instruction might do unimaginable damage. Yet you
> > think it's all right to not only terminate the process but also to log
> > things, which involves file handling, as long as one doesn't do a stack
> > rewind up from the point of the failed assert.
>
> There are two problems with stack unwinding at that point:
>
> 1. It executes more potentially damaging instructions than necessary,

That is an unwarranted assumption.

In practice the opposite is probably more likely for many classes of
applications, but it does depend: there are situations where stack unwinding
is not advisable, and there are situations where it is advisable.

The decision is just like the judgement call of assert versus
exception versus return value: you judge the severity, the consequences
of this or that way of handling it, what you have time for (:-)), even
what maintenance programmers are likely to understand, etc.


> since none of the destructors or catch blocks involved have
> anything to do with process termination or logging.

Ditto, the above is just assumptions, but you might turn it around
and get a valid statement: _if_ the assumptions above hold, then that
is a situation where stack unwinding would perhaps not be a good idea.


> 2. The flow of execution proceeds into logic that is usually involved
> with the resumption of normal execution, and it's easy to end up
> back in code that executes as though everything is still fine.

Not sure what you mean by "resumption of normal execution" since
destructors are very much oriented towards directly terminating in
such cases. Destructors are the most likely actors that may terminate
the process if further problems manifest themselves. Either by directly
calling abort or terminate, or by throwing (where C++ rules guarantee
termination).

Regarding "code that executes as though everything is still fine":

First of all, most likely everything at higher levels _is_ just as fine,
or not, as it ever was: when you detect that null-pointer argument (say) it
doesn't usually mean more than a simple typo or the like at the calling
site. That goes into the equation for likely good versus bad of doing this
or that. Secondly, destructors are designed to make things fine if they
aren't already: they're cleaners, clean-up routines, and after a destructor
has run successfully there's no longer that class invariant that can be bad,
so by cleaning up you're systematically removing potential badness.


> > This leads me to suspect that you're confusing a failed assert with
> > a corrupted stack, or that you think that a failure to clean up 100%
> > might be somehow devastating. Anyway, an explanation of your
> > opinion would be great, and this time, please write what you mean,
> > not something entirely different.
>
> Ouch.

I wouldn't and didn't put it that way.

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


Alf P. Steinbach

Jul 6, 2005, 6:40:15 AM
* David Abrahams:

>
> > But as mentioned above, it's an extreme
> ^^^^
> What is an extreme?

Peter's approach (b). Also approach (a), but here the reference was
to approach (b).


> > although in some other languages (e.g. Java) you get null-pointer
> > exceptions & the like.
>
> IMO null-pointer exceptions are a joke; it's a way of claiming that
> the language is typesafe and making that sound bulletproof: all you do
> is turn programming errors into exceptions with well-defined behavior!
> Fantastic! Now my program goes on doing... something... even though
> its state might be completely garbled.

Agreed.


> > If we did have some kind of "hard" exception supported by the
> > language, or even just standardized and supported by convention,
> > then the vast jungle of potential problems that stands in the way of
> > further normal execution wouldn't matter so much: catching that hard
> > exception at some uppermost control level you know that the process
> > has to terminate, not continue with normal execution (which was the
> > problem), and you know what actions you've designated for that case
> > (also known by throwers, or at least known to be irrelevant to
> > them), so that's what the code has to attempt to do.
>
> And what happens if the corrupted programming state causes a crash
> during unwinding (e.g. from a destructor)?

The same as would have happened if we'd initiated that right away: the
difference is that it doesn't have to happen, and generally won't, and
even if it does one might have collected useful info on the way. ;-)


> What makes executing the unwinding actions the right thing to do? How
> do you know which unwinding actions will get executed at the point
> where you detect that your program state is broken?

Both questions are impossible to answer due to built-in assumptions. For
the first question, there is no more "the right thing" than there is "best",
out of context. For the second question, you generally don't know the
unwinding actions, and, well, you know that... ;-)


> >> [snip]
> >> ....and make it someone else's problem. Code higher up the call stack
> >> might know how to deal with it, right? ;-)
> >
> > In the case of a "hard" exception it's not "might", it's a certainty.
>
> ?? I don't care how you flavor the exception; the appropriate recovery
> action cannot always be known by the outermost layer of the program.

When you get to the outermost layer you are (or rather, the program is)
already mostly recovered: you have logged the most basic information (of
course you did that right away, before unwinding), you have removed lots of
potential badness (due to destructors cleaning up), and perhaps as part of
that logged more rich & useful state information, and now all you have to do
is to attempt to do more chancy high level error reporting before terminating
in a hopefully clean way -- if you like to do very dangerous things, as they
evidently do at Microsoft, you could also at this point store descriptions
of the state and attempt a Bird Of Phoenix resurrection, a.k.a. restart.


> Furthermore, what the outermost layer of the program knows how to do
> becomes irrelevant if there is a crash during unwinding.

A crash during unwinding is OK: it's no worse than what you would have had
with no unwinding. Two main potential problems are (1) a hang, which can
occur during cleanup, and (2) unforeseen bad effects such as trashing data
on the disk or committing a transaction erroneously. Such potential problems
will have to be weighed against the probability of obtaining failure data
at all, obtaining rich failure data, pleasing or at least not seriously
annoying the user (in the case of an interactive app), etc., in each case.

Cheers,

- Alf

--
A: Because it messes up the order in which people normally read text.
Q: Why is it such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?


David Abrahams

Jul 6, 2005, 6:01:21 PM
al...@start.no (Alf P. Steinbach) writes:

> * David Abrahams:
>> al...@start.no (Alf P. Steinbach) writes:
>>
>> > * Peter Dimov:
>> >> > >
>> >> > > No recovery is possible after a failed assert.
>> >>
>> >> [The above] means that performing stack unwinding after a failed
>> >> assert is usually a bad idea.
>> >
>> > I didn't think of that interpretation, but OK.
>> >
>> > The interpretation, or rather, what you _meant_ to say in the first
>> > place,
>>
>> AFAICT that was a *conclusion* based on what Peter had said before.
>
> "It's impossible to do stack unwinding, therefore it's usually a bad
> idea to do stack unwinding." I didn't think of that. It's, uh...

You clipped everything but the first sentence of Peter's paragraph,
which makes what he's saying look like a simpleminded tautology, and
now you're ridiculing it. Nice.

>> > After a failed assert it's known that something, which could be anything
>> > (e.g. full corruption of memory), is wrong. Attempting to execute even
>> > one teeny tiny little instruction might do unimaginable damage. Yet you
>> > think it's all right to not only terminate the process but also to log
>> > things, which involves file handling, as long as one doesn't do a stack
>> > rewind up from the point of the failed assert.
>>
>> There are two problems with stack unwinding at that point:
>>
>> 1. It executes more potentially damaging instructions than necessary,
>
> That is an unwarranted assumption.

Let's ignore, for the moment, that the runtime library's unwinding
code is being executed and _its_ invariants may have been violated.

If you arrange your program so that the only automatic objects with
nontrivial destructors you use are specifically designed to do
something when the program state is broken, then you will be executing
the minimal number of potentially damaging instructions. Do you think
that's a likely scenario? I would hate to write programs under a
restriction that I could only use automatic objects with nontrivial
destructors to account for a condition that should never occur. It
would also prevent the use of most libraries.

> In practice the opposite is probably more likely for many classes of
> applications,

Wow, so many classes of applications actually can be written that way? I
presume they wouldn't use the standard (or any other) libraries.

>> 2. The flow of execution proceeds into logic that is usually involved
>> with the resumption of normal execution, and it's easy to end up
>> back in code that executes as though everything is still fine.
>
> Not sure what you mean with "resumption of normal execution"

I mean what happens in most continuously running programs (not
single-pass translators like compilers) after error reporting or
exception translation at the end of a catch block that doesn't end
with a rethrow.

> since destructors are very much oriented towards directly
> terminating in such cases.

Destructors are oriented towards destroying an object, and have no
particular relationship to overall error-handling strategies.

> Destructors are the most likely actors that may terminate the
> process if further problems manifest themselves.

Destructors intentionally taking a program down is a common idiom in
the code you've seen? Can you show me even one example of that in
open source code somewhere? I'm really curious.

> Either by directly calling abort or terminate, or by throwing (where
> C++ rules guarantee termination).

C++ rules don't guarantee termination if a destructor throws, unless
unwinding is already in progress.

> Regarding "code that executes as though everything is still fine":
>
> First of all, most likely everything at higher levels _is_ just as
> fine,

That is an unwarranted assumption.

> or not, as it ever was: when you detect that null-pointer argument (say) it
> doesn't usually mean more than a simple typo or the like at the calling

That is an unwarranted assumption.

> That goes into the equation for likely good versus bad of doing this
> or that.

Yes. Those equations are what make (b) difficult. There are no
disciplined ways of understanding the risks of the choices you have to
make.

> Secondly, destructors are designed to make things fine if they
> aren't already: they're cleaners, clean-up routines, and after a
> destructor has run successfully there's no longer that class
> invariant that can be bad, so by cleaning up you're systematically
> removing potential badness.

It's easy to imagine that running a bunch of destructors increases
"badness," e.g. by leaving dangling pointers behind.

>> > This leads me to suspect that you're confusing a failed assert with
>> > a corrupted stack, or that you think that a failure to clean up 100%
>> > might be somehow devastating. Anyway, an explanation of your
>> > opinion would be great, and this time, please write what you mean,
>> > not something entirely different.
>>
>> Ouch.
>
> I wouldn't and didn't put it that way.

I didn't rewrite your text at all. Unless someone is spoofing you,
you put it exactly that way.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


David Abrahams

Jul 6, 2005, 6:02:52 PM
al...@start.no (Alf P. Steinbach) writes:

>> > If we did have some kind of "hard" exception supported by the
>> > language, or even just standardized and supported by convention,
>> > then the vast jungle of potential problems that stands in the way of
>> > further normal execution wouldn't matter so much: catching that hard
>> > exception at some uppermost control level you know that the process
>> > has to terminate, not continue with normal execution (which was the
>> > problem), and you know what actions you've designated for that case
>> > (also known by throwers, or at least known to be irrelevant to
>> > them), so that's what the code has to attempt to do.
>>
>> And what happens if the corrupted programming state causes a crash
>> during unwinding (e.g. from a destructor)?
>
> The same as would have happened if we'd initiated that right away:

Initiated what?

> the difference is that it doesn't have to happen,

Difference between what and what?

> and generally won't,

Based on what do you say that?

> and even if it does one might have collected useful info on the
> way. ;-)

Maybe. As long as you're clear that it's wishful thinking. Also,
there seems to be little good reason to use unwinding to collect that
info. You can establish a chain of reporting frames and traverse that
when your precondition is violated without doing any unwinding (use
TLS to ensure thread safety if you need it).

>> What makes executing the unwinding actions the right thing to do? How
>> do you know which unwinding actions will get executed at the point
>> where you detect that your program state is broken?
>
> Both questions are impossible to answer due to built-in assumptions.
> For the first question, there is no more "the right thing" than
> there is "best", out of context.

That's my point exactly. The author of the code where the
precondition violation is detected doesn't _know_ the context, so she
can't know what is appropriate.

> For the second question, you generally don't know the unwinding
> actions, and, well, you know that... ;-)

Yes, also part of my point.

>> >> [snip]
>> >> ....and make it someone else's problem. Code higher up the call stack
>> >> might know how to deal with it, right? ;-)
>> >
>> > In the case of a "hard" exception it's not "might", it's a certainty.
>>
>> ?? I don't care how you flavor the exception; the appropriate recovery
>> action cannot always be known by the outermost layer of the program.
>
> When you get to the outermost layer you are (or rather, the program is)
> already mostly recovered: you have logged the most basic information (of
> course you did that right away, before unwinding),

That's not recovery! There's normally no recovering from broken
invariants, because except for a few places where you second-guess
your entire worldview as a programmer, your whole program has been
written with the assumption that they hold.

> you have removed lots of potential badness (due to destructors
> cleaning up),

That's pretty vague. What kind of "potential badness" do you think
gets removed?

> and perhaps as part of that logged more rich & useful state
> information,

Unwinding is totally unnecessary for that purpose.

> and now all you have to do is to attempt to do more
> chancy high level error reporting before terminating in a hopefully
> clean way -- if you like to do very dangerous things, as they
> evidently do at Microsoft, you could also at this point store
> descriptions of the state and attempt a Bird Of Phoenix
> resurrection, a.k.a. restart.

Again, unwinding is totally unnecessary for that purpose.

>> Furthermore, what the outermost layer of the program knows how to
>> do becomes irrelevant if there is a crash during unwinding.
>
> A crash during unwinding is OK: it's no worse than what you would
> have had with no unwinding.

Of course it is worse if you are postponing anything important until
the outer layer, because the crash will prevent you from getting to
that important action.

AFAICT, the only good reason to unwind is so that you can resume
normal execution. If you're not going to do that, unwinding just
destroys important debugging information and makes your program more
vulnerable to crashes that may occur due to the execution of
noncritical destructors and catch blocks.

--
Dave Abrahams
Boost Consulting
www.boost-consulting.com


Bob Bell

Jul 6, 2005, 6:46:11 PM
Alf P. Steinbach wrote:
> * David Abrahams:

> > Furthermore, what the outermost layer of the program knows how to do
> > becomes irrelevant if there is a crash during unwinding.
>
> A crash during unwinding is OK: it's no worse than what you would have had
> with no unwinding.

You seem to be confusing "crash" and "abort".

Bob

Bob Bell

Jul 6, 2005, 6:46:36 PM
David Abrahams wrote:
> "Peter Dimov" <pdi...@gmail.com> writes:
> > David Abrahams wrote:
> >> "Peter Dimov" <pdi...@gmail.com> writes:
> >>
> >> > Either
> >> >
> >> > (a) you go the "correct program" way and use assertions to verify that your
> >> > expectations match the observed behavior of the program, or
> >> >
> >> > (b) you go the "resilient program" way and use exceptions in an attempt to
> >> > recover from certain situations that may be caused by bugs.

[snip]

> > It's possible to do (b) when you know that the stack unwinding will
> > completely destroy the potentially corrupted state, and it seems
> > possible - in theory - to write programs this way.
>
> <snip example that copies program state, modifies, and swaps>
>
> What you've just done -- implicitly -- is to decide which kinds of
> brokenness you're going to look for and try to circumvent, and which
> invariants you're going to assume still hold. For example, your
> strategy assumes that whatever broke invariants in the copy of your
> document didn't also stomp on the memory in the original document.
> Part of what your strategy does is to increase the likelihood that
> your assumptions will be correct, but if you're going to go down the
> (b)(2) road in a principled way, you have to recognize where the
> limits of your program's resilience are.

And recognize that where those limits are exceeded, you're back to (a)
anyway.

Bob

Gerhard Menzl

Jul 7, 2005, 10:43:30 AM
ka...@gabi-soft.fr wrote:

> My experience (for the most part, in systems which are more or
> less critical in some way, and under Unix) is that the operating
> system will clean up most of the mess anyway, and that any
> attempts should be carefully targeted, to minimize the risk.
> Throwing an exception means walking back the stack, which in
> turn means executing a lot of unnecessary and potentially
> dangerous destructors. I don't think that the risk is that
> great, typically, but it is very, very difficult, if not
> impossible, to really evaluate. For example, I usually have
> transaction objects on the stack. Calling the destructor
> without having called commit should normally provoke a roll
> back. But if I'm unsure of the global invariants of the
> process, it's a risk I'd rather not take; maybe the destructor
> will misinterpret some data, and cause a commit, although the
> transaction didn't finish correctly. Where as if I abort, the
> connection to the data base is broken (by the OS), and the data
> base automatically does its roll back in this case. Why take
> the risk (admittedly very small), when a solution with zero risk
> exists?

I think the problem with this discussion is that no-one seems to agree
about what we mean by global invariants and what kinds of programs we
are talking about. When flight control software encounters a negative
altitude value, it had better shut down (and, hopefully, let the backup
system take over). On the other hand, a word processor that aborts and
destroys tons of unsaved work just because the spellchecker has met a
violated invariant is just unacceptable.

It is generally agreed that modularity, loose coupling, and
encapsulation are cornerstones of good software design. Provided these
principles are adhered to, I wonder whether global invariants (or
preconditions) that require immediate shutdown when violated are really
as common as this discussion seems to suggest they are.

In my experience, the distinction is rarely that clear-cut, at least not
in interactive, user-centric systems. For example, right now I am
working on the front-end of a multi-user telecommunication system that
needs a central database for some, but not all operations. A corrupted
database table would certainly constitute violated preconditions, yet a
shutdown in such a case would be out of the question. Our customer
insists - justifiably - that operations which do not rely on database
transactions, such as emergency calls, continue to function even if the
database connection is completely broken.


--
Gerhard Menzl

#dogma int main ()

Humans may reply by replacing the thermal post part of my e-mail address
with "kapsch" and the top level domain part with "net".

Alf P. Steinbach

Jul 7, 2005, 8:38:17 PM
* David Abrahams:
> al...@start.no (Alf P. Steinbach) writes:
>
> > * David Abrahams:
> >> al...@start.no (Alf P. Steinbach) writes:
> >>
> >> > * Peter Dimov:
> >> >> > >
> >> >> > > No recovery is possible after a failed assert.
> >> >>
> >> >> [The above] means that performing stack unwinding after a failed
> >> >> assert is usually a bad idea.
> >> >
> >> > I didn't think of that interpretation, but OK.
> >> >
> >> > The interpretation, or rather, what you _meant_ to say in the first
> >> > place,
> >>
> >> AFAICT that was a *conclusion* based on what Peter had said before.
> >
> > "It's impossible to do stack unwinding, therefore it's usually a bad
> > idea to do stack unwinding." I didn't think of that. It's, uh...
>
> You clipped everything but the first sentence of Peter's paragraph,

It so happens that I agree with the literal interpretation of the rest
(although not with the sense it imparts). I.e. there was nothing to
discuss there with Peter, and no need to quote. For completeness, here's
what I clipped and agree literally with, emphasis added:

    A failed assert means that we no longer _know_ what's going on. [Right]

    Generally logging and reporting should be done at the earliest
    opportunity [right again, although what can be logged/reported at
    that early moment, and of what use it can be, is very restricted];
    if you attempt to "recover" you may be terminated and no longer be
    able to log or report [right, and that holds for anything you do].


> which makes what he's saying look like a simpleminded tautology,

I don't think Peter is simpleminded, quite the opposite, and anyway,
that discussion is off-topic and not one I'd like to participate in.


> and now you're ridiculing it. Nice.

If showing that a statement is incorrect, by quoting the parts it
refers to, is ridicule, then I ridiculed your statement. However,
quoting is normally not considered ridicule. You're off-topic
both regarding Peter's alleged intellectual capacity and my
alleged choice of rhetorical tools.

Cheers,

- Alf