C++ stack overflow and exception safety

christopher

Dec 5, 2002, 4:09:26 PM
The question of stack overflow and exception safety has been nagging
me lately. I think the fact that C++ does not throw on a call stack
overflow is rather archaic. In fact, on modern OSes (OK, say, NT-based
systems) I believe the following lines have nearly the same
characteristics for potential memory failure; the difference is that on
failure new throws a C++ exception, while on NT a SEH is raised if memory
cannot be allocated to increase stack space:

foo(a);

...

p = new bar();

What I find odd is that, as C++ application programmers, we have the
fact that new can fail and throw beaten into our heads from the day we
learn what new is. But at the same time a function call is generally
considered innocuous unless recursion comes into play. I would argue
that in a low-memory situation either is as likely to fail, but the
application would almost never be prepared to handle the failure from
the inability to increase stack space.
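
To make the comparison concrete, here is a minimal Win32/MSVC sketch (not
from the post above; the sizes, names and use of __try are illustrative
assumptions) in which heap exhaustion surfaces as std::bad_alloc while a
failed stack-growth surfaces only as the structured exception
EXCEPTION_STACK_OVERFLOW:

// Minimal Win32/MSVC sketch; sizes, names and recursion depth are
// illustrative only.
#include <windows.h>

#include <cstddef>
#include <cstdio>
#include <new>

static void deep_recursion(volatile char* p)
{
    volatile char frame[4096];          // each call commits roughly a page
    frame[0] = p ? p[0] : 0;
    deep_recursion(frame);              // recurse until commit fails
    frame[1] = frame[0];                // used after the call: not a tail call
}

// MSVC does not allow __try and try/catch in one function, so the stack
// probe lives in a function of its own.
static void probe_stack()
{
    __try {
        deep_recursion(0);
    } __except (GetExceptionCode() == EXCEPTION_STACK_OVERFLOW
                    ? EXCEPTION_EXECUTE_HANDLER
                    : EXCEPTION_CONTINUE_SEARCH) {
        std::puts("stack growth failure: EXCEPTION_STACK_OVERFLOW (SEH)");
        // _resetstkoflw() would be needed before the guard page works again.
    }
}

int main()
{
    try {
        new char[static_cast<std::size_t>(-1) / 2];   // absurdly large request
    } catch (const std::bad_alloc&) {
        std::puts("heap exhaustion: std::bad_alloc (C++ exception)");
    }
    probe_stack();
}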

I also think there is a general lack of understanding about what can
be done in low memory situations. What happens for instance if there
is not enough memory for the runtime to unwind the stack? I think it
fair to say that most Windows applications, including those shipped
from Redmond, don't do anything graceful in low memory situations.

I also have the notion that there is no such thing as a safe operation
in C++, and that an OS failure could occur on any line of code, but
I'm finding that more difficult to prove. My colleagues believe my
assertion that something as simple as pointer assignment could
potentially fail at the OS level (again with a SEH on NT) absurd. I
have to admit finding such a case is difficult, but my feeling is that
it is still theoretically possible with compilers which aggressively
optimize for efficient memory usage over performance.

christopher baus

Senior Software Engineer
Zephyr Associates

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]
[ about comp.lang.c++.moderated. First time posters: do this! ]

Alexander Terekhov

Dec 5, 2002, 6:19:27 PM

christopher wrote:
>
> The question of stack overflow and exception safety has been nagging
> me lately. I think the fact that C++ does not throw on a call stack
> overflow is rather archaic. In fact on modern OSes (ok say NT based
> systems) I believe the following lines have nearly the same
> characteristics for potential memory failure, but in the case of
> failure new throws a C++ exception and on NT a SEH is raised if
> memory ...

Well, that's a rather interesting topic... For now, I'd like to drop
the following links...

<http://tinyurl.com/39nx>
(comp.std.c++, Subject: Re: C++ exception handling)

<http://tinyurl.com/39nz>
(c.p.t., c.l.c++, Subject: Re: stack size vs. thread stack size)

<http://tinyurl.com/39o1> and <http://tinyurl.com/39oa>
(comp.std.c++, Subject: Re: Some issues with Technical Report...)

...illustrating stack overflow exceptions in action and, more
importantly, "rasing" some questions [NOT answered, thus far].

TIA [for eventual answer(s), I mean ;-) ].

regards,
alexander.

christopher

Dec 7, 2002, 2:41:14 PM
> The question of stack overflow and exception safety has been nagging
> me lately. I think the fact that C++ does not throw on a call stack
> overflow is rather archaic. In fact on modern OSes (ok say NT based
> systems) I believe the following lines have nearly the same
> characteristics for potential memory failure, but in the case of
> failure new throws a C++ exception and on NT a SEH is raised if memory
> can not be allocated to increase stack space:
>
Let me change "allocated" to "committed", as that is what win32 really
does. While Sutter's treatment of exception safety is very thorough,
I find the lack of discussion on exceptional stack situations to be
overly optimistic when the discussion centers on "strong" exception
safety.

While swap is considered a nothrow operation, it can result in a SEH
when physical memory is limited.

IMHO under modern OSes stack allocation and dynamic heap allocation
are very similar, yet have very different treatments in the language
and by application programmers.

In historical contexts stack and heap were physically different. I
believe modern memory management systems blur the distinction.

christopher

David Abrahams

Dec 8, 2002, 9:07:11 AM
chris...@baus.net (christopher) writes:

>> The question of stack overflow and exception safety has been nagging
>> me lately. I think the fact that C++ does not throw on a call stack
>> overflow is rather archaic. In fact on modern OSes (ok say NT based
>> systems) I believe the following lines have nearly the same
>> characteristics for potential memory failure, but in the case of
>> failure new throws a C++ exception and on NT a SEH is raised if memory
>> can not be allocated to increase stack space:
>>
> Let me change "allocated" to "committed", as that is what win32 really
> does. While Sutter's treatment of exception safety is very thorough,
> I find the lack of discussion on exceptional stack situations to be
> overly optimistic when the discussion centers on "strong" exception
> safety.

One important reason that stack overflows should never be turned into
C++ exceptions is that to write exception-safe code, you need to know
that some operations simply don't throw. As you say, Herb's discussion
centers on the strong guarantee, and like most discussions of
exception-safety, tends to miss the big picture. All three of the
guarantees are usually needed to cooperate in producing exception-safe
systems. You can read more about this in my short 1998 paper at
http://www.boost.org/more/generic_exception_safety.html**.

If stack overflows can generate exceptions, then /any/ operation could
potentially throw an exception. If an exception propagates from a
region of code which was supposed to be no-throw, it's very likely
that the program's invariants will be broken. Suddenly, there is no
way to distinguish a recoverable condition, like "filesystem full"
from an unrecoverable one in which the program cannot reliably
continue.

Stack overflows, like null pointer dereferences and invalid
instructions, should be handled by traps or other mechanisms outside
the C++ exception-handling system.
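
For instance, a rough POSIX sketch of handling it outside the EH system
(SIGSEGV delivery on overflow and the 64K figure are assumptions about the
platform, not anything from Dave's post): field the overflow on an
alternate signal stack, log with async-signal-safe calls, and terminate
without any unwinding.

#include <signal.h>
#include <unistd.h>

static char alt_stack[64 * 1024];

extern "C" void on_overflow(int)
{
    // Only async-signal-safe calls here: write a note and die, no unwinding.
    const char msg[] = "stack overflow: terminating\n";
    (void)write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main()
{
    stack_t ss = {};
    ss.ss_sp = alt_stack;               // the handler needs its own stack,
    ss.ss_size = sizeof alt_stack;      // since the normal one is exhausted
    sigaltstack(&ss, 0);

    struct sigaction sa = {};
    sa.sa_handler = on_overflow;
    sa.sa_flags = SA_ONSTACK;           // run the handler on the alternate stack
    sigaction(SIGSEGV, &sa, 0);

    // ... application proper ...
}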

-Dave

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution


**originally published in M. Jazayeri, R. Loos, D. Musser (eds.):
Generic Programming, Proc. of a Dagstuhl Seminar, Lecture Notes on
Computer Science 1766 - I'm required to add this note when linking
to that document.

Dave Harris

Dec 8, 2002, 5:31:24 PM
chris...@baus.net (christopher) wrote (abridged):

> The question of stack overflow and exception safety has been nagging
> me lately. I think the fact that C++ does not throw on a call stack
> overflow is rather archaic. In fact on modern OSes (ok say NT based
> [...]

As I recall, C++ makes stack overflow undefined behaviour. Since throwing
an exception is a legal implementation of undefined behaviour, the
behaviour you desire is permitted.

An exception should not be mandated on all platforms, if only because for
some platforms the overhead of testing would be too great.


> My colleagues believe my assertion that something as simple as
> pointer assignment could potentially fail at the OS level (again with
> a SEH on NT) absurd.

It is allowed to fail if the source is uninitialised, or the result of
certain casts. Eg:
    int *p1;
    int *p2 = p1;   // May fail.

> I have to admit finding such a case is difficult, but my feeling is
> that it is still theoretically possible with compilers which
> aggressively optimize for efficient memory usage over performance.

I don't think such an optimisation would conform to the C++ standard,
because it changes the visible behaviour of the program.

Dave Harris, Nottingham, UK | "Weave a circle round him thrice,
bran...@cix.co.uk | And close your eyes with holy dread,
| For he on honey dew hath fed
http://www.bhresearch.co.uk/ | And drunk the milk of Paradise."

Brett Gossage

Dec 9, 2002, 1:32:00 PM
<-- agreed to stuff snipped -->

> Suddenly, there is no
> way to distinguish a recoverable condition, like "filesystem full"
> from an unrecoverable one in which the program cannot reliably
> continue.

Whoa, how do you get to this apocalypse? If you arrive at
"catch( FileSystemFull& )" then you know you didn't run out of stack space
or have a no-throw somewhere in the call stack. I can imagine running out
of stack space in a thread and having a handler ready to catch
"std::stack_overflow", give the thread more stack space, and try again. In
this case, stack overflow is only unrecoverable wrt a given thread - not
the entire program. I know, the Standard doesn't address threads, but it
has to sooner or later.

Perhaps the original poster's question deserves a closer look.

Ken Hagan

Dec 9, 2002, 3:33:22 PM
"christopher" <chris...@baus.net> wrote...

>
> IMHO under modern OSes stack allocation and dynamic heap allocation
> are very similar, yet have very different treatments in the language
> and by application programmers.
>
> In historical contexts stack and heap were physically different. I
> believe modern memory management systems blur the distinction.

Physically, yes, but I think there is a logical distinction.

In the absence of recursion, the worst case stack usage for a whole
program can be determined when the program is linked. The worst case
heap usage generally depends on input data.

In the presence of recursion more careful arguments are required,
but if all the recursions bottom out somewhere it still ought to
be possible to predict worst case usage.

Therefore, stack overflows are in principle avoidable whereas heap
overflows aren't. Therefore, a stack overflow is a programming error
but a heap overflow is something we just have to be prepared for.

christopher

Dec 9, 2002, 4:28:01 PM
David Abrahams <da...@boost-consulting.com> wrote in message news:<uadjhe...@boost-consulting.com>...

> chris...@baus.net (christopher) writes:
>
> >> The question of stack overflow and exception safety has been nagging
> >> me lately. I think the fact that C++ does not throw on a call stack
> >> overflow is rather archaic. In fact on modern OSes (ok say NT based
> >> systems) I believe the following lines have nearly the same
> >> characteristics for potential memory failure, but in the case of
> >> failure new throws a C++ exception and on NT a SEH is raised if memory
> >> can not be allocated to increase stack space:
> >>
> > Let me change "allocated" to "committed", as that is what win32 really
> > does. While Sutter's treatment of exception safety is very thorough,
> > I find the lack of discussion on exceptional stack situations to be
> > overly optimistic when the discussion centers on "strong" exception
> > safety.
>
> One important reason that stack overflows should never be turned into
> C++ exceptions is that to write exception-safe code, you need to know
> that some operations simply don't throw. As you say, Herb's discussion
> centers on the strong guarantee, and like most discussions of
> exception-safety, tends to miss the big picture. All three of the
> guarantees are usually needed to cooperate in producing exception-safe
> systems. You can read more about this in my short 1998 paper at
> http://www.boost.org/more/generic_exception_safety.html**.

> If stack overflows can generate exceptions, then /any/ operation could
> potentially throw an exception. If an exception propagates from a

Thanks for making my point for me.


> Stack overflows, like null pointer dereferences and invalid
> instructions, should be handled by traps or other mechanisms outside
> the C++ exception-handling system.
>

I want to change the terminology from "stack overflow" to "inability
to commit stack memory within the bounds of the reserved stack address
space". That is technically different than stack overflow.

Then I'll add. Failure to allocate memory, like null pointer
dereferences and invalid instructions, should be handled by traps or
other mechanisms outside the C++ exception-handling system.

Alexander Terekhov

Dec 9, 2002, 7:53:12 PM

David Abrahams wrote:
[...]

> One important reason that stack overflows should never be turned into
> C++ exceptions is that to write exception-safe code, you need to know
> that some operations simply don't throw.

Obviously, nothrow... I'd say, *regions* shall NOT throw; making this
type of exception/trap/whatever-you-call-it unrecoverable IF it happens
to occur in such a "restricted" place. BTW, nothrow(*) stuff can NOT be
invoked inside async-cancel regions either, for example. However, to
me, this doesn't mean that stack overflow is a generally unrecoverable
thing. I think that you somewhat tend to miss the big picture here, so
to speak. <wink>

regards,
alexander.

(*) Well, basic and strong as well. ;-) There should be another type
of exception safety -- async-cancel-safety. Here's kinda-example
illustrating async-cancel [well, okay, somewhat silly one ;-) ].

#include <pthread.h>

#include <cassert>
#include <iostream>
using namespace std;

extern "C" void* terminator( void* ptr ) {
    int status;
    pthread_t thidInitialAKAmainThread = *static_cast< pthread_t* >(ptr);
    cout << "killing main()..." << flush;
    status = pthread_cancel( thidInitialAKAmainThread );
    assert( 0 == status );
    status = pthread_join( thidInitialAKAmainThread, 0 );
    assert( 0 == status );
    cout << " done/eliminated! bye-bye [now terminating the process]...\n";
    return 0;
}

int main() {
    int status;
    pthread_t thidArnie, thidInitialAKAmainThread = pthread_self();
    status = pthread_create( &thidArnie,
                             0,
                             &terminator,
                             &thidInitialAKAmainThread );
    assert( 0 == status );
    status = pthread_setcanceltype( PTHREAD_CANCEL_ASYNCHRONOUS,
                                    &status );
    assert( 0 == status );
    for ( ;; ) ;
}

Hyman Rosen

Dec 9, 2002, 7:52:54 PM
Ken Hagan wrote:

> In the presence of recursion more careful arguments are required,
> but if all the recursions bottom out somewhere it still ought to
> be possible to preduct worst case usage.


No. In the presence of recursion, maximum stack use is undecidable.
Here's a simple example of a hard case:

bool collatz(int n)
{
    if (n == 1) return true;
    if (n & 1) return collatz(3 * n + 1);
    return collatz(n / 2);
}

David Abrahams

Dec 10, 2002, 2:06:31 PM
Alexander Terekhov <tere...@web.de> writes:

> David Abrahams wrote:
> [...]
>> One important reason that stack overflows should never be turned into
>> C++ exceptions is that to write exception-safe code, you need to know
>> that some operations simply don't throw.
>
> Obviously, nothrow... I'd say, *regions* shall NOT throw; making this
> type of exception/trap/whatever-you-call-it unrecoverable IF it happens
> to occur in such "restricted" place.

Yes. Those "restricted" places happen all over the most trivial code,
not just in unwinding/recovery. Just look at any standard container
class for examples. So while there are probably a few areas where you
can recover from a stack overflow, in general you can't. Using
exceptions to recover from stack overflows is a shot in the dark which
could just as easily lead to worse problems.

> BTW, nothrow(*) stuff can NOT be invoked inside async-cancel regions

What is the definition of an async-cancel region? Are you seriously
suggesting there are regions of code where it's inappropriate to use a
nothrow operation (e.g. integer arithmetic)? Pretty much all C++ code
is built out of nothrow operations; it's hard to get anything done
without them.

> as well, for example. However, to me, this doesn't mean that stack
> overflow is generally unrecoverable thing. I think that you somewhat
> tend to miss the big picture here, so to speak. <wink>

<wink> to you too. If I'm missing something, you haven't done
anything to make it more apparent.

> regards,
> alexander.
>
> (*) Well, basic and strong as well. ;-)

That pretty much means that inside an async cancel region (whatever
that is) every operation must throw an exception /and/ break the
program's invariants. That's pretty restrictive (and useless), isn't
it?

> There should be another type of exception safety --
> async-cancel-safety. Here's kinda-example illustrating
> async-cancel [well, okay, somewhat silly one ;-) ].

Silly or not, it doesn't illustrate much. What about this code makes
it "safe"? What would be a corresponding "unsafe" version?

I think it would be a lot better, actually, if you took a moment to
describe what the problem is that "async-cancel-safety" is trying to
solve, in plain English. Accompanying code always helps, of course,
but without some conceptual background it's pretty useless.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

llewelly

Dec 10, 2002, 2:07:20 PM
"Ken Hagan" <K.H...@thermoteknix.co.uk> writes:

[snip]


> In the presence of recursion more careful arguments are required,
> but if all the recursions bottom out somewhere it still ought to
> be possible to predict worst case usage.

[snip]

No. Most large projects will have recursive functions whose depth is
dependent on input which is unknown at delivery time.

Paul D. DeRocco

Dec 10, 2002, 2:18:52 PM
> "David Abrahams" <da...@boost-consulting.com> wrote ...

>
> If stack overflows can generate exceptions, then /any/ operation could
> potentially throw an exception. If an exception propagates from a
> region of code which was supposed to be no-throw, it's very likely
> that the program's invariants will be broken. Suddenly, there is no
> way to distinguish a recoverable condition, like "filesystem full"
> from an unrecoverable one in which the program cannot reliably
> continue.
>
> Stack overflows, like null pointer dereferences and invalid
> instructions, should be handled by traps or other mechanisms outside
> the C++ exception-handling system.

This is true if you want to guarantee that an application always functions
"correctly", and aborts instantly if it can't live up to this guarantee.
This is appropriate for most PC applications, especially command line ones.
But in real-time systems, it's often better to turn the trap into an
exception, and hope for the best. In some situations, it will crash and
burn, but that's not as bad as always aborting. Often, it works just fine,
the error can be logged, and the application continues to run.

--

Ciao, Paul D. DeRocco
Paul mailto:pder...@ix.netcom.com

David Abrahams

Dec 10, 2002, 2:30:35 PM
"Brett Gossage" <bgossage3_nospam#!*!@comcast.net> writes:

> <-- agreed to stuff snipped -->
>
>> Suddenly, there is no
>> way to distinguish a recoverable condition, like "filesystem full"
>> from an unrecoverable one in which the program cannot reliably
>> continue.
>
> Whoa, how do you get to this apocalypse? If you arrive at "catch(
> FileSystemFull& )" then you know you didn't run out of stack space or
> have a no-throw somewhere in the call stack. I can imagine running
> out of stack space in a thread and having a handler ready to catch
> "std::stack_overflow", give the thread more stack space, and try
> again. In this case, stack overflow is only unrecoverable wrt a
> given thread - not the entire program. I know, the Standard doesn't
> address threads, but it has to sooner or later.

Sure, but any threading system worth its salt has a better way to
restart a thread than by using something which attempts to unwind the
stack, possibly touching corrupted resources. You should be able to
install a thread terminator which is invoked at the point the stack
overflow is detected. Once stack unwinding begins from a point in the
program that was coded to be a nothrow operation, the state of the
system is unreliable and destructors and catch clauses invoked during
unwinding may be touching corrupted or uninitialized data.

For a simple example, look at your std::vector::reserve()
implementation. Most std::vectors hold two pointers which describe the
range of constructed elements in the vector, essentially the results
of begin() and end(). If an exception is thrown due to a stack
overflow while the pointers are being updated during reserve(), one of
the pointers might point into the old memory, and the other might
point into the new memory. The vector's destructor walks the range of
items between begin() and end() destroying each element. What do you
suppose will happen if this vector happens to be destroyed during
unwinding?
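
A stripped-down sketch of that hazard (a toy vector, not any real
library's reserve(); the simplifications are marked) makes the window
between the two pointer updates visible:

#include <cstddef>
#include <new>

template <class T>
struct toy_vector {
    T* first_ = 0;                      // what begin() would return
    T* last_  = 0;                      // what end() would return

    void reserve(std::size_t n) {       // assumes n >= current size
        std::size_t sz = static_cast<std::size_t>(last_ - first_);
        T* fresh = static_cast<T*>(::operator new(n * sizeof(T)));
        for (std::size_t i = 0; i < sz; ++i)
            new (fresh + i) T(first_[i]);      // copy into the new buffer
        for (std::size_t i = 0; i < sz; ++i)
            first_[i].~T();
        ::operator delete(first_);
        first_ = fresh;      // (1) now points into the new buffer
        // If an exception could materialize between (1) and (2) -- say,
        // from a stack overflow -- first_ and last_ would straddle two
        // unrelated buffers, and the destructor below would "destroy"
        // garbage during unwinding.
        last_ = fresh + sz;  // (2) invariant restored
    }

    ~toy_vector() {
        for (T* p = first_; p != last_; ++p)
            p->~T();                            // walks [begin(), end())
        ::operator delete(first_);
    }
};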

Exceptions are good for situations where unwinding is
desirable. They're just the wrong mechanism for this kind of
non-recoverable scenario.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

David Abrahams

Dec 10, 2002, 2:30:53 PM
chris...@baus.net (christopher) writes:

How, pray tell, does that make your point? Having no ability to write
nothrow operations is a very undesirable property, and is one reason
that exception-safety in Java is a pipe dream.

>> Stack overflows, like null pointer dereferences and invalid
>> instructions, should be handled by traps or other mechanisms outside
>> the C++ exception-handling system.
>>
> I want to change the terminology from "stack overflow" to "inability
> to commit stack memory within the bounds of the reserved stack
> address space". That is technically different than stack overflow.

How? Are there any observable differences for the user?

> Then I'll add. Failure to allocate memory, like null pointer
> dereferences and invalid instructions, should be handled by traps or
> other mechanisms outside the C++ exception-handling system.

Why? In general, failure to allocate memory is a recoverable
condition. Exceptions are good for recoverable conditions, and pretty
bad for unrecoverable ones. They're especially bad for conditions
that can happen at arbitrary points in a program's execution.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

christopher

Dec 10, 2002, 2:32:22 PM
> Therefore, stack overflows are in principle avoidable whereas heap
> overflows aren't. Therefore, a stack overflow is a programming error
> but a heap overflow is something we just have to be prepared for.
Point taken, but...

Not if the stack is allocated dynamically, which it is on most modern
OSes. You don't know how much memory is committed for your stack.

If you determined that your stack cannot grow beyond 500k for
instance, but your stack is being dynamically allocated between 4k and
1meg as it is by default on NT, or 4k and 2meg as it is by default on
Linux, then stack overflow could happen as soon as the OS attempts to
commit something more than 4k, which is probably at start-up of most
non-trivial applications.

Ken Hagan

Dec 11, 2002, 5:46:37 AM
I wrote:
> In the presence of recursion more careful arguments are required,
> but if all the recursions bottom out somewhere it still ought to
> be possible to predict worst case usage.

"Hyman Rosen" <hyr...@mail.com> wrote...


>
> No. In the presence of recursion, maximum stack use is undecidable.
> Here's a simple example of a hard case:
>
> bool collatz(int n)
> {
>     if (n == 1) return true;
>     if (n & 1) return collatz(3 * n + 1);
>     return collatz(n / 2);
> }

Um, er, okay, good example. Thanks. Let me try again.

In the presence of recursion, EITHER careful argument (perhaps with the
help of some vetting of the input data at run-time) can determine the
worst case stack usage, OR you have to make an engineering judgement
regarding the suitability of the chosen algorithm for the intended
purpose. (Games are allowed to crash but planes aren't.)

Garry Lancaster

Dec 11, 2002, 5:51:44 AM
David Abrahams:

> One important reason that stack overflows should never be turned into
> C++ exceptions is that to write exception-safe code, you need to know
> that some operations simply don't throw.

Well, behaviour on stack overflow is undefined by the
standard, so anything can happen, including throwing
an exception.

Unless you intend your exception safety guarantees to
still hold in the face of undefined behaviour (which would
be an unachievable goal), there is no problem.

Kind regards

Garry Lancaster

Daniel James

Dec 11, 2002, 5:58:12 AM
In article <at1qk8$dt1$1$8300...@news.demon.co.uk>, Ken Hagan wrote:
> In the absence of recursion, the worst case stack usage for a whole
> program can be determined when the program is linked.

It's C rather than C++ but what about alloca() (and the new variable
length arrays in C99) ?

Cheers,
Daniel.

James Kanze

Dec 11, 2002, 9:59:08 AM
David Abrahams <da...@boost-consulting.com> wrote in message
news:<uadjhe...@boost-consulting.com>...

> One important reason that stack overflows should never be turned into
> C++ exceptions is that to write exception-safe code, you need to know
> that some operations simply don't throw.

[...]

> Stack overflows, like null pointer dereferences and invalid
> instructions, should be handled by traps or other mechanisms outside
> the C++ exception-handling system.

That's an interesting observation, because I've rarely seen a program
in which stack overflow and allocation failure should be treated
differently (and the few exceptions are the cases where you would
probably want to use new (nothrow)). What do you suggest that the
system should do in the case of stack overflow (with emphasis on the
should -- I know all too well what my system does)?

I'm basically curious, because in general, you should be able to create
a condition in which stack overflow is not possible. For robust
applications under Unix, I'll normally recurse deeply as the first thing in
main. Although it isn't guaranteed anywhere, on all Unixes I've seen,
once memory is allocated to the stack, it remains allocated to the
stack. So if you recurse enough to use more stack than you will ever
need later, you simply can't overflow the stack.
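
A minimal sketch of that trick (the depth and per-frame size are
illustrative, and an aggressive compiler could in principle still flatten
the recursion, which is why the frame is used after the recursive call):

static char grow_stack(int pages)
{
    volatile char frame[4096];                    // roughly one page per level
    frame[0] = static_cast<char>(pages);
    char below = (pages > 1) ? grow_stack(pages - 1) : 0;
    return static_cast<char>(frame[0] + below);   // frame used after the call
}

int main()
{
    grow_stack(256);   // pre-touch roughly 1 MB of stack before the real work
    // ... application proper; on the Unixes described above, that memory
    // stays allocated to the stack for the life of the process ...
}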

Of course, if the application inputs recursive structures, parsed by
means of recursive descent, you cannot know just how deep you might need
to go. In this case, we always put an artificial limit on nesting, and
allocated enough stack up front to cover any expression within this
limit.

And of course, in a multithreaded environment, you set the stack size,
and it is allocated when you set it.

In all such cases, however, you must somehow determine exactly how much
stack you need, a difficult and error prone process.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique oriente objet/
Beratung in objektorientierter Datenverarbeitung

James Kanze

Dec 11, 2002, 10:05:39 AM
"Ken Hagan" <K.H...@thermoteknix.co.uk> wrote in message
news:<at1qk8$dt1$1$8300...@news.demon.co.uk>...
> "christopher" <chris...@baus.net> wrote...

> > IMHO under modern OSes stack allocation and dynamic heap allocation
> > are very similar, yet have very different treatments in the language
> > and by application programmers.

> > In historical contexts stack and heap were physically different. I
> > believe modern memory management systems blur the distinction.

> Physically, yes, but I think there is a logical distinction.

> In the absence of recursion, the worst case stack usage for a whole
> program can be determined when the program is linked. The worst case
> heap usage generally depends on input data.

> In the presence of recursion more careful arguments are required, but
> if all the recursions bottom out somewhere it still ought to be
> possible to predict worst case usage.

Most of the cases of recursion I've seen involved some variant of
recursive descent parsing: the depth depends on the user input.

You can put an artificial cap on it; that's what we did in previous
critical applications, to ensure no overflow, but it certainly isn't the
most appealing of solutions.
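
A small sketch of such a cap (the grammar and the limit of 256 are purely
illustrative):

#include <cstddef>
#include <stdexcept>
#include <string>

const int kMaxNesting = 256;   // chosen to fit the stack budget with room to spare

struct parser {
    std::string src;
    std::size_t pos;

    explicit parser(const std::string& s) : src(s), pos(0) {}

    void parse_expr(int depth) {
        if (depth > kMaxNesting)
            throw std::runtime_error("expression nested too deeply");
        if (pos < src.size() && src[pos] == '(') {
            ++pos;
            parse_expr(depth + 1);               // bounded recursion
            if (pos >= src.size() || src[pos] != ')')
                throw std::runtime_error("missing ')'");
            ++pos;
        }
        // ... atoms, operators, etc. elided ...
    }
};

Calling parser("((()))").parse_expr(0) either succeeds or throws an
ordinary, recoverable exception long before the stack itself is in danger.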

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique oriente objet/
Beratung in objektorientierter Datenverarbeitung

David Abrahams

Dec 11, 2002, 11:08:36 AM
"Paul D. DeRocco" <pder...@ix.netcom.com> writes:

>> "David Abrahams" <da...@boost-consulting.com> wrote ...
>>
>> If stack overflows can generate exceptions, then /any/ operation could
>> potentially throw an exception. If an exception propagates from a
>> region of code which was supposed to be no-throw, it's very likely
>> that the program's invariants will be broken. Suddenly, there is no
>> way to distinguish a recoverable condition, like "filesystem full"
>> from an unrecoverable one in which the program cannot reliably
>> continue.
>>
>> Stack overflows, like null pointer dereferences and invalid
>> instructions, should be handled by traps or other mechanisms outside
>> the C++ exception-handling system.
>
> This is true if you want to guarantee that an application always
> functions "correctly", and aborts instantly if it can't live up to this
> guarantee.

There are other alternatives than "aborts instantly" (e.g. log the
error and restart the thread), but yes, they all have that "abort"
flavor about them.

> This is appropriate for most PC applications, especially command line
> ones. But in real-time systems, it's often better to turn the trap into
> an exception, and hope for the best. In some situations, it will crash
> and burn, but that's not as bad as always aborting. Often, it works just
> fine, the error can be logged, and the application continues to run.

Hmm. What kind of real-time application has such a long startup time
that restarting is prohibitive and is sufficiently non-critical that
running along with a possibly corrupted internal state is more
appropriate than waiting for it to be restarted? I'm having trouble
stretching my imagination around that one.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

Ken Hagan

Dec 11, 2002, 11:20:40 AM
> "Ken Hagan" <K.H...@thermoteknix.co.uk> writes:
>
> [snip]
> > In the presence of recursion more careful arguments are required,
> > but if all the recursions bottom out somewhere it still ought to
> > be possible to predict worst case usage.
> [snip]

"llewelly" <llewe...@xmission.dot.com> wrote...


>
> No. Most large projects will have recursive functions whose depth is
> dependent on input which is unknown at delivery time.

Well, I know mine do, but I'm writing GUI apps and my process
contains millions of lines of C, C++, assembly language and
perhaps much else that I didn't write, so, to be frank, stack
overflows are not high on my list of worries.

I would hope that safety critical systems didn't have unbounded
recursion.

Somewhere in between those two extremes, there may be people writing
to a level of robustness where stack overflows are considered worth
worrying about but unbounded recursion isn't. I doubt it, but if
anyone reading this thinks they fit that description, feel free to
speak up.

Ken Hagan

Dec 11, 2002, 11:20:58 AM

"christopher" <chris...@baus.net> wrote...

>
> If you determined that your stack can not grow beyond 500k for
> instance, but your stack is being dynamically allocated between 4k and
> 1meg as it is by default on NT, or 4k and 2meg as it is by default on
> Linux then stack overflow could happen as soon as the OS attempts to
> commit something more than 4k. Which is probably at start-up of most
> non-trivial applications.

If you care, then you can gratuitously call a function from main()
that has a 500K local variable. There remains the possibility of
a stack overflow during your implementation's start-up code, but
we can consider that a failure of the OS to run your application.
Once you've passed the "stack commit" function, you are running
in an environment where stack overflows won't happen.
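
A rough sketch of such a "stack commit" function (the 500K figure, the 4K
page size and the touch order are illustrative assumptions):

#include <cstddef>

void commit_stack()
{
    volatile char probe[500 * 1024];    // the worst case the program will need
    // Touch one byte per page, starting at the end nearest the already
    // committed part of the stack, so each page is actually committed
    // rather than merely reserved.
    for (std::size_t i = sizeof probe; i >= 4096; i -= 4096)
        probe[i - 1] = 0;
    probe[0] = 0;
}

int main()
{
    commit_stack();   // once this returns, those pages stay committed
    // ... the rest of the application ...
}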

James Kanze

Dec 11, 2002, 11:23:40 AM
David Abrahams <da...@boost-consulting.com> wrote in message
news:<uznre9...@boost-consulting.com>...

[...]


> Sure, but any threading system worth its salt has a better way to
> restart a thread than by using something which attempts to unwind the
> stack, possibly touching corrupted resources. You should be able to
> install a thread terminator which is invoked at the point the stack
> overflow is detected. Once stack unwinding begins from a point in the
> program that was coded to be a nothrow operation, the state of the
> system is unreliable and destructors and catch clauses invoked during
> unwinding may be touching corrupted or uninitialized data.

If you don't unwind the stack, destructors aren't called, and your
application is probably in an inconsistent state. If you don't unwind
the stack, about the only safe thing you can do is exit the program.

> For a simple example, look at your std::vector::reserve()
> implementation. Most std::vectors hold two pointers which describe the
> range of constructed elements in the vector, essentially the results
> of begin() and end(). If an exception is thrown due to a stack
> overflow while the pointers are being updated during reserve(), one of
> the pointers might point into the old memory, and the other might
> point into the new memory. The vector's destructor walks the range of
> items between begin() and end() destroying each element. What do you
> suppose will happen if this vector happens to be destroyed during
> unwinding?

I don't quite follow you. Surely it is possible to update two pointers
without an intervening stack allocation.

Or is your point simply that a programmer today has no control over when
stack allocations may occur? Obviously, if stack overflow were to cause
an exception (not that I'm convinced this is a good idea), then the
standard would have to limit the cases where it might occur. I don't
think that this is a great problem, however. Unless I'm mistaken, most
compilers allocate the spill space for registers once on entering a
function, and not immediately whenever it is actually needed.

> Exceptions are good for situations where unwinding is desirable.
> They're just the wrong mechanism for this kind of non-recoverable
> scenario.

I suspect that I agree. On the other hand, what makes stack overflow
different from operator new ?

--
James Kanze mailto:jka...@caicheuvreux.com

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

James Kanze

Dec 11, 2002, 11:24:34 AM
chris...@baus.net (christopher) wrote in message
news:<69980a75.02120...@posting.google.com>...

> > Therefore, stack overflows are in principle avoidable whereas heap
> > overflows aren't. Therefore, a stack overflow is a programming error
> > but a heap overflow is something we just have to be prepared for.

> Point taken, but...

> Not if the stack is allocated dynamically, which it is on most modern
> OSes. You don't know how much memory is committed for your stack.

> If you determined that your stack can not grow beyond 500k for
> instance, but your stack is being dynamically allocated between 4k and
> 1meg as it is by default on NT, or 4k and 2meg as it is by default on
> Linux then stack overflow could happen as soon as the OS attempts to
> commit something more than 4k. Which is probably at start-up of most
> non-trivial applications.

This is only partially true, at least for Solaris (the only system for
which I know this part well). First, of course, it is only true for the
start up thread. For all other threads, however, the stack is normally
allocated in one go, at thread creation. And even for the main thread,
allocation is only partially dynamic. Once memory has been allocated
for the stack, it is never deallocated nor used for anything else. So
an application which cannot fail can start by recursing deeply enough to
"pre-allocate" the maximum stack that it needs; once this pre-allocation
done, stack allocation cannot fail.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

Alan Griffiths

Dec 11, 2002, 11:24:52 AM
"Paul D. DeRocco" <pder...@ix.netcom.com> wrote in message
news:<at45kr$2k5$1...@slb5.atl.mindspring.net>...

> > "David Abrahams" <da...@boost-consulting.com> wrote ...
> >
> > Stack overflows, like null pointer dereferences and invalid
> > instructions, should be handled by traps or other mechanisms outside
> > the C++ exception-handling system.
>
> > This is true if you want to guarantee that an application always
> > functions "correctly", and aborts instantly if it can't live up to this
> > guarantee. This is appropriate for most PC applications, especially
> > command line ones. But in real-time systems, it's often better to turn
> > the trap into an exception, and hope for the best. In some situations,
> > it will crash and burn, but that's not as bad as always aborting.
> > Often, it works just fine, the error can be logged, and the application
> > continues to run.

Obviously I can't comment on your subdomain of "real-time" systems,
but your comments don't apply to mine. I would not expect errors
outside the domain addressed by a program to be addressed within that
program.

While the application must continue to operate, specific programs may
fail:

In the case under discussion, a failure of the runtime environment to
provide stack space *cannot* be handled. This can be addressed in
several ways, but presuming that the environment is such that the
availability of memory cannot be guaranteed, then I would expect the
program with problems to be terminated and a separate monitor program
to deal with recovery. (The monitor program provides a far simpler
function and is written in such a way that it has predictable resource
usage and can be guaranteed access to those resources.)

--
Alan Griffiths
http://www.octopull.demon.co.uk/

David Abrahams

Dec 11, 2002, 11:25:29 AM
"Garry Lancaster" <glanc...@ntlworld.com> writes:

> David Abrahams:
>> One important reason that stack overflows should never be turned into
>> C++ exceptions is that to write exception-safe code, you need to know
>> that some operations simply don't throw.
>
> Well, behaviour on stack overflow is undefined by the
> standard, so anything can happen, including throwing
> an exception.
>
> Unless you intend your exception safety guarantees to
> still hold in the face of undefined behaviour (which would
> be an unachievable goal), there is no problem.

There's a big problem if you intend some kind of useful response
(e.g. logging, thread restart) to stack overflow. You absolutely need
to be able to distinguish the unrecoverable problems from the
recoverable ones.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

David Abrahams

Dec 11, 2002, 4:56:26 PM
ka...@gabi-soft.de (James Kanze) writes:

> David Abrahams <da...@boost-consulting.com> wrote in message
> news:<uadjhe...@boost-consulting.com>...
>
> > One important reason that stack overflows should never be turned into
> > C++ exceptions is that to write exception-safe code, you need to know
> > that some operations simply don't throw.
>
> [...]
>
> > Stack overflows, like null pointer dereferences and invalid
> > instructions, should be handled by traps or other mechanisms outside
> > the C++ exception-handling system.
>
> That's an interesting observation. Because I've rarely seen a program
> in which stack overflow and allocation failure should be treated
> differently (and the few exceptions are the cases where you would
> probably want to use new (nothrow)). What do you suggest that the
> system should do in the case of stack overflow (with emphasis on the
> should -- I know all too well what my system does)?

It really depends on the application, doesn't it? The best response
you can hope for in most cases is to try to log something and exit.
That probably requires recovering some stack to do it with, and in
practical cases that may mean using something like setjmp/longjmp.
However, executing destructors and other unwinding actions is almost
certainly not desirable.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

David Abrahams

Dec 11, 2002, 5:01:52 PM
ka...@gabi-soft.de (James Kanze) writes:

> David Abrahams <da...@boost-consulting.com> wrote in message
> news:<uznre9...@boost-consulting.com>...
>
> [...]
>> Sure, but any threading system worth its salt has a better way to
>> restart a thread than by using something which attempts to unwind the
>> stack, possibly touching corrupted resources. You should be able to
>> install a thread terminator which is invoked at the point the stack
>> overflow is detected. Once stack unwinding begins from a point in the
>> program that was coded to be a nothrow operation, the state of the
>> system is unreliable and destructors and catch clauses invoked during
>> unwinding may be touching corrupted or uninitialized data.
>
> If you don't unwind the stack, destructors aren't called, and your
> application is probably in an inconsistent state. If you don't unwind
> the stack, about the only safe thing you can do is exit the program.

Likewise if you unwind the stack from an inappropriate place. That's
why systems which turn asynchronous messages like thread cancellation
into exceptions emanating from an arbitrary point in the code are
impossible to write reliable code for.

I should also note, aside from issues of correctness, that some of the
mechanisms which help EH to be efficient depend on the compiler being
able to deduce the places where exceptions can and cannot be thrown.

>> For a simple example, look at your std::vector::reserve()
>> implementation. Most std::vectors hold two pointers which describe the
>> range of constructed elements in the vector, essentially the results
>> of begin() and end(). If an exception is thrown due to a stack
>> overflow while the pointers are being updated during reserve(), one of
>> the pointers might point into the old memory, and the other might
>> point into the new memory. The vector's destructor walks the range of
>> items between begin() and end() destroying each element. What do you
>> suppose will happen if this vector happens to be destroyed during
>> unwinding?
>
> I don't quite follow you. Surely it is possible to update two pointers
> without an intervening stack allocation.

Stack allocations are completely outside the C++ standard, and might
well be affected by the architecture and optimization level. How
would you write standard C++ with any confidence that "no stack
allocation happens between these two sequence points"?

> Or is your point simply that a programmer today has no control over when
> stack allocations may occur.

Precisely. I wrote the above too soon.

> Obviously, if stack overflow were to cause an exception (not that
> I'm convinced this is a good idea), then the standard would have to
> limit the cases where it might occur. I don't think that this is
> great problem, however. Unless I'm mistaken, most compilers
> allocate the spill space for registers once on entering a function,
> and not immediately whenever it is actually needed.

I think it would be a big problem for the spirit of the standard. The
intention has always been to not prohibit a wide range of
architectures (e.g. stack machines).

>> Exceptions are good for situations where unwinding is desirable.
>> They're just the wrong mechanism for this kind of non-recoverable
>> scenario.
>
> I suspect that I agree. On the other hand, what makes stack overflow
> different from operator new ?

operator new is invoked explicitly. Stack allocations are an
implementation detail of the compiler and processor.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

Alexander Terekhov

Dec 11, 2002, 6:43:09 PM

David Abrahams wrote:
>
> Alexander Terekhov <tere...@web.de> writes:
>
> > David Abrahams wrote:
> > [...]
> >> One important reason that stack overflows should never be turned into
> >> C++ exceptions is that to write exception-safe code, you need to know
> >> that some operations simply don't throw.
> >
> > Obviously, nothrow... I'd say, *regions* shall NOT throw; making this
> > type of exception/trap/whatever-you-call-it unrecoverable IF it happens
> > to occur in such "restricted" place.
>
> Yes. Those "restricted" places happen all over the most trivial code,
> not just in unwinding/recovery. Just look at any standard container
> class for examples.

Nah, I don't use them [in "mission critical" code]. ;-)

> So while there are probably a few areas where you can recover from a
> stack overflow, in general you can't.

While there are definitely MANY areas where you can recover from
stack overflow, in general [without precautions], you can't. YUP.

[...]


> > BTW, nothrow(*) stuff can NOT be invoked inside async-cancel regions
>
> What is the definition of an async-cancel region?

Basically, it's a region where a thread-cancel exception is raised
asynchronously -- possibly "interrupting" ANY instruction within
the region. The need-for-some-safety-construct(s) [on the language
level] with respect to async-cancel is kinda illustrated here:

http://groups.google.com/groups?selm=3C926C82.AD56845D%40web.de
("Subject: Re: C++ and threads", <http://tinyurl.com/3fkf>)

[...]


> > tend to miss the big picture here, so to speak. <wink>
>
> <wink> to you too.

<Jeez, <wink>-overflow! Ha! Exception caught, recovery done...>

Thank you. Now <wink>-<wink> to you. ;-)

[...]


> I think it would be a lot better, actually, if you took a moment to
> describe what the problem is that "async-cancel-safety" is trying to
> solve, in plain English. Accompanying code always helps, of course,
> but without some conceptual background it's pretty useless.

I'd like to ask you to take a look at this *thread* [followups]:

http://groups.google.com/groups?threadm=lj1O8.16%24XI5.132225%40news.cpqcorp.net
(Subject: Re: cancelling one thread from inside another one)

regards,
alexander.

P.S. <http://tinyurl.com/3fl2> Oops, "basic" is WRONG here. I
should have said *destructible*, instead. Mea culpa. ;-) ;-)

Alexander Terekhov

Dec 11, 2002, 6:44:03 PM

Ken Hagan wrote:
[...]

> Once you've passed the "stack commit" function, you are running
> in an environment where stack overflows won't happen.

That's NOT true. My followup [to the article discussing 'similar
ideas'] with pointers/quotes on this, was rejected here [reason:
"environment-specific, off-topic"]. I'll forward it to the non-
moderated comp.lang.c++ in a minute.

regards,
alexander.

Ken Hagan

Dec 11, 2002, 6:44:21 PM
I wrote:
> In the absence of recursion, the worst case stack usage for a whole
> program can be determined when the program is linked.

"Daniel James" <waste...@nospam.aaisp.org> wrote...


> It's C rather than C++ but what about alloca() (and the new variable
> length arrays in C99) ?

If you are worried about stack overflow, you either don't use these
facilities, or you control the size argument that you pass to them.

Stack overflow strikes me as a little like dereferencing NULL pointers
or falling into floating point traps. Assembly language programmers
who have complete control over the order of operations can probably
write robust code in the presence of these events, so they will
appreciate operating system mechanisms that let them catch them.
Higher level languages let the compiler decide the precise ordering
of side effects, so these kinds of trap have to be avoided because
there's no way of telling *exactly* what has or hasn't happened when
the exception occurs.

Mirek Fidler

Dec 12, 2002, 5:16:30 AM
> I suspect that I agree. On the other hand, what makes stack overflow
> different from operator new ?

Considering this, and the amount of energy invested in taking care of new
throwing (or checking for a returned NULL), I think it would be much better
to treat both situations the same... That is, new should rather be
non-throwing, with low-memory situations handled by program termination.
BTW, on most operating systems it is not unlikely that the OS crashes
BEFORE new can even throw.
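
One way to get that behaviour without touching every call site is a
new-handler that terminates; a tiny sketch (the message and the choice of
abort() are illustrative):

#include <cstdio>
#include <cstdlib>
#include <new>

void die_on_exhaustion()
{
    std::fputs("out of memory: terminating\n", stderr);
    std::abort();                       // no throwing, no unwinding
}

int main()
{
    std::set_new_handler(die_on_exhaustion);
    // ... from here on a failed new never throws; it ends the program ...
}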

It would also help exception specifications... as lazy programmers (me
being one) do not like to write an exception specification for every
function (as most C++ functions can in fact throw through new). So we
tend not to use them, even in situations where it would be appropriate.

Mirek

Brett Gossage

Dec 12, 2002, 9:19:40 AM

> That probably requires recovering some stack to do it with, and in
> practical cases that may mean using something like setjmp/longjmp.

Not necessarily; a parent thread would still have available, uncorrupted
stack space...

int stack_size = 100000;
int tries = 0;
bool success = false;
int max_tries = 5;

while( !success && tries < max_tries )
{
    try
    {
        Thread mythread( stack_size );

        mythread.run();

        mythread.join(); // the exception propagates from the thread here

        success = true;
    }
    catch( std::stack_overflow& err )
    {
        stack_size += 10000;
        tries++;
    }
} // end while

if( tries == max_tries )
    throw OperationFailed();

Garry Lancaster

Dec 12, 2002, 9:23:22 AM
> > David Abrahams:
> >> One important reason that stack overflows should never be turned into
> >> C++ exceptions is that to write exception-safe code, you need to know
> >> that some operations simply don't throw.

Garry Lancaster:


> > Well, behaviour on stack overflow is undefined by the
> > standard, so anything can happen, including throwing
> > an exception.
> >
> > Unless you intend your exception safety guarantees to
> > still hold in the face of undefined behaviour (which would
> > be an unachievable goal), there is no problem.

David Abrahams:


> There's a big problem if you intend some kind of useful response
> (e.g. logging, thread restart) to stack overflow.

If we use an OS that generates stack overflow exceptions,
a compiler that allows us to get at these exceptions, and a
piece of code where neither the stack overflow nor the
resulting exception propagation will violate any program
invariants such as exception safety guarantees, we can
choose to treat stack overflow just like any other C++
exception. This is clearly not an option for a piece of code
offering the no-throw guarantee.

In most cases, for reasons of necessity, portability or
convenience, we program as if stack overflow were an
unrecoverable problem, treating it just as the C++ standard
does: as something that causes undefined behaviour.
In other words, if it affects our coding at all, it is because
we code to avoid the problem, not because we are
attempting to detect and react to it.

There are various in-between approaches, but they all
rely on "balance of probability" coding because they are
trying to make something happen after undefined
behaviour, including maybe violating an exception safety
guarantee, has hit the program. They may work, they may
not.

> You absolutely need
> to be able to distinguish the unrecoverable problems from the
> recoverable ones.

We don't need this, but more pertinently we *can't have
it*, in C++ or any other language, at least not when one
or more of the unrecoverable problems we are trying to
distinguish causes undefined behaviour. Because
undefined behaviour can be *anything*, including exactly
the same behaviour encountered for recoverable
problems.

Kind regards

Garry Lancaster

David Abrahams

Dec 12, 2002, 9:24:35 AM
Alexander Terekhov <tere...@web.de> writes:

> David Abrahams wrote:


>>
>> Alexander Terekhov <tere...@web.de> writes:
>>
>> Yes. Those "restricted" places happen all over the most trivial code,
>> not just in unwinding/recovery. Just look at any standard container
>> class for examples.
>
> Nah, I don't use them [in "mission critical" code]. ;-)

You don't use those "restricted" places, or you don't use standard
containers?

> [...]
>> > BTW, nothrow(*) stuff can NOT be invoked inside async-cancel regions
>>
>> What is the definition of an async-cancel region?
>
> Basically, it's a region where thread-cancel exception is raised
> asynchronously -- possibly "interrupting" ANY instruction within
> a region. The need-for-some-safety-construct(s) [on the language
> level] with respect to async-cancel is kinda illustrated here:
>

Then you said it inside-out. Async-cancel regions obviously break any
otherwise nothrow region they appear inside of, but there's no problem
using nothrow operations inside an async-cancel region.

>> I think it would be a lot better, actually, if you took a moment to
>> describe what the problem is that "async-cancel-safety" is trying
>> to solve, in plain English. Acompanying code always helps, of
>> course, but without some conceptual background it's pretty useless.
>
> I'd like to ask you to take a look at this *thread* [followups]:
>
>
> http://groups.google.com/groups?threadm=lj1O8.16%24XI5.132225%40news.cpqcorp.net
> (Subject: Re: cancelling one thread from inside another one)

Believe it or not, I did. One quote from Butenhof stands out:

"Personally, if I were to spend any serious time considering the
future of async cancelability in the standard, I'd rather argue for
removing it entirely"

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

Michiel Salters

Dec 12, 2002, 10:16:59 AM
David Abrahams <da...@boost-consulting.com> wrote in
news:uof7to...@boost-consulting.com:

> Hmm. What kind of real-time application has such a long startup time
> that restarting is prohibitive and is sufficiently non-critical that
> running along with a possibly corrupted internal state is more
> appropriate than waiting for it to be restarted? I'm having trouble
> stretching my imagination around that one.

A phone switch, which happens to be sufficiently _critical_ that running
along with a possibly corrupted internal state is more appropriate than
waiting for it to be restarted. BTDT.

It helps when the compiler builders speak to the OS builders, though - the
system I'm thinking about can detect stack overflows, and request the OS to
clean up only the affected state. Losing a single phone call is much more
acceptable than losing 30,000.

The moral: Your program can be better, if you know what happens in
exceptional circumstances.

Regards,
--
Michiel Salters

Michiel Salters

Dec 12, 2002, 10:18:25 AM
David Abrahams <da...@boost-consulting.com> wrote in
news:uadjhe...@boost-consulting.com:

> One important reason that stack overflows should never be turned into
> C++ exceptions is that to write exception-safe code, you need to know
> that some operations simply don't throw.

...
> If stack overflows can generate exceptions, then /any/ operation could
> potentially throw an exception. If an exception propagates from a
> region of code which was supposed to be no-throw, it's very likely
> that the program's invariants will be broken.

...
which is why it would be nice if the standard limited the number of places
where such exceptions could happen, e.g. at the function call sequence
point, if the function called has one execution path that when taken would
lead to a stack overflow. That means that the caller knows that either all
side effects of the function have happened, or only the evaluation of its arguments.

The price for the implementation would be a stack check as part of the
function prolog. Most current prologs already change the stack pointer,
often adjusting it by the total number of bytes needed. In general, the prolog
makes sure the called function can execute. I basically say that it
ought to signal failure by an exception, just as a ctor should signal
failure by an exception. There's no other way.

( The obvious difference is that a ctor can be user-defined in portable
C++, and the user-generated ctor won't throw an exception. The analogy is
about not being able to use a return value. )

I guess this should be implemented in GCC before this moves to comp.std.c++
I've got no idea what the actual performance impact would be.
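
A hand-written approximation of the prolog check described above might look
like the following sketch. Nothing here is standard C++: it assumes a single
thread, a downward-growing stack, an arbitrary 512K budget, and a
stack_overflow_error type invented for the example.

#include <cstddef>
#include <stdexcept>

static const char* stack_base = 0;           // recorded once in main()

struct stack_overflow_error : std::runtime_error {
    stack_overflow_error() : std::runtime_error("stack exhausted") {}
};

// The "prolog": called first thing in any function that wants the check.
inline void require_stack(std::size_t bytes)
{
    char probe;                               // a local in the current frame
    std::size_t used = static_cast<std::size_t>(stack_base - &probe);
    const std::size_t budget = 512 * 1024;    // assumed total stack budget
    if (used + bytes > budget)
        throw stack_overflow_error();         // thrown at a call boundary
}

void deeply_recursive(int n)                  // hypothetical example
{
    require_stack(4 * 1024);                  // assumed worst case per frame
    if (n > 0)
        deeply_recursive(n - 1);
}

int main()
{
    char base;
    stack_base = &base;
    try { deeply_recursive(1000000); }
    catch (const stack_overflow_error&) { /* report and carry on */ }
}

The pointer arithmetic between frames is, of course, exactly the sort of
thing the standard says nothing about; a real implementation would do this
in the generated prolog, where it knows the frame size and the stack limit.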

Brett Gossage

unread,
Dec 12, 2002, 10:22:21 AM12/12/02
to
> Sure, but any threading system worth its salt has a better way to

> restart a thread than by using something which attempts to unwind the
> stack, possibly touching corrupted resources.

You would not restart the thread itself. A new thread, with more stack
space allocated to it, would be created and would begin the same call sequence.
I would hope that the stack could be unwound to release resources held
by fully constructed objects - especially things like mutexes.
Other threads may hang waiting for the resource to be released.
Brutal termination without stack unwinding could be just as corrupting.
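
A minimal sketch of that approach with POSIX threads follows; the names
(parse_request, run_with_stack) and the success convention are invented for
illustration, and detecting the overflow in the first place is the part the
rest of this thread is arguing about.

#include <pthread.h>
#include <cstddef>

void* parse_request(void* arg)       // stand-in for the deeply recursive work
{
    (void)arg;
    return 0;                        // 0 == success in this sketch
}

// Run the work in its own thread, with an explicitly chosen stack size.
bool run_with_stack(std::size_t stack_bytes, void* request)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, stack_bytes);   // the bigger stack

    pthread_t worker;
    int rc = pthread_create(&worker, &attr, parse_request, request);
    pthread_attr_destroy(&attr);
    if (rc != 0)
        return false;                // could not even start the worker

    void* status = 0;
    pthread_join(worker, &status);
    return status == 0;
}

A caller could retry with, say, twice the stack size if run_with_stack fails.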

> You should be able to
> install a thread terminator which is invoked at the point the stack
> overflow is detected. Once stack unwinding begins from a point in the
> program that was coded to be a nothrow operation, the state of the
> system is unreliable and destructors and catch clauses invoked during
> unwinding may be touching corrupted or uninitialized data.
>

So you're saying that running out of stack always corrupts the stack space?
I don't see that. If memory serves ;-) a constructor isn't called if the
space cannot be allocated. No resources are taken in that case - no
destructor call. All previously constructed objects should still have their
memory space intact. I don't think the stack memory is reclaimed until the
thread is dead and gone.

I want the thread to cry "O, I am slain!" [Polonius, Act III Scene IV] and
tell me why. The trick is: how can it cry out if it is dead?

James Kanze

unread,
Dec 12, 2002, 5:35:19 PM12/12/02
to
Daniel James <waste...@nospam.aaisp.org> wrote in message
news:<VA.000000e...@nospam.aaisp.org>...

> In article <at1qk8$dt1$1$8300...@news.demon.co.uk>, Ken Hagan wrote:
> > In the absence of recursion, the worst case stack usage for a whole
> > program can be determined when the program is linked.

> It's C rather than C++ but what about alloca() (and the new variable
> length arrays in C99) ?

Alloca doesn't exist in C or in C++. But VLA's could be a problem.

On the other hand, I can always get an exception (bad_alloc) when I
declare an std::vector. Why should declaring a VLA be any different ?

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

James Kanze

unread,
Dec 12, 2002, 5:36:33 PM12/12/02
to
David Abrahams <da...@boost-consulting.com> wrote in message
news:<uof7to...@boost-consulting.com>...

> Hmm. What kind of real-time application has such a long startup time
> that restarting is prohibitive and is sufficiently non-critical that
> running along with a possibly corrupted internal state is more
> appropriate than waiting for it to be restarted? I'm having trouble
> stretching my imagination around that one.

Most of the telephone applications I've worked on have had very long
start up times. Presumably, if the state is corrupted, the program will
crash pretty soon anyway, and if it's not, we've saved a start-up.

That said, it's not the kind of solution I like to envisage. I'd much
rather see some sort of controlled handling of stack overflow, which
permits recovery. Obviously, for the reasons you've pointed out, this
means that you need some sort of restrictions as to when this can take
place. An obvious rule would be that it could only take place during a
function call -- in practice, this would make most implementations
already conforming. But I have a sneaky suspicion that this is not
enough -- we need a guarantee that some function calls can succeed.

Logically, what I would want is some way of testing in advance. Before
I execute a block of code, I request a verification/reservation of
enough stack -- for something like a recursive descent parser, for
example, any time I processed an opening parenthesis, I could check that
enough stack was available to be able to handle any expression without
parentheses.

Currently, there are two problems with this:
- I have no way of knowing how much stack space I might need, and
- even if I know, there is no portable way to reserve it.
Providing a portable way of specifying the latter shouldn't be too
difficult, although some discussion of its possible implementation on
widespread platforms is necessary. But I can't even see how to specify
an interface for the first point.
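
For what it's worth, here is a toy version of that check for a
parenthesis-only grammar; the depth budget standing in for "enough stack"
and the insufficient_resources type are assumptions made up for the example.

#include <stdexcept>
#include <string>

struct insufficient_resources : std::runtime_error {
    insufficient_resources()
        : std::runtime_error("insufficient resources") {}
};

// Each '(' costs one level of recursion; refuse to go deeper than the
// stack is assumed to be able to hold.
void parse_parens(const std::string& s, std::string::size_type& i,
                  int depth = 0)
{
    const int max_depth = 10000;    // assumed: what the reserved stack covers
    if (i < s.size() && s[i] == '(') {
        if (depth >= max_depth)
            throw insufficient_resources();
        ++i;                         // consume '('
        parse_parens(s, i, depth + 1);
        if (i < s.size() && s[i] == ')')
            ++i;                     // consume ')'
        else
            throw std::runtime_error("expected ')'");
    }
    // anything else is treated as an expression without parentheses
}

A real reservation interface would replace the depth counter with the two
pieces asked for above: a query for how much stack the parenthesis-free case
needs, and a way to reserve it.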

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

James Kanze

unread,
Dec 13, 2002, 5:04:01 AM12/13/02
to
David Abrahams <da...@boost-consulting.com> wrote in message
news:<uhedkv...@boost-consulting.com>...
> ka...@gabi-soft.de (James Kanze) writes:

> > David Abrahams <da...@boost-consulting.com> wrote in message
> > news:<uznre9...@boost-consulting.com>...

> > [...]
> >> Sure, but any threading system worth its salt has a better way to
> >> restart a thread than by using something which attempts to unwind
> >> the stack, possibly touching corrupted resources. You should be
> >> able to install a thread terminator which is invoked at the point
> >> the stack overflow is detected. Once stack unwinding begins from a
> >> point in the program that was coded to be a nothrow operation, the
> >> state of the system is unreliable and destructors and catch clauses
> >> invoked during unwinding may be touching corrupted or uninitialized
> >> data.

> > If you don't unwind the stack, destructors aren't called, and your
> > application is probably in an inconsistent state. If you don't
> > unwind the stack, about the only safe thing you can do is exit the
> > program.

> Likewise if you unwind the stack from an inappropriate place.

Likely. If we're talking about the system turning stack overflow into
an exception, it is up to the system to determine where an appropriate
place might be, and to throw there.

> That's why systems which turn asynchronous messages like thread
> cancellation into exceptions emanating from an arbitrary point in the
> code are impossible to write reliable code for.

> I should also note, aside from issues of correctness, that some of the
> mechanisms which help EH to be efficient depend on the compiler being
> able to deduce the places where exceptions can and cannot be thrown.

> >> For a simple example, look at your std::vector::reserve()
> >> implementation. Most std::vectors hold two pointers which describe
> >> the range of constructed elements in the vector, essentially the
> >> results of begin() and end(). If an exception is thrown due to a
> >> stack overflow while the pointers are being updated during
> >> reserve(), one of the pointers might point into the old memory, and
> >> the other might point into the new memory. The vector's destructor
> >> walks the range of items between begin() and end() destroying each
> >> element. What do you suppose will happen if this vector happens to
> >> be destroyed during unwinding?

> > I don't quite follow you. Surely it is possible to update two
> > pointers without an intervening stack allocation.

> Stack allocations are completely outside the C++ standard, and might
> well be affected by the architecture and optimization level. How
> would you write standard C++ with any confidence that "no stack
> allocation happens between these two sequence points"?

Yes, but we're talking about a possible extension: turning stack
overflow into a C++ exception. If an implementation decides to do this,
then one of the things it must do is ensure that stack allocation only
happens at specific points, where it can be correctly handled.

I'm not sure what the effect of this would be on optimization, etc. On
a Sparc, normally, the only stack allocation you have is at the actual
call instruction. (I'm not sure if you have a lot of parameters.) If
the hardware/system backs out of this correctly (and I'm pretty sure it
can be made to), then you could make it work. There are two problems:
- limiting stack overflow exceptions to call points is probably not
enough to be able to write robust programs, and
- it would probably require some OS modifications to implement.
I have the feeling that something needs to be done, but I'm not sure
that it is possible.

> > Or is your point simply that a programmer today has no control over
> > when stack allocations may occur.

> Precisely. I wrote the above too soon.

> > Obviously, if stack overflow were to cause an exception (not that
> > I'm convinced this is a good idea), then the standard would have to
> > limit the cases where it might occur. I don't think that this is
> > great problem, however. Unless I'm mistaken, most compilers
> > allocate the spill space for registers once on entering a function,
> > and not immediately whenever it is actually needed.

> I think it would be a big problem for the spirit of the standard. The
> intention has always been to not prohibit a wide range of
> architectures (e.g. stack machines).

True. On the other hand, perhaps the stack machines have a way of
"committing" stack in advance. (This is how I've always solved the
problem on Unix based machines. There's no written guarantee, but on
every Unix machine I've seen, once memory is used as stack, it is
committed; you know that you can reuse it as stack without failure.)

This also raises another question: how relevant is this philosophy
today? For example, are there any stack machines still around which
support, or try to support, C++?

(Again, I'm not trying to be dogmatic about anything for the moment.
I'm just asking questions.)

> >> Exceptions are good for situations where unwinding is desirable.
> >> They're just the wrong mechanism for this kind of non-recoverable
> >> scenario.

> > I suspect that I agree. On the other hand, what makes stack
> > overflow different from operator new ?

> operator new is invoked explicitly. Stack allocations are an
> implementation detail of the compiler and processor.

I was thinking more from the point of view of an application. What makes it
acceptable to core dump on stack overflow, but not on heap allocation?
Most telecoms applications, for example, use ASN.1 scope and filter in
command input -- all of the parsers for filter that I've seen use
recursive descent, which means that stack use is simply not predictable
in advance. The protocols provide for an error message "insufficient
resources", but we pay contractual penalties for system down time.
Being able to catch stack overflow would sure make life easier in this
case.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

James Kanze

unread,
Dec 13, 2002, 5:05:54 AM12/13/02
to
"Mirek Fidler" <c...@volny.cz> wrote in message
news:<at84jf$1av0$1...@news.vol.cz>...

> > I suspect that I agree. On the other hand, what makes stack
> > overflow different from operator new ?

> Considering this and the amount of energy invested in taking care
> about new throwing (or checking for a returned NULL), I think it would be
> much better to consider both situations the same...

It depends on the application. In my current application, there is no
recursion, so if I provide enough stack, stack overflow is impossible.
On the other hand, I create objects on user command, and it is
impossible to foresee the number of objects the user will request. So
I've got to recover from insufficient memory.

In many earlier applications, user commands could contain arbitrary
filter expressions, in what was basically prefix notation. The most
obvious way of handling this is with recursive descent. Which means
that the amount of stack needed also depends on user input, and cannot
be foreseen in advance. In this case, the two problems definitely should
be considered as the same problem.

> That is, that new should rather be non-throwing and low memory
> situations solved by program termination. BTW, on most operating
> systems it is not unlikely that OS crashes BEFORE new can even throw.

I've never seen that. I've successfully handled out of memory
situations under Solaris, HP/UX and AIX, without intervening system
crashes. I've triggered out of memory conditions under Windows NT as
well, and although I wasn't able to successfully recover from them, the
system certainly didn't crash.

> It would also help exception specifications... as lazy programmers
> (me being one) do not like to write exception specification for every
> function (as most C++ functions in fact can throw through new). So we
> are rather not using them, even in situations where it would be
> appropriate.

The problem is that the guarantees of exception specifications are
run-time guarantees, not compile time ones. Which pretty much means
that they are just another way of crashing your program.

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 13, 2002, 10:29:38 AM12/13/02
to
ka...@gabi-soft.de (James Kanze) writes:

> David Abrahams <da...@boost-consulting.com> wrote in message
> news:<uhedkv...@boost-consulting.com>...
>> ka...@gabi-soft.de (James Kanze) writes:
>
>> Likewise if you unwind the stack from an inappropriate place.
>
> Likely. If we're talking about the system turning stack overflow into
> an exception, it is up to the system to determine where an appropriate
> place might be, and to throw there.

AFAICT, that's not possible in any useful way. First, the user would
have to mark any "nothrow region" somehow so the system could know
that it was not supposed to throw there (incredibly cumbersome; these
regions appear all over ordinary code). But then, what *does* the
system do if it can't get stack memory in one of those regions? By
that point, it's too late. About the only way I can imagine dealing
with this is via some sort of whole-program link-time static analysis
which traces all possible code paths, detects non-tail recursion, and
checks the amount of stack needed each time one of these regions is
entered... which adds an additional runtime check at the entry to each
nothrow region, slowing the whole program down, not to mention linking!

>> Stack allocations are completely outside the C++ standard, and might
>> well be affected by the architecture and optimization level. How
>> would you write standard C++ with any confidence that "no stack
>> allocation happens between these two sequence points"?
>
> Yes, but we're talking about a possible extension

Oh, really? I thought this whole thing started because someone was
complaining that stack overflow was not defined to throw an exception
in the standard.

> : turning stack overflow into a C++ exception.

Oh, I am pretty sure that this happens on Win32/VC++. It would be
consistent with other EH madness on that platform.

> If an implementation decides to do this, then one of the things it
> must do is ensure that stack allocation only happens at specific
> points, where it can be correctly handled.

Are you planning to enforce "inline", forbid recursion in inline
functions, and forbid nothrow functions which aren't inline?

> I'm not sure what the effect of this would be on optimization, etc. On
> a Sparc, normally, the only stack allocation you have is at the actual
> call instruction. (I'm not sure if you have a lot of parameters.) If
> the hardware/system backs out of this correctly (and I'm pretty sure it
> can be made to), then you could make it work. There are two problems:
> - limiting stack overflow exceptions to call points is probably not
> enough to be able to write robust programs, and

Definitely not.

> - it would probably require some OS modifications to implement.

No big challenge, IMO (as I'm not an OS writer ;-)).

> I have the feeling that something needs to be done, but I'm not sure
> that it is possible.

That's my feeling :(.

>> > Obviously, if stack overflow were to cause an exception (not that
>> > I'm convinced this is a good idea), then the standard would have to
>> > limit the cases where it might occur. I don't think that this is
>> > great problem, however. Unless I'm mistaken, most compilers
>> > allocate the spill space for registers once on entering a function,
>> > and not immediately whenever it is actually needed.
>
>> I think it would be a big problem for the spirit of the standard. The
>> intention has always been to not prohibit a wide range of
>> architectures (e.g. stack machines).
>
> True. On the other hand, perhaps the stack machines have a way of
> "committing" stack in advance. (This is how I've always solved the
> problem on Unix based machines. There's no written guarantee, but on
> every Unix machine I've seen, once memory is used as stack, it is
> committed; you know that you can reuse it as stack without failure.)
>
> This also raises another question: how relevant is this philosophy
> today? For example, are there any stack machines still around which
> support, or try to support, C++?

I don't know, but I don't think the _question_ is all that relevant.
There are too many other problems to solve first ;-)

>> >> Exceptions are good for situations where unwinding is desirable.
>> >> They're just the wrong mechanism for this kind of non-recoverable
>> >> scenario.
>
>> > I suspect that I agree. On the other hand, what makes stack
>> > overflow different from operator new ?
>
>> operator new is invoked explicitly. Stack allocations are an
>> implementation detail of the compiler and processor.
>
> I was thinking more in terms of view of an application. What makes it
> acceptable to core dump on stack overflow, but not on heap
> allocation?

Nothing, except that stack overflow is less-likely, more preventable,
and harder to do anything useful about once it actually happens.

> Most telecoms applications, for example, use ASN.1 scope and filter in
> command input -- all of the parsers for filter that I've seen use
> recursive descent, which means that stack use is simply not predictable
> in advance. The protocols provide for an error message "insufficient
> resources", but we pay contractual penalties for system down time.
> Being able to catch stack overflow would sure make life easier in this
> case.

You could imagine a cooperative system which pre-allocates some pages
for stack, and provides a function which can tell you how many pages
you have available to commit. You could periodically call this
function and throw an exception when stack is running low. All
systems which throw exceptions (e.g. for thread cancellation) must
invoke the throwing code explicitly if you want a reliable system.

Yes, you can write systems where exceptions magically pop into the
execution stream at "arbitrary" points, but then the state of the
system after unwinding is anybody's guess.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Alexander Terekhov

unread,
Dec 13, 2002, 10:30:45 AM12/13/02
to

James Kanze wrote:
[...]

> On the other hand, perhaps the stack machines have a way of
> "committing" stack in advance. (This is how I've always solved the
> problem on Unix based machines. There's no written guarantee, but on
> every Unix machine I've seen, once memory is used as stack, it is
> committed; you know that you can reuse it as stack without failure.)
>
> This also raises another question: how relevant is this philosophy
> today?

http://tinyurl.com/3i1b

<quote>

Seriously, I agree it's fairly clear that in such a dynamic stack
segment model (which doesn't, by the way, violate anything in POSIX),
any pretense that any arbitrary stack address need remain unique for
the life of the thread would be invalid. Cool; a good objective
reason to say "this is a bad idea".

</quote>

regards,
alexander.

Mirek Fidler

unread,
Dec 13, 2002, 10:31:31 AM12/13/02
to
> > That is, that new should rather be non-throwing and low memory
> > situations solved by program termination. BTW, on most operating
> > systems it is not unlikely that OS crashes BEFORE new can even throw.
>
> I've never seen that. I've successfully handled out of memory
> situations under Solaris, HP/UX and AIX, without intervening system
> crashes. I've triggered out of memory conditions under Windows NT as
> well, and although I wasn't able to successfully recover from them, the
> system certainly didn't crash.

Depends what you consider a system crash. As for Win NT, the system (at
least the kernel) perhaps does not crash, but a low memory condition may
force some components to crash. In any case, during a low memory condition
the system sometimes becomes so slow (perhaps due to extensive thrashing)
that the reset button is the only possible solution.

On e.g. Linux, the system again might not crash, but a low memory situation
may force e.g. X Window to crash - resulting in the same loss of data...

Mirek

David Abrahams

unread,
Dec 13, 2002, 12:00:59 PM12/13/02
to
ka...@gabi-soft.de (James Kanze) writes:

> David Abrahams <da...@boost-consulting.com> wrote in message
> news:<uof7to...@boost-consulting.com>...
>
>> Hmm. What kind of real-time application has such a long startup time
>> that restarting is prohibitive and is sufficiently non-critical that
>> running along with a possibly corrupted internal state is more
>> appropriate than waiting for it to be restarted? I'm having trouble
>> stretching my imagination around that one.
>
> Most of the telephone applications I've worked on have had very long
> start up times. Presumably, if the state is corrupted, the program will
> crash pretty soon anyway, and if it's not, we've saved a start-up.

"Presumably?" Does that really mean anything? What if someone's file
they're transmitting by modem gets garbled, or if a "secure"
conversation suddenly gets shunted without encryption to another line?

> That said, it's not the kind of solution I like to envisage. I'd much
> rather see some sort of controlled handling of stack overflow, which
> permits recovery. Obviously, for the reasons you've pointed out, this
> means that you need some sort of restrictions as to when this can take
> place. An obvious rule would be that it could only take place during a
> function call -- in practice, this would make most implementations
> already conforming. But I have a sneaky suspicion that this is not
> enough -- we need a guarantee that some function calls can succeed.

Yeah, problematic isn't it? Function calls are an important unit of
abstraction in C++. We really need to be able to write nothrow
functions (e.g. iterator dereference, operator delete, ...).

Back to the bad-old-days of 'C' macros instead of inline functions,
anyone?

> Logically, what I would want is some way of testing in advance.
> Before I execute a block of code, I request a
> verification/reservation of enough stack -- for something like a
> recursive descent parser, for example, any time I processed an
> opening parentheses, I could check that enough stack was available
> to be able to handle any expression without parentheses.

Yikes. Checkpoint and proceed?

> Currently, there are two problems with this:
> - I have no way of knowing how much stack space I might need, and
> - even if I know, there is no portable way to reserve it.
> Providing a portable way of specifying the latter shouldn't be too
> difficult, although some discussion of its possible implementation on
> widespread platforms is necessary. But I can't even see how to specify
> an interface for the first point.

Yeah, it gets really hard to say anything standardizable once you get
into this territory.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Alexander Terekhov

unread,
Dec 13, 2002, 3:20:35 PM12/13/02
to

David Abrahams wrote:
[...]

> You don't use those "restricted" places, or you don't use standard
> containers?

I have my own containers. The interface isn't standard and the impl
doesn't use rather popular catch(...)/rethrow-"idiom", for example.
Thus far, they worked just fine, however.

[...]
> >> > BTW, nothrow(*) stuff can NOT be invoked inside async-cancel regions
> >>
> >> What is the definition of an async-cancel region?
> >
> > Basically, it's a region where thread-cancel exception is raised
> > asynchronously -- possibly "interrupting" ANY instruction within
> > a region. The need-for-some-safety-construct(s) [on the language
> > level] with respect to async-cancel is kinda illustrated here:
> >
>
> Then you said it inside-out. Async-cancel regions obviously break any
> otherwise nothrow region they appear inside of, but there's no problem
> using nothrow operations inside an async-cancel region.

Yeah, except that you'll have some rather serious problem(s) with
nothrow operations ala throw()-nothing-new inside an async-cancel
region.

> >> I think it would be a lot better, actually, if you took a moment to
> >> describe what the problem is that "async-cancel-safety" is trying
> >> to solve, in plain English. Accompanying code always helps, of
> >> course, but without some conceptual background it's pretty useless.
> >
> > I'd like to ask you to take a look at this *thread* [followups]:
> >
> >
> > http://groups.google.com/groups?threadm=lj1O8.16%24XI5.132225%40news.cpqcorp.net
> > (Subject: Re: cancelling one thread from inside another one)
>

> Believe it or not, I did. ...

Thank you.

regards,
alexander.

Alexander Terekhov

unread,
Dec 13, 2002, 4:50:08 PM12/13/02
to

David Abrahams wrote:
>
> ka...@gabi-soft.de (James Kanze) writes:
>
> > David Abrahams <da...@boost-consulting.com> wrote in message
> > news:<uhedkv...@boost-consulting.com>...
> >> ka...@gabi-soft.de (James Kanze) writes:
> >
> >> Likewise if you unwind the stack from an inappropriate place.
> >
> > Likely. If we're talking about the system turning stack overflow into
> > an exception,

Here's an ["greetings-from-the-yellow-zone"] example:

http://tinyurl.com/39nx
(comp.std.c++, Subject: Re: C++ exception handling)

> > it is up to the system to determine where an appropriate
> > place might be, and to throw there.
>
> AFAICT, that's not possible in any useful way. First, the user would
> have to mark any "nothrow region" somehow so the system could know
> that it was not supposed to throw there (incredibly cumbersome; these
> regions appear all over ordinary code).

"nothrow region" aside [throw()-ES might actually work quite
well here], there just ought to be support for >>standard<< "bool
expected_exception<T>()" in the future C++, I believe strongly.
Well, I'll refrain from dropping [boost.org's] links on that
subject here, this time. ;-)



> But then, what *does* the
> system do if it can't get stack memory in one of those regions? By
> that point, it's too late. About the only way I can imagine dealing
> with this is via some sort of whole-program link-time static analysis
> which traces all possible code paths, detects non-tail recursion, and
> checks the amount of stack needed each time one of these regions is
> entered... which adds an additional runtime check at the entry to each
> nothrow region, slowing the whole program down, not to mention linking!

Uhmm, are you sure?

[...]


> Yes, you can write systems where exceptions magically pop into the
> excecution stream at "arbitrary" points,

Yep, inside async-cancel-safe regions, for example.

> but then the state of the system after unwinding is anybody's guess.

Nope. That's NOT true.

regards,
alexander.

Philippe Mori

unread,
Dec 13, 2002, 4:51:06 PM12/13/02
to
If your application really needs a big stack, I think that you should use
a C++ std::stack to keep your data and rewrite your algorithm to work
without recursion, using that data stack instead. That way, the process or
thread stack won't overflow.
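
A minimal sketch of that rewrite (the Node type and the traversal are
invented for illustration): the recursion is replaced by an explicit
std::stack, so pathological input exhausts the heap - which new reports via
std::bad_alloc - instead of the call stack.

#include <stack>

struct Node {
    Node* left;
    Node* right;
    int   value;
};

long sum_tree(Node* root)
{
    long total = 0;
    std::stack<Node*> pending;            // heap-backed "call stack"
    if (root)
        pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.top();
        pending.pop();
        total += n->value;
        if (n->left)  pending.push(n->left);
        if (n->right) pending.push(n->right);
    }
    return total;
}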

If you really want your application to continue after a stack overflow,
maybe the best thing to do is to do the recursive work in another thread
with its own stack. If an overflow occurs, you simply terminate that thread
and start another one instead.

Yet another possibility is to replace some stack allocation by heap
allocation so that the stack won't grow too fast.

Under Windows it is easy to detect stack overflow using SEH (system
exceptions), but since the system uses a guard page and changes its
attribute, a subsequent stack overflow won't be detected as such (I think it
would produce a non-continuable exception). So to properly resume from that
overflow, you would have to catch the system exception (using
__try/__except), decode it, restore the guard page attribute and continue...
And then the exception would have to be handled from a point where the stack
is not too full.

If the error occurs in its own thread, it is probably simpler to start
another thread and pass the necessary information to it.

In practice, I doubt that many functions are recursive, and doing a check
for stack space before calling those functions would perhaps be the most
effective way to do it. You simply need a function that allows you to check
the remaining stack space; if the remaining space is not enough (with a
safety factor), simply throw a C++ exception.
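
On Win32 such a function can be approximated with VirtualQuery, relying on
the fact that a thread's stack lives in a single reservation; this is a
non-portable sketch, and the margin left for guard pages is a guess.

#include <windows.h>
#include <cstddef>
#include <stdexcept>

inline std::size_t approximate_stack_left()
{
    MEMORY_BASIC_INFORMATION mbi;
    char probe;
    VirtualQuery(&probe, &mbi, sizeof mbi);      // region holding our stack
    return static_cast<std::size_t>(
        &probe - static_cast<char*>(mbi.AllocationBase));
}

inline void check_stack_space(std::size_t bytes) // "safety factor" included
{
    const std::size_t margin = 64 * 1024;        // guessed guard/reserve margin
    if (approximate_stack_left() < bytes + margin)
        throw std::runtime_error("not enough stack for this call");
}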

In practice, stack overflow is not the only system exception that cannot
simply be handled using C++ exception handling. With Visual C++, those
exceptions are caught by a catch(...), but you will not completely recover
from them in all cases without some special code.

For example, when a floating point exception occurs under Visual C++ and
Borland C++, we must call _fpreset(). Otherwise, the floating point state
would not be restored and subsequent floating point operations won't work
as expected.
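
A small illustration of that point (compute_something is just a stand-in,
and whether an FP fault reaches catch(...) at all depends on how the
compiler maps system exceptions, as described above):

#include <float.h>

double compute_something(double x) { return 1.0 / x; }   // hypothetical work

double guarded_compute(double x)
{
    try {
        return compute_something(x);
    }
    catch (...) {      // on these compilers an FP fault may surface here
        _fpreset();    // restore the floating point state before moving on
        throw;
    }
}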

Finally, under Windows stack pages are reserved but not committed until they
are used. So if memory is used elsewhere, the system may not be able to
commit a page when it needs it. In practice, this is generally not a
problem, but buggy memory allocation or recursion may eat all the system
memory... This problem can probably be avoided by forcing the pages to be
committed, either by the trick of pre-committing them from main or by
specifying an appropriate commit size when creating the process (or thread)
or when building the application.

christopher

unread,
Dec 13, 2002, 5:56:10 PM12/13/02
to
an exception propagates from a
> >
> > Thanks for making my point for me.
>
> How, pray tell, does that make your point? Having no ability to write
> nothrow operations is a very undesirable property, and is one reason
> that exception-safety in Java is a pipe dream.
>

And it is the reason why having exception-safety in C++ is a pipe
dream. Going directly to exit() in my strongly exception safe program
because the OS failed to commit stack space certainly makes my
exception safety purely academic, and it should be treated as such.

Excuse my belief that pretending certain operations are safe just so I
can write strongly exception safe programs is silly in real world
applications.

Regarding real-time systems: I would have to suspect there is a huge
class of applications where restart is just not possible, hence the
term real-time. You cannot reinvent time, but that might be a topic for
philosophy and not C++ programming ; ).

David Abrahams

unread,
Dec 14, 2002, 6:49:40 AM12/14/02
to
Alexander Terekhov <tere...@web.de> writes:

> David Abrahams wrote:
> [...]
>> Yes, you can write systems where exceptions magically pop into the
>> excecution stream at "arbitrary" points,

^^^^^^^^^


>
> Yep, inside async-cancel-safe regions, for example.

But then that's not an _arbitrary_ point, is it?

>> but then the state of the system after unwinding is anybody's guess.
>
> Nope. That's NOT true.

'fraid so. Otherwise there would be no reason to designate special
async-cancel-safe regions.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

christopher

unread,
Dec 14, 2002, 6:51:37 AM12/14/02
to
> Yeah, problematic isn't it? Function calls are an important unit of
> abstraction in C++. We really need to be able to write nothrow
> functions (e.g. iterator dereference, operator delete, ...).

It's too bad we can't then. Or at least the standard doesn't
guarantee it. It doesn't say that an implementation can't throw when
faced with stack failure.

christopher

unread,
Dec 14, 2002, 6:52:14 AM12/14/02
to
>
> If you care, then you can gratuitously call a function from main()
> that has a 500K local variable. There remains the possibility of
> a stack overflow during your implementation's start-up code, but
> we can consider that a failure of the OS to run your application.

> Once you've passed the "stack commit" function, you are running
> in an environment where stack overflows won't happen.
>

You can generally instruct the loader to do this for you, but I
suspect most of us do not.
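
For the record, the trick quoted above can be as simple as the following
sketch; the 500K figure and the 4096-byte page size are assumptions, and
volatile is only there to keep the compiler from optimizing the touches away.

#include <cstddef>

void commit_stack()
{
    volatile char reserve[500 * 1024];
    const std::size_t page = 4096;
    // touch each page, shallow end first, so the whole region is really
    // committed now rather than faulted in (and possibly failing) later
    for (std::size_t i = 0; i < sizeof reserve; i += page)
        reserve[sizeof reserve - 1 - i] = 0;
}

int main()
{
    commit_stack();   // from here on, the first 500K of stack is committed
    // ... rest of the program ...
}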

David Abrahams

unread,
Dec 14, 2002, 11:34:49 AM12/14/02
to
chris...@baus.net (christopher) writes:

> an exception propagates from a
>> >
>> > Thanks for making my point for me.
>>
>> How, pray tell, does that make your point? Having no ability to write
>> nothrow operations is a very undesirable property, and is one reason
>> that exception-safety in Java is a pipe dream.
>>
>
> And it is the reason why having exception-safety in C++ is a pipe
> dream. Going directly to exit() in my strongly exception safe program
> because the OS failed to commit stack space certainly makes my
> exception safety purely academic, and it should be treated as such.

Do you think it's possible to eliminate all ways that the program
might unintentionally "go directly to exit"?

> Excuse my belief that pretending certain operations are safe because
> so I can write strongly exception safe programs is silly in real
> world applications.

I think you're confusing exception-safety with safety.
Exception-safety is really a misnomer. If you have exception-safety,
it just means that the program is in a consistent, recoverable state
if an exception is thrown. If you think that's meaningless just
because there are other ways the program can fail, well I guess that's
a shame.

I think the property that "if an exception is thrown, the program is
in a good state" is one worth preserving, because we know the program
is never barreling ahead with trashed internals. If we start handling
conditions that aren't recoverable by throwing exceptions, we destroy
that property. I don't happen to believe that all failures are
recoverable, and in particular stack overflow is obviously not a
condition from which we can recover in general.

If you make all stack overflows throw exceptions, you will still not
have absolute reliability - a stack overflow will still put your
program into an "unsafe state", only now you will hear about the
failure much later, when the consequences of the corruption caused by
unwinding are manifested, or worse, you'll fail to notice the problem
at all.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Paul D. DeRocco

unread,
Dec 14, 2002, 3:57:13 PM12/14/02
to
"David Abrahams" <da...@boost-consulting.com> wrote ...

>
> Hmm. What kind of real-time application has such a long startup time
> that restarting is prohibitive and is sufficiently non-critical that
> running along with a possibly corrupted internal state is more
> appropriate than waiting for it to be restarted? I'm having trouble
> stretching my imagination around that one.

I did a fairly elaborate stage lighting controller, written in C++, and
running on a 486-class CPU. It used a ROM MS-DOS clone whose only purpose
was to load the program and allow show data to be loaded and saved to
floppy, but its boot-up time, including BIOS POST, was about 30 seconds.
Even if a wild pointer dereference didn't scribble in some OS data area, and
the program could be restarted immediately, the operator would still have to
figure out where he was in the show, and jump to that point. In the
meantime, the lights would all shut off, not something you want to have
happen in the middle of a play or a rock concert. By turning such hardware
traps into exceptions containing the code address of the interrupt, I was
able to put that hex number into the display, and get feedback that let me
find the bugs. Most of the time, all that would happen to the user is that
the last executed command would be aborted, and the system would continue to
function normally, as long as the particular command sequence that caused
the bug was avoided.

--

Ciao, Paul D. DeRocco
Paul mailto:pder...@ix.netcom.com

Hillel Y. Sims

unread,
Dec 14, 2002, 3:58:47 PM12/14/02
to
"James Kanze" <ka...@gabi-soft.de> wrote in message
news:d6651fb6.02121...@posting.google.com...

> Likely. If we're talking about the system turning stack overflow into
> an exception, it is up to the system to determine where an appropriate
> place might be, and to throw there.
>
[..]

>
> Yes, but we're talking about a possible extension: turning stack
> overflow into a C++ exception. If an implementation decides to do this,
> then one of the things it must do is ensure that stack allocation only
> happens at specific points, where it can be correctly handled.
>
> I'm not sure what the effect of this would be on optimization, etc. On
> a Sparc, normally, the only stack allocation you have is at the actual
> call instruction. (I'm not sure if you have a lot of parameters.) If
> the hardware/system backs out of this correctly (and I'm pretty sure it
> can be made to), then you could make it work. There are two problems:
> - limiting stack overflow exceptions to call points is probably not
> enough to be able to write robust programs, and
> - it would probably require some OS modifications to implement.
> I have the feeling that something needs to be done, but I'm not sure
> that it is possible.
>

These are great points. Stack overflow exceptions probably do not really
need to be part of the C++ Standard, but a "good" implementation could
(maybe should) provide the ability for programmers to tap into and control
this OS-specific behavior within the C++ framework (all invariants /
exception guarantees upheld).. or at least just explicitly define the
platform-specific behavior (even if that means just stating that it is
really undefined behavior on that platform).

C++ seems to be working ok in most cases even with the Standard leaving the
stack allocation stuff up in the air currently, and there's probably good
reason not to mandate controlled stack allocation behavior across all
applications (especially if it could be a pessimization in some systems and
not every application needs it). On the other hand, no reason for vendors
not to provide it if it makes sense (and for us to pester them until they
give it to us ;-). I read a post intended for this thread at comp.lang.c++
(which was rejected from this group by the moderators even though it is
probably actually relevant..) which, if I understand correctly, describes
how programmers can specify stack overflow detection behavior on IBM
mainframes. On Alpha VMS, I'm not sure if it is explicitly spelled out in
any docs anywhere, but we currently believe (via some testing) that the C++
compiler always builds code such that all possible predeterminable stack
allocation for a function is performed at function entry prolog time (even
stuff like "if (!xyz) return; else { char a[1000]; blah blah; }" will
actually always cause 1000 bytes to be allocated on the stack when the
function is entered).

I'm going to suggest that:
1) In light of platforms where stack overflow is generally already handled
via non-C++ OS-exceptions and where "catch(...)" (and only "catch(...)") can
actually intercept those (such as Win32, VMS, etc.) [btw, this is a legal
extension to Standard C++], this is just another bullet point for why the
general convention for all C++ development (especially code that is intended
to be portable across platforms) should be to always only explicitly throw
objects inherited from std::exception and avoid catch(...) blocks
completely, in order for stuff like "catch all exceptions" code to only
catch C++-defined exceptions and not actually prevent broken program state
from aborting the application and causing data corruption (see the short
sketch after this list). (Well, it would really be better of course to avoid
"catch all exceptions" and use only "on_unwinding" type stuff like
ScopeGuard / ON_BLOCK_EXIT (*) when possible, but at least by following this
convention "catch(std::exception&)" can sufficiently substitute for "catch
all exceptions" where needed and program integrity is not compromised.)

2) For any application that is interested in correctly making use of C++
stack overflow exceptions for exception-safety and safe recovery when
possible, runtime "throw()" and/or NOTHROW_REGION (**) ES protection should
be explicitly used for all important nothrow code regions, since it is not
possible to guarantee deterministically under all circumstances at
compile-time that a stack overflow exception will not be generated by
certain code (which must not throw) at runtime. This is because if function
entry/prolog (currently treated as nothrow-regions by the C++ Standard)
actually does throw an exception in nothrow-defined code, then it is often
no longer possible to guarantee that all relevant program invariants /
exception guarantees have not been violated, and it seems to me that if you
are actually otherwise interested in enabling detection of this situation
via a C++ exception, ostensibly for safety and recovery purposes, then you
would likely want to ensure that if nothrow code is ever violated by an
exception (an unrecoverable situation), the application should be properly
terminated in a controlled way immediately (preferably without any
unwinding..). (Unless the throw() protection itself is not actually enabled
by the platform for the function call stack allocation prolog code itself,
in which case I guess it would be pretty much useless/broken. Since it is
specified in the function interface, the compiler could/should activate the
ES protection in client code before the point of the function call itself.
NOTHROW_REGION in client code could be used as a workaround for this (**),
if it is a problem. Also, thread-cancellation should be disabled around
nothrow regions for those systems where thread cancellation is intelligently
implemented via an exception-based mechanism (pthreads-win32, VMS, Solaris,
IBM) -- the compiler should probably do this automatically for throw() code
too, but it probably doesn't, see prior discussion of CANCEL_GUARD for more
information on a manual workaround (***).)
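
As a short sketch of the convention in point 1 (handle_command and the
logging are invented for the example): project code throws only types
derived from std::exception, and the backstop handler catches exactly
those, so an OS-level condition that only catch(...) could intercept is
left to terminate the process instead of being swallowed.

#include <exception>
#include <stdexcept>
#include <cstdio>

void handle_command()                 // stand-in for real work; may throw
{
    throw std::runtime_error("bad input");
}

void run_one_command()
{
    try {
        handle_command();
    }
    catch (const std::exception& e) { // deliberate C++ errors only
        std::fprintf(stderr, "command failed: %s\n", e.what());
    }
    // deliberately no catch(...) here
}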

>
> I was thinking more in terms of view of an application. What makes it
> acceptable to core dump on stack overflow, but not on heap allocation?
> Most telecoms applications, for example, use ASN.1 scope and filter in
> command input -- all of the parsers for filter that I've seen use
> recursive descent, which means that stack use is simply not predictable
> in advance. The protocols provide for an error message "insufficient
> resources", but we pay contractual penalties for system down time.
> Being able to catch stack overflow would sure make life easier in this
> case.
>

If the exception is generated from exception-safe code, then there should
be no reason not to handle/recover from the situation without aborting in
most cases (if that makes sense for your application). If the exception is
generated from certain nothrow code regions, then the program state is
potentially corrupted and you would really want to core dump to avoid
possible worse data corruption (something like putting the encrypted data on
the wrong line was mentioned as a possible side-effect). My understanding is
that "always on" service in these sorts of critical cases should better be
provided for by external monitoring processes which maintain physically
separate memory spaces (just a separate process would probably do) that will
restart specific servers immediately if they terminate abnormally for any
reason (even going so far as to do stuff like maintain pre-initialized
hot-swappable backup servers for the shortest possible downtime).

thanks,
hys

(*): ScopeGuard and ON_BLOCK_EXIT, part of the scopeguard.h package by
Alexandrescu and Marginean. Highly recommended package. See
http://www.cuj.com/experts/1812/alexandr.htm?topic=experts for details and
sourcecode.

(**) NOTHROW_REGION: Please see http://tinyurl.com/3iu4 --
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=f2ed
0aad
..0209031536.d7c079%40posting.google.com&rnum=1&prev=/groups%3Fq%3Dcomp.
lang.
c%252B%252B.moderated%2BNOTHROW_REGION%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3D
en
for a prior treatment and sourcecode implementation of this mechanism.

(***) CANCEL_GUARD: thread-cancellation protection mechanism for nothrow
code. Please see http://tinyurl.com/3ivd --
http://groups.google.com/groups?dq=&hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=
25je
8.31973%24GL6.4689755%40news02.optonline.net&rnum=1&prev=/groups%3Fq%3Dg
:thl
1895679809d%26dq%3D%26hl%3Den%26lr%3D%26ie%3DUTF-8%26oe%3DUTF-8%26selm%3
D25j
e8.31973%2524GL6.4689755%2540news02.optonline.net for explanation and a
pthreads-based sourcecode implementation.

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Hillel Y. Sims

unread,
Dec 14, 2002, 3:59:42 PM12/14/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uy96ur...@boost-consulting.com...

> AFAICT, that's not possible in any useful way. First, the user would
> have to mark any "nothrow region" somehow

throw() ? (or NOTHROW_REGION (*) )

> so the system could know
> that it was not supposed to throw there (incredibly cumbersome; these
> regions appear all over ordinary code). But then, what *does* the
> system do if it can't get stack memory in one of those regions? By
> that point, it's too late.

I think it's not really a problem in a lot of cases:

CtorThatCanThrowSomething::CtorThatCanThrowSomething(int x)
  : pimpl(new CtorThatCanThrowImpl)
{
    // this call is "nothrow" but could actually theoretically throw a
    // stack overflow
    printf("bababooey!\n");
}

Assuming printf() itself doesn't internally leak resources, there is no
problem for CtorThatCanThrowSomething if a stack overflow exception is
triggered by printf().

Well in case printf() could actually leak resources, or for any other
functions that are not marked throw() but really should be in light of
stack-overflow-as-C++-interceptable-exceptions, a potential workaround for
calling these routines, without requiring changes to the underlying
implementation, could be the use of NOTHROW_REGION around these function
calls in client code (*). Feh, no kidding about the cumbersome part.

{
    NOTHROW_REGION;
    printf("safe!\n");  // Give me stack, or give me death!
}

> > : turning stack overflow into a C++ exception.
>
> Oh, I am pretty sure that this happens on Win32/VC++. It would be
> consistent with other EH madness on that platform.

I believe the Win32 system exception mechanism (along with many other
aspects of Win32) is pretty much directly derived from VMS. It's at least
one part they got right! ;-)

thanks,
hys

(*) NOTHROW_REGION: Please see http://tinyurl.com/3iu4 --


http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&threadm=f2ed
0aad
..0209031536.d7c079%40posting.google.com&rnum=1&prev=/groups%3Fq%3Dcomp.
lang.
c%252B%252B.moderated%2BNOTHROW_REGION%26ie%3DUTF-8%26oe%3DUTF-8%26hl%3D
en
for a prior treatment and sourcecode implementation of this mechanism.

--


(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Hillel Y. Sims

unread,
Dec 14, 2002, 4:00:18 PM12/14/02
to
"James Kanze" <ka...@gabi-soft.de> wrote in message
news:d6651fb6.0212...@posting.google.com...

> "Mirek Fidler" <c...@volny.cz> wrote in message
> news:<at84jf$1av0$1...@news.vol.cz>...
> > That is, that new should rather be non-throwing and low memory
> > situations solved by program termination. BTW, on most operating
> > systems it is not unlikely that OS crashes BEFORE new can even throw.
>
> I've never seen that. I've successfully handled out of memory
> situations under Solaris, HP/UX and AIX, without intervening system
> crashes. I've triggered out of memory conditions under Windows NT as
> well, and although I wasn't able to successfully recover from them, the
> system certainly didn't crash.
>

I believe some OS's (Linux, maybe? but I might be totally wrong) do not
always actually 'commit' memory immediately when you request it, they just
sort of set up their bookkeeping to expect that you may allocate up to X
amount of bytes and tell you it's ok without actually attempting to allocate
all of it. This is apparently intended as an optimization whereby
programmers may often request large chunks of memory, but only use a little
at a time, so instead of allocating all of it up front, they give it to you
as you attempt to access it. Eventually I guess either the system would
thrash itself to death without actually dying, or you'd just get some sort
of accvio at a random point in your code.

>
> The problem is that the guarantees of exception specifications are
> run-time guarantees, not compile time ones. Which pretty much means
> that they are just another way of crashing your program.
>

This could actually be desirable behavior in some cases (eg stack overflow
in nothrow code).

hys

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Hillel Y. Sims

unread,
Dec 14, 2002, 4:00:54 PM12/14/02
to
"Philippe Mori" <philip...@hotmail.com> wrote in message
news:heoK9.6622$281.1...@news20.bellglobal.com...

> If you really want your application to continue after a stack overflow,
> maybe the best thing to do is to do the recursive work in another thread
> with its own stack. If an overflow occurs, you simply terminate that thread
> and start another one instead.

Since threads share a common memory space, corruption in one thread is
corruption in the entire process. If the stack overflow has corrupted the
state of one thread, then the state of the remainder of the process is
suspect.

Well actually, I'm hoping some of the Ada-knowledgeable people could chime
in on this thread and explain how stack overflow is handled by that
environment.. ;-)

hys

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

David Abrahams

unread,
Dec 14, 2002, 4:04:45 PM12/14/02
to
chris...@baus.net (christopher) writes:

>> Yeah, problematic isn't it? Function calls are an important unit
>> of abstraction in C++. We really need to be able to write nothrow
>> functions (e.g. iterator dereference, operator delete, ...).
>
> It's too bad we can't then. Or at least the standard doesn't
> guarantee it. It doesn't say that a implementation can't throw when
> faced with stack failure.

Sad but true. We need to rely on our implementors not to do what
Microsoft did.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Niklas Matthies

unread,
Dec 15, 2002, 7:14:37 AM12/15/02
to
On 2002-12-13 15:29, David Abrahams <da...@boost-consulting.com> wrote:
> ka...@gabi-soft.de (James Kanze) writes:
>> David Abrahams <da...@boost-consulting.com> wrote in message
>> news:<uhedkv...@boost-consulting.com>...
>>> ka...@gabi-soft.de (James Kanze) writes:
>>
>>> Likewise if you unwind the stack from an inappropriate place.
>>
>> Likely. If we're talking about the system turning stack overflow into
>> an exception, it is up to the system to determine where an appropriate
>> place might be, and to throw there.
>
> AFAICT, that's not possible in any useful way. First, the user would
> have to mark any "nothrow region" somehow so the system could know
> that it was not supposed to throw there (incredibly cumbersome; these
> regions appear all over ordinary code). But then, what *does* the
> system do if it can't get stack memory in one of those regions? By
> that point, it's too late. About the only way I can imagine dealing
> with this is via some sort of whole-program link-time static analysis
> which traces all possible code paths, detects non-tail recursion, and
> checks the amount of stack needed each time one of these regions is
> entered... which adds an additional runtime check at the entry to each
> nothrow region, slowing the whole program down, not to mention linking!

I don't think that this (and also the other points you mentioned in this
posting) would be an unsurmountable problem, although certainly not
trivial. There's a more important show-stopper, though: Function
pointers, and, more importantly, virtual member functions.

Nothrow variants are needed for those, but they call code that is
dynamically determined, possibly even has been dynamically loaded, and
whose stack space requirements therefore cannot be guaranteed by the
calling code. Unfortunately the calling code happens to be the only code
that is allowed to throw an exception when the called code is to be
nothrow.

This could still be made to work by tagging all nothrow virtual
functions (where "virtual function" is meant to include all of its
possible overrides) and function pointer types with some worst-case
stack space requirement, and fail compilation (or linking) whenever a
virtual function implementation exceeds that limit or whenever the
address of a function that exceeds that limit is bound to such a
function pointer.

While all this starts to be quite ugly, I'm undecided whether this
should really be considered a show-stopper. Having the guarantee that
one's program won't ever ungracefully terminate in mid-execution because
of stack space exhaustion certainly has some merits. And where it hurts
performance (maybe more because of always having to commit the worst-case
stack space for nothrow call trees, rather than because of the runtime
checks needed for non-nothrow calls), it could be turned off for those
applications where it isn't considered critical.

(I apologize if some of this has already been mentioned here, I'm still
in the middle of catching up this thread.)

-- Niklas Matthies

David Abrahams

unread,
Dec 15, 2002, 7:19:17 AM12/15/02
to
"Hillel Y. Sims" <use...@phatbasset.com> writes:

> I'm going to suggest that:
> 1) In light of platforms where stack overflow is generally already
> handled via non-C++ OS-exceptions and where "catch(...)" (and only
> "catch(...)") can actually intercept those (such as Win32, VMS, etc.)
> [btw, this is a legal extension to Standard C++], this is just another
> bullet point for why general convention for all C++ development
> (especially code that is intended to be portable across platforms)
> should be to always only explicitly throw objects inherited from
> std::exception and avoid catch(...) blocks completely, in order for
> stuff like "catch all exceptions" code to only catch C++-defined
> exceptions and not actually prevent broken program state from aborting
> the application and causing data corruption.

I don't see how catch(...) changes anything, since the unwinding code
that corrupts your program state might just as well be (in fact
probably is) in some destructor. The only case where it might help to
avoid catch(...) is when you're going to avoid catching the stack
overflow altogether, in which case it propagates out of the program,
the program aborts, and whether or not any unwinding occurs is
implementation-defined or unspecified (I forget which). In any case,
I don't think this is the behavior you're after, is it?

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 15, 2002, 7:25:27 AM12/15/02
to
"Paul D. DeRocco" <pder...@ix.netcom.com> writes:

> I did a fairly elaborate stage lighting controller, written in C++,
> and running on a 486-class CPU. It used a ROM MS-DOS clone whose
> only purpose was to load the program and allow show data to be
> loaded and saved to floppy, but its boot-up time, including BIOS
> POST, was about 30 seconds.

<snip>

> Most of the time, all that would happen to the user is that the last
> executed command would be aborted, and the system would continue to
> function normally, as long as the particular command sequence that
> caused the bug was avoided.

OK, point taken. In deployed code it can sometimes be better to take
your chances with recovery from an arbitrary point in the execution
stream. I don't mind the idea that some applications get to use this
kind of behavior to advantage. I just don't want to see this become
part of the required "defined behavior" of C++.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 15, 2002, 7:27:21 AM12/15/02
to
"Hillel Y. Sims" <use...@phatbasset.com> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:uy96ur...@boost-consulting.com...
>> AFAICT, that's not possible in any useful way. First, the user would
>> have to mark any "nothrow region" somehow
>
> throw() ?

You'd need much more than that. You can't add throw() to a pointer
increment in an ordinary function or an iterator dereference in a
generic algorithm. Nor can you add throw() to a function which throws
exceptions, but only under different conditions.

> (or NOTHROW_REGION (*) )

Closer... if we carefully mark every NOTHROW_REGION, it becomes safe
to throw on stack overflow anywhere else. (It probably even makes
asynchronous exceptions work if you can find a way to defer them to
the end of any NOTHROW_REGION). As long as nobody thinks it gets us
absolute protection against stack overflows, I think it's a promising
approach.

>> so the system could know that it was not supposed to throw there
>> (incredibly cumbersome; these regions appear all over ordinary
>> code). But then, what *does* the system do if it can't get stack
>> memory in one of those regions? By that point, it's too late.
>
> I think it's not really a problem in a lot of cases:
>
> CtorThatCanThrowSomething::CtorThatCanThrowSomething(int x)
>   : pimpl(new CtorThatCanThrowImpl)
> {
>   printf("bababooey!\n");  // this is "nothrow" but could actually
>                            // theoretically throw a stack overflow
> }
>
> Assuming printf() itself doesn't internally leak resources, there is no
> problem for CtorThatCanThrowSomething if a stack overflow exception is
> triggered by printf().

Showing an example where it's not a problem doesn't mean it's never a
problem (I realize you didn't claim that). In fact, your example
doesn't have any regions which need to be nothrow; it just happens to
use a nothrow operation in a context where exceptions don't matter.

> Well in case printf() could actually leak resources, or for any
> other functions that are not marked throw() but really should be in
> light of stack-overflow-as-C++-interceptable-exceptions, a potential
> workaround for calling these routines, without requiring changes to
> the underlying implementation, could be the use of NOTHROW_REGION
> around these function calls in client code (*). Feh, no kidding
> about the cumbersome part.
>
> {
>   NOTHROW_REGION;
>   printf("safe!\n"); // Give me stack, or give me death!
> }

You lost me here. The paragraph above is unintelligible to me, except
for the part after "Feh" ;-)

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Hillel Y. Sims

unread,
Dec 16, 2002, 5:25:00 AM12/16/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:un0n8w...@boost-consulting.com...

> "Hillel Y. Sims" <use...@phatbasset.com> writes:
>
> > I'm going to suggest that:
> > 1) In light of platforms where stack overflow is generally already
> > handled via non-C++ OS-exceptions and where "catch(...)" (and only
> > "catch(...)") can actually intercept those (such as Win32, VMS, etc.)
> > [btw, this is a legal extension to Standard C++], this is just another
> > bullet point for why general convention for all C++ development
> > (especially code that is intended to be portable across platforms)
> > should be to always only explicitly throw objects inherited from
> > std::exception and avoid catch(...) blocks completely, in order for
> > stuff like "catch all exceptions" code to only catch C++-defined
> > exceptions and not actually prevent broken program state from aborting
> > the application and causing data corruption.
>
> I don't see how catch(...) changes anything, since the unwinding code
> that corrupts your program state might just as well be (in fact
> probably is) in some destructor. The only case where it might help to
> avoid catch(...) is when you're going to avoid catching the stack
> overflow altogether, in which case it propagates out of the program,
> the program aborts, and whether or not any unwinding occurs is
> implementation-defined or unspecified (I forget which). In any case,
> I don't think this is the behavior you're after, is it?
>

Actually it is... if you cannot catch the exception by name, then you cannot
handle it effectively and any attempt to do so is unsafe. You could use
OS-specific extensions to catch the stack overflow exception by its identity
and handle it safely (although you probably wouldn't get stack unwinding in
that case). But catch(...) does not allow you to determine the context of
the actual exception, so you cannot know whether it is really a potentially
recoverable exception or not (maybe it was some std::runtime_error, maybe
stack overflow, or maybe an asynchronous access violation hardware trap),
and therefore you cannot ever do anything meaningful (or safe) in that
situation so you just want to avoid intercepting it completely (and avoid
triggering unwinding / cleanup code in unstable situations). Generally I
think it's best to avoid trying to catch something that you don't really
know what it is / aren't expecting, anyhow.

The value of the std::exception model is that, if all exceptions emanate
from a single known / named base class, you can always at least know that
you are only catching / expecting a C++ based exception (even if you still
don't know what it really is within that domain..), so at least for stuff
like templated code which must "catch(everything) { cleanup; throw; }" you
would be able to restrict "everything" to only the subset of all synchronous
C++ exceptions, which is really what is intended with regard to C++
exception safety, if I understand correctly. ( -- although it could be
argued that the better model for those circumstances would be to use
"on_winding" / ScopeGuard type stuff instead, but the concept of
std::exception is still valid)

In my point above, I was reflecting on the current situation on many
machines, where stack overflow exceptions are already being thrown as OS
exceptions that can be caught by catch(...) but do not have a C++ name. If
they were thrown as named C++ exceptions, then you could catch them by name
(where appropriate) and handle them effectively (define "effectively" ;-),
and also get appropriate stack unwinding. Because they are thrown as OS
exceptions, you cannot handle them via catch('name') sequences (except
erroneously via catch(...)), and must rely on OS-based EH mechanisms (or let
the program crash, for safety).

(I know you think that it is heinous that catch(...) catches OS exceptions,
and to some extent I probably agree with you, but I believe it is possible
to show that these OS exceptions can actually be implemented as true C++
exceptions by the OS using private names that are simply not visible to end
user code, and therefore this behavior does not necessarily violate the
Standard in any way.)

thanks,
hys

PS: What is the difference between "implementation-defined" and
"unspecified"??

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Hillel Y. Sims

unread,
Dec 16, 2002, 5:35:15 AM12/16/02
to
"Niklas Matthies" <comp.lang.c++.mod...@nmhq.net> wrote in
message news:slrnavnllg.jb.comp.lang....@nmhq.net...

> I don't think that this (and also the other points you mentioned in this
> posting) would be an insurmountable problem, although certainly not
> trivial. There's a more important show-stopper, though: Function
> pointers, and, more importantly, virtual member functions.
>
> Nothrow variants are needed for those, but they call code that is
> dynamically determined, possibly even has been dynamically loaded, and
> whose stack space requirements therefore cannot be guaranteed by the
> calling code. Unfortunately the calling code happens to be the only code
> that is allowed to throw an exception when the called code is to be
> nothrow.
>

See http://tinyurl.com/3iu4 (comp.lang.c++.moderated, search for
"NOTHROW_REGION") for a mechanism useful for protecting calling code
against violation of nothrow contracts by called code. Example:

void func()
{
   do_some_stuff();
   do_some_more_stuff();
   {
      // 'critical' section
      NOTHROW_REGION;
      do_some_stuff_if_it_throws_were_hosed();
   }
   do_some_other_stuff();
}

(The ability of the above to provide stack overflow exception safety is
dependent upon the ability to control stack allocation behavior on the
particular platform -- I guess in the case of nested function calls this is
already implicitly provided by all platforms.)
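
A rough emulation sketch of that last point (this is not NOTHROW_REGION
itself; touch_stack, the 64 KB figure, and the stub declarations borrowed
from the example above are all assumptions): pre-touch the worst-case
amount of stack just before the critical section, so that any overflow
happens there, where an exception can still be tolerated, rather than
inside the nothrow code. It only helps on platforms that surface the
overflow at the faulting access in a way the program can react to.

#include <cstddef>

void do_some_stuff();
void do_some_more_stuff();
void do_some_stuff_if_it_throws_were_hosed();
void do_some_other_stuff();

// Recursively commit (touch) roughly 'bytes' of stack, one frame at a time.
void touch_stack(std::size_t bytes)
{
    volatile char probe[4096];
    probe[0] = 0;                        // touch this frame's page
    if (bytes > sizeof probe)
        touch_stack(bytes - sizeof probe);
    probe[sizeof probe - 1] = 0;         // keeps the call out of tail position
}

void guarded_func()
{
    do_some_stuff();
    do_some_more_stuff();
    {
        // stand-in for NOTHROW_REGION: fail here, not inside the call below
        touch_stack(64 * 1024);          // assumed worst case for this section
        do_some_stuff_if_it_throws_were_hosed();
    }
    do_some_other_stuff();
}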

>
> While all this starts to be quite ugly, I'm undecided whether this
> should really be considered a show-stopper. Having the guarantee that
> one's program won't ever ungracefully terminate in mid-execution because
> of stack space exhaustion certainly has some merits.

It does not seem possible to guarantee that the program will never terminate
because of stack space exhaustion. It might be worthwhile to use nothrow
guards like throw() or NOTHROW_REGION to ensure that stack exhaustion in
unrecoverable circumstances cannot cause data corruption (by forcing
immediate termination).
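
For what it's worth, a minimal sketch of the "force immediate termination"
half of that idea, assuming an implementation that both enforces dynamic
exception specifications and delivers stack overflow as a C++ exception
(neither is guaranteed; MSVC, for one, does not enforce throw()). All of
the names below are invented for the example.

#include <cstdio>
#include <cstdlib>
#include <exception>

void on_terminate()
{
    std::fputs("fatal: terminating without unwinding callers\n", stderr);
    std::abort();
}

void critical_update() throw()   // the nothrow guard around the critical region
{
    // ... commit phase; if anything escapes from here (including a
    // translated stack overflow), unexpected() and then terminate() are
    // called instead of unwinding through the callers ...
}

int main()
{
    std::set_terminate(on_terminate);
    critical_update();
    return 0;
}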

thanks,
hys

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Garry Lancaster

unread,
Dec 16, 2002, 4:13:20 PM12/16/02
to
David Abrahams:

> >> Yeah, problematic isn't it? Function calls are an important unit
> >> of abstraction in C++. We really need to be able to write nothrow
> >> functions (e.g. iterator dereference, operator delete, ...).

Christopher:


> > It's too bad we can't then.

Of course we can write no-throw functions. We just
can't expect them to behave predictably in the face
of undefined behaviour. We can't expect any code to
do anything predictably in the face of undefined
behaviour: why should no-throw functions be any
different?

> > Or at least the standard doesn't
> > guarantee it. It doesn't say that a implementation can't throw when
> > faced with stack failure.

That's correct. Behaviour upon stack overflow is
undefined, according to the C++ standard.

David Abrahams:


> Sad but true. We need to rely on our implementors not to do what
> Microsoft did.

No we don't. The MS behaviour is completely standards
compliant and occasionally very useful.

There is a lot of confusing stuff being posted to this
thread by people who seem to think exceptions on
stack overflow create some kind of problem. This is
simply untrue. Since theoretical arguments have so
far led to such little light, I would ask anyone who thinks
otherwise to post some standard-compliant C++ code
that shows the problem they foresee. They should
include a description of the behaviour they expect
the code to exhibit on stack overflow. (I would think
such a description would have to read "undefined",
but it's not my example ;-)

Kind regards

Garry Lancaster

James Kanze

unread,
Dec 16, 2002, 4:19:34 PM12/16/02
to
"Hillel Y. Sims" <use...@phatbasset.com> wrote in message
news:<YEyK9.20054$a8.1...@news4.srv.hcvlny.cv.net>...

> "James Kanze" <ka...@gabi-soft.de> wrote in message
> news:d6651fb6.0212...@posting.google.com...
> > "Mirek Fidler" <c...@volny.cz> wrote in message
> > news:<at84jf$1av0$1...@news.vol.cz>...
> > > That is, that new should rather be non-throwing and low memory
> > > sitatuations solved by program termination. BTW, on most operating
> > > systems it is not unlikely that OS crashes BEFORE new can even
> > > throw.

> > I've never seen that. I've successfully handled out of memory
> > situations under Solaris, HP/UX and AIX, without intervening system
> > crashes. I've triggered out of memory conditions under Windows NT
> > as well, and although I wasn't able to successfully recover from
> > them, the system certainly didn't crash.

> I believe some OS's (Linux, maybe? but I might be totally wrong) do
> not always actually 'commit' memory immediately when you request it,
> they just sort of set up their bookkeeping to expect that you may
> allocate up to X amount of bytes and tell you it's ok without actually
> attempting to allocate all of it. This is apparently intended as an
> optimization whereby programmers may often request large chunks of
> memory, but only use a little at a time, so instead of allocating all
> of it up front, they give it to you as you attempt to access
> it. Eventually I guess either the system would thrash itself to death
> without actually dying, or you'd just get some sort of accvio at a
> random point in your code.

This is a known behavior of some OS's, yes. Including Linux, and you'll
notice that Linux isn't in the list of systems where I've successfully
recovered. :-) (In fact, there is a way of turning this behavior off in
Linux. I've not had the occasion to do any trials -- my customers don't
use Linux -- but I would imagine that if you did this, there would also
be no problem.)

Another poster mentioned thrashing, or the possibility of some
secondary components crashing. Thrashing was a problem with early
versions of Solaris, but doesn't seem to be a problem from 2.7 on. As
for secondary components crashing, I've never seen the critical
components crash on a system which didn't do lazy allocation. It
obviously can happen, but typically, most critical components will have
been running long enough to not need any extra memory. (I might add
that I don't consider things like the window manager particularly
critical, since the actual application probably won't use it anyway.)

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

James Kanze

unread,
Dec 16, 2002, 4:20:34 PM12/16/02
to
"Hillel Y. Sims" <use...@phatbasset.com> wrote in message
news:<npyK9.20047$a8.1...@news4.srv.hcvlny.cv.net>...

> [..]

David's answer to my post basically said what I was afraid it would. :-)
I was just hoping that maybe, just maybe there was a way out, but I
didn't really believe it myself.

The basic point is (at least for "typical" systems): a stack overflow
can occur any time there is a push. And the system must push during a
function call. Allowing stack overflow to trigger an exception thus
basically means that throw() functions become impossible.

I'm not sure what the answer is. In theory, stack use is bounded except
when recursion is involved. In the case of recursion, stack use for
each recursive invocation is bounded. So it should be possible to simply
create a request telling the system I'm going to need so much stack,
much in the same way I allocate explicitly on the heap (except that in
this case, I allocate all that is needed for a specific action). The
problem is that while these stack uses are bounded, I generally have no
easy way, or no way at all, to determine what that bound is. Which
makes it rather difficult for me even if the system function existed.
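
A hedged sketch of the kind of workaround that is available in the
meantime (the node type, the guard class, and the depth limit are all
invented for illustration): make the recursion depth explicit and turn
"about to exhaust the stack" into an ordinary, synchronous C++ exception
at a chosen point, with a limit that is necessarily just a guess at the
real bound.

#include <stdexcept>

struct Node { Node* left; Node* right; };

class depth_guard {
    static int depth;
public:
    depth_guard()
    {
        if (++depth > 10000) {        // guessed limit, well below the real one
            --depth;
            throw std::runtime_error("recursion too deep");
        }
    }
    ~depth_guard() { --depth; }
};
int depth_guard::depth = 0;

int count_nodes(const Node* n)
{
    depth_guard guard;                // throws long before the stack is gone
    if (n == 0)
        return 0;
    return 1 + count_nodes(n->left) + count_nodes(n->right);
}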

--
James Kanze mailto:jka...@caicheuvreux.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 16, 2002, 4:23:13 PM12/16/02
to
"Hillel Y. Sims" <use...@phatbasset.com> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:un0n8w...@boost-consulting.com...
>

> > I don't see how catch(...) changes anything, since the unwinding
> > code that corrupts your program state might just as well be (in
> > fact probably is) in some destructor. The only case where it
> > might help to avoid catch(...) is when you're going to avoid
> > catching the stack overflow altogether, in which case it
> > propagates out of the program, the program aborts, and whether or
> > not any unwinding occurs is implementation-defined or
> > unspecified (I forget which). In any case, I don't think this is
> > the behavior you're after, is it?
> >
>
> Actually it is... if you cannot catch the exception by name, then
> you cannot handle it effectively and any attempt to do so is
> unsafe. You could use OS-specific extensions to catch the stack
> overflow exception by its identity and handle it safely (although
> you probably wouldn't get stack unwinding in that case). But
> catch(...) does not allow you to determine the context of the actual
> exception, so you cannot know whether it is really a potentially
> recoverable exception or not (maybe it was some std::runtime_error,
> maybe stack overflow, or maybe an asynchronous access violation
> hardware trap), and therefore you cannot ever do anything meaningful

> (or safe) in that situation.

Q: What can you ever do that's safe when you have an "unrecoverable
exception"?

A: Nothing, that's what makes it unrecoverable.

If you're going to roll the dice and try to recover from the exception
anyway (e.g. see Paul DeRocco's lighting controller), that's great;
you don't need to know anything about the context in order to do some
recovery. If you're not going to try to recover, then you might as
well just have these kinds of fault go directly to terminate(); do not
pass go; do not collect $100; do _not_ throw an exception.

> so you just want to avoid intercepting it completely (and avoid
> triggering unwinding / cleanup code in unstable
> situations). Generally I think it's best to avoid trying to catch
> something that you don't really know what it is / aren't expecting,
> anyhow.

That's a philosophical concern, not a practical one... unless the OS
_forces_ you to deal with brave unwinding behavior for "asynchronous"
faults where it's inappropriate.

> The value of the std::exception model is that, if all exceptions emanate
> from a single known / named base class, you can always at least know that
> you are only catching / expecting a C++ based exception (even if you still
> don't know what it really is within that domain..), so at least for stuff
> like templated code which must "catch(everything) { cleanup; throw; }" you
> would be able to restrict "everything" to only the subset of all synchronous
> C++ exceptions, which is really what is intended with regard to C++
> exception safety, if I understand correctly.

Well, some people legitimately want to roll the dice with the other
exceptions. And your
NOTHROW_REGION idea makes those uses even less theoretically unsound.

> In my point above, I was reflecting on the current situation on many
> machines, where stack overflow exceptions are already being thrown
> as OS exceptions that can be caught by catch(...) but do not have a
> C++ name. If they were thrown as named C++ exceptions, then you
> could catch them by name (where appropriate) and handle them
> effectively (define "effectively" ;-), and also get appropriate
> stack unwinding. Because they are thrown as OS exceptions, you
> cannot handle them via catch('name') sequences (except erroneously
> via catch(...)), and must rely on OS-based EH mechanisms (or let the
> program crash, for safety).
>
> (I know you think that it is heinous that catch(...) catches OS exceptions,
> and to some extent I probably agree with you, but I believe it is possible
> to show that these OS exceptions can actually be implemented as true C++
> exceptions by the OS using private names that are simply not visible to end
> user code, and therefore this behavior does not necessarily violate the
> Standard in any way.)

Sure, I know it's allowable behavior. What I think is heinous is not
that they're non-C++ exceptions, it's that they're "asynchronous" in
some sense and can emanate from regions that were supposed to be
nothrow. I don't happen to agree that the lack of context for the
throw or information about the failure is an important problem for
catch(...), because most catch(...) blocks should include "throw;"
anyway. When they don't, it's up to the writer to know what kinds of
exceptions she might be eating on that platform and do something
"appropriate".

>
> thanks,
> hys
>
> PS: What is the difference between "implementation-defined" and
> "unspecified"??

In the former case, you can look up the behavior in the
implementation's documentation. In the latter, you probably can't.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Bob Bell

unread,
Dec 16, 2002, 4:41:05 PM12/16/02
to
"Hillel Y. Sims" <use...@phatbasset.com> wrote in message news:<MS9L9.28797$a8.2...@news4.srv.hcvlny.cv.net>...

> Actually it is... if you cannot catch the exception by name, then you cannot
> handle it effectively and any attempt to do so is unsafe. You could use
> OS-specific extensions to catch the stack overflow exception by its identity
> and handle it safely (although you probably wouldn't get stack unwinding in
> that case). But catch(...) does not allow you to determine the context of
> the actual exception, so you cannot know whether it is really a potentially
> recoverable exception or not (maybe it was some std::runtime_error, maybe
> stack overflow, or maybe an asynchronous access violation hardware trap),

This is actually a good argument for why "asynchronous access
violation hardware traps" should not be turned into exceptions, rather
than an argument against catch (...).

> (I know you think that it is heinous that catch(...) catches OS exceptions,

Actually, it's heinous that the OS _throws_ exceptions at arbitrary,
unpredictable points.

> and to some extent I probably agree with you, but I believe it is possible
> to show that these OS exceptions can actually be implemented as true C++
> exceptions by the OS using private names that are simply not visible to end
> user code, and therefore this behavior does not necessarily violate the
> Standard in any way.)

If an implementation throws exceptions during undefined behavior, it
does not violate the standard. That doesn't make it a good thing,
however...

Bob

David Abrahams

unread,
Dec 17, 2002, 7:05:48 AM12/17/02
to
"Garry Lancaster" <glanc...@ntlworld.com> writes:

> David Abrahams:
>> Sad but true. We need to rely on our implementors not to do what
>> Microsoft did.
>
> No we don't. The MS behaviour is completely standards
> compliant and occasionally very useful.
>
> There is a lot of confusing stuff being posted to this
> thread by people who seem to think exceptions on
> stack overflow create some kind of problem. This is
> simply untrue. Since theoretical arguments have so
> far led to such little light, I would ask anyone who thinks
> otherwise to post some standard-compliant C++ code
> that shows the problem they foresee.

I hope you'll accept a simple example in pseudocode instead:

   try {
       allocate new space in database file
       write new database record
   }
   catch(...)
   {
       reclaim space by restoring database to old size
       throw;
   }

[don't give me a hard time about catch(...) here, it's for
illustrative purposes. Use an RAII class if you prefer]

But wait: the stack overflow happened in the middle of a write
operation in the filesystem, leaving the file descriptor record in an
inconsistent state. The operation in the catch(...) block which
attempts to restore the database to the old size by passing a pointer
to the descriptor record to some OS routine now walks on some part of
the disk it doesn't own, or perhaps garbles good data in the database.

> They should include a description of the behaviour they expect the
> code to exhibit on stack overflow. (I would think such a description
> would have to read "undefined", but it's not my example ;-)

I don't expect any particular behavior on an arbitrary
standards-conforming C++ system. However, if the system is capable of
detecting and responding to stack overflow I expect to be able to
select immediate termination or some other non-unwinding response that
isn't going to corrupt my database.
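
For the record, here is roughly what such a non-unwinding response looks
like with the Win32-specific machinery (a platform extension, not ISO
C++; everything except the Win32 names is invented for the example): the
__except filter recognizes the overflow and the handler terminates, with
no C++ destructors running on the way.

#include <windows.h>
#include <cstdlib>

void do_database_work();   // assumed: the body from the pseudocode above

int overflow_filter(unsigned int code)
{
    return code == EXCEPTION_STACK_OVERFLOW
        ? EXCEPTION_EXECUTE_HANDLER
        : EXCEPTION_CONTINUE_SEARCH;
}

int run_guarded()
{
    // (MSVC requires that a __try function contain no objects with destructors)
    __try {
        do_database_work();
    }
    __except (overflow_filter(GetExceptionCode())) {
        // no C++ unwinding has happened on this path; fail hard and fast
        std::abort();
    }
    return 0;
}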

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Hillel Y. Sims

unread,
Dec 17, 2002, 7:10:31 AM12/17/02
to

"David Abrahams" <da...@boost-consulting.com> wrote in message
news:uwumae...@boost-consulting.com...

>
> Q: What can you ever do that's safe when you have an "unrecoverable
> exception"?
>
> A: Nothing, that's what makes it unrecoverable.
[..]

> Sure, I know it's allowable behavior. What I think is heinous is not
> that they're non-C++ exceptions, it's that they're "asynchronous" in
> some sense and can emanate from regions that were supposed to be
> nothrow. I don't happen to agree that the lack of context for the
> throw or information about the failure is an important problem for
> catch(...), because most catch(...) blocks should include "throw;"
> anyway. When they don't, it's up to the writer to know what kinds of
> exceptions she might be eating on that platform and do something
> "appropriate".

In the sense of a C++ stack overflow exception mechanism, we would indeed
want a specification for triggering those exceptions synchronously at
well-known allocation points. Asynchronous exceptions are certainly much
more difficult to handle; when they are triggered there is usually no way to
know if the program state integrity is intact (except for random
programmer-specified async-cancel-safe regions). The only way I can think of
with C++ to absolutely only catch synchronous exceptions generically (in one
single catch() handler) is via the std::exception hierarchy. catch(...)
catches everything including async exceptions, which could represent a
corrupted program state, and therefore in the vast majority of circumstances
(except for the tiny few applications where it really does make most sense
to continue even with potentially corrupted data as long as possible) it is
best to avoid them and let them dump the application.

The significance of catch(...) { throw; } is that this construct _forces_
stack unwinding for all exception contexts, including unrecoverable
asynchronous system exceptions, even when they are actually not handled at
all by the outer code and the application is going to terminate() without
otherwise triggering unwinding. The destructors are then executing with a
potentially corrupted program state, and random data corruption may (or may
not) result. In the sense of the "you can do nothing safe" philosophy, it's
usually going to be best to avoid triggering stack unwinding inadvertently
in these circumstances, unless it is actually directed by an outer layer on
the call stack which explicitly handles that exception (even if it's a
catch(...) { /*swallow*/ } for one of those don't-care-never-stop apps). I
believe the generally more appropriate strategy for "I must clean up this
state on any exceptional exit" would be to only trigger cleanup code if the
exception is actually handled by an outer layer -- currently this is done
via RAII or you can use ScopeGuard for some flexibility wrapping function
calls (some kind of future "on_unwinding { /* any cleanup code */ }"
mechanism would be way cool).

If you write catch(std::exception&) { throw; }, then although you are still
forcing stack unwinding even if the app is going to terminate (which has
some mild implications for debugging / core dumps), at least you know that
you are only reacting to a synchronous exception, and there is every reason
to believe the program state is still valid and recoverable. If you can
guarantee that all your exceptions are std::exception based, this can be an
acceptable workaround if RAII and ScopeGuard are not feasible.
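
As a concrete illustration of that guideline (the template and its
rollback() member are invented for the example): the generic "clean up
and rethrow" wrapper reacts only to std::exception-derived exceptions,
so an OS exception that is never handled anywhere is not forced through
stack unwinding by this frame.

#include <exception>

template <class Operation, class Resource>
void apply_with_cleanup(Operation op, Resource& r)
{
    try {
        op(r);
    }
    catch (std::exception&) {   // synchronous, named C++ exceptions only
        r.rollback();           // invented cleanup hook for the example
        throw;
    }
    // Anything not derived from std::exception (e.g. a translated OS
    // exception) is not caught here, so this frame does not force
    // unwinding on it if it ultimately goes unhandled.
}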


> > so you just want to avoid intercepting it completely (and avoid
> > triggering unwinding / cleanup code in unstable
> > situations). Generally I think it's best to avoid trying to catch
> > something that you don't really know what it is / aren't expecting,
> > anyhow.
>
> That's a philosophical concern, not a practical one... unless the OS
> _forces_ you to deal with brave unwinding behavior for "asynchronous"
> faults where it's inappropriate.

How does it "force" anything if you specifically write catch(...)? ;-)

>
> > The value of the std::exception model is that, if all exceptions
> > emanate from a single known / named base class, you can always at
> > least know that you are only catching / expecting a C++ based
> > exception (even if you still don't know what it really is within that
> > domain..), so at least for stuff like templated code which must
> > "catch(everything) { cleanup; throw; }" you would be able to restrict
> > "everything" to only the subset of all synchronous C++ exceptions,
> > which is really what is intended with regard to C++ exception safety,
> > if I understand correctly.
>
> Well, some people legitimately want to roll the dice with the other
> exceptions. And your
> NOTHROW_REGION idea makes those uses even less theoretically unsound.

Um.. sorry I can't quite parse that last sentence fully.. is that a good
thing or a bad thing?

hys
-only 3 more days of work for the year!! :-D

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Alan Griffiths

unread,
Dec 17, 2002, 12:33:31 PM12/17/02
to
"Garry Lancaster" <glanc...@ntlworld.com> wrote in message news:<LjlL9.377$BR6.126110@newsfep2-gui>...

> David Abrahams:
> > Sad but true. We need to rely on our implementors not to do what
> > Microsoft did.
>
> No we don't. The MS behaviour is completely standards
> compliant and occasionally very useful.

The question isn't about compliance; it is about usefulness.

> There is a lot of confusing stuff being posted to this
> thread by people who seem to think exceptions on
> stack overflow create some kind of problem. This is
> simply untrue. Since theoretical arguments have so
> far led to such little light, I would ask anyone who thinks
> otherwise to post some standard-compliant C++ code
> that shows the problem they foresee. They should
> include a description of the behaviour they expect
> the code to exhibit on stack overflow. (I would think
> such a description would have to read "undefined",
> but it's not my example ;-)

The no-throw guarantee is important in that it ensures that any
changes to program state made in that region of code are atomic. This
cannot be assured if the environment causes an exception to occur in
spite of the programmer's intent.

What Microsoft does is legal but unhelpful.
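
A small illustration of why (the class is invented for the example): the
strong guarantee of copy-and-swap hangs entirely on the final swap being
genuinely no-throw, and an environment-injected exception in the middle
of it leaves the object half-swapped.

#include <string>

class Record {
    std::string name_;
    std::string value_;
public:
    void swap(Record& other) throw()        // the intended atomic commit step
    {
        name_.swap(other.name_);            // an exception landing between
        value_.swap(other.value_);          // these two lines breaks atomicity
    }
    Record& operator=(const Record& rhs)
    {
        Record tmp(rhs);    // may throw; *this is untouched so far
        swap(tmp);          // relied upon never to throw
        return *this;
    }
};
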
--
Alan Griffiths
http://www.octopull.demon.co.uk/

Sergey P. Derevyago

unread,
Dec 17, 2002, 4:07:17 PM12/17/02
to
James Kanze wrote:
> Most of the telephone applications I've worked on have had very long
> start up times.
BTW, why don't you use a pool of prestarted applications?
--
With all respect, Sergey. http://cpp3.virtualave.net/
mailto : ders at skeptik.net

Bob Bell

unread,
Dec 17, 2002, 4:26:39 PM12/17/02
to
"Hillel Y. Sims" <use...@phatbasset.com> wrote in message
news:<fLyL9.47296$a8.3...@news4.srv.hcvlny.cv.net>...

> In the sense of a C++ stack overflow exception mechanism, we would
> indeed want a specification for triggering those exceptions
> synchronously at well-known allocation points. Asynchronous exceptions
> are certainly much more difficult to handle; when they are triggered
> there is usually no way to know if the program state integrity is
> intact (except for random programmer-specified async-cancel-safe
> regions). The only way I can think of with C++ to absolutely only
> catch synchronous exceptions generically (in one single catch()
> handler) is via the std::exception hierarchy.

How do you know that asynchronous exceptions are not derived from
std::exception? Given that throwing exceptions asynchronously is part
of undefined behavior, it is perfectly allowable to derive such an
exception from std::exception.

> The significance of catch(...) { throw; } is that this construct
> _forces_ stack unwinding for all exception contexts, including
> unrecoverable asynchronous system exceptions, even when they are
> actually not handled at all by the outer code and the application is
> going to terminate() without otherwise triggering unwinding.

This is another good argument against throwing exceptions
asynchronously, not an argument against catch (...).

Bob Bell

Brett Gossage

unread,
Dec 18, 2002, 9:11:43 AM12/18/02
to
>
> I hope you'll accept a simple example in pseudocode instead:
>
> try {
>
> allocate new space in database file
> write new database record
> }
> catch(...)
> {
> reclaim space by restoring database to old size
> throw;
> }

No fair writing-in your own bugs...

   try {
       allocate new space in database file
       write new database record
   }
   catch( std::stack_overflow& err )
   {
       do nothing
       throw;
   }
   catch(...)
   {
       reclaim space by restoring database to old size
       throw;
   }

Now, if I'm doing thread-per-database-request, I can retry the transaction
with more stack.
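
A hedged sketch of that retry, using POSIX threads (the worker function,
its return convention, and the sizes are invented; on Win32 the
dwStackSize parameter of CreateThread plays the same role):

#include <pthread.h>
#include <cstddef>

extern "C" void* run_transaction(void* request);   // assumed worker, returns 0 on success

bool retry_with_larger_stack(void* request, std::size_t stack_bytes)
{
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, stack_bytes);  // e.g. several times the default

    pthread_t tid;
    bool ok = false;
    if (pthread_create(&tid, &attr, run_transaction, request) == 0) {
        void* result = 0;
        pthread_join(tid, &result);
        ok = (result == 0);
    }
    pthread_attr_destroy(&attr);
    return ok;
}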

Hillel Y. Sims

unread,
Dec 18, 2002, 9:15:23 AM12/18/02
to
"Bob Bell" <bel...@pacbell.net> wrote in message
news:c87c1cfb.02121...@posting.google.com...

> How do you know that asynchronous exceptions are not derived from
> std::exception? Given that throwing exceptions asynchronously is part
> of undefined behavior, it is perfectly allowable to derive such an
> exception from std::exception.

Yeah, I guess so.. Well then we are all totally screwed w/r/t hoping to even
try to write any kind of robust software? I know this is not the case on at
least Windows and VMS, anyhow..

> This is another good argument against throwing exceptions
> asynchronously, not an argument against catch (...).

Yeah maybe from a philosophical standpoint. I'm just trying to point out
that use of catch(...) has some subtle practical consequences at the OS
level (which can already throw async exceptions, and how soon do you think
it's likely to change?) which don't seem to be always totally obvious from a
'pure' C++ point of view. It has been demonstrated that there are a few
cases where it is actually desirable even in light of this to use catch(...)
(the lighting controller software, for example), but in the vast majority of
cases it can lead to program instability and is not desirable. Well,
assuming your OS is not completely sadistic, you can at least rely on the
std::exception hierarchy as being synchronous and avoid these issues. (also
wasn't part of the point of this thread whether stack overflow exceptions
are synchronous or asynchronous?)

hys

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Brett Gossage

unread,
Dec 18, 2002, 9:17:33 AM12/18/02
to
> The no-throw guarantee is important in that it ensures that any
> changes to program state made in that region of code are atomic.

No, it does not. It only promises not to leak an exception "so help me kill
myself."

> This cannot be assured if the environment causes an exception to occur in
> spite of the programmer's intent.
>

Nothing prevents an exception from being thrown inside any no-throw region.

David Abrahams

unread,
Dec 18, 2002, 9:22:58 AM12/18/02
to
"Hillel Y. Sims" <use...@phatbasset.com> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:uwumae...@boost-consulting.com...
> >
> > Q: What can you ever do that's safe when you have an "unrecoverable
> > exception"?
> >
> > A: Nothing, that's what makes it unrecoverable.
> [..]
> > Sure, I know it's allowable behavior. What I think is heinous is not
> > that they're non-C++ exceptions, it's that they're "asynchronous" in
> > some sense and can emanate from regions that were supposed to be
> > nothrow. I don't happen to agree that the lack of context for the
> > throw or information about the failure is an important problem for
> > catch(...), because most catch(...) blocks should include "throw;"
> > anyway. When they don't, it's up to the writer to know what kinds of
> > exceptions she might be eating on that platform and do something
> > "appropriate".
>
> In the sense of a C++ stack overflow exception mechanism, we would
> indeed want a specification for triggering those exceptions
> synchronously at well-known allocation points. Asynchronous
> exceptions are certainly much more difficult to handle; when they
> are triggered there is usually no way to know if the program state
> integrity is intact (except for random programmer-specified
> async-cancel-safe regions).

Right.

> The only way I can think of with C++ to absolutely only catch
> synchronous exceptions generically (in one single catch() handler)
> is via the std::exception hierarchy.

As pointed out by Bob Bell, there's no reason that asynchronous
exceptions couldn't be derived from std::exception. I realize that so
far, in practice, they are not. But implementors have already gone so
far as to throw catch-able C++ exceptions asynchronously; what makes
you think they'll stop there?

> catch(...) catches everything including async exceptions,

In practice, on some platforms.

> which could represent a corrupted program state, and therefore in
> the vast majority of circumstances (except for the tiny few
> applications where it really does make most sense to continue even
> with potentially corrupted data as long as possible) it is best to
> avoid them and let them dump the application.

Sure.

> The significance of catch(...) { throw; } is that this construct _forces_
> stack unwinding for all exception contexts, including unrecoverable
> asynchronous system exceptions,

If the implementation "helpfully" turns them into C++ exceptions for
you, yes.

> even when they are actually not handled at all by the outer code and
> the application is going to terminate() without otherwise triggering
> unwinding. The destructors are then executing with a potentially
> corrupted program state, and random data corruption may (or may not)
> result.

Agreed.

> In the sense of the "you can do nothing safe" philosophy, it's
> usually going to be best to avoid triggering stack unwinding
> inadvertently in these circumstances,

Yes! It would be best if applications had to explicitly add a handler
which threw an exception in these cases, where the platform can
support it and the application writer wants that behavior.

> unless it is actually directed by an outer layer on the call stack
> which explicitly handles that exception (even if it's a catch(...) {
> /*swallow*/ } for one of those don't-care-never-stop apps). I
> believe the generally more appropriate strategy for "I must clean up
> this state on any exceptional exit" would be to only trigger cleanup
> code if the exception is actually handled by an outer layer --
> currently this is done via RAII or you can use ScopeGuard for some
> flexibility wrapping function calls (some kind of future
> "on_unwinding { /* any cleanup code */ }" mechanism would be way
> cool).

15 Exception handling 15.5.1 The terminate() function

2 In such cases, void terminate(); is called (18.6.3). In the
situation where no matching handler is found, it is
implementation-defined whether or not the stack is unwound before
terminate() is called.

So RAII classes don't portably prevent cleanup from occurring when
there's no matching catch block.

> If you write catch(std::exception&) { throw; }, then although you
> are still forcing stack unwinding even if the app is going to
> terminate (which has some mild implications for debugging / core
> dumps), at least you know that you are only reacting to a
> synchronous exception,

In practice, on some platforms.

> and there is every reason to believe the program state is still
> valid and recoverable. If you can guarantee that all your exceptions
> are std::exception based,

I can't, since I write libraries.

> this can be an acceptable workaround if RAII and ScopeGuard are not
> feasible.

I agree. It's an unfortunate but necessary approach for some
environments. I have no problem with your coding guideline, so long
as we acknowledge loud and clear that _hardcoding_ asynchronous events
to throw C++ exceptions is a design mistake which shouldn't be
repeated. I want to make sure that this practice isn't perpetuated.

> > > so you just want to avoid intercepting it completely (and avoid
> > > triggering unwinding / cleanup code in unstable
> > > situations). Generally I think it's best to avoid trying to catch
> > > something that you don't really know what it is / aren't expecting,
> > > anyhow.
> >
> > That's a philosophical concern, not a practical one... unless the OS
> > _forces_ you to deal with brave unwinding behavior for "asynchronous"
> > faults where it's inappropriate.
>
> How does it "force" anything if you specifically write catch(...)? ;-)

As I mentioned, the C++ implementation can unwind even if no matching
handler is found.

> > > The value of the std::exception model is that, if all
> > > exceptions emanate from a single known / named base class, you
> > > can always at least know that you are only catching / expecting
> > > a C++ based exception (even if you still don't know what it
> > > really is within that domain..), so at least for stuff like
> > > templated code which must "catch(everything) { cleanup; throw;
> > > }" you would be able to restrict "everything" to only the
> > > subset of all synchronous C++ exceptions, which is really what
> > > is intended with regard to C++ exception safety, if I
> > > understand correctly.
> >
> > Well, some people legitimately want to roll the dice with the
> > other exceptions. And your NOTHROW_REGION idea makes those uses
> > even less theoretically unsound.
>
> Um.. sorry I can't quite parse that last sentence fully.. is that a good
> thing or a bad thing?

Sorry, double negative. It's a good thing. It makes "dice rolling"
more theoretically sound, since you can (with tedious attention to
detail) ensure that you never mistakenly think you've recovered from
an unrecoverable condition.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Craig Powers

unread,
Dec 18, 2002, 10:30:46 AM12/18/02
to
David Abrahams wrote:

>
> ka...@gabi-soft.de (James Kanze) writes:
>
> > David Abrahams <da...@boost-consulting.com> wrote in message
> > news:<uhedkv...@boost-consulting.com>...

> > : turning stack overflow into a C++ exception.
>
> Oh, I am pretty sure that this happens on Win32/VC++. It would be
> consistent with other EH madness on that platform.

OS exceptions (including stack overflow) can be made to turn into C++
exceptions, but it's not the default behavior. (Except for problems
that arise when one uses catch(...).)
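
The mechanism being referred to, sketched for MSVC (requires compiling
with /EHa; this is a compiler extension, and the exception class below is
invented): _set_se_translator installs a per-thread hook that turns
structured exceptions, including EXCEPTION_STACK_OVERFLOW, into named C++
exceptions.

#include <windows.h>
#include <eh.h>
#include <stdexcept>

struct se_exception : std::runtime_error {
    unsigned int code;
    explicit se_exception(unsigned int c)
        : std::runtime_error("structured exception"), code(c) {}
};

void translate_se(unsigned int code, EXCEPTION_POINTERS*)
{
    // Very little stack is left when this fires for a stack overflow,
    // so keep this path minimal.
    throw se_exception(code);
}

int main()
{
    _set_se_translator(translate_se);
    try {
        // ... code that might overflow the stack ...
    }
    catch (const se_exception& e) {
        if (e.code == EXCEPTION_STACK_OVERFLOW) {
            // report and exit; the guard page is gone at this point
        }
    }
    return 0;
}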

Garry Lancaster

unread,
Dec 18, 2002, 2:18:02 PM12/18/02
to
Alan Griffiths:

>> The no-throw guarantee is important in that it ensures that any
>> changes to program state made in that region of code are atomic.

Brett Gossage:


> No, it does not. It only promises not to leak an exception "so help me
> kill myself."

Exactly. Glad someone gets it. To put what you
wrote more formally: exception-safety guarantees, like
all program guarantees, do not hold in the presence
of undefined behaviour, e.g. on stack overflow.

The presence of undefined behaviour is such an
obvious "get-out" clause for a guarantee, that it is
usually omitted from any relevant discussion. But
when we are actually discussing things like stack
overflow it becomes important not to forget it.

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

Garry Lancaster

unread,
Dec 18, 2002, 7:19:00 PM12/18/02
to
> > David Abrahams:
> >> Sad but true. We need to rely on our implementors not to do what
> >> Microsoft did.

Garry Lancaster:


> > No we don't. The MS behaviour is completely standards
> > compliant and occasionally very useful.
> >
> > There is a lot of confusing stuff being posted to this
> > thread by people who seem to think exceptions on
> > stack overflow create some kind of problem. This is
> > simply untrue. Since theoretical arguments have so
> > far led to such little light, I would ask anyone who thinks
> > otherwise to post some standard-compliant C++ code
> > that shows the problem they foresee.

David Abrahams:


> I hope you'll accept a simple example in pseudocode instead:

Sure.

> try {
>
> allocate new space in database file
> write new database record
> }
> catch(...)
> {
> reclaim space by restoring database to old size
> throw;
> }
>
> [don't give me a hard time about catch(...) here, it's for
> illustrative purposes. Use an RAII class if you prefer]

Would I be so pedantic? (Actually don't answer that...;-)

> But wait: the stack overflow happened in the middle of a write
> operation in the filesystem, leaving the file descriptor record in an
> inconsistent state. The operation in the catch(...) block which
> attempts to restore the database to the old size by passing a pointer
> to the descriptor record to some OS routine now walks on some part of
> the disk it doesn't own, or perhaps garbles good data in the database.

> > They should include a description of the behaviour they expect the
> > code to exhibit on stack overflow. (I would think such a description
> > would have to read "undefined", but it's not my example ;-)

> I don't expect any particular behavior on an arbitrary
> standards-conforming C++ system.

Exactly. It's undefined. That's crucial.

> However, if the system is capable of
> detecting and responding to stack overflow I expect to be able to
> select immediate termination or some other non-unwinding response that
> isn't going to corrupt my database.

In other words, you expect the undefined behaviour
to take certain forms but not others. It is this expectation
that is in error, not any particular compiler or OS.

The example doesn't have any problem on a system
that throws exceptions on stack overflow that isn't
also present on the "arbitrary standards-conforming
C++ system".

Systems that throw exceptions on stack overflow
are sometimes better, and certainly never any
worse, than those that just leave the behaviour
undefined, as it is in the standard. In particular,
they do not change what is meant by the no-throw
guarantee.

(And, yes, I know that you invented the exception
safety guarantees, so you can make them mean
anything you like I suppose, but am assuming that
once you have reflected upon the matter you will
realise that it does not make sense to expect them
to hold in the face of undefined behaviour.)

Kind regards

Garry Lancaster
Codemill Ltd
Visit our web site at http://www.codemill.net

Vladimir Kouznetsov

unread,
Dec 19, 2002, 6:28:02 AM12/19/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message news:u4r9bu...@boost-consulting.com...

> As pointed out by Bob Bell, there's no reason that asynchronous
> exceptions couldn't be derived from std::exception. I realize that so
> far, in practice, they are not. But implementors have already gone so
> far as to throw catch-able C++ exceptions asynchronously; what makes
> you think they'll stop there?

Now when everybody's agreed that asynchronous exceptions are
unrecoverable, can anybody explain why the stack overflow exception
cannot be thrown synchronously? And is there any evidence that
Microsoft throws them asynchronously? By asynchronous I mean that they
can leave language constructs in progress unfinished which as I
understand makes them unrecoverable.

thanks,
v

David Abrahams

unread,
Dec 19, 2002, 3:04:24 PM12/19/02
to
"Garry Lancaster" <glanc...@ntlworld.com> writes:

> Would I be so pedantic? (Actually don't answer that...;-)
>

>> I don't expect any particular behavior on an arbitrary
>> standards-conforming C++ system.
>
> Exactly. It's undefined. That's crucial.
>
>> However, if the system is capable of
>> detecting and responding to stack overflow I expect to be able to
>> select immediate termination or some other non-unwinding response that
>> isn't going to corrupt my database.
>
> In other words, you expect the undefined behaviour
> to take certain forms but not others. It is this expectation
> that is in error, not any particular compiler or OS.

See, this is that pedantry you mentioned. Maybe I used the wrong word
when I said "expect". I know better: real implementations often do
something else. What I mean is that it's poor QOI to do it that way.

Yes, I realize that even on a system where there's no intentional way
to deal with stack overflow, an exception or something that looks like
one could theoretically happen. In practice, though, it won't. Some
other corruption could happen silently. On a system without virtual
memory or separate stack/heap areas, it probably will. On a system
with those facilities, the implementor can usually choose a behavior.
Forced stack unwinding is a very poor choice, because it tends to mask
the cause of errors and sometimes makes things worse.

> Systems that throw exceptions on stack overflow
> are sometimes better, and certainly never any
> worse, than those that just leave the behaviour
> undefined, as it is in the standard. In particular,
> they do not change what is meant by the no-throw
> guarantee.
>
> (And, yes, I know that you invented the exception
> safety guarantees, so you can make them mean
> anything you like I suppose,

Would I be so self-serving? (Actually don't answer that...;-)

> but am assuming that once you have reflected upon the matter you
> will realise that it does not make sense to expect them to hold in
> the face of undefined behaviour.)

Please. I had already reflected long and hard on the matter before
posting. I know the meaning of undefined behavior. Being able to say
anything sensible about EH in the standard depends on understanding
that.

The original poster, IIRC, was outraged that the C++ standard doesn't
require stack unwinding on overflow. My first point was that it would
be very bad to make that the defined behavior. My second point is
that it's poor QOI to make that an intentional implementation of
undefined behavior unless there's a way to turn it off. I'd actually
prefer it if you had to turn it on explicitly.

That's all I'm saying.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 19, 2002, 3:05:56 PM12/19/02
to
vladimir....@ngrain.com (Vladimir Kouznetsov) writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message news:u4r9bu...@boost-consulting.com...
> > As pointed out by Bob Bell, there's no reason that asynchronous
> > exceptions couldn't be derived from std::exception. I realize that so
> > far, in practice, they are not. But implementors have already gone so
> > far as to throw catch-able C++ exceptions asynchronously; what makes
> > you think they'll stop there?
>
> Now when everybody's agreed that asynchronous exceptions are
> unrecoverable, can anybody explain why the stack overflow exception
> cannot be thrown synchronously?

They can.

The problem is that stack overflows can happen "asynchronously". What
do you do in that case? And how do you know which case it is when the
overflow occurs?

> And if there are any evidences that Microsoft throws them
> asynchronously? By asynchronous I mean that they can leave language
> constructs in progress unfinished which as I understand makes them
> unrecoverable.

What we mean by "asynchronous" is slightly different. For the
purposes of this discussion, an asynchronous event is one that can
happen in the middle of any construct which would not throw an
exception unless undefined behavior is invoked. For example:

void f() {}

int main()
{
    f();
}

f does not throw exceptions. However, a stack overflow "violates
implementation limits" and therefore invokes undefined behavior. If a
stack overflow occurs when f is invoked, that event is "asynchronous",
and an exception thrown at that point by the implementation would be an
"asynchronous exception".

HTH,


--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Garry Lancaster

unread,
Dec 20, 2002, 10:05:34 AM12/20/02
to
Garry Lancaster:

> > Would I be so pedantic? (Actually don't answer that...;-)

David Abrahams:


> >> I don't expect any particular behavior on an arbitrary
> >> standards-conforming C++ system.

> > Exactly. It's undefined. That's crucial.

> >> However, if the system is capable of
> >> detecting and responding to stack overflow I expect to be able to
> >> select immediate termination or some other non-unwinding response
that
> >> isn't going to corrupt my database.
> >
> > In other words, you expect the undefined behaviour
> > to take certain forms but not others. It is this expectation
> > that is in error, not any particular compiler or OS.

David Abrahams:


> See, this is that pedantry you mentioned.

Actually, I was referring to a completely different form
of pedantry ;-)

> Maybe I used the wrong word
> when I said "expect". I know better: real implementations often do
> something else. What I mean is that it's poor QOI to do it that way.

Well, it being "poor QOI" is your opinion and you are entitled
to it. I don't agree and I explain why below.

> Yes, I realize that even on a system where there's no intentional way
> to deal with stack overflow, an exception or something that looks like
> one could theoretically happen. In practice, though, it won't.

It might. We just don't know.

> Some
> other corruption could happen silently. On a system without virtual
> memory or separate stack/heap areas, it probably will.

Yup.

> On a system
> with those facilities, the implementor can usually choose a behavior.
> Forced stack unwinding is a very poor choice, because it tends to mask
> the cause of errors and sometimes makes things worse.

Compared to what? Those basic systems without virtual
memory or separate stack/heap areas?

With respect to masking the cause of errors, we get an
exception which most debuggers will notify us of,
regardless of whether it gets caught later. The exception
should be easily identifiable as being caused by stack
overflow. If we are not running under a debugger, the
program will either terminate due to an unhandled
exception, which will be a signal to us to debug it, or it
will just start behaving oddly. Only in this last case are
we no better off than with our basic system. We are
never worse off.

With respect to making things worse, I agree that stack
unwinding "sometimes makes things worse" than they
would be with other types of undefined behaviour. However,
sometimes things will be the same, sometimes better.
This comment has no "diagnostic value". It doesn't mean
anything. It doesn't help.

However, for some code, specifically designed to make
use of this compiler feature, throwing an exception on
stack overflow allows you to recover and continue
the normal execution path, with completely well-defined
behaviour (it's still undefined according to the standard, of
course, but the implementation has chosen to define it).
Which is presumably why this way of doing things was
chosen, and yet another reason why it is better than
leaving the behaviour undefined, and why I believe it is
unfair to label it "poor QOI".
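As a rough illustration of the kind of code Garry has in mind, here is
a sketch that assumes (a) an implementation which throws a C++
exception when the stack cannot be grown and (b) that the exception
can be caught after unwinding back to a shallow frame. Nothing here is
taken from a real compiler, and a real system may need extra work
(restoring a guard page, for instance) before the stack can safely
overflow again.

#include <iostream>

// Deliberately unbounded recursion; the local array makes sure each
// frame consumes real stack space.
int deep(int depth)
{
    char pad[256];
    pad[0] = static_cast<char>(depth);
    if (depth == 0)
        return pad[0];
    return deep(depth - 1) + 1;
}

int main()
{
    try {
        std::cout << deep(100000000) << '\n';  // will likely overflow
    }
    catch (...) {   // the implementation is assumed to map the
                    // overflow to a catchable exception
        std::cerr << "computation abandoned: out of stack\n";
    }
    // back here the stack is shallow again, so the program can
    // continue on its normal path -- the "recover and continue"
    // behaviour described above
    return 0;
}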

If anyone can devise a simple, efficient system of handling
stack overflow that allows continuation, avoids corruption
and never masks the cause of errors, that's great. My feeling
is that this is not possible: something's got to give. It's
very easy for us to criticize a system when we don't have to
come up with something better.

Furthermore, this is a very minor compiler feature of
limited interest to most programmers: I'm sure most
people would rather their compiler vendors concentrated
their limited resources on other more pressing areas of
concern.

> > Systems that throw exceptions on stack overflow
> > are sometimes better, and certainly never any
> > worse, than those that just leave the behaviour
> > undefined, as it is in the standard. In particular,
> > they do not change what is meant by the no-throw
> > guarantee.
> >
> > (And, yes, I know that you invented the exception
> > safety guarantees, so you can make them mean
> > anything you like I suppose,
>
> Would I be so self-serving? (Actually don't answer that...;-)
>
> > but am assuming that once you have reflected upon the matter you
> > will realise that it does not make sense to expect them to hold in
> > the face of undefined behaviour.)
>
> Please. I had already reflected long and hard on the matter before
> posting. I know the meaning of undefined behavior. Being able to say
> anything sensible about EH in the standard depends on understanding
> that.

Anyone can be wrong occasionally, Dave. Even you or I.

> The original poster, IIRC, was outraged that the C++ standard doesn't
> require stack unwinding on overflow. My first point was that it would
> be very bad to make that the defined behavior.

Agreed. On some machines the performance hit of defining stack
overflow behaviour would be unacceptable to some users.

> My second point is
> that it's poor QOI to make that an intentional implementation of
> undefined behavior unless there's a way to turn it off. I'd actually
> prefer it if you had to turn it on explicitly.

And you are entitled to your opinion and preference.

> That's all I'm saying.

If so, relatively uncontroversial.

However, it seemed to me that you (and not just you) were
saying something else, but I'll ask the question straight:
are you saying that systems that throw exceptions on
stack overflow somehow cannot support the no-throw
exception safety guarantee?

This is the real issue.

Kind regards

Garry Lancaster

David Abrahams

unread,
Dec 20, 2002, 6:59:43 PM12/20/02
to
"Garry Lancaster" <glanc...@ntlworld.com> writes:

>> Maybe I used the wrong word
>> when I said "expect". I know better: real implementations often do
>> something else. What I mean is that it's poor QOI to do it that way.
>
> Well, it being "poor QOI" is your opinion and you are entitled
> to it.

Thanks.

>> Yes, I realize that even on a system where there's no intentional way
>> to deal with stack overflow, an exception or something that looks
>> like one could theoretically happen. In practice, though, it won't.
>
> It might. We just don't know.

Sure, in theory it could, and it would be standards-conforming. Have
you ever seen it, on a system which wasn't specifically designed to
throw an exception on stack overflow? Has anyone?

>> On a system with those facilities, the implementor can usually
>> choose a behavior. Forced stack unwinding is a very poor choice,
>> because it tends to mask the cause of errors and sometimes makes
>> things worse.
>
> Compared to what? Those basic systems without virtual
> memory or separate stack/heap areas?

No, compared to a system which lets the programmer install his own
"detected undefined behavior handlers", for example.

> With respect to masking the cause of errors, we get an
> exception which most debuggers will notify us of,
> regardless of whether it gets caught later.

Lots of people have trouble with this scenario, because their
application throws exceptions often enough that stopping on throw is a
nuisance or makes debugging completely impractical.

> The exception should be easily identifiable as being caused by stack
> overflow.

It might. We just don't know ;-)

> If we are not running under a debugger, the program will either
> terminate due to an unhandled exception, which will be a signal to
> us to debug it

And can we hope the system has nice QOI which gives us a core dump or
equivalent in this case (cross fingers).

> or it will just start behaving oddly. Only in this last case are we
> no better off than with our basic system. We are never worse off.

Yup.

> With respect to making things worse, I agree that stack
> unwinding "sometimes makes things worse" than they
> would be with other types of undefined behaviour. However,
> sometimes things will be the same, sometimes better.

Yup.

> This comment has no "diagnostic value". It doesn't mean
> anything. It doesn't help.

Sorry, it was meant to be helpful. My point is that the application
programmer can usually know which response is most appropriate.

> However, for some code, specifically designed to make

stretches of-----^


> use of this compiler feature, throwing an exception on
> stack overflow allows you to recover and continue
> the normal execution path, with completely well-defined
> behaviour (it's still undefined according to the standard, of
> course, but the implementation has chosen to define it).

Yup.

> Which is presumably why this way of doing things was
> chosen, and yet another reason why it is better than
> leaving the behaviour undefined, and why I believe it is
> unfair to label it "poor QOI".

It probably _is_ unfair. It's still my opinion.

> If anyone can devise a simple, efficient system of handling
> stack overflow that allows continuation, avoids corruption
> and never masks the cause of errors, that's great. My feeling
> is that this is not possible: something's got to give. It's
> very easy for us to criticize a system when we don't have to
> come up with something better.

It's trivial to come up with something better. All of these systems
have a low-level handler for the actual problem, which eventually
calls the exception-throwing code. All you have to do is make the
handler's behavior customizable.
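A sketch of what "customizable" could mean, modelled on
std::set_new_handler and std::set_terminate. Every name below is
hypothetical and the trap handler itself is only simulated, but it
shows how the same low-level mechanism could serve both camps: abort
by default, unwind only if the program explicitly opts in.

#include <cstdlib>

// Hypothetical vendor extension -- none of these names are real.
typedef void (*overflow_handler)();

static overflow_handler current_handler = 0;

overflow_handler set_stack_overflow_handler(overflow_handler h)
{
    overflow_handler old = current_handler;
    current_handler = h;
    return old;
}

// What the implementation's low-level trap handler could do instead
// of unconditionally throwing (simulated here, never actually
// triggered in this sketch):
void on_stack_overflow_trap()
{
    if (current_handler)
        current_handler();   // programmer-chosen response
    else
        std::abort();        // conservative default: no unwinding
}

struct stack_overflow {};                    // thrown only on opt-in
void throw_on_overflow() { throw stack_overflow(); }

int main()
{
    set_stack_overflow_handler(&throw_on_overflow);  // explicit opt-in
    // without this call the default would be an immediate abort
    return 0;
}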

> Furthermore, this is a very minor compiler feature of
> limited interest to most programmers: I'm sure most
> people would rather their compiler vendors concentrated
> their limited resources on other more pressing areas of
> concern.

Then why the long thread here?

>> > but am assuming that once you have reflected upon the matter you
>> > will realise that it does not make sense to expect them to hold in
>> > the face of undefined behaviour.)
>>
>> Please. I had already reflected long and hard on the matter before
>> posting. I know the meaning of undefined behavior. Being able to
>> say anything sensible about EH in the standard depends on understanding
>> that.
>
> Anyone can be wrong occasionally, Dave. Even you or I.

Thanks for the reminder.

>> The original poster, IIRC, was outraged that the C++ standard
>> doesn't require stack unwinding on overflow. My first point was
>> that it would be very bad to make that the defined behavior.
>
> Agreed. On some machines the performance hit of defining stack
> overflow behaviour would be unacceptable to some users.

I think there are other reasons, too.

>> My second point is that it's poor QOI to make that an intentional
>> implementation of undefined behavior unless there's a way to turn
>> it off. I'd actually prefer it if you had to turn it on
>> explicitly.
>
> And you are entitled to your opinion and preference.

Again, thanks.

>> That's all I'm saying.
>
> If so, relatively uncontroversial.
>
> However, it seemed to me that you (and not just you) were saying
> something else, but I'll ask the question straight: are you saying
> that systems that throw exceptions on stack overflow somehow cannot
> support the no-throw exception safety guarantee?

There are two ways to look at this:

1. Any stack overflow exception is undefined behavior, so it doesn't
count, and the nothrow guarantee is just as valid as ever.

2. Within the context of the particular implementation, if stack
overflow is documented as throwing exceptions, and there is not
very careful documentation of where stack overflows can occur, then
they cannot support the nothrow guarantee. And even if you could
define these restricted circumstances, they would probably not be
useful.

However, now we're dancing in a very gray legal area between the
standard and an implementation. I'm really not comfortable in this
part of the ballroom.

Think of what I've been saying about the nothrow guarantee this way:
if you want an exception from stack overflow, remember that in general
you can't count on the state of your system after this happens. It
may increase the overall robustness of your system, but it's not in
any sense a robust mechanism, because it may violate an
otherwise-legitimate nothrow guarantee that some code was counting on.
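A concrete example of a nothrow guarantee that other code counts on:
the commit phase of the usual copy-and-swap assignment. This is only a
sketch of the general idiom, not code from any of the posters.

#include <vector>

class Widget
{
    std::vector<int>  xs_;
    std::vector<char> ys_;
public:
    Widget& operator=(const Widget& rhs)
    {
        Widget tmp(rhs);        // may throw; *this still untouched
        xs_.swap(tmp.xs_);      // commit phase: both swaps are
        ys_.swap(tmp.ys_);      // relied upon never to throw
        return *this;
    }
};

// If the implementation may throw on the *call* to the second swap
// (the stack happening to run out at exactly that point), the object
// is left with new xs_ and old ys_: the strong guarantee the caller
// was promised silently evaporates, even though no line above "can
// throw" in the ordinary sense.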

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Todd Greer

unread,
Dec 21, 2002, 6:43:39 PM12/21/02
to
I realize that this discussion is about throwing on stack overflow, rather than
when dereferencing a null pointer (which is what I'm about to talk about), but I
think the situations are mostly the same.

In a company in which I recently worked, the fact that "catch(...)" would catch
some results of undefined behaviour, such as access violations, caused problems
on several occasions. There were several instances of very long debugging
sessions because a program had dereferenced a bad pointer, but continued
execution in an unknown state long enough to badly corrupt data later. While
the long hours spent debugging what should have been a simple problem had the
most immediate impact, the fact of programs continuing execution in completely
unknown states was most worrisome.
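The failure mode Todd describes can be sketched like this; it assumes
an implementation (for example MSVC with structured exceptions mapped
onto C++ exceptions) on which catch(...) catches an access violation.
On a platform without that mapping the same program simply crashes at
the bad dereference, which is exactly the behaviour his group wanted.

#include <iostream>

struct Record { int id; };

int read_id(const Record* r)
{
    return r->id;               // r may be null -- an ordinary bug
}

int get_id_somehow()
{
    try {
        Record* r = 0;          // the bad pointer
        return read_id(r);      // access violation raised here...
    }
    catch (...) {
        // ...caught and swallowed on such an implementation; the
        // program limps on in an unknown state, and the crash that
        // would have pointed straight at the bug never happens
        return -1;
    }
}

int main()
{
    std::cout << get_id_somehow() << '\n';
    // later code now operates on whatever state the fault left behind
}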

As a result of this, my group adopted a coding convention that completely banned
"catch(...)", with no exceptions (I really didn't intend that as a pun). It
also resulted in much internal campaigning to get other groups to stop using it,
because people were getting nailed by other people's "catch(...)"s.

Everyone understands that this was completely compliant behaviour. What was
wanted was not, as was suggested earlier, for the behaviour to remain undefined
by the platform; what was wanted was either for it to be defined as crashing the
program, or for it to be defined to only be caught by something clearly
implementation-specific like "catch(__segfault)".

BTW, it is possible, at least in the access violation case, to customize the
behaviour, but this would have required a level of intracompany agreement that
was not present. I would never deny the potential value of this behaviour as an
option, but our lives would have been much easier if it was not the default in
this case. There would have been fewer bugs shipped to customers, less time
spent bugtracking, and less time spent harassing others into changing their
code.
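For reference, the customization Todd mentions is commonly done on
that platform with _set_se_translator (MSVC-specific, requires
asynchronous exception handling, i.e. /EHa, and applies per thread):
the translator turns structured exceptions into a distinct C++ type,
so a caught access violation is at least identifiable. This is a
sketch of that mechanism, not Todd's actual code.

#include <windows.h>
#include <eh.h>
#include <iostream>

struct win32_structured_exception
{
    unsigned int code;
    explicit win32_structured_exception(unsigned int c) : code(c) {}
};

void __cdecl translate_se(unsigned int code, EXCEPTION_POINTERS*)
{
    throw win32_structured_exception(code);
}

int main()
{
    _set_se_translator(&translate_se);
    try {
        int* p = 0;
        std::cout << *p;                       // access violation
    }
    catch (const win32_structured_exception& e) {
        std::cerr << "structured exception 0x" << std::hex
                  << e.code << ", terminating\n";
        return 1;                              // fail loudly, on purpose
    }
    return 0;
}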

--
Todd Greer <cl...@tsgreer.com>

Vladimir Kouznetsov

unread,
Dec 21, 2002, 6:57:33 PM12/21/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message news:un0n2r...@boost-consulting.com...

> What we mean by "asynchronous" is slightly different. For the
> purposes of this discussion, an asynchronous event is one that can
> happen in the middle of any construct which would not throw an
> exception unless undefined behavior is invoked. For example:
>
> void f() {}
> int main()
> {
> f();
> }
>
> f does not throw exceptions. However, a stack overflow "violates
> implementation limits" and therefore invokes undefined behavior. If a
> stack overflow occurs when f is invoked, that event is "asynchronous",
> and an exception throw at that point by the implementation would be an
> "asynchronous exception".

Aha! Thank you for your answer. I guess now I understand what you
mean. Please correct me if I'm wrong. You are saying that in the presence
of "asynchronous" events one cannot write code that provides
exception safety guarantees. All the guarantees are off then and it's
better to die, because there is no way to recover:
class Foo {
    char* p;
    void setP(char* q) throw() {
        p = q;
    }
public:
    Foo(char* q): p(q) {}
    ~Foo() {
        delete p;
    }
    void set(char* q) {
        delete p;
        setP(q); // if this throws an "async" exception it's not
                 // safe to destroy the object
    }
};
The reason of that is that the behavior is undefined when stack
overflow occurs. By the way, I couldn't find where that is stated. Is
it undefined because it's not mentioned at all?
What if it were merely unspecified in this case? I think it is still
possible to write safe code in this case, even exception safe code,
although probably in a more tedious way. And then classes could
provide even stricter guarantees - for stack overflow, assuming
that it can only happen in the function prologue. Or does that just not
make any sense? I think it would be very unfortunate if my debugger
died and buried the debuggee with it just because of a stack overflow
when I wanted to evaluate some expression.

thanks,
v

David Abrahams

unread,
Dec 22, 2002, 10:48:24 AM12/22/02
to
Todd Greer <todd...@tsgreer.com> writes:

> I realize that this discussion is about throwing on stack overflow,
> rather than when dereferencing a null pointer (which is what I'm about
> to talk about), but I think the situations are mostly the same.

Both are undefined behavior.

> In a company in which I recently worked, the fact that "catch(...)"
> would catch some results of undefined behaviour, such as access
> violations, caused problems on several occasions. There were several
> instances of very long debugging sessions because a program had
> dereferenced a bad pointer, but continued execution in an unknown
> state long enough to badly corrupt data later. While the long hours
> spent debugging what should have been a simple problem had the most
> immediate impact, the fact of programs continuing execution in
> completely unknown states was most worrisome.

That's one reason I'm saying that it's poor QOI to force this
behavior.

> As a result of this, my group adopted a coding convention that
> completely banned "catch(...)", with no exceptions (I really didn't
> intend that as a pun). It also resulted in much internal campaigning
> to get other groups to stop using it, because people were getting
> nailed by other people's "catch(...)"s.

It's a very unfortunate thing to have to do. Practically speaking,
it's not a bad rule (at least not for debug builds) because
"asynchronous" C++ exceptions seem to be somewhat prevalent. There
should be a corollary that your company president has to complain
loudly to your compiler vendor and threaten to switch before adopting
this rule though!


>
> Everyone understands that this was completely compliant behaviour.
> What was wanted was not, as was suggested earlier, for the behaviour
> to remain undefined by the platform; what was wanted was either for it
> to be defined as crashing the program, or for it to be defined to only
> be caught by something clearly implementation-specific like
> "catch(__segfault)".
>
> BTW, it is possible, at least in the access violation case, to
> customize the behaviour, but this would have required a level of
> intracompany agreement that was not present.

Wow, it was easier to ban catch(...)? What did you do about your
standard library implementation?

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 22, 2002, 5:58:44 PM12/22/02
to
vladimir....@ngrain.com (Vladimir Kouznetsov) writes:

> Aha! Thank you for your answer. I guess now I understand what you
> mean. Please correct me if I'm wrong. You are saying that in the presence
> of "asynchronous" events one cannot write code that provides
> exception safety guarantees.

Close. It's asynchronous exceptions, not asynchronous events in and
of themselves, that cause the problem.

> All the guarantees are off then and it's better to die,

often-------------------------------------^

It might be better to die than to have no choice about the response to
the asynchronous event, especially if you have to debug the program.

> because there is no way to recover:

> class Foo {
>     char* p;
>     void setP(char* q) throw() {
>         p = q;
>     }
> public:
>     Foo(char* q): p(q) {}
>     ~Foo() {
>         delete p;
>     }
>     void set(char* q) {
>         delete p;
>         setP(q); // if this throws an "async" exception it's not
>                  // safe to destroy the object
>     }
> };

That's one example, yes.

> The reason of that is that the behavior is undefined when stack
> overflow occurs.

The reason of what?

> By the way, I couldn't find where that is stated. Is it undefined
> because it's not mentioned at all?

It's not dealt with explicitly in the standard because the
implementation isn't required to have a stack. However, stack
overflow falls under the category of "exceeding implementation
limits", a catch-all that could be read to allow almost anything to
invoke undefined behavior.

> What if it were merely unspecified in this case? I think it is still
> possible to write safe code in this case, even exception safe code,
> although probably in a more tedious way.

I don't see how changing it to unspecified helps.

> And then classes could provide even stricter guarantees - for
> stack overflow, assuming that it can only happen in the function
> prologue. Or does that just not make any sense?

Probably not. Think of all the function calls you take for granted
will not throw. operator delete() is one example.


--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Hillel Y. Sims

unread,
Dec 22, 2002, 6:57:39 PM12/22/02
to
"David Abrahams" <da...@boost-consulting.com> wrote in message
news:u3coqc...@boost-consulting.com...
> Todd Greer <todd...@tsgreer.com> writes:

> > As a result of this, my group adopted a coding convention that
> > completely banned "catch(...)", with no exceptions (I really didn't
> > intend that as a pun). It also resulted in much internal campaigning
> > to get other groups to stop using it, because people were getting
> > nailed by other people's "catch(...)"s.
>
> It's a very unfortunate thing to have to do.

No, it's not... ;-) I'm starting to think that catch(...) should ideally
be used about as frequently as goto..

> Practically speaking,
> it's not a bad rule (at least not for debug builds) because
> "asynchronous" C++ exceptions seem to be somewhat prevalent. There
> should be a corollary that your company president has to complain
> loudly to your compiler vendor and threaten to switch before adopting
> this rule though!

Resistance is futile. What is the probability that Microsoft will change
this behavior in their OS anytime soon?

> >
> > Everyone understands that this was completely compliant behaviour.
> > What was wanted was not, as was suggested earlier, for the behaviour
> > to remain undefined by the platform; what was wanted was either for
> > it to be defined as crashing the program, or for it to be defined to
> > only be caught by something clearly implementation-specific like
> > "catch(__segfault)".

(If you could catch it via "catch(__segfault)" then you would still
catch it via catch(...) anyhow..?)

> >
> > BTW, it is possible, at least in the access violation case, to
> > customize the behaviour, but this would have required a level of
> > intracompany agreement that was not present.
>
> Wow, it was easier to ban catch(...)? What did you do about your
> standard library implementation?
>

Standard Library code only does "catch(...) { throw; }"-style cleanup
(as far as I've ever seen, anyhow), which, although still a little
heinous (can still potentially cause data corruption), is not nearly
as bad as catch(...) { /*swallow*/ } which seems to be what Todd was
talking about.
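The distinction can be shown in a few lines; the sketch below also
includes the destructor-based alternative that avoids catch(...)
altogether (do_work is just a placeholder).

#include <cstdio>

void do_work() { /* may throw in real code */ }

// (a) cleanup-and-rethrow, the style described above for library
//     code: never swallows, though the cleanup still runs even for
//     "impossible" asynchronous exceptions.
void with_catch_all()
{
    std::FILE* f = std::fopen("log.txt", "w");
    try {
        do_work();
    }
    catch (...) {
        if (f) std::fclose(f);
        throw;
    }
    if (f) std::fclose(f);
}

// (b) the same cleanup via a destructor: no catch(...) anywhere, and
//     identical behaviour for ordinary exceptions.
struct file_closer
{
    std::FILE* f;
    explicit file_closer(std::FILE* p) : f(p) {}
    ~file_closer() { if (f) std::fclose(f); }
};

void with_raii()
{
    file_closer guard(std::fopen("log.txt", "w"));
    do_work();
}

int main()
{
    with_catch_all();
    with_raii();
    return 0;
}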

From my own experience, exceptions were introduced much later in our
system's lifecycle than the general error handling framework, for which
much legacy code has been written with certain assumptions of expected
behavior (most of which are somewhat flawed too, but..). It was much
easier to convince everyone who wanted to use exceptions to use the
std::exception hierarchy and avoid catch(...) than to be able to change
the OS-level error handler behavior.

To Todd: I am also curious what changes to your coding guidelines you
found necessary to implement a "no catch(...)" policy? Do you have any special
requirements for user-defined exception classes?

hys

--
(c) 2002 Hillel Y. Sims
FactSet Research Systems
hsims AT factset.com

Niklas Matthies

unread,
Dec 22, 2002, 7:25:52 PM12/22/02
to
On 2002-12-21 23:43, Todd Greer <todd...@tsgreer.com> wrote:
> I realize that this discussion is about throwing on stack overflow,
> rather than when dereferencing a null pointer (which is what I'm about
> to talk about), but I think the situations are mostly the same.

Ignoring for the moment that the behavior for both is equally undefined
by the standard, there actually is an important conceptual difference
between both cases: Whether a pointer variable is null or not is always
under the sole control of the program. How much stack space is available
to the program, on the other hand, generally is not, similar to dynamic
memory or file storage.

If stack space is considered a dynamic resource, then one logical
implication is that a program needs to be prepared for the resource to
become depleted, just as a program is expected to gracefully handle
dynamic memory exhaustion or I/O errors. The problem is that many
languages, including C++, generally ignore that issue and don't provide
mechanisms for handling stack space exhaustion, at least not in any
remotely portable fashion.

Even if there is some agreement that stack space availability has to
remain outside the scope of the C++ standard, it should be acknowledged
that stack space exhaustion doesn't really fit the usual notion of
undefined behavior as something that a program always can prevent by
means provided by the language.

-- Niklas Matthies

Alexander Terekhov

unread,
Dec 23, 2002, 12:28:40 AM12/23/02
to

"Hillel Y. Sims" wrote:
>
> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:u3coqc...@boost-consulting.com...
> > Todd Greer <todd...@tsgreer.com> writes:
>
> > > As a result of this, my group adopted a coding convention that
> > > completely banned "catch(...)", with no exceptions (I really didn't
> > > intend that as a pun). It also resulted in much internal campaigning
> > > to get other groups to stop using it, because people were getting
> > > nailed by other people's "catch(...)"s.
> >
> > It's a very unfortunate thing to have to do.
>
> No, it's not... ;-) I'm starting to think that catch(...) should
> ideally be used about as frequently as goto..

Not only catch(...) but, with a few exceptions like fully
"trusted"-via-ES regions, also any other catch(<I don't
know what it is>)... like rather popular catch(const std::
exception& what_the_hell_is_it_comma_might_well_be_some_
std_logic_error_dash_it_is_pretty_much_the_same_as_segfault).

regards,
alexander.

--
<http://tinyurl.com/3rjt> <http://tinyurl.com/3rjh>

David Abrahams

unread,
Dec 23, 2002, 6:39:08 AM12/23/02
to
"Hillel Y. Sims" <use...@phatbasset.com> writes:

> "David Abrahams" <da...@boost-consulting.com> wrote in message
> news:u3coqc...@boost-consulting.com...
>> Todd Greer <todd...@tsgreer.com> writes:
>
>> > As a result of this, my group adopted a coding convention that
>> > completely banned "catch(...)", with no exceptions (I really didn't
>> > intend that as a pun). It also resulted in much internal campaigning
>> > to get other groups to stop using it, because people were getting
>> > nailed by other people's "catch(...)"s.
>>
>> It's a very unfortunate thing to have to do.
>
> No, it's not... ;-)

Are you saying it's a good thing that catch(...) causes problems on
these systems which automatically turn crashes into asynchronous
exceptions?

Wouldn't it be better if catch(...) didn't cause any problems?

> I'm starting to think that catch(...) should ideally be used about
> as frequently as goto..

Is that a response to the details of some implementations, or because
you think that it's a fundamentally bad idea as defined in the
standard? If the latter, please explain your position.

>> Practically speaking, it's not a bad rule (at least not for debug
>> builds) because "asynchronous" C++ exceptions seem to be somewhat
>> prevalent. There should be a corollary that your company president
>> has to complain loudly to your compiler vendor and threaten to
>> switch before adopting this rule though!
>
> Resistance is futile. What is the probability that Microsoft will change
> this behavior in their OS anytime soon?

I am unable to calculate a useful estimate, captain. However IIRC
someone I spoke to from the compiler team reported to me that they
recognize it's a problem and are already doing something slightly
different for Win64.

>> > Everyone understands that this was completely compliant
>> > behaviour. What was wanted was not, as was suggested earlier,
>> > for the behaviour to remain undefined by the platform; what was
>> > wanted was either for it to be defined as crashing the program,
>> > or for it to be defined to only be caught by something clearly
>> > implementation-specific like "catch(__segfault)".
>
> (If you could catch it via "catch(__segfault)" then you would still
> catch it via catch(...) anyhow..?)

Really depends what catch(__segfault) means (since we're not allowed
to write that today), doesn't it?

>> > BTW, it is possible, at least in the access violation case, to
>> > customize the behaviour, but this would have required a level of
>> > intracompany agreement that was not present.
>>
>> Wow, it was easier to ban catch(...)? What did you do about your
>> standard library implementation?
>>
>
> Standard Library code only does "catch(...) { throw; }"-style
> cleanup (as far as I've ever seen, anyhow), which, although still a
> little heinous (can still potentially cause data corruption), is not
> nearly as bad as catch(...) { /*swallow*/ } which seems to be what
> Todd was talking about.

I don't think so. In my own experience catch(...) { ... throw; } was
*precisely* what was involved when SEH caused debugging problems. The
issue is that all of the nested catch(...) clauses execute before JIT
debugging even gets a chance to take hold, which takes you some
considerable distance from the cause of the error.

Fortunately, there's a simple way to turn off this problem for most
cases on that platform.

> From my own experience, exceptions were introduced much later in our
> system's lifecycle than the general error handling framework, for
> which much legacy code has been written with certain assumptions of
> expected behavior (most of which are somewhat flawed too, but..). It
> was much easier to convince everyone who wanted to use exceptions to
> use the std::exception hierarchy and avoid catch(...) than to be
> able to change the OS-level error handler behavior.

If you didn't make sure all the other code you're using also follows
that policy, you may have sold them on a hollow promise.


--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

David Abrahams

unread,
Dec 23, 2002, 6:39:47 AM12/23/02
to
Niklas Matthies <comp.lang.c++.mod...@nmhq.net> writes:

> On 2002-12-21 23:43, Todd Greer <todd...@tsgreer.com> wrote:
>> I realize that this discussion is about throwing on stack overflow,
>> rather than when dereferencing a null pointer (which is what I'm about
>> to talk about), but I think the situations are mostly the same.
>
> Ignoring for the moment that the behavior for both is equally undefined
> by the standard, there actually is an important conceptual difference
> between both cases: Whether a pointer variable is null or not is always
> under the sole control of the program. How much stack space is available
> to the program, on the other hand, generally is not

That is a very good point.

> , similar to dynamic memory or file storage.

Ooh, I *deeply* disagree with that part. Stack space is very
different from both of those resources because while they are always
explicitly requested, stack space is not. You can never know portably
when your implementation will request more stack space, which is part
of what makes it so hard to handle as an error condition.

> If stack space is considered a dynamic resource, then one logical
> implication is that a program needs to be prepared for the resource to
> become depleted, just as a program is expected to gracefully handle
> dynamic memory exhaustion or I/O errors. The problem is that many
> languages, including C++, generally ignore that issue and don't provide
> mechanisms for handling stack space exhaustion, at least not in any
> remotely portable fashion.
>
> Even if there is some agreement that stack space availability has to
> remain outside the scope of the C++ standard, it should be acknowledged
> that stack space exhaustion doesn't really fit the usual notion of
> undefined behavior as something that a program always can prevent by
> means provided by the language.

I was going to say

"That's not the usual notion of undefined behavior."

But I'm glad you brought it up, because I think I agree with your
description of undefined behavior. I can't find any justification in
the standard for calling stack overflow "undefined behavior". The
only place in the standard that can account for stack overflow,
AFAICT, is this sentence from 1.4:

--- If a program contains no violations of the rules in this
International Standard, a conforming implementation shall, within
its resource limits, accept and correctly execute 3) that program.

And as if that wasn't weasel-wording enough:

3) ``Correct execution'' can include undefined behavior, depending
on the data being processed; see 1.3 and 1.9.

Which is irrelevant to this discussion but amusing nonetheless.

--
David Abrahams
da...@boost-consulting.com * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution

[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]

Joshua Lehrer

unread,
Dec 23, 2002, 6:45:49 AM12/23/02
to
Todd Greer <todd...@tsgreer.com> wrote in message news:<m37ke4i...@nonsequitur.austin.rr.com>...

> In a company in which I recently worked, the fact that "catch(...)" would catch
> some results of undefined behaviour, such as access violations, caused problems
> on several occasions. There were several instances of very long debugging
> sessions because a program had dereferenced a bad pointer, but continued
> execution in an unknown state long enough to badly corrupt data later. While


But why did you let it "continue execution in an unknown state"? If
you catch an exception about which you know nothing, you can't let
your application continue. In fact, if you catch an exception about
which you know nothing, you can't let ANYTHING happen, not even run
your catch clause. Why? Because if you don't know what the problem
is, you can't reliably do anything. Therefore, "catch(...)" should
never be used, unless you do nothing but immediately rethrow the
exception.

joshua lehrer
factset research systems
