Is there a good use case for noexcept?


DeMarcus

Apr 12, 2011, 10:39:47 PM
Hi,

In an interview with Bjarne Stroustrup...

http://www.codeguru.com/cpp/misc/print.php/c18357/An-Interview-with-C-Creator-Bjarne-Stroustrup.htm

...he says "With noexcept a throw is considered a fatal design error and
the program immediately terminated."

Terminated?!! What won't it cost to correct a design error once it's found
at the customer site?!


I don't know when to use noexcept!

What I'm asking for is a simple guideline on how to use noexcept. Preferably
one that could fit into an item in the next More Effective C++ by Scott
Meyers or C++ Coding Standards by Sutter & Alexandrescu.

One question I have is: which one (or several) of these apply?

1. Always use noexcept with functions that you think won't throw.

2. Always use noexcept with functions that you think won't throw, or that
throw only very rarely (e.g. std::bad_alloc).

3. Only use noexcept when you really need it, e.g. when you want to provide
Abrahams' exception safety guarantees.

4. Only use noexcept when you have stepped through your function, line
by line, and made sure no exception is thrown.


Thanks,
Daniel


--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Francis Glassborow

Apr 13, 2011, 5:03:35 PM

On 13/04/2011 03:39, DeMarcus wrote:
> Hi,
>
> In an interview with Bjarne Stroustrup...
>
> http://www.codeguru.com/cpp/misc/print.php/c18357/An-Interview-with-C-Creator-Bjarne-Stroustrup.htm
>
> ...he says "With noexcept a throw is considered a fatal design error and
> the program immediately terminated."
>
> Terminated?!! What won't it cost to correct a design error once it's found
> at the customer site?!
>
> I don't know when to use noexcept!
>
> What I'm asking for is a simple guideline on how to use noexcept. Preferably
> one that could fit into an item in the next More Effective C++ by Scott
> Meyers or C++ Coding Standards by Sutter & Alexandrescu.
>
> One question I have is: which one (or several) of these apply?
>
> 1. Always use noexcept with functions that you think won't throw.
>
> 2. Always use noexcept with functions that you think won't throw, or that
> throw only very rarely (e.g. std::bad_alloc).
>
> 3. Only use noexcept when you really need it, e.g. when you want to provide
> Abrahams' exception safety guarantees.

I think it would be helpful if we could persuade implementers to
diagnose potential throws from a function marked noexcept. Even
though static checking is not required, and code that fails such a static
check is still conforming, it would be very helpful in improving reliability.

With that in mind, I would like to see all leaf functions (i.e.
functions that do not call any other functions) that cannot throw be
marked as noexcept. I suspect that it would be possible to write a tool
that would do that for the programmer. A leaf function either has an
explicit throw in its definition or cannot throw.

It then becomes reasonable to mark as noexcept any function that calls
only leaf functions marked noexcept and does not itself throw.

There is a small problem in that a programmer might add a throw into a
previously clean leaf function.

Sometimes people think that noexcept is like const. It isn't, because it
works the other way round. You can add a const qualification without
affecting cases lower down the chain; you cannot do that with noexcept,
which has to start from the lowest level (you can add it higher up, but
only if you are willing to catch exceptions within the function and
contain them there).

What I would really like to see is a compiler that warns me if I write a
function that cannot throw without qualifying it as noexcept. That would
support, IMO, a progressive improvement in code.

A serious problem arises in template code, where the noexcept status of a
function may depend on the type it is instantiated with.

Francis

restor

Apr 13, 2011, 5:02:45 PM

> What I'm asking for is a simple guideline on how to use noexcept. Preferably
> one that could fit into an item in the next More Effective C++ by Scott
> Meyers or C++ Coding Standards by Sutter & Alexandrescu.
>
> One question I have is: which one (or several) of these apply?
>
> 1. Always use noexcept with functions that you think won't throw.
>
> 2. Always use noexcept with functions that you think won't throw, or that
> throw only very rarely (e.g. std::bad_alloc).
>
> 3. Only use noexcept when you really need it, e.g. when you want to provide
> Abrahams' exception safety guarantees.
>
> 4. Only use noexcept when you have stepped through your function, line
> by line, and made sure no exception is thrown.

I believe it would be (3), with the annotation "DO NOT USE noexcept
EXPLICITLY". Really. I am not being ironic. Functions in general throw,
and should throw, and they just shouldn't be annotated with noexcept.
Note that even functions from the old C standard library, compiled with a C
compiler, may throw; just consider qsort() and bsearch(), whose comparison
callbacks may throw.
noexcept is needed to detect whether STL elements can implement a safe move
operation. But you do not need to type noexcept for that:

struct MyClass {
    std::string name;
    std::vector<std::string> colleagues;
};

The compiler has already annotated your move constructor, move assignment,
and destructor as noexcept. You need not (and had better not) do anything.
The two places where I can see you could use noexcept are:

(A) member function swap and your specialization of std::swap.
(B) annotating your destructor as noexcept(false) if you are using it
in an unconventional way, like scope guard or wrapping function calls
as described in http://www2.research.att.com/~bs/wrapper.pdf

1 and 2 from your list would not be good candidates, and neither would
4, for the following reasons.
The fact that you can prove your function never throws an exception
does not mean that it won't throw in the future.
Suppose you have this function:

void swap_pointers( int *& lhs, int *& rhs ) noexcept
{
    int * tmp = lhs;
    lhs = rhs;
    rhs = tmp;
}

Now, suppose you need to add logging, just temporarily while debugging:

void swap_pointers( int *& lhs, int *& rhs ) noexcept
{
    LOG("swapping pointers");
    int * tmp = lhs;
    lhs = rhs;
    rhs = tmp;
}

You get an error: a throwing function declared noexcept. You want to
remove noexcept now? That will require recompiling the entire
program. You could have a look at the following document:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf
The C++ Standards Committee is now considering removing noexcept from
many library functions.

Regards,
&rzej

Thomas Richter

Apr 14, 2011, 9:48:07 AM

On 13.04.2011 04:39, DeMarcus wrote:

> One question I have is: which one (or several) of these apply?
>
> 1. Always use noexcept with functions that you think won't throw.
>
> 2. Always use noexcept with functions that you think won't throw, or that
> throw only very rarely (e.g. std::bad_alloc).
>
> 3. Only use noexcept when you really need it, e.g. when you want to provide
> Abrahams' exception safety guarantees.
>
> 4. Only use noexcept when you have stepped through your function, line
> by line, and made sure no exception is thrown.

If you ask me: none of the above. I would use noexcept for low-level,
speed-critical (for example numerical) methods which I know cannot
throw. Alternatively, in memory-constrained systems where methods
are designed in such a way that they cannot throw, and the overall code
needs to fit into a small memory region.

It is too bad that noexcept isn't checked at compile time; that would be
more useful, and I wonder why it isn't. A compiler could, of course, still warn.

Greetings,
Thomas

Jeffrey Schwab

Apr 14, 2011, 9:47:31 AM

On Tuesday, April 12, 2011 10:39:47 PM UTC-4, DeMarcus wrote:

> I don't know when to use noexcept!

> 3. Only use noexcept when you really need it, e.g. when you want to provide
> Abrahams' exception safety guarantees.

That one, sort of. When some library code guarantees a particular level of
exception safety, declaring your own code noexcept may enable optimizations
within the library. For example, std::vector reallocations may be able to
move, rather than copy, elements whose move constructors are noexcept. Use
noexcept(true) to annotate functions for which you want the noexcept
operator to return true.

DeMarcus

Apr 14, 2011, 4:41:32 PM

On 04/13/2011 11:02 PM, restor wrote:
>
>> What I'm asking for is a simple guideline on how to use noexcept. Preferably
>> one that could fit into an item in the next More Effective C++ by Scott
>> Meyers or C++ Coding Standards by Sutter & Alexandrescu.
>>
>> One question I have is: which one (or several) of these apply?
>>
>> 1. Always use noexcept with functions that you think won't throw.
>>
>> 2. Always use noexcept with functions that you think won't throw, or that
>> throw only very rarely (e.g. std::bad_alloc).
>>
>> 3. Only use noexcept when you really need it, e.g. when you want to provide
>> Abrahams' exception safety guarantees.
>>
>> 4. Only use noexcept when you have stepped through your function, line
>> by line, and made sure no exception is thrown.
>
> I believe it would be (3), with the annotation "DO NOT USE noexcept
> EXPLICITLY". Really. I am not being ironic.
[...]

> 1 and 2 from your list would not be good candidates, and neither would
> 4, for the following reasons.
> The fact that you can prove your function never throws an exception
> does not mean that it won't throw in the future.
> Suppose you have this function:
>
> void swap_pointers( int *& lhs, int *& rhs ) noexcept
> {
>     int * tmp = lhs;
>     lhs = rhs;
>     rhs = tmp;
> }
>
> Now, suppose you need to add logging, just temporarily while debugging:
>
> void swap_pointers( int *& lhs, int *& rhs ) noexcept
> {
>     LOG("swapping pointers");
>     int * tmp = lhs;
>     lhs = rhs;
>     rhs = tmp;
> }
>
> You get an error: a throwing function declared noexcept. You want to
> remove noexcept now? That will require recompiling the entire
> program.
>
>

It's a good example. But consider this: we are going into a new era of
making our applications more robust than ever by means of noexcept,
the way Abrahams proposed (originating from his exception safety
guarantees). We have to rethink our new designs.

So how would I solve the above problem? I would reason like this: LOG
has nothing to do with swapping pointers, so it must not affect my
decision to use noexcept. Nevertheless, we want LOG to be there.

If I had the source code to LOG then I would make LOG noexcept as well.
If I did not have the source code I would do it like this:

void swap_pointers( int *& lhs, int *& rhs ) noexcept
{
    try { LOG("swapping pointers"); }
    catch( ... ) {}

    int * tmp = lhs;
    lhs = rhs;
    rhs = tmp;
}

or, if LOG is a C function we know won't throw, use the noexcept block
that Martin B. talks about in the previous post "History and evolution
of the noexcept proposal?":

void swap_pointers( int *& lhs, int *& rhs ) noexcept
{
    noexcept { LOG("swapping pointers"); }

    int * tmp = lhs;
    lhs = rhs;
    rhs = tmp;
}

Now, many of you may rage over my choice to swallow the exceptions from
LOG with try/catch(...){}, but remember: LOG is *not* part of the
semantics of swap_pointers; hence, that's a legal design decision.


I find this discussion very productive. We must be clear on how to
reason when using noexcept. Please feel free to provide more arguments.

ThosRTanner

Apr 14, 2011, 4:42:14 PM

On Apr 13, 3:39 am, DeMarcus <use_my_alias_h...@hotmail.com> wrote:
> Hi,
>
> In an interview with Bjarne Stroustrup...
>
> http://www.codeguru.com/cpp/misc/print.php/c18357/An-Interview-with-C...

>
> ...he says "With noexcept a throw is considered a fatal design error and
> the program immediately terminated."
>
> Terminated?!! What won't it cost to correct a design error once it's found
> at the customer site?!
>
> I don't know when to use noexcept!

You forgot 5: Never use noexcept, because you don't know that someone
won't change a function you call and the compiler isn't going to check
it.

Without static checking it's a disaster waiting to happen.

I agree it's a serious design error for a program to throw an
exception it said it wouldn't. But I feel that the exceptions thrown
are part of the interface contract (like not altering the data passed
to you), and such code shouldn't compile. (Templates are, IMO, already
specifying the interface contract, so I fail to see how the "it can
cause templates not to compile" argument holds; it's not an issue with
the template, it's an issue with the class you're using in the
template.)

After too long in programming I have given up on trying to get
programmers to take notice of compiler warnings. We find the warnings
we think indicate serious errors, and with any luck at least one of
our compilers will let us promote them to errors. People take notice of
those.

Option 5 is the only one that guarantees your program won't suddenly
crash.

DeMarcus

Apr 14, 2011, 8:34:41 PM

I somewhat agree with you. I just want to make one remark: it's important to think of the /semantics/ of a function. Just because a function can never throw doesn't automatically make it noexcept.

One has to decide, from the overall design, whether this shall be a no-throw function. Maybe it isn't today, but it may be tomorrow. One good example is all the functions returning -1 on failure. Maybe one wants to change these to throw instead in the future.

It's important to be totally clear about whether the /semantics/, not the implementation, may throw.


> It then becomes reasonable to mark as noexcept any function that calls
> only leaf functions marked noexcept and does not itself throw.
>
> There is a small problem in that a programmer might add a throw into a
> previously clean leaf function.
>

See above.

> Sometimes people think that noexcept is like const. It isn't, because it
> works the other way round. You can add a const qualification without
> affecting cases lower down the chain; you cannot do that with noexcept,
> which has to start from the lowest level (you can add it higher up, but
> only if you are willing to catch exceptions within the function and
> contain them there).
>

Now, this is the *perfect* example.

First, we must all remember why we use noexcept. It's *not* to save a couple of clock cycles, as Stroustrup et al. would have you think! (In that case they had better rename it 'optimized' or something.)

noexcept is about being able to provide the strong exception safety guarantee stated by Dave Abrahams. Why do we need the strong exception safety guarantee? We need it to be able to provide full roll-back of objects that failed during an operation. Full roll-back is a critical part of some systems.

Now, if we go back to your example, you have to ask yourself; why would I put noexcept to a function higher up?

Most certainly you would answer that you want to provide full roll-back on some object (other suggestions are welcome). But full roll-back doesn't come for free; it may not even be possible in certain situations. In your example you may need to redesign big parts to achieve it, and just catching and swallowing exceptions could be fatal.

Now, here's the real danger with the current noexcept. If you just put noexcept on a function higher up, the compiler will not even give you a warning, and you most certainly won't give your customers the full roll-back they expect; it will terminate their system!


> What I would really like to see is a compiler that warns me if I write a
> function that cannot throw without qualifying it as noexcept. That would
> support, IMO, a progressive improvement in code.
>

No, it's the other way around. As I said earlier, a function is not a noexcept function just because it doesn't throw. To improve your design you need to think of the /semantics/ of the function and ask yourself whether the function should be able to fail or not. *Then* we use the compiler to give us an error early if our implementation doesn't fulfill our design.

*That* is why we need a /statically/ checked noexcept, so we don't have to postpone the redesign until after an angry call from a customer.

restor

Apr 16, 2011, 9:46:59 AM

> I somewhat agree with you. I just want to make one remark: it's
> important to think of the /semantics/ of a function. Just because a
> function can never throw doesn't automatically make it noexcept.
> It's important to be totally clear about whether the /semantics/, not the implementation, may throw.
>
> noexcept is about being able to provide the strong exception safety
> guarantee stated by Dave Abrahams. Why do we need the strong exception
> safety guarantee? We need it to be able to provide full roll-back of
> objects that failed during an operation. Full roll-back is a critical
> part of some systems.
>
> Now, if we go back to your example, you have to ask yourself; why would I put noexcept to a function higher up?
>
> Most certainly you would answer that you want to provide full roll-back on some object (other suggestions are welcome). But full
> roll-back doesn't come for free; it may not even be possible in certain situations. In your example you may need to redesign big
> parts to achieve it, and just catching and swallowing exceptions could be fatal.
>
> Now, here's the real danger with the current noexcept. If you just put noexcept on a function higher up, the compiler will not even
> give you a warning, and you most certainly won't give your customers the full roll-back they expect; it will terminate their
> system!
>
> > What I would really like to see is a compiler that warns me if I write a
> > function that cannot throw without qualifying it as noexcept. That would
> > support, IMO, a progressive improvement in code.
>
> No, it's the other way around. As I said earlier, a function is not a noexcept function just because it doesn't throw. To improve
> your design you need to think of the /semantics/ of the function and ask yourself whether the function should be able to fail or
> not. *Then* we use the compiler to give us an error early if our implementation doesn't fulfill our design.

This is a very good rule, which could be summarized as "use noexcept
for declaring no-fail semantics". I just want to add one comment
regarding no-fail semantics. I read somewhere (I can't remember where)
that there is a non-trivial difference between the no-throw and no-fail
guarantees. Namely, a no-throw function may still fail to do what it
is expected to do, but it may report the failure by means other than
throwing exceptions, or not report the failure at all. In order to
provide commit-or-rollback semantics, what we need is the no-fail
guarantee.

The exception safety guarantees provided by our functions are not
something that a compiler can verify. This also applies to the no-fail
guarantee. Checking for exceptions is one thing, but checking for the
lack of failure is something completely different. The compiler will not
be able to check that, so you will have to declare your intention with
noexcept, and just be careful that your function never fails.

Regards,
&rzej

Seungbeom Kim

Apr 16, 2011, 9:45:11 AM

On 2011-04-14 17:34, DeMarcus wrote:
>
> [...] I just want to make one remark: it's important to think of the
> /semantics/ of a function. Just because a function can never throw
> doesn't automatically make it noexcept.
>
> One has to decide, from the overall design, if this shall be a
> no-throw function. Maybe it isn't today, but it may be tomorrow.
> One good example is all functions returning -1 when failing.
> Maybe one wants to change these in the future to throw instead.
>
> It's important to be totally clear with whether the /semantics/,
> not the implementation, may throw.

If a function is changed to throw an exception instead of return -1
in case of an error, that is not just an implementation change, but
an interface change, or a change in the contract, which also justifies
a change in the function signature.

--
Seungbeom Kim

restor

Apr 16, 2011, 9:47:18 AM

Now I see my example wasn't a good one; I failed to make my point.
First, I used the name "swap" in a function, which brings std::swap to mind,
which in turn is a good candidate for noexcept. Second, I used the
function LOG, which perhaps shouldn't even throw (I think Boost.Log
doesn't throw on failure).

So let me give you a different example. This one is taken literally
from http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf.
Suppose I have the following function:

T& std::vector<T>::front() noexcept {
    assert(!this->empty());
    return *this->data();
}

front doesn't need to provide a no-fail guarantee, but we labeled it
noexcept because we could. assert typically either aborts or does
nothing, so it works fine with noexcept. But what if, for a unit-test
framework, I want to change the meaning of assert to signal test
failure with an exception? (This argument is also taken from
n3248.pdf.) It's just unit tests, so I can decide to do that, and it is
what unit-test frameworks usually do: try to also detect assertion
failures. So we would probably implement assert as

#define assert(cond) \
    static_cast<void>( (cond) || signalFailure(#cond, __FILE__, __LINE__) )

[[noreturn]]
bool signalFailure( string cond, string file, unsigned line )
{
    throw TestFailure(cond, file, line);
}

Now the testing will not work, because the program will terminate as
soon as we try to signal test failure.

Regards,
&rzej

DeMarcus

Apr 17, 2011, 3:49:51 AM

On 2011-04-16 15:45, Seungbeom Kim wrote:
>
> On 2011-04-14 17:34, DeMarcus wrote:
>>
>> [...] I just want to make one remark: it's important to think of the
>> /semantics/ of a function. Just because a function can never throw
>> doesn't automatically make it noexcept.
>>
>> One has to decide, from the overall design, if this shall be a
>> no-throw function. Maybe it isn't today, but it may be tomorrow.
>> One good example is all functions returning -1 when failing.
>> Maybe one wants to change these in the future to throw instead.
>>
>> It's important to be totally clear with whether the /semantics/,
>> not the implementation, may throw.
>
> If a function is changed to throw an exception instead of return -1
> in case of an error, that is not just an implementation change, but
> an interface change, or a change in the contract, which also justifies
> a change in the function signature.
>

Well, yes, I think we share the intention of achieving robust design, but to me it's important to differentiate between the /syntax/ contract and the /semantics/ contract.

If we have a function returning -1 on failure and we change it to throw an exception instead, we have changed the /syntax/ contract, but the /semantics/ contract stays the same, namely: this function may fail.

My idea is that noexcept shall help us keep the semantics contract intact.

Stroustrup gave the following example in N3202.
http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2010/n3202.pdf

void f(int i)
{
    // int a[sz]; // before
    vector<int> a(sz); // after
    a[i] = 7;
    // ...
}

He says "We cannot build systems where an apparently small change in a function forces recompilations of all users".

I think it's unfortunate that he misses the fact that the recompilation is there to help us find flaws in the design.

Nevertheless, his example is important, so how would I solve it?

First I ask myself; what semantics contract have I promised the user of this function f()?

* If it's a function that may fail, I don't put noexcept on it, neither before (when using an array) nor after (when using the vector); hence the all-user recompilation won't be needed.

* If it's a function that may never fail, and we feel that we must change the implementation to use a vector, then we are out of luck; we can't declare this function noexcept. We could try wrapping the vector in try/catch(...){}, but if the vector is important for fulfilling the semantics of the function then that's not possible. (Swallowing exceptions is bad behavior, but if the expression is not part of the semantics contract, for instance logging, then I consider it legal.)

I know many may be frustrated, saying that noexcept mustn't stop us from making enhancements in our implementations, but I see it like this:

1. First of all, we don't /have/ to use noexcept all over our code, only where we need to provide the strong exception guarantee.

2. Secondly; noexcept is there to help us keep the semantics contract.

3. Third; consider the alternative (or actually the current draft):

void f(int i) noexcept
{
    vector<int> a(sz);
    a[i] = 7;
    // ...
}

What happens when the vector fails to construct, or a.at(BIG_NUMBER) throws an std::out_of_range exception? Then we have two problems. The minor one is the immediate termination. The major one is that we have found a design flaw that may already be running on many customer machines.

In summary; we can either have the compiler telling us this function doesn't fulfill its contract, or we can have our customer telling us we don't fulfill the contract.

--

Martin B.

Apr 17, 2011, 3:49:10 AM

On 16.04.2011 15:47, restor wrote:
>
>>> I believe it would be (3), with the annotation "DO NOT USE noexcept
>>> EXPLICITLY". Really. I am not being ironic.
>> [...]
>>> 1 and 2 from your list would not be good candidates and neither would
>>> 4, for the following reasons.
>>> The fact that you can prove your function never throws an exception,
>>> does not mean that it won't throw in the future.
>>> suppose you have function:
>> ....

>> I find this discussion very productive. We must be clear on how to
>> reason when using noexcept. Please feel free to provide more arguments.
>
> Now I see my example wasn't a good one; I failed to make my point.
> First, I used the name "swap" in a function, which brings std::swap to mind,
> which in turn is a good candidate for noexcept. Second, I used the
> function LOG, which perhaps shouldn't even throw (I think Boost.Log
> doesn't throw on failure).
>
> So let me give you a different example. This one is literally taken
> from http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf.
> Suppose I have the following function:
>
> T& std::vector<T>::front() noexcept {
>     assert(!this->empty());
>     return *this->data();
> }
>
> front doesn't need to provide a no-fail guarantee, but we labeled it
> noexcept because we could. assert typically either aborts or does
> nothing, so it works fine with noexcept. But what if, for a unit-test
> framework, I want to change the meaning of assert to signal test
> failure with an exception? (This argument is also taken from
> n3248.pdf.) It's just unit tests, so I can decide to do that, and it is
> what unit-test frameworks usually do: try to also detect assertion
> failures. So we would probably implement assert as
> ...

>
> Now the testing will not work, because the program will terminate as
> soon as we try to signal test failure.
>

I find the unit-testing example (I think it has already been mentioned somewhere else) somehow not very convincing, but I think I don't fully understand what the argument is:

* Do you want to actually *test the assert* being triggered?
* Or are you concerned that the test run will be terminated prematurely because of `terminate()`?

cheers,
Martin

Alexander Terekhov

Apr 17, 2011, 3:51:55 AM

restor wrote:
[...]


> bool signalFailure( string cond, string file, unsigned line )
> {

*( (int*)0 ) = 0;

That is the best signal. Hth.

regards,
alexander.

restor

Apr 18, 2011, 10:03:45 AM

> > Now I see my example wasn't a good one; I failed to make my point.
> > First, I used the name "swap" in a function, which brings std::swap to mind,
> > which in turn is a good candidate for noexcept. Second, I used the
> > function LOG, which perhaps shouldn't even throw (I think Boost.Log
> > doesn't throw on failure).
>
> > So let me give you a different example. This one is taken literally
> > from http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf.

> > Suppose I have the following function:
>
> > T& std::vector<T>::front() noexcept {
> >     assert(!this->empty());
> >     return *this->data();
> > }
>
> > front doesn't need to provide a no-fail guarantee, but we labeled it
> > noexcept because we could. assert typically either aborts or does
> > nothing, so it works fine with noexcept. But what if, for a unit-test
> > framework, I want to change the meaning of assert to signal test
> > failure with an exception? (This argument is also taken from
> > n3248.pdf.) It's just unit tests, so I can decide to do that, and it is
> > what unit-test frameworks usually do: try to also detect assertion
> > failures. So we would probably implement assert as
> > ...
>
> > Now the testing will not work, because the program will terminate as
> > soon as we try to signal test failure.
>
> I find the unit-testing example (I think it has already been mentioned somewhere else) somehow not very convincing, but I think I don't fully understand what the argument is:
>
> * Do you want to actually *test the assert* being triggered?
> * Or are you concerned that the test run will be terminated prematurely because of `terminate()`?

I guess the answer is closer to #2, but it is still not exactly what
I meant. I have a unit-test framework that, apart from normally
checking the results in test cases, also wants, in case some assertion
fires, to report the assertion failure in a uniform manner (i.e.
similar to normal test failures).

Regards,
&rzej

restor

Apr 18, 2011, 7:49:11 PM

> void f(int i)
> {
>     // int a[sz]; // before
>     vector<int> a(sz); // after
>     a[i] = 7;
>     // ...
> }

> What happens when the vector fails to construct, or a.at( BIG_NUMBER ) throws
> an std::out_of_range exception? Then we have two problems. The minor one
> is the immediate termination. The major one is that we have found a design
> flaw that may already be running on many customer machines.
>
> In summary; we can either have the compiler telling us this function
> doesn't fulfill its contract, or we can have our customer telling us we
> don't fulfill the contract.

From my experience, the array-vs-vector example is not a real-life
one. If we follow the rule of using noexcept for the no-fail guarantee,
no-fail functions are always very simple: only pointer assignments,
or calling release() or reset() on auto_ptrs, or calling swap(). Also,
you need to be very cautious when writing them for other reasons:
they must not fail, not merely not throw exceptions. If you even think
of using a vector in a no-fail function, there is something
fundamentally wrong.

Regards,
&rzej

Martin B.

Apr 18, 2011, 7:48:10 PM

Ah. But this "in case some assertion fires" would mean that *in case it fires* you will either immediately fix your unit-test code or your program code, depending on which triggered the assertion.

That is, a triggered assertion in your unit tests means your UT run failed. As opposed to exceptions, where normally you will have succeeding unit tests that test for the exceptions your interface specifies.

What I want to get at is: since a triggered `assert` always means a failed UT, it will be (more) acceptable to have the triggered assert just log and terminate the unit test, thus signalling failure.

Obviously how practical this can be depends on the UT tools used and the platform, but I think it isn't really a showstopper for noexcept.

cheers,
Martin


--
Stop Software Patents
http://petition.stopsoftwarepatents.eu/841006602158/
http://www.ffii.org/

DeMarcus

Apr 19, 2011, 5:02:33 PM

I agree, it isn't really a showstopper, but if we could get the opportunity to choose ourselves what failure is big enough to cause termination, we could create more professional code.

In the case of unit testing it feels strange that the whole test fixture should terminate when encountering a failure; finding failures is its job.

Actually it could cause trouble as well. Maybe it wouldn't be a showstopper to have a termination if one works as you describe: programming a little, compiling, and then testing the unit. But if you have loads of unit tests, where many of them take a long time to run, you would probably want to run them on a build server overnight. Then it's not desirable to have a crash when a test fails.


--

DeMarcus

unread,
Apr 20, 2011, 3:23:42 PM4/20/11
to

On 2011-04-19 01:49, restor wrote:
>
>> void f(int i)
>> {
>> // int a[sz]; // before
>> vector<int> a(sz); // after
>> a[i] = 7;
>> // ...
>>
>> }
>
>> What happens when vector fails to construct, or a.at( BIG_NUMBER ) throws
>> an std::out_of_range exception? Then we have two problems. The minor one
>> is the immediate termination. The major one is that we have found a design
>> flaw that may already be running on many customer machines.
>
>> In summary; we can either have the compiler telling us this function
>> doesn't fulfill its contract, or we can have our customer telling us we
>> don't fulfill the contract.
>
> From my experience, the array-vs-vector example is not a real-life
> one. If we follow the rule of using noexcept for no-fail guarantee,
> no-fail functions are always very simple: it is only pointer assignments,
> or calling release(), reset() on auto_ptr's, or calling swap(). Also
> you need to be very cautious when writing those for other reasons:
> they must not fail, not only not throw exceptions. If you even think
> of using vector in a no-fail function, there is something fundamentally wrong.
>

I agree with you completely; what I write below is a thought experiment.

I've been thinking about this for a long time, and what bugs me right
now is the question: what kind of failure is std::bad_alloc?

Even the Turing machine has infinite memory, so it is essential to all
programs that we have memory.

My conclusion is that std::bad_alloc cannot be seen as a failure within
our program; it's a failure of a higher kind that we can't take into
consideration when we make our design contracts and function
declarations. See it as /force majeure/ in a business contract.

http://en.wikipedia.org/wiki/Force_majeure

If we allow ourselves to accept force majeure events, then new rules
come into play. If we get std::bad_alloc in a noexcept function, we
must accept that everything relying on that function is now out of
order and we must take action based on this condition.


I've seen nobody reason like the above; under the current noexcept
draft a std::bad_alloc gives immediate termination.

However, running out of memory doesn't necessarily mean we can't go
on. Let's say we tried to allocate a 16GB painting canvas; then we
shouldn't terminate, but instead catch the std::bad_alloc, clean up
and ask the user if she wants to try 8GB instead. A force majeure
failure is not a severe failure if it can be resolved and some party
takes responsibility for it.
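The canvas scenario can be sketched like this (the function name and
the halving policy are mine, purely illustrative, and the sizes are
shrunk for the example):

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Try to allocate a canvas of `want` bytes; on std::bad_alloc, retry
// with half the size until `minimum` is reached. Returns the size
// obtained, or 0 if even the minimum could not be satisfied.
std::size_t allocate_canvas(std::vector<char>& canvas,
                            std::size_t want, std::size_t minimum) {
    while (want >= minimum) {
        try {
            canvas.resize(want);  // may throw std::bad_alloc
            return want;          // success: the failure was recoverable
        } catch (const std::bad_alloc&) {
            want /= 2;            // cleanup is implicit; ask for less
        }
    }
    return 0;  // even the minimum failed; now it is a real failure
}
```

The point is that std::bad_alloc here is caught and handled as an
expected, recoverable condition rather than triggering termination.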

restor

unread,
Apr 21, 2011, 4:25:51 PM4/21/11
to
> I've been thinking about this for a long time and what bugs me right now
> is the question; what kind of failure is std::bad_alloc?
>
> Even the Turing machine has infinite memory so it's substantial to all
> programs that we have memory.
>
> My conclusion is that std::bad_alloc cannot be seen as a failure within
> our program; it's a failure of a higher kind that we can't take into
> consideration when we make our design contracts and function
> declarations. See it as /force majeure/ in a business contract.
>
> http://en.wikipedia.org/wiki/Force_majeure
>
> If we allow ourselves to accept events force majeure then new rules come
> into play. If we get std::bad_alloc in a noexcept function we must
> accept that everything relying on that function is now out of order and
> we must take action based on this condition.
>
> I've seen nobody reason like the above, and for the current noexcept
> draft a std::bad_alloc gives immediate termination.
>
> However, running out of memory doesn't necessary mean we can't go on.
> Let's say we tried to allocate a 16GB painting canvas then we shouldn't
> terminate but instead catch the std::bad_alloc, clean up and ask the
> user if she wants to try 8GB instead. A force majeure failure is not a
> severe failure if it can be resolved and some party is taking the
> responsibility for it.

I must admit I am having a problem finding what point you are trying
to make.
What you say reminded me of an article I read recently on exceptions.

http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf

There the author argued that running out of memory is about the only
good use case for throwing exceptions.
As I see it, "out-of-memory" is not the only condition that falls into
the category that I would call "system error". Memory is one resource
that a program needs, but there are others. For another example,
consider a multi-threaded program. Not all systems need to support
threads, but let's consider a system that does support them. Suppose
you have checked that the environment can launch threads, and you want
to create a second thread from your main thread, so that you have two
threads. You need two threads. But the operating system may fail to
create a thread for you because it may have a limit of 16000 threads
and the one you requested was the 16001st. It's not your fault, but
the program cannot run further normally. There are many more such
situations that you might call a "system error" or "platform error".
In C++11 we have the type std::system_error for those kinds of
exceptions. However, bad_alloc is not a sub-type thereof, for a funny
reason: if you do not have memory, you cannot even construct the
object that says "out-of-memory"; hence the special case for this one.
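The hierarchy described here can be checked directly with a type
trait: both classes derive from std::exception, but std::bad_alloc is
not a std::system_error.

```cpp
#include <exception>
#include <new>
#include <system_error>
#include <type_traits>

// std::bad_alloc sits beside std::system_error in the hierarchy,
// not underneath it.
static_assert(!std::is_base_of<std::system_error, std::bad_alloc>::value,
              "bad_alloc is not a kind of system_error");
// Both are still exceptions (system_error via runtime_error).
static_assert(std::is_base_of<std::exception, std::bad_alloc>::value,
              "bad_alloc derives from exception");
static_assert(std::is_base_of<std::exception, std::system_error>::value,
              "system_error derives from exception");
```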

I think I disagree with your claim that throwing bad_alloc is a "force
majeure" in the noexcept contract. If we use the reasoning "any
function that doesn't throw is noexcept", this is obvious. But even if
we use the reasoning "noexcept is a no-fail guarantee", it is still
the function author's fault that the function threw bad_alloc. Not
because you didn't swallow it, but because you used operations that
required allocating memory. You could have fulfilled the contract by
not using operations that require memory. If you really needed
additional memory, you signed the contract with the intention of
breaking it.

Anyway, your "vision" resembles more and more the solution that Java
adopted: exceptions are divided into checked exceptions and unchecked
exceptions. The former are checked at compile time, the latter are
never checked (no one writes "throws NullPointerException" in Java).
You can choose which kind you want to use. The rule, I guess, is that
when it is really an error you throw an unchecked exception. When an
exception is just used as a "control statement", like an if-statement
or loop-statement that simply takes you from one place in the code to
another, it must be a checked exception.

Regards,
&rzej

DeMarcus

unread,
Apr 22, 2011, 10:57:27 AM4/22/11
to

On 2011-04-21 22:25, restor wrote:
>> I've been thinking about this for a long time and what bugs me right now
>> is the question; what kind of failure is std::bad_alloc?
>>
>> Even the Turing machine has infinite memory so it's substantial to all
>> programs that we have memory.
>>
>> My conclusion is that std::bad_alloc cannot be seen as a failure within
>> our program; it's a failure of a higher kind that we can't take into
>> consideration when we make our design contracts and function
>> declarations. See it as /force majeure/ in a business contract.
>>
>> http://en.wikipedia.org/wiki/Force_majeure
>>
>> If we allow ourselves to accept events force majeure then new rules come
>> into play. If we get std::bad_alloc in a noexcept function we must
>> accept that everything relying on that function is now out of order and
>> we must take action based on this condition.
>>
>> I've seen nobody reason like the above, and for the current noexcept
>> draft a std::bad_alloc gives immediate termination.
>>
>> However, running out of memory doesn't necessary mean we can't go on.
>> Let's say we tried to allocate a 16GB painting canvas then we shouldn't
>> terminate but instead catch the std::bad_alloc, clean up and ask the
>> user if she wants to try 8GB instead. A force majeure failure is not a
>> severe failure if it can be resolved and some party is taking the
>> responsibility for it.
>
> I must admit I am having a problem finding what point you are trying
> to make.

Thanks for your thorough answer, I appreciate it!

> What you say reminded me of an article I read recently on exceptions.
>
> http://web.cecs.pdx.edu/~black/publications/Black%20D.%20Phil%20Thesis.pdf
>
> Where the author argued that running out of memory is about the only
> good use case for throwing exceptions.

Just a quick review of my idea (for coming answers):

There are four types of failures (and we use exceptions for them):

1. Expected failures - Exceptions thrown within our design contract of a
function. These are the exceptions that a user must be prepared for and
handle. noexcept means the user (the caller of the function) doesn't
need to prepare anything.

2. Usage failures - The user provided invalid arguments to the function,
hence the /user/ broke the contract. Unchecked std::runtime_error is thrown.

3. Design failures - I, the programmer of the function, have a bug in my
code, hence it was /me/ that broke the contract. Unchecked
std::logic_error is thrown.

4. System failures - The system could not provide the resources to
fulfill the contract, hence the /system/ broke the contract, which is
out of our control. Such a case would be called force majeure in a
business contract. Let's use that term here as well.


> As I see it, "out-of-memory" is not the only condition that falls into
> the category that I would call "system-error". Memory is one resource
> that program needs, but there are others. For another example,
> consider a multi-threaded program. Not all systems need to support
> threads, but lets consider the system that does support them. Suppose
> you have checked that the environment can launch threads, and you want
> to create a second thread from your main thread, so that you have two
> threads. You need two threads. But the operating system may fail to
> create a thread for you because it may have a limit of 16000 threads
> and the one you requested was 16001-st. Its not your fault, but the
> program cannot run further normally. There are much more such
> situations that you might call "system error" or "platform error".

To be honest, the first three failures in my list above are failures
that I believe fit a proper failure handling strategy. About the fourth
with system failures, I'm still very open for discussion.

I had a discussion with a friend yesterday whether std::bad_alloc was my
fault or the system's fault. On one hand I agree with you that it's my
responsibility to make sure there are enough resources. The only two
arguments I have today against that are

1. The Turing machine has infinite memory. (maybe not so good argument)

2. It would be very difficult to demand that I make sure we have enough
resources. It would mean that we move the system failure group to either
the user failure group or the design failure group.
- If we move it to the user failure group, in practice, it would
mean that the /user/ must provide enough system resources to every
function call. I know some large systems actually have that policy but
to many systems that would be a very impractical development environment.
- If we move it to the design failure group it would feel very
awkward since design failures are considered compile-time bugs that can
be remedied, whereas out-of-memory failures are run-time failures (more
discussion about this below).
- If we move it to the expected failure group my whole theory would
break apart (more below).


> In C++11 we have a type system_error for those kind of exceptions.
> However, bad_alloc is not a sub-type thereof due to the funny reason:
> if you do not have memory you cannot even construct the object that
> says "out-of-memory"; hence the special case for this one.
>

So they will have std::system_error? Interesting, I didn't know that.
However, it's not really true in all cases that we cannot even construct
the object that says "out-of-memory". If we have 180 MB left and try to
allocate 200 MB we get a std::bad_alloc, but we still have the 180 MB to
continue with.

Of course there may be situations where we only have 2 bytes left but I
still don't understand why they don't inherit std::bad_alloc from
std::system_error.


> I think I disagree with your claim that throwing bad_alloc is a "force
> majeure" in the noexcept contract. If we use reasoning "any function
> that doesn't throw is noexcept" this is obvious. But even if we use
> reasoning "noexcept is no-fail guarantee", it is still the function
> author's fault that the function threw bad_alloc. Not because you
> didn't swallow it, but because you used operations that required
> allocating memory. Your could have fulfilled the contract by not using
> operations that require memory. If you really needed additional memory
> you signed the contract with the intention of breaking it.
>

Yes, here is where I have troubles defending my thesis. To be extremely
theoretical I would probably say that you are right if it wasn't for the
small fact that we could run out of stack memory as well.

So let's answer the discussion above whether it's my fault or the
system's fault that we run out of memory. If we want to group
std::bad_alloc and stack exhaustion together, then we can't move the
system failure group to the design failure group since the heap and
stack failures will both be seen as run-time errors. Also, we can't move
the system failure group to the expected failure group since then few
functions could be noexcept since most operations use the stack.

However, if we choose to see system failures as an own group as in my
suggestion, then as you say, the most difficult question would be; what
could be considered a system failure? Memory? Threads? Disk? etc.

Again, I'm still not sure about my own ideas here. On one hand I claim
that the Turing machine is single threaded with infinite memory, hence
only std::bad_alloc and stack exhaustion are force majeure. On the other
hand we must also consider new programming concepts like multi-threading
as well as see what's practical programming.


> Anyway, your "vision" resembles more and more the solution that Java
> adapted: exceptions are divided in checked exceptions and unchecked
> exceptions. The former are checked at compile-time the latter are
> never checked (no on writes in Java "throws NullPointerException").
> You can choose which kind you want to use. The rule, I guess, is that
> it is really an error you throw unchecked exception. When exception is
> just use as a "control statement" like if-statement or loop-statement
> that simply takes you from one place in the code to another one, it
> must be a checked exception.
>

I totally agree with this last one. As a matter of fact, I believe
that exceptions should designate failures only, not be "smart" hacks
to break a loop or do some other kind of messaging.

Daniel James

unread,
Apr 22, 2011, 11:03:38 AM4/22/11
to

In article <4d637940-cecd-465e-b0a3-
d14323...@x37g2000prb.googlegroups.com>, Restor wrote:
> ... if we use reasoning "noexcept is no-fail guarantee", it is

> still the function author's fault that the function threw bad_alloc.
> Not because you didn't swallow it, but because you used operations
> that required allocating memory. Your could have fulfilled the
> contract by not using operations that require memory. If you really
> needed additional memory you signed the contract with the intention
> of breaking it.

That seems to me to be the important point here. The noexcept qualifier
is supposed to identify functions that *cannot* throw, and which can
safely be used in a context in which the strong exception safety
guarantee is required.

If the programmer is going to supply such a function he has an
obligation to honour the contract made explicit by the noexcept
qualifier and not do anything that can cause an exception to be thrown
from that function.

I think that it will usually be the case that functions that need to be
made noexcept will be simple things (copy, move, swap) and that if they
cannot be implemented in an exception-safe manner then the programmer
should consider changing his design so as to remove the need to do so
rather than complaining that noexcept is too strict.

If a function identified as noexcept does, in fact, throw then that is a
programmer error (the function should either not have been declared
noexcept, or should have been written in such a way that it could not
throw). Arguing about what should happen at runtime when such a function
throws seems somewhat moot -- what should happen is that someone fixes
the bug.

I do still believe, though, that if such a condition can be detected at
compile time rather than run time the standard should require this to be
done. Nobody likes their application to run into a std::terminate, and
any static check that can help prevent that is most welcome.
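Part of that static checking is already available through the noexcept
operator, which reports at compile time whether an expression can
throw; it can even propagate a callee's guarantee into a wrapper's own
specification. A sketch (the function names are mine; note that
deducing the guarantee through a function *pointer* only works from
C++17 on, when noexcept became part of the function type):

```cpp
// Two functions with different guarantees (names are illustrative).
void cannot_throw() noexcept {}
void may_throw() { /* might throw in principle */ }

// The noexcept operator never evaluates its operand; it only inspects
// the expression's exception specification, so it is a pure static check.
static_assert(noexcept(cannot_throw()), "statically known not to throw");
static_assert(!noexcept(may_throw()), "statically assumed to throw");

// Conditional noexcept: a wrapper can inherit its callee's guarantee.
template <typename F>
void call(F f) noexcept(noexcept(f())) {
    f();
}
```

This doesn't diagnose a throw escaping a noexcept function at compile
time, but it does let callers make static decisions based on the
guarantee, as std::move_if_noexcept does.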

--
Regards,
Daniel.
[Lest there be any confusion I am NOT the Boost maintainer of the same
name]

DeMarcus

unread,
Apr 22, 2011, 3:12:10 PM4/22/11
to
On 2011-04-22 17:03, Daniel James wrote:
>
> In article<4d637940-cecd-465e-b0a3-
> d14323...@x37g2000prb.googlegroups.com>, Restor wrote:
>> ... if we use reasoning "noexcept is no-fail guarantee", it is
>> still the function author's fault that the function threw bad_alloc.
>> Not because you didn't swallow it, but because you used operations
>> that required allocating memory. Your could have fulfilled the
>> contract by not using operations that require memory. If you really
>> needed additional memory you signed the contract with the intention
>> of breaking it.
>
> That seems to me to be the important point here. The noexcept qualifier
> is supposed to identify functions that *cannot* throw, and which can
> safely be used in a context in which the strong exception safety
> guarantee is required.
>
> If the programmer is going to supply such a function he has an
> obligation to honour the contract made explicit by the noexcept
> qualifier and not do anything that can cause an exception to be thrown
> from that function.
>

The idea behind the strong exception safety guarantee is, in the end,
to be able to prevent operating on inconsistent objects. So just
identifying functions that cannot /throw/ is not sufficient; we must
also guarantee that they cannot /fail/. This is important, because
otherwise we cannot prevent operating on inconsistent objects.

> I think that it will usually be the case that functions that need to be
> made noexcept will be simple things (copy, move, swap) and that if they
> cannot be implemented in an exception-safe manner then the programmer
> should consider changing his design so as to remove the need to do so
> rather than complaining that noexcept is too strict.
>

The problem is that stack exhaustion (out-of-stack-memory) is also a
failure, so even the simple function std::swap may fail.

> If a function identified as noexcept does, in fact, throw then that is a
> programmer error (the function should either not have been declared
> noexcept, or should have been written in such a way that it could not
> throw). Arguing about what should happen at runtime when such a function
> throws seems somewhat moot -- what should happen is that someone fixes
> the bug.
>

Here's where we have to choose between the practical and the
theoretical model.

Practical model: We say that stack exhaustion is an exception to the
rule about failure since it's very rare.

Theoretical model: We say stack exhaustion is the same failure type as
std::bad_alloc.

I like your reasoning that noexcept shouldn't be something sprinkled all
over the code but rather focused to certain small operations. If we
follow that and the practical model we have something solid that works
practically.

I'm just keen on exploring the theoretical model, but then we have to
allow stack exhaustion and std::bad_alloc as "force majeure" failures
that will pass the noexcept and throw anyway. My reasoning behind that
idea is that a force majeure failure is something out of control and out
of our design contract, hence we cannot trust the object anymore and
obviously not guarantee its consistency any longer.

> I do still believe, though, that if such a condition can be detected at
> compile time rather than run time the standard should require this to be
> done. Nobody likes their application to run into a std::terminate, and
> any static check that can help prevent that is most welcome.
>

I don't like a forced std::terminate either.


--

Andrzej Krzemieński

unread,
Apr 24, 2011, 1:49:53 AM4/24/11
to

> Just a quick review of my idea (for coming answers):
>
> There are four types of failures (and we use exceptions for them):
>
> 1. Expected failures - Exceptions thrown within our design contract of a
> function. These are the exceptions that a user must be prepared for and
> handle. noexcept means the user (the caller of the function) doesn't
> need to prepare anything.
>
> 2. Usage failures - The user provided invalid arguments to the function,
> hence the /user/ broke the contract. Unchecked std::runtime_error is thrown.
>
> 3. Design failures - I, the programmer of the function, have a bug in my
> code, hence it was /me/ that broke the contract. Unchecked
> std::logic_error is thrown.

First, is there a point in distinguishing types 2 and 3? Regardless of
whether it is the function author or the function user who broke the
contract, in the end we have a "programmer error" situation. In either
case the only good action that can be taken is for the programmer to
fix the code and rebuild the program. This obviously cannot happen "at
run-time" on a customer's computer. At run-time there is no good
action that can be taken. It is not even obvious whether you should
throw any exception at this point.

Second, functions in most cases do not need to be no-fail functions.
But if someone decided to use a no-fail function, it was probably to
provide a commit-or-rollback guarantee in his own function. If so, the
exception that is thrown and not checked breaks the commit-or-rollback
contract and leaves some object in an inconsistent state (the object's
invariant is broken). If you then choose to catch such an unchecked
exception (I wouldn't recommend that) and resume normal program
execution, and the object with the broken invariant is a global, the
program will very likely try to access this object again. If so, you
will, if you are lucky, keep getting the same exception thrown again
and again, or, if you are less lucky, the program will just run,
operating on an invalid object.

> 4. System failures - The system could not provide resources to fulfill
> the contract, hence the /system/ broke the contract which is out of our
> control. Such case would be called force majeur in business contracts.
> Let's use that term here as well.
>

>> As I see it, "out-of-memory" is not the only condition that falls into
>> the category that I would call "system-error". Memory is one resource
>> that program needs, but there are others. For another example,
>> consider a multi-threaded program. Not all systems need to support
>> threads, but lets consider the system that does support them. Suppose
>> you have checked that the environment can launch threads, and you want
>> to create a second thread from your main thread, so that you have two
>> threads. You need two threads. But the operating system may fail to
>> create a thread for you because it may have a limit of 16000 threads
>> and the one you requested was 16001-st. Its not your fault, but the
>> program cannot run further normally. There are much more such
>> situations that you might call "system error" or "platform error".
>

> To be honest, the first three failures in my list above are failures
> that I believe fit a proper failure handling strategy. About the fourth
> with system failures, I'm still very open for discussion.
>
> I had a discussion with a friend yesterday whether std::bad_alloc was my
> fault or the system's fault. On one hand I agree with you that it's my
> responsibility to make sure there are enough resources. The only two
> arguments I have today against that are
>
> 1. The Turing machine has infinite memory. (maybe not so good argument)
>
> 2. It would be very difficult to demand that I make sure we have enough
> resources. It would mean that we move the system failure group to either
> the user failure group or the design failure group.

I find something wrong with this argument. If I understood you
correctly, you divide failures into a number of categories, and here
you say that it is difficult to put "out-of-memory" into either
category. This might indicate that there is something wrong with the
division into categories. (I am not saying that there is something
wrong. I am just saying that this argument is sort of "circular".)

> - If we move it to the user failure group, in practice, it would
> mean that the /user/ must provide enough system resources to every
> function call. I know some large systems actually have that policy but
> to many systems that would be a very impractical development environment.
> - If we move it to the design failure group it would feel very
> awkward since design failures are considered compile-time bugs that can
> be remedied, whereas out-of-memory failures are run-time failures (more
> discussion about this below).
> - If we move it to the expected failure group my whole theory would
> break apart (more below).

Well, I would, in fact, put it into the "expected" failure group (I
mean the "out of /heap/ memory" situation). This agrees with my
intuitive understanding of the word "expected". I do expect this error
to happen, and sometimes it is possible to deal with it: unwind the
stack, which would probably release some memory. Unlike "programmer
errors", which you do not expect: if you did, you would have avoided
them in the first place.

>> In C++11 we have a type system_error for those kind of exceptions.
>> However, bad_alloc is not a sub-type thereof due to the funny reason:
>> if you do not have memory you cannot even construct the object that
>> says "out-of-memory"; hence the special case for this one.
>

> So they will have std::system_error? Interesting, I didn't know that.
> However, it's not really true in all cases that we cannot even construct
> the object that says "out-of-memory". If we have 180 MB left and try to
> allocate 200 MB we get a std::bad_alloc, but we still have the 180 MB to
> continue with.
>
> Of course there may be situations where we only have 2 bytes left but I
> still don't understand why they don't inherit std::bad_alloc from
> std::system_error.
>

>> I think I disagree with your claim that throwing bad_alloc is a "force
>> majeure" in the noexcept contract. If we use reasoning "any function

>> that doesn't throw is noexcept" this is obvious. But even if we use


>> reasoning "noexcept is no-fail guarantee", it is still the function
>> author's fault that the function threw bad_alloc. Not because you
>> didn't swallow it, but because you used operations that required
>> allocating memory. Your could have fulfilled the contract by not using
>> operations that require memory. If you really needed additional memory
>> you signed the contract with the intention of breaking it.
>

> Yes, here is where I have troubles defending my thesis. To be extremely
> theoretical I would probably say that you are right if it wasn't for the
> small fact that we could run out of stack memory as well.
>
> So let's answer the discussion above whether it's my fault or the
> system's fault that we run out of memory. If we want to group

> std::bad_alloc and stack exhaustion together, ..

Let's not. Let's treat "out-of-heap-memory" and "out-of-stack-memory"
as two different things. This makes the situation clearer. We can
consider "out-of-heap-memory" an expected error. For
"out-of-stack-memory" we still have the problem that you describe.

> then we can't move the
> system failure group to the design failure group since the heap and
> stack failures will both be seen as run-time errors. Also, we can't move
> the system failure group to the expected failure group since then few
> functions could be noexcept since most operations use the stack.
>
> However, if we choose to see system failures as an own group as in my
> suggestion, then as you say, the most difficult question would be; what
> could be considered a system failure? Memory? Threads? Disk? etc.
>
> Again, I'm still not sure about my own ideas here. On one hand I claim
> that the Turing machine is single threaded with infinite memory, hence
> only std::bad_alloc and stack exhaustion are force majeure. On the other
> hand we must also consider new programming concepts like multi-threading
> as well as see what's practical programming.


---------------------

>> I think that it will usually be the case that functions that need to be
>> made noexcept will be simple things (copy, move, swap) and that if they
>> cannot be implemented in an exception-safe manner then the programmer
>> should consider changing his design so as to remove the need to do so
>> rather than complaining that noexcept is too strict.
>
> The problem is that stack exhaustion (out-of-stack-memory) is also a
> failure, so even the simple function std::swap may fail.

That is a valid point in general; however, for swap there is a way to
implement it without allocating any stack memory for a temporary.
(Note that the XOR trick only works on integer types, not on pointers
directly, and it zeroes both operands if they alias.)

void swap( unsigned& x, unsigned& y ) { // no temporary variable
    x ^= y;
    y ^= x;
    x ^= y;
}

swap for more elaborate types can be implemented in terms of pointer
swaps. I still find your argument valid. I just include this example
as an interesting fact.
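The "swap in terms of pointer swaps" idea might look like this: a
pimpl-style class (the layout and names are mine, for illustration)
whose swap touches nothing but one pointer, so it can honestly be
noexcept no matter how elaborate the state behind it is.

```cpp
#include <string>
#include <vector>

// An "elaborate" type whose entire state lives behind one pointer, so
// swapping two objects is just swapping two pointers: no allocation,
// no possible failure.
class Widget {
public:
    Widget() : impl_(new Impl) {}
    ~Widget() { delete impl_; }
    Widget(const Widget&) = delete;             // copying omitted for brevity
    Widget& operator=(const Widget&) = delete;

    void set_name(const std::string& n) { impl_->name = n; }
    const std::string& name() const { return impl_->name; }

    // Plain pointer assignments only, so noexcept is honest here.
    void swap(Widget& other) noexcept {
        Impl* tmp = impl_;
        impl_ = other.impl_;
        other.impl_ = tmp;
    }

private:
    struct Impl {
        std::string name;
        std::vector<int> data;
    };
    Impl* impl_;
};
```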

>> If a function identified as noexcept does, in fact, throw then that is a
>> programmer error (the function should either not have been declared
>> noexcept, or should have been written in such a way that it could not
>> throw). Arguing about what should happen at runtime when such a function
>> throws seems somewhat moot -- what should happen is that someone fixes
>> the bug.
>
> Here's where we have to choose between the practical- and the
> theoretical model.
>
> Practical model: We say that stack exhaustion is an exception to the
> rule about failure since it's very rare.
>
> Theoretical model: We say stack exhaustion is the same failure type as
> std::bad_alloc.
>
> I like your reasoning that noexcept shouldn't be something sprinkled all
> over the code but rather focused to certain small operations. If we
> follow that and the practical model we have something solid that works
> practically.
>
> I'm just keen on exploring the theoretical model, but then we have to
> allow stack exhaustion and std::bad_alloc as "force majeure" failures
> that will pass the noexcept and throw anyway. My reasoning behind that
> idea is that a force majeure failure is something out of control and out
> of our design contract, hence we cannot trust the object anymore and
> obviously not guarantee its consistency any longer.

But since we could not trust an inconsistent object, and since this
object might be a global, by letting such an exception pass we risk
that someone will catch it, somehow handle it, and resume normal
program flow with an inconsistent global object. This problem would
not occur if we could make sure no one catches an "out-of-memory"
exception in such a case, but that is no different from calling
std::terminate in the first place.

Regards,
&rzej

Alexander Terekhov

unread,
Apr 24, 2011, 1:49:03 AM4/24/11
to

DeMarcus wrote:
[...]


> The problem is that stack exhaustion (out-of-stack-memory) is also a
> failure, so even the simple function std::swap may fail.

Exactly! And the runtime behaviour should be the same as with

int f() { // implicit nothrow/noexcept
    throw 0;
}

No?

C'mon, a noexcept function shall call abort() (C++ terminate()) on
throw / stack exhaustion.

regards,
alexander.

Daniel James

unread,
Apr 24, 2011, 2:01:42 AM4/24/11
to

In article <4db1af07$0$313$1472...@news.sunsite.dk>, DeMarcus wrote:
> ... just identifying functions that cannot /throw/ is not sufficient;
> we must also guarantee that they cannot /fail/ ...

That is *exactly* the point. We should do all we can to ensure that they
cannot result in a call to std::terminate (which is surely a form of
failure) ... and the best way to do that is to check statically.

> The problem is that stack exhaustion (out-of-stack-memory) is also a
> failure, so even the simple function std::swap may fail.

That is certainly *a* problem ... but not one that we can do much about,
in general. If the stack is *really* exhausted even calling
std::terminate may fail ...

> Theoretical model: We say stack exhaustion is the same failure type as
> std::bad_alloc.

Yes ... though it isn't, really. C++ can achieve quite a lot without a
usable heap, but if you run out of stack you're in real trouble ...
unless you can (say) switch to a separate error-handling stack (reserved
in advance) while you figure out what to do next.

My pragmatic approach (and I hear alarm bells ring when I think of an
approach that is merely pragmatic rather than right) is to say that the
response to the std::bad_alloc problem should be to say that noexcept
functions should not allocate (or, possibly, should handle the
std::bad_alloc and fail in some other way -- but as you point out the
real meaning of noexcept is "can't fail"), but that stack overflow should
be assumed to be impossible and allowed to crash the app.

After all, on most general purpose OSes the stack is effectively limited
by the extent of virtual memory, and if you really run out of that the OS
will kill your app unceremoniously anyway (if you're lucky). On other
platforms (e.g. embedded systems) stack overflow will probably be handled
in some platform-specific way that is outside the scope of the language
spec.

... but perhaps that's where your whole "force majeure" argument was
going?

--
Regards,
Daniel.
[Lest there be any confusion I am NOT the Boost maintainer of the same
name]

Alexander Terekhov

unread,
Apr 24, 2011, 2:01:59 AM4/24/11
to

Daniel James wrote:
[...]


> I do still believe, though, that if such a condition can be
> detected at compile time rather than run time the standard
> should require this to be done. Nobody likes their application
> to run into a std::terminate, and any static check that can
> help prevent that is most welcome.

I think that everybody agrees that

int main() { // implicit nothrow/noexcept
    throw 0;
}

deserves a warning. No?

regards,
alexander.


--

DeMarcus

unread,
Apr 24, 2011, 12:23:08 PM4/24/11
to

On 2011-04-24 07:49, Alexander Terekhov wrote:
>
>
> DeMarcus wrote:
> [...]
>> The problem is that stack exhaustion (out-of-stack-memory) is also a
>> failure, so even the simple function std::swap may fail.
>
> Exactly! And the runtime behaviour should be the same as with
>
> int f() { // implicit nothrow/noexcept
> throw 0;
> }
>
> No?
>

I'm not sure I understand what you mean by 'implicit nothrow/noexcept', but I'll answer as if you had written something like

int f() noexcept
{
    throw 0;
}

> C'mon, a noexcept function shall call abort() (C++ terminate()) on
> throw / stack exhaustion.
>

I know that your mentioned kind of behavior would fit many failure handling strategies. However, it doesn't fit everyone's strategies. Have a look at for instance N3248.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf


What /would/ fit everyone, though, is something similar to your suggestion but slightly changed: instead of calling abort when a noexcept function throws, it calls an atnoexcept() handler. The default for the handler is to call terminate(), but anyone could change it to do whatever they want, for example

void myNoexceptHandler()
{
    // Always terminate except when catching MyAssertException.
    try
    {
        throw;  // rethrow the currently active exception
    }
    catch( MyAssertException& )
    {
        throw;  // let assertion exceptions propagate to the test machinery
    }
    catch( ... )
    {
        // any other exception falls through to terminate
    }

    std::terminate();
}

DeMarcus

unread,
Apr 25, 2011, 8:09:08 AM4/25/11
to

On 2011-04-24 08:01, Daniel James wrote:
>
> In article<4db1af07$0$313$1472...@news.sunsite.dk>, DeMarcus wrote:
>> ... just identifying functions that cannot /throw/ is not sufficient;
>> we must also guarantee that they cannot /fail/ ...
>
> That is *exactly* the point. We should do all we can to ensure that they
> cannot result in a call to std::terminate (which is surely a form of
> failure) ... and the best way to do that is to check statically.
>

I would also like to see all expected exceptions be checked statically.

However, I also find it quite important to meet the unit test situation
described in N3248.

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2011/n3248.pdf

Therefore I would like to see some unchecked_error that would slip
through the static checking.

The current C++0x draft is the worst of both worlds: it doesn't check
anything, and it terminates when an exception is thrown. This means two
things.

1. Our failure handling design is not checked at all which may introduce
more bugs than it prevents.
2. We can't use the opportunity to throw unit test exceptions since we
will terminate.

I would at least like to see that they changed the standard so instead
of calling terminate in case of an exception, it calls some atnoexcept
handler that we can set ourselves. The default for the handler could be
to terminate but if we set our own custom handler it should unwind the
stack and be allowed to rethrow exceptions. This way we could at least
eliminate the problem number 2 above.


>> The problem is that stack exhaustion (out-of-stack-memory) is also a
>> failure, so even the simple function std::swap may fail.
>
> That is certainly *a* problem ... but not one that we can do much about,
> in general. If the stack is *really* exhausted even calling
> std::terminate may fail ...
>

I agree. I was just keen on seeing a consistent theory that would also
help us use the heap in noexcept functions; something that still seems
to trouble the committee. Right now we don't have any guidelines at
all. It's more like: you can use the heap in noexcept functions, but
it will haunt you later when your customers get crashes and you
realize your design has flaws.


>> Theoretical model: We say stack exhaustion is the same failure type as
>> std::bad_alloc.
>
> Yes ... though it isn't, really. C++ can achieve quite a lot without a
> usable heap, but if you run out of stack you're in real trouble ...
> unless you can (say) switch to a separate error-handling stack (reserved
> in advance) while you figure out what to do next.
>
> My pragmatic approach (and I hear alarm bells ring when I think of an
> approach that is merely pragmatic rather than right) is to say that the
> response to the std::bad_alloc problem should be to say that noexcept
> functions should not allocate (or, possibly, should handle the
> std::bad_alloc and fail in some other way -- but as you point out the
> real meaning of noexcept is "can't fail"), but that stack overflow should
> be assumed to be impossible and allowed to crash the app.
>

Yes, this is what I mean by the /practical model/ that would work. The
current problem with the practical model, though, is twofold, as stated
above:

1. It's not checked statically, as it should be -- and it could be, as
long as we don't use the heap.
2. Once we have applied static checking, we must find a way to allow
unit tests to slip assertion exceptions through.

Because of these two issues I presented the /theoretical model/ as an
alternative.

> After all, on most general purpose OSes the stack is effectively limited
> by the extent of virtual memory, and if you really run out of that the OS
> will kill your app unceremoniously anyway (if you're lucky). On other
> platforms (e.g. embedded systems) stack overflow will probably be handled
> in some platform-specific way that is outside the scope of the language
> spec.
>
> .. but perhaps that's where your whole "force majeur" argument was
> going?
>

Yes. My point wasn't really to show a way to survive a stack exhaustion,
it was rather a way to present a uniform theory to move std::bad_alloc
away from the expected failures' category to allow for using the heap in
noexcept functions.


--

DeMarcus

unread,
Apr 25, 2011, 8:56:34 AM4/25/11
to

On 2011-04-24 08:01, Alexander Terekhov wrote:
>
>
> Daniel James wrote:
> [...]
>> I do still believe, though, that if such a condition can be
>> detected at compile time rather than run time the standard
>> should require this to be done. Nobody likes their application
>> to run into a std::terminate, and any static check that can
>> help prevent that is most welcome.
>
> I think that everybody agrees that
>
> int main() { // implicit nothrow/noexcept
> throw 0;
> }
>
> deserves a warning. No?
>

The ideal situation, according to me, would be the following.

1. Introduce a new exception, call it std::unchecked_error (derived like this: std::exception -> std::unchecked_error).

2. Derive std::bad_alloc from the C++0x std::system_error, i.e. std::exception -> std::runtime_error -> std::system_error -> std::bad_alloc. See
http://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-api-4.5/a00650.html

3. Put compile-time static checking to noexcept functions, but let exceptions derived from std::unchecked_error slip through.

4. Let it be implementation defined whether std::runtime_error and std::logic_error will also slip through the static check, e.g. introduce compiler flags --unchecked-runtime-error and --unchecked-logic-error.

5. When a std::unchecked_error, std::runtime_error or std::logic_error is thrown in run-time, call a handler, let's call it std::atnoexcept(). The default behavior for the handler is to terminate without stack unwinding. If a custom handler is provided it should be able to throw or rethrow std::unchecked_error (and possibly std::runtime_error and std::logic_error). It should also unwind the stack just before the throw or rethrow. If no throw or rethrow is done in the handler, terminate() is called without unwinding the stack.


Without comparing the above suggestion with C++03, Java, Bruce Eckel, or whatever, just consider my idea and say whether it would fit your way of programming. Or rather: whether it would inhibit your way of programming.

Andrzej Krzemieński

unread,
Apr 26, 2011, 8:27:25 PM4/26/11
to

> The ideal situation, according to me, would be the following.
>
> 1. Introduce a new exception, call it std::unchecked_error (derived like this: std::exception -> std::unchecked_error).
>
> 2. Derive std::bad_alloc from the C++0x std::system_error, i.e. std::exception -> std::runtime_error -> std::system_error -> std::bad_alloc. See http://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-api-4.5/a00650.html

>
> 3. Put compile-time static checking to noexcept functions, but let exceptions derived from std::unchecked_error slip through.
>
> 4. Let it be implementation defined whether std::runtime_error and std::logic_error will also slip through the static check, e.g. introduce compiler flags --unchecked-runtime-error and --unchecked-logic-error.
>
> 5. When a std::unchecked_error, std::runtime_error or std::logic_error is thrown in run-time, call a handler, let's call it std::atnoexcept(). The default behavior for the handler is to terminate without stack unwinding. If a custom handler is provided it should be able to throw or rethrow std::unchecked_error (and possibly std::runtime_error and std::logic_error). It should also unwind the stack just before the throw or rethrow. If no throw or rethrow is done in the handler, terminate() is called without unwinding the stack.
>
> Without comparing above suggestion with C++03, Java, Bruce Eckel or whatever, just consider my idea and say if it would fit your way of programming. Or rather; if it would inhibit your way of programming.


Hi, I am still not sure what it buys you. Do you see 'noexcept' as a
no-fail function indicator (I believe you did), or as a way of
indicating whether functions throw in general, something like 'const'
or 'volatile' that gets propagated along the function dependency tree?
If it is the former, I fail to see how what you propose is better than
C++11.

Currently, when I need to write a commit-or-rollback function, I must
use a no-fail function. The compiler doesn't help me with that: I must
be cautious to write my no-fail function correctly. In C++11 the
compiler may additionally issue a compile-time error if I get something
wrong (example at the end of my post), warn me if I throw from no-fail
functions, and it can optimize based on the noexcept flag. So this is
definitely an improvement over C++03 without any loss.

How does your proposal add value to that model? What can
std::atnoexcept() do apart from calling std::terminate? Throw another
exception? But if so, it will spoil my commit-or-rollback guarantee.
Commit-or-rollback is supposed to always work, not only for a subset
of exceptions. Applications cannot run if their objects are left in an
indeterminate state. Calling some exceptions "force majeure" does not
change that. In fact you could call program termination a "force
majeure". Note that std::terminate is also an exception handler, and
can be overridden to do whatever you want (almost -- you cannot
resume), for example generating a program dump or scheduling a restart.

Your example with running out of stack memory is a good one, but I do
not think it should be generalized into letting any error spoil my
commit-or-rollback guarantee.

--------------------------------------------

This is an example of how I can use noexcept to help me detect an
error:

template< typename T >
void safe_swap( T& x, T& y )
{
    using boost::swap;
    static_assert( noexcept(swap(x, y)), "swap may throw" );
    swap(x, y);
}

safe_swap will fail to compile if swap for T may throw.

Regards,
&rzej

DeMarcus

unread,
Apr 27, 2011, 9:18:04 PM4/27/11
to

On 2011-04-24 07:49, Andrzej Krzemieński wrote:
>
>> Just a quick review of my idea (for coming answers):
>>
>> There are four types of failures (and we use exceptions for them):
>>
>> 1. Expected failures - Exceptions thrown within our design contract of a
>> function. These are the exceptions that a user must be prepared for and
>> handle. noexcept means the user (the caller of the function) doesn't
>> need to prepare anything.
>>
>> 2. Usage failures - The user provided invalid arguments to the function,
>> hence the /user/ broke the contract. Unchecked std::runtime_error is thrown.
>>
>> 3. Design failures - I, the programmer of the function, have a bug in my
>> code, hence it was /me/ that broke the contract. Unchecked
>> std::logic_error is thrown.
>
> First, is there a point in distinguishing types 2 and 3? Regardless of
> whether it is the function author or the function user who broke the
> contract, in the end we have a "programmer error" situation. In either
> case the only good action that can be taken is for the programmer to
> fix the code and rebuild the program. This obviously cannot happen at
> run-time on the customer's computer. At run-time there is no good
> action that can be taken. It is not even obvious if you should throw
> any exception at this point.
>

There is one good example where it's necessary to distinguish between types 2 and 3. When you do unit tests you have at least three types of tests you should do:

A. Function tests - test whether a unit behaves correctly when used correctly.

B. Error tests - test whether a unit behaves correctly when used correctly but fails the operation with an expected failure, e.g. FileNotFoundException.

C. Precondition tests - test whether a unit behaves correctly when used /incorrectly/.

So when we run our unit tests we want to distinguish between type 2, usage failures, which will pass our tests with a green OK flag since our unit caught the incorrect arguments we provided, and type 3, design failures, which indicate we found an error in our unit.


But you are correct; in the application I still haven't found a very convincing argument for distinguishing types 2 and 3, except that I would like to log them differently.


> Second, functions in most cases do not need to be no-fail functions.
> But if someone decided to use a no-fail function, it was probably to
> provide a commit-or-rollback guarantee in his own function. If so, the
> exception that is thrown and not checked breaks the commit-or-rollback
> contract and leaves some object in an inconsistent state (the object's
> invariant is broken). If you then choose to catch such an unchecked
> exception (I wouldn't recommend that) and resume normal program
> execution, and the object with the broken invariant is a global, the
> program will very likely try to access this object again. If so, you
> will, if you are lucky, keep getting the same exception thrown again
> and again, or, if you are less lucky, the program will just run,
> operating on an invalid object.
>

Good point!

I've had a vision that one should be able to delete the object, create a new one, and continue. But as you say, it's impossible to tell how many layers the exception has propagated, and it could even have been thrown from a global or injected object.

Why I've been so keen on the fail-and-resume idea is because I would like to find a failure fire-wall strategy; a way for a framework to be able to deal with buggy plug-ins without being affected.

Yes, this one is tricky to discuss and I'm very uncertain about what would be the best solution.

The out-of-memory failure wouldn't go into the usage failure category unless there is a precondition saying one has to provide a certain amount of memory. It doesn't really go into the design failure category either, since a design failure is something static that can be corrected at compile time.

Then we only have two options left: either we say the out-of-memory failure goes into the expected failures category, or we give it a separate, system-level "force majeure" failure category.

The only reason I would not put it in the expected failure category is that it would hinder us from using the heap in noexcept functions. In summary we have two options:

The practical model: The out-of-memory failure goes into the expected failure category. Only use noexcept with functions that do not use the heap.

The theoretical model: The out-of-memory failure gets its own failure category. Now the heap can be used in noexcept functions.

I know that the practical model will probably work for most cases, but I'm keen on the theoretical model for a couple of reasons:

1. The practical model is what the current C++0x draft supports. However, the committee still has problems with it. See Stroustrup's example on page 2 of N3202.
http://www2.research.att.com/~bs/N3202-noexcept.pdf
The theoretical model would solve his problem.

2. I have a vision that roll-back and consistency could be applied to even bigger, more advanced objects.

3. We don't have to make stack exhaustion an exception to the rule.


The biggest problem is that I haven't found a way to fit the theoretical model into the C++0x standard without forcing it onto people who don't want to use it. Maybe we could just change the standard to call some atnoexcept() handler instead of terminate(). Then the theoretical model could provide a custom handler that rethrows exceptions from usage, design, and system failures.


>> - If we move it to the user failure group, in practice, it would
>> mean that the /user/ must provide enough system resources to every
>> function call. I know some large systems actually have that policy but
>> to many systems that would be a very impractical development environment.
>> - If we move it to the design failure group it would feel very
>> awkward since design failures are considered compile-time bugs that can
>> be remedied, whereas out-of-memory failures are run-time failures (more
>> discussion about this below).
>> - If we move it to the expected failure group my whole theory would
>> break apart (more below).
>
> Well, I would, in fact, put it into the "expected" failure group (I
> mean the "out of /heap/ memory" situation). This agrees with my
> intuitive understanding of the word "expected". I do expect this error
> to happen, and sometimes it is possible to deal with it: unwind the
> stack, which would probably release some of the memory. Unlike
> "programmer errors", which you do not expect: if you did, you would
> have avoided them in the first place.
>

I agree, I just see that in the industry there are two types of memory allocations.

1. Memory allocations where you just expect the memory to exist, e.g. when creating a shared_ptr or a small vector to do some quick calculations on.

2. Memory allocations of big sized objects, e.g. reading in an image from file.

Usually the programmer doesn't bother checking for std::bad_alloc for the first type, but does check when it comes to bigger objects.

It's true that both can be seen as expected failures, but my idea was to introduce the system failure category to allow noexcept functions to use the heap, especially for situations like number 1 above.

With the current C++0x draft it's important to understand that using the heap in a noexcept function leads to termination for expected failures, and we will not even get a warning when using the heap. I'm very worried about the consequences for the software design.

> void swap( int*& x, int*& y ) { // refs to pointers

That is also a good point!

I still think, though, that the standard should be changed from calling terminate() to calling some atnoexcept() handler that is allowed to translate and rethrow with the stack unwound. The default for the handler would be to terminate, but setting one's own handler would allow for experimenting with alternative failure handling strategies and unit test machinery.

DeMarcus

unread,
Apr 27, 2011, 9:17:32 PM4/27/11
to

On 2011-04-27 02:27, Andrzej Krzemieński wrote:
>
>> The ideal situation, according to me, would be the following.
>>
>> 1. Introduce a new exception, call it std::unchecked_error (derived like this: std::exception -> std::unchecked_error).
>>
>> 2. Derive std::bad_alloc from the C++0x std::system_error, i.e. std::exception -> std::runtime_error -> std::system_error -> std::bad_alloc. See http://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-api-4.5/a00650.html
>>
>> 3. Put compile-time static checking to noexcept functions, but let exceptions derived from std::unchecked_error slip through.
>>
>> 4. Let it be implementation defined whether std::runtime_error and std::logic_error will also slip through the static check, e.g. introduce compiler flags --unchecked-runtime-error and --unchecked-logic-error.
>>
>> 5. When a std::unchecked_error, std::runtime_error or std::logic_error is thrown in run-time, call a handler, let's call it std::atnoexcept(). The default behavior for the handler is to terminate without stack unwinding. If a custom handler is provided it should be able to throw or rethrow std::unchecked_error (and possibly std::runtime_error and std::logic_error). It should also unwind the stack just before the throw or rethrow. If no throw or rethrow is done in the handler, terminate() is called without unwinding the stack.
>>
>> Without comparing above suggestion with C++03, Java, Bruce Eckel or whatever, just consider my idea and say if it would fit your way of programming. Or rather; if it would inhibit your way of programming.
>
>
> Hi, I am still not sure what it buys you. Do you see 'noexcept' as a
> no-fail function indicator (I believe you did), or as a way of
> indicating whether functions throw in general, something like 'const'
> or 'volatile' that gets propagated along the function dependency tree?
> If it is the former, I fail to see how what you propose is better than
> C++11.
>

My most important idea is that there's a difference between expected failures and non-expected failures, both notified with exceptions where possible. An expected failure is typically FileNotFoundException, and a non-expected failure is typically a failing assert().

I see noexcept as a no-fail function indicator in the sense it will never have any expected failures, but the function may have non-expected failures.


> Currently, when I need to write a commit-or-rollback function, I must
> use a no-fail function. The compiler doesn't help me with that: I must
> be cautious to write my no-fail function correctly. In C++11 the
> compiler may additionally issue a compile-time error if I get something
> wrong (example at the end of my post), warn me if I throw from no-fail
> functions, and it can optimize based on the noexcept flag. So this is
> definitely an improvement over C++03 without any loss.
>

Agree. (Although I still don't understand what optimization we will get using noexcept. Some people say that we may omit the stack unwinding code, but that seems strange: all functions and scopes must unwind the stack. Could anyone explain this optimization?)


> How does your proposal add value to that model? What can
> std::atnoexcept() do apart from calling std::terminate? Throw another
> exception? But if so, it will spoil my commit-or-rollback guarantee.
> Commit-or-rollback is supposed to always work, not only for a subset
> of exceptions.

The idea is that a noexcept function can only throw non-expected failures. At first glance it should go to std::terminate, yes, but if we want to incorporate this into our unit test machinery we would like to handle the non-expected failure, report it and continue.

That was my basic answer, and I should actually stop here not to make any confusion.

However, if you're interested in my (possibly confusing) advanced answer, I had the following idea. First I thought exactly like you; no exceptions shall slip out of a noexcept function to guarantee commit-or-rollback.

Then there was this big discussion about all functions that allocate on the heap and that they must be able to be noexcept as well.

http://www2.research.att.com/~bs/N3202-noexcept.pdf
http://www.codeguru.com/cpp/misc/print.php/c18357/An-Interview-with-C-Creator-Bjarne-Stroustrup.htm

The strange thing is that the discussion doesn't take into consideration what it means when we're out of memory but instead claim that it's a fatal design error that goes to std::terminate. It's very contradictory to first say that we must allow noexcept functions to allocate on the heap and then claim it's a fatal design error.

Stroustrup is a great man and I know he wants the best for the community, so I started to look at what could be done about this contradiction, and I came to the following conclusion.

For those who want to be able to allocate on the heap in noexcept functions, we could allow those developers to claim std::bad_alloc is a force majeure failure and let it slip through. That's not possible with std::terminate, but it would be with a custom-made std::atnoexcept handler.

It's true that we now violate the commit-or-rollback guarantee, but see it like this: the strong exception safety guarantee is commit-or-rollback. The basic exception safety guarantee does not guarantee commit-or-rollback, but if you delete the object after a failure you can resume execution.

The force majeure failure discussed here would be a /run-time/ degradation of your strong exception safety guarantee to the basic exception safety guarantee. If we have a proper mechanism to handle that, we could see it as a severe failure, but not severe enough to terminate.

Whether the force majeure would work in practice is yet to be found out, but it's one solution to the fatal design error contradiction for those who want to allocate on the heap in noexcept functions.


> Applications cannot run if their objects are left in an
> indeterminate state.

All objects with the basic exception safety guarantee are left in an indeterminate but valid state after a failure. With proper handling (most probably deletion of the object) we can continue execution.


> Calling some exceptions "force majeure" does not
> change that. In fact you could call program termination a "force
> majeure". Note that std::terminate is also an exception handler, and
> can be overridden to do whatever you want (almost -- you cannot
> resume), for example generating program dump, or scheduling a restart.
>

The idea is that the std::atnoexcept shall have termination as default behavior but the ability to be overridden by a custom handler in for instance a unit test machinery or for developers that want to explore using the heap in noexcept functions.


> Your example with running out of stack memory is a good one, but I do
> not think it should be generalized into letting any error spoil my
> commit-or-rollback guarantee.
>
> --------------------------------------------
>
> This is an example of how I can use noexcept to help me detect an
> error:
>
> template< typename T>
> void safe_swap( T& x, T& y )
> {
> using boost::swap;
> static_assert( noexcept(swap(x, y)), "swap may throw" );
> swap(x, y);
> }
>
> safe_swap will fail to compile if swap for T may throw.
>

Nice example! This will be very useful in case there will be no static check whatsoever on noexcept functions.

Andrzej Krzemieński

unread,
Apr 29, 2011, 8:18:44 PM4/29/11
to

> My most important idea is that there's a difference between expected failures and non-expected failures, both notified with exceptions where possible. An expected failure is typically FileNotFoundException, and a non-expected failure is typically a failing assert().

Thanks, that makes it clear what an "expected" and an "unexpected"
error is. Now, going back to the classification of errors you provided
earlier: it is different from what I was taught about dealing with
errors over the last couple of years:
> 1. Expected failures - Exceptions thrown within our design contract of a function. These are the exceptions that a user must be prepared for and handle. noexcept means the user (the caller of the function) doesn't need to prepare anything.

This is what exceptions derived from std::runtime_error are for,
according to the classics. Expected failures would also include system
failures (you do expect them). It also includes std::bad_alloc.

> 2. Usage failures - The user provided invalid arguments to the function, hence the /user/ broke the contract. Unchecked std::runtime_error is thrown.

This is what std::logic_error is for, although methods other than
exceptions can be considered.

> 3. Design failures - I, the programmer of the function, have a bug in my code, hence it was /me/ that broke the contract. Unchecked std::logic_error is thrown.

It is usually recommended not to throw an exception in those cases,
but either to launch the debugger in debug builds, or to generate a
memory dump and abort the program. You may not be able to afford
aborting the program, but the alternative is the risk of running a
program that is in an invalid state and works counter to its design.
Note that with tools like BOOST_ASSERT you can configure the behavior
of the program upon assertion failure, and this can be reporting a
test failure in a unit test.

> 4. System failures - The system could not provide resources to fulfill the contract, hence the /system/ broke the contract, which is out of our control. Such a case would be called force majeure in business contracts. Let's use that term here as well.

This would be the same as 1.

> I see noexcept as a no-fail function indicator in the sense it will never have any expected failures, but the function may have non-expected failures.

Ok, this clarifies your idea.

>> Currently, when I need to write a commit-or-rollback function, I must
>> use a no-fail function. Compiler doesn't help me with that: I must be
>> cautious to write my no-fail function correctly. In C++11 compiler may
>> additionally issue a compile-time error if I get something wrong
>> (example in the end of my post), warn me if I throw from no-fail
>> functions and it can optimize based on noexcept flag. So this is
>> definitely an improvement over C++03 without any loss (compared to
>> C++03).
>
> Agree. (Although I still don't understand what optimization we will get using noexcept. Some people say that we may omit the stack unwinding, but it seems strange; all functions and scopes must unwind the stack. Could anyone explain this optimization?)

When I said "optimization" I did not mean removing some exception
handling code. What I meant was the optimization that led to the
introduction of "noexcept": if the compiler can see that the move
assignment and move constructor of your class X are "no-fail", it can
safely use move operations on vector<X> or map<X> (this is the
optimization); otherwise it has to (slowly, but safely) copy vector<X>
or map<X>. The whole purpose of noexcept is to make the function
std::move_if_noexcept work.

>> How does your proposal add value to that model? What can
>> std::atnoexcept() do apart from calling std::terminate? Throw another
>> exception? But if so, it will spoil my commit-or-rollback guarantee?
>> commit-or-rollback is supposed to always work, not only for a subset
>> of exceptions.
>
> The idea is that a noexcept function can only throw non-expected failures. At first glance it should go to std::terminate, yes, but if we want to incorporate this into our unit test machinery we would like to handle the non-expected failure, report it and continue.

Ok, I get the point. Still, in order to uniformly report an error in a
unit test framework, you do not need to throw exceptions for
programmer errors. You can register a common assertion failure handler
like the one for BOOST_ASSERT. In case you need to continue with other
tests you can use std::terminate to resume suite execution from the
next test (you will need some tricks with globals for that); while
this is a hack, unit testing alone may be too small an application to
deserve a language change. And unit-test frameworks already apply a
lot of magic to make the tests look cool and short. This additional
one would not surprise anyone that much.

>
> That was my basic answer, and I should actually stop here not to make any confusion.
>
> However, if you're interested in my (possibly confusing) advanced answer, I had the following idea. First I thought exactly like you; no exceptions shall slip out of a noexcept function to guarantee commit-or-rollback.
>
> Then there was this big discussion about all functions that allocate on the heap and that they must be able to be noexcept as well.
>

> http://www2.research.att.com/~bs/N3202-noexcept.pdf
> http://www.codeguru.com/cpp/misc/print.php/c18357/An-Interview-with-C...


>
> The strange thing is that the discussion doesn't take into consideration what it means when we're out of memory but instead claim that it's a fatal design error that goes to std::terminate. It's very contradictory to first say that we must allow noexcept functions to allocate on the heap and then claim it's a fatal design error.


While I could not afford to read those now, I just used the "find"
tool to look for the words "noexcept", "memory" and "alloc", but
couldn't find anything that says heap allocation is allowed in
noexcept functions. Could you give me a real-life (or realistic
looking) example of a no-fail function that would need to allocate
memory? The no-fail functions I have ever had to write never did that,
and it feels strange to do that, at least to me.

>
> Stroustrup is a great man and I know he wants the community all good so I started to look at what could be done about this contradiction and I came to the following conclusion.
>
> For those who wants to be able to allocate on the heap in noexcept functions we could allow those developers to claim std::bad_alloc is a force majeure failure and let that slip through. That's not possible with std::terminate but would be with a custom made std::atnoexcept handler.
>
> It's true that we now violate the commit-or-rollback guarantee, but see it like the following. The strong exception safety guarantee is commit-or-rollback. The basic exception safety guarantee does not guarantee commit-or-rollback, but if you delete the object after a failure you can resume execution.
>
> The force majeure failure discussed here would be a /run-time/ degradation of your strong exception safety guarantee to the basic exception safety guarantee. If we have proper mechanism to handle that we could see it as a severe failure, but not severe enough to terminate.

I think that breaking the no-fail guarantee not only breaks the strong
guarantee but may break even the basic guarantee. I use no-fail
functions because I expect a certain computation to succeed, and based
on this assumption I write my code, which I intend to provide the
basic guarantee. Even if I did make my basic guarantee work, I would
need to know which objects to destroy or reset, and I would need to do
it manually, at least for global objects.

Regards,
&rzej
