I have recently discovered that "throw X(...);" terminates the application if it can't allocate memory for the exception being thrown. This happens with GCC. The Standard does not specify what should happen in this situation (see C++11 15.1.p4).

I am trying to discuss this situation with someone who is closer to the committee than me... Is it a defect? The Standard does not specify the behavior here and implementations are free to terminate (and they do). The problem is that this makes it impossible to write a C++ program that survives an out-of-memory situation -- even throwing std::bad_alloc can terminate your app.
On Tuesday, August 15, 2017 at 7:18:16 AM UTC-5, Chris Hallock wrote:
> On Tuesday, August 15, 2017 at 3:33:08 AM UTC-4, Michael Kilburn wrote:
>> I have discovered recently that "throw X(...);" terminates application if it can't allocate memory for exception being thrown. [...]
> This is an implementation-internal memory allocation. What could the Standard possibly say?

A lot of different things... For example: "This allocation failure should result in std::bad_alloc being thrown. Implementation should never fail to throw std::bad_alloc"
It certainly may come as a surprise to those who use exception specifications, but at least I will be able to write an application that doesn't mysteriously die when it runs out of memory.
On Tuesday, August 15, 2017 at 11:18:55 AM UTC-4, Michael Kilburn wrote:
> On Tuesday, August 15, 2017 at 7:18:16 AM UTC-5, Chris Hallock wrote:
>> On Tuesday, August 15, 2017 at 3:33:08 AM UTC-4, Michael Kilburn wrote:
>>> I have discovered recently that "throw X(...);" terminates application if it can't allocate memory for exception being thrown. [...]
>> This is an implementation-internal memory allocation. What could the Standard possibly say?
>
> A lot of different things... For example: "This allocation failure should result in std::bad_alloc being thrown. Implementation should never fail to throw std::bad_alloc"
Well that would change nothing, since "should" only specifies a recommendation ;) The word you're looking for is "must".
That being said, I don't think I would consider this a "defect" in the standard. It seems likely that this is deliberately left as an implementation detail. Basically, it's a quality-of-implementation issue; an implementation can guarantee that it is always able to produce the `std::bad_alloc` exceptions the standard library generates. But it isn't required to do so.
> it certainly may come as a surprise to those who use exception specification, but at least I will be able to write application that doesn't mysteriously die when it runs out of memory
Unless it runs out of memory due to a stack allocation. In which case, you're in undefined behavior land. At least if an exception cannot be allocated, `std::terminate` will be called. With a stack allocation OOM error, you won't even get that.
You should be able to modify the C++ support library's build to enable the buffer
allocation by default.
On Tuesday, 15 August 2017 08:43:36 PDT Michael Kilburn wrote:
> I believe QoI isn't an issue here -- real issue is that standard does not
> allow std::bad_alloc to be thrown in this case (am I wrong?). Therefore
> implementations opt for std::terminate(). If it was allowed -- every
> exception specification would need to be implicitly extended with
> std::bad_alloc -- and I am pretty sure Standard doesn't mention that.
Why would the standard forbid throwing? Where did you find the text that gives
you that impression?
Forget exception specifications, they are deprecated and removed from the
language now. There's only noexcept(true) and noexcept(false).
On Tuesday, August 15, 2017 at 10:28:28 AM UTC-5, Nicol Bolas wrote:
> On Tuesday, August 15, 2017 at 11:18:55 AM UTC-4, Michael Kilburn wrote:
>> On Tuesday, August 15, 2017 at 7:18:16 AM UTC-5, Chris Hallock wrote:
>>> On Tuesday, August 15, 2017 at 3:33:08 AM UTC-4, Michael Kilburn wrote:
>>>> I have discovered recently that "throw X(...);" terminates application if it can't allocate memory for exception being thrown. [...]
>>> This is an implementation-internal memory allocation. What could the Standard possibly say?
>>
>> A lot of different things... For example: "This allocation failure should result in std::bad_alloc being thrown. Implementation should never fail to throw std::bad_alloc"
> Well that would change nothing, since "should" only specifies a recommendation ;) The word you're looking for is "must".

Naturally :-)

> That being said, I don't think I would consider this a "defect" in the standard. It seems likely that this is deliberately left as an implementation detail. Basically, it's a quality-of-implementation issue; an implementation can provide a guarantee that all standard-library-generated `std::bad_alloc`s will be able to be generated. But they aren't required to do so.

I believe QoI isn't an issue here -- the real issue is that the standard does not allow std::bad_alloc to be thrown in this case (am I wrong?).
Therefore implementations opt for std::terminate(). If it were allowed, every exception specification would need to be implicitly extended with std::bad_alloc -- and I am pretty sure the Standard doesn't mention that.
>> it certainly may come as a surprise to those who use exception specification, but at least I will be able to write application that doesn't mysteriously die when it runs out of memory

> Unless it runs out of memory due to a stack allocation. In which case, you're in undefined behavior land. At least if an exception cannot be allocated, `std::terminate` will be called. With a stack allocation OOM error, you won't even get that.

I can control local variables and call depth in my code -- therefore I can avoid running out of stack. I can't avoid out-of-memory, including running out of these emergency exception buffers, especially considering that now I can have many threads and retain (and chain) exceptions indefinitely via std::current_exception.
--
---
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Discussion" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-discussion+unsubscribe@isocpp.org.
To post to this group, send email to std-dis...@isocpp.org.
Visit this group at https://groups.google.com/a/isocpp.org/group/std-discussion/.
>> That being said, I don't think I would consider this a "defect" in the standard. It seems likely that this is deliberately left as an implementation detail. [...]
>
> I believe QoI isn't an issue here -- real issue is that standard does not allow std::bad_alloc to be thrown in this case (am I wrong?).
I'm really confused by this.
As I understood the issue, your concern was that you attempt to call an allocation function, which fails to allocate the requested memory. So the implementation attempts to throw `std::bad_alloc`, as required by the standard, but because throwing exceptions may require dynamic allocations, the implementation may fail to throw that exception.
If this happens, the standard cannot "allow std::bad_alloc to be thrown", because if the implementation could throw it, you wouldn't have this problem.
That's what makes this a QOI issue. The standard does not require that throwing exceptions use dynamic allocations; that's merely a possible implementation. Therefore, a C++ implementation could make sure that the executable contains sufficient static memory to throw a single `std::bad_alloc` exception.

> Therefore implementations opt for std::terminate(). If it was allowed -- every exception specification would need to be implicitly extended with std::bad_alloc -- and I am pretty sure Standard doesn't mention that.
That's Java, not C++. In C++, when we had exception specifications, if nothing was listed, you could throw anything.
> it certainly may come as a surprise to those who use exception specification, but at least I will be able to write application that doesn't mysteriously die when it runs out of memory

>> Unless it runs out of memory due to a stack allocation. In which case, you're in undefined behavior land. At least if an exception cannot be allocated, `std::terminate` will be called. With a stack allocation OOM error, you won't even get that.
>
> I can control local variables and call depth in my code -- therefore I can avoid running out of stack. I can't avoid out-of-memory, including running out of these emergency exception buffers, especially considering that now I can have many threads and retain (and chain) exceptions indefinitely via std::current_exception.
Nonsense. You have just as much control over your memory allocation patterns and what exceptions get thrown as you do over local variable size and call stack depth. If `std::bad_alloc` is thrown, it is your choice how much memory to allocate during the stack unwinding process to reach the site where the exception is caught.
If the issue is whether to do some last chance action or not, catch(...) is probably your friend. When everything fails, fine diagnostics might not cut it, but taking some action such as closing down a nuclear reactor can be interesting.
As was mentioned before, the behavior of the underlying exception handling mechanism, allocating memory or not, is not something the standard imposes. If you are dealing with an open source implementation, you can probably get the control you need.
Out of curiosity, have you seen this presentation on surviving bad_alloc? https://www.youtube.com/watch?v=QIiFsqsb9HM (maybe it doesn't relate to your use-case, but just in case... :) )
2017-08-15 22:14 GMT-04:00 Nicol Bolas <jmck...@gmail.com>:
On Tuesday, August 15, 2017 at 9:33:43 PM UTC-4, Michael Kilburn wrote:
> On Tuesday, August 15, 2017 at 12:41:20 PM UTC-5, Thiago Macieira wrote:
>> Forget exception specifications, they are deprecated and removed from the language now. There's only noexcept(true) and noexcept(false).
>
> Ok, so if you believe this isn't a language defect -- there was a followup question: can you write a C++ program that handles OOM, and how? I know for a fact that I can do it in C. Would really like to be able to do it in C++.
Catch `std::bad_alloc` at the appropriate locations. It's easier than in C.
On Tuesday, August 15, 2017 at 10:02:44 PM UTC-5, Patrice Roy wrote:
> If the issue is whether to do some last chance action or not, catch(...) is probably your friend. When everything fails, fine diagnostics might not cut it, but taking some action such as closing down a nuclear reactor can be interesting.

Once again, this code can terminate in GCC in a low-memory situation before executing the catch block:

void foo()
try
{
    int k = read_user_input();
    if (k <= 0)
        throw std::runtime_error("Value has to be greater than zero");
}
catch (...)
{
    lower_reactor_rods();
}

and according to my reading of the standard that is acceptable behavior. I am looking for answers to these questions:
- why is it like this?
- is it possible to write C++ program (that throws exceptions) that is capable of reliably handling out-of-memory?
On Tuesday, August 15, 2017 at 9:12:36 PM UTC-5, Nicol Bolas wrote:
>> Therefore implementations opt for std::terminate(). If it was allowed -- every exception specification would need to be implicitly extended with std::bad_alloc -- and I am pretty sure Standard doesn't mention that.
>
> That's Java, not C++. In C++, when we had exception specifications, if nothing was listed, you throw anything.

I know, but if the standard mandated throwing bad_alloc in this case, code like this:
On Tuesday, August 15, 2017 at 11:21:20 PM UTC-4, Michael Kilburn wrote:
> On Tuesday, August 15, 2017 at 10:02:44 PM UTC-5, Patrice Roy wrote:
>> If the issue is whether to do some last chance action or not, catch(...) is probably your friend. [...]
>
> Once again, this code can terminate in GCC in low memory situation before executing catch block: [...] and according to my reading of standard it is an acceptable behavior. I am looking for answer to these questions:
> - why is it like this?
Because when the ability to report errors stops working, the ability to report errors has stopped working. ;) Much like if a program issues a wild write that happens to land on `errno`'s storage, you can't rely on the value of `errno` to be consistent with expectations.
This is also why such a scenario calls `std::terminate`; this function exists for dealing with "things got really broken" scenarios. You can register your own termination function, and thus be able to deal with catastrophic conditions.

> - is it possible to write C++ program (that throws exceptions) that is capable of reliably handling out-of-memory?
"Reliably" is of course in the eyes of the beholder. The above code is quite reliable. Indeed, implementations might even guarantee that it works. If you're in a safety critical scenario, you always register a terminate handler.
On a related matter:
On Tuesday, August 15, 2017 at 11:08:30 PM UTC-4, Michael Kilburn wrote:
> On Tuesday, August 15, 2017 at 9:12:36 PM UTC-5, Nicol Bolas wrote:
>>> Therefore implementations opt for std::terminate(). [...]
>>
>> That's Java, not C++. In C++, when we had exception specifications, if nothing was listed, you throw anything.
>
> I know, but if standard mandate throwing bad_alloc in this case code like this: [...]
You've merely restated reason #4,658 as to why C++ doesn't have this feature anymore ;)
On Tuesday, 15 August 2017 22:37:57 PDT Michael Kilburn wrote:
> I don't think this is a good argument -- we operate on assumption that
> hardware delivers promised guarantees. There are multiple places in the
> language where it mandates certain outcomes and disregards "sledgehammer"
> factor. In my (limited) understanding there at least one way to implement
> "throw std::bad_alloc() should never fail" requirement from compiler
> standpoint.
That wouldn't solve your problem.
On Tuesday, 15 August 2017 20:08:30 PDT Michael Kilburn wrote:
> No. My concern is that "throw my_exception();" statement has unspecified
> behavior if exception mechanism fails to allocate memory for my_exception
> object. Most popular compilers seem to simply terminate application.
It's not unspecified behaviour. It's very specified: call std::terminate.
As others have said: when the mechanism used to report failures fails itself,
you get a last-ditch bail-out in the form of std::terminate.
On Tuesday, 15 August 2017 18:33:43 PDT Michael Kilburn wrote:
> Leaving this case unspecified prevents me from writing portable
> code that handles OOM.
You can't portably handle OOM anyway.
On a modern, multitasking operating system with virtual memory, OOM is
signalled by killing some processes. You don't get notified by it. Your memory
allocations succeed, but you may get SIGBUS when you try to access it.
On Tuesday, 15 August 2017 18:03:39 PDT Michael Kilburn wrote:
> On Tuesday, August 15, 2017 at 10:28:08 AM UTC-5, Thiago Macieira wrote:
> > You should be able to modify the C++ support library's build to enable the
> > buffer
> > allocation by default.
>
> I don't see how this will help
How won't it? If you make it work like you want it, then it will.
What part am I missing?
On 16 August 2017 at 10:05, Michael Kilburn <crusad...@gmail.com> wrote:
> I think you confuse my problem with another one. What I am saying is that
> throwing any exception can have unspecified behavior under low memory
> conditions. 'Unspecified' means standard doesn't mandate certain outcome and
> leaves it to implementation. Most popular ones -- call std::terminate. This
> means I can't write a (portable) program that reliably handles out-of-memory
> condition.
Your earlier example runs out of stack space. See how it behaves if
dynamic allocation is
done instead:
https://wandbox.org/permlink/wLk01q0LnzEdFJvS
The standard can't mandate that constructing an exception cannot
consume stack space.
You can't avoid stack exhaustion in a portable manner, because
detecting it and recovering from it
has a cost that some users would not want to pay, and might not even
be possible on some platforms.
>
> Activating buffer allocation doesn't help really -- first of all it is not
> portable, second -- you still end up with limited resource that can run out.
> Instead of trying to play race with environment, I'd rather have a reliable
> way to handle this situation in the code.
>
> Does it make sense?
Your problem has nothing particular to do with exceptions.
On 2017-08-16 21:03, Ville Voutilainen wrote:
>
> There is no such spot. Throwing an exception during unwinding calls
> terminate, but
> failing to allocate memory when an exception is initialized before
> it's thrown does not.
Failing to allocate while constructing an exception is just an ordinary
allocation failure, and would be expected to throw bad_alloc just as it
would in any other circumstances. The fact that it happened inside a
throw expression isn't relevant; execution never got as far as actually
attempting to throw the original exception.
It's not clear to me what, if anything, the standard mandates if
allocation fails while copying the exception in the course of throwing
it - I'm pretty sure this would trigger terminate() but I couldn't find
the specifics in the standard. However, it does mandate (21.8.2) that
all standard exception classes have non-throwing copy constructors and
assignment operators, so this is a problem that could only arise with
custom exceptions. Writing your exceptions so that copying them can't
throw is generally considered good practice.
The standard also requires (21.6.3.1) that all operations on bad_alloc,
including default construction, can't throw.
If you have an exception failure, you can no longer reliably throw exceptions.
On Tuesday, 15 August 2017 23:55:21 PDT Michael Kilburn wrote:
> > > "throw std::bad_alloc() should never fail" requirement from compiler
> > > standpoint.
> >
> > That wouldn't solve your problem.
>
> Why do you think so?
Because you said your problem is that you wanted to throw something else, an
exception class possibly much bigger than std::bad_alloc. So even if the
implementation is required to be able to throw std::bad_alloc, it won't be
required to always be able to throw your type.
On 2017-08-17 10:37, Michael Kilburn wrote:
>
> On Wednesday, August 16, 2017 at 4:11:52 PM UTC-5, Ross Smith wrote:
>
> On 2017-08-16 21:03, Ville Voutilainen wrote:
> >
> > There is no such spot. Throwing an exception during unwinding calls
> > terminate, but
> > failing to allocate memory when an exception is initialized before
> > it's thrown does not.
>
> Failing to allocate while constructing an exception is just an ordinary
> allocation failure, and would be expected to throw bad_alloc just as it
> would in any other circumstances. The fact that it happened inside a
> throw expression isn't relevant; execution never got as far as actually
> attempting to throw the original exception.
>
> Not that I am against this notion, but where standard says that
> allocation in every circumstance must produce std::bad_alloc? On the
> other hand 15.1.p4 clearly states that allocation mechanism is unspecified.
I'm not sure which part of the standard you're referring to there; I
can't see anything relevant in 15.1/4 in the current draft. Probably my
fault for using section numbers in the first place; we should use the
stable labels.
Anyway, [bad.alloc] (21.6.3.1 in the current draft) seems pretty
clear: "The class bad_alloc defines the type of objects thrown as
exceptions by the implementation to report a failure to allocate storage."
> The standard also requires (21.6.3.1) that all operations on bad_alloc,
> including default construction, can't throw.
>
> throw can fail during allocation, before construction.
I have no idea what you mean by that. Constructing and throwing
bad_alloc doesn't involve any allocation; constructing some other
exception type can throw, but that's just an ordinary allocation failure
and throws bad_alloc as usual, no special rules required.
(Well, it's not quite true that bad_alloc can't allocate anything in its
constructor: if it does, it has to catch and handle any failure
internally, without throwing. I can imagine an implementation where
bad_alloc's constructor tries to construct a string with a detailed
error message, but if that fails, gives up and just holds a pointer to a
static string with a generic message.)
Ross Smith
On Wednesday, 16 August 2017 15:25:19 PDT Michael Kilburn wrote:
> Regardless of these points -- there is a large class of systems where OS
> won't kill your process because there are no memory left. Process gets NULL
> from related malloc and it is up to the process to handle it. I want to be
> able to handle it in some other way than calling std::terminate().
>
> Same for a situation where I (user) explicitly (for whatever reason) want
> to limit my process to lets say 1Mb of footprint. I want this process to
> behave when it hit this limit, not crash. And I want that behavior to be
> robust.
Then you need to talk to those system vendors and ask them to provide such
robustness.
You were asking for portability. You can't portably handle OOM, despite what
the C++ standard may say.
On Wednesday, 16 August 2017 15:46:37 PDT Michael Kilburn wrote:
> I agree with everything until this point with one note -- as a user I don't
> care if std::bad_alloc is thrown in both cases -- all I need is a sensible
> 'backout' mechanism that would allow my application to survive. I don't
> really need to distinguish between these two situations, but it might be a
> bonus.
Explain to me what you're going to do if you can't free any more memory. That
is, what happens if the OOM situation is caused by another program on the same
machine.
> > If you have an exception failure, you can no longer reliably throw
> > exceptions.
>
> This is correct only on assumption that throwing std::bad_alloc (or
> std::cant_throw) consumes some limited resource. This can be avoided. Here
> is a very trivialized example:
>
> // takes pointer to constructed exception object
> // if NULL -- this means we are processing std::bad_alloc
> void locate_catch_clause_and_unwind_stack(void* exception_ptr);
>
> void throw_impl(exception_type* p)
> {
> void* xp = alloc_from_exception_heap(p->size); // return NULL if
> we ran out of magic
> locate_catch_clause_and_unwind_stack(xp);
> }
You're assuming that "locate_catch_clause_and_unwind_stack" can operate
without allocating memory too. The act of unwinding may require resources too.
That means either dynamic (which can fail) or static (which we can exhaust).
On Wednesday, 16 August 2017 16:12:17 PDT Michael Kilburn wrote:
> If given system doesn't comply with requirements outlined in C++ standard
> -- I seriously don't care about them. I write C++ code and I expect it to
> be executed only on compliant systems.
You don't have to care. That's your choice.
Just note you'll be excluding Linux, FreeBSD, macOS and Windows.
On Wednesday, 16 August 2017 16:18:40 PDT Michael Kilburn wrote:
> On Wednesday, August 16, 2017 at 6:15:38 PM UTC-5, Thiago Macieira wrote:
> > On Wednesday, 16 August 2017 16:12:17 PDT Michael Kilburn wrote:
> > > If given system doesn't comply with requirements outlined in C++
> > > standard -- I seriously don't care about them. I write C++ code and I
> > > expect it to be executed only on compliant systems.
> >
> > You don't have to care. That's your choice.
> >
> > Just note you'll be excluding Linux, FreeBSD, macOS and Windows.
>
> I don't understand how this remark contributes anything to this discussion.
> Also, I am pretty sure all these systems are considered compliant with
> current C++ requirements.
Except for the part you quoted elsewhere in this thread: that if new succeeds,
you can access those bytes without problem. That's not the case: new may
succeed and you may still SIGBUS when accessing those bytes.
On Wednesday, 16 August 2017 16:16:32 PDT Michael Kilburn wrote:
> > Explain to me what you're going to do if you can't free any more memory.
> > That is, what happens if the OOM situation is caused by another program
> > on the same machine.
>
> It depends on application. My favorite example for this situation is a
> server that accepts connections (creates a thread for each one), reads data
> (user commands), executes them and in case of exception -- everything in
> given thread unwinds, connection gets closed and server survives (along
> with other connections).
Ok, suppose the connection is active and you're about to send a packet. In the
calculation for that packet, some operation wants to throw and fails. That
means you're going to get a stack unwinding.
At what point do you catch it?
What memory will you free?
If you're going to close the socket, are you going to try and send a clean shutdown?
Do you have any other recourse besides closing the socket uncleanly and
exiting the thread? And how much memory will that free?
> > > is a very trivialized example:
> > >
> > > // takes pointer to constructed exception object
> > > // if NULL -- this means we are processing std::bad_alloc
> > > void locate_catch_clause_and_unwind_stack(void* exception_ptr);
> > >
> > > void throw_impl(exception_type* p)
> > > {
> > >     void* xp = alloc_from_exception_heap(p->size); // returns NULL if we ran out of magic
> > >     locate_catch_clause_and_unwind_stack(xp);
> > > }
> >
> > You're assuming that "locate_catch_clause_and_unwind_stack" can operate
> > without allocating memory too. The act of unwinding may require resources
> > too. That means either dynamic (which can fail) or static (which we can
> > exhaust).
>
> I am pretty sure current implementations don't allocate additional memory
> besides extra stack space -- but this problem exists in C too and a crash
> on stack overflow is a long-accepted behaviour.
I've never understood how you can allocate stack space while unwinding the
stack. During the unwind process, the stack pointers need to be adjusted back
to where they used to be. On Linux/x86, once you adjust the SP register, any
data below the pointer becomes garbage and no longer guaranteed to be retained
(or more than 128 bytes below it, or some other limit).
That means any resources the unwinder may need to preserve during the
unwinding need to be somewhere else other than the stack.
Michael: how about writing a paper recommending that implementations always ensure there's at least enough resources available to throw std::bad_alloc? You might encounter resistance from embedded systems developers, but at least it would be discussed by the committee. Just a thought :)
> On 2017-08-16 21:03, Ville Voutilainen wrote:
>>
>> There is no such spot. Throwing an exception during unwinding calls
>> terminate, but failing to allocate memory when an exception is
>> initialized before it's thrown does not.
>
> Failing to allocate while constructing an exception is just an ordinary
> allocation failure, and would be expected to throw bad_alloc just as it
> would in any other circumstances. The fact that it happened inside a
> throw expression isn't relevant; execution never got as far as actually
> attempting to throw the original exception.
>
> It's not clear to me what, if anything, the standard mandates if
> allocation fails while copying the exception in the course of throwing
> it - I'm pretty sure this would trigger terminate() but I couldn't find
> the specifics in the standard.
And the example attached specifically calls out the copy constructor of the exception type.
All that being said... I'm not as sure I'm right as I was when I started writing that. The reason? The line cites 18.5.2, which states:
> An exception is considered uncaught after completing the initialization of the exception object
That seems to suggest that, in the above code, the exception isn't uncaught when the `throw` in `C`'s copy constructor is encountered. Therefore, `std::uncaught_exceptions()` should be 0.
This looks like a genuine defect in the standard.
On Wednesday, 16 August 2017 15:28:46 PDT Michael Kilburn wrote:
> On Wednesday, August 16, 2017 at 1:37:24 AM UTC-5, Thiago Macieira wrote:
> > You can't portably handle OOM anyway.
>
> This is one of the possible answers, yes. But I would like to understand
> why I can do it in C, but can not do it in C++.
C has the same problems I listed.
It's just that C++ introduces one more: its failure reporting mechanism can
itself fail.
> > On a modern, multitasking operating system with virtual memory, OOM is
> > signalled by killing some processes. You don't get notified by it. Your
> > memory
> > allocations succeed, but you may get SIGBUS when you try to access it.
>
> This contradicts guarantee that C++ gives wrt 'new' -- if it succeeds, I
> should be able to access memory it allocated.
It does. Nothing we can do about it.
On Wednesday, 16 August 2017 16:55:07 PDT Michael Kilburn wrote:
> > If you're going to close the socket, are you going to try and send a clean
> > shutdown?
>
> On my side error string will be preallocated (or I will fallback to static
> string). On OS side `send()` will fail and I close socket in a way that
> will signal user that smth went wrong (if possible).
If it's a UDP socket, closing it will not send anything to the other side. The
other side will only timeout.
If you preallocate memory for the OOM condition, you may cause the OOM
condition you were trying to avoid.
That reminds me of developing for Symbian, where a 32 MB VRAM system reserved
1 MB for displaying the OOM message. When the camera app was running, it
consumed 30 MB VRAM. That meant everything else needed to work with just 1
MB...
> > I've never understood how you can allocate stack space while unwinding the
> > stack. During the unwind process, the stack pointers need to be adjusted
> > back
> > to where they used to be. On Linux/x86, once you adjust the SP register,
> > any
> > data below the pointer becomes garbage and no longer guaranteed to be
> > retained
> > (or more than 128 bytes below it, or some other limit).
>
> As far as I know MSVC (during unwinding) goes up the stack and calls
> destructors using current "remainder" of stack. Once unwinding is done, it
> executes catch() clause in similar way and adjusts stack related registers
> (this "freeing" potentially large portion of stack).
>
> Not sure about GCC, but I imagine it does something similar.
It does not.
The problem is where it can store the information it needs to save while those
destructors are running. It can't be below the stack, since the destructors
may call other functions that consume stack space. Even if they didn't, Unix
signals may be delivered and those also consume stack space.
Since it's not below the stack, it needs to be either in the heap or above the
stack (thread-specific reserved area).
> > That means any resources the unwinder may need to preserve during the
> > unwinding need to be somewhere else other than the stack.
>
> This is assuming unwinder needs any resources (besides more stack). For
> example in case of table-driven unwinding you basically end up searching
> and traversing a tree stored in static data (loaded from disk as part of
> your executable) -- you don't need extra memory for that. In old days MSVC
> was walking stack frames and looking for data it knew should be there. Not
> sure what it is doing nowadays.
Great if it can be implemented without consuming more resources. But that's
not a given: on some ABI, allocating and keeping memory during unwinding may
be necessary.
And that's aside from the exception object itself, which is most
definitely not stored in a static table.
If there are ABIs like that and C++ starts requiring them to change, it would
be a huge breakage for existing users. Big enough that I can predict
objections strong enough to the C++ Standard change that would back it out.
So, no, you have to cope with the fact that throwing exceptions may fail, even
for the smallest and/or pre-defined types. And it may be that the systems thus
affected are mainstream.
On Wednesday, 16 August 2017 21:32:15 PDT Michael Kilburn wrote:
> Ahh... I see. 'Full overcommit'. Some claim it to be bad practice:
> https://www.etalabs.net/overcommit.html
>
> I personally don't care, it is just a tool. If you don't like it -- don't
> use it. When setting up an environment for my app that is supposed to
> handle OOMs I'll make sure it is switched off.
*How*?
First, there's no portable way of doing that. So your question about how to
portably handle OOM falls short, since as I said you can't do it.
Second, you can't change the kernel VM settings as a regular user. You need to
do that as a privileged user. So your regular app can't do it.
Third, even if you run your application as root, little stops the admin from
shrinking the swap or turning it completely off (swapoff can fail if it can't
free the pages already swapped out to it, but I doubt it takes into
consideration reserved-but-unused space). That shrinks the total VM after it's
been "committed" to a particular process. And running without swap is poor use
of resources, since it prevents the kernel from evicting little used pages in
favour of things that really need RAM.
If you have enough control over the system to overcome the problems above, you
have enough control over the C++ runtime library implementation too.
> In last Linux environment I worked we never had it on -- because OOM-killer
> always chose the most important app to kill, the one that spent last 14
> hours chewing through data that needs to be available before next trading
> day starts :-)
You can non-portably change that behaviour by adjusting its OOM score. But,
again, non-portable solution.
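For reference, the non-portable Linux knobs discussed above look like this (a configuration sketch; values are illustrative and require root):

```shell
# Linux-specific, non-portable configuration (requires root).
# Strict accounting: malloc()/operator new fail up front instead of the
# OOM killer firing later.
sysctl vm.overcommit_memory=2    # 2 = "never overcommit"
sysctl vm.overcommit_ratio=100   # commit limit = swap + 100% of RAM

# Per-process alternative: exempt one process from the OOM killer.
echo -1000 > /proc/<pid>/oom_score_adj   # -1000 disables OOM killing for <pid>
```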
> I don't know if this behavior is allowed by C++ standard. Probably is. If
> yes -- then you are correct, you can't build portable C or C++ app that can
> handle OOMs, but it is not the reason to deny this possibility to those
> systems where OOM-killer isn't used.
I'm not denying it where it is possible. I'm saying you can't require a
portable solution because it can't be done, or would cost too much.
> Also, this is kinda irrelevant -- my biggest gripe with 'crashing throw'
> can be summed up like this:
> - exceptions and return codes are essentially the same thing -- it is a way
> to pass error object up the call stack, just implemented differently
Except where they're not. As you've shown, throwing can fail, returning can't.
> - you can always pass this error object using C technique
> - why you can't to it using C++ technique?
Because it's not the same thing. It's a more complex beast and requires more
machinery.
On 17 August 2017 at 01:21, Michael Kilburn <crusad...@gmail.com> wrote:
>> Your earlier example runs out of stack space. See how it behaves if
>> dynamic allocation is
>> done instead:
>> https://wandbox.org/permlink/wLk01q0LnzEdFJvS
>
>
> Your example throws a pointer to an object -- and therefore succeeds. Here is
> how running out of stack space looks; notice the difference in output from
> my original example:
> https://wandbox.org/permlink/dBBaSsLQ9Rwa9WzI
Well, great, you have just shown yourself that you can't catch stack
allocation failures.
If the initialization of an exception object runs into that problem,
you won't get a bad_alloc.
>> Your problem has nothing particular to do with exceptions.
> Yes, it does.
Except that it does not, as your own example shows.
On 17 August 2017 at 08:13, Michael Kilburn <crusad...@gmail.com> wrote:
>> Well, great, you have just shown yourself that you can't catch stack
>> allocation failures.
>> If the initialization of an exception object runs into that problem,
>> you won't get a bad_alloc.
>
>
> Can you show me where 'stack allocation' happens in this disassembled code:
> https://godbolt.org/g/kcL1ks
> ?
In __cxa_allocate_exception?
>> >> Your problem has nothing particular to do with exceptions.
>> > Yes, it does.
>>
>> Except that it does not, as your own example shows.
>
>
> I can't decide if you are trolling me or not...
You seem to be suggesting that I shouldn't take you seriously. I can
certainly accommodate
such wishes with relatively little effort.
If you think you have a valid point, you can come to an actual meeting to express it. It's open and costs nothing (apart from such things as food, lodging and travel costs, which are on you; it's all volunteer work). You probably won't be able to vote in plenary, but you'll be able to participate otherwise, express yourself and take part in straw polls.

I think you'll get (quite reasonable) opposition from people working in memory-constrained environments, and from those who do not use exceptions. You'll probably want to explore what ensuring there's enough memory to throw std::bad_alloc at all times means to these groups (in one case, every byte counts; in the other case, using precious memory for an unused feature). It's a matter of tradeoffs, or so I suppose. You'll want to make a convincing argument.

Another avenue to consider is to contact your compiler vendor(s), as for the moment it's an implementation-specific thing. Maybe that would be sufficient (compiler options or somesuch might do it), and it would be less work for you.

Good luck!
On Thursday, August 17, 2017 at 1:13:07 AM UTC-4, Michael Kilburn wrote:
> Can you show me where 'stack allocation' happens in this disassembled code?
Your compiler happens to be eliding the temporary. But if it didn't/couldn't do that, then it would be forced to create a temporary, then copy/move into it.
A temporary that would be on the stack. And thus, not throw `bad_alloc` when it causes a stack overflow.
So why should elision of the large exception cause a recoverable error, when not eliding the exception will cause an unrecoverable failure?
Your general issue is potentially valid, but your specific example is not.
On Wednesday, 16 August 2017 22:06:50 PDT Michael Kilburn wrote:
> > *How*?
>
> By adding smth like this into supporting documentation:
> This application was designed to handle out-of-memory situations
> on its own. On Linux, switch off overcommit, please.
>
> And, depending on actual design this can be made optional or mandatory.
So you can also add it to the documentation
"Please use libstdc++safe instead of libstdc++"
> > > Also, this is kinda irrelevant -- my biggest gripe with 'crashing throw'
> > > can be summed up like this:
> > > - exceptions and return codes are essentially the same thing -- it is a
> >
> > way
> >
> > > to pass error object up the call stack, just implemented differently
> >
> > Except where they're not. As you've shown, throwing can fail, returning
> > can't.
>
> I'd like throwing not to fail.
That's a physical impossibility, for an arbitrary exception object of unknown
size.
At best, we can adopt your suggestion: an unthrowable exception could be
converted to an exception of a different type (which your catch handlers won't
usually catch) to indicate the situation.
I still think it's a bad idea because you'll most likely unwind too much and
exit frames that did not expect to exit. Including noexcept frames (which will
cause std::terminate).
> I am looking for a better answer. I've been coding complex systems for years
> -- complexity (where you can't avoid it) is no longer a good reason to not
> do smth (for me).
> Why can't it be made no-fail?
That depends on what "no-fail" is. Can it guarantee throwing an arbitrary
exception of any type? No, never. That's easy to see why.
Is it physically possible to write an implementation that can guarantee that
throwing a specific exception type or types will succeed in starting the
unwind, and that unwinding will not by itself fail? Yes, I believe so.
If that runtime implementation can be written, I'd argue that it solves your
problem. You can require your application to use it.
But can we make such requirements to all implementations? Without a survey of
the current implementations, both mainstream and fringe (but not obsolete),
it's difficult to say. It might be possible. But my wild guess here is that such
a survey would find at least one implementation that cannot implement it.
On Wednesday, 16 August 2017 21:47:50 PDT Michael Kilburn wrote:
> > The problem is where it can store the information it needs to save while
> > those
> > destructors are running. It can't be below the stack, since the
> > destructors
> > may call other functions that consume stack space. Even if they didn't,
> > Unix
> > signals may be delivered and those also consume stack space.
>
> I don't know these details, if I knew -- I would not be asking these
> questions here. I always assumed that stack unwinding can't fail if
> destructors don't throw (and there is enough stack left).
>
> Which information needs to be saved 'on-the-side' to run destructors with
> GCC?
That's exactly the problem. The standard cannot mandate something that would
be unimplementable, be it for either technical unfeasibility or because the
cost for existing implementations would be too high. What you're asking is not
a problem of the standard, it's a problem of the implementations.
And though
there are several developers here who are developers in GCC and Clang, they
may not be paying attention to this thread or be aware of the specific details
of the unwind mechanism.
Without checking the code, I would assume that unwinding can't fail once it's
started. The problem is starting it, since the act of throwing requires using
some resources. At the very least, it needs to store the exception object
you've thrown and any information that can be obtained by the C++ API like
std::uncaught_exceptions(), std::current_exception().
But I wouldn't be surprised if certain unwinders need to allocate a bit of
memory per frame being unwound to calculate something, like in the code that
determines whether a catch's expression matches the object thrown. We simply
can't exclude that possibility in other ABIs, even if the IA-64 C++ portable
ABI's libunwind doesn't need to.
Of course, this isn't including the fact that catching by value an object with
non-noexcept copy constructor could throw too.
> > Great if it can be implemented without consuming more resources. But
> > that's
> > not a given: on some ABI, allocating and keeping memory during unwinding
> > may
> > be necessary.
>
> Can you elaborate on this portion a bit? What kind of information?
See above, I'm speculating. The point is that I cannot rule out that
possibility.
Remember also that we need to deal with ABIs that may have been patched over
and over for the last 25-30 years to deal with C++ and compiler improvements.
> > And that's aside from the exception object itself, which is most
> > definitely not stored in a static table.
>
> I thought with GCC exception object is constructed on a heap (or in an
> emergency buffer) before unwinding machinery is invoked...
And that's exactly the problem. What happens if the object is too large for
the emergency buffer and the heap allocation fails?
On Wednesday, 16 August 2017 23:41:32 PDT Michael Kilburn wrote:
> > Remember also that we need to deal with ABIs that may have been patched
> > over
> > and over for the last 25-30 years to deal with C++ and compiler
> > improvements.
>
> Why? Just like with other C++ features -- given compiler can just declare
> that it doesn't support it until version X.X.X.
You're assuming that the compiler can make that change in the first place. That
is not a given, since it could be a major ABI-breaking change. There's more to
this than a theoretical possibility: this change would have large impacts on
support and compatibility of existing code. It may be theoretically possible
but not economically viable.
I don't know if you were around when we did an ABI change the last time (for
GCC 3.3 to 3.4). That made the C++11 std::string change in libstdc++ look like
a minor hiccup in comparison -- I did a system upgrade one day and it was
over.
> > > > And that's aside from the exception object itself, which is most
> > > > definitely not stored in a static table.
> > >
> > > I thought with GCC exception object is constructed on a heap (or in an
> > > emergency buffer) before unwinding machinery is invoked...
> >
> > And that's exactly the problem. What happens if the object is too large
> > for
> > the emergency buffer and the heap allocation fails?
>
> auto-substitution to no-fail "throw std::bad_alloc" or "throw
> std::cant_throw". Assuming it is possible.
And assuming it's a good idea. I'm not satisfied it is.
Take the following example:
void f()
{
    throw LargeObject();
}

void g() noexcept
{
    try {
        f();
    } catch (const LargeObject &o) {
        return;
    }
}
Is the code above safe?
If the throw in f() does throw LargeObject, then g() will catch it and consume
the exception. That function is appropriately noexcept.
But if the throw in f() replaces LargeObject with something else -- ANYTHING
else -- then the catch block in g() won't catch it. Since the g() function is
marked noexcept, the unwinder runtime will call std::terminate().
Now, this is no different than guaranteeing a call to std::terminate() at the
throw point, since you end up there anyway. But the type replacement does
allow unwinding in other code conditions.
Still, we haven't ruled out the unwinder failing in the first place, unrelated
to the type in question. So long as that is a possibility, there will be the
need for an escape hatch that isn't an exception.
On Wednesday, 16 August 2017 23:29:32 PDT Michael Kilburn wrote:
> > But can we make such requirements to all implementations? Without a survey
> > of
> > the current implementations, both mainstream and fringe (but not
> > obsolete),
> > it's difficult to say. It might be possible. But my wild guess here is
> > that such
> > a survey would find at least one implementation that cannot implement it.
>
> Who should I talk to to get this ball rolling? Maybe there is a better
> person to discuss this with? Should I try to come up with proposal as
> Patrice suggested? (it is hard to come up with proposal without knowing
> precisely how stuff you want to change operates right now...)
My guess is the person that looks back at you when you brush your teeth or
shave in the morning: yourself. (I suppose your dog could also be looking at
you, but the dog won't be of much help. If it's a cat, it might actually make
matters worse, as the cat may be plotting the downfall of human civilisation.)
I think that the part about changing types if the throw mechanism can't throw
that type can be solved with a simple paper making the proposal. The committee
and compiler writers would see it and discuss the issue.
You should also include in the paper the further problems associated with
heap-free unwinding, beyond the ability to store the exception object itself.
But unless you do the survey I mentioned above, I don't think you can propose
solutions. But you can at least get the ball rolling by getting the vendors to
survey themselves.
I'd also write "stack overflow is out of scope".