
Exceeding memory while using STL containers


blwy10

May 30, 2006, 5:40:18 AM
What happens when, in the process of using STL containers, we insert so
many elements that our computer runs out of memory to store them? What
happens then? I assume that this behaviour is implementation-defined
and dependent on what STL I am using and what OS I am running, but I'm
just curious to know what would/should typically happen. To be more
precise, I am using Windows XP SP2 and the VC Toolkit 2003, including its
STL implementation. The container specifically is std::set.

Thanks all.
Benjamin Lau


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Kai-Uwe Bux

May 31, 2006, 11:18:06 AM
blwy10 wrote:

> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen.

[platform information snipped]

No. The standard containers do all allocations through std::allocator unless
you specify an allocator of your own. When std::allocator cannot do an
allocation, it will throw std::bad_alloc.
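
For illustration, a minimal sketch of what that looks like from the
calling side (this assumes the default allocator and an OS that actually
reports the allocation failure):

#include <set>
#include <new>        // std::bad_alloc
#include <iostream>

int main()
{
    std::set<long> s;
    try {
        for (long i = 0; ; ++i)      // keep inserting until an allocation fails
            s.insert(i);
    }
    catch (std::bad_alloc&) {
        // the set is still in a consistent state; we simply stop filling it
        std::cerr << "out of memory after " << s.size() << " elements\n";
    }
    return 0;
}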


Best

Kai-Uwe Bux

Ulrich Eckhardt

May 31, 2006, 11:17:45 AM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then?

This is signalled via exceptions, std::bad_alloc I'd say.

Uli

ThosRTanner

May 31, 2006, 11:26:25 AM

blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then?
It throws an exception (std::bad_alloc, I think).

> I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.
What is implementation-defined is at what point your system will throw
bad_alloc, which depends not just on what STL you are using and what OS
you are running, but also on how much memory the OS permits the program
to have, how much swap space is in use, and so on.

Note: I've come across one implementation of C++ where, given certain
memory constraints, when the computer runs out of memory, your heap
overwrites your stack. This is from people who should have known
better. But they won't change it because apparently speed is more
important than correct and predictable functioning of a program.

Meh...@gmail.com

May 31, 2006, 11:25:11 AM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen.

I guess it depends on the allocator used, but on my platform operators
new and delete are used in the default allocator, so I think
std::bad_alloc is thrown when memory runs out. I don't know whether this
is standard, though.

n2xssvv g02gfr12930

May 31, 2006, 11:20:35 AM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.
>

I'd expect the STL function that failed to throw one of the defined
standard exceptions, which your code could catch. Obviously, if an error
occurs that the STL library cannot possibly be aware of, and therefore
cannot check for, you'll get whatever error the system produces instead.

JB

Daniel Aarno

May 31, 2006, 11:21:40 AM
This would depend on the allocator you are using. You can supply your own
allocator object to each of the STL containers to take care of allocating
and releasing memory. If you use the default, I guess new and delete are
used. Contrary to what many people believe, new nowadays throws an
exception when memory allocation fails (unless explicitly told not to);
new does NOT return 0, NULL or anything else by default. Actually, you
can install a new_handler that is called when memory allocation first
fails. If your new_handler is able to find memory for new (by releasing a
pool, for example), the operation can resume; otherwise the exception is
thrown. So, given that you don't catch bad_alloc exceptions (or a base
thereof) and don't have a new_handler that takes care of the problem,
terminate is invoked. terminate will call your terminate handler if you
have one set, or default to calling abort, which aborts the process
without any cleanup (such as flushing buffers).
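
As an illustration only (the reserve block below is just a stand-in for
whatever pool your application keeps), a new_handler along those lines
might look like this:

#include <new>        // std::set_new_handler, std::bad_alloc
#include <cstdlib>    // std::malloc, std::free

namespace {
    char* reserve = 0;               // emergency reserve, grabbed at start-up

    void out_of_memory()
    {
        if (reserve != 0) {
            std::free(reserve);      // release the reserve; operator new retries
            reserve = 0;
        } else {
            std::set_new_handler(0); // nothing left: let operator new throw
        }
    }
}

int main()
{
    reserve = static_cast<char*>(std::malloc(1024 * 1024));
    std::set_new_handler(out_of_memory);
    // ... the rest of the program, using containers as usual ...
    return 0;
}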

/Daniel Aarno

blwy10 wrote:


> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.


shailesh

May 31, 2006, 11:22:22 AM
For memory management the STL uses allocators. One can redefine the
allocator if required. Now, the default allocator provided with the STL
uses ::operator new for memory allocation, so it may throw the
corresponding exception, i.e. it will throw std::bad_alloc when it cannot
allocate memory. Here is the definition of the _Allocate function used by
the allocator<T> class in the VC distribution:

template<class _Ty> inline
_Ty _FARQ *_Allocate(_SIZT _Count, _Ty _FARQ *)
    {   // allocate storage for _Count elements of type _Ty
    return ((_Ty _FARQ *)operator new(_Count * sizeof (_Ty)));
    }


Given this, the next question is what happens to the STL container in
the presence of this std::bad_alloc exception.

All standard containers provide the basic exception-safety guarantee.
In general, single-element insert() operations give the strong exception
guarantee, but multi-element inserts are in general not strongly
exception-safe.
Alan Johnson

May 31, 2006, 11:26:52 AM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.

A correct implementation will throw a std::bad_alloc exception.

--
Alan Johnson

Shimon Shvartsbroit

May 31, 2006, 11:32:32 AM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.
>

Chances are you'll get a std::bad_alloc exception thrown. Since the STL
is an exception-safe library, AFAIK insert follows the "strong" exception
guarantee in this case, which means the container keeps all the elements
that were successfully inserted before it ran out of memory. For
instance, if there were five items in your set container, and adding the
sixth item ran the computer out of memory, the container will still hold
those five elements as if the sixth element had never been added. In
addition, the "insert" call will propagate the std::bad_alloc.
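
You can see this with a small test. A real allocation failure is hard to
trigger reproducibly, so the sketch below substitutes an element whose
copy constructor throws on demand; the guarantee being exercised is the
same:

#include <set>
#include <cassert>
#include <stdexcept>

// Element whose copy constructor can be told to throw; it stands in for
// the allocation failure.
struct Item {
    static bool fail;
    int value;
    explicit Item(int v) : value(v) {}
    Item(const Item& other) : value(other.value)
    {
        if (fail)
            throw std::runtime_error("copy failed");
    }
    bool operator<(const Item& other) const { return value < other.value; }
};
bool Item::fail = false;

int main()
{
    std::set<Item> s;
    for (int i = 0; i < 5; ++i)
        s.insert(Item(i));

    Item::fail = true;
    try {
        s.insert(Item(5));           // copying into the new node throws
    }
    catch (std::runtime_error&) {
        assert(s.size() == 5);       // strong guarantee: the failed insert had no effect
    }
    return 0;
}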

http://www.gotw.ca/gotw/059.htm , look for "Strong Guarantee"

Regards,
Shimon

whitef...@hotmail.com

May 31, 2006, 11:33:57 AM
Hello Benjamin,

The STL containers were built so they would be able to handle their own
memory. When you insert an element into a container and there is not
enough memory, the container will throw a std::bad_alloc exception, which
you should catch and then exit the program. There is not much you can do
if you suspect that the set is going to take up all your RAM. If the set
is really that huge, you can write the set to a binary file and use your
hard drive instead, but this requires knowledge of C++ file I/O.
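
If you did want to spill the set to disk, a bare-bones sketch could look
like this (illustrative only; real code needs error handling and a
portable on-disk format):

#include <set>
#include <fstream>

void save(const std::set<int>& s, const char* filename)
{
    std::ofstream out(filename, std::ios::binary);
    for (std::set<int>::const_iterator it = s.begin(); it != s.end(); ++it) {
        int v = *it;
        out.write(reinterpret_cast<const char*>(&v), sizeof v);
    }
}

void load(std::set<int>& s, const char* filename)
{
    std::ifstream in(filename, std::ios::binary);
    int v;
    while (in.read(reinterpret_cast<char*>(&v), sizeof v))
        s.insert(v);
}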

dhruv

May 31, 2006, 3:46:57 PM

blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.

The container's insert() function will throw a std::bad_alloc exception,
and I don't think there's much that can be done about it either. The
result is NOT implementation-dependent, and is defined by the C++
standard. This is of course assuming that you are using std::allocator or
some replacement that adheres to the standard.

Regards,
-Dhruv.

Vaclav Haisman

May 31, 2006, 3:59:06 PM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.
>
> Thanks all.
> Benjamin Lau
std::bad_alloc is thrown?

--
VH

kanze

May 31, 2006, 4:25:46 PM
blwy10 wrote:

> What happens when in the process of using STL containers, we
> insert so many elements that our computer runs out of memory
> to store them. What happens then? I assume that this behaviour
> is implementation-defined and dependent on what STL am I using
> and what OS I am running but I'm just curious to know what
> would/should typically happen. To be more precise, I am using
> Windows XP SP 2 and VC Toolkit 2003, including its STL
> implementation. The container specifically is std::set.

Strictly speaking, it's undefined behavior. You've exhausted
the resource limits of your process. However, the standard does
provide an officially sanctioned means for the library to
handle this: raise an std::bad_alloc exception. In practice, on
systems which support detecting a failed allocation, when using
the default allocator, and you haven't replaced the new_handler,
you will usually get this. Not all systems support detection of
a failed allocation, however: I've had problems with this in the
past on AIX, Linux and Windows NT. (I know that AIX has since
fixed the problem, and that there are ways to configure Linux to
avoid it.) And of course, depending exactly on what you are
doing when you run out of memory, you may end up crashing
anyway.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Michiel...@tomtom.com

May 31, 2006, 4:32:41 PM

blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then?

Assuming you're using the standard allocator, you'll get an
std::bad_alloc exception. It may take a while though, if your OS decides
to swap a lot.

HTH,
Michiel Salters

amitav...@gmail.com

May 31, 2006, 4:46:42 PM
blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them.

The C++ standard mandates that the default `new' should throw the
`bad_alloc' exception when it runs out of memory.

> I assume that this behaviour is implementation-defined and dependent on what
> STL am I using and what OS I am running but I'm just curious to know what
> would/should typically happen.
>

> Benjamin Lau

This behavior is not implementation-defined; it is clearly specified in
the standard. And by definition, the STL should behave as per the
standard, irrespective of the OS on which it is implemented.

If your program does not alter the standard behavior of `new' by
calling set_new_handler(), it should throw the `bad_alloc' exception
when memory is exhausted. If your program is not prepared to catch
that exception, it will most likely be terminated.

-amitav

boaz...@yahoo.com

May 31, 2006, 4:44:28 PM

blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.
>
> Thanks all.
> Benjamin Lau

The STL containers are "indifferent" to memory allocation issues, since
one of their template parameters is the allocation scheme to be used. So
rather than asking what the container will do, you have to ask yourself
what the allocator will do. First you have to read about the allocator
class. In general, though, it has 4 major functions:
allocate - does the raw memory allocation
construct - constructs an object in the memory obtained from allocate
(this is a really simplified explanation)
destroy - destroys the object but does not free the memory
deallocate - does the opposite of allocate
Now it all depends on what allocator you are using with your containers.
There is a default allocator that the STL implements, and this is really
OS- and compiler-vendor specific. Since you can see the source code of
the STL allocator, and you know whether you are using it, I bet you can
figure out the rest yourself :)
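
For illustration, here is a bare-bones allocator skeleton showing those
four operations. It simply forwards to ::operator new/delete, which is
essentially what the default allocator does; it is not any vendor's
actual implementation:

#include <cstddef>     // std::size_t, std::ptrdiff_t
#include <functional>  // std::less
#include <new>         // ::operator new, placement new
#include <set>

template <typename T>
class simple_allocator {
public:
    typedef T               value_type;
    typedef T*              pointer;
    typedef const T*        const_pointer;
    typedef T&              reference;
    typedef const T&        const_reference;
    typedef std::size_t     size_type;
    typedef std::ptrdiff_t  difference_type;

    template <typename U> struct rebind { typedef simple_allocator<U> other; };

    simple_allocator() {}
    template <typename U> simple_allocator(const simple_allocator<U>&) {}

    pointer allocate(size_type n, const void* = 0)
    {   // raw memory only; ::operator new throws std::bad_alloc on failure
        return static_cast<pointer>(::operator new(n * sizeof(T)));
    }
    void deallocate(pointer p, size_type)
    {   // give the raw memory back
        ::operator delete(p);
    }
    void construct(pointer p, const T& value)
    {   // build an object in already-allocated memory (placement new)
        new (static_cast<void*>(p)) T(value);
    }
    void destroy(pointer p)
    {   // destroy the object without freeing the memory
        p->~T();
    }
    size_type max_size() const { return size_type(-1) / sizeof(T); }
    pointer address(reference x) const { return &x; }
    const_pointer address(const_reference x) const { return &x; }
};

template <typename T, typename U>
bool operator==(const simple_allocator<T>&, const simple_allocator<U>&)
{ return true; }
template <typename T, typename U>
bool operator!=(const simple_allocator<T>&, const simple_allocator<U>&)
{ return false; }

int main()
{
    std::set<int, std::less<int>, simple_allocator<int> > s;
    s.insert(42);
    return 0;
}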
good luck

Shimon Shvartsbroit

May 31, 2006, 4:55:59 PM
whitef...@hotmail.com wrote:
> Hellol Benjamin,
>
> The STL containers were built so they would be able to handle their own
> memory. When you insert an element into a container and there is not
> enough memory, the container will throw a std::bad_alloc exception that
> you should catch and exit the program. There is not much you can do if
> you suspect that the size of the set is going to take up all your RAM.

That's not quite correct. Here's a real-life scenario that happened to me
at work. The main component of the product I was working on couldn't
allocate 50K bytes of memory. That component also managed a queue of
data it was allowed to release. One of the options was to catch
std::bad_alloc and, inside the catch block, process the queue and release
objects from memory. By doing that, I'd provide enough memory for the
application to keep running.
Another application could use GC (garbage collection). It could be a
good idea to have the GC start releasing unused objects inside the
catch (bad_alloc) block.
Although, I agree that probably the most common behaviour is to notify
the user of the problem, log it and shut down the system.
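
A rough sketch of that pattern (the names and the queue are made up for
illustration; the point is the catch/release/retry shape):

#include <cstddef>    // std::size_t
#include <deque>
#include <new>        // std::bad_alloc

// Hypothetical queue of buffers the application is allowed to drop.
std::deque<char*> releasable_queue;

void drop_some_entries()
{
    for (int i = 0; i < 10 && !releasable_queue.empty(); ++i) {
        delete[] releasable_queue.front();   // free a queued buffer
        releasable_queue.pop_front();
    }
}

char* allocate_with_retry(std::size_t n)
{
    for (;;) {
        try {
            return new char[n];
        }
        catch (std::bad_alloc&) {
            if (releasable_queue.empty())
                throw;               // nothing left to release: give up
            drop_some_entries();     // make room, then retry
        }
    }
}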

Shimon

Joshua Lehrer

May 31, 2006, 4:53:42 PM

ThosRTanner wrote:
> Note: I've come across one implementation of C++ where, given certain
> memory constraints, when the computer runs out of memory, your heap
> overwrites your stack. This is from people who should have known
> better. But they won't change it because apparently speed is more
> important than correct and predictable functioning of a program.

This is the way that, in college, I was taught that computers are
supposed to 'work'. The stack grows up, the heap grows down, and when
they meet, you are in trouble.

I currently work on a platform where the stack grows down, but is
limited to 1gig, and the heap grows up, and is also limited to 1 gig.
This is problematic because we use very little stack space but lots of
heap, and yet we are still limited to 1 and 1. With a floating
separator between stack and heap, we could get 1.8 and 0.2, which is
more like what we actually need.

I guess another solution would have the heap and stack growing toward
each other. If the heap ever needed to grow and it detected it was too
close to the stack, it would throw bad_alloc. However, what do you do
when the stack approaches the heap? Is it allowed to throw bad_alloc
at the point when it detects this? If not, what do you do? If so,
isn't this a lot of overhead to calculate each time you need to grow
the stack? Growing the stack is supposed to be fast and cheap (usually
just an add).

joshua lehrer
http://www.lehrerfamily.com/

Martin Bonner

May 31, 2006, 4:49:45 PM

blwy10 wrote:
> What happens when in the process of using STL containers, we insert so
> many elements that our computer runs out of memory to store them. What
> happens then? I assume that this behaviour is implementation-defined
> and dependent on what STL am I using and what OS I am running but I'm
> just curious to know what would/should typically happen. To be more
> precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> STL implementation. The container specifically is std::set.

In principle (as everybody has said), the implementation should throw
std::bad_alloc. In practice, when a Windows program runs out of memory,
there is a good chance that it will take the whole system down with it.
This is not a failure of the STL, but of all the drivers and other
programs that can't cope with running out of memory. There is thus little
point in trying to handle out-of-memory failures (and yes, I do realize
that is part of the reason we got here!)

Note that failing to throw std::bad_alloc doesn't mean the STL is
failing to adhere to the standard - there is a general get-out clause
for "resource limit exceeded" in the standard.

James Kanze

May 31, 2006, 4:59:58 PM
Kai-Uwe Bux wrote:
> blwy10 wrote:

>> What happens when in the process of using STL containers, we insert
>> so many elements that our computer runs out of memory to store them.
>> What happens then? I assume that this behaviour is
>> implementation-defined and dependent on what STL am I using and what
>> OS I am running but I'm just curious to know what would/should
>> typically happen.

> No. The standard containers do all allocations through std::allocator
> unless you specify an allocator of your own. When std::allocator
> cannot do an allocation, it will throw std::bad_alloc.

Maybe. It depends on the program (e.g. if you've not replaced
the new handler, or the global operator new), and a lot of other
things -- all std::allocator is required to do is use ::operator
new. The default version of the global operator new will throw
std::bad_alloc if 1) it is able to determine that the request
will fail, and 2) you haven't replaced the new handler.

One thing is certain: you cannot count on recovering from an out
of memory situation on most platforms.

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

kanze

Jun 1, 2006, 7:36:37 AM
Joshua Lehrer wrote:
> ThosRTanner wrote:
> > Note: I've come across one implementation of C++ where,
> > given certain memory constraints, when the computer runs out
> > of memory, your heap overwrites your stack. This is from
> > people who should have known better. But they won't change
> > it because apparently speed is more important than correct
> > and predictable functioning of a program.

> This is the way that, in college, I was taught that computers
> are supposed to 'work'. The stack grows up, the heap grows
> down, and when they meet, you are in trouble.

The question is whether you recognize it or not. Back in the
older days of 64 KB of unprotected memory, it wasn't uncommon
for stack overflow to just overwrite a few variables -- the
actual error didn't occur until much later, and it wasn't always
apparent what caused the error.

With pthread_attr_setstack, you can guarantee this sort of
behavior today; I wouldn't be surprised if even without
pthread_attr_setstack, you can get this behavior in some
threading implementations.

> I guess another solution would have the heap and stack growing
> toward eachother. If the heap ever needed to grow and it
> detected it was too close to the stack, it would throw
> bad_alloc.

I think many modern Unixes do this. (Solaris does, at least. I
think Linux on PC does as well.)

> However, what do you do when the stack approaches the heap?

Core dumps. At least on every system I've worked on for the
last 15 years.

In practice (although I don't think it is written down
anywhere), once memory has been allocated to the stack, at least
under Solaris (and I'm pretty sure PC-Linux), it is allocated
eternally to the stack. In critical applications, we usually
recurse pretty deeply at start up, in order to allocate the
maximum that will ever be needed; having done this, we will
never run out of stack later. (As far as I know, this is not
explicitly guaranteed anywhere. But I've yet to find a Unix
system where it didn't work.)

> Is it allowed to throw bad_alloc at the point when it detects
> this?

It is; it can do anything it damn well pleases, since running
out of resources is undefined behavior. But I don't think many
of us would like that; in order to write exception safe code,
you need a few critical functions which cannot throw. If stack
overflow raises an exception, by definition, any function call
can throw.

> If not, what do you do?

Cross your fingers? Pray?

Seriously, the only solution I know is to determine the maximum
amount of stack you need (which is far from simple, and requires
a pretty good familiarity with the system architecture and
calling conventions), and pre-allocate it up front, as described
above.

> If so, isn't this a lot of overhead to calculate each time you
> need to grow the stack? Growing the stack is supposed to be
> fast and cheap (usually just an add).

Well, it's never just an add, since you also have to set up the
new stack frame. The trick is to identify the critical spots
(say, before recursing), and only check there. Manually, by
comparing to the top you saved when you recursed to allocate it
in the first place.
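
A sketch of that start-up trick (the depth and frame size are
placeholders; the real figures have to come from analysing the
application, and the compiler must not be allowed to optimize the
recursion away):

// Recurse at start-up, touching a local buffer at each level, so that the
// pages for the deepest stack we expect to need are mapped before the
// program does any real work.
void preallocate_stack(int levels)
{
    volatile char frame[4096];                // roughly one page per level
    for (unsigned i = 0; i < sizeof frame; i += 512)
        frame[i] = 0;                         // touch the page so it is really mapped
    if (levels > 1)
        preallocate_stack(levels - 1);
    frame[0] = 0;                             // keep the frame live across the call
}

int main()
{
    preallocate_stack(256);    // placeholder depth: roughly 1 MB of stack
    // ... real work starts here ...
    return 0;
}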

--
James Kanze GABI Software

Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

kanze

Jun 1, 2006, 7:37:12 AM
dhruv wrote:
> blwy10 wrote:
> > What happens when in the process of using STL containers, we insert so
> > many elements that our computer runs out of memory to store them. What
> > happens then? I assume that this behaviour is implementation-defined
> > and dependent on what STL am I using and what OS I am running but I'm
> > just curious to know what would/should typically happen. To be more
> > precise, I am using Windows XP SP 2 and VC Toolkit 2003, including its
> > STL implementation. The container specifically is std::set.

> The container's insert() function will throw an std::bad_alloc()
> exception, and I don't think there's much that can be done about it
> either. The result is NOT implementation dependant, and is defined by
> the C++ standard. This is of course assuming that you are using
> std::allocator() or some replacement that adheres to the standard.

And haven't replaced the global operator new, and haven't
specified a custom new handler, and that the OS correctly
reports the error to operator new, and that having run out of
memory, you're still able to allocated enough stack to call the
operator new function (and that it can call any functions it
happens to use internally).

In practice, just catching bad_alloc is nowhere near enough to
be able to recover from out of memory conditions.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Daniel Aarno

Jun 1, 2006, 4:57:57 PM
> This is the way that, in college, I was taught that computers are
> supposed to 'work'. The stack grows up, the heap grows down, and when
> they meet, you are in trouble.

This actually has nothing to do with how STL handles out-of-memory problems.
STL assumes that there is a correct underlying implementation of the memory
management system so that new and delete operators work as specified by the
standard.

>
> I currently work on a platform where the stack grows down, but is
> limited to 1gig, and the heap grows up, and is also limited to 1 gig.
> This is problematic because we use very little stack space but lots of
> heap, and yet we are still limited to 1 and 1. With a floating
> separator between stack and heap, we could get 1.8 and 0.2, which is
> more like what we actually need.
>
> I guess another solution would have the heap and stack growing toward
> eachother. If the heap ever needed to grow and it detected it was too
> close to the stack, it would throw bad_alloc. However, what do you do
> when the stack approaches the heap? Is it allowed to throw bad_alloc
> at the point when it detects this? If not, what do you do? If so,
> isn't this a lot of overhead to calculate each time you need to grow
> the stack? Growing the stack is supposed to be fast and cheap (usually
> just an add).

Even though this is maybe a bit off topic, it is an interesting question.
Yes, growing the stack should be cheap, usually adding a value to a
pointer. Taking Linux as an example, you can tell the linker how much
stack space you want to allocate and how much it is allowed to grow. This
means that you know the limitations of your stack, but you can change
them (so you're not stuck with the obscene amount of 1 GB you claim to
have on your system). I think the defaults on "normal" Linux systems are
4 kB allocated and 2 MB reserved. This way the normal MMU can catch "out
of stack", and of course the system will not give you any more pages for
dynamic allocation when you are out of memory.

/Daniel Aarno

kanze

Jun 1, 2006, 4:51:11 PM
Martin Bonner wrote:
> blwy10 wrote:
> > What happens when in the process of using STL containers, we
> > insert so many elements that our computer runs out of memory
> > to store them. What happens then? I assume that this
> > behaviour is implementation-defined and dependent on what
> > STL am I using and what OS I am running but I'm just curious
> > to know what would/should typically happen. To be more
> > precise, I am using Windows XP SP 2 and VC Toolkit 2003,
> > including its STL implementation. The container specifically
> > is std::set.

> In principal (as everybody has said), the implementation
> should throw std:bad_alloc. In practice, when a Windows
> program runs out of memory, there is a good chance that it
> will take the whole system down with it.

I've never experienced this. I know that Linux used to (and
probably still does in some configurations) use lazy allocation,
and would start terminating random processes when it ran out of
memory -- AIX also had this problem in the distant past.
Although I've not experienced this, at least one person has told
me that in one case, it killed the login process, so it became
impossible to log into the system -- as a result of a simple
user process using too much memory. Under Windows, I've had a
pop-up window appear, asking me to kill some other processes
manually, in order to obtain more memory -- the system request
didn't return until it could fulfill the request for memory.

Of course, on any system, if you have a lot more virtual memory
than real memory, you'll start thrashing long before the system
runs out of memory. While the system isn't down, there's not
necessarily much you can do with it.

> This is not a failure of the STL, but of all the drivers and other
> programs that can't cope with out of memory. There is thus little
> point in trying to handle out-of-memory failures (and yes, I do realize
> that is part of the reason we got here!)

> Note that failing to throw std:bad_alloc doesn't mean the STL is
> failing to adhere to the standard - there is a general get-out clause
> for "resource limit exceeded" in the standard.

It's debatable whether this clause applies when the standard
provides an official means of signaling the error. In practice,
however, when you're out of memory, you also stand a definite
risk of stack overflow when calling operator new. Which is
definitely covered by this clause.

On the other hand, in the older versions of AIX (older, in this
case, being those from more than 8 or 10 years ago), and some
configurations of Linux, the system returns a valid pointer even
when no memory is available; your program (or someone else's!)
then crashes when it attempts to use the pointer. I find it
hard to justify this under the "resource limits exceeded"
clause, because the system has told me that the resource was
there; I'm not trying to use additional resources when it
crashes, but rather resources that I have already successfully
acquired.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Daniel Aarno

Jun 1, 2006, 4:57:35 PM
> That's not so correct. Here's a real life scenario that happened to me
> at work. The main component of product I was working on couldn't
> allocate 50K bytes of memory. That component also managed a queue of
> data it needed to release. One of the options was to catch
> std::bad_alloc and inside catch block to process the queue and release
> objects from memory. By doing that, I'd providing enough memory for
> application to keep running.
> Another application could use GC(Garbage Collection). It could be a
> good idea to make GC to start releasing unused objects inside
> catch(bad_alloc) block.

This is probably better handled by setting a new handler with
set_new_handler(), which gets called prior to throwing bad_alloc.

> Altough, I agree that probably most common behavior is to notify the
> user of such problem, log it and shutdown the systems.

This is what you as a programmer would do, not the STL.

/Daniel Aarno

ThosRTanner

Jun 1, 2006, 5:02:47 PM

Joshua Lehrer wrote:
> ThosRTanner wrote:
> > Note: I've come across one implementation of C++ where, given certain
> > memory constraints, when the computer runs out of memory, your heap
> > overwrites your stack. This is from people who should have known
> > better. But they won't change it because apparently speed is more
> > important than correct and predictable functioning of a program.
>
> This is the way that, in college, I was taught that computers are
> supposed to 'work'. The stack grows up, the heap grows down, and when
> they meet, you are in trouble.
Well, that isn't always the case. I know one system where the stack was
grown by mallocing extra space from the heap, and chaining blocks.
Worked very nicely, and stack overflow == memory exhaustion, not heap
corruption.

> I currently work on a platform where the stack grows down, but is
> limited to 1gig, and the heap grows up, and is also limited to 1 gig.
> This is problematic because we use very little stack space but lots of
> heap, and yet we are still limited to 1 and 1. With a floating
> separator between stack and heap, we could get 1.8 and 0.2, which is
> more like what we actually need.
>
> I guess another solution would have the heap and stack growing toward
> eachother. If the heap ever needed to grow and it detected it was too
> close to the stack, it would throw bad_alloc. However, what do you do
> when the stack approaches the heap? Is it allowed to throw bad_alloc
> at the point when it detects this? If not, what do you do? If so,
> isn't this a lot of overhead to calculate each time you need to grow
> the stack? Growing the stack is supposed to be fast and cheap (usually
> just an add).

It could just terminate; bad_alloc as the result of a call might cause
the runtime system some confusion.

My main objection was not to the stack overwriting the heap (which I
expect to be an issue), but the heap overwriting the stack. This
program on the system I am complaining about

#include <stdlib.h>

int main()
{
    while (malloc(1) != NULL) {}
    return 0;
}

will crash in malloc, because the heap eventually ends up overwriting
the stack, and some heap chain information gets overwritten by the
function calls involved in actually doing the malloc.

Nemanja Trifunovic

Jun 2, 2006, 6:48:33 PM

Martin Bonner wrote:
> In practice, when a Windows program runs out of memory,
> there is a good chance that it will take the whole system down with it.
> This is not a failure of the STL, but of all the drivers and other
> programs that can't cope with out of memory.

Nope. std::bad_alloc is thrown from a Windows program as well.

Markus Schoder

Jun 3, 2006, 7:48:19 AM
kanze wrote:

> Martin Bonner wrote:
> > Note that failing to throw std:bad_alloc doesn't mean the STL is
> > failing to adhere to the standard - there is a general get-out clause
> > for "resource limit exceeded" in the standard.
>
> It's debatable whether this clause applies when the standard
> provides an official means of signaling the error. In practice,
> however, when you're out of memory, you also stand a definite
> risk of stack overflow when calling operator new. Which is
> definitly covered by this clause.
>
> On the other hand, in the older versions of AIX (older, in this
> case, being those from more than 8 or 10 years ago), and some
> configurations of Linux, the system returns a valid pointer even
> when no memory is available; your program (or someone elses!)
> then crashes when it attempts to use the pointer. I find it
> hard to justify this under the "resource limits exceeded"
> clause, because the system has told me that the resource was
> there; I'm not trying to use additional resources when it
> crashes, but rather resources that I have already successfully
> acquired.

This was (not sure whether it still is, but the feature is definitely
still available) the default behaviour of the Linux kernel. It is
called memory overcommit. It causes every allocation to succeed so
long as it can be fitted into the address space of the process,
regardless of available memory (including virtual memory). Once you
start writing to the memory, the kernel starts mapping pages to physical
memory, which may fail. If that happens, the out-of-memory killer kicks
in and, through some heuristics, selects a process to kill. The killed
process never regains control and therefore cannot guard against this
at all. The killed process may not even be the one causing the out of
memory condition in the first place.

All of this sounds quite horrible, and of course this behaviour can be
switched off. However, as a matter of fact, there are memory allocation
and usage patterns that will work with overcommit and fail with an out
of memory error without overcommit. So for non-critical systems memory
overcommit can be a win.

kanze

Jun 3, 2006, 7:55:36 AM
ThosRTanner wrote:
> Joshua Lehrer wrote:
> > ThosRTanner wrote:

> > > Note: I've come across one implementation of C++ where,
> > > given certain memory constraints, when the computer runs
> > > out of memory, your heap overwrites your stack. This is
> > > from people who should have known better. But they won't
> > > change it because apparently speed is more important than
> > > correct and predictable functioning of a program.

> > This is the way that, in college, I was taught that
> > computers are supposed to 'work'. The stack grows up, the
> > heap grows down, and when they meet, you are in trouble.

> Well, that isn't always the case. I know one system where the
> stack was grown by mallocing extra space from the heap, and
> chaining blocks. Worked very nicely, and stack overflow ==
> memory exhaustion, not heap corruption.

Recently? I used such a system in the late 1980's (an
unofficial port of C to the Siemens BS2000 mainframes). The
effect on performance pretty much made it unusable -- a couple
of hundred machine instructions for a function call.

On the other hand, all of the HP/UX machines I've seen had a
stack which grew up, not down. The stack had a fixed maximum
size, with the heap (also growing up) directly above it.

> > I currently work on a platform where the stack grows down,
> > but is limited to 1gig, and the heap grows up, and is also
> > limited to 1 gig. This is problematic because we use very
> > little stack space but lots of heap, and yet we are still
> > limited to 1 and 1. With a floating separator between stack
> > and heap, we could get 1.8 and 0.2, which is more like what
> > we actually need.

> > I guess another solution would have the heap and stack
> > growing toward eachother. If the heap ever needed to grow
> > and it detected it was too close to the stack, it would
> > throw bad_alloc. However, what do you do when the stack
> > approaches the heap? Is it allowed to throw bad_alloc at
> > the point when it detects this? If not, what do you do? If
> > so, isn't this a lot of overhead to calculate each time you
> > need to grow the stack? Growing the stack is supposed to be
> > fast and cheap (usually just an add).

> It could just terminate, bad_alloc as the result of a call
> might cause the runtime system some confusion.

Yes. What usually happens is that the system will maintain some
guard pages between the heap and the stack. When you increase
the stack size, accesses to the new stack may cause page faults,
which the system then handles -- if there's room, and you're
authorized, it will allocate additional pages. Otherwise, it
will terminate the program (signal 11 in Solaris or Linux -- the
same as if you screwed up with a pointer).

> My main objection was not to the stack overwriting the heap
> (which I expect to be an issue), but the heap overwriting the
> stack. This program on the system I am complaining about

> int main()
> {
> while (malloc(1) != NULL) {};
> return 0;
> }

> will crash in malloc, because the heap eventually ends up
> overwriting the stack, and some heap chain information gets
> overwritten by the function calls involved in actually doing
> the malloc.

That's horrible. Heap allocation is controlled, so there is no
excuse for not checking.

--
James Kanze GABI Software
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Jeff Koftinoff

Jun 3, 2006, 10:02:40 AM

kanze wrote:
<snip>

>
> I've never experienced this. I know that Linux used to (and
> probably still does in some configurations) use lazy allocation,
> and would start terminating random processes when it ran out of
> memory -- AIX also had this problem in the distant past.

I can attest to the typical linux installation's lazy allocation
effectively making bad_alloc an exception that is almost never thrown!

For clarification for others, new and malloc will happily return
virtual memory pointers. The actual memory pages are only allocated
when you actually touch the data.

At this point, if there is a memory-full condition, i.e. all RAM and
swap is taken, then all the OS can do is start killing processes via
the 'OOM Killer'.

The tricky part here is that new and malloc are not the only places
where pages of virtual memory space are allocated. The stack is as
well.

So you can have the situation where malloc/new allocates almost all the
virtual memory space available. Then you clear all this memory. The
system is still running - until the next function call that takes
enough stack space to require a new memory page to be allocated for the
stack!

You can easily see this behaviour yourself by writing a program which
allocates a big chunk of memory, waits 20 seconds, and then clears it.
Run 10 of these at the same time and watch the malloc and new not fail
and watch some processes die asynchronously - sometimes processes not
related to your test program!
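
Something along these lines (a rough sketch; sizes and timing are
arbitrary, and it assumes a Linux machine):

#include <cstddef>    // std::size_t
#include <cstdio>     // std::printf
#include <cstdlib>    // std::malloc, std::free
#include <cstring>    // std::memset
#include <unistd.h>   // sleep()

int main()
{
    const std::size_t size = 512 * 1024 * 1024;   // 512 MB, adjust to taste
    char* p = static_cast<char*>(std::malloc(size));
    if (p == 0) {
        std::printf("malloc failed\n");           // with overcommit you may never see this
        return 1;
    }
    std::printf("malloc succeeded, sleeping...\n");
    sleep(20);
    std::memset(p, 0, size);    // touching the pages is what really commits them;
                                // this is where the OOM killer may strike
    std::printf("pages touched\n");
    std::free(p);
    return 0;
}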

So in effect, the STL 'compliance' of throwing bad_alloc is fully
dependent on the operating system.

On linux, one can change the kernel setting:
/proc/sys/vm/overcommit_memory

to avoid this, at the expense of much higher system memory usage.

People who are trying to write fail-safe, robust, secure programs need
to know about this behaviour and test all these corner cases. It makes
worrying about bad_alloc kind of pointless if the system almost never
throws it!

jeff koftinoff
je...@jdkoftinoff.com
www.jdkoftinoff.com

James Kanze

Jun 3, 2006, 8:36:04 PM

I think it is still the default behavior of the kernel. Various
distributions, however, may have it deactivated.

> It is called memory overcommit. This causes every allocation
> to succeed so long as it can be fitted into the address space
> of the process regardless of available memory (including
> virtual memory). Once you start writing to the memory the
> kernel starts mapping pages to physical memory which may fail.
> If that happens the out-of-memory killer kicks in and selects
> through some heuristics a process to kill. The killed process
> never regains control and therefore cannot guard at all
> against this. The killed process may not even be the one
> causing the out of memory condition in the first place.

I know. I thought that's what I said.

> All of this sounds quite horrible and of course this behaviour
> can be switched off however as a matter of fact there are
> memory allocation and usage patterns that will work with
> overcommit and fail with an out of memory error without
> overcommit. So for non critical systems memory overcommit can
> be a win.

There are a few very specific application domains where it is
preferable. Offering it can be considered a feature. Silently
implementing it, without telling anyone, and without providing a
means of turning it off (the case originally in both AIX and
Linux) can only be considered... well, I can't think of a word
bad enough for it.

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34

James Kanze

Jun 3, 2006, 8:36:56 PM
Jeff Koftinoff wrote:
> kanze wrote:
> <snip>
>> I've never experienced this. I know that Linux used to (and
>> probably still does in some configurations) use lazy
>> allocation, and would start terminating random processes when
>> it ran out of memory -- AIX also had this problem in the
>> distant past.

> I can attest to the typical linux installation's lazy
> allocation effectively making bad_alloc an exception that is
> almost never thrown!

> For clarification for others, new and malloc will happily
> return virtual memory pointers. The actual memory pages are
> only allocated when you actually touch the data.

> At this point, if there is a memory full condition, ie: all
> ram and swap is taken, then all the O/S can do is start
> killing processes via the 'OOM Killer'.

> The tricky part here is that new and malloc are not the only
> places where pages of virtual memory space are allocated. The
> stack is as well.

And fork, and possibly some other operations as well. (Does
pthread_create use sbrk/brk to allocate stacks, or some other
mechanism?) And of course, it's quite possible to write a
version of malloc or operator new which doesn't use space in
virtual memory -- say by creating a file and mmapping it.

The really tricky part, usually, is finding out what your system
does, because this sort of thing is usually very poorly
documented.

> So in effect, the STL 'compliance' of throwing bad_alloc is fully
> dependent on the operating system.

Which was my point. More generally, even outside of the STL,
you generally cannot count on bad_alloc -- you have to take
draconian, system dependant (and usually undocumented) actions
to ensure getting it when you run out of memory.

> On linux, one can change the kernel setting:
> /proc/sys/vm/overcommit_memory

> to avoid this, at the expense of much higher system memory usage.

> People who are trying to write fail-safe, robust, secure
> programs need to know about this behavour and test all these
> corner cases. It makes worrying about bad_alloc kind of
> pointless if the system almost never throws it!

Note that they know about this behavior *because* they test all
those corner cases. It's far from obvious to the typical Linux
user that overcommit is active, and it's even less obvious how
to turn it off. And in communities where robust, secure
programs are the rule, the programmers usually refuse to believe
that any system would be so dumb as to use it, until they
encounter the case in their tests.

--
James Kanze kanze...@neuf.fr


Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung

9 place Sémard, 78210 St.-Cyr-l'École, France +33 (0)1 30 23 00 34
