
Minimizing Dynamic Memory Allocation


ma740988

Jan 24, 2009, 3:32:15 PM
I'm perusing the coding standards put together by hired help and I'm
having a hard time making it pass section 2.1. Section 2.1 reads

-------
Section 2.1 Minimizing Dynamic Memory Allocation
Avoid the use of C's malloc and free. Instead use (sparingly) C++'s
new and delete operators. To avoid memory leaks, a clear
understanding of resource acquisition (allocation and release) is
important. To avoid leaks, all classes should include a destructor
that releases any memory allocated by the class' constructor. To
ensure that destructors are called, class constructors should be
declared

foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }

and not

foo::foo ( a, b ) { a = ...; b = ...; ... }

because failure in the constructor after initialization of b will call
the destructors for a and b in the first definition, but not in the
second; thus in the second there is a potential memory leak.

Finally, always set a 'new_handler' using the built-in function
set_new_handler. The default new_handler terminates the program when
it cannot satisfy a memory request. Program termination at a critical
time may be disastrous.
-------
For starters, I'd be lying if I said I've never opted for the 'and
not' form on occasion.
Number 2: Why would a failure in the second form bypass the
destructors for a and b? If I don't have a fully constructed foo
_after_ the initialization of b, the destructors for a and b ought to
be called in both definitions.

Thanks

Kai-Uwe Bux

Jan 24, 2009, 4:04:50 PM
ma740988 wrote:

> I'm perusing the coding standards put together by hired help and I'm
> having a hard time making it pass section 2.1. Section 2.1 reads
>
> -------
> Section 2.1 Minimizing Dynamic Memory Allocation
> Avoid the use of C's malloc and free. Instead use (sparingly) C++'s
> new and delete operators. To avoid memory leaks, a clear
> understanding of resource acquisition (allocation and release) is
> important. To avoid leaks, all classes should include a destructor
> that releases any memory allocated by the class' constructor. To
> ensure that destructors are called, class constructors should be
> declared
>
> foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }
>
> and not
>
> foo::foo ( a, b ) { a = ...; b = ...; ... }
>
> because failure in the constructor after initialization of b will call
> the destructors for a and b in the first definition, but not in the
> second; thus in the second there is a potential memory leak.

I disagree with this coding guideline. As far as memory leaks and pointer
members are concerned, the "and not" option seems the way to go. Consider

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j )
        : a ( new int ( i ) )
        , b ( new int ( j ) )
    {}
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

This leaks the pointee for a if the allocation for b throws: the destructor
for the pointer a will be called, but that destructor does not deallocate
the memory pointed to. You get a similar problem with class pointers instead
of int pointers when the constructor for the b pointee throws.

In order to avoid this, auto_ptr is the intended way:

class foo {
    int * a;
    int * b;
public:
    foo ( int i, int j ) {
        std::auto_ptr<int> aa ( new int ( i ) );
        std::auto_ptr<int> bb ( new int ( j ) );
        a = aa.release();
        b = bb.release();
    }
    ~foo ( void ) {
        delete a;
        delete b;
    }
};

Note: you do not need to initialize the pointers a and b to 0, because if
the constructor for foo fails, the destructor ~foo() will not be called,
which means we never delete the possibly uninitialized pointers.

I don't see how you can initialize the pointers a and b in the initializer
list to their final values _without_ the problem of a possible leak (unless
you use a smart pointer of some flavor).


> Finally, always set a 'new_handler' using the built-in function
> set_new_handler. The default new_handler terminates the program when
> it cannot satisfy a memory request.

Incorrect: by default, a new expression throws an exception (std::bad_alloc)
if memory allocation fails. Only if that exception goes uncaught is the
program terminated.

> Program termination at a critical time may be disastrous.

BTW: The program running in an inconsistent state is probably more
disastrous (do you want a runaway program in a bank to buy tons of stock
options, or to crash?).

> -------
> For starters, I'd be lying if I said I've never opted for the 'and
> not' condition on occasions.
> Number 2: Why would a failure in second condition bypass the
> destructor for a and b? If I don't have a fully constructed foo
> _after_ the initialization of b, the destructor for a and b ought to
> be called for both definitions.

Yes: I think if a and b are of class type, the constructors for the
subobjects a and b will _always_ be called before the body of the
foo-constructor is entered. Only, if you omit the initializer list, the
default constructors will be called. If the constructor for foo fails in
the body, all constructed subobjects will be destroyed properly.


Best

Kai-Uwe Bux

Alf P. Steinbach

Jan 24, 2009, 4:11:27 PM
* ma740988:

> I'm perusing the coding standards put together by hired help

Buy Andrei Alexandrescu and Herb Sutter's C++ coding guidelines book ("C++
Coding Standards"), one per developer.


> and I'm
> having a hard time making it pass section 2.1. Section 2.1 reads
>
> -------
> Section 2.1 Minimizing Dynamic Memory Allocation
> Avoid the use of C's malloc and free.

Good advice for general C++, but of course there are no absolutes, e.g.,
sometimes some library routine will require you to 'malloc' some argument or
'free' some result.


> Instead use (sparingly) C++'s
> new and delete operators.

Also good advice, including that word 'sparingly'.


> To avoid memory leaks, a clear
> understanding of resource acquisition (allocation and release) is
> important.

This is true.


> To avoid leaks, all classes should include a destructor
> that releases any memory allocated by the class' constructor.

This is just dumb advice.

Use types that manage their own memory.

Use smart pointers.


> To
> ensure that destructors are called, class constructors should be
> declared
>
> foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }
>
> and not
>
> foo::foo ( a, b ) { a = ...; b = ...; ... }
>
> because failure in the constructor after initialization of b will call
> the destructors for a and b in the first definition, but not in the
> second; thus in the second there is a potential memory leak.

This is incorrect.

However, it's generally a good idea to use initializer lists rather than
assignments.

See the FAQ.


> Finally, always set a 'new_handler' using the built-in function
> set_new_handler.

This is good advice.


> The default new_handler terminates the program when
> it cannot satisfy a memory request.

This is incorrect.


> Program termination at a critical
> time may be disastrous.

And this is just dumb.

The reason you should set a new_handler is in order to override the default
exception throwing behavior and instead let it terminate the program.

Because that's about the only sane way to deal with memory exhaustion.


> -------
> For starters, I'd be lying if I said I've never opted for the 'and
> not' condition on occasions.

?


> Number 2: Why would a failure in second condition bypass the
> destructor for a and b? If I don't have a fully constructed foo
> _after_ the initialization of b, the destructor for a and b ought to
> be called for both definitions.

Right.


Cheers, & hth.,

- Alf

pa...@lib.hu

Jan 25, 2009, 12:31:21 PM
IMO that section is at least fishy. I suggest you read other
standards, especially ones with rationale and references. (Sutter/
Alexandrescu: C++ Coding Standards is a good start.)

Just a few fast comments:
- using new instead of malloc does not reduce the number of dynamic
allocations a single bit; the preference for new is based on different
stuff, and the real motivation is to use neither -- but string, vector,
list, etc.

- writing destructors is strictly discouraged -- instead you shall use
RAII members and empty/autogenerated dtors.

- in the second example, the dtors of a and b are definitely called.

- set_new_handler is hyper-advanced stuff for those who can write
project standards, not just read them.

anon

Jan 26, 2009, 4:38:37 AM
Kai-Uwe Bux wrote:
> ma740988 wrote:
[snip]

I was wondering: why didn't you suggest this right away?

class foo {
    std::auto_ptr< int > a;
    std::auto_ptr< int > b;
public:
    foo ( int i, int j )
        : a( new int ( i ) )
        , b( new int ( j ) )
    {}
    ~foo ( void ) {}
};

James Kanze

Jan 26, 2009, 5:46:42 AM
On Jan 24, 10:11 pm, "Alf P. Steinbach" <al...@start.no> wrote:
> * ma740988:

[...]


> > and I'm having a hard time making it pass section 2.1.
> > Section 2.1 reads

> > -------
> > Section 2.1 Minimizing Dynamic Memory Allocation
> > Avoid the use of C's malloc and free.

> Good advice for general C++, but of course there are no
> absolutes, e.g., sometimes some library routine will require
> you to 'malloc' some argument or 'free' some result.

Then apply rule 0 (or what should be rule 0 in any reasonable
coding guidelines): any other rule in the guideline can be
violated if there is a justifiable reason for doing so. This
reason must be documented in the code (comment, etc.), and
approved by...

They're called guidelines, and not laws, for a reason. In the
end, if it passes code review, it's OK. Violating a guideline
is a justifiable reason for the reviewers to reject it, but
they aren't required to do so, if they're convinced that the
violation was justified.

Things that can be automatically checked (like use of
malloc/free) may require some sort of special comment to turn
the checking off in a particular case.

> > Instead use (sparingly) C++'s new and delete operators.

> Also good advice, including that word 'sparingly'.

For a definition of 'sparingly' which depends on the
application. I can easily imagine numeric applications where no
explicit use of new or delete is appropriate. And I've worked
on server applications where almost everything was allocated
dynamically, by necessity. The real rule is more complex; it
implies understanding what you are doing. Some simple rules can
be applied, however:

-- Never use dynamic allocation if the lifetime corresponds to
a standard lifetime and the type and size are known at
compile time.

-- If the object type doesn't require identity, and the
lifetime does not correspond to a standard lifetime, copy it
rather than using dynamic allocation, unless the profiler
says otherwise.

> > To avoid memory leaks, a clear understanding of resource
> > acquisition (allocation and release) is important.

> This is true.

> > To avoid leaks, all classes should include a destructor
> > that releases any memory allocated by the class' constructor.

> This is just dumb advice.

And it contradicts the advice immediately above: "have a clear
understanding of resource acquisition".

> Use types that manage their own memory.

Which is exactly what the "advice" you just called dumb said.

There are two (and I think only two) cases where dynamic
allocation is appropriate: the lifetime of the object doesn't
correspond to a standard lifetime, or the size or type aren't
known at compile time. In the second case, having a type which
manages its own memory, and frees it in the destructor, is
definitely the preferred solution. In the first case (by far the
most prevalent in my applications), it really doesn't mean
anything.

> Use smart pointers.

Whatever? Why does making the managing class look like a
pointer change anything? (There are cases where smart pointers
are appropriate, but they don't solve all problems.)

> > To ensure that destructors are called, class constructors
> > should be declared

> > foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }

> > and not

> > foo::foo ( a, b ) { a = ...; b = ...; ... }

> > because failure in the constructor after initialization of b
> > will call the destructors for a and b in the first
> > definition, but not in the second; thus in the second there
> > is a potential memory leak.

> This is incorrect.

> However, it's generally a good idea to use initializer lists
> rather than assignments.

> See the FAQ.

See also the response by Kai-Uwe. Just using initializer lists
can make things worse.

A good general rule is to require any single class to be
responsible for at most one resource. If a class needs several,
it should spawn the management of each off into a separate base
class or member. (There are times when this simplifies life
even when the derived class only needs a single resource. See
the g++ implementation of std::vector, for example.)

> > Finally, always set a 'new_handler' using the built-in
> > function set_new_handler.

> This is good advice.

> > The default new_handler terminates the program when it
> > cannot satisfy a memory request.

> This is incorrect.

> > Program termination at a critical time may be disastrous.

> And this is just dumb.

More to the point, it doesn't mean anything. And it's not
implementable, at least not reasonably. If the OS crashes, for
example, the program will terminate, or it may terminate because
of a hardware failure. The total system must be so designed
that this isn't "disastrous" (i.e. it doesn't result in a loss
of lives or destruction of property).

> The reason you should set a new_handler is in order to
> override the default exception throwing behavior and instead
> let it terminate the program.

> Because that's about the only sane way to deal with memory
> exhaustion.

*That* depends largely on the application. In many cases,
you're right, but certainly not all, and there are applications
which can (and must) recover correctly from insufficient memory.
(It's tricky to get right, of course, since you also have to
protect against things like stack overflow and buggy systems
which tell you you've got memory when you haven't.)

--
James Kanze (GABI Software) email:james...@gmail.com
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

Christof Donat

Jan 26, 2009, 7:50:05 AM
Hi,

> - writing destructors is strictly discouraged

Huh, that is new to me. Where did you get that from?

> -- instead you shall use
> RAII members and empty/autogenerated dtors

Are you sure you really are familiar with all the implications of RAII?

Christof


pa...@lib.hu

Jan 26, 2009, 8:07:44 AM

I'm positive :)
Read James' recent post in this thread for details (around "A good
general rule is to require any single class to be responsible for at
most one resource. ...").

See also the "Single responsibility principle".


Bart van Ingen Schenau

Jan 26, 2009, 9:11:54 AM

It seems you seriously misunderstand what it means to manage a
resource.
If a class is responsible for managing a resource (a single resource
preferably), then that class most definitely needs a destructor to
clean-up after itself.

The only classes that don't need a user-written destructor are those
that don't need to do any cleanup when they are destroyed. This is
typically because those classes don't manage any resources.
Depending on what you are writing, this could be a majority of the
classes, but it still isn't enough to say that you should try to avoid
writing destructors.

Bart v Ingen Schenau

pa...@lib.hu

Jan 26, 2009, 10:26:38 AM
And you miss the separation of responsibilities.
We have RAII/RIID/manager/etc. classes -- those have the responsibility
to manage a single resource. Examples: auto_ptr, string, vector, CFile,
CSingleLock, shared_ptr. They normally reside in libraries, ready to
use. Occasionally there is some really new resource to cover, but that
is more the exception.

Then there are the normal user classes this whole discussion is about.
They, instead of attempting to manage resources directly, shall use the
classes mentioned before -- as members, base classes, locally, etc.
You do that, and need not write a destructor (or cctor, op=), because
cleanup is properly done.

Most real-life problems come when a user class instead starts using raw
pointers or raw resources, obviously messing up somewhere the first 5+
times, and only eventually discovering a manager class.

Care to point out what I miss? ;-)

Pete Becker

Jan 26, 2009, 11:43:34 AM
On 2009-01-26 09:11:54 -0500, Bart van Ingen Schenau
<Bart.van.In...@ict.nl> said:

Several posts have overreacted to the original, slightly vague, post.
Here's what it means:

1. Use RAII for management of individual resources (this implies, of
course, writing destructors that clean up those resources).
2. When a class uses multiple resources, use manager classes as defined
above for each of those resources. The default destructor for the new
class will then work just fine; no need to write your own destructor.

--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of "The
Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)

Pete Becker

Jan 26, 2009, 11:43:57 AM

The text that you're replying to.

Kai-Uwe Bux

Jan 26, 2009, 11:45:04 AM
anon wrote:

Because of the context you snipped: I wanted to show how to initialize
pointer members, not how to replace those by other types that do their
own memory management. Clearly, the coding guidelines steered me too far
away from RAII. After all, in that case, I would just do:

class foo {
    int a;
    int b;
public:
    ...
};

Nonetheless, you are right that looking for the right smart pointer (or any
other class managing its own memory) can be a good idea.


Best

Kai-Uwe Bux

peter koch

Jan 26, 2009, 2:12:04 PM
Actually I agree more or less with the OP's statement. Not because you
should not write destructors in general but because the principle that
classes should do only one thing typically leads to that design.

/Peter

pa...@lib.hu

Jan 26, 2009, 3:25:37 PM
> Several posts have overreacted to the original, slightly vague, post.

IMNSHO a vague guideline has negative value. The one we're talking
about is worse; it is strictly incorrect in its statements (i.e.
claiming dtors are not called).

Sure, you (and I and many other mentors) can write down what it
*should* (or even was trying to) mean, but the readers have the actual
text.

It did confuse the original poster and was fairly criticised by others --
let's admit it failed the mark set for a "good guideline": something
providing clear leading, raising no questions or eyebrows.

Pete Becker

Jan 26, 2009, 3:39:59 PM
On 2009-01-26 15:25:37 -0500, pa...@lib.hu said:

>> Several posts have overreacted to the original, slightly vague, post.
>
> IMNSHO a vague guideline has negative value. The one we're talking
> about is worse; it is strictly incorrect in its statements (i.e.
> claiming dtors are not called).

Huh? What guideline is that? The one that you snipped?

>
> Sure you (and I and many other mentors) can write down what it
> *should* (or even was trying to) mean, but the readers have the actual
> text.
>
> It did confuse the original poster and was fairly criticised by others --
> let's admit it failed the mark set for a "good guideline": something
> providing clear leading, raising no questions or eyebrows.

Sigh. I don't think the intention was to provide a "good guideline",
but to participate in the discussion.

tonytech08

Jan 27, 2009, 1:38:31 AM
On Jan 25, 11:31 am, p...@lib.hu wrote:

> - set_new_handler is hyper-advanced stuff fro those who can write
> project standards, not just reading them.

If _I_ "get it", I assure you it is not "hyper-advanced". In the
simplest scenario, set_new_handler is used to avoid exceptions in
codebases or application programs where exceptions are an off-limits
subsystem. Simple as pie. (Or "bad analogy", depending on "which side
of the tracks" your mind is on? ;)).


Alf P. Steinbach

unread,
Jan 27, 2009, 2:50:26 AM1/27/09
to
* James Kanze:

> On Jan 24, 10:11 pm, "Alf P. Steinbach" <al...@start.no> wrote:
>> * ma740988:
>
> [...]
>>> and I'm having a hard time making it pass section 2.1.
>>> Section 2.1 reads
>
>>> -------
>>> Section 2.1 Minimizing Dynamic Memory Allocation
>>> Avoid the use of C's malloc and free.
>
>> Good advice for general C++, but of course there are no
>> absolutes, e.g., sometimes some library routine will require
>> you to 'malloc' some argument or 'free' some result.
>
> Then apply rule 0 (or what should be rule 0 in any reasonable
> coding guidelines): any other rule in the guideline can be
> violated if there is a justifiable reason for doing so. This
> reason must be documented in the code (comment, etc.), and
> approved by...
>
> They're called guidelines, and not laws, for a reason. In the
> end, if it passes code review, it's OK. Violating a guideline
> is a justifiable reason for the reviewers to reject it, but
> they aren't required to do so, if they're convinced that the
> violation was justified.

Code reviews are less often performed in the real world than you seem to think.

On the other hand, your idea of a rule 0 in any coding guideline is a very good one.

For if programming could be reduced to a simple set of mechanical rules it would
have been so reduced, long ago.


> Things that can be automatically checked (like use of
> malloc/free) may require some sort of special comment to turn
> the checking off in a particular case.

It seems you're envisioning some automated code-guideline-conformity checker.

Given the level of the OP's firm's new (proposed) guidelines that's an entirely
unreasonable supposition.

There's also the point that using such a checker would amount to having
given up on quality, focusing instead on enforcing unthinking rule
conformity, and that kind of conformity is not a good idea when what you
need is intelligent, creative staff.


>>> Instead use (sparingly) C++'s new and delete operators.
>
>> Also good advice, including that word 'sparingly'.
>
> For a definition of 'sparingly' which depends on the
> application. I can easily imagine numeric applications where no
> explicit use of new or delete is appropriate. And I've worked
> on server applications where almost everything was allocated
> dynamically, by necessity.

Which does not mean it's a good idea to pepper the code with 'new' calls.

Centralize that functionality.


> The real rule is more complex; it
> implies understanding what you are doing. Some simple rules can
> be applied, however:
>
> -- Never use dynamic allocation if the lifetime corresponds to
> a standard lifetime and the type and size are known at
> compile time.

boost::scoped_ptr, bye bye. :-)

But I'm sure you're thinking of some given application domain familiar to you.


> -- If the object type doesn't require identity, and the
> lifetime does not correspond to a standard lifetime, copy it
> rather than using dynamic allocation, unless the profiler
> says otherwise.
>
>>> To avoid memory leaks, a clear understanding of resource
>>> acquisition (allocation and release) is important.
>
>> This is true.
>
>>> To avoid leaks, all classes should include a destructor
>>> that releases any memory allocated by the class' constructor.
>
>> This is just dumb advice.
>
> And it contradicts the advice immediately above: "have a clear
> understanding of resource acquisition".
>
>> Use types that manage their own memory.
>
> Which is exactly what the "advice" you just called dumb said.

Nope.

The "advice" in the proposed guidelines was to define a destructor in every
class. That's dumb on its own, as opposed to using resource-specific management
wrappers such as smart pointers and standard library container classes. And it's
even more dumb, downright catastrophic advice, when the author of the guidelines
isn't familiar with rule of 3 and forgets to mention it here.

The opposite of that approach is to use types that manage their own memory,
i.e., the opposite of mindless duplication and continuous re-invention of
resource management is to centralize it.


> There are two (and I think only two) cases where dynamic
> allocation is appropriate: the lifetime of the object doesn't
> correspond to a standard lifetime, or the size or type aren't
> known at compile time. In the second case, having a type which
> manages its own memory, and frees it in the destructor, is
> definitely the prefered solution. In the first case (by far the
> most prevalent in my applications), it really doesn't mean
> anything.
>
>> Use smart pointers.
>
> Whatever? Why does making the managing class look like a
> pointer change anything?

Making the managing class look like a pointer does change a few things, mainly
because of the language's operator-> support, but that change has nothing to do
with what was discussed.


> (There are cases where smart pointers
> are appropriate, but they don't solve all problems.)

Right, ANY LANGUAGE FEATURE OR TECHNIQUE is appropriate in some cases and
inappropriate in some other cases.


>>> To ensure that destructors are called, class constructors
>>> should be declared
>
>>> foo::foo ( a, b ) : a ( ... ), b ( ... ) { .... }
>
>>> and not
>
>>> foo::foo ( a, b ) { a = ...; b = ...; ... }
>
>>> because failure in the constructor after initialization of b
>>> will call the destructors for a and b in the first
>>> definition, but not in the second; thus in the second there
>>> is a potential memory leak.
>
>> This is incorrect.
>
>> However, it's generally a good idea to use initializer lists
>> rather than assignments.
>
>> See the FAQ.
>
> See also the response by Kai-Uwe. Just using initializer lists
> can make things worse.

It seems somebody has argued that "just using initializer lists" will correct
"things", presumably some problem.

But why are you remarking on that to me?


> A good general rule is to require any single class to be
> responsible for at most one resource. If a class needs several,
> it should spawn the management of each off into a separate base
> class or member. (There are times when this simplifies life
> even when the derived class only needs a single resource. See
> the g++ implementation of std::vector, for example.)

As a general rule I agree with that. Good advice.

Hm, I'd like to see any application that, from a modern point of view, can deal
safely with a std::bad_alloc other than by terminating.

However, if you look up old clc++m threads you'll perhaps find the long one
where I argued with Dave Abrahams that one reasonable approach, for some
applications, is to use "hard" exceptions in order to clean up upwards in the
call stack before terminating (the problem being that C++ lacks support for
"hard" exceptions, so that the scheme would have to rely on convention, which is
always brittle).

E.g. it might be possible to save an open document to disk using only finite
pre-allocated resources.

But still I think the only sane way to deal with memory exhaustion is to
terminate, whether it happens more or less directly at the detection point or
via some "hard" exception.

Then it may perhaps be possible to do a "bird of Phoenix" thing, as e.g. the
Windows Explorer shell often does, or perhaps just a reboot. Or one might have
three machines running and voting, as in the space shuttle. Or whatever, but
continuing on when memory has been exhausted is, IMHO, simply a Very Bad Idea.


Cheers, :-)

- Alf

Bart van Ingen Schenau

Jan 27, 2009, 4:44:08 AM
On Jan 26, 4:26 pm, p...@lib.hu wrote:
> And you miss the separation of responsibilities.
> We have RAII/RIID/manager/etc classes -- that have responsibility to
> manage a single resource. Examples: auto_ptr, string, vector, CFile,
> CSingleLock, shared_ptr. They normally reside in libraries ready to
> use. Occasionally there is some really new resource to cover, but
> it is more like exceptional.

No, I don't miss the separation of responsibilities. But I try to
write my coding standards in such a way that they can be applied
equally to both the common, non-resource-owning user classes and the
more exceptional resource manager classes.

Although every coding standard I know has a clause that you may break
the rules if you have a sufficiently good reason, you still have to
explain yourself in a code review and might even have to get formal
management sign-off on the deviation. That will be enough of a hassle
for some programmers that they will just not write their resource
managers in a RAII fashion.
For that reason I won't actively discourage the writing of
destructors, but instead encourage practices like separation of
responsibilities, re-use of existing components and creating classes
with RAII semantics.

Bart v Ingen Schenau

James Kanze

Jan 27, 2009, 5:00:01 AM

Probably. But without code reviews, there's no point in having
coding guidelines to begin with.

> On the other hand, your idea of a rule 0 in any coding
> guideline is a very good one.

It's interesting, because it is one of the first things I read
about coding guidelines.

> For if programming could be reduced to a simple set of
> mechanical rules it would have been so reduced, long ago.

Yep. And we'd be out of work (or at least paid a lot, lot
less):-).

> > Things that can be automatically checked (like use of
> > malloc/free) may require some sort of special comment to
> > turn the checking off in a particular case.

> It seems you're envisioning some automated
> code-guideline-conformity checker.

Yes and no. If some of the conformity checking is automated,
then you need some special handling for cases where rule 0 was
applied. (I've actually seen it done.)

> Given the level of the OP's firm's new (proposed) guidelines
> that's an entirely unreasonable supposition.

I have no idea what is going on in the OP's organization. But
I've seen some very badly organized places try to use automated
tools to impose bad rules. My point is just that if automated
checking is envisaged, then some provision for rule 0 must be
made.

> There's also the point that using such a checker would amount
> to have given up on quality and instead focusing on enforcing
> unthinking rule conformity, and that kind of conformity is not
> a good idea when what you need is intelligent, creative staff.

Some degree of rule conformity is a good idea, although it
certainly doesn't replace an intelligent, creative staff. And
it's always a good idea to automate what you can. What little
experience I've had with automated rule checking has been very
positive, but it was with a very good organization, which
understood the software development process very well. I've no
doubt that it can be seriously misused, however.

> >>> Instead use (sparingly) C++'s new and delete operators.

> >> Also good advice, including that word 'sparingly'.

> > For a definition of 'sparingly' which depends on the
> > application. I can easily imagine numeric applications
> > where no explicit use of new or delete is appropriate. And
> > I've worked on server applications where almost everything
> > was allocated dynamically, by necessity.

> Which does not mean it's a good idea to pepper the code with
> 'new' calls.

> Centralize that functionality.

Agreed. There should normally be few instances of new
expressions, even in applications where most objects are
dynamically allocated.

I'm sure that there are exceptions to even this, of course. My
real point is not that you are wrong, per se, but that the
definition of 'sparingly' will depend on the application.

> > The real rule is more complex; it implies understanding what
> > you are doing. Some simple rules can be applied, however:

> > -- Never use dynamic allocation if the lifetime corresponds to
> > a standard lifetime and the type and size are known at
> > compile time.

> std::scoped_ptr, bye bye. :-)

If the type and size are not known at compile time, it's
actually a very good solution. If the type and size are known,
and the lifetime corresponds to the scope, why would you
dynamically allocate?

> But I'm sure you're thinking of some given application domain
> familiar to you.

In this case, I think it's a general rule, applicable to all
applications. If the type and size are known at compile time,
and the lifetime corresponds to standard lifetime (i.e. a
lifetime defined by the language standard), then you should not
allocate dynamically.

> > -- If the object type doesn't require identity, and the
> > lifetime does not correspond to a standard lifetime, copy it
> > rather than using dynamic allocation, unless the profiler
> > says otherwise.

> >>> To avoid memory leaks, a clear understanding of resource
> >>> acquisition (allocation and release) is important.

> >> This is true.

> >>> To avoid leaks, all classes should include a destructor
> >>> that releases any memory allocated by the class' constructor.

> >> This is just dumb advice.

> > And it contradicts the advice immediately above: "have a clear
> > understanding of resource acquisition".

> >> Use types that manage their own memory.

> > Which is exactly what the "advice" you just called dumb said.

> Nope.

> The "advice" in the proposed guidelines was to define a
> destructor in every class.

Where did you see that? I don't see anything about "defining a
destructor". Just ensuring that the class has one (which all
classes do), and that it deletes any memory allocated in the
constructor. The obvious way of achieving that would be for all
of the pointers to memory allocated by the class to be scoped_ptr.

> That's dumb on its own, as opposed to using resource-specific
> management wrappers such as smart pointers and standard
> library container classes. And it's even more dumb, downright
> catastrophic advice, when the author of the guidelines isn't
> familiar with rule of 3 and forgets to mention it here.

Hopefully, there would be other rules, concerning when and how
to support copy and assignment. Given the way this rule is
formulated, however, I can understand your doubts about this.
But they would be "other rules", and not necessarily part of
this one.

> The opposite of that approach is to use types that manage
> their own memory, i.e., the opposite of mindless duplication
> and continuous re-invention of resource management is to
> centralize it.

The necessity of requiring memory management to be pushed off
into a separate class doesn't seem correct to me. What's
important is to use standard and well understood idioms
(patterns). Thus, if you're using the compilation firewall, the
standard idiom is to provide a non-inline destructor which does
the delete---any attempt to use smart pointers is likely to get
you into problems with instantiating templates over an
incomplete type. (Such problems can be avoided by careful
choice of the pointer type, or by implementing one of your own.
But why bother?)

Again: there are three basic reasons to use dynamic allocation:
the size isn't known at compile time, the type isn't known at
compile time, and the lifetime of the object doesn't correspond
to a standard lifetime. The first is (usually, at least) best
handled by using a standard container, e.g. std::vector. The
second is (usually) best handled by a smart pointer (e.g.
boost::scoped_ptr), and the third needs explicit management---it
may be possible to design a smart pointer to handle it in
explicit cases, but not generally.

Beyond that, there are a few special idioms which require
dynamic allocation for other reasons---things like the
compilation firewall idiom (which is really a variant of not
knowing the exact type, or the details of the type, at least
when compiling client code) or the singleton pattern (but
usually, you're also trying to obtain a non-standard lifetime,
e.g. the destructor is never called---otherwise, you use a local
static). But they're just that, established and well known
patterns.

[...]


> > A good general rule is to require any single class to be
> > responsible for at most one resource. If a class needs several,
> > it should spawn the management of each off into a separate base
> > class or member. (There are times when this simplifies life
> > even when the derived class only needs a single resource. See
> > the g++ implementation of std::vector, for example.)

> As a general rule I agree with that. Good advice.

Finally:-).

I think our only difference (here, at least) is that you seem to
be suggesting that this class should usually look like a
pointer. Where as I find that the cases where it should look
like a pointer aren't all that frequent.

> >> This is good advice.

> >> This is incorrect.

A well written, robust LDAP server. There's absolutely no
reason why it should crash or otherwise terminate just because
some idiot client sends an impossibly complicated request. And
there's really no reason why it should artificially limit the
complexity of the request.

> However, if you look up old clc++m threads you'll perhaps find
> the long one where I argued with Dave Abrahams that one
> reasonable approach, for some applications, is to use "hard"
> exceptions in order to clean up upwards in the call stack
> before terminating (the problem being that C++ lacks support
> for "hard" exceptions, so that the scheme would have to rely
> on convention, which is always brittle).

> E.g. it might be possible to save an open document to disk
> using only finite pre-allocated resources.

> But still I think the only sane way to deal with memory
> exhaustion is to terminate, whether it happens more or less
> directly at the detection point or via some "hard" exception.

> Then it may perhaps be possible to do a "bird of Phoenix"
> thing, as e.g. the Windows Explorer shell often does, or
> perhaps just a reboot. Or one might have three machines
> running and voting, as in the space shuttle. Or whatever, but
> continuing on when memory has been exhausted is, IMHO, simply
> a Very Bad Idea.

What about a server whose requests can contain arbitrary
expressions (e.g. in ASN.1 or XML---both of which support
nesting)? The server parses the requests into a tree; since the
total size and the types of the individual nodes aren't known at
compile time, it must use dynamic allocation. So what happens
when you receive a request with literally billions of nodes? Do
you terminate the server? Do you artificially limit the number
of nodes so that you can't run out of memory? (But the limit
would have to be unreasonably small, since you don't want to
crash if you receive the requests from several different clients,
in different threads.) Or do you catch bad_alloc (and stack
overflow, which requires implementation-specific code), free up
the already allocated nodes, and return an "insufficient
resources" error?

It's not a typical case, and most of the time, I agree with you.
But such cases do exist, and if the OP's organization is dealing
with one, then handling bad_alloc can make sense.

James Kanze

Jan 27, 2009, 5:04:44 AM
On Jan 26, 3:11 pm, Bart van Ingen Schenau

<Bart.van.Ingen.Sche...@ict.nl> wrote:
> On Jan 26, 2:07 pm, p...@lib.hu wrote:

> > >> - writing destructors is strictly discouraged
> > >Huh, that is new to me. Where did you get that from.
> > >> -- instead you shall use
> > >> RAII members and empty/autogenerated dtors
> > >Are you sure you really are familiar with all implications of RAII?

> > I'm positive :)
> > Read James' recent post in this thread for details. (around
> > "A good general rule is to require any single class to be
> > responsible for at most one resource. ...")

And that class requires a user defined destructor. Writing
destructors is encouraged, not discouraged.

> > See also the "Single responsibility principle"

> It seems you seriously misunderstand what it means to manage a
> resource. If a class is responsible for managing a resource
> (a single resource preferably), then that class most
> definitely needs a destructor to clean-up after itself.

> The only classes that don't need a user-written destructor are
> those that don't need to do any cleanup when they are
> destroyed. This is typically because those classes don't
> manage any resources.

Or make any changes to program state. Most of my destructors
are concerned with roll back, not freeing resources.

> Depending on what you are writing, this could be a majority of
> the classes, but it still isn't enough to say that you should
> try to avoid writing destructors.

Note too that the compiler generated destructor is inline. I
tend to write a lot of destructors because of this as well.

James Kanze

Jan 27, 2009, 5:18:04 AM
On Jan 26, 4:26 pm, p...@lib.hu wrote:

The fact that library classes only handle general solutions, and
rarely cover the exact case you need. As time goes on, your
personal library will become more and more complete. For your
specific application domain and programming style.

Some things are well covered by the library: if the size isn't
known at compile time, std::vector, std::string et al. seem to
cover most of the cases. If the type isn't known, then you can
consider boost::scoped_ptr, but this still requires that the
pointed to type be completely defined in the header file, where
as a simple pointer doesn't, and if there is only one such
pointer, it really doesn't buy us that much. And in many cases,
clean up will be more complicated than simply calling delete.

Alf P. Steinbach

Jan 27, 2009, 5:29:36 AM
* James Kanze:

>
>>> -- Never use dynamic allocation if the lifetime corresponds to
>>> a standard lifetime and the type and size are known at
>>> compile time.
>
>> std::scoped_ptr, bye bye. :-)
>
> If the type and size are not known at compile time, it's
> actually a very good solution. If the type and size are known,
> and the lifetime corresponds to the scope, why would you
> dynamically allocate?

Sometimes just for convenience. But it may be that the class in question is
designed for dynamic allocation only. E.g., a window object class where you're
creating the "main window" (however, in this case a standard smart pointer is
usually not good enough).


[snip]


>>>>> To avoid leaks, all classes should include a destructor
>>>>> that releases any memory allocated by the class' constructor.
>
>>>> This is just dumb advice.
>
>>> And it contradicts the advice immediately above: "have a clear
>>> understanding of resource acquisition".
>
>>>> Use types that manage their own memory.
>
>>> Which is exactly what the "advice" you just called dumb said.
>
>> Nope.
>
>> The "advice" in the proposed guidelines was to define a
>> destructor in every class.
>
> Where did you see that?

Top of this quoted section. :-)


> I don't see anything about "defining a destructor".

The statement about "all classes should" is in a coding guideline.


[snip]


> I think our only difference (here, at least) is that you seem to
> be suggesting that this class should usually look like a
> pointer. Where as I find that the cases where it should look
> like a pointer aren't all that frequent.

Huh, where did you get that impression?


[snip]


> What about a server whose requests can contain arbitrary
> expressions (e.g. in ASN.1 or XML---both of which support
> nesting)? The server parses the requests into a tree; since the
> total size and the types of the individual nodes aren't known at
> compile time, it must use dynamic allocation. So what happens
> when you receive a request with literally billions of nodes? Do
> you terminate the server? Do you artificially limit the number
> of nodes so that you can't run out of memory? (But the limit
> would have to be unreasonably small, since you don't want to
> crash if you receive the requests from several different clients,
> in different threads.) Or do you catch bad_alloc (and stack
> overflow, which requires implementation specific code), free up
> the already allocated nodes, and return an "insufficient
> resources" error.

I haven't done that, and as I recall it's one of those things that can be
debated for years with no clear conclusion. E.g. the "jumping rabbit" (whatever
that is in French) maintained such a never-ending thread over in clc++m. But I
think I'd go for the solution of a sub-allocator with simple quota management.
After all, when it works well for disk space management on file servers, why not
for main memory for this? Disclaimer: until I've tried on this problem a large
number of times, and failed a large number of times with various aspects of it,
I don't have more than an overall gut-feeling "this should work" idea; I
can imagine e.g. Windows finding very nasty ways to undermine the scheme... :-)


Cheers,

- Alf

Christof Donat

Jan 27, 2009, 5:38:22 AM
Hi,

>>> Because that's about the only sane way to deal with memory
>>> exhaustion.
>>

>> *That* depends largely on the application. [...]


>
> Hm, I'd like to see any application that, from a modern point of view, can
> deal safely with a std::bad_alloc other than by terminating.

I have not done things like that in C++ up to now, but I once wrote a Java
application with a dynamically sized cache that could use all available
heap until it was needed somewhere else. Such an application
might have a use for catching std::bad_alloc in order to ask the cache to
free some memory and try again.

Christof


Michael DOUBEZ

Jan 27, 2009, 5:51:03 AM

Requiring that a class be responsible for at most one resource
has nothing to do with the Single Responsibility Principle (SRP).

The first is a sound practice for writing exception-safe code: managing
multiple resources in an exception-safe way tends to be tedious and error
prone. The SRP, on the other hand, is an OO principle for enhancing reuse
and refactoring of code; it is related to separation of concerns (i.e.
obtaining good functional decomposition).

--
Michael

James Kanze

Jan 28, 2009, 4:32:55 AM
On Jan 27, 11:29 am, "Alf P. Steinbach" <al...@start.no> wrote:
> * James Kanze:

> [snip]


> >>>>> To avoid leaks, all classes should include a destructor
> >>>>> that releases any memory allocated by the class'
> >>>>> constructor.

> >>>> This is just dumb advice.

> >>> And it contradicts the advice immediately above: "have a
> >>> clear understanding of resource acquisition".

> >>>> Use types that manage their own memory.

> >>> Which is exactly what the "advice" you just called dumb said.

> >> Nope.

> >> The "advice" in the proposed guidelines was to define a
> >> destructor in every class.

> > Where did you see that?

> Top of this quoted section. :-)

The quoted section doesn't say that the user has to define a
destructor. It says that the class must "include a destructor
[user defined or implicit] that releases any memory allocated by
the class' constructor." In other words, the class author
should ensure that any memory allocated by the class's
constructor is released in the destructor---there's nothing
there about having to do so explicitly, just ensuring that it is
done.

Which isn't really very good advice, since most of the time, if
the constructor allocates and destructor releases, you really
shouldn't be using dynamic memory anyway. Of the three reasons
for using dynamic memory:

-- lifetime doesn't correspond to a standard lifetime: if the
constructor allocates, and the destructor releases, it's
exactly the standard lifetime of a class member;

-- size unknown at compile time: that's what std::vector et al.
are for;

-- type unknown at compile time (a polymorphic delegate, for
example); boost::scoped_ptr would certainly be worth
considering.

Judging by the software I've seen (in many different domains,
but certainly not all), I'd say that the last reason is the
least frequent.

The rule is also stupid because, of course, classes which do
allocate memory which they own (like std::vector) don't
necessarily do so only in the constructor. It (sort of) implies
that std::vector could skip the destructor, since it allocated
the memory in push_back(), and not in the constructor. (I'm
pretty sure that that's not what was meant, but that's what it
literally says.)

> > I don't see anything about "defining a destructor".

> The statement about "all classes should" is in a coding
> guideline.

And the following word is "include", followed by a description
of what the destructor should do. Obviously, all classes
include a destructor, but not all include a destructor "that
releases any memory allocated by the class constructor". So the
coding guideline is to ensure that the includes destructor does
release any memory allocated by the class constructor. By
whatever means appropriate---making the member a
boost::scoped_ptr<T> instead of a T* would be one means. (Note
that if the class contains a boost::scoped_ptr, it likely needs
a user defined destructor anyway. But that's another issue.)

> [snip]
> > What about a server whose requests can contain arbitrary
> > expressions (e.g. in ASN.1 or XML---both of which support
> > nesting)? The server parses the requests into a tree; since
> > the total size and the types of the individual nodes aren't
> > known at compile time, it must use dynamic allocation. So
> > what happens when you receive a request with literally
> > billions of nodes? Do you terminate the server? Do you
> > artificially limit the number of nodes so that you can't run
> > out of memory? (But the limit would have to be unreasonably
> > small, since you don't want to crash if you receive the
> > requests from several different clients, in different
> > threads.) Or do you catch bad_alloc (and stack overflow,
> > which requires implementation specific code), free up the
> > already allocated nodes, and return an "insufficient
> > resources" error.

> I haven't done that, and as I recall it's one of those things
> that can be debated for years with no clear conclusion. E.g.
> the "jumping rabbit" (whatever that is in French)

(Do you mean "chaud lapin". That's not "jumping rabbit", but "hot
rabbit". With definite lubricious overtones.)

> maintained such a never-ending thread over in clc++m. But I
> think I'd go for the solution of a sub-allocator with simple
> quota management.

That's also a possible solution. I'm certainly not saying that
handling bad_alloc is the only possible solution. Depending on
other factors, it may be the easiest or most appropriate,
however. E.g. if you're using some sort of explicit stack,
rather than a recursive parser, or if you can also arrange to
get a bad_alloc or some other error message on stack overflow.
And you're on a correctly configured OS, which will correctly
report insufficient memory.

> After all, when it works well for disk space management on
> file servers, why not for main memory for this?

Different number of users? Different size of available
resources? Different use patterns? It might work, but there
are enough differences that you can't suppose that it will (or
that it will be the best solution).

> Disclaimer: until I've tried on this problem a large number of
> times, and failed a large number of times with various aspects
> of it, I don't have more than an overall gut-feeling "this
> should work" idea; e.g. I can imagine e.g. Windows finding
> very nasty ways to undermine the scheme... :-)

The biggest problem with trying to catch bad_alloc is that some
systems undermine it. Linux, for example, unless you configure
it specially. Maybe Windows as well; the one time I
experimented with it under Windows (but that was Windows NT with
VC++ 6.0---a long time ago), rather than getting bad_alloc, the
system suspended my process and brought up a pop-up window
suggesting that I terminate other processes. Not a very useful
reaction for a server, where there's no one in front of the
screen, and the connection will time out if I don't respond in a
limited time.

Alf P. Steinbach

Jan 28, 2009, 6:04:55 AM
* James Kanze:

> On Jan 27, 11:29 am, "Alf P. Steinbach" <al...@start.no> wrote:
>> * James Kanze:
>
>> [snip]
>>>>>>> To avoid leaks, all classes should include a destructor
>>>>>>> that releases any memory allocated by the class'
>>>>>>> constructor.
>
>>>>>> This is just dumb advice.
>
>>>>> And it contradicts the advice immediately above: "have a
>>>>> clear understanding of resource acquisition".
>
>>>>>> Use types that manage their own memory.
>
>>>>> Which is exactly what the "advice" you just called dumb said.
>
>>>> Nope.
>
>>>> The "advice" in the proposed guidelines was to define a
>>>> destructor in every class.
>
>>> Where did you see that?
>
>> Top of this quoted section. :-)
>
> The quoted section doesn't say that the user has to define a
> destructor. It says that the class must "include a destructor
> [user defined or implicit] that releases any memory allocated by
> the class' constructor." In other words, the class author
> should ensure that any memory allocated by the classes
> constructor is released in the destructor---there's nothing
> there about having to do so explicitly, just ensuring that it is
> done.

As I see it your interpretation of the wording is meaningless, but we'll
probably not agree on that, so...


[snip]


> The biggest problem with trying to catch bad_alloc is that some
> systems undermine it. Linux, for example, unless you configure
> it specially. Maybe Windows as well; the one time I
> experimented with it under Windows (but that was Windows NT with
> VC++ 6.0---a long time ago), rather than getting bad_alloc, the
> system suspended my process and brought up a pop-up window
> suggesting that I terminate other processes. Not a very useful
> reaction for a server, where there's no one in front of the
> screen, and the connection will time out if I don't respond in a
> limited time.

The reasonable approach for allocating a large chunk of memory for e.g. a
document or for a sub-allocator would IMHO be to use new(std::nothrow), because
the possibility of allocation failure is then part of the normal case outcome.

For Windows one would probably have to call SetErrorMode at app startup.

The documentation doesn't mention whether it affects out-of-memory handling,
but the documentation is in general incomplete, and it's not unlikely.


Cheers & hth.,

- Alf
