TR1; memory and build-times


Jaap Suter

Aug 23, 2005, 7:45:05 PM
Hi,

Judging from the TR1 proposal, it doesn't seem like function and
shared_ptr will allow customization of memory allocations. Both
components require dynamically allocating internal structures and it
seems the user has no control over where this memory comes from.

Is this true, or am I overlooking something? If it is true, is there a
rationale for this? It strongly limits their use on embedded systems
with zero tolerance for fragmentation. They also seem to be the first
standardized -as far as the TR1 goes- components in C++ that allocate
memory without giving the user control over it.

My second question is related to build-times. It seems that shared_ptr
is added into <memory> and function into <functional>. Why are these
components not given their own header files? Whenever I'm using
shared_ptr, I'm automatically pulling in auto_ptr. And judging from the
boost::shared_ptr implementation, I might also be pulling in
<algorithm> just because an implementation detail needs std::swap. Any
comments on why the TR1 seems to stay away from adding new headers?

Thanks,

Jaap Suter


[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]

Pete Becker

Aug 24, 2005, 4:00:28 AM
Jaap Suter wrote:
>
> Is this true, or am I overlooking something? If it is true, is there a
> rationale for this? It strongly limits their use on embedded systems
> with zero tolerance for fragmentation. They also seem to be the first
> standardized -as far as the TR1 goes- components in C++ that allocate
> memory without giving the user control over it.

You left out regular expressions.

There are several places in the standard library where memory is
allocated without adding the complexity of supporting custom allocation.
Locale facets, some algorithms, and valarray come to mind. Users can
control allocation of individual objects (as is done by shared_ptr and
the function template) by overloading operator new and operator delete.
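The per-object mechanism Pete describes can be sketched as follows. This is a minimal illustration, not library code: `Widget` and its static counter are hypothetical, and the counter merely stands in for a real fixed-size pool.

```cpp
#include <cstddef>
#include <new>

// Overloading operator new/delete for one type routes every
// `new Widget` through your own hooks; the counter here is a
// stand-in for real pool bookkeeping.
struct Widget {
    static std::size_t live;  // stands in for a real pool

    static void* operator new(std::size_t size) {
        ++live;
        return ::operator new(size);  // a real pool would allocate here
    }
    static void operator delete(void* p) {
        --live;
        ::operator delete(p);
    }
    int value;
};
std::size_t Widget::live = 0;
```

Every `new Widget` and `delete w` now goes through the class-level overloads, without touching the global heap configuration.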

>
> My second question is related to build-times. It seems that shared_ptr
> is added into <memory> and function into <functional>. Why are these
> components not given their own header files?

Because they're like existing things that already have headers.

> Whenever I'm using
> shared_ptr, I'm automatically pulling in auto_ptr.

Yup. And when you're using adjacent_find you're automatically pulling in
binary_search, and when you're using fopen you're automatically pulling
in fclose.

> And judging from the
> boost::shared_ptr implementation, I might also be pulling in
> <algorithm> just because an implementation detail needs std::swap.

You might, but that's not required.

> Any
> comments on why the TR1 seems to stay away from adding new headers?
>

It adds <array>, <random>, <regex>, <tuple>, <type_traits>,
<unordered_map>, and <unordered_set>.

The headers that have things added are <math.h> and <cmath>,
<functional>, <memory>, and <utility>.

You seem to be assuming that splitting things into more headers will
shorten compile times. That's not necessarily true: smaller headers
probably compile faster, but in real applications you need more of them.
In C the time needed to open a header file overwhelms the time needed to
parse it, so splitting things into more headers makes compiles slower.
In C++ implementations today that's not as clearly true. But if parsing
headers with lots of templates is too slow with your compiler, complain
to the compiler vendor -- they probably haven't implemented export. <g>

And, of course, having more headers means more header names to learn,
and more chance for errors.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Carl Barron

Aug 24, 2005, 4:11:51 AM
In article <1124832646.7...@g44g2000cwa.googlegroups.com>,
Jaap Suter <goo...@jaapsuter.com> wrote:

> Hi,
>
> Judging from the TR1 proposal, it doesn't seem like function and
> shared_ptr will allow customization of memory allocations. Both
> components require dynamically allocating internal structures and it
> seems the user has no control over where this memory comes from.

TR1::function is a template with an optional allocator argument.

template<typename Function, typename Allocator = std::allocator<void> >
class function;

boost::shared_ptr uses its own 'undocumented' small-object allocator,
so tr1::shared_ptr could do the same. If this internal allocation is a
bottleneck [profile to be sure], and you can modify the class to hold a
counter (or multiply inherit from the given class and a simple struct
containing the counter), and you don't need weak_ptrs, then
boost::intrusive_ptr does not do any INTERNAL allocations. The only
allocation needed is for the constructor's first argument [it has an
optional bool argument as well]. How you allocate it must match how
you deallocate the object, via the required free function overloads
that manage the counter for intrusive_ptr<T>, but it looks fairly easy.
Further, boost::intrusive_ptr looks like it can be implemented in one
file with no dependencies on boost if desired, especially if you don't
need workarounds for 'older compilers'...
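The two free-function hooks boost::intrusive_ptr requires can be sketched with no boost dependency at all. `RefCounted` below is a minimal stand-in base class (not boost's actual code); only the two function names are boost's documented customization points.

```cpp
// The pointee carries its own count, so the smart pointer never
// allocates. boost::intrusive_ptr<T> finds exactly these two free
// functions for the pointee type.
struct RefCounted {
    long count;
    RefCounted() : count(0) {}
    virtual ~RefCounted() {}
};

inline void intrusive_ptr_add_ref(RefCounted* p) { ++p->count; }

inline void intrusive_ptr_release(RefCounted* p) {
    // deletion here must match how the object was allocated
    if (--p->count == 0) delete p;
}
```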

Maxim Yegorushkin

Aug 24, 2005, 5:29:14 AM

Carl Barron wrote:

[]

> Boost::shared_ptr uses its own 'undocumented' small object allocator.
> so TR1::shared_ptr could do so. If this internal allocation is a
> bottleneck [profile to be sure] and you can modify the class to hold a
> counter or multiply inherit from the given class, and a simple struct
> containing the counter, and don't need weak_ptrs then
> boost::intrusive_ptr does not do any INTERNAL allocations.

I just skimmed through boost/detail/quick_allocator.hpp and noticed
that quick_allocator causes false sharing on SMP. It happens when
several counters are allocated within the same cache line and those
counters are used by different processors, thus thrashing processors'
cache lines when the counter is written, even when the counter is used
by a single processor only.

More information about allocators and false sharing can be found in
papers included with Hoard allocator sources.
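The usual mitigation is to give each counter a cache line of its own. A sketch, assuming a 64-byte line (the real line size is platform-specific, and `PaddedCount` is a hypothetical name):

```cpp
#include <cstddef>

// Pad each counter to a full cache line so a write from one processor
// cannot invalidate a line that also holds a neighbour's counter.
const std::size_t CACHE_LINE = 64;  // assumed line size

struct PaddedCount {
    long value;
    char pad[CACHE_LINE - sizeof(long)];
};
```

An allocator that hands out `PaddedCount`-sized slots trades memory for the absence of false sharing.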

Pete Becker

Aug 24, 2005, 10:51:43 AM
Carl Barron wrote:
>
> TR1::function is a template with an optional allocator argument.
>
> template<typename Function, typename Allocator = std::allocator<void> >
> class function;
>

std::tr1::function takes exactly one argument. Look it up.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


arke...@myrealbox.com

Aug 24, 2005, 6:23:57 PM
I looked it up in the proposal (is the proposal the same as the
accepted code?) and, thank god, function DOES take an optional
allocator argument.

Many people (not saying you) don't seem to understand that custom
allocators often have nothing to do with performance; I sometimes use
custom allocators that routinely perform worse than operator new. They
have to do with deterministically bounded times for time-constrained
applications (AKA realtime). If tr1 function does indeed drop support
for allocators, I think that the C++ community is locking out one of
its loyal and growing programmer segments, obviously a big mistake.

- JJJ

Jaap Suter

Aug 24, 2005, 6:21:36 PM
> TR1::function is a template with an optional allocator argument.
> template<typename Function, typename Allocator = std::allocator<void> >
> class function;

Correct, that is what n1402 describes here:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2002/n1402.html

But perusing n1745.pdf which, afaik, is the official TR1 proposal, the
function type template parameter list does not have an allocator
argument.

Am I overlooking something?

> Boost::shared_ptr uses its own 'undocumented' small object allocator.
> so TR1::shared_ptr could do so. If this internal allocation is a
> bottleneck [profile to be sure]

It is not a performance bottleneck I care about, it's the potential for
memory fragmentation. I might scatter my heap with small
sizeof(shared_count) byte holes. The only option I see is to write my
global allocator as follows:

void* operator new (std::size_t size)
{
    if (size == sizeof(shared_count))
        return allocate_from_small_object_pool();
    else
        return allocate_from_main_heap();
}

Even on PC I'd like to see more people care about this, but on an
embedded platform with 24 MB of non-virtual memory, it's a much bigger
problem.

I see the difficulty with giving shared_ptr an extra template
parameter. It would make shared_ptr<T, AllocatorA> a different type
than shared_ptr<T, AllocatorB>. Could this not be solvable in practice
with proper conversions between the two?

> ... and don't need weak_ptrs then boost::intrusive_ptr does not
> do any INTERNAL allocations.

I realize that, and encourage people to use intrusive_ptr's where
sufficient, but there are many designs in which weak_ptr's make sense.

Jaap Suter

Aug 24, 2005, 6:19:44 PM
> Users can control allocation of individual objects (as is done
> by shared_ptr and the function template) by overloading operator
> new and operator delete.

But they have no control over the shared_count object that shared_ptr
allocates internally, correct?

> And when you're using adjacent_find you're automatically pulling in
> binary_search,

I've never been a big fan of this either; <algorithm> is huge.

> and when you're using fopen you're automatically pulling
> in fclose.

And this one seems okay with me, so I recognize it's not a black and
white issue. Nonetheless, I would have preferred to see a more granular
<algorithm>, <memory> and <functional>.

> > And judging from the
> > boost::shared_ptr implementation, I might also be pulling in
> > <algorithm> just because an implementation detail needs std::swap.
>
> You might, but that's not required.

In fact, from my quick browse through the code, std::swap is only used
to swap two raw pointers. All of <algorithm> just for that? I would
agree it's not really required.

> > Any
> > comments on why the TR1 seems to stay away from adding new headers?
>
> It adds <array>, <random>, <regex>, <tuple>, <type_traits>,
> <unordered_map>, and <unordered_set>.

Oops, my bad. Apologies for jumping to conclusions too soon.

> But if parsing headers with lots of templates is too slow with
> your compiler, complain to the compiler vendor -- they probably
> haven't implemented export. <g>

:), unfortunately I'm in a business where we can't wait for or afford a
compiler upgrade.

> And, of course, having more headers means more header names to learn,
> and more chance for errors.

If the header-name is the same as the component therein, it's trivial
to find. If anything, I'm having a harder time finding my way around
standard C++ headers (is it <strstream> or <sstream>, is it in
<functional> or <algorithm>) than boost headers (shared_ptr is in
shared_ptr.hpp, easy).

And what kind of errors are you referring to?

Thanks,

Jaap

Pete Becker

Aug 25, 2005, 8:08:16 AM
arke...@myrealbox.com wrote:

> I looked it up in the proposal (is the proposal the same as the
> accepted code?) and, thank god, function DOES take an optional
> allocator argument.
>

It does not take an optional allocator argument. Look it up in the right
place. That's the TR1 document, not the proposal that was written nearly
three years ago.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Pete Becker

Aug 25, 2005, 8:07:46 AM
Jaap Suter wrote:
>>Users can control allocation of individual objects (as is done
>>by shared_ptr and the function template) by overloading operator
>>new and operator delete.
>
>
> But they have no control over the shared_count object that shared_ptr
> allocates internally, correct?

Users can control allocation of individual objects (as is done by
shared_ptr and the function template) by overloading operator new and
operator delete.

>
>
>>And when you're using adjacent_find you're automatically pulling in
>>binary_search,
>
>
> I've never been a big fan of this either; <algorithm> is huge.
>

Seems to me it's about the right size. It holds all the stuff it needs
to. What is your criterion for "huge"?

>
>>and when you're using fopen you're automatically pulling
>>in fclose.
>
>
> And this one seems okay with me, so I recognize it's not a black and
> white issue. Nonetheless, I would have preferred to see a more granular
> <algorithm>, <memory> and <functional>.

Three headers is still three headers. What is it that you're looking for?

>
>
>>>And judging from the
>>>boost::shared_ptr implementation, I might also be pulling in
>>><algorithm> just because an implementation detail needs std::swap.
>>
>>You might, but that's not required.
>
>
> In fact, from my quick browse through the code, std::swap is only used
> to swap two raw pointers. All of <algorithm> just for that? I would
> agree it's not really required.

It's not required, so your complaint is about some implementation, not
about TR1 itself.

>
>
>>>Any
>>>comments on why the TR1 seems to stay away from adding new headers?
>>
>>It adds <array>, <random>, <regex>, <tuple>, <type_traits>,
>><unordered_map>, and <unordered_set>.
>
>
> Oops, my bad. Apologies for jumping to conclusions too soon.
>
>
>>But if parsing headers with lots of templates is too slow with
>>your compiler, complain to the compiler vendor -- they probably
>>haven't implemented export. <g>
>
>
> :), unfortunately I'm in a business where we can't wait for or afford a
> compiler upgrade.

shrug.

>
>
>>And, of course, having more headers means more header names to learn,
>>and more chance for errors.
>
>
> If the header-name is the same as the component therein, it's trivial
> to find.

So you want one header for each component? Big and slow. Not a good idea.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Carl Barron

Aug 25, 2005, 8:11:11 AM
In article <1124900119.1...@g14g2000cwa.googlegroups.com>,
<arke...@myrealbox.com> wrote:

> I looked it up in the proposal (is the proposal the same as the
> accepted code?) and, thank god, function DOES take an optional
> allocator argument.
>
> Many people (not saying you) don't seem to understand that custom
> allocators often have nothing to do with performance, I sometimes use
> custom allocators that routinely perform worse than operator new. They
> have to do with deterministically bounded times for time constrained
> applications (AKA realtime). If tr1 function does indeed drop support
> for allocators, I think that the C++ community is locking out one of
> it's loyal and growing programmer segments, obviously a big mistake.
>
> - JJJ
>

The latest I have seen is dated Jan 2005. function does not have an
optional allocator argument in that document; that is,

template <class Sig> class function;

I made the mistake of reading the wrong copy of TR1 [deleted it after I
was corrected]. function does NOT take an allocator argument.

Confusion abounds even with me, but I don't usually use custom
allocators for anything :)

Jaap Suter

Aug 25, 2005, 8:12:25 AM
Just to reply to myself some more, I actually finished the exercise
into something that works. It's not a complete shared_ptr, but it
demonstrates one possible way to implement a shared_ptr that allows
assignment from shared_ptr<T, AllocatorA> to shared_ptr<T, AllocatorB>.

You can find it at http://www.jaapsuter.com/src/, or
http://www3.telus.net/j_suter/src/shared_ptr_t_alloc.cpp as a direct
link.

I didn't implement operator -> and *, it probably contains some bugs,
it doesn't come with tests, and it's probably not exception safe. It's
just to demonstrate one possible strategy.

It relies on monostate allocators, as well as a virtual function call
when the reference count drops to zero. Other than that, I don't see
any overhead or problems.
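The shape of that indirection can be sketched with a type-erased count whose derived type bakes in a monostate allocator. All names below are hypothetical illustrations of the described strategy, not the posted code; `MallocAlloc` stands in for a real monostate allocator.

```cpp
#include <cstdlib>
#include <new>

// Hypothetical monostate allocator: all state is static, so two
// instances are interchangeable and nothing needs to be stored.
struct MallocAlloc {
    static int frees;
    static void* allocate(std::size_t n) { return std::malloc(n); }
    static void deallocate(void* p) { ++frees; std::free(p); }
};
int MallocAlloc::frees = 0;

// Type-erased count: the smart pointer holds only a count_base*, so
// pointers built with different allocators share one representation.
struct count_base {
    long uses;
    count_base() : uses(1) {}
    virtual ~count_base() {}
    virtual void destroy() = 0;  // the one virtual call, on last release
    void release() { if (--uses == 0) destroy(); }
};

template<class A>
struct counted : count_base {
    void destroy() {
        this->~counted();     // run destructors
        A::deallocate(this);  // free via the allocator baked into the type
    }
};
```

The allocator choice lives only in the derived type `counted<A>`, which is exactly why a virtual call (or equivalent indirection) is needed when the count reaches zero.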

But as usual in the most complicated language in the world ;), I'm
probably overlooking some obvious reason why the standardization
committee decided to forgo custom allocators.

I'm curious to hear why.

:), thanks!

Jaap Suter - http://www.jaapsuter.com

Carl Barron

Aug 25, 2005, 11:13:26 AM
In article <SuydnRjV-M4...@rcn.net>, Pete Becker
<peteb...@acm.org> wrote:

> Carl Barron wrote:
> >
> > TR1::function is a template with an optional allocator argument.
> >
> > template<typename Function, typename Allocator = std::allocator<void> >
> > class function;
> >
>
> std::tr1::function takes exactly one argument. Look it up.

Yep, you are right. I did look it up, but in an older version [n1596];
since I have [n1745], time to delete n1596. Oh well.

Jaap Suter

Aug 25, 2005, 11:14:37 AM
As an exercise I just tried writing a shared_ptr<T, AllocA> that is
assignable to a shared_ptr<T, AllocB>. I could not pull it off without
virtual functions or some other extra indirection.

To me this explains the standard committee's decision to avoid the
extra parameter. It's the only way to allow libraries from different
developers that use shared_ptrs in their interfaces to work together.

I'm not sure what the best solution here is.

Peter Dimov

Aug 25, 2005, 7:09:00 PM
Jaap Suter wrote:

> It is not a performance bottleneck I care about, it's the potential for
> memory fragmentation. I might scatter my heap with small
> sizeof(shared_count) byte holes. The only option I see is to write my
> global allocator as follows:
>
> void* operator new (std::size_t size)
> {
>     if (size == sizeof(shared_count))
>         return allocate_from_small_object_pool();
>     else
>         return allocate_from_main_heap();
> }

All reasonable allocators already do something similar, with various
degrees of success, and there's always the dlmalloc option when the
built-in malloc is broken.

http://gee.cs.oswego.edu/dl/html/malloc.html

Pete Becker

Aug 25, 2005, 7:12:37 PM
Jaap Suter wrote:
>
> If the header-name is the same as the component therein, it's trivial
> to find. If anything, I'm having a harder time finding my way around
> standard C++ headers (is it <strstream> or <sstream>, is it in
> <functional> or <algorithm>) than boost headers (shared_ptr is in
> shared_ptr.hpp, easy).
>
> And what kind of errors are you referring to?
>

The one you mentioned is a good example: confusing <strstream> with
<sstream>, which follows your design criterion of one component per header.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


Peter Dimov

Aug 25, 2005, 7:44:32 PM
Jaap Suter wrote:
> > Users can control allocation of individual objects (as is done
> > by shared_ptr and the function template) by overloading operator
> > new and operator delete.
>
> But they have no control over the shared_count object that shared_ptr
> allocates internally, correct?

In my opinion, the best way to control allocation is to overload the
global new operator. Adding allocator arguments to all components that
need to allocate memory propagates upwards in all interfaces that use
such components and it doesn't work when the implementation is in a
library and the interface is not a template.

In addition, the standard C and C++ libraries do allocate memory as an
implementation detail, so you have to replace ::operator new and malloc
anyway.

But this aside, there is a proposal in comp.std.c++ to add the
following constructor:

template<class Y, class D, class A> shared_ptr( Y * p, D d, A a );

to shared_ptr (among other things). If we agree that shared_ptr's
internal allocations have to be customizable, the above would be my
preferred way to do it. ('A' implements a standard allocator
interface.)
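A sketch of that proposed constructor in use (the same signature later made it into the standard shared_ptr, so a modern compiler can run it). `CountingAlloc` is a hypothetical minimal standard allocator that only tallies calls; shared_ptr rebinds it internally to allocate its control block.

```cpp
#include <cstddef>
#include <memory>

// Minimal standard-allocator shape; shared_ptr rebinds it to the
// control-block type, so g_allocs counts control-block allocations.
static int g_allocs = 0;

template<class T>
struct CountingAlloc {
    typedef T value_type;
    CountingAlloc() {}
    template<class U> CountingAlloc(const CountingAlloc<U>&) {}
    T* allocate(std::size_t n) {
        ++g_allocs;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template<class T, class U>
bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
template<class T, class U>
bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }
```

With it, `std::shared_ptr<int> p(new int, std::default_delete<int>(), CountingAlloc<int>());` puts the count structure in allocator-controlled memory while `p` keeps the ordinary `shared_ptr<int>` type.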

arke...@myrealbox.com

Aug 25, 2005, 7:42:28 PM
What gets my goat is, why the beef with conversions on
tr1::function<>? I don't want any auto conversion, just the ability to
quickly define a new type of tr1::function<> with particular memory
behavior. None of the standard containers provide conversion between
types with different allocators, so why the sudden inconsistency?

I would like to use tr1::function<F> and tr1::function<F, MyAlloc> as
two different types in two different parts of my app. I am now forced
to take a complete snapshot of the tr1 function class and manually go
through the source to change it to support MyFunction<F, MyAlloc>, and
do this each time the class is updated or bug-fixed. Obviously an
error-prone activity.

Of course in the meanwhile, I will continue using boost::function, but
I believe it is only a matter of time before this lib becomes
unsupported, and eventually dropped from boost.

Oh poo,
Jeremy Jurksztowicz

Jaap Suter

Aug 26, 2005, 5:47:01 AM
> In my opinion, the best way to control allocation is to overload the
> global new operator.

Thanks Peter. I appreciate your input on this.

> Adding allocator arguments to all components that
> need to allocate memory propagates upwards in all interfaces that use
> such components and it doesn't work when the implementation is in a
> library and the interface is not a template.

Which is precisely why one would need conversion operators. Granted,
those have numerous drawbacks too; automatic conversions are no panacea.

> In addition, the standard C and C++ libraries do allocate memory as an
> implementation detail, so you have to replace ::operator new and malloc
> anyway.

I have yet to find a game developer that doesn't shy away from those C
and C++ library components that allocate memory. Few game developers
use <iostream>. We still overload ::operator new and malloc, but only
to replace them with our own general-purpose heap, which generally has
better diagnostics (unfortunately that is often its only advantage).

> But this aside, there is a proposal in comp.std.c++ to add the
> following constructor:
>
> template<class Y, class D, class A> shared_ptr( Y * p, D d, A a );
>
> to shared_ptr (among other things). If we agree that shared_ptr's
> internal allocations have to be customizable, the above would be my
> preferred way to do it. ('A' implements a standard allocator
> interface.)

You prefer this because shared_ptrs with different allocators still
have the same type, correct? I guess this allocator will be stored as
part of the shared_count structure?

An interesting approach. I'll have to put more thought into that one.

Thanks,

Jaap Suter

Aug 26, 2005, 5:44:05 AM

> What gets my goat is, why the beef with conversions on
> tr1::function<>?

Because I want to encourage third-party technology to use
tr1::function<> in their interfaces. If there are no conversions, yet
they use different allocators, I can't pass them around to other
people's technology.

I'm getting tired of passing around void (*callback)() with an extra
void* for the context data.

> I don't want any auto conversion, just the ability to
> quickly define a new type of tr1::function<> with particular memory
> behavior.

That would still be better than what we have right now, I agree. Even
if that means giving up the auto-conversion.

> None of the standard containers provide conversion between
> types with different allocators, why the sudden inconsistency?

Because I rarely pass around vectors and lists. That's because they
have an iterator or range interface that I can pass around, independent
of the allocation mechanism.

Thanks,

Jaap

Pete Becker

Aug 26, 2005, 5:45:11 AM
arke...@myrealbox.com wrote:
> What gets my goat is, why the beef with conversions on
> tr1::function<>? I don't want any auto conversion, just the ability to
> quickly define a new type of tr1::function<> with particular memory
> behavior. None of the standard containers provide conversion between
> types with different allocators, why the sudden inconsistency?

What inconsistency? Nobody who was there claimed that the lack of
assignability was the reason. That's Jaap's strawman.

>
> I would like to use tr1::function<F> and tr1::function<F, MyAlloc> as
> two different types in two different parts of my app. I am now forced
> to take a complete snapshot of the tr1 function class and manually go
> through the source to change it to support MyFunction<F, MyAlloc>, and
> do this each time the class is updated/bug fixed. Obviously an error
> prone activity.

Only if that's what you want to do. If you want to control allocation of
single objects, it's much easier to overload operator new and operator
delete.

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

[ See http://www.gotw.ca/resources/clcm.htm for info about ]

Jaap Suter

Aug 26, 2005, 5:45:32 AM
> > void* operator new (std::size_t size)
> > {
> >     if (size == sizeof(shared_count))
> >         return allocate_from_small_object_pool();
> >     else
> >         return allocate_from_main_heap();
> > }
>
> All reasonable allocators already do something similar, with various
> degrees of success

I agree. But I work in an industry where the urge to replace perfectly
fine default heap allocators with our own crappier versions still
abounds. The never-ending 'not-invented-here' syndrome.

On a recent project we discovered a free-list that had over a hundred
chunks of less than 12 bytes in size before it hit the larger chunks.
So every time new memory was needed, the allocator would iterate over a
hundred elements before finding a usable one.

Smart? No. Impossible to fix? No. But still the reality of our
industry.

All in all it's an acceptable solution. Nonetheless I would still
prefer to trust my global new to be a truly 'general-purpose'
allocator, instead of containing a switch table that defers to
different pools.

> and there's always the dlmalloc option when the
> built-in malloc is broken.
> http://gee.cs.oswego.edu/dl/html/malloc.html

Interestingly enough, at least one well known game developer has taken
and adapted this code and uses it now...

;)

Thanks,

Jaap Suter

Peter Dimov

Aug 27, 2005, 6:06:51 AM
Jaap Suter wrote:

> > Adding allocator arguments to all components that
> > need to allocate memory propagates upwards in all interfaces that use
> > such components and it doesn't work when the implementation is in a
> > library and the interface is not a template.
>
> Which is precisely why one would need conversion operators. Granted,

> those have numerous drawbacks too. Automatic conversions are no panacea.

No, it's not about the conversions.

You have library A that uses shared_ptr. You have library B that uses
vector. You have library C that uses A and B and wants to allocate
memory in a specific way. For this to work, A and B must expose a
template-based interface with an allocator argument.

> > In addition, the standard C and C++ libraries do allocate memory as an
> > implementation detail, so you have to replace ::operator new and malloc
> > anyway.
>
> I have yet to find a game developer that doesn't shy away from those C
> and C++ library components that allocate memory.

I've heard of certain game developers that were excited to learn the
hard way that sprintf can use malloc. :-)

> > template<class Y, class D, class A> shared_ptr( Y * p, D d, A a );
>

> You prefer this because shared_ptrs with different allocators still
> have the same type, correct? I guess this allocator will be stored as
> part of the shared_count structure?

Correct, but it's a bit harder than that. 'a' can't be stored in the
shared_count structure because the shared_count structure has to be
allocated with 'a' first; in addition, 'a' needs to be rebound to the
correct type (and there are exception safety issues, too.)

Another pitfall occurs at deallocation time. The allocator is part of
the structure, which needs to be destroyed and deallocated. So a local
copy of the allocator needs to be made first.
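That deallocation-order pitfall can be sketched directly. All names below are hypothetical; `Arena` stands in for a stateful allocator that lives inside the very block it must free.

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical stateful allocator; the counter lives outside so the
// effect of deallocation remains observable after the copy.
struct Arena {
    int* frees;
    void* allocate(std::size_t n) { return std::malloc(n); }
    void deallocate(void* p) { ++*frees; std::free(p); }
};

// A control block that stores its allocator cannot call
// a_.deallocate(this) after destroying itself, so a local copy of the
// allocator must be made before the block is destroyed.
struct block {
    Arena a_;
    explicit block(const Arena& a) : a_(a) {}

    static block* create(Arena& a) {
        void* raw = a.allocate(sizeof(block));
        return new (raw) block(a);  // placement-construct in arena memory
    }
    void dispose() {
        Arena local = a_;        // copy the allocator out first...
        this->~block();          // ...then destroy the stored copy...
        local.deallocate(this);  // ...then free via the local copy
    }
};
```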

I've had this interface planned for a long time, but these
implementation problems have led me to think that the standard
allocator is simply not the proper abstraction. However, I haven't been
able to come up with something better, and std::allocator is standard,
after all, so maybe we'll have to go with the above.

Pablo Halpern has proposed an alternative allocator interface in
comp.std.c++, so it might be worth investigating how it handles the
shared_ptr and function use cases.

Peter Dimov

Aug 27, 2005, 6:05:48 AM
arke...@myrealbox.com wrote:
> What gets my goat is, why the beef with conversions on
> tr1::function<>? I don't want any auto conversion, just the ability to
> quickly define a new type of tr1::function<> with particular memory
> behavior. None of the standard containers provide conversion between
> types with different allocators, why the sudden inconsistency?

The main issue here is that a single instance of tr1::function
allocates objects of different types, and std::allocator is not the
proper customization interface.

The standard containers are underspecified in their handling of the
Allocator parameter (and argument), but at least it's possible there to
do "the right thing", store an instance (or two instances in the
std::deque case) of Allocator (or a suitably rebound Allocator) and do
all allocations with them.

Since tr1::function doesn't know the type of the object until runtime,
it can't use a stored allocator instance, so it has to impose the
requirement of equivalent copies. This, in turn, precludes some very
useful allocators, such as one that implements the "small string
optimization".

Conversions aren't a problem for tr1::function, by the way. A
tr1::function can store another tr1::function.

Jaap Suter

Aug 27, 2005, 10:23:06 AM

> Only if that's what you want to do. If you want to control allocation of
> single objects, it's much easier to overload operator new and operator
> delete.

As far as I know, most implementations of shared_ptr and function
allocate an implementation defined structure that I have no access to.
I can't overload new and delete for these.

Regards,

Jaap Suter

Pavel Vozenilek

Aug 27, 2005, 3:16:56 PM

"Jaap Suter" wrote:


>> and there's always the dlmalloc option when the
>> built-in malloc is broken.
>> http://gee.cs.oswego.edu/dl/html/malloc.html
>
> Interestingly enough, at least one well known game developer has taken
> and adapted this code and uses it now...
>

By all means try it. The latest version fixed some strange bugs on
Win32 and improved its performance.

My micro benchmark showed it circa 100-1000x (!) faster than the VC6
malloc (in a single thread, but MT safe).
/Pavel

Pete Becker

Aug 28, 2005, 9:31:02 AM
Jaap Suter wrote:

>>Only if that's what you want to do. If you want to control allocation of
>>single objects, it's much easier to overload operator new and operator
>>delete.
>
>
> As far as I know, most implementations of shared_ptr and function
> allocate an implementation defined structure that I have no access to.
> I can't overload new and delete for these.
>

If you overload the global operator new and operator delete then your
overload will be used for those allocations.
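A sketch of such a global replacement (the byte counter is purely illustrative; a real embedded build would route to its own heap):

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Replacing the two global functions intercepts every dynamic
// allocation in the program -- including shared_ptr's internal count
// object -- with no cooperation needed from the library.
static std::size_t g_bytes_requested = 0;

void* operator new(std::size_t size) {
    g_bytes_requested += size;
    if (void* p = std::malloc(size)) return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept {
    std::free(p);
}
```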

--

Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)


lance...@nyc.rr.com

Sep 3, 2005, 2:57:07 PM
I have actually used the proposed alternative allocator interface,
built an experimental version of boost::shared_ptr to use it, and ran
tests. I worked for months trying to get it to work (i.e. preserve the
desired semantics).

I finally made a version of shared_ptr that took a
detail::shared_count-looking template as a policy class rather than
having it baked in. (The original problem? Trying to manage pointers in
shared memory.) The boost implementation prefers to use build macros as
policy parameters -- a few default template parameters preserved the
build macros, preserved the interface, and gave me my optional stuff.

Why were polymorphic allocators so hard in shared_ptr? Sounds like a
good idea, doesn't it? Well, it is not bad for localized use, for
things that don't travel far, where you can inspect which "allocator
instance" goes with which objects living in its memory. (And yes, I
stored the allocator instance as part of the modified shared_ptr.)

But the method does not scale well. Try debugging that with code that
interacts beyond a few translation units. I gave up on shared_ptrs with
polymorphic allocators merely because I could not teach others how to
use them. If someone wants to write the docs, I can supply an
implementation.

The Apache Xerces parser has long supported these allocators (since
version 2.3); they were discussed in D&E (see "placement new") and in
the Coplien '92 book. It looks like the C++ community wants to visit
this again.

lance...@nyc.rr.com

Sep 3, 2005, 11:17:51 PM