Google Groups no longer supports new Usenet posts or subscriptions. Historical content remains viewable.

Singleton_pattern and Thread Safety


Pallav singh

Dec 9, 2010, 11:19:41 AM
Hi All,

I have a query about the given singleton: is it thread safe?

Function getInstance() returns a static object of the Singleton class.
As far as I know, a static object is initialized only the first time
control reaches it. The second time a thread of control reaches it, the
compiler skips the initialization part.

http://en.wikipedia.org/wiki/Singleton_pattern

// This version solves the problems of the minimalist Singleton above,
// but strictly speaking only in a single-threaded environment,
// and use in a multithreaded environment when relying on the ABI
// may still pose problems at program termination time.

class Singleton
{
private:
    Singleton() {}
    ~Singleton() {}
    Singleton(const Singleton &);             // intentionally undefined
    Singleton & operator=(const Singleton &); // intentionally undefined

public:
    static Singleton &getInstance();
};

// Source file (.cpp)
Singleton& Singleton::getInstance()
{
    // The static variable is initialized only the first time
    // a thread of execution reaches this point.
    static Singleton instance;
    return instance;
}
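One classic way to make such a getInstance() thread safe is to guard a pointer with a lock; a minimal sketch (not from the original post, and using C++11's std::mutex for brevity even though the thread predates it — any lock primitive works the same way):

```cpp
#include <mutex>

class Singleton
{
public:
    static Singleton& getInstance()
    {
        // Serialize the check-and-create so only one thread constructs it.
        std::lock_guard<std::mutex> guard(mutex_);
        if (instance_ == 0)
            instance_ = new Singleton; // intentionally never deleted
        return *instance_;
    }
private:
    Singleton() {}
    Singleton(const Singleton&);             // intentionally undefined
    Singleton& operator=(const Singleton&);  // intentionally undefined

    static std::mutex mutex_;
    static Singleton* instance_;
};

std::mutex Singleton::mutex_;
Singleton* Singleton::instance_ = 0;
```

The cost is taking the lock on every call, which is exactly what the fancier schemes discussed below try to avoid.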

Thx
Pallav Singh

Victor Bazarov

Dec 9, 2010, 11:35:26 AM
On 12/9/2010 11:19 AM, Pallav singh wrote:
> HI All ,
>
> i have a query using given singleton that its not thread Safe ?

Are you asking or are you telling?

> Since function getInstance() is returning the static object singleton
> class
> AS far my knowlege, static object is intialized only first time when
> control
> reaches there. The second time control Thread reached there , compiler
> skipps the initialization part.

Yes, usually. What's your query?

You probably need to get familiar with the problem of thread safety of
static object initialization. It's not very common, but it's been
studied enough to have left a trail that you could find using your
favorite Web search engine.

>
> http://en.wikipedia.org/wiki/Singleton_pattern
>
> // This version solves the problems of the minimalist Singleton above,
> // but strictly speaking only in a single-threaded environment,
> // and use in a multithreaded environment when relying on the ABI
> // may still pose problems at program termination time.
>
> class Singleton
> {
> private:
> Singleton() {}
> ~Singleton() {}
> Singleton(const Singleton&); // intentionally undefined

> Singleton& operator=(const Singleton&); // intentionally undefined


>
> public:
> static Singleton&getInstance();
> };
>
> // Source file (.cpp)
> Singleton& Singleton::getInstance()
> {
> // Static Variables are initialized only first time Thread of
> // Execution reaches here first time.
> static Singleton instance;
> return instance;
> }
>
> Thx

Not sure for what...

> Pallav Singh

V
--
I do not respond to top-posted replies, please don't ask

Marcel Müller

Dec 9, 2010, 12:05:20 PM
Pallav singh wrote:
> i have a query using given singleton that its not thread Safe ?
>
> Since function getInstance() is returning the static object singleton
> class
> AS far my knowlege, static object is intialized only first time when
> control
> reaches there. The second time control Thread reached there , compiler
> skipps the initialization part.

That's right.

> // Source file (.cpp)
> Singleton& Singleton::getInstance()
> {
> // Static Variables are initialized only first time Thread of
> // Execution reaches here first time.

> static Singleton instance;

This line is not guaranteed to be thread safe. In some implementations it
is safe.

> return instance;
> }


Marcel

James Kanze

Dec 10, 2010, 4:52:05 AM

> That's right.

In practice, it will be thread safe *if* the first call to
getInstance occurs before threading starts. If threading
doesn't start before entering main (normally an acceptable
restriction), then just declaring a variable with static
lifetime which is initialized by getInstance() is sufficient,
e.g. (at namespace scope):

Singleton* dummyForInitialization = &Singleton::getInstance();

> > return instance;
> > }

Note that the above still risks order of destruction issues;
it's more common to not destruct the singleton ever, with
something like:

namespace {
    Singleton* ourInstance = &Singleton::instance();
}

Singleton&
Singleton::instance()
{
    if (ourInstance == NULL)
        ourInstance = new Singleton;
    return *ourInstance;
}

(This solves both problems at once: initializing the variable
with a call to Singleton::instance and ensuring that the
singleton is never destructed.)
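James's fragment, made self-contained for illustration (the class definition is an assumption, since only the instance() side was shown; note the member function definition lives outside the unnamed namespace):

```cpp
#include <cstddef>

class Singleton
{
public:
    static Singleton& instance();
private:
    Singleton() {}
    ~Singleton() {}
};

namespace {
    // Forces initialization during static initialization of this TU,
    // i.e. normally before main() and before any threads are started.
    Singleton* ourInstance = &Singleton::instance();
}

Singleton& Singleton::instance()
{
    if (ourInstance == NULL)
        ourInstance = new Singleton; // intentionally never deleted
    return *ourInstance;
}
```

Because the object is allocated with new and never deleted, there is no destructor call at program exit, and hence no order-of-destruction problem.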

--
James Kanze

Leigh Johnston

Dec 10, 2010, 8:16:29 AM
On 10/12/2010 09:52, James Kanze wrote:
> On Dec 9, 5:05 pm, Marcel Müller<news.5.ma...@spamgourmet.com> wrote:
>> Pallav singh wrote:
>>> i have a query using given singleton that its not thread Safe ?
>
>>> Since function getInstance() is returning the static object
>>> singleton class AS far my knowlege, static object is
>>> intialized only first time when control reaches there. The
>>> second time control Thread reached there , compiler skipps
>>> the initialization part.
>
>> That's right.
>
>>> // Source file (.cpp)
>>> Singleton& Singleton::getInstance()
>>> {
>>> // Static Variables are initialized only first time Thread of
>>> // Execution reaches here first time.
>>> static Singleton instance;
>
>> This line is not guaranteed to be thread safe. In some implementation it
>> is safe.
>
> In practice, it will be thread safe *if* the first call to
> getInstance occurs before threading starts. If threading
> doesn't start before entering main (normally an acceptable
> restriction), then just declaring a variable with static
> lifetime which is initialized by getInstance() is sufficient,
> e.g. (at namespace scope):
>
> Singleton* dummyForInitialization =&Singleton::getInstance();

>
>>> return instance;
>>> }
>
> Note that the above still risks order of destruction issues;
> it's more common to not destruct the singleton ever, with
> something like:
>
> namespace {
>
> Singleton* ourInstance =&Singleton::instance();

>
> Singleton&
> Singleton::instance()
> {
> if (ourInstance == NULL)
> ourInstance = new Singleton;
> return *ourInstance;
> }
> }
>
> (This solves both problems at once: initializing the variable
> with a call to Singleton::instance and ensuring that the
> singleton is never destructed.)
>

James "Cowboy" Kanze's OO designs includes objects that are never
destructed but leak instead? Interesting. What utter laziness typical
of somebody who probably overuses (abuses) the singleton pattern.
Singleton can be considered harmful (use rarely not routinely).

/Leigh

Fred Zwarts

Dec 10, 2010, 8:59:28 AM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:rsudnaHfQqi7tZ_Q...@giganews.com

As far as I can see it does not leak.
Up to the very end of the program the ourInstance pointer keeps pointing to the object
and can be used to access the object.
This is a well known technique to overcome the order of destruction issue.

Leigh Johnston

Dec 10, 2010, 9:23:36 AM

Of course it is a memory leak; the only upside is that it is a singular
leak that would be cleaned up by the OS as part of program termination,
rather than an ongoing leak that continues to consume memory. It is
lazy. As far as it being a "well known technique": I have encountered it
before when working on a large project with many team members, but that
does not justify its use; it was a consequence of parallel development
of many sub-modules with insufficient time set aside for proper interop
design and too much risk associated with "fixing" it.

Destruction is the opposite of construction; destruction (proper
cleanup) is not an intractable problem. /delete/ what you /new/.

/Leigh

u2

Dec 10, 2010, 9:31:56 AM
On Dec 9, 11:19 am, Pallav singh <singh.pal...@gmail.com> wrote:
> HI All ,
>
> i have a query using given singleton that its not thread Safe ?
>
> Since function getInstance() is returning the static  object singleton
> class
> AS far my knowlege, static object is intialized only first time when
> control
> reaches there. The second time control Thread reached there , compiler
> skipps the initialization part.
>
> http://en.wikipedia.org/wiki/Singleton_pattern
>
> // This version solves the problems of the minimalist Singleton above,
> // but strictly speaking only in a single-threaded environment,

It is not thread safe. The comment line you included says exactly
that.
What is your question?

Fred Zwarts

Dec 10, 2010, 10:03:16 AM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:ItOdncr4ttt9qp_Q...@giganews.com

So, it is a matter of definition whether you want to call that a leak.
Usually something is called a leak if an object is no longer accessible,
because the pointer to the object went out of scope, or was assigned
a different value.

> It is lazy. As far as it being a "well known technique" I have
> encountered it before when working on a large project with many team
> members but that does not justify its use; it was a consequence of
> parallel development of many sub-modules with insufficient time set
> aside for proper interop design and too much risk associated with
> "fixing" it.

It is not necessarily lazy. The order of destruction of global objects
is not always predictable. Why spend time on a complex solution, if it
serves no purpose and makes the code much more difficult to read?



> Destruction is the opposite of construction; destruction (proper
> cleanup) is not an intractable problem. /delete/ what you /new/.

Why? If destruction does not serve any purpose?
Is it a fixed rule so that you don't need to think about it?

Leigh Johnston

Dec 10, 2010, 10:16:33 AM

What do you mean by global objects? If you mean objects defined at
namespace scope or static class member objects then you should avoid
having such objects in more than one translation unit modulo the advice
that one should avoid globals as they are definitely considered harmful.
Singletons are nothing more than disguised global variables.

>
>> Destruction is the opposite of construction; destruction (proper
>> cleanup) is not an intractable problem. /delete/ what you /new/.
>
> Why? If destruction does not serve any purpose?
> Is it a fixed rule so that you don't need to think about it?

If you can define construction then you can also define destruction;
why? Code re-use is one reason; e.g. a class which was initially a
singleton may suddenly be required to be instantiated more than once. A
class shouldn't really care about how many instances of it will be
created; ideally all objects should be destroyed on program termination
the only exceptions to this being abnormal program termination (e.g.
unhandled exception) or a program that is supposed to never terminate.

/Leigh

James Kanze

Dec 10, 2010, 10:29:41 AM
On Dec 10, 1:16 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 10/12/2010 09:52, James Kanze wrote:
> > On Dec 9, 5:05 pm, Marcel M ller<news.5.ma...@spamgourmet.com> wrote:
> >> Pallav singh wrote:

[...]


> > Note that the above still risks order of destruction issues;
> > it's more common to not destruct the singleton ever, with
> > something like:

> > namespace {

> > Singleton* ourInstance =&Singleton::instance();

> > Singleton&
> > Singleton::instance()
> > {
> > if (ourInstance == NULL)
> > ourInstance = new Singleton;
> > return *ourInstance;
> > }
> > }

> > (This solves both problems at once: initializing the variable
> > with a call to Singleton::instance and ensuring that the
> > singleton is never destructed.)

> James "Cowboy" Kanze's OO designs includes objects that are never
> destructed but leak instead?

And where do you see a leak?

> Interesting. What utter laziness typical
> of somebody who probably overuses (abuses) the singleton pattern.

A lot of unsupported accusations from someone whose postings
here show a remarkable lack of any knowledge of serious software
engineering.

> Singleton can be considered harmful (use rarely not routinely).

Does two or three in a program of 500KLoc count as "routinely"?

--
James Kanze

Fred Zwarts

Dec 10, 2010, 10:54:02 AM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:bsSdnZQqkfvX2Z_Q...@giganews.com

should, shouldn't, ideally, ... It sounds like an ideology.

> the only exceptions to this being abnormal program
> termination (e.g. unhandled exception) or a program that is supposed
> to never terminate.

Why don't you object to a program that never terminates?
Isn't that an even bigger leak?
Lazy?
If you can define a start for a program, then you can define a stop.
Maybe later it may suddenly be required to stop.

Program termination and object termination (destruction) are comparable:
don't waste your time designing code for things you don't need.
I conclude it is a matter of taste whether it is called laziness or efficiency.

Leigh Johnston

Dec 10, 2010, 10:56:49 AM
On 10/12/2010 15:29, James Kanze wrote:
> On Dec 10, 1:16 pm, Leigh Johnston<le...@i42.co.uk> wrote:
>> On 10/12/2010 09:52, James Kanze wrote:
>>> On Dec 9, 5:05 pm, Marcel M ller<news.5.ma...@spamgourmet.com> wrote:
>>>> Pallav singh wrote:
>
> [...]
>>> Note that the above still risks order of destruction issues;
>>> it's more common to not destruct the singleton ever, with
>>> something like:
>
>>> namespace {
>
>>> Singleton* ourInstance =&Singleton::instance();
>
>>> Singleton&
>>> Singleton::instance()
>>> {
>>> if (ourInstance == NULL)
>>> ourInstance = new Singleton;
>>> return *ourInstance;
>>> }
>>> }
>
>>> (This solves both problems at once: initializing the variable
>>> with a call to Singleton::instance and ensuring that the
>>> singleton is never destructed.)
>
>> James "Cowboy" Kanze's OO designs includes objects that are never
>> destructed but leak instead?
>
> And where do you see a leak?
>

Is that a serious question?

The only real difference between the two programs below is the amount of
memory leaked:

int main()
{
    int* p = new int;
}

int main()
{
    int* p = new int;
    p = new int;
}

A singular memory leak (one that is not repeated so doesn't consume more
and more memory as a program runs) is still a memory leak.

I will ignore the predictable, trollish part of your reply.

/Leigh

Leigh Johnston

Dec 10, 2010, 11:03:59 AM

I see little difference between abnormal program termination and a
program that never terminates as far as system-wide object destruction
is concerned. If it is known in advance that a program never terminates
then one can relax the rules slightly, if only for the sake of efficiency
or if development time is a factor. However, one should always design a
class so that it is agnostic as to the number of instances of it that will
be made and so that it can be explicitly destroyed. I admit this is an
ideal, but if you design things properly in the first place this should
happen automatically (i.e. with little thought).

/Leigh

Fred Zwarts

Dec 10, 2010, 11:35:14 AM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:ZeKdnYPixIT10p_Q...@giganews.com

If it is known that an object is never destroyed, one may,
for the same reason, also relax the rules about destructors.

> However, one should
> always design a class so that is agnostic as to the amount instances
> of it that will be made and that it can be explicitly destroyed. I
> admit this is an ideal but if you design things properly in the first
> place this should happen automatically (i.e. with little thought).

The same holds for coding program termination.
One can define rules here as well:
One should always design a program to be able to terminate properly.
If you design things properly in the first place, this should happen automatically.

But I understand that you are able to relax rules.
I respect such an opinion.

Leigh Johnston

Dec 10, 2010, 11:42:19 AM

Furthermore if you placed your singleton code into a DLL and the DLL was
continuously loaded and unloaded I think you would see plain evidence
that this is in fact a memory leak.

/Leigh

Ian Collins

Dec 10, 2010, 3:06:54 PM

What James describes is a very common idiom (to avoid order of
destruction issues) and it's certainly one I've often used.

--
Ian Collins

Leigh Johnston

Dec 10, 2010, 3:21:39 PM

Not considering object destruction when designing *new* classes is bad
practice IMO. Obviously there may be problems when working with
pre-existing designs which were created with a lack of such consideration.

/Leigh

Ian Collins

Dec 10, 2010, 3:39:20 PM

A programmer seldom has the benefit of a green field design. Even when
he or she does, there are still the dark and scary corners of the
language where undefined behaviour lurks. Order of destruction issues
are one such corner, especially when static objects exist in multiple
compilation units.

By the way, the leak example you posted differs in one significant
point from James' example above: the location of the allocation. James'
allocation takes place before main. I agree with my debugger in
categorising that as a "block in use" rather than a leak.

--
Ian Collins

Leigh Johnston

Dec 10, 2010, 4:08:05 PM

I am well aware of the unspecified construction/destruction order
associated with globals in multiple TUs and that is the primary reason
why this method of James's should be avoided. The order of destruction
of "Meyers Singleton" objects *is* well defined, for example, although
making the "Meyers Singleton" method thread safe is not completely trivial.

>
> By the way, the leak example you posted differers in one significant
> point from James' example above: the location of the allocation. James'
> allocation takes place before main. I agree with my debugger in
> categorising that as a "block in use" rather than a leek.
>

It is a leak pure and simple; as I said else-thread if you use the
pattern in a DLL and load and unload the DLL multiple times the leak
will become apparent. What some debugger chooses to name an allocation
is mostly irrelevant; it *is* an allocation without a paired deallocation.

/Leigh

Joshua Maurice

Dec 10, 2010, 5:56:41 PM

So, under that particular pedantic definition, it is a memory leak.

Let me be a devil's advocate with the following questions. Have you
ever used fork and exec? Did you specifically "free" or "delete" /
every single outstanding allocation/ before calling exec? Just
wondering.

We disagree with your blanket assertion that memory leaks, under that
particular definition, are always bad and must be avoided at all
costs.

Your use case of loading and unloading the same DLL multiple times is
an interesting use case. I give you that point. I'll remember that in
the future as a particularly likely source of leaks.

Leigh Johnston

Dec 10, 2010, 6:19:45 PM

Royal we? One word: Cowboy(s).

Kanze's method suffers from unspecified construction order of singletons
defined in multiple TUs which basically means it is hogwash.

>
> Your use case of loading and unloading the same DLL multiple times is
> an interesting use case. I give you that point. I'll remember that in
> the future as a particularly likely source of leaks.

A use-case which, IMO, renders the method invalid as a general purpose,
useful pattern. In my eyes it is simply buggy code.

On the rare occasion that I need a singleton I tend to use the Meyers
Singleton which does have specified construction/destruction order and
does not leak memory.

/Leigh

Ian Collins

Dec 10, 2010, 6:31:24 PM
On 12/11/10 10:08 AM, Leigh Johnston wrote:
> On 10/12/2010 20:39, Ian Collins wrote:
>> On 12/11/10 09:21 AM, Leigh Johnston wrote:
>>>
>>> Not considering object destruction when designing *new* classes is bad
>>> practice IMO. Obviously there may be problems when working with
>>> pre-existing designs which were created with a lack of such
>>> consideration.
>>
>> A programmer seldom has the benefit of a green field design. Even when
>> he or she does, there are still the dark and scary corners of the
>> language where undefined behaviour lurks. Order of destruction issues is
>> one such corner, especially when static objects exist in multiple
>> compilation units.
>
> I am well aware of the unspecified construction/destruction order
> associated with globals in multiple TUs and that is primary reason why
> this method of James's should be avoided. The order of destruction of
> "Meyers Singleton" objects *is* well defined for example although making
> the "Meyers Singleton" method thread safe is not completely trivial.

That is another pattern I use, but as you say, it has issues of its own.

>> By the way, the leak example you posted differs in one significant
>> point from James' example above: the location of the allocation. James'
>> allocation takes place before main. I agree with my debugger in
>> categorising that as a "block in use" rather than a leak.
>
> It is a leak pure and simple; as I said else-thread if you use the
> pattern in a DLL and load and unload the DLL multiple times the leak
> will become apparent.

Well that's something I have never done and can't see myself ever doing.
Why would you do it?

> What some debugger chooses to name an allocation
> is mostly irrelevant; it *is* an allocation without a paired deallocation.

It's not "some debugger", it's the opinion of the team that wrote the
debugger.

--
Ian Collins

Howard Hinnant

Dec 10, 2010, 6:39:32 PM
On Dec 10, 4:08 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> The order of destruction of
> "Meyers Singleton" objects *is* well defined for example although making
> the "Meyers Singleton" method thread safe is not completely trivial.

Just fyi the "Meyers Singleton" will be thread safe with a C++0x
conforming compiler (as currently drafted):

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3225.pdf

6.7 [stmt.dcl], paragraph 4:

... If control enters the declaration concurrently while the variable
is being initialized, the concurrent execution shall wait for
completion of the initialization.^91 If control re-enters the
declaration recursively while the variable is being initialized, the
behavior is undefined.

91) The implementation must not introduce any deadlock around
execution of the initializer.

-Howard
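Under the rule Howard quotes, the plain Meyers Singleton needs no hand-written locking at all on a conforming C++0x/C++11 compiler; a sketch (the Config class is a hypothetical example, not from the thread):

```cpp
class Config // hypothetical example class
{
public:
    static Config& instance()
    {
        // Per [stmt.dcl]p4, this initialization runs exactly once;
        // concurrent callers block until it completes.
        static Config theInstance;
        return theInstance;
    }
private:
    Config() {}
};
```

Every caller, from any thread, gets a reference to the same fully constructed object, with the compiler supplying the synchronization.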

Joshua Maurice

Dec 10, 2010, 6:43:39 PM
On Dec 10, 3:19 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 10/12/2010 22:56, Joshua Maurice wrote:
> > We disagree with your blanket assertion that memory leaks, under that
> > particular definition, are always bad and must be avoided at all
> > costs.
>
> Royal we?  One word: Cowboy(s).

I am unsure if that is a sardonic remark or one made out of ignorance.
Are you that unaware of how every poster in this thread besides
yourself who commented on this issue is against you? That is to whom
the "we" referred.

Leigh Johnston

Dec 10, 2010, 6:44:28 PM

I am currently working on a plugin framework for an application and
plugin DLLs can be loaded/unloaded by the user. I am sure there are a
few other use-cases in existence in the wild.

/Leigh

Leigh Johnston

Dec 10, 2010, 6:50:34 PM

I say again: cowboys.

/Leigh

Ian Collins

Dec 10, 2010, 7:09:17 PM
On 12/11/10 10:08 AM, Leigh Johnston wrote:

> On 10/12/2010 20:39, Ian Collins wrote:
>>
>> By the way, the leak example you posted differs in one significant
>> point from James' example above: the location of the allocation. James'
>> allocation takes place before main. I agree with my debugger in
>> categorising that as a "block in use" rather than a leak.
>>
>
> It is a leak pure and simple; as I said else-thread if you use the
> pattern in a DLL and load and unload the DLL multiple times the leak
> will become apparent. What some debugger chooses to name an allocation
> is mostly irrelevant; it *is* an allocation without a paired deallocation.

It really comes down to how you define a leak.

By my definition, a block is leaked when it becomes orphaned, either
when the last pointer to it is reassigned, or when the last pointer to
it goes out of scope (the two examples you posted).

The question then becomes: how can a block allocated outside of main leak?

The last pointer to it can be reassigned inside main, so that would be a
leak. But it can't go out of scope without being freed by the operating
system. So it's a block in use immediately before the process terminates.

In your counter example of loading and unloading a dynamic library, all
the allocations are happening within main, so the last pointer can both be
reassigned and go out of scope. So you do have a leak, even if you only
load and unload once.

--
Ian Collins

Ian Collins

Dec 10, 2010, 7:10:33 PM

Resorting to insults is a good way to lose an argument.

--
Ian Collins

Leigh Johnston

Dec 10, 2010, 7:35:44 PM

A leak is a lack of a deallocation not the reassignment of the last
pointer to the object (which is just incidental to the leak). Remember
an object can delete itself (delete this) which means all pointers to it
can go out of scope / be reassigned and yet it is not leaked.

/Leigh

Ian Collins

Dec 10, 2010, 7:39:27 PM

But memory allocated outside of main is deallocated, by the operating
system.

--
Ian Collins

Leigh Johnston

Dec 10, 2010, 7:43:01 PM

Of course it does! The OS deallocates all the *leaks*! :)

/Leigh

Leigh Johnston

Dec 10, 2010, 7:56:18 PM

Good stuff! Users of "static" beware... :)

/Leigh

Leigh Johnston

Dec 10, 2010, 9:23:10 PM
On 10/12/2010 23:31, Ian Collins wrote:
> On 12/11/10 10:08 AM, Leigh Johnston wrote:
>> On 10/12/2010 20:39, Ian Collins wrote:
>>> On 12/11/10 09:21 AM, Leigh Johnston wrote:
>>>>
>>>> Not considering object destruction when designing *new* classes is bad
>>>> practice IMO. Obviously there may be problems when working with
>>>> pre-existing designs which were created with a lack of such
>>>> consideration.
>>>
>>> A programmer seldom has the benefit of a green field design. Even when
>>> he or she does, there are still the dark and scary corners of the
>>> language where undefined behaviour lurks. Order of destruction issues is
>>> one such corner, especially when static objects exist in multiple
>>> compilation units.
>>
>> I am well aware of the unspecified construction/destruction order
>> associated with globals in multiple TUs and that is primary reason why
>> this method of James's should be avoided. The order of destruction of
>> "Meyers Singleton" objects *is* well defined for example although making
>> the "Meyers Singleton" method thread safe is not completely trivial.
>
> That is another pattern I use, but as you say, it has issues of its own.
>

Normally I instantiate all my singletons up front (before threading) but
I decided to quickly roll a new singleton template class just for the
fun of it (thread-safe Meyers Singleton):

namespace lib
{
    template <typename T>
    class singleton
    {
    public:
        static T& instance()
        {
            if (sInstancePtr != 0)
                return static_cast<T&>(*sInstancePtr);
            { // locked scope
                lib::lock lock1(sLock);
                static T sInstance;
                { // locked scope
                    lib::lock lock2(sLock); // second lock should emit memory barrier here
                    sInstancePtr = &sInstance;
                }
            }
            return static_cast<T&>(*sInstancePtr);
        }
    private:
        static lib::lockable sLock;
        static singleton* sInstancePtr;
    };

    template <typename T>
    lib::lockable singleton<T>::sLock;
    template <typename T>
    singleton<T>* singleton<T>::sInstancePtr;
}

/Leigh

Leigh Johnston

Dec 10, 2010, 9:38:57 PM

Even though a memory barrier is emitted for a specific implementation of
my lockable class, it obviously still relies on the C++ compiler not
re-ordering stores across a library I/O call (acquiring the lock), but it
works fine for me at least (VC++). I could mention volatile but I'd
better not, as that would start a long argument. Roll on C++0x.

/Leigh

Leigh Johnston

Dec 10, 2010, 10:05:10 PM

Sorry, I was worrying over nothing; of course the C++ compiler will not
reorder a pointer assignment to before the creation of the object it
points to ... no volatile needed! :)

/Leigh

Joshua Maurice

Dec 10, 2010, 10:12:25 PM

If I'm reading your code right, on the fast path, you don't have a
barrier, a lock, or any other kind of synchronization, right? If yes,
you realize you've coded the naive implementation of double checked?
You realize that it's broken, right? Have you even read
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
?
To be clear, this has undefined behavior according to the C++0x
standard as well.

Leigh Johnston

Dec 10, 2010, 10:17:05 PM

I am aware of the double checked locking pattern, yes, and this is not the
double checked locking pattern (there is only one check of the pointer
if you look). If a pointer read/write is atomic it should be fine (on
the implementation I use it is, at least).

/Leigh

Joshua Maurice

Dec 10, 2010, 10:40:51 PM

You've hidden the second check with the static keyword.

Example: Consider:

SomeType& foo()
{
static SomeType foo;
return foo;
}

For a C++03 implementation, it's likely implemented with something
like:

SomeType& foo()
{
    static bool b = false; /* done before any runtime execution,
                              stored in the executable image */
    static char alignedStorage[sizeof(SomeType)]; /* with some magic
                                                     for alignment */
    if ( ! b)
        new (alignedStorage) SomeType();
    return * reinterpret_cast<SomeType*>(alignedStorage);
}

That's your double check.

For C++0x, it will not be implemented like that. Instead, it will be
implemented in a thread-safe way that makes your example entirely
redundant.

Joshua Maurice

Dec 10, 2010, 10:42:13 PM

Err, that should be:

SomeType& foo()
{
static bool b = false; /*done before any runtime execution, stored
in the executable image */
static char alignedStorage[sizeof(SomeType)]; /*with some magic
for alignment */
if ( ! b)
{
new (alignedStorage) SomeType();

b = true;

u2

Dec 10, 2010, 10:53:47 PM
On Dec 10, 10:17 pm, Leigh Johnston <le...@i42.co.uk> wrote:
>  If a pointer read/write is atomic is should be fine

It should? What if it is not?

> (on
> the implementation I use it is at least).

How do you know?

Leigh Johnston

Dec 10, 2010, 11:08:55 PM

The problem with the traditional double checked locking pattern is twofold:

1) The "checks" are straight pointer comparisons and for the second
check the pointer may not be re-read after the first check due to
compiler optimization.
2) The initialization of the pointer may be re-ordered by the CPU to
happen before the initialization of the singleton object is complete.

I think you are confusing the checking issue. I am acquiring a lock
before this hidden check of which you speak is made and this check is
not the same as the initial fast pointer check so issue 1 is not a problem.

As far as issue 2 is concerned my version (on VC++ at least) is solved
via my lock primitive which should emit a barrier on RAII construction
and destruction and cause VC++ *compiler* to not re-order stores across
a library I/O call (if I am wrong about this a liberal sprinkling of
volatile would solve it).

I should have stated in the original post that my solution is not
portable as-is but it is a solution for a particular implementation
(which doesn't preclude porting to other implementations). :)

/Leigh
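For comparison, the portable way to get the fast-path check Leigh describes is double-checked locking over a std::atomic pointer, where the acquire/release pair orders the object's construction before the pointer's publication; a sketch assuming a C++0x/C++11 compiler (the Widget class is a hypothetical example, not code from the thread):

```cpp
#include <atomic>
#include <mutex>

class Widget // hypothetical example class
{
public:
    static Widget& instance()
    {
        // Fast path: acquire-load; a non-null result guarantees the
        // object's construction is visible to this thread.
        Widget* p = sInstance.load(std::memory_order_acquire);
        if (p == nullptr)
        {
            std::lock_guard<std::mutex> guard(sMutex);
            // Second check, under the lock, in case another thread won.
            p = sInstance.load(std::memory_order_relaxed);
            if (p == nullptr)
            {
                p = new Widget;
                // Release-store publishes the fully constructed object.
                sInstance.store(p, std::memory_order_release);
            }
        }
        return *p;
    }
private:
    Widget() {}
    static std::atomic<Widget*> sInstance;
    static std::mutex sMutex;
};

std::atomic<Widget*> Widget::sInstance(nullptr);
std::mutex Widget::sMutex;
```

The atomic load/store pair is what the classic broken DCLP lacks; without it, both the compiler and the CPU are free to reorder the pointer publication ahead of the construction.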

Leigh Johnston

Dec 10, 2010, 11:13:15 PM
On 11/12/2010 03:53, u2 wrote:
> On Dec 10, 10:17 pm, Leigh Johnston<le...@i42.co.uk> wrote:
>> If a pointer read/write is atomic it should be fine
>
> It should? What if it is not?

If it is not then it won't work.

>
>> (on
>> the implementation I use it is at least).
>
> How do you know?

By examining emitted assembler and reading Intel docs.

/Leigh

Joshua Maurice

Dec 10, 2010, 11:21:56 PM
On Dec 10, 8:08 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> The problem with the traditional double checked locking pattern is twofold:
>
> 1) The "checks" are straight pointer comparisons and for the second
> check the pointer may not be re-read after the first check due to
> compiler optimization.
> 2) The initialization of the pointer may be re-ordered by the CPU to
> happen before the initialization of the singleton object is complete.
>
> I think you are confusing the checking issue.  I am acquiring a lock
> before this hidden check of which you speak is made and this check is
> not the same as the initial fast pointer check so issue 1 is not a problem.
>
> As far as issue 2 is concerned my version (on VC++ at least) is solved
> via my lock primitive which should emit a barrier on RAII construction
> and destruction and cause VC++ *compiler* to not re-order stores across
> a library I/O call (if I am wrong about this a liberal sprinkling of
> volatile would solve it).
>
> I should have stated in the original post that my solution is not
> portable as-is but it is a solution for a particular implementation
> (which doesn't preclude porting to other implementations). :)

First, you are incorrect about the issues of double checked locking,
and threading in general. I again suggest that you read:
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
You are ignoring hardware cache issues and single pipeline reordering
issues.

Also, it may work on the current x86_32 processor, but it probably (?)
won't work on Windows on all currently available and future hardware.

u2

Dec 10, 2010, 11:34:15 PM

Which docs?

Paavo Helde

Dec 11, 2010, 3:50:16 AM
Leigh Johnston <le...@i42.co.uk> wrote in
news:ItOdncr4ttt9qp_Q...@giganews.com:

> On 10/12/2010 13:59, Fred Zwarts wrote:
>> "Leigh Johnston"<le...@i42.co.uk> wrote in message
[...]
>>> James "Cowboy" Kanze's OO designs includes objects that are never
>>> destructed but leak instead? Interesting. What utter laziness
>>> typical of somebody who probably overuses (abuses) the singleton
>>> pattern. Singleton can be considered harmful (use rarely not
>>> routinely).
>>
>> As far as I can see it does not leak.
>> Up to the very end of the program the ourInstance pointer keeps
>> pointing to the object and can be used to access the object.
>> This is a well known technique to overcome the order of destruction
>> issue.
>
Of course it is a memory leak; the only upside is that it is a singular
leak that would be cleaned up by the OS as part of program termination,
rather than an ongoing leak that continues to consume memory.
It is lazy. As far as it being a "well known technique" goes, I have
encountered it before when working on a large project with many team
members, but that does not justify its use; it was a consequence of
parallel development of many sub-modules with insufficient time set
aside for proper interop design and too much risk associated with
"fixing" it.

Exactly. Programming is an engineering discipline, meaning that one has
to weigh the risks, costs and benefits. If the "leaked singleton"
approach is 10 times easier to get working correctly and has a
vanishingly small risk of ill side effects, I would go with it
regardless of whether somebody calls it a leak or not.

Cheers
Paavo


Leigh Johnston

Dec 11, 2010, 8:27:50 AM
On 11/12/2010 04:21, Joshua Maurice wrote:

From the document you keep harping on about:

Singleton* Singleton::instance () {
    Singleton* tmp = pInstance;
    ... // insert memory barrier // (1)
    if (tmp == 0) {
        Lock lock;
        tmp = pInstance;
        if (tmp == 0) {
            tmp = new Singleton;
            ... // insert memory barrier // (2)
            pInstance = tmp;
        }
    }
    return tmp;
}

My version has the barrier at (2) above, which should ensure that the
stores for the construction of the singleton object are globally visible
before the store of the pointer. As far as the barrier at (1) is
concerned, I am not sure that I need it, as I can't have reordered loads
of any pointer (the pointer is only loaded once for the initial fast check).

/Leigh

Leigh Johnston

Dec 11, 2010, 8:52:43 AM


Hmm, I think I see why I might need the first barrier: is it because
loads could be made from the singleton object before the pointer check,
causing problems for *clients* of the function? Any threading experts
care to explain?

/Leigh

Leigh Johnston

Dec 11, 2010, 9:51:32 AM
On 11/12/2010 04:34, u2 wrote:
> On Dec 10, 11:13 pm, Leigh Johnston<le...@i42.co.uk> wrote:
>> On 11/12/2010 03:53, u2 wrote:
>>
>>> On Dec 10, 10:17 pm, Leigh Johnston<le...@i42.co.uk> wrote:
>>>> If a pointer read/write is atomic it should be fine
>>
>>> It should? What if it is not?
>>
>> If it is not then it won't work.
>>
>>
>>
>>>> (on
>>>> the implementation I use it is at least).
>>
>>> How do you know?
>>
>> By examining emitted assembler and reading Intel docs.
>
> Which docs?
>

http://www.intel.com/products/processor/manuals/index.htm?wapkw=(manuals)

Chris M. Thomasson

Dec 11, 2010, 11:05:18 AM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:kY-dnahdNL25H57Q...@giganews.com...
[...]

> Hmm, I think I see why I might need the first barrier: is it due to loads
> being made from the singleton object before the pointer check causing
> problems for *clients* of the function? any threading experts care to
> explain?

http://lwn.net/Articles/5159

http://mirror.linux.org.au/linux-mandocs/2.6.4-cset-20040312_2111/read_barrier_depends.html

http://groups.google.com/group/comp.lang.c++.moderated/msg/e500c3b8b6254f35

Basically, the only architecture out there which requires a data-dependent
acquire barrier after the initial atomic load of the shared instance pointer
is the DEC Alpha...


BTW, there is a way to provide DCL while still allowing for the destruction
of the singleton. The simplest method is to have the singleton return a
smart pointer that has the strong thread-safety guarantee. Here is one such
pointer implementation:

http://atomic-ptr-plus.sourceforge.net


Here is another experimental one:

http://webpages.charter.net/appcore/vzoom/refcount

Keep in mind that `boost/std::smart_ptr' does not provide the strong
thread-safety guarantee...


Leigh Johnston

Dec 11, 2010, 11:54:38 AM

Thanks, so in summary my version should work on my implementation
(IA-32/VC++) and probably would work on other implementations except DEC
Alpha for which an extra barrier would be required. Whether or not
volatile is required to ensure stores are not re-ordered by the
*compiler* across a library I/O call is an open question and one that
disappears once C++0x arrives.

/Leigh

Ebenezer

Dec 11, 2010, 1:14:57 PM

I think of cowboy as being positive. Cowboys were/are hard-working,
practical guys.


Brian Wood
Ebenezer Enterprises
http://webEbenezer.net

Ian Collins

Dec 11, 2010, 1:39:42 PM

Google "cowboy builder" for the interpretation from the other side of
the pond!

--
Ian Collins

Chris M. Thomasson

Dec 11, 2010, 1:47:56 PM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:m4adnexxGJ1aMZ7Q...@giganews.com...

> On 11/12/2010 16:05, Chris M. Thomasson wrote:
>> "Leigh Johnston"<le...@i42.co.uk> wrote in message
>> news:kY-dnahdNL25H57Q...@giganews.com...
>> [...]
>>
>>> Hmm, I think I see why I might need the first barrier: is it due to
>>> loads
>>> being made from the singleton object before the pointer check causing
>>> problems for *clients* of the function? any threading experts care to
>>> explain?
>>
>> http://lwn.net/Articles/5159
>>
>> http://mirror.linux.org.au/linux-mandocs/2.6.4-cset-20040312_2111/read_barrier_depends.html
>>
>> http://groups.google.com/group/comp.lang.c++.moderated/msg/e500c3b8b6254f35
>>
>> Basically, the only architecture out there which requires a
>> data-dependent
>> acquire barrier after the initial atomic load of the shared instance
>> pointer
>> is a DEC Alpha...

[...]

> Thanks, so in summary my version should work on my implementation
> (IA-32/VC++) and probably would work on other implementations except DEC
> Alpha for which an extra barrier would be required.

Are you referring to this one:

http://groups.google.com/group/comp.lang.c++/msg/547148077c2245e2


I believe I have kind of solved the problem. I definitely see what you are
doing here and agree that you can get a sort of "portable" acquire/release
memory barriers by using locks. However, the lock portion of a mutex only
has to contain an acquire barrier, which happens to be the _wrong_ type for
producing an object. I am referring to the following snippet of your code:

<Leigh Johnston thread-safe version of Meyers singleton>
___________________________________________________________
static T& instance()
{
00:     if (sInstancePtr != 0)
01:         return static_cast<T&>(*sInstancePtr);
02:     { // locked scope
03:         lib::lock lock1(sLock);
04:         static T sInstance;
05:         { // locked scope
06:             lib::lock lock2(sLock); // second lock should emit memory barrier here
07:             sInstancePtr = &sInstance;
08:         }
09:     }
10:     return static_cast<T&>(*sInstancePtr);
}
___________________________________________________________


Line `06' does not produce the correct memory barrier. Instead, you can try
something like this:


<pseudo-code and exception saftey aside for a moment>
___________________________________________________________
struct thread
{
    pthread_mutex_t m_acquire;
    pthread_mutex_t m_release;

    virtual void user_thread_entry() = 0;

    static void thread_entry_stub(void* x)
    {
        thread* const self = static_cast<thread*>(x);
        pthread_mutex_lock(&self->m_release);
        self->user_thread_entry();
        pthread_mutex_unlock(&self->m_release);
    }
};

template<typename T>
T& meyers_singleton()
{
    static T* g_global = NULL;
    T* local = ATOMIC_LOAD_DEPENDS(&g_global);

    if (! local)
    {
        static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
        thread* const self_thread = static_cast<thread*>(pthread_getspecific(...));

        pthread_mutex_lock(&g_mutex);
00:     static T g_instance;

        // simulated memory release barrier
01:     pthread_mutex_lock(&self_thread->m_acquire);
02:     pthread_mutex_unlock(&self_thread->m_release);
03:     pthread_mutex_lock(&self_thread->m_release);
04:     pthread_mutex_unlock(&self_thread->m_acquire);

        // atomically produce the object
05:     ATOMIC_STORE_NAKED(&g_global, &g_instance);
06:     local = &g_instance;

        pthread_mutex_unlock(&g_mutex);
    }

    return *local;
}
___________________________________________________________


The code "should work" under very many existing POSIX implementations. Here
is why...


- The implied release barrier contained in line `02' cannot rise above line
`01'.

- The implied release barrier in line `02' cannot sink below line `03'.

- Line `04' cannot rise above line `03'.

- Line '03' cannot sink below line `04'.

- Line `00' cannot sink below line `02'.

- Lines `05, 06' cannot rise above line `03'.


Therefore the implied release barrier contained in line `02' will always
execute _after_ line `00' and _before_ lines `05, 06'.


Keep in mind that there are some fairly clever mutex implementations that do
not necessarily have to execute any memory barriers for a lock/unlock pair.
Think of exotic asymmetric mutex implementations. I have not seen any in
POSIX implementations yet, but I have seen them used for implementing the
internals of a Java VM...


[...]


Leigh Johnston

Dec 11, 2010, 1:57:35 PM

Thanks for the info. At the moment I am only concerned with IA-32/VC++
implementation which should be safe. I could add specific barriers to
my lock class when porting to other implementations (not something I
plan on doing any time soon).

/Leigh

Chris M. Thomasson

Dec 11, 2010, 2:57:42 PM
"Leigh Johnston" <le...@i42.co.uk> wrote in message
news:aI-dnTRysdoEVJ7Q...@giganews.com...

FWIW, an atomic store on IA-32 has implied release memory barrier semantics.
Also, an atomic load has implied acquire semantics. All LOCK'ed atomic RMW
operations basically have implied full memory barrier semantics:

http://www.intel.com/Assets/PDF/manual/253668.pdf
(read chapter 8)

Also, latest VC++ provides acquire/release for volatile load/store
respectively:

http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
(read all)

So, even if you port over to VC++ for X-BOX (e.g., PowerPC), you will get
correct behavior as well.


Therefore, I don't think you even need the second lock at all. If you are
using VC++ you can get away with marking the global instance pointer
variable as being volatile. This will give release semantics when you store
to it, and acquire when you load from it on Windows or X-BOX, Itanium...


[...]


Joshua Maurice

Dec 11, 2010, 7:30:09 PM

Or, you know, you could just do it "the right way" the first time and
put in all of the correct memory barriers, avoiding undefined behavior
according to the C++ standard. It's not like it will actually put in a
useless no-op when using the appropriate C++0x atomics, as a normal
load on that architecture apparently has all of the desired semantics.
Why write unportable code whose correctness you have to prove from arch
manuals when you can write portable code whose correctness you can
prove from the much simpler C++0x standard?

Moreover, are you ready to say that you can foresee all possible
compiler, linker, hardware, etc., optimizations in the future which
might not exist yet, and you know that they won't break the code?
Sure, the resultant assembly output is correct at the moment according
to the x86 assembly docs, but that is no guarantee that the C++
compiler will produce that correct assembly in the future. It could
implement cool optimizations that would break the /already broken/ C++
code. This is why you write to the appropriate standard. When in C++
land, write to the C++ standard.

I strongly disagree with your implication, Chris, that Leigh is using
good practice with his nonsense non-portable threading hacks,
especially if/when C++0x comes out and is well supported.

PS: If you are implementing a portable threading library, then
eventually someone has to use the non-portable hardware specifics.
However, only that person / library should have to, not the writer of
what should be portable general purpose code.

PPS: volatile has no place in portable code as a threading primitive
in C or C++. None. It never has. Please stop perpetuating this myth.

Öö Tiib

Dec 11, 2010, 8:19:13 PM
On Dec 12, 2:30 am, Joshua Maurice <joshuamaur...@gmail.com> wrote:
> On Dec 11, 11:57 am, "Chris M. Thomasson" <cris...@charter.net> wrote:
> > "Leigh Johnston" <le...@i42.co.uk> wrote in message
> >news:aI-dnTRysdoEVJ7Q...@giganews.com...
> > > On 11/12/2010 18:47, Chris M. Thomasson wrote:

[...]

How can there be such a heated discussion about the singleton anti-
pattern? If several singletons get constructed during concurrent
grabbing of instance(), then destroy them until there remains one. Or
better, none. No harm done, since singletons are trash anyway.

Joshua Maurice

Dec 11, 2010, 8:23:00 PM

I haven't been talking about singletons. I've been having a heated
discussion over incorrect, or at least non-portable bad-style,
threading code.

Leigh Johnston

Dec 11, 2010, 8:59:47 PM

You are trolling. Obviously when I have a C++0x compiler available I
will use the C++0x idioms. I currently do not have a C++0x compiler
available. C++0x has not been fucking standardized yet. I am using the
code *now* not *later*.

>
> PS: If you are implementing a portable threading library, then
> eventually someone has to use the non-portable hardware specifics.
> However, only that person / library should have to, not the writer of
> what should be portable general purpose code.
>
> PPS: volatile has no place in portable code as a threading primitive
> in C or C++. None. It never has. Please stop perpetuating this myth.

I already said that on the specific implementation I am using I
*probably* don't need volatile as a library I/O call seems to prevent
*compiler* reordering across the library call. *If* that is not the
case one could use volatile on that *particular* implementation to
prevent *compiler* reordering. Yes I am well aware that compiler
reordering is not the same as CPU reordering.

/Leigh

Leigh Johnston

Dec 11, 2010, 9:00:13 PM
On 12/12/2010 01:23, Joshua Maurice wrote:

Bullshit. I never said it was portable. It works for me on the
implementation I use. What is bad style is Kanze's leaky, broken
(multiple-TU) singleton method. What is bad style is designing classes
without any consideration of object destruction.

One can instantiate multiple "Meyers Singletons" before creating any
threads to avoid any singleton related threading code. No object
destruction problems.

/Leigh

Chris M. Thomasson

Dec 11, 2010, 9:01:52 PM
"Chris M. Thomasson" <cri...@charter.net> wrote in message
news:bHPMo.464$My1...@newsfe16.iad...

> "Leigh Johnston" <le...@i42.co.uk> wrote in message

> template<typename T>
> T& meyers_singleton()
> {
> static T* g_global = NULL;
> T* local = ATOMIC_LOAD_DEPENDS(&g_global);
>
> if (! local)
> {
> static pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
> thread* const self_thread = pthread_get_specific(...);
>
> pthread_mutex_lock(&g_mutex);
> 00: static T g_instance;
>
> // simulated memory release barrier
> 01: pthread_mutex_lock(&self_thread->m_acquire);
> 02: pthread_mutex_unlock(&self_thread->m_release);
> 03: pthread_mutex_lock(&self_thread->m_release);
> 04: pthread_mutex_unlock(&self_thread->m_acquire);


here is a simplification:


// simulated memory release barrier

pthread_mutex_unlock(&self_thread->m_release);
pthread_mutex_lock(&self_thread->m_release);


No code below the lock can rise above it, and no code above the unlock can
sink below it.

This will produce proper release barrier on many existing POSIX
implementations for a plurality of architectures. I cannot think of a
PThread mutex implementation that gets around all explicit membars in the
internal `pthread_mutex_lock/unlock' code.


I do know of some Java proposals...

Joshua Maurice

Dec 11, 2010, 9:27:09 PM
On Dec 11, 6:00 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 12/12/2010 01:23, Joshua Maurice wrote:
>
>
>
> > On Dec 11, 5:19 pm, Tiib<oot...@hot.ee>  wrote:

You continue to assert that a memory leak is bad a priori. The rest of
us in this thread disagree with that claim. Instead, we program to
tangible requirements, such as cost, time to market, meets (business)
use case. We also keep in mind less tangible but still important
considerations, like maintainability and reusability.

"No memory leaks" in the sense you're using has never been a
(business) use case for any of my projects. However, arguably, you
have a good point with regards to maintainability and reusability
w.r.t. the use case of repeated loading and unloading of the same DLL
in the same process.

With that out of the way, let's go back to the threading issue. I
never claimed that you claimed that it was portable. I said the code
is very badly written because writing the threading code the correct
way:
1- carries no additional runtime cost,
2- takes less time to code (as you don't need to whip out an x86
assembly manual),
3- lets others easily verify your code's correctness (as it doesn't
require them to whip out an x86 assembly manual),
4- better guarantees that future compiler or hardware upgrades won't
break your program,
5- gives portability,
6- and finally makes you look like someone who doesn't write bad code.

Leigh Johnston

Dec 12, 2010, 9:55:06 AM

You are not listening. If you have multiple singletons using Mr Kanze's
method that "you all" agree with, the order of construction of these
singletons across multiple TUs is unspecified; i.e. the method suffers
from the same problem as ordinary global variables; it is no better
than using ordinary global variables modulo the lack of object
destruction (which is shite). Unspecified construction order is
anathema to maintainability, as the order could change as TUs are added
to or removed from a project.

>
> "No memory leaks" in the sense you're using has never been a
> (business) use case for any of my projects. However, arguably, you
> have a good point with regards to maintainability and reusability
> w.r.t. the use case of repeated loading and unloading of the same DLL
> in the same process.
>
> With that out of the way, let's go back to the threading issue. I
> never claimed that you claimed that it was portable. I said the code
> is very badly written because writing the threading code the correct
> way:
> 1- carries no additional runtime cost,
> 2- has less time to code (as you don't need to whip out an x86
> assembly manual),
> 3- let's others easily verify your code's correctness (as it doesn't
> require them to whip out an x86 assembly manual),
> 4- better guarantees that future compiler or hardware upgrades won't
> break your program,
> 5- gives portability,
> 6- and finally makes you look like not someone who writes bad code.

As Chris pointed out, the only problem with my version compared to the
version given in the document by Meyers and Alexandrescu that you seem
so fond of is the lack of a memory barrier after the initial fast check,
but this is only a problem for a minimal number of CPUs as the load is
dependent. If I had to port my code to run on such CPUs I would simply
have to add this extra barrier.

In the real world people write non-portable code all the time as doing
so is not "incorrect".

/Leigh

Leigh Johnston

Dec 12, 2010, 10:40:38 AM

FWIW I have since changed my singleton template :) ...

template <typename T>
class singleton
{
public:
    static T& instance()
    {
        T* ret = static_cast<T*>(sInstancePtr);
        lib::memory_barrier_acquire_dependant();
        if (ret == 0)
        {
            lib::lock lock1(sLock);
            static T sInstance;
            lib::memory_barrier_release();
            sInstancePtr = &sInstance;
            ret = static_cast<T*>(sInstancePtr);
        }
        return *ret;
    }
private:
    static lib::lockable sLock;
    static singleton* sInstancePtr;
};

template <typename T>
lib::lockable singleton<T>::sLock;
template <typename T>
singleton<T>* singleton<T>::sInstancePtr;

/Leigh

Chris M. Thomasson

Dec 12, 2010, 6:43:22 PM
"Joshua Maurice" <joshua...@gmail.com> wrote in message
news:784150c0-0156-4685...@a28g2000prb.googlegroups.com...

[...]

> > On 11/12/2010 18:47, Chris M. Thomasson wrote:

[...]

> > FWIW, an atomic store on IA-32 has implied release memory barrier


> > semantics.
> > Also, an atomic load has implied acquire semantics. All LOCK'ed atomic
> > RMW
> > operations basically have implied full memory barrier semantics:

[...]

> > Also, latest VC++ provides acquire/release for volatile load/store
> > respectively:
> >
> > http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
> > (read all)
> >
> > So, even if you port over to VC++ for X-BOX (e.g., PowerPC), you will
> > get
> > correct behavior as well.
> >
> > Therefore, I don't think you even need the second lock at all. If you
> > are
> > using VC++ you can get away with marking the global instance pointer
> > variable as being volatile. This will give release semantics when you
> > store
> > to it, and acquire when you load from it on Windows or X-BOX, Itanium...

> Or, you know, you could just do it "the right way" the first time and
> put in all of the correct memory barriers to avoid undefined behavior
> according to the C++ standard in order to well, avoid, undefined
> behavior.

Using C++ with multiple threads is already undefined behavior by default
because the C++ standard does not know anything about threading. BTW,
besides using C and 100% pure POSIX Threads, what exactly is "the right
way" to implement advanced synchronization primitives?

I personally choose to implement all of my sensitive concurrent algorithms
in assembly language. I try to contain the implementation code in externally
assembled object files that provide a common external C API. This is
"fairly safe", because "most" compilers treat calls to unknown external
functions as a sort of implicit "compiler barrier". However, aggressive
link-time optimizations have the ability to mess things up. Luckily, most
compilers that provide such features also have a way to turn them off.

Yes, it's like juggling chainsaws! Hopefully, the upcoming C++ standard is
going to keep the chance of decapitation down to a minimum...

;^)


> It's not like it will actually put in a useless no-op when
> using the appropriate C++0x atomics as a normal load on that
> architecture apparently has all of the desired semantics. Why write
> unportable code which you have to read arch manuals to prove its
> correctness when you can write portable code which you can prove its
> correctness from the much simpler C++0x standard?

What C++0x standard? Is it totally completed yet? How many compilers support
all of the functionality?

Well, I need to write code that works now. Unfortunately, this means I have
to implement non-portable architecture/platform specific code and abstract
it away under a common API.

C++0x is going to make things oh SO much easier!

:^D


> Moreover, are you ready to say that you can foresee all possible
> compiler, linker, hardware, etc., optimizations in the future which
> might not exist yet, and you know that they won't break the code?

Hell no. However, I can document that my code works on existing compilers
and architecture combinations, and add all the caveats. For instance, I have
to document that link-time optimizations should probably be turned off for
any code that makes use of my synchronization primitives.


> Sure, the resultant assembly output is correct at the moment according
> to the x86 assembly docs, but that is no guarantee that the C++
> compiler will produce that correct assembly in the future. It could
> implement cool optimizations that would break the /already broken/ C++
> code. This is why you write to the appropriate standard. When in C++
> land, write to the C++ standard.

There is no threading in the C++ standard. Heck, I think it's even
undefined behavior to use POSIX Threads with C++; does a thread
cancellation run dtors?


> I strongly disagree with your implications Chris that Leigh is using
> good practice with his threading nonsense non-portable hacks,
> especially if/when C++0x comes out and is well supported.

If Leigh sticks with the most current MSVC++ versions, he should be just
fine. That compiler happens to add extra semantics to `volatile'. So, if you
only use MSVC++, then you CAN use `volatile' for implementing threading
constructs. And the code will be portable to any architecture that the most
current version of MSVC++ happens to support, IA-32/64, PowerPC, and
Itanium, to name a few... For instance, MSVC++ does not emit any memory
barriers for volatile loads/stores on IA-32/64. However, it MUST emit the
proper barriers on PowerPC and Itanium.


> PS: If you are implementing a portable threading library, then
> eventually someone has to use the non-portable hardware specifics.
> However, only that person / library should have to, not the writer of
> what should be portable general purpose code.

Totally agreed! I have to write architecture/platform specific code and
abstract it away. The users only need to work with the _API_. No need to
worry about how it's actually implemented under the covers. You should
probably read the following post, and the entire thread when you get the
time:

http://groups.google.com/group/comp.programming.threads/msg/423df394a0370fa6

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/29ea516c5581240e

FWIW, here is an example code for an atomic reference counted pointer with
strong-thread safety guarantee:


http://webpages.charter.net/appcore/vzoom/refcount/


I can port this to basically any IA-32 based operating system that has a GCC
compiler. If I want to port it to PPC, well, I need to create the code in PPC
assembly language, make it adhere to the common API, and BAM! We are in
business. For some reason this website comes to mind:


http://predef.sourceforge.net/


That information is oh so EXTREMELY __valuable__ to me!!!


> PPS: volatile has no place in portable code as a threading primitive
> in C or C++. None. It never has. Please stop perpetuating this myth.

It definitely has a place in most recent versions of MSVC++.


Basically, the only way that you can get 100% portable threading right now
is to use PThreads and compilers that comply with POSIX standard. Read this
for more info on how a compiler, GCC of all things, can break POSIX code:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/63f6360d939612b3

Please read the whole thread; David Butenhof has some very interesting posts
in there!


:^)


Chris M. Thomasson

Dec 12, 2010, 6:46:00 PM
"Chris M. Thomasson" <cri...@charter.net> wrote in message
news:76dNo.2044$EX....@newsfe02.iad...

> "Joshua Maurice" <joshua...@gmail.com> wrote in message
> news:784150c0-0156-4685...@a28g2000prb.googlegroups.com...
[...]

>> PS: If you are implementing a portable threading library, then


>> eventually someone has to use the non-portable hardware specifics.
>> However, only that person / library should have to, not the writer of
>> what should be portable general purpose code.
>
> Totally agreed! I have to write architecture/platform specific code and
> abstract it away. The users only need to work with the _API_. No need to
> worry about how it's actually implemented under the covers. You should
> probably read the following post, and the entire thread when you get the
> time:
>
> http://groups.google.com/group/comp.programming.threads/msg/423df394a0370fa6
>
> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/29ea516c5581240e

BTW, I was posting as `SenderX' at the time...

;^)

[...]


Joshua Maurice

Dec 12, 2010, 6:54:59 PM

You didn't address one of my bigger concerns. If/when C++0x comes into
wide use, it's possible that compilers will start optimizing more
aggressively, and without the memory barrier, it's possible that it
could break. You have shown that the current C++ undefined behavior
produces correct assembly. That's one possible outcome of undefined
behavior. A compiler upgrade could break it.

Also, if you consider it acceptable to write broken non-portable code
when portable code is better in every respect, good for you.

> In the real world people write non-portable code all the time as doing
> so is not "incorrect".

They don't in my workplace.

Leigh Johnston

Dec 12, 2010, 7:13:04 PM

Did you deliberately ignore my other reply? I have since changed my
singleton template to add an extra barrier if you care to look even
though this barrier is a no-op on my implementation (although it should
prevent compiler reorderings).

>
> Also, if you consider it acceptable to write broken non-portable code
> when portable code is better in every respect, good for you.
>
>> In the real world people write non-portable code all the time as doing
>> so is not "incorrect".
>
> They don't in my workplace.

I often write non-portable code and wrap it in a portable API as doing
so is plain common sense. I also tend to layer my code with each layer
more abstract than the last; lower layers are more likely to contain
non-portable code.

/Leigh

Chris M. Thomasson

unread,
Dec 12, 2010, 7:25:56 PM12/12/10
to
"Joshua Maurice" <joshua...@gmail.com> wrote in message
news:cfb8de94-6b24-4e60...@r40g2000prh.googlegroups.com...

On Dec 12, 6:55 am, Leigh Johnston <le...@i42.co.uk> wrote:

[...]

> You didn't address one of my bigger concerns. If/when C++0x becomes in
> wide use, it's possible that compilers will start optimizing more
> aggressively, and without the memory barrier, it's possible that it
> could break.

How would you add the memory barrier now without resorting to an abstraction
that is based on 100% non-portable architecture specific code? What I would
personally do is create a low-level standard interface for each arch, say
something like:

extern void ia32_membar_acquire_depends(void);
extern void ia64_membar_acquire_depends(void);
extern void ppc_membar_acquire_depends(void);
extern void alpha_membar_acquire_depends(void);

The functions above would each be implemented in architecture specific
assembly language, and externally assembled into separate object files. Only
`alpha_membar_acquire_depends()' would contain the actual memory barrier.
All the others would boil down to NOPs. Then, I would use compile-time
pre-defined pre-processor macros to select the correct version, perhaps
something like:

#if defined (ARCH_DEC_ALPHA)
#  define membar_acquire_depends alpha_membar_acquire_depends
#elif defined (ARCH_IA32)
#  define membar_acquire_depends ia32_membar_acquire_depends
#elif defined (ARCH_...)

/* blah, blah */

#else
# error "Sorry, my library is not compatible with this environment!"
#endif


This is why the information contained within the wonderful:

http://predef.sourceforge.net

project is so darn important to me.


> You have shown that the current C++ undefined behavior
> produces correct assembly. That's one possible outcome of undefined
> behavior. A compiler upgrade could break it.

Link-time optimization can possibly break it. Also, some compilers require
you to explicitly insert compiler barriers in order for them to not re-order
code. However, luckily, link-time optimization aside for a moment... Most
compilers will indeed treat calls to completely unknown externally assembled
functions as "compiler-barriers".

However, AFAICT, Leigh Johnston is only using MSVC++. This compiler has
explicit documentation that gives certain guarantees to `volatile'. It has
compiler intrinsics for compiler barriers, memory barriers, atomic
operations, etc. So, if he sticks with that specific compiler, then
everything should be just fine.


Chris M. Thomasson

unread,
Dec 12, 2010, 8:05:10 PM12/12/10
to
"Chris M. Thomasson" <cri...@charter.net> wrote in message
news:76dNo.2044$EX....@newsfe02.iad...
[...]

> Basically, the only way that you can get 100% portable threading right now
> is to use PThreads and compilers that comply with POSIX standard. Read
> this
> for more info on how a compiler, GCC of all things, can break POSIX code:
>
> http://groups.google.com/group/comp.programming.threads/browse_frm/thread/63f6360d939612b3
>
> Please read the whole thread; David Butenhof has some very interesting
> posts
> in there!

Here is one of those posts:

http://groups.google.com/group/comp.programming.threads/msg/729f412608a8570d


Chris M. Thomasson

unread,
Dec 12, 2010, 8:15:33 PM12/12/10
to
"Joshua Maurice" <joshua...@gmail.com> wrote in message
news:cfb8de94-6b24-4e60...@r40g2000prh.googlegroups.com...
[...]

> > In the real world people write non-portable code all the time as doing
> > so is not "incorrect".

> They don't in my workplace.

I guess the best you can do is abstract a non-portable atomic
operations/memory barrier API behind a C++0x interface. Luckily, somebody
already did that for us:

http://www.stdthread.co.uk/

But I think even this still might be able to break with very aggressive
link-time optimizations...


Ebenezer

unread,
Dec 13, 2010, 1:27:59 AM12/13/10
to
> They don't in my workplace.
>

With an online service (code generator) it's OK to
write non-portably as long as you're happy with the
platform you're on and are convinced you've found the
platform of your dreams. I'm on Linux/Intel and am not
sure it is the platform of my dreams. I still haven't
found what I'm looking for, but think it's possible to
find such a platform eventually.

James Kanze

unread,
Dec 13, 2010, 5:12:31 AM12/13/10
to
On Dec 10, 3:56 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 10/12/2010 15:29, James Kanze wrote:
> > On Dec 10, 1:16 pm, Leigh Johnston<le...@i42.co.uk> wrote:
> >> On 10/12/2010 09:52, James Kanze wrote:
> >>> On Dec 9, 5:05 pm, Marcel Müller<news.5.ma...@spamgourmet.com> wrote:
> >>>> Pallav singh wrote:

> > [...]
> >>> Note that the above still risks order of destruction issues;
> >>> it's more common to not destruct the singleton ever, with
> >>> something like:

> >>> namespace {

> >>> Singleton* ourInstance = &Singleton::instance();

> >>> Singleton&
> >>> Singleton::instance()
> >>> {
> >>>     if (ourInstance == NULL)
> >>>         ourInstance = new Singleton;
> >>>     return *ourInstance;
> >>> }
> >>> }

> >>> (This solves both problems at once: initializing the variable
> >>> with a call to Singleton::instance and ensuring that the
> >>> singleton is never destructed.)

> >> James "Cowboy" Kanze's OO designs includes objects that are never
> >> destructed but leak instead?

> > And where do you see a leak?

> Is that a serious question?

Yes. There's no memory leak in the code I posted. I've used it
in applications that run for years, without running out of
memory.

> The only real difference between the two programs below is the amount of
> memory leaked:

> int main()
> {
>     int* p = new int;
> }

> int main()
> {
>     int* p = new int;
>     p = new int;
> }

Arguably, there's no memory leak in either. A memory leak
results in memory use that increases in time. That's the usual
definition of "leak" in English, applied to memory, and it's the
only "useful" definition; if the second line in the above were
in a loop, you would have a leak.

Other definitions of memory leak are possible---I've seen people
claim that you can't have a memory leak in Java because it has
garbage collection, for example. But such definitions are of no
practical use, and don't really correspond to the usual meaning.

> A singular memory leak (one that is not repeated so doesn't
> consume more and more memory as a program runs) is still
> a memory leak.

> I will ignore the predictable, trollish part of your reply.

In other words, you know that your position is indefensible, so
prefer to resort to name calling.

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 5:29:35 AM12/13/10
to
On Dec 11, 3:05 am, Leigh Johnston <le...@i42.co.uk> wrote:
> On 11/12/2010 02:38, Leigh Johnston wrote:
> > On 11/12/2010 02:23, Leigh Johnston wrote:
> >> On 10/12/2010 23:31, Ian Collins wrote:
> >>> On 12/11/10 10:08 AM, Leigh Johnston wrote:
> >>>> On 10/12/2010 20:39, Ian Collins wrote:
> >>>>> On 12/11/10 09:21 AM, Leigh Johnston wrote:

[...]
> >> Normally I instantiate all my singletons up front (before
> >> threading)

That's really the only acceptable solution. (And to answer
Leigh's other point: you don't use singletons in plugins.)

> >> but I decided to quickly roll a new singleton
> >> template class just for the fun of it (thread-safe Meyers
> >> Singleton):

> >> namespace lib
> >> {
> >>     template <typename T>
> >>     class singleton
> >>     {
> >>     public:
> >>         static T& instance()
> >>         {
> >>             if (sInstancePtr != 0)
> >>                 return static_cast<T&>(*sInstancePtr);
> >>             { // locked scope
> >>                 lib::lock lock1(sLock);
> >>                 static T sInstance;
> >>                 { // locked scope
> >>                     lib::lock lock2(sLock); // second lock should emit memory barrier here
> >>                     sInstancePtr = &sInstance;
> >>                 }
> >>             }
> >>             return static_cast<T&>(*sInstancePtr);
> >>         }
> >>     private:
> >>         static lib::lockable sLock;
> >>         static singleton* sInstancePtr;
> >>     };
> >>
> >>     template <typename T>
> >>     lib::lockable singleton<T>::sLock;
> >>     template <typename T>
> >>     singleton<T>* singleton<T>::sInstancePtr;
> >> }

> > Even though a memory barrier is emitted for a specific
> > implementation of my lockable class it obviously still
> > relies on the C++ compiler not re-ordering stores across
> > a library I/O call (acquiring the lock) but it works fine
> > for me at least (VC++). I could mention volatile but
> > I better not as that would start a long argument. Roll on
> > C++0x.

> Sorry I was worrying over nothing; of course the C++ compiler
> will not reorder a pointer assignment to before the creation
> of the object it points to ..

Really? What makes you think that? (And of course, even if the
compiler doesn't reorder, the hardware might.)

The code posted above is broken on so many counts, it's hard to
know where to start. It is basically just the double checked
locking anti-pattern, known to not work. (See, for example,
http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf.)
With, in addition, the fact that there doesn't seem to be
anything which guarantees that sLock is constructed before it is
used.
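[For reference, once C++0x atomics are available, a double-checked
initialization can be written that is actually correct. The following is
a minimal editorial sketch, not any poster's code; it assumes a C++0x
conforming compiler and illustrative names:]

```cpp
#include <atomic>
#include <mutex>
#include <cassert>

class Singleton
{
public:
    static Singleton& instance()
    {
        // Fast path: acquire load synchronizes with the release store below.
        Singleton* p = sInstance.load(std::memory_order_acquire);
        if (p == nullptr)
        {
            std::lock_guard<std::mutex> guard(sMutex);
            // Re-check under the lock; relaxed is enough here because the
            // mutex already orders this load against the publishing store.
            p = sInstance.load(std::memory_order_relaxed);
            if (p == nullptr)
            {
                p = new Singleton;  // never deleted, as discussed above
                sInstance.store(p, std::memory_order_release);
            }
        }
        return *p;
    }
private:
    Singleton() {}
    static std::atomic<Singleton*> sInstance;
    static std::mutex sMutex;
};

std::atomic<Singleton*> Singleton::sInstance(nullptr);
std::mutex Singleton::sMutex;
```

[The acquire/release pair is what the naive pattern lacks: it guarantees
that a thread seeing a non-null pointer also sees the fully constructed
object.]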

> no volatile needed! :)

In C++, the volatile wouldn't buy you anything; some versions of
Visual Studios do extend the meaning so that it could be used,
but this was (I believe) a temporary solution; presumably,
future versions will adopt the definition of volatile adopted in
C++0x.

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 5:44:43 AM12/13/10
to
On Dec 11, 4:08 am, Leigh Johnston <le...@i42.co.uk> wrote:
> On 11/12/2010 03:40, Joshua Maurice wrote:
> > On Dec 10, 7:17 pm, Leigh Johnston<le...@i42.co.uk> wrote:

> >> On 11/12/2010 03:12, Joshua Maurice wrote:
> >>>>> Normally I instantiate all my singletons up front
> >>>>> (before threading) but I decided to quickly roll a new

> >>> If I'm reading your code right, on the fast path, you
> >>> don't have a barrier, a lock, or any other kind of
> >>> synchronization, right? If yes, you realize you've coded
> >>> the naive implementation of double checked? You realize
> >>> that it's broken, right? Have you even read
> >>>http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
> >>>? To be clear, this has undefined behavior according to
> >>>the C++0x standard as well.

> >> I am aware of double checked locking pattern yes and this
> >> is not the double checked locking pattern (there is only
> >> one check of the pointer if you look). If a pointer
> >> read/write is atomic is should be fine (on the
> >> implementation I use it is at least).

> > You've hidden the second check with the static keyword.

> > Example: Consider:

> > SomeType& foo()
> > {
> >     static SomeType foo;
> >     return foo;
> > }

> > For a C++03 implementation, it's likely implemented with something
> > like:

> > SomeType& foo()
> > {
> >     static bool b = false; /* done before any runtime execution,
> >                               stored in the executable image */
> >     static char alignedStorage[sizeof(SomeType)]; /* with some magic
> >                                                      for alignment */
> >     if ( ! b)
> >     {
> >         new (alignedStorage) SomeType();
> >         b = true;
> >     }
> >     return * reinterpret_cast<SomeType*>(alignedStorage);
> > }

> > That's your double check.

> > For C++0x, it will not be implemented like that. Instead, it
> > will be implemented in a thread-safe way that makes your
> > example entirely redundant.

> The problem with the traditional double checked locking
> pattern is twofold:

> 1) The "checks" are straight pointer comparisons and for the second
> check the pointer may not be re-read after the first check due to
> compiler optimization.

That's not correct. Since there is a lock between the two
reads, the pointer must be reread.

One major problem with both the traditional double checked
locking and your example is that a branch which finds the
pointer not null will never execute any synchronization
primitives. Which means that there is no guarantee that it will
see a constructed object---in the absense of synchronization
primitives, the order of writes in another thread is not
preserved (and in practice will vary on most high end modern
processors).

You've added to the problem by not reading the pointer a second
time. This means that two threads may actually try to construct
the static object. Which doesn't work with most compilers today
(but will be guaranteed in C++0x, I think).

Finally, of course, if the instance function is called from the
constructor of a static object, there's a very good chance that
sLock won't have been constructed. (Unix supports static
construction of mutexes, but as far as I know, Windows doesn't.)

> 2) The initialization of the pointer may be re-ordered by the CPU to
> happen before the initialization of the singleton object is complete.

> I think you are confusing the checking issue. I am acquiring
> a lock before this hidden check of which you speak is made and
> this check is not the same as the initial fast pointer check
> so issue 1 is not a problem.

I think you're missing the point that order is only preserved if
*both* threads synchronize correctly. You're lock guarantees
the order the writes are emitted in the writing thread, but does
nothing to ensure the order in which the writes become visible
in other threads.

> As far as issue 2 is concerned my version (on VC++ at least) is solved
> via my lock primitive which should emit a barrier on RAII construction
> and destruction and cause VC++ *compiler* to not re-order stores across
> a library I/O call (if I am wrong about this a liberal sprinkling of
> volatile would solve it).

> I should have stated in the original post that my solution is not
> portable as-is but it is a solution for a particular implementation
> (which doesn't preclude porting to other implementations). :)

There are definitely implementations on which it will work: any
single core machine, for example. And it's definitely not
portable: among the implementations where it will not work, today,
are Windows, Linux and Solaris, at least when running on high
end platforms.

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 5:48:22 AM12/13/10
to
On Dec 11, 4:05 pm, "Chris M. Thomasson" <cris...@charter.net> wrote:
> "Leigh Johnston" <le...@i42.co.uk> wrote in message

> news:kY-dnahdNL25H57Q...@giganews.com...
> [...]

> > Hmm, I think I see why I might need the first barrier: is it
> > due to loads being made from the singleton object before the
> > pointer check causing problems for *clients* of the
> > function? any threading experts care to explain?

> http://lwn.net/Articles/5159

> http://mirror.linux.org.au/linux-mandocs/2.6.4-cset-20040312_2111/rea...

> http://groups.google.com/group/comp.lang.c++.moderated/msg/e500c3b8b6...

> Basically, the only architecture out there which requires
> a data-dependent acquire barrier after the initial atomic load
> of the shared instance pointer is a DEC Alpha...

You must know something I don't: the documentation of the Sparc
architecture definitely says that it isn't guaranteed; I've also
heard that it fails on Itaniums, and that it is uncertain on
80x86. (My own reading of the Intel documentation fails to turn
up a guarantee, but I've not seen everything.)

--
James Kanze

Yannick Tremblay

unread,
Dec 13, 2010, 6:00:33 AM12/13/10
to
In article <Xns9E4B6E3FD3F7Dm...@216.196.109.131>,

Paavo Helde <myfir...@osa.pri.ee> wrote:
>Leigh Johnston <le...@i42.co.uk> wrote in
>news:ItOdncr4ttt9qp_Q...@giganews.com:
>
>> On 10/12/2010 13:59, Fred Zwarts wrote:
>>> "Leigh Johnston"<le...@i42.co.uk> wrote in message
>[...]

>>>> James "Cowboy" Kanze's OO designs includes objects that are never
>>>> destructed but leak instead? Interesting. What utter laziness
>>>> typical of somebody who probably overuses (abuses) the singleton
>>>> pattern. Singleton can be considered harmful (use rarely not
>>>> routinely).
>>>
>>> As far as I can see it does not leak.
>>> Up to the very end of the program the ourInstance pointer keeps
>>> pointing to the object and can be used to access the object.
>>> This is a well known technique to overcome the order of destruction
>>> issue.
>>
>> Of course it is a memory leak the only upside being it is a singular
>> leak that would be cleaned up by the OS as part of program termination
>> rather than being an ongoing leak that continues to consume memory.
>> It is lazy. As far as it being a "well known technique" I have
>> encountered it before when working on a large project with many team
>> members but that does not justify its use; it was a consequence of
>> parallel development of many sub-modules with insufficient time set
>> aside for proper interop design and too much risk associated with
>> "fixing" it.
>
>Exactly. Programming is an engineering discipline, meaning that one has
>to estimate the risks, costs and benefits. If the "leaked singleton"
>approach is 10 times easier to get working correctly and has vanishing
>risk of ill side effects, I would go with it regardless if somebody calls
>it a leak or not.

Exactly,

Possible approaches:

1- Static initialisation singleton (also known as Meyer singleton)
+ Simple
+ no leak
- Can create problems due to unspecified destruction order

2- Newed never deleted singleton (also known as Gamma singleton)
+ simple
+ No destruction order issues
- arguably "leak"

3- Let's get complex and place mutex in
+ can make initialisation thread safe
- more complex
- possibly slower

4- Let's get even more complex and introduce full lifetime management
+ well, lifetime can be fully controlled.
- a lot more complex
- more chances to get it wrong.

If you really think you need #4, go and read Modern C++ Design.
There's a long discussion of singletons and lots of code to handle lots
of possibilities. It's a very nice and thorough analysis but only
required for academic purposes and some rare cases.

In the real world of real software, the simpler #1 and #2 solutions
will cover maybe 99% of the cases.
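[To make #1 and #2 concrete, minimal editorial sketches of both; the
class names are illustrative, not from any poster:]

```cpp
#include <cassert>

// 1. Meyers singleton: function-local static, constructed on first use
// and destroyed at program exit (hence possible destruction-order issues).
class MeyersSingleton
{
public:
    static MeyersSingleton& instance()
    {
        static MeyersSingleton theInstance;
        return theInstance;
    }
private:
    MeyersSingleton() {}
    ~MeyersSingleton() {}
};

// 2. Gamma singleton: newed once, never deleted, so there are no
// destruction-order issues (at the cost of the "leak" argued over above).
class GammaSingleton
{
public:
    static GammaSingleton& instance()
    {
        static GammaSingleton* theInstance = new GammaSingleton;
        return *theInstance;
    }
private:
    GammaSingleton() {}
};
```

[Neither sketch addresses thread-safe first construction; that is
approach #3's job.]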

Yannick
P.S.: I guess you could also just import Loki in your application,
that might offer the full #4 facilities without having to maintain
the code.


James Kanze

unread,
Dec 13, 2010, 6:09:15 AM12/13/10
to
On Dec 11, 7:57 pm, "Chris M. Thomasson" <cris...@charter.net> wrote:
> "Leigh Johnston" <le...@i42.co.uk> wrote in message

> news:aI-dnTRysdoEVJ7Q...@giganews.com...

> > On 11/12/2010 18:47, Chris M. Thomasson wrote:
> [...]

> > Thanks for the info. At the moment I am only concerned with
> > IA-32/VC++ implementation which should be safe.

> FWIW, an atomic store on IA-32 has implied release memory
> barrier semantics.

Could you cite a statement from Intel in support of that?

> Also, an atomic load has implied acquire semantics. All
> LOCK'ed atomic RMW operations basically have implied full
> memory barrier semantics:

See in particular §8.2.3.4, which specifically states that "Loads
may be reordered with earlier stores to different locations".
Which seems to say just the opposite of what you are claiming.

> Also, latest VC++ provides acquire/release for volatile load/store
> respectively:

> http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
> (read all)

> So, even if you port over to VC++ for X-BOX (e.g., PowerPC), you will get
> correct behavior as well.

Provided he uses volatile on the pointer (and uses the classical
double checked locking pattern, rather than his modified
version).

> Therefore, I don't think you even need the second lock at all.
> If you are using VC++ you can get away with marking the global
> instance pointer variable as being volatile. This will give
> release semantics when you store to it, and acquire when you
> load from it on Windows or X-BOX, Itanium...

IIUC, these guarantees were first implemented in VS 2010.
(They're certainly not present in the generated code of the
versions of VC++ I use, mainly 2005.)

I'm also wondering about their permanence. I know that Herb
Sutter presented them to the C++ committee with the suggestion
that the committee adopt them. After some discussion, he more
or less accepted the view of the other members of the committee,
that it wasn't a good idea. (I hope I'm not misrepresenting his
position---I was present during some of the discussions, but
I wasn't taking notes.) Given the dates, I rather imagine that
the feature set of 2010 was already fixed, and 2010 definitely
implements the ideas that Herb presented. To what degree
Microsoft will feel bound to these, once the standard is
officially adopted with a different solution, I don't know (and
I suspect that no one really knows, even at Microsoft).

--
James Kanze

Michael Doubez

unread,
Dec 13, 2010, 6:12:58 AM12/13/10
to
On 13 déc, 11:29, James Kanze <james.ka...@gmail.com> wrote:
> On Dec 11, 3:05 am, Leigh Johnston <le...@i42.co.uk> wrote:
>
> > On 11/12/2010 02:38, Leigh Johnston wrote:
> > > On 11/12/2010 02:23, Leigh Johnston wrote:
> > >> On 10/12/2010 23:31, Ian Collins wrote:
> > >>> On 12/11/10 10:08 AM, Leigh Johnston wrote:
> > >>>> On 10/12/2010 20:39, Ian Collins wrote:
> > >>>>> On 12/11/10 09:21 AM, Leigh Johnston wrote:
>
>     [...]
>
> > >> Normally I instantiate all my singletons up front (before
> > >> threading)
>
> That's really the only acceptable solution.  (And to answer
> Leigh's other point: you don't use singletons in plugins.)

Nitpicking.
DLL plugin might use/define singleton but AFAIK nobody said you have
to use the same kind of singleton everywhere.

--
Michael

James Kanze

unread,
Dec 13, 2010, 6:22:25 AM12/13/10
to

> >http://msdn.microsoft.com/en-us/library/12a04hfd(v=VS.100).aspx
> > (read all)

Until C++0x becomes reality, there is no "right way" in C++.
One reasonably portable way of getting memory barriers is to use
explicit locks; this will have a run-time impact. (Chris and
I have discussed this in the past.) Whether that run-time
impact is significant is another question---roughly speaking
(IIRC), Chris has developed a solution that will use one less
barrier than a classical mutex lock (which requires a barrier
when acquiring the lock, and another when freeing it). In the
absolute, a barrier is "expensive" (the equivalent of 10 or more
normal instructions?), but a lot depends on what else you're
doing; I think that in most cases, the difference will be lost
in the noise.

> It's not like it will actually put in a useless no-op when
> using the appropriate C++0x atomics as a normal load on that
> architecture apparently has all of the desired semantics. Why write
> unportable code which you have to read arch manuals to prove its
> correctness when you can write portable code which you can prove its
> correctness from the much simpler C++0x standard?

> Moreover, are you ready to say that you can foresee all possible
> compiler, linker, hardware, etc., optimizations in the future which
> might not exist yet, and you know that they won't break the code?
> Sure, the resultant assembly output is correct at the moment according
> to the x86 assembly docs, but that is no guarantee that the C++
> compiler will produce that correct assembly in the future. It could
> implement cool optimizations that would break the /already broken/ C++
> code. This is why you write to the appropriate standard. When in C++
> land, write to the C++ standard.

> I strongly disagree with your implications Chris that Leigh is using
> good practice with his threading nonsense non-portable hacks,
> especially if/when C++0x comes out and is well supported.

If/when C++0x comes out, obviously, you'd want to use it. Until
then, if you really do have a performance problem, you may have
to live with non-portable constructs. (At present, anything
involving threading is non-portable.) Leigh, of course,
disingenuously didn't mention non-portability in his initial
presentation of the algorithm (which contained other problems as
well); Chris is generally very explicit about such issues.

> PS: If you are implementing a portable threading library, then
> eventually someone has to use the non-portable hardware specifics.
> However, only that person / library should have to, not the writer of
> what should be portable general purpose code.

> PPS: volatile has no place in portable code as a threading primitive
> in C or C++. None. It never has. Please stop perpetuating this myth.

Microsoft has extended the meaning of volatile (starting with
VS 2010?) so that it can be used. This is a Microsoft specific
extension (and on the web page Chris cites, they explicitly
present it as such---this isn't the old Microsoft, trying to
lock you in without your realizing it). C++0x will provide
alternatives which should be portable, but until then...

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 6:24:28 AM12/13/10
to
On Dec 12, 2:27 am, Joshua Maurice <joshuamaur...@gmail.com> wrote:
> On Dec 11, 6:00 pm, Leigh Johnston <le...@i42.co.uk> wrote:

[...]


> You continue to assert that a memory leak is bad a priori.

He continues to assert that it is a memory leak, although it
doesn't fit any useful definition of a memory leak.

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 6:45:47 AM12/13/10
to
On Dec 12, 2:55 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 12/12/2010 02:27, Joshua Maurice wrote:
> > On Dec 11, 6:00 pm, Leigh Johnston<le...@i42.co.uk> wrote:
> >> On 12/12/2010 01:23, Joshua Maurice wrote:

> >>> On Dec 11, 5:19 pm, Tiib<oot...@hot.ee> wrote:
> >>>> On Dec 12, 2:30 am, Joshua Maurice<joshuamaur...@gmail.com> wrote:

> >>>>> On Dec 11, 11:57 am, "Chris M. Thomasson"<cris...@charter.net> wrote:
> >>>>>> "Leigh Johnston"<le...@i42.co.uk> wrote in message
> >>>>>>news:aI-dnTRysdoEVJ7Q...@giganews.com...
> >>>>>>> On 11/12/2010 18:47, Chris M. Thomasson wrote:

> >>>> [...]


> >> One can instantiate multiple "Meyers Singletons" before creating any
> >> threads to avoid any singleton related threading code. No object
> >> destruction problems.

> > You continue to assert that a memory leak is bad a priori. The rest of
> > us in this thread disagree with that claim. Instead, we program to
> > tangible requirements, such as cost, time to market, meets (business)
> > use case. We also keep in mind less tangible but still important
> > considerations, like maintainability and reusability.

> You are not listening. If you have multiple singletons using
> Mr Kanze's method that "you all" agree with it is unspecified
> as to the order of construction of these singletons across
> multiple TUs; i.e. the method suffers the same problem as
> ordinary global variables; it is no better than using ordinary
> global variables modulo the lack of object destruction (which
> is shite).

I'd suggest you read the code I posted first. It addresses the
order of initialization issues fully. (But of course, you're
not one to let actual facts bother you.)

> Unspecified construction order is anathema to maintainability
> as the order could change as TUs are added or removed from
> a project.

Unspecified constructor order of variables at namespace scope is
a fact of life in C++. That's why we use the singleton pattern
(which doesn't put the actual variable at namespace scope, but
allocates it dynamically).

Unspecified destructor order of variables with static lifetime
(namespace scope or not) is also a fact of life in C++. That's
why we don't destruct the variable we dynamically allocated.

It's called defensive programming. Or simply sound software
engineering. It's called avoiding undefined behavior, if you
prefer.

[...]


> As Chris pointed out the only problem with my version compared
> to the version given in document by Meyers and Alexandrescu
> that you seem so fond of is the lack of a memory barrier after
> the initial fast check but this is only a problem for
> a minimal number of CPUs as the load is dependent. If I had
> to port my code to run on such CPUs I simply have to add this
> extra barrier.

I'm not sure which problems in your code Chris tried to address,
or even how much of your code he actually studied; your code
definitely doesn't work on the machines I use (which includes
some with Intel processors, both under Linux and under Windows)
and the compilers I use.

> In the real world people write non-portable code all the time
> as doing so is not "incorrect".

Certainly not: until we get C++0x, all multithreaded code is
"non-portable". It's a bit disingenuous, however, to not
mention the limitations before others pointed out what didn't
work.

--
James Kanze

Leigh Johnston

unread,
Dec 13, 2010, 7:23:50 AM12/13/10
to

Bullshit. If the code was in a DLL and the DLL was loaded/unloaded
multiple times the leak would be obvious.

>
>> The only real difference between the two programs below is the amount of
>> memory leaked:
>
>> int main()
>> {
>> int* p = new int;
>> }
>
>> int main()
>> {
>> int* p = new int;
>> p = new int;
>> }
>
> Arguably, there's no memory leak in either. A memory leak
> results in memory use that increases in time. That's the usual
> definition of "leak" in English, applied to memory, and it's the
> only "useful" definition; if the second line in the above were
> in a loop, you would have a leak.

Bullshit. If the code was in a DLL and the DLL was loaded/unloaded
multiple times the leak would be obvious.

>
> Other definitions of memory leak are possible---I've seen people
> claim that you can't have a memory leak in Java because it has
> garbage collection, for example. But such definitions are of no
> practical use, and don't really correspond to the usual meaning.
>
>> A singular memory leak (one that is not repeated so doesn't
>> consume more and more memory as a program runs) is still
>> a memory leak.
>
>> I will ignore the predictable, trollish part of your reply.
>
> In other words, you know that your position is indefensible, so
> prefer to resort to name calling.

Bullshit.

Your singleton method is a leak and has unspecified construction order
across multiple TUs. It is hogwash.

/Leigh

Leigh Johnston

unread,
Dec 13, 2010, 7:31:29 AM12/13/10
to

It is typical of you to ignore the rest of the thread which has already
moved on. My singleton code has since become:

template <typename T>
class singleton
{
public:
    static T& instance()
    {
        T* ret = static_cast<T*>(sInstancePtr);
        memory_barrier_acquire_dependant();
        if (ret == 0)
        {
            lock theLock(sLock);
            static T sInstance;
            memory_barrier_release();
            sInstancePtr = &sInstance;
            ret = static_cast<T*>(sInstancePtr);
        }
        return *ret;
    }
private:
    static lockable sLock;
    static singleton* sInstancePtr;
};

template <typename T>
lockable singleton<T>::sLock;
template <typename T>
singleton<T>* singleton<T>::sInstancePtr;


>


>> no volatile needed! :)
>
> In C++, the volatile wouldn't buy you anything; some versions of
> Visual Studios do extend the meaning so that it could be used,
> but this was (I believe) a temporary solution; presumably,
> future versions will adopt the definition of volatile adopted in
> C++0x.
>

Again the thread has moved on. I am not using volatile.

/Leigh

Leigh Johnston

unread,
Dec 13, 2010, 7:41:37 AM12/13/10
to

To which version are you referring? The newest version does not use
volatile nor need it. There is a barrier after the first fast
check and whilst this barrier is a no-op on my implementation it does
prevent compiler reordering. My version is not the same as the
traditional double checked locking pattern if you care to look (there is
only one "pointer check"). My version is the Meyers Singleton which is
superior to your hogwash leaky version.

Again:

template <typename T>
class singleton
{
public:
    static T& instance()
    {
        T* ret = static_cast<T*>(sInstancePtr);
        lib::memory_barrier_acquire_dependant();
        if (ret == 0)
        {
            lib::lock lock1(sLock);
            static T sInstance;
            lib::memory_barrier_release();
            sInstancePtr = &sInstance;
            ret = static_cast<T*>(sInstancePtr);
        }
        return *ret;
    }
private:
    static lib::lockable sLock;
    static singleton* sInstancePtr;
};

template <typename T>
lib::lockable singleton<T>::sLock;
template <typename T>
singleton<T>* singleton<T>::sInstancePtr;

/Leigh


Joshua Maurice

unread,
Dec 13, 2010, 7:41:45 AM12/13/10
to

I'm sorry if I wasn't clear enough. Let me put forward my
proposition:

There seems to be an emerging memory model, whose basics are shared
between C++0x, POSIX, win32, and even Java. It seems like a wise idea
to program to this memory model as this gives the best guarantee of
correctness, maintainability, portability, and so on.

As we don't have C++0x at the moment, and POSIX and win32 are not
fully portable, I would suggest that you use a library which
implements "atomics" following the basic idea of these memory models.
The library can be one which you've downloaded, like Boost, or it can
be one you wrote yourself.

The "non-portable" stuff should be kept in that library, and the rest
of the code should use the portable API, that API which is consistent
with POSIX, win32, and C++0x.

In short, you shouldn't sprinkle your code with volatile. Have
functions implement some of the C++0x / POSIX / win32 semantics, and
implement those functions in terms of volatile. You minimize the code
which needs to be changed when porting. This is preferable in almost
every way to using volatile throughout your code.

Leigh Johnston

unread,
Dec 13, 2010, 7:43:55 AM12/13/10
to

Again you are deliberately replying to earlier posts in a thread that
are no longer relevant as the thread has moved on.

template <typename T>
class singleton
{
public:
    static T& instance()
    {
        T* ret = static_cast<T*>(sInstancePtr);
        lib::memory_barrier_acquire_dependant();
        if (ret == 0)
        {
            lib::lock lock1(sLock);
            static T sInstance;
            lib::memory_barrier_release();
            sInstancePtr = &sInstance;
            ret = static_cast<T*>(sInstancePtr);
        }
        return *ret;
    }
private:
    static lib::lockable sLock;
    static singleton* sInstancePtr;
};

template <typename T>
lib::lockable singleton<T>::sLock;
template <typename T>
singleton<T>* singleton<T>::sInstancePtr;

/Leigh

Leigh Johnston

unread,
Dec 13, 2010, 7:47:56 AM12/13/10
to

You are incorrect. The destruction order of Meyers Singletons is specified.

>
> 2- Newed never deleted singleton (also known as Gamma singleton)
> + simple
> + No destruction order issues
> - arguably "leak"

No destruction order issues because it is never destroyed, which is hogwash.

>
> 3- Let's get complex and place mutex in
> + can make initialisation thread safe
> - more complex
> - possibly slower

Depends how you do it; if using a pattern similar to DCLP the barrier
for the fastest path is a no-op on certain implementations.

>
> 4- Let's get even more complex and introduce full lifetime management
> + well, lifetime can be fully controlled.
> - a lot more complex
> - more chances to get it wrong.
>
> If you really think you need #4, go and read Modern C++ Design.
> There's a long discussion on singleton and lots of code to handle lots
> of possibilities. It's a very nice and thorough analysis but only
> required for academic purposes and some rare cases.
>
> In the real world of real software, the simpler #1 and #2 solutions
> will cover maybe 99% of the cases.
>

#1 yes; #2 for cowboy programmers yes.

/Leigh

Keith H Duggar

unread,
Dec 13, 2010, 8:31:50 AM12/13/10
to
On Dec 13, 7:47 am, Leigh Johnston <le...@i42.co.uk> wrote:
> On 13/12/2010 11:00, Yannick Tremblay wrote:
> > In article<Xns9E4B6E3FD3F7Dmyfirstnameosapr...@216.196.109.131>,
> > Paavo Helde<myfirstn...@osa.pri.ee>  wrote:

By itself Meyer's Singleton is not sufficient to eliminate
unspecified order of destruction "problems". It must be used
in combination with some convention such as "all objects that
use Singleton must call the Singleton::instance() method in
their constructor" to ensure the Singleton outlives objects
that might use it.

This can be difficult to achieve (especially in shops that
abuse Singleton in the first place) hence their temptation to
just throw in the towel and create these "never destroyed"
singletons. (FYI I'm not saying James works in such a shop;
I doubt that he does.)

Regardless, such objects are still a cop out. I have yet to
see any good justification or examples of objects that /must
not be/ properly destroyed at program termination. Though I
cannot claim there are no such objects. Maybe someone can
provide a legitimate example of an object that cannot have
any appropriate destructor called at exit?

KHD

PS. Leigh, save us your automatic "I am well aware ..." bs.
The Royal We don't care whether or not you /appear/ intelligent
in an internet forum. Furthermore what you did or did not know
is virtually irrelevant (except to your insecurity).

PPS. I'm not asking for examples of objects that /need not/ be
destroyed, I'm asking for examples that /must not/ be destroyed.

PPPS. The word is "destroyed" not "destructed" you lemmings.

Leigh Johnston

unread,
Dec 13, 2010, 8:41:28 AM12/13/10
to

The order of construction/destruction is specified according to the
standard; that is all I meant. The order of construction of Kanze's
leaky singleton objects is unspecified. As far as lifetime
relationships are concerned I do tend to make these explicit via
constructors, yes.

>
> This can be difficult to achieve (especially in shops that
> abuse Singleton in the first place) hence their temptation to
> just throw in the towel and create these "never destroyed"
> singletons. (FYI I'm not saying James works in such a shop;
> I doubt that he does.)
>
> Regardless, such objects are still a cop out. I have yet to
> see any good justification or examples of objects that /must
> not be/ properly destroyed at program termination. Though I
> cannot claim there are no such objects. Maybe someone can
> provide a legitimate example of an object that cannot have
> any appropriate destructor called at exit?
>
> KHD
>
> PS. Leigh, save us your automatic "I am well aware ..." bs.
> The Royal We don't care whether or not you /appear/ intelligent
> in an internet forum. Furthermore what you did or did not know
> is virtually irrelevant (except to your insecurity).

WTF? Troll (fuck) off.

/Leigh

gwowen

unread,
Dec 13, 2010, 12:02:56 PM12/13/10
to
On Dec 13, 1:41 pm, Leigh Johnston <le...@i42.co.uk> wrote:

> WTF?  Troll (fuck) off.
>
> /Leigh

Seriously, Leigh, if profanity is your response to everyone who
disagrees with you, you only succeed in coming across like a petulant,
hormonal teenager. Talk like a mature adult, or no-one will take
anything you say seriously.

Leigh Johnston

unread,
Dec 13, 2010, 12:25:41 PM12/13/10
to

Shrinking violets can use message filters if they find profanities
emotionally crippling.

/Leigh

Ebenezer

unread,
Dec 13, 2010, 1:03:18 PM12/13/10
to

I'm not entirely sure of your point, but think you are advocating
for portability here. With library code I think portability is
important and work to increase the portability of my code. But
with executable/service code I'm not convinced portability is
very important. What matters more I think is being happily
married to a decent platform. If one finds an excellent platform,
he can work on writing the best code on that platform possible
and rest assured that those working on the platform are ethical
people who will work hard to produce quality products. I'm on
the Linux/Intel platform at this time, but believe I'll move
to a better platform in the days ahead. This article about
airport/airplane security tells of some Israeli companies with
interesting products coming out.

http://current.com/technology/92862589_are-you-a-terrorist-the-simple-question-being-asked-at-an-airport-which-could-rumble-a-suicide-bomber.htm

(I believe that's a helpful question to ask people when
flying outside of Israel.)
Perhaps the new and improved platforms being developed
will have Israeli ties. If others have suggestions on where
to find these platforms, I'm interested.

Ebenezer

unread,
Dec 13, 2010, 1:11:59 PM12/13/10
to

I hope you will clean up your language here also. Profanity does
nothing to help your cause.

Leigh Johnston

unread,
Dec 13, 2010, 1:17:27 PM12/13/10
to

The code you posted results in unspecified construction order of your
leaking singletons even though they are dynamically allocated if they
are defined in multiple TUs.

>
>> Unspecified construction order is anathema to maintainability
>> as the order could change as TUs are added or removed from
>> a project.
>
> Unspecified constructor order of variables at namespace scope is
> a fact of life in C++. That's why we use the singleton pattern
> (which doesn't put the actual variable at namespace scope, but
> allocates it dynamically).

A fact of life you seem to be ignoring; the order of construction of the
following is specified within a single TU but unspecified in relation to
globals defined in other TUs:

namespace
{
    foo global1;
    foo* global2 = new foo();
    foo global3; // global2 has been fully constructed (dynamically
                 // allocated) before reaching here
}


>
> Unspecified destructor order of variables with static lifetime
> (namespace scope or not) is also a fact of life in C++. That's
> why we don't destruct the variable we dynamically allocated.

Dynamic allocation is irrelevant here; construction order is unspecified
as you are initializing a global pointer with the result of a dynamic
allocation.

>
> It's called defensive programming. Or simply sound software
> engineering. It's called avoiding undefined behavior, if you
> prefer.

As you are doing it wrong it is neither defensive programming nor sound
engineering.

/Leigh

James Kanze

unread,
Dec 13, 2010, 1:35:38 PM12/13/10
to
On Dec 13, 6:03 pm, Ebenezer <woodbria...@gmail.com> wrote:
> On Dec 13, 5:45 am, James Kanze <james.ka...@gmail.com> wrote:

> > On Dec 12, 2:55 pm, Leigh Johnston <le...@i42.co.uk> wrote:

> > > In the real world people write non-portable code all the time
> > > as doing so is not "incorrect".

> > Certainly not: until we get C++0x, all multithreaded code is
> > "non-portable". It's a bit disingenious, however, to not
> > mention the limitations before others pointed out what didn't
> > work.

> I'm not entirely sure of your point, but think you are advocating
> for portability here.

I'm not advocating anything, really. All I'm saying is that,
realistically, multithreaded code today will not be 100%
portable.

> With library code I think portability is
> important and work to increase the portability of my code.

Whether library code or application code, it's best to avoid
non-portable constructs when they aren't necessary, and to
isolate them in a few well defined areas when they are. For
some definition of "portable": I've worked on a lot of projects
where we supposed that floating point was IEEE, and we didn't
isolate the use of double to a few well defined areas:-).

In the end, there is no one right answer. The important thing
is to make an educated choice, and document it.

--
James Kanze

James Kanze

unread,
Dec 13, 2010, 1:40:38 PM12/13/10
to
On Dec 13, 6:17 pm, Leigh Johnston <le...@i42.co.uk> wrote:
> On 13/12/2010 11:45, James Kanze wrote:

[...]


> >> You are not listening. If you have multiple singletons using
> >> Mr Kanze's method that "you all" agree with it is unspecified
> >> as to the order of construction of these singletons across
> >> multiple TUs; i.e. the method suffers the same problem as
> >> ordinary global variables; it is no better than using ordinary
> >> global variables modulo the lack of object destruction (which
> >> is shite).

> > I'd suggest you read the code I posted first. It addresses
> > the order of initialization issues fully. (But of course,
> > you're not one to let actual facts bother you.)

> The code you posted results in unspecified construction order
> of your leaking singletons even though they are dynamically
> allocated if they are defined in multiple TUs.

I'd suggest you read it carefully, because it guarantees that
the singleton is constructed before first use.

> >> Unspecified construction order is anathema to
> >> maintainability as the order could change as TUs are added
> >> or removed from a project.

> > Unspecified constructor order of variables at namespace scope is
> > a fact of life in C++. That's why we use the singleton pattern
> > (which doesn't put the actual variable at namespace scope, but
> > allocates it dynamically).

> A fact of life you seem to be ignoring; the order of construction of the
> following is specified within a single TU but unspecified in relation to
> globals defined in other TUs:

> namespace
> {
> foo global1;
> foo* global2 = new foo();
> foo global3; // global2 has been fully constructed (dynamically
> allocated) before reaching here
>
> }

Certainly. Who ever said the contrary? And what relationship
does this have to any of the singleton implementations we've
been discussing?

> > Unspecified destructor order of variables with static lifetime
> > (namespace scope or not) is also a fact of life in C++. That's
> > why we don't destruct the variable we dynamically allocated.

> Dynamic allocation is irrelevant here; construction order is
> unspecified as you are initializing a global pointer with the
> result of a dynamic allocation

So you wrap your initialization in a function, and make sure
that the only way to access the pointer is through that
function. The classical (pre-Meyers) singleton pattern, in
fact.

> > It's called defensive programming. Or simply sound software
> > engineering. It's called avoiding undefined behavior, if you
> > prefer.

> As you are doing it wrong it is neither defensive programming
> nor sound engineering.

Again, I'd suggest you read my code very, very carefully. (I'll
admit that it's not immediately obvious as to why it works. But
it's been reviewed several times by leading experts, and never
found wanting.)

--
James Kanze

Leigh Johnston

unread,
Dec 13, 2010, 1:47:36 PM12/13/10
to

You are doing it wrong. I say again: if you have more than one of your
leaking singletons defined in more than one TU, the construction order of
your leaking singletons is unspecified.

This is your code:

namespace {

    Singleton* ourInstance = &Singleton::instance();

    Singleton&
    Singleton::instance()
    {
        if (ourInstance == NULL)
            ourInstance = new Singleton;
        return *ourInstance;
    }
}

The ourInstance *pointer* is a global object (albeit with internal
linkage) which you are initializing with a dynamic allocation wrapped in
a function. If you have more than one such initialization in more than
one TU the order of the initializations is unspecified.

/Leigh

Ian Collins

unread,
Dec 13, 2010, 2:00:56 PM12/13/10
to
On 12/14/10 01:23 AM, Leigh Johnston wrote:
> On 13/12/2010 10:12, James Kanze wrote:
>>
>> Yes. There's no memory leak in the code I posted. I've used it
>> in applications that run for years, without running out of
>> memory.
>
> Bullshit. If the code was in a DLL and the DLL was loaded/unloaded
> multiple times the leak would be obvious.

But it wasn't.

--
Ian Collins
