
a really simple C++ abstraction around pthread_t...


Chris M. Thomasson

Oct 30, 2008, 5:39:00 PM
I use the following technique in all of my C++ projects; here is the example
code with error checking omitted for brevity:
_________________________________________________________________
/* Simple Thread Object
______________________________________________________________*/
#include <pthread.h>


extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}

template<typename T>
struct active : public T {
active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Usage Example
______________________________________________________________*/
#include <string>
#include <cstdio>


class worker : public thread_base {
std::string const m_name;

void on_active() {
std::printf("(%p)->worker(%s)::on_thread_entry()\n",
(void*)this, m_name.c_str());
}

public:
worker(std::string const& name)
: m_name(name) {
std::printf("(%p)->worker(%s)::my_thread()\n",
(void*)this, m_name.c_str());
}

~worker() {
std::printf("(%p)->worker(%s)::~my_thread()\n",
(void*)this, m_name.c_str());
}
};


class another_worker : public thread_base {
unsigned const m_id;
std::string const m_name;

void on_active() {
std::printf("(%p)->my_thread(%u/%s)::on_thread_entry()\n",
(void*)this, m_id, m_name.c_str());
}

public:
another_worker(unsigned const id, std::string const& name)
: m_id(id), m_name(name) {
}
};


int main(void) {
{
active<worker> workers[] = {
"Chris",
"John",
"Jane",
"Steve",
"Richard",
"Lisa"
};

active<another_worker> other_workers[] = {
active<another_worker>(21, "Larry"),
active<another_worker>(87, "Paul"),
active<another_worker>(43, "Peter"),
active<another_worker>(12, "Shelly"),
};
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}
_________________________________________________________________


I personally like this technique better than Boost. I find it more
straightforward and perhaps more object-oriented; the RAII nature of the
`active' helper class does not hurt either. Also, I really do think it's more
"efficient" than Boost in the way it creates threads because it does not
copy anything...

IMHO, the really nice thing about it would have to be the `active' helper
class. It allows me to run and join, from its ctor/dtor, any object that
exposes a common interface of (T::active_run/join). Also, it allows me to
pass a variable number of arguments directly through its ctor to the object
it wraps; this is fairly convenient indeed...


Any suggestions on how I can improve this construct?

Szabolcs Ferenczi

Oct 30, 2008, 5:44:26 PM

It is better now that you have taken the advice about the terminology
(active):

http://groups.google.com/group/comp.lang.c++/msg/6e915b5211cce641

Best Regards,
Szabolcs

Chris M. Thomasson

Oct 30, 2008, 5:53:58 PM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:de3350c2-558d-4059...@u29g2000pro.googlegroups.com...

On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > I use the following technique in all of my C++ projects; here is the
> > example
> > code with error checking omitted for brevity:
> > _________________________________________________________________
> [...]

> > _________________________________________________________________
> >
> > I personally like this technique better than Boost. I find it more
> > straight
> > forward and perhaps more object oriented, the RAII nature of the
> > `active'
> > helper class does not hurt either. Also, I really do think its more
> > "efficient" than Boost in the way it creates threads because it does not
> > copy anything...
> >
> > IMHO, the really nice thing about it would have to be the `active'
> > helper
> > class. It allows me to run and join any object from the ctor/dtor that
> > exposes a common interface of (T::active_run/join). Also, it allows me
> > to
> > pass a variable number of arguments to the object it wraps directly
> > through
> > its ctor; this is fairly convenient indeed...
> >
> > Any suggestions on how I can improve this construct?

> Now it is better that you have taken the advice about the terminology
> (active):

> http://groups.google.com/group/comp.lang.c++/msg/6e915b5211cce641

Yeah; I think you're right.

Chris M. Thomasson

Oct 30, 2008, 6:16:09 PM

"Chris M. Thomasson" <n...@spam.invalid> wrote in message
news:dIpOk.15$kd...@newsfe01.iad...

>I use the following technique in all of my C++ projects; here is the
>example code with error checking omitted for brevity:
> _________________________________________________________________
[...]
> _________________________________________________________________
>
[...]

> Any suggestions on how I can improve this construct?

One addition I forgot would be an explicit `guard' helper object within the
`active' helper object, so that one can create objects and intervene between
their ctors and when they actually get run... Here is full example code
showing this:


_________________________________________________________________
/* Simple Thread Object
______________________________________________________________*/
#include <pthread.h>


extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}


template<typename T>
struct active : public T {

struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Usage Example
______________________________________________________________*/
#include <string>
#include <cstdio>


class worker : public thread_base {
std::string const m_name;

void on_active() {
std::printf("(%p)->worker(%s)::on_active()\n",
(void*)this, m_name.c_str());
}

public:
worker(std::string const& name)
: m_name(name) {
std::printf("(%p)->worker(%s)::my_thread()\n",
(void*)this, m_name.c_str());
}

~worker() {
std::printf("(%p)->worker(%s)::~my_thread()\n",
(void*)this, m_name.c_str());
}
};


class another_worker : public thread_base {
unsigned const m_id;
std::string const m_name;

void on_active() {
std::printf("(%p)->another_worker(%u/%s)::on_active()\n",
(void*)this, m_id, m_name.c_str());
}

public:
another_worker(unsigned const id, std::string const& name)
: m_id(id), m_name(name) {
}
};


int main(void) {
{
worker w1("Amy");
worker w2("Kim");
worker w3("Chris");
another_worker aw1(123, "Kelly");
another_worker aw2(12345, "Tim");
another_worker aw3(87676, "John");

active<thread_base>::guard w12_aw12[] = {
w1, w2, w3,
aw1, aw2, aw3
};

active<worker> workers[] = {
"Jim",
"Dave",
"Regis"
};

active<another_worker> other_workers[] = {
active<another_worker>(999, "Jane"),
active<another_worker>(888, "Ben"),
active<another_worker>(777, "Larry")
};
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}
_________________________________________________________________


Take notice of the following code snippet residing within main:


worker w1("Amy");
worker w2("Kim");
worker w3("Chris");
another_worker aw1(123, "Kelly");
another_worker aw2(12345, "Tim");
another_worker aw3(87676, "John");

active<thread_base>::guard w12_aw12[] = {
w1, w2, w3,
aw1, aw2, aw3
};


This shows how one can make use of the guard object. The objects are fully
constructed, and they allow you to go ahead and do whatever you need to do
with them _before_ you actually run/join them. This can be very convenient.

Adem

Oct 31, 2008, 10:40:11 AM
"Chris M. Thomasson" wrote
> "Chris M. Thomasson" wrote

>
> >I use the following technique in all of my C++ projects; here is the
> >example code with error checking omitted for brevity:
>
<snip>

Hmm, is it OK to stay within the ctor for the whole
duration of the lifetime of the object?
IMO the ctor should be used only for initializing the object,
not for executing or calling the "main loop" of the object,
because the object is fully created only after the ctor has finished,
isn't it?

Chris M. Thomasson

Oct 31, 2008, 2:22:09 PM
"Adem" <for-us...@alicewho.com> wrote in message
news:gef5ep$ihk$1...@aioe.org...
> "Chris M. Thomasson" wrote
[...]

>> template<typename T>
>> struct active : public T {
>> struct guard {
>> T& m_object;
>>
>> guard(T& object) : m_object(object) {
>> m_object.active_run();
>> }
>>
>> ~guard() {
>> m_object.active_join();
>> }
>> };
>>
>> active() : T() {


// at this point, T is constructed.


>> this->active_run();


// the procedure above only concerns T.

>> }
> <snip>
>
> Hmm. is it ok to stay within the ctor for the whole
> duration of the lifetime of the object?

In this case it is because the object `T' is fully constructed before its
`active_run' procedure is invoked.


> IMO the ctor should be used only for initializing the object,
> but not for executing or calling the "main loop" of the object
> because the object is fully created only after the ctor has finished,
> isn't it?

Normally you're correct. However, the `active' object has nothing to do with
the "main loop" of the object it wraps. See the following discussion for
further context:

http://groups.google.com/group/comp.lang.c++/browse_frm/thread/f7cae1851bb5f215

The OP of that thread was trying to start the thread within the ctor of the
threading base-class. This introduces a race condition such that the derived
class can be called into before its ctor has completed. The `active' helper
template gets around that by automating the calls to `active_run/join' on a
completely formed object T.
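
Here is a minimal sketch of the problematic pattern being described (my own
illustration, not code from the linked thread): the base-class ctor launches
the thread itself, so the new thread can call the virtual `on_active()'
before the derived ctor has finished:

#include <pthread.h>

extern "C" void* bad_entry(void*);

class bad_thread_base {
pthread_t m_tid;
friend void* bad_entry(void*);
virtual void on_active() = 0;

public:
bad_thread_base() {
// WRONG: the derived part of *this does not exist yet, so the new
// thread races with the rest of the construction (and can even hit
// a pure virtual call).
pthread_create(&m_tid, NULL, bad_entry, this);
}

virtual ~bad_thread_base() {
pthread_join(m_tid, NULL);
}
};

void* bad_entry(void* state) {
// may run while a derived ctor is still executing
static_cast<bad_thread_base*>(state)->on_active();
return 0;
}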

Szabolcs Ferenczi

Oct 31, 2008, 2:17:23 PM


> Hmm. is it ok to stay within the ctor for the whole
> duration of the lifetime of the object?

It does not stay within the constructor since the constructor
completes after starting a thread.

> IMO the ctor should be used only for initializing the object,

That is what is happening. Part of the initialisation is launching a
thread.

> but not for executing or calling the "main loop" of the object

The C++ model allows any method calls from the constructor.

12.6.2.9
"Member functions (including virtual member functions, 10.3) can be
called for an object under construction."
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2798.pdf

> because the object is fully created only after the ctor has finished,
> isn't it?

Yes, it is. In this case, though, the thread is started in the wrapper only
after the object has been fully constructed.

If you are interested in the arguments and counter arguments, you can
check the discussion thread where this construction has emerged:

"What's the connection between objects and threads?"
http://groups.google.com/group/comp.lang.c++/browse_frm/thread/f7cae1851bb5f215

Best Regards,
Szabolcs

Szabolcs Ferenczi

Oct 31, 2008, 3:15:35 PM
On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:

> Any suggestions on how I can improve this construct?

I think this construction is good enough for the moment. Now we should
turn to enhancing the communication part for shared objects in C++0x.

You know that Boost provides the scoped lock which has some advantage
over the explicit lock/unlock in Pthreads.

Furthermore, C++0x already includes a higher-level wrapper construct for
making waiting on condition variables more natural and user-friendly. I
refer to the wait wrapper, which expects a predicate:

<quote>
template <class Predicate>
void wait(unique_lock<mutex>& lock, Predicate pred);
Effects:
As if:
while (!pred())
wait(lock);
</quote>
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2497.html

Actually, this wrapper could be combined with the scoped lock to get a
very high level construction in C++0x.
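
Just to make that concrete, here is a minimal sketch of mine (using the
names that eventually landed in the working draft, and C++0x lambdas for the
predicates; not code from n2497 itself) combining the scoped lock with the
predicate-taking wait:

#include <mutex>
#include <condition_variable>
#include <deque>

std::mutex mtx;
std::condition_variable cond;
std::deque<int> data;
unsigned const data_max = 100;

void put(int v) {
    // scoped lock + predicate wait: blocks while the buffer is full
    std::unique_lock<std::mutex> lock(mtx);
    cond.wait(lock, []{ return data.size() < data_max; });
    data.push_back(v);
    cond.notify_all();
}

int get() {
    // blocks while the buffer is empty
    std::unique_lock<std::mutex> lock(mtx);
    cond.wait(lock, []{ return !data.empty(); });
    int v = data.front();
    data.pop_front();
    cond.notify_all();
    return v;
}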

Well, ideally, if they would dare to introduce some keywords in C++0x,
a bounded buffer could be expressed something like this:

template< typename T >
monitor class BoundedBuffer {
const unsigned int m_max;
std::deque<T> b;
public:
BoundedBuffer(const int n) : m_max(n) {}
void put(const T n) {
when (b.size() < m_max) {
b.push_back(n);
}
}
T get() {
T aux;
when (!b.empty()) {
aux = b.front();
b.pop_front();
}
return aux;
}
}

However, neither `monitor' nor `when' is going to be
introduced in C++0x. The `when' would have the semantics of the
Conditional Critical Region.

Actually, something similar could be achieved with higher level
wrapper constructions that would result in program fragments like this
(I am not sure about the syntax, I did a simple transformation there):

template< typename T >
class BoundedBuffer : public Monitor {
const unsigned int m_max;
std::deque<T> b;
public:
BoundedBuffer(const int n) : m_max(n) {}
void put(const T n) {
{ When guard(b.size() < m_max);
b.push_back(n);
}
}
T get() {
T aux;
{ When guard(!b.empty());
aux = b.front();
b.pop_front();
}
return aux;
}
}

Here the super class would contain the necessary harness (mutex,
condvar), and the RAII object named `guard' could be similar to the
Boost scoped lock: it would provide locking and unlocking, but in
addition it would wait in the constructor until the predicate is
satisfied. Something like this:

class When {
...
public:
When(...) {
// lock
// cv.wait(&lock, pred);
}
~When() {
// cv.broadcast
// unlock
}
};

The question is how would you hack the classes `Monitor' and `When' so
that active objects could be combined with this kind of monitor data
structure to complete the canonical example of producers-consumers.
Forget about efficiency concerns for now.

Best Regards,
Szabolcs

Chris M. Thomasson

Oct 31, 2008, 5:02:21 PM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:8390584c-34f0-486d...@o4g2000pra.googlegroups.com...

On Oct 30, 10:39 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:

> > Any suggestions on how I can improve this construct?

> I think this construction is good enough for the moment. Now we should
> turn to enhancing the communication part for shared objects in C++0x.

Okay.

[...]


> The question is how would you hack the classes `Monitor' and `When' so
> that active objects could be combined with this kind of monitor data
> structure to complete the canonical example of producers-consumers.
> Forget about efficiency concerns for now.

Here is a heck of a hack, in the form of a fully working program, for you
to take a look at, Szabolcs:
_______________________________________________________________________


/* Simple Thread Object
______________________________________________________________*/
#include <pthread.h>


extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}


template<typename T>
struct active : public T {

struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Moniter
______________________________________________________________*/
class moniter {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
moniter() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~moniter() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
moniter& m_moniter;

lock_guard(moniter& moniter_) : m_moniter(moniter_) {
m_moniter.lock();
}

~lock_guard() {
m_moniter.unlock();
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_signal(&m_cond);
}
};


#define when_x(mp_pred, mp_line) \
lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)


/* Simple Usage Example
______________________________________________________________*/

#include <cstdio>
#include <deque>


#define PRODUCE 10000
#define BOUND 100
#define YIELD 2


template<typename T>
struct bounded_buffer : public moniter {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
signal();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();
}
return obj;
}
};


class producer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
for (unsigned i = 0; i < PRODUCE; ++i) {
m_buffer.push(i + 1);
std::printf("produced %u\n", i + 1);
if (! (i % YIELD)) { sched_yield(); }
}
}

public:
producer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};


struct consumer : public thread_base {
bounded_buffer<unsigned>& m_buffer;

void on_active() {
unsigned i;
do {
i = m_buffer.pop();
std::printf("consumed %u\n", i);
if (! (i % YIELD)) { sched_yield(); }
} while (i != PRODUCE);
}

public:
consumer(bounded_buffer<unsigned>* buffer) : m_buffer(*buffer) {}
};


int main(void) {
{
bounded_buffer<unsigned> b(BOUND);
active<producer> p(&b);
active<consumer> c(&b);
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}

_______________________________________________________________________

Please take notice of the following class, which compiles fine:


template<typename T>
struct bounded_buffer : public moniter {
unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
signal();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();
}
return obj;
}
};


Well, is that kind of what you had in mind? I make this possible by hacking
the following macro together:


#define when_x(mp_pred, mp_line) \
lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)


It works, but it's a heck of a hack! ;^D

Anyway, what do you think of this approach? I added my own "keyword"... lol.

Chris M. Thomasson

Oct 31, 2008, 5:06:20 PM
Of course, I have created a possible DEADLOCK condition! I totally forgot to
have the bounded_buffer signal/broadcast after it pops something! Here is
the FIXED version:


extern "C" void* thread_entry(void*);

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

~moniter() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
moniter& m_moniter;

lock_guard(moniter& moniter_) : m_moniter(moniter_) {
m_moniter.lock();
}

~lock_guard() {
m_moniter.unlock();
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_signal(&m_cond);
}
};

#define when(mp_pred) when_x(mp_pred, __LINE__)

broadcast();
}
}

T pop() {
T obj;
when (! m_buffer.empty()) {
obj = m_buffer.front();
m_buffer.pop_front();

broadcast();
}
return obj;
}
};


I am VERY sorry for this boneheaded mistake!!! OUCH!!!! BTW, the reason I
broadcast is so the bounded_buffer class can be used by multiple producers
and consumers.

Chris M. Thomasson

Oct 31, 2008, 5:14:51 PM
What exactly do you have in mind wrt integrating the monitor, the when
keyword and the active object? How far off is the corrected version of my
example?

http://groups.google.com/group/comp.programming.threads/msg/3e9f7980d9323edc


BTW, sorry for posting the broken version:

http://groups.google.com/group/comp.programming.threads/msg/e362e9a5d41a6ae6


I jumped the gun!

Chris M. Thomasson

Oct 31, 2008, 5:39:06 PM
Given the fixed version I posted here:

http://groups.google.com/group/comp.programming.threads/msg/3e9f7980d9323edc

one can easily create multiple producers and consumers like this:


int main(void) {
{
bounded_buffer<unsigned> b(BOUND);

active<producer> p[] = { &b, &b, &b, &b, &b, &b };
active<consumer> c[] = { &b, &b, &b, &b, &b, &b };
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}


I really do like the convenience of the `active' helper template. Anyway,
one caveat: with the way these specific consumers are coded to act on the
data, the number of producers and consumers must be equal.

Chris M. Thomasson

Oct 31, 2008, 5:58:51 PM
"Chris M. Thomasson" <n...@spam.invalid> wrote in message
news:yjKOk.682$kd5...@newsfe01.iad...

> of course I have created a possible DEADLOCK condition! I totally forgot
> to have the bounded_buffer signal/broadcast after it pops something! Here
> is the FIXED version:
>


ARGHGHGGHGHGHH!


I made two other extremely boneheaded mistakes! Notice how the
procedure `monitor::broadcast()' calls `pthread_cond_signal()'
instead of `pthread_cond_broadcast()'!!!!!!!!

Also, I misspelled Monitor!!!!


DAMNIT!!!

[...]


I am really pissed off now! GRRRR!!! Here is the current version that
actually broadcasts!!!:


extern "C" void* thread_entry(void*);

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}


/* Simple Monitor
______________________________________________________________*/
class monitor {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
monitor() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~monitor() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
monitor& m_monitor;

lock_guard(monitor& monitor_) : m_monitor(monitor_) {
m_monitor.lock();
}

~lock_guard() {
m_monitor.unlock();
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};


#define when_x(mp_pred, mp_line) \
monitor::lock_guard guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when(mp_pred) when_x(mp_pred, __LINE__)


/* Simple Usage Example
______________________________________________________________*/
#include <cstdio>
#include <deque>


#define PRODUCE 100000
#define BOUND 1000
#define YIELD 4


template<typename T>
struct bounded_buffer : private monitor {
// [same as the fixed version above...]
};

// [producer and consumer classes same as before...]

int main(void) {
{
bounded_buffer<unsigned> b(BOUND);

active<producer> p[] = { &b, &b, &b, &b, &b, &b };
active<consumer> c[] = { &b, &b, &b, &b, &b, &b };
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");


std::fflush(stdout);
std::getchar();
return 0;
}


SHI%!

;^(.......

Szabolcs Ferenczi

Oct 31, 2008, 6:09:16 PM
On Oct 31, 10:14 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> What exactly do you have in mind wrt integrating monitor, when keyword and
> active object?

I did not mean integrating all three, but it is an interesting
idea too.

I meant that if we have high-level wrappers for easy coding of the
monitor construction, it is possible to put together applications
where there are active objects and passive ones around for shared data
communication. One such example is the producers-consumers example.

> How far off the corrected version of my example?

It is promising; better than I expected. However, the `broadcast();'
should also be included in the RAII object (the `when' object).

I did not think of using the preprocessor for the task, but rather some
pure C++ construction. That is why I thought that the RAII object
would not be just a simple `lock_guard' but something like a
`when_guard'. Then one does not have to explicitly place the
`broadcast();', and the `while (! (mp_pred)) this->wait();' can be part
of the constructor of the `when_guard'.

On the other hand, the preprocessor allows one to keep the more natural
syntax.

Best Regards,
Szabolcs

Chris M. Thomasson

Oct 31, 2008, 8:03:44 PM

"Szabolcs Ferenczi" <szabolcs...@gmail.com> wrote in message
news:980d8b20-a9bb-4912...@r15g2000prh.googlegroups.com...

On Oct 31, 10:14 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > What exactly do you have in mind wrt integrating monitor, when keyword
> > and
> > active object?

> I did not meant integrating all the three but it is an interesting
> idea too.

> I meant that if we have high level wrappers for easy coding of the
> monitor construction, it is possible to put together applications
> where there are active objects and passive ones for shared data
> communication around. One such an example is the producers and
> consumers example.

> > How far off the corrected version of my example?

> It is promising. Better than I expected, however, the `broadcast();'
> should be also included into the RAII (the `when' object).

Okay. Also, there was a little problem in the when macros. I need another
level of indirection in order to properly expand the __LINE__ macro. Here is
the new version:
________________________________________________________________


class monitor {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
monitor() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~monitor() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
monitor& m_monitor;

lock_guard(monitor& monitor_) : m_monitor(monitor_) {
m_monitor.lock();
}

~lock_guard() {
m_monitor.unlock();
}
};

struct signal_guard {
monitor& m_monitor;
bool const m_broadcast;

signal_guard(monitor& monitor_, bool broadcast = true)
: m_monitor(monitor_), m_broadcast(broadcast) {

}

~signal_guard() {
if (m_broadcast) {
m_monitor.broadcast();
} else {
m_monitor.signal();
}
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};

#define when_xx(mp_pred, mp_line) \
monitor::lock_guard lock_guard_##mp_line(*this); \
monitor::signal_guard signal_guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
#define when(mp_pred) when_x(mp_pred, __LINE__)
________________________________________________________________


Now it properly expands __LINE__ and it also automatically broadcasts.


> I did not think of using the preprocessor for the task but rather some
> pure C++ construction. That is why I thought that the RAII object
> could not only be a simple `lock_guard' but something like a
> `when_guard'. Then one does not have to explicitly place the
> `broadcast();' and the `while (! (mp_pred)) this->wait();' can be part
> of the constructor of the `when_guard'.

> On the other hand, the preprocessor allows to keep the more natural
> syntax.

There is a problem using the preprocessor hack; as-is the following code
will deadlock:


struct object : monitor {
int m_state; // = 0

void foo() {
when (m_state == 1) {
// [...];
}

when (m_state == 2) {
// [...];
}
}
};


This is because the local objects created by the first call to `when' will
still be around during the second call; the solution is easy:


struct object : monitor {
int m_state; // = 0

void foo() {
{ when (m_state == 1) {
// [...];
}
}


{ when (m_state == 2) {
// [...];
}
}
}
};


but it looks a bit awkward. Speaking of awkward, the version using pure C++
and no preprocessor would be kind of painful to code for. You would
basically need to wrap up all the predicates in separate functions (and bind
them to `this', e.g. with boost::bind). Let's see here... It would be
something like this:


class when {
monitor& m_monitor;

public:
template<typename P>
when(monitor* monitor_, P pred) : m_monitor(*monitor_) {
m_monitor.lock();
while (! pred()) m_monitor.wait();
}

~when() {
m_monitor.broadcast();
m_monitor.unlock();
}
};


template<typename T>
struct buffer : monitor {


unsigned const m_max;
std::deque<T> m_buffer;

bool pred_push() const {
return m_buffer.size() < m_max;
}

bool pred_pop() const {
return ! m_buffer.empty();
}

public:
buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when guard(this, boost::bind(&buffer::pred_push, this));
m_buffer.push_back(obj);
}

T pop() {
when guard(this, boost::bind(&buffer::pred_pop, this));
T obj = m_buffer.front();
m_buffer.pop_front();
return obj;
}
};


This would work, but it forces you to create a special function
per predicate. Also, it suffers from the same problem the macro version
does: two `when' guard objects residing in the same scope will deadlock...


Hmm, personally, I kind of like the macro version better. It seems cleaner
for some reason, and it allows one to use a full expression for the
predicate instead of a special function. What would be neat is if I could
use an expression as a template parameter and have it be treated as if it
were a function. The preprocessor makes this easy...

Chris M. Thomasson

Nov 1, 2008, 9:01:19 PM
[added: comp.lang.c++

here is a link to Ulrich Eckhardt's full post, because I snipped some of it:

http://groups.google.com/group/comp.programming.threads/browse_frm/thread/44190e3b9ac81a69

]

"Ulrich Eckhardt" <doom...@knuut.de> wrote in message
news:6n37ijF...@mid.uni-berlin.de...
> Chris M. Thomasson wrote:
> [C++ thread baseclass with virtual run() function]
>
> Just one technical thing about the code: you use reinterpret_cast in a
> place that actually calls for a static_cast. A static_cast is the right
> tool to undo the implicit conversion from T* to void*.


>
>> I personally like this technique better than Boost. I find it more
>> straight forward and perhaps more object oriented, the RAII nature of the
>> `active' helper class does not hurt either. Also, I really do think its
>> more "efficient" than Boost in the way it creates threads because it does
>> not copy anything...
>

> There are two things that strike me here:
> 1. You mention "object oriented" as if that was a goal, but it isn't.
> Rather, it is a means to achieve something, and the question is always
> valid whether its use is justified. Java's decision to force an OO design
> on you and then inviting other paradigms back in through the backdoor is
> the prime example for misunderstood OO. Wrapping a thread into a class the
> way you do it is another, IMHO; I'll explain below.
> 2. What exactly is the problem with the copying? I mean you're starting a
> thread, which isn't actually a cheap operation either. Further, if you
> want, you can optimise that by using transfer of ownership (auto_ptr) or
> shared ownership (shared_ptr) in case you need. Boost doesn't make it
> necessary to copy anything either (except under the hood it does some
> dynamic allocation), but also allows you to choose if you want. However,
> copying and thus avoiding shared data is the safer default, because any
> shared data access requires care.

[...]

I find it unfortunate to be forced to use smart pointers and dynamically
created objects just to be able to pass common shared data to a thread. I am
still not convinced that treating a thread as an object is a bad thing...
For instance...


How would I be able to create the following fully compilable program (please
refer to the section of code under the "Simple Example" heading; the rest is
simple implementation detail for the pthread abstraction) using Boost
threads? The easy way to find the example program is to go to the end of the
entire program and move up until you hit the "Simple Example" comment...
Here it is:
_____________________________________________________________________

/* Simple Thread Object
______________________________________________________________*/
#include <pthread.h>


extern "C" void* thread_entry(void*);

class thread_base {
pthread_t m_tid;
friend void* thread_entry(void*);
virtual void on_active() = 0;

public:
virtual ~thread_base() = 0;

void active_run() {
pthread_create(&m_tid, NULL, thread_entry, this);
}

void active_join() {
pthread_join(m_tid, NULL);
}
};

thread_base::~thread_base() {}

void* thread_entry(void* state) {
reinterpret_cast<thread_base*>(state)->on_active();
return 0;
}


template<typename T>
struct active : public T {

struct guard {
T& m_object;

guard(T& object) : m_object(object) {
m_object.active_run();
}

~guard() {
m_object.active_join();
}
};

active() : T() {
this->active_run();
}

~active() {
this->active_join();
}

template<typename T_p1>
active(T_p1 p1) : T(p1) {
this->active_run();
}

template<typename T_p1, typename T_p2>
active(T_p1 p1, T_p2 p2) : T(p1, p2) {
this->active_run();
}

template<typename T_p1, typename T_p2, typename T_p3>
active(T_p1 p1, T_p2 p2, T_p3 p3) : T(p1, p2, p3) {
this->active_run();
}

// [and on and on for more params...]
};


/* Simple Monitor
______________________________________________________________*/
class monitor {
pthread_mutex_t m_mutex;
pthread_cond_t m_cond;

public:
monitor() {
pthread_mutex_init(&m_mutex, NULL);
pthread_cond_init(&m_cond, NULL);
}

~monitor() {
pthread_cond_destroy(&m_cond);
pthread_mutex_destroy(&m_mutex);
}

struct lock_guard {
monitor& m_monitor;

lock_guard(monitor& monitor_) : m_monitor(monitor_) {
m_monitor.lock();
}

~lock_guard() {
m_monitor.unlock();
}
};

struct signal_guard {
monitor& m_monitor;
bool const m_broadcast;

signal_guard(monitor& monitor_, bool broadcast = true)
: m_monitor(monitor_), m_broadcast(broadcast) {
}

~signal_guard() {
if (m_broadcast) {
m_monitor.broadcast();
} else {
m_monitor.signal();
}
}
};

void lock() {
pthread_mutex_lock(&m_mutex);
}

void unlock() {
pthread_mutex_unlock(&m_mutex);
}

void wait() {
pthread_cond_wait(&m_cond, &m_mutex);
}

void signal() {
pthread_cond_signal(&m_cond);
}

void broadcast() {
pthread_cond_broadcast(&m_cond);
}
};


#define when_xx(mp_pred, mp_line) \
monitor::lock_guard lock_guard_##mp_line(*this); \
monitor::signal_guard signal_guard_##mp_line(*this); \
while (! (mp_pred)) this->wait();

#define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
#define when(mp_pred) when_x(mp_pred, __LINE__)

/* Simple Example
______________________________________________________________*/
#include <string>
#include <deque>
#include <cstdio>


template<typename T>
struct bounded_buffer : monitor {


unsigned const m_max;
std::deque<T> m_buffer;

public:
bounded_buffer(unsigned const max_) : m_max(max_) {}

void push(T const& obj) {
when (m_buffer.size() < m_max) {
m_buffer.push_back(obj);
}
}

T pop() {
when (! m_buffer.empty()) {


T obj = m_buffer.front();
m_buffer.pop_front();
return obj;
}
}
};


struct person : thread_base {
typedef bounded_buffer<std::string> queue;
std::string const m_name;
queue& m_response;

public:
queue m_request;

void on_active() {
m_response.push(m_name + " is ready to receive some questions!");
for (unsigned i = 0 ;; ++i) {
std::string msg(m_request.pop());
if (msg == "QUIT") { break; }
std::printf("(Q)->%s: %s\n", m_name.c_str(), msg.c_str());
switch (i) {
case 0:
msg = "(A)->" + m_name + ": Well, I am okay";
break;

case 1:
msg = "(A)->" + m_name + ": I already told you!";
break;

default:
msg = "(A)->" + m_name + ": I am PISSED OFF NOW!";
}
m_response.push(msg);
}
std::printf("%s was asked to quit...\n", m_name.c_str());
m_response.push(m_name + " is FINISHED");
}

person(std::string const& name, queue* q, unsigned const bound)
: m_name(name), m_response(*q), m_request(bound) {}
};


#define BOUND 10


int main(void) {
{
person::queue response(BOUND);

active<person> chris("Chris", &response, BOUND);
active<person> amy("Amy", &response, BOUND);

std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("How are you doing?");
amy.m_request.push("How are you feeling?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("Do you really feel that way?");
amy.m_request.push("Are you sure?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("Why do you feel that way?");
amy.m_request.push("Can you share more of you feelings?");
std::printf("%s\n", response.pop().c_str());
std::printf("%s\n\n", response.pop().c_str());

chris.m_request.push("QUIT");
amy.m_request.push("QUIT");

std::printf("%s\n", response.pop().c_str());
std::printf("%s\n", response.pop().c_str());
}

std::puts("\n\n\n__________________\nhit <ENTER> to exit...");
std::fflush(stdout);
std::getchar();
return 0;
}

_____________________________________________________________________


Please correct me if I am wrong, but Boost would force me to dynamically
create the `person::queue request' object in main, right? AFAICT, this
example shows why it can be a good idea to treat a thread as an object. In
this case, a person object is a thread. Anyway, as of now, I am not entirely
convinced that Boost has a far superior method of creating threads...


Anyway, I really do need to think about the rest of your post; you raise
several interesting issues indeed.

Chris M. Thomasson

Nov 1, 2008, 11:02:53 PM

"Chris M. Thomasson" <n...@spam.invalid> wrote in message
news:XR6Pk.3776$7o4....@newsfe01.iad...
[...]

> "Ulrich Eckhardt" <doom...@knuut.de> wrote in message
> news:6n37ijF...@mid.uni-berlin.de...
>> Chris M. Thomasson wrote:
[...]>
> If find it unfortunate to be forced to use an smart pointers and
> dynamically created objects just to be able to pass common shared data to
> a thread. I am still not convinced that treating a thread as an object is
> a bad thing... For instance...
>
>
> How would I be able to create the following fully compliable program
> (please refer to the section of code under the "Simple Example" heading,
> the rest is simple impl detail for pthread abstraction) using Boost
> threads. The easy way to find the example program is to go to the end of
> the entire program, and start moving up until you hit the "Simple Example"
> comment... Here it is:
> _____________________________________________________________________
[...]

> _____________________________________________________________________
>
>
>
>
> Please correct me if I am wrong, but Boost would force me to dynamically
> create the `person::queue request' object in main right? AFAICT, this
> example shows why is can be a good idea to treat a thread as an object. In
> this case, a person object is a thread. Anyway, as of now, I am not
> entirely convinced that Boost has a far superior method of creating
> threads...
[...]

I should have made the characters `Chris' and `Amy' completely separate
objects deriving from a `person' base-class. That way, their personalities,
and therefore their responses, would not be identical. Treating threads as
objects works very well for me personally...

Ulrich Eckhardt

Nov 6, 2008, 5:24:46 PM
Responding late, it was a rough week...

Chris M. Thomasson wrote:
> "Ulrich Eckhardt" <doom...@knuut.de> wrote in message
> news:6n37ijF...@mid.uni-berlin.de...
>> Chris M. Thomasson wrote:
>> [C++ thread baseclass with virtual run() function]
>>

If I get your code right, this program is creating two threads, each of
which models a person's behaviour. The main thread then asks them questions
which are stored in a queue to be handled asynchronously and prints the
answers which are received from another queue. Is that right?

Now, how would I rather design this? Firstly, I would define the behaviour
of the persons in a class. This class would be completely ignorant of any
threading going on in the background and only model the behaviour.

Then, I would create a type that combines a queue with a condition and a
mutex. This could then be used to communicate between threads in a safe
way. This would be pretty much the same as your version, both
pthread_cond_t and pthread_mutex_t translate easily to boost::mutex and
boost::condition.
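
A minimal sketch of such a queue type (assumed details, just to illustrate;
it matches the push()/pop() interface used in the code below), built from
boost::mutex and boost::condition:

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <deque>
#include <string>

class queue
{
public:
    void push(std::string const& s)
    {
        boost::mutex::scoped_lock lock(mtx);
        items.push_back(s);
        cnd.notify_one();
    }
    std::string pop()
    {
        boost::mutex::scoped_lock lock(mtx);
        while(items.empty())
            cnd.wait(lock);        // releases the lock while waiting
        std::string s = items.front();
        items.pop_front();
        return s;
    }
private:
    boost::mutex mtx;
    boost::condition cnd;
    std::deque<std::string> items;
};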


Now, things get a bit more complicated, because first some questions need to
be answered. The first one is what the code should do in case of failures.
What happens if one person object fails to answer a question? What if
answering takes too long? What if the thread where the answers are
generated is terminated due to some error? What if the invoking thread
fails to queue a request, e.g. due to running out of memory?

The second question to answer is what kind of communication you are actually
modelling here. In the example, it seems as if you were making a request
and then receiving the response to that request, but that isn't actually
completely true. Rather, the code is sending a message and receiving a
message, but there is no correlation between the sent and received message.
If this correlation is required, I would actually return a cookie when I
make a request and retrieve the answer via that cookie.
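
For example, a rough sketch of that cookie idea (hypothetical names, nothing
more than an illustration): each request gets a ticket, and the answer is
later retrieved by that ticket, so sent and received messages stay
correlated.

#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <map>
#include <string>

class answer_board
{
public:
    answer_board(): next_ticket(0) {}
    // requesting side: obtain a fresh cookie
    unsigned make_ticket()
    {
        boost::mutex::scoped_lock lock(mtx);
        return next_ticket++;
    }
    // answering thread: publish the answer for a given cookie
    void post(unsigned ticket, std::string const& answer)
    {
        boost::mutex::scoped_lock lock(mtx);
        answers[ticket] = answer;
        cnd.notify_all();
    }
    // requesting side: block until *this* ticket has been answered
    std::string wait_for(unsigned ticket)
    {
        boost::mutex::scoped_lock lock(mtx);
        while(answers.find(ticket) == answers.end())
            cnd.wait(lock);
        std::string answer = answers[ticket];
        answers.erase(ticket);
        return answer;
    }
private:
    boost::mutex mtx;
    boost::condition cnd;
    std::map<unsigned, std::string> answers;
    unsigned next_ticket;
};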

In any case, you can write an equivalent program using Boost.Thread. If you
want, you can wrap stuff into a class, e.g. like this:

struct active_person
{
explicit active_person(queue& out_):
out(out_),
th( bind( &active_person::handle_requests, this))
{}
~active_person()
{ th.join(); }
void push_question( std::string const& str)
{ in.push(str); }
std::string pop_answer()
{ return out.pop(); }
private:
void handle_requests()
{
while(true)
{
std::string question = in.pop();
if(question=="QUIT")
return;
out.push(p.ask(question));
}
}
queue in;
queue& out;
person p;
// note: this one must come last, because its initialisation
// starts a thread using this object.
boost::thread th;
};

You could also write a simple function:

void async_quiz( person& p, queue& questions, queue& answers)
{
while(true)
{
std::string question = questions.pop();
if(question=="QUIT")
return;
answers.push(p.ask(question));
}
}

queue questions, answers;
person amy("amy");
thread th(bind( &async_quiz, ref(amy),
ref(questions), ref(answers)));
.... // ask questions
th.join();

Note the use of ref() to avoid the default copying of the argument. Using
pointers would work, too, but I find it ugly.

In any case, this is probably not something I would write that way. The
point is that I cannot force a thread to shut down or handle a request. I
can ask it to and maybe wait for it (possibly infinitely), but I cannot
force it. So, if there is any chance of failure, either in the request
handling or in the request-handling mechanism in general, this approach
becomes unusable. Therefore, I would rather prepare for the case that a
thread becomes unresponsive by allowing it to run detached from the local
stack.

Writing about that, there is one thing that came somehow as a revelation to
me, and that was the Erlang programming language. It has concurrency built
in and its threads (called processes) communicate using messages. Using a
similar approach in C++ allows writing very clean programs that don't
suffer lock contention. In any case, I suggest taking a look at Erlang just
for the paradigms; I found some really good ideas to steal from there. ;)


> #define when_xx(mp_pred, mp_line) \
> monitor::lock_guard lock_guard_##mp_line(*this); \
> monitor::signal_guard signal_guard_##mp_line(*this); \
> while (! (mp_pred)) this->wait();
>
> #define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
> #define when(mp_pred) when_x(mp_pred, __LINE__)

Just one note on this one: When I read your example, I stumbled over the use
of a 'when' keyword where I would expect an 'if'. I find this really
bad C++ for several reasons:
1. Macros should be UPPERCASE_ONLY. That way, people see that it's a macro
and they know that it may or may not behave like a function. It simply
avoids surprises.
2. It is used in a way that breaks its integration into the control flow
syntax. Just imagine that I would use it in the context of an 'if'
expression:

if(something)
when(some_condition)
do_something_else();

I'd rather write it like this:

if(something) {
LOCK_AND_WAIT_CONDITION(some_condition);
do_something_else();
}

Firstly, you see that these are separate statements and this then
automatically leads to you adding the necessary curly braces.

> Please correct me if I am wrong, but Boost would force me to dynamically
> create the `person::queue request' object in main right?

No. The argument when starting a thread is something that can be called,
like a function or functor, that's all. This thing is then copied before it
is used by the newly started thread. Typically, this thing is a function
bound to its context arguments using Boost.Bind or Lambda. If you want, you
can make this a simple structure containing a few pointers or references,
i.e. something dead easy to copy.
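
For instance, a small sketch of such a dead-easy-to-copy structure (assumed,
reusing the hypothetical person/queue types from the sketches above):

#include <string>
#include <boost/thread.hpp>

// Holds only references, so copying it when the thread starts is trivial;
// no shared data is copied.
struct quiz_task
{
    person& p;
    queue& questions;
    queue& answers;

    quiz_task(person& p_, queue& q_, queue& a_):
        p(p_), questions(q_), answers(a_)
    {}

    void operator()()   // boost::thread copies *this, then calls it
    {
        while(true)
        {
            std::string question = questions.pop();
            if(question=="QUIT")
                return;
            answers.push(p.ask(question));
        }
    }
};

// usage, analogous to the bind() version:
//   queue questions, answers;
//   person amy("amy");
//   boost::thread th(quiz_task(amy, questions, answers));
//   ... // ask questions
//   th.join();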

cheers

Uli

Szabolcs Ferenczi

Nov 6, 2008, 8:49:09 PM
On Nov 6, 11:24 pm, Ulrich Eckhardt <dooms...@knuut.de> wrote:
> Responding late, it was a rough week...
>
> Chris M. Thomasson wrote:
> > "Ulrich Eckhardt" <dooms...@knuut.de> wrote in message

> >news:6n37ijF...@mid.uni-berlin.de...
> >> Chris M. Thomasson wrote:
[...]

> > #define when_xx(mp_pred, mp_line) \
> >   monitor::lock_guard lock_guard_##mp_line(*this); \
> >   monitor::signal_guard signal_guard_##mp_line(*this); \
> >   while (! (mp_pred)) this->wait();
>
> > #define when_x(mp_pred, mp_line) when_xx(mp_pred, mp_line)
> > #define when(mp_pred) when_x(mp_pred, __LINE__)
>
> Just one note on this one: When I read your example, I stumbled over the use
> of a 'when' keyword where I would expect an 'if'.

The `when' is basically different from the `if'.

Since I came up with this construction, I might as well give you an
answer. The `when' is not a genuine keyword, since this is still C++ code
and `when' is not a C++ keyword. However, here we are experimenting with
whether a programming style that mimics the original Conditional Critical
Region proposal, where `when' is a keyword, could be applied in a C++
program.

The `if' is a sequential control statement whereas the `when' is not
one. The semantics of the `if' and the `when' differ in that the `if'
fails when the condition does not hold at the time it is evaluated, and
the statement takes the `else' branch if one is given. The `when'
keyword, on the other hand, delays the thread until the condition
becomes true. In other words, the `when' specifies a guard.
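
To make the difference concrete, this is roughly what the macro version of
`when' from earlier in this thread expands to inside a monitor-derived
member function (a sketch reusing the `monitor' and `bounded_buffer' members
defined earlier; 42 stands in for the __LINE__ value), contrasted with a
plain `if':

void push_with_when(int obj)      // hypothetical member of bounded_buffer
{
    monitor::lock_guard lock_guard_42(*this);     // lock held for the whole scope
    monitor::signal_guard signal_guard_42(*this); // broadcasts when the scope exits
    while (! (m_buffer.size() < m_max))           // *delays* until the guard holds
        this->wait();
    m_buffer.push_back(obj);
}

void push_with_if(int obj)
{
    monitor::lock_guard guard(*this);
    if (m_buffer.size() < m_max)                  // merely *tests*: falls through
        m_buffer.push_back(obj);                  // and would drop obj when full
}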

> I find this here really
> bad C++ for several reasons:
> 1. Macros should be UPPERCASE_ONLY. That way, people see that it's a macro
> and they know that it may or may not behave like a function. It simply
> avoids surprises.

It is not necessarily meant to be a macro, the macro is just one
possible implementation of the construction.

> 2. It is used in a way that breaks its integration into the control flow
> syntax. Just imagine that I would use it in the context of an 'if'
> expression:
>
>   if(something)
>     when(some_condition)
>       do_something_else();
>
> I'd rather write it like this:
>
>   if(something) {
>     LOCK_AND_WAIT_CONDITION(some_condition);
>     do_something_else();
>   }
>
> Firstly, you see that these are separate statements and this then
> automatically leads to you adding the necessary curly braces.

Please check out the post where the `when' construction is introduced
into this discussion thread:

http://groups.google.com/group/comp.lang.c++/msg/cb476c1c7d91c008

If you have an idea how to solve the original problem I described,
your solution is welcome.

Best Regards,
Szabolcs

Chris M. Thomasson

Nov 9, 2008, 7:12:17 AM

"Ulrich Eckhardt" <doom...@knuut.de> wrote in message
news:6nh95gF...@mid.uni-berlin.de...

> Responding late, it was a rough week...
[...]

> In any case, you can write an equivalent program using Boost.Thread. If
> you
> want, you can wrap stuff into a class, e.g. like this:
>
> struct active_person
> {
> explicit active_person(queue& out_):
> out(out_),
> th( bind( &handle_requests, this))
> {}
> ~active_person()
> { th.join(); }
> void push_question( std::string const& str)
> { in.push(str); }
> std::string pop_answer()
> { return out.pop(); }
> private:
> void handle_requests()
> {
> while(true)
> {
> std::string question = in.pop();
> if(question=="QUIT")
> return;
> out.push(p.ask(question));
> }

out.push("QUIT");


> }
> queue in;
> queue& out;
> person p;
> // note: this one must come last, because its initialisation
> // starts a thread using this object.
> boost::thread th;
> };

[...]


I would create the active template using Boost like this:

// quick sketch - may have typo
_______________________________________________________________________
template<typename T>
class active : public T {
boost::thread m_active_handle;

public:
active() : T(), m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

~active() { m_active_handle.join(); }

template<typename P1>
active(P1 p1) : T(p1),
m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

template<typename P1, typename P2>
active(P1 p1, P2 p2) : T(p1, p2),
m_active_handle(boost::bind(&T::on_active, (T*)this)) {}

// [on and on for more params...]
};
_______________________________________________________________________


Then I could use it like:


struct person {
void on_active() {
// [...]
}
};


int main() {
active<person> p[10];
return 0;
}

I personally like this construction; I think it's "cleaner" than using the
Boost interface "directly".
