Here is some sample source code, and after it my question:
void *ThreadFunc( void *bla )
{
    int threadID = *( (int *) bla );
    cTestClass Test( threadID );

    pthread_setcancelstate( PTHREAD_CANCEL_ENABLE, 0 );
    pthread_setcanceltype( PTHREAD_CANCEL_ASYNCHRONOUS, 0 );

    while ( 1 )
    {
        usleep( 10000 );
        Test.SayHello();
    }

    return 0;
}
Shouldn't the cTestClass (object Test) destructor execute when I pthread_cancel
the running thread - is it not allocated on the stack of the running thread?
What happens to the variables allocated on the stack of the (p)thread?
I tried dynamically allocating the cTestClass object and installing a
pthread_cleanup_push/pop handler in which I destroy the allocated
cTestClass object, and that works just fine. But what about the variables
allocated on the stack of ThreadFunc - are they removed (is the memory they
occupy deallocated) when I pthread_cancel the running thread?
Thanks.
Mario Zagar
cyber AT fly . srk . fer . hr
The standards for pthreads and C++ exist in their own worlds and make no
mention of each other, so there is no standard way (correct me if I'm
wrong here).
I would think a well behaved C++ runtime would call the object
destructors (Solaris will) as they are going out of scope when the
thread is cancelled.
--
Ian Collins
Masuma Ltd,
Christchurch
New Zealand.
That's probably wrong [unless portability does NOT matter to you].
This should be a "C" routine...if you pass it to pthread_create().
> {
> int threadID = *( (int *) bla );
> cTestClass Test( threadID );
>
> pthread_setcancelstate( PTHREAD_CANCEL_ENABLE, 0 );
That's redundant/unneeded. Also, passing NULL pointer is NOT
correct either above or below.
> pthread_setcanceltype( PTHREAD_CANCEL_ASYNCHRONOUS, 0 );
Read up on async-cancel-safety. Heck, there is nothing mysterious
in it and it is quite simple [though language-enforced sanity
of async-cancel-regions would definitely help here, of course].
> while ( 1 )
> {
> usleep( 10000 );
> Test.SayHello();
> }
>
> return 0;
> }
>
> Shouldn't the cTestClass (object Test) destructor execute when I pthread_cancel
> the running thread - is it not allocated on the stack of the running thread?
Yes, but only if your Pthreads impl supports C++ stack-unwinding on
thread cancel and exit. Fire a search on this topic and you'll surely
find quite a lot of various stuff/info...
regards,
alexander.
No, the destructor should not execute. The destructor executes when the
function returns, which it never does.
> I tried to dynamicly allocate the cTestClass object, install a
> pthread_cleanup_push/pop handler in which I destroy the allocated
> cTestClass object, and that works just fine?
Of course, because your handler performs the deletion. By the way, are
you *sure* this is safe? What if the cancellation occurs inside
'Test.SayHello()'? Is your object reentrant enough that the destructor
can run during a member function?
Here's a trivial example to show the problem:
class MyClass
{
    char *ptr;
public:
    MyClass() : ptr(NULL) { }
    ~MyClass() { if (ptr != NULL) free(ptr); }
    void Replace(const char *foo)
    {
        if (ptr != NULL) free(ptr);
        ptr = strdup(foo);
    }
};
Now imagine if the thread is halfway through 'Replace' when it's
cancelled. Now the destructor runs as a cleanup handler. Oh oh, 'ptr' is
not 'NULL', so we free it, but it was already freed in 'Replace'. Boom!
> But what about the variables
> allocated on stack of ThreadFunc - are they removed (is memory they occupy
> deallocated) when I pthread_cancel the running thread?
Yes and no. The memory the object itself occupies is removed because
the stack no longer exists when the thread goes away; however, any
memory that would have been freed in the destructor is not freed because
the destructor is not executed.
My suggestion to you is simply not to use asynchronous cancellation.
99.9995% of the time, there are other (and better) ways to achieve the
same objective.
DS
Read the C++ standard, David. Hint:
"....
The function signature longjmp(jmp_buf jbuf, int val) has more restricted
behavior in this International Standard. If any automatic objects would be
destroyed by a thrown exception transferring control to another (destination)
point in the program, then a call to longjmp( jbuf, val) at the throw point
that transfers control to the same (destination) point has undefined behavior.
...."
regards,
alexander.
OK... I guess according to the C++ standard the behavior is 'undefined', but
I understand what you're saying.
> Of course, because your handler performs the deletion. By the way, are
>you *sure* this is safe? What if the cancellation occurs inside
>'Test.SayHello()'? Is your object reentrant enough that the destructor
>can run during a member function?
Yes, my object is thread safe; this was just the simplest example for
my question about stack variable behavior... again, I see your point and
it's under control.
> Yes and no. The memory the object itself occupies is removed because
>the stack no longer exists when the thread goes away; however, any
>memory that would have been freed in the destructor is not freed because
>the destructor is not executed.
That was my conclusion also (of course there still remains the 'undefined
behavior' of the C++ standard :)
> My suggestion to you is simply not to use asynchronous cancellation.
>99.9995% of the time, there are other (and better) ways to achieve the
>same objective.
>
Hm, such as?
You see, my problem is specific in that I run a 'system()' command
within an object in my thread, but I must be able to kill this thread at
_any_ time (which can also occur while in system()), so I think async
cancellation works best for me (I guess I'm in those 0.0005% :))).
I'm open to suggestions if anyone has a better solution to this problem.
Regards,
Mario
C++ standard w.r.t. Pthreads aside ['legal' sense], David is WRONG. It's really
GOOD that you understand it. ;-)
[...]
> Yes, my object is thread safe, ....
thread-safe != async-cancel-safe.
regards,
alexander.
'Normally' the destructor _should_ execute when the function returns (normally
as in no threads of any kind in the program), or am I wrong?
I think that's what David meant.
>> Yes, my object is thread safe, ....
>
>thread-safe != async-cancel-safe.
>
Agreed, I should've said async-cancel-safe (meaning I cannot cancel my thread
at a potentially bad time, considering my thread safety :)
Regards,
Mario
I meant:
http://groups.google.com/groups?selm=3C2CC94A.503FB8E2%40web.de
http://groups.google.com/groups?selm=3CBC1A30.EF89B4BF%40web.de
> >> Yes, my object is thread safe, ....
> >
> >thread-safe != async-cancel-safe.
> >
>
> Agreed, should've said async-cancel-safe
async-cancel-safe with nontrivial destructor[1] (.SayHello()
with probably some stream output aside for a moment) >>?!?<<
Don't think so. Anyway, usleep() ISN'T async-cancel-safe,
for sure. ;-)
regards,
alexander.
[1] "A destructor is trivial if it is an implicitly-declared
destructor and if:
— all of the direct base classes of its class have trivial
destructors and
— for all of the nonstatic data members of its class that
are of class type (or array thereof), each such class
has a trivial destructor.
Otherwise, the destructor is nontrivial."
> My suggestion to you is simply not to use asynchronous
> cancellation. 99.9995% of the time, there are other (and better)
> ways to achieve the same objective.
Yeah? How so?
The way I used to do it in Win32 was to have a global KillEvent, which
each thread would 'poll' during its thread loop, and/or use during
any WaitForxxxObject/Event call. If set, the thread exits on its own
after cleaning stuff up.
Since my move to Linux, I've seen people do pthread_cancel() instead
of the global KillEvent method.
I suppose in the Linux world, one could have a global file descriptor
which all select() statements add to their read_set, and which is
written to if the program is to shut down. Alongside this, a global
mutex-protected killEvent variable could be checked. And with this,
a SIGUSR signal can be sent to every thread to bring it out of a
blocking read. I haven't tested all of this; it's just an idea (and
maybe a bad one).
Besides the above techniques and pthread_cancel (which doesn't
allow a thread to exit cleanly [i.e., clean up after itself without a
sloppy cleanup function]), what are the better ways to achieve the
same objective?
--
Steve
> Besides the above techniques and the pthread_cancel (which doesn't
> allow a thread to exit cleanly [ie. clean up after itself without a
> sloppy cleanup function]), what are the better ways to achieve the
> same objective?
Simply design so this is a non-issue. Don't tie tasks strongly to
threads, so you never need to get rid of a thread, just a task. You have
complete control over what tasks are, when threads pick them up, and
when threads put them down. A desire to cancel a thread results from a
poor design decision of failing to abstract tasks independently from
threads.
DS
Anything related to a waitfor-type call has nothing to do with
asynchronous cancellation. This is a special cancellation mode which
is intended for bits of code where polling a global variable would
introduce an unacceptable performance hit, and which comes with
a bucketful of restrictions.
[...]
% Besides the above techniques and the pthread_cancel (which doesn't
% allow a thread to exit cleanly [ie. clean up after itself without a
% sloppy cleanup function]), what are the better ways to achieve the
% same objective?
I can't agree that pthread_cancel doesn't allow a thread to exit cleanly.
If you want the thread to clean up after itself, don't allocate any
resources, or only allow cancellation when the thread isn't using
those resources.
Anyway, there isn't a one-size-fits-all solution to this problem, on
any platform. How do you get rid of threads created by third-party
code that you've called? Not by setting a shut-down flag. My favourite
way of telling background threads to go away is to put a `go away'
message on the message queue they're processing. You can also make
`go away' a special property of the queue, and have each thread check
that when it's getting a message, i.e.,
forever:
    pthread_mutex_lock(&q->mux);
    while (!q->go_away && !q->msg)
        pthread_cond_wait(&q->cond, &q->mux);
    if (q->go_away) {
        pthread_mutex_unlock(&q->mux);
        /* go away processing here */
    }
    else {
        mymsg = q->msg;
        q->msg = q->msg->N;
        pthread_mutex_unlock(&q->mux);
        /* message processing code here */
    }
    goto forever;
which is shut down by:
pthread_mutex_lock(&q->mux);
q->go_away = 1;
pthread_mutex_unlock(&q->mux);
pthread_cond_broadcast(&q->cond);
--
Patrick TJ McPhee
East York Canada
pt...@interlog.com
The function may not return, but the object still goes out of scope.
Ian
Dead accurate. Anything else is simply brain-dead [unless you invoke
PROCESS termination instead of THREAD termination -- here it would
make sense to invoke SPECIFIC cleanup; atexit(); C++-terminate()
handlers, etc. and just silently get rid of everything else, IMHO].
regards,
alexander.
Patrick,
Shouldn't the broadcast be inside the mutex lock? As follows:
pthread_mutex_lock(&q->mux);
q->go_away = 1;
pthread_cond_broadcast(&q->cond);
pthread_mutex_unlock(&q->mux);
???
--
Steve
Why? If you do that, the other thread may wake up (due to the
broadcast) and then have to go right back to sleep (trying to acquire
the mutex). It's much more sensible to release the lock before
broadcasting, don't you think?
DS
Why ???
Well, AFAICT 'predictable scheduling' and a few 'special' cases
aside, no; it shouldn't. An example with slightly different 'policy'
[for consumers], but similar intent -- shutting down a bunch of
producers and consumers via 'go_away' flag and signaling/broadcasting
[while NOT holding the mutex] can be found here:
http://groups.google.com/groups?selm=3BA067DC.2527AEF1%40web.de
("Sleep( 1000 );" is Win32 -- please don't run it on some UNIX ;-))
regards,
alexander.
--
You Could Have Won >>ONE MILLION<< At http://sources.redhat.com/pthreads-win32
#include <disclaim.er>
Unless implementation uses scheduling optimization that Butenhof labeled
as 'wait morphing'... and, of course, unless that behavior is absolutely
NEEDED to get 'predictable scheduling' or perhaps even something else...
> It's much more sensible to release the lock before
> broadcasting, don't you think?
In general, yes. But sometimes it's just required to prevent >>DEADLY<<
dangerous races:
http://www.terekhov.de/mythread.c
(see the cases of signaling prior to unlock)
regards,
alexander.
> > It's much more sensible to release the lock before
> > broadcasting, don't you think?
>
> In general, yes. But sometime it's just required to prevent >>DEADLY<<
> dangerous races:
>
> http://www.terekhov.de/mythread.c
> (see the cases of signaling prior to unlock)
I don't know what I'm supposed to see there, but it should always be
safe to release the lock before you broadcast. The standard doesn't
guarantee that the broadcast will be received in any particular time
frame, so the delay in sending the signal can't cause a problem. There's
no chance of deadlock because releasing a lock can't block. There's no
harm of a spurious signal because the standard is defined to warn that
they're possible, so code that can't handle that is already broken.
DS
Try again. Hint: condvar may NOT be alive forever.
regards,
alexander.
% Shouldn't the broadcast be inside the mutex lock? As follows:
You usually get fewer replies if you put it outside the mutex in
posts to this newsgroup.
I understand, and that seems to make sense. But then why do so many
books and comp. sci. classes teach that the basic pattern is: 1) lock,
2) signal, 3) unlock?
--
Steve
> I understand, and that seems to make sense. But then why do so many
> books and comp. sci. classes teach the basic pattern is: 1) lock, 2)
> signal, 3) unlock?
Because they don't assume POSIX condition variables. The only reason
the lock/unlock/signal paradigm is safe is because of the two
characteristics of POSIX condition variables that I discussed. It
obviously isn't always safe if, for example, you are using a signal
mechanism that is defined to never cause spurious wakeups and your code
relies upon this. It isn't safe if there's a guaranteed timeframe for
signal delivery with respect to lock operations. And so on.
DS
That behavior would be true if pthread_cond_wait performed a try on
acquiring the mutex. Otherwise it would just block on the mutex until
it is unlocked, and then continue. Does it indeed go back to waiting
if it cannot immediately acquire the mutex? How do you know?
Also, in the linux man page for pthread_cond_broadcast, the example
even shows the signal occurring before the unlock. I do not know which
method is correct. I just know that 90% of the web searches I perform
on this subject show the signal occurring before the unlock.
So I still do not know which is correct. I would hope Butenhof would
read this thread and offer an opinion.
--
Steve
> I don't know what I'm supposed to see there, but it should
> always be safe to release the lock before you broadcast. The
> standard doesn't guarantee that the broadcast will be received in
> any particular time frame, so the delay in sending the signal can't
> cause a problem. There's no chance of deadlock because releasing a
> lock can't block. There's no harm of a spurious signal because the
> standard is defined to warn that they're possible, so code that
> can't handle that is already broken.
Isn't there harm in another thread acquiring/stealing the mutex in
between your unlock and broadcast? It seems to me there is a big
possibility of thread starvation for those threads waiting in
cond_waits if you broadcast after an unlock.
--
Steve
> > I don't know what I'm supposed to see there, but it should
> > always be safe to release the lock before you broadcast. The
> > standard doesn't guarantee that the broadcast will be received in
> > any particular time frame, so the delay in sending the signal can't
> > cause a problem. There's no chance of deadlock because releasing a
> > lock can't block. There's no harm of a spurious signal because the
> > standard is defined to warn that they're possible, so code that
> > can't handle that is already broken.
> Isn't there harm in another thread acquiring/stealing the mutex in
> between your unlock and broadcast?
How is that harmful?
> Seems to be there is a big
> possibility of thread starvation for those threads waiting in
> cond_wait's if you broadcast after an unlock.
This is a nonsense argument. If I have two threads that can do a job,
and both are waiting to do it, I don't care which thread does it.
DS
> > Why? If you do that, the other thread may wake up (due to the
> > broadcast) and then have to go right back to sleep (trying to
> > acquire the mutex). It's much more sensible to release the lock
> > before broadcasting, don't you think?
> That behavior would be true if pthread_cond_wait performed a try on
> acquiring the mutex. Otherwise it would just block on the mutex until
> it is unlocked, and then continue. Does it indeed go back to waiting
> if it cannot immediately acquire the mutex? How do you know?
What else could it do? I've looked at quite a few pthreads
implementations, and there are only two ways I've seen this done:
1) The pthread_cond_wait code just calls pthread_mutex_lock to
reacquire the mutex after it's woken.
2) The pthread_cond_signal/pthread_cond_broadcast code is smart enough
to 'morph' threads from 'waiting for cv' to 'waiting for mutex', thus
avoiding the excess wakeup just to go back to sleep.
> Also, in the linux man page for pthread_cond_broadcast, the example
> even shows the signal occurring before the unlock. I do not know which
> method is correct. I just know that 90% of the web searches I perform
> on this subject show the signal occurring before the unlock.
Both methods are correct.
> So I still do not know which is correct. I would hope Butenhof would
> read this thread and offer an opinion.
Signalling after unlocking is preferred because it generally provides
better performance.
DS
> Steve Connet wrote:
>
>> Also, in the linux man page for pthread_cond_broadcast, the example
>> even shows the signal occurring before the unlock. I do not know which
>> method is correct. I just know that 90% of the web searches I perform
>> on this subject show the signal occurring before the unlock.
>
> Both methods are correct.
>
>> So I still do not know which is correct. I would hope Butenhof would
>> read this thread and offer an opinion.
I've been reading mostly without responding lately; I've been exhausted (my
9 year old daughter was diagnosed as diabetic last month, and then broke
her ankle last week, and we've been going through many adjustment, ah,
"difficulties") and I've also been busy. As a consequence I've had both
less time and less energy for some of these discussions.
> Signalling after unlocking is preferred because it generally provides
> better performance.
In general I'd tend to agree, all other things being equal. Unfortunately,
many implementations don't do "wait morphing", and as a result signalling
while holding the mutex can result in significant costs.
One of the main reasons for signalling while holding the mutex is "increased
predictability". However, the increase is relatively small, and doesn't
provide any absolute guarantee. (It's based on the fact that mutex
contenders and condition waiters will queue in priority order on the mutex;
but that helps only if they're all actually waiting concurrently, and
you're using realtime priority threads on a uniprocessor.)
The only real requirements are that:
1) The condition wait be based on a shared data PREDICATE condition, and
2) The predicate is manipulated while holding the mutex specified in the
condition wait, BEFORE effecting the wakeup via signal or broadcast of the
condition waiter(s).
Some may find these requirements easier to manage when you set the predicate and
immediately signal the waiter while still holding the mutex. You'll often
want to change some value and then conditionally signal based on the new
value... if you are going to unlock first you may need to remember the
decision by copying the predicate into a local variable to avoid
unsynchronized (and hence unreliable) access after unlocking.
This may increase the chances of spurious ("stolen") wakeups because some
other thread may change the predicate between the time you unlock and the
time the awakened thread starts running. That may argue also in favor of
signalling while you still hold the mutex despite the risk of inefficiency.
(On the other hand, stolen wakeups, and other types of "spurious wake" can
occur anyway, so this again at best somewhat reduces the risk and cost, but
cannot remove it entirely.)
Some have argued for FIFO mutexes that grant ownership in order of request;
applied to condition waiters using the mutex, that would seem to remove (or
at least reduce) the risk of stolen wakeups. However, FIFO wakeup is
relatively expensive, and substantially reduces concurrency that is, on the
whole, far more valuable. Furthermore, nearly all intended requirements for
FIFO are in any case doomed to failure because they are based on the
mistaken idea that threads can viably COMPETE with each other (e.g., in a
"thread per client" server). Threads are inherently cooperative, because
they share far too many resources to effectively or fairly compete; the
goals of threaded programming are better served by efficient and simple
synchronization on which cooperative applications can build their own
task/data scheduling.
And then, if the thread implementation you use implements wait morphing, the
cost of signalling while you hold the mutex is inconsequential. So one
might argue that, if you have a quality implementation, it really doesn't
matter whether you hold the mutex. ;-)
--
/--------------------[ David.B...@hp.com ]--------------------\
| Hewlett-Packard Company Tru64 UNIX & VMS Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\-------------[ http://homepage.mac.com/~dbutenhof ]--------------/
Unless the state/predicate change [done under mutex protection, of course]
could result in the destruction of the condition variable. AFAICT, in this
case, one just has to perform the signaling [broadcasting with multiple
waiters] while holding the mutex. Otherwise, the 'destroyer' thread may race
ahead [either due to NOT waiting at all in that time window, if that is
possible, or due to some REAL 'spurious' wakeup consumed in that time window
-- it would then already be in the race for mutex ownership] and destroy the
condition variable 'right after' the signaler unlocks the mutex but before
it's finished with signaling. Here is an example of wrong/erroneous logic
[see comments] in my old tennis.c 'program':
http://groups.google.com/groups?selm=3BB18850.6BF7CF24%40web.de
< new comments added >
=====================================
tennis.c
=====================================
#include <stdio.h>
#include <pthread.h>
enum GAME_STATE {
    START_GAME,
    PLAYER_A,           // Player A plays the ball
    PLAYER_B,           // Player B plays the ball
    GAME_OVER,
    ONE_PLAYER_GONE,
    BOTH_PLAYERS_GONE
};

enum GAME_STATE eGameState;
pthread_mutex_t mtxGameStateLock;
pthread_cond_t cndGameStateChange;

void*
playerA(
    void* pParm
)
{
    // For access to game state variable
    pthread_mutex_lock( &mtxGameStateLock );

    // Game loop
    while ( eGameState < GAME_OVER ) {

        // Play the ball
        printf( "\nPLAYER-A\n" );

        // Now it is PLAYER-B's turn
        eGameState = PLAYER_B;

        // Signal it to PLAYER-B
        pthread_cond_signal( &cndGameStateChange );

        // Wait until PLAYER-B finishes playing the ball
        do {
            pthread_cond_wait( &cndGameStateChange,&mtxGameStateLock );
            if ( PLAYER_B == eGameState )
                printf( "\n----PLAYER-A: SPURIOUS WAKEUP!!!\n" );
        } while ( PLAYER_B == eGameState );
    }

    // PLAYER-A gone
    eGameState++;
    printf( "\nPLAYER-A GONE\n" );

    // No more access to state variable needed
    pthread_mutex_unlock( &mtxGameStateLock );

    // Signal PLAYER-A gone event
    pthread_cond_broadcast( &cndGameStateChange );
    //^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ WRONG

    return 0;
}

void*
playerB(
    void* pParm
)
{
    // For access to game state variable
    pthread_mutex_lock( &mtxGameStateLock );

    // Game loop
    while ( eGameState < GAME_OVER ) {

        // Play the ball
        printf( "\nPLAYER-B\n" );

        // Now its PLAYER-A's turn
        eGameState = PLAYER_A;

        // Signal it to PLAYER-A
        pthread_cond_signal( &cndGameStateChange );

        // Wait until PLAYER-A finishes playing the ball
        do {
            pthread_cond_wait( &cndGameStateChange,&mtxGameStateLock );
            if ( PLAYER_A == eGameState )
                printf( "\n----PLAYER-B: SPURIOUS WAKEUP!!!\n" );
        } while ( PLAYER_A == eGameState );
    }

    // PLAYER-B gone
    eGameState++;
    printf( "\nPLAYER-B GONE\n" );

    // No more access to state variable needed
    pthread_mutex_unlock( &mtxGameStateLock );

    // Signal PLAYER-B gone event
    pthread_cond_broadcast( &cndGameStateChange );
    //^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ WRONG

    return 0;
}

int
main()
{
    pthread_t thid;

    pthread_mutex_init( &mtxGameStateLock,0 );
    pthread_cond_init( &cndGameStateChange,0 );

    // Set initial state
    eGameState = START_GAME;

    // Create players
    pthread_create( &thid,0,playerA,0 );
    pthread_create( &thid,0,playerB,0 );

    // Give them 5 sec. to play
    //Sleep( 5000 ); //Win32
    sleep( 5 );

    // Set game over state
    pthread_mutex_lock( &mtxGameStateLock );
    eGameState = GAME_OVER;

    // Signal game over
    pthread_cond_broadcast( &cndGameStateChange );

    // Wait for players to stop
    do {
        pthread_cond_wait( &cndGameStateChange,&mtxGameStateLock );
        //^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        //------- SPURIOUS WAKEUP-RACE could occur here -------
    } while ( eGameState < BOTH_PLAYERS_GONE );

    // Thats it
    printf( "\nGAME OVER\n" );
    pthread_mutex_unlock( &mtxGameStateLock );
    pthread_cond_destroy( &cndGameStateChange );
    //^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RACE

    pthread_mutex_destroy( &mtxGameStateLock );
    return 0;
}
Well, another way to 'fix' it here would be simply to *join* both
players prior to the final cleanup, or simply not do any cleanup
at all, since that does not really matter here.
Where am I wrong here? ;-)
BTW, unless I am missing and/or misunderstanding something,
a similar code/logic (but w/o an error ;-) ) can be found
here:
http://homepage.mac.com/~dbutenhof/Threads/code/workq.c
    DPRINTF (("Worker shutting down\n"));
    wq->counter--;

    /*
     * NOTE: Just to prove that every rule has an
     * exception, I'm using the "cv" condition for two
     * separate predicates here. That's OK, since the
     * case used here applies only once during the life
     * of a work queue -- during rundown. The overhead
     * is minimal and it's not worth creating a separate
     * condition variable that would be waited and
     * signaled exactly once!
     */
    if (wq->counter == 0)
        pthread_cond_broadcast (&wq->cv);
    pthread_mutex_unlock (&wq->mutex);
    return NULL;
"remember the decision by copying the predicate into a local variable"
and doing the signaling AFTER unlock would be INCORRECT here, unless
I'm just missing something {hopefully} subtle here...
< another source code fragment >
    if (wq->counter > 0) {
        wq->quit = 1;
        /* if any threads are idling, wake them. */
        if (wq->idle > 0) {
            status = pthread_cond_broadcast (&wq->cv);
            if (status != 0) {
                pthread_mutex_unlock (&wq->mutex);
                return status;
            }
        }

        /*
         * Just to prove that every rule has an exception, I'm
         * using the "cv" condition for two separate predicates
         * here. That's OK, since the case used here applies
         * only once during the life of a work queue -- during
         * rundown. The overhead is minimal and it's not worth
         * creating a separate condition variable that would be
         * waited and signalled exactly once!
         */
        while (wq->counter > 0) {
            status = pthread_cond_wait (&wq->cv, &wq->mutex);
            if (status != 0) {
                pthread_mutex_unlock (&wq->mutex);
                return status;
            }
        }
    }
    status = pthread_mutex_unlock (&wq->mutex);
    if (status != 0)
        return status;
    status = pthread_mutex_destroy (&wq->mutex);
    status1 = pthread_cond_destroy (&wq->cv);
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
regards,
alexander.
>
> David Butenhof wrote:
> [...]
>> And then, if the thread implementation you use implements wait morphing,
>> the cost of signalling while you hold the mutex is inconsequential. So
>> one might argue that, if you have a quality implementation, it really
>> doesn't matter whether you hold the mutex. ;-)
>
> Unless the state/predicate change [done under mutex protection, of course]
> could result in the destruction of the condition variable. AFAICT, in this
> case, one just have to perform signaling [broadcasting with multiple
> waitors] while holding the mutex. Otherwise, the 'destroyer' thread may
> race ahead
> [either due to NOT waiting at all in that time window if that is possible,
> or due to some REAL 'spurious' wakeup consumed in that time window -- it
> would then already be in the race for mutex ownership] and destroy
> condition variable 'right after' the signaler unlocks the mutex but before
> it's finished with signaling. Here is an example of wrong/erroneous logic
> [see comments] in my old tennis.c 'program':
But that's just an application bug.
Sure, a broken program can delete a shared object while other threads are
still using it. The easiest way to manage CV lifetime issues, if you really
need to dynamically create/destroy them, is to be sure that all references
occur while holding a mutex. That might as well be the mutex used by that
CV for waiting and predicate tests.
But none of that is the same as saying "that's the only way". It's just
another program bug, and you are expressing personal preferences for
avoiding that bug. That's fine, and your preferences in this case don't
seem particularly unreasonable (for those rare cases where dynamic CV
destruction really makes sense)... but it's still just a preference.
What else? Ref.counting? Garbage collector [hehe ;-)]? Uhmm. I don't see
anything else that could help... Also, holding some other mutex probably
won't help given the nature of 'wait' operation. ;-) What am I missing
here? TIA.
> But none of that is the same as saying "that's the only way". It's just
> another program bug, and you are expressing personal preferences for
> avoiding that bug. That's fine, and your preferences in this case don't
> seem particularly unreasonable (for those rare cases where dynamic CV
> destruction really makes sense)... but it's still just a preference.
regards,
alexander.