condvar under windows


Jochen Wilhelmy

Jan 19, 2003, 1:07:29 PM
hi!

a common question/problem is how to implement condition variables with
events on windows. there seem to be two different approaches:

- use windows objects (event, semaphore) per condition variable
- use windows objects (an event) per thread

the first approach is used in ace and boost and was discussed here.
the second approach i did not find somewhere, so it even may be
my own invention :)

the idea is to have an event per thread and a queue per condition
variable. if a thread wants to wait on the condvar it adds itself
to the queue, then releases the associated mutex, and then waits
on the event (the order of these operations ensures that no
signal() can be missed).
if a thread wants to signal(), it removes the first thread from the
queue and sets its event to wake it up.
i think this is a quite simple and efficient method. see below
the sourcecode and tell me your opinion :)

(explanation: waitForNotify() does a WaitForSingleObject on windows,
wakeUp(1) does a SetEvent on the event of the first thread in the queue)

void CondVar::wait(Mutex& mutex)
{
    // get current thread
    Thread* current = Thread::getCurrent().getPointer();

    // wait until the condition variable is acquired
    do
    {
        // process messages; exceptions (including the interruption exception) may occur
        current->processMessages();

        {
            // acquire the guard
            Lock lock(*this->impl);

            // add this thread to the queue of the condition variable
            this->impl->add(current);
        }

        {
            // scoped unlock for wait
            Unlock unlock(mutex);

            // wait
            current->waitForNotify();
        }

        {
            // acquire the guard
            Lock lock(*this->impl);

            // remove this thread from the queue of the condition variable;
            // break if the thread was already removed, which indicates that
            // the condition variable is acquired
            if (!this->impl->remove(current))
                break;
        }

    } while (true);
}


void CondVar::signal()
{
    // acquire the guard
    Lock lock(*this->impl);

    // wake up one thread
    this->impl->wakeUp(1);
}
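
for readers without the surrounding classes, here is a self-contained sketch of the same per-thread-event idea in portable C++ (all names invented for illustration; a std::mutex/std::condition_variable pair stands in for the win32 auto-reset event, and timeouts, broadcast and error handling are omitted):

```cpp
#include <cassert>
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

// Stand-in for a per-thread Win32 auto-reset event.
struct Event {
    std::mutex m;
    std::condition_variable cv;
    bool signaled = false;
    void wait() {                        // like WaitForSingleObject
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return signaled; });
        signaled = false;                // auto-reset
    }
    void set() {                         // like SetEvent
        { std::lock_guard<std::mutex> lk(m); signaled = true; }
        cv.notify_one();
    }
};

// Condition variable built from per-thread events and a queue.
class CondVar {
    std::mutex guard;                    // protects the queue
    std::deque<Event*> queue;
public:
    void wait(std::mutex& outer) {
        thread_local Event ev;           // one event per thread
        { std::lock_guard<std::mutex> lk(guard); queue.push_back(&ev); }
        outer.unlock();                  // enqueue before unlock: no signal can be lost
        ev.wait();
        outer.lock();
    }
    void signal() {
        Event* e = nullptr;
        { std::lock_guard<std::mutex> lk(guard);
          if (!queue.empty()) { e = queue.front(); queue.pop_front(); } }
        if (e) e->set();                 // wake the first queued thread
    }
};
```

the enqueue-before-unlock ordering is the same lost-signal protection as in the code above; here the signaler removes the waiter from the queue itself, so the waiter needs no re-check loop.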


Joseph Seigh

Jan 19, 2003, 3:33:52 PM

Jochen Wilhelmy wrote:
>
> hi!
>
> a common question/problem is how to implement condition variables with
> events on windows. there seem to be two different approaches:
>
> - use windows objects (event, semaphore) per condition variable
> - use windows objects (an event) per thread
>
> the first approach is used in ace and boost and was discussed here.
> the second approach i did not find somewhere, so it even may be
> my own invention :)
>

(...)

My understanding of the problem with windows events was determining when
the last thread had exited from the wait set so that you could reuse the
event. So it becomes sort of a GC problem. The maximum number of event
objects needed would be equal to the number of threads. I thought that
it may have been considered unscalable, which is why nobody was using it.
Maybe not.

Joe Seigh

Jochen Wilhelmy

Jan 19, 2003, 4:36:37 PM
> My understanding of the problem with windows events was determining when
> the last thread had exited from the wait set so that you could reuse the
> event. So it becomes sort of a GC problem. The maximum number of event
> objects needed would be equal to the number of threads. I thought that
> it may have been considered unscalable, which is why nobody was using it.
> Maybe not.
but i needed one event per thread anyway, so for me this was the
cheapest solution. note that if you have 10 condvars and 10 threads
you need 10 events in either approach. for 100 condvars and 1 thread
you would need 100 in the per-condvar approach and 1 in my approach.
for 1 condvar and 100 threads you would need 1 in the per-condvar
approach and 100 in my approach.

regards, jochen

Joseph Seigh

Jan 19, 2003, 5:24:06 PM

No, they wouldn't be part of the condvar but dynamically allocated on an
as-needed basis and deallocated thru GC. So the maximum number of events
would be the number of threads.

Joe Seigh

Alexander Terekhov

Jan 20, 2003, 3:07:07 AM

Jochen Wilhelmy wrote:
>
> hi!
>
> a common question/problem is how to implement condition variables with
> events on windows. there seem to be two different approaches:
>
> - use windows objects (event, semaphore) per condition variable
> - use windows objects (an event) per thread
>
> the first approach is used in ace and boost and was discussed here.
> the second approach i did not find somewhere, so it even may be
> my own invention :)

A) Ace-condvars are totally broken.

B) Boost-condvars are partly broken [dtor lacks some sync, to begin with].

C) Condvars-emuls using "the second approach i did not find somewhere":

http://groups.google.com/groups?selm=3CEA2764.3957A2C8%40web.de
(Subject: Re: The implementation of condition variables in pthreads-win32)

regards,
alexander.

Jochen Wilhelmy

Jan 20, 2003, 6:47:41 AM
hi!

> A) Ace-condvars are totally broken.
>
> B) Boost-condvars are partly broken [dtor lacks some sync, to begin with].
>
> C) Condvars-emuls using "the second approach i did not find somewhere":
>
> http://groups.google.com/groups?selm=3CEA2764.3957A2C8%40web.de
> (Subject: Re: The implementation of condition variables in pthreads-win32)

ok, then i re-invented it :)

it seems true that win32 is so brain damaged (as you call it :) ) that
there is no possibility to clean up thread specific data.
is this right?
this is no problem if you use a lib like pthreads, but for an active x
plugin i don't really know how to do this, perhaps with the dllmain
stuff.

jochen

Alexander Terekhov

Jan 20, 2003, 2:26:44 PM

Jochen Wilhelmy wrote:
[...]

Q)

> it seems true that win32 is so brain damaged (as you call it :) ) that
> there is no possibility to clean up thread specific data.
> is this right?

A)

> this is no problem if you use a lib like pthreads, but for an active x
> plugin i don't really know how to do this, perhaps with the dllmain
> stuff.

Maybe. ;-)

regards,
alexander.

Joseph Seigh

Jan 20, 2003, 2:31:04 PM

Alexander Terekhov wrote:
>
> Jochen Wilhelmy wrote:
> >
> > hi!
> >
> > a common question/problem is how to implement condition variables with
> > events on windows. there seem to be two different approaches:
> >
> > - use windows objects (event, semaphore) per condition variable
> > - use windows objects (an event) per thread
> >
> > the first approach is used in ace and boost and was discussed here.
> > the second approach i did not find somewhere, so it even may be
> > my own invention :)
>
> A) Ace-condvars are totally broken.
>
> B) Boost-condvars are partly broken [dtor lacks some sync, to begin with].

Umm.., broken broken or just severely suboptimal broken?


Should I write a windows condvar to give Jochem something to benchmark against? :)

Joe Seigh

Alexander Terekhov

Jan 20, 2003, 2:38:11 PM

Joseph Seigh wrote:
[...]

> > A) Ace-condvars are totally broken.
> >
> > B) Boost-condvars are partly broken [dtor lacks some sync, to begin with].
>
> Umm.., broken broken or just severely suboptimal broken?

Uhmm.

Ace:

http://groups.google.com/groups?selm=C12569D7.0074491A.00%40d12mta05.de.ibm.com
http://groups.google.com/groups?selm=C12569E5.005BE639.00%40d12mta05.de.ibm.com
http://groups.google.com/groups?selm=C1256A72.005F1860.00%40d12mta05.de.ibm.com

Boost:

http://lists.boost.org/MailArchives/boost/msg23913.php

regards,
alexander.

Joseph Seigh

Jan 21, 2003, 10:05:34 AM

I meant algorithmically. Bugs can be fixed, algorithms can't. There are only
three algorithms that I know of: Jochem's, the GC one, and the earlier non-GC
one which blocks the signaler until the wait set has cleared out and the event
can be reset and reused.

Joe Seigh

Alexander Terekhov

Jan 21, 2003, 10:30:16 AM

Joseph Seigh wrote:
[...]

> I meant algorithmically. Bugs can be fixed, algorithms can't. There are only
> three algorithms that I know of: Jochem's, the GC one, and the earlier non-GC
> one which blocks the signaler until the wait set has cleared out and the event
> can be reset and reused.

Uhmm. Why would you want to block *the signaler*? I don't quite follow
you, I'm afraid. Do you mean "deleter"/"destroyer" or what? Well, FWIW:
<http://tinyurl.com/4pak>. Oh, also ``FWIW:''

https://listman.redhat.com/pipermail/phil-list/2003-January/000430.html
(Subject: nptl 0.14)

regards,
alexander.

Jochen Wilhelmy

Jan 21, 2003, 10:33:05 AM
> Should I write a windows condvar to give Jochem something to benchmark against? :)
if you mean me, my name is JocheN. if you have some pseudocode (or real
code for some platform) it would be cool to test it.

bye, jochen

Joseph Seigh

Jan 21, 2003, 12:25:18 PM

Sorry Jochen.

Here's some pseudo code for now.


class waitset {
    friend class condvar;

private:
    void wait();    // WaitForSingleObject
    void signal();  // SetEvent

    int count;      // refcount
    event_t event;
};

class condvar {

public:

    void wait(mutex_t &outer) {
        waitset *temp;
        lock(mutex);
        if (waitobj == 0)
            waitobj = new waitset();  // initialized to reset state
        temp = waitobj;
        temp->count++;                // atomic increment
        unlock(mutex);

        unlock(outer);                // unlock caller's mutex
        temp->wait();
        if (--temp->count == 0)       // atomic decrement
            delete temp;
        lock(outer);                  // re-lock caller's mutex
    }

    void signal() {
        waitset *temp;
        lock(mutex);
        temp = waitobj;
        if (temp == 0) {              // no current waitset, nothing to signal
            unlock(mutex);
            return;
        }
        temp->count++;                // atomic increment
        waitobj = 0;
        unlock(mutex);

        temp->signal();
        if (--temp->count == 0)       // atomic decrement
            delete temp;
    }

private:

    mutex_t mutex;
    waitset *waitobj;
};


You can only do a broadcast signal. There's a way to signal just one waiting
thread without blocking the signaler, but I haven't worked on it yet. It involves
keeping up to two waitsets around and only doing single signals to the older waitset.
The reference count would tell you how many are in the waitset. And you'd have
to change from win32 events to win32 semaphores, of course.

Joe Seigh

Jochen Wilhelmy

Jan 22, 2003, 8:55:07 AM
> You can only do a broadcast signal. There's a way to signal just one waiting
> thread without blocking the signaler, but I haven't worked on it yet. It involves
> keeping up to two waitsets around and only doing single signals to the older waitset.
> The reference count would tell you how many are in the waitset. And you'd have
> to change from win32 events to win32 semaphores, of course.

it seems that it creates one waitset (and therefore an event) per wait.
it's an interesting idea, but i don't think it's very fast.
but of course i should write some speed test now.

jochen

Joseph Seigh

Jan 22, 2003, 1:32:10 PM

You can recycle them by overriding the new and delete operators. I haven't actually
looked at how expensive event object initialization is, so I don't know which is
more efficient, recycling or allocating heap storage each time.

I'm doing pseudo code for a signal/broadcast version using semaphores instead of events
which I'll post later on.

Joe Seigh

Joseph Seigh

Jan 22, 2003, 1:46:50 PM
Question here. I generally like to keep the locked sections of code as
small as possible. In some cases that means slightly more complicated
code as in the prior posting. The code would be somewhat simpler otherwise.
But that would mean moving the win32 event signaling inside the locked region.
That wouldn't be a problem if win32 event signaling does not cause thread
preemption. I wouldn't think that event signaling should cause preemption but
who knows what system architects think these days. And I could have sworn
I'd seen that behavior somewhere. Has anybody heard of signaling functions
that do cause preemption?

On the other hand, it might be better to keep making as few assumptions as possible,
no matter how implausible it may seem otherwise.

Joe Seigh

Steve Watt

Jan 22, 2003, 1:55:12 PM
In article <3E2EE93A...@xemaps.com>,

Joseph Seigh <jsei...@xemaps.com> wrote:
>Question here. I generally like to keep the locked sections of code as
>small as possible. In some cases that means slightly more complicated
>code as in the prior posting. The code would be somewhat simpler otherwise.
>But that would mean moving the win32 event signaling inside the locked region.
>That wouldn't be a problem if win32 event signaling does not cause thread
>preemption. I wouldn't think that event signaling should cause preemption but
>who knows what system architects think these days. And I could have sworn
>I'd seen that behavior somewhere. Has anybody heard of signaling functions
>that do cause preemption?

Well, it's absolutely guaranteed on a POSIX system if the awakened
thread is higher priority (and something other than SCHED_OTHER).
I'd imagine the same is true for Win32. That's really where it gets
interesting to write synch primitives on RTOSes, where it's common for
everything to be at different prios.

I have seen implementations where signalling a same-prio peer would
immediately start the peer running, but I'll agree that that seems like
a lousy way to do things.

>On the other hand, it might be better to keep making as few assumptions
>as possible, no matter how implausible it may seem otherwise.

A safer bet, considering...
--
Steve Watt KD6GGD PP-ASEL-IA ICBM: 121W 56' 57.8" / 37N 20' 14.9"
Internet: steve @ Watt.COM Whois: SW32
Free time? There's no such thing. It just comes in varying prices...

Joseph Seigh

Jan 23, 2003, 7:06:47 AM
Well, it looks like timeouts w.r.t. signal (wake up one thread) are extremely
interesting (read: nasty). No shortage of suboptimal solutions, not to speak
of the ones that don't work.

Joe Seigh

Joseph Seigh

Jan 24, 2003, 4:28:45 PM
Just out of perverse curiosity, is there an api in windows
for creating synchronization objects with handles suitable for
use with WaitForSingleObject/WaitForMultipleObjects? I
suspect these are kernel entities, but with windows who
knows.

Joe Seigh

Chris Sheppard

Jan 24, 2003, 4:53:04 PM

Joseph Seigh

Jan 24, 2003, 5:27:52 PM

No, or there wouldn't be the current topic right now. A recurring topic I might add.

Specifically, I was wondering if one can create (i.e. write from scratch) synchronization objects
other than the ones that Microsoft provides. The object would be referenced via a HANDLE type
which could be passed to the wait routines.

Joe Seigh

Jochen Wilhelmy

Jan 27, 2003, 7:30:43 AM
> Specifically, I was wondering if one can create (i.e. write from scratch) synchronization objects
> other than the ones that Microsoft provides. The object would be referenced via a HANDLE type
> which could be passed to the wait routines.

i currently think that it is not possible as a raw HANDLE, but if you
write a wrapper for everything then it is possible to implement
everything using handles (with the handle-per-thread method)

jochen

Joseph Seigh

Jan 28, 2003, 12:04:17 PM

Joseph Seigh wrote:
>
> You can recycle them by overriding the new and delete operators. I haven't actually
> looked at how expensive event object initialization is, so I don't know which is
> more efficient, recycling or allocating heap storage each time.
>
> I'm doing pseudo code for a signal/broadcast version using semaphores instead of events
> which I'll post later on.
>

Well, maybe in C instead of pseudo code. Actually I haven't tried windows threaded programming
yet. No static initialization, so I'll have to write DCL/pthread_once logic as well. Testcases
will probably be most of the work. The header file so far looks like

typedef void * cond_t; // opaque handle to Cond object

extern cond_t cond_create();
extern void cond_destroy(cond_t hCond);

extern LONG cond_reltimedwait(cond_t hCond, HANDLE hMutex, DWORD dwTimeout);
extern LONG cond_wait(cond_t hCond, HANDLE hMutex);
extern LONG cond_signal(cond_t hCond);
extern LONG cond_broadcast(cond_t hCond);

cond_create() will probably have some parms so I can do performance measurements of
alternative logic.

It's not pthreads. I'm not writing a pthreads package. It's a standalone condition
variable.

Joe Seigh

Jochen Wilhelmy

Feb 1, 2003, 1:51:34 PM
> It's not pthreads. I'm not writing a pthreads package. It's a standalone condition
> variable.

do you have a url where the latest version can be found?


Joseph Seigh

Feb 1, 2003, 2:21:35 PM

Sort of testing it at the moment. And I want to measure the performance cost
of allocating and deallocating semaphores vs. reusing them. And maybe broadcast
vs. signal.

Joe Seigh

Joseph Seigh

Feb 3, 2003, 4:29:44 PM

The difference between recycling and not recycling semaphores used as waitsets seems to
be about 10% using condvars with Mutexes and 20% using condvars with Critical_Sections
as the lock object in the condvar wait parameter list. I guess that's significant given
that it's being measured relative to the test application, not just the condvar
routines.

Joe Seigh

Joseph Seigh

Feb 4, 2003, 8:03:25 AM

Joseph Seigh wrote:
>
>
> The difference between recycling and not recycling semaphores used as waitsets seems to
> be about 10% using condvars with Mutexes and 20% using condvars with Critical_Sections
> as the lock object in the condvar wait parameter list. I guess that's significant given
> that it's being measured relative to the test application, not just the condvar
> routines.
>

That could be as high as a 30 to 50% difference. I've been fooling around with the
testcase, a consumers/producers thing. If I tweak it in ways you wouldn't normally
do in this case, I get a 10x increase in throughput. That is just wrong. Microsoft's
implementation of ReleaseSemaphore is really, really screwed up. All I can say is
it's a good thing POSIX allows really, really pathological implementations.

An interesting challenge is, can you come up with algorithms that provide optimal
performance even if the thread libraries are severely suboptimal?

Joe Seigh

Joseph Seigh

Feb 5, 2003, 7:11:37 PM
More details.

This

lock(mutex)
...
cond_signal(condvar)
unlock(mutex)

is about 60% slower than

lock(mutex)
...
unlock(mutex)
cond_signal(condvar)

apparently ReleaseSemaphore() is the culprit. The testcase is a
producer/consumer and the slowdown is the same whether condvars
are used or just plain semaphores are used (producer/consumer
being something that can be done with semaphores).
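
The two orderings can also be demonstrated portably. This toy producer/consumer (invented for illustration, not the actual testcase) uses the faster unlock-then-signal pattern; it stays correct because the waiter always re-checks its predicate under the lock:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

std::mutex m;
std::condition_variable cv;
std::queue<int> q;
bool done = false;

// Producer signals OUTSIDE the locked region, the pattern measured
// as faster above.
void producer(int n) {
    for (int i = 0; i < n; ++i) {
        { std::lock_guard<std::mutex> lk(m); q.push(i); }
        cv.notify_one();               // signal after unlock
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}

// Consumer re-checks the predicate under the lock, so the signal
// placement cannot cause a lost wakeup.
int consumer() {
    int consumed = 0;
    std::unique_lock<std::mutex> lk(m);
    for (;;) {
        cv.wait(lk, [] { return !q.empty() || done; });
        while (!q.empty()) { q.pop(); ++consumed; }
        if (done) return consumed;
    }
}
```

Signaling outside the lock avoids waking a thread that would immediately block again on the still-held mutex, which is presumably where the 60% goes.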

The condvar version and the semaphore version of the testcase
run at the same speed, so the condvar implementation doesn't
add any appreciable overhead.

The 10x speed difference I mentioned in a prior posting is due
to testcase manipulation and has nothing to do with the condvar
implementation or moving the cond_signal in or out of the locked
region.

Joe Seigh

Joseph Seigh

Feb 14, 2003, 7:32:35 AM
I'll post it here. It's not large enough to justify its own URL.

Starting with the header file. Note that the cond_reltimedwait() time value is
in milliseconds, same as WaitFor...Object(). Also, the mutex form takes
HANDLE*, not HANDLE, since there's no guarantee that HANDLE fits in PVOID.

Joe Seigh

-- CondVar.h --
#ifndef CONDVAR_H
#define CONDVAR_H

//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------


typedef int (*mutexcb_t)(void *); // mutex lock/unlock callback type

//
//
typedef struct {
    mutexcb_t lockcb;            // mutex lock callback
    mutexcb_t unlockcb;          // mutex unlock callback
} condattr_t;

//
//
typedef struct waitset_tt {
    struct waitset_tt *next;     // waitset free queue
    LONG refCount;               // refcount -- atomic increment/decrement only
    LONG waitCount;              // wait/signal count -- incr/decr only w/ lock
    LONG deferCount;             // committed waits
    HANDLE sem;                  // semaphore
} waitset_t;

//
//
typedef struct {
    CRITICAL_SECTION cs;
    waitset_t *qfree;            // waitset free queue

    // stats
    int qcount;
    int qcreate;
    int qalloc;
    int qdealloc;
} qfree_t;

//
//
typedef struct {
    qfree_t *fq;                 // local free queue
    waitset_t *waitset1;
    waitset_t *waitset2;
    CRITICAL_SECTION cs;

    // attributes
    condattr_t attr;
} cond_t;


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------

extern condattr_t condattr_default;
#define CONDATTR_DEFAULT condattr_default
extern condattr_t condattr_mutex;
#define CONDATTR_MUTEX condattr_mutex
extern condattr_t condattr_cs;
#define CONDATTR_CS condattr_cs

extern void condattr_init(condattr_t *attr);
extern void condattr_setmutexcb(condattr_t *attr, mutexcb_t lockcb, mutexcb_t unlockcb);


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------

extern LONG cond_init(cond_t *cv, condattr_t *attr);
extern void cond_destroy(cond_t *cv);

extern LONG cond_reltimedwait(cond_t *cv, void *mutex, DWORD dwTimeout);
extern LONG cond_wait(cond_t *cv, void *mutex);
extern LONG cond_signal(cond_t *cv);
extern LONG cond_broadcast(cond_t *cv);


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------

extern int cond_lock_mutex(void *mutex);
extern int cond_unlock_mutex(void *mutex);
extern int cond_lock_criticalsection(void *mutex);
extern int cond_unlock_criticalsection(void *mutex);

//-----------------------------------------------------------------------------------

/*-*/
#endif // CONDVAR_H

Joseph Seigh

Feb 14, 2003, 7:40:14 AM
-- CondCallback.c --
#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <winbase.h>
#include <CondVar.h>


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
int cond_lock_mutex(void *mutex) {
    return WaitForSingleObject(*(HANDLE *)mutex, INFINITE);
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
int cond_unlock_mutex(void *mutex) {
    return ReleaseMutex(*(HANDLE *)mutex) ? 0 : 1;
}

//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
int cond_lock_criticalsection(void *mutex) {
    EnterCriticalSection((LPCRITICAL_SECTION)mutex);
    return 0;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
int cond_unlock_criticalsection(void *mutex) {
    LeaveCriticalSection((LPCRITICAL_SECTION)mutex);
    return 0;
}


/*-*/

Joseph Seigh

Feb 14, 2003, 7:42:10 AM
-- CondAttr.c --

#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <winbase.h>
#include <CondVar.h>

//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------

condattr_t condattr_default = {
    cond_lock_mutex,
    cond_unlock_mutex,
};

condattr_t condattr_mutex = {
    cond_lock_mutex,
    cond_unlock_mutex,
};

condattr_t condattr_cs = {
    cond_lock_criticalsection,
    cond_unlock_criticalsection,
};


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
void condattr_init(condattr_t *attr) {
    *attr = condattr_default;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
void condattr_setmutexcb(condattr_t *attr, mutexcb_t lockcb, mutexcb_t unlockcb) {
    attr->lockcb = lockcb;
    attr->unlockcb = unlockcb;
}


/*-*/

Joseph Seigh

Feb 14, 2003, 7:43:31 AM
-- CondVar.c --

#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <winbase.h>
#include <stdlib.h>
#include <CondVar.h>

qfree_t *xq = NULL; // free waitset list


//===================================================================================
// Private Methods
//===================================================================================


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
waitset_t * createWaitset() {
    waitset_t *waitset;

    if ((waitset = (waitset_t *)malloc(sizeof(waitset_t))) != NULL) {
        waitset->refCount = 0;
        waitset->waitCount = 0;
        waitset->deferCount = 0;
        waitset->sem = CreateSemaphore(NULL, 0, 999999, NULL);
        waitset->next = NULL;
    }

    return waitset;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
void deleteWaitset(waitset_t *waitset) {
    CloseHandle(waitset->sem);
    free(waitset);

    return;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
qfree_t * initFreeQueue() {
    qfree_t *q;
    qfree_t *oldq;

    q = InterlockedCompareExchangePointer(&xq, NULL, NULL);

    if (q == NULL) {
        q = (qfree_t *)malloc(sizeof(qfree_t));
        InitializeCriticalSection(&(q->cs));
        q->qfree = NULL;
        q->qcount = 0;
        q->qcreate = 0;
        q->qalloc = 0;
        q->qdealloc = 0;

        oldq = InterlockedCompareExchangePointer(&xq, q, NULL);

        if (oldq != NULL) {
            DeleteCriticalSection(&(q->cs));
            free(q);
            q = oldq;
        }
    }

    return q;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
waitset_t *acquireWaitset(cond_t *cv) {
    qfree_t *q;
    waitset_t *waitset;

    if ((q = cv->fq) == NULL)
        q = cv->fq = initFreeQueue();

    EnterCriticalSection(&(q->cs));

    if ((waitset = q->qfree) != NULL) {
        q->qfree = waitset->next;
        q->qcount--;
        q->qalloc++;

        LeaveCriticalSection(&(q->cs));

        waitset->refCount = 0;
        waitset->waitCount = 0;
        // reset semaphore to zero
        while (waitset->deferCount > 0) {
            waitset->deferCount--;
            WaitForSingleObject(waitset->sem, 0);  // WAIT_OBJECT_0 or WAIT_TIMEOUT
        }
    }
    else {
        q->qcreate++;
        LeaveCriticalSection(&(q->cs));
        waitset = createWaitset();
    }

    return waitset;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
void releaseWaitset(cond_t *cv, waitset_t *waitset) {
    qfree_t *q;

    if (InterlockedDecrement(&(waitset->refCount)) == 0) {
        if ((q = cv->fq) == NULL)
            q = cv->fq = initFreeQueue();

        EnterCriticalSection(&(q->cs));

        waitset->next = q->qfree;
        q->qfree = waitset;
        q->qcount++;
        q->qdealloc++;

        LeaveCriticalSection(&(q->cs));
    }
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cancel_wait(cond_t *cv, waitset_t *waitset) {
    LONG rc;

    EnterCriticalSection(&(cv->cs));

    if (waitset->waitCount > 0) {
        if (--(waitset->waitCount) == 0) {
            if (cv->waitset2 == waitset) {
                cv->waitset2 = NULL;
                InterlockedDecrement(&(waitset->refCount));  // != 0
            }
        }
        rc = WAIT_TIMEOUT;
    }
    else {
        // not removed from waitset, signal already committed.
        // do not wait for signal. Semaphore will be decremented
        // to zero if waitset is reused.
        waitset->deferCount++;  // number of unclaimed signals
        rc = WAIT_OBJECT_0;
    }

    LeaveCriticalSection(&(cv->cs));

    return rc;
}


//===================================================================================
// Public Methods
//===================================================================================


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cond_init(cond_t *cv, condattr_t *attr) {

    // cond attribute settings
    if (attr == NULL)
        cv->attr = condattr_default;
    else
        cv->attr = *attr;

    InitializeCriticalSection(&(cv->cs));
    cv->waitset1 = NULL;
    cv->waitset2 = NULL;

    return 0;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
void cond_destroy(cond_t *cv) {

    if (cv->waitset1 != NULL)
        releaseWaitset(cv, cv->waitset1);

    if (cv->waitset2 != NULL)
        releaseWaitset(cv, cv->waitset2);

    DeleteCriticalSection(&(cv->cs));

    return;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cond_reltimedwait(cond_t *cv, void *mutex, DWORD dwWaitInterval) {
    waitset_t *temp;
    DWORD rc;

    EnterCriticalSection(&(cv->cs));

    // unlock caller's mutex
    if (cv->attr.unlockcb(mutex) != 0) {
        LeaveCriticalSection(&(cv->cs));
        rc = WAIT_FAILED;
        return rc;
    }

    if (cv->waitset1 == NULL) {
        cv->waitset1 = acquireWaitset(cv);  // semaphore initialized to zero
        cv->waitset1->refCount = 1;         // only time atomic not required
    }

    temp = cv->waitset1;
    InterlockedIncrement(&(temp->refCount));
    temp->waitCount++;                      // increment wait count

    LeaveCriticalSection(&(cv->cs));

    rc = WaitForSingleObject(temp->sem, dwWaitInterval);  // wait for signal

    switch (rc) {
    case WAIT_TIMEOUT:
        rc = cancel_wait(cv, temp);
        break;

    case WAIT_OBJECT_0:
        break;

    default:
        ;
    }

    releaseWaitset(cv, temp);

    // relock caller's mutex
    if (cv->attr.lockcb(mutex) != 0)
        abort();  // no return code defined for this condition

    return rc;
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cond_wait(cond_t *cv, void *mutex) {
    return cond_reltimedwait(cv, mutex, INFINITE);
}


//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cond_signal(cond_t *cv) {
    waitset_t *temp2;
    DWORD rc = 0;

    EnterCriticalSection(&(cv->cs));

    if (cv->waitset2 != NULL) {
        temp2 = cv->waitset2;
        if (--(temp2->waitCount) == 0)
            cv->waitset2 = NULL;
        else
            InterlockedIncrement(&(temp2->refCount));
    }
    else if (cv->waitset1 != NULL && cv->waitset1->waitCount > 0) {
        temp2 = cv->waitset2 = cv->waitset1;
        cv->waitset1 = NULL;

        if (--(temp2->waitCount) == 0)
            cv->waitset2 = NULL;
        else
            InterlockedIncrement(&(temp2->refCount));
    }
    else
        temp2 = NULL;

    LeaveCriticalSection(&(cv->cs));

    if (temp2 != NULL) {
        rc = ReleaseSemaphore(temp2->sem, 1, NULL) ? 0 : 1;
        releaseWaitset(cv, temp2);
    }

    return rc;
}

//-----------------------------------------------------------------------------------
//
//-----------------------------------------------------------------------------------
LONG cond_broadcast(cond_t *cv) {
    waitset_t *temp1, *temp2;
    LONG count1, count2;
    DWORD rc = 0;

    EnterCriticalSection(&(cv->cs));

    if ((temp1 = cv->waitset1) != NULL) {
        cv->waitset1 = NULL;  // keep refcount until later
        count1 = temp1->waitCount;
        temp1->waitCount = 0;
    }

    if ((temp2 = cv->waitset2) != NULL) {
        cv->waitset2 = NULL;  // keep refcount until later
        count2 = temp2->waitCount;
        temp2->waitCount = 0;
    }

    LeaveCriticalSection(&(cv->cs));

    if (temp1 != NULL) {
        rc = ReleaseSemaphore(temp1->sem, count1, NULL) ? 0 : 1;
        releaseWaitset(cv, temp1);
    }

    if (temp2 != NULL) {
        rc = ReleaseSemaphore(temp2->sem, count2, NULL) ? 0 : 1;
        releaseWaitset(cv, temp2);
    }

    return rc;
}

/*-*/

Joseph Seigh

Feb 14, 2003, 7:46:18 AM
CondVar.h
CondCallback.c
CondAttr.c
CondVar.c

That should be it.

Joe Seigh

Alexander Terekhov

Feb 14, 2003, 1:06:08 PM

Joseph Seigh wrote:
>
> -- CondVar.c --

My initial mental translation phase has been halted at this:

[...]


> void cond_destroy(cond_t *cv) {
>
>     if (cv->waitset1 != NULL)
>         releaseWaitset(cv, cv->waitset1);
>
>     if (cv->waitset2 != NULL)
>         releaseWaitset(cv, cv->waitset2);
>
>     DeleteCriticalSection(&(cv->cs));

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Could it be that you might actually "still need it" for the
"exiting" waiters that are going to call your cancel_wait()?

regards,
alexander.

Joseph Seigh

Feb 14, 2003, 1:19:12 PM

No. init() and destroy() aren't thread-safe. It's the same as
any pthread function if you destroy the object while it's still
in use. Result is undefined.

I don't write any totally safe, crowbar-proof anything anymore, not since
atomic smart pointers. Nobody wants them. They're too expensive.

Joe Seigh

Alexander Terekhov

Feb 14, 2003, 1:28:51 PM

Joseph Seigh wrote:
>
> Alexander Terekhov wrote:
> >
> > Joseph Seigh wrote:
> > >
> > > -- CondVar.c --
> >
> > My initial mental translation phase has been halted at this:
> >
> > [...]
> > > void cond_destroy(cond_t *cv) {
> > >
> > >     if (cv->waitset1 != NULL)
> > >         releaseWaitset(cv, cv->waitset1);
> > >
> > >     if (cv->waitset2 != NULL)
> > >         releaseWaitset(cv, cv->waitset2);
> > >
> > >     DeleteCriticalSection(&(cv->cs));
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > Could it be that you might actually "still need it" for the
> > "exiting" waiters that are going to call your cancel_wait()?
> >
>
> No. init() and destroy() aren't threadsafe. It's the same as
> any pthread function if you destroy the object while it's still
> in use. Result is undefined.

Ha! Maybe I'm just missing and/or misunderstanding something, but
I think that "Result is undefined" is NOT true for the >>POSIX<<
pthread_cond_destroy():

http://www.opengroup.org/onlinepubs/007904975/functions/pthread_cond_destroy.html

regards,
alexander.

Joseph Seigh

Feb 14, 2003, 1:52:51 PM

Up in the textual discussion "undefined" is used. The implementation *may* detect
a busy condvar, in which case an appropriate error code will apply.

Joe Seigh

Joseph Seigh

Feb 14, 2003, 2:07:58 PM
fyi, this is just the standard two semaphore implementation of
a condition variable with the modification of using reference
counting rather than a lock to determine when the semaphore
could be reused after signaling the waitset.

The only controversial area would likely be the timeout logic.
A lot of people may have trouble understanding how a wait
could time out and not be a timeout.

Joe Seigh

Alexander Terekhov

Feb 14, 2003, 2:21:30 PM

The normative section says:

"It shall be safe to destroy an initialized condition variable
upon which no threads are currently blocked."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The EBUSY is an optional error that, I think, can and shall
be applied to the next statement:

"Attempting to destroy a condition variable upon which other
threads are currently blocked results in undefined behavior."
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The informative "EXAMPLES" section actually says that:

"A condition variable can be destroyed immediately after all
the threads that are blocked on it are awakened. For example,
consider the following code:

struct list {
    pthread_mutex_t lm;
    ...
}

struct elt {
    key k;
    int busy;
    pthread_cond_t notbusy;
    ...
}

/* Find a list element and reserve it. */
struct elt *
list_find(struct list *lp, key k)
{
    struct elt *ep;

    pthread_mutex_lock(&lp->lm);
    while ((ep = find_elt(l, k) != NULL) && ep->busy)
        pthread_cond_wait(&ep->notbusy, &lp->lm);
    if (ep != NULL)
        ep->busy = 1;
    pthread_mutex_unlock(&lp->lm);
    return(ep);
}

delete_elt(struct list *lp, struct elt *ep)
{
    pthread_mutex_lock(&lp->lm);
    assert(ep->busy);
    ... remove ep from list ...
    ep->busy = 0; /* Paranoid. */
(A) pthread_cond_broadcast(&ep->notbusy);
    pthread_mutex_unlock(&lp->lm);
(B) pthread_cond_destroy(&rp->notbusy);
    free(ep);
}


In this example, the condition variable and its list element
may be freed (line B) immediately after all threads waiting
for it are awakened (line A), since the mutex and the code
ensure that no other thread can touch the element to be
deleted."

Now, I see no reason why this should NOT work for cancellation
and/or timedwait()s presuming that the application DOES ensure
that "all the threads that are blocked on it are awakened" via
pthread_cond_broadcast() [or sufficient number of _signal(s)]
PRIOR to pthread_cond_destroy() invocation like in the example
above.

http://groups.google.com/groups?selm=3B838A05.CD328001%40web.de
(Subject: Re: pthread_mutex_unlock not atomic on Solaris 2.6?)

https://listman.redhat.com/pipermail/phil-list/2003-January/000430.html
(Subject: nptl 0.14)

regards,
alexander.

--
"~futex_condvar() {
    mutex::guard guard( m_mutex );
    assert( m_waiters[0] == m_wakeups );
    while ( m_waiters[0] ) {
        int ftx = m_futex = EOC();
        mutex::release_guard release_guard( guard );
        cancel_off_guard no_cancel;
        m_futex.wait( ftx );
    }
}" -- <http://www.terekhov.de/DESIGN-futex-CV.cpp>

Joseph Seigh

Feb 14, 2003, 2:44:39 PM

Alexander Terekhov wrote:
> The normative section says:
>
> "It shall be safe to destroy an initialized condition variable
> upon which no threads are currently blocked."
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> The EBUSY is an optional error that, I think, can and shall
> be applied to the next statement:
>
> "Attempting to destroy a condition variable upon which other
> threads are currently blocked results in undefined behavior."
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> The informative "EXAMPLES" section actually says that:
>
> "A condition variable can be destroyed immediately after all
> the threads that are blocked on it are awakened. For example,
> consider the following code:
>

Ok, that wouldn't be too hard to fix. Just add a waiter refcount
in the condvar_t structure.

Joe Seigh

Alexander Terekhov

Feb 14, 2003, 3:16:27 PM

Joseph Seigh wrote:
[...]

> Ok, that wouldn't be too hard to fix. Just add a waiter refcount

using pthread_refcount_t

> in the condvar_t structure.

Maybe. ;-)

regards,
alexander.

P.S. Care to post the updated version? BTW, an illustration
of deleteWaitset()'s usefulness wouldn't hurt anyone either.

Joseph Seigh

Feb 14, 2003, 3:34:35 PM

Not ASAP. Maybe later on. It's not part of condvar behavior
I've exploited myself and I'm a bit mystified as to why somebody
was so insistent on having that as part of Posix. It's not
consistent with other Posix init/destroy behavior. And it's not
obvious what usefulness that may have.


deleteWaitset() isn't currently used. It was when I was measuring
the performance difference between reusing and recreating the
waitsets. It could be commented out, I suppose.

Joe Seigh

Joseph Seigh

Feb 16, 2003, 2:02:58 PM

Well, maybe not a refcount. Just a lot of "wait-free" interlocked
hackery with waitCount. Just for the heck of it I guess. I could
have blocked cond_destroy until all waiters had restarted.

I'm not going to post the updated code however. I'm not trying to
duplicate POSIX, not even the condvar part of it. It's just a plain
condvar with signal and broadcast. This is just proof of concept,
not released code. Remember I had originally said I would post
pseudo code but I thought "working" code would be a little clearer.

Joe Seigh

Alexander Terekhov

Feb 17, 2003, 3:59:33 AM

Joseph Seigh wrote:
[...]

> I'm not going to post the updated code however. I'm not trying to
> duplicate POSIX, not even the condvar part of it. It just plain
> condvar with signal and broadcast.

Which seems to be in violation of another POSIX requirement as well.

"If more than one thread is blocked on a condition variable, the
scheduling policy shall determine the order in which threads are
unblocked."

Or am I just missing and/or misunderstanding something?

regards,
alexander.

Joseph Seigh

Feb 17, 2003, 5:45:19 AM

Well, maybe I'm missing something. I thought the term "condition variable"
didn't necessarily apply to POSIX condition variables only. What do you
call them then? And what are those condvar like thingies in Java? They're
not POSIX condition variables either.

Joe Seigh

Joseph Seigh

Feb 17, 2003, 8:28:44 AM
You might notice that there is a singleton pattern here that
avoids most of the overhead of DCL or single locked checking.

One of the ways to avoid the overhead of correct memory visibility
of singleton initialization was for each thread to keep a local
copy of the singleton pointer. The problem is that thread-local
storage probably isn't much cheaper than the overhead you are trying
to avoid. The solution is to store the singleton address in each
class object, either at object initialization/ctor time or through
lazy initialization as used in this version. The object's local
visibility guarantees correct singleton visibility.

This has probably been done before. I don't know what it's called though.

Joe Seigh

Alexander Terekhov

Feb 17, 2003, 9:14:13 AM

Joseph Seigh wrote:
>
> You might notice that there is a singleton pattern here that
> avoids most of the overhead of DCL or single locked checking.

Do you mean the double-InterlockedCompareExchangePointer()-
with-if(...!= NULL){...free()...} pattern? If not, please
elaborate.

regards,
alexander.

Joseph Seigh

Feb 17, 2003, 9:39:18 AM

No, that was just something to deal with Windows critical sections
not having a static initializer. The first InterlockedCompareExchangePointer
is actually just an atomic read.

But you could just execute that code every single time you wanted to access the
singleton (i.e. static global data). In a Posix pthread environment you'd have
to use a lock to fetch the pointer to the global singleton. Every time.

Joe Seigh

Alexander Terekhov

Feb 17, 2003, 10:28:25 AM

Joseph Seigh wrote:
>
> Alexander Terekhov wrote:
> >
> > Joseph Seigh wrote:
> > >
> > > You might notice that there is a singleton pattern here that
> > > avoids most of the overhead of DCL or single locked checking.
> >
> > Do you mean the double-InterlockedCompareExchangePointer()-
> > with-if(...!= NULL){...free()...} pattern? If not, please
> > elaborate.
>
> No, that was just something to deal with windows critical sections
> not having a static initializer. The first InterlockedCompareExchangePointer
> is actually just an atomic read.

but with a "full-stop" memory fence [under Win64/Itanic], I hope.

>
> But you could just execute that code every single time you wanted to access the
> singleton (i.e. static global data).

The pointer is "static", that's right. The "problem" is that you
initilize/create the dynamic stuff more than once and simply
discard the "too late" ["unneeded"] objects/instances. I'm still
in complete darkness with respect to why did you call THAT a
"singleton pattern"... Probably I'm again just missing and/or
misunderstanding something. ;-)

regards,
alexander.

Joseph Seigh

Feb 17, 2003, 11:15:00 AM

Well, I could add an initialize-static-library function call which would have
to be called before any threads were started. And of course any library
that called it would in turn also need an initialize-static-library
call. Or make it a DLL, which has an on-load function.

Joe Seigh

James S. Rogers

Feb 17, 2003, 12:06:43 PM
"Joseph Seigh" <jsei...@xemaps.com> wrote in message
news:3E50BD51...@xemaps.com...

What are the "condvar like thingies in Java"?
Java has its strange form of monitors, which can be combined with
a wait() method. This combination does not look like a condvar
to me.

I find the Java handling of wait()/notify() calls to be both
overly complex and non-deterministic. The result is too much
wasted overhead and a very real possibility of unreliable
behavior. For instance, given the Java model, there is no way
to prove that a thread instance will ever complete its wait() call.
Most of the time the model works. The problem is that it
provides no assurance that it will work according to any
logic at all. It relies on probability, not provability.

POSIX condition variables are simpler and much more
reliable than the Java model.

Jim Rogers


David Holmes

Feb 18, 2003, 1:20:10 AM
"James S. Rogers" <jimmaure...@worldnet.att.net> wrote in message
news:DE84a.57173$zF6.3...@bgtnsc04-news.ops.worldnet.att.net...

> What are the "condvar like thingies in Java"?
> Java has its strange form of monitors, which can be combined with
> a wait() method. This combination does not look like a condvar
> to me.

Monitors have "condition queues". Java's notion of a monitor has a single
implicit condition queue accessed via wait/notify/notifyAll. The combination
of monitor+condition queue is basically the same as mutex+condvar.

> I find the Java handling of wait()/notify() calls to be both
> overly complex and non-deterministic. The result is too much
> wasted overhead and a very real possibility of unreliable
> behavior. For instance, given the Java model, there is no way
> to prove that a thread instance will ever complete its wait() call.

This is no different than in POSIX. As Alexander previously posted, the
order in which threads blocked in pthread_cond_wait are signalled is determined
by the scheduling policy. Given a scheduling policy of SCHED_OTHER the
behaviour could be anything at all.

David Holmes


Jim Rogers

Feb 18, 2003, 10:17:29 PM
"David Holmes" <dholmes@--remove-this-part-dltech.com.au> wrote in message news:<3e51cff3$1...@news.internex.net.au>...

> "James S. Rogers" <jimmaure...@worldnet.att.net> wrote in message
> news:DE84a.57173$zF6.3...@bgtnsc04-news.ops.worldnet.att.net...
> > What are the "condvar like thingies in Java"?
> > Java has its strange form of monitors, which can be combined with
> > a wait() method. This combination does not look like a condvar
> > to me.
>
> Monitors have "condition queues". Java's notion of a monitor has a single
> implicit condition queue accessed via wait/notify/notifyAll. The combination
> of monitor+condition queue is basically the same as mutex+condvar.

The problem is the limitation to a single "condition queue" which
is really not a queue at all.

What happens if you have two methods that synchronize on the same
object? Say your object is a circular buffer. The "read" method
should "wait" while the buffer is empty. The "write" method should
"wait" while the buffer is full. Threads waiting from both methods
are placed in the same "queue" that is not a queue. Calling notifyAll()
schedules all those threads so that they can evaluate the state of
the buffer when they finally run and acquire the synchronization
lock. In many circumstances, most of the "waiting" threads will
once again execute wait(). This is highly inefficient because it
pays for all the context switching needed to run threads that cannot
proceed, just so they can discover that they cannot proceed.

Much better is the Ada model of monitors as implemented in the
Ada protected entry. An entry has a lock and a condition. An
entry has its own queue associated with that condition. Execution
order from that queue is either FIFO or PRIORITY_ORDER, you choose.
All entry conditions for that protected object are evaluated
implicitly at the completion of every protected entry or protected
procedure for that protected object. If an entry condition is TRUE
and a call is in that entry queue, that call is executed by the
current task as a proxy for the calling task. The conditions are
again evaluated. This continues until no more calls are left in
the entry queue(s) or all the boundary conditions evaluate to
FALSE. Only at this point can more calls be added to entry
queues.

Note that the Ada model avoids any kind of broadcast message
and the associated rush to thread scheduling. It also avoids
repeated locking and unlocking of the mutex. Finally, it avoids
the overhead of running a thread for the sole purpose of queuing
it up again.



>
> > I find the Java handling of wait()/notify() calls to be both
> > overly complex and non-deterministic. The result is too much
> > wasted overhead and a very real possibility of unreliable
> > behavior. For instance, given the Java model, there is no way
> > to prove that a thread instance will ever complete its wait() call.
>
> This is no different than in POSIX. As Alexander previously posted, the
> order in which threads are signalled when in pthread_condwait, is determined
> by the scheduling policy. Given a scheduling policy of SCHED_OTHER the
> behaviour could be anything at all.

There is a slight difference. The POSIX model does not limit you
to SCHED_OTHER as a scheduling policy. It also allows you to have
multiple condition queues per monitored object. Both differences
may lead to improved efficiency and reliability in the POSIX model
compared to the Java model.

Jim Rogers

David Holmes

Feb 19, 2003, 2:00:07 AM
"Jim Rogers" <jimmaure...@worldnet.att.net> wrote in message
news:82347202.03021...@posting.google.com...

> The problem is the limitation to a single "condition queue" which
> is really not a queue at all.

Agreed this is a limitation. Fortunately it is one that is being removed
with the work in JSR-166. But the "Java monitor thingies" are still like
mutex + 1 condvar. And you can work around the limitation with a little bit
of effort.

> Much better is the Ada model of monitors as implemented in the

Different horse for a different course. The RT aspects of Ada change the way
a lot of things must be done. But note that Ada avoids the need for a
"broadcast" because it checks all conditions whenever they may have changed.
That can also be very inefficient.

> There is a slight difference. The POSIX model does not limit you
> to SCHED_OTHER as a scheduling policy.

No it doesn't, but plain old Java is equivalent to having SCHED_OTHER in
force. Once you move into RT scheduling you need to compare against RT Java.

David Holmes


David Holmes

Feb 19, 2003, 2:02:38 AM
"David Holmes" <dholmes@--remove-this-part-dltech.com.au> wrote in message
news:3e532ace$1...@news.internex.net.au...

> No it doesn't, but plain old Java is equivalent to having SCHED_OTHER in
> force. Once you move into RT scheduling you need to compare against RT
Java.

Just to clarify this is in response to the statement that in java you can't
prove that a call to wait() will ever return.

David Holmes


Joseph Seigh

Feb 19, 2003, 8:22:53 AM

Jim Rogers wrote:
>
> "David Holmes" <dholmes@--remove-this-part-dltech.com.au> wrote in message news:<3e51cff3$1...@news.internex.net.au>...
> >

> > Monitors have "condition queues". Java's notion of a monitor has a single
> > implicit condition queue accessed via wait/notify/notifyAll. The combination
> > of monitor+condition queue is basically the same as mutex+condvar.
>
> The problem is the limitation to a single "condition queue" which
> is really not a queue at all.

There is a single condvar built in, but it's fairly simple to write your
own condvar class to allow as many condvars as you like. About 20 to 30
lines of Java, as far as I recall. It turns out that most of the time
you don't need multiple condvars (unless you are a really lousy programmer),
and so nobody has felt a need to use that class or something similar to it.

>
> What happens if you have two methods that synchronize on the same
> object? Say your object is a circular buffer. The "read" method
> should "wait" while the buffer is empty. The "write" method should
> "wait" while the buffer is full. Threads waiting from both methods
> are placed in the same "queue" that is not a queue. Calling notifyAll()
> schedules all those threads so that they can evaluate the state of
> the buffer when they finally run and acquire the synchronization
> lock. In many circumstances, most of the "waiting" threads will
> once again execute wait(). This is highly inefficient because it
> requires all the context switching to let the threads that will not
> procede execute so that they will not procede.

You are assuming worst case. It's unlikely that none of the threads
will change the condition such that subsequent threads can execute
as well.

Joe Seigh

James S. Rogers

Feb 19, 2003, 8:51:44 AM
"Joseph Seigh" <jsei...@xemaps.com> wrote in message
news:3E538544...@xemaps.com...

This certainly would be a worst case. It can happen even when
some of the threads do change the condition. The problem is
that you are creating a race condition. A given thread must
compete with all other threads for an execution slice. By the
time the thread executes and acquires the synchronization lock,
many other threads may have changed the condition. This is
why the Java approach relies on probability rather than
provability. Most of the time the thread will eventually
execute when its condition is TRUE. Unfortunately MOST
of the time is not ALL of the time. This approach leaves a
small probability that any given thread will never complete
its wait() loop.

Jim Rogers


Alexander Terekhov

Feb 19, 2003, 10:06:13 AM

"James S. Rogers" wrote:
[...]

> This certainly would be a worst case. It can happen even when
> some of the threads do change the condition. The problem is
> that you are creating a race condition. A given thread must
> compete with all other threads for an execution slice. By the
> time the thread executes and acquires the synchronization lock
> many other threads may have changed the condition. This is
> why the Java approach relies on probability rather than
> proveability.

That's why in both POSIX and Java waits shall be done in "while" (or
"do") loops. Unless you happen to have something along the lines of
the Ada "proxy model", you don't really want to hand off the mutex
ownership [AFAICS, the Ada "proxy model" allows execution of monitor
entries on behalf of other {waiting} thread(s) NOT doing the actual/
full "context switch"]. As for POSIX: in spite of "races", it is all
rather provable... under predictable realtime scheduling [on a
"uniprocessor", I mean].

Oh, BTW, "enjoy" this:

http://developer.java.sun.com/developer/Books/performance2/chap4.pdf

;-)

regards,
alexander.

Joseph Seigh

Feb 19, 2003, 1:03:10 PM

"James S. Rogers" wrote:
>
> "Joseph Seigh" <jsei...@xemaps.com> wrote in message
> news:3E538544...@xemaps.com...

> > You are assuming worst case. It's unlikely that none of the threads


> > will change the condition such that subsequent threads can execute
> > as well.
>
> This certainly would be a worst case. It can happen even when
> some of the threads do change the condition. The problem is
> that you are creating a race condition. A given thread must
> compete with all other threads for an execution slice. By the
> time the thread executes and acquires the synchronization lock
> many other threads may have changed the condition. This is
> why the Java approach relies on probability rather than
> proveability. Most of the time the thread will eventually
> execute when its condition is TRUE. Unfortunately MOST
> of the time is not ALL of the time. This approach leaves a
> small probability that any given thread will never complete
> its wait() loop.
>

Java depends on the cooperative processing model. If you have
a competitive processing model then you need more CPU than
you have work, or else use a real-time system or scheduling.
But this really has nothing to do with the issue as to whether
the one condvar limit of Java is really an impediment. I've
never run into it. In fact in order to demonstrate the use of
the condvar class I wrote, I had to come up with a rather
contrived example.

Joe Seigh

James S. Rogers

Feb 19, 2003, 9:27:42 PM
"Alexander Terekhov" <tere...@web.de> wrote in message
news:3E539D65...@web.de...

>
> "James S. Rogers" wrote:
> [...]
> > This certainly would be a worst case. It can happen even when
> > some of the threads do change the condition. The problem is
> > that you are creating a race condition. A given thread must
> > compete with all other threads for an execution slice. By the
> > time the thread executes and acquires the synchronization lock
> > many other threads may have changed the condition. This is
> > why the Java approach relies on probability rather than
> > proveability.
>
> That's why in both POSIX and Java waits shall be done in "while" (or
> "do") loops. Unless you happen to have something along the lines of
> ADA-"proxy-model", you don't really want to hand-off the mutex
> ownership [AFAICS, the ADA-"proxy-model" allows execution of monitor
> entries on behalf of other {waiting} thread(s) NOT doing the actual/
> full "context-switch"]. As for POSIX: inspite of "races", it is all
> rather provable... under predictable realtime scheduling [on a
> "uniprocessor", I mean].

That is why I like the POSIX model more than the Java model.
Predictable realtime scheduling is not merely an exercise in probability.

Jim Rogers


James S. Rogers

Feb 19, 2003, 9:34:04 PM
"Joseph Seigh" <jsei...@xemaps.com> wrote in message
news:3E53C6F8...@xemaps.com...

I agree. Java's model works well most of the time.
I also assert (and I think your statement above agrees)
that the Java threading model is insufficient for realtime
systems.

I am aware that there has been work on a more robust
realtime model for Java. I do not know how this model
mixes with the original threading model. My guess is that
there must be some incompatibilities.

Jim Rogers


David Holmes

Feb 20, 2003, 12:16:51 AM
"James S. Rogers" <jimmaure...@worldnet.att.net> wrote in message
news:w8X4a.61334$zF6.4...@bgtnsc04-news.ops.worldnet.att.net...

> I also assert (and I think your statement above agrees)
> that the Java threading model is insufficient for realtime
> systems.

I don't think anyone suggested otherwise.

> I am aware that there has been work on a more robust
> realtime model for Java. I do not know how this model
> mixes with the original threading model. My guess is that
> there must be some incompatibilities.

The details can be found at www.rtj.org

Basically the Realtime Specification for Java adds a new class of
RealtimeThread which (in the base implementation) are scheduled by a
priority preemptive scheduler. All queues are priority based and priority
inversion avoidance is achieved by priority ceiling emulation and/or
priority inheritance. Normal threads have priorities below that of
RealtimeThreads - unless they get boosted due to priority inheritance.

There are some incompatibilities and oddities. Methods like
Thread.getPriority and Thread.setPriority don't get used on realtime
threads, as there is a completely different mechanism for this. And there
are various additional things for memory management, async transfer of
control, etc.

But as far as scheduling is concerned it is pretty much compatible with
SCHED_FIFO combined with PRIO_PROTECT and/or PRIO_INHERIT.

David Holmes

