
A theoretical question on synchronization


Timur Aydin

Apr 20, 2001, 11:28:55 PM
Hello everybody,

After quite some time doing multithreaded programming under win32, I have
now started to do development under Linux using LinuxThreads. However, I am
noticing that the synchronization objects are quite limited compared to the
ones under win32. As far as I have learned, it is not possible to do a timed
wait on a mutex or a semaphore. Only condition objects support timed waits.
Also, while under win32 the synchronization objects can have both
interprocess and intraprocess scope, under linux the only object that can do
this is the semaphore. So you can't have a mutex or a condition object that
can be accessed by separate processes. And, lastly, it is not possible to
wait on multiple objects simultaneously. Please correct me if I am wrong in
some of these statements. My objective is not to start a religious war on
which is better, I just want to use the mechanisms under Linux in the most
effective manner.

To work around the lack of interprocess synchronization objects, I wanted to
ask whether this statement would be correct:

All synchronization mechanisms (mutex, condition and semaphore) can be
obtained by just using a semaphore.

A mutex is a semaphore with two states (count = 0, mutex locked, count = 1,
mutex unlocked). For a mutex, it has to be guaranteed that the count cannot
exceed 1, because only one thread can own a mutex.

A condition is also a semaphore with two states (count = 0, condition not
set. count = something representing infinite, condition set). The set
condition has to be something that represents infinite, because a thread
should never block when the condition is set.

A semaphore, well, it already is a semaphore...

And because the semaphore can be created as both local to the process or
global, we should be able to synthesize mutexes and conditions that are both
local to the process and global.
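
As a rough sketch of what I mean for the mutex case (this assumes a
semaphore that can be created process shared, e.g. sem_init() with a
nonzero pshared argument and the sem_t placed in shared memory; I have
not verified that this works under LinuxThreads):

#include <semaphore.h>

/* Sketch only: a "mutex" built from a binary semaphore. */
typedef struct {
    sem_t sem;                  /* count 1 = unlocked, 0 = locked */
} sem_mutex;

int sem_mutex_init(sem_mutex *m, int pshared)
{
    /* pshared != 0 is assumed to be supported; the sem_t must then
       live in memory shared between the processes (mmap, SysV shm). */
    return sem_init(&m->sem, pshared, 1);   /* start unlocked */
}

int sem_mutex_lock(sem_mutex *m)
{
    return sem_wait(&m->sem);               /* 1 -> 0 */
}

int sem_mutex_unlock(sem_mutex *m)
{
    /* Nothing enforces the "count cannot exceed 1" rule here; a stray
       extra unlock would push the count to 2, unlike a real mutex. */
    return sem_post(&m->sem);               /* 0 -> 1 */
}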

Is this correct?

Timur.


David Schwartz

Apr 21, 2001, 12:56:09 AM

Timur Aydin wrote:

> A condition is also a semaphore with two states (count = 0, condition not
> set. count = something representing infinite, condition set). The set
> condition has to be something that represents infinite, because a thread
> should never block when the condition is set.

Conditions don't have states. Condition variables are stateless.

DS

Patrick TJ McPhee

Apr 21, 2001, 3:54:48 PM
In article <O47E6.114$3x.9...@typhoon.snet.net>,
Timur Aydin <tay...@snet.net> wrote:

% After quite some time doing multithreaded programming under win32, I have
% now started to do development under Linux using LinuxThreads. However, I am
% noticing that the synchronization objects are quite limited compared to the
% ones under win32. As far as I have learned, it is not possible to do a timed

The posix approach has typically been to try to provide APIs which perform
the basic necessary tasks, while allowing more specialised requirements
to be easily constructed. For instance, a timeout on a mutex is an odd
concept (do you need to lock it or don't you?), but it's simple to lash
together a mutex with a timeout from first principles:

typedef struct {
    pthread_mutex_t mux;
    pthread_cond_t cond;
    int locked;
} mymux;

int mymux_lock(mymux * mm)
{
    pthread_mutex_lock(&mm->mux);
    while (mm->locked == 1)
        pthread_cond_wait(&mm->cond, &mm->mux);
    mm->locked = 1;
    pthread_mutex_unlock(&mm->mux);
    return 0;
}

int mymux_timedlock(mymux * mm, const struct timespec * tsp)
{
    int rc = 0;
    pthread_mutex_lock(&mm->mux);
    while (mm->locked == 1 && !rc)
        rc = pthread_cond_timedwait(&mm->cond, &mm->mux, tsp);
    if (!rc)
        mm->locked = 1;
    pthread_mutex_unlock(&mm->mux);
    return rc;
}

int mymux_unlock(mymux * mm)
{
    pthread_mutex_lock(&mm->mux);
    mm->locked = 0;
    pthread_cond_signal(&mm->cond);   /* wake one waiter, if any */
    pthread_mutex_unlock(&mm->mux);
    return 0;
}

For a semaphore with a timed wait (or, frankly, a semaphore without one),
you add a counter and some other operations. For locks which know about
ownership, you add a pthread_t and some calls to pthread_equal. For
recursive locks, you add a recursion depth counter. For reader/writer
locks, you change the owner into a queue, change the locked boolean to
represent the type of lock, and add logic to avoid deadlocks due to
more than one reader trying to promote the lock type.
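
As a rough sketch of the ownership case (the names and the EPERM return
are just illustrative):

#include <errno.h>
#include <pthread.h>

typedef struct {
    pthread_mutex_t mux;
    pthread_cond_t cond;
    int locked;
    pthread_t owner;            /* meaningful only while locked */
} myownedmux;

int myownedmux_lock(myownedmux * mm)
{
    pthread_mutex_lock(&mm->mux);
    while (mm->locked)
        pthread_cond_wait(&mm->cond, &mm->mux);
    mm->locked = 1;
    mm->owner = pthread_self();
    pthread_mutex_unlock(&mm->mux);
    return 0;
}

int myownedmux_unlock(myownedmux * mm)
{
    int rc = 0;
    pthread_mutex_lock(&mm->mux);
    if (!mm->locked || !pthread_equal(mm->owner, pthread_self()))
        rc = EPERM;             /* refuse to unlock a lock we don't own */
    else {
        mm->locked = 0;
        pthread_cond_signal(&mm->cond);
    }
    pthread_mutex_unlock(&mm->mux);
    return rc;
}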

How good a job POSIX has done of getting the essentials and avoiding
dumping a load of work on developers is open to debate. In general,
I find it easier to build the things I want with POSIX than it is to
figure out the sometimes very complicated calls in the win32 API.

% Also, while under win32 the synchronization objects can have both
% interprocess and intraprocess scope, under linux the only object that can do
% this is the semaphore.

This might or might not be true. POSIX provides the _setpshared calls for
mutex and cv attributes, but it makes them optional. The single unix
specification might have made them required, I'm not sure. Most current
commercial Unix implementations certainly allow process-shared mutexes
and cvs. Linux didn't a couple of years ago, but I haven't checked a
recent version. Look for the calls pthread_condattr_setpshared(),
pthread_mutexattr_setpshared(), and pthread_rwlockattr_setpshared()
(and see if they work...).

% can be accessed by separate processes. And, lastly, it is not possible to
% wait on multiple objects simultaneously.

This is a contentious point. You can find arguments against the idea of
multiple waits if you look through an archive of this group. The only
things similar to waitformultipleobjects are the poll() and select()
system calls. These allow you to wait for anything which uses a file
descriptor, but not for, eg, multiple signals on separate cvs.

How to get around this depends a lot on what you're trying to do. If you
want to signal one of many events, and the windows approach is to create
a bunch of event semaphores then wait on all of them simultaneously,
you can get the same effect by using a single cv and having the
predicate be an integer which represents all the states:

pthread_mutex_lock(&ev->mux);
while (ev->state == 0)
    pthread_cond_wait(&ev->cv, &ev->mux);

switch (ev->state) {
case BLUE: do_blue(); break;
case RED: do_red(); break;
case GREEN: do_green(); break;
default: do_secondary_colour(ev->state); break;
}
pthread_mutex_unlock(&ev->mux);


[...]

% All synchronization mechanisms (mutex, condition and semaphore) can be
% obtained by just using a semaphore.

I believe a CV requires at least two semaphores, in addition to a third
to represent the mutex, and something else to represent the predicate.

--

Patrick TJ McPhee
East York Canada
pt...@interlog.com

Pete Becker

Apr 21, 2001, 4:51:48 PM
Patrick TJ McPhee wrote:
>
> a timeout on a mutex is an odd
> concept (do you need to lock it or don't you?),

It's not all that odd. After all, pthreads supplies
pthread_mutex_trylock.

--
Pete Becker
Dinkumware, Ltd. (http://www.dinkumware.com)

Kaz Kylheku

Apr 21, 2001, 5:36:20 PM
On Sat, 21 Apr 2001 16:51:48 -0400, Pete Becker <peteb...@acm.org> wrote:
>Patrick TJ McPhee wrote:
>>
>> a timeout on a mutex is an odd
>> concept (do you need to lock it or don't you?),
>
>It's not all that odd. After all, pthreads supplies
>pthread_mutex_trylock.

This is vastly different from a timed-lock operation. It makes certain
algorithms possible, such as finding an unlocked object in a set. Timed out
locking is not useful in the same way.

Kaz Kylheku

Apr 21, 2001, 6:03:14 PM
On Fri, 20 Apr 2001 23:28:55 -0400, Timur Aydin <tay...@snet.net> wrote:
>Hello everybody,
>
>After quite some time doing multithreaded programming under win32, I have
>now started to do development under Linux using LinuxThreads. However, I am
>noticing that the synchronization objects are quite limited compared to the
>ones under win32.

However, the nature and variety of the objects provided by Win32 leaves much to
be desired. Events are simply not very suited for solving a wide variety of
synchronization problems. It's a lot easier to solve synchronization problems
with condition variables because they have no programmer visible state. The
logic is based entirely on the state of your own data. Objects like events or
semaphores carry their own state; to solve a synchronization problem, the
programmer must bring about some meaningful association between the semaphore's
state and the state of the program. In my programming experience, such
associations are fragile and difficult to maintain.

>As far as I have learned, it is not possible to do a timed
>wait on a mutex or a semaphore.

Timed waits on mutexes are braindamaged for most kinds of work. They
are useful to people working in the real-time domain, so the 200X draft
of POSIX has added support for timed mutex waits---it was due to pressure
from some real time groups, apparently. In real time applications, the
duration of a critical region of code may be determined precisely,
so that a timed out mutex wait can serve as a watchdog.
You can find the implementation of pthread_mutex_timedlock in glibc 2.2.
For reasons of efficiency, not every mutex type supports this operation, just
the default one. Glibc 2.2 also adds barriers, and the POSIX timer functions:
timer_create and friends.
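
For example, a timed lock used as a watchdog could look roughly like
this (a sketch; it assumes a system that actually provides
pthread_mutex_timedlock and clock_gettime):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

/* Try to take the lock, but give up (and log) after about 2 seconds. */
int lock_with_watchdog(pthread_mutex_t *m)
{
    struct timespec deadline;
    clock_gettime(CLOCK_REALTIME, &deadline);  /* timedlock takes an absolute time */
    deadline.tv_sec += 2;

    int rc = pthread_mutex_timedlock(m, &deadline);
    if (rc == ETIMEDOUT)
        fprintf(stderr, "watchdog: mutex not acquired within 2s\n");
    return rc;   /* 0 on success, ETIMEDOUT or another error otherwise */
}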

Also realize that the Linux pthread_mutex_t is a lot closer to the Windows
CRITICAL_SECTION than to the Windows mutex. Note that there is no timed
lock function for critical sections!

>Also, while under win32 the synchronization objects can have both
>interprocess and intraprocess scope, under linux the only object that can do
>this is the semaphore.

The traditional UNIX semaphore, that is.

>So you can't have a mutex or a condition object that
>can be accessed by separate processes.

There is a provision in the POSIX interface for process shared mutexes and
conditions, but it's not implemented in Linux.

>And, lastly, it is not possible to
>wait on multiple objects simultaneously.

Again, this is a braindamaged concept to begin with, and severely limited
in Windows (only 64 handles can be waited on). Not to mention that
the WaitForMultipleObjects function is broken on Windows CE, so it
cannot be considered portable across all Win32 platforms.
Lastly, it has fairness issues: under the ``wait for any'' semantics, the
interface can report the identity of at most one ready object, regardless of
how many are actually ready. This can lead to one event being serviced
with priority over another one, depending on its position in the array.

With condition variables, your program is waiting for a *predicate* to become
true. The condition variable is just a place to put the thread to sleep.
If you want to wait for more than one predicate, just form their logical
conjunction or disjunction as needed, and ensure that signaling of the
condition variable is done in all the right circumstances, e.g.

/* wait for any of three predicates */

while (!predicate1() && !predicate2() && !predicate3())
{
    pthread_cond_wait(&cond, &mutex);
}

This is equivalent to waiting on three events. The thread is parked in some
wait function, and can wake up for any of three distinct reasons.
A better structure might be this:

int p1 = 0, p2 = 0, p3 = 0;

/* mutex assumed locked */

for (;;) {
    p1 = predicate1();
    p2 = predicate2();
    p3 = predicate3();

    if (p1 || p2 || p3)
        break;

    pthread_cond_wait(&cond, &mutex);
}

if (p1) {
    /* action related to predicate1 */
}

if (p2) {
    /* action related to predicate2 */
}

if (p3) {
    /* action related to predicate3 */
}

Multiple object waiting is primarily useful for I/O multiplexing; for this you
have the poll() or select() function. Both of these functions provide feedback
about which file descriptors are ready, thereby avoiding problems of fairness,
and can handle much more than 64 descriptors.
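
A rough poll() sketch of that kind of multiplexing (the descriptor
handling here is purely illustrative):

#include <poll.h>
#include <stdio.h>

/* Block until any of the given descriptors is readable; poll() reports
   readiness per descriptor, so no single one is favoured. */
void wait_for_readable(const int *fds, int nfds)
{
    struct pollfd pfds[nfds];           /* C99 VLA, fine for a sketch */
    for (int i = 0; i < nfds; i++) {
        pfds[i].fd = fds[i];
        pfds[i].events = POLLIN;
    }

    if (poll(pfds, nfds, -1) > 0) {     /* -1 = no timeout */
        for (int i = 0; i < nfds; i++)
            if (pfds[i].revents & POLLIN)
                printf("fd %d is readable\n", pfds[i].fd);
    }
}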

Timur Aydin

Apr 22, 2001, 1:38:33 AM
Kaz Kylheku wrote:

>
> However, the nature and variety of the objects provided by Win32 leaves
> much to be desired. Events are simply not very suited for solving a wide
> variety of synchronization problems. It's a lot easier to solve
> synchronization problems with condition variables because they have no
> programmer visible state. The logic is based entirely on the state of
> your own data. Objects like events or semaphores carry their own state;
> to solve a synchronization problem, the programmer must bring about some
> meaningful association between the semaphore's state and the state of
> the program. In my programming experience, such associations are fragile
> and difficult to maintain.

One example where using a semaphore provides an elegant solution is a
message queue. Whenever new messages are inserted into the queue, the
semaphore count is increased by the total number of messages. A thread that
processes these messages can just wait on the semaphore and will wake up
precisely as many times as there are messages in the queue.
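
Roughly like this (a sketch; the list handling is only illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stddef.h>

typedef struct msg { struct msg *next; /* payload ... */ } msg;

typedef struct {
    pthread_mutex_t lock;       /* protects head/tail */
    msg *head, *tail;
    sem_t count;                /* one unit per queued message */
} msg_queue;

void mq_put(msg_queue *q, msg *m)
{
    m->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = m; else q->head = m;
    q->tail = m;
    pthread_mutex_unlock(&q->lock);
    sem_post(&q->count);        /* exactly one wakeup per message */
}

msg *mq_get(msg_queue *q)
{
    sem_wait(&q->count);        /* blocks until a message exists */
    pthread_mutex_lock(&q->lock);
    msg *m = q->head;
    q->head = m->next;
    if (!q->head) q->tail = NULL;
    pthread_mutex_unlock(&q->lock);
    return m;
}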

This problem could also have been solved with a win32 style event object,
but it would not be as simple and intuitive as above, and it would also
lead to situations where the thread wakes up unnecessarily and finds
nothing in the message queue.

This same problem would be very difficult to solve with a POSIX condition
object. Because it is stateless, a thread has to be waiting at the instant
that the condition is signaled, otherwise the thread would not notice. Say
the thread is busy processing earlier messages and a new message arrives.
The condition will be signaled, but the thread is not waiting on it at that
instant, so it will miss the wakeup.


> Timed waits on mutexes are braindamaged for most kinds of work. They
> are useful to people working in the real-time domain, so the 200X draft
> of POSIX has added support for timed mutex waits

The other useful thing about timed waits is that if you ever fail to lock a
mutex within a predefined time, you can log it and therefore know that it
has happened. For instance, we are working on device drivers and APIs for
multichannel computer telephony hardware. If a thread that is handling a
channel fails to lock a mutex and there is no timeout, that channel will
just freeze while the other channels continue to work. However, if a timed
lock is used, the thread will notice the failure and can write a fatal
error entry to the log file.

Timur.

Pete Becker

Apr 22, 2001, 10:40:42 AM

I was replying to "do you need to lock it or don't you?"

Patrick TJ McPhee

Apr 22, 2001, 12:09:34 PM
In article <g4uE6.665$3x.1...@typhoon.snet.net>,
Timur Aydin <tay...@snet.net> wrote:

% One example where using a semaphore provides an elegant solution is a
% message queue. Whenever new messages are inserted into the queue, the
% semaphore count is increased by the total number of messages. A thread that
% processes these messages can just wait on the semaphore and will wake up
% precisely as many times as there are messages in the queue.

[...]

% This same problem would be very difficult to solve with a POSIX condition
% object. Because it is stateless, a thread has to be waiting at the instant
% that the condition is signaled, otherwise the thread would not notice. Say

This is simply a misunderstanding of how condition variables work. The
condition variable signals a change in a predicate, but doesn't tell you
what the predicate is -- that's left to the programmer. For instance,
I typically implement queues as linked lists without any counters at all.
The general processing loop is

lock();
while (nothing_on_queue())
    wait();
get_something_from_queue();
unlock();
process();

If there's something on the queue to start with, you never wait at all.
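
Spelled out as a sketch (a trivial linked list, names are illustrative):

#include <pthread.h>
#include <stddef.h>

typedef struct node { struct node *next; /* payload ... */ } node;

static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qcond = PTHREAD_COND_INITIALIZER;
static node *qhead;

void enqueue(node *n)
{
    pthread_mutex_lock(&qlock);
    n->next = qhead;                 /* LIFO, just to keep it short */
    qhead = n;
    pthread_cond_signal(&qcond);     /* the predicate changed */
    pthread_mutex_unlock(&qlock);
}

node *dequeue(void)
{
    pthread_mutex_lock(&qlock);
    while (qhead == NULL)            /* nothing_on_queue() */
        pthread_cond_wait(&qcond, &qlock);
    node *n = qhead;                 /* get_something_from_queue() */
    qhead = n->next;
    pthread_mutex_unlock(&qlock);
    return n;                        /* process() happens outside the lock */
}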

Kaz Kylheku

Apr 22, 2001, 3:22:53 PM
On Sun, 22 Apr 2001 01:38:33 -0400, Timur Aydin <tay...@snet.net> wrote:
>Kaz Kylheku wrote:
>
>>
>> However, the nature and variety of the objects provided by Win32 leaves
>> much to be desired. Events are simply not very suited for solving a wide
>> variety of synchronization problems. It's a lot easier to solve
>> synchronization problems with condition variables because they have no
>> programmer visible state. The logic is based entirely on the state of
>> your own data. Objects like events or semaphores carry their own state;
>> to solve a synchronization problem, the programmer must bring about some
>> meaningful association between the semaphore's state and the state of
>> the program. In my programming experience, such associations are fragile
>> and difficult to maintain.
>
>One example where using a semaphore provides an elegant solution is a
>message queue. Whenever new messages are inserted into the queue, the
>semaphore count is increased by the total number of messages.

This is far from elegant, because the queue data structure already knows the
number of messages either implicitly or with an extra counter of its own.
Usually, what is most relevant is whether or not the queue is empty.
So the semaphore is coding redundant information. Note that at times,
this information will be out of sync. For example, say you have
this sequence of actions:

lock();
append(item);
unlock();
signal(semaphore);

In between the append and signal operation, the queue has N+1 items,
but the semaphore's value is out of date at N.

A semaphore has an internal lock which protects increments and decrements
of its internal counter. There is no way to extend that lock to cover
additional data such as a queue. With a mutex and condition variable,
the entire queue can be treated as a semaphore based on its *own*
state!

So really, the semaphore provides only a kludgy solution to synchronization
problems that are based on simple counting.

Also note that a semaphore object can be trivially implemented using a mutex,
a condition variable, and a counter variable. Mutexes and conditions cannot at
all be easily implemented using semaphores.
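
For instance, a rough sketch (names are illustrative):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t nonzero;
    unsigned count;
} mysem;

void mysem_post(mysem *s)            /* "release" */
{
    pthread_mutex_lock(&s->lock);
    s->count++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}

void mysem_wait(mysem *s)            /* "acquire" */
{
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}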

>A thread that
>processes these messages can just wait on the semaphore and will wake up
>precisely as many times as there are messages in the queue.

This can be inefficient compared to just waking up, noting that there
are messages to be processed and removing all of them.

>This problem could also have been resolved with a win32 style event object,
>but it would not be as simple and intuitive as above and this would also
>lead to situations where the thread would unnecessarily wake up and find
>nothing in the message queue.

Note that an (auto reset) event is nothing more than a binary semaphore
which counts between the unsignaled and signaled states (effectively
0 and 1).

>This same problem would be very difficult to solve with a POSIX condition
>object.

Hardly at all!

>Because it is stateless, a thread has to be waiting at the instant
>that the condition is signaled, otherwise the thread would not notice.
>
>Say
>the thread is busy processing earlier messages and a new message arrives.
>The condition will be signaled, but the thread is not waiting on it at that
>instant, so it will miss the wakeup.

Do you have even the slightest clue on how condition variables are used
correctly or are you just guessing based on reading about them in a manual
page?

Tom Payne

Apr 26, 2001, 12:22:12 PM
Timur Aydin <tay...@snet.net> wrote:
[...]
: To work around the lack of interprocess synchronization objects, I wanted to
: ask whether this statement would be correct:

: All synchronization mechanisms (mutex, condition and semaphore) can be
: obtained by just using a semaphore.

It goes both ways. With a mutex, a counter and a condition, one can
create a semaphore in a fairly obvious way.

Conversely, a semaphore initialized to 1 can serve as a mutex. A
semaphore initialized to one, a counter and a semaphore initialized to
zero can simulate a condition. IIRC, it works as follows:

class Condition {

    Semaphore lock;
    Semaphore queue;
    int waitorCount;

public:

    Condition() : lock(1), queue(0), waitorCount(0) {}

    void wait( Mutex m ) {
        lock.acquire();
        ++waitorCount;
        m.unlock();
        queue.acquire();
        // implicitly inheriting lock from signaller.
        m.lock();
        --waitorCount;
    }

    void signal() {
        lock.acquire();
        if ( waitorCount ) {
            queue.release();
            // implicitly passing lock to signallee.
        } else {
            lock.release();
        }
    }

};

Tom Payne

Kaz Kylheku

Apr 27, 2001, 12:30:07 AM
On Thu, 26 Apr 2001 16:22:12 +0000 (UTC), Tom Payne <t...@hill.cs.ucr.edu> wrote:
>Conversely, a semaphore initialized to 1 can serve as a mutex. A

However, it is relevant to earlier debate that semaphores are not available
across all Win32 platforms. Implementing condition variables using only
auto-reset events cannot be done in this manner.

>semaphore initialized to one, a counter and a semaphore initialized to
>zero can simulate a condition.
>

>class Condition {
>
> Semaphore lock;
> Semaphore queue;
> int waitorCount;
>
>public:
>
> Condition() : lock(1), queue(0), waitorCount(0) {}
>
> void wait( Mutex m ) {
> lock.acquire();
> ++waitorCount;
> m.unlock();
> queue.acquire();
> // implicitly inheriting lock from signaller.
> m.lock();
> --waitorCount;
> }

Of course you want lock.release() at the end here? But then you have mutual
exclusion in wait(), so waitorCount can never rise above 1?

>
> void signal() {
> lock.acquire()
> if ( waitorCount ) {
> queue.release();
> // implicitly passing lock to signallee.
> } else {
> lock.release();
> }

Don't you want lock.release() to be unconditional? Hmm.

I think what you are trying to do is something like:

void wait()
{
    // ``Enqueue'' for the condition while we still have mutex. This
    // ensures the atomic release-and-wait semantics, even though
    // we don't do the actual semaphore wait until later.

    lock.acquire();
    ++waitorCount;
    lock.release();

    // Release mutex, wait on semaphore and re-acquire.

    m.unlock();

    queue.wait();   // overload for .acquire() for readability

    m.lock();

    // No need to decrement waitorCount; if we got this far then our
    // contribution to the count has been noted by the
    // signaler and taken care of.
}

void broadcast()
{
    // Atomically ``dequeue'' all current waiters, then signal the
    // semaphore that many times to wake them all.

    lock.acquire();
    int signalCount = waitorCount;
    waitorCount = 0;
    lock.release();

    while (signalCount--)
        queue.signal();   // overload for .release()
}

void signal()
{
    // Optimization: wake up at most one waiter

    lock.acquire();
    bool doSignal = false;
    if (waitorCount > 0) {
        waitorCount--;
        doSignal = true;
    }
    lock.release();

    if (doSignal)
        queue.signal();
}

Tom Payne

Apr 27, 2001, 1:32:36 AM
Kaz Kylheku <k...@ashi.footprints.net> wrote:

: On Thu, 26 Apr 2001 16:22:12 +0000 (UTC), Tom Payne <t...@hill.cs.ucr.edu> wrote:
:>Conversely, a semaphore initialized to 1 can serve as a mutex. A

: However, it is relevant to earlier debate that semaphores are not available
: across all Win32 platforms.

Okay.

: Implementing condition variables using only auto-reset events cannot be done in
: this manner.

I'm not sure what an auto-reset event is, but it sounds like something that
wouldn't work to implement, say, semaphores.

:>semaphore initialized to one, a counter and a semaphore initialized to
:>zero can simulate a condition.
:>
:>class Condition {
:>
:> Semaphore lock;
:> Semaphore queue;
:> int waitorCount;
:>
:>public:
:>
:> Condition() : lock(1), queue(0), waitorCount(0) {}
:>
:> void wait( Mutex m ) {
:> lock.acquire();
:> ++waitorCount;
:> m.unlock();
:> queue.acquire();
:> // implicitly inheriting lock from signaller.
:> m.lock();
:> --waitorCount;
:> }

: Of course you want lock.release() at the end here?
: But then you have mutual exclusion in wait(), so waitorCount
: can never rise above 1?


Yup. (I shouldn't try to reproduce things from memory.)

The acquisition of lock in the first line has to be released prior to
queue.acquire(), and the inheritance of lock, after sleeping, has to
be released prior to returning:

void wait( Mutex m ) {
    lock.acquire();
    ++waitorCount;
    m.unlock();

    lock.release();

    queue.acquire();
    // implicitly inheriting lock from signaller.
    m.lock();
    --waitorCount;

    lock.release();
}


:> void signal() {
:> lock.acquire()
:> if ( waitorCount ) {
:> queue.release();
:> // implicitly passing lock to signallee.
:> } else {
:> lock.release();
:> }

: Don't you want lock.release() to be unconditional? Hmm.

What I had in mind was the signaller passing the lock to the
signallee, who ultimately releases it as he exits, a leftover
complexity from a rather different protocol.

: I think what you are trying to do is something like:

: void wait()
: {
: // ``Enqueue'' for the condition while we still have mutex. This
: // ensures the atomic release-and-wait semantics, even though
: // we don't do the actual semaphore wait until later.

: lock.acquire();
: ++waitorCount;
: lock.release();

: // Release mutex, wait on semaphore and re-acquire.

: m.unlock();

: queue.wait(); // overload for .acquire() for readability

: m.lock();

: // No need to decrement waitorCount; if we got this far then our
: // contribution to the count has been noted by the
: // signaler and taken care of.

: }

: I prefer the waitor to

: void broadcast()


: {
: // Atomically ``dequeue'' all current waiters, then signal the
: // semaphore that many times to wake them all.

: lock.acquire();
: int signalCount = waitorCount;
: waitorCount = 0;
: lock.release();

: while (signalCount--)
: queue.signal(); // overload for .release()
: }

: void signal()
: {
: // Optimization: wake up at most one waiter

: lock.acquire();
: bool doSignal = false;
: if (waitorCount > 0) {
: waitorCount--;
: doSignal = true;
: }
: lock.release();

: if (doSignal)
: queue.signal();
: }

As far as I can tell, your implementation is equivalent to what I had
in mind but holds lock for a shorter time, which increases concurrency
-- a good thing.

Bottom line: semaphores are equivalent to conditions and mutexes in
their ability to coordinate threads, in the sense that each approach
can be implemented in terms of the other. But, conditions and mutexes
yield better structured solutions.

Tom Payne


Alexander Terekhov

Apr 27, 2001, 3:56:35 AM
Kaz Kylheku wrote:

[...]


> void wait()
> {
> // ``Enqueue'' for the condition while we still have mutex. This
> // ensures the atomic release-and-wait semantics, even though
> // we don't do the actual semaphore wait until later.
>
> lock.acquire();
> ++waitorCount;
> lock.release();
>
> // Release mutex, wait on semaphore and re-acquire.
>
> m.unlock();
>
> queue.wait(); // overload for .acquire() for readability
>
> m.lock();
>
> // No need to decrement waitorCount; if we got this far then our
> // contribution to the count has been noted by the
> // signaler and taken care of.
> }

[...]


> void signal()
> {
> // Optimization: wake up at most one waiter
>
> lock.acquire();
> bool doSignal = false;
> if (waitorCount > 0) {
> waitorCount--;
> doSignal = true;
> }
> lock.release();
>
> if (doSignal)
> queue.signal();
> }

it does NOT ensure the atomic release-and-wait semantics (POSIX:
``atomically with respect to access by another thread to the
mutex and then the condition variable''. That is, if another
thread is able to acquire the mutex after the about-to-block
thread has released it, then a subsequent call to
pthread_cond_broadcast() or pthread_cond_signal() in that
thread shall behave as if it were issued after the
about-to-block thread has blocked.)

One example when it does not work is the following:

t1. thread A locks the mutex, calls _wait and gets preempted right
after m.unlock() but before queue.wait()

t2. thread B locks the mutex, checks the predicate and sends a signal
to A (calls _signal - this will open the semaphore "queue");

t3. thread B calls _wait and slips through the opened semaphore
"queue" - consumes its own signal as spurious wake up. Thread A
remains blocked on the semaphore "queue".

the implementation above is incorrect.

> However, it is relevant to earlier debate that semaphores are not
> available across all Win32 platforms. Implementing condition
> variables using only auto-reset events cannot be done in this
> manner.

it can be done, e.g. (pseudo-code):

---------- Algorithm 8c
given:
hevBlockLock - auto-reset event
hevBlockQueue - auto-reset event
mtxExternal - mutex or CS
mtxUnblockLock - mutex or CS
nWaitersGone - int
nWaitersBlocked - int
nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result ]          // error checking omitted
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  wait( hevBlockLock,INFINITE );
  nWaitersBlocked++;
  set_event( hevBlockLock );

  unlock( mtxExternal );
  bTimedOut = wait( hevBlockQueue,timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimedOut ) {
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
        nSignalsWasLeft = 0;
      }
      else {
        nWaitersGone = 1;
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        set_event( hevBlockLock );
        nSignalsWasLeft = 0;
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) {
    wait( hevBlockLock,INFINITE );
    nWaitersBlocked -= nWaitersGone;
    set_event( hevBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      reset_event( hevBlockQueue );
    }
    set_event( hevBlockLock );
  }
  else if ( 0 != nSignalsWasLeft ) {
    set_event( hevBlockQueue );
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEOUT : 0;
}

signal(bAll) {

  [auto: register int result ]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {
    if ( 0 == nWaitersBlocked ) {
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
  }
  else if ( nWaitersBlocked > nWaitersGone ) {
    wait( hevBlockLock,INFINITE );
    if ( 0 != nWaitersGone ) {
      nWaitersBlocked -= nWaitersGone;
      nWaitersGone = 0;
    }
    if (bAll) {
      nWaitersToUnblock = nWaitersBlocked;
      nWaitersBlocked = 0;
    }
    else {
      nWaitersToUnblock = 1;
      nWaitersBlocked--;
    }
    unlock( mtxUnblockLock );
    set_event( hevBlockQueue );
  }
  else {
    unlock( mtxUnblockLock );
  }

  return result;
}

regards,
alexander.

Kaz Kylheku

Apr 27, 2001, 1:47:21 PM

I clearly see that when you point it out! The semaphore "queue" is not specific
about who it wakes up and when, allowing B to race ahead and consume its own
wakeup, and in general, legitimate wakeups can turn into spurious ones.

I think that this simply underscores the weakness of the semaphore as a
synchronization device.

>the implementation above is incorrect.

Which shows that I have little experience implementing conditions using
semaphores this way, and in general little experience using semaphores
in software development (and, I think, should keep it that way!)

The way I have done it in the past is to assign a suspend semaphore to each
thread; a thread waits on its own semaphore only, and is put into an explicit
wait queue. This avoids ambiguities regarding what wakeup goes to what thread
making it fairly straightforward to get the condition variable semantics
right.

>> However, it is relevant to earlier debate that semaphores are not
>> available across all Win32 platforms. Implementing condition
>> variables using only auto-reset events cannot be done in this
>> manner.
>
>it can be done, e.g. (pseudo-code):

That is horribly complicated compared to what it takes to implement
an event using a mutex and condition variable!
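
For instance, an auto-reset event is roughly this (a sketch; manual
reset, timeouts and error handling are omitted):

#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int signaled;                        /* 0 or 1, like an auto-reset event */
} myevent;

void myevent_set(myevent *ev)
{
    pthread_mutex_lock(&ev->lock);
    ev->signaled = 1;
    pthread_cond_signal(&ev->cond);      /* wake at most one waiter */
    pthread_mutex_unlock(&ev->lock);
}

void myevent_wait(myevent *ev)
{
    pthread_mutex_lock(&ev->lock);
    while (!ev->signaled)
        pthread_cond_wait(&ev->cond, &ev->lock);
    ev->signaled = 0;                    /* auto-reset: consume the signal */
    pthread_mutex_unlock(&ev->lock);
}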

>---------- Algorithm 8c
>given:
>hevBlockLock - auto-reset event
>hevBlockQueue - auto-reset event
>mtxExternal - mutex or CS
>mtxUnblockLock - mutex or CS
>nWaitersGone - int
>nWaitersBlocked - int
>nWaitersToUnblock - int
>
>wait( timeout ) {
>
> [auto: register int result ] // error checking omitted
> [auto: register int nSignalsWasLeft ]
> [auto: register int nWaitersWasGone ]
>
> wait( hevBlockLock,INFINITE );
> nWaitersBlocked++;
> set_event( hevBlockLock );

Could you elaborate on this? Elsewhere in the code, it appears that
nWaitersBlocked is accessed without the protection of hevBlockLock.

Tom Payne

Apr 27, 2001, 1:52:48 PM
Alexander Terekhov <tere...@web.de> wrote:
[...]
: it does NOT ensure the atomic release-and-wait semantics (POSIX:

: ``atomically with respect to access by another thread to the
: mutex and then the condition variable''. That is, if another
: thread is able to acquire the mutex after the about-to-block
: thread has released it, then a subsequent call to
: pthread_cond_broadcast() or pthread_cond_signal ( ) in that
: thread shall behave as if it were issued after the
: about-to-block thread has blocked.)

Good point. That's why in my implementation (corrected version
appears below) the signaller implicitly passes ownership of the lock
to the signallee. No call to wait can then get past lock.acquire()
until the signallee has harvested the release-simulated signal from
queue. [Such implicit transfers of ownership are a bit disturbing to
some folks, but seem to be necessary here.]

Also, in spite of what I said earlier, this seems to show that Kaz's
implementation is not equivalent to mine.


class Condition {

    Semaphore lock;
    Semaphore queue;
    int waitorCount;

public:

    Condition() : lock(1), queue(0), waitorCount(0) {}

    void wait( Mutex m ) {
        lock.acquire();
        ++waitorCount;
        m.unlock();

        lock.release();

        queue.acquire();   // implicitly inheriting lock from signaller

        m.lock();
        --waitorCount;
        lock.release();
    }

    void signal() {
        lock.acquire();
        if ( waitorCount ) {
            queue.release();   // implicitly passing lock to signallee
        } else {
            lock.release();
        }
    }

};

Tom Payne

Alexander Terekhov

Apr 27, 2001, 5:02:08 PM
Tom Payne wrote:

> Alexander Terekhov <tere...@web.de> wrote:
> [...]
> > it does NOT ensure the atomic release-and-wait semantics (POSIX:
> > ``atomically with respect to access by another thread to the
> > mutex and then the condition variable''. That is, if another
> > thread is able to acquire the mutex after the about-to-block
> > thread has released it, then a subsequent call to
> > pthread_cond_broadcast() or pthread_cond_signal ( ) in that
> > thread shall behave as if it were issued after the
> > about-to-block thread has blocked.)
>
> Good point. That's why in my implementation (corrected version
> appears below) the signaller implicitly passes ownership of the lock
> to the signallee. No call to wait can then get past lock.acquire()
> until the signallee has harvested the release-simulated signal from
> queue. [Such implicit transfers of ownership are a bit disturbing to
> some folks, but seem to be necessary here.]

yup. however your locking is too strict - it blocks threads
attempting to do signal that do not really need to be blocked.
a second lock could really help to relax the locking (and the
second lock is needed anyway in order to support timed waits/
cancellation), e.g. (pseudo-code):

---------- Algorithm 8a
given:
semBlockLock - bin.semaphore
semBlockQueue - semaphore
mtxExternal - mutex or CS (or bin.semaphore/auto-reset event)
mtxUnblockLock - mutex or CS (or bin.semaphore/auto-reset event)
nWaitersGone - int
nWaitersBlocked - int
nWaitersToUnblock - int

wait( timeout ) {

  [auto: register int result ]          // error checking omitted
  [auto: register int nSignalsWasLeft ]
  [auto: register int nWaitersWasGone ]

  sem_wait( semBlockLock );
  nWaitersBlocked++;
  sem_post( semBlockLock );

  unlock( mtxExternal );
  bTimedOut = sem_wait( semBlockQueue,timeout );

  lock( mtxUnblockLock );
  if ( 0 != (nSignalsWasLeft = nWaitersToUnblock) ) {
    if ( bTimedOut ) {
      if ( 0 != nWaitersBlocked ) {
        nWaitersBlocked--;
      }
      else {
        nWaitersGone++;
      }
    }
    if ( 0 == --nWaitersToUnblock ) {
      if ( 0 != nWaitersBlocked ) {
        sem_post( semBlockLock );
        nSignalsWasLeft = 0;
      }
      else if ( 0 != (nWaitersWasGone = nWaitersGone) ) {
        nWaitersGone = 0;
      }
    }
  }
  else if ( INT_MAX/2 == ++nWaitersGone ) {
    sem_wait( semBlockLock );
    nWaitersBlocked -= nWaitersGone;
    sem_post( semBlockLock );
    nWaitersGone = 0;
  }
  unlock( mtxUnblockLock );

  if ( 1 == nSignalsWasLeft ) {
    if ( 0 != nWaitersWasGone ) {
      // sem_adjust( semBlockQueue,-nWaitersWasGone );
      while ( nWaitersWasGone-- ) {
        sem_wait( semBlockQueue );
      }
    }
    sem_post( semBlockLock );
  }

  lock( mtxExternal );

  return ( bTimedOut ) ? ETIMEOUT : 0;
}

signal(bAll) {

  [auto: register int result ]
  [auto: register int nSignalsToIssue]

  lock( mtxUnblockLock );

  if ( 0 != nWaitersToUnblock ) {
    if ( 0 == nWaitersBlocked ) {
      return unlock( mtxUnblockLock );
    }
    if (bAll) {
      nWaitersToUnblock += (nSignalsToIssue=nWaitersBlocked);
      nWaitersBlocked = 0;
    }
    else {
      nSignalsToIssue = 1;
      nWaitersToUnblock++;
      nWaitersBlocked--;
    }
  }
  else {
    sem_wait( semBlockLock );

    if ( nWaitersBlocked > nWaitersGone ) {
      if ( 0 != nWaitersGone ) {
        nWaitersBlocked -= nWaitersGone;
        nWaitersGone = 0;
      }
      if (bAll) {
        nSignalsToIssue = nWaitersToUnblock = nWaitersBlocked;
        nWaitersBlocked = 0;
      }
      else {
        nSignalsToIssue = nWaitersToUnblock = 1;
        nWaitersBlocked--;
      }
    }
    else {
      sem_post( semBlockLock );
      return unlock( mtxUnblockLock );
    }
  }

  unlock( mtxUnblockLock );
  sem_post( semBlockQueue,nSignalsToIssue );
  return result;
}

regards,
alexander.

Alexander Terekhov

Apr 27, 2001, 6:10:17 PM
Kaz Kylheku wrote:

> > wait( hevBlockLock,INFINITE );
> > nWaitersBlocked++;
> > set_event( hevBlockLock );
>
> Could you elaborate on this? Elsewhere in the code, it appears that
> nWaitersBlocked is accessed without the protection of hevBlockLock.

the only place without the protection of hevBlockLock is in signal:

else if ( nWaitersBlocked > nWaitersGone ) {
wait( hevBlockLock,INFINITE );

(nWaitersBlocked/nWaitersGone is accessed with protection of
mtxUnblockLock only)

that is an optimisation for Win32 and is based on the fact that Win32
SDK does provide guarantees with respect to atomicity for ints (load/
store is atomic) on all Win32 platforms. The code above does
introduce a race condition with respect to the increment in _wait, but I
believe that it is harmless (even on multiprocessors with memory reads
reordering) because of mem. visibility and protection provided by
"external" mutex - we cannot fail to wakeup a thread that should
have been woken up...

David Schwartz wrote:

"The will only introduce race conditions when such race conditions don't
matter. This may not be easy to show, but it is true.

Consider:

lock_mutex();
do_some_stuff();
unlock_mutex();
pthread_cond_signal();

In this case, there is no mutex locked, so you might think that you
could get the wrong value for the number of threads that have to be
signalled, and this might fail to wakeup a thread that should have been
woken up. But the reality is that this race condition is unavoidable
given that the call to 'pthread_cond_signal' is not mutex protected. So
a thread that calls 'pthread_cond_wait' after the mutex was unlocked
might or might not get woken up, but any code that notices that is
broken."

regards,
alexander.

Tom Payne

Apr 28, 2001, 6:18:29 AM
Alexander Terekhov <tere...@web.de> wrote:

: Tom Payne wrote:

:> Alexander Terekhov <tere...@web.de> wrote:
:> [...]
:> > it does NOT ensure the atomic release-and-wait semantics (POSIX:
:> > ``atomically with respect to access by another thread to the
:> > mutex and then the condition variable''. That is, if another
:> > thread is able to acquire the mutex after the about-to-block
:> > thread has released it, then a subsequent call to
:> > pthread_cond_broadcast() or pthread_cond_signal ( ) in that
:> > thread shall behave as if it were issued after the
:> > about-to-block thread has blocked.)
:>
:> Good point. That's why in my implementation (corrected version
:> appears below) the signaller implicitly passes ownership of the lock
:> to the signallee. No call to wait can then get past lock.acquire()
:> until the signallee has harvested the release-simulated signal from
:> queue. [Such implicit transfers of ownership are a bit disturbing to
:> some folks, but seem to be necessary here.]

: yup. however your locking is too strict - it blocks threads
: attempting to do signal that do not really need to be blocked.
: a second lock could really help to relax the locking (and the
: second lock is needed anyway in order to support timed waits/
: cancellation), e.g. (pseudo-code):

Hmmmmm. Between your

: unlock( mtxExternal );

and your

: bTimedOut = sem_wait( semBlockQueue,timeout );

there seems to be an opportunity for a great deal of mischief,
including the possibility that another thread intervenes, e.g.,
signalling and then waiting in a way that collects that signal (per
your observation about Kaz's implementation). Am I missing something?

Tom Payne


Alexander Terekhov

Apr 28, 2001, 9:22:59 AM
Tom Payne wrote:

> Hmmmmm. Between your
>
> > unlock( mtxExternal );
>
> and your
>
> > bTimedOut = sem_wait( semBlockQueue,timeout );
>
> there seems to be an opportunity for a great deal of mischief,
> including the possibility that another thread intervenes, e.g.,
> signalling and then waiting in a way that collects that signal (per
> your observation about Kaz's implementation). Am I missing something?

you have probably missed that similar to your implementation, FIRST
signaller implicitly passes ownership of the lock - mtxBlockLock
to the signallee(s) and that only the LAST signallee releases it -
reopens the "gate". No call to wait can then get past lock( mtxBlockLock
)
until the last signallee has harvested the release/post-simulated signal
from queue. Consider the following (less complicated version):


---------- Algorithm 6b.2 ----------
gate - semaphore
queue - semaphore
counters - CS or mutex
ex_mutex - CS or mutex
nWaiters - int
nSignaled - int
nGone - int

wait(timeout) {
  sem_wait gate                 // pass gate
  nWaiters++
  sem_post gate
  unlock ex_mutex
  _bTimedOut = sem_wait(queue, timeout)
  lock counters
  if (_bTimedOut && (0 == nSignaled || 0 != nWaiters-nGone)) {
    nGone++                     // retract our waiter status (ignore overflow)
  }
  else {
    if (_bTimedOut) {
      sem_wait queue            // better now than spurious later
    }
    if (0 == --nSignaled) {
      sem_post gate             // open gate
    }
  }
  unlock counters
  lock ex_mutex
  return _bTimedOut ? ETIMEDOUT:0
}

signal(bAll) {
  _nSig = 0
  lock counters
  // Win32 - this is safe because nWaiters can only be decremented
  // by a thread that owns counters and nGone can only be changed
  // by a thread that owns counters.
  if (nWaiters > nGone) {
    if (0 == nSignaled) {
      sem_wait gate             // close gate if not already closed
    }
    if (nGone > 0) {
      nWaiters -= nGone
      nGone = 0
    }
    _nSig = bAll ? nWaiters:1
    nSignaled += _nSig
    nWaiters -= _nSig
  }
  unlock counters
  if (0 != _nSig) {
    sem_post queue, _nSig
  }
}

BTW, my first impl. was nearly the same as the impl.
you have posted (but with a variation that would also
work on auto-reset events/bin.semaphores only
platforms) - signal/broadcast processing was made atomic
with respect to other threads and the CV. I was contacted
by Louis Thomas <lth...@arbitrade.com>, who pointed to
the shortcoming of too strict locking and suggested a
different solution. N6xx and N8xx are the result of our
follow-on discussions.

Louis Thomas <lth...@arbitrade.com> wrote:

>Subject: RE: Win32 Conditions
>
>
>
>
>In this message, I am going to present 3 algorithms and discussion about
>different approaches. I believe that one of these algorithms is the best so
>far, so we may not want to create a new PThreads release until people have
>had a chance to critique what I am going to present here.
>
>I have studied Alexander's algorithm, and I believe it is correct. However,
>I believe it has two shortcomings which can be overcome. One shortcoming is
>that threads that time out waiting on the semaphore can induce spurious
>wakeups later. This is easy to fix. The other shortcoming is that
>Alexander's locking is too strict. It blocks threads attempting to do
>signal/broadcast that don't really need to be blocked. I will get to this,
>but I think it's best to start at the beginning.
>
>There are many ways to implement condition variables in Win32. Some ways,
>such as having an event associated with each waiting thread, are not even
>being considered because they are too globally invasive. We have been
>concentrating on implementations with a small fixed number of Win32 sync
>objects per condition. One class of implementations has the threads wait on
>one or more event objects. Schmidt & Pyarali list many of these in their
>paper "Strategies for Implementing POSIX Condition Variables on Win32"
>[http://www.cs.wustl.edu/~schmidt/win32-cv-1.html], and, as they indicate,
>none of them are correct. However, there is one that they did not mention
>that is interesting because it is correct. Here it is in pseudocode form:
>---------- Algorithm 1 ----------
>given:
>ex_mutex - mutex
>cond - autoreset event
>nWaiters - int
>
>wait(timeout) {
> nWaiters++
> bTimedOut=SignalAndWait(ex_mutex, cond, timeout)
> lock ex_mutex
> nWaiters--
> return bTimedOut
>}
>
>signal(bAll) {
> do (bAll?nWaiters:1 times) {
> PulseEvent(cond)
> }
>}
>---------- ---------- ----------
>This pseudocode assumes that ex_mutex is always held by the thread calling
>signal, but is easily corrected by adding a CritSec or mutex to protect
>nWaiters. I think that this algorithm meets all the requirements specified
>by PThreads. There is no signal stealing, double broadcast deadlocks, or
>spurious wake ups. (There may be spurious PulseEvents but they do not cause
>wake ups.) It has the nice property that it requires the same number of
>sync
>objects as the Unix version. However, it has one major drawback that is not
>even mentioned in the PThread spec. In Win32, a thread that is in the
>'suspended' state is not in the 'waiting' state. This means that if a
>thread
>is waiting on an auto-reset event but is suspended when the PulseEvent
>occurs, it will miss the notification because it was not really waiting on
>the event. Debuggers suspend and resume threads all the time. This means
>that single stepping a program while a signal occurs can very easily cause
>any number of wake ups to be missed and deadlock to occur. Because of this
>problem, this implementation is basically unusable. PThreads does not even
>say whether or not threads can be suspended, let alone how this should
>affect threads waiting on a condition.
>
>The next class of implementations uses a semaphore to wake up waiting
>threads. A semaphore gives us control over exactly how many threads are
>woken up, but we have no control over how long it takes them to wake up.
>This makes semaphore implementations susceptible to signal stealing. To
>prevent this, we have a new rule that our implementations must obey in
>order
>to be correct:
> A thread may not begin a wait on the semaphore until it is guaranteed
>not to steal a signal.
>The "SignalObjectAndWait Solution" of Schmidt & Pyarali does not follow
>this
>rule and is not correct. The current implementations in PThreads-Win32 and
>ACE use this solution and are not correct, as Alexander has demonstrated.
>
>How can we guarantee that we follow this rule? We need some kind of gate. A
>thread must pass through the gate to wait on our semaphore. We can close
>the
>gate after we do a signal, and the last thread to wake up can open the gate
>again. We must be able to wait on the gate (if we try to pass and it is
>closed) and we must be able to close the gate in one thread and open it in
>another. The Win32 sync objects that meet these requirements are
>manual-reset events and semaphores. We have one more requirement: We must
>know at all times exactly how many threads are in the waiting state. This
>means that passing through the gate and incrementing the number of waiting
>threads must be atomic.
>
>We can do this pretty easily using a manual reset event, if the object that
>protects nWaiters is a mutex.
>---------- Algorithm 2 ----------
>given:
>gate - manual event
>queue - semaphore
>counters - mutex
>ex_mutex - CS or mutex
>nWaiters - int
>nSignaled - int
>
>wait(timeout) {
> waitmultiple(gate, counters, both) // pass gate
> nWaiters++
> unlock counters
> unlock ex_mutex
> bTimedOut=sem_wait(queue, timeout)
> lock counters
> if (bTimedOut && nWaiters>0) {
> nWaiters--
> } else {
> if (bTimedOut) {
> sem_wait queue // better now than spurious later
> bTimedout=false
> }
> nSignaled--
> if (0==nSignaled) {
> set gate // open gate
> }
> }
> unlock counters
> lock ex_mutex
> return bTimedOut?ETIMEDOUT:0
>}
>
>signal(bAll) {
> lock counters
> if (0!=nWaiters) {
> reset gate // close gate
> if (!bAll) {
> sema_post queue
> nSignaled++
> nWaiters--
> } else {
> sema_post (queue, nWaiters)
> nSignaled=nWaiters;
> nWaiters=0;
> }
> }
> unlock counters
>}
>---------- ---------- ----------
>This implementation has no signal stealing, double broadcast deadlocks, or
>spurious wake ups. It is nice and straight forward. On the other hand, it
>requires 'counters' to be a mutex and not a critical section, which means a
>speed hit. Can we do better? The big problem is the last requirement
>mentioned: passing through the gate and incrementing the number of waiting
>threads must be atomic. If we use a critical section, we can't do the
>WaitForMultipleObjects that satisfied that atomicity requirements.
>
>Now is a good time to look at Alexander's implementation. I present it here
>in pseudocode I extracted from his PThread patch. My apologies if I missed
>something.
>---------- Alexander's Algorithm ----------
>given:
>BlockLock - semaphore
>BlockQueue - semaphore
>ex_mutex - mutex or CS
>UnblockLock - mutex or CS


>nWaitersBlocked - int
>nWaitersToUnblock - int

>nWaitersUnblocked - int
>
>wait(timeout) {
>
> sema_wait BlockLock
> nWaitersBlocked++
> sema_post BlockLock
>
> unlock ex_mutex
> bTimedOut=sema_wait(BlockQueue, timeout)
>
> lock UnblockLock
> nWaitersUnblocked++
> if (0==nWaitersToUnblock) {
> lastsignal=-1
> } else {
> nWaitersToUnblock--
> if (0==nWaitersToUnblock) {
> lastsignal=1
> } else {
> lastsignal=0
> }
> }
> unlock UnblockLock
>
> if (1==lastsignal) {
> sema_post BlockLock
> }
>
> lock ex_mutex;
> return bTimedOut?ETIMEOUT:0
>}
>
>signal(bAll) {
>
> sema_wait BlockLock
> lock UnblockLock
> if (nWaitersUnblocked!=0) {
> nWaitersBlocked-=nWaitersUnblocked
> nWaitersBlocked=0
> }
> if (nWaitersBlocked>0) {
> toUnblock=
> nWaitersToUnblock=
> bAll?nWaitersBlocked:1
> unlock UnblockLock
> sema_post(BlockQueue, toUnblock)
> } else {
> sema_post(BlockLock)
> unlock UnblockLock
> }
>}
>---------- ---------- ----------
>As you can see, Alexander has probably followed reasoning similar to my
>own,
>and is using the BlockLock semaphore to ensure that a thread may not begin
>a
>wait on the BlockQueue semaphore until it is guaranteed not to steal a
>signal. This implementation has no signal stealing or double broadcast
>deadlocks. One shortcoming is that threads that time out waiting on the
>semaphore can induce spurious wakeups later. Again, this is easy to fix and
>is not really important. The other shortcoming is that Alexander's locking
>is too strict. It blocks threads attempting to do signal/broadcast that
>don't really need to be blocked. For example, assume there are three
>threads
>waiting on a condition, and a different thread does "pthread_cond_signal();
>pthread_cond_broadcast();". The broadcast will block until the signal
>completes, although it should be able to execute 'concurrently'. Our rule
>states that a thread may not begin a wait on the BlockQueue semaphore while
>the 'signal' is processing. The rules does NOT prevent us from releasing
>more threads, so we should do so. In fact, when trying to do
>"pthread_cond_broadcast(); pthread_cond_signal();" this algorithm will
>block
>on the signal even though the signal is probably a no-op!
>
>Can we relax the blocking and make it more like the Algorithm 2? There are
>two problems: One is maintaining the atomicity of passing the gate an
>incrementing nWaiters. (Note that by atomically incrementing nWaiters I
>really mean locking 'counters'.) In order to allow critical sections, we
>have to do this in two calls. The other problem is that closing the gate
>when the gate is a semaphore is a blocking call (sema_wait). We can't pass
>the gate while holding the 'counters' lock, because we will deadlock when
>the gate is closed. We can't close the gate while holding the 'counters'
>lock, because we will deadlock when the gate is being passed.
>* If we inc nWaiters before passing the gate, we could close the gate and
>think we have more waiters than we do, and thus deadlock waiting for the
>last waiter.
>* If we inc nWaiters after passing the gate, we will think we have less
>waiters than we do, which means some won't be signaled. This could be
>OK,
>since we could claim we did a signal before those threads were really
>waiters, but we can't guarantee which threads will not wake up. We will
>have
>signal stealing, which is incorrect.
>Note that this is a race condition that only happens when one thread is
>trying to do a wait while another thread that did not lock the ex_mutex
>tries to do a signal at the same time. If we always hold the ex_mutex, we
>never run in to this situation.
>It turns out that in the second case, we can detect this situation and deal
>with it! Then we can maintain the no stealing rule and we win. Here is the
>code. Detecting the race condition makes it a bit messier.
>---------- Algorithm 3 ----------
>given:
>gate - semaphore
>queue - semaphore
>counters - CS or mutex
>ex_mutex - CS or mutex
>nWaiters - int
>nSignaled - int
>bSigAll - bool // could be sign bit of nSignaled
>
>wait(timeout) {
> sem_wait gate // pass gate
> lock counters
> if (0==nSignaled) {
> // normal
> nWaiters++
> sem_post gate
> } else {
> // oh no - in the middle of a signalOne or signalAll - gate stays shut
> if (!bSigAll) {
> // we don't have to wake up. Be a waiter and see what happens
> nWaiters++
> } else {
> // we have to wake up. We could just return - see discussion below
> nSignaled++;
> sem_post queue
> }
> }
> unlock counters
> unlock ex_mutex
> bTimedOut=sem_wait(queue, timeout)
> lock counters
> if (bTimedOut && nWaiters>0) {
> nWaiters--;
> } else {
> if (bTimedOut) {
> sem_wait queue // better now than spurious later
> bTimedout=false;
> }
> nSignaled--;
> if (0==nSignaled) {
> bSigAll=false;
> sem_post gate // open gate
> }
> }
> unlock counters
> lock ex_mutex
> return bTimedOut?ETIMEDOUT:0;
>}
>
>signal(bAll) {
> lock counters
> if (0!=nWaiters) {
> sema_wait(gate, 0) // close gate
> if (!bAll) {
> sema_post queue
> nSignaled++;
> nWaiters--;
> } else {
> bSigAll=true;
> sema_post (queue, nWaiters)
> nSignaled=nWaiters;
> nWaiters=0;
> }
> }
> unlock counters
>}
>---------- ---------- ----------
>At one point we have the option of just returning. I think it is probably
>better not to. The PThreads spec says that a timeout will always unlock
>then
>lock ex_mutex. If we don't return immediately, but do full processing, we
>match this, and we don't have to do anything extra to handle cancellation.
>
>This implementation has no signal stealing, double broadcast deadlocks, or
>spurious wake ups. It requires four sync objects compared to Unix's two,
>but
>no one has been able to find a good way around this. It can utilize
>CritSecs
>for all the mutexes, which are faster than mutexes when there is not much
>contention (like for the 'counters' lock). It does not block any signal()
>calls that can immediately succeed. It only blocks wait() calls that can't
>proceed. It behave correctly even when threads are suspended and resumed
>while it is executing, assuming threads are not suspended indefinitely.
>
>I think this makes Algorithm 3 the best available algorithm. What do you
>think? Are there any bugs that I missed?
>
> Later,
> -Louis! :)

regards,
alexander.

Alexander Terekhov

Apr 28, 2001, 12:00:17 PM
Alexander Terekhov wrote:

> Tom Payne wrote:
>
> > Hmmmmm. Between your
> >
> > > unlock( mtxExternal );
> >
> > and your
> >
> > > bTimedOut = sem_wait( semBlockQueue,timeout );
> >
> > there seems to be an opportunity for a great deal of mischief,
> > including the possibility that another thread intervenes, e.g.,
> > signalling and then waiting in a way that collects that signal (per
> > your observation about Kaz's implementation). Am I missing something?

> you have probably missed that similar to your implementation, FIRST
> signaller implicitly passes ownership of the lock - mtxBlockLock

[...] ^^^^^^^^^^^^

err.. semBlockLock:

you have probably missed that similar to your implementation, FIRST
signaller implicitly passes ownership of the lock - semBlockLock
to the signallee(s) and that only the LAST signallee releases it -
reopens the "gate". No call to wait can then get past lock( semBlockLock

regards,
alexander.

Tom Payne

Apr 29, 2001, 9:30:27 AM
Alexander Terekhov <tere...@web.de> wrote:
[...]
: yup. however your locking is too strict - it blocks threads
: attempting to do signal that do not really need to be blocked.

Hmmmmm. As you have pointed out in a later posting, in your
implementation, like mine, the signaller acquires a lock and passes it
to the signallee. Since acquiring that lock is the only blocking that
my signaller does and that lock acquisition seems essential, I'm
puzzled by the above comment that my implementation "blocks threads
attempting to do signal that do not really need to be blocked."

: a second lock could really help to relax the locking (and the
: second lock is needed anyway in order to support timed waits/
: cancellation), e.g. (pseudo-code):

[... complex pseudo-code implementation omitted ...]

All handling of waitorCount has to be covered by the same lock, so I
don't see an opportunity for lock splitting. Also, IMHO, timed waits
and broadcasts don't need to be so complex:

class Condition {

    int waitorCount;
    Semaphore queue;
    Semaphore lock;
    bool broadcastInProgress;

public:

    int wait( Mutex m, int time = INT_MAX ) {
        lock.acquire();
        ++waitorCount;
        m.unlock();
        int residualTime = queue.acquire(time); // wait here.
        // except on timeouts, lock is inherited from signaller.
        if ( ! residualTime ) lock.acquire();
        m.lock();
        --waitorCount;
        if ( waitorCount && broadcastInProgress ) {
            // wake up next waitor and implicitly pass him the lock.
            queue.release();
        } else {
            broadcastInProgress = false;
            lock.release();
        }
        return residualTime; // report time left.
    }

    void signal( bool toBroadcast = false ) {
        lock.acquire();
        broadcastInProgress = toBroadcast;
        if ( waitorCount ) {
            // wake up next waitor and implicitly pass him the lock.
            queue.release();
        } else {
            broadcastInProgress = false;
            lock.release();
        }
    }

    Condition()
      : broadcastInProgress(false),
        queue(0),
        lock(1),
        waitorCount(0)
    {}

};

Tom Payne

Alexander Terekhov

Apr 30, 2001, 5:11:27 AM
Tom Payne wrote:

[...]


> I'm puzzled by the above comment that my implementation
> "blocks threads attempting to do signal that do not really
> need to be blocked."

For example, assume there are three threads waiting on a condition,
and a different thread does "signal; broadcast;". The broadcast will
block until the signal completes, although it should be able to
execute 'concurrently'. In fact, when trying to do "broadcast;
signal;" your algorithm will block on the signal even though
the signal is probably a no-op (and is definitely a no-op in the
case when both calls are done with protection of mutex:
m.lock; ... cv.broadcast; ... cv.signal; ... m.unlock).

> All handling of waitorCount has to be covered by the same lock,
> so I don't see an opportunity for lock splitting.

lock splitting can be done using extra counter(s) - see 6xx/8xx.

> Also, IMHO, timed waits and broadcasts don't need to be so
> complex:

[...]

you have at least two deadlocks:

> // except on timeouts, lock is inherited from signaller.
> if ( ! residualTime ) lock.acquire();
> m.lock();

a) your assumption with respect to "except on timeouts"
is wrong. consider the following scenario:

t1. thread A calls wait
t2. thread B calls signal and acquires the lock
t3. thread A times out (and calls lock.acquire())
t4. thread B implicitly pass the lock to thread A
(does not release the lock) - DEADLOCK

b) it is possible (and correct) to call signal with
protection of mutex (posix even says "if predictable scheduling
behaviour is required, then that mutex is locked by the thread
calling pthread_cond_signal() or pthread_cond_broadcast()"):

mutex.lock;...cv.signal;...mutex.unlock;

you will have the following locking:

signaller:

mutex.lock();
lock.acquire();
...

signallee:

lock.acquire();
mutex.lock();

that will result in a DEADLOCK too

regards,
alexander.

Tom Payne

Apr 30, 2001, 7:42:22 AM
Alexander Terekhov <tere...@web.de> wrote:
: Tom Payne wrote:

[...]

:> Also, IMHO, timed waits and broadcasts don't need to be so
:> complex:

[...]

: you have at least two deadlocks:

:> // except on timeouts, lock is inherited from signaller.
:> if ( ! residualTime ) lock.acquire();
:> m.lock();

: a) your assumption with respect to "except on timeouts"
: is wrong. consider the following scenario:

... a waitor times out while a signal is in progress.

Oops! I now appreciate the need for the complexity in your
algorithms.

Thanks,
Tom Payne

Nils Antonsen

May 1, 2001, 11:42:40 AM

"Kaz Kylheku" <k...@ashi.footprints.net> schrieb im Newsbeitrag
news:slrn9ehp...@cafe.net...

> On Thu, 26 Apr 2001 16:22:12 +0000 (UTC), Tom Payne <t...@hill.cs.ucr.edu> wrote:
> >Conversely, a semaphore initialized to 1 can serve as a mutex. A
>
> However, it is relevant to earlier debate that semaphores are not available
> across all Win32 platforms. Implementing condition variables using only


Hi Kaz

Windows CE is not a full Win32 platform - it implements only a subset of the
Win32 API. There are many differences between Win32 and Windows CE (e.g.
there is no WaitForMultipleObjects()/waitAll semantics).

regards Nils
