
Condition variable vs semaphore


FiLH

Jan 9, 2010, 1:47:49 PM
Hello,

A quite basic question: I'm looking for an example where a condition
variable is better than a semaphore.

Of course, in the case of a pthread_cond_broadcast there is a huge
différence, but in the case of a pthread_cond_signal I don't see a case
where a semaphore would not work, except for a timed wait.

I have looked for some and have not found any.


Thanks in advance

FiLH

David Schwartz

Jan 9, 2010, 2:55:39 PM
On Jan 9, 10:47 am, f...@filh.orgie (FiLH) wrote:

> A quite basic question, I'm wondering for an example where a condition
> variable is better than a semaphore.

Any time the state cannot sensibly be represented as a number.

Any time more than one event is associated with the same state. (You
can associate any number of condition variables with the same mutex.
Consider a queue that has a special handler if it's full and a special
handler if it's empty.)
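As a sketch of that last point, a bounded queue in C might pair one mutex with two condition variables, one for "space available" and one for "data available" (a minimal illustration; the `queue` type and its function names are invented here, and error checking is omitted):

```c
#include <pthread.h>

#define QCAP 4

/* Bounded queue: one mutex, two condition variables sharing it. */
typedef struct {
    int buf[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_full;   /* signalled when space appears   */
    pthread_cond_t not_empty;  /* signalled when an item appears */
} queue;

void queue_init(queue *q) {
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void queue_push(queue *q, int v) {
    pthread_mutex_lock(&q->lock);
    while (q->count == QCAP)                  /* full: wait for space */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->buf[q->tail] = v;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);       /* wake one consumer */
    pthread_mutex_unlock(&q->lock);
}

int queue_pop(queue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                     /* empty: wait for data */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int v = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->not_full);        /* wake one producer */
    pthread_mutex_unlock(&q->lock);
    return v;
}
```

A producer blocked in queue_push() waits only on not_full, so a consumer's wakeup can never be consumed by another consumer. A single semaphore cannot express these two distinct events on the same shared state.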

In any event, a condition variable is traditionally considered to be
lighter than a semaphore. So really the operative question is the
reverse. If you can use either a condition variable or a semaphore, in
general you should prefer a condition variable.

DS

Vaclav Haisman

Jan 9, 2010, 2:51:22 PM
FiLH wrote, On 9.1.2010 19:47:
> Hello,
>
> A quite basic question, I'm wondering for an example where a condition
> variable is better than a semaphore.
>
> Of course in case of a pthread_cond_broadcast there is a huge
> différence, but in the case of a pthread_cond_signal I don't see a case
> where a semaphore would not work except for a timed wait.
>
> I have looked for some and I have not found.
Well, let's see.

If you were waiting for some condition using a semaphore, then you
would still have to, after you have been woken up, acquire some mutex
and check the condition. With a semaphore, you can forget to do that.
OTOH, a condition variable forces you to provide a mutex to the
pthread_cond_wait() function, so you can never forget.

Besides that, a semaphore seems like a heavier primitive.
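As an illustration of that point, the canonical pattern the API forces on you might look like this (a minimal sketch; the `ready` flag and the function names are invented):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool ready = false;   /* the predicate, protected by the mutex */

void wait_until_ready(void) {
    pthread_mutex_lock(&lock);
    while (!ready)                       /* loop re-checks the predicate */
        pthread_cond_wait(&cond, &lock); /* atomically unlocks, sleeps, relocks */
    pthread_mutex_unlock(&lock);
}

void announce_ready(void) {
    pthread_mutex_lock(&lock);
    ready = true;
    pthread_cond_signal(&cond);          /* signal while holding the mutex */
    pthread_mutex_unlock(&lock);
}
```

Because pthread_cond_wait() takes the mutex as an argument and re-acquires it before returning, the predicate check after wakeup cannot be skipped by accident.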

--
VH

FiLH

Jan 9, 2010, 7:04:12 PM
David Schwartz <dav...@webmaster.com> wrote:

Is it really lighter? In performance terms?

I know that I can program a semaphore with a condition variable, but is
that just a teaching example?

In the usual books on parallel programming, semaphores are used, not
condition variables.

In a way I think that you can do everything without condition variables,
so it's difficult to explain.


Thanks for your answer...

FiLH

FiLH

Jan 9, 2010, 7:04:12 PM
Vaclav Haisman <v.ha...@sh.cvut.cz> wrote:

> FiLH wrote, On 9.1.2010 19:47:
> > Hello,
> >
> > A quite basic question, I'm wondering for an example where a condition
> > variable is better than a semaphore.
> >
> > Of course in case of a pthread_cond_broadcast there is a huge
> > différence, but in the case of a pthread_cond_signal I don't see a case
> > where a semaphore would not work except for a timed wait.
> >
> > I have looked for some and I have not found.
> Well, let's see.
>
> If you were waiting for some codition and you were using semaphore, then you
> will still have to, after you have been woken up, acquire some mutex and
> check the condition. With a semaphore, you can forget to do it. OTOH,
> conditional variable forces you to provide a mutex to the pthread_cond_wait()
> function, so you cannot ever forget.

I have an example of two processes (not two threads) that exchange
conditions (empty and full buffers) using shared semaphores...


>
> Besides that, semaphore seems like a heavier primitive.

What is inside the kernel ?

FiLH
>

--
The basis of the bourgeois worldview is common sense, that is to say,
a truth that stops at the arbitrary order of whoever speaks it.
Roland Barthes.
http://www.filh.org

Chris M. Thomasson

Jan 9, 2010, 11:42:53 PM
"FiLH" <fi...@filh.orgie> wrote in message
news:1jc2bri.4ocz9b1g54mxjN%fi...@filh.orgie...

Condition variables are stateless and semaphores are not. Therefore, it can
indeed be "redundant" to use a semaphore in certain cases. For instance,
usually a queue implementation knows if it is full or empty. Since the queue
already provides the necessary information, IMVHO, a semaphore is simply not
necessary.

David Schwartz

Jan 10, 2010, 12:30:57 AM
On Jan 9, 4:04 pm, f...@filh.orgie (FiLH) wrote:

> Is it really lighter ? In performance terms ?

Usually, yes.

> I know that I can program a semaphore with a condition variable., but is
> a just a teaching example ?

Yes, you can, because a condition variable is more primitive. Try to
make a condition variable from a semaphore; it's nearly impossible.

> In the usual books on parallel programing semaphore are used not
> condition variables.

Yes, usually because condition variables are considered too primitive.

> In a way I think that you can do everything without condition variable.
> So it's difficult to explain.

You could do everything with just a mutex, but it would produce
horrible, ugly code. A good programmer understands all the available
tools so he can choose the best one for each job. You can bash in
anything with a 100 pound hammer, but if you just need to put in an
ordinary nail, you don't want to be lugging the 100 pound hammer
around.

DS

David Schwartz

Jan 10, 2010, 12:31:54 AM
On Jan 9, 4:04 pm, f...@filh.orgie (FiLH) wrote:

> What is inside the kernel ?
>
> FiLH

On typical UNIX platforms, semaphores are implemented entirely in the
kernel and condition variables are implemented primarily in user
space. This generally means a semaphore's fast path still involves a
kernel/user transition while a condition variable's does not.

DS

FiLH

Jan 10, 2010, 7:03:03 AM

Thanks, from all the answers I begin to understand that it may be
interesting to analyse better the semantics of the semaphore operations,
and maybe to separate the meaning of each primitive.

FiLH

Jan 10, 2010, 7:03:03 AM
David Schwartz <dav...@webmaster.com> wrote:

> On Jan 9, 4:04 pm, f...@filh.orgie (FiLH) wrote:
>
> > Is it really lighter ? In performance terms ?
>
> Usually, yes.
>
> > I know that I can program a semaphore with a condition variable., but is
> > a just a teaching example ?
>
> Yes, you can, because a condition variable is more primitive. Try to
> make a condition variable from a semaphore, it's nearly impossible.

It could be interesting to check the differences in behaviour between
a multithread and a multiprocess implementation.


>
> > In the usual books on parallel programing semaphore are used not
> > condition variables.
>
> Yes, usually because condition variables are considered too primitive.
>
> > In a way I think that you can do everything without condition variable.
> > So it's difficult to explain.


Thanks for your explanations... what is a bit paradoxical is that
semaphores are simple objects in their behaviour, while condition
variables are more complex, yet more primitive.

Thanks for your kind explanations

FiLH

Jan 10, 2010, 7:03:03 AM
David Schwartz <dav...@webmaster.com> wrote:

I will try to find the Linux source code...

David Schwartz

Jan 10, 2010, 10:03:43 AM
On Jan 10, 4:03 am, f...@filh.orgie (FiLH) wrote:

> Thanks for your explanations... what is a bit paradoxal is that
> semaphore are simple objects form their behavior, as condition_variable
> are more complex, but more primitive..

More complex to use, because they're more primitive. Much simpler to
implement.

DS

Maxim Yegorushkin

Jan 10, 2010, 1:14:41 PM

It is true for pthread mutexes.

But I don't think this applies to condition variables, because there
must be a list of threads blocked on the condition variable. Since
condition variables may be shared between processes, this list must be
maintained in the kernel. This means that signal and broadcast must
enter the kernel to wake up waiting threads, and waiting on the
condition must also enter the kernel to deschedule the thread off the CPU.

--
Max

Maxim Yegorushkin

Jan 10, 2010, 1:24:26 PM

In general, you can't replace a semaphore with a condition variable.
Because a semaphore has state and a condition variable doesn't, with a
condition variable signals may be lost.

A straightforward implementation of a semaphore is a mutex + counter +
condition variable.
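That recipe might be sketched like this in C (a hypothetical `csem` type for illustration; error checking omitted):

```c
#include <pthread.h>

/* Semaphore = mutex + counter + condition variable. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    unsigned        count;    /* the semaphore's state */
} csem;

void csem_init(csem *s, unsigned initial) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->count = initial;
}

void csem_post(csem *s) {
    pthread_mutex_lock(&s->lock);
    s->count++;                        /* counter remembers the post... */
    pthread_cond_signal(&s->nonzero);  /* ...so a post is never lost */
    pthread_mutex_unlock(&s->lock);
}

void csem_wait(csem *s) {
    pthread_mutex_lock(&s->lock);
    while (s->count == 0)              /* predicate loop around the wait */
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->count--;
    pthread_mutex_unlock(&s->lock);
}
```

Because the counter is protected by the mutex and re-checked in a loop, a post that arrives before anyone waits is remembered, which is exactly the statefulness a bare condition variable lacks.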

--
Max

Chris M. Thomasson

Jan 10, 2010, 1:57:17 PM
"Maxim Yegorushkin" <maxim.ye...@gmail.com> wrote in message
news:4b4a1911$0$9752$6e1e...@read.cnntp.org...

I believe you can use futexes to implement process-wide condition
variables that have user-space fast paths.

Maxim Yegorushkin

Jan 10, 2010, 2:48:54 PM

I wonder what that fast path would look like for condition wait? Or did
you mean a fast path for condition signal?

--
Max

FiLH

Jan 11, 2010, 1:36:27 AM
David Schwartz <dav...@webmaster.com> wrote:

That's an interesting point of view.

David Schwartz

Jan 11, 2010, 1:38:23 PM
On Jan 10, 10:14 am, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> But I don't think this applies to condition variables because there must
> be a list of threads blocked on the condition variable.

First, this only applies to process-shared condition variables.
Second, this only applies to code outside the fast path. You can keep
a count of threads blocked in shared memory protected by the mutex. If
it's zero, signal/broadcast is a no-op. For typical semaphores, a
transition to kernel space will always be required in every path.

> Since condition
> variables may be shared between processes, this list must be maintained
> in the kernel. This means that signal and broadcast must enter the
> kernel to wake up waiting threads, as well as waiting on the condition
> must enter the kernel to deschedule the thread off the CPU.

It can mostly be done with shared memory. But one nice thing about
POSIX condition variables is you don't need any of the overhead of
process-wide sharing for the 98% case where they're not shared between
processes.

DS

Chris M. Thomasson

Jan 11, 2010, 2:01:11 PM
"Maxim Yegorushkin" <maxim.ye...@gmail.com> wrote in message
news:4b4a2f26$0$9752$6e1e...@read.cnntp.org...

It could be a compare of an eventcount value. Not exactly sure how
useful it would be, but it would technically qualify as a fast path...

;^)


> Or did you mean a fast path for condition signal?

Mostly signal, which could be a compare of an eventcount value to a "wait
state".

Maxim Yegorushkin

Jan 11, 2010, 4:37:58 PM
On 11/01/10 18:38, David Schwartz wrote:
> On Jan 10, 10:14 am, Maxim Yegorushkin<maxim.yegorush...@gmail.com>
> wrote:
>
>> But I don't think this applies to condition variables because there must
>> be a list of threads blocked on the condition variable.
>
> First, this only applies to process-shared condition variables.
> Second, this only applies to code outside the fast path. You can keep
> a count of threads blocks in shared memory protected by the mutex. If
> it's zero, signal/broadcast is a no op. For typical semaphores, a
> transition to kernel space will always be required in every path.

Very interesting, I did not think about optimizing the no-waiters case.

--
Max

Maxim Yegorushkin

Jan 11, 2010, 5:13:06 PM

Just out of curiosity, I took a quick look into glibc-2.10.1-68-gc87c885/nptl:

* pthread_cond_signal() and pthread_cond_broadcast() do a fast path when
there are no waiters (they always lock a futex though), otherwise they
make a syscall to wake up waiters (waiting on another futex).
* pthread_cond_wait() and pthread_cond_timedwait() always make a syscall
to block on the futex in the kernel.

--
Max

David Schwartz

Jan 11, 2010, 5:33:35 PM
On Jan 11, 1:37 pm, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> Very interesting, I did not think about optimizing the no-waiters case.

One quirk is that when you're in pthread_cond_signal or
pthread_cond_broadcast, you may or may not hold the mutex. Otherwise,
you could just do 'if (cv.waiters == 0) return;'. On some platforms,
however, you can just do 'if (atomic_fetch(&cv.waiters) == 0) return;'
as your first line in pthread_cond_signal and pthread_cond_broadcast.
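As a sketch of that idea (a hypothetical wrapper, not how any particular implementation does it; all names are invented), waiters could advertise themselves in an atomic counter that signal checks first:

```c
#include <pthread.h>
#include <stdatomic.h>

/* Condition-variable wrapper with a no-waiters fast path for signal. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    atomic_int      waiters;  /* number of threads blocked in wait */
} fast_cv;

void fast_cv_wait(fast_cv *cv) {
    /* Caller must hold cv->lock, as with plain pthread_cond_wait(). */
    atomic_fetch_add(&cv->waiters, 1);
    pthread_cond_wait(&cv->cond, &cv->lock);
    atomic_fetch_sub(&cv->waiters, 1);
}

void fast_cv_signal(fast_cv *cv) {
    if (atomic_load(&cv->waiters) == 0)
        return;                       /* fast path: nobody is blocked */
    pthread_cond_signal(&cv->cond);   /* slow path: kernel wakeup */
}
```

Skipping the wake when the counter reads zero is safe because a thread that has not yet blocked cannot be owed a wakeup.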

If there's a race between a waiter and a waker, it doesn't matter
whether you wake or not. If you unlock the mutex and then broadcast
the condition variable, it is only required that threads that were
blocked on the condition variable before you released the mutex get
woken up. Similarly, in the signal case, it is only required that if
there is one or more threads that were blocked on the condition
variable when you released the mutex, at least one of those threads
(or a thread that could have blocked on the condition variable
subsequently) is unblocked.

If you're really, really clever, you can figure out the one case where
changing the order from 'signal then unlock' to 'unlock then signal'
turns code that was guaranteed to work reliably into code
that is not guaranteed to work reliably. (Hint: Consider the set of
possible threads that could be woken with the signal before the unlock
and the set of possible threads that could be woken with the signal
after. How are they different? How can that difference matter?)

Note that no such case is known for broadcast. You can move a
broadcast from before to after or from after to before an unlock with
100% safety. It can only break code that only worked by luck.

It's funny, I was reviewing a book on threading subjects prior to
publication and pointed out the book's technical error when it said
you could always move a signal to after an unlock. I was hoping they
would include a section on why you can't do this and discuss it in
great detail. Sadly, they just removed the comment about changing the
order.

DS

Maxim Yegorushkin

Jan 11, 2010, 6:10:48 PM
On 11/01/10 22:33, David Schwartz wrote:
> On Jan 11, 1:37 pm, Maxim Yegorushkin<maxim.yegorush...@gmail.com>
> wrote:

[]

> If you're really, really clever, you can figure out the one case where
> this makes changing the order from 'signal then unlock' to 'unlock
> then signal' turn code that was guaranteed to work reliably into code
> that is not guaranteed to work reliably. (Hint: Consider the set of
> possible threads that could be woken with the signal before the unlock
> and the set of possible threads that could be woken with the signal
> after. How are they different? How can that difference matter?)

Quite an interesting question.

When unlock-then-signal, I would guess, new waiting threads can skip the
wait loop (while(!c) pthread_cond_wait()) just after the mutex has been
released and before the condition is signalled, which results in
scheduling unfairness when the last thread acquires the mutex by
skipping the wait queue.

This is probably what the documentation below would call unpredictable.

http://www.opengroup.org/onlinepubs/009695399/functions/pthread_cond_signal.html:

...
The pthread_cond_broadcast() or pthread_cond_signal() functions may be
called by a thread whether or not it currently owns the mutex that
threads calling pthread_cond_wait() or pthread_cond_timedwait() have
associated with the condition variable during their waits; however, if
predictable scheduling behavior is required, then that mutex shall be
locked by the thread calling pthread_cond_broadcast() or
pthread_cond_signal().
...

Linux and Solaris pthread_cond_signal man pages copy the above text.

But that should not matter, should it?

> Note that no such case is known for broadcast. You can move a
> broadcast from before to after or from after to before an unlock with
> 100% safety. It can only break code that only worked by luck.
>
> It's funny, I was reviewing a book on threading subjects prior to
> publication and pointed out the book's technical error when it said
> you could always move a signal to after an unlock. I was hoping they
> would include a section on why you can't do this, and discussed it in
> great detail. Sadly, they just removed the comment about changing the
> order.

Do you have a specific use case in mind that breaks when the
signal-release order is reversed?

--
Max

David Schwartz

Jan 11, 2010, 7:08:29 PM
On Jan 11, 3:10 pm, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> When unlock-then-signal, I would guess, new waiting threads can skip the
> wait loop (while(!c) pthread_cond_wait()) just after the mutex has been
> released and before the condition is signalled, which results in
> scheduling unfairness when the last thread acquires the mutex by
> skipping the wait queue.
>
> This is probably what the documentation below would call unpredictable.
>

> http://www.opengroup.org/onlinepubs/009695399/functions/pthread_cond_...

True, but that doesn't matter. Condition variables don't guarantee
scheduling fairness anyway, so any program that relies on fairness or
breaks without it is working by luck.

> Linux and Solaris pthread_cond_signal man pages copy the above text.
>
> But that should not matter, should it?

Nope, it should not.

> > It's funny, I was reviewing a book on threading subjects prior to
> > publication and pointed out the book's technical error when it said
> > you could always move a signal to after an unlock. I was hoping they
> > would include a section on why you can't do this, and discussed it in
> > great detail. Sadly, they just removed the comment about changing the
> > order.

> Do you have a specific use case in mind that breaks when the
> signal-release order is reversed?

Yes. The short version is that once you call pthread_mutex_unlock,
other threads can block on the mutex. So if you signal before
unlocking, you are guaranteed to wake a thread that decided to
block on the condition variable based on the same protected state that
made you decide to signal it. If you unlock and then signal, your
signal can be consumed by a thread that elected to block on the
condition variable based on the state as you left it when you released
the mutex.

DS

David Schwartz

Jan 12, 2010, 10:20:21 AM
I mangled the description. Let's try it again. Consider this code:

pthread_mutex_lock(&mutex);
if (state == A)
{
    state = B;
    pthread_cond_signal(&cond);
}
pthread_mutex_unlock(&mutex);

Now, suppose this is the first code to call pthread_cond_signal. All
other code has called pthread_cond_broadcast after changing the state.
When we call pthread_cond_signal, we can be 100% sure that if there's
a thread that chose to block on the condition variable seeing that the
state was A, we will unblock it.

Now, consider:

pthread_mutex_lock(&mutex);
if (state == A)
{
    state = B;
    pthread_mutex_unlock(&mutex);
    pthread_cond_signal(&cond);
}
else
    pthread_mutex_unlock(&mutex);

But now, consider this. Same assumptions. After we call
pthread_mutex_unlock with the state set to 'B', additional threads
might choose to block on the condition variable. So when we call
pthread_cond_signal, we might wake a thread that chose to block when
the state was A or we might wake a thread that chose to block when the
state was B.

Suppose the only way to change the state from 'B' is for a thread that
chose to block when the state was 'A' to change it. Say one thread
does this, and it's the only thread that chooses to block when the
state is not B:

while (1)
{
    pthread_mutex_lock(&mutex);
    while (state != B) pthread_cond_wait(&cond, &mutex);
    state = A;
    pthread_mutex_unlock(&mutex);
    sleep(1);
}

If you signal then unlock, you are guaranteed to unblock this thread
if it is sleeping. So your state will not stay B forever. If you
unlock then signal, you may wake a different thread that chose to
block when the state was A (after you changed it). So you are not
guaranteed to unblock this thread. The state may stay stuck at 'B'
forever. Your 'signal' may wake the 'wrong' thread.

DS

Maxim Yegorushkin

Jan 12, 2010, 4:11:34 PM

Summarizing, such code depends on waking a specific thread using signal,
or some order of waiting thread queuing on the condition variable. Is
that right?

If so, I don't think that waiting on a condition variable gives such a
guarantee (as you mentioned in the other post, fairness is not
guaranteed). This is probably because there can be spurious signals that
cause waiting threads to wake up and re-block on the condition wait,
thus reordering the waiting thread queue. That breaks such code, because
the logic required to cope with spurious wake-ups is missing.

http://www.opengroup.org/onlinepubs/009695399/functions/pthread_cond_signal.html
actually mentions the required logic:

...
An added benefit of allowing spurious wakeups is that applications are
forced to code a predicate-testing-loop around the condition wait. This
also makes the application tolerate superfluous condition broadcasts or
signals on the same condition variable that may be coded in some other
part of the application. The resulting applications are thus more
robust. Therefore, IEEE Std 1003.1-2001 explicitly documents that
spurious wakeups may occur.
...

--
Max

David Schwartz

Jan 12, 2010, 4:41:32 PM
On Jan 12, 1:11 pm, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> Summarizing, such code depends on waking a specific thread using signal,
> or some order of waiting thread queuing on the condition variable. Is
> that right?

Yes, but making sure that this is the only thread that could possibly
be blocked on the condition variable.

> If so, I don't think that waiting on a condition variable gives such a
> guarantee (as you mentioned in the other post, fairness is not
> guaranteed).

It does. Specifically, it guarantees that if at least one thread is
blocked on the condition variable, at least one of those threads will
be woken. Since the code assures that only one thread could be
blocked, that thread must be woken if it is blocked.

> This is probably because there can be spurious signals that
> cause waiting threads to wake up and re-block on the condition wait,
> thus reordering the waiting thread queue. That breaks such code, because
> the logic required to cope with spurious wake-ups is missing.

Err, huh? There is no such thing as "reordering the waiting thread
queue". Condition variables do not have a queue.

> An added benefit of allowing spurious wakeups is that applications are
> forced to code a predicate-testing-loop around the condition wait. This
> also makes the application tolerate superfluous condition broadcasts or
> signals on the same condition variable that may be coded in some other
> part of the application. The resulting applications are thus more
> robust. Therefore, IEEE Std 1003.1-2001 explicitly documents that
> spurious wakeups may occur.

This code does that. Spurious wakeups, in fact, can save this code
because the failure mode is a missing wakeup.

DS

Maxim Yegorushkin

Jan 12, 2010, 6:23:00 PM
On 12/01/10 21:41, David Schwartz wrote:
> On Jan 12, 1:11 pm, Maxim Yegorushkin<maxim.yegorush...@gmail.com>
> wrote:
>
>> Summarizing, such code depends on waking a specific thread using signal,
>> or some order of waiting thread queuing on the condition variable. Is
>> that right?
>
> Yes, but making sure that this is the only thread that could possibly
> be blocked on the condition variable.

To me there seems to be a contradiction. In the previous post you said:
...


If you signal then unlock, you are guaranteed to unblock this thread
if it is sleeping. So your state will not stay B forever. If you
unlock then signal, you may wake a different thread that chose to
block when the state was A (after you changed it).

...

In short, the use case you gave had more than one waiting thread
involved. Isn't that right?

>> If so, I don't think that waiting on a condition variable gives such a
>> guarantee (as you mentioned in the other post, fairness is not
>> guaranteed).
>
> It does. Specifically, it guarantees that if at least one thread is
> blocked on the condition variable, at least one of those threads will
> be woken. Since the code assures that only one thread could be
> blocked, that thread must be woken if it is blocked.

The definition of fairness I implied is that the waiting threads unblock
from wait in FIFO order when using signal. FIFO order is not required by
POSIX.

>> This is probably because there can be spurious signals that
>> cause waiting threads to wake up and re-block on the condition wait,
>> thus reordering the waiting thread queue. That breaks such code, because
>> the logic required to cope with spurious wake-ups is missing.
>
> Err, huh? There is no such thing as "reordering the waiting thread
> queue". Condition variables do not have a queue.

Hm, if there is no queue, how then would it be possible to unblock
more than one thread blocked in wait using broadcast? Those threads must
be off the scheduler and need to be rescheduled by the kernel in
order to resume.

--
Max

Maxim Yegorushkin

Jan 12, 2010, 6:37:11 PM
On 12/01/10 21:41, David Schwartz wrote:
> On Jan 12, 1:11 pm, Maxim Yegorushkin<maxim.yegorush...@gmail.com>
> wrote:
>
>> Summarizing, such code depends on waking a specific thread using signal,
>> or some order of waiting thread queuing on the condition variable. Is
>> that right?
>
> Yes, but making sure that this is the only thread that could possibly
> be blocked on the condition variable.
>
>> If so, I don't think that waiting on a condition variable gives such a
>> guarantee (as you mentioned in the other post, fairness is not
>> guaranteed).
>
> It does. Specifically, it guarantees that if at least one thread is
> blocked on the condition variable, at least one of those threads will
> be woken. Since the code assures that only one thread could be
> blocked, that thread must be woken if it is blocked.

Thinking more about it (interesting subject indeed), there is an
assumption that signal atomically unblocks one waiting thread and blocks
it on the mutex (so-called wait fusion). And once the signalling thread
releases the mutex, the thread that received the signal acquires the
mutex. Am I thinking in the right direction?

Well, this may well not be how it works. When signalling while the mutex
is locked, there may be another thread already blocked on the mutex, and
after the signalling thread releases the mutex this other thread may
acquire the mutex, and not the one that just received the signal.

--
Max

David Schwartz

Jan 12, 2010, 10:12:40 PM
On Jan 12, 3:37 pm, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> Thinking more about it (interesting subject indeed), there is an
> assumption that signal atomically unblocks one waiting thread and blocks
> it on the mutex (so called wait fusion). And once the signalling thread
> releases the mutex the thread that received the signal acquires the
> mutex. Am I thinking in the right direction?

Such an assumption would be erroneous. There's no guarantee the thread
unblocked by the signal gets the mutex. In fact, if more than one
thread is unblocked (which is explicitly allowed), they can't all get
the mutex first.

> Well, this may well be not how it works. When signalling while the mutex
> is locked, there may be another thread already blocked on the mutex and
> after the signalling thread releases the mutex this other thread may
> acquire the mutex and not the one that just received the signal.

Absolutely.

DS

David Schwartz

Jan 12, 2010, 10:21:04 PM
On Jan 12, 3:23 pm, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> > Yes, but making sure that this is the only thread that could possibly
> > be blocked on the condition variable.

> To me there seems to be a contradiction. In the previous post you said:
> ...
> If you signal then unlock, you are guaranteed to unblock this thread
> if it is sleeping. So your state will not stay B forever. If you
> unlock then signal, you may wake a different thread that chose to
> block when the state was A (after you changed it).
> ...

> In short, the use case you gave had more than one waiting thread
> involved. Isn't that right?

Not unless you move the signal to after the unlock. With the signal
*before* the unlock, only a thread that blocked with the
predicate in the same state we saw it in could be blocked on the
condition variable. With the signal after the unlock, any thread
could be blocked on the condition variable: possibly a thread that was
blocked the whole time, possibly one that blocked seeing the change
we made to the predicate before we unlocked, possibly even a thread
that blocked based on the predicate in a third state that it got to
after multiple threads ran.

> > It does. Specifically, it guarantees that if at least one thread is
> > blocked on the condition variable, at least one of those threads will
> > be woken. Since the code assures that only one thread could be
> > blocked, that thread must be woken if it is blocked.

> The definition of fairness I implied is that the waiting threads unblock
> from wait in FIFO order when using signal. FIFO order is not required by
> POSIX.

Right, POSIX permits threads to acquire the mutex in any order. And it
also permits a signal to wakeup any one or more threads that is
blocked on the condition variable. However, if there can only be one
thread blocked on the condition variable, then that thread is
guaranteed to be woken up if it is in fact blocked. That's all my
example relies on.

> >> This is probably because there can be spurious signals that
> >> cause waiting threads to wake up and re-block on the condition wait,
> >> thus reordering the waiting thread queue. That breaks such code, because
> >> the logic required to cope with spurious wake-ups is missing.

> > Err, huh? There is no such thing as "reordering the waiting thread
> > queue". Condition variables do not have a queue.

> Hm.., if there is no queue, how then would it be possible to unblock
> using broadcast more then one thread blocked in wait? Those threads must
> be off the scheduler and require to be rescheduled by the kernel in
> order to resume.

Sure, but they can be in a bag. POSIX certainly does not require a
queue. The notion of "reordering the wait queue" only makes sense if
you assume there *is* a queue. What you did is assume there was a
queue, even though that's not required, and then effectively assume
there wasn't a queue, by saying it can be reordered. So all you did
was make an assumption and then contradict that assumption.

If there is a queue, it will never be reordered. If it can be
reordered, it's not a queue.
DS

David Schwartz

Jan 12, 2010, 11:14:46 PM
On Jan 12, 7:21 pm, David Schwartz <dav...@webmaster.com> wrote:

> Right, POSIX permits threads to acquire the mutex in any order. And it
> also permits a signal to wakeup any one or more threads that is
> blocked on the condition variable. However, if there can only be one
> thread blocked on the condition variable, then that thread is
> guaranteed to be woken up if it is in fact blocked. That's all my
> example relies on.

To be precise, by "woken up", I mean that it will no longer block
indefinitely on the condition variable. It can, of course, block on
the mutex.

DS

Maxim Yegorushkin

Jan 13, 2010, 1:05:36 PM
On 12/01/10 15:20, David Schwartz wrote:
> I mangled the description. Let's try it again. Consider this code:
>
> pthread_mutex_lock();
> if(state==A)
> {
> state=B;
> pthread_cond_signal();
> pthread_mutex_unlock();
> }
>
> Now, suppose this is the first code to call pthread_cond_signal. All
> other code has called pthread_cond_broadcast after changing the state.
> When we call pthread_cond_signal, we can be 100% sure that if there's
> a thread that chose to block on the condition variable seeing that the
> state was A, we will unblock it.

Just to make sure, does it mean that after a previous broadcast, while
state=A there can be at most one waiting thread?

> Now, consider:
>
> pthread_mutex_lock(&lock);
> if(state==A)
> {
>     state=B;
>     pthread_mutex_unlock(&lock);
>     pthread_cond_signal(&cond);
> }
> else
>     pthread_mutex_unlock(&lock);
>
> But now, consider this. Same assumptions. After we call
> pthread_mutex_unlock with the state set to 'B', additional threads
> might choose to block on the condition variable. So when we call
> pthread_cond_signal, we might wake a thread that chose to block when
> the state was A or we might wake a thread that chose to block when the
> state was B.

I think I see now. It is subtle indeed.

This exposure to the order of signal-unlock can be removed by changing
the state to B in the thread that received the signal, rather than in
the signalling thread.

--
Max
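Max's waiter-side variant can be sketched roughly as follows. This is an illustrative reconstruction, not code from the thread: the `request` flag, the variable names, and the while-loop guarding against spurious wakeups are all assumptions.

```c
/* Sketch: the waiter, not the signaller, performs the A->B state
 * change after being woken. Illustrative names throughout. */
#include <pthread.h>

enum { A, B };
static int state = A;
static int request = 0;                 /* set by the signaller */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void signaller(void)
{
    pthread_mutex_lock(&lock);
    request = 1;                        /* ask for the transition... */
    pthread_cond_signal(&cond);         /* ...but do not change state here */
    pthread_mutex_unlock(&lock);
}

void waiter(void)
{
    pthread_mutex_lock(&lock);
    while (!request)                    /* loop guards against spurious wakeups */
        pthread_cond_wait(&cond, &lock);
    if (state == A)
        state = B;                      /* waiter consumes the transition */
    pthread_mutex_unlock(&lock);
}
```

Because the state change happens under the mutex in the woken thread, the signal-before-unlock vs. signal-after-unlock ordering in the signaller no longer determines which state a newly blocking thread observes.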

Chris M. Thomasson

unread,
Jan 13, 2010, 7:33:41 PM1/13/10
to
"Maxim Yegorushkin" <maxim.ye...@gmail.com> wrote in message
news:4b4ba272$0$9750$6e1e...@read.cnntp.org...

I am looking at the following code:


http://sources.redhat.com/cgi-bin/cvsweb.cgi/~checkout~/libc/nptl/pthread_cond_signal.c?rev=1.4.2.4&content-type=text/plain&cvsroot=glibc


They do indeed lock a mutex. IIRC, a while back I used the stock NPTL
condvar as a slow path for an eventcount, then created a condvar on top of
it all. The eventcount based condvar actually performed better than the
native one. This was definitely true for programs that frequently performed
a lot of signal/broadcast calls.


> * pthread_cond_wait() and pthread_cond_timedwait() always make a syscall
> to block on the futex in the kernel.

Well, it's probably not even worth fast-pathing the waiters.

David Schwartz

unread,
Jan 13, 2010, 8:23:15 PM1/13/10
to
On Jan 13, 10:05 am, Maxim Yegorushkin <maxim.yegorush...@gmail.com>
wrote:

> > When we call pthread_cond_signal, we can be 100% sure that if there's
> > a thread that chose to block on the condition variable seeing that the
> > state was A, we will unblock it.

> Just to make sure, does it mean that after a previous broadcast, while
> state=A there can be at most one waiting thread?

Right. Although you could create the problem even if there's more than
one waiting thread, so long as every thread that could possibly be
waiting would properly handle the signal if it was unblocked from the
condition variable.

> > But now, consider this. Same assumptions. After we call
> > pthread_mutex_unlock with the state set to 'B', additional threads
> > might choose to block on the condition variable. So when we call
> > pthread_cond_signal, we might wake a thread that chose to block when
> > the state was A or we might wake a thread that chose to block when the
> > state was B.

> I think I see now. It is subtle indeed.

> This exposure to the order of signal-unlock can be removed by changing
> the state to B in the thread that received the signal, rather than in
> the signalling thread.

Right, but in a realistic version of this scenario, that would break
the program logic. The signalling thread setting the state to B would
be some way to communicate that work needs to be done but the
signalling thread cannot do it. The purpose of the signalling is to
ensure that the work does get done by the threads that can do it.

DS

frege

unread,
Jan 15, 2010, 4:48:20 PM1/15/10
to
On Jan 9, 11:42 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> "FiLH" <f...@filh.orgie> wrote in message
>
> news:1jc2bri.4ocz9b1g54mxjN%fi...@filh.orgie...
>
> > Hello,

>
> > A quite basic question, I'm wondering for an example where a condition
> > variable is better than a semaphore.
>
> > Of course in case of a  pthread_cond_broadcast there is a huge
> > différence, but in the case of a pthread_cond_signal I don't see a case
> > where a semaphore would not work except for a timed wait.
>
> > I have looked for some and I have not found.
>
> > Thanks in advance
>
> Condition variables are stateless and semaphores are not. Therefore, it can
> indeed be "redundant" to use a semaphore in certain cases. For instance,
> usually a queue implementation knows if it is full or empty. Since the queue
> already provides the necessary information, IMVHO, a semaphore is simply not
> necessary.

On the other hand, if it is a lock-free queue, there is no mutex to
grab once signalled, so the mutex part of cond-var + mutex is
redundant, not the semaphore.

That's why I wish C++0x included semaphore (or something like it).

Tony

Chris M. Thomasson

unread,
Jan 15, 2010, 6:13:18 PM1/15/10
to
"frege" <gottlo...@gmail.com> wrote in message
news:6fa9dfc0-c7a7-427e...@m26g2000yqb.googlegroups.com...

On Jan 9, 11:42 pm, "Chris M. Thomasson" <n...@spam.invalid> wrote:
> > "FiLH" <f...@filh.orgie> wrote in message
> >
> > news:1jc2bri.4ocz9b1g54mxjN%fi...@filh.orgie...
> >
> > > Hello,
> >
> > > A quite basic question, I'm wondering for an example where a condition
> > > variable is better than a semaphore.
> >
> > > Of course in case of a pthread_cond_broadcast there is a huge
> > > différence, but in the case of a pthread_cond_signal I don't see a
> > > case
> > > where a semaphore would not work except for a timed wait.
> >
> > > I have looked for some and I have not found.
> >
> > > Thanks in advance
> >
> > Condition variables are stateless and semaphores are not. Therefore, it
> > can
> > indeed be "redundant" to use a semaphore in certain cases. For instance,
> > usually a queue implementation knows if it is full or empty. Since the
> > queue
> > already provides the necessary information, IMVHO, a semaphore is simply
> > not
> > necessary.

> On the other hand, if it is a lock-free queue, there is no mutex to
> grab once signalled, so the mutex part of cond-var + mutex is
> redundant, not the semaphore.

It depends on where you place the mutex acquisition/release and condvar wait
logic. Generally, you would put the semaphore in the fast-path; something
like:
_______________________________________________________________
struct waitable_stack
{
    lock_free_stack<node*> m_stack;
    semaphore m_waitset;


    node* pop()
    {
        m_waitset.wait();

        return m_stack.try_pop(); // guaranteed to succeed.
    }
};
_______________________________________________________________


That's fine. However, you can use a mutex/condvar and a single word to build
an eventcount which has a general usage pattern like:
_______________________________________________________________
struct waitable_stack
{
    lock_free_stack<node*> m_stack;
    eventcount m_waitset;


    node* pop()
    {
        node* n;

        while (! (n = m_stack.try_pop()))
        {
            // slow-path...

            eventcount::key_type waitkey = m_waitset.get();

            if ((n = m_stack.try_pop())) break;

            m_waitset.wait(waitkey);
        }

        return n;
    }
};
_______________________________________________________________


Yes, it's more complicated. But notice one key difference? If the stack is
not empty, then the wait logic is completely avoided...
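An eventcount of the sort used above can be built from a mutex, a condvar, and a single counter. The sketch below assumes C++11 and is a minimal illustration of the interface Chris uses (get/wait/signal); it is not his implementation, and a production version would avoid taking the mutex on the signal fast path.

```cpp
// Minimal eventcount sketch, assuming C++11. Illustrative only.
#include <condition_variable>
#include <mutex>

class eventcount {
    std::mutex m;
    std::condition_variable cv;
    unsigned count = 0;                  // generation counter
public:
    using key_type = unsigned;

    key_type get() {                     // capture the current generation
        std::lock_guard<std::mutex> l(m);
        return count;
    }
    void wait(key_type key) {            // block until the generation moves on
        std::unique_lock<std::mutex> l(m);
        while (count == key)             // re-check guards spurious wakeups
            cv.wait(l);
    }
    void signal() {                      // advance the generation, wake waiters
        { std::lock_guard<std::mutex> l(m); ++count; }
        cv.notify_all();
    }
};
```

The key property is the one exploited in the pop() loop above: a waiter that captured a key, then found the stack non-empty, never calls wait() at all; and a waiter that does call wait() cannot miss a signal() issued after its get(), because the counter comparison detects it.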


> That's why I wish C++0x included semaphore (or something like it).

Well, you can build one, but you won't be able to use it to post from a
signal handler. FWIW, C++0x has everything one needs to build an eventcount,
which is my conditional blocking primitive of choice for non-blocking
algorithms.

Francis Moreau

unread,
Feb 17, 2010, 4:57:10 PM2/17/10
to
[ sorry for coming late on this thread ]

On Jan 10, 6:31 am, David Schwartz <dav...@webmaster.com> wrote:
> On Jan 9, 4:04 pm, f...@filh.orgie (FiLH) wrote:
>
> > What is inside the kernel ?
>
> > FiLH
>
> On typical UNIX platforms, semaphores are implemented entirely in the
> kernel and condition variables are primarily implemented in user
> space. This generally means a semaphore's fast path still involves a
> kernel/user transition while a condition variable's does not.
>

I don't know if you consider Linux a 'typical' Unix platform, but
this doesn't seem to be true on that platform. sem_wait()'s fast path
doesn't involve the kernel at all.
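The uncontended path Francis describes can be sketched as an atomic decrement in user space: the kernel (via futex) is only involved when the count is already zero and the caller must sleep. This is an illustrative sketch in the spirit of glibc's Linux sem_wait, not its actual code.

```c
/* Sketch of a futex-style semaphore's user-space fast path.
 * Returns 0 on success, -1 if the caller would have to block
 * (at which point a real implementation issues a futex wait). */
#include <stdatomic.h>

int sem_trywait_fast(atomic_int *value)
{
    int v = atomic_load(value);
    while (v > 0) {
        if (atomic_compare_exchange_weak(value, &v, v - 1))
            return 0;   /* acquired with no kernel/user transition */
        /* CAS failure reloaded v; loop re-checks it */
    }
    return -1;          /* count is zero: slow path would sleep in the kernel */
}
```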
