
"Dreadlock" Revisited


Jason Hutchens

Mar 6, 2003, 12:27:46 PM
I recently posted some sample code demonstrating a problem I've been
grappling with: one thread that acquires and releases a lock in a very
tight loop, sleeping for a second once it has acquired the lock,
prevents anyone else from acquiring the lock.

Follow-ups to my original posting were most insightful, and I thank
David Butenhof and Steve Watt for their comments.

After a bit of digging around, I found the following on our system in
/usr/share/doc/glibc-common-2.2.5/FAQ-threads.html:

------------------------------------------------------------
D.6: Scheduling seems to be very unfair when there is strong
contention on a mutex: instead of giving the mutex to each thread in
turn, it seems that it's almost always the same thread that gets the
mutex. Isn't this completely broken behavior?
------------------------------------------------------------

Now, that's exactly the question I was asking. Great! The answer is
even more inspiring:

------------------------------------------------------------
That behavior has mostly disappeared in recent releases of
LinuxThreads (version 0.8 and up). It was fairly common in older
releases, though. What happens in LinuxThreads 0.7 and before is the
following: when a thread unlocks a mutex, all other threads that were
waiting on the mutex are sent a signal which makes them runnable.
However, the kernel scheduler may or may not restart them immediately.
If the thread that unlocked the mutex tries to lock it again
immediately afterwards, it is likely that it will succeed, because the
threads haven't yet restarted. This results in an apparently very
unfair behavior, when the same thread repeatedly locks and unlocks the
mutex, while other threads can't lock the mutex.

In LinuxThreads 0.8 and up, pthread_unlock restarts only one waiting
thread, and pre-assign the mutex to that thread. Hence, if the thread
that unlocked the mutex tries to lock it again immediately, it will
block until other waiting threads have had a chance to lock and unlock
the mutex. This results in much fairer scheduling.
------------------------------------------------------------

Fantastic! Curious about which version we had, I looked at
/usr/share/doc/glibc-common-2.2.5/Changes.threads on the same system:

------------------------------------------------------------
Release 0.9:
- more ports (SH, IA-64, s390)
- many bug fixes
- timed sync object wait functions
- barrier implementation
- spinlocks implementation
- thread register on x86
- variable stack size and position on some platforms

Release 0.8:
(ehmm, forgot to update, don't know anymore)

Release 0.7:
...
------------------------------------------------------------

Errr... so, we are already running a version that purports to fix the
problem. What gives? Did release 0.9 undo the forgotten changes made
in release 0.8? Or did the implementation of spinlocks screw things
up? (One of the blokes here reckons spinlocks are to blame for our
annoying non-problem.)

Just to clear things up, we accept that our over-cautious approach to
locking (if in doubt, lock it) is bad, mmmkay, but we would love to
have a "fix" that allowed us to continue down this misguided path
without changing our locking philosophy. The FAQ does, after all,
admit that:

------------------------------------------------------------
Notice however that even the old "unfair" behavior is perfectly
acceptable with respect to the POSIX standard: for the default
scheduling policy, POSIX makes no guarantees of fairness, such as "the
thread waiting for the mutex for the longest time always acquires it
first". Properly written multithreaded code avoids that kind of heavy
contention on mutexes, and does not run into fairness problems. If you
need scheduling guarantees, you should consider using the real-time
scheduling policies SCHED_RR and SCHED_FIFO, which have precisely
defined scheduling behaviors.
------------------------------------------------------------
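For what it's worth, asking for one of those real-time policies would
look roughly like this (a minimal sketch of my own, not something we
actually run; note that SCHED_FIFO normally requires root privileges,
and the priority value is arbitrary):

#include <pthread.h>
#include <sched.h>

static void *worker(void *arg)
{
    /* ... the contended loop would live here ... */
    return 0;
}

int start_fifo_thread(pthread_t *tid)
{
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    param.sched_priority = 10;
    pthread_attr_setschedparam(&attr, &param);

    return pthread_create(tid, &attr, worker, 0);
}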

Anyone care to add their tuppence worth? Cheers,

- Jason Hutchens.

Sean P. Burke

Mar 6, 2003, 7:36:24 PM

jason.h...@nautronix.com.au (Jason Hutchens) writes:

>
> Just to clear things up, we accept that our over-cautious approach to
> locking (if in doubt, lock it) is bad, mmmkay, but we would love to
> have a "fix" that allowed us to continue down this misguided path
> without changing our locking philosophy. The FAQ does, after all,
> admit that:
>
> ------------------------------------------------------------
> Notice however that even the old "unfair" behavior is perfectly
> acceptable with respect to the POSIX standard: for the default
> scheduling policy, POSIX makes no guarantees of fairness, such as "the
> thread waiting for the mutex for the longest time always acquires it
> first". Properly written multithreaded code avoids that kind of heavy
> contention on mutexes, and does not run into fairness problems. If you
> need scheduling guarantees, you should consider using the real-time
> scheduling policies SCHED_RR and SCHED_FIFO, which have precisely
> defined scheduling behaviors.
> ------------------------------------------------------------
>
> Anyone care to add their tuppence w'th? Cheers,

I think the FAQ really says it all, but if you want
another approach, you could use a second mutex.

As I recall, you have a thread which runs in a tight
loop on some data structure that is protected by a
mutex. Once per iteration, you briefly release the
mutex, in order to give another thread the opportunity
to access the data structure, but this is not dependable.

So, write your loop like this:

for (;;)
{
    lock(mutex_A);

    defrobulate(big_array);

    unlock(mutex_A);

    lock(mutex_B);
    unlock(mutex_B);
}

Any other thread that wishes to interrupt your loop
merely has to lock mutex_B, then mutex_A, and have
its way with the data.
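
Spelled out with real pthreads calls, the idea is roughly this (a
sketch only; defrobulate() and big_array are stand-ins from above, so
I've stubbed them to keep it self-contained):

#include <pthread.h>

static int  big_array[1024];                  /* stand-in data         */
static void defrobulate(int *a) { (void)a; }  /* stand-in for the work */

static pthread_mutex_t mutex_A = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex_B = PTHREAD_MUTEX_INITIALIZER;

static void *busy_thread(void *arg)
{
    for (;;)
    {
        pthread_mutex_lock(&mutex_A);
        defrobulate(big_array);
        pthread_mutex_unlock(&mutex_A);

        /* If someone holds mutex_B, we block here while mutex_A is
           free, so they are guaranteed a chance to take it. */
        pthread_mutex_lock(&mutex_B);
        pthread_mutex_unlock(&mutex_B);
    }
    return 0;
}

static void *other_thread(void *arg)
{
    pthread_mutex_lock(&mutex_B);   /* park the busy loop        */
    pthread_mutex_lock(&mutex_A);   /* then take the data safely */
    /* ... have our way with big_array here ... */
    pthread_mutex_unlock(&mutex_A);
    pthread_mutex_unlock(&mutex_B);
    return 0;
}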

-SEan

--
"To do good is noble, but to teach others
to do good is nobler, and less trouble." - Mark Twain


David Butenhof

Mar 7, 2003, 1:31:36 PM
Jason Hutchens wrote:

> In LinuxThreads 0.8 and up, pthread_unlock restarts only one waiting
> thread, and pre-assign the mutex to that thread. Hence, if the thread
> that unlocked the mutex tries to lock it again immediately, it will
> block until other waiting threads have had a chance to lock and unlock
> the mutex. This results in much fairer scheduling.

"Fair" scheduling is a bad idea. It's inefficient. Fair scheduling can be
built by APPLICATION mechanisms where it's necessary for some particular
application algorithm. Efficient threaded apps are symmetrical and
cooperative, not competitive, and it's always better for the thread with
the "hot cache" to take the mutex than to push it aside for another thread
that was previously blocked. Always.

You can build FAIR on top of efficient where you really want it. (You should
never NEED it.) You cannot build EFFICIENT on top of FAIR.

There would be nothing wrong with building a FAIR mutex type to support
applications that think they want it. (This is much like RECURSIVE. It's
always the wrong answer, but sometimes the wrong answer is the right
answer.) Neither should ever be made the default.

--
/--------------------[ David.B...@hp.com ]--------------------\
| Hewlett-Packard Company Tru64 UNIX & VMS Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\----[ http://homepage.mac.com/dbutenhof/Threads/Threads.html ]---/

David Schwartz

Mar 7, 2003, 2:20:08 PM
Jason Hutchens wrote:

> In LinuxThreads 0.8 and up, pthread_unlock restarts only one waiting
> thread, and pre-assign the mutex to that thread. Hence, if the thread
> that unlocked the mutex tries to lock it again immediately, it will
> block until other waiting threads have had a chance to lock and unlock
> the mutex. This results in much fairer scheduling.

Please, tell me you're kidding! If not, please tell me where I can get
diff's to stop this idiocy!

DS

Alexander Terekhov

Mar 10, 2003, 4:45:50 AM

David Schwartz wrote:
>
> Jason Hutchens wrote:

[... LinuxThreads: fair mutexes ...]

> Please, tell me you're kidding! If not, please tell me where I can get
> diff's to stop this idiocy!

Things like NPTL/NGPT aside for a moment, try PTHREAD_MUTEX_ADAPTIVE_NP
for the "stop-this-idiocy" locks.

regards,
alexander.

Jason Hutchens

Mar 10, 2003, 9:01:43 PM
> Please, tell me you're kidding! If not, please tell me where I can get
> diff's to stop this idiocy!

No, I'm not kidding, although I'm still trying to work out whether
Release 0.9 contains the "fix" mentioned in the FAQ. Could you please
explain why you think "fair" thread scheduling is idiotic?

This is getting more and more interesting. I feel like a detective,
unversed in the subject matter, trying to dig down and shed some light
on why certain things are as they are. At the moment, I'm working on
"The Case of the Disappearing Hack".

- Jason Hutchens.

Jason Hutchens

Mar 10, 2003, 9:07:13 PM
David Butenhof <David.B...@hp.com> wrote in message news:<3e68...@usenet01.boi.hp.com>...
>
> "Fair" scheduling is a bad idea. It's inefficient... and it's always better
> for the thread with the "hot cache" to take the mutex... Always.

>
> You can build FAIR on top of efficient where you really want it. (You should
> never NEED it.) You cannot build EFFICIENT on top of FAIR.
>
> There would be nothing wrong with building a FAIR mutex type to support
> applications that think they want it. (This is much like RECURSIVE. It's
> always the wrong answer, but sometimes the wrong answer is the right
> answer.) Neither should ever be made the default.

David, thanks once again for your invaluable comments. I have
personally been convinced that "fair" thread scheduling is
undesirable, but I do think that it is probably the best solution to
our trigger-happy locking philosophy. Besides, I'm becoming more and
more curious as to whether or not the LinuxThreads library implements
"fair" scheduling. It's actually quite difficult to find some things
out without looking at the source...

- Jason Hutchens.

Jason Hutchens

Mar 10, 2003, 9:09:46 PM
sbu...@dev0.welkyn.com (Sean P. Burke) wrote in message news:<82of4om...@dev0.welkyn.com>...

>
> I think the FAQ really says it all, but if you want
> another approach, you could use a second mutex...

Thanks for your offer of help, SEan, but we have already implemented
an acceptable solution to our Dreadlock problem (as illustrated by the
example code I originally posted). I'm now more concerned with
understanding the "fair" scheduling that may or may not exist in the
current release of the LinuxThreads library (and whether or not it
should exist). Cheers,

- Jason Hutchens.

David Schwartz

Mar 10, 2003, 9:33:19 PM
Jason Hutchens wrote:

> > Please, tell me you're kidding! If not, please tell me where I can get
> > diff's to stop this idiocy!

> No, I'm not kidding, although I'm still trying to work out whether
> Release 0.9 contains the "fix" mentioned in the FAQ. Could you please
> explain why you think "fair" thread scheduling is idiotic?

Because threads don't mind if you treat them unfairly. They don't file
union grievances or lodge official complaints. The programmer has total
control over what the threads do when they get control and has priority
mechanisms if he needs more control over thread scheduling.

Consider in this case two threads each doing something like the
following:

int i;
for (i = 0; i < 100; i++)
{
    lock-mutex();
    unlock-mutex();
}

With the 'fairness' rule, 200 context switches will be required to
complete both loops. Without it, just one. A threads implementation
should *never* cause a thread to yield its timeslice early unless that
is totally unavoidable. The fairness comes from the scheduler, giving
each thread a timeslice.

DS

Jason Hutchens

Mar 11, 2003, 3:11:41 AM
David Schwartz <dav...@webmaster.com> wrote in message news:<3E6D4AEF...@webmaster.com>...

> Jason Hutchens wrote:
>
> > Could you please explain why you think "fair" thread scheduling
> > is idiotic?
>
> Because threads don't mind if you treat them unfairly. They don't file
> union grievances or lodge official complaints. The programmer has total
> control over what the threads do when they get control and has priority
> mechanisms if he needs more control over thread scheduling.

Thanks for your reply. I am beginning to suspect that the linuxthreads
FAQ included with the latest release of glibc is out of date ;^)

> Consider in this case two threads each doing something like the
> following:
>
> int i;
> for(i=0; i<100; i++)
> {
> lock-mutex();
> unlock-mutex();
> }
>
> With the 'fairness' rule, 200 context switches will be required to
> complete both loops.

Right, but what if both threads are doing:

for (;;)
{
    lock-mutex();
    unlock-mutex();
}

No context switches will occur, and one of the two threads will
"dreadlock". How do I use the aforementioned "priority mechanisms" to
fix this without falling into the constantly-switching-context trap?

- Jason Hutchens.

Jason Hutchens

Mar 11, 2003, 3:26:37 AM
Alexander Terekhov <tere...@web.de> wrote in message news:<3E6C5ECE...@web.de>...

> David Schwartz wrote:
> >
> > Please, tell me you're kidding! If not, please tell me where I can get
> > diff's to stop this idiocy!
>
> Things like NPTL/NGPT aside for a moment, try PTHREAD_MUTEX_ADAPTIVE_NP
> for the "stop-this-idiocy" locks.

Alexander, are you confirming that the "fast" mutex (including,
presumably, PTHREAD_MUTEX_RECURSIVE_NP) does not feature the "fair"
scheduling mentioned in the linuxthreads FAQ, and that the "slow"
mutex (i.e. PTHREAD_MUTEX_ERRORCHECK_NP) does? Because this would
explain why one fixes my dreadlock woes and t'other doesn't. I have
been trying variously to look through the latest linuxthreads sources
and to build glibc, but enlightenment is difficult to attain.

- Jason Hutchens.

David Schwartz

Mar 11, 2003, 4:07:12 AM
Jason Hutchens wrote:

> Right, but what if both threads are doing:
>
> for(;;)
> {
> lock-mutex();
> unlock-mutex();
> }
>
> No context switches will occur, and one of the two threads will
> "dreadlock". How do I use the aforementioned "priority mechanisms" to
> fix this without falling into the constantly-swicthing-context trap?

You have described a fundamental problem for which there is no solution
other than "don't do that". The problem is, how do you fairly distribute
an infinite amount of work? It is not possible.

If you have 40 apples to load onto two trucks, you can put 20 on each.
But if you have an infinite number of apples to load onto two trucks, no
number put on the first truck is any better than any other.

This is an extremely rare case. If you do it, you know you're doing it.
So it's up to you to put in something like:

if (i++ > 100)
  { i = 0; sched_yield(); }

After the unlocking of the mutex.
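
In context, the loop would look something like this (sketch only; the
threshold of 100 is arbitrary, and 'mutex' stands for whatever lock
you are hammering):

#include <pthread.h>
#include <sched.h>

extern pthread_mutex_t mutex;

void *busy_thread(void *arg)
{
    int i = 0;

    for (;;)
    {
        pthread_mutex_lock(&mutex);
        /* ... do the work ... */
        pthread_mutex_unlock(&mutex);

        if (i++ > 100)
        {
            i = 0;
            sched_yield();   /* periodically step aside for waiters */
        }
    }
    return 0;
}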

Or you simply assign the thread that has an infinite amount of work to
do a lower priority. Whatever makes sense in your application.

In any event, all the mutex fairness in the world won't solve the
problem. Consider if this lock/unlock cycle is inside a larger
lock/unlock cycle. Yielding at the unlock or blocking at the lock will
increase the dreadlock over the larger mutex.

The fact is, the threads library can't read the programmer's mind. So
it shouldn't try to, especially if that makes the common cases much
worse for the benefit of excruciatingly rare cases.

DS

Alexander Terekhov

Mar 11, 2003, 4:57:17 AM

Jason Hutchens wrote:
>
> Alexander Terekhov <tere...@web.de> wrote in message news:<3E6C5ECE...@web.de>...
> > David Schwartz wrote:
> > >
> > > Please, tell me you're kidding! If not, please tell me where I can get
> > > diff's to stop this idiocy!
> >
> > Things like NPTL/NGPT aside for a moment, try PTHREAD_MUTEX_ADAPTIVE_NP
> > for the "stop-this-idiocy" locks.
>
> Alexander, are you confirming that the "fast" mutex (including,
> presumable, PTHREAD_MUTEX_RECURSIVE_NP) does not feature the "fair"
> scheduling mentioned in the linuxthreads FAQ, and that the "slow"
> mutex (i.e. PTHREAD_MUTEX_ERRORCHECK_NP) does?

LinuxThreads defines the following mutex types and their mapping to
the standard POSIX/SUS mutexes (__USE_GNU "compatibility" aside):

enum
{
  PTHREAD_MUTEX_TIMED_NP,
  PTHREAD_MUTEX_RECURSIVE_NP,
  PTHREAD_MUTEX_ERRORCHECK_NP,
  PTHREAD_MUTEX_ADAPTIVE_NP
#ifdef __USE_UNIX98
  ,
  PTHREAD_MUTEX_NORMAL = PTHREAD_MUTEX_TIMED_NP,
  PTHREAD_MUTEX_RECURSIVE = PTHREAD_MUTEX_RECURSIVE_NP,
  PTHREAD_MUTEX_ERRORCHECK = PTHREAD_MUTEX_ERRORCHECK_NP,
  PTHREAD_MUTEX_DEFAULT = PTHREAD_MUTEX_NORMAL
#endif
};

AFAIK, PTHREAD_MUTEX_TIMED_NP and PTHREAD_MUTEX_ERRORCHECK_NP mutexes
DO "hand off" the ownership on unlock. OTOH, PTHREAD_MUTEX_ADAPTIVE_NP
and PTHREAD_MUTEX_RECURSIVE_NP mutexes DON'T "hand off" the ownership
on unlock.


> Because this would
> explain why one fixes my dreadlock woes and t'other doesn't. I have
> been trying variously to look through the latest linuxthread sources
> and to build glibc, but enlightenment is difficult to attain.

NGPT aside for a moment, the upcoming LinuxThreads replacement/"new"
library [I mean NPTL... Drepper's persistent denial with respect to
"fixing" the futex-based sync.stuff aside] won't have the "hand-off"
silliness. So, *IF* [for some rather strange reason that is beyond my
understanding] you still want to have a "fully portable and guaranteed
hand off" thing, just code your own {dread}locks using a {presumably}
"fast" mutex and condition variable(s).

regards,
alexander.

Jason Hutchens

Mar 11, 2003, 8:34:46 PM
Alexander Terekhov <tere...@web.de> wrote in message news:<3E6DB2FD...@web.de>...

>
> AFAIK, PTHREAD_MUTEX_TIMED_NP and PTHREAD_MUTEX_ERRORCHECK_NP mutexes
> DO "hand off" the ownership on unlock. OTOH, PTHREAD_MUTEX_ADAPTIVE_NP
> and PTHREAD_MUTEX_RECURSIVE_NP mutexes DON'T "hand off" the ownership
> on unlock.

Thanks for painting me a picture; I really appreciate it. Pity this
kind of information needs to be prised from the minds of gurus rather
than being present in the documentation, though ;^)

As I've mentioned earlier, we have implemented an acceptable solution
to our mutex contention problem. Now I'm just curious...

- Jason Hutchens.

Jason Hutchens

Mar 11, 2003, 8:41:15 PM
David Schwartz <dav...@webmaster.com> wrote in message news:<3E6DA740...@webmaster.com>...

>
> You have described a fundamental problem for which there is no solution
> other than "don't do that". The problem is, how do you fairly distribute
> an infinite amount of work? It is not possible.

Yes, I understand the problems with scheduling. It's just that we
found it a bit odd that one thread could completely block out another
thread that was waiting on a lock if the first thread released and
re-acquired a mutex in a tight loop. Thanks to everyone's help, I now
understand why this happens, and why we shouldn't rely on "fair"
scheduling.

Thanks once again for your patience and sound advice,

- Jason Hutchens.

Casper H.S. Dik

Mar 12, 2003, 3:34:05 AM
jason.h...@nautronix.com.au (Jason Hutchens) writes:

>David Schwartz <dav...@webmaster.com> wrote in message news:<3E6DA740...@webmaster.com>...
>>
>> You have described a fundamental problem for which there is no solution
>> other than "don't do that". The problem is, how do you fairly distribute
>> an infinite amount of work? It is not possible.

>Yes, I understand the problems with scheduling. It's just that we
>found it a bit odd that one thread could completely block out another
>thread that was waiting on a lock if the first thread released and
>re-acquired a mutex in a tight loop. Thanks to everyone's help, I now
>understand why this happens, and why we shouldn't rely on "fair"
>scheduling.


Problems like this also occur on a hardware level; when one CPU in
a multi-CPU system spins locking/unlocking a particular lock,
there's no guarantee that another CPU spinning on that lock waiting
for it to become available will ever see it available.

Casper
--
Expressed in this posting are my opinions. They are in no way related
to opinions held by my employer, Sun Microsystems.
Statements on Sun products included here are not gospel and may
be fiction rather than truth.

David Schwartz

Mar 12, 2003, 6:24:45 AM
Jason Hutchens wrote:

> > Things like NPTL/NGPT aside for a moment, try PTHREAD_MUTEX_ADAPTIVE_NP
> > for the "stop-this-idiocy" locks.

> Alexander, are you confirming that the "fast" mutex (including,
> presumable, PTHREAD_MUTEX_RECURSIVE_NP) does not feature the "fair"
> scheduling mentioned in the linuxthreads FAQ, and that the "slow"
> mutex (i.e. PTHREAD_MUTEX_ERRORCHECK_NP) does? Because this would
> explain why one fixes my dreadlock woes and t'other doesn't. I have
> been trying variously to look through the latest linuxthread sources
> and to build glibc, but enlightenment is difficult to attain.

So are we saying the default mutex will work okay without the idiocy
and people don't need to do anything special?

DS

David Schwartz

Mar 12, 2003, 6:39:15 AM

By the way, I just looked closely at the unlock code from linuxthreads
for glibc 2.2.5 (which is LinuxThreads 0.8) and couldn't find any such
code. I did, of course, find code that finds the oldest waiting, highest
priority thread and *wakes* it, but that doesn't prevent other threads
from taking ownership. I can find no code where thread A gives ownership
of a mutex to thread B. I both traced the various unlock functions and
grep'ped for changes to 'owner'.

DS

Alexander Terekhov

Mar 12, 2003, 9:42:45 AM

See the __pthread_alt_*() logic in the spinlock.c.

regards,
alexander.

--
C:\glibc-linuxthreads-2.3.2\linuxthreads

Jason Hutchens

Mar 12, 2003, 8:16:52 PM
David Schwartz <dav...@webmaster.com> wrote in message news:<3E6F1C63...@webmaster.com>...

> By the way, I just looked closely at the unlock code from linuxthreads
> for glibc 2.2.5 (which is LinuxThreads 0.8) and couldn't find any such
> code...

It seems that glibc 2.1.1 was the first version to feature
LinuxThreads 0.8. Later versions have been changed significantly. You
might want to do diffs between this version and LinuxThreads in glibc
2.0.6. The offending item in the ChangeLog is, I believe, the one
dated 1998-06-09, which states "optimise mutex to wake up only one
thread".

What I don't understand is why this functionality should exist for
only some mutex types. Shouldn't it be all or nothing (presumably
nothing, judging by the majority opinion here)?

- Jason Hutchens.

David Schwartz

Mar 13, 2003, 2:24:35 PM

I did. Either I'm dense or it's not in the version I'm looking at
(2.2.5), which is LinuxThreads 0.8.

DS

David Schwartz

Mar 13, 2003, 2:27:36 PM, to Jason Hutchens

I see that code, and it's correct. You *should* only wake up one
thread. The issue has to do with when a thread that was already running,
but wasn't the one that was woken, tries to acquire the mutex.

I think this whole thread may be based upon confusion. Someone alleged
that LinuxThreads pre-assigns the mutex to the woken thread, which would
be a horrible pessimization.

Can anyone actually paste the lines of code that do this?

DS

Jason Hutchens

Mar 13, 2003, 10:50:52 PM
> I think this whole thread may be based upon confusion. Someone alleged
> that LinuxThreads pre-assigns the mutex to the woken thread, which would
> be a horrible pessimization.

The person who alleges this is Xavier Leroy, the original author of
LinuxThreads, and he does so in section D.6 of the LinuxThreads FAQ,
which is included in all glibc distributions of LinuxThreads,
including the most recent one. To quote:

"In LinuxThreads 0.8 and up, pthread_unlock restarts only one waiting
thread, and pre-assign the mutex to that thread. Hence, if the thread
that unlocked the mutex tries to lock it again immediately, it will
block until other waiting threads have had a chance to lock and unlock
the mutex. This results in much fairer scheduling."

Now, LinuxThreads 0.8 was introduced in glibc 2.1.1. Looking at
pthread_lock and pthread_unlock in that code does reveal pre-assigned
mutex behaviour. Essentially, the two routines are structured thusly
(simplified to reflect the case where only two threads exist):

pthread_lock(lock)
{
    if (lock->status == 0)
    {
        lock->status = 1;
    }
    else
    {
        self->next = lock->status;
        lock->status = self;
        suspend(self);
    }
}

pthread_unlock(lock)
{
    if (lock->status == 1)
    {
        lock->status = 0;
    }
    else
    {
        wake = lock->status;
        lock->status = wake->next;
        wake->next = 0;
        restart(wake);
    }
}

With two threads contesting the lock, clearly they will be assigned
the lock alternately. That is, if the order of events is as follows:

Thread 1 : pthread_lock -> get the lock
Thread 2 : pthread_lock -> suspend
Thread 1 : pthread_unlock -> Thread 2 restarted
Thread 1 : pthread_lock -> suspend
Thread 2 : pthread_unlock -> Thread 1 restarted
Thread 2 : pthread_lock -> suspend

So that's our "fair" thread scheduling, exactly as mentioned in the
FAQ. Now we can ask ourselves whether or not someone detected this
"idiocy" and decided to remove it (without updating the FAQ, of
course).

As mentioned earlier, the version of LinuxThreads in glibc 2.3.2 has
two versions of pthread_lock and pthread_unlock. One is used by
PTHREAD_MUTEX_ADAPTIVE_NP and PTHREAD_MUTEX_RECURSIVE_NP while the
other is used by PTHREAD_MUTEX_ERRORCHECK_NP and
PTHREAD_MUTEX_TIMED_NP. The code itself is considerably more difficult
to understand than that of glibc 2.1.1 (well, for a newbie like
myself, anyway), so I'll leave it to the gurus to decide whether or
not the latest version exhibits the same behaviour in pthread_alt_lock
and pthread_alt_unlock.

Of course, I've probably made a glaringly obvious mistake which I'm
sure that one of you will happily point out to me. Cheers,

- Jason Hutchens.

Alexander Terekhov

Mar 14, 2003, 5:27:39 AM

Jason Hutchens wrote:
[...]

> As mentioned earlier, the version of LinuxThreads in glibc 2.3.2 has
> two versions of pthread_lock and pthread_unlock. One is used by
> PTHREAD_MUTEX_ADAPTIVE_NP and PTHREAD_MUTEX_RECURSIVE_NP while the
> other is used by PTHREAD_MUTEX_ERRORCHECK_NP and
> PTHREAD_MUTEX_TIMED_NP. The code itself is considerably more difficult
> to understand than that of glibc 2.1.1 (well, for a newbie like
> myself, anyway), so I'll leave it to the gurus to decide whether or
> not the latest version exhibits the same behaviour in pthread_alt_lock
> and pthread_alt_unlock.

You don't need to be a guru, really. Just look at __pthread_alt_lock().
With or without CAS, its logic is basically:

  ...
  if (suspend_needed)
    suspend (self);
  return;

regards,
alexander.

Alexander Terekhov

Mar 14, 2003, 6:02:21 AM

Okay, I guess it's time to drop this:

http://groups.google.com/groups?selm=slrn8sutp2.e4g.kaz%40ashi.FootPrints.net

<quote author=Kaz>

.
.
.
Default mutexes in Linuxthreads have the following behavior: if one or more
thread is queued, waiting on the mutex, the unlock operation will atomically
transfer the ownership of the mutex to the highest priority thread. Among
equal priority threads, the one that has been waiting the longest gets it.

This means that a context switch has to take place to the new owner; in
that time, nobody else can acquire the mutex. That context switch takes
a relatively long time.
.
.
.
The LinuxThreads in glibc 2.2 (which is now in beta testing) features
a new kind of mutex attribute (PTHREAD_MUTEX_ADAPTIVE_NP) for better
throughput, and adaptive behavior on SMP. The new lock does not atomically
transfer ownership to a waiting thread, but allows for unfair competition.

</quote>

http://groups.google.com/groups?selm=slrn99ao0i.afq.kaz%40ashi.FootPrints.net

<quote author=Kaz>

.
.
.
Ulrich Drepper, Xavier Leroy and I have discussed this matter at length
some months since and decided to leave the mutexes as they are, with
the fairness behavior. You can gain access to alternate behaviors by
using one of the non-portable mutex types. In glibc2.2 you have the
following:
.
.
.

</quote>

And, finally: apropos "confusion" (apart from "just a few" rather
confusing bits in the articles referenced above)...

http://groups.google.com/groups?threadm=etai7.108188%24B37.2381726%40news1.rdc1.bc.home.com
(Subject: Re: LinuxThreads question)

regards,
alexander.

Joseph Seigh

Mar 14, 2003, 6:37:44 AM

Jason Hutchens wrote:
>
> > I think this whole thread may be based upon confusion. Someone alleged
> > that LinuxThreads pre-assigns the mutex to the woken thread, which would
> > be a horrible pessimization.
>
> The person who alleges this is Xavier Leroy, the original author of
> LinuxThreads, and he does so in section D.6 of the LinuxThreads FAQ,
> which is included in all glibc distributions of LinuxThreads,
> including the most recent one. To quote:
>
> "In LinuxThreads 0.8 and up, pthread_unlock restarts only one waiting
> thread, and pre-assign the mutex to that thread. Hence, if the thread
> that unlocked the mutex tries to lock it again immediately, it will
> block until other waiting threads have had a chance to lock and unlock
> the mutex. This results in much fairer scheduling."
>

(snip)


>
> So that's our "fair" thread scheduling, exactly as mentioned in the
> FAQ. Now we can ask ourselves whether or not someone detected this
> "idiocy" and decided to remove it (without updating the FAQ, of
> course).

I guess it would depend on the scheduling policy in effect. Mutexes are
in effect resources. Schedulers control access to resources. Therefore
the mutex wait queue is a scheduler queue. I suppose you could design a
mutex that was better at deciding when it could or could not optimize
performance.

Joe Seigh

David Schwartz

Mar 14, 2003, 3:31:57 PM

I guess I'm stupid about this. Of course we suspend ourself if we can't
get the lock immediately in the lock function.

DS

Alexander Terekhov

Mar 14, 2003, 4:32:21 PM

Uhmm. Well, take a look at suspend(), then.

regards,
alexander.
