
Single-Writer Multi-Reader lock for Win98


Ed Peddycoart

Oct 19, 1999
I need a single-writer multi-reader lock which will work with Win98,
preferably in the form of a C++ class. I have been using the one
described by Jeff Claar in the Jan 99 issue of WDJ, but it is NT only.
Does anyone have one, or can anyone point me towards resources on the
internet, etc., that describe such, other than the one described in
Richter's book?
Ed


Nicole Hamilton

Oct 19, 1999
Ed Peddycoart <edpedd...@mindspring.com> wrote:
> I need a single-writer multi-reader lock which will work with Win98,
> preferably in the form of a C++ class.

There is indeed no primitive in Win32 to achieve this, so it does have to
be built up out of other primitives. Here's an implementation using
CriticalSections that I walk my students through when I teach my Win32
class. This is the one I use myself in my C shell, which is quite highly
multi-threaded.

#include <windows.h>

typedef struct sh {
    CRITICAL_SECTION lock;
    HANDLE writelock;           /* Manual-reset event: set when no readers */
    int readers;
} sharedread_sem;

void initialize( sharedread_sem *s )
{
    InitializeCriticalSection(&s->lock);
    s->readers = 0;
    s->writelock = CreateEvent(NULL, TRUE, FALSE, NULL);
}

void write_lock( sharedread_sem *s )
{
    /* The writer ends up holding the critical section for the whole write. */
top:
    EnterCriticalSection(&s->lock);
    if (s->readers)
    {
        LeaveCriticalSection(&s->lock);
        WaitForSingleObject(s->writelock, INFINITE);
        goto top;
    }
}

void release_write_lock( sharedread_sem *s )
{
    LeaveCriticalSection(&s->lock);
}

void read_lock( sharedread_sem *s )
{
    EnterCriticalSection(&s->lock);
    s->readers++;
    ResetEvent(s->writelock);        /* writers must now wait */
    LeaveCriticalSection(&s->lock);
}

void release_read_lock( sharedread_sem *s )
{
    EnterCriticalSection(&s->lock);
    if (--s->readers == 0)
        SetEvent(s->writelock);      /* last reader out: wake waiting writers */
    LeaveCriticalSection(&s->lock);
}
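
Typical use looks something like this; the shared variable and the
thread routines below are purely illustrative:

/* Illustrative only: one value shared between threads, guarded as above. */
sharedread_sem guard;
int shared_value;

DWORD WINAPI reader_thread( LPVOID arg )
{
    int v;
    read_lock(&guard);                /* any number of readers at once */
    v = shared_value;
    release_read_lock(&guard);
    return (DWORD) v;
}

DWORD WINAPI writer_thread( LPVOID arg )
{
    write_lock(&guard);               /* exclusive: no readers, one writer */
    shared_value++;
    release_write_lock(&guard);
    return 0;
}

/* ... call initialize(&guard) once before starting any of the threads ... */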

(The rest of the class is pretty good, too. :) If you'd like to see the whole
list of topics, please see http://www.hamiltonlabs.com/win32class.htm.)
--

Regards,
Nicki Hamilton

Nicole Ashley Hamilton KD1UJ hami...@hamiltonlabs.com
http://www.hamiltonlabs.com Phone 781-487-0008 FAX 781-487-0009
Hamilton Laboratories, 45 Kings Way, Unit 14, Waltham, MA 02451-9039, USA


Douglas C. Schmidt

Oct 19, 1999
Hi Ed,

> I need a single-writer multi-reader lock which will work with Win98,
> preferably in the form of a C++ class.

Check out the

ACE_RW_Thread_Mutex

class in the ACE release, which you can download at

http://www.cs.wustl.edu/~schmidt/ACE.html

You can read about this class at

http://www.cs.wustl.edu/~schmidt/ACE_wrappers/man/html/ACE_RW_Thread_Mutex.html

and in the paper at

http://www.cs.wustl.edu/~schmidt/Concurrency.ps.gz

Take care,

Doug
--
Dr. Douglas C. Schmidt, Associate Professor
Department of Computer Science, Washington University
St. Louis, MO 63130. Work #: (314) 935-4215; FAX #: (314) 935-7302
sch...@cs.wustl.edu, www.cs.wustl.edu/~schmidt/

Ziv Caspi

Oct 20, 1999
On Tue, 19 Oct 1999 16:29:33 GMT, "Nicole Hamilton"
<hami...@hamiltonlabs.com> wrote:

[... A simple implementation for single-writer, multiple-readers
lock...]

Forgive me for saying so, but this implementation favors
readers instead of the writer. If there are many readers,
the writer will never have a chance to write, so the information
the readers use will quickly become obsolete.

---------------------------------------------
Ziv Caspi
zi...@netvision.net.il

Nicole Hamilton

Oct 20, 1999
Ziv Caspi <zi...@netvision.net.il> wrote:
> "Nicole Hamilton" <hami...@hamiltonlabs.com> wrote:
>
> [... A simple implementation for single-writer, multiple-readers
> lock...]
>
> Forgive me for saying so, but this implementation favors
> readers instead of the writer. If there are many readers,
> the writer will never have a chance to write, so the information
> the readers use will quickly become obsolete.

So you'd like to modify it so that if a writer requests access, no
new readers can be admitted until the writer has been allowed in
and out? Well, you have the code, how would you fix the sharedread_sem
structure and the various lock procedures? New readers would be
treated somewhat like writers, wouldn't they, with a loop? Can you write
the code?

Yours is the kind of question I'd be delighted to hear in my class as an
excellent exercise to see if the material was absorbed.

But of course, your comment ties in to another point I like to discuss,
which is that if you expect lots of collisions over any given resource,
then it's probably time to think about ways to replicate the resource or
reduce the amount of time each use requires, etc., all in an attempt
to reduce that bottleneck.

Nicki

Ziv Caspi

Oct 21, 1999
On Wed, 20 Oct 1999 12:33:57 GMT, "Nicole Hamilton"
<hami...@hamiltonlabs.com> wrote:

>So you'd like to modify it so that if a writer requests access, no
>new readers can be admitted until the writer has been allowed in
>and out?

This is one of several possible ways to do it.

>Well, you have the code, how would you fix the sharedread_sem
>structure and the various lock procedures? New readers would be
>treated somewhat like writers, wouldn't they, with a loop? Can you write
>the code?

I can, and I have.

>Yours is the kind of question I'd be delighted to hear in my class as an
>excellent exercise to see if the material was absorbed.
>
>But of course, your comment ties in to another point I like to discuss,
>which is that if you expect lots of collisions over any given resource,
>then it's probably time to think about ways to replicate the resource or
>reduce the amount of time each use requires, etc., all in an attempt
>to reduce that bottleneck.


---------------------------------------------
Ziv Caspi
zi...@netvision.net.il

Nicole Hamilton

Oct 21, 1999
Ziv Caspi <zi...@netvision.net.il> wrote:
> Nicole Hamilton <hami...@hamiltonlabs.com> wrote:

> >Well, you have the code, how would you fix the sharedread_sem
> >structure and the various lock procedures? ... Can you write
> >the code?
>
> I can, and I have.

Do you care to post your solution for comparison with the original
I gave?

Nicki

Jordan Zimmerman

Oct 21, 1999
In article <01bf1af7$615751a0$0732180c@hamiltonlabs>, Nicole Hamilton
<hami...@hamiltonlabs.com> wrote:

> So you'd like to modify it so that if a writer requests access, no
> new readers can be admitted until the writer has been allowed in
> and out? Well, you have the code, how would you fix the sharedread_sem
> structure and the various lock procedures? New readers would be
> treated somewhat like writers, wouldn't they, with a loop? Can you write
> the code?

I've taken a stab at it...

The issue, it seems to me, is that you want to prevent new readers when
there's a pending write. So, it seems to call for another critical
section. Here's a mod on the previous code:

typedef struct sh {
    CRITICAL_SECTION lock;
    CRITICAL_SECTION readlock;       /* gates new readers while a writer waits */
    HANDLE writelock;                /* Manual-reset event */
    int readers;
} sharedread_sem;

void initialize( sharedread_sem *s )
{
    InitializeCriticalSection(&s->lock);
    InitializeCriticalSection(&s->readlock);
    s->readers = 0;
    s->writelock = CreateEvent(NULL, TRUE, FALSE, NULL);
}

void write_lock( sharedread_sem *s )
{
    EnterCriticalSection(&s->readlock);     /* shut out new readers */
top:
    EnterCriticalSection(&s->lock);
    if (s->readers)
    {
        LeaveCriticalSection(&s->lock);
        WaitForSingleObject(s->writelock, INFINITE);
        goto top;
    }
    LeaveCriticalSection(&s->readlock);     /* existing readers have drained */
}

void release_write_lock( sharedread_sem *s )
{
    LeaveCriticalSection(&s->lock);
}

void read_lock( sharedread_sem *s )
{
    EnterCriticalSection(&s->readlock);     /* blocks here while a writer is pending */
    EnterCriticalSection(&s->lock);
    s->readers++;
    ResetEvent(s->writelock);
    LeaveCriticalSection(&s->lock);
    LeaveCriticalSection(&s->readlock);
}

void release_read_lock( sharedread_sem *s )
{
    EnterCriticalSection(&s->lock);
    if (--s->readers == 0)
        SetEvent(s->writelock);
    LeaveCriticalSection(&s->lock);
}

--
Jordan Zimmerman
Altura Software, Inc.
Catalog City, Inc. http://www.catalogcity.com

Stop the persecution of Microsoft and Capitalism:
http://www.moraldefense.com/microsoft

Nicole Hamilton

Oct 22, 1999
Jordan Zimmerman <jor...@altura.com> wrote:
> Nicole Hamilton <hami...@hamiltonlabs.com> wrote:
> > So you'd like to modify it so that if a writer requests access, no
> > new readers can be admitted until the writer has been allowed in
> > and out? Well, you have the code, how would you fix the sharedread_sem
> > structure and the various lock procedures? ...
>
> I've taken a stab at it...
>
> The issue, it seems to me, is that you want to prevent new readers when
> there's a pending write. So, it seems to call for another critical
> section. Here's a mod on the previous code:

Bravo! A very nice solution!

Joe Seigh

Oct 22, 1999
The problem with writers being indefinitely blocked by readers is
called starvation. It can work both ways. You have to give writers
overriding priority by some mechanism, and then you could have reader
starvation if updates were always occurring.

To avoid it, you have to guarantee forward progress. The way that
you define forward progress while a thread is waiting and doing
nothing is to specify that the probability that the thread acquires
the lock always increases with time.

Thus the common use of FIFO queues in lock implementations. There
is also a simpler technique. To illustrate, suppose you have a
general counting semaphore, which also supposes you aren't using
Win32. (I'm going to use assignment for the semaphore operations
in the pseudocode example here.)

init:
    sema = MAX;     // initialize semaphore to big number

acquire_exclusive:
    sema -= MAX;    // decrement semaphore by MAX

release_exclusive:
    sema += MAX;    // increment semaphore by MAX

acquire_shared:
    sema -= 1;      // decrement semaphore by 1

release_shared:
    sema += 1;      // increment semaphore by 1


This of course has the problem with writer starvation. To get
around it, just use the service order policy already inherent
in mutex and semaphore implementations. So you then get

init:
    sema = MAX;     // initialize semaphore to big number
    semb = 1;       // initialize mutex semaphore to 1

acquire_exclusive:
    semb -= 1;      // acquire mutex
    sema -= MAX;    // decrement semaphore by MAX
    semb += 1;      // release mutex

release_exclusive:
    sema += MAX;    // increment semaphore by MAX

acquire_shared:
    semb -= 1;      // acquire mutex
    sema -= 1;      // decrement semaphore by 1
    semb += 1;      // release mutex

release_shared:
    sema += 1;      // increment semaphore by 1

Yes, it's not FIFO unless the mutex is implemented that way,
but at least your read/write locks have behavior consistent
with your mutex locks.

Note: the MS docs say that while you can add any positive number
to the semaphore (up to its maximum count), to subtract that number
you have to do it by repeatedly decrementing by one. Not really
acceptable for real-world use, but for academic purposes, to
illustrate certain concepts, it might be OK. But avoid this with
the first version above: you'll get deadlock if there is more
than one writer at a time trying to acquire the lock.
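
Just to make the second version concrete, here's roughly what it looks
like with Win32 objects. This is a sketch only: the names and the
MAX_READERS bound are made up, and the loop in acquire_exclusive is
exactly the one-at-a-time decrement complained about above.

#include <windows.h>

#define MAX_READERS 64              /* arbitrary cap on concurrent readers */

typedef struct {
    HANDLE sema;                    /* counting semaphore ("sema" above)   */
    HANDLE mutex;                   /* serializes writers ("semb" above)   */
} sem_rwlock;

void sem_rwlock_init( sem_rwlock *l )
{
    l->sema  = CreateSemaphore(NULL, MAX_READERS, MAX_READERS, NULL);
    l->mutex = CreateMutex(NULL, FALSE, NULL);
}

void acquire_shared( sem_rwlock *l )
{
    WaitForSingleObject(l->mutex, INFINITE);    /* lets writers get their turn */
    WaitForSingleObject(l->sema, INFINITE);     /* take one reader slot        */
    ReleaseMutex(l->mutex);
}

void release_shared( sem_rwlock *l )
{
    ReleaseSemaphore(l->sema, 1, NULL);
}

void acquire_exclusive( sem_rwlock *l )
{
    int i;
    WaitForSingleObject(l->mutex, INFINITE);    /* one writer drains at a time */
    for (i = 0; i < MAX_READERS; i++)           /* no "wait for N", so loop    */
        WaitForSingleObject(l->sema, INFINITE);
    ReleaseMutex(l->mutex);
}

void release_exclusive( sem_rwlock *l )
{
    ReleaseSemaphore(l->sema, MAX_READERS, NULL);
}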

Joe Seigh

David Schwartz

Oct 22, 1999
Joe Seigh wrote:
>
> The problem with writers being indefinitely blocked by readers is
> called starvation. It can work both ways. You have to give writers
> overriding priority by some mechanism, and then you could have reader
> starvation if updates were always occurring.

Actually, this kind of starvation is usually a good idea. What's the
good of allowing a reader to read data that's already obsolete?

As a general rule, if writer-priority is not what you want, you should
re-examine your decision to use a reader/writer lock. If writes are as
frequent or more frequent than reads, you should re-examine your
decision to use a reader/writer lock.

If any synchronization code in your application is expected to
bottleneck like this, you probably need to re-examine your architectural
decisions.

(Note that this doesn't mean you have to actually implement
writer-priority.)

DS

Patrick TJ McPhee

Oct 23, 1999
In article <3810CEF9...@webmaster.com>,
David Schwartz <dav...@webmaster.com> wrote:
% Joe Seigh wrote:
% >
% > The problem with writers being indefinitely blocked by readers is
% > called starvation. It can work both ways. You have to give writers
% > overriding priority by some mechanism, and then you could have reader
% > starvation if updates were always occurring.
%
% Actually, this kind of starvation is usually a good idea. What's the
% good of allowing a reader to read data that's already obsolete?

On the other hand, what's the good of constantly updating data if nobody
ever gets the chance to read it? No kind of starvation is good.

% If any synchronization code in your application is expected to
% bottleneck like this, you probably need to re-examine your architectural
% decisions.

Agreed.
--

Patrick TJ McPhee
East York Canada
pt...@interlog.com

David Schwartz

Oct 23, 1999
Patrick TJ McPhee wrote:

> % Actually, this kind of starvation is usually a good idea. What's the
> % good of allowing a reader to read data that's already obsolete?
>
> On the other hand, what's the good of constantly updating data if nobody
> ever gets the chance to read it? No kind of starvation is good.

If data is being constantly updated (literally), it should not be
protected by a reader/writer lock. It should be protected by a normal
lock. The extra overhead of the reader/writer lock gains you nothing.

As I said, if this is an issue, you should reexamine your choice to use
a reader/writer lock. The whole philosophy of reader/writer locks is
that reads are common and writes are rare.

DS

Joe Seigh

Oct 25, 1999

It all depends. Write priority, reader priority, etc., have different semantics.
Usually you pick your semantics beforehand for various reasons, such as the one David
mentions. Then you write the code to the chosen semantics. Because changing such code
to a different scheme, and thus different semantics, makes no sense, most people assume
that semantics other than the one they're used to aren't valid.

For an example of data that is constantly being updated where a mutex lock is
not used, consider the current time. The current time is always being updated by
definition. Reading the current time is an example of a wait-free algorithm.
Usually it's an atomic read of the clock, or, for a multi-word clock, Lamport's
clock-reading algorithm (I hope I've called it the right thing here). Anyhow, in theory
time has well-understood semantics, though it is arguable, since most programmers seem
to be temporally challenged. So maybe David is right and we should be using a
mutex here.
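
Very roughly, the version-counter flavor of that idea (what's often
called a seqlock these days) looks like the sketch below. The names
are invented, and I'm writing the loads and stores with explicit
(C11-style) atomics just so the ordering-sensitive operations stand out.

#include <stdatomic.h>

/* Illustrative only: a two-word "clock" one writer updates and any
   number of readers read without taking a lock. */
typedef struct {
    _Atomic unsigned version;       /* odd while an update is in progress */
    _Atomic unsigned secs;
    _Atomic unsigned ticks;
} soft_clock;

void clock_update( soft_clock *c, unsigned secs, unsigned ticks )
{
    atomic_fetch_add(&c->version, 1);   /* version goes odd: update started */
    atomic_store(&c->secs, secs);
    atomic_store(&c->ticks, ticks);
    atomic_fetch_add(&c->version, 1);   /* version even again: update done  */
}

void clock_read( soft_clock *c, unsigned *secs, unsigned *ticks )
{
    unsigned v1, v2;
    do {
        v1     = atomic_load(&c->version);
        *secs  = atomic_load(&c->secs);
        *ticks = atomic_load(&c->ticks);
        v2     = atomic_load(&c->version);
    } while (v1 != v2 || (v1 & 1));     /* retry if torn or mid-update */
}

Strictly speaking the reader here retries rather than being wait-free,
but it never blocks on a lock, which is the point.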

Joe Seigh

Tom Payne

Oct 25, 1999
David Schwartz <dav...@webmaster.com> wrote:
[...]

> If data is being constantly updated (literally), it should not
> be protected by a reader/writer lock. It should be protected by a
> normal lock. The extra overhead of the reader/writer lock gains you
> nothing.

> As I said, if this is an issue, you should reexamine your
> choice to use a reader/writer lock. The whole philosophy of
> reader/writer locks is that reads are common and writes are rare.

Huh? Accomplishing reads concurrently (rather than sequentially)
diminishes their impact on writing traffic. That benefit is most
important when the writing traffic is heavy.

Tom Payne

David Schwartz

Oct 25, 1999
Tom Payne wrote:

> Huh? Accomplishing reads concurrently (rather than sequentially)
> diminishes their impact on writing traffic. That benefit is most
> important when the writing traffic is heavy.

When writing traffic is heavy, reader/writer locks degenerate into
mutexes with extra overhead. The point of reader/writer locks is that if
there are lots of reads and few writes, you get less blocking than you
do with a simple mutex.

DS

Tom Payne

Oct 26, 1999
David Schwartz <dav...@webmaster.com> wrote:
[...]
> When writing traffic is heavy, reader/writer locks degenerate into
> mutexes with extra overhead.

How so? AFAIK, reader/writer locks allow reads to be performed
concurrently, even in the presence of heavy writing traffic (which a
simple mutex does not).

Tom Payne

David Schwartz

Oct 26, 1999

In the presence of heavy writing traffic, the writers will lock out the
readers, degenerating into one writer holding the lock at any particular
time. The philosophy of reader/writer locks is that there are lots of
readers who can share the lock. As soon as somebody write locks it, all
the readers block and no sharing takes place.

In most cases, a straight mutex is preferable to a reader/writer lock.
The exception would be if reads are frequent and writes are infrequent.
Then the increase in concurrency will outweigh the increase in overhead.

A good example is a multi-threaded server's configuration information.
Read access to the configuration information could be needed in the
processing of every single request. However, write access is only needed
if the configuration is changed. Frequent read, rare write.
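
For instance, with the sharedread_sem posted earlier in this thread,
that pattern looks roughly like this (the config layout and function
names are invented for illustration):

#include <windows.h>

typedef struct {
    int  max_connections;
    char doc_root[MAX_PATH];
} server_config;

static server_config  g_config;
static sharedread_sem g_config_lock;    /* initialize(&g_config_lock) at startup */

void handle_request( void )             /* hot path: every request reads */
{
    server_config snapshot;

    read_lock(&g_config_lock);
    snapshot = g_config;                /* copy out while shared-locked */
    release_read_lock(&g_config_lock);

    /* ... serve the request from 'snapshot' ... */
}

void reload_config( const server_config *fresh )   /* rare: admin action */
{
    write_lock(&g_config_lock);
    g_config = *fresh;                  /* exclusive while swapping it in */
    release_write_lock(&g_config_lock);
}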

The further you get from this extreme, the less benefit you get from a
reader/writer lock. If you've gotten so far that you are worrying about
writers starving readers, you are using the wrong synchronization tool.

DS

Tom Payne

Oct 26, 1999
David Schwartz <dav...@webmaster.com> wrote:

> In the presence of heavy writing traffic, the writers will lock out
> the readers, degenerating into one writer holding the lock at any particular
> time.

Freezing out all reading activity under heavy loads is an avoidable
design flaw. There are many correct solutions dating back 25 years
that avoid such starvation breakdowns (e.g., Hoare, "Monitors: An
Operating System Structuring Concept," CACM, October 1974, pp. 549-557).

> The philosophy of reader/writer locks is that there are lots of
> readers who can share the lock. As soon as somebody write locks it, all
> the readers block and no sharing takes place.

Right. And, as soon as one waiting reader gets the lock, all
currently waiting readers get to proceed, thereby obtaining the
benefits of concurrency.

> In most cases, a straight mutex is preferable to a reader/writer lock.
> The exception would be if reads are frequent and writes are infrequent.
> Then the increase in concurrency will outweigh the increase in overhead.

[...]
> Frequent read, rare write.

> The further you get from this extreme, the less benefit you get from a
> reader/writer lock. If you've gotten so far that you are worrying about
> writers starving readers, you are using the wrong synchronization tool.

Assuming that you have multiple CPUs, you get benefit whenever multiple
readers can proceed in parallel. The benefit is proportional to the
average number of readers in the queue (when at least one is present).
For a fixed volume of read traffic, as writing traffic increases, so
does the average number of waiting readers. QED

Tom Payne

David Schwartz

Oct 26, 1999
Tom Payne wrote:

> Assuming that you have multiple CPUs, you get benefit whenever multiple
> readers can proceed in parallel.

Yes, and you take the penalty even when they can't.

> The benefit is proportional to the
> average number of readers in the queue (when at least one is present).

Well, that depends. If you give writes priority, then you won't reach
this concurrency level, because as soon as a single write pends, no more
read locks will be granted. If you don't give writes priority (allowing
reads so long as no write lock has been granted), in a heavy write case,
you'll starve out your writers. Anything in between (such as FIFO
granting) tends to share in the disadvantages of both techniques and
consume more resources as well.

> For a fixed volume of read traffic, as writing traffic increases, so
> does the average number of waiting readers. QED

Wonderful. So more of them can be waiting for the writers to release
the lock.

Let me strengthen my original statement -- if you are expecting heavy
contention for any shared object, you are doing something wrong. There
may be the occasional case where this is literally unavoidable, but 95%
of the time, you are doing something _very_ wrong if this is the case.

DS

Tom Payne

Oct 26, 1999
David Schwartz <dav...@webmaster.com> wrote:
> Tom Payne wrote:

>> Assuming that you have multiple CPUs, you get benefit whenever multiple
>> readers can proceed in parallel.

> Yes, and you take the penalty even when they can't.

>> The benefit is proportional to the
>> average number of readers in the queue (when at least one is present).

> Well, that depends. If you give writes priority, then you won't reach
> this concurrency level, because as soon as a single write pends, no more
> read locks will be granted.

Right; you get a starvation breakdown if you give all writers priority over
all readers.

> If you don't give writes priority (allowing
> reads so long as no write lock has been granted), in a heavy write case,
> you'll starve out your writers. Anything inbetween (such as FIFO
> granting) tends to share in the disadvantages of both techniques and
> consume more resources as well.

The common fix is to alternate between reading and writing, i.e., when
a writer completes, it releases the lock to any waiting readers, and
similarly the last reader to complete releases the lock to the first
waiting writer. Any reader that arrives when there is a waiting
writer must wait to join the next batch of readers that get the lock.
That way we starve neither group and we get the benefits of concurrent
reading.

>> For a fixed volume of read traffic, as writing traffic increases, so
>> does the average number of waiting readers. QED

> Wonderful. So more of them can be waiting for the writers to release
> the lock.

If you choose to starve readers, that will be so.

> Let me strengthen my original statement -- if you are
> expecting heavy contention for any shared object, you are doing
> something wrong.

Under light loads scheduling policies are irrelevant, since there is
seldom more than one request in the queue. But, even in the presence
of unlimited budgets, contention for access to centralized information
resources is inevitable. In such cases, using better scheduling
protocols can be of significant benefit.

Tom Payne

David Schwartz

Oct 26, 1999
Tom Payne wrote:

> The common fix is to alternate between reading and writing, i.e., when
> a writer completes, it releases the lock to any waiting readers, and
> similarly the last reader to complete releases the lock to the first
> waiting writer. Any reader that arrives when there is a waiting
> writer must wait to join the next batch of readers that get the lock.
> That way we starve neither group and we get the benefits of concurrent
> reading.

That's a pretty good compromise. It can be implemented with less overhead
than a FIFO implementation and should allow increased concurrency when
you have large numbers of readers requesting concurrent access.

> Under light loads scheduling policies are irrelevant, since there is
> seldom more than one request in the queue. But, even in the presence
> of unlimited budgets, contention for access to centralized information
> resources is inevitable. In such cases, using better scheduling
> protocols can be of significant benefit.

Agreed on all points. Those who wish to use UNIX98 reader/writer locks,
however, had better use them in ways where no sensible semantics cause
starvation.

DS

Joe Seigh

Oct 29, 1999
Tom Payne wrote:
>
> The common fix is to alternate between reading and writing, i.e., when
> a writer completes, it releases the lock to any waiting readers, and
> similarly the last reader to complete releases the lock to the first
> waiting writer. Any reader that arrives when there is a waiting
> writer must wait to join the next batch of readers that get the lock.
> That way we starve neither group and we get the benefits of concurrent
> reading.

Would this seem to be a reasonable approximation of the above
using fifo locking?

a is a fifo mutex lock
b is a fifo read/write lock

# exclusive
acquire a exclusive
acquire b exclusive

... critical region

release b exclusive
release a exclusive

# shared
acquire b shared

... critical region

release b shared


If you had reasonably efficient fifo lock implementations then this
would be a way to try out that scheme without having to implement your
own lock from scratch. Also I happen to know how to do efficient
fifo r/w spin locks without having to maintain an explicit wait queue
so this would be fairly straightforward.

Additionally, in the exclusive case, the acquire exclusive of the r/w lock
could be changed to read with update to allow further overlap of processing.

Joe Seigh

Tom Payne

Oct 29, 1999
Joe Seigh <jse...@bbnplanet.com> wrote:
> Tom Payne wrote:
>>
>> The common fix is to alternate between reading and writing, i.e., when
>> a writer completes, it releases the lock to any waiting readers, and
>> similarly the last reader to complete releases the lock to the first
>> waiting writer. Any reader that arrives when there is a waiting
>> writer must wait to join the next batch of readers that get the lock.
>> That way we starve neither group and we get the benefits of concurrent
>> reading.

> Would this seem to be a reasonable approximation of the above
> using fifo locking?

> a is a fifo mutex lock
> b is a fifo read/write lock

> # exclusive
> acquire a exclusive
> acquire b exclusive

> ... critical region

> release b exclusive
> release a exclusive

> # shared
> acquire b shared

> ... critical region

> release b shared

I'm not sure of the semantics of your r/w lock. Why does it need to
be protected by a mutex in the "exclusive" case?

> If you had reasonably efficient fifo lock implementations then this
> would be a way to try out that scheme without having to implement your
> own lock from scratch.

The implementations that I've seen were all based on conditions and
mutexes. To absolutely guarantee no starvation, one needs
starvation-free (e.g., FIFO) locks and conditions with deterministic
signalling (which most don't have). It's my impression that in
practice spinlocks and standard signalling work well, even under
load. The key is to have two conditions, OKtoRead and OKtoWrite.
Departing writers signal OKtoRead if that condition is awaited; they
signal OKtoWrite otherwise. Similarly, we maintain a reader count,
and the last reader to depart signals OKtoWrite if it is awaited,
else OKtoRead. Arriving readers wait on OKtoRead whenever a write
is in progress or OKtoWrite is awaited. Arriving writers wait on
OKtoWrite if writing is in progress or the reader count is greater than
zero.
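
In pthreads (Win32 has no condition variables, so this is only to pin
the protocol down; the names just follow the description above), a
sketch of that scheme might look like this:

#include <pthread.h>

typedef struct {
    pthread_mutex_t m;            /* PTHREAD_MUTEX_INITIALIZER          */
    pthread_cond_t  OKtoRead;     /* PTHREAD_COND_INITIALIZER           */
    pthread_cond_t  OKtoWrite;    /* PTHREAD_COND_INITIALIZER           */
    int readers;                  /* readers currently inside           */
    int writing;                  /* nonzero while a writer is inside   */
    int waiting_readers;          /* readers blocked on OKtoRead        */
    int waiting_writers;          /* writers blocked on OKtoWrite       */
} rw_monitor;

void start_read( rw_monitor *l )
{
    pthread_mutex_lock(&l->m);
    /* Arriving readers defer to a write in progress and to waiting
       writers; once woken, they only need the write to have finished. */
    if (l->writing || l->waiting_writers > 0) {
        l->waiting_readers++;
        do
            pthread_cond_wait(&l->OKtoRead, &l->m);
        while (l->writing);
        l->waiting_readers--;
    }
    l->readers++;
    pthread_mutex_unlock(&l->m);
}

void end_read( rw_monitor *l )
{
    pthread_mutex_lock(&l->m);
    if (--l->readers == 0) {
        /* Last reader out: prefer a waiting writer, else wake readers. */
        if (l->waiting_writers > 0)
            pthread_cond_signal(&l->OKtoWrite);
        else
            pthread_cond_broadcast(&l->OKtoRead);
    }
    pthread_mutex_unlock(&l->m);
}

void start_write( rw_monitor *l )
{
    pthread_mutex_lock(&l->m);
    while (l->writing || l->readers > 0) {
        l->waiting_writers++;
        pthread_cond_wait(&l->OKtoWrite, &l->m);
        l->waiting_writers--;
    }
    l->writing = 1;
    pthread_mutex_unlock(&l->m);
}

void end_write( rw_monitor *l )
{
    pthread_mutex_lock(&l->m);
    l->writing = 0;
    /* Departing writer: release the waiting batch of readers if any,
       else hand off to the next writer. */
    if (l->waiting_readers > 0)
        pthread_cond_broadcast(&l->OKtoRead);
    else if (l->waiting_writers > 0)
        pthread_cond_signal(&l->OKtoWrite);
    pthread_mutex_unlock(&l->m);
}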

> Also I happen to know how to do efficient fifo r/w spin locks
> without having to maintain an explicit wait queue so this would be
> fairly straightforward.

That would be very interesting for a lot of cases.

> Additionally, in the exclusive case, the acquire exclusive of the
> r/w lock could be changed to read with update to allow further
> overlap of processing.

I don't quite follow what you mean. (The standard protocol is that
updating is allowed if and only if you hold exclusive access.)

Tom Payne

jse...@my-deja.com

Oct 29, 1999
In article <7vcekc$uc$1...@pravda.ucr.edu>,
Tom Payne <t...@roam-thp2.cs.ucr.edu> wrote:

> Joe Seigh <jse...@bbnplanet.com> wrote:
>
> > a is a fifo mutex lock
> > b is a fifo read/write lock
>
> > # exclusive
> > acquire a exclusive
> > acquire b exclusive
>
> > ... critical region
>
> > release b exclusive
> > release a exclusive
>
> > # shared
> > acquire b shared
>
> > ... critical region
>
> > release b shared
>
> I'm not sure of the semantics of your r/w lock. Why does it need to
> be protected by a mutex in the "exclusive" case?

The mutex keeps any other "exclusive" requests from the r/w lock
while the current "exclusive" request is active, thereby allowing
"shared" requests to queue up on the r/w lock ahead of any other
exclusive requests. When the current "exclusive" request is finished
and releases the mutex, the next "exclusive" request, if any, queues
up on the r/w lock, blocking subsequent "shared" requests until it
has had its turn. In other words, waiting "shared" requests have,
at most, one "exclusive" request ahead of them.

snip...

> >
> > Also I happen to know how to do efficient fifo r/w spin locks
> > without having to maintain an explicit wait queue so this would be
> > fairly straightforward.
>
> That would be very interesting for a lot of cases.

It's a bakery algorithm. It turns out you can do an r/w bakery-style
spin lock in the same number of lines of C as a mutex spin lock. For
the acquire, 4 lines if one of them is a fetch_and_add (plus a memory
sync; this is, of course, c.p.t.). I've posted it a number of times.
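
A counter-based r/w spin lock of roughly that shape is sketched below,
with C11 atomics standing in for fetch_and_add plus the memory syncs.
The 64-bit split and the names are mine, it assumes the reader half of
the counters never wraps, and it leans on the default (sequentially
consistent) atomics rather than hand-placed syncs.

#include <stdatomic.h>
#include <stdint.h>

#define WRITER_TICKET  (UINT64_C(1) << 32)   /* writers counted in the high half */
#define READER_MASK    (WRITER_TICKET - 1)   /* readers counted in the low half  */

typedef struct {
    _Atomic uint64_t requests;      /* tickets handed out (readers + writers) */
    _Atomic uint64_t completions;   /* tickets retired; zero-init both fields */
} ticket_rwlock;

void rw_write_lock( ticket_rwlock *l )
{
    /* My ticket is the counter as it stood before me. */
    uint64_t prev = atomic_fetch_add(&l->requests, WRITER_TICKET);
    /* Spin until every earlier reader and writer has retired. */
    while (atomic_load(&l->completions) != prev)
        ;
}

void rw_write_unlock( ticket_rwlock *l )
{
    atomic_fetch_add(&l->completions, WRITER_TICKET);
}

void rw_read_lock( ticket_rwlock *l )
{
    uint64_t prev = atomic_fetch_add(&l->requests, 1);
    /* Spin until every writer ahead of me has retired; earlier readers
       may still be inside, which is fine. */
    while ((atomic_load(&l->completions) & ~READER_MASK) !=
           (prev & ~READER_MASK))
        ;
}

void rw_read_unlock( ticket_rwlock *l )
{
    atomic_fetch_add(&l->completions, 1);
}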

>
> > Additionally, in the exclusive case, the acquire exclusive of the
> > r/w lock could be changed to read with update to allow further
> > overlap of processing.
>
> I don't quite follow what you mean. (The standard protocol is that
> wpdating is allowed if and only if you hold exclusive access.)

I've seen it in Oracle's API but I forget what they called it. Since
you usually read data before updating it, you can overlap the read
part with other read accesses, then request exclusive access and, when
all the other read accesses have completed, do the update. It's
a little more complicated than usual, since a request for read
with update has to block other read-with-update requests.


Joe Seigh



james...@my-deja.com

Nov 2, 1999
Don't we still have a problem with deadlocks? Suppose process A has the
read lock. Process B wants to do a write and is blocked waiting for
process A. Suppose process A decides to get the write lock (without
releasing the read lock). Now A is blocked waiting for B. Deadlock.

David Schwartz

Nov 2, 1999
james...@my-deja.com wrote:
>
> Don't we still have a problem with deadlocks? Suppose process A has the
> read lock. Process B wants to do a write and is blocked waiting for
> process A. Suppose process A decides to get the write lock (without
> releasing the read lock). Now A is blocked waiting for B. Deadlock.

If you violate interface contracts, you will always have a problem with
deadlocks. But the problem is not in the lock, it's in process A which
tried to acquire the same lock twice.

DS

Carey Gregory

Nov 2, 1999
james...@my-deja.com wrote:

>Don't we still have a problem with deadlocks? Suppose process A has the
>read lock. Process B wants to do a write and is blocked waiting for
>process A. Suppose process A decides to get the write lock (without
>releasing the read lock). Now A is blocked waiting for B. Deadlock.

Acquiring multiple locks will always lead to deadlocks if you allow blocking
on the second lock. The usual solution is to request the second lock without
blocking. If you can't obtain the second lock, release the first lock and
start over.
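
For instance, with POSIX mutexes (which have a try-lock), that protocol
is just the following; the lock names are only for illustration:

#include <pthread.h>
#include <sched.h>

/* Two locks that sometimes must be held together. */
pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void lock_both( void )
{
    for (;;) {
        pthread_mutex_lock(&lock_a);
        if (pthread_mutex_trylock(&lock_b) == 0)
            return;                          /* got both */
        /* Couldn't get the second lock: release the first and retry,
           so we can't deadlock with a thread holding lock_b and
           wanting lock_a. */
        pthread_mutex_unlock(&lock_a);
        sched_yield();
    }
}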

--
Carey Gregory

"The average dog is a nicer person than
the average person." -- A. Rooney

Nicole Hamilton

Nov 2, 1999
Carey Gregory <cgre...@gw-tech.com> wrote:
> Acquiring multiple locks will always lead to deadlocks if you allow blocking
> on the second lock. The usual solution is to request the second lock without
> blocking. If you can't obtain the second lock, release the first lock and
> start over.

Blocking on the second lock does not, in and of itself, lead to deadlock in a
properly designed application. What is required to avoid deadlock is that
if we require multiple resources, we must always, NO EXCEPTIONS!,
request them in the same order.

Nicki

Tom Payne

Nov 2, 1999
In comp.programming.threads Carey Gregory <cgre...@gw-tech.com> wrote:
> james...@my-deja.com wrote:

>>Don't we still have a problem with deadlocks? Suppose process A has the
>>read lock. Process B wants to do a write and is blocked waiting for
>>process A. Suppose process A decides to get the write lock (without
>>releasing the read lock). Now A is blocked waiting for B. Deadlock.

> Acquiring multiple locks will always lead to deadlocks if you allow blocking
> on the second lock.

Not always, e.g., in the course of manipulating some lock-protected
data structure you might invoke the lock-protected memory allocator.

> The usual solution is to request the second lock without blocking.
> If you can't obtain the second lock, release the first lock and
> start over.

... but taking care to restore invariants before releasing the first
lock.

Tom Payne

T.Totaler

Nov 2, 1999
David Schwartz wrote:
>
> james...@my-deja.com wrote:
> >
> > Don't we still have a problem with deadlocks? Suppose process A has the
> > read lock. Process B wants to do a write and is blocked waiting for
> > process A. Suppose process A decides to get the write lock (without
> > releasing the read lock). Now A is blocked waiting for B. Deadlock.
>
> If you violate interface contracts, you will always have a problem with
> deadlocks. But the problem is not in the lock, it's in process A which
> tried to acquire the same lock twice.
Not necessarily. If that's a real-world possibility, then the lock has
to "know" to "promote" the read lock to the write one for the process
that already owns this read lock. That is one possibility, that is. Or,
in the simplest case, just fail. You can keep track of who locked what,
by using PIDs, for example.

--
len
if you must email, reply to:
len bel at world net dot att dot net (no spaces, ats2@, dots2.)

Carey Gregory

Nov 2, 1999
"Nicole Hamilton" <hami...@hamiltonlabs.com> wrote:

>Blocking on the second lock does not, in and of itself, lead to deadlock in a
>properly designed application. What is required to avoid deadlock is that
>if we require multiple resources, we must always, NO EXCEPTIONS!,
>request them in the same order.

True. I should have been more precise. When in doubt, refer to Knuth.

David Schwartz

Nov 2, 1999
"T.Totaler" wrote:
>
> David Schwartz wrote:
> >
> > james...@my-deja.com wrote:
> > >
> > > Don't we still have a problem with deadlocks? Suppose process A has the
> > > read lock. Process B wants to do a write and is blocked waiting for
> > > process A. Suppose process A decides to get the write lock (without
> > > releasing the read lock). Now A is blocked waiting for B. Deadlock.
> >
> > If you violate interface contracts, you will always have a problem with
> > deadlocks. But the problem is not in the lock, it's in process A which
> > tried to acquire the same lock twice.

> Not necessarily. If that's a real-world possibility, then the lock has
> to "know" to "promote" the read lock to the write one for the process
> that already owns this read lock. That is one possibility, that is. Or,
> in the simplest case, just fail. You can keep track of who locked what,
> by using PIDs, for example.

The overhead required to track this is generally sufficient to at least
double the cost of the locking primitive. It might be appropriate for a
debug build, but it should drop out of a release build.

If you want read/write/promote locks, you know where to find them.
Personally, I prefer to fix the architecture so such complex locking
'primitives' are not needed, but it's not always possible or practical.

DS

T.Totaler

Nov 2, 1999
David Schwartz wrote:
> The overhead required to track this is generally sufficient to at least
> double the cost of the locking primitive.
Oh yeah. So that all really depends on the application.

james...@my-deja.com

Nov 3, 1999
If two processes are reading and one tries to write, the lock can be
promoted (after waiting for the other process to release the read lock).

If both processes try to write, each process will block, waiting for
the other to release their read lock.

Joe Seigh

Nov 3, 1999
james...@my-deja.com wrote:
>
> If two processes are reading and one tries to write, the lock can be
> promoted (after waiting for the other process to release the read lock).
>
> If both processes try to write, each process will block, waiting for
> the other to release their read lock.
>

The read lock is not released, but converted to a write lock after
all other reads have finished. The read lock can't be released
because the update to the shared data is going to be based on
state information gotten while the lock was read only, and if
the lock was released, even for an instant, it would be possible
for another thread to get write access and modify the shared data.

The way deadlock is prevented in the case of a read lock
being promoted to a write lock is to make it conditional and
fail the request if another thread has already requested
the lock to be promoted. Usually though, the intent to
request write promotion is indicated with the request for read access
and ensuring that only one such thread has such read access
at a time will prevent deadlock.

Joe Seigh

Tom Payne

Nov 3, 1999

After thinking about it, I realize that I don't understand the process
of promoting read-access to write-access in the way that is being
discussed. What is it supposed to accomplish and how?

What I think I understand is that in addition to an end_read()
routine, which releases shared/read access, we need:

void end_read_and_start_write() {
    if ( reader_count == 1 ) {      // I am the last reader
        reader_count = 0;
        write_in_progress = true;
    } else {                        // the following should be inline.
        end_read();
        start_write();
    }
}

which promotes shared/read access to exclusive/write
access. If he is the last reader to depart, he simply sets the
write_in_progress flag, thereby obtaining exclusive access. That
saves a bit of overhead compared to releasing shared/read access and
then competing for exclusive/write access. But that only helps when
he's the last. Otherwise, he has to wait somewhere for exclusive
access, so he might as well compete for exclusive access along with
the other aspiring writers.

Now, if a thread performing such a read-followed-by-write operation
ends his writing with a normal call to end_write(), he would normally
give preference to waiting readers (if any), a protocol that allows a
steady stream of such read-followed-by-write accesses to freeze out
waiting writers. So, he needs to end his read-followed-by-write
access via a call to end_read_followed_by_write(), which behaves just
like end_write() but gives preference to waiting writers (if any) just
as end_read() does. Unfortunately, however, threads doing a
read-followed-by-write get a scheduling advantage over writers who
wait their turn, which messes with whatever scheduling discipline one
might wish to impose.

Tom Payne
