
upgradeable lock


Alexander Jasper

Mar 21, 2002, 4:22:21 PM
Hi!

In our server we have a bunch of data that is protected by a read/write
lock. This allows multiple threads to do concurrent reads. Write access is
exclusive. We now have a very long-lasting calculation. After the end of
the calculation the result is written.

The written result corresponds to the data read during the calculation.
This forces me to acquire a write lock at the start of the calculation -
although for a long time at the beginning I'm only reading.

Obviously I can't just release the read lock and then get a write lock.
This would not be atomic any more.

So I thought of something like upgrading a lock. Thinking about this a
little led to the insight that you can't allow all read accesses to be
upgradeable. More than one thread trying to upgrade to write would always
lead to a deadlock: the upgrade would block against other reads and writes.

So I thought of a three-state lock:
read/write/upgradeable read
The upgradeable state is allowed alongside other reads, but is mutually
exclusive with itself and, of course, exclusive against writes.

A search on the web for this topic did not yield any results.
Any thoughts? Does anybody know another solution?

How best to implement something like this (on Win32)?
The read/write lock is currently using an Event, a critical section and a
normal counter.

By the way: I'm on C++.
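
(For reference, a rough sketch of that kind of rwlock - a critical section,
a manual-reset Event and a reader counter - might look like the code below.
The class and member names are invented for illustration, this is not the
actual code, and error handling is omitted.)

    #include <windows.h>

    // Sketch only: classic Event + critical section + counter rwlock.
    class RWLock {
    public:
        RWLock() : readers_(0) {
            InitializeCriticalSection(&gate_);
            // manual-reset event; signalled means "no readers active"
            no_readers_ = CreateEvent(NULL, TRUE, TRUE, NULL);
        }
        ~RWLock() { CloseHandle(no_readers_); DeleteCriticalSection(&gate_); }

        void read_lock() {
            EnterCriticalSection(&gate_);   // blocked while a writer holds the gate
            InterlockedIncrement(&readers_);
            LeaveCriticalSection(&gate_);
        }
        void read_unlock() {
            if (InterlockedDecrement(&readers_) == 0)
                SetEvent(no_readers_);      // wake a writer waiting for readers to drain
        }
        void write_lock() {
            EnterCriticalSection(&gate_);   // excludes other writers and new readers
            // Wait for readers that got in before us.  Re-arm the event before
            // each wait so a stale SetEvent from an earlier generation of
            // readers cannot let us through while a reader is still active.
            while (InterlockedCompareExchange(&readers_, 0, 0) != 0) {
                ResetEvent(no_readers_);
                if (InterlockedCompareExchange(&readers_, 0, 0) != 0)
                    WaitForSingleObject(no_readers_, INFINITE);
            }
        }
        void write_unlock() {
            LeaveCriticalSection(&gate_);   // readers and writers may enter again
        }

    private:
        CRITICAL_SECTION gate_;      // held for the whole write, briefly by readers
        volatile LONG    readers_;   // number of active readers
        HANDLE           no_readers_;
    };

The only subtle part is the reset-and-recheck loop in write_lock(): a reader
that decrements the count to zero may deliver its SetEvent late, so the
writer re-arms the event and re-reads the count before waiting. No fairness
is guaranteed beyond what the critical section provides.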

TIA

Alexander

David Schwartz

Mar 21, 2002, 5:12:49 PM
Alexander Jasper wrote:

> In our server we have a bunch of data that is protected by a read/write
> lock. This allows multiple threads to do concurrent reads. Write access is
> exclusive. We now have a very long-lasting calculation. After the end of
> the calculation the result is written.
>
> The written result corresponds to the data read during the calculation.
> This forces me to acquire a write lock at the start of the calculation -
> although for a long time at the beginning I'm only reading.
>
> Obviously I can't just release the read lock and then get a write lock.
> This would not be atomic any more.

Is there any other thing that could modify the contents of the
protected object? If not, then there's no harm in releasing the read
lock before acquiring the write lock since nothing else could modify the
contents.



> So I thought of a three-state lock:
> read/write/upgradeable read
> The upgradeable state is allowed alongside other reads, but is mutually
> exclusive with itself and, of course, exclusive against writes.

This can be done provided you only allow one upgradeable read at a
time. This only helps if all of the following situations apply:

1) Read operations are much more frequent than write operations.

2) You need to be able to overlap a lot of read operations because
there is no way to minimize the time the read locks are held.

3) You have 'short' write operations and 'long' write operations that
are read-only for most of the time.

4) You are extremely unlikely to ever have more than one 'long' write
operation at a time.

A contrived example might be a class that collects usage statistics.
Most of the time, you just check the statistics. Occasionally you add
one to the count of something. Every once in a while, say every 5
minutes, you compute some very complex function about the statistics
whose result must go back in the same statistics object.

Note that a solution for you might be to store the result of the
complex operation in another object protected by another lock. That way
the statistics function only needs a read lock on the 'busy' object.
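
(As a rough illustration of that split, using POSIX rwlocks and invented
names: the long computation takes only a read lock on the busy statistics
object, plus a write lock on a separate, rarely contended results object.
This is a sketch of the idea, not a drop-in solution.)

    #include <pthread.h>

    /* Illustration only: the "busy" statistics and the derived result live
     * in two separately locked objects (names invented). */
    static pthread_rwlock_t stats_lock   = PTHREAD_RWLOCK_INITIALIZER;
    static pthread_rwlock_t results_lock = PTHREAD_RWLOCK_INITIALIZER;
    static long   stats_counts[64];   /* frequently read, briefly written */
    static double results_summary;    /* derived data, written only here  */

    /* The long computation never needs write access to the statistics. */
    void recompute_summary(void)
    {
        double r = 0.0;

        pthread_rwlock_rdlock(&stats_lock);   /* other readers stay concurrent */
        for (int i = 0; i < 64; ++i)
            r += (double)stats_counts[i];     /* stands in for the long calculation */

        /* Publish under the results lock while still holding the read lock,
         * so the stored summary matches the statistics it was computed from.
         * (Anyone taking both locks must take stats_lock first.) */
        pthread_rwlock_wrlock(&results_lock);
        results_summary = r;
        pthread_rwlock_unlock(&results_lock);

        pthread_rwlock_unlock(&stats_lock);
    }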

Cases where upgradeable locks are really the best solution are extremely
rare. There's about an 80% chance that, with full knowledge of your
application, I could suggest a better approach. (That doesn't mean that
upgradeable locks won't work reasonably.)

DS

Alexander Terekhov

Mar 21, 2002, 6:14:19 PM
Alexander Jasper wrote:
[...]

>
> So I thought of something like upgrading a lock. Thinking about this a
> little led to the insight that you can't allow all read accesses to be
> upgradeable.

Why? Well, you can't guarantee that all upgrades will
be atomic (without that extra "upgradeable read" "state"),
but I guess that you could/should just "recalc" your stuff
in the case where some modification by another thread during
a non-atomic upgrade has invalidated/updated some of the
data you had already "processed" in shared-read mode
prior to the upgrade (or just do the "recalc" unconditionally
on a non-atomic upgrade).

> More than one thread trying to upgrade to write would always
> lead to a deadlock.

Why?

regards,
alexander.

Joe Seigh

Mar 22, 2002, 5:50:52 AM

Alexander Jasper wrote:
...


>
> A search on the web for this topic did not yield any results.
> Any thoughts? Does anybody know another solution?
>

Search the Google usenet archives for this. It's already been
discussed several times.

Joe Seigh

David Butenhof

Mar 22, 2002, 7:21:38 AM
Alexander Terekhov wrote:

> Alexander Jasper wrote:
> [...]
>>
>> So I thought of something like upgrading a lock. Thinking about this a
>> little led to the insight that you can't allow all read accesses to be
>> upgradeable.
>
> Why? Well, you can't guarantee that all upgrades will
> be atomic (without that extra "upgradeable read" "state"),
> but I guess that you could/should just "recalc" your stuff
> in the case where some modification by another thread during
> a non-atomic upgrade has invalidated/updated some of the
> data you had already "processed" in shared-read mode
> prior to the upgrade (or just do the "recalc" unconditionally
> on a non-atomic upgrade).

IF one allows only one "upgradeable reader" (UR) as a special type of
access (as he suggested), you certainly could make the guarantee. You can
allow the single UR concurrent access with any number of readers. When the
UR requests upgrade, it releases its read access and in effect becomes the
HEAD of the writer queue. Once all readers have released (there's still no
chance that the data could have been modified from the state last observed
by the UR), the waiting UR must be immediately granted write access and
scheduled to run.

Normally, one is better off making a RW lock behave like normal POSIX
mutexes, where awakened waiters compete equally with running threads for
access. (Rather than forced "FIFO" ownership that, like Ada rendezvous,
causes completely wasted context switches.) In this case, though, we can't
do that because the UR would lose atomicity. That's a BIG inefficiency.
You'd really be better off breaking the atomicity guarantee and requiring
the UR to re-evaluate the data after it gains write access. In which case
you can allow multiple upgradeable readers. However it also means there's
no longer any point to allowing upgrade... you might as well just release
the read lock and contend normally for a write lock. Which is precisely why
"upgrade" isn't in POSIX.

>> More than one thread trying to upgrade to write would always
>> lead to a deadlock.
>
> Why?

I think "deadlock" here is a matter of interpretation, and somewhat
different from the normal use of the word. As you pointed out, if there are
multiple upgrade requests they CANNOT all be satisfied "atomically". If you
insist on trying, you cannot grant ANY of them, which is, in a loose sense,
a form of behavior one might term a deadlock.

/------------------[ David.B...@compaq.com ]------------------\
| Compaq Computer Corporation POSIX Thread Architect |
| My book: http://www.awl.com/cseng/titles/0-201-63392-2/ |
\-----[ http://home.earthlink.net/~anneart/family/dave.html ]-----/

Alexander Terekhov

Mar 22, 2002, 10:06:39 AM

David Butenhof wrote:
>
> Alexander Terekhov wrote:
>
> > Alexander Jasper wrote:
> > [...]
> >>
> >> So I thought of something like upgrading a lock. Thinking about this a
> >> little led to the insight that you can't allow all read accesses to be
> >> upgradeable.
> >
> > Why? Well, you can't guarantee that all upgrades will
> > be atomic (without that extra "upgradeable read" "state"),
> > but I guess that you could/should just "recalc" your stuff
> > in the case where some modification by another thread during
> > a non-atomic upgrade has invalidated/updated some of the
> > data you had already "processed" in shared-read mode
> > prior to the upgrade (or just do the "recalc" unconditionally
> > on a non-atomic upgrade).
>
> IF one allows only one "upgradeable reader", (UR) as a special type of
> access (as he suggested), you certainly could make the guarantee.

Yes (and that is what I meant with my "(without..." bit), but
I think that, generally, you may NOT know whether a request/
transaction that started as a read operation will end up asking
for an "upgrade" to gain write access and save/update
something.

> You can
> allow the single UR concurrent access with any number of readers. When the
> UR requests upgrade, it releases its read access and in effect becomes the
> HEAD of the writer queue.

Yup, or the HEAD of a single "FIFO" queue with
readers "served" in bunches, concurrently. ;-)

> Once all readers have released (there's still no
> chance that the data could have been modified from the state last observed
> by the UR), the waiting UR must be immediately granted write access and
> scheduled to run.
>
> Normally, one is better off making a RW lock behave like normal POSIX
> mutexes, where awakened waiters compete equally with running threads for
> access. (Rather than forced "FIFO" ownership that, like Ada rendezvous,
> causes completely wasted context switches.)

Yes, I guess that passing the lock ownership across context
switches is indeed counter productive (having performance and
throughput in mind). And realtime/embedded/"flight control
folks" could always get "predictability" they want/need
using SCHED_FIFO/RR scheduling policies on uniprocessors,
I guess.

> In this case, though, we can't
> do that because the UR would lose atomicity. That's a BIG inefficiency.
> You'd really be better off breaking the atomicity guarantee and requiring
> the UR to re-evaluate the data after it gains write access. In which case
> you can allow multiple upgradeable readers. However it also means there's
> no longer any point to allowing upgrade... you might as well just release
> the read lock and contend normally for a write lock. Which is precisely why
> "upgrade" isn't in POSIX.

Uhmm, but how about the following "extensions":

a) ability for a reader to request a write-upgrade with an
indication whether it ended up being "atomic" or not.

b) upgrade operation could either result in a "write"
ownership always or (optionally) in "read-again ;-)",
if it ended up being non-atomic.

c) ability for a writer to query whether some
upgrades are pending -- that would allow some
extra "optimizations" (basically he could then
keep track of updated/invalidated stuff, to allow
*conditional* re-calcs for non-atomic upgraders),
I think.

?

regards,
alexander.

David Butenhof

Mar 25, 2002, 7:40:06 AM
Alexander Terekhov wrote:

> David Butenhof wrote:
>>
>> Normally, one is better off making a RW lock behave like normal POSIX
>> mutexes, where awakened waiters compete equally with running threads for
>> access. (Rather than forced "FIFO" ownership that, like Ada rendezvous,
>> causes completely wasted context switches.)
>
> Yes, I guess that passing the lock ownership across context
> switches is indeed counter productive (having performance and
> throughput in mind). And realtime/embedded/"flight control
> folks" could always get "predictability" they want/need
> using SCHED_FIFO/RR scheduling policies on uniprocessors,
> I guess.

This is really missing the point. FIFO ownership isn't "predictable",
either; it's simply inefficient. Except on a uniprocessor using strictly
SCHED_FIFO, the execution order of threads is at least "slightly
asynchronous". (Even SCHED_RR on a uniprocessor subjects threads to
arbitrary timed interrupts in addition to normal blocking.)

The only purpose for FIFO ownership is that some would like to think that
allocation of shared resources between competing threads should be "fair".
Given that any thread can hold exclusive access to any resource for an
unbounded time, it can't be fair. It can only be efficient or inefficient.
Any attempt at fairness cannot achieve fairness, but WILL achieve
inefficiency.

Efficiently scheduled and synchronized cooperating threads can CHOOSE to be
fair. "Fairly" scheduled and synchronized competing threads are neither
fair nor efficient, and cannot be MADE to be efficient.

To be fairly scheduled, competing entities must be isolated from each
other. If you have to compete, be a process, not a thread. Threads are
friendly. They like each other. They cooperate to achieve common goals.

;-)

>> In this case, though, we can't
>> do that because the UR would lose atomicity. That's a BIG inefficiency.
>> You'd really be better off breaking the atomicity guarantee and requiring
>> the UR to re-evaluate the data after it gains write access. In which case
>> you can allow multiple upgradeable readers. However it also means there's
>> no longer any point to allowing upgrade... you might as well just release
>> the read lock and contend normally for a write lock. Which is precisely
>> why "upgrade" isn't in POSIX.
>
> Uhmm, but how about the following "extensions":
>
> a) ability for a reader to request a write-upgrade with an
> indication whether it ended up being "atomic" or not.
>
> b) upgrade operation could either result in a "write"
> ownership always or (optionally) in "read-again ;-)",
> if it ended up being non-atomic.
>
> c) ability for a writer to query whether some
> upgrades are pending -- that would allow some
> extra "optimizations" (basically he could then
> keep track of updated/invalidated stuff, to allow
> *conditional* re-calcs for non-atomic upgraders),
> I think.

Yes, if you treat upgradeability as an "optimization" on which nobody can
ever depend. Any reader can request upgrade, and one of the readers may be
granted an atomic upgrade while the others need to re-evaluate. Very
dangerous, in my opinion, because it'd be trivially easy to write bad code
that'll be really hard to test. You're adding a weird beastie: an
"alternate success" status that's not really an error. "You got it; but
it's not what you may think." What would we call it? EALMOST? Or maybe
EYOUASKEDFORITBUTYOUDIDNTGETIT?

Actually, you'd be better off with a "request upgrade" that'd either
convert or return failure leaving the caller with read access. This fits
much better with the normal UNIX "do this or fail without changing
anything" model.

And, OK, one could deal with the naming issues. Documentation, though,
would be easily misunderstood. (That MUST be a prime concern in the design
of any interface.) And, in the end, what do you have? Every call site that
requests upgrade MUST be able to handle either return, but there's no way
to predict for sure whether any given call will succeed or "not quite
succeed". Easy to misunderstand and misuse, tough to test. Bad combination.

The query operation sounds like an invitation to even greater screwups, to
me. More complication, harder to test... for very little added value.

Eric Sosman

Mar 25, 2002, 12:47:55 PM
Alexander Jasper wrote:
>
> Hi!
>
> In our server we have a bunch of data that is protected by a read/write
> lock. This allows multiple threads to do concurrent reads. Write access is
> exclusive. We now have a very long-lasting calculation. After the end of
> the calculation the result is written.
>
> The written result corresponds to the data read during the calculation.
> This forces me to acquire a write lock at the start of the calculation -
> although for a long time at the beginning I'm only reading.
>
> Obviously I can't just release the read lock and then get a write lock.
> This would not be atomic any more. [...]

If a reading thread knows from the outset that it will
eventually want to become a writer, couldn't you use a mutex
in addition to the rwlock? In Pthreads terms:

Pure reader:
    pthread_rwlock_rdlock(&rwlock);
    /* read ... */
    pthread_rwlock_unlock(&rwlock);

Upgradeable reader:
    pthread_mutex_lock(&mutex);
    pthread_rwlock_rdlock(&rwlock);
    /* read ... */
    pthread_rwlock_unlock(&rwlock);
    pthread_rwlock_wrlock(&rwlock);
    /* write ... */
    pthread_rwlock_unlock(&rwlock);
    pthread_mutex_unlock(&mutex);

Pure writer:
    pthread_mutex_lock(&mutex);
    pthread_rwlock_wrlock(&rwlock);
    /* write ... */
    pthread_rwlock_unlock(&rwlock);
    pthread_mutex_unlock(&mutex);

Unfortunately, I can't think of a way to make this work
if readers run for a long time and then discover (rarely) that
they need to become writers. You could keep a "modification
count" as part of the data structure and try something like

try_again:
    pthread_rwlock_rdlock(&rwlock);
    copy_of_mod_count = data_structure_mod_count;
    /* read ... */
    pthread_rwlock_unlock(&rwlock);
    pthread_rwlock_wrlock(&rwlock);
    if (copy_of_mod_count != data_structure_mod_count) {
        pthread_rwlock_unlock(&rwlock);
        goto try_again;
    }
    /* write ... */
    ++data_structure_mod_count;
    pthread_rwlock_unlock(&rwlock);

... but you'd have to assess whether you'd be likely to loop
back and retry too frequently.

--
Eric....@sun.com

Alexander Terekhov

Mar 25, 2002, 3:37:11 PM

David Butenhof wrote:
>
> Alexander Terekhov wrote:
>
> > David Butenhof wrote:
> >>
> >> Normally, one is better off making a RW lock behave like normal POSIX
> >> mutexes, where awakened waiters compete equally with running threads for
> >> access. (Rather than forced "FIFO" ownership that, like Ada rendezvous,
> >> causes completely wasted context switches.)
> >
> > Yes, I guess that passing the lock ownership across context
> > switches is indeed counter productive (having performance and
> > throughput in mind). And realtime/embedded/"flight control
> > folks" could always get "predictability" they want/need
> > using SCHED_FIFO/RR scheduling policies on uniprocessors,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

> > I guess.
>
> This is really missing the point. FIFO ownership isn't "predictable",
> either; it's simply inefficient. Except on a uniprocessor using strictly
> SCHED_FIFO, the execution order of threads is at least "slightly
> asynchronous".

With "predictability" I meant "uniprocessor" priority
scheduling (realtime):

"3.286 Priority Scheduling

A performance and determinism improvement facility to allow
applications to determine the order in which threads that are
ready to run are granted access to processor resources."

AND, more importantly, the 2nd "But..." below.

> (Even SCHED_RR on a uniprocessor subjects threads to
> arbitrary timed interrupts in addition to normal blocking.)

But this would NOT affect prioritization, I guess? And
at the same time SCHED_RR would still (despite interrupts)
"ensure that if there are multiple SCHED_RR threads at the
same priority, one of them does not monopolize the processor."
Or am I missing something (interrupt related)?

> The only purpose for FIFO ownership is that some would like to think that
> allocation of shared resources between competing threads should be "fair".

But in the context of THIS discussion, a single "FIFO" queue
for both readers (served in bunches) and writers would just
"help" to avoid "specifying" READ or WRITE preference... and
on a "uniprocessor" with SCHED_FIFO/RR scheduling it (i.e
such "FIFO" rw-lock) would work just like a mutex (but
allowing read "concurrency" for RR w.r.t "bunched" readers,
bunches separated by writer(s)), unless I am missing and/or
misunderstanding something...

> Given that any thread can hold exclusive access to any resource for an
> unbounded time, it can't be fair. It can only be efficient or inefficient.
> Any attempt at fairness cannot achieve fairness, but WILL achieve
> inefficiency.
>
> Efficiently scheduled and synchronized cooperating threads can CHOOSE to be
> fair. "Fairly" scheduled and synchronized competing threads are neither
> fair nor efficient, and cannot be MADE to be efficient.
>
> To be fairly scheduled, competing entities must be isolated from each
> other. If you have to compete, be a process, not a thread. Threads are
> friendly. They like each other. They cooperate to achieve common goals.
>
> ;-)

Yeah, but don't you think that the "rules" are somewhat
different in the "real-time" space?

> >> In this case, though, we can't
> >> do that because the UR would lose atomicity. That's a BIG inefficiency.
> >> You'd really be better off breaking the atomicity guarantee and requiring
> >> the UR to re-evaluate the data after it gains write access. In which case
> >> you can allow multiple upgradeable readers. However it also means there's
> >> no longer any point to allowing upgrade... you might as well just release
> >> the read lock and contend normally for a write lock. Which is precisely
> >> why "upgrade" isn't in POSIX.
> >
> > Uhmm, but how about the following "extensions":
> >
> > a) ability for a reader to request a write-upgrade with an
> > indication whether it ended up being "atomic" or not.
> >
> > b) upgrade operation could either result in a "write"
> > ownership always or (optionally) in "read-again ;-)",
> > if it ended up being non-atomic.
> >
> > c) ability for a writer to query whether some
> > upgrades are pending -- that would allow some
> > extra "optimizations" (basically he could then
> > keep track of updated/invalidated stuff, to allow
> > *conditional* re-calcs for non-atomic upgraders),
> > I think.
>
> Yes, if you treat upgradeability as an "optimization" on which nobody can
> ever depend.

Well, I guess that one could argue that the whole
read/write locking concept is just an "'optimization'
on which nobody can ever depend". ;-)

> Any reader can request upgrade, and one of the readers may be
> granted an atomic upgrade while the others need to re-evaluate. Very
> dangerous, in my opinion, because it'd be trivially easy to write bad code
> that'll be really hard to test. You're adding a weird beastie: an
> "alternate success" status that's not really an error. "You got it; but
> it's not what you may think." What would we call it? EALMOST? Or maybe
> EYOUASKEDFORITBUTYOUDIDNTGETIT?
>
> Actually, you'd be better off with a "request upgrade" that'd either
> convert or return failure leaving the caller with read access. This fits
> much better with the normal UNIX "do this or fail without changing
> anything" model.
>
> And, OK, one could deal with the naming issues. Documentation, though,
> would be easily misunderstood. (That MUST be a prime concern in the design
> of any interface.) And, in the end, what do you have? Every call site that
> requests upgrade MUST be able to handle either return, but there's no way
> to predict for sure whether any given call will succeed or "not quite
> succeed". Easy to misunderstand and misuse, tough to test. Bad combination.
>
> The query operation sounds like an invitation to even greater screwups, to
> me. More complication, harder to test... for very little added value.

Uhmm, NOT bad analysis, I guess. Thanks!

regards,
alexander.

Alexander Jasper

Mar 25, 2002, 3:38:57 PM
I thank all of you very much for the helpful answers.

The conditions David pointed out are met in my case.



1) Read operations are much more frequent than write operations.

Opening a simple dialog box on the client will need to acquire a read lock
on the server.


2) You need to be able to overlap a lot of read operations because
there is no way to minimize the time the read locks are held.

Besides the "long calculation" I hold the read locks only very short. But
that does not matter.


3) You have 'short' write operations and 'long' write operations that
are read-only for most of the time.

Yep. From some data we compute derived information. The derived and
original data interact in some way.


4) You are extremely unlikely to ever have more than one 'long' write
operation at a time.

The users will have to accept that only one operation of that kind is
possible at a time. They can't draw on a sheet of paper all at once either.
Sure, they can. But ...

It was suggested to keep the existing rwlock and extend it with a
potentially non-atomic upgrade function leading to a recalc. The calc:write
ratio is about 9:1 in my case, and the calculation takes half an hour, so
recalculating is not an option.
I don't think the number of context switches really matters in my case.
We are talking about 1 request per second at the most.

The upgradeable lock does not seem to be a very special optimization to me.
I wonder why it is not implemented anywhere.

In reaction to Joe's hint I did another search on groups.google.com. I
found two other threads covering this topic in this newsgroup:
http://groups.google.com/groups?hl=en&threadm=3C8563B5.D35A956C%40webmaster.com&rnum=2&prev=/groups%3Fq%3Dupgradeable%2Bgroup:comp.programming.threads%26hl%3Den%26selm%3D3C8563B5.D35A956C%2540webmaster.com%26rnum%3D2
http://groups.google.com/groups?hl=en&threadm=3C8563B5.D35A956C%40webmaster.com&rnum=2&prev=/groups%3Fq%3Dupgradeable%2Bgroup:comp.programming.threads%26hl%3Den%26selm%3D3C8563B5.D35A956C%2540webmaster.com%26rnum%3D2

Tomorrow I will try to implement the upgradeable rwlock on Win32, probably
using a CRITICAL_SECTION and two events. I have written a CppUnit TestCase
so far. That was a considerable effort.
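
(For what it's worth, one possible shape for such a three-state lock is
sketched below. It happens to come out as two critical sections and one
event rather than one critical section and two events; all names are
invented, error handling is omitted, and it is only a sketch of the
single-upgradeable-reader scheme discussed above, not the actual
implementation.)

    #include <windows.h>

    // Sketch only: three-state lock (read / write / upgradeable read).
    class UpgradeableRWLock {
    public:
        UpgradeableRWLock() : readers_(0) {
            InitializeCriticalSection(&gate_);     // held for the whole write
            InitializeCriticalSection(&ur_slot_);  // at most one upgradeable reader
            no_readers_ = CreateEvent(NULL, TRUE, TRUE, NULL);  // manual reset
        }
        ~UpgradeableRWLock() {
            CloseHandle(no_readers_);
            DeleteCriticalSection(&ur_slot_);
            DeleteCriticalSection(&gate_);
        }

        // --- plain readers ---------------------------------------------
        void read_lock() {
            EnterCriticalSection(&gate_);          // blocked while a writer is active
            InterlockedIncrement(&readers_);
            LeaveCriticalSection(&gate_);
        }
        void read_unlock() {
            if (InterlockedDecrement(&readers_) == 0)
                SetEvent(no_readers_);
        }

        // --- writers ----------------------------------------------------
        void write_lock() {
            EnterCriticalSection(&ur_slot_);       // cannot start while an UR exists
            EnterCriticalSection(&gate_);          // exclude writers and new readers
            drain_readers();
        }
        void write_unlock() {                      // also releases an upgraded UR
            LeaveCriticalSection(&gate_);
            LeaveCriticalSection(&ur_slot_);
        }

        // --- the single upgradeable reader --------------------------------
        void upgradeable_read_lock() {
            EnterCriticalSection(&ur_slot_);       // become THE upgradeable reader
            read_lock();                           // then read alongside other readers
        }
        void upgrade_to_write() {
            // We still hold ur_slot_, so no writer can have run since our
            // reads: the data cannot have changed.  Release afterwards with
            // write_unlock().
            EnterCriticalSection(&gate_);          // stop new readers
            read_unlock();                         // drop our own read count
            drain_readers();                       // wait for the remaining readers
        }
        void upgradeable_read_unlock() {           // an UR that never upgraded
            read_unlock();
            LeaveCriticalSection(&ur_slot_);
        }

    private:
        void drain_readers() {                     // caller holds gate_
            // No new readers can start while gate_ is held, so the count
            // only falls.  Re-arm the event before each wait so a stale
            // SetEvent cannot wake us while a reader is still active.
            while (InterlockedCompareExchange(&readers_, 0, 0) != 0) {
                ResetEvent(no_readers_);
                if (InterlockedCompareExchange(&readers_, 0, 0) != 0)
                    WaitForSingleObject(no_readers_, INFINITE);
            }
        }

        CRITICAL_SECTION gate_;       // writer gate
        CRITICAL_SECTION ur_slot_;    // "upgradeable read" is exclusive with itself
        volatile LONG    readers_;
        HANDLE           no_readers_;
    };

The atomicity guarantee comes from writers having to take ur_slot_ before
gate_: nothing can modify the data between the upgradeable reader's reads
and its upgrade, and only the thread holding ur_slot_ may call
upgrade_to_write().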

Thanks for your help again

Alexander

Alexander Terekhov

Mar 25, 2002, 4:31:55 PM

Alexander Jasper wrote:
[...]

> Tomorrow I will try to implement the upgradeable rwlock on Win32. I think
> using a CRITICAL_SECTION and two EVENTS.

FYI ("version w/o condition variables..."):

http://groups.google.com/groups?as_umsgid=3B166244.F923B993%40web.de

regards,
alexander.

David Butenhof

Mar 26, 2002, 6:53:12 AM
Alexander Terekhov wrote:

>
> David Butenhof wrote:
>>
>> Alexander Terekhov wrote:
>>
>> > David Butenhof wrote:
>> >>
>> >> Normally, one is better off making a RW lock behave like normal POSIX
>> >> mutexes, where awakened waiters compete equally with running threads
>> >> for access. (Rather than forced "FIFO" ownership that, like Ada
>> >> rendezvous, causes completely wasted context switches.)
>> >
>> > Yes, I guess that passing the lock ownership across context
>> > switches is indeed counter productive (having performance and
>> > throughput in mind). And realtime/embedded/"flight control
>> > folks" could always get "predictability" they want/need
>> > using SCHED_FIFO/RR scheduling policies on uniprocessors,
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> > I guess.
>>
>> This is really missing the point. FIFO ownership isn't "predictable",
>> either; it's simply inefficient. Except on a uniprocessor using strictly
>> SCHED_FIFO, the execution order of threads is at least "slightly
>> asynchronous".
>
> With "predictability" I meant "uniprocessor" priority
> scheduling (realtime):

I think we're getting too far off the tangent here. (So to speak.) You
seemed to be reluctantly accepting that realtime scheduling might be a sort
of a substitute for FIFO ownership guarantees. Scheduling policies and
synchronization policies overlap, but they're cooperative, not competitive.
FIFO ownership has nothing to do with predictability. On a uniprocessor
with POSIX realtime scheduling rules (including synchronization wakeup
ordering), it's simply irrelevant; otherwise, especially on a
multiprocessor, it's simply inefficient.

>> The only purpose for FIFO ownership is that some would like to think that
>> allocation of shared resources between competing threads should be
>> "fair".
>
> But in the context of THIS discussion, a single "FIFO" queue
> for both readers (served in bunches) and writers would just
> "help" to avoid "specifying" READ or WRITE preference... and
> on a "uniprocessor" with SCHED_FIFO/RR scheduling it (i.e
> such "FIFO" rw-lock) would work just like a mutex (but
> allowing read "concurrency" for RR w.r.t "bunched" readers,
> bunches separated by writer(s)), unless I am missing and/or
> misunderstanding something...

Yes, the use of a "single" queue (whether real or virtual) can produce a
wakeup ordering that's (at least for many if not most uses) superior either
to reader or writer preference. (It's not necessarily FIFO, though, since
it ought to be priority ordered according to realtime rules if any realtime
policy threads are involved.) In any case, that's quite different from
"FIFO ownership", which was specified by the original post, and THAT'S what
I'm talking about.

>> Given that any thread can hold exclusive access to any resource for an
>> unbounded time, it can't be fair. It can only be efficient or
>> inefficient. Any attempt at fairness cannot achieve fairness, but WILL
>> achieve inefficiency.
>>
>> Efficiently scheduled and synchronized cooperating threads can CHOOSE to
>> be fair. "Fairly" scheduled and synchronized competing threads are
>> neither fair nor efficient, and cannot be MADE to be efficient.
>>
>> To be fairly scheduled, competing entities must be isolated from each
>> other. If you have to compete, be a process, not a thread. Threads are
>> friendly. They like each other. They cooperate to achieve common goals.
>>
>> ;-)
>
> Yeah, but don't you think that the "rules" are somewhat
> different in the "real-time" space?

No. First off, realtime isn't efficient. One of the reasons that realtime
scheduling is an option in POSIX is that there is overhead to supporting
the model. The specification omits any description of how non-realtime
threads interact with realtime threads partly because that's "out of
scope", but also because the thread developers desired, to the extent
possible, to keep that overhead from affecting "throughput" threads, which
we all intended to make the default. We, for example, do not priority sort
synchronization wait queues unless and until a realtime thread is a member
of the queue. The sorting takes time, and isn't needed for correct or
reasonable behavior in a throughput (adjustable, timesliced priority)
environment. Realtime thread management has special scheduling paths that,
to the extent possible, are not followed when only throughput threads are
involved in a decision.

The benefit of the overhead in realtime scheduling is PREDICTABLE
transitions of thread queues on a uniprocessor. You know which thread is
removed first, and where and when it gets added. To a limited extent that
may mean you can predict which thread is running at any time, though in
practice that's a chaotic system as blocking events and timeslice events
"churn" the queues over time in extremely complicated patterns.

Realtime scheduling isn't fair. Or at least, it's fair in the Orwellian
sense of "All animals are equal, but some are more equal than others". If
you're a pig, that's about as fair as you can get. For those who find
themselves in the "less equal" category, well, "fair" depends a lot on how
much of a team player they are. If a thread at priority 0 considers being a
"fat null thread" to be "fair" in that it presumably meets the overall
needs of the system, I guess that's fine. The point is that it's still all
about cooperation; it's just that realtime threads aren't cooperating as
equals.

>> Yes, if you treat upgradeability as an "optimization" on which nobody can
>> ever depend.
>
> Well, I guess that one could argue that the whole
> read/write locking concept is just an "'optimization'
> on which nobody can ever depend". ;-)

Well, yes, given the looseness of definitions to allow experimentation in
read/write preference, and all that, it's true that the actual guarantee
for concurrent read access is slim at best. (In fact, it provides for an
"implemention defined" limit on concurrent readership, and there's nothing
to prevent an implementation from defining that to the value "1".)
