
RW locks - when to use?


g

Dec 16, 2005, 7:37:48 AM

hello!

I am wondering where RW locks are appropriate.

Are they appropriate only when you don't want 'dirty' reads?

What about when there are more writers than readers?

thanks!

Joe Seigh

Dec 16, 2005, 9:23:26 AM

g wrote:
> hello!
>
> I am wondering where RW locks are appropriate.
>
> Are they appropriate only when you don't want 'dirty' reads?

'Dirty' reads being data in an inconsistent or invalid state? When
would you want 'dirty' reads?


>
> What about when there are more writers than readers?
>

Depends on the implementation. POSIX doesn't specify the implementation,
so what may be appropriate for one platform may not work as well for
another. But if you mostly have writes then you might be better off
with a mutex, as most rwlock implementations tend to be suboptimal. You
should try both ways, a mutex for both reads and writes versus an rwlock,
and see which works better. You may get scheduler artifacts that make one
way work a lot faster.
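
As a minimal sketch of the two variants to benchmark against each other
(assuming a POSIX/pthreads environment; the shared structure and the work
done under the lock are placeholders):

#include <pthread.h>

// Variant 1: one plain mutex for both readers and writers.
pthread_mutex_t table_mtx = PTHREAD_MUTEX_INITIALIZER;

void reader_with_mutex() {
    pthread_mutex_lock(&table_mtx);
    // ... read the shared structure ...
    pthread_mutex_unlock(&table_mtx);
}

void writer_with_mutex() {
    pthread_mutex_lock(&table_mtx);
    // ... modify the shared structure ...
    pthread_mutex_unlock(&table_mtx);
}

// Variant 2: an rwlock; readers may run concurrently, writers are exclusive.
pthread_rwlock_t table_rwl = PTHREAD_RWLOCK_INITIALIZER;

void reader_with_rwlock() {
    pthread_rwlock_rdlock(&table_rwl);
    // ... read the shared structure ...
    pthread_rwlock_unlock(&table_rwl);
}

void writer_with_rwlock() {
    pthread_rwlock_wrlock(&table_rwl);
    // ... modify the shared structure ...
    pthread_rwlock_unlock(&table_rwl);
}

Time the same workload against both variants; whichever finishes faster
on your platform and scheduler wins.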


--
Joe Seigh

When you get lemons, you make lemonade.
When you get hardware, you make software.

g

Dec 16, 2005, 9:50:13 AM

> 'Dirty' reads being data in an inconsistent or invalid state?

Not only that; I don't care if it is.

> When would you want 'dirty' reads?

Let's say I want to scan a container with 1000 objects. I don't care if one
or two objects are invalid while I am scanning (there are writers using a
mutex per object); after scanning, my decision will be based on all 1000
objects and not on the two that might be invalid. A dirty read.

So I use RW locks when dirty reads are not acceptable and I want precise
results?

> You may get scheduler artifacts that make one way work a lot faster.

This is an m-to-n implementation, I think... I use NPTL... but the
question was general.

g

Joe Seigh

Dec 16, 2005, 10:01:31 AM

g wrote:
>
>> 'Dirty' reads being data in an inconsistent or invalid state?
>
> Not only that; I don't care if it is.
>
>> When would you want 'dirty' reads?
>
> Let's say I want to scan a container with 1000 objects. I don't care if one
> or two objects are invalid while I am scanning (there are writers using a
> mutex per object); after scanning, my decision will be based on all 1000
> objects and not on the two that might be invalid. A dirty read.

It's not the object states you need to worry about, it's the container
state. Unless you are using a statically allocated array, most containers
are dynamically linked data structures, so you could get into trouble real
quick. We just went over this subject in a recent posting here.


>
> So I use RW locks when dirty reads are not acceptable and I want precise
> results?
>
>> You may get scheduler artifacts that make one way work a lot faster.
>
> This is an m-to-n implementation, I think... I use NPTL... but the
> question was general.
>

NPTL has at least one suboptimal artifact that I'm aware of: it preempts
on futex signaling. This can cause performance degradation if you don't
know how to avoid it or work around it.

g

Dec 16, 2005, 10:06:43 AM

> It's not the object states you need to worry about, it's the container
> state. Unless you are using a statically allocated array, most containers
> are dynamically linked data structures, so you could get into trouble real
> quick. We just went over this subject in a recent posting here.

I don't understand you here.

A private member in the class that keeps the objects:

static vector<my_object *> vec;
vec.push_back(...);

Is there a problem if I have fine-grained locking (a mutex per object)?


g.

Dervish

Dec 16, 2005, 10:12:26 AM

rwlocks are used to protect data from dirty reads (by readers) and
dirty writes (by writers). One can always replace an rwlock with a mutex,
but not vice versa (since writing while holding only a read lock is not
allowed).

Common practice is to use rwlocks when there are many more readers than
writers. How much "more" depends on program logic. Note that a common
implementation of an rwlock has two mutex objects inside. This means that
usage of the rwlock system calls is roughly twice as expensive (in terms
of time) as using a mutex. Ergo: you can start thinking about an rwlock
when numReaders > 2*numWriters. In the real world a positive effect
usually appears only when numReaders > 10*numWriters or more.

Another way to analyze the problem is to answer the following question:
what is the probability that more than one reader tries to take the lock
while there are no writers? If this probability is quite high, an rwlock
can (possibly) help.

Joe Seigh

Dec 16, 2005, 10:32:24 AM

Vector classes manage the internal array and reallocate it if it
needs to be increased in size. If you have a read concurrent
with a reallocation, the read could get undefined data. And
if the undefined data was dereferenced, bad things might happen.
Plus you're depending on the heap always being valid, i.e.
the system never unmaps unallocated heap memory. I don't know
of any system that does unmap virtual heap memory this way, so
you're probably safe there.
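
To make the hazard concrete, here is a hypothetical sketch (the names are
made up): the per-object mutexes protect each object, but nothing protects
the vector itself, so a concurrent push_back() that reallocates can leave
the scanning thread reading a freed buffer:

#include <pthread.h>
#include <vector>

struct Obj {
    pthread_mutex_t mtx;      // fine-grained lock: protects this object only
    int value;
};

std::vector<Obj*> vec;        // shared container, NOT protected below

// Reader: scans the container, locking each object as it goes.
int scan() {
    int sum = 0;
    for (size_t i = 0; i < vec.size(); ++i) {   // touches vec's internal array
        pthread_mutex_lock(&vec[i]->mtx);
        sum += vec[i]->value;
        pthread_mutex_unlock(&vec[i]->mtx);
    }
    return sum;
}

// Writer: inserts a new object.  If push_back() has to grow the vector it
// reallocates the internal array; a scan() running at the same time may then
// be indexing into memory that has just been freed, which is undefined
// behavior.  The per-object mutexes do nothing to prevent this; the container
// itself needs its own mutex or rwlock around scan() and insert().
void insert(Obj* o) {
    vec.push_back(o);
}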

g

Dec 16, 2005, 12:12:03 PM

> Vector classes manage the internal array and reallocate it if it
> needs to be increased in size. If you have a read concurrent
> with a reallocation, the read could get undefined data...

When I use a container I mean that I am doing caching, so the size of the
container is almost fixed. Of course there are insertions/deletions.

I understand the problem of the vector reallocating when it needs to, so
pointer-based structures are better.

Dervish wrote:
> rwlocks are used to protect data from dirty reads (by readers) and
> dirty writes (by writers). One can always replace an rwlock with a mutex...

A dirty write can't happen, though: you must hold the one and only mutex
(or the write lock).

Chris Thomasson

Dec 17, 2005, 7:50:27 AM

> I am wondering where RW locks are appropriate.

Basically, they work fairly well with "read-mostly" data-structures. If you
don't really care about performance and scalability, then most rw-mutex
implementations should work perfectly fine; however, in some cases they are
definitely not an "optimal solution". For instance, IMHO a rw-mutex would
probably not be an ideal candidate for protecting a "critical shared
collection" that can experience episodic periods of sustained
"moderate-to-heavy write activity". Reader threads that use rw-mutexes to
protect such a container would probably not perform and/or scale very well
at all with regard to adding cpus. One of the reasons why is usually due to
the fact that most rw-mutex implementations force reader threads to use some
rather expensive atomic/membar/blocking methods in order to enforce
synchronization and proper data-coherency among the participating
processors; this is not very cache friendly, not at all. Reader threads
would not really be able to keep the processors' caches primed and probably
would not experience the full benefits of a modern multi-processor system.

For instance, a reader thread that "frequently" executes a store/load style
membar would constantly be "invalidating and thrashing the cache"; not good
at all! The thread could spend most of its time "waiting" for the
modifications to the rw-mutex's internal state to become "fully visible" to
all of the processors involved. This process is usually made up of fairly
expensive operation(s) that may involve the overhead of cache snooping
and/or locking the system bus. The overhead involved could actually prevent
reader threads from truly executing in parallel; not good.

Therefore, IMHO, a rw-mutex is generally a bad solution for a critical
shared data-structure if the overhead of acquiring and releasing it is
greater than the overhead of "reading the data-structure in a
single-threaded application"...


> Are they appropriate only when you don't want 'dirty' reads?

Not really, but yes, they do eliminate the possibility of processing "stale
data"; however, as stated above, the methods involved are usually expensive.
Fortunately, there is a more advanced class of algorithms out there that does
not tend to add a boatload of extra overhead and can allow reader and writer
threads to execute in parallel. Please note that this behavior alone can be
beneficial to many application designs. For instance, since readers no
longer block writers, an existing synchronization design can usually be based
on a "much simpler locking scheme"; however, stale reads can and will occur
in this type of setup. Luckily, one can address stale/dirty reads in a
number of interesting ways:

http://groups.google.com/group/comp.programming.threads/msg/523aa1e77990215a?hl=en


Another solution for stale reads involves writer threads making copies of
existing shared data-structures and then atomically swapping them with the
existing ones. A garbage collection scheme can be used to free the "old"
data-structures. This solution does not add overhead to reader threads
unless the method of garbage collection used is sub-optimal...
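
As a minimal sketch of that copy-and-swap idea (using C++11 atomics as a
stand-in for whatever platform primitives would actually be used here,
assuming a single writer or an external writer lock, and with reclamation
of the old copy, the GC part, deliberately left as a comment):

#include <atomic>
#include <vector>

struct Snapshot {
    std::vector<int> data;                 // the read-mostly structure
};

std::atomic<const Snapshot*> g_current(new Snapshot());

// Reader: one atomic load, then read the immutable snapshot freely.
int sum_all() {
    const Snapshot* s = g_current.load(std::memory_order_acquire);
    int sum = 0;
    for (size_t i = 0; i < s->data.size(); ++i)
        sum += s->data[i];
    return sum;
}

// Writer: copy, modify the copy, publish it with one atomic pointer store.
void append(int v) {
    const Snapshot* old_snap = g_current.load(std::memory_order_relaxed);
    Snapshot* next = new Snapshot(*old_snap);   // full copy (COW, not PCOW)
    next->data.push_back(v);
    g_current.store(next, std::memory_order_release);
    // 'old_snap' must not be freed until no reader can still hold it;
    // deferred reclamation (RCU, hazard pointers, VZOOM, ...) goes here.
}

Readers never block and never write shared state, which is why this scales;
the price is paid entirely by the writer and by the reclamation scheme.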


--
Chris Thomasson

http://home.comcast.net/~vzoom/
Virtually Zero-Overhead Object Management (VZOOM)


g

Dec 17, 2005, 11:23:37 AM

> Another solution for stale reads involves writer threads making copies of
> existing shared data-structures and then atomically swapping them with the
> existing ones. A garbage collection scheme can be used to free the "old"
> data-structures. This solution does not add overhead to reader threads
> unless the method of garbage collection used is sub-optimal...

Lock-free :-)
One thing I don't understand with lock-free: the copy!
What do you mean by "making copies of existing shared data-structures and
then atomically swapping them with the existing ones"?

Do you mean copying, say, the whole tree instead of just the node the
writer wants to change? If that is what you mean and lock-free works like
this, then yes, "this solution does not add overhead to reader threads",
but what about the writers? There is a huge overhead if they have to copy
the whole container just to make a change.

Furthermore, lock-free isn't very cache friendly either, but OK, it is
useful only if we have read-mostly data, so that isn't a problem.

g.

Joe Seigh

Dec 17, 2005, 12:08:36 PM

g wrote:
>> Another solution for stale reads involves writer threads making copies of
>> existing shared data-structures and then atomically swapping them with the
>> existing ones. A garbage collection scheme can be used to free the "old"
>> data-structures. This solution does not add overhead to reader threads
>> unless the method of garbage collection used is sub-optimal...
>
> Lock-free :-)
> One thing I don't understand with lock-free: the copy!
> What do you mean by "making copies of existing shared data-structures and
> then atomically swapping them with the existing ones"?

Atomically thread-safe COW (copy on write) or, more often, PCOW (partial
copy on write). You mostly see the latter with linked lists, though
you can do it with other linked data structures like trees or hash
tables. You keep the old copy around until you are sure the reader threads
have finished with it. That's the GC (garbage collection) part, which
can be conventional tracing Boehm-style GC as in Java, or RCU, hazard
pointers, or VZOOM.
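
For a concrete picture of PCOW on a singly linked list, a deliberately
simplified sketch (hypothetical names; it assumes writers are already
serialized by an ordinary mutex, so only readers run concurrently, and
reclamation of the replaced node is again left to the GC scheme):

#include <atomic>

struct Node {
    int value;
    std::atomic<Node*> next;
    Node(int v, Node* n) : value(v), next(n) {}
};

// Replace the node after 'pred' with an updated copy (partial copy on write).
void replace_next(Node* pred, int new_value) {
    Node* old_node = pred->next.load(std::memory_order_relaxed);
    // Copy only the node being changed; the rest of the list stays shared.
    Node* new_node = new Node(new_value,
                              old_node->next.load(std::memory_order_relaxed));
    // Publish the copy with one atomic pointer store; a reader traversing the
    // list sees either the old node or the new one, never a half-updated node.
    pred->next.store(new_node, std::memory_order_release);
    // old_node must stay allocated until every reader that may still hold a
    // pointer to it is done; that is the GC part (RCU, hazard pointers, VZOOM).
}

Readers traverse with plain acquire loads and never take a lock, which is
what keeps the read side cheap.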

>
> Do you mean copying, say, the whole tree instead of just the node the
> writer wants to change? If that is what you mean and lock-free works like
> this, then yes, "this solution does not add overhead to reader threads",
> but what about the writers? There is a huge overhead if they have to copy
> the whole container just to make a change.

No, you can PCOW trees. I've partially prototyped it in Java with
insert, delete, and rotate (used in balancing the tree) operations.

>
> Furthermore, lock-free isn't very cache friendly either, but OK, it is
> useful only if we have read-mostly data, so that isn't a problem.
>

The read side of lock-free is very cache friendly, even NUMA friendly (RCU
was developed at Sequent), so you could relax the cache protocols somewhat
and reduce cache traffic substantially.

David Schwartz

Dec 17, 2005, 3:14:47 PM

"g" <geor...@yahoo.com> wrote in message
news:1134836617.7...@f14g2000cwb.googlegroups.com...

> Lock-free :-)
> One thing I don't understand with lock-free: the copy!
> What do you mean by "making copies of existing shared data-structures and
> then atomically swapping them with the existing ones"?
>
> Do you mean copying, say, the whole tree instead of just the node the
> writer wants to change? If that is what you mean and lock-free works like
> this, then yes, "this solution does not add overhead to reader threads",
> but what about the writers? There is a huge overhead if they have to copy
> the whole container just to make a change.
>
> Furthermore, lock-free isn't very cache friendly either, but OK, it is
> useful only if we have read-mostly data, so that isn't a problem.

These are techniques that are only appropriate for read-mostly
applications. By "read-mostly", we mean that nobody cares about the
performance of writes because they don't impact the application overall;
only the read performance matters.

You can sometimes make an application read-mostly when it isn't already.
For example, even if updates happen frequently, if they don't need to be
visible immediately, you can 'flush' them to the active structure every 2
seconds and make modifications to a structure that you don't read from. Now
the active structure is read-mostly, being written to only during a 'flush'.
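
A hypothetical sketch of that pattern (the names and the two-structure split
are made up for illustration): writers append to a private pending list under
its own mutex, and a periodic flush applies the batch to the read-mostly
active structure that readers share under an rwlock:

#include <pthread.h>
#include <map>
#include <vector>
#include <utility>

// Read-mostly structure: written only during flush(), read constantly.
pthread_rwlock_t active_rwl = PTHREAD_RWLOCK_INITIALIZER;
std::map<int, int> active;

// Write-side staging area: touched by writers, never read by readers.
pthread_mutex_t pending_mtx = PTHREAD_MUTEX_INITIALIZER;
std::vector<std::pair<int, int> > pending;

void update(int key, int value) {          // frequent writer path: cheap
    pthread_mutex_lock(&pending_mtx);
    pending.push_back(std::make_pair(key, value));
    pthread_mutex_unlock(&pending_mtx);
}

int lookup(int key) {                      // reader path
    pthread_rwlock_rdlock(&active_rwl);
    std::map<int, int>::const_iterator it = active.find(key);
    int result = (it != active.end()) ? it->second : -1;
    pthread_rwlock_unlock(&active_rwl);
    return result;
}

void flush() {                             // called every couple of seconds
    std::vector<std::pair<int, int> > batch;
    pthread_mutex_lock(&pending_mtx);
    batch.swap(pending);
    pthread_mutex_unlock(&pending_mtx);

    pthread_rwlock_wrlock(&active_rwl);    // the only time writers block readers
    for (size_t i = 0; i < batch.size(); ++i)
        active[batch[i].first] = batch[i].second;
    pthread_rwlock_unlock(&active_rwl);
}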

DS


g

Dec 17, 2005, 4:23:27 PM

> You can sometimes make an application read-mostly when it isn't already.
> For example, even if updates happen frequently, if they don't need to be
> visible immediately, you can 'flush' them to the active structure every 2
> seconds and make modifications to a structure that you don't read from. Now
> the active structure is read-mostly, being written to only during a 'flush'.

That's cool ;-)) but... there is a but :-)
You need lots of memory for the copies.
Let's say we have an application server used for caching, and 5 GB is the
size of the objects in the server. Using lock-free copies, I guess the
system must have around 10 GB of RAM while mostly using only half of it.
That is expensive.

And it is not a general solution; it is only for reads.

Multiversion concurrency control is aggressive and cool, but only for
specific tasks.

g.

David Schwartz

Dec 17, 2005, 4:48:18 PM

"g" <geor...@yahoo.com> wrote in message
news:1134854607.1...@g43g2000cwa.googlegroups.com...

> That's cool ;-)) but... there is a but :-)
> You need lots of memory for the copies.
> Let's say we have an application server used for caching, and 5 GB is the
> size of the objects in the server. Using lock-free copies, I guess the
> system must have around 10 GB of RAM while mostly using only half of it.
> That is expensive.

You only have to keep copies of the data that has been modified.

> And it is not a general solution; it is only for reads.

I don't follow you. If you mean it's only for read-mostly data, this
problem only occurs with read-mostly data.

DS


g

Dec 17, 2005, 5:15:30 PM

> You only have to keep copies of the data that has been modified.

Then it's OK!

> And it is not a general solution; it is only for reads.

I mean lock-free is for read-mostly transactions.

> I don't follow you. If you mean it's only for read-mostly data, this
> problem only occurs with read-mostly data.

Which problem?

I want to ask something else. With the usual retry loop,

do {
    ...
} while (!CAS(...));

if I want to change only one object there is no problem, but if my
transaction reads or modifies several objects, how do I write the loop?
Like this?

do {
    ...
} while (!CAS(... & ... & ...));

I don't think so...

g.
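
For reference, a minimal sketch (not from the thread) of the single-word
retry loop that fragment is gesturing at, written with C++11 atomics:

#include <atomic>

std::atomic<int> counter(0);

// The usual single-word CAS retry loop: read, compute, try to publish,
// and retry if another thread got there first.
void add(int delta) {
    int expected = counter.load(std::memory_order_relaxed);
    while (!counter.compare_exchange_weak(expected, expected + delta,
                                          std::memory_order_acq_rel,
                                          std::memory_order_relaxed)) {
        // On failure 'expected' is refreshed with the current value; loop.
    }
}

A single CAS covers one machine word, so a transaction over several objects
is normally expressed by gathering them behind one pointer (a snapshot) and
CASing that pointer, which is exactly the copy-and-swap / PCOW approach
described earlier in the thread.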
