
Re: More precision about my new inventions of scalable algorithms..


Bonita Montero

Feb 19, 2021, 5:27:34 PM
> My below powerful inventions of LW_Fast_RWLockX and Fast_RWLockX that are two powerful scalable RWLocks that are FIFO fair

RW-locks are never fair.

Chris M. Thomasson

Feb 19, 2021, 5:48:29 PM
There are implementations that can be starvation free.

Bonita Montero

Feb 19, 2021, 6:01:44 PM
>>> My below powerful inventions of LW_Fast_RWLockX and Fast_RWLockX that
>>> are two powerful scalable RWLocks that are FIFO fair

>> RW-locks are never fair.

> There are implementations that can be starvation free.

No, impossible. Readers can hold the lock as long as they
want and starve out writers. And writers could do the same.

Chris M. Thomasson

Feb 19, 2021, 6:17:15 PM
Ummm... I am not sure what you mean here. The type of starvation I am
talking about is when reader activity is so frequent that writers just
can't seem to get in, even if the reader critical sections are short.
Every time a writer wants to get in, several readers jump in line. It's
all about reader or writer priority; a lot of rwmutex impls have that
flaw. Sure, a reader can hold the lock for an hour, okay. Well, writers
cannot get in for an hour. That's perfectly normal. This is not the same
type of starvation I am referring to...
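
For illustration, here is a minimal sketch (not code from this thread; the
class and member names are made up, and it assumes C++20 for
std::binary_semaphore) of the textbook reader-preference design that has
exactly this flaw:

__________________________________
// Hypothetical sketch: the classic reader-preference rwlock
// ("first readers-writers problem" solution).
#include <mutex>
#include <semaphore>   // C++20

class reader_preference_rwlock
{
    std::mutex m_count_lock;             // protects m_readers
    std::binary_semaphore m_resource{1}; // held by the writer, or by the readers as a group
    int m_readers = 0;

public:
    void read_lock()
    {
        std::lock_guard<std::mutex> g(m_count_lock);
        if (++m_readers == 1)
            m_resource.acquire();   // first reader locks out writers
    }

    void read_unlock()
    {
        std::lock_guard<std::mutex> g(m_count_lock);
        if (--m_readers == 0)
            m_resource.release();   // last reader finally lets a writer in
    }

    void write_lock()   { m_resource.acquire(); }
    void write_unlock() { m_resource.release(); }
};
__________________________________

With a steady stream of overlapping readers, m_readers never drops to zero,
so m_resource is never released and write_lock() blocks indefinitely, even
though every individual read critical section is short.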

Chris M. Thomasson

Feb 19, 2021, 6:32:37 PM
On 2/19/2021 3:01 PM, Bonita Montero wrote:
Check this out:

https://vorbrodt.blog/2019/02/14/read-write-mutex/

An older rwmutex I invented. It's simple, and is starvation-free wrt
reads and writes. Here is a simple Relacy test unit:

__________________________________
// Chris M. Thomasson's RW Mutex


//#define RL_DEBUGBREAK_ON_ASSERT
//#define RL_MSVC_OUTPUT
//#define RL_FORCE_SEQ_CST
#include <relacy/relacy_std.hpp>
#include <climits>   // INT_MAX, LONG_MAX
#include <cstdio>
#include <cstddef>

// NOTE: HANDLE, CreateSemaphore, ReleaseSemaphore, WaitForSingleObject and
// CloseHandle are assumed to be provided by Relacy's Win32 simulation, or
// by <windows.h> when built outside the simulator.


#if ! defined (NDEBUG)
#  define DBG_PRINTF(e) std::printf e
#else
#  define DBG_PRINTF(e) ((void)0)
#endif


// Thin wrapper around a Win32 semaphore, used as a wait-set.
struct semaphore
{
    HANDLE m_waitset;

    semaphore(LONG count)
    {
        m_waitset = CreateSemaphore(nullptr, count, LONG_MAX, nullptr);
    }

    ~semaphore()
    {
        CloseHandle(m_waitset);
    }

    void post(LONG count = 1)
    {
        ReleaseSemaphore(m_waitset, count, nullptr);
    }

    void wait()
    {
        WaitForSingleObject(m_waitset, INFINITE);
    }
};


class ct_rw_fast_mutex
{
public:
    ct_rw_fast_mutex()
        : m_wrstate(1), m_count(INT_MAX), m_rdwake(0),
          m_rdwset(0), m_wrwset(0), m_wrmtx(0) {}

    void read_lock()
    {
        if (m_count.fetch_add(-1, std::memory_order_acquire) < 1)
            m_rdwset.wait();
    }

    void read_unlock()
    {
        if (m_count.fetch_add(1, std::memory_order_release) < 0)
            if (m_rdwake.fetch_add(-1, std::memory_order_acq_rel) == 1)
                m_wrwset.post();
    }

    void write_lock()
    {
        if (m_wrstate.fetch_sub(1, std::memory_order_acquire) < 1)
            m_wrmtx.wait();

        int count = m_count.fetch_add(-INT_MAX, std::memory_order_acquire);

        if (count < INT_MAX)
        {
            int rdwake = m_rdwake.fetch_add(INT_MAX - count,
                std::memory_order_acquire);

            if (rdwake + INT_MAX - count)
                m_wrwset.wait();
        }
    }

    void write_unlock()
    {
        int count = m_count.fetch_add(INT_MAX, std::memory_order_release);

        if (count < 0)
            m_rdwset.post(-count);

        if (m_wrstate.fetch_add(1, std::memory_order_release) < 0)
            m_wrmtx.post();
    }

private:
    std::atomic<int> m_wrstate;
    std::atomic<int> m_count;
    std::atomic<int> m_rdwake;

    semaphore m_rdwset;
    semaphore m_wrwset;
    semaphore m_wrmtx;
};


#define ITERS 16
#define WRITERS 3
#define READERS 5
#define THREADS (WRITERS + READERS)


struct proxy_test
    : rl::test_suite<proxy_test, THREADS>
{
    ct_rw_fast_mutex m_rwmutex;
    VAR_T(int) m_shared;

    proxy_test() : m_shared(0) {}

    ~proxy_test() { RL_ASSERT(!VAR(m_shared)); }

    void thread(unsigned int tidx)
    {
        if (tidx < READERS)
        {
            // readers
            for (unsigned long i = 0; i < ITERS; ++i)
            {
                m_rwmutex.read_lock();
                int shared = VAR(m_shared);
                m_rwmutex.read_unlock();

                RL_ASSERT(shared > -1);
            }
        }
        else
        {
            // writers
            for (unsigned long i = 0; i < ITERS; ++i)
            {
                m_rwmutex.write_lock();
                ++VAR(m_shared);
                m_rwmutex.write_unlock();

                m_rwmutex.write_lock();
                --VAR(m_shared);
                m_rwmutex.write_unlock();
            }
        }
    }
};


int main()
{
    {
        rl::test_params p;

        p.iteration_count = 10000000;
        p.execution_depth_limit = 10000;
        //p.search_type = rl::sched_bound;
        //p.search_type = rl::fair_full_search_scheduler_type;
        //p.search_type = rl::fair_context_bound_scheduler_type;
        rl::simulate<proxy_test>(p);
    }

    return 0;
}
__________________________________


Can you run it?

Chris M. Thomasson

Feb 19, 2021, 6:37:23 PM
On 2/19/2021 3:32 PM, Chris M. Thomasson wrote:
> On 2/19/2021 3:01 PM, Bonita Montero wrote:
>>>>> My below powerful inventions of LW_Fast_RWLockX and Fast_RWLockX
>>>>> that are two powerful scalable RWLocks that are FIFO fair
>>
>>>> RW-locks are never fair.
>>
>>> There are implementations that can be starvation free.
>>
>> No, impossible. Readers can hold the lock as long as they
>> want and starve out writers. And writers could do the same.
>
> Check this out:
>
> https://vorbrodt.blog/2019/02/14/read-write-mutex/
>
> An older rwmutex I invented. It's simple, and is starvation-free wrt
> reads and writes. Here is a simple Relacy test unit:
[...]

Notice how there are no loops in the rwmutex logic?

Bonita Montero

Feb 20, 2021, 2:50:58 AM
> Ummm... I am not sure what you mean here. ...

Readers can hold the mutex as long as they want.
And writers can do the same.
So they can prevent other writers or readers from locking the mutex.
So RW-locks are never starvation-free.

Chris M. Thomasson

Feb 20, 2021, 3:46:42 PM
On 2/19/2021 11:50 PM, Bonita Montero wrote:
>> Ummm... I am not sure what you mean here. ...
>
> Readers can hold the mutex as long as they want.
> And writers can do the same.

Indeed. But you snipped the relevant part of my post.


> So they can prevent other writers or readers from locking the mutex.
> So RW-locks are never starvation-free.

Let me manually quote what I wrote. It's important because the type of
starvation you are talking about is very different from the usual meaning.

Chris M. Thomasson wrote:
_________________________________
[...]
The type of starvation I am talking about is when reader activity is so
frequent that writers just can't seem to get in, even if the reader
critical sections are short. Every time a writer wants to get in,
several readers jump in line. It's all about reader or writer priority; a
lot of rwmutex impls have that flaw. Sure, a reader can hold the lock for
an hour, okay. Well, writers cannot get in for an hour. That's perfectly
normal. This is not the same type of starvation I am referring to...
_________________________________


Starvation on a rw mutex can occur when heavy reader activity actually
starves out writers. Some implementations prioritize reads or writes;
some have reader preference by design. So, starvation can occur when
frequent readers prevent writers from acquiring write access. In other
words, writers spend most of their time trying and failing to acquire
the mutex. This is why it's important to be very careful when one is
implementing a rwmutex. I linked you to an algorithm I invented that is
loop-free. It fairly alternates between batches of readers and writers.
So, readers can never starve out writers, even during massive reader
activity, and vice versa.

Chris M. Thomasson

Feb 20, 2021, 3:52:56 PM
Heavy writer activity starving out readers can happen, but it's rather
unusual. The general use pattern for a read-write mutex is a read-mostly
workload.

Bonita Montero

Feb 21, 2021, 11:16:58 AM
> I linked you to an algorithm I invented that is loop-free.
> It fairly alternates between batches of readers and writers.

It may alternate, but that's surely not fair.

Richard Damon

Feb 21, 2021, 1:15:38 PM
The problem is there isn't a uniform definition of 'fair'; fairness
requires making value decisions.

Reminds me of a discussion of 'fairness' in scheduling algorithms when I
was in college. This was in the days when you would keypunch your
program and submit the deck to the computer center and get your results
some time later.

The question is, which of these algorithms is 'fairer'?

60 people come up and submit their jobs at basically the same time. Each
job is going to take 1 minute of CPU time to run.

Method 1: the first job in line is run to completion, then the second job
is run to completion, then the 3rd, and so on. So the first person gets
his results after a minute, the next a minute later, and so on, so the
last gets their results 60 minutes later.

Method 2: all the jobs are loaded into memory at once and time-sliced
between them, each getting 1 second of time, then the next, and so on.
Every job will finish at the end of the hour.


Many people will label the second method as 'fairer' since everyone gets
the same pain, but no one in the second example was any better off than
in the first, and almost everyone was worse off, so by most efficiency
measures it is clearly worse.

Now, if all the tasks don't take exactly the same length of time to run,
or aren't all submitted at once, some of the value arguments about what
should be fair come into play, but 'fairness' can be hard to define.
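
A quick back-of-the-envelope check of the two schedules (my own
illustrative numbers, assuming 1-second slices and no context-switch
overhead) makes the efficiency point concrete:

__________________________________
// Hypothetical helper, not part of the thread: average turnaround time
// for 60 jobs of 1 CPU-minute each under the two schedules.
#include <cstdio>

int main()
{
    const int jobs = 60;

    // Method 1: run to completion, one after another.
    // Job i (1-based) finishes after i minutes.
    double sum1 = 0;
    for (int i = 1; i <= jobs; ++i)
        sum1 += i;

    // Method 2: 1-second round-robin slices. Job i gets its last slice
    // in the final round, finishing at 59 minutes + i seconds.
    double sum2 = 0;
    for (int i = 1; i <= jobs; ++i)
        sum2 += (jobs - 1) + i / 60.0;

    std::printf("method 1 average turnaround: %.2f min\n", sum1 / jobs); // 30.50
    std::printf("method 2 average turnaround: %.2f min\n", sum2 / jobs); // ~59.51
    return 0;
}
__________________________________

Roughly 30.5 minutes versus roughly 59.5: the round-robin schedule about
doubles the average wait without making anyone finish sooner.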

Bonita Montero

Feb 21, 2021, 1:59:12 PM
> 60 people come up and submit their jobs at basically the same time.
> Each job is going to take 1 minute of CPU time to run.
> Method 1: the first job in line is run to completion, then the second job
> is run to completion, then the 3rd, and so on. So the first person gets
> his results after a minute, the next a minute later, and so on, so the
> last gets their results 60 minutes later.
> Method 2: all the jobs are loaded into memory at once and time-sliced
> between them, each getting 1 second of time, then the next, and so on.
> Every job will finish at the end of the hour.
> Many people will label the second method as 'fairer' since everyone gets
> the same pain, but no one in the second example was any better off than
> in the first, and almost everyone was worse off, so by most efficiency
> measures it is clearly worse.

You can't compare that to the issue under discussion because there's no
opportunity for one task to capture all the CPU time.

Richard Damon

Feb 21, 2021, 3:58:45 PM
The key point isn't what resource was being managed, but how we
really define 'fair'. The key part of the example was that many
people thought the second method was fairer, because there was much
less difference in response times across clients, even though it came at
the cost of making almost everyone wait longer than they needed to in
the first. The first method has an average response time of half the
second, and no one got worse performance. Only by a metric on variation
in response can the second be shown to be superior, but some think that
makes it fairer.

To define 'fairness' we need to define some metric to measure it.
Normally we can't get 'perfect' fairness except in very special
conditions. Normally we define a system as 'fair' if every client can
get some reasonable amount of the resource (even if it isn't a 'fair
share'). Tighter definitions of 'fairness' might try to limit the
inequities in allocation (and if the clients are not identical in their
needs, we need ways to 'fairly' weigh those needs in order to measure
inequality). Generally,
attempts to make a system 'fairer' can often impact the efficiency of
the system. In my example, to reduce the variance in the wait times, the
average wait time doubled, and perhaps by increasing the number of
context switches we might have actually increased the total time to run
all the jobs.

My key point is that the first step is to define what we want to count
as 'fair'; then we can decide whether a given strategy meets that
definition (and maybe how well).

Thus, if you want to call something 'unfair', you need to define what
you mean by fair. It is quite possible to define a RW-lock that will see
to it that as long as no locker fails to release its lock in a
reasonable time period, all requests WILL get met eventually. It just
requires that at some point after a write request gets queued, the system
stops granting new read requests (so the writer will eventually get in
when all current readers end), and that a somewhat fair method (like
oldest first) is used for granting requests when the system is free. Yes,
perhaps the simplest RW-lock will not be able to do this, but then the
issue isn't about RW-locks in general, but about that style of
implementation. If you are using some other definition of 'fair', you
need to define it.
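
To make that concrete, here is a minimal sketch (my own, not something
posted in this thread; the class and member names are made up) of the
scheme described above: once a writer is queued, no new readers are
admitted, so the writer gets in as soon as the current readers drain.

__________________________________
// Hypothetical write-preferring rwlock: a queued writer blocks new readers.
#include <mutex>
#include <condition_variable>

class write_preferring_rwlock
{
    std::mutex m_lock;
    std::condition_variable m_cond;
    int m_active_readers = 0;
    int m_waiting_writers = 0;
    bool m_writer_active = false;

public:
    void read_lock()
    {
        std::unique_lock<std::mutex> lk(m_lock);
        // Hold new readers back while a writer is active *or* queued.
        m_cond.wait(lk, [&]{ return !m_writer_active && m_waiting_writers == 0; });
        ++m_active_readers;
    }

    void read_unlock()
    {
        std::lock_guard<std::mutex> lk(m_lock);
        if (--m_active_readers == 0)
            m_cond.notify_all();   // last reader: wake a queued writer
    }

    void write_lock()
    {
        std::unique_lock<std::mutex> lk(m_lock);
        ++m_waiting_writers;       // from here on, new readers are held back
        m_cond.wait(lk, [&]{ return !m_writer_active && m_active_readers == 0; });
        --m_waiting_writers;
        m_writer_active = true;
    }

    void write_unlock()
    {
        std::lock_guard<std::mutex> lk(m_lock);
        m_writer_active = false;
        m_cond.notify_all();       // wake waiting readers and/or the next writer
    }
};
__________________________________

This prevents readers from starving writers, though a steady stream of
writers can now hold readers back; a batch-alternating design like the one
Chris posted earlier addresses both directions.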

Chris M. Thomasson

Feb 21, 2021, 5:35:35 PM
I agree. Fwiw, my rwmutex does not allow readers to starve out writers
and vice versa. So, if a writer requests the lock while there are a shit
load of concurrent readers, it will wait until all of the readers in the
current "batch" are completed and will automatically own the lock for
write access after that. If a batch of readers comes in while the writer
is doing its thing, they will be released all at once and automatically
own reader access once the writer is finished. This way, readers cannot
starve out writers and vice versa. Now, if one does not desire that type
of alternating, basically a classic bakery-algorithm-like behavior, then
do NOT use my rwmutex.

https://vorbrodt.blog/2019/02/14/read-write-mutex

I like how he gave me proper credit. Nice. :^)

Actually, my rwmutex is pretty short and sweet with no loops.

Chris M. Thomasson

Feb 22, 2021, 12:48:08 AM
Why not?

Bonita Montero

Feb 22, 2021, 12:51:46 AM
>> You can't compare that to the issue under discussion because there's no
>> opportunity for one task to capture all the CPU time.

> Why not?

That's not how he defined it.

Bonita Montero

Feb 22, 2021, 12:53:04 AM
> The key point isn't what resource was being managed, but how we
> really define 'fair'. ...

But it's an example that doesn't fit the discussion.
Chris does know what I mean, you don't.

Bonita Montero

Feb 22, 2021, 12:54:27 AM
> I agree. Fwiw, my rwmutex does not allow readers to starve out writers
> and vice versa. ...

Building such a RW-mutex is impossible. You can't prevent a writer or
reader from holding the mutex as long as it wants.

Chris M. Thomasson

Feb 22, 2021, 1:00:40 AM
True, you have again snipped out the relevant issue of classic
starvation I posted, where frequent heavy read activity can starve out
writers, and vice versa. You are familiar with reader/writer priority on
a rwmutex, right? My impl alternates them in batches.

Chris M. Thomasson

Feb 22, 2021, 1:01:32 AM
Well, let's say five threads have read access... There is nothing
stopping them from using all CPUs!

Chris M. Thomasson

Feb 22, 2021, 1:10:53 AM
I do know what you mean, it's just not the classic way of thinking about
starvation. You are 100% correct that a batch of readers, or a single
writer, can hold the lock for hours. It can happen; well, it's not
advised, but it still can occur.

Bonita Montero

Feb 22, 2021, 1:41:14 AM
> True, you have again snipped out the relevant issue of classic
> starvation I posted, where frequent heavy read activity can starve
> out writers, and vice versa. You are familiar with reader/writer
> priority on a rwmutex, right? My impl alternates them in batches.

That doesn't make it fair.

Bonita Montero

Feb 22, 2021, 3:21:19 AM
> You are familiar with reader/writer priority on a
> rwmutex, right? My impl alternates them in batches.

Chris M. Thomasson

Feb 22, 2021, 5:54:42 PM
My version of "fairness", or being "starvation free", is when readers
cannot starve out writers, and vise versa. Oh well. :^)

Bonita Montero

Feb 23, 2021, 2:39:00 AM
> My version of "fairness", or being "starvation free", is when readers
> cannot starve out writers, and vise versa. Oh well. :^)

That's impossible to implement.
Readers or writers can hold the mutex as long as they want.

Chris M. Thomasson

Feb 23, 2021, 3:25:48 AM
We just have radically different versions of what it means to be
starvation free. :^)

Bonita Montero

Feb 23, 2021, 2:18:12 PM
>>> My version of "fairness", or being "starvation free", is when readers
>>> cannot starve out writers, and vise versa. Oh well. :^)

>> That's impossible to implement.
>> Readers or writers can hold the mutex as long as they want.

> We just have radically different versions of what it means to be
> starvation free. :^)

The fairness of a lock doesn't depend on the implementation of the
lock, but on how it is used.

Chris M. Thomasson

Feb 23, 2021, 2:29:52 PM
It does depend on the implementation whether frequent reads can starve
out writers.

Richard Damon

Feb 23, 2021, 4:51:38 PM
And the standard definition of a 'fair' locking system is one that is
fair when the clients follow the listed rules, like a maximum hold time
for a lock.

The only way to handle unruly clients who hold their locks too long
is to have OS support to kill those clients and have it unlock the locks
they held on to. (It could also not kill them but just deny further
access, depending on the resource.)

Chris M. Thomasson

Feb 23, 2021, 5:14:10 PM
On 2/23/2021 11:17 AM, Bonita Montero wrote:
Are you familiar with bakery algorithms?

Chris M. Thomasson

Feb 23, 2021, 5:14:28 PM
> Are you familiar with bakery algorithms?

bakery algorithms?

Bonita Montero

Feb 24, 2021, 12:20:42 PM
> It does depend on the implementation whether frequent reads can starve
> out writers.

But whether this happens is a matter of the lock's behaviour.

Bonita Montero

Feb 24, 2021, 12:22:21 PM
> The only way to handle unruly clients who hold their locks too long
> is to have OS support to kill those clients and have it unlock the locks
> they held on to. (It could also not kill them but just deny further
> access, depending on the resource.)

We're talking about locks which protect shared data structures. In this
case it would be stupid to kill a process or thread holding such a lock.

Richard Damon

Feb 24, 2021, 1:29:14 PM
A lock that works with the OS, so that the OS knows what locks the process
holds, can kill a 'bad' process and force the release of the lock.

The only way to handle 'bad' processes is for there to be some
mechanism to deny such a process access to the resource it has taken
the lock on, so it can be given to someone else. That capability is
typically reserved to the OS.

Yes, if you abort a process in the middle of its operation, you may need
to do some 'cleanup' on the resource, but if the process isn't releasing
the resource in the required time, can you really trust that it is doing
its job right?

Chris M. Thomasson

Feb 24, 2021, 8:16:45 PM
Are you familiar with robust mutexes? They are designed to allow one to
repair the shared data if a process dies while holding the lock.
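
For reference, a minimal sketch (my own, with made-up names; it assumes a
POSIX system and that shared_block lives in already-mapped process-shared
memory) of the robust-mutex recovery path:

__________________________________
// Hypothetical sketch: POSIX robust mutex plus EOWNERDEAD recovery.
#include <pthread.h>
#include <cerrno>

struct shared_block
{
    pthread_mutex_t lock;
    int data;
};

void init_robust(shared_block* s)
{
    pthread_mutexattr_t a;
    pthread_mutexattr_init(&a);
    pthread_mutexattr_setpshared(&a, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&a, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(&s->lock, &a);
    pthread_mutexattr_destroy(&a);
}

void touch(shared_block* s)
{
    int rc = pthread_mutex_lock(&s->lock);
    if (rc == EOWNERDEAD)
    {
        // The previous owner died while holding the lock. We now own it,
        // but the protected state may be inconsistent: repair it, then
        // mark the mutex consistent again.
        s->data = 0;                          // application-specific repair
        pthread_mutex_consistent(&s->lock);
    }
    ++s->data;                                // normal critical section
    pthread_mutex_unlock(&s->lock);
}
__________________________________

Whether the "repair" step is actually possible is exactly what is argued
below; the API only guarantees you are told that the owner died, not that
the data is salvageable.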

Bonita Montero

Feb 25, 2021, 1:08:29 AM
> A lock that works with the OS, so that the OS knows what locks the process
> holds, can kill a 'bad' process and force the release of the lock.

That's impossible, because what the lock guards is in an inconsistent
state at that point.

Chris M. Thomasson

Feb 25, 2021, 2:39:45 AM
Think of a "master" process, than can kill a process while its holding a
process-wide mutex. This can be handled and corrected, the state can be
repaired into a consistent state. Its tricky, yet can be done. Wrt
Windows think of the WAIT_ABANDONDED state. POSIX has it with EOWNERDEAD.

Chris M. Thomasson

Feb 25, 2021, 2:43:23 AM
On 2/24/2021 10:08 PM, Bonita Montero wrote:
Wrt WAIT_ABANDONED, read the docs:

https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-waitforsingleobject
_____________
WAIT_ABANDONED:

The specified object is a mutex object that was not released by the
thread that owned the mutex object before the owning thread terminated.
Ownership of the mutex object is granted to the calling thread and the
mutex state is set to nonsignaled.
If the mutex was protecting persistent state information, you should
check it for consistency.
_____________

POSIX has EOWNERDEAD:

https://man7.org/linux/man-pages/man3/pthread_mutex_consistent.3.html
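
As a companion to the quoted Windows docs, a minimal sketch (mine, with a
made-up function name) of what handling WAIT_ABANDONED on a Win32 mutex
looks like:

__________________________________
// Hypothetical sketch: handling WAIT_ABANDONED on a Win32 mutex.
#include <windows.h>

void enter_critical(HANDLE mtx)
{
    DWORD rc = WaitForSingleObject(mtx, INFINITE);
    if (rc == WAIT_ABANDONED)
    {
        // The previous owner terminated while holding the mutex. We now
        // own it, but any persistent state it was protecting may be
        // inconsistent: check and repair it here before proceeding.
    }

    // ... critical section ...

    ReleaseMutex(mtx);
}
__________________________________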

Bonita Montero

Feb 25, 2021, 3:49:40 AM
> Think of a "master" process, than can kill a process while its holding a
> process-wide mutex. This can be handled and corrected, the state can be
> repaired into a consistent state. Its tricky, yet can be done. Wrt
> Windows think of the WAIT_ABANDONDED state. POSIX has it with EOWNERDEAD.

If you have shared memory modified while holding the lock there's
nothing to fix.

Chris M. Thomasson

Feb 25, 2021, 4:05:33 AM
Have you ever had to encounter and deal with EOWNERDEAD or
WAIT_ABANDONED?

Chris M. Thomasson

Feb 25, 2021, 4:49:21 AM
The way a rwmutex is actually used should be read-heavy, with writes
not so frequent. However, frequent sustained reads should not have the
ability to starve out writers; lots of rwmutex impls allow just that.
Some even have so-called reader/writer priorities. Bakery algorithms
cannot suffer from these conditions... They just go in order.
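
To illustrate the "go in order" idea, here is a minimal ticket-lock sketch
(mine, not from the thread) — the simplest take-a-number, bakery-style
scheme, shown as a plain mutual-exclusion lock rather than a rwlock:

__________________________________
// Hypothetical sketch: a FIFO ticket lock. Threads are served strictly
// in arrival order, so no waiter can be starved by later arrivals.
#include <atomic>
#include <thread>

class ticket_lock
{
    std::atomic<unsigned> m_next{0};     // next ticket to hand out
    std::atomic<unsigned> m_serving{0};  // ticket currently being served

public:
    void lock()
    {
        unsigned my = m_next.fetch_add(1, std::memory_order_relaxed);
        while (m_serving.load(std::memory_order_acquire) != my)
            std::this_thread::yield();   // spin until our number comes up
    }

    void unlock()
    {
        m_serving.fetch_add(1, std::memory_order_release);
    }
};
__________________________________

A batch-alternating rwlock applies the same first-come-first-served idea,
but admits a whole batch of readers at once between writers.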

Bonita Montero

Feb 25, 2021, 5:43:19 AM
>> If you have shared memory that was being modified while the lock was
>> held, there's nothing you can fix.

> Have you ever had to encounter and deal with EOWNERDEAD or
> WAIT_ABANDONED?

Boy, you're stupid. If you're modifying memory while holding a lock
and the lock is deleted meanwhile, WAIT_ABANDONED or whatever doesn't
help.

Chris M. Thomasson

Feb 25, 2021, 6:09:28 AM
You have never had to clean up after a process died while holding a mutex?

Chris M. Thomasson

Feb 25, 2021, 6:10:37 AM
On 2/25/2021 3:09 AM, Chris M. Thomasson wrote:
> On 2/25/2021 2:43 AM, Bonita Montero wrote:
>>>> If you have shared memory that was being modified while the lock was
>>>> held, there's nothing you can fix.
>>
>>> Have you ever had to encounter and deal with EOWNERDEAD or
>>> WAIT_ABANDONED?
>>
>> Boy, you're stupid. If you're modifying memory while holding a lock
>> and the lock is deleted meanwhile, WAIT_ABANDONED or whatever doesn't
>> help.

I am not referring to deleting a mutex; I am talking about a process
crashing in the middle of a critical section.