
seqlock experiment/benchmark...


Chris M. Thomasson

Mar 2, 2022, 4:35:36 AM
Here is a crude little experiment I wrote that benchmarks a seqlock vs a
std::shared_mutex. To test against the shared_mutex just uncomment the
CT_TEST_SHARED_MUTEX macro.

https://en.wikipedia.org/wiki/Seqlock

On my system I get the following result for the seqlock:

Quick and Dirty SeqLock Experiment ver:(0.0.0)
By: Chris M. Thomasson
________________________________________
Threads = 32
Iterations = 20000000
SeqLock Cells = 64

Launching...
Running...
Complete!
Time = 52.7902
________________________________________
DATA IS COHERENT!!! :^D


and the following result for std::shared_mutex

Quick and Dirty SeqLock Experiment ver:(0.0.0)
By: Chris M. Thomasson
________________________________________
Threads = 32
Iterations = 20000000
SeqLock Cells = 64
Using a shared mutex

Launching...
Running...
Complete!
Time = 86.5224
________________________________________
DATA IS COHERENT!!! :^D


As you can see, the seqlock is 33.7322 seconds faster! Nice so far. Now,
can anybody else please try to compile and run it when they get some
free time to burn? Fwiw, defining CT_LOG records how many times the
seqlock spins in a read. This will hurt performance, but it's interesting
to see. Thanks everybody. :^)
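
Something along these lines ought to build it with GCC or Clang (the
seqlock.cpp file name is just a placeholder for wherever you save it):

g++ -std=c++17 -O2 -pthread seqlock.cpp -o seqlock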


The C++17 code:
___________________________________
#include <iostream>
#include <thread>
#include <atomic>
#include <mutex>
#include <shared_mutex> // for a benchmark
#include <algorithm>
#include <chrono>
#include <ratio>
#include <cstdlib>



// Sorry about the hardcoded macros! ;^(
#define CT_THREADS 32
#define CT_ITERS 20000000
#define CT_SEQLOCK_CELL 64

// uncomment to log read spins. SeqLock only...
//#define CT_LOG

// uncomment to test shared mutex
//#define CT_TEST_SHARED_MUTEX


#if defined CT_LOG
static std::atomic<unsigned long> g_log_read_spins(0);
#endif


// User data anyone? ;^)
struct ct_data
{
int m_a;
int m_b;
int m_c;

ct_data() : m_a(0), m_b(1), m_c(2) {}

void inc(ct_data const& data)
{
m_a = data.m_a + 1;
m_b = data.m_b + 1;
m_c = data.m_c + 1;
}

bool validate()
{
return (m_a + 1 == m_b && m_b + 1 == m_c);
}
};


// A seqlock data cell
struct ct_cell
{
std::atomic<unsigned long> m_ver;
ct_data m_data;

ct_cell() : m_ver(0) {}
};


// Wrapper for the user
struct ct_cell_ref
{
unsigned long m_ver;
ct_data m_data;
ct_cell* m_next;
};


// The main impl...
struct ct_seqlock
{
std::mutex m_mutex;
std::atomic<ct_cell*> m_cur;
ct_cell m_cells[CT_SEQLOCK_CELL];

ct_seqlock() : m_cur(m_cells) {}


bool validate()
{
for (std::size_t i = 0; i < CT_SEQLOCK_CELL; ++i)
{
if (! m_cells[i].m_data.validate()) return false;
}

return true;
}
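
// Versioning protocol in a nutshell: a cell's m_ver is even while the cell
// is stable and odd while it is being written. write_lock() hands out the
// _next_ cell and bumps its version to odd, the caller stores the new data
// into that cell, and write_unlock() bumps the version back to even and
// publishes the cell through m_cur. Readers load m_cur, read the version,
// copy the data, and re-check the version; an odd or changed version means
// the read must retry.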

ct_cell_ref write_lock()
{
m_mutex.lock();

ct_cell* cur = m_cur.load(std::memory_order_relaxed);
ct_cell* next = m_cells + ((cur - m_cells + 1) % CT_SEQLOCK_CELL);

ct_cell_ref cell_ref = { next->m_ver, cur->m_data, next };

next->m_ver.store(cell_ref.m_ver + 1, std::memory_order_relaxed);

std::atomic_thread_fence(std::memory_order_acq_rel);

return cell_ref;
}

void write_unlock(ct_cell_ref const& cell_ref)
{
cell_ref.m_next->m_ver.store(cell_ref.m_ver + 2,
std::memory_order_release);

// make it visible to the reader...
m_cur.store(cell_ref.m_next, std::memory_order_release);

m_mutex.unlock();
}


ct_data read()
{
unsigned long spins = 0;

ct_data data;

for (;;++spins)
{
ct_cell* cell = m_cur.load(std::memory_order_consume);

unsigned long ver0 =
cell->m_ver.load(std::memory_order_acquire);

if (ver0 & 1)
{
// spin on locked state
// We can spin, backoff or do something else...
// For now, we just yield and spin away! ;^)
std::this_thread::yield();
continue;
}

data = cell->m_data;

std::atomic_thread_fence(std::memory_order_acquire);

unsigned long ver1 =
cell->m_ver.load(std::memory_order_relaxed);

if (ver0 == ver1) break;

// spin on data changed
}

#if defined CT_LOG
g_log_read_spins.fetch_add(spins, std::memory_order_relaxed);
#endif

return data;
}
};



// The test app

// a simple timer
struct ct_time
{
// Type aliases to make accessing nested type easier
using clock_type = std::chrono::steady_clock;
using second_type = std::chrono::duration<double, std::ratio<1> >;

std::chrono::time_point<std::chrono::steady_clock> m_beg;

ct_time() : m_beg(std::chrono::steady_clock::now()) {}

void reset()
{
m_beg = std::chrono::steady_clock::now();
}

double elapsed() const
{
return std::chrono::duration_cast<std::chrono::duration<double,
std::ratio<1> >>
(clock_type::now() - m_beg).count();
}
};


struct ct_shared_state
{
ct_seqlock m_seqlock;

#if defined (CT_TEST_SHARED_MUTEX)
std::shared_mutex m_std_shared_mutex;
#endif

void write()
{
#if ! defined (CT_TEST_SHARED_MUTEX)
ct_cell_ref cell_ref = m_seqlock.write_lock();
cell_ref.m_next->m_data.inc(cell_ref.m_data);
m_seqlock.write_unlock(cell_ref);
#else
m_std_shared_mutex.lock();
m_seqlock.m_cells[0].m_data.inc(m_seqlock.m_cells[0].m_data);
m_std_shared_mutex.unlock();

#endif
}

bool read()
{
#if ! defined (CT_TEST_SHARED_MUTEX)
ct_data data = m_seqlock.read();
return data.validate();
#else
m_std_shared_mutex.lock_shared();
ct_data data = m_seqlock.m_cells[0].m_data;
m_std_shared_mutex.unlock_shared();
return data.validate();
#endif
}
};





void
ct_thread(ct_shared_state& shared)
{
for (unsigned long i = 0; i < CT_ITERS; ++i)
{
if (! shared.read()) break;
if (! shared.read()) break;
shared.write();
if (! shared.read()) break;
shared.write();
if (! shared.read()) break;
if (! shared.read()) break;
}
}



int main()
{
std::cout << "Quick and Dirty SeqLock Experiment ver:(0.0.0)\n";
std::cout << "By: Chris M. Thomasson\n";
std::cout << "________________________________________\n";
std::cout << "Threads = " << CT_THREADS << "\n";
std::cout << "Iterations = " << CT_ITERS << "\n";
std::cout << "SeqLock Cells = " << CT_SEQLOCK_CELL << "\n";

#if defined (CT_LOG)
std::cout << "Logging is on\n";
#endif

#if defined (CT_TEST_SHARED_MUTEX)
std::cout << "Using a shared mutex\n";
#endif


std::cout << "\nLaunching...\n";
std::cout.flush();

ct_shared_state shared_state;

ct_time timer;

{
std::thread threads[CT_THREADS];

for (unsigned int i = 0; i < CT_THREADS; ++i)
{
threads[i] = std::thread(ct_thread, std::ref(shared_state));
}

std::cout << "Running...\n";
std::cout.flush();

for (unsigned int i = 0; i < CT_THREADS; ++i)
{
threads[i].join();
}
}

double time_elapsed = timer.elapsed();

std::cout << "Complete!\n";
std::cout << "Time = " << time_elapsed << "\n";
std::cout << "________________________________________\n";


if (shared_state.m_seqlock.validate())
{
std::cout << "DATA IS COHERENT!!! :^D\n\n\n";
}

else
{
std::cout << "DATA IS __FOOBAR__! God DAMN IT!!!! ;^o\n\n\n";
}

#if defined CT_LOG
unsigned long log_read_spins =
g_log_read_spins.load(std::memory_order_relaxed);
std::cout << "CT_LOG: log_read_spins = " << log_read_spins << "\n";
#endif

return 0;
}
___________________________________

Bonita Montero

Mar 2, 2022, 1:14:53 PM
Seqlocks don't work in userspace since the writer can easily be
scheduled away:

https://stackoverflow.com/questions/71305278/do-make-seq-locks-sense-in-userspace?noredirect=1#comment126058361_71305278

You're really, really stupid.

Chris M. Thomasson

Mar 2, 2022, 1:24:59 PM
Yet, it does work in user-space...

Bonita Montero

Mar 2, 2022, 1:31:02 PM
No, it doesn't make sense in userspace as the writer can't disable
the scheduler. Having starving readers because a writer is scheduled
away isn't tolerable.

Chris M. Thomasson

Mar 2, 2022, 1:35:21 PM
On 3/2/2022 1:34 AM, Chris M. Thomasson wrote:
> Here is a crude little experiment I wrote that benchmarks a seqlock vs a
> std::shared_mutex. To test against the shared_mutex just uncomment the
> CT_TEST_SHARED_MUTEX macro.
[...]

> // A seqlock data cell
> struct ct_cell
> {
>     std::atomic<unsigned long> m_ver;
>     ct_data m_data;
>
>     ct_cell() : m_ver(0) {}
> };

This can be optimized by making ct_cell aligned on, and padded up to, an
L2 cacheline. alignas should work like a charm.
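
Something like this is what I have in mind; just a sketch, and the 64-byte
figure is an assumption for a typical x86 cacheline, not a measured value:

// assume a 64-byte cacheline; pad each cell out to its own line(s) to
// avoid false sharing between neighboring cells...
struct alignas(64) ct_cell
{
    std::atomic<unsigned long> m_ver;
    ct_data m_data;

    ct_cell() : m_ver(0) {}
};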

[...]


> // The main impl...
> struct ct_seqlock
> {
>     std::mutex m_mutex;
>     std::atomic<ct_cell*> m_cur;
>     ct_cell m_cells[CT_SEQLOCK_CELL];

ditto. m_cells aligned on a cacheline boundary.


>     ct_seqlock() : m_cur(m_cells) {}
[...]


There are other optimizations I can make here...

Chris M. Thomasson

Mar 2, 2022, 1:38:24 PM
Heavy write activity can adversely affect readers. However, this is
meant to be used in a read-mostly, write-rarely scenario. Just like a
read/write mutex, it's specialized for a certain usage pattern.

Now, the readers don't actually have to spin; we can do something else...
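
For instance, a reader could give up after a bounded number of tries and
fall back to the writer-side mutex. A rough sketch against my ct_seqlock
above, where ct_seqlock_read_or_lock is a hypothetical helper that is not
in the posted code:

// Hypothetical fallback reader: try the seqlock path a few times, then
// serialize through the writer mutex instead of spinning indefinitely.
ct_data ct_seqlock_read_or_lock(ct_seqlock& sl, unsigned max_tries = 8)
{
    for (unsigned t = 0; t < max_tries; ++t)
    {
        ct_cell* cell = sl.m_cur.load(std::memory_order_acquire);

        unsigned long ver0 = cell->m_ver.load(std::memory_order_acquire);
        if (ver0 & 1) { std::this_thread::yield(); continue; }

        ct_data data = cell->m_data;
        std::atomic_thread_fence(std::memory_order_acquire);

        if (cell->m_ver.load(std::memory_order_relaxed) == ver0) return data;
    }

    // Too much write pressure; block alongside the writers instead.
    std::lock_guard<std::mutex> lock(sl.m_mutex);
    return sl.m_cur.load(std::memory_order_relaxed)->m_data;
}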

Bonita Montero

Mar 2, 2022, 1:44:46 PM
No one will use this stupid userland "seqlock" because of what I said.

Bonita Montero

Mar 2, 2022, 1:48:12 PM
Seqlocks don't work in userspace because of what I said.
You seem as manic as Aminer.

Chris M. Thomasson

Mar 2, 2022, 1:59:20 PM
So be it. I am interested in people testing it out.

Bonita Montero

Mar 2, 2022, 2:01:54 PM
If the people aren't stupid they understand that this
code is useless because of what I said.

Chris M. Thomasson

Mar 2, 2022, 2:08:45 PM
You are their God that they must follow? Oh, I did not know that, sorry.
A seqlock in userland can be useful.

Bonita Montero

Mar 2, 2022, 2:20:53 PM
Spinning for a writer that's scheduled away isn't tolerable.
Anyone can see that except you.
A seqlock in userland is never useful because of this drawback.
According to your Siggraph profile you call yourself a computer
scientist - you're really joking.

red floyd

Mar 2, 2022, 2:28:33 PM
On 3/2/2022 10:47 AM, Bonita Montero wrote:

> Seqlocks don't work in userspace because of what I said.
> You seem as manic as Aminer.

All hail Bonita, the glorious ruler of c.l.c++

And since I believe your native language isn't English,
let me make this clear. That was sarcasm.

Chris M. Thomasson

Mar 2, 2022, 2:29:26 PM
Ever heard of an adaptive rw-mutex? Readers can choose to spin for a
while when there is a writer in the critical section.

> A seqlock in userland is never useful because of this drawback.
> Acoording to your Siggrap-profile you call yourself a computer
> scientist - you're really joking.

You mean me over on:

http://siggrapharts.ning.com/photo/heavenly-visions

http://siggrapharts.ning.com/profile/ChrisMThomasson

?

Chris M. Thomasson

Mar 2, 2022, 2:32:06 PM
On 3/2/2022 11:20 AM, Bonita Montero wrote:
> Am 02.03.2022 um 20:08 schrieb Chris M. Thomasson:
>> On 3/2/2022 11:01 AM, Bonita Montero wrote:
>>> Am 02.03.2022 um 19:44 schrieb Bonita Montero:
>>>> Am 02.03.2022 um 19:37 schrieb Chris M. Thomasson:
>>>>> On 3/2/2022 10:30 AM, Bonita Montero wrote:
>>>>>> Am 02.03.2022 um 19:24 schrieb Chris M. Thomasson:
>>>>>>> On 3/2/2022 10:14 AM, Bonita Montero wrote:
[...]
> Acoording to your Siggrap-profile you call yourself a computer
> scientist - you're really joking.

If you're into fractals, check this out... My friend kindly dedicated some
of his server space for it:

http://paulbourke.net/fractals/multijulia

I developed this way back, around 13 years ago.

Bonita Montero

Mar 2, 2022, 2:39:06 PM
We're talking about seqlocks.

Chris M. Thomasson

Mar 2, 2022, 2:40:46 PM
On 3/2/2022 11:38 AM, Bonita Montero wrote:
> Am 02.03.2022 um 20:28 schrieb Chris M. Thomasson:
>> On 3/2/2022 11:20 AM, Bonita Montero wrote:
[...]
>>>>> If the people aren't stupid they understand that this
>>>>> code is useless because of what I said.
>>>>>
>>>>
>>>> You are their God that they must follow? OH, I did not know that,
>>>> sorry. seqlock in userland can be useful.
>>>
>>> Spinning for a writer that's scheduled away isn't tolerable.
>>> Anyone can see that except you.
>>
>> Ever heard of an adaptive rw-mutex? Readers can choose to spin for a
>> while when there is a writer in the critical section.
>
> We're talking about seqlocks.

I know. But a rw-mutex can also choose to spin for a writer that's
scheduled away, as you say. You are missing the point...
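
Something like this captures the adaptive idea with a plain
std::shared_mutex, just as an illustration:

// Spin briefly on try_lock_shared, then fall back to a blocking
// lock_shared if the writer holds on for too long.
#include <shared_mutex>
#include <thread>

void adaptive_lock_shared(std::shared_mutex& m, unsigned max_spins = 100)
{
    for (unsigned i = 0; i < max_spins; ++i)
    {
        if (m.try_lock_shared()) return;
        std::this_thread::yield();
    }

    m.lock_shared(); // give up spinning and block
}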

Bonita Montero

Mar 2, 2022, 2:57:49 PM
I've developed such a rw-mutex on my own - with the calculation of the
spinning-interval taken from the glibc:

// -------------------- .h

#pragma once
#include <cstdint>
#include <cassert>
#include <thread>
#include <new>
#include <atomic>
#include <stdexcept>
#include <exception>
#include <limits>
#include <cstdint>
#include <semaphore>
#include "thread_id.h"
#include "msvc-disabler.h"

#if defined(__llvm__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdangling-else"
#endif

struct wbsm_exception : public std::exception
{
enum reason : std::uint8_t
{
SHARER_COUNTER_SATURATED = 1,
WATING_EXCLUSIVE_COUNTER_SATURATED,
RECURSION_COUNTER_SATURATED,
RECURSION_COUNTER_NOT_ZERO,
INVALID_THREAD_ID
};
wbsm_exception() = delete;
wbsm_exception( reason r, char const *what );
reason get_reason();
virtual
char const *what() const noexcept;
private:
reason m_reason;
char const *m_what;
};

inline
wbsm_exception::wbsm_exception( reason r, char const *what ) :
m_reason( r ),
m_what( what )
{
}

inline
wbsm_exception::reason wbsm_exception::get_reason()
{
return m_reason;
}

static_assert(std::atomic<std::uint64_t>::is_always_lock_free,
"std::uint64_t must be lock-free");

struct alignas(64)
wbias_shared_mutex
{
wbias_shared_mutex( std::int16_t maxExclusiveSpinCount = 100,
std::int16_t maxSharedSpinCount = 100 );
wbias_shared_mutex( wbias_shared_mutex const & ) = delete;
void operator =( wbias_shared_mutex const & ) = delete;
~wbias_shared_mutex();
void lock_shared();
bool try_lock_shared();
void unlock_shared();
void shared_to_exclusive( thread_id const &threadId = thread_id::self() );
void lock_exclusive( thread_id const &threadId = thread_id::self() );
bool try_lock_exclusive( thread_id const &threadId = thread_id::self() );
bool lock_preferred_shared( thread_id const &threadId =
thread_id::self() );
void unlock_exclusive( thread_id const &threadId = thread_id::self() );
void exclusive_to_shared( bool force = false, thread_id const &threadId
= thread_id::self() );
bool is_shared();
bool is_exclusive();
bool we_are_exclusive( thread_id const &threadId = thread_id::self() );
thread_id get_exclusive_thread_id();
std::uint32_t get_exclusive_recursion_count();
std::int16_t max_exclusive_spin_count( std::int16_t max );
std::int16_t max_shared_spin_count( std::int16_t max );
private:
static_assert(std::atomic<std::uint64_t>::is_always_lock_free,
"std::atomic<std::uint64_t> must be always lock-free");
std::atomic<std::uint64_t> m_atomic;
thread_id m_exclusiveThreadId;
std::uint32_t m_exclusiveRecursionCount;
std::int16_t m_exclusiveSpinCount, m_sharedSpinCount; // adaptive spinning taken from glibc
std::int16_t m_maxExclusiveSpinCount, m_maxSharedSpinCount;
std::counting_semaphore<((unsigned)-1 >> 1)> m_releaseSharedSem;
std::binary_semaphore m_releaseExclusiveSem;

static unsigned const WAITING_SHARERS_BASE = 21, WAITING_EXCLUSIVE_BASE
= 42, EXCLUSIVE_FLAG_BASE = 63;
static std::uint64_t const MASK21 = 0x1FFFFFu;
static std::uint64_t const SHARERS_MASK = MASK21, WAITING_SHARERS_MASK
= MASK21 << WAITING_SHARERS_BASE, WAITING_EXCLUSIVE_MASK = MASK21 <<
WAITING_EXCLUSIVE_BASE, EXCLUSIVE_FLAG_MASK = (std::uint64_t)1 <<
EXCLUSIVE_FLAG_BASE;
static std::uint64_t const SHARER_VALUE = (std::uint64_t)1,
WAITING_SHARERS_VALUE = (std::uint64_t)1 << WAITING_SHARERS_BASE,
WAITING_EXCLUSIVE_VALUE = (std::uint64_t)1 << WAITING_EXCLUSIVE_BASE;
static short const DEFAULT_MAX_SPIN_COUNT = 100;
#if !defined(NDEBUG)
static bool check( std::uint64_t flags );
#endif
bool internal_spin_lock_shared( std::uint64_t &cmp );
void internal_lock_shared( uint64_t cmp );
bool internal_spin_lock_exclusive( std::uint64_t &cmp, thread_id const
&threadId );
void cpu_pause();
};

inline
bool wbias_shared_mutex::is_shared()
{
return m_atomic.load( std::memory_order_relaxed ) & SHARERS_MASK;
}

inline
bool wbias_shared_mutex::is_exclusive()
{
return m_atomic.load( std::memory_order_relaxed ) & EXCLUSIVE_FLAG_MASK;
}

inline
bool wbias_shared_mutex::we_are_exclusive( thread_id const &threadId )
{
return (m_atomic.load( std::memory_order_relaxed ) &
EXCLUSIVE_FLAG_MASK) && m_exclusiveThreadId == threadId;
}

inline
thread_id wbias_shared_mutex::get_exclusive_thread_id()
{
//std::atomic_thread_fence( std::memory_order_acquire );
return m_exclusiveThreadId;
}

inline
std::uint32_t wbias_shared_mutex::get_exclusive_recursion_count()
{
return m_exclusiveRecursionCount;
}


struct wbsm_lock
{
enum : std::uint8_t
{
UNLOCKED,
SHARED,
EXLUSIVE
};
wbsm_lock();
wbsm_lock( wbsm_lock const &other );
wbsm_lock( wbsm_lock &&other ) noexcept;
wbsm_lock( wbias_shared_mutex &mtx, bool lockShared = true, thread_id
const &threadId = thread_id::self() );
~wbsm_lock();
wbsm_lock &operator =( wbsm_lock const &other );
wbsm_lock &operator =( wbsm_lock &&other ) noexcept;
void lock_shared( wbias_shared_mutex &mtx );
void shared_to_exclusive( thread_id const &threadId = thread_id::self() );
void exclusive_to_shared( bool force = false );
void lock_exclusive( wbias_shared_mutex &mtx, thread_id const &threadId
= thread_id::self() );
void unlock();
void lock_preferred_shared( wbias_shared_mutex &mtx, thread_id const
&threadId = thread_id::self() );
bool is_locked() const;
bool is_shared() const;
bool is_exclusive() const;
thread_id get_exclusive_thread_id() const;
wbias_shared_mutex *get_mutex() const;
std::uint8_t get_state() const;
void set_state( std::uint8_t state, wbias_shared_mutex &mtx );
private:
wbias_shared_mutex *m_mtx;
bool m_isShared;
thread_id m_threadId;
};

inline
wbsm_lock::wbsm_lock()
{
m_mtx = nullptr;
}

// may throw wbsm_exception if mutex-counter saturates

inline
wbsm_lock::wbsm_lock( wbsm_lock const &other ) :
m_mtx( other.m_mtx ),
m_isShared( other.m_isShared )
{
if( m_isShared )
m_mtx->lock_shared();
else
m_mtx->lock_exclusive( other.m_threadId ),
m_threadId = m_mtx->get_exclusive_thread_id();
}

inline
wbsm_lock::wbsm_lock( wbsm_lock &&other ) noexcept :
m_mtx( other.m_mtx ),
m_isShared( other.m_isShared ),
m_threadId( other.m_threadId )
{
other.m_mtx = nullptr;
#if !defined(NDEBUG)
other.m_threadId = thread_id();
#endif
}

// may throw wbsm_exception if mutex-counter saturates

inline
wbsm_lock::wbsm_lock( wbias_shared_mutex &mtx, bool lockShared,
thread_id const &threadId )
{
if( lockShared ) // [[likely]]
mtx.lock_shared();
else
mtx.lock_exclusive( threadId ),
m_threadId = mtx.get_exclusive_thread_id();
m_mtx = &mtx;
m_isShared = lockShared;
}

inline
wbsm_lock::~wbsm_lock()
{
if( !m_mtx )
return;
if( m_isShared ) [[likely]]
m_mtx->unlock_shared();
else
m_mtx->unlock_exclusive( m_threadId );
}

// may throw wbsm_exception if mutex-counter saturates

inline
wbsm_lock &wbsm_lock::operator =( wbsm_lock const &other )
{
if( m_mtx == other.m_mtx )
return *this;
unlock();
if( !other.m_mtx ) [[unlikely]]
return *this;
if( other.m_isShared )
other.m_mtx->lock_shared();
else
other.m_mtx->lock_exclusive( other.m_threadId ),
m_threadId = other.m_threadId;
m_mtx = other.m_mtx;
m_isShared = other.m_isShared;
return *this;
}

inline
wbsm_lock &wbsm_lock::operator =( wbsm_lock &&other ) noexcept
{
if( m_mtx == other.m_mtx )
{
other.unlock();
return *this;
}
unlock();
m_mtx = other.m_mtx;
m_isShared = other.m_isShared;
m_threadId = other.m_threadId;
other.m_mtx = nullptr;
#if !defined(NDEBUG)
other.m_threadId = thread_id();
#endif
return *this;
}

// may throw wbsm_exception if mutex-counter saturates

inline
void wbsm_lock::lock_shared( wbias_shared_mutex &mtx )
{
if( m_mtx == &mtx )
if( m_isShared ) [[likely]]
return;
else
{
m_mtx->exclusive_to_shared( false, m_threadId );
m_isShared = true;
#if !defined(NDEBUG)
m_threadId = thread_id();
#endif
return;
}
unlock();
mtx.lock_shared();
m_mtx = &mtx;
m_isShared = true;
#if !defined(NDEBUG)
m_threadId = thread_id();
#endif
}

// may throw wbsm_exception if mutex-counter saturates

inline
void wbsm_lock::shared_to_exclusive( thread_id const &threadId )
{
assert(m_mtx);
if( m_isShared )
m_mtx->shared_to_exclusive( threadId ),
m_isShared = false;
m_threadId = threadId;
}

// may throw wbsm_exception if mutex-counter saturates

inline
void wbsm_lock::exclusive_to_shared( bool force )
{
assert(m_mtx);
if( m_isShared )
return;
m_mtx->exclusive_to_shared( force, m_threadId );
m_isShared = true;
#if !defined(NDEBUG)
m_threadId = thread_id();
#endif
}

// may throw wbsm_exception if mutex-counter saturates

inline
void wbsm_lock::lock_exclusive( wbias_shared_mutex &mtx, thread_id const
&threadId )
{
if( m_mtx == &mtx )
if( !m_isShared )
return;
else
{
m_mtx->shared_to_exclusive( threadId );
m_isShared = false;
m_threadId = threadId;
return;
}
unlock();
mtx.lock_exclusive( threadId );
m_mtx = &mtx;
m_isShared = false;
m_threadId = threadId;
}

inline
void wbsm_lock::unlock()
{
if( !m_mtx ) [[unlikely]]
return;
if( m_isShared ) [[likely]]
m_mtx->unlock_shared();
else
m_mtx->unlock_exclusive( m_threadId );
m_mtx = nullptr;
}

// may throw wbsm_exception if mutex-counter saturates

inline
void wbsm_lock::lock_preferred_shared( wbias_shared_mutex &mtx,
thread_id const &threadId )
{
if( m_mtx != &mtx )
m_isShared = mtx.lock_preferred_shared( threadId ),
m_mtx = &mtx;
m_threadId = threadId;
}

inline
bool wbsm_lock::is_locked() const
{
return m_mtx != nullptr;
}

inline
bool wbsm_lock::is_shared() const
{
assert(m_mtx);
return m_isShared;
}

inline
bool wbsm_lock::is_exclusive() const
{
assert(m_mtx);
return !m_isShared;
}

inline
thread_id wbsm_lock::get_exclusive_thread_id() const
{
return m_threadId;
}

inline
std::uint8_t wbsm_lock::get_state() const
{
if( !m_mtx ) [[unlikely]]
return wbsm_lock::UNLOCKED;
if( m_isShared ) [[likely]]
return wbsm_lock::SHARED;
else
return wbsm_lock::EXLUSIVE;
}

inline
wbias_shared_mutex *wbsm_lock::get_mutex() const
{
return m_mtx;
}

#if defined(__llvm__)
#pragma clang diagnostic pop
#endif

// -------------------- .cpp


#include <utility>
#include "wbias_shared_mutex.h"
#include "xassert.h"
#include "exc_encapsulate.h"
#if defined(_MSC_VER)
#include <intrin.h>
#elif defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
#include <immintrin.h>
#endif
#if defined(__llvm__)
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdangling-else"
#endif

char const *wbsm_exception::what() const noexcept
{
return m_what;
}

#if !defined(NDEBUG)
inline
bool wbias_shared_mutex::check( std::uint64_t flags )
{
unsigned
sharers = (unsigned)(flags & MASK21),
waitingExclusive = (unsigned)((flags >> WAITING_EXCLUSIVE_BASE) & MASK21),
exclusiveFlag = (unsigned)((flags >> EXCLUSIVE_FLAG_BASE) & 1);
if( sharers && exclusiveFlag )
return false;
if( waitingExclusive && !exclusiveFlag && !sharers )
return false;
return true;
}
#endif

wbias_shared_mutex::wbias_shared_mutex( std::int16_t
maxExclusiveSpinCount, std::int16_t maxSharedSpinCount ) :
m_atomic( 0 ),
m_exclusiveThreadId(),
m_exclusiveSpinCount( 0 ),
m_sharedSpinCount( 0 ),
m_maxExclusiveSpinCount( maxExclusiveSpinCount ),
m_maxSharedSpinCount( maxSharedSpinCount ),
m_releaseSharedSem( 0 ),
m_releaseExclusiveSem( 0 )
{
}

wbias_shared_mutex::~wbias_shared_mutex()
{
assert(m_atomic == 0);
}

// throws wbsm_exception if waiting sharers counter saturates

void wbias_shared_mutex::lock_shared()
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_relaxed );
if( internal_spin_lock_shared( cmp ) )
return;
internal_lock_shared( cmp );
}

inline
bool wbias_shared_mutex::internal_spin_lock_shared( std::uint64_t &cmp )
{
using namespace std;
if( m_maxSharedSpinCount <= 0 )
return false;
int32_t maxSpin = (int32_t)m_sharedSpinCount * 2 + 10;
maxSpin = maxSpin <= m_maxSharedSpinCount ? maxSpin : m_maxSharedSpinCount;
int32_t spinCount = 0;
bool locked = false;
do
{
assert(check( cmp ));
if( cmp & EXCLUSIVE_FLAG_MASK )
{
cpu_pause();
cmp = m_atomic.load( memory_order_relaxed );
continue;
}
if( (cmp & SHARERS_MASK) == SHARERS_MASK )
return false;
if( m_atomic.compare_exchange_weak( cmp, cmp + SHARER_VALUE,
memory_order_acquire, memory_order_relaxed ) )
{
locked = true;
break;
}
cpu_pause();
} while( ++spinCount < maxSpin );
m_sharedSpinCount += ((int16_t)spinCount - m_sharedSpinCount) / 8;
return locked;
}

// throws wbsm_exception if waiting sharers counter saturates

inline
void wbias_shared_mutex::internal_lock_shared( uint64_t cmp )
{
using namespace std;
for( ; ; )
{
assert(check( cmp ));
if( (cmp & (EXCLUSIVE_FLAG_MASK | WAITING_EXCLUSIVE_MASK)) == 0 )
{
if( (cmp & SHARERS_MASK) == SHARERS_MASK )
throw wbsm_exception( wbsm_exception::SHARER_COUNTER_SATURATED,
"wbsm-lock sharer-counter saturated" );
if( m_atomic.compare_exchange_weak( cmp, cmp + SHARER_VALUE,
memory_order_acquire, memory_order_relaxed ) )
return;
}
else
{
if( (cmp & SHARERS_MASK) + ((cmp & WAITING_SHARERS_MASK) >>
WAITING_SHARERS_BASE) >= SHARERS_MASK )
throw wbsm_exception( wbsm_exception::SHARER_COUNTER_SATURATED,
"wbsm-lock sharer-counter saturated" );
if( m_atomic.compare_exchange_weak( cmp, cmp + WAITING_SHARERS_VALUE,
memory_order_relaxed, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseSharedSem.acquire(); } );
return;
}
}
}
}

// throws wbsm_exception if recursion-counter saturates

inline
bool wbias_shared_mutex::internal_spin_lock_exclusive( std::uint64_t
&cmp, thread_id const &threadId )
{
using namespace std;
assert(check( cmp ));
if( (cmp & EXCLUSIVE_FLAG_MASK) && m_exclusiveThreadId == threadId )
{
if( m_exclusiveRecursionCount == numeric_limits<uint32_t>::max() )
throw wbsm_exception( wbsm_exception::RECURSION_COUNTER_SATURATED,
"wbsm-lock recursion-counter saturated" );
++m_exclusiveRecursionCount;
return true;
}
if( m_maxExclusiveSpinCount <= 0 )
return false;
int32_t maxSpin = (int32_t)m_exclusiveSpinCount * 2 + 10;
maxSpin = maxSpin <= m_maxExclusiveSpinCount ? maxSpin :
m_maxExclusiveSpinCount;
int32_t spinCount = 0;
bool locked = false;
do
{
assert(check( cmp ));
cmp = 0;
if( m_atomic.compare_exchange_weak( cmp, EXCLUSIVE_FLAG_MASK,
memory_order_acquire, memory_order_relaxed ) )
{
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
locked = true;
break;
}
cpu_pause();
} while( ++spinCount < maxSpin );
m_exclusiveSpinCount += ((int16_t)spinCount - m_exclusiveSpinCount) / 8;
return locked;
}

bool wbias_shared_mutex::try_lock_shared()
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_relaxed );
return internal_spin_lock_shared( cmp );
}

void wbias_shared_mutex::unlock_shared()
{
using namespace std;
for( uint64_t cmp = m_atomic.load( memory_order_relaxed ); ; )
{
assert(check( cmp ));
assert((cmp & SHARERS_MASK) >= SHARER_VALUE);
if( (cmp & WAITING_EXCLUSIVE_MASK) == 0 || (cmp & SHARERS_MASK) !=
SHARER_VALUE )
{
if( m_atomic.compare_exchange_weak( cmp, cmp - SHARER_VALUE,
memory_order_relaxed, memory_order_relaxed ) )
return;
}
else
{
assert(!(cmp & EXCLUSIVE_FLAG_MASK));
if( m_atomic.compare_exchange_weak( cmp, (cmp - SHARER_VALUE -
WAITING_EXCLUSIVE_VALUE) | EXCLUSIVE_FLAG_MASK, memory_order_relaxed,
memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseExclusiveSem.release( 1 ); } );
return;
}
}
}
}

// throws wbsm_exception if waiting exclusive counter saturates

void wbias_shared_mutex::shared_to_exclusive( thread_id const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_relaxed );
if( m_maxExclusiveSpinCount > 0 )
{
int32_t maxSpin = (int32_t)m_exclusiveSpinCount * 2 + 10;
maxSpin = maxSpin <= m_maxExclusiveSpinCount ? maxSpin :
m_maxExclusiveSpinCount;
int32_t spinCount = 0;
bool locked = false;
do
{
assert(check( cmp ));
assert((cmp & SHARERS_MASK) >= SHARER_VALUE);
cmp = (cmp & ~SHARERS_MASK) | 1;
if( m_atomic.compare_exchange_weak( cmp, (cmp & ~SHARERS_MASK) |
EXCLUSIVE_FLAG_MASK, memory_order_acquire, memory_order_relaxed ) )
{
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
locked = true;
break;
}
cpu_pause();
} while( ++spinCount < maxSpin );
m_exclusiveSpinCount += ((int16_t)spinCount - m_exclusiveSpinCount) / 8;
if( locked )
return;
}
for( ; ; )
{
assert(check( cmp ));
assert((cmp & SHARERS_MASK) >= SHARER_VALUE);
if( (cmp & SHARERS_MASK) == SHARER_VALUE )
if( m_atomic.compare_exchange_weak( cmp, (cmp - SHARER_VALUE) |
EXCLUSIVE_FLAG_MASK, memory_order_acquire, memory_order_relaxed ) )
{
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
return;
}
else;
else
{
if( (cmp & WAITING_EXCLUSIVE_MASK) == WAITING_EXCLUSIVE_MASK )
throw wbsm_exception(
wbsm_exception::WATING_EXCLUSIVE_COUNTER_SATURATED,
"wbsm-lock waiting-exclusive-count saturated" );
if( m_atomic.compare_exchange_weak( cmp, cmp - SHARER_VALUE +
WAITING_EXCLUSIVE_VALUE, memory_order_relaxed, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseExclusiveSem.acquire(); } );
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
return;
}
}
}
}

// throws wbsm_exception if waiting exclusive counter saturates

void wbias_shared_mutex::lock_exclusive( thread_id const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_acquire );
if( internal_spin_lock_exclusive( cmp, threadId ) )
return;
for( ; ; )
{
assert(check( cmp ));
if( (cmp & (EXCLUSIVE_FLAG_MASK | SHARERS_MASK)) == 0 )
if( m_atomic.compare_exchange_weak( cmp, cmp | EXCLUSIVE_FLAG_MASK,
memory_order_acquire, memory_order_relaxed ) )
{
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
return;
}
else;
else
{
if( (cmp & WAITING_EXCLUSIVE_MASK) == WAITING_EXCLUSIVE_MASK )
throw wbsm_exception(
wbsm_exception::WATING_EXCLUSIVE_COUNTER_SATURATED,
"wbsm-lock waiting-waiters-counter saturated" );
if( m_atomic.compare_exchange_weak( cmp, cmp +
WAITING_EXCLUSIVE_VALUE, memory_order_relaxed, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseExclusiveSem.acquire(); } );
m_exclusiveThreadId = threadId;
m_exclusiveRecursionCount = 0;
return;
}
}
}
}



bool wbias_shared_mutex::try_lock_exclusive( thread_id const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_acquire );
return internal_spin_lock_exclusive( cmp, threadId );
}

// throws wbsm_exception if recursion-counter saturates
// throws wbsm_exception if waiting sharers counter saturates

bool wbias_shared_mutex::lock_preferred_shared( thread_id const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_acquire );
assert(check( cmp ));
if( (cmp & EXCLUSIVE_FLAG_MASK) && m_exclusiveThreadId == threadId )
{
if( m_exclusiveRecursionCount == numeric_limits<uint32_t>::max() )
throw wbsm_exception( wbsm_exception::RECURSION_COUNTER_SATURATED,
"wbsm-lock recursion-counter saturated" );
++m_exclusiveRecursionCount;
return false;
}
if( internal_spin_lock_shared( cmp ) )
return true;
internal_lock_shared( cmp );
return true;
}

void wbias_shared_mutex::unlock_exclusive( thread_id const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( memory_order_acquire );
assert(check( cmp ));
assert((cmp & EXCLUSIVE_FLAG_MASK) && m_exclusiveThreadId == threadId);
if( (cmp & EXCLUSIVE_FLAG_MASK) && m_exclusiveRecursionCount &&
m_exclusiveThreadId == threadId )
{
--m_exclusiveRecursionCount;
return;
}
m_exclusiveThreadId = thread_id();
for( ; ; )
{
assert(check( cmp ));
assert(cmp & EXCLUSIVE_FLAG_MASK);
if( (cmp & WAITING_EXCLUSIVE_MASK) != 0 )
if( m_atomic.compare_exchange_weak( cmp, cmp -
WAITING_EXCLUSIVE_VALUE, memory_order_release, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseExclusiveSem.release( 1 ); } );
return;
}
else
continue;
if( (cmp & WAITING_SHARERS_MASK) != 0 )
{
uint64_t waitingSharers = cmp & WAITING_SHARERS_MASK,
wakeups = waitingSharers >> WAITING_SHARERS_BASE;
if( m_atomic.compare_exchange_weak( cmp, (cmp & ~EXCLUSIVE_FLAG_MASK)
- waitingSharers + wakeups, memory_order_release, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseSharedSem.release(
(ptrdiff_t)wakeups ); } );
return;
}
else
continue;
}
if( m_atomic.compare_exchange_weak( cmp, 0, memory_order_release,
memory_order_relaxed ) )
return;
}
}

// may throw wbsm_exception (lock remains exclusive)

void wbias_shared_mutex::exclusive_to_shared( bool force, thread_id
const &threadId )
{
using namespace std;
uint64_t cmp = m_atomic.load( std::memory_order_relaxed );
assert((cmp & EXCLUSIVE_FLAG_MASK));
if( m_exclusiveThreadId != threadId )
throw wbsm_exception( wbsm_exception::INVALID_THREAD_ID,
"wbsm-lock, trying to switch from a foreign thread id into shared mode" );
if( m_exclusiveRecursionCount )
throw wbsm_exception( wbsm_exception::RECURSION_COUNTER_NOT_ZERO,
"wbsm-lock, trying to switch into shared mode with recursion count != 0" );
assert((cmp & EXCLUSIVE_FLAG_MASK) && m_exclusiveRecursionCount == 0);
if( m_maxSharedSpinCount > 0 )
{
int32_t maxSpin = (int32_t)m_sharedSpinCount * 2 + 10;
maxSpin = maxSpin <= m_maxSharedSpinCount ? maxSpin :
m_maxSharedSpinCount;
int32_t spinCount = 0;
bool setSpinCount = true;
do
{
assert(check( cmp ));
//uint64_t wakeups = ((cmp & WAITING_SHARERS_MASK) >> WAITING_SHARERS_BASE) + (cmp & SHARERS_MASK);
uint64_t wakeups = (cmp & WAITING_SHARERS_MASK) >> WAITING_SHARERS_BASE;
if( wakeups == SHARERS_MASK )
{
setSpinCount = false;
break;
}
if( !force )
cmp &= ~WAITING_EXCLUSIVE_MASK;
if( m_atomic.compare_exchange_weak( cmp, (cmp & ~EXCLUSIVE_FLAG_MASK)
+ wakeups + SHARER_VALUE,
memory_order_release,
memory_order_relaxed ) )
{
if( wakeups )
exc_terminate_or_spin( [&]() { m_releaseSharedSem.release(
(ptrdiff_t)wakeups ); } );
m_sharedSpinCount += ((int16_t)spinCount - m_sharedSpinCount) / 8;
return;
}
cpu_pause();
} while( ++spinCount < maxSpin );
if( setSpinCount )
m_sharedSpinCount += ((int16_t)spinCount - m_sharedSpinCount) / 8;
}
thread_id recoverThreadId = m_exclusiveThreadId;
m_exclusiveThreadId = thread_id();
for( ; ; )
{
assert(check( cmp ));
uint64_t waitingSharers = cmp & WAITING_SHARERS_MASK,
wakeups = waitingSharers >> WAITING_SHARERS_BASE;
if( !(cmp & WAITING_EXCLUSIVE_MASK) || force )
{
if( wakeups == SHARERS_MASK )
{
m_exclusiveThreadId = recoverThreadId;
throw wbsm_exception( wbsm_exception::SHARER_COUNTER_SATURATED,
"wbsm-lock sharer-counter saturated" );
}
if( m_atomic.compare_exchange_weak( cmp, (cmp & ~EXCLUSIVE_FLAG_MASK)
- waitingSharers + wakeups + SHARER_VALUE, memory_order_release,
memory_order_relaxed ) )
{
if( wakeups )
exc_terminate_or_spin( [&]() { m_releaseSharedSem.release(
(ptrdiff_t)wakeups ); } );
return;
}
else
continue;
}
if( wakeups == SHARERS_MASK )
{
m_exclusiveThreadId = recoverThreadId;
throw wbsm_exception( wbsm_exception::SHARER_COUNTER_SATURATED,
"wbsm-lock sharer-counter saturated" );
}
if( m_atomic.compare_exchange_weak( cmp, cmp - WAITING_EXCLUSIVE_VALUE
+ WAITING_SHARERS_VALUE, memory_order_release, memory_order_relaxed ) )
{
exc_terminate_or_spin( [&]() { m_releaseExclusiveSem.release( 1 ); } );
exc_terminate_or_spin( [&]() { m_releaseSharedSem.acquire(); });
return;
}
}
}

inline
void wbias_shared_mutex::cpu_pause()
{
#if defined(_MSC_VER) || defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
_mm_pause();
#else
#error "need platform-specific pause-instruction"
#endif
}

std::int16_t wbias_shared_mutex::max_exclusive_spin_count( std::int16_t
max )
{
std::swap( max, m_maxExclusiveSpinCount );
return max;
}

std::int16_t wbias_shared_mutex::max_shared_spin_count( std::int16_t max )
{
std::swap( max, m_maxSharedSpinCount );
return max;
}

void wbsm_lock::set_state( std::uint8_t state, wbias_shared_mutex &mtx )
{
xassert(state == wbsm_lock::UNLOCKED || state == wbsm_lock::SHARED ||
state == wbsm_lock::EXLUSIVE);
switch( state )
{
case wbsm_lock::UNLOCKED:
unlock();
break;
case wbsm_lock::SHARED:
lock_shared( mtx );
break;
case wbsm_lock::EXLUSIVE:
lock_exclusive( mtx );
break;
}
}
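
Usage looks roughly like this; a hedged sketch, where g_mtx and g_value
are placeholder names:

wbias_shared_mutex g_mtx;
int g_value = 0;

int read_value()
{
    wbsm_lock lock( g_mtx, true );   // shared
    return g_value;
}

void write_value( int v )
{
    wbsm_lock lock( g_mtx, false );  // exclusive
    g_value = v;
}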

Chris M. Thomasson

Mar 2, 2022, 3:08:25 PM
On 3/2/2022 11:57 AM, Bonita Montero wrote:
> Am 02.03.2022 um 20:40 schrieb Chris M. Thomasson:
>> On 3/2/2022 11:38 AM, Bonita Montero wrote:
>>> Am 02.03.2022 um 20:28 schrieb Chris M. Thomasson:
>>>> On 3/2/2022 11:20 AM, Bonita Montero wrote:
>> [...]
>>>>>>> If the people aren't stupid they understand that this
>>>>>>> code is useless because of what I said.
>>>>>>>
>>>>>>
>>>>>> You are their God that they must follow? OH, I did not know that,
>>>>>> sorry. seqlock in userland can be useful.
>>>>>
>>>>> Spinning for a writer that's scheduled away isn't tolerable.
>>>>> Anyone can see that except you.
>>>>
>>>> Ever heard of an adaptive rw-mutex? Readers can choose to spin for a
>>>> while when there is a writer in the critical section.
>>>
>>> We're talking about seqlocks.
>>
>> I know. But a rw-mutex can also choose to spin for a writer that's
>> scheduled away, as you say. You are missing the point...
>
> I've developed such a rw-mutex on my own - with the calculation of the
> spinning-interval taken from the glibc:
[...]

Wow! I need to dig through that code. Have you tested it against
std::shared_mutex? That's always fun. ;^)

Btw, here is one of my older read/write mutexes that does not have
adaptive spin wait, based on a bakery algorithm:

https://vorbrodt.blog/2019/02/14/read-write-mutex

The code is pretty concise, so to speak.

;^)
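
For flavor, the general shape of a ticket/bakery-style rw-mutex goes
something like this; a from-memory sketch, not necessarily the code
behind the link:

#include <atomic>
#include <thread>

class ct_ticket_rw_mutex
{
    std::atomic<unsigned> m_next{0};  // next ticket to hand out
    std::atomic<unsigned> m_read{0};  // readers with this ticket may enter
    std::atomic<unsigned> m_write{0}; // the writer with this ticket may enter

public:
    void lock() // exclusive
    {
        unsigned t = m_next.fetch_add(1, std::memory_order_relaxed);
        while (m_write.load(std::memory_order_acquire) != t)
            std::this_thread::yield();
    }

    void unlock()
    {
        m_read.fetch_add(1, std::memory_order_release);
        m_write.fetch_add(1, std::memory_order_release);
    }

    void lock_shared()
    {
        unsigned t = m_next.fetch_add(1, std::memory_order_relaxed);
        while (m_read.load(std::memory_order_acquire) != t)
            std::this_thread::yield();
        m_read.fetch_add(1, std::memory_order_relaxed); // let the next reader in
    }

    void unlock_shared()
    {
        m_write.fetch_add(1, std::memory_order_release);
    }
};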

Bonita Montero

Mar 2, 2022, 3:27:43 PM
Am 02.03.2022 um 21:07 schrieb Chris M. Thomasson:
> On 3/2/2022 11:57 AM, Bonita Montero wrote:
>> Am 02.03.2022 um 20:40 schrieb Chris M. Thomasson:
>>> On 3/2/2022 11:38 AM, Bonita Montero wrote:
>>>> Am 02.03.2022 um 20:28 schrieb Chris M. Thomasson:
>>>>> On 3/2/2022 11:20 AM, Bonita Montero wrote:
>>> [...]
>>>>>>>> If the people aren't stupid they understand that this
>>>>>>>> code is useless because of what I said.
>>>>>>>>
>>>>>>>
>>>>>>> You are their God that they must follow? OH, I did not know that,
>>>>>>> sorry. seqlock in userland can be useful.
>>>>>>
>>>>>> Spinning for a writer that's scheduled away isn't tolerable.
>>>>>> Anyone can see that except you.
>>>>>
>>>>> Ever heard of an adaptive rw-mutex? Readers can choose to spin for
>>>>> a while when there is a writer in the critical section.
>>>>
>>>> We're talking about seqlocks.
>>>
>>> I know. But a rw-mutex can also choose to spin for a writer that's
>>> scheduled away, as you say. You are missing the point...
>>
>> I've developed such a rw-mutex on my own - with the calculation of the
>> spinning-interval taken from the glibc:
> [...]
>
> Wow! I need to dig through that code. Have you tested it against
> std::shared_mutex? That's always fun. ;^)


The code is a special kind of shared mutex: it has relative priority
for writers, i.e. when a writer applies for ownership all current
readers can pass, but further readers are enqueued. The mutex counters
are packed into a 64-bit value: the lower 21 bits are the number of
current readers, the next 21 bits are the number of enqueued readers,
the next 21 bits are the number of enqueued writers, and the high bit
flags a current writer having exclusive access.
I needed this lock for my mostly non-blocking LRU-algorithm.
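
Decoding that layout looks roughly like this; the helper names are made
up, but the shifts and mask mirror the constants in the header above:

#include <cstdint>

// 21/21/21/1 split of the 64-bit counter word (illustrative helpers only).
constexpr std::uint64_t MASK21 = 0x1FFFFFu;

constexpr unsigned current_readers ( std::uint64_t w ) { return (unsigned)( w        & MASK21); }
constexpr unsigned enqueued_readers( std::uint64_t w ) { return (unsigned)((w >> 21) & MASK21); }
constexpr unsigned enqueued_writers( std::uint64_t w ) { return (unsigned)((w >> 42) & MASK21); }
constexpr bool     writer_active   ( std::uint64_t w ) { return ((w >> 63) & 1) != 0; }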

Chris M. Thomasson

Mar 3, 2022, 12:01:18 AM
Hail Ming? ;^)

Juha Nieminen

Mar 3, 2022, 2:29:24 AM
Bonita Montero <Bonita....@gmail.com> wrote:
> You're really, really stupid.

I think that the greatest mystery in this newsgroup is why anybody
even bothers paying you any attention.

Somehow you keep insulting, mocking and belittling people, yet
people still keep paying attention to you and answering you, again
and again. How do you do that? What's the secret?

Bonita Montero

Mar 3, 2022, 2:34:10 AM
Chris is a frustrating person, not understanding what I say
and telling nonsense over and over.

Chris M. Thomasson

Mar 11, 2022, 7:46:28 PM
On 3/2/2022 1:34 AM, Chris M. Thomasson wrote:
> #include <iostream>
[...]
>     if (shared_state.m_seqlock.validate())
>     {
>         std::cout << "DATA IS COHERENT!!! :^D\n\n\n";
>     }
>
>     else
>     {
>         std::cout << "DATA IS __FOOBAR__! God DAMN IT!!!! ;^o\n\n\n";
>     }
>
> #if defined CT_LOG
>     unsigned long log_read_spins =
> g_log_read_spins.load(std::memory_order_relaxed);
>     std::cout << "CT_LOG: log_read_spins = " << log_read_spins << "\n";
> #endif
>
>     return 0;
> }

Humm.... I have an interesting use case for this: reading the state of a
cell in a distributed DLA algorithm. It's read mostly, write rarely.
Perfect! Here is an example:

http://fractallife247.com/fdla/

Now, this can be multi-threaded!

Chris M. Thomasson

Mar 11, 2022, 7:51:19 PM
Particles flowing to the single initial sink in the vector field only
read the field state. However, when they actually hit a sink point,
they add a field point to the system. So, seqlock for the reads, and
when they hit something, use a lock-free stack to push the new point
onto the system field point list.
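
Something along these lines for the push side; a rough sketch, where
ct_field_point and the node layout are made up for illustration (needs
<atomic>):

// Lock-free (Treiber) stack push for newly hit field points.
struct ct_field_point
{
    double m_x;
    double m_y;
    ct_field_point* m_next;
};

static std::atomic<ct_field_point*> g_field_points(nullptr);

void ct_push_field_point(ct_field_point* p)
{
    p->m_next = g_field_points.load(std::memory_order_relaxed);

    while (! g_field_points.compare_exchange_weak(
               p->m_next, p,
               std::memory_order_release,
               std::memory_order_relaxed))
    {
        // the failed CAS reloaded p->m_next for us; just retry...
    }
}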

Chris M. Thomasson

Mar 13, 2022, 5:39:40 AM
On 3/11/2022 4:45 PM, Chris M. Thomasson wrote:
Be sure to click around in the real-time simulation from time to time
and see what happens. Each click adds a sink, or attractor if you will,
to the field system. Therefore, this would be a "write" wrt the seqlock;
however, I have an idea of using a lock-free stack / seqlock hybrid, so
it can efficiently handle more writes...

https://github.com/ChrisMThomasson/CT_fieldDLA/blob/master/cairo_test_penrose/output.png