Guidelines on implementing distributed atomic writes.


Roman Gershman

Apr 17, 2020, 5:42:58 AM
to Scalable Synchronization Algorithms

Hello, dear lockless experts.

I need advice on the following problem I am trying to solve.

The setup is as follows:

Two thread-local hash tables, A and B, each pinned to its managing thread. Each thread exposes an external interface for reading and writing its thread-local data atomically via an MPSC queue, thus avoiding data races.
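For concreteness, the setup could be sketched like this in Python (the names `TableShard` and `submit` are illustrative, and a plain `queue.Queue` stands in for the MPSC queue):

```python
import threading, queue

class TableShard:
    """A hash table owned by a single thread; all access goes through
    a queue, so the table itself needs no locks."""
    def __init__(self):
        self.data = {}
        self.inbox = queue.Queue()          # stand-in for the MPSC queue
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            op = self.inbox.get()           # op is a closure over the table
            op(self.data)

    def submit(self, op):
        """Called by external entities; never blocks the owner thread."""
        self.inbox.put(op)

# An external entity writes, then reads back through a reply box.
shard = TableShard()
shard.submit(lambda d: d.__setitem__("k", 1))
reply = queue.Queue()
shard.submit(lambda d: reply.put(d.get("k")))
print(reply.get())                          # -> 1
```

Since the owner thread drains the queue in FIFO order, the read is guaranteed to observe the preceding write.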

I have multiple external entities that write to each of the tables, and I need to design atomic multi-table writes/reads: if writer 1 starts writing to A[k] and B[k] for some key 'k' while writer 2 is reading from them, then writer 2 observes either both old values or both new ones.

Important restriction: we never block threads A and B, only the entities that interact with them.

This is probably a classic problem, but I do not know its name, or how to reduce it to an already-known algorithm.

Meanwhile I came up with the following solution:

First Phase:

Writer 1 marks A[k] and B[k] as locked by publishing, in each thread, a lock[k] pointing to writer 1 (using the threads' write interfaces). These writes land in arbitrary order, since A and B are not synchronized with each other.

If either table is already locked on 'k', we apply LockResolution (see below).

Second phase:

Once writer 1 is notified that both writes succeeded, it knows that every pending write to [k] either happened before the locks were set or is handling LockResolution on lock[k]. Now it can safely write the new data and remove the locks (again, in arbitrary order).

The first phase should guarantee atomicity even while the second phase is in progress, when some locks have been lifted and some have not.


LockResolution: if a writer tries to lock an already marked key, it needs to block on that lock.

Obviously, naive blocking could cause deadlocks, since the order of writes is undefined:

writer 1 could lock A first while writer 2 locks B, and then each waits for the other.

So instead, the writer with the smaller id gives up, unlocks the entries it has already locked, and restarts phase 1.

Single-table readers can ignore the lock and just read the data. Multi-table readers of A[k] or B[k] block on lock[k]; they never need to retry, since they cannot cause deadlocks.
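Putting the writer protocol and the blocking reader together, here is a runnable Python sketch. It is a simplification, not the real thread-pinned design: a condition variable per table stands in for the MPSC write interface, and the losing writer releases all of its marks before retrying (rather than the owner thread queueing the resolution).

```python
import threading

class Table:
    """Stand-in for a thread-owned table: a condvar serializes access,
    playing the role of the MPSC write interface."""
    def __init__(self):
        self.data = {}
        self.locks = {}                      # key -> id of marking writer
        self.cv = threading.Condition()

    def try_mark(self, key, writer_id):
        """Phase 1: try to place lock[k]; return current owner on conflict."""
        with self.cv:
            owner = self.locks.get(key)
            if owner is None:
                self.locks[key] = writer_id
            return owner

    def unmark(self, key):
        with self.cv:
            self.locks.pop(key, None)
            self.cv.notify_all()

    def commit(self, key, value):
        """Phase 2: publish the new value and lift the lock."""
        with self.cv:
            self.data[key] = value
            self.locks.pop(key, None)
            self.cv.notify_all()

    def read_unlocked(self, key):
        """Multi-table reader: block while lock[k] is set, then read."""
        with self.cv:
            while key in self.locks:
                self.cv.wait()
            return self.data.get(key)

def atomic_write(tables, key, value, writer_id):
    """Two-phase multi-table write with the smaller-id-gives-up rule."""
    while True:
        marked, conflict_table, owner = [], None, None
        for t in tables:
            owner = t.try_mark(key, writer_id)
            if owner is not None:            # lock[k] already taken
                conflict_table = t
                break
            marked.append(t)
        if conflict_table is None:
            for t in tables:                 # phase 2: write, then unlock
                t.commit(key, value)
            return
        for t in marked:                     # LockResolution: back off
            t.unmark(key)
        if writer_id > owner:                # larger id may block safely:
            conflict_table.read_unlocked(key)  # holds nothing while waiting
        # smaller id gives up and restarts phase 1 immediately

# Two writers racing in opposite table orders still leave A and B consistent.
A, B = Table(), Table()
w1 = threading.Thread(target=atomic_write, args=([A, B], "k", 1, 1))
w2 = threading.Thread(target=atomic_write, args=([B, A], "k", 2, 2))
w1.start(); w2.start(); w1.join(); w2.join()
print(A.data["k"] == B.data["k"])            # -> True
```

Because a conflicting writer releases every mark before waiting or retrying, no writer ever holds one lock while blocked on another, which is what rules out the A/B deadlock above.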

That’s what I have so far. Do you think this algorithm is correct?

Can it be improved? :)

Roman Gershman

Apr 20, 2020, 10:28:58 AM
to Scalable Synchronization Algorithms
Was my problem not interesting enough? Not clear enough? Both? :)