SPSC Overwriting Queue

Brendon Costa

May 31, 2022, 9:19:05 AM
to lock...@googlegroups.com
Hi all,

Does anyone know of a SPSC overwriting bounded queue design they can share information about?

The idea is that when the queue is full, instead of blocking the enqueue of new items, the enqueue simply overwrites the oldest items in the queue. This is initially for use in real-time audio applications.

Ideally this would be wait-free for both producer and consumer, as in real-time audio applications we want a bounded time for the completion of the work. That said, in practice this isn't always a requirement: in the case where the queue overflows and starts overwriting, we could afford to be non-wait-free, as we have discontinuities in the audio anyway.
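To pin down the semantics being asked for, here is a single-threaded reference model (no concurrency, illustrative names only; the concurrent version is the hard part):

```cpp
#include <cstddef>
#include <deque>

// Single-threaded reference model of the desired behaviour: when the
// buffer is full, push() drops the oldest item instead of failing or
// blocking. Purely illustrative; not a real-time-safe structure.
template <typename T>
struct OverwritingModel {
    explicit OverwritingModel(std::size_t cap) : cap_(cap) {}

    void push(const T& v) {
        if (q_.size() == cap_)
            q_.pop_front();          // overwrite (discard) the oldest item
        q_.push_back(v);
    }

    bool pop(T& out) {
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop_front();
        return true;
    }

    std::size_t cap_;
    std::deque<T> q_;
};
```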

I was wondering if anyone knows of any existing work like this I can read about (I have had no luck finding anything relevant yet, so I am presumably searching for the wrong thing)?


Mamy Ratsimbazafy

May 31, 2022, 9:25:59 AM
to lock...@googlegroups.com
That's called a ring buffer.


You received this message because you are subscribed to the Google Groups "Scalable Synchronization Algorithms" group.
To unsubscribe from this group and stop receiving emails from it, send an email to lock-free+...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/lock-free/CAGUrrs4EbOCzRMRSoj6jr%2B%3DjgqiU7bNbsqmhpo%2BZW71P5E7vxg%40mail.gmail.com.

Albi Kavo

Jun 6, 2022, 11:44:58 AM
to Scalable Synchronization Algorithms
Here is a lock-free implementation of a ring buffer for the SPSC case:

It satisfies your bounded requirement but not the overwriting one. Instead it returns false when the queue is full, for you to handle as you wish.
How important is overwriting to you?
For the example above, it seems to me that overwriting would require the producer to modify _head in addition to _tail, which would break the design.
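For readers following along, a minimal non-overwriting SPSC ring buffer of the kind described looks roughly like this (a sketch, not the linked implementation; head_/tail_ here play the role of _head/_tail above):

```cpp
#include <atomic>
#include <cstddef>

// Sketch of a bounded, non-overwriting SPSC queue: try_push() returns
// false when full instead of overwriting. Names are illustrative.
template <typename T, std::size_t Capacity>
class SpscQueue {
    static_assert((Capacity & (Capacity - 1)) == 0,
                  "Capacity must be a power of two");
public:
    bool try_push(const T& v) {               // called by producer only
        std::size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == Capacity)
            return false;                     // full: caller decides what to do
        buf_[t & (Capacity - 1)] = v;
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }

    bool try_pop(T& out) {                    // called by consumer only
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire))
            return false;                     // empty
        out = buf_[h & (Capacity - 1)];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }

private:
    std::atomic<std::size_t> head_{0}, tail_{0};
    T buf_[Capacity];
};
```

Each cursor is written by exactly one thread, which is what makes this design simple; an overwriting producer would have to move head_ as well, so both threads would write it.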

Andrii Merkushev

Jun 7, 2022, 12:14:34 AM
to Scalable Synchronization Algorithms
Looks like it's impossible to have an overwriting SPSC queue. There is no way for the consumer to safely access data, because it can be overwritten by the producer at any moment.

Brendon Costa

Jun 7, 2022, 3:28:04 AM
to lock...@googlegroups.com
Thanks for the feedback. I was banging my head against a wall thinking I was obviously searching for the wrong thing...

Yes, the whole point of this question is the overwrite support. I have access to an SPSC non-overwrite queue (a ring buffer, as mentioned) I wrote ages ago. These non-overwrite ones are all I can find when searching the internet. I have looked through perhaps 15 different queue implementations in detail now and not found what I am after. The overwrite makes the problem much more complex.

I have two use-cases: one can probably get away without the overwrite; the other has significantly worse quality without it.

The key problem is that it is really more like an SPMC queue, as the producer is also a "discarding consumer".

I have googled this a lot and found a number of unanswered questions, plus a few badly answered ones whose proposed solutions just don't work once I worked through them in detail.

As a result I have started trying to write my own using Relacy, and it looks like it might be feasible.

Mine certainly won't scale 2x (SPSC) because of the sharing involved, which I don't care about, but at least it will be wait-free for the producer (bounded time) and wait-free for the consumer in the case I care about (where the queue hasn't overflowed). I will see how I go solving the remaining issues.

The basic idea of the algo is to publish the "pending write cursor" for the write I am about to do, then do the write, then commit by moving the "actual write cursor" once I am done.

The read is then more like a CAS loop: it reads the read/pending-write/actual-write cursors, reads the data in a way that may race (the elements must be primitive, as this doesn't currently work with proper ownership semantics), then re-reads the cursors and checks whether there was a collision (done correctly it doesn't care about change, only collision). If there was, it discards and tries again, guarding against livelock by skipping further ahead each time.
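The scheme described above might be sketched as follows. This is a hedged illustration with invented names, using sequentially consistent atomics throughout for clarity; the optimistic slot copy is formally a data race in C++ (a real version would copy through atomics or restrict T to primitives, as noted), and the collision check simply retries further ahead:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Sketch: producer publishes a pending-write cursor before writing and a
// committed cursor after; the consumer copies a slot optimistically, then
// re-reads the pending cursor to detect a collision, skipping past
// overwritten items when lapped. T must be trivially copyable since the
// copy may race with the producer. seq_cst everywhere for simplicity.
template <typename T, std::size_t N>
struct OverwritingSpsc {
    std::atomic<std::uint64_t> pending{0};    // next slot producer will touch
    std::atomic<std::uint64_t> committed{0};  // count of fully written slots
    std::atomic<std::uint64_t> read{0};       // consumer position

    T slots[N];

    void push(const T& v) {                   // wait-free producer
        std::uint64_t p = pending.load();
        pending.store(p + 1);                 // announce intent before writing
        slots[p % N] = v;                     // may race with a reader's copy
        committed.store(p + 1);               // commit the write
    }

    bool pop(T& out) {                        // retries on collision
        for (;;) {
            std::uint64_t r = read.load();
            std::uint64_t c = committed.load();
            if (r >= c) return false;         // empty
            std::uint64_t p = pending.load();
            if (p > r + N)                    // producer lapped us:
                r = p - N;                    //   skip the overwritten items
            out = slots[r % N];               // optimistic, possibly racy copy
            if (pending.load() <= r + N) {    // slot untouched during copy?
                read.store(r + 1);
                return true;
            }
            // collision: discard the copy and retry, now further ahead
        }
    }
};
```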

Not nice but is the first thing I have considered.

The other option I was considering was modifying an MPMC queue I already have based on Dmitry's ticket system, but I don't think it will be easily extendable to reading/writing N bytes at once, which is also very desirable for me.



Andrii Merkushev

Jun 8, 2022, 10:27:12 PM
to Scalable Synchronization Algorithms
I would try to use a dedicated array of boolean indicators, or embed such an indicator in each element of the buffer, instead of the read/pending-write/actual-write cursors.
Something like:
struct Item {
    bool consumed;
    Data data;
};

This consumed boolean indicator helps to prevent data races.

Producer algo:
1. If there is empty space: regular insert, set consumed = false.
2. Else:
    2.1 Remember the current cursor (curCursor) and advance the reader cursor (readCursor) by the desired distance using an atomic compare-exchange. If the compare-exchange fails, go to step 1.
    2.2 Set element.consumed = true for each element between curCursor and readCursor (using the value cached in step 2.1).
    2.3 Loop over each element from the write cursor to the cached readCursor:
        2.3.1 if element.consumed == false, busy-wait
        2.3.2 write element.data
        2.3.3 set element.consumed = false
    2.4 Advance the write cursor.

Consumer algo:
1. Advance readCursor using atomic_fetch_add.
2. Consume the data and set element.consumed = true.

If such an algo works, the producer will be non-wait-free (it may wait for a consumer that has acquired an element but has not yet retrieved its data) and the consumer will be wait-free.
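A minimal sketch of the per-element layout this suggests, showing only the uncontended (not-full) path; the reclaim path in step 2 of the producer algo is omitted, and all names are illustrative:

```cpp
#include <atomic>
#include <cstddef>

// Sketch: the consumed flag acts as the ownership handoff between
// producer and consumer on each slot. consumed == true means the slot
// is free for the producer; false means it holds unread data.
// Overwrite/reclaim handling (the hard part) is deliberately left out.
template <typename T, std::size_t N>
struct FlaggedRing {
    struct Item {
        std::atomic<bool> consumed{true};  // slot starts out free
        T data;
    };
    Item items[N];
    std::atomic<std::size_t> writeCursor{0};
    std::atomic<std::size_t> readCursor{0};

    bool try_push(const T& v) {            // producer, regular insert path
        std::size_t w = writeCursor.load();
        Item& it = items[w % N];
        if (!it.consumed.load())
            return false;                  // slot still owned by consumer
        it.data = v;
        it.consumed.store(false);          // hand ownership to consumer
        writeCursor.store(w + 1);
        return true;
    }

    bool try_pop(T& out) {                 // consumer
        std::size_t r = readCursor.load();
        Item& it = items[r % N];
        if (it.consumed.load())
            return false;                  // nothing unread here
        out = it.data;
        it.consumed.store(true);           // release slot back to producer
        readCursor.store(r + 1);
        return true;
    }
};
```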


Jun 9, 2022, 1:21:42 PM
to Scalable Synchronization Algorithms
We have an algorithm in Agrona which is used in Aeron (message transport) for broadcasting from a single producer to many consumers. The producer (Transmitter) is lock-free and the consumers (Receiver) can detect if the buffer has been overrun when they consume so that they can restart consuming after accepting the loss. The CopyBroadcastReceiver makes writing the receiver code a bit easier at the cost of an extra copy.

Java implementation:

C++ implementation:


Brendon Costa

Jun 13, 2022, 1:14:49 AM
to lock...@googlegroups.com
Thanks all. I will give this wait-free consumer one a try later as well. Thanks for the suggestion.

In my particular use case I really want the wait-free producer more than the consumer, but this idea sounds like it would make a nicer implementation for the queue and be useful in other circumstances. I have considered writing a small library of queues with different behaviours I can call on easily when I need them.
