Re: 3-thread consensus.


Dmitry Vyukov

Jan 9, 2020, 2:58:50 AM
to bittnkr, lock...@googlegroups.com
On Wed, Jan 8, 2020 at 9:29 PM bittnkr <bit...@gmail.com> wrote:
>
> Dear Dmitry,
>
> I found a nice solution for the problem called 3-thread consensus, considered impossible in the book The Art of Multiprocessor Programming. I think that is a breakthrough.
>
> While debating on S.O. with someone about whether the solution is solid or not, and whether data races can occur, he referred me to Relacy.
>
> So I'm writing you to ask your opinion about the solution.
>
> Can you take a little look at it?

+lock-free mailing list

Hi bittnkr,

At first glance the algorithm at https://github.com/bittnkr/uniq looks
blocking and non-linearizable to me.
Very similar in nature to:
http://www.1024cores.net/home/lock-free-algorithms/queues/bounded-mpmc-queue

--
Dmitry Vyukov

All about lockfree/waitfree algorithms, multicore, scalability,
parallel computing and related topics:
http://www.1024cores.net

Mamy Ratsimbazafy

Jan 9, 2020, 6:00:01 AM
to lock...@googlegroups.com, bittnkr
Hi bittnkr

Beyond Relacy, if you are looking to publish a paper, you might want to specify/run your algorithm in a formal language like Spin or TLA+.
They can't verify correct use of C++11 but you can assert everything else and they will exhaustively check all states of your program, including livelocks.
And they also give you a formal notation for papers.

--
Mamy André-Ratsimbazafy




Dmitry Vyukov

Jan 11, 2020, 2:39:01 AM
to bittnkr, lock...@googlegroups.com
On Sat, Jan 11, 2020 at 4:09 AM bittnkr <bit...@gmail.com> wrote:
>
> Hello Dmitry and fellows from the group.
>
> If you look carefully, you will see that there is no blocking, just a simple CAS; even the buffer is an ordinary shared buffer.

But consider what you see if you look inside a spin lock, or if you
take a spin-lock-based algorithm and inline all of the spin lock code
into the algorithm. What you will see is exactly the same -- no
blocking, just a CAS. However, that does not really change anything;
it is still a blocking, mutex-based algorithm.
One can also combine the spin lock state with some data, e.g. a
particular value of a data member means "locked" and blocks the
progress of other threads. That makes the spin lock even less visible,
but again changes nothing at the conceptual level.
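
For illustration, here is a minimal spin lock written as "just a CAS
loop" (a generic sketch, not code from the queue under discussion):

  #include <atomic>

  std::atomic<int> locked{0};

  void enter() {
    int expected = 0;
    // "No blocking, just a CAS" -- yet a thread preempted right after
    // winning this CAS blocks every other thread that calls enter().
    while (!locked.compare_exchange_weak(expected, 1))
      expected = 0;
  }

  void leave() { locked.store(0); }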

If I am reading the algorithm correctly, if a thread is preempted
between these 2 operations:

} while ( (data[t & mask]) || (CompareExchange(tail, t+1, t) != t) )
data[t & mask] = item // now is safe to update the buffer

It effectively locks a mutex on the queue and blocks progress of all consumers.

There is a similar block on the consumer side here:

} while ( !(data[h & mask]) || CompareExchange(head, h+1, h) != h )
data[h & mask] = 0 // release the seat
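
Spelled out with the window marked (the do-block is reconstructed from
the loop condition above; an illustration, not the original source):

  do {
    t = tail;
  } while ( (data[t & mask]) || (CompareExchange(tail, t+1, t) != t) )
  // <-- if the thread is preempted or dies here, tail has already
  //     advanced but the slot is still empty: every consumer that
  //     reaches slot t & mask must spin until this producer resumes,
  //     which is exactly the effect of a held mutex.
  data[t & mask] = item // now it is safe to update the buffer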


> The queue only sleeps if it is empty or if it is full. But this is not locking.
>
> All protection is done by the two atomic variables head and tail.
>
> The cost is nearly constant at a single CAS, for any number of threads.
>
> Take a look at these benchmarks. They speak for themselves.
>
> https://github.com/bittnkr/uniq/blob/master/README.md#benchmarks
>
> Although I don't have a formal proof, we have a test with a zero checksum.
>
> These are the results I have for a producer/consumer test pushing random numbers into the queue.
>
> Creating 4 producers & 4 consumers
> to flow 10.000.000 items through the queue.
>
> Produced: 10.743.668.245.000.000
> Consumed: 5.554.289.678.184.004
> Produced: 10.743.668.245.000.000
> Consumed: 15.217.833.969.059.643
> Produced: 10.743.668.245.000.000
> Consumed: 7.380.542.769.600.801
> Produced: 10.743.668.245.000.000
> Consumed: 14.822.006.563.155.552
>
> Checksum: 0 (it must be zero)
>
> The producers increase the total and the consumers decrease it. The result for 10M random numbers is zero.
>
> Thanks for the advice, I'll look into these tools.

Dmitry Vyukov

Jan 12, 2020, 2:49:25 AM
to bittnkr, lock...@googlegroups.com
On Sat, Jan 11, 2020 at 9:26 AM bittnkr <bit...@gmail.com> wrote:
>
> Good observations. Thank you.
>
> If the thread is preempted at those points, the seat position will be held in the local variables h and t.

This blocks consumers from progressing (consuming the next produced
items) and is effectively a mutex. It makes the algorithm
non-termination-safe and non-lock-free.


> After the thread is resumed, the CAS can fail, and the loop will just restart in the normal flow, without any locking.
>
> I updated the docs; I think it is clearer now.

Dmitry Vyukov

Jan 13, 2020, 1:55:55 AM
to bittnkr, lock...@googlegroups.com
On Sun, Jan 12, 2020 at 7:33 PM bittnkr <bit...@gmail.com> wrote:
>
>
> > This blocks consumers from progressing (consuming the next produced items)
> > and is effectively a mutex.
>
> Suppose thread A gets a local copy of tail in t and then is preempted;
>
> another thread will get the same tail and take the seat normally.
>
> When the previous thread resumes, its CAS will fail, because the seat was taken,
>
> restarting the while loop without any kind of blocking.
>
> Where do you see a mutex here?

I mean preemption between the succeeding CAS and the write of the element/NULL to the array.

If a thread is terminated at that point, the whole queue is broken (no
termination-safety).

Dmitry Vyukov

Jan 13, 2020, 2:36:01 AM
to bittnkr, lock...@googlegroups.com
+lock-free group again, please don't drop it

On Mon, Jan 13, 2020 at 8:19 AM bittnkr <bit...@gmail.com> wrote:
>
> If a thread dies at that point, my entire system is broken...

Which means it's not lock-free.

> But it can be preempted without any problem, at near-zero cost.

Well, if a producer is preempted, consumers are blocked from
progressing. This is 100% equivalent to a mutex. If a thread is
preempted while holding a mutex, that also does not result in any
correctness problems.

bittnkr

Jan 13, 2020, 9:16:23 PM
to Dmitry Vyukov, lock...@googlegroups.com

Well, if a producer is preempted, consumers are blocked from progressing.

Sorry, but this isn't true. 

The consumers wait only in the case of an empty queue, which isn't a lock.

Meaning there is nothing to do. If you don't have active producers, there is nothing to consume.

How can you see a lock there?

Please

Brendon Costa

Jan 14, 2020, 1:38:32 AM
to lock...@googlegroups.com, Dmitry Vyukov
I was interested in your queue so had a look as well. Thanks for sharing it. I find it interesting looking at other implementations.

I think Dmitry is correct. See the comment inline below. From my understanding, the code blocks the consumers waiting on a producer at the point in time where I placed the comment in the code. It has a slight advantage over a standard lock implementation, though, as other producers are not blocked, only other consumers.

  int push(T item) {
    int i;
    do {
      i = in;
      while (i - out > mask)
        sched_yield(); // if full, wait for space
    } while (!isFree[i & mask] || !in.compare_exchange_weak(i, i + 1));

    // At this point in the code, the producer has incremented "in". Until this thread sets isFree[i & mask] = 0, all consumers are blocked waiting on isFree[o & mask] to be false. 
    // Even if another producer thread adds more data. So if the OS pre-empts this thread at this point in time
    // no consumers will make any progress until the OS re-schedules this thread to finish its work.
   
    buffer[i & mask] = item;
    isFree[i & mask] = 0;
    return i;
  }
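
For completeness, here is the matching pop as I read it (reconstructed
to mirror the push above; the repository code may differ in details):

  T pop() {
    int o;
    do {
      o = out;
      while (o == in)
        sched_yield(); // if empty, wait for an item
    } while (isFree[o & mask] || !out.compare_exchange_weak(o, o + 1));

    // Mirror image of the producer's window: between the CAS above and
    // the isFree[o & mask] = 1 below, producers wrapping around to this
    // seat must wait for this consumer to be rescheduled.
    T item = buffer[o & mask];
    isFree[o & mask] = 1;
    return item;
  }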




Manuel Pöter

Jan 14, 2020, 3:55:14 AM
to Scalable Synchronization Algorithms
The lock-free property guarantees that at any time at least one thread is making progress in a finite number of steps. Or to put this more generally: a stalled thread must not cause all other threads to stall indefinitely.
The arguments about lock-freedom (or the lack thereof) are usually based on the (somewhat artificial) assumption that any thread can fail at any time - i.e., simply stop executing. If such a failed thread causes the whole system to grind to a halt, then it is not lock-free.

You wrote yourself that "If a thread dies at that point, my entire system is broken...", so it is definitely not lock-free.
That being said, lock-freedom is a nice property, but by no means indispensable for scalability. Several of Dmitry's algorithms are not lock-free (like the bounded MPMC queue), which does not mean that they do not scale.

Alistarh et al. showed that lock-free algorithms are practically wait-free. I suppose the same could be shown for several concurrent algorithms that are not strictly lock-free. So it is not so much important whether an algorithm is lock-free or not, but whether it works well in practice for the use case it is designed for.

bit...@gmail.com

Jan 14, 2020, 9:55:54 AM
to Scalable Synchronization Algorithms
Hi Brendon,

Thanks for the observations.

At that point, between the CAS and the effective release of the seat, all other threads are free to take another seat without restriction.

The same is true at any point in the code. There is not a single point where any thread is blocked from getting a seat.

That's why I'm sure that it is lock-free.

The queue will only stall if it gets full (all seats after that point are occupied before the next cycle),

and only if the consumers get nothing.

If you have a queue of 64k positions, the other threads will have that many chances to continue without locking.

With our current memory availability, a queue with a million positions is quite feasible. But beyond some point, the benchmarks show that there is no speed improvement from increasing the buffer size. The optimum size must fit inside the processor cache; the optimum I found is only 64 positions. Take a look at the benchmarks.

I've updated the docs and I think they are clearer now.

Please run the code, stress it, increase the number of threads, reduce and increase the buffer size and change the nature of the data pushed.
 
If you do that, I believe you will agree with me: it's as lock-free as it is possible to be.


bit...@gmail.com

Jan 14, 2020, 10:03:42 AM
to Scalable Synchronization Algorithms
Hi Manuel, 

How would a processing line be broken between the CAS and the release of the seat?

This will happen only if the OS preempts the thread at that point and never schedules it again.

This is what I meant by "my entire system is broken".

Only an OS scheduling failure can produce that.

Otherwise it is impossible for a thread not to finish that processing line.

Can you think of another possibility?

bit...@gmail.com

Jan 14, 2020, 11:00:54 AM
to Scalable Synchronization Algorithms
Thinking about my own question, I find that abnormal termination is another possibility;
yet in that case, the entire program is finished/dead/broken.

Dmitry Vyukov

Jan 15, 2020, 1:17:36 AM
to lock...@googlegroups.com
On Tue, Jan 14, 2020 at 4:03 PM <bit...@gmail.com> wrote:
>
> Hi Manuel,
>
> How would a processing line be broken between the CAS and the release of the seat?
>
> This will happen only if the OS preempts the thread at that point and never schedules it again.
>
> This is what I meant by "my entire system is broken".
>
> Only an OS scheduling failure can produce that.
>
> Otherwise it is impossible for a thread not to finish that processing line.
>
> Can you think of another possibility?

Let me ask first: what is your definition of a lock-free algorithm?

Manuel Pöter

Jan 15, 2020, 3:37:13 AM
to Scalable Synchronization Algorithms

There exists a universally accepted meaning of lock-freedom. You may find various definitions that try to describe or formalize it in different ways, but the essence is always the same. Here are a few definitions from different sources:

  • An algorithm is lock-free if, when the program threads are run for a sufficiently long time, at least one of the threads makes progress (for some sensible definition of progress). (Wikipedia)
  • A method is lock-free if it guarantees that infinitely often some method call finishes in a finite number of steps. (Herlihy and Shavit, The Art of Multiprocessor Programming)
  • A data structure is lock-free if and only if some operation completes after a finite number of steps system-wide have been executed on the structure. (Keir Fraser, Practical Lock-freedom)

The mental model that is used to reason about this guarantee assumes that a thread can fail at any time anywhere in the code - and such a failed thread must not prevent other threads from finishing their operations.
Or put another way: an algorithm that requires a thread A to wait for another thread B to finish some operation in order to proceed can never be lock-free, because we have to assume that thread B may never finish its operation.

Whether this assumption is probable/realistic or not is beside the point and therefore irrelevant.


And since a failed thread can break your algorithm, it is not lock-free in the typical sense!
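
A minimal contrast (a generic sketch, not code from the queue): a
CAS-retry counter is lock-free because a stalled thread cannot stop
the others; a wait-on-a-flag handoff is not, because it can.

  #include <atomic>

  std::atomic<long> counter{0};

  // Lock-free: if this thread stalls on any line, every other thread
  // can still complete its own increment.
  void increment() {
    long v = counter.load();
    while (!counter.compare_exchange_weak(v, v + 1))
      ; // v was reloaded by the failed CAS; just retry
  }

  std::atomic<bool> published{false};
  int payload;

  void produce(int v) { payload = v; published.store(true); }

  // Not lock-free: this cannot finish until some producer sets
  // 'published'. If that producer stalls forever, so does this thread.
  int consume() {
    while (!published.load()) { /* spin */ }
    return payload;
  }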

bittnkr

Jan 15, 2020, 4:58:11 AM
to lock...@googlegroups.com
> Let me ask first: what is your definition of a lock-free algorithm?

When there are no locks/waits between threads/actors.

And a lock happens when some thread A cannot proceed while another thread B does not finish its job.

But for practical reasons, I believe that the most important characteristic of a lock-free algorithm is to be O(1): the number of elements must not impact the performance. It doesn't matter if you are handling 2 or 10.000 threads; the cost per operation must be the same.




Mamy Ratsimbazafy

Jan 15, 2020, 5:36:37 AM
to lock...@googlegroups.com
The definition of obstruction-free, lock-free, or wait-free is not a guarantee of performance; it's a guarantee of progress, in particular when a thread can die (non-blocking algorithms).

In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread;[1] for some operations, these algorithms provide a useful alternative to traditional blocking implementations. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress, and wait-free if there is also guaranteed per-thread progress.

Your algorithm is lockless (i.e. without locks) but not lock-free according to the definition of lock-freedom.

As you aptly said, in the practical case wait-free or lock-free is not a necessary property to be able to scale a data-structure to multiple contending threads.
For example, Dmitry's famous MPSC node-based queue is also lockless: if a producer dies in the wrong place, the consumer cannot get to the end of the queue.
There are lock-free (or linearizable) variants of it, but AFAIK they require giving up some performance.
In practice, using this queue means assuming that threads don't die, which is a reasonable assumption most of the time.
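
To make the MPSC example concrete, this is the shape of its push as I
remember it (a sketch; see 1024cores.net for the original):

  #include <atomic>

  struct Node { std::atomic<Node*> next{nullptr}; };
  std::atomic<Node*> head; // producers swap themselves in here

  void push(Node* n) {
    n->next.store(nullptr, std::memory_order_relaxed);
    Node* prev = head.exchange(n, std::memory_order_acq_rel); // step 1
    // If the producer dies between step 1 and step 2, the chain is
    // cut: the consumer can never walk past 'prev' to reach 'n' or
    // anything enqueued after it. Lockless, but not lock-free.
    prev->next.store(n, std::memory_order_release);           // step 2
  }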

Lock-freedom or wait-freedom is valuable when you need strong worst-case guarantees (real-time scheduling, for example) or cannot rely on the OS (because you might be writing one, or you have specific fault-tolerance requirements).

--
Mamy

bittnkr

Jan 15, 2020, 7:47:49 PM
to lock...@googlegroups.com

> Your algorithm is lockless (i.e. without lock) but not lock-free according to the definition of lock-freedom.

You make me wonder whether my cellphone is wireless or wire-free. :)

Thinking carefully about it, I arrived at an axiom:

An algorithm can be called lock-free if it's O(1).

A necessary and sufficient condition.


Manuel Pöter

Jan 16, 2020, 3:16:50 AM
to Scalable Synchronization Algorithms
Progress guarantees have nothing to do with algorithmic complexity! Take Harris' lock-free linked list - the list operations are definitely not O(1), but it is still lock-free.
I am not sure if you really don't understand this, or if you are deliberately ignoring all comments trying to explain what lock-freedom usually means. Either way, this discussion seems rather pointless by now...
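
For reference, the reason Harris' list stays lock-free despite O(n)
searches is its mark-then-unlink deletion, where any traversing thread
can finish the unlink on behalf of a stalled one (a simplified sketch;
mark/unmark are hypothetical helpers packing the deletion bit into the
pointer's low bit):

  #include <atomic>
  #include <cstdint>

  struct Node { long key; std::atomic<Node*> next; };

  Node* mark(Node* p)   { return (Node*)((std::uintptr_t)p | 1); }
  Node* unmark(Node* p) { return (Node*)((std::uintptr_t)p & ~(std::uintptr_t)1); }

  bool remove(Node* pred, Node* curr) {
    Node* succ = unmark(curr->next.load());
    // Phase 1: logical delete -- mark curr's next pointer.
    if (!curr->next.compare_exchange_strong(succ, mark(succ)))
      return false; // lost a race; the caller re-searches and retries
    // Phase 2: physical unlink. If this thread stalls or dies here,
    // any traversing thread that sees the mark performs this CAS
    // instead -- that "helping" is what preserves lock-freedom.
    pred->next.compare_exchange_strong(curr, succ);
    return true;
  }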

bittnkr

Jan 16, 2020, 1:43:14 PM
to lock...@googlegroups.com
Dear Manuel,

When pointless people lose their arguments, they say the discussion is pointless. ;D

I made the most right-to-the-point affirmation I have ever made in my entire life:

The only requirement for an algorithm to be lock-free is to be O(1).

To me, this affirmation is obvious and self-contained to the point of being axiomatic.

I don't know about Harris' linked list, but this is my general point about lock-freedom.

Can you refute it?


Nitsan Wakart

Jan 17, 2020, 1:55:40 AM
to lock...@googlegroups.com
Dear Bittnkr,
You are entitled to your own opinion, not your own facts.
The fact is that lock-free is a concrete term that has a current meaning, one that Manuel has taken the time to collect from different authoritative sources. You seem to go for the Humpty Dumpty approach:
   "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less."
This is good fun in a story book, not so productive in this context though. If you want lock-free to mean whatever you want it to mean, then I agree your queue is lock-free.
;-)


bittnkr

Jan 17, 2020, 9:02:56 AM
to lock...@googlegroups.com
Dear Nitsan,

Thanks for your message. You brought a smile to my face this morning.

I really don't think so...

I'm not manipulating the words to put an alien meaning on them.

Instead, I'm trying to simplify a problem considered impossible, with a rather new but simpler and more concrete definition of a complex theme.

Unfortunately, I've seen lots of different positions about lock-free structures, so many that it seems everyone has their own idea of what it means to be lock-free.

This thread is proof of that.

We can debate for hours about the difference between lockless and lock-free and go nowhere, just chasing our tails. ;)

Maybe it is because of the lack of a simpler definition that the solution is still unknown.

So, let's lay down some premises to find common ground:

Suppose you have 2 threads A & B updating an atomic integer with positive and negative random numbers, producing a Brownian motion.

atomic<int> X = 0

A() { X += random() }
B() { X -= random() }

C() { print(X) }

I ask you: is this program lock-free?

It may be obvious to you that this operation is lock-free, because there are no locks and we are using an atomic increment.

But why? What is the primary fact that makes this operation lock-free?

What if you add another 50 instances of A & B? What will happen to the performance of the program: will it increase, decrease, or stay the same?

What is the O() of this program/operation?

And what if you change the code to update X using a mutex?

What will happen to the performance when you add those additional threads? And the O()?
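
For anyone who wants to try it, here is a runnable version of the
experiment (my own sketch; swap the atomic for a mutex-guarded long to
see the contrast):

  #include <atomic>
  #include <cstdio>
  #include <random>
  #include <thread>
  #include <vector>

  std::atomic<long> X{0};

  // A and B from the example above: atomic +/- of random numbers.
  void A(int iters) {
    std::mt19937 rng(std::random_device{}());
    for (int i = 0; i < iters; ++i) X += (long)(rng() % 1000);
  }
  void B(int iters) {
    std::mt19937 rng(std::random_device{}());
    for (int i = 0; i < iters; ++i) X -= (long)(rng() % 1000);
  }

  int main() {
    std::vector<std::thread> ts;
    for (int i = 0; i < 50; ++i) { // scale the number of A&B pairs here
      ts.emplace_back(A, 100000);
      ts.emplace_back(B, 100000);
    }
    for (auto& t : ts) t.join();
    std::printf("X = %ld\n", X.load()); // C() from the example
  }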

bittnkr

Jan 17, 2020, 9:48:44 AM
to lock...@googlegroups.com
Dear Manuel,

Thanks for sending the Harris paper.
Just now I had the time to read it.

Attached is a screenshot of the performance chart presented there.

Did you notice the difference between the Mutex and the New version?

What do you think is the O() of each one?

And what about the Valois version?

Just looking at this chart, is it possible to conclude whether it's lock-free or not? Why?

Dmitry?
[Attachment: Screenshot_Drive_20200117-112048.png]

Nitsan Wakart

Jan 17, 2020, 10:58:47 AM
to lock...@googlegroups.com
"everyone has it own ideas of what means to be lock free. " - So I guess your opinion is just as good? :-)
If we are willing to redefine lock-free, we can call anything lock-free. Manuel provided you with the accepted view; you seem to think those are just opinions.
If there's no agreement on what the term means, the rest of the discussion is pointless, which is why Dmitry asked you "what is your definition of a lock-free algorithm?" when he realized you were using your own definition (and then probably lost interest, given the pointlessness).


bittnkr

Jan 17, 2020, 9:14:41 PM
to lock...@googlegroups.com
> So I guess your opinion is just as good? :-)

No, I know my facts are better.

I'm sure my solution is lock-free, because it's O(1) and has no locks.

If you can't see the same, what can I do?

Manuel Pöter

Jan 18, 2020, 10:09:59 AM
to Scalable Synchronization Algorithms
Hi bittnkr,

I am still not sure whether you are interested in a meaningful discussion or not (most of your responses convey the feeling that you just want to troll around). I will make one last attempt to explain your misconception, but if you continue to ignore all arguments and references to other sources, I'm done.

As I said before - progress guarantees (wait-freedom, lock-freedom, ...) have nothing to do with algorithmic complexity. And they have nothing to do with performance either - in many cases (fine-grained) lock-based solutions perform better than lock-free versions. The key difference is that in a lock-free data structure at least one thread will be able to make progress on its operation, even if other threads are blocked, preempted, or otherwise delayed.

Take for example a sorted linked list. Searching/inserting/deleting entries with a specific key in the list is obviously O(n) where n is the number of items in that list (I hope we can at least agree on that). According to your definition, this operation cannot be lock-free. Harris proposed a lock-free solution that is unbounded, i.e., O(∞), yet it is still lock-free. In fact, most lock-free algorithms are unbounded - if they are bounded, they are not just lock-free, but wait-free.

Regarding the chart in your screenshot: it shows that Harris' list scales much better than the mutex-based one, even though the mutex-based solution is O(n). But you cannot derive any more information from that chart other than how well each implementation scales with the number of threads. In particular, you cannot tell whether any of the implementations is lock-free (nor whether an implementation is O(n), for that matter).

Take a closer look at your own code - it is definitely not O(1)!
    do {
      o = out;
      while (o == in)
        sched_yield(); // if empty, wait for item
    } while (isFree[o & mask] || !out.compare_exchange_weak(o, o + 1));
This piece from the pop operation has two unbounded loops in it. Why would you assume that this is O(1)? There is no guarantee that either loop terminates after a bounded number of iterations, so it is O(∞). Not only that, but it depends on a concurrent push operation to store the value into the buffer and set isFree. But you have no guarantee when (or even if, for that matter) this is going to happen. And this is what makes your code not lock-free: the fact that one thread (pop) depends on some other thread having to finish another operation (push).

There exists a universally accepted meaning of lock-freedom (as well as all the other progress guarantees) in the scientific community. The canonical reference for most of these definitions is the paper "A methodology for implementing highly concurrent data objects" by Maurice Herlihy from 1993.

I suggest you get yourself a copy of "The Art of Multiprocessor Programming" and actually read it (well, reading alone won't do it; you should also try to follow the arguments and understand them). Another good starting point is "Is Parallel Programming Hard, And, If So, What Can You Do About It?" by Paul McKenney.

However, if you prefer to live in your own world with your own definitions, you are welcome to do so. But don't expect me to participate in such a pointless discussion (and I suppose the same goes for the other people who responded in this thread).

bittnkr

Jan 19, 2020, 4:34:00 AM
to lock...@googlegroups.com
Hi Manuel,

I really thought this thread was done... But it appears you're still here. Thanks for your patience.

This is the first time someone has called me a troll. I don't know if that is a detriment or an achievement for an introspective and somewhat arrogant nerd like me.

I think I found the misunderstanding...

> Take for example a sorted linked list. Searching/inserting/deleting entries with a specific key in the list is obviously O(n) where n is the number of items in that list (I hope we can at least agree on that).

When I talk about O(1), I'm referring to the number of threads (the X axis of the chart) and specifically the insert and remove operations (the cost on the Y axis), not everything you can do with the structure.

From my point of view, that's the fundamental behavior of a lock-free structure.

If a queue/stack/whatever has a constant insert/remove cost which doesn't change with the number of participants, that's proof it's lock-free.

Almost all lock-free papers I have read have a chart like that at the end as proof of lock-freedom.

In the linked list example, the search is O(n), but inserts and deletes are O(1). Note how the Y values don't change with the number of threads (an almost horizontal line).

As for my code, I think it should speak for itself. I know there is no lock in it. No thread will wait for another to finish its job at any point.

Except when the queue is full/empty, but this is not a lock. As I've said before, if nothing is produced, there is nothing to be consumed. The loop is not a spin lock, just the normal flow of push()/pop().

Anyway, maybe I'm being too simplistic and still missing some obscure nuance reserved only for minds higher than that of a brute troll like me (smiles, and no offense). ;-)

I was just euphoric, wanting to share my joy with someone... But I think I must look elsewhere.

Thanks for all. 


Manuel Pöter

Jan 19, 2020, 9:18:08 AM
to Scalable Synchronization Algorithms
Sorry, but what you are writing does not make any sense!

It seems like you don't understand the big O notation, or how to properly determine the complexity of an algorithm. You can't just pick whatever you like to be the argument of O().

BTW: your implementation contains a race condition since isFree is not atomic; therefore you also don't get the necessary happens-before relation for the operations on buffer. Just run your code with TSAN (but I suppose you would dismiss any reports as false positives anyway, so why bother).
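
A minimal sketch of the fix, assuming the declarations used earlier in
this thread (the actual repository may differ):

  #include <atomic>

  constexpr int SIZE = 64, mask = SIZE - 1;
  int buffer[SIZE];
  std::atomic<int> isFree[SIZE]; // was a plain array: a data race
                                 // (initialize every entry to 1 = empty)

  // Producer, after winning the CAS on 'in' at index i: the release
  // store makes the buffer write visible to the acquiring consumer.
  void publish(int i, int item) {
    buffer[i & mask] = item;
    isFree[i & mask].store(0, std::memory_order_release);
  }

  // Consumer, after winning the CAS on 'out' at index o. The emptiness
  // test in the loop must itself become an acquire load, e.g.:
  //   } while (isFree[o & mask].load(std::memory_order_acquire) || ...)
  int take(int o) {
    int item = buffer[o & mask];
    isFree[o & mask].store(1, std::memory_order_release); // free the seat
    return item;
  }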

You seem to lack some basic understanding, but since you have also proved to be completely and utterly resistant to all arguments, this discussion is indeed pointless!

Manuel Pöter

Jan 19, 2020, 9:21:07 AM
to Scalable Synchronization Algorithms
I meant to write data race on isFree, not race condition...