
More of my philosophy about my new inventions and more..


Amine Moulay Ramdane

Feb 17, 2022, 2:02:09 PM
Hello,


More of my philosophy about my new inventions and more..

I am a white Arab and I think I am smart, since I have also invented many scalable algorithms.


I think I am smart, and I note that the x86 processor family features a fairly strict memory model that only allows loads to be reordered ahead of earlier independent stores, so I have just invented a fully lock-free bounded FIFO queue and a fully lock-free bounded LIFO stack that also run on the x86 processor family, and they are much more powerful than my following inventions:

https://sites.google.com/site/scalable68/lockfree-bounded-lifo-stack-and-fifo-queue
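For a concrete picture of the kind of structure being discussed, here is a minimal sketch of a bounded lock-free ring buffer in C++, restricted to a single producer and a single consumer for simplicity. The class and member names are mine and illustrative only; this is not the code behind the link above, which is described as fully lock-free for the general multi-producer/multi-consumer case.

#include <atomic>
#include <cstddef>
#include <optional>

// Bounded single-producer/single-consumer lock-free ring buffer (sketch).
template <typename T, std::size_t Capacity>
class SpscBoundedQueue {
    static_assert((Capacity & (Capacity - 1)) == 0,
                  "Capacity must be a power of two");
public:
    // Called only by the single producer.
    bool push(const T& item) {
        std::size_t head = head_.load(std::memory_order_relaxed);
        std::size_t tail = tail_.load(std::memory_order_acquire);
        if (head - tail == Capacity) return false;           // queue is full
        buf_[head & (Capacity - 1)] = item;
        head_.store(head + 1, std::memory_order_release);    // publish the slot
        return true;
    }
    // Called only by the single consumer.
    std::optional<T> pop() {
        std::size_t tail = tail_.load(std::memory_order_relaxed);
        std::size_t head = head_.load(std::memory_order_acquire);
        if (tail == head) return std::nullopt;                // queue is empty
        T item = buf_[tail & (Capacity - 1)];
        tail_.store(tail + 1, std::memory_order_release);     // free the slot
        return item;
    }
private:
    T buf_[Capacity];
    std::atomic<std::size_t> head_{0};   // written only by the producer
    std::atomic<std::size_t> tail_{0};   // written only by the consumer
};

On x86 the acquire loads and release stores above compile to plain loads and stores, which is one reason the strict x86 memory model makes such bounded queues cheap; a multi-producer/multi-consumer variant needs atomic read-modify-write operations on the indices and per-slot sequence numbers, which this sketch deliberately omits.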

Also, I have shared with you my following invention; read about it in my following thoughts:

About smartness and about the MCS lock and more..

I have just read the following article from ACM:

Scalability Techniques for Practical Synchronization Primitives

https://queue.acm.org/detail.cfm?id=2698990

Notice how they are speaking about one of the best scalable locks, which we call the MCS lock, but I think that CLH and MCS locks are not smart, since those scalable locks are intrusive: they require a queue-node parameter to be passed in and kept around by the caller. This is why I think I am smart, since I have invented a scalable lock that is better than the MCS lock: my scalable lock doesn't require any parameter to be passed, you just call the Enter() and Leave() methods and that's all. Here it is, read carefully about it on my website here:

https://sites.google.com/site/scalable68/scalable-mlock
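For readers unfamiliar with the complaint, here is a minimal textbook-style sketch of a classic MCS lock in C++ (illustrative only, not the code from the linked page or from the ACM article). Note that both acquire() and release() must be handed the caller's queue node; that per-caller node is exactly the extra parameter being discussed, whereas the lock on the linked page is claimed to expose only Enter() and Leave() with no such argument.

#include <atomic>

// Per-caller queue node; each thread spins only on its own node's flag,
// which is what gives MCS its scalability under contention.
struct McsNode {
    std::atomic<McsNode*> next{nullptr};
    std::atomic<bool>     ready{false};
};

class McsLock {
public:
    void acquire(McsNode* node) {
        node->next.store(nullptr, std::memory_order_relaxed);
        node->ready.store(false, std::memory_order_relaxed);
        // Append ourselves to the tail of the waiter queue.
        McsNode* prev = tail_.exchange(node, std::memory_order_acq_rel);
        if (prev != nullptr) {
            prev->next.store(node, std::memory_order_release);
            // Wait until our predecessor hands the lock to us.
            while (!node->ready.load(std::memory_order_acquire)) { /* spin */ }
        }
    }
    void release(McsNode* node) {
        McsNode* succ = node->next.load(std::memory_order_acquire);
        if (succ == nullptr) {
            // No known successor: try to swing the tail back to empty.
            McsNode* expected = node;
            if (tail_.compare_exchange_strong(expected, nullptr,
                                              std::memory_order_acq_rel))
                return;
            // A successor is in the middle of linking itself; wait for it.
            while ((succ = node->next.load(std::memory_order_acquire)) == nullptr) { /* spin */ }
        }
        succ->ready.store(true, std::memory_order_release);
    }
private:
    std::atomic<McsNode*> tail_{nullptr};
};

A caller typically keeps its McsNode on its own stack or in thread-local storage and must pass the same node to acquire() and release(); that bookkeeping is the burden a parameterless Enter()/Leave() interface is meant to avoid.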


I have also just enhanced it further, and I will post it soon.

I have also invented many other scalable algorithms and other algorithms.



Thank you,
Amine Moulay Ramdane.

Bonita Montero

Feb 17, 2022, 2:27:30 PM
On 17.02.2022 at 20:02, Amine Moulay Ramdane wrote:
> [...]
>
> I think I am smart, and I note that the x86 processor family features a fairly strict memory model that only allows loads to be reordered ahead of earlier independent stores, so I have just invented a fully lock-free bounded FIFO queue and a fully lock-free bounded LIFO stack that also run on the x86 processor family, and they are much more powerful than my following inventions:

Lock-free queues totally suck since they have to be polled.
The only lock-free structure that is useful is a lock-free stack
for pooling objects and handing back items to a thread, as with
modern memory allocators.

Amine Moulay Ramdane

Feb 17, 2022, 2:41:26 PM
Typically, polling a lock-free queue works best when the queue nearly always has entries, while a blocking queue works best when the queue is nearly always empty.

The downside of blocking queues is latency, typically on the order of 2-20 µs, due to kernel signaling. This can be mitigated by designing the system so that the work done by the consumer threads on each queued item takes much longer than this interval.

The downside of non-blocking queues is the waste of CPU and memory bandwidth while polling an empty queue. This can be mitigated by designing the system so that the queue is rarely empty.

As already hinted at by commenters, a non-blocking queue is a very bad idea on single-CPU systems.
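To make the latency trade-off concrete, here is a minimal blocking-queue sketch in C++ using a mutex and a condition variable (a generic illustration, names are mine). The wait()/notify_one() pair is where the microsecond-scale kernel signaling cost mentioned above comes from whenever the consumer has actually gone to sleep.

#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class BlockingQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(std::move(item));
        }
        // Waking a sleeping consumer goes through the kernel; this is the
        // main source of the latency discussed above.
        cv_.notify_one();
    }
    T pop() {
        std::unique_lock<std::mutex> lk(m_);
        // Sleeps (no CPU burned) until a producer signals that data arrived.
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop_front();
        return item;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<T> q_;
};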



Thank you,
Amine Moulay Ramdane.

Amine Moulay Ramdane

Feb 17, 2022, 2:44:59 PM
On Thursday, February 17, 2022 at 2:27:30 PM UTC-5, Bonita Montero wrote:
Hello,


Typically, polling a lock-free queue works best when the queue nearly always has entries, while a blocking queue works best when the queue is nearly always empty.

The downside of blocking queues is latency, typically on the order of 2-20 µs, due to kernel signaling. This can be mitigated by designing the system so that the work done by the consumer threads on each queued item takes much longer than this interval.

The downside of non-blocking queues is the waste of CPU and memory bandwidth while polling an empty queue. This can be mitigated by designing the system so that the queue is rarely empty.

A non-blocking queue is a very bad idea on single-CPU systems.

Amine Moulay Ramdane

Feb 17, 2022, 2:49:23 PM
Hello,


Here is more precision:

Typically, polling a lock-free queue works best when the queue nearly always has entries, while a blocking queue works best when the queue is nearly always empty.

A downside of blocking queues is latency, typically on the order of 2-20 µs, due to kernel signaling. This can be mitigated by designing the system so that the work done by the consumer threads on each queued item takes much longer than this interval.

The downside of non-blocking queues is the waste of CPU and memory bandwidth while polling an empty queue. This can be mitigated by designing the system so that the queue is rarely empty.


Bonita Montero

Feb 17, 2022, 3:23:22 PM
On 17.02.2022 at 20:41, Amine Moulay Ramdane wrote:
> On Thursday, February 17, 2022 at 2:27:30 PM UTC-5, Bonita Montero wrote:
>> [...]
>> Lock-free queues totally suck since they have to be polled.
>> The only lock-free structure that is useful is a lock-free stack
>> for pooling objects and handing back items to a thread, as with
>> modern memory allocators.
>
> Typically, polling a lock-free queue works best when the queue nearly always has entries, while a blocking queue works best when the queue is nearly always empty.

Forget it - no one wants to poll a queue. Having limited
polling with spinning for a while is the best of both worlds.
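The "spin for a while, then block" policy suggested here can be sketched roughly as follows (a generic C++ illustration with made-up names and tuning values; a production version would poll a lock-free fast path and park on an eventcount or futex rather than taking a mutex on every probe):

#include <condition_variable>
#include <deque>
#include <mutex>
#include <optional>

template <typename T>
class SpinThenBlockQueue {
public:
    void push(T item) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(std::move(item));
        }
        cv_.notify_one();   // only matters if a consumer is actually asleep
    }

    // Non-blocking attempt, used during the polling phase.
    std::optional<T> try_pop() {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return std::nullopt;
        T v = std::move(q_.front());
        q_.pop_front();
        return v;
    }

    // Phase 1: bounded polling, cheap while the queue nearly always has
    // entries. Phase 2: block, so an empty queue does not burn CPU.
    T pop(int spin_limit = 1000) {   // spin_limit is an assumed tuning knob
        for (int i = 0; i < spin_limit; ++i)
            if (auto v = try_pop()) return std::move(*v);
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop_front();
        return v;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<T> q_;
};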

Amine Moulay Ramdane

Feb 17, 2022, 4:28:21 PM
You have to understand, Bonita, that lock-free has advantages and disadvantages. Since a lock-free queue works best when the queue nearly always has entries, that is the kind of situation where it is best used. And as for your wanting limited polling with spinning for a while, I think you have to understand that lock-free comes with a number of advantages: it is not prone to deadlock, and it is preemption-tolerant, so it is good at convoy avoidance, etc. So you have to take those factors into account too in order to make a good choice depending on the situation or the context.

Bonita Montero

Feb 18, 2022, 1:31:31 AM
On 17.02.2022 at 22:28, Amine Moulay Ramdane wrote:
> On Thursday, February 17, 2022 at 3:23:22 PM UTC-5, Bonita Montero wrote:
>> [...]
>> Forget it - no one wants to poll a queue. Having limited
>> polling with spinning for a while is the best of both worlds.
>
> You have to understand, Bonita, that lock-free has advantages and disadvantages, and since a lock-free queue works best when the queue nearly always has entries, ...

This can't be guaranteed since the producer can be scheduled away.
No one uses lock-free queues, they're simply silly.

Amine Moulay Ramdane

Feb 18, 2022, 10:58:19 AM
I think you are not understanding: you are over-generalizing and saying
that it does not work, but that is not the right way to look at it,
since you have to know how to set the number of consumers and producers
so that it works. So when you are using lock-free queues you have to
calculate more precisely, as in real-time systems, so that they are efficient.

And you have to know that when you set the number of consumers and producers
correctly so as to be efficient, this "efficient" is not just about the throughput
of the lock-free queues: even if you set the number of consumers and producers
and it gives less throughput than a blocking queue, it can be that you
want to benefit from some or all of the characteristics of being lock-free.

Bonita Montero

Feb 18, 2022, 11:42:04 AM
On 18.02.2022 at 16:58, Amine Moulay Ramdane wrote:
> On Friday, February 18, 2022 at 1:31:31 AM UTC-5, Bonita Montero wrote:
>> [...]
>> This can't be guaranteed since the producer can be scheduled away.
>> No one uses lock-free queues, they're simply silly.
>
> I think you are not understanding: you are over-generalizing and saying
> that it does not work, but that is not the right way to look at it,
> since you have to know how to set the number of consumers and producers
> so that it works. ...

The number of consumers and producers doesn't matter. If a producer
can be scheduled away, a consumer could poll for a very long time.

Amine Moulay Ramdane

Feb 18, 2022, 11:56:31 AM
I don't agree with you: a non-blocking algorithm
is lock-free if there is guaranteed system-wide progress.
So if you set the number of consumers and producers correctly,
and you have more than one producer, then if
a producer is scheduled away, another producer will do the job quickly,
since system-wide progress is guaranteed by the lock-free algorithm,
so I advise you to read more about what lock-free algorithms are.
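The "guaranteed system-wide progress" property can be made concrete with a small example: in a CAS-based lock-free operation, the only way a thread's compare-and-swap can fail is that some other thread's operation succeeded in the meantime, so the system as a whole always advances even if an individual thread has to retry. Here is a minimal Treiber-style lock-free stack push in C++ (illustrative only; the pop side, and the ABA/memory-reclamation issues it raises, are deliberately omitted):

#include <atomic>

struct Node {
    int   value;
    Node* next;
};

class LockFreeStack {
public:
    void push(Node* n) {
        n->next = head_.load(std::memory_order_relaxed);
        // If this CAS fails, it is because another thread changed the head,
        // i.e. that thread completed its own operation: the system as a
        // whole made progress, which is exactly the lock-free guarantee.
        while (!head_.compare_exchange_weak(n->next, n,
                                            std::memory_order_release,
                                            std::memory_order_relaxed)) {
            // compare_exchange_weak reloads n->next with the current head
            // on failure, so we simply retry.
        }
    }
private:
    std::atomic<Node*> head_{nullptr};
};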

Amine Moulay Ramdane

unread,
Feb 18, 2022, 3:51:01 PM2/18/22
to
I think you are being idiotic: you are saying that one has
to use a hybrid of blocking and lock-free, but that is an idiotic
way of doing it, since you will lose some important advantages of lock-free.
That is why I am not idiotically throwing away lock-free queues as you are
doing: I still want to take advantage of the characteristics of
being lock-free. So you have to read my other extended answer below,
so that you notice that operating system schedulers such as those of Windows
and Linux give a big enough time slice to allow the next producer
to successfully put its item in the lock-free queue. So read my following
extended thoughts about it:

I say that even though a producer may be scheduled away,
you have to think about the time slice being big enough to allow
the next producer to successfully put its item in the lock-free queue.
As we know, each software thread gets a short turn,
called a time slice, to run on a hardware thread. When the time slice
runs out, the scheduler suspends the thread and allows the next
thread waiting its turn to run on the hardware thread. Time slicing
ensures that all software threads make some progress. As we also know,
there is overhead in saving the register state of a thread when
suspending it and restoring that state when resuming it;
you might be surprised how much state there is on modern processors.
However, schedulers typically allocate big enough time slices that
the save/restore overheads are insignificant, so this obvious overhead
is in fact not much of a concern.

So you are being idiotic, Bonita Montero: it is as if you were comparing
locks that use CPU-intensive spinning with mutexes that don't use it,
where we can use CPU-intensive spinning if the critical section is small.
By logical analogy it is the same with lock-free algorithms, since
lock-free algorithms require you to control the situation or
the context correctly; I mean that you have to know how to tune the number of
threads in the system, including the consumers and producers of the
lock-free queues, so as to be able to benefit from the advantages of being lock-free.