
About the following algorithms


Intelli2

Jan 12, 2018, 7:40:29 PM

Hello,


I have taken a look on the internet at some PhD papers about the scalable
FIFO priority queue from Nir Shavit, and I don't think it is efficient,
because it uses the same technique as the elimination array: the
consumers and the producers must meet for it to scale, and that's not as
efficient as my scalable FIFO queues. As for the scalable FIFO queue
with an elimination array from Nir Shavit, I think it has the same
problem: it uses an elimination array as a backoff after the CAS, and
that's not so scalable.

This is why my new scalable FIFO queues are really powerful and really
useful.

I have just deleted the zip files of my scalable FIFO queues here:

https://sites.google.com/site/aminer68/scalable-fifo-queues-for-c

and here:

https://sites.google.com/site/aminer68/scalable-fifo-queues-for-delphi-and-freepascal

because they were not optimized and they contained a bug; my new
versions, which work correctly, are coming soon.

And about my new algorithm...

I took a look on the internet at the thread-safe FIFO queue that uses two
condition variables, one for the empty condition and one for the full
condition, and I noticed that their algorithm does not work. So I have
come up with a new algorithm for a thread-safe FIFO queue that is
starvation-free and that works, and that uses two condition variables,
one for the empty condition and one for the full condition. After that I
enhanced it into a new scalable algorithm that is really powerful; it is
now scalable on multicore and NUMA systems. I have just finished
implementing it and it is really powerful.
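
For reference, the classic non-scalable form of such a queue, one mutex
plus two condition variables (one signalled when the queue stops being
empty, one when it stops being full), looks roughly like the sketch
below; this is only the textbook pattern, not the scalable algorithm
described above, and all the names are invented for illustration:

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>

// Classic bounded FIFO: producers block when full, consumers when empty.
template<class T>
class bounded_fifo
{
    std::mutex m_lock;
    std::condition_variable m_not_empty; // signalled after a push
    std::condition_variable m_not_full;  // signalled after a pop
    std::deque<T> m_items;
    std::size_t m_capacity;

public:
    explicit bounded_fifo(std::size_t capacity) : m_capacity(capacity) {}

    void push(T item)
    {
        std::unique_lock<std::mutex> guard(m_lock);
        m_not_full.wait(guard, [&] { return m_items.size() < m_capacity; });
        m_items.push_back(std::move(item));
        m_not_empty.notify_one();
    }

    T pop()
    {
        std::unique_lock<std::mutex> guard(m_lock);
        m_not_empty.wait(guard, [&] { return !m_items.empty(); });
        T item = std::move(m_items.front());
        m_items.pop_front();
        m_not_full.notify_one();
        return item;
    }
};

The single mutex is the obvious bottleneck here, which is presumably what
the scalable variants are meant to remove.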

So stay tuned !


Thank you,
Amine Moulay Ramdane.



Mr Flibble

Jan 12, 2018, 7:47:19 PM

Will you please stop making multiple posts with the exact same fucking
words; it is extremely fucking annoying.

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Bryne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates
a world that is so full of injustice and pain. That’s what I would say."

Intelli2

Jan 12, 2018, 8:04:54 PM

What are you talking about?

I have come up with a new algorithm, and its implementation, for
scalable reference counting.

Also, I have come up with new scalable algorithms for scalable FIFO
queues, and their implementations; they are fully scalable on NUMA and
multicore systems and they are suitable for safety-critical systems.

Also, I have come up with a superb scalable algorithm for a scalable
Threadpool, and its implementation, and also another one that is an
almost-scalable Threadpool algorithm because of work-stealing; it
supports three priorities: high, normal and low.
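
For illustration only, here is a minimal single-lock sketch of the
three-priority dispatch idea; it is not the scalable algorithm described
above, and every name in it is made up:

#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>

enum class priority { high, normal, low };

// One lock, three FIFO lanes; workers always drain higher lanes first.
class priority_task_queue
{
    std::mutex m_lock;
    std::condition_variable m_ready;
    std::deque<std::function<void()>> m_lanes[3];

public:
    void submit(priority p, std::function<void()> task)
    {
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_lanes[static_cast<int>(p)].push_back(std::move(task));
        }
        m_ready.notify_one();
    }

    // Blocks until a task is available, preferring high over normal over low.
    std::function<void()> take()
    {
        std::unique_lock<std::mutex> guard(m_lock);
        for (;;)
        {
            for (auto& lane : m_lanes)
            {
                if (!lane.empty())
                {
                    std::function<void()> task = std::move(lane.front());
                    lane.pop_front();
                    return task;
                }
            }
            m_ready.wait(guard);
        }
    }
};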

These are my new inventions.

Where have you done that, Flibble? You are talking stupidly here.


Thank you,
Amine Moulay Ramdane.




Mr Flibble

Jan 12, 2018, 8:41:52 PM

On 13/01/2018 06:04, Intelli2 wrote:
> On 1/12/2018 7:47 PM, Mr Flibble wrote:
>> On 13/01/2018 05:40, Intelli2 wrote:
[snip]
>>>
>>> Thank you,
>>> Amine Moulay Ramdne.
>>
>> Will you please stop making multiple posts with the exact same fucking
>> words; it is extremely fucking annoying.
>>
>
>
>
> What are you talking about?
>
> I have come up with a new algorithm, and its implementation, for
> scalable reference counting.
>
> Also, I have come up with new scalable algorithms for scalable FIFO
> queues, and their implementations; they are fully scalable on NUMA and
> multicore systems and they are suitable for safety-critical systems.
>
> Also, I have come up with a superb scalable algorithm for a scalable
> Threadpool, and its implementation, and also another one that is an
> almost-scalable Threadpool algorithm because of work-stealing; it
> supports three priorities: high, normal and low.
>
> These are my new inventions.
>
> Where have you done that, Flibble? You are talking stupidly here.

Do you not understand plain English? I said you are making MULTIPLE
POSTS TO NEWSGROUP WITH EXACT SAME WORDS. ONE POST IS ENOUGH.

BTW, you are not the first to create a LIFO stack-based allocator (so
you can't claim to have "invented it") and you won't be the last: I am
thinking of making one myself.
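
For readers who have not met one, a LIFO stack-based allocator is
essentially a bump allocator whose frees must come back in reverse
allocation order; a minimal single-threaded sketch (all names invented
here, not anyone's actual implementation) is:

#include <cassert>
#include <cstddef>

// Bump allocator over a fixed buffer; deallocations must be strictly LIFO.
class stack_allocator
{
    unsigned char* m_base;
    std::size_t m_size;
    std::size_t m_top = 0; // offset of the next free byte

public:
    stack_allocator(void* buffer, std::size_t size)
        : m_base(static_cast<unsigned char*>(buffer)), m_size(size) {}

    void* allocate(std::size_t bytes,
                   std::size_t align = alignof(std::max_align_t))
    {
        std::size_t aligned = (m_top + align - 1) & ~(align - 1);
        if (aligned + bytes > m_size) return nullptr; // out of space
        m_top = aligned + bytes;
        return m_base + aligned;
    }

    // Only the most recent live allocation may be released.
    void deallocate(void* p)
    {
        unsigned char* q = static_cast<unsigned char*>(p);
        assert(q >= m_base && q < m_base + m_top);
        m_top = static_cast<std::size_t>(q - m_base);
    }
};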

There is no download link for your amazing C++ thread pool class so I
have no way of telling how bad it is.

/Flibble

Chris M. Thomasson

Jan 13, 2018, 4:40:51 PM

Fwiw, check this out:

https://groups.google.com/d/topic/comp.lang.c++/yt27gw0cbyo/discussion

No padding and cache alignment here in the crude code, but the algorithm
is shown for spmc_queue::push and spmc_queue::flush ->

(trimmed from the original program) contained within the contents of the
link above: C++11 compatible...
__________________________
#include <atomic> // std::atomic, memory orders, std::atomic_thread_fence

#define mb_relaxed std::memory_order_relaxed
#define mb_consume std::memory_order_consume
#define mb_acquire std::memory_order_acquire
#define mb_release std::memory_order_release
#define mb_acq_rel std::memory_order_acq_rel
#define mb_seq_cst std::memory_order_seq_cst

#define mb_fence(mb) std::atomic_thread_fence(mb)


// simple single producer/multi-consumer queue
struct node
{
    node* m_next;
    node() : m_next(nullptr) {} // tidy...
};

struct spmc_queue
{
    std::atomic<node*> m_head;

    spmc_queue() : m_head(nullptr) {}

    // push a single node
    void push(node* const n)
    {
        node* head = m_head.load(mb_relaxed);

        for (;;)
        {
            n->m_next = head;
            mb_fence(mb_release);

            // failure updates head...
            if (m_head.compare_exchange_weak(head, n, mb_relaxed))
            {
                break;
            }
        }
    }

    // try to flush all of our nodes
    node* flush(node* const nhead = nullptr)
    {
        node* n = m_head.exchange(nhead, mb_relaxed);

        if (n)
        {
            mb_fence(mb_acquire);
        }

        return n;
    }
};
__________________________


Well, this can be a queue and a stack at the same time... The
non-nullptr result from spmc_queue::flush can be used as-is for LIFO.
However, if you reverse the list, a FIFO nature pops out of the queue as
a whole. Any thoughts?
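
One way to make the FIFO part concrete: a consumer that gets the whole
batch back from flush can reverse the detached chain in O(n) before
walking it, something like this (the helper name is mine, not part of
the code above):

// Reverse a detached LIFO chain so it can be walked in FIFO (push) order.
node* reverse_chain(node* head)
{
    node* prev = nullptr;
    while (head)
    {
        node* next = head->m_next;
        head->m_next = prev;
        prev = head;
        head = next;
    }
    return prev;
}

// e.g. node* batch = q.flush();
//      for (node* n = reverse_chain(batch); n; n = n->m_next) { /* consume */ }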

Chris M. Thomasson

Jan 13, 2018, 4:47:22 PM

On 1/13/2018 1:40 PM, Chris M. Thomasson wrote:
> On 1/12/2018 5:41 PM, Mr Flibble wrote:
>> On 13/01/2018 06:04, Intelli2 wrote:
>>> On 1/12/2018 7:47 PM, Mr Flibble wrote:
>>>> On 13/01/2018 05:40, Intelli2 wrote:
[...]
>> BTW, you are not the first to create a LIFO stack-based allocator (so
>> you can't claim to have "invented it") and you won't be the last: I am
>> thinking of making one myself.
>
> Fwiw, check this out:
>
> https://groups.google.com/d/topic/comp.lang.c++/yt27gw0cbyo/discussion
>
> No padding and cache alignment here in the crude code, but the algorithm
> is shown for spmc_queue::push and spmc_queue::flush ->
>
> (trimmed from the original program) contained within the contents of the
> link above: C++11 compatible...
> __________________________
> #define mb_relaxed std::memory_order_relaxed
> #define mb_consume std::memory_order_consume
> #define mb_acquire std::memory_order_acquire
> #define mb_release std::memory_order_release
> #define mb_acq_rel std::memory_order_acq_rel
> #define mb_seq_cst std::memory_order_seq_cst
>
> #define mb_fence(mb) std::atomic_thread_fence(mb)
>
>
> // simple single producer/multi-consumer queue
> struct node
> {
>      node* m_next;
>      node() : m_next(nullptr) {} // tidy...
> };

Each node should ideally be the size of a cache line. sizeof(struct
node) == CACHE_LINE.


> struct spmc_queue
> {
>      std::atomic<node*> m_head;

This head should be on its own cache line, and a pointer to spmc_queue
should be aligned on a cache line boundary in memory; a pointer to
spmc_queue::m_head should likewise be aligned on a cache line boundary.
This should be a damn POD! The constructor in struct node should be
removed in favor of an explicit initialization function.
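
A sketch of that layout, assuming a 64-byte line (C++17's
std::hardware_destructive_interference_size could replace the constant
where available); this is just one possible shape, not the original
program:

#include <atomic>
#include <cstddef>

constexpr std::size_t CACHE_LINE = 64; // assumption; query the real target

// Node padded out to exactly one cache line; plain aggregate, no constructor.
struct alignas(CACHE_LINE) node
{
    node* m_next;
    unsigned char m_pad[CACHE_LINE - sizeof(node*)];
};

static_assert(sizeof(node) == CACHE_LINE, "node must fill one cache line");

// Queue header on its own cache line; explicit init instead of a constructor.
struct alignas(CACHE_LINE) spmc_queue
{
    std::atomic<node*> m_head;
    unsigned char m_pad[CACHE_LINE - sizeof(std::atomic<node*>)];
};

inline void spmc_queue_init(spmc_queue& q)
{
    q.m_head.store(nullptr, std::memory_order_relaxed);
}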

[...]

Chris M. Thomasson

Jan 13, 2018, 5:45:07 PM

On 1/13/2018 1:40 PM, Chris M. Thomasson wrote:
[...]
> node* flush(node* const nhead = nullptr)

Still forgot to mention that if nhead is not equal to nullptr in flush,
the calling thread needs to ensure release membar semantics before
calling this function! If nhead is not nullptr, the calling code is
inserting a new LIFO object in place of the existing LIFO.

Also, if you call flush with a LIFO, make sure the damn last node has a
null next pointer!

Hmm... I think adding the nhead parameter to flush makes it too
complicated.

>      node* flush(node* const nhead = nullptr)
>      {
         // Release fence wrt a non-null nhead
         if (nhead != nullptr)
         {
             mb_fence(mb_release);
         }
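
A caller that honours the null-terminated-last-node rule might then look
roughly like this, assuming the node and spmc_queue definitions posted
earlier (the function name is invented):

// Swap a caller-built two-node LIFO in place of the queue's current contents.
void swap_in_new_lifo(spmc_queue& q, node& a, node& b)
{
    a.m_next = &b;
    b.m_next = nullptr;      // last node of the incoming LIFO must be null

    node* old = q.flush(&a); // with the fragment above, flush issues the release fence

    // 'old' is the previous LIFO (possibly nullptr), acquire-fenced and safe to walk
    for (node* n = old; n != nullptr; n = n->m_next)
    {
        // consume n...
    }
}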