
I correct some typos because I write fast, read again..


Sky89
Aug 4, 2018, 6:50:14 PM
Hello....


I correct some typos because I write fast, read again..

What about today's computing?

You have to know me better..

I am not thinking of becoming an expert in "coding"..

I am not like that..

Because I am an "inventor", and I have invented many scalable algorithms
and their implementations to do better HPC (high performance computing).
I am thinking about NUMA systems, and I am thinking about "scalability"
on manycores and multicores and on NUMA systems, etc. This is my way of
"thinking", and as a proof, look at my new scalable reference counting
with efficient support for weak references, here it is:

https://sites.google.com/site/scalable68/scalable-reference-counting-with-efficient-support-for-weak-references

As you have noticed, it is "fully" scalable reference counting, so this
is HPC (high performance computing), and I have implemented this
scalable algorithm that I have "invented" in Delphi and in the Delphi
mode of FreePascal, so as to make Delphi and FreePascal "much"
better, and notice with me that you will not find it in C++ or Rust.
This is my way of thinking: I am "inventing" scalable algorithms and
their implementations.
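
So that you can see what "weak references" mean in a reference counting
scheme, here is a minimal, textbook-style sketch in the Delphi mode of
FreePascal of the classic dual-counter idea (one strong count and one weak
count). It is not my scalable algorithm from the link above, it is only the
non-scalable baseline that such a design improves on, and all the names in
it (TControlBlock, StrongRelease, WeakRelease) are just illustrative:

{$mode delphi}
program DualCounterSketch;
// A minimal illustrative sketch, not the scalable algorithm described above.

type
  // Control block shared by strong and weak references.
  // StrongCount guards the payload; WeakCount guards the control block itself.
  PControlBlock = ^TControlBlock;
  TControlBlock = record
    StrongCount: LongInt;
    WeakCount: LongInt;   // includes one implicit reference held by the strong side
    Payload: Pointer;
  end;

function NewShared(APayload: Pointer): PControlBlock;
begin
  New(Result);
  Result^.StrongCount := 1;
  Result^.WeakCount := 1;   // the implicit weak reference owned by the strong side
  Result^.Payload := APayload;
end;

procedure StrongRelease(CB: PControlBlock);
begin
  if InterlockedDecrement(CB^.StrongCount) = 0 then
  begin
    // Last strong reference: destroy the payload, then drop the implicit weak ref.
    FreeMem(CB^.Payload);
    CB^.Payload := nil;
    if InterlockedDecrement(CB^.WeakCount) = 0 then
      Dispose(CB);
  end;
end;

procedure WeakRelease(CB: PControlBlock);
begin
  // The last weak reference frees only the control block, never the payload.
  if InterlockedDecrement(CB^.WeakCount) = 0 then
    Dispose(CB);
end;

// Upgrading a weak reference to a strong one needs a compare-and-swap loop
// that increments StrongCount only while it is still non-zero; omitted here.

var
  CB: PControlBlock;
begin
  CB := NewShared(GetMem(64));
  InterlockedIncrement(CB^.WeakCount);   // take one weak reference
  StrongRelease(CB);                     // the payload is freed here
  WeakRelease(CB);                       // the control block is freed here
  WriteLn('done');
end.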

And I said the following:

"I think that this Parallel ForEach and ParallelFor are like futulities,
because they don't bring "enough" high level abstraction to consider
them interesting, because i think my Threadpool with priorities that
scales very well is capable of easily emulating Parallel ForEach with
"priorities" and ParallelFor with "priorities" that scale very well, so
no need to implement Parallel ForEach or Parallel For."

But to be "nicer", i think i will soon implement both Parallel ForEach
with "priorities" that scales very well and ParallelFor with
"priorities" that scales very well using my Threadpool with priorities
that scales very well, and they will be integrated as methods with my
Threadpool with priorities that scales very well, so that you will be happy.
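
To make the emulation idea concrete, here is a very small sketch in the
Delphi mode of FreePascal of a Parallel For built by splitting the index
range into chunks and giving each chunk to a worker thread. It uses plain
TThread workers instead of my Threadpool with priorities, so it is only an
illustration of the idea of emulating a ParallelFor, not my implementation,
and it has no priorities and no work stealing:

{$mode delphi}
program ParallelForSketch;
// A minimal illustrative sketch, not the Threadpool with priorities described above.

uses
  {$IFDEF UNIX} cthreads, {$ENDIF}
  Classes, SysUtils;

type
  TForBody = procedure(Index: Integer);

  // One worker thread per chunk of the index range.
  TRangeWorker = class(TThread)
  private
    FLow, FHigh: Integer;
    FBody: TForBody;
  protected
    procedure Execute; override;
  public
    constructor Create(ALow, AHigh: Integer; ABody: TForBody);
  end;

constructor TRangeWorker.Create(ALow, AHigh: Integer; ABody: TForBody);
begin
  FLow := ALow;
  FHigh := AHigh;
  FBody := ABody;
  inherited Create(True);   // created suspended; started explicitly below
end;

procedure TRangeWorker.Execute;
var
  I: Integer;
begin
  for I := FLow to FHigh do
    FBody(I);
end;

// Emulate a Parallel For: split [AFrom, ATo] into one chunk per worker and wait.
procedure ParallelFor(AFrom, ATo, Workers: Integer; Body: TForBody);
var
  Threads: array of TRangeWorker;
  ChunkSize, I, ChunkLow, ChunkHigh: Integer;
begin
  SetLength(Threads, Workers);
  ChunkSize := (ATo - AFrom + Workers) div Workers;   // ceiling division
  for I := 0 to Workers - 1 do
  begin
    ChunkLow := AFrom + I * ChunkSize;
    ChunkHigh := ChunkLow + ChunkSize - 1;
    if ChunkHigh > ATo then ChunkHigh := ATo;
    Threads[I] := TRangeWorker.Create(ChunkLow, ChunkHigh, Body);
    Threads[I].Start;
  end;
  for I := 0 to Workers - 1 do
  begin
    Threads[I].WaitFor;
    Threads[I].Free;
  end;
end;

procedure PrintSquare(Index: Integer);
begin
  WriteLn(Index, ' squared is ', Index * Index);   // output order is not deterministic
end;

begin
  ParallelFor(1, 8, 4, PrintSquare);
end.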

So I will ask you: where will you find my Threadpool with priorities
that scales very well? And where will you find my Parallel ForEach and
Parallel For with priorities that scale very well?

You will not find them in C++ and you will not find them in Rust,
because I have "invented" them, because I am an "inventor", and this is
my way of thinking.

Here is my powerful Threadpool with priorities that scales very well;
read about it and download it from here:

https://sites.google.com/site/scalable68/an-efficient-threadpool-engine-with-priorities-that-scales-very-well

It is a very powerful Threadpool, because:

More precision about my efficient Threadpool that scales very well: my
Threadpool is much more scalable than the one of Microsoft. On the
workers' side I am using scalable counting networks to distribute over
the many queues or stacks, so it is scalable on the workers' side. On
the consumers' side I am also using lock striping to be able to scale
very well, so it is scalable on those parts too. For the other part,
which is work stealing, I am using scalable counting networks, so
globally it scales very well. And since work stealing is "rare", I think
that my efficient Threadpool that scales very well is really powerful;
it is much more optimized, the scalable counting networks eliminate
false sharing, and it works on Windows and Linux.
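
To give a rough picture of what "distributing over the many queues" and
"lock striping" mean, here is a small sketch in the Delphi mode of
FreePascal of a striped queue: several internal queues, each protected by
its own lock, and a fetch-and-add ticket that spreads the pushes and the
pops over the stripes. The ticket counter is only a much simpler stand-in
for the scalable counting networks that I am talking about above, and the
whole thing is an illustration of the striping idea, not my Threadpool:

{$mode delphi}
program StripedQueueSketch;
// A minimal illustrative sketch of lock striping, not the Threadpool described above.

uses
  {$IFDEF UNIX} cthreads, {$ENDIF}
  Classes, SysUtils, SyncObjs;

type
  // Several small queues, each protected by its own critical section (lock
  // striping). A fetch-and-add ticket spreads pushes and pops over the stripes.
  TStripedQueue = class
  private
    FLocks: array of TCriticalSection;
    FItems: array of TList;     // each TList holds pending work items (as pointers)
    FPushTicket: LongInt;
    FPopTicket: LongInt;
  public
    constructor Create(NumStripes: Integer);
    destructor Destroy; override;
    procedure Push(Item: Pointer);
    function Pop(out Item: Pointer): Boolean;
  end;

constructor TStripedQueue.Create(NumStripes: Integer);
var I: Integer;
begin
  inherited Create;
  SetLength(FLocks, NumStripes);
  SetLength(FItems, NumStripes);
  for I := 0 to NumStripes - 1 do
  begin
    FLocks[I] := TCriticalSection.Create; FItems[I] := TList.Create;
  end;
end;

destructor TStripedQueue.Destroy;
var I: Integer;
begin
  for I := 0 to High(FLocks) do
  begin
    FLocks[I].Free; FItems[I].Free;
  end;
  inherited Destroy;
end;

procedure TStripedQueue.Push(Item: Pointer);
var S: Integer;
begin
  // Pick a stripe with a fetch-and-add ticket, then lock only that stripe.
  S := (InterlockedIncrement(FPushTicket) and MaxInt) mod Length(FLocks);
  FLocks[S].Acquire;
  try
    FItems[S].Add(Item);
  finally
    FLocks[S].Release;
  end;
end;

function TStripedQueue.Pop(out Item: Pointer): Boolean;
var Start, S, I: Integer;
begin
  Result := False;
  Item := nil;
  // Start at the ticket position, then scan the other stripes once if needed.
  Start := (InterlockedIncrement(FPopTicket) and MaxInt) mod Length(FLocks);
  for I := 0 to High(FLocks) do
  begin
    S := (Start + I) mod Length(FLocks);
    FLocks[S].Acquire;
    try
      if FItems[S].Count > 0 then
      begin
        Item := FItems[S].First;
        FItems[S].Delete(0);
        Result := True;
        Exit;
      end;
    finally
      FLocks[S].Release;
    end;
  end;
end;

var
  Q: TStripedQueue; P: Pointer; I: Integer;
begin
  Q := TStripedQueue.Create(4);
  for I := 1 to 10 do Q.Push(Pointer(PtrInt(I)));
  while Q.Pop(P) do WriteLn('got item ', PtrInt(P));
  Q.Free;
end.
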
You have to understand my work..

I have invented many scalable algorithms and their implementations; here
are some of them that I have "invented":

1- Scalable Threadpools that are powerful

2- Scalable RWLocks of different sorts.

3- Scalable reference counting with efficient support for weak references

4- Scalable FIFO queues that are node-based and array-based.

5- My Scalable Varfiler

6- A scalable parallel implementation of a Conjugate Gradient Dense Linear
System Solver library that is NUMA-aware and cache-aware, and also a
scalable parallel implementation of a Conjugate Gradient Sparse Linear
System Solver library that is cache-aware (a small sequential sketch of the
Conjugate Gradient kernel appears below, after this list).

7- Scalable MLock that is a scalable Lock.

8- Scalable SeqlockX


And there are also "many" other scalable algorithms that I have "invented".
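
About the Conjugate Gradient solvers of item 6 above, here is a minimal
sequential, dense, textbook Conjugate Gradient in the Delphi mode of
FreePascal, so that you can see the numerical kernel that is involved. My
parallel, NUMA-aware and cache-aware libraries are of course much more than
this small sketch, and the 2x2 example matrix is just illustrative:

{$mode delphi}
program CGSketch;
// A minimal sequential textbook sketch, not the parallel NUMA-aware library.

uses
  SysUtils;

type
  TVec = array of Double;
  TMat = array of TVec;

function Dot(const A, B: TVec): Double;
var I: Integer;
begin
  Result := 0.0;
  for I := 0 to High(A) do
    Result := Result + A[I] * B[I];
end;

function MatVec(const A: TMat; const X: TVec): TVec;
var I, J: Integer;
begin
  SetLength(Result, Length(A));
  for I := 0 to High(A) do
  begin
    Result[I] := 0.0;
    for J := 0 to High(X) do
      Result[I] := Result[I] + A[I][J] * X[J];
  end;
end;

// Textbook Conjugate Gradient for a dense symmetric positive-definite matrix A.
procedure ConjGrad(const A: TMat; const B: TVec; var X: TVec;
                   Tol: Double; MaxIter: Integer);
var
  R, P, AP: TVec;
  RsOld, RsNew, Alpha: Double;
  I, It: Integer;
begin
  AP := MatVec(A, X);
  SetLength(R, Length(B));
  SetLength(P, Length(B));
  for I := 0 to High(B) do
  begin
    R[I] := B[I] - AP[I];   // initial residual r = b - A*x
    P[I] := R[I];
  end;
  RsOld := Dot(R, R);
  for It := 1 to MaxIter do
  begin
    AP := MatVec(A, P);
    Alpha := RsOld / Dot(P, AP);
    for I := 0 to High(B) do
    begin
      X[I] := X[I] + Alpha * P[I];    // advance the solution along the search direction
      R[I] := R[I] - Alpha * AP[I];   // update the residual
    end;
    RsNew := Dot(R, R);
    if Sqrt(RsNew) < Tol then Break;
    for I := 0 to High(B) do
      P[I] := R[I] + (RsNew / RsOld) * P[I];   // next conjugate search direction
    RsOld := RsNew;
  end;
end;

var
  A: TMat;
  B, X: TVec;
begin
  // Small SPD example: A = [[4,1],[1,3]], b = [1,2]; the solution is about
  // x = [0.0909, 0.6364].
  SetLength(A, 2); SetLength(A[0], 2); SetLength(A[1], 2);
  A[0][0] := 4; A[0][1] := 1; A[1][0] := 1; A[1][1] := 3;
  SetLength(B, 2); B[0] := 1; B[1] := 2;
  SetLength(X, 2); X[0] := 0; X[1] := 0;
  ConjGrad(A, B, X, 1e-10, 100);
  WriteLn(Format('x = [%.4f, %.4f]', [X[0], X[1]]));
end.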

You can find some of my scalable algorithms and their implementations in
Delphi and FreePascal and C++ on my website here:

https://sites.google.com/site/scalable68/

What I am doing by "inventing" many scalable algorithms and their
implementations is wanting to make "Delphi" much better and to make
FreePascal in the "Delphi" mode much better; my scalable algorithms
and their implementations are like HPC (high performance computing),
and as you have noticed I said also:

You will ask why I have invented many scalable algorithms and
their implementations? Because my work will also permit us to
"revolutionise" science and technology, because it is HPC (high
performance computing); this is why I will also sell some of my scalable
algorithms and their implementations to companies such as Google or
Microsoft or Embarcadero.

Also, HPC has revolutionised the way science is performed. Supercomputing
is needed for processing sophisticated computational models able to
simulate the cellular structure and functionalities of the brain. This
should enable us to better understand how our brain works and how we can
cope with diseases such as those linked to ageing. To understand more
about HPC, read more here:

https://ec.europa.eu/digital-single-market/en/blog/why-do-supercomputers-matter-your-everyday-life

So i will "sell" some of my scalable algorithms and there
implementations to Google or to Microsoft or to Embarcadero.

I will also enhance my Parallel archiver and my Parallel compression
Library, which are powerful and which work with both C++Builder and
Delphi, and perhaps sell them to Embarcadero, which sells Delphi and
C++Builder.

Also, I will soon implement a "scalable" Parallel For and a Parallel
ForEach..

This is why I said before that:

"I think that this Parallel ForEach and ParallelFor are like futulities,
because they don't bring "enough" high level abstraction to consider
them interesting, because i think my Threadpool with priorities that
scales very well is capable of easily emulating Parallel ForEach with
"priorities" and ParallelFor with "priorities" that scale very well, so
no need to implement Parallel ForEach or Parallel For."

But to be "nicer", i think i will soon implement both Parallel ForEach
with "priorities" that scales very well and ParallelFor with
"priorities" that scales very well using my Threadpool with priorities
that scales very well, and they will be integrated as methods with my
Threadpool with priorities that scales very well, so that you will be happy.

And my next step soon is also to make my Delphi, FreePascal, and C++
libraries portable to other CPUs like ARM, etc., because currently they
work on x86 AMD and Intel CPUs.

And another next step soon is to make my "scalable" RWLocks NUMA-aware
and efficient on NUMA.


Thank you,
Amine Moulay Ramdane.


Mr Flibble
Aug 4, 2018, 7:33:36 PM
On 05/08/2018 03:50, Sky89 wrote:
> Hello....
>
>
> I correct some typos because i write fast, read again..

Fuck off you egregious fuckwit of a cunt. AND. TAKE. YOUR. MEDICATION.

/Flibble

--
"Suppose it’s all true, and you walk up to the pearly gates, and are
confronted by God," Byrne asked on his show The Meaning of Life. "What
will Stephen Fry say to him, her, or it?"
"I’d say, bone cancer in children? What’s that about?" Fry replied.
"How dare you? How dare you create a world to which there is such misery
that is not our fault. It’s not right, it’s utterly, utterly evil."
"Why should I respect a capricious, mean-minded, stupid God who creates a
world that is so full of injustice and pain. That’s what I would say."