Hello,
As you may have noticed, I am a white arab, and I am a serious computer
programmer. I am also the "inventor" of many scalable algorithms and
their implementations. Here is my new invention:
I have just "invented" a highly scalable Parallel Sort library
for shared-memory architectures that supports a highly scalable
Mergesort, a highly scalable Quicksort, and a highly scalable
Heapsort. I think this new invention of mine is the "best" one for
shared-memory architectures because it also uses my other inventions:
a fully scalable FIFO queue and a fully scalable Threadpool. So I think
I will sell it to Google, Microsoft, Embarcadero, or other big
software companies.
I have also just invented the following Parallel Sort Library, which
supports a new and more efficient parallel merge algorithm that
improves the worst-case performance:
My previous algorithm for finding the median of the parallel merge in
my Parallel Sort Library, which you will find here on my website:
https://sites.google.com/site/scalable68/parallel-sort-library
is O(log(min(|A|,|B|))), where |A| is the size of A, since the binary
search is performed within the smaller array and a binary search is
O(log N). The new algorithm for finding the median of the parallel
merge of my Parallel Sort Library is O(log(|A|+|B|)), which is slightly
worse. With further optimizations the order was reduced to
O(log(2*min(|A|,|B|))), which is better, but it is up to 2X more work,
since both arrays may have to be searched. All of these algorithms are
logarithmic. Two binary searches were necessary to find an even split
that produces two equal or nearly equal halves. Luckily, this part of
the merge algorithm is not performance critical, so more effort can be
spent looking for a better split. This new algorithm in the parallel
merge balances the recursive binary tree of the divide-and-conquer and
improves the worst-case performance of parallel merge sort.
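To illustrate the idea of the split search, here is a minimal C++
sketch, written under my own assumptions and not taken from my
library's code, of a binary search that finds an even split of two
sorted arrays: it returns counts (i, j) with i + j = k such that the
first k elements of the merged output are exactly the first i elements
of A and the first j elements of B. The function name balancedSplit
and the use of int elements are only for illustration.

#include <vector>
#include <utility>
#include <algorithm>
#include <cstddef>

// Given two sorted arrays A and B and a target prefix length k
// (0 <= k <= |A|+|B|), find (i, j) with i + j == k such that the first
// k elements of the merged output are exactly A[0..i) and B[0..j).
// The loop halves the search interval on every iteration, so the cost
// is logarithmic in the interval size; for the worst case
// k = (|A|+|B|)/2 the interval is bounded by the smaller array.
std::pair<std::size_t, std::size_t>
balancedSplit(const std::vector<int>& A, const std::vector<int>& B,
              std::size_t k)
{
    std::size_t lo = (k > B.size()) ? k - B.size() : 0;  // smallest legal i
    std::size_t hi = std::min(k, A.size());              // largest legal i
    while (lo < hi) {
        std::size_t i = lo + (hi - lo) / 2;  // candidate count taken from A
        std::size_t j = k - i;               // count taken from B
        // Inside the loop i < |A| and j >= 1 always hold, so the
        // accesses below are in bounds.
        if (B[j - 1] > A[i])
            lo = i + 1;   // too few elements taken from A
        else
            hi = i;       // enough (or too many) taken from A
    }
    return {lo, k - lo};
}

For the parallel merge, choosing k = (|A|+|B|)/2 gives two equal or
nearly equal halves, so the two halves can then be merged recursively
in parallel, which is what balances the recursion tree of the
divide-and-conquer.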
So stay tuned!
Thank you,
Amine Moulay Ramdane.