You know I've got a thing for pseudo-random algorithms and sorting algorithms. So for a long time I wondered: "what about an algorithm that is optimized to sort an almost sorted list?"
I mean - it happens often. You just made some minute changes and the whole thing is turned upside down by some overly aggressive algorithm.
So I looked at the "in place merge sort", which (in theory) merges two sorted datasets with little overhead. It keeps track of three addresses:
- the address of the left most element of the list to merge with;
- the address of the left most element of the list to merge;
- the address of the right most element of the list to merge.
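To make the bookkeeping concrete, here's how those three addresses map onto a small example. This is a hypothetical Python sketch - the variable names are mine, not from the 4tH source:

```python
# Two adjacent sorted runs live side by side in one list: 1 2 8 9 | 6 7
# (hypothetical illustration; names are mine, not from the 4tH code)
data = [1, 2, 8, 9, 6, 7]

merge_with = 0   # left-most address of the list to merge with (1 2 8 9)
merge_from = 4   # left-most address of the list to merge      (6 7)
merge_last = 5   # right-most address of the list to merge

left_run = data[merge_with:merge_from]       # [1, 2, 8, 9]
right_run = data[merge_from:merge_last + 1]  # [6, 7]

# Both runs are sorted on their own; only the whole list isn't.
assert left_run == sorted(left_run)
assert right_run == sorted(right_run)
```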
Take the list 128967 (addresses 0-5): in this case, that would be 0, 4 and 5.
The comparisons would run like: 1-6, 1-7, 2-6, 2-7, 8-6 => SWITCH!!
Now, the list turns to 126987 - which would go haywire in a minute, because the algorithm expects BOTH sides to be sorted. So, what we have to end up with is:
126978 (data)
012345 (address)
Of course, 9-7 triggers another switch, which ultimately results in 126789. Job done. Now, for one reason or another, I couldn't get the standard "in place merge sort" to work, so I designed my own. In one version, plain 4tH code moves the data in the rightmost part; in the slightly optimized version, SMOVE does the moving. However, the difference is not dramatic.
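The switch-and-move idea can be sketched roughly like this - a minimal Python sketch of the principle, not the actual 4tH code: whenever a switch happens, the displaced element is slid into its sorted place in the rightmost part, which is (as I understand it) the job SMOVE does in the optimized version:

```python
def in_place_merge(a, left, mid, right):
    """Merge two adjacent sorted runs a[left..mid] and a[mid+1..right]
    in place. Sketch of the idea from the post, not the 4tH original:
    when the head of the left run exceeds the head of the right run,
    switch them, then move the displaced element into its sorted
    position inside the rightmost part so that run stays sorted."""
    i = left                     # current position in the left run
    while i <= mid:
        if a[i] > a[mid + 1]:    # compare with smallest of the right run
            a[i], a[mid + 1] = a[mid + 1], a[i]   # SWITCH!!
            k = mid + 1          # slide the displaced element rightwards
            while k < right and a[k] > a[k + 1]:
                a[k], a[k + 1] = a[k + 1], a[k]
                k += 1
        i += 1

data = [1, 2, 8, 9, 6, 7]
in_place_merge(data, 0, 3, 5)
print(data)   # [1, 2, 6, 7, 8, 9]
```

Tracing it on 128967 reproduces the states from the post: after the 8-6 switch and the move, the list reads 126978; the 9-7 switch then yields 126789.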
As usual, code in SVN.
BTW, I'm benchmarking the slowest sorting routines. The comparisons are more accurate now - it doesn't mean they've actually become slower. It means I'm less lazy ;-)
Hans Bezemer