I find it hard to see how one could like merge sort without liking quick sort. They're kind of inverses of each other: merge sort splits the list and then orders the elements during recombination, while quick sort orders the elements while splitting the list. And if the algorithm is done out of place, like most merge sorts, it's easy to get fairly reliable n·log(n) time and a stable sort. With quick sort, bad cases are always possible in theory, and that could be a very bad problem if one of them cropped up while processing important data and a result expected in a few days had to wait a few billion years. So merge sort is wise.
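To make the symmetry concrete, here is a toy out-of-place version of each. The names merge_sort and quick_sort are just for this illustration; they aren't the versions I'm timing below.

def merge_sort(xs):
    # split first, do the ordering while recombining
    if len(xs) < 2:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

def quick_sort(xs):
    # do the ordering while splitting; recombining is just concatenation
    if len(xs) < 2:
        return xs
    pivot = xs[0]
    smaller = [x for x in xs[1:] if x < pivot]
    larger = [x for x in xs[1:] if x >= pivot]
    return quick_sort(smaller) + [pivot] + quick_sort(larger)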
Still, despite my best efforts, my merge sort is taking longer (almost twice as long as lame) for sorted or reverse-sorted lists. lame is a halfway decent implementation of quick sort.
I think if I had a better idea of how to use extend to combine lists, I might be able to make a faster merge (see the sketch after my code below). I think Python's built-in sort (Timsort) is merge-based, and it's about five times faster than lame in most cases.
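For reference, timings like the ones above could be collected with something along these lines (a sketch: it assumes lame sorts a list in place with the same signature as mergesort below, and bench is just a helper name for this example):

import random, timeit

def bench(sort_fn, data, repeats=5):
    # sort a fresh copy each run so every call sees identical input,
    # and keep the best of a few runs
    return min(timeit.repeat(lambda: sort_fn(list(data)), number=1, repeat=repeats))

data = [random.random() for _ in xrange(100000)]
print 'mergesort', bench(mergesort, data)
print 'lame     ', bench(lame, data)
print 'built-in ', bench(list.sort, data)
# use sorted(data) or sorted(data, reverse=True) for the sorted and
# reverse-sorted cases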
This is what my merge looks like.
def mergesort(L):
    # Bottom-up merge sort: sorts L in place by seeding a list of
    # sorted runs and repeatedly merging neighbouring runs.
    if len(L) < 2: return
    # Seed with sorted pairs; an unpaired last element becomes a run
    # of length one when len(L) is odd.
    aList = [[L[i], val] if val > L[i] else [val, L[i]]
             for i, val in enumerate(L[1:]) if i % 2 == 0
             ] + ([[L[-1]]] if len(L) % 2 == 1 else [])
    while len(aList) > 1:
        count, cmax = 0, len(aList) - 1
        while count < cmax:
            aList[count >> 1] = assembler(aList[count], aList[count + 1])
            count += 2
        if count == cmax:
            # Odd run out: carry the unpaired run into the next round.
            aList[count >> 1] = aList[cmax]
            aList = aList[:(count >> 1) + 1]
        else:
            aList = aList[:count >> 1]
    # Copy the single remaining run back into L.
    for i, val in enumerate(aList[0]):
        L[i] = val
# assembler: list, list -> list
# Merge two already-sorted lists into one sorted list.
def assembler(list1, list2):
    inList1, inList2, len1, len2 = 0, 0, len(list1), len(list2)
    outList = []
    for i in xrange(len1 + len2):
        if inList1 < len1:
            if inList2 >= len2:
                # list2 is exhausted: append the rest of list1.
                list(outList.append(val) for val in list1[inList1:])
                break
            if list1[inList1] <= list2[inList2]:
                outList.append(list1[inList1])
                inList1 += 1
                continue
        else:
            # list1 is exhausted: append the rest of list2.
            list(outList.append(val) for val in list2[inList2:])
            break
        outList.append(list2[inList2])
        inList2 += 1
    return outList
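For the extend idea, the merge step could look something like this (assembler2 is just an illustrative name, and I haven't measured whether it actually beats the version above):

def assembler2(list1, list2):
    # Merge two sorted lists; once one side runs out, extend with the
    # remainder of the other instead of appending element by element.
    out = []
    i, j, len1, len2 = 0, 0, len(list1), len(list2)
    while i < len1 and j < len2:
        if list1[i] <= list2[j]:
            out.append(list1[i])
            i += 1
        else:
            out.append(list2[j])
            j += 1
    out.extend(list1[i:])  # one of these two slices is empty
    out.extend(list2[j:])
    return out

The <= keeps the merge stable, and extend copies the leftover tail in one call rather than one append per element, which is presumably where any saving would come from.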