Selection-Sort in C

arnuld

Sep 29, 2009, 6:42:15 AM
Here is the C implementation of Selection-Sort. Any advice for
improvement will be appreciated:


/* A C program to sort an array of characters using bubble sort algorithm
*
* VERSION 0.0
*
*/


#include <stdio.h>
#include <stdlib.h>
#include <string.h>


void sort_bubble(char* );

int main(void)
{
    char arrc[] = "heathfield";

    printf("array(unsorted): %s\n", arrc);
    sort_bubble(arrc);
    printf("array(sorted): %s\n", arrc);

    return 0;
}


void sort_bubble(char* c)
{
    size_t i, j;
    size_t s = strlen(c);

    for(i = 0; i < s; ++i)
    {
        for(j = s-1; j != i; --j)
        {
            if( c[j] < c[j-1] )
            {
                char temp = c[j];
                c[j] = c[j - 1];
                c[j - 1] = temp;
            }
        }
    }
}

========================== OUTPUT ==============================
[arnuld@dune programs]$ gcc -ansi -pedantic -Wall -Wextra selection-sort.c
[arnuld@dune programs]$ ./a.out
Original Array = arnuld
Sorted Array = adlnru
[arnuld@dune programs]$

--
www.lispmachine.wordpress.com
my email is @ the above blog.

Edwin van den Oetelaar

Sep 29, 2009, 7:04:11 AM
arnuld wrote:
> Here is the C implementation of Selection-Sort. Any advice for
> improvement will be appreciated:
>
>
> /* A C program to sort an array of characters using bubble sort algorithm
> *
> * VERSION 0.0
> *
> */
>
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
>
>
> void sort_bubble(char* );
>
> int main(void)
> {
> char arrc[] = "heathfield";
>
> printf("array(unsorted): %s\n", arrc);
> sort_bubble(arrc);
> printf("array(sorted): %s\n", arrc);
>
> return 0;
> }

If this is your source how can it give this answer. Do you want bubble-sort or selection-sort ?

> [arnuld@dune programs]$ ./a.out
> Original Array = arnuld
> Sorted Array = adlnru

this hurts my head.

arnuld

Sep 29, 2009, 8:49:33 AM
> On Tue, 29 Sep 2009 13:04:11 +0200, Edwin van den Oetelaar wrote:

> If this is your source how can it give this answer. Do you want
> bubble-sort or selection-sort ?

I really don't have any idea. I just look at the given O(n) notation and
decide.

> this hurts my head.

Why ?

bartc

Sep 29, 2009, 9:18:26 AM

"arnuld" <sun...@invalid.address> wrote in message
news:pan.2009.09...@invalid.address...

>> On Tue, 29 Sep 2009 13:04:11 +0200, Edwin van den Oetelaar wrote:
>
>> If this is your source how can it give this answer. Do you want
>> bubble-sort or selection-sort ?
>
> I really don't have any idea. I just look at the given O(n) notation and
> decide.

You posted code which you said was for selection sort, but which looked
remarkably like bubble sort, even to the extent of being called
"bubble-sort" in the code.

--
Bartc


Mark Bluemel

Sep 29, 2009, 10:13:45 AM
On 29 Sep, 11:42, arnuld <sunr...@invalid.address> wrote:
[Reordered for clarity (sort of)]


> Here is the C implementation of Selection-Sort.

...


> /* A C program to sort an array of characters using bubble sort algorithm

bubble != selection

> void sort_bubble(char* c)
> {
> size_t i, j;
> size_t s = strlen(c);
>
> for(i = 0; i < s; ++i)
> {
> for(j = s-1; j != i; --j)
> {
> if( c[j] < c[j-1] )
> {
> char temp = c[j];
> c[j] = c[j - 1];
> c[j - 1] = temp;
> }
> }
> }
>
> }

That looks like a bubble sort to me...


>   char arrc[] = "heathfield";
>
>   printf("array(unsorted): %s\n", arrc);
>   sort_bubble(arrc);
>   printf("array(sorted):  %s\n", arrc);

> ========================== OUTPUT ==============================

> Original Array = arnuld
> Sorted Array   = adlnru

How could the code you posted produce this output?

Eric Sosman

Sep 29, 2009, 8:37:39 PM
arnuld wrote:
>> On Tue, 29 Sep 2009 13:04:11 +0200, Edwin van den Oetelaar wrote:
>
>> If this is your source how can it give this answer. Do you want
>> bubble-sort or selection-sort ?
>
> I really don't have any idea. I just look at the given O(n) notation and
> decide.
>
>
>
>> this hurts my head.
>
> Why ?

Maybe because bubble sort is the worst[*][**] sorting
algorithm known to Man?

[*] Not true, of course: Any algorithm can be disimproved,
and then the worse algorithm can be further disimproved, and
so on ad infinitum. But bubble sort is surely the worst "basic"
sort algorithm known.[***]

[**] Much depends on the model, of course. Knuth mentions
an idealized machine imagined by H.B. Demuth for which bubble
sort is in fact optimal. He also shows that given sufficient
parallelism of the right kind, bubble sort is equivalent to
straight insertion. But for C's abstract machine ... No: bubble
sort is The Worst.

[***] Yes, despite my own ferreting out of an O(N*N*logN)
sort in actual production code. The sort algorithm itself was
fine, O(N*logN), but the careless programmer had handed it an
O(N) comparison function ...

--
Eric Sosman
eso...@ieee-dot-org.invalid

arnuld

Sep 30, 2009, 1:14:15 AM
> On Tue, 29 Sep 2009 13:18:26 +0000, bartc wrote:

> You posted code which you said was for selection sort, but which looked
> remarkably like bubble sort, even to the extent of being called
> "bubble-sort" in the code.

My apologies, here is the actual code I wrote:


/* A C Implementation of Selection Sort.
* Section 8.2 of "Data Structures and Algorithms" by Aho, Hopcroft and
* Ullman
*
* VERSION 0.0.
*
*/


#include <stdio.h>
#include <string.h>

void sort_selection(char [], const size_t);


int main(void)
{
    char arrc[] = "arnuld";

    const size_t arr_size = strlen(arrc);

    printf("Original Array = %s\n", arrc);
    sort_selection(arrc, arr_size);
    printf("Sorted Array = %s\n", arrc);


    return 0;
}


void sort_selection(char c[], const size_t sz)
{
    size_t i, j;


    if(0 == sz)
    {
        printf("Nothing to sort!\n");
        return;
    }
    else if(1 == sz)
    {
        printf("There is no need to sort a single element\n");
        return;
    }

    for(i = 0; i != sz; ++i)
    {
        for(j = i + 1; j != sz; ++j )
        {
            if(c[j] < c[i])
            {
                char temp = c[i];
                c[i] = c[j];
                c[j] = temp;
            }
        }
    }
}

James Kuyper

Sep 30, 2009, 7:32:29 AM
Eric Sosman wrote:
...

> so on ad infinitum. But bubble sort is surely the worst "basic"
> sort algorithm known.[***]

Bubble sort is generally immensely faster than bogosort. ;-} Unless
you're able to implement the quantum version of bogosort, which is the
fastest sorting method I've ever heard of. :-)

user923005

Sep 30, 2009, 2:48:50 PM
On Sep 29, 10:14 pm, arnuld <sunr...@invalid.address> wrote:
> > On Tue, 29 Sep 2009 13:18:26 +0000, bartc wrote:
> > You posted code which you said was for selection sort, but which looked
> > remarkably like bubble sort, even to the extent of being called
> > "bubble-sort" in the code.
>
> My apologies, here is the actual code I wrote:
>
> /* A C Implementation of Selection Sort.
>  * Section 8.2 of "Data Structures and Algorithms" by Aho, Hopcroft and
> Ullman
>  *
>  * VERSION 0.0.
>  *
>  */
>
> #include <stdio.h>
> #include <string.h>
>
> void sort_selection(char [], const size_t);
>
> int main(void)
> {
>   char arrc[] = "arnuld";
>
>   const size_t arr_size = strlen(arrc);
>
>   printf("Original Array = %s\n", arrc);
>   sort_selection(arrc, arr_size);
>   printf("Sorted Array   = %s\n", arrc);
>
>   return 0;
>
> }


This sort interface is very bad as the only thing it can sort is a
zero terminated character string:


> void sort_selection(char c[], const size_t sz)
> {
>   size_t i, j;
>
>   if(0 == sz)
>     {

This is a bug -- there is nothing wrong with sorting zero items:


>       printf("Nothing to sort!\n");
>       return;
>     }
>   else if(1 == sz)
>     {

This is a bug -- there is nothing wrong with sorting a single element:


>       printf("There is no need to sort a single element\n");
>       return;
>     }
>
>   for(i = 0; i != sz; ++i)
>     {
>       for(j = i + 1; j != sz; ++j )
>         {

Since you are studying O(f(N)), why not count the comparisons?


>           if(c[j] < c[i])
>             {

Since you are studying O(f(N)), why not count the exchanges?


>               char temp = c[i];
>               c[i] = c[j];
>               c[j] = temp;
>             }
>         }
>     }
>
> }
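The two counting suggestions above can be sketched as follows (an editorial illustration, not code from the thread; the counter names are invented). Two counters are added to the posted routine so the O(f(N)) behaviour can be observed directly instead of inferred:

```c
#include <stddef.h>

/* Instrumented copy of the posted selection-style sort (editorial
 * sketch): n_compares and n_exchanges record how much work the
 * double loop actually performs. */
unsigned long n_compares, n_exchanges;

void sort_selection_counted(char c[], size_t sz)
{
    size_t i, j;

    for (i = 0; i != sz; ++i) {
        for (j = i + 1; j != sz; ++j) {
            ++n_compares;
            if (c[j] < c[i]) {          /* same comparison as the original */
                char temp = c[i];
                c[i] = c[j];
                c[j] = temp;
                ++n_exchanges;
            }
        }
    }
}
```

Sorting "arnuld" this way performs 15 comparisons, i.e. N*(N-1)/2 for N = 6, regardless of the input order; only the exchange count depends on the data.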

If you called the qsort() library interface in your code and a
customer passed a list with a single item in it using your software,
and the library printed messages on the console complaining about the
item count, how would you like it?

It's not a bug to call a sort routine with zero or one items. It *is*
a bug to write messages to standard output complaining about it.

If you write software, general purpose routines such as sorting
interfaces should be generic if at all possible. Your interfaces are
designed to sort a single thing: a character string. Interfaces
designed to sort only a list of integers are just as bad (and you will
see that a lot in C books) but that does not mean that you should copy
their bad methodology.

Some kinds of things (sorting, searching, summation, averaging,
finding partitions, finding medians...) are generic operations that
will be useful on any kind of data. If you design your programs so
that any sort of list can be used, then your software will not have to
be rewritten every time you need to do something with it. So I
suggest that an attempt at being more generic would be valuable.

Now, if the only thing you care about with these routines is to
understand O(f(N)) behavior, then you won't need to fix it for that
purpose. But practice makes perfect only if we practice what is
perfect. If we practice bad ideas it will lead to bad habits.

IMO-YMMV

Mark

Sep 30, 2009, 11:52:04 PM
user923005 wrote:
> This sort interface is very bad as the only thing it can sort is a
> zero terminated character string:


If a string is not zero terminated, it probably would not be a string as C
understands it?

--
Mark

Squeamizh

Oct 1, 2009, 1:58:47 AM
On Sep 30, 11:48 am, user923005 <dcor...@connx.com> wrote:

> This sort interface is very bad as the only thing it can sort is a
> zero terminated character string:
> void sort_selection(char c[], const size_t sz)

The char array does not have to be null-terminated.

Richard Heathfield

Oct 1, 2009, 2:47:49 AM

Right. Still, a sort interface that can only sort a string isn't much
use to anyone.

--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within

user923005

Oct 1, 2009, 4:39:50 PM

It does with his driver:

user923005

Oct 1, 2009, 4:40:23 PM
On Sep 30, 11:47 pm, Richard Heathfield <r...@see.sig.invalid> wrote:

> In <ha190j$e6...@aioe.org>, Mark wrote:
> > user923005 wrote:
> >> This sort interface is very bad as the only thing it can sort is a
> >> zero terminated character string:
>
> > If a string is not zero terminated, it probably would not be a
> > string as C understands it?
>
> Right. Still, a sort interface that can only sort a string isn't much
> use to anyone.

However, if the only purpose is O(f(N)) calculations, then it is
adequate.

Tim Rentsch

Oct 2, 2009, 8:48:17 PM
Eric Sosman <eso...@ieee-dot-org.invalid> writes:

> arnuld wrote:
>>> On Tue, 29 Sep 2009 13:04:11 +0200, Edwin van den Oetelaar wrote:
>>
>>> If this is your source how can it give this answer. Do you want
>>> bubble-sort or selection-sort ?
>>
>> I really don't have any idea. I just look at the given O(n) notation
>> and decide.
>>
>>
>>
>>> this hurts my head.
>>
>> Why ?
>
> Maybe because bubble sort is the worst[*][**] sorting
> algorithm known to Man?

Bubble sort has these advantages. It is easy to understand,
easy to program, easy to get right, and relies solely on
local operations. It is also stable.

Selection sort is easy to understand, and not too bad to
program, but it is not stable.

Insertion sort is stable and easy to understand, but
programming it takes just a little more care than
bubble sort. The natural desire to improve the
search for insertion points (by using binary search)
makes it more complicated.

Quick sort is easy to understand, at least conceptually,
but programming it takes a certain amount of care,
especially if an effort is made to minimize the chances
for N**2 behavior or use of linear stack space. It
also isn't stable.

Heap sort isn't easy to understand, isn't nearly so easy
to program, is easy to get wrong, and /always/ is doing
non-local operations; it also isn't stable. Heap sort
is a great sorting algorithm. :)
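For reference, a minimal heap sort looks something like this (an editorial sketch over an int array, not code from the thread): build a max-heap with sift-down, then repeatedly swap the root to the end. The worst case stays at O(N*logN), though every sift is exactly the kind of non-local operation described above.

```c
#include <stddef.h>

/* Restore the max-heap property below a[start], treating a[0..end-1]
 * as an implicit binary heap (children of i are 2i+1 and 2i+2). */
static void sift_down(int a[], size_t start, size_t end)
{
    size_t root = start;

    while (2 * root + 1 < end) {
        size_t child = 2 * root + 1;

        if (child + 1 < end && a[child] < a[child + 1])
            ++child;                    /* pick the larger child */
        if (a[root] >= a[child])
            return;                     /* heap property already holds */
        { int t = a[root]; a[root] = a[child]; a[child] = t; }
        root = child;
    }
}

void sort_heap_int(int a[], size_t n)
{
    size_t i;

    for (i = n / 2; i > 0; --i)         /* heapify: sift from last parent up */
        sift_down(a, i - 1, n);
    for (i = n; i > 1; --i) {           /* repeatedly move the max to the end */
        int t = a[0]; a[0] = a[i - 1]; a[i - 1] = t;
        sift_down(a, 0, i - 1);
    }
}
```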

Merge sort is usually dismissed because of needing extra
space. Yes it's possible to do a stable merge sort on an
array and use only O(1) extra space, but programming that is
horrendous.
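For completeness, the array merge sort being dismissed here looks like this (an editorial sketch, ints for simplicity): stable and O(N*logN) in the worst case, but it needs an N-element scratch buffer, which is exactly the extra-space complaint.

```c
#include <stdlib.h>
#include <string.h>

/* Top-down merge sort of a[lo..hi-1] using tmp as scratch space. */
static void merge_rec(int a[], int tmp[], size_t lo, size_t hi)
{
    size_t mid, i, j, k;

    if (hi - lo < 2)
        return;
    mid = lo + (hi - lo) / 2;
    merge_rec(a, tmp, lo, mid);
    merge_rec(a, tmp, mid, hi);
    for (i = lo, j = mid, k = lo; k < hi; ++k) {
        /* <= keeps equal elements in their original order: stability */
        if (j >= hi || (i < mid && a[i] <= a[j]))
            tmp[k] = a[i++];
        else
            tmp[k] = a[j++];
    }
    memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
}

/* Returns 0 on success, -1 if the scratch buffer can't be allocated. */
int sort_merge_int(int a[], size_t n)
{
    int *tmp;

    if (n < 2)
        return 0;
    tmp = malloc(n * sizeof *tmp);      /* the O(N) extra space */
    if (tmp == NULL)
        return -1;
    merge_rec(a, tmp, 0, n);
    free(tmp);
    return 0;
}
```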


> [***] Yes, despite my own ferreting out of an O(N*N*logN)
> sort in actual production code. The sort algorithm itself was
> fine, O(N*logN), but the careless programmer had handed it an
> O(N) comparison function ...

Even that's not as bad as the O(N**3) sort algorithm used
many years ago in a certain operating system's "help"
command...

Tim Rentsch

Oct 2, 2009, 9:02:12 PM
James Kuyper <james...@verizon.net> writes:

Even quantum-bogosort can't keep up with Intelligent Design Sort.

(IDS makes no changes to its input, on the theory that
"they are already sorted according to a higher purpose.")

Keith Thompson

Oct 2, 2009, 9:18:49 PM

Similar to Miracle sort.

Check whether the array is already sorted.
If it is, you're done.
If it's not, wait a while and check again, and hope for cosmic rays to
modify the array in memory.

--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"

Tim Rentsch

Oct 3, 2009, 6:51:32 AM
Keith Thompson <ks...@mib.org> writes:

> Tim Rentsch <t...@alumni.caltech.edu> writes:
>> James Kuyper <james...@verizon.net> writes:
>>
>>> Eric Sosman wrote:
>>> ...
>>>> so on ad infinitum. But bubble sort is surely the worst "basic"
>>>> sort algorithm known.[***]
>>>
>>> Bubble sort is generally immensely faster than bogosort. ;-} Unless
>>> you're able to implement the quantum version of bogosort, which is the
>>> fastest sorting method I've ever heard of. :-)
>>
>> Even quantum-bogosort can't keep up with Intelligent Design Sort.
>>
>> (IDS makes no changes to its input, on the theory that
>> "they are already sorted according to a higher purpose.")
>
> Similar to Miracle sort.
>
> Check whether the array is already sorted.
> If it is, you're done.
> If it's not, wait a while and check again, and hope for cosmic rays to
> modify the array in memory.

I like it! A sorting algorithm that's O(n),
if only you have faith...

Richard Harter

Oct 3, 2009, 10:46:48 AM

Many years ago there was a crackpot posting here who claimed that
sorting was O(n). It turns out that if you think about it
properly, it is.

Richard Harter, c...@tiac.net
http://home.tiac.net/~cri, http://www.varinoma.com
Kafka wasn't an author;
Kafka was a prophet!

Keith Thompson

Oct 3, 2009, 12:37:33 PM
c...@tiac.net (Richard Harter) writes:
[...]

> Many years ago there was a crackpot posting here who claimed that
> sorting was O(n). It turns out that if you think about it
> properly, it is.

Sorting is O(1) if, rather than counting comparisons or swaps,
you count calls to qsort().

Richard Harter

Oct 3, 2009, 1:08:52 PM
On Sat, 03 Oct 2009 09:37:33 -0700, Keith Thompson
<ks...@mib.org> wrote:

>c...@tiac.net (Richard Harter) writes:
>[...]
>> Many years ago there was a crackpot posting here who claimed that
>> sorting was O(n). It turns out that if you think about it
>> properly, it is.
>
>Sorting is O(1) if, rather than counting comparisons or swaps,
>you count calls to qsort().
>

True, but no cigar. A more precise statement is that the time
required for sorting is O(n) - omega(n) if you prefer.

Eric Sosman

Oct 3, 2009, 1:31:17 PM
Keith Thompson wrote:
> c...@tiac.net (Richard Harter) writes:
> [...]
>> Many years ago there was a crackpot posting here who claimed that
>> sorting was O(n). It turns out that if you think about it
>> properly, it is.
>
> Sorting is O(1) if, rather than counting comparisons or swaps,
> you count calls to qsort().

Sorting is O(1) no matter what you count, provided the
sort runs on a machine with finite resources and that it
terminates.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Keith Thompson

Oct 3, 2009, 1:55:15 PM
c...@tiac.net (Richard Harter) writes:
> On Sat, 03 Oct 2009 09:37:33 -0700, Keith Thompson
> <ks...@mib.org> wrote:
>>c...@tiac.net (Richard Harter) writes:
>>[...]
>>> Many years ago there was a crackpot posting here who claimed that
>>> sorting was O(n). It turns out that if you think about it
>>> properly, it is.
>>
>>Sorting is O(1) if, rather than counting comparisons or swaps,
>>you count calls to qsort().
>>
>
> True, but no cigar. A more precise statement is that the time
> required for sorting is O(n) - omega(n) if you prefer.

I don't see how your statement is more precise than mine; it's just
a different statement. (My statement is admittedly silly.)

Richard Harter

Oct 3, 2009, 5:10:38 PM
On Sat, 03 Oct 2009 10:55:15 -0700, Keith Thompson
<ks...@mib.org> wrote:

>c...@tiac.net (Richard Harter) writes:
>> On Sat, 03 Oct 2009 09:37:33 -0700, Keith Thompson
>> <ks...@mib.org> wrote:
>>>c...@tiac.net (Richard Harter) writes:
>>>[...]
>>>> Many years ago there was a crackpot posting here who claimed that
>>>> sorting was O(n). It turns out that if you think about it
>>>> properly, it is.
>>>
>>>Sorting is O(1) if, rather than counting comparisons or swaps,
>>>you count calls to qsort().
>>>
>>
>> True, but no cigar. A more precise statement is that the time
>> required for sorting is O(n) - omega(n) if you prefer.
>
>I don't see how your statement is more precise than mine; it's just
>a different statement. (My statement is admittedly silly.)

My apologies, the relevant measure is big theta. That is, there
are algorithms such that the worst case cost of sorting is
THETA(n), i.e., if C(n) is the worst case sorting time cost for
data sets of size n, there are constants c1, c2, and n0 such that
c1*n < C(n) < c2*n for n > n0.

Will that do?

Phil Carmody

Oct 3, 2009, 6:20:48 PM
Keith Thompson <ks...@mib.org> writes:
> c...@tiac.net (Richard Harter) writes:
> [...]
>> Many years ago there was a crackpot posting here who claimed that
>> sorting was O(n). It turns out that if you think about it
>> properly, it is.
>
> Sorting is O(1) if, rather than counting comparisons or swaps,
> you count calls to qsort().

You seem to be overlooking the 'size of the input' aspect.

Phil
--
Any true emperor never needs to wear clothes. -- Devany on r.a.s.f1

Keith Thompson

Oct 3, 2009, 7:44:16 PM
Phil Carmody <thefatphi...@yahoo.co.uk> writes:
> Keith Thompson <ks...@mib.org> writes:
>> c...@tiac.net (Richard Harter) writes:
>> [...]
>>> Many years ago there was a crackpot posting here who claimed that
>>> sorting was O(n). It turns out that if you think about it
>>> properly, it is.
>>
>> Sorting is O(1) if, rather than counting comparisons or swaps,
>> you count calls to qsort().
>
> You seem to be overlooking the 'size of the input' aspect.

How so?

I was making a true statement, but it was (clearly, I thought)
a joke.

Sorting an array, of whatever size, requires exactly 1 call to
qsort(). Since 1 is O(1), sorting an array is O(1) if you count
calls to qsort(). It's obviously O(something bigger) if you count
comparisons, data movement, or anything reasonable.

Ben Bacarisse

Oct 3, 2009, 7:57:31 PM
c...@tiac.net (Richard Harter) writes:

> On Sat, 03 Oct 2009 10:55:15 -0700, Keith Thompson
> <ks...@mib.org> wrote:
>
>>c...@tiac.net (Richard Harter) writes:
>>> On Sat, 03 Oct 2009 09:37:33 -0700, Keith Thompson
>>> <ks...@mib.org> wrote:
>>>>c...@tiac.net (Richard Harter) writes:
>>>>[...]
>>>>> Many years ago there was a crackpot posting here who claimed that
>>>>> sorting was O(n). It turns out that if you think about it
>>>>> properly, it is.
>>>>
>>>>Sorting is O(1) if, rather than counting comparisons or swaps,
>>>>you count calls to qsort().
>>>>
>>>
>>> True, but no cigar. A more precise statement is that the time
>>> required for sorting is O(n) - omega(n) if you prefer.
>>
>>I don't see how your statement is more precise than mine; it's just
>>a different statement. (My statement is admittedly silly.)
>
> My apologies, the relevant measure is big theta. That is, there
> are algorithms such that the worst case cost of sorting is
> THETA(n), i.e., if C(n) is the worst case sorting time cost for
> data sets of size n, there are constants c1, c2, and n0 such that
> c1*n < C(n) <c2*n for n>n0.
>
> Will that do?

Not for me. Lots of things are missing: the model of computation; the
unit of measure used for "cost"; the representation of the problem
and, by implication, of the solution; what n denotes...[*] Lots of
apparently surprising remarks can be made about computations when
these sorts of things are omitted.

I think Keith was pointing to one of these ambiguities by suggesting a
new unit of measure for the cost.

BTW, you almost certainly don't need THETA to make the claim
interesting. Big O is an upper bound, so O(n) is the key complexity
measure. That the lower bound is also linear does not seem to add
much.

[*] Typing this was like the Monty Python sketch. I started by
stating that two things were missing, then I added a third... and then
a fourth. I've opted for "lots" and a trailing ellipsis because I bet
I've forgotten some others.

--
Ben.

Richard Harter

Oct 4, 2009, 12:32:32 AM

Well no, you don't need all of those things; you only need one of
them and you've actually been given the one you need, albeit with
no attention called to it. Think of this as a challenge: there
is a quite natural interpretation of the statement, sorting is
O(n). The challenge is to realize what that interpretation is.
There is an important principle involved.

>
>I think Keith was pointing to one of these ambiguities by suggesting a
>new unit of measure for the cost.
>
>BTW, you almost certainly don't need THETA to make the claim
>interesting. Big O is an upper bound, so O(n) is the key complexity
>measure. That the lower bound is also linear does not seem to add
>much.

It's a way of dissing O(1) claims.

>
>[*] Typing this was like the Monty Python sketch. I started by
>stating that two things were missing, then I added a third... and then
>a fourth. I've opted for "lots" and a trailing ellipsis because I bet
>I've forgotten some others.

But you see, only one of all your things is relevant, just as
they aren't relevant to the claim that sorting is O(n*log n).

Ben Bacarisse

Oct 4, 2009, 8:52:48 AM
c...@tiac.net (Richard Harter) writes:

For the moment I must simply take your word for it. I suspect we have
very different notions of how the term "natural" applies to theoretical
concepts like asymptotic bounds of algorithms. Still, I stand
ready to be amazed!

> The challenge is to realize what that interpretation is.
> There is an important principle involved.

I hope you won't keep us in the dark too long.

>>I think Keith was pointing to one of these ambiguities by suggesting a
>>new unit of measure for the cost.
>>
>>BTW, you almost certainly don't need THETA to make the claim
>>interesting. Big O is an upper bound, so O(n) is the key complexity
>>measure. That the lower bound is also linear does not seem to add
>>much.
>
> It's a way of dissing O(1) claims.

I don't see how. Any O(1) algorithm can be made THETA(n) with a
simple loop, can't it?

>>[*] Typing this was like the Monty Python sketch. I started by
>>stating that two things were missing, then I added a third... and then
>>a fourth. I've opted for "lots" and a trailing ellipsis because I bet
>>I've forgotten some others.
>
> But you see only one of all your things are relevant, just as
> they aren't relevant to the claim that sorting is O(n*log n);

That's another matter altogether. Sorting is a problem not an
algorithm. Personally, I'd never say that sorting is O(f) for any f
since I don't know of any results of that kind for this class of
problem (I am sure there are some, I just don't know any). Whilst I
agree one can allow some of the things I listed to be unstated for an
/algorithm/[1], I don't think any can left out when discussing a
/problem/.

BTW, I will be mildly peeved if the big assumption you think I am
making is to assume that all sorting algorithms are comparison sorts.
Your posting history suggests to me that you might have something more
interesting in mind, but I honestly have no idea what is encompassed
by your notion of "natural".

[1] The reason being (in my narrow-minded view of these things) that
the specification of the algorithm pins several of them down -- maybe
not entirely, but enough to be able to move on to an analysis. For a
problem, a one-word description ("sorting") leaves them all up in the
air.

--
Ben.

pete

Oct 4, 2009, 1:09:49 PM


--
pete

pete

Oct 4, 2009, 1:12:22 PM
Tim Rentsch wrote:

> Insertion sort is stable and easy to understand, but
> programming it takes just a little more care than
> bubble sort. The natural desire to improve the
> search for insertion points (by using binary search)
> makes it more complicated.

Today for no particular reason,
I wrote a stable, in place, binary insertion sort.

/* BEGIN output from bisort_test.c */

Arrays of (char *),
are being sorted by string length.

Original order of the test array:
zero
one
two
three
four
five
six
seven
eight
nine
ten

Stable binary insertion sort:
one
two
six
ten
zero
four
five
nine
three
seven
eight

/* END output from bisort_test.c */

/* BEGIN bisort_test.c */

#include <stdio.h>
#include <string.h>

#define SORT_FUNCTIONS { \
    nosort, "Original order of the test array:", \
    bisort, "Stable binary insertion sort:" \
}
#define NUMBERS { \
    "zero","one","two","three","four","five", \
    "six","seven","eight","nine","ten" \
}
#define NMEMB (sizeof numbers / sizeof *numbers)
#define SORTS (sizeof s_F / sizeof *s_F)

int comp(const void *a, const void *b);
void nosort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *));
void bisort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *));

int main(void)
{
    size_t element, sort;
    struct sf {
        void (*function)(void *base, size_t nmemb, size_t size,
                         int (*compar)(const void *, const void *));
        const char *description;
    } s_F[] = SORT_FUNCTIONS;
    const char *numbers[] = NUMBERS;
    char *array[NMEMB];

    puts("\n/* BEGIN output from bisort_test.c */\n\n"
         "Arrays of (char *),\n"
         "are being sorted by string length.\n");
    for (sort = 0; sort != SORTS; ++sort) {
        memcpy(array, numbers, sizeof array);
        s_F[sort].function(array, NMEMB, sizeof *array, comp);
        puts(s_F[sort].description);
        for (element = 0; element != NMEMB; ++element) {
            puts(array[element]);
        }
        putchar('\n');
    }
    puts("/* END output from bisort_test.c */");
    return 0;
}

int comp(const void *a, const void *b)
{
    const size_t a_len = strlen(*(const char **)a);
    const size_t b_len = strlen(*(const char **)b);

    return b_len > a_len ? -1 : b_len != a_len;
}

void nosort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    /* deliberately does nothing; the casts silence
       unused-parameter warnings */
    (void)base; (void)nmemb; (void)size; (void)compar;
}

void bisort(void *base, size_t nmemb, size_t size,
            int (*compar)(const void *, const void *))
{
    size_t bytes = 0;
    const size_t odd_size = size ^ (size - 1);
    unsigned char *key = base;
    unsigned char *const array = key;

    while (nmemb-- > 1) {
        unsigned char *end;
        size_t n;
        size_t low = 0;
        size_t high = bytes + size;

        bytes = high;
        key += size;
        do {
            size_t middle = low;

            n = high - middle;
            middle += (n & odd_size ? n - size : n) >> 1;
            if (0 > compar(key, middle + array)) {
                high = middle;
            } else {
                low = middle;
            }
        } while (n != size);
        if ((end = array + high) != key) {
            unsigned char *start = key;
            unsigned char *const after = start + size;

            do {
                unsigned char *p1 = start;
                unsigned char *p2 = p1;
                const unsigned char temp = *p1;

                do {
                    *p1 = *(p2 -= size);
                } while ((p1 = p2) != end);
                *p1 = temp;
                ++end;
            } while (++start != after);
        }
    }
}

/* END bisort_test.c */

--
pete

Richard Harter

Oct 4, 2009, 1:25:30 PM
On Sun, 04 Oct 2009 13:52:48 +0100, Ben Bacarisse
<ben.u...@bsb.me.uk> wrote:


[snip a lot of stuff - look at the previous postings if you want
background]

>That's another matter altogether. Sorting is a problem not an
>algorithm.

Nonetheless it makes sense to speak of order functions for
problems. Given a problem, there will be a wide variety of
algorithms that can be used to solve it. The intrinsic cost of
solving that problem is the cost of the best algorithms. Thus it
makes sense to say that the traveling salesman problem is
exponential. There is an implicit assumption here that the
algorithms run on a Turing machine equivalent.

>Personally, I'd never say that sorting is O(f) for any f
>since I don't know of any results of that kind for this class of
>problem (I am sure there are some, I just don't know any). Whilst I
>agree one can allow some of the things I listed to be unstated for an
>/algorithm/[1], I don't think any can left out when discussing a
>/problem/.

But they can - it's just a matter of abstraction.


>
>BTW, I will be mildly peeved if the big assumption you think I am
>making is to assume that all sorting algorithms are comparison sorts.
>Your posting history suggest to me that you might have something more
>interesting in mind, but I honestly have no idea what is encompassed
>by your notion of "natural".

I wasn't, but prepare to be disappointed - I expect your
reaction will be "Is that all?". You asked, quite properly, what
does n denote. And the answer is, and I quote, "data sets of
size n". But how do you measure size? The natural
measurement of the size of a data set is the amount of storage it
occupies. For specialized data sets such as arrays of elements
it is convenient to use the number of elements as a measure of
size, but it is the actual number of bits that is primary. (Don't
argue with me on this point; I'm right by definition. :-))

The thing is, sorting algorithms are developed in terms of
operations on elements so it is natural to characterize their
performance in terms of number of elements with an implicit
assumption that the cost of operations on elements is O(1). It's
a convenient assumption that works in practice, because real
sorting problems are of bounded size. However the assumption
creates a blind spot - as the number of distinct elements
increases the intrinsic cost of basic operations also increases.
This increase in cost is usually ignored in analysis.

At this point you may be shaking your head and saying, "So what,
why does it matter?" For most of us, I suppose it doesn't - or
at least we think it doesn't. One place where it matters is in
large systems analysis. The issue is: Does sorting scale?
Primary operations (the things that you use for black boxes)
scale if the cost per data unit passing through the box is O(1).
(That's another definition; don't argue. :-)) So, if you are
thinking in terms of counting elements, sorting doesn't scale; if
you are thinking in terms of counting data volume it does -
perhaps.

After all, we need to establish whether sorting actually is
Theta(n) where n is the number of bits in the data set. The
lower bound is clear enough - whatever the algorithm an oracle
can force it to examine every bit. The upper bound is a bit
murkier. The simple argument runs like this:

Let m be the number of elements and n be the total data set size.
Comparison sorts require O(m * log m) operations. However if
there are m distinct elements then the average size of an element
has to be >= log m. Ergo n >= m * log m. Ergo O(n) is an upper
bound.

There are some fiddles you have to go through to account for
duplicate elements but they're trivial. More importantly, naive
comparisons have a cost of n/m >= log m. However there are
various tricks that can be used to tag the bit/byte position.
The upshot is that the comparison cost can be reduced to O(1).

Where things get murky is the question of data movement. In most
sorting algorithms all of the data is potentially moved log m
times for a n*log(m) cost. However one can avoid moving data by
working with data indices. (With the aid of some hair this can
be done in an O(1) per data bit way.) Given a subsidiary
permutation array, all of the data can be moved into place as an
O(n) operation - if setting a bit in an arbitrary location is an
O(1) operation. The trouble is that for an arbitrarily large
address space this is problematic. The upshot is that sorting
isn't really O(n) but if we make assumptions that apply within
ordinary computation it is.

By the by, this little insight seems to be harder to arrive at
than I supposed it to be. When my crackpot was posting, everybody
told him he was crazy, and nobody caught on that he was talking
about apples whereas they were talking about oranges. Using his
n he was right - sort of. He didn't take into account all of the
costs for large n.

Eric Sosman

unread,
Oct 4, 2009, 1:44:49 PM10/4/09
to
Tim Rentsch wrote:
> [...]

> Heap sort isn't easy to understand, isn't nearly so easy
> to program, is easy to get wrong, and /always/ is doing
> non-local operations; it also isn't stable. Heap sort
> is a great sorting algorithm. :)

Something you forgot to mention is that Heapsort is
O(N*logN) even in the worst case. Heapsort's worst case
isn't "degenerate," unlike that of Quicksort, say.

> Merge sort is usually dismissed because of needing extra
> space. Yes it's possible to do a stable merge sort on an
> array and use only O(1) extra space, but programming that is
> horrendous.

On the other hand, merge sort is flat-out beautiful for
linked lists, where the O(N) space for the links is "free"
because it's already there.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Phil Carmody

unread,
Oct 4, 2009, 5:39:44 PM10/4/09
to
c...@tiac.net (Richard Harter) replied to Ben Bacarisse:
...

> After all, we need to establish whether sorting actually is
> Theta(n) where n is the number of bits in the data set. The
> lower bound is clear enough - whatever the algorithm an oracle
> can force it to examine every bit. The upper bound is a bit
> murkier. The simple argument runs like this:
>
> Let m be the number of elements and n be the total data set size.
> Comparison sorts require O(m * log m) operations. However if
> there are m distinct elements then the average size of an element
> has to be >= log m.

And therefore the cost of those operations is theta(log(m)) elementary
operations. Therefore comparison sorts require O(m*log(m)^2).

> Ergo n >= m * log m. Ergo O(n) is an upper bound.

This no longer follows. Comparison sorts were never going to provide
the tight bound. See threads between Dan Bernstein and, IIRC, Nick
McLaren on sorting for more - probably comp.programming, not so sure
though.

Richard

unread,
Oct 4, 2009, 6:12:54 PM10/4/09
to
Phil Carmody <thefatphi...@yahoo.co.uk> writes:

This has nothing to do with the C standard and should be taken to
comp.programming or comp.algorithms ....

--
"Avoid hyperbole at all costs, its the most destructive argument on
the planet" - Mark McIntyre in comp.lang.c

Ben Bacarisse

unread,
Oct 4, 2009, 7:20:24 PM10/4/09
to
c...@tiac.net (Richard Harter) writes:

> On Sun, 04 Oct 2009 13:52:48 +0100, Ben Bacarisse
> <ben.u...@bsb.me.uk> wrote:
>
>
> [snip a lot of stuff - look at the previous postings if you want
> background]
>
>>That's another matter altogether. Sorting is a problem not an
>>algorithm.
>
> None-the-less it makes sense to speak of order functions for
> problems. Given a problem, there will be a wide variety of
> algorithms that can be used to solve it. The intrinsic cost of
> solving that problem is the cost of the best algorithms. Thus it
> makes sense to say that the traveling salesman problem is
> exponential. There is an implicit assumption here that the
> algorithms run on a Turing machine equivalent.

That's way too vague for me. We are not talking about computability
but sub-polynomial complexity here and I don't know what else might be
"Turing machine equivalent" in that context. What other models have
exactly the same class of "linear-time" problems as TMs?

>>Personally, I'd never say that sorting is O(f) for any f
>>since I don't know of any results of that kind for this class of
>>problem (I am sure there are some, I just don't know any). Whilst I
>>agree one can allow some of the things I listed to be unstated for an
>>/algorithm/[1], I don't think any can left out when discussing a
>>/problem/.
>
> But they can - it's just a matter of abstraction.
>>
>>BTW, I will be mildly peeved if the big assumption you think I am
>>making is to assume that all sorting algorithms are comparison sorts.
>>Your posting history suggest to me that you might have something more
>>interesting in mind, but I honestly have no idea what is encompassed
>>by your notion of "natural".
>
> I wasn't, but prepare to be disappointed - I expect your
> reaction will be "Is that all?". You asked, quite properly, what
> does n denote. And the answer is, and I quote, "data sets of
> size n". But how do you measure size? The natural
> measurement of the size of a data set is the amount of storage it
> occupies. For specialized data sets such as arrays of elements
> it is convenient to use the number of elements as a measure of
> size, but it is the actual number of bits that is primary. (Don't
> argue with me on this point; I'm right by definition. :-))

And, better, you are right by virtue of being right! The size of the
input is the usual measure for a problem and the usual unit is bits
(which is why the encoding can be so important).

Yes, I am disappointed, but there is enthusiasm as well. The last I
knew anything of the state of the art in this area (more than a decade
ago) it was widely thought that sorting m numbers encoded in (on
average) r bits would be found to be O(mr) but it had not, at the
time, been shown to be so. Are you saying that it now is? Is there
any reference to this work that is accessible to someone without a
good library?

Please forgive me for not just taking your argument as a proof. It
seems to be worded as an argument about an abstract model far removed
from a TM. What model is it, and how do you know that it is equivalent
to a TM as far as the class of linear problems is concerned?

<snip argument about O(n = mr) sort>
--
Ben.

Richard Harter

unread,
Oct 4, 2009, 11:25:51 PM10/4/09
to

You should have read further to see:


"More importantly, naive comparisons have a cost of n/m >= log m.
However there are various tricks that can be used to tag the
bit/byte position. The upshot is that the comparison cost can be
reduced to O(1)."

Tim Rentsch

unread,
Oct 5, 2009, 5:34:39 AM10/5/09
to
Eric Sosman <eso...@ieee-dot-org.invalid> writes:

> Tim Rentsch wrote:
>> [...]
>> Heap sort isn't easy to understand, isn't nearly so easy
>> to program, is easy to get wrong, and /always/ is doing
>> non-local operations; it also isn't stable. Heap sort
>> is a great sorting algorithm. :)
>
> Something you forgot to mention is that Heapsort is
> O(N*logN) even in the worst case. Heapsort's worst case
> isn't "degenerate," unlike that of Quicksort, say.

Right. It isn't that I forgot really, just that the mention was
rather cryptic (calling it "a great sorting algorithm").
Something of a playful posting, I took a few liberties.

I also didn't mention (at least not directly) Quicksort's
average-case running time of O(N log N), which is a very
important plus.

Speaking of which, I also left out Shell sort, which has a
running time more like O( N**1.5 ). If you're looking for a
suboptimal-but-not-too-suboptimal sorting algorithm, Shell sort
is a good candidate!
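For anyone who wants to see it concretely, a minimal Shell sort sketch in C follows. The simple halving gap sequence is used for brevity; the running time Tim quotes depends heavily on the gap sequence, and better ones (e.g. Ciura's) do better:

```c
#include <stddef.h>

/* Shell sort: insertion sort run over progressively smaller gaps.
 * The halving gap sequence below is the simplest choice; observed
 * running time depends heavily on the sequence used. */
void shell_sort(int *a, size_t n)
{
    size_t gap, i, j;

    for (gap = n / 2; gap > 0; gap /= 2)
        for (i = gap; i < n; i++) {
            int v = a[i];
            /* shift larger elements right within this gap's chain */
            for (j = i; j >= gap && a[j - gap] > v; j -= gap)
                a[j] = a[j - gap];
            a[j] = v;
        }
}
```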


>> Merge sort is usually dismissed because of needing extra
>> space. Yes it's possible to do a stable merge sort on an
>> array and use only O(1) extra space, but programming that is
>> horrendous.
>
> On the other hand, merge sort is flat-out beautiful for
> linked lists, where the O(N) space for the links is "free"
> because it's already there.

Merge sort on linked lists is rather nice. It's not so
satisfying in C though, because it's inconvenient to write a C
routine to perform this algorithm for different kinds of linked
lists. C provides arrays in a form that allows arrays with
different element types to be dealt with easily (by means of
(void*) pointers and an extra argument giving element size, etc).
Analogous code for linked lists is rather clunky, requiring
functions to access and update the "next element" link value.
I suspect this has a lot to do with why a linked list sort
function hasn't made its way into the standard library.
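The clunkiness is easy to see by sketching what a qsort-style interface for lists would have to look like. Everything below is hypothetical (the names are invented for illustration, and a simple insertion sort stands in for a real merge sort); the point is the accessor boilerplate the caller must supply:

```c
#include <stddef.h>

/* Hypothetical qsort-style interface for linked lists (all names
 * invented for illustration): since C fixes no node layout, the
 * caller must supply accessors for the "next" link as well as the
 * usual comparison function. */
typedef struct {
    void *(*get_next)(void *node);
    void  (*set_next)(void *node, void *next);
    int   (*compare)(const void *a, const void *b);
} list_ops;

/* Insertion sort written purely against the accessors, to show the
 * shape such code takes; a real library routine would merge sort. */
void *list_sort(void *head, const list_ops *ops)
{
    void *sorted = NULL;

    while (head != NULL) {
        void *node = head;
        head = ops->get_next(head);

        if (sorted == NULL || ops->compare(node, sorted) <= 0) {
            ops->set_next(node, sorted);   /* insert at front */
            sorted = node;
        } else {
            void *p = sorted;              /* find insertion point */
            while (ops->get_next(p) != NULL
                   && ops->compare(ops->get_next(p), node) < 0)
                p = ops->get_next(p);
            ops->set_next(node, ops->get_next(p));
            ops->set_next(p, node);
        }
    }
    return sorted;
}

/* The per-type boilerplate every caller has to write: */
typedef struct IntNode { struct IntNode *next; int value; } IntNode;

void *int_get_next(void *n) { return ((IntNode *)n)->next; }
void  int_set_next(void *n, void *next) { ((IntNode *)n)->next = next; }
int   int_compare(const void *a, const void *b)
{
    return ((const IntNode *)a)->value - ((const IntNode *)b)->value;
}
```

Compare this with qsort's single (base, nmemb, size, comparator) signature: the extra indirection on every link access is exactly the clunkiness described above.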

user923005

unread,
Oct 5, 2009, 4:21:12 PM10/5/09
to
On Oct 3, 3:20 pm, Phil Carmody <thefatphil_demun...@yahoo.co.uk>
wrote:

> Keith Thompson <ks...@mib.org> writes:
> > c...@tiac.net (Richard Harter) writes:
> > [...]
> >> Many years ago there was a crackpot posting here who claimed that
> >> sorting was O(n).  It turns out that if you think about it
> >> properly, it is.
>
> > Sorting is O(1) if, rather than counting comparisons or swaps,
> > you count calls to qsort().
>
> You seem to be overlooking the 'size of the input' aspect.

FTSI:
;-)

P.S. (for the seriously smiley impaired):
Calls to qsort() will undoubtedly be O(log(N)) for random data, unless
qsort() has been converted from recursion to iteration.

Keith Thompson

unread,
Oct 5, 2009, 4:31:52 PM10/5/09
to
user923005 <dco...@connx.com> writes:
> On Oct 3, 3:20 pm, Phil Carmody <thefatphil_demun...@yahoo.co.uk>
> wrote:
>> Keith Thompson <ks...@mib.org> writes:
>> > c...@tiac.net (Richard Harter) writes:
>> > [...]
>> >> Many years ago there was a crackpot posting here who claimed that
>> >> sorting was O(n).  It turns out that if you think about it
>> >> properly, it is.
>>
>> > Sorting is O(1) if, rather than counting comparisons or swaps,
>> > you count calls to qsort().
>>
>> You seem to be overlooking the 'size of the input' aspect.
>
> FTSI:
> ;-)

Googling "FTSI" doesn't give me anything useful. I'm probably missing
an obvious joke.

> P.S. (for the seriously smiley impaired):
> Calls to qsort() will undoubtedly be O(log(N)) for random data, unless
> qsort() has been converted from recursion to iteration.

Right, I didn't think about recursion. Of course qsort() itself
needn't be recursive; it could call some other recursive function
that does the actual work. And of course qsort() needn't use Quicksort.

glibc's qsort() uses a non-recursive version of Quicksort, falling
back to insertion sort for small partitions.
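As an illustration of that pattern (a sketch of the general idea, not glibc's actual source): quicksort driven by an explicit stack instead of recursion, with small partitions handed off to insertion sort.

```c
#include <stddef.h>

enum { SMALL = 8 };   /* cutoff below which insertion sort takes over */

static void insertion_sort(int *a, size_t n)
{
    size_t i, j;
    for (i = 1; i < n; i++) {
        int v = a[i];
        for (j = i; j > 0 && a[j - 1] > v; j--)
            a[j] = a[j - 1];
        a[j] = v;
    }
}

/* Iterative quicksort: an explicit stack of [lo, hi) ranges replaces
 * recursion; partitions of SMALL or fewer elements are finished off
 * by insertion sort.  The fixed-size stack is fine for illustration;
 * production code bounds depth by always pushing the larger half. */
void quick_sort(int *a, size_t n)
{
    struct { size_t lo, hi; } stack[64];
    size_t top = 0;

    stack[top].lo = 0;
    stack[top].hi = n;
    top++;

    while (top > 0) {
        size_t lo, hi, i, j;
        int pivot;

        top--;
        lo = stack[top].lo;
        hi = stack[top].hi;

        if (hi - lo <= SMALL) {          /* small partition: fall back */
            insertion_sort(a + lo, hi - lo);
            continue;
        }

        pivot = a[lo + (hi - lo) / 2];   /* middle element as pivot */
        i = lo;
        j = hi;
        for (;;) {                       /* Hoare-style partition */
            while (a[i] < pivot) i++;
            do j--; while (a[j] > pivot);
            if (i >= j) break;
            { int t = a[i]; a[i] = a[j]; a[j] = t; }
            i++;
        }
        stack[top].lo = lo;     stack[top].hi = j + 1; top++;
        stack[top].lo = j + 1;  stack[top].hi = hi;    top++;
    }
}
```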

user923005

unread,
Oct 5, 2009, 4:41:07 PM10/5/09
to
On Oct 5, 1:31 pm, Keith Thompson <ks...@mib.org> wrote:

> user923005 <dcor...@connx.com> writes:
> > On Oct 3, 3:20 pm, Phil Carmody <thefatphil_demun...@yahoo.co.uk>
> > wrote:
> >> Keith Thompson <ks...@mib.org> writes:
> >> > c...@tiac.net (Richard Harter) writes:
> >> > [...]
> >> >> Many years ago there was a crackpot posting here who claimed that
> >> >> sorting was O(n).  It turns out that if you think about it
> >> >> properly, it is.
>
> >> > Sorting is O(1) if, rather than counting comparisons or swaps,
> >> > you count calls to qsort().
>
> >> You seem to be overlooking the 'size of the input' aspect.
>
> > FTSI:
> > ;-)
>
> Googling "FTSI" doesn't give me anything useful.  I'm probably missing
> an obvious joke.

(F)or (T)he (S)miley (I)mpaired

Similar to:
FTSSI (for the seriously smiley impaired)

> > P.S. (for the seriously smiley impaired):
> > Calls to qsort() will undoubtedly be O(log(N)) for random data, unless
> > qsort() has been converted from recursion to iteration.
>
> Right, I didn't think about recursion.  Of course qsort() itself
> needn't be recursive; it could call some other recursive function
> that does the actual work.  And of course qsort() needn't use Quicksort.
>
> glibc's qsort() uses a non-recursive version of Quicksort, falling
> back to insertion sort for small partitions.
>
> --
> Keith Thompson (The_Other_Keith) ks...@mib.org  <http://www.ghoti.net/~kst>
> Nokia
> "We must do something.  This is something.  Therefore, we must do this."

>     -- Antony Jay and Jonathan Lynn, "Yes Minister"

Keith Thompson

unread,
Oct 5, 2009, 6:28:59 PM10/5/09
to
user923005 <dco...@connx.com> writes:
> On Oct 5, 1:31 pm, Keith Thompson <ks...@mib.org> wrote:
>> user923005 <dcor...@connx.com> writes:
[...]

>> > FTSI:
>> > ;-)
>>
>> Googling "FTSI" doesn't give me anything useful.  I'm probably missing
>> an obvious joke.
>
> (F)or (T)he (S)miley (I)mpaired
>
> Similar to:
> FTSSI (for the seriously smiley impaired)

Yeah, that should have been obvious, especially given the immediately
following line:

>> > P.S. (for the seriously smiley impaired):

*sigh*

[...]

Richard Bos

unread,
Oct 25, 2009, 12:34:48 PM10/25/09
to
Eric Sosman <eso...@ieee-dot-org.invalid> wrote:

> Tim Rentsch wrote:

> > Merge sort is usually dismissed because of needing extra
> > space. Yes it's possible to do a stable merge sort on an
> > array and use only O(1) extra space, but programming that is
> > horrendous.
>
> On the other hand, merge sort is flat-out beautiful for
> linked lists, where the O(N) space for the links is "free"
> because it's already there.

For the links, yes; but even for linked lists, it needs at least one
extra head node per level of recursion, unless I've been missing a
trick. This is not a lot, though, and it's still by far the most elegant
way I know of sorting a linked list.

Richard

pete

unread,
Oct 25, 2009, 10:14:43 PM10/25/09
to
Richard Bos wrote:
> Eric Sosman <eso...@ieee-dot-org.invalid> wrote:

>> On the other hand, merge sort is flat-out beautiful for
>>linked lists, where the O(N) space for the links is "free"
>>because it's already there.
>
>
> For the links, yes; but even for linked lists, it needs at least one
> extra head node per level of recursion, unless I've been missing a
> trick. This is not a lot, though, and it's still by far the most elegant
> way I know of sorting a linked list.

I don't know what you mean by
"it needs at least one extra head node per level of recursion".

--
pete

Eric Sosman

unread,
Oct 25, 2009, 11:05:46 PM10/25/09
to

He means that a list merge sort operates by merging short
ordered sublists into longer sublists, and then merging those
into still longer sublists, and so on until the final merge
puts the entire original list into order. Keeping track of
all those sublists (whether by recursion or by other means)
needs O(log N) additional storage.

But since O(N) + O(log N) = O(N), this makes no difference
at all asymptotically. Even in practice it makes not enough
difference to fret about: sixty-four 64-bit pointers are more
than enough for any machine you or your children are likely
to encounter. If your grandchildren encounter even bigger
machines, they'll have no trouble finding some extra space.

--
Eric Sosman
eso...@ieee-dot-org.invalid

Tim Rentsch

unread,
Oct 26, 2009, 1:36:03 AM10/26/09
to
ral...@xs4all.nl (Richard Bos) writes:

Although not immediately obvious, it is possible to
do a merge sort on linked lists without using
recursion and using only O(1) additional storage
(not counting the O(n) space for the links of course):

#include <stddef.h>

/* Supporting definitions, filled in here so the routine compiles
 * (the original post assumed them): */
typedef struct ListNode ListNode;
struct ListNode { ListNode *next; /* plus payload */ };
typedef size_t Count;
typedef size_t Index;
#define MIN(x,y) ((x) < (y) ? (x) : (y))

static Count
list_length( ListNode *p ){
    Count n = 0;
    for( ; p != NULL; p = p->next ) n++;
    return n;
}

extern void
list_merge_sort( ListNode **head, int (*order_f)( ListNode *, ListNode * ) ){
    Count length = list_length( *head );
    ListNode *a, *b;
    Count remaining, sorted_length, a_n, b_n;
    ListNode **last;
    Index i;

    for(
        sorted_length = 1;
        sorted_length < length;
        sorted_length += MIN( sorted_length, length - sorted_length )
    ){
        remaining = length;
        last = head;
        while( remaining > 0 ){
            remaining -= a_n = MIN( sorted_length, remaining );
            remaining -= b_n = MIN( sorted_length, remaining );
            a = b = *last;
            for( i = 0; i < a_n; i++ ) b = b->next;

            while( a_n > 0 || b_n > 0 ){
                if( b_n == 0 || ( a_n > 0 && order_f( a, b ) < 1 ) ){
                    *last = a;
                    last = & a->next;
                    a = a->next;
                    a_n --;
                } else {
                    *last = b;
                    last = & b->next;
                    b = b->next;
                    b_n --;
                }
            }
            *last = b;
        }
    }
}

Tim Rentsch

unread,
Oct 26, 2009, 1:51:07 AM10/26/09
to
Eric Sosman <eso...@ieee-dot-org.invalid> writes:

> pete wrote:
>> Richard Bos wrote:
>>> Eric Sosman <eso...@ieee-dot-org.invalid> wrote:
>>
>>>> On the other hand, merge sort is flat-out beautiful for
>>>> linked lists, where the O(N) space for the links is "free"
>>>> because it's already there.
>>>
>>> For the links, yes; but even for linked lists, it needs at least one
>>> extra head node per level of recursion, unless I've been missing a
>>> trick. This is not a lot, though, and it's still by far the most elegant
>>> way I know of sorting a linked list.
>>
>> I don't know what you mean by
>> "it needs at least one extra head node per level of recursion".
>
> He means that a list merge sort operates by merging short
> ordered sublists into longer sublists, and then merging those
> into still longer sublists, and so on until the final merge
> puts the entire original list into order. Keeping track of
> all those sublists (whether by recursion or by other means)
> needs O(log N) additional storage.

Actually it doesn't -- it can be done with O(1) additional
storage (algorithm shown elsethread).

> But since O(N) + O(log N) = O(N), this makes no difference

> at all asymptotically. [snip practical comments.]

Even with arrays, without any space for links, stable mergesort
can be done using only O(1) extra space. Those algorithms
aren't really very practical, but not by as much as one
might think. For linked lists the O(1)-space algorithms
are both theoretically interesting and practically useful.

gunjan...@gmail.com

unread,
Jul 11, 2014, 8:05:55 AM7/11/14
to
On Tuesday, September 29, 2009 4:12:15 PM UTC+5:30, arnuld wrote:
> Here is the C implementation of Selection-Sort. Any advice for
> improvement will be apppreciated:
>
>
> /* A C program to sort an array of characters using bubble sort algorithm
> *
> * VERSION 0.0
> *
> */
>
>
> #include <stdio.h>
> #include <stdlib.h>
> #include <string.h>
>
>
> void sort_bubble(char* );
>
> int main(void)
> {
> char arrc[] = "heathfield";
>
> printf("array(unsorted): %s\n", arrc);
> sort_bubble(arrc);
> printf("array(sorted): %s\n", arrc);
>
> return 0;
> }
>
>
> void sort_bubble(char* c)
> {
> size_t i, j;
> size_t s = strlen(c);
>
> for(i = 0; i < s; ++i)
> {
> for(j = s-1; j != i; --j)
> {
> if( c[j] < c[j-1] )
> {
> char temp = c[j];
> c[j] = c[j - 1];
> c[j - 1] = temp;
> }
> }
> }
> }
>
> ========================== OUTPUT ==============================
> [arnuld@dune programs]$ gcc -ansi -pedantic -Wall -Wextra selection-sort.c
> [arnuld@dune programs]$ ./a.out
> Original Array = arnuld
> Sorted Array = adlnru
> [arnuld@dune programs]$
>
>
>
> --
> www.lispmachine.wordpress.com
> my email is @ the above blog.

hey i m a student nd i m using selection sort in this way : is it proper or not
#include<stdio.h>
#include<conio.h>
void selection(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++)
    {
        for (j = i + 1; j < n; j++)
        {
            if (a[i] > a[j])
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = a[i];
            }
        }
    }
}

Mark Storkamp

unread,
Jul 11, 2014, 9:20:21 AM7/11/14
to
In article <5c5cb1e0-3f1d-4f6e...@googlegroups.com>,
A) What are you using 'temp' for?
B) Why are you responding to a 5 year old post?
C) Don't try to be cute with the 'i m a nd i m' bs. Spell it out. There
are non-native speakers here who will have difficulty with that, and
it's just childish.
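For the record, question (A) points at a real bug: the posted swap ends with a[j] = a[i] after a[i] has already been overwritten, so temp is written but never read back. The repaired routine:

```c
/* The posted selection sort with the swap repaired: temp must be
 * written back into a[j]; the original wrote a[j] = a[i] after
 * a[i] had already been overwritten, so temp was never used. */
void selection(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (a[i] > a[j]) {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;   /* was: a[j] = a[i]; */
            }
}
```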

Noob

unread,
Jul 11, 2014, 10:24:54 AM7/11/14
to
On 11/07/2014 15:20, Mark Storkamp wrote:

> C) Don't try to be cute with the 'i m a nd i m' bs. Spell it out. There
> are non-native speakers here who will have difficulty with that, and
> it's just childish.

AFAICT, it's participants from the former "British Indian Empire" who
are most likely to use "SMS-speak" in anglophone technical groups.
I've never seen a logical explanation for this observation (if it is
indeed accurate).
