
How to pick up 3 elements in a linked list?


Peppa

Jun 21, 1999
Hi,
I have in memory a linked list like this:

struct rec {
    char name[20];
    int value;
    struct rec *next;
};


The problem is: how can I print on the screen the names
of the 3 records with the highest value, using an elegant
algorithm (this means reading the linked list possibly just
one time), and without ordering the list?
e.g. if I have:

name1 9
name2 10
name3 6
name4 16
name5 17

I should obtain on the screen:

name5 17
name4 16
name2 10

Thank You for your help.

Luca

Paul D. Boyle

Jun 21, 1999
Peppa (pe...@peppa.com) wrote:
: The problem is: how can I print on the screen the names
: of the 3 records with the highest value, using an elegant
: algorithm (this means reading the linked list possibly just
: one time), and without ordering the list?

It may be easier to order the data as it gets initially read into
the list, and then just print the first three elements on the list.
I think any other algorithm would need multiple traversals of the list,
which eventually (depending on the size of the list) would become slow,
not to mention that the code would be more difficult for someone else
to follow.
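
Roughly along these lines, perhaps (an untested sketch only; the name
insert_sorted is just made up for illustration, and it uses the struct
rec from the original post):

#include <stdlib.h>
#include <string.h>

struct rec {
    char name[20];
    int value;
    struct rec *next;
};

/* Insert a new record into *listp, keeping the list sorted by
   descending value, so the three largest records are always at
   the front.  Returns 0 on success, -1 if malloc() fails. */
int insert_sorted(struct rec **listp, const char *name, int value)
{
    struct rec *n = malloc(sizeof *n);
    struct rec **pp;

    if (n == NULL)
        return -1;
    strncpy(n->name, name, sizeof n->name - 1);
    n->name[sizeof n->name - 1] = '\0';
    n->value = value;

    for (pp = listp; *pp != NULL && (*pp)->value >= value; pp = &(*pp)->next)
        ;                                 /* find the insertion point */
    n->next = *pp;
    *pp = n;
    return 0;
}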

Paul

--
Paul D. Boyle | bo...@laue.chem.ncsu.edu
Director, X-ray Structural Facility | phone: (919) 515-7362
Department of Chemistry - Box 8204 | FAX: (919) 515-5079
North Carolina State University |
Raleigh, NC, 27695-8204
http://laue.chem.ncsu.edu/web/xray.welcome.html

Greg Martin

Jun 21, 1999

Paul D. Boyle wrote in message <7klhoi$b2i$1...@uni00nw.unity.ncsu.edu>...

>Peppa (pe...@peppa.com) wrote:
>: The problem is: how can I print on the screen the names
>: of the 3 records with the highest value, using an elegant
>: algorithm (this means reading the linked list possibly just
>: one time), and without ordering the list?
>
>It may be easier to order the data as it gets initially read into
>the list, and then just print the first three elements on the list.
>I think any other algorithm would need multiple traversals of the list,

Would it not be possible to have three pointers for the big, bigger and
biggest, and keep reassigning them as you traverse the list, so that if the
current value is greater than big, bigger or biggest it is pointed to by the
appropriate one and one is bumped?
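
Something like this, maybe (just an untested sketch of the idea, using the
struct rec from the original post; the function name print_top3 is made up
for illustration):

#include <stdio.h>

struct rec {
    char name[20];
    int value;
    struct rec *next;
};

/* Untested sketch: p1, p2, p3 point at the records with the largest,
   second largest and third largest values seen so far. */
void print_top3(const struct rec *head)
{
    const struct rec *p1 = NULL, *p2 = NULL, *p3 = NULL;
    const struct rec *p;

    for (p = head; p != NULL; p = p->next) {
        if (p1 == NULL || p->value > p1->value) {
            p3 = p2;                  /* new biggest: bump the others down */
            p2 = p1;
            p1 = p;
        } else if (p2 == NULL || p->value > p2->value) {
            p3 = p2;                  /* new second biggest */
            p2 = p;
        } else if (p3 == NULL || p->value > p3->value) {
            p3 = p;                   /* new third biggest */
        }
    }

    if (p1 != NULL) printf("%s %d\n", p1->name, p1->value);
    if (p2 != NULL) printf("%s %d\n", p2->name, p2->value);
    if (p3 != NULL) printf("%s %d\n", p3->name, p3->value);
}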

Regards,
Greg.

Paul D. Boyle

Jun 21, 1999
Greg Martin (gr...@netidea.com) wrote:

: Paul D. Boyle wrote in message <7klhoi$b2i$1...@uni00nw.unity.ncsu.edu>...


: >Peppa (pe...@peppa.com) wrote:
: >: The problem is: how can I print on the screen the names
: >: of the 3 records with the highest value, using an elegant
: >: algorithm (this means reading the linked list possibly just
: >: one time), and without ordering the list?
: >
: >It may be easier to order the data as it gets initially read into
: >the list, and then just print the first three elements on the list.
: >I think any other algorithm would need multiple traversals of the list,

: Would it not be possible to have three pointers for the big, bigger and
: biggest and keep reassigning them as you traverse the list so that if the
: current value is > then big, bigger or biggest it is pointed to by the
: appropriate one and one is bumped?

I think it would certainly be possible, but I still think that ordering
the list as the data is read in would be the best, most efficient
solution.

Greg Martin

Jun 21, 1999

Paul D. Boyle wrote in message <7klp1o$od3$1...@uni00nw.unity.ncsu.edu>...

>Greg Martin (gr...@netidea.com) wrote:
>
>: Paul D. Boyle wrote in message <7klhoi$b2i$1...@uni00nw.unity.ncsu.edu>...
>: >Peppa (pe...@peppa.com) wrote:
>: >: The problem is: how can I print on the screen the names
>: >: of the 3 records with the highest value, using an elegant
>: >: algorithm (this means reading the linked list possibly just
>: >: one time), and without ordering the list?
>: >
>: >It may be easier to order the data as it gets initially read into
>: >the list, and then just print the first three elements on the list.
>: >I think any other algorithm would need multiple traversals of the list,
>
>: Would it not be possible to have three pointers for the big, bigger and
>: biggest and keep reassigning them as you traverse the list so that if the
>: current value is > then big, bigger or biggest it is pointed to by the
>: appropriate one and one is bumped?
>
>I think it would certainly be possible, but I still think that ordering
>the list as the data is read in would be the best solution, most
>efficient solution.

I think it would be hard and foolish to argue against that. :-)
Greg.

Lawrence Kirby

Jun 21, 1999
In article <7klp1o$od3$1...@uni00nw.unity.ncsu.edu>

bo...@laue.chem.ncsu.edu "Paul D. Boyle" writes:

>Greg Martin (gr...@netidea.com) wrote:
>
>: Paul D. Boyle wrote in message <7klhoi$b2i$1...@uni00nw.unity.ncsu.edu>...
>: >Peppa (pe...@peppa.com) wrote:
>: >: The problem is: how can I print on the screen the names
>: >: of the 3 records with the highest value, using an elegant
>: >: algorithm (this means reading the linked list possibly just
>: >: one time), and without ordering the list?
>: >
>: >It may be easier to order the data as it gets initially read into
>: >the list, and then just print the first three elements on the list.
>: >I think any other algorithm would need multiple traversals of the list,
>
>: Would it not be possible to have three pointers for the big, bigger and
>: biggest and keep reassigning them as you traverse the list so that if the
>: current value is > then big, bigger or biggest it is pointed to by the
>: appropriate one and one is bumped?
>
>I think it would certainly be possible, but I still think that ordering
>the list as the data is read in would be the best solution, most
>efficient solution.

Ordering the data as it is read requires an O(N^2) algorithm. Reading the
data and performing Greg's algorithm is O(N). So Greg's algorithm
will be significantly more efficient for anything but trivially small data.
Even if you use a separate O(N log N) sort it still isn't as good.

--
-----------------------------------------
Lawrence Kirby | fr...@genesis.demon.co.uk
Wilts, England | 7073...@compuserve.com
-----------------------------------------


Greg Martin

Jun 21, 1999
Well, I thought this another good learning exercise so I tapped out a
version of this. It wouldn't have taken so long if someone hadn't replaced
my == key with a = key :-) but that being said and for what it's worth,
here's my code.
/*******************************************/
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void makeList(char *, int);
int setOrder(void);

struct Rec
{
    char name[20];
    int value;
};

struct Link
{
    struct Link *pfwd;
    struct Rec rec;
};

struct Link *phead;
struct Link *ptail;
struct Link *pnew;
struct Rec one;      /* record with the highest value seen so far */
struct Rec two;      /* second highest */
struct Rec three;    /* third highest */
int first;

int main(void)
{
    char nameList[][6] = {"name1", "name2", "name3", "name4", "name5",
                          "name6", "name7"};
    int valueList[] = {11, 62, 53, 14, 44, 96, 17};
    int i;

    one.value = 0;
    two.value = 0;
    three.value = 0;
    first = 1;
    pnew = NULL;

    /* build the linked list from the test data */
    for (i = 0; i < 7; ++i)
        makeList(nameList[i], valueList[i]);

    /* one pass over the list, keeping the three largest records */
    pnew = phead;
    for (i = 0; i < 7; ++i)
    {
        setOrder();
        pnew = pnew->pfwd;
    }

    printf("%s %d\n%s %d\n%s %d\n", one.name, one.value,
           two.name, two.value, three.name, three.value);
    return 0;
}

/* append one record at the tail of the list */
void makeList(char *oneName, int oneValue)
{
    if ((pnew = malloc(sizeof(struct Link))) == NULL)
    {
        puts("malloc() failed");
        exit(EXIT_FAILURE);
    }
    if (first)
        phead = pnew;
    pnew->pfwd = NULL;
    if (!first)
        ptail->pfwd = pnew;
    strcpy(pnew->rec.name, oneName);
    pnew->rec.value = oneValue;
    ptail = pnew;
    first = 0;
}

/* slot the current record (*pnew) into one/two/three,
   bumping the smaller ones down as needed */
int setOrder(void)
{
    if (pnew->rec.value > one.value)
    {
        three.value = two.value;
        strcpy(three.name, two.name);
        two.value = one.value;
        strcpy(two.name, one.name);
        one.value = pnew->rec.value;
        strcpy(one.name, pnew->rec.name);
        return EXIT_SUCCESS;
    }
    if (pnew->rec.value > two.value)
    {
        three.value = two.value;
        strcpy(three.name, two.name);
        two.value = pnew->rec.value;
        strcpy(two.name, pnew->rec.name);
        return EXIT_SUCCESS;
    }
    if (pnew->rec.value > three.value)
    {
        three.value = pnew->rec.value;
        strcpy(three.name, pnew->rec.name);
        return EXIT_SUCCESS;
    }
    return EXIT_SUCCESS;
}


Peppa wrote in message <7klfla$j2r$1...@usenet46.supernews.com>...


>Hi,
>I have in memory a linked list like this:
>
>struct rec {
> char name[20];
> int value;
> struct rec *next;
> };
>
>

>The problem is: how can I print on the screen the names
>of the 3 records with the highest value, using an elegant
>algorithm (this means reading the linked list possibly just
>one time), and without ordering the list?

Dann Corbit

Jun 21, 1999
Paul D. Boyle <bo...@laue.chem.ncsu.edu> wrote in message
news:7kmqs5$sae$1...@uni00nw.unity.ncsu.edu...
[snip]
> I stand corrected. Thank-you for information. Algorithm analysis is
> certainly a weak point for me. How does one evaluate algorithms to
> determine whether the algorithm is O(N) or O(N^^2)?

Count the operations [often mathematically or by gedanken experiment].

An algorithm is O(n) if, for some constant C, the run time of the algorithm
for a problem of size n is less than or equal to:
C * (k*n + b), where k & b are arbitrary constants.
I.e. from some special point forward, the running time *always* lies under a
straight line drawn on the graph.

An algorithm is O(n^2) if, for some constant C, the run time of the algorithm
for a problem of size n is less than or equal to:
C * (k*n*n + c*n + b), where k, c & b are arbitrary constants.
That is to say, from some special point forward, the running time *always*
lies under a parabola drawn on the graph.

An algorithm that is O(n) is, by definition, also O(n*n).

Similarly for O(exp(n)), O(n^k), O(n log n), etc.
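
As a rough, untested illustration of counting operations (the function
names are just made up for the example), compare:

#include <stddef.h>

/* About n additions in total -> O(n). */
long sum_linear(const int *a, size_t n)
{
    long s = 0;
    size_t i;

    for (i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* n iterations of n additions each, about n*n in total -> O(n*n). */
long sum_pairs(const int *a, size_t n)
{
    long s = 0;
    size_t i, j;

    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            s += (long)a[i] * a[j];
    return s;
}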
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
"The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. Newsgroup http://www.dejanews.com/~c_a_p
C.A.P. FAQ: ftp://38.168.214.175/pub/Chess%20Analysis%20Project%20FAQ.htm

Paul D. Boyle

Jun 22, 1999
Lawrence Kirby (fr...@genesis.demon.co.uk) wrote:
: In article <7klp1o$od3$1...@uni00nw.unity.ncsu.edu>

: bo...@laue.chem.ncsu.edu "Paul D. Boyle" writes:

: >Greg Martin (gr...@netidea.com) wrote:
: >
: >: Would it not be possible to have three pointers for the big, bigger and
: >: biggest and keep reassigning them as you traverse the list so that if the
: >: current value is > then big, bigger or biggest it is pointed to by the
: >: appropriate one and one is bumped?
: >
: >I think it would certainly be possible, but I still think that ordering
: >the list as the data is read in would be the best solution, most
: >efficient solution.

: Ordering the data as it is read requires an O(N^^2) algorithm. Reading the
: data and performing Greg's algorithm is O(N). So Greg's algorithm
: will be signficantly more efficient for anything but trivially small data.
: Even if you use a separate O(n log n) sort it still isn't as good.

I stand corrected. Thank you for the information. Algorithm analysis is
certainly a weak point for me. How does one evaluate algorithms to
determine whether an algorithm is O(N) or O(N^2)?

Paul

J. Wegener NOSPAM

Jun 22, 1999
Lawrence Kirby wrote in message <929997...@genesis.demon.co.uk>...

>In article <7klp1o$od3$1...@uni00nw.unity.ncsu.edu>
> bo...@laue.chem.ncsu.edu "Paul D. Boyle" writes:
>
>
>Ordering the data as it is read requires an O(N^^2) algorithm. Reading the
>data and performing Greg's algorithm is O(N). So Greg's algorithm
>will be signficantly more efficient for anything but trivially small data.
>Even if you use a separate O(n log n) sort it still isn't as good.
>
>--
>-----------------------------------------
>Lawrence Kirby | fr...@genesis.demon.co.uk
>Wilts, England | 7073...@compuserve.com
>-----------------------------------------
>

The answer to which solution is more efficient overall depends on the
read/write ratio of the list. If it is mostly read, the "sort on insert" is
best; if new objects are often added to the list, the "sort on read"
would be better...

Cheers,
Johan
-------------------------
Johan Wegener
Email: xjwe...@xpost.xtele.xdk
NOSPAM: Delete xxxx from my email address


Richard Heathfield

Jun 22, 1999
Greg Martin <gr...@netidea.com> wrote in article
<Nyyb3.5118$gK.1...@typ22b.nn.bcandid.com>...

> Well, I thought this another good learning exercise so I tapped out a
> version of this. It wouldn't have taken so long if someone hadn't replaced
> my == key with a = key :-)

I hate it when that happens.

;-)

--
Richard Heathfield

The bug stops here.


Lawrence Kirby

Jun 22, 1999
In article <7kmtvg$9v6$1...@news.inet.tele.dk>

xjwe...@xpost.xtele.xdk "J. Wegener NOSPAM" writes:

>Lawrence Kirby wrote in message <929997...@genesis.demon.co.uk>...
>>In article <7klp1o$od3$1...@uni00nw.unity.ncsu.edu>
>> bo...@laue.chem.ncsu.edu "Paul D. Boyle" writes:
>>
>>
>>Ordering the data as it is read requires an O(N^^2) algorithm. Reading the
>>data and performing Greg's algorithm is O(N). So Greg's algorithm
>>will be signficantly more efficient for anything but trivially small data.
>>Even if you use a separate O(n log n) sort it still isn't as good.
>>
>>--
>>-----------------------------------------
>>Lawrence Kirby | fr...@genesis.demon.co.uk
>>Wilts, England | 7073...@compuserve.com
>>-----------------------------------------
>>
>
>The answer to which solution is more efficient overall, depends on the
>read/write ratio of the list. If it is mostly read, the "sort on insert" is
>the best , if new objects are often added to the the list the "sort on read"
>would be better...

No, the point is that Greg's solution is better than any solution that
sorts the list (ignoring O(N) sorts for simplicity; the argument is
different there but it almost certainly still holds). Greg's approach can
be applied incrementally as the list is built. The overhead is O(1) per
element added. If you "sort on insert" the overhead is O(N) per element
added. If you read all elements and sort subsequently you can reduce
that to an amortized O(log N) per element added. It still doesn't match
the O(1) of Greg's approach.

Greg's approach could fall over in two respects:

1. If you generalise the requirement from the top 3 to the top T elements
then you have a complexity dependent on T (a rough sketch of that case
follows below).

2. The algorithm falls down if you need to delete elements from the list.
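
For the generalised top-T case, a rough and untested sketch (using the
struct rec from the original post; the function name top_t is just made
up for illustration) might keep an array of pointers to the best T
records seen so far:

#include <stddef.h>

struct rec {
    char name[20];
    int value;
    struct rec *next;
};

/* Fill top[0..t-1] with pointers to the t records with the largest
   values (top[0] is the largest); slots that cannot be filled stay
   NULL.  This is O(T) work per list element rather than O(1). */
void top_t(const struct rec *head, const struct rec **top, size_t t)
{
    const struct rec *p;
    size_t i, j;

    for (i = 0; i < t; i++)
        top[i] = NULL;

    for (p = head; p != NULL; p = p->next) {
        for (i = 0; i < t; i++) {
            if (top[i] == NULL || p->value > top[i]->value) {
                for (j = t - 1; j > i; j--)   /* shift smaller entries down */
                    top[j] = top[j - 1];
                top[i] = p;
                break;
            }
        }
    }
}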

Greg Martin

Jun 22, 1999
I must apologize ... I forgot to free the memory allocated. Hopefully the
folks who write OSes have been more diligent!
Greg.
Greg Martin wrote in message ...

>Well, I thought this another good learning exercise so I tapped out a
>version of this. It wouldn't have taken so long if someone hadn't replaced

Chris Torek

Jun 23, 1999
>Paul D. Boyle <bo...@laue.chem.ncsu.edu> wrote in message
>news:7kmqs5$sae$1...@uni00nw.unity.ncsu.edu...
>[snip]
>> I stand corrected. Thank-you for information. Algorithm analysis is
>> certainly a weak point for me. How does one evaluate algorithms to
>> determine whether the algorithm is O(N) or O(N^^2)?

In article <LKCb3.1322$Qv4.8209@client> Dann Corbit
<dco...@solutionsiq.com> writes (brackets his):


>Count the operations [often mathematically or by gedanken experiment].

Note that this requires a careful definition of "operations".
Usually this is phrased as "count the number of steps in the
algorithm", for some suitable definition of "algorithm" that
forces a "step" to be something we can do in a single time-unit.
To keep my own followup at least marginally on topic, some examples:

i++; /* this counts as one step */
j = k / 3; /* this is another single step */
strcpy(d, s); /* but this is many steps */

Even though the strcpy() might look like "one operation" (depending
on how you view it), its actual internal implementation is equivalent
to "N instances of copying one char from s to d", where N is
strlen(s)+1.

Now to expand in the off-topic direction (the real reason for this
followup :-) )....

I flipped through "Fundamentals of Computer Algorithms", by Horowitz
and Sahni; this was the text the professor assigned in the class
where I did algorithm analysis (so many years ago now...). I am
not sure this text itself is a particularly good place to learn
how to "evaluate algorithms to [find their bounds]" ("recurrence
relation" is not even in the index), but it is certainly better
than nothing.

Given an algorithm with loops or recursive calls, the way you find
a big-O equivalence class for that algorithm is to write down a
"recurrence relation" for the time taken to execute the code. For
instance, for merge sort, one gets (fixed width font required here;
$x$ denotes a variable x, etc.):

        {
T(n) =  { $a$,             $n = 1$, $a$ a constant
        { $2T(n/2) + cn$,  $n > 1$, $c$ a constant

You then apply induction-type reasoning to show that, e.g., when
$n$ is a power of two, $T(n) = an + cn \log_2 n$. In big-O notation,
all the constant factors vanish, so this shows that mergesort is
O(n log n).
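
One way to see it is to unroll the recurrence for $n = 2^k$:

  T(n) = 2T(n/2) + cn
       = 4T(n/4) + cn + cn
       = 8T(n/8) + cn + cn + cn
         ...
       = 2^k T(1) + k cn
       = an + cn \log_2 n

since $2^k = n$ and $k = \log_2 n$.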

Since big-O provides only an upper bound:

>An algorithm that is O(n) is by definition, also O(n*n).

but one can also find a lower bound, \Omega(n); putting those bounds
together gives you yet another notation, \Theta(n). Quoting from
Horowitz & Sahni, using TeX notation and just fiddling with the
typography a bit:

$f(n) = \Theta(g(n))$ iff there exist positive constants
$c_1$, $c_2$, and $n_0$ such that for all $n > n_0$,
$c_1 |g(n)| \le |f(n)| \le c_2 |g(n)|$.

And:

An even stronger mathematical notation is given by
the following.

Definition: $f(n) \sim o(g(n))$
(read as ``$f(n)$ is asymptotic to $g(n)$'') iff:

$$\lim_{n\rightarrow\infty} {f(n) \over g(n)} = 1$$

However, most people are not interested in getting this precise;
big-O suffices to place an upper limit on runtime as $n$ grows,
which is usually what we want to know.

(Personally, I always found the worst part of dealing with recurrence
relations to be going from an open form to a closed one -- you just
have to have memorized all those formulae like "1 + 2 + ... + n =
n(n+1)/2", etc., and recognize them in the recurrences. Once you
can see this summation and recognize the closed form, it instantly
becomes obvious that, e.g., bubble sort is O(n*n).)

And, to touch again on what I said at the beginning, note that this
kind of analysis often assumes that "a = a + 1" is itself an O(1)
operation, but if "a" is (say) a bignum in Lisp, that might not be
true. On a PC, if "a" is in a machine register and we use "inc"
to increment it, the operation will always take one "clock tick"
(or three or 17 or 1/3 or whatever, depending on the CPU design,
but "one tick" is close enough); but if "a" is one of those bignum
things, "a + 1" may require reallocating "a" into a larger space
as it changes from (say) 340282366920938463463374607431768211455
to 340282366920938463463374607431768211456 (the first number fits
in 128 unsigned bits; the second needs 129). Thus, there are plenty
of places to trip up in analysing algorithms.
--
In-Real-Life: Chris Torek, Berkeley Software Design Inc
El Cerrito, CA Domain: to...@bsdi.com +1 510 234 3167
http://claw.bsdi.com/torek/ (not always up) I report spam to abuse@.

super...@my-deja.com

Jun 25, 1999
A BIG thank you to all
who answered my question.

Luca

Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
