
A fun little problem


Sigfried Gold

Jul 30, 1995
Ok, here's a little challenge, not too hard to do for fun, but not
idiotically easy, at least not for me. This seems like a contrived
problem, but actually it came up in real life.

The problem is: write a program that finds all the combinations of
a set of whole numbers that add up to a larger number.

Here's the syntax of my version: match total number number number . . .

And here's an example:

>match 9 1 2 3 4 5 6 7
2 3 4 : 9
1 3 5 : 9
4 5 : 9
1 2 6 : 9
3 6 : 9
2 7 : 9

The key here is to come up with something that runs fast. The most
obvious solution (try every combination) runs incredibly slow when
you have more than 20 or 30 numbers to sum, but I came up with an
optimization that ran faster than my screen would print for any set of
numbers. I never tested it further than this, and haven't looked at it
in years. I'm just sharing the problem here because I'm sure there's
a few folks here who would have fun with it.

Also, writing a short program is not nearly as desirable for this
as writing a program that's easy to read and understand. Looking
back at my solution to this, it's plenty short enough, but it could
take days now to figure out what I was trying to do.

David C. Jones

Jul 30, 1995
In <3vg23j$g...@miso.wwa.com> sigf...@miso.wwa.com (Sigfried Gold) writes:

>Ok, here's a little challenge, not too hard to do for fun, but not
>idiotically easy, at least not for me. This seems like a contrived
>problem, but actually it came up in real life.

>The problem is: write program that finds all the combinations of
>a set of whole numbers that add up to a larger number.

>The key here is to come up with something that runs fast. The most
>obvious solution (try every combination) runs incredibly slow when
>you have more than 20 or 30 numbers to sum, but I came up with an
>optimization that ran faster than my screen would print for any set of
>numbers. I never tested it further than this, and haven't looked at it
>in years. I'm just sharing the problem here because I'm sure there's
>a few folks here who would have fun with it.

I think obvious is dependent upon point of view. The first thought
that came to my mind was to create a recursive algorithm. Set up
a loop that goes from 1 to N where N is the number of elements in the
set. Subtract the first element from the total, remove it from the
list (unless you can use the same number twice) and then pass the
new list and total back into the same procedure. All you have to
do is check to see when a null list or a total of zero is passed.
The obvious disadvantage is that your stack space is going to get
wiped out.....

Actually you would have to do a few mods to make it more efficient.
I would sort the set after it was input. Start the loop with the
highest element. After the first run of the loop when you use the
second highest element, delete the first element also. The
program has already computed all possible combinations that use
the high element. This keeps duplicates like 9-5-3 and 5-9-3
from coming up. Only one combination will appear. I'm sure if
I sat down long enough I could come up with a few other things.
There is a type of logic problem called Cross Sums which works
on a very similar basis and there are tricks you can use from
that.

(Actually, now that I think about it, you wouldn't even have to
use a loop. All you have to do is delete the high number from
the list. Then pass the list twice, one with the original
total and one with the high number deleted from the total.)
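
For anyone who wants to see that two-branch idea spelled out, here is a
minimal C sketch of it (the names, fixed array sizes and input handling
are illustrative additions, not Davy's actual code): at each position you
either take the current number and subtract it from what's left, or skip
it, and you print a combination whenever the remainder hits zero.

/* Sketch of the two-branch recursion described above.  Not anyone's
   posted solution -- array sizes and names are illustrative only. */
#include <stdio.h>
#include <stdlib.h>

#define MAXN 100

static int nums[MAXN];   /* the candidate numbers, sorted largest first */
static int used[MAXN];   /* 1 if nums[i] is in the current combination */
static int count;        /* how many numbers were given */

static void search(int i, int left)
{
    int j;
    if (left == 0) {                  /* remainder hit zero: print it */
        for (j = 0; j < i; j++)
            if (used[j])
                printf("%d ", nums[j]);
        printf("\n");
        return;
    }
    if (i == count || left < 0)       /* ran out of numbers, or overshot */
        return;
    used[i] = 1;                      /* branch 1: take nums[i] */
    search(i + 1, left - nums[i]);
    used[i] = 0;                      /* branch 2: leave nums[i] out */
    search(i + 1, left);
}

static int desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;   /* largest first */
}

int main(int argc, char *argv[])
{
    int j, total;
    if (argc < 3 || argc - 2 > MAXN) {
        fprintf(stderr, "usage: match total num1 num2 ...\n");
        return 1;
    }
    total = atoi(argv[1]);
    count = argc - 2;
    for (j = 0; j < count; j++)
        nums[j] = atoi(argv[j + 2]);
    qsort(nums, count, sizeof(int), desc);
    search(0, total);
    return 0;
}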

>Also, writing a short program is not nearly as desireable for this
>as writing a program that's easy to read and understand. Lookin

I have found that this is mostly dependent upon how well you document
your code. I know quite a few people who have taken a programming
class and still can't understand "Hello World" without seeing
comments in it.

Davy J
--
----------------------------------------------------------------------
7800 - 28 carts 2600 - 146 carts INTV - 24 R.E.M. Boots - 14
Maintainer of the Tarot Layouts FAQ
"Wherever you go, Whatever you do, Whatever you say, Say it with Love"

James Irl Waldby

Jul 30, 1995
sigf...@miso.wwa.com (Sigfried Gold) writes:

>Ok, here's a little challenge, not too hard to do for fun, but not
>idiotically easy, at least not for me. This seems like a contrived
>problem, but actually it came up in real life.

>The problem is: write program that finds all the combinations of
>a set of whole numbers that add up to a larger number.

If you have a program to solve this problem, then it can solve the
"PARTITION" problem, a well-known NP-complete problem. (See page 47
of M. R. Garey & D. S. Johnson, "Computers and Intractability".)
At the moment there is no known solution for PARTITION that runs
in polynomial time.

Roughly, PARTITION says: Given a set A of sized elements, can we find
subsets B and C that partition A (i.e., B and C are disjoint and their
union is A) such that the sum of sizes over B equals that over C?

For example, if A={2,3,3,3,5}, B={2,3,3}, C={3,5} we have 8=8 and see
that the answer is Yes. Now this problem is equivalent to saying
"match 8 2 3 3 3 5". Generally, if the sum of sizes over A is x
we would use "match x/2 A1 A2 ... An" and if match prints any subsets
the answer is Yes. (There are some trivial issues that arise if any Ai is
non-integral, or more importantly, irrational, but the argument still works.)
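
To make the reduction concrete, here is a small sketch in C.  The routine
match_subsets() is a hypothetical stand-in for whatever subset-summing
program you already have (assumed to return how many subsets it found);
it is not a function from anyone's posted code.

/* Deciding PARTITION with an existing subset-sum enumerator.
   match_subsets() is a hypothetical stand-in, assumed to return the
   number of subsets of nums[0..n-1] that sum exactly to target. */
#include <stddef.h>

size_t match_subsets(long target, const long *nums, size_t n);  /* assumed */

int partition_possible(const long *nums, size_t n)
{
    long total = 0;
    size_t i;
    for (i = 0; i < n; i++)
        total += nums[i];
    if (total % 2 != 0)   /* an odd total can never split into equal halves */
        return 0;
    return match_subsets(total / 2, nums, n) > 0;
}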

NP-completeness of a problem implies that for any solution method P, and
any polynomial expression t(s) [where s = problem size in some reasonable
metric] there is a data set A such that P takes more time than t(s(A))
to solve with A. In other words, you might have a heuristic approach that
works fast for many problems, but there are data sets it is going to take
a long time to solve. So it would be a big breakthrough if you could
prove you have a good solution!

Incidentally, looking at the amount of output from this problem is another
way to see that it can be very time-consuming to solve. Consider the
problem "match 40 1 1 1 1 ... 1" where there are 50 1's. This isn't a
particularly big problem or hard to solve, but it has (50 choose 40)
or about 10 billion subset solutions.
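
(A quick arithmetic check of that figure, in the thread's own notation:
(50 choose 40) = (50 choose 10) = 50*49*48*47*46*45*44*43*42*41 / 10!
= 10,272,278,170, which is indeed a shade over 10^10.)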

In contrast, PARTITION doesn't have much output -- it only needs to say
Yes or No -- but still can take more time than any polynomial bound.

>Here's the syntax of my version: match total number number number . . .
>And here's an example:
>>match 9 1 2 3 4 5 6 7
> 2 3 4 : 9
> 1 3 5 : 9
> 4 5 : 9
> 1 2 6 : 9
> 3 6 : 9
> 2 7 : 9

>The key here is to come up with something that runs fast. The most
>obvious solution (try every combination) runs incredibly slow when
>you have more than 20 or 30 numbers to sum, but I came up with an
>optimization that ran faster than my screen would print for any set of
>numbers. I never tested it further than this, and haven't looked at it
>in years. I'm just sharing the problem here because I'm sure there's
>a few folks here who would have fun with it.

>Also, writing a short program is not nearly as desireable for this

Paul Hankin

Jul 30, 1995
In article <3vgh0p$j...@vixen.cso.uiuc.edu>,

James Irl Waldby <jw1...@glhpx11.cen.uiuc.edu> wrote:
>sigf...@miso.wwa.com (Sigfried Gold) writes:
>
>>Ok, here's a little challenge, not too hard to do for fun, but not
>>idiotically easy, at least not for me. This seems like a contrived
>>problem, but actually it came up in real life.
>
>>The problem is: write program that finds all the combinations of
>>a set of whole numbers that add up to a larger number.
>
>If you have a program to solve this problem, then it can solve the
>"PARTITION" problem, a well-known NP-complete problem. (See page 47
>of M. R. Garey & D. S. Johnson, "Computers and Intractability".)
>At the moment there is no known solution for PARTITION that runs
>in polynomial time.
>
[...snip...]

>
>NP-completeness of a problem implies that for any solution method P, and
>any polynomial expression t(s) [where s = problem size in some reasonable
>metric] there is a data set A such that P takes more time than t(s(A))
>to solve with A. In other words, you might have a heuristic approach that
>works fast for many problems, but there are data sets it is going to take
>a long time to solve. So it would be a big breakthrough if you could
>prove you have a good solution!
>

NP completeness says no such thing. A problem is NP complete if the
problem is in NP and the existence of an algorithm which solves the
problem in polynomial time implies the existence of an algorithm for
solving (say) the k-clique problem in polynomial time.
However, I still agree that a good solution would be a big breakthrough,
as it would prove P=NP.

[more snipped]


>
>>Also, writing a short program is not nearly as desireable for this
>>as writing a program that's easy to read and understand. Looking
>>back at my solution to this, it's plenty short enough, but it could
>>take days now to figure out what I was trying to do.

I can't see the point in a programming competition which says `write this
trivial program in a sound way using all the principles of software
engineering that you can think of' because (a) most sensible programming
techniques are a matter of following guidelines, (b) it's impossible
to judge whose program is best, as different people have different ideas
about style and (c) it isn't very interesting IMO. Writing the shortest
bit of code you can think up is a _much_ better competition as it requires
serious ingenuity to compress the code to its minimal form.
I wouldn't dispute the fact that writing terse code is not as
desirable as clear code for most purposes, but when it comes to
contests, I claim a short-is-best rule makes for more interest.

-- Paul Hankin


Lawrence Foard

Jul 30, 1995
In article <3vg23j$g...@miso.wwa.com>,

Sigfried Gold <sigf...@miso.wwa.com> wrote:
>The problem is: write program that finds all the combinations of
>a set of whole numbers that add up to a larger number.

Here is a quite recursive hack I made up for it:

/* recursive solution to problem */
#include <stdio.h>
#include <stdlib.h>   /* for atoi() and qsort() */

#define MAX_NUMS 1000

/* list of numbers to sum */
int nums[MAX_NUMS];
/* if true number is included in current sum */
char nflags[MAX_NUMS];
/* number of numbers to try summing */
int nnums;

/* qsort comparator: descending order, so the largest number comes first */
int ic(const void *a, const void *b)
{
return *(const int *)b - *(const int *)a;
}

void show_list(int subsum,int start)
{
int a;
/* oh well didn't make it in this branch */
if (start==nnums)
return;
/* follow branch not including current # */
nflags[start]=0;
show_list(subsum,start+1);
subsum-=nums[start];
/* overshot */
if (subsum<0)
return;
nflags[start]=1;
if (subsum)
{
/* follow branch including current # */
show_list(subsum,start+1);
return;
}
for(a=0;a<=start;a++)
if (nflags[a])
printf("%d ",nums[a]);
printf("\n");
}

main()
{
int big;
char buff[20];
printf("Enter number to find pieces of> ");
fgets(buff,20,stdin);
big=atoi(buff);
printf("Enter list of numbers, one per line ending with <cr>\n");
nnums=0;
while(fgets(buff,20,stdin) && (*buff!=10) && (nnums<MAX_NUMS))
nums[nnums++]=atoi(buff);
/* want highest number first */
qsort(nums,nnums,sizeof(int),ic);
printf("List of numbers adding up to %d\n",big);
show_list(big,0);
}

Ali Rahimi

Jul 30, 1995
In article <3vgojd$8...@lyra.csx.cam.ac.uk>,

Paul Hankin <pd...@cus.cam.ac.uk> wrote:
>I can't see the point in a programming competition which says `write this
>trivial program in a sound way using all the principles of software
>engineering that you can think of' because (a) most sensible programming
>techniques are a matter of following guidelines, (b) it's impossible
>to judge whose program is best, as different people have different ideas
>about style and (c) it isn't very interesting IMO. Writing the shortest
>bit of code you can think up is a _much_ better competetion as it requires
>serious ingenuity to compress the code to its minimal form.

Paul, that's called ingenuity being misused. If you're smart, you'd use
your ingenuity to make your code more readable and more useful, not more
obfuscated. If it's a programming contest we're talking about, then the
object should be to write THE BEST program. If it's an obfuscated programming
contest we're talking about, then the object should be to write THE MOST
OBFUSCATED program.
Granted, each has its merits, but i respect the ingenuity of a builder more
than i respect that of a bomber. What's the point of using your ingenuity
to render a program useless?

>-- Paul Hankin
Ali "Data Structure + Algorithm < Program" Rahimi.

Ali Rahimi

Jul 30, 1995
In article <3vg23j$g...@miso.wwa.com>,
Sigfried Gold <sigf...@miso.wwa.com> wrote:
>The problem is: write program that finds all the combinations of
>a set of whole numbers that add up to a larger number.

Using divide and conquer:

(defun knap (sum list &optional sofar)
(cond
((zerop sum) (list sofar))
((< sum 0) nil)
((null list) nil)
(t (append
(knap (- sum (car list)) (cdr list) (cons (car list) sofar))
(knap sum (cdr list) sofar)))))

>(knap 9 '( 1 2 3 4 5 6 7))
((6 2 1) (5 3 1) (4 3 2) (7 2) (6 3) (5 4))

basically, you try to make the sum without some arbitrarily chosen number
from the list, AND you try to make the sum minus that number using the
rest of the list.

as other people have pointed out, you can add in optimizations to prune
the recursion tree. one such optimization would be to keep a running
sum of the list (since we are removing numbers from it, its sum would
decrease), and return nil whenever the required sum is greater than the
sum of the available numbers. Here's the modified code:

(defun knap-1 (sum list list-sum sofar)
(cond
((zerop sum) (list sofar))
((= list-sum sum) (list (append sofar list))) ; another opt.
((< list-sum sum) nil)
((< sum 0) nil)
((null list) nil)
(t (append
(knap-1 (- sum (car list)) (cdr list) (- list-sum (car list))
(cons (car list) sofar))
(knap-1 sum (cdr list) (- list-sum (car list)) sofar)))))

(defun knap (sum list)
(knap-1 sum list (apply #'+ list) nil))


Here is the effect of the optimization:

BEFORE:
>>(time (knap 9 '( 1 2 3 4 5 6 7)))
real time : 0.283 secs
run time : 0.133 secs
((6 2 1) (5 3 1) (4 3 2) (7 2) (6 3) (5 4))

AFTER:
>>(time (knap 9 '( 1 2 3 4 5 6 7)))
real time : 0.733 secs
run time : 0.250 secs
((6 2 1) (5 3 1) (4 3 2) (7 2) (6 3) (5 4))


To be honest, i don't know whether the reason why the optimized version
runs consistently slower than the simpler method is that the complexity
of the code is affecting the runtime or that i simply have a bug in my
program.

Ali.

Russell Bornschlegel

Jul 30, 1995
In article <3vh0fn$3...@fido.asd.sgi.com>,

Ali Rahimi <al...@bourgeois.engr.sgi.com> wrote:
>In article <3vgojd$8...@lyra.csx.cam.ac.uk>,
>Paul Hankin <pd...@cus.cam.ac.uk> wrote:
>>I can't see the point in a programming competition which says `write this
>>trivial program in a sound way using all the principles of software
>>engineering that you can think of' because (a) most sensible programming
>>techniques are a matter of following guidelines, (b) it's impossible
>>to judge whose program is best, as different people have different ideas
>>about style and (c) it isn't very interesting IMO. Writing the shortest
>>bit of code you can think up is a _much_ better competetion as it requires
>>serious ingenuity to compress the code to its minimal form.

Unfortunately, for most contest program specifications that we've seen here,
the minimal form is a degenerate form.

10 LIST

is not ingenuity, nor is

void main( void ) { printf( "The answer is 53.\n" ); }

>Paul, that's called ingenuity being misused. If you're smart, you'd use
>your ingeniuty to make your code more readable and more useful, not more
>obfuscated. If it's a programming contest we're talking about, then the
>object should be to write THE BEST program. If it's an obfuscated programming
>contest we're talking about, then the object should be to write THE MOST
>OBFUSCATED program.

It seems to me that "best" is a lot more subjective than "most obfuscated"
or "shortest non-degenerate" solution. If people are going to present
contests here, they'd better be awfully damn clear in their definitions.

There's a balance between following all the traditional "clean coding"
style guidelines and actually getting work done. An example my project
leader and I were discussing, while explaining function prototyping to a
novice programmer, is the practice of arranging a module so that
lower-level functions come first just to avoid having to prototype those
functions for forward referencing, versus moving all the prototypes into a
header so the arrangement is less critical. When you're writing a
quick-n-dirty program that won't exceed one module, function prototypes
are pretty much a waste of time, and we know to look for "main" at the
end of the module.

Any "toy program" specification - as for a contest - is probably not
going to be a multimodule affair, so the "best" way of coding it is going
to be very different from how you'd code a "real" program.

"Big" operating systems that try to integrate all the functionality in
the world (e.g. Windoze) pose another "right way of coding" problem. If
you do everything "by the book," your program winds up five times as
large as it might have been otherwise. The OS literature says this is
okay because it will still run on the next version of the OS no matter
what changes they make. So they release a new version of the OS that has
the next big advance in operating systems, and either they put in a ton
of backward compatibility support and bloat the OS, or they don't and
your old app doesn't run, or the worst of both worlds - they support all
but one feature that your app needs - leaving you with a bloated OS and a
bloated app which doesn't run. You will then have nothing except the
consolation of knowing that you wrote a really good program by the book.
At least you commented it heavily, so you'll be able to port it to
Windoze '99. Is this the "right" way of coding?

I'm going to shut up now before someone tells me to go to
comp.programming.philosophy...

--
Kaleja Lutrec / Russell Bornschlegel / kal...@rahul.net
"I don't read minds, but sometimes I look at the pictures."

Paul Hankin

Jul 31, 1995
In article <3vh0fn$3...@fido.asd.sgi.com>,
Ali Rahimi <al...@bourgeois.engr.sgi.com> wrote:
>In article <3vgojd$8...@lyra.csx.cam.ac.uk>,
>Paul Hankin <pd...@cus.cam.ac.uk> wrote:
>>I can't see the point in a programming competition which says `write this
>>trivial program in a sound way using all the principles of software
>>engineering that you can think of' because (a) most sensible programming
>>techniques are a matter of following guidelines, (b) it's impossible
>>to judge whose program is best, as different people have different ideas
>>about style and (c) it isn't very interesting IMO. Writing the shortest
>>bit of code you can think up is a _much_ better competetion as it requires
>>serious ingenuity to compress the code to its minimal form.
>
>Paul, that's called ingenuity being misused. If you're smart, you'd use
>your ingeniuty to make your code more readable and more useful, not more
>obfuscated. If it's a programming contest we're talking about, then the
>object should be to write THE BEST program. If it's an obfuscated programming
>contest we're talking about, then the object should be to write THE MOST
>OBFUSCATED program.
>Granted, each has its merits, but i respect the ingeniuty of a builder more
>than i respect that of a bomber. What's the point of using your ingeniuty
>to render a program useless?
>
>>-- Paul Hankin
> Ali "Data Structure + Algorithm < Program" Rahimi.

I appreciate your point, but everyone gets to write `good' programs
every day - for me, a competition is more interesting if it challenges
in ways which are unusual, so I have to invent _new_ ideas to get a
good solution, rather than falling back on everyday methods.
There's already a `good program contest' --- it's called the software
marketplace in the real world.

Your builder analogy is a poor one: I'd say a builder has little ingenuity;
he may be highly skillful at his trade, but it doesn't take imagination
or creativity to build a house, rather knowledge of the techniques of
building, & experience and ability at building. The `good program' contest
is rather like challenging 5 builders to build a wall - you're likely
to get 5 walls which are more or less identical.

-- Paul Hankin


Steve E. Chapel

Jul 31, 1995
sigf...@miso.wwa.com (Sigfried Gold) writes:

>The problem is: write program that finds all the combinations of
>a set of whole numbers that add up to a larger number.

>Here's the syntax of my version: match total number number number . . .

>And here's an example:

>>match 9 1 2 3 4 5 6 7
> 2 3 4 : 9
> 1 3 5 : 9
> 4 5 : 9
> 1 2 6 : 9
> 3 6 : 9
> 2 7 : 9

[SPOILER]

My entry:

#include <stdio.h>
#include <stdlib.h>

struct Knap {
long Value;
long Rest;
int Selected;
};

int Comp(const void *A, const void *B) {
return ((struct Knap *) B)->Value -
((struct Knap *) A)->Value;
}

void Fill(struct Knap Sack[], int Sacks, int Current, long Left) {
if (Left == 0) {
int I;
for (I = 0; I < Sacks; I++)
if (Sack[I].Selected)
printf("%ld ", Sack[I].Value);
printf("\n");
return;
}

if (Current >= Sacks || Sack[Current].Rest < Left)
return;

if (Sack[Current].Value <= Left) {
Sack[Current].Selected = 1;
Fill(Sack, Sacks, Current + 1, Left - Sack[Current].Value);
Sack[Current].Selected = 0;
}

Fill(Sack, Sacks, Current + 1, Left);
}

int main(int argc, char *argv[]) {
int I;
long Size;
int Sacks = argc - 2;
struct Knap *Sack;

if (Sacks <= 0) {
printf("Not enough arguments.\n");
exit(1);
}

Size = atol(argv[1]);
Sack = malloc(sizeof(struct Knap) * Sacks);
for (I = 0; I < Sacks; I++) {
Sack[I].Value = atol(argv[I + 2]);
Sack[I].Selected = 0;
}
qsort(Sack, Sacks, sizeof(struct Knap), Comp);

Sack[Sacks - 1].Rest = Sack[Sacks - 1].Value;
for (I = Sacks - 2; I >= 0; I--)
Sack[I].Rest = Sack[I].Value + Sack[I + 1].Rest;

Fill(Sack, Sacks, 0, Size);
free(Sack);

return 0;
}
--
Steve Chapel sch...@cs.ucsb.edu | http://www.cs.ucsb.edu/~schapel
University of California, Santa Barbara | finger -l sch...@eci1.ucsb.edu

Sigfried Gold

Jul 31, 1995
In article <3vh6pk$d...@lyra.csx.cam.ac.uk>,

Paul Hankin <pd...@cus.cam.ac.uk> wrote:
>
>I appreciate your point, but everyone gets to write `good' programs
>everyday - for me, a competition is more interesting if it challenges
>in ways which are unusual, so I have to invent _new_ ideas to get a
>good solution, rather than falling back on everyday methods.
>There's already a `good program contest' --- it's called the software
>marketplace in the real world.

I made the request for this problem that people try to write clear
code for the simple, selfish reason that I want to be able to
understand people's responses to the problem. I am a professional
programmer, but I've never studied computers, mathematics, or anything
other than humanities since high school, so I'm at a disadvantage
trying to keep up with people here.

I haven't had time to thoroughly understand the solutions people have
already given, and even after I get a chance to pore over them a while,
I may need to send some email or something asking for explanations.

So -- for a forum like this, I would agree, the point is not to write
professional code, which you could do at work. The point is to
have fun and to exercise and show off one's brilliance. But before
I can express my appreciation of people's brilliance, I'm going to
have to understand it, and for someone like me, the explanation has to
be pretty clear to keep from going right over my head.

Clive...@armltd.co.uk

Jul 31, 1995
In article <3vicvc$g...@zonker.cs.ucsb.edu>,

Steve E. Chapel <sch...@zonker.cs.ucsb.edu> wrote:
>sigf...@miso.wwa.com (Sigfried Gold) writes:
>>The problem is: write program that finds all the combinations of
>>a set of whole numbers that add up to a larger number.

#include <stdio.h>
int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

(-8

--Clive.
(Disclaimer: I wouldn't believe a word of this if I were you...)

Steve E. Chapel

Jul 31, 1995
Clive...@armltd.co.uk writes:

>In article <3vicvc$g...@zonker.cs.ucsb.edu>,
>Steve E. Chapel <sch...@zonker.cs.ucsb.edu> wrote:
>>sigf...@miso.wwa.com (Sigfried Gold) writes:
>>>The problem is: write program that finds all the combinations of
>>>a set of whole numbers that add up to a larger number.

>#include <stdio.h>
>int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
>"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
>main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

Okay, I've got a challenge for you.

Let's both modify our programs to handle the case in which an unlimited
number of repetitions are allowed. For example:

match 5 3 2 1

Would output something like:
3 + 2 = 5
3 + 1 + 1 = 5
2 + 2 + 1 = 5
2 + 1 + 1 + 1 = 5

I'll post how long it takes me, and you post how long it took you.
Of course, the longer time wins! :-)

Steve E. Chapel

Jul 31, 1995
sch...@daffy.cs.ucsb.edu (Steve E. Chapel) writes:

>Clive...@armltd.co.uk writes:

>>In article <3vicvc$g...@zonker.cs.ucsb.edu>,
>>Steve E. Chapel <sch...@zonker.cs.ucsb.edu> wrote:
>>>sigf...@miso.wwa.com (Sigfried Gold) writes:
>>>>The problem is: write program that finds all the combinations of
>>>>a set of whole numbers that add up to a larger number.

>>#include <stdio.h>
>>int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
>>"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
>>main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

>Okay, I've got a challenge for you.

>Let's both modify our programs to handle the case in which an unlimited
>number of repitions are allowed. For exapmle:

>match 5 3 2 1

>Would output something like:
>3 + 2 = 5
>3 + 1 + 1 = 5
>2 + 2 + 1 = 5
>2 + 1 + 1 + 1 = 5

Oops. Forgot the 1 + 1 + 1 + 1 + 1 = 5

>I'll post how long it takes me, and you post how long it took you.
>Of course, the longer time wins! :-)

Three minutes, including testing.
For a further challenge, see if you can find a bug in my program.

match 10 9 8 7 6 5 4 3 2 1

Output:
9 1
8 2
8 1 1
7 3
7 2 1
7 1 1 1
6 4
6 3 1
6 2 2
6 2 1 1
6 1 1 1 1
5 5
5 4 1
5 3 2
5 3 1 1
5 2 2 1
5 2 1 1 1
5 1 1 1 1 1
4 4 2
4 4 1 1
4 3 3
4 3 2 1
4 3 1 1 1
4 2 2 2
4 2 2 1 1
4 2 1 1 1 1
4 1 1 1 1 1 1
3 3 3 1
3 3 2 2
3 3 2 1 1
3 3 1 1 1 1
3 2 2 2 1
3 2 2 1 1 1
3 2 1 1 1 1 1
3 1 1 1 1 1 1 1
2 2 2 2 2
2 2 2 2 1 1
2 2 2 1 1 1 1
2 2 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1

[SPOILER]

#include <stdio.h>
#include <stdlib.h>

struct Knap {
long Value;
long Rest;
int Selected;
};

int Comp(const void *A, const void *B) {
return ((struct Knap *) B)->Value -
((struct Knap *) A)->Value;
}

void Fill(struct Knap Sack[], int Sacks, int Current, long Left) {
if (Left == 0) {
int I;

for (I = 0; I < Sacks; I++) {
int J;
for (J = 0; J < Sack[I].Selected; J++)
printf("%ld ", Sack[I].Value);
}
printf("\n");
return;
}

if (Current >= Sacks)
return;

if (Sack[Current].Value <= Left) {

++Sack[Current].Selected;
Fill(Sack, Sacks, Current, Left - Sack[Current].Value);
--Sack[Current].Selected;
}

Fill(Sack, Sacks, Current + 1, Left);
}

int main(int argc, char *argv[]) {
int I;
long Size;
int Sacks = argc - 2;
struct Knap *Sack;

if (Sacks <= 0) {
printf("Not enough arguments.\n");
exit(1);
}

Size = atol(argv[1]);
Sack = malloc(sizeof(struct Knap) * Sacks);
for (I = 0; I < Sacks; I++) {
Sack[I].Value = atol(argv[I + 2]);
Sack[I].Selected = 0;
}
qsort(Sack, Sacks, sizeof(struct Knap), Comp);

Sack[Sacks - 1].Rest = Sack[Sacks - 1].Value;
for (I = Sacks - 2; I >= 0; I--)
Sack[I].Rest = Sack[I].Value + Sack[I + 1].Rest;

Fill(Sack, Sacks, 0, Size);
free(Sack);

return 0;
}

Max Hailperin

Jul 31, 1995
I've seen a number of responses now that point out that rather than
searching the whole search-tree of possibilities, the hopeless
sub-trees can be pruned off where using all the remaining numbers
still wouldn't suffice to reach the goal sum.

However, this is missing the forest for the trees, or being
penny-wise but pound-foolish, or whatever your favorite trite-ism is.

Pruning off the hopeless sub-trees is nowhere near as big a win as
making sure you don't recreate identical sub-trees over and over
again. For example, suppose you are trying to reach a goal of 80,
and so far you've chosen to use 23 and 17, so you've got 40 left to
go, and the numbers you've got left to work with are the numbers from
1 to 16. Now suppose you are still trying to reach the goal of 80,
and have so far chosen to use 22 and 18, so you've *again* got 40 left
to go, and suppose that you again have the numbers 1 to 16 left to
work with. Are you all over again going to figure out which subsets
of the set of numbers from 1 to 16 total to 40? In other words, are
you going to solve the exact same sub-problem all over again, just
because it arose in a different context?

All the code I've seen posted so far has fallen into this trap of
re-solving the same subproblems. The way to avoid that is to use a
table to store the sub-problem solutions in as they are created.
There are two options as to how the table can be used:

1) You add the table to a normal tree-recursive solution, such as all the ones
I've seen posted here, with the simple addition of checking the table
before starting on a new sub-problem to see whether the solution is
already there, and if so, just re-using it, and if not, go ahead
and do the tree-recursive step, but record the answer into the
table.

2) You reorganize the computation so that it systematically fills in
the table from smaller subproblems to larger, so that when computing
each entry, the other entries needed to compute it from have
already been computed.

The second approach is the classic "dynamic programming" strategy,
while the first approach is the demand-driven, tree-recursive variant
of dynamic programming that is often called "memoization".

For those who can't wait to get to the code, I enclose below a Scheme
program that uses memoization, as well as the pruning of hopeless
subtrees.

-Max Hailperin
Assistant Professor of Computer Science
Gustavus Adolphus College
St. Peter, MN 56082
USA

========== code follows ==========

;;; (knap lst num) finds all subsets that sum to num of the set of
;;; non-negative numbers represented by the list lst; the result
;;; is a list of lists, one for each subset with the desired sum;
;;; it is assumed that num is a non-negative integer
(define knap
(lambda (lst main-goal)
(let ((lngth (length lst)))
;; Table is a vector of vectors used as a two dimensional
;; array for the memoization; there is one entry for each
;; tail of the lst and each remaining goal sum; the length
;; of tail of lst forms the first index, the remaining goal
;; sum is the second index.
;; Each entry of table is #f (false) if that problem hasn't
;; already been solved and recorded; otherwise it is the list
;; of solutions.
;; Note that this relies on #f being different than the empty
;; list, (), so that we can tell the difference between an
;; unsolved subproblem and one that has been determined to have
;; no solutions. Thus, old-fashioned Scheme implementations
;; that confound #f with () will lose.
(let ((table (let loop ((i lngth)
(table (make-vector (+ 1 lngth))))
(if (< i 0)
table
(begin
(vector-set! table i (make-vector (+ 1 main-goal)
#f))
(loop (- i 1) table))))))
(let ((sum (apply + lst)))
;; (try remaining num sum goal) solves a subproblem
;; remaining is a tail of lst, num is the length of that
;; tail and sum is its sum, and goal is the remaining portion
;; of the original target sum (what's left of main-goal)
(define try
(lambda (remaining num sum goal)
(if (or (< goal 0)
(< sum goal)) ; prune hopeless trees
'()
(or (vector-ref (vector-ref table num) goal)
(let ((result (if (null? remaining)
(if (= goal 0)
'(())
'())
(append (try (cdr remaining)
(- num 1)
(- sum (car remaining))
goal)
(map
(lambda (l)
(cons (car remaining)
l))
(try (cdr remaining)
(- num 1)
(- sum (car remaining))
(- goal
(car remaining))))))))
(vector-set! (vector-ref table num) goal result)
result)))))
(try lst lngth sum main-goal))))))

Edward J Hanley

Jul 31, 1995
Clive...@armltd.co.uk wrote:

: #include <stdio.h>
: int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
: "%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
: main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

Come on, guys, this is getting ridiculous. Any idiot can eliminate spaces
from a program. Instead of making these things unreadable, show off your
skill by including enough whitespace to make them readable!

Bruce Mycroft
myc...@selway.umt.edu


Heathbar

Aug 1, 1995

Aw, come on! :) That was the first fun post I've seen here. Writing the
shortest program to do a task is infinitely more enjoyable than writing
yet another boring program.

I did, however, decide that the above program was much too long, and
contained far too many spaces, so I eliminated some of the superfluous
code. Getting rid of the gratuitous include of <stdio.h> was easy
enough (who needs to declare library functions anyway?), as was
tweaking the algorithm a bit so I could replace unnecessarily verbose
constructs like "for" and "||" with operators like "," and "?:". But
the hard part was taking advantage of the Well Known Facts [*] that:

1. all compilers represent int, int*, and char** the same way;
2. all compilers guarantee left to right evaluation of arguments; and
3. all compilers love it when you leave off the "int" from global
declarations.

Once I took all that into account, I was able to get the program down
to its current size, a full 27 bytes shorter than the original version.
And, hey, it compiles (with gcc) and runs (on my sparc) correctly (for
at least one test case) so it *must* be correct.

char o[999];T,*p;main(c,v){m(T=atoi(1[p=v]),p+2);}m(t,v)int*v;{int
c=strlen(o),g=*v?atoi(*v):0;t?t>0&&g?sprintf(o+c,"+%d",g),m(t-g,++v),
o[c]=0,m(t,v):0:printf("%d=%s\n",T,o+1);}

Heather
--
[* = I AM JUST KIDDING! Please, ***PLEASE*** do not tell me that those
"facts" aren't true. I was kidding. It was a joke.]

Max Hailperin

Aug 1, 1995
In my last post I mentioned dynamic programming and memoization as two
reasonable approaches to this problem of finding subsets with a
specified sum. I enclosed with that post a Scheme program using the
memoization option. Now I've decided to go ahead and do a dynamic
programming version as well, for contrast. While I was at it, I
decided to switch to C, for more variety. I also set this version up
to have the original poster's command-line syntax, and a similar
output format to his. (It was supposed to be identical, but I worked
from memory and now I find my memory was faulty.) Enjoy.
-Max Hailperin
Assistant Professor of Computer Science

Gustavus Adolphus College
St. Peter, MN 56082
USA

========== code follows ==========

/* Find the subsets of a set of numbers that sum to a given goal sum.
Usage: progname goal-sum num1 ...
Assumptions: all numbers are non-negative integers representable as ints.
Written 8/1/95 by Max Hailperin <m...@gac.edu */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void *checkedMalloc(size_t);
static void printSolutions(int mainGoal, int subGoal, int prefix, char *table,
char **numStrs, char *goalStr, char *buf,
char *bufPtr, int *nums);

main(int argc, char *argv[]){

int goal, *nums;
char *table;
/* table is the heart of the dynamic programming approach used by this
program; it is used as a two-dimensional table, which needs to be
simulated by hand within a one-dimensional array because C doesn't
support dynamically sized multi-dimensional arrays; the indices
into the table are the subgoal sum and the size of the prefix of
the nums available to work with; to get the entry corresponding
to a prefix length of p and a subgoal sum of s, the entry at
index p * (goal + 1) + s is used (simulating a two-dimensional
access with one dimension being of size goal+1); that entry is
treated as a two bit code; the 1s bit means that the sum in question
is achievable within the prefix in question *not* using the last
number of the prefix (i.e., is achievable with a prefix one shorter),
while the 2s bit means that the sum in question is achievable
within the prefix in question *using* the last number of the prefix
(i.e., the balance of the sum minus that last number is achievable
with a prefix one shorter). */

if(argc < 2){
fprintf(stderr, "Usage: %s goal-sum num1 ...\n", argv[0]);
exit(1);
}

goal = atoi(argv[1]);
table = (char *) checkedMalloc((goal + 1) * (argc - 1));
nums = (int *) checkedMalloc((argc - 2) * sizeof(int));
{
int i;
for(i = 2; i < argc; i++)
nums[i-2] = atoi(argv[i]);
}

{
int sum;
for(sum = 1; sum <= goal; sum++)
table[sum] = 0; /* prefix of length 0 can't sum to positive sum */
}

{
int prefix;
for(prefix = 0; prefix <= argc-2; prefix++)
table[prefix * (goal + 1)] = 1; /* sum 0 attainable with any prefix
size, without using any nums */
}

{
/* now the heart of the computation, computing the rest of the entries
from the two cases (without and with the last num of the prefix) */
int prefix;
for(prefix = 1; prefix <= argc-2; prefix++){
int sum;
for(sum = 1; sum <= goal; sum++)
table[prefix * (goal + 1) + sum] =
/* without last */
(table[(prefix - 1) * (goal + 1) + sum] != 0)
+
/* with last */
(((nums[prefix - 1] <= sum) &&
(table[(prefix - 1) * (goal + 1) + (sum - nums[prefix-1])]
!= 0))
<< 1);
}
}

{
char *lineBuf;
{
int i, len;
len = 0;
for(i = 2; i < argc; i++)
len += strlen(argv[i]) + 1;
lineBuf = checkedMalloc(len + 1);
}
printSolutions(goal, goal, argc-2, table, argv+2, argv[1],
lineBuf, lineBuf, nums);
}

exit(0);
}

/* printSolutions does a tree-recursive walk through the solutions
implicitly recorded in the table, making them explicit and writing
them out */
static void printSolutions(int mainGoal, int subGoal, int prefix, char *table,
char **numStrs, char *goalStr, char *buf,
char *bufPtr, int *nums){
if(prefix == 0){
if(subGoal == 0){
printf("%s: %s\n", buf, goalStr);
}
}
else {
char entry = table[prefix * (mainGoal + 1) + subGoal];
if(entry & 2){
printSolutions(mainGoal, subGoal - nums[prefix - 1], prefix - 1,
table, numStrs, goalStr, buf,
bufPtr + sprintf(bufPtr, "%s ", numStrs[prefix - 1]),
nums);
}
if(entry & 1){
printSolutions(mainGoal, subGoal, prefix - 1,
table, numStrs, goalStr, buf,
bufPtr + sprintf(bufPtr, "%*s ",
strlen(numStrs[prefix - 1]),
""),
nums);
}
}
}

void *checkedMalloc(size_t amount){
void *result;
if((result = malloc(amount)) != 0)
return result;
else{
fprintf(stderr, "Couldn't allocate enough memory. Aborting.\n");
exit(1);
}
}


Clive...@armltd.co.uk

Aug 1, 1995
In article <3vjf1f$n...@umt.umt.edu>,

Edward J Hanley <myc...@selway.umt.edu> wrote:
>Come on, guys, this is getting ridiculous. Any idiot can eliminate spaces
>from a program.

True, but any idiot can put them back in again, too, and I did smiley
that contribution...

Clive...@armltd.co.uk

Aug 1, 1995
In article <3vj78j$2...@daffy.cs.ucsb.edu>,

Steve E. Chapel <sch...@daffy.cs.ucsb.edu> wrote:
>sch...@daffy.cs.ucsb.edu (Steve E. Chapel) writes:
>>Clive...@armltd.co.uk writes:
>>>#include <stdio.h>
>>>int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
>>>"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
>>>main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}
>
>>Okay, I've got a challenge for you.
>
>>Let's both modify our programs to handle the case in which an unlimited
>>number of repitions are allowed. For exapmle:
[...]

>>I'll post how long it takes me, and you post how long it took you.
>>Of course, the longer time wins! :-)
>
>Three minutes, including testing.
>For a further challenge, see if you can find a bug in my program.

About thirty seconds, including testing:

#include <stdio.h>
int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,v),o[c]=0,m(t,v+1));}
main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

It might have been fairer if you'd stipulated that I make the
modification a year later, rather than the following day, or that
someone else should make the modification, though!

I suspect this new version will have unfortunate behaviour if any of
the supplied numbers are negative. Then again, there are an infinite
number of solutions if your input numbers are -1 and 1, so I suppose
it's only reasonable for it to take an infinite amount of time to find
them all...

Richard Kirk

Aug 1, 1995
sigf...@miso.wwa.com (Sigfried Gold) writes:

>In article <3vh6pk$d...@lyra.csx.cam.ac.uk>,


>I made the request for this problem that people try to write clear
>code for the simple, selfish reason that I want to be able to
>understand people's responses to the problem. I am a professional
>programmer, but I've never studied computers, mathematics, or anything
>other than humanities since high school, so I'm at a disadvantage
>trying to keep up with people here.

We can all recognise what makes a 'neat' bit of code, but it is not
always easy to put it into words. It's not just about fitting the
program into as few registers as possible or making it execute in
the smallest possible time. Perhaps it is misleading to put those
into the competition as requirements.

For the particular problem you set, I would look for...

(1) Robustness - trapping the possibility there is not a solution.
The program should work if you replaced the numbers with others that are
not co-prime, or things like that. Rather than explicitly testing
that your integer registers are 32-bit or things like that, a comment
saying the test ought to be done ought to be enough. In short, it
means the program should work on a general platform and compiler,
not just on your Crashmaster2000 if the day's not too warm.

(2) Clarity - being able to see what's going on & why. Sometimes it is
preferable to do things by a slightly slower method because the
program structure becomes particularly neat, or your loops nest nicely.
For the particular example you can have a fast method that steps in 7's
until it finds a solution with the right remainder for 5, then steps in
units of 5*7 until it finds a solution for 3. That is a cute method,
and if you wanted to solve for a lot of remainders simultaneously it
might be worth trying. I considered it in this case but rejected it in
my entry for just stepping through in 7's.

Nice layout and good comments do not affect the pure 'code' but they
ought to be part of the solution in a competition.

(3) Cuteness - a highly dangerous property that often obliterates the
previous two. The stepping through in units of 5*7 method fits my
idea of 'cuteness' but it is obscure. However if N is a solution for
the problem then N + 3*5*7 must also be a solution, so we only need
to test numbers up to 3*5*7. That's cute too, but it does not
affect the clarity (provided you comment why this should be so), and
so I put it in.

(4) Quick to write. There ought to be marks for providing a quick solution.
Timing replies over the net is difficult, but we were seeing later
answers poaching ideas off the earlier ones.

Dunno if this helps.
Hope I win.

Cheers.


--
Richard Kirk 01483-448869 (phone) 01483-448845 (fax)
Canon Research Europe Ltd, r...@canon.co.uk

Paul Hankin

Aug 1, 1995
In article <3vj6kp$2...@daffy.cs.ucsb.edu>,

Steve E. Chapel <sch...@daffy.cs.ucsb.edu> wrote:
>Clive...@armltd.co.uk writes:
>
>>In article <3vicvc$g...@zonker.cs.ucsb.edu>,
>>Steve E. Chapel <sch...@zonker.cs.ucsb.edu> wrote:
>>>sigf...@miso.wwa.com (Sigfried Gold) writes:
>>>>The problem is: write program that finds all the combinations of
>>>>a set of whole numbers that add up to a larger number.
>
>>#include <stdio.h>
>>int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
>>"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}
>>main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}
>
>Okay, I've got a challenge for you.
>
>Let's both modify our programs to handle the case in which an unlimited
>number of repitions are allowed. For exapmle:
>
>match 5 3 2 1
>
>Would output something like:
>3 + 2 = 5
>3 + 1 + 1 = 5
>2 + 2 + 1 = 5
>2 + 1 + 1 + 1 = 5
>
>I'll post how long it takes me, and you post how long it took you.
>Of course, the longer time wins! :-)
>--
>Steve Chapel sch...@cs.ucsb.edu | http://www.cs.ucsb.edu/~schapel
>University of California, Santa Barbara | finger -l sch...@eci1.ucsb.edu
It's strange, but I could see how Clive's program worked at first glance,
but couldn't be bothered to work out your entry because it took up several
screenfuls. Perhaps conciseness has something to say for it?

-- Paul Hankin


Max Hailperin

Aug 1, 1995
I feel a bit foolish continuing my own thread that no one else may be
paying attention to (except the one person who sent me fan e-mail;
thanks). However, academic honesty compels me to say that I may have
oversold dynamic programming a bit in my previous posts, for this
particular problem.

The reason being that for many reasonable choices of the set of
numbers and the target sum, the output time winds up dominating the
computation time, whether you use dynamic programming or not. I
empirically verified this with Steve Chapel's non-dynamic-programming
version and my dynamic-programming version.

If the problem is recast as merely being to output a count of the
*number* of subsets adding up to the target sum, or to return one
*minimal size* subset that adds up to the target sum, or something
like that, then the dynamic programming version blows the other one
out of the water. (For example, on my Indy, my version is 200 times
faster than Chapel's at finding the number of subsets of {1, 2, ...,
50} that add up to 100; this with both modified in the obvious ways to
keep count rather than outputting.)
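
For anyone who wants to reproduce that kind of measurement, here is a
minimal sketch of the counting variant -- the textbook one-dimensional
table, not Max's actual modification.  It reports how many subsets reach
the goal without ever listing them, so the output cost disappears.
Invoke it with the same arguments as match, e.g. "count 100 1 2 ... 50".

/* Sketch of the counting variant: count[s] is the number of subsets of
   the numbers read so far that sum to s. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    long goal, s, v;
    int i;
    unsigned long long *count;     /* exact for a few dozen input numbers */

    if (argc < 3) {
        fprintf(stderr, "usage: %s goal num1 num2 ...\n", argv[0]);
        return 1;
    }
    goal = atol(argv[1]);
    if (goal < 0)
        return 1;
    count = calloc(goal + 1, sizeof *count);
    if (count == NULL)
        return 1;
    count[0] = 1;                          /* the empty subset sums to 0 */
    for (i = 2; i < argc; i++) {
        v = atol(argv[i]);
        if (v < 1)
            continue;                      /* positive numbers only here */
        for (s = goal; s >= v; s--)        /* downward, so each number is
                                              used at most once */
            count[s] += count[s - v];
    }
    printf("%llu subsets of the given numbers sum to %ld\n",
           count[goal], goal);
    free(count);
    return 0;
}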

With the problem as given, there *are* problem instances that have a
relatively small enough number of solutions to output that the dynamic
programming wins; for example, taking the set of numbers to be the
even numbers from 2 to 98 and then 99, and taking the target sum to be
301, my version was more than twice as fast as Chapel's on my Indy.
But notice that this is a factor of two instead of two hundred! This
is at the level where coding details, architectural suitability,
etc. are as big a factor as the algorithmic difference.

On the other hand, for most of the obvious things I tried, the two
versions took essentially the same time, because it was nearly all
spent in output. This includes the example above that I had the
200-fold advantage on when it was just a matter of counting the
solutions.

-Max Hailperin
Assistant Professor of Computer Science

Ali Rahimi

Aug 1, 1995
In article <3vkrvv$1...@doc.armltd.co.uk>, <Clive...@armltd.co.uk> wrote:
>#include <stdio.h>
>int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
>"%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,v),o[c]=0,m(t,v+1));}

>main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}

what is the point of removing the spaces from your program? i mean, this
isn't a particularly obfuscated program in the first place, aside from the
lack of descriptive variable names.
if you remove the stupidity from this program (lack of spaces, idiotic
variable names), it becomes quite understandable.

i just want to understand your motivation for writing code like ...
and contrary to another thread, you are most definitely not displaying
ingenuity with this sort of obfuscation.

>--Clive.
Ali\n\tRahimi.

David Neto

Aug 1, 1995
In article <MAX.95Au...@andretti.gac.edu>,

Max Hailperin <m...@gac.edu> wrote:
>I feel a bit foolish continuing my own thread that no one else may be
>paying attention to (except the one person who sent me fan e-mail;
>thanks). However, academic honesty compels me to say that I may have
>oversold dynamic programming a bit in my previous posts, for this
>particular problem.

I'm paying attention. What about the space behaviour of your program?
Isn't your auxiliary table going to be large? E.g. for target M and
about m components, in general won't you need roughly m*2^m words
to store counts under all possible subconfigurations? I thought of
memoization too, but the space complexity can be horrid. (And I was
too lazy to code it myself...)

BTW, I vote for Ali's first LISP version as the nicest program for
this problem to date.

David Neto
ne...@eecg.utoronto.ca

Clive...@armltd.co.uk

Aug 2, 1995
In article <3vmemc$f...@fido.asd.sgi.com>,

Ali Rahimi <al...@bourgeois.engr.sgi.com> wrote:
>what is the point of removing the spaces from your program? i mean, this
>isn't a particularly obfuscated program in the first place, asides from the
>lack of descriptive variable names.

I agree that the algorithm isn't as obfuscated as many. Normally, when
I'm messing about with competitive attempts at tersification, the
point of using single-character symbols and stripping out whitespace
is that it allows more easy comparison of the lengths of different
programs. You can just look at the thing, rather than having to sed -e
"s/ //g"|wc -c or whatever.

Plus, of course, people probably object less to my posting a 3-line
program than a 50-line program. (-8

Steve E. Chapel

Aug 2, 1995
Clive...@armltd.co.uk writes:

>In article <3vj78j$2...@daffy.cs.ucsb.edu>,


>Steve E. Chapel <sch...@daffy.cs.ucsb.edu> wrote:

>>>Of course, the longer time wins! :-)
>>

>>Three minutes, including testing.

>About thirty seconds, including testing:

Yea! I win!!!

(To be fair, I think you would have won if we'd both waited a year
before making the modification).

Steve E. Chapel

Aug 2, 1995
pd...@cus.cam.ac.uk (Paul Hankin) writes:

>It's strange, but I could see how Clive's program worked at first glance,
>but couldn't be bothered to work out your entry because it took up several
>screenfuls. Perhaps conciseness has something to say for it?

I think you have a typo here. Didn't you leave off the smiley face at the
end?

Don't tell me you actually *mean* this?

Steve E. Chapel

Aug 2, 1995
m...@gac.edu (Max Hailperin) writes:

>I feel a bit foolish continuing my own thread that no one else may be
>paying attention to (except the one person who sent me fan e-mail;
>thanks). However, academic honesty compels me to say that I may have
>oversold dynamic programming a bit in my previous posts, for this
>particular problem.

>The reason being that for many reasonable choices of the set of
>numbers and the target sum, the output time winds up dominating the
>computation time, whether you use dynamic programming or not. I
>empirically verified this with Steve Chapel's non-dynamic-programming
>version and my dynamic-programming version.

First, thanks for honoring me by picking my program to pit against yours.

To keep you from feeling like you're posting to a brick wall, I'll
discuss something I was thinking about for the past few days.

I admit, I haven't really looked closely at your code, but it seems
that in some cases saving the subgoals may be even less efficient than
blindly recreating them. This might happen in the case where there are
many subgoals that occur only once. If you only test the program on
sets of consecutive numbers, you will of course need to solve many subgoals
many times over, and your program seems to be orders of magnitude faster than
mine in those situations.

But what if you enter odd knapsack sizes, maybe even so that no subgoal
gets solved more than once? I wonder if we could get your program to run
slower than mine in these circumstances.

Jens Horstmeier

Aug 4, 1995

I believe you are right. But then, what is wrong with the following
algorithm?

Let S be the set of positive integers we are interested in.

M := MAX{x:x \in S};
N := |S|;
B[0] := { \emptyset };
FOR I:= 1 TO N*M DO
B[I] := \emptyset;
END
FOR ALL ELEMENTS E OF S DO
FOR I := 0 TO N*M DO
IF B[I] <> \emptyset THEN
FOR ALL ELEMENTS X OF B[I] DO
B[I+E] := B[I+E] union {X union E};
END
END
END
END


Since I believe we can implement the union operation in polynomial time,
the algorithm should work in polynomial time too. Where is the error ?

Jens Horstmeier


ian barland (Ecuadorian ice sculptor)

Aug 4, 1995
Jens Horstmeier <horstm...@sni.de> writes:
|
|I believe you are right. But then, what is wrong with the following
|algorithm?
|
| Let S be the set of positive integers we are interested in.
|
| M := MAX{x:x \in S};
| N := |S|;
|...

| FOR I := 0 TO N*M DO
| ...

|
|Since I believe, we can implement the union operation in polynomial time,
|the algorithm should work in polynomial time too. Where is the error ?
|
The algorithm works in "pseudopolynomial time": although it's
polynomial in N,M as above, this isn't exactly polynomial in the *size*
of the input (in bits): with an input of only (say) 40 bits, the program
might be looking at something like 2^40 iterations.

Many numerical problems are pseudopolynomial -- for instance factoring:
it's trivial to factor a number N in around N steps, but that doesn't mean
that a small input ("only 20 digits") means a short run-time.
(Alternately, you can think of pseudopolynomial as meaning if the input
is given in unary, *then* the problem would be in polynomial time.)
Knapsack is an example of an NP-complete problem which is also
pseudopolynomial.

I think somebody already suggested that the problem is doable in
output-polynomial-time, which your algorithm probably also is;
as long as the goal is to list *all* the answers, you can't get
rid of the exponential time for this problem.
--
-- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
ian barland (ucsc cis grad) i...@cse.ucsc.edu

(from a fortune cookie:) Anyone who makes a blanket statement is a fool.

Paul Hankin

Aug 6, 1995
In article <3vt8sh$r...@horus.mch.sni.de>,

Jens Horstmeier <horstm...@sni.de> wrote:
>
>I believe you are right. But then, what is wrong with the following
>algorithm?
>
> Let S be the set of positive integers we are interested in.
>
> M := MAX{x:x \in S};
> N := |S|;
> B[0] := { \emptyset };
> FOR I:= 1 TO N*M DO
> B[I] := \emptyset;
> END
> FOR ALL ELEMENTS E OF S DO
> FOR I := 0 TO N*M DO
> IF B[I] <> \emptyset THEN
> FOR ALL ELEMENTS X OF B[I] DO
> B[I+E] := B[I+E] union {X union E};
> END
> END
> END
> END
>
>
>Since I believe, we can implement the union operation in polynomial time,
>the algorithm should work in polynomial time too. Where is the error ?
>
>Jens Horstmeier
>
>
>
For a start, the loop I:=1 TO N*M is _exponential_ in the size of S. This
is because an integer N is written in log N digits, and so the space it takes
to describe S is approximately (log M)*N. Fixing N at say 1, your algorithm
takes >O(M) time to solve a problem of size (log M). Hence it's exponential
in the size of the problem.

-- Paul Hankin

Clive...@armltd.co.uk

Aug 7, 1995
In article <3vof4a$e...@daffy.cs.ucsb.edu>,

Steve E. Chapel <sch...@daffy.cs.ucsb.edu> wrote:
>I think you have a typo here. Didn't you leave off the smiley face at the
>end?
>
>Don't tell me you actually *mean* this?

I can sort-of see his point. Unnecessary verbosity can make a program
just as confusing as unnecessary terseness, and the two programs
tended towards the opposite extremes. If I were trying to write a
legible and maintainable implementation, I'd have gone for something
between the two in length.

Gareth Rees

Aug 14, 1995
Paul Hankin <pd...@cus.cam.ac.uk> wrote:
> I appreciate your point, but everyone gets to write `good' programs
> everyday - for me, a competition is more interesting if it challenges
> in ways which are unusual, so I have to invent _new_ ideas to get a
> good solution, rather than falling back on everyday methods.
> There's already a `good program contest' --- it's called the software
> marketplace in the real world.

I agree with Paul. Ideally, a programming contest should need no
judges; there should be some well-defined procedure that deduces the
winner (e.g. by seeing which is shortest, or by playing the programs
against each other in a tournament, or running them on some particular
hardware and seeing which is fastest).

Anyone can write solid, top-down designed, regression-tested, portable,
maintainable and efficient code. Not everyone can write a one-line
Tetris program.

Having said the above, I do enjoy reading the results of the Obfuscated
C Contest.

--
Gareth Rees

Gareth Rees

Aug 14, 1995
Clive...@armltd.co.uk wrote:
> In article <3vicvc$g...@zonker.cs.ucsb.edu>,
> Steve E. Chapel <sch...@zonker.cs.ucsb.edu> wrote:
> >sigf...@miso.wwa.com (Sigfried Gold) writes:
> >>The problem is: write program that finds all the combinations of
> >>a set of whole numbers that add up to a larger number.
>
> #include <stdio.h>
> int w[99],*p=w;char o[999];m(t,v)int*v;{int g=*v,c=strlen(o);t||printf(
> "%d=%s\n",w[1],o+1);t>0&&g&&(sprintf(o+c,"+%d",g),m(t-g,++v),o[c]=0,m(t,v));}

> main(c,v)char**v;{for(;*v;*p++=atoi(*v++));m(w[1],w+2);}
>
> (-8

>
> --Clive.
> (Disclaimer: I wouldn't believe a word of this if I were you...)

Clearly C is not the most appropriate tool for this kind of recursive
algorithm! In ML that would be

fun a 0_=[[]]|a _[]=[]|a n(m::p)=(a n p)@(if m>n then[]else map
(fn x=>m::x)(a(n-m)p));

--
Gareth Rees

princeofy...@gmail.com

May 31, 2014
On Sunday, July 30, 1995 12:00:00 AM UTC-7, Paul Hankin wrote:
> [...Paul Hankin's post of 30 Jul 1995 quoted in full; snipped...]

A big giant solution

princeofy...@gmail.com

May 31, 2014
On Monday, July 31, 1995 12:00:00 AM UTC-7, Edward J Hanley wrote:
> [...Edward J Hanley's post of 31 Jul 1995 quoted in full; snipped...]

r e a d a b l e