
realistic but short and simple LISP examples?


Wroot

Dec 15, 2001, 8:33:27 PM
cubic...@mailandnews.com (Software Scavenger) wrote in message
news:<a6789134.0112...@posting.google.com>...
> Wroot <wr...@my-deja.com> wrote in message
> news:<9veuni$qlq$1...@newsmaster.cc.columbia.edu>...
>
> > Could somebody offer an example of a short (a few Kb or less) program
> > in LISP that does something useful and that would be relatively hard and
> > time-consuming to program in C++? (aside from such built-in niceties as
>
> Give us an example of the kind of program you mean, by posting the C++
> version of it. Be sure it has clear enough comments that a non-C++
> programmer can understand what it's for, what it does, how it works,
> etc. It will be interesting to see how many Lisp versions of it you
> get in response, and how much shorter and neater most of them are than
> the C++ version.

Here's one (challenge to LISP programmers): read space or newline separated
integers from "input.dat", write out all possible permutations (one per
line) into "output.dat" (No broken "input.dat" data or unopenable
"output.dat" checking is necessary)
-------------------C++ version----------------------------
#include <fstream>
#include <algorithm>
#include <vector>
using namespace std;

int main() {
    ifstream in("input.dat");
    vector<int> L;
    int n;
    while(in >> n)
        L.push_back(n);
    ofstream out("output.dat");
    sort(L.begin(), L.end());
    do {
        for(int i = 0; i < L.size(); i++)
            out << L[i] << " ";
        out << '\n';
    } while(next_permutation(L.begin(), L.end()));
    return 0;
}
----------------------------------------------------
If "all possible permutations" is too much to ask of LISP,
just write out the numbers in the reverse order (classic problem)

BTW, use less than 12 input values while testing, unless you want
"output.dat" to be 10Gb :)

Wroot

P.S. cubicle584, for some reason, your reply didn't make it to my news
server, but I saw it on google. Perhaps someone canceled it.
--
Please note: mail sent to wr...@my-deja.com is ignored.

Tim Moore

Dec 15, 2001, 9:48:05 PM
I hesitate to respond to what seems like an obvious troll, but here goes.
Some of the constructs may seem a little cryptic, but they save me time
when farting around on Usenet. Feel free to do the research to figure out what they do.
In article <9vgtl8$6un$1...@newsmaster.cc.columbia.edu>, "Wroot"
<wr...@my-deja.com> wrote:


>
> Here's one (challenge to LISP programmers): read space or newline
> separated integers from "input.dat", write out all possible permutations
> (one per line) into "output.dat" (No broken "input.dat" data or
> unopenable "output.dat" checking is necessary)
> -------------------C++ version----------------------------
> #include <fstream>
> #include <algorithm>
> #include <vector>
> using namespace std;
> int main() {
>     ifstream in("input.dat");
>     vector<int> L;
>     int n;
>     while(in >> n)
>         L.push_back(n);
>     ofstream out("output.dat");
>     sort(L.begin(), L.end());
>     do {
>         for(int i = 0; i < L.size(); i++)
>             out << L[i] << " ";
>         out << '\n';
>     } while(next_permutation(L.begin(), L.end()));
>     return 0;
> }
> ----------------------------------------------------

I see you've chosen your battle carefully in order to make use of the
STL. That's OK; we can quickly bang out permute without paying too much
attention to efficiency. The rest of your "challenge" follows quickly.
(defun permute (list)
  (cond ((null list)
         nil)
        ((null (cdr list))
         (list list))
        (t (loop
             for element in list
             nconc (mapcar #'(lambda (sublist)
                               (cons element sublist))
                           (permute (remove element list)))))))

(defun permute-stream (in-stream out-stream)
  (loop
    for element = (read in-stream nil in-stream)
    while (not (eq element in-stream))
    collect element into elements
    finally (format out-stream "~{~{~A~^ ~}~%~}" (permute elements))))

(defun permute-file (in-file out-file)
  (with-open-file (in-stream in-file)
    (with-open-file (out-stream out-file :direction :output :if-exists :supersede)
      (permute-stream in-stream out-stream))))

> If "all possible permutations" is too much to ask of LISP, just write
> out the numbers in the reverse order (classic problem)
> BTW, use less than 12 input values while testing, unless you want
> "output.dat" to be 10Gb :)

You weenie.

Thomas F. Burdick

Dec 15, 2001, 10:31:21 PM
Wroot <wr...@my-deja.com> writes:

> cubic...@mailandnews.com (Software Scavenger) wrote in message
> news:<a6789134.0112...@posting.google.com>...
> > Wroot <wr...@my-deja.com> wrote in message
> > news:<9veuni$qlq$1...@newsmaster.cc.columbia.edu>...
> >
> > > Could somebody offer an example of a short (a few Kb or less)
> > > program in LISP that does something useful and that would be
> > > relatively hard and time-consuming to program in C++? (aside
> > > from such built-in niceties as
> >
> > Give us an example of the kind of program you mean, by posting the C++
> > version of it. Be sure it has clear enough comments that a non-C++
> > programmer can understand what it's for, what it does, how it works,
> > etc. It will be interesting to see how many Lisp versions of it you
> > get in response, and how much shorter and neater most of them are than
> > the C++ version.
>
> Here's one (challenge to LISP programmers): read space or newline separated
> integers from "input.dat", write out all possible permutations (one per
> line) into "output.dat" (No broken "input.dat" data or unopenable
> "output.dat" checking is necessary)

This is your idea of a program that "does something useful" and
doesn't just use the "built-in niceties" of the standard library?!?!

First, this does nothing useful. Second, all it does is wrap up a STL
function for use in batch processing. If you had made the program a
pipe instead, I might see the point, but even then, a Lisp programmer
wouldn't have needed to wrap it up in the first place. C++
programmers primarily use the Unix shell, so you'll need to wrap
functions up for the shell if you want to test them. Lisp programmers
primarily use the REPL, so you can just call the function directly.

Common Lisp doesn't have a vector permutator built-in. That said,
it's quite easy to build one. Here's mine that I've used maybe 3 times:

(defun map-permutations! (fn vector)
  (let ((end (1- (length vector))))
    (labels ((helper (start)
               (if (= start end)
                   (funcall fn vector)
                   (loop for i from start to end
                         do (rotatef (aref vector start) (aref vector i))
                            (helper (1+ start))
                            (rotatef (aref vector start) (aref vector i))))))
      (helper 0)
      nil)))

;;; This just wraps it up in a syntax I prefer -- something you
;;; can't do in C++, BTW
(defmacro do-permutations! ((var vector) &body body)
  `(map-permutations! #'(lambda (,var) ,@body) ,vector))

You can test it like this:

* (do-permutations! (v #(1 2 3))
    (format t "~S~%" v))
#(1 2 3)
#(1 3 2)
#(2 1 3)
#(2 3 1)
#(3 2 1)
#(3 1 2)
NIL

A more reasonable example, assuming you're not just trolling, would be
to construct a simple neural network (that's not built into either
language's standard library ;)

--
/|_ .-----------------------.
,' .\ / | No to Imperialist war |
,--' _,' | Wage class war! |
/ / `-----------------------'
( -. |
| ) |
(`-. '--.)
`. )----'

Kaz Kylheku

Dec 15, 2001, 11:26:45 PM
In article <9vgtl8$6un$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>Here's one (challenge to LISP programmers): read space or newline separated
>integers from "input.dat", write out all possible permutations (one per
>line) into "output.dat" (No broken "input.dat" data or unopenable
>"output.dat" checking is necessary)

C++ spewing idiot, here is a better challenge for you. ITA software
(www.itasoftware.com) had this on their website, and I solved it, leading
to an invitation to send them my resume. The challenge appeared in a
banner advertisement on slashdot.org. Ironically, I didn't know it was
for a Lisp job until I already had some Lisp code written to solve it;
at which point I clicked on the banner ad.

Consider all possible expressions formed by combining nine nines
using addition, subtraction, multiplication and division, and
arbitrary grouping with parentheses. Here is one possible
expression:

((9 + 9) - (9 * 9)) / (9 - ((9 + 9) / 9) * 9)

here is another:

(9 + 9 - 9 * 9 * 9 / (9 + 9) + (9 * 9))

there is obviously a large number of other expressions which combine 9
nines using these operators in all the possible ways. Your task is to
find the smallest positive integer which is not the value of any of
these expressions.

Let's see your C++ program which finds the result.

>If "all possible permutations" is too much to ask of LISP,

There is nothing difficult about generating permutations of a sequence.

But then, you are using someone else's algorithms library to do it,
so maybe you don't have a fscking clue what it entails.

There is nothing you can do in C++ that you cannot do in Lisp (not LISP,
by the way). Any program which can be expressed succinctly in C++ can also
be expressed even more succinctly in Lisp, because the Lisp
program won't have to contain any stupid workarounds for limitations
in the C++ type system, syntax and other braindamage. The reverse is
not the case; some Lisp programs can only be imitated by C++ programs
that are orders of magnitude larger, because they reimplement a good
proportion of common Lisp. Reimplement it badly, and reimplement it
in a way that is incompatible with everyone else's way of reinventing.

I suspect you will run into this when writing the code to solve the Nine
Nines problem above. My Lisp version is 47 physical lines.

Good luck.

>BTW, use less than 12 input values while testing, unless you want
>"output.dat" to be 10Gb :)

It's safe to assume that the people in comp.lang.lisp
know basic facts, like that the number of permutations of a sequence
of length N is the factorial of N.

Pierre R. Mai

Dec 15, 2001, 11:02:29 PM
Wroot <wr...@my-deja.com> writes:

> Here's one (challenge to LISP programmers): read space or newline separated
> integers from "input.dat", write out all possible permutations (one per
> line) into "output.dat" (No broken "input.dat" data or unopenable
> "output.dat" checking is necessary)

Ah, a specification in the true spirit of C/C++ (don't ever check for
errors). Well FWIW:

(defun doit ()
  (with-open-file (in-stream "input.dat")
    (with-open-file (out-stream "output.dat" :direction :output)
      (let ((items (loop for item = (read in-stream nil in-stream)
                         until (eq item in-stream)
                         collect item)))
        (with-all-permutations (set (sort items #'<))
          (format out-stream "~{~A~^ ~}~%" set))))))

This makes use of the following facility for permutations on lists
(you have to include at least the macro before the doit function, if
you put everything in one file):

(defun permute (set function)
  (labels ((permute-one (tail)
             (if (null tail)
                 (funcall function set)
                 (loop for rest on tail do
                   (rotatef (first tail) (first rest))
                   (permute-one (rest tail))
                   (rotatef (first tail) (first rest))))))
    (permute-one set)))

(defmacro with-all-permutations ((var items) &body body)
  `(permute ,items #'(lambda (,var) ,@body)))

The code differs from the C++ version in the following:

a) The C++ STL comes with code for permutation building built-in. CL
doesn't, hence the need for the additional code. So it is 20 LoC
in C++ with STL against 8 + 12 LoC for CL.

b) The CL version doesn't produce superfluous whitespace at the end of
each line, although it can trivially be changed to do that, if
wanted, or produce more complex formats, e.g. use

(format out-stream "~{~A ~}~%" set)

to produce the C++ output, or use

(format out-stream "~{~A~^, ~}~%" set)

to produce lines like "1, 2, 3, 4", or use

(format out-stream "~{~A~^;~}~%" set)

to produce CSV files.

c) At least minimal error checking, for no added work. The C++
version silently assumes that a non-existent file means no entries,
and hence produces an empty output file. The Common Lisp version
will raise an appropriate error in that case.

d) If we elide the sorting step, then the given code will work for all
readable CL objects. With the sorting step the code works for
everything that is acceptable to #'<, i.e. all CL numeric types
except for complex numbers. Again you get useful error messages at
runtime if you violate that condition.

e) Since we use the CL reader, comments, read-time conditionals and
even read-time evaluation work as expected.

Of course the whole exercise is just silly, and doesn't show any of
CL's strengths, but who cares...

Regs, Pierre.

> #include <fstream>
> #include <algorithm>
> #include <vector>
> using namespace std;
>
> int main() {
>     ifstream in("input.dat");
>     vector<int> L;
>     int n;
>     while(in >> n)
>         L.push_back(n);
>     ofstream out("output.dat");
>     sort(L.begin(), L.end());
>     do {
>         for(int i = 0; i < L.size(); i++)
>             out << L[i] << " ";
>         out << '\n';
>     } while(next_permutation(L.begin(), L.end()));
>     return 0;
> }

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Wroot

Dec 15, 2001, 11:51:17 PM
t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> This is your idea of a program that "does something useful" and
> doesn't just use the "built-in niceties" of the standard library?!?!
>
> First, this does nothing useful. Second, all it does is wrap up a STL

> If you had made the program a pipe instead, I might see the point

It's the same, even 2 lines simpler. Since you know what STL is, I'll
refrain from pasting the code.

> C++ programmers primarily use the Unix shell, so you'll need to wrap
> functions up for the shell if you want to test them. Lisp programmers
> primarily use the REPL, so you can just call the function directly.

$ grep tfb /etc/passwd
tfb:x:555:555:Thomas F. Burdick,,,:/home/tfb:/usr/bin/clisp ?

> A more reasonable example, assuming you're not just trolling, would be
> to construct a simple neural network (that's not build into either
> language's standard library ;)

What do you mean by simple neural network? One that doesn't learn? (Product
of weight matrix and input vector with non-linear function applied to each
element of the result? I'll bet it's equally easy in C++ and Lisp with C++
version being much faster) Could you give more precise specs?

Anyways, if you mean backprop, IIRC according to one of the comp.ai FAQs,
C++ is actually the language of choice for this. I think what it said was
"it's *only* about 200 lines in C++". What I was asking in the OP was "what
are the things that Lisp is better suited for compared to C++".

Wroot
P.S. If I were trolling, I would have cross-posted to comp.lang.c++. Why
would you accuse me of something like this?

Wroot

Dec 16, 2001, 12:15:34 AM
Kaz Kylheku wrote:

Puhleeze... You think you know this elementary fact and it makes you unique
enough to call others "idiots"?! How do you think I got the 10Gb number?

12! * 12 * 2 / 1024^3 ~ 10.7 (12! lines, 12 elements per line, 2 bytes per
element, 1024^3 bytes per Gb) Lighten up, kid.

Wroot
P.S. Speaking of nine 9s, did you *SOLVE* the problem (as opposed to just
writing a program to solve it)

Erik Naggum

Dec 16, 2001, 12:29:25 AM
* Wroot <wr...@my-deja.com>

| Here's one (challenge to LISP programmers)

Have you considered downgrading your arrogance to a reasonable level?

Here is a challenge for you. Write a function that accepts an expression
and a list of values for the variables and returns the lambda expression
applied to these values. (To reduce the complexity of the task when
accepting an expression from the user, you may assume that a compiler is
available in the function compile, so this is _not_ about making an
evaluator.) Call the function you write apply.

For extra bonus points, write an evaluator using the apply function you
just wrote, which evaluates all arguments of an expression before calling
the function with the values collected from that evaluation. Call this
function eval.

For even more extra bonus points, write a function read that accepts a
character stream and returns the next expression the user has supplied on
that stream. Match it with a function write that accepts an object of
any kind and produces a character stream suitable for read and humans.

If you have time, write a simple loop which calls read to obtain an
expression from the user, then calls eval to obtain the result of
evaluating it, and finally calls write to show the user this result.

If you complete all this, you have what every Common Lisp programmer gets
for free when he starts up his Common Lisp system. Please think through
the challenge before you dismiss any of this as trivial. Basically, a
major part of the Lisp language follows from the decision to make this
interactive loop work correctly. Note that in Common Lisp these
functions are available from the get go with the names I have suggested.

If you still think you have any business coming here with "challenges",
please consider your own reaction to an ignorant splut who came up to you
and "challenged" you to do something he had just learned.  _I_ find it on
par with a stupid kid who has just learned a bad word and tries to get
adults to say it so he can snicker.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Kaz Kylheku

Dec 16, 2001, 3:27:04 AM
In article <9vhalm$fn2$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>Kaz Kylheku wrote:
>
>> In article <9vgtl8$6un$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>>>BTW, use less than 12 input values while testing, unless you want
>>>"output.dat" to be 10Gb :)
>>
>> It's safe to assume that the people in comp.lang.lisp
>> know basic facts, like that the number of permutations of a sequence
>> of length N is the factorial of N.
>
>Puhleeze... You think you know this elementary fact and it makes you unique
>enough to call others "idiots"?! How do you think I got the 10Gb number?

What ``others''? How many of you are there?

I didn't propose any connection between this basic fact regarding
permutations, and calling you an idiot. You are now practising creative
reading comprehension.

Apparently, *you* think you are unique in knowing this factoid, otherwise
why would you deem it necessary to include that warning? Do you think
people are so stupid that they can't estimate for themselves how large
the output will be? If you want a group of people to take you seriously,
you have to take them seriously.

I used the phrase ``C++ spewing idiot'' because the article you wrote
is a pitifully naive apology for an idiotic library that is part of an
idiotic programming language, and because you appear to believe that
generating permutations of a vector's elements is some kind of hot
computational problem that is out of the reach of the Lisp programming
language.

I wrote a permutation-generating program in elementary school, eighth
grade or thereabouts. Hand-assembled 6502 machine language on an
Apple II+. This is not exactly something that you can use to showcase
the capabilities of a programming language.

>Wroot
>P.S. Speaking of nine 9s, did you *SOLVE* the problem (as opposed to just
>writing a program to solve it)

What's the difference? The program isn't intelligent enough to be
credited with the solution independently of its author.

But what you are getting at is that it's not an analytic solution, but a
brute force solution; that is true, so be it. I saw in the problem
statement a little Lisp hacking opportunity for a rainy afternoon,
because that's what I was predisposed toward seeing at that time.

Say, how is the C++ version coming along?

In the real world, there are deadlines. So, let's say you have until
Tuesday, 00:00 UTC. That is ample, given that the Lisp solution was
hacked in a few hours. The world won't always grant you extensions
to accommodate the productivity-squandering programming language you
are using.

Wroot

Dec 16, 2001, 5:57:42 AM
Kaz Kylheku wrote:

> In article <9vhalm$fn2$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>>P.S. Speaking of nine 9s, did you *SOLVE* the problem (as opposed to just
>>writing a program to solve it)
>
> What's the difference?

Did you get an integer number as an answer? Yes or no?

*If* "yes", how about this: I wouldn't want to give you the answer,
especially that you might not even know it, but I'll post THE SUM OF ALL
DIGITS here *this* Sunday on the condition that you publicly promise that,
in case mine coincides with the sum of all digits of your answer or turns
out to be the correct one otherwise, you will start a new thread in this NG
with the subject "Wroot is the greatest" in which you will apologize for
your being an ass and a bigot (literally and succinctly). How does this
sound?

Wroot

> In the real world, there are deadlines. So, let's say you have until
> Tuesday, 00:00 UTC. That is ample, given that the Lisp solution was
> hacked in a few hours. The world won't always grant you extensions
> to accommodate the productivity-squandering programming language you
> are using.

Fernando Rodríguez

Dec 16, 2001, 7:39:11 AM
On Sun, 16 Dec 2001 04:26:45 GMT, k...@ashi.footprints.net (Kaz Kylheku) wrote:


>Consider all possible expressions formed by combining nine nines
>using addition, subtraction, multiplication and division, and
>arbitrary grouping with parentheses. Here is one possible
>expression:
>
> ((9 + 9) - (9 * 9)) / (9 - ((9 + 9) / 9) * 9)
>
>here is another:
>
> (9 + 9 - 9 * 9 * 9 / (9 + 9) + (9 * 9))

Are you sure that's a good Lisp vs C++ challenge? It's more like a
mathematical puzzle...

Something that forces you to struggle to bypass C++'s static type system
would IMHO be a better example. You can even add some multiple inheritance
for extra suffering. };-)

For the OP: This C++ book is _full_ of great examples of things which are
trivial in CL, but atrocious monstrosities in C++.

Generative Programming: Methods, Tools, and Applications
by Krzysztof Czarnecki, Ulrich Eisenecker


PS: I remember reading about a Shia religious celebration where people
slap their heads until they bleed. This is very similar to what the
authors of that book are doing...

--
Fernando Rodríguez
frr at wanadoo dot es
--

Fernando Rodríguez

Dec 16, 2001, 9:30:26 AM
On Sun, 16 Dec 2001 05:57:42 -0500, Wroot <wr...@my-deja.com> wrote:


>*If* "yes", how about this: I wouldn't want to give you the answer,
>especially that you might not even know it, but I'll post THE SUM OF ALL
>DIGITS here *this* Sunday on the condition that you publicly promise that,

The correct sum could still be obtained from the wrong list of values...

BTW, I wonder if there's an analytical solution to the problem, or the more
general n nines one... Totally useless, of course, but interesting. ;-)



>in case mine coincides with the sum of all digits of your answer or turns
>out to be the correct one otherwise, you will start a new thread in this NG
>with the subject "Wroot is the greatest" in which you will apologize for
>your being an ass and a bigot (literally and succinctly). How does this
>sound?

It sounds like you should both calm down. :-)

Kaz Kylheku

Dec 16, 2001, 2:03:58 PM
In article <9vhun7$qk3$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>Kaz Kylheku wrote:
>
>> In article <9vhalm$fn2$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>>>P.S. Speaking of nine 9s, did you *SOLVE* the problem (as opposed to just
>>>writing a program to solve it)
>>
>> What's the difference?
>
>Did you get an integer number as an answer? Yes or no?

Yes, of course.

>*If* "yes", how about this: I wouldn't want to give you the answer,
>especially that you might not even know it, but I'll post THE SUM OF ALL

Imbecile, if you had been paying due attention, you might have noticed
that at the outset I made the following claim: I sent the answer and
the code to the software company which posed the problem, and obtained
an affirmative answer. You can confirm that I did so by asking them,
if you wish.

>DIGITS here *this* Sunday on the condition that you publicly promise that,

I don't care about the answer; the challenge here is to produce a C++
solution. For all anyone knows, you could get the answer using a Lisp program.

Tell me you are only pretending to not understand the point of this!

>in case mine coincides with the sum of all digits of your answer or turns
>out to be the correct one otherwise, you will start a new thread in this NG
>with the subject "Wroot is the greatest" in which you will apologize for
>your being an ass and a bigot (literally and succinctly). How does this
>sound?

Bigot about what? You're the one who barged in here with your
C++ bigotry.

I happen to develop in C++ for a living; I understand its strengths
and limitations.

See? A programming language can be so many different things to different
people. It can pay one man's bills, and give another the occasion to
act like a jackass on Usenet under a childish pseudonym.

Coby Beck

Dec 16, 2001, 2:24:22 PM

"Wroot" <wr...@my-deja.com> wrote in message
news:9vhalm$fn2$1...@newsmaster.cc.columbia.edu...

>
> Wroot
> P.S. Speaking of nine 9s, did you *SOLVE* the problem (as opposed to just
> writing a program to solve it)
> --

LOL! A troll and a maroon!


Wroot

Dec 16, 2001, 2:38:22 PM
I knew you didn't have the balls to accept. A real man can admit it when he
is wrong. You aren't one. You called me an idiot, and offered to solve this
little problem you spent several hours writing a lisp program for. I
offered you a way to compare answers on the spot.

My advice to you: see a shrink and, also, go back to school, kiddo. Maybe
they'll teach you some respect for people who are smarter than you.

Wroot

Kaz Kylheku

Dec 16, 2001, 3:17:10 PM
In article <9vit7e$gte$1...@newsmaster.cc.columbia.edu>, Wroot wrote:
>I knew you didn't have the balls to accept. A real man can admit it when he
>is wrong. You aren't one. You called me an idiot, and offered to solve this

A real man posts under his real name. Is it written on your driver's
license that you are named Wroot? Assuming you are old enough to have one.

Would you care to unmask yourself of your pseudonym?

>little problem you spent several hours writing a lisp program for. I
>offered you a way to compare answers on the spot.

You are in no position to offer anything, other than the working C++
program. I'm not interested in comparing numeric answers, so you might
as well know that the solution to the puzzle is 195. I already have a
program that computes it, so I'm not in a contest with you.

This is simply an opportunity for you to defend the technical claims
you made about the superiority of C++, thereby saving some face.

Gareth McCaughan

Dec 16, 2001, 5:36:14 PM
Kaz Kylheku wrote:

> C++ spewing idiot, here is a better challenge for you. ITA software
> (www.itasoftware.com) had this on their website, and I solved it, leading
> to an invitation to send them my resume. The challenge appeared in a
> banner advertisement on slashdot.org. Ironically, I didn't know it was
> for a Lisp job until I already had some Lisp code written to solve it;
> at which point I clicked on the banner ad.
>
> Consider all possible expressions formed by combining nine nines
> using addition, subtraction, multiplication and division, and
> arbitrary grouping with parentheses. Here is one possible
> expression:

[etc]


> there is obviously a large number of other expressions which combine 9
> nines using these operators in all the possible ways. Your task is to
> find the smallest positive integer which is not the value of any of
> these expressions.

[...]


> I suspect you will run into this when writing the code to solve the Nine
> Nines problem above. My Lisp version is 47 physical lines.

Ha. The one I just wrote is 21 lines, including one blank line
and 4 comment-only lines. Nyaaah!

I reckon it could probably be done in about 60 (non-blank,
non-comment) lines of C++ using the STL, but it would take me a factor
of 4 longer to do it than it did in Lisp. Of course, if you
measure productivity in LoC per unit time then C++ would come out
almost as good as Lisp on this exercise. :-)

--
Gareth McCaughan Gareth.M...@pobox.com
.sig under construc

Thomas F. Burdick

Dec 17, 2001, 4:02:21 AM
Wroot <wr...@my-deja.com> writes:

> t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:
>
> > This is your idea of a program that "does something useful" and
> > doesn't just use the "built-in niceties" of the standard library?!?!
> >
> > First, this does nothing useful. Second, all it does is wrap up a STL

[...]


> > If you had made the program a pipe instead, I might see the point
>
> It's the same, even 2 lines simpler. Since you know what STL is, I'll
> refrain from pasting the code.

Then I would see the point of writing the program, if for some reason
you wanted to use this STL function from the command line. But it's
still not a program that "does something useful" nor does it do a
thing except use one of the "built-in niceties" of the C++ STL. It
doesn't even sort-of fit your criteria.

> > C++ programmers primarily use the Unix shell, so you'll need to wrap
> > functions up for the shell if you want to test them. Lisp programmers
> > primarily use the REPL, so you can just call the function directly.
>
> $ grep tfb /etc/passwd
> tfb:x:555:555:Thomas F. Burdick,,,:/home/tfb:/usr/bin/clisp ?

No, when I'm doing Unix administration, I use a Unix shell. When I'm
writing C++ code, I tend to spend almost all of my time looking at an
editor, a Unix command line, and gdb. When I write Lisp, I tend to
spend almost all of my time looking at an editor and the REPL. I
would have no desire whatsoever to use the Unix command line with
respect to the program I'm developing.

> > A more reasonable example, assuming you're not just trolling, would be
> > to construct a simple neural network (that's not built into either
> > language's standard library ;)
>
> What do you mean by simple neural network? One that doesn't learn? (Product
> of weight matrix and input vector with non-linear function applied to each
> element of the result? I'll bet it's equally easy in C++ and Lisp with C++
> version being much faster) Could you give more precise specs?

Why do you bet this? I thought you were interested in learning Lisp,
and wanted to see a small example program that might show it off. I
was merely trying to suggest a direction in which to go. Wasn't this
your orignal post?

I'm trying to understand what it is that people like about LISP and
whether I should learn it. (I'm interested in AI)



Could somebody offer an example of a short (a few Kb or less)
program in LISP that does something useful and that would be
relatively hard and time-consuming to program in C++? (aside from

such built-in niceties as arbitrary-precision arithmetic please)

It seems that since that time, you've developed a lot of concrete
ideas about what is and is not possible in Lisp and C++.

> Anyways, if you mean backprop, IIRC according to one of the comp.ai FAQs,
> C++ is actually the language of choice for this. I think what it said was
> "it's *only* about 200 lines in C++". What I was asking in the OP was "what
> are the things that Lisp is better suited for compared to C++".
>
> Wroot
> P.S. If I were trolling, I would have cross-posted to comp.lang.c++. Why
> would you accuse me of something like this?

Because you sure sound like it. Most cross-posts *are* trolls, but
not all trolls are cross-posters. You're posting from a
pseudo-anonymous account, and you don't seem to have the slightest
desire to see Lisp as a useful tool. You've gone from asking for help
in seeing why Lisp is cool to carefully contriving situations where
you can feel smug about using C++ instead. And in a single day, no
less. Until you show some evidence of having tried to understand why
intelligent people would use Lisp, I'm going to assume you're just
here trolling.

Bruce Hoult

Dec 17, 2001, 4:05:47 AM
In article <slrna1q8eu.hf0....@g.local>,
Gareth.M...@pobox.com wrote:

> > I suspect you will run into this when writing the code to solve the Nine
> > Nines problem above. My Lisp version is 47 physical lines.
>
> Ha. The one I just wrote is 21 lines, including one blank line
> and 4 comment-only lines. Nyaaah!
>
> I reckon it could probably be done in about 60 (non-blank, non-
> -comment) lines of C++ using the STL

I don't know how you're doing it in CL, but I'm disappointed because I
can do it in 23 lines of *Perl* (17 if you don't count lines with only a
brace or the Unix #! line) and Perl doesn't even have support for
fractions!!

bruce@k7:~/programs/perl > cat nineNines.pl
#!/usr/bin/perl
my @nums = ({},{"9/1" => 9});
sub gcd {my ($a,$b)=@_; $a?gcd($a<=$b?($a,$b%$a):($b,$a)):$b};
for (my $n=1;$n<=9;++$n){
    for (my $p=1;$p<=$n/2;++$p){
        for my $a (keys %{$nums[$p]}){
            for my $b (keys %{$nums[$n-$p]}){
                "$a $b" =~ m!(-*\d+)/(-*\d+) (-*\d+)/(-*\d+)!;
                map {my ($a,$b)=@{$_};
                     ($a,$b) = (-$a,-$b) if $b<0;
                     my $g=gcd(abs($a),$b);
                     $nums[$n]{$a/$g."/".$b/$g} = 9 if $b;
                }([$1*$4+$2*$3,$2*$4], [$1*$4-$2*$3,$2*$4], [$2*$3-$1*$4,$2*$4],
                  [$1*$3,$2*$4], [$2*$3,$1*$4], [$1*$4,$2*$3]);
            }
        }
    }
    for (my $i=0;;++$i){
        next if $nums[$n]{"$i/1"};
        print "Smallest integer that can't be made from $n nines is $i\n";
        last;
    }
}
bruce@k7:~/programs/perl > time ./nineNines.pl
Smallest integer that can't be made from 1 nines is 0
Smallest integer that can't be made from 2 nines is 2
Smallest integer that can't be made from 3 nines is 1
Smallest integer that can't be made from 4 nines is 4
Smallest integer that can't be made from 5 nines is 13
Smallest integer that can't be made from 6 nines is 22
Smallest integer that can't be made from 7 nines is 33
Smallest integer that can't be made from 8 nines is 103
Smallest integer that can't be made from 9 nines is 195

real 0m31.510s
user 0m31.470s
sys 0m0.040s
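(An aside: since Perl has no rational type, the script keeps each value as a canonical "p/q" hash key. The reduction performed by the one-line gcd helper can be sketched in Python; the helper name below is mine, not something from the thread:)

```python
from math import gcd

def canonical_key(p, q):
    """Reduce p/q to lowest terms with a positive denominator, mirroring
    the normalization the Perl script applies before using "p/q" as a
    hash key. (Illustrative helper, not code from the thread.)"""
    if q == 0:
        return None        # the Perl guards this case with `... = 9 if $b`
    if q < 0:              # keep any minus sign on the numerator
        p, q = -p, -q
    g = gcd(abs(p), q)
    return f"{p // g}/{q // g}"
```

So 18/2 and 9/1 collapse to the same key, which is what lets the hash act as a set of distinct rationals.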

Bruce Hoult

Dec 17, 2001, 7:53:51 AM
In article <bruce-172C16....@news.paradise.net.nz>, Bruce
Hoult <br...@hoult.org> wrote:

> In article <slrna1q8eu.hf0....@g.local>,
> Gareth.M...@pobox.com wrote:
>
> > > I suspect you will run into this when writing the code to solve the
> > > Nine
> > > Nines problem above. My Lisp version is 47 physical lines.
> >
> > Ha. The one I just wrote is 21 lines, including one blank line
> > and 4 comment-only lines. Nyaaah!
> >
> > I reckon it could probably be done in about 60 (non-blank, non-
> > -comment) lines of C++ using the STL
>
> I don't know how you're doing it in CL, but I'm disappointed because I
> can do it in 23 lines of *Perl* (17 if you don't count lines with only a
> brace or the Unix #! line) and Perl doesn't even have support for
> fractions!!

OK, now 13 lines of CL. Pointers on how to better use CL appreciated --
I don't like how much this conses with the nconc, but the dolist is so
convenient:

(setq nums (make-array '(10)))
(dotimes (i 10) (setf (aref nums i) (make-hash-table)))
(setf (gethash 9 (aref nums 1)) t)
(dotimes (n 10)
  (loop for p from 1 to (/ n 2) do
    (loop for a being the hash-keys of (aref nums p) do
      (loop for b being the hash-keys of (aref nums (- n p)) do
        (dolist (i (nconc (list (+ a b) (- a b) (- b a) (* a b))
                          (if (/= a 0) (list (/ b a)))
                          (if (/= b 0) (list (/ a b)))))
          (setf (gethash i (aref nums n)) t)))))
  (format t "With ~D nines, can't make ~D~%" n
          (loop for i from 1 unless (gethash i (aref nums n)) do (return i))))

-- Bruce

Siegfried Gonzi

Dec 17, 2001, 4:16:13 AM
"Fernando Rodríguez" wrote:

> Are you sure that's a good Lisp vs C++ challenge? It's more like a
> mathematical puzzle...
>
> Something that forces you to struggle bypassing C++'s static type system would
> be IMHO a better example. You can even add some multiple inheritance for extra
> suffering. };-)
>
> For the OP: This C++ book is _full_ of great examples of things which are
> trivial in CL, but atrocious monstrosities in C++.

The problem with templates (and C++ libraries) is that they are
not really platform-independent, and especially not
compiler-independent. I never figured out why most templates do
not work with MPW, the native Macintosh C/C++ compiler.
It seems that the Microsoft Visual C++ compiler is better in this
respect (at least judging by my new Windows XP notebook).

Or take the new Template Numerical Toolkit (TNT), which provides
things like LU factorization and the like (interestingly, this
official library is 10 times slower than plain C). I wouldn't bet
that it will run with all C++ compilers.

What has all this to do with Common Lisp? It is a big plus that
Common Lisp shows none of the above-mentioned weaknesses. I
appreciate this fact more and more every day.

A few weeks ago I was still a fanatical Clean fellow. I am not
a programmer for a living (academia...), but I have to use the
computer and numerical simulations for my PhD in climatology. Okay,
here are the reasons why I now believe Clean is not useful for
my work (I have to emphasize that I can speak only for myself):


a) It is nearly ridiculous that in Clean one is forced to write
floating-point numbers always with a precision of 16 places. There is
no direct way, e.g., to write precision-formatted output to a file.
Yes, I have a second-hand library (thanks to Fabien T.) which
can at least perform formatted single-precision output, but this
second-hand library does not always work as required.

b) Absolutely no control over numbers. A floating-point number is a
double-precision representation and an integer is a 32-bit number.
You cannot tell Clean that you want your calculation done in
single precision, for example. Absolutely no way to round to a specific
precision.

c) No exception handling at all. And this is really severe, because
you cannot print out intermediate results; there is one starting
point and one ending point. In bigger programs, debugging is awful.

d) If you have a function B which is called by function A, BUT you do
not actually use function A, and you want to use B with slightly
different types, the compiler will complain that B in A does not match
(even though A is never used). This is why a Clean programmer often
sits in front of his computer (running Clean) for 8 hours, only
watching the screen rather than typing anything useful, because he is
desperately afraid of Clean's notorious (and in most cases not very
informative) compiler complaints.

e) And the biggest problem: array types and Clean's unique array type
concept (BELIEVE me, C's post-fece error is a pleasure to experience
compared to Clean's unique array concept). There are many array types in
Clean, e.g. the 2-dimensional real ones:

!*{#*{Real}}, !{{Real}}, !*{#{#Real}}, !{#*{#Real}}, !{!{!Real}},
{#*{Real}}, {{Real}}, *{#{#Real}}, {#*{#Real}}, {{Real}},...

As if that weren't confusing enough, you cannot always pass every array
to every type. In-place array updates are *only* allowed on unique ones,
but this is often trial and error, because you get the wildest compiler
errors saying that a unique array is expected but was not delivered. As
a programmer, your last resort is to copy the array. AND without any
uniqueness information one cannot update arrays at all!
This often completely destroys the idea of passing higher-order functions
(whose types involve unique arrays), because you then have to copy the
array in order to satisfy the array concept. BUT sometimes you may pass
arrays as functions (and the compiler will not complain), though I haven't
figured out when or why, and there is absolutely no documentation
discussing the problem in depth.

f) Files are handled the same way as unique arrays. I swear to you: it is
a nightmare. Beads of sweat form on my forehead when I think that I have
to re-use (admittedly, a very tiny code base) the Clean code I am writing
for my PhD. But I will leave Clean programming as soon as possible.

g) And sometimes the compiler produces side-effects (yes!) when updating
arrays. The underlying Clean concept must be very complicated and error-prone.

I am writing up my experience here because in the computer-language wars
I often read that Common Lispers, C++ victims, etc. are ignorant of new
innovations; but when new innovation actually means experiencing the
above-mentioned nightmares, I think the ignorant are on the right track.


I have been exposed to two programming-language shocks in my life:

- Forth
- Clean

(I did not forget to mention C/C++; C++ is merely complicated, and maybe
one can learn it in 210 years of hard training, but C++ is not shocking.)
I am seriously considering Allegro Common Lisp for my future PhD work.
The development environment for Windows is really fantastic, and even
for students the expense of renewing the license every 60 days is
affordable. And once one holds a PhD, one can easily afford (wages for
postdocs are surely better) the complete package.

Corman Lisp also offers a good environment, and Roger Corman is surely a
very talented programmer and professional, but Corman Lisp is not really
useful for large floating-point arrays.


S. Gonzi

Kaz Kylheku

Dec 17, 2001, 6:54:17 PM
In article <sVYS7.9925$ip4.1...@news2.calgary.shaw.ca>, Kaz Kylheku wrote:
>In the real world, there are deadlines. So, let's say you have until
>Tuesday, 00:00 UTC.

Well, the deadline is only a few minutes away as I write this.

Will we see code, or is Wroot a pure troll?

It's possible to generate the list of values of all possible expressions
of N nines by combining the values of expressions of K nines (woof!) and
N - K nines over all operators and all relevant values of K. If N is 1,
the only expression possible is 9, and If N is 2, the expressions are
(+ 9 9), (- 9 9), (/ 9 9) and (* 9 9). All else follows by combination; for
N = 3, combine the N=1 and N=2 results in every possible way, and so on.
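A minimal Python sketch of this bottom-up combination, using exact rationals (the function name and the choice to search upward from 0, matching the Perl run earlier in the thread, are my own, not Kaz's):

```python
from fractions import Fraction

def nine_nines(max_n=9, digit=9):
    """nums[n] = set of every value expressible with exactly n nines
    under + - * /, built by combining the K-nines and (N-K)-nines sets."""
    nums = [set() for _ in range(max_n + 1)]
    nums[1].add(Fraction(digit))
    smallest = {}
    for n in range(1, max_n + 1):
        for p in range(1, n // 2 + 1):        # combine K and N-K nines
            for a in nums[p]:
                for b in nums[n - p]:
                    nums[n].update((a + b, a - b, b - a, a * b))
                    if b: nums[n].add(a / b)  # skip division by zero
                    if a: nums[n].add(b / a)
        i = 0                                 # first unreachable integer
        while Fraction(i) in nums[n]:
            i += 1
        smallest[n] = i
    return smallest
```

Run with max_n=5 this reports 0, 2, 1, 4, 13, agreeing with the Perl output above; going all the way to 9 nines takes noticeably longer.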

This approach should be a piece of cake in C++; an hour's work at
most.

My Lisp solution actually generates expressions and evaluates them,
storing only the top-level results in a hash table for the purpose of
finding the least positive integer that is not in the list. My reason
for doing the puzzle in the first place was to explore this technique.

That solution is not such a piece of cake in C++.

Ed L Cashin

Dec 17, 2001, 7:09:00 PM
Wroot <wr...@my-deja.com> writes:

> I knew you didn't have the balls to accept. A real man can admit it when he
> is wrong. You aren't one. You called me an idiot, and offered to solve this
> little problem you spent several hours writing a lisp program for. I
> offered you a way to compare answers on the spot.
>
> My advice to you: see shrink and, also, go back to school, kiddo. Maybe
> they'll teach you some respect for people who are smarter than you.

This newsgroup is weird. Lisp seems to be a drug that makes smart
people mean and dumb people mean.

--
--Ed Cashin integrit file-verification system:
eca...@terry.uga.edu http://integrit.sourceforge.net/

Note: If you want me to send you email, don't munge your address.

Gareth McCaughan

Dec 17, 2001, 5:29:14 PM
Bruce Hoult wrote:

[I said:]


> > > Ha. The one I just wrote is 21 lines, including one blank line
> > > and 4 comment-only lines. Nyaaah!
> > >
> > > I reckon it could probably be done in about 60 (non-blank, non-
> > > -comment) lines of C++ using the STL
> >
> > I don't know how you're doing it in CL, but I'm disappointed because I
> > can do it in 23 lines of *Perl* (17 if you don't count lines with only a
> > brace or the Unix #! line) and Perl doesn't even have support for
> > fractions!!

It'll be a little shorter in Perl because of autovivification,
which approximately makes up for the lack of fractions.

> OK, now 13 lines of CL. Pointers on how to better use CL appreciated --
> I don't like how much this conses with the nconc, but the dolist is so
> convenient:

[SNIP]

That's almost identical to my solution. :-) Oh, I forgot
to mention that two of my 16 (or, if you prefer, 21) lines
were FORMATs to keep track of what the program's doing as
it goes.

The other difference is that I wrote a bunch of (setf (gethash ...))
forms explicitly rather than a dolist. This hurt me less than it
would have hurt you because I didn't bother saving a factor of 2
by only looping half way up in the next-to-outer loop.

Bruce Hoult

Dec 17, 2001, 9:28:40 PM
In article <slrna1ssdq.qn6....@g.local>,
Gareth.M...@pobox.com wrote:

> The other difference is that I wrote a bunch of (setf (gethash ...))
> forms explicitly rather than a dolist. This hurt me less than it
> would have hurt you because I didn't bother saving a factor of 2
> by only looping half way up in the next-to-outer loop.

Actually, that's probably better than mine. 4 lines for four operators,
vs four lines for all the nconc/dolist stuff. The only cost is that you
end up doing a+b and a*b twice, but that's only two out of eight
operations wasted, and you get to save all the consing and loop overhead.

Is there a better way to set up the array of hashes at the start? Perl
is *so* convenient for that. OTOH, all those $$$ make me go blind.

I was also wondering if there was an easy way to bind a variable to
(aref nums n) in the loop that controls n, but I couldn't immediately
find one better than "and h = (aref nums n) then (aref nums n)", which
is no improvement. I didn't want to use a separate line for a "let".

-- Bruce

Joe Schaefer

Dec 17, 2001, 11:47:02 PM
Bruce Hoult <br...@hoult.org> writes:

> In article <bruce-172C16....@news.paradise.net.nz>, Bruce
> Hoult <br...@hoult.org> wrote:

[...]

> > I don't know how you're doing it in CL, but I'm disappointed because
> > I can do it in 23 lines of *Perl* (17 if you don't count lines with
> > only a brace or the Unix #! line) and Perl doesn't even have support
> > for fractions!!
>
> OK, now 13 lines of CL. Pointers on how to better use CL appreciated
> -- I don't like how much this conses with the nconc, but the dolist
> is so convenient:
>
> (setq nums (make-array '(10)))
> (dotimes (i 10) (setf (aref nums i) (make-hash-table)))
> (setf (gethash 9 (aref nums 1)) t)
> (dotimes (n 10)
> (loop for p from 1 to (/ n 2) do

IIRC, Your original Perl solution finished on your box at around 30s.
How does the CL version compare? The reason I ask is because when I
ran the perl code you posted, my box took about 200 seconds to finish
it. But by farming out the inner loops to C I got the runtime down to
under 20s. This reduced the Perl to about a dozen lines at the expense
of ~3 hours and ~80 lines of C [1].

If your CL solution is "too slow", I'm wondering what sort of
strategies people might recommend to try and accelerate it.
Recoding slow parts of a perl script in C is a common technique, and
in this case I got a 10-fold increase without "too" much extra
effort. Can something similar also be done for the CL version?


[1] - the script wound up looking like

#!/usr/bin/perl -w
use strict;

use Inline "C"; # source follows __C__ marker after end-of-script
sub generate; # (\%\%) returns a list of rationals generated by
# operating on each pair of argument keys

my $N = shift || 9;
my @nums = ({},{"$N/1" => undef}); # ignore hash values

for my $n (1 .. $N) {
@{ $nums[$n] }{ generate $nums[$_], $nums[$n-$_] } = () for 1..$n/2;

$_ = 1; ++$_ while exists $nums[$n]{"$_/1"};
print "Smallest integer that can't be made from $n $N\'s is $_\n";
}

__END__

__C__
/* ... 50 lines of code to implement rational arithmetic in C
and convert to/from hash key strings ... */

void generate (SV *a, SV *b) {
HV *h = SvRV(a), *k = SvRV(b);
rat_t *r, *s; /* rat_t = struct { ``%ld ; %lu'' } */
I32 i,j;

Inline_Stack_Vars;
Inline_Stack_Reset;

if (SvTYPE(h) != SVt_PVHV || SvTYPE(k) != SVt_PVHV)
croak ("[generate(a,b)] error: args must be hash refs!");

setup_keys(h, r, i);

if (h != k)
setup_keys(k, s, j);
else /* avoid repetition */
s=r, j=i;

if (!r || !s) { /* setup/malloc failed */
if (r) free(r);
if (s && s != r) free(s);
return;
}

for ( --i ; i>=0 ; --i) { /* XXX: original inner loops */
int m = j;
for (--m ; m>=0 ; --m)
rat_gen(r[i],s[m]); /* PUSH rationals back to Perl */
}

free(r);
if (h != k) free(s);

Inline_Stack_Done;
}


--
Joe Schaefer


Bruce Hoult

Dec 18, 2001, 12:05:51 AM
In article <m3sna95...@mumonkan.sunstarsys.com>, Joe Schaefer
<joe+u...@sunstarsys.com> wrote:

> > > I don't know how you're doing it in CL, but I'm disappointed because
> > > I can do it in 23 lines of *Perl* (17 if you don't count lines with
> > > only a brace or the Unix #! line) and Perl doesn't even have support
> > > for fractions!!
> >
> > OK, now 13 lines of CL.
>

> IIRC, Your original Perl solution finished on your box at around 30s.
> How does the CL version compare?

Right. 31.5 seconds. As it turns out, in CMUCL on the same machine the
Lisp runs in +/- one second of the Perl. CMUCL puts out about a million
GC status messages, which might be slowing it down (not sure how to turn
those off).


> The reason I ask is because when I ran the perl code you posted,
> my box took about 200 seconds to finish it.

Mine was a 700 MHz Athlon running Linux. Just tried the Perl on an 867
MHz MacOS X machine and it took 29.2 seconds there. And on the 266 MHz
G3 PowerBook I'm typing this on it takes 86.2 seconds. I don't have CL
on the Macs. You're using a Pentium 100 or something?


> But by farming out the inner loops to C I got the runtime down to
> under 20s. This reduced the Perl to about a dozen lines at the expense
> of ~3 hours and ~80 lines of C [1].

Gads! You can actually read obfuscated Perl code!! You have my
admiration.


> If your CL solution is "too slow", I'm wondering what sort of
> strategies people might recommend to try and accelerate it.

Declaring some variable types would help. So would hand-expanding the
nconc/dolist stuff.


> Recoding slow parts of a perl script in C is a common technique, and
> in this case I got a 10-fold increase without "too" much extra
> effort. Can something similar also be done for the CL version?

I should probably do a Dylan version. It'll be near identical to the
CL, but a bit more wordy (lots of lines with just 'end' instead of
'))))' in the Lisp). Dylan is designed to compile better than Lisp can,
but a) Lisp is pretty good too, and b) we're probably all at the mercy
of the quality of the runtime library hash table implementation.

-- Bruce

Bruce Hoult

Dec 18, 2001, 12:16:49 AM
In article <bruce-9BF494....@news.paradise.net.nz>, Bruce
Hoult <br...@hoult.org> wrote:

> > IIRC, Your original Perl solution finished on your box at around 30s.
> > How does the CL version compare?
>
> Right. 31.5 seconds. As it turns out, in CMUCL on the same machine the
> Lisp runs in +/- one second of the Perl.

A tidy-up (thanks to Paul Foley) reduces the CL to 21.6 seconds, a 47%
speed increase.


(let ((nums (make-array 10
              :initial-contents (loop repeat 10 collect (make-hash-table)))))
  (setf (gethash 9 (aref nums 1)) t)
  (loop for n below 10 as h = (aref nums n) do
    (loop for p from 1 to (floor n 2) do
      (loop for a being the hash-keys of (aref nums p) do
        (loop for b being the hash-keys of (aref nums (- n p)) do
          (setf (gethash (+ a b) h) t (gethash (* a b) h) t
                (gethash (- a b) h) t (gethash (- b a) h) t)
          unless (zerop b) do (setf (gethash (/ a b) h) t)
          unless (zerop a) do (setf (gethash (/ b a) h) t))))
    (format t "~&With ~D nines, can't make ~D" n
            (loop for i from 1 unless (gethash i h) do (return i)))))

-- Bruce

Bruce Hoult

Dec 18, 2001, 1:19:39 AM
In article <bruce-E1F709....@news.paradise.net.nz>, Bruce
Hoult <br...@hoult.org> wrote:

> In article <bruce-9BF494....@news.paradise.net.nz>, Bruce
> Hoult <br...@hoult.org> wrote:
>
> > > IIRC, Your original Perl solution finished on your box at around 30s.
> > >
> > > How does the CL version compare?
> >
> > Right. 31.5 seconds. As it turns out, in CMUCL on the same machine
> > the Lisp runs in +/- one second of the Perl.
>
> A tidy-up (thanks to Paul Foley) reduces the CL to 21.6 seconds, a 47%
> speed increase.

Ooops.

I just found that CMUCL doesn't compile things by default. So the
previous time was for interpreted code. After..

(defun nineNines ()
<previous code>)

(compile 'nineNines)

(time (nineNines))

... we now get ...

Evaluation took:
2.18 seconds of real time
2.13 seconds of user run time
0.05 seconds of system run time
[Run times include 0.23 seconds GC run time]
0 page faults and
10954632 bytes consed.


So that's ten times faster for compiled code than for interpreted.
Without declarations.

-- Bruce

Thomas F. Burdick

Dec 18, 2001, 1:24:40 AM
Bruce Hoult <br...@hoult.org> writes:

> In article <bruce-9BF494....@news.paradise.net.nz>, Bruce
> Hoult <br...@hoult.org> wrote:
>
> > > IIRC, Your original Perl solution finished on your box at around 30s.
> > > How does the CL version compare?
> >
> > Right. 31.5 seconds. As it turns out, in CMUCL on the same machine the
> > Lisp runs in +/- one second of the Perl.
>
> A tidy-up (thanks to Paul Foley) reduces the CL to 21.6 seconds, a 47%
> speed increase.

This seemed insanely slow to me [you did compile it, right?], so I
tried it myself.

(defun nines ()
(declare (optimize (speed 3) (safety 1)))


> (let ((nums (make-array 10
> :initial-contents (loop repeat 10 collect (make-hash-table)))))
> (setf (gethash 9 (aref nums 1)) t)
> (loop for n below 10 as h = (aref nums n) do

;; start [*]


> (loop for p from 1 to (floor n 2) do
> (loop for a being the hash-keys of (aref nums p) do
> (loop for b being the hash-keys of (aref nums (- n p)) do
> (setf (gethash (+ a b) h) t (gethash (* a b) h) t
> (gethash (- a b) h) t (gethash (- b a) h) t)
> unless (zerop b) do (setf (gethash (/ a b) h) t)
> unless (zerop a) do (setf (gethash (/ b a) h) t))))

;; end [*]


> (format t "~&With ~D nines, can't make ~D" n
> (loop for i from 1 unless (gethash i h) do (return i)))))
>
> -- Bruce

This is all on a 500 MHz Celeron. First, here's a transcript of the
best Perl time I got, of 5 runs:

[noc@malatesta tmp]$ time ./foo.pl


Smallest integer that can't be made from 1 nines is 0
Smallest integer that can't be made from 2 nines is 2
Smallest integer that can't be made from 3 nines is 1
Smallest integer that can't be made from 4 nines is 4
Smallest integer that can't be made from 5 nines is 13
Smallest integer that can't be made from 6 nines is 22
Smallest integer that can't be made from 7 nines is 33
Smallest integer that can't be made from 8 nines is 103
Smallest integer that can't be made from 9 nines is 195

real 1m16.517s
user 1m15.260s
sys 0m0.310s

I compiled the file containing NINES, loaded it, ran the GC, and
here's what I got:

With 9 nines, can't make 195
Evaluation took:
8.74 seconds of real time
7.92 seconds of user run time
0.4 seconds of system run time
[Run times include 1.32 seconds GC run time]
12 page faults and
10968552 bytes consed.

Since a fair amount of that time was spent in the GC, I wrapped the
area of the function I marked with "start [*]" and "end [*]" in an
ext::without-gcing form, so the GC would only run between the major
iterations, and saved a little more time:

* (time (nines))
Compiling LAMBDA NIL:
Compiling Top-Level Form:

With 0 nines, can't make 1
With 1 nines, can't make 1
With 2 nines, can't make 2
With 3 nines, can't make 1
With 4 nines, can't make 4
With 5 nines, can't make 13
With 6 nines, can't make 22
With 7 nines, can't make 33
[GC threshold exceeded with 3,501,600 bytes in use. Commencing GC.]
[GC completed with 1,555,664 bytes retained and 1,945,936 bytes freed.]
[GC will next occur when at least 3,555,664 bytes are in use.]

With 8 nines, can't make 103
[GC threshold exceeded with 9,783,712 bytes in use. Commencing GC.]
[GC completed with 3,714,504 bytes retained and 6,069,208 bytes freed.]
[GC will next occur when at least 5,714,504 bytes are in use.]

With 9 nines, can't make 195
Evaluation took:
7.86 seconds of real time
7.24 seconds of user run time
0.36 seconds of system run time
[Run times include 0.65 seconds GC run time]
0 page faults and
10978104 bytes consed.
NIL

Incidentally, CLISP also kicked the pants off Perl:

[9]> (time (nines))

With 0 nines, can't make 1
With 1 nines, can't make 1
With 2 nines, can't make 2
With 3 nines, can't make 1
With 4 nines, can't make 4
With 5 nines, can't make 13
With 6 nines, can't make 22
With 7 nines, can't make 33
With 8 nines, can't make 103
With 9 nines, can't make 195
Real time: 9.871542 sec.
Run time: 9.6 sec.
Space: 13839052 Bytes
GC: 22, GC time: 5.01 sec.
NIL

Kenny Tilton

Dec 18, 2001, 1:39:01 AM

Bruce Hoult wrote:

> So that's ten times faster for compiled code than for interpreted.
> Without declarations.

Good work. But no one would worry about the performance of X if the
performance of X did not matter, and if it matters, we need to optimize
X to the max. so...

.../with/ declarations? :)

Hey, where'd Wrootmeister go? I am thinking that was a Lisper just
trying to stir up their fellow Lispers. The trollishness was a bit over
the top to be convincing.

kenny
clinisys

Bruce Hoult

Dec 18, 2001, 1:44:46 AM
In article <xcv6675...@conquest.OCF.Berkeley.EDU>,
t...@conquest.OCF.Berkeley.EDU (Thomas F. Burdick) wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > In article <bruce-9BF494....@news.paradise.net.nz>, Bruce
> > Hoult <br...@hoult.org> wrote:
> >
> > > > IIRC, Your original Perl solution finished on your box at around
> > > > 30s.
> > > > How does the CL version compare?
> > >
> > > Right. 31.5 seconds. As it turns out, in CMUCL on the same machine
> > > the
> > > Lisp runs in +/- one second of the Perl.
> >
> > A tidy-up (thanks to Paul Foley) reduces the CL to 21.6 seconds, a 47%
> > speed increase.
>
> This seemed insanely slow to me [you did compile it, right?], so I
> tried it myself.

No, I simply didn't realize that I had to! I've only previously done
things in CMUCL where speed didn't matter, and the only previous
compiler I used that had an interactive REPL (Functional Developer, nee
Harlequin Dylan) automatically compiles everything before running it.


> (defun nines ()
> (declare (optimize (speed 3) (safety 1)))

This "declare" line has absolutely no effect on the speed. 2.18 seconds
of wallclock time either way.

-- Bruce

Jochen Schmidt

Dec 18, 2001, 3:11:39 AM
Bruce Hoult wrote:

You can switch off GC-Messages by (setf *gc-verbose* nil)

What makes me curious is that the above code runs on my Duron 800 (256MB RAM)
in 2.72 seconds! (I have stuffed the code into a function named FOO.)
The output is the following:

(let ((*gc-verbose* nil)) (time (foo)))


Compiling LAMBDA NIL:
Compiling Top-Level Form:

With 0 nines, can't make 1
With 1 nines, can't make 1
With 2 nines, can't make 2
With 3 nines, can't make 1
With 4 nines, can't make 4
With 5 nines, can't make 13
With 6 nines, can't make 22
With 7 nines, can't make 33

With 8 nines, can't make 103
With 9 nines, can't make 195

Evaluation took:
2.72 seconds of real time
2.47 seconds of user run time
0.07 seconds of system run time
[Run times include 0.34 seconds GC run time]
1 page fault and
10938232 bytes consed.

ciao,
Jochen

Raymond Wiker

Dec 18, 2001, 3:08:33 AM
Bruce Hoult <br...@hoult.org> writes:

On my work machine (Pentium III at 666 MHz), using SBCL, I got
around 6.5 seconds. Adding a (declare (optimize (speed 3)
(compilation-speed 0) (safety 0))) form and "fixnum" declarations for
p, a and b got that down to about 0.7 seconds.

--
Raymond Wiker Mail: Raymon...@fast.no
Senior Software Engineer Web: http://www.fast.no/
Fast Search & Transfer ASA Phone: +47 23 01 11 60
P.O. Box 1677 Vika Fax: +47 35 54 87 99
NO-0120 Oslo, NORWAY Mob: +47 48 01 11 60

Try FAST Search: http://alltheweb.com/

Bruce Hoult

Dec 18, 2001, 4:13:47 AM
In article <864rmp6...@raw.grenland.fast.no>, Raymond Wiker
<Raymon...@fast.no> wrote:

> On my work machine (Pentium III at 666 MHz), using SBCL, I got
> around 6.5 seconds. Adding a (declare (speed 3) (compilations-speed 0)
> (safety 0) and "fixnum" declarations for p, a and b got that down to
> about 0.7 seconds.

a and b aren't fixnums. Are you sure you got the right answers?

-- Bruce

Christophe Rhodes

Dec 18, 2001, 4:22:16 AM
Bruce Hoult <br...@hoult.org> writes:

> In article <slrna1ssdq.qn6....@g.local>,
> Gareth.M...@pobox.com wrote:
>
> > The other difference is that I wrote a bunch of (setf (gethash ...))
> > forms explicitly rather than a dolist. This hurt me less than it
> > would have hurt you because I didn't bother saving a factor of 2
> > by only looping half way up in the next-to-outer loop.
>
> Actually, that's probably better than mine. 4 lines for four operators,
> vs four lines for all the nconc/dolist stuff. The only cost is that you
> end up doing a+b and a*b twice, but that's only two out of eight
> operations wasted, and you get to save all the consing and loop overhead.
>
> Is there a better way to set up the array of hashes at the start?

[ (setq nums (make-array '(10)))
  (dotimes (i 10) (setf (aref nums i) (make-hash-table)))
  (setf (gethash 9 (aref nums 1)) t) ]

(setq nums (map-into (make-array '(10)) #'make-hash-table))

will save you one line :)

Cheers,

Christophe
--
Jesus College, Cambridge, CB5 8BL +44 1223 510 299
http://www-jcsu.jesus.cam.ac.uk/~csr21/ (defun pling-dollar
(str schar arg) (first (last +))) (make-dispatch-macro-character #\! t)
(set-dispatch-macro-character #\! #\$ #'pling-dollar)

Raymond Wiker

Dec 18, 2001, 5:20:36 AM
Bruce Hoult <br...@hoult.org> writes:

Oops... you're right; they're not fixnums, and I got the wrong
results. Declaring a and b as "rational" instead of "fixnum" gives me
the right result, and better speed as well (0.46 seconds).

Pierre R. Mai

Dec 18, 2001, 5:23:33 AM
Bruce Hoult <br...@hoult.org> writes:

> Lisp runs in +/- one second of the Perl. CMUCL puts out about a million
> GC status messages, which might be slowing it down (not sure how to turn
> those off).

* (describe 'ext:*gc-verbose*)

*GC-VERBOSE* is an external symbol in the EXTENSIONS package.
It is a special variable; its value is T.
Special documentation:
When non-NIL, causes the functions bound to *GC-NOTIFY-BEFORE* and
*GC-NOTIFY-AFTER* to be called before and after a garbage collection
occurs respectively. If :BEEP, causes the default notify functions to beep
annoyingly.

So try

(setq ext:*gc-verbose* nil)

Regs, Pierre.

--
Pierre R. Mai <pm...@acm.org> http://www.pmsf.de/pmai/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We cause accidents. -- Nathaniel Borenstein

Bruce Hoult

Dec 18, 2001, 10:23:39 AM
In article <86zo4g6...@raw.grenland.fast.no>, Raymond Wiker
<Raymon...@fast.no> wrote:

> > a and b aren't fixnums. Are you sure you got the right answers?
>
> Oops... you're right; they're not fixnums, and I got the wrong
> results. Declaring a and b as "rational" instead of "fixnum" gives me
> the right result, and better speed as well (0.46 seconds).

That's pretty fast.

I did a Dylan version:

begin
  let nums = make(<vector>, size: 10);
  for (i from 0 to 9) nums[i] := make(<equal-table>) end;
  nums[1][fraction(9,1)] := #t;
  for (n from 1 to 9)
    let h :: <equal-table> = nums[n];
    for (p from 1 to floor/(n,2))
      for (a :: <fraction> in nums[p].key-sequence)
        for (b :: <fraction> in nums[n - p].key-sequence)
          h[a + b] := #t; h[a * b] := #t;
          h[a - b] := #t; h[b - a] := #t;
          unless (a.num.zero?) h[b / a] := #t end;
          unless (b.num.zero?) h[a / b] := #t end;
        end
      end
    end;
    for (i from 1, while: element(h, fraction(i,1), default: #f))
    finally format-out("With %d nines, can't make %d\n", n, i);
    end
  end
end

That's 21 lines with 7 lines of noisewords (begin/end). With Gwydion
d2c it takes 1.9 seconds on the same machine on which CMUCL takes 2.18
seconds. Without the declarations of h, a and b it takes 2.8 seconds.
The rest of the time is wasted in our extremely sucky implementation of
hash tables which is doing GF dispatch up the wazoo.

I've been playing with a better implementation of hash tables for a
while but it's not done & checked in to cvs yet, so it doesn't count :-)
And it would take real work to get as fast as your SBCL (whatever that
is) results. Especially for a user-defined datatype in a standard hash
table.


Unfortunately, since Gwydion doesn't have fractions I needed another 47
lines of support code. Well I guess I'll put a cleaned up and more
complete version into the Gwydion libraries sometime soon, in which case
Gwydion *will* support fractions... :-)

define class <fraction> (<number>)
  sealed slot num :: <integer>,
    required-init-keyword: n:;
  sealed slot den :: <integer>,
    required-init-keyword: d:;
end <fraction>;

define sealed method initialize
    (f :: <fraction>, #key n :: <integer>, d :: <integer>)
 => ();
  let g = gcd(n.abs, d.abs);
  if (d < 0) g := -g end;
  unless (g = 1) f.num := truncate/(n,g); f.den := truncate/(d,g) end;
end;

define sealed domain make (singleton(<fraction>));

define inline method fraction(n :: <integer>, d :: <integer>)
  make(<fraction>, n: n, d: d);
end;

define sealed inline method \+(a :: <fraction>, b :: <fraction>)
  fraction(a.num * b.den + a.den * b.num, a.den * b.den);
end;

define sealed inline method \-(a :: <fraction>, b :: <fraction>)
  fraction(a.num * b.den - a.den * b.num, a.den * b.den);
end;

define sealed inline method \*(a :: <fraction>, b :: <fraction>)
  fraction(a.num * b.num, a.den * b.den);
end;

define sealed inline method \/(a :: <fraction>, b :: <fraction>)
  fraction(a.num * b.den, a.den * b.num);
end;

define sealed inline method \=(a :: <fraction>, b :: <fraction>)
 => res :: <boolean>;
  a.num = b.num & a.den = b.den;
end;

define inline method equal-hash
    (key :: <fraction>, initial-state :: <hash-state>)
 => (id :: <integer>, state :: <hash-state>);
  values(merge-hash-ids(key.num, key.den, ordered: #t), initial-state);
end method equal-hash;
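The algorithm both versions implement (growing sets of exactly-representable rationals, indexed by how many nines are consumed, by combining every split of n nines into p and n-p) can be sketched with Python's standard-library fractions module. This is an illustrative translation by way of comparison, not code posted in the thread:

```python
from fractions import Fraction

def nine_nines(limit):
    """For each n up to limit, find the smallest positive integer NOT
    expressible with exactly n nines under +, -, *, / (exact rationals)."""
    sets = {1: {Fraction(9)}}   # sets[n] = values buildable from exactly n nines
    smallest_missing = {}
    for n in range(1, limit + 1):
        vals = sets.get(n, set())
        # combine every split of n nines into p nines and n - p nines
        for p in range(1, n // 2 + 1):
            for a in sets[p]:
                for b in sets[n - p]:
                    vals.update((a + b, a * b, a - b, b - a))
                    if a != 0:
                        vals.add(b / a)
                    if b != 0:
                        vals.add(a / b)
        sets[n] = vals
        i = 1
        while i in vals:        # Fraction(i) == i, so plain ints work here
            i += 1
        smallest_missing[n] = i
    return smallest_missing

print(nine_nines(3))  # → {1: 1, 2: 2, 3: 1}
```

Don't run it for the full nine nines without patience: the sets grow combinatorially, which is exactly what the thread is timing.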

Michael Parker

unread,
Dec 18, 2001, 9:25:41 PM12/18/01
to
Bruce Hoult wrote:

> I just found that CMUCL doesn't compile things by default. So the
> previous time was for interpreted code. After..
>
> (defun nineNines ()
> <previous code>)
>
> (compile 'nineNines)
>
> (time (nineNines))
>
> ... we now get ...
>
> Evaluation took:
> 2.18 seconds of real time
> 2.13 seconds of user run time
> 0.05 seconds of system run time
> [Run times include 0.23 seconds GC run time]
> 0 page faults and
> 10954632 bytes consed.
>
> So that's ten times faster for compiled code than for interpreted.
> Without declarations.


For those of you who might be pining for the good old days,
my XL1200 took 43.06 secs compiled.
0.967 secs processing sequence breaks
0.892 secs in the storage system
Consing not measured due to GC flip

Kent M Pitman

unread,
Dec 18, 2001, 10:19:47 PM12/18/01
to
Michael Parker <des...@pdq.net> writes:

> > Evaluation took:
> > 2.18 seconds of real time
> > 2.13 seconds of user run time
> > 0.05 seconds of system run time
> > [Run times include 0.23 seconds GC run time]
> > 0 page faults and
> > 10954632 bytes consed.
> >
> > So that's ten times faster for compiled code than for interpreted.
> > Without declarations.
>
>
> For those of you who might be pining for the good old days,
> my XL1200 took 43.06 secs compiled.
> 0.967 secs processing sequence breaks
> 0.892 secs in the storage system
> Consing not measured due to GC flip

For those of you confused about why we pine for the good old days, it is
not raw processor power. The LispM was quite competitive in its time, but
even now, much slower that it is, it still has so many friendly UI features
no one's ever bothered to copy that I pull it out and use it for some kinds
of work because it's "fast enough" for editing (who really cares if they
type a key and then it takes 5 microseconds or 5000 to do the command as
long as it's ready to take the next command when you need it) and often
still even "fast enough" for debugging... Kinds of things I still miss:

* Meta-X Source Compare

Don't tell me m-x diff or whatever is equivalent until you've got all
the options the LispM did. It let you type 2 keys after you invoked
it to say what quantity (B = Buffer (prompted for), F = File (prompted
for), R = region (chosen one first time used, prompted for by recursive
edit if two regions needed), K or c-Y = top of kill ring,
P or m-Y = second on kill ring). Bit-decoded numeric arguments to control
ignoring case and/or leading whitespace. I don't see diff doing any of
this.

* Integrated Source Compare

Some people have suggested the lispm was a source compare server of sorts,
but what I mean is that every command in the world offered some way to
get at source compare. = in dired, for example. Mail reader had ways
to source compare referenced messages. Typing c-x c-s in emacs to save
file if it said "written by someone else" would prompt to let you see the
diffs before making you say if you wanted to overwrite, and on and on.

* File system with real file-system-level file versions

* Possibilities Buffers

multiple fingers into intermediate state of suspended commands so you
could abort a tags search if you saw something more interesting and
resume it later after stacking another on top of it. I often had these
stacked dozens high

* Complex undo/redo

Including things like undo/redo within region and a way of getting a list
of prior changes so you could only undo certain ones

* Complex command re-execution ("the new yank system")

c-Y (yank last kill), c-sh-Y (yank last kill matching string),
c-m-Y (yank last command), c-m-sh-Y (yank last command matching), etc.
allowing access to prior kills and commands in listable/rotatable/searchable
fashion

* Integrated source and patch system

m-X start patch, m-X finish patch

* Dynamic Windows (the system from which CLIM was created)

integrated into the listeners for debugging. more powerful than CLIM
actually. including presentation debuggers and such.

* Integrated network file system

so that any file operation can be done for any file on the internet

* Customizable and abstract network services

Not just low-level abilities like open-socket but high-level things like
find-path-to-service-on-host where you can ask for an abstract service
and it will choose among available media, protocols, and services to
figure out how best to satisfy things

Even after not having been upgraded in almost 10 years, other than light
patches i've done just for fun, I find my lispm still performs acceptably
and offers me these and more useful things.

Smart and fast are incomparable sometimes.

Bruce Hoult

unread,
Dec 18, 2001, 10:56:02 PM12/18/01
to
In article <sfwpu5b...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> For those of you confused about why we pine for the good old days,
> it is not raw processor power. The LispM was quite competitive in
> its time, but even now, much slower that it is, it still has so
> many friendly UI features no one's ever bothered to copy that I pull
> it out and use it for some kinds of work because it's "fast enough"

I agree with everything you say, but the question is "where is the
source code, and why hasn't someone released it?"


I'm sick to death of incredibly nifty things disappearing along with the
companies that created them. Or, often worse, with the company
continuing and just the project or product getting cancelled and being
sat on, presumably to prevent competitors from getting a leg up.


I've got a lot of points of disagreement with RMS, but the thing that
makes me put time and effort into improving Gwydion Dylan is that at the
end of the day it's something good that No One Can Ever Take Away From
Me.

-- Bruce

Peter S. Housel

unread,
Dec 18, 2001, 11:21:36 PM12/18/01
to
Bruce Hoult <br...@hoult.org> writes:
> That's 21 lines with 7 lines of noisewords (begin/end). With Gwydion
> d2c it takes 1.9 seconds on the same machine on which CMUCL takes 2.18
> seconds. Without the declarations of h, a and b it takes 2.8 seconds.
> The rest of the time is wasted in our extremely sucky implementation of
> hash tables which is doing GF dispatch up the wazoo.
>
> I've been playing with a better implementation of hash tables for a
> while but it's not done & checked in to cvs yet, so it doesn't count :-)
> And it would take real work to get as fast as your SBCL (whatever that
> is) results. Especially for a user-defined datatype in a standard hash
> table.
>
> Unfortunately, since Gwydion doesn't have fractions I needed another 47
> lines of support code. Well I guess I'll put a cleaned up and more
> complete version into the Gwydion libraries sometime soon, in which case
> Gwydion *will* support fractions... :-)

It has them already. See src/d2c/runtime/dylan/ratio.dylan for the
implementation.

-Peter-

Bruce Hoult

unread,
Dec 18, 2001, 11:41:52 PM12/18/01
to
In article <mu9n10f...@cx281057-c.irvn1.occa.home.com>,
hou...@home.com (Peter S. Housel) wrote:

> > Unfortunately, since Gwydion doesn't have fractions I needed another 47
> > lines of support code. Well I guess I'll put a cleaned up and more
> > complete version into the Gwydion libraries sometime soon, in which
> > case
> > Gwydion *will* support fractions... :-)
>
> It has them already. See src/d2c/runtime/dylan/ratio.dylan for the
> implementation.

Oops! So it does!

Turns out that:

- it uses bignums, not machine integers. We'll see how fast those are...

- you have to import it from the Extensions module, it's not in the
normal Common-Dylan.

- you need to explicitly create a ratio, then contagion takes hold after
that. Normal division will never give you one.

- equal-hash(<ratio>) isn't defined so I can't use them in a hash table.

A few minutes work ahead to use this...

-- Bruce

Michael Parker

unread,
Dec 18, 2001, 11:37:24 PM12/18/01
to
Kent M Pitman wrote:
> For those of you confused about why we pine for the good old days, it is
> not raw processor power. The LispM was quite competitive in its time, but
> even now, much slower that it is, it still has so many friendly UI features
> no one's ever bothered to copy that I pull it out and use it for some kinds
> of work because it's "fast enough" for editing (who really cares if they
> type a key and then it takes 5 microseconds or 5000 to do the command as
> long as it's ready to take the next command when you need it) and often
> still even "fast enough" for debugging... Kinds of things I still miss:

Oh I wasn't complaining. I love mine dearly and still use it pretty
frequently. I agree with Bruce, this stuff needs to be saved somehow,
and now that Alpha is dead the VLM's days are numbered.

BTW, it looks like one of the Symbolics now expired patents contains
the microcode (and compiler) along with a simulator for the Ivory, all
written in I think Zetalisp (I haven't bought it from delphion, so can't
say for sure). I don't remember the number, but search for symbolics
and it's the one that's 1000+ pages long.

Given such a thing and different 64-bit processor as the target, how
reasonable would it be to use this as a basis for a new VLM? Any of
the old symbolics hackers know how buggy the Ivory microcode was at
the point the patent was submitted, or how complete the code was?

Any other legal ramifications to using this expired patent as a basis
for a new VLM? I'm pretty sure that the current Ivory images and source
code can't be legally used, so I guess it's all moot unless Symbolics
goes under completely.

Sigh.

Daniel Barlow

unread,
Dec 18, 2001, 12:22:29 PM12/18/01
to
Bruce Hoult <br...@hoult.org> writes:

> And it would take real work to get as fast as your SBCL (whatever that
> is) results. Especially for a user-defined datatype in a standard hash

SBCL = Steel Bank Common Lisp, a CL implementation based on CMUCL -
and still sharing more code with the original than Gwydion Dylan
does, I'd guess ;-)

Find it at http://sbcl.sf.net/

-dan

--

http://ww.telent.net/cliki/ - Link farm for free CL-on-Unix resources

Kent M Pitman

unread,
Dec 19, 2001, 3:46:18 AM12/19/01
to
Bruce Hoult <br...@hoult.org> writes:

> In article <sfwpu5b...@shell01.TheWorld.com>, Kent M Pitman

> <pit...@world.std.com> wrote:
>
> > For those of you confused about why we pine for the good old days,


> > it is not raw processor power. The LispM was quite competitive in
> > its time, but even now, much slower that it is, it still has so
> > many friendly UI features no one's ever bothered to copy that I pull
> > it out and use it for some kinds of work because it's "fast enough"
>

> I agree with everything you say, but the question is "where is the
> source code, and why hasn't someone released it?"

Pardon me for yelling, but I don't do it that often and I am totally
incredulous at this question.

WHAT IN THE WORLD MAKES YOU THINK THEY ARE OBLIGED TO SUPPLY ANYONE
SOURCE CODE OR THAT THESE _IDEAS_ ARE NOT WORTH SOMEONE ELSE
REPRODUCING _WITHOUT_ SUCH SOURCE CODE?????? ABSENT A SENSE OF
OBLIGATION, WHY WOULD SYMBOLICS, A COMPANY THAT MAKES MONEY ON ITS
SOFTWARE AND BARELY MANAGES TO EKE BY, GIVE AWAY STUFF??????

Ok. I feel better now.

Wow. I just can't imagine a less relevant question.

NOT TO MENTION that the real value were the ideas, not the code. I bet it
would be almost as hard to port the code to another processor in many cases
as it would be to write it anew. Source code isn't what's needed here.
It's a mindset that these old ideas are worth learning from and doing.



> I'm sick to death of incredibly nifty things disappearing along with the
> companies that created them.

I'm as sick as you are of that, since I *wrote* some of those changes,
but as it stands now, it is the ABSOLUTE RIGHT of the person that paid
for it. I might join in championing a change to the law that said that
if you failed to market IP created for pay by person X for person Y,
that after some threshold time the IP reverted to be owned by person X.
But I would never say it comes into the public domain except after a
normal period of copyright expires. Ideas should be free, which is why
I'm troubled by patents. But sweat of the brow, which is what copyright
effectively protects, is now and ought always to be ownable, IMO.

> Or, often worse, with the company
> continuing and just the project or product getting cancelled and being
> sat on, presumably to prevent competitors from getting a leg up.

In fairness, companies can take a long time to start. It's important for
there to be a predictable period of time known at the outset that they
have to succeed by, so they can make an informed decision to get into the
game. All existing companies are in the position of not having known of
such a rule and so would have to be grandfathered, IMO, even if the law were changed.

But again, each person who does this is not buying ideas, they are buying
deeds. Copyright is just the work of another. The only people to whom
they owe any duty, even morally, are the people who did the work. Not the
public. And even that is tenuous if the voluntarily-entered-into employment
contract made it clear that the IP would be yielded.

> I've got a lot of points of disagreement with RMS, but the thing that
> makes me put time and effort into improving Gwydion Dylan is that at the
> end of the day it's something good that No One Can Ever Take Away From
> Me.

Well, I DEFINITELY think people should negotiate better contracts with
employers. I guess that's what you're saying too. But then,
what I'd get in my contract would be something that would revert
ownership of IP to me if the employer stopped using it. I suspect
you're saying the contract should cause it to be public domained or
GPL'd or something? By having it revert to me, I could still decide
at that time to do such a thing... but if I was starving and needing
money, I see no reason I shouldn't make money on my own work. Having
things go straight to the public to compete against me is in some ways
worse for me even than having someone sit on it. At least if it's
code I wrote before, I can write it again. In the case that stuff I
did before gets given away, I might not be qualified to do the next
step... The community may benefit at my expense. That's pathologically
bad.

Erik Naggum

unread,
Dec 19, 2001, 3:58:08 AM12/19/01
to
* Bruce Hoult <br...@hoult.org>

| I'm sick to death of incredibly nifty things disappearing along with the
| companies that created them. Or, often worse, with the company
| continuing and just the project or product getting cancelled and being
| sat on, presumably to prevent competitors from getting a leg up.

What this actually means is that too few people want to pay enough for
those features to even warrant keeping them listed as products, much less
pay for maintaining them or pay someone who knows them well enough to
offer any sort of customer questions, and no competitor thinks it is
worth paying the original developer for the right to the source.

As you may be aware, art museums are not just _given_ the artwork they
display for free, but go out and actively participate in auctions and
have annual budgets for acquisitions. Other museums display things they
funded, or they are part of universities and their continuing obligation
to present knowledge to the public. This is stuff that very few people
would want to purchase for themselves. Vast numbers of people prefer to
chip in using taxes and admission fees to be able to study these things.

I think it is time for software museums. There are already several
web sites that effectively amount to being museums, such as the DEC-10/20
site, which also houses an emulator so the old software can be run on
modern machines. The biggest challenge faced by a software collector is
to keep it running. Modern machines are fast enough to emulate most of
the "ancient" hardware, even to the point where they can run operating
systems on the emulator. However, one major reason software rots is that
the rest of the world moves on while it does not. Operating systems
change incompatibly, cooperating software changes, library versions
change, etc, etc. Making sure that some "ancient" software keeps running
is no small feat. Unless you get people to pay for this maintenance,
whatever values it once had will depreciate to zero very quickly.

Just having "source" around is no guarantee of anything at all. People
do not read source code. If they did, they would not use languages like
C++ and Perl. Source to an application is not enough -- the rest of the
system with which it interacts is also necessary to understand it. You
can ask running code questions and expect answers. Forensic medicine is
based on extensive experience with an unchanging basis, but forensic
computer science is based on having a running computer around to study
the dead program. Perhaps the software museum should be called The Code
Morgue.

///
--
The past is not more important than the future, despite what your culture
has taught you. Your future observations, conclusions, and beliefs are
more important to you than those in your past ever will be. The world is
changing so fast the balance between the past and the future has shifted.

Bruce Hoult

unread,
Dec 19, 2001, 4:34:02 AM12/19/01
to
In article <sfw4rmn...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > In article <sfwpu5b...@shell01.TheWorld.com>, Kent M Pitman
> > <pit...@world.std.com> wrote:
> >
> > > For those of you confused about why we pine for the good old days,
> > > it is not raw processor power. The LispM was quite competitive in
> > > its time, but even now, much slower that it is, it still has so
> > > many friendly UI features no one's ever bothered to copy that I pull
> > > it out and use it for some kinds of work because it's "fast enough"
> >
> > I agree with everything you say, but the question is "where is the
> > source code, and why hasn't someone released it?"
>
> Pardon me for yelling, but I don't do it that often and I am totally
> incredulous at this question.
>
> WHAT IN THE WORLD MAKES YOU THINK THEY ARE OBLIGED TO SUPPLY ANYONE
> SOURCE CODE OR THAT THESE _IDEAS_ ARE NOT WORTH SOMEONE ELSE
> REPRODUCING _WITHOUT_ SUCH SOURCE CODE?????? ABSENT A SENSE OF
> OBLIGATION, WHY WOULD SYMBOLICS, A COMPANY THAT MAKES MONEY ON ITS
> SOFTWARE AND BARELY MANAGES TO EKE BY, GIVE AWAY STUFF??????

I don't think they're obliged to do *anything*.

If they're dead then it's a pity that no one has been able to get the
rights to the stuff.

If they're alive (I didn't know that) then why aren't they putting it
out on hardware that people actually *have*? Pentiums or PowerPCs or
something? There are plenty of OS kernels out there for free, ranging
from Linux to *BSD to Mach, and they can be built-upon at any level: run
direct on the kernel, or run on top of GNU. Use X or don't.

> NOT TO MENTION that the real value were the ideas, not the code. I
> bet it would be almost as hard to port the code to another processor
> in many cases as it would be to write it anew.

Why is that? Isn't it mostly written in Lisp?


> Source code isn't what's needed here.
> It's a mindset that these old ideas are worth learning from and doing.

How can anyone learn from the ideas if they can't see the software? I'd
be amazed if there was a single Lisp Machine in my country.


> > I'm sick to death of incredibly nifty things disappearing along with
> > the companies that created them.
>
> I'm as sick as you are of that, since I *wrote* some of those changes,
> but as it stands now, it is the ABSOLUTE RIGHT of the person that paid
> for it.

Of course it is. No one ever said otherwise.


> I might join in championing a change to the law that said that
> if you failed to market IP created for pay by person X for person Y,
> that after some threshold time the IP reverted to be owned by person X.

I wouldn't support that.


Please note that my response to the problem is not trying to force
unwilling people to give things away, but to direct as much of my
energies as possible to things that may be freely distributed and
built-upon.

I can't afford to do this full-time, but I can afford to do it a little.
Gradually, the pool of good stuff will accumulate.

-- Bruce

Thomas F. Burdick

unread,
Dec 19, 2001, 4:49:07 AM12/19/01
to
Michael Parker <des...@pdq.net> writes:

> BTW, it looks like one of the Symbolics now expired patents contains
> the microcode (and compiler) along with a simulator for the Ivory, all
> written in I think Zetalisp (I haven't bought it from delphion, so can't
> say for sure). I don't remember the number, but search for symbolics
> and it's the one that's 1000+ pages long.
>
> Given such a thing and different 64-bit processor as the target, how
> reasonable would it be to use this as a basis for a new VLM? Any of
> the old symbolics hackers know how buggy the Ivory microcode was at
> the point the patent was submitted, or how complete the code was?

Ooh, yeah, Somebody Whosnotme should reimplement the emulator on the
Sparc.

> Any other legal ramifications to using this expired patent as a basis
> for a new VLM? I'm pretty sure that the current Ivory images and source
> code can't be legally used, so I guess it's all moot unless Symbolics
> goes under completely.

If an emulator for a non-dead architecture fell in their lap, they
might take it and start selling Genera for Suns. Of course, that
would still leave me very far short of the $$$ needed for Genera.

Bruce Hoult

unread,
Dec 19, 2001, 4:54:11 AM12/19/01
to
In article <32177410...@naggum.net>, Erik Naggum <er...@naggum.net>
wrote:

> * Bruce Hoult <br...@hoult.org>
> | I'm sick to death of incredibly nifty things disappearing along with
> | the
> | companies that created them. Or, often worse, with the company
> | continuing and just the project or product getting cancelled and being
> | sat on, presumably to prevent competitors from getting a leg up.
>
> What this actually means is that too few people want to pay enough for
> those features to even warrant keeping them listed as products, much
> less pay for maintaining them or pay someone who knows them well
> enough to offer any sort of customer questions,

Well, clearly. But my shelves full of documentation and CDs for OS/2
and BeOS and Apple Dylan and the Newton and OpenDoc and other things
tend to indicate that it's not *me* who is unwilling to pay.


> and no competitor thinks it is worth paying the original developer
> for the right to the source.

Many things are not in fact for sale for a realistic price.

I've been using a BBS system called "BIX" for about fifteen years. The
owners announced their intention to shut it down. The community of
users made enquiries about buying some or all of the computer, the
source code, the message database, and the domain name. The owners made
it clear that *they* thought it was worth something in at least six
figures for any *one* of those things (well, maybe not the computer),
and if they couldn't get that then they'd rather melt it down.

So we went back to the people who sold COSY to BIX, they agreed to GPL
the source code, and a group of people took it, ported it to Linux,
added in the features that BIX had but the original didn't, and there is
now a clone of BIX up and running with a number of happy users.

There were a number of people willing to chip in $1k - $5k each to buy
BIX. But the owners (as is their perfect right) preferred to get zero
than to get a realistic amount.


> As you may be aware, art museums are not just _given_ the artwork they
> display for free, but go out and actively participate in auctions and
> have annual budgets for acquisitions. Other museums display things
> they funded, or they are part of universities and their continuing
> obligation to present knowledge to the public. This is stuff that
> very few people would want to purchase for themselves. Vast numbers
> of people prefer to chip in using taxes and admission fees to be able
> to study these things.

This is a reasonable idea. In fact, if the upkeep was paid for I would
expect that most of the exhibits would end up being simply donated. But
I bet a lot of companies would refuse to sell historic things at any
reasonable price.

-- Bruce

Erik Naggum

unread,
Dec 19, 2001, 6:37:09 AM12/19/01
to
* Kent M Pitman <pit...@world.std.com>

| I suspect you're saying the contract should cause it to be public
| domained or GPL'd or something? By having it revert to me, I could still
| decide at that time to do such a thing... but if I was starving and
| needing money, I see no reason I shouldn't make money on my own work.
| Having things go straight to the public to compete against me is in some
| ways worse for me even than having someone sit on it. At least if it's
| code I wrote before, I can write it again. In the case that stuff I did
| before gets given away, I might not be qualified to do the next step...
| The community may benefit at my expense. That's pathologically bad.

The whole idea that anything can be so "shared" as to have no value in
itself is not a problem if the rest of the world ensures that nobody _is_
starving or needing money. For young people who have parents who pay for
them or student grants or loans and basically have yet to figure out that
it costs a hell of a lot of money to live in a highly advanced society,
this is not such a bad idea. Grow up, graduate, marry, start a family,
buy a house, have an accident, get seriously ill for a while, or a number
of other very expensive things people actually do all the time, and the
value of your work starts to get very real and concrete to you, at which
point giving away things to be "nice" to some "community" which turns out
not to be "nice" _enough_ in return that you will actually stay alive, is
no longer an option.

All of this "code sharing" is an economic surplus phenomenon. It works
only when none of the people involved in it are in any form of need. As
soon as the need arises, a lot of people discover that it has cost them
real money to work for the community and they reap very little benefit
from it, because they are sharing value-less services and getting value
out of something that people take for granted is hard to impossible.
This is unfortunately even more true when employees are considered "free"
while consultants are not, so buying the supposed "services" from people
who know the source code is not an _exercised_ option.

Just because it is nice to get things for free does not mean it is a good
idea to organize anything based on removing the value of those things,
but until people _need_ that value, getting stuff for free is _so_ nice
that looking to the future is something most people simply will not do,
and those who refuse to think about it will also refuse to listen to those who
have. Thus they will continue to deplete the value of software to the
point where nobody _wants_ to pay for any software, be it of a particular
kind or in general. Software development tools are already considered to
be give-aways by some people, threatening commercial vendors and those
who would like to make money providing software tools to developers.

As for giving away things for free, if you cannot make it yourself, just
buy it from someone else and give it away. If someone has something you
want to be free, the problem is no harder than to cough up the money to
make them want to do it, too. If this is not palatable to those who want
things others have made for free, they demonstrate that somebody else
somehow should accept the cost of this operation _without_ compensation.
Since I have not heard about any organization working to buy software
from those who "hoard" it, quite unlike those organization that buy up
tropical forest land and promise never to sell it or develop it, I tend
to believe the whole "free software" thing is really a way of tricking
immature people to give away their work. (I was one of those people.)

Nicolas Neuss

unread,
Dec 19, 2001, 8:11:19 AM12/19/01
to
Erik Naggum <er...@naggum.net> writes:

> Since I have not heard about any organization working to buy software
> from those who "hoard" it, quite unlike those organization that buy up
> tropical forest land and promise never to sell it or develop it, I tend
> to believe the whole "free software" thing is really a way of tricking
> immature people to give away their work. (I was one of those people.)

Since a large part of free software is developed at universities by
people (professors, students) who are paid at least partially from
public money, I see it more as "State against (unbounded) capitalism".

Nicolas.

bruce

unread,
Dec 19, 2001, 10:58:35 AM12/19/01
to
In article <sfw4rmn...@shell01.TheWorld.com>, Kent M Pitman wrote:

>Well, I DEFINITELY think people should negotiate better contracts with
>employers.

Does anyone doing programming or hardware design get contracts like that?
My impression is that most people are now working under contracts that
yield up any IP they create while employed there, because business knows
that people need jobs, and they have them. Perhaps I am mistaken?

Bruce, a newbie, not a programmer for hire himself

Kent M Pitman

Dec 19, 2001, 11:35:06 AM
Bruce Hoult <br...@hoult.org> writes:

> If they're dead then it's a pity that no one has been able to get the
> rights to the stuff.
>
> If they're alive (I didn't know that) then why aren't they putting it
> out on hardware that people actually *have*?

You surely know the answer to this, unless you yourself are lucky enough
to have never been job hunting:

The people with the money sometimes have a different theory of what will
and won't make money than we technical folks do.

> Pentiums or PowerPCs or something?

Business people, often having no actual knowledge of the technology, rely
on simpler rules of thumb than we do. They say pat little things like
"didn't this already lose us money once" and then they go on to other things.

> There are plenty of OS kernels out there for free, ranging from
> Linux to *BSD to Mach, and they can be built-upon at any level: run
> direct on the kernel, or run on top of GNU. Use X or don't.

Little of what I cited requires OS-level support.

> > NOT TO MENTION that the real value were the ideas, not the code. I
> > bet it would be almost as hard to port the code to another processor
> > in many cases as it would be to write it anew.
>
> Why is that? Isn't it mostly written in Lisp?

A different dialect of Lisp than you are familiar with. Very
powerful. Probably slow on conventional processors. Mostly no
declarations, for one example, since hardware did most of what
declarations would do. Also, a lot of code has built-in assumptions
about what the compiler does and doesn't do. This compiler had a
great many useful optimizations, but optimizations for the instruction
set of a Lisp machine. I'm not sure how general. And it was
integrated with tools many wouldn't rewrite. Some of us want Zmacs,
but many people want GNU Emacs. So the various subsystems would need
to be decoupled and remodularized. It is simplistic to say "oh, we
have the sources, we're all set". These things aren't all rocket
science to program anew, either. But I'm not going to do it myself
because (a) been there done that and (b) I have a lisp machine. I do
think it's sad others don't, but that's the way it goes. Their
choice, I guess. This threadlet started because you made a remark
that suggested there was no reason for me to wish others had LispMs,
which is apparently the only way anyone will ever see these features,
since no one seems to take time to acquire ideas and move them
forward. At least when RMS made lisp-based GNU Emacs, he went back
and translated a great number of the Teco-based Emacs libraries I and
others had written. When other lisp vendors copied the LispM-style
interface, the copy was quite superficial and seemed not to get down
into the guts of what the system was--just focused on getting the same
initial look mostly. MCL and Harlequin (now Xanalys) did the best job
of getting toward the heart of what it was about, having highly
integrated environments, but both stopped short of including certain
features that I, at least, think critical and have enumerated.

> > Source code isn't what's needed here.
> > It's a mindset that these old ideas are worth learning from and doing.
>
> How can anyone learn from the ideas if they can't see the software?

I just gotta laugh at this one.

The ideas? You're going to get those from source code?

How about... (drum roll please)...

Documentation?

It IS available. People don't throw out old manuals, they offer them
up for sale here routinely.

> I'd be amazed if there was a single Lisp Machine in my country.

(From your network domain, I'm not sure what country you're in.
Organia, perhaps? ;-)

You'd be amazed perhaps too how copiously documented the Lisp Machine was.

> > I might join in championing a change to the law that said that
> > if you failed to market IP created for pay by person X for person Y,
> > that after some threshold time the IP reverted to be owned by person X.
>
> I wouldn't support that.

I'm curious about why since I only guessed before. Is it because it
would change the ownership? Or because the target of the ownership
change would not be "the public" or "the free software industry"?

> Please note that my response to the problem is not trying to force
> unwilling people to give things away, but to direct as much of my
> energies as possible to things that may be freely distributed and
> built-upon.

Absent patents, ideas may always be freely built upon. Even when
copyrighted code is in play.



> I can't afford to do this full-time, but I can afford to do it a little.
> Gradually, the pool of good stuff will accumulate.

I appreciate this concern. From my point of view, the problem is that
this attitude, if pervasive (and I think it is), leads to there being
a single, canonical, monolithic "open source" and not a set of
competing ones. I want competition, and I think markets breed that.
I think the market has failed not because markets don't work but
because companies must not grow above a certain maximum size. The
problem with Microsoft is that there is no upper bound on company
size, and markets, to work, have to assume you can't corner them.
Once you hit the walls of the market and start expanding back inward
instead of outward, the market is not really operating. You get the
equivalent effect of having unbounded population growth in a bounded
world... people start to notice resources dwindling and different ethics
are required to stay alive than were needed when growth had not yet hit
the wall.

Kent M Pitman

Dec 19, 2001, 11:40:08 AM
Nicolas Neuss <ne...@ortler.iwr.uni-heidelberg.de> writes:

And those of us whose hard-earned taxes were allocated to universities
for what we thought was the purpose of educating, not for the purpose
of wealth redistribution or fostering a competition with unbounded
capitalism, think otherwise.

If a state like the US, founded on capitalism, wants to have a
government sponsored program to fight its OWN economic model, it
should do so explicitly. Not hide it.

Kent M Pitman

Dec 19, 2001, 11:56:02 AM
lost...@Bast.debian.org (bruce) writes:

Well, if I get to the point of having enough money to hire people,
which will happen only if the free software movement doesn't undercut
my ability to sell things so badly that I can no longer make even a
little money (the wide availability of free packages already makes it
hard to charge real money for software), I have it on my agenda to try
some experiments with making REASONABLE contracts with people as a way
of attracting talent. If one is going to play social games, they don't
all have to be of the form "forgo all money, it's the only way to be
good to people".

Secondly, though I haven't yet enough money to hire anyone, I have
worked to get myself to a position where I can just turn down jobs
that do not offer me contract concessions that I think are reasonable,
and I try to make as regular as possible a practice of doing this. It
might be as simple as "when you have two jobs, turn down the first on
principle and take the second even if it comes on the same terms" just
so the employer who wants you but loses you is sent a clear message
that they could have had you if they'd offered better terms. Sure,
sometimes they'll be the "second company" and will get you anyway, but
statistically they will lose at least some people and will have to
think about whether there is a better way to win. Another thing you
could do is when comparing two offers, tell someone you will sacrifice
dollars or some other important term in exchange for the concessions
you want. I routinely do this. I am annoyingly far from Boston these
days and I am always trying to negotiate telecommuting with people who
think it's fine to demand that I drive in 1.5 hours each way each day
(in good weather/traffic, worse other times). I banged my head against
the wall trying to figure out how to get potential employers to see my
point. I finally decided that it was worth real money to me to not
drive in because it opened a substantial additional number of hours in
my day, plus it kept me from having to brave the ice and snow in winter
(worse still if you're tired after a long day and have a 2 hour commute
ahead of you over ice and snow and need your wits about you). So I
computed how much in dollars it was worth to me to work from home
and offered that discount. I found it a LOT easier to sell because it was
obvious I was trading something I valued to get something else I valued,
not just asking for a handout.

No one negotiates contracts giving up terms out of the goodness of
their heart. I aspire to offer reasonable contracts because I think
it will be a competitive advantage in hiring, and because I hope it
will indirectly change the world in a way that benefits me if I ever
have to work for someone else again. One can't just sit around waiting
for the world to change, one has to come up with a plan and figure out
why that plan will sell. Even if you're not in sales/marketing, all
the world is sales/marketing. You just can't get away from it.

In your case, as a newbie, you might have to deal with things as they
are. Starting out, one sometimes has to endure things they wish they
didn't. That's why they call it "work", someone once reminded me.
But if you work hard and keep your eyes open, you'll gain experience.
And if the custom doesn't become to expect experience to be on tap as
a free resource, you'll be able to translate that to dollars. Dollars
are not evil, they are just the interchange quantity for allowing
people with unrelated goals/values to cooperate. You marshal yours
to dollars, give them to me, and I marshal them back into something
meaningful for me. Once you've built a stock of stuff you can trade
with, you're on your way to negotiating what you want. The only really
reliable negotiation tool is the power to say "no" and that comes when
you have economic independence. And economic independence doesn't come
free...

Craig Brozefsky

Dec 19, 2001, 12:38:34 PM
Erik Naggum <er...@naggum.net> writes:

> I think there is time for software museums. There are already several
> web sites that effectively amount to being museums, such as the DEC-10/20
> site, which also houses an emulator so the old software can be run on
> modern machines. The biggest challenge faced by a software collector is
> to keep it running. Modern machines are fast enough to emulate most of
> the "ancient" hardware, even to the point where they can run operating
> systems on the emulator. However, one major reason software rots is that
> the rest of the world moves on while it does not. Operating systems
> change incompatibly, cooperating software changes, library versions
> change, etc, etc. Making sure that some "ancient" software keeps running
> is no small feat. Unless you get people to pay for this maintenance,
> whatever values it once had will depreciate to zero very quickly.

Hmm, I know that video game systems, particularly the older ones, are
much simpler than a LispM, but it seems that there is no shortage of
working emulators, many Free or free, that allow one to run the old
video games. It seems largely driven by enthusiasts donating their
labor. There are free/Free emulators for everything but the latest
generation of home consoles being maintained by donated labor.

That said, I would agree that while the source code for LispMs may be
a useful start, the real limit is the labor available to port and
maintain. It does seem that the pool of donated labor in the CL
community is growing in the last few years, so it is possible that in
the future we will be able to support such projects.

--
Craig Brozefsky <cr...@red-bean.com>
http://www.red-bean.com/~craig
Ask me about Common Lisp Enterprise Eggplants at Red Bean!

Joe Schaefer

Dec 19, 2001, 1:00:48 PM
Bruce Hoult <br...@hoult.org> writes:

> I should probably do a Dylan version. It'll be near identical to the
> CL, but a bit more wordy (lots of lines with just 'end' instead of
> '))))' in the Lisp). Dylan is designed to compile better than Lisp can,
> but a) Lisp is pretty good too, and b) we're probably all at the mercy
> of the quality of the runtime library hash table implementation.
^^^^^^^^^^^^^^^^^^^^^^^^^

[ FWIW, I've attached a non-obfuscated version of your original perl code
( it implements rational arithmetic in Perl and is a little (20~30%)
slower than the original ). ]

I've been following the discussion here and was both surprised and
impressed with the large performance gap wrt the perl for this type
of problem. However, I can't help wondering why nobody questioned
the rationale behind using a hash for this in the first place. IMO
the "natural" data structure for this problem would be some sort of
balanced tree, *not* a hash. But I guess nobody cared much since the
compiled CL version was so darned fast ;-)

Thanks to all that posted and emailed. Looks like I've much to learn
about Lisp.


#!/usr/bin/perl -w
sub gcd { $_[0] ? gcd($_[1] % $_[0], $_[0]) : $_[1] }

sub new {
    my ($a,$b) = @_;
    return if $b == 0;
    return bless [0,1] if $a == 0;

    if ( $b < 0 ) { $a = - $a; $b = - $b }
    bless [ map {$a/$_, $b/$_} gcd( abs($a), $b ) || 1 ];
}

use overload
    '+' => sub {local (*a,*b) = @_; new $a[0]*$b[1]+$a[1]*$b[0], $a[1]*$b[1]},
    '-' => sub {local (*a,*b) = @_; new $a[0]*$b[1]-$a[1]*$b[0], $a[1]*$b[1]},
    '*' => sub {local (*a,*b) = @_; new $a[0]*$b[0], $a[1]*$b[1]},
    '/' => sub {local (*a,*b) = @_; new $a[0]*$b[1], $a[1]*$b[0]},
    '""' => sub {local $" = "/"; return "@{$_[0]}" }, fallback => 1;

my $N = shift || 9;
my @nums = ( undef, { "$N/1" => bless [$N,1] } );

for my $n (1 .. $N) {
    for my $p (1 .. $n/2) {
        for my $a ( values %{$nums[$p]} ) {
            for my $b ( values %{$nums[$n-$p]} ) {
                $nums[$n]->{$_} = $_ for grep defined($_) =>
                    $a + $b, $b * $a,
                    $a - $b, $b - $a,
                    $a / $b, $b / $a;
            }
        }
    }
    printf "Smallest integer that can't be made from $n $N\'s is %3d.\n" =>
        do { my $i = 1; ++$i while exists $nums[$n]{"$i/1"}; $i };
}

__END__
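[For readers following along, here is a rough Python sketch of the same
hash-driven search. The function name and parameters are illustrative,
not from the thread; fractions.Fraction stands in for the hand-rolled
Perl rational class, and a set of Fractions plays the role of the hash.]

```python
# Hypothetical Python sketch of the same search: which integers can be
# built from n copies of a digit using +, -, * and /?
from fractions import Fraction
from itertools import count

def smallest_unreachable(digit=9, max_count=4):
    # reachable[n] holds every rational expressible with exactly n
    # copies of `digit`
    reachable = {1: {Fraction(digit)}}
    answers = {}
    for n in range(1, max_count + 1):
        vals = reachable.setdefault(n, set())
        # split n digits into p and n-p, combine every pair of values
        for p in range(1, n // 2 + 1):
            for a in reachable[p]:
                for b in reachable[n - p]:
                    vals.update((a + b, a - b, b - a, a * b))
                    if b:
                        vals.add(a / b)
                    if a:
                        vals.add(b / a)
        # smallest positive integer not expressible with n digits
        answers[n] = next(i for i in count(1) if i not in vals)
    return answers
```

[With digit 9 this gives answers 1, 2 and 1 for one, two and three
nines; like the Perl original, it slows down sharply as the sets grow.]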

--
Joe Schaefer

Craig Brozefsky

Dec 19, 2001, 1:35:25 PM
Erik Naggum <er...@naggum.net> writes:

> As for giving away things for free, if you cannot make it yourself, just
> buy it from someone else and give it away. If someone has something you
> want to be free, the problem is no harder than to cough up the money to
> make them want to do it, too. If this is not palatable to those who want
> things others have made for free, they demonstrate that somebody else
> somehow should accept the cost of this operation _without_ compensation.
> Since I have not heard about any organization working to buy software
> from those who "hoard" it, quite unlike those organization that buy up
> tropical forest land and promise never to sell it or develop it, I tend
> to believe the whole "free software" thing is really a way of tricking
> immature people to give away their work. (I was one of those people.)

I see it as an emphasis on the use value of software, as opposed to
its exchange value, an emphasis that is limited by its inability to
expand its critique beyond software. I don't think this view is in
opposition to your assessment; I'll explain.

Within our current socio-economic system, exchange value is dominant,
so it's only in a situation where participants are able to support
themselves thru exchange value (exchanging their labor in most cases)
in one arena that they can then afford to attempt to maximize the use
value of their software in another arena, often at the expense of its
exchange value. Maximizing use value is in almost all cases
irreconcilable with maximizing exchange value, particularly at a level
which would generate a profit sufficient enough to attract the capital
needed to fund the production. By use value I mean both the
usefulness of the software and the extent to which it can be utilized,
the size of its user base.

I think this view captures a lot of the problems "Free Software
business" has, as well as the predicament that individual programmers
face.

The way I personally navigate this contradiction is to release the
things which I have no hope of extracting profitable exchange value
from as free, so that their use value can be maximized, while relying
upon the exchange value of things for which I have a support structure (a
business plan, salespeople and investors) to fund my labor. I think
that this way provides a global maximum to the use value of my work,
while still providing me with the necessary exchange value to
survive.

This strategy works for individuals, but I don't think it works on a
global scale. As long as exchange value dominates use value we will
need large portions of the global population to direct their labor
solely towards the production of exchange value, at the expense of
their labor's use value. Whether it's writing proprietary software
instead of free software, or growing coffee instead of food.

The problem I see with Free Software as it stands today is that it is
incapable of expanding its critique beyond this one realm, and so it
cannot ever hope to escape the historic dominance of exchange value.
Without this expansion of its critique, it does indeed resemble
trickery, but its root is naivety, not malice.

Lieven Marchand

Dec 19, 2001, 11:49:34 AM
Bruce Hoult <br...@hoult.org> writes:

> I agree with everything you say, but the question is "where is the
> source code, and why hasn't someone released it?"
>
> I'm sick to death of incredibly nifty things disappearing along with the
> companies that created them. Or, often worse, with the company
> continuing and just the project or product getting cancelled and being
> sat on, presumably to prevent competitors from getting a leg up.

That company still exists and if everyone who has whined here about
them and who could afford one of their Alpha based LM's bought one,
they would be doing a lot better.

This isn't directed at you personally, but some of the whining against
Symbolics gets on my nerves. They're a company and as such probably
open for negotiation. If you want a version for IA-64, try to get
venture capital together, write down a business plan and make them an
offer for the right to port the source and market the IA-64 version.

--
Lieven Marchand <m...@wyrd.be>
She says, "Honey, you're a Bastard of great proportion."
He says, "Darling, I plead guilty to that sin."
Cowboy Junkies -- A few simple words

Nils Goesche

Dec 19, 2001, 4:34:04 PM
Nicolas Neuss <ne...@ortler.iwr.uni-heidelberg.de> writes:

If the choice is the German `state' university system vs ``unbounded
capitalism'' which gave us Berkeley, MIT, Harvard, Bell-Labs and CMU,
to name only a few, I think we'd be better off with the latter by a
large degree, even when it comes to ``Free Software''...

Regards,
--
Nils Goesche
"Don't ask for whom the <CTRL-G> tolls."

PGP key ID 0x42B32FC9

Bruce Hoult

Dec 19, 2001, 7:43:08 PM
In article <m38zbzx...@localhost.localdomain>, Lieven Marchand
<m...@wyrd.be> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > I agree with everything you say, but the question is "where is the
> > source code, and why hasn't someone released it?"
> >
> > I'm sick to death of incredibly nifty things disappearing along with
> > the
> > companies that created them. Or, often worse, with the company
> > continuing and just the project or product getting cancelled and being
> > sat on, presumably to prevent competitors from getting a leg up.
>
> That company still exists and if everyone who has whined here about
> them and who could afford one of their Alpha based LM's bought one,
> they would be doing a lot better.

There are many things that I could afford if they were useful to me, but
what I *can't* afford to do is buy dozens of things on the off-chance
that something will be useful.

I just looked at their web page and could see no mention of prices. The
obvious conclusion is the old joke that if you have to ask the price you
can't afford it.

The recent Xanalys announcement, however, interests me greatly. I don't
know what's in the personal edition (or what is left out), but since it
is actually available for machines that I already own (Mac, Linux,
Windows, in that order of preference) and there is a version cheap
enough to be "throw away" money (free in this case, but that's not
critical) there is close to a 100% chance of me downloading it and
giving it a try to see what the fuss is about.

-- Bruce

Bruce Hoult

Dec 19, 2001, 8:36:05 PM
In article <sfw3d27...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> > > Source code isn't what's needed here.
> > > It's a mindset that these old ideas are worth learning from and
> > > doing.
> >
> > How can anyone learn from the ideas if they can't see the software?
>
> I just gotta laugh at this one.
>
> The ideas? You're going to get those from source code?

Once again you misinterpret.

There is no way in the world I will ever see Lisp Machine software
running on actual hardware. If I can't see it running then I can't know
what it does, and can't learn from the ideas.


> > I'd be amazed if there was a single Lisp Machine in my country.
>
> (From your network domain, I'm not sure what country you're in.
> Organia, perhaps? ;-)

New Zealand. I prefer a generic email address, especially since I moved
to the US earlier this year (and then back, rather earlier than I
expected to).


> > > I might join in championing a change to the law that said that
> > > if you failed to market IP created for pay by person X for person Y,
> > > that after some threshold time the IP reverted to be owned by person
> > > X.
> >
> > I wouldn't support that.
>
> I'm curious about why since I only guessed before. Is it because it
> would change the ownership? Or because the target of the ownership
> change would not be "the public" or "the free software industry"?

Because it's a matter for individuals to decide and implement via
contracts, not for the government to decide for everyone. It is already
the case that, at least as an independent consultant, I automatically
own the IP to anything I create unless that IP is explicitly assigned to
the client. The position is different as a salaried employee, but I
haven't been one of those in nearly a decade.


> > I can't afford to do this full-time, but I can afford to do it a
> > little. Gradually, the pool of good stuff will accumulate.
>
> I appreciate this concern. From my point of view, the problem is that
> this attitude, if pervasive (and I think it is), leads to there being
> a single, canonical, monolithic "open source" and not a set of
> competing ones. I want competition, and I think markets breed that.

There seems to be a *lot* of competition in the "open source"
programming languages market. Perl and Python are doing extremely well
in capturing mindshare and usage, CMUCL and Gwydion Dylan a lot less
well.


I don't think anyone here has articulated the reason why I contribute (a
little) to "open source".

It's because I think that in the long term it will maximize my wealth.
I use programming languages as a tool to create custom solutions for
clients. Often I have to support and enhance these solutions for a long
period after writing them. Far too many of my early efforts are now
unsupportable because the language or compiler I used to implement them
is now dead, or doesn't work on current machines and operating systems.

I'm not afraid to pay money for good tools, as an examination of my
payments to Metrowerks over the past seven years will show, but the
problem is that I can pay money for them, and become dependent on them,
and then the company can disappear *anyway*, at which point I can become
stuck on the same hardware and same operating system version forevermore.

When I have the source code that can't happen to me.

-- Bruce

Kent M Pitman

Dec 19, 2001, 9:31:09 PM
Bruce Hoult <br...@hoult.org> writes:

> There seems to be a *lot* of competition in the "open source"
> programming languages market. Perl and Python are doing extremely well
> in capturing mindshare and usage, CMUCL and Gwydion Dylan a lot less
> well.

There seems to be a lot of competition for things to do, but not for
implementations of the same thing.

If I were writing a virus, I bet I would be able to assume that
everyone in the free software industry was using a canonical
implementation of Perl. By contrast, Lisp implementations are very
different, and writing something to exploit a Lisp implementation would
be a lot harder.

> I don't think anyone here has articulated the reason why I contribute (a
> little) to "open source".
>
> It's because I think that in the long term it will maximize my wealth.

This is because you don't intend to make a business from it. This would
not be true if you actually wanted to make your living doing this.

I'll spare you a reiteration of my analogy between this and dumping.
Notwithstanding the probable fact that your motivation may be
different, the economic effect of free software is pretty much the
same. (The motivation of the seller is generally irrelevant to how
money moves in a market.)

Bruce Hoult

Dec 19, 2001, 10:29:30 PM
In article <sfwr8pq...@shell01.TheWorld.com>, Kent M Pitman
<pit...@world.std.com> wrote:

> Bruce Hoult <br...@hoult.org> writes:
>
> > There seems to be a *lot* of competition in the "open source"
> > programming languages market. Perl and Python are doing extremely well
> > in capturing mindshare and usage, CMUCL and Gwydion Dylan a lot less
> > well.
>
> There seems to be a lot of competition for things to do, but not for
> implementations of the same thing.

To most people in the world, Lisp and Perl *are* the same thing. I'm
sure that, to my mechanic, a Subaru and a Honda are quite different
things, but to me they're interchangable. And while I consider the
ASK-21 (http://www.emfa.pt/faping/ask-21.htm) and Twin Astir
(http://www.waikerieglidingclub.com.au/TwinAstir.htm) to be quite
different machines, chances are that you wouldn't know the difference if
I took you for a ride in each one.


> If I were writing a virus, I bet I would be able to assume that
> everyone in the free software industry was using a canonical
> implementation of Perl.

Probably, just as virus writers usually assume that everyone is using
Windows. That doesn't mean that there is no competition for either, but
only that they are beating the competition.


> > I don't think anyone here has articulated the reason why I contribute
> > (a little) to "open source".
> >
> > It's because I think that in the long term it will maximize my wealth.
>
> This is because you don't intend to make a business from it. This would
> not be true if you actually wanted to make your living doing this.

I would *love* to make my living from language implementation. I don't
see a way to do this, though. Job offers gratefully accepted.


> I'll spare you a reiteration of my analogy between this and dumping.

Does my car mechanic feel the same way when I change my own oil? I
don't know, but he still seems happy enough when I come to him with more
difficult things.


> Notwithstanding the probable fact that your motivation may be
> different, the economic effect of free software is pretty much the
> same. (The motivation of the seller is generally irrelevant to how
> money moves in a market.)

The economic effect is that people who can offer something of value for
a fair price will still make sales. There are plenty of GUI interfaces
for Linux, but Apple can still do a much better job of writing a GUI on
top of an open source operating system than the Linux folks have. I'm
willing to pay the $120 Apple ask for their work.

The people who won't survive are the ones who have *no* idea of my
needs, and try to impose costs that are not in line with the benefit I
receive. I'm not going to buy a programming language product in the
future unless I can evaluate it (*all* of it) at low cost, I can use it
on every platform I'm interested in, critical bugs get fixed quickly and
reasonable enhancement suggestions implemented. And I need some
assurance that if the vendor disappears after I become dependent on it
then I'll be able to maintain it myself (or pay someone else to do it).

-- Bruce

Nicolas Neuss

Dec 20, 2001, 5:51:34 AM
Nils Goesche <car...@cartan.de> writes:

> >
> > Since a large part of free software is developed at universities by
> > people (professors, students) who are paid at least partially from
> > public money, I see it more as "State against (unbounded) capitalism".
>
> If the choice is the German `state' university system vs ``unbounded
> capitalism'' which gave us Berkley, MIT, Harvard, Bell-Labs and CMU,
> to name only a few, I think we'd be better off with the latter by a
> large degree, even when it comes to ``Free Software''...
>
> Regards,

I agree, but as far as I know, MIT and the others are also not completely
independent of public money. Our German case is extreme, of course,
and I would like to see it changed.

Nicolas.

Brian P Templeton

Dec 20, 2001, 7:44:26 AM
Ed L Cashin <eca...@terry.uga.edu> writes:

> Wroot <wr...@my-deja.com> writes:
>
>> I knew you didn't have the balls to accept. A real man can admit it when he
>> is wrong. You aren't one. You called me an idiot, and offered to solve this
>> little problem you spent several hours writing a lisp program for. I
>> offered you a way to compare answers on the spot.
>>
>> My advice to you: see shrink and, also, go back to school, kiddo. Maybe
>> they'll teach you some respect for people who are smarter than you.
>
> This newsgroup is weird. Lisp seems to be a drug that makes smart
> people mean and dumb people mean.
>
The real world(TM) is weird. C++ seems to be a drug that makes some
smart people have to waste time dealing with C++ bigots who cannot
accept that anything could be better than their precious, disgusting,
buzzword-compliant language and some dumb people (temporarily) rich.

> --
> --Ed Cashin integrit file-verification system:
> eca...@terry.uga.edu http://integrit.sourceforge.net/
>
> Note: If you want me to send you email, don't munge your address.

--
BPT <b...@tunes.org> /"\ ASCII Ribbon Campaign
backronym for Linux: \ / No HTML or RTF in mail
Linux Is Not Unix X No MS-Word in mail
Meme plague ;) ---------> / \ Respect Open Standards

Brian P Templeton

Dec 20, 2001, 7:44:28 AM
Bruce Hoult <br...@hoult.org> writes:

[...]


> And it would take real work to get as fast as your SBCL (whatever that
> is) results.

SBCL is a fork of CMUCL. Its goals are maintainability, which
(IIANM) sometimes entails making the implementation less optimized
where that makes the source significantly more readable; portability;
and minimizing bootstrapping problems. Cf. <URL:http://sbcl.sf.net/>.

[...]

Thomas F. Burdick

Dec 20, 2001, 1:31:27 PM
Nils Goesche <car...@cartan.de> writes:

The University of California, Berkeley, is about as far from
"unbounded capitalism" as you can get in this country. The University
of California is as fantastic as it is, because once upon a time
California was a lot more like a Social Democracy, and we're still
benefiting from the echoes of that. Not that Social Democracy is
anything but capitalism, but it's a different sort of capitalism than
has been most in vogue in the US since Reagan and Carter.

Charles Shannon Hendrix

Dec 30, 2001, 11:14:04 PM
In article <sfw3d27...@shell01.TheWorld.com>,

Kent M Pitman <pit...@world.std.com> wrote:

> > > Source code isn't what's needed here.
> > > It's a mindset that these old ideas are worth learning from and doing.
> >
> > How can anyone learn from the ideas if they can't see the software?
>
> I just gotta laugh at this one.
>
> The ideas? You're going to get those from source code?

That's a pretty common method, yes.

> How about... (drum roll please)...
>
> Documentation?
>
> It IS available. People don't throw out old manuals, they offer them
> up for sale here routinely.

Depending on what you are trying to learn, documentation, no matter
how copious, is a poor substitute for hands-on experience. If something
is not available, the documentation is mostly historical, and the demand
just won't be there, largely because no one ever sees the idea in action.

Witness the really cruddy technology we have to use all the time, like
Windows, or C++, not because no one wants better ideas, but because that
is the only thing most people ever see, are allowed to see, or can afford
to see.

The only hopeful change I have seen is the continuing surge of UNIX,
which is about the only technology taking attention away from that morass
of mediocrity in Redmond.

> You'd be amazed perhaps too how copiously documented the Lisp Machine was.

Equally amazing is how hard it is to learn much without the machine
it documents.

> I appreciate this concern. From my point of view, the problem is that
> this attitude, if pervasive (and I think it is), leads to there being
> a single, canonical, monolithic "open source" and not a set of
> competing ones. I want competition, and I think markets breed that.

There is a lot of competition in the free software world. In fact, I
sometimes think it's too much. For example, it causes a great deal
of duplicated effort and mindshare wars, and there are precious few
programming resources to waste on that.

In some cases it seems OK. For example, I generally don't have a
problem with the competition in things like editors (vim, nvi, vile,
etc, for example). However, it's wasteful and unproductive in other
areas like KDE vs Gnome in the desktop UNIX world, or similar areas
where the competition has caused more waste and confusion than benefits.

In fact, I can't currently think of a monolithic open source for anything
as you suggest, at least not offhand.

> I think the market has failed not because markets don't work but
> because companies must not grow above a certain maximum size. The
> problem with MicroSoft is that there is no upper bound on company
> size, and markets, to work, have to assume you can't corner them.
> Once you hit the walls of the market and start expanding back inward
> instead of outward, the market is not really operating. You get the
> equivalent effect of having unbounded population growth in a bounded
> world... people start to notice resources dwindling and different ethics
> are required to stay alive than were needed when growth had not yet hit
> the wall.

I think this is a good analysis. Of course, Microsoft was a horrible
little company too; its only change has been to get bigger. Kind of
like the idea that a dirty old man was probably once a dirty young man,
he just has fewer inhibitions now.

It isn't just companies either. Going into the micro, teams/projects
and divisions have a certain size that works, and a certain level of
management that you should stay below.

The problem is that the current market setup, for many reasons, rewards
companies almost purely for growth, not for doing actual, steady, and
needed work. The same kind of thing plagues sub-components of
a company, where a small team and its leader never get recognized
unless they grow. In fact, the rewards often come just before
they fail.

Witness companies laying off thousands because they "only" made a 10%
return.

--

UNIX/Perl/C/Pizza__________________...@widomaker.com

Charles Shannon Hendrix

unread,
Dec 30, 2001, 11:17:47 PM12/30/01
to
In article <sfwr8pq...@shell01.TheWorld.com>,

Kent M Pitman <pit...@world.std.com> wrote:
> Bruce Hoult <br...@hoult.org> writes:
>
> > There seems to be a *lot* of competition in the "open source"
> > programming languages market. Perl and Python are doing extremely well
> > in capturing mindshare and usage, CMUCL and Gwydion Dylan a lot less
> > well.
>
> There seems to be a lot of competition for things to do, but not for
> implementations of the same thing.

I think you should spend an hour browsing sourceforge.com and come
back to this. It's not true, though I believe it could happen if the
community is not careful.

> If I were writing a virus, I bet I would be able to assume that
> everyone in the free software industry was using a canonical
> implementation of Perl. By contrast, Lisp implementations are very
> different, and writing something to exploit Lisp implementation would
> be a lot harder.

I don't think we would benefit much from multiple implementations of Perl.
In fact, I don't like it when competition induces the producers to make
things incompatible, which is what usually happens.

> > I don't think anyone here has articulated the reason why I contribute (a
> > little) to "open source".
> >
> > It's because I think that in the long term it will maximize my wealth.
>
> This is because you don't intend to make a business from it. This would
> not be true if you actually wanted to make your living doing this.

No, he specifically said this was for making a living, and documented
why it would help him.

--

UNIX/Perl/C/Pizza__________________...@widomaker.com

Kent M Pitman

unread,
Dec 31, 2001, 3:19:12 PM12/31/01
to
sha...@news.widomaker.com (Charles Shannon Hendrix) writes:

> I don't think we would benefit much from multiple implementations of Perl.
> In fact, I don't like it when competition induces the producers to make
> things incompatible, which is what usually happens.

I can appreciate this point of view, I just disagree. There is a
difference between "incompatible design" and "incompatible
implementation". CL has a single design but multiple implementations,
and this serves to identify ambiguities so that people will know when
they are relying on things that were only accidentally true and that
the implementors might be waiting to change, not even realizing anyone
relied on them. This could happen to Perl even with only one
implementation...

One useful way I model this is to say that porting must extend both in
time and in space. Even with a "single implementation", there must be
things held constant and things that can change, effectively making
future and past implementations different implementations among which
people must port. By saying you will never break a program, you tie the
hands of the implementor of your "single implementation". If you say you
might break a program, then your claim about incompatibility is weakened
to the point of being at best useless to me.

But people reason about risk differently, so it's not that I'm saying it
has to have no utility to you. Life's just like that.

> > > I don't think anyone here has articulated the reason why I contribute (a
> > > little) to "open source".
> > >
> > > It's because I think that in the long term it will maximize my wealth.
> >
> > This is because you don't intend to make a business from it. This would
> > not be true if you actually wanted to make your living doing this.
>
> No, he specifically said this was for making a living, and documented
> why it would help him.

You're misunderstanding. I'm saying, "This is because you do not
intend to derive cash flow from the specific thing you have made it
impossible to derive cash flow from." He is not maximizing his wealth
by the thing *I* am deriving my cash flow from because if he were,
he'd be broke taking this approach. This is not a subjective point,
so keep reading this statement until you have a reading that makes my
statement true and then maybe you'll be inside my head about some of
the related points. If you allow yourself the luxury of thinking I'm
wrong, you are also allowing yourself the luxury of not hearing me. I
did not say he couldn't make money another way. I said that his
decision to make money another way makes him less concerned about what
it does to my ability to make money the way I want to.

Michael Parker

unread,
Jan 17, 2002, 7:35:10 PM1/17/02
to
Raymond Wiker <Raymon...@fast.no> wrote in message news:<864rmp6...@raw.grenland.fast.no>...
> Bruce Hoult <br...@hoult.org> writes:
>
> > In article <bruce-E1F709....@news.paradise.net.nz>, Bruce
> > Hoult <br...@hoult.org> wrote:
> >
> > > In article <bruce-9BF494....@news.paradise.net.nz>, Bruce
> > > Hoult <br...@hoult.org> wrote:
> > >
> > > > > IIRC, Your original Perl solution finished on your box at around 30s.
> > > > >
> > > > > How does the CL version compare?
> > > >
> > > > Right. 31.5 seconds. As it turns out, in CMUCL on the same machine
> > > > the Lisp runs in +/- one second of the Perl.
> > >
> > > A tidy-up (thanks to Paul Foley) reduces the CL to 21.6 seconds, a 47%
> > > speed increase.
> >
> > Ooops.
> >
> > I just found that CMUCL doesn't compile things by default. So the
> > previous time was for interpreted code. After..
> >
> > (defun nineNines ()
> > <previous code>)
> >
> > (compile 'nineNines)
> >
> > (time (nineNines))
> >
> > ... we now get ...
> >
> > Evaluation took:
> > 2.18 seconds of real time
> > 2.13 seconds of user run time
> > 0.05 seconds of system run time
> > [Run times include 0.23 seconds GC run time]
> > 0 page faults and
> > 10954632 bytes consed.
> >
> >
> > So that's ten times faster for compiled code than for interpreted.
> > Without declarations.
>
> On my work machine (Pentium III at 666 MHz), using SBCL, I got
> around 6.5 seconds. Adding a (declare (optimize (speed 3)
> (compilation-speed 0) (safety 0))) and "fixnum" declarations for p, a
> and b got that down to about 0.7 seconds.

Errm, I thought that a and b needed to be ratios? At any rate, that's
pretty quick.

At any rate, the ITAsoft thread on slashdot got me to thinking about
this again, so since the C troll never delivered his version I whipped
out the following C++ program to satisfy my curiosity.

This was tested under MSVC++ 6.x, using their STL, so the set type is
implemented using a balanced binary tree.

On my box (600 MHz PIII) with full optimizations on, it takes .73
secs.

(I apologize for any bizarre text wrapping)

#include <stdio.h>
#include <time.h>
#include <set>

inline long gcd(long x, long y)
{
while (y != 0) {
long tmpx(x);
x = y;
y = tmpx % y;
}
return x;
}

class ratio
{
friend bool ratio_set_less(const ratio& r1, const ratio& r2);
long _a;
long _b;
public:
ratio() : _a(0), _b(1) {};
ratio(long a) : _a(a), _b(1) { }
ratio(long a, long b) : _a(a), _b(b) { }
const ratio operator+(const ratio& r2) const
{
long newb(_b*r2._b);
long newa((_a*r2._b)+(r2._a*_b));
long gcd_ab(gcd(newb, newa));
if (gcd_ab == 0)
return ratio(0);
else
return ratio((long)(newa/gcd_ab), (long)(newb/gcd_ab));
}
const ratio operator-(const ratio& r2) const
{
long newb(_b*r2._b);
long newa((_a*r2._b)-(r2._a*_b));
long gcd_ab(gcd(newa, newb));
return ratio((long)(newa/gcd_ab), (long)(newb/gcd_ab));
}
const ratio operator*(const ratio& r2) const
{
long newb(_b*r2._b);
long newa(_a*r2._a);
long gcd_ab(gcd(newa, newb));
return ratio((long)(newa/gcd_ab), (long)(newb/gcd_ab));
}
const ratio operator/(const ratio& r2) const
{
long newb(_b*r2._a);
long newa(_a*r2._b);
long gcd_ab(gcd(newa, newb));
return ratio((long)(newa/gcd_ab), (long)(newb/gcd_ab));
}
ratio& operator=(const ratio& r2)
{
_a = r2._a;
_b = r2._b;
return *this;
}
bool operator==(const ratio& r2) const
{
return _a == r2._a && _b == r2._b;
}
bool zerop() const
{
return _a == 0;
}
void print(FILE* outf) const
{
fprintf(outf, "%ld/%ld", _a, _b);
}
bool operator<(const ratio& r2) const
{
if (_a*r2._b < _b*r2._a) return true;
return false;
}
};


#define NUMNINES 9
int main()
{
clock_t starttime(clock());
typedef std::set<ratio> ratioset;
typedef ratioset::const_iterator ratio_const_iter;
ratioset sets[NUMNINES+1];
sets[1].insert(ratio(9));
for (int round = 0; round <= NUMNINES; ++round) {
ratioset& newflags = sets[round];
for (int i = 0; i <= round/2; ++i) {
const ratioset& flags1 = sets[i];
const ratioset& flags2 = sets[round-i];
for (ratio_const_iter jiter = flags1.begin(); jiter != flags1.end(); ++jiter) {
for (ratio_const_iter kiter = flags2.begin(); kiter != flags2.end(); ++kiter) {
const ratio& j = *jiter;
const ratio& k = *kiter;
newflags.insert(j+k);
newflags.insert(j-k);
newflags.insert(k-j);
newflags.insert(j*k);
if (!k.zerop())
newflags.insert(j/k);
if (!j.zerop())
newflags.insert(k/j);
}
}
}
}

ratioset& nineflags = sets[NUMNINES];
int i;
for (i = 0; nineflags.find(ratio(i)) != nineflags.end(); ++i)
;
clock_t endtime(clock());
printf("The minimum value that can't be made from 9 9's is %d\n", i);
printf(" CPU time used: %f secs\n",
(float)(endtime-starttime)/(float)CLOCKS_PER_SEC);
return 0;
}

Michael Parker

unread,
Jan 18, 2002, 10:16:18 AM1/18/02
to
mpark...@hotmail.com (Michael Parker) wrote in message news:<9f023346.02011...@posting.google.com>...

Here's a cleaned up version, with variables renamed to match the Lisp
version:

#include <stdio.h>
#include <time.h>
#include <set>

inline long gcd(long x, long y)
{
while (y != 0) {
long tmpx(x);
x = y;
y = tmpx % y;
}
return x;
}

class ratio
{
long _n;
long _d;

public:
ratio() : _n(0), _d(1) {};
ratio(long n) : _n(n), _d(1) { }
ratio(long n, long d) : _n(n), _d(d) { }


const ratio operator+(const ratio& r2) const
{
long newd(_d*r2._d), newn((_n*r2._d)+(r2._n*_d));
if (newd == 1) return ratio(newn);
long gcd_nd(gcd(newd, newn));
return ratio((long)(newn/gcd_nd), (long)(newd/gcd_nd));
}
const ratio operator-(const ratio& r2) const
{
long newd(_d*r2._d), newn((_n*r2._d)-(r2._n*_d));
if (newd == 1) return ratio(newn);
long gcd_nd(gcd(newn, newd));
return ratio((long)(newn/gcd_nd), (long)(newd/gcd_nd));
}
const ratio operator*(const ratio& r2) const
{
long newd(_d*r2._d), newn(_n*r2._n);
if (newd == 1) return ratio(newn);
long gcd_nd(gcd(newn, newd));
return ratio((long)(newn/gcd_nd), (long)(newd/gcd_nd));
}
const ratio operator/(const ratio& r2) const
{
long newd(_d*r2._n), newn(_n*r2._d);
if (newd == 1) return ratio(newn);
long gcd_nd(gcd(newn, newd));
return ratio((long)(newn/gcd_nd), (long)(newd/gcd_nd));
}
bool zerop() const
{
return _n == 0;
}
bool operator<(const ratio& r2) const
{
return _n*r2._d < _d*r2._n;
}
};


#define NUMNINES 9


typedef std::set<ratio> ratioset;
typedef ratioset::const_iterator ratio_const_iter;

int main()
{
clock_t starttime(clock());
ratioset nums[NUMNINES+1];
nums[1].insert(ratio(9));
for (int n = 0; n <= NUMNINES; ++n) {
ratioset& h = nums[n];
for (int p = 0; p <= n/2; ++p) {
const ratioset& nineset1 = nums[p];
const ratioset& nineset2 = nums[n-p];
for (ratio_const_iter a = nineset1.begin(); a != nineset1.end(); ++a) {
for (ratio_const_iter b = nineset2.begin(); b != nineset2.end(); ++b) {
h.insert(*a + *b); h.insert(*a * *b);
h.insert(*a - *b); h.insert(*b - *a);
if (!b->zerop()) h.insert(*a / *b);
if (!a->zerop()) h.insert(*b / *a);
}
}
}
}

ratioset& nineflags = nums[NUMNINES];
int i;
for (i = 0; nineflags.find(ratio(i)) != nineflags.end(); ++i)
;
clock_t endtime(clock());
printf("The minimum value that can't be made from 9 9's is %d\n", i);
printf(" CPU time used: %f secs\n",
(float)(endtime-starttime)/(float)CLOCKS_PER_SEC);
return 0;
}